Commit Graph

257 Commits (b9d5db8a423a4e321546eb609bfba0c572306d24)

Author SHA1 Message Date
Kohya S 66edc5af7b invert condition for checking log_with 2023-04-22 18:05:19 +09:00
saltacc dc37fd2ff6 fix no logging command line arg 2023-04-22 01:26:31 -07:00
Kohya S 884e6bff5d fix face_crop_aug not working on finetune method, prepare upscaler 2023-04-22 10:41:36 +09:00
Kohya S 220436244c some minor fixes 2023-04-22 09:55:04 +09:00
Kohya S c430cf481a Merge pull request #428 from p1atdev/dev: Add WandB logging support 2023-04-22 09:39:01 +09:00
tsukimiya e746829b5f Exclude cv2.waitKey() and cv2.destroyAllWindows(), which likely do not work in environments without libgtk2 installed 2023-04-20 06:20:02 +09:00
Plat a69b24a069 fix: tensorboard not working 2023-04-20 05:33:32 +09:00
Plat 8090daca40 fix: wandb not working without logging_dir 2023-04-20 05:14:28 +09:00
Plat 27ffd9fe3d feat: support wandb logging 2023-04-20 01:41:12 +09:00
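
Note on this WandB series: with Hugging Face Accelerate, enabling a tracker is mostly a matter of passing log_with and calling the tracker hooks. A minimal sketch, assuming Accelerate and wandb are installed; the project name and logged metrics are illustrative, not the repo's actual values.

    from accelerate import Accelerator

    accelerator = Accelerator(log_with="wandb")  # "tensorboard" or "all" also work
    accelerator.init_trackers("sd-scripts-demo", config={"learning_rate": 1e-4})

    for step in range(10):
        loss = 0.0  # placeholder for the real training loss
        accelerator.log({"loss": loss}, step=step)

    accelerator.end_training()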
Kohya S 423e6c229c support metadata json+.npz caching (no prepare) 2023-04-13 22:12:13 +09:00
Kohya S a8632b7329 fix latents disk cache 2023-04-13 21:14:39 +09:00
Kohya S 2e9f7b5f91 cache latents to disk in dreambooth method 2023-04-12 23:10:39 +09:00
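
The idea behind the disk cache above: encode each training image through the VAE once, store the latent as an .npz next to the image, and load it on later epochs instead of re-encoding. A minimal sketch assuming a diffusers AutoencoderKL; the file layout and key names are assumptions, not the repo's exact format.

    import numpy as np
    import torch

    def cache_latent(vae, image_tensor, npz_path):
        # Encode once, offline; image_tensor is CHW, scaled to [-1, 1]
        with torch.no_grad():
            latent = vae.encode(image_tensor.unsqueeze(0)).latent_dist.sample()
        np.savez(npz_path, latents=latent.squeeze(0).cpu().numpy())

    def load_latent(npz_path):
        # Later epochs read this instead of running the VAE again
        return torch.from_numpy(np.load(npz_path)["latents"])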
AI-Casanova 0d54609435 Merge branch 'kohya-ss:main' into weighted_captions 2023-04-07 14:55:40 -05:00
AI-Casanova 7527436549 Merge branch 'kohya-ss:main' into weighted_captions 2023-04-05 17:07:15 -05:00
Kohya S 541539a144 change method name; repo is private by default, etc. 2023-04-05 23:16:49 +09:00
Kohya S 74220bb52c Merge pull request #348 from ddPn08/dev: Added a function to upload to Huggingface and resume from Huggingface 2023-04-05 21:47:36 +09:00
AI-Casanova 1892c82a60 Reinstantiate weighted captions after a necessary revert to Main 2023-04-02 19:43:34 +00:00
ddPn08 16ba1cec69 change async uploading to optional 2023-04-02 17:45:26 +09:00
ddPn08 8bfa50e283 small fix 2023-04-02 17:39:23 +09:00
ddPn08 c4a11e5a5a fix help 2023-04-02 17:39:23 +09:00
ddPn08 3cc4939dd3 Implement huggingface upload for all scripts 2023-04-02 17:39:22 +09:00
ddPn08 b5ff4e816f resume from huggingface repository 2023-04-02 17:39:21 +09:00
ddPn08 d42431d73a Added feature to upload to huggingface 2023-04-02 17:39:10 +09:00
Yuta Hayashibe 9577a9f38d Check needless num_warmup_steps 2023-04-01 20:33:20 +09:00
Kohya S 31069e1dc5 add comments about device for clarity 2023-03-30 21:44:40 +09:00
Kohya S 6c28dfb417 Merge pull request #332 from guaneec/ddp-lowram: Reduce peak RAM usage 2023-03-30 21:37:37 +09:00
Jakaline-dev b0c33a4294 Merge remote-tracking branch 'upstream/main' 2023-03-30 01:35:38 +09:00
Kohya S 4f70e5dca6 fix to work with num_workers=0 2023-03-28 19:42:47 +09:00
Kohya S 238f01bc9c fix images are used twice, update debug dataset 2023-03-27 20:48:21 +09:00
guaneec 3cdae0cbd2 Reduce peak RAM usage 2023-03-27 14:34:17 +08:00
Kohya S 14891523ce fix seed for each dataset to make shuffling same 2023-03-26 22:17:03 +09:00
Kohya S 6732df93e2 Merge branch 'dev' into min-SNR 2023-03-26 17:10:53 +09:00
Kohya S 4f42f759ea Merge pull request #322 from u-haru/feature/token_warmup: Add an option to train while gradually increasing the number of tags; minor bug fix related to persistent_workers 2023-03-26 17:05:59 +09:00
Jakaline-dev a35d7ef227 Implement XTI 2023-03-26 05:26:10 +09:00
u-haru a4b34a9c3c Remove blueprint_args_conflict as unnecessary; fix a bug where shuffle was performed every time 2023-03-26 03:26:55 +09:00
u-haru 5a3d564a30 Remove print 2023-03-26 02:26:08 +09:00
u-haru 4dc1124f93 Support methods other than LoRA as well 2023-03-26 02:19:55 +09:00
u-haru 292cdb8379 Fix a bug where epoch and step were not passed to the dataset 2023-03-26 01:44:25 +09:00
u-haru 1b89b2a10e Change to truncate tags before shuffling 2023-03-24 13:44:30 +09:00
u-haru 447c56bf50 Fix typo, change step to global_step, fix bugs 2023-03-23 09:53:14 +09:00
u-haru a9b26b73e0 implement token warmup 2023-03-23 07:37:14 +09:00
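
The token warmup in this PR series trains on a growing prefix of each caption's tags as training progresses. A minimal sketch of the idea; parameter names are illustrative, not the actual option names.

    def warmup_tags(tags, global_step, warmup_steps, min_tags=1):
        # Reveal tags gradually: a fraction of them early,
        # all of them once global_step reaches warmup_steps
        if global_step >= warmup_steps:
            return tags
        n = max(min_tags, int(len(tags) * global_step / warmup_steps))
        return tags[:n]

    tags = "1girl, solo, smile, outdoors".split(", ")
    print(", ".join(warmup_tags(tags, global_step=250, warmup_steps=1000)))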
AI-Casanova 64c923230e Min-SNR Weighting Strategy: Refactored and added to all trainers 2023-03-22 01:27:29 +00:00
AI-Casanova 795a6bd2d8 Merge branch 'kohya-ss:main' into min-SNR 2023-03-21 13:19:15 -05:00
Kohya S 7b324bcc3b support extensions of image files with uppercases 2023-03-21 21:10:34 +09:00
Kohya S 6d9f3bc0b2 fix different reso in batch 2023-03-21 18:33:46 +09:00
Kohya S 1816ac3271 add vae_batch_size option for faster caching 2023-03-21 18:15:57 +09:00
Kohya S cb08fa0379 fix no npz with full path 2023-03-21 15:05:25 +09:00
AI-Casanova a265225972 Min-SNR Weighting Strategy 2023-03-20 22:51:38 +00:00
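
Min-SNR weighting (Hang et al., 2023) scales each timestep's MSE loss by min(SNR_t, γ)/SNR_t so that nearly noise-free timesteps with huge SNR stop dominating the gradient. A sketch for ε-prediction, assuming DDPM-style alphas_cumprod; γ=5 is the paper's suggested value, and variable names are illustrative.

    import torch

    def min_snr_weights(alphas_cumprod, timesteps, gamma=5.0):
        alpha_bar = alphas_cumprod[timesteps]
        snr = alpha_bar / (1.0 - alpha_bar)  # per-timestep signal-to-noise ratio
        return torch.minimum(snr, torch.full_like(snr, gamma)) / snr

    # per_sample_loss = F.mse_loss(pred, noise, reduction="none").mean([1, 2, 3])
    # loss = (min_snr_weights(scheduler.alphas_cumprod, t) * per_sample_loss).mean()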
Kohya S de95431895 support Windows with diffusers, fix extra args eval 2023-03-19 22:09:36 +09:00
Kohya S 48c1be34f3 Merge branch 'dev' into main 2023-03-19 21:58:41 +09:00
Kohya S 140b4fad43 remove default values from output config 2023-03-19 20:06:31 +09:00
Kohya S 1f7babd2c7 Fix lpwp to support sdv2 and clip skip 2023-03-19 11:10:17 +09:00
Kohya S 1214760cea Merge branch 'dev' into main 2023-03-19 10:56:56 +09:00
Kohya S 64d85b2f51 fix num_processes, fix indent 2023-03-19 10:52:46 +09:00
Kohya S ec7f9bab6c Merge branch 'dev' into dev 2023-03-19 10:25:22 +09:00
Kohya S 83e102c691 refactor config parse, feature to output config 2023-03-19 10:11:11 +09:00
Kohya S c3f9eb10f1 format with black 2023-03-18 18:58:12 +09:00
orenwang 370ca9e8cd fix exception on training model in diffusers format 2023-03-13 14:32:43 +08:00
mio e24a43ae0b sample images with weight and no length limit 2023-03-12 16:08:31 +08:00
Linaqruf 44d4cfb453 feat: added function to load training config with .toml 2023-03-12 11:52:37 +07:00
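
Loading a training config from .toml typically means parsing the file and overlaying its keys onto the argparse namespace. A minimal sketch assuming Python 3.11+ (stdlib tomllib); the option names shown are illustrative.

    import argparse
    import tomllib  # stdlib in 3.11+; use the tomli package on older versions

    parser = argparse.ArgumentParser()
    parser.add_argument("--learning_rate", type=float, default=1e-4)
    parser.add_argument("--train_batch_size", type=int, default=1)
    args = parser.parse_args()

    with open("config.toml", "rb") as f:
        for key, value in tomllib.load(f).items():
            setattr(args, key, value)  # config file overrides defaults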
Kohya S 618592c52b npz check to use subset, add D-Adaptation warning, close #274 2023-03-10 21:31:59 +09:00
Kohya S e355b5e1d3 Merge pull request #269 from rvhfxb/patch-2: Allow to delete images after getting latents 2023-03-10 20:56:11 +09:00
Isotr0py e3b2bb5b80 Merge branch 'dev' into dev 2023-03-10 19:04:07 +08:00
Isotr0py 7544b38635 fix multi gpu 2023-03-10 18:45:53 +08:00
Isotr0py c4a596df9e replace unsafe eval() with ast 2023-03-10 13:44:16 +08:00
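
Why replacing eval() matters: ast.literal_eval only accepts Python literal syntax (strings, numbers, tuples, lists, dicts, booleans, None), so a malicious string raises an error instead of executing. The values here are illustrative.

    import ast

    opts = ast.literal_eval("{'lr': 1e-4, 'betas': (0.9, 0.999)}")  # parsed safely
    ast.literal_eval("__import__('os').system('echo pwned')")       # raises ValueError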
Kohya S 458173da5e Merge branch 'dev' into dev 2023-03-10 13:00:49 +09:00
Kohya S 51249b1ba0 support conv2d 3x3 LoRA 2023-03-09 20:56:33 +09:00
Isotr0py ab05be11d2 fix wrong typing 2023-03-09 19:35:06 +08:00
Isotr0py eb68892ab1 add lr_scheduler_type etc 2023-03-09 16:51:22 +08:00
rvhfxb 82aac26469 Update train_util.py 2023-03-08 22:42:41 +09:00
Kohya S 8929bf31d9 sample gen h/w to div by 8, fix in steps=epoch 2023-03-08 21:18:28 +09:00
ddPn08 87846c043f fix for multi gpu training 2023-03-08 09:46:37 +09:00
Kohya S 225c533279 accept empty caption #258 2023-03-07 08:23:34 +09:00
Kohya S 8d5ba29363 free pipe and cache after sample gen #260 2023-03-07 08:06:36 +09:00
Kohya S 2d2407410e show index in caching latents 2023-03-02 21:32:02 +09:00
Kohya S 859f8361bb minor fix in token shuffling 2023-03-02 20:31:07 +09:00
Kohya S c3024be8bf add help for keep_tokens 2023-03-02 20:28:42 +09:00
Kohya S 04af36e7e2 strip tag, fix tag frequency count 2023-03-01 22:10:15 +09:00
Kohya S d1d7d432e9 print dataset index in making buckets 2023-03-01 21:30:12 +09:00
Kohya S 089a63c573 shuffle at debug_dataset 2023-03-01 21:12:33 +09:00
Kohya S ed19a92bbe fix typos 2023-03-01 21:01:10 +09:00
fur0ut0 8abb8645ae add detail dataset config feature by extra config file (#227) 2023-03-01 20:58:08 +09:00
* add config file schema
* change config file specification
* refactor config utility
* unify batch_size to train_batch_size
* fix indent size
* use batch_size instead of train_batch_size
* make cache_latents configurable on subset
* rename options: bucket_repo_range, shuffle_keep_tokens
* update readme
* revert to min_bucket_reso & max_bucket_reso
* use subset structure in dataset
* format import lines
* split mode specific options
* use only valid subset
* change valid subsets name
* manage multiple datasets by dataset group
* update config file sanitizer
* prune redundant validation
* add comments
* update type annotation
* rename json_file_name to metadata_file
* ignore when image dir is invalid
* fix tag shuffle and dropout
* ignore duplicated subset
* add method to check latent cachability
* fix format
* fix bug
* update caption dropout default values
* update annotation
* fix bug
* add option to enable bucket shuffle across dataset
* update blueprint generate function
* use blueprint generator for dataset initialization
* delete duplicated function
* update config readme
* delete debug print
* print dataset and subset info as info
* enable bucket_shuffle_across_dataset option
* update config readme for clarification
* compensate quotes for string option example
* fix bug of bad usage of join
* conserve trained metadata backward compatibility
* enable shuffle in data loader by default
* delete resolved TODO
* add comment for image data handling
* fix reference bug
* fix undefined variable bug
* prevent raise overwriting
* assert image_dir and metadata_file validity
* add debug message for ignoring subset
* fix inconsistent import statement
* loosen too strict validation on float value
* sanitize argument parser separately
* make image_dir optional for fine tuning dataset
* fix import
* fix trailing characters in print
* parse flexible dataset config deterministically
* use relative import
* print supplementary message for parsing error
* add note about different methods
* add note of benefit of separate dataset
* add error example
* add note for english readme plan
Co-authored-by: Kohya S <52813779+kohya-ss@users.noreply.github.com>
Kohya S 82707654ad support sample generation in TI training 2023-02-28 22:05:31 +09:00
Kohya S dd523c94ff sample images in training (not fully tested) 2023-02-27 17:48:32 +09:00
Kohya S a28f9ae7a3 support tokenizer caching for offline training/gen 2023-02-25 18:46:59 +09:00
Kohya S 9b13444b9c raise error if options conflict 2023-02-23 21:35:47 +09:00
Kohya S 9ab964d0b8 Add Adafactor optimizer 2023-02-22 21:09:47 +09:00
Kohya S 663aad2b0d refactor get_scheduler etc. 2023-02-20 22:47:43 +09:00
Kohya S 107fa754e5 Merge branch 'dev' into optimizer-expand-and-refactor 2023-02-20 20:12:42 +09:00
mgz-dev b29c5a750c expand optimizer options and refactor 2023-02-19 17:45:09 -06:00
Refactor code to make it easier to add new optimizers and to support alternate optimizer parameters:
- move redundant code to train_util for initializing optimizers
- add SGD Nesterov optimizers as an option (since they are already available)
- add new parameters which may be helpful for tuning existing and new optimizers
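
The pattern behind this refactor: resolve an optimizer by name in one shared helper and pass any extra key=value arguments straight through, so adding a new optimizer is a table entry rather than per-script code. A hedged sketch, not the repo's actual helper.

    import torch

    OPTIMIZERS = {
        "AdamW": torch.optim.AdamW,
        "SGD": torch.optim.SGD,
    }

    def get_optimizer(name, params, lr, **extra):
        # extra holds optimizer-specific options, e.g. weight_decay=0.01
        if name == "SGDNesterov":
            return torch.optim.SGD(params, lr=lr, nesterov=True,
                                   momentum=extra.pop("momentum", 0.9), **extra)
        return OPTIMIZERS[name](params, lr=lr, **extra)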
unknown 045a3dbe48 apply dadaptation 2023-02-19 18:37:07 +09:00
Kohya S 048e7cd428 add lion optimizer support 2023-02-19 15:26:14 +09:00
Kohya S 9d0f9736bf Merge pull request #202 from vladmandic/main: fix git path 2023-02-19 15:01:21 +09:00
Vladimir Mandic dac2bd163a fix git path 2023-02-17 14:19:08 -05:00
Isotr0py 78d1fb5ce6 Add '--lowram' argument 2023-02-17 12:08:54 +08:00
Kohya S 43c0a69843 Add noise_offset 2023-02-14 21:15:48 +09:00
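
The noise_offset trick adds a small per-sample, per-channel constant to the sampled noise so the model can learn overall brightness shifts. A sketch of the commonly cited formulation; 0.1 is the value from the original write-up, and the names are illustrative.

    import torch

    def sample_noise(latents, noise_offset=0.1):
        noise = torch.randn_like(latents)
        # one random constant per (sample, channel), broadcast over H and W
        noise += noise_offset * torch.randn(
            latents.shape[0], latents.shape[1], 1, 1, device=latents.device
        )
        return noise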
Kohya S 8f1e930bf4 Merge pull request #187 from space-nuko/add-commit-hash: Add commit hash to metadata 2023-02-14 19:52:30 +09:00
space-nuko 5471b0deb0 Add commit hash to metadata 2023-02-13 02:58:06 -08:00
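
Recording the commit hash amounts to asking git for HEAD at training time and storing it with the other metadata. A sketch assuming the script runs inside a git checkout; the metadata key name is illustrative.

    import subprocess

    def current_commit_hash():
        return subprocess.check_output(
            ["git", "rev-parse", "HEAD"], text=True
        ).strip()

    metadata = {"commit_hash": current_commit_hash()}  # key name is illustrative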
Isotr0py 92a1af8024 Merge branch 'kohya-ss:main' into support-multi-gpu 2023-02-12 15:06:46 +08:00
Kohya S 4c561411aa revert batch size limiting for bucket 2023-02-11 16:02:56 +09:00
Kohya S 2c5f5c324a Fix crash in TI training (close #172), tag dropout without shuffle 2023-02-11 14:41:44 +09:00
Kohya S b03721b4d9 Add todo comment 2023-02-10 17:36:38 +09:00
Kohya S c2e1d4b71b fix typo 2023-02-09 21:38:01 +09:00
Kohya S 3a72e6f003 add tag dropout 2023-02-09 21:35:27 +09:00
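
Tag dropout randomly removes individual comma-separated tags from a caption during preprocessing as a regularizer. A sketch of the idea; protecting the leading tags mirrors the keep_tokens option mentioned above, but the names and defaults here are assumptions.

    import random

    def drop_tags(caption, p=0.1, keep_tokens=1):
        tags = [t.strip() for t in caption.split(",")]
        head, tail = tags[:keep_tokens], tags[keep_tokens:]
        kept = head + [t for t in tail if random.random() >= p]
        return ", ".join(kept)

    print(drop_tags("1girl, solo, smile, outdoors", p=0.5))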
Isotr0py 5e96e1369d fix get_hidden_states expected scalar Error 2023-02-08 20:14:13 +08:00
Isotr0py c0be52a773 ignore get_hidden_states expected scalar Error 2023-02-08 20:13:09 +08:00
Kohya S e42b2f7aa9 conditional caption dropout (in progress) 2023-02-07 22:28:56 +09:00
Kohya S f9478f0d47 Merge pull request #159 from forestsource/main: Add Conditional Dropout options 2023-02-07 21:50:26 +09:00
Kohya S 4fc9f1f8c5 Merge pull request #157 from shirayu/improve_tag_shuffle: Always join with ", " 2023-02-07 21:47:05 +09:00
forestsource 7db98baa86 Add dropout options 2023-02-07 00:01:30 +09:00
Kohya S 2aa27b7a4b Update downsampling for larger image in no_upscale 2023-02-06 20:52:24 +09:00
Yuta Hayashibe 5ea5fefcd2 Always join with ", " 2023-02-06 12:29:41 +09:00
Kohya S ea2dfd09ef update bucketing features 2023-02-05 21:37:46 +09:00
Kohya S b1635f4bf6 Merge pull request #144 from tsukimiya/debug_dataset_linux_support: Fixed --debug_dataset option to work in non-Windows environments 2023-02-04 18:19:04 +09:00
Kohya S 9fd7fb813d Merge branch 'dev' into main 2023-02-04 18:16:03 +09:00
Kohya S 93134cdd15 Add tag freq for FinetuneDataset 2023-02-03 21:03:42 +09:00
Kohya S 57d8483eaf add GIT captioning, refactoring, DataLoader 2023-02-03 08:45:33 +09:00
tsukimiya 949ee6fcc9 Fixed --debug_dataset option to work in non-Windows environments 2023-02-03 00:37:27 +09:00
hitomi 26a81d075c add --persistent_data_loader_workers option 2023-02-01 16:02:15 +08:00
Kohya S ed2e431950 Merge branch 'main' into caption-frequency-metadata 2023-01-29 17:50:23 +09:00
Kohya S 3fb12e41b7 Merge branch 'main' into textual_inversion 2023-01-26 17:50:20 +09:00
Kohya S 91a50ea637 Change img_ar_errors to mean because there are too many images 2023-01-24 20:17:15 +09:00
Kohya S 36dc97c841 Merge pull request #103 from space-nuko/bucketing-metadata: Add bucketing metadata 2023-01-24 19:06:21 +09:00
Kohya S e6bad080cb Merge pull request #102 from space-nuko/precalculate-hashes: Precalculate .safetensors model hashes after training 2023-01-24 19:03:45 +09:00
Kohya S 7f17237ada Merge pull request #92 from forestsource/add_save_n_epoch_ratio: Add save_n_epoch_ratio 2023-01-24 18:59:47 +09:00
space-nuko 2e8a3d20dd Add tag frequency metadata 2023-01-23 17:43:03 -08:00
space-nuko 66051883fb Add bucketing metadata 2023-01-23 17:26:58 -08:00
space-nuko f7fbdc4b2a Precalculate .safetensors model hashes after training 2023-01-23 17:21:04 -08:00
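
Precalculating a model hash means hashing the saved file once right after training so downstream UIs can identify the checkpoint without rehashing it. A sketch of a plain whole-file SHA-256; the actual hashing scheme used for .safetensors identification may differ.

    import hashlib

    def file_sha256(path, chunk_size=1 << 20):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            while data := f.read(chunk_size):
                h.update(data)
        return h.hexdigest()

    # model_hash = file_sha256("output/model.safetensors")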
forestsource 5e817e4343 Add save_n_epoch_ratio 2023-01-22 03:00:28 +09:00
Kohya S 22ee0ac467 Move TE/UN loss calc to train script 2023-01-21 12:51:17 +09:00
Kohya S 17089b1287 Merge branch 'dev' of https://github.com/kohya-ss/sd-scripts into dev 2023-01-21 12:46:20 +09:00
Kohya S 7ee808d5d7 Merge pull request #79 from mgz-dev/tensorboard-improvements: expand details in tensorboard logs 2023-01-21 12:46:13 +09:00
Kohya S 9ff26af68b Update to add grad_ckpting etc to metadata 2023-01-21 12:36:31 +09:00
Kohya S 7dbcef745a Merge pull request #77 from space-nuko/ss-extra-metadata: More helpful metadata 2023-01-21 12:18:23 +09:00
Kohya S 758323532b add save_last_n_epochs_state to train_network 2023-01-19 20:59:45 +09:00
space-nuko da48f74e7b Add new version model/VAE hash to training metadata 2023-01-18 23:00:16 -08:00
michaelgzhang 303c3410e2 expand details in tensorboard logs 2023-01-18 13:10:13 -06:00
- Update tensorboard logging to track both unet and textencoder learning rates
- Update tensorboard logging to track both current and moving average epoch loss
- Clean up tensorboard log variable names for dashboard formatting
space-nuko de1dde1a06 More helpful metadata 2023-01-17 16:28:35 -08:00
- dataset/reg image dirs
- random session ID
- keep_tokens
- training date
- output name
Yuta Hayashibe 3eb8fb1875 Do not save state when args.save_state is False 2023-01-18 01:31:38 +09:00
Yuta Hayashibe 3815b82bef Removed --save_last_n_epochs_model 2023-01-16 21:02:27 +09:00
Yuta Hayashibe c6e28faa57 Save state when args.save_last_n_epochs_state is specified 2023-01-15 19:43:37 +09:00
Yuta Hayashibe a888223869 Fix a bug 2023-01-15 18:02:17 +09:00
Yuta Hayashibe d30ea7966d Updated help 2023-01-15 18:00:51 +09:00
Yuta Hayashibe df9cb2f11c Add --save_last_n_epochs_model and --save_last_n_epochs_state 2023-01-15 17:52:22 +09:00
Kohya S 186a2665ad Merge branch 'main' into textual_inversion 2023-01-15 16:08:53 +09:00
Kohya S aa40cb9345 Add train epochs and max workers option to train 2023-01-15 13:07:47 +09:00
Kohya S c1b14fcdd6 initial version of TI 2023-01-12 20:47:08 +09:00
Kohya S e4f9b2b715 Add VAE to metadata, add no_metadata option 2023-01-11 23:12:18 +09:00
space-nuko 2e4ce0fdff Add training metadata to output LoRA model 2023-01-10 02:49:52 -08:00
Kohya S 673f9ced47 Fix '*' is not working for DreamBooth 2023-01-09 21:06:58 +09:00