Commit Graph

257 Commits (b9d5db8a423a4e321546eb609bfba0c572306d24)

Author SHA1 Message Date
Kohya S 140b4fad43 remove default values from output config 2023-03-19 20:06:31 +09:00
Kohya S 1f7babd2c7 Fix lpwp to support sdv2 and clip skip 2023-03-19 11:10:17 +09:00
Kohya S 1214760cea Merge branch 'dev' into main 2023-03-19 10:56:56 +09:00
Kohya S 64d85b2f51 fix num_processes, fix indent 2023-03-19 10:52:46 +09:00
Kohya S ec7f9bab6c Merge branch 'dev' into dev 2023-03-19 10:25:22 +09:00
Kohya S 83e102c691 refactor config parse, feature to output config 2023-03-19 10:11:11 +09:00
Kohya S c3f9eb10f1 format with black 2023-03-18 18:58:12 +09:00
orenwang 370ca9e8cd fix exception on training model in diffusers format 2023-03-13 14:32:43 +08:00
mio e24a43ae0b sample images with weight and no length limit 2023-03-12 16:08:31 +08:00
Linaqruf 44d4cfb453 feat: added function to load training config with .toml 2023-03-12 11:52:37 +07:00
Kohya S 618592c52b npz check to use subset, add dadap warn close #274 2023-03-10 21:31:59 +09:00
Kohya S e355b5e1d3 Merge pull request #269 from rvhfxb/patch-2 (Allow to delete images after getting latents) 2023-03-10 20:56:11 +09:00
Isotr0py e3b2bb5b80 Merge branch 'dev' into dev 2023-03-10 19:04:07 +08:00
Isotr0py 7544b38635 fix multi gpu 2023-03-10 18:45:53 +08:00
Isotr0py c4a596df9e replace unsafe eval() with ast 2023-03-10 13:44:16 +08:00
Kohya S 458173da5e Merge branch 'dev' into dev 2023-03-10 13:00:49 +09:00
Kohya S 51249b1ba0 support conv2d 3x3 LoRA 2023-03-09 20:56:33 +09:00
Isotr0py ab05be11d2 fix wrong typing 2023-03-09 19:35:06 +08:00
Isotr0py eb68892ab1 add lr_scheduler_type etc 2023-03-09 16:51:22 +08:00
rvhfxb 82aac26469 Update train_util.py 2023-03-08 22:42:41 +09:00
Kohya S 8929bf31d9 sample gen h/w to div by 8, fix in steps=epoch 2023-03-08 21:18:28 +09:00
ddPn08 87846c043f fix for multi gpu training 2023-03-08 09:46:37 +09:00
Kohya S 225c533279 accept empty caption #258 2023-03-07 08:23:34 +09:00
Kohya S 8d5ba29363 free pipe and cache after sample gen #260 2023-03-07 08:06:36 +09:00
Kohya S 2d2407410e show index in caching latents 2023-03-02 21:32:02 +09:00
Kohya S 859f8361bb minor fix in token shuffling 2023-03-02 20:31:07 +09:00
Kohya S c3024be8bf add help for keep_tokens 2023-03-02 20:28:42 +09:00
Kohya S 04af36e7e2 strip tag, fix tag frequency count 2023-03-01 22:10:15 +09:00
Kohya S d1d7d432e9 print dataset index in making buckets 2023-03-01 21:30:12 +09:00
Kohya S 089a63c573 shuffle at debug_dataset 2023-03-01 21:12:33 +09:00
Kohya S ed19a92bbe fix typos 2023-03-01 21:01:10 +09:00
fur0ut0 8abb8645ae add detail dataset config feature by extra config file (#227)
* add config file schema
* change config file specification
* refactor config utility
* unify batch_size to train_batch_size
* fix indent size
* use batch_size instead of train_batch_size
* make cache_latents configurable on subset
* rename options
  * bucket_repo_range
  * shuffle_keep_tokens
* update readme
* revert to min_bucket_reso & max_bucket_reso
* use subset structure in dataset
* format import lines
* split mode specific options
* use only valid subset
* change valid subsets name
* manage multiple datasets by dataset group
* update config file sanitizer
* prune redundant validation
* add comments
* update type annotation
* rename json_file_name to metadata_file
* ignore when image dir is invalid
* fix tag shuffle and dropout
* ignore duplicated subset
* add method to check latent cachability
* fix format
* fix bug
* update caption dropout default values
* update annotation
* fix bug
* add option to enable bucket shuffle across dataset
* update blueprint generate function
* use blueprint generator for dataset initialization
* delete duplicated function
* update config readme
* delete debug print
* print dataset and subset info as info
* enable bucket_shuffle_across_dataset option
* update config readme for clarification
* compensate quotes for string option example
* fix bug of bad usage of join
* conserve trained metadata backward compatibility
* enable shuffle in data loader by default
* delete resolved TODO
* add comment for image data handling
* fix reference bug
* fix undefined variable bug
* prevent raise overwriting
* assert image_dir and metadata_file validity
* add debug message for ignoring subset
* fix inconsistent import statement
* loosen too strict validation on float value
* sanitize argument parser separately
* make image_dir optional for fine tuning dataset
* fix import
* fix trailing characters in print
* parse flexible dataset config deterministically
* use relative import
* print supplementary message for parsing error
* add note about different methods
* add note of benefit of separate dataset
* add error example
* add note for english readme plan
Co-authored-by: Kohya S <52813779+kohya-ss@users.noreply.github.com>
2023-03-01 20:58:08 +09:00
Kohya S 82707654ad support sample generation in TI training 2023-02-28 22:05:31 +09:00
Kohya S dd523c94ff sample images in training (not fully tested) 2023-02-27 17:48:32 +09:00
Kohya S a28f9ae7a3 support tokenizer caching for offline training/gen 2023-02-25 18:46:59 +09:00
Kohya S 9b13444b9c raise error if options conflict 2023-02-23 21:35:47 +09:00
Kohya S 9ab964d0b8 Add Adafactor optimizer 2023-02-22 21:09:47 +09:00
Kohya S 663aad2b0d refactor get_scheduler etc. 2023-02-20 22:47:43 +09:00
Kohya S 107fa754e5 Merge branch 'dev' into optimizer-expand-and-refactor 2023-02-20 20:12:42 +09:00
mgz-dev b29c5a750c expand optimizer options and refactor
Refactor code to make it easier to add new optimizers, and support alternate optimizer parameters
- move redundant code to train_util for initializing optimizers
- add SGD Nesterov optimizers as option (since they are already available)
- add new parameters which may be helpful for tuning existing and new optimizers
2023-02-19 17:45:09 -06:00
unknown 045a3dbe48 apply dadaptation 2023-02-19 18:37:07 +09:00
Kohya S 048e7cd428 add lion optimizer support 2023-02-19 15:26:14 +09:00
Kohya S 9d0f9736bf Merge pull request #202 from vladmandic/main (fix git path) 2023-02-19 15:01:21 +09:00
Vladimir Mandic dac2bd163a fix git path 2023-02-17 14:19:08 -05:00
Isotr0py 78d1fb5ce6 Add '--lowram' argument 2023-02-17 12:08:54 +08:00
Kohya S 43c0a69843 Add noise_offset 2023-02-14 21:15:48 +09:00
Kohya S 8f1e930bf4 Merge pull request #187 from space-nuko/add-commit-hash (Add commit hash to metadata) 2023-02-14 19:52:30 +09:00
space-nuko 5471b0deb0 Add commit hash to metadata 2023-02-13 02:58:06 -08:00
Isotr0py 92a1af8024 Merge branch 'kohya-ss:main' into support-multi-gpu 2023-02-12 15:06:46 +08:00
Kohya S 4c561411aa revert batch size limiting for bucket 2023-02-11 16:02:56 +09:00
Kohya S 2c5f5c324a Fix crash TI train close #172, tag drop wo shuffle 2023-02-11 14:41:44 +09:00
Kohya S b03721b4d9 Add todo comment 2023-02-10 17:36:38 +09:00
Kohya S c2e1d4b71b fix typo 2023-02-09 21:38:01 +09:00
Kohya S 3a72e6f003 add tag dropout 2023-02-09 21:35:27 +09:00
Isotr0py 5e96e1369d fix get_hidden_states expected scalar Error 2023-02-08 20:14:13 +08:00
Isotr0py c0be52a773 ignore get_hidden_states expected scalar Error 2023-02-08 20:13:09 +08:00
Kohya S e42b2f7aa9 conditional caption dropout (in progress) 2023-02-07 22:28:56 +09:00
Kohya S f9478f0d47 Merge pull request #159 from forestsource/main (Add Conditional Dropout options) 2023-02-07 21:50:26 +09:00
Kohya S 4fc9f1f8c5 Merge pull request #157 from shirayu/improve_tag_shuffle (Always join with ", ") 2023-02-07 21:47:05 +09:00
forestsource 7db98baa86 Add dropout options 2023-02-07 00:01:30 +09:00
Kohya S 2aa27b7a4b Update downsampling for larger image in no_upscale 2023-02-06 20:52:24 +09:00
Yuta Hayashibe 5ea5fefcd2 Always join with ", " 2023-02-06 12:29:41 +09:00
Kohya S ea2dfd09ef update bucketing features 2023-02-05 21:37:46 +09:00
Kohya S b1635f4bf6 Merge pull request #144 from tsukimiya/debug_dataset_linux_support (Fixed --debug_dataset option to work in non-Windows environments) 2023-02-04 18:19:04 +09:00
Kohya S 9fd7fb813d Merge branch 'dev' into main 2023-02-04 18:16:03 +09:00
Kohya S 93134cdd15 Add tag freq for FinetuneDataset 2023-02-03 21:03:42 +09:00
Kohya S 57d8483eaf add GIT captioning, refactoring, DataLoader 2023-02-03 08:45:33 +09:00
tsukimiya 949ee6fcc9 Fixed --debug_dataset option to work in non-Windows environments 2023-02-03 00:37:27 +09:00
hitomi 26a81d075c add --persistent_data_loader_workers option 2023-02-01 16:02:15 +08:00
Kohya S ed2e431950 Merge branch 'main' into caption-frequency-metadata 2023-01-29 17:50:23 +09:00
Kohya S 3fb12e41b7 Merge branch 'main' into textual_inversion 2023-01-26 17:50:20 +09:00
Kohya S 91a50ea637 Change img_ar_errors to mean because too many imgs 2023-01-24 20:17:15 +09:00
Kohya S 36dc97c841 Merge pull request #103 from space-nuko/bucketing-metadata (Add bucketing metadata) 2023-01-24 19:06:21 +09:00
Kohya S e6bad080cb Merge pull request #102 from space-nuko/precalculate-hashes (Precalculate .safetensors model hashes after training) 2023-01-24 19:03:45 +09:00
Kohya S 7f17237ada Merge pull request #92 from forestsource/add_save_n_epoch_ratio (Add save_n_epoch_ratio) 2023-01-24 18:59:47 +09:00
space-nuko 2e8a3d20dd Add tag frequency metadata 2023-01-23 17:43:03 -08:00
space-nuko 66051883fb Add bucketing metadata 2023-01-23 17:26:58 -08:00
space-nuko f7fbdc4b2a Precalculate .safetensors model hashes after training 2023-01-23 17:21:04 -08:00
forestsource 5e817e4343 Add save_n_epoch_ratio 2023-01-22 03:00:28 +09:00
Kohya S 22ee0ac467 Move TE/UN loss calc to train script 2023-01-21 12:51:17 +09:00
Kohya S 17089b1287 Merge branch 'dev' of https://github.com/kohya-ss/sd-scripts into dev 2023-01-21 12:46:20 +09:00
Kohya S 7ee808d5d7 Merge pull request #79 from mgz-dev/tensorboard-improvements (expand details in tensorboard logs) 2023-01-21 12:46:13 +09:00
Kohya S 9ff26af68b Update to add grad_ckpting etc to metadata 2023-01-21 12:36:31 +09:00
Kohya S 7dbcef745a Merge pull request #77 from space-nuko/ss-extra-metadata (More helpful metadata) 2023-01-21 12:18:23 +09:00
Kohya S 758323532b add save_last_n_epochs_state to train_network 2023-01-19 20:59:45 +09:00
space-nuko da48f74e7b Add new version model/VAE hash to training metadata 2023-01-18 23:00:16 -08:00
michaelgzhang 303c3410e2 expand details in tensorboard logs
- Update tensorboard logging to track both unet and textencoder learning rates
- Update tensorboard logging to track both current and moving average epoch loss
- Clean up tensorboard log variable names for dashboard formatting
2023-01-18 13:10:13 -06:00
space-nuko de1dde1a06 More helpful metadata
- dataset/reg image dirs
- random session ID
- keep_tokens
- training date
- output name
2023-01-17 16:28:35 -08:00
Yuta Hayashibe 3eb8fb1875 Make not to save state when args.save_state is False 2023-01-18 01:31:38 +09:00
Yuta Hayashibe 3815b82bef Removed --save_last_n_epochs_model 2023-01-16 21:02:27 +09:00
Yuta Hayashibe c6e28faa57 Save state when args.save_last_n_epochs_state is designated 2023-01-15 19:43:37 +09:00
Yuta Hayashibe a888223869 Fix a bug 2023-01-15 18:02:17 +09:00
Yuta Hayashibe d30ea7966d Updated help 2023-01-15 18:00:51 +09:00
Yuta Hayashibe df9cb2f11c Add --save_last_n_epochs_model and --save_last_n_epochs_state 2023-01-15 17:52:22 +09:00
Kohya S 186a2665ad Merge branch 'main' into textual_inversion 2023-01-15 16:08:53 +09:00
Kohya S aa40cb9345 Add train epochs and max workers option to train 2023-01-15 13:07:47 +09:00
Kohya S c1b14fcdd6 initial version of TI 2023-01-12 20:47:08 +09:00
Kohya S e4f9b2b715 Add VAE to metadata, add no_metadata option 2023-01-11 23:12:18 +09:00
space-nuko 2e4ce0fdff Add training metadata to output LoRA model 2023-01-10 02:49:52 -08:00
Kohya S 673f9ced47 Fix '*' is not working for DreamBooth 2023-01-09 21:06:58 +09:00
Gaetano Bonofiglio d8da85b38b fix file not found when `[` is in the filename 2023-01-09 11:40:00 +01:00
Kohya S 6b62c44022 fix errors in fine tuning 2023-01-08 21:40:40 +09:00
Kohya S 1945fa186d Show error if caption isn't UTF-8, add bmp support 2023-01-08 18:50:52 +09:00
Kohya S 9f1d3aca24 add save_state_on_train end, fix reg imgs repeats 2023-01-07 20:20:37 +09:00
Kohya S f56988b252 unify dataset and save functions 2023-01-05 08:10:22 +09:00
Kohya S 4c35006731 split common function from train_network to util 2023-01-03 20:22:25 +09:00
Kohya S 6b522b34c1 move code for xformers to train_util 2023-01-02 16:08:21 +09:00
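One commit in the log above, "replace unsafe eval() with ast" (c4a596df9e), names a standard Python hardening pattern: replacing `eval()` with `ast.literal_eval()`, which parses only Python literals and refuses to execute arbitrary expressions. A minimal sketch of the idea (the `parse_literal` helper name is hypothetical, not the repository's actual code):

```python
import ast


def parse_literal(text: str):
    """Safely parse a Python literal from untrusted text.

    Unlike eval(), ast.literal_eval() accepts only literal syntax
    (strings, numbers, tuples, lists, dicts, sets, booleans, None),
    so an expression such as "__import__('os')" raises ValueError
    instead of running code.
    """
    return ast.literal_eval(text)


# A literal parses fine; an expression with a function call does not.
print(parse_literal("{'lr': 1e-4, 'betas': (0.9, 0.999)}"))
```

The same swap is why user-supplied option strings (e.g. optimizer arguments from the command line) can be parsed without opening a code-execution hole.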