Commit Graph

257 Commits (b9d5db8a423a4e321546eb609bfba0c572306d24)

Author SHA1 Message Date
ykume 4e25c8f78e fix to work with Diffusers 0.17.0 2023-06-11 16:57:17 +09:00
Kohya S c0a7df9ee1 fix eps value, enable xformers, etc. 2023-06-03 21:29:27 +09:00
Kohya S ec2efe52e4 scale v-pred loss like noise pred 2023-06-03 10:52:22 +09:00
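Commit ec2efe52e4 rescales the per-sample v-prediction loss so its magnitude roughly matches a noise-prediction loss. A minimal sketch of the idea, assuming a precomputed per-timestep SNR tensor (`all_snr` is a hypothetical name, derived from the scheduler's alphas_cumprod):

```python
import torch

def scale_v_pred_loss_like_noise_pred(loss, timesteps, all_snr):
    # all_snr[t] = alphas_cumprod[t] / (1 - alphas_cumprod[t]); assumed precomputed.
    snr_t = all_snr[timesteps]
    snr_t = torch.clamp(snr_t, max=1000.0)  # SNR blows up near timestep 0; cap it
    # A v-prediction MSE is roughly (SNR + 1) times the equivalent eps MSE,
    # so multiplying by SNR / (SNR + 1) brings the two scales in line.
    return loss * (snr_t / (snr_t + 1.0))
```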
ddPn08 1e3daa247b fix bucketing 2023-06-01 21:58:45 +09:00
ddPn08 3bd00b88c2 support for controlnet in sample output 2023-06-01 20:48:30 +09:00
ddPn08 62d00b4520 add controlnet training 2023-06-01 20:48:25 +09:00
ddPn08 c8d209d36c update diffusers to 0.16 | train_network 2023-06-01 20:39:26 +09:00
AI-Casanova 9c7237157d Dropout and Max Norm Regularization for LoRA training (#545)
* Instantiate max_norm

* minor

* Move to end of step

* argparse

* metadata

* phrasing

* Sqrt ratio and logging

* fix logging

* Dropout test

* Dropout Args

* Dropout changed to affect LoRA only

---------

Co-authored-by: Kohya S <52813779+kohya-ss@users.noreply.github.com>
2023-06-01 14:58:38 +09:00
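The max-norm regularization in #545 rescales, at the end of each optimizer step, any LoRA pair whose effective weight delta exceeds a threshold; the square-root split matches the "Sqrt ratio" item above. A minimal sketch for linear LoRA modules (`lora_up`/`lora_down` are assumed attribute names):

```python
import torch

@torch.no_grad()
def apply_max_norm_regularization(lora_modules, max_norm, scale=1.0):
    # Run after optimizer.step(): shrink any pair whose combined norm is too large.
    for module in lora_modules:
        up, down = module.lora_up.weight, module.lora_down.weight
        norm = (up @ down).norm() * scale  # norm of the effective delta-W
        if norm > max_norm:
            ratio = (max_norm / norm).sqrt()  # split the correction across both matrices
            up.mul_(ratio)
            down.mul_(ratio)
```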
Kohya S 3a06968332 warn and continue if huggingface uploading failed 2023-05-31 20:48:33 +09:00
Kohya S 990ceddd14 show warning if no caption and no class token 2023-05-30 22:53:50 +09:00
Kohya S 2429ac73b2 Merge pull request #533 from TingTingin/main
Added warning on training without captions
2023-05-29 08:37:33 +09:00
TingTingin db756e9a34 Update train_util.py
I removed the sleep since it triggers per subset; if someone had many subsets it would trigger multiple times.
2023-05-26 08:08:34 -04:00
青龍聖者@bdsqlsz 5cdf4e34a1 support for dadaptation V3 (#530)
* Update train_util.py for DAdaptLion

* Update train_README-zh.md for dadaptlion

* Update train_README-ja.md for DAdaptLion

* add DAdapt V3

* Alignment

* Update train_util.py for experimental

* Update train_util.py V3

* Update train_README-zh.md

* Update train_README-ja.md

* Update train_util.py fix

* Update train_util.py

---------

Co-authored-by: Kohya S <52813779+kohya-ss@users.noreply.github.com>
2023-05-25 21:52:36 +09:00
TingTingin 061e157191 Update train_util.py 2023-05-23 02:02:39 -04:00
TingTingin d859a3a925 Update train_util.py
fix mistake
2023-05-23 02:00:33 -04:00
TingTingin 5a1a14f9fc Update train_util.py
Added feature to add "." if missing in caption_extension
Added warning on training without captions
2023-05-23 01:57:35 -04:00
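The caption_extension fix in 5a1a14f9fc just normalizes the user-supplied extension so `txt` behaves like `.txt`; a minimal sketch of the idea:

```python
def normalize_caption_extension(caption_extension: str) -> str:
    # Prepend the missing dot so "txt" and ".txt" are treated the same.
    if caption_extension and not caption_extension.startswith("."):
        caption_extension = "." + caption_extension
    return caption_extension
```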
Kohya S 02bb8e0ac3 use xformers in VAE in gen script 2023-05-21 12:59:01 +09:00
Kohya S bc909e8359 Merge pull request #521 from akshaal/fix/save_state
fix: don't save state if no --save-state arg given
2023-05-21 08:48:48 +09:00
Evgeny Chukreev 0c942106bf fix: don't save state if no --save-state arg given 2023-05-18 20:09:06 +02:00
Fair c0c4d4ddc6 new line with print "generating sample images" 2023-05-17 10:59:06 +08:00
青龍聖者@bdsqlsz 7e5b6154d0 Update train_util.py 2023-05-16 00:09:53 +08:00
Kohya S 714846e1e1 revert perlin_noise 2023-05-15 23:12:11 +09:00
Kohya S 08d85d4013 Merge branch 'dev' of https://github.com/kohya-ss/sd-scripts into dev 2023-05-15 20:58:04 +09:00
Kohya S 0ec7743436 show loading model path 2023-05-15 20:57:53 +09:00
Kohya S a72d80aa85 Merge pull request #507 from HkingAuditore/main
Added support for Perlin noise in Noise Offset
2023-05-15 20:56:46 +09:00
hkinghuang bca6a44974 Perlin noise 2023-05-15 11:16:08 +08:00
Linaqruf 8ab5c8cb28 feat: added json support as well 2023-05-14 19:49:54 +07:00
Linaqruf 774c4059fb feat: added toml support for sample prompt 2023-05-14 19:38:44 +07:00
Kohya S 968bbd2f47 Merge pull request #480 from yanhuifair/main
fix print "saving" and "epoch" in newline
2023-05-11 21:05:37 +09:00
Kohya S 09c719c926 add adaptive noise scale 2023-05-07 18:09:08 +09:00
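Adaptive noise scale (09c719c926) adjusts the noise offset per sample from the absolute mean of the latents, so the offset grows for images whose latents sit far from zero. A minimal sketch, assuming latents shaped (B, C, H, W); the exact formula here is an assumption:

```python
import torch

def apply_noise_offset(latents, noise, noise_offset, adaptive_noise_scale=None):
    offset = noise_offset
    if adaptive_noise_scale is not None:
        # Per-sample, per-channel absolute mean of the latents drives the offset.
        latent_mean = latents.mean(dim=(2, 3), keepdim=True).abs()
        offset = noise_offset + adaptive_noise_scale * latent_mean
        offset = torch.clamp(offset, min=0.0)
    # The offset adds a constant per channel across each image's spatial dims.
    return noise + offset * torch.randn(
        (latents.shape[0], latents.shape[1], 1, 1), device=latents.device
    )
```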
Kohya S e54b6311ef do not save cuda_rng_state if no cuda closes #390 2023-05-07 10:23:25 +09:00
Fair b08154dc36 fix print "saving" and "epoch" in newline 2023-05-07 02:51:01 +08:00
Kohya S 165fc43655 fix comment 2023-05-06 18:25:26 +09:00
Kohya S 2127907dd3 refactor selection and logging for DAdaptation 2023-05-06 18:14:16 +09:00
青龍聖者@bdsqlsz 164a1978de Support for more Dadaptation (#455)
* Update train_util.py for add DAdaptAdan and DAdaptSGD

* Update train_util.py for DAdaptAdam

* Update train_network.py for dadapt

* Update train_README-ja.md for DAdapt

* Update train_util.py for DAdapt

* Update train_network.py for DAdaptAdaGrad

* Update train_db.py for DAdapt

* Update fine_tune.py for DAdapt

* Update train_textual_inversion.py for DAdapt

* Update train_textual_inversion_XTI.py for DAdapt
2023-05-06 17:30:09 +09:00
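The D-Adaptation optimizers added in #455 (and DAdaptLion in #530) estimate their own step size, so the learning rate is usually left at 1.0 and acts only as a multiplier. A minimal sketch using the dadaptation package; extra keyword arguments are passed through as-is:

```python
import dadaptation

def create_dadapt_optimizer(params, optimizer_type="DAdaptAdam", **kwargs):
    # lr stays at 1.0: D-Adaptation learns the effective step size itself.
    classes = {
        "DAdaptAdam": dadaptation.DAdaptAdam,
        "DAdaptSGD": dadaptation.DAdaptSGD,
        "DAdaptAdaGrad": dadaptation.DAdaptAdaGrad,
        "DAdaptLion": dadaptation.DAdaptLion,  # available from dadaptation v3
    }
    return classes[optimizer_type](params, lr=1.0, **kwargs)
```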
Kohya S 60bbe64489 raise error when both noise offset and multires noise are specified 2023-05-03 20:58:12 +09:00
ykume f6556f7972 add ja help message for multires noise 2023-05-03 11:31:13 +09:00
ykume 69579668bb Merge branch 'dev' of https://github.com/kohya-ss/sd-scripts into dev 2023-05-03 11:17:43 +09:00
Kohya S 2e688b7cd3 Merge pull request #471 from pamparamm/multires-noise
Multi-Resolution Noise
2023-05-03 11:17:21 +09:00
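Multi-Resolution Noise from #471 (often called pyramid noise) adds progressively downsampled Gaussian noise back into the base noise with a geometric discount, then renormalizes. A minimal sketch; the discount and level count are illustrative defaults:

```python
import torch
import torch.nn.functional as F

def pyramid_noise_like(noise, discount=0.8, levels=8):
    b, c, h, w = noise.shape
    for i in range(1, levels + 1):
        r = 2 ** i  # halve the resolution at each level
        if h // r == 0 or w // r == 0:
            break
        low = torch.randn(b, c, h // r, w // r, device=noise.device)
        noise = noise + F.interpolate(low, size=(h, w), mode="bilinear") * discount**i
    return noise / noise.std()  # renormalize to roughly unit variance
```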
ykume 2fcbfec178 make transform_DDP more intuitive 2023-05-03 11:07:29 +09:00
Isotr0py e1143caf38 Fix DDP issues and Support DDP for all training scripts (#448)
* Fix DDP bugs

* Fix DDP bugs for finetune and db

* refactor model loader

* fix DDP network

* try to fix DDP network in train unet only

* remove unused DDP import

* refactor DDP transform

* refactor DDP transform

* fix sample images bugs

* change DDP transform location

* add autocast to train_db

* support DDP in XTI

* Clear DDP import
2023-05-03 10:37:47 +09:00
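Much of the DDP work in #448 amounts to tolerating models that may or may not be wrapped in DistributedDataParallel, which hides the original module behind .module. A minimal sketch of the unwrapping idiom (transform_DDP in 2fcbfec178 presumably revolves around the same pattern):

```python
from torch.nn.parallel import DistributedDataParallel as DDP

def unwrap_model(model):
    # DDP wraps the original network; access it uniformly either way.
    return model.module if isinstance(model, DDP) else model
```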
ykume a7485e4d9e Add error message if no Lion8bit 2023-05-03 10:35:47 +09:00
青龍聖者@bdsqlsz 335b2f960e Support for Lion8bit (#447)
* ADD libbitsandbytes.dll for 0.38.1

* Delete libbitsandbytes_cuda116.dll

* Delete cextension.py

* add main.py

* Update requirements.txt for bitsandbytes 0.38.1

* Update README.md for bitsandbytes-windows

* Update README-ja.md for bitsandbytes 0.38.1

* Update main.py for return cuda118

* Update train_util.py for lion8bit

* Update train_README-ja.md for lion8bit

* Update train_util.py for add DAdaptAdan and DAdaptSGD

* Update train_util.py for DAdaptAdam

* Update train_network.py for dadapt

* Update train_README-ja.md for DAdapt

* Update train_util.py for DAdapt

* Update train_network.py for DAdaptAdaGrad

* Update train_db.py for DAdapt

* Update fine_tune.py for DAdapt

* Update train_textual_inversion.py for DAdapt

* Update train_textual_inversion_XTI.py for DAdapt

* Revert "Merge branch 'qinglong' into main"

This reverts commit b65c023083d6d1e8a30eb42eddd603d1aac97650, reversing
changes made to f6fda20caf5e773d56bcfb5c4575c650bb85362b.

* Revert "Update requirements.txt for bitsandbytes 0.38.1"

This reverts commit 83abc60dfaddb26845f54228425b98dd67997528.

* Revert "Delete cextension.py"

This reverts commit 3ba4dfe046874393f2a022a4cbef3628ada35391.

* Revert "Update README.md for bitsandbytes-windows"

This reverts commit 4642c52086b5e9791233007e2fdfd97f832cd897.

* Revert "Update README-ja.md  for bitsandbytes 0.38.1"

This reverts commit fa6d7485ac067ebc49e6f381afdb8dd2f12caa8f.

* Revert "ADD libbitsandbytes.dll for 0.38.1"

This reverts commit bee1e6f731d2428dacb34b61997f06143c69c278.

* Revert "Delete libbitsandbytes_cuda116.dll"

This reverts commit 891c7e92623dab92f3767663982627cca6a26724.

* reverse main.py

* Reverse main.py
2023-05-03 10:22:40 +09:00
Pam b18d099291 Multi-Resolution Noise 2023-05-02 09:42:17 +05:00
Kohya S 1890535d1b enable `cache_latents` when `_to_disk` #438 2023-04-25 08:08:49 +09:00
Kohya S 74008ce487 add `save_every_n_steps` option 2023-04-24 23:22:24 +09:00
Kohya S 46cbae088e fix to log with logging_dir without log_with 2023-04-23 19:15:48 +09:00
Kohya S 9ba4c3edca update readme/comment 2023-04-22 20:18:25 +09:00
Linaqruf e9a641bde7 Merge branch 'main' of https://github.com/Linaqruf/sd-scripts 2023-04-22 16:17:22 +07:00
Linaqruf ae3965a2a7 feat: add arguments to set --wandb_api_key before training 2023-04-22 16:14:14 +07:00
Kohya S 66edc5af7b invert condition for checking log_with 2023-04-22 18:05:19 +09:00
saltacc dc37fd2ff6 fix no logging command line arg 2023-04-22 01:26:31 -07:00
Kohya S 884e6bff5d fix face_crop_aug not working on finetune method, prepare upscaler 2023-04-22 10:41:36 +09:00
Kohya S 220436244c some minor fixes 2023-04-22 09:55:04 +09:00
Kohya S c430cf481a Merge pull request #428 from p1atdev/dev
Add WandB logging support
2023-04-22 09:39:01 +09:00
tsukimiya e746829b5f Exclude cv2.waitKey() and cv2.destroyAllWindows() since they apparently do not work in environments without libgtk2 installed 2023-04-20 06:20:02 +09:00
Plat a69b24a069 fix: tensorboard not working 2023-04-20 05:33:32 +09:00
Plat 8090daca40 fix: wandb not working without logging_dir 2023-04-20 05:14:28 +09:00
Plat 27ffd9fe3d feat: support wandb logging 2023-04-20 01:41:12 +09:00
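The WandB support from #428 goes through Accelerate's tracker interface, so tensorboard and wandb share the same logging calls. A minimal sketch of that pattern; the project name and logged values are placeholders:

```python
from accelerate import Accelerator

accelerator = Accelerator(log_with="wandb")       # "tensorboard" works the same way
accelerator.init_trackers("sd-scripts-demo")      # hypothetical project name
accelerator.log({"loss/current": 0.123}, step=1)  # routed to every active tracker
accelerator.end_training()
```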
Kohya S 423e6c229c support metadata json+.npz caching (no prepare) 2023-04-13 22:12:13 +09:00
Kohya S a8632b7329 fix latents disk cache 2023-04-13 21:14:39 +09:00
Kohya S 2e9f7b5f91 cache latents to disk in dreambooth method 2023-04-12 23:10:39 +09:00
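Caching latents to disk (2e9f7b5f91, a8632b7329) encodes each image through the VAE once and stores the result as an .npz that later runs load directly. A minimal sketch with a diffusers AutoencoderKL; 0.18215 is the standard SD1.x latent scaling factor:

```python
import numpy as np
import torch

@torch.no_grad()
def cache_latents_to_disk(vae, image, npz_path):
    # image: (1, 3, H, W) tensor scaled to [-1, 1]
    latents = vae.encode(image).latent_dist.sample() * 0.18215
    np.savez(npz_path, latents=latents.squeeze(0).float().cpu().numpy())

def load_cached_latents(npz_path):
    return torch.from_numpy(np.load(npz_path)["latents"])
```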
AI-Casanova 0d54609435 Merge branch 'kohya-ss:main' into weighted_captions 2023-04-07 14:55:40 -05:00
AI-Casanova 7527436549 Merge branch 'kohya-ss:main' into weighted_captions 2023-04-05 17:07:15 -05:00
Kohya S 541539a144 change method name, repo is private by default, etc. 2023-04-05 23:16:49 +09:00
Kohya S 74220bb52c Merge pull request #348 from ddPn08/dev
Added a function to upload to Huggingface and resume from Huggingface.
2023-04-05 21:47:36 +09:00
AI-Casanova 1892c82a60 Reinstantiate weighted captions after a necessary revert to Main 2023-04-02 19:43:34 +00:00
ddPn08 16ba1cec69 change async uploading to optional 2023-04-02 17:45:26 +09:00
ddPn08 8bfa50e283 small fix 2023-04-02 17:39:23 +09:00
ddPn08 c4a11e5a5a fix help 2023-04-02 17:39:23 +09:00
ddPn08 3cc4939dd3 Implement huggingface upload for all scripts 2023-04-02 17:39:22 +09:00
ddPn08 b5ff4e816f resume from huggingface repository 2023-04-02 17:39:21 +09:00
ddPn08 d42431d73a Added feature to upload to huggingface 2023-04-02 17:39:10 +09:00
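The upload/resume feature from #348 can be expressed with huggingface_hub; a minimal sketch with placeholder repo id and paths (the scripts create the repo as private by default, per 541539a144):

```python
from huggingface_hub import HfApi, snapshot_download

api = HfApi()
api.create_repo("user/my-train-states", private=True, exist_ok=True)
api.upload_folder(repo_id="user/my-train-states", folder_path="output/state-000001")

# Resume: pull the saved state back down to a local directory.
local_dir = snapshot_download(repo_id="user/my-train-states")
```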
Yuta Hayashibe 9577a9f38d Check needless num_warmup_steps 2023-04-01 20:33:20 +09:00
Kohya S 31069e1dc5 add comments about device for clarity 2023-03-30 21:44:40 +09:00
Kohya S 6c28dfb417 Merge pull request #332 from guaneec/ddp-lowram
Reduce peak RAM usage
2023-03-30 21:37:37 +09:00
Jakaline-dev b0c33a4294 Merge remote-tracking branch 'upstream/main' 2023-03-30 01:35:38 +09:00
Kohya S 4f70e5dca6 fix to work with num_workers=0 2023-03-28 19:42:47 +09:00
Kohya S 238f01bc9c fix images are used twice, update debug dataset 2023-03-27 20:48:21 +09:00
guaneec 3cdae0cbd2 Reduce peak RAM usage 2023-03-27 14:34:17 +08:00
Kohya S 14891523ce fix seed for each dataset to make shuffling same 2023-03-26 22:17:03 +09:00
Kohya S 6732df93e2 Merge branch 'dev' into min-SNR 2023-03-26 17:10:53 +09:00
Kohya S 4f42f759ea Merge pull request #322 from u-haru/feature/token_warmup
Add an option to train while gradually increasing the number of tags; minor bug fix related to persistent_workers
2023-03-26 17:05:59 +09:00
Jakaline-dev a35d7ef227 Implement XTI 2023-03-26 05:26:10 +09:00
u-haru a4b34a9c3c Removed blueprint_args_conflict as unnecessary; fixed a bug where shuffling ran every time 2023-03-26 03:26:55 +09:00
u-haru 5a3d564a30 Remove print statements 2023-03-26 02:26:08 +09:00
u-haru 4dc1124f93 Support methods other than LoRA as well 2023-03-26 02:19:55 +09:00
u-haru 292cdb8379 Fix bug where epoch and step were not passed to the dataset 2023-03-26 01:44:25 +09:00
u-haru 1b89b2a10e Change to truncate tags before shuffling 2023-03-24 13:44:30 +09:00
u-haru 447c56bf50 Fix typo, change step to global_step, fix bugs 2023-03-23 09:53:14 +09:00
u-haru a9b26b73e0 implement token warmup 2023-03-23 07:37:14 +09:00
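Token warmup (#322) trains on a growing prefix of each caption's tags, raising the cap toward the full tag list as training progresses; per 1b89b2a10e the truncation happens before shuffling. A minimal sketch with illustrative names:

```python
def apply_token_warmup(tags, global_step, warmup_steps, min_tags=1):
    # Linearly grow the number of kept tags from min_tags to len(tags).
    progress = min(global_step / max(warmup_steps, 1), 1.0)
    keep = max(min_tags, int(len(tags) * progress))
    return tags[:keep]  # truncate before any shuffling of the kept tags
```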
AI-Casanova 64c923230e Min-SNR Weighting Strategy: Refactored and added to all trainers 2023-03-22 01:27:29 +00:00
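The Min-SNR weighting strategy clips each timestep's SNR at a constant γ and divides by the SNR, i.e. w_t = min(SNR_t, γ) / SNR_t for epsilon prediction, which down-weights the easy low-noise timesteps. A minimal sketch, again assuming a precomputed per-timestep SNR tensor:

```python
import torch

def apply_min_snr_weight(loss, timesteps, all_snr, gamma=5.0):
    snr = all_snr[timesteps]  # SNR for each sampled timestep
    weight = torch.minimum(snr, torch.full_like(snr, gamma)) / snr
    return loss * weight      # loss: per-sample, before reduction
```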
AI-Casanova 795a6bd2d8 Merge branch 'kohya-ss:main' into min-SNR 2023-03-21 13:19:15 -05:00
Kohya S 7b324bcc3b support extensions of image files with uppercases 2023-03-21 21:10:34 +09:00
Kohya S 6d9f3bc0b2 fix different reso in batch 2023-03-21 18:33:46 +09:00
Kohya S 1816ac3271 add vae_batch_size option for faster caching 2023-03-21 18:15:57 +09:00
Kohya S cb08fa0379 fix no npz with full path 2023-03-21 15:05:25 +09:00
AI-Casanova a265225972 Min-SNR Weighting Strategy 2023-03-20 22:51:38 +00:00
Kohya S de95431895 support win with diffusers, fix extra args eval 2023-03-19 22:09:36 +09:00
Kohya S 48c1be34f3 Merge branch 'dev' into main 2023-03-19 21:58:41 +09:00