Commit Graph

178 Commits (321bfe8bc77d4b4e0e223089f437df1eba9d2d9d)

Author SHA1 Message Date
AUTOMATIC 81823407d9 add --no-hashing 2023-02-04 11:38:56 +03:00
AUTOMATIC 78f59a4e01 enable compact view for train tab
prevent previews from ruining hypernetwork training
2023-01-22 00:02:51 +03:00
AUTOMATIC 40ff6db532 extra networks UI
rework of hypernets: rather than via settings, hypernets are added directly to the prompt as <hypernet:name:weight>
2023-01-21 08:36:07 +03:00
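The `<hypernet:name:weight>` syntax in the entry above is a simple angle-bracket token; a minimal parsing sketch, assuming a hypothetical `extract_hypernets` helper and example names (this is not the webui's actual extra-networks parser):

```python
import re

# Matches tokens of the form <hypernet:name:weight>, e.g. <hypernet:anime:0.8>.
# Illustrative only; the real parser covers more of the extra-networks syntax.
HYPERNET_RE = re.compile(r"<hypernet:([^:>]+):([\d.]+)>")

def extract_hypernets(prompt: str):
    """Return the prompt with tokens stripped plus a list of (name, weight)."""
    found = [(name, float(weight)) for name, weight in HYPERNET_RE.findall(prompt)]
    return HYPERNET_RE.sub("", prompt).strip(), found

print(extract_hypernets("a castle at sunset <hypernet:anime:0.8>"))
# ('a castle at sunset', [('anime', 0.8)])
```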
AUTOMATIC 924e222004 add option to show/hide warnings
removed hiding warnings from LDSR
fixed/reworked few places that produced warnings
2023-01-18 23:04:24 +03:00
aria1th 13445738d9 Fix tensorboard related functions 2023-01-16 03:02:54 +09:00
aria1th 598f7fcd84 Fix loss_dict problem 2023-01-16 02:46:21 +09:00
AngelBottomless 16f410893e fix missing 'mean loss' for tensorboard integration 2023-01-16 02:08:47 +09:00
AUTOMATIC d8b90ac121 big rework of progressbar/preview system to allow multiple users to submit prompts at the same time without getting each other's previews 2023-01-15 18:51:04 +03:00
AUTOMATIC f9ac3352cb change hypernets to use sha256 hashes 2023-01-14 10:25:37 +03:00
AUTOMATIC a95f135308 change hash to sha256 2023-01-14 09:56:59 +03:00
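Computing a SHA-256 hash of a checkpoint file needs only the standard library; a minimal sketch, with the path and chunk size as placeholder values:

```python
import hashlib

def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file in chunks so multi-gigabyte models never sit fully in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# e.g. sha256_of_file("models/hypernetworks/example.pt")  # path is illustrative
```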
AUTOMATIC1111 9cd7716753 Merge branch 'master' into tensorboard 2023-01-13 14:57:38 +03:00
Vladimir Mandic 3f43d8a966 set descriptions 2023-01-11 10:28:55 -05:00
aria1th a4a5475cfa Variable dropout rate
Implements variable dropout rate from #4549

Fixes hypernetwork multiplier being able to be modified during training, and also fixes user errors caused by setting the multiplier to lower values for training.

Changes function name to match the torch.nn.Module standard

Fixes RNG reset issue when generating previews by restoring RNG state
2023-01-10 14:56:57 +09:00
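The RNG-restore fix mentioned in the entry above follows a standard PyTorch pattern: snapshot the generator state, run the preview, put the state back. A minimal sketch, with `generate_preview` as a hypothetical stand-in for the preview call:

```python
import torch

def with_preserved_rng(generate_preview):
    """Run a preview without disturbing the training RNG stream."""
    cpu_state = torch.get_rng_state()
    cuda_states = torch.cuda.get_rng_state_all() if torch.cuda.is_available() else None
    try:
        return generate_preview()
    finally:
        torch.set_rng_state(cpu_state)
        if cuda_states is not None:
            torch.cuda.set_rng_state_all(cuda_states)
```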
AUTOMATIC 1fbb6f9ebe make a dropdown for prompt template selection 2023-01-09 23:35:40 +03:00
dan 72497895b9 Move batchsize check 2023-01-08 02:57:36 +08:00
dan 669fb18d52 Add checkbox for variable training dims 2023-01-08 02:31:40 +08:00
AUTOMATIC 683287d87f rework saving training params to file #6372 2023-01-06 08:52:06 +03:00
timntorres b6bab2f052 Include model in log file. Exclude directory. 2023-01-05 09:14:56 -08:00
timntorres b85c2b5cf4 Clean up ti, add same behavior to hypernetwork. 2023-01-05 08:14:38 -08:00
AUTOMATIC1111 eeb1de4388 Merge branch 'master' into gradient-clipping 2023-01-04 19:56:35 +03:00
Vladimir Mandic 192ddc04d6 add job info to modules 2023-01-03 10:34:51 -05:00
AUTOMATIC1111 b12de850ae Merge pull request #5992 from yuvalabou/F541
Fix F541: f-string without any placeholders
2022-12-25 09:16:08 +03:00
Vladimir Mandic 5f1dfbbc95 implement train api 2022-12-24 18:02:22 -05:00
Yuval Aboulafia 3bf5591efe fix F541 f-string without any placeholders 2022-12-24 21:35:29 +02:00
AUTOMATIC1111 c9a2cfdf2a Merge branch 'master' into racecond_fix 2022-12-03 10:19:51 +03:00
brkirch 4d5f1691dd Use devices.autocast instead of torch.autocast 2022-11-30 10:33:42 -05:00
flamelaw 1bd57cc979 last_layer_dropout default to False 2022-11-23 20:21:52 +09:00
flamelaw d2c97fc3fe fix dropout, implement train/eval mode 2022-11-23 20:00:00 +09:00
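Train/eval mode, which the commit above implements, is what switches dropout on for optimization and off for previews; a minimal sketch of the standard toggle:

```python
import torch.nn as nn

net = nn.Sequential(nn.Linear(8, 8), nn.Dropout(p=0.3), nn.Linear(8, 8))

net.train()  # dropout active: randomly zeroes activations during training steps
net.eval()   # dropout disabled: deterministic forward pass for previews/inference
```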
flamelaw 89d8ecff09 small fixes 2022-11-23 02:49:01 +09:00
flamelaw 5b57f61ba4 fix pin_memory with different latent sampling method 2022-11-21 10:15:46 +09:00
flamelaw bd68e35de3 Gradient accumulation, autocast fix, new latent sampling method, etc 2022-11-20 12:35:26 +09:00
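Gradient accumulation, introduced in the commit above, scales each batch loss down and steps the optimizer only every N batches; a minimal sketch of the common PyTorch pattern (names and the accumulation count are illustrative, not the webui's training loop):

```python
import torch

ACCUM_STEPS = 4  # illustrative value

def train_epoch(model, optimizer, dataloader, loss_fn, device="cuda"):
    optimizer.zero_grad()
    for i, (x, y) in enumerate(dataloader):
        with torch.autocast(device_type=device):  # mixed precision
            loss = loss_fn(model(x.to(device)), y.to(device)) / ACCUM_STEPS
        loss.backward()                  # gradients accumulate across batches
        if (i + 1) % ACCUM_STEPS == 0:
            optimizer.step()             # apply the accumulated gradients
            optimizer.zero_grad()
```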
AUTOMATIC cdc8020d13 change StableDiffusionProcessing to internally use sampler name instead of sampler index 2022-11-19 12:01:51 +03:00
Muhammad Rizqi Nur cabd4e3b3b Merge branch 'master' into gradient-clipping 2022-11-07 22:43:38 +07:00
AUTOMATIC 62e3d71aa7 rework the code to not use the walrus operator because colab's 3.7 does not support it 2022-11-05 17:09:42 +03:00
AUTOMATIC1111 cb84a304f0 Merge pull request #4273 from Omegastick/ordered_hypernetworks
Sort hypernetworks list
2022-11-05 16:16:18 +03:00
Muhammad Rizqi Nur bb832d7725 Simplify grad clip 2022-11-05 11:48:38 +07:00
Isaac Poulton 08feb4c364 Sort straight out of the glob 2022-11-04 20:53:11 +07:00
Muhammad Rizqi Nur 3277f90e93 Merge branch 'master' into gradient-clipping 2022-11-04 18:47:28 +07:00
Isaac Poulton fd62727893 Sort hypernetworks 2022-11-04 18:34:35 +07:00
Fampai 39541d7725 Fixes race condition in training when VAE is unloaded
set_current_image can attempt to use the VAE when it is unloaded to the CPU while training
2022-11-04 04:50:22 -04:00
aria1th 1ca0bcd3a7 only save if option is enabled 2022-11-04 16:09:19 +09:00
aria1th f5d394214d split before declaring file name 2022-11-04 16:04:03 +09:00
aria1th 283249d239 apply 2022-11-04 15:57:17 +09:00
AUTOMATIC1111 4918eb6ce4 Merge branch 'master' into hn-activation 2022-11-04 09:02:15 +03:00
Muhammad Rizqi Nur d5ea878b2a Fix merge conflicts 2022-10-31 13:54:40 +07:00
Muhammad Rizqi Nur 4123be632a Fix merge conflicts 2022-10-31 13:53:22 +07:00
Muhammad Rizqi Nur cd4d59c0de Merge master 2022-10-30 18:57:51 +07:00
AUTOMATIC1111 17a2076f72 Merge pull request #3928 from R-N/validate-before-load
Optimize training a little
2022-10-30 09:51:36 +03:00
Muhammad Rizqi Nur 3d58510f21 Fix dataset still being loaded even when training will be skipped 2022-10-30 00:54:59 +07:00
Muhammad Rizqi Nur a07f054c86 Add missing info on hypernetwork/embedding model log
Mentioned here: https://github.com/AUTOMATIC1111/stable-diffusion-webui/discussions/1528#discussioncomment-3991513

Also group the saving into one step
2022-10-30 00:49:29 +07:00
Muhammad Rizqi Nur ab05a74ead Revert "Add cleanup after training"
This reverts commit 3ce2bfdf95.
2022-10-30 00:32:02 +07:00
Muhammad Rizqi Nur 3ce2bfdf95 Add cleanup after training 2022-10-29 19:43:21 +07:00
Muhammad Rizqi Nur ab27c111d0 Add input validations before loading dataset for training 2022-10-29 18:09:17 +07:00
Muhammad Rizqi Nur 05e2e40537 Merge branch 'master' into gradient-clipping 2022-10-29 15:04:21 +07:00
timntorres e98f72be33 Merge branch 'AUTOMATIC1111:master' into 3825-save-hypernet-strength-to-info 2022-10-29 00:31:23 -07:00
AUTOMATIC1111 810e6a407d Merge pull request #3858 from R-N/log-csv
Fix log off by 1 #3847
2022-10-29 07:55:20 +03:00
Muhammad Rizqi Nur 9ceef81f77 Fix log off by 1 2022-10-28 20:48:08 +07:00
Muhammad Rizqi Nur 16451ca573 Learning rate sched syntax support for grad clipping 2022-10-28 17:16:23 +07:00
timntorres db5a354c48 Always ignore "None.pt" in the hypernet directory. 2022-10-28 01:41:57 -07:00
benkyoujouzu b2a8b263b2 Add missing support for linear activation in hypernetwork 2022-10-28 12:54:59 +08:00
Muhammad Rizqi Nur 2a25729623 Gradient clipping in train tab 2022-10-28 09:44:56 +07:00
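Gradient clipping like the feature above usually sits between `backward()` and `step()` and wraps torch's built-in utilities; a minimal sketch, with the mode and value as illustrative parameters:

```python
import torch

def clip_gradients(params, mode: str, value: float):
    """Apply norm- or value-based clipping before optimizer.step()."""
    if mode == "norm":
        torch.nn.utils.clip_grad_norm_(params, max_norm=value)
    elif mode == "value":
        torch.nn.utils.clip_grad_value_(params, clip_value=value)

# loss.backward()
# clip_gradients(model.parameters(), "norm", 1.0)
# optimizer.step()
```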
AngelBottomless 029d7c7543 Revert unresolved changes in Bias initialization
it should be zeros_ or properly parameterized in the future.
2022-10-27 14:44:53 +09:00
guaneec cc56df996e Fix dropout logic 2022-10-27 14:38:21 +09:00
AngelBottomless 85fcccc105 Squashed commit fixing silently broken dropout
fix dropouts for future hypernetworks

add kwargs for Hypernetwork class

hypernet UI for gradio input

add recommended options

remove as options

revert adding options in ui
2022-10-27 14:38:21 +09:00
guaneec b6a8bb123b Fix merge 2022-10-26 15:15:19 +08:00
timntorres a524d137d0 patch bug (SeverianVoid's comment on 5245c7a) 2022-10-26 10:12:46 +03:00
guaneec 91bb35b1e6 Merge fix 2022-10-26 15:00:03 +08:00
guaneec 649d79a8ec Merge branch 'master' into hn-activation 2022-10-26 14:58:04 +08:00
guaneec 877d94f97c Back compatibility 2022-10-26 14:50:58 +08:00
AngelBottomless 7207e3bf49 remove duplicate keys and lowercase 2022-10-26 09:17:01 +03:00
AngelBottomless de096d0ce7 Weight initialization and more activation functions
add weight init

add weight init option in create_hypernetwork

fstringify hypernet info

save weight initialization info for further debugging

fill bias with zero for He/Xavier

initialize LayerNorm with Normal

fix loading weight_init
2022-10-26 09:17:01 +03:00
guaneec c702d4d0df Fix off-by-one 2022-10-26 13:43:04 +08:00
guaneec 2f4c91894d Remove activation from final layer of HNs 2022-10-26 12:10:30 +08:00
Melan 18f86e41f6 Removed two unused imports 2022-10-24 17:21:18 +02:00
AngelBottomless e9a410b535 check length for variance 2022-10-24 09:07:39 +03:00
AngelBottomless 0d2e1dac40 convert deque -> list
I don't think this is efficient
2022-10-24 09:07:39 +03:00
AngelBottomless 348f89c8d4 statistics for pbar 2022-10-24 09:07:39 +03:00
AngelBottomless 40b56c9289 cleanup some code 2022-10-24 09:07:39 +03:00
AngelBottomless b297cc3324 Hypernetworks - fix KeyError in statistics caching
Statistics logging has changed to {filename: list[losses]}, so it has to use loss_info[key].pop()
2022-10-24 09:07:39 +03:00
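The cache shape described above, `{filename: list[losses]}`, is easy to reproduce with a defaultdict; a minimal sketch (names and the history cap are illustrative):

```python
from collections import defaultdict

# Per-image loss history: {filename: [loss, loss, ...]}
loss_info: dict = defaultdict(list)

def record_loss(filename: str, loss: float, max_len: int = 1024):
    loss_info[filename].append(loss)
    if len(loss_info[filename]) > max_len:
        loss_info[filename].pop(0)  # pop from the per-key list, per the fix above
```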
DepFA 1fbfc052eb Update hypernetwork.py 2022-10-23 08:34:33 +03:00
AngelBottomless 48dbf99e84 Allow tracking real-time loss
Someone had 6000 images in their dataset, and the loss was shown as 0, which was confusing.
This will allow tracking the real-time dataset-average loss for registered objects.
2022-10-22 22:24:19 +03:00
AngelBottomless 24694e5983 Update hypernetwork.py 2022-10-22 20:25:32 +03:00
discus0434 6a4fa73a38 small fix 2022-10-22 13:44:39 +00:00
discus0434 97749b7c7d Merge branch 'AUTOMATIC1111:master' into master 2022-10-22 22:00:59 +09:00
discus0434 7912acef72 small fix 2022-10-22 13:00:44 +00:00
discus0434 fccba4729d add an option to avoid dying relu 2022-10-22 12:02:41 +00:00
AUTOMATIC 7fd90128eb added a guard for hypernet training that will stop early if weights are getting no gradients 2022-10-22 14:48:43 +03:00
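A guard of the kind described above can verify after `backward()` that the trained tensors actually received gradients; a minimal sketch, with names illustrative rather than the webui's actual check:

```python
def assert_weights_have_grad(weights):
    """Stop early if no gradient reached the hypernetwork weights."""
    if all(w.grad is None or w.grad.abs().sum().item() == 0 for w in weights):
        raise RuntimeError(
            "hypernetwork weights received no gradients; training would be a no-op"
        )
```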
discus0434 dcb45dfecf Merge branch 'master' of upstream 2022-10-22 11:14:46 +00:00
discus0434 0e8ca8e7af add dropout 2022-10-22 11:07:00 +00:00
timntorres 272fa527bb Remove unused variable. 2022-10-21 16:52:24 +03:00
timntorres 19818f023c Match hypernet name with filename in all cases. 2022-10-21 16:52:24 +03:00
AUTOMATIC 03a1e288c4 turns out LayerNorm also has weight and bias and needs to be pre-multiplied and trained for hypernets 2022-10-21 10:13:24 +03:00
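As the entry above notes, `torch.nn.LayerNorm` carries learnable `weight` and `bias` tensors of its own, so they must be included among the trained parameters; a minimal sketch with an illustrative layer stack:

```python
import torch.nn as nn

block = nn.Sequential(
    nn.Linear(768, 1536),
    nn.LayerNorm(1536),  # LayerNorm has learnable .weight and .bias too
    nn.Linear(1536, 768),
)

# Six parameter tensors appear: two weight/bias pairs from the Linears
# plus LayerNorm's weight and bias, so an optimizer must train all of them.
for name, p in block.named_parameters():
    print(name, tuple(p.shape), p.requires_grad)
```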
AUTOMATIC1111 0c5522ea21
Merge branch 'master' into training-help-text 2022-10-21 09:57:55 +03:00
timntorres 4ff274e1e3 Revise comments. 2022-10-21 09:55:00 +03:00
timntorres 5245c7a493 Issue #2921-Give PNG info to Hypernet previews. 2022-10-21 09:55:00 +03:00
AUTOMATIC c23f666dba a more strict check for activation type and a more reasonable check for type of layer in hypernets 2022-10-21 09:47:43 +03:00
Melan 7543cf5e3b Fixed some typos in the code 2022-10-20 22:43:08 +02:00
Melan 8f59129847 Some changes to the tensorboard code and hypernetwork support 2022-10-20 22:37:16 +02:00
aria1th f89829ec3a Revert "fix bugs and optimizations"
This reverts commit 108be15500.
2022-10-21 01:37:11 +09:00
AngelBottomless 108be15500 fix bugs and optimizations 2022-10-21 01:00:41 +09:00