Hypernetwork-MonkeyPatch-Extension
Extension that patches Hypernetwork structures and training

For Hypernetwork structure, see https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/4334
For Variable Dropout, see https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/4288
The Train_Beta tab (now train_gamma) provides additional options and improved training.
Features
- No-Crop Training: you can train without cropping images.
- Fix for an OSError that could occur while training.
- Unload the optimizer while generating previews.
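No-crop training is usually done by resizing each image to a nearby training resolution with a similar aspect ratio instead of center-cropping it. The sketch below illustrates that idea only; the function name, parameters, and bucket-selection rule are made up for this example and are not this extension's actual implementation.

```python
def nearest_bucket(width, height, step=64, max_dim=1024, max_pixels=512 * 512):
    """Pick a training resolution (both sides multiples of `step`) that best
    preserves the image's aspect ratio while staying under a pixel budget,
    so the image can be resized rather than cropped."""
    aspect = width / height
    best_key, best = None, None
    for w in range(step, max_dim + 1, step):
        for h in range(step, max_dim + 1, step):
            if w * h > max_pixels:
                continue
            # prefer the closest aspect ratio; break ties toward larger area
            key = (abs(w / h - aspect), -(w * h))
            if best_key is None or key < best_key:
                best_key, best = key, (w, h)
    return best
```

For example, a 768x512 image maps to the largest in-budget bucket with the same 3:2 ratio, so it is resized instead of being cropped to a square.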
Create_Beta_hypernetwork allows creating beta hypernetworks.
Beta hypernetworks can contain additional information and specified dropout structures. They can also be loaded without this extension, but the dropout structure will not be loaded, so training will not behave as it does here; generating images should work identically.
This extension also overrides how the webui finds and loads hypernetworks, in order to support variable dropout rates and other additions. As a result, a hypernetwork created with a variable dropout rate might not work correctly in the original webui.
At this point everything should work, except that you cannot use variable dropout rates in the original webui. If you have problems loading hypernetworks, please create an issue; I can submit a PR to the original branch so that these beta-type hypernetworks load correctly.
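Variable dropout (PR #4288) means each hidden layer of the hypernetwork can carry its own dropout probability instead of one global rate. A rough illustration of the idea, using a simple linear interpolation from a start rate to an end rate; the function name and the interpolation scheme are assumptions for this sketch, not the extension's actual code:

```python
def variable_dropout_rates(layer_sizes, first=0.05, last=0.30):
    """Assign a dropout probability to each hidden layer of a hypernetwork,
    interpolating linearly from `first` to `last`, so deeper layers get
    stronger dropout. Input and output layers get none."""
    hidden = len(layer_sizes) - 2  # exclude input and output layers
    if hidden <= 0:
        return []
    if hidden == 1:
        return [first]
    step = (last - first) / (hidden - 1)
    return [round(first + i * step, 4) for i in range(hidden)]
```

Because the per-layer rates are part of the saved structure, a webui that only knows a single global dropout rate cannot reconstruct them, which is why such hypernetworks need this extension for training.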
Training features are in the train_gamma tab.
If you're unsure about the options, just enable every checkbox and keep the default values.
CosineAnnealingWarmupRestarts
This also fixes some CUDA memory issues. As far as I can tell, both Beta and Gamma training currently work very well.
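A cosine-annealing-with-warmup-restarts schedule repeats cycles of a linear warmup followed by a cosine decay back down. The self-contained sketch below computes the per-step learning rate for fixed-length cycles; the parameter names and defaults are assumptions, and the scheduler used here may scale cycles differently:

```python
import math

def cosine_warmup_restart_lr(step, cycle_len=1000, warmup=100,
                             max_lr=5e-3, min_lr=5e-5):
    """Learning rate at `step` for repeating cycles: linear warmup from
    min_lr to max_lr over `warmup` steps, then cosine decay back to min_lr
    over the remainder of the cycle. Restarts every `cycle_len` steps."""
    t = step % cycle_len
    if t < warmup:
        return min_lr + (max_lr - min_lr) * t / warmup
    progress = (t - warmup) / (cycle_len - warmup)
    return min_lr + 0.5 * (max_lr - min_lr) * (1 + math.cos(math.pi * progress))
```

The periodic restarts bring the learning rate back up, which can help the hypernetwork escape poor local minima during long training runs.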
Planned features
Allow using general models with .safetensors saving/loading
Some personal researches
We cannot apply convolution to attention here. Convolution does do something, but the hypernetwork only affects attention in latent space, which is different from the 'attention map', an already-decoded form (image BW vectors) of that attention. The same unfortunately goes for SENet.
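For context: in the webui, a hypernetwork is a pair of small MLPs applied to the context vectors just before cross-attention's key and value projections, with a residual connection, so it operates on latent-space features rather than on decoded attention maps. A toy, pure-Python sketch of that residual shape (illustrative only; real layers are torch modules with learned weights):

```python
def apply_hypernetwork(context, w1, w2):
    """Toy version of how a hypernetwork modifies attention inputs: a
    two-layer MLP (ReLU hidden layer) whose output is added residually to
    each context vector, so zero weights leave attention unchanged."""
    def matvec(m, v):
        return [sum(m[i][j] * v[j] for j in range(len(v))) for i in range(len(m))]

    out = []
    for vec in context:
        hidden = [max(0.0, x) for x in matvec(w1, vec)]  # ReLU activation
        delta = matvec(w2, hidden)                        # residual update
        out.append([a + b for a, b in zip(vec, delta)])
    return out
```

The residual form is why the effect stays confined to the attention computation: nothing downstream of the attention layers is touched directly.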
