Try to eliminate spaghetti code by moving items into their own files.
Remove unused files, classes, methods.
Continue the good work of wrapping any dependencies on Auto1111 in dreambooth/shared.py.
Add fancy new graph generation showing LR and loss values.
Add LR and loss values to UI during updates.
Fix UI layout for progress bar and textinfos.
Remove hypernetwork junk from xattention; it's not applicable when training a model.
Add get_scheduler method from an unreleased diffusers version to allow for new LR params.
Add wrapper class for training output - still needs to be added for Imagic.
Remove the "use CPU" option entirely; replace it with "use lora" in the wizard.
Add gradient accumulation steps to wizard.
Add lr cycles and lr power UI params for applicable LR schedulers.
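The two new UI params map to the cosine-with-restarts and polynomial schedules. A minimal sketch of the underlying LR multipliers, written as plain functions rather than the diffusers implementation (function names here are illustrative):

```python
import math

def cosine_with_restarts_lambda(step, total_steps, num_cycles):
    # Cosine decay from 1.0 to 0.0 that restarts num_cycles times over training.
    progress = step / max(1, total_steps)
    return max(0.0, 0.5 * (1.0 + math.cos(math.pi * ((num_cycles * progress) % 1.0))))

def polynomial_lambda(step, total_steps, power):
    # Polynomial decay from 1.0 down to 0.0 with exponent `power`;
    # power=1.0 reduces to plain linear decay.
    remaining = 1.0 - step / max(1, total_steps)
    return max(0.0, remaining) ** power
```

Either function can be fed to `torch.optim.lr_scheduler.LambdaLR` as the `lr_lambda`; the "lr cycles" and "lr power" fields only make sense for their respective schedulers, which is why the UI shows them conditionally.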
Remove broken min_learning_rate param.
Remove unused "save class text" param.
Update js ui hints.
Bump diffusers version.
Make labels more useful, auto-adjusting as needed.
Add manual "check progress" button to UI, because "gradio".
Fix gitignore.
Add epoch to db_config.
Compile LoRA checkpoints when training.
Remove unused messages and files.
Detect 512 V2 models and save the appropriate config file.
Fix global step counter.
Cleanup startup script to not freak people out.
Use pypi diffusers version, bump transformers version.
Remove annoying message from bnb import, fix missing os import.
Move db_concept to its own thing so we can preview prompts before training.
Fix dumb issues when loading/saving configs.
Add setting for saving class txt prompts, versus just doing it.
V2 conversion fixes.
Move methods not specifically related to dreambooth training to utils class.
Add automatic setter for UI details when switching models.
Loading model params won't overwrite v2/ema/revision settings.
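Preserving those settings amounts to a guarded merge when model params are loaded into the config. A rough sketch, assuming a dict-shaped config (the key names here are illustrative, not the extension's actual fields):

```python
# Keys the user set explicitly, which a param load must not clobber.
PRESERVED_KEYS = {"v2", "use_ema", "revision"}

def merge_loaded_params(current, loaded):
    # Take everything from the loaded params, then restore the
    # preserved keys from the current config where they exist.
    merged = dict(loaded)
    for key in PRESERVED_KEYS:
        if key in current:
            merged[key] = current[key]
    return merged
```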
Cleanup installer, better logging, etc.
Use github diffusers version, for now.
Add universal bitsandbytes DLL for Windows users.
Update bnb support files.
V2 compile/extract fixes. Set proper scheduler.
Better state dict rebuilding.
Maybe (really, maybe) fix identical checkpoint hashes.
Ensure older db configs work.
Fix up sample image generation from diffusers.
Remove "custom" EmaModel, use the real one like Huggingface does.
Revert breaking changes involving disabling text encoder training.
Add option to load sample prompts from a template file for each concept.
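Template loading can be as simple as one prompt per line with a placeholder substituted per concept. A hedged sketch (the `[name]` placeholder token and helper name are assumptions, not the extension's actual convention):

```python
def load_sample_prompts(template_path, instance_token):
    # Read one prompt per line from the template file, substituting
    # the concept's instance token for a [name] placeholder.
    # Blank lines are skipped.
    with open(template_path, encoding="utf-8") as f:
        return [line.strip().replace("[name]", instance_token)
                for line in f if line.strip()]
```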
Add option to extract EMA weights from model if they exist.
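Extraction boils down to substituting the EMA shadow parameters for the online weights wherever a shadow copy exists. A rough sketch over a flat state dict, assuming EMA entries carry an `ema_` prefix (the prefix is illustrative; real checkpoints key their EMA weights differently):

```python
def extract_ema_weights(state_dict, prefix="ema_"):
    # Replace each online weight with its EMA shadow copy when present;
    # weights without a shadow entry pass through unchanged.
    extracted = {}
    for key, value in state_dict.items():
        if key.startswith(prefix):
            continue  # shadow entries are folded in below, not kept as-is
        extracted[key] = state_dict.get(prefix + key, value)
    return extracted
```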
Save global_step value to generated checkpoint.
Bump diffusers version to 0.9.0[torch]
Remove safety_checker and feature_extractor.
Create new "SuperDataset" class that can shuffle our images, handle multiple concepts intelligently, and lay the groundwork for resolution bucketing.
Stop being lazy with DreamboothConfig, define all params.
Add new "Concept" Class to neatly encapsulate parameters for individual concepts.
Begin editing the UI to allow training multiple concepts without needing JSON.
Remove "shuffle after epoch" stuff, as it's no longer needed with SuperDataset.
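The shape of the two new classes might look roughly like this; a sketch only, with illustrative field names, not the extension's actual definitions:

```python
import random
from dataclasses import dataclass

@dataclass
class Concept:
    # Per-concept training parameters, replacing the old JSON blob.
    instance_prompt: str
    class_prompt: str = ""
    instance_data_dir: str = ""
    num_class_images: int = 0

class SuperDataset:
    # Flattens images from multiple concepts into one sample list and
    # reshuffles it in place, so a separate "shuffle after epoch" pass
    # is no longer needed.
    def __init__(self, concepts_with_images):
        # concepts_with_images: list of (Concept, [image_path, ...]) pairs
        self.samples = [(c, img) for c, imgs in concepts_with_images for img in imgs]

    def shuffle(self):
        random.shuffle(self.samples)

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        return self.samples[idx]
```

Calling `shuffle()` at each epoch boundary keeps concepts interleaved, and because resolution metadata can hang off each sample tuple later, this layout leaves room for resolution bucketing.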
If xformers is installed and the cmd flag to use it is set, nag the user before training that they need to enable fp16 and 8-bit Adam.
If the user is training on CPU only, tell them they need to disable fp16 and 8-bit Adam.
Ensure warning messages are REALLY shown in the UI.
Better logging of installed features on launch.