commit 3933b0b47ec546ecf8a57c81a002ac2583e332dc
Author: d8ahazard <d8ahazard@gmail.com>
Date: Fri Jan 6 08:44:36 2023 -0600
Ensure model params are set before creating new model.
commit 2d112955194c1094cdbe7f548b16cc131febb3e2
Author: d8ahazard <d8ahazard@gmail.com>
Date: Fri Jan 6 08:43:35 2023 -0600
Revert to old loss calculation method.
Use original loss weight calc method from HF script, versus the version used in Kohya_SS.
Only clip tenc params if training tenc.
commit 9f437e2d68da68e46675607504edeb400c5f4efa
Author: d8ahazard <d8ahazard@gmail.com>
Date: Thu Jan 5 22:22:48 2023 -0600
Python 3.8 fix
commit bdef26f0d00d6be4e0c0d5df9afcbb1d0e824463
Author: d8ahazard <d8ahazard@gmail.com>
Date: Thu Jan 5 22:22:36 2023 -0600
Update hints
commit c9773476a45e5b416875fb7ab6790d516bb88194
Author: d8ahazard <d8ahazard@gmail.com>
Date: Thu Jan 5 22:22:24 2023 -0600
Fix warnings and errors generating graphs
commit b071d59932febc83f947a349f0433b1d9e04c07b
Author: d8ahazard <d8ahazard@gmail.com>
Date: Thu Jan 5 16:02:32 2023 -0600
Code cleanup
Fix typos, grammar.
Remove unused vars.
Optimize imports.
Rename dupe/repeated vars.
commit df20e2278164eea3bb61f203c4e2b2cfa3d3a07a
Author: d8ahazard <d8ahazard@gmail.com>
Date: Thu Jan 5 15:13:11 2023 -0600
clean
commit 9477b2e55fdcc48eb1d2b73a57f34c99cdbff696
Author: d8ahazard <d8ahazard@gmail.com>
Date: Thu Jan 5 14:07:06 2023 -0600
Bippity Boppity Booyah!
Remove "class_buckets" param.
Remove excess imports.
Fix saving checkpoint counts during training.
Ensure images returned to UI are images.
Fix bucket matching, class images generation.
Use same methods for bucketing and class generation.
Add "Debug Buckets" button to UI.
Fix sample image generation only generating samples for one concept.
commit 9ceb5c27980f19ec3e5ea50cb5186e28e3245d1c
Author: d8ahazard <d8ahazard@gmail.com>
Date: Wed Jan 4 13:13:46 2023 -0600
Bad Bucket Matching Is Bad
This is betterbuckets! We can't have bad bucket batching up in this bio...logy class. Better bucket bucketing is all about bucketing the buckets better while batching the buckets with the best bucket browsing.
commit e643624aa2adc788422941253cb96453d2689327
Author: d8ahazard <d8ahazard@gmail.com>
Date: Wed Jan 4 12:26:52 2023 -0600
Building Better Bucket Resos
commit 690761f3a3bcdbb7a7ad99cec0f95a61ed0baf78
Author: d8ahazard <d8ahazard@gmail.com>
Date: Wed Jan 4 11:34:33 2023 -0600
This too
commit efb79859101b82e9a2deddd5d47147881e24cb9d
Author: d8ahazard <d8ahazard@gmail.com>
Date: Wed Jan 4 11:34:15 2023 -0600
More class generation work, fixes
commit 4325dfacf7fbb1de7c6eb671ea3f33a9ae5b6065
Author: d8ahazard <d8ahazard@gmail.com>
Date: Tue Jan 3 13:51:01 2023 -0600
Remove old finetuning_dataset
commit e99940ca215660e2eda1e2e4fe54421623197f58
Author: d8ahazard <d8ahazard@gmail.com>
Date: Tue Jan 3 13:30:18 2023 -0600
Cleanup print messages...
commit ff0ea01a965699d90bc8b949da8676ad34f39748
Merge: fe605c2 6d17489
Author: d8ahazard <d8ahazard@gmail.com>
Date: Tue Jan 3 12:56:44 2023 -0600
Merge pull request #675 from Zuxier/FriedChickenBuckets
Rework of stop text encoder and fix to reported total batch size
commit 6d1748935d
Author: Zuxier <120954436+Zuxier@users.noreply.github.com>
Date: Tue Jan 3 18:57:39 2023 +0100
add tenc encoder ratio and total batch size fix
commit fe605c250e
Author: d8ahazard <d8ahazard@gmail.com>
Date: Tue Jan 3 09:47:42 2023 -0600
Cleanup
commit 9dfbebfc4c
Author: d8ahazard <d8ahazard@gmail.com>
Date: Tue Jan 3 09:46:51 2023 -0600
Better Bucketing
Instead of randomly grabbing data out of one bucket, actually grab N instance images and N class images of matching resolution and caption, which should be more aligned with the idea of reg images.
Refactor num_class_images to num_class_images_per - which ensures each instance image has N matching class images. Again, this should better align with the whole point of reg images.
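The per-instance pairing described above can be sketched roughly as follows (the function name, dict keys, and structure here are illustrative assumptions, not the repo's actual code):

```python
from collections import defaultdict
import random

def pair_class_images(instance_images, class_images, n_per_instance):
    """For each instance image, pick up to n class images from the
    same (resolution, caption) bucket."""
    # Index class images by their (resolution, caption) bucket
    buckets = defaultdict(list)
    for img in class_images:
        buckets[(img["resolution"], img["caption"])].append(img)

    pairs = []
    for inst in instance_images:
        key = (inst["resolution"], inst["caption"])
        matches = buckets.get(key, [])
        chosen = random.sample(matches, min(n_per_instance, len(matches)))
        pairs.append((inst, chosen))
    return pairs
```

The point of bucketing this way is that every regularization image actually matches its instance image's aspect ratio and class prompt, rather than being drawn at random from a shared pool.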
Better VRAM management while training as well.
Add specific flag to allow debugging of vram stuff without constantly adding/removing print messages.
Remove unused "max steps" param. Epochs does the same thing with less confusion and math.
Add JS hints for missing elements.
Rearrange gradient checkpointing steps so the setting sits next to batch size.
Fix weight mismatch (again).
Update default params.
Add warnings when saving without loading params first.
Delete vae if not needed.
Wizardy wizard stuff.
Adjust hidden UI elements on params load.
Really, fix step counting to align with epoch stop.
Select individual image prompts in gallery.
Suppress tensorflow message on import.
Remove print messages that break the status bar.
Update state messages when generating training dataset.
Add resolution bucketing from kohya_ss.
Add text encoder training limit.
Remove dumb adam beta params.
Remove printm calls, mem_record.
Superdataset is all but dead.
Lr loss average still looks janky AF.
Add Clip skip UI Param
Remove "scale learning rate" param
Add "sanity prompt" and "sanity seed" options - generate a sample while training with an unrelated prompt to sanity check the overall state of the model.
Fixes for copying original config file when extracting/generating checkpoints.
Status update changes.
Better unique sample generation when selecting sample prompts.
Add cosine annealing with warm restarts lr scheduler.
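The scheduler named above follows the SGDR (cosine annealing with warm restarts) formula; a minimal pure-Python sketch of the resulting learning-rate curve (function name and defaults are illustrative, not the repo's code):

```python
import math

def cosine_warm_restart_lr(step, base_lr, t0, t_mult=1, eta_min=0.0):
    """LR at a given step under cosine annealing with warm restarts:
    the LR decays along a cosine from base_lr to eta_min over a cycle,
    then snaps back to base_lr; each cycle is t_mult times longer."""
    t_i, t_cur = t0, step
    # Walk forward through cycles until we find the one containing `step`
    while t_cur >= t_i:
        t_cur -= t_i
        t_i *= t_mult
    return eta_min + 0.5 * (base_lr - eta_min) * (1 + math.cos(math.pi * t_cur / t_i))
```

In PyTorch the equivalent is `torch.optim.lr_scheduler.CosineAnnealingWarmRestarts(optimizer, T_0, T_mult)`.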
Fix step counts (AGAIN).
Overlay graph for plot showing lr and loss avg.
Remove leftover use_cpu call.
Attempt to show multiple images generated during training in gallery.
Add v2 tokenizer method to superdataset.
Add "set_diffusers_xformers_flag" method from kohya_ss
Adjust dataloader params to not pin memory and use 0 workers.
Revert breaking change in log parser (for now).
Add fancy new graph generation showing LR and loss values.
Add LR and loss values to UI during updates.
Fix UI layout for progress bar and textinfos.
Remove hypernetwork junk from xattention, it's not applicable when training a model.
Add get_scheduler method from unreleased diffusers version to allow for new LR params.
Add wrapper class for training output - need to add for imagic yet.
Remove use CPU option entirely, replace with "use lora" in wizard.
Add grad accumulation steps to wizard.
Add lr cycles and lr power UI params for applicable LR schedulers.
Remove broken min_learning_rate param.
Remove unused "save class text" param.
Update js ui hints.
Bump diffusers version.
Make labels more useful, auto-adjusting as needed.
Add manual "check progress" button to UI, because "gradio".
Start replacing references to native sd-webui methods with calls to a single wrapper class.
Add a minimum learning rate param for polynomial scheduler.
Add model epoch to UI.
Revert "actual_steps" value, because it's much more confusing...
Fix UI popups, regardless of if they work in main.
Add loss, vram usage to UI status text.
Fix tag shuffle, keep "first tag" in place.
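The fix above implies a shuffle that pins the first tag of a comma-separated caption; a minimal sketch of that behavior (function name and delimiter handling are assumptions, not the actual implementation):

```python
import random

def shuffle_tags(caption):
    """Shuffle comma-separated tags but keep the first tag in place,
    so the primary token (e.g. the instance token) always leads."""
    tags = [t.strip() for t in caption.split(",")]
    head, rest = tags[0], tags[1:]
    random.shuffle(rest)
    return ", ".join([head] + rest)
```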
Shuffle sample prompts?
Move DreamState class to db_shared.
Fix indenting, API return values, remove unused prints and messages.
Rename some invalid objects.
Add __future__ imports to try to not break python < 3.9
Remove reference to misnamed SD method that will eventually get fixed and then break everything.
Clean up that potty mouth.
Add additional save settings to config.
Update saving code, not to use the new settings, but just to work better when using batches.
Move "Generate Sample Images" to dreambooth.py so we can reload it on the fly.
Hide progressbar on start of new function.
Progress bar style work/fixes.
Create unique db_state object we can use for stuff.
Add custom method wrappers so we don't depend on main.
Fix gallery, status messages.
Move things to /scripts folder so we can reload them faster.
Moar ui improvements.