Commit Graph

578 Commits (0d4e6c4ece67e8375062cf39f3a8a1e83317382f)

Author SHA1 Message Date
bmaltais 90c0d55457 2023/02/09 (v20.7.1)
- Caption dropout is supported in ``train_db.py``, ``fine_tune.py`` and ``train_network.py``. Thanks to forestsource!
        - ``--caption_dropout_rate`` option specifies the dropout rate for captions (0~1.0, 0.1 means 10% chance for dropout). If dropout occurs, the image is trained with the empty caption. Default is 0 (no dropout).
        - ``--caption_dropout_every_n_epochs`` option specifies the interval (in epochs) at which captions are dropped. If ``3`` is specified, in epochs 3, 6, 9, ..., images are trained with all captions empty. Default is None (no dropout).
        - ``--caption_tag_dropout_rate`` option specifies the dropout rate for tags (comma-separated tokens) (0~1.0, 0.1 means 10% chance for dropout). If dropout occurs, the tag is removed from the caption. If the ``--keep_tokens`` option is set, those tokens (tags) are not dropped. Default is 0 (no dropout).
        - A bulk image downsampling script is added. Documentation is [here](https://github.com/kohya-ss/sd-scripts/blob/main/train_network_README-ja.md#%E7%94%BB%E5%83%8F%E3%83%AA%E3%82%B5%E3%82%A4%E3%82%BA%E3%82%B9%E3%82%AF%E3%83%AA%E3%83%97%E3%83%88) (in Japanese). Thanks to bmaltais!
        - Typo check is added. Thanks to shirayu!
    - Add option to autolaunch the GUI in a browser and set the server_port. Use either `gui.ps1 --inbrowser --server_port 3456` or `gui.cmd -inbrowser -server_port 3456`.
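The caption and tag dropout options above can be sketched roughly as follows. This is a minimal illustration of the documented semantics only, assuming comma-separated tags; `apply_caption_dropout` is a hypothetical helper, not the actual sd-scripts implementation:

```python
import random

def apply_caption_dropout(caption, keep_tokens=0,
                          caption_dropout_rate=0.0,
                          caption_tag_dropout_rate=0.0,
                          rng=random.random):
    """Hypothetical sketch of caption/tag dropout as described above."""
    # Whole-caption dropout: train this image with an empty caption.
    if rng() < caption_dropout_rate:
        return ""
    # Per-tag dropout: tags are comma-separated tokens; the first
    # `keep_tokens` tags are never dropped (mirrors --keep_tokens).
    tags = [t.strip() for t in caption.split(",")]
    kept = tags[:keep_tokens]
    for tag in tags[keep_tokens:]:
        if rng() >= caption_tag_dropout_rate:
            kept.append(tag)
    return ", ".join(kept)
```

With both rates at 0 the caption passes through unchanged; a `caption_dropout_rate` of 1.0 always yields an empty caption, and a `caption_tag_dropout_rate` of 1.0 keeps only the protected leading tags.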
2023-02-09 19:17:17 -05:00
Kohya S 6b790bace6
Update README.md 2023-02-09 23:14:41 +09:00
jonathanzhang53 6c4348233f README documentation update 2023-02-07 22:32:54 -05:00
bmaltais 8d559ded18 * 2023/02/06 (v20.7.0)
- ``--bucket_reso_steps`` and ``--bucket_no_upscale`` options are added to training scripts (fine tuning, DreamBooth, LoRA and Textual Inversion) and ``prepare_buckets_latents.py``.
    - ``--bucket_reso_steps`` takes the steps for buckets in aspect ratio bucketing. Default is 64, same as before.
        - Any value greater than or equal to 1 can be specified; 64 is highly recommended, and at minimum a value divisible by 8 is recommended.
        - If a value less than 64 is specified, padding will occur within the U-Net. The result is unknown.
        - If you specify a value that is not divisible by 8, it will be truncated to a multiple of 8 inside the VAE, because the latent size is 1/8 of the image size.
    - If ``--bucket_no_upscale`` option is specified, images smaller than the bucket size will be processed without upscaling.
        - Internally, a bucket smaller than the image size is created (for example, if the image is 300x300 and ``bucket_reso_steps=64``, the bucket is 256x256). The image will be trimmed.
        - Implementation of [#130](https://github.com/kohya-ss/sd-scripts/issues/130).
        - Images with an area larger than the maximum size specified by ``--resolution`` are downsampled to the max bucket size.
    - The number of images in each batch is now limited to the number of actual (non-duplicated) images. Previously, because a bucket could contain fewer actual images than the batch size, a batch could contain the same image more than once.
    - ``--random_crop`` now also works with buckets enabled.
        - Instead of always cropping the center of the image, the crop is shifted left, right, up, and down, so training is expected to cover the edges of the image.
        - Implementation of discussion [#34](https://github.com/kohya-ss/sd-scripts/discussions/34).
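The ``--bucket_no_upscale`` bucket selection described above can be illustrated with a short sketch. This is a hypothetical reconstruction of the documented behavior (round each side down to a multiple of ``bucket_reso_steps``, then trim), not the actual ``sd-scripts`` code; the floor at ``reso_steps`` for very small images is an assumption:

```python
def bucket_for(width, height, reso_steps=64):
    """Round each side down to a multiple of reso_steps (flooring at
    reso_steps itself); the image is then trimmed to fit the bucket."""
    bw = max(reso_steps, (width // reso_steps) * reso_steps)
    bh = max(reso_steps, (height // reso_steps) * reso_steps)
    return bw, bh

# Example from the notes above: a 300x300 image with reso_steps=64
# falls into a 256x256 bucket and is trimmed rather than upscaled.
print(bucket_for(300, 300))  # → (256, 256)
```

Sides already divisible by the step size are unchanged, e.g. a 512x768 image keeps a 512x768 bucket.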
2023-02-06 11:04:07 -05:00
Kohya S d591891048
Update README.md 2023-02-06 21:30:38 +09:00
Kohya S 7511674333 update readme 2023-02-06 21:14:16 +09:00
bmaltais 2486af9903 Update to latest dev code of kohya_s. WIP 2023-02-05 14:16:53 -05:00
Kohya S 6a79ac6a03
Update README.md 2023-02-05 21:59:55 +09:00
bmaltais 2626214f8a Add support for LoRA resizing 2023-02-04 11:55:06 -05:00
Kohya S fb230aff1b
Update README.md 2023-02-04 20:52:24 +09:00
bmaltais 20e62af1a6 Update to latest kohya_ss sd-script code 2023-02-03 14:40:03 -05:00
Kohya S b18a09edb5
Update README.md 2023-02-03 22:09:55 +09:00
Kohya S 26efa88908
Update README.md 2023-02-03 22:02:49 +09:00
bmaltais c8f4c9d6e8 Add support for lr_scheduler_num_cycles, lr_scheduler_power 2023-01-30 08:26:15 -05:00
bmaltais 2ec7432440 Fix issue 81:
https://github.com/bmaltais/kohya_ss/issues/81
2023-01-29 11:17:30 -05:00
bmaltais d45a7abb46 Add reference to Linux docker port 2023-01-29 11:12:05 -05:00
bmaltais bc8a4757f8 Sync with kohya 2023/01/29 update 2023-01-29 11:10:06 -05:00
Kohya S 4cabb37977
Update README.md 2023-01-29 21:50:17 +09:00
Kohya S 86eba1d2cf
Update README.md 2023-01-29 21:23:05 +09:00
bmaltais a4957cfea7 Adding LoRA tutorial 2023-01-27 19:46:13 -05:00
bmaltais 202923b3ce Add support for --keep_token option 2023-01-27 07:33:44 -05:00
bmaltais bf371b49bf Fix issue 71 2023-01-27 07:04:35 -05:00
bmaltais 03bd2e9b01 Add TI training support 2023-01-26 16:22:58 -05:00
Yuta Hayashibe 481823796e Fix typos 2023-01-26 22:12:29 +09:00
Kohya S 505768ea86 Update documents for TI 2023-01-26 22:06:29 +09:00
Kohya S c425afb08b
Update README.md 2023-01-25 14:00:42 +09:00
Kohya S 46029b2707
Update README.md 2023-01-24 20:57:33 +09:00
bmaltais 511361c80b - Add new tool to verify LoRA weights produced by the trainer. Can be found under "Dreambooth LoRA/Tools/Verify LoRA" 2023-01-22 11:40:14 -05:00
bmaltais 2ca17f69dd v20.4.0:
Add support for `network_alpha` under the Training tab and support for `--training_comment` under the Folders tab.
2023-01-22 10:18:00 -05:00
Kohya S 4ba1667978
Update README.md 2023-01-22 22:19:07 +09:00
Kohya S 0ca064287e
Update README.md 2023-01-22 22:03:15 +09:00
Kohya S a3171714ce
Update README.md 2023-01-22 21:57:59 +09:00
Kohya S 4eb356f165 Update readme 2023-01-22 21:33:58 +09:00
Kohya S d3bc5a1413
Update README.md 2023-01-22 10:55:57 +09:00
bmaltais 31a1c8a71a Merge kohya Jan 19 updates 2023-01-19 15:47:43 -05:00
Kohya S cae42728ab
Update README.md 2023-01-19 22:21:11 +09:00
bmaltais cb953d716f Update 2023-01-17 17:54:38 -05:00
Kohya S fda66db0d8
Update README.md
Add about gradient checkpointing
2023-01-17 22:05:39 +09:00
bmaltais 7886dfe9c7 Update gui start instructions 2023-01-16 13:39:10 -05:00
bmaltais 95b9ab7c4d Update README 2023-01-16 13:33:17 -05:00
bmaltais bfb0d18d4c Update install instructions 2023-01-16 10:28:20 -05:00
bmaltais 6aed2bb402 Add support for new arguments:
- max_train_epochs
- max_data_loader_n_workers
Move some of the code to the common gui library.
2023-01-15 11:05:22 -05:00
Yuta Hayashibe 8544e219b0 Fix typos 2023-01-15 17:29:42 +09:00
Kohya S f2f2ce0d7d
Update README.md 2023-01-15 13:46:27 +09:00
Kohya S b8734405c6
Update README.md
Add about release
2023-01-15 12:52:31 +09:00
Kohya S 60e5793d5e
Update README.md 2023-01-14 21:53:09 +09:00
Kohya S 98b0cf0b3d
Update README.md 2023-01-14 21:30:11 +09:00
Kohya S 88515c2985
Update README.md 2023-01-14 21:29:49 +09:00
Kohya S bf691aef69
Update README.md
Add updates.
2023-01-12 23:21:21 +09:00
bmaltais 43116feda8 Add support for max token 2023-01-10 09:38:32 -05:00
Kohya S f981dfd38a Add credits 2023-01-10 17:43:35 +09:00
bmaltais 42a3646d4a Update readme 2023-01-09 17:59:11 -05:00
bmaltais dc5afbb057 Move functions to common_gui
Add model name support
2023-01-09 11:48:57 -05:00
bmaltais 442eb7a292 Merge latest kohya code release into GUI repo 2023-01-09 07:47:07 -05:00
Kohya S c4bc435bc4
Update README 2023-01-09 15:00:20 +09:00
Kohya S 223640e1ae Add updates. 2023-01-09 14:49:56 +09:00
bmaltais a4262c0a66 - Add vae support to dreambooth GUI
- Add gradient_checkpointing, gradient_accumulation_steps, mem_eff_attn, shuffle_caption to finetune GUI
- Add gradient_accumulation_steps, mem_eff_attn to dreambooth lora gui
2023-01-08 20:55:41 -05:00
bmaltais f1d53ae3f9 Add resume training and save_state option to finetune UI 2023-01-08 19:31:44 -05:00
bmaltais 115ed35187 Emergency fix 2023-01-06 23:19:49 -05:00
Kohya S cbfe8126d6 Update readme for error: fp16 ... requires a GPU 2023-01-07 12:29:03 +09:00
bmaltais aa0e39e14e Update readme 2023-01-06 18:38:24 -05:00
bmaltais 8ec1edbadc Update README 2023-01-06 18:33:07 -05:00
bmaltais 34f7cd8e57 Add new Utility to Extract a LoRA from a finetuned model 2023-01-06 18:25:55 -05:00
bmaltais c20a10d7fd Emergency fix for dreambooth_ui no longer working, sorry
- Add LoRA network merge to the GUI. Run `pip install -U -r requirements.txt` after pulling this new release.
2023-01-06 07:13:12 -05:00
bmaltais b8100b1a0a - Add support for `--clip_skip` option
- Add missing `detect_face_rotate.py` to tools folder
- Add `gui.cmd` for easy start of GUI
2023-01-05 19:16:13 -05:00
Kohya S d62725b644
Update README.md
update link to dreambooth doc
2023-01-05 21:35:47 +09:00
bmaltais 9d3c402973 - Finetune, add xformers, 8bit adam, min bucket, max bucket, batch size and flip augmentation support for dataset preparation
- Finetune, add "Dataset preparation" tab to group task specific options
2023-01-02 13:07:17 -05:00
bmaltais 1d460a09fd add support for color and flip augmentation to "Dreambooth LoRA" 2023-01-01 22:43:44 -05:00
bmaltais af46ce4c47 Update LoRA GUI
Various improvements
2023-01-01 14:14:58 -05:00
bmaltais 0f75b7c2db Update readme 2022-12-30 20:50:01 -05:00
Kohya S 7e51bd837e
Update README.md
Add link to training LoRA documentation.
2022-12-26 12:24:18 +09:00
bmaltais 2cdf4cf741 - Fix for conversion tool issue when the source was an sd1.x diffuser model
- Other minor code and GUI fix
2022-12-23 07:56:35 -05:00
bmaltais fd10512bf4 Add revision info 2022-12-22 13:19:28 -05:00
bmaltais 6a7e27e100 fix issue with dataset balancing when the number of detected images in the folder is 0 2022-12-21 11:02:49 -05:00
bmaltais aa5d13f9a7 Add authentication support 2022-12-21 09:05:06 -05:00
Kohya S 5f6eb13df9 Update README. 2022-12-21 22:15:16 +09:00
bmaltais 600d78ae08 Merge all requirements into one 2022-12-20 10:17:22 -05:00
bmaltais 1bc5089db8 Create model and log folders when running the dreambooth folder creation utility 2022-12-20 10:07:22 -05:00
bmaltais 706dfe157f
Merge dreambooth and finetuning in one repo to align with kohya_ss new repo (#10)
* Merge both dreambooth and finetune back in one repo
2022-12-20 09:15:17 -05:00
bmaltais 69558b5951 Update readme 2022-12-19 21:51:52 -05:00
bmaltais 6987f51b0a Fix stop encoder training issue 2022-12-19 10:39:04 -05:00
bmaltais c90aa2cc61 - Fix file/folder opening behind the browser window
- Add WD14 and BLIP captioning to utilities
- Improve overall GUI layout
2022-12-19 09:22:52 -05:00
Kohya S 20055752bd Add license information. 2022-12-19 21:59:27 +09:00
bmaltais c5cca931ab Proposed file structure rework and required file changes 2022-12-18 15:24:46 -05:00
bmaltais 0ca93a7aa7 v18.1: Model conversion utility 2022-12-18 13:11:10 -05:00
Kohya S bedea62ff0 Fix readme. 2022-12-18 15:03:29 +09:00
Kohya S 7a04196e66 Add scripts. 2022-12-18 14:55:34 +09:00
Kohya S f0dfe6bf31 add readme. 2022-12-18 14:09:50 +09:00
bmaltais f459c32a3e v18: Save model as option added 2022-12-17 20:36:31 -05:00
bmaltais fc22813b8f Fix README 2022-12-17 16:24:23 -05:00
bmaltais e1d66e47f4
v17.2 (#8)
* Update youtube video

* Dataset balancing

* Fix typo
2022-12-17 16:22:34 -05:00
bmaltais 88561720e0 Update README for v17.1 2022-12-17 11:52:31 -05:00
bmaltais 969eea519f Remove redundant code 2022-12-15 07:48:29 -05:00
bmaltais 2b5c7312f1 Add improvement for 4090 users 2022-12-14 15:19:32 -05:00
bmaltais dba1cde7c4 Update README 2022-12-14 15:16:22 -05:00
bmaltais 38699cf910 Update Readme 2022-12-13 13:50:39 -05:00
bmaltais 5f1a465a45 Update to v17
New GUI
2022-12-13 13:49:14 -05:00
bmaltais e8db30b9d1 Publish v15 2022-12-05 10:49:02 -05:00
bmaltais 30b4be5680
Update README.md 2022-12-04 21:22:57 -05:00
bmaltais d037c1f429 v13
- Fix training the text encoder for a specified number of steps (`--stop_text_encoder_training=<step #>`): both the U-Net and text encoder training stopped completely at the specified step instead of continuing without text encoder training.
2022-11-30 07:31:52 -05:00
bmaltais 188edd34af v12 release 2022-11-29 12:47:48 -05:00
bmaltais 5a68cd02e7 Update install instructions 2022-11-27 21:07:57 -05:00
bmaltais b40f909708 Quick readme update 2022-11-27 20:49:04 -05:00
bmaltais 83fe8f59b9 Update asd dog example 2022-11-27 20:47:59 -05:00
bmaltais 3e3fe50ab4 Add upgrade instructions 2022-11-27 20:42:39 -05:00
bmaltais e84f3ccd01 Updated Readme with SD2.0 examples 2022-11-27 20:37:18 -05:00
bmaltais f13f641841 Update upgrade documentation for v11 2022-11-27 11:29:37 -05:00
bmaltais b3d8682a15 Fix readme issue 2022-11-27 10:09:36 -05:00
bmaltais 601f81e554 Adding options list 2022-11-27 10:04:29 -05:00
bmaltais c0b9947fe5 V11 release 2022-11-27 09:57:07 -05:00
Bernard Maltais 2629617de7 Update to v10 2022-11-21 07:50:04 -05:00
Bernard Maltais 0e8b993def Update train_db_fixed to v9 2022-11-19 08:49:42 -05:00
Bernard Maltais 16564f832e Update diffuser_fine_tuning version 2022-11-14 09:48:09 -05:00
Bernard Maltais 202a416251 Update README
Add example powershell code
2022-11-13 11:28:08 -05:00
Bernard Maltais 36b06d41bf Add v8 of train_db_fixed.py
Add diffusers_fine_tuning
2022-11-09 20:48:27 -05:00
Bernard Maltais 23a5b7f946 Adding new v7 2022-11-07 18:40:34 -05:00
Bernard Maltais 4eae58fd0e Update readme with finetuning 2022-11-05 16:56:34 -04:00
Bernard Maltais e0fc105f7a Update README to help with Powershell rights issue. 2022-11-01 07:46:55 -04:00
Bernard Maltais 33b1c00daf Fix typo 2022-10-30 14:06:20 -04:00
Bernard Maltais d5290e97d1 Update instructions 2022-10-30 14:01:12 -04:00
Bernard Maltais 76edd66cf7 Update documentation 2022-10-30 14:00:04 -04:00
Bernard Maltais b0825446c9 Update doc 2022-10-30 13:51:28 -04:00
Bernard Maltais 977ceec90e Update doc 2022-10-30 13:39:00 -04:00
Bernard Maltais 80e0e7e1a5 Add dependencies 2022-10-30 13:37:42 -04:00
Bernard Maltais 1b3ad40471 Update readme 2022-10-30 11:40:55 -04:00
Bernard Maltais a6c02d8edc Update readme 2022-10-30 11:19:24 -04:00
Bernard Maltais 22125faa76 Update README 2022-10-30 11:18:20 -04:00
Bernard Maltais 9535d1248c 1st commit 2022-10-30 11:15:09 -04:00