Commit Graph

173 Commits (master)

Author SHA1 Message Date
bmaltais 18229b26fc Merge branch 'main' of https://github.com/kohya-ss/sd-scripts into dev2 2023-05-11 14:02:18 -04:00
Kohya S ed5bfda372 Fix controlnet input from bgr to rgb 2023-05-11 21:12:06 +09:00
bmaltais cb74a17f42 Update some module versions 2023-05-08 20:50:54 -04:00
bmaltais bab827e38e Mark bitsandbytes as deprecated 2023-05-03 09:07:59 -04:00
bmaltais 111527b490 Add support for Lion8bit 2023-05-03 08:49:48 -04:00
bmaltais e2b56451d1 Merge branch 'main' of https://github.com/kohya-ss/sd-scripts into dev 2023-05-03 07:06:06 -04:00
ykume 758a1e7f66 Revert unet config, add option to convert script 2023-05-03 16:05:15 +09:00
ykume 1cba447102 fix unet config mismatch when saving diffuser model 2023-05-03 14:06:51 +09:00
bmaltais 856d610ea9 Add --cache_latents_to_disk option to the GUI 2023-05-01 20:10:47 -04:00
bmaltais 536260dc15 update logging 2023-04-28 21:40:34 -04:00
bmaltais 7bd1cb9d08 Merge branch 'main' of https://github.com/kohya-ss/sd-scripts into dev 2023-04-27 09:03:59 -04:00
Kohya S a85fcfe05f fix latent upscale not working if bs>1 2023-04-25 08:10:21 +09:00
bmaltais 6bf994c359 Merge branch 'main' of https://github.com/kohya-ss/sd-scripts into dev 2023-04-24 18:44:36 -04:00
Kohya S 884e6bff5d fix face_crop_aug not working on finetune method, prepare upscaler 2023-04-22 10:41:36 +09:00
bmaltais 61318ba9f0 Update debug style 2023-04-21 20:09:16 -04:00
bmaltais 7850bd7194 Add new debug info on startup of gui 2023-04-21 07:37:55 -04:00
bmaltais 55d6d7a95d Add new Merge LyCORIS models 2023-04-20 21:14:36 -04:00
bmaltais 35afccc37a Update merge lora 2023-04-20 09:56:40 -04:00
bmaltais b2d44473a1 Merge branch 'main' of https://github.com/kohya-ss/sd-scripts into dev 2023-04-17 20:55:28 -04:00
bmaltais 2e07329088 Upgrading to latest gradio release 2023-04-17 20:54:55 -04:00
Kohya S 47d61e2c02 format by black 2023-04-17 22:00:26 +09:00
Kohya S 8f6fc8daa1 add ja comment, restore arg for backward compat 2023-04-17 21:58:55 +09:00
Linaqruf 7f8e05ccad feat: add --save_precision args 2023-04-12 05:43:19 +07:00
bmaltais 2eddd64b90 Merge latest sd-script updates 2023-04-01 07:14:25 -04:00
bmaltais 9f6e0c1c8f Fix issue with LyCORIS version 2023-03-30 07:23:37 -04:00
Bernard Maltais ac5eccbaca Add MacOS support 2023-03-24 22:39:45 -04:00
bmaltais acf7d4785f Add support for custom user gui startup files 2023-03-24 13:26:29 -04:00
bmaltais 1c8d901c3b Update to latest sd-scripts updates 2023-03-21 20:20:57 -04:00
Kohya S 4f92b6266c fix script not starting 2023-03-21 21:29:10 +09:00
Robert Smieja eb66e5ebac Extract parser setup to helper function
- Allows users who `import` the scripts to examine the parser programmatically
2023-03-20 00:06:47 -04:00
bmaltais 9f8c1e9660
Merge pull request #399 from zrma/feature/fix_gui.sh
modify gui.sh to validate requirements and apply args
2023-03-19 20:05:41 -04:00
zrma 6bfdbaf3aa
modify gui.sh to validate requirements and apply args 2023-03-19 23:22:42 +09:00
bmaltais baf009d2b1 Fix basic captioning logic 2023-03-15 19:31:52 -04:00
bmaltais 91e19ca9d9 Fix issue with kohya locon not training the convolution layers 2023-03-12 20:36:58 -04:00
bmaltais 79c2c2debe Add validation that all requirements are met 2023-03-12 10:11:41 -04:00
bmaltais 2deddd5f3c Update to sd-script latest update 2023-03-09 11:06:59 -05:00
bmaltais 819a5718ea Add new lora_resize tool under tools 2023-03-04 09:52:14 -05:00
bmaltais c29f96a1f5 Add extract locon tool 2023-03-04 08:04:49 -05:00
bmaltais 5498539fda Fix typos 2023-03-01 19:20:05 -05:00
bmaltais 1e3055c895 Update tensorboard 2023-03-01 13:14:47 -05:00
bmaltais 60ad22733c Update to latest code version 2023-02-23 19:21:30 -05:00
Kohya S 014fd3d037 support original controlnet 2023-02-20 12:54:44 +09:00
bmaltais f9863e3950 add dadaptation to other trainers 2023-02-16 19:33:46 -05:00
Kohya S cebee02698 Official weights to LoRA 2023-02-13 23:38:38 +09:00
bmaltais 261b6790ee Update tool 2023-02-12 07:02:05 -05:00
bmaltais a49fb9cb8c 2023/02/11 (v20.7.2):
- ``lora_interrogator.py`` is added in ``networks`` folder. See ``python networks\lora_interrogator.py -h`` for usage.
        - For LoRAs whose activation word is unknown, this script compares the Text Encoder output with the LoRA applied to the output without it, to find which tokens are affected by the LoRA. Hopefully you can figure out the activation word. LoRAs trained with captions do not seem to be interrogatable.
        - Batch size can be large (like 64 or 128).
    - ``train_textual_inversion.py`` now supports multiple init words.
    - The following feature is reverted to its previous behavior. Sorry for the confusion:
        > Now the number of items in each batch is limited to the number of actual (non-duplicated) images. Because a given bucket may contain fewer actual images, the batch may contain the same (duplicated) images.
    - Add new tool to sort, group, and average-crop images in a dataset
2023-02-11 11:59:38 -05:00
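The idea behind ``lora_interrogator.py`` described in the note above can be sketched as follows. This is a toy illustration, not the actual script: ``encode_base`` and ``encode_with_lora`` are hypothetical stand-ins for the real Text Encoder calls, and the embeddings are tiny 2-dimensional lists.

```python
# Toy sketch of the interrogation idea: for each candidate token, compare
# the Text Encoder output with and without the LoRA applied; the token
# whose embedding moved the most is likely the activation word.

def embedding_shift(base_vec, lora_vec):
    """Euclidean distance between the two per-token embeddings."""
    return sum((b - l) ** 2 for b, l in zip(base_vec, lora_vec)) ** 0.5

def interrogate(tokens, encode_base, encode_with_lora):
    """Rank tokens by how much the LoRA changed their encoder output."""
    shifts = {
        tok: embedding_shift(encode_base(tok), encode_with_lora(tok))
        for tok in tokens
    }
    return sorted(shifts, key=shifts.get, reverse=True)

# Toy encoders: the LoRA mostly perturbs the embedding of "sks".
base = {"sks": [0.0, 0.0], "dog": [1.0, 0.0], "photo": [0.0, 1.0]}
lora = {"sks": [0.9, 0.9], "dog": [1.0, 0.0], "photo": [0.0, 1.1]}
ranking = interrogate(list(base), base.get, lora.get)
print(ranking[0])  # the most-affected token
```

With the toy data above, "sks" shifts the most and ranks first, which is the output the real script presents as the likely activation word.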
Kohya S e5cc64a563 support multibyte characters in filenames 2023-02-10 22:55:21 +09:00
bmaltais 7bc93821a0 2023/02/09 (v20.7.1)
- Caption dropout is supported in ``train_db.py``, ``fine_tune.py`` and ``train_network.py``. Thanks to forestsource!
        - ``--caption_dropout_rate`` option specifies the dropout rate for captions (0~1.0; 0.1 means a 10% chance of dropout). If dropout occurs, the image is trained with an empty caption. Default is 0 (no dropout).
        - ``--caption_dropout_every_n_epochs`` option specifies how often (in epochs) to drop captions. If ``3`` is specified, images are trained with all captions empty in epochs 3, 6, 9, and so on. Default is None (no dropout).
        - ``--caption_tag_dropout_rate`` option specifies the dropout rate for tags (comma-separated tokens) (0~1.0; 0.1 means a 10% chance of dropout). If dropout occurs, the tag is removed from the caption. If the ``--keep_tokens`` option is set, those tokens (tags) are not dropped. Default is 0 (no dropout).
        - The bulk image downsampling script is added. Documentation is [here](https://github.com/kohya-ss/sd-scripts/blob/main/train_network_README-ja.md#%E7%94%BB%E5%83%8F%E3%83%AA%E3%82%B5%E3%82%A4%E3%82%BA%E3%82%B9%E3%82%AF%E3%83%AA%E3%83%97%E3%83%88) (in Japanese). Thanks to bmaltais!
        - Typo check is added. Thanks to shirayu!
    - Add option to auto-launch the GUI in a browser and set the server port. Use either `gui.ps1 --inbrowser --server_port 3456` or `gui.cmd -inbrowser -server_port 3456`
2023-02-09 19:17:24 -05:00
bmaltais 90c0d55457 2023/02/09 (v20.7.1)
- Caption dropout is supported in ``train_db.py``, ``fine_tune.py`` and ``train_network.py``. Thanks to forestsource!
        - ``--caption_dropout_rate`` option specifies the dropout rate for captions (0~1.0; 0.1 means a 10% chance of dropout). If dropout occurs, the image is trained with an empty caption. Default is 0 (no dropout).
        - ``--caption_dropout_every_n_epochs`` option specifies how often (in epochs) to drop captions. If ``3`` is specified, images are trained with all captions empty in epochs 3, 6, 9, and so on. Default is None (no dropout).
        - ``--caption_tag_dropout_rate`` option specifies the dropout rate for tags (comma-separated tokens) (0~1.0; 0.1 means a 10% chance of dropout). If dropout occurs, the tag is removed from the caption. If the ``--keep_tokens`` option is set, those tokens (tags) are not dropped. Default is 0 (no dropout).
        - The bulk image downsampling script is added. Documentation is [here](https://github.com/kohya-ss/sd-scripts/blob/main/train_network_README-ja.md#%E7%94%BB%E5%83%8F%E3%83%AA%E3%82%B5%E3%82%A4%E3%82%BA%E3%82%B9%E3%82%AF%E3%83%AA%E3%83%97%E3%83%88) (in Japanese). Thanks to bmaltais!
        - Typo check is added. Thanks to shirayu!
    - Add option to auto-launch the GUI in a browser and set the server port. Use either `gui.ps1 --inbrowser --server_port 3456` or `gui.cmd -inbrowser -server_port 3456`
2023-02-09 19:17:17 -05:00
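The three dropout rules in the release notes above can be sketched in a few lines. This is a minimal illustration of the described behavior, not the actual sd-scripts implementation; the function name and signature are made up for this sketch.

```python
import random

# Sketch of the caption dropout rules described above: every-N-epochs
# dropout, whole-caption dropout, and per-tag dropout with the first
# `keep_tokens` tags protected from being dropped.

def apply_caption_dropout(caption, epoch, rng,
                          caption_dropout_rate=0.0,
                          caption_dropout_every_n_epochs=None,
                          caption_tag_dropout_rate=0.0,
                          keep_tokens=0):
    # Every-N-epochs dropout: train with an empty caption in epochs N, 2N, ...
    if caption_dropout_every_n_epochs and epoch % caption_dropout_every_n_epochs == 0:
        return ""
    # Whole-caption dropout with the given probability.
    if rng.random() < caption_dropout_rate:
        return ""
    # Per-tag dropout: each comma-separated tag after the first
    # `keep_tokens` tags survives an independent coin flip.
    tags = [t.strip() for t in caption.split(",")]
    kept = tags[:keep_tokens] + [
        t for t in tags[keep_tokens:] if rng.random() >= caption_tag_dropout_rate
    ]
    return ", ".join(kept)

# Epoch 3 with --caption_dropout_every_n_epochs=3: the caption is emptied.
print(repr(apply_caption_dropout("1girl, smile, hat", epoch=3,
                                 rng=random.Random(0),
                                 caption_dropout_every_n_epochs=3)))  # ''
```

Note how ``keep_tokens`` interacts with tag dropout: even at a tag dropout rate of 1.0, the first ``keep_tokens`` tags always survive, matching the documented behavior of the ``--keep_tokens`` option.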
Kohya S f0c8c95871 add associated files copying 2023-02-09 22:12:41 +09:00
Kohya S f7b5abb595 add resizing script 2023-02-09 21:30:27 +09:00
bmaltais 09d3a72cd8 Adding support for caption dropout 2023-02-07 20:58:35 -05:00
bmaltais 8d559ded18 * 2023/02/06 (v20.7.0)
- ``--bucket_reso_steps`` and ``--bucket_no_upscale`` options are added to training scripts (fine tuning, DreamBooth, LoRA and Textual Inversion) and ``prepare_buckets_latents.py``.
    - ``--bucket_reso_steps`` takes the steps for buckets in aspect ratio bucketing. Default is 64, same as before.
        - Any value greater than or equal to 1 can be specified; 64 is highly recommended, and failing that, a value divisible by 8.
        - If a value less than 64 is specified, padding will occur inside the U-Net; the effect on results is unknown.
        - If you specify a value that is not divisible by 8, it will be truncated to a multiple of 8 inside the VAE, because the latent size is 1/8 of the image size.
    - If ``--bucket_no_upscale`` option is specified, images smaller than the bucket size will be processed without upscaling.
        - Internally, a bucket smaller than the image size is created (for example, if the image is 300x300 and ``bucket_reso_steps=64``, the bucket is 256x256). The image will be trimmed.
        - Implementation of [#130](https://github.com/kohya-ss/sd-scripts/issues/130).
        - Images with an area larger than the maximum size specified by ``--resolution`` are downsampled to the max bucket size.
    - Now the number of items in each batch is limited to the number of actual (non-duplicated) images. Because a given bucket may contain fewer actual images, the batch may contain the same (duplicated) images.
    - ``--random_crop`` now also works with buckets enabled.
        - Instead of always cropping the center of the image, the crop is shifted left, right, up, and down, so the edges of the image are also trained on.
        - Implementation of discussion [#34](https://github.com/kohya-ss/sd-scripts/discussions/34).
2023-02-06 11:04:07 -05:00
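The ``--bucket_no_upscale`` sizing rule in the notes above can be sketched as a simple floor-to-multiple computation. This is a simplified illustration with a made-up function name; the real implementation also honors ``--resolution`` limits and aspect-ratio bucketing.

```python
# Sketch of the --bucket_no_upscale rule described above: when an image is
# smaller than the regular buckets, a bucket is created by rounding each
# side DOWN to a multiple of bucket_reso_steps, so the image is trimmed
# instead of upscaled.

def no_upscale_bucket(width, height, bucket_reso_steps=64):
    bw = (width // bucket_reso_steps) * bucket_reso_steps
    bh = (height // bucket_reso_steps) * bucket_reso_steps
    return bw, bh

# The example from the notes: a 300x300 image with bucket_reso_steps=64
print(no_upscale_bucket(300, 300))  # (256, 256)
```

This reproduces the 300x300 → 256x256 example given in the release note; the 44 pixels lost on each axis are what the note refers to as the image being "trimmed".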
bmaltais cbfc311687 Integrate new bucket parameters in GUI 2023-02-05 20:07:00 -05:00
bmaltais 20e62af1a6 Update to latest kohya_ss sd-script code 2023-02-03 14:40:03 -05:00
bmaltais 2ec7432440 Fix issue 81:
https://github.com/bmaltais/kohya_ss/issues/81
2023-01-29 11:17:30 -05:00
bmaltais bc8a4757f8 Sync with kohya 2023/01/29 update 2023-01-29 11:10:06 -05:00
Kohya S 6bbb4d426e Fix unet config in Diffusers (sample_size=64) 2023-01-29 20:43:58 +09:00
bmaltais 6aed2bb402 Add support for new arguments:
- max_train_epochs
- max_data_loader_n_workers
Move some of the code to the common GUI library.
2023-01-15 11:05:22 -05:00
bmaltais b8100b1a0a - Add support for `--clip_skip` option
- Add missing `detect_face_rotate.py` to tools folder
- Add `gui.cmd` for easy start of GUI
2023-01-05 19:16:13 -05:00
Yuta Hayashibe 0f20453997 Fix typos 2022-12-31 14:39:36 +09:00
bmaltais 2cdf4cf741 - Fix for conversion tool issue when the source was an sd1.x diffuser model
- Other minor code and GUI fix
2022-12-23 07:56:35 -05:00
Kohya S 3800e145bd Fix conversion error for v1 Diffusers->ckpt. #10 2022-12-23 12:32:11 +09:00
bmaltais 706dfe157f
Merge dreambooth and finetuning in one repo to align with kohya_ss new repo (#10)
* Merge both dreambooth and finetune back in one repo
2022-12-20 09:15:17 -05:00
bmaltais c90aa2cc61 - Fix file/folder opening behind the browser window
- Add WD14 and BLIP captioning to utilities
- Improve overall GUI layout
2022-12-19 09:22:52 -05:00
bmaltais c5cca931ab Proposed file structure rework and required file changes 2022-12-18 15:24:46 -05:00
bmaltais 0ca93a7aa7 v18.1: Model conversion utility 2022-12-18 13:11:10 -05:00
bmaltais 5f1a465a45 Update to v17
New GUI
2022-12-13 13:49:14 -05:00
bmaltais 449a35368f Update model conversion util 2022-12-05 11:13:41 -05:00
bmaltais e8db30b9d1 Publish v15 2022-12-05 10:49:02 -05:00
bmaltais 37133218bf Adding conversion tool doc 2022-12-03 06:28:27 -05:00
bmaltais 231030b3b2 Add tool to convert diffusers 2.0 model to ckpt 2022-12-03 06:23:25 -05:00
bmaltais 621dabcadf Add prune tool 2022-12-01 19:06:33 -05:00