Kohya's GUI
This repository primarily provides a Windows-focused Gradio GUI for Kohya's Stable Diffusion trainers, but support for Linux is also provided through community contributions. macOS support is limited at the moment.
The GUI allows you to set the training parameters and generate and run the required CLI commands to train the model.
🦒 Colab
This Colab notebook was not created or maintained by me; however, it appears to function effectively. The source can be found at: https://github.com/camenduru/kohya_ss-colab.
I would like to express my gratitude to camenduru for their valuable contribution. If you encounter any issues with the Colab notebook, please report them on their repository.
| Colab | Info |
|---|---|
| kohya_ss_gui_colab | |
Installation
Windows
Windows Pre-requirements
To install the necessary dependencies on a Windows system, follow these steps:
- Install Python 3.10. During the installation process, ensure that you select the option to add Python to the 'PATH' environment variable.
- Install Git.
- Install the Visual Studio 2015, 2017, 2019, and 2022 redistributable.
Setup
To set up the project, follow these steps:
- Open a terminal and navigate to the desired installation directory.
- Clone the repository by running the following command:

  ```
  git clone https://github.com/bmaltais/kohya_ss.git
  ```

- Change into the `kohya_ss` directory:

  ```
  cd kohya_ss
  ```

- Run the setup script by executing the following command:

  ```
  .\setup.bat
  ```

  During the accelerate config step, use the default values as proposed during the configuration unless you know your hardware demands otherwise. The amount of VRAM on your GPU does not have an impact on the values used.
Optional: CUDNN 8.6
The following steps are optional but can improve the learning speed for owners of NVIDIA 30X0/40X0 GPUs. These steps enable larger training batch sizes and faster training speeds.
Please note that the CUDNN 8.6 DLLs needed for this process cannot be hosted on GitHub due to file size limitations. You can download them here to boost sample generation speed (almost 50% on a 4090 GPU). After downloading the ZIP file, follow the installation steps below:
- Unzip the downloaded file and place the `cudnn_windows` folder in the root directory of the `kohya_ss` repository.
- Run `.\setup.bat` and select the option to install cuDNN.
Linux and macOS
Linux Pre-requirements
To install the necessary dependencies on a Linux system, ensure that you fulfill the following requirements:
- Ensure that `venv` support is pre-installed. You can install it on Ubuntu 22.04 using the command:

  ```
  apt install python3.10-venv
  ```

- Install the cuDNN drivers by following the instructions provided in this link.
- Make sure you have Python version 3.10.6 or higher (but lower than 3.11.0) installed on your system.
- If you are using WSL2, set the `LD_LIBRARY_PATH` environment variable by executing the following command:

  ```
  export LD_LIBRARY_PATH=/usr/lib/wsl/lib/
  ```
Setup
To set up the project on Linux or macOS, perform the following steps:
- Open a terminal and navigate to the desired installation directory.
- Clone the repository by running the following command:

  ```
  git clone https://github.com/bmaltais/kohya_ss.git
  ```

- Change into the `kohya_ss` directory:

  ```
  cd kohya_ss
  ```

- If you encounter permission issues, make the `setup.sh` script executable by running the following command:

  ```
  chmod +x ./setup.sh
  ```

- Run the setup script by executing the following command:

  ```
  ./setup.sh
  ```

  Note: If you need additional options or information about the runpod environment, you can use `setup.sh -h` or `setup.sh --help` to display the help message.
Install Location
The default installation location on Linux is the directory where the script is located. If a previous installation is detected in that location, the setup will proceed there. Otherwise, the installation will fall back to /opt/kohya_ss. If /opt is not writable, the fallback location will be $HOME/kohya_ss. Finally, if none of the previous options are viable, the installation will be performed in the current directory.
For macOS and other non-Linux systems, the installation process will attempt to detect the previous installation directory based on where the script is run. If a previous installation is not found, the default location will be $HOME/kohya_ss. You can override this behavior by specifying a custom installation directory using the -d or --dir option when running the setup script.
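The fallback order described above can be sketched as follows. This is a simplified illustration, not the actual `setup.sh` logic; the `has_previous` and `writable` callables are hypothetical stand-ins for the script's filesystem checks:

```python
import os

def resolve_install_dir(script_dir,
                        has_previous=lambda path: False,
                        writable=lambda path: True):
    """Illustrate the install-location fallback order described above:
    script dir -> /opt/kohya_ss -> $HOME/kohya_ss -> current directory."""
    if has_previous(script_dir):
        return script_dir                      # reuse a detected installation
    if writable("/opt"):
        return "/opt/kohya_ss"                 # system-wide fallback
    home = os.path.expanduser("~")
    if writable(home):
        return os.path.join(home, "kohya_ss")  # per-user fallback
    return os.getcwd()                         # last resort: current directory
```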
If you choose to use the interactive mode, the default values for the accelerate configuration screen will be "This machine," "None," and "No" for the remaining questions. These default answers are the same as the Windows installation.
Runpod
Manual installation
To install the necessary components for Runpod and run kohya_ss, follow these steps:
- Select the Runpod pytorch 2.0.1 template. This is important; other templates may not work.
- SSH into the Runpod.
- Clone the repository by running the following command:

  ```
  cd /workspace
  git clone https://github.com/bmaltais/kohya_ss.git
  ```

- Run the setup script:

  ```
  cd kohya_ss
  ./setup-runpod.sh
  ```

- Run the GUI with:

  ```
  ./gui.sh --share --headless
  ```

  or with this if you expose 7860 directly via the Runpod configuration:

  ```
  ./gui.sh --listen=0.0.0.0 --headless
  ```

- Connect to the public URL displayed after the installation process is completed.
Pre-built Runpod template
To run from a pre-built Runpod template you can:
- Open the Runpod template by clicking on https://runpod.io/gsc?template=ya6013lj5a&ref=w18gds2n
- Deploy the template on the desired host.
- Once deployed, connect to the Runpod on HTTP 3010 to access the kohya_ss GUI. You can also connect to auto1111 on HTTP 3000.
Docker
Local docker build
If you prefer to use Docker, follow the instructions below:
- Ensure that you have Git and Docker installed on your Windows or Linux system.
- Open your OS shell (Command Prompt or Terminal) and run the following commands:

  ```
  git clone https://github.com/bmaltais/kohya_ss.git
  cd kohya_ss
  docker compose create
  docker compose build
  docker compose run --service-ports kohya-ss-gui
  ```

  Note: The initial run may take up to 20 minutes to complete.

Please be aware of the following limitations when using Docker:

- All training data must be placed in the `dataset` subdirectory, as the Docker container cannot access files from other directories.
- The file picker feature is not functional. You need to manually set the folder path and config file path.
- Dialogs may not work as expected, and it is recommended to use unique file names to avoid conflicts.
- There is no built-in auto-update support. To update the system, you must run update scripts outside of Docker and rebuild using `docker compose build`.

If you are running Linux, an alternative Docker container port with fewer limitations is available here.
ashleykleynhans runpod docker builds
You may want to use the following Dockerfile repos to build the images:
- Standalone Kohya_ss template: https://github.com/ashleykleynhans/kohya-docker
- Auto1111 + Kohya_ss GUI template: https://github.com/ashleykleynhans/stable-diffusion-docker
Upgrading
To upgrade your installation to a new version, follow the instructions below.
Windows Upgrade
If a new release becomes available, you can upgrade your repository by running the following commands from the root directory of the project:
- Pull the latest changes from the repository:

  ```
  git pull
  ```

- Run the setup script:

  ```
  .\setup.bat
  ```
Linux and macOS Upgrade
To upgrade your installation on Linux or macOS, follow these steps:
- Open a terminal and navigate to the root directory of the project.
- Pull the latest changes from the repository:

  ```
  git pull
  ```

- Refresh and update everything:

  ```
  ./setup.sh
  ```
Starting GUI Service
To launch the GUI service, you can use the provided scripts or run the kohya_gui.py script directly. Use the command line arguments listed below to configure the underlying service.
- `--listen`: Specify the IP address to listen on for connections to Gradio.
- `--username`: Set a username for authentication.
- `--password`: Set a password for authentication.
- `--server_port`: Define the port to run the server listener on.
- `--inbrowser`: Open the Gradio UI in a web browser.
- `--share`: Share the Gradio UI.
- `--language`: Set a custom language.
Launching the GUI on Windows
On Windows, you can use either the gui.ps1 or gui.bat script located in the root directory. Choose the script that suits your preference and run it in a terminal, providing the desired command line arguments. Here's an example:
```
gui.ps1 --listen 127.0.0.1 --server_port 7860 --inbrowser --share
```

or

```
gui.bat --listen 127.0.0.1 --server_port 7860 --inbrowser --share
```
Launching the GUI on Linux and macOS
To launch the GUI on Linux or macOS, run the gui.sh script located in the root directory. Provide the desired command line arguments as follows:
```
gui.sh --listen 127.0.0.1 --server_port 7860 --inbrowser --share
```
Training Stable Cascade Stage C
This is an experimental feature. There may be bugs.
Usage
Training is run with `stable_cascade_train_stage_c.py`.
The main options are the same as `sdxl_train.py`. The following options have been added:
- `--effnet_checkpoint_path`: Specifies the path to the EfficientNetEncoder weights.
- `--stage_c_checkpoint_path`: Specifies the path to the Stage C weights.
- `--text_model_checkpoint_path`: Specifies the path to the Text Encoder weights. If omitted, the model from Hugging Face will be used.
- `--save_text_model`: Saves the model downloaded from Hugging Face to `--text_model_checkpoint_path`.
- `--previewer_checkpoint_path`: Specifies the path to the Previewer weights. Used to generate sample images during training.
- `--adaptive_loss_weight`: Uses Adaptive Loss Weight. If omitted, P2LossWeight is used. The official settings use Adaptive Loss Weight.
The learning rate is set to 1e-4 in the official settings.
The first time, specify `--text_model_checkpoint_path` and `--save_text_model` to save the Text Encoder weights. On subsequent runs, specify `--text_model_checkpoint_path` to load the saved weights.
Sample image generation during training is done with the Previewer, a simple decoder that converts EfficientNetEncoder latents to images.
Some of the options for SDXL are simply ignored or cause an error (especially noise-related options such as `--noise_offset`). `--vae_batch_size` and `--no_half_vae` are applied directly to the EfficientNetEncoder (when bf16 is specified for mixed precision, `--no_half_vae` is not necessary).
Options for latents and Text Encoder output caches can be used as is, but since the EfficientNetEncoder is much lighter than the VAE, you may not need to use the cache unless memory is particularly tight.
`--gradient_checkpointing`, `--full_bf16`, and `--full_fp16` (untested) can be used as is to reduce memory consumption.
A scale of about 4 is suitable for sample image generation.
Since the official settings use bf16 for training, training with fp16 may be unstable.
The code for training the Text Encoder is also written, but it is untested.
Command line sample
```
accelerate launch --mixed_precision bf16 --num_cpu_threads_per_process 1 stable_cascade_train_stage_c.py --mixed_precision bf16 --save_precision bf16 --max_data_loader_n_workers 2 --persistent_data_loader_workers --gradient_checkpointing --learning_rate 1e-4 --optimizer_type adafactor --optimizer_args "scale_parameter=False" "relative_step=False" "warmup_init=False" --max_train_epochs 10 --save_every_n_epochs 1 --save_precision bf16 --output_dir ../output --output_name sc_test --stage_c_checkpoint_path ../models/stage_c_bf16.safetensors --effnet_checkpoint_path ../models/effnet_encoder.safetensors --previewer_checkpoint_path ../models/previewer.safetensors --dataset_config ../dataset/config_bs1.toml --sample_every_n_epochs 1 --sample_prompts ../dataset/prompts.txt --adaptive_loss_weight
```
About the dataset for fine tuning
If latents cache files for SD/SDXL (extension `*.npz`) exist, they will be read and an error will occur during training. Please move them to another location in advance.
After that, run `finetune/prepare_buckets_latents.py` with the `--stable_cascade` option to create latents cache files for Stable Cascade (the suffix `_sc_latents.npz` is added).
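Moving the stale SD/SDXL caches aside can be scripted. The helper below is hypothetical (not part of the repository); it only assumes the `_sc_latents.npz` suffix mentioned above:

```python
import shutil
from pathlib import Path

def move_stale_npz(dataset_dir, backup_dir):
    """Move SD/SDXL latents caches (*.npz) out of the dataset directory so the
    Stable Cascade scripts do not read them; `_sc_latents.npz` files stay put."""
    backup = Path(backup_dir)
    backup.mkdir(parents=True, exist_ok=True)
    moved = 0
    for npz in Path(dataset_dir).rglob("*.npz"):
        if npz.name.endswith("_sc_latents.npz"):
            continue  # already a Stable Cascade cache, keep it
        shutil.move(str(npz), str(backup / npz.name))
        moved += 1
    return moved
```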
Dreambooth
For specific instructions on using the Dreambooth solution, please refer to the Dreambooth README.
Finetune
For specific instructions on using the Finetune solution, please refer to the Finetune README.
Train Network
For specific instructions on training a network, please refer to the Train network README.
LoRA
To train a LoRA, you can currently use the train_network.py code. You can create a LoRA network by using the all-in-one GUI.
Once you have created the LoRA network, you can generate images using auto1111 by installing this extension.
The following are the names of LoRA types used in this repository:
- LoRA-LierLa: LoRA for Linear layers and Conv2d layers with a 1x1 kernel.
- LoRA-C3Lier: LoRA for Conv2d layers with a 3x3 kernel, in addition to LoRA-LierLa.
LoRA-LierLa is the default LoRA type for `train_network.py` (without the `conv_dim` network argument). You can use LoRA-LierLa with our extension for AUTOMATIC1111's Web UI or the built-in LoRA feature of the Web UI.
To use LoRA-C3Lier with the Web UI, please use our extension.
Sample image generation during training
A prompt file might look like this, for example:
```
# prompt 1
masterpiece, best quality, (1girl), in white shirts, upper body, looking at viewer, simple background --n low quality, worst quality, bad anatomy, bad composition, poor, low effort --w 768 --h 768 --d 1 --l 7.5 --s 28

# prompt 2
masterpiece, best quality, 1boy, in business suit, standing at street, looking back --n (low quality, worst quality), bad anatomy, bad composition, poor, low effort --w 576 --h 832 --d 2 --l 5.5 --s 40
```
Lines beginning with `#` are comments. You can specify options for the generated image with options like `--n` after the prompt. The following options can be used:
- `--n`: Negative prompt up to the next option.
- `--w`: Specifies the width of the generated image.
- `--h`: Specifies the height of the generated image.
- `--d`: Specifies the seed of the generated image.
- `--l`: Specifies the CFG scale of the generated image.
- `--s`: Specifies the number of steps in the generation.
Prompt weighting such as `( )` and `[ ]` works.
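For illustration, the option syntax above can be parsed with a few lines of Python. This is a hypothetical sketch (the training scripts use their own parser), and the option names in the result dict are labels chosen for this example:

```python
import re

# Map flag letters to readable names (names chosen for this sketch).
OPTION_NAMES = {"n": "negative_prompt", "w": "width", "h": "height",
                "d": "seed", "l": "scale", "s": "steps"}

def parse_prompt_line(line):
    """Split a prompt line into the prompt and its --n/--w/--h/--d/--l/--s options."""
    tokens = re.split(r"--([nwhdls])\s+", line)
    parsed = {"prompt": tokens[0].strip()}
    for flag, value in zip(tokens[1::2], tokens[2::2]):
        parsed[OPTION_NAMES[flag]] = value.strip()
    return parsed
```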
Troubleshooting
If you encounter any issues, refer to the troubleshooting steps below.
Page File Limit
If you encounter an X error related to the page file, you may need to increase the page file size limit in Windows.
No module called tkinter
If you encounter an error indicating that the module tkinter is not found, try reinstalling Python 3.10 on your system.
FileNotFoundError
If you come across a FileNotFoundError, it is likely due to an installation issue. Make sure you do not have any locally installed Python modules that could conflict with the ones installed in the virtual environment. You can uninstall them by following these steps:
- Open a new PowerShell terminal and ensure that no virtual environment is active.
- Run the following commands to create a backup file of your locally installed pip packages and then uninstall them:

  ```
  pip freeze > uninstall.txt
  pip uninstall -r uninstall.txt
  ```

  After uninstalling the local packages, redo the installation steps within the `kohya_ss` virtual environment.
SDXL training
The documentation in this section will be moved to a separate document later.
Training scripts for SDXL
- `sdxl_train.py` is a script for SDXL fine-tuning. The usage is almost the same as `fine_tune.py`, but it also supports DreamBooth datasets.
  - The `--full_bf16` option is added. Thanks to KohakuBlueleaf!
    - This option enables full bfloat16 training (including gradients), which is useful to reduce GPU memory usage.
    - Full bfloat16 training might be unstable. Please use it at your own risk.
  - Different learning rates for each U-Net block are now supported in `sdxl_train.py`. Specify them with the `--block_lr` option as 23 comma-separated values, like `--block_lr 1e-3,1e-3 ... 1e-3`.
    - The 23 values correspond to `0: time/label embed, 1-9: input blocks 0-8, 10-12: mid blocks 0-2, 13-21: output blocks 0-8, 22: out`.
- `prepare_buckets_latents.py` now supports SDXL fine-tuning.
- `sdxl_train_network.py` is a script for LoRA training for SDXL. The usage is almost the same as `train_network.py`.
- Both scripts have the following additional options:
  - `--cache_text_encoder_outputs` and `--cache_text_encoder_outputs_to_disk`: Cache the outputs of the text encoders. This is useful to reduce GPU memory usage. These options cannot be used together with options that shuffle or drop the captions.
  - `--no_half_vae`: Disable the half-precision (mixed-precision) VAE. The VAE for SDXL seems to produce NaNs in some cases; this option avoids them.
- The `--weighted_captions` option is not supported yet for either script.
- `sdxl_train_textual_inversion.py` is a script for Textual Inversion training for SDXL. The usage is almost the same as `train_textual_inversion.py`.
  - `--cache_text_encoder_outputs` is not supported.
  - There are two options for captions:
    - Training with captions. All captions must include the token string; the token string is replaced with multiple tokens.
    - Use the `--use_object_template` or `--use_style_template` option. The captions are generated from the template, and the existing captions are ignored.
  - See below for the format of the embeddings.
- `--min_timestep` and `--max_timestep` options are added to each training script. These options can be used to train the U-Net with different timesteps. The default values are 0 and 1000.
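To make the 23-value `--block_lr` layout above concrete, here is a small sketch; the block labels are descriptive names for this illustration, not identifiers from the code:

```python
# The 23 positions of --block_lr, in order:
# 0: time/label embed, 1-9: input blocks 0-8,
# 10-12: mid blocks 0-2, 13-21: output blocks 0-8, 22: out
BLOCK_LABELS = (
    ["time_label_embed"]
    + [f"input_block_{i}" for i in range(9)]
    + [f"mid_block_{i}" for i in range(3)]
    + [f"output_block_{i}" for i in range(9)]
    + ["out"]
)

def make_block_lr_arg(lrs):
    """Render a learning-rate list as the comma-separated --block_lr value."""
    if len(lrs) != len(BLOCK_LABELS):
        raise ValueError(f"--block_lr expects {len(BLOCK_LABELS)} values, got {len(lrs)}")
    return ",".join(f"{lr:g}" for lr in lrs)
```

For example, `make_block_lr_arg([1e-3] * 23)` produces a string suitable for passing to `--block_lr`.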
Utility scripts for SDXL
- `tools/cache_latents.py` is added. This script can be used to cache the latents to disk in advance.
  - The options are almost the same as `sdxl_train.py`. See the help message for the usage.
  - Please launch the script as follows: `accelerate launch --num_cpu_threads_per_process 1 tools/cache_latents.py ...`
  - This script should work with multi-GPU, but it has not been tested in my environment.
- `tools/cache_text_encoder_outputs.py` is added. This script can be used to cache the text encoder outputs to disk in advance.
  - The options are almost the same as `cache_latents.py` and `sdxl_train.py`. See the help message for the usage.
- `sdxl_gen_img.py` is added. This script can be used to generate images with SDXL, including LoRA, Textual Inversion, and ControlNet-LLLite. See the help message for the usage.
Tips for SDXL training
- The default resolution of SDXL is 1024x1024.
- Fine-tuning can be done with 24GB GPU memory at a batch size of 1. The following options are recommended for fine-tuning with 24GB GPU memory:
  - Train U-Net only.
  - Use gradient checkpointing.
  - Use the `--cache_text_encoder_outputs` option and cache the latents.
  - Use the Adafactor optimizer. RMSprop 8bit or Adagrad 8bit may work; AdamW 8bit doesn't seem to.
- LoRA training can be done with 8GB GPU memory (10GB recommended). To reduce GPU memory usage, the following options are recommended:
  - Train U-Net only.
  - Use gradient checkpointing.
  - Use the `--cache_text_encoder_outputs` option and cache the latents.
  - Use one of the 8bit optimizers or the Adafactor optimizer.
  - Use a lower dim (4 to 8 for an 8GB GPU).
- The `--network_train_unet_only` option is highly recommended for SDXL LoRA. Because SDXL has two text encoders, the result of training them will be unexpected.
- PyTorch 2 seems to use slightly less GPU memory than PyTorch 1.
- `--bucket_reso_steps` can be set to 32 instead of the default value 64. Values smaller than 32 will not work for SDXL training.
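As a rough illustration of the last point (not the repository's actual bucketing code), a smaller `--bucket_reso_steps` value simply allows more candidate bucket edge lengths between the minimum and maximum resolution:

```python
def bucket_edge_lengths(min_reso, max_reso, steps=64):
    """List the side lengths a bucket edge may take for a given step size.
    Rough sketch only; per the note above, steps below 32 are rejected."""
    if steps < 32:
        raise ValueError("values smaller than 32 will not work for SDXL training")
    return list(range(min_reso, max_reso + 1, steps))
```

With `steps=32` there are roughly twice as many candidate edge lengths as with the default 64, giving finer-grained buckets.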
Example of the optimizer settings for Adafactor with a fixed learning rate:

```
optimizer_type = "adafactor"
optimizer_args = [ "scale_parameter=False", "relative_step=False", "warmup_init=False" ]
lr_scheduler = "constant_with_warmup"
lr_warmup_steps = 100
learning_rate = 4e-7 # SDXL original learning rate
```
Format of Textual Inversion embeddings for SDXL
```
from safetensors.torch import save_file

state_dict = {"clip_g": embs_for_text_encoder_1280, "clip_l": embs_for_text_encoder_768}
save_file(state_dict, file)
```
ControlNet-LLLite
ControlNet-LLLite, a novel method for ControlNet with SDXL, is added. See documentation for details.
Sample image generation during training
A prompt file might look like this, for example:

```
# prompt 1
masterpiece, best quality, (1girl), in white shirts, upper body, looking at viewer, simple background --n low quality, worst quality, bad anatomy, bad composition, poor, low effort --w 768 --h 768 --d 1 --l 7.5 --s 28

# prompt 2
masterpiece, best quality, 1boy, in business suit, standing at street, looking back --n (low quality, worst quality), bad anatomy, bad composition, poor, low effort --w 576 --h 832 --d 2 --l 5.5 --s 40
```
Lines beginning with `#` are comments. You can specify options for the generated image with options like `--n` after the prompt. The following can be used:
- `--n`: Negative prompt up to the next option.
- `--w`: Specifies the width of the generated image.
- `--h`: Specifies the height of the generated image.
- `--d`: Specifies the seed of the generated image.
- `--l`: Specifies the CFG scale of the generated image.
- `--s`: Specifies the number of steps in the generation.

Prompt weighting such as `( )` and `[ ]` works.
Change History
Work in progress
- The log output has been improved. PR #905 Thanks to shirayu!
  - The log is formatted by default. The `rich` library is required. Please see Upgrade and update the library.
  - If `rich` is not installed, the log output will be the same as before.
  - The following options are available in each training script:
    - `--console_log_simple` option can be used to switch to the previous log output.
    - `--console_log_level` option can be used to specify the log level. The default is `INFO`.
    - `--console_log_file` option can be used to output the log to a file. The default is `None` (output to the console).
- The sample image generation during multi-GPU training is now done with multiple GPUs. PR #1061 Thanks to DKnight54!
- The support for mps devices is improved. PR #1054 Thanks to akx! If an mps device exists instead of CUDA, it is used automatically.
- An option `--highvram` to disable the optimization for environments with little VRAM is added to the training scripts. If you specify it when there is enough VRAM, the operation will be faster.
  - Currently, only the caching of latents is optimized.
- The IPEX support is improved. PR #1086 Thanks to Disty0!
- Fixed a bug that `svd_merge_lora.py` crashes in some cases. PR #1087 Thanks to mgz-dev!
- The common image generation script `gen_img.py` for SD 1/2 and SDXL is added. The basic functions are the same as the scripts for SD 1/2 and SDXL, but some new features are added.
  - External scripts to generate prompts can be supported. They can be called with the `--from_module` option. (The documentation will be added later.)
  - The normalization method after prompt weighting can be specified with the `--emb_normalize_mode` option. `original` is the original method, `abs` is normalization by the average of the absolute values, and `none` is no normalization.
- Gradual Latent Hires fix is added to each generation script. See here for details.
Jan 27, 2024 / 2024/1/27: v0.8.3
- Fixed a bug that the training crashes when `--fp8_base` is specified with `--save_state`. PR #1079 Thanks to feffy380!
  - `safetensors` is updated. Please see Upgrade and update the library.
- Fixed a bug that the training crashes when `network_multiplier` is specified with multi-GPU training. PR #1084 Thanks to fireicewolf!
- Fixed a bug that the training crashes when training ControlNet-LLLite.
- 2024/02/17 (v22.6.2)
  - Fix issue with LoRA Extract GUI
  - Fix syntax issue where parameter `lora_network_weights` is actually called `network_weights`
- 2024/02/15 (v22.6.1)
  - Add support for multi-GPU parameters in the GUI under the "Parameters > Advanced" tab.
  - Significant rewrite of how parameters are created in the code. I hope I did not break anything in the process... It will make the code easier to update.
  - Update TW localisation
  - Update gradio module version to latest 3.x
- 2024/01/27 (v22.6.0)
  - Merge sd-scripts v0.8.3 code update
    - Fixed a bug that the training crashes when `--fp8_base` is specified with `--save_state`. PR #1079 Thanks to feffy380!
      - `safetensors` is updated. Please see Upgrade and update the library.
    - Fixed a bug that the training crashes when `network_multiplier` is specified with multi-GPU training. PR #1084 Thanks to fireicewolf!
    - Fixed a bug that the training crashes when training ControlNet-LLLite.
  - Merge sd-scripts v0.8.2 code update
    - [Experimental] The `--fp8_base` option is added to the training scripts for LoRA etc. The base model (U-Net, and Text Encoder when training modules for Text Encoder) can be trained with fp8. PR #1057 Thanks to KohakuBlueleaf!
      - Please specify `--fp8_base` in `train_network.py` or `sdxl_train_network.py`.
      - PyTorch 2.1 or later is required.
      - If you use xformers with PyTorch 2.1, please see the xformers repository and install the appropriate version according to your CUDA version.
      - The sample image generation during training consumes a lot of memory. It is recommended to turn it off.
    - [Experimental] The network multiplier can be specified for each dataset in the training scripts for LoRA etc.
      - This is an experimental option and may be removed or changed in the future.
      - For example, if you train with state A as `1.0` and state B as `-1.0`, you may be able to generate by switching between state A and B depending on the LoRA application rate.
      - Also, if you prepare five states and train them as `0.2`, `0.4`, `0.6`, `0.8`, and `1.0`, you may be able to generate by switching the states smoothly depending on the application rate.
      - Please specify `network_multiplier` in `[[datasets]]` in the `.toml` file.
    - Some options are added to `networks/extract_lora_from_models.py` to reduce the memory usage.
      - The `--load_precision` option can be used to specify the precision when loading the model. If the model is saved in fp16, you can reduce the memory usage by specifying `--load_precision fp16` without losing precision.
      - The `--load_original_model_to` option can be used to specify the device to load the original model, and `--load_tuned_model_to` the device to load the derived model. The default is `cpu` for both options, but you can specify `cuda` etc. You can reduce the memory usage by loading one of them to GPU. This option is available only for SDXL.
    - The gradient synchronization in LoRA training with multi-GPU is improved. PR #1064 Thanks to KohakuBlueleaf!
    - The code for Intel IPEX support is improved. PR #1060 Thanks to akx!
    - Fixed a bug in multi-GPU Textual Inversion training.
    - `.toml` example for the network multiplier:

      ```
      [general]

      [[datasets]]
      resolution = 512
      batch_size = 8
      network_multiplier = 1.0
      ... subset settings ...

      [[datasets]]
      resolution = 512
      batch_size = 8
      network_multiplier = -1.0
      ... subset settings ...
      ```

  - Merge sd-scripts v0.8.1 code update
    - Fixed a bug that the VRAM usage without Text Encoder training is larger than before in the training scripts for LoRA etc. (`train_network.py`, `sdxl_train_network.py`).
      - Text Encoders were not moved to CPU.
    - Fixed typos. Thanks to akx! PR #1053
- 2024/01/15 (v22.5.0)
  - Merged sd-scripts v0.8.0 updates
    - Diffusers, Accelerate, Transformers and other related libraries have been updated. Please update the libraries with Upgrade.
    - Some model files (Text Encoder without position_id) based on the latest Transformers can be loaded.
    - `torch.compile` is supported (experimental). PR #1024 Thanks to p1atdev!
      - This feature works only on Linux or WSL.
      - Please specify the `--torch_compile` option in each training script.
      - You can select the backend with the `--dynamo_backend` option. The default is `"inductor"`. `inductor` or `eager` seems to work.
      - Please use the `--sdpa` option instead of the `--xformers` option.
      - PyTorch 2.1 or later is recommended.
      - Please see the PR for details.
    - The session name for wandb can be specified with the `--wandb_run_name` option. PR #1032 Thanks to hopl1t!
    - IPEX library is updated. PR #1030 Thanks to Disty0!
    - Fixed a bug that Diffusers format model cannot be saved.
  - Fix LoRA config display after load that would sometimes hide some of the fields
- 2024/01/02 (v22.4.1)
- Minor bug fixes and enhancements.
- 2023/12/28 (v22.4.0)
  - Fixed `tools/convert_diffusers20_original_sd.py` to work. Thanks to Disty0! PR #1016
  - The issues in multi-GPU training are fixed. Thanks to Isotr0py! PR #989 and #1000
    - `--ddp_gradient_as_bucket_view` and `--ddp_bucket_view` options are added to `sdxl_train.py`. Please specify these options for multi-GPU training.
  - IPEX support is updated. Thanks to Disty0!
  - Fixed the bug that the size of the bucket becomes less than `min_bucket_reso`. Thanks to Cauldrath! PR #1008
  - `--sample_at_first` option is added to each training script. This option is useful to generate images at the first step, before training. Thanks to shirayu! PR #907
  - `--ss` option is added to the sampling prompt in training. You can specify the scheduler for the sampling like `--ss euler_a`. Thanks to shirayu! PR #906
  - `keep_tokens_separator` is added to the dataset config. This option is useful to keep (prevent from shuffling) the tokens in the captions. See #975 for details. Thanks to Linaqruf!
    - You can specify the separator with an option like `--keep_tokens_separator "|||"` or with `keep_tokens_separator: "|||"` in `.toml`. The tokens before `|||` are not shuffled.
  - Attention processor hook is added. See #961 for details. Thanks to rockerBOO!
  - The optimizer `PagedAdamW` is added. Thanks to xzuyn! PR #955
  - NaN replacement in SDXL VAE is sped up. Thanks to liubo0902! PR #1009
  - Fixed the path error in `finetune/make_captions.py`. Thanks to CjangCjengh! PR #986
- 2023/12/20 (v22.3.1)
- Add goto button to manual caption utility
- Add missing options for various LyCORIS training algorithms
- Refactor how fields are shown or hidden
- Made max value for network and convolution rank 512 except for LyCORIS/LoKr.
- 2023/12/06 (v22.3.0)
  - Merge sd-scripts updates:
    - `finetune\tag_images_by_wd14_tagger.py` now supports a separator other than `,` with the `--caption_separator` option. Thanks to KohakuBlueleaf! PR #913
    - Min SNR Gamma with V-prediction (SD 2.1) is fixed. Thanks to feffy380! PR #934
      - See #673 for details.
    - `--min_diff` and `--clamp_quantile` options are added to `networks/extract_lora_from_models.py`. Thanks to wkpark! PR #936
      - The default values are the same as the previous version.
    - Deep Shrink hires fix is supported in `sdxl_gen_img.py` and `gen_img_diffusers.py`.
      - `--ds_timesteps_1` and `--ds_timesteps_2` options denote the timesteps of the Deep Shrink for the first and second stages.
      - `--ds_depth_1` and `--ds_depth_2` options denote the depth (block index) of the Deep Shrink for the first and second stages.
      - `--ds_ratio` option denotes the ratio of the Deep Shrink. `0.5` means half of the original latent size for the Deep Shrink.
      - `--dst1`, `--dst2`, `--dsd1`, `--dsd2` and `--dsr` prompt options are also available.
  - Add GLoRA support