Mirror of <https://github.com/bmaltais/kohya_ss>, commit `4161d1d80a`.
Ignore rules updated: the `venv` entry is broadened to `venv*`, and `.python-version` is added (the existing `config.toml` and `sd-scripts` entries are unchanged).
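The effect of the broadened pattern can be sanity-checked with `git check-ignore` — a quick sketch, assuming these entries live in `.gitignore` (the throwaway repository below is only for illustration):

```shell
# Create a throwaway repo with the new ignore rules and confirm each
# path is matched; venv-3.11 is covered by the broadened venv* pattern.
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
printf 'venv*\n.python-version\n' > .gitignore
for p in venv venv-3.11 .python-version; do
  git check-ignore -q "$p" && echo "$p ignored"
done
```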
A one-line version pin is bumped from `3.10` to `3.11`.
---

**`README.md`**
[](LICENSE.md)
[](https://github.com/bmaltais/kohya_ss/issues)

This is a GUI and CLI for training diffusion models.

This project provides a user-friendly Gradio-based Graphical User Interface (GUI) for [Kohya's Stable Diffusion training scripts](https://github.com/kohya-ss/sd-scripts). Stable Diffusion training empowers users to customize image generation models by fine-tuning existing models, creating unique artistic styles, and training specialized models like LoRA (Low-Rank Adaptation).

Key features of this GUI include:

* Easy-to-use interface for setting a wide range of training parameters.
Support for Linux and macOS is also available, with Linux support actively maintained.

## Table of Contents

- [Kohya's GUI](#kohyas-gui)
- [Table of Contents](#table-of-contents)
- [🦒 Colab](#-colab)
- [Installation](#installation)
  - [Prerequisites](#prerequisites)
    - [Installing Prerequisites on Windows](#installing-prerequisites-on-windows)
    - [Installing Prerequisites on Linux / macOS](#installing-prerequisites-on-linux--macos)
  - [Cloning the Repository](#cloning-the-repository)
  - [Installation Methods](#installation-methods)
    - [Using `uv` (Recommended)](#using-uv-recommended)
      - [For Windows](#for-windows)
      - [For Linux](#for-linux)
    - [Using `pip` (Traditional Method)](#using-pip-traditional-method)
      - [Using `pip` For Windows](#using-pip-for-windows)
      - [Using `pip` For Linux and macOS](#using-pip-for-linux-and-macos)
      - [Using `conda`](#using-conda)
  - [Optional: Install Location Details](#optional-install-location-details)
  - [Runpod](#runpod)
  - [Novita](#novita)
  - [Docker](#docker)
- [Upgrading](#upgrading)
  - [Windows Upgrade](#windows-upgrade)
  - [Linux and macOS Upgrade](#linux-and-macos-upgrade)
- [Starting GUI Service](#starting-gui-service)
  - [Launching the GUI on Windows (pip method)](#launching-the-gui-on-windows-pip-method)
  - [Launching the GUI on Windows (uv method)](#launching-the-gui-on-windows-uv-method)
  - [Launching the GUI on Linux and macOS](#launching-the-gui-on-linux-and-macos)
  - [Launching the GUI on Linux (uv method)](#launching-the-gui-on-linux-uv-method)
- [Installation Options](#installation-options)
  - [Local Installation Overview](#local-installation-overview)
    - [`uv` vs `pip` – What's the Difference?](#uv-vs-pip--whats-the-difference)
  - [Cloud Installation Overview](#cloud-installation-overview)
    - [Colab](#-colab)
    - [Runpod, Novita, Docker](#runpod-novita-docker)
- [Custom Path Defaults](#custom-path-defaults)
- [LoRA](#lora)
- [Sample image generation during training](#sample-image-generation-during-training)
- [Troubleshooting](#troubleshooting)
  - [Page File Limit](#page-file-limit)
  - [No module called tkinter](#no-module-called-tkinter)
  - [LORA Training on TESLA V100 - GPU Utilization Issue](#lora-training-on-tesla-v100---gpu-utilization-issue)
- [SDXL training](#sdxl-training)
- [Masked loss](#masked-loss)
- [Guides](#guides)
  - [Using Accelerate Lora Tab to Select GPU ID](#using-accelerate-lora-tab-to-select-gpu-id)
    - [Starting Accelerate in GUI](#starting-accelerate-in-gui)
    - [Running Multiple Instances (linux)](#running-multiple-instances-linux)
    - [Monitoring Processes](#monitoring-processes)
- [Interesting Forks](#interesting-forks)
- [Contributing](#contributing)
- [License](#license)
- [Change History](#change-history)
  - [v25.0.3](#v2503)
  - [v25.0.2](#v2502)
  - [v25.0.1](#v2501)
  - [v25.0.0](#v2500)

## 🦒 Colab

This Colab notebook was not created or maintained by me; however, it appears to function effectively. The source can be found at: <https://github.com/camenduru/kohya_ss-colab>.

I would like to express my gratitude to camenduru for their valuable contribution. If you encounter any issues with the Colab notebook, please report them on their repository.
## Installation Options

You can run `kohya_ss` either **locally on your machine** or via **cloud-based solutions** like Colab or Runpod.

- If you have a GPU-equipped PC and want full control: install it locally using `uv` or `pip`.
- If your system doesn't meet the requirements or you prefer a browser-based setup: use Colab or a paid GPU provider like Runpod or Novita.
- If you are a developer or DevOps user, Docker is also supported.

---

### Local Installation Overview

You can install `kohya_ss` locally using either the `uv` or `pip` method. Choose one depending on your platform and preferences:

| Platform | Recommended Method | Instructions |
|--------------|--------------------|----------------------------------------------------|
| Linux | `uv` | [uv_linux.md](./docs/Installation/uv_linux.md) |
| Linux or Mac | `pip` | [pip_linux.md](./docs/Installation/pip_linux.md) |
| Windows | `uv` | [uv_windows.md](./docs/Installation/uv_windows.md) |
| Windows | `pip` | [pip_windows.md](./docs/Installation/pip_windows.md) |

#### `uv` vs `pip` – What's the Difference?

- `uv` is faster and isolates dependencies more cleanly, ideal if you want minimal setup hassle.
- `pip` is more traditional, easier to debug if issues arise, and works better with some IDEs and Python tooling.
- If unsure: try `uv`. If it doesn't work for you, fall back to `pip`.

### Cloud Installation Overview

#### 🦒 Colab

For browser-based training without local setup, use this Colab notebook:
<https://github.com/camenduru/kohya_ss-colab>

- No installation required
- Free to use (GPU availability may vary)
- Maintained by **camenduru**, not the original author

| Colab | Info |
| ----- | ---- |
| [](https://colab.research.google.com/github/camenduru/kohya_ss-colab/blob/main/kohya_ss_colab.ipynb) | kohya_ss_gui_colab |
> 💡 If you encounter issues, please report them on camenduru's repo.

**Special thanks**

I would like to express my gratitude to camenduru for their valuable contribution.

#### Runpod, Novita, Docker

These options are for users running training on hosted GPU infrastructure or containers.

- **[Runpod setup](docs/runpod_setup.md)** – Ready-made GPU background training via templates.
- **[Novita setup](docs/novita_setup.md)** – Similar to Runpod, but integrated into the Novita UI.
- **[Docker setup](docs/docker.md)** – For developers/sysadmins using containerized environments.

## Custom Path Defaults with `config.toml`

The GUI supports a configuration file named `config.toml` that allows you to set default paths for many of the input fields. This is useful for avoiding repetitive manual selection of directories every time you start the GUI.

**Purpose of `config.toml`:**

* Pre-fill default directory paths for pretrained models, datasets, output folders, LoRA models, etc.
* Streamline your workflow by having the GUI remember your preferred locations.

**How to Use and Customize:**

1. **Create your configuration file:**
   * In the root directory of the `kohya_ss` repository, you'll find a file named `config example.toml`.
   * Copy this file and rename the copy to `config.toml`. This `config.toml` file will be automatically loaded when the GUI starts.
2. **Edit `config.toml`:**
   * Open `config.toml` with a text editor.
   * The file uses TOML (Tom's Obvious, Minimal Language) format, which consists of `key = "value"` pairs.
   * Modify the paths for the keys according to your local directory structure.
   * **Important:**
     * Use absolute paths (e.g., `C:/Users/YourName/StableDiffusion/Models` or `/home/yourname/sd-models`).
     * Alternatively, you can use paths relative to the `kohya_ss` root directory.
     * Ensure you use forward slashes (`/`) for paths, even on Windows, as this is generally more compatible with TOML and Python.
     * Make sure the specified directories exist on your system.

**Structure of `config.toml`:**

The `config.toml` file can have several sections, typically corresponding to different training modes or general settings. Common keys you might want to set include:

* `model_dir`: Default directory for loading base Stable Diffusion models.
* `lora_model_dir`: Default directory for saving and loading LoRA models.
* `output_dir`: Default base directory for training outputs (images, logs, model checkpoints).
* `dataset_dir`: A general default if you store all your datasets in one place.
* Specific input paths for different training tabs like Dreambooth, Finetune, LoRA, etc. (e.g., `db_model_dir`, `ft_source_model_name_or_path`).

**Example Configurations:**

Here's an example snippet of what your `config.toml` might look like:

```toml
# General settings
model_dir = "C:/ai_stuff/stable-diffusion-webui/models/Stable-diffusion"
lora_model_dir = "C:/ai_stuff/stable-diffusion-webui/models/Lora"
vae_dir = "C:/ai_stuff/stable-diffusion-webui/models/VAE"
output_dir = "C:/ai_stuff/kohya_ss_outputs"
logging_dir = "C:/ai_stuff/kohya_ss_outputs/logs"

# Dreambooth specific paths
db_model_dir = "C:/ai_stuff/stable-diffusion-webui/models/Stable-diffusion"
db_reg_image_dir = "C:/ai_stuff/datasets/dreambooth_regularization_images"
# Add other db_... paths as needed

# Finetune specific paths
ft_model_dir = "C:/ai_stuff/stable-diffusion-webui/models/Stable-diffusion"
# Add other ft_... paths as needed

# LoRA / LoCon specific paths
lc_model_dir = "C:/ai_stuff/stable-diffusion-webui/models/Stable-diffusion" # Base model for LoRA training
lc_output_dir = "C:/ai_stuff/kohya_ss_outputs/lora"
lc_dataset_dir = "C:/ai_stuff/datasets/my_lora_project"
# Add other lc_... paths as needed

# You can find a comprehensive list of all available keys in the `config example.toml` file.
# Refer to it to customize paths for all supported options in the GUI.
```

**Using a Custom Config File Path:**

If you prefer to name your configuration file differently or store it in another location, you can specify its path using the `--config` command-line argument when launching the GUI:

* On Windows: `gui.bat --config D:/my_configs/kohya_settings.toml`
* On Linux/macOS: `./gui.sh --config /home/user/my_configs/kohya_settings.toml`

## Installation

### Prerequisites

Before you begin, make sure your system meets the following minimum requirements:

- **Python**
  - Windows: Version **3.11.9**
  - Linux/macOS: Version **3.10.9 or higher**, but **below 3.11.0**
- **Git** – Required for cloning the repository
- **NVIDIA CUDA Toolkit** – Version 12.8 or compatible
- **NVIDIA GPU** – Required for training; VRAM needs vary
- **(Optional) NVIDIA cuDNN** – Improves training speed and batch size
- **Windows only** – Visual Studio 2015–2022 Redistributables

#### Installing Prerequisites on Windows

1. Install [Python 3.11.9](https://www.python.org/ftp/python/3.11.9/python-3.11.9-amd64.exe)
   ✅ Enable the "Add to PATH" option during setup

2. Install the [CUDA 12.8 Toolkit](https://developer.nvidia.com/cuda-12-8-0-download-archive?target_os=Windows&target_arch=x86_64)

3. Install [Git](https://git-scm.com/download/win)

4. Install the [Visual Studio Redistributables](https://aka.ms/vs/17/release/vc_redist.x64.exe)

#### Installing Prerequisites on Linux / macOS

1. Install Python. Make sure you have Python version 3.10.9 or higher (but lower than 3.11.0) installed on your system.
   On Ubuntu 22.04 or later:

   ```bash
   sudo apt update
   sudo apt install python3.10 python3.10-venv
   ```

2. Install the [CUDA 12.8 Toolkit](https://developer.nvidia.com/cuda-12-8-0-download-archive?target_os=Linux&target_arch=x86_64).
   Follow the instructions for your distribution.

> [!NOTE]
> macOS is only supported via the **pip method**.
> CUDA is usually not required and may not be compatible with Apple Silicon GPUs.

### Cloning the Repository

To install the project, you must first clone the repository **with submodules**:

```bash
git clone --recursive https://github.com/bmaltais/kohya_ss.git
cd kohya_ss
```

> The `--recursive` flag ensures that all required Git submodules are also cloned.

---

### Installation Methods

This project offers two primary methods for installing and running the GUI: using the `uv` package manager (recommended for ease of use and automatic updates) or using the traditional `pip` package manager. Below, you'll find details on both approaches. Please read this section to decide which method best suits your needs before proceeding to the OS-specific installation prerequisites.
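Before choosing, you can check whether `uv` is already available on your system — a small sketch, not part of the repository's scripts:

```shell
# Pick the method based on whether uv is on PATH.
if command -v uv >/dev/null 2>&1; then
  echo "uv found - the uv method (gui-uv) will work"
else
  echo "uv not found - use the pip method, or install uv from https://astral.sh/uv"
fi
```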
**Key Differences:**

* **`uv` method:**
  * Simplifies the setup process.
  * Automatically handles updates when you run `gui-uv.bat` (Windows) or `gui-uv.sh` (Linux).
  * No need to run `setup.bat` or `setup.sh` after the initial clone.
  * This is the recommended method for most users on Windows and Linux.
  * **Not recommended for Runpod or macOS installations.** For these, please use the `pip` method.
* **`pip` method:**
  * The traditional method, requiring manual execution of `setup.bat` (Windows) or `setup.sh` (Linux) after cloning and for updates.
  * Necessary for environments like Runpod and macOS, where the `uv` scripts are not intended to be used.

Subsequent sections detail the specific commands for each method.

#### Using `uv` (Recommended)

> [!NOTE]
> This method is not intended for Runpod or macOS installations. Use the `pip`-based setup instead.
##### For Windows

Run:

```powershell
gui-uv.bat
```

For full details and command-line options, see:
[Launching the GUI on Windows (uv method)](https://github.com/bmaltais/kohya_ss#launching-the-gui-on-windows-uv-method)

##### For Linux

Run:

```bash
./gui-uv.sh
```

For full details, including headless mode, see:
[Launching the GUI on Linux (uv method)](https://github.com/bmaltais/kohya_ss#launching-the-gui-on-linux-uv-method)
#### Using `pip` (Traditional Method)

This method uses the traditional `pip` package manager and requires manual script execution for setup and updates.
It is necessary for environments like Runpod or macOS, or if you prefer managing your environment with `pip`.

##### Using `pip` For Windows

For systems with only Python 3.10.11 installed:

```shell
.\setup.bat
```

For systems with more than one Python release installed:

```shell
.\setup-3.10.bat
```

During the accelerate config step, use the default values as proposed during the configuration unless you know your hardware demands otherwise. The amount of VRAM on your GPU does not impact the values used.

**Optional: cuDNN 8.9.6.50**

The following steps are optional but will improve the learning speed for owners of NVIDIA 30X0/40X0 GPUs. These steps enable larger training batch sizes and faster training speeds.

Run `.\setup.bat` and select `2. (Optional) Install cudnn files (if you want to use the latest supported cudnn version)`.
##### Using `pip` For Linux and macOS

If you encounter permission issues, make the `setup.sh` script executable by running the following command:

```shell
chmod +x ./setup.sh
```

Run the setup script by executing the following command:

```shell
./setup.sh
```

> [!NOTE]
> If you need additional options or information about the runpod environment, you can use `setup.sh -h` or `setup.sh --help` to display the help message.
##### Using `conda`

```shell
# Create the Conda environment
conda create -n kohyass python=3.11
conda activate kohyass

# Run the scripts
chmod +x setup.sh
./setup.sh

chmod +x gui.sh
./gui.sh
```

> [!NOTE]
> For Windows users, the `chmod +x` commands are not necessary. Run `setup.bat` and subsequently `gui.bat` (or `gui.ps1` if you prefer PowerShell) instead of the `.sh` scripts.
#### Optional: Install Location Details for Linux and Mac

> [!NOTE]
> The information below regarding install location applies to both the `uv` and `pip` installation methods.
> Most users don't need to change the install directory. The following applies only if you want to customize the installation path or troubleshoot permission issues.

The default installation location on Linux is the directory where the script is located. If a previous installation is detected in that location, the setup will proceed there. Otherwise, the installation will fall back to `/opt/kohya_ss`. If `/opt` is not writable, the fallback location will be `$HOME/kohya_ss`. Finally, if none of the previous options are viable, the installation will be performed in the current directory.

For macOS and other non-Linux systems, the installation process will attempt to detect the previous installation directory based on where the script is run. If a previous installation is not found, the default location will be `$HOME/kohya_ss`. You can override this behavior by specifying a custom installation directory using the `-d` or `--dir` option when running the setup script.

If you choose to use the interactive mode, the default values for the accelerate configuration screen will be "This machine," "None," and "No" for the remaining questions. These default answers are the same as the Windows installation.
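The fallback order described above can be written out in shell — a simplified illustration only; the real setup script performs additional checks, such as detecting a previous installation in the script's own directory first:

```shell
# Simplified echo of the documented fallback chain:
# /opt/kohya_ss -> $HOME/kohya_ss -> current directory.
if [ -w /opt ]; then
  install_dir=/opt/kohya_ss
elif [ -n "$HOME" ]; then
  install_dir=$HOME/kohya_ss
else
  install_dir=$PWD
fi
echo "Install target: $install_dir"
```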
#### Runpod

See the [Runpod Installation Guide](docs/installation_runpod.md) for details.

#### Novita

See the [Novita Installation Guide](docs/installation_novita.md) for details.

#### Docker

See the [Docker Installation Guide](docs/installation_docker.md) for details.
## Upgrading

To upgrade your installation to a new version, follow the instructions below.

### Windows Upgrade

If a new release becomes available, you can upgrade your repository by following these steps:

* **If you are using the `uv`-based installation (`gui-uv.bat`):**
  1. Pull the latest changes from the repository:

     ```powershell
     git pull
     ```

  2. Updates to the Python environment are handled automatically the next time you run `gui-uv.bat`. No separate setup script execution is needed.

* **If you are using the `pip`-based installation (`gui.bat` or `gui.ps1`):**
  1. Pull the latest changes from the repository:

     ```powershell
     git pull
     ```

  2. Run the setup script to update dependencies:

     ```powershell
     .\setup.bat
     ```
### Linux and macOS Upgrade

To upgrade your installation on Linux or macOS, follow these steps:

* **If you are using the `uv`-based installation (`gui-uv.sh`):**
  1. Open a terminal and navigate to the root directory of the project.
  2. Pull the latest changes from the repository:

     ```bash
     git pull
     ```

  3. Updates to the Python environment are handled automatically the next time you run `gui-uv.sh`. No separate setup script execution is needed.

* **If you are using the `pip`-based installation (`gui.sh`):**
  1. Open a terminal and navigate to the root directory of the project.
  2. Pull the latest changes from the repository:

     ```bash
     git pull
     ```

  3. Refresh and update everything by running the setup script:

     ```bash
     ./setup.sh
     ```
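For either method, pulling over uncommitted local changes is a common source of trouble. A small guard like the following can help — a sketch only; `safe_upgrade` is not a repository script:

```shell
# Refuse to upgrade while the working tree is dirty.
safe_upgrade() {
  if [ -n "$(git status --porcelain 2>/dev/null)" ]; then
    echo "Local changes present; stash or commit before upgrading."
    return 1
  fi
  git pull && ./setup.sh
}
# Usage (from the kohya_ss root):  safe_upgrade
```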
## Starting GUI Service

To launch the GUI service, use the script corresponding to your chosen installation method (`uv` or `pip`), or run the `kohya_gui.py` script directly. Use the command-line arguments listed below to configure the underlying service.

```text
--help                show this help message and exit
--config CONFIG       Path to the toml config file for interface defaults
--debug               Debug on
--listen LISTEN       IP to listen on for connections to Gradio
--username USERNAME   Username for authentication
--password PASSWORD   Password for authentication
--server_port SERVER_PORT
                      Port to run the server listener on
--inbrowser           Open in browser
--share               Share the gradio UI
--headless            Is the server headless
--language LANGUAGE   Set custom language
--use-ipex            Use IPEX environment
--use-rocm            Use ROCm environment
--do_not_use_shell    Enforce not to use shell=True when running external commands
--do_not_share        Do not share the gradio UI
--requirements REQUIREMENTS
                      requirements file to use for validation
--root_path ROOT_PATH
                      `root_path` for Gradio to enable reverse proxy support. e.g. /kohya_ss
--noverify            Disable requirements verification
```
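The flags above compose freely. When scripting a launch, building the argument string conditionally keeps optional pieces such as authentication out of the command unless they are set — a sketch; `KOHYA_USER` and `KOHYA_PASS` are illustrative variable names, not ones the project reads:

```shell
# Build the launch command from the documented flags.
args="--listen 127.0.0.1 --server_port 7860"
if [ -n "${KOHYA_USER:-}" ] && [ -n "${KOHYA_PASS:-}" ]; then
  args="$args --username $KOHYA_USER --password $KOHYA_PASS"
fi
echo "./gui.sh $args"
```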
### Launching the GUI on Windows (pip method)

If you installed using the `pip` method, use either the `gui.ps1` or `gui.bat` script located in the root directory. Choose the script that suits your preference and run it in a terminal, providing the desired command-line arguments. Here's an example:

```powershell
.\gui.ps1 --listen 127.0.0.1 --server_port 7860 --inbrowser --share
```

or

```powershell
.\gui.bat --listen 127.0.0.1 --server_port 7860 --inbrowser --share
```
### Launching the GUI on Windows (uv method)

If you installed using the `uv` method, use the `gui-uv.bat` script to start the GUI.

When you run `gui-uv.bat`, it will first check if `uv` is installed on your system. If `uv` is not found, the script will prompt you, asking if you'd like to attempt an automatic installation. You can choose 'Y' to let the script try to install `uv` for you, or 'N' to cancel. If you cancel, you'll need to install `uv` manually from [https://astral.sh/uv](https://astral.sh/uv) before running `gui-uv.bat` again.

```cmd
.\gui-uv.bat
```

or

```powershell
.\gui-uv.bat --listen 127.0.0.1 --server_port 7860 --inbrowser --share
```

This script utilizes the `uv` managed environment.
### Launching the GUI on Linux and macOS

If you installed using the `pip` method on Linux or macOS, run the `gui.sh` script located in the root directory. Provide the desired command-line arguments as follows:

```bash
./gui.sh --listen 127.0.0.1 --server_port 7860 --inbrowser --share
```
### Launching the GUI on Linux (uv method)

If you installed using the `uv` method on Linux, use the `gui-uv.sh` script to start the GUI.

When you run `gui-uv.sh`, it will first check if `uv` is installed on your system. If `uv` is not found, the script will prompt you, asking if you'd like to attempt an automatic installation. You can choose 'Y' (or 'y') to let the script try to install `uv` for you, or 'N' (or 'n') to cancel. If you cancel, you'll need to install `uv` manually from [https://astral.sh/uv](https://astral.sh/uv) before running `gui-uv.sh` again.

```shell
./gui-uv.sh --listen 127.0.0.1 --server_port 7860 --inbrowser --share
```

If you are running on a headless server, use:

```shell
./gui-uv.sh --headless --listen 127.0.0.1 --server_port 7860
```

This script utilizes the `uv` managed environment.
## Custom Path Defaults

The repository provides a default configuration template that you can customize to suit your needs.

To use it, follow these steps:

1. Copy the `config example.toml` file from the root directory of the repository to `config.toml`.
2. Open the `config.toml` file in a text editor.
3. Modify the paths and settings as per your requirements.

This lets you set the default folder that opens for each type of folder/file input supported in the GUI.

You can specify the path to your `config.toml` (or any other name you like) when running the GUI, for instance: `.\gui.bat --config C:\my_config.toml`

By effectively using `config.toml`, you can significantly speed up your training setup process. Always refer to `config example.toml` for the most up-to-date list of configurable paths.
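Putting the pieces together, the following sketch writes a minimal `config.toml` and shows the matching launch command — the paths are placeholders, and only a few of the keys from the repository's `config example.toml` are used:

```shell
# Write a minimal config with placeholder paths (forward slashes, as
# recommended for TOML even on Windows), then point the GUI at it.
cat > /tmp/kohya_config.toml <<'EOF'
model_dir = "/home/user/sd-models"
output_dir = "/home/user/kohya_outputs"
logging_dir = "/home/user/kohya_outputs/logs"
EOF
echo "Launch with: ./gui.sh --config /tmp/kohya_config.toml"
```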
## LoRA
---

**`docs/Installation/pip_linux.md`** (new file)
# Linux – Installation (pip method)

Use this method if you prefer `pip` or are on macOS.

## Table of Contents

- [Linux – Installation (pip method)](#linux--installation-pip-method)
- [Table of Contents](#table-of-contents)
- [Prerequisites](#prerequisites)
- [Installation Steps](#installation-steps)
  - [Using `conda`](#using-conda)
- [Clone the Repository](#clone-the-repository)
- [Run the Setup Script](#run-the-setup-script)
- [Start the GUI](#start-the-gui)
  - [Available CLI Options](#available-cli-options)
- [Upgrade Instructions](#upgrade-instructions)
- [Optional: Install Location Details](#optional-install-location-details)

## Prerequisites

- **Python 3.10.9** (or higher, but below 3.13)
- **Git** – Required for cloning the repository
- **NVIDIA CUDA Toolkit 12.8**
- **NVIDIA GPU** – Required for training; VRAM needs vary
- **(Optional) NVIDIA cuDNN** – Improves training speed and batch size
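You can confirm that the interpreter on your PATH falls inside the stated range before running setup — a quick sketch; adjust `python3` if your binary is named differently:

```shell
# Check 3.10.9 <= version < 3.13 using the interpreter itself.
python3 - <<'EOF'
import sys
ok = (3, 10, 9) <= sys.version_info[:3] < (3, 13, 0)
print("python OK" if ok else "unsupported: " + sys.version.split()[0])
EOF
```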
## Installation Steps

1. Install Python and Git. On Ubuntu 22.04 or later:

   ```bash
   sudo apt update
   sudo apt install python3.11 python3.11-venv git
   ```

2. Install the [CUDA 12.8 Toolkit](https://developer.nvidia.com/cuda-12-8-0-download-archive?target_os=Linux&target_arch=x86_64).
   Follow the instructions for your distribution.

> [!NOTE]
> CUDA is usually not required and may not be compatible with Apple Silicon GPUs.

### Using `conda`

If you prefer Conda over `venv`, you can create an environment like this:

```shell
# Create the Conda environment
conda create -n kohyass python=3.11
conda activate kohyass

# Run the scripts
chmod +x setup.sh
./setup.sh

chmod +x gui.sh
./gui.sh
```
## Clone the Repository

Clone with submodules:

```bash
git clone --recursive https://github.com/bmaltais/kohya_ss.git
cd kohya_ss
```
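If the repository was cloned without `--recursive`, the submodules can still be fetched afterwards — this is standard Git behavior, not a project-specific script:

```shell
# Run inside an existing kohya_ss checkout; outside a repository this
# just prints a hint instead of failing.
git submodule update --init --recursive \
  || echo "run this from inside the cloned repository"
```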
## Run the Setup Script

Make the setup script executable:

```bash
chmod +x setup.sh
```

Run:

```bash
./setup.sh
```

> [!NOTE]
> If you need additional options or information about the runpod environment, you can use `setup.sh -h` or `setup.sh --help` to display the help message.
## Start the GUI

Start with:

```bash
./gui.sh --listen 127.0.0.1 --server_port 7860 --inbrowser --share
```

You can also run `kohya_gui.py` directly with the same flags.

For help:

```bash
./gui.sh --help
```

This method uses a standard Python virtual environment.
### Available CLI Options
|
||||
|
||||
You can pass the following arguments to `gui.sh` or `kohya_gui.py`:
|
||||
|
||||
```text
|
||||
--help show this help message and exit
|
||||
--config CONFIG Path to the toml config file for interface defaults
|
||||
--debug Debug on
|
||||
--listen LISTEN IP to listen on for connections to Gradio
|
||||
--username USERNAME Username for authentication
|
||||
--password PASSWORD Password for authentication
|
||||
--server_port SERVER_PORT
|
||||
Port to run the server listener on
|
||||
--inbrowser Open in browser
|
||||
--share Share the gradio UI
|
||||
--headless Is the server headless
|
||||
--language LANGUAGE Set custom language
|
||||
--use-ipex Use IPEX environment
|
||||
--use-rocm Use ROCm environment
|
||||
--do_not_use_shell Enforce not to use shell=True when running external commands
|
||||
--do_not_share Do not share the gradio UI
|
||||
--requirements REQUIREMENTS
|
||||
requirements file to use for validation
|
||||
--root_path ROOT_PATH
|
||||
`root_path` for Gradio to enable reverse proxy support. e.g. /kohya_ss
|
||||
--noverify Disable requirements verification
|
||||
```
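
For instance, `--root_path` lets the GUI run behind a reverse proxy under a subpath. The nginx fragment below is a hypothetical sketch, assuming the GUI was started with `--root_path /kohya_ss` and the `--listen`/`--server_port` values from the example above; adapt it to your proxy setup:

```nginx
# Hypothetical reverse-proxy fragment (not part of this repository).
# Assumes: ./gui.sh --root_path /kohya_ss --listen 127.0.0.1 --server_port 7860
location /kohya_ss/ {
    proxy_pass http://127.0.0.1:7860/;
    proxy_set_header Host $host;
    # Gradio uses websockets, so the connection must be upgradable.
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
```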

## Upgrade Instructions

To upgrade, pull the latest changes and rerun setup:

```bash
git pull
./setup.sh
```

## Optional: Install Location Details

On Linux, the setup script installs in the current directory if possible.

If that fails, it falls back in this order:

- Fallback: `/opt/kohya_ss`
- If not writable: `$HOME/kohya_ss`
- If all fail: stays in the current directory
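
That fallback order can be sketched as a small shell function. This is a hypothetical illustration of the behavior described above, not the actual `setup.sh` code, and `choose_install_dir` is an assumed name:

```shell
# Hypothetical sketch of the install-directory fallback order.
choose_install_dir() {
  for d in "$PWD" /opt/kohya_ss "$HOME/kohya_ss"; do
    # Take the first candidate that exists (or can be created) and is writable.
    if mkdir -p "$d" 2>/dev/null && [ -w "$d" ]; then
      echo "$d"
      return 0
    fi
  done
  echo "$PWD"   # last resort: stay in the current directory
}
```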

To override the location, use:

```bash
./setup.sh -d /your/custom/path
```

On macOS, the behavior is similar but defaults to `$HOME/kohya_ss`.

If you use interactive mode, the default Accelerate values are:

- Machine: `This machine`
- Compute: `None`
- Others: `No`
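
With those interactive answers, `accelerate config` typically writes a file similar to the following. This is a hypothetical single-GPU example; the exact keys and values vary by Accelerate version and hardware:

```yaml
compute_environment: LOCAL_MACHINE
distributed_type: "NO"
mixed_precision: "no"
num_machines: 1
num_processes: 1
use_cpu: false
```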

# Windows – Installation (pip method)

Use this method if `uv` is not available or you prefer the traditional approach.

## Table of Contents

- [Prerequisites](#prerequisites)
- [Installation Steps](#installation-steps)
- [Using Conda](#using-conda-optional)
- [Clone the Repository](#clone-the-repository)
- [Run the Setup Script](#run-the-setup-script)
- [Start the GUI](#start-the-gui)
- [Available CLI Options](#available-cli-options)
- [Upgrade Instructions](#upgrade-instructions)
- [Optional: Install Location Details](#optional-install-location-details)

## Prerequisites

- **Python 3.10.11 or 3.11.9**
- **Git** – Required for cloning the repository
- **NVIDIA CUDA Toolkit 12.8**
- **NVIDIA GPU** – Required for training; VRAM needs vary
- **(Optional) NVIDIA cuDNN** – Improves training speed and allows larger batch sizes
- **(Optional) Visual Studio Redistributables**: [vc_redist.x64.exe](https://aka.ms/vs/17/release/vc_redist.x64.exe)

## Installation Steps

1. Install [Python 3.11.9](https://www.python.org/ftp/python/3.11.9/python-3.11.9-amd64.exe)
   ✅ Enable the "Add to PATH" option during setup

2. Install the [CUDA 12.8 Toolkit](https://developer.nvidia.com/cuda-12-8-0-download-archive?target_os=Windows&target_arch=x86_64)

3. Install [Git](https://git-scm.com/download/win)

4. Install the [Visual Studio Redistributables](https://aka.ms/vs/17/release/vc_redist.x64.exe)

## Using Conda (Optional)

If you prefer Conda over `venv`, create and activate an environment, then run setup:

```powershell
conda create -n kohyass python=3.10
conda activate kohyass

setup.bat
```

If you have multiple Python versions installed, you can target 3.10 explicitly:

```powershell
setup-3.10.bat
```

Then start the GUI with:

```powershell
gui.ps1
```

or:

```cmd
gui.bat
```

## Clone the Repository

Clone with submodules:

```cmd
git clone --recursive https://github.com/bmaltais/kohya_ss.git
cd kohya_ss
```

> The `--recursive` flag ensures all submodules are fetched.

## Run the Setup Script

Run:

```cmd
setup.bat
```

If you have multiple Python versions installed:

```cmd
setup-3.10.bat
```

During the Accelerate configuration step, accept the proposed default values unless you know your hardware demands otherwise. The amount of VRAM on your GPU does **not** affect the values used.

*Optional: cuDNN 8.9.6.50*

This optional step improves training speed on NVIDIA 30X0/40X0 GPUs, allowing larger batch sizes and faster training.

Run:

```cmd
setup.bat
```

Then select:

```
2. (Optional) Install cudnn files (if you want to use the latest supported cudnn version)
```

## Start the GUI

If you installed using the `pip` method, use either the `gui.ps1` or `gui.bat` script located in the root directory. Run whichever you prefer in a terminal, passing the desired command line arguments. For example:

```powershell
gui.ps1 --listen 127.0.0.1 --server_port 7860 --inbrowser --share
```

or:

```cmd
gui.bat --listen 127.0.0.1 --server_port 7860 --inbrowser --share
```

You can also run `kohya_gui.py` directly with the same flags.

For help:

```cmd
gui.bat --help
```

This method uses a Python virtual environment managed via pip.

### Available CLI Options

```text
--help                 show this help message and exit
--config CONFIG        Path to the toml config file for interface defaults
--debug                Debug on
--listen LISTEN        IP to listen on for connections to Gradio
--username USERNAME    Username for authentication
--password PASSWORD    Password for authentication
--server_port SERVER_PORT
                       Port to run the server listener on
--inbrowser            Open in browser
--share                Share the gradio UI
--headless             Is the server headless
--language LANGUAGE    Set custom language
--use-ipex             Use IPEX environment
--use-rocm             Use ROCm environment
--do_not_use_shell     Enforce not to use shell=True when running external commands
--do_not_share         Do not share the gradio UI
--requirements REQUIREMENTS
                       requirements file to use for validation
--root_path ROOT_PATH
                       `root_path` for Gradio to enable reverse proxy support. e.g. /kohya_ss
--noverify             Disable requirements verification
```

## Upgrade Instructions

To upgrade your environment, pull the latest changes and rerun setup:

```cmd
git pull
setup.bat
```

# Linux – Installation (uv method)

This is the recommended setup for most Linux users. On macOS, please use the **pip method**.

## Table of Contents

- [Linux – Installation (uv method)](#linux--installation-uv-method)
  - [Table of Contents](#table-of-contents)
  - [Prerequisites](#prerequisites)
  - [Installation Steps](#installation-steps)
  - [Clone the Repository](#clone-the-repository)
  - [Start the GUI](#start-the-gui)
    - [Available CLI Options](#available-cli-options)
  - [Upgrade Instructions](#upgrade-instructions)
  - [Optional: Install Location Details](#optional-install-location-details)

## Prerequisites

- **Python 3.10.9** (or higher, but below 3.13)

> [!NOTE]
> The `uv` environment will use the Python version specified in the `.python-version` file at the root of the repository. You can edit this file to change the Python version used by `uv`.

- **Git** – Required for cloning the repository
- **NVIDIA CUDA Toolkit 12.8**
- **NVIDIA GPU** – Required for training; VRAM needs vary
- **(Optional) NVIDIA cuDNN** – Improves training speed and allows larger batch sizes

## Installation Steps

1. Install Python. Make sure you have Python 3.10.9 or higher (but below 3.13) installed on your system.
   On Ubuntu 22.04 or later:

   ```bash
   sudo apt update
   sudo apt install python3.11 python3.11-venv git
   ```

2. Install the [CUDA 12.8 Toolkit](https://developer.nvidia.com/cuda-12-8-0-download-archive?target_os=Linux&target_arch=x86_64), following the instructions for your distribution.

> [!NOTE]
> macOS is only supported via the **pip method**.
> CUDA is usually not required and may not be compatible with Apple Silicon GPUs.

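
For example, to pin `uv` to Python 3.11, write that version to the `.python-version` file at the repository root. A minimal illustration (run from the repository root):

```shell
# Pin the interpreter version uv resolves for this project.
echo "3.11" > .python-version
cat .python-version   # shows: 3.11
```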
## Clone the Repository

To install the project, you must first clone the repository **with submodules**:

```bash
git clone --recursive https://github.com/bmaltais/kohya_ss.git
cd kohya_ss
```

> The `--recursive` flag ensures that all required Git submodules are also cloned.

## Start the GUI

To launch the GUI service, run `./gui-uv.sh` or run the `kohya_gui.py` script directly. Use the command line arguments listed below to configure the underlying service.

### Available CLI Options

```text
--help                 show this help message and exit
--config CONFIG        Path to the toml config file for interface defaults
--debug                Debug on
--listen LISTEN        IP to listen on for connections to Gradio
--username USERNAME    Username for authentication
--password PASSWORD    Password for authentication
--server_port SERVER_PORT
                       Port to run the server listener on
--inbrowser            Open in browser
--share                Share the gradio UI
--headless             Is the server headless
--language LANGUAGE    Set custom language
--use-ipex             Use IPEX environment
--use-rocm             Use ROCm environment
--do_not_use_shell     Enforce not to use shell=True when running external commands
--do_not_share         Do not share the gradio UI
--requirements REQUIREMENTS
                       requirements file to use for validation
--root_path ROOT_PATH
                       `root_path` for Gradio to enable reverse proxy support. e.g. /kohya_ss
--noverify             Disable requirements verification
```

When you run `gui-uv.sh`, it first checks whether `uv` is installed on your system. If `uv` is not found, the script asks whether you'd like to attempt an automatic installation: choose `Y` (or `y`) to let the script try to install `uv` for you, or `N` (or `n`) to cancel. If you cancel, install `uv` manually from [https://astral.sh/uv](https://astral.sh/uv) before running `gui-uv.sh` again.

```shell
./gui-uv.sh --listen 127.0.0.1 --server_port 7860 --inbrowser --share
```

If you are running on a headless server, use:

```shell
./gui-uv.sh --headless --listen 127.0.0.1 --server_port 7860 --share
```

This script utilizes the `uv` managed environment.

## Upgrade Instructions

To upgrade your installation to a new version:

1. Open a terminal and navigate to the root directory of the project.
2. Pull the latest changes from the repository:

   ```bash
   git pull
   ```

3. Updates to the Python environment are handled automatically the next time you run `gui-uv.sh`. No separate setup script execution is needed.

## Optional: Install Location Details

On Linux, the setup script installs in the current directory if possible.

If that fails:

- Fallback: `/opt/kohya_ss`
- If not writable: `$HOME/kohya_ss`
- If all fail: stays in the current directory

To override the location, use:

```bash
./setup.sh -d /your/custom/path
```

On macOS, the behavior is similar but defaults to `$HOME/kohya_ss`.

If you use interactive mode, the default Accelerate values are:

- Machine: `This machine`
- Compute: `None`
- Others: `No`

# Windows – Installation (uv method)

Recommended for most Windows users.

## Table of Contents

- [Prerequisites](#prerequisites)
- [Installation Steps](#installation-steps)
- [Clone the Repository](#clone-the-repository)
- [Start the GUI](#start-the-gui)
- [Available CLI Options](#available-cli-options)
- [Upgrade Instructions](#upgrade-instructions)

## Prerequisites

- [Python 3.11.9](https://www.python.org/ftp/python/3.11.9/python-3.11.9-amd64.exe) – enable "Add to PATH"

  > [!NOTE]
  > The `uv` environment will use the Python version specified in the `.python-version` file at the root of the repository. You can edit this file to change the Python version used by `uv`.

- [Git for Windows](https://git-scm.com/download/win)
- [CUDA Toolkit 12.8](https://developer.nvidia.com/cuda-12-8-0-download-archive?target_os=Windows&target_arch=x86_64)
- **NVIDIA GPU** – Required for training; VRAM needs vary
- **(Optional) NVIDIA cuDNN** – Improves training speed and allows larger batch sizes
- **(Optional) Visual Studio Redistributables**: [vc_redist.x64.exe](https://aka.ms/vs/17/release/vc_redist.x64.exe)

## Installation Steps

1. Install [Python 3.11.9](https://www.python.org/ftp/python/3.11.9/python-3.11.9-amd64.exe)
   ✅ Enable the "Add to PATH" option during setup

2. Install the [CUDA 12.8 Toolkit](https://developer.nvidia.com/cuda-12-8-0-download-archive?target_os=Windows&target_arch=x86_64)

3. Install [Git](https://git-scm.com/download/win)

4. Install the [Visual Studio Redistributables](https://aka.ms/vs/17/release/vc_redist.x64.exe)

## Clone the Repository

Clone with submodules:

```powershell
git clone --recursive https://github.com/bmaltais/kohya_ss.git
cd kohya_ss
```

## Start the GUI

To launch the GUI, run:

```cmd
.\gui-uv.bat
```

If `uv` is not installed, the script will prompt you:

- Press `Y` to install `uv` automatically
- Or press `N` to cancel and install `uv` manually from [https://astral.sh/uv](https://astral.sh/uv)

Once installed, you can also start the GUI with additional flags:

```cmd
.\gui-uv.bat --listen 127.0.0.1 --server_port 7860 --inbrowser --share
```

This script utilizes the `uv` managed environment and handles dependencies and updates automatically.

### Available CLI Options

```text
--help                 show this help message and exit
--config CONFIG        Path to the toml config file for interface defaults
--debug                Debug on
--listen LISTEN        IP to listen on for connections to Gradio
--username USERNAME    Username for authentication
--password PASSWORD    Password for authentication
--server_port SERVER_PORT
                       Port to run the server listener on
--inbrowser            Open in browser
--share                Share the gradio UI
--headless             Is the server headless
--language LANGUAGE    Set custom language
--use-ipex             Use IPEX environment
--use-rocm             Use ROCm environment
--do_not_use_shell     Enforce not to use shell=True when running external commands
--do_not_share         Do not share the gradio UI
--requirements REQUIREMENTS
                       requirements file to use for validation
--root_path ROOT_PATH
                       `root_path` for Gradio to enable reverse proxy support. e.g. /kohya_ss
--noverify             Disable requirements verification
```

## Upgrade Instructions

1. Pull the latest changes:

   ```powershell
   git pull
   ```

2. Run `gui-uv.bat` again. It will update the environment automatically.

### gui.sh

```diff
@@ -81,6 +81,8 @@ else
     if [ "$RUNPOD" = false ]; then
         if [[ "$@" == *"--use-ipex"* ]]; then
             REQUIREMENTS_FILE="$SCRIPT_DIR/requirements_linux_ipex.txt"
+        elif [ -x "$(command -v nvidia-smi)" ]; then
+            REQUIREMENTS_FILE="$SCRIPT_DIR/requirements_linux.txt"
         elif [[ "$@" == *"--use-rocm"* ]] || [ -x "$(command -v rocminfo)" ] || [ -f "/opt/rocm/bin/rocminfo" ]; then
            REQUIREMENTS_FILE="$SCRIPT_DIR/requirements_linux_rocm.txt"
         else
```
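
The vendor-probe order in this hunk (NVIDIA checked before ROCm) can be exercised standalone; the sketch below is a hypothetical illustration, and `detect_toolkit` is an assumed name, not part of the repository:

```shell
# Hypothetical sketch of the GPU-vendor probe used to pick a requirements file.
detect_toolkit() {
  if command -v nvidia-smi >/dev/null 2>&1; then
    echo "nvidia"   # -> requirements_linux.txt
  elif command -v rocminfo >/dev/null 2>&1 || [ -f /opt/rocm/bin/rocminfo ]; then
    echo "rocm"     # -> requirements_linux_rocm.txt
  else
    echo "cpu"      # -> fall through to the default branch
  fi
}
```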
### pyproject.toml

```diff
@@ -1,6 +1,6 @@
 [project]
 name = "kohya-ss"
-version = "25.2.0"
+version = "25.2.1"
 description = "Kohya_ss GUI"
 readme = "README.md"
 requires-python = ">=3.10,<3.12"
@@ -15,7 +15,7 @@ dependencies = [
     "einops==0.7.0",
     "fairscale==0.4.13",
     "ftfy==6.1.1",
-    "gradio>=5.23.1",
+    "gradio>=5.34.1",
     "huggingface-hub==0.29.3",
     "imagesize==1.4.1",
     "invisible-watermark==0.2.0",
@@ -27,6 +27,7 @@ dependencies = [
     "onnxruntime-gpu==1.19.2",
     "open-clip-torch==2.20.0",
     "opencv-python==4.10.0.84",
+    "pip",
     "prodigy-plus-schedule-free==1.8.0",
     "prodigyopt==1.1.2",
     "protobuf==3.20.3",
```
### requirements file

```diff
@@ -7,7 +7,7 @@ easygui==0.98.3
 einops==0.7.0
 fairscale==0.4.13
 ftfy==6.1.1
-gradio>=5.23.1
+gradio>=5.34.1
 huggingface-hub==0.29.3
 imagesize==1.4.1
 invisible-watermark==0.2.0
@@ -28,7 +28,7 @@ schedulefree==1.4
 scipy==1.11.4
 # for T5XXL tokenizer (SD3/FLUX)
 sentencepiece==0.2.0
-timm==0.6.7
+timm==1.0.15
 tk==0.1.0
 toml==0.10.2
 transformers==4.44.2
```
### sd-scripts submodule

```diff
@@ -1 +1 @@
-Subproject commit 61eda7627874f1e0c5e31e710a698c49e8f0e332
+Subproject commit 3e6935a07edcb944407840ef74fcaf6fcad352f7
```
### Windows batch script

```diff
@@ -14,7 +14,7 @@ call .\venv\Scripts\deactivate.bat
 call .\venv\Scripts\activate.bat
 
 REM first make sure we have setuptools available in the venv
-python -m pip install --require-virtualenv --no-input -q -q setuptools
+python -m pip install --require-virtualenv --no-input -q setuptools
 
 REM Check if the batch was started via double-click
 IF /i "%comspec% /c %~0 " equ "%cmdcmdline:"=%" (
```
### setup.sh

```diff
@@ -1,6 +1,19 @@
 #!/usr/bin/env bash
 cd "$(dirname "$0")"
 
+# Function to get the python command
+get_python_command() {
+    if command -v python3.11 &>/dev/null; then
+        echo "python3.11"
+    elif command -v python3.10 &>/dev/null; then
+        echo "python3.10"
+    elif command -v python3 &>/dev/null; then
+        echo "python3"
+    else
+        echo "python" # Fallback, though this might not have venv
+    fi
+}
+
 # Function to display help information
 display_help() {
     cat <<EOF
```
```diff
@@ -188,27 +201,29 @@ create_symlinks() {
 # Function to install Python dependencies
 install_python_dependencies() {
     local TEMP_REQUIREMENTS_FILE
+    local PYTHON_CMD
+
+    PYTHON_CMD=$(get_python_command)
+
+    if [ "$PYTHON_CMD" == "python" ]; then # Check if get_python_command returned the fallback
+        echo "Could not find python3.11, python3.10, or python3."
+        echo "Please install a compatible Python version."
+        return 1
+    fi
 
     # Switch to local virtual env
-    echo "Switching to virtual Python environment."
+    echo "Switching to virtual Python environment using $PYTHON_CMD."
     if ! inDocker; then
         # Check if conda environment is already activated
         if [ -n "$CONDA_PREFIX" ]; then
             echo "Detected active conda environment: $CONDA_DEFAULT_ENV"
             echo "Using existing conda environment at: $CONDA_PREFIX"
             # No need to create or activate a venv, conda env is already active
-        elif command -v python3.10 >/dev/null; then
-            python3.10 -m venv "$DIR/venv"
-            # Activate the virtual environment
-            source "$DIR/venv/bin/activate"
-        elif command -v python3 >/dev/null; then
-            python3 -m venv "$DIR/venv"
-            # Activate the virtual environment
-            source "$DIR/venv/bin/activate"
         else
-            echo "Valid python3 or python3.10 binary not found."
-            echo "Cannot proceed with the python steps."
-            return 1
+            "$PYTHON_CMD" -m venv "$DIR/venv"
+            # Activate the virtual environment
+            # shellcheck source=/dev/null
+            source "$DIR/venv/bin/activate"
         fi
     fi
```
```diff
@@ -218,6 +233,8 @@ install_python_dependencies() {
         python "$SCRIPT_DIR/setup/setup_linux.py" --platform-requirements-file=requirements_runpod.txt $QUIET
     elif [ "$USE_IPEX" = true ]; then
         python "$SCRIPT_DIR/setup/setup_linux.py" --platform-requirements-file=requirements_linux_ipex.txt $QUIET
+    elif [ -x "$(command -v nvidia-smi)" ]; then
+        python "$SCRIPT_DIR/setup/setup_linux.py" --platform-requirements-file=requirements_linux.txt $QUIET
     elif [ "$USE_ROCM" = true ] || [ -x "$(command -v rocminfo)" ] || [ -f "/opt/rocm/bin/rocminfo" ]; then
         echo "Upgrading pip for ROCm."
         pip install --upgrade pip # PyTorch ROCm is too large to install with older pip
```
```diff
@@ -545,35 +562,37 @@ if [[ "$OSTYPE" == "lin"* ]]; then
     if [ "$RUNPOD" = true ]; then
         if inDocker; then
             # We get the site-packages from python itself, then cut the string, so no other code changes required.
-            VENV_DIR=$(python -c "import site; print(site.getsitepackages()[0])")
-            VENV_DIR="${VENV_DIR%/lib/python3.10/site-packages}"
+            PYTHON_CMD_FALLBACK=$(get_python_command) # Use a fallback if PYTHON_CMD is not set (e.g. not called from install_python_dependencies)
+            VENV_PYTHON_VERSION=$($PYTHON_CMD_FALLBACK -c "import sys; print(f'{sys.version_info.major}.{sys.version_info.minor}')")
+            VENV_DIR=$($PYTHON_CMD_FALLBACK -c "import site; print(site.getsitepackages()[0])")
+            VENV_DIR="${VENV_DIR%/lib/python${VENV_PYTHON_VERSION}/site-packages}"
         fi
 
         # Symlink paths
-        libnvinfer_plugin_symlink="$VENV_DIR/lib/python3.10/site-packages/tensorrt/libnvinfer_plugin.so.7"
-        libnvinfer_symlink="$VENV_DIR/lib/python3.10/site-packages/tensorrt/libnvinfer.so.7"
-        libcudart_symlink="$VENV_DIR/lib/python3.10/site-packages/nvidia/cuda_runtime/lib/libcudart.so.11.0"
+        libnvinfer_plugin_symlink="$VENV_DIR/lib/python${VENV_PYTHON_VERSION}/site-packages/tensorrt/libnvinfer_plugin.so.7"
+        libnvinfer_symlink="$VENV_DIR/lib/python${VENV_PYTHON_VERSION}/site-packages/tensorrt/libnvinfer.so.7"
+        libcudart_symlink="$VENV_DIR/lib/python${VENV_PYTHON_VERSION}/site-packages/nvidia/cuda_runtime/lib/libcudart.so.11.0"
 
         #Target file paths
-        libnvinfer_plugin_target="$VENV_DIR/lib/python3.10/site-packages/tensorrt/libnvinfer_plugin.so.8"
-        libnvinfer_target="$VENV_DIR/lib/python3.10/site-packages/tensorrt/libnvinfer.so.8"
-        libcudart_target="$VENV_DIR/lib/python3.10/site-packages/nvidia/cuda_runtime/lib/libcudart.so.12"
+        libnvinfer_plugin_target="$VENV_DIR/lib/python${VENV_PYTHON_VERSION}/site-packages/tensorrt/libnvinfer_plugin.so.8"
+        libnvinfer_target="$VENV_DIR/lib/python${VENV_PYTHON_VERSION}/site-packages/tensorrt/libnvinfer.so.8"
+        libcudart_target="$VENV_DIR/lib/python${VENV_PYTHON_VERSION}/site-packages/nvidia/cuda_runtime/lib/libcudart.so.12"
 
         # echo "Checking symlinks now."
         # create_symlinks "$libnvinfer_plugin_symlink" "$libnvinfer_plugin_target"
         # create_symlinks "$libnvinfer_symlink" "$libnvinfer_target"
         # create_symlinks "$libcudart_symlink" "$libcudart_target"
 
-        # if [ -d "${VENV_DIR}/lib/python3.10/site-packages/tensorrt/" ]; then
-        #     export LD_LIBRARY_PATH="${LD_LIBRARY_PATH}:${VENV_DIR}/lib/python3.10/site-packages/tensorrt/"
+        # if [ -d "${VENV_DIR}/lib/python${VENV_PYTHON_VERSION}/site-packages/tensorrt/" ]; then
+        #     export LD_LIBRARY_PATH="${LD_LIBRARY_PATH}:${VENV_DIR}/lib/python${VENV_PYTHON_VERSION}/site-packages/tensorrt/"
         # else
-        #     echo "${VENV_DIR}/lib/python3.10/site-packages/tensorrt/ not found; not linking library."
+        #     echo "${VENV_DIR}/lib/python${VENV_PYTHON_VERSION}/site-packages/tensorrt/ not found; not linking library."
         # fi
 
-        # if [ -d "${VENV_DIR}/lib/python3.10/site-packages/tensorrt/" ]; then
-        #     export LD_LIBRARY_PATH="${LD_LIBRARY_PATH}:${VENV_DIR}/lib/python3.10/site-packages/nvidia/cuda_runtime/lib/"
+        # if [ -d "${VENV_DIR}/lib/python${VENV_PYTHON_VERSION}/site-packages/tensorrt/" ]; then
+        #     export LD_LIBRARY_PATH="${LD_LIBRARY_PATH}:${VENV_DIR}/lib/python${VENV_PYTHON_VERSION}/site-packages/nvidia/cuda_runtime/lib/"
         # else
-        #     echo "${VENV_DIR}/lib/python3.10/site-packages/nvidia/cuda_runtime/lib/ not found; not linking library."
+        #     echo "${VENV_DIR}/lib/python${VENV_PYTHON_VERSION}/site-packages/nvidia/cuda_runtime/lib/ not found; not linking library."
         # fi
 
         configure_accelerate
```
```diff
@@ -618,25 +637,36 @@ elif [[ "$OSTYPE" == "darwin"* ]]; then
     check_storage_space
 
     # Install base python packages
-    echo "Installing Python 3.10 if not found."
-    if ! brew ls --versions python@3.10 >/dev/null; then
-        echo "Installing Python 3.10."
-        brew install python@3.10 >&3
-    fi
-    echo "Installing Python-TK 3.10 if not found."
-    if ! brew ls --versions python-tk@3.10 >/dev/null; then
-        echo "Installing Python TK 3.10."
-        brew install python-tk@3.10 >&3
-    else
-        echo "Python Tkinter 3.10 found!"
-    fi
+    echo "Checking for Python 3.11 or 3.10."
+    if brew ls --versions python@3.11 >/dev/null; then
+        echo "Python 3.11 found!"
+        PYTHON_BREW_VERSION="python@3.11"
+    elif brew ls --versions python@3.10 >/dev/null; then
+        echo "Python 3.10 found!"
+        PYTHON_BREW_VERSION="python@3.10"
+    else
+        echo "Neither Python 3.11 nor 3.10 found via Homebrew. Installing Python 3.11."
+        brew install python@3.11 >&3
+        PYTHON_BREW_VERSION="python@3.11"
+    fi
+
+    echo "Installing $PYTHON_BREW_VERSION if not linked or found."
+    brew install "$PYTHON_BREW_VERSION" >&3
+
+    echo "Checking for Python TK for $PYTHON_BREW_VERSION."
+    PYTHON_TK_BREW_VERSION="python-tk@${PYTHON_BREW_VERSION#*@}" # Extracts e.g., 3.11 from python@3.11
+    if ! brew ls --versions "$PYTHON_TK_BREW_VERSION" >/dev/null; then
+        echo "Installing $PYTHON_TK_BREW_VERSION."
+        brew install "$PYTHON_TK_BREW_VERSION" >&3
+    else
+        echo "$PYTHON_TK_BREW_VERSION found!"
+    fi
 
     update_kohya_ss
 
     if ! install_python_dependencies; then
-        echo "You may need to install Python. The command for this is brew install python@3.10."
+        echo "You may need to install Python. The command for this is brew install $PYTHON_BREW_VERSION."
     fi
 
     configure_accelerate
```
### setup/setup_common.py

```diff
@@ -35,7 +35,8 @@ def check_python_version():
             return False
         return True
     except Exception as e:
-        log.error(f"Failed to verify Python version. Error: {e}")
+        log.error(f"An unexpected error occurred while verifying Python version: {e}")
+        log.error("This might indicate a problem with your Python installation or environment configuration.")
         return False
 
 
```
```diff
@@ -49,12 +50,17 @@ def update_submodule(quiet=True):
         git_command.append("--quiet")
 
     try:
-        subprocess.run(git_command, check=True)
+        subprocess.run(git_command, check=True, capture_output=True, text=True)
         log.info("Submodule initialized and updated.")
     except subprocess.CalledProcessError as e:
-        log.error(f"Error during Git operation: {e}")
-    except FileNotFoundError as e:
-        log.error(e)
+        log.error(f"Error updating submodule. Git command: '{' '.join(git_command)}' failed with exit code {e.returncode}.")
+        if e.stdout:
+            log.error(f"Git stdout: {e.stdout.strip()}")
+        if e.stderr:
+            log.error(f"Git stderr: {e.stderr.strip()}")
+        log.error("Please ensure Git is installed and accessible in your PATH. Also, check your internet connection and repository permissions.")
+    except FileNotFoundError:
+        log.error(f"Error updating submodule: Git command not found. Please ensure Git is installed and accessible in your PATH.")
 
 
 def clone_or_checkout(repo_url, branch_or_tag, directory_name):
```
```diff
@@ -106,7 +112,14 @@ def clone_or_checkout(repo_url, branch_or_tag, directory_name):
         else:
             log.info(f"Already at required branch/tag: {branch_or_tag}")
     except subprocess.CalledProcessError as e:
-        log.error(f"Error during Git operation: {e}")
+        log.error(f"Error during Git operation. Command: '{' '.join(e.cmd)}' failed with exit code {e.returncode}.")
+        if e.stdout:
+            log.error(f"Git stdout: {e.stdout.strip()}")
+        if e.stderr:
+            log.error(f"Git stderr: {e.stderr.strip()}")
+        log.error(f"Failed to clone or checkout {repo_url} ({branch_or_tag}). Please check the repository URL, branch/tag name, your internet connection, and Git installation.")
+    except FileNotFoundError:
+        log.error(f"Error during Git operation: Git command not found. Please ensure Git is installed and accessible in your PATH.")
     finally:
         os.chdir(original_dir)
```
```diff
@@ -189,12 +202,27 @@ def install_requirements_inbulk(
             log.info(line.strip()) if show_stdout else None
 
         # Capture and log any errors
-        _, stderr = process.communicate()
+        stdout, stderr = process.communicate()
         if process.returncode != 0:
-            log.error(f"Failed to install requirements: {stderr.strip()}")
+            log.error(f"Failed to install requirements from {requirements_file}. Pip command: '{' '.join(cmd)}'. Exit code: {process.returncode}")
+            if stdout:
+                log.error(f"Pip stdout: {stdout.strip()}")
+            if stderr:
+                log.error(f"Pip stderr: {stderr.strip()}")
+            log.error("Please check the requirements file path, your internet connection, and ensure pip is functioning correctly.")
+        else:
+            if stdout and show_stdout and not installed("uv"):  # uv already prints its output
+                for line in stdout.splitlines():
+                    if "Requirement already satisfied" not in line:
+                        log.info(line.strip())
+            if stderr:  # Always log stderr if present, even on success
+                log.warning(f"Pip stderr (even on success): {stderr.strip()}")
 
-    except subprocess.CalledProcessError as e:
-        log.error(f"An error occurred while installing requirements: {e}")
+    except FileNotFoundError:
+        log.error(f"Error installing requirements: '{cmd[0]}' command not found. Please ensure it is installed and in your PATH.")
+    except Exception as e:
+        log.error(f"An unexpected error occurred while installing requirements from {requirements_file}: {e}")
 
 
 def configure_accelerate(run_accelerate=False):
```
```diff
@@ -288,27 +316,7 @@ def check_torch():
     # This function was adapted from code written by vladimandic: https://github.com/vladimandic/automatic/commits/master
     #
 
-    # Check for toolkit
-    if shutil.which("nvidia-smi") is not None or os.path.exists(
-        os.path.join(
-            os.environ.get("SystemRoot") or r"C:\Windows",
-            "System32",
-            "nvidia-smi.exe",
-        )
-    ):
-        log.info("nVidia toolkit detected")
-    elif shutil.which("rocminfo") is not None or os.path.exists(
-        "/opt/rocm/bin/rocminfo"
-    ):
-        log.info("AMD toolkit detected")
-    elif (
-        shutil.which("sycl-ls") is not None
-        or os.environ.get("ONEAPI_ROOT") is not None
-        or os.path.exists("/opt/intel/oneapi")
-    ):
-        log.info("Intel OneAPI toolkit detected")
-    else:
-        log.info("Using CPU-only Torch")
+    _check_hardware_toolkit()
 
     try:
         import torch
```
@@ -323,42 +331,102 @@ def check_torch():
             log.warning(f"Failed to import intel_extension_for_pytorch: {e}")
         log.info(f"Torch {torch.__version__}")
 
-        if torch.cuda.is_available():
-            if torch.version.cuda:
-                # Log nVidia CUDA and cuDNN versions
-                log.info(
-                    f'Torch backend: nVidia CUDA {torch.version.cuda} cuDNN {torch.backends.cudnn.version() if torch.backends.cudnn.is_available() else "N/A"}'
-                )
-            elif torch.version.hip:
-                # Log AMD ROCm HIP version
-                log.info(f"Torch backend: AMD ROCm HIP {torch.version.hip}")
-            else:
-                log.warning("Unknown Torch backend")
-
-            # Log information about detected GPUs
-            for device in [
-                torch.cuda.device(i) for i in range(torch.cuda.device_count())
-            ]:
-                log.info(
-                    f"Torch detected GPU: {torch.cuda.get_device_name(device)} VRAM {round(torch.cuda.get_device_properties(device).total_memory / 1024 / 1024)} Arch {torch.cuda.get_device_capability(device)} Cores {torch.cuda.get_device_properties(device).multi_processor_count}"
-                )
-        # Check if XPU is available
-        elif hasattr(torch, "xpu") and torch.xpu.is_available():
-            # Log Intel IPEX version
-            log.info(f"Torch backend: Intel IPEX {ipex.__version__}")
-            for device in [
-                torch.xpu.device(i) for i in range(torch.xpu.device_count())
-            ]:
-                log.info(
-                    f"Torch detected GPU: {torch.xpu.get_device_name(device)} VRAM {round(torch.xpu.get_device_properties(device).total_memory / 1024 / 1024)} Compute Units {torch.xpu.get_device_properties(device).max_compute_units}"
-                )
-        else:
-            log.warning("Torch reports GPU not available")
+        _log_gpu_info(torch)
 
         return int(torch.__version__[0])
-    except Exception as e:
-        log.error(f"Could not load torch: {e}")
+    except ImportError as e:
+        log.error(f"Failed to import Torch: {e}. Please ensure PyTorch is installed correctly for your system.")
+        log.error("You might need to install or reinstall PyTorch. Check https://pytorch.org/get-started/locally/ for instructions.")
+        return 0
+    except Exception as e:
+        log.error(f"An unexpected error occurred while checking Torch: {e}")
+        return 0
+
+
+def _check_nvidia_toolkit():
+    """Checks for nVidia toolkit."""
+    if shutil.which("nvidia-smi") is not None or os.path.exists(
+        os.path.join(
+            os.environ.get("SystemRoot") or r"C:\Windows",
+            "System32",
+            "nvidia-smi.exe",
+        )
+    ):
+        log.info("nVidia toolkit detected")
+        return True
+    return False
+
+
+def _check_amd_toolkit():
+    """Checks for AMD toolkit."""
+    if shutil.which("rocminfo") is not None or os.path.exists(
+        "/opt/rocm/bin/rocminfo"
+    ):
+        log.info("AMD toolkit detected")
+        return True
+    return False
+
+
+def _check_intel_oneapi_toolkit():
+    """Checks for Intel OneAPI toolkit."""
+    if (
+        shutil.which("sycl-ls") is not None
+        or os.environ.get("ONEAPI_ROOT") is not None
+        or os.path.exists("/opt/intel/oneapi")
+    ):
+        log.info("Intel OneAPI toolkit detected")
+        return True
+    return False
+
+
+def _check_hardware_toolkit():
+    """Checks for available hardware toolkits."""
+    if _check_nvidia_toolkit():
+        return
+    if _check_amd_toolkit():
+        return
+    if _check_intel_oneapi_toolkit():
+        return
+    log.info("Using CPU-only Torch")
+
+
+def _log_gpu_info(torch_module):
+    """Logs GPU information for available backends."""
+    if torch_module.cuda.is_available():
+        if torch_module.version.cuda:
+            # Log nVidia CUDA and cuDNN versions
+            log.info(
+                f'Torch backend: nVidia CUDA {torch_module.version.cuda} cuDNN {torch_module.backends.cudnn.version() if torch_module.backends.cudnn.is_available() else "N/A"}'
+            )
+        elif torch_module.version.hip:
+            # Log AMD ROCm HIP version
+            log.info(f"Torch backend: AMD ROCm HIP {torch_module.version.hip}")
+        else:
+            log.warning("Unknown Torch backend")
+
+        # Log information about detected GPUs
+        for i in range(torch_module.cuda.device_count()):
+            device = torch_module.cuda.device(i)
+            log.info(
+                f"Torch detected GPU: {torch_module.cuda.get_device_name(device)} VRAM {round(torch_module.cuda.get_device_properties(device).total_memory / 1024 / 1024)} Arch {torch_module.cuda.get_device_capability(device)} Cores {torch_module.cuda.get_device_properties(device).multi_processor_count}"
+            )
+    # Check if XPU is available
+    elif hasattr(torch_module, "xpu") and torch_module.xpu.is_available():
+        # Log Intel IPEX version
+        # Ensure ipex is imported before accessing __version__
+        try:
+            import intel_extension_for_pytorch as ipex
+            log.info(f"Torch backend: Intel IPEX {ipex.__version__}")
+        except ImportError:
+            log.warning("Intel IPEX version not available.")
+
+        for i in range(torch_module.xpu.device_count()):
+            device = torch_module.xpu.device(i)
+            log.info(
+                f"Torch detected GPU: {torch_module.xpu.get_device_name(device)} VRAM {round(torch_module.xpu.get_device_properties(device).total_memory / 1024 / 1024)} Compute Units {torch_module.xpu.get_device_properties(device).max_compute_units}"
+            )
+    else:
+        log.warning("Torch reports GPU not available")
 
 
 # report current version of code
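The new `_check_*_toolkit` helpers each probe for one vendor's tool and the dispatcher stops at the first match, falling back to CPU. That first-match-wins chain can be sketched independently of the project (the name `detect_toolkit` and the returned labels are illustrative, not this codebase's API):

```python
import os
import shutil

def detect_toolkit():
    """Return the first detected accelerator toolkit label, else 'cpu'."""
    checks = [
        # Each probe mirrors one of the diff's helper functions
        ("nvidia", lambda: shutil.which("nvidia-smi") is not None),
        ("amd", lambda: shutil.which("rocminfo") is not None
                        or os.path.exists("/opt/rocm/bin/rocminfo")),
        ("intel", lambda: shutil.which("sycl-ls") is not None
                          or os.environ.get("ONEAPI_ROOT") is not None
                          or os.path.exists("/opt/intel/oneapi")),
    ]
    for name, probe in checks:
        if probe():
            return name
    return "cpu"

backend = detect_toolkit()
```

Keeping each probe as a separate callable (rather than one long `if/elif` chain, as in the pre-patch code) makes it easy to add a backend or unit-test a single probe in isolation, which is the point of the refactor.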
@@ -376,9 +444,9 @@ def check_repo_version():
 
         log.info(f"Kohya_ss GUI version: {release}")
     except Exception as e:
-        log.error(f"Could not read release: {e}")
+        log.error(f"Could not read release file at './.release': {e}")
     else:
-        log.debug("Could not read release...")
+        log.debug("Could not read release file './.release' as it does not exist.")
 
 
 # execute git command
@@ -418,12 +486,13 @@ def git(arg: str, folder: str = None, ignore: bool = False):
     )
     txt = txt.strip()
     if result.returncode != 0 and not ignore:
-        global errors
-        errors += 1
-        log.error(f"Error running git: {folder} / {arg}")
+        # global errors  # This variable is not defined in this file. Assuming it's a remnant from an older version or a different context.
+        # errors += 1
+        log.error(f"Error running git command 'git {arg}' in folder '{folder or '.'}'. Exit code: {result.returncode}")
         if "or stash them" in txt:
-            log.error(f"Local changes detected: check log for details...")
-        log.debug(f"Git output: {txt}")
+            log.error(f"Local changes detected. Please commit or stash them before running this command again. Full git output below.")
+        log.error(f"Git output: {txt}")  # Changed from log.debug to log.error for better visibility of error details
+        log.error("Please ensure Git is installed, the repository exists, you have the necessary permissions, and there are no conflicts or uncommitted changes.")
 
 
 def pip(arg: str, ignore: bool = False, quiet: bool = False, show_stdout: bool = False):
@@ -477,8 +546,9 @@ def pip(arg: str, ignore: bool = False, quiet: bool = False, show_stdout: bool = False):
     )
     txt = txt.strip()
     if result.returncode != 0 and not ignore:
-        log.error(f"Error running pip: {arg}")
+        log.error(f"Error running pip command: '{' '.join(pip_cmd)}'. Exit code: {result.returncode}")
         log.error(f"Pip output: {txt}")
+        log.error("Please check the package name, version, your internet connection, and ensure pip is functioning correctly.")
     return txt
 
 
@@ -677,11 +747,24 @@ def run_cmd(run_cmd):
     """
     log.debug(f"Running command: {run_cmd}")
     try:
-        subprocess.run(run_cmd, shell=True, check=True, env=os.environ)
+        process = subprocess.run(run_cmd, shell=True, check=True, env=os.environ, capture_output=True, text=True)
+        log.debug(f"Command executed successfully: {run_cmd}")
+        if process.stdout:
+            log.debug(f"Stdout: {process.stdout.strip()}")
+        if process.stderr:
+            log.debug(f"Stderr: {process.stderr.strip()}")
     except subprocess.CalledProcessError as e:
-        log.error(f"Error occurred while running command: {run_cmd}")
-        log.error(f"Error: {e}")
+        log.error(f"Error occurred while running command: '{run_cmd}'. Exit code: {e.returncode}")
+        if e.stdout:
+            log.error(f"Stdout: {e.stdout.strip()}")
+        if e.stderr:
+            log.error(f"Stderr: {e.stderr.strip()}")
+        log.error("Please check the command syntax, permissions, and ensure all required programs are installed and in PATH.")
+    except FileNotFoundError:
+        # This might occur if the command itself (e.g., the first part of run_cmd) is not found
+        log.error(f"Error running command: '{run_cmd}'. The command or a part of it was not found. Please ensure it is correctly spelled and accessible in your PATH.")
+    except Exception as e:
+        log.error(f"An unexpected error occurred while running command '{run_cmd}': {e}")
 
 
 def clear_screen():
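The patched `run_cmd` relies on the fact that when `capture_output=True` is passed to `subprocess.run`, a raised `CalledProcessError` carries the captured `stdout` and `stderr` on the exception object itself. A minimal sketch of that pattern (the wrapper name `run_cmd_safe` is illustrative; it returns a tuple instead of logging):

```python
import subprocess

def run_cmd_safe(command):
    """Run a shell command; return (ok, stdout, stderr) instead of raising."""
    try:
        process = subprocess.run(
            command, shell=True, check=True, capture_output=True, text=True
        )
        return True, process.stdout, process.stderr
    except subprocess.CalledProcessError as e:
        # With capture_output=True, the exception carries both streams
        return False, e.stdout, e.stderr

ok, out, err = run_cmd_safe("echo hello")
failed, _, _ = run_cmd_safe("exit 3")
```

Without `capture_output=True`, `e.stdout` and `e.stderr` would be `None`, so the `if e.stdout:` guards in the patched code are what keep both configurations safe.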