mirror of https://github.com/bmaltais/kohya_ss
* docs: Add documentation for `allowed_paths`

  Adds documentation for the `allowed_paths` config option to the `README.md` file.

* docs: Add server section to config example

  Adds the `[server]` section, including the `allowed_paths` option, to the `config example.toml` file.

---------

Co-authored-by: bmaltais <bernard@ducourier.com>
Co-authored-by: google-labs-jules[bot] <161369871+google-labs-jules[bot]@users.noreply.github.com>

Branch: pull/3345/head^2
parent: c8dc6d14fb
commit: 60d99eceff
```diff
@@ -18,4 +18,4 @@ jobs:
       - uses: actions/checkout@v4
       - name: typos-action
-        uses: crate-ci/typos@v1.34.0
+        uses: crate-ci/typos@v1.32.0
```
```diff
@@ -0,0 +1 @@
+3.11
```

README.md (76 lines changed)
```diff
@@ -20,36 +20,35 @@ Support for Linux and macOS is also available. While Linux support is actively m
 
 ## Table of Contents
 
-- [Kohya's GUI](#kohyas-gui)
-- [Table of Contents](#table-of-contents)
-- [Installation Options](#installation-options)
-- [Local Installation Overview](#local-installation-overview)
-- [`uv` vs `pip` – What's the Difference?](#uv-vs-pip--whats-the-difference)
-- [Cloud Installation Overview](#cloud-installation-overview)
-- [🦒 Colab](#-colab)
-- [Runpod, Novita, Docker](#runpod-novita-docker)
-- [Custom Path Defaults with `config.toml`](#custom-path-defaults-with-configtoml)
-- [LoRA](#lora)
+- [Installation Options](#installation-options)
+- [Local Installation Overview](#local-installation-overview)
+- [`uv` vs `pip` – What's the Difference?](#uv-vs-pip--whats-the-difference)
+- [Cloud Installation Overview](#cloud-installation-overview)
+- [Colab](#-colab)
+- [Runpod, Novita, Docker](#runpod-novita-docker)
+- [Custom Path Defaults](#custom-path-defaults)
+- [Server Options](#server-options)
+- [LoRA](#lora)
 - [Sample image generation during training](#sample-image-generation-during-training)
 - [Troubleshooting](#troubleshooting)
 - [Page File Limit](#page-file-limit)
 - [No module called tkinter](#no-module-called-tkinter)
 - [LORA Training on TESLA V100 - GPU Utilization Issue](#lora-training-on-tesla-v100---gpu-utilization-issue)
 - [SDXL training](#sdxl-training)
 - [Masked loss](#masked-loss)
 - [Guides](#guides)
 - [Using Accelerate Lora Tab to Select GPU ID](#using-accelerate-lora-tab-to-select-gpu-id)
 - [Starting Accelerate in GUI](#starting-accelerate-in-gui)
 - [Running Multiple Instances (linux)](#running-multiple-instances-linux)
 - [Monitoring Processes](#monitoring-processes)
 - [Interesting Forks](#interesting-forks)
 - [Contributing](#contributing)
 - [License](#license)
 - [Change History](#change-history)
 - [v25.0.3](#v2503)
 - [v25.0.2](#v2502)
 - [v25.0.1](#v2501)
 - [v25.0.0](#v2500)
 
 ## Installation Options
```
```diff
@@ -103,9 +102,9 @@ I would like to express my gratitude to camenduru for their valuable contributio
 
 These options are for users running training on hosted GPU infrastructure or containers.
 
-- **[Runpod setup](docs/Installation/installation_runpod.md)** – Ready-made GPU background training via templates.
-- **[Novita setup](docs/Installation/installation_novita.md)** – Similar to Runpod, but integrated into the Novita UI.
-- **[Docker setup](docs/Installation/installation_docker.md)** – For developers/sysadmins using containerized environments.
+- **[Runpod setup](docs/runpod_setup.md)** – Ready-made GPU background training via templates.
+- **[Novita setup](docs/novita_setup.md)** – Similar to Runpod, but integrated into the Novita UI.
+- **[Docker setup](docs/docker.md)** – For developers/sysadmins using containerized environments.
 
 ## Custom Path Defaults with `config.toml`
```
````diff
@@ -182,6 +181,21 @@ If you prefer to name your configuration file differently or store it in another
 
 By effectively using `config.toml`, you can significantly speed up your training setup process. Always refer to the `config example.toml` for the most up-to-date list of configurable paths.
 
+## Server Options
+
+The `config.toml` file can also be used to configure the Gradio server.
+
+### `allowed_paths`
+
+The `allowed_paths` option allows you to specify a list of directories that the application can access. This is useful if you want to store your models, datasets, or other files on an external drive or in a location outside the application's root directory.
+
+**Example:**
+
+```toml
+[server]
+allowed_paths = ["/mnt/external_drive/models", "/home/user/datasets"]
+```
+
 ## LoRA
 
 To train a LoRA, you can currently use the `train_network.py` code. You can create a LoRA network by using the all-in-one GUI.
````
@ -1,463 +1,192 @@
|
|||
# Copy this file to config.toml and customize it for your setup.
|
||||
# This file provides default settings for the Kohya_ss GUI.
|
||||
# Lines starting with '#' are comments and are ignored.
|
||||
# Copy this file and name it config.toml
|
||||
# Edit the values to suit your needs
|
||||
|
||||
[settings]
|
||||
# General UI behavior settings.
|
||||
use_shell = false # Use shell for running sd-scripts. Set to true only if necessary for your system. (Default: false)
|
||||
expand_all_accordions = false # Expand all UI accordions by default. (Default: false)
|
||||
use_shell = false # Use shell during process run of sd-scripts oython code. Most secure is false but some systems may require it to be true to properly run sd-scripts.
|
||||
|
||||
[server]
|
||||
allowed_paths = [] # A list of paths that the application is allowed to access.
|
||||
|
||||
# Default folders location
|
||||
[model]
|
||||
# Default paths and options related to the primary model.
|
||||
models_dir = "./models" # Base directory for pretrained models. Initial value for model dropdown.
|
||||
pretrained_model_name_or_path = "runwayml/stable-diffusion-v1-5" # Default specific model path/name. Initial value for model dropdown.
|
||||
output_name = "last" # Default name for the trained model output files.
|
||||
train_data_dir = "" # Default directory for training images. Initial value for dropdown.
|
||||
dataset_config = "" # Default path to dataset TOML configuration file. Initial value for dropdown.
|
||||
training_comment = "" # Optional comment to store in metadata.
|
||||
|
||||
# Model saving options.
|
||||
save_model_as = "safetensors" # Format for saving the model (ckpt, safetensors, diffusers, diffusers_safetensors). (Default: safetensors)
|
||||
save_precision = "fp16" # Precision for saving the model (fp16, bf16, float). (Default: fp16)
|
||||
|
||||
# Model architecture flags.
|
||||
v2 = false # True if using a Stable Diffusion v2 model. (Default: false)
|
||||
v_parameterization = false # True if using v_parameterization. (Default: false)
|
||||
sdxl = false # True if using an SDXL model. (Default: false)
|
||||
sd3 = false # True if using an SD3 model. (Default: false)
|
||||
flux1 = false # True if using a Flux.1 model. (Default: false)
|
||||
models_dir = "./models" # Pretrained model name or path
|
||||
output_name = "new model" # Trained model output name
|
||||
train_data_dir = "./data" # Image folder (containing training images subfolders) / Image folder (containing training images)
|
||||
dataset_config = "./test.toml" # Dataset config file (Optional. Select the toml configuration file to use for the dataset)
|
||||
training_comment = "Some training comment" # Training comment
|
||||
save_model_as = "safetensors" # Save model as (ckpt, safetensors, diffusers, diffusers_safetensors)
|
||||
save_precision = "bf16" # Save model precision (fp16, bf16, float)
|
||||
|
||||
[folders]
|
||||
# Default locations for various output and data directories.
|
||||
output_dir = "./outputs" # Directory for trained model outputs. Initial value for dropdown.
|
||||
reg_data_dir = "./data/reg" # Directory for regularization images. Initial value for dropdown.
|
||||
logging_dir = "./logs" # Directory for logs (e.g., TensorBoard). Initial value for dropdown.
|
||||
output_dir = "./outputs" # Output directory for trained model
|
||||
reg_data_dir = "./data/reg" # Regularisation directory
|
||||
logging_dir = "./logs" # Logging directory
|
||||
|
||||
[configuration]
|
||||
# Settings for loading and saving GUI configuration presets (typically .json files).
|
||||
config_dir = "./presets" # Directory for storing and loading UI configuration presets. Initial value for dropdown.
|
||||
config_dir = "./presets" # Load/Save Config file
|
||||
|
||||
[accelerate_launch]
|
||||
# Parameters for Hugging Face Accelerate.
|
||||
mixed_precision = "fp16" # Mixed precision mode (no, fp16, bf16, fp8). (Default: fp16)
|
||||
num_processes = 1 # Total number of parallel processes. (Default: 1)
|
||||
num_machines = 1 # Total number of machines for distributed training. (Default: 1)
|
||||
num_cpu_threads_per_process = 2 # Number of CPU threads per process. (Default: 2)
|
||||
|
||||
# Dynamo backend options (experimental).
|
||||
dynamo_backend = "no" # Dynamo backend (no, eager, aot_eager, inductor, etc.). (Default: no)
|
||||
dynamo_mode = "default" # Dynamo mode (default, reduce-overhead, max-autotune). (Default: default)
|
||||
dynamo_use_fullgraph = false # Dynamo use_fullgraph option. (Default: false)
|
||||
dynamo_use_dynamic = false # Dynamo use_dynamic option. (Default: false)
|
||||
|
||||
# Multi-GPU and process communication.
|
||||
multi_gpu = false # True if using multiple GPUs. (Default: false)
|
||||
gpu_ids = "" # Comma-separated list of GPU IDs (e.g., 0,1). Leave empty for Accelerate default.
|
||||
main_process_port = 0 # Main process port for Accelerate (0 for default). (Default: 0)
|
||||
extra_accelerate_launch_args = "" # Extra arguments for `accelerate launch`.
|
||||
dynamo_backend = "no" # Dynamo backend
|
||||
dynamo_mode = "default" # Dynamo mode
|
||||
dynamo_use_dynamic = false # Dynamo use dynamic
|
||||
dynamo_use_fullgraph = false # Dynamo use fullgraph
|
||||
extra_accelerate_launch_args = "" # Extra accelerate launch args
|
||||
gpu_ids = "" # GPU IDs
|
||||
main_process_port = 0 # Main process port
|
||||
mixed_precision = "fp16" # Mixed precision (fp16, bf16, fp8)
|
||||
multi_gpu = false # Multi GPU
|
||||
num_cpu_threads_per_process = 2 # Number of CPU threads per process
|
||||
num_machines = 1 # Number of machines
|
||||
num_processes = 1 # Number of processes
|
||||
|
||||
[basic]
|
||||
# Basic training parameters.
|
||||
train_batch_size = 1 # Number of samples per batch. (Default: 1)
|
||||
epoch = 1 # Number of training epochs. (Default: 1)
|
||||
max_train_epochs = 0 # Maximum training epochs (overrides max_train_steps if > 0). (Default: 0)
|
||||
max_train_steps = 1600 # Maximum training steps (sd-scripts default is 1600). (Default: 1600)
|
||||
|
||||
save_every_n_epochs = 1 # Frequency of saving checkpoints (in epochs). (Default: 1)
|
||||
caption_extension = ".txt" # File extension for caption files. (Default: .txt)
|
||||
seed = 0 # Random seed for reproducibility (0 for random). (Default: 0)
|
||||
|
||||
# Latent caching options.
|
||||
cache_latents = true # Cache latents in VRAM. (Default: true)
|
||||
cache_latents_to_disk = false # Cache latents to disk (if VRAM caching is disabled or fills up). (Default: false)
|
||||
|
||||
# Learning rate scheduler and optimizer settings.
|
||||
lr_scheduler = "cosine" # LR scheduler type (adafactor, constant, constant_with_warmup, cosine, etc.). (Default: cosine)
|
||||
lr_scheduler_type = "" # Specific LR scheduler class (Optional, e.g., CosineAnnealingLR).
|
||||
optimizer = "AdamW8bit" # Optimizer type (AdamW, AdamW8bit, Adafactor, Lion, Prodigy, etc.). (Default: AdamW8bit)
|
||||
max_grad_norm = 1.0 # Maximum gradient norm for clipping. (Default: 1.0)
|
||||
lr_scheduler_args = "" # Extra arguments for the LR scheduler (e.g., "milestones=[1,10,30,50] gamma=0.1").
|
||||
optimizer_args = "" # Extra arguments for the optimizer (e.g., "relative_step=True scale_parameter=True warmup_init=True").
|
||||
|
||||
# Learning rates.
|
||||
learning_rate = 0.0001 # Learning rate for U-Net (or general if others are not set). (Default: 0.0001)
|
||||
learning_rate_te = 0.0001 # Learning rate for Text Encoder (SD1.5). (Default: 0.0001)
|
||||
learning_rate_te1 = 0.0001 # Learning rate for Text Encoder 1 (SDXL). (Default: 0.0001)
|
||||
learning_rate_te2 = 0.0001 # Learning rate for Text Encoder 2 (SDXL). (Default: 0.0001)
|
||||
|
||||
# Learning rate warmup settings.
|
||||
lr_warmup = 10 # LR warmup percentage of total steps. (Default: 10)
|
||||
lr_warmup_steps = 0 # LR warmup steps (overrides percentage if > 0). (Default: 0)
|
||||
|
||||
# Scheduler-specific parameters.
|
||||
lr_scheduler_num_cycles = 1 # Number of cycles for cosine_with_restarts or polynomial schedulers. (Default: 1)
|
||||
lr_scheduler_power = 1.0 # Power for polynomial scheduler. (Default: 1.0)
|
||||
|
||||
# Resolution and bucketing.
|
||||
max_resolution = "512,512" # Maximum training resolution (width,height). (Default: "512,512")
|
||||
stop_text_encoder_training = 0 # Percentage of total steps after which to stop Text Encoder training. (Default: 0)
|
||||
enable_bucket = true # Enable image bucketing. (Default: true)
|
||||
min_bucket_reso = 256 # Minimum bucket resolution. (Default: 256)
|
||||
max_bucket_reso = 2048 # Maximum bucket resolution. (Default: 2048)
|
||||
cache_latents = true # Cache latents
|
||||
cache_latents_to_disk = false # Cache latents to disk
|
||||
caption_extension = ".txt" # Caption extension
|
||||
enable_bucket = true # Enable bucket
|
||||
epoch = 1 # Epoch
|
||||
learning_rate = 0.0001 # Learning rate
|
||||
learning_rate_te = 0.0001 # Learning rate text encoder
|
||||
learning_rate_te1 = 0.0001 # Learning rate text encoder 1
|
||||
learning_rate_te2 = 0.0001 # Learning rate text encoder 2
|
||||
lr_scheduler = "cosine" # LR Scheduler
|
||||
lr_scheduler_args = "" # LR Scheduler args
|
||||
lr_scheduler_type = "" # LR Scheduler type
|
||||
lr_warmup = 0 # LR Warmup (% of total steps)
|
||||
lr_scheduler_num_cycles = 1 # LR Scheduler num cycles
|
||||
lr_scheduler_power = 1.0 # LR Scheduler power
|
||||
max_bucket_reso = 2048 # Max bucket resolution
|
||||
max_grad_norm = 1.0 # Max grad norm
|
||||
max_resolution = "512,512" # Max resolution
|
||||
max_train_steps = 0 # Max train steps
|
||||
max_train_epochs = 0 # Max train epochs
|
||||
min_bucket_reso = 256 # Min bucket resolution
|
||||
optimizer = "AdamW8bit" # Optimizer (AdamW, AdamW8bit, Adafactor, DAdaptation, DAdaptAdaGrad, DAdaptAdam, DAdaptAdan, DAdaptAdanIP, DAdaptAdamPreprint, DAdaptLion, DAdaptSGD, Lion, Lion8bit, PagedAdam
|
||||
optimizer_args = "" # Optimizer args
|
||||
save_every_n_epochs = 1 # Save every n epochs
|
||||
save_every_n_steps = 1 # Save every n steps
|
||||
seed = 1234 # Seed
|
||||
stop_text_encoder_training = 0 # Stop text encoder training (% of total steps)
|
||||
train_batch_size = 1 # Train batch size
|
||||
|
||||
[advanced]
|
||||
# Advanced training parameters.
|
||||
no_token_padding = false # Disable token padding. (Default: false)
|
||||
gradient_accumulation_steps = 1 # Number of steps to accumulate gradients before an optimizer update. (Default: 1)
|
||||
weighted_captions = false # Enable weighted captions. (Default: false)
|
||||
prior_loss_weight = 1.0 # Weight for prior preservation loss (Dreambooth). (Default: 1.0)
|
||||
vae_dir = "./models/vae" # Path to VAE model directory or file. Initial value for dropdown.
|
||||
additional_parameters = "" # Additional command-line parameters for sd-scripts.
|
||||
adaptive_noise_scale = 0 # Adaptive noise scale
|
||||
additional_parameters = "" # Additional parameters
|
||||
bucket_no_upscale = true # Don't upscale bucket resolution
|
||||
bucket_reso_steps = 64 # Bucket resolution steps
|
||||
caption_dropout_every_n_epochs = 0 # Caption dropout every n epochs
|
||||
caption_dropout_rate = 0 # Caption dropout rate
|
||||
color_aug = false # Color augmentation
|
||||
clip_skip = 1 # Clip skip
|
||||
debiased_estimation_loss = false # Debiased estimation loss
|
||||
flip_aug = false # Flip augmentation
|
||||
fp8_base = false # FP8 base training (experimental)
|
||||
full_bf16 = false # Full bf16 training (experimental)
|
||||
full_fp16 = false # Full fp16 training (experimental)
|
||||
gradient_accumulation_steps = 1 # Gradient accumulation steps
|
||||
gradient_checkpointing = false # Gradient checkpointing
|
||||
huber_c = 0.1 # The huber loss parameter. Only used if one of the huber loss modes (huber or smooth l1) is selected with loss_type
|
||||
huber_schedule = "snr" # The type of loss to use and whether it's scheduled based on the timestep
|
||||
ip_noise_gamma = 0 # IP noise gamma
|
||||
ip_noise_gamma_random_strength = false # IP noise gamma random strength (true, false)
|
||||
keep_tokens = 0 # Keep tokens
|
||||
log_tracker_config_dir = "./logs" # Log tracker configs directory
|
||||
log_tracker_name = "" # Log tracker name
|
||||
loss_type = "l2" # Loss type (l2, huber, smooth_l1)
|
||||
masked_loss = false # Masked loss
|
||||
max_data_loader_n_workers = 0 # Max data loader n workers (string)
|
||||
max_timestep = 1000 # Max timestep
|
||||
max_token_length = 150 # Max token length ("75", "150", "225")
|
||||
mem_eff_attn = false # Memory efficient attention
|
||||
min_snr_gamma = 0 # Min SNR gamma
|
||||
min_timestep = 0 # Min timestep
|
||||
multires_noise_iterations = 0 # Multires noise iterations
|
||||
multires_noise_discount = 0 # Multires noise discount
|
||||
no_token_padding = false # Disable token padding
|
||||
noise_offset = 0 # Noise offset
|
||||
noise_offset_random_strength = false # Noise offset random strength (true, false)
|
||||
noise_offset_type = "Original" # Noise offset type ("Original", "Multires")
|
||||
persistent_data_loader_workers = false # Persistent data loader workers
|
||||
prior_loss_weight = 1.0 # Prior loss weight
|
||||
random_crop = false # Random crop
|
||||
save_every_n_steps = 0 # Save every n steps
|
||||
save_last_n_steps = 0 # Save last n steps
|
||||
save_last_n_steps_state = 0 # Save last n steps state
|
||||
save_state = false # Save state
|
||||
save_state_on_train_end = false # Save state on train end
|
||||
scale_v_pred_loss_like_noise_pred = false # Scale v pred loss like noise pred
|
||||
shuffle_caption = false # Shuffle captions
|
||||
state_dir = "./outputs" # Resume from saved training state
|
||||
log_with = "" # Logger to use ["wandb", "tensorboard", "all", ""]
|
||||
vae_batch_size = 0 # VAE batch size
|
||||
vae_dir = "./models/vae" # VAEs folder path
|
||||
v_pred_like_loss = 0 # V pred like loss weight
|
||||
wandb_api_key = "" # Wandb api key
|
||||
wandb_run_name = "" # Wandb run name
|
||||
weighted_captions = false # Weighted captions
|
||||
xformers = "xformers" # CrossAttention (none, sdp, xformers)
|
||||
|
||||
# Loss function settings.
|
||||
loss_type = "l2" # Loss type (huber, smooth_l1, l1, l2). (Default: "l2")
|
||||
huber_schedule = "snr" # Huber schedule (constant, exponential, snr). (Default: "snr")
|
||||
huber_c = 0.1 # Delta for Huber loss. (Default: 0.1)
|
||||
huber_scale = 1.0 # Scale for Huber loss. (Default: 1.0)
|
||||
|
||||
# Checkpoint saving frequency (step-based).
|
||||
save_every_n_steps = 0 # Save checkpoint every N steps (0 to disable). (Default: 0)
|
||||
save_last_n_steps = 0 # Number of recent step-based checkpoints to keep (0 to keep all). (Default: 0)
|
||||
save_last_n_steps_state = 0 # Number of recent step-based training states to keep. (Default: 0)
|
||||
|
||||
# Checkpoint saving frequency (epoch-based, supplements basic.save_every_n_epochs).
|
||||
save_last_n_epochs = 0 # Number of recent epoch-based checkpoints to keep (0 to keep all). (Default: 0)
|
||||
save_last_n_epochs_state = 0 # Number of recent epoch-based training states to keep. (Default: 0)
|
||||
|
||||
# Tokenizer and text encoder settings.
|
||||
keep_tokens = 0 # Keep N tokens from text tokenizer. (Default: 0)
|
||||
clip_skip = 1 # Number of CLIP layers to skip. (Default: 1)
|
||||
max_token_length = 75 # Maximum token length (75, 150, 225). (Default: 75)
|
||||
|
||||
# Precision and performance settings.
|
||||
fp8_base = false # Enable fp8 base training (experimental). (Default: false)
|
||||
fp8_base_unet = false # Enable fp8 base unet (Flux.1 specific). (Default: false)
|
||||
full_fp16 = false # Full fp16 training (experimental). (Default: false)
|
||||
full_bf16 = false # Full bf16 training (experimental). (Default: false)
|
||||
highvram = false # Disable VRAM optimizations (for systems with ample VRAM). (Default: false)
|
||||
lowvram = false # Enable VRAM optimizations (for systems with limited VRAM). (Default: false)
|
||||
skip_cache_check = false # Skip checking cache for latents, speeding up start but risky if data changes. (Default: false)
|
||||
gradient_checkpointing = false # Enable gradient checkpointing to save VRAM. (Default: false)
|
||||
persistent_data_loader_workers = false # Use persistent workers for DataLoader. (Default: false)
|
||||
mem_eff_attn = false # Enable memory-efficient attention. (Default: false)
|
||||
xformers = "xformers" # Cross-attention implementation (none, sdpa, xformers). (Default: "xformers")
|
||||
|
||||
# Augmentation settings.
|
||||
color_aug = false # Enable color augmentation. (Default: false)
|
||||
flip_aug = false # Enable flip augmentation. (Default: false)
|
||||
shuffle_caption = false # Enable caption shuffling. (Default: false)
|
||||
|
||||
# Loss and bucketing adjustments.
|
||||
masked_loss = false # Enable masked loss. (Default: false)
|
||||
scale_v_pred_loss_like_noise_pred = false # Scale v_prediction loss like noise prediction (for SD v2 models). (Default: false)
|
||||
min_snr_gamma = 0 # Minimum SNR gamma (0 to disable). (Default: 0)
|
||||
debiased_estimation_loss = false # Enable debiased estimation loss. (Default: false)
|
||||
bucket_no_upscale = true # Disable upscaling for bucketing. (Default: true)
|
||||
bucket_reso_steps = 64 # Resolution step size for bucketing. (Default: 64)
|
||||
random_crop = false # Use random crop instead of center crop for bucketing. (Default: false)
|
||||
v_pred_like_loss = 0 # V Pred like loss (0 to disable). (Default: 0)
|
||||
|
||||
# Noise scheduling and offset parameters.
|
||||
min_timestep = 0 # Minimum timestep for noise scheduling (0 for image only). (Default: 0)
|
||||
max_timestep = 1000 # Maximum timestep for noise scheduling (1000 for noise only). (Default: 1000)
|
||||
noise_offset_type = "Original" # Type of noise offset ("Original", "Multires"). (Default: "Original")
|
||||
noise_offset = 0 # Noise offset value (0 to disable; recommended 0.05-0.15 for "Original"). (Default: 0)
|
||||
noise_offset_random_strength = false # Use random strength for noise offset. (Default: false)
|
||||
adaptive_noise_scale = 0 # Adaptive noise scale (adds to noise_offset; 0 to disable). (Default: 0)
|
||||
multires_noise_iterations = 0 # Multires noise iterations (0 to disable; recommended 6-10 for "Multires"). (Default: 0)
|
||||
multires_noise_discount = 0.3 # Multires noise discount (recommended 0.8; 0.1-0.3 for small LoRA datasets). (Default: 0.3)
|
||||
ip_noise_gamma = 0 # IP noise gamma (0 to disable; recommended ~0.1). (Default: 0)
|
||||
ip_noise_gamma_random_strength = false # Use random strength for IP noise gamma. (Default: false)
|
||||
|
||||
# Caption dropout.
|
||||
caption_dropout_every_n_epochs = 0 # Frequency of caption dropout (in epochs; 0 to disable). (Default: 0)
|
||||
caption_dropout_rate = 0 # Rate of caption dropout (0 to disable). (Default: 0)
|
||||
|
||||
# VAE and memory management.
|
||||
vae_batch_size = 0 # Batch size for VAE processing (0 for default/disabled). (Default: 0)
|
||||
blocks_to_swap = 0 # Number of blocks to swap for fused backward/optimizer (0 for no swap). (Default: 0)
|
||||
|
||||
# Training state saving and resuming.
|
||||
save_state = false # Save training state periodically. (Default: false)
|
||||
save_state_on_train_end = false # Save training state at the end of training. (Default: false)
|
||||
state_dir = "./outputs" # Directory to resume training state from. Initial value for dropdown.
|
||||
max_data_loader_n_workers = 0 # Maximum number of workers for DataLoader (0 for system default). (Default: 0)
|
||||
|
||||
# Logging settings.
|
||||
log_with = "" # Logging integration (wandb, tensorboard, all, or empty for none). (Default: "")
|
||||
wandb_api_key = "" # Weights & Biases API key.
|
||||
wandb_run_name = "" # Weights & Biases run name.
|
||||
log_config = false # Log training parameters to W&B. (Default: false)
|
||||
log_tracker_name = "" # Name for the log tracker.
|
||||
log_tracker_config_dir = "./logs" # Directory for log tracker configuration. Initial value for dropdown.
|
||||
|
||||
[sdxl]
|
||||
# SDXL specific parameters.
|
||||
sdxl_cache_text_encoder_outputs = false # Cache SDXL text encoder outputs in VRAM. (Default: false)
|
||||
sdxl_no_half_vae = false # Disable half-precision VAE for SDXL (original example was true). (Default: false)
|
||||
fused_backward_pass = false # Enable fused backward pass for SDXL. (Default: false)
|
||||
fused_optimizer_groups = 0 # Number of fused optimizer groups for SDXL. (Default: 0)
|
||||
disable_mmap_load_safetensors = false # Disable memory-mapped loading of safetensors for SDXL. (Default: false)
|
||||
# This next section can be used to set default values for the Dataset Preparation section
|
||||
# The "Destination training direcroty" field will be equal to "train_data_dir" as specified above
|
||||
[dataset_preparation]
|
||||
class_prompt = "class" # Class prompt
|
||||
images_folder = "/some/folder/where/images/are" # Training images directory
|
||||
instance_prompt = "instance" # Instance prompt
|
||||
reg_images_folder = "/some/folder/where/reg/images/are" # Regularisation images directory
|
||||
reg_images_repeat = 1 # Regularisation images repeat
|
||||
util_regularization_images_repeat_input = 1 # Regularisation images repeat input
|
||||
util_training_images_repeat_input = 40 # Training images repeat input
|
||||
|
||||
[huggingface]
|
||||
# Hugging Face Hub integration settings.
|
||||
repo_id = "" # Repository ID on Hugging Face Hub (e.g., username/my-model).
|
||||
token = "" # Hugging Face API token.
|
||||
repo_type = "" # Type of repository on Hugging Face Hub (e.g., model, dataset).
|
||||
repo_visibility = "" # Visibility of the repository (public, private).
|
||||
path_in_repo = "" # Path within the repository to save/load files.
|
||||
|
||||
# Hugging Face state saving/resuming.
|
||||
save_state_to_huggingface = false # Save training state to Hugging Face Hub. (Default: false)
|
||||
resume_from_huggingface = "" # Resume training from Hugging Face Hub (format: repo_id/path:revision:type).
|
||||
async_upload = false # Enable asynchronous uploads to Hugging Face Hub. (Default: false)
|
||||
|
||||
[metadata]
|
||||
# Metadata to embed in the trained model.
|
||||
metadata_title = "" # Title for the model (defaults to output_name if empty).
|
||||
metadata_author = "" # Author of the model.
|
||||
metadata_description = "" # Description of the model.
|
||||
metadata_license = "" # License information for the model.
|
||||
metadata_tags = "" # Comma-separated tags for the model.
|
||||
async_upload = false # Async upload
|
||||
huggingface_path_in_repo = "" # Huggingface path in repo
|
||||
huggingface_repo_id = "" # Huggingface repo id
|
||||
huggingface_repo_type = "" # Huggingface repo type
|
||||
huggingface_repo_visibility = "" # Huggingface repo visibility
|
||||
huggingface_token = "" # Huggingface token
|
||||
resume_from_huggingface = "" # Resume from huggingface (ex: {repo_id}/{path_in_repo}:{revision}:{repo_type})
|
||||
save_state_to_huggingface = false # Save state to huggingface
|
||||
|
||||
[samples]
|
||||
# Image sample generation during training.
|
||||
sample_every_n_steps = 0 # Frequency of generating samples (in steps; 0 to disable). (Default: 0)
|
||||
sample_every_n_epochs = 0 # Frequency of generating samples (in epochs; 0 to disable). (Default: 0)
|
||||
sample_sampler = "euler_a" # Sampler to use for image generation (e.g., euler_a, dpm++_2m_karras). (Default: "euler_a")
|
||||
sample_prompts = "" # Prompts for sample generation (one per line).
|
||||
sample_every_n_steps = 0 # Sample every n steps
|
||||
sample_every_n_epochs = 0 # Sample every n epochs
|
||||
sample_prompts = "" # Sample prompts
|
||||
sample_sampler = "euler_a" # Sampler to use for image sampling
|
||||
|
||||
# --- UI Specific Default Settings ---
|
||||
# These sections define default values for specific tabs in the Kohya_ss GUI.

[dreambooth]
# Default UI settings for the Dreambooth tab.
expand_basic_accordion = true # Expand the 'Basic Configuration' accordion by default. (Default: true)

[finetune]
# Default UI settings for the Finetune tab.
expand_basic_accordion = true # Expand the 'Basic Configuration' accordion by default. (Default: true)

[finetune.advanced]
# Default UI settings for the 'Advanced' section within the Finetune tab.
block_lr = "" # Block learning rates (SDXL specific for finetuning).

[finetune.basic]
# Default UI settings for the 'Basic' section within the Finetune tab.
dataset_repeats = "40" # Number of times to repeat the dataset per epoch. (Default: "40")
train_text_encoder = true # Train the text encoder during finetuning. (Default: true)

[finetune.dataset_preparation]
# Default UI settings for the 'Dataset Preparation' section within the Finetune tab.
max_resolution = "512,512" # Max resolution for dataset processing. (Default: "512,512")
min_bucket_reso = "256" # Min bucket resolution for dataset processing. (Default: "256")
max_bucket_reso = "1024" # Max bucket resolution for dataset processing. (Default: "1024")
batch_size = "1" # Batch size for dataset processing. (Default: "1")
create_caption = true # Generate caption metadata. (Default: true)
create_buckets = true # Generate image bucket metadata. (Default: true)
use_latent_files = "Yes" # Use latent files ("Yes", "No", "Only"). (Default: "Yes")
caption_metadata_filename = "meta_cap.json" # Filename for caption metadata. (Default: "meta_cap.json")
latent_metadata_filename = "meta_lat.json" # Filename for latent metadata. (Default: "meta_lat.json")
full_path = true # Use full paths in metadata files. (Default: true)
weighted_captions = false # Enable weighted captions for dataset. (Default: false)
[lora]
# Default UI settings for the LoRA and LyCORIS tab.
expand_basic_accordion = true # Expand the 'Basic Configuration' accordion by default. (Default: true)
expand_lycoris_accordion = false # Expand the 'LyCORIS' specific accordion by default. (Default: false)

[lora.basic]
# Default UI settings for the 'Basic' section within the LoRA tab.
lora_type = "Standard" # Type of LoRA network (Standard, LoCon, LoHa, etc.). (Default: "Standard")
lycoris_preset = "full" # Default preset for LyCORIS if selected. (Default: "full")
network_weights = "" # Path to existing LoRA network weights to resume training.
dim_from_weights = false # Determine network dimension from weights automatically. (Default: false)

# Learning rates for LoRA components.
text_encoder_lr = 0.0 # Learning rate for text encoder LoRA components. (Default: 0.0)
t5xxl_lr = 0.0 # Learning rate for T5XXL LoRA components. (Default: 0.0)
unet_lr = 0.0001 # Learning rate for U-Net LoRA components. (Default: 0.0001)
loraplus_lr_ratio = 0.0 # LoRA+ learning rate ratio. (Default: 0.0)
loraplus_unet_lr_ratio = 0.0 # LoRA+ U-Net learning rate ratio. (Default: 0.0)
loraplus_text_encoder_lr_ratio = 0.0 # LoRA+ Text Encoder learning rate ratio. (Default: 0.0)

# Network dimensions and alpha values.
network_dim = 8 # Rank of the LoRA decomposition. (Default: 8)
network_alpha = 1.0 # Alpha for LoRA scaling (often same as network_dim). (Default: 1.0)
conv_dim = 1 # Rank for convolutional layers in LoCon/LyCORIS. (Default: 1)
conv_alpha = 1.0 # Alpha for convolutional layers. (Default: 1.0)

# Dropout and normalization.
scale_weight_norms = 0.0 # Scale weight norms (0.0 to disable). (Default: 0.0)
network_dropout = 0.0 # Dropout rate for LoRA network. (Default: 0.0)
rank_dropout = 0.0 # Rank dropout rate. (Default: 0.0)
module_dropout = 0.0 # Module dropout rate. (Default: 0.0)

# DyLoRA specific.
unit = 1 # Unit value for DyLoRA. (Default: 1)

# GGPO specific (experimental).
train_lora_ggpo = false # Train LoRA with GGPO. (Default: false)
ggpo_sigma = 0.03 # GGPO sigma value. (Default: 0.03)
ggpo_beta = 0.01 # GGPO beta value. (Default: 0.01)
[lora.advanced]
# Default UI settings for LoRA specific advanced parameters (weights, blocks, conv layers).
down_lr_weight = "" # Learning rate weights for down blocks.
mid_lr_weight = "" # Learning rate weights for mid block.
up_lr_weight = "" # Learning rate weights for up blocks.
block_lr_zero_threshold = "" # Threshold for zeroing block learning rates.
block_dims = "" # Dimensions for specific blocks.
block_alphas = "" # Alphas for specific blocks.
conv_block_dims = "" # Dimensions for specific convolutional blocks.
conv_block_alphas = "" # Alphas for specific convolutional blocks.

[lora.lycoris]
# Default UI settings for LyCORIS specific parameters.
factor = -1 # Factor for LyCORIS algorithms. (Default: -1)
bypass_mode = false # Enable bypass mode for LyCORIS. (Default: false)
dora_wd = false # Enable DoRA weight decomposition. (Default: false)
use_cp = false # Use CP decomposition for LoKR. (Default: false)
use_tucker = false # Use Tucker decomposition for LoKR. (Default: false)
use_scalar = false # Use scalar values in LoKR. (Default: false)
rank_dropout_scale = false # Scale rank dropout. (Default: false)
constrain = 0.0 # Constraint value for LoKR. (Default: 0.0)
rescaled = false # Enable rescaled LoKR. (Default: false)
train_norm = false # Train normalization layers with LyCORIS. (Default: false)
decompose_both = false # Decompose both layers in LyCORIS. (Default: false)
train_on_input = true # Train LyCORIS on input. (Default: true)
[textual_inversion]
# Default UI settings for the Textual Inversion tab.
expand_basic_accordion = true # Expand the 'Basic Configuration' accordion by default. (Default: true)

[textual_inversion.basic]
# Default UI settings for the 'Basic' section within the Textual Inversion tab.
token_string = "" # Trigger token string for the embedding.
init_word = "*" # Initialization word for the embedding. (Default: "*")
num_vectors_per_token = 1 # Number of vectors per token in the embedding. (Default: 1)
template = "caption" # Template file for training. (Default: "caption")
weights = "" # Path to existing TI embedding file to resume training.
[sd3]
# SD3 specific parameters.
weighting_scheme = "logit_normal" # Weighting scheme for SD3. (Default: "logit_normal")
logit_mean = 0.0 # Logit mean for SD3. (Default: 0.0)
logit_std = 1.0 # Logit standard deviation for SD3. (Default: 1.0)
mode_scale = 1.29 # Mode scale for SD3. (Default: 1.29)

# Paths to SD3 components.
clip_l = "" # Path to SD3 CLIP-L model.
clip_g = "" # Path to SD3 CLIP-G model.
t5xxl = "" # Path to SD3 T5XXL model.

# Saving options for SD3 components.
save_clip = false # Save CLIP model during SD3 training. (Default: false)
save_t5xxl = false # Save T5XXL model during SD3 training. (Default: false)

# T5XXL settings for SD3.
t5xxl_device = "" # Device for T5XXL processing (e.g., "cpu", "cuda").
t5xxl_dtype = "bf16" # Data type for T5XXL (e.g., "bf16", "fp16", "float32"). (Default: "bf16")

# Batch size and caching for SD3 text encoders.
text_encoder_batch_size = 1 # Batch size for SD3 text encoders. (Default: 1)
cache_text_encoder_outputs = false # Cache SD3 text encoder outputs in VRAM. (Default: false)
cache_text_encoder_outputs_to_disk = false # Cache SD3 text encoder outputs to disk. (Default: false)

# Dropout rates for SD3 text encoders.
clip_l_dropout_rate = 0.0 # Dropout rate for SD3 CLIP-L. (Default: 0.0)
clip_g_dropout_rate = 0.0 # Dropout rate for SD3 CLIP-G. (Default: 0.0)
t5_dropout_rate = 0.0 # Dropout rate for SD3 T5. (Default: 0.0)

# Performance options for SD3.
fused_backward_pass = false # Enable fused backward pass for SD3. (Default: false)
disable_mmap_load_safetensors = false # Disable memory-mapped loading of safetensors for SD3. (Default: false)
enable_scaled_pos_embed = false # Enable scaled positional embeddings for SD3. (Default: false)
pos_emb_random_crop_rate = 0.0 # Random crop rate for positional embeddings in SD3. (Default: 0.0)
[flux1]
# Flux.1 specific parameters.
expand_accordion = true # Expand the main Flux.1 accordion in UI by default. (Default: true)

# Paths to Flux.1 components.
ae = "" # Path to Flux.1 Autoencoder model.
clip_l = "" # Path to Flux.1 CLIP-L model.
t5xxl = "" # Path to Flux.1 T5XXL model.

# Core Flux.1 parameters.
discrete_flow_shift = 3.0 # Discrete flow shift value. (Default: 3.0)
model_prediction_type = "sigma_scaled" # Model prediction type (e.g., "sigma_scaled"). (Default: "sigma_scaled")
timestep_sampling = "sigma" # Timestep sampling method (e.g., "sigma"). (Default: "sigma")
apply_t5_attn_mask = false # Apply T5 attention mask. (Default: false)
split_mode = false # Enable split mode. (Default: false)
train_blocks = "all" # Blocks to train (e.g., "all", "single", "double"). (Default: "all")
split_qkv = false # Split QKV computations. (Default: false)
train_t5xxl = false # Train T5XXL component. (Default: false)
cpu_offload_checkpointing = false # Offload checkpointing to CPU to save VRAM. (Default: false)
guidance_scale = 3.5 # Guidance scale for inference. (Default: 3.5)

# T5XXL settings for Flux.1.
t5xxl_max_token_length = 512 # Maximum token length for T5XXL. (Default: 512)

# Performance and memory options for Flux.1.
enable_all_linear = false # Enable all linear layers. (Default: false)
cache_text_encoder_outputs = false # Cache Flux.1 text encoder outputs in VRAM. (Default: false)
cache_text_encoder_outputs_to_disk = false # Cache Flux.1 text encoder outputs to disk. (Default: false)
mem_eff_save = false # Enable memory-efficient saving. (Default: false)
single_blocks_to_swap = 0 # Number of single blocks to swap. (Default: 0)
double_blocks_to_swap = 0 # Number of double blocks to swap. (Default: 0)
blockwise_fused_optimizers = false # Enable blockwise fused optimizers. (Default: false)
flux_fused_backward_pass = false # Enable fused backward pass specific to Flux.1. (Default: false)

# Block indices for training.
train_double_block_indices = "all" # Indices of double blocks to train. (Default: "all")
train_single_block_indices = "all" # Indices of single blocks to train. (Default: "all")

# Dimensionality settings for Flux.1 components.
img_attn_dim = "" # Image attention dimension.
img_mlp_dim = "" # Image MLP dimension.
img_mod_dim = "" # Image modulator dimension.
single_dim = "" # Single dimension.
txt_attn_dim = "" # Text attention dimension.
txt_mlp_dim = "" # Text MLP dimension.
txt_mod_dim = "" # Text modulator dimension.
single_mod_dim = "" # Single modulator dimension.
in_dims = "" # Input dimensions.
# --- Standalone Tool Default Settings ---

[dataset_preparation]
# Default values for the standalone "Dreambooth Folder Creation" utility (not the Finetune tab's dataset prep).
# The "Destination training directory" field in that utility will use `[model].train_data_dir`.
class_prompt = "class" # Class prompt for Dreambooth style training. (Default: "class")
images_folder = "/some/folder/where/images/are" # Example path for training images directory.
instance_prompt = "instance" # Instance prompt for Dreambooth style training. (Default: "instance")
reg_images_folder = "/some/folder/where/reg/images/are" # Example path for regularization images directory.
reg_images_repeat = 1 # Repeats for regularization images. (Default: 1)
util_regularization_images_repeat_input = 1 # Input field for regularization image repeats in the utility. (Default: 1)
util_training_images_repeat_input = 40 # Input field for training image repeats in the utility. (Default: 40)
[sdxl]
# SDXL specific parameters.
disable_mmap_load_safetensors = false # Disable memory-mapped loading of safetensors. (Default: false)
fused_backward_pass = false # Enable fused backward pass. (Default: false)
fused_optimizer_groups = 0 # Number of fused optimizer groups. (Default: 0)
sdxl_cache_text_encoder_outputs = false # Cache text encoder outputs. (Default: false)
sdxl_no_half_vae = true # Do not use half precision for the VAE. (Default: true)
[wd14_caption]
# Default settings for the WD14 Tagger utility.
always_first_tags = "" # Comma-separated tags to always place at the beginning (e.g., "1girl,1boy").
append_tags = false # Append new tags instead of overwriting caption file. (Default: false)
batch_size = 8 # Batch size for processing images. (Default: 8)
caption_extension = ".txt" # Extension for caption files (e.g., ".caption", ".txt"). (Default: ".txt")
caption_separator = ", " # Separator used between tags in the caption file. (Default: ", ")
character_tag_expand = false # Expand character tags like `name_(series)` to `name, series`. (Default: false)
character_threshold = 0.35 # Confidence threshold for character tags. (Default: 0.35)
debug = false # Enable debug mode for verbose logging. (Default: false)
force_download = false # Force re-download of the tagging model. (Default: false)
frequency_tags = false # Include frequency of tags in output. (Default: false)
general_threshold = 0.35 # Confidence threshold for general tags. (Default: 0.35)
max_data_loader_n_workers = 2 # Max workers for DataLoader. (Default: 2)
onnx = true # Use ONNX runtime for inference (if available). (Default: true)
recursive = false # Process images in subdirectories recursively. (Default: false)
remove_underscore = false # Replace underscores with spaces in tags. (Default: false)
repo_id = "SmilingWolf/wd-convnext-tagger-v3" # Hugging Face repository ID for the tagger model.
tag_replacement = "" # Tag replacement rules (e.g., "source1,target1;source2,target2").
thresh = 0.36 # General confidence threshold (alternative to general_threshold). (Default: 0.36)
train_data_dir = "" # Directory of images to caption. Initial value for dropdown.
undesired_tags = "" # Comma-separated list of tags to remove from captions.
use_rating_tags = false # Include rating tags (e.g., general, sensitive, questionable, explicit). (Default: false)
use_rating_tags_as_last_tag = false # Place rating tags at the end of the caption. (Default: false)
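The `tag_replacement` rule format used by the tagger (`source1,target1;source2,target2`, with `,` and `;` escapable as `\,` and `\;`) can be parsed along these lines. This is a minimal sketch with a hypothetical helper name, not the tagger's actual implementation:

```python
import re


def parse_tag_replacement(spec: str) -> dict:
    """Parse a "source1,target1;source2,target2" spec into a {source: target} dict."""
    if not spec:
        return {}
    mapping = {}
    # Split on separators that are NOT preceded by a backslash escape.
    for pair in re.split(r"(?<!\\);", spec):
        parts = re.split(r"(?<!\\),", pair, maxsplit=1)
        if len(parts) != 2:
            continue  # ignore malformed entries
        source, target = (p.replace("\\,", ",").replace("\\;", ";") for p in parts)
        mapping[source] = target
    return mapping


print(parse_tag_replacement("1girl,woman;1boy,man"))  # → {'1girl': 'woman', '1boy': 'man'}
```

Escaped separators stay inside a tag: `parse_tag_replacement("a\\,b,c")` maps the literal tag `a,b` to `c`.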
[metadata]
metadata_title = "" # Title for model metadata (default is output_name).
metadata_author = "" # Author name for model metadata.
metadata_description = "" # Description for model metadata.
metadata_license = "" # License for model metadata.
metadata_tags = "" # Tags for model metadata.
@@ -36,18 +36,12 @@ To install the necessary components for Runpod and run kohya_ss, follow these st
 6. Connect to the public URL displayed after the installation process is completed.
 
-#### Pre-built Runpod templates
+#### Pre-built Runpod template
 
 To run from a pre-built Runpod template, you can:
 
-1. Open the Runpod template by clicking on one of the template links in the table below.
+1. Open the Runpod template by clicking on <https://runpod.io/gsc?template=ya6013lj5a&ref=w18gds2n>.
 
 2. Deploy the template on the desired host.
 
 3. Once deployed, connect to the Runpod on HTTP 3010 to access the kohya_ss GUI. You can also connect to auto1111 on HTTP 3000.
 
-| Runpod Template Version | Runpod Template Description |
-|-----------------------------------------------------------------------------------------|----------------------------------------------------|
-| [CUDA 12.4 template](https://runpod.io/console/deploy?template=uajca40f1z&ref=2xxro4sy) | Template with CUDA 12.4 for non-RTX 5090 GPU types |
-| [CUDA 12.8 template](https://runpod.io/console/deploy?template=8y5a02q55r&ref=2xxro4sy) | Template with CUDA 12.8 for RTX 5090 GPU type |
@@ -4,7 +4,6 @@ import argparse
 import subprocess
-import contextlib
 import gradio as gr
 import toml
 
 from kohya_gui.class_gui_config import KohyaSSGUIConfig
 from kohya_gui.dreambooth_gui import dreambooth_tab
@@ -78,12 +77,7 @@ def UI(**kwargs):
     log.info(f"headless: {kwargs.get('headless', False)}")
 
     # Load release and README information
-    try:
-        pyproject_data = toml.load("pyproject.toml")
-        release_info = "v" + pyproject_data["project"]["version"]
-    except Exception as e:
-        log.error(f"Error loading version from pyproject.toml: {e}")
-        release_info = "vUNKNOWN"  # Fallback version
+    release_info = read_file_content("./.release")
     readme_content = read_file_content("./README.md")
 
     # Load configuration from the specified file
@@ -108,6 +102,7 @@ def UI(**kwargs):
         "share": False if kwargs.get("do_not_share", False) else kwargs.get("share", False),
         "root_path": kwargs.get("root_path", None),
         "debug": kwargs.get("debug", False),
+        "allowed_paths": config.allowed_paths,
     }
 
     # This line filters out any key-value pairs from `launch_params` where the value is `None`, ensuring only valid parameters are passed to the `launch` function.
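The filtering the comment above describes is a plain dict comprehension over the launch parameters. A small sketch with illustrative values (the real dict is built from `kwargs` and the loaded config):

```python
# Hypothetical launch parameters; None values must not reach gradio's launch().
launch_params = {
    "server_name": "0.0.0.0",
    "server_port": None,        # not set by the user
    "root_path": None,
    "allowed_paths": ["/data/models"],  # hypothetical path
    "debug": False,             # falsy but valid -- must be kept
}

# Keep only parameters that were actually set (drop None, keep False/0/"").
filtered = {k: v for k, v in launch_params.items() if v is not None}

print(filtered)
# interface.launch(**filtered)  # how the GUI would then call gradio
```

Filtering on `is not None` rather than truthiness is the point: `"debug": False` survives, while unset options fall back to gradio's own defaults.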
@@ -142,7 +142,7 @@ class AdvancedTraining:
                 placeholder='(Optional) Use to provide additional parameters not handled by the GUI. Eg: --some_parameters "value"',
                 value=self.config.get("advanced.additional_parameters", ""),
             )
-        with gr.Accordion("Scheduled Huber Loss", open=self.config.get("settings.expand_all_accordions", False)):
+        with gr.Accordion("Scheduled Huber Loss", open=False):
             with gr.Row():
                 self.loss_type = gr.Dropdown(
                     label="Loss type",
@@ -328,7 +328,7 @@ class AdvancedTraining:
                 )
                 self.flip_aug = gr.Checkbox(
                     label="Flip augmentation",
-                    value=self.config.get("advanced.flip_aug", False),
+                    value=getattr(self.config, "advanced.flip_aug", False),
                     info="Enable horizontal flip augmentation",
                 )
                 self.masked_loss = gr.Checkbox(
@@ -599,9 +599,9 @@ class AdvancedTraining:
             )
             self.log_tracker_config = gr.Dropdown(
                 label="Log tracker config",
-                choices=[self.config.get("advanced.log_tracker_config_dir", "")]
+                choices=[self.config.get("log_tracker_config_dir", "")]
                 + list_log_tracker_config_files(self.current_log_tracker_config_dir),
-                value=self.config.get("advanced.log_tracker_config_dir", ""),
+                value=self.config.get("log_tracker_config_dir", ""),
                 info="Path to tracker config file to use for logging",
                 interactive=True,
                 allow_custom_value=True,
@@ -610,7 +610,7 @@ class AdvancedTraining:
             self.log_tracker_config,
             lambda: None,
             lambda: {
-                "choices": [self.config.get("advanced.log_tracker_config_dir", "")]
+                "choices": [self.config.get("log_tracker_config_dir", "")]
                 + list_log_tracker_config_files(self.current_log_tracker_config_dir)
             },
             "open_folder_small",
@@ -625,7 +625,7 @@ class AdvancedTraining:
             )
             self.log_tracker_config.change(
                 fn=lambda path: gr.Dropdown(
-                    choices=[self.config.get("advanced.log_tracker_config_dir", "")]
+                    choices=[self.config.get("log_tracker_config_dir", "")]
                     + list_log_tracker_config_files(path)
                 ),
                 inputs=self.log_tracker_config,
@@ -89,8 +89,8 @@ class BasicTraining:
             minimum=1,
             maximum=64,
             label="Train batch size",
-            value=self.config.get("basic.train_batch_size", 1),
-            step=1,
+            value=1,
+            step=self.config.get("basic.train_batch_size", 1),
         )
         # Initialize the epoch number input
         self.epoch = gr.Number(
@@ -123,7 +123,7 @@ class BasicTraining:
         self.caption_extension = gr.Dropdown(
             label="Caption file extension",
             choices=["", ".cap", ".caption", ".txt"],
-            value=self.config.get("basic.caption_extension", ".txt"),
+            value=".txt",
             interactive=True,
         )
 
@@ -230,7 +230,7 @@ class BasicTraining:
         """
         with gr.Row():
             # Initialize the maximum gradient norm slider
-            self.max_grad_norm = gr.Number(label='Max grad norm', value=self.config.get("basic.max_grad_norm", 1.0), interactive=True)
+            self.max_grad_norm = gr.Number(label='Max grad norm', value=1.0, interactive=True)
             # Initialize the learning rate scheduler extra arguments textbox
             self.lr_scheduler_args = gr.Textbox(
                 label="LR scheduler extra arguments",
@@ -29,7 +29,7 @@ class ConfigurationFile:
 
         # Sets the directory for storing configuration files, defaults to a 'presets' folder within the script directory.
         self.current_config_dir = self.config.get(
-            "configuration.config_dir", os.path.join(scriptdir, "presets")
+            "config_dir", os.path.join(scriptdir, "presets")
         )
 
         # Initialize the GUI components for configuration.
@@ -60,8 +60,8 @@ class ConfigurationFile:
         # Dropdown for selecting or entering the name of a configuration file.
         self.config_file_name = gr.Dropdown(
             label="Load/Save Config file",
-            choices=[self.config.get("configuration.config_dir", "")] + self.list_config_dir(self.current_config_dir),
-            value=self.config.get("configuration.config_dir", ""),
+            choices=[self.config.get("config_dir", "")] + self.list_config_dir(self.current_config_dir),
+            value=self.config.get("config_dir", ""),
             interactive=True,
             allow_custom_value=True,
         )
@@ -31,7 +31,7 @@ class flux1Training:
             return (gr.Group(visible=False), gr.Group(visible=True))
 
         with gr.Accordion(
-            "Flux.1", open=self.config.get("flux1.expand_accordion", True), visible=False, elem_classes=["flux1_background"]
+            "Flux.1", open=True, visible=False, elem_classes=["flux1_background"]
         ) as flux1_accordion:
             with gr.Group():
                 with gr.Row():
@@ -104,7 +104,7 @@ class flux1Training:
                         label="Model Prediction Type",
                         choices=["raw", "additive", "sigma_scaled"],
                         value=self.config.get(
-                            "flux1.model_prediction_type", "sigma_scaled"
+                            "flux1.timestep_sampling", "sigma_scaled"
                         ),
                         interactive=True,
                     )
@@ -247,7 +247,7 @@ class flux1Training:
                     )
                     self.flux_fused_backward_pass = gr.Checkbox(
                         label="Fused Backward Pass",
-                        value=self.config.get("flux1.flux_fused_backward_pass", False),
+                        value=self.config.get("flux1.fused_backward_pass", False),
                         info="Enables the fusing of the optimizer step into the backward pass for each parameter. Only Adafactor optimizer is supported.",
                         interactive=True,
                     )
@@ -274,7 +274,7 @@ class flux1Training:
 
         with gr.Accordion(
             "Rank for layers",
-            open=self.config.get("settings.expand_all_accordions", False),
+            open=False,
             visible=False if finetuning else True,
             elem_classes=["flux1_rank_layers_background"],
         ):
@@ -26,13 +26,13 @@ class Folders:
 
         # Set default directories if not provided
         self.current_output_dir = self.config.get(
-            "folders.output_dir", os.path.join(scriptdir, "outputs")
+            "output_dir", os.path.join(scriptdir, "outputs")
         )
         self.current_logging_dir = self.config.get(
-            "folders.logging_dir", os.path.join(scriptdir, "logs")
+            "logging_dir", os.path.join(scriptdir, "logs")
         )
         self.current_reg_data_dir = self.config.get(
-            "folders.reg_data_dir", os.path.join(scriptdir, "reg")
+            "reg_data_dir", os.path.join(scriptdir, "reg")
         )
 
         # Create directories if they don't exist
@@ -16,6 +16,7 @@ class KohyaSSGUIConfig:
         Initialize the KohyaSSGUIConfig class.
         """
         self.config = self.load_config(config_file_path=config_file_path)
+        self.allowed_paths = self.get_allowed_paths()
 
     def load_config(self, config_file_path: str = "./config.toml") -> dict:
         """
@@ -81,6 +82,12 @@ class KohyaSSGUIConfig:
         log.debug(f"Returned {data}")
         return data
 
+    def get_allowed_paths(self) -> list:
+        """
+        Retrieves the list of allowed paths from the [server] section of the config file.
+        """
+        return self.get("server.allowed_paths", [])
+
     def is_config_loaded(self) -> bool:
         """
         Checks if the configuration was loaded from a file.
@@ -15,32 +15,32 @@ class MetaData:
             label="Metadata title",
             placeholder="(optional) title for model metadata (default is output_name)",
             interactive=True,
-            value=self.config.get("metadata.metadata_title", ""),
+            value=self.config.get("metadata.title", ""),
         )
         self.metadata_author = gr.Textbox(
             label="Metadata author",
             placeholder="(optional) author name for model metadata",
             interactive=True,
-            value=self.config.get("metadata.metadata_author", ""),
+            value=self.config.get("metadata.author", ""),
         )
         self.metadata_description = gr.Textbox(
             label="Metadata description",
             placeholder="(optional) description for model metadata",
             interactive=True,
-            value=self.config.get("metadata.metadata_description", ""),
+            value=self.config.get("metadata.description", ""),
         )
         with gr.Row():
             self.metadata_license = gr.Textbox(
                 label="Metadata license",
                 placeholder="(optional) license for model metadata",
                 interactive=True,
-                value=self.config.get("metadata.metadata_license", ""),
+                value=self.config.get("metadata.license", ""),
             )
             self.metadata_tags = gr.Textbox(
                 label="Metadata tags",
                 placeholder="(optional) tags for model metadata, separated by comma",
                 interactive=True,
-                value=self.config.get("metadata.metadata_tags", ""),
+                value=self.config.get("metadata.tags", ""),
             )
 
 def run_cmd(run_cmd: list, **kwargs):
@@ -66,7 +66,7 @@ class sd3Training:
             return (gr.Group(visible=False), gr.Group(visible=True))
 
         with gr.Accordion(
-            "SD3", open=self.config.get("settings.expand_all_accordions", False), elem_id="sd3_tab", visible=False
+            "SD3", open=False, elem_id="sd3_tab", visible=False
         ) as sd3_accordion:
             with gr.Group():
                 gr.Markdown("### SD3 Specific Parameters")
@@ -111,7 +111,7 @@ class SourceModel:
             self.pretrained_model_name_or_path = gr.Dropdown(
                 label="Pretrained model name or path",
                 choices=default_models + model_checkpoints,
-                value=self.config.get("model.pretrained_model_name_or_path", "runwayml/stable-diffusion-v1-5"),
+                value=self.config.get("model.models_dir", "runwayml/stable-diffusion-v1-5"),
                 allow_custom_value=True,
                 visible=True,
                 min_width=100,
@@ -244,33 +244,33 @@ class SourceModel:
             with gr.Column():
                 with gr.Row():
                     self.v2 = gr.Checkbox(
-                        label="v2", value=self.config.get("model.v2", False), visible=False, min_width=60,
+                        label="v2", value=False, visible=False, min_width=60,
                         interactive=True,
                     )
                     self.v_parameterization = gr.Checkbox(
                         label="v_param",
-                        value=self.config.get("model.v_parameterization", False),
+                        value=False,
                         visible=False,
                         min_width=130,
                         interactive=True,
                     )
                     self.sdxl_checkbox = gr.Checkbox(
                         label="SDXL",
-                        value=self.config.get("model.sdxl", False),
+                        value=False,
                         visible=False,
                         min_width=60,
                         interactive=True,
                     )
                     self.sd3_checkbox = gr.Checkbox(
                         label="SD3",
-                        value=self.config.get("model.sd3", False),
+                        value=False,
                         visible=False,
                         min_width=60,
                         interactive=True,
                     )
                     self.flux1_checkbox = gr.Checkbox(
                         label="Flux.1",
-                        value=self.config.get("model.flux1", False),
+                        value=False,
                         visible=False,
                         min_width=60,
                         interactive=True,
@@ -12,6 +12,7 @@ try:
 except ImportError:
     visibility = False
 
+from easygui import msgbox
 from threading import Thread, Event
 from .custom_logging import setup_logging
 from .common_gui import setup_environment
@@ -59,7 +60,7 @@ class TensorboardManager:
             self.log.error(
                 "Error: logging folder does not exist or does not contain logs."
             )
-            gr.Warning("Error: logging folder does not exist or does not contain logs.")
+            msgbox(msg="Error: logging folder does not exist or does not contain logs.")
             return self.get_button_states(started=False)
 
         run_cmd = [
@@ -2,6 +2,7 @@ try:
     from tkinter import filedialog, Tk
 except ImportError:
     pass
+from easygui import msgbox, ynbox
 from typing import Optional
 from .custom_logging import setup_logging
 from .sd_modeltype import SDModelType
@@ -122,32 +123,24 @@ def check_if_model_exist(
     """
     if headless:
         log.info(
-            "Headless mode: If model already exists, it will be overwritten by default."
+            "Headless mode, skipping verification if model already exist... if model already exist it will be overwritten..."
         )
-        # No change needed for headless, it already implies proceeding.
-
-    model_path_to_check = ""
-    model_type_for_message = ""
+        return False
 
     if save_model_as in ["diffusers", "diffusers_safetendors"]:
         ckpt_folder = os.path.join(output_dir, output_name)
         if os.path.isdir(ckpt_folder):
-            model_path_to_check = ckpt_folder
-            model_type_for_message = "diffuser model"
+            msg = f"A diffuser model with the same name {ckpt_folder} already exists. Do you want to overwrite it?"
+            if not ynbox(msg, "Overwrite Existing Model?"):
+                log.info("Aborting training due to existing model with same name...")
+                return True
     elif save_model_as in ["ckpt", "safetensors"]:
         ckpt_file = os.path.join(output_dir, output_name + "." + save_model_as)
         if os.path.isfile(ckpt_file):
-            model_path_to_check = ckpt_file
-            model_type_for_message = "model file"
-
-    if model_path_to_check:
-        message = f"Existing {model_type_for_message} found: {model_path_to_check}. It will be overwritten."
-        log.warning(message)
-        if not headless:
-            gr.Warning(message)
-        # Returning False means "don't abort", so the overwrite will proceed.
-        return False
+            msg = f"A model with the same file name {ckpt_file} already exists. Do you want to overwrite it?"
+            if not ynbox(msg, "Overwrite Existing Model?"):
+                log.info("Aborting training due to existing model with same name...")
+                return True
     else:
         log.info(
             'Can\'t verify if existing model exist when save model is set as "same as source model", continuing to train model...'
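Both sides of the hunk above rest on the same existence test: a directory check for diffusers-style output, a file check for the single-file formats. That decision can be isolated into a small helper; the following is a sketch with a hypothetical name (the `ynbox`/`gr.Warning` prompting is omitted, and the `"diffusers_safetendors"` spelling is kept as it appears in the source):

```python
import os


def model_exists(output_dir: str, output_name: str, save_model_as: str) -> bool:
    """Return True if a previous run already produced this output on disk."""
    if save_model_as in ("diffusers", "diffusers_safetendors"):
        # Diffusers output is a folder named after the model.
        return os.path.isdir(os.path.join(output_dir, output_name))
    if save_model_as in ("ckpt", "safetensors"):
        # Single-file formats append the save format as the file extension.
        return os.path.isfile(os.path.join(output_dir, output_name + "." + save_model_as))
    # "same as source model" and similar formats cannot be verified here.
    return False
```

The caller then decides whether an existing model means "prompt the user" (interactive) or "overwrite silently" (headless).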
@@ -172,7 +165,7 @@ def output_message(msg: str = "", title: str = "", headless: bool = False) -> No
     if headless:
         log.info(msg)
     else:
-        gr.Info(msg)
+        msgbox(msg=msg, title=title)
 
 
 def create_refresh_button(refresh_component, refresh_method, refreshed_args, elem_id):
@ -874,7 +867,7 @@ def find_replace(
|
|||
# Validate the presence of caption files and the search text
|
||||
if not search_text or not has_ext_files(folder_path, caption_file_ext):
|
||||
# Display a message box indicating no files were found
|
||||
gr.Info(
|
||||
msgbox(
|
||||
f"No files with extension {caption_file_ext} were found in {folder_path}..."
|
||||
)
|
||||
log.warning(
|
||||
|
|
@ -938,7 +931,7 @@ def color_aug_changed(color_aug):
|
|||
"""
|
||||
# If color augmentation is enabled, disable cache latent and return a new checkbox
|
||||
if color_aug:
|
||||
gr.Info(
|
||||
msgbox(
|
||||
'Disabling "Cache latent" because "Color augmentation" has been selected...'
|
||||
)
|
||||
return gr.Checkbox(value=False, interactive=False)
|
||||
|
|
|
|||
|
|
@@ -1,6 +1,7 @@
import os
import re
import gradio as gr
from easygui import msgbox, boolbox
from .common_gui import get_folder_path, scriptdir, list_dirs, create_refresh_button

from .custom_logging import setup_logging

@@ -12,20 +13,20 @@ log = setup_logging()
import os
import re
import logging as log
# from easygui import msgbox # easygui msgbox will be handled in step 2
from easygui import msgbox

def dataset_balancing(concept_repeats, folder, insecure):

if not concept_repeats > 0:
# Display an error message if the total number of repeats is not a valid integer
gr.Error("Please enter a valid integer for the total number of repeats.")
msgbox("Please enter a valid integer for the total number of repeats.")
return

concept_repeats = int(concept_repeats)

# Check if folder exist
if folder == "" or not os.path.isdir(folder):
gr.Error("Please enter a valid folder for balancing.")
msgbox("Please enter a valid folder for balancing.")
return

pattern = re.compile(r"^\d+_.+$")

@@ -92,23 +93,19 @@ def dataset_balancing(concept_repeats, folder, insecure):
f"Skipping folder {subdir} because it does not match kohya_ss expected syntax..."
)

gr.Info("Dataset balancing completed...")
msgbox("Dataset balancing completed...")



def warning(insecure_checked, headless=False):  # Renamed insecure to insecure_checked for clarity
if insecure_checked:
message = "DANGER!!! Insecure folder renaming is active. Folders not matching the standard '<number>_<text>' pattern may be renamed."
# Log the warning regardless of headless state, as it's a significant user choice
log.warning(message)
if not headless:
gr.Warning(message)
# Return the state of the checkbox. If it was checked, it remains checked.
# The calling UI's .change(outputs=checkbox) will ensure this.
return insecure_checked
# If the checkbox was unchecked, or if it was checked and then logic above ran,
# this ensures the checkbox state is correctly returned to Gradio.
return insecure_checked
def warning(insecure):
if insecure:
if boolbox(
f"WARNING!!! You have asked to rename non kohya_ss <num>_<text> folders...\n\nAre you sure you want to do that?",
choices=("Yes, I like danger", "No, get me out of here"),
):
return True
else:
return False


def gradio_dataset_balancing_tab(headless=False):

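The rewritten `warning()` in the hunk above replaces easygui's blocking `boolbox` prompt with a pattern that always logs and only raises a Gradio toast when a browser session is attached. A minimal, testable sketch of that pattern (names and the `notify` stand-in for `gr.Warning` are hypothetical, not the repo's actual helper):

```python
import logging

log = logging.getLogger(__name__)

calls = []

def notify(message):
    # Stand-in for gr.Warning so the branch is observable in a test.
    calls.append(message)

def warn_if_insecure(insecure_checked, headless=False):
    """Always log the danger; only surface a UI warning when not headless;
    return the checkbox state so .change(outputs=checkbox) round-trips it."""
    if insecure_checked:
        message = "DANGER!!! Insecure folder renaming is active."
        log.warning(message)
        if not headless:
            notify(message)
    return insecure_checked
```

Returning the input unchanged matters: Gradio feeds the return value back into the checkbox, so anything other than a pass-through would silently flip the user's choice.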
|
@@ -171,7 +168,7 @@ def gradio_dataset_balancing_tab(headless=False):
value=False,
label="DANGER!!! -- Insecure folder renaming -- DANGER!!!",
)
insecure.change(lambda val: warning(val, headless=headless), inputs=insecure, outputs=insecure)
insecure.change(warning, inputs=insecure, outputs=insecure)
balance_button = gr.Button("Balance dataset")
balance_button.click(
dataset_balancing,

@@ -250,9 +250,9 @@ def gradio_dreambooth_folder_creation_tab(
util_training_dir_output = gr.Dropdown(
label="Destination training directory (where formatted training and regularisation folders will be placed)",
interactive=True,
choices=[config.get(key="model.train_data_dir", default="")]
choices=[config.get(key="train_data_dir", default="")]
+ list_train_output_dirs(current_train_output_dir),
value=config.get(key="model.train_data_dir", default=""),
value=config.get(key="train_data_dir", default=""),
allow_custom_value=True,
)
create_refresh_button(

@@ -272,7 +272,7 @@ def gradio_dreambooth_folder_creation_tab(
)
util_training_dir_output.change(
fn=lambda path: gr.Dropdown(
choices=[config.get(key="model.train_data_dir", default="")] + list_train_output_dirs(path)
choices=[config.get(key="train_data_dir", default="")] + list_train_output_dirs(path)
),
inputs=util_training_dir_output,
outputs=util_training_dir_output,

@@ -1175,22 +1175,22 @@ def dreambooth_tab(
gr.Markdown("Train a custom model using kohya dreambooth python code...")

# Setup Configuration Files Gradio
with gr.Accordion("Configuration", open=config.get("settings.expand_all_accordions", False)):
with gr.Accordion("Configuration", open=False):
configuration = ConfigurationFile(headless=headless, config=config)

with gr.Accordion("Accelerate launch", open=config.get("settings.expand_all_accordions", False)), gr.Column():
with gr.Accordion("Accelerate launch", open=False), gr.Column():
accelerate_launch = AccelerateLaunch(config=config)

with gr.Column():
source_model = SourceModel(headless=headless, config=config)

with gr.Accordion("Folders", open=config.get("settings.expand_all_accordions", False)), gr.Group():
with gr.Accordion("Folders", open=False), gr.Group():
folders = Folders(headless=headless, config=config)

with gr.Accordion("Metadata", open=config.get("settings.expand_all_accordions", False)), gr.Group():
with gr.Accordion("Metadata", open=False), gr.Group():
metadata = MetaData(config=config)

with gr.Accordion("Dataset Preparation", open=config.get("settings.expand_all_accordions", False)):
with gr.Accordion("Dataset Preparation", open=False):
gr.Markdown(
"This section provide Dreambooth tools to help setup your dataset..."
)

@@ -1205,8 +1205,8 @@ def dreambooth_tab(

gradio_dataset_balancing_tab(headless=headless)

with gr.Accordion("Parameters", open=config.get("settings.expand_all_accordions", False)), gr.Column():
with gr.Accordion("Basic", open=config.get("dreambooth.expand_basic_accordion", True)):
with gr.Accordion("Parameters", open=False), gr.Column():
with gr.Accordion("Basic", open="True"):
with gr.Group(elem_id="basic_tab"):
basic_training = BasicTraining(
learning_rate_value=1e-5,

@@ -1237,7 +1237,7 @@ def dreambooth_tab(
headless=headless, config=config, sd3_checkbox=source_model.sd3_checkbox
)

with gr.Accordion("Advanced", open=config.get("settings.expand_all_accordions", False), elem_id="advanced_tab"):
with gr.Accordion("Advanced", open=False, elem_id="advanced_tab"):
advanced_training = AdvancedTraining(headless=headless, config=config)
advanced_training.color_aug.change(
color_aug_changed,

@@ -1245,11 +1245,11 @@ def dreambooth_tab(
outputs=[basic_training.cache_latents],
)

with gr.Accordion("Samples", open=config.get("settings.expand_all_accordions", False), elem_id="samples_tab"):
with gr.Accordion("Samples", open=False, elem_id="samples_tab"):
sample = SampleImages(config=config)

global huggingface
with gr.Accordion("HuggingFace", open=config.get("settings.expand_all_accordions", False)):
with gr.Accordion("HuggingFace", open=False):
huggingface = HuggingFace(config=config)

global executor


@@ -1214,10 +1214,10 @@ def finetune_tab(
gr.Markdown("Train a custom model using kohya finetune python code...")

# Setup Configuration Files Gradio
with gr.Accordion("Configuration", open=config.get("settings.expand_all_accordions", False)):
with gr.Accordion("Configuration", open=False):
configuration = ConfigurationFile(headless=headless, config=config)

with gr.Accordion("Accelerate launch", open=config.get("settings.expand_all_accordions", False)), gr.Column():
with gr.Accordion("Accelerate launch", open=False), gr.Column():
accelerate_launch = AccelerateLaunch(config=config)

with gr.Column():

@@ -1227,31 +1227,31 @@ def finetune_tab(
image_folder = source_model.train_data_dir
output_name = source_model.output_name

with gr.Accordion("Folders", open=config.get("settings.expand_all_accordions", False)), gr.Group():
with gr.Accordion("Folders", open=False), gr.Group():
folders = Folders(headless=headless, finetune=True, config=config)
output_dir = folders.output_dir
logging_dir = folders.logging_dir
train_dir = folders.reg_data_dir

with gr.Accordion("Metadata", open=config.get("settings.expand_all_accordions", False)), gr.Group():
with gr.Accordion("Metadata", open=False), gr.Group():
metadata = MetaData(config=config)

with gr.Accordion("Dataset Preparation", open=config.get("settings.expand_all_accordions", False)):
with gr.Accordion("Dataset Preparation", open=False):
with gr.Row():
max_resolution = gr.Textbox(
label="Resolution (width,height)", value=config.get("finetune.dataset_preparation.max_resolution", "512,512")
label="Resolution (width,height)", value="512,512"
)
min_bucket_reso = gr.Textbox(label="Min bucket resolution", value=config.get("finetune.dataset_preparation.min_bucket_reso", "256"))
min_bucket_reso = gr.Textbox(label="Min bucket resolution", value="256")
max_bucket_reso = gr.Textbox(
label="Max bucket resolution", value=config.get("finetune.dataset_preparation.max_bucket_reso", "1024")
label="Max bucket resolution", value="1024"
)
batch_size = gr.Textbox(label="Batch size", value=config.get("finetune.dataset_preparation.batch_size", "1"))
batch_size = gr.Textbox(label="Batch size", value="1")
with gr.Row():
create_caption = gr.Checkbox(
label="Generate caption metadata", value=config.get("finetune.dataset_preparation.create_caption", True)
label="Generate caption metadata", value=True
)
create_buckets = gr.Checkbox(
label="Generate image buckets metadata", value=config.get("finetune.dataset_preparation.create_buckets", True)
label="Generate image buckets metadata", value=True
)
use_latent_files = gr.Dropdown(
label="Use latent files",

@@ -1259,24 +1259,24 @@ def finetune_tab(
"No",
"Yes",
],
value=config.get("finetune.dataset_preparation.use_latent_files", "Yes"),
value="Yes",
)
with gr.Accordion("Advanced parameters", open=config.get("settings.expand_all_accordions", False)):
with gr.Accordion("Advanced parameters", open=False):
with gr.Row():
caption_metadata_filename = gr.Textbox(
label="Caption metadata filename",
value=config.get("finetune.dataset_preparation.caption_metadata_filename", "meta_cap.json"),
value="meta_cap.json",
)
latent_metadata_filename = gr.Textbox(
label="Latent metadata filename", value=config.get("finetune.dataset_preparation.latent_metadata_filename", "meta_lat.json")
label="Latent metadata filename", value="meta_lat.json"
)
with gr.Row():
full_path = gr.Checkbox(label="Use full path", value=config.get("finetune.dataset_preparation.full_path", True))
full_path = gr.Checkbox(label="Use full path", value=True)
weighted_captions = gr.Checkbox(
label="Weighted captions", value=config.get("finetune.dataset_preparation.weighted_captions", False)
label="Weighted captions", value=False
)

with gr.Accordion("Parameters", open=config.get("settings.expand_all_accordions", False)), gr.Column():
with gr.Accordion("Parameters", open=False), gr.Column():

def list_presets(path):
json_files = []

@@ -1301,7 +1301,7 @@ def finetune_tab(
value="none",
)

with gr.Accordion("Basic", open=config.get("finetune.expand_basic_accordion", True)):
with gr.Accordion("Basic", open="True"):
with gr.Group(elem_id="basic_tab"):
basic_training = BasicTraining(
learning_rate_value=1e-5,

@@ -1318,9 +1318,9 @@ def finetune_tab(
)

with gr.Row():
dataset_repeats = gr.Textbox(label="Dataset repeats", value=config.get("finetune.basic.dataset_repeats", "40"))
dataset_repeats = gr.Textbox(label="Dataset repeats", value=40)
train_text_encoder = gr.Checkbox(
label="Train text encoder", value=config.get("finetune.basic.train_text_encoder", True)
label="Train text encoder", value=True
)

# Add FLUX1 Parameters

@@ -1336,7 +1336,7 @@ def finetune_tab(
headless=headless, config=config, sd3_checkbox=source_model.sd3_checkbox
)

with gr.Accordion("Advanced", open=config.get("settings.expand_all_accordions", False), elem_id="advanced_tab"):
with gr.Accordion("Advanced", open=False, elem_id="advanced_tab"):
with gr.Row():
gradient_accumulation_steps = gr.Slider(
label="Gradient accumulate steps",

@@ -1350,7 +1350,6 @@ def finetune_tab(
label="Block LR (SDXL)",
placeholder="(Optional)",
info="Specify the different learning rates for each U-Net block. Specify 23 values separated by commas like 1e-3,1e-3 ... 1e-3",
value=config.get("finetune.advanced.block_lr", ""),
)
advanced_training = AdvancedTraining(
headless=headless, finetuning=True, config=config

@@ -1363,11 +1362,11 @@ def finetune_tab(
], # Not applicable to fine_tune.py
)

with gr.Accordion("Samples", open=config.get("settings.expand_all_accordions", False), elem_id="samples_tab"):
with gr.Accordion("Samples", open=False, elem_id="samples_tab"):
sample = SampleImages(config=config)

global huggingface
with gr.Accordion("HuggingFace", open=config.get("settings.expand_all_accordions", False)):
with gr.Accordion("HuggingFace", open=False):
huggingface = HuggingFace(config=config)

global executor

@@ -1853,10 +1853,10 @@ def lora_tab(
)

# Setup Configuration Files Gradio
with gr.Accordion("Configuration", open=config.get("settings.expand_all_accordions", False)):
with gr.Accordion("Configuration", open=False):
configuration = ConfigurationFile(headless=headless, config=config)

with gr.Accordion("Accelerate launch", open=config.get("settings.expand_all_accordions", False)), gr.Column():
with gr.Accordion("Accelerate launch", open=False), gr.Column():
accelerate_launch = AccelerateLaunch(config=config)

with gr.Column():

@@ -1869,13 +1869,13 @@ def lora_tab(
config=config,
)

with gr.Accordion("Folders", open=config.get("settings.expand_all_accordions", True)), gr.Group():
with gr.Accordion("Folders", open=True), gr.Group():
folders = Folders(headless=headless, config=config)

with gr.Accordion("Metadata", open=config.get("settings.expand_all_accordions", False)), gr.Group():
with gr.Accordion("Metadata", open=False), gr.Group():
metadata = MetaData(config=config)

with gr.Accordion("Dataset Preparation", open=config.get("settings.expand_all_accordions", False)):
with gr.Accordion("Dataset Preparation", open=False):
gr.Markdown(
"This section provide Dreambooth tools to help setup your dataset..."
)

@@ -1890,7 +1890,7 @@ def lora_tab(

gradio_dataset_balancing_tab(headless=headless)

with gr.Accordion("Parameters", open=config.get("settings.expand_all_accordions", False)), gr.Column():
with gr.Accordion("Parameters", open=False), gr.Column():

def list_presets(path):
json_files = []

@@ -1918,7 +1918,7 @@ def lora_tab(
elem_classes=["preset_background"],
)

with gr.Accordion("Basic", open=config.get("lora.expand_basic_accordion", True), elem_classes=["basic_background"]):
with gr.Accordion("Basic", open="True", elem_classes=["basic_background"]):
with gr.Row():
LoRA_type = gr.Dropdown(
label="LoRA type",

@@ -1939,12 +1939,12 @@ def lora_tab(
"LyCORIS/Native Fine-Tuning",
"Standard",
],
value=config.get("lora.basic.lora_type", "Standard"),
value="Standard",
)
LyCORIS_preset = gr.Dropdown(
label="LyCORIS Preset",
choices=LYCORIS_PRESETS_CHOICES,
value=config.get("lora.basic.lycoris_preset", "full"),
value="full",
visible=False,
interactive=True,
allow_custom_value=True,

@@ -1956,7 +1956,6 @@ def lora_tab(
label="Network weights",
placeholder="(Optional)",
info="Path to an existing LoRA network weights to resume training from",
value=config.get("lora.basic.network_weights", ""),
)
network_weights_file = gr.Button(
document_symbol,

@@ -1972,11 +1971,11 @@ def lora_tab(
)
dim_from_weights = gr.Checkbox(
label="DIM from weights",
value=config.get("lora.basic.dim_from_weights", False),
value=False,
info="Automatically determine the dim(rank) from the weight file.",
)
basic_training = BasicTraining(
learning_rate_value=0.0001, # This is a default for LoRA, config will override if basic.learning_rate is set
learning_rate_value=0.0001,
lr_scheduler_value="cosine",
lr_warmup_value=10,
sdxl_checkbox=source_model.sdxl_checkbox,

@@ -1986,7 +1985,7 @@ def lora_tab(
with gr.Row():
text_encoder_lr = gr.Number(
label="Text Encoder learning rate",
value=config.get("lora.basic.text_encoder_lr", 0),
value=0,
info="(Optional) Set CLIP-L and T5XXL learning rates.",
minimum=0,
maximum=1,

@@ -1994,7 +1993,7 @@ def lora_tab(

t5xxl_lr = gr.Number(
label="T5XXL learning rate",
value=config.get("lora.basic.t5xxl_lr", 0),
value=0,
info="(Optional) Override the T5XXL learning rate set by the Text Encoder learning rate if you desire a different one.",
minimum=0,
maximum=1,

@@ -2002,7 +2001,7 @@ def lora_tab(

unet_lr = gr.Number(
label="Unet learning rate",
value=config.get("lora.basic.unet_lr", 0.0001),
value=0.0001,
info="(Optional)",
minimum=0,
maximum=1,

@@ -2011,7 +2010,7 @@ def lora_tab(
with gr.Row() as loraplus:
loraplus_lr_ratio = gr.Number(
label="LoRA+ learning rate ratio",
value=config.get("lora.basic.loraplus_lr_ratio", 0),
value=0,
info="(Optional) starting with 16 is suggested",
minimum=0,
maximum=128,

@@ -2019,7 +2018,7 @@ def lora_tab(

loraplus_unet_lr_ratio = gr.Number(
label="LoRA+ Unet learning rate ratio",
value=config.get("lora.basic.loraplus_unet_lr_ratio", 0),
value=0,
info="(Optional) starting with 16 is suggested",
minimum=0,
maximum=128,

@@ -2027,7 +2026,7 @@ def lora_tab(

loraplus_text_encoder_lr_ratio = gr.Number(
label="LoRA+ Text Encoder learning rate ratio",
value=config.get("lora.basic.loraplus_text_encoder_lr_ratio", 0),
value=0,
info="(Optional) starting with 16 is suggested",
minimum=0,
maximum=128,

@@ -2036,79 +2035,79 @@ def lora_tab(
sdxl_params = SDXLParameters(source_model.sdxl_checkbox, config=config)

# LyCORIS Specific parameters
with gr.Accordion("LyCORIS", open=config.get("lora.expand_lycoris_accordion", False), visible=False) as lycoris_accordion: # Added config get for open
with gr.Accordion("LyCORIS", visible=False) as lycoris_accordion:
with gr.Row():
factor = gr.Slider(
label="LoKr factor",
value=config.get("lora.lycoris.factor", -1),
value=-1,
minimum=-1,
maximum=64,
step=1,
visible=False,
)
bypass_mode = gr.Checkbox(
value=config.get("lora.lycoris.bypass_mode", False),
value=False,
label="Bypass mode",
info="Designed for bnb 8bit/4bit linear layer. (QLyCORIS)",
visible=False,
)
dora_wd = gr.Checkbox(
value=config.get("lora.lycoris.dora_wd", False),
value=False,
label="DoRA Weight Decompose",
info="Enable the DoRA method for these algorithms",
visible=False,
)
use_cp = gr.Checkbox(
value=config.get("lora.lycoris.use_cp", False),
value=False,
label="Use CP decomposition",
info="A two-step approach utilizing tensor decomposition and fine-tuning to accelerate convolution layers in large neural networks, resulting in significant CPU speedups with minor accuracy drops.",
visible=False,
)
use_tucker = gr.Checkbox(
value=config.get("lora.lycoris.use_tucker", False),
value=False,
label="Use Tucker decomposition",
info="Efficiently decompose tensor shapes, resulting in a sequence of convolution layers with varying dimensions and Hadamard product implementation through multiplication of two distinct tensors.",
visible=False,
)
use_scalar = gr.Checkbox(
value=config.get("lora.lycoris.use_scalar", False),
value=False,
label="Use Scalar",
info="Train an additional scalar in front of the weight difference, use a different weight initialization strategy.",
visible=False,
)
with gr.Row():
rank_dropout_scale = gr.Checkbox(
value=config.get("lora.lycoris.rank_dropout_scale", False),
value=False,
label="Rank Dropout Scale",
info="Adjusts the scale of the rank dropout to maintain the average dropout rate, ensuring more consistent regularization across different layers.",
visible=False,
)
constrain = gr.Number(
value=config.get("lora.lycoris.constrain", 0.0),
value=0.0,
label="Constrain OFT",
info="Limits the norm of the oft_blocks, ensuring that their magnitude does not exceed a specified threshold, thus controlling the extent of the transformation applied.",
visible=False,
)
rescaled = gr.Checkbox(
value=config.get("lora.lycoris.rescaled", False),
value=False,
label="Rescaled OFT",
info="applies an additional scaling factor to the oft_blocks, allowing for further adjustment of their impact on the model's transformations.",
visible=False,
)
train_norm = gr.Checkbox(
value=config.get("lora.lycoris.train_norm", False),
value=False,
label="Train Norm",
info="Selects trainable layers in a network, but trains normalization layers identically across methods as they lack matrix decomposition.",
visible=False,
)
decompose_both = gr.Checkbox(
value=config.get("lora.lycoris.decompose_both", False),
value=False,
label="LoKr decompose both",
info="Controls whether both input and output dimensions of the layer's weights are decomposed into smaller matrices for reparameterization.",
visible=False,
)
train_on_input = gr.Checkbox(
value=config.get("lora.lycoris.train_on_input", True),
value=True,
label="iA3 train on input",
info="Set if we change the information going into the system (True) or the information coming out of it (False).",
visible=False,

@@ -2118,7 +2117,7 @@ def lora_tab(
minimum=1,
maximum=512,
label="Network Rank (Dimension)",
value=config.get("lora.basic.network_dim", 8),
value=8,
step=1,
interactive=True,
)

@@ -2126,7 +2125,7 @@ def lora_tab(
minimum=0.00001,
maximum=1024,
label="Network Alpha",
value=config.get("lora.basic.network_alpha", 1),
value=1,
step=0.00001,
interactive=True,
info="alpha for LoRA weight scaling",

@@ -2136,21 +2135,21 @@ def lora_tab(
conv_dim = gr.Slider(
minimum=0,
maximum=512,
value=config.get("lora.basic.conv_dim", 1),
value=1,
step=1,
label="Convolution Rank (Dimension)",
)
conv_alpha = gr.Slider(
minimum=0,
maximum=512,
value=config.get("lora.basic.conv_alpha", 1),
value=1,
step=1,
label="Convolution Alpha",
)
with gr.Row():
scale_weight_norms = gr.Slider(
label="Scale weight norms",
value=config.get("lora.basic.scale_weight_norms", 0),
value=0,
minimum=0,
maximum=10,
step=0.01,

@@ -2159,7 +2158,7 @@ def lora_tab(
)
network_dropout = gr.Slider(
label="Network dropout",
value=config.get("lora.basic.network_dropout", 0),
value=0,
minimum=0,
maximum=1,
step=0.01,

@@ -2167,7 +2166,7 @@ def lora_tab(
)
rank_dropout = gr.Slider(
label="Rank dropout",
value=config.get("lora.basic.rank_dropout", 0),
value=0,
minimum=0,
maximum=1,
step=0.01,

@@ -2175,7 +2174,7 @@ def lora_tab(
)
module_dropout = gr.Slider(
label="Module dropout",
value=config.get("lora.basic.module_dropout", 0.0),
value=0.0,
minimum=0.0,
maximum=1.0,
step=0.01,

@@ -2186,7 +2185,7 @@ def lora_tab(
minimum=1,
maximum=64,
label="DyLoRA Unit / Block size",
value=config.get("lora.basic.unit", 1),
value=1,
step=1,
interactive=True,
)

@@ -2194,20 +2193,20 @@ def lora_tab(
with gr.Row(visible=False) as train_lora_ggpo_row:
train_lora_ggpo = gr.Checkbox(
label="Train LoRA GGPO",
value=config.get("lora.basic.train_lora_ggpo", False),
value=False,
info="Train LoRA GGPO",
interactive=True,
)
with gr.Row(visible=False) as ggpo_row:
ggpo_sigma = gr.Number(
label="GGPO sigma",
value=config.get("lora.basic.ggpo_sigma", 0.03),
value=0.03,
info="Specify the sigma of GGPO.",
interactive=True,
)
ggpo_beta = gr.Number(
label="GGPO beta",
value=config.get("lora.basic.ggpo_beta", 0.01),
value=0.01,
info="Specify the beta of GGPO.",
interactive=True,
)

@@ -2681,9 +2680,9 @@ def lora_tab(
)

with gr.Accordion(
"Advanced", open=config.get("settings.expand_all_accordions", False), elem_classes="advanced_background"
"Advanced", open=False, elem_classes="advanced_background"
):
# with gr.Accordion('Advanced Configuration', open=config.get("settings.expand_all_accordions", False)):
# with gr.Accordion('Advanced Configuration', open=False):
with gr.Row(visible=True) as kohya_advanced_lora:
with gr.Tab(label="Weights"):
with gr.Row(visible=True):

@@ -2691,25 +2690,21 @@ def lora_tab(
label="Down LR weights",
placeholder="(Optional) eg: 0,0,0,0,0,0,1,1,1,1,1,1",
info="Specify the learning rate weight of the down blocks of U-Net.",
value=config.get("lora.advanced.down_lr_weight", ""),
)
mid_lr_weight = gr.Textbox(
label="Mid LR weights",
placeholder="(Optional) eg: 0.5",
info="Specify the learning rate weight of the mid block of U-Net.",
value=config.get("lora.advanced.mid_lr_weight", ""),
)
up_lr_weight = gr.Textbox(
label="Up LR weights",
placeholder="(Optional) eg: 0,0,0,0,0,0,1,1,1,1,1,1",
info="Specify the learning rate weight of the up blocks of U-Net. The same as down_lr_weight.",
value=config.get("lora.advanced.up_lr_weight", ""),
)
block_lr_zero_threshold = gr.Textbox(
label="Blocks LR zero threshold",
placeholder="(Optional) eg: 0.1",
info="If the weight is not more than this value, the LoRA module is not created. The default is 0.",
value=config.get("lora.advanced.block_lr_zero_threshold", ""),
)
with gr.Tab(label="Blocks"):
with gr.Row(visible=True):

@@ -2717,13 +2712,11 @@ def lora_tab(
label="Block dims",
placeholder="(Optional) eg: 2,2,2,2,4,4,4,4,6,6,6,6,8,6,6,6,6,4,4,4,4,2,2,2,2",
info="Specify the dim (rank) of each block. Specify 25 numbers.",
value=config.get("lora.advanced.block_dims", ""),
)
block_alphas = gr.Textbox(
label="Block alphas",
placeholder="(Optional) eg: 2,2,2,2,4,4,4,4,6,6,6,6,8,6,6,6,6,4,4,4,4,2,2,2,2",
info="Specify the alpha of each block. Specify 25 numbers as with block_dims. If omitted, the value of network_alpha is used.",
value=config.get("lora.advanced.block_alphas", ""),
)
with gr.Tab(label="Conv"):
with gr.Row(visible=True):

@@ -2731,13 +2724,11 @@ def lora_tab(
label="Conv dims",
placeholder="(Optional) eg: 2,2,2,2,4,4,4,4,6,6,6,6,8,6,6,6,6,4,4,4,4,2,2,2,2",
info="Extend LoRA to Conv2d 3x3 and specify the dim (rank) of each block. Specify 25 numbers.",
value=config.get("lora.advanced.conv_block_dims", ""),
)
conv_block_alphas = gr.Textbox(
label="Conv alphas",
placeholder="(Optional) eg: 2,2,2,2,4,4,4,4,6,6,6,6,8,6,6,6,6,4,4,4,4,2,2,2,2",
info="Specify the alpha of each block when expanding LoRA to Conv2d 3x3. Specify 25 numbers. If omitted, the value of conv_alpha is used.",
value=config.get("lora.advanced.conv_block_alphas", ""),
)
advanced_training = AdvancedTraining(
headless=headless, training_type="lora", config=config

@@ -2748,12 +2739,12 @@ def lora_tab(
outputs=[basic_training.cache_latents],
)

with gr.Accordion("Samples", open=config.get("settings.expand_all_accordions", False), elem_classes="samples_background"):
with gr.Accordion("Samples", open=False, elem_classes="samples_background"):
sample = SampleImages(config=config)

global huggingface
with gr.Accordion(
"HuggingFace", open=config.get("settings.expand_all_accordions", False), elem_classes="huggingface_background"
"HuggingFace", open=False, elem_classes="huggingface_background"
):
huggingface = HuggingFace(config=config)

@@ -1,4 +1,5 @@
import gradio as gr
from easygui import msgbox, boolbox
from .common_gui import get_folder_path, scriptdir, list_dirs
from math import ceil
import os
@@ -47,7 +48,7 @@ def paginate_go(page, max_page):
try:
page = float(page)
except:
gr.Error(f"Invalid page num: {page}")
msgbox(f"Invalid page num: {page}")
return
return paginate(page, max_page, 0)
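The `paginate_go` hunk above swaps easygui's modal `msgbox` for Gradio's `gr.Error` when the page number fails to parse, keeping the bare `except`. The parse-and-clamp logic can be sketched stand-alone as follows (`parse_page` is a hypothetical name for illustration; the real function delegates to `paginate`):

```python
def parse_page(page, max_page):
    """Parse a user-supplied page value and clamp it to [1, max_page].

    Returns None when the input is not numeric, mirroring the early
    return in paginate_go after the error has been reported.
    """
    try:
        page = float(page)
    except (TypeError, ValueError):  # narrower than the bare `except:` in the diff
        return None
    return int(min(max(page, 1), max_page))
```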
@@ -106,7 +107,7 @@ def update_image_tags(


def import_tags_from_captions(
images_dir, caption_ext, quick_tags_text, ignore_load_tags_word_count, headless=False
images_dir, caption_ext, quick_tags_text, ignore_load_tags_word_count
):
"""
Scans images directory for all available captions and loads all tags
@@ -118,23 +119,23 @@ def import_tags_from_captions(

# Check for images_dir
if not images_dir:
gr.Warning("Image folder is missing...")
msgbox("Image folder is missing...")
return empty_return()

if not os.path.exists(images_dir):
gr.Warning("Image folder does not exist...")
msgbox("Image folder does not exist...")
return empty_return()

if not caption_ext:
gr.Warning("Please provide an extension for the caption files.")
msgbox("Please provide an extension for the caption files.")
return empty_return()

if quick_tags_text:
overwrite_message = "Overwriting existing quick tags."
if not headless:
gr.Info(overwrite_message)
log.info(overwrite_message)
# Proceeding to overwrite by not returning early.
if not boolbox(
f"Are you sure you wish to overwrite the current quick tags?",
choices=("Yes", "No"),
):
return empty_return()

images_list = os.listdir(images_dir)
image_files = [f for f in images_list if f.lower().endswith(IMAGE_EXTENSIONS)]
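The hunk above replaces the blocking easygui `boolbox` prompt with a `gr.Info` notification plus a log line, so headless runs no longer stall on a dialog. That control flow can be sketched as follows (`confirm_overwrite` and its parameters are hypothetical illustrations, not the repo's API):

```python
def confirm_overwrite(message, headless, log=print, ask=None):
    """Decide whether an overwrite may proceed.

    Headless mode only logs and proceeds; interactive mode defers to an
    `ask` callback (e.g. a yes/no dialog) when one is provided.
    """
    if headless or ask is None:
        log(message)  # notify without blocking
        return True
    return bool(ask(message))
```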
@@ -172,15 +173,15 @@ def load_images(images_dir, caption_ext, loaded_images_dir, page, max_page):

# Check for images_dir
if not images_dir:
gr.Warning("Image folder is missing...")
msgbox("Image folder is missing...")
return empty_return()

if not os.path.exists(images_dir):
gr.Warning("Image folder does not exist...")
msgbox("Image folder does not exist...")
return empty_return()

if not caption_ext:
gr.Warning("Please provide an extension for the caption files.")
msgbox("Please provide an extension for the caption files.")
return empty_return()

# Load Images
@@ -443,7 +444,7 @@ def gradio_manual_caption_gui_tab(headless=False, default_images_dir=None):

# Import tags button
import_tags_button.click(
lambda loaded_dir, cap_ext, q_text, ignore_wc: import_tags_from_captions(loaded_dir, cap_ext, q_text, ignore_wc, headless=headless),
import_tags_from_captions,
inputs=[
loaded_images_dir,
caption_ext,
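The click handler above wraps `import_tags_from_captions` in a lambda so the extra `headless` flag can be bound while Gradio keeps supplying the same inputs list positionally. `functools.partial` achieves the same binding; the `import_tags` stub below is a hypothetical stand-in for the real function:

```python
from functools import partial

def import_tags(images_dir, caption_ext, quick_tags, ignore_wc, headless=False):
    # Stand-in for import_tags_from_captions; just echoes its arguments.
    return (images_dir, caption_ext, quick_tags, ignore_wc, headless)

# Bind the keyword up front; the event system then passes the rest positionally.
handler = partial(import_tags, headless=True)
```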
@@ -976,10 +976,10 @@ def ti_tab(
gr.Markdown("Train a TI using kohya textual inversion python code...")

# Setup Configuration Files Gradio
with gr.Accordion("Configuration", open=config.get("settings.expand_all_accordions", False)):
with gr.Accordion("Configuration", open=False):
configuration = ConfigurationFile(headless=headless, config=config)

with gr.Accordion("Accelerate launch", open=config.get("settings.expand_all_accordions", False)), gr.Column():
with gr.Accordion("Accelerate launch", open=False), gr.Column():
accelerate_launch = AccelerateLaunch(config=config)

with gr.Column():
@@ -992,13 +992,13 @@ def ti_tab(
config=config,
)

with gr.Accordion("Folders", open=config.get("settings.expand_all_accordions", False)), gr.Group():
with gr.Accordion("Folders", open=False), gr.Group():
folders = Folders(headless=headless, config=config)

with gr.Accordion("Metadata", open=config.get("settings.expand_all_accordions", False)), gr.Group():
with gr.Accordion("Metadata", open=False), gr.Group():
metadata = MetaData(config=config)

with gr.Accordion("Dataset Preparation", open=config.get("settings.expand_all_accordions", False)):
with gr.Accordion("Dataset Preparation", open=False):
gr.Markdown(
"This section provide Dreambooth tools to help setup your dataset..."
)
@@ -1013,8 +1013,8 @@ def ti_tab(

gradio_dataset_balancing_tab(headless=headless)

with gr.Accordion("Parameters", open=config.get("settings.expand_all_accordions", False)), gr.Column():
with gr.Accordion("Basic", open=config.get("textual_inversion.expand_basic_accordion", True)):
with gr.Accordion("Parameters", open=False), gr.Column():
with gr.Accordion("Basic", open="True"):
with gr.Group(elem_id="basic_tab"):
with gr.Row():
@@ -1032,7 +1032,7 @@ def ti_tab(
weights = gr.Dropdown(
label="Resume TI training (Optional. Path to existing TI embedding file to keep training)",
choices=[""] + list_embedding_files(current_embedding_dir),
value=config.get("textual_inversion.basic.weights", ""),
value="",
interactive=True,
allow_custom_value=True,
)
@@ -1068,16 +1068,15 @@ def ti_tab(
token_string = gr.Textbox(
label="Token string",
placeholder="eg: cat",
value=config.get("textual_inversion.basic.token_string", ""),
)
init_word = gr.Textbox(
label="Init word",
value=config.get("textual_inversion.basic.init_word", "*"),
value="*",
)
num_vectors_per_token = gr.Slider(
minimum=1,
maximum=75,
value=config.get("textual_inversion.basic.num_vectors_per_token", 1),
value=1,
step=1,
label="Vectors",
)
@@ -1092,7 +1091,7 @@ def ti_tab(
"object template",
"style template",
],
value=config.get("textual_inversion.basic.template", "caption"),
value="caption",
)
basic_training = BasicTraining(
learning_rate_value=1e-5,
@@ -1109,7 +1108,7 @@ def ti_tab(
config=config,
)

with gr.Accordion("Advanced", open=config.get("settings.expand_all_accordions", False), elem_id="advanced_tab"):
with gr.Accordion("Advanced", open=False, elem_id="advanced_tab"):
advanced_training = AdvancedTraining(headless=headless, config=config)
advanced_training.color_aug.change(
color_aug_changed,
@@ -1117,11 +1116,11 @@ def ti_tab(
outputs=[basic_training.cache_latents],
)

with gr.Accordion("Samples", open=config.get("settings.expand_all_accordions", False), elem_id="samples_tab"):
with gr.Accordion("Samples", open=False, elem_id="samples_tab"):
sample = SampleImages(config=config)

global huggingface
with gr.Accordion("HuggingFace", open=config.get("settings.expand_all_accordions", False)):
with gr.Accordion("HuggingFace", open=False):
huggingface = HuggingFace(config=config)

global executor
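The `ti_tab` hunks above repeatedly read widget defaults through dotted keys such as `config.get("settings.expand_all_accordions", False)`. A minimal sketch of how such a getter can walk a nested TOML-derived dict (`GuiConfig` is a hypothetical stand-in; the repo's actual config class may differ):

```python
class GuiConfig:
    """Hypothetical stand-in for a TOML-backed GUI config object."""

    def __init__(self, data):
        self.data = data

    def get(self, dotted_key, default=None):
        # Walk nested dicts one dotted segment at a time.
        node = self.data
        for part in dotted_key.split("."):
            if not isinstance(node, dict) or part not in node:
                return default
            node = node[part]
        return node

cfg = GuiConfig({"settings": {"expand_all_accordions": True}})
```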
@@ -206,7 +206,6 @@ def gradio_wd14_caption_gui_tab(
"SmilingWolf/wd-swinv2-tagger-v3",
"SmilingWolf/wd-vit-tagger-v3",
"SmilingWolf/wd-convnext-tagger-v3",
"SmilingWolf/wd-eva02-large-tagger-v3",
],
value=config.get(
"wd14_caption.repo_id", "SmilingWolf/wd-v1-4-convnextv2-tagger-v2"
@@ -11,6 +11,7 @@ dependencies = [
"bitsandbytes>=0.45.0",
"dadaptation==3.2",
"diffusers[torch]==0.32.2",
"easygui==0.98.3",
"einops==0.7.0",
"fairscale==0.4.13",
"ftfy==6.1.1",
@@ -3,6 +3,7 @@ aiofiles==23.2.1
altair==4.2.2
dadaptation==3.2
diffusers[torch]==0.32.2
easygui==0.98.3
einops==0.7.0
fairscale==0.4.13
ftfy==6.1.1
@@ -8,12 +8,12 @@ oneccl_bind_pt==2.3.100+xpu

tensorboard==2.15.2
tensorflow==2.15.1
intel-extension-for-tensorflow[xpu]
intel-extension-for-tensorflow[xpu]==2.15.0.1
onnxruntime-openvino==1.22.0

mkl
mkl-dpcpp
oneccl-devel
impi-devel
mkl==2024.2.1
mkl-dpcpp==2024.2.1
oneccl-devel==2021.13.1
impi-devel==2021.13.1

-r requirements.txt
@@ -0,0 +1,59 @@
import os
import tempfile
import unittest
import subprocess
from pathlib import Path

class TestAllowedPaths(unittest.TestCase):
    def setUp(self):
        print("Setting up test...")
        self.temp_dir = tempfile.TemporaryDirectory()
        self.temp_dir_path = Path(self.temp_dir.name)
        self.dummy_file = self.temp_dir_path / "dummy.txt"
        with open(self.dummy_file, "w") as f:
            f.write("dummy content")

        self.config_file = self.temp_dir_path / "config.toml"
        with open(self.config_file, "w") as f:
            f.write(f'''
[server]
allowed_paths = ["{self.temp_dir.name}"]
''')
        print("Setup complete.")

    def tearDown(self):
        print("Tearing down test...")
        self.temp_dir.cleanup()
        print("Teardown complete.")

    def test_allowed_paths(self):
        print("Running test_allowed_paths...")
        # Run the gui with the new config and check if it can access the dummy file
        process = subprocess.Popen(
            [
                "python",
                "kohya_gui.py",
                "--config",
                str(self.config_file),
                "--headless",
            ],
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
        )

        print("Process started.")
        # Give the server some time to start
        try:
            stdout, stderr = process.communicate(timeout=10)
        except subprocess.TimeoutExpired:
            process.kill()
            stdout, stderr = process.communicate()

        print(f"Stdout: {stdout.decode()}")
        print(f"Stderr: {stderr.decode()}")
        # Check if there are any errors in the stderr
        self.assertNotIn("InvalidPathError", stderr.decode())
        print("Test complete.")

if __name__ == "__main__":
    unittest.main()
91 uv.lock
@@ -338,6 +338,15 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/f5/e8/f6bd1eee09314e7e6dee49cbe2c5e22314ccdb38db16c9fc72d2fa80d054/docker_pycreds-0.4.0-py2.py3-none-any.whl", hash = "sha256:7266112468627868005106ec19cd0d722702d2b7d5912a28e19b826c3d37af49", size = 8982, upload-time = "2018-11-29T03:26:49.575Z" },
]

[[package]]
name = "easygui"
version = "0.98.3"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/cc/ad/e35f7a30272d322be09dc98592d2f55d27cc933a7fde8baccbbeb2bd9409/easygui-0.98.3.tar.gz", hash = "sha256:d653ff79ee1f42f63b5a090f2f98ce02335d86ad8963b3ce2661805cafe99a04", size = 85583, upload-time = "2022-04-01T13:15:50.752Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/8e/a7/b276ff776533b423710a285c8168b52551cb2ab0855443131fdc7fd8c16f/easygui-0.98.3-py2.py3-none-any.whl", hash = "sha256:33498710c68b5376b459cd3fc48d1d1f33822139eb3ed01defbc0528326da3ba", size = 92655, upload-time = "2022-04-01T13:15:49.568Z" },
]

[[package]]
name = "einops"
version = "0.7.0"
@@ -830,6 +839,7 @@ dependencies = [
{ name = "bitsandbytes" },
{ name = "dadaptation" },
{ name = "diffusers", extra = ["torch"] },
{ name = "easygui" },
{ name = "einops" },
{ name = "fairscale" },
{ name = "ftfy" },
@@ -884,6 +894,7 @@ requires-dist = [
{ name = "bitsandbytes", specifier = ">=0.45.0" },
{ name = "dadaptation", specifier = "==3.2" },
{ name = "diffusers", extras = ["torch"], specifier = "==0.32.2" },
{ name = "easygui", specifier = "==0.98.3" },
{ name = "einops", specifier = "==0.7.0" },
{ name = "fairscale", specifier = "==0.4.13" },
{ name = "ftfy", specifier = "==6.1.1" },
@@ -1082,46 +1093,50 @@ wheels = [

[[package]]
name = "multidict"
version = "6.5.1"
version = "6.5.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "typing-extensions", marker = "python_full_version < '3.11'" },
]
sdist = { url = "https://files.pythonhosted.org/packages/5c/43/2d90c414d9efc4587d6e7cebae9f2c2d8001bcb4f89ed514ae837e9dcbe6/multidict-6.5.1.tar.gz", hash = "sha256:a835ea8103f4723915d7d621529c80ef48db48ae0c818afcabe0f95aa1febc3a", size = 98690, upload-time = "2025-06-24T22:16:05.117Z" }
sdist = { url = "https://files.pythonhosted.org/packages/46/b5/59f27b4ce9951a4bce56b88ba5ff5159486797ab18863f2b4c1c5e8465bd/multidict-6.5.0.tar.gz", hash = "sha256:942bd8002492ba819426a8d7aefde3189c1b87099cdf18aaaefefcf7f3f7b6d2", size = 98512, upload-time = "2025-06-17T14:15:56.556Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/d5/29/db869cd110db3b91cbb6a353031e1e7964487403c086fb18fa0fdf5ec48a/multidict-6.5.1-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:7b7d75cb5b90fa55700edbbdca12cd31f6b19c919e98712933c7a1c3c6c71b73", size = 74668, upload-time = "2025-06-24T22:13:46.269Z" },
{ url = "https://files.pythonhosted.org/packages/37/82/2520af39f62a2eb989aaaf84059e95e337ea7aeb464d9feccd53738d5c37/multidict-6.5.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:ad32e43e028276612bf5bab762677e7d131d2df00106b53de2efb2b8a28d5bce", size = 43479, upload-time = "2025-06-24T22:13:48.053Z" },
{ url = "https://files.pythonhosted.org/packages/71/1a/e4f4822d2a0597d7f9e2222178ca4f06ce979ea81c2a6747cc4e0e675cb3/multidict-6.5.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:0499cbc67c1b02ba333781798560c5b1e7cd03e9273b678c92c6de1b1657fac9", size = 43511, upload-time = "2025-06-24T22:13:49.131Z" },
{ url = "https://files.pythonhosted.org/packages/b3/5e/f9e0b97f235a31de5db2a6859aa057bb459c767a7991b5d24549ca123aa6/multidict-6.5.1-cp310-cp310-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:3c78fc6bc1dd7a139dab7ee9046f79a2082dce9360e3899b762615d564e2e857", size = 250040, upload-time = "2025-06-24T22:13:51.151Z" },
{ url = "https://files.pythonhosted.org/packages/c1/ac/2b3058deb743ce99421f8d4072f18eee30a81c86030f9f196a9bdb42978c/multidict-6.5.1-cp310-cp310-manylinux2014_armv7l.manylinux_2_17_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:f369d6619b24da4df4a02455fea8641fe8324fc0100a3e0dcebc5bf55fa903f3", size = 222481, upload-time = "2025-06-24T22:13:53.189Z" },
{ url = "https://files.pythonhosted.org/packages/39/81/b38cf2c7717a286f40876cdb3f2ecaa251512a34da1a3a7c4a7b4e0a226e/multidict-6.5.1-cp310-cp310-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:719af50a44ce9cf9ab15d829bf8cf146de486b4816284c17c3c9b9c9735abb8f", size = 260376, upload-time = "2025-06-24T22:13:54.763Z" },
{ url = "https://files.pythonhosted.org/packages/61/02/ce56214478629279941c5c20af2a865bbfa6bb9fd59ef1c2250bed8d077a/multidict-6.5.1-cp310-cp310-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:199a0a9b3de8bbeb6881460d32b857dc7abec94448aeb6d607c336628c53580a", size = 256845, upload-time = "2025-06-24T22:13:56.001Z" },
{ url = "https://files.pythonhosted.org/packages/ed/7d/e8a57728a6c0fd421e3c7637621bb0cd9e20bdd6cd07bb8c068ea9b0bd4c/multidict-6.5.1-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:fe09318a28b00c6f43180d0d889df1535e98fb2d93d25955d46945f8d5410d87", size = 247834, upload-time = "2025-06-24T22:13:57.22Z" },
{ url = "https://files.pythonhosted.org/packages/cc/30/7f3abcf04755d0265123ceee96d9560c6994a90cd5b28f5c25ec83e43824/multidict-6.5.1-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:ab94923ae54385ed480e4ab19f10269ee60f3eabd0b35e2a5d1ba6dbf3b0cc27", size = 245233, upload-time = "2025-06-24T22:13:59.004Z" },
{ url = "https://files.pythonhosted.org/packages/f6/7a/86057ac640fb13205a489b671281211c85fd9a2b4d4274401484b5e3f1cb/multidict-6.5.1-cp310-cp310-musllinux_1_2_armv7l.whl", hash = "sha256:de2b253a3a90e1fa55eef5f9b3146bb5c722bd3400747112c9963404a2f5b9cf", size = 236867, upload-time = "2025-06-24T22:14:00.225Z" },
{ url = "https://files.pythonhosted.org/packages/e2/3e/de3c63ab6690d4ab4bd7656264b8bec6d3426e8b6b9a280723bb538861fc/multidict-6.5.1-cp310-cp310-musllinux_1_2_ppc64le.whl", hash = "sha256:b3bd88c1bc1f749db6a1e1f01696c3498bc25596136eceebb45766d24a320b27", size = 257058, upload-time = "2025-06-24T22:14:01.79Z" },
{ url = "https://files.pythonhosted.org/packages/13/1f/46782124968479edd8dc8bc6d770b5b0e4b3fcd00275ee26198f025a6184/multidict-6.5.1-cp310-cp310-musllinux_1_2_s390x.whl", hash = "sha256:0ce8f0ea49e8f54203f7d80e083a7aa017dbcb6f2d76d674273e25144c8aa3d7", size = 246787, upload-time = "2025-06-24T22:14:03.079Z" },
{ url = "https://files.pythonhosted.org/packages/a2/b6/c8cd1ece2855296cc64f4c9db21a5252f1c61535ef6e008d984b47ba31e2/multidict-6.5.1-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:dc62c8ac1b73ec704ed1a05be0267358fd5c99d1952f30448db1637336635cf8", size = 243538, upload-time = "2025-06-24T22:14:04.362Z" },
{ url = "https://files.pythonhosted.org/packages/c4/c4/41fad83305ca4a3b4b0a1cf8fec3a6d044a7eb4d9484ee574851fde0b182/multidict-6.5.1-cp310-cp310-win32.whl", hash = "sha256:7a365a579fb3e067943d0278474e14c2244c252f460b401ccbf49f962e7b70fa", size = 40742, upload-time = "2025-06-24T22:14:05.516Z" },
{ url = "https://files.pythonhosted.org/packages/6d/cf/cd82de0db6469376dfe40b56c96768d422077e71e76b4783384864975882/multidict-6.5.1-cp310-cp310-win_amd64.whl", hash = "sha256:4b299a2ffed33ad0733a9d47805b538d59465f8439bfea44df542cfb285c4db2", size = 44715, upload-time = "2025-06-24T22:14:06.686Z" },
{ url = "https://files.pythonhosted.org/packages/1d/9b/5bb9a84cb264f4717a4be7be8ce0536172fb587f8aa8868889408384f8cd/multidict-6.5.1-cp310-cp310-win_arm64.whl", hash = "sha256:ed98ac527278372251fbc8f5c6c41bdf64ded1db0e6e86f9b9622744306060f6", size = 41922, upload-time = "2025-06-24T22:14:07.762Z" },
{ url = "https://files.pythonhosted.org/packages/d5/65/439c3f595f68ee60d2c7abd14f36829b936b49c4939e35f24e65950b59b2/multidict-6.5.1-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:153d7ff738d9b67b94418b112dc5a662d89d2fc26846a9e942f039089048c804", size = 74129, upload-time = "2025-06-24T22:14:08.859Z" },
{ url = "https://files.pythonhosted.org/packages/8a/7a/88b474366126ef7cd427dca84ea6692d81e6e8ebb46f810a565e60716951/multidict-6.5.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:1d784c0a1974f00d87f632d0fb6b1078baf7e15d2d2d1408af92f54d120f136e", size = 43248, upload-time = "2025-06-24T22:14:10.017Z" },
{ url = "https://files.pythonhosted.org/packages/aa/8f/c45ff8980c2f2d1ed8f4f0c682953861fbb840adc318da1b26145587e443/multidict-6.5.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:dedf667cded1cdac5bfd3f3c2ff30010f484faccae4e871cc8a9316d2dc27363", size = 43250, upload-time = "2025-06-24T22:14:11.107Z" },
{ url = "https://files.pythonhosted.org/packages/ac/71/795e729385ecd8994d2033731ced3a80959e9c3c279766613565f5dcc7e1/multidict-6.5.1-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:7cbf407313236a79ce9b8af11808c29756cfb9c9a49a7f24bb1324537eec174b", size = 254313, upload-time = "2025-06-24T22:14:12.216Z" },
{ url = "https://files.pythonhosted.org/packages/de/5a/36e8dd1306f8f6e5b252d6341e919c4a776745e2c38f86bc27d0640d3379/multidict-6.5.1-cp311-cp311-manylinux2014_armv7l.manylinux_2_17_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:2bf0068fe9abb0ebed1436a4e415117386951cf598eb8146ded4baf8e1ff6d1e", size = 227162, upload-time = "2025-06-24T22:14:13.549Z" },
{ url = "https://files.pythonhosted.org/packages/f0/c2/4e68fb3a8ef5b23bbf3d82a19f4ff71de8289b696c662572a6cb094eabf6/multidict-6.5.1-cp311-cp311-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:195882f2f6272dacc88194ecd4de3608ad0ee29b161e541403b781a5f5dd346f", size = 265552, upload-time = "2025-06-24T22:14:14.846Z" },
{ url = "https://files.pythonhosted.org/packages/51/5b/b9ee059e39cd3fec2e1fe9ecb57165fba0518d79323a6f355275ed9ec956/multidict-6.5.1-cp311-cp311-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:5776f9d2c3a1053f022f744af5f467c2f65b40d4cc00082bcf70e8c462c7dbad", size = 260935, upload-time = "2025-06-24T22:14:16.209Z" },
{ url = "https://files.pythonhosted.org/packages/4c/0a/ea655a79d2d89dedb33f423b5dd3a733d97b1765a5e2155da883060fb48f/multidict-6.5.1-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:8a266373c604e49552d295d9f8ec4fd59bd364f2dd73eb18e7d36d5533b88f45", size = 251778, upload-time = "2025-06-24T22:14:17.963Z" },
{ url = "https://files.pythonhosted.org/packages/3f/58/8ff6b032f6c8956c8beb93a7191c80e4a6f385e9ffbe4a38c1cd758a7445/multidict-6.5.1-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:79101d58094419b6e8d07e24946eba440136b9095590271cd6ccc4a90674a57d", size = 249837, upload-time = "2025-06-24T22:14:19.344Z" },
{ url = "https://files.pythonhosted.org/packages/de/be/2fcdfd358ebc1be2ac3922a594daf660f99a23740f5177ba8b2fb6a66feb/multidict-6.5.1-cp311-cp311-musllinux_1_2_armv7l.whl", hash = "sha256:62eb76be8c20d9017a82b74965db93ddcf472b929b6b2b78c56972c73bacf2e4", size = 240831, upload-time = "2025-06-24T22:14:20.647Z" },
{ url = "https://files.pythonhosted.org/packages/e3/e0/1d3a4bb4ce34f314b919f4cb0da26430a6d88758f6d20b1c4f236a569085/multidict-6.5.1-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:70c742357dd6207be30922207f8d59c91e2776ddbefa23830c55c09020e59f8a", size = 262110, upload-time = "2025-06-24T22:14:21.919Z" },
{ url = "https://files.pythonhosted.org/packages/f0/5a/4cabf6661aa18e43dca54d00de06ef287740ad6ddbba34be53b3a554a6ee/multidict-6.5.1-cp311-cp311-musllinux_1_2_s390x.whl", hash = "sha256:29eff1c9a905e298e9cd29f856f77485e58e59355f0ee323ac748203e002bbd3", size = 250845, upload-time = "2025-06-24T22:14:23.276Z" },
{ url = "https://files.pythonhosted.org/packages/66/ad/44c44312d48423327d22be8c7058f9da8e2a527c9230d89b582670327efd/multidict-6.5.1-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:090e0b37fde199b58ea050c472c21dc8a3fbf285f42b862fe1ff02aab8942239", size = 247351, upload-time = "2025-06-24T22:14:24.523Z" },
{ url = "https://files.pythonhosted.org/packages/21/30/a12bbd76222be44c4f2d540c0d9cd1f932ab97e84a06098749f29b2908f5/multidict-6.5.1-cp311-cp311-win32.whl", hash = "sha256:6037beca8cb481307fb586ee0b73fae976a3e00d8f6ad7eb8af94a878a4893f0", size = 40644, upload-time = "2025-06-24T22:14:26.139Z" },
{ url = "https://files.pythonhosted.org/packages/90/58/2ce479dcb4611212eaa4808881d9a66a4362c48cd9f7b525b24a5d45764f/multidict-6.5.1-cp311-cp311-win_amd64.whl", hash = "sha256:b632c1e4a2ff0bb4c1367d6c23871aa95dbd616bf4a847034732a142bb6eea94", size = 44693, upload-time = "2025-06-24T22:14:27.265Z" },
{ url = "https://files.pythonhosted.org/packages/cc/d1/466a6cf48dcef796f2d75ba51af4475ac96c6ea33ef4dbf4cea1caf99532/multidict-6.5.1-cp311-cp311-win_arm64.whl", hash = "sha256:2ec3aa63f0c668f591d43195f8e555f803826dee34208c29ade9d63355f9e095", size = 41822, upload-time = "2025-06-24T22:14:28.387Z" },
{ url = "https://files.pythonhosted.org/packages/07/9f/d4719ce55a1d8bf6619e8bb92f1e2e7399026ea85ae0c324ec77ee06c050/multidict-6.5.1-py3-none-any.whl", hash = "sha256:895354f4a38f53a1df2cc3fa2223fa714cff2b079a9f018a76cad35e7f0f044c", size = 12185, upload-time = "2025-06-24T22:16:03.816Z" },
{ url = "https://files.pythonhosted.org/packages/8b/88/f8354ef1cb1121234c3461ff3d11eac5f4fe115f00552d3376306275c9ab/multidict-6.5.0-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:2e118a202904623b1d2606d1c8614e14c9444b59d64454b0c355044058066469", size = 73858, upload-time = "2025-06-17T14:13:21.451Z" },
{ url = "https://files.pythonhosted.org/packages/49/04/634b49c7abe71bd1c61affaeaa0c2a46b6be8d599a07b495259615dbdfe0/multidict-6.5.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:a42995bdcaff4e22cb1280ae7752c3ed3fbb398090c6991a2797a4a0e5ed16a9", size = 43186, upload-time = "2025-06-17T14:13:23.615Z" },
{ url = "https://files.pythonhosted.org/packages/3b/ff/091ff4830ec8f96378578bfffa7f324a9dd16f60274cec861ae65ba10be3/multidict-6.5.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:2261b538145723ca776e55208640fffd7ee78184d223f37c2b40b9edfe0e818a", size = 43031, upload-time = "2025-06-17T14:13:24.725Z" },
{ url = "https://files.pythonhosted.org/packages/10/c1/1b4137845f8b8dbc2332af54e2d7761c6a29c2c33c8d47a0c8c70676bac1/multidict-6.5.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0e5b19f8cd67235fab3e195ca389490415d9fef5a315b1fa6f332925dc924262", size = 233588, upload-time = "2025-06-17T14:13:26.181Z" },
{ url = "https://files.pythonhosted.org/packages/c3/77/cbe9a1f58c6d4f822663788e414637f256a872bc352cedbaf7717b62db58/multidict-6.5.0-cp310-cp310-manylinux_2_17_armv7l.manylinux2014_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:177b081e4dec67c3320b16b3aa0babc178bbf758553085669382c7ec711e1ec8", size = 222714, upload-time = "2025-06-17T14:13:27.482Z" },
{ url = "https://files.pythonhosted.org/packages/6c/37/39e1142c2916973818515adc13bbdb68d3d8126935e3855200e059a79bab/multidict-6.5.0-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:4d30a2cc106a7d116b52ee046207614db42380b62e6b1dd2a50eba47c5ca5eb1", size = 242741, upload-time = "2025-06-17T14:13:28.92Z" },
{ url = "https://files.pythonhosted.org/packages/a3/aa/60c3ef0c87ccad3445bf01926a1b8235ee24c3dde483faef1079cc91706d/multidict-6.5.0-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a72933bc308d7a64de37f0d51795dbeaceebdfb75454f89035cdfc6a74cfd129", size = 235008, upload-time = "2025-06-17T14:13:30.587Z" },
{ url = "https://files.pythonhosted.org/packages/bf/5e/f7e0fd5f5b8a7b9a75b0f5642ca6b6dde90116266920d8cf63b513f3908b/multidict-6.5.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:96d109e663d032280ef8ef62b50924b2e887d5ddf19e301844a6cb7e91a172a6", size = 226627, upload-time = "2025-06-17T14:13:31.831Z" },
{ url = "https://files.pythonhosted.org/packages/b7/74/1bc0a3c6a9105051f68a6991fe235d7358836e81058728c24d5bbdd017cb/multidict-6.5.0-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b555329c9894332401f03b9a87016f0b707b6fccd4706793ec43b4a639e75869", size = 228232, upload-time = "2025-06-17T14:13:33.402Z" },
{ url = "https://files.pythonhosted.org/packages/99/e7/37118291cdc31f4cc680d54047cdea9b520e9a724a643919f71f8c2a2aeb/multidict-6.5.0-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:6994bad9d471ef2156f2b6850b51e20ee409c6b9deebc0e57be096be9faffdce", size = 246616, upload-time = "2025-06-17T14:13:34.964Z" },
{ url = "https://files.pythonhosted.org/packages/ff/89/e2c08d6bdb21a1a55be4285510d058ace5f5acabe6b57900432e863d4c70/multidict-6.5.0-cp310-cp310-musllinux_1_2_armv7l.whl", hash = "sha256:b15f817276c96cde9060569023808eec966bd8da56a97e6aa8116f34ddab6534", size = 235007, upload-time = "2025-06-17T14:13:36.428Z" },
{ url = "https://files.pythonhosted.org/packages/89/1e/e39a98e8e1477ec7a871b3c17265658fbe6d617048059ae7fa5011b224f3/multidict-6.5.0-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:b4bf507c991db535a935b2127cf057a58dbc688c9f309c72080795c63e796f58", size = 244824, upload-time = "2025-06-17T14:13:37.982Z" },
{ url = "https://files.pythonhosted.org/packages/a3/ba/63e11edd45c31e708c5a1904aa7ac4de01e13135a04cfe96bc71eb359b85/multidict-6.5.0-cp310-cp310-musllinux_1_2_ppc64le.whl", hash = "sha256:60c3f8f13d443426c55f88cf3172547bbc600a86d57fd565458b9259239a6737", size = 257229, upload-time = "2025-06-17T14:13:39.554Z" },
{ url = "https://files.pythonhosted.org/packages/0f/00/bdcceb6af424936adfc8b92a79d3a95863585f380071393934f10a63f9e3/multidict-6.5.0-cp310-cp310-musllinux_1_2_s390x.whl", hash = "sha256:a10227168a24420c158747fc201d4279aa9af1671f287371597e2b4f2ff21879", size = 247118, upload-time = "2025-06-17T14:13:40.795Z" },
{ url = "https://files.pythonhosted.org/packages/b6/a0/4aa79e991909cca36ca821a9ba5e8e81e4cd5b887c81f89ded994e0f49df/multidict-6.5.0-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:e3b1425fe54ccfde66b8cfb25d02be34d5dfd2261a71561ffd887ef4088b4b69", size = 243948, upload-time = "2025-06-17T14:13:42.477Z" },
{ url = "https://files.pythonhosted.org/packages/21/8b/e45e19ce43afb31ff6b0fd5d5816b4fcc1fcc2f37e8a82aefae06c40c7a6/multidict-6.5.0-cp310-cp310-win32.whl", hash = "sha256:b4e47ef51237841d1087e1e1548071a6ef22e27ed0400c272174fa585277c4b4", size = 40433, upload-time = "2025-06-17T14:13:43.972Z" },
{ url = "https://files.pythonhosted.org/packages/d2/6e/96e0ba4601343d9344e69503fca072ace19c35f7d4ca3d68401e59acdc8f/multidict-6.5.0-cp310-cp310-win_amd64.whl", hash = "sha256:63b3b24fadc7067282c88fae5b2f366d5b3a7c15c021c2838de8c65a50eeefb4", size = 44423, upload-time = "2025-06-17T14:13:44.991Z" },
{ url = "https://files.pythonhosted.org/packages/eb/4a/9befa919d7a390f13a5511a69282b7437782071160c566de6e0ebf712c9f/multidict-6.5.0-cp310-cp310-win_arm64.whl", hash = "sha256:8b2d61afbafc679b7eaf08e9de4fa5d38bd5dc7a9c0a577c9f9588fb49f02dbb", size = 41481, upload-time = "2025-06-17T14:13:49.389Z" },
{ url = "https://files.pythonhosted.org/packages/75/ba/484f8e96ee58ec4fef42650eb9dbbedb24f9bc155780888398a4725d2270/multidict-6.5.0-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:8b4bf6bb15a05796a07a248084e3e46e032860c899c7a9b981030e61368dba95", size = 73283, upload-time = "2025-06-17T14:13:50.406Z" },
{ url = "https://files.pythonhosted.org/packages/71/48/01d62ea6199d76934c87746695b3ed16aeedfdd564e8d89184577037baac/multidict-6.5.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:46bb05d50219655c42a4b8fcda9c7ee658a09adbb719c48e65a20284e36328ea", size = 42937, upload-time = "2025-06-17T14:13:51.45Z" },
{ url = "https://files.pythonhosted.org/packages/da/cf/bb462d920f26d9e2e0aff8a78aeb06af1225b826e9a5468870c57591910a/multidict-6.5.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:54f524d73f4d54e87e03c98f6af601af4777e4668a52b1bd2ae0a4d6fc7b392b", size = 42748, upload-time = "2025-06-17T14:13:52.505Z" },
{ url = "https://files.pythonhosted.org/packages/cd/b1/d5c11ea0fdad68d3ed45f0e2527de6496d2fac8afe6b8ca6d407c20ad00f/multidict-6.5.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:529b03600466480ecc502000d62e54f185a884ed4570dee90d9a273ee80e37b5", size = 236448, upload-time = "2025-06-17T14:13:53.562Z" },
{ url = "https://files.pythonhosted.org/packages/fc/69/c3ceb264994f5b338c812911a8d660084f37779daef298fc30bd817f75c7/multidict-6.5.0-cp311-cp311-manylinux_2_17_armv7l.manylinux2014_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:69ad681ad7c93a41ee7005cc83a144b5b34a3838bcf7261e2b5356057b0f78de", size = 228695, upload-time = "2025-06-17T14:13:54.775Z" },
{ url = "https://files.pythonhosted.org/packages/81/3d/c23dcc0d34a35ad29974184db2878021d28fe170ecb9192be6bfee73f1f2/multidict-6.5.0-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:3fe9fada8bc0839466b09fa3f6894f003137942984843ec0c3848846329a36ae", size = 247434, upload-time = "2025-06-17T14:13:56.039Z" },
{ url = "https://files.pythonhosted.org/packages/06/b3/06cf7a049129ff52525a859277abb5648e61d7afae7fb7ed02e3806be34e/multidict-6.5.0-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:f94c6ea6405fcf81baef1e459b209a78cda5442e61b5b7a57ede39d99b5204a0", size = 239431, upload-time = "2025-06-17T14:13:57.33Z" },
{ url = "https://files.pythonhosted.org/packages/8a/72/b2fe2fafa23af0c6123aebe23b4cd23fdad01dfe7009bb85624e4636d0dd/multidict-6.5.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:84ca75ad8a39ed75f079a8931435a5b51ee4c45d9b32e1740f99969a5d1cc2ee", size = 231542, upload-time = "2025-06-17T14:13:58.597Z" },
{ url = "https://files.pythonhosted.org/packages/a1/c9/a52ca0a342a02411a31b6af197a6428a5137d805293f10946eeab614ec06/multidict-6.5.0-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:be4c08f3a2a6cc42b414496017928d95898964fed84b1b2dace0c9ee763061f9", size = 233069, upload-time = "2025-06-17T14:13:59.834Z" },
{ url = "https://files.pythonhosted.org/packages/9b/55/a3328a3929b8e131e2678d5e65f552b0a6874fab62123e31f5a5625650b0/multidict-6.5.0-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:046a7540cfbb4d5dc846a1fd9843f3ba980c6523f2e0c5b8622b4a5c94138ae6", size = 250596, upload-time = "2025-06-17T14:14:01.178Z" },
{ url = "https://files.pythonhosted.org/packages/6c/b8/aa3905a38a8287013aeb0a54c73f79ccd8b32d2f1d53e5934643a36502c2/multidict-6.5.0-cp311-cp311-musllinux_1_2_armv7l.whl", hash = "sha256:64306121171d988af77d74be0d8c73ee1a69cf6f96aea7fa6030c88f32a152dd", size = 237858, upload-time = "2025-06-17T14:14:03.232Z" },
{ url = "https://files.pythonhosted.org/packages/d3/eb/f11d5af028014f402e5dd01ece74533964fa4e7bfae4af4824506fa8c398/multidict-6.5.0-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:b4ac1dd5eb0ecf6f7351d5a9137f30a83f7182209c5d37f61614dfdce5714853", size = 249175, upload-time = "2025-06-17T14:14:04.561Z" },
{ url = "https://files.pythonhosted.org/packages/ac/57/d451905a62e5ef489cb4f92e8190d34ac5329427512afd7f893121da4e96/multidict-6.5.0-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:bab4a8337235365f4111a7011a1f028826ca683834ebd12de4b85e2844359c36", size = 259532, upload-time = "2025-06-17T14:14:05.798Z" },
{ url = "https://files.pythonhosted.org/packages/d3/90/ff82b5ac5cabe3c79c50cf62a62f3837905aa717e67b6b4b7872804f23c8/multidict-6.5.0-cp311-cp311-musllinux_1_2_s390x.whl", hash = "sha256:a05b5604c5a75df14a63eeeca598d11b2c3745b9008539b70826ea044063a572", size = 250554, upload-time = "2025-06-17T14:14:07.382Z" },
{ url = "https://files.pythonhosted.org/packages/d5/5a/0cabc50d4bc16e61d8b0a8a74499a1409fa7b4ef32970b7662a423781fc7/multidict-6.5.0-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:67c4a640952371c9ca65b6a710598be246ef3be5ca83ed38c16a7660d3980877", size = 248159, upload-time = "2025-06-17T14:14:08.65Z" },
{ url = "https://files.pythonhosted.org/packages/c0/1d/adeabae0771544f140d9f42ab2c46eaf54e793325999c36106078b7f6600/multidict-6.5.0-cp311-cp311-win32.whl", hash = "sha256:fdeae096ca36c12d8aca2640b8407a9d94e961372c68435bef14e31cce726138", size = 40357, upload-time = "2025-06-17T14:14:09.91Z" },
{ url = "https://files.pythonhosted.org/packages/e1/fe/bbd85ae65c96de5c9910c332ee1f4b7be0bf0fb21563895167bcb6502a1f/multidict-6.5.0-cp311-cp311-win_amd64.whl", hash = "sha256:e2977ef8b7ce27723ee8c610d1bd1765da4f3fbe5a64f9bf1fd3b4770e31fbc0", size = 44432, upload-time = "2025-06-17T14:14:11.013Z" },
{ url = "https://files.pythonhosted.org/packages/96/af/f9052d9c4e65195b210da9f7afdea06d3b7592b3221cc0ef1b407f762faa/multidict-6.5.0-cp311-cp311-win_arm64.whl", hash = "sha256:82d0cf0ea49bae43d9e8c3851e21954eff716259ff42da401b668744d1760bcb", size = 41408, upload-time = "2025-06-17T14:14:12.112Z" },
{ url = "https://files.pythonhosted.org/packages/44/d8/45e8fc9892a7386d074941429e033adb4640e59ff0780d96a8cf46fe788e/multidict-6.5.0-py3-none-any.whl", hash = "sha256:5634b35f225977605385f56153bd95a7133faffc0ffe12ad26e10517537e8dfc", size = 12181, upload-time = "2025-06-17T14:15:55.156Z" },
]
[[package]]
@@ -2179,15 +2194,15 @@ wheels = [
[[package]]
name = "sentry-sdk"
version = "2.31.0"
version = "2.30.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "certifi" },
{ name = "urllib3" },
]
sdist = { url = "https://files.pythonhosted.org/packages/d0/45/c7ef7e12d8434fda8b61cdab432d8af64fb832480c93cdaf4bdcab7f5597/sentry_sdk-2.31.0.tar.gz", hash = "sha256:fed6d847f15105849cdf5dfdc64dcec356f936d41abb8c9d66adae45e60959ec", size = 334167, upload-time = "2025-06-24T16:36:26.066Z" }
sdist = { url = "https://files.pythonhosted.org/packages/04/4c/af31e0201b48469786ddeb1bf6fd3dfa3a291cc613a0fe6a60163a7535f9/sentry_sdk-2.30.0.tar.gz", hash = "sha256:436369b02afef7430efb10300a344fb61a11fe6db41c2b11f41ee037d2dd7f45", size = 326767, upload-time = "2025-06-12T10:34:34.733Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/7d/a2/9b6d8cc59f03251c583b3fec9d2f075dc09c0f6e030e0e0a3b223c6e64b2/sentry_sdk-2.31.0-py2.py3-none-any.whl", hash = "sha256:e953f5ab083e6599bab255b75d6829b33b3ddf9931a27ca00b4ab0081287e84f", size = 355638, upload-time = "2025-06-24T16:36:24.306Z" },
{ url = "https://files.pythonhosted.org/packages/5a/99/31ac6faaae33ea698086692638f58d14f121162a8db0039e68e94135e7f1/sentry_sdk-2.30.0-py2.py3-none-any.whl", hash = "sha256:59391db1550662f746ea09b483806a631c3ae38d6340804a1a4c0605044f6877", size = 343149, upload-time = "2025-06-12T10:34:32.896Z" },
]
[[package]]