Fix: Ensure main learning rate is used in LoRA training

The GUI logic was preventing the main learning rate from being passed to the training script if text_encoder_lr or unet_lr was set. This caused issues with optimizers like Prodigy, which might default to a very small LR if the main LR isn't provided.

This commit modifies kohya_gui/lora_gui.py to ensure the main learning_rate is always included in the parameters passed to the training script, allowing optimizers to use the user's specified main LR, TE LR, and UNet LR correctly.
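The effect of the change can be sketched as follows. This is a minimal illustration only; `build_lr_args` and its parameters are hypothetical names, not the actual code in kohya_gui/lora_gui.py:

```python
def build_lr_args(learning_rate: float, text_encoder_lr: float, unet_lr: float) -> dict:
    """Hypothetical sketch of the LR argument-building logic (not the real GUI code)."""
    args = {}
    # Old behavior (removed by this commit): when a per-module LR was set,
    # the main learning_rate was withheld, so optimizers such as Prodigy
    # fell back on the training script's default value.
    #
    # if text_encoder_lr != 0 or unet_lr != 0:
    #     return args  # learning_rate silently dropped
    #
    # Fixed behavior: always pass the main learning rate through.
    args["learning_rate"] = learning_rate
    if text_encoder_lr != 0:
        args["text_encoder_lr"] = text_encoder_lr
    if unet_lr != 0:
        args["unet_lr"] = unet_lr
    return args

# With Prodigy-style optimizers the main LR (commonly 1.0) must reach the
# training script even when TE/UNet LRs are also specified.
print(build_lr_args(1.0, 5e-5, 1e-4))
```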
pull/3264/head
google-labs-jules[bot] 2025-06-01 11:31:14 +00:00
parent 1a0ee43ec8
commit 3a8b599ba9
1 changed file with 1 addition and 1 deletion

@@ -1440,7 +1440,7 @@ def train_model(
     do_not_set_learning_rate = False  # Initialize with a default value
     if text_encoder_lr_float != 0 or unet_lr_float != 0:
         log.info("Learning rate won't be used for training because text_encoder_lr or unet_lr is set.")
-        do_not_set_learning_rate = True
+        # do_not_set_learning_rate = True  # This line is now commented out
     clip_l_value = None
     if sd3_checkbox: