Commit Graph

198 Commits (master)

Author SHA1 Message Date
google-labs-jules[bot] 4ef34d9f5f Fix: Default config file dialog to repo root
Modified the get_file_path function in kohya_gui/common_gui.py
to ensure that when you are opening a configuration file, the
file dialog defaults to the kohya_ss repository root if no
initial path or only a filename is provided.

If a full or relative path is already present in the input field,
the dialog will open in the specified directory as before.

This change improves your experience by starting the file search
in a more relevant location.
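A minimal sketch of the fallback described above (the name `resolve_initial_dir` and its signature are illustrative assumptions; the real logic lives in `get_file_path` in `kohya_gui/common_gui.py`):

```python
import os

def resolve_initial_dir(current_value: str, repo_root: str) -> str:
    """Pick the directory a file dialog should open in.

    Mirrors the behaviour described above: fall back to the repo root
    when the input field is empty or holds a bare filename, otherwise
    honour the directory already present in the field.
    """
    directory = os.path.dirname(current_value)
    if not directory:  # empty field, or a plain filename like "config.toml"
        return repo_root
    return directory
```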
2025-06-01 17:54:27 +00:00
google-labs-jules[bot] 69d8b96c1c Feat: Add logging for effective learning rates in LoRA GUI
This commit introduces a helper function, `get_effective_lr_messages`, into `kohya_gui/lora_gui.py` and integrates it into the `train_model` function.

The purpose is to provide you with clearer information about how the learning rates set in the GUI (Main LR, Text Encoder LR, U-Net LR, T5XXL LR) will be interpreted and effectively applied by the underlying `sd-scripts` training engine.

Before training commences, the GUI will now log:
- The Main LR.
- The effective LR for the primary Text Encoder (CLIP), indicating if it's a specific value or a fallback to the Main LR.
- The effective LR for the T5XXL Text Encoder (if applicable), indicating its source (specific, inherited from primary TE, or fallback to Main LR).
- The effective LR for the U-Net, indicating if it's a specific value or a fallback to the Main LR.

This enhances transparency by helping you understand how your LR settings interact, without modifying the `sd-scripts` submodule.
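The fallback rules above can be sketched as follows. This is a simplified illustration covering only the primary Text Encoder and U-Net; the function name and message format are assumptions, not the exact output of `get_effective_lr_messages`:

```python
def effective_lr_messages(main_lr, te_lr=None, unet_lr=None):
    """Describe which LR each component will actually train with.

    A specific value wins; an unset (None) value falls back to the
    main LR, matching the fallback rules listed above.
    """
    msgs = [f"Main LR: {main_lr}"]
    te = te_lr if te_lr is not None else main_lr
    src = "specific" if te_lr is not None else "fallback to Main LR"
    msgs.append(f"Text Encoder LR: {te} ({src})")
    unet = unet_lr if unet_lr is not None else main_lr
    src = "specific" if unet_lr is not None else "fallback to Main LR"
    msgs.append(f"U-Net LR: {unet} ({src})")
    return msgs
```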
2025-06-01 13:59:07 +00:00
google-labs-jules[bot] d63a7fa2b6 Refactor: Clean up LR handling logic in LoRA GUI
This commit refactors the learning rate (LR) handling in `kohya_gui/lora_gui.py` for LoRA training.

The previous fix for LR misinterpretation involved commenting out a line. This commit completes the cleanup by:
- Removing the `do_not_set_learning_rate` variable and its associated conditional logic, which became redundant.
- Renaming the float-converted `learning_rate` to `learning_rate_float` for clarity.
- Ensuring that `learning_rate_float` and the float-converted `unet_lr_float` are consistently used when preparing the `config_toml_data` for the training script.

This makes the code cleaner and the intent of always passing the main learning rate (along with specific TE/UNet LRs) more direct. The functional behavior of the LR fix remains the same.
2025-06-01 12:29:08 +00:00
google-labs-jules[bot] 3a8b599ba9 Fix: Ensure main learning rate is used in LoRA training
The GUI logic was preventing the main learning rate from being passed to the training script if text_encoder_lr or unet_lr was set. This caused issues with optimizers like Prodigy, which might default to a very small LR if the main LR isn't provided.

This commit modifies kohya_gui/lora_gui.py to ensure the main learning_rate is always included in the parameters passed to the training script, allowing optimizers to use your specified main LR, TE LR, and UNet LR correctly.
2025-06-01 11:31:14 +00:00
bmaltais 451f051d52
Update kohya_gui/blip2_caption_gui.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-05-25 18:10:24 -04:00
bmaltais c4247408fe
Feat/add max grad norm dreambooth (#3251)
* I've added a `max_grad_norm` parameter to the Dreambooth GUI.

This change adds support for the `max_grad_norm` parameter to the Dreambooth training GUI.

- The `max_grad_norm` option is now available in the 'Basic' training parameters section.
- The value you set in the GUI for `max_grad_norm` is passed to the training script via the generated TOML configuration file, similar to its existing implementation in the LoRA GUI.
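For illustration, the generated TOML might then contain lines like these (the values are placeholders; `max_grad_norm` and `learning_rate` are the sd-scripts parameter names):

```toml
# Excerpt of a generated training config (illustrative values)
max_grad_norm = 1.0      # gradient clipping threshold taken from the GUI field
learning_rate = 1e-4
```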

* Fix missing entry for max_grad_norm

---------

Co-authored-by: google-labs-jules[bot] <161369871+google-labs-jules[bot]@users.noreply.github.com>
2025-05-25 17:44:03 -04:00
bmaltais c30249f633 Fix logic issues 2025-05-03 11:20:08 -04:00
bmaltais d8bf6eff04 Fix some logic issues in lora handling 2025-05-03 11:11:41 -04:00
Ryan 9d500d99c2
Apple Silicon Support (This time not on Master) (#3174)
* Adding some changes to support current Apple Silicon

Adding a note that MPS is detected in validation, and a current set of packages that offer MPS torch acceleration

* Adding MPS support for blip2
2025-04-19 10:22:53 -04:00
bmaltais e79f0416fb Fix issue with v_param when SDXL is selected 2025-04-05 08:47:35 -04:00
bmaltais 8f2476115e Add pytorch_optimizer.CAME to optimizer list. 2025-03-30 14:44:24 -04:00
bmaltais 1c7ab4d4f3 Add support for LoRA-GGPO 2025-03-30 14:41:40 -04:00
bmaltais eabfc4f93c Fix issue with local variable 'do_not_set_learning_rate' referenced before assignment 2025-03-30 10:48:29 -04:00
bmaltais b9028e8710
v25.0.3
- Upgrade Gradio, diffusers and huggingface-hub to latest release to fix issue with ASGI.
- Add a new method to set up and run the GUI. You will find two new scripts, for Windows (gui-uv.bat) and Linux (gui-uv.sh). With those scripts there is no need to run setup.bat or setup.sh anymore.
2025-03-28 17:46:38 -04:00
bmaltais ed55e81997
v25.0.0 release (#3138)
* Add support for custom learning rate scheduler type to the GUI

* Add .webp image extension support to BLIP2 captioning.

* Check for --debug flag for gui command-line args at startup

* Validate GPU ID accelerate input and return error when needed

* Update to latest sd-scripts dev commit

* Fix issue with pip upgrade

* Remove confusing log after command execution.

* piecewise_constant scheduler

* Update to latest sd-scripts dev commit

* fix: fixed docker-compose for passing models via volumes

* Prevent providing the legacy learning_rate if unet or te learning rate is provided

* Fix toml noise offset parameters based on selected type

* Fix adaptive_noise_scale value not properly loading from json config

* Fix prompt.txt location

* Improve "print command" output format

* Use output model name as wandb run name if not provided

* Update sd-scripts dev release

* Bump crate-ci/typos from 1.21.0 to 1.22.9

Bumps [crate-ci/typos](https://github.com/crate-ci/typos) from 1.21.0 to 1.22.9.
- [Release notes](https://github.com/crate-ci/typos/releases)
- [Changelog](https://github.com/crate-ci/typos/blob/master/CHANGELOG.md)
- [Commits](https://github.com/crate-ci/typos/compare/v1.21.0...v1.22.9)

---
updated-dependencies:
- dependency-name: crate-ci/typos
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* Bump docker/build-push-action from 5 to 6

Bumps [docker/build-push-action](https://github.com/docker/build-push-action) from 5 to 6.
- [Release notes](https://github.com/docker/build-push-action/releases)
- [Commits](https://github.com/docker/build-push-action/compare/v5...v6)

---
updated-dependencies:
- dependency-name: docker/build-push-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

* Get latest sd3 code

* Adding SD3 GUI elements

* Fix interactivity

* MVP GUI for SD3

* Fix text encoder issue

* Add fork section to readme

* Update sd3 commit

* Merge security-fix

* Update sc-script to latest code

* Auto-detect model type for safetensors files

Automatically tick the checkboxes for v2 and SDXL on the common training UI
and LoRA extract/merge utilities.

* autodetect-modeltype: remove unused lambda inputs

* rework TE1/TE2 learning rate handling for SDXL dreambooth

SDXL dreambooth apparently trains without the text encoders by default,
requiring the `--train_text_encoder` flag to be passed so that the
learning rates for TE1/TE2 are recognized.

The toml handling now permits 0 to be passed as a learning rate in
order to disable training of one or both text encoders.
This behavior aligns with the description given on the GUI.

TE1/TE2 learning rate parameters can be left blank on the GUI to
not pass a value to the training script.

* dreambooth_gui: fix toml value filtering condition

In python3, `0 == False` will evaluate True.
That can cause arg values of 0 to be wrongly eliminated from the toml output.
The conditional must check the type when comparing for False.
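A minimal sketch of the corrected filter (the helper name is hypothetical; it only illustrates the truthiness pitfall the fix addresses):

```python
def toml_value_ok(value):
    """Decide whether a value should be written to the training toml.

    Filtering with `value == False` would also drop 0 and 0.0, because
    `0 == False` evaluates True in Python.  Checking the type first
    keeps numeric zeros (e.g. a learning rate of 0 that disables a
    text encoder) while still dropping genuine False flags.
    """
    if isinstance(value, bool):
        return value is not False  # drop only real booleans set to False
    return value is not None and value != ""
```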

* autodetect-modeltype: also do the v2 checkbox in extract_lora

* Update to latest dev branch code

* bring back SDXLConfig accordion for dreambooth gui (#2694)

Co-authored-by: b-fission <b-fission@users.noreply.github.com>

* Update to latest sd3 branch commit

* Fix merge issue

* Update gradio version

* Update to latest flux.1 code

* Add Flux.1 Model checkbox and detection

* Adding LoRA type "Flux1" to dropdown

* Added Flux.1 parameters to GUI

* Update sd-scripts and requirements

* Add missing Flux.1 GUI parameters

* Update to latest sd-scripts sd3 code

* Fix issue with cache_text_encoder_outputs

* Update to latest sd-scripts flux1 code

* Adding new flux.1 options to GUI

* Update to latest sd-scripts version of flux.1

* Adding guidance_scale option

* Update to latest sd3 flux.1 sd-scripts

* Add dreambooth and finetuning support for flux.1

* Update README

* Fix t5xxl path issue in DB

* add missing fp8_base parameter

* Fix issue with guidance scale not being passed as float for values like 1

* Temporary fix for blockwise_fused_optimizers

* Update to latest sd-scripts Flux.1 code

* Fix blockwise_fused_optimizers typo

* Add mem_eff_save option to GUI for Flux.1

* Added support for Flux.1 LoRA Merge

* Update to latest sd-scripts sd3 branch code

* Add diffusers option to flux.1 merge LoRA utility

* Fix issue with split_mode and train_blocks

* Updating requirements

* Add flux_fused_backward_pass to dreambooth and finetuning

* Update requirements_linux_docker.txt

update accelerate version for linux_docker

* Update to latest sd3 flux code

* Add extract flux lora GUI

* Merged latest sd3 branch code

* Add support for split_qkv

* Add missing network argument for split_qkv

* Add timestep_sampling shift support

* Update to latest sd-scripts flux.1 code

* Add support for fp8_base_unet

* Update requirements as per sd-scripts suggestion

* Upgrade to cu124

* Update IPEX and ROCm

* Fix issue with balancing when folder with name already exists

* Update sd-scripts

* Removed unsupported parameters from flux lora network

* Bump crate-ci/typos from 1.23.6 to 1.24.3

Bumps [crate-ci/typos](https://github.com/crate-ci/typos) from 1.23.6 to 1.24.3.
- [Release notes](https://github.com/crate-ci/typos/releases)
- [Changelog](https://github.com/crate-ci/typos/blob/master/CHANGELOG.md)
- [Commits](https://github.com/crate-ci/typos/compare/v1.23.6...v1.24.3)

---
updated-dependencies:
- dependency-name: crate-ci/typos
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* Update sd-scripts code

* Adding flux_shift option to timestep_sampling

* Update sd-scripts release

* Add support for Train T5-XXL

* Update sd-scripts submodule

* Add support for cpu_offload_checkpointing to GUI

* Force t5xxl_max_token_length to be passed as an integer

* Fix typo for flux_shift

* Update to latest sd-scripts code

* Grouping lora parameters

* Validate if lora type is Flux1 when flux1_checkbox is true

* Improve visual sectioning of parameters for lora

* Add dark mode styles

* Missed one color

* Update sd-scripts and add support for t5xxl LR

* Update transformers and wandb module

* Fix issue with new text_encoder_lr parameter syntax

* Add support for lr_warmup_steps override

* Update lr_warmup_steps code

* Removing stable-diffusion-1.5 default model

* Fix for max_train_steps

* Revert some changes

* Preliminary support for Flux1 OFT

* Fix logic typo

* Update sd-scripts

* Add support for Rank for layers

* Update lora_gui.py

Fixed minor typos of "Regularization"

* Update dreambooth_gui.py

Fixed minor typos of "Regularization"

* Update textual_inversion_gui.py

Fixed minor typos of "Regularization"

* Add support for Blocks to train

* Add missing network parms

* Fix issue with old_lr_warmup_steps

* Update sd-scripts

* Add support for ScheduleFree Optimizer Type

* Update sd-scripts

* Update requirements_pytorch_windows.txt

* Update requirements_pytorch_windows.txt

* Update sd-scripts from origin

* Another sd-script update

* Adding support for blocks_to_swap option to gui

* Fix xformers install issue

* feat(docker): mount models folder as a volume

* feat(docker): add models folder to .dockerignore

* Add support for AdEMAMix8bit optimizer

* Bump crate-ci/typos from 1.23.6 to 1.25.0

Bumps [crate-ci/typos](https://github.com/crate-ci/typos) from 1.23.6 to 1.25.0.
- [Release notes](https://github.com/crate-ci/typos/releases)
- [Changelog](https://github.com/crate-ci/typos/blob/master/CHANGELOG.md)
- [Commits](https://github.com/crate-ci/typos/compare/v1.23.6...v1.25.0)

---
updated-dependencies:
- dependency-name: crate-ci/typos
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* Fix typo on README.md

* Add new --noverify option to skip requirements validation on startup

* Update startup GUI code

* Update setup code

* Update sd-scripts

* Update sd-scripts

* Update Lycoris support

* Allow specifying the tensorboard host via the TENSORBOARD_HOST env var

* Update sd-scripts version

* Update sd-scripts release

* Update sd-scripts

* Add --skip_cache_check option to GUI

* Fix requirements issue

* Add support for LyCORIS LoRA when training Flux.1

* Pin huggingface-hub version for gradio 5

* Update sd-scripts

* Add support for --save_last_n_epochs_state

* Update sd-scripts to version with Differential Output Preservation

* Increase maximum flux-lora merge strength to 2

* Update to latest sd-scripts

* Update requirements syntax (for windows)

* Update requirements for linux

* Update torch version and validation output

* Fix typo

* Update README

* Fix validation issue on linux

* Update sd-scripts, improve requirements outputs

* Update requirements_runpod.txt

* Update requirements for onnxruntime-gpu

Needed for compatibility with CUDA 12.

* Update onnxruntime-gpu==1.19.2

* Update sd-scripts release

* Add support for save_last_n_epochs

* Update sd-scripts

* Bump crate-ci/typos from 1.23.6 to 1.26.8 (#2940)

Bumps [crate-ci/typos](https://github.com/crate-ci/typos) from 1.23.6 to 1.26.8.
- [Release notes](https://github.com/crate-ci/typos/releases)
- [Changelog](https://github.com/crate-ci/typos/blob/master/CHANGELOG.md)
- [Commits](https://github.com/crate-ci/typos/compare/v1.23.6...v1.26.8)

---
updated-dependencies:
- dependency-name: crate-ci/typos
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: bmaltais <bernard@ducourier.com>

* fix 'cached_download' from 'huggingface_hub' (#2947)

Describe the bug: cannot import name 'cached_download' from 'huggingface_hub'

It applies to all platforms

Co-authored-by: bmaltais <bernard@ducourier.com>

* Add support for quiet output for linux setup

* Fix quiet issue

* Update sd-scripts

* Update sd-scripts with blocks_to_swap support

* Make blocks_to_swap visible in LoRA tab

* Fix blocks_to_swap not properly working

* Update sd-scripts and allow python 3.10 to 3.12

* Fix issue with max_train_steps

* Fix max_train_steps_info error

* Reverting all changes for max_train_steps

* Update sd-scripts

* Update sd-scripts

* Update to latest sd-scripts

* Add support for RAdamScheduleFree

* Add support for huber_scale

* Add support for fused_backward_pass for sd3 finetuning

* Add support for prodigyplus.ProdigyPlusScheduleFree

* SD3 LoRA training MVP

* Make blocks_to_swap common

* Add support for sd3 lora disable_mmap_load_safetensors

* Add a bunch of missing SD3 parameters

* Fix clip_l issue for missing path

* Fix train_t5xxl issue

* Fix network_module issue

* Add uniform to weighting_scheme

* Bump crate-ci/typos from 1.23.6 to 1.28.1 (#2996)

Bumps [crate-ci/typos](https://github.com/crate-ci/typos) from 1.23.6 to 1.28.1.
- [Release notes](https://github.com/crate-ci/typos/releases)
- [Changelog](https://github.com/crate-ci/typos/blob/master/CHANGELOG.md)
- [Commits](https://github.com/crate-ci/typos/compare/v1.23.6...v1.28.1)

---
updated-dependencies:
- dependency-name: crate-ci/typos
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: bmaltais <bernard@ducourier.com>

* Update README.md (#3031)

* Bump crate-ci/typos from 1.23.6 to 1.29.0 (#3029)

Bumps [crate-ci/typos](https://github.com/crate-ci/typos) from 1.23.6 to 1.29.0.
- [Release notes](https://github.com/crate-ci/typos/releases)
- [Changelog](https://github.com/crate-ci/typos/blob/master/CHANGELOG.md)
- [Commits](https://github.com/crate-ci/typos/compare/v1.23.6...v1.29.0)

---
updated-dependencies:
- dependency-name: crate-ci/typos
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: bmaltais <bernard@ducourier.com>

* Update sd-scripts version

* Update setup.sh (#3054)

Change into the script's directory before executing setup.sh; otherwise the installer might fail to find requirements.txt

* Removing wrong folder

* Fix issue with SD3 Lora training blocks_to_swap and fused_backward_pass

* Fix dreambooth issue

* Update to latest sd-scripts code

* Run on novita (#3119) (#3120)

* add run on novita

* adjust position

Co-authored-by: hugo <liyiligang@users.noreply.github.com>

* Bump crate-ci/typos from 1.23.6 to 1.30.0 (#3101)

Bumps [crate-ci/typos](https://github.com/crate-ci/typos) from 1.23.6 to 1.30.0.
- [Release notes](https://github.com/crate-ci/typos/releases)
- [Changelog](https://github.com/crate-ci/typos/blob/master/CHANGELOG.md)
- [Commits](https://github.com/crate-ci/typos/compare/v1.23.6...v1.30.0)

---
updated-dependencies:
- dependency-name: crate-ci/typos
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: bmaltais <bernard@ducourier.com>

* updated prodigyopt to 1.1.2 and removed duplicated row in requirements.txt (#3065)

* fixed names on LR Scheduler dropdown (#3064)

* Update to latest sd-scripts version

* fixed names on LR Scheduler dropdown (#3064)

* Cleanup venv3

* Fix issue with gradio on new installations
Add support for latest sd-scripts pytorch-optimizer

* Update README for v25.0.0 release

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: b-fission <b-fission@users.noreply.github.com>
Co-authored-by: DevArqSangoi <lucas.sangoi@gmail.com>
Co-authored-by: Кирилл Москвин <retreat.cost@gmail.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: b-fission <131207849+b-fission@users.noreply.github.com>
Co-authored-by: eftSharptooth <76253264+eftSharptooth@users.noreply.github.com>
Co-authored-by: Disty0 <disty@disty.xyz>
Co-authored-by: wcole3 <will.cole3@gmail.com>
Co-authored-by: rohitanshu <85547195+iamrohitanshu@users.noreply.github.com>
Co-authored-by: wzgrx <39661556+wzgrx@users.noreply.github.com>
Co-authored-by: Vladimir Sotnikov <vladimir.s@alphakek.ai>
Co-authored-by: bulieme0 <53142287+bulieme@users.noreply.github.com>
Co-authored-by: Nicolas Pereira <41456803+hqnicolas@users.noreply.github.com>
Co-authored-by: ruucm <ruucm.a@gmail.com>
Co-authored-by: CaledoniaProject <CaledoniaProject@users.noreply.github.com>
Co-authored-by: hugo <liyiligang@users.noreply.github.com>
Co-authored-by: Koro <Koronos@users.noreply.github.com>
2025-03-28 11:00:44 -04:00
bmaltais 3121d5ec35 Fix issue with missing key. Upgrade Gradio release for security issue. 2024-08-02 10:41:13 -04:00
bmaltais 3b771f51ec
Fix finetuning meta file creation (#2488) 2024-05-11 09:10:39 -04:00
bmaltais d010b0d15b Fix resume folder path validation 2024-05-10 20:16:39 -04:00
Lucas Freire Sangoi bfc2856c68
Minor fixes (#2480)
* Update common_gui.py

Little fix to the `validate_model_path` function to properly handle a folder path. Currently, it errors out when a folder path is selected for training with diffusers. 😊

* Update lora_gui.py

Switching the validation type for the resume training state path from 'file' to 'folder'.
😊

* Update common_gui.py

format

* Update common_gui.py

My last fix was wrong and was returning errors when using a default model or when the VAE was not defined. This new fix works with all possibilities.
However, when a diffusers folder is used, `validate_file_path` will show a failed check, but the operation will still succeed because `validate_folder_path` will return successfully.
2024-05-10 20:08:52 -04:00
bmaltais 73822af880 Catch when accelerate command is not found 2024-05-09 18:01:05 -04:00
bmaltais 27b58a79f2 Fix creation of output folders 2024-05-09 13:30:27 -04:00
bmaltais 92a01a3890 Fix issue with tensorboard 2024-05-09 13:20:20 -04:00
bmaltais 79afb84f60 Fix issue with svd merge int parameters handling 2024-05-07 19:06:40 -04:00
bmaltais 7fac1e89dd Move env var config to common_gui 2024-05-07 07:58:11 -04:00
bmaltais 8f8ec7a3de
Merge pull request #2462 from notjedi/vae-validation-fix
fix: vae path validation
2024-05-07 07:17:50 -04:00
bmaltais 5984c87d22 Update model validation code 2024-05-07 07:17:27 -04:00
Krithic Kumar 3902707d9b fix: vae path validation 2024-05-07 13:41:24 +05:30
bmaltais 8f8907f36c Fix issue with vae file path validation 2024-05-06 06:43:43 -04:00
bmaltais d26dad20b6
Relocate toml training config file to same folder as the model output directory (#2448) 2024-05-05 13:43:56 -04:00
bmaltais 36071cc244
Improve files and folders validation (#2429) 2024-05-01 08:29:31 -04:00
bmaltais e836bb5847
Add support for custom LyCORIS preset config toml base files (#2425)
* Add support for custom LyCORIS preset config toml base files
2024-04-30 20:24:15 -04:00
bmaltais 91350e5581 Update utility code 2024-04-30 20:09:19 -04:00
bmaltais 7bbc99d91b
2382 some incorrect usage related to the recent shell=false issue (#2417)
* Convert script back to no shell.

* Update rest of tools and update dreambooth to no shell

* Update rest of trainers to not use shell

* Allow the use of custom caption extension
2024-04-29 07:44:55 -04:00
b-fission fbd0d1a0cf
Use zero as minimum LR for polynomial scheduler (#2410) 2024-04-29 07:15:56 -04:00
bmaltais a97d3a94ee Fix issue with pre and post fix caption in subfolders 2024-04-29 07:13:34 -04:00
bmaltais ee725a9c01 Change tmp file config name to have date and time info 2024-04-27 21:01:56 -04:00
陳鈞 074de82dc5
chore(docker): Configure TensorBoard port through .env file (#2397)
* chore(docker): Configure TensorBoard port through .env file

- Added a new `.env` file to specify the TensorBoard port
- Updated the `docker-compose.yaml` file to import the TensorBoard port from the `.env` file
- Adjusted the tensorboard service in `docker-compose.yaml` to make the port configurable via an environment variable
- Added a comment in `docker-compose.yaml` to encourage changing the port in the `.env` file instead of the docker-compose file itself
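An illustrative excerpt of such a setup (exact service and variable names may differ from the actual files):

```yaml
# docker-compose.yaml excerpt -- set TENSORBOARD_PORT in .env
# instead of editing this file (e.g. TENSORBOARD_PORT=6006)
services:
  tensorboard:
    ports:
      - "${TENSORBOARD_PORT:-6006}:6006"
```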

* fix: the `Open tensorboard` button is not working in headless environment

Use the gradio builtin feature instead.

- In `class_tensorboard.py`, the "Open tensorboard" button now directly links to the tensorboard URL instead of calling the `open_tensorboard_url` function when clicked.
2024-04-26 19:43:37 -04:00
bmaltais 0c2c2d4e06 Add "Open tensorboard" button for docker containers without tensorflow installed 2024-04-26 10:03:25 -04:00
bmaltais d8a51f34fd Set `max_train_steps` to 0 if not specified in older `.json` config files 2024-04-26 07:11:07 -04:00
bmaltais 1b71c7f5ee Fix issue where tensorboard was displayed when tensorflow was not installed 2024-04-25 18:39:06 -04:00
bmaltais 433fabf7d8 Fix [24.0.6] Train toml config seed type error #2370 2024-04-25 13:10:55 -04:00
bmaltais 1f5f3cf027 Prevent tkinter import crash 2024-04-25 11:20:27 -04:00
bmaltais 81cd49830f Convert gradio_resize_lora_tab to shell false 2024-04-21 08:42:52 -04:00
bmaltais 214f7199a3 Fix issue with verify lora tool 2024-04-21 08:29:00 -04:00
bmaltais 8234e52ded
Add validation of lr scheduler and optimizer arguments (#2358) 2024-04-20 15:53:11 -04:00
bmaltais 58e57a3365
Fix issue with lora_network_weights not being loaded (#2357) 2024-04-20 15:05:35 -04:00
bmaltais 4923f5647c
Make Start/Stop buttons visible in headless (#2356) 2024-04-20 14:27:19 -04:00
bmaltais 5a801649c2 Convert config with use_wandb to log_with = wandb 2024-04-19 13:16:23 -04:00
Maatra 25d7c6889f
Changed logger checkbox to dropdown, renamed use_wandb -> log_with (#2352) 2024-04-19 13:14:15 -04:00
bmaltais 542af98980 Fix wrong default for max_train_steps and max_train_epoch 2024-04-19 13:04:02 -04:00