mirror of https://github.com/bmaltais/kohya_ss
### LoRA Training on Tesla V100 - GPU Utilization Issue
#### Issue Summary
When training LoRA on a Tesla V100, users reported low GPU utilization. They also had difficulty directing training to GPUs other than the default one.
#### Potential Solutions
- **GPU Selection:** Users can specify GPU IDs in the setup configuration to select which GPUs are used for training.
- **Improving GPU Load:** Switching to the `AdamW8bit` optimizer and increasing the batch size can raise GPU utilization to roughly 70-80% without exceeding GPU memory limits.
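The optimizer/batch-size suggestion above can be sketched as a launch command. This is a minimal, hypothetical sketch: the flag names (`--optimizer_type`, `--train_batch_size`) follow kohya's `sd-scripts` `train_network.py`, which this repo wraps, but the exact command line and a batch size of 4 are assumptions to verify against your installed version. The launch line is left commented out so the sketch only shows the shape of the invocation.

```shell
# Hypothetical tuning sketch for a Tesla V100 (16/32 GB).
OPTIMIZER="AdamW8bit"   # 8-bit Adam via bitsandbytes: lower VRAM per step
BATCH_SIZE=4            # raise until VRAM is nearly full, then back off one step

# accelerate launch train_network.py \
#   --optimizer_type "$OPTIMIZER" \
#   --train_batch_size "$BATCH_SIZE" \
#   ... (model, dataset, and output arguments)

echo "optimizer=$OPTIMIZER batch_size=$BATCH_SIZE"
```

Monitor utilization and memory with `nvidia-smi` while increasing the batch size; the goal is sustained 70-80% GPU load without out-of-memory errors.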
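For the GPU-selection point, one common approach is to mask devices with `CUDA_VISIBLE_DEVICES` before launching training, so the process only sees the GPUs you choose. The environment variable itself is standard CUDA behavior; the commented launch line is an assumed example, not the project's documented command.

```shell
# Expose only GPU 1 (IDs as reported by nvidia-smi) to the training process.
export CUDA_VISIBLE_DEVICES=1

# The training launch would then run on that GPU alone, e.g. (hypothetical):
# accelerate launch train_network.py ...

echo "Training would see GPU(s): $CUDA_VISIBLE_DEVICES"
```

Multiple GPUs can be listed comma-separated (e.g. `CUDA_VISIBLE_DEVICES=1,2`); inside the process they are renumbered starting from 0.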