mirror of https://github.com/bmaltais/kohya_ss
LoRA Training on Tesla V100 - GPU Utilization Issue
Issue Summary
When training LoRA on a Tesla V100, users reported low GPU utilization. They also had difficulty specifying GPUs other than the default for training.
Potential Solutions
- GPU selection: Users can specify GPU IDs in the setup configuration to choose which GPUs are used for training.
- Improving GPU load: Using the AdamW8bit optimizer and increasing the batch size can raise GPU utilization to 70-80% without exceeding GPU memory limits.
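A minimal sketch of both suggestions combined into one launch command. The exact script name and flags depend on your kohya_ss / sd-scripts version; `CUDA_VISIBLE_DEVICES` is the standard CUDA mechanism for GPU selection, and `--optimizer_type` / `--train_batch_size` are the flag names commonly used by sd-scripts. Treat the paths and values here as illustrative:

```shell
# Restrict training to GPU 1 (IDs as reported by nvidia-smi);
# GPUs not listed here are invisible to the process.
# Then launch with the 8-bit AdamW optimizer and a larger batch size
# to push utilization up while staying within VRAM limits.
CUDA_VISIBLE_DEVICES=1 accelerate launch train_network.py \
  --optimizer_type="AdamW8bit" \
  --train_batch_size=4 \
  --network_module=networks.lora
```

If utilization is still low, increasing `--train_batch_size` further (while watching VRAM usage in `nvidia-smi`) is the usual next step; back it off if you hit out-of-memory errors.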