mirror of https://github.com/vladmandic/automatic
Load Flux2/Klein LoRAs as native NetworkModuleLora objects, bypassing the diffusers PEFT path. Handles kohya (lora_unet_), AI Toolkit (diffusion_model.), diffusers PEFT (transformer.), and bare BFL key formats, with automatic QKV splitting for the double-block fused attention weights. Includes shape validation to reject architecture-mismatched LoRAs early, and respects the lora_force_diffusers setting to fall back to PEFT when needed.
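The key-format handling and QKV splitting described above can be sketched roughly as follows. This is a minimal illustration, not the repo's actual code: PREFIX_MAP, normalize_key, and split_fused_qkv are hypothetical names, and the canonical "transformer." prefix and row-wise split of the LoRA up-weight are assumptions about how the fused attention weights are laid out.

```python
import numpy as np

# Hypothetical prefix map normalizing the key formats named above
# to a single canonical form (assumed here to be "transformer.").
PREFIX_MAP = {
    "lora_unet_": "transformer.",        # kohya
    "diffusion_model.": "transformer.",  # AI Toolkit
    "transformer.": "transformer.",      # diffusers PEFT (already canonical)
}

def normalize_key(key: str) -> str:
    """Map a LoRA state-dict key to the canonical prefix; bare BFL keys pass through."""
    for src, dst in PREFIX_MAP.items():
        if key.startswith(src):
            return dst + key[len(src):]
    return "transformer." + key  # assume anything else is a bare BFL key

def split_fused_qkv(lora_a: np.ndarray, lora_b: np.ndarray, dim: int):
    """Split one fused-QKV LoRA pair into three per-projection (A, B) pairs.

    lora_a (down) has shape [rank, in_features] and is shared by Q, K, V;
    lora_b (up) has shape [3*dim, rank] and is split row-wise. The shape
    check mirrors the early rejection of architecture-mismatched LoRAs.
    """
    if lora_b.shape[0] != 3 * dim:
        raise ValueError(f"expected fused out dim {3 * dim}, got {lora_b.shape[0]}")
    return [(lora_a, lora_b[i * dim:(i + 1) * dim]) for i in range(3)]
```

Sharing the down-projection while splitting the up-projection keeps the low-rank factorization intact, since the fused QKV output is simply the concatenation of the three projections.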
Files:

- flux2_lora.py
- flux_lora.py
- flux_nunchaku.py