automatic/pipelines/flux
CalamitousFelicitousness 18568db41c Add native LoRA loading for Flux2/Klein models
Load Flux2/Klein LoRAs as native NetworkModuleLora objects, bypassing
diffusers PEFT. Handles kohya (lora_unet_), AI Toolkit (diffusion_model.),
diffusers PEFT (transformer.), and bare BFL key formats, with automatic
QKV splitting for double-block fused attention weights.

Includes shape validation to reject architecture-mismatched LoRAs early.
Respects lora_force_diffusers setting to fall back to PEFT when needed.
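The commit message above describes two mechanical steps: normalizing LoRA keys from several source formats down to a common form, and splitting a fused QKV weight into separate Q, K, and V tensors for the double-block attention. A minimal sketch of both steps follows; the prefixes come from the message, but the function names and the pure-Python weight representation are illustrative assumptions, not the actual flux2_lora.py API.

```python
# Hypothetical sketch of key normalization and QKV splitting.
# The prefixes are those named in the commit message; the helper
# names and data layout are illustrative, not SD.Next's real code.

KNOWN_PREFIXES = ("lora_unet_", "diffusion_model.", "transformer.")

def canonical_key(key: str) -> str:
    """Strip a known source-format prefix so kohya, AI Toolkit,
    and diffusers-PEFT keys all map to bare BFL-style keys."""
    for prefix in KNOWN_PREFIXES:
        if key.startswith(prefix):
            return key[len(prefix):]
    return key  # already a bare BFL key

def split_qkv(fused_rows, parts=3):
    """Split a fused attention weight (rows of the out-dim) into
    equal Q, K, V chunks; reject shapes that cannot divide evenly,
    mirroring the early shape validation the commit mentions."""
    n = len(fused_rows)
    if n % parts != 0:
        raise ValueError(f"fused qkv out-dim {n} not divisible by {parts}")
    step = n // parts
    return [fused_rows[i * step:(i + 1) * step] for i in range(parts)]
```

For example, canonical_key("transformer.double_blocks.0.img_attn.qkv") and canonical_key("diffusion_model.double_blocks.0.img_attn.qkv") both reduce to the same bare key, after which split_qkv carves the fused up-projection rows into the three per-projection LoRA weights.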
2026-03-25 04:24:49 +00:00
flux2_lora.py Add native LoRA loading for Flux2/Klein models 2026-03-25 04:24:49 +00:00
flux_lora.py SDNQ Flux lora, use shape from sdnq_dequantizer 2025-08-18 19:53:57 +03:00
flux_nunchaku.py cleanup logger 2026-02-19 11:09:13 +01:00