Commit Graph

197 Commits (e0f6766b8a3125b1bc0205ac2030bd2d1e314e85)

Author SHA1 Message Date
awsr bec694a7d2 PEP 484 pipelines directory 2026-03-25 16:56:41 -07:00
vladmandic 0b74796d94 basic ty config
Signed-off-by: vladmandic <mandic00@live.com>
2026-03-25 09:34:59 +01:00
CalamitousFelicitousness 18568db41c Add native LoRA loading for Flux2/Klein models
Load Flux2/Klein LoRAs as native NetworkModuleLora objects, bypassing
diffusers PEFT. Handles kohya (lora_unet_), AI toolkit (diffusion_model.),
diffusers PEFT (transformer.), and bare BFL key formats with automatic
QKV splitting for double block fused attention weights.

Includes shape validation to reject architecture-mismatched LoRAs early.
Respects lora_force_diffusers setting to fall back to PEFT when needed.
2026-03-25 04:24:49 +00:00
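The commit above names four LoRA key formats it distinguishes by prefix. A minimal sketch of that classification, assuming prefix checks are sufficient (function name and return labels are illustrative, not the repo's actual API):

```python
def detect_lora_format(keys):
    """Classify a LoRA state dict by its key prefixes, per the formats
    named in the commit: kohya, AI toolkit, diffusers PEFT, bare BFL."""
    if any(k.startswith('lora_unet_') for k in keys):
        return 'kohya'
    if any(k.startswith('diffusion_model.') for k in keys):
        return 'ai-toolkit'
    if any(k.startswith('transformer.') for k in keys):
        return 'diffusers-peft'
    return 'bare-bfl'  # bare BFL keys carry no wrapper prefix
```

Ordering matters: the more specific prefixes are tested before falling through to the bare format.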
vladmandic 53839e464c remove legacy quant loaders
Signed-off-by: vladmandic <mandic00@live.com>
2026-03-24 15:01:07 +01:00
vladmandic 09b9ae32c1 add color grading to processing
Signed-off-by: vladmandic <mandic00@live.com>
2026-03-23 08:44:07 +01:00
CalamitousFelicitousness 9719290ceb fix(lora): handle kohya-format alpha keys in Flux2/Klein LoRA loading 2026-03-23 02:14:07 +00:00
CalamitousFelicitousness 091f31d4bf add Flux2/Klein LoRA support
- detect f2 model type for LoRAs via metadata, architecture, and filename/folder
- preprocess bare BFL-format keys with diffusion_model prefix for Flux2LoraLoaderMixin
- handle LoKR format via native NetworkModuleLokr with on-the-fly kron(w1, w2)
- add NetworkModuleLokrChunk for fused QKV split into separate Q/K/V modules
- activate native modules loaded via diffusers path
- improve error message for Flux1/Flux2 architecture mismatch
2026-03-23 02:14:07 +00:00
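The LoKR handling above reconstructs the weight delta as a Kronecker product kron(w1, w2) on the fly (in the repo this would be `torch.kron`). A plain-Python sketch of the operation for reference:

```python
def kron(w1, w2):
    """Kronecker product of two 2-D matrices (plain nested lists):
    each element of w1 scales a full copy of w2 in the output block."""
    r1, c1 = len(w1), len(w1[0])
    r2, c2 = len(w2), len(w2[0])
    out = [[0.0] * (c1 * c2) for _ in range(r1 * r2)]
    for i in range(r1):
        for j in range(c1):
            for k in range(r2):
                for m in range(c2):
                    out[i * r2 + k][j * c2 + m] = w1[i][j] * w2[k][m]
    return out
```

This is why LoKR is compact: storing w1 (r1 x c1) and w2 (r2 x c2) stands in for a full (r1*r2 x c1*c2) delta.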
vladmandic 5ff73b61a4 add google gemini to captioning
Signed-off-by: vladmandic <mandic00@live.com>
2026-03-02 09:34:40 +01:00
vladmandic 2da67ce7d3 cleanup and fix monitor
Signed-off-by: vladmandic <mandic00@live.com>
2026-02-28 12:31:09 +01:00
vladmandic afff46f2ac add google-flash-3.1-image
Signed-off-by: vladmandic <mandic00@live.com>
2026-02-27 10:53:22 +01:00
vladmandic 554c8fbf2f remove model: hdm
Signed-off-by: vladmandic <mandic00@live.com>
2026-02-24 18:19:23 +01:00
Vladimir Mandic 87e4505acd Merge pull request #4658 from liutyi/dev
FireRed Edit preview image
2026-02-21 13:43:13 +01:00
Vladimir Mandic 9a63ec758a initial pyright lint
Signed-off-by: Vladimir Mandic <mandic00@live.com>
2026-02-21 09:32:36 +01:00
Oleksandr Liutyi 316b940b6b Change Unipic3 CM to Unipic 3 base. Add unipic3 to autodetect patterns. Add preview image 2026-02-20 19:09:24 +00:00
Vladimir Mandic d65a2d1ebc ruff lint 2026-02-19 11:13:44 +01:00
Vladimir Mandic e5c494f999 cleanup logger 2026-02-19 11:09:13 +01:00
Vladimir Mandic a3074baf8b unified logger 2026-02-19 09:46:42 +01:00
Vladimir Mandic 6fdd3a53cf reduce mandatory requirements
Signed-off-by: Vladimir Mandic <mandic00@live.com>
2026-02-18 17:53:08 +01:00
vladmandic da1cf2f996 refactor image methods
Signed-off-by: vladmandic <mandic00@live.com>
2026-02-11 12:29:00 +01:00
CalamitousFelicitousness 76aa949a26 refactor: integrate sharpfin for high-quality image resize
Vendor sharpfin library (Apache 2.0) and add centralized wrapper
module (images_sharpfin.py) replacing torchvision tensor/PIL
conversion and resize operations throughout the codebase.

- Add modules/sharpfin/ vendored library with MKS2021, Lanczos3,
  Mitchell, Catmull-Rom kernels and optional Triton sparse acceleration
- Add modules/images_sharpfin.py wrapper with to_tensor(), to_pil(),
  pil_to_tensor(), normalize(), resize(), resize_tensor()
- Add resize_quality and resize_linearize_srgb settings
- Add MKS2021 and Lanczos3 upscaler entries
- Replace torchvision.transforms.functional imports across 18 files
- to_pil() auto-detects HWC/BHWC layout, adds .round() before uint8
- Sparse Triton path falls back to dense GPU on compilation failure
- Mixed-axis resize splits into two single-axis scale() calls
- Masks and non-sRGB data always use linearize=False
2026-02-11 09:57:37 +01:00
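Two details from the sharpfin wrapper above can be sketched in isolation: the HWC/BHWC layout auto-detection in `to_pil()`, and why `.round()` precedes the uint8 cast. Both functions below are simplified stand-ins, not the repo's actual implementations:

```python
def drop_batch_dim(shape):
    """Treat rank-4 shapes as BHWC and rank-3 as HWC; strip a unit batch dim
    so downstream PIL conversion always sees HWC."""
    if len(shape) == 4 and shape[0] == 1:
        return tuple(shape[1:])
    return tuple(shape)

def to_uint8(value):
    """Round before casting: plain truncation maps 0.9999 * 255 = 254.97
    down to 254, a systematic darkening; rounding gives 255."""
    return max(0, min(255, int(round(value * 255.0))))
```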
CalamitousFelicitousness 33de04a0c7 feat: add 4-step Nunchaku variants for Qwen-Lightning models
Add 4-step distilled Nunchaku SVDQuant entries for Qwen-Lightning and
Qwen-Lightning-Edit alongside the existing 8-step variants. Step count
is now shown in the reference name (e.g. "Qwen-Lightning (4-step)").

- Add subfolder parameter to load_qwen_nunchaku to distinguish
  4-step (nunchaku-4step) from 8-step (nunchaku) variants
- Route to correct safetensors: lightningv1.0-4steps vs
  lightningv1.1-8steps for gen, lightningv1.0-4steps vs
  lightningv1.0-8steps for edit
- Strip nunchaku subfolder before pipeline from_pretrained since
  it does not exist in the base HuggingFace repos
2026-02-07 22:27:05 +00:00
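The routing described above (subfolder by step count, then the correct safetensors name per gen/edit) can be sketched as two lookups. Function names are hypothetical; the subfolder and filename strings are the ones quoted in the commit:

```python
def nunchaku_subfolder(steps):
    """4-step checkpoints live under 'nunchaku-4step', 8-step under 'nunchaku'."""
    return 'nunchaku-4step' if steps == 4 else 'nunchaku'

def lightning_filename(steps, edit=False):
    """Pick the safetensors base name: gen and edit share the 4-step file
    but diverge on the 8-step version (v1.1 gen vs v1.0 edit)."""
    if steps == 4:
        return 'lightningv1.0-4steps'
    return 'lightningv1.0-8steps' if edit else 'lightningv1.1-8steps'
```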
CalamitousFelicitousness a2ee885e28 refactor: update nunchaku repo URLs and version handling
- Rename HuggingFace org from nunchaku-tech to nunchaku-ai across all
  nunchaku model repos (flux, sdxl, sana, z-image, qwen, t5)
- Add per-torch-version nunchaku version mapping instead of single global
  version, with robust torch version parsing
2026-02-07 22:27:05 +00:00
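A sketch of the per-torch-version mapping with tolerant parsing, as described above. The version table here is a made-up placeholder; only the parsing pattern (handling suffixes like `+cu124` or `.dev`) reflects the commit's intent:

```python
import re

# Placeholder table: real (torch -> nunchaku) pairs live in the repo config.
NUNCHAKU_BY_TORCH = {(2, 5): '0.3.1', (2, 6): '1.0.0', (2, 7): '1.0.1'}

def parse_torch_version(version):
    """Extract (major, minor) from strings like '2.6.0+cu124' or '2.7.0.dev'."""
    m = re.match(r'(\d+)\.(\d+)', version)
    return (int(m.group(1)), int(m.group(2))) if m else None

def nunchaku_version(torch_version, default='1.0.1'):
    """Look up the matching nunchaku release, falling back to a default."""
    return NUNCHAKU_BY_TORCH.get(parse_torch_version(torch_version), default)
```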
CalamitousFelicitousness 4ba913e072 fix: Anima pipeline detection, custom module loading, and model type
- Relax sd_detect to match 'anima' without requiring 'cosmos' in name
- Use hf_hub_download for custom pipeline.py and adapter modules
- Register custom modules in sys.modules for Diffusers trust_remote_code
- Pass trust_remote_code=True to from_pretrained
- Map AnimaTextToImage to 'cosmos' model type for TAESD preview support
2026-02-02 00:44:51 +00:00
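Registering a downloaded `pipeline.py` in `sys.modules`, as the commit above does for Diffusers `trust_remote_code`, follows the standard importlib pattern. A minimal sketch (the function name is illustrative):

```python
import importlib.util
import sys

def register_custom_module(name, path):
    """Load a module from a downloaded file and register it in sys.modules
    so later imports by name (e.g. from trust_remote_code machinery) resolve."""
    spec = importlib.util.spec_from_file_location(name, path)
    module = importlib.util.module_from_spec(spec)
    sys.modules[name] = module  # register before exec, in case of self-imports
    spec.loader.exec_module(module)
    return module
```

Registering before `exec_module` matters when the loaded file imports itself or a sibling adapter module by name.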
CalamitousFelicitousness af9fe036a3 feat: add Anima (Cosmos-Predict-2B variant) pipeline support
Anima replaces the Cosmos T5-11B text encoder with Qwen3-0.6B + a
6-layer LLM adapter and uses CONST preconditioning instead of EDM.

- Add pipelines/model_anima.py loader with dynamic import of custom
  AnimaTextToImagePipeline and AnimaLLMAdapter from model repo
- Register 'Anima' pipeline in shared_items.py
- Add name-based detection in sd_detect.py
- Fix list-format _class_name handling in guess_by_diffusers()
- Wire loader in sd_models.py load_diffuser_force()
- Skip noise_pred callback injection for Anima (uses velocity instead)
- Add output_type='np' override in processing_args.py
2026-02-02 00:44:51 +00:00
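The list-format `_class_name` fix above handles the fact that `model_index.json` can store `_class_name` either as a plain string or, for custom pipelines, as a `[module, class]` list. A sketch of the normalization, with an assumed helper name:

```python
def normalize_class_name(class_name):
    """Return the pipeline class name whether _class_name is a string
    or a [module, class] list from a custom-pipeline model_index.json."""
    if isinstance(class_name, (list, tuple)):
        return class_name[-1]
    return class_name
```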
vladmandic cc0b0e8e3d cleanup todo
Signed-off-by: vladmandic <mandic00@live.com>
2026-01-19 11:10:05 +01:00
CalamitousFelicitousness dd9001b20d fix(model_flux2_klein): update generic.py and pipeline 2026-01-17 21:36:44 +00:00
CalamitousFelicitousness 33ee04a9f3 feat(flux2-klein): add SDNQ pre-quantized model support and reference images
- Add Transformers v5 tokenizer compatibility fix for SDNQ Klein models
  Downloads missing vocab.json, merges.txt, tokenizer_config.json from
  Z-Image-Turbo when needed
- Detect SDNQ repos and disable shared text encoder to use pre-quantized
  weights from the SDNQ repo instead of loading from shared base model
- Update reference-quant.json with correct preview images and metadata
  for Klein SDNQ models
- Update reference-distilled.json with correct cfg_scale (1.0) for
  distilled Klein models per official HuggingFace documentation
- Add 6 Klein model preview images
2026-01-17 21:36:44 +00:00
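The tokenizer compatibility fix above downloads whichever of three files a local SDNQ Klein checkout is missing. A sketch of the "what needs fetching" half (the download itself would go through `hf_hub_download`); the function name is illustrative:

```python
import os

# The three files named in the commit as fetched from Z-Image-Turbo when absent.
REQUIRED_TOKENIZER_FILES = ('vocab.json', 'merges.txt', 'tokenizer_config.json')

def missing_tokenizer_files(folder):
    """Return the tokenizer files that are absent locally and would need
    to be downloaded before the Transformers v5 tokenizer can load."""
    return [f for f in REQUIRED_TOKENIZER_FILES
            if not os.path.isfile(os.path.join(folder, f))]
```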
vladmandic b3d65f4559 logging cleanup
Signed-off-by: vladmandic <mandic00@live.com>
2026-01-16 11:32:09 +01:00
vladmandic 0d90d95bf6 lint and safeguard glm
Signed-off-by: vladmandic <mandic00@live.com>
2026-01-16 09:40:48 +01:00
CalamitousFelicitousness eaa8dbcd42 fix: correct comments and cleanup model descriptions
- Fix Klein text encoder comment to specify correct sizes per variant
- Lock TAESD decode logging behind SD_PREVIEW_DEBUG env var
- Fix misleading comment about FLUX.2 128-channel reshape (is fallback)
- Remove VRAM requirements from model descriptions in reference files
2026-01-16 03:24:39 +00:00
CalamitousFelicitousness fe99d3fe5d feat: add FLUX.2 Klein model support
Add support for FLUX.2 Klein distilled models (4B and 9B variants):

- Add pipeline loader for Flux2KleinPipeline
- Add model detection for 'flux.2' + 'klein' patterns
- Add pipeline mapping in shared_items
- Add shared Qwen3ForCausalLM text encoder handling:
  - 4B variants use Z-Image-Turbo's Qwen3-8B
  - 9B variants use FLUX.2-klein-9B's Qwen3-14B
- Add reference entries for distilled (4B, 9B) and base models
- Update diffusers commit for Flux2KleinPipeline support
2026-01-16 01:35:20 +00:00
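The shared text-encoder routing above (4B variants reuse Z-Image-Turbo's Qwen3, 9B variants use FLUX.2-klein-9B's) can be sketched as a name-based dispatch. This is a hypothetical reconstruction; the return values are model names from the commit message, not verified repo identifiers:

```python
def klein_text_encoder_source(model_name):
    """Pick the shared Qwen3 text encoder source by Klein variant size.
    Returns None for non-Klein models (no shared-encoder swap)."""
    name = model_name.lower()
    if 'flux.2' not in name or 'klein' not in name:
        return None
    return 'FLUX.2-klein-9B' if '9b' in name else 'Z-Image-Turbo'
```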
vladmandic f7328fb907 add chroma-inpaint
Signed-off-by: vladmandic <mandic00@live.com>
2026-01-15 08:37:44 +01:00
CalamitousFelicitousness 7fb58f22c2 fix(glm): rename _wrap_vision_language_generate to hijack_vision_language_generate 2026-01-14 11:34:25 +00:00
CalamitousFelicitousness 3f259cff9a add GLM-Image pipeline support
- Add GLM-Image (zai-org/GLM-Image) model detection and loading
- Custom pipeline loader with proper component handling:
  - ByT5 text encoder (cannot use shared T5 due to different hidden size)
  - Vision-language encoder (9B AR model)
  - DiT transformer (7B)
- Fix EOS token early stopping in AR generation
- Add AR token generation progress tracking with terminal progress bar
- Fix uninitialized audio variable in processing
- Add TAESD support for GLM-Image (using f1 variant)
2026-01-14 03:33:49 +00:00
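The EOS early-stopping fix above is the standard autoregressive pattern: break out of the token loop as soon as the model emits the end-of-sequence token instead of always running to the maximum length. A minimal sketch, with `step_fn` standing in for one decode step:

```python
def generate_tokens(step_fn, eos_token_id, max_tokens):
    """Autoregressive loop that stops early on EOS. step_fn takes the
    tokens generated so far and returns the next token id."""
    tokens = []
    for _ in range(max_tokens):
        tok = step_fn(tokens)
        if tok == eos_token_id:
            break  # early stop: do not append EOS or keep decoding
        tokens.append(tok)
    return tokens
```

Without the break, generation runs the full `max_tokens` budget even after the model has finished, which is the bug the commit fixes.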
vladmandic 641ba05d15 add nunchaku-z-image-turbo
Signed-off-by: vladmandic <mandic00@live.com>
2026-01-10 09:09:45 +01:00
CalamitousFelicitousness 3302522fdb Cleanup 2026-01-10 03:07:44 +00:00
CalamitousFelicitousness 9fe9d9521c fix(cloud): support three Google GenAI auth modes, use UI settings only 2026-01-09 00:30:55 +00:00
vladmandic 98d304a493 fix longcat-edit
Signed-off-by: vladmandic <mandic00@live.com>
2026-01-01 11:22:43 +01:00
vladmandic a37c70d668 update changelog
Signed-off-by: vladmandic <mandic00@live.com>
2026-01-01 10:45:48 +01:00
vladmandic bb13aabe17 add ovis-image
Signed-off-by: vladmandic <mandic00@live.com>
2025-12-26 08:05:25 +01:00
vladmandic c0141de02d add qwen-image-edit-2511
Signed-off-by: vladmandic <mandic00@live.com>
2025-12-24 09:55:08 +01:00
vladmandic a319a98e59 error handling of meta embeds
Signed-off-by: vladmandic <mandic00@live.com>
2025-12-23 10:23:14 +01:00
vladmandic 72b533a52a cleanup layered again
Signed-off-by: vladmandic <mandic00@live.com>
2025-12-22 22:52:56 +01:00
vladmandic b5a5efaf62 cleanup
Signed-off-by: vladmandic <mandic00@live.com>
2025-12-22 22:14:12 +01:00
vladmandic 4c43d9a1e6 cleanup qwen
Signed-off-by: vladmandic <mandic00@live.com>
2025-12-22 22:08:49 +01:00
vladmandic 2a33d9f1a8 add qwen-image-layered
Signed-off-by: vladmandic <mandic00@live.com>
2025-12-22 21:24:27 +01:00
vladmandic dde91321b9 genai exception handling and lint all
Signed-off-by: vladmandic <mandic00@live.com>
2025-12-22 20:29:50 +01:00
vladmandic 0c01866bc3 improve response validation for google-genai
Signed-off-by: vladmandic <mandic00@live.com>
2025-12-21 13:27:32 +01:00
vladmandic 409ad8d2bd add longcat image and image-edit
Signed-off-by: vladmandic <mandic00@live.com>
2025-12-16 08:58:22 +01:00
vladmandic 13b4dc8996 update google access methods
Signed-off-by: vladmandic <mandic00@live.com>
2025-12-12 09:56:39 +01:00