- Remove GFPGAN pip install from installer.py optional requirements
- Remove 'gfpgan' from modules_to_remove cleanup list in launch.py
- Remove --codeformer-models-path and --gfpgan-models-path CLI args
- Remove GFPGAN model directory migration from modelloader.py
- Remove codeformer, restoreformer, GFPGANv1.4, and GPEN-BFR ONNX
model URLs from the predefined list
- Remove the .fp16 ONNX restorer code path that bypassed detailer
processing to run face restoration directly
- Remove /sdapi/v1/face-restorers route from api.py
- Remove get_restorers() function from endpoints.py
- Remove gfpgan_visibility, codeformer_visibility, codeformer_weight
fields from ReqProcess model
- Remove GFPGAN and CodeFormer entries from run_extras() signature
and create_args_for_run dict in postprocessing.py
- Remove CodeFormer/GFPGAN import and setup from webui.py initialize()
- Remove face_restorers list, codeformer/gfpgan model path settings,
and face restore UI settings section from shared.py
- Remove restore_faces parameter from StableDiffusionProcessing
- Remove face_restoration import and restore_faces processing block
from processing.py
Remove all vendored face restoration code that is no longer maintained:
- modules/postprocess/codeformer_model.py, codeformer_arch.py, vqgan_arch.py
- modules/postprocess/gfpgan_model.py, restorer.py
- modules/face_restoration.py (base class and dispatcher)
- scripts/postprocessing_codeformer.py, postprocessing_gfpgan.py
- modules/facelib/ (vendored face detection/parsing library)
These were the only two backends registered in shared.face_restorers,
making the entire face restoration infrastructure dead code.
Nunchaku's SDXL UNet does not support offloading and raises
NotImplementedError when offload=True is passed. Skip the parameter
for SDXL and log a warning instead of crashing.
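A minimal sketch of that guard, assuming an illustrative call site (the
function name and argument handling are not the actual code paths):

    import logging

    log = logging.getLogger(__name__)

    def nunchaku_offload_args(model_type: str, offload: bool) -> dict:
        # Nunchaku's SDXL UNet raises NotImplementedError when offload=True,
        # so drop the argument for SDXL and warn instead of crashing
        if offload and model_type == 'sdxl':
            log.warning('nunchaku: offload not supported for SDXL UNet, ignoring offload=True')
            return {}
        return {'offload': offload}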
Filter out reference entries tagged "nunchaku" from Extra Networks
when the active backend is not CUDA, since Nunchaku requires NVIDIA
GPUs. Entries remain in shared.reference_models for programmatic
lookup but are not yielded to the UI.
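Illustrative sketch of the filter; the entry layout ('tags' field) and the
backend check are assumptions, not the exact shared.reference_models schema:

    def list_reference_items(reference_models: dict, backend: str):
        # entries tagged "nunchaku" need an NVIDIA/CUDA backend; keep them in
        # the dict for programmatic lookup but do not yield them to the UI
        for name, entry in reference_models.items():
            if 'nunchaku' in entry.get('tags', []) and backend != 'cuda':
                continue
            yield name, entry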
- Rename HuggingFace org from nunchaku-tech to nunchaku-ai across all
nunchaku model repos (flux, sdxl, sana, z-image, qwen, t5)
- Add a per-torch-version nunchaku version mapping instead of a single global
version, with robust torch version parsing
- Add 'Fill (Nunchaku)' and 'Depth (Nunchaku)' options to Flux Tools
dropdown, loading models with +nunchaku suffix for SVDQuant quantization
- Mark Fill and Depth nunchaku reference entries as hidden so they remain
available for check_nunchaku() lookup but don't appear in Extra Networks
- Filter hidden reference models in ui_extra_networks_checkpoints
Replace manual Model/TE checkboxes in Quantization Settings with a
dedicated "Nunchaku" tab in the Extra Networks menu where users can
directly select nunchaku-quantized model variants. Detection now uses a
+nunchaku path marker for disambiguation.
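A sketch of the marker detection, assuming the marker is carried in the
selected model path/name (helper names are illustrative):

    def is_nunchaku_selection(model_path: str) -> bool:
        # a '+nunchaku' marker in the selected path disambiguates the SVDQuant
        # variant from the regular checkpoint with the same base name
        return '+nunchaku' in model_path.lower()

    def strip_nunchaku_marker(model_path: str) -> str:
        # drop the marker before resolving the underlying repo or file
        return model_path.replace('+nunchaku', '')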
- Relax sd_detect to match 'anima' without requiring 'cosmos' in name
- Use hf_hub_download for custom pipeline.py and adapter modules
- Register custom modules in sys.modules for Diffusers trust_remote_code
- Pass trust_remote_code=True to from_pretrained
- Map AnimaTextToImage to 'cosmos' model type for TAESD preview support
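Condensed sketch of the dynamic-import flow; the repo id, file names, and
module names below are placeholders:

    import sys
    import importlib.util
    from huggingface_hub import hf_hub_download

    def load_remote_module(repo_id: str, filename: str, module_name: str):
        # download the custom python file from the model repo and import it
        path = hf_hub_download(repo_id, filename)
        spec = importlib.util.spec_from_file_location(module_name, path)
        module = importlib.util.module_from_spec(spec)
        # register so Diffusers' trust_remote_code resolution can find it later
        sys.modules[module_name] = module
        spec.loader.exec_module(module)
        return module

    # pipeline_mod = load_remote_module('<org>/<anima-repo>', 'pipeline.py', 'anima_pipeline')
    # adapter_mod  = load_remote_module('<org>/<anima-repo>', 'adapter.py', 'anima_adapter')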
Anima replaces the Cosmos T5-11B text encoder with Qwen3-0.6B + a
6-layer LLM adapter and uses CONST preconditioning instead of EDM.
- Add pipelines/model_anima.py loader with dynamic import of custom
AnimaTextToImagePipeline and AnimaLLMAdapter from model repo
- Register 'Anima' pipeline in shared_items.py
- Add name-based detection in sd_detect.py
- Fix list-format _class_name handling in guess_by_diffusers()
- Wire loader in sd_models.py load_diffuser_force()
- Skip noise_pred callback injection for Anima (uses velocity instead)
- Add output_type='np' override in processing_args.py
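Small sketch of the list-format _class_name fix mentioned above; the exact
handling inside guess_by_diffusers() may differ:

    def read_class_name(model_index: dict) -> str:
        # _class_name is normally a string, but some repos ship it as a list;
        # fall back to the first non-empty string entry in that case
        name = model_index.get('_class_name', '')
        if isinstance(name, list):
            name = next((n for n in name if isinstance(n, str) and n), '')
        return name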
Hide all CLiP, VLM, and Tagger settings from the Settings > Interrogate page
while keeping them in shared.opts for persistence. The Caption tab UI becomes
the single control point, with change handlers that save directly to config.
Changes:
- Hide OpenCLiP, VLM, and Tagger settings with visible=False
- Add change handlers to save settings when UI controls change
- Rename "Booru Tags" tab to "Tagger", update choice labels
- Update interrogate.py to use unified tagger interface with all settings
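Hedged Gradio sketch of the pattern: the Caption tab control writes straight
back to the persisted options while the Settings-page widget stays hidden.
The opts API shown (shared.opts.data / shared.opts.save) and the option key
are assumptions:

    import gradio as gr
    from modules import shared  # assumption: the webui's shared options module

    def save_opt(key):
        # write the changed value straight back to shared.opts so it persists
        def handler(value):
            shared.opts.data[key] = value
            shared.opts.save(shared.config_filename)
        return handler

    with gr.Blocks():
        # Caption tab is the single control point; the Settings-page copy of
        # the same option stays registered but is created with visible=False
        threshold = gr.Slider(label='Tagger threshold', minimum=0.0, maximum=1.0, value=0.35)
        threshold.change(fn=save_opt('tagger_threshold'), inputs=[threshold], outputs=[])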
Add DeepBooru as a model option alongside WD14 models in the Booru Tags
tab, with dynamic UI that disables inapplicable controls.
Changes:
- Create modules/interrogate/tagger.py as unified adapter module
- Add batch, load/unload, get_models functions to deepbooru.py
- Update ui_caption.py to use unified tagger interface
- Consolidate shared tagger settings in shared.py
- Add implementation plan for future settings consolidation
UI behavior:
- Model dropdown shows DeepBooru + all WD14 models
- Character threshold and include rating disabled for DeepBooru
- All controls re-enable when WD14 model selected
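Illustrative Gradio snippet for the enable/disable behavior; component and
model names are placeholders:

    import gradio as gr

    def on_model_change(model_name: str):
        # DeepBooru has no character threshold or rating output, so grey those
        # controls out; selecting any WD14 model re-enables them
        is_deepbooru = model_name.lower() == 'deepbooru'
        return (
            gr.update(interactive=not is_deepbooru),  # character threshold slider
            gr.update(interactive=not is_deepbooru),  # include-rating checkbox
        )

    # model.change(fn=on_model_change, inputs=[model], outputs=[char_threshold, include_rating])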
Add SmilingWolf's WD14/WaifuDiffusion tagger models for anime/illustration
tagging as a new "Booru Tags" tab in the Caption panel.
- Support 9 models (v2 and v3 variants) via HuggingFace
- ONNX backend chosen because the safetensors v3 variants exhibit
unacceptable accuracy loss
- Separate thresholds for general/character tags
- Batch processing with progress bar
- Consolidate debug env var to SD_INTERROGATE_DEBUG
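Minimal sketch of running one of the WD14 ONNX taggers with split thresholds;
the repo id is one example model and the selected_tags.csv layout follows the
common SmilingWolf format, so treat the details as assumptions:

    import csv
    import onnxruntime as ort
    from huggingface_hub import hf_hub_download

    def load_tagger(repo_id='SmilingWolf/wd-swinv2-tagger-v3'):
        model_path = hf_hub_download(repo_id, 'model.onnx')
        labels_path = hf_hub_download(repo_id, 'selected_tags.csv')
        session = ort.InferenceSession(model_path, providers=['CPUExecutionProvider'])
        with open(labels_path, newline='', encoding='utf-8') as f:
            rows = list(csv.DictReader(f))  # columns include name and category
        return session, rows

    def tag(session, rows, image_bhwc, general_threshold=0.35, character_threshold=0.85):
        # image_bhwc: float32 batch already resized to the model input size
        input_name = session.get_inputs()[0].name
        probs = session.run(None, {input_name: image_bhwc})[0][0]
        tags = []
        for row, p in zip(rows, probs):
            threshold = character_threshold if row['category'] == '4' else general_threshold
            if float(p) >= threshold:
                tags.append(row['name'])
        return tags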
The "Restore from metadata: skip params" setting previously required
exact metadata parameter names (e.g., "Batch-2" instead of "batch_size").
This was confusing because metadata names differ from Python variables
and UI labels.
Changes:
- Auto-populate param_aliases from component labels and elem_ids
- Expand user input with aliases in should_skip() (sketched after the list below)
- Support normalized names so "Batch" skips both "Batch-1" and "Batch-2"
Users can now enter any of these formats (case-insensitive):
- Python variable names: batch_size, cfg_scale, clip_skip
- UI labels: Batch size, CFG scale, Clip skip
- Metadata names: Batch-2, CFG scale, Clip skip
- Normalized names: Batch (skips both Batch-1 and Batch-2)
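Simplified sketch of the alias expansion and normalization; the real
param_aliases table is auto-populated from component labels and elem_ids, so
the entries below are illustrative:

    import re

    # alias -> metadata names, auto-populated from component labels and elem_ids
    param_aliases = {
        'batch_size': {'Batch-2'},
        'batch size': {'Batch-2'},
        'batch': {'Batch-1', 'Batch-2'},   # normalized name matches every Batch-N field
        'cfg_scale': {'CFG scale'},
        'cfg scale': {'CFG scale'},
    }

    def should_skip(metadata_name: str, user_skip_list: list) -> bool:
        for entry in user_skip_list:
            key = entry.strip().lower()
            names = param_aliases.get(key, {entry.strip()})
            # exact match on any alias, or normalized match ignoring a trailing '-N'
            if metadata_name in names or re.sub(r'-\d+$', '', metadata_name).lower() == key:
                return True
        return False

    # should_skip('Batch-2', ['batch_size']) -> True
    # should_skip('Batch-1', ['Batch'])      -> True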
- Add detailed hints explaining LoRA fuse behavior and model reload warning
- Add hints for force reload, diffusers fuse, and quantization precision options
- Improve clarity of auto-apply tags and hash metadata hints
- Comment out unimplemented lora_quant setting
Change "Images folder" and "Grids folder" settings to act as base paths
that combine with specific folder settings, rather than replacing them.
- Add resolve_output_path() helper function to modules/paths.py
- Update all output path usages to use combined base + specific paths
- Update gallery API to return resolved paths with display labels
- Update gallery UI to show short labels with full path on hover
Example: If base is "C:\Database\" and specific is "outputs/text",
the resolved path becomes "C:\Database\outputs\text"
Edge cases handled (see the sketch after this list):
- Empty base path: uses specific path directly (backward compatible)
- Absolute specific path: ignores base path
- Empty specific path: uses base path only
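A sketch of resolve_output_path() covering the example and edge cases above;
illustrative rather than the exact modules/paths.py implementation:

    import os

    def resolve_output_path(base: str, specific: str) -> str:
        base = (base or '').strip()
        specific = (specific or '').strip()
        if not specific:                         # empty specific path: base only
            return base
        if not base or os.path.isabs(specific):  # empty base or absolute specific
            return specific
        return os.path.normpath(os.path.join(base, specific))

    # resolve_output_path('C:\\Database\\', 'outputs/text')
    #   -> 'C:\\Database\\outputs\\text' (on Windows)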
- Fix Klein text encoder comment to specify correct sizes per variant
- Lock TAESD decode logging behind SD_PREVIEW_DEBUG env var
- Fix misleading comment about the FLUX.2 128-channel reshape (it is a fallback path)
- Remove VRAM requirements from model descriptions in reference files
Enable live preview during FLUX.2 and FLUX.2 Klein image generation
using the TAE FLUX.2 decoder from madebyollin/taesd.
- Add dedicated TAE entries (FLUX.1, FLUX.2, SD3) that auto-select
based on model type, making the dropdown only affect SD/SDXL models
- Add FLUX.2 latent unpacking in callback to convert packed
[B, seq_len, 128] format to spatial [B, 32, H, W] for preview
- Support FLUX.2's 32 latent channels (vs 16 for FLUX.1/SD3)
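Hedged sketch of the unpacking: FLUX-style pipelines pack 2x2 latent patches
into tokens, so a 128-dim FLUX.2 token holds 32 channels x 2x2 positions.
Tensor names and the callback wiring are assumptions:

    import torch

    def unpack_flux2_latents(latents: torch.Tensor, height: int, width: int) -> torch.Tensor:
        # latents: packed [B, seq_len, 128]; height/width are latent-space sizes
        batch, _, channels = latents.shape       # channels = 32 * 2 * 2 = 128
        latents = latents.view(batch, height // 2, width // 2, channels // 4, 2, 2)
        latents = latents.permute(0, 3, 1, 4, 2, 5)                   # [B, 32, H/2, 2, W/2, 2]
        return latents.reshape(batch, channels // 4, height, width)  # [B, 32, H, W]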
Add support for FLUX.2 Klein distilled models (4B and 9B variants):
- Add pipeline loader for Flux2KleinPipeline
- Add model detection for 'flux.2' + 'klein' patterns
- Add pipeline mapping in shared_items
- Add shared Qwen3ForCausalLM text encoder handling:
  - 4B variants use Z-Image-Turbo's Qwen3-8B
  - 9B variants use FLUX.2-klein-9B's Qwen3-14B
- Add reference entries for distilled (4B, 9B) and base models
- Update diffusers commit for Flux2KleinPipeline support
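Illustrative sketch of the name-based detection and shared text encoder
selection; the repo ids are placeholders taken from the description above,
not verified paths:

    def is_flux2_klein(model_name: str) -> bool:
        name = model_name.lower()
        return 'flux.2' in name and 'klein' in name

    def klein_text_encoder_repo(model_name: str) -> str:
        # 9B variants reuse the Qwen3 encoder shipped with FLUX.2-klein-9B,
        # 4B variants reuse the one shipped with Z-Image-Turbo
        if '9b' in model_name.lower():
            return '<org>/FLUX.2-klein-9B'
        return '<org>/Z-Image-Turbo'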