mirror of https://github.com/vladmandic/automatic
Merge branch 'dev' into various1
commit 999cbe5d3a

CHANGELOG.md
# Change Log for SD.Next

## Update for 2026-04-01

### Highlights for 2026-04-01

This release brings massive code refactoring to modernize the codebase and the removal of some obsolete features. Leaner & faster!
And since it's a bit of a quieter period when it comes to new models, notable additions are: *FireRed-Image-Edit*, *SkyWorks-UniPic-3*, and new versions of the *Anima-Preview* and *Flux-Klein-KV* image models and the *LTX 2.3* video model

If you're on the Windows platform, we have a brand new [All-in-one Installer & Launcher](https://github.com/vladmandic/sdnext-launcher): simply download the [exe or zip](https://github.com/vladmandic/sdnext-launcher/releases) and done!

And we have a new (optional) React-based **UI**: [Enso](https://github.com/CalamitousFelicitousness/enso)!

*What else*? Really a lot!
A new color grading module, updated localization with new languages and improved translations, a new CivitAI integration module, a new finetunes loader, several new upscalers, improvements to LLM/VLM captioning and prompt enhancement, a lot of new control preprocessors, a new realtime server info panel, and some new UI themes
And major work on API hardening: *security, rate limits, secrets handling, new endpoints*, etc.
But also many smaller quality-of-life improvements - for full details, see the [ChangeLog](https://github.com/vladmandic/automatic/blob/master/CHANGELOG.md)

*Note*: Purely due to the size of changes, a clean install is recommended!
Just how big? Some stats: *~530 commits over 880 files*

[ReadMe](https://github.com/vladmandic/automatic/blob/master/README.md) | [ChangeLog](https://github.com/vladmandic/automatic/blob/master/CHANGELOG.md) | [Docs](https://vladmandic.github.io/sdnext-docs/) | [WiKi](https://github.com/vladmandic/automatic/wiki) | [Discord](https://discord.com/invite/sd-next-federal-batch-inspectors-1101998836328697867) | [Sponsor](https://github.com/sponsors/vladmandic)

### Details for 2026-04-01

- **Models**
  - [Google Flash 3.1 Image](https://ai.google.dev/gemini-api/docs/models/gemini-3-flash-preview) a.k.a. *Nano Banana 2*
    *Note*: UniPic-3 is a fine-tune of Qwen-Image-Edit with new distillation, regardless of its claim of major changes
  - [Anima Preview-v2](https://huggingface.co/circlestone-labs/Anima)
  - [FLUX.2-Klein-KV](https://huggingface.co/black-forest-labs/FLUX.2-klein-9b-kv), thanks @liutyi
  - [LTX-Video 2.3](https://huggingface.co/Lightricks/LTX-2.3) in *Full and Distilled* variants and in both original *FP16 and SDNQ-4bit* quantization
    *Note*: LTX-2.3 is a massive 22B parameters and the full model is very large (72GB), so use of the pre-quantized variant (32GB) is highly recommended
- **Image manipulation**
  - new **Color grading** module
    apply basic corrections to your images: brightness, contrast, saturation, shadows, highlights
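The basic corrections above boil down to simple per-pixel operations. A minimal stdlib-only sketch of three of them (brightness, contrast, saturation) on a single RGB pixel; this is an illustration of the operations, not SD.Next's actual color grading code, and the function names here are hypothetical:

```python
# Sketch of basic color-grading corrections on one RGB pixel, channels in [0, 1].
# Hypothetical helper names; the real module operates on whole images.

def clamp(v: float) -> float:
    return max(0.0, min(1.0, v))

def grade(rgb, brightness=0.0, contrast=0.0, saturation=0.0):
    r, g, b = rgb
    # brightness: uniform additive shift across all channels
    r, g, b = (clamp(c + brightness) for c in (r, g, b))
    # contrast: scale distance from mid-gray (0.5)
    k = 1.0 + contrast
    r, g, b = (clamp((c - 0.5) * k + 0.5) for c in (r, g, b))
    # saturation: blend toward the grayscale (luminance) value
    luma = 0.2126 * r + 0.7152 * g + 0.0722 * b
    s = 1.0 + saturation
    r, g, b = (clamp(luma + (c - luma) * s) for c in (r, g, b))
    return (r, g, b)
```

At `saturation=-1.0` the pixel collapses to its luminance, i.e. full desaturation.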
- *note* **Cuda**: `torch==2.10` removed support for the `rtx1000` series and older GPUs
  use the following before first startup to force installation of `torch==2.9.1` with `cuda==12.6`:
  > `set TORCH_COMMAND='torch==2.9.1 torchvision==0.24.1 torchaudio==2.9.1 --index-url https://download.pytorch.org/whl/cu126'`
- *note* **Cuda**: `cuda==13.0` requires newer nVidia drivers
  use the following before first startup to force installation of `torch==2.11.0` with `cuda==12.8`:
  > `set TORCH_COMMAND='torch torchvision --index-url https://download.pytorch.org/whl/cu128'`
- update installer and support `nunchaku==1.2.1`
- **UI**
  - **Enso**: new React-based UI, developed by @CalamitousFelicitousness!
    with a WYSIWYG infinite canvas workspace, command palette, and numerous quality-of-life improvements across the board
    enable using the `--enso` flag and access via the `/enso` endpoint (e.g. <http://localhost:7860/enso>)
    see [Enso Docs](https://vladmandic.github.io/sdnext-docs/Enso/) and [Enso Home](https://github.com/CalamitousFelicitousness/enso) for details
    *note*: Enso is work-in-progress and alpha-ready
  - legacy panels **T2I** and **I2I** are disabled by default
    you can re-enable them in *settings -> ui -> hide legacy tabs*
  - new panel: **Server Info** with detailed runtime information
  - rename **Scripts** to **Extras** and reorganize to split internal functionality vs external extensions
- **Themes** add *Vlad-Neomorph*
- **Gallery** add option to auto-refresh gallery, thanks @awsr
- **Token counters** add per-section display for supported models, thanks @awsr
- **Color grading** add hints for all the functions
- **Docs / Wiki**
  - updates to compute sections: *AMD-ROCm, AMD-MIOpen, ZLUDA, OpenVINO, nVidia*
  - updates to core sections: *Installation, Python, Schedulers, Launcher, SDNQ, Video*
  - added Enso page
- **API**
  - prototype **v2 API** (`/sdapi/v2/`)
    job-based generation with queue, per-job WebSocket progress, file uploads with TTL, model/network enumeration
    and a plethora of other improvements *(work-in-progress)*
    for the time being it ships with Enso, which must be enabled with the `--enso` flag on startup for the v2 API to be available
  - **rate limiting**: global for all endpoints, guards against abuse and denial-of-service type attacks
    configurable in *settings -> server settings*
  - new `/sdapi/v1/upload` endpoint with support for both POST with form-data and PUT using raw bytes
  - new `/sdapi/v1/torch` endpoint for torch info (backend, version, etc.)
  - new `/sdapi/v1/gpu` endpoint for GPU info
  - new `/sdapi/v1/rembg` endpoint for background removal
  - new `/sdapi/v1/unet` endpoint to list available unets/dits
  - use rate limiting for API logging
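For background on the rate-limiting idea mentioned above: the usual building block is a token bucket, where each request spends a token and tokens refill at a fixed rate; requests are rejected (typically HTTP 429) when the bucket is empty. This is a generic sketch of the concept, not SD.Next's actual limiter:

```python
# Generic token-bucket rate limiter sketch (illustrative only; not the
# SD.Next implementation or its settings).
import time

class TokenBucket:
    def __init__(self, rate: float, capacity: int):
        self.rate = rate            # tokens refilled per second
        self.capacity = capacity    # maximum burst size
        self.tokens = float(capacity)
        self.updated = time.monotonic()

    def allow(self) -> bool:
        # refill proportionally to elapsed time, capped at capacity
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # caller would respond with HTTP 429
```

A global limiter is one bucket shared by all endpoints; per-client limiting keys a bucket per IP or API token.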
- **Obsoleted**
  - removed support for additional quantization engines: *BitsAndBytes, TorchAO, Optimum-Quanto, NNCF*
- refactor: move `rembg` to core instead of extensions
- remove face restoration
- unified command-line parsing
- reinstall `nunchaku` with the `--reinstall` flag
- use explicit icon image references in `gallery`, thanks @awsr
- launch: use threads to asynchronously execute non-critical tasks
- switch from deprecated `pkg_resources` to `importlib`
- add `lora` support for flux2-klein
- fix `lora` change when used with `sdnq`
- multiple `sdnq` fixes
- handle `taesd` init errors

## Update for 2026-02-04

TODO.md
# TODO

## Internal

- Feature: implement `unload_auxiliary_models`
- Feature: RIFE update
- Feature: RIFE in processing
- Feature: SeedVR2 in processing
- Refactor: Unify *huggingface* and *diffusers* model folders
- Refactor: [GGUF](https://huggingface.co/docs/diffusers/main/en/quantization/gguf)
- Reimplement `llama` remover for Kanvas
- Integrate: [Depth3D](https://github.com/vladmandic/sd-extension-depth3d)

## OnHold

> npm run todo

```code
installer.py:TODO rocm: switch to pytorch source when it becomes available
modules/control/run.py:TODO modernui: monkey-patch for missing tabs.select event
modules/history.py:TODO: apply metadata, preview, load/save
modules/image/resize.py:TODO resize image: enable full VAE mode for resize-latent
modules/lora/lora_apply.py:TODO lora: add other quantization types
modules/lora/lora_apply.py:TODO lora: maybe force imediate quantization
modules/lora/lora_extract.py:TODO: lora: support pre-quantized flux
modules/lora/lora_load.py:TODO lora: add t5 key support for sd35/f1
modules/masking.py:TODO: additional masking algorithms
modules/modular_guiders.py:TODO: guiders
modules/processing_class.py:TODO processing: remove duplicate mask params
modules/sd_hijack_hypertile.py:TODO hypertile: vae breaks when using non-standard sizes
modules/sd_models.py:TODO model load: implement model in-memory caching
modules/sd_samplers_diffusers.py:TODO enso-required
modules/sd_unet.py:TODO model load: force-reloading entire model as loading transformers only leads to massive memory usage
modules/transformer_cache.py:TODO fc: autodetect distilled based on model
modules/transformer_cache.py:TODO fc: autodetect tensor format based on model
modules/ui_models_load.py:TODO loader: load receipe
modules/ui_models_load.py:TODO loader: save receipe
modules/video_models/video_save.py:TODO audio set time-base
```

Submodules updated:
Subproject commit d4eab2166e4d9b52e42924cc942198f9e22eb916
Subproject commit 006f08f499bbe69c484f0f1cc332bbf0e75526c2
Subproject commit 1e840033b040d8915ddfb5dbf62c80f411bcec0a
Subproject commit c7af727f31758c9fc96cf0429bcf3608858a15e8

Locale files updated (diffs suppressed): html/locale_ar.json, html/locale_bn.json, html/locale_de.json

Locale hint entries (updated):
{"id":"","label":"Batch size","localized":"","hint":"How many images to create in a single batch (increases generation performance at cost of higher VRAM usage)","ui":"txt2img"},
{"id":"","label":"Beta schedule","localized":"","hint":"Defines how beta (noise strength per step) grows. Options:<br>- default: the model default<br>- linear: evenly decays noise per step<br>- scaled: squared version of linear, used only by Stable Diffusion<br>- cosine: smoother decay, often better results with fewer steps<br>- sigmoid: sharp transition, experimental","ui":"txt2img"},
{"id":"","label":"Base shift","localized":"","hint":"Minimum shift value for low resolutions when using dynamic shifting.","ui":"txt2img"},
{"id":"","label":"Brightness","localized":"","hint":"Adjusts overall image brightness.<br>Positive values lighten the image, negative values darken it.<br><br>Applied uniformly across all pixels in linear space.","ui":"txt2img"},
{"id":"","label":"Block","localized":"","hint":"","ui":"script_kohya_hires_fix"},
{"id":"","label":"Block size","localized":"","hint":"","ui":"script_nudenet"},
{"id":"","label":"Banned words","localized":"","hint":"","ui":"script_nudenet"},
{"id":"change_vae","label":"Change VAE","localized":"","hint":""},
{"id":"change_unet","label":"Change UNet","localized":"","hint":""},
{"id":"change_reference","label":"Change reference","localized":"","hint":""},
{"id":"","label":"Color Grading","localized":"","hint":"Post-generation color adjustments, applied per-image after generation and before mask overlay.","ui":"txt2img"},
{"id":"","label":"Control Methods","localized":"","hint":"","ui":"control"},
{"id":"","label":"Control Media","localized":"","hint":"Add input image as separate initialization image for control processing","ui":"control"},
{"id":"","label":"Create Video","localized":"","hint":"","ui":"extras"},
{"id":"","label":"Client log","localized":"","hint":""},
{"id":"","label":"CLIP Analysis","localized":"","hint":"","ui":"caption"},
{"id":"","label":"Context","localized":"","hint":"","ui":"txt2img"},
{"id":"","label":"Contrast","localized":"","hint":"Adjusts the difference between light and dark areas.<br>Positive values increase contrast, making darks darker and lights brighter.<br>Negative values flatten the tonal range toward a more uniform appearance.","ui":"txt2img"},
{"id":"","label":"Color temp","localized":"","hint":"Shifts color temperature in Kelvin.<br>Lower values (e.g., 2000K) produce a warm, amber tone. Higher values (e.g., 12000K) produce a cool, bluish tone.<br><br>Default 6500K is neutral daylight. Works by scaling R/G/B channels to simulate the target white point.","ui":"txt2img"},
{"id":"","label":"CLAHE clip","localized":"","hint":"Clip limit for Contrast Limited Adaptive Histogram Equalization.<br>Higher values allow more local contrast enhancement, which brings out detail in flat regions.<br><br>Set to 0 to disable. Typical values are 1.0–3.0. Very high values can introduce noise amplification.","ui":"txt2img"},
{"id":"","label":"CLAHE grid","localized":"","hint":"Grid size for CLAHE tile regions.<br>Smaller grids (e.g., 2–4) produce coarser, more global equalization.<br>Larger grids (e.g., 12–16) enhance finer local detail but may amplify noise.<br><br>Default is 8. Only active when CLAHE clip is above 0.","ui":"txt2img"},
{"id":"","label":"Correction mode","localized":"","hint":"","ui":"txt2img"},
{"id":"","label":"Crop to portrait","localized":"","hint":"Crop input image to portrait-only before using it as IP adapter input","ui":"txt2img"},
{"id":"","label":"Concept Tokens","localized":"","hint":"","ui":"script_consistory"},
{"id":"","label":"Guidance scale","localized":"","hint":"Classifier Free Guidance scale: how strongly the image should conform to prompt. Lower values produce more creative results, higher values make it follow the prompt more strictly; recommended values between 5-10","ui":"txt2img"},
{"id":"","label":"Guidance end","localized":"","hint":"Ends the effect of CFG and PAG early: A value of 1 acts as normal, 0.5 stops guidance at 50% of steps","ui":"txt2img"},
{"id":"","label":"Guidance rescale","localized":"","hint":"Rescale guidance to avoid overexposed images at higher guidance values","ui":"txt2img"},
{"id":"","label":"Gamma","localized":"","hint":"Non-linear brightness curve adjustment.<br>Values below 1.0 brighten midtones and shadows while preserving highlights.<br>Values above 1.0 darken midtones and shadows.<br><br>Default is 1.0 (no change). Unlike brightness, gamma reshapes the tonal curve rather than shifting it uniformly.","ui":"txt2img"},
{"id":"","label":"Grain","localized":"","hint":"Adds film-like noise to the image.<br>Higher values produce more visible grain, simulating analog film texture.<br><br>Applied as random noise blended into the final image. Set to 0 to disable.","ui":"txt2img"},
{"id":"","label":"Grid margins","localized":"","hint":"","ui":"script_prompt_matrix"},
{"id":"","label":"Grid sections","localized":"","hint":"","ui":"script_regional_prompting"},
{"id":"","label":"Guidance strength","localized":"","hint":"","ui":"script_slg"},
{"id":"","label":"HiDiffusion","localized":"","hint":"HiDiffusion allows creation of high-resolution images using your standard models without duplicates/distortions and improved performance","ui":"settings_advanced"},
{"id":"","label":"Height","localized":"","hint":"Image height","ui":"txt2img"},
{"id":"","label":"HiRes steps","localized":"","hint":"Number of sampling steps for upscaled picture. If 0, uses same as for original","ui":"txt2img"},
{"id":"","label":"Hue","localized":"","hint":"Rotates all colors around the color wheel.<br>Small values produce subtle color shifts, while higher values cycle through the full spectrum.<br><br>Useful for creative color effects or correcting unwanted color casts.","ui":"txt2img"},
{"id":"","label":"Highlights","localized":"","hint":"Adjusts the brightness of highlight (bright) regions.<br>Positive values brighten highlights, negative values pull them down.<br><br>Operates on the L channel in Lab color space using a luminance-weighted mask, leaving shadows and midtones largely unaffected.","ui":"txt2img"},
{"id":"","label":"Highlights tint","localized":"","hint":"Color to blend into highlight regions for split toning.<br>Works together with Shadows tint and Split tone balance to create cinematic color grading looks.<br><br>Default white (#ffffff) applies no tint.","ui":"txt2img"},
{"id":"","label":"HDR range","localized":"","hint":"","ui":"script_hdr"},
{"id":"","label":"HQ init latents","localized":"","hint":"","ui":"script_instantir"},
{"id":"","label":"Height after","localized":"","hint":"","ui":"control"},
{"id":"","label":"Log Display","localized":"","hint":"","ui":"settings_ui"},
{"id":"","label":"List all locally available models","localized":"","hint":"","ui":"models_list_tab"},
{"id":"","label":"Last Generate","localized":"","hint":""},
{"id":"","label":"LUT","localized":"","hint":"Look-Up Table color grading section.<br>Upload a .cube LUT file to apply professional color grading presets.<br><br>LUTs remap colors according to a predefined 3D color transform, commonly used in film and photography for consistent color looks.","ui":"txt2img"},
{"id":"","label":"low order","localized":"","hint":"","ui":"txt2img"},
{"id":"","label":"LSC layer indices","localized":"","hint":"","ui":"txt2img"},
{"id":"","label":"LSC fully qualified name","localized":"","hint":"","ui":"txt2img"},
{"id":"","label":"LSC skip feed-forward blocks","localized":"","hint":"","ui":"txt2img"},
{"id":"","label":"LSC skip attention scores","localized":"","hint":"","ui":"txt2img"},
{"id":"","label":"LSC dropout rate","localized":"","hint":"","ui":"txt2img"},
{"id":"","label":"LUT strength","localized":"","hint":"Controls the intensity of the applied LUT.<br>1.0 applies the LUT at full strength. Values below 1.0 blend toward the original colors, values above 1.0 amplify the effect.<br><br>Only active when a .cube LUT file is loaded.","ui":"txt2img"},
{"id":"","label":"Latent brightness","localized":"","hint":"Increase or decrease brightness directly in latent space during generation","ui":"txt2img"},
{"id":"","label":"Latent sharpen","localized":"","hint":"Increase or decrease sharpness directly in latent space during generation","ui":"txt2img"},
{"id":"","label":"Latent color","localized":"","hint":"Adjust the color balance directly in latent space during generation","ui":"txt2img"},
{"id":"","label":"Max overlap","localized":"","hint":"Maximum overlap between two detected items before one is discarded","ui":"txt2img"},
{"id":"","label":"Min size","localized":"","hint":"Minimum size of detected object as percentage of overall image","ui":"txt2img"},
{"id":"","label":"Max size","localized":"","hint":"Maximum size of detected object as percentage of overall image","ui":"txt2img"},
{"id":"","label":"Midtones","localized":"","hint":"Adjusts the brightness of midtone regions.<br>Positive values brighten midtones, negative values darken them.<br><br>Targets pixels near the middle of the luminance range using a bell-shaped mask in Lab space, leaving shadows and highlights largely untouched.","ui":"txt2img"},
{"id":"","label":"Momentum","localized":"","hint":"","ui":"script_apg"},
{"id":"","label":"Mode x-axis","localized":"","hint":"","ui":"script_asymmetric_tiling"},
{"id":"","label":"Mode y-axis","localized":"","hint":"","ui":"script_asymmetric_tiling"},
{"id":"","label":"SEG config","localized":"","hint":"","ui":"txt2img"},
{"id":"","label":"Strength","localized":"","hint":"Denoising strength during image operations: controls how much of the original image is allowed to change during generation","ui":"txt2img"},
{"id":"","label":"Sort detections","localized":"","hint":"Sort detected areas from left to right instead of by detection score","ui":"txt2img"},
{"id":"","label":"Saturation","localized":"","hint":"Controls color intensity.<br>Positive values make colors more vivid, negative values desaturate toward grayscale.<br><br>At -1.0 the image becomes fully monochrome.","ui":"txt2img"},
{"id":"","label":"Sharpness","localized":"","hint":"Enhances edge detail and fine textures.<br>Higher values produce crisper edges but may amplify noise or artifacts if pushed too far.<br><br>Set to 0 to disable. Operates via an unsharp mask kernel.","ui":"txt2img"},
{"id":"","label":"Shadows","localized":"","hint":"Adjusts the brightness of shadow (dark) regions.<br>Positive values lift shadows to reveal detail, negative values deepen them.<br><br>Operates on the L channel in Lab color space using a luminance-weighted mask, leaving highlights and midtones largely unaffected.","ui":"txt2img"},
{"id":"","label":"Shadows tint","localized":"","hint":"Color to blend into shadow regions for split toning.<br>Works together with Highlights tint and Split tone balance to create cinematic color grading looks.<br><br>Default black (#000000) applies no tint.","ui":"txt2img"},
{"id":"","label":"Split tone balance","localized":"","hint":"Controls the crossover point between shadow and highlight tinting.<br>Values below 0.5 extend the shadow tint into midtones. Values above 0.5 extend the highlight tint into midtones.<br><br>Default 0.5 splits evenly at the midpoint.","ui":"txt2img"},
{"id":"","label":"Subject","localized":"","hint":"","ui":"script_consistory"},
{"id":"","label":"Same latent","localized":"","hint":"","ui":"script_consistory"},
{"id":"","label":"Share queries","localized":"","hint":"","ui":"script_consistory"},
{"id":"","label":"Video Output","localized":"","hint":"","ui":"tab_video"},
{"id":"","label":"Variation","localized":"","hint":"Second seed to be mixed with primary seed","ui":"txt2img"},
{"id":"","label":"Variation strength","localized":"","hint":"How strong of a variation to produce. At 0, there will be no effect. At 1, you will get the complete picture with variation seed (except for ancestral samplers, where you will just get something)","ui":"txt2img"},
{"id":"","label":"Vignette","localized":"","hint":"Applies radial edge darkening that draws focus toward the center of the image.<br>Higher values produce a stronger falloff from center to corners.<br><br>Set to 0 to disable. Simulates the natural light falloff seen in vintage and cinematic lenses.","ui":"txt2img"},
{"id":"","label":"VAE type","localized":"","hint":"Choose if you want to run full VAE, reduced quality VAE or attempt to use remote VAE service","ui":"txt2img"},
{"id":"","label":"Version","localized":"","hint":"","ui":"script_pulid"},
{"id":"","label":"Video format","localized":"","hint":"Format and codec of output video","ui":"script_video"},
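Two of the color-grading hints above (gamma and vignette) map directly to simple formulas. A per-pixel, stdlib-only sketch under the assumption of channel values in [0, 1]; function names here are illustrative, not SD.Next's API:

```python
# Illustrative per-pixel sketches of the gamma and vignette hints above;
# hypothetical helpers, not the actual SD.Next implementation.
import math

def apply_gamma(value: float, gamma: float = 1.0) -> float:
    # gamma < 1.0 brightens midtones/shadows, gamma > 1.0 darkens them
    return value ** gamma

def vignette_factor(x: float, y: float, strength: float = 0.0) -> float:
    # radial darkening multiplier: (x, y) in [0, 1], center at (0.5, 0.5);
    # distance normalized so corners have distance 1.0
    dist = math.hypot(x - 0.5, y - 0.5) / math.hypot(0.5, 0.5)
    return max(0.0, 1.0 - strength * dist)
```

Multiplying each pixel by its `vignette_factor` darkens edges while leaving the center untouched, matching the "falloff from center to corners" description.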

Locale files updated (diffs suppressed): html/locale_es.json, html/locale_fr.json, html/locale_he.json, html/locale_hi.json, html/locale_hr.json, html/locale_id.json, html/locale_it.json, html/locale_ja.json, html/locale_ko.json, html/locale_nb.json, html/locale_po.json, html/locale_pt.json, html/locale_qq.json, html/locale_ru.json, html/locale_sr.json, html/locale_tb.json, html/locale_tlh.json, html/locale_tr.json, html/locale_ur.json, html/locale_vi.json, html/locale_xx.json, html/locale_zh.json

installer.py
```python
def uninstall(package, quiet = False):
    t_start = time.time()
    packages = package if isinstance(package, list) else [package]
    txt = ''
    for p in packages:
        if installed(p, p, quiet=True):
            if not quiet:
                log.warning(f'Package: {p} uninstall')
            _result, _txt = pip(f"uninstall {p} --yes --quiet", ignore=True, quiet=True, uv=False)
            txt += _txt
    ts('uninstall', t_start)
    return txt
```
```python
def pip(arg: str, ignore: bool = False, quiet: bool = True, uv = True) -> tuple[subprocess.CompletedProcess, str]:
    t_start = time.time()
    originalArg = arg
    arg = arg.replace('>=', '==')
    if opts.get('offline_mode', False):
        log.warning('Offline mode enabled')
        return None, 'offline'
    package = arg.replace("install", "").replace("--upgrade", "").replace("--no-deps", "").replace("--force-reinstall", "").replace("  ", " ").strip()
    uv = uv and args.uv and not package.startswith('git+')
    pipCmd = "uv pip" if uv else "pip"
    ...
    all_args = f'{pip_log}{arg} {env_args}'.strip()
    if not quiet:
        log.debug(f'Running: {pipCmd}="{all_args}"')
    result, output = run(sys.executable, "-m", pipCmd, all_args)
    if len(result.stderr) > 0:
        if uv and result.returncode != 0:
            log.warning(f'Install: cmd="{pipCmd}" args="{all_args}" cannot use uv, fallback to pip')
            debug(f'Install: uv pip error: {result.stderr}')
            cleanup_broken_packages()
            return pip(originalArg, ignore, quiet, uv=False)
        debug(f'Install {pipCmd}: {output}')
    if result.returncode != 0 and not ignore:
        errors.append(f'pip: {package}')
        log.error(f'Install: {pipCmd}: {arg}')
        log.debug(f'Install: pip code={result.returncode} stdout={result.stdout} stderr={result.stderr} output={output}')
    ts('pip', t_start)
    return result, output
```
# install package using pip if not already installed
|
||||
|
|
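The change above alters the return contract of `pip()` from a bare output string to a `(result, output)` pair, with `result` set to `None` when offline mode short-circuits before any process is spawned. A minimal runnable sketch of that contract, assuming only the pair shape described by the diff (the command is a stand-in so the example does not touch pip itself):

```python
import subprocess
import sys
from typing import Optional, Tuple

def pip(arg: str, offline: bool = False) -> Tuple[Optional[subprocess.CompletedProcess], str]:
    # Offline mode short-circuits before any process is spawned,
    # so the first element of the pair is None.
    if offline:
        return None, 'offline'
    # Stand-in command so the sketch is runnable without invoking pip itself.
    result = subprocess.run([sys.executable, '-c', f'print({arg!r})'],
                            capture_output=True, text=True)
    return result, result.stdout + result.stderr

# Callers now unpack the pair and branch on both parts:
result, output = pip('install --upgrade example')
assert result is not None and result.returncode == 0
assert pip('anything', offline=True) == (None, 'offline')
```

Returning the `CompletedProcess` alongside the text lets callers check `returncode`, `stdout`, and `stderr` separately instead of parsing a merged log string.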
@@ -454,6 +455,10 @@ def check_python(supported_minors=None, experimental_minors=None, reason=None):
     if not (int(sys.version_info.major) == 3 and int(sys.version_info.minor) in supported_minors):
+        if (int(sys.version_info.major) == 3 and int(sys.version_info.minor) in experimental_minors):
+            log.warning(f"Python experimental: {sys.version_info.major}.{sys.version_info.minor}.{sys.version_info.micro}")
+            if reason is not None:
+                log.error(reason)
+            if not args.ignore and not args.experimental:
+                sys.exit(1)
+        else:
             log.error(f"Python incompatible: current {sys.version_info.major}.{sys.version_info.minor}.{sys.version_info.micro} required 3.{supported_minors}")
             if reason is not None:
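The new `check_python()` branch distinguishes experimental interpreter versions (warn, and exit unless an ignore/experimental flag is set) from plainly unsupported ones. A small sketch of just that classification logic, with illustrative names rather than the installer's actual API:

```python
def classify_python(minor: int, supported: list, experimental: list) -> str:
    # Mirrors the new branch order: supported minors pass, experimental
    # minors only warn, everything else is rejected outright.
    if minor in supported:
        return 'supported'
    if minor in experimental:
        return 'experimental'
    return 'incompatible'

assert classify_python(11, supported=[10, 11, 12], experimental=[13]) == 'supported'
assert classify_python(13, supported=[10, 11, 12], experimental=[13]) == 'experimental'
assert classify_python(9, supported=[10, 11, 12], experimental=[13]) == 'incompatible'
```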
@@ -477,7 +482,7 @@ def check_diffusers():
     t_start = time.time()
     if args.skip_all:
         return
-    target_commit = "85ffcf1db23c0e981215416abd8e8a748bfd86b6" # diffusers commit hash == 0.37.1.dev-0326
+    target_commit = "0325ca4c5938a7e300f3e3b9ee7ec85f52d01bb5" # diffusers commit hash == 0.37.1.dev-0331
     # if args.use_rocm or args.use_zluda or args.use_directml:
     #     sha = '043ab2520f6a19fce78e6e060a68dbc947edb9f9' # lock diffusers versions for now
     pkg = package_spec('diffusers')
@@ -506,7 +511,7 @@ def check_transformers():
     pkg_tokenizers = package_spec('tokenizers')
     # target_commit = '753d61104116eefc8ffc977327b441ee0c8d599f' # transformers commit hash == 4.57.6
     # target_commit = "aad13b87ed59f2afcfaebc985f403301887a35fc" # transformers commit hash == 5.3.0
-    target_commit = "c9faacd7d57459157656bdffe049dabb6293f011" # transformers commit hash == 5.3.0.dev-0326
+    target_commit = "2dba8e0495974930af02274d75bd182d22cc1686" # transformers commit hash == 5.3.0.dev-0331
     if args.use_directml:
         target_transformers = '4.52.4'
         target_tokenizers = '0.21.4'
@@ -559,8 +564,7 @@ def install_cuda():
     if args.use_nightly:
         cmd = os.environ.get('TORCH_COMMAND', '--upgrade --pre torch torchvision --index-url https://download.pytorch.org/whl/nightly/cu128 --extra-index-url https://download.pytorch.org/whl/nightly/cu130')
     else:
-        # cmd = os.environ.get('TORCH_COMMAND', 'torch==2.10.0+cu128 torchvision==0.25.0+cu128 --index-url https://download.pytorch.org/whl/cu128')
-        cmd = os.environ.get("TORCH_COMMAND", "pip install -U torch==2.11.0+cu130 torchvision==0.26.0+cu130 --index-url https://download.pytorch.org/whl/cu130")
+        cmd = os.environ.get('TORCH_COMMAND', 'torch==2.11.0+cu130 torchvision==0.26.0+cu130 --index-url https://download.pytorch.org/whl/cu130')
     return cmd
@@ -686,10 +690,8 @@ def install_rocm_zluda():


 def install_ipex():
     t_start = time.time()
-    #check_python(supported_minors=[10, 11, 12, 13, 14], reason='IPEX backend requires a Python version between 3.10 and 3.13')
     args.use_ipex = True # pylint: disable=attribute-defined-outside-init
     log.info('IPEX: Intel OneAPI toolkit detected')
-
     if args.use_nightly:
         torch_command = os.environ.get('TORCH_COMMAND', '--upgrade --pre torch torchvision --index-url https://download.pytorch.org/whl/nightly/xpu')
     else:
@@ -703,8 +705,6 @@ def install_openvino():
     t_start = time.time()
     log.info('OpenVINO: selected')
     os.environ.setdefault('PYTORCH_TRACING_MODE', 'TORCHFX')
-
-    #check_python(supported_minors=[10, 11, 12, 13], reason='OpenVINO backend requires a Python version between 3.10 and 3.13')
     if sys.platform == 'darwin':
         torch_command = os.environ.get('TORCH_COMMAND', 'torch==2.11.0 torchvision==0.26.0')
     else:
@@ -1145,14 +1145,6 @@ def install_compel():


 def install_pydantic():
-    """
-    if args.new or (sys.version_info >= (3, 14)):
-        install('pydantic==2.12.5', ignore=True, quiet=True)
-        reload('pydantic', '2.12.5')
-    else:
-        install('pydantic==1.10.21', ignore=True, quiet=True)
-        reload('pydantic', '1.10.21')
-    """
     install('pydantic==2.12.5', ignore=True, quiet=True)
     reload('pydantic', '2.12.5')
@@ -78,7 +78,7 @@ const anyPromptExists = () => gradioApp().querySelectorAll('.main-prompts').leng

 function scheduleAfterUiUpdateCallbacks() {
   clearTimeout(uiAfterUpdateTimeout);
-  uiAfterUpdateTimeout = setTimeout(() => executeCallbacks(uiAfterUpdateCallbacks, 500));
+  uiAfterUpdateTimeout = setTimeout(() => executeCallbacks(uiAfterUpdateCallbacks), 250);
 }

 let executedOnLoaded = false;
@@ -22,12 +22,13 @@ log_cost = {
     "/internal/progress": -1,
     "/sdapi/v1/version": -1,
     "/sdapi/v1/log": -1,
-    "/sdapi/v1/torch": 60,
-    "/sdapi/v1/gpu": 60,
+    "/sdapi/v1/torch": -1,
+    "/sdapi/v1/gpu": -1,
-    "/sdapi/v1/memory": -1,
-    "/sdapi/v1/platform": -1,
-    "/sdapi/v1/checkpoint": -1,
+    "/sdapi/v1/status": 60,
+    "/sdapi/v1/memory": 60,
+    "/sdapi/v1/platform": 60,
+    "/sdapi/v1/checkpoint": 60,
     "/sdapi/v1/progress": 60,
 }
 log_exclude_suffix = ['.css', '.js', '.ico', '.svg']
 log_exclude_prefix = ['/assets']
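The `log_cost` table pairs each endpoint with a cost; judging by the change and the changelog's mention of API rate limits, `-1` appears to exempt an endpoint while a positive value is charged against the rate budget. A hedged sketch of a lookup helper built on that assumption (`request_cost` is illustrative, not part of the server code):

```python
# Subset of the table above, plus the suffix exclusions it sits next to.
log_cost = {
    '/sdapi/v1/version': -1,   # assumed: -1 means exempt from rate limiting
    '/sdapi/v1/status': 60,    # assumed: positive values charge the rate budget
}
log_exclude_suffix = ['.css', '.js', '.ico', '.svg']

def request_cost(path: str, default: int = 1) -> int:
    # Static assets are never counted; unknown API paths get a default cost.
    if any(path.endswith(s) for s in log_exclude_suffix):
        return 0
    return log_cost.get(path, default)

assert request_cost('/sdapi/v1/version') == -1
assert request_cost('/sdapi/v1/status') == 60
assert request_cost('/theme.css') == 0
```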
@ -205,7 +205,6 @@ def batch(
|
|||
Returns:
|
||||
Combined tag results
|
||||
"""
|
||||
import os
|
||||
import time
|
||||
from pathlib import Path
|
||||
import rich.progress as rp
|
||||
|
|
|
|||
|
|
@@ -12,7 +12,7 @@ class GoogleGeminiPipeline():
         self.model = model_name.split(' (')[0]
         from installer import install
         install('google-genai==1.52.0')
-        from google import genai
+        from google import genai # pylint: disable=no-name-in-module
        args = self.get_args()
         self.client = genai.Client(**args)
         log.debug(f'Load model: type=GoogleGemini model="{self.model}"')

@@ -54,7 +54,7 @@ class GoogleGeminiPipeline():
         return args

     def __call__(self, question, image, model, instructions, prefill, thinking, kwargs):
-        from google.genai import types
+        from google.genai import types # pylint: disable=no-name-in-module
         config = {
             'system_instruction': instructions or shared.opts.caption_vlm_system,
             'thinking_config': types.ThinkingConfig(thinking_level="high" if thinking else "low")
@@ -419,7 +419,6 @@ def batch(
     Returns:
         Combined tag results
     """
     import os
     from pathlib import Path

     # Load model
@@ -61,38 +61,53 @@ def atomically_save_image():
         log.warning(f'Save failed: description={filename_txt} {e}')

     # actual save
+    exifinfo_dump = piexif.helper.UserComment.dump(exifinfo, encoding="unicode")
     if image_format == 'PNG':
         pnginfo_data = PngImagePlugin.PngInfo()
         for k, v in params.pnginfo.items():
             pnginfo_data.add_text(k, str(v))
         debug_save(f'Save pnginfo: {params.pnginfo.items()}')
-        save_args = { 'compress_level': 6, 'pnginfo': pnginfo_data if shared.opts.image_metadata else None }
+        save_args = {
+            'compress_level': 6,
+            'pnginfo': pnginfo_data if shared.opts.image_metadata else None,
+        }
     elif image_format == 'JPEG':
         if image.mode == 'RGBA':
             log.warning('Save: removing alpha channel')
             image = image.convert("RGB")
         elif image.mode == 'I;16':
             image = image.point(lambda p: p * 0.0038910505836576).convert("L")
-        save_args = { 'optimize': True, 'quality': shared.opts.jpeg_quality }
+        save_args = {
+            'optimize': True,
+            'quality': shared.opts.jpeg_quality,
+        }
         if shared.opts.image_metadata:
             debug_save(f'Save exif: {exifinfo}')
-            save_args['exif'] = piexif.dump({ "Exif": { piexif.ExifIFD.UserComment: piexif.helper.UserComment.dump(exifinfo, encoding="unicode") } })
+            save_args['exif'] = piexif.dump({ "Exif": { piexif.ExifIFD.UserComment: exifinfo_dump } })
     elif image_format == 'WEBP':
         if image.mode == 'I;16':
             image = image.point(lambda p: p * 0.0038910505836576).convert("RGB")
-        save_args = { 'optimize': True, 'quality': shared.opts.jpeg_quality, 'lossless': shared.opts.webp_lossless }
+        save_args = {
+            'optimize': True,
+            'quality': shared.opts.jpeg_quality,
+            'lossless': shared.opts.webp_lossless,
+        }
         if shared.opts.image_metadata:
             debug_save(f'Save exif: {exifinfo}')
-            save_args['exif'] = piexif.dump({ "Exif": { piexif.ExifIFD.UserComment: piexif.helper.UserComment.dump(exifinfo, encoding="unicode") } })
+            save_args['exif'] = piexif.dump({ "Exif": { piexif.ExifIFD.UserComment: exifinfo_dump } })
     elif image_format == 'JXL':
         if image.mode == 'I;16':
             image = image.point(lambda p: p * 0.0038910505836576).convert("RGB")
         elif image.mode not in {"RGB", "RGBA"}:
             image = image.convert("RGBA")
-        save_args = { 'optimize': True, 'quality': shared.opts.jpeg_quality, 'lossless': shared.opts.webp_lossless }
+        save_args = {
+            'optimize': True,
+            'quality': shared.opts.jpeg_quality,
+            'lossless': shared.opts.webp_lossless,
+        }
         if shared.opts.image_metadata:
             debug_save(f'Save exif: {exifinfo}')
-            save_args['exif'] = piexif.dump({ "Exif": { piexif.ExifIFD.UserComment: piexif.helper.UserComment.dump(exifinfo, encoding="unicode") } })
+            save_args['exif'] = piexif.dump({ "Exif": { piexif.ExifIFD.UserComment: exifinfo_dump } })
     else:
         save_args = { 'quality': shared.opts.jpeg_quality }
     try:
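The save refactor computes the EXIF payload once (`exifinfo_dump`) and reuses it for every format that supports it. The per-format argument selection can be sketched as a standalone helper; this is a simplification of the code above, and `build_save_args` is an illustrative name, not the project's API:

```python
def build_save_args(image_format: str, metadata: bool, exif_bytes: bytes,
                    jpeg_quality: int = 90, webp_lossless: bool = False) -> dict:
    # Per-format base arguments, mirroring the branches in the diff.
    if image_format == 'PNG':
        args = {'compress_level': 6}
    elif image_format == 'JPEG':
        args = {'optimize': True, 'quality': jpeg_quality}
    elif image_format in ('WEBP', 'JXL'):
        args = {'optimize': True, 'quality': jpeg_quality, 'lossless': webp_lossless}
    else:
        args = {'quality': jpeg_quality}
    # The pre-dumped EXIF payload is attached only where the format supports it
    # (PNG carries its metadata via pnginfo instead).
    if metadata and image_format in ('JPEG', 'WEBP', 'JXL'):
        args['exif'] = exif_bytes
    return args

args = build_save_args('WEBP', metadata=True, exif_bytes=b'\x00')
assert args['lossless'] is False and args['exif'] == b'\x00'
```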
@@ -2,43 +2,22 @@

 from installer import pip
 from modules.logger import log
-from modules import devices


-nunchaku_versions = {
-    '2.5': '1.0.1',
-    '2.6': '1.0.1',
-    '2.7': '1.1.0',
-    '2.8': '1.1.0',
-    '2.9': '1.1.0',
-    '2.10': '1.0.2',
-    '2.11': '1.1.0',
-}
 ok = False


-def _expected_ver():
-    try:
-        import torch
-        torch_ver = '.'.join(torch.__version__.split('+')[0].split('.')[:2])
-        return nunchaku_versions.get(torch_ver)
-    except Exception:
-        return None
-
-
-def check():
+def check(force=False):
     global ok # pylint: disable=global-statement
+    if force:
+        return False
     if ok:
         return True
     try:
         import nunchaku
-        import nunchaku.utils
-        from nunchaku import __version__
-        expected = _expected_ver()
+        import nunchaku.utils # pylint: disable=no-name-in-module, no-member
+        from nunchaku import __version__ # pylint: disable=no-name-in-module
         log.info(f'Nunchaku: path={nunchaku.__path__} version={__version__.__version__} precision={nunchaku.utils.get_precision()}')
-        if expected is not None and __version__.__version__ != expected:
-            ok = False
-            return False
         ok = True
         return True
     except Exception as e:

@@ -47,14 +26,17 @@ def check():
         return False


-def install_nunchaku():
-    if devices.backend is None:
-        return False # too early
-    if not check():
+def install_nunchaku(force=False):
+    if not force:
+        from modules import devices, shared
+        if devices.backend is None:
+            return False # too early
+        if shared.cmd_opts.reinstall:
+            force = True
+    if not check(force):
         import os
         import sys
         import platform
-        import torch
         python_ver = f'{sys.version_info.major}{sys.version_info.minor}'
         if python_ver not in ['311', '312', '313']:
             log.error(f'Nunchaku: python={sys.version_info} unsupported')

@@ -63,24 +45,62 @@ def install_nunchaku():
         if arch not in ['linux', 'windows']:
             log.error(f'Nunchaku: platform={arch} unsupported')
             return False
-        if devices.backend not in ['cuda']:
+        if not force and devices.backend not in ['cuda']:
             log.error(f'Nunchaku: backend={devices.backend} unsupported')
             return False
-        torch_ver = '.'.join(torch.__version__.split('+')[0].split('.')[:2])
-        nunchaku_ver = nunchaku_versions.get(torch_ver)
-        if nunchaku_ver is None:
-            log.error(f'Nunchaku: torch={torch.__version__} unsupported')
-            return False
-        suffix = 'x86_64' if arch == 'linux' else 'win_amd64'
-
         url = os.environ.get('NUNCHAKU_COMMAND', None)
-        if url is None:
-            arch = f'{arch}_' if arch == 'linux' else ''
-            url = f'https://huggingface.co/nunchaku-ai/nunchaku/resolve/main/nunchaku-{nunchaku_ver}'
-            url += f'+torch{torch_ver}-cp{python_ver}-cp{python_ver}-{arch}{suffix}.whl'
-        cmd = f'install --upgrade {url}'
-        log.debug(f'Nunchaku: install="{url}"')
-        pip(cmd, ignore=False, uv=False)
+        if url is not None:
+            cmd = f'install --upgrade {url}'
+            log.debug(f'Nunchaku: install="{url}"')
+            result, _output = pip(cmd, uv=False, ignore=not force, quiet=not force)
+            return result.returncode == 0
+        else:
+            import torch
+            torch_ver = '.'.join(torch.__version__.split('+')[0].split('.')[:2])
+            cuda_ver = torch.__version__.split('+')[1] if '+' in torch.__version__ else None
+            cuda_ver = cuda_ver[:4] + '.' + cuda_ver[-1] if cuda_ver and len(cuda_ver) >= 4 else None
+            suffix = 'linux_x86_64' if arch == 'linux' else 'win_amd64'
+            if cuda_ver is None:
+                log.error(f'Nunchaku: torch={torch.__version__} cuda="unknown"')
+                return False
+            if cuda_ver.startswith('cu13'):
+                nunchaku_versions = ['1.2.1', '1.2.0', '1.1.0', '1.0.2', '1.0.1']
+            else:
+                nunchaku_versions = ['1.2.1', '1.0.2', '1.0.1'] # 1.2.0 and 1.1.0 imply cu13 but do not specify it
+            for v in nunchaku_versions:
+                url = f'https://github.com/nunchaku-ai/nunchaku/releases/download/v{v}/'
+                fn = f'nunchaku-{v}+{cuda_ver}torch{torch_ver}-cp{python_ver}-cp{python_ver}-{suffix}.whl'
+                result, _output = pip(f'install --upgrade {url+fn}', uv=False, ignore=True, quiet=True)
+                if (result is None) or (_output == 'offline'):
+                    log.error(f'Nunchaku: install url="{url+fn}" offline mode')
+                    return False
+                if force:
+                    log.debug(f'Nunchaku: url="{fn}" code={result.returncode} stdout={result.stdout} stderr={result.stderr} output={_output}')
+                if result.returncode == 0:
+                    log.info(f'Nunchaku: install url="{url}"')
+                    return True
+                fn = f'nunchaku-{v}+torch{torch_ver}-cp{python_ver}-cp{python_ver}-{suffix}.whl'
+                result, _output = pip(f'install --upgrade {url+fn}', uv=False, ignore=True, quiet=True)
+                if (result is None) or (_output == 'offline'):
+                    log.error(f'Nunchaku: install url="{url+fn}" offline mode')
+                    return False
+                if force:
+                    log.debug(f'Nunchaku: url="{fn}" code={result.returncode} stdout={result.stdout} stderr={result.stderr} output={_output}')
+                if result.returncode == 0:
+                    log.info(f'Nunchaku: install version={v} url="{url+fn}"')
+                    return True
+            log.error(f'Nunchaku install failed: torch={torch.__version__} cuda={cuda_ver} python={python_ver} platform={arch}')
+            return False
-        if not check():
-            log.error('Nunchaku: install failed')
-            return False
     return True


+if __name__ == '__main__':
+    from modules.logger import setup_logging
+    setup_logging()
+    log.info('Nunchaku: manual install')
+    install_nunchaku(force=True)
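The fallback loop above probes release wheels under two filename patterns, first with and then without the CUDA tag. A sketch of just the filename construction (`nunchaku_wheel` is a hypothetical helper mirroring the f-strings in the loop):

```python
def nunchaku_wheel(version, cuda, torch_ver, python_ver, platform_tag):
    # Newer (1.2.x) wheels embed a CUDA tag such as 'cu13.0' in the local
    # version segment; older wheels omit it, so cuda=None drops the tag.
    cuda_part = cuda if cuda else ''
    return f'nunchaku-{version}+{cuda_part}torch{torch_ver}-cp{python_ver}-cp{python_ver}-{platform_tag}.whl'

fn = nunchaku_wheel('1.2.1', 'cu13.0', '2.11', '312', 'linux_x86_64')
assert fn == 'nunchaku-1.2.1+cu13.0torch2.11-cp312-cp312-linux_x86_64.whl'
fn = nunchaku_wheel('1.0.2', None, '2.11', '313', 'win_amd64')
assert fn == 'nunchaku-1.0.2+torch2.11-cp313-cp313-win_amd64.whl'
```

Trying the tagged name first and falling back to the untagged one lets a single loop cover both release naming schemes without pinning a version table per torch release.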
@@ -509,8 +509,8 @@ def load_diffuser_force(detected_model_type, checkpoint_info, diffusers_load_con
             allow_post_quant = False
         except Exception as e:
             log.error(f'Load {op}: path="{checkpoint_info.path}" {e}')
-            if debug_load:
-                errors.display(e, 'Load')
+            # if debug_load:
+            errors.display(e, 'Load')
             return None, True
     if sd_model is not None:
         return sd_model, True
@@ -11,58 +11,29 @@ import triton
 import triton.language as tl


-def get_autotune_config():
-    if triton.runtime.driver.active.get_current_target().backend == "cuda":
-        return [
-            triton.Config({"BLOCK_SIZE_M": 128, "BLOCK_SIZE_N": 256, "BLOCK_SIZE_K": 64, "GROUP_SIZE_M": 8}, num_stages=3, num_warps=8),
-            triton.Config({"BLOCK_SIZE_M": 64, "BLOCK_SIZE_N": 256, "BLOCK_SIZE_K": 32, "GROUP_SIZE_M": 8}, num_stages=4, num_warps=4),
-            triton.Config({"BLOCK_SIZE_M": 128, "BLOCK_SIZE_N": 128, "BLOCK_SIZE_K": 32, "GROUP_SIZE_M": 8}, num_stages=4, num_warps=4),
-            triton.Config({"BLOCK_SIZE_M": 128, "BLOCK_SIZE_N": 64, "BLOCK_SIZE_K": 32, "GROUP_SIZE_M": 8}, num_stages=4, num_warps=4),
-            triton.Config({"BLOCK_SIZE_M": 64, "BLOCK_SIZE_N": 128, "BLOCK_SIZE_K": 32, "GROUP_SIZE_M": 8}, num_stages=4, num_warps=4),
-            triton.Config({"BLOCK_SIZE_M": 128, "BLOCK_SIZE_N": 32, "BLOCK_SIZE_K": 32, "GROUP_SIZE_M": 8}, num_stages=4, num_warps=4),
-            triton.Config({"BLOCK_SIZE_M": 64, "BLOCK_SIZE_N": 32, "BLOCK_SIZE_K": 32, "GROUP_SIZE_M": 8}, num_stages=5, num_warps=2),
-            triton.Config({"BLOCK_SIZE_M": 32, "BLOCK_SIZE_N": 64, "BLOCK_SIZE_K": 32, "GROUP_SIZE_M": 8}, num_stages=5, num_warps=2),
-            #
-            triton.Config({"BLOCK_SIZE_M": 128, "BLOCK_SIZE_N": 256, "BLOCK_SIZE_K": 128, "GROUP_SIZE_M": 8}, num_stages=3, num_warps=8),
-            triton.Config({"BLOCK_SIZE_M": 64, "BLOCK_SIZE_N": 256, "BLOCK_SIZE_K": 128, "GROUP_SIZE_M": 8}, num_stages=4, num_warps=4),
-            triton.Config({"BLOCK_SIZE_M": 128, "BLOCK_SIZE_N": 128, "BLOCK_SIZE_K": 128, "GROUP_SIZE_M": 8}, num_stages=4, num_warps=4),
-            triton.Config({"BLOCK_SIZE_M": 128, "BLOCK_SIZE_N": 64, "BLOCK_SIZE_K": 64, "GROUP_SIZE_M": 8}, num_stages=4, num_warps=4),
-            triton.Config({"BLOCK_SIZE_M": 64, "BLOCK_SIZE_N": 128, "BLOCK_SIZE_K": 64, "GROUP_SIZE_M": 8}, num_stages=4, num_warps=4),
-            triton.Config({"BLOCK_SIZE_M": 128, "BLOCK_SIZE_N": 32, "BLOCK_SIZE_K": 64, "GROUP_SIZE_M": 8}, num_stages=4, num_warps=4),
-            triton.Config({"BLOCK_SIZE_M": 256, "BLOCK_SIZE_N": 128, "BLOCK_SIZE_K": 128, "GROUP_SIZE_M": 8}, num_stages=3, num_warps=8),
-            triton.Config({"BLOCK_SIZE_M": 256, "BLOCK_SIZE_N": 64, "BLOCK_SIZE_K": 128, "GROUP_SIZE_M": 8}, num_stages=4, num_warps=4),
-        ]
-    else:
-        return [
-            triton.Config({"BLOCK_SIZE_M": 128, "BLOCK_SIZE_N": 256, "BLOCK_SIZE_K": 64, "GROUP_SIZE_M": 8}, num_stages=2, num_warps=8),
-            triton.Config({"BLOCK_SIZE_M": 64, "BLOCK_SIZE_N": 256, "BLOCK_SIZE_K": 32, "GROUP_SIZE_M": 8}, num_stages=2, num_warps=4),
-            triton.Config({"BLOCK_SIZE_M": 128, "BLOCK_SIZE_N": 128, "BLOCK_SIZE_K": 32, "GROUP_SIZE_M": 8}, num_stages=2, num_warps=4),
-            triton.Config({"BLOCK_SIZE_M": 128, "BLOCK_SIZE_N": 64, "BLOCK_SIZE_K": 32, "GROUP_SIZE_M": 8}, num_stages=2, num_warps=4),
-            triton.Config({"BLOCK_SIZE_M": 64, "BLOCK_SIZE_N": 128, "BLOCK_SIZE_K": 32, "GROUP_SIZE_M": 8}, num_stages=2, num_warps=4),
-            triton.Config({"BLOCK_SIZE_M": 128, "BLOCK_SIZE_N": 32, "BLOCK_SIZE_K": 32, "GROUP_SIZE_M": 8}, num_stages=2, num_warps=4),
-            triton.Config({"BLOCK_SIZE_M": 64, "BLOCK_SIZE_N": 32, "BLOCK_SIZE_K": 32, "GROUP_SIZE_M": 8}, num_stages=2, num_warps=2),
-            triton.Config({"BLOCK_SIZE_M": 32, "BLOCK_SIZE_N": 64, "BLOCK_SIZE_K": 32, "GROUP_SIZE_M": 8}, num_stages=2, num_warps=2),
-            #
-            triton.Config({"BLOCK_SIZE_M": 128, "BLOCK_SIZE_N": 256, "BLOCK_SIZE_K": 128, "GROUP_SIZE_M": 8}, num_stages=2, num_warps=8),
-            triton.Config({"BLOCK_SIZE_M": 64, "BLOCK_SIZE_N": 256, "BLOCK_SIZE_K": 128, "GROUP_SIZE_M": 8}, num_stages=2, num_warps=4),
-            triton.Config({"BLOCK_SIZE_M": 128, "BLOCK_SIZE_N": 128, "BLOCK_SIZE_K": 128, "GROUP_SIZE_M": 8}, num_stages=2, num_warps=4),
-            triton.Config({"BLOCK_SIZE_M": 128, "BLOCK_SIZE_N": 64, "BLOCK_SIZE_K": 64, "GROUP_SIZE_M": 8}, num_stages=2, num_warps=4),
-            triton.Config({"BLOCK_SIZE_M": 64, "BLOCK_SIZE_N": 128, "BLOCK_SIZE_K": 64, "GROUP_SIZE_M": 8}, num_stages=2, num_warps=4),
-            triton.Config({"BLOCK_SIZE_M": 128, "BLOCK_SIZE_N": 32, "BLOCK_SIZE_K": 64, "GROUP_SIZE_M": 8}, num_stages=2, num_warps=4),
-            triton.Config({"BLOCK_SIZE_M": 256, "BLOCK_SIZE_N": 128, "BLOCK_SIZE_K": 128, "GROUP_SIZE_M": 8}, num_stages=2, num_warps=8),
-            triton.Config({"BLOCK_SIZE_M": 256, "BLOCK_SIZE_N": 64, "BLOCK_SIZE_K": 128, "GROUP_SIZE_M": 8}, num_stages=2, num_warps=4),
-        ]
+matmul_configs = [
+    triton.Config({'BLOCK_SIZE_M': BM, 'BLOCK_SIZE_N': BN, "BLOCK_SIZE_K": BK, "GROUP_SIZE_M": GM}, num_warps=w, num_stages=s)
+    for BM in [32, 64, 128, 256]
+    for BN in [32, 64, 128, 256]
+    for BK in [32, 64, 128]
+    for GM in [8, 16]
+    for w in [8, 16]
+    for s in [2]
+]


-@triton.autotune(configs=get_autotune_config(), key=["M", "N", "K", "stride_bk"])
+@triton.autotune(configs=matmul_configs, key=["M", "N", "K", "stride_bk", "ACCUMULATOR_DTYPE"])
 @triton.jit
-def int_mm_kernel(
+def triton_mm_kernel(
     a_ptr, b_ptr, c_ptr,
-    M, N, K,
-    stride_am, stride_ak,
-    stride_bk, stride_bn,
-    stride_cm, stride_cn,
-    BLOCK_SIZE_M: tl.constexpr, BLOCK_SIZE_N: tl.constexpr, BLOCK_SIZE_K: tl.constexpr,
+    M: int, N: int, K: int,
+    stride_am: int, stride_ak: int,
+    stride_bk: int, stride_bn: int,
+    stride_cm: int, stride_cn: int,
+    ACCUMULATOR_DTYPE: tl.constexpr,
+    BLOCK_SIZE_M: tl.constexpr,
+    BLOCK_SIZE_N: tl.constexpr,
+    BLOCK_SIZE_K: tl.constexpr,
     GROUP_SIZE_M: tl.constexpr,
 ):
     pid = tl.program_id(axis=0)

@@ -90,11 +61,11 @@ def int_mm_kernel(
     a_ptrs = a_ptr + (offs_am[:, None] * stride_am + offs_k[None, :] * stride_ak)
     b_ptrs = b_ptr + (offs_k[:, None] * stride_bk + offs_bn[None, :] * stride_bn)

-    accumulator = tl.zeros((BLOCK_SIZE_M, BLOCK_SIZE_N), dtype=tl.int32)
+    accumulator = tl.zeros((BLOCK_SIZE_M, BLOCK_SIZE_N), dtype=ACCUMULATOR_DTYPE)
     for k in range(0, tl.cdiv(K, BLOCK_SIZE_K)):
         a = tl.load(a_ptrs, mask=offs_k[None, :] < K - k * BLOCK_SIZE_K, other=0.0)
         b = tl.load(b_ptrs, mask=offs_k[:, None] < K - k * BLOCK_SIZE_K, other=0.0)
-        accumulator = tl.dot(a, b, accumulator, out_dtype=tl.int32)
+        accumulator = tl.dot(a, b, accumulator, out_dtype=ACCUMULATOR_DTYPE)
         a_ptrs += BLOCK_SIZE_K * stride_ak
         b_ptrs += BLOCK_SIZE_K * stride_bk

@@ -105,7 +76,7 @@ def int_mm_kernel(
     tl.store(c_ptrs, accumulator, mask=c_mask)


-def int_mm(a, b):
+def int_mm(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
     assert a.shape[1] == b.shape[0], "Incompatible dimensions"
     assert a.is_contiguous(), "Matrix A must be contiguous"
     M, K = a.shape

@@ -113,68 +84,18 @@ def int_mm(a, b):
     c = torch.empty((M, N), device=a.device, dtype=torch.int32)
     def grid(META):
         return (triton.cdiv(M, META["BLOCK_SIZE_M"]) * triton.cdiv(N, META["BLOCK_SIZE_N"]), )
-    int_mm_kernel[grid](
+    triton_mm_kernel[grid](
         a, b, c,
         M, N, K,
         a.stride(0), a.stride(1),
         b.stride(0), b.stride(1),
         c.stride(0), c.stride(1),
+        tl.int32,
     )
     return c


-@triton.autotune(configs=get_autotune_config(), key=["M", "N", "K", "stride_bk"])
-@triton.jit
-def fp_mm_kernel(
-    a_ptr, b_ptr, c_ptr,
-    M, N, K,
-    stride_am, stride_ak,
-    stride_bk, stride_bn,
-    stride_cm, stride_cn,
-    BLOCK_SIZE_M: tl.constexpr, BLOCK_SIZE_N: tl.constexpr, BLOCK_SIZE_K: tl.constexpr,
-    GROUP_SIZE_M: tl.constexpr,
-):
-    pid = tl.program_id(axis=0)
-    num_pid_m = tl.cdiv(M, BLOCK_SIZE_M)
-    num_pid_n = tl.cdiv(N, BLOCK_SIZE_N)
-    num_pid_in_group = GROUP_SIZE_M * num_pid_n
-    group_id = pid // num_pid_in_group
-    first_pid_m = group_id * GROUP_SIZE_M
-    group_size_m = min(num_pid_m - first_pid_m, GROUP_SIZE_M)
-    pid_m = first_pid_m + ((pid % num_pid_in_group) % group_size_m)
-    pid_n = (pid % num_pid_in_group) // group_size_m
-
-    tl.assume(pid_m >= 0)
-    tl.assume(pid_n >= 0)
-    tl.assume(stride_am > 0)
-    tl.assume(stride_ak > 0)
-    tl.assume(stride_bn > 0)
-    tl.assume(stride_bk > 0)
-    tl.assume(stride_cm > 0)
-    tl.assume(stride_cn > 0)
-
-    offs_am = (pid_m * BLOCK_SIZE_M + tl.arange(0, BLOCK_SIZE_M)) % M
-    offs_bn = (pid_n * BLOCK_SIZE_N + tl.arange(0, BLOCK_SIZE_N)) % N
-    offs_k = tl.arange(0, BLOCK_SIZE_K)
-    a_ptrs = a_ptr + (offs_am[:, None] * stride_am + offs_k[None, :] * stride_ak)
-    b_ptrs = b_ptr + (offs_k[:, None] * stride_bk + offs_bn[None, :] * stride_bn)
-
-    accumulator = tl.zeros((BLOCK_SIZE_M, BLOCK_SIZE_N), dtype=tl.float32)
-    for k in range(0, tl.cdiv(K, BLOCK_SIZE_K)):
-        a = tl.load(a_ptrs, mask=offs_k[None, :] < K - k * BLOCK_SIZE_K, other=0.0)
-        b = tl.load(b_ptrs, mask=offs_k[:, None] < K - k * BLOCK_SIZE_K, other=0.0)
-        accumulator = tl.dot(a, b, accumulator, out_dtype=tl.float32)
-        a_ptrs += BLOCK_SIZE_K * stride_ak
-        b_ptrs += BLOCK_SIZE_K * stride_bk
-
-    offs_cm = pid_m * BLOCK_SIZE_M + tl.arange(0, BLOCK_SIZE_M)
-    offs_cn = pid_n * BLOCK_SIZE_N + tl.arange(0, BLOCK_SIZE_N)
-    c_ptrs = c_ptr + stride_cm * offs_cm[:, None] + stride_cn * offs_cn[None, :]
-    c_mask = (offs_cm[:, None] < M) & (offs_cn[None, :] < N)
-    tl.store(c_ptrs, accumulator, mask=c_mask)
-
-
-def fp_mm(a, b):
+def fp_mm(a: torch.FloatTensor, b: torch.FloatTensor) -> torch.FloatTensor:
     assert a.shape[1] == b.shape[0], "Incompatible dimensions"
     assert a.is_contiguous(), "Matrix A must be contiguous"
     M, K = a.shape

@@ -182,11 +103,12 @@ def fp_mm(a, b):
     c = torch.empty((M, N), device=a.device, dtype=torch.float32)
     def grid(META):
         return (triton.cdiv(M, META["BLOCK_SIZE_M"]) * triton.cdiv(N, META["BLOCK_SIZE_N"]), )
-    fp_mm_kernel[grid](
+    triton_mm_kernel[grid](
         a, b, c,
         M, N, K,
         a.stride(0), a.stride(1),
         b.stride(0), b.stride(1),
         c.stride(0), c.stride(1),
+        tl.float32,
    )
     return c
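The refactor above replaces two hand-written per-backend config lists with a single generated grid, parameterized over block sizes, group size, warps, and stages. The generation itself is plain Python and can be checked without a GPU; this sketch swaps `triton.Config` for a dict so it runs anywhere, using the same value ranges as the new code:

```python
from itertools import product

# Same grid as the new matmul_configs comprehension, minus the triton dependency.
matmul_configs = [
    {'BLOCK_SIZE_M': bm, 'BLOCK_SIZE_N': bn, 'BLOCK_SIZE_K': bk,
     'GROUP_SIZE_M': gm, 'num_warps': w, 'num_stages': s}
    for bm, bn, bk, gm, w, s in product(
        [32, 64, 128, 256],   # BLOCK_SIZE_M
        [32, 64, 128, 256],   # BLOCK_SIZE_N
        [32, 64, 128],        # BLOCK_SIZE_K
        [8, 16],              # GROUP_SIZE_M
        [8, 16],              # num_warps
        [2],                  # num_stages
    )
]

# 4 * 4 * 3 * 2 * 2 * 1 = 192 candidate configurations for the autotuner
assert len(matmul_configs) == 192
```

Generating the grid trades a longer first-call autotune sweep for coverage of shapes the curated lists never considered; adding `ACCUMULATOR_DTYPE` to the autotune key then lets the merged int/float kernel cache tuning results per dtype.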
@@ -153,9 +153,14 @@ def get_model(model_type = 'decoder', variant = None):
 def decode(latents):
     global first_run # pylint: disable=global-statement
     with lock:
-        vae, variant = get_model(model_type='decoder')
-        if vae is None or max(latents.shape) > 256: # safety check of large tensors
-            return latents
+        try:
+            vae, variant = get_model(model_type='decoder')
+            if vae is None or max(latents.shape) > 256: # safety check of large tensors
+                return latents
+        except Exception as e:
+            # from modules import errors
+            # errors.display(e, 'taesd')
+            return warn_once(f'load: {e}')
         try:
             with devices.inference_context():
                 t0 = time.time()
@ -128,30 +128,82 @@ try:
|
|||
],
|
||||
'LTX Video': [
|
||||
Model(name='None'),
|
||||
Model(name='LTXVideo 2 19B T2V Dev',
|
||||
|
||||
Model(name='LTXVideo 2.3 22B T2V',
|
||||
url='https://huggingface.co/Lightricks/LTX-2.3',
|
||||
repo='OzzyGT/LTX-2.3',
|
||||
repo_cls=getattr(diffusers, 'LTX2Pipeline', None),
|
||||
te_cls=getattr(transformers, 'Gemma3ForConditionalGeneration', None),
|
||||
dit_cls=getattr(diffusers, 'LTX2VideoTransformer3DModel', None)),
|
||||
Model(name='LTXVideo 2.3 22B I2V',
|
||||
url='https://huggingface.co/Lightricks/LTX-2.3',
|
||||
repo='OzzyGT/LTX-2.3',
|
||||
repo_cls=getattr(diffusers, 'LTX2Pipeline', None),
|
||||
te_cls=getattr(transformers, 'Gemma3ForConditionalGeneration', None),
|
||||
dit_cls=getattr(diffusers, 'LTX2VideoTransformer3DModel', None)),
|
||||
Model(name='LTXVideo 2.3 22B T2V Distilled',
|
||||
url='https://huggingface.co/Lightricks/LTX-2.3',
|
||||
repo='OzzyGT/LTX-2.3-Distilled',
|
||||
repo_cls=getattr(diffusers, 'LTX2Pipeline', None),
|
||||
te_cls=getattr(transformers, 'Gemma3ForConditionalGeneration', None),
|
||||
dit_cls=getattr(diffusers, 'LTX2VideoTransformer3DModel', None)),
|
||||
Model(name='LTXVideo 2.3 22B I2V Distilled',
|
||||
url='https://huggingface.co/Lightricks/LTX-2.3',
|
||||
repo='OzzyGT/LTX-2.3-Distilled',
|
||||
repo_cls=getattr(diffusers, 'LTX2Pipeline', None),
|
||||
te_cls=getattr(transformers, 'Gemma3ForConditionalGeneration', None),
|
||||
dit_cls=getattr(diffusers, 'LTX2VideoTransformer3DModel', None)),
|
||||
|
||||
Model(name='LTXVideo 2.3 22B T2V SDNQ-4Bit',
|
||||
+                url='https://huggingface.co/Lightricks/LTX-2.3',
+                repo='OzzyGT/LTX-2.3-sdnq-dynamic-int4',
+                repo_cls=getattr(diffusers, 'LTX2Pipeline', None),
+                te_cls=getattr(transformers, 'Gemma3ForConditionalGeneration', None),
+                dit_cls=getattr(diffusers, 'LTX2VideoTransformer3DModel', None)),
+        Model(name='LTXVideo 2.3 22B I2V SDNQ-4Bit',
+                url='https://huggingface.co/Lightricks/LTX-2.3',
+                repo='OzzyGT/LTX-2.3-sdnq-dynamic-int4',
+                repo_cls=getattr(diffusers, 'LTX2Pipeline', None),
+                te_cls=getattr(transformers, 'Gemma3ForConditionalGeneration', None),
+                dit_cls=getattr(diffusers, 'LTX2VideoTransformer3DModel', None)),
+        Model(name='LTXVideo 2.3 22B T2V Distilled SDNQ-4Bit',
+                url='https://huggingface.co/Lightricks/LTX-2.3',
+                repo='OzzyGT/LTX-2.3-Distilled-sdnq-dynamic-int4',
+                repo_cls=getattr(diffusers, 'LTX2Pipeline', None),
+                te_cls=getattr(transformers, 'Gemma3ForConditionalGeneration', None),
+                dit_cls=getattr(diffusers, 'LTX2VideoTransformer3DModel', None)),
+        Model(name='LTXVideo 2.3 22B I2V Distilled SDNQ-4Bit',
+                url='https://huggingface.co/Lightricks/LTX-2.3',
+                repo='OzzyGT/LTX-2.3-Distilled-sdnq-dynamic-int4',
+                repo_cls=getattr(diffusers, 'LTX2Pipeline', None),
+                te_cls=getattr(transformers, 'Gemma3ForConditionalGeneration', None),
+                dit_cls=getattr(diffusers, 'LTX2VideoTransformer3DModel', None)),

         Model(name='LTXVideo 2.0 19B T2V Dev',
                 url='https://huggingface.co/Lightricks/LTX-2',
                 repo='Lightricks/LTX-2',
                 repo_cls=getattr(diffusers, 'LTX2Pipeline', None),
                 te_cls=getattr(transformers, 'Gemma3ForConditionalGeneration', None),
                 dit_cls=getattr(diffusers, 'LTX2VideoTransformer3DModel', None)),
-        Model(name='LTXVideo 2 19B I2V Dev',
+        Model(name='LTXVideo 2.0 19B I2V Dev',
                 url='https://huggingface.co/Lightricks/LTX-2',
                 repo='Lightricks/LTX-2',
                 repo_cls=getattr(diffusers, 'LTX2ImageToVideoPipeline', None),
                 te_cls=getattr(transformers, 'Gemma3ForConditionalGeneration', None),
                 dit_cls=getattr(diffusers, 'LTX2VideoTransformer3DModel', None)),
-        Model(name='LTXVideo 2 19B T2V Dev SDNQ',
+        Model(name='LTXVideo 2.0 19B T2V Dev SDNQ-4Bit',
                 url='https://huggingface.co/Disty0/LTX-2-SDNQ-4bit-dynamic',
                 repo='Disty0/LTX-2-SDNQ-4bit-dynamic',
                 repo_cls=getattr(diffusers, 'LTX2Pipeline', None),
                 te_cls=getattr(transformers, 'Gemma3ForConditionalGeneration', None),
                 dit_cls=getattr(diffusers, 'LTX2VideoTransformer3DModel', None)),
-        Model(name='LTXVideo 2 19B I2V Dev SDNQ',
+        Model(name='LTXVideo 2.0 19B I2V Dev SDNQ-4Bit',
                 url='https://huggingface.co/Disty0/LTX-2-SDNQ-4bit-dynamic',
                 repo='Disty0/LTX-2-SDNQ-4bit-dynamic',
                 repo_cls=getattr(diffusers, 'LTX2ImageToVideoPipeline', None),
                 te_cls=getattr(transformers, 'Gemma3ForConditionalGeneration', None),
                 dit_cls=getattr(diffusers, 'LTX2VideoTransformer3DModel', None)),

         Model(name='LTXVideo 0.9.8 13B Distilled',
                 url='https://huggingface.co/Lightricks/LTX-Video-0.9.8-13B-distilled',
                 repo='Lightricks/LTX-Video-0.9.8-13B-distilled',
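The model entries above resolve pipeline and transformer classes with `getattr(diffusers, 'LTX2Pipeline', None)` so the registry stays importable even when the installed diffusers build predates a class. A minimal standalone sketch of that guard (the stub module and class names are illustrative, not part of the repo):

```python
import types

# Stand-in for a library module that may or may not expose a class,
# depending on the installed version (hypothetical names).
diffusers_stub = types.SimpleNamespace(LTX2Pipeline=object)

# Resolve the class if present, otherwise fall back to None so that
# building the model registry never raises AttributeError.
pipeline_cls = getattr(diffusers_stub, 'LTX2Pipeline', None)
missing_cls = getattr(diffusers_stub, 'NotYetReleasedPipeline', None)

print(pipeline_cls is not None)  # True
print(missing_cls)               # None
```

Downstream code can then skip or disable any entry whose resolved class is `None` instead of failing at import time.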
@@ -7,11 +7,13 @@ from pipelines import generic

 def load_nunchaku():
     import nunchaku
+    if not hasattr(nunchaku, 'NunchakuZImageTransformer2DModel'): # not present in older versions of nunchaku
+        return None
     nunchaku_precision = nunchaku.utils.get_precision()
     nunchaku_rank = 128
     nunchaku_repo = f"nunchaku-ai/nunchaku-z-image-turbo/svdq-{nunchaku_precision}_r{nunchaku_rank}-z-image-turbo.safetensors"
     log.debug(f'Load module: quant=Nunchaku module=transformer repo="{nunchaku_repo}" attention={shared.opts.nunchaku_attention}')
-    transformer = nunchaku.NunchakuZImageTransformer2DModel.from_pretrained(
+    transformer = nunchaku.NunchakuZImageTransformer2DModel.from_pretrained( # pylint: disable=no-member
         nunchaku_repo,
         torch_dtype=devices.dtype,
         cache_dir=shared.opts.hfcache_dir,

@@ -28,9 +30,10 @@ def load_z_image(checkpoint_info, diffusers_load_config=None):
     load_args, _quant_args = model_quant.get_dit_args(diffusers_load_config, allow_quant=False)
     log.debug(f'Load model: type=ZImage repo="{repo_id}" config={diffusers_load_config} offload={shared.opts.diffusers_offload_mode} dtype={devices.dtype} args={diffusers_load_config}')

+    transformer = None
     if model_quant.check_nunchaku('Model'): # only available model
         transformer = load_nunchaku()
-    else:
+    if transformer is None:
         transformer = generic.load_transformer(repo_id, cls_name=diffusers.ZImageTransformer2DModel, load_config=diffusers_load_config)

     text_encoder = generic.load_text_encoder(repo_id, cls_name=transformers.Qwen3ForCausalLM, load_config=diffusers_load_config)
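The hunk above combines two defensive moves: a `hasattr` check so that older nunchaku builds return `None` instead of raising, and a `transformer is None` fallback so that loading still succeeds when the quantized path declines. A self-contained sketch of that shape (the loader and attribute names here are invented for illustration):

```python
import types

# Hypothetical optional dependency; add/remove the attribute to
# simulate a newer/older installed version.
fake_lib = types.SimpleNamespace()

def load_quantized(lib):
    # Mirror of the hasattr guard: older builds lack the class, so
    # decline with None instead of raising AttributeError.
    if not hasattr(lib, 'QuantTransformer'):
        return None
    return lib.QuantTransformer()

def load_transformer():
    # Try the quantized path first, then fall back to a plain loader
    # whenever it declined (the `if transformer is None` pattern).
    transformer = load_quantized(fake_lib)
    if transformer is None:
        transformer = 'plain-transformer'  # stand-in for generic.load_transformer(...)
    return transformer

print(load_transformer())  # 'plain-transformer' (quantized path declined)
```

Replacing `else:` with `if transformer is None:` is what lets the quantized loader bail out gracefully rather than leaving `transformer` unset.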
pyproject.toml

@@ -14,7 +14,6 @@ exclude = [
     ".git",
     ".ruff_cache",
     ".vscode",
     "modules/cfgzero",
     "modules/flash_attn_triton_amd",
     "modules/hidiffusion",

@@ -24,19 +23,16 @@ exclude = [
     "modules/teacache",
     "modules/seedvr",
     "modules/sharpfin",
     "modules/control/proc",
     "modules/control/units",
     "modules/control/units/xs_pipe.py",
     "modules/postprocess/aurasr_arch.py",
     "pipelines/meissonic",
     "pipelines/omnigen2",
     "pipelines/hdm",
     "pipelines/segmoe",
     "pipelines/xomni",
     "pipelines/chrono",
     "scripts/lbm",
     "scripts/daam",
     "scripts/xadapter",

@@ -44,9 +40,6 @@ exclude = [
     "scripts/instantir",
     "scripts/freescale",
     "scripts/consistory",
     "repositories",
     "extensions-builtin/Lora",
     "extensions-builtin/sd-extension-chainner/nodes",
     "extensions-builtin/sd-webui-agent-scheduler",

@@ -55,53 +48,53 @@ exclude = [

 [tool.ruff.lint]
 select = [
     "F",
     "E",
     "W",
     "C",
     "B",
     "I",
     "YTT",
     "ASYNC",
     "RUF",
     "AIR",
     "NPY",
     "C4",
     "T10",
     "EXE",
     "ISC",
     "ICN",
     "RSE",
     "TC",
     "TID",
     "INT",
     "PLE",
 ]
 ignore = [
     "ASYNC240", # Async functions should not use os.path methods
     "B006", # Do not use mutable data structures for argument defaults
     "B008", # Do not perform function call in argument defaults
     "B905", # Strict zip() usage
     "C408", # Unnecessary `dict` call
     "C420", # Unnecessary dict comprehension for iterable; use `dict.fromkeys` instead
     "E402", # Module level import not at top of file
     "E501", # Line too long
     "E721", # Do not compare types, use `isinstance()`
     "E731", # Do not assign a `lambda` expression, use a `def`
     "E741", # Ambiguous variable name
     "EXE001", # file with shebang is not marked executable
     "F401", # Imported but unused
     "I001", # Import block is un-sorted or un-formatted
     "NPY002", # replace legacy random
     "RUF005", # Consider iterable unpacking
     "RUF008", # Do not use mutable default values for dataclass
     "RUF010", # Use explicit conversion flag
     "RUF012", # Mutable class attributes
     "RUF015", # Prefer `next(...)` over single element slice
     "RUF022", # All is not sorted
     "RUF046", # Value being cast to `int` is already an integer
     "RUF051", # Prefer pop over del
     "RUF059", # Unpacked variables are not used
 ]
 fixable = ["ALL"]
 unfixable = []

@@ -131,73 +124,75 @@ main.fail-on=""
 main.fail-under=10
 main.ignore="CVS"
 main.ignore-paths=[
     "venv",
     "node_modules",
     "__pycache__",
     ".git",
     ".ruff_cache",
     ".vscode",
     "modules/apg",
     "modules/cfgzero",
     "modules/control/proc",
     "modules/control/units",
     "modules/dml",
     "modules/face",
     "modules/flash_attn_triton_amd",
     "modules/ggml",
     "modules/hidiffusion",
     "modules/hijack/ddpm_edit.py",
     "modules/intel",
     "modules/intel/ipex",
     "modules/framepack/pipeline",
     "modules/onnx_impl",
     "modules/pag",
     "modules/postprocess/aurasr_arch.py",
     "modules/prompt_parser_xhinker.py",
     "modules/ras",
     "modules/seedvr",
     "modules/sharpfin",
     "modules/rife",
     "modules/schedulers",
     "modules/taesd",
     "modules/teacache",
     "modules/todo",
     "modules/res4lyf",
     "pipelines/bria",
     "pipelines/flex2",
     "pipelines/f_lite",
     "pipelines/hidream",
     "pipelines/hdm",
     "pipelines/meissonic",
     "pipelines/omnigen2",
     "pipelines/segmoe",
     "pipelines/xomni",
     "pipelines/chrono",
     "scripts/consistory",
     "scripts/ctrlx",
     "scripts/daam",
     "scripts/demofusion",
     "scripts/freescale",
     "scripts/infiniteyou",
     "scripts/instantir",
     "scripts/lbm",
     "scripts/layerdiffuse",
     "scripts/mod",
     "scripts/pixelsmith",
     "scripts/differential_diffusion.py",
     "scripts/pulid",
     "scripts/xadapter",
     "repositories",
     "extensions-builtin/sd-extension-chainner/nodes",
     "extensions-builtin/sd-webui-agent-scheduler",
     "extensions-builtin/sdnext-modernui/node_modules",
     "extensions-builtin/sdnext-kanvas/node_modules",
 ]
 main.ignore-patterns=[
     ".*test*.py$",
     ".*_model.py$",
     ".*_arch.py$",
     ".*_model_arch.py*",
     ".*_model_arch_v2.py$",
 ]
 main.ignored-modules=""
 main.jobs=4
 main.limit-inference-results=100

@@ -245,7 +240,6 @@ design.max-statements=200
 design.min-public-methods=1
 exceptions.overgeneral-exceptions=["builtins.BaseException","builtins.Exception"]
 format.expected-line-ending-format=""
 # format.ignore-long-lines="^\s*(# )?<?https?://\S+>?$"
 format.indent-after-paren=4
 format.indent-string='    '
 format.max-line-length=200

@@ -266,70 +260,79 @@ logging.logging-format-style="new"
 logging.logging-modules="logging"
 messages_control.confidence=["HIGH","CONTROL_FLOW","INFERENCE","INFERENCE_FAILURE","UNDEFINED"]
 messages_control.disable=[
     "abstract-method",
     "arguments-renamed",
     "bad-inline-option",
     "bare-except",
     "broad-exception-caught",
     "chained-comparison",
     "consider-iterating-dictionary",
     "consider-merging-isinstance",
     "consider-using-dict-items",
     "consider-using-enumerate",
     "consider-using-from-import",
     "consider-using-generator",
     "consider-using-get",
     "consider-using-in",
     "consider-using-max-builtin",
     "consider-using-min-builtin",
     "consider-using-sys-exit",
     "cyclic-import",
     "dangerous-default-value",
     "deprecated-pragma",
     "duplicate-code",
     "file-ignored",
     "import-error",
     "import-outside-toplevel",
     "invalid-name",
     "line-too-long",
     "locally-disabled",
     "logging-fstring-interpolation",
     "missing-class-docstring",
     "missing-function-docstring",
     "missing-module-docstring",
     "no-else-raise",
     "no-else-return",
     "not-callable",
     "pointless-string-statement",
     "raw-checker-failed",
     "simplifiable-if-expression",
     "subprocess-run-check",
     "suppressed-message",
     "too-few-public-methods",
     "too-many-instance-attributes",
     "too-many-locals",
     "too-many-nested-blocks",
     "too-many-positional-arguments",
     "too-many-statements",
     "unidiomatic-typecheck",
     "unknown-option-value",
     "unnecessary-dict-index-lookup",
     "unnecessary-dunder-call",
     "unnecessary-lambda-assigment",
     "unnecessary-lambda",
     "unpacking-non-sequence",
     "unsubscriptable-object",
     "unused-wildcard-import",
     "use-dict-literal",
     "use-maxsplit-arg",
     "use-symbolic-message-instead",
     "useless-return",
     "useless-suppression",
     "wrong-import-position",
 ]
 messages_control.enable="c-extension-no-member"
-method_args.timeout-methods=["requests.api.delete","requests.api.get","requests.api.head","requests.api.options","requests.api.patch","requests.api.post","requests.api.put","requests.api.request"]
+method_args.timeout-methods=[
+    "requests.api.delete",
+    "requests.api.get",
+    "requests.api.head",
+    "requests.api.options",
+    "requests.api.patch",
+    "requests.api.post",
+    "requests.api.put",
+    "requests.api.request"
+]
-miscellaneous.notes=["FIXME","XXX","TODO"]
+miscellaneous.notes=["FIXME","XXX","TODO","HERE"]
 miscellaneous.notes-rgx=""
 refactoring.max-nested-blocks=5
 refactoring.never-returning-functions=["sys.exit","argparse.parse_error"]

@@ -346,11 +349,11 @@ string.check-quote-consistency=false
 string.check-str-concat-over-line-jumps=false
 typecheck.contextmanager-decorators="contextlib.contextmanager"
 typecheck.generated-members=[
     "numpy.*",
     "logging.*",
     "torch.*",
     "cv2.*",
 ]
 typecheck.ignore-none=true
 typecheck.ignore-on-opaque-inference=true
 typecheck.ignored-checks-for-mixins=["no-member","not-async-context-manager","not-context-manager","attribute-defined-outside-init"]

@@ -375,23 +378,23 @@ pythonPlatform = "All"
 typeCheckingMode = "off"
 venvPath = "./venv"
 include = [
     "*.py",
     "modules/**/*.py",
     "pipelines/**/*.py",
     "scripts/**/*.py",
     "extensions-builtin/**/*.py"
 ]
 exclude = [
     "**/.*",
     ".git/",
     "**/node_modules",
     "**/__pycache__",
     "venv",
 ]
 extraPaths = [
     "scripts",
     "pipelines",
 ]
 reportMissingImports = "none"
 reportInvalidTypeForm = "none"

@@ -402,16 +405,16 @@ python-version = "3.10"

 [tool.ty.src]
 include = [
     "*.py",
     "modules/**/*.py",
     "pipelines/**/*.py",
     "scripts/**/*.py",
     "extensions-builtin/**/*.py"
 ]
 exclude = [
     "venv/",
     "*.git/",
 ]

 [tool.ty.rules]
 invalid-method-override = "ignore"
@@ -31,7 +31,7 @@ class I2IFolderScript(scripts_manager.Script):
         )
         with gr.Row():
             upscale_only = gr.Checkbox(
-                label="Upscale only (skip inference, apply post-resize directly to inputs)",
+                label="Upscale only (skip inference, apply post-resize)",
                 value=False,
                 elem_id=self.elem_id("upscale_only"),
             )

@@ -78,46 +78,46 @@ class I2IFolderScript(scripts_manager.Script):
             label="Denoising strength (0.0 = use panel)",
             elem_id=self.elem_id("strength_override"),
         )
         # with gr.Row():
         #     gr.HTML('<b>Pre-inference resize</b>')
         _upscaler_choices = [x.name for x in shared.sd_upscalers] or ["None"]
         # with gr.Row():
         #     pre_resize_enabled = gr.Checkbox(
         #         label="Enable pre-inference resize",
         #         value=False,
         #         elem_id=self.elem_id("pre_resize_enabled"),
         #     )
         # with gr.Row():
         #     pre_resize_mode = gr.Dropdown(
         #         label="Resize mode",
         #         choices=shared.resize_modes,
         #         type="index",
         #         value="None",
         #         elem_id=self.elem_id("pre_resize_mode"),
         #     )
         #     pre_resize_name = gr.Dropdown(
         #         label="Resize method",
         #         choices=_upscaler_choices,
         #         value=_upscaler_choices[0],
         #         elem_id=self.elem_id("pre_resize_name"),
         #     )
         # with gr.Row():
         #     pre_resize_scale = gr.Slider(
         #         minimum=0.25, maximum=4.0, step=0.05, value=1.0,
         #         label="Scale factor (ignored if width/height set)",
         #         elem_id=self.elem_id("pre_resize_scale"),
         #     )
         # with gr.Row():
         #     pre_resize_width = gr.Number(
         #         label="Width (0 = use scale factor)",
         #         value=0, precision=0,
         #         elem_id=self.elem_id("pre_resize_width"),
         #     )
         #     pre_resize_height = gr.Number(
         #         label="Height (0 = use scale factor)",
         #         value=0, precision=0,
         #         elem_id=self.elem_id("pre_resize_height"),
         #     )
         with gr.Row():
             gr.HTML('<b>Post-inference resize</b>')
         with gr.Row():

@@ -157,9 +157,9 @@ class I2IFolderScript(scripts_manager.Script):
             value=0, precision=0,
             elem_id=self.elem_id("resize_height"),
         )
-        return [folder, output_dir, upscale_only, prompt_override, negative_override, seed_override, steps_override, cfg_scale_override, sampler_override, strength_override, pre_resize_enabled, pre_resize_mode, pre_resize_name, pre_resize_scale, pre_resize_width, pre_resize_height, resize_enabled, resize_mode, resize_name, resize_scale, resize_width, resize_height]
+        return [folder, output_dir, upscale_only, prompt_override, negative_override, seed_override, steps_override, cfg_scale_override, sampler_override, strength_override, resize_enabled, resize_mode, resize_name, resize_scale, resize_width, resize_height]

-    def run(self, p, folder, output_dir, upscale_only, prompt_override, negative_override, seed_override, steps_override, cfg_scale_override, sampler_override, strength_override, pre_resize_enabled, pre_resize_mode, pre_resize_name, pre_resize_scale, pre_resize_width, pre_resize_height, resize_enabled, resize_mode, resize_name, resize_scale, resize_width, resize_height): # pylint: disable=arguments-differ
+    def run(self, p, folder, output_dir, upscale_only, prompt_override, negative_override, seed_override, steps_override, cfg_scale_override, sampler_override, strength_override, resize_enabled, resize_mode, resize_name, resize_scale, resize_width, resize_height): # pylint: disable=arguments-differ
         folder = (folder or "").strip()
         if not folder or not os.path.isdir(folder):
             log.error(f"Image folder batch: invalid or missing folder: {folder!r}")

@@ -172,6 +172,9 @@ class I2IFolderScript(scripts_manager.Script):

         out_dir = (output_dir or "").strip() or os.path.join(folder, "output")
         os.makedirs(out_dir, exist_ok=True)
+        # pre_resize_out_dir = os.path.join(out_dir, "pre-resize") if pre_resize_enabled else None
+        # if pre_resize_out_dir:
+        #     os.makedirs(pre_resize_out_dir, exist_ok=True)
         resize_out_dir = os.path.join(os.path.dirname(out_dir), "output-resized") if resize_enabled else None
         if resize_out_dir:
             os.makedirs(resize_out_dir, exist_ok=True)

@@ -221,14 +224,17 @@ class I2IFolderScript(scripts_manager.Script):
             cp.init_images = [img]
             cp.width = img.width
             cp.height = img.height
             # if pre_resize_enabled and pre_resize_mode != 0 and pre_resize_name not in ('None', '') and shared.sd_upscalers:
             #     pre_w = int(pre_resize_width) if int(pre_resize_width) > 0 else int(img.width * pre_resize_scale)
             #     pre_h = int(pre_resize_height) if int(pre_resize_height) > 0 else int(img.height * pre_resize_scale)
             #     img = images.resize_image(pre_resize_mode, img, pre_w, pre_h, pre_resize_name)
             #     cp.init_images = [img]
             #     cp.width = img.width
             #     cp.height = img.height
             #     log.info(f"Image folder batch: pre-resize to {img.size} mode={shared.resize_modes[pre_resize_mode]!r} method={pre_resize_name!r}")
             #     pre_out_path = os.path.join(pre_resize_out_dir, os.path.basename(filepath))
             #     img.save(pre_out_path)
             #     log.info(f"Image folder batch: pre-resize saved {pre_out_path!r}")
             if upscale_only:
                 log.info(f"Image folder batch: [{i + 1}/{len(files)}] upscale-only file={os.path.basename(filepath)} size={img.size}")
                 out_img = img
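The (now commented-out) pre-resize block above resolves target dimensions from either an explicit width/height or a scale factor, matching the "(0 = use scale factor)" labels in the UI. That resolution rule is easy to pin down standalone (the helper name is illustrative, not from the repo):

```python
def resolve_size(width, height, scale, img_w, img_h):
    # An explicit width/height wins; 0 means "derive this axis from the
    # scale factor", mirroring the "(0 = use scale factor)" UI labels.
    out_w = int(width) if int(width) > 0 else int(img_w * scale)
    out_h = int(height) if int(height) > 0 else int(img_h * scale)
    return out_w, out_h

print(resolve_size(0, 0, 2.0, 512, 768))     # scale only -> (1024, 1536)
print(resolve_size(1024, 0, 2.0, 512, 768))  # explicit width, scaled height -> (1024, 1536)
```

Because each axis is resolved independently, a user can fix one dimension and let the other follow the scale factor.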
@@ -6,7 +6,7 @@ import * as fs from 'node:fs';
 import * as process from 'node:process';

 const apiKey = process.env.GOOGLE_API_KEY;
-const model = 'gemini-3-flash-preview';
+const model = 'gemini-3.1-flash-lite-preview';
 const prompt = `## You are expert translator AI.
 Translate attached JSON from English to {language}.
 ## Translation Rules:

@@ -111,6 +111,7 @@ async function localize() {
     const t0 = performance.now();
     let allOk = true;
     for (const section of Object.keys(json)) {
+      if (!allOk) continue;
       const keys = Object.keys(json[section]).length;
       console.log(' start:', { locale, section, keys });
       try {

@@ -141,6 +142,8 @@ async function localize() {
     if (allOk) {
       const txt = JSON.stringify(output, null, 2);
       fs.writeFileSync(fn, txt);
+    } else {
+      console.error(' error: something went wrong, output file not saved');
     }
     const t3 = performance.now();
     console.log(' time:', { locale, time: Math.round(t3 - t0) / 1000 });
@@ -1,3 +1,5 @@
+#!/usr/bin/env node
+
 const fs = require('fs');

 /**
wiki

@@ -1 +1 @@
-Subproject commit 7abb07dc95bdb2c1869e2901213f5c82b46905c3
+Subproject commit d54ade8e5f79a62b5228de6406400c8eda71b67f