regen all localizations

Signed-off-by: vladmandic <mandic00@live.com>
pull/4716/head
vladmandic 2026-04-01 10:29:08 +02:00
parent 75cc035354
commit 1310264d43
35 changed files with 103207 additions and 97599 deletions


@ -1,17 +1,19 @@
# Change Log for SD.Next
## Update for 2026-03-30
## Update for 2026-04-01
### Highlights for 2026-03-30
### Highlights for 2026-04-01
This release brings a massive code refactoring to modernize the codebase and removes some obsolete features. Leaner & faster!
And since it's a bit of a quieter period when it comes to new models, the notable additions are: *FireRed-Image-Edit*, *SkyWorks-UniPic-3* and new versions of the *Anima-Preview* and *Flux-Klein-KV* image models and the *LTX 2.3* video model
If you're on the Windows platform, we have a brand new [All-in-one Installer & Launcher](https://github.com/vladmandic/sdnext-launcher): simply download the [exe or zip](https://github.com/vladmandic/sdnext-launcher/releases) and you're done!
And we have a new (optional) React-based **UI** [Enso](https://github.com/CalamitousFelicitousness/enso)!
*What else*? Really a lot!
New color grading module, updated localization with new languages and improved translations, new civitai integration module, new finetunes loader, several new upscalers, improvements to LLM/VLM in captioning and prompt enhance, a lot of new control preprocessors, new realtime server info panel, some new UI themes
And major work on API hardening: security, rate limits, secrets handling, new endpoints, etc.
New color grading module, updated localization with new languages and improved translations, new CivitAI integration module, new finetunes loader, several new upscalers, improvements to LLM/VLM in captioning and prompt enhance, a lot of new control preprocessors, new realtime server info panel, some new UI themes
And major work on API hardening: *security, rate limits, secrets handling, new endpoints*, etc.
But also many smaller quality-of-life improvements - for full details, see [ChangeLog](https://github.com/vladmandic/automatic/blob/master/CHANGELOG.md)
*Note*: purely due to the size of the changes, a clean install is recommended!
@ -19,7 +21,7 @@ Just how big? Some stats: *~530 commits over 880 files*
[ReadMe](https://github.com/vladmandic/automatic/blob/master/README.md) | [ChangeLog](https://github.com/vladmandic/automatic/blob/master/CHANGELOG.md) | [Docs](https://vladmandic.github.io/sdnext-docs/) | [WiKi](https://github.com/vladmandic/automatic/wiki) | [Discord](https://discord.com/invite/sd-next-federal-batch-inspectors-1101998836328697867) | [Sponsor](https://github.com/sponsors/vladmandic)
### Details for 2026-03-30
### Details for 2026-04-01
- **Models**
- [Google Flash 3.1 Image](https://ai.google.dev/gemini-api/docs/models/gemini-3-flash-preview) a.k.a. *Nano Banana 2*
@ -83,10 +85,11 @@ Just how big? Some stats: *~530 commits over 880 files*
> `set TORCH_COMMAND='torch torchvision --index-url https://download.pytorch.org/whl/cu128'`
- update installer and support `nunchaku==1.2.1`
- **UI**
- **Enso** new React-based UI with WYSIWYG infinite canvas workspace, command palette, and numerous quality of life improvements across the board *(work-in-progress alpha)*.
enable using `--enso` flag, and use on `/enso` endpoint.
**Separate installation of SD.Next recommended**
see wiki page and Enso repo README for details.
- **Enso** a new React-based UI, developed by @CalamitousFelicitousness!
with a WYSIWYG infinite canvas workspace, command palette, and numerous quality-of-life improvements across the board
enable using the `--enso` flag and access using the `/enso` endpoint (e.g. <http://localhost:7860/enso>)
see [Enso Docs](https://vladmandic.github.io/sdnext-docs/Enso/) and [Enso Home](https://github.com/CalamitousFelicitousness/enso) for details
*note*: Enso is a work-in-progress alpha
- legacy panels **T2I** and **I2I** are disabled by default
you can re-enable them in *settings -> ui -> hide legacy tabs*
- new panel: **Server Info** with detailed runtime information
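A quick sketch of enabling the Enso UI described above; the `--enso` flag and `/enso` endpoint come from the changelog itself, while the launcher script name and port assume a default SD.Next checkout:

```shell
# launch SD.Next with the optional React-based Enso UI enabled
./webui.sh --enso
# once the server is up, open the Enso workspace in a browser:
# http://localhost:7860/enso   (7860 is the default SD.Next port)
```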
@ -109,15 +112,17 @@ Just how big? Some stats: *~530 commits over 880 files*
- updates to core sections: *Installation, Python, Schedulers, Launcher, SDNQ, Video*
- added Enso page
- **API**
- new **v2 API** (`/sdapi/v2/`): job-based generation with queue, per-job WebSocket progress, file uploads with TTL, model/network enumeration, and a plethora of other improvements *(work-in-progress)*
for the time being ships with Enso, which must be enabled wih `--enso` flag on startup for v2 API to be available.
- prototype **v2 API** (`/sdapi/v2/`)
job-based generation with queue, per-job WebSocket progress, file uploads with TTL, model/network enumeration
and a plethora of other improvements *(work-in-progress)*
for the time being it ships with Enso, which must be enabled with the `--enso` flag on startup for the v2 API to be available
- **rate limiting**: global for all endpoints, guards against abuse and denial-of-service-style attacks
configurable in *settings -> server settings*
- new `/sdapi/v1/upload` endpoint supporting both POST with form-data and PUT with raw bytes
- new `/sdapi/v1/torch` endpoint for torch info (backend, version, etc.)
- new `/sdapi/v1/gpu` endpoint for GPU info
- new `/sdapi/v1/rembg` endpoint for background removal
- new `/sdadpi/v1/unet` endpoint to list available unets/dits
- new `/sdapi/v1/unet` endpoint to list available unets/dits
- use rate limiting for API logging
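As a minimal illustration of the new upload endpoint listed above (POST with form-data or PUT with raw bytes), the sketch below builds both request shapes with only the standard library; the form field name `file`, the filename, and the content types are assumptions rather than confirmed API details, and neither request is actually sent:

```python
import urllib.request

BASE = "http://localhost:7860"  # default SD.Next server address

# PUT variant: the raw image bytes are the request body itself
raw = b"\x89PNG\r\n\x1a\n"  # placeholder bytes standing in for a real PNG
put_req = urllib.request.Request(
    f"{BASE}/sdapi/v1/upload",
    data=raw,
    method="PUT",
    headers={"Content-Type": "application/octet-stream"},
)

# POST variant: a minimal multipart/form-data body built by hand
boundary = "sdnext-upload"
body = (
    f"--{boundary}\r\n"
    'Content-Disposition: form-data; name="file"; filename="image.png"\r\n'
    "Content-Type: image/png\r\n\r\n"
).encode() + raw + f"\r\n--{boundary}--\r\n".encode()
post_req = urllib.request.Request(
    f"{BASE}/sdapi/v1/upload",
    data=body,
    method="POST",
    headers={"Content-Type": f"multipart/form-data; boundary={boundary}"},
)

# neither request is sent here; urllib.request.urlopen(put_req) would perform the call
print(put_req.get_method(), post_req.get_method())
```

Sending either request against a running server would of course still be subject to the new global rate limiting.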
- **Obsoleted**
- removed support for additional quantization engines: *BitsAndBytes, TorchAO, Optimum-Quanto, NNCF*

TODO.md

@ -1,12 +1,5 @@
# TODO
## Release
- Add notes: **Enso**
- Tips: **Color Grading**
- Regen: **Localization**
- Rebuild: **Launcher** with `master`
## Internal
- Feature: implement `unload_auxiliary_models`
@ -131,21 +124,25 @@ TODO: Investigate which models are diffusers-compatible and prioritize!
> npm run todo
- fc: autodetect distilled based on model
- fc: autodetect tensor format based on model
- hypertile: vae breaks when using non-standard sizes
- install: switch to pytorch source when it becomes available
- loader: load receipe
- loader: save receipe
- lora: add other quantization types
- lora: add t5 key support for sd35/f1
- lora: maybe force imediate quantization
- model load: force-reloading entire model as loading transformers only leads to massive memory usage
- model load: implement model in-memory caching
- modernui: monkey-patch for missing tabs.select event
- modules/lora/lora_extract.py:188:9: W0511: TODO: lora: support pre-quantized flux
- modules/modular_guiders.py:65:58: W0511: TODO: guiders
- processing: remove duplicate mask params
- resize image: enable full VAE mode for resize-latent
modules/sd_samplers_diffusers.py:353:31: W0511: TODO enso-required (fixme)
```code
installer.py:TODO rocm: switch to pytorch source when it becomes available
modules/control/run.py:TODO modernui: monkey-patch for missing tabs.select event
modules/history.py:TODO: apply metadata, preview, load/save
modules/image/resize.py:TODO resize image: enable full VAE mode for resize-latent
modules/lora/lora_apply.py:TODO lora: add other quantization types
modules/lora/lora_apply.py:TODO lora: maybe force imediate quantization
modules/lora/lora_extract.py:TODO: lora: support pre-quantized flux
modules/lora/lora_load.py:TODO lora: add t5 key support for sd35/f1
modules/masking.py:TODO: additional masking algorithms
modules/modular_guiders.py:TODO: guiders
modules/processing_class.py:TODO processing: remove duplicate mask params
modules/sd_hijack_hypertile.py:TODO hypertile: vae breaks when using non-standard sizes
modules/sd_models.py:TODO model load: implement model in-memory caching
modules/sd_samplers_diffusers.py:TODO enso-required
modules/sd_unet.py:TODO model load: force-reloading entire model as loading transformers only leads to massive memory usage
modules/transformer_cache.py:TODO fc: autodetect distilled based on model
modules/transformer_cache.py:TODO fc: autodetect tensor format based on model
modules/ui_models_load.py:TODO loader: load receipe
modules/ui_models_load.py:TODO loader: save receipe
modules/video_models/video_save.py:TODO audio set time-base
```

25 file diffs suppressed because they are too large


@ -78,7 +78,7 @@ const anyPromptExists = () => gradioApp().querySelectorAll('.main-prompts').leng
function scheduleAfterUiUpdateCallbacks() {
clearTimeout(uiAfterUpdateTimeout);
uiAfterUpdateTimeout = setTimeout(() => executeCallbacks(uiAfterUpdateCallbacks, 500));
uiAfterUpdateTimeout = setTimeout(() => executeCallbacks(uiAfterUpdateCallbacks), 250);
}
let executedOnLoaded = false;


@ -205,7 +205,6 @@ def batch(
Returns:
Combined tag results
"""
import os
import time
from pathlib import Path
import rich.progress as rp


@ -12,7 +12,7 @@ class GoogleGeminiPipeline():
self.model = model_name.split(' (')[0]
from installer import install
install('google-genai==1.52.0')
from google import genai
from google import genai # pylint: disable=no-name-in-module
args = self.get_args()
self.client = genai.Client(**args)
log.debug(f'Load model: type=GoogleGemini model="{self.model}"')
@ -54,7 +54,7 @@ class GoogleGeminiPipeline():
return args
def __call__(self, question, image, model, instructions, prefill, thinking, kwargs):
from google.genai import types
from google.genai import types # pylint: disable=no-name-in-module
config = {
'system_instruction': instructions or shared.opts.caption_vlm_system,
'thinking_config': types.ThinkingConfig(thinking_level="high" if thinking else "low")


@ -419,7 +419,6 @@ def batch(
Returns:
Combined tag results
"""
import os
from pathlib import Path
# Load model


@ -14,7 +14,6 @@ exclude = [
".git",
".ruff_cache",
".vscode",
"modules/cfgzero",
"modules/flash_attn_triton_amd",
"modules/hidiffusion",
@ -24,19 +23,16 @@ exclude = [
"modules/teacache",
"modules/seedvr",
"modules/sharpfin",
"modules/control/proc",
"modules/control/units",
"modules/control/units/xs_pipe.py",
"modules/postprocess/aurasr_arch.py",
"pipelines/meissonic",
"pipelines/omnigen2",
"pipelines/hdm",
"pipelines/segmoe",
"pipelines/xomni",
"pipelines/chrono",
"scripts/lbm",
"scripts/daam",
"scripts/xadapter",
@ -44,9 +40,6 @@ exclude = [
"scripts/instantir",
"scripts/freescale",
"scripts/consistory",
"repositories",
"extensions-builtin/Lora",
"extensions-builtin/sd-extension-chainner/nodes",
"extensions-builtin/sd-webui-agent-scheduler",
@ -78,20 +71,20 @@ select = [
"PLE",
]
ignore = [
"ASYNC240", # Async functions should not use os.path methods
"B006", # Do not use mutable data structures for argument defaults
"B008", # Do not perform function call in argument defaults
"B905", # Strict zip() usage
"ASYNC240", # Async functions should not use os.path methods
"C420", # Unnecessary dict comprehension for iterable; use `dict.fromkeys` instead
"C408", # Unnecessary `dict` call
"I001", # Import block is un-sorted or un-formatted
"C420", # Unnecessary dict comprehension for iterable; use `dict.fromkeys` instead
"E402", # Module level import not at top of file
"E501", # Line too long
"E721", # Do not compare types, use `isinstance()`
"E731", # Do not assign a `lambda` expression, use a `def`
"E741", # Ambiguous variable name
"F401", # Imported by unused
"EXE001", # file with shebang is not marked executable
"F401", # Imported by unused
"I001", # Import block is un-sorted or un-formatted
"NPY002", # replace legacy random
"RUF005", # Consider iterable unpacking
"RUF008", # Do not use mutable default values for dataclass
@ -100,8 +93,8 @@ ignore = [
"RUF015", # Prefer `next(...)` over single element slice
"RUF022", # All is not sorted
"RUF046", # Value being cast to `int` is already an integer
"RUF059", # Unpacked variables are not used
"RUF051", # Prefer pop over del
"RUF059", # Unpacked variables are not used
]
fixable = ["ALL"]
unfixable = []
@ -132,6 +125,8 @@ main.fail-under=10
main.ignore="CVS"
main.ignore-paths=[
"venv",
"node_modules",
"__pycache__",
".git",
".ruff_cache",
".vscode",
@ -190,14 +185,14 @@ main.ignore-paths=[
"extensions-builtin/sd-webui-agent-scheduler",
"extensions-builtin/sdnext-modernui/node_modules",
"extensions-builtin/sdnext-kanvas/node_modules",
]
]
main.ignore-patterns=[
".*test*.py$",
".*_model.py$",
".*_arch.py$",
".*_model_arch.py*",
".*_model_arch_v2.py$",
]
]
main.ignored-modules=""
main.jobs=4
main.limit-inference-results=100
@ -245,7 +240,6 @@ design.max-statements=200
design.min-public-methods=1
exceptions.overgeneral-exceptions=["builtins.BaseException","builtins.Exception"]
format.expected-line-ending-format=""
# format.ignore-long-lines="^\s*(# )?<?https?://\S+>?$"
format.indent-after-paren=4
format.indent-string=' '
format.max-line-length=200
@ -303,8 +297,8 @@ messages_control.disable=[
"pointless-string-statement",
"raw-checker-failed",
"simplifiable-if-expression",
"suppressed-message",
"subprocess-run-check",
"suppressed-message",
"too-few-public-methods",
"too-many-instance-attributes",
"too-many-locals",
@ -317,19 +311,28 @@ messages_control.disable=[
"unnecessary-dunder-call",
"unnecessary-lambda-assigment",
"unnecessary-lambda",
"unused-wildcard-import",
"unpacking-non-sequence",
"unsubscriptable-object",
"useless-return",
"use-maxsplit-arg",
"unused-wildcard-import",
"use-dict-literal",
"use-maxsplit-arg",
"use-symbolic-message-instead",
"useless-return",
"useless-suppression",
"wrong-import-position",
]
]
messages_control.enable="c-extension-no-member"
method_args.timeout-methods=["requests.api.delete","requests.api.get","requests.api.head","requests.api.options","requests.api.patch","requests.api.post","requests.api.put","requests.api.request"]
miscellaneous.notes=["FIXME","XXX","TODO"]
method_args.timeout-methods=[
"requests.api.delete",
"requests.api.get",
"requests.api.head",
"requests.api.options",
"requests.api.patch",
"requests.api.post",
"requests.api.put",
"requests.api.request"
]
miscellaneous.notes=["FIXME","XXX","TODO","HERE"]
miscellaneous.notes-rgx=""
refactoring.max-nested-blocks=5
refactoring.never-returning-functions=["sys.exit","argparse.parse_error"]
@ -350,7 +353,7 @@ typecheck.generated-members=[
"logging.*",
"torch.*",
"cv2.*",
]
]
typecheck.ignore-none=true
typecheck.ignore-on-opaque-inference=true
typecheck.ignored-checks-for-mixins=["no-member","not-async-context-manager","not-context-manager","attribute-defined-outside-init"]
@ -380,18 +383,18 @@ include = [
"pipelines/**/*.py",
"scripts/**/*.py",
"extensions-builtin/**/*.py"
]
]
exclude = [
"**/.*",
".git/",
"**/node_modules",
"**/__pycache__",
"venv",
]
]
extraPaths = [
"scripts",
"pipelines",
]
]
reportMissingImports = "none"
reportInvalidTypeForm = "none"
@ -407,11 +410,11 @@ include = [
"pipelines/**/*.py",
"scripts/**/*.py",
"extensions-builtin/**/*.py"
]
]
exclude = [
"venv/",
"*.git/",
]
]
[tool.ty.rules]
invalid-method-override = "ignore"

test/localize.mjs Normal file → Executable file

@ -6,7 +6,7 @@ import * as fs from 'node:fs';
import * as process from 'node:process';
const apiKey = process.env.GOOGLE_API_KEY;
const model = 'gemini-3-flash-preview';
const model = 'gemini-3.1-flash-lite-preview';
const prompt = `## You are expert translator AI.
Translate attached JSON from English to {language}.
## Translation Rules:
@ -111,6 +111,7 @@ async function localize() {
const t0 = performance.now();
let allOk = true;
for (const section of Object.keys(json)) {
if (!allOk) continue;
const keys = Object.keys(json[section]).length;
console.log(' start:', { locale, section, keys });
try {
@ -141,6 +142,8 @@ async function localize() {
if (allOk) {
const txt = JSON.stringify(output, null, 2);
fs.writeFileSync(fn, txt);
} else {
console.error(' error: something went wrong, output file not saved');
}
const t3 = performance.now();
console.log(' time:', { locale, time: Math.round(t3 - t0) / 1000 });

test/reformat.js → test/reformat-json.js Normal file → Executable file

@ -1,3 +1,5 @@
#!/usr/bin/env node
const fs = require('fs');
/**

wiki

@ -1 +1 @@
Subproject commit d2ecbe713e25e2a8da3e2f7d6794c82b85e8a0fe
Subproject commit d54ade8e5f79a62b5228de6406400c8eda71b67f