diff --git a/CHANGELOG.md b/CHANGELOG.md
index 7cd97c48a..3e4e48f34 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,6 +1,6 @@
# Change Log for SD.Next
-## Update for 2025-01-10
+## Update for 2025-01-12
- **Models**
- [Qwen-Image-2512](https://huggingface.co/Qwen/Qwen-Image-2512)
@@ -14,9 +14,13 @@
- **Features**
  - **SDNQ** now has *19 int* based and *69 float* based quantization types
    *note*: not all are exposed via the UI, purely for simplicity, but all are available via the API and scripts
- - allow weights for wildcards, thanks @Tillerz
- - sampler: add laplace beta schedule
+ - **wildcards**: allow weights, thanks @Tillerz
+ - **sampler**: add laplace beta schedule
results in better prompt adherence and smoother infills
+ - **prompt enhance**: improve handling and refresh ui, thanks @CalamitousFelicitousness
+    new models such as moondream-3 and xiaomo-mimo
+    add support for *thinking* mode where the model can reason about the prompt
+    add support for *vision* processing where prompt enhance can optionally analyze the input image
- **UI**
- kanvas add send-to functionality
- kanvas improve support for standardui
@@ -52,6 +56,7 @@
- lora loading when using torch without distributed support
- google-genai auth, thanks @CalamitousFelicitousness
- reduce triton test verbosity
+ - improve qwen i2i handling
## Update for 2025-12-26
diff --git a/extensions-builtin/sdnext-modernui b/extensions-builtin/sdnext-modernui
index dff67ef8f..d83de1303 160000
--- a/extensions-builtin/sdnext-modernui
+++ b/extensions-builtin/sdnext-modernui
@@ -1 +1 @@
-Subproject commit dff67ef8f40a300627b2f20d16484c65d1d43202
+Subproject commit d83de1303abc930d6a2e8570e6c929a0139510f1
diff --git a/html/locale_en.json b/html/locale_en.json
index 7a883c2aa..5afe2c440 100644
--- a/html/locale_en.json
+++ b/html/locale_en.json
@@ -1224,7 +1224,7 @@
{"id":"","label":"tdd","localized":"","reload":"","hint":"tdd"},
{"id":"","label":"te","localized":"","reload":"","hint":"te"},
{"id":"","label":"temperature","localized":"","reload":"","hint":"Controls randomness in token selection by reshaping the probability distribution.
Like adjusting a dial between cautious predictability (low values ~0.4) and creative exploration (higher values ~1). Higher temperatures increase willingness to choose less obvious options, but make outputs more unpredictable.
Set to 0 to disable, resulting in a silent switch to greedy decoding, which disables sampling."},
- {"id":"","label":"Thinking mode","localized":"","reload":"","hint":"Enables thinking/reasoning, allowing the model to take more time to generate responses.
This can lead to more thoughtful and detailed answers, but will increase response time.
This setting affects both hybrid and thinking-only models, and in some may result in lower overall quality than expected. For thinking-only models like Qwen3-VL this setting might have to be combined with prefill to guarantee preventing thinking.
Models supporting this feature are marked with an \uf0eb icon."},
+ {"id":"","label":"Thinking Mode","localized":"","reload":"","hint":"Enables thinking/reasoning, allowing the model to take more time to generate responses.
This can lead to more thoughtful and detailed answers, but will increase response time.
This setting affects both hybrid and thinking-only models, and in some cases may result in lower overall quality than expected. For thinking-only models such as Qwen3-VL, this setting may need to be combined with a prefill to reliably prevent thinking.
Models supporting this feature are marked with an \uf0eb icon."},
{"id":"","label":"Repetition penalty","localized":"","reload":"","hint":"Discourages reusing tokens that already appear in the prompt or output by penalizing their probabilities.
Like adding friction to revisiting previous choices. Helps break repetitive loops but may reduce coherence at aggressive values.
Set to 1 to disable."},
{"id":"","label":"text guidance scale","localized":"","reload":"","hint":"text guidance scale"},
{"id":"","label":"template","localized":"","reload":"","hint":"template"},
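The temperature and repetition-penalty hints in the locale strings above describe standard logit-reshaping steps used during token sampling. A minimal self-contained sketch of both (the function names and the penalty convention are illustrative assumptions, not SD.Next internals):

```python
import math
import random

def apply_repetition_penalty(logits, seen_ids, penalty=1.2):
    # Discourage tokens that already appeared: shrink positive logits,
    # push negative ones further down. penalty=1.0 is a no-op, as the hint says.
    out = list(logits)
    for i in set(seen_ids):
        out[i] = out[i] / penalty if out[i] > 0 else out[i] * penalty
    return out

def sample_token(logits, temperature=1.0, rng=None):
    # temperature == 0 silently switches to greedy decoding (argmax),
    # matching the behavior described in the temperature hint.
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    rng = rng or random.Random(0)
    # Reshape the distribution: lower temperature sharpens it,
    # higher temperature flattens it toward less obvious choices.
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Inverse-CDF sampling over the reshaped distribution.
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1
```

This is only a sketch of the generic technique the UI hints refer to; real inference stacks apply these transforms on logit tensors inside the generation loop.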
diff --git a/installer.py b/installer.py
index ff7fc88cf..a02a5aba3 100644
--- a/installer.py
+++ b/installer.py
@@ -648,7 +648,7 @@ def check_diffusers():
t_start = time.time()
if args.skip_all:
return
- sha = 'ed6e5ecf67ac762e878d3d6e25bda4e77b0c820a' # diffusers commit hash
+ sha = 'dad5cb55e6ade24fb397525eb023ad4eba37019d' # diffusers commit hash
# if args.use_rocm or args.use_zluda or args.use_directml:
# sha = '043ab2520f6a19fce78e6e060a68dbc947edb9f9' # lock diffusers versions for now
pkg = pkg_resources.working_set.by_key.get('diffusers', None)
diff --git a/modules/ltx/ltx_process.py b/modules/ltx/ltx_process.py
index cca7e357e..9e97fa2e4 100644
--- a/modules/ltx/ltx_process.py
+++ b/modules/ltx/ltx_process.py
@@ -140,7 +140,7 @@ def run_ltx(task_id,
"output_type": "latent",
}
if 'Condition' in shared.sd_model.__class__.__name__:
- base_args["image_cond_noise_scale"] = image_cond_noise_scale
+ base_args["image_cond_noise_scale"] = image_cond_noise_scale
shared.log.debug(f'Video: cls={shared.sd_model.__class__.__name__} op=base {base_args}')
if len(conditions) > 0:
base_args["conditions"] = conditions