mirror of https://github.com/vladmandic/automatic

lint and update diffusers

Signed-off-by: vladmandic <mandic00@live.com>

branch: pull/4538/head
parent: 1f92d7c24c
commit: e4e863fd6d

CHANGELOG.md (11 changed lines)
@@ -1,6 +1,6 @@
 # Change Log for SD.Next

-## Update for 2025-01-10
+## Update for 2025-01-12

 - **Models**
   - [Qwen-Image-2512](https://huggingface.co/Qwen/Qwen-Image-2512)
@@ -14,9 +14,13 @@
 - **Features**
   - **SDNQ** now has *19 int* based and *69 float* based quantization types
     *note*: not all are exposed via ui purely for simplicity, but all are available via api and scripts
-  - allow weights for wildcards, thanks @Tillerz
-  - sampler: add laplace beta schedule
+  - **wildcards**: allow weights, thanks @Tillerz
+  - **sampler**: add laplace beta schedule
     results in better prompt adherence and smoother infills
+  - **prompt enhance**: improve handling and refresh ui, thanks @CalamitousFelicitousness
+    new models such as moondream-3 and xiaomo-mimo
+    add support for *thinking* mode where model can reason about the prompt
+    add support for *vision* processing where prompt enhance can also optionally analyze input image
 - **UI**
   - kanvas add send-to functionality
   - kanvas improve support for standardui
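The "laplace beta schedule" entry above refers to spacing sampler noise levels so they concentrate according to a Laplace distribution rather than linearly. A minimal sketch of one way such a schedule can be computed; the function name, parameter names, and default values are illustrative assumptions, not SD.Next's actual implementation:

```python
import math

def laplace_sigmas(n: int, mu: float = 0.0, beta: float = 0.5,
                   sigma_min: float = 0.03, sigma_max: float = 14.6) -> list[float]:
    """Return n noise levels whose log-sigmas follow a Laplace(mu, beta)
    distribution, clipped to [sigma_min, sigma_max] and sorted descending."""
    sigmas = []
    for i in range(n):
        u = (i + 0.5) / n  # evenly spaced quantiles in (0, 1)
        # inverse CDF of the Laplace distribution at quantile u
        q = mu - beta * math.copysign(1.0, u - 0.5) * math.log(1.0 - 2.0 * abs(u - 0.5))
        sigmas.append(min(max(math.exp(q), sigma_min), sigma_max))
    return sorted(sigmas, reverse=True)  # samplers expect descending noise
```

Because the Laplace inverse CDF is steep near its tails and flat near the median, this clusters more steps around the mid-range noise levels where most image structure is decided.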
@@ -52,6 +56,7 @@
   - lora loading when using torch without distributed support
   - google-genai auth, thanks @CalamitousFelicitousness
   - reduce triton test verbosity
+  - improve qwen i2i handling

 ## Update for 2025-12-26
@@ -1 +1 @@
-Subproject commit dff67ef8f40a300627b2f20d16484c65d1d43202
+Subproject commit d83de1303abc930d6a2e8570e6c929a0139510f1
@@ -1224,7 +1224,7 @@
 {"id":"","label":"tdd","localized":"","reload":"","hint":"tdd"},
 {"id":"","label":"te","localized":"","reload":"","hint":"te"},
 {"id":"","label":"temperature","localized":"","reload":"","hint":"Controls randomness in token selection by reshaping the probability distribution.<br>Like adjusting a dial between cautious predictability (low values ~0.4) and creative exploration (higher values ~1). Higher temperatures increase willingness to choose less obvious options, but makes outputs more unpredictable.<br><br>Set to 0 to disable, resulting in silent switch to greedy decoding, disabling sampling."},
-{"id":"","label":"Thinking mode","localized":"","reload":"","hint":"Enables thinking/reasoning, allowing the model to take more time to generate responses.<br>This can lead to more thoughtful and detailed answers, but will increase response time.<br>This setting affects both hybrid and thinking-only models, and in some may result in lower overall quality than expected. For thinking-only models like Qwen3-VL this setting might have to be combined with prefill to guarantee preventing thinking.<br><br>Models supporting this feature are marked with an \uf0eb icon."},
+{"id":"","label":"Thinking Mode","localized":"","reload":"","hint":"Enables thinking/reasoning, allowing the model to take more time to generate responses.<br>This can lead to more thoughtful and detailed answers, but will increase response time.<br>This setting affects both hybrid and thinking-only models, and in some may result in lower overall quality than expected. For thinking-only models like Qwen3-VL this setting might have to be combined with prefill to guarantee preventing thinking.<br><br>Models supporting this feature are marked with an \uf0eb icon."},
 {"id":"","label":"Repetition penalty","localized":"","reload":"","hint":"Discourages reusing tokens that already appear in the prompt or output by penalizing their probabilities.<br>Like adding friction to revisiting previous choices. Helps break repetitive loops but may reduce coherence at aggressive values.<br><br>Set to 1 to disable."},
 {"id":"","label":"text guidance scale","localized":"","reload":"","hint":"text guidance scale"},
 {"id":"","label":"template","localized":"","reload":"","hint":"template"},
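The temperature and repetition-penalty hints in the locale hunk above describe two standard text-generation controls. A self-contained sketch of the behaviour the hints describe (CTRL-style penalty, softmax sampling, temperature 0 silently falling back to greedy decoding); token/dict representation is simplified for illustration and is not SD.Next's code:

```python
import math
import random

def apply_repetition_penalty(logits: dict[str, float], seen: set[str],
                             penalty: float = 1.2) -> dict[str, float]:
    """Divide positive logits (multiply negative ones) of already-seen tokens;
    penalty == 1 leaves logits untouched, i.e. the feature is disabled."""
    out = dict(logits)
    for tok in seen:
        if tok in out:
            out[tok] = out[tok] / penalty if out[tok] > 0 else out[tok] * penalty
    return out

def sample_token(logits: dict[str, float], temperature: float) -> str:
    """Temperature == 0 switches to greedy decoding (argmax), as the hint says."""
    if temperature == 0:
        return max(logits, key=logits.get)
    # reshape the distribution: scale logits by 1/temperature, then softmax-sample
    m = max(l / temperature for l in logits.values())
    weights = {tok: math.exp(l / temperature - m) for tok, l in logits.items()}
    r = random.random() * sum(weights.values())
    for tok, w in weights.items():
        r -= w
        if r <= 0:
            return tok
    return tok  # guard against floating-point rounding
```

Low temperature sharpens the distribution toward the argmax token; high temperature flattens it, which is exactly the predictability-vs-creativity dial the hint describes.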
@@ -648,7 +648,7 @@ def check_diffusers():
     t_start = time.time()
     if args.skip_all:
         return
-    sha = 'ed6e5ecf67ac762e878d3d6e25bda4e77b0c820a' # diffusers commit hash
+    sha = 'dad5cb55e6ade24fb397525eb023ad4eba37019d' # diffusers commit hash
     # if args.use_rocm or args.use_zluda or args.use_directml:
     #     sha = '043ab2520f6a19fce78e6e060a68dbc947edb9f9' # lock diffusers versions for now
     pkg = pkg_resources.working_set.by_key.get('diffusers', None)
@@ -140,7 +140,7 @@ def run_ltx(task_id,
         "output_type": "latent",
     }
     if 'Condition' in shared.sd_model.__class__.__name__:
         base_args["image_cond_noise_scale"] = image_cond_noise_scale
     shared.log.debug(f'Video: cls={shared.sd_model.__class__.__name__} op=base {base_args}')
     if len(conditions) > 0:
         base_args["conditions"] = conditions
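The `run_ltx` hunk above shows the common pattern of assembling pipeline keyword arguments conditionally, adding condition-related keys only when the loaded model class supports them. A standalone sketch of that pattern; the function name and signature are hypothetical, extracted here for illustration:

```python
def build_video_args(model_cls_name: str, conditions: list,
                     image_cond_noise_scale: float) -> dict:
    """Build pipeline kwargs, including condition-specific keys only for
    model classes whose name indicates conditioning support."""
    base_args = {"output_type": "latent"}
    if 'Condition' in model_cls_name:
        base_args["image_cond_noise_scale"] = image_cond_noise_scale
    if len(conditions) > 0:
        base_args["conditions"] = conditions
    return base_args
```

Keeping unsupported keys out of the dict matters because diffusers pipelines typically raise a `TypeError` on unexpected keyword arguments.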