handle taesd init failures

Signed-off-by: vladmandic <mandic00@live.com>
pull/4716/head
vladmandic 2026-03-30 10:24:26 +02:00
parent 849ab8fe1e
commit a370bfc987
2 changed files with 12 additions and 6 deletions

```diff
@@ -1,8 +1,8 @@
 # Change Log for SD.Next
-## Update for 2026-03-28
+## Update for 2026-03-30
-### Highlights for 2026-03-28
+### Highlights for 2026-03-30
 This release brings massive code refactoring to modernize codebase and removal of some obsolete features. Leaner & Faster!
 And since its a bit quieter period when it comes to new models, notable additions would be : *FireRed-Image-Edit*, *SkyWorks-UniPic-3* and new versions of *Anima-Preview*, *Flux-Klein-KV*
@@ -20,7 +20,7 @@ Just how big? Some stats: *~530 commits over 880 files*
 [ReadMe](https://github.com/vladmandic/automatic/blob/master/README.md) | [ChangeLog](https://github.com/vladmandic/automatic/blob/master/CHANGELOG.md) | [Docs](https://vladmandic.github.io/sdnext-docs/) | [WiKi](https://github.com/vladmandic/automatic/wiki) | [Discord](https://discord.com/invite/sd-next-federal-batch-inspectors-1101998836328697867) | [Sponsor](https://github.com/sponsors/vladmandic)
-### Details for 2026-03-28
+### Details for 2026-03-30
 - **Models**
   - [Google Flash 3.1 Image](https://ai.google.dev/gemini-api/docs/models/gemini-3-flash-preview) a.k.a. *Nano Banana 2*
@@ -185,6 +185,7 @@ Just how big? Some stats: *~530 commits over 880 files*
   - add `lora` support for flux2-klein
   - fix `lora` change when used with `sdnq`
   - multiple `sdnq` fixes
+  - handle `taesd` init errors
 ## Update for 2026-02-04
```

```diff
@@ -153,9 +153,14 @@ def get_model(model_type = 'decoder', variant = None):
 def decode(latents):
     global first_run # pylint: disable=global-statement
     with lock:
-        vae, variant = get_model(model_type='decoder')
-        if vae is None or max(latents.shape) > 256: # safetey check of large tensors
-            return latents
+        try:
+            vae, variant = get_model(model_type='decoder')
+            if vae is None or max(latents.shape) > 256: # safetey check of large tensors
+                return latents
+        except Exception as e:
+            # from modules import errors
+            # errors.display(e, 'taesd"')
+            return warn_once(f'load: {e}')
         try:
             with devices.inference_context():
                 t0 = time.time()
```