Commit Graph

20 Commits (749d5b4e8d4308c67fee6faa4ef4dfbde23087f7)

Author SHA1 Message Date
rattus 535c16ce6e
Widen OOM_EXCEPTION to AcceleratorError form (#12835)
PyTorch only filters for OOMs in its own allocators; however, there are
paths that can OOM on allocators created outside the PyTorch allocators.
These manifest as an AllocatorError because PyTorch does not have universal
error translation to its OOM type on exception. Handle it. A log I have
for this also shows a double report of the error async, so call the
async discarder to clean up and make these OOMs look like OOMs.
2026-03-10 00:41:02 -04:00
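The pattern this commit describes, treating a broader set of allocator errors as out-of-memory conditions rather than matching only the framework's native OOM type, can be sketched as follows. This is an illustrative sketch only: the class names `FrameworkOOM`, `ExternalAllocatorError`, and the helper `run_with_oom_fallback` are stand-ins invented here, not the actual PyTorch or repository identifiers.

```python
# Illustrative sketch of widening OOM handling beyond a framework's own
# OOM exception type. All names below are hypothetical stand-ins.

class FrameworkOOM(RuntimeError):
    """Stand-in for a framework's native out-of-memory exception."""

class ExternalAllocatorError(RuntimeError):
    """Stand-in for an OOM raised by an allocator outside the framework."""

# Widened tuple: catch both the native OOM type and the external form.
OOM_EXCEPTIONS = (FrameworkOOM, ExternalAllocatorError)

def run_with_oom_fallback(op, fallback):
    """Run op(); if any recognized OOM type is raised, run fallback() instead."""
    try:
        return op()
    except OOM_EXCEPTIONS:
        return fallback()
```

With only `FrameworkOOM` in the tuple, an `ExternalAllocatorError` would propagate uncaught; widening the tuple routes both failure modes through the same recovery path.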
comfyanonymous 91d40086db
Fix pytorch warning. (#8593) 2025-06-19 11:04:52 -04:00
comfyanonymous 79eea51a1d Fix and enforce all ruff W rules. 2025-01-01 03:08:33 -05:00
comfyanonymous d170292594 Remove some trailing white space. 2024-12-27 18:02:30 -05:00
Chenlei Hu 563291ee51
Enforce all pyflake lint rules (#6033)
* Enforce F821 undefined-name

* Enforce all pyflake lint rules
2024-12-12 19:29:37 -05:00
Chenlei Hu 60749f345d
Lint and fix undefined names (3/N) (#6030) 2024-12-12 18:49:40 -05:00
comfyanonymous 2fd9c1308a Fix mask issue in some attention functions. 2024-11-22 02:10:09 -05:00
comfyanonymous 2a813c3b09 Switch some more prints to logging. 2024-03-11 16:34:58 -04:00
comfyanonymous aaa9017302 Add attention mask support to sub quad attention. 2024-01-07 04:13:58 -05:00
comfyanonymous a373367b0c Fix some OOM issues with split and sub quad attention. 2023-10-25 20:17:28 -04:00
comfyanonymous ddc6f12ad5 Disable autocast in unet for increased speed. 2023-07-05 21:58:29 -04:00
comfyanonymous 73c3e11e83 Fix model_management import so it doesn't get executed twice. 2023-04-15 19:04:33 -04:00
comfyanonymous 3ed4a4e4e6 Try again with vae tiled decoding if regular fails because of OOM. 2023-03-22 14:49:00 -04:00
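The retry strategy in the commit above (fall back to tiled VAE decoding when a full decode runs out of memory) can be sketched like this. Everything here is a hypothetical illustration: `OutOfMemory`, `decode`, and `decode_with_fallback` are invented stand-ins, and the toy decoder merely simulates an OOM on large inputs rather than doing real VAE work.

```python
# Illustrative sketch of "retry with tiled decoding on OOM".
# All names are hypothetical stand-ins, not the repository's actual code.

class OutOfMemory(RuntimeError):
    """Stand-in for an out-of-memory exception."""

def decode(latent, *, tiled=False):
    """Toy decoder: pretend a full (non-tiled) decode OOMs on large inputs."""
    if not tiled and len(latent) > 4:
        raise OutOfMemory("full decode failed")
    return [x * 2 for x in latent]  # stand-in for the decoded output

def decode_with_fallback(latent):
    """Try the regular decode first; on OOM, retry in tiled mode."""
    try:
        return decode(latent)
    except OutOfMemory:
        return decode(latent, tiled=True)
```

Tiled decoding trades some speed (and possible seam artifacts) for a much smaller peak memory footprint, which is why it makes sense only as a fallback rather than the default path.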
edikius 165be5828a
Fixed import (#44)
* fixed import error

I hit an
ImportError: cannot import name 'Protocol' from 'typing'
while trying to update, so I fixed the import so the app would start

* Update main.py

* deleted example files
2023-03-06 11:41:40 -05:00
comfyanonymous 1a4edd19cd Fix overflow issue with inplace softmax. 2023-02-10 11:47:41 -05:00
comfyanonymous df40d4f3bf torch.cuda.OutOfMemoryError is not present on older pytorch versions. 2023-02-09 12:33:27 -05:00
comfyanonymous 047775615b Lower the chances of an OOM. 2023-02-08 14:24:27 -05:00
comfyanonymous 1daccf3678 Run softmax in place if it OOMs. 2023-01-30 19:55:01 -05:00
comfyanonymous 051f472e8f Fix sub quadratic attention for SD2 and make it the default optimization. 2023-01-25 01:22:43 -05:00
comfyanonymous 220afe3310 Initial commit. 2023-01-16 22:37:14 -05:00