Commit Graph

84 Commits (0d7c025af1fd18fccefa36b76362134ccf54e8d2)

Author SHA1 Message Date
Vladimir Mandic d8c82eddd9 lint fixes
Signed-off-by: Vladimir Mandic <mandic00@live.com>
2025-03-17 12:21:39 -04:00
Vladimir Mandic 8906f962ad deepseek-vl2 experiments
Signed-off-by: Vladimir Mandic <mandic00@live.com>
2025-02-15 10:10:04 -05:00
Vladimir Mandic ae9b40c688 refactor to unify latent, resize and model based upscalers
Signed-off-by: Vladimir Mandic <mandic00@live.com>
2025-01-18 14:29:40 -05:00
Vladimir Mandic 8110f06c3b remove all ldm imports when running in native mode
Signed-off-by: Vladimir Mandic <mandic00@live.com>
2024-12-30 11:42:04 -05:00
Vladimir Mandic ee7517dfb8 expose sdp options 2024-02-19 08:29:24 -05:00
Disty0 43c5be76ca Unite attention optimization settings 2024-02-10 10:32:53 -05:00
Disty0 12e49acbce OpenVINO update to torch 2.2.0 2024-02-09 18:54:44 +03:00
Disty0 d867e7aa2d Diffusers add Dynamic Attention Slicing 2024-02-09 13:49:46 +03:00
Vladimir Mandic 204853afea global crlf to lf 2024-01-10 09:45:26 -05:00
Vladimir Mandic 267905e6bb Revert "Merge pull request #2411 from vladmandic/master"
This reverts commit 64cce8a606, reversing changes made to 597fc1863f.
2023-10-26 07:30:01 -04:00
Vladimir Mandic 5219daa7fb Revert "Merge branch 'dev' into master"
This reverts commit 4b91ee0044, reversing changes made to fc7e3c5721.
2023-10-26 07:17:40 -04:00
Vladimir Mandic ecadbc1683 lint updates 2023-10-25 11:10:21 -04:00
Seunghoon Lee 0f81cfc213 #1711 2023-07-16 21:30:27 +09:00
Seunghoon Lee 0a52c44e73 DirectML rework & provide GPU memory usage (AMD only). 2023-07-15 18:55:38 +09:00
Disty0 2a9133bfec IPEX rework 2023-07-14 17:33:24 +03:00
Kubuxu 2eb705df15 Introduce attention heads dimension into sdp_attnblock_forward
This enables flash-attention and memory-efficient attention optimizations.
2023-07-12 22:12:03 +01:00
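As an aside, a minimal sketch of the idea behind this commit, assuming torch 2.x: F.scaled_dot_product_attention can only dispatch to the flash and memory-efficient kernels when tensors carry an explicit heads axis, so the attention-block forward reshapes (batch, tokens, channels) into (batch, heads, tokens, head_dim) first. The function below is illustrative, not the actual sdp_attnblock_forward code.

```python
import torch
import torch.nn.functional as F

def sdp_attention_with_heads(q, k, v, heads=8):
    """Illustrative only: expose a heads axis on flat (b, t, c) tensors."""
    b, t, c = q.shape
    head_dim = c // heads  # c must be divisible by heads
    # (b, t, c) -> (b, heads, t, head_dim): the layout the SDPA kernels expect
    q, k, v = (x.view(b, t, heads, head_dim).transpose(1, 2) for x in (q, k, v))
    out = F.scaled_dot_product_attention(q, k, v)
    # merge heads back into the channel dimension
    return out.transpose(1, 2).reshape(b, t, c)
```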
Disty0 966eed8dd9 Autodetect IPEX 2023-07-04 23:37:36 +03:00
Disty0 9a7c765506 Run torch.xpu.memory_allocated with device 2023-06-23 11:08:11 +03:00
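A small illustration of the change above, assuming an Intel Extension for PyTorch install that registers the torch.xpu namespace:

```python
import torch
import intel_extension_for_pytorch  # noqa: F401 -- registers torch.xpu

device = torch.device("xpu:0")
# query the named device instead of whichever XPU happens to be current
print(f"{torch.xpu.memory_allocated(device) / 1024**2:.1f} MiB allocated on {device}")
```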
Vladimir Mandic 9334b2f21c jumbo merge part three 2023-06-14 13:54:23 -04:00
Vladimir Mandic 1d9e490ef9 ruff linting fixes 2023-06-13 12:22:39 -04:00
Vladimir Mandic cb307399dd jumbo merge 2023-06-13 11:59:56 -04:00
Vladimir Mandic 0cca4d452a add saving from process tab 2023-06-07 17:35:27 -04:00
Vladimir Mandic d25b020f61 update 2023-06-02 12:29:21 -04:00
Disty0 562947c944 Use proper device names instead of "xpu" 2023-06-02 01:14:56 +03:00
Vladimir Mandic 54257dd226 refactoring for pylint 2023-05-28 17:09:58 -04:00
Vladimir Mandic 0ccda9bc8b jumbo patch 2023-05-17 14:15:55 -04:00
Disty0 5c9894724c Fix memory monitoring when using IPEX 2023-05-05 12:26:47 +03:00
Disty0 8171d57c36 Remove unnecessary IPEX imports 2023-05-04 02:34:34 +03:00
Vladimir Mandic cb4cff3929 redesign logging 2023-05-02 13:57:16 -04:00
Vladimir Mandic deb0546b46 update requirements 2023-05-01 18:54:50 -04:00
Disty0 de8d0bef9f More patches and Import IPEX after Torch 2023-04-30 18:19:37 +03:00
Disty0 b075d3c8fd Intel ARC Support 2023-04-30 15:13:56 +03:00
Seunghoon Lee df0e89be48 fix.
Unstable & needs more testing.
2023-04-26 12:45:44 +09:00
Seunghoon Lee 8b75033a11 fix 2023-04-26 12:34:27 +09:00
Seunghoon Lee 09ae33cdf7 Implement torch.dml.
VERY UNSTABLE & NOT TESTED.
2023-04-26 12:21:44 +09:00
Seunghoon Lee db56da075a need full precision for model & vae.
Stable & tested.
2023-04-25 23:04:52 +09:00
Seunghoon Lee a49a8f8b46 First DirectML implementation.
Unstable and not tested.
2023-04-25 01:43:19 +09:00
Vladimir Mandic ed8819b8fc lycoris, strong linting, model keyword, circular imports 2023-04-15 10:28:31 -04:00
Vladimir Mandic 81b8294e93 switch cmdflags to settings 2023-04-12 10:40:11 -04:00
Vladimir Mandic f181885f0c Merge pull request #57 from AUTOMATIC1111/master
merge from upstream
2023-03-25 08:47:00 -04:00
FNSpd 280ed8f00f Update sd_hijack_optimizations.py 2023-03-24 16:29:16 +04:00
FNSpd c84c9df737 Update sd_hijack_optimizations.py 2023-03-21 14:50:22 +04:00
Vladimir Mandic f6679fcc77 add global exception handler 2023-03-17 10:08:07 -04:00
Pam 8d7fa2f67c sdp_attnblock_forward hijack 2023-03-10 22:48:41 +05:00
Pam 37acba2633 argument to disable memory efficient for sdp 2023-03-10 12:19:36 +05:00
Pam fec0a89511 scaled dot product attention 2023-03-07 00:33:13 +05:00
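These three commits route attention through torch 2.x scaled dot product attention. A hedged sketch of the toggle the middle commit describes, using the torch 2.0-era torch.backends.cuda.sdp_kernel context manager (later superseded by torch.nn.attention.sdpa_kernel); the tensor shapes are made up for the example:

```python
import torch
import torch.nn.functional as F

q = k = v = torch.randn(1, 8, 4096, 40, device="cuda", dtype=torch.float16)

# leave flash and math SDP enabled but turn the memory-efficient backend off,
# roughly what a "disable memory efficient for sdp" argument would do
with torch.backends.cuda.sdp_kernel(enable_flash=True, enable_math=True, enable_mem_efficient=False):
    out = F.scaled_dot_product_attention(q, k, v)
```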
brkirch e3b53fd295 Add UI setting for upcasting attention to float32
Adds "Upcast cross attention layer to float32" option in Stable Diffusion settings. This allows for generating images using SD 2.1 models without --no-half or xFormers.

To make upcasting the cross attention layer optimizations possible, several sections of code in sd_hijack_optimizations.py had to be indented so that a context manager can be used to disable autocast. Also, even though Stable Diffusion (and Diffusers) only upcast q and k, in my testing most of the cross attention layer optimizations could not function unless v is upcast as well.
2023-01-25 01:13:04 -05:00
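A minimal sketch of the mechanism this message describes, assuming torch 2.x: compute attention in float32 with autocast disabled, upcast v alongside q and k, then cast the result back. Illustrative only, not the actual sd_hijack_optimizations.py code.

```python
import torch
import torch.nn.functional as F

def upcast_sdp_attention(q, k, v):
    dtype = q.dtype
    # disable autocast so it cannot re-lower the computation to half precision
    with torch.autocast(q.device.type, enabled=False):
        # per the commit message, v must be upcast too, not just q and k
        out = F.scaled_dot_product_attention(q.float(), k.float(), v.float())
    return out.to(dtype)
```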
AUTOMATIC 59146621e2 better support for xformers flash attention on older versions of torch 2023-01-23 16:40:20 +03:00
Takuma Mori 3262e825cc add --xformers-flash-attention option & impl 2023-01-21 17:42:04 +09:00
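For reference, pinning the flash-attention implementation in xformers looks roughly like the sketch below; the op names follow the xformers 0.0.16-era API and should be treated as an assumption:

```python
import torch
import xformers.ops

# xformers expects (batch, seq_len, heads, head_dim)
q = k = v = torch.randn(1, 4096, 8, 40, device="cuda", dtype=torch.float16)

# pin the flash forward/backward ops instead of letting xformers choose,
# roughly what an --xformers-flash-attention style option selects
flash_op = (xformers.ops.fmha.flash.FwOp, xformers.ops.fmha.flash.BwOp)
out = xformers.ops.memory_efficient_attention(q, k, v, op=flash_op)
```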
AUTOMATIC 40ff6db532 extra networks UI
rework of hypernets: rather than being enabled via settings, hypernets are added directly to the prompt as <hypernet:name:weight>
2023-01-21 08:36:07 +03:00
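To illustrate the <hypernet:name:weight> syntax mentioned above, a toy parser; the regex and the default weight are assumptions for the example, not the webui's actual extra-networks code:

```python
import re

# hypothetical pattern for <kind:arg1:arg2:...> extra-network tags
re_extra_net = re.compile(r"<(\w+):([^>]+)>")

prompt = "a castle on a hill <hypernet:anime_style:0.6>"
for match in re_extra_net.finditer(prompt):
    kind, args = match.group(1), match.group(2).split(":")
    name = args[0]
    weight = float(args[1]) if len(args) > 1 else 1.0  # assume weight defaults to 1.0
    print(kind, name, weight)  # -> hypernet anime_style 0.6
```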