Commit Graph

3454 Commits (b7fcf4a41e8d7a8a46ba379927a1e90eb9e11346)

Author SHA1 Message Date
Disty0 fc3d96f6f6 IPEX fix ControlNet PidiNet 2023-08-04 04:15:25 +03:00
Disty0 5f5a564d41 Update compile settings 2023-08-03 22:38:43 +03:00
Disty0 434a1f967f IPEX fixes 2023-08-03 21:06:15 +03:00
Disty0 44b17b7418 Add compile type option 2023-08-03 17:51:29 +03:00
Disty0 4535a99fff Model compile support for IPEX 2023-08-03 17:25:15 +03:00
Disty0 a293e3cdcb Torch 2.0 for IPEX 2023-08-03 16:29:56 +03:00
Vladimir Mandic 246989129f ui settings logging 2023-08-02 08:27:16 +02:00
Vladimir Mandic 2cf4014040 refresh 2023-07-31 11:36:51 +02:00
Seunghoon Lee 1524365284 Update Korean localization & DirectML bug fix. 2023-08-02 02:32:11 +09:00
Disty0 80b834054b CPU offload mode check & Enable compile for IPEX 2023-08-01 02:51:50 +03:00
Seunghoon Lee 8787f36b2c Merge branch 'master' of https://github.com/vladmandic/automatic 2023-08-01 02:03:51 +09:00
Seunghoon Lee d711880aa9 New option for DirectML: memory stats provider.
1. Performance Counter.
    Gets the VRAM size allocated to and used by python.exe from pdh.dll.
    Generation can be slower than atiadlxx.
    Uses memory less greedily than atiadlxx.
    Windows only.
2. atiadlxx.
    Gets the maximum and available VRAM size from the AMD GPU driver (atiadlxx.dll).
    Uses memory more greedily than Performance Counter.
    Windows & WSL are supported.
3. None.
    Assumes the available VRAM size is 8GB.
    Uses memory regardless of current VRAM usage.
2023-08-01 01:58:04 +09:00
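The three provider options described above could be modeled roughly as follows. This is an illustrative sketch, not the repository's actual code; the names (`MemoryStatsProvider`, `available_vram_gb`) and the injected query callables are hypothetical stand-ins for the real pdh.dll/atiadlxx.dll lookups.

```python
from enum import Enum

class MemoryStatsProvider(Enum):
    """Illustrative model of the three DirectML memory-stats options."""
    PERFORMANCE_COUNTER = "Performance Counter"  # pdh.dll, Windows only
    ATIADLXX = "atiadlxx"                        # AMD driver, Windows & WSL
    NONE = "None"                                # no stats available

def available_vram_gb(provider, query_pdh=None, query_atiadlxx=None):
    """Return the VRAM budget in GB for the given provider.

    query_pdh / query_atiadlxx stand in for the real DLL queries.
    """
    if provider is MemoryStatsProvider.PERFORMANCE_COUNTER:
        return query_pdh()        # VRAM allocated to / used by python.exe
    if provider is MemoryStatsProvider.ATIADLXX:
        return query_atiadlxx()   # available VRAM from the AMD driver
    return 8.0                    # "None": assume 8GB regardless of usage
```

The "None" branch mirrors the commit's fallback: with no stats source, a fixed 8GB budget is assumed and memory is used regardless of actual VRAM pressure.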
Disty0 8ffaea76ba Add Diffusers model and VAE variant loading option 2023-07-31 14:39:41 +03:00
Disty0 884e464693 Move base to cpu when using refiner with offload 2023-07-30 18:39:27 +03:00
Vladimir Mandic 2300329893 fix lint 2023-07-30 17:17:56 +02:00
Vladimir Mandic 6e428ff0a2 update diffusers 2023-07-30 17:14:34 +02:00
Vladimir Mandic 13ba9531a5 Merge pull request #1875 from AI-Casanova/diffusers-LoRA
Enable SDXL LoRA (Must install the newest Diffusers!)
2023-07-30 11:13:36 -04:00
Disty0 350be09282 w/a for model cpu offload refiner 2023-07-30 17:18:57 +03:00
Disty0 0180402563 Better move and accelerate handling 2023-07-30 11:36:47 +03:00
Disty0 66a6e783f0 has_accelerate = False for original backend 2023-07-30 01:22:29 +03:00
Kubuxu 4d00224082 Fix pipeline switching 2023-07-30 00:51:35 +03:00
Kubuxu 7e1030d499 Introduce sd_model.has_accelerate 2023-07-30 00:51:35 +03:00
Kubuxu f945cf14b0 Fix model offload by not forcing the model to GPU 2023-07-30 00:51:35 +03:00
Disty0 085d1da825 Fix force upcast VAE with Diffusers 2023-07-29 21:01:24 +03:00
Disty0 3258b27523 Update sequential CPU offload check 2023-07-29 19:13:29 +03:00
Disty0 38ecfb6dff Add Move UNet to CPU option 2023-07-29 18:58:08 +03:00
Disty0 3010d1823d Fix modules 2023-07-29 15:44:55 +03:00
Seunghoon Lee 42c6147ac8 cleanup 2023-07-29 13:48:28 +09:00
Seunghoon Lee 7017a4a2a9 Fix UniPC sampler issue on DirectML. 2023-07-29 13:38:41 +09:00
Seunghoon Lee 813eb48bf7 Restore Python 3.9 compatibility. (DirectML) 2023-07-29 12:10:41 +09:00
Seunghoon Lee 47f2f50574 Restore Python 3.9 compatibility. (DirectML) 2023-07-29 12:08:22 +09:00
Disty0 025df86ac0 Cleanup 2023-07-29 02:06:34 +03:00
AI-Casanova fd7f51302e Update processing_diffusers.py 2023-07-28 18:01:38 -05:00
AI-Casanova 942371b25a Update lora_diffusers.py 2023-07-28 18:00:44 -05:00
Kubuxu 1da64b9d08 Fix setting VAE Force Upcast in diffusers 2023-07-28 23:07:11 +01:00
Disty0 31e1bb01ff Fix VAE reloading 2023-07-28 23:23:36 +03:00
Disty0 dbd7887632 Fix refiner unloading 2023-07-28 23:12:45 +03:00
Disty0 c8af2affaf Fix original backend reloading 2023-07-28 21:26:54 +03:00
Disty0 cfcf481992 cleanup 2023-07-28 21:10:38 +03:00
Disty0 9f18db474d Merge pull request #1861 from Nuullll/master
[IPEX] Fix ControlNet depth leres/leres++
2023-07-28 20:15:31 +03:00
Disty0 e6cf3d72cd Fix sequential cpu offload 2023-07-28 20:01:32 +03:00
Seunghoon Lee 77de9cd093 Fix medvram with DirectML. 2023-07-28 23:18:28 +09:00
Seunghoon Lee 0f44332e5c Make sequential CPU offload available for non-CUDA
Add settings override for DirectML.
Move `devices.set_cuda_params()` to correct line.
2023-07-28 23:11:57 +09:00
Nuullll 3aed536206 [IPEX] Fix ControlNet depth leres/leres++ 2023-07-28 21:25:17 +08:00
Disty0 a32bf083f1 cleanup 2023-07-28 12:15:25 +03:00
Disty0 459a8bc048 ipex fix adetailer 2023-07-28 12:08:02 +03:00
Disty0 f38d5a91bf Move ipex fixes into their own folder 2023-07-28 10:58:45 +03:00
Disty0 38dcca7399 ipex cleanup 2023-07-28 01:35:02 +03:00
Nuullll 6acb3ef131 [IPEX] Fix batch_norm for Tiled VAE
Tiled VAE invokes `torch.nn.functional.batch_norm` without providing the
`weight` and `bias` parameters, so the torch backend creates default empty
tensors for them but bails out with a "tensor does not have a device" error.

This patch overrides the `weight` and `bias` parameters with all-ones and
all-zeros tensors if they are `None`.
2023-07-27 22:56:19 +08:00
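The workaround described in commit 6acb3ef131 can be sketched as a thin wrapper around a batch_norm-like callable. This is a simplified, framework-agnostic illustration, not the actual patch (which targets `torch.nn.functional.batch_norm` directly); the `make_ones`/`make_zeros` factory parameters are assumptions standing in for tensor construction on the right device.

```python
def patch_batch_norm(batch_norm, make_ones, make_zeros):
    """Wrap a batch_norm-like callable so that a missing `weight`/`bias`
    default to all-ones/all-zeros instead of uninitialized tensors."""
    def wrapper(input, running_mean, running_var, weight=None, bias=None, **kwargs):
        if weight is None:
            weight = make_ones(len(running_mean))   # all-ones fallback
        if bias is None:
            bias = make_zeros(len(running_mean))    # all-zeros fallback
        return batch_norm(input, running_mean, running_var,
                          weight=weight, bias=bias, **kwargs)
    return wrapper
```

Since `weight=1` and `bias=0` make the affine step an identity, the fallback preserves plain normalization semantics while avoiding the "tensor does not have a device" failure.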
Vladimir Mandic 86a0cb5f7e add secondary pass info to metadata 2023-07-27 10:06:21 -04:00