Commit Graph

217 Commits (0a49961baaaa594313baffba7a000e4e98712c71)

Author SHA1 Message Date
Disty0 7577a09528 Add IPEX Optimizers and use XPU instead of CPU when using IPEX 2023-05-03 18:12:38 +03:00
Vladimir Mandic 6f976c358f optimize model load 2023-05-02 21:30:34 -04:00
Vladimir Mandic eb03fce3e4 fix logger 2023-05-02 15:57:28 -04:00
Vladimir Mandic 7a083d322b merge commits 2023-05-02 15:06:06 -04:00
Vladimir Mandic cb4cff3929 redesign logging 2023-05-02 13:57:16 -04:00
Vladimir Mandic 22da90d4b8 fix lora memory leak 2023-05-01 10:13:21 -04:00
Disty0 68fc95b2e1 Merge remote-tracking branch 'upstream/master' 2023-04-30 18:28:22 +03:00
Disty0 de8d0bef9f More patches and Import IPEX after Torch 2023-04-30 18:19:37 +03:00
Vladimir Mandic 682330b172 new command line parser 2023-04-30 10:54:59 -04:00
Disty0 b075d3c8fd Intel ARC Support 2023-04-30 15:13:56 +03:00
Vladimir Mandic 20b64aad7b update samplers 2023-04-24 16:16:52 -04:00
Vladimir Mandic cf277e7326 fix dtype logic 2023-04-21 15:04:05 -04:00
Vladimir Mandic 4417d570aa
Merge pull request #233 from Yan233th/master
Fix hasattr to in method
2023-04-21 09:29:51 -04:00
papuSpartan 9e8dc9843c port to vlad 2023-04-21 03:18:08 -05:00
Vladimir Mandic 7939a1649d parse model preload 2023-04-20 23:19:25 -04:00
Vladimir Mandic 0282832f12 fix vae path 2023-04-20 15:50:06 -04:00
Vladimir Mandic 5a0664c945 fixes 2023-04-20 15:35:40 -04:00
Vladimir Mandic 752b91d38a fix model download 2023-04-20 12:29:54 -04:00
Vladimir Mandic 0e7144186d jump patch 2023-04-20 11:20:27 -04:00
Vladimir Mandic 8b1f26324b optional model loader and integrate image info 2023-04-17 15:31:43 -04:00
Vladimir Mandic fd51bb90d0 enable quick launch 2023-04-15 11:51:58 -04:00
Vladimir Mandic ed8819b8fc lycoris, strong linting, model keyword, circular imports 2023-04-15 10:28:31 -04:00
Vladimir Mandic 2ece9782e4 handle duplicate extensions and redo exception handler 2023-04-14 09:57:53 -04:00
Vladimir Mandic 81b8294e93 switch cmdflags to settings 2023-04-12 10:40:11 -04:00
Vladimir Mandic ffc54d0938 update launcher 2023-04-06 11:23:25 -04:00
Vladimir Mandic 8cc3a64201 redo timers 2023-04-04 10:18:39 -04:00
Yan233_ 017398885a Fix hasattr to in method 2023-03-29 23:32:48 +08:00
Vladimir Mandic 86b83fc956
Merge pull request #66 from AUTOMATIC1111/master
merge from upstream
2023-03-28 16:43:39 -04:00
AUTOMATIC 1b63afbedc sort hypernetworks and checkpoints by name 2023-03-28 20:03:57 +03:00
AUTOMATIC1111 f1db987e6a
Merge pull request #8958 from MrCheeze/variations-model
Add support for the unclip (Variations) models, unclip-h and unclip-l
2023-03-28 19:39:20 +03:00
Vladimir Mandic 6fe6eff9b4 improve error handling 2023-03-26 21:50:15 -04:00
MrCheeze 1f08600345 overwrite xformers in the unclip model config if not available 2023-03-26 16:55:29 -04:00
Vladimir Mandic 404a2a2cb2 fix broken generate and add progress bars 2023-03-26 14:23:45 -04:00
MrCheeze 8a34671fe9 Add support for the Variations models (unclip-h and unclip-l) 2023-03-25 21:03:07 -04:00
Vladimir Mandic 284bbcd67b update modules 2023-03-25 09:25:13 -04:00
AUTOMATIC1111 956ed9a737
Merge pull request #8780 from Brawlence/master
Unload and re-load checkpoint to VRAM on request (API & Manual)
2023-03-25 12:03:26 +03:00
carat-johyun 92e173d414 fix variable typo 2023-03-23 14:28:08 +09:00
Φφ 4cbbb881ee Unload checkpoints on Request
…to free VRAM.

New Action buttons in the settings to manually free and reload checkpoints, essentially
juggling models between RAM and VRAM.
2023-03-21 09:28:50 +03:00
AUTOMATIC 6a04a7f20f fix an error loading Lora with empty values in metadata 2023-03-14 11:22:29 +03:00
AUTOMATIC c19530f1a5 Add view metadata button for Lora cards. 2023-03-14 09:10:26 +03:00
w-e-w 014e7323f6 when exists 2023-02-19 20:49:07 +09:00
w-e-w c77f01ff31 fix auto sd download issue 2023-02-19 20:37:40 +09:00
missionfloyd c4ea16a03f Add ".vae.ckpt" to ext_blacklist 2023-02-15 19:47:30 -07:00
missionfloyd 1615f786ee Download model if none are found 2023-02-14 20:54:02 -07:00
AUTOMATIC 668d7e9b9a make it possible to load SD1 checkpoints without CLIP 2023-02-05 11:21:00 +03:00
AUTOMATIC 3e0f9a7543 fix issue with switching back to checkpoint that had its checksum calculated during runtime mentioned in #7506 2023-02-04 15:23:16 +03:00
AUTOMATIC1111 c0e0b5844d
Merge pull request #7470 from cbrownstein-lambda/update-error-message-no-checkpoint
Update error message WRT missing checkpoint file
2023-02-04 12:07:12 +03:00
AUTOMATIC 81823407d9 add --no-hashing 2023-02-04 11:38:56 +03:00
Cody Brownstein fb97acef63 Update error message WRT missing checkpoint file
The Safetensors format is also supported.
2023-02-01 14:51:06 -08:00
AUTOMATIC f6b7768f84 support for searching subdirectory names for extra networks 2023-01-29 10:20:19 +03:00
AUTOMATIC 5d14f282c2 fixed a bug where after switching to a checkpoint with unknown hash, you'd get empty space instead of checkpoint name in UI
fixed a bug where if you update a selected checkpoint on disk and then restart the program, a different checkpoint loads, but the name is shown for the old one.
2023-01-28 16:23:49 +03:00
Max Audron 5eee2ac398 add data-dir flag and set all user data directories based on it 2023-01-27 14:44:30 +01:00
AUTOMATIC 6f31d2210c support detecting midas model
fix broken api for checkpoint list
2023-01-27 11:54:19 +03:00
AUTOMATIC d2ac95fa7b remove the need to place configs near models 2023-01-27 11:28:12 +03:00
AUTOMATIC1111 1574e96729
Merge pull request #6510 from brkirch/unet16-upcast-precision
Add upcast options, full precision sampling from float16 UNet and upcasting attention for inference using SD 2.1 models without --no-half
2023-01-25 19:12:29 +03:00
Kyle ee0a0da324 Add instruct-pix2pix hijack
Allows loading instruct-pix2pix models via same method as inpainting models in sd_models.py and sd_hijack_ip2p.py

Adds ddpm_edit.py necessary for instruct-pix2pix
2023-01-25 08:53:23 -05:00
brkirch 84d9ce30cb Add option for float32 sampling with float16 UNet
This also handles type casting so that ROCm and MPS torch devices work correctly without --no-half. One cast is required for deepbooru in deepbooru_model.py, some explicit casting is required for img2img and inpainting. depth_model can't be converted to float16 or it won't work correctly on some systems (it's known to have issues on MPS) so in sd_models.py model.depth_model is removed for model.half().
2023-01-25 01:13:02 -05:00
AUTOMATIC c1928cdd61 bring back short hashes to sd checkpoint selection 2023-01-19 18:58:08 +03:00
AUTOMATIC a5bbcd2153 fix bug with "Ignore selected VAE for..." option completely disabling VAE election
rework VAE resolving code to be more simple
2023-01-14 19:56:09 +03:00
AUTOMATIC 08c6f009a5 load hashes from cache for checkpoints that have them
add checkpoint hash to footer
2023-01-14 15:55:40 +03:00
AUTOMATIC febd2b722e update key to use with checkpoints' sha256 in cache 2023-01-14 13:37:55 +03:00
AUTOMATIC f9ac3352cb change hypernets to use sha256 hashes 2023-01-14 10:25:37 +03:00
AUTOMATIC a95f135308 change hash to sha256 2023-01-14 09:56:59 +03:00
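The hash-related commits above (switching to SHA-256, caching digests, and bringing back short hashes for checkpoint selection) boil down to streaming each checkpoint file through SHA-256 and using a prefix of the digest in the UI. A minimal sketch; the function names and chunk size are illustrative, not the repository's actual code:

```python
import hashlib

def sha256_of_file(path, chunk_size=1 << 20):
    """Stream a checkpoint file through SHA-256 without loading it whole."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def short_hash(full_hash, length=10):
    # a "short hash" for UI selection is just a prefix of the full digest
    return full_hash[:length]
```

Reading in fixed-size chunks keeps memory flat even for multi-gigabyte model files, which is why the computed digests are worth caching (as the later commits do).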
AUTOMATIC 4bd490727e fix for an error caused by skipping initialization, for realsies this time: TypeError: expected str, bytes or os.PathLike object, not NoneType 2023-01-11 18:54:13 +03:00
AUTOMATIC 1a23dc32ac possible fix for fallback for fast model creation from config, attempt 2 2023-01-11 10:34:36 +03:00
AUTOMATIC 4fdacd31e4 possible fix for fallback for fast model creation from config 2023-01-11 10:24:56 +03:00
AUTOMATIC 0f8603a559 add support for transformers==4.25.1
add fallback for when quick model creation fails
2023-01-10 17:46:59 +03:00
AUTOMATIC ce3f639ec8 add more stuff to ignore when creating model from config
prevent .vae.safetensors files from being listed as stable diffusion models
2023-01-10 16:51:04 +03:00
AUTOMATIC 0c3feb202c disable torch weight initialization and CLIP downloading/reading checkpoint to speedup creating sd model from config 2023-01-10 14:08:29 +03:00
Vladimir Mandic 552d7b90bf allow model load if previous model failed 2023-01-09 18:34:26 -05:00
AUTOMATIC 642142556d use commandline-supplied cuda device name instead of cuda:0 for safetensors PR that doesn't fix anything 2023-01-04 15:09:53 +03:00
AUTOMATIC 68fbf4558f Merge remote-tracking branch 'Narsil/fix_safetensors_load_speed' 2023-01-04 14:53:03 +03:00
AUTOMATIC 0cd6399b8b fix broken inpainting model 2023-01-04 14:29:13 +03:00
AUTOMATIC 8d8a05a3bb find configs for models at runtime rather than when starting 2023-01-04 12:47:42 +03:00
AUTOMATIC 02d7abf514 helpful error message when trying to load 2.0 without config
failing to load model weights from settings won't break generation for currently loaded model anymore
2023-01-04 12:35:07 +03:00
AUTOMATIC 8f96f92899 call script callbacks for reloaded model after loading embeddings 2023-01-03 18:39:14 +03:00
AUTOMATIC 311354c0bb fix the issue with training on SD2.0 2023-01-02 00:38:09 +03:00
Vladimir Mandic f55ac33d44 validate textual inversion embeddings 2022-12-31 11:27:02 -05:00
Nicolas Patry 5ba04f9ec0
Attempting to solve slow loads for `safetensors`.
Fixes #5893
2022-12-27 11:27:19 +01:00
Yuval Aboulafia 3bf5591efe fix F541 f-string without any placeholders 2022-12-24 21:35:29 +02:00
linuxmobile ( リナックス ) 5a650055de
Removed length check in sd_model at line 115
Commit eba60a4 is what is causing this error; delete the length check in sd_model starting at line 115 and it's fine.

https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/5971#issuecomment-1364507379
2022-12-24 09:25:35 -03:00
AUTOMATIC1111 eba60a42eb
Merge pull request #5627 from deanpress/patch-1
fix: fallback model_checkpoint if it's empty
2022-12-24 12:20:31 +03:00
MrCheeze ec0a48826f unconditionally set use_ema=False if value not specified (True never worked, and all configs except v1-inpainting-inference.yaml already correctly set it to False) 2022-12-11 11:18:34 -05:00
Dean van Dugteren 59c6511494
fix: fallback model_checkpoint if it's empty
This fixes the following error when SD attempts to start with a deleted checkpoint:

```
Traceback (most recent call last):
  File "D:\Web\stable-diffusion-webui\launch.py", line 295, in <module>
    start()
  File "D:\Web\stable-diffusion-webui\launch.py", line 290, in start
    webui.webui()
  File "D:\Web\stable-diffusion-webui\webui.py", line 132, in webui
    initialize()
  File "D:\Web\stable-diffusion-webui\webui.py", line 62, in initialize
    modules.sd_models.load_model()
  File "D:\Web\stable-diffusion-webui\modules\sd_models.py", line 283, in load_model
    checkpoint_info = checkpoint_info or select_checkpoint()
  File "D:\Web\stable-diffusion-webui\modules\sd_models.py", line 117, in select_checkpoint
    checkpoint_info = checkpoints_list.get(model_checkpoint, None)
TypeError: unhashable type: 'list'
```
2022-12-11 17:08:51 +01:00
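The traceback in the entry above ends in `TypeError: unhashable type: 'list'` because `dict.get` requires a hashable key, and a stale settings value can leave `model_checkpoint` as a list. A hypothetical sketch of the fallback fix described by the commit title; the names mirror the traceback but are not the actual sd_models.py code:

```python
checkpoints_list = {"model-a.ckpt": "checkpoint info for model-a"}

def select_checkpoint(model_checkpoint):
    # dict lookups need a hashable key; a deleted checkpoint can leave the
    # stored selection as a list, which would raise TypeError in .get()
    if not isinstance(model_checkpoint, str):
        model_checkpoint = next(iter(checkpoints_list))  # fall back to a known checkpoint
    return checkpoints_list.get(model_checkpoint)
```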
MrCheeze bd81a09eac fix support for 2.0 inpainting model while maintaining support for 1.5 inpainting model 2022-12-10 11:29:26 -05:00
AUTOMATIC1111 ec5e072124
Merge pull request #4841 from R-N/vae-fix-none
Fix None option of VAE selector
2022-12-10 09:58:20 +03:00
Jay Smith 1ed4f0e228 Depth2img model support 2022-12-08 20:50:08 -06:00
AUTOMATIC 0376da180c make it possible to save nai model using safetensors 2022-11-28 08:39:59 +03:00
AUTOMATIC dac9b6f15d add safetensors support for model merging #4869 2022-11-27 15:51:29 +03:00
AUTOMATIC 6074175faa add safetensors to requirements 2022-11-27 14:46:40 +03:00
AUTOMATIC1111 f108782e30
Merge pull request #4930 from Narsil/allow_to_load_safetensors_file
Supporting `*.safetensors` format.
2022-11-27 14:36:55 +03:00
MrCheeze 1e506657e1 no-half support for SD 2.0 2022-11-26 13:28:44 -05:00
Nicolas Patry 0efffbb407 Supporting `*.safetensors` format.
If a model file exists with extension `.safetensors` then we can load it
more safely than with PyTorch weights.
2022-11-21 14:04:25 +01:00
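The safety argument in the commit above is that `.safetensors` files are plain tensor storage with a JSON header, while `.ckpt` files are pickles that can execute code when loaded. A hedged sketch of the extension-based dispatch, assuming this general shape; the helper names are hypothetical, though `safetensors.torch.load_file` is the library's real API:

```python
import os

def is_safetensors(path):
    # .safetensors holds raw tensors plus a JSON header: no pickle, no code execution
    return os.path.splitext(path)[1].lower() == ".safetensors"

def load_state_dict(path):
    if is_safetensors(path):
        from safetensors.torch import load_file
        return load_file(path, device="cpu")
    import torch
    # pickle-based checkpoints can run arbitrary code; only load trusted files
    return torch.load(path, map_location="cpu")
```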
Muhammad Rizqi Nur 8662b5e57f Merge branch 'a1111' into vae-fix-none 2022-11-19 16:38:21 +07:00
Muhammad Rizqi Nur 2c5ca706a7 Remove no longer necessary parts and add vae_file safeguard 2022-11-19 12:01:41 +07:00
Muhammad Rizqi Nur c7be83bf02 Misc
2022-11-19 11:44:37 +07:00
Muhammad Rizqi Nur abc1e79a5d Fix base VAE caching was done after loading VAE, also add safeguard 2022-11-19 11:41:41 +07:00
cluder eebf49592a restore #4035 behavior
- if checkpoint cache is set to 1, keep 2 models in cache (current +1 more)
2022-11-09 07:17:09 +01:00
cluder 3b51d239ac - do not use ckpt cache, if disabled
- cache model after it has been loaded from file
2022-11-09 05:43:57 +01:00
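The two cache commits above describe keeping the current model plus one extra (`checkpoint cache` set to 1 means current + 1) in an in-memory cache, evicting the oldest entry. A minimal sketch of that eviction policy using an `OrderedDict`; this assumes the real `checkpoints_loaded` cache behaves similarly, though the details differ:

```python
from collections import OrderedDict

checkpoints_loaded = OrderedDict()

def cache_checkpoint(name, model, extra_slots=1):
    # keep the freshly loaded model plus `extra_slots` previously used ones
    checkpoints_loaded[name] = model
    checkpoints_loaded.move_to_end(name)  # mark as most recently used
    while len(checkpoints_loaded) > 1 + extra_slots:
        checkpoints_loaded.popitem(last=False)  # evict least recently cached
```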
AUTOMATIC 99043f3360 fix one of previous merges breaking the program 2022-11-04 11:20:42 +03:00