Some update/fixes for FP8 (#332)

* Update README for fp8

* Don't convert norm layer to fp8
Kohaku-Blueleaf 2023-11-21 12:00:26 +08:00 committed by GitHub
parent bac47592e5
commit 151139f149
2 changed files with 4 additions and 2 deletions


@@ -228,7 +228,7 @@ Adding `--xformers` / `--opt-sdp-attention` to your command lines can significan
FP8 requires torch >= 2.1.0 and WebUI [test-fp8](https://github.com/AUTOMATIC1111/stable-diffusion-webui/tree/test-fp8) branch by [@KohakuBlueleaf](https://github.com/KohakuBlueleaf). Follow these steps to enable FP8:
1. Switch to `test-fp8` branch via `git checkout test-fp8` in your `stable-diffusion-webui` directory.
1. Reinstall torch via adding `--reinstall-torch` ONCE to your command line arguments.
-1. Add `--opt-unet-fp8-storage` to your command line arguments and launch WebUI.
+1. Go to Settings Tab > Optimizations > FP8 weight, change it to `Enable`
### LCM
[Latent Consistency Model](https://github.com/luosiallen/latent-consistency-model) is a recent breakthrough in Stable Diffusion community. I provide a "gift" to everyone who update this extension to >= [v1.12.1](https://github.com/continue-revolution/sd-webui-animatediff/releases/tag/v1.12.1) - you will find `LCM` sampler in the normal place you select samplers in WebUI. You can generate images / videos within 6-8 steps if you
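The FP8 enable steps in the hunk above can be sketched as a shell sequence. This is a sketch under assumptions, not part of the commit: the clone location and the `./webui.sh` launcher are assumptions (Windows users would edit `webui-user.bat` instead), and per the updated step the FP8 option is toggled in the settings UI rather than via a flag.

```shell
# Sketch of the FP8 setup steps (assumes a local stable-diffusion-webui clone).
cd stable-diffusion-webui
git checkout test-fp8            # 1. switch to the test-fp8 branch
./webui.sh --reinstall-torch     # 2. add --reinstall-torch ONCE to get torch >= 2.1.0
./webui.sh                       # 3. relaunch without it, then enable
                                 #    Settings > Optimizations > FP8 weight
```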


@@ -48,7 +48,9 @@ class AnimateDiffMM:
         if not shared.cmd_opts.no_half:
             self.mm.half()
             if getattr(devices, "fp8", False):
-                self.mm.to(torch.float8_e4m3fn)
+                for module in self.mm.modules():
+                    if isinstance(module, (torch.nn.Conv2d, torch.nn.Linear)):
+                        module.to(torch.float8_e4m3fn)
     def inject(self, sd_model, model_name="mm_sd_v15.ckpt"):