AnimateDiff for Stable Diffusion WebUI Forge
This branch is specifically designed for Stable Diffusion WebUI Forge by lllyasviel. See here for how to install forge and this extension. See Update for current status.
This extension aims to integrate AnimateDiff (with CLI) into lllyasviel's Forge adaptation of AUTOMATIC1111 Stable Diffusion WebUI and to form the easiest-to-use AI video toolkit. Once the extension is enabled, you can generate GIFs in exactly the same way as you generate images.
This extension implements AnimateDiff in a different way: it makes heavy use of Forge's Unet Patcher, so you do not need to reload your model weights unless you want to, and I do not have to monkey-patch anything.
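Since generation works the same way as for images, the extension can also be driven through the standard WebUI txt2img JSON API (see the API documentation linked below). The sketch here builds such a request payload; the AnimateDiff argument names (`enable`, `video_length`, `fps`, `format`) are illustrative assumptions, not the extension's documented schema, so check the linked API docs before relying on them.

```python
# Hypothetical sketch: enabling AnimateDiff via the WebUI txt2img API.
# The endpoint and outer payload shape follow the standard A1111/Forge API;
# the AnimateDiff "args" keys below are ASSUMED names for illustration only.
import json
import urllib.request


def build_payload(prompt: str, frames: int = 16, fps: int = 8) -> dict:
    """Assemble a txt2img payload with the AnimateDiff script enabled."""
    return {
        "prompt": prompt,
        "steps": 20,
        "alwayson_scripts": {
            "AnimateDiff": {
                "args": [{
                    "enable": True,          # turn the extension on
                    "video_length": frames,  # number of frames to generate
                    "fps": fps,              # playback speed of the output
                    "format": ["GIF"],       # requested output container
                }]
            }
        },
    }


def submit(payload: dict, base_url: str = "http://127.0.0.1:7860") -> bytes:
    """POST the payload to a locally running WebUI instance."""
    req = urllib.request.Request(
        f"{base_url}/sdapi/v1/txt2img",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()


payload = build_payload("1girl walking in a garden")
```

`submit(payload)` would only succeed against a running WebUI with the API enabled (`--api`); the payload-building half runs standalone.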
You might also be interested in another extension I created: Segment Anything for Stable Diffusion WebUI. This extension will also be redesigned for forge later.
TusiArt (for users inside mainland P.R. China) and TensorArt (for users outside mainland P.R. China) offer an online service of this extension.
Table of Contents
Update | TODO | Model Zoo | Documentation | Tutorial | Thanks | Star History | Sponsor
Update
v2.0.0-f (02/05/2023): t2i, prompt travel, infinite generation, and all kinds of optimizations have been proven to work properly and elegantly.
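For readers unfamiliar with prompt travel: the idea is that the positive prompt interpolates between per-frame keyframes written as numbered lines. The exact layout is defined in the Prompt Travel documentation linked below; treat the frame numbers and line placement in this fragment as an assumed illustration only.

```text
1girl, masterpiece
0: closed eyes
8: half-open eyes
16: open eyes, smile
best quality
```

Here the first line would act as a head prompt applied to every frame, each `N: ...` line would set the prompt from frame N onward, and the last line would act as a tail prompt.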
TODO
- Motion LoRA and ControlNet V2V are still under heavy construction, but I expect to release a working version within this week.
- When all previous features are working properly, I will release SparseCtrl, Magic Animate and Moore Animate Anyone.
- Then this documentation will be completely refactored, and an official video tutorial will be available on YouTube and bilibili.
- Later updates will land in both WebUIs where possible. However, due to the significant difficulty of maintaining Mikubill/sd-webui-controlnet, I will not be able to bring some features to the original A1111 WebUI. Such gaps will be clearly documented.
Model Zoo
I maintain a huggingface repo that provides all official models in fp16 & safetensors format. I highly recommend using my links. You MUST use my link to download the adapter for V3; for all other models, you may still use the old links below if you prefer.
- "Official" models by @guoyww: Google Drive | HuggingFace | CivitAI
- "Stabilized" community models by @manshoety: HuggingFace
- "TemporalDiff" models by @CiaraRowles: HuggingFace
- "HotShotXL" models by @hotshotco: HuggingFace
Documentation
- How to Use -> Preparation | WebUI | API | Parameters
- Features -> Img2Vid | Prompt Travel | ControlNet V2V | [ Model Spec -> Motion LoRA | V3 | SDXL ]
- Performance -> [ Optimizations -> Attention | FP8 | LCM ] | VRAM | #Batch Size
- Demo -> Basic Usage | Motion LoRA | Prompt Travel | AnimateDiff V3 | AnimateDiff SDXL | ControlNet V2V
Tutorial
TODO
Thanks
I thank the researchers from Shanghai AI Lab, especially @guoyww, for creating AnimateDiff. I also thank @neggles and @s9roll7 for creating and improving AnimateDiff CLI Prompt Travel. This extension would not have been possible without these creative works.
I also thank community developers, especially
- @zappityzap who developed the majority of the output features
- @TDS4874 and @opparco for resolving the grey issue, which significantly improved performance
- @lllyasviel for providing technical support of forge
and many others who have contributed to this extension.
I also thank community users, especially @streamline, who provided the dataset and workflow for ControlNet V2V. His workflow is extremely impressive and definitely worth checking out.
Star History
Sponsor
You can sponsor me via WeChat, AliPay or PayPal. You can also support me via ko-fi or afdian.