Allows users to create video2video and text2video animations using any SD model as a backbone. Please make sure that the 'sd-webui-controlnet' extension is also installed.

SD-CN-Animation

This project allows you to automate video stylization tasks using Stable Diffusion and ControlNet. Unlike other current text2video methods, it can also generate completely new videos from text at any resolution and length, using any Stable Diffusion model as a backbone, including custom ones. It uses the 'RAFT' optical flow estimation algorithm to keep the animation stable and to create an occlusion mask that is used when generating the next frame. In text-to-video mode it relies on the 'FloweR' method (work in progress), which predicts optical flow from the previous frames.
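The project's own masking code is not reproduced here, but a common way to derive an occlusion mask from optical flow is a forward-backward consistency check: pixels whose round-trip displacement does not cancel out are likely occluded. A minimal NumPy sketch of that idea (the function name and threshold are illustrative, not taken from this repository):

```python
import numpy as np

def occlusion_mask(flow_fwd, flow_bwd, thresh=1.0):
    """Forward-backward consistency check.
    flow_fwd, flow_bwd: (H, W, 2) arrays of (dx, dy) displacements in pixels.
    Returns a boolean (H, W) mask, True where a pixel is likely occluded."""
    H, W = flow_fwd.shape[:2]
    ys, xs = np.mgrid[0:H, 0:W]
    # Where each pixel lands under the forward flow (clamped to the image)
    x2 = np.clip(xs + flow_fwd[..., 0], 0, W - 1).astype(int)
    y2 = np.clip(ys + flow_fwd[..., 1], 0, H - 1).astype(int)
    # Backward flow sampled at the landing position
    fb = flow_bwd[y2, x2]
    # For visible pixels, forward and backward flow should roughly cancel
    err = np.linalg.norm(flow_fwd + fb, axis=-1)
    return err > thresh
```

Consistent (zero) flows produce an empty mask, while pixels whose backward flow does not return them near their origin are flagged as occluded.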

sd-cn-animation ui preview

In vid2vid mode, do not forget to activate a ControlNet model to achieve better results; without it the resulting video may be quite choppy.
Here are the CN parameters that seem to give the best results so far:
sd-cn-animation cn params

Video to Video Examples:

Original video | "Jessica Chastain" | "Watercolor painting"

The examples presented were generated at 1024x576 resolution using the 'realisticVisionV13_v13' model as a base. They were cropped, downsized, and compressed for faster loading. You can see them in their original quality in the 'examples' folder.

Text to Video Examples:

"close up of a flower" | "bonfire near the camp in the mountains at night" | "close up of a diamond laying on the table"
"close up of macaroni on the plate" | "close up of golden sphere" | "a tree standing in the winter forest"

All examples shown here were originally generated at 512x512 resolution using the 'sd-v1-5-inpainting' model as a base. They were downsized and compressed for faster loading; you can see them in their original quality in the 'examples' folder. The actual prompts follow the format "RAW photo, {subject}, 8k uhd, dslr, soft lighting, high quality, film grain, Fujifilm XT3"; only the 'subject' part is shown in the table above.
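Because only the subject varies between examples, the template above lends itself to simple string substitution. A minimal Python sketch (the helper name is illustrative):

```python
# Fixed template from the examples above; only {subject} varies per prompt.
TEMPLATE = ("RAW photo, {subject}, 8k uhd, dslr, soft lighting, "
            "high quality, film grain, Fujifilm XT3")

def build_prompt(subject: str) -> str:
    """Fill the fixed template with a per-example subject."""
    return TEMPLATE.format(subject=subject)

print(build_prompt("close up of a flower"))
# RAW photo, close up of a flower, 8k uhd, dslr, soft lighting, high quality, film grain, Fujifilm XT3
```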

Installing the extension

To install the extension, go to the 'Extensions' tab in the Automatic1111 web-ui, then open the 'Install from URL' tab. In the 'URL for extension's git repository' field, enter the path to this repository, i.e. 'https://github.com/volotat/SD-CN-Animation.git'. Leave the 'Local directory name' field empty and press the 'Install' button. Restart the web-ui; a new 'SD-CN-Animation' tab should appear. All generated videos are saved to the 'stable-diffusion-webui/outputs/sd-cn-animation' folder.
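If you prefer the command line, cloning the repository into the web-ui's extensions folder is the usual manual alternative for Automatic1111 extensions (the `WEBUI_DIR` path below is an assumption; point it at your own checkout):

```shell
# Manual install, an alternative to the 'Install from URL' flow.
# WEBUI_DIR is an assumed location -- adjust it to your Automatic1111 install.
WEBUI_DIR="${WEBUI_DIR:-$HOME/stable-diffusion-webui}"
git clone https://github.com/volotat/SD-CN-Animation.git \
    "$WEBUI_DIR/extensions/SD-CN-Animation"
```

Restart the web-ui afterwards so the new tab appears.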

Last version changes: v0.8

  • Better error handling: fixes an issue where errors might not appear in the console.
  • Fixed an issue with deprecated variables; this should resolve problems running the extension on other web-ui forks.
  • Slight improvements in vid2vid processing pipeline.
  • Video preview added to the UI. It becomes available at the end of processing.
  • Time elapsed/left indication added.
  • Fixed an issue with color drifting on some models.
  • Sampler type and sampling steps settings added to text2video mode.
  • Added automatic resizing before processing with RAFT and FloweR models.