diff --git a/examples/cn_settings.png b/examples/cn_settings.png
new file mode 100644
index 0000000..322d0b3
Binary files /dev/null and b/examples/cn_settings.png differ
diff --git a/readme.md b/readme.md
index c33b4b4..f18b02f 100644
--- a/readme.md
+++ b/readme.md
@@ -4,7 +4,9 @@ This project allows you to automate video stylization task using StableDiffusion
 ![sd-cn-animation ui preview](examples/ui_preview.png)
 sd-cn-animation ui preview
 
-Note: In vid2vid mode do not forget to activate any ControlNet model to achieve better results. Without it the resulting video might be quite choppy. I personally prefer to use 'hed' model with 0.65 control strength.
+**In vid2vid mode, do not forget to activate a ControlNet model to achieve better results. Without it the resulting video might be quite choppy.**
+Here are the CN parameters that seem to give the best results so far:
+![sd-cn-animation cn params](examples/cn_settings.png)
 
 ### Video to Video Examples:
@@ -62,4 +64,9 @@ To install the extension go to 'Extensions' tab in [Automatic1111 web-ui](https:
 * Time elapsed/left indication added.
 * Fixed an issue with color drifting on some models.
 * Sampler type and sampling steps settings added to text2video mode.
-* Added automatic resizing before processing with RAFT and FloweR models.
\ No newline at end of file
+* Added automatic resizing before processing with RAFT and FloweR models.
+
+ 
\ No newline at end of file