This project allows you to automate video stylization tasks using StableDiffusion.

*sd-cn-animation ui preview*
**Note: In vid2vid mode, do not forget to activate a ControlNet model to achieve better results. Without it the resulting video might be quite choppy.** I personally prefer to use the 'hed' model with a control strength of 0.65.
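As a concrete illustration, activating ControlNet for a single img2img call through the Automatic1111 web-ui API could look like the sketch below. The `alwayson_scripts` payload layout follows the sd-webui-controlnet API convention; the ControlNet model filename and the prompt are placeholder assumptions, not values shipped with this extension.

```python
import json

def build_img2img_payload(init_image_b64: str, prompt: str) -> dict:
    """Sketch of an img2img payload with ControlNet enabled:
    'hed' preprocessor at 0.65 weight, as recommended above."""
    return {
        "init_images": [init_image_b64],
        "prompt": prompt,
        "denoising_strength": 0.75,
        "alwayson_scripts": {
            "controlnet": {
                "args": [
                    {
                        "module": "hed",   # preprocessor recommended above
                        # placeholder model name -- pick whichever softedge/hed
                        # ControlNet checkpoint you actually have installed
                        "model": "control_v11p_sd15_softedge",
                        "weight": 0.65,    # control strength recommended above
                    }
                ]
            }
        },
    }

payload = build_img2img_payload("<base64-encoded frame>", "a stylized city street")
print(json.dumps(payload["alwayson_scripts"]["controlnet"], indent=2))
```

In practice this payload would be POSTed to the web-ui's `/sdapi/v1/img2img` endpoint once per frame; the extension's UI does the equivalent for you when a ControlNet unit is enabled.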
Here are the ControlNet parameters that seem to give the best results so far:

*best CN params preview*
### Video to Video Examples:
To install the extension, go to the 'Extensions' tab in the Automatic1111 web-ui.
* Time elapsed/left indication added.
* Fixed an issue with color drifting on some models.
* Sampler type and sampling steps settings added to text2video mode.
* Added automatic resizing before processing with RAFT and FloweR models.
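The automatic resizing mentioned in the changelog can be sketched as follows. Optical-flow networks such as RAFT typically require input dimensions divisible by 8, so frames are snapped to a valid size before flow estimation; the divisor and the round-down behavior here are assumptions for illustration, not the extension's exact implementation.

```python
def snap_to_multiple(x: int, base: int = 8) -> int:
    """Round a dimension down to the nearest multiple of `base` (minimum `base`)."""
    return max(base, (x // base) * base)

def flow_input_size(width: int, height: int, base: int = 8) -> tuple:
    # Flow models such as RAFT typically expect H and W divisible by `base`;
    # the actual divisor used by the extension is an assumption here.
    return snap_to_multiple(width, base), snap_to_multiple(height, base)

print(flow_input_size(1280, 714))  # -> (1280, 712)
```

The resized frames are what get passed to RAFT/FloweR; the final output can be scaled back to the original resolution afterwards.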
<!--
## Last version changes: v0.9
* Issue #76 fixed.
-->