WebUI extension for ControlNet. Note: this is a work in progress (WIP), so don't expect seed reproducibility; updates may change results.
sd-webui-controlnet

(WIP) WebUI extension for ControlNet

This extension is for AUTOMATIC1111's Stable Diffusion web UI; it allows the Web UI to add ControlNet conditioning to the original Stable Diffusion model when generating images. The addition is on-the-fly; no model merging is required.

ControlNet is a neural network structure to control diffusion models by adding extra conditions.

Thanks & inspired by: kohya-ss/sd-webui-additional-networks

Limits

  • Dragging a large file onto the Web UI may freeze the entire page; it is better to use the upload-file option instead.
  • Just like the WebUI's hijack, we use interpolation to accept arbitrary size configurations (see scripts/cldm.py).
  • Processor thresholds are set in scripts/processor.py. Change them if needed.
  • Transfer control (for full models) is disabled by default. This option is experimental and may change in the future.

Install

Some users may need to install the cv2 library before using it: pip install opencv-python

  1. Open "Extensions" tab.
  2. Open the "Install from URL" tab within it.
  3. Enter the URL of this repo into "URL for extension's git repository".
  4. Press the "Install" button.
  5. Reload/Restart Web UI.

Usage

  1. Put the ControlNet models (.pt, .pth, .ckpt, or .safetensors) inside the sd-webui-controlnet/models folder.
  2. Open the "txt2img" or "img2img" tab and write your prompts.
  3. Press "Refresh models" and select the model you want to use. (If nothing appears, try reloading/restarting the webui.)
  4. Upload your image, select a preprocessor, and you're done.
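Step 1 can be sanity-checked with a small sketch that scans the models folder for the supported extensions. The folder path and the example filename below are assumptions for illustration only.

```python
# Sketch: list the model files the extension would pick up from a folder.
# The folder layout and the example filename are illustrative assumptions.
from pathlib import Path
import tempfile

SUPPORTED = {".pt", ".pth", ".ckpt", ".safetensors"}

def find_models(folder: str) -> list[str]:
    """Return sorted names of recognized model files in `folder`."""
    return sorted(
        p.name for p in Path(folder).glob("*") if p.suffix.lower() in SUPPORTED
    )

# Usage with a hypothetical models folder:
tmp = tempfile.mkdtemp()
Path(tmp, "control_sd15_canny.safetensors").touch()  # hypothetical model file
Path(tmp, "readme.txt").touch()                      # ignored: not a model
models = find_models(tmp)
```

If a file you placed in sd-webui-controlnet/models does not appear in the UI dropdown, checking its extension against the supported set is a good first step.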

Both full models and trimmed models are currently supported. Use extract_controlnet.py to extract a ControlNet from an original .pth file.

Pretrained Models: https://huggingface.co/lllyasviel/ControlNet/tree/main/models

Extraction

Two methods can be used to reduce the model's file size:

  1. Directly extract the ControlNet from the original .pth file using extract_controlnet.py.

  2. Transfer control from the original checkpoint by computing the difference, using extract_controlnet_diff.py.

All types of models are correctly recognized and loaded. The results of the different extraction methods are discussed in https://github.com/lllyasviel/ControlNet/discussions/12 and https://github.com/Mikubill/sd-webui-controlnet/issues/73.
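Conceptually, the two methods differ in what they store: method 1 keeps only the control branch's weights, while method 2 keeps the per-weight difference against the base checkpoint. Here is a toy sketch using plain floats in place of tensors; the "control_model." key prefix and the exact arithmetic are assumptions, so see extract_controlnet.py and extract_controlnet_diff.py for the real logic.

```python
# Toy sketch of the two size-reduction strategies, using plain floats
# instead of tensors. Key names and arithmetic are illustrative only;
# extract_controlnet.py / extract_controlnet_diff.py hold the real logic.

def extract(full_ckpt: dict) -> dict:
    """Method 1: keep only the control branch's weights."""
    prefix = "control_model."  # assumed key prefix, for illustration
    return {k: v for k, v in full_ckpt.items() if k.startswith(prefix)}

def extract_diff(full_ckpt: dict, base_ckpt: dict) -> dict:
    """Method 2: store the difference against the base checkpoint for
    shared keys, so the control can be transferred onto another model."""
    return {k: full_ckpt[k] - base_ckpt[k] for k in full_ckpt if k in base_ckpt}

full = {"control_model.conv.weight": 1.5, "model.diffusion.weight": 2.0}
base = {"model.diffusion.weight": 1.75}
trimmed = extract(full)          # method 1: only the control branch survives
diff = extract_diff(full, base)  # method 2: difference vs. the base weights
```

Either way, the non-control bulk of the original checkpoint is dropped from the saved file, which is where the size reduction comes from.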

Pre-extracted model: https://huggingface.co/webui/ControlNet-modules-safetensors

Pre-extracted difference model: https://huggingface.co/kohya-ss/ControlNet-diff-modules

Examples

(Example image table: Source / Input / Output columns; two of the examples use no preprocessor.)

Minimum Requirements

  • (Windows) (NVIDIA: Ampere) 4 GB VRAM - with --xformers enabled and Low VRAM mode ticked in the UI, works up to 768x832.