These scripts allow dynamic CFG control across generation steps. With the right settings, this can capture the detail of a high CFG scale without damaging the generated image, even at low denoising strength in img2img.
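As a rough sketch of the idea, a CFG schedule is just a function from the current step to a guidance scale; the function name and linear schedule below are illustrative, not the extension's actual API:

```python
def dynamic_cfg(step, total_steps, cfg_start=12.0, cfg_end=5.0):
    """Linearly interpolate the CFG scale from cfg_start to cfg_end
    across the sampling steps (illustrative schedule, not the real API)."""
    t = step / max(total_steps - 1, 1)
    return cfg_start + (cfg_end - cfg_start) * t
```

Starting high and ending low keeps strong prompt adherence early while letting later steps refine details gently.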
A depth-aware extension that helps create multiple complex subjects in a single image. It generates a background, then multiple foreground subjects, cuts out their backgrounds after a depth analysis, pastes them onto the background, and finally runs an img2img pass for a clean finish.
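The cut-and-paste step boils down to thresholding a depth map into a foreground mask and compositing; a minimal sketch with illustrative names (not the extension's code):

```python
import numpy as np

def composite_by_depth(background, subject, depth, threshold=0.5):
    """Paste `subject` pixels onto `background` wherever the depth map
    marks them as near/foreground (depth below `threshold`).
    Illustrative sketch; real depth maps need per-subject calibration."""
    mask = depth < threshold          # boolean foreground mask from depth analysis
    out = background.copy()
    out[mask] = subject[mask]
    return out
```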
Merge models with a separate ratio for each of the 25 U-Net blocks (input, middle, output).
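Conceptually this is a weighted average of two state dicts where the blend ratio depends on which U-Net block a tensor belongs to. A simplified sketch using one ratio per block group (the real extension exposes 25 individual weights; key patterns and names here are illustrative):

```python
def merge_block_weighted(theta_a, theta_b, input_w, middle_w, output_w):
    """Blend two model state dicts with a per-block-group ratio.
    Sketch only: one weight per group instead of 25 per-block weights."""
    merged = {}
    for key, a in theta_a.items():
        b = theta_b[key]
        if "input_blocks" in key:
            w = input_w
        elif "middle_block" in key:
            w = middle_w
        else:
            w = output_w
        merged[key] = (1 - w) * a + w * b
    return merged
```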
An extension for managing custom depth inputs to Stable Diffusion depth2img models.
Adds a button to convert prompts used in NovelAI for use in the WebUI, plus a button to recall a previously used prompt.
DAAM stands for Diffusion Attentive Attribution Maps. Enter the attention text (must be a string contained in the prompt) and run. An overlay image with a heatmap for each attention word will be generated alongside the original image.
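The final overlay step amounts to alpha-blending a normalized attention map over the generated image; a minimal sketch rendering the heatmap as a red tint (illustrative, not DAAM's actual visualization code):

```python
import numpy as np

def overlay_heatmap(image, heatmap, alpha=0.5):
    """Blend a per-pixel attention heatmap (values in [0, 1]) over an
    RGB uint8 image as a red tint. Illustrative sketch only."""
    overlay = image.astype(float).copy()
    # push the red channel toward 255 in proportion to attention strength
    overlay[..., 0] = (1 - alpha * heatmap) * overlay[..., 0] + alpha * heatmap * 255.0
    return overlay.astype(np.uint8)
```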
Old unmaintained localizations that used to be part of the main repository.
Use transformers models to generate prompts.
Adds the ability to apply multiple hypernetworks at once; they are applied sequentially, each with its own weight.
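Sequential application with per-network weights can be sketched as folding each hypernetwork's transform into the activations, blended by its weight; the function shape below is illustrative, not the extension's actual API:

```python
def apply_hypernetworks(context, hypernetworks):
    """Apply several hypernetwork transforms in sequence, each blended
    into the activations by its own weight.
    `hypernetworks` is a list of (transform_fn, weight) pairs (illustrative)."""
    for fn, weight in hypernetworks:
        context = (1 - weight) * context + weight * fn(context)
    return context
```

Because the networks are chained rather than averaged, order matters: each transform sees the output of the previous one.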
Allows training an embedding from one or a few pictures, specifically meant for applying styles, and then using those embeddings to generate images.
Random patches by D8ahazard: auto-load config YAML files for v2 and 2.1 models; patch latent-diffusion to fix attention on 2.1 models (black boxes without --no-half); whatever else I come up with.
Finnish localization
Brazilian Portuguese localization
Russian localization
Generates highlighted sectors of a submitted input image based on input prompts. Use with the tokenizer extension. See the readme for more info.