Edit the pose of 3D models in the WebUI, and generate Openpose/Depth/Normal/Canny maps for ControlNet.
Temporal Coherence Tools
Create masks for img2img based on a depth estimation made by MiDaS.
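A minimal sketch of how a depth estimate might be turned into an img2img mask. The `near_frac` threshold is a hypothetical knob for illustration, and the synthetic 2x2 array stands in for a real MiDaS inverse-depth map (where larger values mean closer to the camera):

```python
import numpy as np

def depth_to_mask(depth: np.ndarray, near_frac: float = 0.5) -> np.ndarray:
    """Binarize a depth map: keep pixels closer than a relative threshold.

    `depth` is assumed to be a MiDaS-style inverse-depth map; the threshold
    is a fraction of the observed depth range (an illustrative assumption,
    not the extension's actual option).
    """
    lo, hi = depth.min(), depth.max()
    threshold = lo + near_frac * (hi - lo)
    return (depth >= threshold).astype(np.uint8) * 255  # white = masked

# Example: a synthetic 2x2 "depth" map; the right column is nearer
depth = np.array([[0.1, 0.9], [0.2, 0.8]])
mask = depth_to_mask(depth)
```

The resulting white regions would then be fed to img2img as the inpainting mask.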
An extension of the built-in Composable Diffusion that lets you control which region of the latent space reflects each of your subprompts. Note: the extension has a new maintainer; uninstall the previous extension if needed.
This extension converts images into blocks and creates schematics for easy import into Minecraft using the Litematica mod.
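The core image-to-blocks step can be sketched as a nearest-color match against a block palette. The three-entry palette and its RGB values here are illustrative assumptions; the real extension uses the full Minecraft block palette and emits Litematica's schematic format:

```python
import numpy as np

# Hypothetical mini-palette: block name -> representative RGB color
PALETTE = {
    "stone": (125, 125, 125),
    "oak_planks": (162, 130, 78),
    "grass_block": (96, 153, 54),
}

def nearest_block(rgb) -> str:
    """Map one pixel to the palette block with the closest RGB color."""
    rgb = np.asarray(rgb, dtype=float)
    names = list(PALETTE)
    colors = np.array([PALETTE[n] for n in names], dtype=float)
    dists = np.sum((colors - rgb) ** 2, axis=1)  # squared Euclidean distance
    return names[int(np.argmin(dists))]

def image_to_blocks(image: np.ndarray):
    """Convert an (H, W, 3) image into an (H, W) grid of block names."""
    h, w, _ = image.shape
    return [[nearest_block(image[y, x]) for x in range(w)] for y in range(h)]

grid = image_to_blocks(np.array([[[120, 120, 120], [100, 150, 60]]], dtype=np.uint8))
```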
Generate hallways with the depth-to-image model at 512 resolution. It can be tweaked to work with other models/resolutions.
A pre-trained model that classifies images as aesthetic or non-aesthetic, offers 5 different style-recognition modes, and performs waifu confirmation. Also has a batch-processing tab.
Applies mirroring and flips to the latent images to produce anything from subtle balanced compositions to perfect reflections.
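The mirroring idea can be sketched as blending a latent with its own flip. The `strength` parameter and axis convention are illustrative assumptions, not the extension's actual options:

```python
import numpy as np

def mirror_latent(latent: np.ndarray, strength: float = 0.25) -> np.ndarray:
    """Blend a latent with its horizontal flip.

    strength=0 leaves the latent untouched; strength=0.5 yields a perfect
    reflection (both halves equal). Names here are illustrative only.
    """
    flipped = np.flip(latent, axis=-1)  # flip along the width axis
    return (1.0 - strength) * latent + strength * flipped

latent = np.arange(4, dtype=np.float32).reshape(1, 1, 1, 4)  # (B, C, H, W)
out = mirror_latent(latent, strength=0.5)
```

At full strength the output is symmetric, which after decoding produces a perfectly reflected image.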
Inspect any token (a word) or Textual Inversion embedding and find out which embeddings are similar. You can mix, modify, or create embeddings in seconds.
These scripts allow dynamic CFG control during the generation steps. With the right settings, this can help capture the detail of a high CFG scale without damaging the generated image, even with low denoising in img2img.
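A minimal sketch of the idea, using the standard classifier-free guidance combination with a per-step scale. The linear ramp is a hypothetical schedule for illustration; the extension's actual scheduling options may differ:

```python
import numpy as np

def guided_noise(uncond, cond, cfg_scale):
    """Standard classifier-free guidance combination of noise predictions."""
    return uncond + cfg_scale * (cond - uncond)

def linear_cfg_schedule(start: float, end: float, steps: int):
    """A hypothetical linear ramp for the CFG scale across sampling steps."""
    return np.linspace(start, end, steps)

# Example: ramp CFG down from 12 to 6 over 4 steps, so early steps get
# strong guidance and later steps refine with a gentler scale
schedule = linear_cfg_schedule(12.0, 6.0, 4)
uncond, cond = np.array([0.0]), np.array([1.0])
outputs = [guided_noise(uncond, cond, s)[0] for s in schedule]
```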
A depth-aware extension that helps create multiple complex subjects in a single image. It generates a background, then multiple foreground subjects, cuts out their backgrounds after a depth analysis, pastes them onto the background, and finally runs an img2img pass for a clean finish.
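The cut-and-paste step of that pipeline can be sketched as a masked composite. The array shapes and binary-mask convention are assumptions; the extension derives the mask from its depth analysis:

```python
import numpy as np

def composite(background: np.ndarray, subject: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Paste `subject` over `background` wherever `mask` is nonzero.

    `background` and `subject` are (H, W, 3) images; `mask` is (H, W).
    """
    keep = (mask > 0)[..., None]  # broadcast the mask over color channels
    return np.where(keep, subject, background)

bg = np.zeros((1, 2, 3), dtype=np.uint8)       # black background
fg = np.full((1, 2, 3), 255, dtype=np.uint8)   # white subject
mask = np.array([[1, 0]])                      # keep subject on the left only
out = composite(bg, fg, mask)
```

Each subject would be composited this way in turn before the final img2img pass blends the seams.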
Merge models with a separate rate for each of the 25 U-Net blocks (input, middle, output).
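The per-block merge idea can be sketched as a weighted average whose rate depends on which block a parameter belongs to. The key prefixes follow Stable Diffusion's U-Net naming (`input_blocks.N`, `middle_block`, `output_blocks.N`); the fallback rate and toy state dicts are illustrative assumptions:

```python
def block_alpha(param_name: str, alphas: dict) -> float:
    """Pick a merge rate from the U-Net block a parameter belongs to."""
    for prefix, alpha in alphas.items():
        if param_name.startswith(prefix):
            return alpha
    return 0.5  # hypothetical default for unmatched parameters

def merge_models(a: dict, b: dict, alphas: dict) -> dict:
    """Weighted merge: out = (1 - alpha) * A + alpha * B, per block."""
    return {k: (1 - block_alpha(k, alphas)) * a[k] + block_alpha(k, alphas) * b[k]
            for k in a}

# Toy "state dicts" with one scalar weight per block
a = {"input_blocks.0.w": 0.0, "middle_block.w": 0.0}
b = {"input_blocks.0.w": 1.0, "middle_block.w": 1.0}
merged = merge_models(a, b, {"input_blocks.0": 0.2, "middle_block": 1.0})
```

With 25 distinct rates you can, for example, take composition-heavy inner blocks from one model and texture-heavy outer blocks from another.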
An extension for managing custom depth inputs to Stable Diffusion depth2img models.
Adds a button to convert prompts used in NovelAI for use in the WebUI, and another button that recalls a previously used prompt.
DAAM stands for Diffusion Attentive Attribution Maps. Enter the attention text (which must be a string contained in the prompt) and run; a heatmap overlay for each attention word is generated alongside the original image.
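The final overlay step (not DAAM's attention-attribution computation itself) might look like the following sketch; rendering the heat into the red channel and the `alpha` parameter are illustrative assumptions:

```python
import numpy as np

def overlay_heatmap(image: np.ndarray, heat: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Blend a normalized heatmap (rendered into the red channel) over an image."""
    span = heat.max() - heat.min()
    norm = (heat - heat.min()) / (span + 1e-8)  # scale heatmap to [0, 1]
    overlay = np.zeros_like(image, dtype=float)
    overlay[..., 0] = norm * 255  # red channel carries the attention heat
    out = (1 - alpha) * image + alpha * overlay
    return out.astype(np.uint8)

img = np.zeros((1, 2, 3), dtype=np.uint8)   # black 1x2 image
heat = np.array([[0.0, 1.0]])               # attention only on the right pixel
out = overlay_heatmap(img, heat, alpha=0.5)
```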
Old, unmaintained localizations that used to be part of the main repository.
Use transformers models to generate prompts.
Adds the ability to apply multiple hypernetworks at once, applying them sequentially with different weights.
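Sequential application with per-network weights might be sketched as a weighted chain, where each weight blends a hypernetwork's output with its input. The blending form and the toy scalar "hypernetworks" are assumptions, not the extension's exact math:

```python
def apply_hypernetworks(x: float, hypernets, weights) -> float:
    """Apply each hypernetwork in turn, blending its output by a weight.

    Weight 0 is a no-op for that network; weight 1 uses its output fully.
    """
    for hn, w in zip(hypernets, weights):
        x = (1 - w) * x + w * hn(x)
    return x

# Toy hypernetworks: simple scalar transforms standing in for real modules
double = lambda v: 2 * v
shift = lambda v: v + 10
out = apply_hypernetworks(1.0, [double, shift], [1.0, 0.5])
```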
Allows training an embedding from one or a few pictures, specifically meant for capturing styles, and lets you use these embeddings to generate images.