Edit the pose of 3D models in the WebUI, and generate Openpose/Depth/Normal/Canny maps for ControlNet.

Updated 2023-04-15 13:21:06 +00:00

Temporal Coherence Tools

Updated 2023-04-15 04:28:34 +00:00

Create masks for img2img based on a depth estimation made by MiDaS.
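A minimal sketch of the core idea: thresholding a monocular depth estimate (as produced by a model like MiDaS) into a binary img2img mask. The function name and threshold convention are assumptions for illustration, not the extension's actual API.

```python
import numpy as np

def depth_to_mask(depth, threshold=0.5, invert=False):
    """Turn a depth estimate into a binary img2img mask.

    `depth` is a 2-D array from a monocular depth model such as MiDaS
    (larger values = closer, by the convention assumed here). Pixels
    closer than `threshold` (after 0-1 normalization) become white,
    i.e. the region img2img will repaint; `invert` flips that.
    """
    d = depth.astype(np.float32)
    d = (d - d.min()) / max(d.max() - d.min(), 1e-8)  # normalize to [0, 1]
    mask = (d >= threshold).astype(np.uint8) * 255    # white = repaint region
    return 255 - mask if invert else mask

# Toy 2x2 "depth map": right column is closer than the threshold
demo = np.array([[0.1, 0.9], [0.4, 0.6]])
print(depth_to_mask(demo, threshold=0.5))
```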

Updated 2023-04-13 04:10:08 +00:00

Extends the built-in Composable Diffusion, letting you assign each subprompt to a region of the latent space. Note: new maintainer; uninstall the previous extension if needed.

Updated 2023-04-02 11:24:25 +00:00

This extension converts images into blocks and creates schematics for easy importing into Minecraft using the Litematica mod.

Updated 2023-03-31 14:38:41 +00:00

Generate hallways with the depth-to-image model at 512 resolution. It can be tweaked to work with other models/resolutions.

Updated 2023-03-28 23:58:24 +00:00

Uses a pre-trained model to classify images as aesthetic or non-aesthetic, recognize five different styles, and confirm whether an image is a waifu. Also includes a batch-processing tab.

Updated 2023-03-26 13:36:41 +00:00

Applies mirroring and flips to the latent image to produce anything from subtly balanced compositions to perfect reflections.
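The blending step can be sketched as follows; the function name and the NumPy stand-in for the latent tensor are hypothetical, but the technique (interpolating a latent with its flipped copy) is what the description refers to.

```python
import numpy as np

def mirror_latent(latent, strength=0.25, axis=-1):
    """Blend a latent with its mirror image.

    strength=0 leaves the latent untouched; strength=0.5 gives a
    perfectly symmetric (reflected) result; values in between nudge
    the composition toward balance. `axis=-1` mirrors horizontally
    for a (C, H, W) latent.
    """
    mirrored = np.flip(latent, axis=axis)
    return (1.0 - strength) * latent + strength * mirrored

z = np.array([[1.0, 3.0]])             # toy 1x2 "latent"
print(mirror_latent(z, strength=0.5))  # fully symmetric: [[2. 2.]]
```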

Updated 2023-03-25 14:37:33 +00:00


Inspect any token (a word) or Textual Inversion embedding and find out which embeddings are similar. You can mix, modify, or create embeddings in seconds.
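A minimal sketch of the two operations involved: cosine similarity between embedding vectors, and mixing embeddings as a weighted sum. Function names are hypothetical; the extension's actual interface works on the WebUI's embedding files.

```python
import numpy as np

def similarity(a, b):
    """Cosine similarity between two embedding vectors (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def mix_embeddings(embeddings, weights):
    """Create a new embedding as a weighted sum of existing ones."""
    return sum(w * e for w, e in zip(weights, embeddings))

a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])
print(similarity(a, a))                        # identical vectors -> 1.0
print(mix_embeddings([a, b], [0.5, 0.5]))      # halfway blend -> [0.5 0.5]
```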

Updated 2023-03-18 12:01:18 +00:00

These scripts allow dynamic CFG control across generation steps. With the right settings, this can help retain the detail of a high CFG scale without damaging the generated image, even with low denoising in img2img.
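One common form of dynamic CFG, sketched under assumptions (a linear decay schedule; the scripts support other curves and settings). The guidance formula itself is standard classifier-free guidance; the function name and parameters are illustrative.

```python
import numpy as np

def scheduled_cfg(uncond, cond, step, total_steps, cfg_start=12.0, cfg_end=5.0):
    """Classifier-free guidance with a per-step guidance scale.

    Linearly decays the CFG scale from cfg_start to cfg_end over the
    sampling steps, so early steps get strong guidance (composition,
    detail) while late steps avoid the artifacts high CFG can cause.
    """
    t = step / max(total_steps - 1, 1)
    scale = cfg_start + (cfg_end - cfg_start) * t
    # Standard CFG combine: push the prediction away from the unconditional one
    return uncond + scale * (cond - uncond)
```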

Updated 2023-03-09 10:43:02 +00:00

A depth-aware extension that helps compose multiple complex subjects in a single image. It generates a background, then multiple foreground subjects, cuts out their backgrounds after a depth analysis, pastes them onto the background, and finally runs img2img for a clean finish.
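The cut-and-paste step above can be sketched like this: use the subject's (normalized) depth map as a foreground mask and composite it onto the background. Names and the thresholding convention are assumptions for illustration; the extension's final img2img pass then blends the seams.

```python
import numpy as np

def composite_by_depth(background, subject, subject_depth, threshold=0.5):
    """Paste a generated subject onto a background using its depth map.

    Pixels of `subject` whose normalized depth exceeds `threshold` are
    treated as foreground (close to camera) and pasted; everywhere
    else the background shows through.
    """
    d = subject_depth.astype(np.float32)
    d = (d - d.min()) / max(d.max() - d.min(), 1e-8)  # normalize to [0, 1]
    fg = d >= threshold                               # foreground mask
    out = background.copy()
    out[fg] = subject[fg]
    return out
```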

Updated 2023-03-06 14:11:30 +00:00

Merge models with a separate rate for each of the 25 U-Net blocks (input, middle, output).
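A minimal sketch of a per-block weighted merge. The state-dict key naming scheme here is a simplified stand-in (real SD checkpoints prefix keys with `model.diffusion_model.`), and the weight list is an arbitrary example: 0 keeps model A, 1 takes model B.

```python
import numpy as np

# Hypothetical weights for IN00-IN11, M00, OUT00-OUT11 (25 values)
BLOCK_WEIGHTS = [0.0] * 12 + [0.5] + [1.0] * 12

def block_of(key):
    """Map a state-dict key to one of the 25 U-Net block indices (toy key scheme)."""
    parts = key.split(".")
    if parts[0] == "input_blocks":
        return int(parts[1])            # IN00..IN11 -> 0..11
    if parts[0] == "middle_block":
        return 12                       # M00
    if parts[0] == "output_blocks":
        return 13 + int(parts[1])       # OUT00..OUT11 -> 13..24
    return None

def merge(model_a, model_b):
    """Interpolate two state dicts, using the weight of each tensor's block."""
    merged = {}
    for key, wa in model_a.items():
        idx = block_of(key)
        w = BLOCK_WEIGHTS[idx] if idx is not None else 0.5
        merged[key] = (1 - w) * wa + w * model_b[key]
    return merged
```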

Updated 2023-03-02 13:57:46 +00:00

An extension to allow managing custom depth inputs to Stable Diffusion depth2img models.

Updated 2023-02-04 06:52:38 +00:00

Adds a button to convert prompts used in NovelAI for use in the WebUI, plus a button that recalls a previously used prompt.

Updated 2023-01-28 06:33:03 +00:00

DAAM stands for Diffusion Attentive Attribution Maps. Enter the attention text (which must be a substring of the prompt) and run; an overlay image with a heatmap for each attention word is generated alongside the original image.

Updated 2023-01-26 04:20:48 +00:00

Old, unmaintained localizations that used to be part of the main repository.

Updated 2023-01-20 14:17:33 +00:00

Use transformers models to generate prompts.

Updated 2023-01-20 11:15:12 +00:00

Adds the ability to apply multiple hypernetworks at once, applying them sequentially with different weights.
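A sketch of sequential weighted application: each hypernetwork is modeled here as a function producing a residual on the cross-attention context, scaled by its weight and fed into the next. All names are hypothetical; the real extension hooks into the WebUI's attention layers.

```python
import numpy as np

def apply_hypernetworks(context, hypernetworks):
    """Apply several hypernetworks in sequence, each with its own weight.

    `hypernetworks` is a list of (fn, weight) pairs, where fn maps the
    cross-attention context to a residual. Each network sees the output
    of the previous one; its contribution is scaled by `weight`.
    """
    for fn, weight in hypernetworks:
        context = context + weight * fn(context)
    return context
```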

Updated 2023-01-10 06:48:35 +00:00

Allows training an embedding from one or a few pictures, specifically meant for capturing styles, and then using those embeddings to generate images.

Updated 2023-01-06 10:59:30 +00:00