decouple from origin

pull/36/head
Vladimir Mandic 2023-01-05 12:17:31 -05:00
parent af55cf273c
commit a70b6d0013
49 changed files with 1015 additions and 820 deletions

.gitignore (vendored, 38 changes)

@@ -2,33 +2,13 @@ __pycache__
*.ckpt
*.safetensors
*.pth
/ESRGAN/*
/SwinIR/*
/repositories
/venv
/tmp
/model.ckpt
/models/**/*
/GFPGANv1.3.pth
/gfpgan/weights/*.pth
/ui-config.json
/outputs
/config.json
/log
/webui.settings.bat
/embeddings
/styles.csv
*.pt
*.bin
/params.txt
/styles.csv.bak
/webui-user.bat
/webui-user.sh
/interrogate
/user.css
/.idea
notification.mp3
/SwinIR
/textual_inversion
.vscode
/extensions
/test/stdout.txt
/test/stderr.txt
/repositories/*
/embeddings/*
/extensions/*
/outputs/*
/models/**/*
/tmp

@@ -1,12 +0,0 @@
* @AUTOMATIC1111
# if you were managing a localization and were removed from this file, this is because
# the intended way to do localizations now is via extensions. See:
# https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Developing-extensions
# Make a repo with your localization and since you are still listed as a collaborator
# you can add it to the wiki page yourself. This change is because some people complained
# the git commit log is cluttered with things unrelated to almost everyone and
# because I believe this is the best overall for the project to handle localizations almost
# entirely without my oversight.

README.md (156 changes)

@@ -1,153 +1,3 @@
# Stable Diffusion web UI
A browser interface for Stable Diffusion, based on the Gradio library.
![](txt2img_Screenshot.png)
Check the [custom scripts](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Custom-Scripts) wiki page for extra scripts developed by users.
## Features
[Detailed feature showcase with images](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features):
- Original txt2img and img2img modes
- One click install and run script (but you still must install python and git)
- Outpainting
- Inpainting
- Color Sketch
- Prompt Matrix
- Stable Diffusion Upscale
- Attention, specify parts of text that the model should pay more attention to
- a man in a ((tuxedo)) - will pay more attention to tuxedo
- a man in a (tuxedo:1.21) - alternative syntax
- select text and press ctrl+up or ctrl+down to automatically adjust attention to selected text (code contributed by anonymous user)
- Loopback, run img2img processing multiple times
- X/Y plot, a way to draw a 2 dimensional plot of images with different parameters
- Textual Inversion
- have as many embeddings as you want and use any names you like for them
- use multiple embeddings with different numbers of vectors per token
- works with half precision floating point numbers
- train embeddings on 8GB (also reports of 6GB working)
- Extras tab with:
- GFPGAN, neural network that fixes faces
- CodeFormer, face restoration tool as an alternative to GFPGAN
- RealESRGAN, neural network upscaler
- ESRGAN, neural network upscaler with a lot of third party models
- SwinIR and Swin2SR ([see here](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/2092)), neural network upscalers
- LDSR, Latent diffusion super resolution upscaling
- Resizing aspect ratio options
- Sampling method selection
- Adjust sampler eta values (noise multiplier)
- More advanced noise setting options
- Interrupt processing at any time
- 4GB video card support (also reports of 2GB working)
- Correct seeds for batches
- Live prompt token length validation
- Generation parameters
- parameters you used to generate images are saved with that image
- in PNG chunks for PNG, in EXIF for JPEG
- can drag the image to PNG info tab to restore generation parameters and automatically copy them into UI
- can be disabled in settings
- drag and drop an image/text-parameters to promptbox
- Read Generation Parameters Button, loads parameters in promptbox to UI
- Settings page
- Running arbitrary python code from UI (must run with --allow-code to enable)
- Mouseover hints for most UI elements
- Possible to change defaults/min/max/step values for UI elements via text config
- Random artist button
- Tiling support, a checkbox to create images that can be tiled like textures
- Progress bar and live image generation preview
- Negative prompt, an extra text field that allows you to list what you don't want to see in generated image
- Styles, a way to save part of prompt and easily apply them via dropdown later
- Variations, a way to generate same image but with tiny differences
- Seed resizing, a way to generate same image but at slightly different resolution
- CLIP interrogator, a button that tries to guess prompt from an image
- Prompt Editing, a way to change prompt mid-generation, say to start making a watermelon and switch to anime girl midway
- Batch Processing, process a group of files using img2img
- Img2img Alternative, reverse Euler method of cross attention control
- Highres Fix, a convenience option to produce high resolution pictures in one click without usual distortions
- Reloading checkpoints on the fly
- Checkpoint Merger, a tab that allows you to merge up to 3 checkpoints into one
- [Custom scripts](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Custom-Scripts) with many extensions from community
- [Composable-Diffusion](https://energy-based-model.github.io/Compositional-Visual-Generation-with-Composable-Diffusion-Models/), a way to use multiple prompts at once
- separate prompts using uppercase `AND`
- also supports weights for prompts: `a cat :1.2 AND a dog AND a penguin :2.2`
- No token limit for prompts (original stable diffusion lets you use up to 75 tokens)
- DeepDanbooru integration, creates danbooru style tags for anime prompts
- [xformers](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Xformers), major speed increase for select cards: (add --xformers to commandline args)
- via extension: [History tab](https://github.com/yfszzx/stable-diffusion-webui-images-browser): view, direct and delete images conveniently within the UI
- Generate forever option
- Training tab
- hypernetworks and embeddings options
- Preprocessing images: cropping, mirroring, autotagging using BLIP or deepdanbooru (for anime)
- Clip skip
- Use Hypernetworks
- Use VAEs
- Estimated completion time in progress bar
- API
- Support for dedicated [inpainting model](https://github.com/runwayml/stable-diffusion#inpainting-with-stable-diffusion) by RunwayML.
- via extension: [Aesthetic Gradients](https://github.com/AUTOMATIC1111/stable-diffusion-webui-aesthetic-gradients), a way to generate images with a specific aesthetic by using CLIP image embeds (implementation of [https://github.com/vicgalle/stable-diffusion-aesthetic-gradients](https://github.com/vicgalle/stable-diffusion-aesthetic-gradients))
- [Stable Diffusion 2.0](https://github.com/Stability-AI/stablediffusion) support - see [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#stable-diffusion-20) for instructions
## Installation and Running
Make sure the required [dependencies](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Dependencies) are met and follow the instructions available for both [NVidia](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-NVidia-GPUs) (recommended) and [AMD](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-AMD-GPUs) GPUs.
Alternatively, use online services (like Google Colab):
- [List of Online Services](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Online-Services)
### Automatic Installation on Windows
1. Install [Python 3.10.6](https://www.python.org/downloads/windows/), checking "Add Python to PATH"
2. Install [git](https://git-scm.com/download/win).
3. Download the stable-diffusion-webui repository, for example by running `git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git`.
4. Place `model.ckpt` in the `models` directory (see [dependencies](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Dependencies) for where to get it).
5. _*(Optional)*_ Place `GFPGANv1.4.pth` in the base directory, alongside `webui.py` (see [dependencies](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Dependencies) for where to get it).
6. Run `webui-user.bat` from Windows Explorer as a normal, non-administrator user.
### Automatic Installation on Linux
1. Install the dependencies:
```bash
# Debian-based:
sudo apt install wget git python3 python3-venv
# Red Hat-based:
sudo dnf install wget git python3
# Arch-based:
sudo pacman -S wget git python3
```
2. To install in `/home/$(whoami)/stable-diffusion-webui/`, run:
```bash
bash <(wget -qO- https://raw.githubusercontent.com/AUTOMATIC1111/stable-diffusion-webui/master/webui.sh)
```
### Installation on Apple Silicon
Find the instructions [here](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Installation-on-Apple-Silicon).
## Contributing
Here's how to add code to this repo: [Contributing](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Contributing)
## Documentation
The documentation was moved from this README over to the project's [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki).
## Credits
Licenses for borrowed code can be found in `Settings -> Licenses` screen, and also in `html/licenses.html` file.
- Stable Diffusion - https://github.com/CompVis/stable-diffusion, https://github.com/CompVis/taming-transformers
- k-diffusion - https://github.com/crowsonkb/k-diffusion.git
- GFPGAN - https://github.com/TencentARC/GFPGAN.git
- CodeFormer - https://github.com/sczhou/CodeFormer
- ESRGAN - https://github.com/xinntao/ESRGAN
- SwinIR - https://github.com/JingyunLiang/SwinIR
- Swin2SR - https://github.com/mv-lab/swin2sr
- LDSR - https://github.com/Hafiidz/latent-diffusion
- MiDaS - https://github.com/isl-org/MiDaS
- Ideas for optimizations - https://github.com/basujindal/stable-diffusion
- Cross Attention layer optimization - Doggettx - https://github.com/Doggettx/stable-diffusion, original idea for prompt editing.
- Cross Attention layer optimization - InvokeAI, lstein - https://github.com/invoke-ai/InvokeAI (originally http://github.com/lstein/stable-diffusion)
- Textual Inversion - Rinon Gal - https://github.com/rinongal/textual_inversion (we're not using his code, but we are using his ideas).
- Idea for SD upscale - https://github.com/jquesnelle/txt2imghd
- Noise generation for outpainting mk2 - https://github.com/parlance-zz/g-diffuser-bot
- CLIP interrogator idea and borrowing some code - https://github.com/pharmapsychotic/clip-interrogator
- Idea for Composable Diffusion - https://github.com/energy-based-model/Compositional-Visual-Generation-with-Composable-Diffusion-Models-PyTorch
- xformers - https://github.com/facebookresearch/xformers
- DeepDanbooru - interrogator for anime diffusers https://github.com/KichangKim/DeepDanbooru
- Security advice - RyotaK
- Initial Gradio script - posted on 4chan by an Anonymous user. Thank you Anonymous user.
- (You)
# Stable Diffusion - Automatic
Custom fork of <https://github.com/AUTOMATIC1111/stable-diffusion-webui>

config.json (new file, 152 lines)

@@ -0,0 +1,152 @@
{
"samples_save": true,
"samples_format": "jpg",
"samples_filename_pattern": "",
"save_images_add_number": true,
"grid_save": true,
"grid_format": "jpg",
"grid_extended_filename": true,
"grid_only_if_multiple": true,
"grid_prevent_empty_spots": false,
"n_rows": -1,
"enable_pnginfo": true,
"save_txt": false,
"save_images_before_face_restoration": false,
"save_images_before_highres_fix": false,
"save_images_before_color_correction": false,
"jpeg_quality": 80,
"export_for_4chan": false,
"use_original_name_batch": true,
"use_upscaler_name_as_suffix": true,
"save_selected_only": true,
"do_not_add_watermark": true,
"temp_dir": "",
"clean_temp_dir_at_start": true,
"outdir_samples": "",
"outdir_txt2img_samples": "outputs/text",
"outdir_img2img_samples": "outputs/image",
"outdir_extras_samples": "outputs/extras",
"outdir_grids": "",
"outdir_txt2img_grids": "outputs/grids",
"outdir_img2img_grids": "outputs/grids",
"outdir_save": "outputs/save",
"save_to_dirs": false,
"grid_save_to_dirs": false,
"use_save_to_dirs_for_ui": false,
"directories_filename_pattern": "",
"directories_max_prompt_words": 16,
"ESRGAN_tile": 192,
"ESRGAN_tile_overlap": 8,
"realesrgan_enabled_models": [
"R-ESRGAN 4x+"
],
"upscaler_for_img2img": "SwinIR_4x",
"use_scale_latent_for_hires_fix": false,
"face_restoration_model": "CodeFormer",
"code_former_weight": 0.15,
"face_restoration_unload": false,
"memmon_poll_rate": 1,
"samples_log_stdout": false,
"multiple_tqdm": true,
"unload_models_when_training": true,
"pin_memory": true,
"save_optimizer_state": true,
"dataset_filename_word_regex": "",
"dataset_filename_join_string": " ",
"training_image_repeats_per_epoch": 1,
"training_write_csv_every": 100.0,
"training_xattention_optimizations": true,
"sd_model_checkpoint": "unstable-photoreal-v5.ckpt [358923a7]",
"sd_checkpoint_cache": 0,
"sd_vae": "vae-ft-mse-840000-ema-pruned",
"sd_vae_as_default": false,
"sd_hypernetwork": "None",
"sd_hypernetwork_strength": 1.0,
"inpainting_mask_weight": 1.0,
"initial_noise_multiplier": 1.0,
"img2img_color_correction": false,
"img2img_fix_steps": false,
"img2img_background_color": "#ffffff",
"enable_quantization": true,
"enable_emphasis": true,
"use_old_emphasis_implementation": false,
"enable_batch_seeds": true,
"comma_padding_backtrack": 20,
"CLIP_stop_at_last_layers": 1,
"random_artist_categories": [],
"interrogate_keep_models_in_memory": false,
"interrogate_use_builtin_artists": false,
"interrogate_return_ranks": true,
"interrogate_clip_num_beams": 1,
"interrogate_clip_min_length": 32,
"interrogate_clip_max_length": 1024,
"interrogate_clip_dict_limit": 0.0,
"interrogate_deepbooru_score_threshold": 0.7,
"deepbooru_sort_alpha": false,
"deepbooru_use_spaces": false,
"deepbooru_escape": true,
"deepbooru_filter_tags": "",
"show_progressbar": true,
"show_progress_every_n_steps": -1,
"show_progress_type": "Full",
"show_progress_grid": true,
"return_grid": true,
"do_not_show_images": false,
"add_model_hash_to_info": true,
"add_model_name_to_info": true,
"disable_weights_auto_swap": false,
"send_seed": true,
"send_size": true,
"font": "",
"js_modal_lightbox": true,
"js_modal_lightbox_initially_zoomed": true,
"show_progress_in_title": true,
"quicksettings": "sd_model_checkpoint",
"localization": "None",
"hide_samplers": [
"Euler",
"LMS",
"Heun",
"DPM2",
"DPM2 a",
"DPM++ 2M",
"DPM fast",
"DPM adaptive",
"DPM++ 2S a Karras",
"DDIM",
"PLMS",
"DPM++ 2S a",
"DPM++ SDE Karras",
"DPM2 a Karras"
],
"eta_ddim": 0.0,
"eta_ancestral": 1.0,
"ddim_discretize": "uniform",
"s_churn": 0.0,
"s_tmin": 0.0,
"s_noise": 1.0,
"eta_noise_seed_delta": 0,
"disabled_extensions": [],
"ldsr_steps": 100,
"ldsr_cached": false,
"SWIN_tile": 192,
"SWIN_tile_overlap": 8,
"inspiration_max_samples": 20,
"inspiration_rows_num": 16,
"inspiration_cols_num": 6,
"images_history_preload": false,
"images_record_paths": true,
"images_delete_message": true,
"images_history_page_columns": 6.0,
"images_history_page_rows": 20.0,
"images_history_pages_perload": 20.0,
"sd_vae_checkpoint_cache": 0,
"use_old_karras_scheduler_sigmas": false,
"samplers_in_dropdown": true,
"aesthetic_scorer_enabled": false,
"aesthetic_scorer_clip_model": "ViT-L/14",
"dimensions_and_batch_together": true,
"ui_reorder": "sampler, dimensions, cfg, seed, checkboxes, hires_fix, batch, scripts",
"always_discard_next_to_last_sigma": false,
"clip_models_path": "models/CLIP"
}
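The settings file above is plain JSON, so it can also be inspected or tweaked from a script when the UI is not running. A minimal sketch (the `set_option` helper is illustrative, not part of the webui; the webui itself normally edits this file via the Settings tab):

```python
import json

def set_option(path, key, value):
    """Load a webui-style config.json, change one key, and write it back."""
    with open(path) as f:
        cfg = json.load(f)
    cfg[key] = value
    with open(path, "w") as f:
        json.dump(cfg, f, indent=4)
    return cfg

# Example (hypothetical): switch grid output format back to png
# set_option("config.json", "grid_format", "png")
```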

@@ -0,0 +1,67 @@
model:
base_learning_rate: 1.0e-4
target: ldm.models.diffusion.ddpm.LatentDiffusion
params:
linear_start: 0.00085
linear_end: 0.0120
num_timesteps_cond: 1
log_every_t: 200
timesteps: 1000
first_stage_key: "jpg"
cond_stage_key: "txt"
image_size: 64
channels: 4
cond_stage_trainable: false
conditioning_key: crossattn
monitor: val/loss_simple_ema
scale_factor: 0.18215
use_ema: False # we set this to false because this is an inference only config
unet_config:
target: ldm.modules.diffusionmodules.openaimodel.UNetModel
params:
use_checkpoint: True
use_fp16: True
image_size: 32 # unused
in_channels: 4
out_channels: 4
model_channels: 320
attention_resolutions: [ 4, 2, 1 ]
num_res_blocks: 2
channel_mult: [ 1, 2, 4, 4 ]
num_head_channels: 64 # need to fix for flash-attn
use_spatial_transformer: True
use_linear_in_transformer: True
transformer_depth: 1
context_dim: 1024
legacy: False
first_stage_config:
target: ldm.models.autoencoder.AutoencoderKL
params:
embed_dim: 4
monitor: val/rec_loss
ddconfig:
#attn_type: "vanilla-xformers"
double_z: true
z_channels: 4
resolution: 256
in_channels: 3
out_ch: 3
ch: 128
ch_mult:
- 1
- 2
- 4
- 4
num_res_blocks: 2
attn_resolutions: []
dropout: 0.0
lossconfig:
target: torch.nn.Identity
cond_stage_config:
target: ldm.modules.encoders.modules.FrozenOpenCLIPEmbedder
params:
freeze: True
layer: "penultimate"

configs/v2-inpaint.yaml (new file, 158 lines)

@@ -0,0 +1,158 @@
model:
base_learning_rate: 5.0e-05
target: ldm.models.diffusion.ddpm.LatentInpaintDiffusion
params:
linear_start: 0.00085
linear_end: 0.0120
num_timesteps_cond: 1
log_every_t: 200
timesteps: 1000
first_stage_key: "jpg"
cond_stage_key: "txt"
image_size: 64
channels: 4
cond_stage_trainable: false
conditioning_key: hybrid
scale_factor: 0.18215
monitor: val/loss_simple_ema
finetune_keys: null
use_ema: False
unet_config:
target: ldm.modules.diffusionmodules.openaimodel.UNetModel
params:
use_checkpoint: True
image_size: 32 # unused
in_channels: 9
out_channels: 4
model_channels: 320
attention_resolutions: [ 4, 2, 1 ]
num_res_blocks: 2
channel_mult: [ 1, 2, 4, 4 ]
num_head_channels: 64 # need to fix for flash-attn
use_spatial_transformer: True
use_linear_in_transformer: True
transformer_depth: 1
context_dim: 1024
legacy: False
first_stage_config:
target: ldm.models.autoencoder.AutoencoderKL
params:
embed_dim: 4
monitor: val/rec_loss
ddconfig:
#attn_type: "vanilla-xformers"
double_z: true
z_channels: 4
resolution: 256
in_channels: 3
out_ch: 3
ch: 128
ch_mult:
- 1
- 2
- 4
- 4
num_res_blocks: 2
attn_resolutions: [ ]
dropout: 0.0
lossconfig:
target: torch.nn.Identity
cond_stage_config:
target: ldm.modules.encoders.modules.FrozenOpenCLIPEmbedder
params:
freeze: True
layer: "penultimate"
data:
target: ldm.data.laion.WebDataModuleFromConfig
params:
tar_base: null # for concat as in LAION-A
p_unsafe_threshold: 0.1
filter_word_list: "data/filters.yaml"
max_pwatermark: 0.45
batch_size: 8
num_workers: 6
multinode: True
min_size: 512
train:
shards:
- "pipe:aws s3 cp s3://stability-aws/laion-a-native/part-0/{00000..18699}.tar -"
- "pipe:aws s3 cp s3://stability-aws/laion-a-native/part-1/{00000..18699}.tar -"
- "pipe:aws s3 cp s3://stability-aws/laion-a-native/part-2/{00000..18699}.tar -"
- "pipe:aws s3 cp s3://stability-aws/laion-a-native/part-3/{00000..18699}.tar -"
- "pipe:aws s3 cp s3://stability-aws/laion-a-native/part-4/{00000..18699}.tar -" #{00000-94333}.tar"
shuffle: 10000
image_key: jpg
image_transforms:
- target: torchvision.transforms.Resize
params:
size: 512
interpolation: 3
- target: torchvision.transforms.RandomCrop
params:
size: 512
postprocess:
target: ldm.data.laion.AddMask
params:
mode: "512train-large"
p_drop: 0.25
# NOTE use enough shards to avoid empty validation loops in workers
validation:
shards:
- "pipe:aws s3 cp s3://deep-floyd-s3/datasets/laion_cleaned-part5/{93001..94333}.tar - "
shuffle: 0
image_key: jpg
image_transforms:
- target: torchvision.transforms.Resize
params:
size: 512
interpolation: 3
- target: torchvision.transforms.CenterCrop
params:
size: 512
postprocess:
target: ldm.data.laion.AddMask
params:
mode: "512train-large"
p_drop: 0.25
lightning:
find_unused_parameters: True
modelcheckpoint:
params:
every_n_train_steps: 5000
callbacks:
metrics_over_trainsteps_checkpoint:
params:
every_n_train_steps: 10000
image_logger:
target: main.ImageLogger
params:
enable_autocast: False
disabled: False
batch_frequency: 1000
max_images: 4
increase_log_steps: False
log_first_step: False
log_images_kwargs:
use_ema_scope: False
inpaint: False
plot_progressive_rows: False
plot_diffusion_rows: False
N: 4
unconditional_guidance_scale: 5.0
unconditional_guidance_label: [""]
ddim_steps: 50 # todo check these out for depth2img,
ddim_eta: 0.0 # todo check these out for depth2img,
trainer:
benchmark: True
val_check_interval: 5000000
num_sanity_val_steps: 0
accumulate_grad_batches: 1
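Configs like the one above are built around the `target:`/`params:` convention: each block names a class by dotted path and the keyword arguments to construct it with. A minimal sketch of that instantiation pattern (mirroring the `instantiate_from_config` helper in the ldm codebase; this version is a simplified assumption, not the library's exact code):

```python
import importlib

def instantiate_from_config(config):
    """Import the class named by config['target'] and construct it
    with config['params'] as keyword arguments."""
    module_name, cls_name = config["target"].rsplit(".", 1)
    cls = getattr(importlib.import_module(module_name), cls_name)
    return cls(**config.get("params", {}))
```

This is why swapping `cond_stage_config` or `first_stage_config` targets in the YAML is enough to change model components without touching code.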

embeddings (new symbolic link)

@@ -0,0 +1 @@
/mnt/c/Users/mandi/OneDrive/Generative/Embeddings

@@ -1,11 +0,0 @@
name: automatic
channels:
- pytorch
- defaults
dependencies:
- python=3.10
- pip=22.2.2
- cudatoolkit=11.3
- pytorch=1.12.1
- torchvision=0.13.1
- numpy=1.23.1

models (new symbolic link)

@@ -0,0 +1 @@
/mnt/c/Users/mandi/OneDrive/Generative/Models
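The `models` and `embeddings` entries this commit checks in are symbolic links pointing at the author's own WSL/OneDrive paths, which keeps the large files outside the checkout. A sketch of recreating them (the targets below come from the diff; substitute your local storage locations):

```python
import os

# Targets as committed; the paths are the author's own setup.
LINKS = {
    "models": "/mnt/c/Users/mandi/OneDrive/Generative/Models",
    "embeddings": "/mnt/c/Users/mandi/OneDrive/Generative/Embeddings",
}

def make_links(base="."):
    for name, target in LINKS.items():
        link = os.path.join(base, name)
        if os.path.islink(link):
            os.remove(link)  # replace a stale link
        # A dangling link is fine until the target drive is mounted.
        os.symlink(target, link)
```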

Binary file not shown.

@@ -1,16 +1,18 @@
blendmodes
accelerate
basicsr
fairscale
fonts
font-roboto
fonts
gfpgan
gradio
invisible-watermark
jsonmerge
kornia
lark
numpy
omegaconf
opencv-contrib-python
requests
piexif
Pillow
pytorch_lightning
@@ -23,11 +25,10 @@ einops
jsonmerge
clean-fid
resize-right
torchdiffeq
kornia
lark
inflection
GitPython
torchsde
safetensors
psutil; sys_platform == 'darwin'
scikit-image
timm
torch
torchdiffeq
torchsde
transformers

@@ -1,8 +1,11 @@
accelerate==0.15.0
basicsr==1.4.2
blendmodes==2022
transformers==4.25.1
accelerate==0.15.0
basicsr==1.4.2
gfpgan==1.3.8
GitPython==3.1.27
gradio==3.15.0
numpy==1.23.5
Pillow==9.4.0
@@ -11,20 +14,7 @@ torch
omegaconf==2.2.3
pytorch_lightning==1.7.7
scikit-image==0.19.2
fonts
font-roboto
timm==0.6.7
fairscale==0.4.9
piexif==1.1.3
einops==0.4.1
jsonmerge==1.8.0
clean-fid==0.1.29
resize-right==0.0.2
torchdiffeq==0.2.3
kornia==0.6.7
lark==1.1.2
inflection==0.5.1
GitPython==3.1.27
torchsde==0.2.5
safetensors==0.2.7
httpcore<=0.15
transformers==4.25.1

Binary file not shown (image removed, previously 513 KiB).

@@ -0,0 +1 @@
/home/vlado/dev/sd-extensions/scripts/save_steps_animation.py

@@ -1,29 +0,0 @@
import unittest
class TestExtrasWorking(unittest.TestCase):
def setUp(self):
self.url_img2img = "http://localhost:7860/sdapi/v1/extra-single-image"
self.simple_extras = {
"resize_mode": 0,
"show_extras_results": True,
"gfpgan_visibility": 0,
"codeformer_visibility": 0,
"codeformer_weight": 0,
"upscaling_resize": 2,
"upscaling_resize_w": 128,
"upscaling_resize_h": 128,
"upscaling_crop": True,
"upscaler_1": "None",
"upscaler_2": "None",
"extras_upscaler_2_visibility": 0,
"image": ""
}
class TestExtrasCorrectness(unittest.TestCase):
pass
if __name__ == "__main__":
unittest.main()

@@ -1,47 +0,0 @@
import unittest
import requests
class TestTxt2ImgWorking(unittest.TestCase):
def setUp(self):
self.url_txt2img = "http://localhost:7860/sdapi/v1/txt2img"
self.simple_txt2img = {
"enable_hr": False,
"denoising_strength": 0,
"firstphase_width": 0,
"firstphase_height": 0,
"prompt": "example prompt",
"styles": [],
"seed": -1,
"subseed": -1,
"subseed_strength": 0,
"seed_resize_from_h": -1,
"seed_resize_from_w": -1,
"batch_size": 1,
"n_iter": 1,
"steps": 3,
"cfg_scale": 7,
"width": 64,
"height": 64,
"restore_faces": False,
"tiling": False,
"negative_prompt": "",
"eta": 0,
"s_churn": 0,
"s_tmax": 0,
"s_tmin": 0,
"s_noise": 1,
"sampler_index": "Euler a"
}
def test_txt2img_with_restore_faces_performed(self):
self.simple_txt2img["restore_faces"] = True
self.assertEqual(requests.post(self.url_txt2img, json=self.simple_txt2img).status_code, 200)
class TestTxt2ImgCorrectness(unittest.TestCase):
pass
if __name__ == "__main__":
unittest.main()

@@ -1,55 +0,0 @@
import unittest
import requests
from gradio.processing_utils import encode_pil_to_base64
from PIL import Image
class TestImg2ImgWorking(unittest.TestCase):
def setUp(self):
self.url_img2img = "http://localhost:7860/sdapi/v1/img2img"
self.simple_img2img = {
"init_images": [encode_pil_to_base64(Image.open(r"test/test_files/img2img_basic.png"))],
"resize_mode": 0,
"denoising_strength": 0.75,
"mask": None,
"mask_blur": 4,
"inpainting_fill": 0,
"inpaint_full_res": False,
"inpaint_full_res_padding": 0,
"inpainting_mask_invert": 0,
"prompt": "example prompt",
"styles": [],
"seed": -1,
"subseed": -1,
"subseed_strength": 0,
"seed_resize_from_h": -1,
"seed_resize_from_w": -1,
"batch_size": 1,
"n_iter": 1,
"steps": 3,
"cfg_scale": 7,
"width": 64,
"height": 64,
"restore_faces": False,
"tiling": False,
"negative_prompt": "",
"eta": 0,
"s_churn": 0,
"s_tmax": 0,
"s_tmin": 0,
"s_noise": 1,
"override_settings": {},
"sampler_index": "Euler a",
"include_init_images": False
}
def test_img2img_simple_performed(self):
self.assertEqual(requests.post(self.url_img2img, json=self.simple_img2img).status_code, 200)
def test_inpainting_masked_performed(self):
self.simple_img2img["mask"] = encode_pil_to_base64(Image.open(r"test/test_files/mask_basic.png"))
self.assertEqual(requests.post(self.url_img2img, json=self.simple_img2img).status_code, 200)
if __name__ == "__main__":
unittest.main()

@@ -1,68 +0,0 @@
import unittest
import requests
class TestTxt2ImgWorking(unittest.TestCase):
def setUp(self):
self.url_txt2img = "http://localhost:7860/sdapi/v1/txt2img"
self.simple_txt2img = {
"enable_hr": False,
"denoising_strength": 0,
"firstphase_width": 0,
"firstphase_height": 0,
"prompt": "example prompt",
"styles": [],
"seed": -1,
"subseed": -1,
"subseed_strength": 0,
"seed_resize_from_h": -1,
"seed_resize_from_w": -1,
"batch_size": 1,
"n_iter": 1,
"steps": 3,
"cfg_scale": 7,
"width": 64,
"height": 64,
"restore_faces": False,
"tiling": False,
"negative_prompt": "",
"eta": 0,
"s_churn": 0,
"s_tmax": 0,
"s_tmin": 0,
"s_noise": 1,
"sampler_index": "Euler a"
}
def test_txt2img_simple_performed(self):
self.assertEqual(requests.post(self.url_txt2img, json=self.simple_txt2img).status_code, 200)
def test_txt2img_with_negative_prompt_performed(self):
self.simple_txt2img["negative_prompt"] = "example negative prompt"
self.assertEqual(requests.post(self.url_txt2img, json=self.simple_txt2img).status_code, 200)
def test_txt2img_not_square_image_performed(self):
self.simple_txt2img["height"] = 128
self.assertEqual(requests.post(self.url_txt2img, json=self.simple_txt2img).status_code, 200)
def test_txt2img_with_hrfix_performed(self):
self.simple_txt2img["enable_hr"] = True
self.assertEqual(requests.post(self.url_txt2img, json=self.simple_txt2img).status_code, 200)
def test_txt2img_with_tiling_performed(self):
self.simple_txt2img["tiling"] = True
self.assertEqual(requests.post(self.url_txt2img, json=self.simple_txt2img).status_code, 200)
def test_txt2img_with_vanilla_sampler_performed(self):
self.simple_txt2img["sampler_index"] = "PLMS"
self.assertEqual(requests.post(self.url_txt2img, json=self.simple_txt2img).status_code, 200)
self.simple_txt2img["sampler_index"] = "DDIM"
self.assertEqual(requests.post(self.url_txt2img, json=self.simple_txt2img).status_code, 200)
def test_txt2img_multiple_batches_performed(self):
self.simple_txt2img["n_iter"] = 2
self.assertEqual(requests.post(self.url_txt2img, json=self.simple_txt2img).status_code, 200)
if __name__ == "__main__":
unittest.main()

@@ -1,53 +0,0 @@
import unittest
import requests
class UtilsTests(unittest.TestCase):
def setUp(self):
self.url_options = "http://localhost:7860/sdapi/v1/options"
self.url_cmd_flags = "http://localhost:7860/sdapi/v1/cmd-flags"
self.url_samplers = "http://localhost:7860/sdapi/v1/samplers"
self.url_upscalers = "http://localhost:7860/sdapi/v1/upscalers"
self.url_sd_models = "http://localhost:7860/sdapi/v1/sd-models"
self.url_hypernetworks = "http://localhost:7860/sdapi/v1/hypernetworks"
self.url_face_restorers = "http://localhost:7860/sdapi/v1/face-restorers"
self.url_realesrgan_models = "http://localhost:7860/sdapi/v1/realesrgan-models"
self.url_prompt_styles = "http://localhost:7860/sdapi/v1/prompt-styles"
self.url_artist_categories = "http://localhost:7860/sdapi/v1/artist-categories"
self.url_artists = "http://localhost:7860/sdapi/v1/artists"
def test_options_get(self):
self.assertEqual(requests.get(self.url_options).status_code, 200)
def test_cmd_flags(self):
self.assertEqual(requests.get(self.url_cmd_flags).status_code, 200)
def test_samplers(self):
self.assertEqual(requests.get(self.url_samplers).status_code, 200)
def test_upscalers(self):
self.assertEqual(requests.get(self.url_upscalers).status_code, 200)
def test_sd_models(self):
self.assertEqual(requests.get(self.url_sd_models).status_code, 200)
def test_hypernetworks(self):
self.assertEqual(requests.get(self.url_hypernetworks).status_code, 200)
def test_face_restorers(self):
self.assertEqual(requests.get(self.url_face_restorers).status_code, 200)
def test_realesrgan_models(self):
self.assertEqual(requests.get(self.url_realesrgan_models).status_code, 200)
def test_prompt_styles(self):
self.assertEqual(requests.get(self.url_prompt_styles).status_code, 200)
def test_artist_categories(self):
self.assertEqual(requests.get(self.url_artist_categories).status_code, 200)
def test_artists(self):
self.assertEqual(requests.get(self.url_artists).status_code, 200)
if __name__ == "__main__":
unittest.main()

@@ -1,24 +0,0 @@
import unittest
import requests
import time
def run_tests(proc, test_dir):
timeout_threshold = 240
start_time = time.time()
while time.time()-start_time < timeout_threshold:
try:
requests.head("http://localhost:7860/")
break
except requests.exceptions.ConnectionError:
if proc.poll() is not None:
break
if proc.poll() is None:
if test_dir is None:
test_dir = ""
suite = unittest.TestLoader().discover(test_dir, pattern="*_test.py", top_level_dir="test")
result = unittest.TextTestRunner(verbosity=2).run(suite)
return len(result.failures) + len(result.errors)
else:
print("Launch unsuccessful")
return 1

Binary file not shown.

Binary file not shown (image removed, previously 9.7 KiB).

Binary file not shown (image removed, previously 362 B).

train/log (new symbolic link)

@@ -0,0 +1 @@
/mnt/c/Users/mandi/OneDrive/Generative/Log

Binary file not shown (image removed, previously 329 KiB).

ui-config.json (new file, 548 lines)

@@ -0,0 +1,548 @@
{
"txt2img/Prompt/visible": true,
"txt2img/Prompt/value": "photorealistic, high detailed, sharp focus, depth of field",
"txt2img/Negative prompt/visible": true,
"txt2img/Negative prompt/value": "foggy, blurry, blurred, duplicate, ugly, mutilated, mutation, mutated, out of frame, bad anatomy, disfigured, deformed, censored, low res, watermark, text, poorly drawn face, signature",
"txt2img/Style 1/value": "None",
"txt2img/Style 1/visible": true,
"txt2img/Style 2/value": "None",
"txt2img/Style 2/visible": true,
"txt2img/Sampling method/value": "Euler a",
"txt2img/Sampling method/visible": true,
"txt2img/Sampling Steps/visible": true,
"txt2img/Sampling Steps/value": 20,
"txt2img/Sampling Steps/minimum": 1,
"txt2img/Sampling Steps/maximum": 200,
"txt2img/Sampling Steps/step": 1,
"txt2img/Width/visible": true,
"txt2img/Width/value": 512,
"txt2img/Width/minimum": 64,
"txt2img/Width/maximum": 2048,
"txt2img/Width/step": 8,
"txt2img/Height/visible": true,
"txt2img/Height/value": 512,
"txt2img/Height/minimum": 64,
"txt2img/Height/maximum": 2048,
"txt2img/Height/step": 8,
"txt2img/Batch count/visible": true,
"txt2img/Batch count/value": 1,
"txt2img/Batch count/minimum": 1,
"txt2img/Batch count/maximum": 100,
"txt2img/Batch count/step": 1,
"txt2img/Batch size/visible": true,
"txt2img/Batch size/value": 1,
"txt2img/Batch size/minimum": 1,
"txt2img/Batch size/maximum": 16,
"txt2img/Batch size/step": 1,
"txt2img/CFG Scale/visible": true,
"txt2img/CFG Scale/value": 6.0,
"txt2img/CFG Scale/step": 0.1,
"txt2img/CFG Scale/minimum": 1.0,
"txt2img/CFG Scale/maximum": 30.0,
"txt2img/Seed/visible": true,
"txt2img/Seed/value": -1.0,
"txt2img/Extra/visible": true,
"txt2img/Extra/value": false,
"txt2img/Variation seed/visible": true,
"txt2img/Variation seed/value": -1.0,
"txt2img/Variation strength/visible": true,
"txt2img/Variation strength/value": 0.0,
"txt2img/Variation strength/minimum": 0,
"txt2img/Variation strength/maximum": 1,
"txt2img/Variation strength/step": 0.01,
"txt2img/Resize seed from width/visible": true,
"txt2img/Resize seed from width/value": 0,
"txt2img/Resize seed from width/minimum": 0,
"txt2img/Resize seed from width/maximum": 2048,
"txt2img/Resize seed from width/step": 8,
"txt2img/Resize seed from height/visible": true,
"txt2img/Resize seed from height/value": 0,
"txt2img/Resize seed from height/minimum": 0,
"txt2img/Resize seed from height/maximum": 2048,
"txt2img/Resize seed from height/step": 8,
"txt2img/Restore faces/visible": true,
"txt2img/Restore faces/value": false,
"txt2img/Tiling/visible": true,
"txt2img/Tiling/value": false,
"txt2img/Hires. fix/visible": true,
"txt2img/Hires. fix/value": false,
"txt2img/Upscale by/visible": true,
"txt2img/Upscale by/value": 2.0,
"txt2img/Upscale by/minimum": 1.0,
"txt2img/Upscale by/maximum": 4.0,
"txt2img/Upscale by/step": 0.05,
"txt2img/Denoising strength/visible": true,
"txt2img/Denoising strength/value": 0.7,
"txt2img/Denoising strength/minimum": 0.0,
"txt2img/Denoising strength/maximum": 1.0,
"txt2img/Denoising strength/step": 0.01,
"txt2img/Script/value": "None",
"txt2img/Script/visible": true,
"img2img/Prompt/visible": true,
"img2img/Prompt/value": "",
"img2img/Negative prompt/visible": true,
"img2img/Negative prompt/value": "",
"img2img/Style 1/value": "None",
"img2img/Style 1/visible": true,
"img2img/Style 2/value": "None",
"img2img/Style 2/visible": true,
"img2img/Mask blur/visible": true,
"img2img/Mask blur/value": 4,
"img2img/Mask blur/minimum": 0,
"img2img/Mask blur/maximum": 64,
"img2img/Mask blur/step": 1,
"img2img/Mask transparency/value": 0,
"img2img/Mask transparency/minimum": 0,
"img2img/Mask transparency/maximum": 100,
"img2img/Mask transparency/step": 1,
"img2img/Mask source/visible": true,
"img2img/Mask source/value": "Draw mask",
"img2img/Mask mode/visible": true,
"img2img/Mask mode/value": "Inpaint masked",
"img2img/Masked content/visible": true,
"img2img/Masked content/value": "original",
"img2img/Inpaint area/visible": true,
"img2img/Inpaint area/value": "Whole picture",
"img2img/Only masked padding, pixels/visible": true,
"img2img/Only masked padding, pixels/value": 32,
"img2img/Only masked padding, pixels/minimum": 0,
"img2img/Only masked padding, pixels/maximum": 256,
"img2img/Only masked padding, pixels/step": 4,
"img2img/Input directory/visible": true,
"img2img/Input directory/value": "",
"img2img/Output directory/visible": true,
"img2img/Output directory/value": "",
"img2img/Resize mode/visible": true,
"img2img/Resize mode/value": "Just resize",
"img2img/Sampling method/value": "Euler a",
"img2img/Sampling method/visible": true,
"img2img/Sampling Steps/visible": true,
"img2img/Sampling Steps/value": 20,
"img2img/Sampling Steps/minimum": 1,
"img2img/Sampling Steps/maximum": 200,
"img2img/Sampling Steps/step": 1,
"img2img/Width/visible": true,
"img2img/Width/value": 512,
"img2img/Width/minimum": 64,
"img2img/Width/maximum": 2048,
"img2img/Width/step": 8,
"img2img/Height/visible": true,
"img2img/Height/value": 512,
"img2img/Height/minimum": 64,
"img2img/Height/maximum": 2048,
"img2img/Height/step": 8,
"img2img/Batch count/visible": true,
"img2img/Batch count/value": 1,
"img2img/Batch count/minimum": 1,
"img2img/Batch count/maximum": 100,
"img2img/Batch count/step": 1,
"img2img/Batch size/visible": true,
"img2img/Batch size/value": 1,
"img2img/Batch size/minimum": 1,
"img2img/Batch size/maximum": 16,
"img2img/Batch size/step": 1,
"img2img/CFG Scale/visible": true,
"img2img/CFG Scale/value": 6.0,
"img2img/CFG Scale/minimum": 1.0,
"img2img/CFG Scale/maximum": 30.0,
"img2img/CFG Scale/step": 0.5,
"img2img/Denoising strength/visible": true,
"img2img/Denoising strength/value": 0.75,
"img2img/Denoising strength/minimum": 0.0,
"img2img/Denoising strength/maximum": 1.0,
"img2img/Denoising strength/step": 0.01,
"img2img/Seed/visible": true,
"img2img/Seed/value": -1.0,
"img2img/Extra/visible": true,
"img2img/Extra/value": false,
"img2img/Variation seed/visible": true,
"img2img/Variation seed/value": -1.0,
"img2img/Variation strength/visible": true,
"img2img/Variation strength/value": 0.0,
"img2img/Variation strength/minimum": 0,
"img2img/Variation strength/maximum": 1,
"img2img/Variation strength/step": 0.01,
"img2img/Resize seed from width/visible": true,
"img2img/Resize seed from width/value": 0,
"img2img/Resize seed from width/minimum": 0,
"img2img/Resize seed from width/maximum": 2048,
"img2img/Resize seed from width/step": 8,
"img2img/Resize seed from height/visible": true,
"img2img/Resize seed from height/value": 0,
"img2img/Resize seed from height/minimum": 0,
"img2img/Resize seed from height/maximum": 2048,
"img2img/Resize seed from height/step": 8,
"img2img/Restore faces/visible": true,
"img2img/Restore faces/value": false,
"img2img/Tiling/visible": true,
"img2img/Tiling/value": false,
"img2img/Script/value": "None",
"img2img/Script/visible": true,
"extras/Input directory/visible": true,
"extras/Input directory/value": "",
"extras/Output directory/visible": true,
"extras/Output directory/value": "",
"extras/Show result images/visible": true,
"extras/Show result images/value": true,
"extras/Resize/visible": true,
"extras/Resize/value": 4,
"extras/Resize/minimum": 1.0,
"extras/Resize/maximum": 8.0,
"extras/Resize/step": 0.05,
"extras/Width/visible": true,
"extras/Width/value": 512,
"extras/Height/visible": true,
"extras/Height/value": 512,
"extras/Crop to fit/visible": true,
"extras/Crop to fit/value": true,
"extras/Upscaler 1/visible": true,
"extras/Upscaler 1/value": "SwinIR_4x",
"extras/Upscaler 2/visible": false,
"extras/Upscaler 2/value": "None",
"extras/Upscaler 2 visibility/visible": false,
"extras/Upscaler 2 visibility/value": 1,
"extras/Upscaler 2 visibility/minimum": 0.0,
"extras/Upscaler 2 visibility/maximum": 1.0,
"extras/Upscaler 2 visibility/step": 0.001,
"extras/GFPGAN visibility/visible": true,
"extras/GFPGAN visibility/value": 0,
"extras/GFPGAN visibility/minimum": 0.0,
"extras/GFPGAN visibility/maximum": 1.0,
"extras/GFPGAN visibility/step": 0.001,
"extras/CodeFormer visibility/visible": true,
"extras/CodeFormer visibility/value": 1.0,
"extras/CodeFormer visibility/minimum": 0.0,
"extras/CodeFormer visibility/maximum": 1.0,
"extras/CodeFormer visibility/step": 0.001,
"extras/CodeFormer weight (0 = maximum effect, 1 = minimum effect)/visible": true,
"extras/CodeFormer weight (0 = maximum effect, 1 = minimum effect)/value": 0.15,
"extras/CodeFormer weight (0 = maximum effect, 1 = minimum effect)/minimum": 0.0,
"extras/CodeFormer weight (0 = maximum effect, 1 = minimum effect)/maximum": 1.0,
"extras/CodeFormer weight (0 = maximum effect, 1 = minimum effect)/step": 0.001,
"extras/Upscale Before Restoring Faces/visible": true,
"extras/Upscale Before Restoring Faces/value": false,
"modelmerger/Custom Name (Optional)/visible": true,
"modelmerger/Custom Name (Optional)/value": "",
"modelmerger/Multiplier (M) - set to 0 to get model A/visible": true,
"modelmerger/Multiplier (M) - set to 0 to get model A/value": 0.3,
"modelmerger/Multiplier (M) - set to 0 to get model A/minimum": 0.0,
"modelmerger/Multiplier (M) - set to 0 to get model A/maximum": 1.0,
"modelmerger/Multiplier (M) - set to 0 to get model A/step": 0.05,
"modelmerger/Interpolation Method/visible": true,
"modelmerger/Interpolation Method/value": "Weighted sum",
"modelmerger/Checkpoint format/visible": true,
"modelmerger/Checkpoint format/value": "ckpt",
"modelmerger/Save as float16/visible": true,
"modelmerger/Save as float16/value": true,
"train/Name/visible": true,
"train/Name/value": "",
"train/Initialization text/visible": true,
"train/Initialization text/value": "person",
"train/Number of vectors per token/visible": true,
"train/Number of vectors per token/value": 2,
"train/Number of vectors per token/minimum": 1,
"train/Number of vectors per token/maximum": 75,
"train/Number of vectors per token/step": 1,
"train/Overwrite Old Embedding/visible": true,
"train/Overwrite Old Embedding/value": true,
"train/Enter hypernetwork layer structure/visible": true,
"train/Enter hypernetwork layer structure/value": "1, 2, 1",
"train/Add layer normalization/visible": true,
"train/Add layer normalization/value": false,
"train/Use dropout/visible": true,
"train/Use dropout/value": false,
"train/Overwrite Old Hypernetwork/visible": true,
"train/Overwrite Old Hypernetwork/value": false,
"train/Source directory/visible": true,
"train/Source directory/value": "",
"train/Destination directory/visible": true,
"train/Destination directory/value": "",
"train/Width/visible": true,
"train/Width/value": 512,
"train/Width/minimum": 64,
"train/Width/maximum": 2048,
"train/Width/step": 8,
"train/Height/visible": true,
"train/Height/value": 512,
"train/Height/minimum": 64,
"train/Height/maximum": 2048,
"train/Height/step": 8,
"train/Create flipped copies/visible": true,
"train/Create flipped copies/value": false,
"train/Split oversized images/visible": true,
"train/Split oversized images/value": false,
"train/Auto focal point crop/visible": true,
"train/Auto focal point crop/value": false,
"train/Use BLIP for caption/visible": true,
"train/Use BLIP for caption/value": false,
"train/Use deepbooru for caption/visible": true,
"train/Use deepbooru for caption/value": false,
"train/Split image threshold/visible": true,
"train/Split image threshold/value": 0.5,
"train/Split image threshold/minimum": 0.0,
"train/Split image threshold/maximum": 1.0,
"train/Split image threshold/step": 0.05,
"train/Split image overlap ratio/visible": true,
"train/Split image overlap ratio/value": 0.2,
"train/Split image overlap ratio/minimum": 0.0,
"train/Split image overlap ratio/maximum": 0.9,
"train/Split image overlap ratio/step": 0.05,
"train/Focal point face weight/visible": true,
"train/Focal point face weight/value": 0.9,
"train/Focal point face weight/minimum": 0.0,
"train/Focal point face weight/maximum": 1.0,
"train/Focal point face weight/step": 0.05,
"train/Focal point entropy weight/visible": true,
"train/Focal point entropy weight/value": 0.15,
"train/Focal point entropy weight/minimum": 0.0,
"train/Focal point entropy weight/maximum": 1.0,
"train/Focal point entropy weight/step": 0.05,
"train/Focal point edges weight/visible": true,
"train/Focal point edges weight/value": 0.5,
"train/Focal point edges weight/minimum": 0.0,
"train/Focal point edges weight/maximum": 1.0,
"train/Focal point edges weight/step": 0.05,
"train/Create debug image/visible": true,
"train/Create debug image/value": false,
"train/Embedding Learning rate/visible": true,
"train/Embedding Learning rate/value": "0.01:100, 0.005:300, 0.001:700, 0.0005:1500, 0.0001:2500",
"train/Hypernetwork Learning rate/visible": true,
"train/Hypernetwork Learning rate/value": "0.00001",
"train/Batch size/visible": true,
"train/Batch size/value": 1,
"train/Gradient accumulation steps/visible": true,
"train/Gradient accumulation steps/value": 1,
"train/Dataset directory/visible": true,
"train/Dataset directory/value": "",
"train/Log directory/visible": true,
"train/Log directory/value": "train/log",
"train/Prompt template file/visible": true,
"train/Prompt template file/value": "train/templates/subject_filewords.txt",
"train/Max steps/visible": true,
"train/Max steps/value": 2500,
"train/Save an image to log directory every N steps, 0 to disable/visible": true,
"train/Save an image to log directory every N steps, 0 to disable/value": 250,
"train/Save a copy of embedding to log directory every N steps, 0 to disable/visible": true,
"train/Save a copy of embedding to log directory every N steps, 0 to disable/value": 250,
"train/Save images with embedding in PNG chunks/visible": true,
"train/Save images with embedding in PNG chunks/value": true,
"train/Read parameters (prompt, etc...) from txt2img tab when making previews/visible": true,
"train/Read parameters (prompt, etc...) from txt2img tab when making previews/value": false,
"train/Shuffle tags by ',' when creating prompts./visible": true,
"train/Shuffle tags by ',' when creating prompts./value": false,
"train/Drop out tags when creating prompts./visible": true,
"train/Drop out tags when creating prompts./value": 0,
"train/Drop out tags when creating prompts./minimum": 0,
"train/Drop out tags when creating prompts./maximum": 1,
"train/Drop out tags when creating prompts./step": 0.1,
"train/Choose latent sampling method/visible": true,
"train/Choose latent sampling method/value": "once",
"txt2img/Sampling steps/visible": true,
"txt2img/Sampling steps/value": 20,
"txt2img/Sampling steps/minimum": 1,
"txt2img/Sampling steps/maximum": 150,
"txt2img/Sampling steps/step": 1,
"txt2img/Hires steps/visible": true,
"txt2img/Hires steps/value": 0,
"txt2img/Hires steps/minimum": 0,
"txt2img/Hires steps/maximum": 150,
"txt2img/Hires steps/step": 1,
"txt2img/Resize width to/visible": true,
"txt2img/Resize width to/value": 0,
"txt2img/Resize width to/minimum": 0,
"txt2img/Resize width to/maximum": 2048,
"txt2img/Resize width to/step": 8,
"txt2img/Resize height to/visible": true,
"txt2img/Resize height to/value": 0,
"txt2img/Resize height to/minimum": 0,
"txt2img/Resize height to/maximum": 2048,
"txt2img/Resize height to/step": 8,
"customscript/save_steps_animation.py/txt2img/Script Enabled/visible": true,
"customscript/save_steps_animation.py/txt2img/Script Enabled/value": false,
"customscript/save_steps_animation.py/txt2img/Codec/visible": true,
"customscript/save_steps_animation.py/txt2img/Codec/value": "x264",
"customscript/save_steps_animation.py/txt2img/Interpolation/visible": true,
"customscript/save_steps_animation.py/txt2img/Interpolation/value": "mci",
"customscript/save_steps_animation.py/txt2img/Duration/visible": true,
"customscript/save_steps_animation.py/txt2img/Duration/value": 10,
"customscript/save_steps_animation.py/txt2img/Duration/minimum": 0.5,
"customscript/save_steps_animation.py/txt2img/Duration/maximum": 120,
"customscript/save_steps_animation.py/txt2img/Duration/step": 0.1,
"customscript/save_steps_animation.py/txt2img/Skip steps/visible": true,
"customscript/save_steps_animation.py/txt2img/Skip steps/value": 5,
"customscript/save_steps_animation.py/txt2img/Skip steps/minimum": 0,
"customscript/save_steps_animation.py/txt2img/Skip steps/maximum": 100,
"customscript/save_steps_animation.py/txt2img/Skip steps/step": 1,
"customscript/save_steps_animation.py/txt2img/Debug info/visible": true,
"customscript/save_steps_animation.py/txt2img/Debug info/value": false,
"customscript/save_steps_animation.py/txt2img/Run on incomplete/visible": true,
"customscript/save_steps_animation.py/txt2img/Run on incomplete/value": true,
"customscript/save_steps_animation.py/txt2img/Delete intermediate/visible": true,
"customscript/save_steps_animation.py/txt2img/Delete intermediate/value": true,
"customscript/save_steps_animation.py/txt2img/Create animation/visible": true,
"customscript/save_steps_animation.py/txt2img/Create animation/value": true,
"customscript/save_steps_animation.py/txt2img/Path for intermediate files/visible": true,
"customscript/save_steps_animation.py/txt2img/Path for intermediate files/value": "intermediate",
"customscript/save_steps_animation.py/txt2img/Path for output animation file/visible": true,
"customscript/save_steps_animation.py/txt2img/Path for output animation file/value": "animation",
"customscript/prompt_matrix.py/txt2img/Put variable parts at start of prompt/visible": true,
"customscript/prompt_matrix.py/txt2img/Put variable parts at start of prompt/value": false,
"customscript/prompt_matrix.py/txt2img/Use different seed for each picture/visible": true,
"customscript/prompt_matrix.py/txt2img/Use different seed for each picture/value": false,
"customscript/prompts_from_file.py/txt2img/Iterate seed every line/visible": true,
"customscript/prompts_from_file.py/txt2img/Iterate seed every line/value": false,
"customscript/prompts_from_file.py/txt2img/Use same random seed for all lines/visible": true,
"customscript/prompts_from_file.py/txt2img/Use same random seed for all lines/value": false,
"customscript/prompts_from_file.py/txt2img/List of prompt inputs/visible": true,
"customscript/prompts_from_file.py/txt2img/List of prompt inputs/value": "",
"customscript/xy_grid.py/txt2img/X values/visible": true,
"customscript/xy_grid.py/txt2img/X values/value": "",
"customscript/xy_grid.py/txt2img/Y values/visible": true,
"customscript/xy_grid.py/txt2img/Y values/value": "",
"customscript/xy_grid.py/txt2img/Draw legend/visible": true,
"customscript/xy_grid.py/txt2img/Draw legend/value": true,
"customscript/xy_grid.py/txt2img/Include Separate Images/visible": true,
"customscript/xy_grid.py/txt2img/Include Separate Images/value": false,
"customscript/xy_grid.py/txt2img/Keep -1 for seeds/visible": true,
"customscript/xy_grid.py/txt2img/Keep -1 for seeds/value": false,
"customscript/inspiration.py/txt2img/Prompt Placeholder, which can be used at the top of prompt input/visible": true,
"customscript/inspiration.py/txt2img/Prompt Placeholder, which can be used at the top of prompt input/value": "{inspiration}",
"img2img/Sampling steps/visible": true,
"img2img/Sampling steps/value": 20,
"img2img/Sampling steps/minimum": 1,
"img2img/Sampling steps/maximum": 150,
"img2img/Sampling steps/step": 1,
"customscript/save_steps_animation.py/img2img/Script Enabled/visible": true,
"customscript/save_steps_animation.py/img2img/Script Enabled/value": false,
"customscript/save_steps_animation.py/img2img/Codec/visible": true,
"customscript/save_steps_animation.py/img2img/Codec/value": "x264",
"customscript/save_steps_animation.py/img2img/Interpolation/visible": true,
"customscript/save_steps_animation.py/img2img/Interpolation/value": "mci",
"customscript/save_steps_animation.py/img2img/Duration/visible": true,
"customscript/save_steps_animation.py/img2img/Duration/value": 10,
"customscript/save_steps_animation.py/img2img/Duration/minimum": 0.5,
"customscript/save_steps_animation.py/img2img/Duration/maximum": 120,
"customscript/save_steps_animation.py/img2img/Duration/step": 0.1,
"customscript/save_steps_animation.py/img2img/Skip steps/visible": true,
"customscript/save_steps_animation.py/img2img/Skip steps/value": 5,
"customscript/save_steps_animation.py/img2img/Skip steps/minimum": 0,
"customscript/save_steps_animation.py/img2img/Skip steps/maximum": 100,
"customscript/save_steps_animation.py/img2img/Skip steps/step": 1,
"customscript/save_steps_animation.py/img2img/Debug info/visible": true,
"customscript/save_steps_animation.py/img2img/Debug info/value": false,
"customscript/save_steps_animation.py/img2img/Run on incomplete/visible": true,
"customscript/save_steps_animation.py/img2img/Run on incomplete/value": true,
"customscript/save_steps_animation.py/img2img/Delete intermediate/visible": true,
"customscript/save_steps_animation.py/img2img/Delete intermediate/value": true,
"customscript/save_steps_animation.py/img2img/Create animation/visible": true,
"customscript/save_steps_animation.py/img2img/Create animation/value": true,
"customscript/save_steps_animation.py/img2img/Path for intermediate files/visible": true,
"customscript/save_steps_animation.py/img2img/Path for intermediate files/value": "intermediate",
"customscript/save_steps_animation.py/img2img/Path for output animation file/visible": true,
"customscript/save_steps_animation.py/img2img/Path for output animation file/value": "animation",
"customscript/img2imgalt.py/img2img/Override `Sampling method` to Euler?(this method is built for it)/visible": true,
"customscript/img2imgalt.py/img2img/Override `Sampling method` to Euler?(this method is built for it)/value": true,
"customscript/img2imgalt.py/img2img/Override `prompt` to the same value as `original prompt`?(and `negative prompt`)/visible": true,
"customscript/img2imgalt.py/img2img/Override `prompt` to the same value as `original prompt`?(and `negative prompt`)/value": true,
"customscript/img2imgalt.py/img2img/Original prompt/visible": true,
"customscript/img2imgalt.py/img2img/Original prompt/value": "",
"customscript/img2imgalt.py/img2img/Original negative prompt/visible": true,
"customscript/img2imgalt.py/img2img/Original negative prompt/value": "",
"customscript/img2imgalt.py/img2img/Override `Sampling Steps` to the same value as `Decode steps`?/visible": true,
"customscript/img2imgalt.py/img2img/Override `Sampling Steps` to the same value as `Decode steps`?/value": true,
"customscript/img2imgalt.py/img2img/Decode steps/visible": true,
"customscript/img2imgalt.py/img2img/Decode steps/value": 50,
"customscript/img2imgalt.py/img2img/Decode steps/minimum": 1,
"customscript/img2imgalt.py/img2img/Decode steps/maximum": 150,
"customscript/img2imgalt.py/img2img/Decode steps/step": 1,
"customscript/img2imgalt.py/img2img/Override `Denoising strength` to 1?/visible": true,
"customscript/img2imgalt.py/img2img/Override `Denoising strength` to 1?/value": true,
"customscript/img2imgalt.py/img2img/Decode CFG scale/visible": true,
"customscript/img2imgalt.py/img2img/Decode CFG scale/value": 1.0,
"customscript/img2imgalt.py/img2img/Decode CFG scale/minimum": 0.0,
"customscript/img2imgalt.py/img2img/Decode CFG scale/maximum": 15.0,
"customscript/img2imgalt.py/img2img/Decode CFG scale/step": 0.1,
"customscript/img2imgalt.py/img2img/Randomness/visible": true,
"customscript/img2imgalt.py/img2img/Randomness/value": 0.0,
"customscript/img2imgalt.py/img2img/Randomness/minimum": 0.0,
"customscript/img2imgalt.py/img2img/Randomness/maximum": 1.0,
"customscript/img2imgalt.py/img2img/Randomness/step": 0.01,
"customscript/img2imgalt.py/img2img/Sigma adjustment for finding noise for image/visible": true,
"customscript/img2imgalt.py/img2img/Sigma adjustment for finding noise for image/value": false,
"customscript/loopback.py/img2img/Loops/visible": true,
"customscript/loopback.py/img2img/Loops/value": 4,
"customscript/loopback.py/img2img/Loops/minimum": 1,
"customscript/loopback.py/img2img/Loops/maximum": 32,
"customscript/loopback.py/img2img/Loops/step": 1,
"customscript/loopback.py/img2img/Denoising strength change factor/visible": true,
"customscript/loopback.py/img2img/Denoising strength change factor/value": 1,
"customscript/loopback.py/img2img/Denoising strength change factor/minimum": 0.9,
"customscript/loopback.py/img2img/Denoising strength change factor/maximum": 1.1,
"customscript/loopback.py/img2img/Denoising strength change factor/step": 0.01,
"customscript/outpainting_mk_2.py/img2img/Pixels to expand/visible": true,
"customscript/outpainting_mk_2.py/img2img/Pixels to expand/value": 128,
"customscript/outpainting_mk_2.py/img2img/Pixels to expand/minimum": 8,
"customscript/outpainting_mk_2.py/img2img/Pixels to expand/maximum": 256,
"customscript/outpainting_mk_2.py/img2img/Pixels to expand/step": 8,
"customscript/outpainting_mk_2.py/img2img/Mask blur/visible": true,
"customscript/outpainting_mk_2.py/img2img/Mask blur/value": 8,
"customscript/outpainting_mk_2.py/img2img/Mask blur/minimum": 0,
"customscript/outpainting_mk_2.py/img2img/Mask blur/maximum": 64,
"customscript/outpainting_mk_2.py/img2img/Mask blur/step": 1,
"customscript/outpainting_mk_2.py/img2img/Fall-off exponent (lower=higher detail)/visible": true,
"customscript/outpainting_mk_2.py/img2img/Fall-off exponent (lower=higher detail)/value": 1.0,
"customscript/outpainting_mk_2.py/img2img/Fall-off exponent (lower=higher detail)/minimum": 0.0,
"customscript/outpainting_mk_2.py/img2img/Fall-off exponent (lower=higher detail)/maximum": 4.0,
"customscript/outpainting_mk_2.py/img2img/Fall-off exponent (lower=higher detail)/step": 0.01,
"customscript/outpainting_mk_2.py/img2img/Color variation/visible": true,
"customscript/outpainting_mk_2.py/img2img/Color variation/value": 0.05,
"customscript/outpainting_mk_2.py/img2img/Color variation/minimum": 0.0,
"customscript/outpainting_mk_2.py/img2img/Color variation/maximum": 1.0,
"customscript/outpainting_mk_2.py/img2img/Color variation/step": 0.01,
"customscript/poor_mans_outpainting.py/img2img/Pixels to expand/visible": true,
"customscript/poor_mans_outpainting.py/img2img/Pixels to expand/value": 128,
"customscript/poor_mans_outpainting.py/img2img/Pixels to expand/minimum": 8,
"customscript/poor_mans_outpainting.py/img2img/Pixels to expand/maximum": 256,
"customscript/poor_mans_outpainting.py/img2img/Pixels to expand/step": 8,
"customscript/poor_mans_outpainting.py/img2img/Mask blur/visible": true,
"customscript/poor_mans_outpainting.py/img2img/Mask blur/value": 4,
"customscript/poor_mans_outpainting.py/img2img/Mask blur/minimum": 0,
"customscript/poor_mans_outpainting.py/img2img/Mask blur/maximum": 64,
"customscript/poor_mans_outpainting.py/img2img/Mask blur/step": 1,
"customscript/poor_mans_outpainting.py/img2img/Masked content/visible": true,
"customscript/poor_mans_outpainting.py/img2img/Masked content/value": "fill",
"customscript/prompt_matrix.py/img2img/Put variable parts at start of prompt/visible": true,
"customscript/prompt_matrix.py/img2img/Put variable parts at start of prompt/value": false,
"customscript/prompt_matrix.py/img2img/Use different seed for each picture/visible": true,
"customscript/prompt_matrix.py/img2img/Use different seed for each picture/value": false,
"customscript/prompts_from_file.py/img2img/Iterate seed every line/visible": true,
"customscript/prompts_from_file.py/img2img/Iterate seed every line/value": false,
"customscript/prompts_from_file.py/img2img/Use same random seed for all lines/visible": true,
"customscript/prompts_from_file.py/img2img/Use same random seed for all lines/value": false,
"customscript/prompts_from_file.py/img2img/List of prompt inputs/visible": true,
"customscript/prompts_from_file.py/img2img/List of prompt inputs/value": "",
"customscript/sd_upscale.py/img2img/Tile overlap/visible": true,
"customscript/sd_upscale.py/img2img/Tile overlap/value": 64,
"customscript/sd_upscale.py/img2img/Tile overlap/minimum": 0,
"customscript/sd_upscale.py/img2img/Tile overlap/maximum": 256,
"customscript/sd_upscale.py/img2img/Tile overlap/step": 16,
"customscript/sd_upscale.py/img2img/Scale Factor/visible": true,
"customscript/sd_upscale.py/img2img/Scale Factor/value": 2.0,
"customscript/sd_upscale.py/img2img/Scale Factor/minimum": 1.0,
"customscript/sd_upscale.py/img2img/Scale Factor/maximum": 4.0,
"customscript/sd_upscale.py/img2img/Scale Factor/step": 0.05,
"customscript/sd_upscale.py/img2img/Upscaler/visible": true,
"customscript/sd_upscale.py/img2img/Upscaler/value": "None",
"customscript/xy_grid.py/img2img/X values/visible": true,
"customscript/xy_grid.py/img2img/X values/value": "",
"customscript/xy_grid.py/img2img/Y values/visible": true,
"customscript/xy_grid.py/img2img/Y values/value": "",
"customscript/xy_grid.py/img2img/Draw legend/visible": true,
"customscript/xy_grid.py/img2img/Draw legend/value": true,
"customscript/xy_grid.py/img2img/Include Separate Images/visible": true,
"customscript/xy_grid.py/img2img/Include Separate Images/value": false,
"customscript/xy_grid.py/img2img/Keep -1 for seeds/visible": true,
"customscript/xy_grid.py/img2img/Keep -1 for seeds/value": false,
"customscript/inspiration.py/img2img/Prompt Placeholder, which can be used at the top of prompt input/visible": true,
"customscript/inspiration.py/img2img/Prompt Placeholder, which can be used at the top of prompt input/value": "{inspiration}"
}
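The keys in ui-config.json follow a `tab/Component label/property` path scheme (with `customscript/<script>.py/<tab>/...` prefixes for script controls), where the final segment is the property being overridden (`visible`, `value`, `minimum`, `maximum`, `step`). As a minimal sketch of how such a file could be applied to UI components — the `Component` class and `apply_defaults` helper here are hypothetical illustrations, not the webui's actual API:

```python
import json

class Component:
    """Hypothetical stand-in for a UI widget identified by a slash-separated path."""
    def __init__(self, path):
        self.path = path          # e.g. "txt2img/Sampling Steps"
        self.visible = True
        self.value = None
        self.minimum = None
        self.maximum = None
        self.step = None

def apply_defaults(components, config):
    """Apply 'path/property' entries to matching components; ignore unknown keys.

    rpartition is used so paths containing slashes themselves
    (e.g. 'customscript/xy_grid.py/txt2img/X values') still resolve:
    everything before the last '/' is the component path, the rest the property.
    """
    index = {c.path: c for c in components}
    for key, val in config.items():
        path, _, prop = key.rpartition("/")
        comp = index.get(path)
        if comp is not None and hasattr(comp, prop):
            setattr(comp, prop, val)

config = json.loads(
    '{"txt2img/Sampling Steps/value": 20, "txt2img/Sampling Steps/maximum": 200}'
)
steps = Component("txt2img/Sampling Steps")
apply_defaults([steps], config)
print(steps.value, steps.maximum)  # → 20 200
```

Unrecognized keys are silently skipped, which matches the forgiving behavior a defaults file like this needs: stale entries for removed components must not break startup.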

user.css Normal file (59 lines)

@@ -0,0 +1,59 @@
/* generic html tags */
html { font-size: 16px; }
body, button, input, select, textarea { font-family: "Segoe UI"; font-variant: small-caps; }
img { background-color: black; }
input[type=range] { height: 14px; appearance: none; margin-top: 12px }
input[type=range]::-webkit-slider-runnable-track { width: 100%; height: 18px; cursor: pointer; box-shadow: 2px 2px 3px #111111; background: #50555C; border-radius: 2px; border: 0px solid #222222; }
input[type=range]::-webkit-slider-thumb { box-shadow: 2px 2px 3px #111111; border: 0px solid #000000; height: 18px; width: 40px; border-radius: 2px; background: var(--highlight-color); cursor: pointer; -webkit-appearance: none; margin-top: 0px; }
::-webkit-scrollbar { width: 12px; }
::-webkit-scrollbar-track { background: #333333; }
::-webkit-scrollbar-thumb { background-color: var(--highlight-color); border-radius: 2px; border-width: 0; box-shadow: 2px 2px 3px #111111; }
/* main gradio components by selector */
div.gradio-container.dark > div.w-full.flex.flex-col.min-h-screen > div { background-color: black; }
/* gradio shadowroot */
.gradio-container { font-family: "Segoe UI"; font-variant: small-caps; --left-column: 540px; --highlight-color: #CE6400; --inactive-color: #4E1400; }
/* gradio style classes */
.tabs { background-color: black; }
.dark { background-color: black; }
.dark .gr-panel { border-radius: 0; background-color: black; }
.dark .gr-input-label { color: lightyellow; border-width: 0; background: transparent; padding: 2px !important; }
.dark .gr-input { background-color: #333333 !important; }
.dark .gr-form { border-radius: 0; border-width: 0; }
.dark .gr-check-radio { background-color: var(--inactive-color); border-width: 0; border-radius: 2px; box-shadow: 2px 2px 3px #111111; }
.dark .gr-check-radio:checked { background-color: var(--highlight-color); }
.dark .gr-button { border-radius: 0; font-weight: normal; box-shadow: 2px 2px 3px #111111; font-size: 0.9rem; min-width: 32px; }
.dark .gr-box { border-radius: 0; background-color: #222222; box-shadow: 2px 2px 3px #111111; border-width: 0; padding-bottom: 12px; }
.dark .dark\:bg-gray-900 { background-color: black; }
.dark .bg-white { color: lightyellow; border-radius: 0; }
.dark fieldset span.text-gray-500, .dark .gr-block.gr-box span.text-gray-500, .dark label.block span { background-color: #222222; padding: 0 12px 0 12px; }
.dark .bg-gray-200, .dark .\!bg-gray-200 { background-color: transparent; }
.gap-2 { padding-top: 8px; }
.border-2 { border-width: 0; }
.border-b-2 { border-bottom-width: 2px; border-color: var(--highlight-color) !important; padding-bottom: 2px; margin-bottom: 8px; }
.px-4 { padding-left: 1rem; padding-right: 1rem; }
.py-6 { padding-bottom: 0; }
.overflow-hidden .flex .flex-col .relative col .gap-4 { min-width: var(--left-column); max-width: var(--left-column); } /* this is a problematic one */
/* automatic style classes */
.progressDiv .progress { background: var(--highlight-color); border-radius: 2px; }
.gallery-item { box-shadow: none !important; }
/* gradio elements overrides */
#txt2img_settings, #img2img_settings { min-width: var(--left-column); max-width: var(--left-column); }
#txt2img_prompt > label > textarea { font-size: 1.2rem; }
#txt2img_neg_prompt > label > textarea { font-size: 1.2rem; }
#img2img_prompt > label > textarea { font-size: 1.2rem; }
#img2img_neg_prompt > label > textarea { font-size: 1.2rem; }
#txt2img_generate, #img2img_generate, #txt2img_interrupt, #img2img_interrupt, #txt2img_skip, #img2img_skip { margin-top: 10px; min-height: 2rem; height: 63px; }
#txt2img_interrupt, #img2img_interrupt, #txt2img_skip, #img2img_skip { background-color: var(--inactive-color); }
#txt2img_gallery { background: black; }
#tab_extensions table { background-color: #222222; }
#style_pos_col, #style_neg_col, #roll_col { display: none; }
#interrogate_col { margin-top: 10px; }
#txt2img_progress_row > div { min-width: var(--left-column); max-width: var(--left-column); }
#save-animation { border-radius: 0 !important; margin-bottom: 16px; background-color: #111111; }
#open_folder_txt2img, #open_folder_img2img, #open_folder_extras { display: none }
#footer { display: none; }


@@ -1,19 +0,0 @@
#!/bin/bash
####################################################################
# macOS defaults #
# Please modify webui-user.sh to change these instead of this file #
####################################################################
if [[ -x "$(command -v python3.10)" ]]
then
python_cmd="python3.10"
fi
export install_dir="$HOME"
export COMMANDLINE_ARGS="--skip-torch-cuda-test --no-half --use-cpu interrogate"
export TORCH_COMMAND="pip install torch==1.12.1 torchvision==0.13.1"
export K_DIFFUSION_REPO="https://github.com/brkirch/k-diffusion.git"
export K_DIFFUSION_COMMIT_HASH="51c9778f269cedb55a4d88c79c0246d35bdadb71"
export PYTORCH_ENABLE_MPS_FALLBACK=1
####################################################################


@ -1,8 +0,0 @@
@echo off
set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=
call webui.bat


@ -1,46 +0,0 @@
#!/bin/bash
#########################################################
# Uncomment and change the variables below to your need:#
#########################################################
# Install directory without trailing slash
#install_dir="/home/$(whoami)"
# Name of the subdirectory
#clone_dir="stable-diffusion-webui"
# Commandline arguments for webui.py, for example: export COMMANDLINE_ARGS="--medvram --opt-split-attention"
#export COMMANDLINE_ARGS=""
# python3 executable
#python_cmd="python3"
# git executable
#export GIT="git"
# python3 venv without trailing slash (defaults to ${install_dir}/${clone_dir}/venv)
#venv_dir="venv"
# script to launch to start the app
#export LAUNCH_SCRIPT="launch.py"
# install command for torch
#export TORCH_COMMAND="pip install torch==1.12.1+cu113 --extra-index-url https://download.pytorch.org/whl/cu113"
# Requirements file to use for stable-diffusion-webui
#export REQS_FILE="requirements_versions.txt"
# Fixed git repos
#export K_DIFFUSION_PACKAGE=""
#export GFPGAN_PACKAGE=""
# Fixed git commits
#export STABLE_DIFFUSION_COMMIT_HASH=""
#export TAMING_TRANSFORMERS_COMMIT_HASH=""
#export CODEFORMER_COMMIT_HASH=""
#export BLIP_COMMIT_HASH=""
# Uncomment to enable accelerated launch
#export ACCELERATE="True"
###########################################


@ -1,74 +0,0 @@
@echo off
if not defined PYTHON (set PYTHON=python)
if not defined VENV_DIR (set VENV_DIR=venv)
set ERROR_REPORTING=FALSE
mkdir tmp 2>NUL
%PYTHON% -c "" >tmp/stdout.txt 2>tmp/stderr.txt
if %ERRORLEVEL% == 0 goto :start_venv
echo Couldn't launch python
goto :show_stdout_stderr
:start_venv
if [%VENV_DIR%] == [-] goto :skip_venv
dir %VENV_DIR%\Scripts\Python.exe >tmp/stdout.txt 2>tmp/stderr.txt
if %ERRORLEVEL% == 0 goto :activate_venv
for /f "delims=" %%i in ('CALL %PYTHON% -c "import sys; print(sys.executable)"') do set PYTHON_FULLNAME="%%i"
echo Creating venv in directory %VENV_DIR% using python %PYTHON_FULLNAME%
%PYTHON_FULLNAME% -m venv %VENV_DIR% >tmp/stdout.txt 2>tmp/stderr.txt
if %ERRORLEVEL% == 0 goto :activate_venv
echo Unable to create venv in directory %VENV_DIR%
goto :show_stdout_stderr
:activate_venv
set PYTHON="%~dp0%VENV_DIR%\Scripts\Python.exe"
echo venv %PYTHON%
if [%ACCELERATE%] == ["True"] goto :accelerate
goto :launch
:skip_venv
:accelerate
echo Checking for accelerate
set ACCELERATE="%~dp0%VENV_DIR%\Scripts\accelerate.exe"
if EXIST %ACCELERATE% goto :accelerate_launch
:launch
%PYTHON% launch.py %*
pause
exit /b
:accelerate_launch
echo Accelerating
%ACCELERATE% launch --num_cpu_threads_per_process=6 launch.py %*
pause
exit /b
:show_stdout_stderr
echo.
echo exit code: %errorlevel%
for /f %%i in ("tmp\stdout.txt") do set size=%%~zi
if %size% equ 0 goto :show_stderr
echo.
echo stdout:
type tmp\stdout.txt
:show_stderr
for /f %%i in ("tmp\stderr.txt") do set size=%%~zi
if %size% equ 0 goto :endofscript
echo.
echo stderr:
type tmp\stderr.txt
:endofscript
echo.
echo Launch unsuccessful. Exiting.
pause

webui.sh

@ -1,169 +0,0 @@
#!/usr/bin/env bash
#################################################
# Please do not make any changes to this file, #
# change the variables in webui-user.sh instead #
#################################################
# If run from macOS, load defaults from webui-macos-env.sh
if [[ "$OSTYPE" == "darwin"* ]]; then
if [[ -f webui-macos-env.sh ]]
then
source ./webui-macos-env.sh
fi
fi
# Read variables from webui-user.sh
# shellcheck source=/dev/null
if [[ -f webui-user.sh ]]
then
source ./webui-user.sh
fi
# Set defaults
# Install directory without trailing slash
if [[ -z "${install_dir}" ]]
then
install_dir="/home/$(whoami)"
fi
# Name of the subdirectory (defaults to stable-diffusion-webui)
if [[ -z "${clone_dir}" ]]
then
clone_dir="stable-diffusion-webui"
fi
# python3 executable
if [[ -z "${python_cmd}" ]]
then
python_cmd="python3"
fi
# git executable
if [[ -z "${GIT}" ]]
then
export GIT="git"
fi
# python3 venv without trailing slash (defaults to ${install_dir}/${clone_dir}/venv)
if [[ -z "${venv_dir}" ]]
then
venv_dir="venv"
fi
if [[ -z "${LAUNCH_SCRIPT}" ]]
then
LAUNCH_SCRIPT="launch.py"
fi
# this script cannot be run as root by default
can_run_as_root=0
# read any command line flags to the webui.sh script
while getopts "f" flag > /dev/null 2>&1
do
case ${flag} in
f) can_run_as_root=1;;
*) break;;
esac
done
# Disable sentry logging
export ERROR_REPORTING=FALSE
# Do not reinstall existing pip packages on Debian/Ubuntu
export PIP_IGNORE_INSTALLED=0
# Pretty print
delimiter="################################################################"
printf "\n%s\n" "${delimiter}"
printf "\e[1m\e[32mInstall script for stable-diffusion + Web UI\n"
printf "\e[1m\e[34mTested on Debian 11 (Bullseye)\e[0m"
printf "\n%s\n" "${delimiter}"
# Do not run as root
if [[ $(id -u) -eq 0 && ${can_run_as_root} -eq 0 ]]
then
printf "\n%s\n" "${delimiter}"
printf "\e[1m\e[31mERROR: This script must not be launched as root, aborting...\e[0m"
printf "\n%s\n" "${delimiter}"
exit 1
else
printf "\n%s\n" "${delimiter}"
printf "Running on \e[1m\e[32m%s\e[0m user" "$(whoami)"
printf "\n%s\n" "${delimiter}"
fi
if [[ -d .git ]]
then
printf "\n%s\n" "${delimiter}"
printf "Repo already cloned, using it as install directory"
printf "\n%s\n" "${delimiter}"
install_dir="${PWD}/../"
clone_dir="${PWD##*/}"
fi
# Check prerequisites
for preq in "${GIT}" "${python_cmd}"
do
if ! hash "${preq}" &>/dev/null
then
printf "\n%s\n" "${delimiter}"
printf "\e[1m\e[31mERROR: %s is not installed, aborting...\e[0m" "${preq}"
printf "\n%s\n" "${delimiter}"
exit 1
fi
done
if ! "${python_cmd}" -c "import venv" &>/dev/null
then
printf "\n%s\n" "${delimiter}"
printf "\e[1m\e[31mERROR: python3-venv is not installed, aborting...\e[0m"
printf "\n%s\n" "${delimiter}"
exit 1
fi
cd "${install_dir}"/ || { printf "\e[1m\e[31mERROR: Can't cd to %s/, aborting...\e[0m" "${install_dir}"; exit 1; }
if [[ -d "${clone_dir}" ]]
then
cd "${clone_dir}"/ || { printf "\e[1m\e[31mERROR: Can't cd to %s/%s/, aborting...\e[0m" "${install_dir}" "${clone_dir}"; exit 1; }
else
printf "\n%s\n" "${delimiter}"
printf "Clone stable-diffusion-webui"
printf "\n%s\n" "${delimiter}"
"${GIT}" clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git "${clone_dir}"
cd "${clone_dir}"/ || { printf "\e[1m\e[31mERROR: Can't cd to %s/%s/, aborting...\e[0m" "${install_dir}" "${clone_dir}"; exit 1; }
fi
printf "\n%s\n" "${delimiter}"
printf "Create and activate python venv"
printf "\n%s\n" "${delimiter}"
cd "${install_dir}"/"${clone_dir}"/ || { printf "\e[1m\e[31mERROR: Can't cd to %s/%s/, aborting...\e[0m" "${install_dir}" "${clone_dir}"; exit 1; }
if [[ ! -d "${venv_dir}" ]]
then
"${python_cmd}" -m venv "${venv_dir}"
first_launch=1
fi
# shellcheck source=/dev/null
if [[ -f "${venv_dir}"/bin/activate ]]
then
source "${venv_dir}"/bin/activate
else
printf "\n%s\n" "${delimiter}"
printf "\e[1m\e[31mERROR: Cannot activate python venv, aborting...\e[0m"
printf "\n%s\n" "${delimiter}"
exit 1
fi
if [[ "${ACCELERATE}" == "True" ]] && [ -x "$(command -v accelerate)" ]
then
printf "\n%s\n" "${delimiter}"
printf "Accelerating launch.py..."
printf "\n%s\n" "${delimiter}"
exec accelerate launch --num_cpu_threads_per_process=6 "${LAUNCH_SCRIPT}" "$@"
else
printf "\n%s\n" "${delimiter}"
printf "Launching launch.py..."
printf "\n%s\n" "${delimiter}"
exec "${python_cmd}" "${LAUNCH_SCRIPT}" "$@"
fi
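A note for anyone porting these launch scripts: a single-bracket test written as `[ ${VAR}="True" ]`, with no spaces around `=`, collapses into one word, so `[` performs a non-empty-string test and succeeds no matter what the variable holds. A minimal sketch of the pitfall and the correct form (the `ACCELERATE` name is reused here purely for illustration):

```shell
#!/usr/bin/env bash
ACCELERATE="False"

# Broken: no spaces around `=`, so `[` sees the single word "False=True",
# a non-empty string, and the test is always true.
if [ ${ACCELERATE}="True" ]; then
    echo "broken check: taken even though ACCELERATE=False"
fi

# Correct: spaces (or the bash [[ ... == ... ]] form) make it a real
# string comparison.
if [[ "${ACCELERATE}" == "True" ]]; then
    echo "correct check: taken"
else
    echo "correct check: not taken"
fi
```

Run as-is, only the "broken check" branch fires, which is why a guard written this way would take the accelerate path even when the variable is set to something other than `True`.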

wiki Submodule

@ -0,0 +1 @@
Subproject commit bc9a09829dbc90760c8626c83824e05cfab5cb98