Autorunner (#25)

* Some first steps
* Fancy filename work and started on model choosing
* Progress
* Decent first version working
* Stabilisation and fixes
* Added quality gate. Started reworking how prompts are generated from scripts, so it works more in line with One Button run
* Reworked how prompt compounder works
* Started some work on IMG2IMG
* Some changes and fixes
* Added img2img options to interface
* First version of Ultimate SD upscale. Some fun with buttons as well
* Added Ultimate SD upscale logic to UI, some small bugfixes as well
* Controlnet tile resample integration (!!!)
* Some changes. Controlnet is misbehaving, researching
* Fixed controlnet, started work on extras upscaling
* Finished upscale with extras
* Working on upscale only mode
* Some stability and UI changes
* Minor changes
* Update one_button_run_and_upscale.md. Started work on user guide
* Minor bug in size choosing
* Update one_button_run_and_upscale.md
* Update README.md
* Update my_first_generation.md
* Update my_first_generation.md
* Update my_first_generation.md
* Added folder button, hacky fix for PLMS and UniPC
* Update my_first_generation.md
* Update README.md
* Update my_first_generation.md
* Update one_button_run_and_upscale.md
* Last bits, finally stable?
* Last bug?
* Update one_button_run_and_upscale.md
* Stability fixes
* Fixing the last bug

Branch: pull/26/head
parent 8385fcdb95
commit 849d965af5

README.md | 38
@@ -12,6 +12,8 @@ It generates an entire prompt from scratch. It is random, but controlled. You si
It is best used on all-purpose models, such as Stable Diffusion 1.5 or those based on 1.5, such as [deliberate](https://civitai.com/models/4823/deliberate) and [dreamlike diffusion](https://civitai.com/models/1274/dreamlike-diffusion-10). However, feel free to use it on your personal favorite models.

+A simple user guide for first time use and settings is available [here](https://github.com/AIrjen/OneButtonPrompt/user_guides/my_first_generation.md).

## How to use in automatic1111

In TXT2IMG or IMG2IMG, select the script "One Button Prompt".
@@ -33,7 +35,7 @@ Enjoy creating awesome pictures:

Please be aware that not each picture will be awesome, due to the randomness of the prompt, artist and model used.
You might get an epic landscape, or a photo of an Aggravated Trout. In my experience, about 1 in 5 is good. Every one of them is interesting.

-Some more examples below.
+Some more examples below. And check the first time user guide [here](https://github.com/AIrjen/OneButtonPrompt/user_guides/my_first_generation.md).

### Some details

It will generate between 0 and 3 artists, and add those to the prompt.
@@ -124,20 +126,36 @@ If you want more control, use "current prompt + AND". An example would be "Art b

>
>AND art by brandon woelfel, 2 people, Abstract, selfie shot angle of a 1920's Nasty slight Michelle Yeoh surrounded by Grapefruits, Blonde hair styled as Short and messy, glowing Turquoise eyes, Anime screencap, Panfuturism, Magic the gathering, photolab, High quality
-# off-hands, automatic generation
+# One Button Run and Upscale

-This project started out as a personal project, to automatically generate windows wallpapers. The code is still in here, and can be used. You would need to set it up correctly, so it is for advanced users only.
-This will allow you to:
-In the main.py script, there is logic that calls the APIs from automatic1111. Uncomment the lines you need, such as txt2img. You also need to create some folders on your computer (see code).
-Start automatic1111 with the option --api and run the main script.
-Edit the main script to set the amount of loops/images to generate, and uncomment the txt2img, img2img and upscale scripts to taste.
-It was built for personal development, so adjust directories and settings accordingly.

+Using the API feature of WebUI, this allows you to:
+
+1. Generate an image with TXT2IMG
+   1. Can enable Hi res. fix
+   2. Possible to set up a __Quality Gate__, so only the best images get upscaled
+   3. Possible to ignore the One Button Prompt generation, and __use your own prompts__
+2. Upscale that image with IMG2IMG
+   1. This process can be repeated. Loopback enabled.
+   2. Supports __SD Upscale__, __Ultimate SD Upscale__ and __Controlnet tile_resample__ methods of upscaling
+   3. Upscale with EXTRAS
+   4. Possible to __just upscale__ existing images
+
+All with a single press of __One Button__.
+
+[User Guide here!](https://github.com/AIrjen/OneButtonPrompt/user_guides/one_button_run_and_upscale.md#one-button-run-and-upscale)
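The feature list above is, at heart, a simple pipeline: TXT2IMG, an optionally repeated IMG2IMG pass, then EXTRAS. A minimal sketch of that control flow, with stub functions standing in for the real WebUI API calls (all names here are illustrative, not the script's own):

```python
def txt2img(prompt):
    # stand-in for the real TXT2IMG API call
    return "txt2img({})".format(prompt)

def img2img(image):
    # stand-in for the IMG2IMG upscale pass
    return "img2img({})".format(image)

def extras(image):
    # stand-in for the EXTRAS upscale pass
    return "extras({})".format(image)

def run_and_upscale(prompt, loopback=1, use_extras=True):
    image = txt2img(prompt)
    for _ in range(loopback):  # the IMG2IMG pass can be repeated (loopback)
        image = img2img(image)
    if use_extras:
        image = extras(image)
    return image

result = run_and_upscale("a photo of a cat")
```

With `loopback=1` and extras enabled, `result` traces the three stages in order.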
# roadmap

Some ideas I'd like to implement:

-- Better workflow management in workflow assist tab
-- Curated artist lists
-- SD 2.1 support (inversion negative prompt)
+- The In Control update
+- Choose your own subject
+- Split up subjects more, and pick more detailed subjects, such as food, female, building, etc.
+- Support for LoRA and textual inversions
+- Trigger word support
+- ~~Bring upscale automation to front-end~~ Done
+- ~~Better workflow management in workflow assist tab~~ Done
+- ~~Curated artist lists~~ Done
- Ongoing: list refinements and new features in the prompt generation

If you have a good idea or suggestion, let me know, or build it yourself ;)
@@ -0,0 +1,2 @@
+From One Button run and upscale
+There needs to be a file in each directory for GIT to work properly.

@@ -0,0 +1,2 @@
+From One Button run and upscale
+There needs to be a file in each directory for GIT to work properly.

@@ -0,0 +1,2 @@
+From One Button run and upscale
+There needs to be a file in each directory for GIT to work properly.

@@ -0,0 +1,2 @@
+From One Button run and upscale
+There needs to be a file in each directory for GIT to work properly.

@@ -0,0 +1,5 @@
+From One Button run and upscale
+There needs to be a file in each directory for GIT to work properly.
+
+In this directory, place all images that you want to upscale in a batch.
+Then start One Button run and upscale with just 'only upscale'

File diff suppressed because it is too large
call_extras.py

@@ -1,4 +1,4 @@
-import json
+import os
 import requests
 import io
 import base64

@@ -7,37 +7,40 @@ from PIL import Image, PngImagePlugin

-def call_extras(imagelocation):
+def call_extras(imagelocation, originalimage, originalpnginfo="", apiurl="http://127.0.0.1:7860", filename="", extrasupscaler1="all", extrasupscaler2="all", extrasupscaler2visiblity="0.5", extrasupscaler2gfpgan="0", extrasupscaler2codeformer="0.15", extrasupscaler2codeformerweight="0.1", extrasresize="2"):

-    imagewip = Image.open(imagelocation)
     #rest of prompt things
-    upscaling_resize = "2"
-    upscaler_1 = "4x-UltraSharp"
-    upscaler_2 = "R-ESRGAN 4x+"
+    upscaling_resize = extrasresize
+    upscaler_1 = extrasupscaler1
+    upscaler_2 = extrasupscaler2

     with open(imagelocation, "rb") as image_file:
         encoded_string = base64.b64encode(image_file.read())
     encodedstring2 = encoded_string.decode('utf-8')

     #params to stay the same
-    url = "http://127.0.0.1:7860"
-    outputextrasfolder = 'C:\\automated_output\\extras\\'
-    outputextrasilename = str(uuid.uuid4())
+    url = apiurl
+    script_dir = os.path.dirname(os.path.abspath(__file__)) # Script directory
+    outputextrasfolder = os.path.join(script_dir, "./automated_outputs/extras/")
+    if(filename==""):
+        filename = str(uuid.uuid4())
+    outputextrasilename = filename
     outputextraspng = '.png'
     outputextrasFull = '{}{}{}'.format(outputextrasfolder,outputextrasilename,outputextraspng)

     payload = {
-        "upscaling_resize": upscaling_resize,
+        "upscaling_resize": float(upscaling_resize),
         "upscaler_1": upscaler_1,
         "image": encodedstring2,
         "resize_mode": 0,
         "show_extras_results": "false",
-        "gfpgan_visibility": 0,
-        "codeformer_visibility": 0.15,
-        "codeformer_weight": 0.1,
+        "gfpgan_visibility": extrasupscaler2gfpgan,
+        "codeformer_visibility": extrasupscaler2visiblity,
+        "codeformer_weight": extrasupscaler2codeformerweight,
        "upscaling_crop": "false",
         "upscaler_2": upscaler_2,
-        "extras_upscaler_2_visibility": 0.5,
+        "extras_upscaler_2_visibility": extrasupscaler2visiblity,
         "upscale_first": "true",
         "rb_enabled": "false", # the remove backgrounds plugin is automatically turned on, need to turn it off
         "models": "None" # the remove backgrounds plugin is automatically turned on, need to turn it off

@@ -47,6 +50,30 @@ def call_extras(imagelocation):

     response = requests.post(url=f'{url}/sdapi/v1/extra-single-image', json=payload)

     image = Image.open(io.BytesIO(base64.b64decode(response.json().get("image"))))
-    image.save(outputextrasFull)
+    # when using just upscale, we somehow can't get the png info. Unless we do IMG2IMG or TXT2IMG first, then it is added.
+    # minor issue, so this solves it for now
+    #if(originalpnginfo==""):
+    #    png_payload = {
+    #        "image": "data:image/png;base64," + image[0]
+    #    }
+    #    response2 = requests.post(url=f'{url}/sdapi/v1/png-info', json=png_payload)
+    #    pnginfo = PngImagePlugin.PngInfo()
+    #    pnginfo.add_text("parameters", response2.json().get("info"))
+    #    originalpnginfo = pnginfo
+    image.save(outputextrasFull, pnginfo=originalpnginfo)

     return outputextrasFull
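The extras call above ships the image to the API as a base64 string and decodes the base64 image in the response. A standalone sketch of that round trip, assuming nothing beyond the standard library (the placeholder bytes are illustrative, not a real PNG):

```python
import base64

def encode_image_bytes(data):
    # what the script does before posting: raw bytes -> base64 -> utf-8 string
    return base64.b64encode(data).decode('utf-8')

def decode_image_string(s):
    # what the script does with response.json().get("image"): base64 string -> raw bytes
    return base64.b64decode(s)

fake_png = b"\x89PNG\r\n\x1a\n" + b"\x00" * 16  # placeholder bytes, not a real image
encoded = encode_image_bytes(fake_png)
roundtrip = decode_image_string(encoded)
```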
call_img2img.py | 193

@@ -1,47 +1,116 @@
-import json
+import os
 import requests
 import io
 import base64
 import uuid
 from PIL import Image, PngImagePlugin
+from modules import shared
+from model_lists import *

-def call_img2img(imagelocation, denoising_strength = 0.25, scale = 1.5, padding = 64):
+def call_img2img(imagelocation, originalimage, originalpnginfo="", apiurl="http://127.0.0.1:7860", filename="", prompt="", negativeprompt="", img2imgsamplingsteps="20", img2imgcfg="7", img2imgsamplingmethod="DPM++ SDE Karras", img2imgupscaler="R-ESRGAN 4x+", img2imgmodel="currently selected model", denoising_strength="0.3", scale="2", padding="64", upscalescript="SD upscale", usdutilewidth="512", usdutileheight="0", usdumaskblur="8", usduredraw="Linear", usduSeamsfix="None", usdusdenoise="0.35", usduswidth="64", usduspadding="32", usdusmaskblur="8", controlnetenabled=False, controlnetmodel="", controlnetblockymode=False):
+    negativepromptfound = 0

     #params to stay the same
-    url = "http://127.0.0.1:7860"
-    outputimg2imgfolder = 'C:\\automated_output\\img2img\\'
-    outputimg2imgfilename = str(uuid.uuid4())
+    url = apiurl
+    script_dir = os.path.dirname(os.path.abspath(__file__)) # Script directory
+    outputimg2imgfolder = os.path.join(script_dir, "./automated_outputs/img2img/")
+    if(filename==""):
+        filename = str(uuid.uuid4())
     outputimg2imgpng = '.png'
-    outputimg2imgFull = '{}{}{}'.format(outputimg2imgfolder,outputimg2imgfilename,outputimg2imgpng)
+    outputimg2imgFull = '{}{}{}'.format(outputimg2imgfolder,filename,outputimg2imgpng)

     encodedstringlist = []
+    # need to convert the values to the correct index number for Ultimate SD Upscaler
+    redrawmodelist = ["Linear","Chess","None"]
+    seamsfixmodelist = ["None","Band pass","Half tile offset pass","Half tile offset pass + intersections"]
+    usduredrawint = redrawmodelist.index(usduredraw)
+    seamsfixmodeint = seamsfixmodelist.index(usduSeamsfix)
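The two `.index()` calls above map the UI's mode names onto the positional integers that the Ultimate SD Upscale script expects in `script_args`. The same conversion as a standalone sketch; the fallback to index 0 for an unknown name is an illustration only, since the script calls `.index()` directly (which raises `ValueError` on an unknown name):

```python
# mode names as the UI presents them, in the order the upscale script expects
redrawmodelist = ["Linear", "Chess", "None"]
seamsfixmodelist = ["None", "Band pass", "Half tile offset pass",
                    "Half tile offset pass + intersections"]

def mode_index(options, name):
    # convert a mode name to its positional index, defaulting to the first mode
    return options.index(name) if name in options else 0

usduredrawint = mode_index(redrawmodelist, "Chess")
seamsfixmodeint = mode_index(seamsfixmodelist, "Band pass")
```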
     #rest of prompt things
-    sampler_index = "DPM2 Karras"
-    steps = "20"
-    prompt = "hello world"
-    cfg_scale = "7"
-    width = "512"
-    height = "512"
+    sampler_index = img2imgsamplingmethod
+    steps = img2imgsamplingsteps
+    cfg_scale = img2imgcfg

     with open(imagelocation, "rb") as image_file:
         encoded_string = base64.b64encode(image_file.read())
     encodedstringlist.append(encoded_string.decode('utf-8'))
-    encodedstring2 = encoded_string.decode('utf-8')

+    # If we don't have a prompt, get it from the original image file
+    # This is used when only_upscale is activated
+    if(prompt==""):
+        with open(originalimage, "rb") as originalimage_file:
+            originalencoded_string = base64.b64encode(originalimage_file.read())
+        encodedstring2 = originalencoded_string.decode('utf-8')

-    # prompt from picture?
+    # get prompt from picture
     png_payload = {
         "image": encodedstring2
     }
     response3 = requests.post(url=f'{url}/sdapi/v1/png-info', json=png_payload)

     pnginfo = str(response3.json().get("info"))

     prompt = pnginfo[:pnginfo.rfind("Steps")]

+    if(prompt.rfind("Negative prompt") != -1):
+        prompt = prompt[:prompt.rfind("Negative prompt")]
+        negativepromptfound = 1

+    if(negativepromptfound == 1):
+        negativeprompt = pnginfo[:pnginfo.rfind("Steps")]
+        negativeprompt = negativeprompt.replace(prompt,"")
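The slicing above recovers the prompt (and any negative prompt) from the WebUI "parameters" text by cutting at the "Steps" and "Negative prompt" markers. The same logic as a standalone function, with an illustrative sample string:

```python
def split_parameters(pnginfo):
    # everything before "Steps" is prompt text, possibly including a negative prompt
    head = pnginfo[:pnginfo.rfind("Steps")]
    if head.rfind("Negative prompt") != -1:
        prompt = head[:head.rfind("Negative prompt")]
        # the negative part keeps its "Negative prompt:" prefix, as in the script
        negative = head.replace(prompt, "")
    else:
        prompt, negative = head, ""
    return prompt.strip(), negative.strip()

sample = "a cat, masterpiece\nNegative prompt: blurry, low quality\nSteps: 20, Sampler: Euler a"
prompt, negative = split_parameters(sample)
```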
+    # set the automatic upscale
+    checkprompt = prompt.lower()
+    if(img2imgupscaler != "automatic"):
+        upscaler = img2imgupscaler
+    else:
+        upscalerlist = get_upscalers_for_img2img()
+        # on automatic, make some choices about what upscaler to use
+        # photos, prefer 4x ultrasharp
+        # anime, cartoon or drawing, go for R-ESRGAN 4x+ Anime6B
+        # else, R-ESRGAN 4x+
+        if("hoto" in checkprompt and "4x-UltraSharp" in upscalerlist):
+            upscaler = "4x-UltraSharp"
+        elif("anime" in checkprompt or "cartoon" in checkprompt or "draw" in checkprompt or "vector" in checkprompt or "cel shad" in checkprompt or "visual novel" in checkprompt):
+            upscaler = "R-ESRGAN 4x+ Anime6B"
+        else:
+            upscaler = "R-ESRGAN 4x+"

+    if(upscaler== "4x-UltraSharp"):
+        denoising_strength = "0.35"
+    if(upscaler== "R-ESRGAN 4x+ Anime6B"):
+        denoising_strength = "0.6" # 0.6 is fine for the anime upscaler
+    if(upscaler== "R-ESRGAN 4x+"):
+        denoising_strength = "0.5" # default 0.6 is a lot and changes a lot of details

+    # weird blocky mode comes up when the threshold is set way too high and the denoising strength is strong
+    if(controlnetblockymode==True):
+        treshold = int(padding)
+        if(float(denoising_strength) < 0.65):
+            denoising_strength = "0.65"
+    else:
+        treshold = 1
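The "automatic" branch above is a keyword heuristic: photo-like prompts get 4x-UltraSharp when it is installed, anime/cartoon-like prompts get the Anime6B model, and everything else gets plain R-ESRGAN 4x+. The same heuristic as a standalone function (`pick_upscaler` is an illustrative name, not the script's own):

```python
def pick_upscaler(prompt, available):
    checkprompt = prompt.lower()
    # the script checks for the substring "hoto" (matching "photo") after lowercasing
    if "hoto" in checkprompt and "4x-UltraSharp" in available:
        return "4x-UltraSharp"
    if any(k in checkprompt for k in ("anime", "cartoon", "draw", "vector", "cel shad", "visual novel")):
        return "R-ESRGAN 4x+ Anime6B"
    return "R-ESRGAN 4x+"
```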
     payload = {
         "resize_mode": 0,

@@ -50,33 +119,87 @@ def call_img2img(imagelocation, denoising_strength = 0.25, scale = 1.5, padding
         "batch_size": "1",
         "n_iter": "1",
         "prompt": prompt,
+        "negative_prompt": negativeprompt,
         "steps": steps,
         "cfg_scale": cfg_scale,
-        "width": width,
-        "height": height,
+        #"width": width,
+        #"height": height,
         "include_init_images": "true",
         "init_images": encodedstringlist,
-        "script_name": "SD upscale",
-        "script_args": ["",padding,"4x-UltraSharp",scale]
     }

+    if(img2imgmodel != "currently selected model"):
+        payload.update({"sd_model": img2imgmodel})
     #
+    # https://github.com/Mikubill/sd-webui-controlnet/wiki/API
+    #
+    if(controlnetenabled==True and controlnetmodel!=""):
+        payload.update({"alwayson_scripts": {
+            "controlnet": {
+                "args": [
+                    {
+                        "module": "tile_resample",
+                        "model": controlnetmodel, # control_v11f1e_sd15_tile [a371b31b]
+                        #"input_image": encodedstringlist,
+                        "control_mode": 2, # "ControlNet is more important": the controlnet model has more impact than the prompt
+                        #"resize_mode": 0
+                        "threshold_a": treshold
+                    }
+                ]
+            }
+        }})
+    if(upscalescript=="SD upscale"):
+        payload.update({"script_name": upscalescript})
+        payload.update({"script_args": ["",int(padding),upscaler,float(scale)]})

+    if(upscalescript=="Ultimate SD upscale"):
+        upscaler_index = [x.name.lower() for x in shared.sd_upscalers].index(upscaler.lower())
+        payload.update({"script_name": upscalescript})
+        payload.update({"script_args": ["",int(usdutilewidth),int(usdutileheight),int(usdumaskblur),int(padding), int(usduswidth), usdusdenoise,int(usduspadding),
+                        upscaler_index,True,usduredrawint,False,int(usdusmaskblur),
+                        seamsfixmodeint,2,"","",float(scale)]})

+        # Ultimate SD Upscale params:
+        #_, tile_width, tile_height, mask_blur, padding, seams_fix_width, seams_fix_denoise, seams_fix_padding,
+        # upscaler_index, save_upscaled_image, redraw_mode, save_seams_fix_image, seams_fix_mask_blur,
+        # seams_fix_type, target_size_type, custom_width, custom_height, custom_scale):
+        # target_size_type = 2
+        # custom_scale = 2

     response = requests.post(url=f'{url}/sdapi/v1/img2img', json=payload)

     r = response.json()

     for i in r['images']:
         image = Image.open(io.BytesIO(base64.b64decode(i.split(",",1)[0])))

-        png_payload = {
-            "image": "data:image/png;base64," + i
-        }
-        response2 = requests.post(url=f'{url}/sdapi/v1/png-info', json=png_payload)
-        pnginfo = PngImagePlugin.PngInfo()
-        pnginfo.add_text("parameters", response2.json().get("info"))
-        image.save(outputimg2imgFull, pnginfo=pnginfo)
+        if(originalpnginfo==""):
+            png_payload = {
+                "image": "data:image/png;base64," + i
+            }
+            response2 = requests.post(url=f'{url}/sdapi/v1/png-info', json=png_payload)
+            pnginfo = PngImagePlugin.PngInfo()
+            pnginfo.add_text("parameters", response2.json().get("info"))
+            originalpnginfo = pnginfo
+        image.save(outputimg2imgFull, pnginfo=originalpnginfo)

-    return outputimg2imgFull
+    return [outputimg2imgFull,originalpnginfo]
call_txt2img.py | 175

@@ -3,19 +3,25 @@ import requests
 import io
 import base64
 import uuid
+import sys, os
 from PIL import Image, PngImagePlugin
+from model_lists import *

-def call_txt2img(passingprompt,ratio,upscale,debugmode):
+def call_txt2img(passingprompt,ratio,upscale,debugmode,filename="",model="currently selected model",samplingsteps="40",cfg="7",hiressteps="0",denoisestrength="0.6",samplingmethod="DPM++ SDE Karras", upscaler="R-ESRGAN 4x+",hiresscale="2",apiurl="http://127.0.0.1:7860", qualitygate=False,quality="7.6",runs="5",negativeprompt=""):

     #set the prompt!
     prompt = passingprompt
+    checkprompt = passingprompt.lower()

+    #set the URL for the API
+    url = apiurl

     #rest of prompt things
-    sampler_index = "DPM2 Karras"
-    steps = "20"
+    sampler_index = samplingmethod
+    steps = samplingsteps
     if(debugmode==1):
         steps="10"
-    cfg_scale = "7"
+    cfg_scale = cfg

     #size
     if(ratio=='wide'):

@@ -34,23 +40,65 @@ def call_txt2img(passingprompt,ratio,upscale,debugmode):
     enable_hr = upscale
     if(debugmode==1):
         enable_hr="False"
-    denoising_strength = "0.35"
-    hr_scale = "2"
-    hr_upscaler = "4x-UltraSharp"
-    hr_second_pass_steps = str(round(int(steps)/2))

+    #defaults
+    hr_scale = hiresscale
+    denoising_strength = denoisestrength
+    hr_second_pass_steps = hiressteps
+    #hr_upscaler = "LDSR" # We have the time, why not use LDSR

+    if(upscaler != "automatic"):
+        hr_upscaler = upscaler
+    else:
+        upscalerlist = get_upscalers()
+        # on automatic, make some choices about what upscaler to use
+        # photos, prefer 4x ultrasharp
+        # anime, cartoon or drawing, go for R-ESRGAN 4x+ Anime6B
+        # else, R-ESRGAN 4x+
+        if("hoto" in checkprompt and "4x-UltraSharp" in upscalerlist):
+            hr_upscaler = "4x-UltraSharp"
+        elif("anime" in checkprompt or "cartoon" in checkprompt or "draw" in checkprompt or "vector" in checkprompt or "cel shad" in checkprompt or "visual novel" in checkprompt):
+            hr_upscaler = "R-ESRGAN 4x+ Anime6B"
+        else:
+            hr_upscaler = "R-ESRGAN 4x+"

+        if(hiressteps==0):
+            hiressteps = samplingsteps
+            hr_second_pass_steps = int(hiressteps/2)
+            hr_scale = 2

+    if(hr_upscaler== "4x-UltraSharp"):
+        denoising_strength = "0.35"
+    if(hr_upscaler== "R-ESRGAN 4x+ Anime6B"):
+        denoising_strength = "0.6" # 0.6 is fine for the anime upscaler
+    if(hr_upscaler== "R-ESRGAN 4x+"):
+        denoising_strength = "0.5" # default 0.6 is a lot and changes a lot of details
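When no explicit hires step count is given, the branch above falls back to half the sampling step count for the second pass. As a standalone sketch (the script keeps these values as strings in its signature; ints are used here for clarity):

```python
def second_pass_steps(hiressteps, samplingsteps):
    # 0 means "not set": reuse the sampling step count, then halve it for the hires pass
    if hiressteps == 0:
        hiressteps = samplingsteps
    return int(hiressteps / 2)
```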
     #params to stay the same
-    url = "http://127.0.0.1:7860"
-    outputTXT2IMGfolder = 'C:\\automated_output\\txt2img\\'
-    outputTXT2IMGfilename = str(uuid.uuid4())
-    outputTXT2IMGpng = '.png'
-    outputTXT2IMGFull = '{}{}{}'.format(outputTXT2IMGfolder,outputTXT2IMGfilename,outputTXT2IMGpng)
-    outputTXT2IMGtxtfolder = 'C:\\automated_output\\prompts\\'
-    outputTXT2IMGtxt = '.txt'
-    outputTXT2IMGtxtFull = '{}{}{}'.format(outputTXT2IMGtxtfolder,outputTXT2IMGfilename,outputTXT2IMGtxt)

+    script_dir = os.path.dirname(os.path.abspath(__file__)) # Script directory
+    outputTXT2IMGfolder = os.path.join(script_dir, "./automated_outputs/txt2img/")
+    if(filename==""):
+        filename = str(uuid.uuid4())
+    outputTXT2IMGpng = '.png'
+    #outputTXT2IMGFull = '{}{}{}'.format(outputTXT2IMGfolder,filename,outputTXT2IMGpng)
+    outputTXT2IMGtxtfolder = os.path.join(script_dir, "./automated_outputs/prompts/")
+    outputTXT2IMGtxt = '.txt'
+    outputTXT2IMGtxtFull = '{}{}{}'.format(outputTXT2IMGtxtfolder,filename,outputTXT2IMGtxt)

+    # params for quality gate
+    isGoodNumber = float(quality)
+    foundgood = False
+    MaxRuns = int(runs)
+    Runs = 0
+    scorelist = []
+    scoredeclist = []
+    imagelist = []
+    pnginfolist = []

     #call TXT2IMG
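A recurring change in this commit is replacing hard-coded `C:\` output folders with paths built relative to the script's own directory, so the autorunner works outside the author's machine. A standalone sketch of that construction (the `/opt/onebutton` base is a stand-in for `os.path.dirname(os.path.abspath(__file__))`):

```python
import os

# stand-in for the script's own directory
script_dir = os.path.join(os.sep, "opt", "onebutton")

# same pattern as the script: script-relative folder, then folder + name + extension
outputfolder = os.path.join(script_dir, "./automated_outputs/txt2img/")
filename = "my_image"
outputfull = '{}{}{}'.format(outputfolder, filename, '.png')
```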
@ -67,28 +115,97 @@ def call_txt2img(passingprompt,ratio,upscale,debugmode):
|
||||||
"hr_scale": hr_scale,
|
"hr_scale": hr_scale,
|
||||||
"hr_upscaler": hr_upscaler,
|
"hr_upscaler": hr_upscaler,
|
||||||
"hr_second_pass_steps": hr_second_pass_steps
|
"hr_second_pass_steps": hr_second_pass_steps
|
||||||
|
|
||||||
}
|
}
|
||||||
|
|
||||||
response = requests.post(url=f'{url}/sdapi/v1/txt2img', json=payload)
|
if(model != "currently selected model"):
|
||||||
|
payload.update({"sd_model": model})
|
||||||
|
|
||||||
r = response.json()
|
if(negativeprompt != ""):
|
||||||
|
payload.update({"negative_prompt": negativeprompt})
|
||||||
|
|
||||||
for i in r['images']:
|
while Runs < MaxRuns:
|
||||||
image = Image.open(io.BytesIO(base64.b64decode(i.split(",",1)[0])))
|
|
||||||
|
|
||||||
png_payload = {
|
# make the filename unique for each run _0, _1, etc.
|
||||||
"image": "data:image/png;base64," + i
|
addrun = "_" + str(Runs)
|
||||||
}
|
filenamefull = filename + addrun
|
||||||
response2 = requests.post(url=f'{url}/sdapi/v1/png-info', json=png_payload)
|
outputTXT2IMGFull = '{}{}{}'.format(outputTXT2IMGfolder,filenamefull,outputTXT2IMGpng)
|
||||||
|
|
||||||
pnginfo = PngImagePlugin.PngInfo()
|
|
||||||
pnginfo.add_text("parameters", response2.json().get("info"))
|
response = requests.post(url=f'{url}/sdapi/v1/txt2img', json=payload)
|
||||||
image.save(outputTXT2IMGFull, pnginfo=pnginfo)
|
|
||||||
|
r = response.json()
|
||||||
|
|
||||||
|
for i in r['images']:
|
||||||
|
image = Image.open(io.BytesIO(base64.b64decode(i.split(",",1)[0])))
|
||||||
|
|
||||||
|
png_payload = {
|
||||||
|
"image": "data:image/png;base64," + i
|
||||||
|
}
|
||||||
|
response2 = requests.post(url=f'{url}/sdapi/v1/png-info', json=png_payload)
|
||||||
|
|
||||||
|
pnginfo = PngImagePlugin.PngInfo()
|
||||||
|
+        pnginfo.add_text("parameters", response2.json().get("info"))
+        image.save(outputTXT2IMGFull, pnginfo=pnginfo)
+
+        if(qualitygate==True):
+            # check if the extension exists in the parent directory
+            imagescorer_path = os.path.abspath(os.path.join(os.path.dirname(__file__), '..', 'stable-diffusion-webui-aesthetic-image-scorer', 'scripts'))
+            #print(imagescorer_path)
+            if imagescorer_path not in sys.path:
+                sys.path.append(imagescorer_path)
+            try:
+                import image_scorer
+                print("Found aesthetic-image-scorer! Using this to measure the results...")
+                score = image_scorer.get_score(image)
+                scoredeclist.append(score)
+                score = round(score,1)
+
+                scorelist.append(score)
+                imagelist.append(outputTXT2IMGFull)
+                pnginfolist.append(pnginfo)
+
+                print("This image has scored: " + str(score) + " out of " + str(isGoodNumber))
+                if(score >= isGoodNumber or debugmode == 1):
+                    foundgood = True
+                    print("Yay, it's good! Keeping this result.")
+                else:
+                    runstodo = MaxRuns - Runs - 1
+                    print("Not a good result. Retrying for another " + str(runstodo) + " times or until the image is good enough.")
+
+            except ImportError:
+                foundgood = True # just continue :)
+                # handle the case where the module doesn't exist
+                print("Could not find the stable-diffusion-webui-aesthetic-image-scorer extension.")
+                print("Install this extension via the WebUI to use Quality Gate")
+                pass
+        else:
+            foundgood = True # If there is no quality gate, then everything is good. So we escape this loop
+
+        Runs += 1
+        if(foundgood == True):
+            break # Break the loop if we found something good. Or if we set it to good :)
+
+    if(len(imagelist) > 0):
+        if(foundgood == True):
+            print("Removing any other images generated this run (if any).")
+        else:
+            print("Stopped trying, keeping the best image we had so far.")
+
+        # Get the index of the first occurrence of the maximum value in the list
+        indexofimagetokeep = scoredeclist.index(max(scoredeclist))
+        outputTXT2IMGFull = imagelist[indexofimagetokeep] # store the image to keep in here, so we can pass it along
+        pnginfo = pnginfolist[indexofimagetokeep]
+        imagelist.pop(indexofimagetokeep)
+        # remove all other images
+        for imagelocation in imagelist:
+            os.remove(imagelocation)
+
     with open(outputTXT2IMGtxtFull,'w',encoding="utf8") as txt:
         json_object = json.dumps(payload, indent = 4)
         txt.write(json_object)

-    return outputTXT2IMGFull
+    return [outputTXT2IMGFull,pnginfo]
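The quality-gate hunk above keeps every scored image and, once the retry budget runs out, falls back to the first occurrence of the highest score. A minimal stand-alone sketch of that selection step (the scores and paths are hypothetical stand-ins for the scorer extension's output):

```python
def pick_best(scores, paths):
    # Mirror scoredeclist.index(max(scoredeclist)): keep the first
    # occurrence of the maximum score, mark the rest for deletion.
    best = scores.index(max(scores))
    keep = paths[best]
    discard = [p for i, p in enumerate(paths) if i != best]
    return keep, discard

keep, discard = pick_best([6.8, 7.1, 6.5], ["a.png", "b.png", "c.png"])
# keep == "b.png"; the other two would be passed to os.remove()
```

Using `index(max(...))` means ties resolve to the earliest generated image, which matches the diff's behavior.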
main.py

@@ -1,37 +1,205 @@
 import sys, os
+import random
+import uuid
+import re
+from datetime import datetime
 sys.path.append(os.path.abspath(".."))

 from call_txt2img import *
 from call_img2img import *
 from build_dynamic_prompt import *
 from call_extras import *
+from model_lists import *


-# needs following directories to exist:
-# C:\automated_output\
-# C:\automated_output\extras\
-# C:\automated_output\img2img\
-# C:\automated_output\txt2img\
-# C:\automated_output\Prompts\
+def generateimages(amount = 1, size = "all",model = "currently selected model",samplingsteps = "40",cfg= "7",hiresfix = True,hiressteps ="0",denoisestrength="0.6",samplingmethod="DPM++ SDE Karras", upscaler="R-ESRGAN 4x+", hiresscale="2",apiurl="http://127.0.0.1:7860",qualitygate=False,quality="7.6",runs="5",insanitylevel="5",subject="all", artist="all", imagetype="all",silentmode=False, workprompt="", antistring="",prefixprompt="", suffixprompt="", negativeprompt="",promptcompounderlevel = "1", seperator="comma", img2imgbatch = "1", img2imgsamplingsteps = "20", img2imgcfg = "7", img2imgsamplingmethod = "DPM++ SDE Karras", img2imgupscaler = "R-ESRGAN 4x+", img2imgmodel = "currently selected model", img2imgactivate = False, img2imgscale = "2", img2imgpadding = "64",img2imgdenoisestrength="0.3",ultimatesdupscale=False,usdutilewidth = "512", usdutileheight = "0", usdumaskblur = "8", usduredraw ="Linear", usduSeamsfix = "None", usdusdenoise = "0.35", usduswidth = "64", usduspadding ="32", usdusmaskblur = "8",controlnetenabled=False, controlnetmodel="",img2imgdenoisestrengthmod="-0.05",enableextraupscale = False,controlnetblockymode = False,extrasupscaler1 = "all",extrasupscaler2 ="all",extrasupscaler2visiblity="0.5",extrasupscaler2gfpgan="0",extrasupscaler2codeformer="0.15",extrasupscaler2codeformerweight="0.1",extrasresize="2",onlyupscale="false"):
+    loops = int(amount) # amount of images to generate
+    steps = 0
+    upscalefilelist=[]
+    originalimage = ""
+    originalpnginfo =""
+    randomprompt = ""
+    filename=""
+    originalsize=size

-loops = 20 # amount of images to generate
-steps = 0
+    if(onlyupscale==True):
+        script_dir = os.path.dirname(os.path.abspath(__file__)) # Script directory
+        inputupscalemefolder = os.path.join(script_dir, "./automated_outputs/upscale_me/")

-while steps < loops:
-    # build prompt
-    randomprompt = build_dynamic_prompt(7,"all","all","all", False)
+        for upscalefilename in os.listdir(inputupscalemefolder):
+            f = os.path.join(inputupscalemefolder, upscalefilename)
+            # checking if it is a file
+            if os.path.isfile(f):
+                if(f[-3:]!="txt"):
+                    upscalefilelist.append(f)

-    # prompt + size
-    #txt2img = call_txt2img(randomprompt, "portrait",True,0)
-    #txt2img = call_txt2img(randomprompt, "wide" ,True, 0)
-    #txt2img = call_txt2img(randomprompt, "ultrawide",True,0)
-    #txt2img = call_txt2img(randomprompt, "square",True,0)
+        loops = len(upscalefilelist)
+
+        if(loops==0):
+            print('No files to upscale found! Please place images in the //upscale_me// folder')
+        else:
+            print("")
+            print("Found and upscaling files")
+            print("")

-    # upscale via img2img first
-    #img2img = call_img2img(txt2img,0.25,1.5,256)
-
-    # upscale via extras upscaler next
-    #finalfile = call_extras(img2img)
+    modellist=get_models()
+    samplerlist=get_samplers()
+    upscalerlist=get_upscalers()
+    img2imgupscalerlist=get_upscalers_for_img2img()
+    img2imgsamplerlist=get_samplers_for_img2img()

-    steps += 1
+    if(ultimatesdupscale==False):
+        upscalescript="SD upscale"
+    else:
+        upscalescript="Ultimate SD upscale"
+
+    while steps < loops:
+        # build prompt
+        if(silentmode==True and workprompt == ""):
+            print("Trying to use the provided workflow prompt, but it is empty. Generating a random prompt instead.")
+
+        if(onlyupscale==False): # only do txt2img when onlyupscale is False
+            if(silentmode==True and workprompt != ""):
+                randomprompt = workprompt
+                print("Using provided workflow prompt")
+                print(workprompt)
+            else:
+                randomprompt = build_dynamic_prompt(insanitylevel,subject,artist,imagetype, False,antistring,prefixprompt,suffixprompt,promptcompounderlevel, seperator)
+
+            # make the filename, from "of a" to the first comma
+            start_index = randomprompt.find("of a ") + len("of a ")
+
+            # find the index of the first comma after "of a", or the end of the prompt
+            end_index = randomprompt.find(",", start_index)
+            if(end_index == -1):
+                end_index=len(randomprompt)
+
+            # extract the desired substring using slicing
+            filename = randomprompt[start_index:end_index]
+
+            # clean up some unsafe things in the filename
+            filename = filename.replace("\"", "")
+            filename = filename.replace("[", "")
+            filename = filename.replace("|", "")
+            filename = filename.replace("]", "")
+            filename = filename.replace(":", "_")
+            filename = re.sub(r'[0-9]+', '', filename)
+
+            if(filename==""):
+                filename = str(uuid.uuid4())
+
+            # create a datetime object for the current date and time
+            now = datetime.now()
+            filenamecomplete = now.strftime("%Y%m%d%H%M%S") + "_" + filename.replace(" ", "_").strip()
+
+            # prompt + size
+            if(originalsize == "all"):
+                sizelist = ["portrait", "wide", "square"]
+                size = random.choice(sizelist)
+
+            # Check if there is any random value we have to choose or not
+            if(model=="all"):
+                model = random.choice(modellist)
+                # let's not do inpainting models
+                while "inpaint" in model:
+                    model = random.choice(modellist)
+                print("Going to run with model " + model)
+
+            if(samplingmethod=="all"):
+                samplingmethod = random.choice(samplerlist)
+                print("Going to run with sampling method " + samplingmethod)
+
+            if(upscaler=="all" and hiresfix == True):
+                upscaler = random.choice(upscalerlist)
+                print("Going to run with upscaler " + upscaler)
+
+            # WebUI fix for PLMS and UniPC and hires. fix
+            if(samplingmethod in ['PLMS', 'UniPC']): # PLMS/UniPC do not support hires. fix, so we just silently switch to DDIM
+                samplingmethod = 'DDIM'
+
+            txt2img = call_txt2img(randomprompt, size ,hiresfix, 0, filenamecomplete,model ,samplingsteps,cfg, hiressteps, denoisestrength,samplingmethod, upscaler,hiresscale,apiurl,qualitygate,quality,runs,negativeprompt)
+            originalimage = txt2img[0] # Set this for later use
+            originalpnginfo = txt2img[1] # Sort of hacky way of bringing this forward. But if it works, it works
+
+            image = txt2img[0]
+        else:
+            if(filename==""):
+                filename = str(uuid.uuid4())
+
+            # create a datetime object for the current date and time
+            now = datetime.now()
+            filenamecomplete = now.strftime("%Y%m%d%H%M%S") + "_" + filename.replace(" ", "_").strip()
+            image = upscalefilelist[steps] # else we get the image from the upscale file list
+            originalimage = image # this is also the original image file
+
+        # upscale via img2img
+        img2imgloops = int(img2imgbatch)
+        if(img2imgactivate == False): # If we don't want to run, turn it off
+            img2imgloops = 0
+        img2imgsteps = 0
+
+        # start the batching!
+        while img2imgsteps < img2imgloops:
+
+            # Check if there is any random value we have to choose or not
+            if(img2imgmodel=="all"):
+                img2imgmodel = random.choice(modellist)
+                # let's not do inpainting models
+                while "inpaint" in img2imgmodel:
+                    img2imgmodel = random.choice(modellist)
+                print("Going to upscale with model " + img2imgmodel)
+
+            if(img2imgsamplingmethod=="all"):
+                img2imgsamplingmethod = random.choice(img2imgsamplerlist)
+                print("Going to upscale with sampling method " + img2imgsamplingmethod)
+
+            if(img2imgupscaler=="all"):
+                img2imgupscaler = random.choice(img2imgupscalerlist)
+                print("Going to run with upscaler " + img2imgupscaler)
+
+            # WebUI fix for PLMS and UniPC and img2img
+            if(img2imgsamplingmethod in ['PLMS', 'UniPC']): # PLMS/UniPC do not support img2img, so we just silently switch to DDIM
+                img2imgsamplingmethod = 'DDIM'
+
+            img2img = call_img2img(image, originalimage, originalpnginfo, apiurl, filenamecomplete, randomprompt,negativeprompt,img2imgsamplingsteps, img2imgcfg, img2imgsamplingmethod, img2imgupscaler, img2imgmodel, img2imgdenoisestrength, img2imgscale, img2imgpadding,upscalescript,usdutilewidth, usdutileheight, usdumaskblur, usduredraw, usduSeamsfix, usdusdenoise, usduswidth, usduspadding, usdusmaskblur,controlnetenabled, controlnetmodel,controlnetblockymode)

+            image = img2img[0]
+            if(originalpnginfo==""):
+                originalpnginfo = img2img[1]
+
+            img2imgdenoisestrength = str(float(img2imgdenoisestrength) + float(img2imgdenoisestrengthmod)) # lower or increase the denoise strength for each batch
+            img2imgpadding = int(int(img2imgpadding) * float(img2imgscale)) # also increase padding by scale
+
+            if(int(img2imgpadding)>256): # but don't overdo it :D
+                img2imgpadding="256"
+
+            img2imgsteps += 1
+
+        # upscale via extras upscaler next
+        if(enableextraupscale==True):
+            if(extrasupscaler1=="all"):
+                extrasupscaler1 = random.choice(img2imgupscalerlist)
+                print("Going to upscale with upscaler 1 " + extrasupscaler1)
+
+            if(extrasupscaler2=="all"):
+                extrasupscaler2 = random.choice(img2imgupscalerlist)
+                print("Going to upscale with upscaler 2 " + extrasupscaler2)
+
+            image = call_extras(image, originalimage, originalpnginfo, apiurl, filenamecomplete,extrasupscaler1,extrasupscaler2 ,extrasupscaler2visiblity,extrasupscaler2gfpgan,extrasupscaler2codeformer,extrasupscaler2codeformerweight,extrasresize)
+
+        steps += 1
+
+    print("")
+    print("All done!")
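The filename logic in `generateimages` slices the prompt between `"of a "` and the first comma, then strips characters that are unsafe in filenames, falling back to a UUID when nothing usable remains. A condensed, self-contained sketch of that transformation (it omits the timestamp prefix the real code prepends):

```python
import re
import uuid

def prompt_to_filename(prompt):
    # Take the text between "of a " and the next comma (or end of prompt)
    start = prompt.find("of a ") + len("of a ")
    end = prompt.find(",", start)
    if end == -1:
        end = len(prompt)
    name = prompt[start:end]
    # Drop characters that are unsafe in filenames, as the diff does
    for ch in ['"', '[', '|', ']']:
        name = name.replace(ch, '')
    name = name.replace(':', '_')
    name = re.sub(r'[0-9]+', '', name)
    # Fall back to a UUID when nothing usable remains
    return name.strip().replace(' ', '_') or str(uuid.uuid4())

print(prompt_to_filename("painting of a red fox, by someone"))  # red_fox
```

The digit-stripping pass means a prompt fragment that is purely numeric collapses to an empty string, which is exactly the case the UUID fallback covers.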
@@ -0,0 +1,71 @@
+import modules.scripts as scripts
+import os
+from modules import modelloader, paths, shared
+from modules.paths import models_path
+
+
+def get_models():
+    model_dir = "Stable-diffusion"
+    model_path = os.path.abspath(os.path.join(paths.models_path, model_dir))
+    model_url = None
+    modellist = modelloader.load_models(model_path=model_path, model_url=model_url, command_path=shared.cmd_opts.ckpt_dir, ext_filter=[".ckpt", ".safetensors"], download_name="v1-5-pruned-emaonly.safetensors", ext_blacklist=[".vae.ckpt", ".vae.safetensors"])
+    modellist = [s.replace(model_path, "") for s in modellist]
+    modellist = [s.replace("\\\\", "") for s in modellist]
+    modellist = [s.replace("\\", "") for s in modellist]
+    modellist = [s.replace(".ckpt", "") for s in modellist]
+    modellist = [s.replace(".safetensors", "") for s in modellist]
+    return modellist
+
+def get_upscalers():
+    # Upscalers are sort of hardcoded as well for Latent, but not for the 2 others. So build it up!
+    latentlist=["Latent","Latent (antialiased)","Latent (bicubic)","Latent (bicubic antialiased)","Latent (nearest)","Latent (nearest-exact)","Lanczos","Nearest"]
+
+    RealESRGAN_dir = "RealESRGAN"
+    RealESRGAN_path = os.path.abspath(os.path.join(paths.models_path, RealESRGAN_dir))
+    model_url = None
+    RealESRGANlist = modelloader.load_models(model_path=RealESRGAN_path, model_url=model_url, command_path=shared.cmd_opts.ckpt_dir, ext_filter=[".pt", ".pth"], download_name="", ext_blacklist=[".vae.ckpt", ".vae.safetensors"])
+    RealESRGANlist = [s.replace(RealESRGAN_path, "") for s in RealESRGANlist]
+    RealESRGANlist = [s.replace("\\\\", "") for s in RealESRGANlist]
+    RealESRGANlist = [s.replace("\\", "") for s in RealESRGANlist]
+    RealESRGANlist = [s.replace(".pth", "") for s in RealESRGANlist]
+    RealESRGANlist = [s.replace(".pt", "") for s in RealESRGANlist]
+
+    ESRGAN_dir = "ESRGAN"
+    ESRGAN_path = os.path.abspath(os.path.join(paths.models_path, ESRGAN_dir))
+    ESRGANlist = modelloader.load_models(model_path=ESRGAN_path, model_url=model_url, command_path=shared.cmd_opts.ckpt_dir, ext_filter=[".pt", ".pth"], download_name="", ext_blacklist=[".vae.ckpt", ".vae.safetensors"])
+    ESRGANlist = [s.replace(ESRGAN_path, "") for s in ESRGANlist]
+    ESRGANlist = [s.replace("\\\\", "") for s in ESRGANlist]
+    ESRGANlist = [s.replace("\\", "") for s in ESRGANlist]
+    ESRGANlist = [s.replace(".pth", "") for s in ESRGANlist]
+    ESRGANlist = [s.replace(".pt", "") for s in ESRGANlist]
+
+    # hardcode some things for Real ESRGAN, because it's named differently (note, I could have just hardcoded this. Oh well...)
+    RealESRGANlist = [s.replace("RealESRGAN_x4plus","R-ESRGAN 4x+") for s in RealESRGANlist]
+    RealESRGANlist = [s.replace("RealESRGAN x4plus_anime_6B","R-ESRGAN 4x+ Anime6B") for s in RealESRGANlist]
+    RealESRGANlist = [s.replace("R-ESRGAN 4x+_anime_6B","R-ESRGAN 4x+ Anime6B") for s in RealESRGANlist]
+
+    upscalerlist = latentlist + RealESRGANlist + ESRGANlist
+    return upscalerlist
+
+def get_samplers():
+    # Samplers are hardcoded in the WebUI, so let's do it here as well
+    samplerlist = ["Euler a", "Euler", "LMS","Heun","DPM2","DPM2 a","DPM++ 2S a","DPM++ 2M","DPM++ SDE","DPM fast","DPM adaptive","LMS Karras","DPM2 Karras","DPM2 a Karras","DPM++ 2S a Karras","DPM++ 2M Karras","DPM++ SDE Karras"]
+    samplerlist += ["DDIM","UniPC", "PLMS"]
+    return samplerlist
+
+def get_upscalers_for_img2img():
+    upscalerlist = get_upscalers()
+    # basically have to remove a lot of these, they aren't supported
+    upscalerlist.remove("Latent")
+    upscalerlist.remove("Latent (antialiased)")
+    upscalerlist.remove("Latent (bicubic)")
+    upscalerlist.remove("Latent (bicubic antialiased)")
+    upscalerlist.remove("Latent (nearest)")
+    upscalerlist.remove("Latent (nearest-exact)")
+    return upscalerlist
+
+def get_samplers_for_img2img():
+    # Samplers are hardcoded in the WebUI, so let's do it here as well
+    samplerlist = ["Euler a", "Euler", "LMS","Heun","DPM2","DPM2 a","DPM++ 2S a","DPM++ 2M","DPM++ SDE","DPM fast","DPM adaptive","LMS Karras","DPM2 Karras","DPM2 a Karras","DPM++ 2S a Karras","DPM++ 2M Karras","DPM++ SDE Karras"]
+    samplerlist += ["DDIM"] # UniPC and PLMS don't support upscaling, apparently
+    return samplerlist
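`get_models` and `get_upscalers` both normalise the absolute paths returned by `modelloader.load_models` with a chain of `str.replace` calls. The effect on a typical Windows-style path, using an illustrative input rather than a real model scan:

```python
def clean_model_names(paths, model_path):
    # Strip the base directory, backslash separators, and known extensions,
    # mirroring the replace-chain in get_models()
    names = [p.replace(model_path, "") for p in paths]
    names = [n.replace("\\\\", "") for n in names]   # doubled separators
    names = [n.replace("\\", "") for n in names]     # single separators
    for ext in (".ckpt", ".safetensors"):
        names = [n.replace(ext, "") for n in names]
    return names

print(clean_model_names(
    ["C:\\models\\Stable-diffusion\\deliberate_v2.safetensors"],
    "C:\\models\\Stable-diffusion"))
```

Note the chain relies on plain substring replacement, so a model whose name itself contains `.ckpt` would also be mangled; `os.path.splitext` would be the stricter alternative.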
@@ -1,20 +1,58 @@
 import modules.scripts as scripts
 import gradio as gr
 import os
+import platform
+import subprocess as sp

 from modules import images
 from modules.processing import process_images, Processed
 from modules.processing import Processed
 from modules.shared import opts, cmd_opts, state


 from build_dynamic_prompt import *
+from main import *
+from model_lists import *

 subjects = ["all","object","animal","humanoid", "landscape", "concept"]
 artists = ["all", "none", "popular", "greg mode", "3D", "abstract", "angular", "anime" ,"architecture", "art nouveau", "art deco", "baroque", "bauhaus", "cartoon", "character", "children's illustration", "cityscape", "clean", "cloudscape", "collage", "colorful", "comics", "cubism", "dark", "detailed", "digital", "expressionism", "fantasy", "fashion", "fauvism", "figurativism", "gore", "graffiti", "graphic design", "high contrast", "horror", "impressionism", "installation", "landscape", "light", "line drawing", "low contrast", "luminism", "magical realism", "manga", "melanin", "messy", "monochromatic", "nature", "nudity", "photography", "pop art", "portrait", "primitivism", "psychedelic", "realism", "renaissance", "romanticism", "scene", "sci-fi", "sculpture", "seascape", "space", "stained glass", "still life", "storybook realism", "street art", "streetscape", "surrealism", "symbolism", "textile", "ukiyo-e", "vibrant", "watercolor", "whimsical"]
 imagetypes = ["all", "all - force multiple", "photograph", "octane render","digital art","concept art", "painting", "portrait", "anime key visual", "only other types"]
 promptmode = ["at the back", "in the front"]
 promptcompounder = ["1", "2", "3", "4", "5"]
-ANDtogglemode = ["comma", "AND", "current prompt + AND", "current prompt + AND + current prompt", "automatic AND"]
+ANDtogglemode = ["none", "automatic", "prefix AND prompt + suffix", "prefix + prefix + prompt + suffix"]
+seperatorlist = ["comma", "AND", "BREAK"]
+
+# for autorun and upscale
+sizelist = ["all", "portrait", "wide", "square", "ultrawide"]
+
+modellist = get_models()
+modellist.insert(0,"all")
+modellist.insert(0,"currently selected model") # First value is the currently selected model
+
+upscalerlist = get_upscalers()
+upscalerlist.insert(0,"automatic")
+upscalerlist.insert(0,"all")
+
+samplerlist = get_samplers()
+samplerlist.insert(0,"all")
+
+# for img2img
+img2imgupscalerlist = get_upscalers_for_img2img()
+img2imgupscalerlist.insert(0,"automatic")
+img2imgupscalerlist.insert(0,"all")
+
+img2imgsamplerlist = get_samplers_for_img2img()
+img2imgsamplerlist.insert(0,"all")
+
+# for Ultimate SD upscale
+seams_fix_types = ["None","Band pass","Half tile offset pass","Half tile offset pass + intersections"]
+redraw_modes = ["Linear","Chess","None"]
+
+# folder stuff
+folder_symbol = '\U0001f4c2' # 📂
+sys.path.append(os.path.abspath(".."))


 class Script(scripts.Script):
@@ -27,18 +65,34 @@ class Script(scripts.Script):

     def ui(self, is_img2img):
-        def gen_prompt(insanitylevel, subject, artist, imagetype, antistring):
+        def gen_prompt(insanitylevel, subject, artist, imagetype, antistring, prefixprompt, suffixprompt, promptcompounderlevel, seperator):
             promptlist = []
             for i in range(5):
-                promptlist.append(build_dynamic_prompt(insanitylevel,subject,artist, imagetype, False, antistring))
+                promptlist.append(build_dynamic_prompt(insanitylevel,subject,artist, imagetype, False, antistring,prefixprompt,suffixprompt,promptcompounderlevel,seperator))
             return promptlist

         def prompttoworkflowprompt(text):
             return text

+        # Copied code from the WebUI
+        def openfolder():
+            script_dir = os.path.dirname(os.path.abspath(__file__)) # Script directory
+            automatedoutputsfolder = os.path.join(script_dir, "../automated_outputs/")
+
+            path = os.path.normpath(automatedoutputsfolder)
+
+            if platform.system() == "Windows":
+                os.startfile(path)
+            elif platform.system() == "Darwin":
+                sp.Popen(["open", path])
+            elif "microsoft-standard-WSL2" in platform.uname().release:
+                sp.Popen(["wsl-open", path])
+            else:
+                sp.Popen(["xdg-open", path])

         with gr.Tab("Main"):
             with gr.Row():
@@ -55,8 +109,9 @@ class Script(scripts.Script):
                         imagetypes, label="type of image", value="all")
             with gr.Row():
                 with gr.Column():
-                    promptlocation = gr.Dropdown(
-                        promptmode, label="Location of existing prompt", value="at the back")
+                    prefixprompt = gr.Textbox(label="Place this in front of generated prompt (prefix)",value="")
+                    suffixprompt = gr.Textbox(label="Place this at back of generated prompt (suffix)",value="")
+                    negativeprompt = gr.Textbox(label="Use this negative prompt",value="")
             with gr.Row():
                 with gr.Column():
                     antistring = gr.Textbox(label="Filter out following properties (comma seperated). Example ""film grain, purple, cat"" ")
@@ -129,14 +184,11 @@ class Script(scripts.Script):

-                ### Location of existing prompt
+                ### Other prompt fields
                 <font size="2">
-                If you put a prompt in the prompt field, it will be added onto the generated prompt. You can determine where to put it in the front or the back of the generated prompt.
-                1. at the back
-                2. in the front
+                The existing prompt and negative prompt fields are ignored.
+                Add a prompt prefix, suffix and the negative prompt in the respective fields. They will be automatically added during processing.
                 </font>
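The new prefix and suffix fields wrap the generated prompt rather than replacing it. A rough sketch of that composition (the exact assembly order inside `build_dynamic_prompt` is not shown in this hunk, so the comma-joining here is an assumption):

```python
def assemble_prompt(generated, prefix="", suffix=""):
    # Prefix and suffix are joined around the generated prompt;
    # empty fields contribute nothing (assumed behavior, for illustration).
    parts = [p.strip() for p in (prefix, generated, suffix) if p.strip()]
    return ", ".join(parts)

print(assemble_prompt("a castle at dusk", prefix="Art by artistname", suffix="8k"))
# Art by artistname, a castle at dusk, 8k
```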
@@ -148,6 +200,8 @@ class Script(scripts.Script):
                 This way, you don't ever have to add it manually again. This file won't be overwritten during upgrades.

+                Idea by redditor jonesaid.

                 </font>
                 """
             )
@@ -200,9 +254,13 @@ class Script(scripts.Script):
                 with gr.Column(scale=1):
                     promptcompounderlevel = gr.Dropdown(
                         promptcompounder, label="Prompt compounder", value="1")
+            with gr.Row():
+                with gr.Column(scale=1):
+                    seperator = gr.Dropdown(
+                        seperatorlist, label="Prompt seperator", value="comma")
                 with gr.Column(scale=2):
                     ANDtoggle = gr.Dropdown(
-                        ANDtogglemode, label="Prompt seperator mode", value="comma")
+                        ANDtogglemode, label="Prompt seperator mode", value="none")
             with gr.Row():
                 gr.Markdown(
                     """
@@ -217,9 +275,9 @@ class Script(scripts.Script):
                 This was originally a bug in the first release when using multiple batches, now brought back as a feature.
                 Raised by redditor drone2222, to bring this back as a toggle, since it did create interesting results. So here it is.

-                You can toggle the separator mode. Standardly this is a comma, but you can choose an AND and a newline.
+                You can toggle the separator mode. By default this is a comma, but you can also choose an AND or a BREAK.

-                You can also choose for "current prompt + AND" or "current prompt + AND + current prompt". This is best used in conjunction with the Latent Couple extension when you want some control. Set the prompt compounder equal to the amount of areas defined in Latent Couple.
+                You can also choose the prompt separator mode for use with the Latent Couple extension.

                 Example flow:
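The separator dropdown picks the token placed between compounded prompts. A sketch of the join, assuming a straightforward string join (the actual compounding lives in `build_dynamic_prompt`, which is not part of this hunk):

```python
def compound(prompts, seperator="comma"):
    # "comma" joins with ", "; "AND" and "BREAK" are WebUI prompt keywords
    # that need surrounding spaces to be recognised as standalone tokens
    token = ", " if seperator == "comma" else " " + seperator + " "
    return token.join(prompts)

print(compound(["a red fox", "a blue bird"], "AND"))
# a red fox AND a blue bird
```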
@@ -227,11 +285,15 @@ class Script(scripts.Script):

                 In the main tab, set the subject to humanoids

-                In the prompt field then add for example: Art by artistname, 2 people
+                In the prefix prompt field then add for example: Art by artistname, 2 people

                 Set the prompt compounder to: 2

-                "automatic AND" is entirely built around Latent Couple. It will pass artists and the amount of people/animals/objects to generate in the prompt automatically. Set the prompt compounder equal to the amount of areas defined in Latent Couple.
+                Set the Prompt seperator to: AND
+
+                Set the Prompt Seperator mode to: prefix AND prompt + suffix
+
+                "automatic" is entirely built around Latent Couple. It will pass artists and the amount of people/animals/objects to generate in the prompt automatically. Set the prompt compounder equal to the amount of areas defined in Latent Couple.

                 Example flow:
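For the Latent Couple flow described above, "prefix AND prompt + suffix" with a compounder of 2 would yield AND-separated regions, one per area defined in Latent Couple. A hypothetical sketch (the exact region layout is an assumption about the mode's behavior, not confirmed by this diff):

```python
def latent_couple_prompt(prefix, prompts, suffix):
    # One region per compounded prompt, joined by AND; the prefix becomes
    # its own leading region and the suffix is appended to each prompt.
    regions = [prefix] + [(p + " " + suffix).strip() for p in prompts]
    return " AND ".join(regions)

print(latent_couple_prompt("Art by artistname, 2 people",
                           ["a knight", "a wizard"], ""))
# Art by artistname, 2 people AND a knight AND a wizard
```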
@@ -243,12 +305,166 @@ class Script(scripts.Script):

                 Set the prompt compounder to: 2

+                Set the Prompt seperator to: AND
+
+                Set the Prompt Seperator mode to: automatic

                 </font>
                 """
             )
-        genprom.click(gen_prompt, inputs=[insanitylevel,subject, artist, imagetype, antistring], outputs=[prompt1, prompt2, prompt3,prompt4,prompt5])
+        with gr.Tab("One Button Run and Upscale"):
+            with gr.Row():
+                gr.Markdown(
+                    """
+                    ### TXT2IMG
+                    <font size="2">
+                    Start the WebUI with the --api option for this to work.
+                    </font>
+                    """
+                )
+            with gr.Row():
+                with gr.Column(scale=1):
+                    startmain = gr.Button("Start generating and upscaling!")
+                    automatedoutputsfolderbutton = gr.Button(folder_symbol)
+                    apiurl = gr.Textbox(label="URL", value="http://127.0.0.1:7860")
+                with gr.Column(scale=1):
+                    onlyupscale = gr.Checkbox(label="Don't generate, only upscale", value=False)
+                    gr.Markdown(
+                        """
+                        <font size="2">
+                        Only upscale will not use txt2img to generate an image.
+
+                        Instead it will pick up all files in the \\upscale_me\\ folder and upscale them with the settings below.
+                        </font>
+                        """
+                    )
+            with gr.Row():
+                with gr.Column(scale=1):
+                    amountofimages = gr.Slider(1, 50, value="20", step=1, label="Amount of images to generate")
+                    size = gr.Dropdown(
+                        sizelist, label="Size to generate", value="all")
+                    with gr.Row(scale=1):
+                        samplingsteps = gr.Slider(1, 100, value="20", step=1, label="Sampling steps")
+                        cfg = gr.Slider(1,20, value="6.0", step=0.1, label="CFG")
+                    with gr.Row(scale=1):
+                        hiresfix = gr.Checkbox(label="hires. fix", value=True)
+                        hiressteps = gr.Slider(0, 100, value = "0", step=1, label="Hires steps")
+                        hiresscale = gr.Slider(1, 4, value = "2", step=0.05, label="Scale")
+                        denoisestrength = gr.Slider(0, 1, value="0.60", step=0.01, label="Denoise strength")
+                with gr.Column(scale=1):
+                    model = gr.Dropdown(
+                        modellist, label="model to use", value="currently selected model")
+                with gr.Column(scale=1):
+                    samplingmethod = gr.Dropdown(
+                        samplerlist, label= "Sampler", value="all")
+                    upscaler = gr.Dropdown(
+                        upscalerlist, label="hires upscaler", value="all")
+            with gr.Row():
+                gr.Markdown(
+                    """
+                    ### Quality Gate
+                    <font size="2">
+                    Uses the aesthetic image scorer extension to check the quality of the image.
+
+                    Once turned on, it will retry n times to get an image with the set quality score. If it doesn't, it will take the best image so far and continue.
+
+                    Idea and inspiration by xKean.
+                    </font>
+                    """
+                )
+            with gr.Row():
+                qualitygate = gr.Checkbox(label="Quality Gate", value=False)
+                quality = gr.Slider(1, 10, value = "7.2", step=0.1, label="Quality", visible = False)
+                runs = gr.Slider(1, 50, value = "5", step=1, label="Amount of tries", visible = False)
+            with gr.Row():
+                gr.Markdown(
+                    """
+                    ### IMG2IMG upscale
+                    """
+                )
+            with gr.Row():
+                img2imgactivate = gr.Checkbox(label="Upscale image with IMG2IMG", value=True)
+            with gr.Row():
+                with gr.Column(scale=1):
+                    img2imgbatch = gr.Slider(1, 5, value="1", step=1, label="Amount of times to repeat upscaling with IMG2IMG (loopback)")
+                    img2imgsamplingsteps = gr.Slider(1, 100, value="20", step=1, label="img2img Sampling steps")
+                    img2imgcfg = gr.Slider(1,20, value="6", step=0.1, label="img2img CFG")
+                    img2imgdenoisestrength = gr.Slider(0, 1, value="0.30", step=0.01, label="img2img denoise strength")
+                    img2imgdenoisestrengthmod = gr.Slider(-1,1, value = "-0.05", step=0.01, label="adjust denoise each img2img batch")
+                with gr.Column(scale=1):
+                    img2imgmodel = gr.Dropdown(
+                        modellist, label="img2img model to use", value="currently selected model")
+                    img2imgsamplingmethod = gr.Dropdown(
+                        img2imgsamplerlist, label= "img2img sampler", value="all")
+                    img2imgupscaler = gr.Dropdown(
+                        img2imgupscalerlist, label="img2img upscaler", value="all")
+            with gr.Row():
|
||||||
|
img2imgscale = gr.Slider(1, 4, value="2", step=0.05, label="img2img scale")
|
||||||
|
img2imgpadding = gr.Slider(32, 256, value="64", step=12, label="img2img padding")
|
||||||
|
with gr.Row():
|
||||||
|
ultimatesdupscale = gr.Checkbox(label="Use Ultimate SD Upscale script instead", value=False)
|
||||||
|
gr.Markdown(
|
||||||
|
"""
|
||||||
|
<font size="2">
|
||||||
|
This requires the Ultimate SD Upscale extension, install this if you haven't
|
||||||
|
</font>
|
||||||
|
"""
|
||||||
|
)
|
||||||
|
with gr.Row():
|
||||||
|
with gr.Column(scale = 1):
|
||||||
|
#usdutilewidth, usdutileheight, usdumaskblur, usduredraw, usduSeamsfix, usdusdenoise, usduswidth, usduspadding, usdusmaskblur
|
||||||
|
#usdutilewidth = "512", usdutileheight = "0", usdumaskblur = "8", usduredraw ="Linear", usduSeamsfix = "None", usdusdenoise = "0.35", usduswidth = "64", usduspadding ="32", usdusmaskblur = "8"
|
||||||
|
usdutilewidth = gr.Slider(0, 2048, value="512", step=12, label="tile width", visible = False)
|
||||||
|
usdutileheight = gr.Slider(0, 2048, value="0", step=12, label="tile height", visible = False)
|
||||||
|
usdumaskblur = gr.Slider(0, 64, value="8", step=1, label="Mask blur", visible = False)
|
||||||
|
usduredraw = gr.Dropdown(
|
||||||
|
redraw_modes, label="Type", value="Linear", visible = False)
|
||||||
|
with gr.Column(scale = 1):
|
||||||
|
usduSeamsfix = gr.Dropdown(
|
||||||
|
seams_fix_types, label="Seams fix", value="None", visible = False)
|
||||||
|
usdusdenoise = gr.Slider(0, 1, value="0.35", step=0.01, label="Seams denoise strenght", visible = False)
|
||||||
|
usduswidth = gr.Slider(0, 128, value="64", step=12, label="Seams Width", visible = False)
|
||||||
|
usduspadding = gr.Slider(0, 128, value="32", step=12, label="Seams padding", visible = False)
|
||||||
|
usdusmaskblur = gr.Slider(0, 64, value="8", step=1, label="Seams Mask blur (offset pass only)", visible = False)
|
||||||
|
with gr.Row():
|
||||||
|
with gr.Column(scale = 1):
|
||||||
|
controlnetenabled = gr.Checkbox(label="Enable controlnet tile resample", value=False)
|
||||||
|
controlnetblockymode = gr.Checkbox(label="also enable wierd blocky upscale mode", value=False)
|
||||||
|
with gr.Column(scale = 1):
|
||||||
|
controlnetmodel = gr.Textbox(label="Controlnet tile model name", value = "control_v11f1e_sd15_tile [a371b31b]")
|
||||||
|
with gr.Row():
|
||||||
|
gr.Markdown(
|
||||||
|
"""
|
||||||
|
<font size="2">
|
||||||
|
This requires Controlnet 1.1 extension and the tile resample model, install this if you haven't
|
||||||
|
In settings for Controlnet, enable "Allow other script to control this extension"
|
||||||
|
|
||||||
|
Don't use wierd blocky upscale mode. Or maybe do?
|
||||||
|
</font>
|
||||||
|
"""
|
||||||
|
)
|
||||||
|
with gr.Row():
|
||||||
|
with gr.Column(scale = 1):
|
||||||
|
enableextraupscale = gr.Checkbox(label="Enable upscale with extras", value=False)
|
||||||
|
with gr.Row():
|
||||||
|
with gr.Column(scale = 1):
|
||||||
|
extrasresize = gr.Slider(0, 8, value="2", step=0.05, label="Upscale resize", visible = False)
|
||||||
|
extrasupscaler1 = gr.Dropdown(
|
||||||
|
img2imgupscalerlist, label="upscaler 1", value="all", visible = False)
|
||||||
|
extrasupscaler2 = gr.Dropdown(
|
||||||
|
img2imgupscalerlist, label="upscaler 2", value="all", visible = False)
|
||||||
|
extrasupscaler2visiblity = gr.Slider(0, 1, value="0.5", step=0.05, label="Upscaler 2 vis.", visible = False)
|
||||||
|
with gr.Column(scale = 1):
|
||||||
|
extrasupscaler2gfpgan = gr.Slider(0, 1, value="0", step=0.05, label="GFPGAN vis.", visible = False)
|
||||||
|
extrasupscaler2codeformer = gr.Slider(0, 1, value="0.15", step=0.05, label="CodeFormer vis.", visible = False)
|
||||||
|
extrasupscaler2codeformerweight = gr.Slider(0, 1, value="0.1", step=0.05, label="CodeFormer weight", visible = False)
|
||||||
|
|
||||||
|
|
||||||
|
genprom.click(gen_prompt, inputs=[insanitylevel,subject, artist, imagetype, antistring,prefixprompt, suffixprompt,promptcompounderlevel, seperator], outputs=[prompt1, prompt2, prompt3,prompt4,prompt5])
prompt1toworkflow.click(prompttoworkflowprompt, inputs=prompt1, outputs=workprompt)
prompt2toworkflow.click(prompttoworkflowprompt, inputs=prompt2, outputs=workprompt)
prompt4toworkflow.click(prompttoworkflowprompt, inputs=prompt4, outputs=workprompt)
prompt5toworkflow.click(prompttoworkflowprompt, inputs=prompt5, outputs=workprompt)

startmain.click(generateimages, inputs=[amountofimages, size, model, samplingsteps, cfg, hiresfix, hiressteps, denoisestrength, samplingmethod, upscaler, hiresscale, apiurl, qualitygate, quality, runs, insanitylevel, subject, artist, imagetype, silentmode, workprompt, antistring, prefixprompt, suffixprompt, negativeprompt, promptcompounderlevel, seperator, img2imgbatch, img2imgsamplingsteps, img2imgcfg, img2imgsamplingmethod, img2imgupscaler, img2imgmodel, img2imgactivate, img2imgscale, img2imgpadding, img2imgdenoisestrength, ultimatesdupscale, usdutilewidth, usdutileheight, usdumaskblur, usduredraw, usduSeamsfix, usdusdenoise, usduswidth, usduspadding, usdusmaskblur, controlnetenabled, controlnetmodel, img2imgdenoisestrengthmod, enableextraupscale, controlnetblockymode, extrasupscaler1, extrasupscaler2, extrasupscaler2visiblity, extrasupscaler2gfpgan, extrasupscaler2codeformer, extrasupscaler2codeformerweight, extrasresize, onlyupscale])

automatedoutputsfolderbutton.click(openfolder)

# Turn things off and on for onlyupscale and txt2img
def onlyupscalevalues(onlyupscale):
    onlyupscale = not onlyupscale
    return {
        amountofimages: gr.update(visible=onlyupscale),
        size: gr.update(visible=onlyupscale),
        samplingsteps: gr.update(visible=onlyupscale),
        cfg: gr.update(visible=onlyupscale),

        hiresfix: gr.update(visible=onlyupscale),
        hiressteps: gr.update(visible=onlyupscale),
        hiresscale: gr.update(visible=onlyupscale),
        denoisestrength: gr.update(visible=onlyupscale),
        upscaler: gr.update(visible=onlyupscale),

        model: gr.update(visible=onlyupscale),
        samplingmethod: gr.update(visible=onlyupscale),

        qualitygate: gr.update(visible=onlyupscale),
        quality: gr.update(visible=onlyupscale),
        runs: gr.update(visible=onlyupscale)
    }

onlyupscale.change(
    onlyupscalevalues,
    [onlyupscale],
    [amountofimages, size, samplingsteps, cfg, hiresfix, hiressteps, hiresscale, denoisestrength, upscaler, model, samplingmethod, qualitygate, quality, runs]
)
# Turn things off and on for hiresfix
def hireschangevalues(hiresfix):
    return {
        hiressteps: gr.update(visible=hiresfix),
        hiresscale: gr.update(visible=hiresfix),
        denoisestrength: gr.update(visible=hiresfix),
        upscaler: gr.update(visible=hiresfix)
    }

hiresfix.change(
    hireschangevalues,
    [hiresfix],
    [hiressteps, hiresscale, denoisestrength, upscaler]
)

# Turn things off and on for quality gate
def qgatechangevalues(qualitygate):
    return {
        quality: gr.update(visible=qualitygate),
        runs: gr.update(visible=qualitygate)
    }

qualitygate.change(
    qgatechangevalues,
    [qualitygate],
    [quality, runs]
)

# Turn things off and on for USDU
def ultimatesdupscalechangevalues(ultimatesdupscale):
    return {
        usdutilewidth: gr.update(visible=ultimatesdupscale),
        usdutileheight: gr.update(visible=ultimatesdupscale),
        usdumaskblur: gr.update(visible=ultimatesdupscale),
        usduredraw: gr.update(visible=ultimatesdupscale),

        usduSeamsfix: gr.update(visible=ultimatesdupscale),
        usdusdenoise: gr.update(visible=ultimatesdupscale),
        usduswidth: gr.update(visible=ultimatesdupscale),
        usduspadding: gr.update(visible=ultimatesdupscale),
        usdusmaskblur: gr.update(visible=ultimatesdupscale)
    }

ultimatesdupscale.change(
    ultimatesdupscalechangevalues,
    [ultimatesdupscale],
    [usdutilewidth, usdutileheight, usdumaskblur, usduredraw, usduSeamsfix, usdusdenoise, usduswidth, usduspadding, usdusmaskblur]
)

# Turn things off and on for EXTRAS
def enableextraupscalechangevalues(enableextraupscale):
    return {
        extrasupscaler1: gr.update(visible=enableextraupscale),
        extrasupscaler2: gr.update(visible=enableextraupscale),
        extrasupscaler2visiblity: gr.update(visible=enableextraupscale),
        extrasresize: gr.update(visible=enableextraupscale),

        extrasupscaler2gfpgan: gr.update(visible=enableextraupscale),
        extrasupscaler2codeformer: gr.update(visible=enableextraupscale),
        extrasupscaler2codeformerweight: gr.update(visible=enableextraupscale)
    }

enableextraupscale.change(
    enableextraupscalechangevalues,
    [enableextraupscale],
    [extrasupscaler1, extrasupscaler2, extrasupscaler2visiblity, extrasresize, extrasupscaler2gfpgan, extrasupscaler2codeformer, extrasupscaler2codeformerweight]
)

return [insanitylevel, subject, artist, imagetype, prefixprompt, suffixprompt, negativeprompt, promptcompounderlevel, ANDtoggle, silentmode, workprompt, antistring, seperator]
def run(self, p, insanitylevel, subject, artist, imagetype, prefixprompt,suffixprompt,negativeprompt, promptcompounderlevel, ANDtoggle, silentmode, workprompt, antistring,seperator):
    images = []
    infotexts = []

    batchsize = p.batch_size
    p.n_iter = 1
    p.batch_size = 1

    if(silentmode and workprompt != ""):
        print("Workflow mode turned on, not generating a prompt. Using workflow prompt.")
    elif(silentmode):
        print("Warning: workflow mode is turned on, but no workprompt has been given.")
    elif p.prompt != "" or p.negative_prompt != "":
        print("Please note that the existing prompt and negative prompt fields are no longer used.")

    if(ANDtoggle == "automatic" and artist == "none"):
        print("Automatic mode and artist set to none don't work well together. Ignoring this setting!")
        artist = "all"

    if(ANDtoggle == "automatic" and (prefixprompt != "")):
        print("Automatic mode doesn't work well with a prefix prompt filled in. Ignoring this prompt field!")
        prefixprompt = ""

    # prompt compounding
    print("Starting generating the prompt")
    preppedprompt = ""

    if(ANDtoggle == "automatic"):
        preppedprompt += prefixprompt + ", "
        if(artist != "none"):
            preppedprompt += build_dynamic_prompt(insanitylevel, subject, artist, imagetype, True, antistring)
        if(subject == "humanoid"):
            preppedprompt += ", " + promptcompounderlevel + " people"
        if(subject == "landscape"):
            preppedprompt += ", landscape"
        if(subject == "animal"):
            preppedprompt += ", " + promptcompounderlevel + " animals"
        if(subject == "object"):
            preppedprompt += ", " + promptcompounderlevel + " objects"

    if(ANDtoggle != "none" and ANDtoggle != "automatic"):
        preppedprompt += prefixprompt

    if(ANDtoggle != "none"):
        if(ANDtoggle != "prefix + prefix + prompt + suffix"):
            prefixprompt = ""
        if(seperator == "comma"):
            preppedprompt += " \n , "
        else:
            preppedprompt += " \n " + seperator + " "

    # Here is where we build a "normal" prompt
    preppedprompt += build_dynamic_prompt(insanitylevel, subject, artist, imagetype, False, antistring, prefixprompt, suffixprompt, promptcompounderlevel, seperator)

    # set everything ready
    p.prompt = preppedprompt
    p.negative_prompt = negativeprompt

    if(silentmode == True):
        p.prompt = workprompt
# My first generation
Here are some example methods and tips on how to get some good results.

Maybe they can give you some ideas and inspiration on how to use this tool. You can also just leave everything on "all", which is the most fun!

It works best with models that are more general and multi-purpose, such as [deliberate](https://civitai.com/models/4823/deliberate) and [dreamlike diffusion](https://civitai.com/models/1274/dreamlike-diffusion-10). However, feel free to use it on your personal favorite models.

For models that are trained more on specific subjects, match the subject to the model to get the best results.

## Portraits

In this example, I am using the deliberate model.
I've set the Sampling method to "DPM++ SDE Karras", Sampling steps to "25" and lowered the CFG Scale to "6".
You can increase the batch count to keep creating new images.



Then I scroll down and activate the One Button Prompt script.

I set the complexity of the prompt to 5, which is a nice middle ground for prompts.

As "Subject Type" I select "humanoid"; this will ensure I get a human, or humanoid-like, result from the prompt.

For "Artists" I select "popular". This is a list of artists popular in images on CivitAI, such as Loish, Artgerm, Alphonse Mucha and many others. You can also try switching "Artists" to "portrait" or "character".

As "Type of image" I select "portrait", so that it will always default to a portrait image. You can also try switching "Type of image" to "digital art" to get more full-body shots.



At that point, you can just press generate.

Examples:

<img src="https://github.com/AIrjen/OneButtonPrompt/assets/130234949/21151b88-e5bb-471e-a66f-c95c05168c18.png" alt="click!" width="30%" height="30%">
<img src="https://github.com/AIrjen/OneButtonPrompt/assets/130234949/da4ffb1f-1d38-4712-8141-4d4443c0b75a.png" alt="click!" width="30%" height="30%">

> portrait, ,close up of a Hopeless Slovak Mind Flayer, cosplaying as John Wick, Dark hair styled as Ballerina bun, Day of the Dead face paint, Desaturated, (art by Jeremy Mann:1.2)

> art by Greg Rutkowski, portrait, ,close up of a Brutal Micronesian [Sphinx|James from pokemon], wearing Russian Violet 1980s neon clothing, fashion modeling pose, at Nighttime, Visual novel, Grim, soft light, Infrared

<img src="https://github.com/AIrjen/OneButtonPrompt/assets/130234949/5a835260-53e0-426b-aeef-bb048ea63905.png" alt="click!" width="30%" height="30%">
<img src="https://github.com/AIrjen/OneButtonPrompt/assets/130234949/66609fa3-b657-4e4d-8db1-9cd8b3b635f4.png" alt="click!" width="30%" height="30%">

> portrait, ,close up of a Alluring Compelling Macedonian Tom Hardy, Skydiving, Bending backwards, Gray hair styled as Ponytail, from inside of a Cambridge, Simple illustration, Surprising, Heidelberg School, soft light, Magic the gathering, artstation

> portrait, ,close up of a plump Female Paladin, with Lycra skin, background is The Chaco Culture, art by Loish
## Photos
In this example, I am using the [Realistic Vision](https://civitai.com/models/4201/realistic-vision-v20) model.

I've set the Sampling method to "DPM++ SDE Karras", Sampling steps to "25" and kept the CFG Scale at "7".
You can increase the batch count to keep creating new images.

I set the complexity of the prompt to 5, which is a nice middle ground for prompts. For photos you might want to go a bit lower.

As "Subject Type" I select "humanoid", as the Realistic Vision model likes to generate humans.

For "Artists" I select "photograph". This is a list of photography artists. However, "fashion" is also a good option here. I can also recommend "none", so no artist is generated at all.
When using photograph artists, there is a very high chance of getting a portrait image.

As "Type of image" I select "photograph", so that it will always default to a photograph.



Examples:

<img src="https://github.com/AIrjen/OneButtonPrompt/assets/130234949/6fed6b83-5ebe-44b1-bd30-3dad00ee6e4e.png" alt="click!" width="30%" height="30%">
<img src="https://github.com/AIrjen/OneButtonPrompt/assets/130234949/a5fcd78a-ca82-4b33-ac7f-5f2c02720d81.png" alt="click!" width="30%" height="30%">

> [art by Tony Conrad|art by Ed Freeman], photograph, two shot angle of a Chanel Iman, wearing Ultrarealistic Preppy clothing, Honey hair styled as Short and messy, film grain, Canon 5d mark 4, macro lens

> photograph, High exposure of a Dynamic Ancient Male Writer, Dark hair styled as Tousled, Hazy conditions, Bloom light, film grain, Sony A9 II, Depth of field 100mm, Spirals, dslr, art by Larry Sultan, (art by Ray Collins:1.1)

<img src="https://github.com/AIrjen/OneButtonPrompt/assets/130234949/18c3a1ab-cfa3-4975-8a8b-c4872daf86c4.png" alt="click!" width="30%" height="30%">
<img src="https://github.com/AIrjen/OneButtonPrompt/assets/130234949/a67cf67b-6365-41a8-a791-b6a2b643b2ed.png" alt="click!" width="30%" height="30%">

> photograph, Dutch angle shot of a Homey Nightelf, Pink and Silver hair styled as Dreadlocks, Process Art, Nostalgic lighting, film grain, Canon eos 5d mark 4, telephoto lens

> art by Geof Kern, (art by Guo Pei:0.8), photograph, 3/4 view of a Reiwa Era Kehlani, Writing, Dark hair styled as Wavy, Necklace, Soul patch, Sunny, Dada Art, volumetric lighting, film grain, dslr, F/1.8, Plain white background
## Sci-fi Animals (adding trigger words and prompt prefix)
In this example, I am using the [dreamlike diffusion](https://civitai.com/models/1274/dreamlike-diffusion-10) model.

I've set the Sampling method to "DPM++ SDE Karras", Sampling steps to "25" and kept the CFG Scale at "7".
I've set the Width to 768 to create some widescreen images.

You can increase the batch count to keep creating new images.

I set the complexity of the prompt to 7, since we can go a little crazy here, but not too much.

As "Subject Type" I select "animal"; we want to create some cool animals.

For "Artists" I select "sci-fi". This is a list of sci-fi artists. However, "fantasy" is also a good option here.

As "Type of image" I select "concept art", because we want some cool concept art to show up.

As an addition, I am also filling in the prompt prefix field with "dreamlikeart, sci-fi animal".

"dreamlikeart" is the trigger word of the dreamlike diffusion model, so we want to add that at the start.

Adding "sci-fi animal" at the start makes sure it understands it needs to create a sci-fi animal, regardless of what else the prompt generates. You can play around with these settings, depending on the model you are using and the results you want.



Examples:

<img src="https://github.com/AIrjen/OneButtonPrompt/assets/130234949/52714307-a2f6-4a22-a011-a1752f2e8523.png" alt="click!" width="30%" height="30%">
<img src="https://github.com/AIrjen/OneButtonPrompt/assets/130234949/375c0995-cade-46c8-8a92-d093aa6cb2d6.png" alt="click!" width="30%" height="30%">

> dreamlikeart, sci-fi animal, (art by Kilian Eng:1.1), concept art, Street level shot of a Floating were badger, from inside of The Tablets of Stone, Foggy, Movie still, Seapunk Art, Low shutter, Lomography, opulent

> dreamlikeart, sci-fi animal, concept art, ground level shot of a Glam monster Chicken, Realistic, in the style of Bust of Nefertiti, (art by Brian Sum:1.3)

<img src="https://github.com/AIrjen/OneButtonPrompt/assets/130234949/6e23079a-1981-4707-8058-ff8acd474666.png" alt="click!" width="30%" height="30%">
<img src="https://github.com/AIrjen/OneButtonPrompt/assets/130234949/15a11263-a9db-42c2-a8da-cae63479c802.png" alt="click!" width="30%" height="30%">

> dreamlikeart, sci-fi animal, concept art, aerial shot of a Happy Stegosaurus, Triathlon, wearing 1920's dark pastel Denim shorts, Adventure pose, moody lighting, Fish-eye Lens, Cinestill, National Geographic, dripping Mustard, photolab, (art by Ed Emshwiller:1.3),art by Andy Fairhurst

> dreamlikeart, sci-fi animal, concept art, full shot of a Lovely Scorpion, background is Mountain, Spring, Rough sketch, key light, 50mm, Film grain, Dota style, inticrate details
# One Button Run and Upscale
An extensive new feature of One Button Prompt. One Button goes brrrr.

This will allow you to:
1. Generate an image with TXT2IMG
   1. Can enable hires. fix
   2. Possible to set up a __Quality Gate__, so only the best images get upscaled
   3. Possible to ignore the One Button Prompt generation and __use your own prompts__
2. Upscale that image with IMG2IMG
   1. This process can be repeated (loopback).
   2. Supports the __SD Upscale__, __Ultimate SD Upscale__ and __ControlNet tile_resample__ methods of upscaling
3. Upscale with EXTRAS
4. Possibility to __just batch upscale__ existing images

All with a single press of __One Button__.

## How does it work?
It works by making various WebUI API calls with the correct parameters, in the correct order.
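A minimal sketch of the kind of call involved, assuming a default local instance started with --api. The payload fields follow the public `/sdapi/v1/txt2img` endpoint; the helper names are my own, not the script's:

```python
import base64
import json
from urllib import request

def build_txt2img_payload(prompt, steps=20, cfg=6.0, width=512, height=768):
    # A small subset of the txt2img parameters the script can set
    return {
        "prompt": prompt,
        "steps": steps,
        "cfg_scale": cfg,
        "width": width,
        "height": height,
    }

def call_txt2img(apiurl, payload):
    # POST the payload to the WebUI API and decode the returned images
    req = request.Request(
        apiurl.rstrip("/") + "/sdapi/v1/txt2img",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        result = json.load(resp)
    # WebUI returns generated images as base64-encoded strings
    return [base64.b64decode(img) for img in result["images"]]
```

Chaining such calls (txt2img, then img2img, then extras) with each step's output as the next step's input is the whole trick.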
## Requirements
The basic requirement is that WebUI is started with --api enabled. To do so, open the webui-user.bat file in your WebUI folder and add --api to the line with set COMMANDLINE_ARGS.
For example, this is what my file looks like.



After you have made your changes, restart WebUI with webui-user.bat
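For reference, a stock webui-user.bat with the flag added typically looks like the sketch below; treat the exact contents as an assumption, since your install may already have other arguments on that line:

```bat
@echo off

set PYTHON=
set GIT=
set VENV_DIR=
rem --api exposes the HTTP endpoints that One Button Run and Upscale needs
set COMMANDLINE_ARGS=--api

call webui.bat
```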
### Optional requirements
#### Quality Gate
To be able to use the Quality Gate functionality, you need to install the following extension into WebUI:

[image-scorer](https://github.com/tsngo/stable-diffusion-webui-aesthetic-image-scorer)

You can install this via the "Install from URL" option in WebUI
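Once installed, the gate behaviour described on the One Button run tab amounts to a retry loop: generate, score, keep the best, and stop early once the threshold is met. A minimal sketch, where `generate` and `score` are hypothetical stand-ins for the txt2img call and the aesthetic scorer:

```python
def quality_gate(generate, score, threshold=7.2, tries=5):
    """Retry up to `tries` times; return the first image that passes,
    otherwise the best-scoring image seen so far."""
    best_image, best_score = None, float("-inf")
    for _ in range(tries):
        image = generate()
        s = score(image)
        if s >= threshold:
            return image, s  # good enough: pass the gate immediately
        if s > best_score:
            best_image, best_score = image, s
    # no image reached the threshold: continue with the best so far
    return best_image, best_score
```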
#### Ultimate SD Upscale
To be able to use Ultimate SD Upscale, you need to have it installed. It can be found here:

[Ultimate SD Upscale](https://github.com/Coyote-A/ultimate-upscale-for-automatic1111)

You can install it via the "Install from URL" option in WebUI

#### ControlNet 1.1 and tile_resample
To be able to use ControlNet 1.1 and the tile_resample method, you need to install both.

[ControlNET](https://github.com/Mikubill/sd-webui-controlnet)

[Tile model](https://huggingface.co/lllyasviel/ControlNet-v1-1/blob/main/control_v11f1e_sd15_tile.pth)

Here is an extensive [guide on civitai](https://civitai.com/models/59811/4k-resolution-upscale-8x-controlnet-tile-resample-in-depth-with-resources)

Here is a [youtube video guide from Olivio Sarikas](https://www.youtube.com/watch?v=zrGLEgGFJY4)

## Image locations
One Button Prompt uses its own locations and file naming convention.

Go to your WebUI installation folder and then \extensions\onebuttonprompt\automated_outputs\



Here you should see the One Button Prompt folder structure.
1. txt2img -> txt2img results from One Button Run are stored here.
2. promps -> txt2img prompts and parameters used are stored here.
3. img2img -> img2img results from One Button Run are stored here. With loopback enabled, they are overwritten when loopback is done.
4. extras -> extras results from One Button Run are stored here.
5. upscale_me -> Place images here for using "just upscale" mode.
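The "just upscale" mode in item 5 simply walks that folder. A minimal sketch of collecting its input images; the base path follows the structure above, while the extension filter and the function name are assumptions:

```python
from pathlib import Path

def images_to_upscale(webui_root):
    # Gather every image dropped into upscale_me/, the way the
    # "Don't generate, only upscale" mode would pick them up
    folder = (Path(webui_root) / "extensions" / "onebuttonprompt"
              / "automated_outputs" / "upscale_me")
    exts = {".png", ".jpg", ".jpeg", ".webp"}
    return sorted(p for p in folder.glob("*") if p.suffix.lower() in exts)
```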
|
||||||
|
|
||||||
|
When using One Button Prompt to generate the prompt, the subject will be part of the name.
|
||||||
|
|
||||||
|
Here are some examples, so you can quickly identify the different pictures.
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
|
||||||
|
## One Button run tab and options explained

### TXT2IMG

Important to note is that the TXT2IMG prompt generation process works with the options set in the Main, Workflow Assist and Advanced tabs.

So you can set up any specifics you want for the prompt generation first.

The __"folder"__ buttons open your One Button Prompt automated outputs folders.

There are some general options to set first.

__URL__ -> This should be the URL used by WebUI (see your web browser). By default this is __http://127.0.0.1:7860__, but change it to your specific instance.
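This URL is what the autorunner talks to. As a rough sketch, a generation request against the standard WebUI API (launched with the --api flag) looks like this; the field names follow the /sdapi/v1/txt2img endpoint, and the values are purely illustrative:

```python
# Minimal sketch of a WebUI API txt2img payload. This shows the shape of
# the request, not the extension's actual internals.
def build_txt2img_payload(prompt, negative_prompt="", width=512, height=768,
                          steps=20, sampler_name="Euler a"):
    return {
        "prompt": prompt,
        "negative_prompt": negative_prompt,
        "width": width,
        "height": height,
        "steps": steps,
        "sampler_name": sampler_name,
    }

url = "http://127.0.0.1:7860"  # change to your specific instance
payload = build_txt2img_payload("a lonely industrial cityscape")
# send with e.g.: requests.post(f"{url}/sdapi/v1/txt2img", json=payload)
```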
__Amount of images to generate__ -> How many times to repeat the entire process, i.e. how many images to generate and upscale.

__Don't generate, only upscale__ -> Place images in the /upscale_me/ folder. When enabled, it will skip the TXT2IMG part and start batch upscaling those images with the set parameters.

__model to use__ -> Select which model to use during generation. __"Currently Selected Model"__ means the model you have loaded right now. __"all"__ means a random model (excluding inpainting models). Or select a specific one.
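The three choices behave roughly like this (a sketch of the described behaviour; the model titles are made-up examples):

```python
import random

# Sketch of the "model to use" behaviour described above: keep the
# current model, pick any non-inpainting model at random, or use a
# specific one.
def choose_model(titles, selected, mode="all"):
    if mode == "Currently Selected Model":
        return selected
    if mode == "all":
        candidates = [t for t in titles if "inpainting" not in t.lower()]
        return random.choice(candidates)
    return mode  # a specific model title
```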
For TXT2IMG most options should be familiar. I will explain some of the additions and changes here.

__Size to generate__:

1. __all__ -> picks randomly between __portrait__, __wide__ and __square__
2. __portrait__ -> 512x768
3. __wide__ -> 768x512
4. __square__ -> 512x512
5. __ultrawide__ -> 1280x360 (Don't worry, this one is just for me, and won't be used when picking "all")
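The size options above boil down to a small lookup, sketched here; note how "all" never draws ultrawide:

```python
import random

# Sketch of the size table above: "all" draws only from portrait, wide
# and square; ultrawide is never picked at random.
def pick_size(choice="all"):
    sizes = {"portrait": (512, 768), "wide": (768, 512),
             "square": (512, 512), "ultrawide": (1280, 360)}
    if choice == "all":
        choice = random.choice(["portrait", "wide", "square"])
    return sizes[choice]
```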
__Sampler__:

Added option __"all"__, which picks randomly.

__Hires upscaler__:

Added option __"all"__, which picks randomly.

Added option __"automatic"__, which sets the upscaler and denoise strength based on the prompt. For example, if the prompt contains anime, it will try to use the anime upscaler.
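The "automatic" idea can be sketched as keyword matching; the keyword-to-upscaler mapping and denoise values below are illustrative, not the extension's actual table:

```python
# Sketch of "automatic" upscaler selection: match keywords in the prompt
# to an upscaler and a denoise strength. Mapping and values here are
# illustrative assumptions.
def pick_upscaler(prompt):
    if "anime" in prompt.lower():
        return "R-ESRGAN 4x+ Anime6B", 0.4
    return "4x-UltraSharp", 0.55
```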
#### Quality Gate

The idea came from xKean. Such an awesome addition.

See requirements above.

When enabling __Quality Gate__, it will repeat the above TXT2IMG process until:

- an image reaches the __Quality Score__
- we have reached the __amount of tries__

When an image reaches the quality score, all other images are removed. It will then continue with that quality image.

If we reach the amount of tries, it will __pick the image with the highest score__. All other images are removed. It will then continue with the highest scoring image.
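The two stop conditions above can be sketched as a loop; `generate` and `score` are stand-ins for the actual generation and scoring steps:

```python
# Sketch of the Quality Gate loop: keep generating until an image
# reaches the quality score, or fall back to the best one seen after
# the allowed number of tries.
def quality_gate(generate, score, quality_score=7.0, max_tries=5):
    best_img, best_score = None, float("-inf")
    for _ in range(max_tries):
        img = generate()
        s = score(img)
        if s >= quality_score:
            return img, s  # good enough, stop immediately
        if s > best_score:
            best_img, best_score = img, s
    return best_img, best_score  # out of tries: highest score wins
```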
### IMG2IMG

Again, most options for upscaling with IMG2IMG should be familiar. It defaults to the __"SD Upscale"__ method, which is included in WebUI by default.

Enable __"Upscale image with IMG2IMG"__ to actually turn this on.

__Amount of times to repeat upscaling with IMG2IMG (loopback)__ -> This controls the number of times IMG2IMG is used to upscale.

I will describe some of the changes from normal.

__model to use__ -> Select which model to use during generation. __"Currently Selected Model"__ means the model you have loaded right now. __"all"__ means a random model (excluding inpainting models). Or select a specific one.

Note that you can select a different model here than the one used in the TXT2IMG process.

__Sampler__:

Added option __"all"__, which picks randomly.

__Upscaler__:

Added option __"all"__, which picks randomly.

Added option __"automatic"__, which sets the upscaler and denoise strength based on the prompt. For example, if the prompt contains anime, it will try to use the anime upscaler.

__adjust denoise each img2img batch__ -> Adds or subtracts this amount of denoise during each IMG2IMG batch. Usually you want a lower denoise when upscaling larger images.
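That denoise adjustment amounts to a simple schedule over the loopback passes, sketched here with illustrative default values:

```python
# Sketch of the loopback denoise adjustment: add or subtract a fixed
# step each IMG2IMG pass, clamped to a sane range. Starting value and
# step are illustrative, not the extension's defaults.
def denoise_schedule(start=0.35, step=-0.05, loops=3, lo=0.05, hi=1.0):
    d = start
    out = []
    for _ in range(loops):
        out.append(round(d, 2))
        d = min(hi, max(lo, d + step))
    return out

print(denoise_schedule())  # -> [0.35, 0.3, 0.25]
```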
#### Use Ultimate SD Upscale script

Turn on __"Use Ultimate SD Upscale script instead"__ to use Ultimate SD Upscale. You need to have that extension installed, see requirements above.

Here, all options from Ultimate SD Upscale are available.

It uses the img2img padding for the padding.

It will always be set to __"Scale from image size"__.

#### Controlnet tile resample

Best used in combination with Ultimate SD Upscale and the 4x-UltraSharp upscaler, though you can use it with the normal SD Upscale as well.

The controlnet tile model name is filled in for you, but if a newer version comes out, you might have to change it to that specific one. The current version is __"control_v11f1e_sd15_tile [a371b31b]"__.
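For the curious, a ControlNet unit travels inside the img2img request under "alwayson_scripts" when going through the WebUI API. The exact unit fields depend on your ControlNet version, so treat the field set below as an assumption:

```python
# Sketch of how a ControlNet tile unit is attached to an API payload.
# Field set is an assumption; check your ControlNet version's API docs.
def controlnet_tile_args(model="control_v11f1e_sd15_tile [a371b31b]"):
    return {
        "alwayson_scripts": {
            "controlnet": {
                "args": [{
                    "module": "tile_resample",
                    "model": model,
                    "weight": 1.0,
                }]
            }
        }
    }
```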
There is also an option __"also enable wierd blocky upscale mode"__. This was a bug I found during development, but I brought it in as a feature. Best used with the 4x-UltraSharp upscaler and a decent denoise (0.7-0.8).

Here is an example result, so you can see what to expect.

> art by Daria Petrilli, art by J.C. Leyendecker, landscape of a City, at Overcast, Simple illustration, Lonely, Industrial Art, volumetric lighting, DayGlo and electric pink hue, under water
## Upscale with EXTRAS

The last part is rather straightforward. As the last step, you can upscale the image through the EXTRAS tab.

All the main options are here; you do need to enable __"Enable upscale with extras"__.

Again, upscalers set to __"all"__ are picked randomly. Both can end up being the same upscaler.

# Just Upscale mode

Next to the Start button is a checkbox for __"Don't generate, only upscale"__.

This mode __skips the entire TXT2IMG part__ of the batch. Instead, it will pick up all image files placed in the __\extensions\OneButtonPrompt-dev\automated_outputs\upscale_me\ folder__ and loop over those instead.

It will ignore any .txt file.

It will keep the original files, so you have to remove them before starting the next batch.
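The file pickup described above can be sketched like this; the list of image extensions is an assumption:

```python
from pathlib import Path

# Sketch of what "just upscale" mode is described to do: collect every
# image in upscale_me, skipping .txt (and any other non-image) files.
# The extension list is an assumption.
def collect_upscale_targets(folder):
    exts = {".png", ".jpg", ".jpeg", ".webp"}
    return sorted(p for p in Path(folder).iterdir()
                  if p.suffix.lower() in exts)
```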
Example:

This method will __read the original prompt and negative prompt__ from the image file (if they exist) and use them during the upscaling process.

Thus it can be used for __any image generated by the WebUI__, and is not specific to One Button Prompt image files.
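This works because WebUI embeds the generation settings in a PNG text chunk named "parameters": the prompt comes first, then an optional "Negative prompt:" line, then the settings line. A sketch of parsing that text (reading the chunk itself would be e.g. Pillow's `Image.open(path).info.get("parameters")`; multi-line negative prompts are ignored here):

```python
# Sketch: split WebUI's "parameters" text into prompt and negative
# prompt. Single-line negative prompts only; settings start at "Steps:".
def parse_parameters(text):
    prompt_lines, negative = [], ""
    for line in text.splitlines():
        if line.startswith("Negative prompt:"):
            negative = line[len("Negative prompt:"):].strip()
        elif line.startswith("Steps:"):
            break  # settings line reached, stop
        else:
            prompt_lines.append(line)
    return "\n".join(prompt_lines).strip(), negative

info = "a cosy cabin, snow\nNegative prompt: blurry\nSteps: 20, Sampler: Euler a"
print(parse_parameters(info))  # -> ('a cosy cabin, snow', 'blurry')
```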
This method is perfect for anyone who just generates their images via TXT2IMG, and then wants to __batch upscale__ the best results automatically.

# Workflows

Each of you has their own workflow ideas. Some people might prefer hires fix over tile upscaling, others might prefer upscaling multiple times as a loopback.

For me, I like to cherry-pick results, and then use the "Just Upscale mode" to batch upscale my favorites.

You can also set everything to "all", and just start generating a bunch of random stuff.

I'm sure your specific way of upscaling and working is supported by the options offered here. Missing something? Let me know!
## Using your own prompts

If you don't like the results of the One Button Prompt generator (how could you not!), you can __turn off the prompt generation and use your own prompts__ instead.

Go to the __"Workflow assist"__ tab, and enable __"Workflow mode"__.

Put your prompt in the __Workflow prompt__ field, and it will start using that.

For the __negative prompt__, use the __"Main"__ tab.

Put the negative prompt in the __"Use this negative prompt"__ field.

And done, you can now use One Button Run and Upscale with your own prompts.

### Please enjoy

Keep having fun!