diff --git a/README.md b/README.md
index ffd3b41..2bd680f 100644
--- a/README.md
+++ b/README.md
@@ -12,6 +12,8 @@ It generates an entire prompt from scratch. It is random, but controlled. You si
It is best used on all-purpose models, such as Stable Difussion 1.5 or those based on 1.5. Such as [deliberate](https://civitai.com/models/4823/deliberate) and [dreamlike diffusion](https://civitai.com/models/1274/dreamlike-diffusion-10). However, feel free to use it on your personal favorite models.
+A simple user guide covering first-time use and settings is available [here](https://github.com/AIrjen/OneButtonPrompt/user_guides/my_first_generation.md).
+
## How to use in automatic1111
In TXT2IMG or IMG2IMG, select the script "One Button Prompt".
@@ -33,7 +35,7 @@ Enjoy creating awesome pictures:
Please be aware, that not each picture will be awesome due to the randomness of the prompt, artist and model used.
You might get an epic landscape, or a photo of an Aggregavated Trout. In my experience, about 1 in 5 is good. Everyone of them is interesting.
-Some more examples below.
+Some more examples are below. Also check the first-time user guide [here](https://github.com/AIrjen/OneButtonPrompt/user_guides/my_first_generation.md).
### Some details
It will generate between 0 and 3 artists, and add those the prompt.
@@ -124,20 +126,34 @@ If you want more control, use "current prompt + AND". An example would be "Art b
>
>AND art by brandon woelfel, 2 people, Abstract, selfie shot angle of a 1920's Nasty slight Michelle Yeoh surrounded by Grapefruits, Blonde hair styled as Short and messy, glowing Turquoise eyes, Anime screencap, Panfuturism, Magic the gathering, photolab, High quality
-# off-hands, automatic generation
-This project started out as a personal project, to automatically generate windows wallpapers. The code is still in here, and can be used. You would need to set it up correctly, so it is for advanced users only.
+# One Button Run and Upscale
+Using the API feature of WebUI, this will allow you to:
+1. Generate an image with TXT2IMG
+    1. Optionally enable Hires. fix
+    2. Optionally set up a __Quality Gate__, so only the best images get upscaled
+    3. Optionally ignore the One Button Prompt generation and __use your own prompts__
+2. Upscale that image with IMG2IMG
+    1. This process can be repeated (loopback).
+ 2. Supports __SD Upscale__, __Ultimate SD Upscale__ and __Controlnet tile_resample__ methods of upscaling
+3. Upscale with EXTRAS
+4. Option to __just upscale__ existing images
+
+All with a single press of __One Button__.
+
+[User Guide here!](https://github.com/AIrjen/OneButtonPrompt/user_guides/one_button_run_and_upscale.md#one-button-run-and-upscale)
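+
+As a rough sketch of the mechanism (not the exact payload this script sends), a single TXT2IMG call against the WebUI API looks like the snippet below, assuming WebUI was started with `--api` on the default port and that the prompt and output path are placeholders:
+
+```python
+import base64
+
+import requests  # third-party HTTP client
+
+# Hypothetical minimal payload; One Button Run and Upscale builds a richer one.
+payload = {"prompt": "art by brandon woelfel, portrait photo", "steps": 20}
+response = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
+response.raise_for_status()
+
+# The API returns base64-encoded images; decode and save the first one.
+image_data = base64.b64decode(response.json()["images"][0])
+with open("automated_outputs/txt2img/result.png", "wb") as f:
+    f.write(image_data)
+```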
-In the main.py script, there is logic that calls the API's from automatic1111. Uncomment the lines you need, such as txt2img. You also need to create some folders on your computer (see code)
-Start automatic1111 with the option --api and run the main script.
-Edit the main script to set the amount of loops/images to generate, and uncomment the txt2img, img2img and upscale scripts to taste.
-It was build for personal development, so adjust directories and settings accordingly.
# roadmap
Some ideas I'd like to implement:
-- Better workflow management in workflow assist tab
-- Curated artist lists
-- SD 2.1 support (inversion negative prompt)
+- The In Control update
+ - Choose your own subject
+ - Split up subjects more, and pick more detailed subjects, such as food, female, building, etc
+ - Support for LoRA and textual inversions
+ - Trigger word support
+- ~~Bring upscale automation to front-end~~ Done
+- ~~Better workflow management in workflow assist tab~~ Done
+- ~~Curated artist lists~~ Done
- Ongoing: list refinements and new features in the prompt generation
If you have a good idea or suggestion, let me know, or build it yourself ;)
diff --git a/automated_outputs/extras/this_is_where_extras_files_lands.txt b/automated_outputs/extras/this_is_where_extras_files_lands.txt
new file mode 100644
index 0000000..5d56e77
--- /dev/null
+++ b/automated_outputs/extras/this_is_where_extras_files_lands.txt
@@ -0,0 +1,2 @@
+From One Button run and upscale
+Git does not track empty directories, so there needs to be a file in each one.
\ No newline at end of file
diff --git a/automated_outputs/img2img/this_is_where_img2img_files_lands.txt b/automated_outputs/img2img/this_is_where_img2img_files_lands.txt
new file mode 100644
index 0000000..5d56e77
--- /dev/null
+++ b/automated_outputs/img2img/this_is_where_img2img_files_lands.txt
@@ -0,0 +1,2 @@
+From One Button run and upscale
+Git does not track empty directories, so there needs to be a file in each one.
\ No newline at end of file
diff --git a/automated_outputs/prompts/this_is_where_prompt_files_lands.txt b/automated_outputs/prompts/this_is_where_prompt_files_lands.txt
new file mode 100644
index 0000000..5d56e77
--- /dev/null
+++ b/automated_outputs/prompts/this_is_where_prompt_files_lands.txt
@@ -0,0 +1,2 @@
+From One Button run and upscale
+Git does not track empty directories, so there needs to be a file in each one.
\ No newline at end of file
diff --git a/automated_outputs/txt2img/this_is_where_txt2img_files_lands.txt b/automated_outputs/txt2img/this_is_where_txt2img_files_lands.txt
new file mode 100644
index 0000000..5d56e77
--- /dev/null
+++ b/automated_outputs/txt2img/this_is_where_txt2img_files_lands.txt
@@ -0,0 +1,2 @@
+From One Button run and upscale
+Git does not track empty directories, so there needs to be a file in each one.
\ No newline at end of file
diff --git a/automated_outputs/upscale_me/this_is_where_you_should_place_images_to_upscale.txt b/automated_outputs/upscale_me/this_is_where_you_should_place_images_to_upscale.txt
new file mode 100644
index 0000000..d30df49
--- /dev/null
+++ b/automated_outputs/upscale_me/this_is_where_you_should_place_images_to_upscale.txt
@@ -0,0 +1,5 @@
+From One Button run and upscale
+Git does not track empty directories, so there needs to be a file in each one.
+
+In this directory, place all images that you want to upscale in a batch.
+Then start One Button Run and Upscale with the 'only upscale' option.
\ No newline at end of file
diff --git a/build_dynamic_prompt.py b/build_dynamic_prompt.py
index 1f78686..050cf3f 100644
--- a/build_dynamic_prompt.py
+++ b/build_dynamic_prompt.py
@@ -10,20 +10,9 @@ from random_functions import *
# insanity level controls randomness of propmt 0-10
# forcesubject van be used to force a certain type of subject
# Set artistmode to none, to exclude artists
-def build_dynamic_prompt(insanitylevel = 5, forcesubject = "all", artists = "all", imagetype = "all", onlyartists = False, antivalues = ""):
-
- completeprompt = ", "
-
- isphoto = 0
- othertype = 0
- humanspecial = 0
- animaladdedsomething = 0
- isweighted = 0
- amountofimagetypes = 0
- hybridorswap = ""
- artistmode = "normal"
- insideshot = 0
+def build_dynamic_prompt(insanitylevel = 5, forcesubject = "all", artists = "all", imagetype = "all", onlyartists = False, antivalues = "", prefixprompt = "", suffixprompt = "", promptcompounderlevel = "1", seperator = "comma"):
+
# first build up a complete anti list. Those values are removing during list building
# this uses the antivalues string AND the antilist.csv
emptylist = []
@@ -78,568 +67,610 @@ def build_dynamic_prompt(insanitylevel = 5, forcesubject = "all", artists = "all
timeperiodlist = csv_to_list("timeperiods",antilist)
vomitlist = csv_to_list("vomit",antilist)
+ completeprompt = ""
- # create artist list to use in the code, maybe based on category
- if(artists != "all" and artists != "none"):
- artistlist = artist_category_csv_to_list("artists_and_category",artists)
- else:
- artistlist = csv_to_list("artists")
-
-
-
- if(insanitylevel==0):
- insanitylevel = random.randint(1, 10) # 10 = add everything, 1 is add almost nothing
- insanitylevel3 = int((insanitylevel/3) + 1.20)
-
- print("Setting insanity level to " + str(insanitylevel))
-
- # main chooser: 0 object, 1 animal, 2 humanoid, 3 landscape, 4 event/concept
- mainchooserlist = ["object","animal","humanoid", "landscape", "concept"]
- mainchooser = mainchooserlist[random.randint(0, 4)]
- if(forcesubject != "" and forcesubject != "all"):
- mainchooser = forcesubject
- # 0 object, 1 animal, 2 animal as human, 3 ManWoman, 4 Job, 5 fictional, 6 non fictional, 7 humanoid, 8 landscape, 9 event
- if(mainchooser == "object"):
- subjectchooser = "object"
- if(mainchooser == "animal"):
- # sometimes interpret the animal as a human
- if(random.randint(0,5) < 5):
- subjectchooser = "animal"
+
+ promptstocompound = int(promptcompounderlevel)
+ compoundcounter = 0
+
+ while compoundcounter < promptstocompound:
+ isphoto = 0
+ othertype = 0
+ humanspecial = 0
+ animaladdedsomething = 0
+ isweighted = 0
+ amountofimagetypes = 0
+ hybridorswap = ""
+ artistmode = "normal"
+ insideshot = 0
+
+ completeprompt += prefixprompt
+
+ completeprompt += ", "
+
+
+ # create artist list to use in the code, maybe based on category
+ if(artists != "all" and artists != "none"):
+ artistlist = artist_category_csv_to_list("artists_and_category",artists)
else:
- subjectchooser = "animal as human"
- if(mainchooser == "humanoid"):
- subjectchooserlist = ["human", "job", "fictional", "non fictional", "humanoid"]
- subjectchooser = subjectchooserlist[random.randint(0, 4)]
- if(mainchooser == "landscape"):
- subjectchooser = "landscape"
- if(mainchooser == "concept"):
- subjectchooserlist = ["event", "concept"]
- subjectchooser = subjectchooserlist[random.randint(0, 1)]
+ artistlist = csv_to_list("artists")
- hybridlist = ["-animal-", "-object-", "-fictional-", "-nonfictional-", "-building-", "-vehicle-"]
- hybridhumanlist = ["-fictional-", "-nonfictional-"]
+
+ if(insanitylevel==0):
+ insanitylevel = random.randint(1, 10) # 10 = add everything, 1 is add almost nothing
+ insanitylevel3 = int((insanitylevel/3) + 1.20)
+
+ print("Setting insanity level to " + str(insanitylevel))
+
+ # main chooser: 0 object, 1 animal, 2 humanoid, 3 landscape, 4 event/concept
+ mainchooserlist = ["object","animal","humanoid", "landscape", "concept"]
+ mainchooser = mainchooserlist[random.randint(0, 4)]
+
+ if(forcesubject != "" and forcesubject != "all"):
+ mainchooser = forcesubject
+ # 0 object, 1 animal, 2 animal as human, 3 ManWoman, 4 Job, 5 fictional, 6 non fictional, 7 humanoid, 8 landscape, 9 event
+ if(mainchooser == "object"):
+ subjectchooser = "object"
+ if(mainchooser == "animal"):
+ # sometimes interpret the animal as a human
+ if(random.randint(0,5) < 5):
+ subjectchooser = "animal"
+ else:
+ subjectchooser = "animal as human"
+ if(mainchooser == "humanoid"):
+ subjectchooserlist = ["human", "job", "fictional", "non fictional", "humanoid"]
+ subjectchooser = subjectchooserlist[random.randint(0, 4)]
+ if(mainchooser == "landscape"):
+ subjectchooser = "landscape"
+ if(mainchooser == "concept"):
+ subjectchooserlist = ["event", "concept"]
+ subjectchooser = subjectchooserlist[random.randint(0, 1)]
+
+
+ hybridlist = ["-animal-", "-object-", "-fictional-", "-nonfictional-", "-building-", "-vehicle-"]
+ hybridhumanlist = ["-fictional-", "-nonfictional-"]
+
+
+ # start artist part
+
+ artistsplacement = "front"
+ if(uncommon_dist(insanitylevel) and onlyartists == False):
+ artistsplacement = "back"
+
+ if(artists != "none" and artistsplacement == "front"):
+ # take 1-3 artists, weighted to 1-2
+ step = random.randint(0, 1)
+ end = random.randint(1, insanitylevel3)
+
+
+
+
+ # determine artist mode:
+ # normal
+ # hybrid |
+ # switching A:B:X
+ # adding at step x a:X
+ # stopping at step x ::X
+ # enhancing from step x
+
+
+
+
+ modeselector = random.randint(0,10)
+ if modeselector < 5 and end - step >= 2:
+ artistmodeslist = ["hybrid", "stopping", "adding", "switching", "enhancing"]
+ artistmode = artistmodeslist[modeselector]
+ if (artistmode in ["hybrid","switching"] and end - step == 1):
+ artistmode = "normal"
+
+ if(onlyartists == True and artistmode == "enhancing"):
+ artistmode = "normal"
+ if(onlyartists == True and step == end):
+ step = step - 1
+
+ if artistmode in ["hybrid", "stopping", "adding","switching"]:
+ completeprompt += " ["
+
+ while step < end:
+ if(normal_dist(insanitylevel)):
+ isweighted = 1
+
+ if isweighted == 1:
+ completeprompt += " ("
+
+ #completeprompt = add_from_csv(completeprompt, "artists", 0, "art by ","")
+ completeprompt += "art by " + random.choice(artistlist)
+
+ if isweighted == 1:
+ completeprompt += ":" + str(1 + (random.randint(-3,3)/10)) + ")"
+
+ if artistmode in ["hybrid"] and not end - step == 1:
+ completeprompt += "|"
+ if artistmode in ["switching"] and not end - step == 1:
+ completeprompt += ":"
+
+            if artistmode not in ["hybrid", "switching"] and not end - step == 1:
+ completeprompt += ","
+
+ isweighted = 0
+
+ step = step + 1
+
+ if artistmode in ["stopping"]:
+ completeprompt += "::"
+ completeprompt += str(random.randint(1,19))
+
+ if artistmode in ["switching","adding"]:
+ completeprompt += ":" + str(random.randint(1,18))
+ if artistmode in ["hybrid", "stopping","adding", "switching"]:
+ completeprompt += "]"
+
+
+ if(onlyartists == True):
+ completeprompt = completeprompt.strip(", ")
+            print("only generated these artists: " + completeprompt)
+ return completeprompt
+
+
+ completeprompt += ", "
+
+
+
+ if artistmode in ["enhancing"]:
+ completeprompt += " ["
+
+ # start image type
+
+ if(imagetype != "all" and imagetype != "all - force multiple" and imagetype != "only other types"):
+ completeprompt += " " + imagetype + ", "
+ elif(imagetype == "all - force multiple" or unique_dist(insanitylevel)):
+ amountofimagetypes = random.randint(2,3)
+ elif(imagetype == "only other types"):
+ othertype = 1
+ completeprompt += random.choice(othertypelist) + " of a "
+
+ if(imagetype == "all" and normal_dist(insanitylevel) and amountofimagetypes <= 1):
+ amountofimagetypes = 1
+
+ for i in range(amountofimagetypes):
+ # one in 6 images is a complex/other type
+ if(random.randint(0,5) < 5):
+ completeprompt += ", " + random.choice(imagetypelist) + " "
+ else:
+ othertype = 1
+ completeprompt += ", " + random.choice(othertypelist) + " "
+
+ if(othertype==1):
+ completeprompt += " of a "
+
+
+ # start shot size
+
+ if(mainchooser in ["object", "animal", "humanoid", "concept"] and othertype == 0 and "portrait" not in completeprompt):
+ completeprompt += random.choice(shotsizelist) + " of a "
+ elif("portrait" in completeprompt):
+ completeprompt += " ,close up of a "
+ elif(mainchooser in ["landscape"]):
+ completeprompt += " landscape of a "
- # start artist part
-
- artistsplacement = "front"
- if(uncommon_dist(insanitylevel) and onlyartists == False):
- artistsplacement = "back"
-
- if(artists != "none" and artistsplacement == "front"):
- # take 1-3 artists, weighted to 1-2
- step = random.randint(0, 1)
- end = random.randint(1, insanitylevel3)
-
-
-
-
- # determine artist mode:
- # normal
- # hybrid |
- # switching A:B:X
- # adding at step x a:X
- # stopping at step x ::X
- # enhancing from step x
-
-
-
-
- modeselector = random.randint(0,10)
- if modeselector < 5 and end - step >= 2:
- artistmodeslist = ["hybrid", "stopping", "adding", "switching", "enhancing"]
- artistmode = artistmodeslist[modeselector]
- if (artistmode in ["hybrid","switching"] and end - step == 1):
- artistmode = "normal"
+ # start subject building
- if(onlyartists == True and artistmode == "enhancing"):
- artistmode = "normal"
- if(onlyartists == True and step == end):
- step = step - 1
+ # start with descriptive qualities
+
+ # Common to have 1 description, uncommon to have 2
+ if(common_dist(insanitylevel)):
+ completeprompt += random.choice(descriptorlist) + " "
- if artistmode in ["hybrid", "stopping", "adding","switching"]:
- completeprompt += " ["
+ if(uncommon_dist(insanitylevel)):
+ completeprompt += random.choice(descriptorlist) + " "
+
+ if(subjectchooser in ["animal as human,","human", "job", "fictional", "non fictional", "humanoid"] and normal_dist(insanitylevel)):
+ completeprompt += random.choice(bodytypelist) + " "
+
+ if(subjectchooser in ["object","animal as human,","human", "job", "fictional", "non fictional", "humanoid"] and normal_dist(insanitylevel)):
+ completeprompt += random.choice(culturelist) + " "
+
+ if(mainchooser == "object"):
+ objecttypelist = [objectlist, buildinglist, vehiclelist] # first select a random list, then randomly select from the corresponding list
- while step < end:
+ if rare_dist(insanitylevel):
+ hybridorswaplist = ["hybrid", "swap"]
+ hybridorswap = random.choice(hybridorswaplist)
+ completeprompt += "["
+
+ chosenobjecttype = random.choice(objecttypelist)
+
+ completeprompt += random.choice(chosenobjecttype) + " "
+
+ if(hybridorswap == "hybrid"):
+ if(uncommon_dist(insanitylevel)):
+ completeprompt += "|" + random.choice(hybridlist) + "]"
+ else:
+ completeprompt += "|"
+ completeprompt += random.choice(chosenobjecttype) + " "
+ completeprompt += "]"
+ if(hybridorswap == "swap"):
+ if(uncommon_dist(insanitylevel)):
+ completeprompt += ":" + random.choice(hybridlist) + ":" + str(random.randint(1,5)) + "]"
+ else:
+ completeprompt += ":"
+ completeprompt += random.choice(chosenobjecttype) + " "
+ completeprompt += ":" + str(random.randint(1,5)) + "]"
+ hybridorswap = ""
+
+ if(mainchooser == "animal"):
+ if rare_dist(insanitylevel):
+ hybridorswaplist = ["hybrid", "swap"]
+ hybridorswap = random.choice(hybridorswaplist)
+ completeprompt += "["
+
+ if unique_dist(insanitylevel):
+ animaladdlist = ["baby", "were", "giant", "monster"]
+ animaladdedsomething = 1
+ completeprompt += random.choice(animaladdlist) + " -animal-"
+ if(animaladdedsomething != 1):
+ completeprompt += random.choice(animallist) + " "
+
+ if(hybridorswap == "hybrid"):
+ if(uncommon_dist(insanitylevel)):
+ completeprompt += "|" + random.choice(hybridlist) + "]"
+ else:
+ completeprompt += "| -animal- ]"
+ if(hybridorswap == "swap"):
+ if(uncommon_dist(insanitylevel)):
+ completeprompt += ":" + random.choice(hybridlist) + ":" + str(random.randint(1,5)) + "]"
+ else:
+ completeprompt += ":-animal-:" + str(random.randint(1,5)) + "]"
+ hybridorswap = ""
+
+ if(subjectchooser == "human"):
+ completeprompt += random.choice(manwomanlist) + " "
+
+ if(subjectchooser == "job"):
+ completeprompt += random.choice(malefemalelist) + " "
+ completeprompt += random.choice(joblist) + " "
+
+ if(subjectchooser == "fictional"):
+ if rare_dist(insanitylevel):
+ hybridorswaplist = ["hybrid", "swap"]
+ hybridorswap = random.choice(hybridorswaplist)
+ completeprompt += "["
+
+ completeprompt += random.choice(fictionallist) + " "
+
+ if(hybridorswap == "hybrid"):
+ completeprompt += "|" + random.choice(hybridhumanlist) + "]"
+ if(hybridorswap == "swap"):
+ completeprompt += ":" + random.choice(hybridhumanlist) + ":" + str(random.randint(1,5)) + "]"
+ hybridorswap = ""
+
+ if(subjectchooser == "non fictional"):
+ if rare_dist(insanitylevel):
+ hybridorswaplist = ["hybrid", "swap"]
+ hybridorswap = random.choice(hybridorswaplist)
+ completeprompt += "["
+
+ completeprompt += random.choice(nonfictionallist) + " "
+
+ if(hybridorswap == "hybrid"):
+ completeprompt += "|" + random.choice(hybridhumanlist) + "]"
+ if(hybridorswap == "swap"):
+ completeprompt += ":" + random.choice(hybridhumanlist) + ":" + str(random.randint(1,5)) + "]"
+ hybridorswap = ""
+
+ if(subjectchooser == "humanoid"):
+ if rare_dist(insanitylevel):
+ hybridorswaplist = ["hybrid", "swap"]
+ hybridorswap = random.choice(hybridorswaplist)
+ completeprompt += "["
+
+ completeprompt += random.choice(humanoidlist) + " "
+
+ if(hybridorswap == "hybrid"):
+ completeprompt += "|" + random.choice(hybridhumanlist) + "]"
+ if(hybridorswap == "swap"):
+ completeprompt += ":" + random.choice(hybridhumanlist) + ":" + str(random.randint(1,5)) + "]"
+ hybridorswap = ""
+
+ if(subjectchooser == "landscape"):
+ if rare_dist(insanitylevel):
+ hybridorswaplist = ["hybrid", "swap"]
+ hybridorswap = random.choice(hybridorswaplist)
+ completeprompt += "["
+
+ completeprompt += random.choice(locationlist) + " "
+
+ if(hybridorswap == "hybrid"):
+ completeprompt += "|" + "-location-" + "]"
+ if(hybridorswap == "swap"):
+ completeprompt += ":" + "-location-" + ":" + str(random.randint(1,5)) + "]"
+ hybridorswap = ""
+
+ # shots from inside can create cool effects in landscapes
+ if(unique_dist(insanitylevel)):
+ insideshot = 1
+ completeprompt += " from inside of a "
+ addontolocation = [locationlist,buildinglist]
+ completeprompt += random.choice(random.choice(addontolocation)) + " "
+
+ if(normal_dist(insanitylevel) and insideshot == 0):
+ completeprompt += " and "
+ if(rare_dist(insanitylevel)):
+ completeprompt += random.choice(descriptorlist) + " "
+ if(rare_dist(insanitylevel)):
+ completeprompt += random.choice(culturelist) + " "
+
+ addontolocation = [locationlist,buildinglist, vehiclelist]
+ completeprompt += random.choice(random.choice(addontolocation)) + " "
+
+
+ if(subjectchooser == "event"):
+ completeprompt += " \"" + random.choice(eventlist) + "\" "
+
+ if(subjectchooser == "concept"):
+ completeprompt += " \" The " + random.choice(conceptprefixlist) + " of " + random.choice(conceptsuffixlist) + "\" "
+
+ # object with a face
+ if(mainchooser == "object" and unique_dist(insanitylevel)):
+ completeprompt += " with a face "
+
+ # object materials
+ if(mainchooser == "object" and uncommon_dist(insanitylevel)):
+ completeprompt += " made from -material- "
+
+ # object detailing
+ if(mainchooser == "object" and rare_dist(insanitylevel)):
+ completeprompt += " detailed with "
+ if(uncommon_dist(insanitylevel)):
+ completeprompt += random.choice(descriptorlist) + " "
+ completeprompt += "-material- patterns "
+
+
+ # riding an animal, holding an object or driving a vehicle, rare
+ if(subjectchooser in ["animal as human,","human","fictional", "non fictional", "humanoid"] and rare_dist(insanitylevel)):
+ humanspecial = 1
+ speciallist = [" riding a -animal- ", " holding a -object- ", " driving a -vehicle-", " visiting a -building-", " with a -animal-", " surrounded by -object-s"]
+ completeprompt += random.choice(speciallist)
+
+
+
+ completeprompt += ", "
+
+    # SD understands emojis. They can be used to manipulate facial expressions.
+ # emoji, legendary
+ if(subjectchooser in ["animal as human,","human","fictional", "non fictional", "humanoid"] and legendary_dist(insanitylevel)):
+ completeprompt += random.choice(emojilist) + ", "
+
+
+ # cosplaying
+ if(subjectchooser in ["animal as human", "non fictional", "humanoid"] and rare_dist(insanitylevel) and humanspecial != 1):
+ completeprompt += "cosplaying as " + random.choice(fictionallist) + ", "
+
+ # Job
+ # either go job or activity, not both
+
+ if(subjectchooser in ["animal as human","human","fictional", "non fictional", "humanoid"] and normal_dist(insanitylevel) and humanspecial != 1):
+ joboractivitylist = [joblist,humanactivitylist]
+ completeprompt += random.choice(random.choice(joboractivitylist)) + ", "
+
+
+ if(subjectchooser in ["animal as human","human","job", "fictional", "non fictional", "humanoid"] and legendary_dist(insanitylevel)):
+ skintypelist = ["-color-", "-material-"]
+ completeprompt += ", with " + random.choice(skintypelist) + " skin, "
+
+ # outfit builder
+ if(subjectchooser in ["animal as human","human","fictional", "non fictional", "humanoid"] and normal_dist(insanitylevel)):
+ completeprompt += ", wearing "
if(normal_dist(insanitylevel)):
- isweighted = 1
+ completeprompt += random.choice(descriptorlist) + " "
+ if(uncommon_dist(insanitylevel)):
+ completeprompt += random.choice(culturelist) + " "
+ if(normal_dist(insanitylevel)):
+ completeprompt += " -color- "
+ if(rare_dist(insanitylevel)):
+ completeprompt += " -material- "
- if isweighted == 1:
- completeprompt += " ("
+ if rare_dist(insanitylevel):
+ hybridorswaplist = ["hybrid", "swap"]
+ hybridorswap = random.choice(hybridorswaplist)
+ completeprompt += "["
+
+ completeprompt += random.choice(outfitlist) + " "
+
+ if(hybridorswap == "hybrid"):
+ completeprompt += "|" + "-outfit-" + "]"
+ if(hybridorswap == "swap"):
+ completeprompt += ":" + "-outfit-" + ":" + str(random.randint(1,5)) + "]"
+ hybridorswap = ""
+ completeprompt += ", "
- #completeprompt = add_from_csv(completeprompt, "artists", 0, "art by ","")
- completeprompt += "art by " + random.choice(artistlist)
-
- if isweighted == 1:
- completeprompt += ":" + str(1 + (random.randint(-3,3)/10)) + ")"
-
- if artistmode in ["hybrid"] and not end - step == 1:
- completeprompt += "|"
- if artistmode in ["switching"] and not end - step == 1:
- completeprompt += ":"
- if artistmode not in ["hybrid", "switching"]and not end - step == 1:
- completeprompt += ","
-
- isweighted = 0
-
- step = step + 1
-
- if artistmode in ["stopping"]:
- completeprompt += "::"
- completeprompt += str(random.randint(1,19))
+ if(subjectchooser in ["animal as human","human","fictional", "non fictional", "humanoid"] and uncommon_dist(insanitylevel) and humanspecial != 1):
+ completeprompt += random.choice(poselist) + ", "
- if artistmode in ["switching","adding"]:
- completeprompt += ":" + str(random.randint(1,18))
- if artistmode in ["hybrid", "stopping","adding", "switching"]:
- completeprompt += "]"
+ if(subjectchooser in ["human","job","fictional", "non fictional", "humanoid"] and normal_dist(insanitylevel)):
+ completeprompt += random.choice(haircolorlist) + " "
+ completeprompt += " hair styled as " + random.choice(hairstylelist) + ", "
+
+ if(subjectchooser in ["animal as human,","human","fictional", "non fictional", "humanoid"] and normal_dist(insanitylevel)):
+ completeprompt += random.choice(accessorielist) + ", "
+ # Sometimes get 2
+ if(uncommon_dist(insanitylevel)):
+ completeprompt += random.choice(accessorielist) + ", "
+ # or even three, these are fun and often minor :)
+ if(uncommon_dist(insanitylevel)):
+ completeprompt += random.choice(accessorielist) + ", "
+
+ if(legendary_dist(insanitylevel) and subjectchooser not in ["landscape", "concept"]):
+ insideshot = 1
+ completeprompt += ", from inside of a "
+ addontolocation = [locationlist,buildinglist]
+ completeprompt += random.choice(random.choice(addontolocation)) + ", "
+
+ if(subjectchooser not in ["landscape", "concept"] and humanspecial != 1 and insideshot == 0 and normal_dist(insanitylevel)):
+ backgroundtypelist = ["landscape", "buildingbackground", "insidebuilding"]
+ backgroundtype = random.choice(backgroundtypelist)
+ if(backgroundtype == "landscape"):
+ completeprompt += "background is " + random.choice(locationlist) + ", "
+ elif(backgroundtype == "buildingbackground"):
+ completeprompt += ", background is "
+ if(uncommon_dist(insanitylevel)):
+ completeprompt += random.choice(descriptorlist) + " "
+ completeprompt += random.choice(buildinglist) + ", "
+ elif(backgroundtype == "insidebuilding"):
+ completeprompt += ", inside a "
+ if(uncommon_dist(insanitylevel)):
+ completeprompt += random.choice(descriptorlist) + " "
+ completeprompt += random.choice(buildinglist) + ", "
- if(onlyartists == True):
- completeprompt = completeprompt.strip(", ")
- print("only generated these artists:" + completeprompt)
- return completeprompt
- completeprompt = completeprompt + ", "
+ # landscapes it is nice to always have a time period
+ if(normal_dist(insanitylevel) or subjectchooser=="landscape"):
+ completeprompt += random.choice(timeperiodlist) + ", "
+
+ if(mainchooser not in ["landscape"] and rare_dist(insanitylevel)):
+ completeprompt += random.choice(focuslist) + ", "
+
+
+
+ # others
+ if(normal_dist(insanitylevel)):
+ completeprompt += random.choice(directionlist) + ", "
+
+ if(normal_dist(insanitylevel)):
+ completeprompt += random.choice(moodlist) + ", "
+
+ if(normal_dist(insanitylevel)):
+ completeprompt += random.choice(artmovementlist) + ", "
+
+ if(normal_dist(insanitylevel)):
+ completeprompt += random.choice(lightinglist) + ", "
+
+    # determine whether we have a photo or not
+ if("hoto" in completeprompt):
+ isphoto = 1
+        if(common_dist(insanitylevel) and "film grain" not in antilist):
+ completeprompt += ", film grain, "
+
+ if(isphoto == 1):
+ completeprompt += random.choice(cameralist) + ", "
+
+ if(normal_dist(insanitylevel) or isphoto == 1):
+ completeprompt += random.choice(lenslist) + ", "
+
+ if(normal_dist(insanitylevel)):
+ completeprompt += random.choice(colorschemelist) + ", "
+
+    # vomit some cool/weird things into the prompt
+ if(uncommon_dist(insanitylevel)):
+ completeprompt += random.choice(vomitlist) + ", "
+ if(uncommon_dist(insanitylevel)):
+ completeprompt += random.choice(vomitlist) + ", "
+
+    # adding a great work of art, like Starry Night, has cool effects. But this should happen only very rarely.
+ if(novel_dist(insanitylevel)):
+ completeprompt += " in the style of " + random.choice(greatworklist) + ", "
+
+    # everyone loves adding quality tags. The better models don't need this, but let's add it anyway
+ if(uncommon_dist(insanitylevel)):
+ completeprompt += random.choice(qualitylist) + ", "
+ if(uncommon_dist(insanitylevel)):
+ completeprompt += random.choice(qualitylist) + ", "
if artistmode in ["enhancing"]:
- completeprompt += " ["
-
- # start image type
-
- if(imagetype != "all" and imagetype != "all - force multiple" and imagetype != "only other types"):
- completeprompt += " " + imagetype + ", "
- elif(imagetype == "all - force multiple" or unique_dist(insanitylevel)):
- amountofimagetypes = random.randint(2,3)
- elif(imagetype == "only other types"):
- othertype = 1
- completeprompt += random.choice(othertypelist) + " of a "
-
- if(imagetype == "all" and normal_dist(insanitylevel) and amountofimagetypes <= 1):
- amountofimagetypes = 1
-
- for i in range(amountofimagetypes):
- # one in 6 images is a complex/other type
- if(random.randint(0,5) < 5):
- completeprompt += random.choice(imagetypelist) + ", "
- else:
- othertype = 1
- completeprompt += random.choice(othertypelist) + " of a "
+ completeprompt += "::" + str(random.randint(1,17)) + "]"
- # start shot size
- if(mainchooser in ["object", "animal", "humanoid", "concept"] and othertype == 0 and "portrait" not in completeprompt):
- completeprompt += random.choice(shotsizelist) + " of a "
- elif("portrait" in completeprompt):
- completeprompt += " ,close up of a "
-
+ if(artists != "none" and artistsplacement == "back"):
+ completeprompt += ", "
+ # take 1-3 artists, weighted to 1-2
+ step = random.randint(0, 1)
+ end = random.randint(1, insanitylevel3)
- # start subject building
-
- # start with descriptive qualities
-
- # Common to have 1 description, uncommon to have 2
- if(common_dist(insanitylevel)):
- completeprompt += random.choice(descriptorlist) + " "
- if(uncommon_dist(insanitylevel)):
- completeprompt += random.choice(descriptorlist) + " "
- if(subjectchooser in ["animal as human,","human", "job", "fictional", "non fictional", "humanoid"] and normal_dist(insanitylevel)):
- completeprompt += random.choice(bodytypelist) + " "
- if(subjectchooser in ["object","animal as human,","human", "job", "fictional", "non fictional", "humanoid"] and normal_dist(insanitylevel)):
- completeprompt += random.choice(culturelist) + " "
+ # determine artist mode:
+ # normal
+ # hybrid |
+ # switching A:B:X
+ # adding at step x a:X
+ # stopping at step x ::X
- if(mainchooser == "object"):
- objecttypelist = [objectlist, buildinglist, vehiclelist] # first select a random list, then randomly select from the corresponding list
-
- if rare_dist(insanitylevel):
- hybridorswaplist = ["hybrid", "swap"]
- hybridorswap = random.choice(hybridorswaplist)
- completeprompt += "["
+
+ modeselector = random.randint(0,10)
+ if modeselector < 4 and end - step >= 2:
+ artistmodeslist = ["hybrid", "stopping", "adding", "switching"]
+ artistmode = artistmodeslist[modeselector]
+ if artistmode in ["hybrid","switching"] and end - step == 1:
+ artistmode = "normal"
+
+ if artistmode in ["hybrid", "stopping", "adding","switching"]:
+ completeprompt += " ["
+
+ while step < end:
+ if(normal_dist(insanitylevel)):
+ isweighted = 1
+
+ if isweighted == 1:
+ completeprompt += " ("
- chosenobjecttype = random.choice(objecttypelist)
+ #completeprompt = add_from_csv(completeprompt, "artists", 0, "art by ","")
+ completeprompt += "art by " + random.choice(artistlist)
+
+ if isweighted == 1:
+ completeprompt += ":" + str(1 + (random.randint(-3,3)/10)) + ")"
+
+ if artistmode in ["hybrid"] and not end - step == 1:
+ completeprompt += "|"
+ if artistmode in ["switching"] and not end - step == 1:
+ completeprompt += ":"
+
+ if artistmode not in ["hybrid", "switching"] and not end - step == 1:
+ completeprompt += ","
+
+ isweighted = 0
+
+ step = step + 1
- completeprompt += random.choice(chosenobjecttype) + " "
-
- if(hybridorswap == "hybrid"):
- if(uncommon_dist(insanitylevel)):
- completeprompt += "|" + random.choice(hybridlist) + "]"
- else:
- completeprompt += "|"
- completeprompt += random.choice(chosenobjecttype) + " "
+ if artistmode in ["stopping"]:
+ completeprompt += "::"
+ completeprompt += str(random.randint(1,19))
+
+ if artistmode in ["switching","adding"]:
+ completeprompt += ":" + str(random.randint(1,18))
+ if artistmode in ["hybrid", "stopping","adding", "switching"]:
completeprompt += "]"
- if(hybridorswap == "swap"):
- if(uncommon_dist(insanitylevel)):
- completeprompt += ":" + random.choice(hybridlist) + ":" + str(random.randint(1,5)) + "]"
- else:
- completeprompt += ":"
- completeprompt += random.choice(chosenobjecttype) + " "
- completeprompt += ":" + str(random.randint(1,5)) + "]"
- hybridorswap = ""
+ # end of the artist stuff
- if(mainchooser == "animal"):
- if rare_dist(insanitylevel):
- hybridorswaplist = ["hybrid", "swap"]
- hybridorswap = random.choice(hybridorswaplist)
- completeprompt += "["
-
- if unique_dist(insanitylevel):
- animaladdlist = ["baby", "were", "giant", "monster"]
- animaladdedsomething = 1
- completeprompt += random.choice(animaladdlist) + " -animal-"
- if(animaladdedsomething != 1):
- completeprompt += random.choice(animallist) + " "
-
- if(hybridorswap == "hybrid"):
- if(uncommon_dist(insanitylevel)):
- completeprompt += "|" + random.choice(hybridlist) + "]"
- else:
- completeprompt += "| -animal- ]"
- if(hybridorswap == "swap"):
- if(uncommon_dist(insanitylevel)):
- completeprompt += ":" + random.choice(hybridlist) + ":" + str(random.randint(1,5)) + "]"
- else:
- completeprompt += ":-animal-:" + str(random.randint(1,5)) + "]"
- hybridorswap = ""
-
- if(subjectchooser == "human"):
- completeprompt += random.choice(manwomanlist) + " "
-
- if(subjectchooser == "job"):
- completeprompt += random.choice(malefemalelist) + " "
- completeprompt += random.choice(joblist) + " "
-
- if(subjectchooser == "fictional"):
- if rare_dist(insanitylevel):
- hybridorswaplist = ["hybrid", "swap"]
- hybridorswap = random.choice(hybridorswaplist)
- completeprompt += "["
- completeprompt += random.choice(fictionallist) + " "
-
- if(hybridorswap == "hybrid"):
- completeprompt += "|" + random.choice(hybridhumanlist) + "]"
- if(hybridorswap == "swap"):
- completeprompt += ":" + random.choice(hybridhumanlist) + ":" + str(random.randint(1,5)) + "]"
- hybridorswap = ""
-
- if(subjectchooser == "non fictional"):
- if rare_dist(insanitylevel):
- hybridorswaplist = ["hybrid", "swap"]
- hybridorswap = random.choice(hybridorswaplist)
- completeprompt += "["
-
- completeprompt += random.choice(nonfictionallist) + " "
-
- if(hybridorswap == "hybrid"):
- completeprompt += "|" + random.choice(hybridhumanlist) + "]"
- if(hybridorswap == "swap"):
- completeprompt += ":" + random.choice(hybridhumanlist) + ":" + str(random.randint(1,5)) + "]"
- hybridorswap = ""
-
- if(subjectchooser == "humanoid"):
- if rare_dist(insanitylevel):
- hybridorswaplist = ["hybrid", "swap"]
- hybridorswap = random.choice(hybridorswaplist)
- completeprompt += "["
-
- completeprompt += random.choice(humanoidlist) + " "
-
- if(hybridorswap == "hybrid"):
- completeprompt += "|" + random.choice(hybridhumanlist) + "]"
- if(hybridorswap == "swap"):
- completeprompt += ":" + random.choice(hybridhumanlist) + ":" + str(random.randint(1,5)) + "]"
- hybridorswap = ""
-
- if(subjectchooser == "landscape"):
- if rare_dist(insanitylevel):
- hybridorswaplist = ["hybrid", "swap"]
- hybridorswap = random.choice(hybridorswaplist)
- completeprompt += "["
-
- completeprompt += random.choice(locationlist) + " "
-
- if(hybridorswap == "hybrid"):
- completeprompt += "|" + "-location-" + "]"
- if(hybridorswap == "swap"):
- completeprompt += ":" + "-location-" + ":" + str(random.randint(1,5)) + "]"
- hybridorswap = ""
-
- # shots from inside can create cool effects in landscapes
- if(unique_dist(insanitylevel)):
- insideshot = 1
- completeprompt += " from inside of a "
- addontolocation = [locationlist,buildinglist]
- completeprompt += random.choice(random.choice(addontolocation)) + " "
-
- if(normal_dist(insanitylevel) and insideshot == 0):
- completeprompt += " and "
- if(rare_dist(insanitylevel)):
- completeprompt += random.choice(descriptorlist) + " "
- if(rare_dist(insanitylevel)):
- completeprompt += random.choice(culturelist) + " "
-
- addontolocation = [locationlist,buildinglist, vehiclelist]
- completeprompt += random.choice(random.choice(addontolocation)) + " "
-
-
- if(subjectchooser == "event"):
- completeprompt += " \"" + random.choice(eventlist) + "\" "
-
- if(subjectchooser == "concept"):
- completeprompt += " \" The " + random.choice(conceptprefixlist) + " of " + random.choice(conceptsuffixlist) + "\" "
-
- # object with a face
- if(mainchooser == "object" and unique_dist(insanitylevel)):
- completeprompt += " with a face "
-
- # object materials
- if(mainchooser == "object" and uncommon_dist(insanitylevel)):
- completeprompt += " made from -material- "
-
- # object detailing
- if(mainchooser == "object" and rare_dist(insanitylevel)):
- completeprompt += " detailed with "
- if(uncommon_dist(insanitylevel)):
- completeprompt += random.choice(descriptorlist) + " "
- completeprompt += "-material- patterns "
-
-
- # riding an animal, holding an object or driving a vehicle, rare
- if(subjectchooser in ["animal as human,","human","fictional", "non fictional", "humanoid"] and rare_dist(insanitylevel)):
- humanspecial = 1
- speciallist = [" riding a -animal- ", " holding a -object- ", " driving a -vehicle-", " visiting a -building-", " with a -animal-", " surrounded by -object-s"]
- completeprompt += random.choice(speciallist)
-
-
-
- completeprompt += ", "
-
- # SD understands emoji's. Can be used to manipulate facial expressions.
- # emoji, legendary
- if(subjectchooser in ["animal as human,","human","fictional", "non fictional", "humanoid"] and legendary_dist(insanitylevel)):
- completeprompt += random.choice(emojilist) + ", "
-
-
- # cosplaying
- if(subjectchooser in ["animal as human", "non fictional", "humanoid"] and rare_dist(insanitylevel) and humanspecial != 1):
- completeprompt += "cosplaying as " + random.choice(fictionallist) + ", "
-
- # Job
- # either go job or activity, not both
-
- if(subjectchooser in ["animal as human","human","fictional", "non fictional", "humanoid"] and normal_dist(insanitylevel) and humanspecial != 1):
- joboractivitylist = [joblist,humanactivitylist]
- completeprompt += random.choice(random.choice(joboractivitylist)) + ", "
-
-
- if(subjectchooser in ["animal as human","human","job", "fictional", "non fictional", "humanoid"] and legendary_dist(insanitylevel)):
- skintypelist = ["-color-", "-material-"]
- completeprompt += ", with " + random.choice(skintypelist) + " skin, "
-
- # outfit builder
- if(subjectchooser in ["animal as human","human","fictional", "non fictional", "humanoid"] and normal_dist(insanitylevel)):
- completeprompt += ", wearing "
- if(normal_dist(insanitylevel)):
- completeprompt += random.choice(descriptorlist) + " "
- if(uncommon_dist(insanitylevel)):
- completeprompt += random.choice(culturelist) + " "
- if(normal_dist(insanitylevel)):
- completeprompt += " -color- "
- if(rare_dist(insanitylevel)):
- completeprompt += " -material- "
-
- if rare_dist(insanitylevel):
- hybridorswaplist = ["hybrid", "swap"]
- hybridorswap = random.choice(hybridorswaplist)
- completeprompt += "["
-
- completeprompt += random.choice(outfitlist) + " "
-
- if(hybridorswap == "hybrid"):
- completeprompt += "|" + "-outfit-" + "]"
- if(hybridorswap == "swap"):
- completeprompt += ":" + "-outfit-" + ":" + str(random.randint(1,5)) + "]"
- hybridorswap = ""
- completeprompt += ", "
-
-
- if(subjectchooser in ["animal as human","human","fictional", "non fictional", "humanoid"] and uncommon_dist(insanitylevel) and humanspecial != 1):
- completeprompt += random.choice(poselist) + ", "
-
- if(subjectchooser in ["human","job","fictional", "non fictional", "humanoid"] and normal_dist(insanitylevel)):
- completeprompt += random.choice(haircolorlist) + " "
- completeprompt += " hair styled as " + random.choice(hairstylelist) + ", "
-
- if(subjectchooser in ["animal as human,","human","fictional", "non fictional", "humanoid"] and normal_dist(insanitylevel)):
- completeprompt += random.choice(accessorielist) + ", "
- # Sometimes get 2
- if(uncommon_dist(insanitylevel)):
- completeprompt += random.choice(accessorielist) + ", "
- # or even three, these are fun and often minor :)
- if(uncommon_dist(insanitylevel)):
- completeprompt += random.choice(accessorielist) + ", "
-
- if(legendary_dist(insanitylevel) and subjectchooser not in ["landscape", "concept"]):
- insideshot = 1
- completeprompt += ", from inside of a "
- addontolocation = [locationlist,buildinglist]
- completeprompt += random.choice(random.choice(addontolocation)) + ", "
-
- if(subjectchooser not in ["landscape", "concept"] and humanspecial != 1 and insideshot == 0 and normal_dist(insanitylevel)):
- backgroundtypelist = ["landscape", "buildingbackground", "insidebuilding"]
- backgroundtype = random.choice(backgroundtypelist)
- if(backgroundtype == "landscape"):
- completeprompt += "background is " + random.choice(locationlist) + ", "
- elif(backgroundtype == "buildingbackground"):
- completeprompt += ", background is "
- if(uncommon_dist(insanitylevel)):
- completeprompt += random.choice(descriptorlist) + " "
- completeprompt += random.choice(buildinglist) + ", "
- elif(backgroundtype == "insidebuilding"):
- completeprompt += ", inside a "
- if(uncommon_dist(insanitylevel)):
- completeprompt += random.choice(descriptorlist) + " "
- completeprompt += random.choice(buildinglist) + ", "
-
-
-
-
-
- # landscapes it is nice to always have a time period
- if(normal_dist(insanitylevel) or subjectchooser=="landscape"):
- completeprompt += random.choice(timeperiodlist) + ", "
-
- if(mainchooser not in ["landscape"] and rare_dist(insanitylevel)):
- completeprompt += random.choice(focuslist) + ", "
-
-
-
- # others
- if(normal_dist(insanitylevel)):
- completeprompt += random.choice(directionlist) + ", "
-
- if(normal_dist(insanitylevel)):
- completeprompt += random.choice(moodlist) + ", "
-
- if(normal_dist(insanitylevel)):
- completeprompt += random.choice(artmovementlist) + ", "
-
- if(normal_dist(insanitylevel)):
- completeprompt += random.choice(lightinglist) + ", "
-
- # determine wether we have a photo or not
- if("hoto" in completeprompt):
- isphoto = 1
- if(common_dist(insanitylevel) and not "film grain" in antilist):
- completeprompt += ", film grain, "
-
- if(isphoto == 1):
- completeprompt += random.choice(cameralist) + ", "
-
- if(normal_dist(insanitylevel) or isphoto == 1):
- completeprompt += random.choice(lenslist) + ", "
-
- if(normal_dist(insanitylevel)):
- completeprompt += random.choice(colorschemelist) + ", "
-
- # vomit some cool/wierd things into the prompt
- if(uncommon_dist(insanitylevel)):
- completeprompt += random.choice(vomitlist) + ", "
- if(uncommon_dist(insanitylevel)):
- completeprompt += random.choice(vomitlist) + ", "
-
- #adding a great work of art, like starry night has cool effects. But this should happen only very rarely.
- if(novel_dist(insanitylevel)):
- completeprompt += " in the style of " + random.choice(greatworklist) + ", "
-
- # everyone loves the adding quality. The better models don't need this, but lets add it anyway
- if(uncommon_dist(insanitylevel)):
- completeprompt += random.choice(qualitylist) + ", "
- if(uncommon_dist(insanitylevel)):
- completeprompt += random.choice(qualitylist) + ", "
-
-
- if artistmode in ["enhancing"]:
- completeprompt += "::" + str(random.randint(1,17)) + "]"
-
-
-
- if(artists != "none" and artistsplacement == "back"):
completeprompt += ", "
- # take 1-3 artists, weighted to 1-2
- step = random.randint(0, 1)
- end = random.randint(1, insanitylevel3)
-
-
-
-
- # determine artist mode:
- # normal
- # hybrid |
- # switching A:B:X
- # adding at step x a:X
- # stopping at step x ::X
+ completeprompt += suffixprompt
+ # then increment the compound counter
+ compoundcounter += 1
- modeselector = random.randint(0,10)
- if modeselector < 4 and end - step >= 2:
- artistmodeslist = ["hybrid", "stopping", "adding", "switching"]
- artistmode = artistmodeslist[modeselector]
- if artistmode in ["hybrid","switching"] and end - step == 1:
- artistmode = "normal"
-
- if artistmode in ["hybrid", "stopping", "adding","switching"]:
- completeprompt += " ["
-
- while step < end:
- if(normal_dist(insanitylevel)):
- isweighted = 1
-
- if isweighted == 1:
- completeprompt += " ("
-
- #completeprompt = add_from_csv(completeprompt, "artists", 0, "art by ","")
- completeprompt += "art by " + random.choice(artistlist)
-
- if isweighted == 1:
- completeprompt += ":" + str(1 + (random.randint(-3,3)/10)) + ")"
-
- if artistmode in ["hybrid"] and not end - step == 1:
- completeprompt += "|"
- if artistmode in ["switching"] and not end - step == 1:
- completeprompt += ":"
-
- if artistmode not in ["hybrid", "switching"]and not end - step == 1:
- completeprompt += ","
-
- isweighted = 0
-
- step = step + 1
-
- if artistmode in ["stopping"]:
- completeprompt += "::"
- completeprompt += str(random.randint(1,19))
-
- if artistmode in ["switching","adding"]:
- completeprompt += ":" + str(random.randint(1,18))
- if artistmode in ["hybrid", "stopping","adding", "switching"]:
- completeprompt += "]"
-
-
- completeprompt += ", "
+ # Add the separator between compounded prompts
+ if(compoundcounter < promptstocompound):
+ if(seperator == "comma"):
+ completeprompt += " \n , "
+ else:
+ completeprompt += " \n " + seperator + " "
+
+
+ #end of the while loop, now clean up the prompt
# lol, this needs a rewrite :D
while "-color-" in completeprompt or "-material-" in completeprompt or "-animal-" in completeprompt or "-object-" in completeprompt or "-fictional-" in completeprompt or "-nonfictional-" in completeprompt or "-conceptsuffix-" in completeprompt or "-building-" in completeprompt or "-vehicle-" in completeprompt or "-outfit-" in completeprompt or "-location-" in completeprompt:
@@ -676,6 +707,10 @@ def build_dynamic_prompt(insanitylevel = 5, forcesubject = "all", artists = "all
while "-animal-" in completeprompt:
completeprompt = completeprompt.replace('-animal-', random.choice(animallist),1)
completeprompt = re.sub('\[ ', '[', completeprompt)
completeprompt = re.sub(' \]', ']', completeprompt)
completeprompt = re.sub(' \|', '|', completeprompt)
@@ -695,11 +730,13 @@ def build_dynamic_prompt(insanitylevel = 5, forcesubject = "all", artists = "all
completeprompt = re.sub(',\(', ', (', completeprompt)
completeprompt = re.sub('a The', 'The', completeprompt)
+ completeprompt = re.sub(', ,', ',', completeprompt)
- completeprompt = re.sub(' +', ' ', completeprompt[2:]) # remove first character, that is always a comma. Remove any excess spaces
completeprompt = completeprompt.strip(", ")
+
+
#just for me, some fun with posting fake dev messages (ala old sim games)
if(random.randint(1, 50)==1):
print("")
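The artist-mode branches above emit WebUI prompt-editing syntax. A minimal standalone sketch of the same clause shapes, for reference (the helper name is illustrative, not from the project):

```python
def build_artist_clause(artists, mode="normal", step=10):
    """Build a WebUI artist clause mirroring the modes above.

    hybrid    -> [art by A|art by B]    (alternates each sampling step)
    switching -> [art by A:art by B:10] (switch from A to B at step 10)
    adding    -> [art by A:10]          (A kicks in at step 10)
    stopping  -> [art by A::10]         (A is dropped at step 10)
    """
    parts = ["art by " + a for a in artists]
    if mode == "hybrid":
        return "[" + "|".join(parts) + "]"
    if mode == "switching":
        return "[" + ":".join(parts) + ":" + str(step) + "]"
    if mode == "adding":
        return "[" + parts[0] + ":" + str(step) + "]"
    if mode == "stopping":
        return "[" + parts[0] + "::" + str(step) + "]"
    # normal: plain comma-separated list
    return ", ".join(parts)
```

The real code additionally wraps individual artists in `(artist:1.2)`-style weights before joining.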
diff --git a/call_extras.py b/call_extras.py
index 18b55b9..7b7b489 100644
--- a/call_extras.py
+++ b/call_extras.py
@@ -1,4 +1,4 @@
-import json
+import os
import requests
import io
import base64
@@ -7,37 +7,40 @@ from PIL import Image, PngImagePlugin
-def call_extras(imagelocation):
+def call_extras(imagelocation,originalimage, originalpnginfo ="", apiurl="http://127.0.0.1:7860",filename="",extrasupscaler1 = "all",extrasupscaler2 ="all",extrasupscaler2visiblity="0.5",extrasupscaler2gfpgan="0",extrasupscaler2codeformer="0.15",extrasupscaler2codeformerweight="0.1",extrasresize="2"):
+
- imagewip = Image.open(imagelocation)
#rest of prompt things
- upscaling_resize = "2"
- upscaler_1 = "4x-UltraSharp"
- upscaler_2 = "R-ESRGAN 4x+"
+ upscaling_resize = extrasresize
+ upscaler_1 = extrasupscaler1
+ upscaler_2 = extrasupscaler2
with open(imagelocation, "rb") as image_file:
encoded_string = base64.b64encode(image_file.read())
encodedstring2 = encoded_string.decode('utf-8')
#params to stay the same
- url = "http://127.0.0.1:7860"
- outputextrasfolder = 'C:\\automated_output\\extras\\'
- outputextrasilename = str(uuid.uuid4())
+ url = apiurl
+ script_dir = os.path.dirname(os.path.abspath(__file__)) # Script directory
+ outputextrasfolder = os.path.join(script_dir, "./automated_outputs/extras/" )
+ if(filename==""):
+ filename = str(uuid.uuid4())
+ outputextrasilename = filename
outputextraspng = '.png'
outputextrasFull = '{}{}{}'.format(outputextrasfolder,outputextrasilename,outputextraspng)
payload = {
- "upscaling_resize": upscaling_resize,
+ "upscaling_resize": float(upscaling_resize),
"upscaler_1": upscaler_1,
"image": encodedstring2,
"resize_mode": 0,
"show_extras_results": "false",
- "gfpgan_visibility": 0,
- "codeformer_visibility": 0.15,
- "codeformer_weight": 0.1,
+ "gfpgan_visibility": extrasupscaler2gfpgan ,
+ "codeformer_visibility": extrasupscaler2visiblity,
+ "codeformer_weight": extrasupscaler2codeformerweight,
"upscaling_crop": "false",
"upscaler_2": upscaler_2,
- "extras_upscaler_2_visibility": 0.5,
+ "extras_upscaler_2_visibility": extrasupscaler2visiblity,
"upscale_first": "true",
"rb_enabled": "false", # the remove backgrounds plugin is automatically turned on, need to turn it off
"models": "None" # the remove backgrounds plugin is automatically turned on, need to turn it off
@@ -47,6 +50,30 @@ def call_extras(imagelocation):
response = requests.post(url=f'{url}/sdapi/v1/extra-single-image', json=payload)
image = Image.open(io.BytesIO(base64.b64decode(response.json().get("image"))))
- image.save(outputextrasFull)
+
+ # when only upscaling, we can't get the png info unless IMG2IMG or TXT2IMG ran first
+ # minor issue; passing the original png info along solves it for now
+
+ image.save(outputextrasFull, pnginfo=originalpnginfo)
return outputextrasFull
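The extras call boils down to one JSON body posted to `/sdapi/v1/extra-single-image`. A hedged sketch of just the payload construction, covering the fields the diff converts or parameterizes (helper name is illustrative; field names follow the payload above):

```python
import base64

def build_extras_payload(image_bytes, resize="2",
                         upscaler_1="R-ESRGAN 4x+", upscaler_2="None",
                         upscaler_2_visibility=0.5):
    """Return a JSON body for /sdapi/v1/extra-single-image (subset of fields)."""
    encoded = base64.b64encode(image_bytes).decode("utf-8")
    return {
        # the API expects a number here, hence the float() fix in the diff
        "upscaling_resize": float(resize),
        "upscaler_1": upscaler_1,
        "upscaler_2": upscaler_2,
        "extras_upscaler_2_visibility": upscaler_2_visibility,
        "image": encoded,
        "resize_mode": 0,
        "upscaling_crop": False,
        "upscale_first": True,
    }
```

The dict can then be passed as `json=` to `requests.post`, exactly as in `call_extras`.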
diff --git a/call_img2img.py b/call_img2img.py
index a77c0a0..ffa2269 100644
--- a/call_img2img.py
+++ b/call_img2img.py
@@ -1,47 +1,116 @@
-import json
+import os
import requests
import io
import base64
import uuid
from PIL import Image, PngImagePlugin
+from modules import shared
+from model_lists import *
+def call_img2img(imagelocation,originalimage, originalpnginfo ="", apiurl="http://127.0.0.1:7860",filename="", prompt = "", negativeprompt = "", img2imgsamplingsteps = "20", img2imgcfg = "7", img2imgsamplingmethod = "DPM++ SDE Karras", img2imgupscaler = "R-ESRGAN 4x+", img2imgmodel = "currently selected model", denoising_strength = "0.3", scale = "2", padding = "64",upscalescript="SD upscale",usdutilewidth = "512", usdutileheight = "0", usdumaskblur = "8", usduredraw ="Linear", usduSeamsfix = "None", usdusdenoise = "0.35", usduswidth = "64", usduspadding ="32", usdusmaskblur = "8",controlnetenabled=False, controlnetmodel="",controlnetblockymode=False):
-def call_img2img(imagelocation, denoising_strength = 0.25, scale = 1.5, padding = 64):
-
- #params to stay the same
- url = "http://127.0.0.1:7860"
- outputimg2imgfolder = 'C:\\automated_output\\img2img\\'
- outputimg2imgfilename = str(uuid.uuid4())
+ negativepromptfound = 0
+ #params to stay the same
+ url = apiurl
+ script_dir = os.path.dirname(os.path.abspath(__file__)) # Script directory
+ outputimg2imgfolder = os.path.join(script_dir, "./automated_outputs/img2img/" )
+ if(filename==""):
+ filename = str(uuid.uuid4())
outputimg2imgpng = '.png'
- outputimg2imgFull = '{}{}{}'.format(outputimg2imgfolder,outputimg2imgfilename,outputimg2imgpng)
+ outputimg2imgFull = '{}{}{}'.format(outputimg2imgfolder,filename,outputimg2imgpng)
encodedstringlist = []
+ # need to convert the values to the correct index number for Ultimate SD Upscaler
+ redrawmodelist =["Linear","Chess","None"]
+ seamsfixmodelist = ["None","Band pass","Half tile offset pass","Half tile offset pass + intersections"]
+ usduredrawint = redrawmodelist.index(usduredraw)
+ seamsfixmodeint = seamsfixmodelist.index(usduSeamsfix)
+
+
+
+
+
#rest of prompt things
- sampler_index = "DPM2 Karras"
- steps = "20"
- prompt = "hello world"
- cfg_scale = "7"
- width = "512"
- height = "512"
+ sampler_index = img2imgsamplingmethod
+ steps = img2imgsamplingsteps
+ cfg_scale = img2imgcfg
+
+
+
+
+
+
with open(imagelocation, "rb") as image_file:
encoded_string = base64.b64encode(image_file.read())
encodedstringlist.append(encoded_string.decode('utf-8'))
- encodedstring2 = encoded_string.decode('utf-8')
+
+ # If we don't have a prompt, get it from the original image file
+ # This is used when only_upscale is activated
+ if(prompt==""):
+ with open(originalimage, "rb") as originalimage_file:
+ originalencoded_string = base64.b64encode(originalimage_file.read())
+ encodedstring2 = originalencoded_string.decode('utf-8')
- # prompt from picture?
- png_payload = {
- "image": encodedstring2
- }
- response3 = requests.post(url=f'{url}/sdapi/v1/png-info', json=png_payload)
+ # get prompt from picture
+ png_payload = {
+ "image": encodedstring2
+ }
+ response3 = requests.post(url=f'{url}/sdapi/v1/png-info', json=png_payload)
- pnginfo = str(response3.json().get("info"))
+ pnginfo = str(response3.json().get("info"))
- prompt = pnginfo[:pnginfo.rfind("Steps")]
+
+ prompt = pnginfo[:pnginfo.rfind("Steps")]
+
+
+ if(prompt.rfind("Negative prompt") != -1):
+ prompt = prompt[:prompt.rfind("Negative prompt")]
+
+ negativepromptfound = 1
+
+ if(negativepromptfound == 1):
+ negativeprompt = pnginfo[:pnginfo.rfind("Steps")]
+ negativeprompt = negativeprompt.replace(prompt,"")
+ negativeprompt = negativeprompt.replace("Negative prompt:","").strip() # strip the label itself as well
+
+
+ # set the automatic upscale
+
+ checkprompt = prompt.lower()
+ if(img2imgupscaler != "automatic"):
+ upscaler = img2imgupscaler
+ else:
+ upscalerlist = get_upscalers_for_img2img()
+ # on automatic, make some choices about what upscaler to use
+ # photos, prefer 4x ultrasharp
+ # anime, cartoon or drawing, go for R-ESRGAN 4x+ Anime6B
+ # else, R-ESRGAN 4x+
+ if("hoto" in checkprompt and "4x-UltraSharp" in upscalerlist):
+ upscaler = "4x-UltraSharp"
+ elif("anime" in checkprompt or "cartoon" in checkprompt or "draw" in checkprompt or "vector" in checkprompt or "cel shad" in checkprompt or "visual novel" in checkprompt):
+ upscaler = "R-ESRGAN 4x+ Anime6B"
+ else:
+ upscaler = "R-ESRGAN 4x+"
+
+ if(upscaler== "4x-UltraSharp"):
+ denoising_strength = "0.35"
+ if(upscaler== "R-ESRGAN 4x+ Anime6B"):
+ denoising_strength = "0.6" # 0.6 is fine for the anime upscaler
+ if(upscaler== "R-ESRGAN 4x+"):
+ denoising_strength = "0.5" # default 0.6 is a lot and changes a lot of details
+
+
+ # weird blocky mode appears when the threshold is set far too high and the denoising strength is strong
+ if(controlnetblockymode==True):
+ treshold = int(padding)
+ if(float(denoising_strength) < 0.65):
+ denoising_strength = "0.65"
+ else:
+ treshold = 1
payload = {
"resize_mode": 0,
@@ -50,33 +119,87 @@ def call_img2img(imagelocation, denoising_strength = 0.25, scale = 1.5, padding
"batch_size": "1",
"n_iter": "1",
"prompt": prompt,
+ "negative_prompt": negativeprompt,
"steps": steps,
"cfg_scale": cfg_scale,
- "width": width,
- "height": height,
+ #"width": width,
+ #"height": height,
"include_init_images": "true",
"init_images": encodedstringlist,
- "script_name": "SD upscale",
- "script_args": ["",padding,"4x-UltraSharp",scale]
}
-
+ if(img2imgmodel != "currently selected model"):
+ payload.update({"sd_model": img2imgmodel})
#
+
+ # https://github.com/Mikubill/sd-webui-controlnet/wiki/API
+ #
+
+
+ if(controlnetenabled==True and controlnetmodel!=""):
+ payload.update({"alwayson_scripts": {
+ "controlnet": {
+ "args": [
+ {
+ "module": "tile_resample",
+ "model": controlnetmodel, # control_v11f1e_sd15_tile [a371b31b]
+ #"input_image": encodedstringlist,
+ "control_mode": 2, #"ControlNet is more important" : the controlnet model has more impact than the prompt
+ #"resize_mode": 0
+ "threshold_a": treshold
+ }
+ ]
+ }
+ }
+ })
+ if(upscalescript=="SD upscale"):
+ payload.update({"script_name": upscalescript})
+ payload.update({"script_args": ["",int(padding),upscaler,float(scale)]})
+
+ if(upscalescript=="Ultimate SD upscale"):
+ upscaler_index = [x.name.lower() for x in shared.sd_upscalers].index(upscaler.lower())
+ payload.update({"script_name": upscalescript})
+ payload.update({"script_args": ["",int(usdutilewidth),int(usdutileheight),int(usdumaskblur),int(padding), int(usduswidth), usdusdenoise,int(usduspadding),
+ upscaler_index,True,usduredrawint,False,int(usdusmaskblur),
+ seamsfixmodeint,2,"","",float(scale)]})
+
+ # Ultimate SD Upscale params:
+ #_, tile_width, tile_height, mask_blur, padding, seams_fix_width, seams_fix_denoise, seams_fix_padding,
+ # upscaler_index, save_upscaled_image, redraw_mode, save_seams_fix_image, seams_fix_mask_blur,
+ # seams_fix_type, target_size_type, custom_width, custom_height, custom_scale):
+
+ # target_size_type = 2
+ # custom_scale = 2
+
+
+
response = requests.post(url=f'{url}/sdapi/v1/img2img', json=payload)
+
r = response.json()
+
for i in r['images']:
image = Image.open(io.BytesIO(base64.b64decode(i.split(",",1)[0])))
- png_payload = {
- "image": "data:image/png;base64," + i
- }
- response2 = requests.post(url=f'{url}/sdapi/v1/png-info', json=png_payload)
+ if(originalpnginfo==""):
+ png_payload = {
+ "image": "data:image/png;base64," + i
+ }
- pnginfo = PngImagePlugin.PngInfo()
- pnginfo.add_text("parameters", response2.json().get("info"))
- image.save(outputimg2imgFull, pnginfo=pnginfo)
+ response2 = requests.post(url=f'{url}/sdapi/v1/png-info', json=png_payload)
- return outputimg2imgFull
+
+
+ pnginfo = PngImagePlugin.PngInfo()
+ pnginfo.add_text("parameters", response2.json().get("info"))
+
+ originalpnginfo = pnginfo
+ image.save(outputimg2imgFull, pnginfo=originalpnginfo)
+
+ return [outputimg2imgFull,originalpnginfo]
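The prompt recovery above parses WebUI's `parameters` text, which has the shape `prompt\nNegative prompt: ...\nSteps: 20, Sampler: ...`. The same parsing, as a small pure sketch (the helper name is illustrative):

```python
def split_parameters(info):
    """Split a WebUI 'parameters' string into (prompt, negative_prompt)."""
    # everything before "Steps" is prompt material, as in the rfind above
    head = info[:info.rfind("Steps")] if "Steps" in info else info
    neg = ""
    marker = head.rfind("Negative prompt:")
    if marker != -1:
        neg = head[marker + len("Negative prompt:"):].strip()
        head = head[:marker]
    return head.strip(), neg
```

A pure helper like this is also easier to unit-test than parsing inline inside the API call.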
diff --git a/call_txt2img.py b/call_txt2img.py
index 295fbfc..b120322 100644
--- a/call_txt2img.py
+++ b/call_txt2img.py
@@ -3,19 +3,25 @@ import requests
import io
import base64
import uuid
+import sys, os
from PIL import Image, PngImagePlugin
+from model_lists import *
-def call_txt2img(passingprompt,ratio,upscale,debugmode):
+def call_txt2img(passingprompt,ratio,upscale,debugmode,filename="",model = "currently selected model",samplingsteps = "40",cfg= "7",hiressteps ="0",denoisestrength="0.6",samplingmethod="DPM++ SDE Karras", upscaler="R-ESRGAN 4x+",hiresscale="2",apiurl="http://127.0.0.1:7860", qualitygate=False,quality="7.6",runs="5",negativeprompt=""):
#set the prompt!
prompt = passingprompt
+ checkprompt = passingprompt.lower()
+
+ #set the URL for the API
+ url = apiurl
#rest of prompt things
- sampler_index = "DPM2 Karras"
- steps = "20"
+ sampler_index = samplingmethod
+ steps = samplingsteps
if(debugmode==1):
steps="10"
- cfg_scale = "7"
+ cfg_scale = cfg
#size
if(ratio=='wide'):
@@ -34,23 +40,65 @@ def call_txt2img(passingprompt,ratio,upscale,debugmode):
enable_hr = upscale
if(debugmode==1):
enable_hr="False"
- denoising_strength = "0.35"
- hr_scale = "2"
- hr_upscaler = "4x-UltraSharp"
- hr_second_pass_steps = str(round(int(steps)/2))
-
+ #defaults
+ hr_scale = hiresscale
+ denoising_strength = denoisestrength
+
+ hr_second_pass_steps = hiressteps
+ #hr_upscaler = "LDSR" # We have the time, why not use LDSR
+
+ if(upscaler != "automatic"):
+ hr_upscaler = upscaler
+ else:
+ upscalerlist = get_upscalers()
+ # on automatic, make some choices about what upscaler to use
+ # photos, prefer 4x ultrasharp
+ # anime, cartoon or drawing, go for R-ESRGAN 4x+ Anime6B
+ # else, R-ESRGAN 4x+
+ if("hoto" in checkprompt and "4x-UltraSharp" in upscalerlist):
+ hr_upscaler = "4x-UltraSharp"
+ elif("anime" in checkprompt or "cartoon" in checkprompt or "draw" in checkprompt or "vector" in checkprompt or "cel shad" in checkprompt or "visual novel" in checkprompt):
+ hr_upscaler = "R-ESRGAN 4x+ Anime6B"
+ else:
+ hr_upscaler = "R-ESRGAN 4x+"
+
+ if(int(hiressteps) == 0):
+ hiressteps = samplingsteps
+ hr_second_pass_steps = int(int(hiressteps)/2)
+
+
+ if(hr_upscaler== "4x-UltraSharp"):
+ denoising_strength = "0.35"
+ if(hr_upscaler== "R-ESRGAN 4x+ Anime6B"):
+ denoising_strength = "0.6" # 0.6 is fine for the anime upscaler
+ if(hr_upscaler== "R-ESRGAN 4x+"):
+ denoising_strength = "0.5" # default 0.6 is a lot and changes a lot of details
+
#params to stay the same
- url = "http://127.0.0.1:7860"
- outputTXT2IMGfolder = 'C:\\automated_output\\txt2img\\'
- outputTXT2IMGfilename = str(uuid.uuid4())
- outputTXT2IMGpng = '.png'
- outputTXT2IMGFull = '{}{}{}'.format(outputTXT2IMGfolder,outputTXT2IMGfilename,outputTXT2IMGpng)
- outputTXT2IMGtxtfolder = 'C:\\automated_output\\prompts\\'
- outputTXT2IMGtxt = '.txt'
- outputTXT2IMGtxtFull = '{}{}{}'.format(outputTXT2IMGtxtfolder,outputTXT2IMGfilename,outputTXT2IMGtxt)
+ script_dir = os.path.dirname(os.path.abspath(__file__)) # Script directory
+ outputTXT2IMGfolder = os.path.join(script_dir, "./automated_outputs/txt2img/")
+ if(filename==""):
+ filename = str(uuid.uuid4())
+
+ outputTXT2IMGpng = '.png'
+ #outputTXT2IMGFull = '{}{}{}'.format(outputTXT2IMGfolder,filename,outputTXT2IMGpng)
+ outputTXT2IMGtxtfolder = os.path.join(script_dir, "./automated_outputs/prompts/")
+ outputTXT2IMGtxt = '.txt'
+ outputTXT2IMGtxtFull = '{}{}{}'.format(outputTXT2IMGtxtfolder,filename,outputTXT2IMGtxt)
+
+ # params for quality gate
+ isGoodNumber = float(quality)
+ foundgood = False
+ MaxRuns = int(runs)
+ Runs = 0
+ scorelist = []
+ scoredeclist = []
+ imagelist = []
+ pnginfolist = []
#call TXT2IMG
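The automatic-upscaler heuristic, which appears in both `call_img2img` and `call_txt2img`, can be factored into one pure helper. A sketch under the same keyword rules (assumes the same upscaler names are installed; the function name is illustrative):

```python
def choose_upscaler(prompt, installed):
    """Pick an upscaler from keywords in the prompt (mirrors the rules above)."""
    p = prompt.lower()
    # "hoto" matches both "photo" and "Photo", as in the original check
    if "hoto" in p and "4x-UltraSharp" in installed:
        return "4x-UltraSharp"
    if any(k in p for k in ("anime", "cartoon", "draw", "vector",
                            "cel shad", "visual novel")):
        return "R-ESRGAN 4x+ Anime6B"
    return "R-ESRGAN 4x+"
```

Sharing one helper would also keep the two call sites from drifting apart, as the mismatched `Anime6B+` checks show.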
@@ -67,28 +115,97 @@ def call_txt2img(passingprompt,ratio,upscale,debugmode):
"hr_scale": hr_scale,
"hr_upscaler": hr_upscaler,
"hr_second_pass_steps": hr_second_pass_steps
-
}
- response = requests.post(url=f'{url}/sdapi/v1/txt2img', json=payload)
-
- r = response.json()
-
- for i in r['images']:
- image = Image.open(io.BytesIO(base64.b64decode(i.split(",",1)[0])))
-
- png_payload = {
- "image": "data:image/png;base64," + i
- }
- response2 = requests.post(url=f'{url}/sdapi/v1/png-info', json=png_payload)
-
- pnginfo = PngImagePlugin.PngInfo()
- pnginfo.add_text("parameters", response2.json().get("info"))
- image.save(outputTXT2IMGFull, pnginfo=pnginfo)
+ if(model != "currently selected model"):
+ payload.update({"sd_model": model})
+ if(negativeprompt != ""):
+ payload.update({"negative_prompt": negativeprompt})
+
+ while Runs < MaxRuns:
+
+ # make the filename unique for each run _0, _1, etc.
+ addrun = "_" + str(Runs)
+ filenamefull = filename + addrun
+ outputTXT2IMGFull = '{}{}{}'.format(outputTXT2IMGfolder,filenamefull,outputTXT2IMGpng)
+
+
+ response = requests.post(url=f'{url}/sdapi/v1/txt2img', json=payload)
+
+ r = response.json()
+
+ for i in r['images']:
+ image = Image.open(io.BytesIO(base64.b64decode(i.split(",",1)[0])))
+
+ png_payload = {
+ "image": "data:image/png;base64," + i
+ }
+ response2 = requests.post(url=f'{url}/sdapi/v1/png-info', json=png_payload)
+
+ pnginfo = PngImagePlugin.PngInfo()
+ pnginfo.add_text("parameters", response2.json().get("info"))
+ image.save(outputTXT2IMGFull, pnginfo=pnginfo)
+
+ if(qualitygate==True):
+ # check if the file exists in the parent directory
+ imagescorer_path = os.path.abspath(os.path.join(os.path.dirname(__file__), '..', 'stable-diffusion-webui-aesthetic-image-scorer', 'scripts'))
+ #print(imagescorer_path)
+ if imagescorer_path not in sys.path:
+ sys.path.append(imagescorer_path)
+ try:
+
+ import image_scorer
+ print("Found aesthetic-image-scorer! Using this to measure the results...")
+ score = image_scorer.get_score(image)
+ scoredeclist.append(score)
+ score = round(score,1)
+
+ scorelist.append(score)
+ imagelist.append(outputTXT2IMGFull)
+ pnginfolist.append(pnginfo)
+
+ print("This image has scored: "+ str(score) + " out of " + str(isGoodNumber))
+ if(score >= isGoodNumber or debugmode == 1):
+ foundgood = True
+ print("Yay, it's good! Keeping this result.")
+ else:
+ runstodo = MaxRuns - Runs - 1
+ print("Not a good result. Retrying up to " + str(runstodo) + " more times or until the image is good enough.")
+
+ except ImportError:
+ foundgood = True # just continue :)
+
+ # handle the case where the module doesn't exist
+ print("Could not find the stable-diffusion-webui-aesthetic-image-scorer extension.")
+ print("Install this extension via the WebUI to use Quality Gate")
+ pass
+ else:
+ foundgood = True # If there is no quality gate, then everything is good. So we escape this loop
+
+ Runs += 1
+ if(foundgood == True):
+ break #Break the loop if we found something good. Or if we set it to good :)
+
+ if(len(imagelist) > 0):
+
+ if(foundgood == True):
+ print("Removing any other images generated this run (if any).")
+ else:
+ print("Stopped trying, keeping the best image we had so far.")
+
+ # Get the index of the first occurrence of the maximum value in the list
+ indexofimagetokeep = scoredeclist.index(max(scoredeclist))
+ outputTXT2IMGFull = imagelist[indexofimagetokeep] #store the image to keep in here, so we can pass it along
+ pnginfo = pnginfolist[indexofimagetokeep]
+ imagelist.pop(indexofimagetokeep)
+ #remove all other images
+ for imagelocation in imagelist:
+ os.remove(imagelocation)
+
with open(outputTXT2IMGtxtFull,'w',encoding="utf8") as txt:
json_object = json.dumps(payload, indent = 4)
txt.write(json_object)
- return outputTXT2IMGFull
\ No newline at end of file
+ return [outputTXT2IMGFull,pnginfo]
\ No newline at end of file
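The quality-gate loop above generates images until one scores at or above the threshold, and otherwise falls back to the best candidate seen so far. A minimal standalone sketch of that pattern, where `generate` and `score_fn` are hypothetical stand-ins for the txt2img call and the extension's `image_scorer.get_score`:

```python
def run_with_quality_gate(generate, score_fn, threshold, max_runs):
    """Generate up to max_runs candidates; return the first one scoring at or
    above threshold, or the best-scoring candidate if none passes."""
    best_score = float("-inf")
    best_candidate = None
    for run in range(max_runs):
        candidate = generate(run)
        score = score_fn(candidate)
        if score > best_score:
            # track the best result so far, mirroring scoredeclist/imagelist above
            best_score, best_candidate = score, candidate
        if score >= threshold:
            break  # good enough, stop retrying
    return best_candidate, best_score
```

In the real code the non-winning images are also deleted from disk afterwards; the sketch only shows the retry-and-keep-best control flow.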
diff --git a/main.py b/main.py
index 443a002..5f66457 100644
--- a/main.py
+++ b/main.py
@@ -1,37 +1,205 @@
import sys, os
+import random
+import uuid
+import re
+from datetime import datetime
sys.path.append(os.path.abspath(".."))
from call_txt2img import *
from call_img2img import *
from build_dynamic_prompt import *
from call_extras import *
+from model_lists import *
-# needs following directories to exist:
-# C:\automated_output\
-# C:\automated_output\extras\
-# C:\automated_output\img2img\
-# C:\automated_output\txt2img\
-# C:\automated_output\Prompts\
+def generateimages(amount = 1, size = "all",model = "currently selected model",samplingsteps = "40",cfg= "7",hiresfix = True,hiressteps ="0",denoisestrength="0.6",samplingmethod="DPM++ SDE Karras", upscaler="R-ESRGAN 4x+", hiresscale="2",apiurl="http://127.0.0.1:7860",qualitygate=False,quality="7.6",runs="5",insanitylevel="5",subject="all", artist="all", imagetype="all",silentmode=False, workprompt="", antistring="",prefixprompt="", suffixprompt="", negativeprompt="",promptcompounderlevel = "1", seperator="comma", img2imgbatch = "1", img2imgsamplingsteps = "20", img2imgcfg = "7", img2imgsamplingmethod = "DPM++ SDE Karras", img2imgupscaler = "R-ESRGAN 4x+", img2imgmodel = "currently selected model", img2imgactivate = False, img2imgscale = "2", img2imgpadding = "64",img2imgdenoisestrength="0.3",ultimatesdupscale=False,usdutilewidth = "512", usdutileheight = "0", usdumaskblur = "8", usduredraw ="Linear", usduSeamsfix = "None", usdusdenoise = "0.35", usduswidth = "64", usduspadding ="32", usdusmaskblur = "8",controlnetenabled=False, controlnetmodel="",img2imgdenoisestrengthmod="-0.05",enableextraupscale = False,controlnetblockymode = False,extrasupscaler1 = "all",extrasupscaler2 ="all",extrasupscaler2visiblity="0.5",extrasupscaler2gfpgan="0",extrasupscaler2codeformer="0.15",extrasupscaler2codeformerweight="0.1",extrasresize="2",onlyupscale="false"):
+ loops = int(amount) # amount of images to generate
+ steps = 0
+ upscalefilelist=[]
+ originalimage = ""
+ originalpnginfo =""
+ randomprompt = ""
+ filename=""
+ originalsize=size
-loops = 20 # amount of images to generate
-steps = 0
+ if(onlyupscale==True):
+ script_dir = os.path.dirname(os.path.abspath(__file__)) # Script directory
+ inputupscalemefolder = os.path.join(script_dir, "./automated_outputs/upscale_me/" )
+
+ for upscalefilename in os.listdir(inputupscalemefolder):
+ f = os.path.join(inputupscalemefolder, upscalefilename)
+ # checking if it is a file
+ if os.path.isfile(f):
+ if(f[-3:]!="txt"):
+ upscalefilelist.append(f)
+
+ loops = len(upscalefilelist)
+
+ if(loops==0):
+ print('No files to upscale found! Please place images in the upscale_me folder')
+ else:
+ print("")
+ print("Found and upscaling files")
+ print("")
+
+
+ modellist=get_models()
+ samplerlist=get_samplers()
+ upscalerlist=get_upscalers()
+ img2imgupscalerlist=get_upscalers_for_img2img()
+ img2imgsamplerlist=get_samplers_for_img2img()
+
+ if(ultimatesdupscale==False):
+ upscalescript="SD upscale"
+ else:
+ upscalescript="Ultimate SD upscale"
-while steps < loops:
- # build prompt
- randomprompt = build_dynamic_prompt(7,"all","all","all", False)
- # prompt + size
-
- #txt2img = call_txt2img(randomprompt, "portrait",True,0)
- #txt2img = call_txt2img(randomprompt, "wide" ,True, 0)
- #txt2img = call_txt2img(randomprompt, "ultrawide",True,0)
- #txt2img = call_txt2img(randomprompt, "square",True,0)
+ while steps < loops:
+ # build prompt
+ if(silentmode==True and workprompt == ""):
+ print("Trying to use provided workflow prompt, but is empty. Generating a random prompt instead.")
- # upscale via img2img first
- #img2img = call_img2img(txt2img,0.25,1.5,256)
+ if(onlyupscale==False): # only do txt2img when onlyupscale is False
+ if(silentmode==True and workprompt != ""):
+ randomprompt = workprompt
+ print("Using provided workflow prompt")
+ print(workprompt)
- # upscale via extras upscaler next
- #finalfile = call_extras(img2img)
+ else:
+ randomprompt = build_dynamic_prompt(insanitylevel,subject,artist,imagetype, False,antistring,prefixprompt,suffixprompt,promptcompounderlevel, seperator)
- steps += 1
\ No newline at end of file
+ # make the filename from "of a " up to the first comma
+ start_index = randomprompt.find("of a ") + len("of a ")
+
+ # find the index of the first comma after "of a" or end of the prompt
+ end_index = randomprompt.find(",", start_index)
+ if(end_index == -1):
+ end_index=len(randomprompt)
+
+ # extract the desired substring using slicing
+ filename = randomprompt[start_index:end_index]
+
+ # cleanup some unsafe things in the filename
+ filename = filename.replace("\"", "")
+ filename = filename.replace("[", "")
+ filename = filename.replace("|", "")
+ filename = filename.replace("]", "")
+ filename = filename.replace(":", "_")
+ filename = re.sub(r'[0-9]+', '', filename)
+
+ if(filename==""):
+ filename = str(uuid.uuid4())
+
+ # create a datetime object for the current date and time
+ now = datetime.now()
+ filenamecomplete = now.strftime("%Y%m%d%H%M%S") + "_" + filename.replace(" ", "_").strip()
+
+ # prompt + size
+ if(originalsize == "all"):
+ sizelist = ["portrait", "wide", "square"]
+ size = random.choice(sizelist)
+
+
+ #Check if there is any random value we have to choose or not
+ if(model=="all"):
+ model = random.choice(modellist)
+ #let's not do inpainting models
+ while "inpaint" in model:
+ model = random.choice(modellist)
+ print("Going to run with model " + model)
+
+ if(samplingmethod=="all"):
+ samplingmethod = random.choice(samplerlist)
+ print ("Going to run with sampling method " + samplingmethod)
+
+ if(upscaler=="all" and hiresfix == True):
+ upscaler = random.choice(upscalerlist)
+ print ("Going to run with upscaler " + upscaler)
+
+ # WebUI fix for PLMS and UniPC and hiresfix
+ if(samplingmethod in ['PLMS', 'UniPC']): # PLMS/UniPC do not support hires fix, so we just silently switch to DDIM
+ samplingmethod = 'DDIM'
+
+
+
+
+ txt2img = call_txt2img(randomprompt, size ,hiresfix, 0, filenamecomplete,model ,samplingsteps,cfg, hiressteps, denoisestrength,samplingmethod, upscaler,hiresscale,apiurl,qualitygate,quality,runs,negativeprompt)
+ originalimage = txt2img[0] #Set this for later use
+ originalpnginfo = txt2img[1] #Sort of hacky way of bringing this forward. But if it works, it works
+
+ image = txt2img[0]
+ else:
+ if(filename==""):
+ filename = str(uuid.uuid4())
+
+ # create a datetime object for the current date and time
+ now = datetime.now()
+ filenamecomplete = now.strftime("%Y%m%d%H%M%S") + "_" + filename.replace(" ", "_").strip()
+ image = upscalefilelist[steps] # else we get the image from the upscale file list
+ originalimage = image # this is also the original image file
+
+
+ # upscale via img2img
+
+ img2imgloops = int(img2imgbatch)
+ if(img2imgactivate == False): # If we dont want to run, turn it off
+ img2imgloops = 0
+ img2imgsteps = 0
+
+ # start the batching!
+ while img2imgsteps < img2imgloops:
+
+
+ #Check if there is any random value we have to choose or not
+ if(img2imgmodel=="all"):
+ img2imgmodel = random.choice(modellist)
+ #let's not do inpainting models
+ while "inpaint" in img2imgmodel:
+ img2imgmodel = random.choice(modellist)
+ print("Going to upscale with model " + img2imgmodel)
+
+ if(img2imgsamplingmethod=="all"):
+ img2imgsamplingmethod = random.choice(img2imgsamplerlist)
+ print ("Going to upscale with sampling method " + img2imgsamplingmethod)
+
+ if(img2imgupscaler=="all"):
+ img2imgupscaler = random.choice(img2imgupscalerlist)
+ print ("Going to run with upscaler " + img2imgupscaler)
+
+ # WebUI fix for PLMS and UniPC and img2img
+ if(img2imgsamplingmethod in ['PLMS', 'UniPC']): # PLMS/UniPC do not support img2img so we just silently switch to DDIM
+ img2imgsamplingmethod = 'DDIM'
+
+ img2img = call_img2img(image, originalimage, originalpnginfo, apiurl, filenamecomplete, randomprompt,negativeprompt,img2imgsamplingsteps, img2imgcfg, img2imgsamplingmethod, img2imgupscaler, img2imgmodel, img2imgdenoisestrength, img2imgscale, img2imgpadding,upscalescript,usdutilewidth, usdutileheight, usdumaskblur, usduredraw, usduSeamsfix, usdusdenoise, usduswidth, usduspadding, usdusmaskblur,controlnetenabled, controlnetmodel,controlnetblockymode)
+
+ image = img2img[0]
+ if(originalpnginfo==""):
+ originalpnginfo = img2img[1]
+
+ img2imgdenoisestrength = str(float(img2imgdenoisestrength) + float(img2imgdenoisestrengthmod)) # lower or increase the denoise strength for each batch
+ img2imgpadding = int(int(img2imgpadding) * float(img2imgscale)) # also increase padding by scale
+
+ if(int(img2imgpadding)>256): # but not overdo it :D
+ img2imgpadding="256"
+
+ img2imgsteps += 1
+
+ # upscale via extras upscaler next
+
+ if(enableextraupscale==True):
+ if(extrasupscaler1=="all"):
+ extrasupscaler1 = random.choice(img2imgupscalerlist)
+ print ("Going to upscale with upscaler 1 " + extrasupscaler1)
+
+ if(extrasupscaler2=="all"):
+ extrasupscaler2 = random.choice(img2imgupscalerlist)
+ print ("Going to upscale with upscaler 2 " + extrasupscaler2)
+
+ image = call_extras(image, originalimage, originalpnginfo, apiurl, filenamecomplete,extrasupscaler1,extrasupscaler2 ,extrasupscaler2visiblity,extrasupscaler2gfpgan,extrasupscaler2codeformer,extrasupscaler2codeformerweight,extrasresize)
+
+ steps += 1
+
+
+ print("")
+ print("All done!")
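The filename logic in `generateimages` slices the prompt between "of a " and the first following comma, scrubs unsafe characters and digits, and falls back to a random UUID. A self-contained sketch of that logic (the function name is illustrative, not from the source):

```python
import re
import uuid

def prompt_to_filename(prompt):
    """Derive a filesystem-safe name from a generated prompt, as above:
    take the text between "of a " and the first comma after it, strip
    unsafe characters and digits, and fall back to a UUID if empty."""
    start = prompt.find("of a ") + len("of a ")
    end = prompt.find(",", start)
    if end == -1:
        end = len(prompt)  # no comma after "of a": take the rest of the prompt
    name = prompt[start:end]
    for ch in '"[|]':
        name = name.replace(ch, "")
    name = name.replace(":", "_")
    name = re.sub(r"[0-9]+", "", name)
    if name == "":
        name = str(uuid.uuid4())  # nothing usable survived the cleanup
    return name.replace(" ", "_").strip()
```

The real code then prefixes a `YYYYmmddHHMMSS` timestamp to make the name unique per run.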
diff --git a/model_lists.py b/model_lists.py
new file mode 100644
index 0000000..56f94e8
--- /dev/null
+++ b/model_lists.py
@@ -0,0 +1,71 @@
+import modules.scripts as scripts
+import os
+from modules import modelloader, paths, shared
+from modules.paths import models_path
+
+
+def get_models():
+ model_dir = "Stable-diffusion"
+ model_path = os.path.abspath(os.path.join(paths.models_path, model_dir))
+ model_url = None
+ modellist = modelloader.load_models(model_path=model_path, model_url=model_url, command_path=shared.cmd_opts.ckpt_dir, ext_filter=[".ckpt", ".safetensors"], download_name="v1-5-pruned-emaonly.safetensors", ext_blacklist=[".vae.ckpt", ".vae.safetensors"])
+ modellist = [s.replace(model_path, "") for s in modellist]
+ modellist = [s.replace("\\\\", "") for s in modellist]
+ modellist = [s.replace("\\", "") for s in modellist]
+ modellist = [s.replace(".ckpt", "") for s in modellist]
+ modellist = [s.replace(".safetensors", "") for s in modellist]
+ return modellist
+
+def get_upscalers():
+ #The Latent upscalers are hardcoded, but the RealESRGAN and ESRGAN lists are read from the models folders. So build it up!
+ latentlist=["Latent","Latent (antialiased)","Latent (bicubic)","Latent (bicubic antialiased)","Latent (nearest)","Latent (nearest-exact)","Lanczos","Nearest"]
+
+ RealESRGAN_dir = "RealESRGAN"
+ RealESRGAN_path = os.path.abspath(os.path.join(paths.models_path, RealESRGAN_dir))
+ model_url = None
+ RealESRGANlist = modelloader.load_models(model_path=RealESRGAN_path, model_url=model_url, command_path=shared.cmd_opts.ckpt_dir, ext_filter=[".pt", ".pth"], download_name="", ext_blacklist=[".vae.ckpt", ".vae.safetensors"])
+ RealESRGANlist = [s.replace(RealESRGAN_path, "") for s in RealESRGANlist]
+ RealESRGANlist = [s.replace("\\\\", "") for s in RealESRGANlist]
+ RealESRGANlist = [s.replace("\\", "") for s in RealESRGANlist]
+ RealESRGANlist = [s.replace(".pth", "") for s in RealESRGANlist]
+ RealESRGANlist = [s.replace(".pt", "") for s in RealESRGANlist]
+
+ ESRGAN_dir = "ESRGAN"
+ ESRGAN_path = os.path.abspath(os.path.join(paths.models_path, ESRGAN_dir))
+ ESRGANlist = modelloader.load_models(model_path=ESRGAN_path, model_url=model_url, command_path=shared.cmd_opts.ckpt_dir, ext_filter=[".pt", ".pth"], download_name="", ext_blacklist=[".vae.ckpt", ".vae.safetensors"])
+ ESRGANlist = [s.replace(ESRGAN_path, "") for s in ESRGANlist]
+ ESRGANlist = [s.replace("\\\\", "") for s in ESRGANlist]
+ ESRGANlist = [s.replace("\\", "") for s in ESRGANlist]
+ ESRGANlist = [s.replace(".pth", "") for s in ESRGANlist]
+ ESRGANlist = [s.replace(".pt", "") for s in ESRGANlist]
+
+ #hardcode some names for Real ESRGAN, because it's named differently (note, I could have just hardcoded this. Oh well...)
+ RealESRGANlist = [s.replace("RealESRGAN_x4plus","R-ESRGAN 4x+") for s in RealESRGANlist]
+ RealESRGANlist = [s.replace("RealESRGAN x4plus_anime_6B","R-ESRGAN 4x+ Anime6B") for s in RealESRGANlist]
+ RealESRGANlist = [s.replace("R-ESRGAN 4x+_anime_6B","R-ESRGAN 4x+ Anime6B") for s in RealESRGANlist]
+
+ upscalerlist = latentlist + RealESRGANlist + ESRGANlist
+ return upscalerlist
+
+def get_samplers():
+ #Samplers are hardcoded in WebUI, so let's do it here as well
+ samplerlist = ["Euler a", "Euler", "LMS","Heun","DPM2","DPM2 a","DPM++ 2S a","DPM++ 2M","DPM++ SDE","DPM fast","DPM adaptive","LMS Karras","DPM2 Karras","DPM2 a Karras","DPM++ 2S a Karras","DPM++ 2M Karras","DPM++ SDE Karras"]
+ samplerlist += ["DDIM","UniPC", "PLMS"]
+ return samplerlist
+
+def get_upscalers_for_img2img():
+ upscalerlist = get_upscalers()
+ # remove the Latent upscalers; they aren't supported in img2img
+ upscalerlist.remove("Latent")
+ upscalerlist.remove("Latent (antialiased)")
+ upscalerlist.remove("Latent (bicubic)")
+ upscalerlist.remove("Latent (bicubic antialiased)")
+ upscalerlist.remove("Latent (nearest)")
+ upscalerlist.remove("Latent (nearest-exact)")
+ return upscalerlist
+
+def get_samplers_for_img2img():
+ #Samplers are hardcoded in WebUI, so let's do it here as well
+ samplerlist = ["Euler a", "Euler", "LMS","Heun","DPM2","DPM2 a","DPM++ 2S a","DPM++ 2M","DPM++ SDE","DPM fast","DPM adaptive","LMS Karras","DPM2 Karras","DPM2 a Karras","DPM++ 2S a Karras","DPM++ 2M Karras","DPM++ SDE Karras"]
+ samplerlist += ["DDIM"] #UniPC and PLMS don't support upscaling, apparently
+ return samplerlist
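`get_models` and `get_upscalers` both normalize the file paths returned by `modelloader.load_models` into the plain names WebUI expects, by stripping the base directory, path separators, and extensions. A minimal sketch of that cleanup (the forward-slash handling is an added assumption for non-Windows paths; the original only strips backslashes):

```python
def normalize_model_names(paths, base_path):
    """Strip the base directory, path separators, and checkpoint
    extensions from each model file path, as get_models does above."""
    names = []
    for p in paths:
        name = p.replace(base_path, "")
        # "\\\\" mirrors the original's double-backslash pass; "/" is added here
        for token in ("\\\\", "\\", "/"):
            name = name.replace(token, "")
        for ext in (".ckpt", ".safetensors"):
            name = name.replace(ext, "")
        names.append(name)
    return names
```

Note that `str.replace` also fires on matches anywhere in the name, which is why the extension blacklist above excludes `.vae.ckpt` files before this step.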
diff --git a/scripts/onebuttonprompt.py b/scripts/onebuttonprompt.py
index 08f9238..a330a82 100644
--- a/scripts/onebuttonprompt.py
+++ b/scripts/onebuttonprompt.py
@@ -1,20 +1,58 @@
import modules.scripts as scripts
import gradio as gr
import os
+import platform
+import subprocess as sp
from modules import images
from modules.processing import process_images, Processed
from modules.processing import Processed
from modules.shared import opts, cmd_opts, state
+
from build_dynamic_prompt import *
+from main import *
+from model_lists import *
subjects = ["all","object","animal","humanoid", "landscape", "concept"]
artists = ["all", "none", "popular", "greg mode", "3D", "abstract", "angular", "anime" ,"architecture", "art nouveau", "art deco", "baroque", "bauhaus", "cartoon", "character", "children's illustration", "cityscape", "clean", "cloudscape", "collage", "colorful", "comics", "cubism", "dark", "detailed", "digital", "expressionism", "fantasy", "fashion", "fauvism", "figurativism", "gore", "graffiti", "graphic design", "high contrast", "horror", "impressionism", "installation", "landscape", "light", "line drawing", "low contrast", "luminism", "magical realism", "manga", "melanin", "messy", "monochromatic", "nature", "nudity", "photography", "pop art", "portrait", "primitivism", "psychedelic", "realism", "renaissance", "romanticism", "scene", "sci-fi", "sculpture", "seascape", "space", "stained glass", "still life", "storybook realism", "street art", "streetscape", "surrealism", "symbolism", "textile", "ukiyo-e", "vibrant", "watercolor", "whimsical"]
imagetypes = ["all", "all - force multiple", "photograph", "octane render","digital art","concept art", "painting", "portrait", "anime key visual", "only other types"]
promptmode = ["at the back", "in the front"]
promptcompounder = ["1", "2", "3", "4", "5"]
-ANDtogglemode = ["comma", "AND", "current prompt + AND", "current prompt + AND + current prompt", "automatic AND"]
+ANDtogglemode = ["none", "automatic", "prefix AND prompt + suffix", "prefix + prefix + prompt + suffix"]
+seperatorlist = ["comma", "AND", "BREAK"]
+
+#for autorun and upscale
+sizelist = ["all", "portrait", "wide", "square", "ultrawide"]
+
+modellist = get_models()
+modellist.insert(0,"all")
+ modellist.insert(0,"currently selected model") # First value is the currently selected model
+
+upscalerlist = get_upscalers()
+upscalerlist.insert(0,"automatic")
+upscalerlist.insert(0,"all")
+
+samplerlist = get_samplers()
+samplerlist.insert(0,"all")
+
+#for img2img
+img2imgupscalerlist = get_upscalers_for_img2img()
+img2imgupscalerlist.insert(0,"automatic")
+img2imgupscalerlist.insert(0,"all")
+
+img2imgsamplerlist = get_samplers_for_img2img()
+img2imgsamplerlist.insert(0,"all")
+
+#for ultimate SD upscale
+
+seams_fix_types = ["None","Band pass","Half tile offset pass","Half tile offset pass + intersections"]
+redraw_modes = ["Linear","Chess","None"]
+
+#folder stuff
+folder_symbol = '\U0001f4c2' # 📂
+sys.path.append(os.path.abspath(".."))
+
class Script(scripts.Script):
@@ -27,17 +65,33 @@ class Script(scripts.Script):
def ui(self, is_img2img):
- def gen_prompt(insanitylevel, subject, artist, imagetype, antistring):
+ def gen_prompt(insanitylevel, subject, artist, imagetype, antistring, prefixprompt, suffixprompt, promptcompounderlevel, seperator):
promptlist = []
for i in range(5):
- promptlist.append(build_dynamic_prompt(insanitylevel,subject,artist, imagetype, False, antistring))
+ promptlist.append(build_dynamic_prompt(insanitylevel,subject,artist, imagetype, False, antistring,prefixprompt,suffixprompt,promptcompounderlevel,seperator))
return promptlist
def prompttoworkflowprompt(text):
return text
+
+ # Copied code from WebUI
+ def openfolder():
+ script_dir = os.path.dirname(os.path.abspath(__file__)) # Script directory
+ automatedoutputsfolder = os.path.join(script_dir, "../automated_outputs/" )
+
+ path = os.path.normpath(automatedoutputsfolder)
+
+ if platform.system() == "Windows":
+ os.startfile(path)
+ elif platform.system() == "Darwin":
+ sp.Popen(["open", path])
+ elif "microsoft-standard-WSL2" in platform.uname().release:
+ sp.Popen(["wsl-open", path])
+ else:
+ sp.Popen(["xdg-open", path])
with gr.Tab("Main"):
@@ -55,8 +109,9 @@ class Script(scripts.Script):
imagetypes, label="type of image", value="all")
with gr.Row():
with gr.Column():
- promptlocation = gr.Dropdown(
- promptmode, label="Location of existing prompt", value="at the back")
+ prefixprompt = gr.Textbox(label="Place this in front of generated prompt (prefix)",value="")
+ suffixprompt = gr.Textbox(label="Place this at back of generated prompt (suffix)",value="")
+ negativeprompt = gr.Textbox(label="Use this negative prompt",value="")
with gr.Row():
with gr.Column():
antistring = gr.Textbox(label="Filter out following properties (comma seperated). Example ""film grain, purple, cat"" ")
@@ -129,14 +184,11 @@ class Script(scripts.Script):
- ### Location of existing prompt
+ ### Other prompt fields
-
- If you put a prompt in the prompt field, it will be added onto the generated prompt. You can determine where to put it in the front or the back of the generated prompt.
+ The existing prompt and negative prompt fields are ignored.
- 1. at the back
-
- 2. in the front
+ Add a prompt prefix, suffix, and negative prompt in the respective fields. They will be added automatically during processing.
@@ -148,6 +200,8 @@ class Script(scripts.Script):
This way, you don't ever have to add it manually again. This file won't be overwritten during upgrades.
+ Idea by redditor jonesaid.
+
"""
)
@@ -200,9 +254,13 @@ class Script(scripts.Script):
with gr.Column(scale=1):
promptcompounderlevel = gr.Dropdown(
promptcompounder, label="Prompt compounder", value="1")
+ with gr.Row():
+ with gr.Column(scale=1):
+ seperator = gr.Dropdown(
+ seperatorlist, label="Prompt seperator", value="comma")
with gr.Column(scale=2):
ANDtoggle = gr.Dropdown(
- ANDtogglemode, label="Prompt seperator mode", value="comma")
+ ANDtogglemode, label="Prompt seperator mode", value="none")
with gr.Row():
gr.Markdown(
"""
@@ -217,9 +275,9 @@ class Script(scripts.Script):
This was originally a bug in the first release when using multiple batches, now brought back as a feature.
Raised by redditor drone2222, to bring this back as a toggle, since it did create interesting results. So here it is.
- You can toggle the separator mode. Standardly this is a comma, but you can choose an AND and a newline.
+ You can toggle the separator mode. By default this is a comma, but you can choose AND or BREAK.
- You can also choose for "current prompt + AND" or "current prompt + AND + current prompt". This is best used in conjuction with the Latent Couple extension when you want some control. Set the prompt compounder equal to the amount of areas defined in Laten Couple.
+ You can also choose the prompt seperator mode for use with the Latent Couple extension.
Example flow:
@@ -227,11 +285,15 @@ class Script(scripts.Script):
In the main tab, set the subject to humanoids
- In the prompt field then add for example: Art by artistname, 2 people
+ In the prefix prompt field then add for example: Art by artistname, 2 people
Set the prompt compounder to: 2
+
+ Set the Prompt seperator to: AND
- "automatic AND" is entirely build around Latent Couple. It will pass artists and the amount of people/animals/objects to generate in the prompt automatically. Set the prompt compounder equal to the amount of areas defined in Laten Couple.
+ Set the Prompt Seperator mode to: prefix AND prompt + suffix
+
+ "automatic" is entirely built around Latent Couple. It will pass artists and the amount of people/animals/objects to generate in the prompt automatically. Set the prompt compounder equal to the amount of areas defined in Latent Couple.
Example flow:
@@ -243,27 +305,285 @@ class Script(scripts.Script):
Set the prompt compounder to: 2
+ Set the Prompt seperator to: AND
+
+ Set the Prompt Seperator mode to: automatic
+
"""
)
- genprom.click(gen_prompt, inputs=[insanitylevel,subject, artist, imagetype, antistring], outputs=[prompt1, prompt2, prompt3,prompt4,prompt5])
+ with gr.Tab("One Button Run and Upscale"):
+ with gr.Row():
+ gr.Markdown(
+ """
+ ### TXT2IMG
+
+ Start WebUI with the --api option for this to work.
+
+ """
+ )
+ with gr.Row():
+ with gr.Column(scale=1):
+ startmain = gr.Button("Start generating and upscaling!")
+ automatedoutputsfolderbutton = gr.Button(folder_symbol)
+ apiurl = gr.Textbox(label="URL", value="http://127.0.0.1:7860")
+ with gr.Column(scale=1):
+ onlyupscale = gr.Checkbox(label="Don't generate, only upscale", value=False)
+ gr.Markdown(
+ """
+
+ "Only upscale" will not use txt2img to generate images.
+
+ Instead it will pick up all files in the \\upscale_me\\ folder and upscale them with the settings below.
+
+ """
+ )
+ with gr.Row():
+ with gr.Column(scale=1):
+
+ amountofimages = gr.Slider(1, 50, value="20", step=1, label="Amount of images to generate")
+ size = gr.Dropdown(
+ sizelist, label="Size to generate", value="all")
+ with gr.Row(scale=1):
+ samplingsteps = gr.Slider(1, 100, value="20", step=1, label="Sampling steps")
+ cfg = gr.Slider(1,20, value="6.0", step=0.1, label="CFG")
+ with gr.Row(scale=1):
+ hiresfix = gr.Checkbox(label="hires. fix", value=True)
+ hiressteps = gr.Slider(0, 100, value = "0", step=1, label="Hires steps")
+ hiresscale = gr.Slider(1, 4, value = "2", step=0.05, label="Scale")
+ denoisestrength = gr.Slider(0, 1, value="0.60", step=0.01, label="Denoise strength")
+ with gr.Column(scale=1):
+
+ model = gr.Dropdown(
+ modellist, label="model to use", value="currently selected model")
+ with gr.Column(scale=1):
+ samplingmethod = gr.Dropdown(
+ samplerlist, label= "Sampler", value="all")
+ upscaler = gr.Dropdown(
+ upscalerlist, label="hires upscaler", value="all")
+ with gr.Row():
+ gr.Markdown(
+ """
+ ### Quality Gate
+
+ Uses the aesthetic-image-scorer extension to check the quality of the image.
+
+ Once turned on, it will retry up to n times to get an image at or above the quality score. If none passes, it will keep the best image so far and continue.
+
+ Idea and inspiration by xKean.
+
+ """
+ )
+ with gr.Row():
+ qualitygate = gr.Checkbox(label="Quality Gate", value=False)
+ quality = gr.Slider(1, 10, value = "7.2", step=0.1, label="Quality", visible = False)
+ runs = gr.Slider(1, 50, value = "5", step=1, label="Amount of tries", visible = False)
+ with gr.Row():
+ gr.Markdown(
+ """
+ ### IMG2IMG upscale
+ """
+ )
+ with gr.Row():
+ img2imgactivate = gr.Checkbox(label="Upscale image with IMG2IMG", value=True)
+ with gr.Row():
+ with gr.Column(scale=1):
+ img2imgbatch = gr.Slider(1, 5, value="1", step=1, label="Amount times to repeat upscaling with IMG2IMG (loopback)")
+ img2imgsamplingsteps = gr.Slider(1, 100, value="20", step=1, label="img2img Sampling steps")
+ img2imgcfg = gr.Slider(1,20, value="6", step=0.1, label="img2img CFG")
+ img2imgdenoisestrength = gr.Slider(0, 1, value="0.30", step=0.01, label="img2img denoise strength")
+ img2imgdenoisestrengthmod = gr.Slider(-1,1, value = "-0.05", step=0.01, label="adjust denoise each img2img batch")
+ with gr.Column(scale=1):
+ img2imgmodel = gr.Dropdown(
+ modellist, label="img2img model to use", value="currently selected model")
+ img2imgsamplingmethod = gr.Dropdown(
+ img2imgsamplerlist, label= "img2img sampler", value="all")
+ img2imgupscaler = gr.Dropdown(
+ img2imgupscalerlist, label="img2img upscaler", value="all")
+ with gr.Row():
+ img2imgscale = gr.Slider(1, 4, value="2", step=0.05, label="img2img scale")
+ img2imgpadding = gr.Slider(32, 256, value="64", step=12, label="img2img padding")
+ with gr.Row():
+ ultimatesdupscale = gr.Checkbox(label="Use Ultimate SD Upscale script instead", value=False)
+ gr.Markdown(
+ """
+
+ This requires the Ultimate SD Upscale extension; install it if you haven't already.
+
+ """
+ )
+ with gr.Row():
+ with gr.Column(scale = 1):
+ #usdutilewidth, usdutileheight, usdumaskblur, usduredraw, usduSeamsfix, usdusdenoise, usduswidth, usduspadding, usdusmaskblur
+ #usdutilewidth = "512", usdutileheight = "0", usdumaskblur = "8", usduredraw ="Linear", usduSeamsfix = "None", usdusdenoise = "0.35", usduswidth = "64", usduspadding ="32", usdusmaskblur = "8"
+ usdutilewidth = gr.Slider(0, 2048, value="512", step=12, label="tile width", visible = False)
+ usdutileheight = gr.Slider(0, 2048, value="0", step=12, label="tile height", visible = False)
+ usdumaskblur = gr.Slider(0, 64, value="8", step=1, label="Mask blur", visible = False)
+ usduredraw = gr.Dropdown(
+ redraw_modes, label="Type", value="Linear", visible = False)
+ with gr.Column(scale = 1):
+ usduSeamsfix = gr.Dropdown(
+ seams_fix_types, label="Seams fix", value="None", visible = False)
+ usdusdenoise = gr.Slider(0, 1, value="0.35", step=0.01, label="Seams denoise strength", visible = False)
+ usduswidth = gr.Slider(0, 128, value="64", step=12, label="Seams Width", visible = False)
+ usduspadding = gr.Slider(0, 128, value="32", step=12, label="Seams padding", visible = False)
+ usdusmaskblur = gr.Slider(0, 64, value="8", step=1, label="Seams Mask blur (offset pass only)", visible = False)
+ with gr.Row():
+ with gr.Column(scale = 1):
+ controlnetenabled = gr.Checkbox(label="Enable controlnet tile resample", value=False)
+ controlnetblockymode = gr.Checkbox(label="also enable weird blocky upscale mode", value=False)
+ with gr.Column(scale = 1):
+ controlnetmodel = gr.Textbox(label="Controlnet tile model name", value = "control_v11f1e_sd15_tile [a371b31b]")
+ with gr.Row():
+ gr.Markdown(
+ """
+
+ This requires the ControlNet 1.1 extension and the tile resample model; install them if you haven't already.
+ In the ControlNet settings, enable "Allow other script to control this extension"
+
+ Don't use weird blocky upscale mode. Or maybe do?
+
+ """
+ )
+ with gr.Row():
+ with gr.Column(scale = 1):
+ enableextraupscale = gr.Checkbox(label="Enable upscale with extras", value=False)
+ with gr.Row():
+ with gr.Column(scale = 1):
+ extrasresize = gr.Slider(0, 8, value="2", step=0.05, label="Upscale resize", visible = False)
+ extrasupscaler1 = gr.Dropdown(
+ img2imgupscalerlist, label="upscaler 1", value="all", visible = False)
+ extrasupscaler2 = gr.Dropdown(
+ img2imgupscalerlist, label="upscaler 2", value="all", visible = False)
+ extrasupscaler2visiblity = gr.Slider(0, 1, value="0.5", step=0.05, label="Upscaler 2 vis.", visible = False)
+ with gr.Column(scale = 1):
+ extrasupscaler2gfpgan = gr.Slider(0, 1, value="0", step=0.05, label="GFPGAN vis.", visible = False)
+ extrasupscaler2codeformer = gr.Slider(0, 1, value="0.15", step=0.05, label="CodeFormer vis.", visible = False)
+ extrasupscaler2codeformerweight = gr.Slider(0, 1, value="0.1", step=0.05, label="CodeFormer weight", visible = False)
+
+
+ genprom.click(gen_prompt, inputs=[insanitylevel,subject, artist, imagetype, antistring,prefixprompt, suffixprompt,promptcompounderlevel, seperator], outputs=[prompt1, prompt2, prompt3,prompt4,prompt5])
prompt1toworkflow.click(prompttoworkflowprompt, inputs=prompt1, outputs=workprompt)
prompt2toworkflow.click(prompttoworkflowprompt, inputs=prompt2, outputs=workprompt)
prompt3toworkflow.click(prompttoworkflowprompt, inputs=prompt3, outputs=workprompt)
prompt4toworkflow.click(prompttoworkflowprompt, inputs=prompt4, outputs=workprompt)
prompt5toworkflow.click(prompttoworkflowprompt, inputs=prompt5, outputs=workprompt)
+
+ startmain.click(generateimages, inputs=[amountofimages,size,model,samplingsteps,cfg,hiresfix,hiressteps,denoisestrength,samplingmethod, upscaler,hiresscale, apiurl, qualitygate, quality, runs,insanitylevel,subject, artist, imagetype, silentmode, workprompt, antistring, prefixprompt, suffixprompt,negativeprompt,promptcompounderlevel, seperator, img2imgbatch, img2imgsamplingsteps, img2imgcfg, img2imgsamplingmethod, img2imgupscaler, img2imgmodel,img2imgactivate, img2imgscale, img2imgpadding,img2imgdenoisestrength,ultimatesdupscale,usdutilewidth, usdutileheight, usdumaskblur, usduredraw, usduSeamsfix, usdusdenoise, usduswidth, usduspadding, usdusmaskblur, controlnetenabled, controlnetmodel,img2imgdenoisestrengthmod,enableextraupscale,controlnetblockymode,extrasupscaler1,extrasupscaler2,extrasupscaler2visiblity,extrasupscaler2gfpgan,extrasupscaler2codeformer,extrasupscaler2codeformerweight,extrasresize,onlyupscale])
+
+ automatedoutputsfolderbutton.click(openfolder)
+
+ # Turn things off and on for onlyupscale and txt2img
+ def onlyupscalevalues(onlyupscale):
+ onlyupscale = not onlyupscale
+ return {
+ amountofimages: gr.update(visible=onlyupscale),
+ size: gr.update(visible=onlyupscale),
+ samplingsteps: gr.update(visible=onlyupscale),
+ cfg: gr.update(visible=onlyupscale),
+
+ hiresfix: gr.update(visible=onlyupscale),
+ hiressteps: gr.update(visible=onlyupscale),
+ hiresscale: gr.update(visible=onlyupscale),
+ denoisestrength: gr.update(visible=onlyupscale),
+ upscaler: gr.update(visible=onlyupscale),
+
+            model: gr.update(visible=onlyupscale),
+            samplingmethod: gr.update(visible=onlyupscale),
+
+ qualitygate: gr.update(visible=onlyupscale),
+ quality: gr.update(visible=onlyupscale),
+ runs: gr.update(visible=onlyupscale)
+
+ }
+
+ onlyupscale.change(
+ onlyupscalevalues,
+ [onlyupscale],
+        [amountofimages,size,samplingsteps,cfg,hiresfix,hiressteps,hiresscale,denoisestrength,upscaler,model,samplingmethod,qualitygate,quality,runs]
+ )
+ # Turn things off and on for hiresfix
+ def hireschangevalues(hiresfix):
+ return {
+ hiressteps: gr.update(visible=hiresfix),
+ hiresscale: gr.update(visible=hiresfix),
+ denoisestrength: gr.update(visible=hiresfix),
+ upscaler: gr.update(visible=hiresfix)
+ }
- return [insanitylevel,subject, artist, imagetype, promptlocation, promptcompounderlevel, ANDtoggle, silentmode, workprompt, antistring]
+ hiresfix.change(
+ hireschangevalues,
+ [hiresfix],
+ [hiressteps,hiresscale,denoisestrength,upscaler]
+ )
+
+ # Turn things off and on for quality gate
+ def qgatechangevalues(qualitygate):
+ return {
+ quality: gr.update(visible=qualitygate),
+ runs: gr.update(visible=qualitygate)
+ }
+
+ qualitygate.change(
+ qgatechangevalues,
+ [qualitygate],
+ [quality,runs]
+ )
+
+ # Turn things off and on for USDU
+ def ultimatesdupscalechangevalues(ultimatesdupscale):
+ return {
+ usdutilewidth: gr.update(visible=ultimatesdupscale),
+ usdutileheight: gr.update(visible=ultimatesdupscale),
+ usdumaskblur: gr.update(visible=ultimatesdupscale),
+ usduredraw: gr.update(visible=ultimatesdupscale),
+
+ usduSeamsfix: gr.update(visible=ultimatesdupscale),
+ usdusdenoise: gr.update(visible=ultimatesdupscale),
+ usduswidth: gr.update(visible=ultimatesdupscale),
+ usduspadding: gr.update(visible=ultimatesdupscale),
+ usdusmaskblur: gr.update(visible=ultimatesdupscale)
+ }
+
+ ultimatesdupscale.change(
+ ultimatesdupscalechangevalues,
+ [ultimatesdupscale],
+ [usdutilewidth,usdutileheight,usdumaskblur,usduredraw,usduSeamsfix,usdusdenoise,usduswidth,usduspadding,usdusmaskblur]
+ )
+
+ # Turn things off and on for EXTRAS
+ def enableextraupscalechangevalues(enableextraupscale):
+ return {
+ extrasupscaler1: gr.update(visible=enableextraupscale),
+ extrasupscaler2: gr.update(visible=enableextraupscale),
+ extrasupscaler2visiblity: gr.update(visible=enableextraupscale),
+ extrasresize: gr.update(visible=enableextraupscale),
+
+ extrasupscaler2gfpgan: gr.update(visible=enableextraupscale),
+ extrasupscaler2codeformer: gr.update(visible=enableextraupscale),
+ extrasupscaler2codeformerweight: gr.update(visible=enableextraupscale)
+ }
+
+ enableextraupscale.change(
+ enableextraupscalechangevalues,
+ [enableextraupscale],
+ [extrasupscaler1,extrasupscaler2,extrasupscaler2visiblity,extrasresize, extrasupscaler2gfpgan,extrasupscaler2codeformer,extrasupscaler2codeformerweight]
+ )
+
+
+
+ return [insanitylevel,subject, artist, imagetype, prefixprompt,suffixprompt,negativeprompt, promptcompounderlevel, ANDtoggle, silentmode, workprompt, antistring, seperator]
- def run(self, p, insanitylevel, subject, artist, imagetype, promptlocation, promptcompounderlevel, ANDtoggle, silentmode, workprompt, antistring):
+ def run(self, p, insanitylevel, subject, artist, imagetype, prefixprompt,suffixprompt,negativeprompt, promptcompounderlevel, ANDtoggle, silentmode, workprompt, antistring,seperator):
images = []
infotexts = []
@@ -275,23 +595,23 @@ class Script(scripts.Script):
batchsize = p.batch_size
p.n_iter = 1
p.batch_size = 1
- originalprompt = p.prompt
-
+
if(silentmode and workprompt != ""):
print("Workflow mode turned on, not generating a prompt. Using workflow prompt.")
elif(silentmode):
print("Warning, workflow mode is turned on, but no workprompt has been given.")
- elif p.prompt != "":
- print("Prompt is not empty, adding current prompt " + promptlocation + " of the generated prompt")
+ elif p.prompt != "" or p.negative_prompt != "":
+            print("Please note that the existing prompt and negative prompt fields are no longer used")
- if(ANDtoggle == "automatic AND" and artist == "none"):
- print("Automatic AND and artist mode set to none, don't work together well. Ignoring this setting!")
+ if(ANDtoggle == "automatic" and artist == "none"):
+            print("Automatic mode and artist set to none don't work together well. Ignoring this setting!")
artist = "all"
- if(ANDtoggle == "automatic AND" and originalprompt != ""):
- print("Automatic AND doesnt work well if there is an original prompt filled in. Ignoring the original prompt!")
- originalprompt = ""
+ if(ANDtoggle == "automatic" and (prefixprompt != "")):
+            print("Automatic mode doesn't work well with a prefix prompt filled in. Ignoring the prefix prompt!")
+ prefixprompt = ""
+
@@ -301,41 +621,37 @@ class Script(scripts.Script):
# prompt compounding
print("Starting generating the prompt")
preppedprompt = ""
- if(ANDtoggle == "automatic AND" and originalprompt == ""):
- if(artist != "none"):
- originalprompt += build_dynamic_prompt(insanitylevel,subject,artist, imagetype, True, antistring)
- if(subject == "humanoid"):
- originalprompt += ", " + promptcompounderlevel + " people"
- if(subject == "landscape"):
- originalprompt += ", landscape"
- if(subject == "animal"):
- originalprompt += ", " + promptcompounderlevel + " animals"
- if(subject == "object"):
- originalprompt += ", " + promptcompounderlevel + " objects"
-
- if(ANDtoggle != "AND" and ANDtoggle != "comma" and originalprompt != ""):
- preppedprompt += originalprompt + " \n AND "
- for i in range(int(promptcompounderlevel)):
- if(ANDtoggle == "automatic AND"):
- preppedprompt += originalprompt + ", " + build_dynamic_prompt(insanitylevel,subject,"none", imagetype, False, antistring)
- elif(ANDtoggle != "AND" and ANDtoggle != "comma" and originalprompt != "" and ANDtoggle != "current prompt + AND" ):
- preppedprompt += originalprompt + ", " + build_dynamic_prompt(insanitylevel,subject,artist, imagetype, False, antistring)
+ if(ANDtoggle == "automatic"):
+ preppedprompt += prefixprompt + ", "
+ if(artist != "none"):
+ preppedprompt += build_dynamic_prompt(insanitylevel,subject,artist, imagetype, True, antistring)
+ if(subject == "humanoid"):
+ preppedprompt += ", " + promptcompounderlevel + " people"
+ if(subject == "landscape"):
+ preppedprompt += ", landscape"
+ if(subject == "animal"):
+ preppedprompt += ", " + promptcompounderlevel + " animals"
+ if(subject == "object"):
+ preppedprompt += ", " + promptcompounderlevel + " objects"
+
+ if(ANDtoggle != "none" and ANDtoggle != "automatic"):
+ preppedprompt += prefixprompt
+
+ if(ANDtoggle != "none"):
+ if(ANDtoggle!="prefix + prefix + prompt + suffix"):
+ prefixprompt = ""
+ if(seperator == "comma"):
+ preppedprompt += " \n , "
else:
- preppedprompt += build_dynamic_prompt(insanitylevel,subject,artist, imagetype, False, antistring)
- if(i + 1 != int(promptcompounderlevel)):
- if(ANDtoggle == "comma"):
- preppedprompt += ", "
- else:
- preppedprompt += " \n AND "
+ preppedprompt += " \n " + seperator + " "
+
+ #Here is where we build a "normal" prompt
+ preppedprompt += build_dynamic_prompt(insanitylevel,subject,artist, imagetype, False, antistring, prefixprompt, suffixprompt,promptcompounderlevel, seperator)
-
- if(promptlocation == "in the front" and originalprompt != "" and (ANDtoggle == "AND" or ANDtoggle == "comma")):
- p.prompt = originalprompt + ", " + preppedprompt
- elif(promptlocation == "at the back" and originalprompt != "" and (ANDtoggle == "AND" or ANDtoggle == "comma")):
- p.prompt = preppedprompt + ", " + originalprompt # add existing prompt to the back?
- else:
- p.prompt = preppedprompt # dont add anything
+ # set everything ready
+ p.prompt = preppedprompt
+ p.negative_prompt = negativeprompt
if(silentmode == True):
p.prompt = workprompt
diff --git a/user_guides/my_first_generation.md b/user_guides/my_first_generation.md
new file mode 100644
index 0000000..0ff17f5
--- /dev/null
+++ b/user_guides/my_first_generation.md
@@ -0,0 +1,120 @@
+# My first generation
+Here are some example methods and tips on how to get good results.
+
+Maybe it can give you some ideas and inspiration on how to use this tool. You can just leave everything on "all", which is the most fun!
+
+It works best with models that are more general and multi-purpose, such as [deliberate](https://civitai.com/models/4823/deliberate) and [dreamlike diffusion](https://civitai.com/models/1274/dreamlike-diffusion-10). However, feel free to use it on your personal favorite models.
+
+For models which are more trained on specific subjects, match the subject to the model to get the best results.
+
+## Portraits
+
+In this example, I am using the deliberate model.
+I've set the Sampling method to "DPM++ SDE Karras", Sampling steps to "25" and lowered the CFG Scale to "6".
+You can increase the batch count to keep creating new images.
+
+
+
+
+Then I scroll down and activate the One Button Prompt script.
+
+I set the complexity of the prompt to 5, which is a nice middle ground for prompts.
+
+As a "Subject Type" I select "humanoid", this will ensure I will get a human, or humanoid like result from the prompt.
+
+For "Artists" I select "popular". This is a list of popular artists from images from CivitAI. Such as LOIS, Artgerm, alphonse mucha and many others. You can also try and switch "artists" to "Portrait" or "Character"
+
+As "Type of image" I select "portrait", so that it will always default to a portrait image. You can also try to switch "Type of image" to "digital art" to get more full body shots.
+
+
+
+At that point, you can just press generate.
+
+Examples:
+
+
+
+
+> portrait, ,close up of a Hopeless Slovak Mind Flayer, cosplaying as John Wick, Dark hair styled as Ballerina bun, Day of the Dead face paint, Desaturated, (art by Jeremy Mann:1.2)
+
+> art by Greg Rutkowski, portrait, ,close up of a Brutal Micronesian [Sphinx|James from pokemon], wearing Russian Violet 1980s neon clothing, fashion modeling pose, at Nighttime, Visual novel, Grim, soft light, Infrared
+
+
+
+
+> portrait, ,close up of a Alluring Compelling Macedonian Tom Hardy, Skydiving, Bending backwards, Gray hair styled as Ponytail, from inside of a Cambridge, Simple illustration, Surprising, Heidelberg School, soft light, Magic the gathering, artstation
+
+> portrait, ,close up of a plump Female Paladin, with Lycra skin, background is The Chaco Culture, art by Loish
+
+## Photos
+In this example, I am using the [Realistic Vision](https://civitai.com/models/4201/realistic-vision-v20) model.
+
+I've set the Sampling method to "DPM++ SDE Karras", Sampling steps to "25" and kept the CFG Scale at "7".
+You can increase the batch count to keep creating new images.
+
+I set the complexity of the prompt to 5, which is a nice middle ground for prompts. For photos you might want to go a bit lower.
+
+As a "Subject Type" I select "humanoid", as the Realistic Vision model likes to generate humans.
+
+For "Artists" I select "photograph". This is a list of photography artists. However "fashion" is also a good option here. I can also recommend "none", so no artist is generated at all
+When using Photograph artists, there is a very high chance of getting a portrait image.
+
+As "Type of image" I select "photograph", so that it will always default to a portrait image.
+
+
+
+Examples:
+
+
+
+
+> [art by Tony Conrad|art by Ed Freeman], photograph, two shot angle of a Chanel Iman, wearing Ultrarealistic Preppy clothing, Honey hair styled as Short and messy, film grain, Canon 5d mark 4, macro lens
+
+> photograph, High exposure of a Dynamic Ancient Male Writer, Dark hair styled as Tousled, Hazy conditions, Bloom light, film grain, Sony A9 II, Depth of field 100mm, Spirals, dslr, art by Larry Sultan, (art by Ray Collins:1.1)
+
+
+
+
+> photograph, Dutch angle shot of a Homey Nightelf, Pink and Silver hair styled as Dreadlocks, Process Art, Nostalgic lighting, film grain, Canon eos 5d mark 4, telephoto lens
+
+> art by Geof Kern, (art by Guo Pei:0.8), photograph, 3/4 view of a Reiwa Era Kehlani, Writing, Dark hair styled as Wavy, Necklace, Soul patch, Sunny, Dada Art, volumetric lighting, film grain, dslr, F/1.8, Plain white background
+
+## Sci-fi Animals (adding trigger words and prompt prefix)
+In this example, I am using the [dreamlike diffusion](https://civitai.com/models/1274/dreamlike-diffusion-10) model.
+
+I've set the Sampling method to "DPM++ SDE Karras", Sampling steps to "25" and kept the CFG Scale at "7".
+I've set the Width to 768 to create some widescreen images.
+
+You can increase the batch count, to keep creating new images.
+
+I set the complexity of the prompt to 7, since we can go a little crazy here, but not too much.
+
+As a "Subject Type" I select "animal", we want to create some cool animals.
+
+For "Artists" I select "sci-fi". This is a list of sci-fi artists. However "fantasy" is also a good option here.
+
+As "Type of image" I select "concept art", because we want some cool concept art to show up.
+
+As an addition, I am also filling in the prompt prefix field with "dreamlikeart, sci-fi animal".
+
+"dreamlikeart" is the trigger word of the dreamlike diffusion model, so we want that at the start.
+
+Adding "sci-fi animal" at the start makes sure it understands it needs to create a sci-fi animal, regardless of what else the prompt generates. You can play around with these settings, depending on the model you are using and the results you want.
+
+
+
+Examples:
+
+
+
+
+> dreamlikeart, sci-fi animal, (art by Kilian Eng:1.1), concept art, Street level shot of a Floating were badger, from inside of The Tablets of Stone, Foggy, Movie still, Seapunk Art, Low shutter, Lomography, opulent
+
+> dreamlikeart, sci-fi animal, concept art, ground level shot of a Glam monster Chicken, Realistic, in the style of Bust of Nefertiti, (art by Brian Sum:1.3)
+
+
+
+
+> dreamlikeart, sci-fi animal, concept art, aerial shot of a Happy Stegosaurus, Triathlon, wearing 1920's dark pastel Denim shorts, Adventure pose, moody lighting, Fish-eye Lens, Cinestill, National Geographic, dripping Mustard, photolab, (art by Ed Emshwiller:1.3),art by Andy Fairhurst
+
+> dreamlikeart, sci-fi animal, concept art, full shot of a Lovely Scorpion, background is Mountain, Spring, Rough sketch, key light, 50mm, Film grain, Dota style, inticrate details
diff --git a/user_guides/one_button_run_and_upscale.md b/user_guides/one_button_run_and_upscale.md
new file mode 100644
index 0000000..6297f22
--- /dev/null
+++ b/user_guides/one_button_run_and_upscale.md
@@ -0,0 +1,239 @@
+# One Button Run and Upscale
+An extensive new feature of One Button Prompt. One Button goes brrrr.
+
+This will allow you to:
+1. Generate an image with TXT2IMG
+ 1. Can enable Hi res. fix
+ 2. Possible to set up a __Quality Gate__, so only the best images get upscaled
+ 3. Possible to ignore the One Button Prompt generation, and __use your own prompts__
+2. Upscale that image with IMG2IMG
+    1. This process can be repeated (loopback enabled).
+ 2. Supports __SD Upscale__, __Ultimate SD Upscale__ and __Controlnet tile_resample__ methods of upscaling
+3. Upscale with EXTRAS
+4. Possibility to __just batch upscale__ existing images
+
+All with a single press of __One Button__.
+
+## How does it work?
+It works by calling various WebUI API endpoints with the correct parameters, in the correct order.
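The txt2img step, for instance, boils down to POSTing a JSON payload to the WebUI API. Here is a minimal Python sketch, assuming the standard `/sdapi/v1/txt2img` endpoint and only a handful of the parameters the script actually sends:

```python
import json
import urllib.request

def build_txt2img_payload(prompt, negative_prompt="", steps=25, cfg_scale=7,
                          width=512, height=512):
    # Minimal parameter set; One Button Run and Upscale sends many more fields.
    return {
        "prompt": prompt,
        "negative_prompt": negative_prompt,
        "steps": steps,
        "cfg_scale": cfg_scale,
        "width": width,
        "height": height,
    }

def call_txt2img(apiurl, payload):
    # POST the payload to the WebUI API; requires WebUI started with --api.
    req = urllib.request.Request(
        apiurl.rstrip("/") + "/sdapi/v1/txt2img",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # The response holds the generated images, base64-encoded, under "images".
        return json.loads(resp.read())

payload = build_txt2img_payload("portrait of a paladin", steps=25, cfg_scale=6)
# call_txt2img("http://127.0.0.1:7860", payload)  # uncomment with a running WebUI
```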
+
+## Requirements
+The basic requirement is that WebUI is started with --api enabled. To do so, open the webui-user.bat file in the WebUI folder, and add --api to the line with set COMMANDLINE_ARGS.
+For example, this is what my file looks like.
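A minimal webui-user.bat with the API enabled might look like this (paths and any other arguments will differ per install):

```shell
@echo off

set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--api

call webui.bat
```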
+
+
+
+After you have made your changes, restart WebUI with webui-user.bat
+
+### Optional requirements
+#### Quality Gate
+To be able to use the Quality Gate functionality, you need to install the following extension into WebUI:
+
+[image-scorer](https://github.com/tsngo/stable-diffusion-webui-aesthetic-image-scorer)
+
+You can install this via the "Install from URL" option in WebUI
+
+#### Ultimate SD Upscale
+To be able to use Ultimate SD Upscale, you need to have that installed. It can be found here:
+
+[Ultimate SD Upscale](https://github.com/Coyote-A/ultimate-upscale-for-automatic1111)
+
+You can install this via the "Install from URL" option in WebUI
+
+#### ControlNET 1.1 and Tile_resample
+To be able to use ControlNET 1.1 and the tile_resample method, you need to install both.
+
+[ControlNET](https://github.com/Mikubill/sd-webui-controlnet)
+
+[Tile model](https://huggingface.co/lllyasviel/ControlNet-v1-1/blob/main/control_v11f1e_sd15_tile.pth)
+
+Here is an extensive [guide on civitai](https://civitai.com/models/59811/4k-resolution-upscale-8x-controlnet-tile-resample-in-depth-with-resources)
+
+Here is a [youtube video guide from Olivio Sarikas](https://www.youtube.com/watch?v=zrGLEgGFJY4)
+
+## Image locations
+One Button Prompt uses its own locations and filenaming convention.
+
+Go to your WebUI installation folder and then \extensions\onebuttonprompt\automated_outputs\
+
+
+
+Here you should see the One Button Prompt folder structure.
+1. txt2img -> txt2img results from One Button Run are stored here.
+2. prompts -> txt2img prompts and parameters used are stored here.
+3. img2img -> img2img results from One Button Run are stored here. With loopback enabled, they are overwritten when loopback is done.
+4. extras -> extras results from One Button Run are stored here.
+5. upscale_me -> Place images here for using "just upscale" mode.
+
+When using One Button Prompt to generate the prompt, the subject will be part of the name.
+
+Here are some examples, so you can quickly identify the different pictures.
+
+
+
+
+## One Button run tab and options explained
+
+### TXT2IMG
+Important to note, is that the TXT2IMG prompt generation process works with the options set in the Main, Workflow Assist and Advanced tabs.
+
+So you can set up any specifics you want for the prompt generation first.
+
+The __"folder"__ buttons opens to your One Button Prompt automated outputs folders.
+
+There are some general options to set first.
+
+__URL__ -> This should be the URL used by WebUI (see your web browser). By default this is __http://127.0.0.1:7860__, but change it to your specific instance.
+
+__Amount of images to generate__ -> How many times to repeat the entire process; in other words, how many images to generate and upscale.
+
+__Don't generate, only upscale__ -> Place images in the /upscale_me/ folder. When enabled, it skips the TXT2IMG part and starts batch upscaling those images with the set parameters.
+
+__model to use__ -> Select which model to use during generation. __"Currently Selected Model"__ means the model you have loaded right now. __"all"__ means a random model (not inpainting models). Or select a specific one.
+
+
+For TXT2IMG, most options should be familiar. I will explain some of the additions and changes here.
+
+
+__Size to generate__:
+
+1. __all__ -> picks randomly between __portrait__, __wide__ and __square__
+2. __portrait__ -> 512x768
+3. __wide__ -> 768x512
+4. __square__ -> 512x512
+5. __ultrawide__ -> 1280x360 (Don't worry, this one is just for me, and won't be used when picking "all")
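Picking the resolution could be sketched like this; a reconstruction of the behaviour described above, not the actual source code:

```python
import random

# Size names mapped to fixed (width, height) resolutions.
SIZES = {
    "portrait": (512, 768),
    "wide": (768, 512),
    "square": (512, 512),
    "ultrawide": (1280, 360),  # excluded from "all"
}

def pick_size(size):
    # "all" picks randomly between portrait, wide and square.
    if size == "all":
        size = random.choice(["portrait", "wide", "square"])
    return SIZES[size]
```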
+
+__Sampler__:
+
+Added option for __"all"__, picks randomly
+
+__Hires upscaler__:
+
+Added option for __"all"__, picks randomly
+
+Added option __"automatic"__, sets Upscaler and Denoise Strength based on prompt. Example, if the prompt contains anime, it will try to use the anime upscaler.
+
+
+
+
+#### Quality Gate
+Idea from and inspired by xKean. Such an awesome addition.
+See requirements above.
+
+When enabling __Quality Gate__, it will repeat the above TXT2IMG process until:
+- an image reaches the __Quality Score__, or
+- we have reached the __amount of tries__
+
+When an image reaches the quality score, all other images are removed. It then continues with that image.
+If we reach the amount of tries, it will __pick the image with the highest score__. All other images are removed. It then continues with the highest scoring image.
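The gate logic can be sketched like this, with `generate` and `score` as hypothetical stand-ins for the txt2img API call and the aesthetic image scorer:

```python
def quality_gate(generate, score, quality_score, max_tries):
    # Repeat generation until an image reaches the quality score, or the
    # number of tries runs out; then keep only the best-scoring image.
    best_image, best_score = None, float("-inf")
    for _ in range(max_tries):
        image = generate()
        s = score(image)
        if s > best_score:
            best_image, best_score = image, s
        if s >= quality_score:
            break
    return best_image, best_score
```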
+
+
+
+
+### IMG2IMG
+Again, most options should be familiar from using IMG2IMG for upscaling. It defaults to the __"SD Upscale"__ method, which is included in WebUI by default.
+Enable __"Upscale image with IMG2IMG"__ to actually turn this on.
+
+__Amount of times to repeat upscaling with IMG2IMG (loopback)__ -> This controls the number of times IMG2IMG is used to upscale.
+
+I will describe some of the changes from normal.
+
+__model to use__ -> Select which model to use during generation. __"Currently Selected Model"__ means the model you have loaded right now. __"all"__ means a random model (not inpainting models). Or select a specific one.
+
+Note that you can have a different model selected here, than used in the TXT2IMG process.
+
+__Sampler__:
+
+Added option for __"all"__, picks randomly
+
+__Upscaler__:
+
+Added option for __"all"__, picks randomly
+
+Added option __"automatic"__, sets Upscaler and Denoise Strength based on prompt. Example, if the prompt contains anime, it will try to use the anime upscaler.
+
+__adjust denoise each img2img batch__ -> Adds or subtracts this amount of denoise during each IMG2IMG batch. Usually you want a lower denoise when upscaling larger images.
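The resulting denoise values over the loopback batches can be sketched as follows; the clamping to [0, 1] is an assumption of this sketch:

```python
def denoise_schedule(start, mod, batches):
    # Each IMG2IMG loopback batch adds `mod` (which may be negative) to the
    # denoise strength, clamped to the valid [0, 1] range.
    values = []
    denoise = start
    for _ in range(batches):
        values.append(round(max(0.0, min(1.0, denoise)), 2))
        denoise += mod
    return values
```

For example, starting at 0.4 with a modifier of -0.1 over three batches gives decreasing denoise strengths, which suits the "lower denoise for larger images" advice above.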
+
+
+
+
+
+#### Use Ultimate SD Upscale script
+Turn on __"Use Ultimate SD Upscale script instead"__ to use Ultimate SD Upscale. You need to have that extension installed, see requirements above.
+
+Here, all options from Ultimate SD Upscale are available.
+
+It uses the img2img padding for the padding.
+It will always be set to __"Scale from image size"__.
+
+
+
+
+#### Controlnet tile resample
+Best used in combination with Ultimate SD Upscale and the 4x-UltraSharp upscaler; however, you can use it with the normal SD Upscale as well.
+The ControlNet tile model name is filled in for you, but if a newer version comes out, you might have to change it to that specific one. The current version is __"control_v11f1e_sd15_tile [a371b31b]"__
+
+
+
+There is also an option for __"also enable wierd blocky upscale mode"__. This was a bug I found during development, but kept in as a feature. Best used with the 4x-UltraSharp upscaler and a decent denoise (0.7-0.8).
+
+Here is an example result, so you can see what to expect.
+
+
+
+> art by Daria Petrilli,art by J.C. Leyendecker, landscape of a City, at Overcast, Simple illustration, Lonely, Industrial Art, volumetric lighting, DayGlo and electric pink hue, under water
+
+## Upscale with EXTRAS
+The last part is rather straightforward. As a last step, you can upscale the image through the EXTRAS tab.
+All the main options are here; you do need to enable __"Enable upscale with extras"__.
+Again, upscalers set to __"all"__ are picked randomly. You may even get the same upscaler for both.
+
+
+
+# Just Upscale mode
+Next to the Start button is a checkbox for __"Don't generate, only upscale"__
+
+
+
+This mode __skips the entire TXT2IMG part__ of the batch. Instead, it will pick up all image files placed in the __\extensions\onebuttonprompt\automated_outputs\upscale_me\__ folder, and start looping over those instead.
+It will ignore any .txt files.
+It keeps the original files, so you have to remove them before starting the next batch.
+
+Example:
+
+
+
+This method will __read the original prompt and negative prompt__ from the image file (if present) and use them during the upscaling process.
+Thus it can be used for __any image generated by the WebUI__, not just One Button Prompt image files.
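WebUI stores those parameters as a single text block in the PNG's "parameters" info chunk (readable with Pillow via `Image.open(path).info.get("parameters")`). Splitting that block back into a prompt and a negative prompt could look roughly like this:

```python
def split_parameters(parameters_text):
    # The block looks like: "<prompt>\nNegative prompt: <neg>\nSteps: 25, ...".
    # Drop the final settings line, then split off the negative prompt if
    # present; a missing negative prompt yields "".
    body, _, _ = parameters_text.partition("\nSteps: ")
    prompt, _, negative = body.partition("\nNegative prompt: ")
    return prompt.strip(), negative.strip()
```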
+
+This method is perfect for anyone who just generates their images via TXT2IMG, and then wants to automatically __batch upscale__ the best results.
+
+
+# Workflows
+Everyone has their own workflow ideas. Some people might prefer hires fix over tile upscaling, others might prefer upscaling multiple times as a loopback.
+
+For me, I like to cherry pick results, and then use the "Just Upscale mode" to batch upscale my favorites.
+
+You can set everything to "all", and just start generating a bunch of random stuff.
+
+I'm sure your specific way of upscaling and working is supported by all the options offered here. Missing something? Let me know!
+
+## Using your own prompts
+If you don't like the results of the One Button Prompt generator (how could you not!), you can __turn off the prompt generation, and use your own prompts__ instead.
+
+Go to the __"Workflow assist"__ tab, and enable __"Workflow mode"__.
+Put your prompt in the __Workflow prompt__ field, and it will start using that.
+
+
+
+For the __negative prompt__, use the __"Main"__ tab.
+
+Put the negative prompt in the __"Use this negative prompt"__ field.
+
+
+
+And done, you can now use One Button Run and Upscale with your own prompts.
+
+
+### Please enjoy
+Keep having fun!