v11.2.0
parent 89dc9a4904
commit 8bffe17049

@@ -3,7 +3,47 @@ All notable changes to this project will be documented in this file.

For more details on new features, please check the [Manual](./MANUAL.md).
<details open><summary>11.1.0 - 23 June 2024</summary>
<details open><summary>11.2.0 - 29 July 2024</summary>

### Added

- New shortcode `[abs]`: Returns the absolute value of the content
- New shortcode `[break]`: Exit a loop or `[function]` early
- New shortcode `[continue]`: Skips the remaining code within the current iteration of the loop and moves on to the next iteration
- New shortcode `[each]`: Foreach-style loop
- New shortcode `[exists]`: Returns whether the specified variable(s) have been set
- New shortcode `[help]`: Displays documentation for a given shortcode or feature
- New template Autopilot v0.0.1: Quickly set up your basic inference settings, such as CFG and image dimensions
- New template Control Freak v0.0.1: Parses your prompt for relevant ControlNet images to use
- Bodysnatcher v2.1.1
- Magic Spice v0.2.1
- cn_upscaler_v0.1.0: Added support for SDXL upscaling with the Promax model
- New txt2img preset `dpm_2m_fast_v1`: Uses the DPM++ 2M sampler with only 15 steps, which is surprisingly enough to achieve convergence on some of the newer SDXL models
- New ControlNet preset `xl_promax_v1`: Uses Xinsir's excellent all-in-one ControlNet model
- Updated Magic Spice preset `pony_photoreal_spice_v3`
- Updated ControlNet preset `xl_mistoline_v2`: Improved prompt adherence
- `[info]`: Now supports the `directory`, `extension`, and `path` pargs to retrieve different parts of filepath content
- `[info]`: Now parses the `string_count` kwarg for shortcodes
- `[image_edit]`: Now supports the `upscale` kwarg to enlarge images using upscale models
- `[img2img_autosize]`: Now supports the `input` kwarg to run the autosizing feature on a custom image
- `[random]`: Now parses the first parg for shortcodes
- `[replace]`: Now supports the `_delimiter` kwarg for a custom delimiter

### Changed

- Converted stylesheet to SASS

### Fixed

- `[case]`: Fixed an issue when using nested `[for]` loops inside of this block
- `[img2img]`: Removed unnecessary `gradio` import to significantly improve startup time outside of WebUI
- `[image_info]`: Fixed `delimiter` kwarg
- `[txt2mask]`: Fixed order of `big-to-small` mask sort method

### Removed

- `[upscale]`: This shortcode has been retired in favor of `[image_edit upscale]`
- img2img presets no longer contain inference settings that overlap with txt2img presets, such as the sampler name or CFG scale (this is because both presets are now easily selected through the Autopilot template)

</details>

<details><summary>11.1.0 - 23 June 2024</summary>

### Added

- `[txt2mask]`: New method `panoptic_sam` ([source](https://github.com/segments-ai/panoptic-segment-anything)) for instance-based masking - really good results!
@@ -0,0 +1,33 @@

Allows you to exit a loop or `[function]` early.

## Supported shortcodes

The `[break]` statement has been tested inside of the following shortcodes:

- `[for]`
- `[while]`
- `[do]`
- `[each]`
- `[call]` (both external files and functions)

## Example

Here is how you might use `[break]` inside of a `[for]` loop:

```
[for i=0 "i < 3" "i+1"]
[log]Print: [get i][/log]
[if "i > 1"]
[break]
[/if]
[/for]
[log]Script finished.[/log]
```

## Result

```
Print: 0
Print: 1
Script finished.
```
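For readers more familiar with Python, here is a rough analogue of the example above that reproduces the same output (an approximation only - Unprompted's loop mechanics are not identical to a Python `while`):

```python
# Rough Python analogue of the [break] example above; it reproduces the
# same three lines of output.
lines = []
i = 0
while i < 3:
    lines.append(f"Print: {i}")
    i += 1
    if i > 1:
        break
lines.append("Script finished.")
print("\n".join(lines))
```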
@@ -0,0 +1,43 @@

Foreach-style loop, similar to Python.

Returns the content `n` times, where `n` is equal to the length of the `list` parg.

## Required pargs

### item

The value of the nth index in the `list` parg.

### list

The list variable to iterate over.

## Optional kwargs

### idx

The variable name to store the iteration counter in. Defaults to `idx`.

## Example

In this example, we'll populate a list variable called `test_array` with `a|b|c`, and then loop through it with `[each]`:

```
[array test_array 0="a" 1="b" 2="c"]
[each item test_array]
The iteration is: [get idx]
The value of item is: [get item]
[/each]
```

## Result

```
The iteration is: 0
The value of item is: a
The iteration is: 1
The value of item is: b
The iteration is: 2
The value of item is: c
```
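Since the doc describes `[each]` as "similar to Python," the same example can be expressed with `enumerate()`, which yields the same counter/item pairing that `[each]` exposes via `idx`:

```python
# Python analogue of the [each] example above.
test_array = ["a", "b", "c"]
lines = []
for idx, item in enumerate(test_array):
    lines.append(f"The iteration is: {idx}")
    lines.append(f"The value of item is: {item}")
print("\n".join(lines))
```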
@@ -0,0 +1,11 @@

Returns documentation for the requested feature or shortcode.

Each parg is a keyword that corresponds to the feature or shortcode you want to learn more about.

This shortcode is particularly useful in ComfyUI - you can connect the `STRING` output of the Unprompted node to a Preview Text node for an inline reference guide.

## System arguments

### _online

A parg that causes the shortcode to open relevant documentation in your web browser, rather than loading the Markdown files from disk.
@@ -241,4 +241,18 @@ Since arguments are processed in order, you can run `save` after some operations

```
[image_edit width=350 height=350 save="C:/image/after_resizing.png" mask="C:/some/mask.png"]
```

### upscale

Enhances the `input` image using one or more of the upscaler methods available in the A1111 WebUI.

The value of this kwarg is a delimited list of upscaler model name(s) to use.

Supports the `scale` kwarg, which is the scale factor to use. Defaults to 1.

Supports the `upscale_alpha` kwarg, which is the opacity value to use when blending the result back into the original image. Defaults to 1, or full opacity.

Supports the `upscale_model_limit` kwarg, which is the maximum number of `upscale` models to use. Defaults to 100. For example, let's say you supply 4 different models and set `upscale_model_limit` to 1 - if the first two models are not found on the user's system, it will use the third model and disregard the fourth.

Supports the `upscale_keep_res` parg, which will maintain the original resolution of the `input` image.
@@ -6,7 +6,7 @@ Supports `word_count` for retrieving the number of words in the content, using s

Supports `sentence_count` for retrieving the number of sentences in the content. Powered by the nltk library.

Supports `filename` for retreiving the base name of a file from the filepath content. For example, if the content is `C:/pictures/delicious hamburger.png` then this will return `delicious hamburger`.
Supports `filename` for retrieving the base name of a file from the filepath content. For example, if the content is `C:/pictures/delicious hamburger.png` then this will return `delicious hamburger`.

Supports `string_count` for retrieving the number of occurrences of a custom substring in the content. For example, `[info string_count="the"]the frog and the dog and the log[/info]` will return 3.
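The `string_count` behavior matches Python's built-in `str.count` (whether `[info]` uses it internally is an assumption); the documented example checks out:

```python
# Sanity check of the string_count example above using str.count.
content = "the frog and the dog and the log"
print(content.count("the"))  # 3
```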
@@ -1,23 +0,0 @@

Resizes an image to the given dimensions; works with the SD image by default.

The first parg is the path to your `image`. It can also take a PIL Image object.

Supports the `save_out` parg which is the path to save the resized image to. If you do not specify this, the new image will overwrite the original.

Supports `width` and `height` kwargs which are the new dimensions of the image.

Supports the `unit` kwarg which is the unit of measurement for the `width` and `height` kwargs. Options include `px` and `%`. Defaults to `px`.

Supports the `technique` kwarg which is the method of resizing. Options include `scale` and `crop`. Defaults to `scale`.

Supports the `resample_method` kwarg which is the method of resampling when using the `scale` technique. Options include `Nearest Neighbor`, `Box`, `Bilinear`, `Hamming`, `Bicubic`, and `Lanczos`. Defaults to `Lanczos`.

Supports the `origin` kwarg which is the anchor point of the image when using the `crop` technique. Options include `top_left`, `top_center`, `top_right`, `middle_left`, `middle_center`, `middle_right`, `bottom_left`, `bottom_center`, and `bottom_right`. Defaults to `middle_center`.

Supports the `keep_ratio` parg which will preserve the aspect ratio of the image. Note that if you specify both `width` and `height`, it will take precedence over `keep_ratio`.

Supports the `min_width` and `min_height` kwargs which can be used to set a minimum size for the image. This is applied after the `keep_ratio` parg. If the image is smaller than the minimum, it will be scaled up to the minimum.

```
[resize "a:/inbox/picture.png" width=350]
```
@@ -1,11 +0,0 @@

Enhances a given image using one or more of the upscaler methods available in the A1111 WebUI.

Supports the `image` kwarg which is the path to the image you wish to enhance. Defaults to the Stable Diffusion output image.

Supports the `models` kwarg which is a delimited list of upscaler model names to use.

Supports the `scale` kwarg which is the scale factor to use. Defaults to 1.

Supports the `visibility` kwarg which is the alpha value to use when blending the result back into the original image. Defaults to 1.

Supports the `keep_res` parg which will maintain the original resolution of the image.
@@ -158,16 +158,16 @@ def str_to_rgb(color_string):


def str_to_pil(string):
    from PIL import Image
    log = get_logger()

    if isinstance(string, str) and string.startswith("<PIL.Image.Image"):
        # Get the PIL object from the memory address
        # <PIL.Image.Image image mode=RGBA size=1024x1024 at 0x2649930D270>
        # <PIL.Image.Image image mode=RGBA size=1024x1024 at 0x...>

        try:
            import ctypes
            import re
            from PIL import Image

            # Extract the memory address from the string
            address = re.search(r"at (0x[0-9A-F]+)", string).group(1)
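The address-extraction step above can be exercised on its own. A minimal sketch (`extract_pil_address` is a hypothetical name; the character class is broadened here to also accept lowercase hex digits, which CPython reprs typically emit):

```python
import re

def extract_pil_address(string):
    # Pull the hex memory address out of a PIL Image repr string;
    # returns it as an int, or None if no address is found.
    match = re.search(r"at (0x[0-9A-Fa-f]+)", string)
    return int(match.group(1), 16) if match else None

print(extract_pil_address("<PIL.Image.Image image mode=RGBA size=1024x1024 at 0x2649930D270>"))
```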
@@ -8,8 +8,9 @@ import sys

import time
import logging
from . import helpers
from enum import IntEnum, auto

VERSION = "11.1.0"
VERSION = "11.2.0"


def parse_config(base_dir="."):
@@ -30,10 +31,14 @@ def parse_config(base_dir="."):


class Unprompted:

    shortcodes = shortcodes
    FlowBreaks = IntEnum("FlowBreaks", ["BREAK", "CONTINUE"], start=0)

    def load_shortcodes(self):
        start_time = time.time()
        self.log.info("Initializing Unprompted shortcode parser...")
        # Reset variables for reload support

        self.shortcode_objects = {}
        self.shortcode_modules = {}
        self.shortcode_user_vars = {}
@@ -233,6 +238,9 @@ class Unprompted:

        self.log.debug("Goodbye routine completed.")

    def process_string(self, string, context=None, cleanup_extra_spaces=None):
        self.shortcodes.global_did_break = False
        self.shortcodes.global_did_continue = False

        if cleanup_extra_spaces == None:
            cleanup_extra_spaces = self.Config.syntax.cleanup_extra_spaces
@@ -240,10 +248,30 @@ class Unprompted:

        if context:
            self.current_context = context
        # First, sanitize contents
        string = self.shortcode_parser.parse(self.sanitize_pre(string, self.Config.syntax.sanitize_before), context)
        try:
            string = self.shortcode_parser.parse(self.sanitize_pre(string, self.Config.syntax.sanitize_before), context)
        except:
            self.log.exception("Could not parse contents.")
            pass
        self.conditional_depth = max(0, self.conditional_depth - 1)

        return (self.sanitize_post(string, cleanup_extra_spaces))

    def handle_breaks(self):
        to_return = -1

        if self.shortcodes.global_did_continue:
            self.shortcodes.global_did_continue = False
            to_return = self.FlowBreaks.CONTINUE
        # This should always go last
        elif self.shortcodes.global_did_break:
            to_return = self.FlowBreaks.BREAK

        if to_return > -1:
            self.shortcodes.global_did_break = False

        return to_return

    def sanitize_pre(self, string, rules_obj, only_remove_last=False):
        for k, v in rules_obj.__dict__.items():
            if only_remove_last:
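The flag-polling contract between `handle_breaks` and the loop shortcodes can be sketched in isolation (class and variable names below are illustrative stand-ins, not the project's actual module state):

```python
from enum import IntEnum

FlowBreaks = IntEnum("FlowBreaks", ["BREAK", "CONTINUE"], start=0)

class FlowState:
    # Stand-ins for shortcodes.global_did_break / global_did_continue.
    did_break = False
    did_continue = False

    def handle_breaks(self):
        to_return = -1
        if self.did_continue:
            self.did_continue = False
            to_return = FlowBreaks.CONTINUE
        # The break check goes last: [continue] raises both flags so that
        # the remaining content of the current iteration is skipped.
        elif self.did_break:
            to_return = FlowBreaks.BREAK
        if to_return > -1:
            self.did_break = False
        return to_return

state = FlowState()
state.did_break = True
print(state.handle_breaks() == FlowBreaks.BREAK)  # flags are consumed once
print(state.handle_breaks() == -1)
```

Loop shortcodes poll this after each iteration and exit on `BREAK`, which is why the flags must be reset as soon as they are observed.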
@@ -370,6 +398,32 @@ class Unprompted:

        return default

    def validate_args(self, logger=None, parg_count=None, min_parg_count=None, max_parg_count=None, kwarg_count=None, min_kwarg_count=None, max_kwarg_count=None):
        """Validates the number of arguments passed to a shortcode."""

        if not logger:
            logger = self.log.error

        if parg_count and len(self.pargs) != parg_count:
            logger(f"Expected {parg_count} positional arguments, but received {len(self.pargs)}")
            return False
        if min_parg_count and len(self.pargs) < min_parg_count:
            logger(f"Expected at least {min_parg_count} positional arguments, but received {len(self.pargs)}")
            return False
        if max_parg_count and len(self.pargs) > max_parg_count:
            logger(f"Expected at most {max_parg_count} positional arguments, but received {len(self.pargs)}")
            return False
        if kwarg_count and len(self.kwargs) != kwarg_count:
            logger(f"Expected {kwarg_count} keyword arguments, but received {len(self.kwargs)}")
            return False
        if min_kwarg_count and len(self.kwargs) < min_kwarg_count:
            logger(f"Expected at least {min_kwarg_count} keyword arguments, but received {len(self.kwargs)}")
            return False
        if max_kwarg_count and len(self.kwargs) > max_kwarg_count:
            logger(f"Expected at most {max_kwarg_count} keyword arguments, but received {len(self.kwargs)}")
            return False
        return True

    def parse_advanced(self, string, context=None):
        """First runs the string through parse_alt_tags, the result of which then goes through simpleeval"""
        if string is None:
@@ -551,12 +605,14 @@ class Unprompted:

        if att_split[2] == "image":
            # Check if we supplied a string
            if isinstance(self.shortcode_user_vars[att], str):
                import imageio
                this_val = imageio.imread(self.str_replace_macros(self.shortcode_user_vars[att]))
                image = helpers.str_to_pil(self.shortcode_user_vars[att])
                # import imageio
                # this_val = imageio.imread(self.str_replace_macros(self.shortcode_user_vars[att]))
            # Otherwise, assume we supplied a PIL image and convert to numpy
            else:
                import numpy
                this_val = numpy.array(self.shortcode_user_vars[att])
                image = self.shortcode_user_vars[att]
            import numpy
            this_val = numpy.array(image)
        else:
            this_val = self.shortcode_user_vars[att]
        # Apply preset model names
@@ -9,9 +9,13 @@ global_keywords = {}

# The set of all end-words for globally-registered block-scoped shortcodes.
global_endwords = set()

global_did_break = False
global_did_continue = False


# Decorator function for globally registering shortcode handlers.
def register(keyword, endword=None, preprocessor=None):

    def register_function(func):
        global_keywords[keyword] = (func, endword, preprocessor)
        if endword:
@@ -48,6 +52,7 @@ class ShortcodeRenderingError(ShortcodeError):


# Input text is parsed into a tree of Node instances.
class Node:

    def __init__(self):
        self.children = []
@@ -57,10 +62,14 @@ class Node:


# Represents ordinary text not enclosed in tag delimiters.
class Text(Node):

    def __init__(self, text):
        self.text = text

    def render(self, context):
        global global_did_break
        if global_did_break:
            return ""
        return self.text
@@ -110,7 +119,18 @@ class AtomicShortcode(Shortcode):

    # in a ShortcodeRenderingError. The original exception will still be
    # available via the exception's __cause__ attribute.
    def render(self, context):
        if self.token.blocked: return self.token.raw_text
        global global_did_break, global_did_continue
        if self.token.blocked:
            return self.token.raw_text
        if self.token.keyword == "break":
            global_did_break = True
        elif self.token.keyword == "continue":
            global_did_continue = True
            global_did_break = True

        if global_did_break:
            return ""

        try:
            return str(self.handler(self.token.keyword, self.pargs, self.kwargs, context))
        except Exception as ex:
@@ -134,7 +154,11 @@ class BlockShortcode(Shortcode):

    # in a ShortcodeRenderingError. The original exception will still be
    # available via the exception's __cause__ attribute.
    def render(self, context):
        if self.token.blocked: return self.token.raw_text
        global global_did_break
        if self.token.blocked:
            return self.token.raw_text
        elif global_did_break:
            return ""
        content = ''.join(child.render(context) for child in self.children)
        try:
            return str(self.handler(self.token.keyword, self.pargs, self.kwargs, context, content))
@@ -168,6 +192,7 @@ class BlockShortcode(Shortcode):

# If `ignore_unknown` is true, unknown shortcodes are ignored. If this parameter
# is false (the default), unknown shortcodes cause an error.
class Parser:

    def __init__(self, start='[%', end='%]', esc='\\', inherit_globals=True, ignore_unknown=False):
        self.start = start
        self.end = end
@@ -192,41 +217,51 @@ class Parser:

        lexer = Lexer(text, self.start, self.end, self.esc_start)
        for token in lexer.tokenize():
            if self.blocking_depth > 0: token.blocked = True
            if self.blocking_depth > 0:
                token.blocked = True

            if token.type == "TEXT":
                stack[-1].children.append(Text(token.text))
            elif token.keyword in self.keywords:
                # Hardcoded bypass for multiline comments
                if len(expecting) > 0 and "##" in self.keywords and expecting[-1] == self.keywords["##"][1]: continue
                if len(expecting) > 0 and "##" in self.keywords and expecting[-1] == self.keywords["##"][1]:
                    continue

                handler, endword, preprocessor = self.keywords[token.keyword]
                if endword:
                    node = BlockShortcode(token, handler, preprocessor)

                    if self.blocking_depth: self.blocking_depth += 1
                    if self.blocking_depth:
                        self.blocking_depth += 1
                    elif preprocessor:
                        added_depth = int(node.run_preprocess(context))
                        self.blocking_depth += added_depth

                    expecting.append(endword)
                    stack[-1].children.append(node)
                    if self.blocking_depth < 2: stack.append(node)
                    if self.blocking_depth < 2:
                        stack.append(node)
                else:
                    node = AtomicShortcode(token, handler, preprocessor)
                    if preprocessor: node.render_preprocess(context)
                    if preprocessor:
                        node.render_preprocess(context)
                    stack[-1].children.append(node)
            elif token.keyword in self.endwords:
                if len(expecting) == 0:
                    msg = f"Unexpected '{token.keyword}' tag in line {token.line_number}."
                    raise ShortcodeSyntaxError(msg)
                elif token.keyword == expecting[-1]:
                    if self.blocking_depth > 0: self.blocking_depth -= 1
                    if self.blocking_depth > 0:
                        self.blocking_depth -= 1
                    expecting.pop()

                    if self.blocking_depth > 0: stack[-1].children.append(Text(token.raw_text))
                    else: stack.pop()
                    if self.blocking_depth > 0:
                        stack[-1].children.append(Text(token.raw_text))
                    else:
                        stack.pop()

                elif token.blocked: stack[-1].children.append(Text(token.raw_text))
                elif token.blocked:
                    stack[-1].children.append(Text(token.raw_text))
                else:
                    msg = f"Unexpected '{token.keyword}' tag in line {token.line_number}. "
                    msg += f"The shortcode parser was expecting a closing '{expecting[-1]}' tag."
@@ -257,6 +292,7 @@ class Parser:


class Token:

    def __init__(self, token_type, token_text, raw_text, line_number):
        words = token_text.split()
        self.keyword = words[0] if words else ''

@@ -271,6 +307,1 @@ class Token:


class Lexer:

    def __init__(self, text, start, end, esc_start):
        self.text = text
        self.start = start
setup.py

@@ -5,12 +5,13 @@ from setuptools import setup

setup(
    name='Unprompted',
    version='11.1.0',
    version='11.2.0',
    package_dir={'unprompted': '.'},
    packages=['unprompted.lib_unprompted', 'unprompted.shortcodes', 'unprompted.shortcodes.basic', 'unprompted.shortcodes.stable_diffusion', 'unprompted.templates'],
    packages=['unprompted.lib_unprompted', 'unprompted.shortcodes', 'unprompted.shortcodes.basic', 'unprompted.shortcodes.stable_diffusion', 'unprompted.templates', 'unprompted.docs'],
    package_data={
        'unprompted.lib_unprompted': ['*.json'],
        'unprompted.templates': ['common/*.txt']
        'unprompted.templates': ['common/*.txt'],
        'unprompted.docs': ['*.md']
    },
    include_package_data=True,
)
@@ -0,0 +1,15 @@

class Shortcode():

    def __init__(self, Unprompted):
        self.Unprompted = Unprompted
        self.description = "Return the absolute value of the content."

    def run_block(self, pargs, kwargs, context, content):
        try:
            content = abs(float(content))
        except:
            self.log.exception("Unable to get the absolute value of content.")
        return content

    def ui(self, gr):
        pass
@@ -1,4 +1,5 @@

class Shortcode():

    def __init__(self, Unprompted):
        self.Unprompted = Unprompted
        self.description = "Manages a group or list of values."

@@ -24,8 +25,6 @@ class Shortcode():

                self.Unprompted.log.debug(f"String var {parg} was split into a list.")
                continue
            # Print remaining pargs
            # self.log.debug(f"What is pargs[0]? {pargs[0]}")
            # self.log.debug(f"What is the second array index? {int(self.Unprompted.parse_advanced(parg, context))}")
            result_list.append(str(self.Unprompted.shortcode_user_vars[pargs[0]][int(self.Unprompted.parse_advanced(parg, context))]))

        # Set new array values
@@ -53,12 +52,13 @@ class Shortcode():

        step = self.Unprompted.parse_arg("_step", 1)

        if "_append" in kwargs:
            split_append = self.Unprompted.parse_advanced(kwargs["_append"], context).split(delimiter)
            split_append = kwargs["_append"].split(delimiter)
            # str(self.Unprompted.parse_advanced(kwargs["_append"], context)).split(delimiter)
            for idx, item in enumerate(split_append):
                split_append[idx] = self.Unprompted.parse_advanced(item, context)
            self.Unprompted.shortcode_user_vars[pargs[0]].extend(split_append)
        if "_prepend" in kwargs:
            split_prepend = self.Unprompted.parse_advanced(kwargs["_prepend"], context).split(delimiter)
            split_prepend = kwargs["_prepend"].split(delimiter)
            for idx, item in enumerate(split_prepend):
                split_prepend[idx] = self.Unprompted.parse_advanced(item, context)
            split_prepend.extend(self.Unprompted.shortcode_user_vars[pargs[0]])
@@ -0,0 +1,12 @@

class Shortcode():

    def __init__(self, Unprompted):
        self.Unprompted = Unprompted
        self.description = "Exits the current loop or function early."

    def run_atomic(self, pargs, kwargs, context):
        # self.log.debug("Exiting early.")
        return ""

    def ui(self, gr):
        pass
@@ -4,6 +4,7 @@ import os


class Shortcode():

    def __init__(self, Unprompted):
        self.Unprompted = Unprompted
        self.description = "Processes the function or filepath content of the first parg."

@@ -71,6 +72,7 @@ class Shortcode():

        self.Unprompted.shortcode_objects["set"].run_block([key], {}, context, self.Unprompted.parse_alt_tags(value))

        contents = self.Unprompted.process_string(contents, next_context)
        break_type = self.Unprompted.handle_breaks()

        if contents == "_false":
            contents = ""
@@ -1,4 +1,5 @@

class Shortcode():

    def __init__(self, Unprompted):
        self.Unprompted = Unprompted
        self.description = "Use within [switch] to run different logic blocks depending on the value of a var."

@@ -14,12 +15,12 @@ class Shortcode():

        # Default case
        if len(pargs) == 0:
            if _var != "":
                return (self.Unprompted.parse_alt_tags(content, context))
                return (self.Unprompted.process_string(content, context))
        # Supports matching against multiple pargs
        for parg in pargs:
            if helpers.is_equal(_var, self.Unprompted.parse_advanced(parg, context)):
                self.Unprompted.shortcode_objects["switch"].switch_var = ""
                return (self.Unprompted.parse_alt_tags(content, context))
                return (self.Unprompted.process_string(content, context))

        return ("")
@@ -0,0 +1,11 @@

class Shortcode():

    def __init__(self, Unprompted):
        self.Unprompted = Unprompted
        self.description = "Skips the remaining code within the current iteration of the loop and moves on to the next iteration."

    def run_atomic(self, pargs, kwargs, context):
        return ""

    def ui(self, gr):
        pass
@@ -1,4 +1,5 @@

class Shortcode():

    def __init__(self, Unprompted):
        self.Unprompted = Unprompted
        self.description = "It's a do-until loop."

@@ -15,8 +16,14 @@ class Shortcode():

            else:
                final_string += self.Unprompted.process_string(self.Unprompted.sanitize_pre(content, self.Unprompted.Config.syntax.sanitize_block, True), context, False)

            break_type = self.Unprompted.handle_breaks()
            if break_type == self.Unprompted.FlowBreaks.BREAK:
                break

            if (self.Unprompted.parse_advanced(kwargs["until"], context)):
                return (final_string)
                break

        return final_string

    def ui(self, gr):
        return [
@@ -0,0 +1,44 @@

class Shortcode():

    def __init__(self, Unprompted):
        self.Unprompted = Unprompted
        self.description = "Foreach loop."

    def preprocess_block(self, pargs, kwargs, context):
        return True

    def run_block(self, pargs, kwargs, context, content):

        if not self.Unprompted.validate_args(self.log.error, min_parg_count=2):
            return ""

        final_string = ""

        idx_var = self.Unprompted.parse_arg("idx", "idx")

        the_var = self.Unprompted.parse_alt_tags(pargs[0], context)
        list_name = self.Unprompted.parse_alt_tags(pargs[1], context)

        if list_name in self.Unprompted.shortcode_user_vars:
            the_list = self.Unprompted.shortcode_user_vars[list_name]
        else:
            self.log.error(f"List variable {list_name} not found.")
            return ""

        for _idx, item in enumerate(the_list):
            self.log.debug(f"Looping through item {_idx} in {list_name}")
            self.Unprompted.shortcode_user_vars[the_var] = item
            self.Unprompted.shortcode_user_vars[idx_var] = _idx

            final_string += self.Unprompted.process_string(self.Unprompted.sanitize_pre(content, self.Unprompted.Config.syntax.sanitize_block, True), context, False)
            break_type = self.Unprompted.handle_breaks()
            if break_type == self.Unprompted.FlowBreaks.BREAK:
                break

        return final_string

    def ui(self, gr):
        return [
            gr.Textbox(label="Iterator variable 🡢 str", max_lines=1, placeholder="item"),
            gr.Textbox(label="List variable 🡢 str", max_lines=1, placeholder="my_list"),
        ]
@@ -0,0 +1,23 @@

class Shortcode():

    def __init__(self, Unprompted):
        self.Unprompted = Unprompted
        self.description = "Returns whether the specified variable exists."

    def run_atomic(self, pargs, kwargs, context):
        _any = "_any" in pargs
        for parg in pargs:
            if parg == "_any":
                continue
            the_var = self.Unprompted.parse_advanced(parg, context)
            if the_var in self.Unprompted.shortcode_user_vars:
                if _any:
                    return True
                continue
            else:
                return False

        return True

    def ui(self, gr):
        pass
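The `_any` semantics above can be sketched as a standalone function (names here are illustrative). Note that the check is order-sensitive: with `_any`, an unset variable encountered before any set one still returns False:

```python
# Standalone sketch of the [exists] logic above: with "_any", one set
# variable is enough; otherwise every listed variable must be set.
def exists(user_vars, pargs):
    _any = "_any" in pargs
    for parg in pargs:
        if parg == "_any":
            continue
        if parg in user_vars:
            if _any:
                return True
            continue
        return False
    return True

user_vars = {"a": 1, "b": 2}
print(exists(user_vars, ["a", "b"]))          # True: all set
print(exists(user_vars, ["a", "c"]))          # False: "c" missing
print(exists(user_vars, ["_any", "a", "c"]))  # True: "a" is set
```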
@@ -1,4 +1,5 @@

class Shortcode():

    def __init__(self, Unprompted):
        self.Unprompted = Unprompted
        self.description = "It's a for loop."

@@ -9,25 +10,37 @@ class Shortcode():

    def run_block(self, pargs, kwargs, context, content):
        final_string = ""
        this_var = ""
        test = pargs[0]
        expr = pargs[1]

        for key, value in kwargs.items():
            if (self.Unprompted.is_system_arg(key)):
                continue  # Skips system arguments
            this_var = key
            self.Unprompted.shortcode_objects["set"].run_block([key], [], context, value)
            # Get the datatype
            datatype = type(self.Unprompted.shortcode_user_vars[key])

        while True:
            if (self.Unprompted.parse_advanced(pargs[0], context)):

            if (self.Unprompted.parse_advanced(test, context)):
                if "_raw" in pargs:
                    final_string += self.Unprompted.process_string(content, context)
                else:
                    final_string += self.Unprompted.process_string(self.Unprompted.sanitize_pre(content, self.Unprompted.Config.syntax.sanitize_block, True), context, False)

                break_type = self.Unprompted.handle_breaks()
                if break_type == self.Unprompted.FlowBreaks.BREAK:
                    break

                if this_var:
                    self.Unprompted.shortcode_user_vars[this_var] = self.Unprompted.parse_advanced(pargs[1], context)
                    self.Unprompted.shortcode_user_vars[this_var] = self.Unprompted.parse_advanced(expr, context)
                else:
                    self.Unprompted.parse_advanced(pargs[1], context)
                    self.Unprompted.parse_arg(expr, datatype=datatype)
            else:
                return (final_string)
                break

        return final_string

    def ui(self, gr):
        return [
@@ -0,0 +1,50 @@

class Shortcode():

    def __init__(self, Unprompted):
        self.Unprompted = Unprompted
        self.description = "Returns documentation for the requested shortcode or feature."

    def run_atomic(self, pargs, kwargs, context):
        import os, webbrowser
        help_string = ""
        _online = self.Unprompted.parse_arg("_online", False)

        # Add "help" to pargs for documentation on this shortcode
        if not pargs:
            pargs = ["help"]

        def retrieve_doc(subpath, title=None):
            if not title:
                title = subpath

            if _online:
                # Open documentation in the browser
                webbrowser.open(f"https://github.com/ThereforeGames/unprompted/tree/main/docs/{subpath}.md")
                return f"Documentation for '{title}' opened in browser."
            else:
                doc_file = f"{self.Unprompted.base_dir}/docs/{subpath}.md"
                if os.path.exists(doc_file):
                    with open(doc_file, "r", encoding="utf-8") as f:
                        return f"# {title}\n\n" + f.read()
                else:
                    return f"Documentation for '{title}' not found: {subpath}.md"

        for parg in pargs:
            parg = parg.lower()

            if self.Unprompted.is_system_arg(parg):
                continue
            if parg in self.Unprompted.shortcode_objects:
                help_string += retrieve_doc(f"shortcodes/{parg}", f"{self.Unprompted.Config.syntax.tag_start}{parg}{self.Unprompted.Config.syntax.tag_end}")
            elif parg == "manual":
                help_string += retrieve_doc("MANUAL")
            elif parg == "changelog":
                help_string += retrieve_doc("CHANGELOG")

        # Circumvent newline sanitization rules of Unprompted
        help_string = help_string.replace("\n", "%NEWLINE%")

        return help_string

    def ui(self, gr):
        pass
@@ -7,6 +7,7 @@ class Shortcode():
        self.destination = "after"
        # Prevent memory address errors
        self.copied_images = []
+       self.remembered_images = []
        self.resample_methods = helpers.pil_resampling_dict

    def run_atomic(self, pargs, kwargs, context):

@@ -18,6 +19,9 @@ class Shortcode():
            self.log.warning("Could not import torch. Some features may not work.")
            pass

+       if "unload_cache" in pargs:
+           self.remembered_images = []

        image = self.Unprompted.parse_image_kwarg("input")
        if not image:
            return ""

@@ -34,7 +38,31 @@ class Shortcode():
        combined_args = pargs + list(kwargs.keys())

        for this_arg in combined_args:
-           if this_arg == "mask":
+           if this_arg == "upscale":
+               from modules import shared
+
+               _models = helpers.ensure(self.Unprompted.parse_arg(this_arg, "None"), list)
+               scale = self.Unprompted.parse_arg("scale", 1)
+               visibility = self.Unprompted.parse_arg("upscale_alpha", 1.0)
+               limit = self.Unprompted.parse_arg("upscale_model_limit", 100)
+               keep_res = self.Unprompted.parse_arg("upscale_keep_res", False)
+
+               models = []
+               for model in _models:
+                   for upscaler in shared.sd_upscalers:
+                       if upscaler.name == model:
+                           models.append(upscaler)
+                           break
+                   if len(models) >= limit:
+                       self.log.info(f"Upscale model limit satisfied ({limit}). Proceeding...")
+                       break
+
+               for model in models:
+                   self.log.info(f"Upscaling {scale}x with {model.name}...")
+                   image = model.scaler.upscale(image, scale, model.data_path)
+                   if keep_res:
+                       image = image.resize(orig_image.size, Image.ANTIALIAS)
+           elif this_arg == "mask":
                mask = self.Unprompted.parse_image_kwarg(this_arg)
                if not mask:
                    continue

@@ -56,7 +84,8 @@ class Shortcode():
                        r, g, b, a = pixels[x, y]
                        if a == 0:
                            pixels[x, y] = (0, 0, 0, 0)

+           elif this_arg == "remember":
+               self.remembered_images.append(image)
            elif this_arg == "mode":
                mode = self.Unprompted.parse_arg("mode", "RGB")
                image = image.convert(mode)

@@ -71,6 +100,10 @@ class Shortcode():
                paste_x = self.Unprompted.parse_arg("paste_x", 0)
                paste_y = self.Unprompted.parse_arg("paste_y", 0)

+               # paste_origin = self.Unprompted.parse_arg("paste_origin", "top_left")
+               # if paste_origin == "top_right":
+               # 	paste_x =

                # Ensure that paste is RGBA
                if paste.mode != "RGBA":
                    paste = paste.convert("RGBA")

@@ -8,7 +8,7 @@ class Shortcode():
    def run_atomic(self, pargs, kwargs, context):
        from PIL import Image
        return_string = ""
-       delimiter = ","
+       delimiter = self.Unprompted.parse_arg("delimiter", ",")

        image = self.Unprompted.parse_image_kwarg("file")
        if not image:

@@ -1,4 +1,5 @@
class Shortcode():
+
    def __init__(self, Unprompted):
        self.Unprompted = Unprompted
        self.description = "Returns various types of metadata about the content."

@@ -19,8 +20,18 @@ class Shortcode():
        if "filename" in pargs:
            from pathlib import Path
            return_string += Path(content).stem + delimiter
+       if "directory" in pargs:
+           from pathlib import Path
+           return_string += str(Path(content).parent.name) + delimiter
+       if "extension" in pargs:
+           from pathlib import Path
+           return_string += Path(content).suffix + delimiter
+       if "path" in pargs:
+           from pathlib import Path
+           return_string += str(Path(content).resolve().parent) + delimiter
        if "string_count" in kwargs:
-           return_string += str(content.count(kwargs["string_count"])) + delimiter
+           str_to_check = self.Unprompted.parse_arg("string_count", "")
+           return_string += str(content.count(str_to_check)) + delimiter
        if "clip_count" in pargs:
            try:
                from ldm.modules.encoders.modules import FrozenCLIPEmbedder

@@ -1,11 +1,12 @@
class Shortcode():
+
    def __init__(self, Unprompted):
        self.Unprompted = Unprompted
        self.description = "Returns the number of items in a delimited string."

    def run_atomic(self, pargs, kwargs, context):
-       _delimiter = self.Unprompted.parse_advanced(kwargs["_delimiter"], context) if "_delimiter" in kwargs else self.Unprompted.Config.syntax.delimiter
-       _max = self.Unprompted.parse_advanced(kwargs["_max"], context) if "_max" in kwargs else -1
+       _delimiter = self.Unprompted.parse_arg("_delimiter", self.Unprompted.Config.syntax.delimiter)
+       _max = self.Unprompted.parse_arg("_max", -1)
        this_obj = self.Unprompted.parse_advanced(pargs[0], context)
        # Support direct array
        if isinstance(this_obj, list):

@@ -2,6 +2,7 @@ import random


class Shortcode():
+
    def __init__(self, Unprompted):
        self.Unprompted = Unprompted
        self.description = "Returns a random number between 0 and a given max value (inclusive)"

@@ -12,7 +13,7 @@ class Shortcode():
            _min = self.Unprompted.parse_advanced(kwargs["_min"], context)
            _max = self.Unprompted.parse_advanced(kwargs["_max"], context)
        else:
-           _max = pargs[0]
+           _max = self.Unprompted.parse_advanced(pargs[0], context)

        if ("_float" in pargs):
            return (random.uniform(float(_min), float(_max)))

@@ -1,4 +1,5 @@
class Shortcode():
+
    def __init__(self, Unprompted):
        self.Unprompted = Unprompted
        self.description = "Updates a string using the arguments for replacement logic."

@@ -12,6 +13,7 @@ class Shortcode():
        _insensitive = self.Unprompted.shortcode_var_is_true("_insensitive", pargs, kwargs)
        _strict = self.Unprompted.parse_arg("_strict", False)
+       _delimiter = self.Unprompted.parse_advanced("_delimiter", self.Unprompted.Config.syntax.delimiter)

        if "_load" in kwargs:
            jsons = self.Unprompted.load_jsons(self.Unprompted.parse_advanced(kwargs["_load"], context), context)

@@ -22,8 +24,8 @@ class Shortcode():

        for key, value in kwargs.items():
            if (key == "_from"):
-               from_values.extend(self.Unprompted.parse_advanced(value, context).split(self.Unprompted.Config.syntax.delimiter))
-               to_values.extend(self.Unprompted.parse_advanced(kwargs["_to"] if "_to" in kwargs else "", context).split(self.Unprompted.Config.syntax.delimiter))
+               from_values.extend(self.Unprompted.parse_advanced(value, context).split(_delimiter))
+               to_values.extend(self.Unprompted.parse_advanced(kwargs["_to"] if "_to" in kwargs else "", context).split(_delimiter))

            elif (key[0] != "_"):
                from_values.append(self.Unprompted.parse_advanced(key, context))

@@ -2,6 +2,7 @@ import operator


class Shortcode():
+
    def __init__(self, Unprompted):
        import lib_unprompted.helpers as helpers
        self.Unprompted = Unprompted

@@ -61,6 +62,13 @@ class Shortcode():
                final_string += self.Unprompted.process_string(content, context)
            else:
                final_string += self.Unprompted.process_string(self.Unprompted.sanitize_pre(content, self.Unprompted.Config.syntax.sanitize_block, True), context, False)

+           break_type = self.Unprompted.handle_breaks()
+           if break_type == self.Unprompted.FlowBreaks.BREAK:
+               break
+           elif break_type == self.Unprompted.FlowBreaks.CONTINUE:
+               continue

        else:
            break

@@ -1,10 +1,5 @@
-try:
-	import gradio as gr
-except:
-	pass
-
class Shortcode():
    def __init__(self, Unprompted):
        self.Unprompted = Unprompted
        self.description = "Runs an img2img task inside of an [after] block."

@@ -1,37 +1,47 @@
class Shortcode():
    def __init__(self, Unprompted):
        self.Unprompted = Unprompted
        self.description = "Automatically adjusts the width and height parameters in img2img mode based on the proportions of the input image."

    def run_atomic(self, pargs, kwargs, context):
-       if "init_images" in self.Unprompted.shortcode_user_vars:
-           sd_unit = self.Unprompted.parse_advanced(kwargs["unit"], context) if "unit" in kwargs else 64
-           target_size = self.Unprompted.parse_arg("target", self.Unprompted.shortcode_user_vars["sd_res"])
-           only_full_res = self.Unprompted.parse_advanced(kwargs["only_full_res"], context) if "only_full_res" in kwargs else False
-
-           if not only_full_res or "inpaint_full_res" not in self.Unprompted.shortcode_user_vars or not self.Unprompted.shortcode_user_vars["inpaint_full_res"]:
-
-               self.Unprompted.shortcode_user_vars["width"] = self.Unprompted.shortcode_user_vars["init_images"][0].width
-               self.Unprompted.shortcode_user_vars["height"] = self.Unprompted.shortcode_user_vars["init_images"][0].height
-
-               smaller_dimension = min(self.Unprompted.shortcode_user_vars["width"], self.Unprompted.shortcode_user_vars["height"])
-               larger_dimension = max(self.Unprompted.shortcode_user_vars["width"], self.Unprompted.shortcode_user_vars["height"])
-
-               if (smaller_dimension > target_size):
-                   multiplier = target_size / smaller_dimension
-                   self.Unprompted.shortcode_user_vars["width"] *= multiplier
-                   self.Unprompted.shortcode_user_vars["height"] *= multiplier
-               if (larger_dimension < target_size):
-                   multiplier = target_size / larger_dimension
-                   self.Unprompted.shortcode_user_vars["width"] *= multiplier
-                   self.Unprompted.shortcode_user_vars["height"] *= multiplier
-
-               self.Unprompted.shortcode_user_vars["width"] = int(round(self.Unprompted.shortcode_user_vars["width"] / sd_unit) * sd_unit)
-               self.Unprompted.shortcode_user_vars["height"] = int(round(self.Unprompted.shortcode_user_vars["height"] / sd_unit) * sd_unit)
-
-               self.log.debug(f"Output image size: {self.Unprompted.shortcode_user_vars['width']}x{self.Unprompted.shortcode_user_vars['height']}")
+       checking_image = self.Unprompted.parse_image_kwarg("input")
+       if checking_image:
+           this_width = checking_image.size[0]
+           this_height = checking_image.size[1]
+       elif "init_images" in self.Unprompted.shortcode_user_vars:
+           this_width = self.Unprompted.shortcode_user_vars["init_images"][0].width
+           this_height = self.Unprompted.shortcode_user_vars["init_images"][0].height
+       else:
+           self.log.error(f"Could not find initial image! Printing the user vars for reference: {dir(self.Unprompted.shortcode_user_vars)}")
+           return ""
+
+       sd_unit = self.Unprompted.parse_advanced(kwargs["unit"], context) if "unit" in kwargs else 64
+       target_size = self.Unprompted.parse_arg("target", self.Unprompted.shortcode_user_vars["sd_res"])
+       only_full_res = self.Unprompted.parse_advanced(kwargs["only_full_res"], context) if "only_full_res" in kwargs else False
+
+       if not only_full_res or "inpaint_full_res" not in self.Unprompted.shortcode_user_vars or not self.Unprompted.shortcode_user_vars["inpaint_full_res"]:
+
+           self.Unprompted.shortcode_user_vars["width"] = this_width
+           self.Unprompted.shortcode_user_vars["height"] = this_height
+
+           smaller_dimension = min(self.Unprompted.shortcode_user_vars["width"], self.Unprompted.shortcode_user_vars["height"])
+           larger_dimension = max(self.Unprompted.shortcode_user_vars["width"], self.Unprompted.shortcode_user_vars["height"])
+
+           if (smaller_dimension > target_size):
+               multiplier = target_size / smaller_dimension
+               self.Unprompted.shortcode_user_vars["width"] *= multiplier
+               self.Unprompted.shortcode_user_vars["height"] *= multiplier
+           if (larger_dimension < target_size):
+               multiplier = target_size / larger_dimension
+               self.Unprompted.shortcode_user_vars["width"] *= multiplier
+               self.Unprompted.shortcode_user_vars["height"] *= multiplier
+
+           self.Unprompted.shortcode_user_vars["width"] = int(round(self.Unprompted.shortcode_user_vars["width"] / sd_unit) * sd_unit)
+           self.Unprompted.shortcode_user_vars["height"] = int(round(self.Unprompted.shortcode_user_vars["height"] / sd_unit) * sd_unit)
+
+           self.log.debug(f"Output image size: {self.Unprompted.shortcode_user_vars['width']}x{self.Unprompted.shortcode_user_vars['height']}")

        return ("")

    def ui(self, gr):

@@ -1116,7 +1116,7 @@ class Shortcode():
            import random
            random.shuffle(all_masks)
        elif mask_sort_method == "big-to-small":
-           all_masks = sorted(all_masks, key=lambda x: x.shape[0] * x.shape[1], reverse=True)
+           all_masks = sorted(all_masks, key=lambda x: x.shape[0] * x.shape[1], reverse=False)
        elif mask_sort_method == "left-to-right" or mask_sort_method == "top-to-bottom":
            i = 0

@@ -1,62 +0,0 @@
try:
    from modules import shared
except:
    pass  # for unprompted_dry


class Shortcode():
    def __init__(self, Unprompted):
        self.Unprompted = Unprompted
        self.description = "Enhances a given image using the WebUI's built-in upscaler methods."

    def run_atomic(self, pargs, kwargs, context):
        from PIL import Image
        import lib_unprompted.helpers as helpers

        image = self.Unprompted.parse_arg("image", False)
        if not image:
            image = self.Unprompted.current_image()
        if isinstance(image, str):
            image = Image.open(image)
        orig_image = image.copy()

        scale = self.Unprompted.parse_arg("scale", 1)
        visibility = self.Unprompted.parse_arg("visibility", 1.0)
        limit = self.Unprompted.parse_arg("limit", 100)
        keep_res = self.Unprompted.parse_arg("keep_res", False)

        _models = helpers.ensure(self.Unprompted.parse_arg("models", "None"), list)
        models = []
        for model in _models:
            for upscaler in shared.sd_upscalers:
                if upscaler.name == model:
                    models.append(upscaler)
                    break
            if len(models) >= limit:
                self.log.info(f"Upscale model limit satisfied ({limit}). Proceeding...")
                break

        for model in models:
            self.log.info(f"Upscaling {scale}x with {model.name}...")
            image = model.scaler.upscale(image, scale, model.data_path)
            if keep_res:
                image = image.resize(orig_image.size, Image.ANTIALIAS)

        # Append to output window
        try:
            if not keep_res:
                orig_image = orig_image.resize(image.size, Image.ANTIALIAS)
            self.Unprompted.current_image(Image.blend(orig_image, image, visibility))
        except:
            pass

        return ""

    def ui(self, gr):
        return [
            gr.Image(label="Image to perform upscaling on (defaults to SD output) 🡢 image", type="filepath", interactive=True),
            gr.Dropdown(label="Upscaler Model(s) 🡢 models", choices=[upscaler.name for upscaler in shared.sd_upscalers], multiselect=True),
            gr.Slider(label="Upscale Factor 🡢 scale", value=1, maximum=16, minimum=1, interactive=True, step=1),
            gr.Slider(label="Upscale Visibility 🡢 visibility", value=1.0, maximum=1.0, minimum=0.0, interactive=True, step=0.01),
            gr.Checkbox(label="Keep original resolution 🡢 keep_res", value=False, interactive=True),
        ]

style.css
@@ -1,188 +1 @@
details summary
{
	font-size: 20px;
	font-weight: bold;
	cursor: pointer;
}

details>*:not(summary)
{
	padding: 0 20px !important;
}

details[open]
{
	padding-bottom: 20px;
}

.prose li
{
	text-indent: -20px;
	padding-left: 2em;
}

.gradio-container .prose a,
.gradio-container .prose a:visited
{
	text-decoration: underline;
}

#unprompted_result .empty
{
	display: none;
}
#unprompted_result div[data-testid=block-label]:has(+ .empty)
{
	display: none;
}

#unprompted_result .cm-content
{
	font-size: 18px;
	font-family: IBM Plex Mono, ui-monospace, SFMono-Regular, Menlo, Monaco, Consolas, Liberation Mono, Courier New, monospace;
}

#promo
{
	display: block;
}

#promo .thumbnail
{
	float: left;
	width: 150px;
	margin-bottom: 10px;
}

#promo h1
{
	font-size: 20px;
	letter-spacing: 0.015em;
	margin-top: 10px;
}

#promo p
{
	margin: 1em 0;
}

#promo .gr-button
{
	transition: 0.1s;
	display: inline-block;
	padding: 0.25em 0.75em;
	font-weight: bold;
	color: var(--background-fill-primary);
	border-radius: 0.5em;
	text-align: center;
	vertical-align: middle;
	background-color: var(--body-text-color);
}

#promo .gr-button:hover
{
	background-color: var(--checkbox-background-color-selected);
	color: var(--body-text-color);
}

.gradio-group div .wizard-autoinclude
{
	margin-right: 0;
	margin-bottom: 8px;
}

.wizard-autoinclude-row
{
	margin: 20px 0;
}

.autoinclude-order
{
	overflow: visible !important;
	margin-left: 60px !important;
}
.autoinclude-order:before
{
	content: "Order:";
	position: absolute;
	left: -55px;
	display: inline-block;
	top: 10px;
}

.wizard-autoinclude input[type="checkbox"]:checked+span
{
	color: var(--checkbox-background-color-selected);
}

.unprompted-badge
{
	display: inline-block;
	padding: 0.25em 0.75em;
	font-size: 0.75em;
	font-weight: bold;
	color: white;
	border-radius: 0.5em;
	text-align: center;
	vertical-align: middle;
	margin-left: var(--size-2);
	background-color: var(--checkbox-background-color-selected);
}

.patreon-symbol
{
	letter-spacing: -7px;
	margin-right: 7px;
}

#file_list li
{
	padding-left: 0;
	word-wrap: break-word;
}

#file_list ul
{
	list-style-type: none;
	overflow-x: hidden;
}

#file_list button:before
{
	content: "\01F5CE";
	position: absolute;
	left: 0;
}

#file_list button
{
	display: block;
	text-align: left;
	width: 100%;
	padding-left: 15px;
	position: relative;
}

#wizard-dropdown li.autoincluded, #wizard-dropdown input.autoincluded
{
	color: var(--checkbox-background-color-selected);
}
#wizard-dropdown li.autoincluded::before, #wizard-dropdown div:has(> input.autoincluded)::before
{
	content: "🪄";
	display: inline-block;
}
#wizard-dropdown li:not(.active)::before
{
	position: absolute;
	left: -2px;
}

.wizard-banner
{
	mask-image: linear-gradient(to bottom, rgba(0,0,0,1) 67%, rgba(0,0,0,0) 100%);
}
.wizard-banner img
{
	border-radius: 10px 10px 0 0;
}

details summary{font-size:20px;font-weight:bold;cursor:pointer}details>*:not(summary){padding:0 20px !important}details[open]{padding-bottom:20px}.prose li{text-indent:-20px;padding-left:2em}.gradio-container .prose a:is(:link,:visited){text-decoration:underline}#unprompted_result .empty,#unprompted_result div[data-testid=block-label]:has(+.empty){display:none}#unprompted_result .cm-content{font-size:18px;font-family:IBM Plex Mono,ui-monospace,SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,monospace}#promo{display:block}#promo h1{font-size:20px;letter-spacing:.015em;margin-top:10px}#promo p{margin:1em 0}#promo .thumbnail{float:left;width:150px;margin-bottom:10px}#promo .gr-button{transition:.1s;display:inline-block;padding:.25em .75em;font-weight:bold;color:var(--background-fill-primary);border-radius:.5em;text-align:center;vertical-align:middle;background-color:var(--body-text-color)}#promo .gr-button:hover{background-color:var(--checkbox-background-color-selected);color:var(--body-text-color)}.gradio-group div .wizard-autoinclude{margin-right:0;margin-bottom:8px}.wizard-autoinclude-row{margin:20px 0}.autoinclude-order{overflow:visible !important;margin-left:60px !important}.autoinclude-order:before{content:"Order:";position:absolute;left:-55px;display:inline-block;top:10px}.wizard-autoinclude input[type=checkbox]:checked+span{color:var(--checkbox-background-color-selected)}.unprompted-badge{display:inline-block;padding:.25em .75em;font-size:.75em;font-weight:bold;color:#fff;border-radius:.5em;text-align:center;vertical-align:middle;margin-left:var(--size-2);background-color:var(--checkbox-background-color-selected)}.patreon-symbol{letter-spacing:-7px;margin-right:7px}#file_list li{padding-left:0;word-wrap:break-word}#file_list ul{list-style-type:none;overflow-x:hidden}#file_list button{display:block;text-align:left;width:100%;padding-left:15px;position:relative}#file_list button:before{content:"🗎";position:absolute;left:0}#wizard-dropdown li.autoincluded,#wizard-dropdown input.autoincluded{color:var(--checkbox-background-color-selected)}#wizard-dropdown li.autoincluded::before,#wizard-dropdown #wizard-dropdown div:has(>input.autoincluded)::before{content:"🪄";display:inline-block}#wizard-dropdown li:not(.active)::before{position:absolute;left:-2px}.wizard-banner{-webkit-mask-image:linear-gradient(to bottom, rgb(0, 0, 0) 67%, rgba(0, 0, 0, 0) 100%);mask-image:linear-gradient(to bottom, rgb(0, 0, 0) 67%, rgba(0, 0, 0, 0) 100%)}.wizard-banner img{border-radius:10px 10px 0 0}/*# sourceMappingURL=style.css.map */

File diff suppressed because one or more lines are too long

@@ -0,0 +1,200 @@
details
{
	summary
	{
		font-size: 20px;
		font-weight: bold;
		cursor: pointer;
	}

	& > *:not(summary)
	{
		padding: 0 20px !important;
	}

	&[open]
	{
		padding-bottom: 20px;
	}
}

.prose li
{
	text-indent: -20px;
	padding-left: 2em;
}

.gradio-container .prose a:is(:link, :visited)
{
	text-decoration: underline;
}

#unprompted_result
{
	.empty, div[data-testid=block-label]:has(+ .empty)
	{
		display: none
	}

	.cm-content
	{
		font-size: 18px;
		font-family: IBM Plex Mono, ui-monospace, SFMono-Regular, Menlo, Monaco, Consolas, Liberation Mono, Courier New, monospace;
	}
}

#promo
{
	display: block;

	h1
	{
		font-size: 20px;
		letter-spacing: 0.015em;
		margin-top: 10px;
	}

	p
	{
		margin: 1em 0;
	}

	.thumbnail
	{
		float: left;
		width: 150px;
		margin-bottom: 10px;
	}

	.gr-button
	{
		transition: 0.1s;
		display: inline-block;
		padding: 0.25em 0.75em;
		font-weight: bold;
		color: var(--background-fill-primary);
		border-radius: 0.5em;
		text-align: center;
		vertical-align: middle;
		background-color: var(--body-text-color);

		&:hover
		{
			background-color: var(--checkbox-background-color-selected);
			color: var(--body-text-color);
		}
	}
}

.gradio-group div .wizard-autoinclude
{
	margin-right: 0;
	margin-bottom: 8px;
}

.wizard-autoinclude-row
{
	margin: 20px 0;
}

.autoinclude-order
{
	overflow: visible !important;
	margin-left: 60px !important;

	&:before
	{
		content: "Order:";
		position: absolute;
		left: -55px;
		display: inline-block;
		top: 10px;
	}
}

.wizard-autoinclude input[type="checkbox"]:checked + span
{
	color: var(--checkbox-background-color-selected);
}

.unprompted-badge
{
	display: inline-block;
	padding: 0.25em 0.75em;
	font-size: 0.75em;
	font-weight: bold;
	color: white;
	border-radius: 0.5em;
	text-align: center;
	vertical-align: middle;
	margin-left: var(--size-2);
	background-color: var(--checkbox-background-color-selected);
}

.patreon-symbol
{
	letter-spacing: -7px;
	margin-right: 7px;
}

#file_list
{
	li
	{
		padding-left: 0;
		word-wrap: break-word;
	}

	ul
	{
		list-style-type: none;
		overflow-x: hidden;
	}

	button
	{
		&:before
		{
			content: "\01F5CE";
			position: absolute;
			left: 0;
		}

		display: block;
		text-align: left;
		width: 100%;
		padding-left: 15px;
		position: relative;
	}
}

#wizard-dropdown
{
	li.autoincluded, input.autoincluded
	{
		color: var(--checkbox-background-color-selected);
	}

	li.autoincluded::before, #wizard-dropdown div:has(> input.autoincluded)::before
	{
		content: "🪄";
		display: inline-block;
	}

	li:not(.active)::before
	{
		position: absolute;
		left: -2px;
	}
}

.wizard-banner
{
	mask-image: linear-gradient(to bottom, rgba(0, 0, 0, 1) 67%, rgba(0, 0, 0, 0) 100%);
}

.wizard-banner img
{
	border-radius: 10px 10px 0 0;
}

@@ -1,6 +0,0 @@
[##]
A decent starting point to upscale images using the Tile model for ControlNet.
Ideally, you should mask out the face and run the result through Facelift.
Best ESRGAN model I'm aware of: 4x_RealisticRescaler_100000_G
[/##]
[if batch_real_index=0][sets sampler="Restart" steps=20 denoising_strength=0.25 cfg_scale=15 negative_prompt="rfneg UnrealisticDream BadDream BeyondV3-neg" cn_0_enabled=1 cn_0_model=ip-adapter-plus-face_sd15 cn_0_module=ip-adapter_clip_sd15 cn_0_weight=0.5 cn_0_pixel_perfect=0 cn_1_enabled=1 cn_1_module=inpaint_only cn_1_model=inpaint cn_1_weight=1.0 cn_1_guidance_end=1.0 cn_1_control_mode=2], best quality (worst quality:-1)[/if]

@@ -0,0 +1,15 @@
[##]
A quick solution for upscaling images using a combination of ESRGAN models and ControlNet.
In some cases, you may want to mask out the character's face and run the result through Facelift.
Best ESRGAN models I'm aware of:
- 4xNomos8kSCHAT-L
- 4x_RealisticRescaler_100000_G
[/##]
[if batch_real_index=0]
	[if sd_base="sdxl"]
		, score_9, score_8_up, score_7_up, score_6_up, [img2img_autosize][image_edit autotone upscale="4xNomos8kSCHAT-L|4x_RealisticRescaler_100000_G" upscale_model_limit=1][replace _from="_" _to=" "][interrogate bypass_cache method="WaifuDiffusion" model="SmilingWolf/wd-vit-tagger-v3" blacklist_tags="{get blacklist_tags}"][/replace], detailed realistic photo[sets cn_0_enabled=1 cn_0_pixel_perfect=0 cn_0_model=controlnetxlCNXL_xinsirCnUnionPromax cn_0_module=none cn_0_weight=1.0 cn_0_guidance_end=1.0 cn_0_control_mode=1 sampler="DPM++ 2M" steps=15 denoising_strength=0.5 cfg_scale=7.5]
	[/if]
	[else]
		[sets sampler="Restart" steps=20 denoising_strength=0.25 cfg_scale=15 negative_prompt="rfneg UnrealisticDream BadDream BeyondV3-neg" cn_0_enabled=1 cn_0_model=ip-adapter-plus-face_sd15 cn_0_module=ip-adapter_clip_sd15 cn_0_weight=0.5 cn_0_pixel_perfect=0 cn_1_enabled=1 cn_1_module=inpaint_only cn_1_model=inpaint cn_1_weight=1.0 cn_1_guidance_end=1.0 cn_1_control_mode=2], best quality (worst quality:-1)
	[/else]
[/if]

@@ -0,0 +1,3 @@
[if "{random 1} == 1"]
	[set winner][image_edit input="{get winner}" flip_horizontal remember return][/set]
[/if]

@@ -0,0 +1,3 @@
[if "{random 1} == 1"]
	[set winner][image_edit input="{get winner}" flip_vertical remember return][/set]
[/if]

@@ -1,3 +0,0 @@
[sets
cn_0_enabled=1 cn_0_pixel_perfect=1 cn_0_module=softedge_anyline cn_0_model=mistoLine_softedge_sdxl_fp16 cn_0_weight=0.5 cn_0_guidance_end=1.0 cn_0_control_mode=2
]

@@ -0,0 +1,3 @@
[sets
cn_0_enabled=1 cn_0_pixel_perfect=1 cn_0_module=softedge_pidinet cn_0_model=mistoLine_softedge_sdxl_fp16 cn_0_weight=0.65 cn_0_guidance_end=1.0 cn_0_control_mode=1
]

@@ -0,0 +1,4 @@
[# Download Promax model: https://civitai.com/models/136070?modelVersionId=655836]
[sets
cn_0_enabled=1 cn_0_pixel_perfect=1 cn_0_model=controlnetxlCNXL_xinsirCnUnionPromax cn_0_module=none cn_0_weight=0.4 cn_0_guidance_end=1.0 cn_0_control_mode=1
]

@@ -1 +1 @@
-[faceswap "{get faces}" unload="{get unload_all}" visibility=0.75][restore_faces unload="{get unload}" method=gpen][upscale models="TGHQFace8x_500k|4xFaceUpSharpLDAT|4x-UltraSharp|R-ESRGAN 4x+" scale=1 limit=1 visibility=0.8 keep_res]
+[faceswap "{get faces}" unload="{get unload_all}" visibility=0.75][restore_faces unload="{get unload}" method=gpen][image_edit upscale="TGHQFace8x_500k|4xFaceUpSharpLDAT|4x-UltraSharp|R-ESRGAN 4x+" scale=1 upscale_model_limit=1 upscale_alpha=0.8 upscale_keep_res]

@@ -1 +1 @@
-[sets cfg_scale=9 sampler_name="DPM++ 2M" steps=15 denoising_strength=1.0 mask_blur=8]
+[sets denoising_strength=1.0 mask_blur=8]

@@ -1 +1 @@
-[sets cfg_scale=7.5 sampler_name="DPM++ 2M" scheduler="Karras" steps=15 denoising_strength=0.85 mask_blur=0]
+[sets denoising_strength=0.85 mask_blur=0]

@ -1 +1 @@
|
|||
[sets cfg_scale=4 sampler_name="DPM++ SDE" steps=20 denoising_strength=0.67 mask_blur=0]
|
||||
[sets denoising_strength=0.67 mask_blur=0]
|
||||
|
|
@@ -8,7 +8,7 @@
[civitai _file="sdxl_offset_example_v10" _id=137511 _weight=0.5]
[/if]
[/set]
[set magic_negative]
[set magic_negative _defer]
[if "sd_base=='sdxl' or sd_base=='sd3'"]
text, watermark, low-quality, signature, moiré pattern, downsampling, aliasing, distorted, blurry, glossy, blur, jpeg artifacts, compression artifacts, poorly drawn, low-resolution, bad, distortion, twisted, excessive, exaggerated pose, exaggerated limbs, grainy, symmetrical, duplicate, error, pattern, beginner, pixelated, fake, hyper, glitch, overexposed, high-contrast, bad-contrast
[/if]
@@ -3,11 +3,12 @@
[set magic_gpt_tokenizer][get gpt_model][/set]
[set magic_gpt_instruction]Expand the following prompt to add more detail:[/set]
[set magic_networks]
[if "sd_base=='sdxl' or sd_base=='sd3'"]
[civitai _file="sdxl_offset_example_v10" _id=137511 _weight=0.5]
[if sd_base="sdxl"]
[civitai _file="spo_sdxl_10ep_4k-data_lora_webui" _id=510261 _mvid=567119 _weight=0.75]
[civitai _file="sd_xl_dpo_lora_v1" _id=242825 _mvid=273996 _weight=0.25]
[/if]
[/set]
[set magic_negative _append]
[set magic_negative _append _defer]
[score_5_up, score_6_up, score_4_up:score_0, score_1, score_2, score_3, score_4_up, score_5_up, score_6_up, chibi, lowres, (bad), text, error, fewer, extra, missing, worst quality, jpeg artifacts, low quality, watermark, unfinished, displeasing, oldest, early, chromatic aberration, signature, extra digits, artistic error, username, scan, abstract, low quality, worst quality, normal quality, distorted, watermark, bad anatomy, mutated, mutation, extra limb, missing limb, amputation, deformed, black and white, disfigured, low contrast:0.5]
[/set]
[set magic_postprocessing _defer]
@@ -11,7 +11,7 @@
[civitai lora "difConsistency_detail" 0.2 _id=87378]
[/else]
[/set]
[set magic_negative _append]
[set magic_negative _append _defer]
[if sd_base="sdxl"]
text, watermark, low-quality, signature, moiré pattern, downsampling, aliasing, distorted, blurry, glossy, blur, jpeg artifacts, compression artifacts, poorly drawn, low-resolution, bad, distortion, twisted, excessive, exaggerated pose, exaggerated limbs, grainy, symmetrical, duplicate, error, pattern, beginner, pixelated, fake, hyper, glitch, overexposed, high-contrast, bad-contrast
[/if]
@@ -1,16 +0,0 @@
[set magic_fluff_prefix]score_9, score_8_up, score_7_up, score_6_up[/set]
[set magic_fluff_affix]detailed raw photo, realistic, photorealistic[/set]
[set magic_gpt_model]0Tick/danbooruTagAutocomplete[/set]
[set magic_gpt_tokenizer][get gpt_model][/set]
[set magic_gpt_instruction]Expand the following prompt to add more detail:[/set]
[set magic_networks]
[if sd_base="sdxl"]
[civitai _file="sdxl_offset_example_v10" _id=137511 _weight=0.5]
[/if]
[/set]
[set magic_negative _append]
[:score_6_up, score_5_up, score_4_up, text, shiny skin, watermark, low-quality, signature, moiré pattern, downsampling, aliasing, distorted, blurry, glossy, blur, jpeg artifacts, compression artifacts, poorly drawn, low-resolution:0.1]
[/set]
[set magic_postprocessing _defer]
[image_edit autotone add_noise=3]
[/set]
@@ -0,0 +1,28 @@
[set magic_fluff_prefix]score_9, score_8_up, score_7_up, score_6_up[/set]
[set magic_fluff_affix]detailed realistic photo, photorealistic[/set]
[set magic_gpt_model]0Tick/danbooruTagAutocomplete[/set]
[set magic_gpt_tokenizer][get gpt_model][/set]
[set magic_gpt_instruction]Expand the following prompt to add more detail:[/set]
[set magic_networks]
[if sd_base="sdxl"]
[civitai _file="spo_sdxl_10ep_4k-data_lora_webui" _id=510261 _mvid=567119 _weight=0.75]
[civitai _file="sd_xl_dpo_lora_v1" _id=242825 _mvid=273996 _weight=0.25]
[civitai _file="sdxl_offset_example_v10" _id=137511 _weight=0.5]
[/if]
[/set]
[set magic_negative _append _defer]
[if sd_base="sdxl"]
[if do_networks]
[civitai _file="EZNegPONYXL-neg" _id=587651 _mvid=656021 _weight=1.0 types="ti"]
[/if]
[else]
(watermark:1.5), (text:1.5), score_6_up, score_5_up, score_4_up, worst quality, low quality, bad anatomy, bad hands, missing fingers, fewer digits, source_furry, source_pony, source_cartoon, 3d, blurry,
[/else]
[/if]
[else]
[:score_6_up, score_5_up, score_4_up, text, shiny skin, watermark, low-quality, signature, moiré pattern, downsampling, aliasing, distorted, blurry, glossy, blur, jpeg artifacts, compression artifacts, poorly drawn, low-resolution:0.4]
[/else]
[/set]
[set magic_postprocessing _defer]
[image_edit autotone add_noise=3]
[/set]
@@ -0,0 +1 @@
[sets cfg_scale=8.0 sampler_name="DPM++ 2M" scheduler="Karras" steps=15]
@@ -0,0 +1,6 @@
<details><summary>0.0.1</summary>

### Added
- Initial release

</details>

Binary file not shown.
After Width: | Height: | Size: 112 KiB
@@ -0,0 +1,54 @@
[template name="Autopilot v0.0.1" id="autopilot"]
[wizard tab _label="About"]
[wizard markdown]
Quickly set up your basic inference settings, such as CFG and image dimensions.
[/wizard]
[/wizard]
[wizard tab _label="Documentation"]
[wizard markdown _file="MANUAL.md"][/wizard]
[/wizard]
[wizard tab _label="Changelog"]
[wizard markdown _file="CHANGELOG.md"][/wizard]
[/wizard]

[wizard accordion _label="⚙️ Basic Settings" _open]
[wizard row]
[set inference_preset _new _info="Locks CFG scale, sampler, etc. to recommended values" _label="Inference Preset" _ui="dropdown" _choices="none|{filelist '%BASE_DIR%/templates/%PLACE%/presets/txt2img/*.*' _basename _hide_ext _places='user|common'}"]dpm_2m_fast_v1[/set]

[set img2img_preset _new _info="Settings for img2img mode." _label="Img2img Preset" _ui="dropdown" _choices="none|{filelist '%BASE_DIR%/templates/%PLACE%/presets/img2img/*.*' _basename _hide_ext _places='user|common'}"]none[/set]
[/wizard]

[set aspect_ratio _new _label="📐 Aspect Ratio" _ui="dropdown" _choices="none|{filelist '%BASE_DIR%/templates/%PLACE%/presets/dimensions/*.*' _basename _hide_ext _places='user|common'}"]square_1-1[/set]
[set detect_preset _new _label="Detect Preset" _info="Automatically pick a preset based on the aspect ratio number you provide. You can supply this from another template!"][get detect_preset _default=0][/set]
[set flip_aspect_ratio _new _label="⇅ Flip Aspect Ratio" _ui="checkbox"]0[/set]

[/wizard]

[if "detect_preset > 0"]
[if "detect_preset >= 0.4 and detect_preset <= 0.6"]
[set aspect_ratio]square_1-1[/set]
[/if]
[elif "detect_preset >= 0.6 and detect_preset <= 0.8"]
[set aspect_ratio]portrait_2-3[/set]
[/elif]
[elif "detect_preset >= 1.2 and detect_preset <= 2.0"]
[set aspect_ratio]landscape_3-2[/set]
[/elif]
[/if]

[if "aspect_ratio != 'none'"]
[call "/presets/dimensions/{get aspect_ratio}" _places="user|common"]
[/if]
[if flip_aspect_ratio]
[set width_temp][get width][/set]
[set width][get height][/set]
[set height][get width_temp][/set]
[/if]

[if "inference_preset != 'none'"]
[call "/presets/txt2img/{get inference_preset}" _places="user|common"]
[/if]

[if "img2img_preset != 'none'"]
[call "/presets/img2img/{get img2img_preset}" _places="user|common"]
[/if]
@@ -1,4 +1,14 @@
<details open><summary>2.1.0</summary>
<details open><summary>2.1.1</summary>

### Changed
- Reverted the default `mask_method` to `clipseg`

### Fixed
- Disabled debug mode in the `[txt2mask]` operations

</details>

<details><summary>2.1.0</summary>

### Added
- Support for the new `[txt2mask]` method `panoptic_sam`
@@ -23,7 +23,7 @@ Bodysnatcher's default settings are designed to work well for most users. Howeve
Bodysnatcher includes a number of different approaches to masking your input image. Here are some tips for using them effectively:

- **`clipseg`** is the default mask method and arguably the most versatile. It understands a wide range of terms and is reasonably accurate. However, it does not support instance segmentation, meaning it cannot differentiate between multiple subjects of the same class. (e.g. prompting for "left person" in an image with two people will likely result in both being masked.)
- **`panoptic_sam`** is a newer mask method that supports instance segmentation. It produces highly accurate masks and has no trouble in differentiating between subjects of the same class, even in complex scenes. However, it is a bit slower than `clipseg`, requires more VRAM, and does not understand smaller objects very well. For example, prompting for "shirt" may result in the entire person being masked.
- **`panoptic_sam`** is a newer mask method that supports instance segmentation. It produces highly accurate masks and has no trouble in differentiating between subjects of the same class, even in complex scenes. However, it is a bit slower than `clipseg`, requires more VRAM, and does not understand smaller objects very well. For example, prompting for "shirt" may result in the entire person being masked. For this reason, it doesn't play nicely with the `keep_hands` and `keep_feet` settings.
- **`none`** will disable automatic masking, effectively performing an img2img operation on the entire image.
- **`none` + manual mask** allows you to perform traditional inpainting. However, be aware that `manual_mask_mode` is set to `subtract` by default, meaning your manual mask will be inverted. Set it to `add` to prevent this.
@@ -1,4 +1,4 @@
[template name="Bodysnatcher v2.1.0" id="bodysnatcher"]
[template name="Bodysnatcher v2.1.1" id="bodysnatcher"]
[wizard tab _label="About"]
[wizard markdown]
Seamlessly replaces the selected subject of an image with a new one. This template works best with inpainting models and ControlNet.

@@ -18,7 +18,7 @@
[set prefix _new _label="Prefix" _info="For example, the visual medium"][get global_prefix][/set]
[set subject _new _label="New subject"][get global_subject][/set]
[set simple_description _new _label="Simple Description" _info="These terms will apply to both the full image and the cropped face; 1-3 words are usually plenty"][/set]
[set max_image_size _new _ui="number" _info="Limits both dimensions of the input image; 0 to disable. This will also ensure that both dimensions are at least 512px."]1536[/set]
[set max_image_size _new _ui="number" _info="Limits both dimensions of the input image; 0 to disable. This will also ensure that both dimensions are at least 512px."]1024[/set]
[/wizard]

[if max_image_size]
@@ -37,7 +37,7 @@
[set keep_hands _new _label="🤚 Keep original hands" _ui="checkbox"]1[/set]
[set keep_feet _new _label="🦶 Keep original feet" _ui="checkbox"]0[/set]
[/wizard]
[set mask_method _new _label="Masking method" _ui="radio" _choices="panoptic_sam|clipseg|none"]panoptic_sam[/set]
[set mask_method _new _label="Masking method" _ui="radio" _choices="panoptic_sam|clipseg|none"]clipseg[/set]
[wizard row]
[set mask_sort_method _new _label="Sort method" _ui="dropdown" _choices="random|left-to-right|top-to-bottom|big-to-small|in-to-out"]left-to-right[/set]
[set mask_index _new _label="Index" _ui=number _minimum=0]0[/set]

@@ -52,6 +52,7 @@
[set blacklist_tags _new _info="Tags that will be removed from the interrogation result, separated by the standard delimiter (vertical pipe by default). You can also use shortcodes here. Only compatible with WaifuDiffusion captioners."][/set]
[set mask_precision _new _label="Mask precision"]75[/set]
[set mask_padding _new _label="Mask padding in pixels"]10[/set]
[set mask_blur _new _label="Mask blur in pixels"]20[/set]
[set stamp _new _label="Stamp" _info="Paste a temporary image on the init image for the purpose of masking (check unprompted/images/stamps for default stamps)"][/set]
[set background_mode _new _label="Background Mode" _ui="checkbox" _info="Inverts the class mask and disables the zoom_enhance step (note: you'll probably want to increase the mask_precision - try 150 precision to start)"]0[/set]
[/wizard]
@@ -93,7 +94,7 @@

[if batch_index=0]
[if "mask_method != 'none'"]
[set bodysnatch_mask][txt2mask precision="{get mask_precision}" method="{get mask_method}" mode="{get manual_mask_mode}" negative_mask="{get neg_mask}" neg_padding="{get mask_padding}" padding="{get mask_padding}" aspect_var="mask_aspect_ratio" mask_sort_method="{get mask_sort_method}" reverse_mask_sort="{get reverse_mask_sort}" mask_index="{get mask_index}" blur=2 return_image show][get class][/txt2mask][/set]
[set bodysnatch_mask][txt2mask precision="{get mask_precision}" method="{get mask_method}" mode="{get manual_mask_mode}" negative_mask="{get neg_mask}" neg_padding="{get mask_padding}" padding="{get mask_padding}" aspect_var="mask_aspect_ratio" mask_sort_method="{get mask_sort_method}" reverse_mask_sort="{get reverse_mask_sort}" mask_index="{get mask_index}" blur="{get mask_blur}" return_image][get class][/txt2mask][/set]
[if "inpaint_full_res == 0"]
[img2img_autosize]
[/if]
@@ -135,7 +136,15 @@
[/if]
[set relevant_image][image_edit mask="{get bodysnatch_mask}" return copy][/set]
[get prefix] [get subject][get simple_description _before=" "], [get global_fluff][if interrogate]%SPACE%BREAK
[replace _from="_" _to=" "][interrogate bypass_cache method="WaifuDiffusion" model="SmilingWolf/wd-vit-tagger-v3" image="{get relevant_image}" blacklist_tags="{get blacklist_tags}"][/replace][/if]
[replace _from="_" _to=" "]
[if "mask_method != 'none'"]
[interrogate bypass_cache method="WaifuDiffusion" model="SmilingWolf/wd-vit-tagger-v3" image="{get relevant_image}" blacklist_tags="{get blacklist_tags}"]
[/if]
[else]
[interrogate bypass_cache method="WaifuDiffusion" model="SmilingWolf/wd-vit-tagger-v3" blacklist_tags="{get blacklist_tags}"]
[/else]
[/replace]
[/if]
[if "color_correct_method != 'none'"]
[after]
[set new_image][image_edit mask="{get bodysnatch_mask}" return copy][/set]
@@ -0,0 +1,6 @@
<details open><summary>0.0.1</summary>

### Added
- Initial release

</details>
@@ -0,0 +1,57 @@
## How does Control Freak work?

Using the `control_subject` field, the Control Freak template analyzes filenames in your ControlNet image collection for matching terms and automatically sends the best candidate to the ControlNet extension.

Optionally, any unused terms in the selected file can be automatically appended to the prompt.

In my opinion, curating a dataset of ControlNet images is superior to LoRA when it comes to introducing new poses and concepts to Stable Diffusion. I hope that by automating the process with this template, collections of CN images may gain traction on places like Civitai.

## Initial Setup

You can start by building your collection at the following location (PNG files only):

`unprompted/templates/user/presets/control_freak/images`

Feel free to create subfolders to help organize your images - subfolders will not affect the template results.

Additionally, you need to check the ControlNet preset file to ensure you have the required model. The default preset can be found here:

`unprompted/templates/common/presets/controlnet/xl_promax_v1.txt`

Look for the `cn_0_model` value and make sure you have that file installed in your `webui/models/controlnet` folder. Most ControlNet models are available on Civitai.

Alternatively, you can set the ControlNet preset to `none` and configure your ControlNet extension by hand.

## Captioning Your Images

Control Freak compares comma-delimited terms to your prompt when deciding which ControlNet image to use. For example, let's say your collection has the following image:

```
robot, backflip, baseball cap.png
```

The image receives a "score" based on the presence of `robot`, `backflip`, and `baseball cap` in your prompt. If it outscores all other images in your collection, it will be picked as the ControlNet image for this generation.
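To make the scoring idea concrete, here is a minimal Python sketch of the filename-matching heuristic. This is illustrative only; the actual template is written in Unprompted's shortcode language, and the real scoring also honors whitelists, blacklists, and the winner threshold/margin settings.

```python
from pathlib import Path

def score_image(filename: str, prompt: str) -> int:
    """Count how many of the file's comma-delimited caption terms appear in the prompt."""
    terms = [t.strip() for t in Path(filename).stem.split(",")]
    return sum(1 for t in terms if t and t in prompt)

def pick_best(filenames: list[str], prompt: str) -> str:
    # The image whose caption terms best match the prompt wins.
    return max(filenames, key=lambda f: score_image(f, prompt))

images = ["robot, backflip, baseball cap.png", "dog, sitting.png"]
prompt = "a robot doing a backflip in a park"
best = pick_best(images, prompt)  # scores 2 ("robot" and "backflip") vs. 0
```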
**Terms starting with an underscore are excluded from the scoring system.** This can be useful if you want to add an author name to the image, or if you have more than one image with identical captions. For example:

```
robot, backflip, baseball cap, _1.png
robot, backflip, baseball cap, _2.png
```
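In scoring terms, the underscore rule simply means those terms are skipped. A hedged Python sketch of that behavior (again illustrative, not the template's actual implementation):

```python
from pathlib import Path

def score_image(filename: str, prompt: str) -> int:
    # Illustrative sketch: terms beginning with "_" never contribute to the
    # score, so they are safe for author credits or de-duplication suffixes.
    terms = [t.strip() for t in Path(filename).stem.split(",")]
    return sum(1 for t in terms if t and not t.startswith("_") and t in prompt)

# Both files score identically; "_1" and "_2" only disambiguate the filenames.
a = score_image("robot, backflip, baseball cap, _1.png", "robot doing a backflip")
b = score_image("robot, backflip, baseball cap, _2.png", "robot doing a backflip")
```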
Compared to captions for training, you need to think about ControlNet captions a little differently. You should only include terms that *always* apply to the image.

For example, an image like `sitting, looking at viewer, front view, thumbs up, red hair, freckles` might be worth shortening to just `sitting, front view` - the rest of the terms are optional, as you might want to use this CN image in slightly different contexts. But the beauty of this system is that you can create a database that works best for you!

## Control Freak Effects

To squeeze more variety out of your ControlNet images, it is possible to apply filters to the image before inference. By default, the template uses `may_flip_horizontal`, which adds a 50% chance to mirror the image.

You can create your own effects at `unprompted/templates/user/presets/control_freak/effects`.

Some ideas for additional effects that I haven't yet implemented:

- `more_precise`: Certain preprocessor types such as softedge can benefit from a simple sharpening pass. A sharper ControlNet image helps Stable Diffusion produce more faithful results.
- `more_creative`: Similar to the above, applying a slight blur to the image gives Stable Diffusion more creative leeway.
- `wiggle`: Perhaps the overall position of the ControlNet image can be shifted along the x or y axis for variety?
- `perspective_shift`: Perhaps we can tilt the perspective of the image for added variety?

Binary file not shown.
After Width: | Height: | Size: 155 KiB
@@ -0,0 +1,188 @@
[template name="Control Freak v0.0.1" id="control_freak"]
[wizard tab _label="About"]
[wizard markdown]
Parses your prompt and automatically loads the best matching ControlNet image from your collection.

**Initial setup required!** Please review the Documentation tab for more information.
[/wizard]
[/wizard]
[wizard tab _label="Documentation"]
[wizard markdown _file="MANUAL.md"][/wizard]
[/wizard]
[wizard tab _label="Changelog"]
[wizard markdown _file="CHANGELOG.md"][/wizard]
[/wizard]

[set control_subject _new _label="Subject"]Terms to parse with Control Freak[/set]
[set control_subject_original][get control_subject][/set]

[wizard row]
[set control_whitelist _new _label="Whitelist Terms" _info="The ControlNet image must have these terms."][/set]
[set control_blacklist _new _label="Blacklist Terms" _info="The ControlNet image must not have these terms."][/set]
[/wizard]

[if "batch_real_index > 0"]
[log ERROR]Control Freak bypassed for batch index > 1.[/log]
[break]
[/if]
[set cn_freak_event]pre[/set]

[set control_method _new _ui="dropdown" _choices="smart|path|random"]smart[/set]

[set control_effects _new _multiselect _ui="dropdown" _choices="{filelist '%BASE_DIR%/templates/%PLACE%/presets/control_freak/effects/**/*.txt' _basename _hide_ext _recursive _places='user|common'}"]may_flip_horizontal[/set]

[wizard accordion _label="⚙️ Advanced Settings"]
[set control_cn_preset _new _info="Please make sure you have 'Allow other scripts to control this extension' enabled" _label="ControlNet Preset" _ui="dropdown" _choices="none|{filelist '%BASE_DIR%/templates/%PLACE%/presets/controlnet/*.*' _basename _hide_ext _places='user|common'}"]xl_promax_v1[/set]
[set control_ar _label="Aspect Ratio Handling" _new _ui="dropdown" _choices="inform_autopilot|prefer_mine|prefer_cn|exclude_mismatched"]inform_autopilot[/set]
[set control_winner_threshold _new _info="If a ControlNet image has at least this many matching tags, it might be chosen even if some other images have more." _ui="number"]5[/set]
[set control_winner_margin _new _info="If a ControlNet image is within this many matching tags of the most accurate image, it might be chosen as the winner." _ui="number"]0[/set]
[set control_caption_method _new _ui="dropdown" _choices="none|add_difference|add_all" _info="Choose what to do with the captions of the selected image."]add_difference[/set]
[set control_do_negative _new _info="Allows Control Freak to adjust the negative prompt style." _ui="checkbox" _label="Negative Style"]1[/set]
[set control_do_extra _new _info="Allows Control Freak to load extra networks such as LoRAs." _ui="checkbox" _label="Extra Style"]1[/set]
[/wizard]

[image_edit unload_cache]

[# Populate master list]
[set control_images][filelist "%BASE_DIR%/templates/user/presets/control_freak/images/**/*.png" _recursive][/set]
[set best_score]0[/set]
[log]What is control_images? [get control_images][/log]

[# TODO: Load delimiter character dynamically]
[function get_image_tags image_filename="C:/some/path.png"]
[set this_image_path][info path][get image_filename][/info][/set]
[set cn_freak_event_bak][get cn_freak_event][/set]
[set cn_freak_event]tags[/set]
[set extra_tags][call "{get this_image_path}/function" _suppress_errors][/set]
[set cn_freak_event][get cn_freak_event_bak][/set]

[set control_tags][replace _from="," _to="|" _delimiter="%bypass%"][replace _from=", " _to=","][get extra_tags], [info filename][get image_filename][/info][/replace][/replace][/set]
[/function]
[switch control_method]
[case "smart"]
[for i=0 "i < {length control_images}" "i+1"]
[call get_image_tags image_filename="{array control_images i}"]
[set control_score]0[/set]

[# Call user function if it exists]
[set this_image_path][info path][array control_images i][/info][/set]
[call "{get this_image_path}/function" _suppress_errors]

[for j=0 "j < {length control_tags}" "j+1"]
[set this_string][array control_tags j][/set]
[set first_char][substring start=0 end=1][get this_string][/substring][/set]
[if "first_char == '_'"][continue][/if]

[set blacklist_score][info string_count="this_string"][get control_blacklist][/info][/set]
[if "blacklist_score > 0"]
[set control_score]0[/set]
[break]
[/if]

[if "{exists control_whitelist}"]
[set whitelist_score][info string_count="this_string"][get control_whitelist][/info][/set]
[if "whitelist_score == 0"]
[set control_score]0[/set]
[break]
[/if]
[/if]

[set control_score _append][info string_count="this_string"][get control_subject][/info][/set]
[/for]
[if "control_score > best_score"]
[set best_score][get control_score][/set]
[/if]
[array control_scores _append="{get control_score}"]
[/for]

[log]Best score is [get best_score].[/log]
[set my_ar][eval][get width] / [get height][/eval][/set]
[if "best_score > 0"]
[for i=0 "i < {length control_scores}" "i+1"]
[if "{array control_scores i} >= (best_score - control_winner_margin) or {array control_scores i} >= control_winner_threshold and {array control_scores i} > 0"]
[if "control_ar == 'exclude_mismatched'"]
[set this_cn_ar][eval][image_info file="{array control_images i}" delimiter="/" width height][/eval][/set]
[if "{abs}{eval}my_ar - this_cn_ar{/eval}{/abs} > 0.45"]
[log]Excluding this image from selection due to aspect ratio mismatch.[/log]
[/if]
[else]
[log]The index [get i] is one of the winners.[/log]
[array winners _append="{array control_images i}"]
[/else]
[/if]
[else]
[log]The index [get i] is one of the winners.[/log]
[array winners _append="{array control_images i}"]
[/else]
[/if]
[/for]
[log]The winner(s): [get winners][/log]
[set winner][choose][get winners][/choose][/set]
[/if]
[/case]
[case "path"]
[set winner][get control_subject][/set]
[set control_subject] [/set]
[/case]
[case "random"]
[set winner_idx][random "{length control_images}-1"][/set]
[set winner][array control_images "winner_idx"][/set]
[/case]
[/switch]

[if "control_method != 'smart'"]
[set best_score]1[/set]
[set winners]1[/set]
[/if]

[if "best_score > 0 and winners"]
[if "control_caption_method == 'add_difference' or control_caption_method == 'add_all'"]
[call get_image_tags image_filename="{get winner}"]
[set control_filtered_caption] [/set]
[for i=0 "i < {length control_tags}" "i+1"]
[set this_string][array control_tags i][/set]
[set first_char][substring start=0 end=1][get this_string][/substring][/set]
[if "first_char == '_'"][continue][/if]
[set this_score][info string_count="this_string"][get control_subject_original][/info][/set]
[if "this_score == 0 or control_caption_method == 'add_all'"]
[set control_filtered_caption _append], [get this_string][/set]
[/if]
[/for]
[set control_subject][get control_subject][get control_filtered_caption][/set]
[/if]
[else]
[unset control_subject]
[/else]

[if "control_ar == 'prefer_cn'"]
[img2img_autosize input="{get winner}"]
[/if]
[elif "control_ar == 'inform_autopilot'"]
[set detect_preset][eval][image_info file="{get winner}" delimiter="/" width height][/eval][/set]
[/elif]

[for i=0 "i < {length control_effects}" "i+1"]
[call "/presets/control_freak/effects/{array control_effects i}" _places="user|common"]
[/for]

[set cn_0_image][get winner][/set]

[if "control_cn_preset != 'none'"]
[call "/presets/controlnet/{get control_cn_preset}" _places="user|common"]
[/if]

[# Assumes that Control Freak images have already been preprocessed.]
[unset cn_0_module]

[# Print the combined Control Freak prompt]
[get control_subject]

[if "control_do_negative"]
[set negative_prompt _append][get control_negative][/set]
[/if]
[/if]
[elif "control_method == 'smart'"]
[get control_subject]
[/elif]
@@ -1,4 +1,11 @@
<details open><summary>0.2.0</summary>
<details open><summary>0.2.1</summary>

### Changed
- The `magic_negative` variable now uses `_defer`, which allows presets to check for other Magic Spice settings such as `do_networks` to alter the negative prompt

</details>

<details><summary>0.2.0</summary>

### Added
- Spices can now specify `magic_fluff_prefix` to add a prefix to the prompt
@@ -1,4 +1,4 @@
[template name="Magic Spice v0.2.0" id="magic_spice"]
[template name="Magic Spice v0.2.1" id="magic_spice"]
[wizard tab _label="About"]
[wizard markdown]
An all-purpose image quality enhancer.

@@ -16,13 +16,12 @@
[set magic_subject _new _label="Subject" _info="Enter a prompt to enhance." _max_lines=20 _lines=3]Statue of God[/set]

[set magic_gpt_class]auto[/set]
[set magic_fluff_prefix]none[/set]
[set magic_fluff_affix]none[/set]

[set style_preset _new _info="May download extra dependencies on first use." _ui="dropdown" _choices="none|{filelist '%BASE_DIR%/templates/%PLACE%/presets/magic_spice/*.*' _basename _hide_ext _places='user|common'}" _label="Choose Your Spice"]allspice_v2[/set]

[set aspect_ratio _new _label="Aspect Ratio Preset" _ui="dropdown" _choices="none|{filelist '%BASE_DIR%/templates/%PLACE%/presets/dimensions/*.*' _basename _hide_ext _places='user|common'}"]square_1-1[/set]

[wizard accordion _label="⚙️ Advanced Settings"]
[set inference_preset _new _info="Locks CFG scale, sampler method, etc. to recommended values" _label="Inference Preset" _ui="dropdown" _choices="none|{filelist '%BASE_DIR%/templates/%PLACE%/presets/txt2img/*.*' _basename _hide_ext _places='user|common'}"]none[/set]
[set do_fluff _new _label="Use fluff terms" _ui="checkbox"]1[/set]
[set do_gpt _new _label="Use GPT-2 prompt expansion" _ui="checkbox"]0[/set]
[set do_networks _new _label="Use extra networks" _ui="checkbox"]1[/set]

@@ -38,7 +37,7 @@
[set start_idx]0[/set]
[if "subject_count > 1"]
[array magic_subject 0] ADDBASE
[if do_negative][set negative_prompt _append][get magic_negative][/set][/if]
[if do_negative][set negative_prompt _append][get magic_negative _parse][/set][/if]
[set start_idx _append]1[/set]
[/if]
@@ -46,12 +45,12 @@
[if "start_idx > 1"]
%SPACE%ADDROW%SPACE%
[/if]
[if "do_fluff and magic_fluff_prefix"][get magic_fluff_prefix], [/if]
[if "do_fluff and magic_fluff_prefix != 'none'"][get magic_fluff_prefix], [/if]
[if do_gpt]
[replace _from="<pad>|</s>" _to=""][gpt model="{get magic_gpt_model}" tokenizer="{get magic_gpt_tokenizer}" instruction="{get magic_gpt_instruction}" transformers_class="{get magic_gpt_class _default='auto'}" max_length=75] [array magic_subject start_idx][/gpt] [/replace]
[/if]
[else] [array magic_subject start_idx] [/else]
[if "start_idx > 0"], [array magic_subject 0][/if][if "do_fluff and magic_fluff_affix"], [get magic_fluff_affix] [/if]
[if "start_idx > 0"], [array magic_subject 0][/if][if "do_fluff and magic_fluff_affix != 'none'"], [get magic_fluff_affix] [/if]

[if do_negatives]
[set negative_prompt _append][if "start_idx > 0"]%SPACE%ADDROW%SPACE%[/if][if "start_idx > 1"]([array magic_subject "start_idx - 1"]:1.5) [/if][get magic_negative][/set]
@@ -64,15 +63,6 @@
[get magic_networks]
[/if]


[if "inference_preset != 'none'"]
[call "/presets/txt2img/{get inference_preset}" _places="user|common"]
[/if]

[if "aspect_ratio != 'none'"]
[call "/presets/dimensions/{get aspect_ratio}" _places="user|common"]
[/if]

[if do_postprocessing]
[after][get magic_postprocessing _parse][/after]
[/if]
@ -0,0 +1,39 @@
|
|||
{
|
||||
"TEMPLATES": {
|
||||
"magic_spice": {
|
||||
"style_preset": "pony_photoreal_spice_v3",
|
||||
"do_fluff": true,
|
||||
"do_gpt": false,
|
||||
"do_networks": true,
|
||||
"do_negatives": true,
|
||||
"do_postprocessing": true,
|
||||
"_enable": true,
|
||||
"_destination": "your_var",
|
||||
"_order": 1
|
||||
},
|
||||
"control_freak": {
|
||||
"control_subject": "[get your_var]",
|
||||
"control_method": "smart",
|
||||
"control_effects": "may_flip_horizontal",
|
||||
"control_cn_preset": "xl_promax_v1",
|
||||
"control_ar": "inform_autopilot",
|
||||
"control_winner_threshold": 5.0,
|
||||
"control_winner_margin": 0.0,
|
||||
"control_caption_method": "add_difference",
|
||||
"control_do_negative": true,
|
||||
"control_do_extra": true,
|
||||
"_enable": true,
|
||||
"_destination": "prompt",
|
||||
"_order": 2
|
||||
},
|
||||
"autopilot": {
|
||||
"inference_preset": "dpm_2m_fast_v1",
|
||||
"img2img_preset": "none",
|
||||
"aspect_ratio": "square_1-1",
|
||||
"flip_aspect_ratio": false,
|
||||
"_enable": true,
|
||||
"_destination": "prompt",
|
||||
"_order": 3
|
||||
}
|
||||
}
|
||||
}
|
||||
Loading…
Reference in New Issue