pre-merge cleanup

pull/2897/head^2
Vladimir Mandic 2024-02-22 13:56:49 -05:00
parent e64055cc01
commit 41cfecc127
6 changed files with 333 additions and 254 deletions


@@ -1,10 +1,138 @@
# Change Log for SD.Next
## Update for 2024-02-22
Only 3 weeks since the last release, but here's another feature-packed one!
The release schedule was shorter this time as we wanted to get some of the fixes out faster.
### Highlights
- **IP-Adapters** & **FaceID**: multi-adapter and multi-image support
- New optimization engines: [DeepCache](https://github.com/horseee/DeepCache), [ZLUDA](https://github.com/vosen/ZLUDA) and **Dynamic Attention Slicing**
- New built-in pipelines: [Differential diffusion](https://github.com/exx8/differential-diffusion) and [Regional prompting](https://github.com/huggingface/diffusers/blob/main/examples/community/README.md#regional-prompting-pipeline)
- Big updates to: **Outpainting** (noised-edge-extend), **Clip-skip** (interpolate with non-integer values!), **CFG end** (prevent overburn at high CFG scales), **Control** module masking functionality
- All reported issues since the last release are addressed and included in this release
Further details:
- For basic instructions, see [README](https://github.com/vladmandic/automatic/blob/master/README.md)
- For more details on all new features see full [CHANGELOG](https://github.com/vladmandic/automatic/blob/master/CHANGELOG.md)
- For documentation, see [WiKi](https://github.com/vladmandic/automatic/wiki)
- [Discord](https://discord.com/invite/sd-next-federal-batch-inspectors-1101998836328697867) server
### Full ChangeLog for 2024-02-22
- **Improvements**:
- **IP Adapter** major refactor
- support for **multiple input images** per IP adapter
- support for **multiple concurrent IP adapters**
*note*: you cannot mix & match IP adapters that use different *CLIP* models, for example `Base` and `Base ViT-G`
- add **adapter start/end** to settings, thanks @AI-Casanova
starting the adapter late can give better control over composition and prompt adherence
ending the adapter early can help with overall quality and performance
- unified interface in txt2img, img2img and control
- enhanced xyz grid support
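A rough diffusers-level sketch of what multi-adapter, multi-image usage looks like (illustrative only, not SD.Next internals; repository and weight names are the common `h94/IP-Adapter` examples):
```python
import torch
from diffusers import AutoPipelineForText2Image
from diffusers.utils import load_image

pipe = AutoPipelineForText2Image.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
# two concurrent ip adapters; both use the same base CLIP image encoder
pipe.load_ip_adapter(
    "h94/IP-Adapter",
    subfolder="models",
    weight_name=["ip-adapter_sd15.bin", "ip-adapter-plus_sd15.bin"],
)
pipe.set_ip_adapter_scale([0.6, 0.4])  # one scale per adapter
# one list of input images per adapter; the second adapter blends two images
images = [[load_image("style.png")], [load_image("ref1.png"), load_image("ref2.png")]]
result = pipe(prompt="a portrait photo", ip_adapter_image=images).images[0]
```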
- **FaceID** now also works with multiple input images!
- [Differential diffusion](https://github.com/exx8/differential-diffusion)
img2img generation where you control the strength of each pixel or image area
can be used with manually created masks or with auto-generated depth maps
uses the general denoising strength value
simply enable from *img2img -> scripts -> differential diffusion*
*note*: supports sd15 and sdxl models
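As a small illustration of the per-pixel strength idea: a change map is just a grayscale image, and darker vs brighter regions get denoised by different amounts (a sketch; see the pipeline docs for the exact polarity):
```python
import numpy as np
from PIL import Image

# build a horizontal-gradient change map: one side keeps the original image,
# the other side gets denoised at nearly full strength
w, h = 1024, 1024
gradient = np.tile(np.linspace(0.0, 1.0, w, dtype=np.float32), (h, 1))
Image.fromarray((gradient * 255).astype(np.uint8), mode="L").save("change_map.png")
```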
- [Regional prompting](https://github.com/huggingface/diffusers/blob/main/examples/community/README.md#regional-prompting-pipeline) as a built-in solution
usage is the same as in the original implementation by @hako-mikan
click the title to open the docs with full syntax examples
simply enable from *scripts -> regional prompting*
*note*: supports sd15 models only
- [DeepCache](https://github.com/horseee/DeepCache) model acceleration
it can produce massive speedups (2x-5x) with no overhead, but with some loss of quality
*settings -> compute -> model compile -> deep-cache* and *settings -> compute -> model compile -> cache interval*
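For context, standalone usage per the upstream DeepCache README looks roughly like this (SD.Next wires this up through the settings above instead):
```python
import torch
from diffusers import StableDiffusionPipeline
from DeepCache import DeepCacheSDHelper

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
helper = DeepCacheSDHelper(pipe=pipe)
helper.set_params(cache_interval=3, cache_branch_id=0)  # reuse cached features every 3 steps
helper.enable()
image = pipe("a photo of a cat").images[0]
helper.disable()
```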
- [ZLUDA](https://github.com/vosen/ZLUDA) experimental support, thanks @lshqqytiger
- ZLUDA is CUDA wrapper that can be used for GPUs without native support
- best use case is *AMD GPUs on Windows*, see [wiki](https://github.com/vladmandic/automatic/wiki/ZLUDA) for details
- **Outpaint** control outpaint now uses a new algorithm: noised-edge-extend
the new method allows much larger outpaint areas in a single pass; even 512->1024 outpainting works well
note that denoise strength should be increased for larger outpaint areas; for example, outpainting 512->1024 works well with denoise 0.75
outpaint can run in *img2img* mode (default) or *inpaint* mode where the original image is masked (if inpaint masked only is selected)
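The idea can be sketched as: replicate the border pixels outward, perturb the extension with noise, then run img2img at high denoise over the enlarged canvas (a conceptual sketch, not SD.Next's exact implementation):
```python
import numpy as np
from PIL import Image

def noised_edge_extend(img: Image.Image, pad: int, sigma: float = 25.0) -> Image.Image:
    arr = np.asarray(img).astype(np.float32)
    out = np.pad(arr, ((pad, pad), (pad, pad), (0, 0)), mode="edge")  # replicate edges outward
    mask = np.ones(out.shape[:2], dtype=np.float32)
    mask[pad:-pad, pad:-pad] = 0.0  # keep the original area noise-free
    out += np.random.normal(0.0, sigma, out.shape).astype(np.float32) * mask[..., None]
    return Image.fromarray(np.clip(out, 0, 255).astype(np.uint8))

# extended = noised_edge_extend(Image.open("input.png").convert("RGB"), pad=256)
# then denoise the extended canvas with img2img at ~0.75 strength
```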
- **Clip-skip** reworked completely, thanks @AI-Casanova & @Disty0
the clip-skip range is now 0-12, where previously the lowest value was 1 (default is still 1)
values can also be decimal to interpolate between different layers, for example `clip-skip: 1.5`, thanks @AI-Casanova
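Fractional clip-skip can be understood as linear interpolation between the two neighboring text-encoder hidden layers (a sketch; the exact layer indexing in SD.Next may differ):
```python
import torch

def apply_clip_skip(hidden_states: list[torch.Tensor], clip_skip: float) -> torch.Tensor:
    # clip-skip n takes the n-th hidden layer from the end; a fractional value
    # interpolates between the two adjacent layers, e.g. 1.5 mixes layers 1 and 2
    lower = int(clip_skip)
    frac = clip_skip - lower
    if frac == 0.0:
        return hidden_states[-(lower + 1)]
    return torch.lerp(hidden_states[-(lower + 1)], hidden_states[-(lower + 2)], frac)
```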
- **CFG End** new param to control image generation guidance, thanks @AI-Casanova
sometimes you want strong control over composition, but you want it to stop at some point
for example, when used with ip-adapters or controlnet, high cfg scale can overpower the guided image
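In plain diffusers the same effect can be approximated with a step-end callback that switches guidance off partway through sampling (pattern adapted from the diffusers dynamic-CFG example; `pipe` is assumed to be a loaded pipeline):
```python
# disable classifier-free guidance after 60% of the steps
def cfg_end_callback(pipe, step_index, timestep, callback_kwargs):
    if step_index == int(pipe.num_timesteps * 0.6):
        pipe._guidance_scale = 0.0  # guidance off from here on
        # with guidance off the batch is no longer doubled, so drop the negative half
        callback_kwargs["prompt_embeds"] = callback_kwargs["prompt_embeds"].chunk(2)[-1]
    return callback_kwargs

image = pipe(
    "a portrait photo",
    guidance_scale=12.0,
    callback_on_step_end=cfg_end_callback,
    callback_on_step_end_tensor_inputs=["prompt_embeds"],
).images[0]
```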
- **Control**
- when performing inpainting, you can specify processing resolution using **size->mask**
- units now have extra option to re-use current preview image as processor input
- **Cross-attention** refactored cross-attention methods, thanks @Disty0
- for backend:original, it's unchanged: SDP, xFormers, Doggettx's, InvokeAI, Sub-quadratic, Split attention
- for backend:diffusers, the list is now: SDP, xFormers, Batch matrix-matrix, Split attention, Dynamic Attention BMM, Dynamic Attention SDP
note: you may need to update your settings! Attention Slicing is renamed to Split attention
- for ROCm, updated default cross-attention to Scaled Dot Product
- **Dynamic Attention Slicing**, thanks @Disty0
- dynamically slices attention queries in order to keep them under the slice rate
slicing is only triggered when the query size is larger than the slice rate, to preserve performance
*Dynamic Attention Slicing BMM* uses *Batch matrix-matrix*
*Dynamic Attention Slicing SDP* uses *Scaled Dot Product*
- *settings -> compute settings -> attention -> dynamic attention slicing*
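Conceptually, the slicer only chunks the query when it exceeds the threshold, so small workloads keep the fast unsliced path (a simplified sketch of the idea):
```python
import torch
import torch.nn.functional as F

def dynamic_sliced_sdp(q, k, v, slice_size: int) -> torch.Tensor:
    # q, k, v: (batch, heads, tokens, dim); assumes q and v share the head dim
    if q.shape[-2] <= slice_size:
        return F.scaled_dot_product_attention(q, k, v)  # fast path, no slicing
    out = torch.empty_like(q)
    for i in range(0, q.shape[-2], slice_size):
        out[..., i:i + slice_size, :] = F.scaled_dot_product_attention(
            q[..., i:i + slice_size, :], k, v
        )
    return out
```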
- **ONNX**:
- allow specifying the default ONNX provider and CPU fallback
*settings -> diffusers*
- allow manual installation of a specific ONNX flavor
*settings -> onnx*
- better handling of `fp16` models/vae, thanks @lshqqytiger
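Provider selection with CPU fallback maps onto the standard onnxruntime session API, roughly like this (provider names depend on the installed onnxruntime flavor; "model.onnx" is a placeholder path):
```python
import onnxruntime as ort

# providers are tried in listed order, so CPU last acts as the fallback
providers = ["CUDAExecutionProvider", "CPUExecutionProvider"]  # e.g. DmlExecutionProvider for DirectML
session = ort.InferenceSession("model.onnx", providers=providers)
print(session.get_providers())  # shows which providers were actually bound
```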
- **OpenVINO** update to `torch 2.2.0`, thanks @Disty0
- **HyperTile** additional options, thanks @Disty0
- add swap size option
- add option to apply HyperTile only to the hires pass
- add `--theme` cli param to force theme on startup
- add `--allow-paths` cli param to add additional paths that are allowed to be accessed via web, thanks @OuticNZ
- **Wiki**:
- added benchmark notes for IPEX, OpenVINO and Olive
- added ZLUDA wiki page
- **Internal**
- update dependencies
- refactor txt2img/img2img api
- enhanced theme loader
- add additional debug env variables
- enhanced sdp cross-optimization control
see *settings -> compute settings*
- experimental support for *python 3.12*
- **Fixes**:
- add variation seed to diffusers txt2img, thanks @AI-Casanova
- add cmd param `--skip-env` to skip setting of environment parameters during sdnext load
- handle extensions that install conflicting versions of packages
`onnxruntime`, `opencv2-python`
- installer refresh package cache on any install
- fix embeddings registration on server startup, thanks @AI-Casanova
- ipex handle dependencies, thanks @Disty0
- insightface handle dependencies
- img2img mask blur and padding
- xyz grid handle ip adapter name and scale
- fix lazy image loading preventing metadata from being loaded in time
- allow startup without valid models folder
- fix interrogate api endpoint
- control fix resize causing runtime errors
- control fix processor override image after processor change
- control fix display grid with batch
- control restore pipeline before running scripts/extensions
- handle pipelines that return dict instead of object
- lora use strict name matching if preferred option is by-filename
- fix inpaint mask only for diffusers
- fix vae dtype mismatch, thanks @Disty0
- fix controlnet inpaint mask
- fix theme list refresh
- fix extensions update information in ui
- fix taesd with bfloat16
- fix model merge manual merge settings, thanks @AI-Casanova
- fix gradio instant update issues for textboxes in quicksettings
- fix rembg missing dependency
- bind controlnet extension to last known working commit, thanks @Aptronymist
- prompts-from-file fix resizable prompt area
## Update for 2024-02-07
Another big release just hit the shelves!
-### Highlights
+### Highlights 2024-02-07
- A lot more functionality in the **Control** module:
- Inpaint and outpaint support, flexible resizing options, optional hires
@@ -36,7 +164,7 @@ Further details:
- For more details on all new features see full [CHANGELOG](https://github.com/vladmandic/automatic/blob/master/CHANGELOG.md)
- For documentation, see [WiKi](https://github.com/vladmandic/automatic/wiki)
-### Full changelog
+### Full ChangeLog 2024-02-07
- Heavily updated [Wiki](https://github.com/vladmandic/automatic/wiki)
- **Control**:
@@ -239,8 +367,8 @@ Further details:
best used together with torch compile: *inductor*
this feature is highly experimental and will evolve over time
requires nightly versions of `torch` and `torchao`
-> pip install -U --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu121
-> pip install -U git+https://github.com/pytorch-labs/ao
+> `pip install -U --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu121`
+> `pip install -U git+https://github.com/pytorch-labs/ao`
- new option: **compile text encoder** (experimental)
- **correction**
new section in generate, allows for image corrections during generation directly in latent space
@@ -336,7 +464,7 @@ Further details:
To wrap up this amazing year, we're releasing a new version of [SD.Next](https://github.com/vladmandic/automatic), and this one is absolutely massive!
-### Highlights
+### Highlights 2023-12-29
- Brand new Control module for *text, image, batch and video* processing
Native implementation of all control methods for both *SD15* and *SD-XL*
@@ -360,7 +488,7 @@ And other improvements in areas such as: Upscaling (up to 8x now with 40+ avail
Plus some nifty new modules such as **FaceID** automatic face guidance using embeds during generation and **Depth 3D** image to 3D scene
-### Full changelog
+### Full ChangeLog 2023-12-29
- **Control**
- native implementation of all image control methods:

@@ -1 +1 @@
-Subproject commit 7fd9904d9b01bdc8e2029f908ca5005aff184f61
+Subproject commit 8985722e34377a7c966fd3b5e02751922e777e44


@@ -77,22 +77,28 @@ def post_interrogate(req: models.ReqInterrogate):
    image = helpers.decode_base64_to_image(req.image)
    image = image.convert('RGB')
    if req.model == "clip":
-        caption = shared.interrogator.interrogate(image)
-        return models.ResInterrogate(caption)
-    elif req.model == "deepdanbooru":
+        try:
+            caption = shared.interrogator.interrogate(image)
+        except Exception as e:
+            caption = str(e)
+        return models.ResInterrogate(caption=caption)
+    elif req.model == "deepdanbooru" or req.model == 'deepbooru':
        from modules import deepbooru
        caption = deepbooru.model.tag(image)
-        return models.ResInterrogate(caption)
+        return models.ResInterrogate(caption=caption)
    else:
        from modules.ui_interrogate import interrogate_image, analyze_image, get_models
        if req.model not in get_models():
            raise HTTPException(status_code=404, detail="Model not found")
-        caption = interrogate_image(image, model=req.model, mode=req.mode)
+        try:
+            caption = interrogate_image(image, model=req.model, mode=req.mode)
+        except Exception as e:
+            caption = str(e)
        if not req.analyze:
-            return models.ResInterrogate(caption)
+            return models.ResInterrogate(caption=caption)
        else:
            medium, artist, movement, trending, flavor = analyze_image(image, model=req.model)
-            return models.ResInterrogate(caption, medium, artist, movement, trending, flavor)
+            return models.ResInterrogate(caption=caption, medium=medium, artist=artist, movement=movement, trending=trending, flavor=flavor)

def post_unload_checkpoint():
    from modules import sd_models
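For reference, exercising this endpoint from a client could look like the sketch below (the `/sdapi/v1/interrogate` route and default port are assumptions based on the standard API layout):
```python
import base64
import requests

with open("input.png", "rb") as f:
    payload = {"image": base64.b64encode(f.read()).decode(), "model": "clip"}
# route and port assume a default local SD.Next install
resp = requests.post("http://127.0.0.1:7860/sdapi/v1/interrogate", json=payload, timeout=120)
resp.raise_for_status()
print(resp.json().get("caption"))
```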


@@ -16,7 +16,6 @@ class DynamicSessionOptions(ort.SessionOptions):
def __init__(self):
super().__init__()
self.enable_mem_pattern = False
@classmethod
@@ -77,7 +76,6 @@ class TemporalModule(TorchCompatibleModule):
device = extract_device(args, kwargs)
if device is not None and device.type != "cpu":
from .execution_providers import TORCH_DEVICE_TO_EP
provider = TORCH_DEVICE_TO_EP[device.type] if device.type in TORCH_DEVICE_TO_EP else self.provider
return OnnxRuntimeModel.load_model(self.path, provider, DynamicSessionOptions.from_sess_options(self.sess_options))
return self
@@ -95,15 +93,12 @@ class OnnxRuntimeModel(TorchCompatibleModule, diffusers.OnnxRuntimeModel):
device = extract_device(args, kwargs)
if device is not None:
self.device = device
-            self.model = move_inference_session(self.model, device) # pylint: disable=attribute-defined-outside-init
+            self.model = move_inference_session(self.model, device)
return self
class VAEConfig:
-    DEFAULTS = {
-        "scaling_factor": 0.18215,
-    }
+    DEFAULTS = { "scaling_factor": 0.18215 }
config: Dict
def __init__(self, config: Dict):
@@ -151,10 +146,8 @@ class VAE(TorchCompatibleModule):
def check_parameters_changed(p, refiner_enabled: bool):
from modules import shared, sd_models
if shared.sd_model.__class__.__name__ == "OnnxRawPipeline" or not shared.sd_model.__class__.__name__.startswith("Onnx"):
return shared.sd_model
compile_height = p.height
compile_width = p.width
if (shared.compiled_model_state is None or
@@ -172,17 +165,14 @@ def check_parameters_changed(p, refiner_enabled: bool):
shared.compiled_model_state.height = compile_height
shared.compiled_model_state.width = compile_width
shared.compiled_model_state.batch_size = p.batch_size
return shared.sd_model
def preprocess_pipeline(p):
from modules import shared, sd_models
if "ONNX" not in shared.opts.diffusers_pipeline:
shared.log.warning(f"Unsupported pipeline for 'olive-ai' compile backend: {shared.opts.diffusers_pipeline}. You should select one of the ONNX pipelines.")
return shared.sd_model
if hasattr(shared.sd_model, "preprocess"):
shared.sd_model = shared.sd_model.preprocess(p)
if hasattr(shared.sd_refiner, "preprocess"):
@@ -192,7 +182,6 @@ def preprocess_pipeline(p):
if shared.opts.onnx_unload_base:
sd_models.reload_model_weights(op='model')
shared.sd_model = shared.sd_model.preprocess(p)
return shared.sd_model
@@ -201,26 +190,18 @@ def ORTDiffusionModelPart_to(self, *args, **kwargs):
return self
-def initialize():
+def initialize_onnx():
global initialized # pylint: disable=global-statement
if initialized:
return
from installer import log
from modules import devices
from modules.paths import models_path
from modules.shared import opts
try: # may fail on onnx import
import onnx # pylint: disable=unused-import
from .execution_providers import ExecutionProvider, TORCH_DEVICE_TO_EP, available_execution_providers
onnx_dir = os.path.join(models_path, "ONNX")
if not os.path.isdir(onnx_dir):
os.mkdir(onnx_dir)
if devices.backend == "rocm":
TORCH_DEVICE_TO_EP["cuda"] = ExecutionProvider.ROCm
from .pipelines.onnx_stable_diffusion_pipeline import OnnxStableDiffusionPipeline
from .pipelines.onnx_stable_diffusion_img2img_pipeline import OnnxStableDiffusionImg2ImgPipeline
from .pipelines.onnx_stable_diffusion_inpaint_pipeline import OnnxStableDiffusionInpaintPipeline
@@ -253,10 +234,10 @@ def initialize():
optimum.onnxruntime.modeling_diffusion._ORTDiffusionModelPart.to = ORTDiffusionModelPart_to # pylint: disable=protected-access
-        log.info(f'ONNX: selected={opts.onnx_execution_provider}, available={available_execution_providers}')
        initialized = True
+        log.debug(f'ONNX: version={ort.__version__} provider={opts.onnx_execution_provider}, available={available_execution_providers}')
except Exception as e:
-        log.error(f'ONNX: failed to initialize: {e}')
+        log.error(f'ONNX failed to initialize: {e}')
initialized = True
def initialize_olive():
@@ -283,10 +264,8 @@ def initialize_olive():
def install_olive():
from installer import installed, install, log
if installed("olive-ai"):
return
try:
log.info('Installing Olive')
install('olive-ai', 'olive-ai', ignore=True)


@@ -22,246 +22,212 @@ def create_ui():
from .utils import check_diffusers_cache
with gr.Blocks(analytics_enabled=False) as ui:
with gr.Row():
with gr.Tabs(elem_id="tabs_onnx"):
with gr.TabItem("Manage execution providers", id="onnxep"):
gr.Markdown("Uninstall existing execution provider and install another one.")
choices = []
for ep in ExecutionProvider:
choices.append(ep)
ep_default = None
if cmd_opts.use_directml:
ep_default = ExecutionProvider.DirectML
elif cmd_opts.use_cuda:
ep_default = ExecutionProvider.CUDA
elif cmd_opts.use_rocm:
ep_default = ExecutionProvider.ROCm
elif cmd_opts.use_openvino:
ep_default = ExecutionProvider.OpenVINO
ep_checkbox = gr.Radio(label="Execution provider", value=ep_default, choices=choices)
ep_install = gr.Button(value="Install")
gr.Markdown("**Warning! If you are trying to reinstall, it may not work due to permission issue.**")
ep_install.click(fn=install_execution_provider, inputs=ep_checkbox)
with gr.Tabs(elem_id="tabs_onnx"):
with gr.TabItem("Provider", id="onnxep"):
gr.Markdown("Install ONNX execution provider")
ep_default = None
if cmd_opts.use_directml:
ep_default = ExecutionProvider.DirectML
elif cmd_opts.use_cuda:
ep_default = ExecutionProvider.CUDA
elif cmd_opts.use_rocm:
ep_default = ExecutionProvider.ROCm
elif cmd_opts.use_openvino:
ep_default = ExecutionProvider.OpenVINO
ep_checkbox = gr.Radio(label="Execution provider", value=ep_default, choices=ExecutionProvider)
ep_install = gr.Button(value="Reinstall")
ep_log = gr.HTML("")
ep_install.click(fn=install_execution_provider, inputs=[ep_checkbox], outputs=[ep_log])
if run_olive_workflow is not None:
import olive.passes as olive_passes
from olive.hardware.accelerator import AcceleratorSpec, Device
accelerator = AcceleratorSpec(accelerator_type=Device.GPU, execution_provider=opts.onnx_execution_provider)
with gr.Tabs(elem_id="tabs_olive"):
with gr.TabItem("Manage cache", id="manage_cache"):
cache_state_dirname = gr.Textbox(value=None, visible=False)
with gr.Row():
model_dropdown = gr.Dropdown(label="Model", value="Please select model", choices=checkpoint_tiles())
create_refresh_button(model_dropdown, refresh_checkpoints, {}, "onnx_cache_refresh_diffusers_model")
with gr.Row():
def remove_cache_onnx_converted(dirname: str):
shutil.rmtree(os.path.join(opts.onnx_cached_models_path, dirname))
log.info(f"ONNX converted cache of '{dirname}' is removed.")
cache_onnx_converted = gr.Markdown("Please select model")
cache_remove_onnx_converted = gr.Button(value="Remove cache", visible=False)
cache_remove_onnx_converted.click(fn=remove_cache_onnx_converted, inputs=[cache_state_dirname,])
with gr.Column():
cache_optimized_selected = gr.Textbox(value=None, visible=False)
def select_cache_optimized(evt: gr.SelectData, data):
return ",".join(data[evt.index[0]])
def remove_cache_optimized(dirname: str, s: str):
if s == "":
return
size = s.split(",")
shutil.rmtree(os.path.join(opts.onnx_cached_models_path, f"{dirname}-{size[0]}w-{size[1]}h"))
log.info(f"Olive processed cache of '{dirname}' is removed: width={size[0]}, height={size[1]}")
with gr.Row():
cache_list_optimized_headers = ["height", "width"]
cache_list_optimized_types = ["str", "str"]
cache_list_optimized = gr.Dataframe(None, label="Optimized caches", show_label=True, overflow_row_behaviour='paginate', interactive=False, max_rows=10, headers=cache_list_optimized_headers, datatype=cache_list_optimized_types, type="array")
cache_list_optimized.select(fn=select_cache_optimized, inputs=[cache_list_optimized,], outputs=[cache_optimized_selected,])
cache_remove_optimized = gr.Button(value="Remove selected cache", visible=False)
cache_remove_optimized.click(fn=remove_cache_optimized, inputs=[cache_state_dirname, cache_optimized_selected,])
def cache_update_menus(query: str):
checkpoint_info = get_closet_checkpoint_match(query)
if checkpoint_info is None:
log.error(f"Could not find checkpoint object for '{query}'.")
with gr.TabItem("Manage cache", id="manage_cache"):
cache_state_dirname = gr.Textbox(value=None, visible=False)
with gr.Row():
model_dropdown = gr.Dropdown(label="Model", value="Please select model", choices=checkpoint_tiles())
create_refresh_button(model_dropdown, refresh_checkpoints, {}, "onnx_cache_refresh_diffusers_model")
with gr.Row():
def remove_cache_onnx_converted(dirname: str):
shutil.rmtree(os.path.join(opts.onnx_cached_models_path, dirname))
log.info(f"ONNX converted cache of '{dirname}' is removed.")
cache_onnx_converted = gr.Markdown("Please select model")
cache_remove_onnx_converted = gr.Button(value="Remove cache", visible=False)
cache_remove_onnx_converted.click(fn=remove_cache_onnx_converted, inputs=[cache_state_dirname,])
with gr.Column():
cache_optimized_selected = gr.Textbox(value=None, visible=False)
def select_cache_optimized(evt: gr.SelectData, data):
return ",".join(data[evt.index[0]])
def remove_cache_optimized(dirname: str, s: str):
if s == "":
return
model_name = os.path.basename(os.path.dirname(os.path.dirname(checkpoint_info.path)) if check_diffusers_cache(checkpoint_info.path) else checkpoint_info.path)
caches = os.listdir(opts.onnx_cached_models_path)
onnx_converted = False
optimized_sizes = []
for cache in caches:
if cache == model_name:
onnx_converted = True
elif model_name in cache:
try:
splitted = cache.split("-")
height = splitted[-1][:-1]
width = splitted[-2][:-1]
optimized_sizes.append((width, height,))
except Exception:
pass
return (
model_name,
cache_onnx_converted.update(value="ONNX model cache of this model exists." if onnx_converted else "ONNX model cache of this model does not exist."),
cache_remove_onnx_converted.update(visible=onnx_converted),
None if len(optimized_sizes) == 0 else optimized_sizes,
cache_remove_optimized.update(visible=True),
)
size = s.split(",")
shutil.rmtree(os.path.join(opts.onnx_cached_models_path, f"{dirname}-{size[0]}w-{size[1]}h"))
log.info(f"Olive processed cache of '{dirname}' is removed: width={size[0]}, height={size[1]}")
with gr.Row():
cache_list_optimized_headers = ["height", "width"]
cache_list_optimized_types = ["str", "str"]
cache_list_optimized = gr.Dataframe(None, label="Optimized caches", show_label=True, overflow_row_behaviour='paginate', interactive=False, max_rows=10, headers=cache_list_optimized_headers, datatype=cache_list_optimized_types, type="array")
cache_list_optimized.select(fn=select_cache_optimized, inputs=[cache_list_optimized,], outputs=[cache_optimized_selected,])
cache_remove_optimized = gr.Button(value="Remove selected cache", visible=False)
cache_remove_optimized.click(fn=remove_cache_optimized, inputs=[cache_state_dirname, cache_optimized_selected,])
model_dropdown.change(fn=cache_update_menus, inputs=[model_dropdown,], outputs=[
cache_state_dirname,
cache_onnx_converted, cache_remove_onnx_converted,
cache_list_optimized, cache_remove_optimized,
])
def cache_update_menus(query: str):
checkpoint_info = get_closet_checkpoint_match(query)
if checkpoint_info is None:
log.error(f"Could not find checkpoint object for '{query}'.")
return
model_name = os.path.basename(os.path.dirname(os.path.dirname(checkpoint_info.path)) if check_diffusers_cache(checkpoint_info.path) else checkpoint_info.path)
caches = os.listdir(opts.onnx_cached_models_path)
onnx_converted = False
optimized_sizes = []
for cache in caches:
if cache == model_name:
onnx_converted = True
elif model_name in cache:
try:
splitted = cache.split("-")
height = splitted[-1][:-1]
width = splitted[-2][:-1]
optimized_sizes.append((width, height,))
except Exception:
pass
return (
model_name,
cache_onnx_converted.update(value="ONNX model cache of this model exists." if onnx_converted else "ONNX model cache of this model does not exist."),
cache_remove_onnx_converted.update(visible=onnx_converted),
None if len(optimized_sizes) == 0 else optimized_sizes,
cache_remove_optimized.update(visible=True),
)
with gr.TabItem("Customize pass flow", id="pass_flow"):
with gr.Tabs(elem_id="tabs_model_type"):
with gr.TabItem("Stable Diffusion", id="sd"):
sd_config_path = os.path.join(sd_configs_path, "olive", "sd")
sd_submodels = os.listdir(sd_config_path)
sd_configs: Dict[str, Dict[str, Dict[str, Dict]]] = {}
sd_pass_config_components: Dict[str, Dict[str, Dict]] = {}
model_dropdown.change(fn=cache_update_menus, inputs=[model_dropdown,], outputs=[
cache_state_dirname,
cache_onnx_converted, cache_remove_onnx_converted,
cache_list_optimized, cache_remove_optimized,
])
with gr.Tabs(elem_id="tabs_sd_submodel"):
def sd_create_change_listener(*args):
def listener(v: Dict):
get_recursively(sd_configs, *args[:-1])[args[-1]] = v
return listener
with gr.TabItem("Customize pass flow", id="pass_flow"):
with gr.Tabs(elem_id="tabs_model_type"):
with gr.TabItem("Stable Diffusion", id="sd"):
sd_config_path = os.path.join(sd_configs_path, "olive", "sd")
sd_submodels = os.listdir(sd_config_path)
sd_configs: Dict[str, Dict[str, Dict[str, Dict]]] = {}
sd_pass_config_components: Dict[str, Dict[str, Dict]] = {}
for submodel in sd_submodels:
config: Dict = None
with gr.Tabs(elem_id="tabs_sd_submodel"):
def sd_create_change_listener(*args):
def listener(v: Dict):
get_recursively(sd_configs, *args[:-1])[args[-1]] = v
return listener
sd_pass_config_components[submodel] = {}
for submodel in sd_submodels:
config: Dict = None
sd_pass_config_components[submodel] = {}
with open(os.path.join(sd_config_path, submodel), "r", encoding="utf-8") as file:
config = json.load(file)
sd_configs[submodel] = config
with open(os.path.join(sd_config_path, submodel), "r", encoding="utf-8") as file:
config = json.load(file)
sd_configs[submodel] = config
submodel_name = submodel[:-5]
with gr.TabItem(submodel_name, id=f"sd_{submodel_name}"):
pass_flows = DropdownMulti(label="Pass flow", value=sd_configs[submodel]["pass_flows"][0], choices=sd_configs[submodel]["passes"].keys())
pass_flows.change(fn=sd_create_change_listener(submodel, "pass_flows", 0), inputs=pass_flows)
submodel_name = submodel[:-5]
with gr.TabItem(submodel_name, id=f"sd_{submodel_name}"):
pass_flows = DropdownMulti(label="Pass flow", value=sd_configs[submodel]["pass_flows"][0], choices=sd_configs[submodel]["passes"].keys())
pass_flows.change(fn=sd_create_change_listener(submodel, "pass_flows", 0), inputs=pass_flows)
with gr.Tabs(elem_id=f"tabs_sd_{submodel_name}_pass"):
for pass_name in sd_configs[submodel]["passes"]:
sd_pass_config_components[submodel][pass_name] = {}
with gr.Tabs(elem_id=f"tabs_sd_{submodel_name}_pass"):
for pass_name in sd_configs[submodel]["passes"]:
sd_pass_config_components[submodel][pass_name] = {}
with gr.TabItem(pass_name, id=f"sd_{submodel_name}_pass_{pass_name}"):
config_dict = sd_configs[submodel]["passes"][pass_name]
pass_type = gr.Dropdown(label="Type", value=config_dict["type"], choices=(x.__name__ for x in tuple(olive_passes.REGISTRY.values())))
with gr.TabItem(pass_name, id=f"sd_{submodel_name}_pass_{pass_name}"):
config_dict = sd_configs[submodel]["passes"][pass_name]
def create_pass_config_change_listener(submodel, pass_name, config_key):
def listener(value):
sd_configs[submodel]["passes"][pass_name]["config"][config_key] = value
return listener
pass_type = gr.Dropdown(label="Type", value=config_dict["type"], choices=(x.__name__ for x in tuple(olive_passes.REGISTRY.values())))
for config_key, v in getattr(olive_passes, config_dict["type"], olive_passes.Pass)._default_config(accelerator).items(): # pylint: disable=protected-access
component = None
if v.type_ == bool:
component = gr.Checkbox
elif v.type_ == str:
component = gr.Textbox
elif v.type_ == int:
component = gr.Number
if component is not None:
component = component(value=config_dict["config"][config_key] if config_key in config_dict["config"] else v.default_value, label=config_key)
sd_pass_config_components[submodel][pass_name][config_key] = component
component.change(fn=create_pass_config_change_listener(submodel, pass_name, config_key), inputs=component)
pass_type.change(fn=sd_create_change_listener(submodel, "passes", config_key, "type"), inputs=pass_type) # pylint: disable=undefined-loop-variable
def create_pass_config_change_listener(submodel, pass_name, config_key):
def listener(value):
sd_configs[submodel]["passes"][pass_name]["config"][config_key] = value
return listener
def sd_save():
for k, v in sd_configs.items():
with open(os.path.join(sd_config_path, k), "w", encoding="utf-8") as file:
json.dump(v, file)
log.info("Olive: config for SD was saved.")
sd_save_button = gr.Button(value="Save")
sd_save_button.click(fn=sd_save)
for config_key, v in getattr(olive_passes, config_dict["type"], olive_passes.Pass)._default_config(accelerator).items(): # pylint: disable=protected-access
component = None
with gr.TabItem("Stable Diffusion XL", id="sdxl"):
sdxl_config_path = os.path.join(sd_configs_path, "olive", "sdxl")
sdxl_submodels = os.listdir(sdxl_config_path)
sdxl_configs: Dict[str, Dict[str, Dict[str, Dict]]] = {}
sdxl_pass_config_components: Dict[str, Dict[str, Dict]] = {}
if v.type_ == bool:
component = gr.Checkbox
elif v.type_ == str:
component = gr.Textbox
elif v.type_ == int:
component = gr.Number
with gr.Tabs(elem_id="tabs_sdxl_submodel"):
def sdxl_create_change_listener(*args):
def listener(v: Dict):
get_recursively(sdxl_configs, *args[:-1])[args[-1]] = v
return listener
if component is not None:
component = component(value=config_dict["config"][config_key] if config_key in config_dict["config"] else v.default_value, label=config_key)
sd_pass_config_components[submodel][pass_name][config_key] = component
component.change(fn=create_pass_config_change_listener(submodel, pass_name, config_key), inputs=component)
for submodel in sdxl_submodels:
config: Dict = None
sdxl_pass_config_components[submodel] = {}
with open(os.path.join(sdxl_config_path, submodel), "r", encoding="utf-8") as file:
config = json.load(file)
sdxl_configs[submodel] = config
pass_type.change(fn=sd_create_change_listener(submodel, "passes", config_key, "type"), inputs=pass_type) # pylint: disable=undefined-loop-variable
submodel_name = submodel[:-5]
with gr.TabItem(submodel_name, id=f"sdxl_{submodel_name}"):
pass_flows = DropdownMulti(label="Pass flow", value=sdxl_configs[submodel]["pass_flows"][0], choices=sdxl_configs[submodel]["passes"].keys())
pass_flows.change(fn=sdxl_create_change_listener(submodel, "pass_flows", 0), inputs=pass_flows)
def sd_save():
for k, v in sd_configs.items():
with open(os.path.join(sd_config_path, k), "w", encoding="utf-8") as file:
json.dump(v, file)
log.info("Olive: config for SD was saved.")
with gr.Tabs(elem_id=f"tabs_sdxl_{submodel_name}_pass"):
for pass_name in sdxl_configs[submodel]["passes"]:
sdxl_pass_config_components[submodel][pass_name] = {}
sd_save_button = gr.Button(value="Save")
sd_save_button.click(fn=sd_save)
with gr.TabItem(pass_name, id=f"sdxl_{submodel_name}_pass_{pass_name}"):
config_dict = sdxl_configs[submodel]["passes"][pass_name]
pass_type = gr.Dropdown(label="Type", value=sdxl_configs[submodel]["passes"][pass_name]["type"], choices=(x.__name__ for x in tuple(olive_passes.REGISTRY.values())))
with gr.TabItem("Stable Diffusion XL", id="sdxl"):
sdxl_config_path = os.path.join(sd_configs_path, "olive", "sdxl")
sdxl_submodels = os.listdir(sdxl_config_path)
sdxl_configs: Dict[str, Dict[str, Dict[str, Dict]]] = {}
sdxl_pass_config_components: Dict[str, Dict[str, Dict]] = {}
def create_pass_config_change_listener(submodel, pass_name, config_key): # pylint: disable=function-redefined
def listener(value):
sdxl_configs[submodel]["passes"][pass_name]["config"][config_key] = value
return listener
with gr.Tabs(elem_id="tabs_sdxl_submodel"):
def sdxl_create_change_listener(*args):
def listener(v: Dict):
get_recursively(sdxl_configs, *args[:-1])[args[-1]] = v
return listener
for config_key, v in getattr(olive_passes, config_dict["type"], olive_passes.Pass)._default_config(accelerator).items(): # pylint: disable=protected-access
component = None
if v.type_ == bool:
component = gr.Checkbox
elif v.type_ == str:
component = gr.Textbox
elif v.type_ == int:
component = gr.Number
if component is not None:
component = component(value=config_dict["config"][config_key] if config_key in config_dict["config"] else v.default_value, label=config_key)
sdxl_pass_config_components[submodel][pass_name][config_key] = component
component.change(fn=create_pass_config_change_listener(submodel, pass_name, config_key), inputs=component)
pass_type.change(fn=sdxl_create_change_listener(submodel, "passes", pass_name, "type"), inputs=pass_type)
for submodel in sdxl_submodels:
config: Dict = None
def sdxl_save():
for k, v in sdxl_configs.items():
with open(os.path.join(sdxl_config_path, k), "w", encoding="utf-8") as file:
json.dump(v, file)
log.info("Olive: config for SDXL was saved.")
sdxl_pass_config_components[submodel] = {}
with open(os.path.join(sdxl_config_path, submodel), "r", encoding="utf-8") as file:
config = json.load(file)
sdxl_configs[submodel] = config
submodel_name = submodel[:-5]
with gr.TabItem(submodel_name, id=f"sdxl_{submodel_name}"):
pass_flows = DropdownMulti(label="Pass flow", value=sdxl_configs[submodel]["pass_flows"][0], choices=sdxl_configs[submodel]["passes"].keys())
pass_flows.change(fn=sdxl_create_change_listener(submodel, "pass_flows", 0), inputs=pass_flows)
with gr.Tabs(elem_id=f"tabs_sdxl_{submodel_name}_pass"):
for pass_name in sdxl_configs[submodel]["passes"]:
sdxl_pass_config_components[submodel][pass_name] = {}
with gr.TabItem(pass_name, id=f"sdxl_{submodel_name}_pass_{pass_name}"):
config_dict = sdxl_configs[submodel]["passes"][pass_name]
pass_type = gr.Dropdown(label="Type", value=sdxl_configs[submodel]["passes"][pass_name]["type"], choices=(x.__name__ for x in tuple(olive_passes.REGISTRY.values())))
def create_pass_config_change_listener(submodel, pass_name, config_key): # pylint: disable=function-redefined
def listener(value):
sdxl_configs[submodel]["passes"][pass_name]["config"][config_key] = value
return listener
for config_key, v in getattr(olive_passes, config_dict["type"], olive_passes.Pass)._default_config(accelerator).items(): # pylint: disable=protected-access
component = None
if v.type_ == bool:
component = gr.Checkbox
elif v.type_ == str:
component = gr.Textbox
elif v.type_ == int:
component = gr.Number
if component is not None:
component = component(value=config_dict["config"][config_key] if config_key in config_dict["config"] else v.default_value, label=config_key)
sdxl_pass_config_components[submodel][pass_name][config_key] = component
component.change(fn=create_pass_config_change_listener(submodel, pass_name, config_key), inputs=component)
pass_type.change(fn=sdxl_create_change_listener(submodel, "passes", pass_name, "type"), inputs=pass_type)
def sdxl_save():
for k, v in sdxl_configs.items():
with open(os.path.join(sdxl_config_path, k), "w", encoding="utf-8") as file:
json.dump(v, file)
log.info("Olive: config for SDXL was saved.")
sdxl_save_button = gr.Button(value="Save")
sdxl_save_button.click(fn=sdxl_save)
sdxl_save_button = gr.Button(value="Save")
sdxl_save_button.click(fn=sdxl_save)
return ui

wiki

@@ -1 +1 @@
-Subproject commit 7654276a07723988aefc17da885de8686ee0c530
+Subproject commit ccc5298bf570be6804d63eafe8c20a9791cd91aa