mirror of https://github.com/vladmandic/automatic
add reprocess plus major processing refactor
parent 1395f5bf9e
commit 4136983f82

61  CHANGELOG.md
@@ -1,8 +1,24 @@
# Change Log for SD.Next

## Update for 2024-09-24

- **reprocess**
  - new top-level button: reprocess your last generated image
  - generate using full-quality:off and then reprocess using *full quality decode*
  - generate without hires/refine and then *reprocess with hires/refine*
    *note*: you can change hires/refine settings and run reprocess again!
  - reprocess using *face-restore*
- **text encoder**:
  - allow loading different custom text encoders: *clip-vit-l, clip-vit-g, t5*
    will automatically find the appropriate encoder in the loaded model and replace it with the loaded text encoder
    download text encoders into the folder set in settings -> system paths -> text encoders
    default `models/Text-encoder` folder is used if no custom path is set
    finetuned *clip-vit-l* models: [Detailed, Smooth](https://huggingface.co/zer0int/CLIP-GmP-ViT-L-14), [LongCLIP](https://huggingface.co/zer0int/LongCLIP-GmP-ViT-L-14)
    reference *clip-vit-l* and *clip-vit-g* models: [OpenCLIP-Laion2b](https://huggingface.co/collections/laion/openclip-laion-2b-64fcade42d20ced4e9389b30)
    *note*: sd/sdxl contain heavily distilled versions of the reference models, so switching to a reference model produces vastly different results
  - xyz grid support for text encoder
  - full prompt parser now correctly works with different prompts in batch
- **flux**
  - avoid unet load if unchanged
  - mark specific unet as unavailable if load failed
  - fix diffusers local model name parsing
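The reprocess flow described above can be sketched as a small dispatcher: the state names (`reprocess_decode`, `reprocess_refine`, `reprocess_face`) mirror the UI buttons in this commit, but the function and stage names below are illustrative assumptions, not the project's actual implementation.

```python
# Hypothetical sketch of the reprocess dispatch: given the state set by the
# UI, decide which pipeline stages to re-run on the last generation.
def reprocess_stages(state):
    stages = {
        'reprocess_decode': ['vae-decode-full-quality'],
        'reprocess_refine': ['hires', 'refine', 'vae-decode'],
        'reprocess_face': ['vae-decode', 'face-restore'],
    }
    if state not in stages:
        raise ValueError(f'unknown reprocess state: {state}')
    return stages[state]

print(reprocess_stages('reprocess_refine'))  # ['hires', 'refine', 'vae-decode']
```

Because only the requested stage is re-run, the expensive denoising pass is skipped entirely when reprocessing.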
@@ -22,13 +38,12 @@
- enable working with different resolutions
  now you can adjust width/height in the grid just as any other param
- renamed options to include section name and adjusted cost of each option
- added additional metadata
- **interrogate**
  - add additional blip models: *blip-base, blip-large, blip-t5-xl, blip-t5-xxl, opt-2.7b, opt-6.7b*
  - change default params for better memory utilization
  - add optional advanced params
  - update logging
- **reprocess** generate images using taesd (full quality off) and reprocess selected ones using full vae
  - right click on *generate* button -> *reprocess*
- **lora** auto-apply tags to prompt
  - controlled via *settings -> networks -> lora_apply_tags*
    *0:disable, -1:all-tags, n:top-n-tags*
@@ -36,24 +51,14 @@
- if lora contains no tags, lora name itself will be used as a tag
- if prompt contains `_tags_` it will be used as a placeholder for replacement, otherwise tags will be appended
- used tags are also logged and registered in image metadata
- loras are no longer filtered per detected type vs loaded model type as it is unreliable
- loras display in networks now shows possible version in top-left corner
- correct use of `extra_networks_default_multiplier` if no scale is specified
- always keep lora on gpu
- **huggingface**:
  - force logout/login on token change
  - unified handling of cache folder: set via `HF_HUB` or `HF_HUB_CACHE` or via settings -> system paths
- **cogvideox**:
  - add support for *image2video* (in addition to previous *text2video* and *video2video*)
  - *note*: *image2video* requires separate 5b model variant
- **backend=original** is now marked as in maintenance-only mode
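The lora tag rules above (placeholder replacement vs append, top-n selection, lora name as fallback tag) can be condensed into a runnable sketch; the option values follow the changelog (*0:disable, -1:all-tags, n:top-n-tags*), but the helper itself is hypothetical, not the project's code.

```python
# Illustrative helper for the lora tag-application rules described above.
def apply_lora_tags(prompt, tags, lora_apply_tags, lora_name):
    if lora_apply_tags == 0:
        return prompt  # feature disabled
    if not tags:
        tags = [lora_name]  # lora with no tags: use the lora name itself
    selected = tags if lora_apply_tags == -1 else tags[:lora_apply_tags]
    joined = ', '.join(selected)
    if '_tags_' in prompt:
        return prompt.replace('_tags_', joined)  # placeholder replacement
    return f'{prompt}, {joined}'  # otherwise append

print(apply_lora_tags('portrait of _tags_', ['1girl', 'smile'], 1, 'x'))
# portrait of 1girl
```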
@@ -63,11 +68,17 @@
- **ui**
  - hide token counter until tokens are known
  - minor ui optimizations
- fix update infotext on image select
- fix imageviewer exif parser
- **free-u** check if device/dtype are fft compatible and cast as necessary
- **experimental**
  - flux t5 load from gguf: requires transformers pr
  - rocm triton backend for flash attention
- **refactor**
  - modularize main process loop
  - massive log cleanup
  - full lint pass

## Update for 2024-09-13
@@ -1 +1 @@
-Subproject commit 2c95d480d63d46232122ddbd4161b73cba8c258a
+Subproject commit 49f944e5fb0df2e2e7d6c7834d758364d528e774
@@ -31,7 +31,7 @@ const contextMenuInit = () => {
     if ((windowHeight - posy) < menuHeight) contextMenu.style.top = `${windowHeight - menuHeight}px`;
   }

-  function appendContextMenuOption(targetElementSelector, entryName, entryFunction) {
+  function appendContextMenuOption(targetElementSelector, entryName, entryFunction, primary = false) {
     let currentItems = menuSpecs.get(targetElementSelector);
     if (!currentItems) {
       currentItems = [];
@@ -41,7 +41,8 @@ const contextMenuInit = () => {
       id: `${targetElementSelector}_${uid()}`,
       name: entryName,
       func: entryFunction,
-      isNew: true,
+      primary,
+      // isNew: true,
     };
     currentItems.push(newItem);
     return newItem.id;
@@ -64,13 +65,21 @@ const contextMenuInit = () => {
+    if (!e.isTrusted) return;
+    const oldMenu = gradioApp().querySelector('#context-menu');
+    if (oldMenu) oldMenu.remove();
+    menuSpecs.forEach((v, k) => {
+      const items = v.filter((item) => item.primary);
+      if (items.length > 0 && e.composedPath()[0].matches(k)) {
+        showContextMenu(e, e.composedPath()[0], items);
+        e.preventDefault();
+      }
+    });
+  });
   gradioApp().addEventListener('contextmenu', (e) => {
     const oldMenu = gradioApp().querySelector('#context-menu');
     if (oldMenu) oldMenu.remove();
     menuSpecs.forEach((v, k) => {
-      if (e.composedPath()[0].matches(k)) {
-        showContextMenu(e, e.composedPath()[0], v);
+      const items = v.filter((item) => !item.primary);
+      if (items.length > 0 && e.composedPath()[0].matches(k)) {
+        showContextMenu(e, e.composedPath()[0], items);
         e.preventDefault();
       }
     });
@@ -80,10 +89,10 @@ const contextMenuInit = () => {
   return [appendContextMenuOption, removeContextMenuOption, addContextMenuEventListener];
 };

-const initResponse = contextMenuInit();
-const appendContextMenuOption = initResponse[0];
-const removeContextMenuOption = initResponse[1];
-const addContextMenuEventListener = initResponse[2];
+const initContextResponse = contextMenuInit();
+const appendContextMenuOption = initContextResponse[0];
+const removeContextMenuOption = initContextResponse[1];
+const addContextMenuEventListener = initContextResponse[2];

 const generateForever = (genbuttonid) => {
   if (window.generateOnRepeatInterval) {
@@ -102,22 +111,25 @@ const generateForever = (genbuttonid) => {
   }
 };

-const reprocessLatent = (btnId) => {
-  const btn = document.getElementById(btnId);
+const reprocessClick = (tabId, state) => {
+  const btn = document.getElementById(`${tabId}_${state}`);
+  window.submit_state = state;
   if (btn) btn.click();
 };

 async function initContextMenu() {
   let id = '';
   for (const tab of ['txt2img', 'img2img', 'control']) {
-    for (const el of ['generate', 'interrupt', 'skip', 'pause', 'paste', 'clear_prompt', 'extra_networks_btn']) {
-      const id = `#${tab}_${el}`;
-      appendContextMenuOption(id, 'Copy to clipboard', () => navigator.clipboard.writeText(document.querySelector(`#${tab}_prompt > label > textarea`).value));
-      appendContextMenuOption(id, 'Generate forever', () => generateForever(`#${tab}_generate`));
-      appendContextMenuOption(id, 'Apply selected style', quickApplyStyle);
-      appendContextMenuOption(id, 'Quick save style', quickSaveStyle);
-      appendContextMenuOption(id, 'nVidia overlay', initNVML);
-      appendContextMenuOption(id, 'Reprocess last image', () => reprocessLatent(`${tab}_reprocess`));
-    }
+    id = `#${tab}_generate`;
+    appendContextMenuOption(id, 'Copy to clipboard', () => navigator.clipboard.writeText(document.querySelector(`#${tab}_prompt > label > textarea`).value));
+    appendContextMenuOption(id, 'Generate forever', () => generateForever(`#${tab}_generate`));
+    appendContextMenuOption(id, 'Apply selected style', quickApplyStyle);
+    appendContextMenuOption(id, 'Quick save style', quickSaveStyle);
+    appendContextMenuOption(id, 'nVidia overlay', initNVML);
+    id = `#${tab}_reprocess`;
+    appendContextMenuOption(id, 'Decode full quality', () => reprocessClick(`${tab}`, 'reprocess_decode'), true);
+    appendContextMenuOption(id, 'Refine & HiRes pass', () => reprocessClick(`${tab}`, 'reprocess_refine'), true);
+    appendContextMenuOption(id, 'Face restore', () => reprocessClick(`${tab}`, 'reprocess_face'), true);
   }
   addContextMenuEventListener();
 }
@@ -115,8 +115,6 @@ button.selected {background: var(--button-primary-background-fill);}
#txt2img_checkboxes, #img2img_checkboxes { background-color: transparent; }
#txt2img_checkboxes, #img2img_checkboxes { margin-bottom: 0.2em; }
#txt2img_actions_column, #img2img_actions_column { flex-flow: wrap; justify-content: space-between; }
#txt2img_enqueue_wrapper, #img2img_enqueue_wrapper { min-width: unset; width: 48%; }
#txt2img_generate_box, #img2img_generate_box { min-width: unset; width: 48%; }

#extras_upscale { margin-top: 10px }
#txt2img_progress_row > div { min-width: var(--left-column); max-width: var(--left-column); }
@@ -1,20 +1,22 @@
 // attaches listeners to the txt2img and img2img galleries to update displayed generation param text when the image changes

-function attachGalleryListeners(tab_name) {
-  const gallery = gradioApp().querySelector(`#${tab_name}_gallery`);
+function attachGalleryListeners(tabName) {
+  const gallery = gradioApp().querySelector(`#${tabName}_gallery`);
   if (!gallery) return null;
-  gallery.addEventListener('click', () => setTimeout(() => {
-    log('galleryItemSelected:', tab_name);
-    gradioApp().getElementById(`${tab_name}_generation_info_button`)?.click();
-  }, 500));
+  gallery.addEventListener('click', () => {
+    // log('galleryItemSelected:', tabName);
+    const btn = gradioApp().getElementById(`${tabName}_generation_info_button`);
+    if (btn) btn.click();
+  });
   gallery?.addEventListener('keydown', (e) => {
-    if (e.keyCode === 37 || e.keyCode === 39) gradioApp().getElementById(`${tab_name}_generation_info_button`).click(); // left or right arrow
+    if (e.keyCode === 37 || e.keyCode === 39) gradioApp().getElementById(`${tabName}_generation_info_button`).click(); // left or right arrow
   });
   return gallery;
 }

 let txt2img_gallery;
 let img2img_gallery;
 let control_gallery;
 let modal;

 async function initiGenerationParams() {
@@ -23,14 +25,17 @@ async function initiGenerationParams() {

   const modalObserver = new MutationObserver((mutations) => {
     mutations.forEach((mutationRecord) => {
-      let selectedTab = gradioApp().querySelector('#tabs div button.selected')?.innerText;
-      if (!selectedTab) selectedTab = gradioApp().querySelector('#tabs div button')?.innerText;
-      if (mutationRecord.target.style.display === 'none' && (selectedTab === 'txt2img' || selectedTab === 'img2img')) { gradioApp().getElementById(`${selectedTab}_generation_info_button`)?.click(); }
+      const tabName = getENActiveTab();
+      if (mutationRecord.target.style.display === 'none') {
+        const btn = gradioApp().getElementById(`${tabName}_generation_info_button`);
+        if (btn) btn.click();
+      }
     });
   });

   if (!txt2img_gallery) txt2img_gallery = attachGalleryListeners('txt2img');
   if (!img2img_gallery) img2img_gallery = attachGalleryListeners('img2img');
+  if (!control_gallery) control_gallery = attachGalleryListeners('control');
   modalObserver.observe(modal, { attributes: true, attributeFilter: ['style'] });
   log('initGenerationParams');
 }
@@ -65,8 +65,8 @@ async function getExif(el) {
   // let html = `<b>Image</b> <a href="${el.src}" target="_blank">${el.src}</a> <b>Size</b> ${el.naturalWidth}x${el.naturalHeight}<br>`;
   let html = '';
   let params;
-  if (exif.paramters) {
-    params = exif.paramters;
+  if (exif.parameters) {
+    params = exif.parameters;
   } else if (exif.userComment) {
     params = Array.from(exif.userComment)
       .map((c) => String.fromCharCode(c))
@@ -42,7 +42,8 @@ function checkPaused(state) {
 function setProgress(res) {
   const elements = ['txt2img_generate', 'img2img_generate', 'extras_generate', 'control_generate'];
   const progress = (res?.progress || 0);
-  const job = res?.job || '';
+  let job = res?.job || '';
+  job = job.replace('txt2img', 'Generate').replace('img2img', 'Generate');
   const perc = res && (progress > 0) ? `${Math.round(100.0 * progress)}%` : '';
   let sec = res?.eta || 0;
   let eta = '';
@@ -90,17 +90,17 @@ button.custom-button { border-radius: var(--button-large-radius); padding: var(-
#control-inputs { margin-top: 1em; }
#txt2img_prompt_container, #img2img_prompt_container, #control_prompt_container { margin-right: var(--layout-gap) }
#txt2img_footer, #img2img_footer, #control_footer { height: fit-content; display: none; }
#txt2img_generate_box, #img2img_generate_box, #control_general_box { gap: 0.5em; flex-wrap: wrap-reverse; height: fit-content; }
#txt2img_generate_box, #img2img_generate_box, #control_generate_box { gap: 0.5em; flex-wrap: unset; min-width: unset; width: 66.6%; }
#txt2img_actions_column, #img2img_actions_column, #control_actions_column { gap: 0.3em; height: fit-content; }
#txt2img_generate_box>button, #img2img_generate_box>button, #control_generate_box>button, #txt2img_enqueue, #img2img_enqueue { min-height: 44px !important; max-height: 44px !important; line-height: 1em; }
#txt2img_generate_box>button, #img2img_generate_box>button, #control_generate_box>button, #txt2img_enqueue, #img2img_enqueue, #txt2img_enqueue>button, #img2img_enqueue>button { min-height: 44px !important; max-height: 44px !important; line-height: 1em; white-space: break-spaces; min-width: unset; }
#txt2img_enqueue_wrapper, #img2img_enqueue_wrapper, #control_enqueue_wrapper { min-width: unset !important; width: 31%; }
#txt2img_generate_line2, #img2img_generate_line2, #txt2img_tools, #img2img_tools, #control_generate_line2, #control_tools { display: flex; }
#txt2img_generate_line2>button, #img2img_generate_line2>button, #extras_generate_box>button, #control_generate_line2>button, #txt2img_tools>button, #img2img_tools>button, #control_tools>button { height: 2em; line-height: 0; font-size: var(--text-md); min-width: unset; display: block !important; }
#txt2img_prompt, #txt2img_neg_prompt, #img2img_prompt, #img2img_neg_prompt, #control_prompt, #control_neg_prompt { display: contents; }
#txt2img_generate_box, #img2img_generate_box { min-width: unset; width: 66%; }
#control_generate_box { min-width: unset; width: 100%; }
#txt2img_actions_column, #img2img_actions_column, #control_actions { flex-flow: wrap; justify-content: space-between; }
#txt2img_enqueue_wrapper, #img2img_enqueue_wrapper, #control_enqueue_wrapper { min-width: unset !important; width: 32%; }

.interrogate-clip { position: absolute; right: 6em; top: 8px; max-width: fit-content; background: none !important; z-index: 50; }
.interrogate-blip { position: absolute; right: 4em; top: 8px; max-width: fit-content; background: none !important; z-index: 50; }
.interrogate-col { min-width: 0 !important; max-width: fit-content; margin-right: var(--spacing-xxl); }
@@ -7,7 +7,6 @@ async function initStartup() {
   // all items here are non-blocking async calls
   initModels();
   getUIDefaults();
-  initiGenerationParams();
   initPromptChecker();
   initLogMonitor();
   initContextMenu();
@@ -16,6 +15,7 @@ async function initStartup() {
   initSettings();
   initImageViewer();
   initGallery();
+  initiGenerationParams();
   setupControlUI();

   // reconnect server session
@@ -204,6 +204,8 @@ function submit_txt2img(...args) {
   requestProgress(id, null, gradioApp().getElementById('txt2img_gallery'));
   const res = create_submit_args(args);
   res[0] = id;
+  res[1] = window.submit_state;
+  window.submit_state = '';
   return res;
 }
@@ -214,7 +216,9 @@ function submit_img2img(...args) {
   requestProgress(id, null, gradioApp().getElementById('img2img_gallery'));
   const res = create_submit_args(args);
   res[0] = id;
-  res[1] = get_tab_index('mode_img2img');
+  res[1] = window.submit_state;
+  res[2] = get_tab_index('mode_img2img');
+  window.submit_state = '';
   return res;
 }
@@ -225,7 +229,9 @@ function submit_control(...args) {
   requestProgress(id, null, gradioApp().getElementById('control_gallery'));
   const res = create_submit_args(args);
   res[0] = id;
-  res[1] = gradioApp().querySelector('#control-tabs > .tab-nav > .selected')?.innerText.toLowerCase() || ''; // selected tab name
+  res[1] = window.submit_state;
+  res[2] = gradioApp().querySelector('#control-tabs > .tab-nav > .selected')?.innerText.toLowerCase() || ''; // selected tab name
+  window.submit_state = '';
   return res;
 }
@@ -236,6 +242,7 @@ function submit_postprocessing(...args) {
 }

 window.submit = submit_txt2img;
+window.submit_state = '';

 function modelmerger(...args) {
   const id = randomId();
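The submit changes above insert `window.submit_state` as the second positional argument, which shifts every later argument by one; the python handlers in this commit gain a matching `state` parameter in that position. A minimal stub (not the project's full `img2img` signature) shows the shifted shape:

```python
# Stub illustrating how the extra `state` slot inserted by the UI shifts the
# positional arguments consumed by the python handlers (illustrative only).
def img2img_stub(id_task, state, mode, *rest):
    return {'id_task': id_task, 'state': state, 'mode': mode, 'rest': rest}

# the JS side now sends [id, state, tab_index, ...original args]
res = ['task(123)', 'reprocess_decode', 2, 'prompt', 'negative']
print(img2img_stub(*res))
```

If the JS and python sides disagree about this slot, every downstream argument is silently misassigned, which is why both sides change in the same commit.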
@@ -53,7 +53,8 @@ def control_set(kwargs):
         p_extra_args[k] = v


-def control_run(units: List[unit.Unit] = [], inputs: List[Image.Image] = [], inits: List[Image.Image] = [], mask: Image.Image = None, unit_type: str = None, is_generator: bool = True,
+def control_run(state: str = '',
+                units: List[unit.Unit] = [], inputs: List[Image.Image] = [], inits: List[Image.Image] = [], mask: Image.Image = None, unit_type: str = None, is_generator: bool = True,
                 input_type: int = 0,
                 prompt: str = '', negative_prompt: str = '', styles: List[str] = [],
                 steps: int = 20, sampler_index: int = None,
@@ -148,6 +149,7 @@ def control_run(...):
         outpath_samples=shared.opts.outdir_samples or shared.opts.outdir_control_samples,
         outpath_grids=shared.opts.outdir_grids or shared.opts.outdir_control_grids,
     )
+    p.state = state
     # processing.process_init(p)
     resize_mode_before = resize_mode_before if resize_name_before != 'None' and inputs is not None and len(inputs) > 0 else 0
@@ -732,6 +734,7 @@ def control_run(...):
         output_filename = ''
         image_txt = f'| Frames {len(output_images)} | Size {output_images[0].width}x{output_images[0].height}'

+    p.close()
     restore_pipeline()
     debug(f'Ready: {image_txt}')
@@ -12,7 +12,7 @@ import piexif
 import piexif.helper
 from PIL import Image, PngImagePlugin, ExifTags
 from modules import sd_samplers, shared, script_callbacks, errors, paths
-from modules.images_grid import image_grid, split_grid, combine_grid, check_grid_size, get_font, draw_grid_annotations, draw_prompt_matrix, GridAnnotation, Grid  # pylint: disable=unused-import
+from modules.images_grid import image_grid, get_grid_size, split_grid, combine_grid, check_grid_size, get_font, draw_grid_annotations, draw_prompt_matrix, GridAnnotation, Grid  # pylint: disable=unused-import
 from modules.images_resize import resize_image  # pylint: disable=unused-import
 from modules.images_namegen import FilenameGenerator, get_next_sequence_number  # pylint: disable=unused-import
@@ -19,7 +19,7 @@ def check_grid_size(imgs):
     return ok


-def image_grid(imgs, batch_size=1, rows=None):
+def get_grid_size(imgs, batch_size=1, rows=None):
     if rows is None:
         if shared.opts.n_rows > 0:
             rows = shared.opts.n_rows
@@ -32,6 +32,11 @@ def get_grid_size(imgs, batch_size=1, rows=None):
     if rows > len(imgs):
         rows = len(imgs)
     cols = math.ceil(len(imgs) / rows)
+    return rows, cols
+
+
+def image_grid(imgs, batch_size=1, rows=None):
+    rows, cols = get_grid_size(imgs, batch_size, rows=rows)
     params = script_callbacks.ImageGridLoopParams(imgs, cols, rows)
     script_callbacks.image_grid_callback(params)
     imgs = [i for i in imgs if i is not None] if imgs is not None else []
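The refactor above splits the row/column computation out of `image_grid` into a reusable `get_grid_size`, so callers can learn the grid shape without rendering it. A standalone simplification (it omits the `shared.opts.n_rows` and `batch_size` handling of the real helper) behaves like:

```python
import math

# Simplified standalone sketch of get_grid_size: pick a row count near the
# square root of the image count when none is given, clamp it to the count,
# then derive the column count needed to fit every image.
def get_grid_size(imgs, batch_size=1, rows=None):
    if rows is None:
        rows = round(math.sqrt(len(imgs)))
    rows = max(min(rows, len(imgs)), 1)
    cols = math.ceil(len(imgs) / rows)
    return rows, cols

print(get_grid_size(range(4)))  # (2, 2)
```

With the shape exposed separately, code such as the grid-save path at the end of `process_images_inner` can call `r, c = images.get_grid_size(...)` before deciding whether a grid is worth building.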
@@ -107,7 +107,7 @@ def process_batch(p, input_files, input_dir, output_dir, inpaint_mask_dir, args)
     shared.log.debug(f'Processed: images={len(batch_image_files)} memory={memory_stats()} batch')


-def img2img(id_task: str, mode: int,
+def img2img(id_task: str, state: str, mode: int,
             prompt, negative_prompt, prompt_styles,
             init_img,
             sketch,
@@ -251,6 +251,7 @@ def img2img(id_task: str, state: str, mode: int, ...):
     )
     p.scripts = modules.scripts.scripts_img2img
     p.script_args = args
+    p.state = state
     if mask:
         p.extra_generation_params["Mask blur"] = mask_blur
         p.extra_generation_params["Mask alpha"] = mask_alpha
@@ -3,7 +3,11 @@ import re
 import json


-debug = lambda *args, **kwargs: None  # pylint: disable=unnecessary-lambda-assignment
+if os.environ.get('SD_PASTE_DEBUG', None) is not None:
+    from modules.errors import log
+    debug = log.trace
+else:
+    debug = lambda *args, **kwargs: None  # pylint: disable=unnecessary-lambda-assignment

 re_size = re.compile(r"^(\d+)x(\d+)$")  # int x int
 re_param = re.compile(r'\s*([\w ]+):\s*("(?:\\"[^,]|\\"|\\|[^\"])+"|[^,]*)(?:,|$)')  # multi-word: value
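The `SD_PASTE_DEBUG` change above resolves the debug callable once at import time, so the hot parsing path pays no per-call environment check. The same pattern, self-contained (with `print` standing in for the project's `log.trace`):

```python
import os

# Choose the debug sink once, based on an env var, instead of branching on
# every call. `print` stands in for log.trace here.
if os.environ.get('SD_PASTE_DEBUG') is not None:
    def debug(*args, **kwargs):
        print('[paste]', *args)
else:
    debug = lambda *args, **kwargs: None  # no-op when debugging is off

debug('parsed params:', {'Size': '512x512'})  # prints only when the env var is set
```

The no-op branch keeps the call sites unconditional, which is cheaper and tidier than sprinkling `if DEBUG:` guards through the parser.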
@@ -272,9 +272,6 @@ def process_images_inner(p: StableDiffusionProcessing) -> Processed:
     if p.scripts is not None and isinstance(p.scripts, scripts.ScriptRunner):
         p.scripts.process(p)

-    def infotext(_inxex=0):  # dummy function overriden if there are iterations
-        return ''
-
     ema_scope_context = p.sd_model.ema_scope if not shared.native else nullcontext
     shared.state.job_count = p.n_iter
     with devices.inference_context(), ema_scope_context():
@@ -312,55 +309,53 @@ def process_images_inner(p: StableDiffusionProcessing) -> Processed:
             if p.scripts is not None and isinstance(p.scripts, scripts.ScriptRunner):
                 p.scripts.process_batch(p, batch_number=n, prompts=p.prompts, seeds=p.seeds, subseeds=p.subseeds)

-            x_samples_ddim = None
+            samples = None
             timer.process.record('init')
             if p.scripts is not None and isinstance(p.scripts, scripts.ScriptRunner):
-                x_samples_ddim = p.scripts.process_images(p)
-            if x_samples_ddim is None:
+                samples = p.scripts.process_images(p)
+            if samples is None:
                 if not shared.native:
                     from modules.processing_original import process_original
-                    x_samples_ddim = process_original(p)
+                    samples = process_original(p)
                 elif shared.native:
                     from modules.processing_diffusers import process_diffusers
-                    x_samples_ddim = process_diffusers(p)
+                    samples = process_diffusers(p)
                 else:
                     raise ValueError(f"Unknown backend {shared.backend}")
             timer.process.record('process')

             if not shared.opts.keep_incomplete and shared.state.interrupted:
-                x_samples_ddim = []
+                samples = []

             if not shared.native and (shared.cmd_opts.lowvram or shared.cmd_opts.medvram):
                 lowvram.send_everything_to_cpu()
                 devices.torch_gc()
             if p.scripts is not None and isinstance(p.scripts, scripts.ScriptRunner):
-                p.scripts.postprocess_batch(p, x_samples_ddim, batch_number=n)
+                p.scripts.postprocess_batch(p, samples, batch_number=n)
             if p.scripts is not None and isinstance(p.scripts, scripts.ScriptRunner):
                 p.prompts = p.all_prompts[n * p.batch_size:(n+1) * p.batch_size]
                 p.negative_prompts = p.all_negative_prompts[n * p.batch_size:(n+1) * p.batch_size]
-                batch_params = scripts.PostprocessBatchListArgs(list(x_samples_ddim))
+                batch_params = scripts.PostprocessBatchListArgs(list(samples))
                 p.scripts.postprocess_batch_list(p, batch_params, batch_number=n)
-                x_samples_ddim = batch_params.images
+                samples = batch_params.images

-            def infotext(index):  # pylint: disable=function-redefined
-                return create_infotext(p, p.prompts, p.seeds, p.subseeds, index=index, all_negative_prompts=p.negative_prompts)
-
-            for i, x_sample in enumerate(x_samples_ddim):
-                debug(f'Processing result: index={i+1}/{len(x_samples_ddim)} iteration={n+1}/{p.n_iter}')
+            for i, sample in enumerate(samples):
+                debug(f'Processing result: index={i+1}/{len(samples)} iteration={n+1}/{p.n_iter}')
                 p.batch_index = i
+                info = create_infotext(p, p.prompts, p.seeds, p.subseeds, index=i, all_negative_prompts=p.negative_prompts)
-                if type(x_sample) == Image.Image:
-                    image = x_sample
-                    x_sample = np.array(x_sample)
+                if type(sample) == Image.Image:
+                    image = sample
+                    sample = np.array(sample)
                 else:
-                    x_sample = validate_sample(x_sample)
-                    image = Image.fromarray(x_sample)
+                    sample = validate_sample(sample)
+                    image = Image.fromarray(sample)
                 if p.restore_faces:
                     if not p.do_not_save_samples and shared.opts.save_images_before_face_restoration:
-                        images.save_image(Image.fromarray(x_sample), path=p.outpath_samples, basename="", seed=p.seeds[i], prompt=p.prompts[i], extension=shared.opts.samples_format, info=infotext(i), p=p, suffix="-before-face-restore")
+                        images.save_image(Image.fromarray(sample), path=p.outpath_samples, basename="", seed=p.seeds[i], prompt=p.prompts[i], extension=shared.opts.samples_format, info=info, p=p, suffix="-before-face-restore")
                     p.ops.append('face')
-                    x_sample = face_restoration.restore_faces(x_sample, p)
-                    if x_sample is not None:
-                        image = Image.fromarray(x_sample)
+                    sample = face_restoration.restore_faces(sample, p)
+                    if sample is not None:
+                        image = Image.fromarray(sample)
                 if p.scripts is not None and isinstance(p.scripts, scripts.ScriptRunner):
                     pp = scripts.PostprocessImageArgs(image)
                     p.scripts.postprocess_image(p, pp)
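Beyond the `x_samples_ddim` → `samples` rename, the hunk above replaces the per-iteration `infotext(i)` closure with a single `info` string computed once per sample and reused by every save and annotation. The shape of that loop, reduced to a runnable sketch (`make_info` stands in for `create_infotext`; plain values stand in for images):

```python
# Reduced sketch of the refactored result loop: the metadata string is built
# once per sample and reused everywhere, instead of being re-derived by a
# closure at each save site.
def finalize(samples, make_info):
    outputs = []
    for i, sample in enumerate(samples):
        info = make_info(i)  # computed once ...
        outputs.append({'sample': sample, 'parameters': info})  # ... reused everywhere
    return outputs

out = finalize(['a', 'b'], lambda i: f'seed={i}')
print(out)  # [{'sample': 'a', 'parameters': 'seed=0'}, {'sample': 'b', 'parameters': 'seed=1'}]
```

Computing the infotext once also guarantees that the main image, the pre-face-restore copy, and the mask variants all carry identical metadata.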
@@ -370,7 +365,6 @@ def process_images_inner(p: StableDiffusionProcessing) -> Processed:
                 if not p.do_not_save_samples and shared.opts.save_images_before_color_correction:
                     orig = p.color_corrections
                     p.color_corrections = None
-                    info = infotext(i)
                     p.color_corrections = orig
                     image_without_cc = apply_overlay(image, p.paste_to, i, p.overlay_images)
                     images.save_image(image_without_cc, path=p.outpath_samples, basename="", seed=p.seeds[i], prompt=p.prompts[i], extension=shared.opts.samples_format, info=info, p=p, suffix="-before-color-correct")
@@ -378,12 +372,11 @@ def process_images_inner(p: StableDiffusionProcessing) -> Processed:
                     image = apply_color_correction(p.color_corrections[i], image)
                 if shared.opts.mask_apply_overlay:
                     image = apply_overlay(image, p.paste_to, i, p.overlay_images)
-                text = infotext(i)
-                infotexts.append(text)
-                image.info["parameters"] = text
+                infotexts.append(info)
+                image.info["parameters"] = info
                 output_images.append(image)
                 if shared.opts.samples_save and not p.do_not_save_samples and p.outpath_samples is not None:
-                    images.save_image(image, p.outpath_samples, "", p.seeds[i], p.prompts[i], shared.opts.samples_format, info=text, p=p)  # main save image
+                    images.save_image(image, p.outpath_samples, "", p.seeds[i], p.prompts[i], shared.opts.samples_format, info=info, p=p)  # main save image
                 if hasattr(p, 'mask_for_overlay') and p.mask_for_overlay and any([shared.opts.save_mask, shared.opts.save_mask_composite, shared.opts.return_mask, shared.opts.return_mask_composite]):
                     image_mask = p.mask_for_overlay.convert('RGB')
                     image1 = image.convert('RGBA').convert('RGBa')
|
@@ -391,15 +384,15 @@ def process_images_inner(p: StableDiffusionProcessing) -> Processed:
mask = images.resize_image(3, p.mask_for_overlay, image.width, image.height).convert('L')
image_mask_composite = Image.composite(image1, image2, mask).convert('RGBA')
if shared.opts.save_mask:
images.save_image(image_mask, p.outpath_samples, "", p.seeds[i], p.prompts[i], shared.opts.samples_format, info=text, p=p, suffix="-mask")
images.save_image(image_mask, p.outpath_samples, "", p.seeds[i], p.prompts[i], shared.opts.samples_format, info=info, p=p, suffix="-mask")
if shared.opts.save_mask_composite:
images.save_image(image_mask_composite, p.outpath_samples, "", p.seeds[i], p.prompts[i], shared.opts.samples_format, info=text, p=p, suffix="-mask-composite")
images.save_image(image_mask_composite, p.outpath_samples, "", p.seeds[i], p.prompts[i], shared.opts.samples_format, info=info, p=p, suffix="-mask-composite")
if shared.opts.return_mask:
output_images.append(image_mask)
if shared.opts.return_mask_composite:
output_images.append(image_mask_composite)
timer.process.record('post')
del x_samples_ddim
del samples
devices.torch_gc()

if hasattr(shared.sd_model, 'restore_pipeline') and shared.sd_model.restore_pipeline is not None:
@@ -413,15 +406,16 @@ def process_images_inner(p: StableDiffusionProcessing) -> Processed:
index_of_first_image = 0
if (shared.opts.return_grid or shared.opts.grid_save) and not p.do_not_save_grid and len(output_images) > 1:
if images.check_grid_size(output_images):
r, c = images.get_grid_size(output_images, p.batch_size)
grid = images.image_grid(output_images, p.batch_size)
grid_text = f'{r}x{c}'
grid_info = create_infotext(p, p.all_prompts, p.all_seeds, p.all_subseeds, index=0, grid=grid_text)
if shared.opts.return_grid:
text = infotext(-1)
infotexts.insert(0, text)
grid.info["parameters"] = text
infotexts.insert(0, grid_info)
output_images.insert(0, grid)
index_of_first_image = 1
if shared.opts.grid_save:
images.save_image(grid, p.outpath_grids, "", p.all_seeds[0], p.all_prompts[0], shared.opts.grid_format, info=infotext(-1), p=p, grid=True, suffix="-grid") # main save grid
images.save_image(grid, p.outpath_grids, "", p.all_seeds[0], p.all_prompts[0], shared.opts.grid_format, info=grid_info, p=p, grid=True, suffix="-grid") # main save grid

if shared.native:
from modules import ipadapter
@@ -445,7 +439,7 @@ def process_images_inner(p: StableDiffusionProcessing) -> Processed:
p,
images_list=output_images,
seed=p.all_seeds[0],
info=infotext(0),
info=infotexts[0] if len(infotexts) > 0 else '',
comments="\n".join(comments),
subseed=p.all_subseeds[0],
index_of_first_image=index_of_first_image,
@@ -21,6 +21,8 @@ class StableDiffusionProcessing:
The first set of parameters: sd_models -> do_not_reload_embeddings represent the minimum required to create a StableDiffusionProcessing
"""
def __init__(self, sd_model=None, outpath_samples=None, outpath_grids=None, prompt: str = "", styles: List[str] = None, seed: int = -1, subseed: int = -1, subseed_strength: float = 0, seed_resize_from_h: int = -1, seed_resize_from_w: int = -1, seed_enable_extras: bool = True, sampler_name: str = None, hr_sampler_name: str = None, batch_size: int = 1, n_iter: int = 1, steps: int = 50, cfg_scale: float = 7.0, image_cfg_scale: float = None, clip_skip: int = 1, width: int = 512, height: int = 512, full_quality: bool = True, restore_faces: bool = False, tiling: bool = False, hidiffusion: bool = False, do_not_save_samples: bool = False, do_not_save_grid: bool = False, extra_generation_params: Dict[Any, Any] = None, overlay_images: Any = None, negative_prompt: str = None, eta: float = None, do_not_reload_embeddings: bool = False, denoising_strength: float = 0, diffusers_guidance_rescale: float = 0.7, pag_scale: float = 0.0, pag_adaptive: float = 0.5, cfg_end: float = 1, resize_mode: int = 0, resize_name: str = 'None', resize_context: str = 'None', scale_by: float = 0, selected_scale_tab: int = 0, hdr_mode: int = 0, hdr_brightness: float = 0, hdr_color: float = 0, hdr_sharpen: float = 0, hdr_clamp: bool = False, hdr_boundary: float = 4.0, hdr_threshold: float = 0.95, hdr_maximize: bool = False, hdr_max_center: float = 0.6, hdr_max_boundry: float = 1.0, hdr_color_picker: str = None, hdr_tint_ratio: float = 0, override_settings: Dict[str, Any] = None, override_settings_restore_afterwards: bool = True, sampler_index: int = None, script_args: list = None): # pylint: disable=unused-argument
self.state: str = ''
self.skip = []
self.outpath_samples: str = outpath_samples
self.outpath_grids: str = outpath_grids
self.prompt: str = prompt

@@ -192,6 +194,7 @@ class StableDiffusionProcessingTxt2Img(StableDiffusionProcessing):
def __init__(self, enable_hr: bool = False, denoising_strength: float = 0.75, firstphase_width: int = 0, firstphase_height: int = 0, hr_scale: float = 2.0, hr_force: bool = False, hr_resize_mode: int = 0, hr_resize_context: str = 'None', hr_upscaler: str = None, hr_second_pass_steps: int = 0, hr_resize_x: int = 0, hr_resize_y: int = 0, refiner_steps: int = 5, refiner_start: float = 0, refiner_prompt: str = '', refiner_negative: str = '', **kwargs):
super().__init__(**kwargs)
self.reprocess = {}
self.enable_hr = enable_hr
self.denoising_strength = denoising_strength
self.hr_scale = hr_scale

@@ -220,6 +223,7 @@ class StableDiffusionProcessingTxt2Img(StableDiffusionProcessing):
self.scripts = None
self.script_args = []

def init(self, all_prompts=None, all_seeds=None, all_subseeds=None):
if shared.native:
shared.sd_model = sd_models.set_diffuser_pipe(self.sd_model, sd_models.DiffusersTaskType.TEXT_2_IMAGE)
@@ -5,68 +5,52 @@ import numpy as np
import torch
import torchvision.transforms.functional as TF
from modules import shared, devices, processing, sd_models, errors, sd_hijack_hypertile, processing_vae, sd_models_compile, hidiffusion, timer
from modules.processing_helpers import resize_hires, calculate_base_steps, calculate_hires_steps, calculate_refiner_steps, save_intermediate, update_sampler
from modules.processing_helpers import resize_hires, calculate_base_steps, calculate_hires_steps, calculate_refiner_steps, save_intermediate, update_sampler, is_txt2img, is_refiner_enabled
from modules.processing_args import set_pipeline_args
from modules.onnx_impl import preprocess_pipeline as preprocess_onnx_pipeline, check_parameters_changed as olive_check_parameters_changed

debug = shared.log.trace if os.environ.get('SD_DIFFUSERS_DEBUG', None) is not None else lambda *args, **kwargs: None
debug('Trace: DIFFUSERS')
last_p = None

def process_diffusers(p: processing.StableDiffusionProcessing):
debug(f'Process diffusers args: {vars(p)}')
orig_pipeline = shared.sd_model
results = []
def restore_state(p: processing.StableDiffusionProcessing):
if p.state in ['reprocess_refine', 'reprocess_face']:
# validate
if last_p is None:
shared.log.warning(f'Restore state: op={p.state} last state missing')
return p
if p.__class__ != last_p.__class__:
shared.log.warning(f'Restore state: op={p.state} last state is different type')
return p
if processing_vae.last_latent is None:
shared.log.warning(f'Restore state: op={p.state} last latents missing')
return p
state = p.state

def is_txt2img():
return sd_models.get_diffusers_task(shared.sd_model) == sd_models.DiffusersTaskType.TEXT_2_IMAGE
# set ops
if state == 'reprocess_refine':
# use new upscale values
hr_scale, hr_upscaler, hr_resize_mode, hr_resize_context, hr_resize_x, hr_resize_y, hr_upscale_to_x, hr_upscale_to_y = p.hr_scale, p.hr_upscaler, p.hr_resize_mode, p.hr_resize_context, p.hr_resize_x, p.hr_resize_y, p.hr_upscale_to_x, p.hr_upscale_to_y # txt2img
height, width, scale_by, resize_mode, resize_name, resize_context = p.height, p.width, p.scale_by, p.resize_mode, p.resize_name, p.resize_context # img2img
p = last_p
p.skip = ['encode', 'base']
p.state = state
p.enable_hr = True
p.hr_force = True
p.hr_scale, p.hr_upscaler, p.hr_resize_mode, p.hr_resize_context, p.hr_resize_x, p.hr_resize_y, p.hr_upscale_to_x, p.hr_upscale_to_y = hr_scale, hr_upscaler, hr_resize_mode, hr_resize_context, hr_resize_x, hr_resize_y, hr_upscale_to_x, hr_upscale_to_y
p.height, p.width, p.scale_by, p.resize_mode, p.resize_name, p.resize_context = height, width, scale_by, resize_mode, resize_name, resize_context
p.init_images = None
if state == 'reprocess_face':
p.skip = ['encode', 'base', 'hires']
p.restore_faces = True
shared.log.info(f'Restore state: op={p.state} skip={p.skip}')
return p
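The validation gate at the top of `restore_state` reduces to three checks: a reprocess request is only honored when a previous run of the same processing type left latents behind. A minimal standalone sketch, with the helper name and tuple return being illustrative rather than SD.Next's actual API:

```python
def can_restore(last_p, current_p, last_latent):
    # mirrors the three checks in restore_state: a previous run must exist,
    # be of the same processing class, and have cached latents to replay
    if last_p is None:
        return False, 'last state missing'
    if last_p.__class__ != current_p.__class__:
        return False, 'last state is different type'
    if last_latent is None:
        return False, 'last latents missing'
    return True, ''

# same class with cached latents: restorable
ok, reason = can_restore(last_p=1, current_p=2, last_latent=object())
```

When any check fails, the real code logs a warning and returns the current `p` unchanged, so a bad reprocess request degrades into a normal run instead of raising.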
def is_refiner_enabled():
return p.enable_hr and p.refiner_steps > 0 and p.refiner_start > 0 and p.refiner_start < 1 and shared.sd_refiner is not None

def update_pipeline(sd_model, p: processing.StableDiffusionProcessing):
if sd_models.get_diffusers_task(sd_model) == sd_models.DiffusersTaskType.INPAINTING and getattr(p, 'image_mask', None) is None and p.task_args.get('image_mask', None) is None and getattr(p, 'mask', None) is None:
shared.log.warning('Processing: mode=inpaint mask=None')
sd_model = sd_models.set_diffuser_pipe(sd_model, sd_models.DiffusersTaskType.IMAGE_2_IMAGE)
if shared.opts.cuda_compile_backend == "olive-ai":
sd_model = olive_check_parameters_changed(p, is_refiner_enabled())
if sd_model.__class__.__name__ == "OnnxRawPipeline":
sd_model = preprocess_onnx_pipeline(p)
nonlocal orig_pipeline
orig_pipeline = sd_model # processed ONNX pipeline should not be replaced with original pipeline.
if getattr(sd_model, "current_attn_name", None) != shared.opts.cross_attention_optimization:
shared.log.info(f"Setting attention optimization: {shared.opts.cross_attention_optimization}")
sd_models.set_diffusers_attention(sd_model)
return sd_model

# sanitize init_images
if hasattr(p, 'init_images') and getattr(p, 'init_images', None) is None:
del p.init_images
if hasattr(p, 'init_images') and not isinstance(getattr(p, 'init_images', []), list):
p.init_images = [p.init_images]
if len(getattr(p, 'init_images', [])) > 0:
while len(p.init_images) < len(p.prompts):
p.init_images.append(p.init_images[-1])

if shared.state.interrupted or shared.state.skipped:
shared.sd_model = orig_pipeline
return results

# pipeline type is set earlier in processing, but check for sanity
is_control = getattr(p, 'is_control', False) is True
has_images = len(getattr(p, 'init_images', [])) > 0
if sd_models.get_diffusers_task(shared.sd_model) != sd_models.DiffusersTaskType.TEXT_2_IMAGE and not has_images and not is_control:
shared.sd_model = sd_models.set_diffuser_pipe(shared.sd_model, sd_models.DiffusersTaskType.TEXT_2_IMAGE) # reset pipeline
if hasattr(shared.sd_model, 'unet') and hasattr(shared.sd_model.unet, 'config') and hasattr(shared.sd_model.unet.config, 'in_channels') and shared.sd_model.unet.config.in_channels == 9 and not is_control:
shared.sd_model = sd_models.set_diffuser_pipe(shared.sd_model, sd_models.DiffusersTaskType.INPAINTING) # force pipeline
if len(getattr(p, 'init_images', [])) == 0:
p.init_images = [TF.to_pil_image(torch.rand((3, getattr(p, 'height', 512), getattr(p, 'width', 512))))]

sd_models.move_model(shared.sd_model, devices.device)
sd_models_compile.openvino_recompile_model(p, hires=False, refiner=False) # recompile if a parameter changes

use_refiner_start = is_txt2img() and is_refiner_enabled() and not p.is_hr_pass and p.refiner_start > 0 and p.refiner_start < 1
def process_base(p: processing.StableDiffusionProcessing):
use_refiner_start = is_txt2img() and is_refiner_enabled(p) and not p.is_hr_pass and p.refiner_start > 0 and p.refiner_start < 1
use_denoise_start = not is_txt2img() and p.refiner_start > 0 and p.refiner_start < 1

shared.sd_model = update_pipeline(shared.sd_model, p)
@@ -141,14 +125,22 @@ def process_diffusers(p: processing.StableDiffusionProcessing):
p.extra_generation_params['Embeddings'] = ', '.join(shared.sd_model.embedding_db.embeddings_used)

shared.state.nextjob()
if shared.state.interrupted or shared.state.skipped:
shared.sd_model = orig_pipeline
return results
return output


def process_hires(p: processing.StableDiffusionProcessing, output):
# optional second pass
if p.enable_hr:
p.is_hr_pass = True
p.init_hr(p.hr_scale, p.hr_upscaler, force=p.hr_force)
if hasattr(p, 'init_hr'):
p.init_hr(p.hr_scale, p.hr_upscaler, force=p.hr_force)
else: # fake hires for img2img
p.hr_scale = p.scale_by
p.hr_upscaler = p.resize_name
p.hr_resize_mode = p.resize_mode
p.hr_resize_context = p.resize_context
p.hr_upscale_to_x = p.width
p.hr_upscale_to_y = p.height
prev_job = shared.state.job

# hires runs on original pipeline
@@ -156,7 +148,7 @@ def process_diffusers(p: processing.StableDiffusionProcessing):
shared.sd_model.restore_pipeline()

# upscale
if hasattr(p, 'height') and hasattr(p, 'width') and p.hr_resize_mode >0 and (p.hr_upscaler != 'None' or p.hr_resize_mode == 5):
if hasattr(p, 'height') and hasattr(p, 'width') and p.hr_resize_mode > 0 and (p.hr_upscaler != 'None' or p.hr_resize_mode == 5):
shared.log.info(f'Upscale: mode={p.hr_resize_mode} upscaler="{p.hr_upscaler}" context="{p.hr_resize_context}" resize={p.hr_resize_x}x{p.hr_resize_y} upscale={p.hr_upscale_to_x}x{p.hr_upscale_to_y}')
p.ops.append('upscale')
if shared.opts.samples_save and not p.do_not_save_samples and shared.opts.save_images_before_highres_fix and hasattr(shared.sd_model, 'vae'):

@@ -225,9 +217,12 @@ def process_diffusers(p: processing.StableDiffusionProcessing):
shared.state.nextjob()
p.is_hr_pass = False
timer.process.record('hires')
return output


def process_refine(p: processing.StableDiffusionProcessing, output):
# optional refiner pass or decode
if is_refiner_enabled():
if is_refiner_enabled(p):
prev_job = shared.state.job
shared.state.job = 'Refine'
shared.state.job_count +=1
@@ -238,7 +233,7 @@ def process_diffusers(p: processing.StableDiffusionProcessing):
sd_models.move_model(shared.sd_model, devices.cpu)
if shared.state.interrupted or shared.state.skipped:
shared.sd_model = orig_pipeline
return results
return output
if shared.opts.diffusers_offload_mode == "balanced":
shared.sd_model = sd_models.apply_balanced_offload(shared.sd_model)
if shared.opts.diffusers_move_refiner:

@@ -282,17 +277,19 @@ def process_diffusers(p: processing.StableDiffusionProcessing):
try:
if 'requires_aesthetics_score' in shared.sd_refiner.config: # sdxl-model needs false and sdxl-refiner needs true
shared.sd_refiner.register_to_config(requires_aesthetics_score = getattr(shared.sd_refiner, 'tokenizer', None) is None)
refiner_output = shared.sd_refiner(**refiner_args) # pylint: disable=not-callable
if isinstance(refiner_output, dict):
refiner_output = SimpleNamespace(**refiner_output)
output = shared.sd_refiner(**refiner_args) # pylint: disable=not-callable
if isinstance(output, dict):
output = SimpleNamespace(**output)
sd_models_compile.openvino_post_compile(op="refiner")
except AssertionError as e:
shared.log.info(e)

""" # TODO decode using refiner
if not shared.state.interrupted and not shared.state.skipped:
refiner_images = processing_vae.vae_decode(latents=refiner_output.images, model=shared.sd_refiner, full_quality=True, width=max(p.width, p.hr_upscale_to_x), height=max(p.height, p.hr_upscale_to_y))
for refiner_image in refiner_images:
results.append(refiner_image)
"""

if shared.opts.diffusers_offload_mode == "balanced":
shared.sd_refiner = sd_models.apply_balanced_offload(shared.sd_refiner)
@@ -303,30 +300,113 @@ def process_diffusers(p: processing.StableDiffusionProcessing):
shared.state.nextjob()
p.is_refiner_pass = False
timer.process.record('refine')
return output

# final decode since there is no refiner
if not is_refiner_enabled():
if output is not None:
if not hasattr(output, 'images') and hasattr(output, 'frames'):
shared.log.debug(f'Generated: frames={len(output.frames[0])}')
output.images = output.frames[0]
if hasattr(shared.sd_model, "vae") and output.images is not None and len(output.images) > 0:
if p.hr_resize_mode > 0 and (p.hr_upscaler != 'None' or p.hr_resize_mode == 5):
width = max(getattr(p, 'width', 0), getattr(p, 'hr_upscale_to_x', 0))
height = max(getattr(p, 'height', 0), getattr(p, 'hr_upscale_to_y', 0))
else:
width = getattr(p, 'width', 0)
height = getattr(p, 'height', 0)
results = processing_vae.vae_decode(latents=output.images, model=shared.sd_model, full_quality=p.full_quality, width=width, height=height)
elif hasattr(output, 'images'):
results = output.images

def process_decode(p: processing.StableDiffusionProcessing, output):
if output is not None:
if not hasattr(output, 'images') and hasattr(output, 'frames'):
shared.log.debug(f'Generated: frames={len(output.frames[0])}')
output.images = output.frames[0]
if hasattr(shared.sd_model, "vae") and output.images is not None and len(output.images) > 0:
if p.hr_resize_mode > 0 and (p.hr_upscaler != 'None' or p.hr_resize_mode == 5):
width = max(getattr(p, 'width', 0), getattr(p, 'hr_upscale_to_x', 0))
height = max(getattr(p, 'height', 0), getattr(p, 'hr_upscale_to_y', 0))
else:
shared.log.warning('Processing returned no results')
results = []
width = getattr(p, 'width', 0)
height = getattr(p, 'height', 0)
results = processing_vae.vae_decode(
latents = output.images,
model = shared.sd_model if not is_refiner_enabled(p) else shared.sd_refiner,
full_quality = p.full_quality,
width = width,
height = height,
save = p.state == '',
)
elif hasattr(output, 'images'):
results = output.images
else:
shared.log.warning('Processing returned no results')
results = []
else:
shared.log.warning('Processing returned no results')
results = []
return results
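The width/height selection inside `process_decode` reduces to a small pure function: decode at the upscaled resolution only when an upscale pass actually ran. A hedged sketch of just that branch (the function name is illustrative; the conditions follow the diff, including the assumption that resize mode 5 means a latent upscale whose target sizes are already populated):

```python
def decode_size(p_width, p_height, hr_upscale_to_x, hr_upscale_to_y, hr_resize_mode, hr_upscaler):
    # when an upscale pass actually ran, decode at the upscaled resolution;
    # otherwise fall back to the base generation size
    if hr_resize_mode > 0 and (hr_upscaler != 'None' or hr_resize_mode == 5):
        return max(p_width, hr_upscale_to_x), max(p_height, hr_upscale_to_y)
    return p_width, p_height
```

The `max()` guards against partially populated upscale targets, so a zero or missing `hr_upscale_to_*` can never shrink the decode below the base size.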
orig_pipeline = shared.sd_model
def update_pipeline(sd_model, p: processing.StableDiffusionProcessing):
if sd_models.get_diffusers_task(sd_model) == sd_models.DiffusersTaskType.INPAINTING and getattr(p, 'image_mask', None) is None and p.task_args.get('image_mask', None) is None and getattr(p, 'mask', None) is None:
shared.log.warning('Processing: mode=inpaint mask=None')
sd_model = sd_models.set_diffuser_pipe(sd_model, sd_models.DiffusersTaskType.IMAGE_2_IMAGE)
if shared.opts.cuda_compile_backend == "olive-ai":
sd_model = olive_check_parameters_changed(p, is_refiner_enabled(p))
if sd_model.__class__.__name__ == "OnnxRawPipeline":
sd_model = preprocess_onnx_pipeline(p)
global orig_pipeline # pylint: disable=global-statement
orig_pipeline = sd_model # processed ONNX pipeline should not be replaced with original pipeline.
if getattr(sd_model, "current_attn_name", None) != shared.opts.cross_attention_optimization:
shared.log.info(f"Setting attention optimization: {shared.opts.cross_attention_optimization}")
sd_models.set_diffusers_attention(sd_model)
return sd_model


def process_diffusers(p: processing.StableDiffusionProcessing):
debug(f'Process diffusers args: {vars(p)}')
results = []
p = restore_state(p)

if shared.state.interrupted or shared.state.skipped:
shared.sd_model = orig_pipeline
return results

# sanitize init_images
if hasattr(p, 'init_images') and getattr(p, 'init_images', None) is None:
del p.init_images
if hasattr(p, 'init_images') and not isinstance(getattr(p, 'init_images', []), list):
p.init_images = [p.init_images]
if len(getattr(p, 'init_images', [])) > 0:
while len(p.init_images) < len(p.prompts):
p.init_images.append(p.init_images[-1])
# pipeline type is set earlier in processing, but check for sanity
is_control = getattr(p, 'is_control', False) is True
has_images = len(getattr(p, 'init_images', [])) > 0
if sd_models.get_diffusers_task(shared.sd_model) != sd_models.DiffusersTaskType.TEXT_2_IMAGE and not has_images and not is_control:
shared.sd_model = sd_models.set_diffuser_pipe(shared.sd_model, sd_models.DiffusersTaskType.TEXT_2_IMAGE) # reset pipeline
if hasattr(shared.sd_model, 'unet') and hasattr(shared.sd_model.unet, 'config') and hasattr(shared.sd_model.unet.config, 'in_channels') and shared.sd_model.unet.config.in_channels == 9 and not is_control:
shared.sd_model = sd_models.set_diffuser_pipe(shared.sd_model, sd_models.DiffusersTaskType.INPAINTING) # force pipeline
if len(getattr(p, 'init_images', [])) == 0:
p.init_images = [TF.to_pil_image(torch.rand((3, getattr(p, 'height', 512), getattr(p, 'width', 512))))]

sd_models.move_model(shared.sd_model, devices.device)
sd_models_compile.openvino_recompile_model(p, hires=False, refiner=False) # recompile if a parameter changes

if 'base' not in p.skip:
output = process_base(p)
else:
output = SimpleNamespace(images=processing_vae.last_latent)

if shared.state.interrupted or shared.state.skipped:
shared.sd_model = orig_pipeline
return results

if 'hires' not in p.skip:
output = process_hires(p, output)
if shared.state.interrupted or shared.state.skipped:
shared.sd_model = orig_pipeline
return results

if 'refine' not in p.skip:
output = process_refine(p, output)
if shared.state.interrupted or shared.state.skipped:
shared.sd_model = orig_pipeline
return results

results = process_decode(p, output)

timer.process.record('decode')
shared.sd_model = orig_pipeline
if p.state == '':
global last_p # pylint: disable=global-statement
last_p = p
return results
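The refactor above replaces one monolithic function with base, hires, refine, and decode stages, each gated by `p.skip` and each threading the previous stage's output forward. A minimal standalone sketch of that dispatch pattern (stage names match the diff; everything else is illustrative):

```python
def run_stages(stages, skip, output=None):
    # run named stages in order, passing output along, and skip any stage
    # listed in `skip` (the role restore_state's p.skip plays in the diff)
    for name, fn in stages:
        if name in skip:
            continue
        output = fn(output)
    return output

stages = [
    ('base', lambda _: ['latent']),
    ('hires', lambda o: o + ['hires']),
    ('refine', lambda o: o + ['refine']),
    ('decode', lambda o: o + ['decode']),
]

full_run = run_stages(stages, skip=[])
# reprocess_face path: base and hires are skipped, the cached latent is reused
face_only = run_stages(stages, skip=['base', 'hires'], output=['latent'])
```

This is what makes reprocess cheap: skipping `base` substitutes the cached `processing_vae.last_latent` for a fresh denoising run, while the remaining stages execute unchanged.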

@@ -17,6 +17,14 @@ debug_steps = shared.log.trace if os.environ.get('SD_STEPS_DEBUG', None) is not
debug_steps('Trace: STEPS')

def is_txt2img():
return sd_models.get_diffusers_task(shared.sd_model) == sd_models.DiffusersTaskType.TEXT_2_IMAGE

def is_refiner_enabled(p):
return p.enable_hr and p.refiner_steps > 0 and p.refiner_start > 0 and p.refiner_start < 1 and shared.sd_refiner is not None
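The refiner gate promoted into `processing_helpers` combines four conditions; a reduced standalone sketch (the function below takes plain values instead of `p` and `shared`, so it is an illustration, not the actual helper):

```python
def refiner_enabled(enable_hr, refiner_steps, refiner_start, refiner_loaded):
    # the refiner only runs when hires is on, it has steps to run, a refiner
    # model is loaded, and the handoff point lies strictly inside the schedule
    return enable_hr and refiner_steps > 0 and 0 < refiner_start < 1 and refiner_loaded
```

Note that `refiner_start` at exactly 0 or 1 disables the refiner: the handoff must split the schedule into two non-empty parts.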
def setup_color_correction(image):
debug("Calibrating color correction")
correction_target = cv2.cvtColor(np.asarray(image.copy()), cv2.COLOR_RGB2LAB)

@@ -435,8 +443,7 @@ def fix_prompts(prompts, negative_prompts, prompts_2, negative_prompts_2):
def calculate_base_steps(p, use_denoise_start, use_refiner_start):
if len(getattr(p, 'timesteps', [])) > 0:
return None
is_txt2img = sd_models.get_diffusers_task(shared.sd_model) == sd_models.DiffusersTaskType.TEXT_2_IMAGE
if not is_txt2img:
if not is_txt2img():
if use_denoise_start and shared.sd_model_type == 'sdxl':
steps = p.steps // (1 - p.refiner_start)
elif p.denoising_strength > 0:
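The step arithmetic in `calculate_base_steps` is easy to misread: `p.steps // (1 - p.refiner_start)` stretches the base schedule so that the portion executed before the refiner handoff still amounts to the requested step count. A hedged numeric sketch (illustrative, not the SD.Next helper, and it assumes `refiner_start < 1` to avoid division by zero):

```python
def base_steps(steps, refiner_start):
    # if the refiner takes over at refiner_start, the base model only executes
    # a (1 - refiner_start) fraction of the schedule, so scale the total up
    return int(steps // (1 - refiner_start))
```

For example, with 20 requested steps and a handoff at 0.5, the base pass schedules 40 steps so that 20 of them run before the refiner takes over.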

@@ -10,7 +10,7 @@ else:
sd_hijack = None

def create_infotext(p: StableDiffusionProcessing, all_prompts=None, all_seeds=None, all_subseeds=None, comments=None, iteration=0, position_in_batch=0, index=None, all_negative_prompts=None):
def create_infotext(p: StableDiffusionProcessing, all_prompts=None, all_seeds=None, all_subseeds=None, comments=None, iteration=0, position_in_batch=0, index=None, all_negative_prompts=None, grid=None):
if p is None:
shared.log.warning('Processing info: no data')
return ''

@@ -45,7 +45,6 @@ def create_infotext(p: StableDiffusionProcessing, all_prompts=None, all_seeds=No
"CFG scale": p.cfg_scale,
"Size": f"{p.width}x{p.height}" if hasattr(p, 'width') and hasattr(p, 'height') else None,
"Batch": f'{p.n_iter}x{p.batch_size}' if p.n_iter > 1 or p.batch_size > 1 else None,
"Index": f'{p.iteration + 1}x{index + 1}' if (p.n_iter > 1 or p.batch_size > 1) and index >= 0 else None,
"Parser": shared.opts.prompt_attention,
"Model": None if (not shared.opts.add_model_name_to_info) or (not shared.sd_model.sd_checkpoint_info.model_name) else shared.sd_model.sd_checkpoint_info.model_name.replace(',', '').replace(':', ''),
"Model hash": getattr(p, 'sd_model_hash', None if (not shared.opts.add_model_hash_to_info) or (not shared.sd_model.sd_model_hash) else shared.sd_model.sd_model_hash),

@@ -64,6 +63,10 @@ def create_infotext(p: StableDiffusionProcessing, all_prompts=None, all_seeds=No
"Operations": '; '.join(ops).replace('"', '') if len(p.ops) > 0 else 'none',
}
# native
if grid is None and (p.n_iter > 1 or p.batch_size > 1) and index >= 0:
args['Index'] = f'{p.iteration + 1}x{index + 1}'
if grid is not None:
args['Grid'] = grid
if shared.native:
args['Pipeline'] = shared.sd_model.__class__.__name__
args['T5'] = None if (not shared.opts.add_model_name_to_info or shared.opts.sd_text_encoder is None or shared.opts.sd_text_encoder == 'None') else shared.opts.sd_text_encoder
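The new `grid` parameter changes which labels `create_infotext` emits: per-image infotexts get an `Index`, while a grid infotext gets a `Grid` size instead. A reduced sketch of just that branch (dict keys follow the diff; the function itself is illustrative):

```python
def batch_labels(n_iter, batch_size, iteration, index, grid=None):
    args = {}
    # per-image infotexts get an Index only in multi-image runs;
    # the grid infotext replaces it with the grid dimensions
    if grid is None and (n_iter > 1 or batch_size > 1) and index >= 0:
        args['Index'] = f'{iteration + 1}x{index + 1}'
    if grid is not None:
        args['Grid'] = grid
    return args
```

Keeping the two keys mutually exclusive means a saved grid never carries a misleading per-image index in its metadata.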

@@ -140,7 +140,7 @@ def taesd_vae_encode(image):
return encoded

def vae_decode(latents, model, output_type='np', full_quality=True, width=None, height=None):
def vae_decode(latents, model, output_type='np', full_quality=True, width=None, height=None, save=True):
global last_latent # pylint: disable=global-statement
t0 = time.time()
if latents is None or not torch.is_tensor(latents): # already decoded

@@ -163,7 +163,8 @@ def vae_decode(latents, model, output_type='np', full_quality=True, width=None,
latents = latents.unsqueeze(0)
if latents.shape[0] == 4 and latents.shape[1] != 4: # likely animatediff latent
latents = latents.permute(1, 0, 2, 3)
last_latent = latents.clone().detach()
if save:
last_latent = latents.clone().detach()

if latents.shape[-1] <= 4: # not a latent, likely an image
decoded = latents.float().cpu().numpy()

@@ -8,7 +8,7 @@ debug = shared.log.trace if os.environ.get('SD_PROCESS_DEBUG', None) is not None
debug('Trace: PROCESS')

def txt2img(id_task,
def txt2img(id_task, state,
prompt, negative_prompt, prompt_styles,
steps, sampler_index, hr_sampler_index,
full_quality, restore_faces, tiling, hidiffusion,

@@ -88,6 +88,7 @@ def txt2img(id_task,
)
p.scripts = scripts.scripts_txt2img
p.script_args = args
p.state = state
processed = scripts.scripts_txt2img.run(p, *args)
if processed is None:
processed = processing.process_images(p)

@@ -42,7 +42,7 @@ def infotext_to_html(text):
negative = res.get('Negative prompt', '')
res.pop('Prompt', None)
res.pop('Negative prompt', None)
params = [f'{k}: {v}' for k, v in res.items() if v is not None and 'size-' not in k.lower()]
params = [f'{k}: {v}' for k, v in res.items() if v is not None and not k.endswith('-1') and not k.endswith('-2')]
params = '| '.join(params) if len(params) > 0 else ''
code = ''
if len(prompt) > 0:

@@ -59,7 +59,7 @@ def get_units(*values):
break

def generate_click(job_id: str, active_tab: str, *args):
def generate_click(job_id: str, state: str, active_tab: str, *args):
while helpers.busy:
time.sleep(0.01)
from modules.control.run import control_run

@@ -71,7 +71,7 @@ def generate_click(job_id: str, active_tab: str, *args):
shared.mem_mon.reset()
progress.start_task(job_id)
try:
for results in control_run(units, helpers.input_source, helpers.input_init, helpers.input_mask, active_tab, True, *args):
for results in control_run(state, units, helpers.input_source, helpers.input_init, helpers.input_mask, active_tab, True, *args):
progress.record_results(job_id, results)
yield return_controls(results)
except Exception as e:
@ -103,6 +103,7 @@ def create_ui(_blocks: gr.Blocks=None):
|
|||
with gr.Row(elem_id='control_settings'):
|
||||
|
||||
full_quality, restore_faces, tiling, hidiffusion = ui_sections.create_options('control')
|
||||
state = gr.Textbox(value='', visible=False)
|
||||
|
||||
with gr.Accordion(open=False, label="Input", elem_id="control_input", elem_classes=["small-accordion"]):
|
||||
with gr.Row():
|
||||
|
|
@ -503,7 +504,6 @@ def create_ui(_blocks: gr.Blocks=None):
|
|||
btn_negative_counter.click(fn=call_queue.wrap_queued_call(ui_common.update_token_counter), inputs=[negative, steps], outputs=[negative_counter])
|
||||
btn_interrogate_clip.click(fn=helpers.interrogate_clip, inputs=[], outputs=[prompt])
|
||||
btn_interrogate_booru.click(fn=helpers.interrogate_booru, inputs=[], outputs=[prompt])
|
||||
btn_reprocess.click(fn=processing_vae.reprocess, inputs=[output_gallery], outputs=[output_gallery])
|
||||
|
||||
select_fields = [input_mode, input_image, init_image, input_type, input_resize, input_inpaint, input_video, input_batch, input_folder]
|
||||
select_output = [output_tabs, preview_process, result_txt]
|
||||
|
|
@ -517,6 +517,7 @@ def create_ui(_blocks: gr.Blocks=None):
|
|||
)
|
||||
|
||||
prompt.submit(**select_dict)
|
||||
negative.submit(**select_dict)
|
||||
btn_generate.click(**select_dict)
|
||||
for ctrl in [input_image, input_resize, input_video, input_batch, input_folder, init_image, init_video, init_batch, init_folder, tab_image, tab_video, tab_batch, tab_folder, tab_image_init, tab_video_init, tab_batch_init, tab_folder_init]:
|
||||
if hasattr(ctrl, 'change'):
|
||||
|
|
@@ -527,7 +528,7 @@ def create_ui(_blocks: gr.Blocks=None):
if hasattr(ctrl, 'upload'):
ctrl.upload(**select_dict)
-tabs_state = gr.Text(value='none', visible=False)
+tabs_state = gr.Textbox(value='none', visible=False)
input_fields = [
input_type,
prompt, negative, styles,
@@ -553,13 +554,18 @@ def create_ui(_blocks: gr.Blocks=None):
control_dict = dict(
fn=generate_click,
_js="submit_control",
-inputs=[tabs_state, tabs_state] + input_fields + input_script_args,
+inputs=[tabs_state, state, tabs_state] + input_fields + input_script_args,
outputs=output_fields,
show_progress=True,
)
prompt.submit(**control_dict)
negative.submit(**control_dict)
btn_generate.click(**control_dict)
+btn_reprocess[1].click(fn=processing_vae.reprocess, inputs=[output_gallery], outputs=[output_gallery]) # full-decode
+btn_reprocess[2].click(**control_dict) # hires-refine
+btn_reprocess[3].click(**control_dict) # face-restore
paste_fields = [
# prompt
(prompt, "Prompt"),
@@ -66,6 +66,7 @@ def create_ui():
with gr.Tabs(elem_id="mode_img2img"):
img2img_selected_tab = gr.State(0) # pylint: disable=abstract-class-instantiated
+state = gr.Textbox(value='', visible=False)
with gr.TabItem('Image', id='img2img', elem_id="img2img_img2img_tab") as tab_img2img:
init_img = gr.Image(label="Image for img2img", elem_id="img2img_image", show_label=False, source="upload", interactive=True, type="pil", tool="editor", image_mode="RGBA", height=512)
interrogate_clip, interrogate_booru = ui_sections.create_interrogate_buttons('img2img')
@@ -159,13 +160,11 @@ def create_ui():
ui_common.connect_reuse_seed(seed, reuse_seed, img2img_generation_info, is_subseed=False)
ui_common.connect_reuse_seed(subseed, reuse_subseed, img2img_generation_info, is_subseed=True, subseed_strength=subseed_strength)
-img2img_reprocess.click(fn=processing_vae.reprocess, inputs=[img2img_gallery], outputs=[img2img_gallery])
img2img_prompt_img.change(fn=modules.images.image_data, inputs=[img2img_prompt_img], outputs=[img2img_prompt, img2img_prompt_img])
dummy_component1 = gr.Textbox(visible=False, value='dummy')
dummy_component2 = gr.Number(visible=False, value=0)
img2img_args = [
-dummy_component1, dummy_component2,
+dummy_component1, state, dummy_component2,
img2img_prompt, img2img_negative_prompt, img2img_prompt_styles,
init_img,
sketch,
@@ -210,8 +209,13 @@ def create_ui():
img2img_prompt.submit(**img2img_dict)
img2img_negative_prompt.submit(**img2img_dict)
img2img_submit.click(**img2img_dict)
dummy_component = gr.Textbox(visible=False, value='dummy')
+img2img_reprocess[1].click(fn=processing_vae.reprocess, inputs=[img2img_gallery], outputs=[img2img_gallery]) # full-decode
+img2img_reprocess[2].click(**img2img_dict) # hires-refine
+img2img_reprocess[3].click(**img2img_dict) # face-restore
interrogate_args = dict(
_js="get_img2img_tab_index",
inputs=[
@@ -16,7 +16,7 @@ def create_toprow(is_img2img: bool = False, id_part: str = None):
if id_part is None:
id_part = "img2img" if is_img2img else "txt2img"
with gr.Row(elem_id=f"{id_part}_toprow", variant="compact"):
-with gr.Column(elem_id=f"{id_part}_prompt_container", scale=5):
+with gr.Column(elem_id=f"{id_part}_prompt_container", scale=4):
with gr.Row():
with gr.Column(scale=80):
with gr.Row():
@@ -27,8 +27,12 @@ def create_toprow(is_img2img: bool = False, id_part: str = None):
negative_prompt = gr.Textbox(elem_id=f"{id_part}_neg_prompt", label="Negative prompt", show_label=False, lines=3, placeholder="Negative prompt", elem_classes=["prompt"])
with gr.Column(scale=1, elem_id=f"{id_part}_actions_column"):
with gr.Row(elem_id=f"{id_part}_generate_box"):
+reprocess = []
submit = gr.Button('Generate', elem_id=f"{id_part}_generate", variant='primary')
-reprocess = gr.Button('Reprocess', elem_id=f"{id_part}_reprocess", variant='secondary', visible=False)
+reprocess.append(gr.Button('Reprocess', elem_id=f"{id_part}_reprocess", variant='primary', visible=True))
+reprocess.append(gr.Button('Reprocess decode', elem_id=f"{id_part}_reprocess_decode", variant='primary', visible=False))
+reprocess.append(gr.Button('Reprocess refine', elem_id=f"{id_part}_reprocess_refine", variant='primary', visible=False))
+reprocess.append(gr.Button('Reprocess face', elem_id=f"{id_part}_reprocess_face", variant='primary', visible=False))
with gr.Row(elem_id=f"{id_part}_generate_line2"):
interrupt = gr.Button('Stop', elem_id=f"{id_part}_interrupt")
interrupt.click(fn=lambda: shared.state.interrupt(), _js="requestInterrupt", inputs=[], outputs=[])
@@ -1,11 +1,10 @@
import gradio as gr
from modules.call_queue import wrap_gradio_gpu_call, wrap_queued_call
-from modules import timer, shared, ui_common, ui_sections, generation_parameters_copypaste, processing_vae
+from modules import timer, shared, ui_common, ui_sections, generation_parameters_copypaste, processing, processing_vae, devices
from modules.ui_components import ToolButton # pylint: disable=unused-import


def calc_resolution_hires(width, height, hr_scale, hr_resize_x, hr_resize_y, hr_upscaler):
-from modules import processing, devices
if hr_upscaler == "None":
return "Hires resize: None"
p = processing.StableDiffusionProcessingTxt2Img(width=width, height=height, enable_hr=True, hr_scale=hr_scale, hr_resize_x=hr_resize_x, hr_resize_y=hr_resize_y)
@@ -50,6 +49,7 @@ def create_ui():
hdr_mode, hdr_brightness, hdr_color, hdr_sharpen, hdr_clamp, hdr_boundary, hdr_threshold, hdr_maximize, hdr_max_center, hdr_max_boundry, hdr_color_picker, hdr_tint_ratio = ui_sections.create_correction_inputs('txt2img')
enable_hr, hr_sampler_index, denoising_strength, hr_resize_mode, hr_resize_context, hr_upscaler, hr_force, hr_second_pass_steps, hr_scale, hr_resize_x, hr_resize_y, refiner_steps, refiner_start, refiner_prompt, refiner_negative = ui_sections.create_hires_inputs('txt2img')
override_settings = ui_common.create_override_inputs('txt2img')
+state = gr.Textbox(value='', visible=False)
with gr.Group(elem_id="txt2img_script_container"):
txt2img_script_inputs = modules.scripts.scripts_txt2img.setup_ui(parent='txt2img', accordion=True)
@@ -58,11 +58,10 @@ def create_ui():
ui_common.connect_reuse_seed(seed, reuse_seed, txt2img_generation_info, is_subseed=False)
ui_common.connect_reuse_seed(subseed, reuse_subseed, txt2img_generation_info, is_subseed=True, subseed_strength=subseed_strength)
-txt2img_reprocess.click(fn=processing_vae.reprocess, inputs=[txt2img_gallery], outputs=[txt2img_gallery])
dummy_component = gr.Textbox(visible=False, value='dummy')
txt2img_args = [
-dummy_component,
+dummy_component, state,
txt2img_prompt, txt2img_negative_prompt, txt2img_prompt_styles,
steps, sampler_index, hr_sampler_index,
full_quality, restore_faces, tiling, hidiffusion,
@@ -89,9 +88,15 @@ def create_ui():
],
show_progress=False,
)
txt2img_prompt.submit(**txt2img_dict)
txt2img_negative_prompt.submit(**txt2img_dict)
txt2img_submit.click(**txt2img_dict)
+txt2img_reprocess[1].click(fn=processing_vae.reprocess, inputs=[txt2img_gallery], outputs=[txt2img_gallery]) # full-decode
+txt2img_reprocess[2].click(**txt2img_dict) # hires-refine
+txt2img_reprocess[3].click(**txt2img_dict) # face-restore
txt2img_paste_fields = [
# prompt
(txt2img_prompt, "Prompt"),
@@ -164,6 +164,7 @@ class FaceRestorerYolo(FaceRestoration):
shared.log.debug(f'Face HiRes: faces={report} args={faces[0].args} denoise={p.denoising_strength} blur={p.mask_blur} width={p.width} height={p.height} padding={p.inpaint_full_res_padding}')
mask_all = []
+p.state = ''
for face in faces:
if face.mask is None:
continue
@@ -187,6 +188,7 @@ class FaceRestorerYolo(FaceRestoration):
p = processing_class.switch_class(p, orig_cls, orig_p)
p.init_images = getattr(orig_p, 'init_images', None)
p.image_mask = getattr(orig_p, 'image_mask', None)
+p.state = getattr(orig_p, 'state', None)
shared.opts.data['mask_apply_overlay'] = orig_apply_overlay
np_image = np.array(image)
@@ -284,7 +284,7 @@ class Script(scripts.Script):
pc.extra_generation_params["Y Values"] = y_values
if y_opt.label in ["Seed", "Var. seed"] and not no_fixed_seeds:
pc.extra_generation_params["Fixed Y Values"] = ", ".join([str(y) for y in ys])
-grid_infotext[subgrid_index] = processing.create_infotext(pc, pc.all_prompts, pc.all_seeds, pc.all_subseeds)
+grid_infotext[subgrid_index] = processing.create_infotext(pc, pc.all_prompts, pc.all_seeds, pc.all_subseeds, grid=f'{len(x_values)}x{len(y_values)}')
if grid_infotext[0] is None and ix == 0 and iy == 0 and iz == 0: # Sets main grid infotext
pc.extra_generation_params = copy(pc.extra_generation_params)
if z_opt.label != 'Nothing':
@@ -292,7 +292,7 @@ class Script(scripts.Script):
pc.extra_generation_params["Z Values"] = z_values
if z_opt.label in ["Seed", "Var. seed"] and not no_fixed_seeds:
pc.extra_generation_params["Fixed Z Values"] = ", ".join([str(z) for z in zs])
-grid_infotext[0] = processing.create_infotext(pc, pc.all_prompts, pc.all_seeds, pc.all_subseeds)
+grid_infotext[0] = processing.create_infotext(pc, pc.all_prompts, pc.all_seeds, pc.all_subseeds, grid=f'{len(z_values)}x{len(x_values)}x{len(y_values)}')
return res

with SharedSettingsStackHelper():
@@ -298,7 +298,7 @@ class Script(scripts.Script):
pc.extra_generation_params["Y Values"] = y_values
if y_opt.label in ["Seed", "Var. seed"] and not no_fixed_seeds:
pc.extra_generation_params["Fixed Y Values"] = ", ".join([str(y) for y in ys])
-grid_infotext[subgrid_index] = processing.create_infotext(pc, pc.all_prompts, pc.all_seeds, pc.all_subseeds)
+grid_infotext[subgrid_index] = processing.create_infotext(pc, pc.all_prompts, pc.all_seeds, pc.all_subseeds, grid=f'{len(x_values)}x{len(y_values)}')
if grid_infotext[0] is None and ix == 0 and iy == 0 and iz == 0: # Sets main grid infotext
pc.extra_generation_params = copy(pc.extra_generation_params)
if z_opt.label != 'Nothing':
@@ -306,7 +306,8 @@ class Script(scripts.Script):
pc.extra_generation_params["Z Values"] = z_values
if z_opt.label in ["Seed", "Var. seed"] and not no_fixed_seeds:
pc.extra_generation_params["Fixed Z Values"] = ", ".join([str(z) for z in zs])
-grid_infotext[0] = processing.create_infotext(pc, pc.all_prompts, pc.all_seeds, pc.all_subseeds)
+grid_text = f'{len(z_values)}x{len(x_values)}x{len(y_values)}' if len(z_values) > 0 else f'{len(x_values)}x{len(y_values)}'
+grid_infotext[0] = processing.create_infotext(pc, pc.all_prompts, pc.all_seeds, pc.all_subseeds, grid=grid_text)
return res

with SharedSettingsStackHelper():