pull/32/head
Haoming 2024-03-05 10:48:56 +08:00 committed by GitHub
commit eb22e0efa2
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194
33 changed files with 610 additions and 551 deletions

.gitattributes vendored

@@ -1,2 +0,0 @@
*.png filter=lfs diff=lfs merge=lfs -text
*.jpg filter=lfs diff=lfs merge=lfs -text


@@ -1,3 +1,22 @@
### v2.0.0 - 2024 Mar.05
- Improved Logics
### v2.0.epsilon - 2024 Mar.04
- Improved Logics
### v2.0.delta - 2024 Mar.04
- Support **SDXL**
### v2.0.gamma - 2024 Mar.01
- Major **Rewrite** & **Optimization**
### v2.0.beta - 2024 Feb.29
- Revert Sampler **Hook** *(`internal`)*
### v2.0.alpha - 2024 Feb.29
- Changed Sampler **Hook** *(`internal`)*
- Removed **LFS** *(`GitHub`)*
### v1.5.1 - 2023 Dec.03
- Bug Fix by. **catboxanon**


@@ -1,6 +1,6 @@
MIT License
Copyright (c) 2023 Haoming
Copyright (c) 2024 Haoming
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal

README.md

@@ -1,8 +1,10 @@
# SD Webui Vectorscope CC
This is an Extension for the [Automatic1111 Webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui), which performs a kind of **Offset Noise**[*](#offset-noise-tldr) natively,
This is an Extension for the [Automatic1111 Webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui), which performs a kind of **Offset Noise** natively,
allowing you to adjust the brightness, contrast, and color of the generations.
**Important:** The color currently only works for **SD 1.5** Checkpoints
> Also compatible with [Forge](https://github.com/lllyasviel/stable-diffusion-webui-forge)!
> Now supports SDXL!
> [Sample Images](#sample-images)
@@ -10,7 +12,7 @@ allowing you to adjust the brightness, contrast, and color of the generations.
After installing this Extension, you will see a new section in both **txt2img** and **img2img** tabs.
Refer to the parameters and sample images below and play around with the values.
**Note:** Since this modifies the underlying latent noise, the composition may change drastically. Using the **Ones** scaling seems to reduce the variations.
**Note:** Since this modifies the underlying latent noise, the composition may change drastically.
#### Parameters
- **Enable:** Turn on/off this Extension
@@ -59,17 +61,15 @@ Refer to the parameters and sample images below and play around with the values.
- To save a Style, enter a name in the `Textbox` then click **Save Style**
- To delete a Style, enter the name in the `Textbox` then click **Delete Style**
- *Deleted Style is still in the `styles.json` in case you wish to retrieve it*
- Click **Refresh Style** to update the `Dropdown` if you edited the `styles.json` directly
- Click **Refresh Style** to update the `Dropdown` if you edited the `styles.json` manually
#### Advanced Settings
- **Process Hires. fix:** By default, this Extension only functions during the **txt2img** phase, so that **Hires. fix** may "fix" the artifacts introduced during **txt2img**. Enable this to process **Hires. fix** phase too.
- This option does not affect **img2img**
##### Noise Settings
#### Noise Settings
> let `x` denote the Tensor; let `y` denote the operations
<!-- "Straight", "Straight Abs.", "Cross", "Cross Abs.", "Ones", "N.Random", "U.Random", "Multi-Res", "Multi-Res Abs." -->
- **Straight:** All operations are calculated on the same Tensor
- `x += x * y`
- **Cross:** All operations are calculated on the Tensor opposite of the `Alt.` setting
@@ -85,15 +85,16 @@ Refer to the parameters and sample images below and play around with the values.
- **Abs:** Calculate using the absolute values of the chosen Tensors instead
- `x += abs(F) * y`
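To make the notation above concrete, here is an illustrative scalar sketch of the operations (the Extension applies them to whole latent Tensors; the `ones` branch is an assumption based on the **Ones** option's name):

```python
def apply_offset(x: float, y: float, mode: str) -> float:
    """Illustrative scalar version of the operations above.
    x is one latent value; y is the combined offset from the sliders."""
    if mode == "straight":      # Straight: x += x * y
        return x + x * y
    if mode == "straight_abs":  # Abs: x += abs(x) * y
        return x + abs(x) * y
    if mode == "ones":          # assumed meaning of "Ones": offset by y alone
        return x + 1.0 * y
    raise ValueError(f"unknown mode: {mode}")

# For a negative latent value and y = 0.5: "straight" pushes further
# negative (the offset follows the sign of x), while "straight_abs"
# always pushes in the direction of y's sign.
```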
<p align="center"><img src="samples/Bright.jpg" width=768></p>
<p align="center"><img src="samples/Dark.jpg" width=768></p>
<p align="center">
<img src="samples/Method.jpg">
</p>
##### Scaling Settings
Previously, this Extension offsets the noise by the same amount each step.
#### Scaling Settings
By default, this Extension offsets the noise by the same amount each step.
But due to the denoising process, this may produce undesired outcomes such as blurriness at high **Brightness** or noises at low **Brightness**.
Thus, I added a scaling option to modify the offset amount.
Therefore, I added a scaling option to modify the offset amount throughout the process.
> Essentially, the "magnitude" of the default Tensor gets smaller every step, so offsetting by the same amount will have stronger effects at later steps. This is reversed on the `Alt.` Tensor however.
> Essentially, the "magnitude" of the default Tensor gets smaller every step, so offsetting by the same amount will have stronger effects at the later steps.
- **Flat:** Default behavior. Same amount each step.
- **Cos:** Cosine scaling. *(High -> Low)*
@@ -101,97 +102,80 @@ Thus, I added a scaling option to modify the offset amount.
- **1 - Cos:** *(Low -> High)*
- **1 - Sin:** *(High -> Low)*
> In my experience, **`1 - Sin`** works better for the **default** Tensor while **`1 - Cos`** works better for the **Alt.** Tensor
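One plausible reading of the scaling names above, sketched as quarter-period curves over the step count (the exact curve and phase used internally may differ):

```python
import math

def scale_factor(step: int, total: int, mode: str = "Flat") -> float:
    """Per-step multiplier for the offset amount.
    Assumes each schedule sweeps a quarter period (0 .. pi/2) over the run;
    this is a sketch of the option names, not the Extension's exact code."""
    t = step / max(total - 1, 1) * math.pi / 2
    if mode == "Flat":
        return 1.0                  # same amount each step
    if mode == "Cos":
        return math.cos(t)          # high -> low
    if mode == "Sin":
        return math.sin(t)          # low -> high
    if mode == "1 - Cos":
        return 1.0 - math.cos(t)    # low -> high
    if mode == "1 - Sin":
        return 1.0 - math.sin(t)    # high -> low
    raise ValueError(f"unknown scaling: {mode}")
```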
<p align="center">
<code>Alt. Disabled</code><br>
<img src="samples/Scaling.jpg" width=768>
<img src="samples/Scaling.jpg">
</p>
<p align="center">
<code>Alt. Enabled</code><br>
<img src="samples/Scaling_alt.jpg" width=768>
</p>
<p align="center"><i>Notice the blurriness and the noises on <code>Flat</code> scaling</i></p>
## Sample Images
- **Checkpoint:** [UHD-23](https://civitai.com/models/22371/uhd-23)
- **Pos. Prompt:** `(masterpiece, best quality), 1girl, solo, night, street, city, neon_lights`
- **Neg. Prompt:** `(low quality, worst quality:1.2)`, [`EasyNegative`](https://huggingface.co/datasets/gsdf/EasyNegative/tree/main), [`EasyNegativeV2`](https://huggingface.co/gsdf/Counterfeit-V3.0/tree/main/embedding)
- `Euler a`; `20 steps`; `7.5 CFG`; `Hires. fix`; `Latent (nearest)`; `16 H.steps`; `0.6 D.Str.`; `Seed:`**`3814649974`**
- `Straight Abs.`
- **Checkpoint:** [Animagine XL V3](https://civitai.com/models/260267)
- **Pos. Prompt:** `[high quality, best quality], 1girl, solo, casual, night, street, city, <lora:SDXL_Lightning_8steps:1>`
- **Neg. Prompt:** `lowres, [low quality, worst quality], jpeg`
- `Euler A SGMUniform`; `10 steps`; `2.0 CFG`; **Seed:** `2836968120`
- `Multi-Res Abs.` ; `Cos`
<p align="center">
<b>Base</b><br>
<code>Extension Disabled</code><br>
<img src="samples/00.jpg" width=512>
<code>Disabled</code><br>
<img src="samples/00.jpg" width=768>
</p>
<p align="center">
<b>Dark</b><br>
<code><b>Brightness:</b> -3; <b>Contrast:</b> 1.5</code><br>
<img src="samples/01.jpg" width=512>
<code>Brightness: 2.0 ; Contrast: -0.5 ; Saturation: 1.5<br>
R: 2.5; G: 1.5; B: -3</code><br>
<img src="samples/01.jpg" width=768>
</p>
<p align="center">
<b>Bright</b><br>
<code><b>Brightness:</b> 2.5; <b>Contrast:</b> 0.5; <b>Alt:</b> Enabled</code><br>
<img src="samples/02.jpg" width=512>
<code>Brightness: -2.5 ; Contrast: 1 ; Saturation: 0.75<br>
R: -1.5; G: -1.5; B: 4</code><br>
<img src="samples/02.jpg" width=768>
</p>
<p align="center">
<b>Chill</b><br>
<code><b>Brightness:</b> -2.5; <b>Contrast:</b> 1.25</code><br>
<code><b>R:</b> -1.5; <b>B:</b> 2.5</code><br>
<img src="samples/03.jpg" width=512>
</p>
<p align="center">
<b><s>Mexican Movie</s></b><br>
<code><b>Brightness:</b> 3; <b>Saturation:</b> 1.5</code><br>
<code><b>R:</b> 2; <b>G:</b> 1; <b>B:</b> -2</code><br>
<img src="samples/04.jpg" width=512>
</p>
<p align="center"><i>Notice the significant differences even when using the same seed</i></p>
## Roadmap
- [X] Extension Released
- [X] Add Support for **X/Y/Z Plot**
- [X] Implement different **Noise** functions
- [X] Add **Randomize** functions
- [X] Append Parameters onto Metadata
- You can enable this in the **Infotext** section of the **Settings** tab
- [X] Add **Randomize** button
- [X] **Style** Presets
- [X] Implement **Color Wheel** & **Color Picker**
- [X] Implement better scaling algorithms
- [X] Fix the **Brightness** issues *~~kinda~~*
- [X] Add API Docs
- [X] Append Parameters onto Metadata
- You can enable this in the **Infotext** section of the **Settings** tab
- [X] Add Infotext Support *(by. [catboxanon](https://github.com/catboxanon))*
- [X] ADD **HDR** Script
- [X] Add SDXL Support
- [ ] Add Gradient features
- [ ] Add SDXL Support
<p align="center"><code>X/Y/Z Plot Support</code><br><i>(Outdated Contrast Value)</i></p>
<p align="center"><img src="samples/XYZ.jpg" width=768></p>
<p align="center">
<code>X/Y/Z Plot Support</code><br>
<img src="samples/XYZ.jpg">
</p>
<p align="center"><code>X/Y/Z Plot w/ Randomize</code></p>
<p align="center"><img src="samples/Random.jpg" width=768></p>
<p align="center">The value is used as the random seed<br>You can refer to the console to see the randomized values</p>
<p align="center">
<code>Randomize</code><br>
<img src="samples/Random.jpg"><br>
The value is used as the random seed<br>You can refer to the console to see the randomized values</p>
## API
You can use this Extension via [API](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/API) by adding an entry in the `alwayson_scripts` of your payload.
You can use this Extension via [API](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/API) by adding an entry to the `alwayson_scripts` of your payload.
An [example](samples/api_example.json) is provided.
The `args` are sent in the following order:
The `args` are sent in the following order in an `array`:
- **[Enable, Alt, Brightness, Contrast, Saturation, R, G, B, Process Hires. Fix, Noise Settings, Scaling Settings]**
> `bool`, `bool`, `float`, `float`, `float`, `float`, `float`, `float`, `bool`, `str`, `str`
- **Enable:** `bool`
- **Alt:** `bool`
- **Brightness:** `float`
- **Contrast:** `float`
- **Saturation:** `float`
- **R:** `float`
- **G:** `float`
- **B:** `float`
- **Process Hires. Fix:** `bool`
- **Noise Settings:** `str`
- **Scaling Settings:** `str`
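A minimal sketch of building such a payload (the `"Vectorscope CC"` script key, prompt, and endpoint comment here are assumptions for illustration; consult the linked `api_example.json` for the exact names):

```python
import json

# The args must follow the documented order above.
args = [
    True,             # Enable
    False,            # Alt
    1.5,              # Brightness
    0.0,              # Contrast
    1.0,              # Saturation
    0.0, 0.0, 0.0,    # R, G, B
    False,            # Process Hires. Fix
    "Straight Abs.",  # Noise Settings
    "Flat",           # Scaling Settings
]

payload = {
    "prompt": "1girl, solo, night, street",
    "steps": 20,
    "alwayson_scripts": {"Vectorscope CC": {"args": args}},
}

body = json.dumps(payload)  # POST this to /sdapi/v1/txt2img
```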
## Known Issues
- Does not work with `DDIM`, `UniPC` samplers
- Does **not** work with `DDIM`, `UniPC`, `Euler` samplers
- Has little effect when used with certain **LoRA**s
- Colors are incorrect when using SDXL checkpoints
## HDR
<p align="right"><i><b>BETA</b></i></p>
@@ -201,19 +185,21 @@ The `args` are sent in the following order:
- In the **Script** `Dropdown` at the bottom, there is now a new option: **`High Dynamic Range`**
- This script will generate multiple images *("Brackets")* of varying brightness, then merge them into 1 HDR image
- *Do provide feedback in the thread!*
- **Highly Recommended** to use a deterministic sampler and high enough steps. `Euler` *(**not** `Euler a`)* worked the best in my experience.
- **Highly Recommended** to use a deterministic sampler and high enough steps. `Euler` *(**not** `Euler a`)* worked well in my experience.
#### Settings
- **Brackets:** The number of images to generate
- **Gaps:** The brightness difference between each image
- **Automatically Merge:** When enabled, this will merge the images using a `OpenCV` algorithm and save to the `HDR` folder in the `outputs` folder; When disabled, this will return all images to the result section, for when you have a more advanced program such as Photoshop to do the merging.
- **Automatically Merge:** When enabled, this will merge the images using an `OpenCV` algorithm and save to the `HDR` folder in the `outputs` folder; When disabled, this will return all images to the result section, for when you have a more advanced program such as Photoshop to do the merging.
- All the images are still saved to the `outputs` folder regardless
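As a toy illustration of the merging idea (not the actual `OpenCV` routine the script calls), a pixel-wise exposure fusion that favors well-exposed brackets:

```python
import math

def merge_brackets(brackets):
    """Toy exposure fusion: weight each bracket's pixel by how close it is
    to mid-grey (0.5), then take the normalized weighted average.
    A simplified stand-in for the OpenCV merge this script performs."""
    merged = []
    for pixels in zip(*brackets):
        weights = [math.exp(-((v - 0.5) ** 2) / 0.08) for v in pixels]
        total = sum(weights)
        merged.append(sum(w * v for w, v in zip(weights, pixels)) / total)
    return merged

# three brackets of a 2-pixel "image", dark to bright (values in 0..1)
dark, mid, bright = [0.05, 0.10], [0.40, 0.55], [0.90, 0.95]
hdr = merge_brackets([dark, mid, bright])  # leans toward the mid exposure
```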
<hr>
### Offset Noise TL;DR
<details>
<summary>Offset Noise TL;DR</summary>
The most common *version* of **Offset Noise** you may have heard of is from this [blog post](https://www.crosslabs.org/blog/diffusion-with-offset-noise),
where it was discovered that the noise functions used during **training** were flawed, causing `Stable Diffusion` to always generate images with an average of `0.5`.
where it was discovered that the noise functions used during **training** were flawed, causing `Stable Diffusion` to always generate images with an average of `0.5` *(**ie.** grey)*.
> **ie.** Even if you prompt for dark/night or bright/snow, the overall image still looks "grey"
@@ -221,11 +207,11 @@ where it was discovered that the noise functions used during **training** were f
However, this Extension instead tries to offset the latent noise during the **inference** phase.
Therefore, you do not need to use models that were specially trained, as this can work on any model.
Though, the results may not be as good as using properly trained models.
</details>
<hr>
<details>
<summary>How does this work?</summary>
### What is Under the Hood
After reading through and messing around with the code,
I found out that it is possible to directly modify the Tensors
representing the latent noise used by the Stable Diffusion process.
@@ -240,15 +226,16 @@ Then, I tried to play around with the values of each channel and ended up discov
Essentially, the 4 channels correspond to the **CMYK** color format,
hence why you can control the brightness as well as the colors.
</details>
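As a toy sketch of the channel idea described above (the exact channel-to-color mapping is the Extension's internal detail; the channel index used here is illustrative):

```python
def offset_channel(latent, channel, amount):
    """latent: nested list shaped [4][H][W], mimicking a 4-channel latent.
    Nudging one channel shifts one CMYK-like component of the output."""
    return [
        [[v + amount if c == channel else v for v in row] for row in plane]
        for c, plane in enumerate(latent)
    ]

# 4 channels of a 1x2 "latent image"
latent = [[[0.0, 0.1]], [[0.2, 0.3]], [[0.4, 0.5]], [[0.6, 0.7]]]
brighter = offset_channel(latent, 0, 0.5)  # push only the first channel
```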
<hr>
### Vectorscope?
#### Vectorscope?
The Extension is named this way because the color interactions remind me of the `Vectorscope` found in **Premiere Pro**'s **Lumetri Color**.
Those who are experienced in Color Correction should be rather familiar with this Extension.
<p align="center"><img src="scripts/Vectorscope.png" width=256></p>
<hr>
<sup>~~Yes. I'm aware that it's just how digital colors work in general.~~</sup>
<sup>~~Yes. I'm aware that it's just how digital colors work in general.~~<br>
~~We've come full **circle** *(\*ba dum tss)* now that a Color Wheel is actually added.~~</sup>
<sup>~~We've come full **circle** *(\*ba dum tss)* now that a Color Wheel is actually added.~~</sup>


@@ -1,136 +1,127 @@
function registerPicker(wheel, sliders, mode) {
for (const event of ['mousemove', 'click']) {
wheel.addEventListener(event, (e) => {
e.preventDefault();
class VectorscopeCC {
const rect = e.target.getBoundingClientRect();
static dot = { 'txt': null, 'img': null };
if (e.type != 'click') {
if (e.buttons != 1) {
static updateCursor(r, g, b, mode) {
const mag = Math.abs(r) + Math.abs(g) + Math.abs(b);
var condX, condY;
if (mag < Number.EPSILON) {
condX = 0.0;
condY = 0.0;
} else {
condX = 25 * Math.sqrt(r * r + g * g + b * b) * (r * -0.5 + g * -0.5 + b * 1.0) / mag;
condY = 25 * Math.sqrt(r * r + g * g + b * b) * (r * -0.866 + g * 0.866 + b * 0.0) / mag;
}
this.dot[mode].style.left = `calc(50% + ${condX - 12}px)`;
this.dot[mode].style.top = `calc(50% + ${condY - 12}px)`;
}
static registerPicker(wheel, sliders, dot) {
['mousemove', 'click'].forEach((event) => {
wheel.addEventListener(event, (e) => {
e.preventDefault();
if (e.type === 'mousemove' && e.buttons != 1)
return;
const rect = e.target.getBoundingClientRect();
const p_rect = e.target.parentNode.getBoundingClientRect();
const shift = (p_rect.width - rect.width) / 2.0;
dot.style.left = `calc(${e.clientX - rect.left}px - 12px + ${shift}px)`;
dot.style.top = `calc(${e.clientY - rect.top}px - 12px)`;
const x = ((e.clientX - rect.left) - 100.0) / 25;
const y = ((e.clientY - rect.top) - 100.0) / 25;
const zeta = Math.atan(y / x);
var degree = 0;
if (x >= 0) {
if (y >= 0)
degree = zeta * 180 / Math.PI;
else
degree = 360 + zeta * 180 / Math.PI;
}
else if (x < 0) {
degree = 180 + zeta * 180 / Math.PI;
}
const dot = e.target.parentElement.querySelector('#cc-dot-' + mode);
dot.style.position = 'fixed';
dot.style.left = e.x - (dot.width / 2) + 'px';
dot.style.top = e.y - (dot.height / 2) + 'px';
}
var r = -(0.00077 * (433 * x * degree + 750 * y * degree) / degree);
var g = y / 0.866 + r;
var b = x + 0.5 * r + 0.5 * g;
x = ((e.clientX - rect.left) - 100.0) / 25;
y = ((e.clientY - rect.top) - 100.0) / 25;
const mag = Math.sqrt(r * r + g * g + b * b);
const len = Math.abs(r) + Math.abs(g) + Math.abs(b);
const zeta = Math.atan(y / x);
var degree = 0;
r = (r / mag * len).toFixed(2);
g = (g / mag * len).toFixed(2);
b = (b / mag * len).toFixed(2);
if (x >= 0) {
if (y >= 0)
degree = zeta * 180 / Math.PI;
else
degree = 360 + zeta * 180 / Math.PI;
}
else if (x < 0) {
degree = 180 + zeta * 180 / Math.PI;
}
sliders[0][0].value = r;
sliders[0][1].value = r;
sliders[1][0].value = g;
sliders[1][1].value = g;
sliders[2][0].value = b;
sliders[2][1].value = b;
});
});
// -0.5r - 0.5g + b = x
// -0.866r + 0.866g = y
// 240r + 120g = z * rgb
// g = (1 / 0.866)y + r
// -0.5r - 0.5((1 / 0.866)y + r) + b = x
// b = x + 0.5r + 0.5((1 / 0.866)y + r)
// 240r + 120(1 / 0.866)y + r = z * r((1 / 0.866)y + r)(x + 0.5r + 0.5((1 / 0.866)y + r))
var r = -(0.00077 * (433 * x * degree + 750 * y * degree) / degree);
var g = y / 0.866 + r;
var b = x + 0.5 * r + 0.5 * g;
const mag = Math.sqrt(r * r + g * g + b * b);
const len = Math.abs(r) + Math.abs(g) + Math.abs(b);
r = r / mag * len;
g = g / mag * len;
b = b / mag * len;
sliders[0].value = r.toFixed(2);
sliders[0].closest('.gradio-slider').querySelector('input[type=range]').value = r.toFixed(2);
sliders[1].value = g.toFixed(2);
sliders[1].closest('.gradio-slider').querySelector('input[type=range]').value = g.toFixed(2);
sliders[2].value = b.toFixed(2);
sliders[2].closest('.gradio-slider').querySelector('input[type=range]').value = b.toFixed(2);
if (e.type == 'click') {
updateInput(sliders[0]);
updateInput(sliders[1]);
updateInput(sliders[2]);
}
['mouseleave', 'mouseup'].forEach((event) => {
wheel.addEventListener(event, () => {
updateInput(sliders[0][0]);
updateInput(sliders[1][0]);
updateInput(sliders[2][0]);
});
});
}
wheel.addEventListener('mouseup', (e) => {
const dot = e.target.parentElement.querySelector('#cc-dot-' + mode);
dot.style.position = 'absolute';
updateInput(sliders[0]);
updateInput(sliders[1]);
updateInput(sliders[2]);
});
}
onUiLoaded(async () => {
['txt', 'img'].forEach((mode) => {
const container = document.getElementById('cc-colorwheel-' + mode);
const container = gradioApp().getElementById(`cc-colorwheel-${mode}`);
container.style.height = '200px';
container.style.width = 'auto';
container.style.width = '200px';
container.querySelector('.float')?.remove();
container.querySelector('.download')?.remove();
for (const downloadButton of container.querySelectorAll('[download]')) {
downloadButton.parentElement.remove();
}
while (container.firstChild.nodeName.toLowerCase() !== 'img')
container.firstChild.remove();
const wheel = container.getElementsByTagName('img')[0];
wheel.style.height = '100%';
wheel.style.width = 'auto';
wheel.style.margin = 'auto';
wheel.id = 'cc-img-' + mode;
wheel.ondragstart = (e) => { e.preventDefault(); };
const wheel = container.querySelector('img');
wheel.ondragstart = (e) => { e.preventDefault(); return false; };
wheel.id = `cc-img-${mode}`;
sliders = [
document.getElementById('cc-r-' + mode).querySelector('input'),
document.getElementById('cc-g-' + mode).querySelector('input'),
document.getElementById('cc-b-' + mode).querySelector('input')
[gradioApp().getElementById(`cc-r-${mode}`).querySelector('input[type=number]'),
gradioApp().getElementById(`cc-r-${mode}`).querySelector('input[type=range]')],
[gradioApp().getElementById(`cc-g-${mode}`).querySelector('input[type=number]'),
gradioApp().getElementById(`cc-g-${mode}`).querySelector('input[type=range]')],
[gradioApp().getElementById(`cc-b-${mode}`).querySelector('input[type=number]'),
gradioApp().getElementById(`cc-b-${mode}`).querySelector('input[type=range]')]
];
registerPicker(wheel, sliders, mode);
const temp = gradioApp().getElementById(`cc-temp-${mode}`);
const temp = document.getElementById('cc-temp-' + mode);
const dot = temp.getElementsByTagName('img')[0];
dot.id = 'cc-dot-' + mode;
container.appendChild(dot);
const dot = temp.querySelector('img');
dot.id = `cc-dot-${mode}`;
dot.style.left = 'calc(50% - 12px)';
dot.style.top = 'calc(50% - 12px)';
container.appendChild(dot);
temp.remove();
const row1 = document.getElementById('cc-apply-' + mode).parentNode;
const row2 = document.getElementById('cc-save-' + mode).parentNode;
VectorscopeCC.dot[mode] = dot;
VectorscopeCC.registerPicker(wheel, sliders, dot);
const row1 = gradioApp().getElementById(`cc-apply-${mode}`).parentNode;
row1.style.alignItems = 'end';
row1.style.gap = '1em';
const row2 = gradioApp().getElementById(`cc-save-${mode}`).parentNode;
row2.style.alignItems = 'end';
row2.style.gap = '1em';
// ----- HDR UIs -----
const hdr_settings = document.getElementById('vec-hdr-' + mode);
const buttons = hdr_settings.getElementsByTagName('label');
for (let i = 0; i < buttons.length; i++)
buttons[i].style.borderRadius = '0.5em';
});
});




samples/03.jpg (Stored with Git LFS)

samples/04.jpg (Stored with Git LFS)

samples/Bright.jpg (Stored with Git LFS)

samples/Dark.jpg (Stored with Git LFS)

samples/Method.jpg (new file)



samples/Scaling_alt.jpg (Stored with Git LFS)

samples/Skip.jpg (Stored with Git LFS)



@@ -14,8 +14,8 @@
"args": [
true,
true,
-2.5,
0.5,
-2.0,
1.5,
1.25,
0.0,
0.0,



@@ -1,22 +1,22 @@
from modules.sd_samplers_kdiffusion import KDiffusionSampler
import modules.scripts as scripts
from modules import shared, scripts
from scripts.cc_colorpicker import create_colorpicker
from scripts.cc_style import StyleManager
from scripts.cc_xyz import xyz_support
import scripts.cc_const as const
from modules import shared
import gradio as gr
import random
from scripts.cc_xyz import xyz_support
from scripts.cc_version import VERSION
VERSION = 'v2.0.0'
from scripts.cc_colorpicker import create_colorpicker
from scripts.cc_colorpicker import horizontal_js
from scripts.cc_colorpicker import vertical_js
from scripts.cc_style import StyleManager
style_manager = StyleManager()
style_manager.load_styles()
class VectorscopeCC(scripts.Script):
def __init__(self):
self.xyzCache = {}
@@ -29,42 +29,45 @@ class VectorscopeCC(scripts.Script):
return scripts.AlwaysVisible
def ui(self, is_img2img):
mode = ("img" if is_img2img else "txt")
m = f"\"{mode}\""
with gr.Accordion(f"Vectorscope CC {VERSION}", elem_id='vec-cc-' + ('img' if is_img2img else 'txt'), open=False):
with gr.Accordion(f"Vectorscope CC {VERSION}", elem_id=f"vec-cc-{mode}", open=False):
with gr.Row():
enable = gr.Checkbox(label="Enable")
latent = gr.Checkbox(label="Alt. (Stronger Effects)")
with gr.Row():
bri = gr.Slider(label="Brightness", minimum=const.Brightness.minimum, maximum=const.Brightness.maximum, step=0.1, value=const.Brightness.default)
bri = gr.Slider(label="Brightness", minimum=const.Brightness.minimum, maximum=const.Brightness.maximum, step=0.05, value=const.Brightness.default)
con = gr.Slider(label="Contrast", minimum=const.Contrast.minimum, maximum=const.Contrast.maximum, step=0.05, value=const.Contrast.default)
sat = gr.Slider(label="Saturation", minimum=const.Saturation.minimum, maximum=const.Saturation.maximum, step=0.05, value=const.Saturation.default)
with gr.Row():
with gr.Column():
r = gr.Slider(label="R", info='Cyan | Red', minimum=const.R.minimum, maximum=const.R.maximum, step=0.05, value=const.R.default, elem_id='cc-r-' + ('img' if is_img2img else 'txt'))
g = gr.Slider(label="G", info='Magenta | Green',minimum=const.G.minimum, maximum=const.G.maximum, step=0.05, value=const.G.default, elem_id='cc-g-' + ('img' if is_img2img else 'txt'))
b = gr.Slider(label="B", info='Yellow | Blue',minimum=const.B.minimum, maximum=const.B.maximum, step=0.05, value=const.B.default, elem_id='cc-b-' + ('img' if is_img2img else 'txt'))
r = gr.Slider(label="R", info='Cyan | Red', minimum=const.COLOR.minimum, maximum=const.COLOR.maximum, step=0.05, value=const.COLOR.default, elem_id=f"cc-r-{mode}")
g = gr.Slider(label="G", info='Magenta | Green',minimum=const.COLOR.minimum, maximum=const.COLOR.maximum, step=0.05, value=const.COLOR.default, elem_id=f"cc-g-{mode}")
b = gr.Slider(label="B", info='Yellow | Blue',minimum=const.COLOR.minimum, maximum=const.COLOR.maximum, step=0.05, value=const.COLOR.default, elem_id=f"cc-b-{mode}")
r.input(None, [r, g, b], None, _js=f'(r, g, b) => {{ VectorscopeCC.updateCursor(r, g, b, {m}); }}')
g.input(None, [r, g, b], None, _js=f'(r, g, b) => {{ VectorscopeCC.updateCursor(r, g, b, {m}); }}')
b.input(None, [r, g, b], None, _js=f'(r, g, b) => {{ VectorscopeCC.updateCursor(r, g, b, {m}); }}')
create_colorpicker(is_img2img)
for component in [r, g, b]:
component.change(None, inputs=[r, g, b], outputs=[], _js=horizontal_js(is_img2img))
component.change(None, inputs=[r, g, b], outputs=[], _js=vertical_js(is_img2img))
with gr.Accordion("Styles", open=False):
with gr.Row():
style_choice = gr.Dropdown(label="Styles", choices=style_manager.list_style(), scale = 3)
apply_btn = gr.Button(value="Apply Style", elem_id='cc-apply-' + ('img' if is_img2img else 'txt'), scale = 2)
apply_btn = gr.Button(value="Apply Style", elem_id=f"cc-apply-{mode}", scale = 2)
refresh_btn = gr.Button(value="Refresh Style", scale = 2)
with gr.Row():
style_name = gr.Textbox(label="Style Name", scale = 3)
save_btn = gr.Button(value="Save Style", elem_id='cc-save-' + ('img' if is_img2img else 'txt'), scale = 2)
save_btn = gr.Button(value="Save Style", elem_id=f"cc-save-{mode}", scale = 2)
delete_btn = gr.Button(value="Delete Style", scale = 2)
apply_btn.click(fn=style_manager.get_style, inputs=style_choice, outputs=[latent, bri, con, sat, r, g, b])
apply_btn.click(fn=style_manager.get_style, inputs=style_choice, outputs=[latent, bri, con, sat, r, g, b]).then(None, [r, g, b], None, _js=f'(r, g, b) => {{ VectorscopeCC.updateCursor(r, g, b, {m}); }}')
save_btn.click(fn=lambda *args: gr.update(choices=style_manager.save_style(*args)), inputs=[style_name, latent, bri, con, sat, r, g, b], outputs=style_choice)
delete_btn.click(fn=lambda name: gr.update(choices=style_manager.delete_style(name)), inputs=style_name, outputs=style_choice)
refresh_btn.click(fn=lambda _: gr.update(choices=style_manager.list_style()), outputs=style_choice)
@@ -76,10 +79,34 @@ class VectorscopeCC(scripts.Script):
with gr.Row():
reset_btn = gr.Button(value="Reset")
self.register_reset(reset_btn, latent, bri, con, sat, r, g, b, doHR, method, scaling)
random_btn = gr.Button(value="Randomize")
self.register_random(random_btn, bri, con, sat, r, g, b)
def on_reset():
return [
gr.update(value=False),
gr.update(value=const.Brightness.default),
gr.update(value=const.Contrast.default),
gr.update(value=const.Saturation.default),
gr.update(value=const.COLOR.default),
gr.update(value=const.COLOR.default),
gr.update(value=const.COLOR.default),
gr.update(value=False),
gr.update(value='Straight Abs.'),
gr.update(value='Flat')
]
def on_random():
return [
gr.update(value=round(random.uniform(const.Brightness.minimum, const.Brightness.maximum), 2)),
gr.update(value=round(random.uniform(const.Contrast.minimum, const.Contrast.maximum), 2)),
gr.update(value=round(random.uniform(const.Saturation.minimum, const.Saturation.maximum), 2)),
gr.update(value=round(random.uniform(const.COLOR.minimum, const.COLOR.maximum), 2)),
gr.update(value=round(random.uniform(const.COLOR.minimum, const.COLOR.maximum), 2)),
gr.update(value=round(random.uniform(const.COLOR.minimum, const.COLOR.maximum), 2))
]
reset_btn.click(fn=on_reset, inputs=[], outputs=[latent, bri, con, sat, r, g, b, doHR, method, scaling]).then(None, [r, g, b], None, _js=f'(r, g, b) => {{ VectorscopeCC.updateCursor(r, g, b, {m}); }}')
random_btn.click(fn=on_random, inputs=[], outputs=[bri, con, sat, r, g, b]).then(None, [r, g, b], None, _js=f'(r, g, b) => {{ VectorscopeCC.updateCursor(r, g, b, {m}); }}')
self.paste_field_names = []
self.infotext_fields = [
@@ -96,46 +123,21 @@ class VectorscopeCC(scripts.Script):
(scaling, "Vec CC Scaling"),
]
for _, name in self.infotext_fields:
for comp, name in self.infotext_fields:
comp.do_not_save_to_config = True
self.paste_field_names.append(name)
return [enable, latent, bri, con, sat, r, g, b, doHR, method, scaling]
def register_reset(self, reset_btn, latent, bri, con, sat, r, g, b, doHR, method, scaling):
for component in [latent, doHR]:
reset_btn.click(fn=lambda _: gr.update(value=False), outputs=component)
for component in [bri, con, r, g, b]:
reset_btn.click(fn=lambda _: gr.update(value=0.0), outputs=component)
for component in [sat]:
reset_btn.click(fn=lambda _: gr.update(value=const.Saturation.default), outputs=component)
reset_btn.click(fn=lambda _: gr.update(value='Straight Abs.'), outputs=method)
reset_btn.click(fn=lambda _: gr.update(value='Flat'), outputs=scaling)
def register_random(self, random_btn, bri, con, sat, r, g, b):
for component in [bri, con, r, g, b]:
random_btn.click(fn=lambda _: gr.update(value=round(random.uniform(const.R.minimum, const.R.maximum), 2)), outputs=component)
for component in [sat]:
random_btn.click(fn=lambda _: gr.update(value=round(random.uniform(const.Saturation.minimum, const.Saturation.maximum), 2)), outputs=component)
def parse_bool(self, string:str):
if string.lower() == "true":
return True
if string.lower() == "false":
return False
raise ValueError(f"Invalid Value: {string}")
def process(self, p, enable:bool, latent:bool, bri:float, con:float, sat:float, r:float, g:float, b:float, doHR:bool, method:str, scaling:str):
KDiffusionSampler.isHR_Pass = False
if 'Enable' in self.xyzCache.keys():
enable = self.parse_bool(self.xyzCache['Enable'])
enable = (self.xyzCache['Enable'].lower().strip() == "true")
if not enable:
if 'Enable' not in self.xyzCache.keys():
if len(self.xyzCache) > 0:
print('\n[X/Y/Z Plot] x [Vec.CC] Extension is not Enabled!\n')
print('\n[Vec.CC] x [X/Y/Z Plot] Extension is not Enabled!\n')
self.xyzCache.clear()
KDiffusionSampler.vec_cc = {'enable': False}
@@ -143,36 +145,36 @@ class VectorscopeCC(scripts.Script):
if 'Random' in self.xyzCache.keys():
if len(self.xyzCache) > 1:
print('\n[X/Y/Z Plot] x [Vec.CC] Randomize is Enabled!\nSome settings will not apply!\n')
print('\n[X/Y/Z Plot] x [Vec.CC] Randomize is Enabled.\nSome settings will not apply!\n')
else:
print('\n[X/Y/Z Plot] x [Vec.CC] Randomize is Enabled!\n')
print('\n[X/Y/Z Plot] x [Vec.CC] Randomize is Enabled.\n')
cc_seed = None
for k, v in self.xyzCache.items():
match k:
case 'Alt':
latent = self.parse_bool(v)
latent = (self.xyzCache['Alt'].lower().strip() == "true")
case 'Brightness':
bri = v
bri = float(v)
case 'Contrast':
con = v
con = float(v)
case 'Saturation':
sat = v
sat = float(v)
case 'R':
r = v
r = float(v)
case 'G':
g = v
g = float(v)
case 'B':
b = v
b = float(v)
case 'DoHR':
doHR = self.parse_bool(v)
doHR = (self.xyzCache['DoHR'].lower().strip() == "true")
case 'Method':
method = v
method = str(v)
case 'Scaling':
scaling = v
scaling = str(v)
case 'Random':
cc_seed = v
cc_seed = int(v)
self.xyzCache.clear()
@@ -180,22 +182,24 @@ class VectorscopeCC(scripts.Script):
KDiffusionSampler.vec_cc = {'enable': False}
return p
steps = p.steps
if not hasattr(p, 'enable_hr') and hasattr(p, 'denoising_strength') and not shared.opts.img2img_fix_steps:
steps = int(steps * p.denoising_strength)
steps:int = p.steps
# is img2img & do full steps
if not hasattr(p, 'enable_hr') and not shared.opts.img2img_fix_steps:
if getattr(p, 'denoising_strength', 1.0) < 1.0:
steps = int(steps * getattr(p, 'denoising_strength', 1.0) + 1.0)
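The rewritten step count above only scales `steps` when img2img runs with partial denoising and the "full steps" option is off. A standalone sketch of that logic, with a hypothetical `effective_steps` helper standing in for the inlined code:

```python
def effective_steps(steps: int, denoising_strength: float = 1.0,
                    fix_steps: bool = False) -> int:
    # Sketch of the img2img step estimate from this hunk: when the Webui
    # "do full steps" option is off, only a fraction of the sampler steps
    # actually run; the +1.0 mirrors the rounding used here.
    if not fix_steps and denoising_strength < 1.0:
        return int(steps * denoising_strength + 1.0)
    return steps

assert effective_steps(20, 0.5) == 11
assert effective_steps(20, 1.0) == 20
assert effective_steps(20, 0.5, fix_steps=True) == 20
```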
if cc_seed is not None:
if cc_seed:
random.seed(cc_seed)
bri = round(random.uniform(const.Contrast.minimum, const.Contrast.maximum), 2)
bri = round(random.uniform(const.Brightness.minimum, const.Brightness.maximum), 2)
con = round(random.uniform(const.Contrast.minimum, const.Contrast.maximum), 2)
r = round(random.uniform(const.R.minimum, const.R.maximum), 2)
g = round(random.uniform(const.G.minimum, const.G.maximum), 2)
b = round(random.uniform(const.B.minimum, const.B.maximum), 2)
sat = round(random.uniform(const.Saturation.minimum, const.Saturation.maximum), 2)
print(f'-> Seed: {cc_seed}')
r = round(random.uniform(const.COLOR.minimum, const.COLOR.maximum), 2)
g = round(random.uniform(const.COLOR.minimum, const.COLOR.maximum), 2)
b = round(random.uniform(const.COLOR.minimum, const.COLOR.maximum), 2)
print(f'\n-> Seed: {cc_seed}')
print(f'Brightness:\t{bri}')
print(f'Contrast:\t{con}')
print(f'Saturation:\t{sat}')
@@ -203,7 +207,7 @@ class VectorscopeCC(scripts.Script):
print(f'G:\t\t{g}')
print(f'B:\t\t{b}\n')
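The `Random` axis seeds Python's `random` module so a given seed always yields the same color offsets. A minimal illustration of that reproducibility (reusing the shared `COLOR` range of -4.0 to 4.0 from this commit; the helper name is mine):

```python
import random

def randomize_cc(seed: int, lo: float = -4.0, hi: float = 4.0):
    # Sketch of the "Randomize" X/Y/Z option: a fixed seed yields
    # reproducible R/G/B offsets within the shared COLOR bounds.
    random.seed(seed)
    return tuple(round(random.uniform(lo, hi), 2) for _ in range(3))

assert randomize_cc(42) == randomize_cc(42)              # deterministic per seed
assert all(-4.0 <= v <= 4.0 for v in randomize_cc(7))    # stays inside the range
```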
if hasattr(shared.opts, 'cc_metadata') and shared.opts.cc_metadata is True:
if getattr(shared.opts, 'cc_metadata', True):
p.extra_generation_params['Vec CC Enabled'] = enable
p.extra_generation_params['Vec CC Alt'] = latent
p.extra_generation_params['Vec CC Brightness'] = bri
@@ -242,6 +246,3 @@ class VectorscopeCC(scripts.Script):
}
return p
def before_hr(self, p, *args):
KDiffusionSampler.isHR_Pass = True


@@ -1,67 +1,165 @@
from modules.sd_samplers_kdiffusion import KDiffusionSampler
from modules import script_callbacks
from modules import script_callbacks, devices
from scripts.cc_scaling import apply_scaling
from scripts.cc_noise import *
from random import random
from torch import Tensor
import torch
class NoiseMethods:
@staticmethod
def get_delta(latent: Tensor) -> Tensor:
mean = torch.mean(latent)
return torch.sub(latent, mean)
@staticmethod
def to_abs(latent: Tensor) -> Tensor:
return torch.abs(latent)
@staticmethod
def zeros(latent: Tensor) -> Tensor:
return torch.zeros_like(latent)
@staticmethod
def ones(latent: Tensor) -> Tensor:
return torch.ones_like(latent)
@staticmethod
def gaussian_noise(latent: Tensor) -> Tensor:
return torch.rand_like(latent)
@staticmethod
def normal_noise(latent: Tensor) -> Tensor:
return torch.randn_like(latent)
@staticmethod
def multires_noise(
latent: Tensor, use_zero: bool, iterations: int = 8, discount: float = 0.4
):
"""
Credit: Kohya_SS
https://github.com/kohya-ss/sd-scripts/blob/main/library/custom_train_functions.py#L448
"""
noise = NoiseMethods.zeros(latent) if use_zero else NoiseMethods.ones(latent)
batchSize, c, w, h = noise.shape
device = devices.get_optimal_device()
upsampler = torch.nn.Upsample(size=(w, h), mode="bilinear").to(device)
for b in range(batchSize):
for i in range(iterations):
r = random() * 2 + 2
wn = max(1, int(w / (r**i)))
hn = max(1, int(h / (r**i)))
noise[b] += (upsampler(torch.randn(1, c, hn, wn).to(device)) * discount**i)[0]
if wn == 1 or hn == 1:
break
return noise / noise.std()
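The loop above shrinks the noise grid by a random factor each iteration and stops once a dimension collapses to one pixel. This torch-free sketch reproduces just the resolution schedule (the helper name is mine, not part of the extension):

```python
from random import seed, uniform

def multires_dims(width: int, height: int, iterations: int = 8):
    # Reproduce only the resolution schedule of multires_noise: each
    # iteration divides the grid by r**i with r drawn from [2, 4),
    # stopping once either dimension hits a single pixel.
    dims = []
    for i in range(iterations):
        r = uniform(0, 1) * 2 + 2
        wn = max(1, int(width / (r**i)))
        hn = max(1, int(height / (r**i)))
        dims.append((wn, hn))
        if wn == 1 or hn == 1:
            break
    return dims

seed(0)  # deterministic for the demo
schedule = multires_dims(64, 64)
assert schedule[0] == (64, 64)                      # i = 0 keeps full resolution
assert all(w >= 1 and h >= 1 for w, h in schedule)  # never degenerates below 1px
```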
def RGB_2_CbCr(r: float, g: float, b: float) -> tuple:
"""Convert RGB channels into YCbCr for SDXL"""
cb = -0.15 * r - 0.29 * g + 0.44 * b
cr = 0.44 * r - 0.37 * g - 0.07 * b
return cb, cr
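A quick sanity check of the chroma conversion above; the weights resemble BT.601-style Cb/Cr coefficients, but they are the extension's own values:

```python
def rgb_to_cbcr(r: float, g: float, b: float):
    # mirror of RGB_2_CbCr from this commit
    cb = -0.15 * r - 0.29 * g + 0.44 * b
    cr = 0.44 * r - 0.37 * g - 0.07 * b
    return cb, cr

cb, cr = rgb_to_cbcr(0.0, 0.0, 0.0)
assert cb == 0.0 and cr == 0.0   # neutral input has no chroma offset

cb, cr = rgb_to_cbcr(1.0, 0.0, 0.0)
assert cr > 0 > cb               # pure red pushes Cr up and Cb down
```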
original_callback = KDiffusionSampler.callback_state
@torch.no_grad()
def cc_callback(self, d):
if not self.vec_cc['enable']:
if not self.vec_cc["enable"]:
return original_callback(self, d)
if not self.vec_cc['doHR'] and self.isHR_Pass is True:
if getattr(self.p, "is_hr_pass", False) and not self.vec_cc["doHR"]:
return original_callback(self, d)
mode = self.vec_cc['mode']
method = self.vec_cc['method']
is_xl: bool = self.p.sd_model.is_sdxl
mode = str(self.vec_cc["mode"])
method = str(self.vec_cc["method"])
source = d[mode]
# "Straight", "Straight Abs.", "Cross", "Cross Abs.", "Ones", "N.Random", "U.Random", "Multi-Res", "Multi-Res Abs."
if 'Straight' in method:
target = d[mode]
elif 'Cross' in method:
cross = 'x' if mode == 'denoised' else 'denoised'
target = d[cross]
elif 'Multi-Res' in method:
target = multires_noise(d[mode], 'Abs' in method)
elif method == 'Ones':
target = ones(d[mode])
elif method == 'N.Random':
target = normal_noise(d[mode])
elif method == 'U.Random':
target = gaussian_noise(d[mode])
if "Straight" in method:
target = d[mode].detach().clone()
elif "Cross" in method:
target = d["x" if mode == "denoised" else "denoised"].detach().clone()
elif "Multi-Res" in method:
target = NoiseMethods.multires_noise(d[mode], "Abs" in method)
elif method == "Ones":
target = NoiseMethods.ones(d[mode])
elif method == "N.Random":
target = NoiseMethods.normal_noise(d[mode])
elif method == "U.Random":
target = NoiseMethods.gaussian_noise(d[mode])
else:
raise ValueError(f"Unknown method: {method}")
if 'Abs' in method:
target = to_abs(target)
if "Abs" in method:
target = NoiseMethods.to_abs(target)
batchSize = d[mode].size(0)
batchSize = int(d[mode].size(0))
mods = apply_scaling(self.vec_cc['scaling'], d["i"], self.vec_cc['step'],
self.vec_cc['bri'], self.vec_cc['con'], self.vec_cc['sat'],
self.vec_cc['r'], self.vec_cc['g'], self.vec_cc['b'])
bri, con, sat, r, g, b = apply_scaling(
self.vec_cc["scaling"],
d["i"],
self.vec_cc["step"],
self.vec_cc["bri"],
self.vec_cc["con"],
self.vec_cc["sat"],
self.vec_cc["r"],
self.vec_cc["g"],
self.vec_cc["b"],
)
for i in range(batchSize):
BRIGHTNESS = [source[i, 0], target[i, 0]]
R = [source[i, 2], target[i, 2]]
G = [source[i, 1], target[i, 1]]
B = [source[i, 3], target[i, 3]]
if not is_xl:
for i in range(batchSize):
# Brightness
source[i][0] += target[i][0] * bri
# Contrast
source[i][0] += NoiseMethods.get_delta(source[i][0]) * con
BRIGHTNESS[0] += BRIGHTNESS[1] * mods[0]
BRIGHTNESS[0] += get_delta(BRIGHTNESS[0]) * mods[1]
# R
source[i][2] -= target[i][2] * r
# G
source[i][1] += target[i][1] * g
# B
source[i][3] -= target[i][3] * b
R[0] -= R[1] * mods[3]
G[0] += G[1] * mods[4]
B[0] -= B[1] * mods[5]
# Saturation
source[i][2] *= sat
source[i][1] *= sat
source[i][3] *= sat
R[0] *= mods[2]
G[0] *= mods[2]
B[0] *= mods[2]
else:
# Note: g and b are passed in swapped order here
cb, cr = RGB_2_CbCr(r, b, g)
for i in range(batchSize):
# Brightness
source[i][0] += target[i][0] * bri
# Contrast
source[i][0] += NoiseMethods.get_delta(source[i][0]) * con
#CbCr
source[i][1] -= target[i][1] * cr
source[i][2] += target[i][2] * cb
# Saturation
source[i][1] *= sat
source[i][2] *= sat
return original_callback(self, d)
KDiffusionSampler.callback_state = cc_callback
def restore_callback():
KDiffusionSampler.callback_state = original_callback
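The module patches `KDiffusionSampler.callback_state` and keeps a reference to the original so `restore_callback` can undo the hook. The pattern in miniature, with a stand-in `Sampler` class:

```python
class Sampler:
    def callback_state(self, d):
        return f"base:{d}"

# keep a handle to the original, as the extension does with
# KDiffusionSampler.callback_state
original_callback = Sampler.callback_state

def cc_callback(self, d):
    # ...color corrections would happen here, then defer to the original
    return original_callback(self, d)

Sampler.callback_state = cc_callback           # install the hook
assert Sampler().callback_state(1) == "base:1"

Sampler.callback_state = original_callback     # restore_callback()
assert Sampler.callback_state is original_callback
```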


@@ -1,23 +1,23 @@
import modules.scripts as scripts
import gradio as gr
import os
WHEEL = scripts.basedir() + '/scripts/Vectorscope.png'
DOT = scripts.basedir() + '/scripts/dot.png'
WHEEL = os.path.join(scripts.basedir(), "scripts", "Vectorscope.png")
DOT = os.path.join(scripts.basedir(), "scripts", "dot.png")
def create_colorpicker(is_img):
gr.Image(WHEEL, type='filepath', interactive=False, container=False, elem_id='cc-colorwheel-' + ('img' if is_img else 'txt'))
gr.Image(DOT, type='filepath', interactive=False, container=False, elem_id='cc-temp-' + ('img' if is_img else 'txt'))
def horizontal_js(is_img):
mag = '(Math.abs(r) + Math.abs(g) + Math.abs(b))'
calc = '25 * Math.sqrt(r*r+g*g+b*b) * (r * -0.5 + g * -0.5 + b * 1.0) / ' + mag
cond = '(' + mag + ' === 0 ? 0 : ' + calc + ')'
def create_colorpicker(is_img: bool):
return "(r, g, b) => {document.getElementById('cc-dot-" + ('img' if is_img else 'txt') + "').style.left = 'calc(50% + ' +(" + cond + "- 12) + 'px)'}"
gr.Image(
WHEEL,
interactive=False,
container=False,
elem_id=f"cc-colorwheel-{'img' if is_img else 'txt'}",
)
def vertical_js(is_img):
mag = '(Math.abs(r) + Math.abs(g) + Math.abs(b))'
calc = '25 * Math.sqrt(r*r+g*g+b*b) * (r * -0.866 + g * 0.866 + b * 0.0) / ' + mag
cond = '(' + mag + ' === 0 ? 0 : ' + calc + ')'
return "(r, g, b) => {document.getElementById('cc-dot-" + ('img' if is_img else 'txt') + "').style.top = 'calc(50% + ' +(" + cond + "- 12) + 'px)'}"
gr.Image(
DOT,
interactive=False,
container=False,
elem_id=f"cc-temp-{'img' if is_img else 'txt'}",
)


@@ -1,12 +1,10 @@
class Param():
def __init__(self, minimum, maximum, default):
class Param:
def __init__(self, minimum: float, maximum: float, default: float):
self.minimum = minimum
self.maximum = maximum
self.default = default
Brightness = Param(-6.0, 6.0, 0.0)
Brightness = Param(-5.0, 5.0, 0.0)
Contrast = Param(-5.0, 5.0, 0.0)
Saturation = Param(0.2, 1.8, 1.0)
R = Param(-4.0, 4.0, 0.0)
G = Param(-4.0, 4.0, 0.0)
B = Param(-4.0, 4.0, 0.0)
Saturation = Param(0.25, 1.75, 1.0)
COLOR = Param(-4.0, 4.0, 0.0)
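The new constants collapse the three per-channel ranges into one shared `COLOR` param. A small usage sketch; the `clamp` method is my addition for illustration, not part of the extension:

```python
class Param:
    def __init__(self, minimum: float, maximum: float, default: float):
        self.minimum = minimum
        self.maximum = maximum
        self.default = default

    def clamp(self, value: float) -> float:
        # hypothetical helper: keep a slider value inside the allowed range
        return min(self.maximum, max(self.minimum, value))

COLOR = Param(-4.0, 4.0, 0.0)  # shared range for R / G / B after this change
assert COLOR.clamp(7.5) == 4.0
assert COLOR.clamp(-9.0) == -4.0
assert COLOR.clamp(COLOR.default) == 0.0
```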


@@ -6,13 +6,14 @@ import cv2 as cv
from modules.processing import process_images, get_fixed_seed
from copy import copy
# https://docs.opencv.org/4.8.0/d2/df0/tutorial_py_hdr.html
def merge_HDR(imgs:list, path:str, depth:str, fmt:str, gamma:float):
def merge_HDR(imgs: list, path: str, depth: str, fmt: str, gamma: float):
import datetime
import math
import os
output_folder = os.path.join(path, 'hdr')
output_folder = os.path.join(path, "hdr")
if not os.path.exists(output_folder):
os.makedirs(output_folder)
@@ -22,20 +23,25 @@ def merge_HDR(imgs:list, path:str, depth:str, fmt:str, gamma:float):
hdr = merge.process(imgs_np)
hdr += math.ceil(0 - np.min(hdr) * 1000) / 1000
#print(f'{np.min(hdr)}, {np.max(hdr)}')
# print(f'{np.min(hdr)}, {np.max(hdr)}')
target = (65535 if depth == '16bpc' else 255)
precision = ('uint16' if depth == '16bpc' else 'uint8')
target = 65535 if depth == "16bpc" else 255
precision = "uint16" if depth == "16bpc" else "uint8"
hdr = np.power(hdr, (1 / gamma))
ldr = np.clip(hdr * target, 0, target).astype(precision)
rgb = cv.cvtColor(ldr, cv.COLOR_BGR2RGB)
cv.imwrite(os.path.join(output_folder, f'{datetime.datetime.now().strftime("%H-%M-%S")}{fmt}'), rgb)
cv.imwrite(
os.path.join(
output_folder, f'{datetime.datetime.now().strftime("%H-%M-%S")}{fmt}'
),
rgb,
)
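The tail of `merge_HDR` gamma-corrects, rescales to the chosen bit depth, and clips before writing. A pure-Python sketch of that mapping on scalar values (the real code operates on NumPy arrays before handing them to OpenCV):

```python
def tone_map(values, depth: str, gamma: float):
    # Sketch of the LDR conversion in merge_HDR: gamma-correct, scale to
    # the chosen bit depth, and clip into range.
    target = 65535 if depth == "16bpc" else 255
    return [min(target, max(0, int((v ** (1 / gamma)) * target)))
            for v in values]

assert tone_map([0.0, 0.25, 1.0, 2.0], "8bpc", 1.0) == [0, 63, 255, 255]
assert tone_map([1.0], "16bpc", 2.2) == [65535]
```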
class VectorHDR(scripts.Script):
def title(self):
return "High Dynamic Range"
@@ -47,29 +53,41 @@ class VectorHDR(scripts.Script):
count = gr.Slider(label="Brackets", minimum=3, maximum=9, step=2, value=7)
gap = gr.Slider(label="Gaps", minimum=0.50, maximum=2.50, step=0.25, value=1.50)
with gr.Accordion("Merge Options", elem_id='vec-hdr-' + ('img' if is_img2img else 'txt'), open=False):
with gr.Accordion("Merge Options", elem_id="vec-hdr-" + ("img" if is_img2img else "txt"), open=False):
auto = gr.Checkbox(label="Automatically Merge", value=True)
with gr.Row():
depth = gr.Radio(['16bpc', '8bpc'], label="Bit Depth", value="16bpc")
fmt = gr.Radio(['.tiff', '.png'], label="Image Format", value=".tiff")
depth = gr.Radio(["16bpc", "8bpc"], label="Bit Depth", value="16bpc")
fmt = gr.Radio([".tiff", ".png"], label="Image Format", value=".tiff")
gamma = gr.Slider(label="Gamma", info='Lower: Darker | Higher: Brighter',minimum=0.2, maximum=2.2, step=0.2, value=1.2)
gamma = gr.Slider(
label="Gamma",
info="Lower: Darker | Higher: Brighter",
minimum=0.2,
maximum=2.2,
step=0.2,
value=1.2,
)
for comp in [count, gap, auto, depth, fmt, gamma]:
comp.do_not_save_to_config = True
return [count, gap, auto, depth, fmt, gamma]
def run(self, p, count:int, gap:float, auto:bool, depth:str, fmt:str, gamma:float):
def run(self, p, count: int, gap: float, auto: bool, depth: str, fmt: str, gamma: float):
center = count // 2
p.seed = get_fixed_seed(p.seed)
p.scripts.script('vectorscope cc').xyzCache.update({
'Enable':'True',
'Alt':'True',
'Brightness': 0,
'DoHR': 'False',
'Method': 'Ones',
'Scaling': '1 - Cos'
})
p.scripts.script("vectorscope cc").xyzCache.update(
{
"Enable": "True",
"Alt": "True",
"Brightness": 0,
"DoHR": "False",
"Method": "Ones",
"Scaling": "1 - Cos",
}
)
baseline = process_images(p)
pc = copy(p)
@@ -83,14 +101,9 @@ class VectorHDR(scripts.Script):
if it == center:
continue
pc.scripts.script('vectorscope cc').xyzCache.update({
'Enable':'True',
'Alt':'True',
'Brightness': brackets[it],
'DoHR': 'False',
'Method': 'Ones',
'Scaling': '1 - Cos'
})
pc.scripts.script("vectorscope cc").xyzCache.update(
{"Brightness": brackets[it]}
)
proc = process_images(pc)
imgs[it] = proc.images[0]
@@ -98,10 +111,12 @@ class VectorHDR(scripts.Script):
if not auto:
baseline.images = imgs
return baseline
else:
merge_HDR(imgs, p.outpath_samples, depth, fmt, gamma)
return baseline
def brightness_brackets(count, gap):
half = count // 2
return [gap * (i - half) for i in range(count)]
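`brightness_brackets` produces evenly spaced exposure offsets centered on zero, so the middle bracket is always the unmodified baseline shot:

```python
def brightness_brackets(count: int, gap: float):
    # copy of the helper above: evenly spaced offsets centered on zero,
    # e.g. 5 brackets at gap 1.0 -> [-2, -1, 0, 1, 2]
    half = count // 2
    return [gap * (i - half) for i in range(count)]

assert brightness_brackets(5, 1.0) == [-2.0, -1.0, 0.0, 1.0, 2.0]
assert brightness_brackets(7, 1.5)[3] == 0.0   # middle bracket is the baseline
```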


@@ -1,50 +0,0 @@
from modules import devices
import torch
def get_delta(latent):
mean = torch.mean(latent)
return torch.sub(latent, mean)
def to_abs(latent):
return torch.abs(latent)
def zeros(latent):
return torch.zeros_like(latent)
def ones(latent):
return torch.ones_like(latent)
def gaussian_noise(latent):
return torch.rand_like(latent)
def normal_noise(latent):
return torch.randn_like(latent)
def multires_noise(latent, use_zero:bool, iterations=8, discount=0.4):
"""
Reference: https://wandb.ai/johnowhitaker/multires_noise/reports/Multi-Resolution-Noise-for-Diffusion-Model-Training--VmlldzozNjYyOTU2
Credit: Kohya_SS
"""
noise = zeros(latent) if use_zero else ones(latent)
batchSize = noise.size(0)
height = noise.size(2)
width = noise.size(3)
device = devices.get_optimal_device()
upsampler = torch.nn.Upsample(size=(height, width), mode="bilinear").to(device)
for b in range(batchSize):
for i in range(iterations):
r = torch.rand(1).item() * 2 + 2
wn = max(1, int(width / (r**i)))
hn = max(1, int(height / (r**i)))
for c in range(4):
noise[b, c] += upsampler(torch.randn(1, 1, hn, wn).to(device))[0, 0] * discount**i
if wn == 1 or hn == 1:
break
return noise / noise.std()


@@ -1,26 +1,32 @@
from math import cos, sin, pi
def apply_scaling(alg:str, current_step:int, total_steps:int, bri:float, con:float, sat:float, r:float, g:float, b:float):
ratio = float(current_step / total_steps)
rad = ratio * pi / 2
def apply_scaling(
alg: str,
current_step: int,
total_steps: int,
bri: float,
con: float,
sat: float,
r: float,
g: float,
b: float,
) -> list:
mod = 1.0
if alg == "Flat":
mod = 1.0
match alg:
case "Cos":
mod = cos(rad)
case "Sin":
mod = sin(rad)
case "1 - Cos":
mod = (1 - cos(rad))
case "1 - Sin":
mod = (1 - sin(rad))
else:
ratio = float(current_step / total_steps)
rad = ratio * pi / 2
return [
bri * mod,
con * mod,
(sat - 1) * mod + 1,
r * mod,
g * mod,
b * mod
]
match alg:
case "Cos":
mod = cos(rad)
case "Sin":
mod = sin(rad)
case "1 - Cos":
mod = (1 - cos(rad))
case "1 - Sin":
mod = (1 - sin(rad))
return [bri * mod, con * mod, (sat - 1.0) * mod + 1.0, r * mod, g * mod, b * mod]
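The rewritten scheduler keeps a flat modifier of 1.0 and only computes the trig ramp otherwise. A condensed, runnable copy for experimenting with the curves:

```python
from math import cos, sin, pi

def apply_scaling(alg, current_step, total_steps, bri, con, sat, r, g, b):
    # "Flat" keeps a constant modifier; the trig curves ramp the
    # effect up or down over the sampling steps.
    mod = 1.0
    if alg != "Flat":
        rad = (current_step / total_steps) * pi / 2
        mod = {"Cos": cos(rad), "Sin": sin(rad),
               "1 - Cos": 1 - cos(rad), "1 - Sin": 1 - sin(rad)}[alg]
    return [bri * mod, con * mod, (sat - 1.0) * mod + 1.0,
            r * mod, g * mod, b * mod]

# "Flat" passes every value through unchanged:
assert apply_scaling("Flat", 5, 20, 1.0, 2.0, 1.5, 0.1, 0.2, 0.3) == \
    [1.0, 2.0, 1.5, 0.1, 0.2, 0.3]

# "Sin" starts at zero effect and ends at full effect:
assert apply_scaling("Sin", 0, 20, 2.0, 0, 1.5, 0, 0, 0)[0] == 0.0
mods = apply_scaling("Sin", 20, 20, 2.0, 0, 1.5, 0, 0, 0)
assert abs(mods[0] - 2.0) < 1e-9 and abs(mods[2] - 1.5) < 1e-9
```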


@@ -1,6 +1,6 @@
from modules import script_callbacks, shared
def on_ui_settings():
shared.opts.add_option("cc_metadata", shared.OptionInfo(False, "Add Vectorscope CC parameters to generation information", section=("infotext", "Infotext")))
shared.opts.add_option("cc_metadata", shared.OptionInfo(True, "Append Vectorscope CC parameters to generation information", section=("infotext", "Infotext")))
script_callbacks.on_ui_settings(on_ui_settings)


@@ -1,76 +1,94 @@
import modules.scripts as scripts
import scripts.cc_const as const
import json
import os
STYLE_FILE = scripts.basedir() + '/' + 'styles.json'
EMPTY_STYLE = {
'styles' : {},
'deleted' : {}
}
STYLE_FILE = os.path.join(scripts.basedir(), "styles.json")
class StyleManager():
def __init__(self):
self.STYLE_SHEET = None
EMPTY_STYLE = {"styles": {}, "deleted": {}}
def load_styles(self):
if self.STYLE_SHEET is not None:
return
try:
with open(STYLE_FILE, 'r') as json_file:
self.STYLE_SHEET = json.loads(json_file.read())
print('[Vec. CC] Style Sheet Loaded...')
except IOError:
with open(STYLE_FILE, 'w+') as json_file:
self.STYLE_SHEET = EMPTY_STYLE
json_file.write(json.dumps(self.STYLE_SHEET))
print('[Vec. CC] Creating Empty Style Sheet...')
class StyleManager:
def __init__(self):
self.STYLE_SHEET = None
def list_style(self):
return list(self.STYLE_SHEET['styles'].keys())
def load_styles(self):
if self.STYLE_SHEET is not None:
return
def get_style(self, style_name):
try:
style = self.STYLE_SHEET['styles'][style_name]
return style['alt'], style['brightness'], style['contrast'], style['saturation'], style['rgb'][0], style['rgb'][1], style['rgb'][2]
except KeyError:
print(f'\n[Warning] No Style of Name "{style_name}" Found!\n')
return False, const.Brightness.default, const.Contrast.default, const.Saturation.default, const.R.default, const.G.default, const.B.default
try:
with open(STYLE_FILE, "r", encoding="utf-8") as json_file:
self.STYLE_SHEET = json.loads(json_file.read())
print("[Vec. CC] Style Sheet Loaded...")
def save_style(self, style_name, latent, bri, con, sat, r, g, b):
if style_name in self.STYLE_SHEET['styles'].keys():
print(f'\n[Warning] Duplicated Style Name "{style_name}" Detected! Values are not saved!\n')
return self.list_style()
except IOError:
with open(STYLE_FILE, "w+", encoding="utf-8") as json_file:
self.STYLE_SHEET = EMPTY_STYLE
json_file.write(json.dumps(self.STYLE_SHEET))
print("[Vec. CC] Creating Empty Style Sheet...")
style = {
'alt' : latent,
'brightness' : bri,
'contrast' : con,
'saturation' : sat,
'rgb' : [r, g, b]
}
def list_style(self):
return list(self.STYLE_SHEET["styles"].keys())
self.STYLE_SHEET['styles'].update({style_name:style})
def get_style(self, style_name):
try:
style = self.STYLE_SHEET["styles"][style_name]
return (
style["alt"],
style["brightness"],
style["contrast"],
style["saturation"],
style["rgb"][0],
style["rgb"][1],
style["rgb"][2],
)
with open(STYLE_FILE, 'w+') as json_file:
json_file.write(json.dumps(self.STYLE_SHEET))
except KeyError:
print(f'\n[Warning] No Style of Name "{style_name}" Found!\n')
return (
False,
const.Brightness.default,
const.Contrast.default,
const.Saturation.default,
const.COLOR.default,
const.COLOR.default,
const.COLOR.default,
)
print(f'\nStyle of Name "{style_name}" Saved!\n')
return self.list_style()
def save_style(self, style_name, latent, bri, con, sat, r, g, b):
if style_name in self.STYLE_SHEET["styles"].keys():
print(f'\n[Warning] Duplicated Style Name "{style_name}" Detected! Values were not saved!\n')
return self.list_style()
def delete_style(self, style_name):
try:
style = self.STYLE_SHEET['styles'][style_name]
del self.STYLE_SHEET['styles'][style_name]
except KeyError:
print(f'\n[Warning] No Style of Name "{style_name}" Found!\n')
return self.list_style()
style = {
"alt": latent,
"brightness": bri,
"contrast": con,
"saturation": sat,
"rgb": [r, g, b],
}
self.STYLE_SHEET['deleted'].update({style_name:style})
self.STYLE_SHEET["styles"].update({style_name: style})
with open(STYLE_FILE, 'w+') as json_file:
json_file.write(json.dumps(self.STYLE_SHEET))
with open(STYLE_FILE, "w+", encoding="utf-8") as json_file:
json_file.write(json.dumps(self.STYLE_SHEET))
print(f'\nStyle of Name "{style_name}" Deleted!\n')
return self.list_style()
print(f'\nStyle of Name "{style_name}" Saved!\n')
return self.list_style()
def delete_style(self, style_name):
try:
style = self.STYLE_SHEET["styles"][style_name]
self.STYLE_SHEET["deleted"].update({style_name: style})
del self.STYLE_SHEET["styles"][style_name]
except KeyError:
print(f'\n[Warning] No Style of Name "{style_name}" Found!\n')
return self.list_style()
with open(STYLE_FILE, "w+", encoding="utf-8") as json_file:
json_file.write(json.dumps(self.STYLE_SHEET))
print(f'\nStyle of Name "{style_name}" Deleted!\n')
return self.list_style()
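A sketch of the `styles.json` layout that `StyleManager` reads and writes; note that `delete_style` moves entries under `"deleted"` rather than discarding them. The field values below are made up for illustration:

```python
import json

# minimal sheet mirroring STYLE_FILE's structure
sheet = {"styles": {}, "deleted": {}}
sheet["styles"]["warm"] = {
    "alt": False,          # the "Alt" / latent toggle
    "brightness": 1.0,
    "contrast": 0.5,
    "saturation": 1.2,
    "rgb": [0.5, 0.0, -0.5],
}

# delete_style moves the entry instead of dropping it
sheet["deleted"]["warm"] = sheet["styles"].pop("warm")
assert "warm" in sheet["deleted"] and not sheet["styles"]

# the sheet survives a JSON round trip unchanged
assert json.loads(json.dumps(sheet)) == sheet
```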


@@ -1,20 +0,0 @@
from modules import script_callbacks
import modules.scripts as scripts
import json
VERSION = 'v1.5.1'
def clean_outdated(EXT_NAME:str):
with open(scripts.basedir() + '/' + 'ui-config.json', 'r', encoding='utf8') as json_file:
configs = json.loads(json_file.read())
cleaned_configs = {key: value for key, value in configs.items() if EXT_NAME not in key}
with open(scripts.basedir() + '/' + 'ui-config.json', 'w', encoding='utf8') as json_file:
json.dump(cleaned_configs, json_file)
def refresh_sliders():
clean_outdated('cc.py')
clean_outdated('cc_hdr.py')
script_callbacks.on_before_ui(refresh_sliders)
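The removed `clean_outdated` helper filtered stale slider entries out of `ui-config.json` by a substring match on the key. A self-contained re-creation against a throwaway file:

```python
import json
import os
import tempfile

def clean_outdated(path: str, ext_name: str):
    # drop entries whose key mentions the extension's script file,
    # so outdated slider defaults are regenerated on next launch
    with open(path, "r", encoding="utf8") as f:
        configs = json.load(f)
    cleaned = {k: v for k, v in configs.items() if ext_name not in k}
    with open(path, "w", encoding="utf8") as f:
        json.dump(cleaned, f)

# usage on a throwaway file
path = os.path.join(tempfile.mkdtemp(), "ui-config.json")
with open(path, "w", encoding="utf8") as f:
    json.dump({"cc.py/Brightness": 0, "txt2img/Sampler": "Euler"}, f)

clean_outdated(path, "cc.py")
with open(path, encoding="utf8") as f:
    assert list(json.load(f)) == ["txt2img/Sampler"]
```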


@@ -1,21 +1,37 @@
import modules.scripts as scripts
def grid_reference():
for data in scripts.scripts_data:
if data.script_class.__module__ == 'xyz_grid.py' and hasattr(data, "module"):
if data.script_class.__module__ == "xyz_grid.py" and hasattr(data, "module"):
return data.module
def xyz_support(cache):
raise SystemError("Could not find X/Y/Z Plot...")
def xyz_support(cache: dict):
def apply_field(field):
def _(p, x, xs):
cache.update({field : x})
cache.update({field: x})
return _
def choices_bool():
return ["False", "True"]
def choices_method():
return ["Disabled", "Straight", "Straight Abs.", "Cross", "Cross Abs.", "Ones", "N.Random", "U.Random", "Multi-Res", "Multi-Res Abs."]
return [
"Disabled",
"Straight",
"Straight Abs.",
"Cross",
"Cross Abs.",
"Ones",
"N.Random",
"U.Random",
"Multi-Res",
"Multi-Res Abs.",
]
def choices_scaling():
return ["Flat", "Cos", "Sin", "1 - Cos", "1 - Sin"]
@@ -34,7 +50,7 @@ def xyz_support(cache):
xyz_grid.AxisOption("[Adv.CC] Proc. H.Fix", str, apply_field("DoHR"), choices=choices_bool),
xyz_grid.AxisOption("[Adv.CC] Method", str, apply_field("Method"), choices=choices_method),
xyz_grid.AxisOption("[Adv.CC] Scaling", str, apply_field("Scaling"), choices=choices_scaling),
xyz_grid.AxisOption("[Adv.CC] Randomize", int, apply_field("Random"))
xyz_grid.AxisOption("[Adv.CC] Randomize", int, apply_field("Random")),
]
xyz_grid.axis_options.extend(extra_axis_options)

Binary file not shown (129 B before, 2.7 KiB after).


@@ -1,37 +1,37 @@
#cc-dot-txt {
#cc-dot-txt, #cc-dot-img {
position: absolute;
width: 24px;
height: 24px;
pointer-events: none;
}
#cc-dot-img {
position: absolute;
width: 24px;
height: 24px;
pointer-events: none;
}
#cc-img-txt {
#cc-img-txt, #cc-img-img {
cursor: pointer;
height: 100%;
width: auto;
margin: auto;
}
#cc-img-img {
cursor: pointer;
#vec-cc-txt, #vec-cc-img {
user-select: none;
}
#vec-cc-txt button {
border-radius: 0.5em;
}
#vec-cc-txt label {
border-radius: 0.5em;
}
#vec-cc-img button {
border-radius: 0.5em;
}
#vec-cc-img label {
#vec-cc-txt button, #vec-cc-txt label {
border-radius: 0.5em;
}
#vec-cc-img button, #vec-cc-img label {
border-radius: 0.5em;
}
#vec-cc-txt fieldset > div {
gap: 0.2em 0.4em;
}
#vec-cc-img fieldset > div {
gap: 0.2em 0.4em;
}
#vec-hdr-txt label, #vec-hdr-img label {
border-radius: 0.5em;
}