Haoming 2024-09-20 22:28:19 +08:00 committed by GitHub
commit 0329658686
23 changed files with 428 additions and 281 deletions


@ -1,3 +1,6 @@
### v2.3.0 - 2024 Sep.20
- Refactor
### v2.2.6 - 2024 Sep.18
- Allow disabling `do_not_save_to_config` to use **Defaults**

README.md

@ -1,28 +1,99 @@
# SD Webui Vectorscope CC
This is an Extension for the [Automatic1111 Webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui), which performs a kind of **Offset Noise** natively during inference, allowing you to adjust the brightness, contrast, and color of the generations.
> Also supports both old & new [Forge](https://github.com/lllyasviel/stable-diffusion-webui-forge)
> [Sample Images](#sample-images)
## Example Images
<p align="center">
<img src="samples/00.jpg" width=256><br>
<code>Base Image w/o Extension</code>
</p>
<details>
<summary>Infotext</summary>
- **Checkpoint:** [realisticVisionV51](https://civitai.com/models/4201?modelVersionId=130072)
- **Positive Prompt:** `(high quality, best quality), a 4k cinematic photo of a gentleman in suit, street in a city at night, (depth of field, bokeh)`
- **Negative Prompt:** `(low quality, worst quality:1.2), [EasyNegative, EasyNegativeV2]`
```
Steps: 32, Sampler: DPM++ 2M Karras, CFG scale: 7.5, Seed: 3709157017, Size: 512x512, Denoising strength: 0.5
Clip skip: 2, Token merging ratio: 0.2, Token merging ratio hr: 0.2, RNG: CPU, NGMS: 4
Hires upscale: 2, Hires steps: 16, Hires upscaler: 2xNomosUni_esrgan_multijpg
```
</details>
<table>
<thead>
<tr align="center">
<td><b>Vibrant</b></td>
<td><b>Cold</b></td>
<td><b>"Movie when Mexico"</b></td>
</tr>
</thead>
<tbody>
<tr align="center">
<td><img src="samples/01.jpg" width=512></td>
<td><img src="samples/02.jpg" width=512></td>
<td><img src="samples/03.jpg" width=512></td>
</tr>
<tr align="left">
<td>
<ul>
<li><b>Alt:</b> <code>True</code></li>
<li><b>Saturation:</b> <code>1.75</code></li>
<li><b>Noise:</b> <code>Ones</code></li>
<li><b>Scaling:</b> <code>1 - Cos</code></li>
</ul>
</td>
<td>
<ul>
<li><b>Brightness:</b> <code>-5.0</code></li>
<li><b>Contrast:</b> <code>2.5</code></li>
<li><b>Saturation:</b> <code>0.75</code></li>
<li><b>R:</b> <code>-3.0</code></li>
<li><b>B:</b> <code>3.0</code></li>
<li><b>Noise:</b> <code>Ones</code></li>
<li><b>Scaling:</b> <code>1 - Sin</code></li>
</ul>
</td>
<td>
<ul>
<li><b>Brightness:</b> <code>2.5</code></li>
<li><b>Contrast:</b> <code>-2.5</code></li>
<li><b>Saturation:</b> <code>1.25</code></li>
<li><b>R:</b> <code>1.5</code></li>
<li><b>G:</b> <code>3.0</code></li>
<li><b>B:</b> <code>-4.0</code></li>
<li><b>Noise:</b> <code>Ones</code></li>
<li><b>Scaling:</b> <code>1 - Sin</code></li>
</ul>
</td>
</tr>
</tbody>
</table>
## How to Use
After installing this Extension, you will see a new section in both **txt2img** and **img2img** tabs.
Refer to the parameters and sample images below and play around with the values.
> **Note:** Since this Extension modifies the underlying latent tensor, the composition may change drastically depending on the parameters
### Basic Parameters
- **Enable:** Enable the Extension 💀
- **Alt:** Cause the Extension effects to be stronger
<details>
<summary><i>Technical Detail</i></summary>
- This parameter makes the Extension modify the `denoised` Tensor instead of the `x` Tensor
</details>
- **Brightness**, **Contrast**, **Saturation**: Adjust the overall `brightness` / `contrast` / `saturation` of the image
#### Color Channels
- Comes with a Color Wheel for visualization
- You can also click and drag on the Color Wheel to select a color directly
> The color picker isn't 100% accurate for **SDXL**, due to 3 layers of conversions...
<table>
<thead align="center">
@ -51,25 +122,39 @@ Refer to the parameters and sample images below and play around with the values.
</tbody>
</table>
#### Style Presets
- To apply a Style, select from the `Dropdown` then click **Apply Style**
- To save a Style, enter a name in the `Textbox` then click **Save Style**
- To delete a Style, enter the name in the `Textbox` then click **Delete Style**
- *A deleted Style is still kept in the `styles.json` in case you wish to retrieve it*
- Click **Refresh Style** to update the `Dropdown` if you edited the `styles.json` manually
<blockquote>
You can also find pre-made Styles by the community available online<br>
<ul>
<li><b>eg.</b>
The <a href="https://raw.githubusercontent.com/sandner-art/Photomatix/refs/heads/main/PX-Vectorscope-CC-Styles/styles.json">Photomatix</a>
Styles <i>(right click on the link, click <code>Save link as</code>, then save the <code>.json</code> file into the
<b>sd-webui-vectorscope-cc</b> extension folder)</i>
</li>
</ul>
</blockquote>
### Advanced Parameters
- **Process Hires. fix:** Enable this option to process during the **Hires. fix** phase too
- By default, this Extension only functions during the regular phase of the `txt2img` mode
- **Process ADetailer:** Enable this option to process during the **[ADetailer](https://github.com/Bing-su/adetailer)** phase too
    - Doing so usually causes a square region of inconsistent colors
- **Randomize using Seed:** Enable this option to use the current generation `seed` to randomize the basic parameters
- Randomized results will be printed in the console
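Seed-based randomization can be sketched roughly as follows; the helper name and value ranges here are assumptions for illustration, not the Extension's actual bounds:

```python
from random import Random

def randomize_params(generation_seed: int) -> dict[str, float]:
    """Sketch: derive the basic parameters from the generation seed.
    The value ranges are assumptions, not the Extension's actual bounds."""
    rng = Random(generation_seed)  # deterministic per seed
    return {
        "Brightness": round(rng.uniform(-4.0, 4.0), 2),
        "Contrast": round(rng.uniform(-4.0, 4.0), 2),
        "Saturation": round(rng.uniform(0.5, 1.5), 2),
        "R": round(rng.uniform(-4.0, 4.0), 2),
        "G": round(rng.uniform(-4.0, 4.0), 2),
        "B": round(rng.uniform(-4.0, 4.0), 2),
    }
```

The same seed always yields the same values, which is what makes the randomized results reproducible across reruns.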
#### Noise Settings
> let **`x`** denote the latent Tensor ; let **`y`** denote the operations
- **Straight:** All operations are calculated on the same Tensor
- `x += x * y`
@ -77,9 +162,9 @@ Refer to the parameters and sample images below and play around with the values.
- `x += x' * y`
- **Ones:** All operations are calculated on a Tensor filled with ones
- `x += 1 * y`
- **N.Random:** All operations are calculated on a Tensor filled with random values from a normal distribution
- `x += randn() * y`
- **U.Random:** All operations are calculated on a Tensor filled with random values from a uniform distribution
- `x += rand() * y`
- **Multi-Res:** All operations are calculated on a Tensor generated with the multi-res noise algorithm
- `x += multires() * y`
@ -87,153 +172,194 @@ Refer to the parameters and sample images below and play around with the values.
- `x += abs(F) * y`
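The noise sources above can be illustrated with a simplified, per-element Python sketch (scalar stand-ins for the tensor math; the `Cross` and `Multi-Res` variants are omitted for brevity):

```python
import random

def offset_source(method: str, x: float) -> float:
    """Per-element value of the tensor each operation is calculated on."""
    if method == "Straight":
        return x                        # the same tensor
    if method == "Ones":
        return 1.0                      # a tensor of ones
    if method == "N.Random":
        return random.gauss(0.0, 1.0)   # normal distribution
    if method == "U.Random":
        return random.random()          # uniform distribution
    raise ValueError(f"unknown method: {method}")

def apply_offset(x: float, method: str, y: float) -> float:
    # every method reduces to: x += source * y
    return x + offset_source(method, x) * y
```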
<p align="center">
<img src="samples/method.jpg">
</p>
<details>
<summary>Infotext</summary>
- **Checkpoint:** [realisticVisionV51](https://civitai.com/models/4201?modelVersionId=130072)
- **Positive Prompt:** `(high quality, best quality), a 4k photo of a cute dog running in the snow, mountains, day, (depth of field, bokeh)`
- **Negative Prompt:** `(low quality, worst quality:1.2), [EasyNegative, EasyNegativeV2]`
- **Brightness:** `2.5`
- **Contrast:** `2.5`
- **Alt:** `True`
- **Scaling:** `1 - Cos`
```
Steps: 24, Sampler: DPM++ 2M Karras, CFG scale: 7.5, Seed: 1257068736, Size: 512x512, Denoising strength: 0.5
Clip skip: 2, Token merging ratio: 0.2, Token merging ratio hr: 0.2, RNG: CPU, NGMS: 4
Hires upscale: 1.5, Hires steps: 16, Hires upscaler: 2xNomosUni_esrgan_multijpg
```
</details>
#### Scaling Settings
By default, this Extension offsets the noise by the same amount every step. But depending on the `Sampler` and `Scheduler` used, and whether `Alt.` was enabled or not, the effects might be too strong during the early or the later phase of the process, which in turn causes artifacts.
> Essentially, the "magnitude" of the default Tensor gets smaller every step, so offsetting by the same amount will have stronger effects at the later steps.
- **Flat:** Default behavior
- **Cos:** Cosine scaling `(High -> Low)`
- **Sin:** Sine scaling `(Low -> High)`
- **1 - Cos:** `(Low -> High)`
- **1 - Sin:** `(High -> Low)`
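A minimal sketch of the per-step multipliers, assuming the step ratio is mapped onto a quarter period of the trig curves (`rad = ratio * pi / 2`, as in the `apply_scaling` hunk later in this diff):

```python
from math import cos, sin, pi

def scaling_multiplier(alg: str, current_step: int, total_steps: int) -> float:
    """Per-step multiplier applied to the offset amount."""
    if alg == "Flat":
        return 1.0
    # map the progress ratio onto a quarter period
    rad = (current_step / total_steps) * pi / 2
    return {
        "Cos": cos(rad),            # High -> Low
        "Sin": sin(rad),            # Low -> High
        "1 - Cos": 1.0 - cos(rad),  # Low -> High
        "1 - Sin": 1.0 - sin(rad),  # High -> Low
    }[alg]
```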
<p align="center">
<img src="samples/scaling.jpg">
</p>
## Sample Images
- **Checkpoint:** [Animagine XL V3](https://civitai.com/models/260267)
- **Pos. Prompt:** `[high quality, best quality], 1girl, solo, casual, night, street, city, <lora:SDXL_Lightning_8steps:1>`
- **Neg. Prompt:** `lowres, [low quality, worst quality], jpeg`
- `Euler A SGMUniform`; `10 steps`; `2.0 CFG`; **Seed:** `2836968120`
- `Multi-Res Abs.` ; `Cos`
<details>
<summary>Infotext</summary>
<p align="center">
<code>Disabled</code><br>
<img src="samples/00.jpg" width=768>
</p>
<p align="center">
<code>Brightness: 2.0 ; Contrast: -0.5 ; Saturation: 1.5<br>
R: 2.5; G: 1.5; B: -3</code><br>
<img src="samples/01.jpg" width=768>
</p>
<p align="center">
<code>Brightness: -2.5 ; Contrast: 1 ; Saturation: 0.75<br>
R: -1.5; G: -1.5; B: 4</code><br>
<img src="samples/02.jpg" width=768>
</p>
</details>
### Buttons
- **Reset:** Reset all `Basic` and `Advanced` parameters to the default values
- **Randomize:** Randomize the `Brightness`, `Contrast`, `Saturation`, `R`, `G`, `B` parameters
## Roadmap
- [X] Extension Released!
- [X] Add Support for **X/Y/Z Plot**
- [X] Implement different **Noise** functions
- [X] Implement **Randomize** button
- [X] Implement **Style** Presets
- [X] Implement **Color Wheel** & **Color Picker**
- [X] Implement better scaling algorithms
- [X] Add API Docs
<p align="center">
<code>X/Y/Z Plot Support</code><br>
<img src="samples/XYZ.jpg">
</p>
<p align="center">
<code>Randomize</code><br>
<img src="samples/Random.jpg"><br>
The value is used as the random seed<br>You can refer to the console to see the randomized values</p>
- [X] Append Parameters to Infotext
- [X] Improved Infotext Support *(by. [catboxanon](https://github.com/catboxanon))*
- [X] Add **HDR** Script
- [X] Add Support for **SDXL**
- [ ] Implement Gradient features
## API
You can use this Extension via [API](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/API) by adding an entry to the `alwayson_scripts` of your payload. An [example](samples/api_example.json) is provided. The `args` are sent in the following order in an `array`:
<table>
<thead>
<tr align="center">
<td><b>Parameter</b></td>
<td><b>Type</b></td>
</tr>
</thead>
<tbody>
<tr align="center">
<td>Enable</td>
<td><code>bool</code></td>
</tr>
<tr align="center">
<td>Alt.</td>
<td><code>bool</code></td>
</tr>
<tr align="center">
<td>Brightness</td>
<td><code>float</code></td>
</tr>
<tr align="center">
<td>Contrast</td>
<td><code>float</code></td>
</tr>
<tr align="center">
<td>Saturation</td>
<td><code>float</code></td>
</tr>
<tr align="center">
<td>R</td>
<td><code>float</code></td>
</tr>
<tr align="center">
<td>G</td>
<td><code>float</code></td>
</tr>
<tr align="center">
<td>B</td>
<td><code>float</code></td>
</tr>
<tr align="center">
<td>Hires. fix</td>
<td><code>bool</code></td>
</tr>
<tr align="center">
<td>ADetailer</td>
<td><code>bool</code></td>
</tr>
<tr align="center">
<td>Randomize</td>
<td><code>bool</code></td>
</tr>
<tr align="center">
<td>Noise Method</td>
<td><code>str</code></td>
</tr>
<tr align="center">
<td>Scaling</td>
<td><code>str</code></td>
</tr>
</tbody>
</table>
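A hypothetical standalone client using the payload layout above might look like this; the host/port and parameter values are placeholders, and the endpoint is the standard Webui `txt2img` API route:

```python
import json
from urllib.request import Request, urlopen

# Placeholder host/port; point this at your running Webui instance.
URL = "http://127.0.0.1:7860/sdapi/v1/txt2img"

payload = {
    "prompt": "a photo of a dog",
    "steps": 24,
    "alwayson_scripts": {
        "vectorscope cc": {
            # 13 args, in the order of the table above
            "args": [True, False, 1.5, 0.0, 1.0, 0.0, 0.0, 0.0,
                     False, False, False, "Straight Abs.", "Flat"],
        }
    },
}

req = Request(
    URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# response = json.load(urlopen(req))  # uncomment with a running Webui
```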
## Known Issues
- On rare occasions, this Extension has little effect when used with certain **LoRA**s
- Works better / worse with certain `Samplers`
<!--- *(See [Wiki](https://github.com/Haoming02/sd-webui-vectorscope-cc/wiki/Vectorscope-CC-Wiki#effects-with-different-samplers))* --->
## HDR
<p align="right"><i><b>BETA</b></i></p>
> [Discussion Thread](https://github.com/Haoming02/sd-webui-vectorscope-cc/issues/16)
In the **Script** `Dropdown` at the bottom, there is now a new **`High Dynamic Range`** option:
- This script will generate multiple images *("Brackets")* of varying brightness, then merge them into 1 HDR image
- *Do provide feedback in the thread!*
- **(Recommended)** Use a deterministic sampler and high enough steps. `Euler` *(**not** `Euler a`)* works well in my experience
#### Settings
- **Brackets:** The number of images to generate
- **Gaps:** The brightness difference between each image
- **Automatically Merge:** When enabled, this will merge the images using an `OpenCV` algorithm and save to the `HDR` folder in the `outputs` folder
    - Disable this if you want to merge them yourself using a better external program
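For intuition only, a Mertens-style exposure-fusion weight (the general idea behind OpenCV's merge algorithms; this is **not** the Extension's actual code) can be sketched on a single pixel:

```python
from math import exp

def well_exposedness(v: float, sigma: float = 0.2) -> float:
    # weight pixel values near mid-grey (0.5) highest, as in Mertens fusion
    return exp(-((v - 0.5) ** 2) / (2.0 * sigma**2))

def fuse(brackets: list[float]) -> float:
    """Blend one pixel value across exposure brackets by exposedness."""
    weights = [well_exposedness(v) for v in brackets]
    total = sum(weights)
    return sum(v * w for v, w in zip(brackets, weights)) / total
```

Well-exposed brackets dominate the blend, which is why shooting several brightness "Brackets" recovers detail in both shadows and highlights.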
<hr>
<details>
<summary>Offset Noise TL;DR</summary>
The most common *version* of **Offset Noise** you may have heard of is from this [blog post](https://www.crosslabs.org/blog/diffusion-with-offset-noise), where it was discovered that the noise functions used during **training** were flawed, causing `Stable Diffusion` to always generate images with an average of `0.5` *(**ie.** grey)*.
> **ie.** Even if you prompt for dark/night or bright/snow, the average of the image is still "grey"
> [Technical Explanations](https://youtu.be/cVxQmbf3q7Q)
However, this Extension instead tries to offset the latent noise during the **inference** phase. Therefore, you do not need to use models that were specially trained, as this can work on any model.
</details>
<details>
<summary>How does this work?</summary>
After reading through and messing around with the code, I found out that it is possible to directly modify the Tensors representing the latent noise used by the Stable Diffusion process.
The dimensions of the Tensor are `(X, 4, H / 8, W / 8)`, which represents a batch of **X** noise images, each with **4** channels of **(W / 8) x (H / 8)** values
> **eg.** Generating a single 512x768 image will create a Tensor of size (1, 4, 96, 64)
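The shape arithmetic can be double-checked with a tiny helper (hypothetical, for illustration):

```python
def latent_shape(batch: int, width: int, height: int) -> tuple[int, int, int, int]:
    """Shape of the latent noise Tensor described above."""
    return (batch, 4, height // 8, width // 8)
```

e.g. `latent_shape(1, 512, 768)` gives `(1, 4, 96, 64)`, matching the example above.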
Then, I tried to play around with the values of each channel and ended up discovering these relationships. Essentially, the 4 channels correspond to the **CMYK** color format for `SD1` *(**Y'CbCr** for `SDXL`)*, hence why you can control the brightness as well as the colors.
</details>
<hr>
#### Vectorscope?
The Extension is named this way because the color interactions remind me of the `Vectorscope` found in **Premiere Pro**'s **Lumetri Color**. Those who are experienced in Color Correction should be rather familiar with this Extension.
<p align="center"><img src="scripts/vectorscope.png" width=256></p>


@ -92,26 +92,17 @@ onUiLoaded(() => {
wheel.id = `cc-img-${mode}`;
const sliders = [
document.getElementById(`cc-r-${mode}`).querySelectorAll('input'),
document.getElementById(`cc-g-${mode}`).querySelectorAll('input'),
document.getElementById(`cc-b-${mode}`).querySelectorAll('input'),
];
const temp = document.getElementById(`cc-temp-${mode}`);
const dot = temp.querySelector('img');
dot.id = `cc-dot-${mode}`;
container.appendChild(dot);
temp.remove();


@ -0,0 +1,4 @@
"""
Author: Haoming02
License: MIT
"""


@ -38,7 +38,10 @@ class NoiseMethods:
@staticmethod
@torch.inference_mode()
def multires_noise(
latent: torch.Tensor,
use_zero: bool,
iterations: int = 10,
discount: float = 0.8,
):
"""
Credit: Kohya_SS
@ -46,21 +49,18 @@ class NoiseMethods:
"""
noise = NoiseMethods.zeros(latent) if use_zero else NoiseMethods.ones(latent)
device = latent.device
b, c, w, h = noise.shape
upsampler = torch.nn.Upsample(size=(w, h), mode="bilinear").to(device)
for i in range(iterations):
r = random() * 2 + 2
wn = max(1, int(w / (r**i)))
hn = max(1, int(h / (r**i)))
noise += upsampler(torch.randn(b, c, wn, hn).to(device)) * discount**i
if wn == 1 or hn == 1:
break
@ -79,6 +79,7 @@ def RGB_2_CbCr(r: float, g: float, b: float) -> tuple[float, float]:
original_callback = KDiffusionSampler.callback_state
@torch.inference_mode()
@wraps(original_callback)
def cc_callback(self, d):
@ -96,7 +97,7 @@ def cc_callback(self, d):
mode = str(self.vec_cc["mode"])
method = str(self.vec_cc["method"])
source: torch.Tensor = d[mode]
target = None
if "Straight" in method:
target = d[mode].detach().clone()
@ -150,7 +151,6 @@ def cc_callback(self, d):
source[i][3] *= sat
else:
cb, cr = RGB_2_CbCr(r, g, b)
for i in range(batchSize):
@ -170,12 +170,11 @@ def cc_callback(self, d):
return original_callback(self, d)
def restore_callback():
KDiffusionSampler.callback_state = original_callback
def hook_callbacks():
KDiffusionSampler.callback_state = cc_callback
on_script_unloaded(restore_callback)
on_ui_settings(settings)


@ -9,16 +9,19 @@ DOT = os.path.join(scripts.basedir(), "scripts", "dot.png")
def create_colorpicker(is_img: bool):
m: str = "img" if is_img else "txt"
whl = gr.Image(
value=WHEEL,
interactive=False,
container=False,
elem_id=f"cc-colorwheel-{m}",
)
dot = gr.Image(
value=DOT,
interactive=False,
container=False,
elem_id=f"cc-temp-{m}",
)
whl.do_not_save_to_config = True
dot.do_not_save_to_config = True


@ -2,6 +2,7 @@ import random
class Param:
def __init__(self, minimum: float, maximum: float, default: float):
self.minimum = minimum
self.maximum = maximum


@ -13,10 +13,9 @@ def apply_scaling(
b: float,
) -> list:
if alg != "Flat":
ratio = float(current_step / total_steps)
rad = ratio * pi / 2


@ -10,6 +10,7 @@ EMPTY_STYLE = {"styles": {}, "deleted": {}}
class StyleManager:
def __init__(self):
self.STYLE_SHEET: dict = None


@ -1,7 +1,7 @@
from modules import scripts
def _grid_reference():
for data in scripts.scripts_data:
if data.script_class.__module__ in (
"scripts.xyz_grid",
@ -40,7 +40,7 @@ def xyz_support(cache: dict):
def choices_scaling():
return ["Flat", "Cos", "Sin", "1 - Cos", "1 - Sin"]
xyz_grid = _grid_reference()
extra_axis_options = [
xyz_grid.AxisOption(


@ -1,25 +1,27 @@
{
"prompt": "a photo of a dog",
"negative_prompt": "(low quality, worst quality)",
"sampler_index": "euler",
"steps": 24,
"cfg_scale": 6.0,
"batch_size": 1,
"seed": -1,
"width": 512,
"height": 512,
"alwayson_scripts": {
"Vectorscope CC": {
"vectorscope cc": {
"args": [
true,
true,
-2.5,
1.25,
0.85,
0.0,
0.0,
1.0,
false,
false,
false,
"Straight Abs.",
"Flat"


@ -2,20 +2,21 @@ from modules.sd_samplers_kdiffusion import KDiffusionSampler
from modules import shared, scripts
from lib_cc.colorpicker import create_colorpicker
from lib_cc.callback import hook_callbacks
from lib_cc.style import StyleManager
from lib_cc.xyz import xyz_support
from lib_cc import const
from random import seed
import gradio as gr
VERSION = "2.3.0"
style_manager = StyleManager()
style_manager.load_styles()
hook_callbacks()
class VectorscopeCC(scripts.Script):
@ -45,24 +46,24 @@ class VectorscopeCC(scripts.Script):
with gr.Row():
bri = gr.Slider(
label="Brightness",
value=const.Brightness.default,
minimum=const.Brightness.minimum,
maximum=const.Brightness.maximum,
step=0.05,
)
con = gr.Slider(
label="Contrast",
value=const.Contrast.default,
minimum=const.Contrast.minimum,
maximum=const.Contrast.maximum,
step=0.05,
)
sat = gr.Slider(
label="Saturation",
value=const.Saturation.default,
minimum=const.Saturation.minimum,
maximum=const.Saturation.maximum,
step=0.05,
)
with gr.Row():
@ -70,42 +71,33 @@ class VectorscopeCC(scripts.Script):
r = gr.Slider(
label="R",
info="Cyan | Red",
value=const.Color.default,
minimum=const.Color.minimum,
maximum=const.Color.maximum,
step=0.05,
elem_id=f"cc-r-{mode}",
)
g = gr.Slider(
label="G",
info="Magenta | Green",
value=const.Color.default,
minimum=const.Color.minimum,
maximum=const.Color.maximum,
step=0.05,
elem_id=f"cc-g-{mode}",
)
b = gr.Slider(
label="B",
info="Yellow | Blue",
value=const.Color.default,
minimum=const.Color.minimum,
maximum=const.Color.maximum,
step=0.05,
elem_id=f"cc-b-{mode}",
)
for c in (r, g, b):
c.input(
None,
inputs=[r, g, b],
_js=f"(r, g, b) => {{ VectorscopeCC.updateCursor(r, g, b, {m}); }}",
@ -125,7 +117,9 @@ class VectorscopeCC(scripts.Script):
refresh_btn = gr.Button(value="Refresh Style", scale=2)
with gr.Row(elem_classes="style-rows"):
style_name = gr.Textbox(
label="Style Name", lines=1, max_lines=1, scale=3
)
save_btn = gr.Button(
value="Save Style", elem_id=f"cc-save-{mode}", scale=2
)
@ -147,12 +141,15 @@ class VectorscopeCC(scripts.Script):
with gr.Accordion("Advanced Settings", open=False):
with gr.Row():
doHR = gr.Checkbox(
label="Process Hires. fix",
visible=(not is_img2img),
)
doAD = gr.Checkbox(label="Process Adetailer")
doRN = gr.Checkbox(label="Randomize using Seed")
method = gr.Radio(
choices=(
"Straight",
"Straight Abs.",
"Cross",
@ -162,13 +159,13 @@ class VectorscopeCC(scripts.Script):
"U.Random",
"Multi-Res",
"Multi-Res Abs.",
),
label="Noise Settings",
value="Straight Abs.",
)
scaling = gr.Radio(
choices=("Flat", "Cos", "Sin", "1 - Cos", "1 - Sin"),
label="Scaling Settings",
value="Flat",
)
@ -190,7 +187,7 @@ class VectorscopeCC(scripts.Script):
apply_btn.click(
fn=style_manager.get_style,
inputs=[style_choice],
outputs=[*comps],
).then(
None,
@ -201,18 +198,18 @@ class VectorscopeCC(scripts.Script):
save_btn.click(
fn=lambda *args: gr.update(choices=style_manager.save_style(*args)),
inputs=[style_name, *comps],
outputs=[style_choice],
)
delete_btn.click(
fn=lambda name: gr.update(choices=style_manager.delete_style(name)),
inputs=[style_name],
outputs=[style_choice],
)
refresh_btn.click(
fn=lambda: gr.update(choices=style_manager.load_styles()),
outputs=[style_choice],
)
with gr.Row():
@ -250,7 +247,7 @@ class VectorscopeCC(scripts.Script):
outputs=[*comps],
show_progress="hidden",
).then(
fn=None,
inputs=[r, g, b],
_js=f"(r, g, b) => {{ VectorscopeCC.updateCursor(r, g, b, {m}); }}",
)
@ -260,7 +257,7 @@ class VectorscopeCC(scripts.Script):
outputs=[bri, con, sat, r, g, b],
show_progress="hidden",
).then(
fn=None,
inputs=[r, g, b],
_js=f"(r, g, b) => {{ VectorscopeCC.updateCursor(r, g, b, {m}); }}",
)
@ -310,52 +307,45 @@ class VectorscopeCC(scripts.Script):
seeds: list[int],
subseeds: list[int],
):
enable = self.xyzCache.pop("Enable", str(enable)).lower().strip() == "true"
if not enable:
if len(self.xyzCache) > 0:
print("\n[Vec.CC] x [X/Y/Z Plot] Extension is not Enabled!\n")
self.xyzCache.clear()
setattr(KDiffusionSampler, "vec_cc", {"enable": False})
return p
method = str(self.xyzCache.pop("Method", method))
if method == "Disabled":
setattr(KDiffusionSampler, "vec_cc", {"enable": False})
return p
if "Random" in self.xyzCache.keys():
print("[X/Y/Z Plot] x [Vec.CC] Randomize is Enabled.")
if len(self.xyzCache) > 1:
print("Some parameters will be overridden!")
cc_seed = int(self.xyzCache.pop("Random"))
else:
cc_seed = int(seeds[0]) if doRN else None
latent = self.xyzCache.pop("Alt", str(latent)).lower().strip() == "true"
doHR = self.xyzCache.pop("DoHR", str(doHR)).lower().strip() == "true"
scaling = str(self.xyzCache.pop("Scaling", scaling))
bri = float(self.xyzCache.pop("Brightness", bri))
con = float(self.xyzCache.pop("Contrast", con))
sat = float(self.xyzCache.pop("Saturation", sat))
r = float(self.xyzCache.pop("R", r))
g = float(self.xyzCache.pop("G", g))
b = float(self.xyzCache.pop("B", b))
bri = float(self.xyzCache.get("Brightness", bri))
con = float(self.xyzCache.get("Contrast", con))
sat = float(self.xyzCache.get("Saturation", sat))
r = float(self.xyzCache.get("R", r))
g = float(self.xyzCache.get("G", g))
b = float(self.xyzCache.get("B", b))
method = str(self.xyzCache.get("Method", method))
scaling = str(self.xyzCache.get("Scaling", scaling))
self.xyzCache.clear()
if method == "Disabled":
KDiffusionSampler.vec_cc = {"enable": False}
return p
steps: int = getattr(p, "firstpass_steps", None) or p.steps
assert len(self.xyzCache) == 0
if cc_seed:
seed(cc_seed)
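The refactor above replaces the `in self.xyzCache.keys()` checks with `dict.pop` defaults, so each X/Y/Z-plot override is consumed as it is read and the final `assert len(self.xyzCache) == 0` can verify nothing was left behind. A minimal standalone sketch of that pattern, with hypothetical cache contents:

```python
# X/Y/Z-plot values arrive as strings and take precedence over the UI
# values passed in as pop() defaults; pop() also drains the cache as it reads.
cache = {"Brightness": "2.5", "Alt": "True"}

bri = float(cache.pop("Brightness", 1.0))  # overridden by the plot
con = float(cache.pop("Contrast", 0.0))    # UI default kept
latent = cache.pop("Alt", "False").lower().strip() == "true"

assert (bri, con, latent) == (2.5, 0.0, True)
assert not cache  # every override was consumed
```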
@@ -368,13 +358,13 @@ class VectorscopeCC(scripts.Script):
g = const.Color.rand()
b = const.Color.rand()
print(f"\n-> Seed: {cc_seed}")
print(f"Brightness:\t{bri}")
print(f"Contrast:\t{con}")
print(f"Saturation:\t{sat}")
print(f"R:\t\t{r}")
print(f"G:\t\t{g}")
print(f"B:\t\t{b}\n")
print(f"\n[Seed: {cc_seed}]")
print(f"> Brightness: {bri}")
print(f"> Contrast: {con}")
print(f"> Saturation: {sat}")
print(f"> R: {r}")
print(f"> G: {g}")
print(f"> B: {b}\n")
if getattr(shared.opts, "cc_metadata", True):
p.extra_generation_params.update(
@@ -396,6 +386,8 @@ class VectorscopeCC(scripts.Script):
}
)
steps: int = getattr(p, "firstpass_steps", None) or p.steps
bri /= steps
con /= steps
sat = pow(sat, 1.0 / steps)
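The divisions above spread the requested totals across the sampling steps: brightness and contrast accumulate additively, while saturation compounds multiplicatively and so takes the steps-th root. A quick sanity check of that scaling, with illustrative values only:

```python
steps = 20
bri_total, sat_total = 5.0, 1.5

bri_step = bri_total / steps           # applied additively each step
sat_step = sat_total ** (1.0 / steps)  # applied multiplicatively each step

bri_acc, sat_acc = 0.0, 1.0
for _ in range(steps):
    bri_acc += bri_step
    sat_acc *= sat_step

# after all steps, the per-step values recover the user-facing totals
assert abs(bri_acc - bri_total) < 1e-9
assert abs(sat_acc - sat_total) < 1e-9
```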
@@ -405,7 +397,10 @@ class VectorscopeCC(scripts.Script):
mode: str = "x" if latent else "denoised"
KDiffusionSampler.vec_cc = {
setattr(
KDiffusionSampler,
"vec_cc",
{
"enable": True,
"mode": mode,
"bri": bri,
@@ -419,6 +414,15 @@ class VectorscopeCC(scripts.Script):
"doAD": doAD,
"scaling": scaling,
"step": steps,
}
},
)
return p
def before_hr(self, p, enable: bool, *args, **kwargs):
if enable:
steps: int = getattr(p, "hr_second_pass_steps", None) or p.steps
KDiffusionSampler.vec_cc["step"] = steps
return p
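Storing the config via `setattr` on `KDiffusionSampler` itself makes it a class attribute that every sampler instance can read, which is also why `before_hr` can swap in the hires step count by mutating one key. A hypothetical minimal model of that state-sharing trick:

```python
class Sampler:  # stand-in for KDiffusionSampler
    pass

# attach the config to the class, not to any instance
setattr(Sampler, "vec_cc", {"enable": True, "step": 20})

s = Sampler()
assert s.vec_cc["enable"] is True  # instances see the class attribute

Sampler.vec_cc["step"] = 16        # e.g. updated in before_hr()
assert s.vec_cc["step"] == 16      # change is visible everywhere
```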

View File

@@ -1,4 +1,5 @@
from modules.processing import process_images, get_fixed_seed
from modules.shared import state
from modules import scripts
from copy import copy
import gradio as gr
@@ -6,39 +7,36 @@ import numpy as np
import cv2 as cv
# https://docs.opencv.org/4.8.0/d2/df0/tutorial_py_hdr.html
def merge_HDR(imgs: list, path: str, depth: str, fmt: str, gamma: float) -> np.ndarray:
def _merge_HDR(imgs: list, path: str, depth: str, fmt: str, gamma: float):
"""https://docs.opencv.org/4.8.0/d2/df0/tutorial_py_hdr.html"""
import datetime
import math
import os
output_folder = os.path.join(path, "hdr")
os.makedirs(output_folder, exist_ok=True)
out_dir = os.path.join(os.path.dirname(path), "hdr")
os.makedirs(out_dir, exist_ok=True)
print(f'\nSaving HDR Outputs to "{out_dir}"\n')
imgs_np = [np.asarray(img).astype(np.uint8) for img in imgs]
imgs_np = [np.asarray(img, dtype=np.uint8) for img in imgs]
merge = cv.createMergeMertens()
hdr = merge.process(imgs_np)
hdr += math.ceil(0 - np.min(hdr) * 1000) / 1000
# print(f'{np.min(hdr)}, {np.max(hdr)}')
# shift min to 0.0
hdr += math.ceil(0.0 - np.min(hdr) * 1000) / 1000
# print(f"({np.min(hdr)}, {np.max(hdr)})")
target = 65535 if depth == "16bpc" else 255
precision = "uint16" if depth == "16bpc" else "uint8"
precision = np.uint16 if depth == "16bpc" else np.uint8
hdr = np.power(hdr, (1 / gamma))
ldr = np.clip(hdr * target, 0, target).astype(precision)
rgb = cv.cvtColor(ldr, cv.COLOR_BGR2RGB)
cv.imwrite(
os.path.join(
output_folder, f'{datetime.datetime.now().strftime("%H-%M-%S")}{fmt}'
),
rgb,
)
return ldr
time = datetime.datetime.now().strftime("%H-%M-%S")
cv.imwrite(os.path.join(out_dir, f"{time}{fmt}"), rgb)
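After Mertens fusion, the float result is shifted so its minimum sits at 0.0 (rounded up to the nearest 1/1000), gamma-corrected, then quantized to the chosen bit depth. That post-fusion tonemap can be sketched in pure Python; `tonemap` is a hypothetical helper and the scalar list stands in for the image array:

```python
import math

def tonemap(values, gamma=1.2, depth="16bpc"):
    # shift the minimum to 0.0, rounded up to the nearest 1/1000, as above
    shift = math.ceil(0.0 - min(values) * 1000) / 1000
    target = 65535 if depth == "16bpc" else 255
    # gamma-correct, scale to the bit depth, and clip
    return [
        max(0, min(target, round((v + shift) ** (1.0 / gamma) * target)))
        for v in values
    ]
```

For example, `tonemap([0.0, 1.0], gamma=1.0, depth="8bpc")` gives `[0, 255]`.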
class VectorHDR(scripts.Script):
@@ -50,10 +48,22 @@ class VectorHDR(scripts.Script):
return True
def ui(self, is_img2img):
with gr.Row():
count = gr.Slider(label="Brackets", minimum=3, maximum=9, step=2, value=5)
count = gr.Slider(
label="Brackets",
minimum=3,
maximum=9,
step=2,
value=5,
)
gap = gr.Slider(
label="Gaps", minimum=0.50, maximum=2.50, step=0.25, value=1.25
label="Gaps",
minimum=0.50,
maximum=2.50,
step=0.25,
value=1.25,
)
with gr.Accordion(
@@ -61,6 +71,7 @@ class VectorHDR(scripts.Script):
elem_id=f'vec-hdr-{"img" if is_img2img else "txt"}',
open=False,
):
auto = gr.Checkbox(label="Automatically Merge", value=True)
with gr.Row():
@@ -76,7 +87,7 @@ class VectorHDR(scripts.Script):
value=1.2,
)
for comp in [count, gap, auto, depth, fmt, gamma]:
for comp in (count, gap, auto, depth, fmt, gamma):
comp.do_not_save_to_config = True
return [count, gap, auto, depth, fmt, gamma]
@@ -84,7 +95,8 @@ class VectorHDR(scripts.Script):
def run(
self, p, count: int, gap: float, auto: bool, depth: str, fmt: str, gamma: float
):
center = count // 2
center: int = count // 2
brackets = brightness_brackets(count, gap)
p.seed = get_fixed_seed(p.seed)
p.scripts.script("vectorscope cc").xyzCache.update({"Enable": "False"})
@@ -95,9 +107,12 @@ class VectorHDR(scripts.Script):
imgs = [None] * count
imgs[center] = baseline.images[0]
brackets = brightness_brackets(count, gap)
for it in range(count):
if state.skipped or state.interrupted or state.stopping_generation:
print("HDR Process Skipped...")
return baseline
if it == center:
continue
@@ -115,14 +130,13 @@ class VectorHDR(scripts.Script):
proc = process_images(pc)
imgs[it] = proc.images[0]
if not auto:
baseline.images = imgs
else:
baseline.images = [merge_HDR(imgs, p.outpath_samples, depth, fmt, gamma)]
if auto:
_merge_HDR(imgs, p.outpath_samples, depth, fmt, gamma)
baseline.images = imgs
return baseline
def brightness_brackets(count: int, gap: int) -> list[int]:
def brightness_brackets(count: int, gap: float) -> list[float]:
half = count // 2
return [gap * (i - half) for i in range(count)]
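The corrected helper (now typed for `float` gaps) produces brightness offsets centered on zero; with the UI defaults of 5 brackets and a 1.25 gap:

```python
def brightness_brackets(count: int, gap: float) -> list[float]:
    half = count // 2
    return [gap * (i - half) for i in range(count)]

print(brightness_brackets(5, 1.25))  # → [-2.5, -1.25, 0.0, 1.25, 2.5]
```

The middle entry is always 0.0, which is why `run()` can reuse the baseline image for the center bracket and skip re-rendering it.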