pull/32/head
Haoming 2024-09-20 12:00:20 +08:00
parent a7e25e9714
commit 20a6fb3166
23 changed files with 417 additions and 281 deletions


@@ -1,3 +1,6 @@
### v2.3.0 - 2024 Sep.20
- Refactor
### v2.2.6 - 2024 Sep.18
- Allow disabling `do_not_save_to_config` to use **Defaults**

README.md

@@ -1,28 +1,99 @@
# SD Webui Vectorscope CC
This is an Extension for the [Automatic1111 Webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui), which performs a kind of **Offset Noise** natively during inference, allowing you to adjust the brightness, contrast, and color of the generations.

> Also supports both old & new [Forge](https://github.com/lllyasviel/stable-diffusion-webui-forge)

## Example Images
<p align="center">
<img src="samples/00.jpg" width=256><br>
<code>Base Image w/o Extension</code>
</p>
<details>
<summary>Infotext</summary>
- **Checkpoint:** [realisticVisionV51](https://civitai.com/models/4201?modelVersionId=130072)
- **Positive Prompt:** `(high quality, best quality), a 4k cinematic photo of a gentleman in suit, street in a city at night, (depth of field, bokeh)`
- **Negative Prompt:** `(low quality, worst quality:1.2), [EasyNegative, EasyNegativeV2]`
```
Steps: 32, Sampler: DPM++ 2M Karras, CFG scale: 7.5, Seed: 3709157017, Size: 512x512, Denoising strength: 0.5
Clip skip: 2, Token merging ratio: 0.2, Token merging ratio hr: 0.2, RNG: CPU, NGMS: 4
Hires upscale: 2, Hires steps: 16, Hires upscaler: 2xNomosUni_esrgan_multijpg
```
</details>
<table>
<thead>
<tr align="center">
<td><b>Vibrant</b></td>
<td><b>Cold</b></td>
<td><b>"Movie when Mexico"</b></td>
</tr>
</thead>
<tbody>
<tr align="center">
<td><img src="samples/01.jpg" width=512></td>
<td><img src="samples/02.jpg" width=512></td>
<td><img src="samples/03.jpg" width=512></td>
</tr>
<tr align="left">
<td>
<ul>
<li><b>Alt:</b> <code>True</code></li>
<li><b>Saturation:</b> <code>1.75</code></li>
<li><b>Noise:</b> <code>Ones</code></li>
<li><b>Scaling:</b> <code>1 - Cos</code></li>
</ul>
</td>
<td>
<ul>
<li><b>Brightness:</b> <code>-5.0</code></li>
<li><b>Contrast:</b> <code>2.5</code></li>
<li><b>Saturation:</b> <code>0.75</code></li>
<li><b>R:</b> <code>-3.0</code></li>
<li><b>B:</b> <code>3.0</code></li>
<li><b>Noise:</b> <code>Ones</code></li>
<li><b>Scaling:</b> <code>1 - Sin</code></li>
</ul>
</td>
<td>
<ul>
<li><b>Brightness:</b> <code>2.5</code></li>
<li><b>Contrast:</b> <code>-2.5</code></li>
<li><b>Saturation:</b> <code>1.25</code></li>
<li><b>R:</b> <code>1.5</code></li>
<li><b>G:</b> <code>3.0</code></li>
<li><b>B:</b> <code>-4.0</code></li>
<li><b>Noise:</b> <code>Ones</code></li>
<li><b>Scaling:</b> <code>1 - Sin</code></li>
</ul>
</td>
</tr>
</tbody>
</table>
## How to Use
After installing this Extension, you will see a new section in both **txt2img** and **img2img** tabs.
Refer to the parameters and sample images below and play around with the values.
> **Note:** Since this Extension modifies the underlying latent Tensor, the composition may change drastically depending on the parameters

### Basic Parameters
- **Enable:** Enable the Extension 💀
- **Alt:** Cause the Extension's effects to be stronger

<details>
<summary><i>Technical Detail</i></summary>

- This parameter makes the Extension modify the `denoised` Tensor instead of the `x` Tensor

</details>

- **Brightness**, **Contrast**, **Saturation:** Adjust the overall `brightness` / `contrast` / `saturation` of the image
#### Color Channels

<table>
<thead align="center">
@@ -51,25 +122,28 @@ Refer to the parameters and sample images below and play around with the values.
</tbody>
</table>

- The Extension also comes with a Color Wheel for visualization, which you can also click on to pick a color directly

> The color picker isn't 100% accurate due to multiple layers of conversions...
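The conversions behind the color picker can be illustrated with a standard BT.601-style RGB-to-chroma transform (a hedged sketch; the Extension's own `RGB_2_CbCr` helper may use different coefficients):

```python
def rgb_to_cbcr(r: float, g: float, b: float) -> tuple:
    """Map an RGB offset to the two chroma components (BT.601 weights)."""
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b  # blue-difference chroma
    cr = 0.5 * r - 0.418688 * g - 0.081312 * b   # red-difference chroma
    return cb, cr
```

A neutral offset (equal `R`, `G`, `B`) yields zero chroma, which is why pure brightness changes leave the hue untouched.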
#### Style Presets
- To apply a Style, select from the `Dropdown` then click **Apply Style**
- To save a Style, enter a name in the `Textbox` then click **Save Style**
- To delete a Style, enter the name in the `Textbox` then click **Delete Style**
    - *A deleted Style is still in the `styles.json` in case you wish to retrieve it*
- Click **Refresh Style** to update the `Dropdown` if you edited the `styles.json` manually
### Advanced Parameters
- **Process Hires. fix:** Enable this option to process during the **Hires. fix** phase too
    - By default, this Extension only functions during the regular phase of the `txt2img` mode
- **Process ADetailer:** Enable this option to process during the **[ADetailer](https://github.com/Bing-su/adetailer)** phase too
    - Will usually cause a square of inconsistent colors
- **Randomize using Seed:** Enable this option to use the current generation `seed` to randomize the basic parameters
    - Randomized results will be printed in the console
#### Noise Settings
> let **`x`** denote the latent Tensor ; let **`y`** denote the operations

- **Straight:** All operations are calculated on the same Tensor
    - `x += x * y`
@@ -77,9 +151,9 @@ Refer to the parameters and sample images below and play around with the values.
    - `x += x' * y`
- **Ones:** All operations are calculated on a Tensor filled with ones
    - `x += 1 * y`
- **N.Random:** All operations are calculated on a Tensor filled with random values from a normal distribution
    - `x += randn() * y`
- **U.Random:** All operations are calculated on a Tensor filled with random values from a uniform distribution
    - `x += rand() * y`
- **Multi-Res:** All operations are calculated on a Tensor generated with the multi-res noise algorithm
    - `x += multires() * y`
@@ -87,153 +161,194 @@ Refer to the parameters and sample images below and play around with the values.
    - `x += abs(F) * y`
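The formulas above can be sketched in NumPy (a simplified stand-in for the `torch` Tensors the Extension actually modifies inside the sampler callback; `Multi-Res` is omitted for brevity):

```python
import numpy as np

def apply_offset(x: np.ndarray, y: float, method: str) -> np.ndarray:
    """Return `x` offset by `y`, modulated by the chosen Noise method."""
    if method == "Straight":
        noise = x.copy()                    # x += x * y
    elif method == "Ones":
        noise = np.ones_like(x)             # x += 1 * y
    elif method == "N.Random":
        noise = np.random.randn(*x.shape)   # x += randn() * y
    elif method == "U.Random":
        noise = np.random.rand(*x.shape)    # x += rand() * y
    else:
        raise ValueError(f"unknown method: {method}")
    return x + noise * y
```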
<p align="center">
<img src="samples/method.jpg">
</p>
<details>
<summary>Infotext</summary>
- **Checkpoint:** [realisticVisionV51](https://civitai.com/models/4201?modelVersionId=130072)
- **Positive Prompt:** `(high quality, best quality), a 4k photo of a cute dog running in the snow, mountains, day, (depth of field, bokeh)`
- **Negative Prompt:** `(low quality, worst quality:1.2), [EasyNegative, EasyNegativeV2]`
- **Brightness:** `2.5`
- **Contrast:** `2.5`
- **Alt:** `True`
- **Scaling:** `1 - Cos`
```
Steps: 24, Sampler: DPM++ 2M Karras, CFG scale: 7.5, Seed: 1257068736, Size: 512x512, Denoising strength: 0.5
Clip skip: 2, Token merging ratio: 0.2, Token merging ratio hr: 0.2, RNG: CPU, NGMS: 4
Hires upscale: 1.5, Hires steps: 16, Hires upscaler: 2xNomosUni_esrgan_multijpg
```
</details>
#### Scaling Settings
By default, this Extension offsets the noise by the same amount every step. But depending on the `Sampler` and `Scheduler` used, and whether `Alt` was enabled, the effects might be too strong during the early or the late phase of the process, which in turn causes artifacts.

- **Flat:** Default behavior
- **Cos:** Cosine scaling `(High -> Low)`
- **Sin:** Sine scaling `(Low -> High)`
- **1 - Cos:** `(Low -> High)`
- **1 - Sin:** `(High -> Low)`
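The scaling curves map the current step to a multiplier on the offset amount; a minimal sketch (assuming the ratio is `current_step / total_steps`, as in the Extension's `apply_scaling` helper):

```python
from math import cos, pi, sin

def scaling_mod(alg: str, current_step: int, total_steps: int) -> float:
    """Multiplier applied to the offset amount at a given step."""
    if alg == "Flat":
        return 1.0
    rad = (current_step / total_steps) * pi / 2
    return {
        "Cos": cos(rad),            # High -> Low
        "Sin": sin(rad),            # Low -> High
        "1 - Cos": 1.0 - cos(rad),  # Low -> High
        "1 - Sin": 1.0 - sin(rad),  # High -> Low
    }[alg]
```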
<p align="center">
<img src="samples/scaling.jpg">
</p>
<details>
<summary>Infotext</summary>

- **Checkpoint:** [realisticVisionV51](https://civitai.com/models/4201?modelVersionId=130072)
- **Positive Prompt:** `(high quality, best quality), a 4k photo of a cute cat standing at a flower field in a park, day, (depth of field, bokeh)`
- **Negative Prompt:** `(low quality, worst quality:1.2), [EasyNegative, EasyNegativeV2]`
- **Alt:** `True`
- **Noise:** `Straight Abs.`

```
Steps: 24, Sampler: DPM++ 2M Karras, CFG scale: 7.5, Seed: 3515074713, Size: 512x512, Denoising strength: 0.5
Clip skip: 2, Token merging ratio: 0.2, Token merging ratio hr: 0.2, RNG: CPU, NGMS: 4
Hires upscale: 1.5, Hires steps: 12, Hires upscaler: 2xNomosUni_esrgan_multijpg
```

</details>

### Buttons
- **Reset:** Reset all `Basic` and `Advanced` parameters to the default values
- **Randomize:** Randomize the `Brightness`, `Contrast`, `Saturation`, `R`, `G`, `B` parameters
## Roadmap
- [X] Extension Released!
- [X] Add Support for **X/Y/Z Plot**
- [X] Implement different **Noise** functions
- [X] Implement **Randomize** button
- [X] Implement **Style** Presets
- [X] Implement **Color Wheel** & **Color Picker**
- [X] Implement better scaling algorithms
- [X] Add API Docs
- [X] Append Parameters to Infotext
- [X] Improved Infotext Support *(by [catboxanon](https://github.com/catboxanon))*
- [X] Add **HDR** Script
- [X] Add Support for **SDXL**
- [ ] Implement Gradient features
## API
You can use this Extension via [API](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/API) by adding an entry to the `alwayson_scripts` of your payload. An [example](samples/api_example.json) is provided. The `args` are sent in the following order in an `array`:

<table>
<thead>
<tr align="center">
<td><b>Parameter</b></td>
<td><b>Type</b></td>
</tr>
</thead>
<tbody>
<tr align="center">
<td>Enable</td>
<td><code>bool</code></td>
</tr>
<tr align="center">
<td>Alt.</td>
<td><code>bool</code></td>
</tr>
<tr align="center">
<td>Brightness</td>
<td><code>float</code></td>
</tr>
<tr align="center">
<td>Contrast</td>
<td><code>float</code></td>
</tr>
<tr align="center">
<td>Saturation</td>
<td><code>float</code></td>
</tr>
<tr align="center">
<td>R</td>
<td><code>float</code></td>
</tr>
<tr align="center">
<td>G</td>
<td><code>float</code></td>
</tr>
<tr align="center">
<td>B</td>
<td><code>float</code></td>
</tr>
<tr align="center">
<td>Hires. fix</td>
<td><code>bool</code></td>
</tr>
<tr align="center">
<td>ADetailer</td>
<td><code>bool</code></td>
</tr>
<tr align="center">
<td>Randomize</td>
<td><code>bool</code></td>
</tr>
<tr align="center">
<td>Noise Method</td>
<td><code>str</code></td>
</tr>
<tr align="center">
<td>Scaling</td>
<td><code>str</code></td>
</tr>
</tbody>
</table>
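As a quick illustration, the payload can be assembled and sent from Python like so (a hedged sketch: the endpoint follows the standard Webui API, and the parameter values here are arbitrary placeholders):

```python
import json
from urllib import request

def build_payload(prompt: str) -> dict:
    """txt2img payload with the Vectorscope CC args in the order listed above."""
    return {
        "prompt": prompt,
        "steps": 20,
        "width": 512,
        "height": 512,
        "alwayson_scripts": {
            "vectorscope cc": {
                "args": [
                    True,    # Enable
                    False,   # Alt.
                    2.0,     # Brightness
                    0.0,     # Contrast
                    1.25,    # Saturation
                    0.0,     # R
                    0.0,     # G
                    0.0,     # B
                    False,   # Hires. fix
                    False,   # ADetailer
                    False,   # Randomize
                    "Ones",  # Noise Method
                    "Flat",  # Scaling
                ],
            },
        },
    }

def send(payload: dict, url: str = "http://127.0.0.1:7860/sdapi/v1/txt2img") -> dict:
    req = request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:  # requires a running Webui with --api
        return json.load(resp)
```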
## Known Issues
- On rare occasions, this Extension has little effect when used with certain **LoRA**s
- Works better / worse with certain `Samplers`
<!--- *(See [Wiki](https://github.com/Haoming02/sd-webui-vectorscope-cc/wiki/Vectorscope-CC-Wiki#effects-with-different-samplers))* --->
## HDR
> [Discussion Thread](https://github.com/Haoming02/sd-webui-vectorscope-cc/issues/16)

In the **Script** `Dropdown` at the bottom, there is now a new **`High Dynamic Range`** option:
- This script will generate multiple images *("Brackets")* of varying brightness, then merge them into 1 HDR image
- **(Recommended)** Use a deterministic sampler and high enough steps. `Euler` *(**not** `Euler a`)* works well in my experience

#### Settings
- **Brackets:** The number of images to generate
- **Gaps:** The brightness difference between each image
- **Automatically Merge:** When enabled, this will merge the images using an `OpenCV` algorithm and save the result to the `HDR` folder in the `outputs` folder
    - Disable this if you want to merge them yourself using a better external program
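If you merge the brackets yourself, the core idea can be sketched as a naive exposure fusion in NumPy (not the `OpenCV` algorithm the script uses, just an illustration of the concept):

```python
import numpy as np

def fuse_brackets(images: list) -> np.ndarray:
    """Blend brackets, weighting each pixel by its closeness to mid-gray."""
    stack = np.stack([np.asarray(img, dtype=np.float64) / 255.0 for img in images])
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * 0.2 ** 2))
    weights /= weights.sum(axis=0, keepdims=True)  # normalize across brackets
    fused = (weights * stack).sum(axis=0)
    return np.clip(fused * 255.0, 0, 255).astype(np.uint8)
```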
<hr>

<details>
<summary>Offset Noise TL;DR</summary>

The most common *version* of **Offset Noise** you may have heard of is from this [blog post](https://www.crosslabs.org/blog/diffusion-with-offset-noise), where it was discovered that the noise functions used during **training** were flawed, causing `Stable Diffusion` to always generate images with an average of `0.5` *(**ie.** grey)*.

> **ie.** Even if you prompt for dark/night or bright/snow, the average of the image is still "grey"

> [Technical Explanations](https://youtu.be/cVxQmbf3q7Q)

However, this Extension instead tries to offset the latent noise during the **inference** phase. Therefore, you do not need to use models that were specially trained, as this can work on any model.
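The training-time fix from the blog post boils down to adding a per-image, per-channel constant to the noise; a simplified NumPy sketch (this Extension does something different, shifting the latent at inference, but the contrast is instructive):

```python
import numpy as np

def offset_noise(shape: tuple, strength: float = 0.1, seed: int = 0) -> np.ndarray:
    """Gaussian noise plus one shared offset per image and channel."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(shape)
    # the (b, c, 1, 1) offset broadcasts over all pixels, letting the model
    # learn images whose mean deviates from 0.5 (ie. truly dark or bright)
    offset = rng.standard_normal((shape[0], shape[1], 1, 1))
    return noise + strength * offset
```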
</details>

<details>
<summary>How does this work?</summary>
After reading through and messing around with the code, I found out that it is possible to directly modify the Tensors representing the latent noise used by the Stable Diffusion process.

The dimensions of the Tensors are `(X, 4, H / 8, W / 8)`, representing **X** batches of noise images, each with **4** channels of **(W / 8) x (H / 8)** values

> **eg.** Generating a single 512x768 image will create a Tensor of size (1, 4, 96, 64)
Then, I tried to play around with the values of each channel and ended up discovering these relationships. Essentially, the 4 channels correspond to the **CMYK** color format for `SD1` *(**Y'CbCr** for `SDXL`)*, hence why you can control the brightness as well as the colors.
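The shape arithmetic is easy to verify without loading `torch` (plain Python, mirroring the description above):

```python
def latent_shape(batch: int, width: int, height: int) -> tuple:
    """Latent noise dimensions: (X, 4, H / 8, W / 8)."""
    return (batch, 4, height // 8, width // 8)
```

`latent_shape(1, 512, 768)` gives `(1, 4, 96, 64)`, matching the example above.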
</details>

<hr>

#### Vectorscope?
The Extension is named this way because the color interactions remind me of the `Vectorscope` found in **Premiere Pro**'s **Lumetri Color**. Those who are experienced in Color Correction should be rather familiar with this Extension.

<p align="center"><img src="scripts/vectorscope.png" width=256></p>


@@ -92,26 +92,17 @@ onUiLoaded(() => {
        wheel.id = `cc-img-${mode}`;

        const sliders = [
            document.getElementById(`cc-r-${mode}`).querySelectorAll('input'),
            document.getElementById(`cc-g-${mode}`).querySelectorAll('input'),
            document.getElementById(`cc-b-${mode}`).querySelectorAll('input'),
        ];

        const temp = document.getElementById(`cc-temp-${mode}`);
        const dot = temp.querySelector('img');
        dot.id = `cc-dot-${mode}`;

        dot.style.left = 'calc(50% - 12px)';
        dot.style.top = 'calc(50% - 12px)';

        container.appendChild(dot);
        temp.remove();


@@ -0,0 +1,4 @@
"""
Author: Haoming02
License: MIT
"""


@@ -38,7 +38,10 @@ class NoiseMethods:
    @staticmethod
    @torch.inference_mode()
    def multires_noise(
        latent: torch.Tensor,
        use_zero: bool,
        iterations: int = 10,
        discount: float = 0.8,
    ):
        """
        Credit: Kohya_SS
@@ -46,21 +49,18 @@ class NoiseMethods:
        """
        noise = NoiseMethods.zeros(latent) if use_zero else NoiseMethods.ones(latent)

        device = latent.device
        b, c, w, h = noise.shape
        upsampler = torch.nn.Upsample(size=(w, h), mode="bilinear").to(device)

        for i in range(iterations):
            r = random() * 2 + 2

            wn = max(1, int(w / (r**i)))
            hn = max(1, int(h / (r**i)))

            noise += upsampler(torch.randn(b, c, wn, hn).to(device)) * discount**i

            if wn == 1 or hn == 1:
                break
@@ -79,6 +79,7 @@ def RGB_2_CbCr(r: float, g: float, b: float) -> tuple[float, float]:
original_callback = KDiffusionSampler.callback_state

@torch.no_grad()
@torch.inference_mode()
@wraps(original_callback)
def cc_callback(self, d):
@@ -96,7 +97,7 @@ def cc_callback(self, d):
    mode = str(self.vec_cc["mode"])
    method = str(self.vec_cc["method"])
    source: torch.Tensor = d[mode]
    target = None

    if "Straight" in method:
        target = d[mode].detach().clone()
@@ -150,7 +151,6 @@ def cc_callback(self, d):
            source[i][3] *= sat

    else:
        cb, cr = RGB_2_CbCr(r, g, b)

        for i in range(batchSize):
@@ -170,12 +170,11 @@ def cc_callback(self, d):
    return original_callback(self, d)

def restore_callback():
    KDiffusionSampler.callback_state = original_callback

def hook_callbacks():
    KDiffusionSampler.callback_state = cc_callback

on_script_unloaded(restore_callback)
on_ui_settings(settings)


@@ -9,16 +9,19 @@ DOT = os.path.join(scripts.basedir(), "scripts", "dot.png")
def create_colorpicker(is_img: bool):
    m: str = "img" if is_img else "txt"

    whl = gr.Image(
        value=WHEEL,
        interactive=False,
        container=False,
        elem_id=f"cc-colorwheel-{m}",
    )
    dot = gr.Image(
        value=DOT,
        interactive=False,
        container=False,
        elem_id=f"cc-temp-{m}",
    )

    whl.do_not_save_to_config = True
    dot.do_not_save_to_config = True


@@ -2,6 +2,7 @@ import random

class Param:
    def __init__(self, minimum: float, maximum: float, default: float):
        self.minimum = minimum
        self.maximum = maximum


@@ -13,10 +13,9 @@ def apply_scaling(
    b: float,
) -> list:

    mod = 1.0
    if alg != "Flat":
        ratio = float(current_step / total_steps)
        rad = ratio * pi / 2


@@ -10,6 +10,7 @@ EMPTY_STYLE = {"styles": {}, "deleted": {}}

class StyleManager:
    def __init__(self):
        self.STYLE_SHEET: dict = None


@@ -1,7 +1,7 @@
from modules import scripts

def _grid_reference():
    for data in scripts.scripts_data:
        if data.script_class.__module__ in (
            "scripts.xyz_grid",
@@ -40,7 +40,7 @@ def xyz_support(cache: dict):
    def choices_scaling():
        return ["Flat", "Cos", "Sin", "1 - Cos", "1 - Sin"]

    xyz_grid = _grid_reference()
    extra_axis_options = [
        xyz_grid.AxisOption(


samples/03.jpg (new binary file)



@@ -1,25 +1,27 @@
{
    "prompt": "a photo of a dog",
    "negative_prompt": "(low quality, worst quality)",
    "sampler_name": "Euler a",
    "sampler_index": "euler",
    "steps": 24,
    "cfg_scale": 6.0,
    "batch_size": 1,
    "seed": -1,
    "width": 512,
    "height": 512,
    "alwayson_scripts": {
        "vectorscope cc": {
            "args": [
                true,
                true,
                -2.5,
                1.5,
                0.85,
                0.0,
                0.0,
                0.0,
                false,
                false,
                false,
                "Straight Abs.",
                "Flat"

samples/method.jpg (new binary file)

samples/scaling.jpg (new binary file)


@@ -2,20 +2,21 @@ from modules.sd_samplers_kdiffusion import KDiffusionSampler
from modules import shared, scripts

from lib_cc.colorpicker import create_colorpicker
from lib_cc.callback import hook_callbacks
from lib_cc.style import StyleManager
from lib_cc.xyz import xyz_support
from lib_cc import const

from random import seed
import gradio as gr

VERSION = "2.3.0"

style_manager = StyleManager()
style_manager.load_styles()

hook_callbacks()
class VectorscopeCC(scripts.Script):
@@ -45,24 +46,24 @@ class VectorscopeCC(scripts.Script):
        with gr.Row():
            bri = gr.Slider(
                label="Brightness",
                minimum=const.Brightness.minimum,
                maximum=const.Brightness.maximum,
                step=0.05,
                value=const.Brightness.default,
            )
            con = gr.Slider(
                label="Contrast",
                minimum=const.Contrast.minimum,
                maximum=const.Contrast.maximum,
                step=0.05,
                value=const.Contrast.default,
            )
            sat = gr.Slider(
                label="Saturation",
                minimum=const.Saturation.minimum,
                maximum=const.Saturation.maximum,
                step=0.05,
                value=const.Saturation.default,
            )

        with gr.Row():
@@ -70,42 +71,33 @@ class VectorscopeCC(scripts.Script):
            r = gr.Slider(
                label="R",
                info="Cyan | Red",
                minimum=const.Color.minimum,
                maximum=const.Color.maximum,
                step=0.05,
                value=const.Color.default,
                elem_id=f"cc-r-{mode}",
            )
            g = gr.Slider(
                label="G",
                info="Magenta | Green",
                minimum=const.Color.minimum,
                maximum=const.Color.maximum,
                step=0.05,
                value=const.Color.default,
                elem_id=f"cc-g-{mode}",
            )
            b = gr.Slider(
                label="B",
                info="Yellow | Blue",
                minimum=const.Color.minimum,
                maximum=const.Color.maximum,
                step=0.05,
                value=const.Color.default,
                elem_id=f"cc-b-{mode}",
            )

        for c in (r, g, b):
            c.input(
                None,
                inputs=[r, g, b],
                _js=f"(r, g, b) => {{ VectorscopeCC.updateCursor(r, g, b, {m}); }}",
            )
@@ -125,7 +117,9 @@ class VectorscopeCC(scripts.Script):
                     refresh_btn = gr.Button(value="Refresh Style", scale=2)

                 with gr.Row(elem_classes="style-rows"):
-                    style_name = gr.Textbox(label="Style Name", scale=3)
+                    style_name = gr.Textbox(
+                        label="Style Name", lines=1, max_lines=1, scale=3
+                    )
                     save_btn = gr.Button(
                         value="Save Style", elem_id=f"cc-save-{mode}", scale=2
                     )
@@ -145,12 +139,15 @@ class VectorscopeCC(scripts.Script):
             with gr.Accordion("Advanced Settings", open=False):
                 with gr.Row():
-                    doHR = gr.Checkbox(label="Process Hires. fix")
+                    doHR = gr.Checkbox(
+                        label="Process Hires. fix",
+                        visible=(not is_img2img),
+                    )
                     doAD = gr.Checkbox(label="Process Adetailer")
                     doRN = gr.Checkbox(label="Randomize using Seed")

                 method = gr.Radio(
-                    [
+                    choices=(
                         "Straight",
                         "Straight Abs.",
                         "Cross",
@@ -160,13 +157,13 @@ class VectorscopeCC(scripts.Script):
                         "U.Random",
                         "Multi-Res",
                         "Multi-Res Abs.",
-                    ],
+                    ),
                     label="Noise Settings",
                     value="Straight Abs.",
                 )

                 scaling = gr.Radio(
-                    ["Flat", "Cos", "Sin", "1 - Cos", "1 - Sin"],
+                    choices=("Flat", "Cos", "Sin", "1 - Cos", "1 - Sin"),
                     label="Scaling Settings",
                     value="Flat",
                 )
@@ -188,7 +185,7 @@ class VectorscopeCC(scripts.Script):
             apply_btn.click(
                 fn=style_manager.get_style,
-                inputs=style_choice,
+                inputs=[style_choice],
                 outputs=[*comps],
             ).then(
                 None,
@@ -199,18 +196,18 @@ class VectorscopeCC(scripts.Script):
             save_btn.click(
                 fn=lambda *args: gr.update(choices=style_manager.save_style(*args)),
                 inputs=[style_name, *comps],
-                outputs=style_choice,
+                outputs=[style_choice],
             )

             delete_btn.click(
                 fn=lambda name: gr.update(choices=style_manager.delete_style(name)),
-                inputs=style_name,
-                outputs=style_choice,
+                inputs=[style_name],
+                outputs=[style_choice],
             )

             refresh_btn.click(
-                fn=lambda _: gr.update(choices=style_manager.load_styles()),
-                outputs=style_choice,
+                fn=lambda: gr.update(choices=style_manager.load_styles()),
+                outputs=[style_choice],
             )

             with gr.Row():
@@ -248,7 +245,7 @@ class VectorscopeCC(scripts.Script):
                 outputs=[*comps],
                 show_progress="hidden",
             ).then(
-                None,
+                fn=None,
                 inputs=[r, g, b],
                 _js=f"(r, g, b) => {{ VectorscopeCC.updateCursor(r, g, b, {m}); }}",
             )
@@ -258,7 +255,7 @@ class VectorscopeCC(scripts.Script):
                 outputs=[bri, con, sat, r, g, b],
                 show_progress="hidden",
             ).then(
-                None,
+                fn=None,
                 inputs=[r, g, b],
                 _js=f"(r, g, b) => {{ VectorscopeCC.updateCursor(r, g, b, {m}); }}",
             )
@@ -308,52 +305,45 @@ class VectorscopeCC(scripts.Script):
         seeds: list[int],
         subseeds: list[int],
     ):
-        if "Enable" in self.xyzCache.keys():
-            enable = self.xyzCache["Enable"].lower().strip() == "true"
+        enable = self.xyzCache.pop("Enable", str(enable)).lower().strip() == "true"

         if not enable:
             if len(self.xyzCache) > 0:
-                if "Enable" not in self.xyzCache.keys():
                 print("\n[Vec.CC] x [X/Y/Z Plot] Extension is not Enabled!\n")
                 self.xyzCache.clear()
-            KDiffusionSampler.vec_cc = {"enable": False}
+            setattr(KDiffusionSampler, "vec_cc", {"enable": False})
+            return p
+
+        method = str(self.xyzCache.pop("Method", method))
+        if method == "Disabled":
+            setattr(KDiffusionSampler, "vec_cc", {"enable": False})
             return p

         if "Random" in self.xyzCache.keys():
             print("[X/Y/Z Plot] x [Vec.CC] Randomize is Enabled.")
             if len(self.xyzCache) > 1:
-                print("Some parameters will not apply!")
+                print("Some parameters will be overridden!")
+            cc_seed = int(self.xyzCache.pop("Random"))
+        else:
             cc_seed = int(seeds[0]) if doRN else None

-        if "Alt" in self.xyzCache.keys():
-            latent = self.xyzCache["Alt"].lower().strip() == "true"
-
-        if "DoHR" in self.xyzCache.keys():
-            doHR = self.xyzCache["DoHR"].lower().strip() == "true"
-
-        if "Random" in self.xyzCache.keys():
-            cc_seed = int(self.xyzCache["Random"])
-
-        bri = float(self.xyzCache.get("Brightness", bri))
-        con = float(self.xyzCache.get("Contrast", con))
-        sat = float(self.xyzCache.get("Saturation", sat))
-        r = float(self.xyzCache.get("R", r))
-        g = float(self.xyzCache.get("G", g))
-        b = float(self.xyzCache.get("B", b))
-        method = str(self.xyzCache.get("Method", method))
-        scaling = str(self.xyzCache.get("Scaling", scaling))
-        self.xyzCache.clear()
-
-        if method == "Disabled":
-            KDiffusionSampler.vec_cc = {"enable": False}
-            return p
-
-        steps: int = getattr(p, "firstpass_steps", None) or p.steps
+        latent = self.xyzCache.pop("Alt", str(latent)).lower().strip() == "true"
+        doHR = self.xyzCache.pop("DoHR", str(doHR)).lower().strip() == "true"
+        scaling = str(self.xyzCache.pop("Scaling", scaling))
+
+        bri = float(self.xyzCache.pop("Brightness", bri))
+        con = float(self.xyzCache.pop("Contrast", con))
+        sat = float(self.xyzCache.pop("Saturation", sat))
+        r = float(self.xyzCache.pop("R", r))
+        g = float(self.xyzCache.pop("G", g))
+        b = float(self.xyzCache.pop("B", b))
+
+        assert len(self.xyzCache) == 0

         if cc_seed:
             seed(cc_seed)
@@ -366,13 +356,13 @@ class VectorscopeCC(scripts.Script):
             g = const.Color.rand()
             b = const.Color.rand()

-            print(f"\n-> Seed: {cc_seed}")
-            print(f"Brightness:\t{bri}")
-            print(f"Contrast:\t{con}")
-            print(f"Saturation:\t{sat}")
-            print(f"R:\t\t{r}")
-            print(f"G:\t\t{g}")
-            print(f"B:\t\t{b}\n")
+            print(f"\n[Seed: {cc_seed}]")
+            print(f"> Brightness: {bri}")
+            print(f"> Contrast: {con}")
+            print(f"> Saturation: {sat}")
+            print(f"> R: {r}")
+            print(f"> G: {g}")
+            print(f"> B: {b}\n")

         if getattr(shared.opts, "cc_metadata", True):
             p.extra_generation_params.update(
@@ -394,6 +384,8 @@ class VectorscopeCC(scripts.Script):
                 }
             )

+        steps: int = getattr(p, "firstpass_steps", None) or p.steps
+
         bri /= steps
         con /= steps
         sat = pow(sat, 1.0 / steps)
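The hunk above spreads the requested totals across all sampler steps: additive offsets (brightness, contrast) are divided linearly, while the multiplicative saturation factor takes an n-th root so it compounds back to the total. A quick standalone check of that arithmetic (the variable names here are illustrative, not the extension's):

```python
# Additive offsets are split by division; multiplicative factors by n-th root.
steps = 20
bri_total, sat_total = 1.0, 2.0

bri_step = bri_total / steps           # summed over `steps` applications
sat_step = sat_total ** (1.0 / steps)  # compounded over `steps` applications

assert abs(bri_step * steps - bri_total) < 1e-9   # linear split recovers total
assert abs(sat_step**steps - sat_total) < 1e-9    # geometric split recovers total
```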
@@ -403,7 +395,10 @@ class VectorscopeCC(scripts.Script):
         mode: str = "x" if latent else "denoised"

-        KDiffusionSampler.vec_cc = {
+        setattr(
+            KDiffusionSampler,
+            "vec_cc",
+            {
                 "enable": True,
                 "mode": mode,
                 "bri": bri,
@@ -417,6 +412,15 @@ class VectorscopeCC(scripts.Script):
                 "doAD": doAD,
                 "scaling": scaling,
                 "step": steps,
-        }
+            },
+        )
+        return p
+
+    def before_hr(self, p, enable: bool, *args, **kwargs):
+        if enable:
+            steps: int = getattr(p, "hr_second_pass_steps", None) or p.steps
+            KDiffusionSampler.vec_cc["step"] = steps

         return p
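A recurring pattern in this refactor is replacing `if key in cache` checks with `dict.pop` defaults, so the X/Y/Z override cache is consumed as it is read and a final length check can catch anything unhandled. A minimal standalone sketch of that pattern (function and key names are illustrative, not the extension's API):

```python
def consume_overrides(cache: dict, enable: bool = False, bri: float = 0.0):
    # Each pop() both reads and removes the key; whatever remains afterwards
    # is an override that nothing handled.
    enable = cache.pop("Enable", str(enable)).lower().strip() == "true"
    bri = float(cache.pop("Brightness", bri))
    assert len(cache) == 0, f"unhandled overrides: {list(cache)}"
    return enable, bri

result = consume_overrides({"Enable": "True", "Brightness": "0.5"})
# result is (True, 0.5); keys absent from the cache keep their defaults
```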

View File

@@ -1,4 +1,5 @@
 from modules.processing import process_images, get_fixed_seed
+from modules.shared import state
 from modules import scripts
 from copy import copy
 import gradio as gr
@@ -6,39 +7,36 @@ import numpy as np
 import cv2 as cv

-# https://docs.opencv.org/4.8.0/d2/df0/tutorial_py_hdr.html
-def merge_HDR(imgs: list, path: str, depth: str, fmt: str, gamma: float) -> np.ndarray:
+def _merge_HDR(imgs: list, path: str, depth: str, fmt: str, gamma: float):
+    """https://docs.opencv.org/4.8.0/d2/df0/tutorial_py_hdr.html"""
     import datetime
     import math
     import os

-    output_folder = os.path.join(path, "hdr")
-    os.makedirs(output_folder, exist_ok=True)
+    out_dir = os.path.join(os.path.dirname(path), "hdr")
+    os.makedirs(out_dir, exist_ok=True)
+    print(f'\nSaving HDR Outputs to "{out_dir}"\n')

-    imgs_np = [np.asarray(img).astype(np.uint8) for img in imgs]
+    imgs_np = [np.asarray(img, dtype=np.uint8) for img in imgs]
     merge = cv.createMergeMertens()
     hdr = merge.process(imgs_np)

-    # print(f'{np.min(hdr)}, {np.max(hdr)}')
-    hdr += math.ceil(0.0 - np.min(hdr) * 1000) / 1000
-    # print(f"({np.min(hdr)}, {np.max(hdr)}")
+    # shift min to 0.0
+    hdr += math.ceil(0 - np.min(hdr) * 1000) / 1000

     target = 65535 if depth == "16bpc" else 255
-    precision = "uint16" if depth == "16bpc" else "uint8"
+    precision = np.uint16 if depth == "16bpc" else np.uint8

     hdr = np.power(hdr, (1 / gamma))
     ldr = np.clip(hdr * target, 0, target).astype(precision)
     rgb = cv.cvtColor(ldr, cv.COLOR_BGR2RGB)

-    cv.imwrite(
-        os.path.join(
-            output_folder, f'{datetime.datetime.now().strftime("%H-%M-%S")}{fmt}'
-        ),
-        rgb,
-    )
-
-    return ldr
+    time = datetime.datetime.now().strftime("%H-%M-%S")
+    cv.imwrite(os.path.join(out_dir, f"{time}{fmt}"), rgb)


 class VectorHDR(scripts.Script):
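After the Mertens fusion above, the result is gamma-corrected, scaled to the chosen bit depth, and clipped before writing. A pure-Python, per-value sketch of that last step (`quantize` is a hypothetical helper name; the real code operates on whole NumPy arrays):

```python
import math

def quantize(v: float, gamma: float = 1.2, depth: str = "16bpc") -> int:
    """Gamma-correct a fused value in [0, 1], then scale and clip it
    to the target bit depth (65535 for 16bpc, 255 for 8bpc)."""
    target = 65535 if depth == "16bpc" else 255
    v = math.pow(v, 1.0 / gamma)
    return max(0, min(target, round(v * target)))

# The extremes of the fused range map to the extremes of the output depth.
full = quantize(1.0)   # 65535 at 16bpc
black = quantize(0.0)  # 0
```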
@@ -50,10 +48,22 @@ class VectorHDR(scripts.Script):
         return True

     def ui(self, is_img2img):
         with gr.Row():
-            count = gr.Slider(label="Brackets", minimum=3, maximum=9, step=2, value=5)
+            count = gr.Slider(
+                label="Brackets",
+                minimum=3,
+                maximum=9,
+                step=2,
+                value=5,
+            )
             gap = gr.Slider(
-                label="Gaps", minimum=0.50, maximum=2.50, step=0.25, value=1.25
+                label="Gaps",
+                minimum=0.50,
+                maximum=2.50,
+                step=0.25,
+                value=1.25,
             )

         with gr.Accordion(
@@ -61,6 +71,7 @@ class VectorHDR(scripts.Script):
             elem_id=f'vec-hdr-{"img" if is_img2img else "txt"}',
             open=False,
         ):
+
             auto = gr.Checkbox(label="Automatically Merge", value=True)

             with gr.Row():
@@ -76,7 +87,7 @@ class VectorHDR(scripts.Script):
                     value=1.2,
                 )

-        for comp in [count, gap, auto, depth, fmt, gamma]:
+        for comp in (count, gap, auto, depth, fmt, gamma):
             comp.do_not_save_to_config = True

         return [count, gap, auto, depth, fmt, gamma]
@@ -84,7 +95,8 @@ class VectorHDR(scripts.Script):
     def run(
         self, p, count: int, gap: float, auto: bool, depth: str, fmt: str, gamma: float
     ):
-        center = count // 2
+        center: int = count // 2
+        brackets = brightness_brackets(count, gap)

         p.seed = get_fixed_seed(p.seed)
         p.scripts.script("vectorscope cc").xyzCache.update({"Enable": "False"})
@@ -95,9 +107,12 @@ class VectorHDR(scripts.Script):
         imgs = [None] * count
         imgs[center] = baseline.images[0]

-        brackets = brightness_brackets(count, gap)
         for it in range(count):
+            if state.skipped or state.interrupted or state.stopping_generation:
+                print("HDR Process Skipped...")
+                return baseline
+
             if it == center:
                 continue
@@ -115,14 +130,13 @@ class VectorHDR(scripts.Script):
             proc = process_images(pc)
             imgs[it] = proc.images[0]

-        if not auto:
-            baseline.images = imgs
-        else:
-            baseline.images = [merge_HDR(imgs, p.outpath_samples, depth, fmt, gamma)]
+        if auto:
+            _merge_HDR(imgs, p.outpath_samples, depth, fmt, gamma)

+        baseline.images = imgs
         return baseline


-def brightness_brackets(count: int, gap: int) -> list[int]:
+def brightness_brackets(count: int, gap: float) -> list[float]:
     half = count // 2
     return [gap * (i - half) for i in range(count)]
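With the UI defaults above (5 brackets, gap 1.25), the helper yields symmetric brightness offsets centered on the unmodified image, which is why `imgs[center]` is the baseline render:

```python
# brightness_brackets as defined in the diff above.
def brightness_brackets(count: int, gap: float) -> list[float]:
    half = count // 2
    return [gap * (i - half) for i in range(count)]

offsets = brightness_brackets(5, 1.25)
# offsets == [-2.5, -1.25, 0.0, 1.25, 2.5]; the middle bracket is 0.0,
# i.e. the untouched baseline exposure.
```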