fix generation info; refine readme

pull/12/head
pkuliyi2015 2023-05-21 20:55:20 +00:00
parent 0559f26318
commit 7cbfb4097e
4 changed files with 45 additions and 12 deletions


@ -18,7 +18,25 @@ Relevant Links
- [Paper on arXiv](https://arxiv.org/abs/2305.07015)
> If you find this project useful, please give me & Jianyi Wang a star! ⭐
---
***
## Features
1. **High-fidelity detailed image upscaling**:
- Adds rich, fine detail while preserving the face identity of your characters.
- Suitable for most images (Realistic or Anime, Photography or AIGC, SD 1.5 or Midjourney images...) [Official Examples](https://iceclear.github.io/projects/stablesr/)
2. **Less VRAM consumption**
- I removed the VRAM-expensive modules from the official implementation.
- The remaining model is much smaller than the ControlNet Tile model and requires less VRAM.
- When combined with Tiled Diffusion & VAE, you can do 4k image super-resolution with limited VRAM (e.g., < 12 GB).
> Please be aware that sdp may lead to OOM for unknown reasons. You may use xformers instead.
3. **Wavelet Color Fix**
- The official StableSR significantly shifts the colors of the generated image, and the problem becomes even more prominent when upscaling in tiles.
- I implemented a powerful post-processing technique that effectively matches the colors of the upscaled image to the original. See [Wavelet Color Fix Example](https://imgsli.com/MTgwNDg2/).
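As a rough illustration of how such a wavelet-based color transfer works, here is a minimal sketch: repeated dilated Gaussian blurs extract the low-frequency (color) band of each image, and the upscaled image's high-frequency detail is recombined with the original's low-frequency colors. The function names and the number of levels are illustrative assumptions, not the extension's exact code.

```python
import torch
import torch.nn.functional as F

def wavelet_blur(image: torch.Tensor, radius: int) -> torch.Tensor:
    """Blur a (1, 3, H, W) image with a 3x3 Gaussian-like kernel dilated by `radius`."""
    kernel_vals = [
        [0.0625, 0.125, 0.0625],
        [0.125, 0.25, 0.125],
        [0.0625, 0.125, 0.0625],
    ]
    kernel = torch.tensor(kernel_vals, dtype=image.dtype, device=image.device)
    # one identical kernel per RGB channel, applied depthwise via groups=3
    kernel = kernel[None, None].repeat(3, 1, 1, 1)
    image = F.pad(image, (radius, radius, radius, radius), mode='replicate')
    return F.conv2d(image, kernel, groups=3, dilation=radius)

def wavelet_color_fix(upscaled: torch.Tensor, original: torch.Tensor, levels: int = 5) -> torch.Tensor:
    """Keep the upscaled image's detail but take the original image's colors."""
    def low_freq(img: torch.Tensor) -> torch.Tensor:
        # successive blurs with growing dilation isolate the low-frequency band
        for i in range(levels):
            img = wavelet_blur(img, 2 ** i)
        return img
    high = upscaled - low_freq(upscaled)   # high-frequency detail of the upscale
    return high + low_freq(original)       # recombined with the original's colors
```

Because the high- and low-frequency bands sum back to the full image, fixing an image against itself returns it unchanged, which is a handy sanity check.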
***
## Usage
### 1. Installation
@ -31,13 +49,14 @@ Relevant Links
⚪ Method 2: In progress...
> After successful installation, you should see "StableSR" in the img2img Scripts dropdown list.
### 2. Download the main components
- You MUST use the Stable Diffusion V2.1 512 **EMA** checkpoint (~5.21GB) from StabilityAI
- You can download it from [HuggingFace](https://huggingface.co/stabilityai/stable-diffusion-2-1-base)
- Put it into stable-diffusion-webui/models/Stable-Diffusion/
> While it requires an SD2.1 checkpoint, you can still upscale ANY image (even from SD1.5, and even NSFW). Your image won't be censored and the output quality won't be affected.
- Download the extracted StableSR module
- Official resources: [HuggingFace](https://huggingface.co/Iceclear/StableSR/resolve/main/weibu_models.zip) (~1.2 GB). Note that this is a zip file containing both the StableSR module and the VQVAE.
- My resources: <[GoogleDrive](https://drive.google.com/file/d/1tWjkZQhfj07sHDR4r9Ta5Fk4iMp1t3Qw/view?usp=sharing)> <[百度网盘-提取码aguq](https://pan.baidu.com/s/1Nq_6ciGgKnTu0W14QcKKWg?pwd=aguq)>
@ -98,7 +117,7 @@ Relevant Links
- However, in practice, I found these features are astonishingly large for big images (>10 GB for 4k images, even in float16!).
- Hence, **I removed the CFW component in the VAE Decoder**. As this leads to inferior fidelity in details, I will try to add it back later as an option.
---
***
## License
This project is licensed under:


@ -18,7 +18,23 @@ Licensed under S-Lab License 1.0
- [Paper on arXiv](https://arxiv.org/abs/2305.07015)
> If you find this project helpful, please give me & Jianyi Wang a star! ⭐
---
***
## Features
1. **High-fidelity image upscaling**
- Adds very fine detail and texture without altering your characters' faces.
- Suitable for most images (realistic or anime, photography or AIGC, SD 1.5 or Midjourney images...)
2. **Less VRAM consumption**
- I removed the VRAM-expensive modules from the official implementation.
- The remaining model is much smaller than the ControlNet Tile model and requires far less VRAM.
- When combined with Tiled Diffusion & VAE, you can do 4k super-resolution with limited VRAM (e.g., < 12 GB).
> Note that sdp may cause OOM for unknown reasons. Use xformers instead.
3. **Wavelet Color Fix**
- The official StableSR shows a noticeable color shift, and the problem is even more prominent when upscaling in tiles.
- I implemented a powerful post-processing technique that effectively matches the colors of the upscaled image to the original. See [Wavelet Color Fix Example](https://imgsli.com/MTgwNDg2/).
***
## Usage
### 1. Installation
@ -31,13 +47,12 @@ Licensed under S-Lab License 1.0
⚪ Method 2: In progress...
> After successful installation, you should see "StableSR" at the bottom of the img2img Scripts dropdown list.
### 2. Required models
- You MUST use StabilityAI's official Stable Diffusion V2.1 512 **EMA** checkpoint (~5.21 GB).
- You can download it from [HuggingFace](https://huggingface.co/stabilityai/stable-diffusion-2-1-base)
- Put it into the stable-diffusion-webui/models/Stable-Diffusion/ folder.
> Although StableSR requires an SD2.1 checkpoint, you can still upscale images from SD1.5. NSFW images will not be distorted by the model, and output quality will not be affected.
- Download the StableSR module
- Official resources: [HuggingFace](https://huggingface.co/Iceclear/StableSR/resolve/main/weibu_models.zip) (~1.2 GB). Note that this is a zip file containing both the StableSR module and the optional VQVAE.
- My resources: <[GoogleDrive](https://drive.google.com/file/d/1tWjkZQhfj07sHDR4r9Ta5Fk4iMp1t3Qw/view?usp=sharing)> <[Baidu Netdisk, extraction code: aguq](https://pan.baidu.com/s/1Nq_6ciGgKnTu0W14QcKKWg?pwd=aguq)>
@ -98,7 +113,7 @@ Licensed under S-Lab License 1.0
- However, in practice, I found these features are astonishingly large for big images (>10 GB for 4k images, even in float16).
- Hence, **I removed the CFW component from the VAE decoder**. As this leads to inferior fidelity in details, I will try to add it back later as an option.
---
***
## License
This project is licensed under:


@ -247,16 +247,17 @@ class Script(scripts.Script):
            print(f'[StableSR] Error fixing color with default method: {e}')
    # save the fixed color images
    n = p.n_iter
    for i in range(len(fixed_images)):
        try:
-            images.save_image(fixed_images[i], p.outpath_samples, "", result.seed, result.prompt, opts.samples_format, info=result.infotexts, p=p)
+            images.save_image(fixed_images[i], p.outpath_samples, "", p.all_seeds[i], p.all_prompts[i], opts.samples_format, info=result.infotexts[i], p=p)
        except Exception as e:
            print(f'[StableSR] Error saving color fixed image: {e}')
    if save_original:
        for i in range(len(result.images)):
            try:
-                images.save_image(result.images[i], p.outpath_samples, "", result.seed, result.prompt, opts.samples_format, info=result.infotexts, p=p, suffix="-before-color-fix")
+                images.save_image(result.images[i], p.outpath_samples, "", p.all_seeds[i], p.all_prompts[i], opts.samples_format, info=result.infotexts[i], p=p, suffix="-before-color-fix")
            except Exception as e:
                print(f'[StableSR] Error saving original image: {e}')
    result.images = result.images + fixed_images


@ -70,8 +70,6 @@ def wavelet_blur(image: Tensor, radius: int):
"""
# input shape: (1, 3, H, W)
# convolution kernel
# input shape: (1, 3, H, W)
# convolution kernel
kernel_vals = [
[0.0625, 0.125, 0.0625],
[0.125, 0.25, 0.125],