mirror of https://github.com/vladmandic/automatic

commit a54e9b3311 (parent 5e4819c3e9, branch pull/4696/head)
update readme and some debug functions
Signed-off-by: vladmandic <mandic00@live.com>
README.md (113 changed lines)
@@ -1,8 +1,14 @@
 <div align="center">
-<img src="https://github.com/vladmandic/sdnext/raw/master/html/logo-transparent.png" width=200 alt="SD.Next">
+<img src="https://github.com/vladmandic/sdnext/raw/master/html/logo-transparent.png" width=200 alt="SD.Next: AI art generator logo">

-# SD.Next: All-in-one WebUI for AI generative image and video creation and captioning
+# SD.Next: All-in-one WebUI

+SD.Next is a powerful, open-source WebUI app for AI image and video generation, built on Stable Diffusion and supporting dozens of advanced models. Create, caption, and process images and videos with a modern, cross-platform interface, perfect for artists, researchers, and AI enthusiasts.
+
 [](https://discord.gg/VjvR2tabEX)
@@ -17,61 +23,63 @@
 ## Table of contents

 - [Documentation](https://vladmandic.github.io/sdnext-docs/)
-- [SD.Next Features](#sdnext-features)
-- [Model support](#model-support)
-- [Platform support](#platform-support)
+- [SD.Next Features](#features--capabilities)
+- [Supported AI Models](#supported-ai-models)
+- [Supported Platforms & Hardware](#supported-platforms--hardware)
 - [Getting started](#getting-started)

-## SD.Next Features
-
-All individual features are not listed here, instead check [ChangeLog](CHANGELOG.md) for full list of changes
-- Fully localized:
-  ▹ **English | Chinese | Russian | Spanish | German | French | Italian | Portuguese | Japanese | Korean**
-- Desktop and Mobile support!
-- Multiple [diffusion models](https://vladmandic.github.io/sdnext-docs/Model-Support/)!
-- Multi-platform!
-  ▹ **Windows | Linux | MacOS | nVidia CUDA | AMD ROCm | Intel Arc / IPEX XPU | DirectML | OpenVINO | ONNX+Olive | ZLUDA**
+### Screenshot: Desktop interface
+
+<div align="center">
+<img src="https://github.com/vladmandic/sdnext/raw/dev/html/screenshot-robot.jpg" alt="SD.Next: AI art generator desktop interface screenshot" width="90%">
+</div>
+
+### Screenshot: Mobile interface
+
+<div align="center">
+<img src="https://github.com/user-attachments/assets/ced9fe0c-d2c2-46d1-94a7-8f9f2307ce38" alt="SD.Next: AI art generator mobile interface screenshot" width="35%">
+</div>
+</div>
+
+<br>
+
+## Features & Capabilities
+
+SD.Next is feature-rich with a focus on performance, flexibility, and user experience. Key features include:
+- [Multi-platform](#supported-platforms--hardware)!
+- Many [diffusion models](https://vladmandic.github.io/sdnext-docs/Model-Support/)!
+- Fully localized to ~15 languages, with support for many [UI themes](https://vladmandic.github.io/sdnext-docs/Themes/)!
+- [Desktop](#screenshot-desktop-interface) and [Mobile](#screenshot-mobile-interface) support!
 - Platform specific auto-detection and tuning performed on install
-- Optimized processing with latest `torch` developments with built-in support for model compile and quantize
-  Compile backends: *Triton | StableFast | DeepCache | OneDiff | TeaCache | etc.*
-  Quantization methods: *SDNQ | BitsAndBytes | Optimum-Quanto | TorchAO / LayerWise*
-- **Captioning** with 150+ **OpenCLiP** models, **Tagger** with **WaifuDiffusion** and **DeepDanbooru** models, and 20+ built-in **VLMs**
 - Built in installer with automatic updates and dependency management

-<br>
-
-**Desktop** interface
-
-<div align="center">
-<img src="https://github.com/vladmandic/sdnext/raw/dev/html/screenshot-robot.jpg" alt="screenshot-modernui-desktop" width="90%">
-</div>
-
-**Mobile** interface
-
-<div align="center">
-<img src="https://github.com/user-attachments/assets/ced9fe0c-d2c2-46d1-94a7-8f9f2307ce38" alt="screenshot-modernui-mobile" width="35%">
-</div>
-
-For screenshots and information on other available themes, see [Themes](https://vladmandic.github.io/sdnext-docs/Themes/)
+### Unique features
+
+SD.Next includes many features not found in other WebUIs, such as:
+- **SDNQ**: State-of-the-Art quantization engine
+  Use pre-quantized models or run with quantization on-the-fly for up to 4x VRAM reduction with minimal or no quality and performance impact
+- **Balanced Offload**: Dynamically balance CPU and GPU memory to run larger models on limited hardware
+- **Captioning** with 150+ **OpenCLiP** models, **Tagger** with **WaifuDiffusion** and **DeepDanbooru** models, and 25+ built-in **VLMs**
+- **Image Processing** with a full image-correction and color-grading suite of tools

 <br>

-## Model support
+## Supported AI Models

 SD.Next supports a broad range of models: [supported models](https://vladmandic.github.io/sdnext-docs/Model-Support/) and [model specs](https://vladmandic.github.io/sdnext-docs/Models/)

-## Platform support
+## Supported Platforms & Hardware

 - *nVidia* GPUs using **CUDA** libraries on both *Windows and Linux*
-- *AMD* GPUs using **ROCm** libraries on *Linux*
-  Support will be extended to *Windows* once AMD releases ROCm for Windows
+- *AMD* GPUs using **ROCm** libraries on both *Linux and Windows*
+- *AMD* GPUs on Windows using **ZLUDA** libraries
 - *Intel Arc* GPUs using **OneAPI** with *IPEX XPU* libraries on both *Windows and Linux*
+- Any *CPU/GPU* or device compatible with **OpenVINO** libraries on both *Windows and Linux*
 - Any GPU compatible with *DirectX* on *Windows* using **DirectML** libraries
-  This includes support for AMD GPUs that are not supported by native ROCm libraries
-- Any GPU or device compatible with **OpenVINO** libraries on both *Windows and Linux*
 - *Apple M1/M2* on *OSX* using built-in support in Torch with **MPS** optimizations
 - *ONNX/Olive*
-- *AMD* GPUs on Windows using **ZLUDA** libraries

-Plus Docker container recipes for: [CUDA, ROCm, Intel IPEX and OpenVINO](https://vladmandic.github.io/sdnext-docs/Docker/)
+Plus **Docker** container recipes for: [CUDA, ROCm, Intel IPEX and OpenVINO](https://vladmandic.github.io/sdnext-docs/Docker/)

 ## Getting started
@@ -84,21 +92,37 @@ Plus Docker container recipes for: [CUDA, ROCm, Intel IPEX and OpenVINO](https://vladmandic.github.io/sdnext-docs/Docker/)
 > And for platform specific information, check out
 > [WSL](https://vladmandic.github.io/sdnext-docs/WSL/) | [Intel Arc](https://vladmandic.github.io/sdnext-docs/Intel-ARC/) | [DirectML](https://vladmandic.github.io/sdnext-docs/DirectML/) | [OpenVINO](https://vladmandic.github.io/sdnext-docs/OpenVINO/) | [ONNX & Olive](https://vladmandic.github.io/sdnext-docs/ONNX-Runtime/) | [ZLUDA](https://vladmandic.github.io/sdnext-docs/ZLUDA/) | [AMD ROCm](https://vladmandic.github.io/sdnext-docs/AMD-ROCm/) | [MacOS](https://vladmandic.github.io/sdnext-docs/MacOS-Python/) | [nVidia](https://vladmandic.github.io/sdnext-docs/nVidia/) | [Docker](https://vladmandic.github.io/sdnext-docs/Docker/)

+### Quick Start
+
+```shell
+git clone https://github.com/vladmandic/sdnext
+cd sdnext
+./webui.sh  # Linux/Mac
+webui.bat   # Windows
+webui.ps1   # PowerShell
+```
+
 > [!WARNING]
 > If you run into issues, check out [troubleshooting](https://vladmandic.github.io/sdnext-docs/Troubleshooting/) and [debugging](https://vladmandic.github.io/sdnext-docs/Debug/) guides

+## Community & Support
+
+If you're unsure how to use a feature, the best place to start is [Docs](https://vladmandic.github.io/sdnext-docs/) and if it's not there,
+check [ChangeLog](https://vladmandic.github.io/sdnext-docs/CHANGELOG/) for when the feature was first introduced, as it will always have a short note on how to use it
+
+And for any question, reach out on [Discord](https://discord.gg/VjvR2tabEX) or open an [issue](https://github.com/vladmandic/sdnext/issues) or [discussion](https://github.com/vladmandic/sdnext/discussions)

 ### Contributing

 Please see [Contributing](CONTRIBUTING) for details on how to contribute to this project
-And for any question, reach out on [Discord](https://discord.gg/VjvR2tabEX) or open an [issue](https://github.com/vladmandic/sdnext/issues) or [discussion](https://github.com/vladmandic/sdnext/discussions)

-### Credits
+## License & Credits

+- SD.Next is licensed under the [Apache License 2.0](LICENSE.txt)
 - Main credit goes to [Automatic1111 WebUI](https://github.com/AUTOMATIC1111/stable-diffusion-webui) for the original codebase
-- Additional credits are listed in [Credits](https://github.com/AUTOMATIC1111/stable-diffusion-webui/#credits)
-- Licenses for modules are listed in [Licenses](html/licenses.html)

-### Evolution
+## Evolution

 <a href="https://star-history.com/#vladmandic/sdnext&Date">
 <picture width=640>
@@ -109,9 +133,4 @@ And for any question, reach out on [Discord](https://discord.gg/VjvR2tabEX) or open an [issue](https://github.com/vladmandic/sdnext/issues) or [discussion](https://github.com/vladmandic/sdnext/discussions)

 - [OSS Stats](https://ossinsight.io/analyze/vladmandic/sdnext#overview)

-### Docs
-
-If you're unsure how to use a feature, best place to start is [Docs](https://vladmandic.github.io/sdnext-docs/) and if its not there,
-check [ChangeLog](https://vladmandic.github.io/sdnext-docs/CHANGELOG/) for when feature was first introduced as it will always have a short note on how to use it
-
 <br>
TODO.md (4 changed lines)
@@ -2,20 +2,22 @@
 ## Release

-- Update **README**
 - Bump packages
 - Implement `unload_auxiliary_models`
 - Release **Launcher**
 - Release **Enso**
 - Update **ROCm**
 - Tips **Color Grading**
+- Tips **Latent Corrections**

 ## Internal

+- Integrate: [Depth3D](https://github.com/vladmandic/sd-extension-depth3d)
 - Feature: Color grading in processing
 - Feature: RIFE update
 - Feature: RIFE in processing
 - Feature: SeedVR2 in processing
+- Feature: Add video models to `Reference`
 - Deploy: Lite vs Expert mode
 - Engine: [mmgp](https://github.com/deepbeepmeep/mmgp)
 - Engine: `TensorRT` acceleration
@@ -308,6 +308,7 @@ def main():
     log.warning('Restart is recommended due to packages updates...')
     t_server = time.time()
     t_monitor = time.time()

     while True:
         try:
             alive = uv.thread.is_alive()
@@ -326,8 +327,10 @@ def main():
             if float(monitor_rate) > 0 and t_current - t_monitor > float(monitor_rate):
                 log.trace(f'Monitor: {get_memory_stats(detailed=True)}')
                 t_monitor = t_current
-                from modules.api.validate import get_api_stats
-                get_api_stats()
+                # from modules.api.validate import get_api_stats
+                # get_api_stats()
+                # from modules import memstats
+                # memstats.get_objects()
             if not alive:
                 if uv is not None and uv.wants_restart:
                     clean_server()
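The hunk above swaps the live `get_api_stats` probe for commented-out calls into the new `memstats` module, inside a loop that fires at most once per `monitor_rate` seconds. A minimal standalone sketch of that interval-gating pattern (`run_monitor` and `emit` are hypothetical names, not part of the codebase):

```python
import time

def run_monitor(total_steps: int, monitor_rate: float, emit) -> int:
    # call emit() at most once per monitor_rate seconds;
    # a rate <= 0 disables monitoring, mirroring the
    # float(monitor_rate) > 0 guard in the loop above
    t_monitor = time.time()
    calls = 0
    for _ in range(total_steps):
        t_current = time.time()
        if monitor_rate > 0 and t_current - t_monitor > monitor_rate:
            emit()
            calls += 1
            t_monitor = t_current  # restart the window after each emit
        time.sleep(0.01)
    return calls
```

Resetting `t_monitor` after each emit keeps the probe rate bounded no matter how fast the loop itself spins.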
@@ -74,8 +74,8 @@ def add_diag_args(p):
     p.add_argument('--safe', default=env_flag("SD_SAFE", False), action='store_true', help="Run in safe mode with no user extensions")
     p.add_argument('--test', default=env_flag("SD_TEST", False), action='store_true', help="Run test only and exit")
     p.add_argument('--version', default=False, action='store_true', help="Print version information")
-    p.add_argument("--monitor", default=os.environ.get("SD_MONITOR", -1), help="Run memory monitor, default: %(default)s")
-    p.add_argument("--status", default=os.environ.get("SD_STATUS", -1), help="Run server is-alive status, default: %(default)s")
+    p.add_argument("--monitor", type=float, default=float(os.environ.get("SD_MONITOR", -1)), help="Run memory monitor, default: %(default)s")
+    p.add_argument("--status", type=float, default=float(os.environ.get("SD_STATUS", -1)), help="Run server is-alive status, default: %(default)s")


 def add_log_args(p):
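The change above adds `type=float` and coerces the environment-variable defaults: values read from `os.environ` arrive as strings, so without the coercion `args.monitor` could be a `str` on one code path and a number on another. A minimal sketch of the pattern (`build_parser` is a hypothetical helper, not part of the codebase):

```python
import argparse
import os

def build_parser() -> argparse.ArgumentParser:
    # coerce both the env-var default and any CLI value to float,
    # so downstream comparisons never operate on strings
    p = argparse.ArgumentParser()
    p.add_argument("--monitor", type=float,
                   default=float(os.environ.get("SD_MONITOR", -1)),
                   help="Run memory monitor, default: %(default)s")
    return p
```

Without `type=float`, `--monitor 5` would leave `args.monitor` as the string `'5'` while the unset-env default stayed numeric, an easy source of type bugs.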
@@ -1,9 +1,11 @@
 import re
 import sys
 import os
+import types
+from collections import deque
 import psutil
 import torch
-from modules import shared, errors
+from modules import shared, errors, devices
 from modules.logger import log
@@ -130,28 +132,53 @@ def reset_stats():
 class Object:
     pattern = r"'(.*?)'"

+    def get_size(self, obj, seen=None):
+        size = sys.getsizeof(obj)
+        if seen is None:
+            seen = set()
+        obj_id = id(obj)
+        if obj_id in seen:
+            return 0  # Avoid double counting
+        seen.add(obj_id)
+        if isinstance(obj, dict):
+            size += sum(self.get_size(k, seen) + self.get_size(v, seen) for k, v in obj.items())
+        elif isinstance(obj, (list, tuple, set, frozenset, deque)):
+            size += sum(self.get_size(i, seen) for i in obj)
+        return size
+
     def __init__(self, name, obj):
         self.id = id(obj)
         self.name = name
         self.fn = sys._getframe(2).f_code.co_name
-        self.size = sys.getsizeof(obj)
         self.refcount = sys.getrefcount(obj)
         if torch.is_tensor(obj):
             self.type = obj.dtype
             self.size = obj.element_size() * obj.nelement()
         else:
             self.type = re.findall(self.pattern, str(type(obj)))[0]
-            self.size = sys.getsizeof(obj)
+            self.size = self.get_size(obj)

     def __str__(self):
         return f'{self.fn}.{self.name} type={self.type} size={self.size} ref={self.refcount}'


-def get_objects(gcl=None, threshold:int=0):
+def get_objects(gcl=None, threshold:int=1024*1024):
+    devices.torch_gc(force=True)
     if gcl is None:
+        # gcl = globals()
         gcl = {}
+        log.trace(f'Memory: modules={len(sys.modules)}')
+        for _module_name, module in sys.modules.items():
+            try:
+                if not isinstance(module, types.ModuleType):
+                    continue
+                namespace = vars(module)
+                gcl.update(namespace)
+            except Exception:
+                pass  # Some modules may not allow introspection
     objects = []
     seen = []

+    log.trace(f'Memory: items={len(gcl)} threshold={threshold}')
     for name, obj in gcl.items():
         if id(obj) in seen:
             continue
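The `get_size` method added in this hunk is a recursive deep-sizeof: `sys.getsizeof` reports only a container's own overhead, not the sizes of its elements, and the `seen` id-set stops shared or self-referential objects from being counted twice (or forever). The same technique as a standalone function; `deep_getsizeof` is an illustrative name, not part of the module:

```python
import sys
from collections import deque

def deep_getsizeof(obj, seen=None) -> int:
    # sys.getsizeof alone reports only the container's own bytes;
    # recurse into dicts and sequences, tracking object ids so
    # shared or self-referential structures are counted once
    size = sys.getsizeof(obj)
    if seen is None:
        seen = set()
    obj_id = id(obj)
    if obj_id in seen:
        return 0  # avoid double counting
    seen.add(obj_id)
    if isinstance(obj, dict):
        size += sum(deep_getsizeof(k, seen) + deep_getsizeof(v, seen) for k, v in obj.items())
    elif isinstance(obj, (list, tuple, set, frozenset, deque)):
        size += sum(deep_getsizeof(i, seen) for i in obj)
    return size
```

A list of two 1 KB byte strings reports only its small pointer array under `sys.getsizeof` but well over 2 KB under the deep walk, and a self-referential list terminates instead of recursing forever.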
@@ -169,6 +196,6 @@ def get_objects(gcl=None, threshold:int=0):

     objects = sorted(objects, key=lambda x: x.size, reverse=True)
     for obj in objects:
-        log.trace(obj)
+        log.trace(f'Memory: {obj}')

     return objects
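In this change `get_objects` seeds its search space by merging the namespace of every loaded module via `vars(module)`, then filters by a size threshold (now defaulting to 1 MiB). A sketch of that collection step; `collect_globals` is a hypothetical name, and the filter here approximates the module's per-object threshold with a plain `sys.getsizeof`:

```python
import sys
import types

def collect_globals(threshold: int = 0) -> dict:
    # merge every loaded module's namespace into one dict,
    # skipping modules that refuse introspection
    gcl = {}
    for _name, module in list(sys.modules.items()):
        try:
            if not isinstance(module, types.ModuleType):
                continue
            gcl.update(vars(module))
        except Exception:
            pass  # some modules may not allow introspection
    # keep only objects at or above the size threshold
    out = {}
    for name, obj in gcl.items():
        try:
            if sys.getsizeof(obj) >= threshold:
                out[name] = obj
        except Exception:
            pass
    return out
```

With a 1 MiB threshold, as in the new default above, only genuinely large objects survive the filter, which keeps the trace output short.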