SD.Next: All-in-one WebUI for AI generative image and video creation and captioning
Table of contents
SD.Next Features
Not all individual features are listed here; check the ChangeLog for the full list of changes
- Fully localized: ▹ English | Chinese | Russian | Spanish | German | French | Italian | Portuguese | Japanese | Korean
- Desktop and Mobile support!
- Multiple diffusion models!
- Multi-platform!
  ▹ Windows | Linux | MacOS | nVidia CUDA | AMD ROCm | Intel Arc / IPEX XPU | DirectML | OpenVINO | ONNX+Olive | ZLUDA
- Platform-specific auto-detection and tuning performed on install
- Optimized processing with latest torch developments, with built-in support for model compile and quantize
  Compile backends: Triton | StableFast | DeepCache | OneDiff | TeaCache | etc.
  Quantization methods: SDNQ | BitsAndBytes | Optimum-Quanto | TorchAO / LayerWise
- Interrogate/Captioning with 150+ OpenCLiP models and 20+ built-in VLMs
- Built-in installer with automatic updates and dependency management
Desktop interface
Mobile interface
For screenshots and information on other available themes, see Themes
Model support
SD.Next supports a broad range of models: supported models and model specs
Platform support
- nVidia GPUs using CUDA libraries on both Windows and Linux
- AMD GPUs using ROCm libraries on Linux
  Support will be extended to Windows once AMD releases ROCm for Windows
- Intel Arc GPUs using OneAPI with IPEX XPU libraries on both Windows and Linux
- Any GPU compatible with DirectX on Windows using DirectML libraries
  This includes support for AMD GPUs that are not supported by native ROCm libraries
- Any GPU or device compatible with OpenVINO libraries on both Windows and Linux
- Apple M1/M2 on OSX using built-in support in Torch with MPS optimizations
- ONNX/Olive
- AMD GPUs on Windows using ZLUDA libraries
Plus Docker container recipes for: CUDA, ROCm, Intel IPEX and OpenVINO
Getting started
- Get started with SD.Next by following the installation instructions
- For more details, check out advanced installation guide
- List and explanation of command line arguments
- Install walkthrough video
[!TIP] For platform-specific information, check out
WSL | Intel Arc | DirectML | OpenVINO | ONNX & Olive | ZLUDA | AMD ROCm | MacOS | nVidia | Docker
[!WARNING] If you run into issues, check out the troubleshooting and debugging guides
Contributing
Please see Contributing for details on how to contribute to this project
For any questions, reach out on Discord or open an issue or discussion
Credits
- Main credit goes to Automatic1111 WebUI for the original codebase
- Additional credits are listed in Credits
- Licenses for modules are listed in Licenses
Evolution
Docs
If you're unsure how to use a feature, the best place to start is the Docs; if it's not covered there,
check the ChangeLog for when the feature was first introduced, as it will always have a short note on how to use it