mirror of https://github.com/vladmandic/automatic
TODO
Project Board
Internal
- UI: New inpaint/outpaint interface: Kanvas
- Deploy: Create executable for SD.Next
- Feature: Integrate natural language image search: ImageDB
- Feature: Transformers unified cache handler
- Feature: Remote Text-Encoder support
- Refactor: Modular pipelines and guiders
- Refactor: move sampler options from settings to config
- Refactor: GGUF
- Feature: LoRA add OMI format support for SD35/FLUX.1
- Refactor: remove CodeFormer
- Refactor: remove GFPGAN
- UI: Lite vs Expert mode
- Video tab: add full API support
- Control tab: add overrides handling
- Engine: TensorRT acceleration
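Several of the items above and in the code list below touch caching (the unified Transformers cache handler, model in-memory caching). As a hedged illustration only, not the project's design, a minimal in-memory LRU model cache might look like this (class name, capacity, and eviction policy are all assumptions):

```python
from collections import OrderedDict


class ModelCache:
    """Minimal in-memory LRU cache keyed by model name (illustrative sketch)."""

    def __init__(self, max_items: int = 2):
        self.max_items = max_items
        self._cache: "OrderedDict[str, object]" = OrderedDict()

    def get(self, name: str):
        """Return a cached model and mark it most recently used, or None."""
        if name in self._cache:
            self._cache.move_to_end(name)
            return self._cache[name]
        return None

    def put(self, name: str, model: object) -> None:
        """Insert a model, evicting the least recently used entry when full."""
        self._cache[name] = model
        self._cache.move_to_end(name)
        while len(self._cache) > self.max_items:
            self._cache.popitem(last=False)  # drop least recently used
```

The point of LRU here is that re-selecting a recently used checkpoint avoids a full reload from disk, while rarely used models are evicted to bound memory use.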
Features
New models / Pipelines
TODO: Prioritize!
- Bria FIBO
- Bytedance Lynx
- ByteDance OneReward
- ByteDance USO
- Chroma1 Radiance
- DiffSynth Studio
- DiffusionForcing
- Dream0 guidance
- HunyuanAvatar
- HunyuanCustom
- Inf-DiT
- Krea Realtime Video
- LanDiff
- Liquid
- LongCat-Video
- LucyEdit
- Lumina-DiMOO
- Magi (https://github.com/huggingface/diffusers/pull/11713)
- Ming
- MUG-V 10B
- Ovi
- Phantom HuMo
- SD3 UltraEdit
- SelfForcing
- SEVA
- Step1X
- Wan-2.2 Animate
- Wan-2.2 S2V
- WAN-CausVid-Plus t2v
- WAN-CausVid
- WAN-StepDistill
- Wan2.2-Animate-14B
- WAN2GP
Code TODO
`npm run todo`
- control: support scripts via api
- fc: autodetect distilled based on model
- fc: autodetect tensor format based on model
- hypertile: vae breaks when using non-standard sizes
- install: switch to pytorch source when it becomes available
- loader: load recipe
- loader: save recipe
- lora: add other quantization types
- lora: add t5 key support for sd35/f1
- lora: maybe force immediate quantization
- model load: force-reloading entire model as loading transformers only leads to massive memory usage
- model load: implement model in-memory caching
- modernui: monkey-patch for missing tabs.select event
- modules/lora/lora_extract.py:188:9: W0511: TODO: lora: support pre-quantized flux
- modules/modular_guiders.py:65:58: W0511: TODO: guiders
- processing: remove duplicate mask params
- resize image: enable full VAE mode for resize-latent
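The list above is the kind of output a `TODO`-scanning task produces. As a sketch only (the actual `npm run todo` script, its file patterns, and marker syntax are assumptions), collecting such entries can be done with a simple tree walk:

```python
import re
from pathlib import Path

# Match "# TODO: text" (Python-style) or "// TODO: text" (JS-style) comments.
TODO_PATTERN = re.compile(r"(?:#|//)\s*TODO:\s*(.+)")


def collect_todos(root: str, exts: tuple = (".py", ".js")) -> list:
    """Walk the tree and return 'path:line: text' entries for TODO comments."""
    results = []
    for path in sorted(Path(root).rglob("*")):
        if not path.is_file() or path.suffix not in exts:
            continue
        lines = path.read_text(errors="ignore").splitlines()
        for lineno, line in enumerate(lines, start=1):
            m = TODO_PATTERN.search(line)
            if m:
                results.append(f"{path}:{lineno}: {m.group(1).strip()}")
    return results
```

Entries like `modules/lora/lora_extract.py:188:9: W0511: TODO: ...` above suggest the real script reports pylint `W0511` warnings, which cover the same marker comments.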