diff --git a/CHANGELOG.md b/CHANGELOG.md
index de1824aa5..d2ac998f8 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -204,7 +204,7 @@ What else?
   implemented as an extension for **SD.Next** (for the moment while dev is ongoing)
   generate high-quality videos with pretty much unlimited duration and with limited VRAM!
   install as any other extension and for details see extension [README](https://github.com/vladmandic/sd-extension-framepack/blob/main/README.md)
-  - I2V & FLF2V support with explicit strength controls 
+  - I2V & FLF2V support with explicit strength controls
   - complex actions: modify prompts for each section of the video
   - LoRA support: use normal **HunyuanVideo** LoRAs
   - decode: use local, tiny or remote VAE
@@ -316,7 +316,7 @@ There are quite a few other performance and quality-of-life improvements in this
 - **Models**
   - [HiDream-I1](https://huggingface.co/HiDream-ai/HiDream-I1-Full) in fast, dev and full variants!
-    new absolutely massive image generative foundation model with **17B** parameters and 4 text-encoders with additional **8.3B** parameters 
+    new absolutely massive image generative foundation model with **17B** parameters and 4 text-encoders with additional **8.3B** parameters
     simply select from *networks -> models -> reference*
     due to size (over 25B params in 58GB), offloading and on-the-fly quantization are pretty much a necessity
     see [HiDream Wiki page](https://github.com/vladmandic/sdnext/wiki/HiDream) for details
@@ -375,7 +375,7 @@ Time for another major release with ~120 commits and [ChangeLog](https://github.
 *Highlights?*
 Video...Brand new Video processing module with support for all latest models: **WAN21, Hunyuan, LTX, Cog, Allegro, Mochi1, Latte1** in both *T2V* and *I2V* workflows
   And combined with *on-the-fly quantization*, support for *Local/Tiny/Remote* VAE, acceleration modules such as *FasterCache or PAB*, and more!
-Models...And support for new models: **CogView-4**, **SANA 1.5**, 
+Models...And support for new models: **CogView-4**, **SANA 1.5**,
 *Plus...*
 - New **Prompt Enhance** using LLM,
@@ -674,7 +674,7 @@ We're back with another update with nearly 100 commits!
   - updated **CUDA** receipe to `torch==2.6.0` with `cuda==12.6` and add prebuilt image
   - added **ROCm** receipe and prebuilt image
   - added **IPEX** receipe and add prebuilt image
-  - added **OpenVINO** receipe and prebuilt image 
+  - added **OpenVINO** receipe and prebuilt image
 - **System**
   - improve **python==3.12** compatibility
 - **Torch**
@@ -747,7 +747,7 @@ Just one week after latest release and what a week it was with over 50 commits!
 - **GitHub**
   - rename core repo from to
-    old repo url should automatically redirect to new one for seamless transition and in-place upgrades 
+    old repo url should automatically redirect to new one for seamless transition and in-place upgrades
     all internal links have been updated
     wiki content and docs site have been updated
 - **Docs**:
@@ -1096,7 +1096,7 @@ We've also added support for several new models such as highly anticipated [NVLa
 And several new SOTA video models: [Lightricks LTX-Video](https://huggingface.co/Lightricks/LTX-Video), [Hunyuan Video](https://huggingface.co/tencent/HunyuanVideo) and [Genmo Mochi.1 Preview](https://huggingface.co/genmo/mochi-1-preview)
 And a lot of **Control** and **IPAdapter** goodies
-- for **SDXL** there is new [ProMax](https://huggingface.co/xinsir/controlnet-union-sdxl-1.0), improved *Union* and *Tiling* models 
+- for **SDXL** there is new [ProMax](https://huggingface.co/xinsir/controlnet-union-sdxl-1.0), improved *Union* and *Tiling* models
 - for **FLUX.1** there are [Flux Tools](https://blackforestlabs.ai/flux-1-tools/) as well as official *Canny* and *Depth* models, a cool [Redux](https://huggingface.co/black-forest-labs/FLUX.1-Redux-dev) model as well as [XLabs](https://huggingface.co/XLabs-AI/flux-ip-adapter-v2) IP-adapter
 - for **SD3.5** there are official *Canny*, *Blur* and *Depth* models in addition to existing 3rd party models
diff --git a/installer.py b/installer.py
index 8ee43a780..cecf6d0c6 100644
--- a/installer.py
+++ b/installer.py
@@ -546,7 +546,7 @@ def check_diffusers():
     t_start = time.time()
     if args.skip_all or args.skip_git or args.experimental:
         return
-    sha = '20379d9d1395b8e95977faf80facff43065ba75f' # diffusers commit hash
+    sha = '6508da6f06a0da1054ae6a808d0025c04b70f0e8' # diffusers commit hash
     pkg = pkg_resources.working_set.by_key.get('diffusers', None)
     minor = int(pkg.version.split('.')[1] if pkg is not None else 0)
     cur = opts.get('diffusers_version', '') if minor > 0 else ''
diff --git a/package.json b/package.json
index 15ccbaff1..8e6126230 100644
--- a/package.json
+++ b/package.json
@@ -35,6 +35,7 @@
     "eslint-config-airbnb-base": "^15.0.0",
     "eslint-plugin-css": "^0.9.2",
     "eslint-plugin-html": "^8.1.1",
+    "eslint-plugin-import": "^2.31.0",
     "eslint-plugin-json": "^3.1.0",
     "eslint-plugin-markdown": "^4.0.1",
     "eslint-plugin-node": "^11.1.0"
diff --git a/requirements.txt b/requirements.txt
index 3c267794f..5c464a7a1 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -41,7 +41,7 @@ torchsde==0.2.6
 antlr4-python3-runtime==4.9.3
 requests==2.32.3
 tqdm==4.67.1
-accelerate==1.6.0
+accelerate==1.7.0
 opencv-contrib-python-headless==4.9.0.80
 einops==0.4.1
 gradio==3.43.2
@@ -52,7 +52,7 @@ numba==0.61.2
 protobuf==4.25.3
 pytorch_lightning==1.9.4
 tokenizers==0.21.1
-transformers==4.51.3
+transformers==4.52.3
 urllib3==1.26.19
 Pillow==10.4.0
 timm==0.9.16
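The requirements.txt hunks above bump two pinned dependencies (`accelerate` 1.6.0 → 1.7.0, `transformers` 4.51.3 → 4.52.3). As a quick sanity check, the `name==version` pins touched by this diff can be parsed and compared with a small helper — a hypothetical sketch, not part of the repo:

```python
# Hypothetical helper (not part of this repo): parse 'name==version' pins
# as they appear in requirements.txt and confirm the diff moves them forward.
def parse_pin(line):
    name, _, version = line.partition('==')
    # Tuples of ints compare component-wise, so (1, 7, 0) > (1, 6, 0).
    return name, tuple(int(part) for part in version.split('.'))

# Old and new pins copied from the requirements.txt hunks above.
old = dict(map(parse_pin, ['accelerate==1.6.0', 'transformers==4.51.3']))
new = dict(map(parse_pin, ['accelerate==1.7.0', 'transformers==4.52.3']))

bumped = [name for name in old if new[name] > old[name]]
print(bumped)  # → ['accelerate', 'transformers']
```

Note this simple tuple comparison only works for plain `X.Y.Z` pins like the ones in this diff; pre-release or local-version suffixes would need a full PEP 440 parser.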