mirror of https://github.com/vladmandic/automatic

experimental python==3.14 support

Signed-off-by: Vladimir Mandic <mandic00@live.com>
pull/4654/head
parent ab44e59bd1
commit d6bbfe3dc2

19	CHANGELOG.md
@@ -37,6 +37,9 @@ TBD
 - ui: **themes** add *CTD-NT64Light*, *CTD-NT64Medium* and *CTD-NT64Dark*, thanks @resonantsky
 - ui: **gallery** add option to auto-refresh gallery, thanks @awsr
 - **Internal**
+  - `python==3.14` initial support
+    see [docs](https://vladmandic.github.io/sdnext-docs/Python/) for details
+  - remove hard-dependencies: `clip`, `numba`, `skimage`, `torchsde`
   - refactor: to/from image/tensor logic, thanks @CalamitousFelicitousness
   - refactor: switch to `pyproject.toml` for tool configs
   - refactor: reorganize `cli` scripts
@@ -91,7 +94,7 @@ Also here are updates to `torch` and additional GPU archs support for `ROCm` backend
 - add 13(!) new scheduler families
   not a port, but more of inspired-by [res4lyf](https://github.com/ClownsharkBatwing/RES4LYF) library
   all schedulers should be compatible with both `epsilon` and `flow` prediction style!
   *note*: each family may have multiple actual schedulers, so the list total is 56(!) new schedulers
   - core family: *RES*
   - exponential: *DEIS, ETD, Lawson, ABNorsett*
   - integrators: *Runge-Kutta, Linear-RK, Specialized-RK, Lobatto, Radau-IIA, Gauss-Legendre*
@@ -180,7 +183,7 @@ For full list of changes, see full changelog.
   available in both *original* and *sdnq-dynamic prequantized* variants
   thanks @CalamitousFelicitousness
   *note*: model requires pre-release versions of `transformers` package:
-  > pip install --upgrade git+https://github.com/huggingface/transformers.git
+  > pip install --upgrade git+<https://github.com/huggingface/transformers.git>
   > ./webui.sh --experimental
 - [Nunchaku Z-Image Turbo](https://huggingface.co/nunchaku-tech/nunchaku-z-image-turbo)
   nunchaku optimized z-image turbo
@@ -341,9 +344,9 @@ Plus a lot of internal improvements and fixes
 **Z-Image** is a powerful and highly efficient image generation model with 6B parameters and using Qwen-3 as text encoder
   unlike most new models that are far larger, the Z-Image architecture allows it to run with good performance even on mid-range hardware
   *note*: initial release is *Turbo* variant only with *Base* and *Edit* variants to follow
-- [Kandinsky 5.0 Lite]() is a new 6B model using Qwen-2.5 as text encoder
+- [Kandinsky 5.0 Lite](https://huggingface.co/kandinskylab/Kandinsky-5.0-I2V-Lite-5s-Diffusers) is a new 6B model using Qwen-2.5 as text encoder
   it comes in text-to-image and image-edit variants
 - **Google Gemini Nano Banana** [2.5 Flash](https://blog.google/products/gemini/gemini-nano-banana-examples/) and [3.0 Pro](https://deepmind.google/models/gemini-image/pro/)
   first cloud-based model directly supported in SD.Next UI
   *note*: need to set `GOOGLE_API_KEY` environment variable with your key to use this model
 - [Photoroom PRX 1024 Beta](https://huggingface.co/Photoroom/prx-1024-t2i-beta)
@@ -375,7 +378,7 @@ Plus a lot of internal improvements and fixes
 - ui indicator of model capabilities
 - support for *prefill* style of prompting/answering
 - support for *reasoning* mode for supported models
   with option to output answer-only or reasoning-process
 - additional debug logging
 - **Other Features**
   - **wildcards**: allow recursive inline wildcards using curly braces syntax
@@ -699,7 +702,7 @@ Highlight are:
   requires qwen-image-edit-2509 or its variant as multi-image edits are not available in original qwen-image
   in ui control tab: inputs -> separate init image
   add image for *input media* and *control media*
   can be
 - [Cache-DiT](https://github.com/vipshop/cache-dit)
   cache-dit is a unified, flexible and training-free cache acceleration framework
   compatible with many dit-based models such as FLUX.1, Qwen, HunyuanImage, Wan2.2, Chroma, etc.
@@ -914,7 +917,7 @@ And check out new **history** tab in the right panel, it now shows visualization
 - update openvino to `openvino==2025.3.0`
 - add deprecation warning for `python==3.9`
 - allow setting denoise strength to 0 in control/img2img
   this allows running workflows which only refine or detail an existing image without changing it
 - **Fixes**
   - normalize path handling when deleting images
   - unified compile upscalers
@@ -1000,7 +1003,7 @@ New release two weeks after the last one and its a big one with over 150 commits
 - Several new models: [Qwen-Image](https://qwenlm.github.io/blog/qwen-image/) (plus *Lightning* variant) and [FLUX.1-Krea-Dev](https://www.krea.ai/blog/flux-krea-open-source-release)
 - Several updated models: [Chroma](https://huggingface.co/lodestones/Chroma), [SkyReels-V2](https://huggingface.co/Skywork/SkyReels-V2-DF-14B-720P-Diffusers), [Wan-VACE](https://huggingface.co/Wan-AI/Wan2.1-VACE-14B-diffusers), [HunyuanDiT](https://huggingface.co/Tencent-Hunyuan/HunyuanDiT-v1.2-Diffusers-Distilled)
 - Plus continuing with major **UI** work with new embedded **Docs/Wiki** search, redesigned real-time **hints**, **wildcards** UI selector, built-in **GPU monitor**, **CivitAI** integration and more!
 - On the compute side, new profiles for high-vram GPUs, offloading improvements, parallel-load for large models, support for new `torch` release and improved quality when using low-bit quantization!
 - [SD.Next Model Samples Gallery](https://vladmandic.github.io/sd-samples/compare.html): pre-generated image gallery with 60 models (45 base and 15 finetunes) and 40 different styles resulting in 2,400 high resolution images!
   gallery additionally includes model details such as typical load and inference times as well as sizes and types of each model component (*e.g. unet, transformer, text-encoder, vae*)
 - And (*as always*) many bugfixes and improvements to existing features!
@@ -1 +1 @@
-Subproject commit 3ec8f9eb796be519a98de985bac03645d0040ede
+Subproject commit f3e21bb70e5f8a665b066c5d0ece99bfcff272f3
29	installer.py
@@ -650,18 +650,20 @@ def get_platform():
 # check python version
 def check_python(supported_minors=[], experimental_minors=[], reason=None):
     if supported_minors is None or len(supported_minors) == 0:
-        supported_minors = [10, 11, 12]
-        experimental_minors = [13]
+        supported_minors = [10, 11, 12, 13]
+        experimental_minors = [14]
     t_start = time.time()
     if args.quick:
         return
     log.info(f'Python: version={platform.python_version()} platform={platform.system()} bin="{sys.executable}" venv="{sys.prefix}"')
-    if int(sys.version_info.minor) == 12:
-        os.environ.setdefault('SETUPTOOLS_USE_DISTUTILS', 'local') # hack for python 3.11 setuptools
-    if int(sys.version_info.minor) == 10:
-        log.warning(f"Python: version={platform.python_version()} is not actively supported")
     if int(sys.version_info.minor) == 9:
         log.error(f"Python: version={platform.python_version()} is end-of-life")
+    if int(sys.version_info.minor) == 10:
+        log.warning(f"Python: version={platform.python_version()} is not actively supported")
+    if int(sys.version_info.minor) >= 12:
+        os.environ.setdefault('SETUPTOOLS_USE_DISTUTILS', 'local') # hack for python 3.11 setuptools
+    if int(sys.version_info.minor) >= 13:
+        log.warning(f"Python: version={platform.python_version()} not all features are available")
     if not (int(sys.version_info.major) == 3 and int(sys.version_info.minor) in supported_minors):
         if (int(sys.version_info.major) == 3 and int(sys.version_info.minor) in experimental_minors):
             log.warning(f"Python experimental: {sys.version_info.major}.{sys.version_info.minor}.{sys.version_info.micro}")
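The support tiers in the hunk above can be distilled into a standalone sketch; `classify_python` and the constant names below are ours for illustration, not the repo's API:

```python
# Hypothetical distillation of the check_python() gating in this commit:
# 3.10-3.13 are supported, 3.14 is experimental, anything else is rejected.
SUPPORTED_MINORS = [10, 11, 12, 13]
EXPERIMENTAL_MINORS = [14]

def classify_python(major: int, minor: int) -> str:
    """Mirror the support tiers that check_python() logs about."""
    if major == 3 and minor in SUPPORTED_MINORS:
        return "supported"
    if major == 3 and minor in EXPERIMENTAL_MINORS:
        return "experimental"
    return "unsupported"

print(classify_python(3, 13))  # supported
print(classify_python(3, 14))  # experimental
```

Note that 3.13 moves from experimental to supported in the same change that introduces 3.14 as experimental.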
@@ -1262,6 +1264,7 @@ def reload(package, desired=None):
     for m in modules:
         del sys.modules[m]
     sys.modules[package] = importlib.import_module(package)
+    importlib.reload(sys.modules[package])
     log.debug(f'Reload: package={package} version={sys.modules[package].__version__ if hasattr(sys.modules[package], "__version__") else "N/A"}')
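The purge/re-import/reload sequence in `reload()` can be demonstrated end-to-end; the module name `demo_pkg` and the temp directory below are purely illustrative:

```python
import importlib
import pathlib
import sys
import tempfile

# Hedged demo of the pattern in installer.reload(): drop the cached module,
# import it fresh, then importlib.reload() the fresh object so callers holding
# the module see updated attributes after an in-place package upgrade.
tmp = tempfile.mkdtemp()
pathlib.Path(tmp, "demo_pkg.py").write_text("__version__ = '1.0'\n")
sys.path.insert(0, tmp)

import demo_pkg
assert demo_pkg.__version__ == '1.0'

# simulate the package being upgraded on disk
pathlib.Path(tmp, "demo_pkg.py").write_text("__version__ = '2.0.1'\n")
importlib.invalidate_caches()

# purge cache, re-import, reload
del sys.modules['demo_pkg']
sys.modules['demo_pkg'] = importlib.import_module('demo_pkg')
importlib.reload(sys.modules['demo_pkg'])
print(sys.modules['demo_pkg'].__version__)  # 2.0.1
```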
@@ -1321,7 +1324,6 @@ def install_gradio():
     # aiofiles-23.2.1 altair-5.5.0 annotated-types-0.7.0 anyio-4.9.0 attrs-25.3.0 certifi-2025.6.15 charset_normalizer-3.4.2 click-8.2.1 contourpy-1.3.2 cycler-0.12.1 fastapi-0.115.14 ffmpy-0.6.0 filelock-3.18.0 fonttools-4.58.4 fsspec-2025.5.1 gradio-3.43.2 gradio-client-0.5.0 h11-0.16.0 hf-xet-1.1.5 httpcore-1.0.9 httpx-0.28.1 huggingface-hub-0.33.1 idna-3.10 importlib-resources-6.5.2 jinja2-3.1.6 jsonschema-4.24.0 jsonschema-specifications-2025.4.1 kiwisolver-1.4.8 markupsafe-2.1.5 matplotlib-3.10.3 narwhals-1.45.0 numpy-1.26.4 orjson-3.10.18 packaging-25.0 pandas-2.3.0 pillow-10.4.0 pydantic-2.11.7 pydantic-core-2.33.2 pydub-0.25.1 pyparsing-3.2.3 python-dateutil-2.9.0.post0 python-multipart-0.0.20 pytz-2025.2 pyyaml-6.0.2 referencing-0.36.2 requests-2.32.4 rpds-py-0.25.1 semantic-version-2.10.0 six-1.17.0 sniffio-1.3.1 starlette-0.46.2 tqdm-4.67.1 typing-extensions-4.14.0 typing-inspection-0.4.1 tzdata-2025.2 urllib3-2.5.0 uvicorn-0.35.0 websockets-11.0.3
     install('gradio==3.43.2', no_deps=True)
     install('gradio-client==0.5.0', no_deps=True, quiet=True)
-    install('dctorch==0.1.2', no_deps=True, quiet=True)
     pkgs = ['fastapi', 'websockets', 'aiofiles', 'ffmpy', 'pydub', 'uvicorn', 'semantic-version', 'altair', 'python-multipart', 'matplotlib']
     for pkg in pkgs:
         if not installed(pkg, quiet=True):
@@ -1329,13 +1331,19 @@ def install_gradio():


 def install_pydantic():
-    if args.new:
-        install('pydantic==2.11.7', ignore=True, quiet=True)
-        reload('pydantic', '2.11.7')
+    if args.new or (sys.version_info >= (3, 14)):
+        install('pydantic==2.12.5', ignore=True, quiet=True)
+        reload('pydantic', '2.12.5')
     else:
         install('pydantic==1.10.21', ignore=True, quiet=True)
         reload('pydantic', '1.10.21')


+def install_scipy():
+    if args.new or (sys.version_info >= (3, 14)):
+        install('scipy==1.17.0', ignore=True, quiet=True)
+    else:
+        install('scipy==1.14.1', ignore=True, quiet=True)
+
+
 def install_opencv():
     install('opencv-python==4.12.0.88', ignore=True, quiet=True)
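The version-conditional pinning above follows one pattern: on `python>=3.14` (or with `--new`) pick the newer pin so installable wheels exist. A minimal sketch, where the `PINS` table and `pick_pin` are our illustration rather than the repo's API:

```python
import sys

# Hypothetical sketch of version-conditional dependency pinning: the modern
# pin is selected on python>=3.14 (or when a --new style flag is set),
# otherwise the battle-tested legacy pin is kept.
PINS = {
    "pydantic": {"modern": "2.12.5", "legacy": "1.10.21"},
    "scipy": {"modern": "1.17.0", "legacy": "1.14.1"},
}

def pick_pin(package: str, version_info=sys.version_info, new: bool = False) -> str:
    """Return a 'pkg==ver' requirement string for the running interpreter."""
    variant = "modern" if (new or tuple(version_info) >= (3, 14)) else "legacy"
    return f"{package}=={PINS[package][variant]}"

print(pick_pin("scipy", (3, 12)))  # scipy==1.14.1
print(pick_pin("scipy", (3, 14)))  # scipy==1.17.0
```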
@@ -1407,6 +1415,7 @@ def install_requirements():
             _res = install(line)
     install_pydantic()
     install_opencv()
+    install_scipy()
     if args.profile:
         pr.disable()
         print_profile(pr, 'Requirements')
@@ -30,6 +30,14 @@ def to_half(tensor, enable):


 def run_modelmerger(id_task, **kwargs): # pylint: disable=unused-argument
+    from installer import install
+    install('tensordict', quiet=True)
+    try:
+        from tensordict import TensorDict
+    except Exception as e:
+        shared.log.error(f"Merge: {e}")
+        return [*[gr.update() for _ in range(4)], "tensordict not available"]
+
     jobid = shared.state.begin('Merge')
     t0 = time.time()
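This commit repeatedly moves imports of the removed hard-dependencies (`tensordict`, `torchsde`, `skimage`, `scipy`) from module scope into the functions that use them. The general pattern, as a hedged stdlib-only sketch (`load_optional` is our name):

```python
import importlib

# Hedged sketch of the call-time optional-dependency pattern used throughout
# this commit: probe the import when the feature is actually used, and fail
# soft (or trigger an on-demand install) instead of failing at startup.
def load_optional(name: str):
    """Return the imported module, or None if it is not installed."""
    try:
        return importlib.import_module(name)
    except ImportError:
        return None

assert load_optional("json") is not None        # stdlib: always importable
assert load_optional("not_a_real_pkg_xyz") is None
print("optional-import probe ok")
```

In the repo itself the `None` branch is replaced by `install(...)` followed by a retry, as seen in `run_modelmerger` above.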
@@ -133,6 +133,8 @@ except Exception as e:
 timer.startup.record("onnx")

+from fastapi import FastAPI # pylint: disable=W0611,C0411
+timer.startup.record("fastapi")

 import gradio # pylint: disable=W0611,C0411
 timer.startup.record("gradio")
 errors.install([gradio])
@@ -4,7 +4,6 @@ from contextlib import contextmanager
 from typing import Dict, Optional, Tuple, Set
 import safetensors.torch
 import torch
-from tensordict import TensorDict
 import modules.memstats
 import modules.devices as devices
 from installer import log, console
@@ -78,6 +77,7 @@ def load_thetas(
     device: torch.device,
     precision: str,
 ) -> Dict:
+    from tensordict import TensorDict
     thetas = {k: TensorDict.from_dict(read_state_dict(m, "cpu")) for k, m in models.items()}
     if prune:
         keyset = set.intersection(*[set(m.keys()) for m in thetas.values() if len(m.keys())])
@@ -147,6 +147,7 @@ def un_prune_model(
     log.info("Merge restoring pruned keys")
     del thetas
     devices.torch_gc(force=False)
+    from tensordict import TensorDict
     original_a = TensorDict.from_dict(read_state_dict(models["model_a"], device))
     unpruned = 0
     for key in original_a.keys():
@@ -47,7 +47,7 @@ def hf_check_cache():
     from modules.modelstats import stat
     if opts.hfcache_dir != prev_default:
         size, _mtime = stat(prev_default)
-        if size//1024//1024 > 0:
+        if size//1024//1024 > 16:
             log.warning(f'Cache location changed: previous="{prev_default}" size={size//1024//1024} MB')
     size, _mtime = stat(opts.hfcache_dir)
     log.debug(f'Huggingface: cache="{opts.hfcache_dir}" size={size//1024//1024} MB')
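The changed condition above raises the warning threshold from any nonzero cache to caches above 16 MB; `stat()` reports bytes, so the gate is integer-divided twice. A minimal sketch (`should_warn` is our name):

```python
# Hedged sketch of the cache-size gate above: sizes arrive in bytes, and the
# warning now fires only past 16 MB (after integer division) instead of past
# 0 MB, silencing the warning for near-empty leftover cache directories.
def should_warn(size_bytes: int, threshold_mb: int = 16) -> bool:
    return size_bytes // 1024 // 1024 > threshold_mb

print(should_warn(10 * 1024 * 1024))   # False: 10 MB leftover stays quiet
print(should_warn(900 * 1024 * 1024))  # True: 900 MB is worth warning about
```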
@@ -1,7 +1,6 @@
 import os
 import numpy as np
 from PIL import Image
-from basicsr.archs.rrdbnet_arch import RRDBNet
 from modules.postprocess.realesrgan_model_arch import SRVGGNetCompact
 from modules.upscaler import Upscaler
 from modules.shared import opts, device, log
@@ -9,6 +8,9 @@ from modules import devices

 class UpscalerRealESRGAN(Upscaler):
     def __init__(self, dirname):
+        from installer import install
+        install('--no-build-isolation git+https://github.com/Disty0/BasicSR@23c1fb6f5c559ef5ce7ad657f2fa56e41b121754', 'basicsr', ignore=True, quiet=True)
+        from basicsr.archs.rrdbnet_arch import RRDBNet
         self.name = "RealESRGAN"
         self.user_path = dirname
         super().__init__()
@@ -36,6 +36,8 @@ def setup_color_correction(image):


 def apply_color_correction(correction, original_image):
+    from installer import install
+    install('scikit-image', quiet=True)
     from skimage import exposure
     shared.log.debug(f"Applying color correction: correction={correction.shape} image={original_image}")
     np_image = np.asarray(original_image)
@@ -6,19 +6,18 @@ from typing import List, Optional, Tuple, Union

 import numpy as np
 import torch
-import torchsde

 from diffusers.configuration_utils import ConfigMixin, register_to_config
 from diffusers.utils import BaseOutput
 from diffusers.utils.torch_utils import randn_tensor
 from diffusers.schedulers.scheduling_utils import SchedulerMixin
-import scipy.stats


 class BatchedBrownianTree:
     """A wrapper around torchsde.BrownianTree that enables batches of entropy."""

     def __init__(self, x, t0, t1, seed=None, **kwargs):
+        import torchsde
         t0, t1, self.sign = self.sort(t0, t1)
         w0 = kwargs.get("w0", torch.zeros_like(x))
         if seed is None:
@@ -168,6 +167,13 @@ class FlowMatchDPMSolverMultistepScheduler(SchedulerMixin, ConfigMixin):
         base_image_seq_len: Optional[int] = 256,
         max_image_seq_len: Optional[int] = 4096,
     ):
+        from installer import install
+        install('torchsde==0.2.6', 'torchsde', quiet=True)
+        try:
+            import torchsde
+        except Exception as e:
+            raise ImportError("Failed to import torchsde. Please make sure it is installed correctly.") from e
+
         # settings for DPM-Solver
         if algorithm_type not in ["dpmsolver2", "dpmsolver2A", "dpmsolver++2M", "dpmsolver++2S", "dpmsolver++sde", "dpmsolver++2Msde", "dpmsolver++3Msde"]:
             raise NotImplementedError(f"{algorithm_type} is not implemented for {self.__class__}")
@@ -378,6 +384,7 @@ class FlowMatchDPMSolverMultistepScheduler(SchedulerMixin, ConfigMixin):
     # Copied from diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteScheduler._convert_to_beta
     def _convert_to_beta(self, sigma_min, sigma_max, num_inference_steps, device: Union[str, torch.device] = None, alpha: float = 0.6, beta: float = 0.6) -> torch.Tensor:
         """From "Beta Sampling is All You Need" [arXiv:2407.12173] (Lee et. al, 2024)"""
+        import scipy.stats
         sigmas = torch.Tensor(
             [
                 sigma_min + (ppf * (sigma_max - sigma_min))
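The beta schedule referenced above spaces sigmas via the inverse CDF (`ppf`) of a Beta distribution. The repo code uses `scipy.stats.beta.ppf` with `alpha=beta=0.6`; the sketch below instead assumes `alpha=beta=0.5` (the arcsine distribution), whose ppf has the closed form `sin²(πq/2)`, so it stays stdlib-only. It is illustrative, not the scheduler's implementation:

```python
import math

# Hedged sketch of the beta-sigma schedule (arXiv:2407.12173). Beta(0.5, 0.5)
# has CDF (2/pi)*asin(sqrt(x)), so its inverse CDF is sin(pi*q/2)**2.
def beta_ppf_half(q: float) -> float:
    return math.sin(math.pi * q / 2.0) ** 2

def beta_sigmas(sigma_min: float, sigma_max: float, steps: int) -> list:
    # quantiles run 1 -> 0 so sigmas run sigma_max -> sigma_min
    qs = [1 - i / (steps - 1) for i in range(steps)]
    return [sigma_min + beta_ppf_half(q) * (sigma_max - sigma_min) for q in qs]

s = beta_sigmas(0.03, 14.6, 10)
print(round(s[0], 2), round(s[-1], 2))  # 14.6 0.03
```

The ppf clusters quantiles near both endpoints, which is the paper's motivation: more sampling effort at high and low noise levels.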
@@ -364,8 +364,8 @@ def read_metadata_from_safetensors(filename):
     res = {}
-    # try:
     t0 = time.time()
-    with open(filename, mode="rb") as file:
-        try:
+    try:
+        with open(filename, mode="rb") as file:
             metadata_len = file.read(8)
             metadata_len = int.from_bytes(metadata_len, "little")
             json_start = file.read(2)
@@ -394,10 +394,10 @@ def read_metadata_from_safetensors(filename):
                 except Exception:
                     pass
                 res[k] = v
-        except Exception as e:
-            shared.log.error(f'Model metadata: file="(unknown)" {e}')
-            from modules import errors
-            errors.display(e, 'Model metadata')
+    except Exception as e:
+        shared.log.error(f'Model metadata: file="(unknown)" {e}')
+        from modules import errors
+        errors.display(e, 'Model metadata')
     sd_metadata[filename] = res
     global sd_metadata_pending # pylint: disable=global-statement
     sd_metadata_pending += 1
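The metadata reader above relies on the safetensors file layout: an 8-byte little-endian header length, followed by that many bytes of JSON holding the tensor index plus an optional `__metadata__` dict. A self-contained sketch of that layout (`read_st_metadata` is our name, and the in-memory blob carries no tensor payload):

```python
import io
import json
import struct

# Hedged sketch of the safetensors header layout parsed by
# read_metadata_from_safetensors: u64-LE length prefix, then JSON.
def read_st_metadata(fh) -> dict:
    (n,) = struct.unpack("<Q", fh.read(8))   # 8-byte little-endian length
    header = json.loads(fh.read(n))          # JSON header of exactly n bytes
    return header.get("__metadata__", {})

# build a tiny in-memory "file" with the same framing
header = json.dumps({"__metadata__": {"format": "pt"}}).encode()
blob = struct.pack("<Q", len(header)) + header
print(read_st_metadata(io.BytesIO(blob)))  # {'format': 'pt'}
```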
@@ -68,21 +68,6 @@ pipelines = {
 }


-try:
-    from modules.onnx_impl import initialize_onnx
-    initialize_onnx()
-    onnx_pipelines = {
-        'ONNX Stable Diffusion': getattr(diffusers, 'OnnxStableDiffusionPipeline', None),
-        'ONNX Stable Diffusion Img2Img': getattr(diffusers, 'OnnxStableDiffusionImg2ImgPipeline', None),
-        'ONNX Stable Diffusion Inpaint': getattr(diffusers, 'OnnxStableDiffusionInpaintPipeline', None),
-        'ONNX Stable Diffusion Upscale': getattr(diffusers, 'OnnxStableDiffusionUpscalePipeline', None),
-    }
-except Exception as e:
-    from installer import log
-    log.error(f'ONNX initialization error: {e}')
-    onnx_pipelines = {}
-
-
 def postprocessing_scripts():
     import modules.scripts_manager
     return modules.scripts_manager.scripts_postproc.scripts
@@ -132,8 +117,22 @@ def list_crossattention():
         "Dynamic Attention BMM"
     ]


 def get_pipelines():
+    if hasattr(diffusers, 'OnnxStableDiffusionPipeline') and 'ONNX Stable Diffusion' not in list(pipelines):
+        try:
+            from modules.onnx_impl import initialize_onnx
+            initialize_onnx()
+            onnx_pipelines = {
+                'ONNX Stable Diffusion': getattr(diffusers, 'OnnxStableDiffusionPipeline', None),
+                'ONNX Stable Diffusion Img2Img': getattr(diffusers, 'OnnxStableDiffusionImg2ImgPipeline', None),
+                'ONNX Stable Diffusion Inpaint': getattr(diffusers, 'OnnxStableDiffusionInpaintPipeline', None),
+                'ONNX Stable Diffusion Upscale': getattr(diffusers, 'OnnxStableDiffusionUpscalePipeline', None),
+            }
+        except Exception as e:
+            from installer import log
+            log.error(f'ONNX initialization error: {e}')
+            onnx_pipelines = {}
+        pipelines.update(onnx_pipelines)
     for k, v in pipelines.items():
         if k != 'Autodetect' and v is None:
@@ -1,4 +1,4 @@
-# required for python 3.12
+# required for `python>=3.12`
 setuptools==69.5.1
 wheel
@@ -16,7 +16,6 @@ jsonmerge
 kornia
 lark
 omegaconf
 optimum
 piexif
 mpmath
 psutil
@@ -32,26 +31,24 @@ invisible-watermark
 PyWavelets
 pi-heif
 ftfy
+blendmodes

 # versioned
 fastapi==0.124.4
 rich==14.1.0
 safetensors==0.7.0
-tensordict==0.8.3
 peft==0.18.1
 httpx==0.28.1
 compel==2.2.1
-torchsde==0.2.6
 antlr4-python3-runtime==4.9.3
-requests==2.32.4
-tqdm==4.67.1
+requests==2.32.3
+tqdm==4.67.3
 accelerate==1.12.0
 einops==0.8.1
-huggingface_hub==0.36.0
+huggingface_hub==0.36.2
 numexpr==2.11.0
 numpy==2.1.2
 pandas==2.3.1
-numba==0.61.2
 protobuf==4.25.3
 pytorch_lightning==2.6.0
 urllib3==1.26.19
@@ -61,11 +58,6 @@ pyparsing==3.2.3
 typing-extensions==4.14.1
 sentencepiece==0.2.1
-
-# additional
-blendmodes
-scipy==1.14.1
-scikit-image

 # lint
 ruff
 pylint
@@ -1,7 +1,6 @@
 from collections import defaultdict
 from typing import List, Union

-from scipy.optimize import linear_sum_assignment
 import PIL.Image as Image
 import numpy as np
 import torch
@@ -58,6 +57,7 @@ class UnsupervisedEvaluator:

     @property
     def mean_iou(self) -> float:
+        from scipy.optimize import linear_sum_assignment
         n = max(max(self.ious), max([y[0] for x in self.ious.values() for y in x])) + 1
         iou_matrix = np.zeros((n, n))
         count_matrix = np.zeros((n, n))
@@ -1,9 +1,6 @@
 import math
-from scipy import integrate
 import torch
 from torch import nn
-from torchdiffeq import odeint
-import torchsde
 from tqdm.auto import trange
@@ -80,6 +77,7 @@ class BatchedBrownianTree:
     """A wrapper around torchsde.BrownianTree that enables batches of entropy."""

     def __init__(self, x, t0, t1, seed=None, **kwargs):
+        import torchsde
         t0, t1, self.sign = self.sort(t0, t1)
         w0 = kwargs.get('w0', torch.zeros_like(x))
         if seed is None:
@@ -174,6 +172,7 @@ def sample_euler_ancestral(model, x, sigmas, extra_args=None, callback=None, dis


 def linear_multistep_coeff(order, t, i, j):
+    from scipy import integrate
     if order - 1 > i:
         raise ValueError(f'Order {order} too high for step {i}')
     def fn(tau):
@@ -188,6 +187,7 @@ def linear_multistep_coeff(order, t, i, j):

 @torch.no_grad()
 def log_likelihood(model, x, sigma_min, sigma_max, extra_args=None, atol=1e-4, rtol=1e-4):
+    from torchdiffeq import odeint
     extra_args = {} if extra_args is None else extra_args
     s_in = x.new_ones([x.shape[0]])
     v = torch.randint_like(x, 2) * 2 - 1
2	wiki

@@ -1 +1 @@
-Subproject commit 98f878680440272626dccb44a061ff785209cdbe
+Subproject commit c0924688d04e3b41399f2cbd8e6050d937bebc06