1.12.1 (#321)
* recovery for i2i-b
* return process result
* support fp8
* simplified fp8
* cast norm
* cn fp8
* improve restore
* Dev add s3 (#315)
  * add file output to s3
  * test
  * add logger
  * modify description
  * add log
  * log format
  * install edit
  * temporary change for verification
  * remove install
  * modify import
* LCM Sampler (#318)
  * lcm, not tested
  * lcm appears at UI
  * refactor, not working properly
  * lcm
  * readme
* 1.12.1

Co-authored-by: zhangrc <zrc_java@163.com>
Co-authored-by: zhangruicheng <zhangruicheng@migu.cn>
parent 1f1d912711
commit 4cd270d556

README.md
@@ -17,6 +17,11 @@ You might also be interested in another extension I created: [Segment Anything f
- [Prompt Travel](#prompt-travel)
- [ControlNet V2V](#controlnet-v2v)
- [SDXL](#sdxl)
- [Optimizations](#optimizations)
  - [Attention](#attention)
  - [FP8](#fp8)
  - [LCM](#lcm)
  - [Others](#others)
- [Model Zoo](#model-zoo)
- [VRAM](#vram)
- [Batch Size](#batch-size)
@@ -55,6 +60,7 @@ You might also be interested in another extension I created: [Segment Anything f
- `2023/10/29`: [v1.11.0](https://github.com/continue-revolution/sd-webui-animatediff/releases/tag/v1.11.0): Support [HotShot-XL](https://github.com/hotshotco/Hotshot-XL) for SDXL. See [SDXL](#sdxl) for more information.
- `2023/11/06`: [v1.11.1](https://github.com/continue-revolution/sd-webui-animatediff/releases/tag/v1.11.1): Optimize VRAM for ControlNet V2V, patch [encode_pil_to_base64](https://github.com/AUTOMATIC1111/stable-diffusion-webui/blob/master/modules/api/api.py#L104-L133) so that the API can return a video, save frames to `AnimateDiff/yy-mm-dd/`, recover from assertion errors, optional [request id](#api) for API.
- `2023/11/10`: [v1.12.0](https://github.com/continue-revolution/sd-webui-animatediff/releases/tag/v1.12.0): [AnimateDiff for SDXL](https://github.com/guoyww/AnimateDiff/tree/sdxl) supported. See [SDXL](#sdxl) for more information. You need to add `--disable-safe-unpickle` to your command line arguments to get rid of the bad file error.
- `2023/11/16`: [v1.12.1](https://github.com/continue-revolution/sd-webui-animatediff/releases/tag/v1.12.1): FP8 precision and LCM sampler supported. See [Optimizations](#optimizations) for more information. You can also optionally upload videos to AWS S3 storage by configuring it via `Settings/AnimateDiff AWS`.

For the future update plan, please query [here](https://github.com/continue-revolution/sd-webui-animatediff/pull/294).
@@ -212,6 +218,35 @@ Technically all features available for AnimateDiff + SD1.5 are also available fo
For download links, please read [Model Zoo](#model-zoo). For VRAM usage, please read [VRAM](#vram). For demo, please see [demo](#animatediff--sdxl).


## Optimizations

Optimizations can significantly improve speed and reduce VRAM usage. With [attention optimization](#attention), [FP8](#fp8) and unchecking `Batch cond/uncond` in `Settings/Optimization`, I am able to run 4 x ControlNet + AnimateDiff + Stable Diffusion to generate 36 frames of 1024 * 1024 images with 18GB VRAM.

### Attention

Adding `--xformers` / `--opt-sdp-attention` to your command line arguments can significantly reduce VRAM usage and improve speed. However, due to a bug in xformers, you may encounter a CUDA error. If you do, either switch completely to `--opt-sdp-attention`, or keep `--xformers`, go to `Settings/AnimateDiff`, and choose "Optimize attention layers with sdp (torch >= 2.0.0 required)".
### FP8

FP8 requires torch >= 2.1.0 and the WebUI [test-fp8](https://github.com/AUTOMATIC1111/stable-diffusion-webui/tree/test-fp8) branch by [@KohakuBlueleaf](https://github.com/KohakuBlueleaf). Follow these steps to enable FP8:

1. Switch to the `test-fp8` branch via `git checkout test-fp8` in your `stable-diffusion-webui` directory.
1. Reinstall torch by adding `--reinstall-torch` ONCE to your command line arguments.
1. Add `--opt-unet-fp8-storage` to your command line arguments and launch WebUI.
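To see why FP8 weight storage helps, consider the storage arithmetic (a rough sketch; the ~860M parameter count for the SD1.5 UNet is an approximation, and activations and other models are not counted):

```python
def weight_storage_gib(n_params: float, bytes_per_param: int) -> float:
    """GiB needed just to hold the weights at the given precision."""
    return n_params * bytes_per_param / 1024**3

N_PARAMS = 860e6  # approximate SD1.5 UNet parameter count
fp16_gib = weight_storage_gib(N_PARAMS, 2)  # fp16/bf16: 2 bytes per weight
fp8_gib = weight_storage_gib(N_PARAMS, 1)   # fp8 (e4m3fn): 1 byte per weight
print(f"fp16: {fp16_gib:.2f} GiB, fp8: {fp8_gib:.2f} GiB")
```

FP8 storage halves the VRAM spent on weights; computation still happens at higher precision, which is why the norm outputs above are cast back with `.type(hidden_states.dtype)`.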
### LCM

[Latent Consistency Model](https://github.com/luosiallen/latent-consistency-model) is a recent breakthrough in the Stable Diffusion community. As a "gift" to everyone who updates this extension to >= [v1.12.1](https://github.com/continue-revolution/sd-webui-animatediff/releases/tag/v1.12.1), you will find an `LCM` sampler in the normal place where you choose samplers in WebUI. You can generate images / videos within 6-8 steps if you

- choose the `Euler A` / `LCM` sampler
- use [LCM LoRA](https://civitai.com/models/195519/lcm-lora-weights-stable-diffusion-acceleration-module)
- use a low CFG scale (1-2 is recommended)

Note that the LCM sampler is still experimental and subject to change, in accordance with the [original author](https://github.com/luosiallen)'s wishes.

Benefits of using this extension instead of [sd-webui-lcm](https://github.com/0xbitches/sd-webui-lcm):

- you do not need to install diffusers
- you can use LCM with any other extensions, including ControlNet and AnimateDiff
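With those settings, a txt2img API request might look like the following. This is a hypothetical payload sketch: the field names follow the standard WebUI `/sdapi/v1/txt2img` API, and the prompt plus LoRA tag are purely illustrative:

```python
# Hypothetical txt2img payload using the LCM settings above.
payload = {
    "prompt": "a cat walking on grass, <lora:lcm-lora-sdv1-5:1>",  # illustrative LoRA tag
    "negative_prompt": "",
    "sampler_name": "LCM",  # the sampler this extension injects into WebUI
    "steps": 8,             # 6-8 steps are enough with LCM
    "cfg_scale": 1.5,       # low CFG scale (1-2) recommended
    "width": 512,
    "height": 512,
}
print(payload["sampler_name"], payload["steps"], payload["cfg_scale"])
```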
### Others

- Remove any VRAM-heavy arguments such as `--no-half`. These arguments can significantly increase VRAM usage and reduce speed.
- Check `Batch cond/uncond` in `Settings/Optimization` to improve speed; uncheck it to reduce VRAM usage.
## Model Zoo

- `mm_sd_v14.ckpt` & `mm_sd_v15.ckpt` & `mm_sd_v15_v2.ckpt` & `mm_sdxl_v10_beta.ckpt` by [@guoyww](https://github.com/guoyww): [Google Drive](https://drive.google.com/drive/folders/1EqLC65eR1-W-sGD0Im7fkED6c8GkiNFI) | [HuggingFace](https://huggingface.co/guoyww/animatediff/tree/main) | [CivitAI](https://civitai.com/models/108836)
- `mm_sd_v14.safetensors` & `mm_sd_v15.safetensors` & `mm_sd_v15_v2.safetensors` by [@neph1](https://github.com/neph1): [HuggingFace](https://huggingface.co/guoyww/animatediff/tree/refs%2Fpr%2F3)
@ -168,7 +168,7 @@ class TemporalTransformer3DModel(nn.Module):
|
|||
batch, channel, height, weight = hidden_states.shape
|
||||
residual = hidden_states
|
||||
|
||||
hidden_states = self.norm(hidden_states)
|
||||
hidden_states = self.norm(hidden_states).type(hidden_states.dtype)
|
||||
inner_dim = hidden_states.shape[1]
|
||||
hidden_states = hidden_states.permute(0, 2, 3, 1).reshape(batch, height * weight, inner_dim)
|
||||
hidden_states = self.proj_in(hidden_states)
|
||||
|
|
@@ -238,14 +238,14 @@ class TemporalTransformerBlock(nn.Module):

     def forward(self, hidden_states, encoder_hidden_states=None, attention_mask=None, video_length=None):
         for attention_block, norm in zip(self.attention_blocks, self.norms):
-            norm_hidden_states = norm(hidden_states)
+            norm_hidden_states = norm(hidden_states).type(hidden_states.dtype)
             hidden_states = attention_block(
                 norm_hidden_states,
                 encoder_hidden_states=encoder_hidden_states if attention_block.is_cross_attention else None,
                 video_length=video_length,
             ) + hidden_states

-        hidden_states = self.ff(self.ff_norm(hidden_states)) + hidden_states
+        hidden_states = self.ff(self.ff_norm(hidden_states).type(hidden_states.dtype)) + hidden_states

         output = hidden_states
         return output
@@ -362,7 +362,7 @@ class CrossAttention(nn.Module):
         encoder_hidden_states = encoder_hidden_states

         if self.group_norm is not None:
-            hidden_states = self.group_norm(hidden_states.transpose(1, 2)).transpose(1, 2)
+            hidden_states = self.group_norm(hidden_states.transpose(1, 2)).transpose(1, 2).type(hidden_states.dtype)

         query = self.to_q(hidden_states)
         dim = query.shape[-1]
@@ -566,7 +566,7 @@ class VersatileAttention(CrossAttention):
         encoder_hidden_states = encoder_hidden_states

         if self.group_norm is not None:
-            hidden_states = self.group_norm(hidden_states.transpose(1, 2)).transpose(1, 2)
+            hidden_states = self.group_norm(hidden_states.transpose(1, 2)).transpose(1, 2).type(hidden_states.dtype)

         query = self.to_q(hidden_states)
         dim = query.shape[-1]
@@ -26,6 +26,7 @@ class AnimateDiffScript(scripts.Script):
         self.cfg_hacker = None
         self.cn_hacker = None
         self.prompt_scheduler = None
+        self.hacked = False


     def title(self):
@@ -56,6 +57,13 @@ class AnimateDiffScript(scripts.Script):
             self.cn_hacker = AnimateDiffControl(p, self.prompt_scheduler)
             self.cn_hacker.hack(params)
             update_infotext(p, params)
+            self.hacked = True
+        elif self.hacked:
+            self.cn_hacker.restore()
+            self.cfg_hacker.restore()
+            self.lora_hacker.restore()
+            motion_module.restore(p.sd_model)
+            self.hacked = False


     def before_process_batch(self, p: StableDiffusionProcessing, params: AnimateDiffProcess, **kwargs):
@@ -84,12 +92,14 @@ class AnimateDiffScript(scripts.Script):
             self.cfg_hacker.restore()
             self.lora_hacker.restore()
             motion_module.restore(p.sd_model)
+            self.hacked = False
             AnimateDiffOutput().output(p, res, params)
             logger.info("AnimateDiff process end.")


 def on_ui_settings():
     section = ("animatediff", "AnimateDiff")
+    s3_selection = ("animatediff", "AnimateDiff AWS")
     shared.opts.add_option(
         "animatediff_model_path",
         shared.OptionInfo(
@@ -161,7 +171,61 @@ def on_ui_settings():
             section=section
         )
     )

+    shared.opts.add_option(
+        "animatediff_s3_enable",
+        shared.OptionInfo(
+            False,
+            "Enable to store files in object storage that supports the S3 protocol",
+            gr.Checkbox,
+            section=s3_selection
+        )
+    )
+    shared.opts.add_option(
+        "animatediff_s3_host",
+        shared.OptionInfo(
+            None,
+            "S3 protocol host",
+            gr.Textbox,
+            section=s3_selection,
+        ),
+    )
+    shared.opts.add_option(
+        "animatediff_s3_port",
+        shared.OptionInfo(
+            None,
+            "S3 protocol port",
+            gr.Textbox,
+            section=s3_selection,
+        ),
+    )
+    shared.opts.add_option(
+        "animatediff_s3_access_key",
+        shared.OptionInfo(
+            None,
+            "S3 protocol access_key",
+            gr.Textbox,
+            section=s3_selection,
+        ),
+    )
+    shared.opts.add_option(
+        "animatediff_s3_secret_key",
+        shared.OptionInfo(
+            None,
+            "S3 protocol secret_key",
+            gr.Textbox,
+            section=s3_selection,
+        ),
+    )
+    shared.opts.add_option(
+        "animatediff_s3_storge_bucket",
+        shared.OptionInfo(
+            None,
+            "Bucket for file storage",
+            gr.Textbox,
+            section=s3_selection,
+        ),
+    )

 script_callbacks.on_ui_settings(on_ui_settings)
 script_callbacks.on_after_component(AnimateDiffUiGroup.on_after_component)
 script_callbacks.on_before_ui(AnimateDiffUiGroup.on_before_ui)
@@ -8,7 +8,7 @@ import cv2
 import numpy as np
 import torch
 from PIL import Image, ImageFilter, ImageOps
-from modules import processing, shared, masking, images
+from modules import processing, shared, masking, images, devices
 from modules.paths import data_path
 from modules.processing import (StableDiffusionProcessing,
                                 StableDiffusionProcessingImg2Img,
@@ -127,10 +127,11 @@ class AnimateDiffControl:

     def restore_batchhijack(self):
-        from scripts.batch_hijack import BatchHijack, instance
-        BatchHijack.processing_process_images_hijack = AnimateDiffControl.original_processing_process_images_hijack
-        AnimateDiffControl.original_processing_process_images_hijack = None
-        processing.process_images_inner = instance.processing_process_images_hijack
+        if AnimateDiffControl.original_processing_process_images_hijack is not None:
+            from scripts.batch_hijack import BatchHijack, instance
+            BatchHijack.processing_process_images_hijack = AnimateDiffControl.original_processing_process_images_hijack
+            AnimateDiffControl.original_processing_process_images_hijack = None
+            processing.process_images_inner = instance.processing_process_images_hijack


     def hack_cn(self):
@@ -244,6 +245,8 @@ class AnimateDiffControl:
             else:
                 model_net = cn_script.load_control_model(p, unet, unit.model)
                 model_net.reset()
+            if model_net is not None and getattr(devices, "fp8", False):
+                model_net.to(torch.float8_e4m3fn)

             if getattr(model_net, 'is_control_lora', False):
                 control_lora = model_net.control_model
@@ -613,10 +616,12 @@ class AnimateDiffControl:

     def restore_cn(self):
-        self.cn_script.controlnet_main_entry = AnimateDiffControl.original_controlnet_main_entry
-        AnimateDiffControl.original_controlnet_main_entry = None
-        self.cn_script.postprocess_batch = AnimateDiffControl.original_postprocess_batch
-        AnimateDiffControl.original_postprocess_batch = None
+        if AnimateDiffControl.original_controlnet_main_entry is not None:
+            self.cn_script.controlnet_main_entry = AnimateDiffControl.original_controlnet_main_entry
+            AnimateDiffControl.original_controlnet_main_entry = None
+        if AnimateDiffControl.original_postprocess_batch is not None:
+            self.cn_script.postprocess_batch = AnimateDiffControl.original_postprocess_batch
+            AnimateDiffControl.original_postprocess_batch = None


     def hack(self, params: AnimateDiffProcess):
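The guarded save/restore pattern used throughout this commit (save the original callable once, restore only if a saved original exists, so restoring twice is harmless) can be distilled into a standalone sketch; all names here are illustrative:

```python
class Target:
    """A stand-in for the class whose method gets hijacked."""
    def greet(self):
        return "original"

class Hijacker:
    original_greet = None  # class-level slot for the saved method

    @classmethod
    def hack(cls):
        if cls.original_greet is None:        # save the original only once
            cls.original_greet = Target.greet
            Target.greet = lambda self: "hacked"

    @classmethod
    def restore(cls):
        if cls.original_greet is not None:    # idempotent: safe to call twice
            Target.greet = cls.original_greet
            cls.original_greet = None

Hijacker.hack()
assert Target().greet() == "hacked"
Hijacker.restore()
Hijacker.restore()  # second call is a no-op instead of clobbering with None
assert Target().greet() == "original"
```

Without the `is not None` guard, a second `restore()` would assign `None` back as the method, which is exactly the failure mode these changes avoid.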
@@ -22,11 +22,17 @@ from scripts.animatediff_logger import logger_animatediff as logger

 class AnimateDiffI2IBatch:
+    original_img2img_process_batch = None
+
     def hack(self):
         # TODO: PR this hack to A1111
+        if AnimateDiffI2IBatch.original_img2img_process_batch is not None:
+            logger.info("Hacking i2i-batch is already done.")
+            return
+
         logger.info("Hacking i2i-batch.")
-        original_img2img_process_batch = img2img.process_batch
+        AnimateDiffI2IBatch.original_img2img_process_batch = img2img.process_batch
+        original_img2img_process_batch = AnimateDiffI2IBatch.original_img2img_process_batch

         def hacked_i2i_init(self, all_prompts, all_seeds, all_subseeds): # only hack this when i2i-batch with batch mask
             self.image_cfg_scale: float = self.image_cfg_scale if shared.sd_model.cond_stage_key == "edit" else None
@@ -281,7 +287,7 @@ class AnimateDiffI2IBatch:
                 p.override_settings['samples_filename_pattern'] = f'{image_path.stem}-[generation_number]'
             else:
                 p.override_settings['samples_filename_pattern'] = f'{image_path.stem}'
-            process_images(p)
+            return process_images(p)
         else:
             logger.warn("Warning: you are using an unsupported external script. AnimateDiff may not work properly.")
@@ -313,6 +313,10 @@ class AnimateDiffInfV2V:

     def restore(self):
+        if AnimateDiffInfV2V.cfg_original_forward is None:
+            logger.info("CFGDenoiser already restored.")
+            return
+
         logger.info(f"Restoring CFGDenoiser forward function.")
         CFGDenoiser.forward = AnimateDiffInfV2V.cfg_original_forward
         AnimateDiffInfV2V.cfg_original_forward = None
@@ -0,0 +1,134 @@ (new file: LCM sampler)
# TODO: remove this file when LCM is merged to A1111
import torch

from k_diffusion import utils, sampling
from k_diffusion.external import DiscreteEpsDDPMDenoiser
from k_diffusion.sampling import default_noise_sampler, trange

from modules import shared, sd_samplers_cfg_denoiser, sd_samplers_kdiffusion
from scripts.animatediff_logger import logger_animatediff as logger


class LCMCompVisDenoiser(DiscreteEpsDDPMDenoiser):
    def __init__(self, model):
        timesteps = 1000
        beta_start = 0.00085
        beta_end = 0.012

        betas = torch.linspace(beta_start**0.5, beta_end**0.5, timesteps, dtype=torch.float32) ** 2
        alphas = 1.0 - betas
        alphas_cumprod = torch.cumprod(alphas, dim=0)

        original_timesteps = 50  # LCM Original Timesteps (default=50, for current version of LCM)
        self.skip_steps = timesteps // original_timesteps

        alphas_cumprod_valid = torch.zeros((original_timesteps), dtype=torch.float32, device=model.device)
        for x in range(original_timesteps):
            alphas_cumprod_valid[original_timesteps - 1 - x] = alphas_cumprod[timesteps - 1 - x * self.skip_steps]

        super().__init__(model, alphas_cumprod_valid, quantize=None)


    def get_sigmas(self, n=None, sgm=False):
        if n is None:
            return sampling.append_zero(self.sigmas.flip(0))

        start = self.sigma_to_t(self.sigma_max)
        end = self.sigma_to_t(self.sigma_min)

        if sgm:
            t = torch.linspace(start, end, n + 1, device=shared.sd_model.device)[:-1]
        else:
            t = torch.linspace(start, end, n, device=shared.sd_model.device)

        return sampling.append_zero(self.t_to_sigma(t))


    def sigma_to_t(self, sigma, quantize=None):
        log_sigma = sigma.log()
        dists = log_sigma - self.log_sigmas[:, None]
        return dists.abs().argmin(dim=0).view(sigma.shape) * self.skip_steps + (self.skip_steps - 1)


    def t_to_sigma(self, timestep):
        t = torch.clamp(((timestep - (self.skip_steps - 1)) / self.skip_steps).float(), min=0, max=(len(self.sigmas) - 1))
        return super().t_to_sigma(t)


    def get_eps(self, *args, **kwargs):
        return self.inner_model.apply_model(*args, **kwargs)


    def get_scaled_out(self, sigma, output, input):
        sigma_data = 0.5
        scaled_timestep = utils.append_dims(self.sigma_to_t(sigma), output.ndim) * 10.0

        c_skip = sigma_data**2 / (scaled_timestep**2 + sigma_data**2)
        c_out = scaled_timestep / (scaled_timestep**2 + sigma_data**2) ** 0.5

        return c_out * output + c_skip * input


    def forward(self, input, sigma, **kwargs):
        c_out, c_in = [utils.append_dims(x, input.ndim) for x in self.get_scalings(sigma)]
        eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
        return self.get_scaled_out(sigma, input + eps * c_out, input)


def sample_lcm(model, x, sigmas, extra_args=None, callback=None, disable=None, noise_sampler=None):
    extra_args = {} if extra_args is None else extra_args
    noise_sampler = default_noise_sampler(x) if noise_sampler is None else noise_sampler
    s_in = x.new_ones([x.shape[0]])

    for i in trange(len(sigmas) - 1, disable=disable):
        denoised = model(x, sigmas[i] * s_in, **extra_args)

        if callback is not None:
            callback({'x': x, 'i': i, 'sigma': sigmas[i], 'sigma_hat': sigmas[i], 'denoised': denoised})

        x = denoised
        if sigmas[i + 1] > 0:
            x += sigmas[i + 1] * noise_sampler(sigmas[i], sigmas[i + 1])
    return x


class CFGDenoiserLCM(sd_samplers_cfg_denoiser.CFGDenoiser):
    @property
    def inner_model(self):
        if self.model_wrap is None:
            denoiser = LCMCompVisDenoiser
            self.model_wrap = denoiser(shared.sd_model)

        return self.model_wrap


class LCMSampler(sd_samplers_kdiffusion.KDiffusionSampler):
    def __init__(self, funcname, sd_model, options=None):
        super().__init__(funcname, sd_model, options)
        self.model_wrap_cfg = CFGDenoiserLCM(self)
        self.model_wrap = self.model_wrap_cfg.inner_model


class AnimateDiffLCM:
    lcm_ui_injected = False

    @staticmethod
    def hack_kdiff_ui():
        if AnimateDiffLCM.lcm_ui_injected:
            logger.info(f"LCM UI already injected.")
            return

        logger.info(f"Injecting LCM to UI.")
        from modules import sd_samplers, sd_samplers_common
        samplers_lcm = [('LCM', sample_lcm, ['k_lcm'], {})]
        samplers_data_lcm = [
            sd_samplers_common.SamplerData(label, lambda model, funcname=funcname: LCMSampler(funcname, model), aliases, options)
            for label, funcname, aliases, options in samplers_lcm
        ]
        sd_samplers.all_samplers.extend(samplers_data_lcm)
        sd_samplers.all_samplers_map = {x.name: x for x in sd_samplers.all_samplers}
        sd_samplers.set_samplers()
        AnimateDiffLCM.lcm_ui_injected = True
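The `sigma_to_t` / `t_to_sigma` pair above maps between the 1000 DDPM timesteps and the 50 "valid" LCM timesteps via `skip_steps = 1000 // 50 = 20`. The index arithmetic can be checked in isolation (a pure-Python sketch, no torch; function names are illustrative):

```python
TIMESTEPS = 1000
ORIGINAL_TIMESTEPS = 50                       # LCM's valid timesteps
SKIP = TIMESTEPS // ORIGINAL_TIMESTEPS        # 20

def lcm_index_to_ddpm_t(i: int) -> int:
    """Map a valid-LCM index (0..49) to the DDPM timestep it denoises at,
    mirroring `index * skip_steps + (skip_steps - 1)` in sigma_to_t."""
    return i * SKIP + (SKIP - 1)

def ddpm_t_to_lcm_index(t: int) -> float:
    """Inverse mapping, mirroring `(timestep - (skip_steps - 1)) / skip_steps`
    in t_to_sigma."""
    return (t - (SKIP - 1)) / SKIP

# Index 0 lands on DDPM step 19, index 49 on DDPM step 999 - the same
# alignment the alphas_cumprod_valid loop in __init__ establishes.
print(lcm_index_to_ddpm_t(0), lcm_index_to_ddpm_t(49))
```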
@@ -71,8 +71,14 @@ class AnimateDiffLora:

     def restore(self):
-        if self.v2:
-            logger.info("Restoring hacked lora")
-            import networks
-            networks.load_network = AnimateDiffLora.original_load_network
-            AnimateDiffLora.original_load_network = None
+        if not self.v2:
+            return
+
+        if AnimateDiffLora.original_load_network is None:
+            logger.info("AnimateDiff LoRA already restored")
+            return
+
+        logger.info("Restoring hacked lora")
+        import networks
+        networks.load_network = AnimateDiffLora.original_load_network
+        AnimateDiffLora.original_load_network = None
@@ -3,7 +3,7 @@ import os
 import torch
 from einops import rearrange
-from modules import hashes, shared, sd_models
+from modules import hashes, shared, sd_models, devices
 from modules.devices import cpu, device, torch_gc

 from motion_module import MotionWrapper, MotionModuleType
@@ -47,6 +47,8 @@ class AnimateDiffMM:
         self.mm.to(device).eval()
         if not shared.cmd_opts.no_half:
            self.mm.half()
+        if getattr(devices, "fp8", False):
+            self.mm.to(torch.float8_e4m3fn)


    def inject(self, sd_model, model_name="mm_sd_v15.ckpt"):
@@ -106,6 +108,10 @@ class AnimateDiffMM:

     def restore(self, sd_model):
+        if not AnimateDiffMM.mm_injected:
+            logger.info("Motion module already removed.")
+            return
+
         inject_sdxl = sd_model.is_sdxl or self.mm.is_xl
         sd_ver = "SDXL" if sd_model.is_sdxl else "SD1.5"
         self._restore_ddim_alpha(sd_model)
@@ -14,6 +14,7 @@ from scripts.animatediff_logger import logger_animatediff as logger
 from scripts.animatediff_ui import AnimateDiffProcess


 class AnimateDiffOutput:
+    api_encode_pil_to_base64_hooked = False
@@ -149,6 +150,7 @@ class AnimateDiffOutput:
         video_paths = []
         video_array = [np.array(v) for v in frame_list]
         infotext = res.infotexts[index]
+        s3_enable = shared.opts.data.get("animatediff_s3_enable", False)
         use_infotext = shared.opts.enable_pnginfo and infotext is not None
         if "PNG" in params.format and (shared.opts.data.get("animatediff_save_to_custom", False) or getattr(params, "force_save_to_custom", False)):
             video_path_prefix.mkdir(exist_ok=True, parents=True)
@@ -274,7 +276,9 @@ class AnimateDiffOutput:
             file.container_metadata["Title"] = infotext
             file.container_metadata["Comment"] = infotext
             file.write(video_array, codec="vp9", fps=params.fps)

+        if s3_enable:
+            for video_path in video_paths: self._save_to_s3_stroge(video_path)
         return video_paths
@@ -302,3 +306,54 @@ class AnimateDiffOutput:
         with open(v_path, "rb") as video_file:
             videos.append(base64.b64encode(video_file.read()).decode("utf-8"))
         return videos

+    def _install_requirement_if_absent(self, lib):
+        import launch
+        if not launch.is_installed(lib):
+            launch.run_pip(f"install {lib}", f"animatediff requirement: {lib}")
+
+    def _exist_bucket(self, s3_client, bucketname):
+        from botocore.exceptions import ClientError
+        try:
+            s3_client.head_bucket(Bucket=bucketname)
+            return True
+        except ClientError as e:
+            if e.response['Error']['Code'] == '404':
+                return False
+            else:
+                raise
+
+    def _save_to_s3_stroge(self, file_path):
+        """
+        Put an object into object storage.
+        :type file_path: string
+        :param file_path: path of the local file to upload; it is stored into the
+            configured bucket, to which access_key and secret_key must have write permission
+        """
+        self._install_requirement_if_absent('boto3')
+        import boto3
+        import os
+        host = shared.opts.data.get("animatediff_s3_host", '127.0.0.1')
+        port = shared.opts.data.get("animatediff_s3_port", '9001')
+        access_key = shared.opts.data.get("animatediff_s3_access_key", '')
+        secret_key = shared.opts.data.get("animatediff_s3_secret_key", '')
+        bucket = shared.opts.data.get("animatediff_s3_storge_bucket", '')
+        client = boto3.client(
+            service_name='s3',
+            aws_access_key_id=access_key,
+            aws_secret_access_key=secret_key,
+            endpoint_url=f'http://{host}:{port}',
+        )
+
+        if not os.path.exists(file_path): return
+        date = datetime.datetime.now().strftime('%Y-%m-%d')
+        if not self._exist_bucket(client, bucket):
+            client.create_bucket(Bucket=bucket)
+
+        filename = os.path.split(file_path)[1]
+        targetpath = f"{date}/{filename}"
+        client.upload_file(file_path, bucket, targetpath)
+        logger.info(f"{file_path} saved to s3 in bucket: {bucket}")
+        return f"http://{host}:{port}/{bucket}/{targetpath}"
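The URL returned by the upload method follows directly from the endpoint and object key; a standalone sketch with illustrative values, assuming the object key is `<date>/<filename>`:

```python
import datetime
import os

def s3_object_url(host, port, bucket, file_path, date=None):
    """Build the URL of an uploaded video, assuming the object key
    layout <date>/<filename> used by the S3 save path."""
    date = date or datetime.datetime.now().strftime('%Y-%m-%d')
    filename = os.path.split(file_path)[1]  # keep only the basename
    return f"http://{host}:{port}/{bucket}/{date}/{filename}"

url = s3_object_url("127.0.0.1", "9001", "animatediff",
                    "outputs/AnimateDiff/00001.mp4", date="2023-11-16")
print(url)
```

Note the endpoint is plain `http://`; for a real S3-compatible deployment you would normally want TLS and an existing bucket policy rather than `create_bucket` on the fly.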
@@ -8,6 +8,7 @@ from modules.processing import StableDiffusionProcessing
 from scripts.animatediff_mm import mm_animatediff as motion_module
 from scripts.animatediff_i2ibatch import animatediff_i2ibatch
+from scripts.animatediff_lcm import AnimateDiffLCM


 class ToolButton(gr.Button, gr.components.FormComponent):

@@ -341,3 +342,8 @@ class AnimateDiffUiGroup:
         if elem_id == "img2img_generate":
             AnimateDiffUiGroup.img2img_submit_button = component
         return
+
+    @staticmethod
+    def on_before_ui():
+        AnimateDiffLCM.hack_kdiff_ui()