Main Repository

pull/125/head
alexbofa 2023-10-22 00:55:16 +03:00 committed by GitHub
parent 0c77dfc0ef
commit 07747ed844
10 changed files with 2156 additions and 581 deletions

README.md

@@ -1,18 +1,58 @@
# ebsynth_utility_lite
Fork was created to facilitate the creation of videos via img2img based on the original [ebsynth_utility](https://github.com/s9roll7/ebsynth_utility)
<br>
## TODO
- [x] Delete script for img2img
- [ ] Add configuration → stage 5
- [ ] Stage 0 — changing the video size, for example from 1080x1920 to 512x904
- [ ] Stage 2 — manually add **custom_gap**
- [ ] Change Stage 3 to create a grid (min 1x1, max 3x3)
- [ ] Change Stage 4 to disassemble the grid back
- [ ] Stage 0 — add Presets (with changes via .json)
- [ ] Stage 5 — automation with Ebsynth? (Is it possible?)
- [ ] Edit **Readme.md**
# ebsynth_utility
#### If you want to help, feel free to open a [PR](https://github.com/alexbofa/ebsynth_utility_lite/pulls)
## Overview
#### AUTOMATIC1111 UI extension for creating videos using img2img and ebsynth.
#### This extension allows you to output edited videos using ebsynth. (AE is not required.)
##### With [Controlnet](https://github.com/Mikubill/sd-webui-controlnet) installed, I have confirmed that all features of this extension are working properly!
##### [Controlnet](https://github.com/Mikubill/sd-webui-controlnet) is a must for video editing, so I recommend installing it.
##### Multi ControlNet("canny" + "normal map") would be suitable for video editing.
<br>
###### I modified animatediff-cli to create a txt2video tool that allows flexible prompt specification. You can use it if you like.
###### [animatediff-cli-prompt-travel](https://github.com/s9roll7/animatediff-cli-prompt-travel)
<div><video controls src="https://github.com/s9roll7/animatediff-cli-prompt-travel/assets/118420657/2f1de542-7084-417b-9baa-59a55fdd0e1b" muted="false"></video></div>
<br>
## Example
- The following samples are raw output of this extension.
#### sample 1 mask with [clipseg](https://github.com/timojl/clipseg)
- first from left : original
- second from left : masking "cat" exclude "finger"
- third from left : masking "cat head"
- right : color corrected with [color-matcher](https://github.com/hahnec/color-matcher) (see stage 3.5)
- Multiple targets can also be specified. (e.g. cat, dog, boy, girl)
<div><video controls src="https://user-images.githubusercontent.com/118420657/223008549-167beaee-1453-43fa-85ce-fe3982466c26.mp4" muted="false"></video></div>
#### sample 2 blend background
- person : masterpiece, best quality, masterpiece, 1girl, masterpiece, best quality,anime screencap, anime style
- background : cyberpunk, factory, room ,anime screencap, anime style
- It is also possible to blend with your favorite videos.
<div><video controls src="https://user-images.githubusercontent.com/118420657/214592811-9677634f-93bb-40dd-95b6-1c97c8e7bb63.mp4" muted="false"></video></div>
#### sample 3 auto tagging
- left : original
- center : apply the same prompts in all keyframes
- right : apply auto tagging by deepdanbooru in all keyframes
- This function improves the detailed changes in facial expressions, hand expressions, etc.
In the sample video, the "closed_eyes" and "hands_on_own_face" tags have been added to better represent eye blinks and hands brought in front of the face.
<div><video controls src="https://user-images.githubusercontent.com/118420657/218247502-6c8e04fe-859b-4739-8c9d-0bf459d04e3b.mp4" muted="false"></video></div>
#### sample 4 auto tagging (apply lora dynamically)
- left : apply auto tagging by deepdanbooru in all keyframes
- right : apply auto tagging by deepdanbooru in all keyframes + apply "anyahehface" lora dynamically
- Added the function to dynamically apply TI, hypernet, Lora, and additional prompts according to automatically attached tags.
In the sample video, if the "smile" tag is given, the lora and lora trigger keywords are set to be added according to the strength of the "smile" tag.
Also, since automatically added tags are sometimes incorrect, unnecessary tags are listed in the blacklist.
[Here](sample/) is the actual configuration file used. Place it in the "Project directory" for use.
<div><video controls src="https://user-images.githubusercontent.com/118420657/218247633-ab2b1e6b-d81c-4f1d-af8a-6a97df23be0e.mp4" muted="false"></video></div>
<br>
## Installation
- Install [ffmpeg](https://ffmpeg.org/) for your operating system
@@ -20,9 +60,12 @@ Fork was created to facilitate the creation of videos via img2img based on the o
- Install [Ebsynth](https://ebsynth.com/)
- Use the Extensions tab of the webui to [Install from URL]
<br>
<br>
## Usage
- Go to [Ebsynth Utility] tab.
- Create an empty directory somewhere, and fill in the "Project directory" field.
- Place the video you want to edit from somewhere, and fill in the "Original Movie Path" field.
Use short videos of a few seconds at first.
- Select stage 1 and Generate.
@@ -32,6 +75,7 @@ Fork was created to facilitate the creation of videos via img2img based on the o
(In the current latest webui, it seems to cause an error if you do not drop the image on the main screen of img2img.
Please drop the image as it does not affect the result.)
<br>
<br>
## Note 1
@@ -45,9 +89,51 @@ In the implementation of this extension, the keyframe interval is chosen to be s
If the animation breaks up, increase the keyframe, if it flickers, decrease the keyframe.
First, generate one time with the default settings and go straight ahead without worrying about the result.
#### Stage 3 (In development)
#### Stage 3
Select one of the keyframes, throw it to img2img, and run [Interrogate DeepBooru].
Delete unwanted words such as blur from the displayed prompt.
Fill in the rest of the settings as you would normally do for image generation.
Here are the settings I used.
- Sampling method : Euler a
- Sampling Steps : 50
- Width : 960
- Height : 512
- CFG Scale : 20
- Denoising strength : 0.2
Here are the settings for the extension.
- Mask Mode(Override img2img Mask mode) : Normal
- Img2Img Repeat Count (Loop Back) : 5
- Add N to seed when repeating : 1
- use Face Crop img2img : True
- Face Detection Method : YuNet
- Max Crop Size : 1024
- Face Denoising Strength : 0.25
- Face Area Magnification : 1.5 (The larger the number, the closer to the model's painting style, but the more likely it is to shift when merged with the body.)
- Enable Face Prompt : False
Trial and error in this process is the most time-consuming part.
Monitor the destination folder, and if you do not like the results, interrupt and change the settings.
[Prompt][Denoising strength] and [Face Denoising Strength] settings when using Face Crop img2img will greatly affect the result.
For more information on Face Crop img2img, check [here](https://github.com/s9roll7/face_crop_img2img)
If you have lots of memory to spare, increasing the width and height values while maintaining the aspect ratio may greatly improve results.
This extension may help with the adjustment.
https://github.com/s9roll7/img2img_for_all_method
#### Stage 4 (Will be changed in the future)
<br>
**The information above is from a time before controlnet existed.
When controlnet is used together (especially multi-controlnet),
even setting "Denoising strength" to a high value works well, and even 1.0 produces meaningful results.
If "Denoising strength" is set to a high value, "Loop Back" can be set to 1.**
<br>
#### Stage 4
Scale it up or down and process it to exactly the same size as the original video.
This process should only need to be done once.
@@ -73,4 +159,29 @@ Finally, output the video.
In my case, the entire process from 1 to 7 took about 30 minutes.
- Crossfade blend rate : 1.0
- Export type : mp4
<br>
<br>
## Note 2 : How to use multi-controlnet together
#### In webui settings
![controlnet_setting](imgs/controlnet_setting.png "controlnet_setting")
<br>
#### In controlnet settings in img2img tab (for controlnet 0)
![controlnet_0](imgs/controlnet_0.png "controlnet_0")
<br>
#### In controlnet settings in img2img tab (for controlnet 1)
![controlnet_1](imgs/controlnet_1.png "controlnet_1")
<br>
#### In ebsynth_utility settings in img2img tab
**Warning : "Weight" in the controlnet settings is overridden by the following values**
![controlnet_option_in_ebsynthutil](imgs/controlnet_option_in_ebsynthutil.png "controlnet_option_in_ebsynthutil")
<br>
<br>
## Note 3 : How to use clipseg
![clipseg](imgs/clipseg.png "How to use clipseg")

calculator.py Normal file

@@ -0,0 +1,237 @@
# https://www.mycompiler.io/view/3TFZagC

class ParseError(Exception):
    def __init__(self, pos, msg, *args):
        self.pos = pos
        self.msg = msg
        self.args = args

    def __str__(self):
        return '%s at position %s' % (self.msg % self.args, self.pos)


class Parser:
    def __init__(self):
        self.cache = {}

    def parse(self, text):
        self.text = text
        self.pos = -1
        self.len = len(text) - 1
        rv = self.start()
        self.assert_end()
        return rv

    def assert_end(self):
        if self.pos < self.len:
            raise ParseError(
                self.pos + 1,
                'Expected end of string but got %s',
                self.text[self.pos + 1]
            )

    def eat_whitespace(self):
        while self.pos < self.len and self.text[self.pos + 1] in " \f\v\r\t\n":
            self.pos += 1

    def split_char_ranges(self, chars):
        try:
            return self.cache[chars]
        except KeyError:
            pass
        rv = []
        index = 0
        length = len(chars)
        while index < length:
            if index + 2 < length and chars[index + 1] == '-':
                if chars[index] >= chars[index + 2]:
                    raise ValueError('Bad character range')
                rv.append(chars[index:index + 3])
                index += 3
            else:
                rv.append(chars[index])
                index += 1
        self.cache[chars] = rv
        return rv

    def char(self, chars=None):
        if self.pos >= self.len:
            raise ParseError(
                self.pos + 1,
                'Expected %s but got end of string',
                'character' if chars is None else '[%s]' % chars
            )
        next_char = self.text[self.pos + 1]
        if chars is None:
            self.pos += 1
            return next_char
        for char_range in self.split_char_ranges(chars):
            if len(char_range) == 1:
                if next_char == char_range:
                    self.pos += 1
                    return next_char
            elif char_range[0] <= next_char <= char_range[2]:
                self.pos += 1
                return next_char
        raise ParseError(
            self.pos + 1,
            'Expected %s but got %s',
            'character' if chars is None else '[%s]' % chars,
            next_char
        )

    def keyword(self, *keywords):
        self.eat_whitespace()
        if self.pos >= self.len:
            raise ParseError(
                self.pos + 1,
                'Expected %s but got end of string',
                ','.join(keywords)
            )
        for keyword in keywords:
            low = self.pos + 1
            high = low + len(keyword)
            if self.text[low:high] == keyword:
                self.pos += len(keyword)
                self.eat_whitespace()
                return keyword
        raise ParseError(
            self.pos + 1,
            'Expected %s but got %s',
            ','.join(keywords),
            self.text[self.pos + 1],
        )

    def match(self, *rules):
        self.eat_whitespace()
        last_error_pos = -1
        last_exception = None
        last_error_rules = []
        for rule in rules:
            initial_pos = self.pos
            try:
                rv = getattr(self, rule)()
                self.eat_whitespace()
                return rv
            except ParseError as e:
                self.pos = initial_pos
                if e.pos > last_error_pos:
                    last_exception = e
                    last_error_pos = e.pos
                    last_error_rules.clear()
                    last_error_rules.append(rule)
                elif e.pos == last_error_pos:
                    last_error_rules.append(rule)
        if len(last_error_rules) == 1:
            raise last_exception
        else:
            raise ParseError(
                last_error_pos,
                'Expected %s but got %s',
                ','.join(last_error_rules),
                self.text[last_error_pos]
            )

    def maybe_char(self, chars=None):
        try:
            return self.char(chars)
        except ParseError:
            return None

    def maybe_match(self, *rules):
        try:
            return self.match(*rules)
        except ParseError:
            return None

    def maybe_keyword(self, *keywords):
        try:
            return self.keyword(*keywords)
        except ParseError:
            return None


class CalcParser(Parser):
    def start(self):
        return self.expression()

    def expression(self):
        rv = self.match('term')
        while True:
            op = self.maybe_keyword('+', '-')
            if op is None:
                break
            term = self.match('term')
            if op == '+':
                rv += term
            else:
                rv -= term
        return rv

    def term(self):
        rv = self.match('factor')
        while True:
            op = self.maybe_keyword('*', '/')
            if op is None:
                break
            term = self.match('factor')
            if op == '*':
                rv *= term
            else:
                rv /= term
        return rv

    def factor(self):
        if self.maybe_keyword('('):
            rv = self.match('expression')
            self.keyword(')')
            return rv
        return self.match('number')

    def number(self):
        chars = []
        sign = self.maybe_keyword('+', '-')
        if sign is not None:
            chars.append(sign)
        chars.append(self.char('0-9'))
        while True:
            char = self.maybe_char('0-9')
            if char is None:
                break
            chars.append(char)
        if self.maybe_char('.'):
            chars.append('.')
            chars.append(self.char('0-9'))
            while True:
                char = self.maybe_char('0-9')
                if char is None:
                    break
                chars.append(char)
        rv = float(''.join(chars))
        return rv
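calculator.py implements the classic grammar `expression := term (('+'|'-') term)*`, `term := factor (('*'|'/') factor)*`, `factor := '(' expression ')' | number`; for example, `CalcParser().parse('2 * (3 + 4)')` returns `14.0`. The condensed, self-contained sketch below implements the same precedence rules with a simple regex tokenizer. It is an illustration of the grammar only, not the file's own API (the tokenizer and the nested-function style are mine):

```python
# Condensed sketch of the grammar calculator.py implements:
#   expression := term (('+'|'-') term)*
#   term       := factor (('*'|'/') factor)*
#   factor     := '(' expression ')' | signed number
import re

# Each match is either a number or a single non-space character (an operator/paren).
TOKEN = re.compile(r'\s*(?:(\d+(?:\.\d+)?)|(.))')

def tokenize(text):
    return [float(num) if num else op for num, op in TOKEN.findall(text)]

def parse(text):
    tokens = tokenize(text)
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def eat():
        nonlocal pos
        tok = tokens[pos]
        pos += 1
        return tok

    def expression():
        rv = term()
        while peek() in ('+', '-'):       # left-associative +/-
            rv = rv + term() if eat() == '+' else rv - term()
        return rv

    def term():
        rv = factor()
        while peek() in ('*', '/'):       # * and / bind tighter than + and -
            rv = rv * factor() if eat() == '*' else rv / factor()
        return rv

    def factor():
        if peek() == '(':
            eat()                          # '('
            rv = expression()
            eat()                          # ')'
            return rv
        if peek() in ('+', '-'):           # unary sign, as in CalcParser.number
            return -factor() if eat() == '-' else factor()
        return eat()                       # a number token

    return expression()

print(parse('2 * (3 + 4)'))  # 14.0
```

Precedence falls out of the call structure: `expression` only ever combines the results of `term`, so `*` and `/` are applied before `+` and `-`, exactly as in `CalcParser`.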


@@ -1,176 +1,185 @@
import os

from modules.ui import plaintext_to_html

import cv2
import glob
from PIL import Image

from extensions.ebsynth_utility_lite.stage1 import ebsynth_utility_stage1,ebsynth_utility_stage1_invert
from extensions.ebsynth_utility_lite.stage2 import ebsynth_utility_stage2
from extensions.ebsynth_utility_lite.stage5 import ebsynth_utility_stage5
from extensions.ebsynth_utility_lite.stage7 import ebsynth_utility_stage7
from extensions.ebsynth_utility_lite.stage8 import ebsynth_utility_stage8

def x_ceiling(value, step):
    return -(-value // step) * step

def dump_dict(string, d:dict):
    for key in d.keys():
        string += ( key + " : " + str(d[key]) + "\n")
    return string

class debug_string:
    txt = ""
    def print(self, comment):
        print(comment)
        self.txt += comment + '\n'
    def to_string(self):
        return self.txt

def ebsynth_utility_process(stage_index: int, project_dir:str, original_movie_path:str, frame_width:int, frame_height:int, st1_masking_method_index:int, st1_mask_threshold:float, tb_use_fast_mode:bool, tb_use_jit:bool, clipseg_mask_prompt:str, clipseg_exclude_prompt:str, clipseg_mask_threshold:int, clipseg_mask_blur_size:int, clipseg_mask_blur_size2:int, key_min_gap:int, key_max_gap:int, key_th:float, key_add_last_frame:bool, blend_rate:float, export_type:str, bg_src:str, bg_type:str, mask_blur_size:int, mask_threshold:float, fg_transparency:float, mask_mode:str):
    args = locals()
    info = ""
    info = dump_dict(info, args)
    dbg = debug_string()

    def process_end(dbg, info):
        return plaintext_to_html(dbg.to_string()), plaintext_to_html(info)

    if not os.path.isdir(project_dir):
        dbg.print("{0} project_dir not found".format(project_dir))
        return process_end( dbg, info )

    if not os.path.isfile(original_movie_path):
        dbg.print("{0} original_movie_path not found".format(original_movie_path))
        return process_end( dbg, info )

    is_invert_mask = False
    if mask_mode == "Invert":
        is_invert_mask = True

    frame_path = os.path.join(project_dir , "video_frame")
    frame_mask_path = os.path.join(project_dir, "video_mask")

    if is_invert_mask:
        inv_path = os.path.join(project_dir, "inv")
        os.makedirs(inv_path, exist_ok=True)
        org_key_path = os.path.join(inv_path, "video_key")
        img2img_key_path = os.path.join(inv_path, "img2img_key")
        img2img_upscale_key_path = os.path.join(inv_path, "img2img_upscale_key")
    else:
        org_key_path = os.path.join(project_dir, "video_key")
        img2img_key_path = os.path.join(project_dir, "img2img_key")
        img2img_upscale_key_path = os.path.join(project_dir, "img2img_upscale_key")

    if mask_mode == "None":
        frame_mask_path = ""

    project_args = [project_dir, original_movie_path, frame_path, frame_mask_path, org_key_path, img2img_key_path, img2img_upscale_key_path]

    if stage_index == 0:
        ebsynth_utility_stage1(dbg, project_args, frame_width, frame_height, st1_masking_method_index, st1_mask_threshold, tb_use_fast_mode, tb_use_jit, clipseg_mask_prompt, clipseg_exclude_prompt, clipseg_mask_threshold, clipseg_mask_blur_size, clipseg_mask_blur_size2, is_invert_mask)
        if is_invert_mask:
            inv_mask_path = os.path.join(inv_path, "inv_video_mask")
            ebsynth_utility_stage1_invert(dbg, frame_mask_path, inv_mask_path)
    elif stage_index == 1:
        ebsynth_utility_stage2(dbg, project_args, key_min_gap, key_max_gap, key_th, key_add_last_frame, is_invert_mask)
    elif stage_index == 2:
        sample_image = glob.glob( os.path.join(frame_path , "*.png" ) )[0]
        img_height, img_width, _ = cv2.imread(sample_image).shape
        if img_width < img_height:
            re_w = 512
            re_h = int(x_ceiling( (512 / img_width) * img_height , 64))
        else:
            re_w = int(x_ceiling( (512 / img_height) * img_width , 64))
            re_h = 512
        img_width = re_w
        img_height = re_h

        dbg.print("stage 3")
        dbg.print("")
        dbg.print("This is an information button, it does not do anything")
        dbg.print("")
        dbg.print("1. Go to img2img tab")
        dbg.print("2. Generate")
        dbg.print("(Images are output to " + img2img_key_path + ")")
        dbg.print("")
        dbg.print("This tab will be changed to create a GRID (min 1x1 — max 3x3)")
        dbg.print("")
        dbg.print("If you know how to do it and want to help, create the PR")
        return process_end( dbg, "" )
    elif stage_index == 4:
        sample_image = glob.glob( os.path.join(frame_path , "*.png" ) )[0]
        img_height, img_width, _ = cv2.imread(sample_image).shape
        sample_img2img_key = glob.glob( os.path.join(img2img_key_path , "*.png" ) )[0]
        img_height_key, img_width_key, _ = cv2.imread(sample_img2img_key).shape

        if is_invert_mask:
            project_dir = inv_path

        dbg.print("stage 4")
        dbg.print("")
        if img_height == img_height_key and img_width == img_width_key:
            dbg.print("!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!")
            dbg.print("!! The size of frame and img2img_key matched.")
            dbg.print("!! You can skip this stage.")
            dbg.print("!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!")
        dbg.print("0. Enable the following item")
        dbg.print("Settings ->")
        dbg.print("  Saving images/grids ->")
        dbg.print("    Use original name for output filename during batch process in extras tab")
        dbg.print("!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!")
        dbg.print("1. If \"img2img_upscale_key\" directory already exists in the %s, delete it manually before executing."%(project_dir))
        dbg.print("2. Go to Extras tab")
        dbg.print("3. Go to Batch from Directory tab")
        dbg.print("4. Fill in the \"Input directory\" field with [" + img2img_key_path + "]" )
        dbg.print("5. Fill in the \"Output directory\" field with [" + img2img_upscale_key_path + "]" )
        dbg.print("6. Go to Scale to tab")
        dbg.print("7. Fill in the \"Width\" field with [" + str(img_width) + "]" )
        dbg.print("8. Fill in the \"Height\" field with [" + str(img_height) + "]" )
        dbg.print("9. Fill in the remaining configuration fields of Upscaler.")
        dbg.print("10. Generate")
        dbg.print("!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!")
        return process_end( dbg, "" )
    elif stage_index == 5:
        ebsynth_utility_stage5(dbg, project_args, is_invert_mask)
    elif stage_index == 6:
        if is_invert_mask:
            project_dir = inv_path

        dbg.print("stage 6")
        dbg.print("")
        dbg.print("!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!")
        dbg.print("Running ebsynth.(on your self)")
        dbg.print("Open the generated .ebs under %s and press [Run All] button."%(project_dir))
        dbg.print("If ""out-*"" directory already exists in the %s, delete it manually before executing."%(project_dir))
        dbg.print("If multiple .ebs files are generated, run them all.")
        dbg.print("(I recommend associating the .ebs file with EbSynth.exe.)")
        dbg.print("!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!")
        return process_end( dbg, "" )
    elif stage_index == 7:
        ebsynth_utility_stage7(dbg, project_args, blend_rate, export_type, is_invert_mask)
    elif stage_index == 8:
        if mask_mode != "Normal":
            dbg.print("!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!")
            dbg.print("Please reset [configuration]->[etc]->[Mask Mode] to Normal.")
            dbg.print("!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!")
            return process_end( dbg, "" )
        ebsynth_utility_stage8(dbg, project_args, bg_src, bg_type, mask_blur_size, mask_threshold, fg_transparency, export_type)
    else:
        pass

    return process_end( dbg, info )
import os

from modules.ui import plaintext_to_html

import cv2
import glob
from PIL import Image

from extensions.ebsynth_utility.stage1 import ebsynth_utility_stage1,ebsynth_utility_stage1_invert
from extensions.ebsynth_utility.stage2 import ebsynth_utility_stage2
from extensions.ebsynth_utility.stage5 import ebsynth_utility_stage5
from extensions.ebsynth_utility.stage7 import ebsynth_utility_stage7
from extensions.ebsynth_utility.stage8 import ebsynth_utility_stage8
from extensions.ebsynth_utility.stage3_5 import ebsynth_utility_stage3_5

def x_ceiling(value, step):
    return -(-value // step) * step

def dump_dict(string, d:dict):
    for key in d.keys():
        string += ( key + " : " + str(d[key]) + "\n")
    return string

class debug_string:
    txt = ""
    def print(self, comment):
        print(comment)
        self.txt += comment + '\n'
    def to_string(self):
        return self.txt

def ebsynth_utility_process(stage_index: int, project_dir:str, original_movie_path:str, frame_width:int, frame_height:int, st1_masking_method_index:int, st1_mask_threshold:float, tb_use_fast_mode:bool, tb_use_jit:bool, clipseg_mask_prompt:str, clipseg_exclude_prompt:str, clipseg_mask_threshold:int, clipseg_mask_blur_size:int, clipseg_mask_blur_size2:int, key_min_gap:int, key_max_gap:int, key_th:float, key_add_last_frame:bool, color_matcher_method:str, st3_5_use_mask:bool, st3_5_use_mask_ref:bool, st3_5_use_mask_org:bool, color_matcher_ref_type:int, color_matcher_ref_image:Image, blend_rate:float, export_type:str, bg_src:str, bg_type:str, mask_blur_size:int, mask_threshold:float, fg_transparency:float, mask_mode:str):
    args = locals()
    info = ""
    info = dump_dict(info, args)
    dbg = debug_string()

    def process_end(dbg, info):
        return plaintext_to_html(dbg.to_string()), plaintext_to_html(info)

    if not os.path.isdir(project_dir):
        dbg.print("{0} project_dir not found".format(project_dir))
        return process_end( dbg, info )

    if not os.path.isfile(original_movie_path):
        dbg.print("{0} original_movie_path not found".format(original_movie_path))
        return process_end( dbg, info )

    is_invert_mask = False
    if mask_mode == "Invert":
        is_invert_mask = True

    frame_path = os.path.join(project_dir , "video_frame")
    frame_mask_path = os.path.join(project_dir, "video_mask")

    if is_invert_mask:
        inv_path = os.path.join(project_dir, "inv")
        os.makedirs(inv_path, exist_ok=True)
        org_key_path = os.path.join(inv_path, "video_key")
        img2img_key_path = os.path.join(inv_path, "img2img_key")
        img2img_upscale_key_path = os.path.join(inv_path, "img2img_upscale_key")
    else:
        org_key_path = os.path.join(project_dir, "video_key")
        img2img_key_path = os.path.join(project_dir, "img2img_key")
        img2img_upscale_key_path = os.path.join(project_dir, "img2img_upscale_key")

    if mask_mode == "None":
        frame_mask_path = ""

    project_args = [project_dir, original_movie_path, frame_path, frame_mask_path, org_key_path, img2img_key_path, img2img_upscale_key_path]

    if stage_index == 0:
        ebsynth_utility_stage1(dbg, project_args, frame_width, frame_height, st1_masking_method_index, st1_mask_threshold, tb_use_fast_mode, tb_use_jit, clipseg_mask_prompt, clipseg_exclude_prompt, clipseg_mask_threshold, clipseg_mask_blur_size, clipseg_mask_blur_size2, is_invert_mask)
        if is_invert_mask:
            inv_mask_path = os.path.join(inv_path, "inv_video_mask")
            ebsynth_utility_stage1_invert(dbg, frame_mask_path, inv_mask_path)
    elif stage_index == 1:
        ebsynth_utility_stage2(dbg, project_args, key_min_gap, key_max_gap, key_th, key_add_last_frame, is_invert_mask)
    elif stage_index == 2:
        sample_image = glob.glob( os.path.join(frame_path , "*.png" ) )[0]
        img_height, img_width, _ = cv2.imread(sample_image).shape
        if img_width < img_height:
            re_w = 512
            re_h = int(x_ceiling( (512 / img_width) * img_height , 64))
        else:
            re_w = int(x_ceiling( (512 / img_height) * img_width , 64))
            re_h = 512
        img_width = re_w
        img_height = re_h

        dbg.print("stage 3")
        dbg.print("")
        dbg.print("!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!")
        dbg.print("1. Go to img2img tab")
        dbg.print("2. Select [ebsynth utility] in the script combo box")
        dbg.print("3. Fill in the \"Project directory\" field with [" + project_dir + "]" )
        dbg.print("4. Select in the \"Mask Mode(Override img2img Mask mode)\" field with [" + ("Invert" if is_invert_mask else "Normal") + "]" )
        dbg.print("5. I recommend to fill in the \"Width\" field with [" + str(img_width) + "]" )
        dbg.print("6. I recommend to fill in the \"Height\" field with [" + str(img_height) + "]" )
        dbg.print("7. I recommend to fill in the \"Denoising strength\" field with lower than 0.35" )
        dbg.print(" (When using controlnet together, you can put in large values (even 1.0 is possible).)")
        dbg.print("8. Fill in the remaining configuration fields of img2img. No image and mask settings are required.")
        dbg.print("9. Drop any image onto the img2img main screen. This is necessary to avoid errors, but does not affect the results of img2img.")
        dbg.print("10. Generate")
        dbg.print("(Images are output to [" + img2img_key_path + "])")
        dbg.print("!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!")
        return process_end( dbg, "" )
    elif stage_index == 3:
        ebsynth_utility_stage3_5(dbg, project_args, color_matcher_method, st3_5_use_mask, st3_5_use_mask_ref, st3_5_use_mask_org, color_matcher_ref_type, color_matcher_ref_image)
    elif stage_index == 4:
        sample_image = glob.glob( os.path.join(frame_path , "*.png" ) )[0]
        img_height, img_width, _ = cv2.imread(sample_image).shape
        sample_img2img_key = glob.glob( os.path.join(img2img_key_path , "*.png" ) )[0]
        img_height_key, img_width_key, _ = cv2.imread(sample_img2img_key).shape

        if is_invert_mask:
            project_dir = inv_path

        dbg.print("stage 4")
        dbg.print("")
        if img_height == img_height_key and img_width == img_width_key:
            dbg.print("!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!")
            dbg.print("!! The size of frame and img2img_key matched.")
            dbg.print("!! You can skip this stage.")
            dbg.print("!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!")
        dbg.print("0. Enable the following item")
        dbg.print("Settings ->")
        dbg.print("  Saving images/grids ->")
        dbg.print("    Use original name for output filename during batch process in extras tab")
        dbg.print("!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!")
        dbg.print("1. If \"img2img_upscale_key\" directory already exists in the %s, delete it manually before executing."%(project_dir))
        dbg.print("2. Go to Extras tab")
        dbg.print("3. Go to Batch from Directory tab")
        dbg.print("4. Fill in the \"Input directory\" field with [" + img2img_key_path + "]" )
        dbg.print("5. Fill in the \"Output directory\" field with [" + img2img_upscale_key_path + "]" )
        dbg.print("6. Go to Scale to tab")
        dbg.print("7. Fill in the \"Width\" field with [" + str(img_width) + "]" )
        dbg.print("8. Fill in the \"Height\" field with [" + str(img_height) + "]" )
        dbg.print("9. Fill in the remaining configuration fields of Upscaler.")
        dbg.print("10. Generate")
        dbg.print("!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!")
        return process_end( dbg, "" )
    elif stage_index == 5:
        ebsynth_utility_stage5(dbg, project_args, is_invert_mask)
    elif stage_index == 6:
        if is_invert_mask:
            project_dir = inv_path

        dbg.print("stage 6")
        dbg.print("")
        dbg.print("!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!")
        dbg.print("Running ebsynth.(on your self)")
        dbg.print("Open the generated .ebs under %s and press [Run All] button."%(project_dir))
        dbg.print("If ""out-*"" directory already exists in the %s, delete it manually before executing."%(project_dir))
        dbg.print("If multiple .ebs files are generated, run them all.")
        dbg.print("(I recommend associating the .ebs file with EbSynth.exe.)")
        dbg.print("!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!")
        return process_end( dbg, "" )
    elif stage_index == 7:
        ebsynth_utility_stage7(dbg, project_args, blend_rate, export_type, is_invert_mask)
    elif stage_index == 8:
        if mask_mode != "Normal":
            dbg.print("!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!")
            dbg.print("Please reset [configuration]->[etc]->[Mask Mode] to Normal.")
            dbg.print("!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!")
            return process_end( dbg, "" )
        ebsynth_utility_stage8(dbg, project_args, bg_src, bg_type, mask_blur_size, mask_threshold, fg_transparency, export_type)
    else:
        pass

    return process_end( dbg, info )
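The stage 2 branch above scales the short edge of a sample frame to 512 and rounds the long edge up to a multiple of 64 with `x_ceiling`. That arithmetic can be checked in isolation (the helper is copied from the file; the 1080x1920 portrait input is just an example value):

```python
# x_ceiling, as defined in the file above: round value up to the nearest multiple of step.
def x_ceiling(value, step):
    return -(-value // step) * step

# Example: a 1080x1920 portrait frame. The short edge becomes 512; the long
# edge is scaled proportionally (910.2...) and rounded up to a multiple of 64.
img_width, img_height = 1080, 1920
if img_width < img_height:
    re_w = 512
    re_h = int(x_ceiling((512 / img_width) * img_height, 64))
else:
    re_w = int(x_ceiling((512 / img_height) * img_width, 64))
    re_h = 512

print(re_w, re_h)  # 512 960
```

Rounding up to a multiple of 64 keeps the dimensions compatible with Stable Diffusion's latent grid, at the cost of slightly stretching the aspect ratio.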


@@ -1,28 +1,31 @@
import launch
import platform

def update_transparent_background():
    from importlib.metadata import version as meta_version
    from packaging import version
    v = meta_version("transparent-background")
    print("current transparent-background " + v)
    if version.parse(v) < version.parse('1.2.3'):
        launch.run_pip("install -U transparent-background", "update transparent-background version for Ebsynth Utility Lite")

# Check if user is running an M1/M2 device and, if so, install pyvirtualcam, which is required for updating the transparent_background package
# Note that we have to directly install from source because the prebuilt PyPI wheel does not support ARM64 machines such as M1/M2 Macs
if platform.system() == "Darwin" and platform.machine() == "arm64":
    if not launch.is_installed("pyvirtualcam"):
        launch.run_pip("install git+https://github.com/letmaik/pyvirtualcam", "requirements for Ebsynth Utility Lite")

if not launch.is_installed("transparent_background"):
    launch.run_pip("install transparent-background", "requirements for Ebsynth Utility Lite")

update_transparent_background()

if not launch.is_installed("IPython"):
    launch.run_pip("install ipython", "requirements for Ebsynth Utility Lite")

if not launch.is_installed("seaborn"):
    launch.run_pip("install ""seaborn>=0.11.0""", "requirements for Ebsynth Utility Lite")
import launch
import platform

def update_transparent_background():
    from importlib.metadata import version as meta_version
    from packaging import version
    v = meta_version("transparent-background")
    print("current transparent-background " + v)
    if version.parse(v) < version.parse('1.2.3'):
        launch.run_pip("install -U transparent-background", "update transparent-background version for Ebsynth Utility")

# Check if user is running an M1/M2 device and, if so, install pyvirtualcam, which is required for updating the transparent_background package
# Note that we have to directly install from source because the prebuilt PyPI wheel does not support ARM64 machines such as M1/M2 Macs
if platform.system() == "Darwin" and platform.machine() == "arm64":
    if not launch.is_installed("pyvirtualcam"):
        launch.run_pip("install git+https://github.com/letmaik/pyvirtualcam", "requirements for Ebsynth Utility")

if not launch.is_installed("transparent_background"):
    launch.run_pip("install transparent-background", "requirements for Ebsynth Utility")

update_transparent_background()

if not launch.is_installed("IPython"):
    launch.run_pip("install ipython", "requirements for Ebsynth Utility")

if not launch.is_installed("seaborn"):
    launch.run_pip("install ""seaborn>=0.11.0""", "requirements for Ebsynth Utility")

if not launch.is_installed("color_matcher"):
    launch.run_pip("install color-matcher", "requirements for Ebsynth Utility")
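The install script gates the `transparent-background` upgrade on `packaging.version.parse`, which compares versions numerically rather than lexically. A small sketch of why that matters (the version strings are illustrative):

```python
# Sketch of the version gate used in install.py above. packaging compares
# release segments numerically, so "1.10" correctly sorts after "1.2".
from packaging import version

print(version.parse("1.2.1") < version.parse("1.2.3"))   # update needed
print(version.parse("1.10.0") < version.parse("1.2.3"))  # no update: 1.10 is newer
print("1.10.0" < "1.2.3")                                # naive string order disagrees
```

This is why the script parses both sides instead of comparing the raw strings returned by `importlib.metadata.version`.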

scripts/custom_script.py Normal file

File diff suppressed because it is too large


@@ -1,171 +1,196 @@
import gradio as gr
from ebsynth_utility import ebsynth_utility_process
from modules import script_callbacks
from modules.call_queue import wrap_gradio_gpu_call
def on_ui_tabs():
with gr.Blocks(analytics_enabled=False) as ebs_interface:
with gr.Row(equal_height=False):
with gr.Column(variant='panel'):
with gr.Row():
with gr.Tabs(elem_id="ebs_settings"):
with gr.TabItem('project setting', elem_id='ebs_project_setting'):
project_dir = gr.Textbox(label='Project directory', lines=1)
original_movie_path = gr.Textbox(label='Original Movie Path', lines=1)
org_video = gr.Video(interactive=True, mirror_webcam=False)
def fn_upload_org_video(video):
return video
org_video.upload(fn_upload_org_video, org_video, original_movie_path)
gr.HTML(value="<p style='margin-bottom: 1.2em'>\
If you have trouble entering the video path manually, you can also use drag and drop. For large videos, please enter the path manually. \
</p>")
with gr.TabItem('configuration', elem_id='ebs_configuration'):
with gr.Tabs(elem_id="ebs_configuration_tab"):
with gr.TabItem(label="stage 1",elem_id='ebs_configuration_tab1'):
with gr.Row():
frame_width = gr.Number(value=-1, label="Frame Width", precision=0, interactive=True)
frame_height = gr.Number(value=-1, label="Frame Height", precision=0, interactive=True)
gr.HTML(value="<p style='margin-bottom: 1.2em'>\
-1 means that it is calculated automatically. If both are -1, the size will be the same as the source size. \
</p>")
st1_masking_method_index = gr.Radio(label='Masking Method', choices=["transparent-background","clipseg","transparent-background AND clipseg"], value="transparent-background", type="index")
with gr.Accordion(label="transparent-background options"):
st1_mask_threshold = gr.Slider(minimum=0.0, maximum=1.0, step=0.01, label='Mask Threshold', value=0.0)
# https://pypi.org/project/transparent-background/
gr.HTML(value="<p style='margin-bottom: 0.7em'>\
configuration for \
<font color=\"blue\"><a href=\"https://pypi.org/project/transparent-background\">[transparent-background]</a></font>\
</p>")
tb_use_fast_mode = gr.Checkbox(label="Use Fast Mode(It will be faster, but the quality of the mask will be lower.)", value=False)
tb_use_jit = gr.Checkbox(label="Use Jit", value=False)
with gr.Accordion(label="clipseg options"):
clipseg_mask_prompt = gr.Textbox(label='Mask Target (e.g., girl, cats)', lines=1)
clipseg_exclude_prompt = gr.Textbox(label='Exclude Target (e.g., finger, book)', lines=1)
clipseg_mask_threshold = gr.Slider(minimum=0.0, maximum=1.0, step=0.01, label='Mask Threshold', value=0.4)
clipseg_mask_blur_size = gr.Slider(minimum=0, maximum=150, step=1, label='Mask Blur Kernel Size(MedianBlur)', value=11)
clipseg_mask_blur_size2 = gr.Slider(minimum=0, maximum=150, step=1, label='Mask Blur Kernel Size(GaussianBlur)', value=11)
with gr.TabItem(label="stage 2", elem_id='ebs_configuration_tab2'):
key_min_gap = gr.Slider(minimum=0, maximum=500, step=1, label='Minimum keyframe gap', value=10)
key_max_gap = gr.Slider(minimum=0, maximum=1000, step=1, label='Maximum keyframe gap', value=300)
key_th = gr.Slider(minimum=0.0, maximum=100.0, step=0.1, label='Threshold of delta frame edge', value=8.5)
key_add_last_frame = gr.Checkbox(label="Add last frame to keyframes", value=True)
with gr.TabItem(label="stage 3.5", elem_id='ebs_configuration_tab3_5'):
gr.HTML(value="<p style='margin-bottom: 0.7em'>\
<font color=\"blue\"><a href=\"https://github.com/hahnec/color-matcher\">[color-matcher]</a></font>\
</p>")
color_matcher_method = gr.Radio(label='Color Transfer Method', choices=['default', 'hm', 'reinhard', 'mvgd', 'mkl', 'hm-mvgd-hm', 'hm-mkl-hm'], value="hm-mkl-hm", type="value")
color_matcher_ref_type = gr.Radio(label='Color Matcher Ref Image Type', choices=['original video frame', 'first frame of img2img result'], value="original video frame", type="index")
gr.HTML(value="<p style='margin-bottom: 0.7em'>\
<font color=\"red\">If an image is specified below, it will be used with highest priority.</font>\
</p>")
color_matcher_ref_image = gr.Image(label="Color Matcher Ref Image", source='upload', mirror_webcam=False, type='pil')
st3_5_use_mask = gr.Checkbox(label="Apply mask to the result", value=True)
st3_5_use_mask_ref = gr.Checkbox(label="Apply mask to the Ref Image", value=False)
st3_5_use_mask_org = gr.Checkbox(label="Apply mask to original image", value=False)
#st3_5_number_of_itr = gr.Slider(minimum=1, maximum=10, step=1, label='Number of iterations', value=1)
with gr.TabItem(label="stage 7", elem_id='ebs_configuration_tab7'):
blend_rate = gr.Slider(minimum=0.0, maximum=1.0, step=0.01, label='Crossfade blend rate', value=1.0)
export_type = gr.Dropdown(choices=["mp4","webm","gif","rawvideo"], value="mp4" ,label="Export type")
with gr.TabItem(label="stage 8", elem_id='ebs_configuration_tab8'):
bg_src = gr.Textbox(label='Background source(mp4 or directory containing images)', lines=1)
bg_type = gr.Dropdown(choices=["Fit video length","Loop"], value="Fit video length" ,label="Background type")
mask_blur_size = gr.Slider(minimum=0, maximum=150, step=1, label='Mask Blur Kernel Size', value=5)
mask_threshold = gr.Slider(minimum=0.0, maximum=1.0, step=0.01, label='Mask Threshold', value=0.0)
#is_transparent = gr.Checkbox(label="Is Transparent", value=True, visible = False)
fg_transparency = gr.Slider(minimum=0.0, maximum=1.0, step=0.01, label='Foreground Transparency', value=0.0)
with gr.TabItem(label="etc", elem_id='ebs_configuration_tab_etc'):
mask_mode = gr.Dropdown(choices=["Normal","Invert","None"], value="Normal" ,label="Mask Mode")
with gr.TabItem('info', elem_id='ebs_info'):
gr.HTML(value="<p style='margin-bottom: 0.7em'>\
The process of creating a video can be divided into the following stages.<br>\
(Stage 3, 4, and 6 only show a guide and do no actual processing.)<br><br>\
<b>stage 1</b> <br>\
Extract frames from the original video. <br>\
Generate a mask image. <br><br>\
<b>stage 2</b> <br>\
Select keyframes to be given to ebsynth.<br><br>\
<b>stage 3</b> <br>\
img2img keyframes.<br><br>\
<b>stage 3.5</b> <br>\
(Optional. Perform color correction on the img2img results to reduce flickering, or simply change the color tone of the generated result.)<br><br>\
<b>stage 4</b> <br>\
Scale the img2img keyframes up or down to the size of the original video.<br><br>\
<b>stage 5</b> <br>\
Rename keyframes.<br>\
Generate .ebs file.(ebsynth project file)<br><br>\
<b>stage 6</b> <br>\
Run ebsynth manually (by yourself).<br>\
Open the generated .ebs under project directory and press [Run All] button. <br>\
If \"out-*\" directory already exists in the Project directory, delete it manually before executing.<br>\
If multiple .ebs files are generated, run them all.<br><br>\
<b>stage 7</b> <br>\
Concatenate each frame while crossfading.<br>\
Composite audio files extracted from the original video onto the concatenated video.<br><br>\
<b>stage 8</b> <br>\
This is an extra stage.<br>\
You can put any image, image sequence, or video you like in the background.<br>\
Specify it in [Ebsynth Utility]->[configuration]->[stage 8]->[Background source].<br>\
If you have already created a background video in Invert Mask Mode([Ebsynth Utility]->[configuration]->[etc]->[Mask Mode]),<br>\
You can specify \"path_to_project_dir/inv/crossfade_tmp\".<br>\
</p>")
with gr.Column(variant='panel'):
with gr.Column(scale=1):
with gr.Row():
stage_index = gr.Radio(label='Process Stage', choices=["stage 1","stage 2","stage 3","stage 3.5","stage 4","stage 5","stage 6","stage 7","stage 8"], value="stage 1", type="index", elem_id='ebs_stages')
with gr.Row():
generate_btn = gr.Button('Generate', elem_id="ebs_generate_btn", variant='primary')
with gr.Group():
debug_info = gr.HTML(elem_id="ebs_info_area", value=".")
with gr.Column(scale=2):
html_info = gr.HTML()
ebs_args = dict(
fn=wrap_gradio_gpu_call(ebsynth_utility_process),
inputs=[
stage_index,
project_dir,
original_movie_path,
frame_width,
frame_height,
st1_masking_method_index,
st1_mask_threshold,
tb_use_fast_mode,
tb_use_jit,
clipseg_mask_prompt,
clipseg_exclude_prompt,
clipseg_mask_threshold,
clipseg_mask_blur_size,
clipseg_mask_blur_size2,
key_min_gap,
key_max_gap,
key_th,
key_add_last_frame,
color_matcher_method,
st3_5_use_mask,
st3_5_use_mask_ref,
st3_5_use_mask_org,
color_matcher_ref_type,
color_matcher_ref_image,
blend_rate,
export_type,
bg_src,
bg_type,
mask_blur_size,
mask_threshold,
fg_transparency,
mask_mode,
],
outputs=[
debug_info,
html_info,
],
show_progress=False,
)
generate_btn.click(**ebs_args)
return (ebs_interface, "Ebsynth Utility", "ebs_interface"),
script_callbacks.on_ui_tabs(on_ui_tabs)
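The `stage_index` Radio above uses `type="index"`, so the processing function receives the *position* of the selected choice, not its label; adding "stage 3.5" therefore shifted every later stage's index by one. A small sketch of that mapping (the helper name `stage_label_to_index` is ours, for illustration only):

```python
# The choice list mirrors the Radio above; Gradio with type="index"
# passes choices.index(selected_label) to the callback.
STAGE_CHOICES = ["stage 1", "stage 2", "stage 3", "stage 3.5",
                 "stage 4", "stage 5", "stage 6", "stage 7", "stage 8"]

def stage_label_to_index(label):
    return STAGE_CHOICES.index(label)

print(stage_label_to_index("stage 3.5"))  # -> 3
print(stage_label_to_index("stage 8"))    # -> 8
```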

## stage3_5.py (new file, 178 lines)
import cv2
import os
import glob
import shutil
import numpy as np
from PIL import Image
from color_matcher import ColorMatcher
from color_matcher.normalizer import Normalizer
def resize_img(img, w, h):
    # Upscaling: bicubic; downscaling: area interpolation (less aliasing).
    if img.shape[0] + img.shape[1] < h + w:
        interpolation = cv2.INTER_CUBIC
    else:
        interpolation = cv2.INTER_AREA
    return cv2.resize(img, (w, h), interpolation=interpolation)
def get_pair_of_img(img_path, target_dir):
    # Return the file in target_dir with the same basename, or "" if missing.
    img_basename = os.path.basename(img_path)
    pair_path = os.path.join(target_dir, img_basename)
    if os.path.isfile(pair_path):
        return pair_path
    print("!!! pair of " + img_path + " not in " + target_dir)
    return ""

def remove_pngs_in_dir(path):
    if not os.path.isdir(path):
        return
    pngs = glob.glob(os.path.join(path, "*.png"))
    for png in pngs:
        os.remove(png)
def get_mask_array(mask_path):
if not mask_path:
return None
mask_array = np.asarray(Image.open( mask_path ))
if mask_array.ndim == 2:
mask_array = mask_array[:, :, np.newaxis]
mask_array = mask_array[:,:,:1]
mask_array = mask_array/255
return mask_array
def color_match(imgs, ref_image, color_matcher_method, dst_path):
    cm = ColorMatcher(method=color_matcher_method)
    total = len(imgs)
    for i, fname in enumerate(imgs, 1):
        img_src = Image.open(fname)
        img_src = Normalizer(np.asarray(img_src)).type_norm()
        img_src = cm.transfer(src=img_src, ref=ref_image, method=color_matcher_method)
        img_src = Normalizer(img_src).uint8_norm()
        Image.fromarray(img_src).save(os.path.join(dst_path, os.path.basename(fname)))
        print("{0}/{1}".format(i, total))
def ebsynth_utility_stage3_5(dbg, project_args, color_matcher_method, st3_5_use_mask, st3_5_use_mask_ref, st3_5_use_mask_org, color_matcher_ref_type, color_matcher_ref_image):
dbg.print("stage3.5")
dbg.print("")
_, _, frame_path, frame_mask_path, org_key_path, img2img_key_path, _ = project_args
backup_path = os.path.join( os.path.join( img2img_key_path, "..") , "st3_5_backup_img2img_key")
backup_path = os.path.normpath(backup_path)
if not os.path.isdir( backup_path ):
dbg.print("{0} not found -> create backup.".format(backup_path))
os.makedirs(backup_path, exist_ok=True)
imgs = glob.glob( os.path.join(img2img_key_path, "*.png") )
for img in imgs:
img_basename = os.path.basename(img)
pair_path = os.path.join( backup_path , img_basename )
shutil.copy( img , pair_path)
else:
dbg.print("{0} found -> Treat the images here as originals.".format(backup_path))
org_imgs = sorted( glob.glob( os.path.join(backup_path, "*.png") ) )
head_of_keyframe = org_imgs[0]
# open ref img
ref_image = color_matcher_ref_image
if not ref_image:
dbg.print("color_matcher_ref_image not set")
if color_matcher_ref_type == 0:
#'original video frame'
dbg.print("select -> original video frame")
ref_image = Image.open( get_pair_of_img(head_of_keyframe, frame_path) )
else:
#'first frame of img2img result'
dbg.print("select -> first frame of img2img result")
ref_image = Image.open( get_pair_of_img(head_of_keyframe, backup_path) )
ref_image = np.asarray(ref_image)
if st3_5_use_mask_ref:
mask = get_pair_of_img(head_of_keyframe, frame_mask_path)
if mask:
mask_array = get_mask_array( mask )
ref_image = ref_image * mask_array
ref_image = ref_image.astype(np.uint8)
else:
dbg.print("select -> color_matcher_ref_image")
ref_image = np.asarray(ref_image)
if color_matcher_method in ('mvgd', 'hm-mvgd-hm'):
sample_img = Image.open(head_of_keyframe)
ref_image = resize_img( ref_image, sample_img.width, sample_img.height )
ref_image = Normalizer(ref_image).type_norm()
if st3_5_use_mask_org:
tmp_path = os.path.join( os.path.join( img2img_key_path, "..") , "st3_5_tmp")
tmp_path = os.path.normpath(tmp_path)
dbg.print("create {0} for masked original image".format(tmp_path))
remove_pngs_in_dir(tmp_path)
os.makedirs(tmp_path, exist_ok=True)
for org_img in org_imgs:
image_basename = os.path.basename(org_img)
org_image = np.asarray(Image.open(org_img))
mask = get_pair_of_img(org_img, frame_mask_path)
if mask:
mask_array = get_mask_array( mask )
org_image = org_image * mask_array
org_image = org_image.astype(np.uint8)
Image.fromarray(org_image).save( os.path.join( tmp_path, image_basename ) )
org_imgs = sorted( glob.glob( os.path.join(tmp_path, "*.png") ) )
color_match(org_imgs, ref_image, color_matcher_method, img2img_key_path)
if st3_5_use_mask or st3_5_use_mask_org:
imgs = sorted( glob.glob( os.path.join(img2img_key_path, "*.png") ) )
for img in imgs:
mask = get_pair_of_img(img, frame_mask_path)
if mask:
mask_array = get_mask_array( mask )
bg = get_pair_of_img(img, frame_path)
bg_image = np.asarray(Image.open( bg ))
fg_image = np.asarray(Image.open( img ))
final_img = fg_image * mask_array + bg_image * (1-mask_array)
final_img = final_img.astype(np.uint8)
Image.fromarray(final_img).save(img)
dbg.print("")
dbg.print("completed.")
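The final loop above blends the color-matched result back over the original frame with `fg * mask + bg * (1 - mask)`, using the 0..1 single-channel mask produced by `get_mask_array`. A minimal NumPy sketch of that compositing step (the toy 2x2 images and mask values are made up for illustration):

```python
import numpy as np

# Toy 2x2 RGB images standing in for the img2img result and the original frame.
fg = np.full((2, 2, 3), 200, dtype=np.uint8)  # "foreground" (stylized frame)
bg = np.full((2, 2, 3), 50, dtype=np.uint8)   # "background" (original frame)

# Mask as loaded by get_mask_array: single channel, scaled to 0..1,
# with a trailing axis so it broadcasts over the RGB channels.
mask = np.array([[1.0, 0.0], [0.5, 1.0]])[:, :, np.newaxis]

# The same blend as in the stage 3.5 loop.
final = (fg * mask + bg * (1 - mask)).astype(np.uint8)
print(final[0, 0, 0], final[0, 1, 0], final[1, 0, 0])  # -> 200 50 125
```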

from sys import byteorder
import binascii
import numpy as np
SYNTHS_PER_PROJECT = 15  # was 25 before this commit
def to_float_bytes(f):
if byteorder == 'little':
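`SYNTHS_PER_PROJECT` caps how many synth entries go into a single generated `.ebs` project file, which is why stage 6 can produce several `.ebs` files that all need to be run. A hedged sketch of the resulting split — the helper `chunk_keys` and the greedy chunking are our illustration, not the extension's actual code:

```python
# Splitting N keyframes into .ebs projects of at most
# SYNTHS_PER_PROJECT synths each (illustrative only).
SYNTHS_PER_PROJECT = 15

def chunk_keys(keyframes, per_project=SYNTHS_PER_PROJECT):
    return [keyframes[i:i + per_project]
            for i in range(0, len(keyframes), per_project)]

projects = chunk_keys(list(range(40)))
print(len(projects))      # -> 3 (.ebs files for 40 keyframes)
print(len(projects[-1]))  # -> 10 synths in the last project
```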

## stage8.py
import os
import re
import subprocess
import glob
import shutil
import time
import cv2
import numpy as np
import itertools
from extensions.ebsynth_utility.stage7 import create_movie_from_frames, get_ext, trying_to_add_audio
def clamp(n, smallest, largest):
return sorted([smallest, n, largest])[1]
def resize_img(img, w, h):
    # Upscaling: bicubic; downscaling: area interpolation (less aliasing).
    if img.shape[0] + img.shape[1] < h + w:
        interpolation = cv2.INTER_CUBIC
    else:
        interpolation = cv2.INTER_AREA
    return cv2.resize(img, (w, h), interpolation=interpolation)
def merge_bg_src(base_frame_dir, bg_dir, frame_mask_path, tmp_dir, bg_type, mask_blur_size, mask_threshold, fg_transparency):
base_frames = sorted(glob.glob( os.path.join(base_frame_dir, "[0-9]*.png"), recursive=False) )
bg_frames = sorted(glob.glob( os.path.join(bg_dir, "*.png"), recursive=False) )
def bg_frame(total_frames):
bg_len = len(bg_frames)
if bg_type == "Loop":
itr = itertools.cycle(bg_frames)
while True:
yield next(itr)
else:
for i in range(total_frames):
yield bg_frames[ int(bg_len * (i/total_frames))]
bg_itr = bg_frame(len(base_frames))
for base_frame in base_frames:
im = cv2.imread(base_frame)
bg = cv2.imread( next(bg_itr) )
bg = resize_img(bg, im.shape[1], im.shape[0] )
basename = os.path.basename(base_frame)
mask_path = os.path.join(frame_mask_path, basename)
mask = cv2.imread(mask_path)[:,:,0]
mask[mask < int( 255 * mask_threshold )] = 0
if mask_blur_size > 0:
mask_blur_size = mask_blur_size//2 * 2 + 1
mask = cv2.GaussianBlur(mask, (mask_blur_size, mask_blur_size), 0)
mask = mask[:, :, np.newaxis]
fore_rate = (mask/255) * (1 - fg_transparency)
im = im * fore_rate + bg * (1- fore_rate)
im = im.astype(np.uint8)
cv2.imwrite( os.path.join( tmp_dir , basename ) , im)
def extract_frames(movie_path, output_dir, fps):
    png_path = os.path.join(output_dir, "%05d.png")
    # ffmpeg -ss 00:00:00 -y -i <movie> -vf fps=<fps> -qscale 0 -f image2 -c:v png %05d.png
    # Passing an argument list instead of a shell string keeps paths with spaces intact.
    subprocess.call(["ffmpeg", "-ss", "00:00:00", "-y", "-i", movie_path,
                     "-vf", "fps=" + str(round(fps, 2)),
                     "-qscale", "0", "-f", "image2", "-c:v", "png", png_path])
def ebsynth_utility_stage8(dbg, project_args, bg_src, bg_type, mask_blur_size, mask_threshold, fg_transparency, export_type):
dbg.print("stage8")
dbg.print("")
if not bg_src:
dbg.print("Fill [configuration] -> [stage 8] -> [Background source]")
return
project_dir, original_movie_path, _, frame_mask_path, _, _, _ = project_args
fps = 30
clip = cv2.VideoCapture(original_movie_path)
if clip.isOpened():
fps = clip.get(cv2.CAP_PROP_FPS)
clip.release()
dbg.print("bg_src: {}".format(bg_src))
dbg.print("bg_type: {}".format(bg_type))
dbg.print("mask_blur_size: {}".format(mask_blur_size))
dbg.print("export_type: {}".format(export_type))
dbg.print("fps: {}".format(fps))
base_frame_dir = os.path.join( project_dir , "crossfade_tmp")
if not os.path.isdir(base_frame_dir):
dbg.print(base_frame_dir + " base frame not found")
return
tmp_dir = os.path.join( project_dir , "bg_merge_tmp")
if os.path.isdir(tmp_dir):
shutil.rmtree(tmp_dir)
os.mkdir(tmp_dir)
### create frame imgs
if os.path.isfile(bg_src):
bg_ext = os.path.splitext(os.path.basename(bg_src))[1]
if bg_ext == ".mp4":
bg_tmp_dir = os.path.join( project_dir , "bg_extract_tmp")
if os.path.isdir(bg_tmp_dir):
shutil.rmtree(bg_tmp_dir)
os.mkdir(bg_tmp_dir)
extract_frames(bg_src, bg_tmp_dir, fps)
bg_src = bg_tmp_dir
else:
dbg.print(bg_src + " must be mp4 or directory")
return
elif not os.path.isdir(bg_src):
dbg.print(bg_src + " must be mp4 or directory")
return
merge_bg_src(base_frame_dir, bg_src, frame_mask_path, tmp_dir, bg_type, mask_blur_size, mask_threshold, fg_transparency)
### create movie
movie_base_name = time.strftime("%Y%m%d-%H%M%S")
movie_base_name = "merge_" + movie_base_name
nosnd_path = os.path.join(project_dir , movie_base_name + get_ext(export_type))
merged_frames = sorted(glob.glob( os.path.join(tmp_dir, "[0-9]*.png"), recursive=False) )
start = int(os.path.splitext(os.path.basename(merged_frames[0]))[0])
end = int(os.path.splitext(os.path.basename(merged_frames[-1]))[0])
create_movie_from_frames(tmp_dir,start,end,5,fps,nosnd_path,export_type)
dbg.print("exported : " + nosnd_path)
if export_type == "mp4":
with_snd_path = os.path.join(project_dir , movie_base_name + '_with_snd.mp4')
if trying_to_add_audio(original_movie_path, nosnd_path, with_snd_path, tmp_dir):
dbg.print("exported : " + with_snd_path)
dbg.print("")
dbg.print("completed.")
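`merge_bg_src` picks one background frame per foreground frame: "Loop" cycles through the background frames, while "Fit video length" resamples them onto the output length via `int(bg_len * (i / total_frames))`. A pure-Python sketch of that generator, with integers standing in for frame files:

```python
import itertools

# Sketch of bg_frame() above: which background frame backs each output frame.
def bg_frame(bg_frames, total_frames, bg_type):
    bg_len = len(bg_frames)
    if bg_type == "Loop":
        itr = itertools.cycle(bg_frames)
        for _ in range(total_frames):
            yield next(itr)
    else:  # "Fit video length": stretch/compress bg indices onto the output length
        for i in range(total_frames):
            yield bg_frames[int(bg_len * (i / total_frames))]

print(list(bg_frame([0, 1, 2], 6, "Fit video length")))  # -> [0, 0, 1, 1, 2, 2]
print(list(bg_frame([0, 1, 2], 5, "Loop")))              # -> [0, 1, 2, 0, 1]
```

With a 3-frame background and a 6-frame output, "Fit video length" shows each background frame twice; "Loop" simply wraps around.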

## stylesheet
#ebs_info_area {
border: #0B0F19 2px solid;
border-radius: 5px;
font-size: 15px;
margin: 10px;
padding: 10px;
}
#ebs_configuration_tab1>div{
margin: 5px;
padding: 5px;
}
#ebs_configuration_tab2>div{
margin: 5px;
padding: 5px;
}
#ebs_configuration_tab3_5>div{
margin: 5px;
padding: 5px;
}
#ebs_configuration_tab7>div{
margin: 5px;
padding: 5px;
}
#ebs_configuration_tab8>div{
margin: 5px;
padding: 5px;
}
#ebs_configuration_tab_etc>div{
margin: 5px;
padding: 5px;
}
video.svelte-w5wajl {
max-height: 500px;
}