Update custom models in README, share generated images with LAION
parent f8f9a84e4f
commit fef6d7d04c
README.md (18 lines changed)
@@ -42,6 +42,7 @@
models_out.sort()
cog.out("\n".join(models_out))
]]] -->
- **3DKX** (v1.1): SFW model with limited NSFW capabilities (suggestive NSFW) that is highly versatile for 3D renders.
- **ACertainThing** (v1.0): An improved version of Anything v3 made with ACertainThing, focusing on scenes rather than characters
- **AIO Pixel Art** (v1): Stable Diffusion fine tuned on pixel art sprites and scenes
- **Analog Diffusion** (v1.0): A dreambooth model trained on a diverse set of analog photographs
@@ -63,11 +64,13 @@
- **Dawgsmix** (v1): anime and realistic anatomy focused merge of Anything, Trinart, F222, SD 1.5
- **DnD Item** (v1.0): A model (dnditem) for creating magic items for the game Dungeons and Dragons! It was trained to be very similar to the official results available here: https://www.dndbeyond.com/magic-items
- **Double Exposure Diffusion** (v2.0): The Double Exposure Diffusion model, trained specifically on images of people and a few animals
- **DreamLikeSamKuvshinov** (v1): A mixture of Dreamlike Diffusion 1.0, SamDoesArt V3 and Kuvshinov style models. Created mostly for exploring different character concepts with a focus on drawings, but the mix happens to be pretty good at realistic-ish images too, thanks to the wonderful models it uses.
- **Dreamlike Diffusion** (v1.0): Dreamlike Diffusion 1.0 is SD 1.5 fine tuned on high quality art, made by dreamlike.art
- **Dreamlike Photoreal** (v1.0): Dreamlike Photoreal 1.0 is a photorealistic Stable Diffusion 1.5 model fine tuned on high quality photos, made by dreamlike.art
- **Dreamlike Photoreal** (v2.0): Dreamlike Photoreal 2.0 is a photorealistic Stable Diffusion 1.5 model fine tuned on high quality photos, made by dreamlike.art
- **Dungeons and Diffusion** (v1): Generates D&D styled characters, trained on art commissions
- **Eimis Anime Diffusion** (v1): This model is trained on high quality, detailed anime images
- **Elden Ring Diffusion** (v2): Based on the Elden Ring video game style
- **Elldreth's Lucid Mix** (v1.0): An all-around, easy-to-prompt, general-purpose semi-realistic to realistic model that cranks out some really nice images. No trigger words required
- **Eternos** (v1.0): A surrealist/minimalist model
- **Fantasy Card Diffusion** (v1): fantasy trading card style art, trained on all currently available Magic: the Gathering card art
- **Funko Diffusion** (v1.0): Stable Diffusion fine tuned on Funko Pop, by PromptHero.
@@ -76,8 +79,10 @@
- **GTA5 Artwork Diffusion** (v1.0): This model was trained on the loading screens, GTA story mode, and GTA Online DLC artworks, which include characters, backgrounds, Chop, and some objects. The model can do people and portraits pretty easily, as well as cars and houses. For some reason, the model still automatically includes some game footage, so landscapes tend to look a bit more game-like.
- **Ghibli Diffusion** (v1): fine-tuned Stable Diffusion model trained on images from Studio Ghibli feature films
- **Guohua Diffusion** (v1): fine-tuned Stable Diffusion model trained on traditional Chinese paintings
- **Hassanblend** (v1.4): This model is for creating people
- **Hentai Diffusion** (v17): Anime focused model with better hands, obscure poses/camera angles and consistent style
- **HASDX** (v1.0): A merge of a few checkpoints that came out buttery and amazing. Does great with things other than people too; it can do almost anything and doesn't need elaborate prompts. Keep it simple: no artist names or "trending on" tags required.
- **Hassanblend** (v1.5): This model is for creating people
- **Healy's Anime Blend** (v1.0): A blend of some anime models mixed with 'realistic' stuff
- **Hentai Diffusion** (v19): Anime focused model with better hands, obscure poses/camera angles and consistent style
- **Inkpunk Diffusion** (v2): inspired by Gorillaz art, FLCL and Yoji Shinkawa. Trained on images generated from Midjourney
- **JWST Deep Space Diffusion** (v1): Stable Diffusion fine tuned on JWST imagery
- **Knollingcase** (v1): generates a glass display case with objects inside, inspired by Sean Preston. Trained on Midjourney images
@@ -88,21 +93,26 @@
- **Midjourney PaintArt** (v1): Midjourney v4 painting style
- **Min Illust Background** (v1.0): This fine-tuned Stable Diffusion v1.5 model was trained on a selection of artistic works by Sin Jong Hun
- **ModernArt Diffusion** (v1.0): You can use this model to generate modern-art style images
- **Moedel** (v2): Moe.del produces cute female characters. It is also a mix of Stable Diffusion 1.4/1.5 in different proportions, so you can challenge it to generate pretty much anything using regular SD prompts (like cute dogs, cats, etc.)
- **MoistMix** (v1.0): A do (almost) anything model
- **Nitro Diffusion** (v1): Multi-style model trained on Arcane, Archer and Mo-Di
- **Papercut Diffusion** (v1): Stable Diffusion fine tuned on paper cut images
- **Papercutcraft** (v1): Paper Cut Craft is a fine tuned Stable Diffusion model trained on Midjourney images
- **Poison** (v1): Anything Diffusion fine-tuned to produce high-quality realistic anime styled images
- **PortraitPlus** (v1.0): A dreambooth model trained on a diverse set of close to medium range portraits of people.
- **RPG** (v1): portraits of characters in the style of the game Baldur's Gate
- **ProtoGen** (v5.3): One Step Closer to Reality
- **RPG** (v2): portraits of characters in the style of the game Baldur's Gate
- **Ranma Diffusion** (v1): imitates the style of late-'80s, early-'90s anime; Anything v3 base
- **Redshift Diffusion** (v1): Dreambooth model trained on high resolution 3D artworks
- **Robo-Diffusion** (v1): Robot oriented drawing style
- **Samdoesarts Ultmerge** (v1): Portraits in the style of Sam Yang, merged with chewtoy and orange code's models
- **Sci-Fi Diffusion** (v1.0): A sci-fi themed model trained on SD 1.5 with a 26K+ image dataset
- **Seek.art MEGA** (v1.0): Seek.art MEGA is a general use 'anything' model that significantly improves on 1.5 across dozens of styles. Created by Coreco at seek.art
- **Smoke Diffusion** (v1.0): A fine-tuned Stable Diffusion model trained on images of smoke
- **Spider-Verse Diffusion** (v1): Based on the Into the Spider-Verse movie's animation style
- **Squishmallow Diffusion** (v1): Squishmallows
- **Supermarionation** (v2.0): A fine-tuned Stable Diffusion model (based on v1.5) trained on screenshots from Gerry Anderson's Supermarionation stop-motion animation, mostly from the Thunderbirds TV series
- **Sygil-Dev Diffusion** (v1): A Stable Diffusion v1.5 fine-tune trained on the Imaginary Network Expanded Dataset. An advanced version of Stable Diffusion that can generate nearly all kinds of images: humans, reflections, cities, architecture, fantasy, digital art, landscapes, and nature views.
- **Synthwave** (v1): Stable Diffusion model to create images in Synthwave/Outrun style
- **Trinart Characters** (v2.0): Derrida (formerly TrinArt Characters v2) is a Stable Diffusion v1-based model that further improves on the previous Characters v1 model. While still a versatile, compositionally varied anime/manga model like other TrinArt models, Derrida focuses more on anatomical stability and slightly less on variation, due to further multi-epoch training and fine-tuning.
- **Tron Legacy Diffusion** (v1): Tron Legacy movie style
@@ -94,15 +94,16 @@ class Main(scripts.Script):
 
     def ui(self, is_img2img):
         with gradio.Box():
-            with gradio.Row(elem_id="model_row"):
+            with gradio.Row(elem_id="horde_model_row"):
                 model = gradio.Dropdown(self.load_models(), value="Random", label="Model")
                 model.style(container=False)
-                update = gradio.Button(ui.refresh_symbol, elem_id="update_model")
+                update = gradio.Button(ui.refresh_symbol, elem_id="horde_update_model")
 
         with gradio.Box():
             with gradio.Row():
                 nsfw = gradio.Checkbox(False, label="NSFW")
-                seed_variation = gradio.Slider(minimum=1, maximum=1000, value=1, step=1, label="Seed variation")
+                shared = gradio.Checkbox(False, label="Share with LAION")
+                seed_variation = gradio.Slider(minimum=1, maximum=1000, value=1, step=1, label="Seed variation", elem_id="horde_seed_variation")
 
         with gradio.Box():
             with gradio.Row():
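Gradio hands the values of the components returned from ui() to run() positionally, so the commit reorders run()'s parameters in lockstep with the new return list. A toy sketch of that contract (the values are made up and no gradio is involved; this only illustrates why the two orderings must match):

```python
def ui():
    # Stand-ins for the gradio component values, in the order ui() returns them.
    model, nsfw, shared, seed_variation = "RPG (v2)", False, True, 5
    return [model, nsfw, shared, seed_variation]

def run(model, nsfw, shared, seed_variation):
    # Parameter order must mirror the list returned by ui(),
    # because the framework unpacks the values positionally.
    return {"model": model, "nsfw": nsfw, "shared": shared, "seed_variation": seed_variation}

print(run(*ui()))
```

If run() kept the old `nsfw, model` order while ui() returned `model, nsfw, shared, ...`, every argument after the mismatch would silently receive the wrong value.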
@@ -125,9 +126,9 @@ class Main(scripts.Script):
         update.click(fn=update_click, outputs=model)
         post_processing_1.change(fn=post_processing_1_change, inputs=post_processing_1, outputs=[post_processing_2, post_processing_3])
         post_processing_2.change(fn=post_processing_2_change, inputs=[post_processing_1, post_processing_2], outputs=post_processing_3)
-        return [nsfw, model, seed_variation, post_processing_1, post_processing_2, post_processing_3]
+        return [model, nsfw, shared, seed_variation, post_processing_1, post_processing_2, post_processing_3]
 
-    def run(self, p, nsfw, model, seed_variation, post_processing_1, post_processing_2, post_processing_3):
+    def run(self, p, model, nsfw, shared, seed_variation, post_processing_1, post_processing_2, post_processing_3):
         if model != "Random":
             model = model.split("(")[0].rstrip()
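run() strips the parenthesized version tag from the dropdown entry before using the model name, via `model.split("(")[0].rstrip()`. A minimal standalone sketch of that parsing (the function name and example strings are illustrative, not from the extension):

```python
def strip_version(entry: str) -> str:
    # Drop a trailing "(vX)" version tag from a model dropdown entry:
    # everything from the first "(" onward is discarded, then trailing
    # whitespace is trimmed.
    return entry.split("(")[0].rstrip()

print(strip_version("Dreamlike Photoreal (v2.0)"))  # Dreamlike Photoreal
print(strip_version("RPG (v2)"))                    # RPG
```

Entries without a "(" (such as "Random", though run() skips that case) pass through unchanged.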
@@ -142,9 +143,9 @@ class Main(scripts.Script):
         if post_processing_3 != "None":
             post_processing.append(post_processing_3.split("(")[0].rstrip())
 
-        return self.process_images(p, nsfw, model, int(seed_variation), post_processing)
+        return self.process_images(p, model, nsfw, shared, int(seed_variation), post_processing)
 
-    def process_images(self, p, nsfw, model, seed_variation, post_processing):
+    def process_images(self, p, model, nsfw, shared, seed_variation, post_processing):
         # Copyright (C) 2022 AUTOMATIC1111
 
         stored_opts = {k: shared.opts.data[k] for k in p.override_settings.keys()}
@@ -153,7 +154,7 @@ class Main(scripts.Script):
             for k, v in p.override_settings.items():
                 setattr(shared.opts, k, v)
 
-            res = self.process_images_inner(p, nsfw, model, seed_variation, post_processing)
+            res = self.process_images_inner(p, model, nsfw, shared, seed_variation, post_processing)
         finally:
             if p.override_settings_restore_afterwards:
                 for k, v in stored_opts.items():
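process_images saves the options it is about to override, applies the overrides, and restores the originals in a finally block so an exception cannot leave settings mutated. A self-contained sketch of that save/apply/restore pattern, using a dummy options object (all names here are illustrative stand-ins, not the extension's own):

```python
class Opts:
    """Dummy stand-in for a mutable global options object."""
    def __init__(self):
        self.data = {"censor_nsfw": True, "trusted_workers": True}

def with_overrides(opts, overrides, fn):
    # Save the current value of every key we are about to override.
    stored = {k: opts.data[k] for k in overrides}
    try:
        for k, v in overrides.items():
            opts.data[k] = v
        return fn(opts)
    finally:
        # Restore the originals even if fn raises.
        for k, v in stored.items():
            opts.data[k] = v

opts = Opts()
seen = with_overrides(opts, {"censor_nsfw": False}, lambda o: o.data["censor_nsfw"])
print(seen, opts.data["censor_nsfw"])  # False True
```

The callback sees the overridden value, but the object is back to its original state afterwards.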
@@ -161,7 +162,7 @@ class Main(scripts.Script):
 
         return res
 
-    def process_images_inner(self, p, nsfw, model, seed_variation, post_processing):
+    def process_images_inner(self, p, model, nsfw, shared, seed_variation, post_processing):
         # Copyright (C) 2022 AUTOMATIC1111
 
         if type(p.prompt) == list:
@@ -238,7 +239,7 @@ class Main(scripts.Script):
             if p.n_iter > 1:
                 shared.state.job = f"Batch {n+1} out of {p.n_iter}"
 
-            x_samples_ddim = self.process_batch_horde(p, nsfw, model, seed_variation, post_processing, prompts[0], negative_prompts[0], seeds[0])
+            x_samples_ddim = self.process_batch_horde(p, model, nsfw, shared, seed_variation, post_processing, prompts[0], negative_prompts[0], seeds[0])
 
             if x_samples_ddim is None:
                 del x_samples_ddim
@@ -310,7 +311,7 @@ class Main(scripts.Script):
 
         return res
 
-    def process_batch_horde(self, p, nsfw, model, seed_variation, post_processing, prompt, negative_prompt, seed):
+    def process_batch_horde(self, p, model, nsfw, shared, seed_variation, post_processing, prompt, negative_prompt, seed):
         payload = {
             "prompt": "{} ### {}".format(prompt, negative_prompt) if len(negative_prompt) > 0 else prompt,
             "params": {
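As the payload construction above shows, the negative prompt is folded into the single prompt field with a " ### " separator rather than sent as a separate field. A minimal sketch of that formatting rule (the helper name and example prompts are illustrative):

```python
def build_prompt(prompt: str, negative_prompt: str) -> str:
    # Join prompt and negative prompt with " ### " when a negative
    # prompt is given; otherwise send the prompt alone.
    return "{} ### {}".format(prompt, negative_prompt) if len(negative_prompt) > 0 else prompt

print(build_prompt("a cat", "blurry"))  # a cat ### blurry
print(build_prompt("a cat", ""))        # a cat
```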
@@ -355,6 +356,9 @@ class Main(scripts.Script):
             p.image_mask.save(buffer, format="WEBP")
             payload["source_mask"] = base64.b64encode(buffer.getvalue()).decode()
 
+        if shared:
+            payload["shared"] = True
+
         if len(post_processing) > 0:
             payload["params"]["post_processing"] = post_processing
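The new "Share with LAION" checkbox simply adds a top-level shared field to the request payload when enabled. A sketch of the resulting payload shape (only prompt, params, shared, and post_processing appear in the diff; the helper name, the rest of the payload, and the "GFPGAN" post-processor value are illustrative):

```python
def build_payload(prompt: str, shared: bool, post_processing: list) -> dict:
    # Minimal stand-in for the request payload assembled above;
    # the real payload also carries params, seed, source images, etc.
    payload = {"prompt": prompt, "params": {}}
    if shared:
        # Opt in to sharing the generated image with LAION.
        payload["shared"] = True
    if len(post_processing) > 0:
        payload["params"]["post_processing"] = post_processing
    return payload

p = build_payload("a cat", True, ["GFPGAN"])
print(p["shared"], p["params"])  # True {'post_processing': ['GFPGAN']}
```

When the checkbox is off, the field is omitted entirely rather than sent as False.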
@@ -27,7 +27,7 @@ def settings():
 
     with gradio.Row():
         api_key = gradio.Textbox(max_lines=1, placeholder="0000000000", label="API key", interactive=True, type="password")
-        show = gradio.Button(value="Show", elem_id="show_api_key")
+        show = gradio.Button(value="Show", elem_id="horde_show_api_key")
 
     censor_nsfw = gradio.Checkbox(label="Censor NSFW when NSFW is disabled", interactive=True)
     trusted_workers = gradio.Checkbox(label="Only send requests to trusted workers", interactive=True)