pull/36/head
Vladimir Mandic 2023-01-11 13:48:56 -05:00
parent f2adac6c12
commit fa36f33b19
5 changed files with 27 additions and 12 deletions

@@ -13,8 +13,13 @@ All code changes are merged upstream whenever possible
The fork does differ in a few things:
- Different start script
> ./automatic.sh
- Drops compatibility with **Python 3.7** and requires **3.10**
- If you're using **PyTorch 2.0**, models will be auto-compiled and optimized on load
  using the `max-autotune` compile mode
- Updated **Python** libraries to latest known compatible versions
  e.g. `accelerate`, `transformers`, `numpy`, etc.
- Includes opinionated **System** and **Options** configuration
e.g. `samplers`, `upscalers`, etc.
- Includes reskinned **UI**
@@ -23,16 +28,17 @@ Fork does differ in few things:
- Ships with additional **extensions**
e.g. `System Info`
- Uses simplified folder structure
- e.g. `/train`, `outputs`
+ e.g. `/train`, `/outputs/*`
- Modified training templates
The only Python library which is not auto-updated is `PyTorch` itself, as that is very system-specific
- I'm currently using **PyTorch 2.0-nightly** compiled with **CUDA 11.8**:
+ I'm currently using **PyTorch 2.0-nightly** compiled with **CUDA 11.8** and with **Triton** optimizations:
- > pip3 install --pre torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/nightly/cu118
+ > pip3 install --pre torch torchvision torchaudio torchtriton --extra-index-url https://download.pytorch.org/whl/nightly/cu118
> pip show torch
> 2.0.0.dev20230111+cu118
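The version string reported by `pip show torch` encodes the nightly build date and the CUDA toolkit the wheel was compiled against. As a minimal sketch (the helper below is illustrative, not part of the repo), such a string can be decomposed to sanity-check that the installed wheel is a `cu118` nightly:

```python
import re

def parse_torch_version(version: str):
    """Split a torch version string into (release, nightly_date, local_tag).

    Example input: "2.0.0.dev20230111+cu118" as printed by `pip show torch`.
    """
    m = re.fullmatch(r"(\d+\.\d+\.\d+)(?:\.dev(\d{8}))?(?:\+(\w+))?", version)
    if m is None:
        raise ValueError(f"unrecognized version string: {version}")
    return m.groups()

release, nightly_date, cuda_tag = parse_torch_version("2.0.0.dev20230111+cu118")
print(release, nightly_date, cuda_tag)  # 2.0.0 20230111 cu118
```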
## Docs

@@ -1,5 +1,5 @@
#!/usr/bin/env bash
export PYTORCH_CUDA_ALLOC_CONF=garbage_collection_threshold:0.9,max_split_size_mb:512
python launch.py --api --disable-console-progressbars
- # python launch.py --api --xformers --disable-console-progressbars
+ # python launch.py --api --xformers --disable-console-progressbars --opt-channelslast
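The `PYTORCH_CUDA_ALLOC_CONF` value exported by the script is a comma-separated list of `key:value` pairs read by PyTorch's CUDA caching allocator. A small sketch of how such a string decomposes (the parser is a hypothetical helper for illustration only):

```python
def parse_alloc_conf(conf: str) -> dict:
    """Decompose a PYTORCH_CUDA_ALLOC_CONF-style string into a dict."""
    pairs = (item.split(":", 1) for item in conf.split(",") if item)
    return {key.strip(): value.strip() for key, value in pairs}

# Same value the launcher script exports above.
conf = parse_alloc_conf("garbage_collection_threshold:0.9,max_split_size_mb:512")
print(conf)  # {'garbage_collection_threshold': '0.9', 'max_split_size_mb': '512'}
```

Here `garbage_collection_threshold` triggers allocator garbage collection once that fraction of GPU memory is in use, and `max_split_size_mb` limits block splitting to reduce fragmentation.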

@@ -48,7 +48,7 @@
"memmon_poll_rate": 1,
"samples_log_stdout": false,
"multiple_tqdm": true,
- "unload_models_when_training": true,
+ "unload_models_when_training": false,
"pin_memory": true,
"save_optimizer_state": false,
"dataset_filename_word_regex": "",
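The options file is plain JSON, so the effect of this change can be checked with the standard library. A minimal sketch using an illustrative subset of the keys shown above:

```python
import json

# Illustrative fragment of the options file; only a subset of keys is shown.
raw = """{
  "memmon_poll_rate": 1,
  "multiple_tqdm": true,
  "unload_models_when_training": false,
  "pin_memory": true
}"""

opts = json.loads(raw)
# With this commit, models stay loaded in memory during training:
print(opts["unload_models_when_training"])  # False
```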

@@ -358,6 +358,17 @@ def load_model(checkpoint_info=None):
sd_hijack.model_hijack.hijack(sd_model)
sd_model.eval()
"""
try:
t0 = time.time()
sd_model = torch.compile(sd_model, mode="max-autotune", fullgraph=True)
t1 = time.time()
print(f"Model compiled in {round(t1 - t0, 2)} sec")
except Exception as err:
print(f"Model compile not supported: {err}")
"""
shared.sd_model = sd_model
sd_hijack.model_hijack.embedding_db.load_textual_inversion_embeddings(force_reload=True) # Reload embeddings after model load as they may or may not fit the model

@@ -1,17 +1,15 @@
accelerate==0.15.0
- basicsr==1.4.2
- blendmodes==2022
- transformers==4.25.1
+ basicsr==1.4.2
+ blendmodes==2022
gfpgan==1.3.8
GitPython==3.1.27
gradio==3.15.0
numpy==1.23.5
- Pillow==9.4.0
- realesrgan==0.3.0
- torch
omegaconf==2.2.3
+ Pillow==9.4.0
pytorch_lightning==1.7.7
+ realesrgan==0.3.0
scikit-image==0.19.2
timm==0.6.7
torchdiffeq==0.2.3