Vladimir Mandic
f275e43eb9
update torch load and external repos
2023-03-20 10:10:56 -04:00
Vladimir Mandic
f6679fcc77
add global exception handler
2023-03-17 10:08:07 -04:00
Vladimir Mandic
54ab13502f
Merge pull request #37 from AUTOMATIC1111/master
...
sync forks
2023-02-19 10:11:25 -05:00
AUTOMATIC
11183b4d90
fix for #6700
2023-02-19 12:44:56 +03:00
Shondoit
edb10092de
Add ability to choose using weighted loss or not
2023-02-15 10:03:59 +01:00
Shondoit
bc50936745
Call weighted_forward during training
2023-02-15 10:03:59 +01:00
Shondoit
21642000b3
Add PNG alpha channel as weight maps to data entries
2023-02-15 10:03:59 +01:00
Vladimir Mandic
e92b66e2ea
update version
2023-01-29 13:44:33 -05:00
Vladimir Mandic
2314ca9841
Merge pull request #31 from AUTOMATIC1111/master
...
sync branches
2023-01-29 12:19:38 -05:00
Vladimir Mandic
0fa1d29fca
pre-merge cleanup
2023-01-29 12:19:17 -05:00
AUTOMATIC
aa6e55e001
do not display the message for TI unless the list of loaded embeddings changed
2023-01-29 11:53:05 +03:00
Vladimir Mandic
cb72d69970
Merge pull request #29 from AUTOMATIC1111/master
...
sync branches
2023-01-28 10:42:43 -05:00
Max Audron
5eee2ac398
add data-dir flag and set all user data directories based on it
2023-01-27 14:44:30 +01:00
Vladimir Mandic
ec61d14085
remove messages
2023-01-26 11:33:02 -05:00
Vladimir Mandic
3e30c8d877
Merge pull request #26 from AUTOMATIC1111/master
...
sync branches
2023-01-26 10:27:46 -05:00
Alex "mcmonkey" Goodwin
e179b6098a
allow symlinks in the textual inversion embeddings folder
2023-01-25 08:48:40 -08:00
Vladimir Mandic
9e0b2d8dcb
Merge pull request #23 from AUTOMATIC1111/master
...
sync branches
2023-01-22 09:22:32 -05:00
AUTOMATIC
40ff6db532
extra networks UI
...
rework of hypernets: rather than via settings, hypernets are added directly to prompt as <hypernet:name:weight>
2023-01-21 08:36:07 +03:00
Vladimir Mandic
9901913af7
Merge pull request #21 from AUTOMATIC1111/master
...
sync branches
2023-01-19 17:56:11 -05:00
AUTOMATIC1111
0f9cacaa0e
Merge pull request #6844 from guaneec/crop-ui
...
Add auto-sized cropping UI
2023-01-19 13:11:05 +03:00
dan
2985b317d7
Fix of fix
2023-01-19 17:39:30 +08:00
dan
18a09c7e00
Simplification and bugfix
2023-01-19 17:36:23 +08:00
Vladimir Mandic
85162a946a
Merge pull request #20 from AUTOMATIC1111/master
...
sync branch
2023-01-18 19:22:34 -05:00
AUTOMATIC
924e222004
add option to show/hide warnings
...
removed hiding warnings from LDSR
fixed/reworked a few places that produced warnings
2023-01-18 23:04:24 +03:00
Vladimir Mandic
1c17b8156b
major update to preprocess and train
2023-01-18 08:10:15 -05:00
dan
4688bfff55
Add auto-sized cropping UI
2023-01-17 17:16:43 +08:00
Vladimir Mandic
4f16475ecb
Merge pull request #18 from AUTOMATIC1111/master
...
sync branches
2023-01-15 15:27:38 -05:00
Vladimir Mandic
110d1a2d59
add fields to settings file
2023-01-15 12:41:00 -05:00
Vladimir Mandic
34fcb16370
Merge pull request #17 from AUTOMATIC1111/master
...
sync branches
2023-01-15 12:03:07 -05:00
AUTOMATIC
d8b90ac121
big rework of progressbar/preview system to allow multiple users to run prompts at the same time without getting previews of each other
2023-01-15 18:51:04 +03:00
Vladimir Mandic
e4ed905e2a
Merge pull request #13 from AUTOMATIC1111/master
...
sync branches
2023-01-14 11:14:26 -05:00
AUTOMATIC
a95f135308
change hash to sha256
2023-01-14 09:56:59 +03:00
Vladimir Mandic
e8c5503af0
Merge pull request #12 from AUTOMATIC1111/master
...
sync branches
2023-01-13 08:30:07 -05:00
Vladimir Mandic
e1413a0f7f
update
2023-01-13 08:29:53 -05:00
AUTOMATIC
82725f0ac4
fix a bug caused by merge
2023-01-13 15:04:37 +03:00
AUTOMATIC1111
9cd7716753
Merge branch 'master' into tensorboard
2023-01-13 14:57:38 +03:00
AUTOMATIC1111
544e7a233e
Merge pull request #6689 from Poktay/add_gradient_settings_to_logging_file
...
add gradient settings to training settings log files
2023-01-13 14:45:32 +03:00
AUTOMATIC
a176d89487
print bucket sizes for training without resizing images #6620
...
fix an error when generating a picture with embedding in it
2023-01-13 14:32:15 +03:00
AUTOMATIC1111
486bda9b33
Merge pull request #6620 from guaneec/varsize_batch
...
Enable batch_size>1 for mixed-sized training
2023-01-13 14:03:31 +03:00
Josh R
0b262802b8
add gradient settings to training settings log files
2023-01-12 17:31:05 -08:00
Shondoit
d52a80f7f7
Allow creation of zero vectors for TI
2023-01-12 09:22:29 +01:00
Vladimir Mandic
ace60b00ab
Merge pull request #11 from AUTOMATIC1111/master
...
sync branches
2023-01-11 13:50:00 -05:00
Vladimir Mandic
b2c463c7dd
add homepage
2023-01-11 12:27:14 -05:00
Vladimir Mandic
3f43d8a966
set descriptions
2023-01-11 10:28:55 -05:00
Vladimir Mandic
97a706867f
Merge pull request #10 from AUTOMATIC1111/master
...
sync branch
2023-01-11 07:17:23 -05:00
Vladimir Mandic
32a1c19980
update
2023-01-11 07:17:03 -05:00
Lee Bousfield
f9706acf43
Support loading textual inversion embeddings from safetensors files
2023-01-10 18:40:34 -07:00
dan
6be644fa04
Enable batch_size>1 for mixed-sized training
2023-01-11 05:31:58 +08:00
Vladimir Mandic
010d5f0399
Merge pull request #7 from AUTOMATIC1111/master
...
sync fork
2023-01-09 19:10:47 -05:00
Vladimir Mandic
5754401e50
update
2023-01-09 17:38:58 -05:00
AUTOMATIC
1fbb6f9ebe
make a dropdown for prompt template selection
2023-01-09 23:35:40 +03:00
AUTOMATIC
43bb5190fc
remove/simplify some changes from #6481
2023-01-09 22:52:23 +03:00
AUTOMATIC1111
18c001792a
Merge branch 'master' into varsize
2023-01-09 22:45:39 +03:00
Vladimir Mandic
331550a923
Merge pull request #6 from AUTOMATIC1111/master
...
update 01-08
2023-01-08 17:12:20 -05:00
Vladimir Mandic
91d0fc6c95
update paths
2023-01-08 17:04:51 -05:00
AUTOMATIC
085427de0e
make it possible for extensions/scripts to add their own embedding directories
2023-01-08 09:37:33 +03:00
AUTOMATIC
a0c87f1fdf
skip images in embeddings dir if they have a second .preview extension
2023-01-08 08:52:26 +03:00
dan
72497895b9
Move batchsize check
2023-01-08 02:57:36 +08:00
dan
669fb18d52
Add checkbox for variable training dims
2023-01-08 02:31:40 +08:00
dan
448b9cedab
Allow variable img size
2023-01-08 02:14:36 +08:00
AUTOMATIC
79e39fae61
CLIP hijack rework
2023-01-07 01:46:13 +03:00
AUTOMATIC
683287d87f
rework saving training params to file #6372
2023-01-06 08:52:06 +03:00
AUTOMATIC1111
88e01b237e
Merge pull request #6372 from timntorres/save-ti-hypernet-settings-to-txt-revised
...
Save hypernet and textual inversion settings to text file, revised.
2023-01-06 07:59:44 +03:00
Faber
81133d4168
allow loading embeddings from subdirectories
2023-01-06 03:38:37 +07:00
Kuma
fda04e620d
typo in TI
2023-01-05 18:44:19 +01:00
timntorres
b6bab2f052
Include model in log file. Exclude directory.
2023-01-05 09:14:56 -08:00
timntorres
b85c2b5cf4
Clean up ti, add same behavior to hypernetwork.
2023-01-05 08:14:38 -08:00
timntorres
eea8fc40e1
Add option to save ti settings to file.
2023-01-05 07:24:22 -08:00
AUTOMATIC1111
eeb1de4388
Merge branch 'master' into gradient-clipping
2023-01-04 19:56:35 +03:00
AUTOMATIC
525cea9245
use shared function from processing for creating dummy mask when training inpainting model
2023-01-04 17:58:07 +03:00
AUTOMATIC
184e670126
fix the merge
2023-01-04 17:45:01 +03:00
AUTOMATIC1111
da5c1e8a73
Merge branch 'master' into inpaint_textual_inversion
2023-01-04 17:40:19 +03:00
AUTOMATIC1111
7bbd984dda
Merge pull request #6253 from Shondoit/ti-optim
...
Save Optimizer next to TI embedding
2023-01-04 14:09:13 +03:00
Vladimir Mandic
192ddc04d6
add job info to modules
2023-01-03 10:34:51 -05:00
Shondoit
bddebe09ed
Save Optimizer next to TI embedding
...
Also add check to load only .PT and .BIN files as embeddings. (since we add .optim files in the same directory)
2023-01-03 13:30:24 +01:00
Philpax
c65909ad16
feat(api): return more data for embeddings
2023-01-02 12:21:48 +11:00
AUTOMATIC
311354c0bb
fix the issue with training on SD2.0
2023-01-02 00:38:09 +03:00
AUTOMATIC
bdbe09827b
changed accepted-embedding shape detection to use existing code and support the new alt-diffusion model, and reformatted messages a bit #6149
2022-12-31 22:49:09 +03:00
Vladimir Mandic
f55ac33d44
validate textual inversion embeddings
2022-12-31 11:27:02 -05:00
Yuval Aboulafia
3bf5591efe
fix F541 f-string without any placeholders
2022-12-24 21:35:29 +02:00
Jim Hays
c0355caefe
Fix various typos
2022-12-14 21:01:32 -05:00
AUTOMATIC1111
c9a2cfdf2a
Merge branch 'master' into racecond_fix
2022-12-03 10:19:51 +03:00
AUTOMATIC1111
a2feaa95fc
Merge pull request #5194 from brkirch/autocast-and-mps-randn-fixes
...
Use devices.autocast() and fix MPS randn issues
2022-12-03 09:58:08 +03:00
PhytoEpidemic
119a945ef7
Fix divide by 0 error
...
Fix the edge case where a 0 weight occasionally pops up in some specific situations, which was crashing the script.
2022-12-02 12:16:29 -06:00
brkirch
4d5f1691dd
Use devices.autocast instead of torch.autocast
2022-11-30 10:33:42 -05:00
AUTOMATIC1111
39827a3998
Merge pull request #4688 from parasi22/resolve-embedding-name-in-filewords
...
resolve [name] after resolving [filewords] in training
2022-11-27 22:46:49 +03:00
AUTOMATIC
b48b7999c8
Merge remote-tracking branch 'flamelaw/master'
2022-11-27 12:19:59 +03:00
flamelaw
755df94b2a
set TI AdamW default weight decay to 0
2022-11-27 00:35:44 +09:00
AUTOMATIC
ce6911158b
Add support for Stable Diffusion 2.0
2022-11-26 16:10:46 +03:00
flamelaw
89d8ecff09
small fixes
2022-11-23 02:49:01 +09:00
flamelaw
5b57f61ba4
fix pin_memory with different latent sampling method
2022-11-21 10:15:46 +09:00
AUTOMATIC
c81d440d87
moved deepdanbooru to pure pytorch implementation
2022-11-20 16:39:20 +03:00
flamelaw
2d22d72cda
fix random sampling with pin_memory
2022-11-20 16:14:27 +09:00
flamelaw
a4a5735d0a
remove unnecessary comment
2022-11-20 12:38:18 +09:00
flamelaw
bd68e35de3
Gradient accumulation, autocast fix, new latent sampling method, etc
2022-11-20 12:35:26 +09:00
AUTOMATIC1111
89daf778fb
Merge pull request #4812 from space-nuko/feature/interrupt-preprocessing
...
Add interrupt button to preprocessing
2022-11-19 13:26:33 +03:00
AUTOMATIC
cdc8020d13
change StableDiffusionProcessing to internally use sampler name instead of sampler index
2022-11-19 12:01:51 +03:00
space-nuko
c8c40c8a64
Add interrupt button to preprocessing
2022-11-17 18:05:29 -08:00
parasi
9a1aff645a
resolve [name] after resolving [filewords] in training
2022-11-13 13:49:28 -06:00
AUTOMATIC1111
73776907ec
Merge pull request #4117 from TinkTheBoush/master
...
Adding optional tag shuffling for training
2022-11-11 15:46:20 +03:00
KyuSeok Jung
a1e271207d
Update dataset.py
2022-11-11 10:56:53 +09:00
KyuSeok Jung
b19af67d29
Update dataset.py
2022-11-11 10:54:19 +09:00
KyuSeok Jung
13a2f1dca3
adding tag drop out option
2022-11-11 10:29:55 +09:00
Muhammad Rizqi Nur
d85c2cb2d5
Merge branch 'master' into gradient-clipping
2022-11-09 16:29:37 +07:00
AUTOMATIC
8011be33c3
move functions out of main body for image preprocessing for easier hijacking
2022-11-08 08:37:05 +03:00
Muhammad Rizqi Nur
bb832d7725
Simplify grad clip
2022-11-05 11:48:38 +07:00
TinkTheBoush
821e2b883d
change option position to Training setting
2022-11-04 19:39:03 +09:00
Fampai
39541d7725
Fixes race condition in training when VAE is unloaded
...
set_current_image can attempt to use the VAE when it is unloaded to
the CPU while training
2022-11-04 04:50:22 -04:00
Muhammad Rizqi Nur
237e79c77d
Merge branch 'master' into gradient-clipping
2022-11-02 20:48:58 +07:00
KyuSeok Jung
af6fba2475
Merge branch 'master' into master
2022-11-02 17:10:56 +09:00
Nerogar
cffc240a73
fixed textual inversion training with inpainting models
2022-11-01 21:02:07 +01:00
TinkTheBoush
467cae167a
append_tag_shuffle
2022-11-01 23:29:12 +09:00
Fampai
890e68aaf7
Fixed minor bug
...
when unloading vae during TI training, generating images after
training will error out
2022-10-31 10:07:12 -04:00
Fampai
3b0127e698
Merge branch 'master' of https://github.com/AUTOMATIC1111/stable-diffusion-webui into TI_optimizations
2022-10-31 09:54:51 -04:00
Fampai
006756f9cd
Added TI training optimizations
...
option to use xattention optimizations when training
option to unload vae when training
2022-10-31 07:26:08 -04:00
Muhammad Rizqi Nur
cd4d59c0de
Merge master
2022-10-30 18:57:51 +07:00
AUTOMATIC1111
17a2076f72
Merge pull request #3928 from R-N/validate-before-load
...
Optimize training a little
2022-10-30 09:51:36 +03:00
Muhammad Rizqi Nur
3d58510f21
Fix dataset still being loaded even when training will be skipped
2022-10-30 00:54:59 +07:00
Muhammad Rizqi Nur
a07f054c86
Add missing info on hypernetwork/embedding model log
...
Mentioned here: https://github.com/AUTOMATIC1111/stable-diffusion-webui/discussions/1528#discussioncomment-3991513
Also group the saving into one
2022-10-30 00:49:29 +07:00
Muhammad Rizqi Nur
ab05a74ead
Revert "Add cleanup after training"
...
This reverts commit 3ce2bfdf95.
2022-10-30 00:32:02 +07:00
Muhammad Rizqi Nur
a27d19de2e
Additional assert on dataset
2022-10-29 19:44:05 +07:00
Muhammad Rizqi Nur
3ce2bfdf95
Add cleanup after training
2022-10-29 19:43:21 +07:00
Muhammad Rizqi Nur
ab27c111d0
Add input validations before loading dataset for training
2022-10-29 18:09:17 +07:00
Muhammad Rizqi Nur
ef4c94e1cf
Improve lr schedule error message
2022-10-29 15:42:51 +07:00
Muhammad Rizqi Nur
a5f3adbdd7
Allow trailing comma in learning rate
2022-10-29 15:37:24 +07:00
Muhammad Rizqi Nur
05e2e40537
Merge branch 'master' into gradient-clipping
2022-10-29 15:04:21 +07:00
AUTOMATIC1111
810e6a407d
Merge pull request #3858 from R-N/log-csv
...
Fix log off by 1 #3847
2022-10-29 07:55:20 +03:00
Muhammad Rizqi Nur
9ceef81f77
Fix log off by 1
2022-10-28 20:48:08 +07:00
Muhammad Rizqi Nur
16451ca573
Learning rate sched syntax support for grad clipping
2022-10-28 17:16:23 +07:00
Muhammad Rizqi Nur
1618df41ba
Gradient clipping for textual embedding
2022-10-28 10:31:27 +07:00
FlameLaw
a0a7024c67
Fix random dataset shuffle on TI
2022-10-28 02:13:48 +09:00
DepFA
737eb28fac
typo: cmd_opts.embedding_dir to cmd_opts.embeddings_dir
2022-10-26 17:38:08 +03:00
timntorres
f4e1464217
Implement PR #3625 but for embeddings.
2022-10-26 10:14:35 +03:00
timntorres
4875a6c217
Implement PR #3309 but for embeddings.
2022-10-26 10:14:35 +03:00
timntorres
c2dc9bfa89
Implement PR #3189 but for embeddings.
2022-10-26 10:14:35 +03:00
AUTOMATIC
cbb857b675
enable creating embedding with --medvram
2022-10-26 09:44:02 +03:00
captin411
df0c5ea29d
update default weights
2022-10-25 17:06:59 -07:00
captin411
54f0c14824
download better face detection module dynamically
2022-10-25 16:14:13 -07:00
captin411
db8ed5fe5c
Focal crop UI elements
2022-10-25 15:22:29 -07:00
captin411
6629446a2f
Merge branch 'master' into focal-point-cropping
2022-10-25 13:22:27 -07:00
captin411
3e6c2420c1
improve debug markers, fix algo weighting
2022-10-25 13:13:12 -07:00
Melan
18f86e41f6
Removed two unused imports
2022-10-24 17:21:18 +02:00
captin411
1be5933ba2
auto cropping now works with non-square crops
2022-10-23 04:11:07 -07:00
AUTOMATIC
f49c08ea56
prevent error spam when processing images without txt files for captions
2022-10-21 18:46:02 +03:00
AUTOMATIC1111
5e9afa5c8a
Merge branch 'master' into fix/train-preprocess-keep-ratio
2022-10-21 18:36:29 +03:00
DepFA
306e2ff6ab
Update image_embedding.py
2022-10-21 16:47:37 +03:00
DepFA
d0ea471b0c
Use opts in textual_inversion image_embedding.py for dynamic fonts
2022-10-21 16:47:37 +03:00
AUTOMATIC
7d6b388d71
Merge branch 'ae'
2022-10-21 13:35:01 +03:00
AUTOMATIC1111
0c5522ea21
Merge branch 'master' into training-help-text
2022-10-21 09:57:55 +03:00
guaneec
b69c37d25e
Allow datasets with only 1 image in TI
2022-10-21 09:54:09 +03:00
Melan
8f59129847
Some changes to the tensorboard code and hypernetwork support
2022-10-20 22:37:16 +02:00
Melan
a6d593a6b5
Fixed a typo in a variable
2022-10-20 19:43:21 +02:00
Milly
85dd62c4c7
train: ui: added `Split image threshold` and `Split image overlap ratio` to preprocess
2022-10-20 23:35:01 +09:00
Milly
9681419e42
train: fixed preprocess image ratio
2022-10-20 23:32:41 +09:00
Melan
29e74d6e71
Add support for Tensorboard for training embeddings
2022-10-20 16:26:16 +02:00
captin411
0ddaf8d202
improve face detection a lot
2022-10-20 00:34:55 -07:00
DepFA
858462f719
do caption copy for both flips
2022-10-20 02:57:18 +01:00
captin411
59ed744383
face detection algo, configurability, reusability
...
Try to move the crop in the direction of a face if it is present
More internal configuration options for choosing weights of each of the algorithm's findings
Move logic into its module
2022-10-19 17:19:02 -07:00
DepFA
9b65c4ecf4
pass preprocess_txt_action param
2022-10-20 00:49:23 +01:00
DepFA
fbcce66601
add existing caption file handling
2022-10-20 00:46:54 +01:00
DepFA
c3835ec85c
pass overwrite old flag
2022-10-20 00:24:24 +01:00
DepFA
0087079c2d
allow overwrite old embedding
2022-10-20 00:10:59 +01:00
captin411
41e3877be2
fix entropy point calculation
2022-10-19 13:44:59 -07:00
captin411
abeec4b630
Add auto focal point cropping to Preprocess images
...
This algorithm plots a bunch of points of interest on the source
image and averages their locations to find a center.
Most points come from OpenCV. One point comes from an
entropy model. OpenCV points account for 50% of the weight and the
entropy based point is the other 50%.
The center of all weighted points is calculated and a bounding box
is drawn as close to centered over that point as possible.
2022-10-19 03:18:26 -07:00
MalumaDev
1997ccff13
Merge branch 'master' into test_resolve_conflicts
2022-10-18 08:55:08 +02:00
DepFA
62edfae257
print list of embeddings on reload
2022-10-17 08:42:17 +03:00
MalumaDev
ae0fdad64a
Merge branch 'master' into test_resolve_conflicts
2022-10-16 17:55:58 +02:00
MalumaDev
9324cdaa31
ui fix, re organization of the code
2022-10-16 17:53:56 +02:00
AUTOMATIC
0c5fa9a681
do not reload embeddings from disk when doing textual inversion
2022-10-16 09:09:04 +03:00
MalumaDev
97ceaa23d0
Merge branch 'master' into test_resolve_conflicts
2022-10-16 00:06:36 +02:00
DepFA
b6e3b96dab
Change vector size footer label
2022-10-15 17:23:39 +03:00
DepFA
ddf6899df0
generalise to popular lossless formats
2022-10-15 17:23:39 +03:00
DepFA
9a1dcd78ed
add webp for embed load
2022-10-15 17:23:39 +03:00
DepFA
939f16529a
only save 1 image per embedding
2022-10-15 17:23:39 +03:00
DepFA
9e846083b7
add vector size to embed text
2022-10-15 17:23:39 +03:00
MalumaDev
7b7561f6e4
Merge branch 'master' into test_resolve_conflicts
2022-10-15 16:20:17 +02:00
AUTOMATIC1111
ea8aa1701a
Merge branch 'master' into master
2022-10-15 10:13:16 +03:00
AUTOMATIC
c7a86f7fe9
add option to use batch size for training
2022-10-15 09:24:59 +03:00
Melan
4d19f3b7d4
Raise an assertion error if no training images have been found.
2022-10-14 22:45:26 +02:00
AUTOMATIC
03d62538ae
remove duplicate code for log loss, add step, make it read from options rather than gradio input
2022-10-14 22:43:55 +03:00
AUTOMATIC
326fe7d44b
Merge remote-tracking branch 'Melanpan/master'
2022-10-14 22:14:50 +03:00
AUTOMATIC
c344ba3b32
add option to read generation params for learning previews from txt2img
2022-10-14 20:31:49 +03:00
MalumaDev
bb57f30c2d
init
2022-10-14 10:56:41 +02:00
Melan
8636b50aea
Add learn_rate to csv and removed a left-over debug statement
2022-10-13 12:37:58 +02:00
Melan
1cfc2a1898
Save a csv containing the loss while training
2022-10-12 23:36:29 +02:00
Greg Fuller
f776254b12
[2/?] [wip] ignore OPT_INCLUDE_RANKS for training filenames
2022-10-12 13:12:18 -07:00
AUTOMATIC
698d303b04
deepbooru: added option to use spaces or underscores
...
deepbooru: added option to quote (\) in tags
deepbooru/BLIP: write caption to file instead of image filename
deepbooru/BLIP: now possible to use both for captions
deepbooru: process is stopped even if an exception occurs
2022-10-12 21:55:43 +03:00
AUTOMATIC
c3c8eef9fd
train: change filename processing to be more simple and configurable
...
train: make it possible to make text files with prompts
train: rework scheduler so that there's less repeating code in textual inversion and hypernets
train: move epochs setting to options
2022-10-12 20:49:47 +03:00
AUTOMATIC1111
cc5803603b
Merge pull request #2037 from AUTOMATIC1111/embed-embeddings-in-images
...
Add option to store TI embeddings in png chunks, and load from same.
2022-10-12 15:59:24 +03:00
DepFA
10a2de644f
formatting
2022-10-12 13:15:35 +01:00
DepFA
50be33e953
formatting
2022-10-12 13:13:25 +01:00
JC_Array
f53f703aeb
resolved conflicts, moved settings under interrogate section, settings only show if deepbooru flag is enabled
2022-10-11 18:12:12 -05:00
JC-Array
963d986396
Merge branch 'AUTOMATIC1111:master' into deepdanbooru_pre_process
2022-10-11 17:33:15 -05:00
AUTOMATIC
6be32b31d1
reports that training with medvram is possible.
2022-10-11 23:07:09 +03:00
DepFA
66ec505975
add file based test
2022-10-11 20:21:30 +01:00
DepFA
7e6a6e00ad
Add files via upload
2022-10-11 20:20:46 +01:00
DepFA
5f3317376b
spacing
2022-10-11 20:09:49 +01:00
DepFA
91d7ee0d09
update imports
2022-10-11 20:09:10 +01:00
DepFA
aa75d5cfe8
correct conflict resolution typo
2022-10-11 20:06:13 +01:00
AUTOMATIC
d6fcc6b87b
apply lr schedule to hypernets
2022-10-11 22:03:05 +03:00