Martin Kristiansen
537ae2f901
Resolving merge conflicts for flake8
2023-08-18 15:52:04 +10:00
Lincoln Stein
bb1b8ceaa8
Update invokeai/backend/model_management/model_cache.py
...
Co-authored-by: StAlKeR7779 <stalkek7779@yandex.ru>
2023-08-16 08:48:44 -04:00
Lincoln Stein
f9958de6be
added memory used to load models
2023-08-15 21:56:19 -04:00
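The entry above adds tracking of how much memory a model consumes at load time. A minimal sketch of the idea, measuring the change in process resident memory around a load; the `load_with_memory_usage` helper and the use of `psutil` are assumptions for illustration, not the project's actual implementation:

```python
import psutil


def load_with_memory_usage(load_model):
    """Call load_model() and return (model, bytes of RAM the load consumed).

    Hypothetical helper: measures the growth in this process's resident
    set size around the load, approximating the model's RAM footprint.
    """
    process = psutil.Process()
    before = process.memory_info().rss  # resident memory before the load
    model = load_model()
    after = process.memory_info().rss   # resident memory after the load
    return model, after - before
```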
Lincoln Stein
ec10aca91e
report RAM and RAM cache statistics
2023-08-15 21:00:30 -04:00
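A sketch of what such a RAM-cache statistics report might look like; the counter names and report format here are illustrative, not the actual fields:

```python
from dataclasses import dataclass

GB = 2**30


@dataclass
class CacheStats:
    """Hypothetical counters for a RAM model-cache report."""
    hits: int = 0
    misses: int = 0
    cache_size: int = 0  # configured cap, in bytes
    cache_used: int = 0  # bytes currently held by cached models

    def report(self) -> str:
        total = self.hits + self.misses
        hit_rate = self.hits / total if total else 0.0
        return (
            f"RAM cache: {self.cache_used / GB:.2f}/{self.cache_size / GB:.2f} GB used, "
            f"hit rate {hit_rate:.0%} ({self.hits} hits, {self.misses} misses)"
        )
```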
Lincoln Stein
6ad565d84c
folded in changes from #4099
2023-08-04 18:24:47 -04:00
Sergey Borisov
1ac14a1e43
add sdxl lora support
2023-08-04 11:44:56 -04:00
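For context on the LoRA technique being wired up here: a LoRA patches a base weight with a low-rank delta. A generic illustration of that operation, not InvokeAI's SDXL-specific code:

```python
import torch


def apply_lora_layer(weight: torch.Tensor, down: torch.Tensor,
                     up: torch.Tensor, scale: float) -> torch.Tensor:
    """Patch a base weight with a low-rank LoRA delta: W + scale * (up @ down).

    down has shape (rank, in_features), up has shape (out_features, rank),
    so up @ down matches the base weight's (out_features, in_features).
    """
    return weight + scale * (up @ down)
```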
Brandon Rising
f5ac73b091
Merge branch 'main' into feat/onnx
2023-07-31 10:58:40 -04:00
Lincoln Stein
e20c4dc1e8
blackified
2023-07-30 08:17:10 -04:00
Lincoln Stein
844578ab88
fix lora loading crash
2023-07-30 07:57:10 -04:00
Lincoln Stein
99daa97978
more refactoring; fixed a place where relative-path conversion was missed
2023-07-29 13:00:07 -04:00
Brandon Rising
da751da3dd
Merge branch 'main' into feat/onnx
2023-07-28 09:59:35 -04:00
Brandon Rising
2b7b3dd4ba
Run python black
2023-07-28 09:46:44 -04:00
Brandon Rising
bfdc8c80f3
Testing caching onnx sessions
2023-07-27 14:13:29 -04:00
Martin Kristiansen
218b6d0546
Apply black
2023-07-27 10:54:01 -04:00
Sergey Borisov
bda0000acd
Clean up VRAM after model offloading; tweak to clean up local variable references on RAM offload
2023-07-18 23:21:18 +03:00
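The cleanup described above typically amounts to dropping Python references and then asking PyTorch to release cached allocations. A hedged sketch of the pattern, not the commit's exact code:

```python
import gc

import torch


def offload_and_reclaim_vram(model):
    """Move a model to CPU and reclaim the VRAM it occupied (sketch)."""
    model.to("cpu")
    gc.collect()                  # drop lingering local references first
    if torch.cuda.is_available():
        torch.cuda.empty_cache()  # release cached CUDA allocations
    return model
```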
Sergey Borisov
bc11296a5e
Disable lazy offloading when the VRAM cache is disabled; move result tensors to CPU (so VRAM tensors do not accumulate in the cache); fix: text encoder not freed (detach)
2023-07-18 16:20:25 +03:00
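Detaching result tensors and moving them off the GPU keeps intermediate outputs from pinning VRAM or the autograd graph. A minimal sketch of that pattern; `encode_prompt_to_cpu` is a hypothetical stand-in, not the actual function:

```python
import torch


def encode_prompt_to_cpu(text_encoder, token_ids: torch.Tensor) -> torch.Tensor:
    """Run the text encoder, then detach the result and move it to CPU
    so the output tensor does not keep VRAM (or the graph) alive."""
    with torch.no_grad():
        embeddings = text_encoder(token_ids)[0]
    return embeddings.detach().cpu()
```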
Lincoln Stein
dab03fb646
rename gpu_mem_reserved to max_vram_cache_size
...
To be consistent with max_cache_size, the amount of memory to hold in
VRAM for model caching is now controlled by the max_vram_cache_size
configuration parameter.
2023-07-11 15:25:39 -04:00
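Under this scheme, models stay resident in VRAM until the cache exceeds the configured cap. A sketch of the eviction loop, assuming a hypothetical LRU list of (model, size) entries; the real model_cache.py logic is more involved:

```python
GB = 2**30


def enforce_vram_cap(lru_entries, max_vram_cache_size_gb: float):
    """Evict least-recently-used models until VRAM usage fits the cap.

    lru_entries: list of (model, size_bytes), oldest first (hypothetical).
    """
    cap = int(max_vram_cache_size_gb * GB)
    used = sum(size for _, size in lru_entries)
    while lru_entries and used > cap:
        model, size = lru_entries.pop(0)  # oldest entry goes first
        model.to("cpu")                   # offload to the RAM cache
        used -= size
```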
Lincoln Stein
d32f9f7cb0
reverse logic of gpu_mem_reserved
...
- gpu_mem_reserved now indicates the amount of VRAM that will be reserved
for model caching (similar to max_cache_size).
2023-07-11 15:16:40 -04:00
Lincoln Stein
5759a390f9
introduce gpu_mem_reserved configuration parameter
2023-07-09 18:35:04 -04:00
Lincoln Stein
8d7dba937d
fix undefined variable
2023-07-09 14:37:45 -04:00
Lincoln Stein
d6cb0e54b3
don't unload models from GPU until the space is needed
2023-07-09 14:26:30 -04:00
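This is lazy offloading: rather than evicting a model as soon as another is loaded, leave everything in VRAM and offload only when an incoming model would not fit. A hedged sketch of the decision; the helper name and LRU structure are assumptions:

```python
import torch


def make_room_in_vram(lru_entries, bytes_needed: int, device="cuda"):
    """Offload least-recently-used models only when the incoming model
    would not fit in free VRAM (lazy offloading, sketched)."""
    free, _total = torch.cuda.mem_get_info(device)
    while lru_entries and free < bytes_needed:
        model, _size = lru_entries.pop(0)
        model.to("cpu")
        torch.cuda.empty_cache()
        free, _total = torch.cuda.mem_get_info(device)
```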
Lincoln Stein
0a6dccd607
expose max_cache_size to invokeai-configure interface
2023-07-05 20:59:14 -04:00
Lincoln Stein
6935858ef3
add debugging messages to aid in memory leak tracking
2023-07-02 13:34:53 -04:00
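Debug messages for this kind of leak hunt typically log allocator state at interesting points. A small sketch; the message format is illustrative:

```python
import logging

import torch

logger = logging.getLogger(__name__)


def log_vram(label: str) -> None:
    """Log current and peak CUDA allocations to help spot leaks (sketch)."""
    if not torch.cuda.is_available():
        return
    allocated = torch.cuda.memory_allocated() / 2**30
    peak = torch.cuda.max_memory_allocated() / 2**30
    logger.debug("%s: %.2f GB allocated (peak %.2f GB)", label, allocated, peak)
```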
Lincoln Stein
0f02915012
remove hardcoded cuda device in model manager init
2023-07-01 21:15:42 -04:00
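Instead of hardcoding `cuda`, the device can be chosen at init time from what is actually available. A common pattern for this (an assumption about the fix, not the exact diff):

```python
import torch


def choose_execution_device() -> torch.device:
    """Pick the best available device rather than assuming CUDA (sketch)."""
    if torch.cuda.is_available():
        return torch.device("cuda")
    if torch.backends.mps.is_available():  # Apple Silicon
        return torch.device("mps")
    return torch.device("cpu")
```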
Sergey Borisov
740c05a0bb
Save models on rescan, uncache model on edit/delete, fixes
2023-06-14 03:12:12 +03:00
Sergey Borisov
e7db6d8120
Fix ckpt and vae conversion, migrate script, remove sd2-base
2023-06-13 18:05:12 +03:00
Sergey Borisov
36eb1bd893
Fixes
2023-06-12 16:14:09 +03:00
Lincoln Stein
893f776f1d
model_probe working; model_install incomplete
2023-06-11 19:51:53 -04:00
Lincoln Stein
8e1a56875e
remove defunct code
2023-06-11 12:57:06 -04:00
Lincoln Stein
000626ab2e
move all installation code out of model_manager
2023-06-11 12:51:50 -04:00
Sergey Borisov
738ba40f51
Fixes
2023-06-11 06:12:21 +03:00
Sergey Borisov
3ce3a7ee72
Rewrite model configs, separate models
2023-06-11 04:49:09 +03:00
Lincoln Stein
74b43c9bdf
fix incorrect variable/typenames in model_cache
2023-06-10 10:41:48 -04:00
Sergey Borisov
2c056ead42
New models structure draft
2023-06-10 03:14:10 +03:00
Lincoln Stein
887576d217
add directory scanning for loras, controlnets and textual_inversions
2023-06-08 23:11:53 -04:00
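A sketch of the directory scan, assuming one subfolder per model type and common weight-file extensions (both assumptions, not InvokeAI's actual layout):

```python
from pathlib import Path

WEIGHT_EXTENSIONS = {".safetensors", ".ckpt", ".pt", ".bin"}


def scan_model_dirs(root: Path) -> dict[str, list[Path]]:
    """Find model weight files under per-type folders (illustrative)."""
    found: dict[str, list[Path]] = {}
    for kind in ("loras", "controlnets", "textual_inversions"):
        folder = root / kind
        if not folder.is_dir():
            continue
        found[kind] = [
            p for p in folder.rglob("*") if p.suffix.lower() in WEIGHT_EXTENSIONS
        ]
    return found
```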
Lincoln Stein
04f9757f8d
prevent crash when trying to calculate size of missing safety_checker
...
- Also fixed up order in which logger is created in invokeai-web
so that handlers are installed after command-line options are
parsed (and not before!)
2023-06-06 22:57:49 -04:00
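The crash guard is presumably a None check before summing submodel sizes, since `safety_checker` is an optional pipeline component. A hedged sketch:

```python
def pipeline_size_bytes(pipeline) -> int:
    """Sum parameter sizes across a pipeline's submodels, skipping
    optional components (e.g. safety_checker) that are None (sketch)."""
    total = 0
    for name in ("unet", "vae", "text_encoder", "safety_checker"):
        submodel = getattr(pipeline, name, None)
        if submodel is None:
            continue  # missing optional component: skip it, don't crash
        total += sum(p.numel() * p.element_size() for p in submodel.parameters())
    return total
```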
Sergey Borisov
b47786e846
First working TI draft
2023-05-31 02:12:27 +03:00
Sergey Borisov
69ccd3a0b5
Fixes for checkpoint models
2023-05-30 19:12:47 +03:00
Sergey Borisov
79de9047b5
First working lora implementation
2023-05-30 01:11:00 +03:00
Sergey Borisov
8e419a4f97
Revert weak references, as the same can be done without them
2023-05-23 04:29:40 +03:00
Sergey Borisov
2533209326
Rewrite cache to weak references
2023-05-23 03:48:22 +03:00
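With weak references, the cache stops keeping models alive on its own: an entry disappears once no caller holds the model anymore. A minimal sketch using the standard library (per the entry just above, this rewrite was later reverted):

```python
import weakref


class WeakModelCache:
    """Sketch of a weakly-referencing model cache: entries vanish
    automatically once no caller holds the model anymore."""

    def __init__(self):
        self._models = weakref.WeakValueDictionary()

    def get(self, key):
        return self._models.get(key)  # None if collected or never cached

    def put(self, key, model):
        self._models[key] = model
```

One catch with this design: values must be weak-referenceable, and a model can be collected between calls, so the cache alone cannot guarantee residency.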
Lincoln Stein
259d6ec90d
fixup cachedir call
2023-05-18 14:52:16 -04:00
Lincoln Stein
d96175d127
resolve some undefined symbols in model_cache
2023-05-18 14:31:47 -04:00
Lincoln Stein
b1a99d772c
added method to convert vaes
2023-05-18 13:31:11 -04:00
Sergey Borisov
fd82763412
Model manager draft
2023-05-18 03:56:52 +03:00
Lincoln Stein
c8f765cc06
improve debugging messages
2023-05-14 18:29:55 -04:00
Lincoln Stein
b9e9087dbe
do not manage GPU for pipelines if sequential_offloading is True
2023-05-14 18:09:38 -04:00
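With sequential CPU offload, accelerate hooks already shuttle pipeline weights between CPU and GPU layer by layer, so manually moving the pipeline would fight those hooks. A sketch of the guard; the parameter names are assumptions:

```python
def move_pipeline_to_gpu(pipeline, device, sequential_offloading: bool):
    """Skip manual GPU placement when sequential offload manages devices
    (sketch; in diffusers this mode is enabled via
    pipeline.enable_sequential_cpu_offload())."""
    if sequential_offloading:
        return pipeline  # offload hooks own device placement; hands off
    return pipeline.to(device)
```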
Lincoln Stein
63e465eb5c
tweaks to get_model() behavior
...
1. If an external VAE is specified in config file, then
get_model(submodel=vae) will return the external VAE, not the one
burnt into the parent diffusers pipeline.
2. The mechanism in (1) is generalized such that you can now have
"unet:", "text_encoder:" and similar stanzas in the config file.
Valid formats of these subsections:

    unet:
      repo_id: foo/bar

    unet:
      path: /path/to/local/folder

    unet:
      repo_id: foo/bar
      subfolder: unet
In the near future, these will also be used to attach external
parts to the pipeline, generalizing VAE behavior.
3. Accommodate callers (i.e. the WebUI) that are passing the
model key ("diffusers/stable-diffusion-1.5") to get_model()
instead of the tuple of model_name and model_type.
4. Fixed bug in VAE model attaching code.
5. Rebuilt web front end.
2023-05-14 16:50:59 -04:00
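A sketch of how such stanzas might be resolved when get_model(submodel=...) is called: check the parent model's config for an override, else fall back to the component baked into the pipeline. Everything here (names, structure, the stub loaders) is illustrative:

```python
def load_from_repo(repo_id: str, subfolder=None):
    """Stand-in for a HuggingFace download + load (hypothetical)."""
    raise NotImplementedError


def load_from_folder(path: str):
    """Stand-in for loading a component from a local folder (hypothetical)."""
    raise NotImplementedError


def resolve_submodel(model_config: dict, pipeline, submodel: str):
    """Return an externally configured submodel if the config carries a
    stanza for it ('vae:', 'unet:', 'text_encoder:', ...); otherwise
    return the component burnt into the parent pipeline (sketch)."""
    stanza = model_config.get(submodel)
    if stanza is None:
        return getattr(pipeline, submodel)  # use the built-in component
    if "path" in stanza:
        return load_from_folder(stanza["path"])
    return load_from_repo(stanza["repo_id"], subfolder=stanza.get("subfolder"))
```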
Lincoln Stein
b31a6ff605
fix reversed args in _model_key() call
2023-05-13 21:11:06 -04:00
Sergey Borisov
1f602e6143
Fix: apply precision to text_encoder
2023-05-14 03:46:13 +03:00
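The fix presumably casts the text encoder to the configured precision along with the rest of the pipeline. A one-liner sketch of the pattern; the precision strings are assumptions:

```python
import torch


def apply_precision(text_encoder, precision: str):
    """Cast the text encoder to the configured dtype (sketch)."""
    dtype = torch.float16 if precision == "float16" else torch.float32
    return text_encoder.to(dtype=dtype)
```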