InvokeAI/invokeai/backend
Latest commit: dab03fb646 "rename gpu_mem_reserved to max_vram_cache_size" by Lincoln Stein, 2023-07-11 15:25:39 -04:00

To be consistent with max_cache_size, the amount of VRAM reserved for model caching is now controlled by the max_vram_cache_size configuration parameter.
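Both cache-size settings are ordinary application-config values. The sketch below shows how they could be read programmatically; it assumes InvokeAI 3.x's InvokeAIAppConfig.get_config() accessor from invokeai.app.services.config, and the printed defaults are illustrative, not authoritative.

```python
# Minimal sketch of reading the renamed setting, assuming the InvokeAI 3.x
# config API (InvokeAIAppConfig.get_config()); the import path and default
# values may differ between releases.
from invokeai.app.services.config import InvokeAIAppConfig

config = InvokeAIAppConfig.get_config()

# RAM held for the model cache, in GB (pre-existing parameter).
print("max_cache_size:", config.max_cache_size)

# VRAM held for the model cache, in GB (formerly gpu_mem_reserved).
print("max_vram_cache_size:", config.max_vram_cache_size)
```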
Name                Last commit message                                                  Date
generator           Mac MPS FP16 fixes (#3641)                                           2023-07-07 17:43:23 -04:00
image_util          fix potential race condition in config system                        2023-05-25 20:41:26 -04:00
install             fix checkpoint VAE handling in migrate script                        2023-07-07 09:34:42 -04:00
model_management    rename gpu_mem_reserved to max_vram_cache_size                       2023-07-11 15:25:39 -04:00
restoration         add migration script and update convert and face restoration paths  2023-06-13 01:27:51 -04:00
stable_diffusion    Mac MPS FP16 fixes (#3641)                                           2023-07-07 17:43:23 -04:00
training            documentation tweaks; fixed initialization in a couple more places   2023-05-25 21:10:00 -04:00
util                Mac MPS FP16 fixes (#3641)                                           2023-07-07 17:43:23 -04:00
web                 Add dpmpp_sde and dpmpp_2m_sde schedulers(with karras)               2023-06-18 23:38:15 +03:00
__init__.py         Remove more old logic                                                2023-06-19 15:57:28 +10:00
safety_checker.py   add migration script and update convert and face restoration paths  2023-06-13 01:27:51 -04:00