InvokeAI/invokeai/backend/model_management
Latest commit dab03fb646 by Lincoln Stein: rename gpu_mem_reserved to max_vram_cache_size (2023-07-11 15:25:39 -04:00)

To be consistent with max_cache_size, the amount of memory to hold in
VRAM for model caching is now controlled by the max_vram_cache_size
configuration parameter.
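As a rough illustration of how the renamed parameter sits alongside max_cache_size, the invokeai.yaml fragment below is a minimal sketch; the section name and the values shown are assumptions for illustration, not taken from this commit:

    InvokeAI:
      Memory/Performance:
        # RAM (in GB) reserved for caching recently used models in system memory
        max_cache_size: 6.0
        # VRAM (in GB) reserved for holding models on the GPU
        # (this setting was previously named gpu_mem_reserved)
        max_vram_cache_size: 2.75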
File                           Last commit message                               Last commit date
models                         merge with main, fix conflicts                    2023-07-06 15:21:45 -04:00
__init__.py                    model merging API ready for testing               2023-07-06 13:15:15 -04:00
convert_ckpt_to_diffusers.py   Fix ckpt scanning on conversion                   2023-07-05 14:18:30 +03:00
lora.py                        Merge branch 'main' into feat/clip_skip           2023-07-07 16:21:53 +12:00
model_cache.py                 rename gpu_mem_reserved to max_vram_cache_size    2023-07-11 15:25:39 -04:00
model_manager.py               rename gpu_mem_reserved to max_vram_cache_size    2023-07-11 15:25:39 -04:00
model_merge.py                 add merge api                                     2023-07-06 15:12:34 -04:00
model_probe.py                 improve user migration experience                 2023-07-07 08:18:46 -04:00