InvokeAI/invokeai/backend/model_manager/load/model_cache
Latest commit: 21a60af881 by Lincoln Stein, "when unlocking models, offload_unlocked_models should prune to vram limit only (#6450)" (Co-authored-by: Lincoln Stein <lstein@gmail.com>), 2024-05-29 03:01:21 +00:00
__init__.py             BREAKING CHANGES: invocations now require model key, not base/type/name (2024-03-01 10:42:33 +11:00)
model_cache_base.py     Optimize RAM to VRAM transfer (#6312) (2024-05-24 17:06:09 +00:00)
model_cache_default.py  Optimize RAM to VRAM transfer (#6312) (2024-05-24 17:06:09 +00:00)
model_locker.py         when unlocking models, offload_unlocked_models should prune to vram limit only (#6450) (2024-05-29 03:01:21 +00:00)