InvokeAI/invokeai/backend/model_manager/load/model_cache

Latest commit: 812f10730f by Lincoln Stein (2024-04-04 22:51:12 -04:00)
adjust free vram calculation for models that will be removed by lazy offloading (#6150)
Co-authored-by: Lincoln Stein <lstein@gmail.com>
File                     Last commit message                                                                        Date
__init__.py              BREAKING CHANGES: invocations now require model key, not base/type/name                   2024-03-01 10:42:33 +11:00
model_cache_base.py      fix(mm): fix ModelCacheBase method name                                                   2024-03-01 10:42:33 +11:00
model_cache_default.py   adjust free vram calculation for models that will be removed by lazy offloading (#6150)   2024-04-04 22:51:12 -04:00
model_locker.py          adjust free vram calculation for models that will be removed by lazy offloading (#6150)   2024-04-04 22:51:12 -04:00