InvokeAI/invokeai/backend/model_manager/load/model_cache
Lincoln Stein 3d6d89feb4
[mm] Do not write diffusers model to disk when convert_cache is set to zero (#6072)
* pass model config to _load_model
* make conversion work again
* do not write diffusers to disk when convert_cache is set to 0 (see the sketch below)
* adding the same model to the cache twice is a no-op, not an assertion error
* fix issues identified by psychedelicious during PR review
* following conversion, avoid a redundant read of cached submodels
* fix error introduced while merging

---------

Co-authored-by: Lincoln Stein <lstein@gmail.com>
2024-03-29 16:11:08 -04:00
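The commit body above describes two behavior changes in the model cache: a converted diffusers model is kept in memory only and never written to disk when the convert cache size is 0, and re-adding a key that is already cached is silently ignored instead of raising an AssertionError. The sketch below illustrates just those two rules; the names used here (ModelCache, CacheRecord, put, convert_and_cache, convert_cache_size) are hypothetical stand-ins, not InvokeAI's actual API.

```python
from dataclasses import dataclass, field
from typing import Any, Dict

# Hypothetical sketch only -- class and method names are assumptions,
# not the real interfaces in model_cache_base.py / model_cache_default.py.


@dataclass
class CacheRecord:
    key: str
    model: Any


@dataclass
class ModelCache:
    # A convert cache size of 0 means "never persist converted models to disk".
    convert_cache_size: int = 0
    _records: Dict[str, CacheRecord] = field(default_factory=dict)

    def put(self, key: str, model: Any) -> None:
        """Add a model to the in-memory cache. Re-adding the same key is a no-op."""
        if key in self._records:
            return  # previously this situation raised an AssertionError
        self._records[key] = CacheRecord(key=key, model=model)

    def convert_and_cache(self, key: str, checkpoint_path: str) -> Any:
        """Convert a checkpoint to diffusers format and cache the result.

        When convert_cache_size == 0, the converted model stays in memory
        only and is never written to disk.
        """
        model = self._convert(checkpoint_path)   # stand-in for the real conversion
        if self.convert_cache_size > 0:
            self._write_to_disk(key, model)      # persist only if a disk cache is allowed
        self.put(key, model)                     # keep the in-memory copy either way
        return model

    def _convert(self, checkpoint_path: str) -> Any:
        return {"source": checkpoint_path}       # placeholder for checkpoint -> diffusers conversion

    def _write_to_disk(self, key: str, model: Any) -> None:
        pass                                     # placeholder for serializing the converted model
```

A caller configured with `ModelCache(convert_cache_size=0)` would therefore always reconvert from the original checkpoint on a cache miss, trading conversion time for disk space.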
__init__.py BREAKING CHANGES: invocations now require model key, not base/type/name 2024-03-01 10:42:33 +11:00
model_cache_base.py fix(mm): fix ModelCacheBase method name 2024-03-01 10:42:33 +11:00
model_cache_default.py [mm] Do not write diffusers model to disk when convert_cache is set to zero (#6072) 2024-03-29 16:11:08 -04:00
model_locker.py chore: ruff 2024-03-01 10:42:33 +11:00