InvokeAI/invokeai/backend/model_manager/load/model_cache
Lincoln Stein 532f82cb97
Optimize RAM to VRAM transfer (#6312)
* avoid copying model back from cuda to cpu

* handle models that don't have state dicts

* add assertions that models need a `device()` method

* do not rely on torch.nn.Module having the device() method

* apply all patches after model is on the execution device

* fix model patching in latents too

* log patched tokenizer

* closes #6375

---------

Co-authored-by: Lincoln Stein <lstein@gmail.com>
2024-05-24 17:06:09 +00:00
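
The "avoid copying model back from cuda to cpu" and "handle models that don't have state dicts" bullets describe the core of the optimization: retain the CPU-resident weights when a model is moved to VRAM, so offloading it back to RAM never has to read tensors out of the GPU. Below is a minimal sketch of that idea under stated assumptions; the CachedModel class and its method names are illustrative only (not InvokeAI's actual cache API), and load_state_dict(assign=True) requires PyTorch 2.1 or newer.

    from typing import Any, Optional

    import torch


    class CachedModel:
        """Illustrative cache entry; not InvokeAI's actual implementation."""

        def __init__(self, model: Any):
            self.model = model
            # Capture an independent CPU copy of the weights before any device
            # move, so offloading later never has to copy tensors out of VRAM.
            self._cpu_state_dict: Optional[dict] = None
            if hasattr(model, "state_dict"):
                self._cpu_state_dict = {
                    k: v.clone() for k, v in model.state_dict().items()
                }

        def to_vram(self, device: torch.device) -> None:
            # Plain RAM -> VRAM transfer; non_blocking only helps when the
            # CPU tensors live in pinned memory.
            if hasattr(self.model, "to"):
                self.model.to(device, non_blocking=True)

        def to_ram(self) -> None:
            if self._cpu_state_dict is not None:
                # Re-attach the retained CPU tensors instead of copying the
                # CUDA weights back to system RAM (assign=True, PyTorch >= 2.1).
                self.model.load_state_dict(self._cpu_state_dict, assign=True)
            elif hasattr(self.model, "to"):
                # Models without a state dict fall back to a plain .to() call;
                # objects with neither (e.g. tokenizers) are left where they are.
                self.model.to("cpu")

With this scheme the VRAM-to-RAM direction becomes cheap: the CPU copy was never discarded, so offloading is a matter of re-pointing the module's parameters rather than transferring weights back over the PCIe bus.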
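The "do not rely on torch.nn.Module having the device() method" bullet points at a related detail: a model's current device is better derived from its parameters or buffers than from an assumed accessor, since plain torch.nn.Module has no such method. A small helper sketch follows; the name get_effective_device is hypothetical, not part of the torch or InvokeAI API.

    import itertools

    import torch


    def get_effective_device(model: torch.nn.Module) -> torch.device:
        # Some models expose a `device` attribute or property; use it only
        # when it is an actual torch.device.
        device = getattr(model, "device", None)
        if isinstance(device, torch.device):
            return device
        # Otherwise infer the device from the first parameter or buffer;
        # an empty module defaults to CPU.
        first = next(itertools.chain(model.parameters(), model.buffers()), None)
        return first.device if first is not None else torch.device("cpu")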
__init__.py             BREAKING CHANGES: invocations now require model key, not base/type/name   2024-03-01 10:42:33 +11:00
model_cache_base.py     Optimize RAM to VRAM transfer (#6312)                                      2024-05-24 17:06:09 +00:00
model_cache_default.py  Optimize RAM to VRAM transfer (#6312)                                      2024-05-24 17:06:09 +00:00
model_locker.py         fix misplaced lock call                                                    2024-04-05 14:32:18 +11:00