InvokeAI/invokeai/backend/model_manager/load/model_cache
Latest commit: LoRA patching optimization (#6439)
Lincoln Stein · 2871676f79 · 2024-06-06 13:53:35 +00:00
* allow model patcher to optimize away the unpatching step when feasible

* remove lazy_offloading functionality

* do not save original weights if there is a CPU copy of the state dict (see the sketch below)

* Update invokeai/backend/model_manager/load/load_base.py

Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>

* documentation fixes added during penultimate review

---------

Co-authored-by: Lincoln Stein <lstein@gmail.com>
Co-authored-by: Kent Keirsey <31807370+hipsterusername@users.noreply.github.com>
Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>
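The "do not save original weights" bullet is the core of the optimization: when the model cache retains a CPU copy of the state dict, the patcher no longer needs to snapshot every weight it modifies, because unpatching can restore the whole model from that copy. A minimal sketch of the idea, assuming hypothetical names (ModelPatcher, apply_lora, unpatch) rather than the actual InvokeAI interfaces:

```python
from typing import Dict, Optional

import torch


class ModelPatcher:
    """Sketch: patch weights in place, skipping per-layer backups when a
    CPU copy of the state dict can restore the model instead.

    ModelPatcher, apply_lora, and unpatch are illustrative names, not the
    InvokeAI API.
    """

    def __init__(
        self,
        model: torch.nn.Module,
        cpu_state_dict: Optional[Dict[str, torch.Tensor]] = None,
    ):
        self.model = model
        # When the cache already holds the weights on CPU, restoring from it
        # wholesale is cheaper than snapshotting every patched tensor.
        self.cpu_state_dict = cpu_state_dict
        self.original_weights: Dict[str, torch.Tensor] = {}

    def apply_lora(self, param_name: str, delta: torch.Tensor) -> None:
        param = self.model.get_parameter(param_name)
        if self.cpu_state_dict is None:
            # No CPU copy: save the original so unpatching can restore it.
            self.original_weights.setdefault(param_name, param.detach().clone())
        with torch.no_grad():
            param.add_(delta.to(device=param.device, dtype=param.dtype))

    def unpatch(self) -> None:
        if self.cpu_state_dict is not None:
            # Cheap path: reload pristine weights from the CPU copy.
            self.model.load_state_dict(self.cpu_state_dict)
        else:
            with torch.no_grad():
                for name, weight in self.original_weights.items():
                    self.model.get_parameter(name).copy_(weight)
            self.original_weights.clear()
```

With a CPU copy available, patching never clones tensors and unpatching collapses to a single load_state_dict call; without one, the patcher falls back to per-parameter backups.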
File                     Last commit                                                                 Date
__init__.py              BREAKING CHANGES: invocations now require model key, not base/type/name    2024-03-01 10:42:33 +11:00
model_cache_base.py      LoRA patching optimization (#6439)                                          2024-06-06 13:53:35 +00:00
model_cache_default.py   Optimize RAM to VRAM transfer (#6312); see sketch below                     2024-05-24 17:06:09 +00:00
model_locker.py          LoRA patching optimization (#6439)                                          2024-06-06 13:53:35 +00:00
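The model_cache_default.py entry above references #6312, "Optimize RAM to VRAM transfer". As a rough illustration of that kind of optimization (not the cache's actual code; the helper name is invented), pinned host memory plus non-blocking copies lets a host-to-device transfer run asynchronously:

```python
from typing import Dict

import torch


def state_dict_to_vram(
    state_dict: Dict[str, torch.Tensor], device: torch.device
) -> Dict[str, torch.Tensor]:
    """Hypothetical helper: copy a CPU state dict to VRAM asynchronously."""
    vram_copy = {}
    for name, tensor in state_dict.items():
        # Page-locked (pinned) host memory lets the copy run as an async DMA;
        # non_blocking=True queues the transfer without stalling the CPU.
        src = tensor.pin_memory() if device.type == "cuda" else tensor
        vram_copy[name] = src.to(device, non_blocking=True)
    return vram_copy
```

This is the same pattern torch.utils.data.DataLoader uses for batches when pin_memory=True.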