InvokeAI/invokeai/backend/model_manager
Lincoln Stein 2871676f79
LoRA patching optimization (#6439)
* allow model patcher to optimize away the unpatching step when feasible

* remove lazy_offloading functionality

* do not save original weights if there is a CPU copy of the state dict (see the sketch below the commit details)

* Update invokeai/backend/model_manager/load/load_base.py

Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>

* documentation fixes added during penultimate review

---------

Co-authored-by: Lincoln Stein <lstein@gmail.com>
Co-authored-by: Kent Keirsey <31807370+hipsterusername@users.noreply.github.com>
Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>
2024-06-06 13:53:35 +00:00
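
The commit bullets describe the optimization only at a high level. One plausible reading, sketched below, is: when the model cache already holds a CPU copy of the model's state dict, the patcher can skip stashing original weights before adding LoRA deltas, and "unpatching" becomes a plain copy back from that CPU state dict. This is a minimal illustration, not the actual InvokeAI ModelPatcher; the names `apply_lora_patches`, `lora_deltas`, and `cpu_state_dict` are assumptions made for the example.

```python
# Minimal sketch (hypothetical API, not InvokeAI's real ModelPatcher):
# temporarily add LoRA weight deltas to a model, and only keep an
# original-weight backup when no CPU copy of the state dict is available.
from contextlib import contextmanager
from typing import Dict, Optional

import torch


@contextmanager
def apply_lora_patches(
    model: torch.nn.Module,
    lora_deltas: Dict[str, torch.Tensor],
    cpu_state_dict: Optional[Dict[str, torch.Tensor]] = None,
):
    """Add `lora_deltas` to matching parameters for the duration of the block.

    If `cpu_state_dict` is given, original weights are NOT saved; on exit the
    patched tensors are restored by copying from the CPU state dict instead.
    """
    saved: Dict[str, torch.Tensor] = {}
    try:
        with torch.no_grad():
            for key, delta in lora_deltas.items():
                param = model.get_parameter(key)
                if cpu_state_dict is None:
                    # No CPU copy available: keep a backup for unpatching.
                    saved[key] = param.detach().clone()
                param.add_(delta.to(param.device, param.dtype))
        yield model
    finally:
        with torch.no_grad():
            if cpu_state_dict is not None:
                # Restore by reloading the patched keys from the CPU copy.
                for key in lora_deltas:
                    model.get_parameter(key).copy_(cpu_state_dict[key])
            else:
                # Fall back to the saved original weights.
                for key, original in saved.items():
                    model.get_parameter(key).copy_(original)
```

Under this reading, skipping the backup avoids holding a second copy of the patched tensors in memory, and restoring from the cached CPU state dict turns the unpatching step into a simple copy rather than a reverse application of every LoRA layer.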
load  (last commit: LoRA patching optimization (#6439), 2024-06-06 13:53:35 +00:00)
metadata
util
__init__.py
config.py
convert_ckpt_to_diffusers.py
libc_util.py
merge.py
probe.py
search.py
starter_models.py