InvokeAI/invokeai
Lincoln Stein 2871676f79
LoRA patching optimization (#6439)
* allow model patcher to optimize away the unpatching step when feasible

* remove lazy_offloading functionality

* do not save original weights if there is a CPU copy of the state dict

* Update invokeai/backend/model_manager/load/load_base.py

Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>

* documentation fixes added during penultimate review

---------

Co-authored-by: Lincoln Stein <lstein@gmail.com>
Co-authored-by: Kent Keirsey <31807370+hipsterusername@users.noreply.github.com>
Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>
2024-06-06 13:53:35 +00:00
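
A minimal sketch of the optimization this commit describes, assuming a plain PyTorch module; the function name `apply_lora_patches` and the `cpu_state_dict` parameter are hypothetical and not InvokeAI's actual ModelPatcher API. The idea: if the model manager already keeps a CPU copy of the state dict (e.g. for offloading), the patcher can skip cloning original weights before patching, and unpatch by restoring from that CPU copy instead.

```python
from contextlib import contextmanager
from typing import Dict, Optional

import torch


@contextmanager
def apply_lora_patches(
    model: torch.nn.Module,
    patches: Dict[str, torch.Tensor],
    cpu_state_dict: Optional[Dict[str, torch.Tensor]] = None,
):
    """Apply additive weight patches in place and restore them on exit.

    Hypothetical sketch, not InvokeAI's real API. If `cpu_state_dict` is
    provided, per-layer backups are skipped; unpatching restores weights
    from the CPU copy instead.
    """
    originals: Dict[str, torch.Tensor] = {}
    try:
        for name, delta in patches.items():
            param = model.get_parameter(name)
            if cpu_state_dict is None:
                # No CPU copy available: save the original weight so we can
                # unpatch later.
                originals[name] = param.detach().clone()
            param.data += delta.to(device=param.device, dtype=param.dtype)
        yield model
    finally:
        if cpu_state_dict is not None:
            # The optimization: restore from the existing CPU copy, avoiding
            # the cost of saving originals at patch time.
            for name in patches:
                param = model.get_parameter(name)
                param.data.copy_(cpu_state_dict[name].to(param.device))
        else:
            for name, original in originals.items():
                model.get_parameter(name).data.copy_(original)
```

Under this design, callers that hold a CPU state-dict copy pass it in and pay nothing extra at patch time; callers without one fall back to per-layer backups, so behavior is unchanged when the optimization is not feasible.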
app LoRA patching optimization (#6439) 2024-06-06 13:53:35 +00:00
assets feat(api): chore: pydantic & fastapi upgrade 2023-10-17 14:59:25 +11:00
backend LoRA patching optimization (#6439) 2024-06-06 13:53:35 +00:00
configs feat(mm): support sdxl ckpt inpainting models 2024-04-28 12:57:27 +10:00
frontend tidy(ui): organize control layers konva logic 2024-06-06 07:45:13 +10:00
invocation_api Remove support for Prompt-to-Prompt cross-attention control (aka .swap()). This feature is not widely used. It does not work with SDXL and is incompatible with IP-Adapter and regional prompting. The implementation is also intertwined with both text embedding and the UNet attention layers, resulting in a high maintenance burden. For all of these reasons, we have decided to drop support. 2024-04-09 10:57:02 -04:00
version Update invokeai_version.py 2024-06-05 05:53:19 +10:00
__init__.py Various fixes 2023-01-30 18:42:17 -05:00