InvokeAI/invokeai/backend
Lincoln Stein 2871676f79
LoRA patching optimization (#6439)
* allow model patcher to optimize away the unpatching step when feasible

* remove lazy_offloading functionality

* do not save original weights if there is a CPU copy of the state dict

* Update invokeai/backend/model_manager/load/load_base.py

Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>

* documentation fixes added during penultimate review

---------

Co-authored-by: Lincoln Stein <lstein@gmail.com>
Co-authored-by: Kent Keirsey <31807370+hipsterusername@users.noreply.github.com>
Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>
2024-06-06 13:53:35 +00:00
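The bullets above describe the optimization: when the model cache already holds a CPU copy of the model's state dict, LoRA patching does not need to back up the original weights, and unpatching can restore from that copy instead. Below is a minimal sketch of that idea, assuming a plain torch.nn.Module and a dict of per-parameter LoRA deltas; the names here (apply_lora, lora_deltas, cpu_state_dict) are hypothetical and this is not the InvokeAI ModelPatcher API.

```python
from contextlib import contextmanager

import torch


@contextmanager
def apply_lora(model: torch.nn.Module, lora_deltas: dict, cpu_state_dict: dict | None = None):
    """Temporarily add LoRA weight deltas to `model` (illustrative sketch only).

    If `cpu_state_dict` is given (an already-existing CPU copy of the model's
    weights), skip backing up original weights and restore from that copy on
    exit; otherwise fall back to per-parameter backups.
    """
    saved: dict = {}
    try:
        for name, param in model.named_parameters():
            if name not in lora_deltas:
                continue
            if cpu_state_dict is None:
                # No CPU copy available: keep an original-weight backup for unpatching.
                saved[name] = param.detach().clone()
            # Patch in place; deltas are assumed to match the parameter shapes.
            param.data += lora_deltas[name].to(device=param.device, dtype=param.dtype)
        yield model
    finally:
        if cpu_state_dict is not None:
            # Optimized unpatch: reload everything from the existing CPU copy.
            model.load_state_dict(cpu_state_dict)
        else:
            # Fallback unpatch: restore only the weights that were backed up.
            for name, original in saved.items():
                model.get_parameter(name).data.copy_(original)
```

The trade-off in this sketch is that restoring from the CPU copy reloads the whole state dict on exit, in exchange for never cloning the patched weights up front, which is the saving the commit title refers to as optimizing away the unpatching bookkeeping.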
image_util            feat(nodes): use new blur_if_nsfw method  2024-05-14 07:23:38 +10:00
ip_adapter            Create a UNetAttentionPatcher for patching UNet models with CustomAttnProcessor2_0 modules.  2024-04-09 08:12:12 -04:00
model_hash            feat(mm): rename "blake3" to "blake3_multi"  2024-03-22 08:26:36 +11:00
model_manager         LoRA patching optimization (#6439)  2024-06-06 13:53:35 +00:00
onnx                  final tidying before marking PR as ready for review  2024-03-01 10:42:33 +11:00
stable_diffusion      cleanup: seamless unused older code cleanup  2024-05-13 08:11:08 +10:00
tiles                 feat(nodes): extract LATENT_SCALE_FACTOR to constants.py  2024-03-01 10:42:33 +11:00
util                  Re-enable app shutdown actions (#6244)  2024-04-19 06:45:42 -04:00
__init__.py           consolidate model manager parts into a single class  2024-03-01 10:42:33 +11:00
lora.py               final tidying before marking PR as ready for review  2024-03-01 10:42:33 +11:00
model_patcher.py      LoRA patching optimization (#6439)  2024-06-06 13:53:35 +00:00
raw_model.py          final tidying before marking PR as ready for review  2024-03-01 10:42:33 +11:00
textual_inversion.py  Add a callout about the hackiness of dropping tokens in the TextualInversionManager.  2024-05-28 05:11:54 -07:00