InvokeAI/ldm
Kevin Turner 8a0d45ac5a
new OffloadingDevice loads one model at a time, on demand (#2596)
* new OffloadingDevice loads one model at a time, on demand

* fixup! new OffloadingDevice loads one model at a time, on demand

* fix(prompt_to_embeddings): call the text encoder directly instead of its forward method

allowing any associated hooks to run with it.
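
For context, here is a minimal PyTorch sketch of the behavior this fix depends on: hooks registered on a module fire only when the module is invoked through `__call__`, i.e. as `module(x)`, and are skipped when `module.forward(x)` is called directly. The encoder stand-in and hook below are illustrative, not InvokeAI's actual code.

```python
import torch
from torch import nn

encoder = nn.Linear(4, 4)  # stand-in for the text encoder

def pre_hook(module, args):
    # an offloading hook would move `module` to the execution device here
    print("pre-hook fired")

encoder.register_forward_pre_hook(pre_hook)

x = torch.zeros(1, 4)
encoder(x)          # goes through __call__, so the hook fires
encoder.forward(x)  # bypasses __call__, so the hook is skipped
```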

* more attempts to get things on the right device from the offloader

* more attempts to get things on the right device from the offloader

* make offloading methods an explicit part of the pipeline interface

* inlining some calls where device is only used once

* ensure model group is ready after pipeline.to is called

* fixup! Strategize slicing based on free [V]RAM (#2572)

* doc(offloading): docstrings for offloading.ModelGroup

* doc(offloading): docstrings for offloading-related pipeline methods

* refactor(offloading): s/SimpleModelGroup/FullyLoadedModelGroup

* refactor(offloading): s/HotSeatModelGroup/LazilyLoadedModelGroup

to frame it in the same terms as "FullyLoadedModelGroup"
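
To make the renamed abstractions concrete, here is a rough sketch under assumed names and signatures, not the actual InvokeAI implementation: a FullyLoadedModelGroup keeps every installed model on the execution device, while a LazilyLoadedModelGroup parks models on the CPU and uses a forward pre-hook to pull one onto the execution device on demand. The hook-based design is also why the text encoder fix above matters.

```python
from typing import Optional

import torch
from torch import nn


class FullyLoadedModelGroup:
    """Keeps every installed model on the execution device at all times."""

    def __init__(self, execution_device: torch.device):
        self.execution_device = execution_device

    def install(self, *models: nn.Module):
        for model in models:
            model.to(self.execution_device)


class LazilyLoadedModelGroup:
    """Parks models on CPU and loads one onto the execution device on demand."""

    def __init__(self, execution_device: torch.device):
        self.execution_device = execution_device
        self._loaded: Optional[nn.Module] = None

    def install(self, *models: nn.Module):
        for model in models:
            model.to("cpu")
            # The hook fires only on model(x), which is why callers must not
            # invoke model.forward(x) directly.
            model.register_forward_pre_hook(self._load_on_demand)

    def _load_on_demand(self, module: nn.Module, args):
        if self._loaded is not module:
            if self._loaded is not None:
                self._loaded.to("cpu")  # offload the previously active model
            module.to(self.execution_device)
            self._loaded = module
```

With a group like this installed, a pipeline can hold a text encoder, UNet, and VAE together while only one of them occupies VRAM at any given moment.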

---------

Co-authored-by: Damian Stewart <null@damianstewart.com>
2023-02-16 23:48:27 +00:00
Name             Last commit                                                         Last commit date
data             Textual Inversion for M1                                            2022-09-27 01:39:17 +02:00
invoke           new OffloadingDevice loads one model at a time, on demand (#2596)   2023-02-16 23:48:27 +00:00
models           Add thresholding for all diffusers types (#2479)                    2023-02-14 18:00:34 -06:00
modules          new OffloadingDevice loads one model at a time, on demand (#2596)   2023-02-16 23:48:27 +00:00
__init__.py      Merge dev into main for 2.2.0 (#1642)                               2022-11-30 16:12:23 -05:00
generate.py      new OffloadingDevice loads one model at a time, on demand (#2596)   2023-02-16 23:48:27 +00:00
lr_scheduler.py  prettified all the code using "blue" at the urging of @tildebyte    2022-08-26 03:15:42 -04:00
simplet2i.py     Squashed commit of the following:                                   2022-09-12 14:31:48 -04:00
util.py          Allow multiple models to be imported by passing a directory.        2023-02-05 08:11:02 -08:00