Mirror of https://github.com/invoke-ai/InvokeAI
Synced 2024-08-30 20:32:17 +00:00
Commit: 8a0d45ac5a
Commit message:

* new OffloadingDevice loads one model at a time, on demand
* fixup! new OffloadingDevice loads one model at a time, on demand
* fix(prompt_to_embeddings): call the text encoder directly instead of its forward method, allowing any associated hooks to run with it
* more attempts to get things on the right device from the offloader
* more attempts to get things on the right device from the offloader
* make offloading methods an explicit part of the pipeline interface
* inlining some calls where device is only used once
* ensure model group is ready after pipeline.to is called
* fixup! Strategize slicing based on free [V]RAM (#2572)
* doc(offloading): docstrings for offloading.ModelGroup
* doc(offloading): docstrings for offloading-related pipeline methods
* refactor(offloading): s/SimpleModelGroup/FullyLoadedModelGroup
* refactor(offloading): s/HotSeatModelGroup/LazilyLoadedModelGroup to frame it in the same terms as "FullyLoadedModelGroup"

Co-authored-by: Damian Stewart <null@damianstewart.com>
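The commit message above describes a model group that keeps only one model on the execution device at a time, loading each member on demand. The sketch below illustrates that idea under stated assumptions: the class and method names are hypothetical and do not reproduce the actual API in `offloading.py`.

```python
# Illustrative sketch (NOT InvokeAI's actual offloading.py API) of a
# "lazily loaded model group": at most one member model occupies the
# execution device; the rest wait on the offload (CPU) device.
from typing import Optional, Set

import torch


class LazilyLoadedModelGroup:
    """Keeps at most one member model on the execution device at a time."""

    def __init__(self, execution_device: torch.device):
        self.execution_device = execution_device
        self.offload_device = torch.device("cpu")
        self._models: Set[torch.nn.Module] = set()
        self._loaded: Optional[torch.nn.Module] = None

    def install(self, *models: torch.nn.Module) -> None:
        """Add models to the group; new members start offloaded."""
        for m in models:
            self._models.add(m)
            m.to(self.offload_device)

    def load(self, model: torch.nn.Module) -> None:
        """Evict whatever currently holds the device, then load `model`."""
        assert model in self._models, "model was never installed in this group"
        if self._loaded is model:
            return
        if self._loaded is not None:
            self._loaded.to(self.offload_device)
        model.to(self.execution_device)
        self._loaded = model
```

In this scheme a pipeline would call something like `group.load(unet)` before the denoising loop and `group.load(vae)` before decoding, so device memory holds only the model currently running.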
ckpt_generator/
config/
generator/
restoration/
training/
__init__.py
_version.py
args.py
ckpt_to_diffuser.py
CLI.py
concepts_lib.py
conditioning.py
devices.py
globals.py
image_util.py
log.py
merge_diffusers.py
model_manager.py
offloading.py
patchmatch.py
pngwriter.py
prompt_parser.py
readline.py
seamless.py
server_legacy.py
server.py
txt2mask.py