Mirror of https://github.com/invoke-ai/InvokeAI (synced 2024-08-30 20:32:17 +00:00)
Commit 532f82cb97:

* avoid copying model back from CUDA to CPU
* handle models that don't have state dicts
* add assertions that models need a `device()` method
* do not rely on `torch.nn.Module` having the `device()` method
* apply all patches after the model is on the execution device
* fix model patching in latents too
* log patched tokenizer
* closes #6375

Co-authored-by: Lincoln Stein <lstein@gmail.com>
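The ordering described in the commit (move the model to the execution device first, apply patches only afterwards, and never assume a `device()` method exists on `torch.nn.Module`) can be sketched with stand-in classes. This is a minimal illustration, not InvokeAI code: `FakeModel`, `load_and_patch`, `model_device`, and `lora_patch` are hypothetical names.

```python
class FakeModel:
    """Minimal stand-in for a torch.nn.Module (no torch dependency)."""

    def __init__(self):
        self.device = "cpu"
        self.patched = False

    def to(self, device):
        self.device = device
        return self


def model_device(model):
    # Don't rely on the model exposing a device attribute/method;
    # fall back to a default when it is absent.
    return getattr(model, "device", "cpu")


def load_and_patch(model, execution_device, patches):
    # Move the model to the execution device FIRST, then patch.
    # Because patching happens on the execution device, the patched
    # weights never need to be copied back to the CPU cache.
    model.to(execution_device)
    for patch in patches:
        patch(model)
    return model


def lora_patch(model):
    # Illustrative patch: a real patch would modify layer weights.
    model.patched = True


m = load_and_patch(FakeModel(), "cuda", [lora_patch])
assert model_device(m) == "cuda" and m.patched
```

The point of the ordering is that the CPU-resident copy in the model cache stays pristine; patches live only on the execution-device copy, so unpatching is a no-op on the cached weights.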
convert_cache/
model_cache/
model_loaders/
__init__.py
load_base.py
load_default.py
memory_snapshot.py
model_loader_registry.py
model_util.py
optimizations.py