Mirror of https://github.com/invoke-ai/InvokeAI, synced 2024-08-30 20:32:17 +00:00
Commit 532f82cb97
* avoid copying model back from cuda to cpu
* handle models that don't have state dicts
* add assertions that models need a `device()` method
* do not rely on torch.nn.Module having the device() method
* apply all patches after model is on the execution device
* fix model patching in latents too
* log patched tokenizer
* closes #6375

---------

Co-authored-by: Lincoln Stein <lstein@gmail.com>
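The commit notes above describe how patching interacts with device placement: a module's device is resolved from its weights rather than a `device()` method, and patches are applied only after the model is already on the execution device so the patched weights never need to be copied back to the CPU. The following is a minimal sketch of that idea, not InvokeAI's actual `ModelPatcher` API; `get_effective_device` and the `patch.apply()` / `patch.revert()` helpers are hypothetical names used only for illustration.

```python
from contextlib import contextmanager
from typing import Iterator

import torch


def get_effective_device(model: torch.nn.Module) -> torch.device:
    """Resolve a module's device from its parameters or buffers,
    since torch.nn.Module itself does not expose a device() method."""
    try:
        return next(model.parameters()).device
    except StopIteration:
        pass
    try:
        return next(model.buffers()).device
    except StopIteration:
        # Module with no parameters or buffers; assume CPU.
        return torch.device("cpu")


@contextmanager
def apply_patches_on_execution_device(
    model: torch.nn.Module,
    execution_device: torch.device,
    patches: list,  # hypothetical patch objects exposing apply()/revert()
) -> Iterator[torch.nn.Module]:
    """Move the model to the execution device first, then apply patches,
    so nothing has to be copied back from CUDA to the CPU afterwards."""
    model.to(execution_device)
    applied = []
    try:
        for patch in patches:
            patch.apply(model)  # hypothetical API, not InvokeAI's
            applied.append(patch)
        yield model
    finally:
        # Revert in reverse order; the base weights stay on the device.
        for patch in reversed(applied):
            patch.revert(model)
```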
image_util/
ip_adapter/
model_hash/
model_manager/
onnx/
stable_diffusion/
tiles/
util/
__init__.py
lora.py
model_patcher.py
raw_model.py
textual_inversion.py