InvokeAI/invokeai/app
Latest commit 532f82cb97 by Lincoln Stein: Optimize RAM to VRAM transfer (#6312)
* avoid copying model back from cuda to cpu

* handle models that don't have state dicts

* add assertions that models need a `device()` method

* do not rely on torch.nn.Module having the device() method

* apply all patches after model is on the execution device

* fix model patching in latents too

* log patched tokenizer

* closes #6375

---------

Co-authored-by: Lincoln Stein <lstein@gmail.com>
2024-05-24 17:06:09 +00:00
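
The first bullet ("avoid copying model back from cuda to cpu") describes a caching pattern: keep the model's weights cached in RAM when it is first loaded, and on offload restore from that RAM copy instead of transferring the weights back from the GPU. A minimal sketch of that pattern is below; `ModelCache`, its method names, and the plain dicts standing in for torch state dicts and devices are illustrative assumptions, not InvokeAI's actual API.

```python
# Sketch of the no-copy-back RAM<->VRAM pattern (illustrative, not InvokeAI's API).
# Plain dicts stand in for torch state dicts; "cpu"/"cuda" strings for devices.

class ModelCache:
    def __init__(self):
        self._cpu_state = {}  # per-model CPU copy of weights, kept in RAM

    def load(self, name, model, device):
        # Snapshot the CPU weights once; later offloads reuse this copy
        # instead of transferring weights back from the GPU.
        if name not in self._cpu_state:
            self._cpu_state[name] = dict(model["state"])
        model["device"] = device  # analogous to model.to(device)
        return model

    def offload(self, name, model):
        # Instead of a cuda->cpu copy, drop the GPU-side weights and
        # restore from the cached RAM copy, which is the source of truth.
        model["state"] = dict(self._cpu_state[name])
        model["device"] = "cpu"
        return model
```

This also motivates the later bullets: since the RAM copy is pristine, patches (e.g. LoRA) must be applied only after the model is on the execution device, so they never leak into the cached CPU state.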
Name           Last commit message                                                            Date
api            tidy: remove unnecessary whitespace changes                                    2024-05-24 20:02:24 +10:00
assets/images  tweaks in response to psychedelicious review of PR                             2023-07-26 15:27:04 +10:00
invocations    Optimize RAM to VRAM transfer (#6312)                                          2024-05-24 17:06:09 +00:00
services       fix(processor): race condition that could result in node errors not getting reported  2024-05-24 20:02:24 +10:00
shared         tidy(nodes): move all field things to fields.py                                2024-03-01 10:42:33 +11:00
util           tidy(backend): clean up controlnet_utils                                       2024-04-25 13:20:09 +10:00
__init__.py    fix: make invocation_context.py accessible to mkdocs                           2024-03-01 10:42:33 +11:00
api_app.py     feat(api): add InvocationOutputMap to OpenAPI schema                           2024-05-15 14:09:44 +10:00
run_app.py     feat: single app entrypoint with CLI arg parsing                               2024-03-19 09:24:28 +11:00