InvokeAI/invokeai/backend
Lincoln Stein b85f2bc87d add support for multi-gpu rendering
This commit adds speculative support for parallel rendering across
multiple GPUs. Parallelism is at the session level: each session is
given exclusive access to a different GPU, and when all GPUs are busy,
execution of the session blocks until a GPU becomes available.

The code is currently untested and is being posted for comment.
2024-02-19 15:21:55 -05:00
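The scheduling described above (one GPU per session, blocking when all are busy) can be sketched with a condition-variable-guarded device pool. This is a minimal illustrative sketch, not the commit's actual implementation; the `GPUPool` class and its names are hypothetical.

```python
import threading
from contextlib import contextmanager


class GPUPool:
    """Hypothetical sketch: hand out one GPU per session,
    blocking callers while all devices are in use."""

    def __init__(self, device_ids):
        self._free = list(device_ids)
        self._cond = threading.Condition()

    @contextmanager
    def acquire(self):
        with self._cond:
            # Block until some session releases a device.
            while not self._free:
                self._cond.wait()
            device = self._free.pop()
        try:
            yield device
        finally:
            with self._cond:
                # Return the device and wake one blocked session.
                self._free.append(device)
                self._cond.notify()


# Usage: each session wraps its render in pool.acquire().
pool = GPUPool(["cuda:0", "cuda:1"])
with pool.acquire() as device:
    pass  # run the session's graph on `device`
```

Because acquisition is a context manager, a session that raises still returns its GPU, so a crashed render cannot permanently exhaust the pool.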
Name                  Last commit message                                         Last commit date
embeddings            Tidy names and locations of modules                         2024-02-19 08:16:56 +11:00
image_util            make model manager v2 ready for PR review                   2024-02-15 18:00:08 +11:00
install               remove errant def that was crashing invokeai-configure      2024-02-19 17:31:53 +11:00
ip_adapter            final tidying before marking PR as ready for review         2024-02-19 08:16:56 +11:00
model_manager         add support for multi-gpu rendering                         2024-02-19 15:21:55 -05:00
onnx                  final tidying before marking PR as ready for review         2024-02-19 08:16:56 +11:00
stable_diffusion      final tidying before marking PR as ready for review         2024-02-19 08:16:56 +11:00
tiles                 feat(nodes): extract LATENT_SCALE_FACTOR to constants.py    2024-02-15 17:30:03 +11:00
training              Port the command-line tools to use model_manager2 (#5546)   2024-02-02 17:18:47 +00:00
util                  final tidying before marking PR as ready for review         2024-02-19 08:16:56 +11:00
__init__.py           consolidate model manager parts into a single class         2024-02-15 17:57:14 +11:00
lora.py               final tidying before marking PR as ready for review         2024-02-19 08:16:56 +11:00
model_patcher.py      Tidy names and locations of modules                         2024-02-19 08:16:56 +11:00
raw_model.py          final tidying before marking PR as ready for review         2024-02-19 08:16:56 +11:00
textual_inversion.py  final tidying before marking PR as ready for review         2024-02-19 08:16:56 +11:00