Mirror of https://github.com/invoke-ai/InvokeAI, synced 2024-08-30 20:32:17 +00:00
Commit b85f2bc87d
This commit adds speculative support for parallel rendering across multiple GPUs. Parallelism is at the session level: each session is given access to a different GPU, and when all GPUs are busy, execution of the session blocks until one becomes available. The code is currently untested and is being posted for comment.
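The blocking acquisition described above can be sketched with a simple GPU pool. This is a hypothetical illustration, not the commit's actual code: the `GPUPool` class, its method names, and the session function are all assumptions, using a `queue.Queue` so that sessions block until a device id is free.

```python
import queue
import threading

class GPUPool:
    """Hypothetical sketch: hand out GPU ids to sessions,
    blocking callers while every GPU is busy."""

    def __init__(self, gpu_ids):
        self._free = queue.Queue()
        for gid in gpu_ids:
            self._free.put(gid)

    def acquire(self):
        # Blocks until some session releases a GPU id.
        return self._free.get()

    def release(self, gid):
        self._free.put(gid)

pool = GPUPool([0, 1])  # assume two GPUs for the sketch
results = []

def run_session(name):
    gid = pool.acquire()
    try:
        # A real session would move its models to f"cuda:{gid}" and render here.
        results.append((name, gid))
    finally:
        pool.release(gid)

threads = [threading.Thread(target=run_session, args=(f"session-{i}",))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

With four sessions and two GPUs, two sessions run at once while the other two block in `acquire()` until a device is returned to the pool.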
Directory listing:

..
embeddings
image_util
install
ip_adapter
model_manager
onnx
stable_diffusion
tiles
training
util
__init__.py
lora.py
model_patcher.py
raw_model.py
textual_inversion.py