Mirror of https://github.com/invoke-ai/InvokeAI (synced 2024-08-30 20:32:17 +00:00)

Commit b85f2bc87d
This commit adds speculative support for parallel rendering across multiple GPUs. The parallelism is at the session level: each session is given access to a different GPU, and when all GPUs are busy, execution of the session blocks until a GPU becomes available. The code is untested at this time and is being posted for comment.
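As a sketch of the session-to-GPU scheduling described above (an assumed illustration, not the code from this commit), a pool of device ids backed by a blocking queue gives the same behaviour: each session checks a GPU out, and checkout blocks until another session returns one. The `GPUPool` class, `run_session` helper, and `invoke_on` method are hypothetical names.

```python
import queue
from contextlib import contextmanager


class GPUPool:
    """Hands out GPU device ids; acquisition blocks while all GPUs are busy."""

    def __init__(self, device_ids):
        # A thread-safe queue holds the ids of currently idle GPUs.
        self._idle = queue.Queue()
        for device_id in device_ids:
            self._idle.put(device_id)

    @contextmanager
    def acquire(self):
        # Blocks until some session returns a GPU to the pool.
        device_id = self._idle.get()
        try:
            yield device_id
        finally:
            # Return the GPU so a waiting session can proceed.
            self._idle.put(device_id)


def run_session(pool, session):
    # Each session runs entirely on the single GPU it was handed.
    with pool.acquire() as device_id:
        return session.invoke_on(f"cuda:{device_id}")  # hypothetical session API
```

With one worker thread per session, `queue.Queue.get()` provides the blocking described in the commit message, and the `finally` block ensures the GPU is released even if the session raises.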
aa_nodes
app
backend
fixtures
inpainting
test_model_manager
test_model_probe/vae
__init__.py
conftest.py
dev_prompts.txt
legacy_tests.sh
test_config.py
test_item_storage_memory.py
test_model_probe.py
test_object_serializer_disk.py
test_path.py
test_profiler.py
test_sqlite_migrator.py
validate_pr_prompt.txt