Mirror of https://github.com/invoke-ai/InvokeAI (synced 2024-08-30 20:32:17 +00:00)

Commit d14a7d756e:
On hyperthreaded CPUs we get two threads operating on the queue by default on each core, which causes two threads to process queue items. This results in PyTorch errors and sometimes generates garbage output. Locking this to a single thread makes sense because we are bound by the number of GPUs in the system, not by CPU cores. To parallelize across GPUs we should instead start multiple processors (and use async instead of threading).

Fixes #3289
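The fix described above amounts to pinning the queue processor to exactly one worker thread. A minimal sketch of that pattern (hypothetical class and method names, not the actual InvokeAI implementation) could look like this:

```python
import queue
import threading


class QueueProcessor:
    """Processes queue items on exactly one worker thread.

    A single worker is deliberate: throughput is bound by the GPU,
    not by CPU cores, and concurrent workers can corrupt shared
    PyTorch state. It also guarantees items finish in FIFO order.
    """

    def __init__(self):
        self._queue: queue.Queue = queue.Queue()
        self._stop = threading.Event()
        # Exactly one thread pulls from the queue, regardless of
        # how many hardware threads the CPU exposes.
        self._worker = threading.Thread(target=self._run, daemon=True)
        self.results = []

    def start(self):
        self._worker.start()

    def submit(self, item):
        self._queue.put(item)

    def _run(self):
        while not self._stop.is_set():
            try:
                item = self._queue.get(timeout=0.1)
            except queue.Empty:
                continue
            self._process(item)
            self._queue.task_done()

    def _process(self, item):
        # Placeholder for the real invocation/rendering work.
        self.results.append(item)

    def stop(self):
        self._stop.set()
        self._worker.join()
```

To scale across multiple GPUs, the commit message suggests starting several such processors (one per GPU) rather than adding threads to a single processor.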
Top-level contents:

app
assets
backend
configs
frontend
version
__init__.py
README
Organization of the source tree:

app -- Home of nodes, invocations, and services
assets -- Images and other data files used by InvokeAI
backend -- Non-user-facing libraries, including the rendering core
configs -- Configuration files used at install and run time
frontend -- User-facing scripts, including the CLI and the WebUI
version -- Current InvokeAI version string, stored in version/invokeai_version.py