InvokeAI/invokeai
Eugene d14a7d756e nodes-api: enforce single thread for the processor
On hyperthreaded CPUs we get two threads operating on the queue by
default on each core. This causes two threads to process queue items
concurrently, which results in pytorch errors and sometimes generates
garbage output.

Locking this to a single thread makes sense because we are bound by the
number of GPUs in the system, not by CPU cores. To parallelize
across GPUs we should instead start multiple processors (and use async
instead of threading).

Fixes #3289
2023-05-01 18:24:37 -04:00
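The fix described in the commit message can be sketched as a queue drained by exactly one dedicated worker thread, so items are never processed concurrently. This is a minimal illustration, not the actual InvokeAI processor: the class and method names (`SingleThreadProcessor`, `submit`) are hypothetical, and the doubling step stands in for real invocation work.

```python
import queue
import threading


class SingleThreadProcessor:
    """Drain a work queue on exactly one worker thread, so only one
    queue item is ever processed at a time (avoids concurrent use of
    a single GPU by multiple threads)."""

    def __init__(self):
        self._queue = queue.Queue()
        self.results = []
        # One worker thread, regardless of CPU core / hyperthread count.
        self._worker = threading.Thread(target=self._loop, daemon=True)
        self._worker.start()

    def _loop(self):
        while True:
            item = self._queue.get()
            if item is None:  # sentinel: shut down the worker
                self._queue.task_done()
                break
            self.results.append(item * 2)  # stand-in for real work
            self._queue.task_done()

    def submit(self, item):
        self._queue.put(item)

    def stop(self):
        self._queue.put(None)
        self._worker.join()


processor = SingleThreadProcessor()
for i in range(3):
    processor.submit(i)
processor.stop()
# Items are processed strictly in FIFO order by the single worker.
```

To scale across multiple GPUs, one would start one such processor per GPU rather than adding worker threads, which matches the rationale in the commit message.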
app nodes-api: enforce single thread for the processor 2023-05-01 18:24:37 -04:00
assets Various fixes 2023-01-30 18:42:17 -05:00
backend Merge branch 'main' into bugfix/prevent-cli-crash 2023-04-25 03:28:45 +01:00
configs support both epsilon and v-prediction v2 inference 2023-03-05 22:51:40 -05:00
frontend feat(ui): disable w/h when img2img & not fit 2023-05-01 17:28:22 +10:00
version fix issue with invokeai.version 2023-03-03 01:34:38 -05:00
__init__.py Various fixes 2023-01-30 18:42:17 -05:00
README CODEOWNERS coarse draft 2023-03-03 14:36:43 -05:00

Organization of the source tree:

app -- Home of nodes invocations and services
assets -- Images and other data files used by InvokeAI
backend -- Non-user-facing libraries, including the rendering
	core.
configs -- Configuration files used at install and run times
frontend -- User-facing scripts, including the CLI and the WebUI
version -- Current InvokeAI version string, stored
	in version/invokeai_version.py