InvokeAI/invokeai/app
Eugene d14a7d756e nodes-api: enforce single thread for the processor
On hyperthreaded CPUs we get two threads operating on the queue by
default on each core. This causes two threads to process queue items
concurrently, which results in PyTorch errors and sometimes generates
garbage output.

Locking this to a single thread makes sense because we are bound by the
number of GPUs in the system, not by the number of CPU cores. To
parallelize across GPUs we should instead start multiple processors
(and use async instead of threading).
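The idea above can be sketched as follows. This is a minimal illustration, not InvokeAI's actual processor: the class and method names here are hypothetical, and the real service wires into the app's session queue and event bus. The key point it demonstrates is a single dedicated worker thread draining a queue, so queued items can never run concurrently.

```python
import queue
import threading


class SingleThreadProcessor:
    """Drains a work queue on exactly one worker thread, so queued
    (GPU-bound) work items are never executed concurrently.

    Hypothetical sketch: names do not match InvokeAI's real classes.
    """

    def __init__(self):
        self._queue = queue.Queue()
        # One daemon thread only. Parallelism across GPUs would come
        # from running multiple processors, not from adding threads here.
        self._worker = threading.Thread(target=self._run, daemon=True)
        self._worker.start()

    def submit(self, item):
        """Enqueue a callable work item."""
        self._queue.put(item)

    def _run(self):
        while True:
            item = self._queue.get()
            if item is None:  # sentinel: shut down
                break
            item()  # invoke the queued work on this one thread
            self._queue.task_done()

    def stop(self):
        """Process remaining items, then stop the worker."""
        self._queue.put(None)
        self._worker.join()
```

Because `queue.Queue` is thread-safe and only one consumer thread exists, items are processed strictly in submission order, which avoids the interleaving that triggered the PyTorch errors.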

Fixes #3289
2023-05-01 18:24:37 -04:00
Path        | Latest commit                                                                           | Date
api         | fix(api): return same URL on location header                                            | 2023-04-26 06:29:30 +10:00
cli         | [nodes] Add subgraph library, subgraph usage in CLI, and fix subgraph execution (#3180) | 2023-04-14 06:41:06 +00:00
invocations | feat(nodes): fix image to image fit param                                               | 2023-05-01 17:28:22 +10:00
models      | Partial migration of UI to nodes API (#3195)                                            | 2023-04-22 13:10:20 +10:00
services    | nodes-api: enforce single thread for the processor                                      | 2023-05-01 18:24:37 -04:00
util        | Partial migration of UI to nodes API (#3195)                                            | 2023-04-22 13:10:20 +10:00
api_app.py  | [api] Add models router and list model API.                                             | 2023-04-05 23:59:07 -04:00
cli_app.py  | Partial migration of UI to nodes API (#3195)                                            | 2023-04-22 13:10:20 +10:00