Lincoln Stein
d96175d127
resolve some undefined symbols in model_cache
2023-05-18 14:31:47 -04:00
Lincoln Stein
7ea995149e
fixes to env parsing, textual inversion & help text
...
- Make environment variable settings case InSenSiTive:
INVOKEAI_MAX_LOADED_MODELS and InvokeAI_Max_Loaded_Models
environment variables will both set `max_loaded_models`
- Updated realesrgan to use new config system.
- Updated textual_inversion_training to use new config system.
- Discovered a race condition when InvokeAIAppConfig is created
at module load time, which makes it impossible to customize
or replace the help message produced with --help on the command
line. To fix this, moved all instances of get_invokeai_config()
from module load time to object initialization time. Makes code
cleaner, too.
- Added `--from_file` argument to `invokeai-node-cli` and changed
github action to match. CI tests will hopefully work now.
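The case-insensitive lookup described in the first bullet can be sketched with the stdlib alone. InvokeAI does this through its settings class; `get_setting` below is a hypothetical helper, not the actual API:

```python
import os

def get_setting(name: str, default=None):
    """Resolve an environment variable regardless of case, so that
    INVOKEAI_MAX_LOADED_MODELS and InvokeAI_Max_Loaded_Models both
    reach the same underlying setting."""
    target = name.lower()
    for key, value in os.environ.items():
        if key.lower() == target:
            return value
    return default

os.environ["InvokeAI_Max_Loaded_Models"] = "4"
print(get_setting("INVOKEAI_MAX_LOADED_MODELS"))  # prints: 4
```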
2023-05-18 10:48:23 -04:00
Eugene
20ca9e1fc1
config: move 'CORS' settings to 'Web Server' in the docstring to match the actual category
2023-05-17 19:45:51 -04:00
Eugene
9e4e386c9b
web and formatting fixes
...
- remove non-existent import InvokeAIWebConfig
- fix workflow file formatting
- clean up whitespace
2023-05-17 19:12:03 -04:00
Lincoln Stein
ffaadb9d05
reorder options in help text
2023-05-17 15:22:58 -04:00
Lincoln Stein
b7c5a39685
make invokeai.yaml more hierarchical; fix list configuration bug
2023-05-17 12:19:19 -04:00
Lincoln Stein
eadfd239a8
update config script to work with new config system
2023-05-17 00:18:19 -04:00
Lincoln Stein
8d75e50435
partial port of invokeai-configure
2023-05-16 01:50:01 -04:00
Lincoln Stein
4fe94a9315
list_models() now returns a dict of {type,{name: info}}
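The new `{type, {name: info}}` return shape can be pictured as a nested dict. The model names and info fields below are purely illustrative, not the actual registry contents:

```python
def list_models() -> dict:
    """Return models grouped first by type, then by name."""
    # Hypothetical entries for illustration only.
    return {
        "diffusers": {
            "stable-diffusion-1.5": {"description": "base model", "format": "diffusers"},
        },
        "vae": {
            "sd-vae-ft-mse": {"description": "fine-tuned VAE", "format": "checkpoint"},
        },
    }

models = list_models()
```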
2023-05-15 23:44:08 -04:00
Lincoln Stein
baf5451fa0
Merge branch 'main' into lstein/new-model-manager
2023-05-13 22:01:34 -04:00
Lincoln Stein
1103ab2844
merge with main
2023-05-13 21:35:19 -04:00
Sergey Borisov
039fa73269
Change SDModelType enum to string; fixes (model unload negative lock count, scheduler load error, safetensors conversion, wrong logic in del_model, wrong metadata parsing in web)
2023-05-14 03:06:26 +03:00
Lincoln Stein
b23c9f1da5
get Tuple type hint syntax right
2023-05-13 14:59:21 -04:00
Lincoln Stein
5e8e3cf464
correct typos in model_manager_service
2023-05-13 14:55:59 -04:00
Lincoln Stein
72967bf118
convert add_model(), del_model(), list_models() etc to use bifurcated names
2023-05-13 14:44:44 -04:00
Eugene Brodsky
63db3fc22f
reduce queue check interval to 0.5s
2023-05-12 17:54:26 -04:00
Eugene
ad0bb3f61a
fix: queue error should not crash InvocationProcessor
...
1. If retrieving an item from the queue raises an exception, the
InvocationProcessor thread crashes, but the API continues running in
a non-functional state. This fixes that issue.
2. When there are no items in the queue, sleep 1 second before checking
again.
3. Also ensures the thread doesn't crash if an exception is raised by the
invoker, and emits the error event.
Intentionally using base Exceptions because for now we don't know which
specific exception to expect.
Fixes (sort of)? #3222
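The three points above amount to a consumer loop that survives errors on both sides of the queue. A minimal stdlib sketch (not the actual InvocationProcessor code):

```python
import queue
import threading

def process_queue(q, invoke, emit_error, stop):
    """Consumer loop that survives errors from both the queue and the invoker.

    Base Exception is caught intentionally: the specific failure modes are
    not yet known, and a crash here would leave the API running in a
    non-functional state.
    """
    while not stop.is_set():
        try:
            item = q.get(timeout=1.0)  # wait ~1s when the queue is empty
        except queue.Empty:
            continue                    # nothing to do; check again
        except Exception as e:          # queue error: report it, stay alive
            emit_error(e)
            continue
        try:
            invoke(item)
        except Exception as e:          # invoker error: emit the error event
            emit_error(e)
```

A bad item no longer takes the processor down: later items are still invoked, and the error surfaces through the event callback.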
2023-05-12 17:54:26 -04:00
Sergey Borisov
5431dd5f50
Fix event args
2023-05-12 23:08:03 +03:00
psychedelicious
e5b7dd63e9
fix(nodes): temporarily disable librarygraphs
...
- Do not retrieve graphs from the DB until we resolve the issue of changing node schemas causing the application to fail to start up due to invalid graphs
2023-05-12 22:33:49 +10:00
Lincoln Stein
2ef79b8bf3
fix bug in persistent model scheme
2023-05-12 00:14:56 -04:00
Lincoln Stein
11ecf438f5
latents.py converted to use model manager service; events emitted
2023-05-11 23:33:24 -04:00
Lincoln Stein
df5b968954
model manager now running as a service
2023-05-11 21:24:29 -04:00
Lincoln Stein
8ad8c5c67a
resolve conflicts with main
2023-05-11 00:19:20 -04:00
Lincoln Stein
590942edd7
Merge branch 'main' into lstein/new-model-manager
2023-05-11 00:16:03 -04:00
Lincoln Stein
4627910c5d
added a wrapper model_manager_service and model events
2023-05-11 00:09:19 -04:00
psychedelicious
f488b1a7f2
fix(nodes): fix usage of Optional
2023-05-11 11:55:51 +10:00
psychedelicious
20f6a597ab
fix(nodes): add MetadataColorField
2023-05-11 11:55:51 +10:00
psychedelicious
206e6b1730
feat(nodes): wip inpaint node
2023-05-11 11:55:51 +10:00
psychedelicious
5e09dd380d
Revert "feat(nodes): free gpu mem after invocation"
...
This reverts commit 99cb33f477306d5dcc455efe04053ce41b8d85bd.
2023-05-11 11:55:51 +10:00
psychedelicious
a75148cb16
feat(nodes): free gpu mem after invocation
2023-05-11 11:55:51 +10:00
psychedelicious
7dfa135b2c
fix(nodes): fix #3306
...
Check if the cache has the object before deleting it.
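The fix amounts to a guarded delete. A minimal sketch, with a plain dict standing in for the cache:

```python
def safe_delete(cache: dict, key: str) -> bool:
    """Delete key from the cache only if it is present, avoiding a
    KeyError when the same object is deleted twice. Returns True if
    an entry was actually removed."""
    if key in cache:
        del cache[key]
        return True
    return False
```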
2023-05-10 15:29:10 +10:00
Lincoln Stein
fa6a580452
merge with main
2023-05-10 00:03:32 -04:00
Lincoln Stein
4649920074
adjust t2i to work with new model structure
2023-05-07 19:06:49 -04:00
Lincoln Stein
667171ed90
cap model cache size using bytes, not # models
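A byte-bounded cache can be sketched as an LRU keyed on caller-supplied sizes. This is an illustrative stand-in, not the actual ModelCache implementation:

```python
from collections import OrderedDict

class ByteCappedCache:
    """LRU cache bounded by total byte size rather than entry count."""

    def __init__(self, max_bytes: int):
        self.max_bytes = max_bytes
        self.current_bytes = 0
        self._entries = OrderedDict()  # key -> (value, size_bytes)

    def put(self, key, value, size_bytes: int):
        if key in self._entries:
            self.current_bytes -= self._entries.pop(key)[1]
        self._entries[key] = (value, size_bytes)
        self.current_bytes += size_bytes
        # Evict least-recently-used entries until under the byte cap,
        # always keeping at least the entry just added.
        while self.current_bytes > self.max_bytes and len(self._entries) > 1:
            _, (_, evicted_size) = self._entries.popitem(last=False)
            self.current_bytes -= evicted_size

    def get(self, key):
        value, size = self._entries.pop(key)  # re-insert at MRU position
        self._entries[key] = (value, size)
        return value

    def __contains__(self, key):
        return key in self._entries
```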
2023-05-07 18:07:28 -04:00
Lincoln Stein
647ffb2a0f
defined abstract baseclass for model manager service
2023-05-06 22:41:19 -04:00
Lincoln Stein
afd2e32092
Merge branch 'main' into lstein/global-configuration
2023-05-06 21:20:25 -04:00
Lincoln Stein
e0214a32bc
mostly ported to new manager API; needs testing
2023-05-06 00:44:12 -04:00
StAlKeR7779
58d7833c5c
Review changes
2023-05-05 21:09:29 +03:00
StAlKeR7779
5012f61599
Separate conditionings back to positive and negative
2023-05-05 15:47:51 +03:00
StAlKeR7779
1e6adf0a06
Fix default graph and test
2023-05-04 21:14:31 +03:00
Lincoln Stein
742ed19d66
add missing config module
2023-05-04 01:20:30 -04:00
Lincoln Stein
15ffb53e59
remove globals, args, generate and the legacy CLI
2023-05-03 23:36:51 -04:00
Lincoln Stein
90054ddf0d
use InvokeAISettings for app-wide configuration
2023-05-03 22:30:30 -04:00
StAlKeR7779
56d3cbead0
Merge branch 'main' into feat/compel_node
2023-05-04 00:28:33 +03:00
Lincoln Stein
4687ad4ed6
Merge branch 'main' into enhance/invokeai-logs
2023-05-03 13:36:06 -04:00
psychedelicious
f9f40adcdc
fix(nodes): fix t2i graph
...
Removed width and height edges.
2023-05-02 13:11:28 +10:00
Eugene
d39de0ad38
fix(nodes): fix duplicate Invoker start/stop events
2023-05-01 18:24:37 -04:00
Eugene
d14a7d756e
nodes-api: enforce single thread for the processor
...
On hyperthreaded CPUs we get two threads operating on the queue by
default on each core, which causes two threads to process queue items.
This results in pytorch errors and sometimes generates garbage.
Locking this to a single thread makes sense because we are bound by the
number of GPUs in the system, not by CPU cores. To parallelize
across GPUs we should just start multiple processors (and use async
instead of threading)
Fixes #3289
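The single-thread constraint can be demonstrated with a one-worker executor; the counters below are stand-in instrumentation, not the actual InvocationProcessor:

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor

# A single-worker executor serializes queue items: only one invocation
# runs at a time, regardless of how many CPU threads are available.
# To scale across GPUs, start multiple processors instead of more threads.
executor = ThreadPoolExecutor(max_workers=1)

in_flight = 0
max_in_flight = 0
lock = threading.Lock()

def invoke(item):
    """Stand-in for a GPU-bound pipeline invocation."""
    global in_flight, max_in_flight
    with lock:
        in_flight += 1
        max_in_flight = max(max_in_flight, in_flight)
    time.sleep(0.01)  # simulate work
    with lock:
        in_flight -= 1
    return item

futures = [executor.submit(invoke, i) for i in range(8)]
results = [f.result() for f in futures]
executor.shutdown()
```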
2023-05-01 18:24:37 -04:00
Lincoln Stein
974841926d
logger is an interchangeable service
2023-04-29 10:48:50 -04:00
Lincoln Stein
8db20e0d95
rename log to logger throughout
2023-04-29 09:43:40 -04:00