Commit Graph

138 Commits

Author SHA1 Message Date
Sergey Borisov
3b2a054f7a Add model loader node; unet, clip, vae fields; change compel node to clip field 2023-05-13 04:37:20 +03:00
Sergey Borisov
4492044d29 Redo compel node to separate model loading 2023-05-12 23:09:33 +03:00
Sergey Borisov
5431dd5f50 Fix event args 2023-05-12 23:08:03 +03:00
Sergey Borisov
79fecba274 Fix model manager initialization in web ui 2023-05-12 23:05:08 +03:00
Lincoln Stein
2ef79b8bf3 fix bug in persistent model scheme 2023-05-12 00:14:56 -04:00
Lincoln Stein
11ecf438f5 latents.py converted to use model manager service; events emitted 2023-05-11 23:33:24 -04:00
Lincoln Stein
df5b968954 model manager now running as a service 2023-05-11 21:24:29 -04:00
Lincoln Stein
8ad8c5c67a resolve conflicts with main 2023-05-11 00:19:20 -04:00
Lincoln Stein
590942edd7 Merge branch 'main' into lstein/new-model-manager 2023-05-11 00:16:03 -04:00
Lincoln Stein
4627910c5d added a wrapper model_manager_service and model events 2023-05-11 00:09:19 -04:00
psychedelicious
f488b1a7f2 fix(nodes): fix usage of Optional 2023-05-11 11:55:51 +10:00
psychedelicious
34f3a0f0e3 feat(nodes): improve default model choosing output 2023-05-11 11:55:51 +10:00
psychedelicious
d0bac1675e fix(nodes): fix ImageOutput Config 2023-05-11 11:55:51 +10:00
psychedelicious
4e56c962f4 fix(nodes): fix infill docstrings 2023-05-11 11:55:51 +10:00
psychedelicious
4ef0e43759 fix(nodes): remove dataURL invocation 2023-05-11 11:55:51 +10:00
psychedelicious
a7786d5ff2 fix(nodes): restore seamless to TextToLatents 2023-05-11 11:55:51 +10:00
psychedelicious
6c1de975d9 feat(nodes): add infill nodes 2023-05-11 11:55:51 +10:00
psychedelicious
a1079e455a feat(nodes): cleanup unused params, seed generation 2023-05-11 11:55:51 +10:00
psychedelicious
da4eacdffe feat(nodes): add InfillInvocation 2023-05-11 11:55:51 +10:00
psychedelicious
6102e560ba feat(nodes): add LatentsToImage node (VAE encode) 2023-05-11 11:55:51 +10:00
psychedelicious
20f6a597ab fix(nodes): add MetadataColorField 2023-05-11 11:55:51 +10:00
psychedelicious
206e6b1730 feat(nodes): wip inpaint node 2023-05-11 11:55:51 +10:00
psychedelicious
357cee2849 fix(nodes): fix cfg scale min value 2023-05-11 11:55:51 +10:00
psychedelicious
0b49997bb6 feat(nodes): allow uploaded images to be any ImageType (eg intermediates) 2023-05-11 11:55:51 +10:00
psychedelicious
5e09dd380d Revert "feat(nodes): free gpu mem after invocation"
This reverts commit 99cb33f477306d5dcc455efe04053ce41b8d85bd.
2023-05-11 11:55:51 +10:00
psychedelicious
a75148cb16 feat(nodes): free gpu mem after invocation 2023-05-11 11:55:51 +10:00
psychedelicious
e0b9b5cc6c feat(nodes): add dataURL to image node 2023-05-11 11:55:51 +10:00
psychedelicious
ee24ad7b13 fix(nodes): fix broken docs routes 2023-05-10 08:28:17 -04:00
psychedelicious
f8e90ba3f0 feat(nodes): add ui build static route 2023-05-10 08:28:17 -04:00
psychedelicious
7dfa135b2c fix(nodes): fix #3306
Check if the cache has the object before deleting it.
2023-05-10 15:29:10 +10:00
Lincoln Stein
fa6a580452 merge with main 2023-05-10 00:03:32 -04:00
Lincoln Stein
4649920074 adjust t2i to work with new model structure 2023-05-07 19:06:49 -04:00
Lincoln Stein
667171ed90 cap model cache size using bytes, not # models 2023-05-07 18:07:28 -04:00
Lincoln Stein
647ffb2a0f defined abstract baseclass for model manager service 2023-05-06 22:41:19 -04:00
Lincoln Stein
a8cfa3565c Merge branch 'lstein/new-model-manager' of github.com:invoke-ai/InvokeAI into lstein/new-model-manager 2023-05-06 08:14:15 -04:00
Lincoln Stein
e0214a32bc mostly ported to new manager API; needs testing 2023-05-06 00:44:12 -04:00
StAlKeR7779
a80fe05e23 Rename compel node 2023-05-05 21:30:16 +03:00
StAlKeR7779
58d7833c5c Review changes 2023-05-05 21:09:29 +03:00
StAlKeR7779
5012f61599 Separate conditionings back to positive and negative 2023-05-05 15:47:51 +03:00
blessedcoolant
85c33823c3 Merge branch 'main' into feat/compel_node 2023-05-05 14:41:45 +12:00
psychedelicious
e04ada1319 Merge branch 'main' into patch-1 2023-05-05 10:38:45 +10:00
StAlKeR7779
81ec476f3a Revert seed field addition 2023-05-04 21:50:40 +03:00
StAlKeR7779
1e6adf0a06 Fix default graph and test 2023-05-04 21:14:31 +03:00
StAlKeR7779
7d221e2518 Combine conditioning into one field (better fits multiple-type conditioning like perp-neg) 2023-05-04 20:14:22 +03:00
StAlKeR7779
56d3cbead0 Merge branch 'main' into feat/compel_node 2023-05-04 00:28:33 +03:00
Lincoln Stein
4687ad4ed6 Merge branch 'main' into enhance/invokeai-logs 2023-05-03 13:36:06 -04:00
psychedelicious
f9f40adcdc fix(nodes): fix t2i graph
Removed width and height edges.
2023-05-02 13:11:28 +10:00
psychedelicious
3264d30b44 feat(nodes): allow multiples of 8 for dimensions 2023-05-02 12:01:52 +10:00
Eugene
d39de0ad38 fix(nodes): fix duplicate Invoker start/stop events 2023-05-01 18:24:37 -04:00
Eugene
d14a7d756e nodes-api: enforce single thread for the processor
On hyperthreaded CPUs we get two threads operating on the queue by
default on each core. This causes two threads to process queue items,
which results in PyTorch errors and sometimes generates garbage output.

Locking this to a single thread makes sense because we are bound by the
number of GPUs in the system, not by CPU cores. To parallelize across
GPUs we should instead start multiple processors (and use async rather
than threading).

Fixes #3289
2023-05-01 18:24:37 -04:00
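
The last commit above locks the invocation processor to a single consumer thread. A minimal sketch of that pattern, assuming a plain queue.Queue and placeholder names (work_queue, process_item) rather than the actual InvokeAI classes:

    # Illustrative sketch, not the InvokeAI implementation: one thread drains
    # the work queue, so GPU-bound invocations run strictly one at a time.
    # Parallelism across GPUs would come from starting additional processors,
    # not from extra threads on this queue.
    import queue
    import threading

    work_queue = queue.Queue()
    _STOP = object()  # sentinel used to shut the processor down

    def process_item(item):
        # Stand-in for a GPU-bound invocation; a real processor would run a
        # model here and emit events with the result.
        print(f"processing {item}")

    def processor_loop():
        # Exactly one of these loops runs, so queue items are handled serially.
        while True:
            item = work_queue.get()
            if item is _STOP:
                break
            try:
                process_item(item)
            finally:
                work_queue.task_done()

    processor = threading.Thread(target=processor_loop, name="invoker-processor", daemon=True)
    processor.start()

    for i in range(3):
        work_queue.put(i)
    work_queue.put(_STOP)
    processor.join()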