Lincoln Stein
4627910c5d
added a wrapper model_manager_service and model events
2023-05-11 00:09:19 -04:00
Lincoln Stein
fa6a580452
merge with main
2023-05-10 00:03:32 -04:00
Lincoln Stein
99c692f397
check that model name matches format
2023-05-09 23:46:59 -04:00
Lincoln Stein
3d85e769ce
clean up ckpt handling
...
- removed legacy ckpt loading code from model_cache
- added placeholders for lora and textual inversion model loading
2023-05-09 22:44:58 -04:00
Lincoln Stein
9cb962cad7
ckpt model conversion now done in ModelCache
2023-05-08 23:39:44 -04:00
Mary Hipp
853c83d0c2
surface detail field for 403 errors
2023-05-09 12:40:19 +10:00
Lincoln Stein
a108155544
added StAlKeR7779's great model size calculating routine
2023-05-08 21:47:03 -04:00
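For context, a minimal sketch of what a model-size routine like this could look like, assuming a plain `torch.nn.Module`; it is illustrative only, not the routine credited in the commit:

```python
import torch

def model_size_bytes(model: torch.nn.Module) -> int:
    """Approximate in-memory size of a model: parameters plus buffers, in bytes."""
    size = 0
    for p in model.parameters():
        size += p.numel() * p.element_size()
    for b in model.buffers():
        size += b.numel() * b.element_size()
    return size
```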
Mary Hipp
1809990ed4
if backend returns an error, show it in a toast
2023-05-09 11:09:36 +10:00
Eugene
79d49853d2
use websocket transport first for socket.io
2023-05-09 11:01:02 +10:00
Lincoln Stein
c15b49c805
implement StAlKeR7779's requested API for fetching submodels
2023-05-07 23:18:17 -04:00
Lincoln Stein
fd63e36822
optimize subfolder so that it returns submodel if parent is in RAM
2023-05-07 21:39:11 -04:00
Lincoln Stein
4649920074
adjust t2i to work with new model structure
2023-05-07 19:06:49 -04:00
Lincoln Stein
667171ed90
cap model cache size using bytes, not # models
2023-05-07 18:07:28 -04:00
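A hedged sketch of the idea behind this commit: an LRU cache whose eviction trigger is total bytes held rather than entry count. The class and method names below are hypothetical, not InvokeAI's actual model-cache API.

```python
from collections import OrderedDict
from typing import Any

class ByteCappedCache:
    """LRU cache bounded by total size in bytes, not by number of entries."""

    def __init__(self, max_bytes: int):
        self.max_bytes = max_bytes
        self._entries: "OrderedDict[str, tuple[Any, int]]" = OrderedDict()
        self._total_bytes = 0

    def put(self, key: str, value: Any, size_bytes: int) -> None:
        if key in self._entries:
            self._total_bytes -= self._entries.pop(key)[1]
        self._entries[key] = (value, size_bytes)
        self._total_bytes += size_bytes
        # Evict least-recently-used entries until we fit under the byte cap,
        # always keeping at least the entry we just inserted.
        while self._total_bytes > self.max_bytes and len(self._entries) > 1:
            _, (_, evicted_size) = self._entries.popitem(last=False)
            self._total_bytes -= evicted_size

    def get(self, key: str) -> Any:
        value, size = self._entries.pop(key)
        self._entries[key] = (value, size)  # re-insert to mark as most recently used
        return value
```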
psychedelicious
440912dcff
feat(ui): make base log level debug
2023-05-07 15:36:37 +10:00
psychedelicious
8b87a26e7e
feat(ui): support collect nodes
2023-05-07 15:36:37 +10:00
Lincoln Stein
647ffb2a0f
defined abstract base class for model manager service
2023-05-06 22:41:19 -04:00
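As an illustration of the pattern named in the commit, here is a minimal abstract base class for a model-manager service; the method names are placeholders, not the real interface.

```python
from abc import ABC, abstractmethod
from typing import Any, Optional

class ModelManagerServiceBase(ABC):
    """Abstract interface that concrete model-manager services must implement."""

    @abstractmethod
    def get_model(self, model_name: str, submodel: Optional[str] = None) -> Any:
        """Return a loaded model (or one of its submodels) by name."""

    @abstractmethod
    def model_exists(self, model_name: str) -> bool:
        """Report whether a model with this name is known to the manager."""
```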
Lincoln Stein
05a27bda5e
generalize model loading support, include loras/embeds
2023-05-06 15:58:44 -04:00
Lincoln Stein
350b1421bb
Merge branch 'main' into lstein/bugfix/logger-namespace
2023-05-06 08:14:44 -04:00
Lincoln Stein
a8cfa3565c
Merge branch 'lstein/new-model-manager' of github.com:invoke-ai/InvokeAI into lstein/new-model-manager
2023-05-06 08:14:15 -04:00
Lincoln Stein
e0214a32bc
mostly ported to new manager API; needs testing
2023-05-06 00:44:12 -04:00
Lincoln Stein
af8c7c7d29
model manager rewritten to use model_cache; API changed!
2023-05-05 19:32:28 -04:00
StAlKeR7779
a80fe05e23
Rename compel node
2023-05-05 21:30:16 +03:00
StAlKeR7779
58d7833c5c
Review changes
2023-05-05 21:09:29 +03:00
StAlKeR7779
5012f61599
Separate conditionings back to positive and negative
2023-05-05 15:47:51 +03:00
Lincoln Stein
a4e36bc02a
when a model is forcibly moved into RAM, update the loaded_models set
2023-05-04 23:28:03 -04:00
Lincoln Stein
2e9bec15e7
Merge branch 'main' into lstein/new-model-manager
2023-05-04 23:19:38 -04:00
Lincoln Stein
68bc0112fa
implement lazy GPU offloading and ref counting
2023-05-04 23:15:32 -04:00
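The commit title names two mechanisms; a rough sketch of how they can combine is below. A model is moved to the GPU only when first used, a reference count tracks active users, and only models whose count has returned to zero are eligible to be offloaded back to CPU RAM. All names here are hypothetical.

```python
from contextlib import contextmanager
import torch

class GpuModelSlot:
    def __init__(self, model: torch.nn.Module, device: str = "cuda"):
        self.model = model
        self.device = device
        self.refcount = 0

    @contextmanager
    def use(self):
        if self.refcount == 0:
            self.model.to(self.device)  # lazy load: only move to GPU when first needed
        self.refcount += 1
        try:
            yield self.model
        finally:
            self.refcount -= 1  # offload happens later, and only once refcount is zero

    def maybe_offload(self) -> None:
        """Called under memory pressure; models still in use are never offloaded."""
        if self.refcount == 0:
            self.model.to("cpu")
```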
blessedcoolant
85c33823c3
Merge branch 'main' into feat/compel_node
2023-05-05 14:41:45 +12:00
psychedelicious
e04ada1319
Merge branch 'main' into patch-1
2023-05-05 10:38:45 +10:00
Lincoln Stein
d866dcb3d2
close #3343
2023-05-04 20:30:59 -04:00
StAlKeR7779
81ec476f3a
Revert seed field addition
2023-05-04 21:50:40 +03:00
StAlKeR7779
1e6adf0a06
Fix default graph and test
2023-05-04 21:14:31 +03:00
StAlKeR7779
7d221e2518
Combine conditioning into one field (better fit for multiple-type conditioning like perp-neg)
2023-05-04 20:14:22 +03:00
Lincoln Stein
a273bdbdc1
Merge branch 'main' into lstein/new-model-manager
2023-05-03 18:09:29 -04:00
StAlKeR7779
56d3cbead0
Merge branch 'main' into feat/compel_node
2023-05-04 00:28:33 +03:00
Lincoln Stein
4687ad4ed6
Merge branch 'main' into enhance/invokeai-logs
2023-05-03 13:36:06 -04:00
Lincoln Stein
8a0ec0fa0f
Merge branch 'main' into lstein/new-model-manager
2023-05-03 13:30:50 -04:00
Lincoln Stein
e1fed52c66
finished work on model cache and its regression test
2023-05-03 12:38:18 -04:00
psychedelicious
994b247f8e
feat(ui): do not persist gallery images
...
- I've sorted out the issues that made *not* persisting troublesome; those fixes will be rolled out with canvas
- Also realized that persisting gallery images very quickly fills up localStorage, so we can't really do it anyway
2023-05-03 23:41:48 +10:00
Lincoln Stein
bb959448c1
implement hashing for local & remote models
2023-05-02 16:52:27 -04:00
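A rough sketch of what model hashing can look like: local models are hashed by file content, while a remote model (with no weights on disk yet) can only be identified by hashing its stable identifier. Function names and the remote scheme are assumptions for illustration.

```python
import hashlib
from pathlib import Path

def hash_local_model(path: Path) -> str:
    """SHA-256 over a model file, or over every file under a model directory."""
    digest = hashlib.sha256()
    files = [path] if path.is_file() else sorted(p for p in path.rglob("*") if p.is_file())
    for file in files:
        with open(file, "rb") as fh:
            for chunk in iter(lambda: fh.read(1 << 20), b""):
                digest.update(chunk)
    return digest.hexdigest()

def hash_remote_model(repo_id: str, revision: str = "main") -> str:
    """No local weights to read, so hash the repo id and revision instead."""
    return hashlib.sha256(f"{repo_id}@{revision}".encode()).hexdigest()
```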
psychedelicious
0419f50ab0
chore(ui): bump react-virtuoso
...
- Resolves an issue with gallery not rendering all items
2023-05-02 20:15:29 +10:00
psychedelicious
f9f40adcdc
fix(nodes): fix t2i graph
...
Removed width and height edges.
2023-05-02 13:11:28 +10:00
Lincoln Stein
2e2abf6ea6
caching of subparts working
2023-05-01 22:57:30 -04:00
psychedelicious
3264d30b44
feat(nodes): allow multiples of 8 for dimensions
2023-05-02 12:01:52 +10:00
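A "multiple of 8" constraint like this is easy to express declaratively; a small pydantic sketch is below (field names and defaults are generic placeholders, not the actual node schema).

```python
from pydantic import BaseModel, Field

class ImageDimensions(BaseModel):
    # multiple_of=8 rejects any width/height that is not divisible by 8
    width: int = Field(512, ge=64, multiple_of=8, description="Image width in pixels")
    height: int = Field(512, ge=64, multiple_of=8, description="Image height in pixels")
```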
psychedelicious
4d885653e9
feat(ui): tidy
2023-05-02 11:27:08 +10:00
psychedelicious
475b6bef53
feat(ui): use windowing for gallery
...
Vastly improves gallery performance when many images are loaded.
- `react-virtuoso` for the virtualized list
- `overlayscrollbars` for the scrollbar
2023-05-02 11:27:08 +10:00
Eugene
d39de0ad38
fix(nodes): fix duplicate Invoker start/stop events
2023-05-01 18:24:37 -04:00
Eugene
d14a7d756e
nodes-api: enforce single thread for the processor
...
On hyperthreaded CPUs we get two threads operating on the queue by default on each core. This causes two threads to process queue items concurrently, which results in pytorch errors and sometimes generates garbage. Locking this to a single thread makes sense because we are bound by the number of GPUs in the system, not by CPU cores. To parallelize across GPUs we should instead start multiple processors (and use async instead of threading).
Fixes #3289
2023-05-01 18:24:37 -04:00
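A sketch of the single-worker pattern the commit body describes: one dedicated thread drains the work queue, so items are processed strictly one at a time regardless of how many CPU cores or hyperthreads are available. `process_item` and the queue contents are placeholders, not the actual nodes-api code.

```python
import queue
import threading
from typing import Any, Optional

work_queue: "queue.Queue[Optional[Any]]" = queue.Queue()

def process_item(item: Any) -> None:
    ...  # run the invocation graph for this queue item

def worker() -> None:
    while True:
        item = work_queue.get()
        if item is None:  # sentinel: shut the worker down
            work_queue.task_done()
            break
        try:
            process_item(item)
        finally:
            work_queue.task_done()

# Exactly one worker thread: queue items can never be processed concurrently.
processor_thread = threading.Thread(target=worker, daemon=True)
processor_thread.start()
```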
Lincoln Stein
b050c1bb8f
use logger in ApiDependencies
2023-05-01 16:27:44 -04:00
psychedelicious
276dfc591b
feat(ui): disable w/h when img2img & not fit
2023-05-01 17:28:22 +10:00