Lincoln Stein
b31a6ff605
fix reversed args in _model_key() call
2023-05-13 21:11:06 -04:00
Sergey Borisov
1f602e6143
Fix - apply precision to text_encoder
2023-05-14 03:46:13 +03:00
Sergey Borisov
039fa73269
Change SDModelType enum to string; fixes: model unload negative lock count, scheduler load error, safetensors conversion, wrong logic in del_model, wrong metadata parsing in web
2023-05-14 03:06:26 +03:00
Lincoln Stein
2204e47596
allow submodels to be fetched independent of parent pipeline
2023-05-13 16:54:47 -04:00
Lincoln Stein
d8b1f29066
proxy SDModelInfo so that it can be used directly as context
2023-05-13 16:29:18 -04:00
Lincoln Stein
b23c9f1da5
get Tuple type hint syntax right
2023-05-13 14:59:21 -04:00
Lincoln Stein
72967bf118
convert add_model(), del_model(), list_models() etc. to use bifurcated names
2023-05-13 14:44:44 -04:00
Sergey Borisov
3b2a054f7a
Add model loader node; unet, clip, vae fields; change compel node to clip field
2023-05-13 04:37:20 +03:00
Sergey Borisov
131145eab1
A big refactor of the model manager (IMHO)
2023-05-12 23:13:34 +03:00
Lincoln Stein
df5b968954
model manager now running as a service
2023-05-11 21:24:29 -04:00
Lincoln Stein
8ad8c5c67a
resolve conflicts with main
2023-05-11 00:19:20 -04:00
Lincoln Stein
4627910c5d
added a wrapper model_manager_service and model events
2023-05-11 00:09:19 -04:00
psychedelicious
49db6f4fac
fix(nodes): fix trivial typing issues
2023-05-11 11:55:51 +10:00
psychedelicious
206e6b1730
feat(nodes): wip inpaint node
2023-05-11 11:55:51 +10:00
Lincoln Stein
99c692f397
check that model name matches format
2023-05-09 23:46:59 -04:00
Lincoln Stein
3d85e769ce
clean up ckpt handling
...
- remove legacy ckpt loading code from model_cache
- added placeholders for lora and textual inversion model loading
2023-05-09 22:44:58 -04:00
Lincoln Stein
9cb962cad7
ckpt model conversion now done in ModelCache
2023-05-08 23:39:44 -04:00
Lincoln Stein
a108155544
added StALKeR7779's great model-size calculation routine
2023-05-08 21:47:03 -04:00
Lincoln Stein
c15b49c805
implement StALKeR7779 requested API for fetching submodels
2023-05-07 23:18:17 -04:00
Lincoln Stein
fd63e36822
optimize subfolder so that it returns submodel if parent is in RAM
2023-05-07 21:39:11 -04:00
Lincoln Stein
4649920074
adjust t2i to work with new model structure
2023-05-07 19:06:49 -04:00
Lincoln Stein
667171ed90
cap model cache size using bytes, not # models
2023-05-07 18:07:28 -04:00
Lincoln Stein
647ffb2a0f
defined abstract base class for model manager service
2023-05-06 22:41:19 -04:00
Lincoln Stein
05a27bda5e
generalize model loading support, include loras/embeds
2023-05-06 15:58:44 -04:00
Lincoln Stein
e0214a32bc
mostly ported to new manager API; needs testing
2023-05-06 00:44:12 -04:00
Lincoln Stein
af8c7c7d29
model manager rewritten to use model_cache; API changed!
2023-05-05 19:32:28 -04:00
Lincoln Stein
a4e36bc02a
when a model is forcibly moved into RAM, update the loaded_models set
2023-05-04 23:28:03 -04:00
Lincoln Stein
68bc0112fa
implement lazy GPU offloading and ref counting
2023-05-04 23:15:32 -04:00
Lincoln Stein
d866dcb3d2
close #3343
2023-05-04 20:30:59 -04:00
Lincoln Stein
a273bdbdc1
Merge branch 'main' into lstein/new-model-manager
2023-05-03 18:09:29 -04:00
Lincoln Stein
e1fed52c66
finished work on the model cache and its regression test
2023-05-03 12:38:18 -04:00
Lincoln Stein
bb959448c1
implement hashing for local & remote models
2023-05-02 16:52:27 -04:00
Lincoln Stein
2e2abf6ea6
caching of subparts working
2023-05-01 22:57:30 -04:00
Lincoln Stein
974841926d
logger is an interchangeable service
2023-04-29 10:48:50 -04:00
Lincoln Stein
8db20e0d95
rename log to logger throughout
2023-04-29 09:43:40 -04:00
Lincoln Stein
6b79e2b407
Merge branch 'main' into enhance/invokeai-logs
...
- resolve conflicts
- remove unused code identified by pyflakes
2023-04-28 10:09:46 -04:00
Lincoln Stein
956ad6bcf5
add redesigned model cache for diffusers & transformers
2023-04-28 00:41:52 -04:00
Lincoln Stein
31a904b903
Merge branch 'main' into bugfix/prevent-cli-crash
2023-04-25 03:28:45 +01:00
Lincoln Stein
4fa5c963a1
Merge branch 'main' into bugfix/prevent-cli-crash
2023-04-25 03:10:51 +01:00
Lincoln Stein
b164330e3c
replaced remaining print statements with log.*()
2023-04-18 20:49:00 -04:00
Lincoln Stein
69433c9f68
Merge branch 'main' into lstein/enhance/diffusers-0.15
2023-04-18 19:21:53 -04:00
Lincoln Stein
bd8ffd36bf
bump to diffusers 0.15.1, remove dangling module
2023-04-18 19:20:38 -04:00
Tim Cabbage
f6cdff2c5b
[bug] #3218 HuggingFace API off when --no-internet set
...
https://github.com/invoke-ai/InvokeAI/issues/3218
The HuggingFace API will not be queried if the --no-internet flag is set
2023-04-17 16:53:31 +02:00
Lincoln Stein
aab262d991
Merge branch 'main' into bugfix/prevent-cli-crash
2023-04-14 20:12:38 -04:00
Lincoln Stein
47b9910b48
update to diffusers 0.15 and fix code for name changes
...
- This is a port of #3184 to the main branch
2023-04-14 15:35:03 -04:00
Lincoln Stein
0b0e6fe448
convert remainder of print() to log.info()
2023-04-14 15:15:14 -04:00
Lincoln Stein
c132dbdefa
change "ialog" to "log"
2023-04-11 18:48:20 -04:00
Lincoln Stein
f3081e7013
add module-level getLogger() method
2023-04-11 12:23:13 -04:00
Lincoln Stein
f904f14f9e
add missing module-level methods
2023-04-11 11:10:43 -04:00
Lincoln Stein
8917a6d99b
add logging support
...
This commit adds invokeai.backend.util.logging, which provides support
for formatted console and logfile messages that follow the status
reporting conventions of earlier InvokeAI versions.
Examples:
### A critical error (logging.CRITICAL)
*** A non-fatal error (logging.ERROR)
** A warning (logging.WARNING)
>> Informational message (logging.INFO)
| Debugging message (logging.DEBUG)
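The level-specific decorations listed above could be produced by a custom formatter along these lines (a hedged sketch only; the actual formatter in invokeai.backend.util.logging may be implemented differently):

```python
import logging

# Hypothetical mapping from log levels to the InvokeAI-style prefixes
# shown in the examples above.
LEVEL_PREFIXES = {
    logging.CRITICAL: "### ",
    logging.ERROR: "*** ",
    logging.WARNING: "** ",
    logging.INFO: ">> ",
    logging.DEBUG: "| ",
}

class PrefixFormatter(logging.Formatter):
    """Prepend a level-dependent prefix to each log message."""

    def format(self, record: logging.LogRecord) -> str:
        prefix = LEVEL_PREFIXES.get(record.levelno, "")
        return prefix + record.getMessage()
```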
This style logs everything through a single logging object and is
identical to using Python's `logging` module. The commonly-used
module-level logging functions are implemented as simple pass-thrus
to logging:
import invokeai.backend.util.logging as ialog
ialog.debug('this is a debugging message')
ialog.info('this is an informational message')
ialog.log(logging.CRITICAL, 'get out of dodge')
ialog.disable(level=logging.INFO)
ialog.basicConfig(filename='/var/log/invokeai.log')
Internally, the invokeai logging module creates a new default logger
named "invokeai" so that its logging does not interfere with other
modules' use of the vanilla logging module. So `logging.error("foo")`
will go through the regular logging path and not add the additional
message decorations.
For more control, the logging module's object-oriented logging style
is also supported. The API is identical to the vanilla logging
usage. In fact, the only thing that has changed is that the
getLogger() method adds a custom formatter to the log messages.
import logging
from invokeai.backend.util.logging import InvokeAILogger
logger = InvokeAILogger.getLogger(__name__)
fh = logging.FileHandler('/var/invokeai.log')
logger.addHandler(fh)
logger.critical('this will be logged to both the console and the log file')
2023-04-11 10:46:38 -04:00