Lincoln Stein
ff63433591
Merge branch 'main' into lstein/config-management-fixes
2023-06-02 22:56:43 -04:00
Lincoln Stein
31281d7181
Merge branch 'main' into lstein/logging-improvements
2023-06-02 22:56:13 -04:00
Lincoln Stein
72d1e4e404
fix bug in model_manager that prevented import of inpainting models
2023-06-02 22:39:26 -04:00
Lincoln Stein
91918e648b
dynamic display of log messages now working
2023-06-02 22:24:46 -04:00
Lincoln Stein
1390b65a9c
new TUI is fully functional; needs some polishing
2023-06-02 17:20:50 -04:00
Lincoln Stein
41f7758977
listing, downloading and deleting LoRAs working; TI support pending
2023-06-02 00:40:15 -04:00
Lincoln Stein
98773b20ac
merge with main
2023-06-01 18:09:49 -04:00
Lincoln Stein
e9821ab711
implemented tabbed model selection; not wired to backend yet
2023-06-01 00:31:46 -04:00
Lincoln Stein
d6530df635
rename invokeai.backend.config to invokeai.backend.install
2023-05-31 21:34:20 -04:00
Sergey Borisov
b47786e846
First working TI draft
2023-05-31 02:12:27 +03:00
Lincoln Stein
1632ac6b9f
add controlnet model downloading
2023-05-30 13:49:43 -04:00
Sergey Borisov
69ccd3a0b5
Fixes for checkpoint models
2023-05-30 19:12:47 +03:00
user1
b1b94a3d56
Fixed problem with inpainting after controlnet support added to main.
...
Problem was that controlnet support involved adding **kwargs to method calls down in the denoising loop, and AddsMaskLatents didn't accept **kwargs. So it was changed to accept and pass on **kwargs.
2023-05-30 08:01:21 -04:00
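A minimal sketch of the shape of that fix (simplified signature, not the actual node code): the wrapper accepts arbitrary keyword arguments and forwards them to the wrapped model instead of raising TypeError.

    import torch

    class AddsMaskLatents:
        """Sketch: wraps a UNet forward callable, concatenating mask and
        initial-image latents onto the input before delegating."""

        def __init__(self, forward, mask, initial_image_latents):
            self.forward = forward
            self.mask = mask
            self.initial_image_latents = initial_image_latents

        def __call__(self, latents, t, text_embeddings, **kwargs):
            model_input = torch.cat(
                [latents, self.mask, self.initial_image_latents], dim=1
            )
            # Pass extra kwargs (e.g. ControlNet residuals) straight through.
            return self.forward(model_input, t, text_embeddings, **kwargs)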
Lincoln Stein
c9ee42450e
added controlnet models to frontend; backend needs to be done
2023-05-30 00:38:37 -04:00
Lincoln Stein
10fe31c2a1
Merge branch 'main' into lstein/config-management-fixes
2023-05-29 21:03:03 -04:00
Sergey Borisov
79de9047b5
First working lora implementation
2023-05-30 01:11:00 +03:00
Lincoln Stein
d37b08a7dd
Merge branch 'main' into release/make-web-dist-startable
2023-05-28 19:46:09 -04:00
user1
f2b41c60ff
Cleaning up prior to submitting ControlNet PR. Mostly turning off diagnostic printing. Also fixed error when there is no controlnet input.
2023-05-26 21:44:00 -04:00
user1
11fc7e40a5
Refactored ControlNet support to consolidate multiple parameters into a data struct. Also redid how multiple controlnets are handled.
2023-05-26 21:44:00 -04:00
user1
e2a94be336
Added resizing of controlnet image based on noise latent. Fixes a tensor mismatch issue.
2023-05-26 21:44:00 -04:00
user1
901a277959
Core implementation of ControlNet and MultiControlNet.
2023-05-26 21:44:00 -04:00
user1
a2a2cfa765
Added resizing of controlnet image based on noise latent. Fixes a tensor mismatch issue.
2023-05-26 21:44:00 -04:00
user1
768cfe3aab
Core implementation of ControlNet and MultiControlNet.
2023-05-26 21:44:00 -04:00
user1
297931f5d9
Cleaning up prior to submitting ControlNet PR. Mostly turning off diagnostic printing. Also fixed error when there is no controlnet input.
2023-05-26 21:44:00 -04:00
user1
f613c073c1
Added support for specifying which step iteration to start using
...
each ControlNet, and which step to end using each ControlNet (specified as a fraction of total steps)
2023-05-26 21:44:00 -04:00
user1
63d248622c
Refactored ControlNet support to consolidate multiple parameters into a data struct. Also redid how multiple controlnets are handled.
2023-05-26 21:44:00 -04:00
user1
0096fb2790
Added resizing of controlnet image based on noise latent. Fixes a tensor mismatch issue.
2023-05-26 21:44:00 -04:00
user1
940e3b6635
Core implementation of ControlNet and MultiControlNet.
2023-05-26 21:44:00 -04:00
user1
714ad6dbb8
Fixed use of ControlNet control_weight parameter
2023-05-26 21:44:00 -04:00
user1
5d5cdc7716
Added resizing of controlnet image based on noise latent. Fixes a tensor mismatch issue.
2023-05-26 21:44:00 -04:00
user1
5e4c0217c7
Switching to ControlField for output from controlnet nodes.
2023-05-26 21:44:00 -04:00
user1
a91dee87d0
Added support for ControlNet and MultiControlNet to legacy non-nodal Txt2Img in backend/generator. Although backend/generator will likely disappear by v3.x, right now they are very useful for testing core ControlNet and MultiControlNet functionality while node codebase is rapidly evolving.
2023-05-26 21:44:00 -04:00
user1
5ff98a4179
Core implementation of ControlNet and MultiControlNet.
2023-05-26 21:44:00 -04:00
Lincoln Stein
5f8f51436a
merge with main; fix conflicts
2023-05-25 22:40:45 -04:00
Lincoln Stein
5659d10778
remove unused function get_root()
2023-05-25 22:06:37 -04:00
Lincoln Stein
e56965ad76
documentation tweaks; fixed initialization in a couple more places
2023-05-25 21:10:00 -04:00
Lincoln Stein
2273b3a8c8
fix potential race condition in config system
2023-05-25 20:41:26 -04:00
Lincoln Stein
9110838fe4
Merge branch 'main' into release/make-web-dist-startable
2023-05-25 19:06:09 -04:00
Lincoln Stein
ca7b267326
raise error if syslogging requested and syslog lib not available
2023-05-25 10:10:46 -04:00
Lincoln Stein
88776fb2de
get invokeai_configure working again
2023-05-25 09:39:45 -04:00
Lincoln Stein
b87f3043ae
add logging configuration
2023-05-24 23:57:15 -04:00
psychedelicious
d14b02e93f
feat(logger): fix logger type issues
2023-05-24 11:30:47 -04:00
psychedelicious
fb0b63c580
fix(nodes): fix seam painting
...
The problem was the same seed was getting used for the seam painting pass, causing the fried look.
Same issue as if you do img2img on a txt2img with the same seed/prompt.
Thanks to @hipsterusername for teaming up to debug this. We got pretty deep into the weeds.
2023-05-25 00:58:03 +10:00
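A toy illustration of the failure mode (not the node code itself): reusing the first pass's seed regenerates identical noise for the seam pass, so the fix is to use a different seed for the second pass.

    import torch

    def noise_for(seed: int, shape=(1, 4, 64, 64)):
        g = torch.Generator().manual_seed(seed)
        return torch.randn(shape, generator=g)

    first_pass = noise_for(42)
    seam_buggy = noise_for(42)      # same seed -> identical noise -> "fried" look
    seam_fixed = noise_for(42 + 1)  # fresh seed for the seam-painting pass

    assert torch.equal(first_pass, seam_buggy)
    assert not torch.equal(first_pass, seam_fixed)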
Sergey Borisov
8e419a4f97
Revert weak references, as it can be done without them
2023-05-23 04:29:40 +03:00
Sergey Borisov
2533209326
Rewrite cache to weak references
2023-05-23 03:48:22 +03:00
Lincoln Stein
d2dc1ed26f
make InvokeAI package installable
...
This commit makes InvokeAI 3.0 installable via PyPI and the
installer script.
Main changes:
1. Move static web pages into `invokeai/frontend/web` and modify the
API to look for them there. This allows pip to copy the files into the
distribution directory so that the user no longer has to be in the repo
root to launch.
2. Update invoke.sh and invoke.bat to launch the new web application
properly. This also changes the wording for launching the CLI from
"generate images" to "explore the InvokeAI node system," since I would
not recommend using the CLI to generate images routinely.
3. Fix a bug in the checkpoint converter script that was identified
during testing.
4. Better error reporting when the checkpoint converter fails.
5. Rebuild the front end.
2023-05-22 17:51:47 -04:00
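One plausible way to resolve static files relative to the installed package rather than the repo root (the `dist` subdirectory name and the FastAPI mount are assumptions for illustration, not necessarily what the commit does):

    from pathlib import Path
    import invokeai.frontend.web  # package the static pages now live in

    # Resolve the web root relative to the installed package, so launching
    # works from any working directory once pip has copied the files in.
    web_root = Path(invokeai.frontend.web.__file__).parent / "dist"

    # Example mount (FastAPI):
    # from fastapi.staticfiles import StaticFiles
    # app.mount("/", StaticFiles(directory=web_root, html=True), name="web")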
Lincoln Stein
27241cdde1
port more globals changes over
2023-05-18 17:17:45 -04:00
Lincoln Stein
259d6ec90d
fixup cachedir call
2023-05-18 14:52:16 -04:00
Lincoln Stein
a77c4c87b2
fixed logic error in resolution of model path
2023-05-18 14:35:34 -04:00
Lincoln Stein
d96175d127
resolve some undefined symbols in model_cache
2023-05-18 14:31:47 -04:00
Lincoln Stein
b1a99d772c
added method to convert vaes
2023-05-18 13:31:11 -04:00
Lincoln Stein
7ea995149e
fixes to env parsing, textual inversion & help text
...
- Make environment variable settings case InSenSiTive:
INVOKEAI_MAX_LOADED_MODELS and InvokeAI_Max_Loaded_Models
environment variables will both set `max_loaded_models`
- Updated realesrgan to use new config system.
- Updated textual_inversion_training to use new config system.
- Discovered a race condition when InvokeAIAppConfig is created
at module load time, which makes it impossible to customize
or replace the help message produced with --help on the command
line. To fix this, moved all instances of get_invokeai_config()
from module load time to object initialization time. Makes code
cleaner, too.
- Added `--from_file` argument to `invokeai-node-cli` and changed
github action to match. CI tests will hopefully work now.
2023-05-18 10:48:23 -04:00
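A sketch of the race described in the third bullet and its fix, with stand-in types (the real InvokeAIAppConfig has far more fields): resolve the config when the object is constructed, not when the module is imported, so later customization of the --help output can still take effect.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class InvokeAIAppConfig:  # stand-in for the real settings class
        max_loaded_models: int = 2

    def get_invokeai_config() -> InvokeAIAppConfig:
        return InvokeAIAppConfig()  # stand-in for the real singleton accessor

    # Problematic pattern: resolved once at import time, before the CLI
    # has a chance to customize anything.
    #   config = get_invokeai_config()   # runs at module load
    #   MAX_MODELS = config.max_loaded_models

    class TrainingScript:  # hypothetical consumer
        def __init__(self, config: Optional[InvokeAIAppConfig] = None):
            # Deferred lookup: only runs when an object is created.
            self.config = config or get_invokeai_config()
            self.max_loaded_models = self.config.max_loaded_models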
Sergey Borisov
fd82763412
Model manager draft
2023-05-18 03:56:52 +03:00
Eugene
f9710dd6ed
remove reference to legacy opt.hf_token, clean up whitespace in invokeai_configure
2023-05-17 20:39:00 -04:00
Lincoln Stein
8adff96e29
Merge branch 'main' into lstein/global-configuration
2023-05-17 14:37:09 -04:00
Lincoln Stein
7593dc19d6
complete several steps needed to make 3.0 installable
...
- invokeai-configure updated to work with new config system
- migrate invokeai.init to invokeai.yaml during configure
- replace legacy invokeai with invokeai-node-cli
- add ability to run an invocation directly from invokeai-node-cli command line
- update CI tests to work with new invokeai syntax
2023-05-17 14:13:27 -04:00
Lincoln Stein
b7c5a39685
make invokeai.yaml more hierarchical; fix list configuration bug
2023-05-17 12:19:19 -04:00
Lincoln Stein
eadfd239a8
update config script to work with new config system
2023-05-17 00:18:19 -04:00
Lincoln Stein
e971a7f35c
when migrating models.yaml, rename the original to models.yaml.orig
2023-05-16 22:37:53 -04:00
Lincoln Stein
8d75e50435
partial port of invokeai-configure
2023-05-16 01:50:01 -04:00
Lincoln Stein
cd16857f38
fix None in model_type
2023-05-16 00:13:44 -04:00
Lincoln Stein
1442f1cb8d
change model filter to None in second place
2023-05-16 00:03:57 -04:00
Lincoln Stein
4fe94a9315
list_models() now returns a dict of {type: {name: info}}
2023-05-15 23:44:08 -04:00
Lincoln Stein
7ef0d2aa35
merge with main
2023-05-15 09:07:17 -04:00
Lincoln Stein
c8f765cc06
improve debugging messages
2023-05-14 18:29:55 -04:00
Lincoln Stein
b9e9087dbe
do not manage GPU for pipelines if sequential_offloading is True
2023-05-14 18:09:38 -04:00
Lincoln Stein
63e465eb5c
tweaks to get_model() behavior
...
1. If an external VAE is specified in config file, then
get_model(submodel=vae) will return the external VAE, not the one
burnt into the parent diffusers pipeline.
2. The mechanism in (1) is generalized such that you can now have
"unet:", "text_encoder:" and similar stanzas in the config file.
Valid formats of these subsections:
    unet:
      repo_id: foo/bar
    unet:
      path: /path/to/local/folder
    unet:
      repo_id: foo/bar
      subfolder: unet
In the near future, these will also be used to attach external
parts to the pipeline, generalizing VAE behavior.
3. Accommodate callers (i.e. the WebUI) that are passing the
model key ("diffusers/stable-diffusion-1.5") to get_model()
instead of the tuple of model_name and model_type.
4. Fixed bug in VAE model attaching code.
5. Rebuilt web front end.
2023-05-14 16:50:59 -04:00
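A hedged sketch of the lookup order described in points 1 and 2, using stand-in loader functions (load_from_path and load_from_repo are hypothetical, not the real ModelManager API):

    from typing import Any, Optional

    def load_from_path(path: str) -> Any:
        ...  # stand-in: load a diffusers component from a local folder

    def load_from_repo(repo_id: str, subfolder: Optional[str] = None) -> Any:
        ...  # stand-in: load a diffusers component from a HF repo

    def get_submodel(stanzas: dict, pipeline: Any, submodel: str) -> Any:
        """Prefer an externally configured part (a 'vae:'/'unet:' stanza
        in the config file) over the one burnt into the parent pipeline."""
        override = stanzas.get(submodel)
        if override is None:
            # No stanza: fall back to the part baked into the pipeline.
            return getattr(pipeline, submodel)
        if "path" in override:
            return load_from_path(override["path"])
        return load_from_repo(override["repo_id"],
                              subfolder=override.get("subfolder"))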
Eugene Brodsky
c8a98a9a22
Merge branch 'main' into lstein/bugfix/compel
2023-05-14 14:43:18 -04:00
blessedcoolant
c4681774a5
Merge branch 'main' into logging-facelift
2023-05-15 02:08:29 +12:00
Damian Stewart
050add58d2
fix getting conditionings
2023-05-14 12:20:54 +02:00
Eugene
b72c9787a9
Revert "comment out customer_attention_context"
...
This reverts commit 8f8cd90787
.
Due to NameError: name 'options' is not defined
2023-05-14 00:37:55 -04:00
Eugene Brodsky
2623941d91
Merge branch 'main' into lstein/bugfix/compel
2023-05-13 22:23:59 -04:00
Lincoln Stein
baf5451fa0
Merge branch 'main' into lstein/new-model-manager
2023-05-13 22:01:34 -04:00
blessedcoolant
026d3260b4
Add Heun Karras Scheduler
2023-05-14 11:45:08 +10:00
Lincoln Stein
1103ab2844
merge with main
2023-05-13 21:35:19 -04:00
Lincoln Stein
b31a6ff605
fix reversed args in _model_key() call
2023-05-13 21:11:06 -04:00
Sergey Borisov
1f602e6143
Fix - apply precision to text_encoder
2023-05-14 03:46:13 +03:00
Sergey Borisov
039fa73269
Change SDModelType enum to string; fixes: negative lock count on model unload, scheduler load error, safetensors conversion, wrong logic in del_model, wrong metadata parsing in web
2023-05-14 03:06:26 +03:00
blessedcoolant
691e1bf829
Make debug messages cyan/blue
2023-05-14 09:06:57 +12:00
Lincoln Stein
2204e47596
allow submodels to be fetched independent of parent pipeline
2023-05-13 16:54:47 -04:00
Lincoln Stein
d8b1f29066
proxy SDModelInfo so that it can be used directly as context
2023-05-13 16:29:18 -04:00
Lincoln Stein
b23c9f1da5
get Tuple type hint syntax right
2023-05-13 14:59:21 -04:00
Lincoln Stein
72967bf118
convert add_model(), del_model(), list_models() etc to use bifurcated names
2023-05-13 14:44:44 -04:00
Sergey Borisov
3b2a054f7a
Add model loader node; unet, clip, vae fields; change compel node to clip field
2023-05-13 04:37:20 +03:00
Sergey Borisov
131145eab1
A big refactor of the model manager (IMHO)
2023-05-12 23:13:34 +03:00
Kent Keirsey
8f8cd90787
comment out custom_attention_context
2023-05-12 13:59:00 -04:00
blessedcoolant
d796ea7bec
feat: Logging Improvements
2023-05-13 02:13:49 +12:00
Eugene Brodsky
af060188bd
Merge branch 'main' into lstein/bugfix/compel
2023-05-12 08:22:18 -04:00
Kevin Turner
4caa1f19b2
fix(model manager): fix string formatting error on model checksum timer
2023-05-11 19:06:02 -07:00
Lincoln Stein
df5b968954
model manager now running as a service
2023-05-11 21:24:29 -04:00
Lincoln Stein
95d4bd3012
Merge branch 'lstein/bugfix/compel' of github.com:invoke-ai/InvokeAI into lstein/bugfix/compel
2023-05-11 21:13:29 -04:00
Lincoln Stein
037078c8ad
make InvokeAIDiffuserComponent.custom_attention_control a classmethod
2023-05-11 21:13:18 -04:00
blessedcoolant
f7dc171c4f
Rename default schedulers across the app
2023-05-12 03:44:20 +12:00
blessedcoolant
4b957edfec
Add DDPM Scheduler
2023-05-12 03:18:34 +12:00
blessedcoolant
46ca7718d9
Add DEIS Scheduler
2023-05-12 03:10:30 +12:00
blessedcoolant
b928d7a6e6
Change scheduler names to be accurate
...
_a = Ancestral
_k = Karras
2023-05-12 02:59:43 +12:00
blessedcoolant
8a836247c8
Add DPMPP Single, Euler Karras and DPMPP2 Multi Karras Schedulers
2023-05-12 02:23:33 +12:00
blessedcoolant
9a383e456d
Code-split SCHEDULER_MAP for reuse
2023-05-12 00:40:03 +12:00
blessedcoolant
3ffff023b2
Add missing key to scheduler_map
...
It was breaking because the sampler was not being reset, so each entry needs its own key. Will simplify this later.
2023-05-12 00:08:50 +12:00
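The general shape of such a map, sketched with real diffusers scheduler classes but an assumed structure (name -> (class, distinguishing kwargs), so every entry carries its own key and the scheduler is rebuilt fresh each time); the fallback mirrors the "Default to DDIM" commit below:

    from diffusers import (
        DDIMScheduler,
        DPMSolverMultistepScheduler,
        EulerAncestralDiscreteScheduler,
        EulerDiscreteScheduler,
    )

    SCHEDULER_MAP = {
        "ddim": (DDIMScheduler, {}),
        "euler": (EulerDiscreteScheduler, {}),
        "euler_a": (EulerAncestralDiscreteScheduler, {}),
        "dpmpp_2m": (DPMSolverMultistepScheduler, {}),
        "dpmpp_2m_k": (DPMSolverMultistepScheduler, {"use_karras_sigmas": True}),
    }

    def make_scheduler(name, current_scheduler):
        cls, extra = SCHEDULER_MAP.get(name, SCHEDULER_MAP["ddim"])
        # Rebuild from config every time so no sampler state leaks over.
        return cls.from_config(current_scheduler.config, **extra)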
blessedcoolant
d1029138d2
Default to DDIM if scheduler is missing
2023-05-11 22:54:35 +12:00
blessedcoolant
06b5800d28
Add UniPC Scheduler
2023-05-11 22:43:18 +12:00
Eugene Brodsky
3baa230077
Merge branch 'main' into lstein/bugfix/compel
2023-05-11 00:50:45 -04:00
Eugene
9e594f9018
pad conditioning tensors to same length
...
fixes crash when prompt length is greater than 75 tokens
2023-05-11 00:34:15 -04:00
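A minimal illustration of the idea (zero-padding here; the actual fix may pad with the tokenizer's padding-token embedding instead): bring both (batch, tokens, dim) conditioning tensors to the same token length before they are used together.

    import torch
    import torch.nn.functional as F

    def pad_to_same_length(cond, uncond):
        diff = cond.shape[1] - uncond.shape[1]
        if diff > 0:      # pad uncond along the token axis
            uncond = F.pad(uncond, (0, 0, 0, diff))
        elif diff < 0:    # pad cond instead
            cond = F.pad(cond, (0, 0, 0, -diff))
        return cond, uncond

    # e.g. a >75-token prompt yields 154 tokens vs. the 77-token uncond:
    c, u = pad_to_same_length(torch.randn(1, 154, 768), torch.randn(1, 77, 768))
    assert c.shape == u.shape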
Lincoln Stein
8ad8c5c67a
resolve conflicts with main
2023-05-11 00:19:20 -04:00
Lincoln Stein
4627910c5d
added a wrapper model_manager_service and model events
2023-05-11 00:09:19 -04:00
psychedelicious
49db6f4fac
fix(nodes): fix trivial typing issues
2023-05-11 11:55:51 +10:00
psychedelicious
206e6b1730
feat(nodes): wip inpaint node
2023-05-11 11:55:51 +10:00
Lincoln Stein
5d5157fc65
make conditioning.py work with compel 1.1.5
2023-05-10 18:08:33 -04:00
Lincoln Stein
99c692f397
check that model name matches format
2023-05-09 23:46:59 -04:00
Lincoln Stein
3d85e769ce
clean up ckpt handling
...
- remove legacy ckpt loading code from model_cache
- added placeholders for lora and textual inversion model loading
2023-05-09 22:44:58 -04:00
Lincoln Stein
9cb962cad7
ckpt model conversion now done in ModelCache
2023-05-08 23:39:44 -04:00
Lincoln Stein
a108155544
added StALKeR7779's great model size calculating routine
2023-05-08 21:47:03 -04:00
Lincoln Stein
c15b49c805
implement StALKeR7779 requested API for fetching submodels
2023-05-07 23:18:17 -04:00
Lincoln Stein
fd63e36822
optimize subfolder so that it returns submodel if parent is in RAM
2023-05-07 21:39:11 -04:00
Lincoln Stein
4649920074
adjust t2i to work with new model structure
2023-05-07 19:06:49 -04:00
Lincoln Stein
667171ed90
cap model cache size using bytes, not # models
2023-05-07 18:07:28 -04:00
Lincoln Stein
647ffb2a0f
defined abstract baseclass for model manager service
2023-05-06 22:41:19 -04:00
Lincoln Stein
afd2e32092
Merge branch 'main' into lstein/global-configuration
2023-05-06 21:20:25 -04:00
Lincoln Stein
05a27bda5e
generalize model loading support, include loras/embeds
2023-05-06 15:58:44 -04:00
Lincoln Stein
e0214a32bc
mostly ported to new manager API; needs testing
2023-05-06 00:44:12 -04:00
Lincoln Stein
af8c7c7d29
model manager rewritten to use model_cache; API changed!
2023-05-05 19:32:28 -04:00
Lincoln Stein
a4e36bc02a
when model is forcibly moved into RAM update loaded_models set
2023-05-04 23:28:03 -04:00
Lincoln Stein
68bc0112fa
implement lazy GPU offloading and ref counting
2023-05-04 23:15:32 -04:00
Lincoln Stein
d866dcb3d2
close #3343
2023-05-04 20:30:59 -04:00
Lincoln Stein
e4196bbe5b
adjust non-app modules to use new config system
2023-05-04 00:43:51 -04:00
Lincoln Stein
15ffb53e59
remove globals, args, generate and the legacy CLI
2023-05-03 23:36:51 -04:00
Lincoln Stein
90054ddf0d
use InvokeAISettings for app-wide configuration
2023-05-03 22:30:30 -04:00
Lincoln Stein
a273bdbdc1
Merge branch 'main' into lstein/new-model-manager
2023-05-03 18:09:29 -04:00
Lincoln Stein
e1fed52c66
work on model cache and its regression test finished
2023-05-03 12:38:18 -04:00
Lincoln Stein
bb959448c1
implement hashing for local & remote models
2023-05-02 16:52:27 -04:00
Lincoln Stein
2e2abf6ea6
caching of subparts working
2023-05-01 22:57:30 -04:00
Lincoln Stein
974841926d
logger is an interchangeable service
2023-04-29 10:48:50 -04:00
Lincoln Stein
8db20e0d95
rename log to logger throughout
2023-04-29 09:43:40 -04:00
Lincoln Stein
6b79e2b407
Merge branch 'main' into enhance/invokeai-logs
...
- resolve conflicts
- remove unused code identified by pyflakes
2023-04-28 10:09:46 -04:00
Lincoln Stein
956ad6bcf5
add redesigned model cache for diffusers & transformers
2023-04-28 00:41:52 -04:00
Lincoln Stein
31a904b903
Merge branch 'main' into bugfix/prevent-cli-crash
2023-04-25 03:28:45 +01:00
Lincoln Stein
4fa5c963a1
Merge branch 'main' into bugfix/prevent-cli-crash
2023-04-25 03:10:51 +01:00
Lincoln Stein
b164330e3c
replaced remaining print statements with log.*()
2023-04-18 20:49:00 -04:00
Lincoln Stein
69433c9f68
Merge branch 'main' into lstein/enhance/diffusers-0.15
2023-04-18 19:21:53 -04:00
Lincoln Stein
bd8ffd36bf
bump to diffusers 0.15.1, remove dangling module
2023-04-18 19:20:38 -04:00
Tim Cabbage
f6cdff2c5b
[bug] #3218 HuggingFace API off when --no-internet set
...
https://github.com/invoke-ai/InvokeAI/issues/3218
The Hugging Face API will not be queried if the --no-internet flag is set
2023-04-17 16:53:31 +02:00
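The gating pattern, sketched with diffusers' real local_files_only flag (the surrounding function is hypothetical):

    from diffusers import StableDiffusionPipeline

    def load_pipeline(repo_id: str, no_internet: bool):
        # With --no-internet set, never query the Hugging Face API;
        # local_files_only restricts diffusers to the on-disk cache.
        return StableDiffusionPipeline.from_pretrained(
            repo_id, local_files_only=no_internet
        )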
Lincoln Stein
aab262d991
Merge branch 'main' into bugfix/prevent-cli-crash
2023-04-14 20:12:38 -04:00
Lincoln Stein
47b9910b48
update to diffusers 0.15 and fix code for name changes
...
- This is a port of #3184 to the main branch
2023-04-14 15:35:03 -04:00
Lincoln Stein
0b0e6fe448
convert remainder of print() to log.info()
2023-04-14 15:15:14 -04:00
Lincoln Stein
c132dbdefa
change "ialog" to "log"
2023-04-11 18:48:20 -04:00
Lincoln Stein
f3081e7013
add module-level getLogger() method
2023-04-11 12:23:13 -04:00
Lincoln Stein
f904f14f9e
add missing module-level methods
2023-04-11 11:10:43 -04:00
Lincoln Stein
8917a6d99b
add logging support
...
This commit adds invokeai.backend.util.logging, which provides support
for formatted console and logfile messages that follow the status
reporting conventions of earlier InvokeAI versions.
Examples:
### A critical error (logging.CRITICAL)
*** A non-fatal error (logging.ERROR)
** A warning (logging.WARNING)
>> Informational message (logging.INFO)
| Debugging message (logging.DEBUG)
This style logs everything through a single logging object and is
identical to using Python's `logging` module. The commonly-used
module-level logging functions are implemented as simple pass-thrus
to logging:
import logging
import invokeai.backend.util.logging as ialog
ialog.debug('this is a debugging message')
ialog.info('this is an informational message')
ialog.log(logging.CRITICAL, 'get out of dodge')
ialog.disable(level=logging.INFO)
ialog.basicConfig(filename='/var/log/invokeai.log')
Internally, the invokeai logging module creates a new default logger
named "invokeai" so that its logging does not interfere with other
modules' use of the vanilla logging module. So `logging.error("foo")`
will go through the regular logging path and not add the additional
message decorations.
For more control, the logging module's object-oriented logging style
is also supported. The API is identical to the vanilla logging
usage. In fact, the only thing that has changed is that the
getLogger() method adds a custom formatter to the log messages.
import logging
from invokeai.backend.util.logging import InvokeAILogger
logger = InvokeAILogger.getLogger(__name__)
fh = logging.FileHandler('/var/invokeai.log')
logger.addHandler(fh)
logger.critical('this will be logged to both the console and the log file')
2023-04-11 10:46:38 -04:00
Lincoln Stein
5a4765046e
add logging support
...
This commit adds invokeai.backend.util.logging, which provides support
for formatted console and logfile messages that follow the status
reporting conventions of earlier InvokeAI versions.
Examples:
### A critical error (logging.CRITICAL)
*** A non-fatal error (logging.ERROR)
** A warning (logging.WARNING)
>> Informational message (logging.INFO)
| Debugging message (logging.DEBUG)
2023-04-11 09:33:28 -04:00
AbdBarho
de189f2db6
Increase chunk size when computing SHAs
2023-04-09 21:53:59 +02:00
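For reference, the standard chunked-hashing pattern the last commit tunes (the chunk size shown is illustrative, not the value chosen there):

    import hashlib

    def sha256_of_file(path, chunk_size=2**20):  # 1 MiB chunks
        """Hash a file without loading it all into memory; bigger chunks
        mean fewer Python-level reads on multi-GB model files."""
        sha = hashlib.sha256()
        with open(path, "rb") as f:
            while chunk := f.read(chunk_size):
                sha.update(chunk)
        return sha.hexdigest()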