Compare commits

...

242 Commits

Author SHA1 Message Date
dacfe6853e Update rc version, regenerate schema 2024-03-20 08:21:23 -07:00
2b093da4b0 Remove no longer used pwinput dependency 2024-03-20 08:15:37 -07:00
368c1b709c chore: v4.0.0rc3
RC version bump
2024-03-20 05:59:08 -07:00
2269253a6c docs: update installation docs
Remove/edit references to configure script.
2024-03-20 05:48:02 -07:00
3490aee247 tidy(installer): remove unused messages 2024-03-20 05:48:02 -07:00
f592ad3649 fix(installer): remove configure flow from installer 2024-03-20 05:48:02 -07:00
42a2bad936 fix(installer): remove deleted scripts from launcher
These scripts no longer exist and need to be removed from the launcher:
- invokeai-ti
- invokeai-merge
- invokeai-model-install
- invokeai-configure
2024-03-20 05:48:02 -07:00
ba2fd875ad fix(ui): typo 2024-03-20 16:26:14 +11:00
9d30a063e7 fix: remaining strings 2024-03-20 16:26:14 +11:00
dc9a9c0160 fix: not translated strings 2024-03-20 16:26:14 +11:00
d45931a0af fix(ui): localize text 2024-03-20 16:26:14 +11:00
c1de129bbc fix(ui): use refiner's seamless node for i2l VAE
Closes  #5995
2024-03-20 16:08:27 +11:00
bf852348aa Update pytorch and xFormers 2.1.2 -> 2.2.1 2024-03-20 15:16:08 +11:00
fc63419c6e fix(ui): refresh starter models on model add/update/delete 2024-03-20 15:05:25 +11:00
c356cabe97 chore(ui): lint 2024-03-20 15:05:25 +11:00
97fe6e483d fix(mm): do not attempt to reinstall starter model dependencies 2024-03-20 15:05:25 +11:00
1069303309 fix(config): remove configure arg & logic from docker image 2024-03-20 15:05:25 +11:00
eb607498bf fix(config): create parent dir when writing config file 2024-03-20 15:05:25 +11:00
bdb52cfcf7 feat(ui): set HF token in MM tab
- Display a toast on UI launch if the HF token is invalid
- Show form in MM if token is invalid or unable to be verified, let user set the token via this form
2024-03-20 15:05:25 +11:00
3f6f8199f6 chore(ui): typegen 2024-03-20 15:05:25 +11:00
9a5575b46b feat(mm): move HF token helper to route 2024-03-20 15:05:25 +11:00
dea9142cb8 tests: fix config test after changing config schema version format 2024-03-20 15:05:25 +11:00
02329df1df feat(config): write example config file out on app startup 2024-03-20 15:05:25 +11:00
f5337c7ce2 fix(config): handle relative paths to v3 models.yamls 2024-03-20 15:05:25 +11:00
b02f2da71d fix(config): handle legacy_conf_dir setting migration 2024-03-20 15:05:25 +11:00
6c13fa13ea fix(mm): regression from legacy conf dir change 2024-03-20 15:05:25 +11:00
040ea8f41b tidy: do not show msg when loading NSFW checker 2024-03-20 15:05:25 +11:00
13c72206d8 docs: update CONFIGURATION.md 2024-03-20 15:05:25 +11:00
96ef7e3889 docs: add link to docs to invokeai.yaml template 2024-03-20 15:05:25 +11:00
2eacbb4d9d fix(nodes): do not load NSFW checker model on startup
Just check if the path exists to determine if it is "available". When needed, load it.
2024-03-20 15:05:25 +11:00
0e51495071 chore(ui): lint 2024-03-20 15:05:25 +11:00
b378cfcb46 cleanup: remove unused scripts, cruft
App runs & tests pass.
2024-03-20 15:05:25 +11:00
6c558279dd feat(config): add CLI arg to specify config file
This allows users to create simple "profiles" via separate `invokeai.yaml` files.

- Remove `InvokeAIAppConfig.set_root()`, it's extraneous
- Remove `InvokeAIAppConfig.merge_from_file()`, it's extraneous
- Add `--config` to the app arg parser, add `InvokeAIAppConfig._config_file`, and consume in the config singleton getter
- `InvokeAIAppConfig.init_file_path` -> `InvokeAIAppConfig.config_file_path`
2024-03-20 15:05:25 +11:00
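
A rough illustration of the pattern described in the commit above: a singleton getter that consumes a `--config` argument. The `--config` flag and `get_config` come from the commit message; the argparse setup and the plain-dict YAML loading are assumptions for the sketch (the real config object is a pydantic model).

```python
# Hypothetical sketch: consume a --config CLI arg in a singleton config getter.
import argparse
from pathlib import Path

import yaml  # PyYAML


def parse_args() -> argparse.Namespace:
    parser = argparse.ArgumentParser(description="InvokeAI (sketch)")
    parser.add_argument(
        "--config",
        type=Path,
        default=Path("invokeai.yaml"),
        help="path to an alternate invokeai.yaml 'profile'",
    )
    return parser.parse_args()


_config: dict | None = None


def get_config() -> dict:
    """Return a singleton config dict, read from the file chosen via --config."""
    global _config
    if _config is None:
        args = parse_args()
        with open(args.config) as f:
            _config = yaml.safe_load(f) or {}
    return _config
```
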
bd3e8cbdfb feat(ui): add starter models tab to MM
Lists all starter models with an install button if the model is not yet installed.
2024-03-20 15:05:25 +11:00
aa689e5384 style(ui): tweak ModelBaseBadge style 2024-03-20 15:05:25 +11:00
484488dee4 feat(ui): add useStarterModelsToast
This displays a toast linking to the MM tab when there are no main models installed. It is a no-op when the `starterModels` feature is disabled.
2024-03-20 15:05:25 +11:00
e40b715f39 feat(ui): add starterModels feature
This can be disabled to prevent a toast from appearing, linking users to the model manager tab.
2024-03-20 15:05:25 +11:00
e8f4012b56 feat(ui): extract FetchingModelsLoader into reusable component 2024-03-20 15:05:25 +11:00
bc12ca9220 chore(ui): typegen 2024-03-20 15:05:25 +11:00
5ceaeb234d feat(mm): add starter models route
The models from INITIAL_MODELS.yaml have been recreated as a structured python object. This data is served on a new route. The model sources are compared against currently-installed models to determine if they are already installed or not.
2024-03-20 15:05:25 +11:00
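
A hypothetical sketch of the comparison described in the commit above; `StarterModel`, its fields, and the example source are illustrative names, not the actual InvokeAI classes or route.

```python
# Illustrative only: structured starter-model records whose sources are
# compared against the sources of currently-installed models.
from dataclasses import dataclass, field


@dataclass
class StarterModel:
    name: str
    source: str          # e.g. an HF repo id or a URL
    description: str = ""
    is_installed: bool = field(default=False)


STARTER_MODELS = [
    StarterModel(name="Example Model", source="org/example-model"),  # placeholder entry
]


def mark_installed(starters: list[StarterModel], installed_sources: set[str]) -> list[StarterModel]:
    """Flag each starter model as installed if its source matches an installed model."""
    for model in starters:
        model.is_installed = model.source in installed_sources
    return starters
```
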
429f87c60b fix(mm): HFModelSource string format
The dunder `__str__` method for `HFModelSource` was appending a colon `:` to the end of the source strings.
2024-03-20 15:05:25 +11:00
ee3096f616 feat(config): add flag to indicate if args were parsed
This flag acts as a proxy for the `get_config()` function to determine if the full application is running.

If it is, the config will set the root, do HF login, etc.

If not (e.g. it's called by an external script), all that stuff will be skipped.
2024-03-20 15:05:25 +11:00
6af6673a4f feat: move all config-related initialization to app
HF login, legacy yaml confs, and default init file are all handled during app setup.

All directories are created as they are needed by the app.

No need to check for a valid root dir - we will make it if it doesn't exist.
2024-03-20 15:05:25 +11:00
b173e4c08d tidy(config): type checker ignores + comment 2024-03-20 15:05:25 +11:00
059f869737 tidy(config): remove ignore_missing_core_models CLI arg and setting
This is now a no-op, with all models being downloaded when they are first requested.
2024-03-20 15:05:25 +11:00
813e679b77 feat: add hf_login util
This provides a simple way to supply an HF token. If HF reports that no valid token is available, the user is prompted for one until a valid token is provided or they press Ctrl+C to cancel.
2024-03-20 15:05:25 +11:00
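
A minimal sketch of an `hf_login`-style prompt loop, assuming the `huggingface_hub` package. The loop mirrors the commit description; the exact validation calls and the asterisk-masked input (the app used a `pwinput` fork) are not reproduced here.

```python
# Sketch only: validate/collect a Hugging Face token until valid or Ctrl+C.
from getpass import getpass  # hides input entirely; the app showed asterisks via pwinput

from huggingface_hub import HfApi, login


def hf_login() -> None:
    """Prompt for a Hugging Face token until a valid one is saved, or the user cancels."""
    api = HfApi()
    while True:
        try:
            api.whoami()  # raises if no valid token is currently stored
            return
        except Exception:
            pass
        try:
            token = getpass("Paste your Hugging Face token (Ctrl+C to skip): ")
        except KeyboardInterrupt:
            print("\nSkipping Hugging Face login.")
            return
        try:
            login(token=token)  # validates and stores the token
        except ValueError:
            print("That token does not appear to be valid; please try again.")
```
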
857e9c9b5f feat: add SuppressOutput util
This context manager suppresses/hides stdout.
2024-03-20 15:05:25 +11:00
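
A minimal sketch of a `SuppressOutput`-style context manager; the implementation below is an assumption, not the app's actual utility.

```python
# Sketch: temporarily redirect sys.stdout to os.devnull.
import os
import sys
from contextlib import contextmanager


@contextmanager
def suppress_output():
    """Hide anything written to stdout inside the `with` block."""
    original = sys.stdout
    with open(os.devnull, "w") as devnull:
        sys.stdout = devnull
        try:
            yield
        finally:
            sys.stdout = original


# Usage:
# with suppress_output():
#     print("this never reaches the terminal")
```
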
5c1aa02e7b fix(config): set default legacy_conf_dir to configs
It was `configs/stable-diffusion` before, which broke conversions.
2024-03-20 15:05:25 +11:00
0e3fb4e97a deps: add pwinput, pinned to specific fork
This simple package provides a cross-platform way to type a password on the CLI and have it show up as asterisks.

The fork, pending merge into the upstream package, adds support for Ctrl+C to cancel input.
2024-03-20 15:05:25 +11:00
6e882d3fd6 feat(config): dynamic ram cache size
Use the util function to calculate the RAM cache size on startup. This way, the `ram` setting will always be optimized for the system, even if the user adds or removes RAM. In other words, the default value is now dynamic.
2024-03-20 15:05:25 +11:00
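
One way such a dynamic default could be computed, assuming `psutil`; the fraction and floor used here are placeholders, not InvokeAI's actual heuristic.

```python
# Sketch: derive a default RAM cache size (in GB) from total system memory.
import psutil

GB = 2**30


def default_ram_cache_size(fraction: float = 0.25, floor_gb: float = 2.0) -> float:
    """Return a RAM cache size in GB based on the machine's total memory."""
    total_gb = psutil.virtual_memory().total / GB
    return round(max(floor_gb, total_gb * fraction), 2)


print(f"ram: {default_ram_cache_size()}")  # recomputed at startup, so it tracks RAM changes
```
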
fabef8b45b feat(mm): download upscaling & lama models as they are requested 2024-03-20 15:05:25 +11:00
97f16b2b7e fix(ui): fix model install progress display 2024-03-20 15:05:25 +11:00
609c2c0abf Fix: progress image preview for inpainting 2024-03-20 13:36:05 +11:00
fe5fa7f8cc chore: make ruff 2024-03-20 13:36:05 +11:00
8b30cbe81e chore: clean up old code comments 2024-03-20 13:36:05 +11:00
2af9286345 fix: denoise mask incorrectly applied after step 2024-03-20 13:36:05 +11:00
29b04b7e83 chore: bump nodes versions
Bump all nodes in prep for v4.0.0.
2024-03-20 10:28:07 +11:00
04aa97f0fd gh: update pr template 2024-03-20 10:19:04 +11:00
39fa8874fc undo 2024-03-20 10:05:46 +11:00
4e245e9331 fix refiner metadata 2024-03-20 10:05:46 +11:00
74a51571a0 Fix race condition causing hangs during model install unit tests (#5994)
* fix race condition causing hangs during model install unit tests

* remove extraneous sanity checks

---------

Co-authored-by: Lincoln Stein <lstein@gmail.com>
2024-03-19 20:54:49 +00:00
c87497fd54 record model_variant in t2i and clip_vision configs (#5989)
- Move base of t2i and clip_vision config models to DiffusersBase, which contains
  a field to record the model variant (e.g. "fp16")
- This restores the ability to load fp16 t2i and clip_vision models
- Also add defensive coding to load the vanilla model when the fp16 model
  has been replaced (or, more likely, the user's preferences changed since installation)

Co-authored-by: Lincoln Stein <lstein@gmail.com>
2024-03-19 20:14:12 +00:00
3f61c51c3a fix(ui): model list refreshes after changes
When consolidating all the model queries I messed up the query tags. Fixed now, so that when a model is installed, removed, or changed, the list refreshes.
2024-03-20 06:25:57 +11:00
07c9c0b0ab translationBot(ui): update translation (Russian)
Currently translated at 99.5% (1091 of 1096 strings)

Co-authored-by: Васянатор <ilabulanov339@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ru/
Translation: InvokeAI/Web UI
2024-03-19 22:34:53 +11:00
2322d3cbbe translationBot(ui): update translation (Japanese)
Currently translated at 52.5% (576 of 1096 strings)

translationBot(ui): update translation (Japanese)

Currently translated at 52.0% (570 of 1096 strings)

Co-authored-by: Gohsuke Shimada <ghoskay@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ja/
Translation: InvokeAI/Web UI
2024-03-19 22:34:53 +11:00
419ce02aae translationBot(ui): update translation (Italian)
Currently translated at 98.2% (1077 of 1096 strings)

translationBot(ui): update translation (Italian)

Currently translated at 98.2% (1077 of 1096 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2024-03-19 22:34:53 +11:00
629ccd059e translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

translationBot(ui): update translation files

Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2024-03-19 22:34:53 +11:00
2dfa51c2e5 translationBot(ui): update translation (Russian)
Currently translated at 99.0% (1518 of 1533 strings)

translationBot(ui): update translation (Russian)

Currently translated at 99.0% (1518 of 1533 strings)

Co-authored-by: Васянатор <ilabulanov339@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ru/
Translation: InvokeAI/Web UI
2024-03-19 22:34:53 +11:00
edde0fe174 translationBot(ui): update translation (Bulgarian)
Currently translated at 3.9% (61 of 1533 strings)

translationBot(ui): update translation (Bulgarian)

Currently translated at 1.8% (28 of 1533 strings)

translationBot(ui): added translation (Bulgarian)

Co-authored-by: Sufi2425 <sufi24251@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/bg/
Translation: InvokeAI/Web UI
2024-03-19 22:34:53 +11:00
3e46f7a010 translationBot(ui): update translation (Italian)
Currently translated at 97.8% (1510 of 1543 strings)

translationBot(ui): update translation (Italian)

Currently translated at 98.1% (1503 of 1532 strings)

translationBot(ui): update translation (Italian)

Currently translated at 98.1% (1503 of 1532 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2024-03-19 22:34:53 +11:00
faa555df20 chore(ui): lint 2024-03-19 22:11:48 +11:00
7a3e19227f feat(ui): display created_by using valueOrNull 2024-03-19 22:11:48 +11:00
e706afe8a6 feat(ui): add valueOrNull to useMetadataItem
In order to allow for null and undefined metadata values, this hook returned a symbol to indicate that parsing failed or was pending.

For values where the parsed value will never be null or undefined, it is useful to get the value or null (instead of a symbol).
2024-03-19 22:11:48 +11:00
acca197893 revert(ui): restore metadata parsers for created_by 2024-03-19 22:11:48 +11:00
aa2c404cab move created_by out of recall panel 2024-03-19 22:11:48 +11:00
300a4693ae prettier 2024-03-19 21:59:51 +11:00
820614e4d8 ruff 2024-03-19 21:59:51 +11:00
fe563f05fc tsc 2024-03-19 21:59:51 +11:00
d89e653588 fix(ui): remove image_resolution from colormap 2024-03-19 21:59:51 +11:00
4e9207a10b fix(worker): remove resolution from zoe as it seems to break it 2024-03-19 21:59:51 +11:00
8c6c33a315 feat(ui): update processor constants to calculate default resolution based on current base model, add image_resolution to the processors that didn't have it in the UI as a configurable option
2024-03-19 21:59:51 +11:00
ed0f9f7d66 feat(worker): add image_resolution as option for all cnet processors 2024-03-19 21:59:51 +11:00
b25850a585 typegen 2024-03-19 21:59:51 +11:00
94257e35f5 Comment out test failing to run due to issue in model install service 2024-03-19 01:16:11 -04:00
19e16384f7 Remove redundant embedding file read 2024-03-19 01:16:11 -04:00
a3ed6e694c Use wait_for_job instead of wait_for_installs 2024-03-19 01:16:11 -04:00
20d9d10798 Wrap in try except for InvalidModelConfigException 2024-03-19 01:16:11 -04:00
77a70a8a9c Skip hashing in test_heuristic_import_with_type 2024-03-19 01:16:11 -04:00
06abea8db0 Increase timeout for test_heuristic_import_with_type, fix Url import 2024-03-19 01:16:11 -04:00
a28f0932e6 Run ruff 2024-03-19 01:16:11 -04:00
6968a068bb Fix test to run on windows vms 2024-03-19 01:16:11 -04:00
9d5b96c119 Pull format type setting out of model_type if statement 2024-03-19 01:16:11 -04:00
5daefccf77 Simplify logic for determining model type in probe 2024-03-19 01:16:11 -04:00
1f3c35ee90 Run Ruff 2024-03-19 01:16:11 -04:00
f78ed3a952 Add unit test 2024-03-19 01:16:11 -04:00
d38262a7ea Allow type field to be a string 2024-03-19 01:16:11 -04:00
5feb62d440 Allow users to specify model type and skip detection step of probe 2024-03-19 01:16:11 -04:00
f8df293d2c Revert "fix(mm): provide ckpt config as stream to diffusers"
This reverts commit 9d045964d6.
2024-03-19 14:24:54 +11:00
9d045964d6 fix(mm): provide ckpt config as stream to diffusers
Fixes converting ckpt main models
2024-03-19 09:24:28 +11:00
9fa9ebe386 fix(config): set ignore_missing_core_models when provided as CLI arg 2024-03-19 09:24:28 +11:00
1cb1b60b4c tidy: "check_root.py" -> "check_directories.py" 2024-03-19 09:24:28 +11:00
1d4517d00d tidy: "validate_root" -> "validate_directories" 2024-03-19 09:24:28 +11:00
982b513af3 tidy(config): move a few get_config calls to inside the functions where they are needed 2024-03-19 09:24:28 +11:00
f1450c2c24 update textual inversion training to use root_path rather than root_dir 2024-03-19 09:24:28 +11:00
5d16a40b95 fix invokeai-configure to use isolated argument-parsing pattern 2024-03-19 09:24:28 +11:00
d871fca643 partially address --root CLI argument handling
- fix places where `get_config()` is being called at import time rather
  than at run time.

- add regression test for import time get_config() calling.
2024-03-19 09:24:28 +11:00
8cd65755ef make invokeai-model-install use the --root argument correctly 2024-03-19 09:24:28 +11:00
e76cc71e81 fix(config): edge cases in models.yaml migration
When running the configurator, the `legacy_models_conf_path` was stripped when saving the config file. Then the migration logic didn't fire correctly, and the custom models.yaml paths weren't migrated into the db.

- Rework the logic to migrate this path by adding it to the config object as a normal field that is not excluded from serialization.
- Rearrange the models.yaml migration logic to remove the legacy path after migrating, then write the config file. This way, the legacy path doesn't stick around.
- Move the schema version into the config object.
- Back up the config file before attempting migration.
- Add tests to cover this edge case
2024-03-19 09:24:28 +11:00
1ed1c1fb24 chore: ruff 2024-03-19 09:24:28 +11:00
9063ea9173 tests: comprehensive config migration tests
Add testing for the settings that had to be explicitly migrated.
2024-03-19 09:24:28 +11:00
4633242503 tidy(config): move config docstring builder to its script 2024-03-19 09:24:28 +11:00
e8b030427d fix(config): do not discard conf_path, migrate custom models.yaml
Hold onto `conf_path` temporarily while migrating `invokeai.yaml` so that it gets migrated correctly as the model installer starts up. Stashed as `legacy_models_yaml_path` in the config, excluded from serialization.
2024-03-19 09:24:28 +11:00
415a4baf78 docs: add note about pydantic-settings' yaml support 2024-03-19 09:24:28 +11:00
09a8c0328a docs: update CONFIGURATION.md 2024-03-19 09:24:28 +11:00
e32c609fec fix(config): ignore empty environment variables (use default values instead) 2024-03-19 09:24:28 +11:00
beffca6b49 docs: update CONFIGURATION.md 2024-03-19 09:24:28 +11:00
a281671e6c docs: update InvokeAIAppConfig doc generator
It now renders the valid values.
2024-03-19 09:24:28 +11:00
5179587b5a feat(config): restore ignore_missing_core_models arg 2024-03-19 09:24:28 +11:00
cb180909f7 fix(install): resolve config-related issues with configurator
- Do not use the singleton app config, this causes a lot of weirdness
- Update logic to use new config object
- Remove unused code
2024-03-19 09:24:28 +11:00
ce9aeeece3 feat: single app entrypoint with CLI arg parsing
We have two problems with how argparse is being utilized:
- We parse CLI args as the `api_app.py` file is read. This causes a problem with pytest, which has an incompatible set of CLI args. Some tests import the FastAPI app, which triggers the config to parse CLI args, which receives the pytest args and fails.
- We've repeatedly had problems when something that uses the config is imported before the CLI args are parsed. When this happens, the root dir may not be set correctly, so we attempt to operate on incorrect paths.

To resolve these issues, we need to lift CLI arg parsing outside of the application code, but still let the application access the CLI args. We can create an external app entrypoint to do this.

- `InvokeAIArgs` is a simple helper class that parses CLI args and stores the result.
- `run_app()` is the new entrypoint. It first parses CLI args, then runs `invoke_api` to start the app.

The `invokeai-web` project script and `invokeai-web.py` dev script now call `run_app()` instead of `invoke_api()`.

The first time `get_config()` is called to get the singleton config object, it retrieves the args from `InvokeAIArgs`, sets the root dir if provided, then merges settings in from `invokeai.yaml`.

CLI arg parsing is now safely insulated from application code, but still accessible. And we don't need to worry about import order having an impact on anything, because by the time the app is running, we have already parsed CLI args. Whew!
2024-03-19 09:24:28 +11:00
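
A rough sketch of the arrangement described in the commit above. `InvokeAIArgs`, `run_app`, and `invoke_api` are named in the commit; the bodies below are illustrative only.

```python
# Sketch: parse CLI args once, up front, then hand control to the app.
import argparse


class InvokeAIArgs:
    """Parses CLI args once and stores the result for later consumers."""
    args: argparse.Namespace | None = None

    @staticmethod
    def parse_args() -> argparse.Namespace:
        parser = argparse.ArgumentParser(prog="invokeai-web")
        parser.add_argument("--root", default=None)
        InvokeAIArgs.args = parser.parse_args()
        return InvokeAIArgs.args


def invoke_api() -> None:
    # Placeholder for starting the FastAPI/uvicorn server.
    print("starting web server with", InvokeAIArgs.args)


def run_app() -> None:
    """New entrypoint: parse CLI args first, then start the app."""
    InvokeAIArgs.parse_args()
    invoke_api()


if __name__ == "__main__":
    run_app()
```
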
5ecfa86cd0 tests: fix test on macos 2024-03-19 09:24:28 +11:00
d09f03ef25 fix(config): if no invokeai.yaml is found, create a default one
This fixes an issue with `test_images.py`, which tests the bulk images routers and imports the whole FastAPI app. This triggers the config logic which fails on the test runner, because it has no `invokeai.yaml`.

Also probably just good for graceful fallback.
2024-03-19 09:24:28 +11:00
3f8e2bfd18 fix(config): migrate deprecated max_cache_size and max_vram_cache_size settings 2024-03-19 09:24:28 +11:00
60492500db chore: ruff 2024-03-19 09:24:28 +11:00
f69938c6a8 fix(config): revised config methods
- `write_file` requires a destination file path
- `read_config` -> `merge_from_file`, if no path is provided, reads from `self.init_file_path`
- update app, tests to use new methods
- fix configurator, was overwriting config file data unexpectedly
2024-03-19 09:24:28 +11:00
5e39e46954 feat(config): more resilient update_config method
Only set values that have changed.
2024-03-19 09:24:28 +11:00
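
A minimal sketch of the "only set values that have changed" idea, assuming a plain object with public attributes; the real method operates on the pydantic config model.

```python
# Sketch: apply only the values that differ from the current config.
def update_config(config: object, updates: dict) -> list[str]:
    """Set changed fields on `config` and return the names of the fields that changed."""
    changed: list[str] = []
    for name, new_value in updates.items():
        if getattr(config, name, None) != new_value:
            setattr(config, name, new_value)
            changed.append(name)
    return changed
```
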
1079bf3ccf feat(config): fix bad compress_level setting
Tweak the name of it so that incoming configs with the old default value of 6 have the setting stripped out. The result is all configs will now have the new, much better default value of 1.
2024-03-19 09:24:28 +11:00
fbbf9c01b5 tests: fix remaining tests 2024-03-19 09:24:28 +11:00
15cef98a8b tests: fix docs test 2024-03-19 09:24:28 +11:00
5606f4d627 tests: redo config tests 2024-03-19 09:24:28 +11:00
53c8f36029 docs(config): clarify comment during config migration 2024-03-19 09:24:28 +11:00
b9884a6166 feat(config): split out parse_args and read_config logic from get_config
Having this all in the `get_config` function makes testing hard. Move these two functions to their own methods, and call them on app startup explicitly.
2024-03-19 09:24:28 +11:00
77b86e9ad5 fix(install): remove broken v2.3 -> v3 migration logic from configurator 2024-03-19 09:24:28 +11:00
a6181b5759 fix(install): update configurator to use new config system 2024-03-19 09:24:28 +11:00
b4b0af7c60 fix(install): do not use deprecated pydantic methods 2024-03-19 09:24:28 +11:00
3d1f3818cb fix(config): use set_root to set root 2024-03-19 09:24:28 +11:00
deffeb9655 fix(config): use get_config singleton, new paths 2024-03-19 09:24:28 +11:00
b8c46fb15b fix(config): split check_invokeai_root into separate function to validate, use this in model_install to determine if need to run configurator 2024-03-19 09:24:28 +11:00
9539ecce79 fix(config): use correct config in textual_inversion_training 2024-03-19 09:24:28 +11:00
7716a4a8c7 fix(config): use correct config in install_helper 2024-03-19 09:24:28 +11:00
dedce2d896 fix(config): remove unnecessary resolve on config path 2024-03-19 09:24:28 +11:00
e43bfa3d70 docs(mm): do not hide members in InvokeAIAppConfig autodoc 2024-03-19 09:24:28 +11:00
897fe497dc fix(config): use new get_config across the app, use correct settings 2024-03-19 09:24:28 +11:00
7b1f9409bc fix(config): drop nonexistent config.use_cpu setting 2024-03-19 09:24:28 +11:00
a72cea014c fix(config): drop usage of deprecated config.xformers, just use the existing utility function 2024-03-19 09:24:28 +11:00
b4182b190f fix(config): use new config.patchmatch 2024-03-19 09:24:28 +11:00
22ac204678 fix(config): fix invisible_watermark handling
This setting was hardcoded to True. Simplified logic around it to not have a conditional that does nothing.
2024-03-19 09:24:28 +11:00
7ca447ded1 fix(config): use new config setup in api_app.py 2024-03-19 09:24:28 +11:00
4df28f1de6 fix(config): use yaml module instead of omegaconf when migrating models.yaml
Also use new paths.
2024-03-19 09:24:28 +11:00
ebd0cb6113 fix(config): remove reference to internet_available
Nothing ever set this. Only a debug print statement referenced it.
2024-03-19 09:24:28 +11:00
fbe3afa5e1 fix(config): fix nsfw_checker handling
This setting was hardcoded to True. Rework logic around it to not conditionally check the setting.
2024-03-19 09:24:28 +11:00
3fb116155b refactor(config): simplified config
- Remove OmegaConf. It functioned as an intermediary data format, between YAML/argparse and pydantic. It's not necessary - we can parse YAML or CLI args directly with pydantic.

- Remove dynamic CLI args. Only `root` is explicitly supported. This greatly simplifies config handling. Configuration is done by editing the YAML file. Frequently-used args can be added if there is a demand.

- A separate arg parser is created to handle the slimmed-down CLI args. It's run immediately in the `invokeai-web` script to handle `--version` and `--help`. It is also used inside the singleton config getter (see below).

- Remove categories from the config. Our settings model is mostly flat. Handling categories adds complexity for both us and users - we have to handle transforming a flat config to categorized config (and vice-versa), while users have to be careful with indentation in their YAML file.

- Add a `meta` key to the config file. Currently, this holds the config schema version only. It is not a part of the config object itself.

- Remove legacy settings that are no longer referenced, or were effectively no-op settings when referenced in code.

- Implement simple migration logic for v3 configs. If migration is successful, the v3 config file is backed up to `invokeai.yaml.bak` and the new config written to `invokeai.yaml`.

- Previously, the singleton config was accessed by calling `InvokeAIAppConfig.get_config()`. This returned an instance of `InvokeAIAppConfig`, which _also_ has the `get_config` function. This created a confusing situation where you weren't sure if you needed to call `get_config` or just use the config object. This method is replaced by a standalone `get_config` function which returns a singleton config object.

- Wrap CLI arg parsing (for `root`) and loading/migrating `invokeai.yaml` into the new `get_config()` function.

- Move `generate_config_docstrings` into standalone utility function.

- Make `root` a private attr (`_root`). This reduces the temptation to directly modify and/or use this sensitive field and ensures it is neither serialized nor read from input data. Use `root_path` to access the resolved root path, or `set_root` to set the root to something.
2024-03-19 09:24:28 +11:00
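
A sketch of the standalone singleton getter described in the commit above, assuming pydantic and PyYAML; the field names and YAML layout are placeholders.

```python
# Sketch: a module-level get_config() that builds the config once and reuses it.
from functools import lru_cache
from pathlib import Path

import yaml
from pydantic import BaseModel


class InvokeAIAppConfig(BaseModel):
    # Placeholder fields; the real settings model is much larger.
    host: str = "127.0.0.1"
    port: int = 9090


@lru_cache(maxsize=1)
def get_config() -> InvokeAIAppConfig:
    """Read invokeai.yaml once (if present) and return the same config object thereafter."""
    config_path = Path("invokeai.yaml")
    data = yaml.safe_load(config_path.read_text()) if config_path.exists() else {}
    return InvokeAIAppConfig(**(data or {}))
```
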
7387b0bdc9 install missing clip_vision encoders if required by an ip adapter (#5982)
Co-authored-by: Lincoln Stein <lstein@gmail.com>
2024-03-18 02:19:53 +00:00
7ea9cac9a3 Add sdxl controlnet models (#5980)
* allow removal of models with legacy relative path addressing

* added five controlnet models for sdxl to INITIAL_MODELS

---------

Co-authored-by: Lincoln Stein <lstein@gmail.com>
2024-03-18 01:15:15 +00:00
ea5bc94b9c Resolve when instantiating _cached_model_paths 2024-03-18 11:17:23 +11:00
a1743647b7 Stop registering and moving models which have symlinks in the models dir 2024-03-18 11:17:23 +11:00
a6d64f69e1 Remove core conversion models (#5981)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [X] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description

We've been using a forked copy of the diffusers safetensors->diffusers
model conversion code, which was hacked to read CLIP and the other
models needed for conversion from the local invokeai root models
directory. This was getting unsustainable as the code bases diverged,
and also required the installation and maintenance of the "core/convert"
directory.

This PR gets rid of the hacked conversion code and reverts to using the
native diffusers methods. Core convert models are no longer installed at
root configure time. Instead we rely on the HuggingFace hub system to
download the conversion models if and when they are needed. They are
relatively small and the initial delay seems minor.

Conversion of SD-1, SD-2 (both epsilon and v-prediction), SDXL, VAE and
ControlNet SD-1/2 models has been tested. ControlNet SDXL models are
still a WIP due to the need for some work on the prober.

The main implication of this change is that InvokeAI is no longer
internet-independent and will need an internet connection at least the
first time a safetensors file needs to be converted. However, there are
several other places where the "no internet" rule is already violated,
and I suggest that we abandon this principle.

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #5964 

## QA Instructions, Screenshots, Recordings

1. Remove or move `$INVOKEAI_ROOT/models/.cache`
2. Move `$INVOKEAI/models/core/convert`
3. Try generating with an unconverted .safetensors model.

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Merge Plan

Merge when approved.

<!--
A merge plan describes how this PR should be handled after it is
approved.

Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is
merged"

A merge plan is particularly important for large PRs or PRs that touch
the
database in any way.
-->

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2024-03-18 11:11:15 +11:00
e74e78894f Merge branch 'main' into optimization/remove-core-conversion-models 2024-03-17 19:25:44 -04:00
71a1740740 Remove core safetensors->diffusers conversion models
- No longer install core conversion models. Use the HuggingFace cache to load
  them if and when needed.

- Call directly into the diffusers library to perform conversions with only shallow
   wrappers around them to massage arguments, etc.

- At root configuration time, do not create all the possible model subdirectories,
  but let them be created and populated at model install time.

- Remove checks for missing core conversion files, since they are no
  longer installed.
2024-03-17 19:13:18 -04:00
b79f2f337e Allow removal of models with legacy relative path addressing (#5979)
* allow removal of models with legacy relative path addressing

* fix ruff error

---------

Co-authored-by: Lincoln Stein <lstein@gmail.com>
2024-03-17 18:07:35 +00:00
a0420d1442 fix ruff error 2024-03-17 14:01:04 -04:00
a17021ba0c allow removal of models with legacy relative path addressing 2024-03-17 09:58:16 -04:00
faa1ffb06f Update diffusers 0.26.3 -> 0.27.0 and other HF packages 2024-03-16 05:44:58 -07:00
8c04eec210 fix initial main model logic 2024-03-15 10:22:16 -04:00
330e1354b4 Run typegen, update version to 4.0.0rc2 2024-03-14 17:01:36 -04:00
21621eebf0 feat(ui): handle control adapter processed images
- Add helper functions to build metadata for control adapters, including the processed images
- Update parses to parse the new metadata
2024-03-14 12:34:03 -07:00
c24f2046e7 chore(ui): typegen 2024-03-14 12:34:03 -07:00
297408d67e feat(nodes): add control adapter processed images to metadata
In the client, a controlnet or t2i adapter has two images:
- The source control image: the image the user selected (required)
- The processed control image: the user's image after we've processed it (optional)

The processed image is optional because a user may provide a pre-processed image.

We only actually use one of these images when building the graph, and until this change, we only stored one of them in image metadata. This created a situation where only a processed image was stored in metadata - say, a canny edge map - and the user-selected image wasn't provided.

By adding the processed image to metadata, we can recall both the control image and optional processed image.

This commit is followed by UI-facing changes to support the change.
2024-03-14 12:34:03 -07:00
0131e7d928 fix(ui): recall control adapter metadata fields 2024-03-14 12:34:03 -07:00
06ff105a1f fix(ui): reset loras/control adapters when using recall all or remix 2024-03-14 12:34:03 -07:00
bb8e6bbee6 docs: update CONFIGURATION.md
Update model hashing docs
2024-03-15 00:14:48 +11:00
328dc99f3a fix(ui): log model load events
- Fix types
- Fix logging in listener
2024-03-14 18:29:55 +05:30
ef55077e84 feat(events): add submodel_type to model load events
This was lost during MM2 migration
2024-03-14 18:29:55 +05:30
ba3d8af161 fix(events): dump event payloads to serializable format 2024-03-14 18:29:55 +05:30
b07b7af710 feat(ui): single getModelConfigs query (#5962)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [z] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission

## Description

Single query, with simple wrapper hooks (type-safe). Updated everywhere
in frontend.

## QA Instructions, Screenshots, Recordings

Things that use models should work. All of this code is strictly
typechecked, so we can be confident in this change.

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Merge Plan

This PR can be merged when approved

<!--
A merge plan describes how this PR should be handled after it is
approved.

Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is
merged"

A merge plan is particularly important for large PRs or PRs that touch
the
database in any way.
-->
2024-03-14 18:20:38 +05:30
19d66d5ec7 feat(ui): single getModelConfigs query
Single query, with simple wrapper hooks (type-safe). Updated everywhere in frontend.
2024-03-14 23:37:40 +11:00
ed20255abf fix(nodes): depth anything processor (#5956) (#5961)
We were passing a PIL image when we needed to pass the np image.

Closes #5956

## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Description

We were passing a PIL image when we needed to pass the np image.

Closes #5956

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #5956

## QA Instructions, Screenshots, Recordings

Depth anything processor should work.

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Merge Plan

This PR can be merged when approved

<!--
A merge plan describes how this PR should be handled after it is
approved.

Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is
merged"

A merge plan is particularly important for large PRs or PRs that touch
the
database in any way.
-->
2024-03-14 14:57:40 +05:30
fed1f983db fix(nodes): depth anything processor (#5956)
We were passing a PIL image when we needed to pass the np image.

Closes #5956
2024-03-14 20:14:53 +11:00
a386544a1d chore: ruff 2024-03-14 17:32:02 +11:00
0851de9090 closes #5932 2024-03-14 17:32:02 +11:00
1bd8e33f8c Work around missing core conversion model issue
- This adds additional logic to the safetensors->diffusers conversion script
  to check for and install missing core conversion models at runtime.

- Fixes #5934
2024-03-14 16:52:01 +11:00
e3f29ed320 tests: update default settings tests 2024-03-14 16:03:37 +11:00
3fd824306c feat(mm): probe for main model default settings
Currently, this is just the width and height, derived from the model base.
2024-03-14 16:03:37 +11:00
2584a950aa feat(ui): add w/h to default model settings 2024-03-14 16:03:37 +11:00
1adaf63253 chore(ui): typegen 2024-03-14 16:03:37 +11:00
b9f1a4bd65 feat(nodes): add w/h defaults for models 2024-03-14 16:03:37 +11:00
731942dbed feat(nodes): add constraints & descriptions to default settings 2024-03-14 16:03:37 +11:00
4117cea5bf tidy(mm): remove misplaced comment 2024-03-14 15:54:42 +11:00
21617f3bc1 docs: update description for hashing_algorithm in config 2024-03-14 15:54:42 +11:00
9fcd67b5c0 feat(mm): add algorithm prefix to hashes
For example:
- md5:a0cd925fc063f98dbf029eee315060c3
- sha1:9e362940e5603fdc60566ea100a288ba2fe48b8c
- blake3:ce3f0c5f3c05d119f4a5dcaf209b50d3149046a0d3a9adee9fed4c83cad6b4d0
2024-03-14 15:54:42 +11:00
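
A small sketch of the algorithm-prefix format shown in the commit above, assuming the `blake3` package for the BLAKE3 case.

```python
# Sketch: prefix a digest with the name of the algorithm that produced it.
import hashlib

import blake3


def prefixed_hash(data: bytes, algorithm: str = "blake3") -> str:
    if algorithm == "blake3":
        digest = blake3.blake3(data).hexdigest()
    else:
        digest = hashlib.new(algorithm, data).hexdigest()
    return f"{algorithm}:{digest}"


print(prefixed_hash(b"model bytes", "md5"))
print(prefixed_hash(b"model bytes", "sha1"))
print(prefixed_hash(b"model bytes", "blake3"))
```
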
a4be935458 docs: update config docs 2024-03-14 15:54:42 +11:00
eb6e6548ed feat(mm): faster hashing for spinning disk HDDs
BLAKE3 has poor performance on spinning disks when parallelized. See https://github.com/BLAKE3-team/BLAKE3/issues/31

- Replace `skip_model_hash` setting with `hashing_algorithm`. Any algorithm we support is accepted.
- Add `random` algorithm: hashes a UUID with BLAKE3 to create a random "hash". Equivalent to the previous skip functionality.
- Add `blake3_single` algorithm: hashes on a single thread using BLAKE3, fixes the aforementioned performance issue
- Update model probe to accept the algorithm to hash with as an optional arg, defaulting to `blake3`
- Update all calls of the probe to use the app's configured hashing algorithm
- Update an external script that probes models
- Update tests
- Move ModelHash into its own module to avoid circular import issues
2024-03-14 15:54:42 +11:00
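
Illustrative sketches of the `random` and `blake3_single` behaviours described above; the prefixes and function names are assumptions, and the real `ModelHash` class has a different interface.

```python
# Sketch only: a "random" hash (skip-equivalent) and a single-threaded BLAKE3 file hash.
import uuid
from pathlib import Path

import blake3


def random_hash() -> str:
    """A stand-in 'hash': BLAKE3 of a fresh UUID, equivalent to skipping hashing."""
    return "random:" + blake3.blake3(uuid.uuid4().bytes).hexdigest()


def blake3_single_hash(path: Path, chunk_size: int = 2**20) -> str:
    """Hash a file on a single thread; friendlier to spinning disks than parallel hashing."""
    hasher = blake3.blake3()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            hasher.update(chunk)
    return "blake3_single:" + hasher.hexdigest()
```
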
8287fcf097 feat: ✏️ rename "Workflow Editor" tab label to "Workflows" 2024-03-14 12:22:23 +11:00
dd475e28ed chore(ui): remove unused translation keys via script 2024-03-14 11:38:29 +11:00
24e741e2d1 feat(ui): add script to clean translations
This script removes unused translations from the `en.json` source translation file:
- Parse `en.json` to build a list of all keys, e.g. `controlnet.depthAnything`
- Check every frontend file for every key
- If the key is not found, it is removed from the translation file
- Exact matches (e.g. `controlnet.depthAnything`) and stem matches (e.g. `depthAnything`) are ignored
2024-03-14 11:38:29 +11:00
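
A condensed sketch of the cleanup approach described in the commit above: flatten `en.json` into dotted keys and keep any key that appears (exactly or by its last segment) in the frontend source. The paths and matching details are assumptions.

```python
# Sketch: report en.json keys that never appear in the frontend source files.
import json
from pathlib import Path

SOURCE_DIR = Path("invokeai/frontend/web/src")                     # assumed location
EN_JSON = Path("invokeai/frontend/web/public/locales/en.json")     # assumed location


def flatten(d: dict, prefix: str = "") -> dict[str, str]:
    """Turn nested translation objects into dotted keys like 'controlnet.depthAnything'."""
    out: dict[str, str] = {}
    for k, v in d.items():
        key = f"{prefix}.{k}" if prefix else k
        if isinstance(v, dict):
            out.update(flatten(v, key))
        else:
            out[key] = v
    return out


def used_anywhere(key: str, sources: list[str]) -> bool:
    stem = key.split(".")[-1]
    return any(key in text or stem in text for text in sources)


sources = [p.read_text(errors="ignore") for p in SOURCE_DIR.rglob("*.ts*")]
translations = flatten(json.loads(EN_JSON.read_text()))
unused = [k for k in translations if not used_anywhere(k, sources)]
print(f"{len(unused)} unused keys")
```
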
e0bf9ce5c6 tidy(ui): use normal quotes in translations 2024-03-14 11:38:29 +11:00
c66e8b395e fix(ui): remove unused input on depth anything processor node 2024-03-14 10:53:57 +11:00
4c417adc82 fix(ui): use revised metadata model types
We can also totally remove the fetch logic because we store the same model data in state now.
2024-03-14 10:53:57 +11:00
437a413ca3 chore(ui): typegen 2024-03-14 10:53:57 +11:00
4492bedd19 tidy(nodes): use ModelIdentifierField for model metadata
Until recently, this had a different shape than the ModelMetadataField. They are now the same, so we can re-use the ModelIdentifierField.
2024-03-14 10:53:57 +11:00
db12ce95a8 fix(ui): invalid collect node error w/ control adapters
The graph builders used awaited functions within `Array.prototype.forEach` loops. This doesn't do what you'd think. This caused graphs to be enqueued before they were fully constructed.

 Changed to `for..of` loops to fix this.
2024-03-14 10:53:57 +11:00
ee3a1a95ef fix(ui): control adapters require control images
There wasn't enough validation of control adapters during graph building. It was possible for a graph to be built with an empty collect node, causing an error. Addressed with an extra check.

This should never happen in practice, because the invoke button should be disabled if an invalid CA is active.
2024-03-14 10:53:57 +11:00
4bb5aba70e feat(ui): only fetch TIs on first load, add comment 2024-03-14 07:38:09 +11:00
cd55c23713 initiate TI model query when socket connects so user doesn't have to wait when opening prompt trigger phrases 2024-03-14 07:38:09 +11:00
1d2743af1b remove log 2024-03-14 07:25:48 +11:00
99d2099ccd add key for controladapter CustomSelect too 2024-03-14 07:25:48 +11:00
b64a693f16 try adding a key to force rerender when items load 2024-03-14 07:25:48 +11:00
9d523a3094 chore: cleanup DepthAnything code (#5945)
## What type of PR is this? (check all applicable)

- [x] Optimization

## Description

Was merged into next but never carried over to main. So cleaning up
again.
2024-03-13 20:46:54 +05:30
af660163ca chore: cleanup DepthAnything code 2024-03-13 20:35:52 +05:30
7e4b462fca docs: OVERVIEW.md typo 2024-03-13 22:43:20 +11:00
4468dd6948 docs: update OVERVIEW.md
Update pkg scripts.
2024-03-13 22:43:20 +11:00
4f39e248dd docs: update OVERVIEW.md
Fix links
2024-03-13 22:43:20 +11:00
44b3e5d43f docs: update INVOCATION_API.md
Add blurb about `WithMetadata` and `WithBoard` mixins.
2024-03-13 22:43:20 +11:00
8894a9e48a docs: update WORKFLOWS.md 2024-03-13 22:43:20 +11:00
c73f58e486 docs: move frontend docs to mkdocs 2024-03-13 22:43:20 +11:00
614fece147 chore(ui): prettier 2024-03-13 21:02:29 +11:00
8ef8082d65 feat(ui): style add model panel 2024-03-13 21:02:29 +11:00
d93d4afbb7 feat(ui): style HF scan tab 2024-03-13 21:02:29 +11:00
01207a2fa5 fix(mm): config.json indicates a diffusers model 2024-03-13 21:02:29 +11:00
d0800c4888 ui consistency, moved is_diffusers logic to backend, extended HuggingFaceMetadata, removed logic from service 2024-03-13 21:02:29 +11:00
2a300ecada updated add model copy, added search to hugging face results 2024-03-13 21:02:29 +11:00
90340a39c7 clean up python errors 2024-03-13 21:02:29 +11:00
ee77abb4fe updated simple install button to match other tabs 2024-03-13 21:02:29 +11:00
004bca5c42 updated endpoint types 2024-03-13 21:02:29 +11:00
5ad048a161 fixed error handling 2024-03-13 21:02:29 +11:00
6369ccd05e added placeholders, updated some copy 2024-03-13 21:02:29 +11:00
3a5314f1ca install model if diffusers or single file, cleaned up backend logic to not mess with existing model install 2024-03-13 21:02:29 +11:00
4c0896e436 removed log 2024-03-13 21:02:29 +11:00
f7cd3cf1f4 added hf models import tab and route for getting available hf models 2024-03-13 21:02:29 +11:00
efea1a8a7d ci: add always_run input to checks & tests, use this on release workflow
This bypasses the `changed-files` check, and forces the checks to run. The release workflow sets this flag to ensure that the checks and tests are always run for a release.
2024-03-13 15:32:42 +11:00
d0d695c020 disable trigger phrase form if empty 2024-03-12 21:08:15 -04:00
2a648da557 updated model manager to display when import item is cancelled 2024-03-13 09:18:05 +11:00
54f1a1f952 Update l2i invoke and seamless to support AutoencoderTiny, remove attention processors if no mid_block is detected (#5936)

## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [x] No


## Description
L2i throws an assertion error when run with `madebyollin/taesdxl` due to
it requiring a different class in diffusers to load it. This is a small
PR to update seamless and l2i to accept AutoencoderTiny models and not
throw exceptions while processing them.

## QA Instructions, Screenshots, Recordings

<img width="445" alt="Screenshot 2024-03-12 at 12 04 29 PM"
src="https://github.com/invoke-ai/InvokeAI/assets/58442074/34a17e44-d911-4fef-8fc1-71f7b688688c">
Run an sdxl pipeline using a vae that requires AutoencoderTiny and
validate that the image successfully encodes and decodes.

## Merge Plan

This PR can be merged when approved
2024-03-12 21:52:32 +05:30
8d2a4db902 Found another instance of expecting a mid_block on the decoder in a vae 2024-03-12 12:11:38 -04:00
7b393656de Update l2i invoke and seamless to support AutoencoderTiny, remove attention processors if no mid_block is detected 2024-03-12 12:00:24 -04:00
43948e0758 feat(ui): add setting for always show image size badge 2024-03-12 18:52:23 +11:00
cc03fcbcb6 style(ui): tweak image dimension badge overlay styles 2024-03-12 18:52:23 +11:00
d1e445fa49 fix(ui): changed to theme tokens 2024-03-12 18:52:23 +11:00
adba8489f2 fix(ui): made changes to avoid overlapping 2024-03-12 18:52:23 +11:00
d919022ba5 fix(ui): fixed requested changes and made the badge display on hover 2024-03-12 18:52:23 +11:00
e076898798 fix(ui): logic to remove badge for small image size 2024-03-12 18:52:23 +11:00
9f19b766a4 feat(ui): Add image size badge to gallery images 2024-03-12 18:52:23 +11:00
259 changed files with 5786 additions and 16038 deletions

View File

@@ -1,66 +1,25 @@
## What type of PR is this? (check all applicable)
<!--Thanks for contributing!-->
- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Summary
<!--A description of the changes in this PR. Include the kind of change (fix, feature, docs, etc), the "why" and the "how". Screenshots or videos are useful for frontend changes.-->
## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [ ] No, because:
## Related Issues / Discussions
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No
<!--List any related issues or discussions on github or discord. If this PR closes an issue, please use the "Closes #1234" format, so that the issue will be automatically closed when the PR merges.-->
## QA Instructions
## Description
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue #
- Closes #
## QA Instructions, Screenshots, Recordings
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
<!--WHEN APPLICABLE: Describe how we can test the changes in this PR.-->
## Merge Plan
<!--
A merge plan describes how this PR should be handled after it is approved.
<!--WHEN APPLICABLE: Large PRs, or PRs that touch sensitive things like DB schemas, may need some care when merging. For example, a careful rebase by the change author, timing to not interfere with a pending release, or a message to contributors on discord after merging.-->
Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is merged"
## Checklist
A merge plan is particularly important for large PRs or PRs that touch the
database in any way.
-->
<!--If any of these are not completed or not applicable to the change, please add a note.-->
## Added/updated tests?
- [ ] Yes
- [ ] No : _please replace this line with details on why tests
have not been included_
## [optional] Are there any post deployment tasks we need to perform?
- [ ] The PR has a short but descriptive title
- [ ] Tests added / updated
- [ ] Documentation added / updated

View File

@@ -1,7 +1,7 @@
# Runs frontend code quality checks.
#
# Checks for changes to frontend files before running the checks.
# When manually triggered or when called from another workflow, always runs the checks.
# If always_run is true, always runs the checks.
name: 'frontend checks'
@@ -16,7 +16,19 @@ on:
- 'synchronize'
merge_group:
workflow_dispatch:
inputs:
always_run:
description: 'Always run the checks'
required: true
type: boolean
default: true
workflow_call:
inputs:
always_run:
description: 'Always run the checks'
required: true
type: boolean
default: true
defaults:
run:
@@ -30,7 +42,7 @@ jobs:
- uses: actions/checkout@v4
- name: check for changed frontend files
if: ${{ github.event_name != 'workflow_dispatch' && github.event_name != 'workflow_call' }}
if: ${{ inputs.always_run != true }}
id: changed-files
uses: tj-actions/changed-files@v42
with:
@@ -39,30 +51,30 @@
- 'invokeai/frontend/web/**'
- name: install dependencies
if: ${{ steps.changed-files.outputs.frontend_any_changed == 'true' || github.event_name == 'workflow_dispatch' || github.event_name == 'workflow_call' }}
if: ${{ steps.changed-files.outputs.frontend_any_changed == 'true' || inputs.always_run == true }}
uses: ./.github/actions/install-frontend-deps
- name: tsc
if: ${{ steps.changed-files.outputs.frontend_any_changed == 'true' || github.event_name == 'workflow_dispatch' || github.event_name == 'workflow_call' }}
if: ${{ steps.changed-files.outputs.frontend_any_changed == 'true' || inputs.always_run == true }}
run: 'pnpm lint:tsc'
shell: bash
- name: dpdm
if: ${{ steps.changed-files.outputs.frontend_any_changed == 'true' || github.event_name == 'workflow_dispatch' || github.event_name == 'workflow_call' }}
if: ${{ steps.changed-files.outputs.frontend_any_changed == 'true' || inputs.always_run == true }}
run: 'pnpm lint:dpdm'
shell: bash
- name: eslint
if: ${{ steps.changed-files.outputs.frontend_any_changed == 'true' || github.event_name == 'workflow_dispatch' || github.event_name == 'workflow_call' }}
if: ${{ steps.changed-files.outputs.frontend_any_changed == 'true' || inputs.always_run == true }}
run: 'pnpm lint:eslint'
shell: bash
- name: prettier
if: ${{ steps.changed-files.outputs.frontend_any_changed == 'true' || github.event_name == 'workflow_dispatch' || github.event_name == 'workflow_call' }}
if: ${{ steps.changed-files.outputs.frontend_any_changed == 'true' || inputs.always_run == true }}
run: 'pnpm lint:prettier'
shell: bash
- name: knip
if: ${{ steps.changed-files.outputs.frontend_any_changed == 'true' || github.event_name == 'workflow_dispatch' || github.event_name == 'workflow_call' }}
if: ${{ steps.changed-files.outputs.frontend_any_changed == 'true' || inputs.always_run == true }}
run: 'pnpm lint:knip'
shell: bash

View File

@@ -1,7 +1,7 @@
# Runs frontend tests.
#
# Checks for changes to frontend files before running the tests.
# When manually triggered or called from another workflow, always runs the tests.
# If always_run is true, always runs the tests.
name: 'frontend tests'
@@ -16,7 +16,19 @@ on:
- 'synchronize'
merge_group:
workflow_dispatch:
inputs:
always_run:
description: 'Always run the tests'
required: true
type: boolean
default: true
workflow_call:
inputs:
always_run:
description: 'Always run the tests'
required: true
type: boolean
default: true
defaults:
run:
@@ -30,7 +42,7 @@ jobs:
- uses: actions/checkout@v4
- name: check for changed frontend files
if: ${{ github.event_name != 'workflow_dispatch' && github.event_name != 'workflow_call' }}
if: ${{ inputs.always_run != true }}
id: changed-files
uses: tj-actions/changed-files@v42
with:
@@ -39,10 +51,10 @@
- 'invokeai/frontend/web/**'
- name: install dependencies
if: ${{ steps.changed-files.outputs.frontend_any_changed == 'true' || github.event_name == 'workflow_dispatch' || github.event_name == 'workflow_call' }}
if: ${{ steps.changed-files.outputs.frontend_any_changed == 'true' || inputs.always_run == true }}
uses: ./.github/actions/install-frontend-deps
- name: vitest
if: ${{ steps.changed-files.outputs.frontend_any_changed == 'true' || github.event_name == 'workflow_dispatch' || github.event_name == 'workflow_call' }}
if: ${{ steps.changed-files.outputs.frontend_any_changed == 'true' || inputs.always_run == true }}
run: 'pnpm test:no-watch'
shell: bash

View File

@@ -1,7 +1,7 @@
# Runs python code quality checks.
#
# Checks for changes to python files before running the checks.
# When manually triggered or called from another workflow, always runs the tests.
# If always_run is true, always runs the checks.
#
# TODO: Add mypy or pyright to the checks.
@@ -18,7 +18,19 @@ on:
- 'synchronize'
merge_group:
workflow_dispatch:
inputs:
always_run:
description: 'Always run the checks'
required: true
type: boolean
default: true
workflow_call:
inputs:
always_run:
description: 'Always run the checks'
required: true
type: boolean
default: true
jobs:
python-checks:
@@ -29,7 +41,7 @@ jobs:
uses: actions/checkout@v4
- name: check for changed python files
if: ${{ github.event_name != 'workflow_dispatch' && github.event_name != 'workflow_call' }}
if: ${{ inputs.always_run != true }}
id: changed-files
uses: tj-actions/changed-files@v42
with:
@@ -41,7 +53,7 @@
- 'tests/**'
- name: setup python
if: ${{ steps.changed-files.outputs.python_any_changed == 'true' || github.event_name == 'workflow_dispatch' || github.event_name == 'workflow_call' }}
if: ${{ steps.changed-files.outputs.python_any_changed == 'true' || inputs.always_run == true }}
uses: actions/setup-python@v5
with:
python-version: '3.10'
@@ -49,16 +61,16 @@
cache-dependency-path: pyproject.toml
- name: install ruff
if: ${{ steps.changed-files.outputs.python_any_changed == 'true' || github.event_name == 'workflow_dispatch' || github.event_name == 'workflow_call' }}
if: ${{ steps.changed-files.outputs.python_any_changed == 'true' || inputs.always_run == true }}
run: pip install ruff
shell: bash
- name: ruff check
if: ${{ steps.changed-files.outputs.python_any_changed == 'true' || github.event_name == 'workflow_dispatch' || github.event_name == 'workflow_call' }}
if: ${{ steps.changed-files.outputs.python_any_changed == 'true' || inputs.always_run == true }}
run: ruff check --output-format=github .
shell: bash
- name: ruff format
if: ${{ steps.changed-files.outputs.python_any_changed == 'true' || github.event_name == 'workflow_dispatch' || github.event_name == 'workflow_call' }}
if: ${{ steps.changed-files.outputs.python_any_changed == 'true' || inputs.always_run == true }}
run: ruff format --check .
shell: bash

View File

@@ -1,7 +1,7 @@
# Runs python tests on a matrix of python versions and platforms.
#
# Checks for changes to python files before running the tests.
# When manually triggered or called from another workflow, always runs the tests.
# If always_run is true, always runs the tests.
name: 'python tests'
@@ -16,7 +16,19 @@ on:
- 'synchronize'
merge_group:
workflow_dispatch:
inputs:
always_run:
description: 'Always run the tests'
required: true
type: boolean
default: true
workflow_call:
inputs:
always_run:
description: 'Always run the tests'
required: true
type: boolean
default: true
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
@@ -63,7 +75,7 @@ jobs:
uses: actions/checkout@v4
- name: check for changed python files
if: ${{ github.event_name != 'workflow_dispatch' && github.event_name != 'workflow_call' }}
if: ${{ inputs.always_run != true }}
id: changed-files
uses: tj-actions/changed-files@v42
with:
@@ -75,7 +87,7 @@
- 'tests/**'
- name: setup python
if: ${{ steps.changed-files.outputs.python_any_changed == 'true' || github.event_name == 'workflow_dispatch' || github.event_name == 'workflow_call' }}
if: ${{ steps.changed-files.outputs.python_any_changed == 'true' || inputs.always_run == true }}
uses: actions/setup-python@v5
with:
python-version: ${{ matrix.python-version }}
@@ -83,12 +95,12 @@
cache-dependency-path: pyproject.toml
- name: install dependencies
if: ${{ steps.changed-files.outputs.python_any_changed == 'true' || github.event_name == 'workflow_dispatch' || github.event_name == 'workflow_call' }}
if: ${{ steps.changed-files.outputs.python_any_changed == 'true' || inputs.always_run == true }}
env:
PIP_EXTRA_INDEX_URL: ${{ matrix.extra-index-url }}
run: >
pip3 install --editable=".[test]"
- name: run pytest
if: ${{ steps.changed-files.outputs.python_any_changed == 'true' || github.event_name == 'workflow_dispatch' || github.event_name == 'workflow_call' }}
if: ${{ steps.changed-files.outputs.python_any_changed == 'true' || inputs.always_run == true }}
run: pytest

View File

@@ -30,15 +30,23 @@
frontend-checks:
uses: ./.github/workflows/frontend-checks.yml
with:
always_run: true
frontend-tests:
uses: ./.github/workflows/frontend-tests.yml
with:
always_run: true
python-checks:
uses: ./.github/workflows/python-checks.yml
with:
always_run: true
python-tests:
uses: ./.github/workflows/python-tests.yml
with:
always_run: true
build:
uses: ./.github/workflows/build-installer.yml

View File

@@ -9,10 +9,6 @@ set -e -o pipefail
### Set INVOKEAI_ROOT pointing to a valid runtime directory
# Otherwise configure the runtime dir first.
### Configure the InvokeAI runtime directory (done by default)):
# docker run --rm -it <this image> --configure
# or skip with --no-configure
### Set the CONTAINER_UID envvar to match your user.
# Ensures files created in the container are owned by you:
# docker run --rm -it -v /some/path:/invokeai -e CONTAINER_UID=$(id -u) <this image>
@@ -22,27 +18,6 @@ USER_ID=${CONTAINER_UID:-1000}
USER=ubuntu
usermod -u ${USER_ID} ${USER} 1>/dev/null
configure() {
# Configure the runtime directory
if [[ -f ${INVOKEAI_ROOT}/invokeai.yaml ]]; then
echo "${INVOKEAI_ROOT}/invokeai.yaml exists. InvokeAI is already configured."
echo "To reconfigure InvokeAI, delete the above file."
echo "======================================================================"
else
mkdir -p "${INVOKEAI_ROOT}"
chown --recursive ${USER} "${INVOKEAI_ROOT}"
gosu ${USER} invokeai-configure --yes --default_only
fi
}
## Skip attempting to configure.
## Must be passed first, before any other args.
if [[ $1 != "--no-configure" ]]; then
configure
else
shift
fi
### Set the $PUBLIC_KEY env var to enable SSH access.
# We do not install openssh-server in the image by default to avoid bloat.
# but it is useful to have the full SSH server e.g. on Runpod.

View File

@ -0,0 +1,133 @@
# Invoke UI
Invoke's UI is made possible by many contributors and open-source libraries. Thank you!
## Dev environment
### Setup
1. Install [node] and [pnpm].
1. Run `pnpm i` to install all packages.
#### Run in dev mode
1. From `invokeai/frontend/web/`, run `pnpm dev`.
1. From repo root, run `python scripts/invokeai-web.py`.
1. Point your browser to the dev server address, e.g. <http://localhost:5173/>
### Package scripts
- `dev`: run the frontend in dev mode, enabling hot reloading
- `build`: run all checks (madge, eslint, prettier, tsc) and then build the frontend
- `typegen`: generate types from the OpenAPI schema (see [Type generation])
- `lint:dpdm`: check circular dependencies
- `lint:eslint`: check code quality
- `lint:prettier`: check code formatting
- `lint:tsc`: check type issues
- `lint:knip`: check for unused exports or objects (failures here are just suggestions, not hard fails)
- `lint`: run all checks concurrently
- `fix`: run `eslint` and `prettier`, fixing fixable issues
### Type generation
We use [openapi-typescript] to generate types from the app's OpenAPI schema.
The generated types are committed to the repo in [schema.ts].
```sh
# from the repo root, start the server
python scripts/invokeai-web.py
# from invokeai/frontend/web/, run the script
pnpm typegen
```
### Localization
We use [i18next] for localization, but translation to languages other than English happens on our [Weblate] project.
Only the English source strings should be changed on this repo.
### VSCode
#### Example debugger config
```jsonc
{
"version": "0.2.0",
"configurations": [
{
"type": "chrome",
"request": "launch",
"name": "Invoke UI",
"url": "http://localhost:5173",
"webRoot": "${workspaceFolder}/invokeai/frontend/web"
}
]
}
```
#### Remote dev
We've noticed an intermittent timeout issue with the VSCode remote dev port forwarding.
We suggest disabling the editor's port forwarding feature and doing it manually via SSH:
```sh
ssh -L 9090:localhost:9090 -L 5173:localhost:5173 user@host
```
## Contributing Guidelines
Thanks for your interest in contributing to the Invoke Web UI!
Please follow these guidelines when contributing.
### Check in before investing your time
Please check in before you invest your time on anything besides a trivial fix, in case it conflicts with ongoing work or isn't aligned with the vision for the app.
If a feature request or issue doesn't already exist for the thing you want to work on, please create one.
Ping `@psychedelicious` on [discord] in the `#frontend-dev` channel or in the feature request / issue you want to work on - we're happy to chat.
### Code conventions
- This is a fairly complex app with a deep component tree. Please use memoization (`useCallback`, `useMemo`, `memo`) with enthusiasm.
- If you need to add some global, ephemeral state, please use [nanostores] if possible.
- Be careful with your redux selectors. If they need to be parameterized, consider creating them inside a `useMemo`.
- Feel free to use `lodash` (via `lodash-es`) to make the intent of your code clear.
- Please add comments describing the "why", not the "how" (unless it is really arcane).
### Commit format
Please use the [conventional commits] spec for the web UI, with a scope of "ui":
- `chore(ui): bump deps`
- `chore(ui): lint`
- `feat(ui): add some cool new feature`
- `fix(ui): fix some bug`
### Submitting a PR
- Ensure your branch is tidy. Use an interactive rebase to clean up the commit history and reword the commit messages if they are not descriptive.
- Run `pnpm lint`. Some issues are auto-fixable with `pnpm fix`.
- Fill out the PR form when creating the PR.
- It doesn't need to be super detailed, but a screenshot or video is nice if you changed something visually.
- If a section isn't relevant, delete it. There are no UI tests at this time.
## Other docs
- [Workflows - Design and Implementation]
- [State Management]
[node]: https://nodejs.org/en/download/
[pnpm]: https://github.com/pnpm/pnpm
[nanostores]: https://github.com/nanostores/nanostores
[discord]: https://discord.gg/ZmtBAhwWhy
[i18next]: https://github.com/i18next/react-i18next
[Weblate]: https://hosted.weblate.org/engage/invokeai/
[openapi-typescript]: https://github.com/drwpow/openapi-typescript
[Type generation]: #type-generation
[schema.ts]: https://github.com/invoke-ai/InvokeAI/blob/main/invokeai/frontend/web/src/services/api/schema.ts
[conventional commits]: https://www.conventionalcommits.org/en/v1.0.0/
[Workflows - Design and Implementation]: ./WORKFLOWS.md
[State Management]: ./STATE_MGMT.md

View File

@ -1,40 +1,5 @@
# Workflows - Design and Implementation
<!-- @import "[TOC]" {cmd="toc" depthFrom=1 depthTo=6 orderedList=false} -->
<!-- code_chunk_output -->
- [Workflows - Design and Implementation](#workflows---design-and-implementation)
- [Design](#design)
- [Linear UI](#linear-ui)
- [Workflow Editor](#workflow-editor)
- [Workflows](#workflows)
- [Workflow -> reactflow state -> InvokeAI graph](#workflow---reactflow-state---invokeai-graph)
- [Nodes vs Invocations](#nodes-vs-invocations)
- [Workflow Linear View](#workflow-linear-view)
- [OpenAPI Schema](#openapi-schema)
- [Field Instances and Templates](#field-instances-and-templates)
- [Stateful vs Stateless Fields](#stateful-vs-stateless-fields)
- [Collection and Polymorphic Fields](#collection-and-polymorphic-fields)
- [Implementation](#implementation)
- [zod Schemas and Types](#zod-schemas-and-types)
- [OpenAPI Schema Parsing](#openapi-schema-parsing)
- [Parsing Field Types](#parsing-field-types)
- [Primitive Types](#primitive-types)
- [Complex Types](#complex-types)
- [Collection Types](#collection-types)
- [Collection or Scalar Types](#collection-or-scalar-types)
- [Optional Fields](#optional-fields)
- [Building Field Input Templates](#building-field-input-templates)
- [Building Field Output Templates](#building-field-output-templates)
- [Managing reactflow State](#managing-reactflow-state)
- [Building Nodes and Edges](#building-nodes-and-edges)
- [Building a Workflow](#building-a-workflow)
- [Loading a Workflow](#loading-a-workflow)
- [Workflow Migrations](#workflow-migrations)
<!-- /code_chunk_output -->
> This document describes, at a high level, the design and implementation of workflows in the InvokeAI frontend. There are a substantial number of implementation details not included, but which are hopefully clear from the code.
InvokeAI's backend uses graphs, composed of **nodes** and **edges**, to process data and generate images.
@ -152,13 +117,13 @@ Stateless fields do not store their value in the node, so their field instances
"Custom" fields will always be treated as stateless fields.
##### Collection and Polymorphic Fields
##### Collection and Scalar Fields
Field types have a name and two flags which may identify it as a **collection** or **polymorphic** field.
Field types have a name and two flags which may identify it as a **collection** or **collection or scalar** field.
If a field is annotated in python as a list, its field type is parsed and flagged as a collection type (e.g. `list[int]`).
If a field is annotated in python as a list, its field type is parsed and flagged as a **collection** type (e.g. `list[int]`).
If it is annotated as a union of a type and list, the type will be flagged as a polymorphic type (e.g. `Union[int, list[int]]`). Fields may not be unions of different types (e.g. `Union[int, list[str]]` and `Union[int, str]` are not allowed).
If it is annotated as a union of a type and list, the type will be flagged as a **collection or scalar** type (e.g. `Union[int, list[int]]`). Fields may not be unions of different types (e.g. `Union[int, list[str]]` and `Union[int, str]` are not allowed).
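For illustration, the hypothetical invocation below declares one field of each kind (the class and field names are invented, and the import paths are assumptions):
```py
from typing import Union

from invokeai.app.invocations.baseinvocation import BaseInvocation  # assumed import path
from invokeai.app.invocations.fields import InputField  # assumed import path


class ExampleFieldsInvocation(BaseInvocation):
    # Plain scalar field
    value: int = InputField(default=0)
    # Parsed and flagged as a collection field type
    values: list[int] = InputField(default=[])
    # Parsed and flagged as a collection-or-scalar field type
    value_or_values: Union[int, list[int]] = InputField(default=0)
```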
## Implementation
@ -338,13 +303,13 @@ Migration logic is in [migrations.ts].
[reactflow]: https://github.com/xyflow/xyflow 'reactflow'
[reactflow-concepts]: https://reactflow.dev/learn/concepts/terms-and-definitions
[reactflow-events]: https://reactflow.dev/api-reference/react-flow#event-handlers
[buildWorkflow.ts]: ../src/features/nodes/util/workflow/buildWorkflow.ts
[nodesSlice.ts]: ../src/features/nodes/store/nodesSlice.ts
[buildLinearTextToImageGraph.ts]: ../src/features/nodes/util/graph/buildLinearTextToImageGraph.ts
[buildNodesGraph.ts]: ../src/features/nodes/util/graph/buildNodesGraph.ts
[buildInvocationNode.ts]: ../src/features/nodes/util/node/buildInvocationNode.ts
[validateWorkflow.ts]: ../src/features/nodes/util/workflow/validateWorkflow.ts
[migrations.ts]: ../src/features/nodes/util/workflow/migrations.ts
[parseSchema.ts]: ../src/features/nodes/util/schema/parseSchema.ts
[buildFieldInputTemplate.ts]: ../src/features/nodes/util/schema/buildFieldInputTemplate.ts
[buildFieldOutputTemplate.ts]: ../src/features/nodes/util/schema/buildFieldOutputTemplate.ts
[buildWorkflow.ts]: https://github.com/invoke-ai/InvokeAI/blob/main/invokeai/frontend/web/src/features/nodes/util/workflow/buildWorkflow.ts
[nodesSlice.ts]: https://github.com/invoke-ai/InvokeAI/blob/main/invokeai/frontend/web/src/features/nodes/store/nodesSlice.ts
[buildLinearTextToImageGraph.ts]: https://github.com/invoke-ai/InvokeAI/blob/main/invokeai/frontend/web/src/features/nodes/util/graph/buildLinearTextToImageGraph.ts
[buildNodesGraph.ts]: https://github.com/invoke-ai/InvokeAI/blob/main/invokeai/frontend/web/src/features/nodes/util/graph/buildNodesGraph.ts
[buildInvocationNode.ts]: https://github.com/invoke-ai/InvokeAI/blob/main/invokeai/frontend/web/src/features/nodes/util/node/buildInvocationNode.ts
[validateWorkflow.ts]: https://github.com/invoke-ai/InvokeAI/blob/main/invokeai/frontend/web/src/features/nodes/util/workflow/validateWorkflow.ts
[migrations.ts]: https://github.com/invoke-ai/InvokeAI/blob/main/invokeai/frontend/web/src/features/nodes/util/workflow/migrations.ts
[parseSchema.ts]: https://github.com/invoke-ai/InvokeAI/blob/main/invokeai/frontend/web/src/features/nodes/util/schema/parseSchema.ts
[buildFieldInputTemplate.ts]: https://github.com/invoke-ai/InvokeAI/blob/main/invokeai/frontend/web/src/features/nodes/util/schema/buildFieldInputTemplate.ts
[buildFieldOutputTemplate.ts]: https://github.com/invoke-ai/InvokeAI/blob/main/invokeai/frontend/web/src/features/nodes/util/schema/buildFieldOutputTemplate.ts

View File

@ -6,195 +6,136 @@ title: Configuration
## Intro
InvokeAI has numerous runtime settings which can be used to adjust
many aspects of its operations, including the location of files and
directories, memory usage, and performance. These settings can be
viewed and customized in several ways:
Runtime settings, including the location of files and
directories, memory usage, and performance, are managed via the
`invokeai.yaml` config file or environment variables. A subset
of settings may be set via command-line arguments.
1. By editing settings in the `invokeai.yaml` file.
2. By setting environment variables.
3. On the command-line, when InvokeAI is launched.
Settings sources are used in this order:
In addition, the most commonly changed settings are accessible
graphically via the `invokeai-configure` script.
- CLI args
- Environment variables
- `invokeai.yaml` settings
- Fallback: defaults
### How the Configuration System Works
### InvokeAI Root Directory
When InvokeAI is launched, the very first thing it needs to do is to
find its "root" directory, which contains its configuration files,
installed models, its database of images, and the folder(s) of
generated images themselves. In this document, the root directory will
be referred to as ROOT.
On startup, InvokeAI searches for its "root" directory. This is the directory
that contains models, images, the database, and so on. It also contains
a configuration file called `invokeai.yaml`.
#### Finding the Root Directory
InvokeAI searches for the root directory in this order:
To find its root directory, InvokeAI uses the following recipe:
1. The `--root <path>` CLI arg.
2. The environment variable INVOKEAI_ROOT.
3. The directory containing the currently active virtual environment.
4. Fallback: a directory in the current user's home directory named `invokeai`.
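A rough sketch of this lookup order (illustrative only, not the actual implementation):
```py
import os
from pathlib import Path


def find_root(cli_root: str | None = None) -> Path:
    if cli_root:  # 1. --root CLI arg
        return Path(cli_root)
    if "INVOKEAI_ROOT" in os.environ:  # 2. INVOKEAI_ROOT env var
        return Path(os.environ["INVOKEAI_ROOT"])
    if "VIRTUAL_ENV" in os.environ:  # 3. directory containing the active venv
        return Path(os.environ["VIRTUAL_ENV"]).parent
    return Path.home() / "invokeai"  # 4. fallback: ~/invokeai
```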
1. It first looks for the argument `--root <path>` on the command line
it was launched from, and uses the indicated path if present.
### InvokeAI Configuration File
2. Next it looks for the environment variable INVOKEAI_ROOT, and uses
the directory path found there if present.
Inside the root directory, we read settings from the `invokeai.yaml` file.
3. If neither of these are present, then InvokeAI looks for the
folder containing the `.venv` Python virtual environment directory for
the currently active environment. This directory is checked for files
expected inside the InvokeAI root before it is used.
It has two sections - one for internal use and one for user settings:
4. Finally, InvokeAI looks for a directory in the current user's home
directory named `invokeai`.
```yaml
# Internal metadata - do not edit:
schema_version: 4
#### Reading the InvokeAI Configuration File
Once the root directory has been located, InvokeAI looks for a file
named `ROOT/invokeai.yaml`, and if present reads configuration values
from it. The top of this file looks like this:
```
InvokeAI:
Web Server:
host: localhost
port: 9090
allow_origins: []
allow_credentials: true
allow_methods:
- '*'
allow_headers:
- '*'
Features:
esrgan: true
internet_available: true
log_tokenization: false
patchmatch: true
restore: true
...
# Put user settings here - see https://invoke-ai.github.io/InvokeAI/features/CONFIGURATION/:
host: 0.0.0.0 # serve the app on your local network
models_dir: D:\invokeai\models # store models on an external drive
precision: float16 # always use fp16 precision
```
These lines in this file are used to establish default values for
Invoke's settings. In the above fragment, the Web Server's listening
port is set to 9090 by the `port` setting.
The settings in this file will override the defaults. You only need
to change this file if the default for a particular setting doesn't
work for you.
You can edit this file with a text editor such as "Notepad" (do not
use Word or any other word processor). When editing, be careful to
maintain the indentation, and do not add extraneous text, as syntax
errors will prevent InvokeAI from launching. A basic guide to the
format of YAML files can be found
[here](https://circleci.com/blog/what-is-yaml-a-beginner-s-guide/).
Some settings, like [Model Marketplace API Keys], require the YAML
to be formatted correctly. Here is a [basic guide to YAML files].
You can fix a broken `invokeai.yaml` by deleting it and running the
configuration script again -- option [6] in the launcher, "Re-run the
configure script".
#### Reading Environment Variables
#### Custom Config File Location
Next InvokeAI looks for defined environment variables in the format
`INVOKEAI_<setting_name>`, for example `INVOKEAI_port`. Environment
variable values take precedence over configuration file variables. On
a Macintosh system, for example, you could change the port that the
web server listens on by setting the environment variable this way:
You can use any config file with the `--config` CLI arg. Pass in the path to the `invokeai.yaml` file you want to use.
```
export INVOKEAI_port=8000
invokeai-web
Note that environment variables take precedence over any settings in the config file.
### Environment Variables
All settings may be set via environment variables by prefixing `INVOKEAI_`
to the variable name. For example, `INVOKEAI_HOST` would set the `host`
setting.
For non-primitive values, pass a JSON-encoded string:
```sh
export INVOKEAI_REMOTE_API_TOKENS='[{"url_regex":"modelmarketplace", "token": "12345"}]'
```
Please check out these
[Macintosh](https://phoenixnap.com/kb/set-environment-variable-mac)
and
[Windows](https://phoenixnap.com/kb/windows-set-environment-variable)
guides for setting temporary and permanent environment variables.
We suggest using `invokeai.yaml`, as it is more user-friendly.
#### Reading the Command Line
### CLI Args
Lastly, InvokeAI takes settings from the command line, which override
everything else. The command-line settings have the same name as the
corresponding configuration file settings, preceded by a `--`, for
example `--port 8000`.
A subset of settings may be specified using CLI args:
If you are using the launcher (`invoke.sh` or `invoke.bat`) to launch
InvokeAI, then just pass the command-line arguments to the launcher:
- `--root`: specify the root directory
- `--config`: override the default `invokeai.yaml` file location
```
invoke.bat --port 8000 --host 0.0.0.0
```
The arguments will be applied when you select the web server option
(and the other options as well).
If, on the other hand, you prefer to launch InvokeAI directly from the
command line, you would first activate the virtual environment (known
as the "developer's console" in the launcher), and run `invokeai-web`:
```
> C:\Users\Fred\invokeai\.venv\scripts\activate
(.venv) > invokeai-web --port 8000 --host 0.0.0.0
```
You can get a listing and brief instructions for each of the
command-line options by giving the `--help` argument:
```
(.venv) > invokeai-web --help
usage: InvokeAI [-h] [--host HOST] [--port PORT] [--allow_origins [ALLOW_ORIGINS ...]] [--allow_credentials | --no-allow_credentials] [--allow_methods [ALLOW_METHODS ...]]
[--allow_headers [ALLOW_HEADERS ...]] [--esrgan | --no-esrgan] [--internet_available | --no-internet_available] [--log_tokenization | --no-log_tokenization]
[--patchmatch | --no-patchmatch] [--restore | --no-restore]
[--always_use_cpu | --no-always_use_cpu] [--free_gpu_mem | --no-free_gpu_mem] [--max_loaded_models MAX_LOADED_MODELS] [--max_cache_size MAX_CACHE_SIZE]
[--max_vram_cache_size MAX_VRAM_CACHE_SIZE] [--gpu_mem_reserved GPU_MEM_RESERVED] [--precision {auto,float16,float32,autocast}]
[--sequential_guidance | --no-sequential_guidance] [--xformers_enabled | --no-xformers_enabled] [--tiled_decode | --no-tiled_decode] [--root ROOT]
[--autoimport_dir AUTOIMPORT_DIR] [--lora_dir LORA_DIR] [--embedding_dir EMBEDDING_DIR] [--controlnet_dir CONTROLNET_DIR] [--conf_path CONF_PATH]
[--models_dir MODELS_DIR] [--legacy_conf_dir LEGACY_CONF_DIR] [--db_dir DB_DIR] [--outdir OUTDIR] [--from_file FROM_FILE]
[--use_memory_db | --no-use_memory_db] [--model MODEL] [--log_handlers [LOG_HANDLERS ...]] [--log_format {plain,color,syslog,legacy}]
[--log_level {debug,info,warning,error,critical}] [--version | --no-version]
```
## The Configuration Settings
The config is managed by the `InvokeAIAppConfig` class, which is a pydantic model. The below docs are autogenerated from the class.
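For example, application code reads the loaded settings via the config singleton; attribute names mirror the setting names (a minimal sketch, assuming the `host` and `port` settings):
```py
from invokeai.app.services.config.config_default import get_config

config = get_config()  # singleton InvokeAIAppConfig, resolved from CLI args, env vars and invokeai.yaml
print(config.host, config.port)
```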
When editing your `invokeai.yaml` file, you'll need to put settings under their appropriate group. The group for each setting is denoted in the table below.
### All Settings
Following the table are additional explanations for certain settings.
<!-- prettier-ignore-start -->
::: invokeai.app.services.config.config_default.InvokeAIAppConfig
options:
heading_level: 3
heading_level: 4
members: false
show_docstring_description: false
group_by_category: true
show_category_heading: false
<!-- prettier-ignore-end -->
### Model Marketplace API Keys
#### Model Marketplace API Keys
Some model marketplaces require an API key to download models. You can provide a URL pattern and appropriate token in your `invokeai.yaml` file to provide that API key.
The pattern can be any valid regex (you may need to surround the pattern with quotes):
```yaml
InvokeAI:
Model Install:
remote_api_tokens:
# Any URL containing `models.com` will automatically use `your_models_com_token`
- url_regex: models.com
token: your_models_com_token
# Any URL matching this contrived regex will use `some_other_token`
- url_regex: '^[a-z]{3}whatever.*\.com$'
token: some_other_token
remote_api_tokens:
# Any URL containing `models.com` will automatically use `your_models_com_token`
- url_regex: models.com
token: your_models_com_token
# Any URL matching this contrived regex will use `some_other_token`
- url_regex: '^[a-z]{3}whatever.*\.com$'
token: some_other_token
```
The provided token will be added as a `Bearer` token to the network requests to download the model files. As far as we know, this works for all model marketplaces that require authorization.
### Model Hashing
#### Model Hashing
Models are hashed during installation with the `BLAKE3` algorithm, providing a stable identifier for models across all platforms.
Models are hashed during installation, providing a stable identifier for models across all platforms. The default algorithm is `blake3`, with a multi-threaded implementation.
Model hashing is a one-time operation, but it may take a couple of minutes to hash a large model collection. You may opt out of model hashing and have a random UUID assigned instead:
If your models are stored on a spinning hard drive, we suggest using `blake3_single`, the single-threaded implementation. The hashes are the same, but it's much faster on spinning disks.
```yaml
InvokeAI:
Model Install:
skip_model_hash: true
hashing_algorithm: blake3_single
```
### Paths
Model hashing is a one-time operation, but it may take a couple minutes to hash a large model collection. You may opt out of model hashing entirely by setting the algorithm to `random`.
```yaml
hashing_algorithm: random
```
Most common algorithms are supported, like `md5`, `sha256`, and `sha512`. These are typically much, much slower than `blake3`.
#### Path Settings
These options set the paths of various directories and files used by
InvokeAI. Relative paths are interpreted relative to the root directory, so
@ -206,15 +147,15 @@ Note that the autoimport directory will be searched recursively,
allowing you to organize the models into folders and subfolders in any
way you wish.
### Logging
#### Logging
Several different log handler destinations are available, and multiple destinations are supported by providing a list:
```
log_handlers:
- console
- syslog=localhost
- file=/var/log/invokeai.log
```yaml
log_handlers:
- console
- syslog=localhost
- file=/var/log/invokeai.log
```
- `console` is the default. It prints log messages to the command-line window from which InvokeAI was launched.
@ -248,3 +189,6 @@ The `log_format` option provides several alternative formats:
- `plain` - same as above, but monochrome text only
- `syslog` - the log level and error message only, allowing the syslog system to attach the time and date
- `legacy` - a format similar to the one used by the legacy 2.3 InvokeAI releases.
[basic guide to yaml files]: https://circleci.com/blog/what-is-yaml-a-beginner-s-guide/
[Model Marketplace API Keys]: #model-marketplace-api-keys

View File

@ -122,9 +122,9 @@ experimental versions later.
[latest release](https://github.com/invoke-ai/InvokeAI/releases/latest),
and look for a file named:
- InvokeAI-installer-v3.X.X.zip
- InvokeAI-installer-v4.X.X.zip
where "3.X.X" is the latest released version. The file is located
where "4.X.X" is the latest released version. The file is located
at the very bottom of the release page, under **Assets**.
4. **Unpack the installer**: Unpack the zip file into a convenient directory. This will create a new
@ -199,136 +199,7 @@ experimental versions later.
![initial-settings-screenshot](../assets/installer-walkthrough/settings-form.png)
</figure>
10. **Post-install Configuration**: After installation completes, the
installer will launch the configuration form, which will guide you
through the first-time process of adjusting some of InvokeAI's
startup settings. To move around this form use ctrl-N for
&lt;N&gt;ext and ctrl-P for &lt;P&gt;revious, or use &lt;tab&gt;
and shift-&lt;tab&gt; to move forward and back. Once you are in a
multi-checkbox field use the up and down cursor keys to select the
item you want, and &lt;space&gt; to toggle it on and off. Within
a directory field, pressing &lt;tab&gt; will provide autocomplete
options.
Generally the defaults are fine, and you can come back to this screen at
any time to tweak your system. Here are the options you can adjust:
- ***HuggingFace Access Token***
InvokeAI has the ability to download embedded styles and subjects
from the HuggingFace Concept Library on-demand. However, some of
the concept library files are password protected. To make download
smoother, you can set up an account at huggingface.co, obtain an
access token, and paste it into this field. Note that you paste
to this screen using ctrl-shift-V
- ***Free GPU memory after each generation***
This is useful for low-memory machines and helps minimize the
amount of GPU VRAM used by InvokeAI.
- ***Enable xformers support if available***
If the xformers library was successfully installed, this will activate
it to reduce memory consumption and increase rendering speed noticeably.
Note that xformers has the side effect of generating slightly different
images even when presented with the same seed and other settings.
- ***Force CPU to be used on GPU systems***
This will use the (slow) CPU rather than the accelerated GPU. This
can be used to generate images on systems that don't have a compatible
GPU.
- ***Precision***
This controls whether to use float32 or float16 arithmetic.
float16 uses less memory but is also slightly less accurate.
Ordinarily the right arithmetic is picked automatically ("auto"),
but you may have to use float32 to get images on certain systems
and graphics cards. The "autocast" option is deprecated and
shouldn't be used unless you are asked to by a member of the team.
- ***Size of the RAM cache used for fast model switching***
This allows you to keep models in memory and switch rapidly among
them rather than having them load from disk each time. This slider
controls how many models to keep loaded at once. A typical SD-1 or SD-2 model
uses 2-3 GB of memory. A typical SDXL model uses 6-7 GB. Providing more
RAM will allow more models to be co-resident.
- ***Output directory for images***
This is the path to a directory in which InvokeAI will store all its
generated images.
- ***Autoimport Folder***
This is the directory in which you can place models you have
downloaded and wish to load into InvokeAI. You can place a variety
of models in this directory, including diffusers folders, .ckpt files,
.safetensors files, as well as LoRAs, ControlNet and Textual Inversion
files (both folder and file versions). To help organize this folder,
you can create several levels of subfolders and drop your models into
whichever ones you want.
- ***LICENSE***
At the bottom of the screen you will see a checkbox for accepting
the CreativeML Responsible AI Licenses. You need to accept the license
in order to download Stable Diffusion models from the next screen.
_You can come back to the startup options form_ as many times as you like.
From the `invoke.sh` or `invoke.bat` launcher, select option (6) to relaunch
this script. On the command line, it is named `invokeai-configure`.
11. **Downloading Models**: After you press `[NEXT]` on the screen, you will be taken
to another screen that prompts you to download a series of starter models. The ones
we recommend are preselected for you, but you are encouraged to use the checkboxes to
pick and choose.
You will probably wish to download `autoencoder-840000` for use with models that
were trained with an older version of the Stability VAE.
<figure markdown>
![select-models-screenshot](../assets/installer-walkthrough/installing-models.png)
</figure>
Below the preselected list of starter models is a large text field which you can use
to specify a series of models to import. You can specify models in a variety of formats,
each separated by a space or newline. The formats accepted are:
- The path to a .ckpt or .safetensors file. On most systems, you can drag a file from
the file browser to the textfield to automatically paste the path. Be sure to remove
extraneous quotation marks and other things that come along for the ride.
- The path to a directory containing a combination of `.ckpt` and `.safetensors` files.
The directory will be scanned from top to bottom (including subfolders) and any
file that can be imported will be.
- A URL pointing to a `.ckpt` or `.safetensors` file. You can cut
and paste directly from a web page, or simply drag the link from the web page
or navigation bar. (You can also use ctrl-shift-V to paste into this field)
The file will be downloaded and installed.
- The HuggingFace repository ID (repo_id) for a `diffusers` model. These IDs have
the format _author_name/model_name_, as in `andite/anything-v4.0`
- The path to a local directory containing a `diffusers`
model. These directories always have the file `model_index.json`
at their top level.
_Select a directory for models to import_ You may select a local
directory for autoimporting at startup time. If you select this
option, the directory you choose will be scanned for new
.ckpt/.safetensors files each time InvokeAI starts up, and any new
files will be automatically imported and made available for your
use.
_Convert imported models into diffusers_ When legacy checkpoint
files are imported, you may select to use them unmodified (the
default) or to convert them into `diffusers` models. The latter
load much faster and have slightly better rendering performance,
but not all checkpoint files can be converted. Note that Stable Diffusion
Version 2.X files are **only** supported in `diffusers` format and will
be converted regardless.
_You can come back to the model install form_ as many times as you like.
From the `invoke.sh` or `invoke.bat` launcher, select option (5) to relaunch
this script. On the command line, it is named `invokeai-model-install`.
12. **Running InvokeAI for the first time**: The script will now exit and you'll be ready to generate some images. Look
10. **Running InvokeAI for the first time**: The script will now exit and you'll be ready to generate some images. Look
for the directory `invokeai` installed in the location you chose at the
beginning of the install session. Look for a shell script named `invoke.sh`
(Linux/Mac) or `invoke.bat` (Windows). Launch the script by double-clicking
@ -349,14 +220,14 @@ experimental versions later.
http://localhost:9090. Click on this link to open up a browser
and start exploring InvokeAI's features.
12. **InvokeAI Options**: You can launch InvokeAI with several different command-line arguments that
customize its behavior. For example, you can change the location of the
12. **InvokeAI Options**: You can configure InvokeAI via the `invokeai.yaml` config file.
For example, you can change the location of the
image output directory or balance memory usage vs performance. See
[Configuration](../features/CONFIGURATION.md) for a full list of the options.
- To set defaults that will take effect every time you launch InvokeAI,
use a text editor (e.g. Notepad) to edit the file
`invokeai\invokeai.init`. It contains a variety of examples that you can
`invokeai\invokeai.yaml`. It contains a variety of examples that you can
follow to add and modify launch options.
- The launcher script also offers you an option labeled "open the developer
@ -394,7 +265,6 @@ rm .\.venv -r -force
python -mvenv .venv
.\.venv\Scripts\activate
pip install invokeai
invokeai-configure --yes --root .
```
If you see anything marked as an error during this process please stop
@ -426,16 +296,10 @@ error messages:
This failure mode occurs when there is a network glitch during
downloading the very large SDXL model.
To address this, first go to the Web Model Manager and delete the
Stable-Diffusion-XL-base-1.X model. Then navigate to HuggingFace and
manually download the .safetensors version of the model. The 1.0
version is located at
https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/tree/main
and the file is named `sd_xl_base_1.0.safetensors`.
Save this file to disk and then reenter the Model Manager. Navigate to
Import Models->Add Model, then type (or drag-and-drop) the path to the
.safetensors file. Press "Add Model".
To address this, first go to the Model Manager and delete the
Stable-Diffusion-XL-base-1.X model. Then, click the HuggingFace tab,
paste the Repo ID stabilityai/stable-diffusion-xl-base-1.0 and install
the model.
### _Package dependency conflicts_
@ -488,15 +352,7 @@ download models, etc), but this doesn't fix the problem.
This issue is often caused by a misconfigured configuration directive in the
`invokeai\invokeai.init` initialization file that contains startup settings. The
easiest way to fix the problem is to move the file out of the way and re-run
`invokeai-configure`. Enter the developer's console (option 3 of the launcher
script) and run this command:
```cmd
invokeai-configure --root=.
```
Note the dot (.) after `--root`. It is part of the command.
easiest way to fix the problem is to move the file out of the way and restart the app.
_If none of these maneuvers fixes the problem_ then please report the problem to
the [InvokeAI Issues](https://github.com/invoke-ai/InvokeAI/issues) section, or
@ -565,16 +421,4 @@ This distribution is changing rapidly, and we add new features
regularly. Releases are announced at
http://github.com/invoke-ai/InvokeAI/releases, and at
https://pypi.org/project/InvokeAI/ To update to the latest released
version (recommended), follow these steps:
1. Start the `invoke.sh`/`invoke.bat` launch script from within the
`invokeai` root directory.
2. Choose menu item (10) "Update InvokeAI".
3. This will launch a menu that gives you the option of:
1. Updating to the latest official release;
2. Updating to the bleeding-edge development version; or
3. Manually entering the tag or branch name of a version of
InvokeAI you wish to try out.
version (recommended), download the latest release and run the installer.

View File

@ -26,7 +26,7 @@ driver).
🖥️ **Download the latest installer .zip file here**: https://github.com/invoke-ai/InvokeAI/releases/latest
- *Look for the file labelled "InvokeAI-installer-v3.X.X.zip" at the bottom of the page*
- *Look for the file labelled "InvokeAI-installer-v4.X.X.zip" at the bottom of the page*
- If you experience issues, read through the full [installation instructions](010_INSTALL_AUTOMATED.md) to make sure you have met all of the installation requirements. If you need more help, join the [Discord](https://discord.gg/invoke-ai) or create an issue on [GitHub](https://github.com/invoke-ai/InvokeAI).

View File

@ -22,6 +22,24 @@ class MyInvocation(BaseInvocation):
...
```
The full API is documented below.
## Invocation Mixins
Two important mixins are provided to facilitate working with metadata and gallery boards.
### `WithMetadata`
Inherit from this class (in addition to `BaseInvocation`) to add a `metadata` input to your node. When you do this, you can access the metadata dict from `self.metadata` in the `invoke()` function.
The dict will be populated via the node's input, and you can add any metadata you'd like to it. When you call `context.images.save()`, if the metadata dict has any data, it will be automatically embedded in the image.
### `WithBoard`
Inherit from this class (in addition to `BaseInvocation`) to add a `board` input to your node. This renders as a drop-down to select a board. The user's selection will be accessible from `self.board` in the `invoke()` function.
When you call `context.images.save()`, if a board was selected, the image will be added to that board as it is saved.
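The sketch below shows both mixins in use on a hypothetical node (it is not part of the built-in node set; the import paths and the `ImageOutput.build` helper are assumptions about the current node API):
```py
from PIL import Image

from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation
from invokeai.app.invocations.fields import InputField, WithBoard, WithMetadata
from invokeai.app.invocations.primitives import ImageOutput
from invokeai.app.services.shared.invocation_context import InvocationContext


@invocation("blank_image_example", title="Blank Image (Example)", tags=["example"], category="image", version="1.0.0")
class BlankImageExampleInvocation(BaseInvocation, WithMetadata, WithBoard):
    """Generate a solid black image; metadata embedding and board assignment come from the mixins."""

    width: int = InputField(default=512, gt=0, description="Width of the image in pixels")
    height: int = InputField(default=512, gt=0, description="Height of the image in pixels")

    def invoke(self, context: InvocationContext) -> ImageOutput:
        image = Image.new("RGB", (self.width, self.height))
        # Because this class inherits WithMetadata and WithBoard, context.images.save()
        # embeds self.metadata (if populated) and adds the image to self.board (if one was selected).
        image_dto = context.images.save(image=image)
        return ImageOutput.build(image_dto)
```
No extra code is needed for the metadata or board handling beyond the inheritance; the context takes care of both when the image is saved.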
<!-- prettier-ignore-start -->
::: invokeai.app.services.shared.invocation_context.InvocationContext
options:

View File

@ -149,9 +149,6 @@ class Installer:
# install the launch/update scripts into the runtime directory
self.instance.install_user_scripts()
# run through the configuration flow
self.instance.configure()
class InvokeAiInstance:
"""
@ -242,53 +239,6 @@ class InvokeAiInstance:
)
sys.exit(1)
def configure(self):
"""
Configure the InvokeAI runtime directory
"""
auto_install = False
# set sys.argv to a consistent state
new_argv = [sys.argv[0]]
for i in range(1, len(sys.argv)):
el = sys.argv[i]
if el in ["-r", "--root"]:
new_argv.append(el)
new_argv.append(sys.argv[i + 1])
elif el in ["-y", "--yes", "--yes-to-all"]:
auto_install = True
sys.argv = new_argv
import messages
import requests # to catch download exceptions
auto_install = auto_install or messages.user_wants_auto_configuration()
if auto_install:
sys.argv.append("--yes")
else:
messages.introduction()
from invokeai.frontend.install.invokeai_configure import invokeai_configure
# NOTE: currently the config script does its own arg parsing! this means the command-line switches
# from the installer will also automatically propagate down to the config script.
# this may change in the future with config refactoring!
succeeded = False
try:
invokeai_configure()
succeeded = True
except requests.exceptions.ConnectionError as e:
print(f"\nA network error was encountered during configuration and download: {str(e)}")
except OSError as e:
print(f"\nAn OS error was encountered during configuration and download: {str(e)}")
except Exception as e:
print(f"\nA problem was encountered during the configuration and download steps: {str(e)}")
finally:
if not succeeded:
print('To try again, find the "invokeai" directory, run the script "invoke.sh" or "invoke.bat"')
print("and choose option 7 to fix a broken install, optionally followed by option 5 to install models.")
print("Alternatively you can relaunch the installer.")
def install_user_scripts(self):
"""
Copy the launch and update scripts to the runtime dir

View File

@ -8,7 +8,7 @@ import platform
from enum import Enum
from pathlib import Path
from prompt_toolkit import HTML, prompt
from prompt_toolkit import prompt
from prompt_toolkit.completion import FuzzyWordCompleter, PathCompleter
from prompt_toolkit.validation import Validator
from rich import box, print
@ -98,39 +98,6 @@ def choose_version(available_releases: tuple | None = None) -> str:
return "stable" if response == "" else response
def user_wants_auto_configuration() -> bool:
"""Prompt the user to choose between manual and auto configuration."""
console.rule("InvokeAI Configuration Section")
console.print(
Panel(
Group(
"\n".join(
[
"Libraries are installed and InvokeAI will now set up its root directory and configuration. Choose between:",
"",
" * AUTOMATIC configuration: install reasonable defaults and a minimal set of starter models.",
" * MANUAL configuration: manually inspect and adjust configuration options and pick from a larger set of starter models.",
"",
"Later you can fine tune your configuration by selecting option [6] 'Change InvokeAI startup options' from the invoke.bat/invoke.sh launcher script.",
]
),
),
box=box.MINIMAL,
padding=(1, 1),
)
)
choice = (
prompt(
HTML("Choose <b>&lt;a&gt;</b>utomatic or <b>&lt;m&gt;</b>anual configuration [a/m] (a): "),
validator=Validator.from_callable(
lambda n: n == "" or n.startswith(("a", "A", "m", "M")), error_message="Please select 'a' or 'm'"
),
)
or "a"
)
return choice.lower().startswith("a")
def confirm_install(dest: Path) -> bool:
if dest.exists():
print(f":stop_sign: Directory {dest} already exists!")
@ -351,34 +318,6 @@ def windows_long_paths_registry() -> None:
)
def introduction() -> None:
"""
Display a banner when starting configuration of the InvokeAI application
"""
console.rule()
console.print(
Panel(
title=":art: Configuring InvokeAI :art:",
renderable=Group(
"",
"[b]This script will:",
"",
"1. Configure the InvokeAI application directory",
"2. Help download the Stable Diffusion weight files",
" and other large models that are needed for text to image generation",
"3. Create initial configuration files.",
"",
"[i]At any point you may interrupt this program and resume later.",
"",
"[b]For the best user experience, please enlarge or maximize this window",
),
)
)
console.line(2)
def _platform_specific_help() -> Text | None:
if OS == "Darwin":
text = Text.from_markup(

View File

@ -9,15 +9,10 @@ set INVOKEAI_ROOT=.
:start
echo Desired action:
echo 1. Generate images with the browser-based interface
echo 2. Run textual inversion training
echo 3. Merge models (diffusers type only)
echo 4. Download and install models
echo 5. Change InvokeAI startup options
echo 6. Re-run the configure script to fix a broken install or to complete a major upgrade
echo 7. Open the developer console
echo 8. Update InvokeAI (DEPRECATED - please use the installer)
echo 9. Run the InvokeAI image database maintenance script
echo 10. Command-line help
echo 2. Open the developer console
echo 3. Update InvokeAI (DEPRECATED - please use the installer)
echo 4. Run the InvokeAI image database maintenance script
echo 5. Command-line help
echo Q - Quit
set /P choice="Please enter 1-5, Q: [1] "
if not defined choice set choice=1
@ -25,21 +20,6 @@ IF /I "%choice%" == "1" (
echo Starting the InvokeAI browser-based UI..
python .venv\Scripts\invokeai-web.exe %*
) ELSE IF /I "%choice%" == "2" (
echo Starting textual inversion training..
python .venv\Scripts\invokeai-ti.exe --gui
) ELSE IF /I "%choice%" == "3" (
echo Starting model merging script..
python .venv\Scripts\invokeai-merge.exe --gui
) ELSE IF /I "%choice%" == "4" (
echo Running invokeai-model-install...
python .venv\Scripts\invokeai-model-install.exe
) ELSE IF /I "%choice%" == "5" (
echo Running invokeai-configure...
python .venv\Scripts\invokeai-configure.exe --skip-sd-weight --skip-support-models
) ELSE IF /I "%choice%" == "6" (
echo Running invokeai-configure...
python .venv\Scripts\invokeai-configure.exe --yes --skip-sd-weight
) ELSE IF /I "%choice%" == "7" (
echo Developer Console
echo Python command is:
where python
@ -51,15 +31,15 @@ IF /I "%choice%" == "1" (
echo *************************
echo *** Type `exit` to quit this shell and deactivate the Python virtual environment ***
call cmd /k
) ELSE IF /I "%choice%" == "8" (
) ELSE IF /I "%choice%" == "3" (
echo UPDATING FROM WITHIN THE APP IS BEING DEPRECATED.
echo Please download the installer from https://github.com/invoke-ai/InvokeAI/releases/latest and run it to update your installation.
timeout 4
python -m invokeai.frontend.install.invokeai_update
) ELSE IF /I "%choice%" == "9" (
) ELSE IF /I "%choice%" == "4" (
echo Running the db maintenance script...
python .venv\Scripts\invokeai-db-maintenance.exe
) ELSE IF /I "%choice%" == "10" (
) ELSE IF /I "%choice%" == "5" (
echo Displaying command line help...
python .venv\Scripts\invokeai-web.exe --help %*
pause

View File

@ -58,49 +58,24 @@ do_choice() {
invokeai-web $PARAMS
;;
2)
clear
printf "Textual inversion training\n"
invokeai-ti --gui $PARAMS
;;
3)
clear
printf "Merge models (diffusers type only)\n"
invokeai-merge --gui $PARAMS
;;
4)
clear
printf "Download and install models\n"
invokeai-model-install --root ${INVOKEAI_ROOT}
;;
5)
clear
printf "Change InvokeAI startup options\n"
invokeai-configure --root ${INVOKEAI_ROOT} --skip-sd-weights --skip-support-models
;;
6)
clear
printf "Re-run the configure script to fix a broken install or to complete a major upgrade\n"
invokeai-configure --root ${INVOKEAI_ROOT} --yes --default_only --skip-sd-weights
;;
7)
clear
printf "Open the developer console\n"
file_name=$(basename "${BASH_SOURCE[0]}")
bash --init-file "$file_name"
;;
8)
3)
clear
printf "UPDATING FROM WITHIN THE APP IS BEING DEPRECATED\n"
printf "Please download the installer from https://github.com/invoke-ai/InvokeAI/releases/latest and run it to update your installation.\n"
sleep 4
python -m invokeai.frontend.install.invokeai_update
;;
9)
4)
clear
printf "Running the db maintenance script\n"
invokeai-db-maintenance --root ${INVOKEAI_ROOT}
;;
10)
5)
clear
printf "Command-line help\n"
invokeai-web --help
@ -118,15 +93,10 @@ do_choice() {
do_dialog() {
options=(
1 "Generate images with a browser-based interface"
2 "Textual inversion training"
3 "Merge models (diffusers type only)"
4 "Download and install models"
5 "Change InvokeAI startup options"
6 "Re-run the configure script to fix a broken install or to complete a major upgrade"
7 "Open the developer console"
8 "Update InvokeAI (DEPRECATED - please use the installer)"
9 "Run the InvokeAI image database maintenance script"
10 "Command-line help"
2 "Open the developer console"
3 "Update InvokeAI (DEPRECATED - please use the installer)"
4 "Run the InvokeAI image database maintenance script"
5 "Command-line help"
)
choice=$(dialog --clear \
@ -151,15 +121,10 @@ do_line_input() {
printf " ** For a more attractive experience, please install the 'dialog' utility using your package manager. **\n\n"
printf "What would you like to do?\n"
printf "1: Generate images using the browser-based interface\n"
printf "2: Run textual inversion training\n"
printf "3: Merge models (diffusers type only)\n"
printf "4: Download and install models\n"
printf "5: Change InvokeAI startup options\n"
printf "6: Re-run the configure script to fix a broken install\n"
printf "7: Open the developer console\n"
printf "8: Update InvokeAI\n"
printf "9: Run the InvokeAI image database maintenance script\n"
printf "10: Command-line help\n"
printf "2: Open the developer console\n"
printf "3: Update InvokeAI\n"
printf "4: Run the InvokeAI image database maintenance script\n"
printf "5: Command-line help\n"
printf "Q: Quit\n\n"
read -p "Please enter 1-10, Q: [1] " yn
choice=${yn:='1'}

View File

@ -1,11 +0,0 @@
Organization of the source tree:
app -- Home of nodes invocations and services
assets -- Images and other data files used by InvokeAI
backend -- Non-user facing libraries, including the rendering
core.
configs -- Configuration files used at install and run times
frontend -- User-facing scripts, including the CLI and the WebUI
version -- Current InvokeAI version string, stored
in version/invokeai_version.py

View File

@ -64,9 +64,8 @@ class ApiDependencies:
def initialize(config: InvokeAIAppConfig, event_handler_id: int, logger: Logger = logger) -> None:
logger.info(f"InvokeAI version {__version__}")
logger.info(f"Root directory = {str(config.root_path)}")
logger.debug(f"Internet connectivity is {config.internet_available}")
output_folder = config.output_path
output_folder = config.outputs_path
if output_folder is None:
raise ValueError("Output folder is not set")

View File

@ -12,7 +12,6 @@ from pydantic import BaseModel, Field
from invokeai.app.invocations.upscale import ESRGAN_MODELS
from invokeai.app.services.invocation_cache.invocation_cache_common import InvocationCacheStatus
from invokeai.backend.image_util.invisible_watermark import InvisibleWatermark
from invokeai.backend.image_util.patchmatch import PatchMatch
from invokeai.backend.image_util.safety_checker import SafetyChecker
from invokeai.backend.util.logging import logging
@ -114,9 +113,7 @@ async def get_config() -> AppConfig:
if SafetyChecker.safety_checker_available():
nsfw_methods.append("nsfw_checker")
watermarking_methods = []
if InvisibleWatermark.invisible_watermark_available():
watermarking_methods.append("invisible_watermark")
watermarking_methods = ["invisible_watermark"]
return AppConfig(
infill_methods=infill_methods,

View File

@ -1,17 +1,21 @@
# Copyright (c) 2023 Lincoln D. Stein
"""FastAPI route for model configuration records."""
import contextlib
import io
import pathlib
import shutil
import traceback
from copy import deepcopy
from enum import Enum
from typing import Any, Dict, List, Optional
import huggingface_hub
from fastapi import Body, Path, Query, Response, UploadFile
from fastapi.responses import FileResponse
from fastapi.routing import APIRouter
from PIL import Image
from pydantic import BaseModel, ConfigDict, Field
from pydantic import AnyHttpUrl, BaseModel, ConfigDict, Field
from starlette.exceptions import HTTPException
from typing_extensions import Annotated
@ -21,6 +25,7 @@ from invokeai.app.services.model_records import (
UnknownModelException,
)
from invokeai.app.services.model_records.model_records_base import DuplicateModelException, ModelRecordChanges
from invokeai.app.util.suppress_output import SuppressOutput
from invokeai.backend.model_manager.config import (
AnyModelConfig,
BaseModelType,
@ -29,7 +34,10 @@ from invokeai.backend.model_manager.config import (
ModelType,
SubModelType,
)
from invokeai.backend.model_manager.metadata.fetch.huggingface import HuggingFaceMetadataFetch
from invokeai.backend.model_manager.metadata.metadata_base import ModelMetadataWithFiles, UnknownMetadataException
from invokeai.backend.model_manager.search import ModelSearch
from invokeai.backend.model_manager.starter_models import STARTER_MODELS, StarterModel
from ..dependencies import ApiDependencies
@ -246,6 +254,40 @@ async def scan_for_models(
return scan_results
class HuggingFaceModels(BaseModel):
urls: List[AnyHttpUrl] | None = Field(description="URLs for all checkpoint format models in the metadata")
is_diffusers: bool = Field(description="Whether the metadata is for a Diffusers format model")
@model_manager_router.get(
"/hugging_face",
operation_id="get_hugging_face_models",
responses={
200: {"description": "Hugging Face repo scanned successfully"},
400: {"description": "Invalid hugging face repo"},
},
status_code=200,
response_model=HuggingFaceModels,
)
async def get_hugging_face_models(
hugging_face_repo: str = Query(description="Hugging face repo to search for models", default=None),
) -> HuggingFaceModels:
try:
metadata = HuggingFaceMetadataFetch().from_id(hugging_face_repo)
except UnknownMetadataException:
raise HTTPException(
status_code=400,
detail="No HuggingFace repository found",
)
assert isinstance(metadata, ModelMetadataWithFiles)
return HuggingFaceModels(
urls=metadata.ckpt_urls,
is_diffusers=metadata.is_diffusers,
)
@model_manager_router.patch(
"/i/{key}",
operation_id="update_model_record",
@ -744,3 +786,69 @@ async def convert_model(
# except ValueError as e:
# raise HTTPException(status_code=400, detail=str(e))
# return response
@model_manager_router.get("/starter_models", operation_id="get_starter_models", response_model=list[StarterModel])
async def get_starter_models() -> list[StarterModel]:
installed_models = ApiDependencies.invoker.services.model_manager.store.search_by_attr()
installed_model_sources = {m.source for m in installed_models}
starter_models = deepcopy(STARTER_MODELS)
for model in starter_models:
if model.source in installed_model_sources:
model.is_installed = True
# Remove already-installed dependencies
missing_deps: list[str] = []
for dep in model.dependencies or []:
if dep not in installed_model_sources:
missing_deps.append(dep)
model.dependencies = missing_deps
return starter_models
class HFTokenStatus(str, Enum):
VALID = "valid"
INVALID = "invalid"
UNKNOWN = "unknown"
class HFTokenHelper:
@classmethod
def get_status(cls) -> HFTokenStatus:
try:
if huggingface_hub.get_token_permission(huggingface_hub.get_token()):
# Valid token!
return HFTokenStatus.VALID
# No token set
return HFTokenStatus.INVALID
except Exception:
return HFTokenStatus.UNKNOWN
@classmethod
def set_token(cls, token: str) -> HFTokenStatus:
with SuppressOutput(), contextlib.suppress(Exception):
huggingface_hub.login(token=token, add_to_git_credential=False)
return cls.get_status()
@model_manager_router.get("/hf_login", operation_id="get_hf_login_status", response_model=HFTokenStatus)
async def get_hf_login_status() -> HFTokenStatus:
token_status = HFTokenHelper.get_status()
if token_status is HFTokenStatus.UNKNOWN:
ApiDependencies.invoker.services.logger.warning("Unable to verify HF token")
return token_status
@model_manager_router.post("/hf_login", operation_id="do_hf_login", response_model=HFTokenStatus)
async def do_hf_login(
token: str = Body(description="Hugging Face token to use for login", embed=True),
) -> HFTokenStatus:
HFTokenHelper.set_token(token)
token_status = HFTokenHelper.get_status()
if token_status is HFTokenStatus.UNKNOWN:
ApiDependencies.invoker.services.logger.warning("Unable to verify HF token")
return token_status

View File

@ -1,73 +1,59 @@
# parse_args() must be called before any other imports. if it is not called first, consumers of the config
# which are imported/used before parse_args() is called will get the default config values instead of the
# values from the command line or config file.
import sys
import asyncio
import mimetypes
import socket
from contextlib import asynccontextmanager
from inspect import signature
from pathlib import Path
from typing import Any
import uvicorn
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware
from fastapi.middleware.gzip import GZipMiddleware
from fastapi.openapi.docs import get_redoc_html, get_swagger_ui_html
from fastapi.openapi.utils import get_openapi
from fastapi.responses import HTMLResponse
from fastapi_events.handlers.local import local_handler
from fastapi_events.middleware import EventHandlerASGIMiddleware
from pydantic.json_schema import models_json_schema
from torch.backends.mps import is_available as is_mps_available
# for PyCharm:
# noinspection PyUnresolvedReferences
import invokeai.backend.util.hotfixes # noqa: F401 (monkeypatching on import)
import invokeai.frontend.web as web_dir
from invokeai.app.api.no_cache_staticfiles import NoCacheStaticFiles
from invokeai.app.invocations.model import ModelIdentifierField
from invokeai.app.services.config.config_default import get_config
from invokeai.app.services.session_processor.session_processor_common import ProgressImage
from invokeai.version.invokeai_version import __version__
from .services.config import InvokeAIAppConfig
from ..backend.util.logging import InvokeAILogger
from .api.dependencies import ApiDependencies
from .api.routers import (
app_info,
board_images,
boards,
download_queue,
images,
model_manager,
session_queue,
utilities,
workflows,
)
from .api.sockets import SocketIO
from .invocations.baseinvocation import (
BaseInvocation,
UIConfigBase,
)
from .invocations.fields import InputFieldJSONSchemaExtra, OutputFieldJSONSchemaExtra
app_config = InvokeAIAppConfig.get_config()
app_config.parse_args()
if app_config.version:
print(f"InvokeAI version {__version__}")
sys.exit(0)
if True: # hack to make flake8 happy with imports coming after setting up the config
import asyncio
import mimetypes
import socket
from contextlib import asynccontextmanager
from inspect import signature
from pathlib import Path
from typing import Any
import uvicorn
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware
from fastapi.middleware.gzip import GZipMiddleware
from fastapi.openapi.docs import get_redoc_html, get_swagger_ui_html
from fastapi.openapi.utils import get_openapi
from fastapi.responses import HTMLResponse
from fastapi_events.handlers.local import local_handler
from fastapi_events.middleware import EventHandlerASGIMiddleware
from pydantic.json_schema import models_json_schema
from torch.backends.mps import is_available as is_mps_available
# for PyCharm:
# noinspection PyUnresolvedReferences
import invokeai.backend.util.hotfixes # noqa: F401 (monkeypatching on import)
import invokeai.frontend.web as web_dir
from invokeai.app.api.no_cache_staticfiles import NoCacheStaticFiles
from ..backend.util.logging import InvokeAILogger
from .api.dependencies import ApiDependencies
from .api.routers import (
app_info,
board_images,
boards,
download_queue,
images,
model_manager,
session_queue,
utilities,
workflows,
)
from .api.sockets import SocketIO
from .invocations.baseinvocation import (
BaseInvocation,
UIConfigBase,
)
from .invocations.fields import InputFieldJSONSchemaExtra, OutputFieldJSONSchemaExtra
if is_mps_available():
import invokeai.backend.util.mps_fixes # noqa: F401 (monkeypatching on import)
app_config = get_config()
if is_mps_available():
import invokeai.backend.util.mps_fixes # noqa: F401 (monkeypatching on import)
app_config = InvokeAIAppConfig.get_config()
app_config.parse_args()
logger = InvokeAILogger.get_logger(config=app_config)
# fix for windows mimetypes registry entries being borked
# see https://github.com/invoke-ai/InvokeAI/discussions/3684#discussioncomment-6391352
@ -247,10 +233,6 @@ def invoke_api() -> None:
else:
return port
from invokeai.backend.install.check_root import check_invokeai_root
check_invokeai_root(app_config) # note, may exit with an exception if root not set up
if app_config.dev_reload:
try:
import jurigged

View File

@ -3,9 +3,9 @@ import sys
from importlib.util import module_from_spec, spec_from_file_location
from pathlib import Path
from invokeai.app.services.config.config_default import InvokeAIAppConfig
from invokeai.app.services.config.config_default import get_config
custom_nodes_path = Path(InvokeAIAppConfig.get_config().custom_nodes_path.resolve())
custom_nodes_path = Path(get_config().custom_nodes_path)
custom_nodes_path.mkdir(parents=True, exist_ok=True)
custom_nodes_init_path = str(custom_nodes_path / "__init__.py")

View File

@ -33,7 +33,7 @@ from invokeai.app.invocations.fields import (
FieldKind,
Input,
)
from invokeai.app.services.config.config_default import InvokeAIAppConfig
from invokeai.app.services.config.config_default import get_config
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.app.util.metaenum import MetaEnum
from invokeai.app.util.misc import uuid_string
@ -191,7 +191,7 @@ class BaseInvocation(ABC, BaseModel):
@classmethod
def get_invocations(cls) -> Iterable[BaseInvocation]:
"""Gets all invocations, respecting the allowlist and denylist."""
app_config = InvokeAIAppConfig.get_config()
app_config = get_config()
allowed_invocations: set[BaseInvocation] = set()
for sc in cls._invocation_classes:
invocation_type = sc.get_type()


@ -36,7 +36,7 @@ from .model import CLIPField
title="Prompt",
tags=["prompt", "compel"],
category="conditioning",
version="1.0.1",
version="1.1.1",
)
class CompelInvocation(BaseInvocation):
"""Parse prompt using compel package to conditioning."""
@ -232,7 +232,7 @@ class SDXLPromptInvocationBase:
title="SDXL Prompt",
tags=["sdxl", "compel", "prompt"],
category="conditioning",
version="1.0.1",
version="1.1.1",
)
class SDXLCompelPromptInvocation(BaseInvocation, SDXLPromptInvocationBase):
"""Parse prompt using compel package to conditioning."""
@ -325,7 +325,7 @@ class SDXLCompelPromptInvocation(BaseInvocation, SDXLPromptInvocationBase):
title="SDXL Refiner Prompt",
tags=["sdxl", "compel", "prompt"],
category="conditioning",
version="1.0.1",
version="1.1.1",
)
class SDXLRefinerCompelPromptInvocation(BaseInvocation, SDXLPromptInvocationBase):
"""Parse prompt using compel package to conditioning."""
@ -381,7 +381,7 @@ class CLIPSkipInvocationOutput(BaseInvocationOutput):
title="CLIP Skip",
tags=["clipskip", "clip", "skip"],
category="conditioning",
version="1.0.0",
version="1.1.0",
)
class CLIPSkipInvocation(BaseInvocation):
"""Skip layers in clip text_encoder model."""


@ -171,11 +171,12 @@ class ImageProcessorInvocation(BaseInvocation, WithMetadata, WithBoard):
title="Canny Processor",
tags=["controlnet", "canny"],
category="controlnet",
version="1.2.1",
version="1.3.1",
)
class CannyImageProcessorInvocation(ImageProcessorInvocation):
"""Canny edge detection for ControlNet"""
image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)
low_threshold: int = InputField(
default=100, ge=0, le=255, description="The low threshold of the Canny pixel gradient (0-255)"
)
@ -189,7 +190,12 @@ class CannyImageProcessorInvocation(ImageProcessorInvocation):
def run_processor(self, image):
canny_processor = CannyDetector()
processed_image = canny_processor(image, self.low_threshold, self.high_threshold)
processed_image = canny_processor(
image,
self.low_threshold,
self.high_threshold,
image_resolution=self.image_resolution,
)
return processed_image
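
The Canny hunk adds an image_resolution input and forwards it to the detector call. A standalone sketch of that call, assuming the CannyDetector class comes from the controlnet_aux package used by these processors and accepts the keywords shown above; the file name and threshold values are purely illustrative:

from controlnet_aux import CannyDetector
from PIL import Image

image = Image.open("input.png")  # illustrative input path
canny = CannyDetector()
# Thresholds bound the pixel-gradient range; image_resolution sizes the returned edge map.
edges = canny(image, 100, 200, image_resolution=512)
edges.save("canny_edges.png")
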
@ -198,7 +204,7 @@ class CannyImageProcessorInvocation(ImageProcessorInvocation):
title="HED (softedge) Processor",
tags=["controlnet", "hed", "softedge"],
category="controlnet",
version="1.2.1",
version="1.2.2",
)
class HedImageProcessorInvocation(ImageProcessorInvocation):
"""Applies HED edge detection to image"""
@ -227,7 +233,7 @@ class HedImageProcessorInvocation(ImageProcessorInvocation):
title="Lineart Processor",
tags=["controlnet", "lineart"],
category="controlnet",
version="1.2.1",
version="1.2.2",
)
class LineartImageProcessorInvocation(ImageProcessorInvocation):
"""Applies line art processing to image"""
@ -249,7 +255,7 @@ class LineartImageProcessorInvocation(ImageProcessorInvocation):
title="Lineart Anime Processor",
tags=["controlnet", "lineart", "anime"],
category="controlnet",
version="1.2.1",
version="1.2.2",
)
class LineartAnimeImageProcessorInvocation(ImageProcessorInvocation):
"""Applies line art anime processing to image"""
@ -272,13 +278,14 @@ class LineartAnimeImageProcessorInvocation(ImageProcessorInvocation):
title="Midas Depth Processor",
tags=["controlnet", "midas"],
category="controlnet",
version="1.2.1",
version="1.2.2",
)
class MidasDepthImageProcessorInvocation(ImageProcessorInvocation):
"""Applies Midas depth processing to image"""
a_mult: float = InputField(default=2.0, ge=0, description="Midas parameter `a_mult` (a = a_mult * PI)")
bg_th: float = InputField(default=0.1, ge=0, description="Midas parameter `bg_th`")
image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)
# depth_and_normal not supported in controlnet_aux v0.0.3
# depth_and_normal: bool = InputField(default=False, description="whether to use depth and normal mode")
@ -288,6 +295,7 @@ class MidasDepthImageProcessorInvocation(ImageProcessorInvocation):
image,
a=np.pi * self.a_mult,
bg_th=self.bg_th,
image_resolution=self.image_resolution,
# depth_and_normal not supported in controlnet_aux v0.0.3
# depth_and_normal=self.depth_and_normal,
)
@ -299,7 +307,7 @@ class MidasDepthImageProcessorInvocation(ImageProcessorInvocation):
title="Normal BAE Processor",
tags=["controlnet"],
category="controlnet",
version="1.2.1",
version="1.2.2",
)
class NormalbaeImageProcessorInvocation(ImageProcessorInvocation):
"""Applies NormalBae processing to image"""
@ -316,7 +324,7 @@ class NormalbaeImageProcessorInvocation(ImageProcessorInvocation):
@invocation(
"mlsd_image_processor", title="MLSD Processor", tags=["controlnet", "mlsd"], category="controlnet", version="1.2.1"
"mlsd_image_processor", title="MLSD Processor", tags=["controlnet", "mlsd"], category="controlnet", version="1.2.2"
)
class MlsdImageProcessorInvocation(ImageProcessorInvocation):
"""Applies MLSD processing to image"""
@ -339,7 +347,7 @@ class MlsdImageProcessorInvocation(ImageProcessorInvocation):
@invocation(
"pidi_image_processor", title="PIDI Processor", tags=["controlnet", "pidi"], category="controlnet", version="1.2.1"
"pidi_image_processor", title="PIDI Processor", tags=["controlnet", "pidi"], category="controlnet", version="1.2.2"
)
class PidiImageProcessorInvocation(ImageProcessorInvocation):
"""Applies PIDI processing to image"""
@ -366,7 +374,7 @@ class PidiImageProcessorInvocation(ImageProcessorInvocation):
title="Content Shuffle Processor",
tags=["controlnet", "contentshuffle"],
category="controlnet",
version="1.2.1",
version="1.2.2",
)
class ContentShuffleImageProcessorInvocation(ImageProcessorInvocation):
"""Applies content shuffle processing to image"""
@ -396,7 +404,7 @@ class ContentShuffleImageProcessorInvocation(ImageProcessorInvocation):
title="Zoe (Depth) Processor",
tags=["controlnet", "zoe", "depth"],
category="controlnet",
version="1.2.1",
version="1.2.2",
)
class ZoeDepthImageProcessorInvocation(ImageProcessorInvocation):
"""Applies Zoe depth processing to image"""
@ -412,17 +420,20 @@ class ZoeDepthImageProcessorInvocation(ImageProcessorInvocation):
title="Mediapipe Face Processor",
tags=["controlnet", "mediapipe", "face"],
category="controlnet",
version="1.2.1",
version="1.2.2",
)
class MediapipeFaceProcessorInvocation(ImageProcessorInvocation):
"""Applies mediapipe face processing to image"""
max_faces: int = InputField(default=1, ge=1, description="Maximum number of faces to detect")
min_confidence: float = InputField(default=0.5, ge=0, le=1, description="Minimum confidence for face detection")
image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)
def run_processor(self, image):
mediapipe_face_processor = MediapipeFaceDetector()
processed_image = mediapipe_face_processor(image, max_faces=self.max_faces, min_confidence=self.min_confidence)
processed_image = mediapipe_face_processor(
image, max_faces=self.max_faces, min_confidence=self.min_confidence, image_resolution=self.image_resolution
)
return processed_image
@ -431,7 +442,7 @@ class MediapipeFaceProcessorInvocation(ImageProcessorInvocation):
title="Leres (Depth) Processor",
tags=["controlnet", "leres", "depth"],
category="controlnet",
version="1.2.1",
version="1.2.2",
)
class LeresImageProcessorInvocation(ImageProcessorInvocation):
"""Applies leres processing to image"""
@ -460,7 +471,7 @@ class LeresImageProcessorInvocation(ImageProcessorInvocation):
title="Tile Resample Processor",
tags=["controlnet", "tile"],
category="controlnet",
version="1.2.1",
version="1.2.2",
)
class TileResamplerProcessorInvocation(ImageProcessorInvocation):
"""Tile resampler processor"""
@ -500,18 +511,20 @@ class TileResamplerProcessorInvocation(ImageProcessorInvocation):
title="Segment Anything Processor",
tags=["controlnet", "segmentanything"],
category="controlnet",
version="1.2.1",
version="1.2.2",
)
class SegmentAnythingProcessorInvocation(ImageProcessorInvocation):
"""Applies segment anything processing to image"""
image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)
def run_processor(self, image):
# segment_anything_processor = SamDetector.from_pretrained("ybelkada/segment-anything", subfolder="checkpoints")
segment_anything_processor = SamDetectorReproducibleColors.from_pretrained(
"ybelkada/segment-anything", subfolder="checkpoints"
)
np_img = np.array(image, dtype=np.uint8)
processed_image = segment_anything_processor(np_img)
processed_image = segment_anything_processor(np_img, image_resolution=self.image_resolution)
return processed_image
@ -542,7 +555,7 @@ class SamDetectorReproducibleColors(SamDetector):
title="Color Map Processor",
tags=["controlnet"],
category="controlnet",
version="1.2.1",
version="1.2.2",
)
class ColorMapImageProcessorInvocation(ImageProcessorInvocation):
"""Generates a color map from the provided image"""
@ -574,7 +587,7 @@ DEPTH_ANYTHING_MODEL_SIZES = Literal["large", "base", "small"]
title="Depth Anything Processor",
tags=["controlnet", "depth", "depth anything"],
category="controlnet",
version="1.0.0",
version="1.1.1",
)
class DepthAnythingImageProcessorInvocation(ImageProcessorInvocation):
"""Generates a depth map based on the Depth Anything algorithm"""
@ -583,13 +596,12 @@ class DepthAnythingImageProcessorInvocation(ImageProcessorInvocation):
default="small", description="The size of the depth model to use"
)
resolution: int = InputField(default=512, ge=64, multiple_of=64, description=FieldDescriptions.image_res)
offload: bool = InputField(default=False)
def run_processor(self, image: Image.Image):
depth_anything_detector = DepthAnythingDetector()
depth_anything_detector.load_model(model_size=self.model_size)
processed_image = depth_anything_detector(image=image, resolution=self.resolution, offload=self.offload)
processed_image = depth_anything_detector(image=image, resolution=self.resolution)
return processed_image
@ -598,7 +610,7 @@ class DepthAnythingImageProcessorInvocation(ImageProcessorInvocation):
title="DW Openpose Image Processor",
tags=["controlnet", "dwpose", "openpose"],
category="controlnet",
version="1.0.0",
version="1.1.0",
)
class DWOpenposeImageProcessorInvocation(ImageProcessorInvocation):
"""Generates an openpose pose from an image using DWPose"""


@ -13,7 +13,7 @@ from .baseinvocation import BaseInvocation, invocation
from .fields import InputField, WithBoard, WithMetadata
@invocation("cv_inpaint", title="OpenCV Inpaint", tags=["opencv", "inpaint"], category="inpaint", version="1.2.1")
@invocation("cv_inpaint", title="OpenCV Inpaint", tags=["opencv", "inpaint"], category="inpaint", version="1.3.1")
class CvInpaintInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Simple inpaint using opencv."""


@ -435,7 +435,7 @@ def get_faces_list(
return all_faces
@invocation("face_off", title="FaceOff", tags=["image", "faceoff", "face", "mask"], category="image", version="1.2.1")
@invocation("face_off", title="FaceOff", tags=["image", "faceoff", "face", "mask"], category="image", version="1.2.2")
class FaceOffInvocation(BaseInvocation, WithMetadata):
"""Bound, extract, and mask a face from an image using MediaPipe detection"""
@ -514,7 +514,7 @@ class FaceOffInvocation(BaseInvocation, WithMetadata):
return output
@invocation("face_mask_detection", title="FaceMask", tags=["image", "face", "mask"], category="image", version="1.2.1")
@invocation("face_mask_detection", title="FaceMask", tags=["image", "face", "mask"], category="image", version="1.2.2")
class FaceMaskInvocation(BaseInvocation, WithMetadata):
"""Face mask creation using mediapipe face detection"""
@ -617,7 +617,7 @@ class FaceMaskInvocation(BaseInvocation, WithMetadata):
@invocation(
"face_identifier", title="FaceIdentifier", tags=["image", "face", "identifier"], category="image", version="1.2.1"
"face_identifier", title="FaceIdentifier", tags=["image", "face", "identifier"], category="image", version="1.2.2"
)
class FaceIdentifierInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Outputs an image with detected face IDs printed on each face. For use with other FaceTools."""


@ -49,7 +49,7 @@ class ShowImageInvocation(BaseInvocation):
title="Blank Image",
tags=["image"],
category="image",
version="1.2.1",
version="1.2.2",
)
class BlankImageInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Creates a blank image and forwards it to the pipeline"""
@ -72,7 +72,7 @@ class BlankImageInvocation(BaseInvocation, WithMetadata, WithBoard):
title="Crop Image",
tags=["image", "crop"],
category="image",
version="1.2.1",
version="1.2.2",
)
class ImageCropInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Crops an image to a specified box. The box can be outside of the image."""
@ -143,7 +143,7 @@ class CenterPadCropInvocation(BaseInvocation):
title="Paste Image",
tags=["image", "paste"],
category="image",
version="1.2.1",
version="1.2.2",
)
class ImagePasteInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Pastes an image into another image."""
@ -190,7 +190,7 @@ class ImagePasteInvocation(BaseInvocation, WithMetadata, WithBoard):
title="Mask from Alpha",
tags=["image", "mask"],
category="image",
version="1.2.1",
version="1.2.2",
)
class MaskFromAlphaInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Extracts the alpha channel of an image as a mask."""
@ -215,7 +215,7 @@ class MaskFromAlphaInvocation(BaseInvocation, WithMetadata, WithBoard):
title="Multiply Images",
tags=["image", "multiply"],
category="image",
version="1.2.1",
version="1.2.2",
)
class ImageMultiplyInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Multiplies two images together using `PIL.ImageChops.multiply()`."""
@ -242,7 +242,7 @@ IMAGE_CHANNELS = Literal["A", "R", "G", "B"]
title="Extract Image Channel",
tags=["image", "channel"],
category="image",
version="1.2.1",
version="1.2.2",
)
class ImageChannelInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Gets a channel from an image."""
@ -265,7 +265,7 @@ class ImageChannelInvocation(BaseInvocation, WithMetadata, WithBoard):
title="Convert Image Mode",
tags=["image", "convert"],
category="image",
version="1.2.1",
version="1.2.2",
)
class ImageConvertInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Converts an image to a different mode."""
@ -288,7 +288,7 @@ class ImageConvertInvocation(BaseInvocation, WithMetadata, WithBoard):
title="Blur Image",
tags=["image", "blur"],
category="image",
version="1.2.1",
version="1.2.2",
)
class ImageBlurInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Blurs an image"""
@ -316,7 +316,7 @@ class ImageBlurInvocation(BaseInvocation, WithMetadata, WithBoard):
title="Unsharp Mask",
tags=["image", "unsharp_mask"],
category="image",
version="1.2.1",
version="1.2.2",
classification=Classification.Beta,
)
class UnsharpMaskInvocation(BaseInvocation, WithMetadata, WithBoard):
@ -385,7 +385,7 @@ PIL_RESAMPLING_MAP = {
title="Resize Image",
tags=["image", "resize"],
category="image",
version="1.2.1",
version="1.2.2",
)
class ImageResizeInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Resizes an image to specific dimensions"""
@ -415,7 +415,7 @@ class ImageResizeInvocation(BaseInvocation, WithMetadata, WithBoard):
title="Scale Image",
tags=["image", "scale"],
category="image",
version="1.2.1",
version="1.2.2",
)
class ImageScaleInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Scales an image by a factor"""
@ -450,7 +450,7 @@ class ImageScaleInvocation(BaseInvocation, WithMetadata, WithBoard):
title="Lerp Image",
tags=["image", "lerp"],
category="image",
version="1.2.1",
version="1.2.2",
)
class ImageLerpInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Linear interpolation of all pixels of an image"""
@ -477,7 +477,7 @@ class ImageLerpInvocation(BaseInvocation, WithMetadata, WithBoard):
title="Inverse Lerp Image",
tags=["image", "ilerp"],
category="image",
version="1.2.1",
version="1.2.2",
)
class ImageInverseLerpInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Inverse linear interpolation of all pixels of an image"""
@ -504,7 +504,7 @@ class ImageInverseLerpInvocation(BaseInvocation, WithMetadata, WithBoard):
title="Blur NSFW Image",
tags=["image", "nsfw"],
category="image",
version="1.2.1",
version="1.2.2",
)
class ImageNSFWBlurInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Add blur to NSFW-flagged images"""
@ -539,7 +539,7 @@ class ImageNSFWBlurInvocation(BaseInvocation, WithMetadata, WithBoard):
title="Add Invisible Watermark",
tags=["image", "watermark"],
category="image",
version="1.2.1",
version="1.2.2",
)
class ImageWatermarkInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Add an invisible watermark to an image"""
@ -560,7 +560,7 @@ class ImageWatermarkInvocation(BaseInvocation, WithMetadata, WithBoard):
title="Mask Edge",
tags=["image", "mask", "inpaint"],
category="image",
version="1.2.1",
version="1.2.2",
)
class MaskEdgeInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Applies an edge mask to an image"""
@ -599,7 +599,7 @@ class MaskEdgeInvocation(BaseInvocation, WithMetadata, WithBoard):
title="Combine Masks",
tags=["image", "mask", "multiply"],
category="image",
version="1.2.1",
version="1.2.2",
)
class MaskCombineInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Combine two masks together by multiplying them using `PIL.ImageChops.multiply()`."""
@ -623,7 +623,7 @@ class MaskCombineInvocation(BaseInvocation, WithMetadata, WithBoard):
title="Color Correct",
tags=["image", "color"],
category="image",
version="1.2.1",
version="1.2.2",
)
class ColorCorrectInvocation(BaseInvocation, WithMetadata, WithBoard):
"""
@ -727,7 +727,7 @@ class ColorCorrectInvocation(BaseInvocation, WithMetadata, WithBoard):
title="Adjust Image Hue",
tags=["image", "hue"],
category="image",
version="1.2.1",
version="1.2.2",
)
class ImageHueAdjustmentInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Adjusts the Hue of an image."""
@ -816,7 +816,7 @@ CHANNEL_FORMATS = {
"value",
],
category="image",
version="1.2.1",
version="1.2.2",
)
class ImageChannelOffsetInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Add or subtract a value from a specific color channel of an image."""
@ -872,7 +872,7 @@ class ImageChannelOffsetInvocation(BaseInvocation, WithMetadata, WithBoard):
"value",
],
category="image",
version="1.2.1",
version="1.2.2",
)
class ImageChannelMultiplyInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Scale a specific color channel of an image."""
@ -916,7 +916,7 @@ class ImageChannelMultiplyInvocation(BaseInvocation, WithMetadata, WithBoard):
title="Save Image",
tags=["primitives", "image"],
category="primitives",
version="1.2.1",
version="1.2.2",
use_cache=False,
)
class SaveImageInvocation(BaseInvocation, WithMetadata, WithBoard):


@ -9,6 +9,7 @@ from PIL import Image, ImageOps
from invokeai.app.invocations.fields import ColorField, ImageField
from invokeai.app.invocations.primitives import ImageOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.app.util.download_with_progress import download_with_progress_bar
from invokeai.app.util.misc import SEED_MAX
from invokeai.backend.image_util.cv2_inpaint import cv2_inpaint
from invokeai.backend.image_util.lama import LaMA
@ -120,7 +121,7 @@ def tile_fill_missing(im: Image.Image, tile_size: int = 16, seed: Optional[int]
return si
@invocation("infill_rgba", title="Solid Color Infill", tags=["image", "inpaint"], category="inpaint", version="1.2.1")
@invocation("infill_rgba", title="Solid Color Infill", tags=["image", "inpaint"], category="inpaint", version="1.2.2")
class InfillColorInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Infills transparent areas of an image with a solid color"""
@ -143,7 +144,7 @@ class InfillColorInvocation(BaseInvocation, WithMetadata, WithBoard):
return ImageOutput.build(image_dto)
@invocation("infill_tile", title="Tile Infill", tags=["image", "inpaint"], category="inpaint", version="1.2.2")
@invocation("infill_tile", title="Tile Infill", tags=["image", "inpaint"], category="inpaint", version="1.2.3")
class InfillTileInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Infills transparent areas of an image with tiles of the image"""
@ -168,7 +169,7 @@ class InfillTileInvocation(BaseInvocation, WithMetadata, WithBoard):
@invocation(
"infill_patchmatch", title="PatchMatch Infill", tags=["image", "inpaint"], category="inpaint", version="1.2.1"
"infill_patchmatch", title="PatchMatch Infill", tags=["image", "inpaint"], category="inpaint", version="1.2.2"
)
class InfillPatchMatchInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Infills transparent areas of an image using the PatchMatch algorithm"""
@ -208,7 +209,7 @@ class InfillPatchMatchInvocation(BaseInvocation, WithMetadata, WithBoard):
return ImageOutput.build(image_dto)
@invocation("infill_lama", title="LaMa Infill", tags=["image", "inpaint"], category="inpaint", version="1.2.1")
@invocation("infill_lama", title="LaMa Infill", tags=["image", "inpaint"], category="inpaint", version="1.2.2")
class LaMaInfillInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Infills transparent areas of an image using the LaMa model"""
@ -217,6 +218,13 @@ class LaMaInfillInvocation(BaseInvocation, WithMetadata, WithBoard):
def invoke(self, context: InvocationContext) -> ImageOutput:
image = context.images.get_pil(self.image.image_name)
# Downloads the LaMa model if it doesn't already exist
download_with_progress_bar(
name="LaMa Inpainting Model",
url="https://github.com/Sanster/models/releases/download/add_big_lama/big-lama.pt",
dest_path=context.config.get().models_path / "core/misc/lama/lama.pt",
)
infilled = infill_lama(image.copy())
image_dto = context.images.save(image=infilled)
@ -224,7 +232,7 @@ class LaMaInfillInvocation(BaseInvocation, WithMetadata, WithBoard):
return ImageOutput.build(image_dto)
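
The LaMa hunk makes the node fetch its weights on first use via the new download_with_progress_bar helper instead of expecting them to be pre-installed. A minimal sketch of that lazy-download step, reusing the signature and destination shown above (the wrapper function ensure_lama_weights is hypothetical):

from invokeai.app.services.config.config_default import get_config
from invokeai.app.util.download_with_progress import download_with_progress_bar

LAMA_URL = "https://github.com/Sanster/models/releases/download/add_big_lama/big-lama.pt"


def ensure_lama_weights() -> None:
    """Download the LaMa checkpoint into the models tree if it is not already present."""
    dest = get_config().models_path / "core/misc/lama/lama.pt"
    # Per the comment in the hunk, the helper skips the download when dest already exists.
    download_with_progress_bar(name="LaMa Inpainting Model", url=LAMA_URL, dest_path=dest)
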
@invocation("infill_cv2", title="CV2 Infill", tags=["image", "inpaint"], category="inpaint", version="1.2.1")
@invocation("infill_cv2", title="CV2 Infill", tags=["image", "inpaint"], category="inpaint", version="1.2.2")
class CV2InfillInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Infills transparent areas of an image using OpenCV Inpainting"""


@ -15,7 +15,7 @@ from invokeai.app.invocations.model import ModelIdentifierField
from invokeai.app.invocations.primitives import ImageField
from invokeai.app.invocations.util import validate_begin_end_step, validate_weights
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.model_manager.config import BaseModelType, IPAdapterConfig, ModelType
from invokeai.backend.model_manager.config import AnyModelConfig, BaseModelType, IPAdapterConfig, ModelType
class IPAdapterField(BaseModel):
@ -48,7 +48,7 @@ class IPAdapterOutput(BaseInvocationOutput):
ip_adapter: IPAdapterField = OutputField(description=FieldDescriptions.ip_adapter, title="IP-Adapter")
@invocation("ip_adapter", title="IP-Adapter", tags=["ip_adapter", "control"], category="ip_adapter", version="1.1.2")
@invocation("ip_adapter", title="IP-Adapter", tags=["ip_adapter", "control"], category="ip_adapter", version="1.2.2")
class IPAdapterInvocation(BaseInvocation):
"""Collects IP-Adapter info to pass to other nodes."""
@ -89,17 +89,32 @@ class IPAdapterInvocation(BaseInvocation):
assert isinstance(ip_adapter_info, IPAdapterConfig)
image_encoder_model_id = ip_adapter_info.image_encoder_model_id
image_encoder_model_name = image_encoder_model_id.split("/")[-1].strip()
image_encoder_models = context.models.search_by_attrs(
name=image_encoder_model_name, base=BaseModelType.Any, type=ModelType.CLIPVision
)
assert len(image_encoder_models) == 1
image_encoder_model = self._get_image_encoder(context, image_encoder_model_name)
return IPAdapterOutput(
ip_adapter=IPAdapterField(
image=self.image,
ip_adapter_model=self.ip_adapter_model,
image_encoder_model=ModelIdentifierField.from_config(image_encoder_models[0]),
image_encoder_model=ModelIdentifierField.from_config(image_encoder_model),
weight=self.weight,
begin_step_percent=self.begin_step_percent,
end_step_percent=self.end_step_percent,
),
)
def _get_image_encoder(self, context: InvocationContext, image_encoder_model_name: str) -> AnyModelConfig:
found = False
while not found:
image_encoder_models = context.models.search_by_attrs(
name=image_encoder_model_name, base=BaseModelType.Any, type=ModelType.CLIPVision
)
found = len(image_encoder_models) > 0
if not found:
context.logger.warning(
f"The image encoder required by this IP Adapter ({image_encoder_model_name}) is not installed."
)
context.logger.warning("Downloading and installing now. This may take a while.")
installer = context._services.model_manager.install
job = installer.heuristic_import(f"InvokeAI/{image_encoder_model_name}")
installer.wait_for_job(job, timeout=600) # wait up to 10 minutes - then raise a TimeoutException
assert len(image_encoder_models) == 1
return image_encoder_models[0]
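
The new _get_image_encoder helper replaces a hard assertion with a search-install-retry loop: if the CLIP Vision encoder required by an IP-Adapter is not installed, it is pulled in through the model manager and the lookup is repeated. A sketch of the same pattern built only from the calls that appear in the hunk above (the standalone function name and its placement outside the invocation class are assumptions):

from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.model_manager.config import AnyModelConfig, BaseModelType, ModelType


def get_or_install_image_encoder(context: InvocationContext, name: str) -> AnyModelConfig:
    """Return the CLIP Vision encoder's config, installing it from the InvokeAI HF org if missing."""
    while True:
        matches = context.models.search_by_attrs(name=name, base=BaseModelType.Any, type=ModelType.CLIPVision)
        if matches:
            return matches[0]
        context.logger.warning(f"Image encoder {name} is not installed. Downloading and installing now.")
        installer = context._services.model_manager.install
        job = installer.heuristic_import(f"InvokeAI/{name}")
        installer.wait_for_job(job, timeout=600)  # raises on timeout (10 minutes), which exits the loop
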


@ -113,7 +113,7 @@ class SchedulerInvocation(BaseInvocation):
title="Create Denoise Mask",
tags=["mask", "denoise"],
category="latents",
version="1.0.1",
version="1.0.2",
)
class CreateDenoiseMaskInvocation(BaseInvocation):
"""Creates mask for denoising model run."""
@ -279,7 +279,7 @@ def get_scheduler(
title="Denoise Latents",
tags=["latents", "denoise", "txt2img", "t2i", "t2l", "img2img", "i2i", "l2l"],
category="latents",
version="1.5.2",
version="1.5.3",
)
class DenoiseLatentsInvocation(BaseInvocation):
"""Denoises noisy latents to decodable images"""
@ -816,7 +816,7 @@ class DenoiseLatentsInvocation(BaseInvocation):
title="Latents to Image",
tags=["latents", "image", "vae", "l2i"],
category="latents",
version="1.2.1",
version="1.2.2",
)
class LatentsToImageInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Generates an image from latents."""
@ -837,14 +837,14 @@ class LatentsToImageInvocation(BaseInvocation, WithMetadata, WithBoard):
latents = context.tensors.load(self.latents.latents_name)
vae_info = context.models.load(self.vae.vae)
assert isinstance(vae_info.model, (UNet2DConditionModel, AutoencoderKL))
assert isinstance(vae_info.model, (UNet2DConditionModel, AutoencoderKL, AutoencoderTiny))
with set_seamless(vae_info.model, self.vae.seamless_axes), vae_info as vae:
assert isinstance(vae, torch.nn.Module)
latents = latents.to(vae.device)
if self.fp32:
vae.to(dtype=torch.float32)
use_torch_2_0_or_xformers = isinstance(
use_torch_2_0_or_xformers = hasattr(vae.decoder, "mid_block") and isinstance(
vae.decoder.mid_block.attentions[0].processor,
(
AttnProcessor2_0,
@ -866,7 +866,7 @@ class LatentsToImageInvocation(BaseInvocation, WithMetadata, WithBoard):
vae.to(dtype=torch.float16)
latents = latents.half()
if self.tiled or context.config.get().tiled_decode:
if self.tiled or context.config.get().force_tiled_decode:
vae.enable_tiling()
else:
vae.disable_tiling()
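
The decode hunk widens the accepted VAE classes to include AutoencoderTiny, guards the attention-processor probe with hasattr(vae.decoder, "mid_block") because the tiny VAE has no mid block, and reads the renamed force_tiled_decode setting. A sketch of the defensive probe in isolation, assuming a diffusers-style VAE; the real tuple of accepted processor classes is longer than the two shown here, and the function name is hypothetical:

import torch
from diffusers.models.attention_processor import AttnProcessor2_0, XFormersAttnProcessor


def uses_fused_attention(vae: torch.nn.Module) -> bool:
    """Only full-size VAEs expose decoder.mid_block; AutoencoderTiny does not, so probe before indexing into it."""
    return hasattr(vae.decoder, "mid_block") and isinstance(
        vae.decoder.mid_block.attentions[0].processor,
        (AttnProcessor2_0, XFormersAttnProcessor),
    )
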
@ -903,7 +903,7 @@ LATENTS_INTERPOLATION_MODE = Literal["nearest", "linear", "bilinear", "bicubic",
title="Resize Latents",
tags=["latents", "resize"],
category="latents",
version="1.0.1",
version="1.0.2",
)
class ResizeLatentsInvocation(BaseInvocation):
"""Resizes latents to explicit width/height (in pixels). Provided dimensions are floor-divided by 8."""
@ -953,7 +953,7 @@ class ResizeLatentsInvocation(BaseInvocation):
title="Scale Latents",
tags=["latents", "resize"],
category="latents",
version="1.0.1",
version="1.0.2",
)
class ScaleLatentsInvocation(BaseInvocation):
"""Scales latents by a given factor."""
@ -995,7 +995,7 @@ class ScaleLatentsInvocation(BaseInvocation):
title="Image to Latents",
tags=["latents", "image", "vae", "i2l"],
category="latents",
version="1.0.1",
version="1.0.2",
)
class ImageToLatentsInvocation(BaseInvocation):
"""Encodes an image into latents."""
@ -1018,7 +1018,7 @@ class ImageToLatentsInvocation(BaseInvocation):
if upcast:
vae.to(dtype=torch.float32)
use_torch_2_0_or_xformers = isinstance(
use_torch_2_0_or_xformers = hasattr(vae.decoder, "mid_block") and isinstance(
vae.decoder.mid_block.attentions[0].processor,
(
AttnProcessor2_0,
@ -1094,7 +1094,7 @@ class ImageToLatentsInvocation(BaseInvocation):
title="Blend Latents",
tags=["latents", "blend"],
category="latents",
version="1.0.1",
version="1.0.2",
)
class BlendLatentsInvocation(BaseInvocation):
"""Blend two latents using a given alpha. Latents must have same size."""
@ -1185,7 +1185,7 @@ class BlendLatentsInvocation(BaseInvocation):
title="Crop Latents",
tags=["latents", "crop"],
category="latents",
version="1.0.1",
version="1.0.2",
)
# TODO(ryand): Named `CropLatentsCoreInvocation` to prevent a conflict with custom node `CropLatentsInvocation`.
# Currently, if the class names conflict then 'GET /openapi.json' fails.
@ -1246,7 +1246,7 @@ class IdealSizeOutput(BaseInvocationOutput):
"ideal_size",
title="Ideal Size",
tags=["latents", "math", "ideal_size"],
version="1.0.2",
version="1.0.3",
)
class IdealSizeInvocation(BaseInvocation):
"""Calculates the ideal size for generation to avoid duplication"""


@ -12,7 +12,7 @@ from invokeai.app.services.shared.invocation_context import InvocationContext
from .baseinvocation import BaseInvocation, invocation
@invocation("add", title="Add Integers", tags=["math", "add"], category="math", version="1.0.0")
@invocation("add", title="Add Integers", tags=["math", "add"], category="math", version="1.0.1")
class AddInvocation(BaseInvocation):
"""Adds two numbers"""
@ -23,7 +23,7 @@ class AddInvocation(BaseInvocation):
return IntegerOutput(value=self.a + self.b)
@invocation("sub", title="Subtract Integers", tags=["math", "subtract"], category="math", version="1.0.0")
@invocation("sub", title="Subtract Integers", tags=["math", "subtract"], category="math", version="1.0.1")
class SubtractInvocation(BaseInvocation):
"""Subtracts two numbers"""
@ -34,7 +34,7 @@ class SubtractInvocation(BaseInvocation):
return IntegerOutput(value=self.a - self.b)
@invocation("mul", title="Multiply Integers", tags=["math", "multiply"], category="math", version="1.0.0")
@invocation("mul", title="Multiply Integers", tags=["math", "multiply"], category="math", version="1.0.1")
class MultiplyInvocation(BaseInvocation):
"""Multiplies two numbers"""
@ -45,7 +45,7 @@ class MultiplyInvocation(BaseInvocation):
return IntegerOutput(value=self.a * self.b)
@invocation("div", title="Divide Integers", tags=["math", "divide"], category="math", version="1.0.0")
@invocation("div", title="Divide Integers", tags=["math", "divide"], category="math", version="1.0.1")
class DivideInvocation(BaseInvocation):
"""Divides two numbers"""
@ -61,7 +61,7 @@ class DivideInvocation(BaseInvocation):
title="Random Integer",
tags=["math", "random"],
category="math",
version="1.0.0",
version="1.0.1",
use_cache=False,
)
class RandomIntInvocation(BaseInvocation):
@ -100,7 +100,7 @@ class RandomFloatInvocation(BaseInvocation):
title="Float To Integer",
tags=["math", "round", "integer", "float", "convert"],
category="math",
version="1.0.0",
version="1.0.1",
)
class FloatToIntegerInvocation(BaseInvocation):
"""Rounds a float number to (a multiple of) an integer."""
@ -122,7 +122,7 @@ class FloatToIntegerInvocation(BaseInvocation):
return IntegerOutput(value=int(self.value / self.multiple) * self.multiple)
@invocation("round_float", title="Round Float", tags=["math", "round"], category="math", version="1.0.0")
@invocation("round_float", title="Round Float", tags=["math", "round"], category="math", version="1.0.1")
class RoundInvocation(BaseInvocation):
"""Rounds a float to a specified number of decimal places."""
@ -176,7 +176,7 @@ INTEGER_OPERATIONS_LABELS = {
"max",
],
category="math",
version="1.0.0",
version="1.0.1",
)
class IntegerMathInvocation(BaseInvocation):
"""Performs integer math."""
@ -250,7 +250,7 @@ FLOAT_OPERATIONS_LABELS = {
title="Float Math",
tags=["math", "float", "add", "subtract", "multiply", "divide", "power", "root", "absolute value", "min", "max"],
category="math",
version="1.0.0",
version="1.0.1",
)
class FloatMathInvocation(BaseInvocation):
"""Performs floating point math."""


@ -20,8 +20,8 @@ from invokeai.app.invocations.fields import (
OutputField,
UIType,
)
from invokeai.app.invocations.model import ModelIdentifierField
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.model_manager.config import BaseModelType, ModelType
from ...version import __version__
@ -31,20 +31,10 @@ class MetadataItemField(BaseModel):
value: Any = Field(description=FieldDescriptions.metadata_item_value)
class ModelMetadataField(BaseModel):
"""Model Metadata Field"""
key: str
hash: str
name: str
base: BaseModelType
type: ModelType
class LoRAMetadataField(BaseModel):
"""LoRA Metadata Field"""
model: ModelMetadataField = Field(description=FieldDescriptions.lora_model)
model: ModelIdentifierField = Field(description=FieldDescriptions.lora_model)
weight: float = Field(description=FieldDescriptions.lora_weight)
@ -52,19 +42,16 @@ class IPAdapterMetadataField(BaseModel):
"""IP Adapter Field, minus the CLIP Vision Encoder model"""
image: ImageField = Field(description="The IP-Adapter image prompt.")
ip_adapter_model: ModelMetadataField = Field(
description="The IP-Adapter model.",
)
weight: Union[float, list[float]] = Field(
description="The weight given to the IP-Adapter",
)
ip_adapter_model: ModelIdentifierField = Field(description="The IP-Adapter model.")
weight: Union[float, list[float]] = Field(description="The weight given to the IP-Adapter")
begin_step_percent: float = Field(description="When the IP-Adapter is first applied (% of total steps)")
end_step_percent: float = Field(description="When the IP-Adapter is last applied (% of total steps)")
class T2IAdapterMetadataField(BaseModel):
image: ImageField = Field(description="The T2I-Adapter image prompt.")
t2i_adapter_model: ModelMetadataField = Field(description="The T2I-Adapter model to use.")
image: ImageField = Field(description="The control image.")
processed_image: Optional[ImageField] = Field(default=None, description="The control image, after processing.")
t2i_adapter_model: ModelIdentifierField = Field(description="The T2I-Adapter model to use.")
weight: Union[float, list[float]] = Field(default=1, description="The weight given to the T2I-Adapter")
begin_step_percent: float = Field(
default=0, ge=0, le=1, description="When the T2I-Adapter is first applied (% of total steps)"
@ -77,7 +64,8 @@ class T2IAdapterMetadataField(BaseModel):
class ControlNetMetadataField(BaseModel):
image: ImageField = Field(description="The control image")
control_model: ModelMetadataField = Field(description="The ControlNet model to use")
processed_image: Optional[ImageField] = Field(default=None, description="The control image, after processing.")
control_model: ModelIdentifierField = Field(description="The ControlNet model to use")
control_weight: Union[float, list[float]] = Field(default=1, description="The weight given to the ControlNet")
begin_step_percent: float = Field(
default=0, ge=0, le=1, description="When the ControlNet is first applied (% of total steps)"
@ -96,7 +84,7 @@ class MetadataItemOutput(BaseInvocationOutput):
item: MetadataItemField = OutputField(description="Metadata Item")
@invocation("metadata_item", title="Metadata Item", tags=["metadata"], category="metadata", version="1.0.0")
@invocation("metadata_item", title="Metadata Item", tags=["metadata"], category="metadata", version="1.0.1")
class MetadataItemInvocation(BaseInvocation):
"""Used to create an arbitrary metadata item. Provide "label" and make a connection to "value" to store that data as the value."""
@ -112,7 +100,7 @@ class MetadataOutput(BaseInvocationOutput):
metadata: MetadataField = OutputField(description="Metadata Dict")
@invocation("metadata", title="Metadata", tags=["metadata"], category="metadata", version="1.0.0")
@invocation("metadata", title="Metadata", tags=["metadata"], category="metadata", version="1.0.1")
class MetadataInvocation(BaseInvocation):
"""Takes a MetadataItem or collection of MetadataItems and outputs a MetadataDict."""
@ -133,7 +121,7 @@ class MetadataInvocation(BaseInvocation):
return MetadataOutput(metadata=MetadataField.model_validate(data))
@invocation("merge_metadata", title="Metadata Merge", tags=["metadata"], category="metadata", version="1.0.0")
@invocation("merge_metadata", title="Metadata Merge", tags=["metadata"], category="metadata", version="1.0.1")
class MergeMetadataInvocation(BaseInvocation):
"""Merged a collection of MetadataDict into a single MetadataDict."""
@ -152,7 +140,7 @@ GENERATION_MODES = Literal[
]
@invocation("core_metadata", title="Core Metadata", tags=["metadata"], category="metadata", version="1.1.1")
@invocation("core_metadata", title="Core Metadata", tags=["metadata"], category="metadata", version="2.0.0")
class CoreMetadataInvocation(BaseInvocation):
"""Collects core generation metadata into a MetadataField"""
@ -178,7 +166,7 @@ class CoreMetadataInvocation(BaseInvocation):
default=None,
description="The number of skipped CLIP layers",
)
model: Optional[ModelMetadataField] = InputField(default=None, description="The main model used for inference")
model: Optional[ModelIdentifierField] = InputField(default=None, description="The main model used for inference")
controlnets: Optional[list[ControlNetMetadataField]] = InputField(
default=None, description="The ControlNets used for inference"
)
@ -197,7 +185,7 @@ class CoreMetadataInvocation(BaseInvocation):
default=None,
description="The name of the initial image",
)
vae: Optional[ModelMetadataField] = InputField(
vae: Optional[ModelIdentifierField] = InputField(
default=None,
description="The VAE used for decoding, if the main model's default was not used",
)
@ -228,7 +216,7 @@ class CoreMetadataInvocation(BaseInvocation):
)
# SDXL Refiner
refiner_model: Optional[ModelMetadataField] = InputField(
refiner_model: Optional[ModelIdentifierField] = InputField(
default=None,
description="The SDXL Refiner model used",
)


@ -98,7 +98,7 @@ class ModelLoaderOutput(UNetOutput, CLIPOutput, VAEOutput):
title="Main Model",
tags=["model"],
category="model",
version="1.0.1",
version="1.0.2",
)
class MainModelLoaderInvocation(BaseInvocation):
"""Loads a main model, outputting its submodels."""
@ -134,7 +134,7 @@ class LoRALoaderOutput(BaseInvocationOutput):
clip: Optional[CLIPField] = OutputField(default=None, description=FieldDescriptions.clip, title="CLIP")
@invocation("lora_loader", title="LoRA", tags=["model"], category="model", version="1.0.1")
@invocation("lora_loader", title="LoRA", tags=["model"], category="model", version="1.0.2")
class LoRALoaderInvocation(BaseInvocation):
"""Apply selected lora to unet and text_encoder."""
@ -204,7 +204,7 @@ class SDXLLoRALoaderOutput(BaseInvocationOutput):
title="SDXL LoRA",
tags=["lora", "model"],
category="model",
version="1.0.1",
version="1.0.2",
)
class SDXLLoRALoaderInvocation(BaseInvocation):
"""Apply selected lora to unet and text_encoder."""
@ -279,7 +279,7 @@ class SDXLLoRALoaderInvocation(BaseInvocation):
return output
@invocation("vae_loader", title="VAE", tags=["vae", "model"], category="model", version="1.0.1")
@invocation("vae_loader", title="VAE", tags=["vae", "model"], category="model", version="1.0.2")
class VAELoaderInvocation(BaseInvocation):
"""Loads a VAE model, outputting a VaeLoaderOutput"""
@ -309,7 +309,7 @@ class SeamlessModeOutput(BaseInvocationOutput):
title="Seamless",
tags=["seamless", "model"],
category="model",
version="1.0.0",
version="1.0.1",
)
class SeamlessModeInvocation(BaseInvocation):
"""Applies the seamless transformation to the Model UNet and VAE."""
@ -349,7 +349,7 @@ class SeamlessModeInvocation(BaseInvocation):
return SeamlessModeOutput(unet=unet, vae=vae)
@invocation("freeu", title="FreeU", tags=["freeu"], category="unet", version="1.0.0")
@invocation("freeu", title="FreeU", tags=["freeu"], category="unet", version="1.0.1")
class FreeUInvocation(BaseInvocation):
"""
Applies FreeU to the UNet. Suggested values (b1/b2/s1/s2):


@ -81,7 +81,7 @@ class NoiseOutput(BaseInvocationOutput):
title="Noise",
tags=["latents", "noise"],
category="latents",
version="1.0.1",
version="1.0.2",
)
class NoiseInvocation(BaseInvocation):
"""Generates latent noise."""


@ -51,7 +51,7 @@ from .fields import InputField
title="Float Range",
tags=["math", "range"],
category="math",
version="1.0.0",
version="1.0.1",
)
class FloatLinearRangeInvocation(BaseInvocation):
"""Creates a range"""
@ -111,7 +111,7 @@ EASING_FUNCTION_KEYS = Literal[tuple(EASING_FUNCTIONS_MAP.keys())]
title="Step Param Easing",
tags=["step", "easing"],
category="step",
version="1.0.1",
version="1.0.2",
)
class StepParamEasingInvocation(BaseInvocation):
"""Experimental per-step parameter easing for denoising steps"""


@ -54,7 +54,7 @@ class BooleanCollectionOutput(BaseInvocationOutput):
@invocation(
"boolean", title="Boolean Primitive", tags=["primitives", "boolean"], category="primitives", version="1.0.0"
"boolean", title="Boolean Primitive", tags=["primitives", "boolean"], category="primitives", version="1.0.1"
)
class BooleanInvocation(BaseInvocation):
"""A boolean primitive value"""
@ -70,7 +70,7 @@ class BooleanInvocation(BaseInvocation):
title="Boolean Collection Primitive",
tags=["primitives", "boolean", "collection"],
category="primitives",
version="1.0.1",
version="1.0.2",
)
class BooleanCollectionInvocation(BaseInvocation):
"""A collection of boolean primitive values"""
@ -103,7 +103,7 @@ class IntegerCollectionOutput(BaseInvocationOutput):
@invocation(
"integer", title="Integer Primitive", tags=["primitives", "integer"], category="primitives", version="1.0.0"
"integer", title="Integer Primitive", tags=["primitives", "integer"], category="primitives", version="1.0.1"
)
class IntegerInvocation(BaseInvocation):
"""An integer primitive value"""
@ -119,7 +119,7 @@ class IntegerInvocation(BaseInvocation):
title="Integer Collection Primitive",
tags=["primitives", "integer", "collection"],
category="primitives",
version="1.0.1",
version="1.0.2",
)
class IntegerCollectionInvocation(BaseInvocation):
"""A collection of integer primitive values"""
@ -151,7 +151,7 @@ class FloatCollectionOutput(BaseInvocationOutput):
)
@invocation("float", title="Float Primitive", tags=["primitives", "float"], category="primitives", version="1.0.0")
@invocation("float", title="Float Primitive", tags=["primitives", "float"], category="primitives", version="1.0.1")
class FloatInvocation(BaseInvocation):
"""A float primitive value"""
@ -166,7 +166,7 @@ class FloatInvocation(BaseInvocation):
title="Float Collection Primitive",
tags=["primitives", "float", "collection"],
category="primitives",
version="1.0.1",
version="1.0.2",
)
class FloatCollectionInvocation(BaseInvocation):
"""A collection of float primitive values"""
@ -198,7 +198,7 @@ class StringCollectionOutput(BaseInvocationOutput):
)
@invocation("string", title="String Primitive", tags=["primitives", "string"], category="primitives", version="1.0.0")
@invocation("string", title="String Primitive", tags=["primitives", "string"], category="primitives", version="1.0.1")
class StringInvocation(BaseInvocation):
"""A string primitive value"""
@ -213,7 +213,7 @@ class StringInvocation(BaseInvocation):
title="String Collection Primitive",
tags=["primitives", "string", "collection"],
category="primitives",
version="1.0.1",
version="1.0.2",
)
class StringCollectionInvocation(BaseInvocation):
"""A collection of string primitive values"""
@ -255,7 +255,7 @@ class ImageCollectionOutput(BaseInvocationOutput):
)
@invocation("image", title="Image Primitive", tags=["primitives", "image"], category="primitives", version="1.0.1")
@invocation("image", title="Image Primitive", tags=["primitives", "image"], category="primitives", version="1.0.2")
class ImageInvocation(BaseInvocation):
"""An image primitive value"""
@ -276,7 +276,7 @@ class ImageInvocation(BaseInvocation):
title="Image Collection Primitive",
tags=["primitives", "image", "collection"],
category="primitives",
version="1.0.0",
version="1.0.1",
)
class ImageCollectionInvocation(BaseInvocation):
"""A collection of image primitive values"""
@ -341,7 +341,7 @@ class LatentsCollectionOutput(BaseInvocationOutput):
@invocation(
"latents", title="Latents Primitive", tags=["primitives", "latents"], category="primitives", version="1.0.1"
"latents", title="Latents Primitive", tags=["primitives", "latents"], category="primitives", version="1.0.2"
)
class LatentsInvocation(BaseInvocation):
"""A latents tensor primitive value"""
@ -359,7 +359,7 @@ class LatentsInvocation(BaseInvocation):
title="Latents Collection Primitive",
tags=["primitives", "latents", "collection"],
category="primitives",
version="1.0.0",
version="1.0.1",
)
class LatentsCollectionInvocation(BaseInvocation):
"""A collection of latents tensor primitive values"""
@ -393,7 +393,7 @@ class ColorCollectionOutput(BaseInvocationOutput):
)
@invocation("color", title="Color Primitive", tags=["primitives", "color"], category="primitives", version="1.0.0")
@invocation("color", title="Color Primitive", tags=["primitives", "color"], category="primitives", version="1.0.1")
class ColorInvocation(BaseInvocation):
"""A color primitive value"""
@ -433,7 +433,7 @@ class ConditioningCollectionOutput(BaseInvocationOutput):
title="Conditioning Primitive",
tags=["primitives", "conditioning"],
category="primitives",
version="1.0.0",
version="1.0.1",
)
class ConditioningInvocation(BaseInvocation):
"""A conditioning tensor primitive value"""
@ -449,7 +449,7 @@ class ConditioningInvocation(BaseInvocation):
title="Conditioning Collection Primitive",
tags=["primitives", "conditioning", "collection"],
category="primitives",
version="1.0.1",
version="1.0.2",
)
class ConditioningCollectionInvocation(BaseInvocation):
"""A collection of conditioning tensor primitive values"""


@ -17,7 +17,7 @@ from .fields import InputField, UIComponent
title="Dynamic Prompt",
tags=["prompt", "collection"],
category="prompt",
version="1.0.0",
version="1.0.1",
use_cache=False,
)
class DynamicPromptInvocation(BaseInvocation):
@ -46,7 +46,7 @@ class DynamicPromptInvocation(BaseInvocation):
title="Prompts from File",
tags=["prompt", "file"],
category="prompt",
version="1.0.1",
version="1.0.2",
)
class PromptsFromFileInvocation(BaseInvocation):
"""Loads prompts from a text file"""


@ -30,7 +30,7 @@ class SDXLRefinerModelLoaderOutput(BaseInvocationOutput):
vae: VAEField = OutputField(description=FieldDescriptions.vae, title="VAE")
@invocation("sdxl_model_loader", title="SDXL Main Model", tags=["model", "sdxl"], category="model", version="1.0.1")
@invocation("sdxl_model_loader", title="SDXL Main Model", tags=["model", "sdxl"], category="model", version="1.0.2")
class SDXLModelLoaderInvocation(BaseInvocation):
"""Loads an sdxl base model, outputting its submodels."""
@ -67,7 +67,7 @@ class SDXLModelLoaderInvocation(BaseInvocation):
title="SDXL Refiner Model",
tags=["model", "sdxl", "refiner"],
category="model",
version="1.0.1",
version="1.0.2",
)
class SDXLRefinerModelLoaderInvocation(BaseInvocation):
"""Loads an sdxl refiner model, outputting its submodels."""


@ -27,7 +27,7 @@ class StringPosNegOutput(BaseInvocationOutput):
title="String Split Negative",
tags=["string", "split", "negative"],
category="string",
version="1.0.0",
version="1.0.1",
)
class StringSplitNegInvocation(BaseInvocation):
"""Splits string into two strings, inside [] goes into negative string everthing else goes into positive string. Each [ and ] character is replaced with a space"""
@ -69,7 +69,7 @@ class String2Output(BaseInvocationOutput):
string_2: str = OutputField(description="string 2")
@invocation("string_split", title="String Split", tags=["string", "split"], category="string", version="1.0.0")
@invocation("string_split", title="String Split", tags=["string", "split"], category="string", version="1.0.1")
class StringSplitInvocation(BaseInvocation):
"""Splits string into two strings, based on the first occurance of the delimiter. The delimiter will be removed from the string"""
@ -89,7 +89,7 @@ class StringSplitInvocation(BaseInvocation):
return String2Output(string_1=part1, string_2=part2)
@invocation("string_join", title="String Join", tags=["string", "join"], category="string", version="1.0.0")
@invocation("string_join", title="String Join", tags=["string", "join"], category="string", version="1.0.1")
class StringJoinInvocation(BaseInvocation):
"""Joins string left to string right"""
@ -100,7 +100,7 @@ class StringJoinInvocation(BaseInvocation):
return StringOutput(value=((self.string_left or "") + (self.string_right or "")))
@invocation("string_join_three", title="String Join Three", tags=["string", "join"], category="string", version="1.0.0")
@invocation("string_join_three", title="String Join Three", tags=["string", "join"], category="string", version="1.0.1")
class StringJoinThreeInvocation(BaseInvocation):
"""Joins string left to string middle to string right"""
@ -113,7 +113,7 @@ class StringJoinThreeInvocation(BaseInvocation):
@invocation(
"string_replace", title="String Replace", tags=["string", "replace", "regex"], category="string", version="1.0.0"
"string_replace", title="String Replace", tags=["string", "replace", "regex"], category="string", version="1.0.1"
)
class StringReplaceInvocation(BaseInvocation):
"""Replaces the search string with the replace string"""


@ -45,7 +45,7 @@ class T2IAdapterOutput(BaseInvocationOutput):
@invocation(
"t2i_adapter", title="T2I-Adapter", tags=["t2i_adapter", "control"], category="t2i_adapter", version="1.0.1"
"t2i_adapter", title="T2I-Adapter", tags=["t2i_adapter", "control"], category="t2i_adapter", version="1.0.2"
)
class T2IAdapterInvocation(BaseInvocation):
"""Collects T2I-Adapter info to pass to other nodes."""


@ -39,7 +39,7 @@ class CalculateImageTilesOutput(BaseInvocationOutput):
title="Calculate Image Tiles",
tags=["tiles"],
category="tiles",
version="1.0.0",
version="1.0.1",
classification=Classification.Beta,
)
class CalculateImageTilesInvocation(BaseInvocation):
@ -73,7 +73,7 @@ class CalculateImageTilesInvocation(BaseInvocation):
title="Calculate Image Tiles Even Split",
tags=["tiles"],
category="tiles",
version="1.1.0",
version="1.1.1",
classification=Classification.Beta,
)
class CalculateImageTilesEvenSplitInvocation(BaseInvocation):
@ -116,7 +116,7 @@ class CalculateImageTilesEvenSplitInvocation(BaseInvocation):
title="Calculate Image Tiles Minimum Overlap",
tags=["tiles"],
category="tiles",
version="1.0.0",
version="1.0.1",
classification=Classification.Beta,
)
class CalculateImageTilesMinimumOverlapInvocation(BaseInvocation):
@ -167,7 +167,7 @@ class TileToPropertiesOutput(BaseInvocationOutput):
title="Tile to Properties",
tags=["tiles"],
category="tiles",
version="1.0.0",
version="1.0.1",
classification=Classification.Beta,
)
class TileToPropertiesInvocation(BaseInvocation):
@ -200,7 +200,7 @@ class PairTileImageOutput(BaseInvocationOutput):
title="Pair Tile with Image",
tags=["tiles"],
category="tiles",
version="1.0.0",
version="1.0.1",
classification=Classification.Beta,
)
class PairTileImageInvocation(BaseInvocation):
@ -229,7 +229,7 @@ BLEND_MODES = Literal["Linear", "Seam"]
title="Merge Tiles to Image",
tags=["tiles"],
category="tiles",
version="1.1.0",
version="1.1.1",
classification=Classification.Beta,
)
class MergeTilesToImageInvocation(BaseInvocation, WithMetadata, WithBoard):


@ -11,6 +11,7 @@ from pydantic import ConfigDict
from invokeai.app.invocations.fields import ImageField
from invokeai.app.invocations.primitives import ImageOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.app.util.download_with_progress import download_with_progress_bar
from invokeai.backend.image_util.basicsr.rrdbnet_arch import RRDBNet
from invokeai.backend.image_util.realesrgan.realesrgan import RealESRGAN
from invokeai.backend.util.devices import choose_torch_device
@ -27,11 +28,18 @@ ESRGAN_MODELS = Literal[
"RealESRGAN_x2plus.pth",
]
ESRGAN_MODEL_URLS: dict[str, str] = {
"RealESRGAN_x4plus.pth": "https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth",
"RealESRGAN_x4plus_anime_6B.pth": "https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth",
"ESRGAN_SRx4_DF2KOST_official.pth": "https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.1/ESRGAN_SRx4_DF2KOST_official-ff704c30.pth",
"RealESRGAN_x2plus.pth": "https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.1/RealESRGAN_x2plus.pth",
}
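
This URL map pairs with two changes further down in the hunk: the weight path is now built as an absolute path under models_path, and the checkpoint is downloaded on demand with download_with_progress_bar. A sketch of that resolution step under the same assumptions about the helper's signature (the function resolve_esrgan_weights is hypothetical):

from pathlib import Path

from invokeai.app.services.config.config_default import get_config
from invokeai.app.util.download_with_progress import download_with_progress_bar


def resolve_esrgan_weights(model_name: str, url: str) -> Path:
    """Build an absolute path under the models root and fetch the weights if they are missing."""
    dest = Path(get_config().models_path, f"core/upscaling/realesrgan/{model_name}")
    download_with_progress_bar(name=model_name, url=url, dest_path=dest)
    return dest
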
if choose_torch_device() == torch.device("mps"):
from torch import mps
@invocation("esrgan", title="Upscale (RealESRGAN)", tags=["esrgan", "upscale"], category="esrgan", version="1.3.1")
@invocation("esrgan", title="Upscale (RealESRGAN)", tags=["esrgan", "upscale"], category="esrgan", version="1.3.2")
class ESRGANInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Upscales an image using RealESRGAN."""
@ -45,7 +53,6 @@ class ESRGANInvocation(BaseInvocation, WithMetadata, WithBoard):
def invoke(self, context: InvocationContext) -> ImageOutput:
image = context.images.get_pil(self.image.image_name)
models_path = context.config.get().models_path
rrdbnet_model = None
netscale = None
@ -92,11 +99,16 @@ class ESRGANInvocation(BaseInvocation, WithMetadata, WithBoard):
context.logger.error(msg)
raise ValueError(msg)
esrgan_model_path = Path(f"core/upscaling/realesrgan/{self.model_name}")
esrgan_model_path = Path(context.config.get().models_path, f"core/upscaling/realesrgan/{self.model_name}")
# Downloads the ESRGAN model if it doesn't already exist
download_with_progress_bar(
name=self.model_name, url=ESRGAN_MODEL_URLS[self.model_name], dest_path=esrgan_model_path
)
upscaler = RealESRGAN(
scale=netscale,
model_path=models_path / esrgan_model_path,
model_path=esrgan_model_path,
model=rrdbnet_model,
half=False,
tile=self.tile_size,

invokeai/app/run_app.py (new file, 12 additions)

@ -0,0 +1,12 @@
"""This is a wrapper around the main app entrypoint, to allow for CLI args to be parsed before running the app."""
def run_app() -> None:
# Before doing _anything_, parse CLI args!
from invokeai.frontend.cli.arg_parser import InvokeAIArgs
InvokeAIArgs.parse_args()
from invokeai.app.api_app import invoke_api
invoke_api()
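
The new run_app.py wrapper parses CLI arguments before anything heavyweight is imported, then hands off to the API entrypoint. A usage sketch, for example from a custom launcher script or a debugger configuration (the packaged console-script wiring is not shown in this diff):

# Launch InvokeAI the same way the packaged entrypoint would.
from invokeai.app.run_app import run_app

if __name__ == "__main__":
    run_app()
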


@ -2,6 +2,6 @@
from invokeai.app.services.config.config_common import PagingArgumentParser
from .config_default import InvokeAIAppConfig, get_invokeai_config
from .config_default import InvokeAIAppConfig, get_config
__all__ = ["InvokeAIAppConfig", "get_invokeai_config", "PagingArgumentParser"]
__all__ = ["InvokeAIAppConfig", "get_config", "PagingArgumentParser"]


@ -1,241 +0,0 @@
# Copyright (c) 2023 Lincoln Stein (https://github.com/lstein) and the InvokeAI Development Team
"""
Base class for the InvokeAI configuration system.
It defines a type of pydantic BaseSettings object that
is able to read and write from an omegaconf-based config file,
with overriding of settings from environment variables and/or
the command line.
"""
from __future__ import annotations
import argparse
import os
import sys
from argparse import ArgumentParser
from pathlib import Path
from typing import Any, ClassVar, Dict, List, Literal, Optional, Union, get_args, get_origin, get_type_hints
from omegaconf import DictConfig, DictKeyType, ListConfig, OmegaConf
from pydantic import BaseModel
from pydantic_settings import BaseSettings, SettingsConfigDict
from invokeai.app.services.config.config_common import PagingArgumentParser, int_or_float_or_str
class InvokeAISettings(BaseSettings):
"""Runtime configuration settings in which default values are read from an omegaconf .yaml file."""
initconf: ClassVar[Optional[DictConfig]] = None
argparse_groups: ClassVar[Dict[str, Any]] = {}
model_config = SettingsConfigDict(env_file_encoding="utf-8", arbitrary_types_allowed=True, case_sensitive=True)
def parse_args(self, argv: Optional[List[str]] = sys.argv[1:]) -> None:
"""Call to parse command-line arguments."""
parser = self.get_parser()
opt, unknown_opts = parser.parse_known_args(argv)
if len(unknown_opts) > 0:
print("Unknown args:", unknown_opts)
for name in self.model_fields:
if name not in self._excluded():
value = getattr(opt, name)
if isinstance(value, ListConfig):
value = list(value)
elif isinstance(value, DictConfig):
value = dict(value)
setattr(self, name, value)
def to_yaml(self) -> str:
"""Return a YAML string representing our settings. This can be used as the contents of `invokeai.yaml` to restore settings later."""
cls = self.__class__
type = get_args(get_type_hints(cls)["type"])[0]
field_dict: Dict[str, Dict[str, Any]] = {type: {}}
for name, field in self.model_fields.items():
if name in cls._excluded_from_yaml():
continue
assert isinstance(field.json_schema_extra, dict)
category = (
field.json_schema_extra.get("category", "Uncategorized") if field.json_schema_extra else "Uncategorized"
)
value = getattr(self, name)
assert isinstance(category, str)
if category not in field_dict[type]:
field_dict[type][category] = {}
if isinstance(value, BaseModel):
dump = value.model_dump(exclude_defaults=True, exclude_unset=True, exclude_none=True)
field_dict[type][category][name] = dump
continue
if isinstance(value, list):
if not value or len(value) == 0:
continue
primitive = isinstance(value[0], get_args(DictKeyType))
if not primitive:
val_list: List[Dict[str, Any]] = []
for list_val in value:
if isinstance(list_val, BaseModel):
dump = list_val.model_dump(exclude_defaults=True, exclude_unset=True, exclude_none=True)
val_list.append(dump)
field_dict[type][category][name] = val_list
continue
# keep paths as strings to make it easier to read
field_dict[type][category][name] = str(value) if isinstance(value, Path) else value
conf = OmegaConf.create(field_dict)
return OmegaConf.to_yaml(conf)
@classmethod
def add_parser_arguments(cls, parser: ArgumentParser) -> None:
"""Dynamically create arguments for a settings parser."""
if "type" in get_type_hints(cls):
settings_stanza = get_args(get_type_hints(cls)["type"])[0]
else:
settings_stanza = "Uncategorized"
env_prefix = getattr(cls.model_config, "env_prefix", None)
env_prefix = env_prefix if env_prefix is not None else settings_stanza.upper()
initconf = (
cls.initconf.get(settings_stanza)
if cls.initconf and settings_stanza in cls.initconf
else OmegaConf.create()
)
# create an upcase version of the environment in
# order to achieve case-insensitive environment
# variables (the way Windows does)
upcase_environ = {}
for key, value in os.environ.items():
upcase_environ[key.upper()] = value
fields = cls.model_fields
cls.argparse_groups = {}
for name, field in fields.items():
if name not in cls._excluded():
current_default = field.default
category = (
field.json_schema_extra.get("category", "Uncategorized")
if field.json_schema_extra
else "Uncategorized"
)
env_name = env_prefix + "_" + name
if category in initconf and name in initconf.get(category):
field.default = initconf.get(category).get(name)
if env_name.upper() in upcase_environ:
field.default = upcase_environ[env_name.upper()]
cls.add_field_argument(parser, name, field)
field.default = current_default
@classmethod
def cmd_name(cls, command_field: str = "type") -> str:
"""Return the category of a setting."""
hints = get_type_hints(cls)
if command_field in hints:
result: str = get_args(hints[command_field])[0]
return result
else:
return "Uncategorized"
@classmethod
def get_parser(cls) -> ArgumentParser:
"""Get the command-line parser for a setting."""
parser = PagingArgumentParser(
prog=cls.cmd_name(),
description=cls.__doc__,
)
cls.add_parser_arguments(parser)
return parser
@classmethod
def _excluded(cls) -> List[str]:
# internal fields that shouldn't be exposed as command line options
return ["type", "initconf"]
@classmethod
def _excluded_from_yaml(cls) -> List[str]:
# combination of deprecated parameters and internal ones that shouldn't be exposed as invokeai.yaml options
return [
"type",
"initconf",
"version",
"from_file",
"model",
"root",
"max_cache_size",
"max_vram_cache_size",
"always_use_cpu",
"free_gpu_mem",
"xformers_enabled",
"tiled_decode",
"lora_dir",
"embedding_dir",
"controlnet_dir",
"conf_path",
]
@classmethod
def add_field_argument(cls, command_parser, name: str, field, default_override=None) -> None:
"""Add the argparse arguments for a setting parser."""
field_type = get_type_hints(cls).get(name)
default = (
default_override
if default_override is not None
else field.default
if field.default_factory is None
else field.default_factory()
)
if category := (field.json_schema_extra.get("category", None) if field.json_schema_extra else None):
if category not in cls.argparse_groups:
cls.argparse_groups[category] = command_parser.add_argument_group(category)
argparse_group = cls.argparse_groups[category]
else:
argparse_group = command_parser
if get_origin(field_type) == Literal:
allowed_values = get_args(field.annotation)
allowed_types = set()
for val in allowed_values:
allowed_types.add(type(val))
allowed_types_list = list(allowed_types)
field_type = allowed_types_list[0] if len(allowed_types) == 1 else int_or_float_or_str
argparse_group.add_argument(
f"--{name}",
dest=name,
type=field_type,
default=default,
choices=allowed_values,
help=field.description,
)
elif get_origin(field_type) == Union:
argparse_group.add_argument(
f"--{name}",
dest=name,
type=int_or_float_or_str,
default=default,
help=field.description,
)
elif get_origin(field_type) == list:
argparse_group.add_argument(
f"--{name}",
dest=name,
nargs="*",
type=field.annotation,
default=default,
action=argparse.BooleanOptionalAction if field.annotation == bool else "store",
help=field.description,
)
else:
argparse_group.add_argument(
f"--{name}",
dest=name,
type=field.annotation,
default=default,
action=argparse.BooleanOptionalAction if field.annotation == bool else "store",
help=field.description,
)


@@ -12,7 +12,6 @@ from __future__ import annotations
import argparse
import pydoc
from typing import Union
class PagingArgumentParser(argparse.ArgumentParser):
@@ -24,18 +23,3 @@ class PagingArgumentParser(argparse.ArgumentParser):
def print_help(self, file=None) -> None:
text = self.format_help()
pydoc.pager(text)
def int_or_float_or_str(value: str) -> Union[int, float, str]:
"""
Workaround for argparse type checking.
"""
try:
return int(value)
except Exception as e: # noqa F841
pass
try:
return float(value)
except Exception as e: # noqa F841
pass
return str(value)


@@ -1,185 +1,23 @@
# Copyright (c) 2023 Lincoln Stein (https://github.com/lstein) and the InvokeAI Development Team
"""Invokeai configuration system.
Arguments and fields are taken from the pydantic definition of the
model. Defaults can be set by creating a yaml configuration file that
has a top-level key of "InvokeAI" and subheadings for each of the
categories returned by `invokeai --help`. The file looks like this:
[file: invokeai.yaml]
InvokeAI:
Web Server:
host: 127.0.0.1
port: 9090
allow_origins: []
allow_credentials: true
allow_methods:
- '*'
allow_headers:
- '*'
Features:
esrgan: true
internet_available: true
log_tokenization: false
patchmatch: true
ignore_missing_core_models: false
Paths:
autoimport_dir: autoimport
lora_dir: null
embedding_dir: null
controlnet_dir: null
models_dir: models
legacy_conf_dir: configs/stable-diffusion
db_dir: databases
outdir: /home/lstein/invokeai-main/outputs
use_memory_db: false
Logging:
log_handlers:
- console
log_format: plain
log_level: info
Model Cache:
ram: 13.5
vram: 0.25
lazy_offload: true
log_memory_usage: false
Device:
device: auto
precision: auto
Generation:
sequential_guidance: false
attention_type: xformers
attention_slice_size: auto
force_tiled_decode: false
The default name of the configuration file is `invokeai.yaml`, located
in INVOKEAI_ROOT. You can supersede this by providing any
OmegaConf dictionary object at initialization time:
omegaconf = OmegaConf.load('/tmp/init.yaml')
conf = InvokeAIAppConfig()
conf.parse_args(conf=omegaconf)
InvokeAIAppConfig.parse_args() will parse the contents of `sys.argv`
at initialization time. You may pass a list of strings in the optional
`argv` argument to use instead of the system argv:
conf.parse_args(argv=['--log_tokenization'])
It is also possible to set a value at initialization time. However, if
you call parse_args() it may be overwritten.
conf = InvokeAIAppConfig(log_tokenization=True)
conf.parse_args(argv=['--no-log_tokenization'])
conf.log_tokenization
# False
To avoid this, use `get_config()` to retrieve the application-wide
configuration object. This will retain any properties set at object
creation time:
conf = InvokeAIAppConfig.get_config(log_tokenization=True)
conf.parse_args(argv=['--no-log_tokenization'])
conf.log_tokenization
# True
Any setting can be overwritten by setting an environment variable of
form: "INVOKEAI_<setting>", as in:
export INVOKEAI_port=8080
Order of precedence (from highest):
1) initialization options
2) command line options
3) environment variable options
4) config file options
5) pydantic defaults
Typical usage at the top level file:
from invokeai.app.services.config import InvokeAIAppConfig
# get global configuration and print its cache size
conf = InvokeAIAppConfig.get_config()
conf.parse_args()
print(conf.ram_cache_size)
Typical usage in a backend module:
from invokeai.app.services.config import InvokeAIAppConfig
# get global configuration and print its cache size value
conf = InvokeAIAppConfig.get_config()
print(conf.ram_cache_size)
Computed properties:
The InvokeAIAppConfig object has a series of properties that
resolve paths relative to the runtime root directory. They each return
a Path object:
root_path - path to InvokeAI root
output_path - path to default outputs directory
conf - alias for the above
embedding_path - path to the embeddings directory
lora_path - path to the LoRA directory
In most cases, you will want to create a single InvokeAIAppConfig
object for the entire application. The InvokeAIAppConfig.get_config() function
does this:
config = InvokeAIAppConfig.get_config()
config.parse_args() # read values from the command line/config file
print(config.root)
# Subclassing
If you wish to create a similar class, please subclass the
`InvokeAISettings` class and define a Literal field named "type",
which is set to the desired top-level name. For example, to create an
"InvokeBatch" configuration, define it like this:
class InvokeBatch(InvokeAISettings):
type: Literal["InvokeBatch"] = "InvokeBatch"
node_count : int = Field(default=1, description="Number of nodes to run on", json_schema_extra=dict(category='Resources'))
cpu_count : int = Field(default=8, description="Number of CPUs to run on per node", json_schema_extra=dict(category='Resources'))
This will now read and write from the "InvokeBatch" section of the
config file, look for environment variables named INVOKEBATCH_*, and
accept the command-line arguments `--node_count` and `--cpu_count`. The
two configs are kept in separate sections of the config file:
# invokeai.yaml
InvokeBatch:
Resources:
node_count: 1
cpu_count: 8
InvokeAI:
Paths:
root: /home/lstein/invokeai-main
legacy_conf_dir: configs/stable-diffusion
outdir: outputs
...
"""
# TODO(psyche): pydantic-settings supports YAML settings sources. If we can figure out a way to integrate the YAML
# migration logic, we could use that for simpler config loading.
from __future__ import annotations
import os
import re
import shutil
from functools import lru_cache
from pathlib import Path
from typing import Any, ClassVar, Dict, List, Literal, Optional
from typing import Any, Literal, Optional
from omegaconf import DictConfig, OmegaConf
from pydantic import BaseModel, Field, field_validator
from pydantic.config import JsonDict
from pydantic_settings import SettingsConfigDict
import psutil
import yaml
from pydantic import BaseModel, Field, PrivateAttr, field_validator
from pydantic_settings import BaseSettings, SettingsConfigDict
from .config_base import InvokeAISettings
import invokeai.configs as model_configs
from invokeai.backend.model_hash.model_hash import HASHING_ALGORITHMS
from invokeai.frontend.cli.arg_parser import InvokeAIArgs
INIT_FILE = Path("invokeai.yaml")
DB_FILE = Path("invokeai.db")
@@ -187,28 +25,34 @@ LEGACY_INIT_FILE = Path("invokeai.init")
DEFAULT_RAM_CACHE = 10.0
DEFAULT_VRAM_CACHE = 0.25
DEFAULT_CONVERT_CACHE = 20.0
DEVICE = Literal["auto", "cpu", "cuda", "cuda:1", "mps"]
PRECISION = Literal["auto", "float16", "bfloat16", "float32", "autocast"]
ATTENTION_TYPE = Literal["auto", "normal", "xformers", "sliced", "torch-sdp"]
ATTENTION_SLICE_SIZE = Literal["auto", "balanced", "max", 1, 2, 3, 4, 5, 6, 7, 8]
LOG_FORMAT = Literal["plain", "color", "syslog", "legacy"]
LOG_LEVEL = Literal["debug", "info", "warning", "error", "critical"]
CONFIG_SCHEMA_VERSION = "4.0.0"
class Categories(object):
"""Category headers for configuration variable groups."""
def get_default_ram_cache_size() -> float:
"""Run a heuristic for the default RAM cache based on installed RAM."""
WebServer: JsonDict = {"category": "Web Server"}
Features: JsonDict = {"category": "Features"}
Paths: JsonDict = {"category": "Paths"}
Logging: JsonDict = {"category": "Logging"}
Development: JsonDict = {"category": "Development"}
CLIArgs: JsonDict = {"category": "CLIArgs"}
ModelInstall: JsonDict = {"category": "Model Install"}
ModelCache: JsonDict = {"category": "Model Cache"}
Device: JsonDict = {"category": "Device"}
Generation: JsonDict = {"category": "Generation"}
Queue: JsonDict = {"category": "Queue"}
Nodes: JsonDict = {"category": "Nodes"}
MemoryPerformance: JsonDict = {"category": "Memory/Performance"}
Deprecated: JsonDict = {"category": "Deprecated"}
# On some machines, psutil.virtual_memory().total gives a value that is slightly less than the actual RAM, so the
# limits are set slightly lower than what we expect the actual RAM to be.
GB = 1024**3
max_ram = psutil.virtual_memory().total / GB
if max_ram >= 60:
return 15.0
if max_ram >= 30:
return 7.5
if max_ram >= 14:
return 4.0
return 2.1 # 2.1 is just large enough for sd 1.5 ;-)
class URLRegexToken(BaseModel):
class URLRegexTokenPair(BaseModel):
url_regex: str = Field(description="Regular expression to match against the URL")
token: str = Field(description="Token to use when the URL matches the regex")
@@ -223,397 +67,397 @@ class URLRegexToken(BaseModel):
return v
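A small sketch of how a pair from this model might be supplied; the hostname and token are illustrative, and in `invokeai.yaml` the same data would appear as a list under `remote_api_tokens`:

pair = URLRegexTokenPair(url_regex="civitai.com", token="my_api_token")
# Equivalent invokeai.yaml form (illustrative):
# remote_api_tokens:
#   - url_regex: civitai.com
#     token: my_api_token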
class InvokeAIAppConfig(InvokeAISettings):
"""Invoke App Configuration
class InvokeAIAppConfig(BaseSettings):
"""Invoke's global app configuration.
Typically, you won't need to interact with this class directly. Instead, use the `get_config` function from `invokeai.app.services.config` to get a singleton config object.
Attributes:
host: **Web Server**: IP address to bind to. Use `0.0.0.0` to serve to your local network.
port: **Web Server**: Port to bind to.
allow_origins: **Web Server**: Allowed CORS origins.
allow_credentials: **Web Server**: Allow CORS credentials.
allow_methods: **Web Server**: Methods allowed for CORS.
allow_headers: **Web Server**: Headers allowed for CORS.
ssl_certfile: **Web Server**: SSL certificate file for HTTPS.
ssl_keyfile: **Web Server**: SSL key file for HTTPS.
esrgan: **Features**: Enables or disables the upscaling code.
internet_available: **Features**: If true, attempt to download models on the fly; otherwise only use local models.
log_tokenization: **Features**: Enable logging of parsed prompt tokens.
patchmatch: **Features**: Enable patchmatch inpaint code.
ignore_missing_core_models: **Features**: Ignore missing core models on startup. If `True`, the app will attempt to download missing models on startup.
root: **Paths**: The InvokeAI runtime root directory.
autoimport_dir: **Paths**: Path to a directory of models files to be imported on startup.
models_dir: **Paths**: Path to the models directory.
convert_cache_dir: **Paths**: Path to the converted models cache directory. When loading a non-diffusers model, it will be converted and stored on disk at this location.
legacy_conf_dir: **Paths**: Path to directory of legacy checkpoint config files.
db_dir: **Paths**: Path to InvokeAI databases directory.
outdir: **Paths**: Path to directory for outputs.
custom_nodes_dir: **Paths**: Path to directory for custom nodes.
from_file: **Paths**: Take command input from the indicated file (command-line client only).
log_handlers: **Logging**: Log handler. Valid options are "console", "file=<path>", "syslog=path|address:host:port", "http=<url>".
log_format: **Logging**: Log format. Use "plain" for text-only, "color" for colorized output, "legacy" for 2.3-style logging and "syslog" for syslog-style.
log_level: **Logging**: Emit logging messages at this level or higher.
log_sql: **Logging**: Log SQL queries. `log_level` must be `debug` for this to do anything. Extremely verbose.
use_memory_db: **Development**: Use in-memory database. Useful for development.
dev_reload: **Development**: Automatically reload when Python sources are changed. Does not reload node definitions.
profile_graphs: **Development**: Enable graph profiling using `cProfile`.
profile_prefix: **Development**: An optional prefix for profile output files.
profiles_dir: **Development**: Path to profiles output directory.
version: **CLIArgs**: CLI arg - show InvokeAI version and exit.
skip_model_hash: **Model Install**: Skip model hashing, instead assigning a UUID to models. Useful when using a memory db to reduce model installation time, or if you don't care about storing stable hashes for models.
remote_api_tokens: **Model Install**: List of regular expression and token pairs used when downloading models from URLs. The download URL is tested against the regex, and if it matches, the token is provided as a Bearer token.
ram: **Model Cache**: Maximum memory amount used by memory model cache for rapid switching (GB).
vram: **Model Cache**: Amount of VRAM reserved for model storage (GB)
convert_cache: **Model Cache**: Maximum size of on-disk converted models cache (GB)
lazy_offload: **Model Cache**: Keep models in VRAM until their space is needed.
log_memory_usage: **Model Cache**: If True, a memory snapshot will be captured before and after every model cache operation, and the result will be logged (at debug level). There is a time cost to capturing the memory snapshots, so it is recommended to only enable this feature if you are actively inspecting the model cache's behaviour.
device: **Device**: Preferred execution device. `auto` will choose the device depending on the hardware platform and the installed torch capabilities.
precision: **Device**: Floating point precision. `float16` will consume half the memory of `float32` but produce slightly lower-quality images. The `auto` setting will guess the proper precision based on your video card and operating system.
sequential_guidance: **Generation**: Whether to calculate guidance in serial instead of in parallel, lowering memory requirements.
attention_type: **Generation**: Attention type.
attention_slice_size: **Generation**: Slice size, valid when attention_type=="sliced".
force_tiled_decode: **Generation**: Whether to enable tiled VAE decode (reduces memory consumption with some performance penalty).
png_compress_level: **Generation**: The compress_level setting of PIL.Image.save(), used for PNG encoding. All settings are lossless. 0 = no compression, 1 = fastest with slightly larger filesize, 9 = slowest with smallest filesize. 1 is typically the best setting.
max_queue_size: **Queue**: Maximum number of items in the session queue.
allow_nodes: **Nodes**: List of nodes to allow. Omit to allow all.
deny_nodes: **Nodes**: List of nodes to deny. Omit to deny none.
node_cache_size: **Nodes**: How many cached nodes to keep in memory.
host: IP address to bind to. Use `0.0.0.0` to serve to your local network.
port: Port to bind to.
allow_origins: Allowed CORS origins.
allow_credentials: Allow CORS credentials.
allow_methods: Methods allowed for CORS.
allow_headers: Headers allowed for CORS.
ssl_certfile: SSL certificate file for HTTPS. See https://www.uvicorn.org/settings/#https.
ssl_keyfile: SSL key file for HTTPS. See https://www.uvicorn.org/settings/#https.
log_tokenization: Enable logging of parsed prompt tokens.
patchmatch: Enable patchmatch inpaint code.
autoimport_dir: Path to a directory of models files to be imported on startup.
models_dir: Path to the models directory.
convert_cache_dir: Path to the converted models cache directory. When loading a non-diffusers model, it will be converted and stored on disk at this location.
legacy_conf_dir: Path to directory of legacy checkpoint config files.
db_dir: Path to InvokeAI databases directory.
outputs_dir: Path to directory for outputs.
custom_nodes_dir: Path to directory for custom nodes.
log_handlers: Log handler. Valid options are "console", "file=<path>", "syslog=path|address:host:port", "http=<url>".
log_format: Log format. Use "plain" for text-only, "color" for colorized output, "legacy" for 2.3-style logging and "syslog" for syslog-style.<br>Valid values: `plain`, `color`, `syslog`, `legacy`
log_level: Emit logging messages at this level or higher.<br>Valid values: `debug`, `info`, `warning`, `error`, `critical`
log_sql: Log SQL queries. `log_level` must be `debug` for this to do anything. Extremely verbose.
use_memory_db: Use in-memory database. Useful for development.
dev_reload: Automatically reload when Python sources are changed. Does not reload node definitions.
profile_graphs: Enable graph profiling using `cProfile`.
profile_prefix: An optional prefix for profile output files.
profiles_dir: Path to profiles output directory.
ram: Maximum memory amount used by memory model cache for rapid switching (GB).
vram: Amount of VRAM reserved for model storage (GB).
convert_cache: Maximum size of on-disk converted models cache (GB).
lazy_offload: Keep models in VRAM until their space is needed.
log_memory_usage: If True, a memory snapshot will be captured before and after every model cache operation, and the result will be logged (at debug level). There is a time cost to capturing the memory snapshots, so it is recommended to only enable this feature if you are actively inspecting the model cache's behaviour.
device: Preferred execution device. `auto` will choose the device depending on the hardware platform and the installed torch capabilities.<br>Valid values: `auto`, `cpu`, `cuda`, `cuda:1`, `mps`
precision: Floating point precision. `float16` will consume half the memory of `float32` but produce slightly lower-quality images. The `auto` setting will guess the proper precision based on your video card and operating system.<br>Valid values: `auto`, `float16`, `bfloat16`, `float32`, `autocast`
sequential_guidance: Whether to calculate guidance in serial instead of in parallel, lowering memory requirements.
attention_type: Attention type.<br>Valid values: `auto`, `normal`, `xformers`, `sliced`, `torch-sdp`
attention_slice_size: Slice size, valid when attention_type=="sliced".<br>Valid values: `auto`, `balanced`, `max`, `1`, `2`, `3`, `4`, `5`, `6`, `7`, `8`
force_tiled_decode: Whether to enable tiled VAE decode (reduces memory consumption with some performance penalty).
pil_compress_level: The compress_level setting of PIL.Image.save(), used for PNG encoding. All settings are lossless. 0 = no compression, 1 = fastest with slightly larger filesize, 9 = slowest with smallest filesize. 1 is typically the best setting.
max_queue_size: Maximum number of items in the session queue.
allow_nodes: List of nodes to allow. Omit to allow all.
deny_nodes: List of nodes to deny. Omit to deny none.
node_cache_size: How many cached nodes to keep in memory.
hashing_algorithm: Model hashing algorithm for model installs. 'blake3' is best for SSDs. 'blake3_single' is best for spinning disk HDDs. 'random' disables hashing, instead assigning a UUID to models. Useful when using a memory db to reduce model installation time, or if you don't care about storing stable hashes for models. Alternatively, any other hashlib algorithm is accepted, though these are not nearly as performant as blake3.<br>Valid values: `md5`, `sha1`, `sha224`, `sha256`, `sha384`, `sha512`, `blake2b`, `blake2s`, `sha3_224`, `sha3_256`, `sha3_384`, `sha3_512`, `shake_128`, `shake_256`, `blake3`, `blake3_single`, `random`
remote_api_tokens: List of regular expression and token pairs used when downloading models from URLs. The download URL is tested against the regex, and if it matches, the token is provided as a Bearer token.
"""
singleton_config: ClassVar[Optional[InvokeAIAppConfig]] = None
singleton_init: ClassVar[Optional[Dict[str, Any]]] = None
_root: Optional[Path] = PrivateAttr(default=None)
_config_file: Optional[Path] = PrivateAttr(default=None)
# fmt: off
type: Literal["InvokeAI"] = "InvokeAI"
# INTERNAL
schema_version: str = Field(default=CONFIG_SCHEMA_VERSION, description="Schema version of the config file. This is not a user-configurable setting.")
# This is only used during v3 models.yaml migration
legacy_models_yaml_path: Optional[Path] = Field(default=None, description="Path to the legacy models.yaml file. This is not a user-configurable setting.")
# WEB
host : str = Field(default="127.0.0.1", description="IP address to bind to. Use `0.0.0.0` to serve to your local network.", json_schema_extra=Categories.WebServer)
port : int = Field(default=9090, description="Port to bind to.", json_schema_extra=Categories.WebServer)
allow_origins : List[str] = Field(default=[], description="Allowed CORS origins.", json_schema_extra=Categories.WebServer)
allow_credentials : bool = Field(default=True, description="Allow CORS credentials.", json_schema_extra=Categories.WebServer)
allow_methods : List[str] = Field(default=["*"], description="Methods allowed for CORS.", json_schema_extra=Categories.WebServer)
allow_headers : List[str] = Field(default=["*"], description="Headers allowed for CORS.", json_schema_extra=Categories.WebServer)
# SSL options correspond to https://www.uvicorn.org/settings/#https
ssl_certfile : Optional[Path] = Field(default=None, description="SSL certificate file for HTTPS.", json_schema_extra=Categories.WebServer)
ssl_keyfile : Optional[Path] = Field(default=None, description="SSL key file for HTTPS.", json_schema_extra=Categories.WebServer)
host: str = Field(default="127.0.0.1", description="IP address to bind to. Use `0.0.0.0` to serve to your local network.")
port: int = Field(default=9090, description="Port to bind to.")
allow_origins: list[str] = Field(default=[], description="Allowed CORS origins.")
allow_credentials: bool = Field(default=True, description="Allow CORS credentials.")
allow_methods: list[str] = Field(default=["*"], description="Methods allowed for CORS.")
allow_headers: list[str] = Field(default=["*"], description="Headers allowed for CORS.")
ssl_certfile: Optional[Path] = Field(default=None, description="SSL certificate file for HTTPS. See https://www.uvicorn.org/settings/#https.")
ssl_keyfile: Optional[Path] = Field(default=None, description="SSL key file for HTTPS. See https://www.uvicorn.org/settings/#https.")
# FEATURES
esrgan : bool = Field(default=True, description="Enables or disables the upscaling code.", json_schema_extra=Categories.Features)
# TODO(psyche): This is not used anywhere.
internet_available : bool = Field(default=True, description="If true, attempt to download models on the fly; otherwise only use local models.", json_schema_extra=Categories.Features)
log_tokenization : bool = Field(default=False, description="Enable logging of parsed prompt tokens.", json_schema_extra=Categories.Features)
patchmatch : bool = Field(default=True, description="Enable patchmatch inpaint code.", json_schema_extra=Categories.Features)
ignore_missing_core_models : bool = Field(default=False, description='Ignore missing core models on startup. If `True`, the app will attempt to download missing models on startup.', json_schema_extra=Categories.Features)
# MISC FEATURES
log_tokenization: bool = Field(default=False, description="Enable logging of parsed prompt tokens.")
patchmatch: bool = Field(default=True, description="Enable patchmatch inpaint code.")
# PATHS
root : Optional[Path] = Field(default=None, description='The InvokeAI runtime root directory.', json_schema_extra=Categories.Paths)
autoimport_dir : Path = Field(default=Path('autoimport'), description='Path to a directory of models files to be imported on startup.', json_schema_extra=Categories.Paths)
models_dir : Path = Field(default=Path('models'), description='Path to the models directory.', json_schema_extra=Categories.Paths)
convert_cache_dir : Path = Field(default=Path('models/.cache'), description='Path to the converted models cache directory. When loading a non-diffusers model, it will be converted and stored on disk at this location.', json_schema_extra=Categories.Paths)
legacy_conf_dir : Path = Field(default=Path('configs/stable-diffusion'), description='Path to directory of legacy checkpoint config files.', json_schema_extra=Categories.Paths)
db_dir : Path = Field(default=Path('databases'), description='Path to InvokeAI databases directory.', json_schema_extra=Categories.Paths)
outdir : Path = Field(default=Path('outputs'), description='Path to directory for outputs.', json_schema_extra=Categories.Paths)
custom_nodes_dir : Path = Field(default=Path('nodes'), description='Path to directory for custom nodes.', json_schema_extra=Categories.Paths)
# TODO(psyche): This is not used anywhere.
from_file : Optional[Path] = Field(default=None, description='Take command input from the indicated file (command-line client only).', json_schema_extra=Categories.Paths)
autoimport_dir: Path = Field(default=Path("autoimport"), description="Path to a directory of models files to be imported on startup.")
models_dir: Path = Field(default=Path("models"), description="Path to the models directory.")
convert_cache_dir: Path = Field(default=Path("models/.cache"), description="Path to the converted models cache directory. When loading a non-diffusers model, it will be converted and stored on disk at this location.")
legacy_conf_dir: Path = Field(default=Path("configs"), description="Path to directory of legacy checkpoint config files.")
db_dir: Path = Field(default=Path("databases"), description="Path to InvokeAI databases directory.")
outputs_dir: Path = Field(default=Path("outputs"), description="Path to directory for outputs.")
custom_nodes_dir: Path = Field(default=Path("nodes"), description="Path to directory for custom nodes.")
# LOGGING
log_handlers : List[str] = Field(default=["console"], description='Log handler. Valid options are "console", "file=<path>", "syslog=path|address:host:port", "http=<url>".', json_schema_extra=Categories.Logging)
log_handlers: list[str] = Field(default=["console"], description='Log handler. Valid options are "console", "file=<path>", "syslog=path|address:host:port", "http=<url>".')
# note - would be better to read the log_format values from logging.py, but this creates circular dependencies issues
log_format : Literal['plain', 'color', 'syslog', 'legacy'] = Field(default="color", description='Log format. Use "plain" for text-only, "color" for colorized output, "legacy" for 2.3-style logging and "syslog" for syslog-style.', json_schema_extra=Categories.Logging)
log_level : Literal["debug", "info", "warning", "error", "critical"] = Field(default="info", description="Emit logging messages at this level or higher.", json_schema_extra=Categories.Logging)
log_sql : bool = Field(default=False, description="Log SQL queries. `log_level` must be `debug` for this to do anything. Extremely verbose.", json_schema_extra=Categories.Logging)
log_format: LOG_FORMAT = Field(default="color", description='Log format. Use "plain" for text-only, "color" for colorized output, "legacy" for 2.3-style logging and "syslog" for syslog-style.')
log_level: LOG_LEVEL = Field(default="info", description="Emit logging messages at this level or higher.")
log_sql: bool = Field(default=False, description="Log SQL queries. `log_level` must be `debug` for this to do anything. Extremely verbose.")
# Development
use_memory_db : bool = Field(default=False, description='Use in-memory database. Useful for development.', json_schema_extra=Categories.Development)
dev_reload : bool = Field(default=False, description="Automatically reload when Python sources are changed. Does not reload node definitions.", json_schema_extra=Categories.Development)
profile_graphs : bool = Field(default=False, description="Enable graph profiling using `cProfile`.", json_schema_extra=Categories.Development)
profile_prefix : Optional[str] = Field(default=None, description="An optional prefix for profile output files.", json_schema_extra=Categories.Development)
profiles_dir : Path = Field(default=Path('profiles'), description="Path to profiles output directory.", json_schema_extra=Categories.Development)
version : bool = Field(default=False, description="CLI arg - show InvokeAI version and exit.", json_schema_extra=Categories.CLIArgs)
use_memory_db: bool = Field(default=False, description="Use in-memory database. Useful for development.")
dev_reload: bool = Field(default=False, description="Automatically reload when Python sources are changed. Does not reload node definitions.")
profile_graphs: bool = Field(default=False, description="Enable graph profiling using `cProfile`.")
profile_prefix: Optional[str] = Field(default=None, description="An optional prefix for profile output files.")
profiles_dir: Path = Field(default=Path("profiles"), description="Path to profiles output directory.")
# CACHE
ram : float = Field(default=DEFAULT_RAM_CACHE, gt=0, description="Maximum memory amount used by memory model cache for rapid switching (GB).", json_schema_extra=Categories.ModelCache, )
vram : float = Field(default=DEFAULT_VRAM_CACHE, ge=0, description="Amount of VRAM reserved for model storage (GB)", json_schema_extra=Categories.ModelCache, )
convert_cache : float = Field(default=DEFAULT_CONVERT_CACHE, ge=0, description="Maximum size of on-disk converted models cache (GB)", json_schema_extra=Categories.ModelCache)
lazy_offload : bool = Field(default=True, description="Keep models in VRAM until their space is needed.", json_schema_extra=Categories.ModelCache, )
log_memory_usage : bool = Field(default=False, description="If True, a memory snapshot will be captured before and after every model cache operation, and the result will be logged (at debug level). There is a time cost to capturing the memory snapshots, so it is recommended to only enable this feature if you are actively inspecting the model cache's behaviour.", json_schema_extra=Categories.ModelCache)
ram: float = Field(default_factory=get_default_ram_cache_size, gt=0, description="Maximum memory amount used by memory model cache for rapid switching (GB).")
vram: float = Field(default=DEFAULT_VRAM_CACHE, ge=0, description="Amount of VRAM reserved for model storage (GB).")
convert_cache: float = Field(default=DEFAULT_CONVERT_CACHE, ge=0, description="Maximum size of on-disk converted models cache (GB).")
lazy_offload: bool = Field(default=True, description="Keep models in VRAM until their space is needed.")
log_memory_usage: bool = Field(default=False, description="If True, a memory snapshot will be captured before and after every model cache operation, and the result will be logged (at debug level). There is a time cost to capturing the memory snapshots, so it is recommended to only enable this feature if you are actively inspecting the model cache's behaviour.")
# DEVICE
device : Literal["auto", "cpu", "cuda", "cuda:1", "mps"] = Field(default="auto", description="Preferred execution device. `auto` will choose the device depending on the hardware platform and the installed torch capabilities.", json_schema_extra=Categories.Device)
precision : Literal["auto", "float16", "bfloat16", "float32", "autocast"] = Field(default="auto", description="Floating point precision. `float16` will consume half the memory of `float32` but produce slightly lower-quality images. The `auto` setting will guess the proper precision based on your video card and operating system.", json_schema_extra=Categories.Device)
device: DEVICE = Field(default="auto", description="Preferred execution device. `auto` will choose the device depending on the hardware platform and the installed torch capabilities.")
precision: PRECISION = Field(default="auto", description="Floating point precision. `float16` will consume half the memory of `float32` but produce slightly lower-quality images. The `auto` setting will guess the proper precision based on your video card and operating system.")
# GENERATION
sequential_guidance : bool = Field(default=False, description="Whether to calculate guidance in serial instead of in parallel, lowering memory requirements.", json_schema_extra=Categories.Generation)
attention_type : Literal["auto", "normal", "xformers", "sliced", "torch-sdp"] = Field(default="auto", description="Attention type.", json_schema_extra=Categories.Generation)
attention_slice_size: Literal["auto", "balanced", "max", 1, 2, 3, 4, 5, 6, 7, 8] = Field(default="auto", description='Slice size, valid when attention_type=="sliced".', json_schema_extra=Categories.Generation)
force_tiled_decode : bool = Field(default=False, description="Whether to enable tiled VAE decode (reduces memory consumption with some performance penalty).", json_schema_extra=Categories.Generation)
png_compress_level : int = Field(default=1, description="The compress_level setting of PIL.Image.save(), used for PNG encoding. All settings are lossless. 0 = no compression, 1 = fastest with slightly larger filesize, 9 = slowest with smallest filesize. 1 is typically the best setting.", json_schema_extra=Categories.Generation)
# QUEUE
max_queue_size : int = Field(default=10000, gt=0, description="Maximum number of items in the session queue.", json_schema_extra=Categories.Queue)
sequential_guidance: bool = Field(default=False, description="Whether to calculate guidance in serial instead of in parallel, lowering memory requirements.")
attention_type: ATTENTION_TYPE = Field(default="auto", description="Attention type.")
attention_slice_size: ATTENTION_SLICE_SIZE = Field(default="auto", description='Slice size, valid when attention_type=="sliced".')
force_tiled_decode: bool = Field(default=False, description="Whether to enable tiled VAE decode (reduces memory consumption with some performance penalty).")
pil_compress_level: int = Field(default=1, description="The compress_level setting of PIL.Image.save(), used for PNG encoding. All settings are lossless. 0 = no compression, 1 = fastest with slightly larger filesize, 9 = slowest with smallest filesize. 1 is typically the best setting.")
max_queue_size: int = Field(default=10000, gt=0, description="Maximum number of items in the session queue.")
# NODES
allow_nodes : Optional[List[str]] = Field(default=None, description="List of nodes to allow. Omit to allow all.", json_schema_extra=Categories.Nodes)
deny_nodes : Optional[List[str]] = Field(default=None, description="List of nodes to deny. Omit to deny none.", json_schema_extra=Categories.Nodes)
node_cache_size : int = Field(default=512, description="How many cached nodes to keep in memory.", json_schema_extra=Categories.Nodes)
allow_nodes: Optional[list[str]] = Field(default=None, description="List of nodes to allow. Omit to allow all.")
deny_nodes: Optional[list[str]] = Field(default=None, description="List of nodes to deny. Omit to deny none.")
node_cache_size: int = Field(default=512, description="How many cached nodes to keep in memory.")
# MODEL INSTALL
skip_model_hash : bool = Field(default=False, description="Skip model hashing, instead assigning a UUID to models. Useful when using a memory db to reduce model installation time, or if you don't care about storing stable hashes for models.", json_schema_extra=Categories.ModelInstall)
remote_api_tokens : Optional[list[URLRegexToken]] = Field(
default=None,
description="List of regular expression and token pairs used when downloading models from URLs. The download URL is tested against the regex, and if it matches, the token is provided as a Bearer token.",
json_schema_extra=Categories.ModelInstall
)
hashing_algorithm: HASHING_ALGORITHMS = Field(default="blake3", description="Model hashing algorithm for model installs. 'blake3' is best for SSDs. 'blake3_single' is best for spinning disk HDDs. 'random' disables hashing, instead assigning a UUID to models. Useful when using a memory db to reduce model installation time, or if you don't care about storing stable hashes for models. Alternatively, any other hashlib algorithm is accepted, though these are not nearly as performant as blake3.")
remote_api_tokens: Optional[list[URLRegexTokenPair]] = Field(default=None, description="List of regular expression and token pairs used when downloading models from URLs. The download URL is tested against the regex, and if it matches, the token is provided as a Bearer token.")
# TODO(psyche): Can we just remove these then?
# DEPRECATED FIELDS - STILL HERE IN ORDER TO OBTAIN VALUES FROM PRE-3.1 CONFIG FILES
always_use_cpu : bool = Field(default=False, description="If true, use the CPU for rendering even if a GPU is available.", json_schema_extra=Categories.Deprecated)
max_cache_size : Optional[float] = Field(default=None, gt=0, description="Maximum memory amount used by model cache for rapid switching", json_schema_extra=Categories.Deprecated)
max_vram_cache_size : Optional[float] = Field(default=None, ge=0, description="Amount of VRAM reserved for model storage", json_schema_extra=Categories.Deprecated)
xformers_enabled : bool = Field(default=True, description="Enable/disable memory-efficient attention", json_schema_extra=Categories.Deprecated)
tiled_decode : bool = Field(default=False, description="Whether to enable tiled VAE decode (reduces memory consumption with some performance penalty)", json_schema_extra=Categories.Deprecated)
lora_dir : Optional[Path] = Field(default=None, description='Path to a directory of LoRA/LyCORIS models to be imported on startup.', json_schema_extra=Categories.Deprecated)
embedding_dir : Optional[Path] = Field(default=None, description='Path to a directory of Textual Inversion embeddings to be imported on startup.', json_schema_extra=Categories.Deprecated)
controlnet_dir : Optional[Path] = Field(default=None, description='Path to a directory of ControlNet embeddings to be imported on startup.', json_schema_extra=Categories.Deprecated)
conf_path : Path = Field(default=Path('configs/models.yaml'), description='Path to models definition file', json_schema_extra=Categories.Deprecated)
# this is not referred to in the source code and can be removed entirely
#free_gpu_mem : Optional[bool] = Field(default=None, description="If true, purge model from GPU after each generation.", json_schema_extra=Categories.MemoryPerformance)
# See InvokeAIAppConfig subclass below for CACHE and DEVICE categories
# fmt: on
model_config = SettingsConfigDict(validate_assignment=True, env_prefix="INVOKEAI")
model_config = SettingsConfigDict(env_prefix="INVOKEAI_", env_ignore_empty=True)
def parse_args(
self,
argv: Optional[list[str]] = None,
conf: Optional[DictConfig] = None,
clobber: Optional[bool] = False,
) -> None:
def update_config(self, config: dict[str, Any] | InvokeAIAppConfig, clobber: bool = True) -> None:
"""Updates the config, overwriting existing values.
Args:
config: A dictionary of config settings, or instance of `InvokeAIAppConfig`. If an instance of \
`InvokeAIAppConfig`, only the explicitly set fields will be merged into the singleton config.
clobber: If `True`, overwrite existing values. If `False`, only update fields that are not already set.
"""
Update settings with contents of init file, environment, and command-line settings.
:param conf: alternate Omegaconf dictionary object
:param argv: alternate sys.argv list
:param clobber: overwrite any initialization parameters passed during initialization
if isinstance(config, dict):
new_config = self.model_validate(config)
else:
new_config = config
for field_name in new_config.model_fields_set:
new_value = getattr(new_config, field_name)
current_value = getattr(self, field_name)
if field_name in self.model_fields_set and not clobber:
continue
if new_value != current_value:
setattr(self, field_name, new_value)
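A minimal usage sketch of the merge behavior described in the docstring; field values are illustrative:

config = get_config()
# A dict is validated and, by default, clobbers existing values.
config.update_config({"ram": 12.0, "log_level": "debug"})
# An InvokeAIAppConfig instance contributes only its explicitly set fields; with
# clobber=False, fields already set on the target are left untouched.
config.update_config(InvokeAIAppConfig(port=9191), clobber=False)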
def write_file(self, dest_path: Path, as_example: bool = False) -> None:
"""Write the current configuration to file. This will overwrite the existing file.
A `meta` stanza is added to the top of the file, containing metadata about the config file. This is not stored in the config object.
Args:
dest_path: Path to write the config to.
"""
# Set the runtime root directory. We parse command-line switches here
# in order to pick up the --root_dir option.
super().parse_args(argv)
loaded_conf = None
if conf is None:
try:
loaded_conf = OmegaConf.load(self.root_dir / INIT_FILE)
except Exception:
pass
if isinstance(loaded_conf, DictConfig):
InvokeAISettings.initconf = loaded_conf
else:
InvokeAISettings.initconf = conf
dest_path.parent.mkdir(parents=True, exist_ok=True)
with open(dest_path, "w") as file:
# Meta fields should be written in a separate stanza - skip legacy_models_yaml_path
meta_dict = self.model_dump(mode="json", include={"schema_version"})
# parse args again in order to pick up settings in configuration file
super().parse_args(argv)
# User settings
config_dict = self.model_dump(
mode="json",
exclude_unset=False if as_example else True,
exclude_defaults=False if as_example else True,
exclude_none=True if as_example else False,
exclude={"schema_version", "legacy_models_yaml_path"},
)
if self.singleton_init and not clobber:
# When setting values in this way, set validate_assignment to true if you want to validate the value.
for k, v in self.singleton_init.items():
setattr(self, k, v)
@classmethod
def get_config(cls, **kwargs: Any) -> InvokeAIAppConfig:
"""Return a singleton InvokeAIAppConfig configuration object."""
if (
cls.singleton_config is None
or type(cls.singleton_config) is not cls
or (kwargs and cls.singleton_init != kwargs)
):
cls.singleton_config = cls(**kwargs)
cls.singleton_init = kwargs
return cls.singleton_config
@property
def root_path(self) -> Path:
"""Path to the runtime root directory."""
if self.root:
root = Path(self.root).expanduser().absolute()
else:
root = self.find_root().expanduser().absolute()
self.root = root # insulate ourselves from relative paths that may change
return root.resolve()
@property
def root_dir(self) -> Path:
"""Alias for above."""
return self.root_path
if as_example:
file.write(
"# This is an example file with default and example settings. Use the values here as a baseline.\n\n"
)
file.write("# Internal metadata - do not edit:\n")
file.write(yaml.dump(meta_dict, sort_keys=False))
file.write("\n")
file.write("# Put user settings here - see https://invoke-ai.github.io/InvokeAI/features/CONFIGURATION/:\n")
if len(config_dict) > 0:
file.write(yaml.dump(config_dict, sort_keys=False))
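The file written by this method therefore starts with a small metadata stanza followed by the explicitly set user settings; a sketch of the output with illustrative values:

# Sketch of the resulting invokeai.yaml (values illustrative):
#
# # Internal metadata - do not edit:
# schema_version: 4.0.0
#
# # Put user settings here - see https://invoke-ai.github.io/InvokeAI/features/CONFIGURATION/:
# host: 0.0.0.0
# ram: 12.0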
def _resolve(self, partial_path: Path) -> Path:
return (self.root_path / partial_path).resolve()
@property
def init_file_path(self) -> Path:
"""Path to invokeai.yaml."""
resolved_path = self._resolve(INIT_FILE)
def root_path(self) -> Path:
"""Path to the runtime root directory, resolved to an absolute path."""
if self._root:
root = Path(self._root).expanduser().absolute()
else:
root = self.find_root().expanduser().absolute()
self._root = root # insulate ourselves from relative paths that may change
return root.resolve()
@property
def config_file_path(self) -> Path:
"""Path to invokeai.yaml, resolved to an absolute path."""
resolved_path = self._resolve(self._config_file or INIT_FILE)
assert resolved_path is not None
return resolved_path
@property
def output_path(self) -> Optional[Path]:
"""Path to defaults outputs directory."""
return self._resolve(self.outdir)
def autoimport_path(self) -> Path:
"""Path to the autoimports directory, resolved to an absolute path."""
return self._resolve(self.autoimport_dir)
@property
def outputs_path(self) -> Optional[Path]:
"""Path to the outputs directory, resolved to an absolute path."""
return self._resolve(self.outputs_dir)
@property
def db_path(self) -> Path:
"""Path to the invokeai.db file."""
"""Path to the invokeai.db file, resolved to an absolute path."""
db_dir = self._resolve(self.db_dir)
assert db_dir is not None
return db_dir / DB_FILE
@property
def model_conf_path(self) -> Path:
"""Path to models configuration file."""
return self._resolve(self.conf_path)
@property
def legacy_conf_path(self) -> Path:
"""Path to directory of legacy configuration files (e.g. v1-inference.yaml)."""
"""Path to directory of legacy configuration files (e.g. v1-inference.yaml), resolved to an absolute path."""
return self._resolve(self.legacy_conf_dir)
@property
def models_path(self) -> Path:
"""Path to the models directory."""
"""Path to the models directory, resolved to an absolute path."""
return self._resolve(self.models_dir)
@property
def models_convert_cache_path(self) -> Path:
"""Path to the converted cache models directory."""
def convert_cache_path(self) -> Path:
"""Path to the converted models cache directory, resolved to an absolute path."""
return self._resolve(self.convert_cache_dir)
@property
def custom_nodes_path(self) -> Path:
"""Path to the custom nodes directory."""
"""Path to the custom nodes directory, resolved to an absolute path."""
custom_nodes_path = self._resolve(self.custom_nodes_dir)
assert custom_nodes_path is not None
return custom_nodes_path
# the following methods support legacy calls leftover from the Globals era
@property
def full_precision(self) -> bool:
"""Return true if precision set to float32."""
return self.precision == "float32"
@property
def try_patchmatch(self) -> bool:
"""Return true if patchmatch true."""
return self.patchmatch
@property
def nsfw_checker(self) -> bool:
"""Return value for NSFW checker. The NSFW node is always active and disabled from Web UI."""
return True
@property
def invisible_watermark(self) -> bool:
"""Return value of invisible watermark. It is always active and disabled from Web UI."""
return True
@property
def ram_cache_size(self) -> float:
"""Return the ram cache size using the legacy or modern setting (GB)."""
return self.max_cache_size or self.ram
@property
def vram_cache_size(self) -> float:
"""Return the vram cache size using the legacy or modern setting (GB)."""
return self.max_vram_cache_size or self.vram
@property
def convert_cache_size(self) -> float:
"""Return the convert cache size on disk (GB)."""
return self.convert_cache
@property
def use_cpu(self) -> bool:
"""Return true if the device is set to CPU or the always_use_cpu flag is set."""
return self.always_use_cpu or self.device == "cpu"
@property
def disable_xformers(self) -> bool:
"""Return true if enable_xformers is false (reversed logic) and attention type is not set to xformers."""
disabled_in_config = not self.xformers_enabled
return disabled_in_config and self.attention_type != "xformers"
@property
def profiles_path(self) -> Path:
"""Path to the graph profiles directory."""
"""Path to the graph profiles directory, resolved to an absolute path."""
return self._resolve(self.profiles_dir)
@staticmethod
def find_root() -> Path:
"""Choose the runtime root directory when not specified on command line or init file."""
return _find_root()
@staticmethod
def generate_docstrings() -> str:
"""Helper function for mkdocs. Generates a docstring for the InvokeAIAppConfig class.
You shouldn't run this manually. Instead, run `scripts/update-config-docstring.py` to update the docstring.
A makefile target is also available: `make update-config-docstring`.
See that script for more information about why this is necessary.
"""
docstring = ' """Invoke App Configuration\n\n'
docstring += " Attributes:"
field_descriptions: dict[str, list[str]] = {}
for k, v in InvokeAIAppConfig.model_fields.items():
if not isinstance(v.json_schema_extra, dict):
# Should never happen
continue
category = v.json_schema_extra.get("category", None)
if not isinstance(category, str) or category == "Deprecated":
continue
if not field_descriptions.get(category):
field_descriptions[category] = []
field_descriptions[category].append(f" {k}: **{category}**: {v.description}")
for c in [
"Web Server",
"Features",
"Paths",
"Logging",
"Development",
"CLIArgs",
"Model Install",
"Model Cache",
"Device",
"Generation",
"Queue",
"Nodes",
]:
docstring += "\n"
docstring += "\n".join(field_descriptions[c])
docstring += '\n """'
return docstring
venv = Path(os.environ.get("VIRTUAL_ENV") or ".")
if os.environ.get("INVOKEAI_ROOT"):
root = Path(os.environ["INVOKEAI_ROOT"])
elif any((venv.parent / x).exists() for x in [INIT_FILE, LEGACY_INIT_FILE]):
root = (venv.parent).resolve()
else:
root = Path("~/invokeai").expanduser().resolve()
return root
def get_invokeai_config(**kwargs: Any) -> InvokeAIAppConfig:
"""Legacy function which returns InvokeAIAppConfig.get_config()."""
return InvokeAIAppConfig.get_config(**kwargs)
def migrate_v3_config_dict(config_dict: dict[str, Any]) -> InvokeAIAppConfig:
"""Migrate a v3 config dictionary to a current config object.
Args:
config_dict: A dictionary of settings from a v3 config file.
Returns:
An instance of `InvokeAIAppConfig` with the migrated settings.
"""
parsed_config_dict: dict[str, Any] = {}
for _category_name, category_dict in config_dict["InvokeAI"].items():
for k, v in category_dict.items():
# `outdir` was renamed to `outputs_dir` in v4
if k == "outdir":
parsed_config_dict["outputs_dir"] = v
# `max_cache_size` was renamed to `ram` some time in v3, but both names were used
if k == "max_cache_size" and "ram" not in category_dict:
parsed_config_dict["ram"] = v
# `max_vram_cache_size` was renamed to `vram` some time in v3, but both names were used
if k == "max_vram_cache_size" and "vram" not in category_dict:
parsed_config_dict["vram"] = v
if k == "conf_path":
parsed_config_dict["legacy_models_yaml_path"] = v
if k == "legacy_conf_dir":
# The old default for this was "configs/stable-diffusion". If the incoming config has that as the value, we won't set it.
# Else if the path ends in "stable-diffusion", we assume the parent is the new correct path.
# Else we do not attempt to migrate this setting
if v != "configs/stable-diffusion":
parsed_config_dict["legacy_conf_dir"] = v
elif Path(v).name == "stable-diffusion":
parsed_config_dict["legacy_conf_dir"] = str(Path(v).parent)
elif k in InvokeAIAppConfig.model_fields:
# copy over any other recognized fields; unknown fields are skipped
parsed_config_dict[k] = v
config = InvokeAIAppConfig.model_validate(parsed_config_dict)
return config
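An illustrative sketch of the migration; the category and key names follow the v3 format shown in the module docstring above, while the paths and sizes are made up:

v3 = {
    "InvokeAI": {
        "Paths": {"outdir": "/data/invokeai/outputs"},
        "Model Cache": {"max_cache_size": 12.0},
    }
}
migrated = migrate_v3_config_dict(v3)
# migrated.outputs_dir == Path("/data/invokeai/outputs")  (renamed from `outdir`)
# migrated.ram == 12.0                                    (renamed from `max_cache_size`)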
def _find_root() -> Path:
venv = Path(os.environ.get("VIRTUAL_ENV") or ".")
if os.environ.get("INVOKEAI_ROOT"):
root = Path(os.environ["INVOKEAI_ROOT"])
elif any((venv.parent / x).exists() for x in [INIT_FILE, LEGACY_INIT_FILE]):
root = (venv.parent).resolve()
def load_and_migrate_config(config_path: Path) -> InvokeAIAppConfig:
"""Load and migrate a config file to the latest version.
Args:
config_path: Path to the config file.
Returns:
An instance of `InvokeAIAppConfig` with the loaded and migrated settings.
"""
assert config_path.suffix == ".yaml"
with open(config_path) as file:
loaded_config_dict = yaml.safe_load(file)
assert isinstance(loaded_config_dict, dict)
if "InvokeAI" in loaded_config_dict:
# This is a v3 config file, attempt to migrate it
shutil.copy(config_path, config_path.with_suffix(".yaml.bak"))
try:
# This could be the wrong shape, but we will catch all exceptions below
config = migrate_v3_config_dict(loaded_config_dict) # pyright: ignore [reportUnknownArgumentType]
except Exception as e:
shutil.copy(config_path.with_suffix(".yaml.bak"), config_path)
raise RuntimeError(f"Failed to load and migrate v3 config file {config_path}: {e}") from e
# By excluding defaults, we ensure that the new config file only contains the settings that were explicitly set
config.write_file(config_path)
return config
else:
root = Path("~/invokeai").expanduser().resolve()
return root
# Attempt to load as a v4 config file
try:
# Meta is not included in the model fields, so we need to validate it separately
config = InvokeAIAppConfig.model_validate(loaded_config_dict)
assert (
config.schema_version == CONFIG_SCHEMA_VERSION
), f"Invalid schema version, expected {CONFIG_SCHEMA_VERSION}: {config.schema_version}"
return config
except Exception as e:
raise RuntimeError(f"Failed to load config file {config_path}: {e}") from e
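A short usage sketch, under the assumption that the path points at either a v3 or a v4 file; the location is hypothetical:

config_path = Path("/opt/invokeai/invokeai.yaml")  # hypothetical location
config = load_and_migrate_config(config_path)
# v3 files are backed up to invokeai.yaml.bak, migrated, and rewritten in the v4 format;
# v4 files are validated against CONFIG_SCHEMA_VERSION and returned as-is.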
@lru_cache(maxsize=1)
def get_config() -> InvokeAIAppConfig:
"""Get the global singleton app config.
When first called, this function:
- Creates a config object. `pydantic-settings` handles merging of settings from environment variables, but not the init file.
- Retrieves any provided CLI args from the InvokeAIArgs class. It does not _parse_ the CLI args; that is done in the main entrypoint.
- Sets the root dir, if provided via CLI args.
- Logs in to HF if there is no valid token already.
- Copies all legacy configs to the legacy conf dir (needed for conversion from ckpt to diffusers).
- Reads and merges in settings from the config file if it exists, else writes out a default config file.
On subsequent calls, the object is returned from the cache.
"""
config = InvokeAIAppConfig()
args = InvokeAIArgs.args
# This flag serves as a proxy for whether the config was retrieved in the context of the full application or not.
# If it is False, we should just return a default config and not set the root, log in to HF, etc.
if not InvokeAIArgs.did_parse:
return config
# Set CLI args
if root := getattr(args, "root", None):
config._root = Path(root)
if config_file := getattr(args, "config_file", None):
config._config_file = Path(config_file)
# Create the example file from a deep copy, with some extra values provided
example_config = config.model_copy(deep=True)
example_config.remote_api_tokens = [
URLRegexTokenPair(url_regex="cool-models.com", token="my_secret_token"),
URLRegexTokenPair(url_regex="nifty-models.com", token="some_other_token"),
]
example_config.write_file(config.config_file_path.with_suffix(".example.yaml"), as_example=True)
# Copy all legacy configs - We know `__path__[0]` is correct here
configs_src = Path(model_configs.__path__[0]) # pyright: ignore [reportUnknownMemberType, reportUnknownArgumentType, reportAttributeAccessIssue]
shutil.copytree(configs_src, config.legacy_conf_path, dirs_exist_ok=True)
if config.config_file_path.exists():
incoming_config = load_and_migrate_config(config.config_file_path)
# Clobbering here will overwrite any settings that were set via environment variables
config.update_config(incoming_config, clobber=False)
else:
config.write_file(config.config_file_path)
return config
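For reference, a minimal sketch of how this singleton might be consumed once the entrypoint has parsed CLI args; the import path and attribute names follow those used in this changeset, and the printed values are only illustrative:

    # Minimal usage sketch (assumes the app entrypoint has already parsed CLI args via InvokeAIArgs).
    from invokeai.app.services.config.config_default import get_config

    config = get_config()
    print(config.config_file_path)  # e.g. <root>/invokeai.yaml, or whatever was passed via --config
    print(config.models_path)       # path settings are resolved against the root dir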

View File

@ -12,6 +12,7 @@ from invokeai.app.services.session_queue.session_queue_common import (
)
from invokeai.app.util.misc import get_timestamp
from invokeai.backend.model_manager import AnyModelConfig
from invokeai.backend.model_manager.config import SubModelType
class EventServiceBase:
@ -80,7 +81,7 @@ class EventServiceBase:
"graph_execution_state_id": graph_execution_state_id,
"node_id": node_id,
"source_node_id": source_node_id,
"progress_image": progress_image.model_dump() if progress_image is not None else None,
"progress_image": progress_image.model_dump(mode="json") if progress_image is not None else None,
"step": step,
"order": order,
"total_steps": total_steps,
@ -180,6 +181,7 @@ class EventServiceBase:
queue_batch_id: str,
graph_execution_state_id: str,
model_config: AnyModelConfig,
submodel_type: Optional[SubModelType] = None,
) -> None:
"""Emitted when a model is requested"""
self.__emit_queue_event(
@ -189,7 +191,8 @@ class EventServiceBase:
"queue_item_id": queue_item_id,
"queue_batch_id": queue_batch_id,
"graph_execution_state_id": graph_execution_state_id,
"model_config": model_config.model_dump(),
"model_config": model_config.model_dump(mode="json"),
"submodel_type": submodel_type,
},
)
@ -200,6 +203,7 @@ class EventServiceBase:
queue_batch_id: str,
graph_execution_state_id: str,
model_config: AnyModelConfig,
submodel_type: Optional[SubModelType] = None,
) -> None:
"""Emitted when a model is correctly loaded (returns model info)"""
self.__emit_queue_event(
@ -209,7 +213,8 @@ class EventServiceBase:
"queue_item_id": queue_item_id,
"queue_batch_id": queue_batch_id,
"graph_execution_state_id": graph_execution_state_id,
"model_config": model_config.model_dump(),
"model_config": model_config.model_dump(mode="json"),
"submodel_type": submodel_type,
},
)
@ -254,8 +259,8 @@ class EventServiceBase:
"started_at": str(session_queue_item.started_at) if session_queue_item.started_at else None,
"completed_at": str(session_queue_item.completed_at) if session_queue_item.completed_at else None,
},
"batch_status": batch_status.model_dump(),
"queue_status": queue_status.model_dump(),
"batch_status": batch_status.model_dump(mode="json"),
"queue_status": queue_status.model_dump(mode="json"),
},
)
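The switch to `model_dump(mode="json")` matters because pydantic v2's default "python" mode can return values that are not JSON-serializable. A small illustration; the `Payload` model below is hypothetical and only stands in for the event payload models:

    from datetime import datetime
    from pathlib import Path
    from pydantic import BaseModel

    class Payload(BaseModel):  # hypothetical model, for illustration only
        created_at: datetime
        local_path: Path

    p = Payload(created_at=datetime(2024, 3, 20), local_path=Path("/tmp/model.safetensors"))
    print(p.model_dump())              # datetime/PosixPath objects - not JSON-safe
    print(p.model_dump(mode="json"))   # {'created_at': '2024-03-20T00:00:00', 'local_path': '/tmp/model.safetensors'}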
@ -381,6 +386,17 @@ class EventServiceBase:
},
)
def emit_model_install_downloads_done(self, source: str) -> None:
"""
Emit once when all parts are downloaded, but before the probing and registration start.
:param source: Source of the model; local path, repo_id or url
"""
self.__emit_model_event(
event_name="model_install_downloads_done",
payload={"source": source},
)
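For orientation, the dispatched envelope looks roughly like the dict below; the "event"/"data" keys match the consumer code later in this changeset, and the source value is made up:

    example_event = {
        "event": "model_install_downloads_done",
        "data": {"source": "https://example.com/some-model.safetensors"},  # placeholder source
    }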
def emit_model_install_running(self, source: str) -> None:
"""
Emit once when an install job becomes active.
@ -405,7 +421,7 @@ class EventServiceBase:
payload={"source": source, "total_bytes": total_bytes, "key": key, "id": id},
)
def emit_model_install_cancelled(self, source: str) -> None:
def emit_model_install_cancelled(self, source: str, id: int) -> None:
"""
Emit when an install job is cancelled.
@ -413,7 +429,7 @@ class EventServiceBase:
"""
self.__emit_model_event(
event_name="model_install_cancelled",
payload={"source": source},
payload={"source": source, "id": id},
)
def emit_model_install_error(self, source: str, error_type: str, error: str, id: int) -> None:

View File

@ -82,7 +82,7 @@ class DiskImageFileStorage(ImageFileStorageBase):
image_path,
"PNG",
pnginfo=pnginfo,
compress_level=self.__invoker.services.configuration.png_compress_level,
compress_level=self.__invoker.services.configuration.pil_compress_level,
)
thumbnail_name = get_thumbnail_name(image_name)

View File

@ -114,8 +114,10 @@ class HFModelSource(StringLikeSource):
def __str__(self) -> str:
"""Return string version of repoid when string rep needed."""
base: str = self.repo_id
base += f":{self.variant or ''}"
base += f":{self.subfolder}" if self.subfolder else ""
if self.variant:
base += f":{self.variant or ''}"
if self.subfolder:
base += f":{self.subfolder}"
return base
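The change avoids a dangling ":" when no variant is set. A standalone sketch of the new rule follows; the helper function and values are illustrative and not part of the codebase:

    from typing import Optional

    def hf_source_str(repo_id: str, variant: Optional[str] = None, subfolder: Optional[str] = None) -> str:
        """Standalone sketch of the formatting rule above (not the actual HFModelSource class)."""
        base = repo_id
        if variant:
            base += f":{variant}"
        if subfolder:
            base += f":{subfolder}"
        return base

    assert hf_source_str("stabilityai/sdxl-vae") == "stabilityai/sdxl-vae"
    assert hf_source_str("stabilityai/sdxl-vae", "fp16", "vae") == "stabilityai/sdxl-vae:fp16:vae"
    assert hf_source_str("stabilityai/sdxl-vae", subfolder="vae") == "stabilityai/sdxl-vae:vae"  # no dangling ":"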

View File

@ -11,8 +11,8 @@ from shutil import copyfile, copytree, move, rmtree
from tempfile import mkdtemp
from typing import Any, Dict, List, Optional, Set, Union
import yaml
from huggingface_hub import HfFolder
from omegaconf import DictConfig, OmegaConf
from pydantic.networks import AnyHttpUrl
from requests import Session
@ -22,7 +22,6 @@ from invokeai.app.services.events.events_base import EventServiceBase
from invokeai.app.services.invoker import Invoker
from invokeai.app.services.model_records import DuplicateModelException, ModelRecordServiceBase
from invokeai.app.services.model_records.model_records_base import ModelRecordChanges
from invokeai.app.util.misc import uuid_string
from invokeai.backend.model_manager.config import (
AnyModelConfig,
BaseModelType,
@ -125,15 +124,28 @@ class ModelInstallService(ModelInstallServiceBase):
if not self._running:
raise Exception("Attempt to stop the install service before it was started")
self._stop_event.set()
with self._install_queue.mutex:
self._install_queue.queue.clear() # get rid of pending jobs
active_jobs = [x for x in self.list_jobs() if x.running]
if active_jobs:
self._logger.warning("Waiting for active install job to complete")
self.wait_for_installs()
self._clear_pending_jobs()
self._download_cache.clear()
self._running = False
def _clear_pending_jobs(self) -> None:
for job in self.list_jobs():
if not job.in_terminal_state:
self._logger.warning("Cancelling job {job.id}")
self.cancel_job(job)
while True:
try:
job = self._install_queue.get(block=False)
self._install_queue.task_done()
except Empty:
break
def _put_in_queue(self, job: ModelInstallJob) -> None:
if self._stop_event.is_set():
self.cancel_job(job)
else:
self._install_queue.put(job)
def register_path(
self,
model_path: Union[Path, str],
@ -154,10 +166,7 @@ class ModelInstallService(ModelInstallServiceBase):
model_path = Path(model_path)
config = config or {}
if self._app_config.skip_model_hash:
config["hash"] = uuid_string()
info: AnyModelConfig = ModelProbe.probe(Path(model_path), config)
info: AnyModelConfig = ModelProbe.probe(Path(model_path), config, hash_algo=self._app_config.hashing_algorithm)
if preferred_name := config.get("name"):
preferred_name = Path(preferred_name).with_suffix(model_path.suffix)
@ -222,7 +231,7 @@ class ModelInstallService(ModelInstallServiceBase):
if isinstance(source, LocalModelSource):
install_job = self._import_local_model(source, config)
self._install_queue.put(install_job) # synchronously install
self._put_in_queue(install_job) # synchronously install
elif isinstance(source, HFModelSource):
install_job = self._import_from_hf(source, config)
elif isinstance(source, URLModelSource):
@ -257,7 +266,6 @@ class ModelInstallService(ModelInstallServiceBase):
raise TimeoutError("Timeout exceeded")
return job
# TODO: Better name? Maybe wait_for_jobs()? Maybe too easily confused with above
def wait_for_installs(self, timeout: int = 0) -> List[ModelInstallJob]: # noqa D102
"""Block until all installation jobs are done."""
start = time.time()
@ -283,56 +291,66 @@ class ModelInstallService(ModelInstallServiceBase):
def sync_to_config(self) -> None:
"""Synchronize models on disk to those in the config record store database."""
self._scan_models_directory()
if autoimport := self._app_config.autoimport_dir:
if self._app_config.autoimport_path:
self._logger.info("Scanning autoimport directory for new models")
installed = self.scan_directory(self._app_config.root_path / autoimport)
installed = self.scan_directory(self._app_config.autoimport_path)
self._logger.info(f"{len(installed)} new models registered")
self._logger.info("Model installer (re)initialized")
def _migrate_yaml(self) -> None:
db_models = self.record_store.all_models()
try:
yaml = self._get_yaml()
except OSError:
return
yaml_metadata = yaml.pop("__metadata__")
yaml_version = yaml_metadata.get("version")
if yaml_version != "3.0.0":
raise ValueError(
f"Attempted migration of unsupported `models.yaml` v{yaml_version}. Only v3.0.0 is supported. Exiting."
)
self._logger.info(
f"Starting one-time migration of {len(yaml.items())} models from `models.yaml` to database. This may take a few minutes."
legacy_models_yaml_path = (
self._app_config.legacy_models_yaml_path or self._app_config.root_path / "configs" / "models.yaml"
)
if len(db_models) == 0 and len(yaml.items()) != 0:
for model_key, stanza in yaml.items():
_, _, model_name = str(model_key).split("/")
model_path = Path(stanza["path"])
if not model_path.is_absolute():
model_path = self._app_config.models_path / model_path
model_path = model_path.resolve()
# The old path may be relative to the root path
if not legacy_models_yaml_path.exists():
legacy_models_yaml_path = Path(self._app_config.root_path, legacy_models_yaml_path)
config: dict[str, Any] = {}
config["name"] = model_name
config["description"] = stanza.get("description")
config["config_path"] = stanza.get("config")
if legacy_models_yaml_path.exists():
legacy_models_yaml = yaml.safe_load(legacy_models_yaml_path.read_text())
try:
id = self.register_path(model_path=model_path, config=config)
self._logger.info(f"Migrated {model_name} with id {id}")
except Exception as e:
self._logger.warning(f"Model at {model_path} could not be migrated: {e}")
yaml_metadata = legacy_models_yaml.pop("__metadata__")
yaml_version = yaml_metadata.get("version")
# Rename `models.yaml` to `models.yaml.bak` to prevent re-migration
yaml_path = self._app_config.model_conf_path
yaml_path.rename(yaml_path.with_suffix(".yaml.bak"))
if yaml_version != "3.0.0":
raise ValueError(
f"Attempted migration of unsupported `models.yaml` v{yaml_version}. Only v3.0.0 is supported. Exiting."
)
self._logger.info(
f"Starting one-time migration of {len(legacy_models_yaml.items())} models from {str(legacy_models_yaml_path)}. This may take a few minutes."
)
if len(db_models) == 0 and len(legacy_models_yaml.items()) != 0:
for model_key, stanza in legacy_models_yaml.items():
_, _, model_name = str(model_key).split("/")
model_path = Path(stanza["path"])
if not model_path.is_absolute():
model_path = self._app_config.models_path / model_path
model_path = model_path.resolve()
config: dict[str, Any] = {}
config["name"] = model_name
config["description"] = stanza.get("description")
config["config_path"] = stanza.get("config")
try:
id = self.register_path(model_path=model_path, config=config)
self._logger.info(f"Migrated {model_name} with id {id}")
except Exception as e:
self._logger.warning(f"Model at {model_path} could not be migrated: {e}")
# Rename `models.yaml` to `models.yaml.bak` to prevent re-migration
legacy_models_yaml_path.rename(legacy_models_yaml_path.with_suffix(".yaml.bak"))
# Remove `legacy_models_yaml_path` from the config file - we are done with it either way
self._app_config.legacy_models_yaml_path = None
self._app_config.write_file(self._app_config.config_file_path)
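For orientation, the v3 `models.yaml` that this migration consumes has roughly the structure below, shown here as the loaded Python dict; the field names are inferred from the code above and the concrete values are made up:

    legacy_models_yaml = {
        "__metadata__": {"version": "3.0.0"},
        "sd-1/main/my-model": {  # key is base/type/name
            "path": "sd-1/main/my-model.safetensors",  # may be relative to the models dir
            "description": "An example checkpoint",
            "config": "configs/stable-diffusion/v1-inference.yaml",
        },
    }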
def scan_directory(self, scan_dir: Path, install: bool = False) -> List[str]: # noqa D102
self._cached_model_paths = {Path(x.path).absolute() for x in self.record_store.all_models()}
self._cached_model_paths = {Path(x.path).resolve() for x in self.record_store.all_models()}
callback = self._scan_install if install else self._scan_register
search = ModelSearch(on_model_found=callback)
self._models_installed.clear()
@ -346,7 +364,7 @@ class ModelInstallService(ModelInstallServiceBase):
"""Unregister the model. Delete its files only if they are within our models directory."""
model = self.record_store.get_model(key)
models_dir = self.app_config.models_path
model_path = Path(model.path)
model_path = models_dir / Path(model.path) # handle legacy relative model paths
if model_path.is_relative_to(models_dir):
self.unconditionally_delete(key)
else:
@ -354,7 +372,7 @@ class ModelInstallService(ModelInstallServiceBase):
def unconditionally_delete(self, key: str) -> None: # noqa D102
model = self.record_store.get_model(key)
model_path = Path(model.path)
model_path = self.app_config.models_path / model.path
if model_path.is_dir():
rmtree(model_path)
else:
@ -369,7 +387,7 @@ class ModelInstallService(ModelInstallServiceBase):
) -> Path:
"""Download the model file located at source to the models cache and return its Path."""
model_hash = sha256(str(source).encode("utf-8")).hexdigest()[0:32]
model_path = self._app_config.models_convert_cache_path / model_hash
model_path = self._app_config.convert_cache_path / model_hash
# We expect the cache directory to contain one and only one downloaded file.
# We don't know the file's name in advance, as it is set by the download
@ -410,7 +428,6 @@ class ModelInstallService(ModelInstallServiceBase):
job = self._install_queue.get(timeout=1)
except Empty:
continue
assert job.local_path is not None
try:
if job.cancelled:
@ -460,8 +477,6 @@ class ModelInstallService(ModelInstallServiceBase):
self._install_completed_event.set()
self._install_queue.task_done()
self._logger.info("Install thread exiting")
# --------------------------------------------------------------------------------------------
# Internal functions that manage the models directory
# --------------------------------------------------------------------------------------------
@ -496,6 +511,8 @@ class ModelInstallService(ModelInstallServiceBase):
for cur_base_model in BaseModelType:
for cur_model_type in ModelType:
models_dir = self._app_config.models_path / Path(cur_base_model.value, cur_model_type.value)
if not models_dir.exists():
continue
installed.update(self.scan_directory(models_dir))
self._logger.info(f"{len(installed)} new models registered; {len(defunct_models)} unregistered")
@ -522,7 +539,7 @@ class ModelInstallService(ModelInstallServiceBase):
new_path = models_dir / model.base.value / model.type.value / old_path.name
if old_path == new_path:
if old_path == new_path or new_path.exists() and old_path == new_path.resolve():
return model
self._logger.info(f"Moving {model.name} to {new_path}.")
@ -585,18 +602,15 @@ class ModelInstallService(ModelInstallServiceBase):
) -> str:
config = config or {}
if self._app_config.skip_model_hash:
config["hash"] = uuid_string()
info = info or ModelProbe.probe(model_path, config)
info = info or ModelProbe.probe(model_path, config, hash_algo=self._app_config.hashing_algorithm)
model_path = model_path.resolve()
info.path = model_path.as_posix()
# add 'main' specific fields
# Checkpoints have a config file needed for conversion - resolve this to an absolute path
if isinstance(info, CheckpointConfigBase):
legacy_conf = (self.app_config.root_dir / self.app_config.legacy_conf_dir / info.config_path).resolve()
legacy_conf = (self.app_config.legacy_conf_path / info.config_path).resolve()
info.config_path = legacy_conf.as_posix()
self.record_store.add_model(info)
return info.key
@ -607,16 +621,6 @@ class ModelInstallService(ModelInstallServiceBase):
self._next_job_id += 1
return id
# --------------------------------------------------------------------------------------------
# Internal functions that manage the old yaml config
# --------------------------------------------------------------------------------------------
def _get_yaml(self) -> DictConfig:
"""Fetch the models.yaml DictConfig for this installation."""
yaml_path = self._app_config.model_conf_path
omegaconf = OmegaConf.load(yaml_path)
assert isinstance(omegaconf, DictConfig)
return omegaconf
@staticmethod
def _guess_variant() -> Optional[ModelRepoVariant]:
"""Guess the best HuggingFace variant type to download."""
@ -788,14 +792,14 @@ class ModelInstallService(ModelInstallServiceBase):
self._logger.info(f"{download_job.source}: model download complete")
with self._lock:
install_job = self._download_cache[download_job.source]
self._download_cache.pop(download_job.source, None)
# are there any more active jobs left in this task?
if install_job.downloading and all(x.complete for x in install_job.download_parts):
install_job.status = InstallStatus.DOWNLOADS_DONE
self._install_queue.put(install_job)
self._signal_job_downloads_done(install_job)
self._put_in_queue(install_job)
# Let other threads know that the number of downloads has changed
self._download_cache.pop(download_job.source, None)
self._downloads_changed_event.set()
def _download_error_callback(self, download_job: DownloadJob, excp: Optional[Exception] = None) -> None:
@ -835,7 +839,7 @@ class ModelInstallService(ModelInstallServiceBase):
if all(x.in_terminal_state for x in install_job.download_parts):
# When all parts have reached their terminal state, we finalize the job to clean up the temporary directory and other resources
self._install_queue.put(install_job)
self._put_in_queue(install_job)
# ------------------------------------------------------------------------------------------------
# Internal methods that put events on the event bus
@ -868,6 +872,12 @@ class ModelInstallService(ModelInstallServiceBase):
id=job.id,
)
def _signal_job_downloads_done(self, job: ModelInstallJob) -> None:
job.status = InstallStatus.DOWNLOADS_DONE
self._logger.info(f"{job.source}: all parts of this model are downloaded")
if self._event_bus:
self._event_bus.emit_model_install_downloads_done(str(job.source))
def _signal_job_completed(self, job: ModelInstallJob) -> None:
job.status = InstallStatus.COMPLETED
assert job.config_out
@ -892,7 +902,7 @@ class ModelInstallService(ModelInstallServiceBase):
def _signal_job_cancelled(self, job: ModelInstallJob) -> None:
self._logger.info(f"{job.source}: model installation was cancelled")
if self._event_bus:
self._event_bus.emit_model_install_cancelled(str(job.source))
self._event_bus.emit_model_install_cancelled(str(job.source), id=job.id)
@staticmethod
def get_fetcher_from_url(url: str):

View File

@ -68,6 +68,7 @@ class ModelLoadService(ModelLoadServiceBase):
self._emit_load_event(
context_data=context_data,
model_config=model_config,
submodel_type=submodel_type,
)
implementation, model_config, submodel_type = self._registry.get_implementation(model_config, submodel_type) # type: ignore
@ -82,6 +83,7 @@ class ModelLoadService(ModelLoadServiceBase):
self._emit_load_event(
context_data=context_data,
model_config=model_config,
submodel_type=submodel_type,
loaded=True,
)
return loaded_model
@ -91,6 +93,7 @@ class ModelLoadService(ModelLoadServiceBase):
context_data: InvocationContextData,
model_config: AnyModelConfig,
loaded: Optional[bool] = False,
submodel_type: Optional[SubModelType] = None,
) -> None:
if not self._invoker:
return
@ -102,6 +105,7 @@ class ModelLoadService(ModelLoadServiceBase):
queue_batch_id=context_data.queue_item.batch_id,
graph_execution_state_id=context_data.queue_item.session_id,
model_config=model_config,
submodel_type=submodel_type,
)
else:
self._invoker.services.events.emit_model_load_completed(
@ -110,4 +114,5 @@ class ModelLoadService(ModelLoadServiceBase):
queue_batch_id=context_data.queue_item.batch_id,
graph_execution_state_id=context_data.queue_item.session_id,
model_config=model_config,
submodel_type=submodel_type,
)

View File

@ -78,14 +78,12 @@ class ModelManagerService(ModelManagerServiceBase):
logger.setLevel(app_config.log_level.upper())
ram_cache = ModelCache(
max_cache_size=app_config.ram_cache_size,
max_vram_cache_size=app_config.vram_cache_size,
max_cache_size=app_config.ram,
max_vram_cache_size=app_config.vram,
logger=logger,
execution_device=execution_device,
)
convert_cache = ModelConvertCache(
cache_path=app_config.models_convert_cache_path, max_size=app_config.convert_cache_size
)
convert_cache = ModelConvertCache(cache_path=app_config.convert_cache_path, max_size=app_config.convert_cache)
loader = ModelLoadService(
app_config=app_config,
ram_cache=ram_cache,

View File

@ -151,7 +151,7 @@ class SqliteSessionQueue(SessionQueueBase):
# TODO: how does this work in a multi-user scenario?
current_queue_size = self._get_current_queue_size(queue_id)
max_queue_size = self.__invoker.services.configuration.get_config().max_queue_size
max_queue_size = self.__invoker.services.configuration.max_queue_size
max_new_queue_items = max_queue_size - current_queue_size
priority = 0

View File

@ -423,7 +423,7 @@ class ConfigInterface(InvocationContextInterface):
The app's config.
"""
return self._services.configuration.get_config()
return self._services.configuration
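For context, a rough sketch of how an invocation might read settings through this interface; it assumes the interface is exposed as `context.config` on the invocation context, and the node body is purely illustrative:

    def invoke(self, context):  # sketch only; decorator, types and outputs omitted
        config = context.config.get_config()  # the InvokeAIAppConfig instance, per the method above
        models_dir = config.models_path       # e.g. read a resolved path setting
        ...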
class UtilInterface(InvocationContextInterface):

View File

@ -0,0 +1,51 @@
from pathlib import Path
from urllib import request
from tqdm import tqdm
from invokeai.backend.util.logging import InvokeAILogger
class ProgressBar:
"""Simple progress bar for urllib.request.urlretrieve using tqdm."""
def __init__(self, model_name: str = "file"):
self.pbar = None
self.name = model_name
def __call__(self, block_num: int, block_size: int, total_size: int):
if not self.pbar:
self.pbar = tqdm(
desc=self.name,
initial=0,
unit="iB",
unit_scale=True,
unit_divisor=1000,
total=total_size,
)
self.pbar.update(block_size)
def download_with_progress_bar(name: str, url: str, dest_path: Path) -> bool:
"""Download a file from a URL to a destination path, with a progress bar.
If the file already exists, it will not be downloaded again.
Exceptions are not caught.
Args:
name (str): Name of the file being downloaded.
url (str): URL to download the file from.
dest_path (Path): Destination path to save the file to.
Returns:
bool: True if the file was downloaded, False if it already existed.
"""
if dest_path.exists():
return False # already downloaded
InvokeAILogger.get_logger().info(f"Downloading {name}...")
dest_path.parent.mkdir(parents=True, exist_ok=True)
request.urlretrieve(url, dest_path, ProgressBar(name))
return True
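A short usage sketch, mirroring how callers elsewhere in this changeset invoke it; the URL and destination below are placeholders:

    from pathlib import Path

    downloaded = download_with_progress_bar(
        "example-model.onnx",                      # display name for the progress bar
        "https://example.com/example-model.onnx",  # placeholder URL
        Path("/tmp/models/example-model.onnx"),
    )
    print("downloaded" if downloaded else "already present")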

View File

@ -0,0 +1,24 @@
import io
import sys
from typing import Any
class SuppressOutput:
"""Context manager to suppress stdout.
Example:
```
with SuppressOutput():
print("This will not be printed")
```
"""
def __enter__(self):
# Save the original stdout
self._original_stdout = sys.stdout
# Redirect stdout to a dummy StringIO object
sys.stdout = io.StringIO()
def __exit__(self, *args: Any, **kwargs: Any):
# Restore stdout
sys.stdout = self._original_stdout

View File

@ -1,4 +0,0 @@
"""Initialization file for invokeai.backend.embeddings modules."""
# from .model_patcher import ModelPatcher
# __all__ = ["ModelPatcher"]

View File

@ -1,12 +0,0 @@
"""Base class for LoRA and Textual Inversion models.
The EmbeddingRaw class is the base class of LoRAModelRaw and TextualInversionModelRaw,
and is used for type checking of calls to the model patcher.
The use of "Raw" here is a historical artifact, and carried forward in
order to avoid confusion.
"""
class EmbeddingModelRaw:
"""Base class for LoRA and Textual Inversion models."""

View File

@ -5,21 +5,4 @@ Initialization file for invokeai.backend.image_util methods.
from .patchmatch import PatchMatch # noqa: F401
from .pngwriter import PngWriter, PromptFormatter, retrieve_metadata, write_metadata # noqa: F401
from .seamless import configure_model_padding # noqa: F401
from .txt2mask import Txt2Mask # noqa: F401
from .util import InitImageResizer, make_grid # noqa: F401
def debug_image(debug_image, debug_text, debug_show=True, debug_result=False, debug_status=False):
from PIL import ImageDraw
if not debug_status:
return
image_copy = debug_image.copy().convert("RGBA")
ImageDraw.Draw(image_copy).text((5, 5), debug_text, (255, 0, 0))
if debug_show:
image_copy.show()
if debug_result:
return image_copy

View File

@ -9,13 +9,15 @@ from einops import repeat
from PIL import Image
from torchvision.transforms import Compose
from invokeai.app.services.config.config_default import InvokeAIAppConfig
from invokeai.app.services.config.config_default import get_config
from invokeai.app.util.download_with_progress import download_with_progress_bar
from invokeai.backend.image_util.depth_anything.model.dpt import DPT_DINOv2
from invokeai.backend.image_util.depth_anything.utilities.util import NormalizeImage, PrepareForNet, Resize
from invokeai.backend.util.devices import choose_torch_device
from invokeai.backend.util.util import download_with_progress_bar
from invokeai.backend.util.logging import InvokeAILogger
config = InvokeAIAppConfig.get_config()
config = get_config()
logger = InvokeAILogger.get_logger(config=config)
DEPTH_ANYTHING_MODELS = {
"large": {
@ -54,11 +56,15 @@ class DepthAnythingDetector:
def __init__(self) -> None:
self.model = None
self.model_size: Union[Literal["large", "base", "small"], None] = None
self.device = choose_torch_device()
def load_model(self, model_size=Literal["large", "base", "small"]):
DEPTH_ANYTHING_MODEL_PATH = pathlib.Path(config.models_path / DEPTH_ANYTHING_MODELS[model_size]["local"])
if not DEPTH_ANYTHING_MODEL_PATH.exists():
download_with_progress_bar(DEPTH_ANYTHING_MODELS[model_size]["url"], DEPTH_ANYTHING_MODEL_PATH)
def load_model(self, model_size: Literal["large", "base", "small"] = "small"):
DEPTH_ANYTHING_MODEL_PATH = config.models_path / DEPTH_ANYTHING_MODELS[model_size]["local"]
download_with_progress_bar(
pathlib.Path(DEPTH_ANYTHING_MODELS[model_size]["url"]).name,
DEPTH_ANYTHING_MODELS[model_size]["url"],
DEPTH_ANYTHING_MODEL_PATH,
)
if not self.model or model_size != self.model_size:
del self.model
@ -71,8 +77,6 @@ class DepthAnythingDetector:
self.model = DPT_DINOv2(encoder="vitb", features=128, out_channels=[96, 192, 384, 768])
case "large":
self.model = DPT_DINOv2(encoder="vitl", features=256, out_channels=[256, 512, 1024, 1024])
case _:
raise TypeError("Not a supported model")
self.model.load_state_dict(torch.load(DEPTH_ANYTHING_MODEL_PATH.as_posix(), map_location="cpu"))
self.model.eval()
@ -80,20 +84,20 @@ class DepthAnythingDetector:
self.model.to(choose_torch_device())
return self.model
def to(self, device):
self.model.to(device)
return self
def __call__(self, image: Image.Image, resolution: int = 512) -> Image.Image:
if not self.model:
logger.warn("DepthAnything model was not loaded. Returning original image")
return image
def __call__(self, image, resolution=512, offload=False):
image = np.array(image, dtype=np.uint8)
image = image[:, :, ::-1] / 255.0
np_image = np.array(image, dtype=np.uint8)
np_image = np_image[:, :, ::-1] / 255.0
image_height, image_width = image.shape[:2]
image = transform({"image": image})["image"]
image = torch.from_numpy(image).unsqueeze(0).to(choose_torch_device())
image_height, image_width = np_image.shape[:2]
np_image = transform({"image": np_image})["image"]
tensor_image = torch.from_numpy(np_image).unsqueeze(0).to(choose_torch_device())
with torch.no_grad():
depth = self.model(image)
depth = self.model(tensor_image)
depth = F.interpolate(depth[None], (image_height, image_width), mode="bilinear", align_corners=False)[0, 0]
depth = (depth - depth.min()) / (depth.max() - depth.min()) * 255.0
@ -103,7 +107,4 @@ class DepthAnythingDetector:
new_height = int(image_height * (resolution / image_width))
depth_map = depth_map.resize((resolution, new_height))
if offload:
del self.model
return depth_map
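Assuming the detector is used directly, usage after these changes might look like the sketch below; the file names are placeholders, and the `load_model`/`__call__` signatures are as shown above:

    from PIL import Image

    detector = DepthAnythingDetector()
    detector.load_model("small")                                # downloads the checkpoint if missing
    depth = detector(Image.open("input.png"), resolution=512)   # placeholder input path
    depth.save("depth.png")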

View File

@ -1,14 +1,13 @@
# Code from the original DWPose Implementation: https://github.com/IDEA-Research/DWPose
# Modified pathing to suit Invoke
import pathlib
import numpy as np
import onnxruntime as ort
from invokeai.app.services.config.config_default import InvokeAIAppConfig
from invokeai.app.services.config.config_default import get_config
from invokeai.app.util.download_with_progress import download_with_progress_bar
from invokeai.backend.util.devices import choose_torch_device
from invokeai.backend.util.util import download_with_progress_bar
from .onnxdet import inference_detector
from .onnxpose import inference_pose
@ -24,7 +23,7 @@ DWPOSE_MODELS = {
},
}
config = InvokeAIAppConfig.get_config()
config = get_config()
class Wholebody:
@ -33,13 +32,13 @@ class Wholebody:
providers = ["CUDAExecutionProvider"] if device == "cuda" else ["CPUExecutionProvider"]
DET_MODEL_PATH = pathlib.Path(config.models_path / DWPOSE_MODELS["yolox_l.onnx"]["local"])
if not DET_MODEL_PATH.exists():
download_with_progress_bar(DWPOSE_MODELS["yolox_l.onnx"]["url"], DET_MODEL_PATH)
DET_MODEL_PATH = config.models_path / DWPOSE_MODELS["yolox_l.onnx"]["local"]
download_with_progress_bar("yolox_l.onnx", DWPOSE_MODELS["yolox_l.onnx"]["url"], DET_MODEL_PATH)
POSE_MODEL_PATH = pathlib.Path(config.models_path / DWPOSE_MODELS["dw-ll_ucoco_384.onnx"]["local"])
if not POSE_MODEL_PATH.exists():
download_with_progress_bar(DWPOSE_MODELS["dw-ll_ucoco_384.onnx"]["url"], POSE_MODEL_PATH)
POSE_MODEL_PATH = config.models_path / DWPOSE_MODELS["dw-ll_ucoco_384.onnx"]["local"]
download_with_progress_bar(
"dw-ll_ucoco_384.onnx", DWPOSE_MODELS["dw-ll_ucoco_384.onnx"]["url"], POSE_MODEL_PATH
)
onnx_det = DET_MODEL_PATH
onnx_pose = POSE_MODEL_PATH

View File

@ -10,9 +10,9 @@ from imwatermark import WatermarkEncoder
from PIL import Image
import invokeai.backend.util.logging as logger
from invokeai.app.services.config import InvokeAIAppConfig
from invokeai.app.services.config.config_default import get_config
config = InvokeAIAppConfig.get_config()
config = get_config()
class InvisibleWatermark:
@ -20,14 +20,8 @@ class InvisibleWatermark:
Wrapper around InvisibleWatermark module.
"""
@classmethod
def invisible_watermark_available(cls) -> bool:
return config.invisible_watermark
@classmethod
def add_watermark(cls, image: Image.Image, watermark_text: str) -> Image.Image:
if not cls.invisible_watermark_available():
return image
logger.debug(f'Applying invisible watermark "{watermark_text}"')
bgr = cv2.cvtColor(np.array(image.convert("RGB")), cv2.COLOR_RGB2BGR)
encoder = WatermarkEncoder()

View File

@ -1,46 +0,0 @@
# Copyright (c) 2023 Lincoln D. Stein and the InvokeAI Development Team
"""Very simple functions to fetch and print metadata from InvokeAI-generated images."""
import json
import sys
from pathlib import Path
from typing import Any, Dict
from PIL import Image
def get_invokeai_metadata(image_path: Path) -> Dict[str, Any]:
"""
Retrieve "invokeai_metadata" field from png image.
:param image_path: Path to the image to read metadata from.
May raise:
OSError -- image path not found
KeyError -- image doesn't contain the metadata field
"""
image: Image = Image.open(image_path)
return json.loads(image.text["invokeai_metadata"])
def print_invokeai_metadata(image_path: Path):
"""Pretty-print the metadata."""
try:
metadata = get_invokeai_metadata(image_path)
print(f"{image_path}:\n{json.dumps(metadata, sort_keys=True, indent=4)}")
except OSError:
print(f"{image_path}:\nNo file found.")
except KeyError:
print(f"{image_path}:\nNo metadata found.")
print()
def main():
"""Run the command-line utility."""
image_paths = sys.argv[1:]
if not image_paths:
print(f"Usage: {Path(sys.argv[0]).name} image1 image2 image3 ...")
print("\nPretty-print InvokeAI image metadata from the listed png files.")
sys.exit(-1)
for img in image_paths:
print_invokeai_metadata(img)

View File

@ -6,7 +6,7 @@ import torch
from PIL import Image
import invokeai.backend.util.logging as logger
from invokeai.app.services.config import get_invokeai_config
from invokeai.app.services.config.config_default import get_config
from invokeai.backend.util.devices import choose_torch_device
@ -29,7 +29,7 @@ def load_jit_model(url_or_path, device):
class LaMA:
def __call__(self, input_image: Image.Image, *args: Any, **kwds: Any) -> Any:
device = choose_torch_device()
model_location = get_invokeai_config().models_path / "core/misc/lama/lama.pt"
model_location = get_config().models_path / "core/misc/lama/lama.pt"
model = load_jit_model(model_location, device)
image = np.asarray(input_image.convert("RGB"))

View File

@ -8,9 +8,7 @@ be suppressed or deferred
import numpy as np
import invokeai.backend.util.logging as logger
from invokeai.app.services.config import InvokeAIAppConfig
config = InvokeAIAppConfig.get_config()
from invokeai.app.services.config.config_default import get_config
class PatchMatch:
@ -28,7 +26,7 @@ class PatchMatch:
def _load_patch_match(self):
if self.tried_load:
return
if config.try_patchmatch:
if get_config().patchmatch:
from patchmatch import patch_match as pm
if pm.patchmatch_available:

View File

@ -4,16 +4,18 @@ wraps the safety_checker model. It respects the global "nsfw_checker"
configuration variable, which allows the checker to be suppressed.
"""
from pathlib import Path
import numpy as np
from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker
from PIL import Image
from transformers import AutoFeatureExtractor
import invokeai.backend.util.logging as logger
from invokeai.app.services.config import InvokeAIAppConfig
from invokeai.app.services.config.config_default import get_config
from invokeai.backend.util.devices import choose_torch_device
from invokeai.backend.util.silence_warnings import SilenceWarnings
config = InvokeAIAppConfig.get_config()
CHECKER_PATH = "core/convert/stable-diffusion-safety-checker"
@ -31,30 +33,24 @@ class SafetyChecker:
if cls.tried_load:
return
if config.nsfw_checker:
try:
from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker
from transformers import AutoFeatureExtractor
cls.safety_checker = StableDiffusionSafetyChecker.from_pretrained(config.models_path / CHECKER_PATH)
cls.feature_extractor = AutoFeatureExtractor.from_pretrained(config.models_path / CHECKER_PATH)
logger.info("NSFW checker initialized")
except Exception as e:
logger.warning(f"Could not load NSFW checker: {str(e)}")
else:
logger.info("NSFW checker loading disabled")
try:
cls.safety_checker = StableDiffusionSafetyChecker.from_pretrained(get_config().models_path / CHECKER_PATH)
cls.feature_extractor = AutoFeatureExtractor.from_pretrained(get_config().models_path / CHECKER_PATH)
except Exception as e:
logger.warning(f"Could not load NSFW checker: {str(e)}")
cls.tried_load = True
@classmethod
def safety_checker_available(cls) -> bool:
cls._load_safety_checker()
return cls.safety_checker is not None
return Path(get_config().models_path, CHECKER_PATH).exists()
@classmethod
def has_nsfw_concept(cls, image: Image.Image) -> bool:
if not cls.safety_checker_available():
if not cls.safety_checker_available() and cls.tried_load:
return False
cls._load_safety_checker()
if cls.safety_checker is None or cls.feature_extractor is None:
return False
device = choose_torch_device()
features = cls.feature_extractor([image], return_tensors="pt")
features.to(device)

View File

@ -1,115 +0,0 @@
"""Makes available the Txt2Mask class, which assists in the automatic
assignment of masks via text prompt using clipseg.
Here is typical usage:
from invokeai.backend.image_util.txt2mask import Txt2Mask, SegmentedGrayscale
from PIL import Image
txt2mask = Txt2Mask(self.device)
segmented = txt2mask.segment(Image.open('/path/to/img.png'),'a bagel')
# this will return a grayscale Image of the segmented data
grayscale = segmented.to_grayscale()
# this will return a semi-transparent image in which the
# selected object(s) are opaque and the rest is at various
# levels of transparency
transparent = segmented.to_transparent()
# this will return a masked image suitable for use in inpainting:
mask = segmented.to_mask(threshold=0.5)
The threshold used in the call to to_mask() selects pixels for use in
the mask that exceed the indicated confidence threshold. Values range
from 0.0 to 1.0. The higher the threshold, the more confident the
algorithm is. In limited testing, I have found that values around 0.5
work fine.
"""
import numpy as np
import torch
from PIL import Image, ImageOps
from transformers import AutoProcessor, CLIPSegForImageSegmentation
import invokeai.backend.util.logging as logger
from invokeai.app.services.config import InvokeAIAppConfig
CLIPSEG_MODEL = "CIDAS/clipseg-rd64-refined"
CLIPSEG_SIZE = 352
config = InvokeAIAppConfig.get_config()
class SegmentedGrayscale(object):
def __init__(self, image: Image.Image, heatmap: torch.Tensor):
self.heatmap = heatmap
self.image = image
def to_grayscale(self, invert: bool = False) -> Image.Image:
return self._rescale(Image.fromarray(np.uint8(255 - self.heatmap * 255 if invert else self.heatmap * 255)))
def to_mask(self, threshold: float = 0.5) -> Image.Image:
discrete_heatmap = self.heatmap.lt(threshold).int()
return self._rescale(Image.fromarray(np.uint8(discrete_heatmap * 255), mode="L"))
def to_transparent(self, invert: bool = False) -> Image.Image:
transparent_image = self.image.copy()
# For img2img, we want the selected regions to be transparent,
# but to_grayscale() returns the opposite. Thus invert.
gs = self.to_grayscale(not invert)
transparent_image.putalpha(gs)
return transparent_image
# unscales and uncrops the 352x352 heatmap so that it matches the image again
def _rescale(self, heatmap: Image.Image) -> Image.Image:
size = self.image.width if (self.image.width > self.image.height) else self.image.height
resized_image = heatmap.resize((size, size), resample=Image.Resampling.LANCZOS)
return resized_image.crop((0, 0, self.image.width, self.image.height))
class Txt2Mask(object):
"""
Create new Txt2Mask object. The optional device argument can be one of
'cuda', 'mps' or 'cpu'.
"""
def __init__(self, device="cpu", refined=False):
logger.info("Initializing clipseg model for text to mask inference")
# BUG: we are not doing anything with the device option at this time
self.device = device
self.processor = AutoProcessor.from_pretrained(CLIPSEG_MODEL, cache_dir=config.cache_dir)
self.model = CLIPSegForImageSegmentation.from_pretrained(CLIPSEG_MODEL, cache_dir=config.cache_dir)
@torch.no_grad()
def segment(self, image: Image.Image, prompt: str) -> SegmentedGrayscale:
"""
Given a prompt string such as "a bagel", tries to identify the object in the
provided image and returns a SegmentedGrayscale object in which the brighter
pixels indicate where the object is inferred to be.
"""
if isinstance(image, str):
image = Image.open(image).convert("RGB")
image = ImageOps.exif_transpose(image)
img = self._scale_and_crop(image)
inputs = self.processor(text=[prompt], images=[img], padding=True, return_tensors="pt")
outputs = self.model(**inputs)
heatmap = torch.sigmoid(outputs.logits)
return SegmentedGrayscale(image, heatmap)
def _scale_and_crop(self, image: Image.Image) -> Image.Image:
scaled_image = Image.new("RGB", (CLIPSEG_SIZE, CLIPSEG_SIZE))
if image.width > image.height: # width is constraint
scale = CLIPSEG_SIZE / image.width
else:
scale = CLIPSEG_SIZE / image.height
scaled_image.paste(
image.resize(
(int(scale * image.width), int(scale * image.height)),
resample=Image.Resampling.LANCZOS,
),
box=(0, 0),
)
return scaled_image

View File

@ -1,41 +0,0 @@
"""
Check that the invokeai_root is correctly configured and exit if not.
"""
import sys
from invokeai.app.services.config import InvokeAIAppConfig
def check_invokeai_root(config: InvokeAIAppConfig):
try:
assert config.db_path.parent.exists(), f"{config.db_path.parent} not found"
assert config.models_path.exists(), f"{config.models_path} not found"
if not config.ignore_missing_core_models:
for model in [
"CLIP-ViT-bigG-14-laion2B-39B-b160k",
"bert-base-uncased",
"clip-vit-large-patch14",
"sd-vae-ft-mse",
"stable-diffusion-2-clip",
"stable-diffusion-safety-checker",
]:
path = config.models_path / f"core/convert/{model}"
assert path.exists(), f"{path} is missing"
except Exception as e:
print()
print(f"An exception has occurred: {str(e)}")
print("== STARTUP ABORTED ==")
print("** One or more necessary files is missing from your InvokeAI root directory **")
print("** Please rerun the configuration script to fix this problem. **")
print("** From the launcher, selection option [6]. **")
print(
'** From the command line, activate the virtual environment and run "invokeai-configure --yes --skip-sd-weights" **'
)
print(
'** (To skip this check completely, add "--ignore_missing_core_models" to your CLI args. Not installing '
"these core models will prevent the loading of some or all .safetensors and .ckpt files. However, you can "
"always come back and install these core models in the future.)"
)
input("Press any key to continue...")
sys.exit(0)

View File

@ -1,267 +0,0 @@
"""Utility (backend) functions used by model_install.py"""
from logging import Logger
from pathlib import Path
from typing import Any, Dict, List, Optional
import omegaconf
from pydantic import BaseModel, Field
from pydantic.dataclasses import dataclass
from requests import HTTPError
from tqdm import tqdm
import invokeai.configs as configs
from invokeai.app.services.config import InvokeAIAppConfig
from invokeai.app.services.download import DownloadQueueService
from invokeai.app.services.events.events_base import EventServiceBase
from invokeai.app.services.image_files.image_files_disk import DiskImageFileStorage
from invokeai.app.services.model_install import (
ModelInstallService,
ModelInstallServiceBase,
)
from invokeai.app.services.model_records import ModelRecordServiceBase, ModelRecordServiceSQL
from invokeai.app.services.shared.sqlite.sqlite_util import init_db
from invokeai.backend.model_manager import (
BaseModelType,
InvalidModelConfigException,
ModelType,
)
from invokeai.backend.model_manager.metadata import UnknownMetadataException
from invokeai.backend.util.logging import InvokeAILogger
# name of the starter models file
INITIAL_MODELS = "INITIAL_MODELS.yaml"
def initialize_record_store(app_config: InvokeAIAppConfig) -> ModelRecordServiceBase:
"""Return an initialized ModelConfigRecordServiceBase object."""
logger = InvokeAILogger.get_logger(config=app_config)
image_files = DiskImageFileStorage(f"{app_config.output_path}/images")
db = init_db(config=app_config, logger=logger, image_files=image_files)
obj: ModelRecordServiceBase = ModelRecordServiceSQL(db)
return obj
def initialize_installer(
app_config: InvokeAIAppConfig, event_bus: Optional[EventServiceBase] = None
) -> ModelInstallServiceBase:
"""Return an initialized ModelInstallService object."""
record_store = initialize_record_store(app_config)
download_queue = DownloadQueueService()
installer = ModelInstallService(
app_config=app_config,
record_store=record_store,
download_queue=download_queue,
event_bus=event_bus,
)
download_queue.start()
installer.start()
return installer
class UnifiedModelInfo(BaseModel):
"""Catchall class for information in INITIAL_MODELS2.yaml."""
name: Optional[str] = None
base: Optional[BaseModelType] = None
type: Optional[ModelType] = None
source: Optional[str] = None
subfolder: Optional[str] = None
description: Optional[str] = None
recommended: bool = False
installed: bool = False
default: bool = False
requires: List[str] = Field(default_factory=list)
@dataclass
class InstallSelections:
"""Lists of models to install and remove."""
install_models: List[UnifiedModelInfo] = Field(default_factory=list)
remove_models: List[str] = Field(default_factory=list)
class TqdmEventService(EventServiceBase):
"""An event service to track downloads."""
def __init__(self) -> None:
"""Create a new TqdmEventService object."""
super().__init__()
self._bars: Dict[str, tqdm] = {}
self._last: Dict[str, int] = {}
self._logger = InvokeAILogger.get_logger(__name__)
def dispatch(self, event_name: str, payload: Any) -> None:
"""Dispatch an event by appending it to self.events."""
data = payload["data"]
source = data["source"]
if payload["event"] == "model_install_downloading":
dest = data["local_path"]
total_bytes = data["total_bytes"]
bytes = data["bytes"]
if dest not in self._bars:
self._bars[dest] = tqdm(desc=Path(dest).name, initial=0, total=total_bytes, unit="iB", unit_scale=True)
self._last[dest] = 0
self._bars[dest].update(bytes - self._last[dest])
self._last[dest] = bytes
elif payload["event"] == "model_install_completed":
self._logger.info(f"{source}: installed successfully.")
elif payload["event"] == "model_install_error":
self._logger.warning(f"{source}: installation failed with error {data['error']}")
elif payload["event"] == "model_install_cancelled":
self._logger.warning(f"{source}: installation cancelled")
class InstallHelper(object):
"""Capture information stored jointly in INITIAL_MODELS.yaml and the installed models db."""
def __init__(self, app_config: InvokeAIAppConfig, logger: Logger):
"""Create new InstallHelper object."""
self._app_config = app_config
self.all_models: Dict[str, UnifiedModelInfo] = {}
omega = omegaconf.OmegaConf.load(Path(configs.__path__[0]) / INITIAL_MODELS)
assert isinstance(omega, omegaconf.dictconfig.DictConfig)
self._installer = initialize_installer(app_config, TqdmEventService())
self._initial_models = omega
self._installed_models: List[str] = []
self._starter_models: List[str] = []
self._default_model: Optional[str] = None
self._logger = logger
self._initialize_model_lists()
@property
def installer(self) -> ModelInstallServiceBase:
"""Return the installer object used internally."""
return self._installer
def _initialize_model_lists(self) -> None:
"""
Initialize our model slots.
Set up the following:
installed_models -- list of installed model keys
starter_models -- list of starter model keys from INITIAL_MODELS
all_models -- dict of key => UnifiedModelInfo
default_model -- key to default model
"""
# previously-installed models
for model in self._installer.record_store.all_models():
info = UnifiedModelInfo.parse_obj(model.dict())
info.installed = True
model_key = f"{model.base.value}/{model.type.value}/{model.name}"
self.all_models[model_key] = info
self._installed_models.append(model_key)
for key in self._initial_models.keys():
assert isinstance(key, str)
if key in self.all_models:
# we want to preserve the description
description = self.all_models[key].description or self._initial_models[key].get("description")
self.all_models[key].description = description
else:
base_model, model_type, model_name = key.split("/")
info = UnifiedModelInfo(
name=model_name,
type=ModelType(model_type),
base=BaseModelType(base_model),
source=self._initial_models[key].source,
description=self._initial_models[key].get("description"),
recommended=self._initial_models[key].get("recommended", False),
default=self._initial_models[key].get("default", False),
subfolder=self._initial_models[key].get("subfolder"),
requires=list(self._initial_models[key].get("requires", [])),
)
self.all_models[key] = info
if not self.default_model():
self._default_model = key
elif self._initial_models[key].get("default", False):
self._default_model = key
self._starter_models.append(key)
# previously-installed models
for model in self._installer.record_store.all_models():
info = UnifiedModelInfo.parse_obj(model.dict())
info.installed = True
model_key = f"{model.base.value}/{model.type.value}/{model.name}"
self.all_models[model_key] = info
self._installed_models.append(model_key)
def recommended_models(self) -> List[UnifiedModelInfo]:
"""List of the models recommended in INITIAL_MODELS.yaml."""
return [self._to_model(x) for x in self._starter_models if self._to_model(x).recommended]
def installed_models(self) -> List[UnifiedModelInfo]:
"""List of models already installed."""
return [self._to_model(x) for x in self._installed_models]
def starter_models(self) -> List[UnifiedModelInfo]:
"""List of starter models."""
return [self._to_model(x) for x in self._starter_models]
def default_model(self) -> Optional[UnifiedModelInfo]:
"""Return the default model."""
return self._to_model(self._default_model) if self._default_model else None
def _to_model(self, key: str) -> UnifiedModelInfo:
return self.all_models[key]
def _add_required_models(self, model_list: List[UnifiedModelInfo]) -> None:
installed = {x.source for x in self.installed_models()}
reverse_source = {x.source: x for x in self.all_models.values()}
additional_models: List[UnifiedModelInfo] = []
for model_info in model_list:
for requirement in model_info.requires:
if requirement not in installed and reverse_source.get(requirement):
additional_models.append(reverse_source[requirement])
model_list.extend(additional_models)
def add_or_delete(self, selections: InstallSelections) -> None:
"""Add or delete selected models."""
installer = self._installer
self._add_required_models(selections.install_models)
for model in selections.install_models:
assert model.source
model_path_id_or_url = model.source.strip("\"' ")
config = (
{
"description": model.description,
"name": model.name,
}
if model.name
else None
)
try:
installer.heuristic_import(
source=model_path_id_or_url,
config=config,
)
except (UnknownMetadataException, InvalidModelConfigException, HTTPError, OSError) as e:
self._logger.warning(f"{model.source}: {e}")
for model_to_remove in selections.remove_models:
parts = model_to_remove.split("/")
if len(parts) == 1:
base_model, model_type, model_name = (None, None, model_to_remove)
else:
base_model, model_type, model_name = parts
matches = installer.record_store.search_by_attr(
base_model=BaseModelType(base_model) if base_model else None,
model_type=ModelType(model_type) if model_type else None,
model_name=model_name,
)
if len(matches) > 1:
self._logger.error(
"{model_to_remove} is ambiguous. Please use model_base/model_type/model_name (e.g. sd-1/main/my_model) to disambiguate"
)
elif not matches:
self._logger.error(f"{model_to_remove}: unknown model")
else:
for m in matches:
self._logger.info(f"Deleting {m.type}:{m.name}")
installer.delete(m.key)
installer.wait_for_installs()

View File

@ -1,996 +0,0 @@
#!/usr/bin/env python
# Copyright (c) 2022 Lincoln D. Stein (https://github.com/lstein)
# Before running stable-diffusion on an internet-isolated machine,
# run this script from one with internet connectivity. The
# two machines must share a common .cache directory.
#
# Coauthor: Kevin Turner http://github.com/keturn
#
import argparse
import io
import os
import shutil
import sys
import textwrap
import traceback
import warnings
from argparse import Namespace
from enum import Enum
from pathlib import Path
from shutil import copy, get_terminal_size, move
from typing import Any, Optional, Set, Tuple, Type, get_args, get_type_hints
from urllib import request
import npyscreen
import psutil
import torch
import transformers
from diffusers import AutoencoderKL, ModelMixin
from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker
from huggingface_hub import HfFolder
from huggingface_hub import login as hf_hub_login
from omegaconf import DictConfig, OmegaConf
from pydantic.error_wrappers import ValidationError
from tqdm import tqdm
from transformers import AutoFeatureExtractor, BertTokenizerFast, CLIPTextConfig, CLIPTextModel, CLIPTokenizer
import invokeai.configs as configs
from invokeai.app.services.config import InvokeAIAppConfig
from invokeai.backend.install.install_helper import InstallHelper, InstallSelections
from invokeai.backend.install.legacy_arg_parsing import legacy_parser
from invokeai.backend.model_manager import BaseModelType, ModelType
from invokeai.backend.util import choose_precision, choose_torch_device
from invokeai.backend.util.logging import InvokeAILogger
from invokeai.frontend.install.model_install import addModelsForm
# TO DO - Move all the frontend code into invokeai.frontend.install
from invokeai.frontend.install.widgets import (
MIN_COLS,
MIN_LINES,
CenteredButtonPress,
CyclingForm,
FileBox,
MultiSelectColumns,
SingleSelectColumnsSimple,
WindowTooSmallException,
set_min_terminal_size,
)
warnings.filterwarnings("ignore")
transformers.logging.set_verbosity_error()
def get_literal_fields(field: str) -> Tuple[Any]:
return get_args(get_type_hints(InvokeAIAppConfig).get(field))
# --------------------------globals-----------------------
config = InvokeAIAppConfig.get_config()
Model_dir = "models"
Default_config_file = config.model_conf_path
SD_Configs = config.legacy_conf_path
PRECISION_CHOICES = get_literal_fields("precision")
DEVICE_CHOICES = get_literal_fields("device")
ATTENTION_CHOICES = get_literal_fields("attention_type")
ATTENTION_SLICE_CHOICES = get_literal_fields("attention_slice_size")
GENERATION_OPT_CHOICES = ["sequential_guidance", "force_tiled_decode", "lazy_offload"]
GB = 1073741824 # GB in bytes
HAS_CUDA = torch.cuda.is_available()
_, MAX_VRAM = torch.cuda.mem_get_info() if HAS_CUDA else (0.0, 0.0)
MAX_VRAM /= GB
MAX_RAM = psutil.virtual_memory().total / GB
FORCE_FULL_PRECISION = False
INIT_FILE_PREAMBLE = """# InvokeAI initialization file
# This is the InvokeAI initialization file, which contains command-line default values.
# Feel free to edit. If anything goes wrong, you can re-initialize this file by deleting
# or renaming it and then running invokeai-configure again.
"""
logger = InvokeAILogger.get_logger()
class DummyWidgetValue(Enum):
"""Dummy widget values."""
zero = 0
true = True
false = False
# --------------------------------------------
def postscript(errors: Set[str]) -> None:
if not any(errors):
message = f"""
** INVOKEAI INSTALLATION SUCCESSFUL **
If you installed manually from source or with 'pip install': activate the virtual environment
then run one of the following commands to start InvokeAI.
Web UI:
invokeai-web
If you installed using an installation script, run:
{config.root_path}/invoke.{"bat" if sys.platform == "win32" else "sh"}
Add the '--help' argument to see all of the command-line switches available for use.
"""
else:
message = (
"\n** There were errors during installation. It is possible some of the models were not fully downloaded.\n"
)
for err in errors:
message += f"\t - {err}\n"
message += "Please check the logs above and correct any issues."
print(message)
# ---------------------------------------------
def yes_or_no(prompt: str, default_yes=True):
default = "y" if default_yes else "n"
response = input(f"{prompt} [{default}] ") or default
if default_yes:
return response[0] not in ("n", "N")
else:
return response[0] in ("y", "Y")
# ---------------------------------------------
def HfLogin(access_token) -> None:
"""
Helper for logging in to Huggingface
The stdout capture is needed to hide the irrelevant "git credential helper" warning
"""
capture = io.StringIO()
sys.stdout = capture
try:
hf_hub_login(token=access_token, add_to_git_credential=False)
sys.stdout = sys.__stdout__
except Exception as exc:
sys.stdout = sys.__stdout__
print(exc)
raise exc
# -------------------------------------
class ProgressBar:
def __init__(self, model_name: str = "file"):
self.pbar = None
self.name = model_name
def __call__(self, block_num, block_size, total_size):
if not self.pbar:
self.pbar = tqdm(
desc=self.name,
initial=0,
unit="iB",
unit_scale=True,
unit_divisor=1000,
total=total_size,
)
self.pbar.update(block_size)
# ---------------------------------------------
def hf_download_from_pretrained(model_class: Type[ModelMixin], model_name: str, destination: Path, **kwargs: Any):
filter = lambda x: "fp16 is not a valid" not in x.getMessage() # noqa E731
logger.addFilter(filter)
try:
model = model_class.from_pretrained(
model_name,
resume_download=True,
**kwargs,
)
model.save_pretrained(destination, safe_serialization=True)
finally:
logger.removeFilter(filter)
return destination
# ---------------------------------------------
def download_with_progress_bar(model_url: str, model_dest: str, label: str = "the"):
try:
logger.info(f"Installing {label} model file {model_url}...")
if not os.path.exists(model_dest):
os.makedirs(os.path.dirname(model_dest), exist_ok=True)
request.urlretrieve(model_url, model_dest, ProgressBar(os.path.basename(model_dest)))
logger.info("...downloaded successfully")
else:
logger.info("...exists")
except Exception:
logger.info("...download failed")
logger.info(f"Error downloading {label} model")
print(traceback.format_exc(), file=sys.stderr)
def download_conversion_models():
target_dir = config.models_path / "core/convert"
kwargs = {} # for future use
try:
logger.info("Downloading core tokenizers and text encoders")
# bert
with warnings.catch_warnings():
warnings.filterwarnings("ignore", category=DeprecationWarning)
bert = BertTokenizerFast.from_pretrained("bert-base-uncased", **kwargs)
bert.save_pretrained(target_dir / "bert-base-uncased", safe_serialization=True)
# sd-1
repo_id = "openai/clip-vit-large-patch14"
hf_download_from_pretrained(CLIPTokenizer, repo_id, target_dir / "clip-vit-large-patch14")
hf_download_from_pretrained(CLIPTextModel, repo_id, target_dir / "clip-vit-large-patch14")
# sd-2
repo_id = "stabilityai/stable-diffusion-2"
pipeline = CLIPTokenizer.from_pretrained(repo_id, subfolder="tokenizer", **kwargs)
pipeline.save_pretrained(target_dir / "stable-diffusion-2-clip" / "tokenizer", safe_serialization=True)
pipeline = CLIPTextModel.from_pretrained(repo_id, subfolder="text_encoder", **kwargs)
pipeline.save_pretrained(target_dir / "stable-diffusion-2-clip" / "text_encoder", safe_serialization=True)
# sd-xl - tokenizer_2
repo_id = "laion/CLIP-ViT-bigG-14-laion2B-39B-b160k"
_, model_name = repo_id.split("/")
pipeline = CLIPTokenizer.from_pretrained(repo_id, **kwargs)
pipeline.save_pretrained(target_dir / model_name, safe_serialization=True)
pipeline = CLIPTextConfig.from_pretrained(repo_id, **kwargs)
pipeline.save_pretrained(target_dir / model_name, safe_serialization=True)
# VAE
logger.info("Downloading stable diffusion VAE")
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse", **kwargs)
vae.save_pretrained(target_dir / "sd-vae-ft-mse", safe_serialization=True)
# safety checking
logger.info("Downloading safety checker")
repo_id = "CompVis/stable-diffusion-safety-checker"
pipeline = AutoFeatureExtractor.from_pretrained(repo_id, **kwargs)
pipeline.save_pretrained(target_dir / "stable-diffusion-safety-checker", safe_serialization=True)
pipeline = StableDiffusionSafetyChecker.from_pretrained(repo_id, **kwargs)
pipeline.save_pretrained(target_dir / "stable-diffusion-safety-checker", safe_serialization=True)
except KeyboardInterrupt:
raise
except Exception as e:
logger.error(str(e))
# ---------------------------------------------
# TO DO: use the download queue here.
def download_realesrgan():
logger.info("Installing ESRGAN Upscaling models...")
URLs = [
{
"url": "https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth",
"dest": "core/upscaling/realesrgan/RealESRGAN_x4plus.pth",
"description": "RealESRGAN_x4plus.pth",
},
{
"url": "https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth",
"dest": "core/upscaling/realesrgan/RealESRGAN_x4plus_anime_6B.pth",
"description": "RealESRGAN_x4plus_anime_6B.pth",
},
{
"url": "https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.1/ESRGAN_SRx4_DF2KOST_official-ff704c30.pth",
"dest": "core/upscaling/realesrgan/ESRGAN_SRx4_DF2KOST_official-ff704c30.pth",
"description": "ESRGAN_SRx4_DF2KOST_official.pth",
},
{
"url": "https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.1/RealESRGAN_x2plus.pth",
"dest": "core/upscaling/realesrgan/RealESRGAN_x2plus.pth",
"description": "RealESRGAN_x2plus.pth",
},
]
for model in URLs:
download_with_progress_bar(model["url"], config.models_path / model["dest"], model["description"])
# ---------------------------------------------
def download_lama():
logger.info("Installing lama infill model")
download_with_progress_bar(
"https://github.com/Sanster/models/releases/download/add_big_lama/big-lama.pt",
config.models_path / "core/misc/lama/lama.pt",
"lama infill model",
)
# ---------------------------------------------
def download_support_models() -> None:
download_realesrgan()
download_lama()
download_conversion_models()
# -------------------------------------
def get_root(root: Optional[str] = None) -> str:
if root:
return root
elif root := os.environ.get("INVOKEAI_ROOT"):
assert root is not None
return root
else:
return str(config.root_path)
# -------------------------------------
class editOptsForm(CyclingForm, npyscreen.FormMultiPage):
# for responsive resizing - disabled
# FIX_MINIMUM_SIZE_WHEN_CREATED = False
def create(self):
program_opts = self.parentApp.program_opts
old_opts = self.parentApp.invokeai_opts
first_time = not (config.root_path / "invokeai.yaml").exists()
access_token = HfFolder.get_token()
window_width, window_height = get_terminal_size()
label = """Configure startup settings. You can come back and change these later.
Use ctrl-N and ctrl-P to move to the <N>ext and <P>revious fields.
Use cursor arrows to make a checkbox selection, and space to toggle.
"""
self.nextrely -= 1
for i in textwrap.wrap(label, width=window_width - 6):
self.add_widget_intelligent(
npyscreen.FixedText,
value=i,
editable=False,
color="CONTROL",
)
self.nextrely += 1
label = """HuggingFace access token (OPTIONAL) for automatic model downloads. See https://huggingface.co/settings/tokens."""
for line in textwrap.wrap(label, width=window_width - 6):
self.add_widget_intelligent(
npyscreen.FixedText,
value=line,
editable=False,
color="CONTROL",
)
self.hf_token = self.add_widget_intelligent(
npyscreen.TitlePassword,
name="Access Token (ctrl-shift-V pastes):",
value=access_token,
begin_entry_at=42,
use_two_lines=False,
scroll_exit=True,
)
# old settings for defaults
precision = old_opts.precision or ("float32" if program_opts.full_precision else "auto")
device = old_opts.device
attention_type = old_opts.attention_type
attention_slice_size = old_opts.attention_slice_size
self.nextrely += 1
self.add_widget_intelligent(
npyscreen.TitleFixedText,
name="Image Generation Options:",
editable=False,
color="CONTROL",
scroll_exit=True,
)
self.nextrely -= 2
self.generation_options = self.add_widget_intelligent(
MultiSelectColumns,
columns=3,
values=GENERATION_OPT_CHOICES,
value=[GENERATION_OPT_CHOICES.index(x) for x in GENERATION_OPT_CHOICES if getattr(old_opts, x)],
relx=30,
max_height=2,
max_width=80,
scroll_exit=True,
)
self.add_widget_intelligent(
npyscreen.TitleFixedText,
name="Floating Point Precision:",
begin_entry_at=0,
editable=False,
color="CONTROL",
scroll_exit=True,
)
self.nextrely -= 2
self.precision = self.add_widget_intelligent(
SingleSelectColumnsSimple,
columns=len(PRECISION_CHOICES),
name="Precision",
values=PRECISION_CHOICES,
value=PRECISION_CHOICES.index(precision),
begin_entry_at=3,
max_height=2,
relx=30,
max_width=80,
scroll_exit=True,
)
self.add_widget_intelligent(
npyscreen.TitleFixedText,
name="Generation Device:",
begin_entry_at=0,
editable=False,
color="CONTROL",
scroll_exit=True,
)
self.nextrely -= 2
self.device = self.add_widget_intelligent(
SingleSelectColumnsSimple,
columns=len(DEVICE_CHOICES),
values=DEVICE_CHOICES,
value=[DEVICE_CHOICES.index(device)],
begin_entry_at=3,
relx=30,
max_height=2,
max_width=60,
scroll_exit=True,
)
self.add_widget_intelligent(
npyscreen.TitleFixedText,
name="Attention Type:",
begin_entry_at=0,
editable=False,
color="CONTROL",
scroll_exit=True,
)
self.nextrely -= 2
self.attention_type = self.add_widget_intelligent(
SingleSelectColumnsSimple,
columns=len(ATTENTION_CHOICES),
values=ATTENTION_CHOICES,
value=[ATTENTION_CHOICES.index(attention_type)],
begin_entry_at=3,
max_height=2,
relx=30,
max_width=80,
scroll_exit=True,
)
self.attention_type.on_changed = self.show_hide_slice_sizes
self.attention_slice_label = self.add_widget_intelligent(
npyscreen.TitleFixedText,
name="Attention Slice Size:",
relx=5,
editable=False,
hidden=attention_type != "sliced",
color="CONTROL",
scroll_exit=True,
)
self.nextrely -= 2
self.attention_slice_size = self.add_widget_intelligent(
SingleSelectColumnsSimple,
columns=len(ATTENTION_SLICE_CHOICES),
values=ATTENTION_SLICE_CHOICES,
value=[ATTENTION_SLICE_CHOICES.index(attention_slice_size)],
relx=30,
hidden=attention_type != "sliced",
max_height=2,
max_width=110,
scroll_exit=True,
)
self.add_widget_intelligent(
npyscreen.TitleFixedText,
name="Model disk conversion cache size (GB). This is used to cache safetensors files that need to be converted to diffusers..",
begin_entry_at=0,
editable=False,
color="CONTROL",
scroll_exit=True,
)
self.nextrely -= 1
self.disk = self.add_widget_intelligent(
npyscreen.Slider,
value=clip(old_opts.convert_cache, range=(0, 100), step=0.5),
out_of=100,
lowest=0.0,
step=0.5,
relx=8,
scroll_exit=True,
)
self.nextrely += 1
self.add_widget_intelligent(
npyscreen.TitleFixedText,
name="Model RAM cache size (GB). Make this at least large enough to hold a single full model (2GB for SD-1, 6GB for SDXL).",
begin_entry_at=0,
editable=False,
color="CONTROL",
scroll_exit=True,
)
self.nextrely -= 1
self.ram = self.add_widget_intelligent(
npyscreen.Slider,
value=clip(old_opts.ram_cache_size, range=(3.0, MAX_RAM), step=0.5),
out_of=round(MAX_RAM),
lowest=0.0,
step=0.5,
relx=8,
scroll_exit=True,
)
if HAS_CUDA:
self.nextrely += 1
self.add_widget_intelligent(
npyscreen.TitleFixedText,
name="Model VRAM cache size (GB). Reserving a small amount of VRAM will modestly speed up the start of image generation.",
begin_entry_at=0,
editable=False,
color="CONTROL",
scroll_exit=True,
)
self.nextrely -= 1
self.vram = self.add_widget_intelligent(
npyscreen.Slider,
value=clip(old_opts.vram_cache_size, range=(0, MAX_VRAM), step=0.25),
out_of=round(MAX_VRAM * 2) / 2,
lowest=0.0,
relx=8,
step=0.25,
scroll_exit=True,
)
else:
self.vram = DummyWidgetValue.zero
self.nextrely += 1
self.add_widget_intelligent(
npyscreen.FixedText,
value="Location of the database used to store model path and configuration information:",
editable=False,
color="CONTROL",
)
self.nextrely += 1
self.outdir = self.add_widget_intelligent(
FileBox,
name="Output directory for images (<tab> autocompletes, ctrl-N advances):",
value=str(default_output_dir()),
select_dir=True,
must_exist=False,
use_two_lines=False,
labelColor="GOOD",
begin_entry_at=40,
max_height=3,
max_width=127,
scroll_exit=True,
)
self.autoimport_dirs = {}
self.autoimport_dirs["autoimport_dir"] = self.add_widget_intelligent(
FileBox,
name="Optional folder to scan for new checkpoints, ControlNets, LoRAs and TI models",
value=str(config.root_path / config.autoimport_dir) if config.autoimport_dir else "",
select_dir=True,
must_exist=False,
use_two_lines=False,
labelColor="GOOD",
begin_entry_at=32,
max_height=3,
max_width=127,
scroll_exit=True,
)
self.nextrely += 1
label = """BY DOWNLOADING THE STABLE DIFFUSION WEIGHT FILES, YOU AGREE TO HAVE READ
AND ACCEPTED THE CREATIVEML RESPONSIBLE AI LICENSES LOCATED AT
https://huggingface.co/spaces/CompVis/stable-diffusion-license and
https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/blob/main/LICENSE.md
"""
for i in textwrap.wrap(label, width=window_width - 6):
self.add_widget_intelligent(
npyscreen.FixedText,
value=i,
editable=False,
color="CONTROL",
)
self.license_acceptance = self.add_widget_intelligent(
npyscreen.Checkbox,
name="I accept the CreativeML Responsible AI Licenses",
value=not first_time,
relx=2,
scroll_exit=True,
)
self.nextrely += 1
label = "DONE" if program_opts.skip_sd_weights or program_opts.default_only else "NEXT"
self.ok_button = self.add_widget_intelligent(
CenteredButtonPress,
name=label,
relx=(window_width - len(label)) // 2,
when_pressed_function=self.on_ok,
)
def show_hide_slice_sizes(self, value):
show = ATTENTION_CHOICES[value[0]] == "sliced"
self.attention_slice_label.hidden = not show
self.attention_slice_size.hidden = not show
def show_hide_model_conf_override(self, value):
self.model_conf_override.hidden = value
self.model_conf_override.display()
def on_ok(self):
options = self.marshall_arguments()
if self.validate_field_values(options):
self.parentApp.new_opts = options
if hasattr(self.parentApp, "model_select"):
self.parentApp.setNextForm("MODELS")
else:
self.parentApp.setNextForm(None)
self.editing = False
else:
self.editing = True
def validate_field_values(self, opt: Namespace) -> bool:
bad_fields = []
if not opt.license_acceptance:
bad_fields.append("Please accept the license terms before proceeding to model downloads")
if not Path(opt.outdir).parent.exists():
bad_fields.append(
f"The output directory does not seem to be valid. Please check that {str(Path(opt.outdir).parent)} is an existing directory."
)
if len(bad_fields) > 0:
message = "The following problems were detected and must be corrected:\n"
for problem in bad_fields:
message += f"* {problem}\n"
npyscreen.notify_confirm(message)
return False
else:
return True
def marshall_arguments(self) -> Namespace:
new_opts = Namespace()
for attr in [
"ram",
"vram",
"convert_cache",
"outdir",
]:
if hasattr(self, attr):
setattr(new_opts, attr, getattr(self, attr).value)
for attr in self.autoimport_dirs:
if not self.autoimport_dirs[attr].value:
continue
directory = Path(self.autoimport_dirs[attr].value)
if directory.is_relative_to(config.root_path):
directory = directory.relative_to(config.root_path)
setattr(new_opts, attr, directory)
new_opts.hf_token = self.hf_token.value
new_opts.license_acceptance = self.license_acceptance.value
new_opts.precision = PRECISION_CHOICES[self.precision.value[0]]
new_opts.device = DEVICE_CHOICES[self.device.value[0]]
new_opts.attention_type = ATTENTION_CHOICES[self.attention_type.value[0]]
new_opts.attention_slice_size = ATTENTION_SLICE_CHOICES[self.attention_slice_size.value[0]]
generation_options = [GENERATION_OPT_CHOICES[x] for x in self.generation_options.value]
for v in GENERATION_OPT_CHOICES:
setattr(new_opts, v, v in generation_options)
return new_opts
class EditOptApplication(npyscreen.NPSAppManaged):
def __init__(self, program_opts: Namespace, invokeai_opts: InvokeAIAppConfig, install_helper: InstallHelper):
super().__init__()
self.program_opts = program_opts
self.invokeai_opts = invokeai_opts
self.user_cancelled = False
self.autoload_pending = True
self.install_helper = install_helper
self.install_selections = default_user_selections(program_opts, install_helper)
def onStart(self):
npyscreen.setTheme(npyscreen.Themes.DefaultTheme)
self.options = self.addForm(
"MAIN",
editOptsForm,
name="InvokeAI Startup Options",
cycle_widgets=False,
)
if not (self.program_opts.skip_sd_weights or self.program_opts.default_only):
self.model_select = self.addForm(
"MODELS",
addModelsForm,
name="Install Stable Diffusion Models",
multipage=True,
cycle_widgets=False,
)
def default_ramcache() -> float:
"""Run a heuristic for the default RAM cache based on installed RAM."""
# Note that on my 64 GB machine, psutil.virtual_memory().total gives 62 GB,
# so we adjust everything down a bit.
return (
15.0 if MAX_RAM >= 60 else 7.5 if MAX_RAM >= 30 else 4 if MAX_RAM >= 14 else 2.1
) # 2.1 is just large enough for sd 1.5 ;-)
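# For example, a 64 GB machine (reported as ~62 GB) lands in the ">= 60" bucket
# and gets a 15.0 GB default cache, while a 16 GB machine typically lands in the
# ">= 14" bucket and gets 4 GB.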
def default_startup_options(init_file: Path) -> InvokeAIAppConfig:
opts = InvokeAIAppConfig.get_config()
opts.ram = default_ramcache()
opts.precision = "float32" if FORCE_FULL_PRECISION else choose_precision(torch.device(choose_torch_device()))
return opts
def default_user_selections(program_opts: Namespace, install_helper: InstallHelper) -> InstallSelections:
default_model = install_helper.default_model()
assert default_model is not None
default_models = [default_model] if program_opts.default_only else install_helper.recommended_models()
return InstallSelections(
install_models=default_models if program_opts.yes_to_all else [],
)
# -------------------------------------
def clip(value: float, range: tuple[float, float], step: float) -> float:
minimum, maximum = range
if value < minimum:
value = minimum
if value > maximum:
value = maximum
return round(value / step) * step
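# e.g. clip(7.3, range=(0, 100), step=0.5) -> 7.5; out-of-range values are
# clamped to the range first and then rounded to the nearest step.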
# -------------------------------------
def initialize_rootdir(root: Path, yes_to_all: bool = False):
logger.info("Initializing InvokeAI runtime directory")
for name in ("models", "databases", "text-inversion-output", "text-inversion-training-data", "configs"):
os.makedirs(os.path.join(root, name), exist_ok=True)
for model_type in ModelType:
Path(root, "autoimport", model_type.value).mkdir(parents=True, exist_ok=True)
configs_src = Path(configs.__path__[0])
configs_dest = root / "configs"
if not os.path.samefile(configs_src, configs_dest):
shutil.copytree(configs_src, configs_dest, dirs_exist_ok=True)
dest = root / "models"
for model_base in BaseModelType:
for model_type in ModelType:
path = dest / model_base.value / model_type.value
path.mkdir(parents=True, exist_ok=True)
path = dest / "core"
path.mkdir(parents=True, exist_ok=True)
# -------------------------------------
def run_console_ui(
program_opts: Namespace, initfile: Path, install_helper: InstallHelper
) -> Tuple[Optional[Namespace], Optional[InstallSelections]]:
first_time = not (config.root_path / "invokeai.yaml").exists()
invokeai_opts = default_startup_options(initfile) if first_time else config
invokeai_opts.root = program_opts.root
if not set_min_terminal_size(MIN_COLS, MIN_LINES):
raise WindowTooSmallException(
"Could not increase terminal size. Try running again with a larger window or smaller font size."
)
editApp = EditOptApplication(program_opts, invokeai_opts, install_helper)
editApp.run()
if editApp.user_cancelled:
return (None, None)
else:
return (editApp.new_opts, editApp.install_selections)
# -------------------------------------
def write_opts(opts: InvokeAIAppConfig, init_file: Path) -> None:
"""
Update the invokeai.yaml file with values from current settings.
"""
# this will load current settings
new_config = InvokeAIAppConfig.get_config()
new_config.root = config.root
for key, value in vars(opts).items():
if hasattr(new_config, key):
setattr(new_config, key, value)
with open(init_file, "w", encoding="utf-8") as file:
file.write(new_config.to_yaml())
if hasattr(opts, "hf_token") and opts.hf_token:
HfLogin(opts.hf_token)
# -------------------------------------
def default_output_dir() -> Path:
return config.root_path / "outputs"
# -------------------------------------
def write_default_options(program_opts: Namespace, initfile: Path) -> None:
opt = default_startup_options(initfile)
write_opts(opt, initfile)
# -------------------------------------
# Here we bring in
# the legacy Args object in order to parse
# the old init file and write out the new
# yaml format.
def migrate_init_file(legacy_format: Path) -> None:
old = legacy_parser.parse_args([f"@{str(legacy_format)}"])
new = InvokeAIAppConfig.get_config()
for attr in InvokeAIAppConfig.model_fields.keys():
if hasattr(old, attr):
try:
setattr(new, attr, getattr(old, attr))
except ValidationError as e:
print(f"* Ignoring incompatible value for field {attr}:\n {str(e)}")
# a few places where the field names have changed and we have to
# manually add in the new names/values
new.xformers_enabled = old.xformers
new.conf_path = old.conf
new.root = legacy_format.parent.resolve()
invokeai_yaml = legacy_format.parent / "invokeai.yaml"
with open(invokeai_yaml, "w", encoding="utf-8") as outfile:
outfile.write(new.to_yaml())
legacy_format.replace(legacy_format.parent / "invokeai.init.orig")
# -------------------------------------
def migrate_models(root: Path) -> None:
from invokeai.backend.install.migrate_to_3 import do_migrate
do_migrate(root, root)
def migrate_if_needed(opt: Namespace, root: Path) -> bool:
# We check to see whether the runtime directory is correctly initialized.
old_init_file = root / "invokeai.init"
new_init_file = root / "invokeai.yaml"
old_hub = root / "models/hub"
migration_needed = (old_init_file.exists() and not new_init_file.exists()) and old_hub.exists()
if migration_needed:
if opt.yes_to_all or yes_or_no(
f"{str(config.root_path)} appears to be a 2.3 format root directory. Convert to version 3.0?"
):
logger.info("** Migrating invokeai.init to invokeai.yaml")
migrate_init_file(old_init_file)
omegaconf = OmegaConf.load(new_init_file)
assert isinstance(omegaconf, DictConfig)
config.parse_args(argv=[], conf=omegaconf)
if old_hub.exists():
migrate_models(config.root_path)
else:
print("Cannot continue without conversion. Aborting.")
return migration_needed
# -------------------------------------
def main() -> None:
global FORCE_FULL_PRECISION # FIXME
parser = argparse.ArgumentParser(description="InvokeAI model downloader")
parser.add_argument(
"--skip-sd-weights",
dest="skip_sd_weights",
action=argparse.BooleanOptionalAction,
default=False,
help="skip downloading the large Stable Diffusion weight files",
)
parser.add_argument(
"--skip-support-models",
dest="skip_support_models",
action=argparse.BooleanOptionalAction,
default=False,
help="skip downloading the support models",
)
parser.add_argument(
"--full-precision",
dest="full_precision",
action=argparse.BooleanOptionalAction,
type=bool,
default=False,
help="use 32-bit weights instead of faster 16-bit weights",
)
parser.add_argument(
"--yes",
"-y",
dest="yes_to_all",
action="store_true",
help='answer "yes" to all prompts',
)
parser.add_argument(
"--default_only",
action="store_true",
help="when --yes specified, only install the default model",
)
parser.add_argument(
"--config_file",
"-c",
dest="config_file",
type=str,
default=None,
help="path to configuration file to create",
)
parser.add_argument(
"--root_dir",
dest="root",
type=str,
default=None,
help="path to root of install directory",
)
opt = parser.parse_args()
invoke_args = []
if opt.root:
invoke_args.extend(["--root", opt.root])
if opt.full_precision:
invoke_args.extend(["--precision", "float32"])
config.parse_args(invoke_args)
logger = InvokeAILogger().get_logger(config=config)
errors = set()
FORCE_FULL_PRECISION = opt.full_precision # FIXME global
new_init_file = config.root_path / "invokeai.yaml"
backup_init_file = new_init_file.with_suffix(".bak")
if new_init_file.exists():
copy(new_init_file, backup_init_file)
try:
# if we do a root migration/upgrade, then we are keeping previous
# configuration and we are done.
if migrate_if_needed(opt, config.root_path):
sys.exit(0)
# run this unconditionally in case new directories need to be added
initialize_rootdir(config.root_path, opt.yes_to_all)
# this will initialize and populate the models tables if not present
install_helper = InstallHelper(config, logger)
models_to_download = default_user_selections(opt, install_helper)
if opt.yes_to_all:
write_default_options(opt, new_init_file)
init_options = Namespace(precision="float32" if opt.full_precision else "float16")
else:
init_options, models_to_download = run_console_ui(opt, new_init_file, install_helper)
if init_options:
write_opts(init_options, new_init_file)
else:
logger.info('\n** CANCELLED AT USER\'S REQUEST. USE THE "invoke.sh" LAUNCHER TO RUN LATER **\n')
sys.exit(0)
if opt.skip_support_models:
logger.info("Skipping support models at user's request")
else:
logger.info("Installing support models")
download_support_models()
if opt.skip_sd_weights:
logger.warning("Skipping diffusion weights download per user request")
elif models_to_download:
install_helper.add_or_delete(models_to_download)
postscript(errors=errors)
if not opt.yes_to_all:
input("Press any key to continue...")
except WindowTooSmallException as e:
logger.error(str(e))
if backup_init_file.exists():
move(backup_init_file, new_init_file)
except KeyboardInterrupt:
print("\nGoodbye! Come back soon.")
if backup_init_file.exists():
move(backup_init_file, new_init_file)
except Exception:
print("An error occurred during installation.")
if backup_init_file.exists():
move(backup_init_file, new_init_file)
print(traceback.format_exc(), file=sys.stderr)
# -------------------------------------
if __name__ == "__main__":
main()


@ -1,379 +0,0 @@
# Copyright 2023 Lincoln D. Stein and the InvokeAI Team
import argparse
import shlex
from argparse import ArgumentParser
# note that this includes both old sampler names and new scheduler names
# in order to be able to parse both 2.0 and 3.0-pre-nodes versions of invokeai.init
SAMPLER_CHOICES = [
"ddim",
"ddpm",
"deis",
"lms",
"lms_k",
"pndm",
"heun",
"heun_k",
"euler",
"euler_k",
"euler_a",
"kdpm_2",
"kdpm_2_a",
"dpmpp_2s",
"dpmpp_2s_k",
"dpmpp_2m",
"dpmpp_2m_k",
"dpmpp_2m_sde",
"dpmpp_2m_sde_k",
"dpmpp_sde",
"dpmpp_sde_k",
"unipc",
"k_dpm_2_a",
"k_dpm_2",
"k_dpmpp_2_a",
"k_dpmpp_2",
"k_euler_a",
"k_euler",
"k_heun",
"k_lms",
"plms",
"lcm",
]
PRECISION_CHOICES = [
"auto",
"float32",
"autocast",
"float16",
]
class FileArgumentParser(ArgumentParser):
"""
Supports reading defaults from an init file.
"""
def convert_arg_line_to_args(self, arg_line):
return shlex.split(arg_line, comments=True)
legacy_parser = FileArgumentParser(
description="""
Generate images using Stable Diffusion.
Use --web to launch the web interface.
Use --from_file to load prompts from a file path or standard input ("-").
Otherwise you will be dropped into an interactive command prompt (type -h for help.)
Other command-line arguments are defaults that can usually be overridden
from within the command prompt.
""",
fromfile_prefix_chars="@",
)
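# Because fromfile_prefix_chars="@" is set, an old invokeai.init file can be fed
# to legacy_parser.parse_args([f"@{path}"]). Each line of the file is split with
# shlex (comments=True), so a line such as
#   --outdir outputs  # where images go
# behaves like arguments typed on the command line.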
general_group = legacy_parser.add_argument_group("General")
model_group = legacy_parser.add_argument_group("Model selection")
file_group = legacy_parser.add_argument_group("Input/output")
web_server_group = legacy_parser.add_argument_group("Web server")
render_group = legacy_parser.add_argument_group("Rendering")
postprocessing_group = legacy_parser.add_argument_group("Postprocessing")
deprecated_group = legacy_parser.add_argument_group("Deprecated options")
deprecated_group.add_argument("--laion400m")
deprecated_group.add_argument("--weights") # deprecated
general_group.add_argument("--version", "-V", action="store_true", help="Print InvokeAI version number")
model_group.add_argument(
"--root_dir",
default=None,
help='Path to directory containing "models", "outputs" and "configs". If not present will read from environment variable INVOKEAI_ROOT. Defaults to ~/invokeai.',
)
model_group.add_argument(
"--config",
"-c",
"-config",
dest="conf",
default="./configs/models.yaml",
help="Path to configuration file for alternate models.",
)
model_group.add_argument(
"--model",
help='Indicates which diffusion model to load (defaults to "default" stanza in configs/models.yaml)',
)
model_group.add_argument(
"--weight_dirs",
nargs="+",
type=str,
help="List of one or more directories that will be auto-scanned for new model weights to import",
)
model_group.add_argument(
"--png_compression",
"-z",
type=int,
default=6,
choices=range(0, 9),
dest="png_compression",
help="level of PNG compression, from 0 (none) to 9 (maximum). Default is 6.",
)
model_group.add_argument(
"-F",
"--full_precision",
dest="full_precision",
action="store_true",
help="Deprecated way to set --precision=float32",
)
model_group.add_argument(
"--max_loaded_models",
dest="max_loaded_models",
type=int,
default=2,
help="Maximum number of models to keep in memory for fast switching, including the one in GPU",
)
model_group.add_argument(
"--free_gpu_mem",
dest="free_gpu_mem",
action="store_true",
help="Force free gpu memory before final decoding",
)
model_group.add_argument(
"--sequential_guidance",
dest="sequential_guidance",
action="store_true",
help="Calculate guidance in serial instead of in parallel, lowering memory requirement " "at the expense of speed",
)
model_group.add_argument(
"--xformers",
action=argparse.BooleanOptionalAction,
default=True,
help="Enable/disable xformers support (default enabled if installed)",
)
model_group.add_argument(
"--always_use_cpu", dest="always_use_cpu", action="store_true", help="Force use of CPU even if GPU is available"
)
model_group.add_argument(
"--precision",
dest="precision",
type=str,
choices=PRECISION_CHOICES,
metavar="PRECISION",
help=f'Set model precision. Defaults to auto selected based on device. Options: {", ".join(PRECISION_CHOICES)}',
default="auto",
)
model_group.add_argument(
"--ckpt_convert",
action=argparse.BooleanOptionalAction,
dest="ckpt_convert",
default=True,
help="Deprecated option. Legacy ckpt files are now always converted to diffusers when loaded.",
)
model_group.add_argument(
"--internet",
action=argparse.BooleanOptionalAction,
dest="internet_available",
default=True,
help="Indicate whether internet is available for just-in-time model downloading (default: probe automatically).",
)
model_group.add_argument(
"--nsfw_checker",
"--safety_checker",
action=argparse.BooleanOptionalAction,
dest="safety_checker",
default=False,
help="Check for and blur potentially NSFW images. Use --no-nsfw_checker to disable.",
)
model_group.add_argument(
"--autoimport",
default=None,
type=str,
help="Check the indicated directory for .ckpt/.safetensors weights files at startup and import directly",
)
model_group.add_argument(
"--autoconvert",
default=None,
type=str,
help="Check the indicated directory for .ckpt/.safetensors weights files at startup and import as optimized diffuser models",
)
model_group.add_argument(
"--patchmatch",
action=argparse.BooleanOptionalAction,
default=True,
help="Load the patchmatch extension for outpainting. Use --no-patchmatch to disable.",
)
file_group.add_argument(
"--from_file",
dest="infile",
type=str,
help="If specified, load prompts from this file",
)
file_group.add_argument(
"--outdir",
"-o",
type=str,
help="Directory to save generated images and a log of prompts and seeds. Default: ROOTDIR/outputs",
default="outputs",
)
file_group.add_argument(
"--prompt_as_dir",
"-p",
action="store_true",
help="Place images in subdirectories named after the prompt.",
)
render_group.add_argument(
"--fnformat",
default="{prefix}.{seed}.png",
type=str,
help="Overwrite the filename format. You can use any argument as wildcard enclosed in curly braces. Default is {prefix}.{seed}.png",
)
render_group.add_argument("-s", "--steps", type=int, default=50, help="Number of steps")
render_group.add_argument(
"-W",
"--width",
type=int,
help="Image width, multiple of 64",
)
render_group.add_argument(
"-H",
"--height",
type=int,
help="Image height, multiple of 64",
)
render_group.add_argument(
"-C",
"--cfg_scale",
default=7.5,
type=float,
help='Classifier free guidance (CFG) scale - higher numbers cause generator to "try" harder.',
)
render_group.add_argument(
"--sampler",
"-A",
"-m",
dest="sampler_name",
type=str,
choices=SAMPLER_CHOICES,
metavar="SAMPLER_NAME",
help=f'Set the default sampler. Supported samplers: {", ".join(SAMPLER_CHOICES)}',
default="k_lms",
)
render_group.add_argument(
"--log_tokenization", "-t", action="store_true", help="shows how the prompt is split into tokens"
)
render_group.add_argument(
"-f",
"--strength",
type=float,
help="img2img strength for noising/unnoising. 0.0 preserves image exactly, 1.0 replaces it completely",
)
render_group.add_argument(
"-T",
"-fit",
"--fit",
action=argparse.BooleanOptionalAction,
help="If specified, will resize the input image to fit within the dimensions of width x height (512x512 default)",
)
render_group.add_argument("--grid", "-g", action=argparse.BooleanOptionalAction, help="generate a grid")
render_group.add_argument(
"--embedding_directory",
"--embedding_path",
dest="embedding_path",
default="embeddings",
type=str,
help="Path to a directory containing .bin and/or .pt files, or a single .bin/.pt file. You may use subdirectories. (default is ROOTDIR/embeddings)",
)
render_group.add_argument(
"--lora_directory",
dest="lora_path",
default="loras",
type=str,
help="Path to a directory containing LoRA files; subdirectories are not supported. (default is ROOTDIR/loras)",
)
render_group.add_argument(
"--embeddings",
action=argparse.BooleanOptionalAction,
default=True,
help="Enable embedding directory (default). Use --no-embeddings to disable.",
)
render_group.add_argument("--enable_image_debugging", action="store_true", help="Generates debugging image to display")
render_group.add_argument(
"--karras_max",
type=int,
default=None,
help="control the point at which the K* samplers will shift from using the Karras noise schedule (good for low step counts) to the LatentDiffusion noise schedule (good for high step counts). Set to 0 to use LatentDiffusion for all step values, and to a high value (e.g. 1000) to use Karras for all step values. [29].",
)
# Restoration related args
postprocessing_group.add_argument(
"--no_restore",
dest="restore",
action="store_false",
help="Disable face restoration with GFPGAN or codeformer",
)
postprocessing_group.add_argument(
"--no_upscale",
dest="esrgan",
action="store_false",
help="Disable upscaling with ESRGAN",
)
postprocessing_group.add_argument(
"--esrgan_bg_tile",
type=int,
default=400,
help="Tile size for background sampler, 0 for no tile during testing. Default: 400.",
)
postprocessing_group.add_argument(
"--esrgan_denoise_str",
type=float,
default=0.75,
help="esrgan denoise str. 0 is no denoise, 1 is max denoise. Default: 0.75",
)
postprocessing_group.add_argument(
"--gfpgan_model_path",
type=str,
default="./models/gfpgan/GFPGANv1.4.pth",
help="Indicates the path to the GFPGAN model",
)
web_server_group.add_argument(
"--web",
dest="web",
action="store_true",
help="Start in web server mode.",
)
web_server_group.add_argument(
"--web_develop",
dest="web_develop",
action="store_true",
help="Start in web server development mode.",
)
web_server_group.add_argument(
"--web_verbose",
action="store_true",
help="Enables verbose logging",
)
web_server_group.add_argument(
"--cors",
nargs="*",
type=str,
help="Additional allowed origins, comma-separated",
)
web_server_group.add_argument(
"--host",
type=str,
default="127.0.0.1",
help="Web server: Host or IP to listen on. Set to 0.0.0.0 to accept traffic from other devices on your network.",
)
web_server_group.add_argument("--port", type=int, default="9090", help="Web server: Port to listen on")
web_server_group.add_argument(
"--certfile",
type=str,
default=None,
help="Web server: Path to certificate file to use for SSL. Use together with --keyfile",
)
web_server_group.add_argument(
"--keyfile",
type=str,
default=None,
help="Web server: Path to private key file to use for SSL. Use together with --certfile",
)
web_server_group.add_argument(
"--gui",
dest="gui",
action="store_true",
help="Start InvokeAI GUI",
)


@ -1,12 +1,4 @@
# Copyright (c) 2023 Lincoln D. Stein and the InvokeAI Development Team
"""
Fast hashing of diffusers and checkpoint-style models.
Usage:
from invokeai.backend.model_managre.model_hash import FastModelHash
>>> FastModelHash.hash('/home/models/stable-diffusion-v1.5')
'a8e693a126ea5b831c96064dc569956f'
"""
import hashlib
import os
@ -15,9 +7,9 @@ from typing import Callable, Literal, Optional, Union
from blake3 import blake3
MODEL_FILE_EXTENSIONS = (".ckpt", ".safetensors", ".bin", ".pt", ".pth")
from invokeai.app.util.misc import uuid_string
ALGORITHM = Literal[
HASHING_ALGORITHMS = Literal[
"md5",
"sha1",
"sha224",
@ -33,12 +25,15 @@ ALGORITHM = Literal[
"shake_128",
"shake_256",
"blake3",
"blake3_single",
"random",
]
MODEL_FILE_EXTENSIONS = (".ckpt", ".safetensors", ".bin", ".pt", ".pth")
class ModelHash:
"""
Creates a hash of a model using a specified algorithm.
Creates a hash of a model using a specified algorithm. The hash is prefixed by the algorithm used.
Args:
algorithm: Hashing algorithm to use. Defaults to BLAKE3.
@ -53,20 +48,29 @@ class ModelHash:
The final hash is computed by hashing the hashes of all model files in the directory using BLAKE3, ensuring
that directory hashes are never weaker than the file hashes.
A convenience algorithm choice of "random" is also available, which returns a random string. This is not a hash.
Usage:
```py
# BLAKE3 hash
ModelHash().hash("path/to/some/model.safetensors")
ModelHash().hash("path/to/some/model.safetensors") # "blake3:ce3f0c5f3c05d119f4a5dcaf209b50d3149046a0d3a9adee9fed4c83cad6b4d0"
# MD5
ModelHash("md5").hash("path/to/model/dir/")
ModelHash("md5").hash("path/to/model/dir/") # "md5:a0cd925fc063f98dbf029eee315060c3"
```
"""
def __init__(self, algorithm: ALGORITHM = "blake3", file_filter: Optional[Callable[[str], bool]] = None) -> None:
def __init__(
self, algorithm: HASHING_ALGORITHMS = "blake3", file_filter: Optional[Callable[[str], bool]] = None
) -> None:
self.algorithm: HASHING_ALGORITHMS = algorithm
if algorithm == "blake3":
self._hash_file = self._blake3
elif algorithm == "blake3_single":
self._hash_file = self._blake3_single
elif algorithm in hashlib.algorithms_available:
self._hash_file = self._get_hashlib(algorithm)
elif algorithm == "random":
self._hash_file = self._random
else:
raise ValueError(f"Algorithm {algorithm} not available")
@ -87,10 +91,12 @@ class ModelHash:
"""
model_path = Path(model_path)
# blake3_single is a single-threaded version of blake3, prefix should still be "blake3:"
prefix = self._get_prefix(self.algorithm)
if model_path.is_file():
return self._hash_file(model_path)
return prefix + self._hash_file(model_path)
elif model_path.is_dir():
return self._hash_dir(model_path)
return prefix + self._hash_dir(model_path)
else:
raise OSError(f"Not a valid file or directory: {model_path}")
@ -114,6 +120,7 @@ class ModelHash:
composite_hasher = blake3()
for h in component_hashes:
composite_hasher.update(h.encode("utf-8"))
return composite_hasher.hexdigest()
@staticmethod
@ -137,7 +144,7 @@ class ModelHash:
@staticmethod
def _blake3(file_path: Path) -> str:
"""Hashes a file using BLAKE3
"""Hashes a file using BLAKE3, using parallelized and memory-mapped I/O to avoid reading the entire file into memory.
Args:
file_path: Path to the file to hash
@ -150,7 +157,21 @@ class ModelHash:
return file_hasher.hexdigest()
@staticmethod
def _get_hashlib(algorithm: ALGORITHM) -> Callable[[Path], str]:
def _blake3_single(file_path: Path) -> str:
"""Hashes a file using BLAKE3, without parallelism. Suitable for spinning hard drives.
Args:
file_path: Path to the file to hash
Returns:
Hexdigest of the hash of the file
"""
file_hasher = blake3()
file_hasher.update_mmap(file_path)
return file_hasher.hexdigest()
@staticmethod
def _get_hashlib(algorithm: HASHING_ALGORITHMS) -> Callable[[Path], str]:
"""Factory function that returns a function to hash a file with the given algorithm.
Args:
@ -172,6 +193,13 @@ class ModelHash:
return hashlib_hasher
@staticmethod
def _random(_file_path: Path) -> str:
"""Returns a random string. This is not a hash.
The string is a UUID, hashed with BLAKE3 to ensure that it is unique."""
return blake3(uuid_string().encode()).hexdigest()
@staticmethod
def _default_file_filter(file_path: str) -> bool:
"""A default file filter that only includes files with the following extensions: .ckpt, .safetensors, .bin, .pt, .pth
@ -183,3 +211,9 @@ class ModelHash:
True if the file matches the given extensions, otherwise False
"""
return file_path.endswith(MODEL_FILE_EXTENSIONS)
@staticmethod
def _get_prefix(algorithm: HASHING_ALGORITHMS) -> str:
"""Return the prefix for the given algorithm, e.g. \"blake3:\" or \"md5:\"."""
# blake3_single is a single-threaded version of blake3, prefix should still be "blake3:"
return "blake3:" if algorithm == "blake3_single" else f"{algorithm}:"


@ -131,13 +131,20 @@ class ModelSourceType(str, Enum):
HFRepoID = "hf_repo_id"
DEFAULTS_PRECISION = Literal["fp16", "fp32"]
class MainModelDefaultSettings(BaseModel):
vae: str | None
vae_precision: str | None
scheduler: SCHEDULER_NAME_VALUES | None
steps: int | None
cfg_scale: float | None
cfg_rescale_multiplier: float | None
vae: str | None = Field(default=None, description="Default VAE for this model (model key)")
vae_precision: DEFAULTS_PRECISION | None = Field(default=None, description="Default VAE precision for this model")
scheduler: SCHEDULER_NAME_VALUES | None = Field(default=None, description="Default scheduler for this model")
steps: int | None = Field(default=None, gt=0, description="Default number of steps for this model")
cfg_scale: float | None = Field(default=None, ge=1, description="Default CFG Scale for this model")
cfg_rescale_multiplier: float | None = Field(
default=None, ge=0, lt=1, description="Default CFG Rescale Multiplier for this model"
)
width: int | None = Field(default=None, multiple_of=8, ge=64, description="Default width for this model")
height: int | None = Field(default=None, multiple_of=8, ge=64, description="Default height for this model")
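# A sketch of the stricter validation (the values below are illustrative only):
#   MainModelDefaultSettings(steps=30, cfg_scale=7.5, width=512, height=768)  # ok
#   MainModelDefaultSettings(width=500)  # raises ValidationError: width must be a multiple of 8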
class ControlAdapterDefaultSettings(BaseModel):
@ -324,7 +331,7 @@ class IPAdapterConfig(ModelConfigBase):
return Tag(f"{ModelType.IPAdapter.value}.{ModelFormat.InvokeAI.value}")
class CLIPVisionDiffusersConfig(ModelConfigBase):
class CLIPVisionDiffusersConfig(DiffusersConfigBase):
"""Model config for CLIPVision."""
type: Literal[ModelType.CLIPVision] = ModelType.CLIPVision
@ -335,7 +342,7 @@ class CLIPVisionDiffusersConfig(ModelConfigBase):
return Tag(f"{ModelType.CLIPVision.value}.{ModelFormat.Diffusers.value}")
class T2IAdapterConfig(ModelConfigBase, ControlAdapterConfigBase):
class T2IAdapterConfig(DiffusersConfigBase, ControlAdapterConfigBase):
"""Model config for T2I."""
type: Literal[ModelType.T2IAdapter] = ModelType.T2IAdapter

File diff suppressed because it is too large


@ -3,9 +3,6 @@
from pathlib import Path
import torch
from safetensors.torch import load_file as safetensors_load_file
from invokeai.backend.model_manager import (
AnyModelConfig,
BaseModelType,
@ -37,27 +34,23 @@ class ControlNetLoader(GenericDiffusersLoader):
return True
def _convert_model(self, config: AnyModelConfig, model_path: Path, output_path: Path) -> Path:
if config.base not in {BaseModelType.StableDiffusion1, BaseModelType.StableDiffusion2}:
raise Exception(f"ControlNet conversion not supported for model type: {config.base}")
else:
assert isinstance(config, CheckpointConfigBase)
config_file = config.config_path
if model_path.suffix == ".safetensors":
checkpoint = safetensors_load_file(model_path, device="cpu")
else:
checkpoint = torch.load(model_path, map_location="cpu")
# sometimes weights are hidden under "state_dict", and sometimes not
if "state_dict" in checkpoint:
checkpoint = checkpoint["state_dict"]
convert_controlnet_to_diffusers(
model_path,
output_path,
original_config_file=self._app_config.root_path / config_file,
image_size=512,
scan_needed=True,
from_safetensors=model_path.suffix == ".safetensors",
assert isinstance(config, CheckpointConfigBase)
image_size = (
512
if config.base == BaseModelType.StableDiffusion1
else 768
if config.base == BaseModelType.StableDiffusion2
else 1024
)
self._logger.info(f"Converting {model_path} to diffusers format")
with open(config.config_path, "r") as config_stream:
convert_controlnet_to_diffusers(
model_path,
output_path,
original_config_file=config_stream,
image_size=image_size,
precision=self._torch_dtype,
from_safetensors=model_path.suffix == ".safetensors",
)
return output_path


@ -36,7 +36,15 @@ class GenericDiffusersLoader(ModelLoader):
if submodel_type is not None:
raise Exception(f"There are no submodels in models of type {model_class}")
variant = model_variant.value if model_variant else None
result: AnyModel = model_class.from_pretrained(model_path, torch_dtype=self._torch_dtype, variant=variant) # type: ignore
try:
result: AnyModel = model_class.from_pretrained(model_path, torch_dtype=self._torch_dtype, variant=variant)
except OSError as e:
if variant and "no file named" in str(
e
): # try without the variant, just in case user's preferences changed
result = model_class.from_pretrained(model_path, torch_dtype=self._torch_dtype)
else:
raise e
return result
# TO DO: Add exception handling
@ -63,7 +71,7 @@ class GenericDiffusersLoader(ModelLoader):
assert class_name is not None
result = self._hf_definition_to_type(module="transformers", class_name=class_name[0])
if not class_name:
raise InvalidModelConfigException("Unable to decifer Load Class based on given config.json")
raise InvalidModelConfigException("Unable to decipher Load Class based on given config.json")
except KeyError as e:
raise InvalidModelConfigException("An expected config.json file is missing from this model.") from e
assert result is not None


@ -4,9 +4,6 @@
from pathlib import Path
from typing import Optional
from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion import StableDiffusionPipeline
from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_inpaint import StableDiffusionInpaintPipeline
from invokeai.backend.model_manager import (
AnyModel,
AnyModelConfig,
@ -14,7 +11,7 @@ from invokeai.backend.model_manager import (
ModelFormat,
ModelRepoVariant,
ModelType,
ModelVariantType,
SchedulerPredictionType,
SubModelType,
)
from invokeai.backend.model_manager.config import CheckpointConfigBase, MainCheckpointConfig
@ -47,11 +44,20 @@ class StableDiffusionDiffusersModel(GenericDiffusersLoader):
load_class = self.get_hf_load_class(model_path, submodel_type)
variant = model_variant.value if model_variant else None
model_path = model_path / submodel_type.value
result: AnyModel = load_class.from_pretrained(
model_path,
torch_dtype=self._torch_dtype,
variant=variant,
) # type: ignore
try:
result: AnyModel = load_class.from_pretrained(
model_path,
torch_dtype=self._torch_dtype,
variant=variant,
)
except OSError as e:
if variant and "no file named" in str(
e
): # try without the variant, just in case user's preferences changed
result = load_class.from_pretrained(model_path, torch_dtype=self._torch_dtype)
else:
raise e
return result
def _needs_conversion(self, config: AnyModelConfig, model_path: Path, dest_path: Path) -> bool:
@ -68,27 +74,30 @@ class StableDiffusionDiffusersModel(GenericDiffusersLoader):
def _convert_model(self, config: AnyModelConfig, model_path: Path, output_path: Path) -> Path:
assert isinstance(config, MainCheckpointConfig)
variant = config.variant
base = config.base
pipeline_class = (
StableDiffusionInpaintPipeline if variant == ModelVariantType.Inpaint else StableDiffusionPipeline
)
config_file = config.config_path
prediction_type = config.prediction_type.value
upcast_attention = config.upcast_attention
image_size = (
1024
if base == BaseModelType.StableDiffusionXL
else 768
if config.prediction_type == SchedulerPredictionType.VPrediction and base == BaseModelType.StableDiffusion2
else 512
)
self._logger.info(f"Converting {model_path} to diffusers format")
convert_ckpt_to_diffusers(
model_path,
output_path,
model_type=self.model_base_to_model_type[base],
model_version=base,
model_variant=variant,
original_config_file=self._app_config.root_path / config_file,
original_config_file=config.config_path,
extract_ema=True,
scan_needed=True,
pipeline_class=pipeline_class,
from_safetensors=model_path.suffix == ".safetensors",
precision=self._torch_dtype,
prediction_type=prediction_type,
image_size=image_size,
upcast_attention=upcast_attention,
load_safety_checker=False,
)
return output_path


@ -57,12 +57,12 @@ class VAELoader(GenericDiffusersLoader):
ckpt_config = OmegaConf.load(self._app_config.root_path / config_file)
assert isinstance(ckpt_config, DictConfig)
self._logger.info(f"Converting {model_path} to diffusers format")
vae_model = convert_ldm_vae_to_diffusers(
checkpoint=checkpoint,
vae_config=ckpt_config,
image_size=512,
precision=self._torch_dtype,
)
vae_model.to(self._torch_dtype) # set precision appropriately
vae_model.save_pretrained(output_path, safe_serialization=True)
return output_path


@ -118,7 +118,7 @@ class ModelMerger(object):
config = self._installer.app_config
store = self._installer.record_store
base_models: Set[BaseModelType] = set()
variant = None if self._installer.app_config.full_precision else "fp16"
variant = None if self._installer.app_config.precision == "float32" else "fp16"
assert (
len(model_keys) <= 2 or interp == MergeInterpolationMethod.AddDifference


@ -90,8 +90,35 @@ class HuggingFaceMetadataFetch(ModelMetadataFetchBase):
)
)
# diffusers models have a `model_index.json` or `config.json` file
is_diffusers = any(str(f.url).endswith(("model_index.json", "config.json")) for f in files)
# These URLs will be exposed to the user - I think these are the only file types we fully support
ckpt_urls = (
None
if is_diffusers
else [
f.url
for f in files
if str(f.url).endswith(
(
".safetensors",
".bin",
".pth",
".pt",
".ckpt",
)
)
]
)
return HuggingFaceMetadata(
id=model_info.id, name=name, files=files, api_response=json.dumps(model_info.__dict__, default=str)
id=model_info.id,
name=name,
files=files,
api_response=json.dumps(model_info.__dict__, default=str),
is_diffusers=is_diffusers,
ckpt_urls=ckpt_urls,
)
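# Roughly: a repo containing model_index.json or config.json is reported with
# is_diffusers=True and ckpt_urls=None, while a checkpoint-only repo gets
# is_diffusers=False and ckpt_urls listing only its weight files (e.g. a single
# "model.safetensors" URL; the file name here is illustrative).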
def from_url(self, url: AnyHttpUrl) -> AnyModelRepoMetadata:


@ -84,6 +84,10 @@ class HuggingFaceMetadata(ModelMetadataWithFiles):
type: Literal["huggingface"] = "huggingface"
id: str = Field(description="The HF model id")
api_response: Optional[str] = Field(description="Response from the HF API as stringified JSON", default=None)
is_diffusers: bool = Field(description="Whether the metadata is for a Diffusers format model", default=False)
ckpt_urls: Optional[List[AnyHttpUrl]] = Field(
description="URLs for all checkpoint format models in the metadata", default=None
)
def download_urls(
self,


@ -9,6 +9,7 @@ from picklescan.scanner import scan_file_path
import invokeai.backend.util.logging as logger
from invokeai.app.util.misc import uuid_string
from invokeai.backend.model_hash.model_hash import HASHING_ALGORITHMS, ModelHash
from invokeai.backend.util.util import SilenceWarnings
from .config import (
@ -16,6 +17,7 @@ from .config import (
BaseModelType,
ControlAdapterDefaultSettings,
InvalidModelConfigException,
MainModelDefaultSettings,
ModelConfigFactory,
ModelFormat,
ModelRepoVariant,
@ -24,7 +26,6 @@ from .config import (
ModelVariantType,
SchedulerPredictionType,
)
from .hash import ModelHash
from .util.model_util import lora_token_vector_length, read_checkpoint_meta
CkptType = Dict[str, Any]
@ -113,9 +114,7 @@ class ModelProbe(object):
@classmethod
def probe(
cls,
model_path: Path,
fields: Optional[Dict[str, Any]] = None,
cls, model_path: Path, fields: Optional[Dict[str, Any]] = None, hash_algo: HASHING_ALGORITHMS = "blake3"
) -> AnyModelConfig:
"""
Probe the model at model_path and return its configuration record.
@ -133,11 +132,12 @@ class ModelProbe(object):
format_type = ModelFormat.Diffusers if model_path.is_dir() else ModelFormat.Checkpoint
model_info = None
model_type = None
if format_type is ModelFormat.Diffusers:
model_type = cls.get_model_type_from_folder(model_path)
else:
model_type = cls.get_model_type_from_checkpoint(model_path)
model_type = ModelType(fields["type"]) if "type" in fields and fields["type"] else None
if not model_type:
if format_type is ModelFormat.Diffusers:
model_type = cls.get_model_type_from_folder(model_path)
else:
model_type = cls.get_model_type_from_checkpoint(model_path)
format_type = ModelFormat.ONNX if model_type == ModelType.ONNX else format_type
probe_class = cls.PROBES[format_type].get(model_type)
@ -157,16 +157,18 @@ class ModelProbe(object):
fields["image_encoder_model_id"] = fields.get("image_encoder_model_id") or probe.get_image_encoder_model_id()
fields["name"] = fields.get("name") or cls.get_model_name(model_path)
fields["description"] = (
fields.get("description") or f"{fields['base'].value} {fields['type'].value} model {fields['name']}"
fields.get("description") or f"{fields['base'].value} {model_type.value} model {fields['name']}"
)
fields["format"] = fields.get("format") or probe.get_format()
fields["hash"] = fields.get("hash") or ModelHash().hash(model_path)
fields["hash"] = fields.get("hash") or ModelHash(algorithm=hash_algo).hash(model_path)
fields["default_settings"] = (
fields.get("default_settings") or probe.get_default_settings(fields["name"])
if isinstance(probe, ControlAdapterProbe)
else None
)
fields["default_settings"] = fields.get("default_settings")
if not fields["default_settings"]:
if fields["type"] in {ModelType.ControlNet, ModelType.T2IAdapter}:
fields["default_settings"] = get_default_settings_controlnet_t2i_adapter(fields["name"])
elif fields["type"] is ModelType.Main:
fields["default_settings"] = get_default_settings_main(fields["base"])
if format_type == ModelFormat.Diffusers and isinstance(probe, FolderProbeBase):
fields["repo_variant"] = fields.get("repo_variant") or probe.get_repo_variant()
@ -176,13 +178,14 @@ class ModelProbe(object):
fields["type"] in [ModelType.Main, ModelType.ControlNet, ModelType.VAE]
and fields["format"] is ModelFormat.Checkpoint
):
fields["config_path"] = cls._get_checkpoint_config_path(
ckpt_config_path = cls._get_checkpoint_config_path(
model_path,
model_type=fields["type"],
base_type=fields["base"],
variant_type=fields["variant"],
prediction_type=fields["prediction_type"],
).as_posix()
)
fields["config_path"] = str(ckpt_config_path)
# additional fields needed for main non-checkpoint models
elif fields["type"] == ModelType.Main and fields["format"] in [
@ -296,29 +299,29 @@ class ModelProbe(object):
config_file = LEGACY_CONFIGS[base_type][variant_type]
if isinstance(config_file, dict): # need another tier for sd-2.x models
config_file = config_file[prediction_type]
config_file = f"stable-diffusion/{config_file}"
elif model_type is ModelType.ControlNet:
config_file = (
"../controlnet/cldm_v15.yaml"
"controlnet/cldm_v15.yaml"
if base_type is BaseModelType.StableDiffusion1
else "../controlnet/cldm_v21.yaml"
else "controlnet/cldm_v21.yaml"
)
elif model_type is ModelType.VAE:
config_file = (
"../stable-diffusion/v1-inference.yaml"
"stable-diffusion/v1-inference.yaml"
if base_type is BaseModelType.StableDiffusion1
else "../stable-diffusion/v2-inference.yaml"
else "stable-diffusion/v2-inference.yaml"
)
else:
raise InvalidModelConfigException(
f"{model_path}: Unrecognized combination of model_type={model_type}, base_type={base_type}"
)
assert isinstance(config_file, str)
return Path(config_file)
@classmethod
def _scan_and_load_checkpoint(cls, model_path: Path) -> CkptType:
with SilenceWarnings():
if model_path.suffix.endswith((".ckpt", ".pt", ".bin")):
if model_path.suffix.endswith((".ckpt", ".pt", ".pth", ".bin")):
cls._scan_model(model_path.name, model_path)
model = torch.load(model_path)
assert isinstance(model, dict)
@ -338,36 +341,41 @@ class ModelProbe(object):
raise Exception("The model {model_name} is potentially infected by malware. Aborting import.")
class ControlAdapterProbe(ProbeBase):
"""Adds `get_default_settings` for ControlNet and T2IAdapter probes"""
# Probing utilities
MODEL_NAME_TO_PREPROCESSOR = {
"canny": "canny_image_processor",
"mlsd": "mlsd_image_processor",
"depth": "depth_anything_image_processor",
"bae": "normalbae_image_processor",
"normal": "normalbae_image_processor",
"sketch": "pidi_image_processor",
"scribble": "lineart_image_processor",
"lineart": "lineart_image_processor",
"lineart_anime": "lineart_anime_image_processor",
"softedge": "hed_image_processor",
"shuffle": "content_shuffle_image_processor",
"pose": "dw_openpose_image_processor",
"mediapipe": "mediapipe_face_processor",
"pidi": "pidi_image_processor",
"zoe": "zoe_depth_image_processor",
"color": "color_map_image_processor",
}
# TODO(psyche): It would be nice to get these from the invocations, but that creates circular dependencies.
# "canny": CannyImageProcessorInvocation.get_type()
MODEL_NAME_TO_PREPROCESSOR = {
"canny": "canny_image_processor",
"mlsd": "mlsd_image_processor",
"depth": "depth_anything_image_processor",
"bae": "normalbae_image_processor",
"normal": "normalbae_image_processor",
"sketch": "pidi_image_processor",
"scribble": "lineart_image_processor",
"lineart": "lineart_image_processor",
"lineart_anime": "lineart_anime_image_processor",
"softedge": "hed_image_processor",
"shuffle": "content_shuffle_image_processor",
"pose": "dw_openpose_image_processor",
"mediapipe": "mediapipe_face_processor",
"pidi": "pidi_image_processor",
"zoe": "zoe_depth_image_processor",
"color": "color_map_image_processor",
}
@classmethod
def get_default_settings(cls, model_name: str) -> Optional[ControlAdapterDefaultSettings]:
for k, v in cls.MODEL_NAME_TO_PREPROCESSOR.items():
if k in model_name:
return ControlAdapterDefaultSettings(preprocessor=v)
return None
def get_default_settings_controlnet_t2i_adapter(model_name: str) -> Optional[ControlAdapterDefaultSettings]:
for k, v in MODEL_NAME_TO_PREPROCESSOR.items():
if k in model_name:
return ControlAdapterDefaultSettings(preprocessor=v)
return None
def get_default_settings_main(model_base: BaseModelType) -> Optional[MainModelDefaultSettings]:
if model_base is BaseModelType.StableDiffusion1 or model_base is BaseModelType.StableDiffusion2:
return MainModelDefaultSettings(width=512, height=512)
elif model_base is BaseModelType.StableDiffusionXL:
return MainModelDefaultSettings(width=1024, height=1024)
# We don't provide defaults for BaseModelType.StableDiffusionXLRefiner, as they are not standalone models.
return None
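# For example, get_default_settings_main(BaseModelType.StableDiffusionXL) returns
# MainModelDefaultSettings(width=1024, height=1024), and any ControlNet/T2I-Adapter
# whose name contains "canny" gets ControlAdapterDefaultSettings(preprocessor="canny_image_processor").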
# ##################################################3
@ -493,7 +501,7 @@ class TextualInversionCheckpointProbe(CheckpointProbeBase):
raise InvalidModelConfigException(f"{self.model_path}: Could not determine base type")
class ControlNetCheckpointProbe(CheckpointProbeBase, ControlAdapterProbe):
class ControlNetCheckpointProbe(CheckpointProbeBase):
"""Class for probing controlnets."""
def get_base_type(self) -> BaseModelType:
@ -521,7 +529,7 @@ class CLIPVisionCheckpointProbe(CheckpointProbeBase):
raise NotImplementedError()
class T2IAdapterCheckpointProbe(CheckpointProbeBase, ControlAdapterProbe):
class T2IAdapterCheckpointProbe(CheckpointProbeBase):
def get_base_type(self) -> BaseModelType:
raise NotImplementedError()
@@ -659,7 +667,7 @@ class ONNXFolderProbe(PipelineFolderProbe):
return ModelVariantType.Normal
class ControlNetFolderProbe(FolderProbeBase, ControlAdapterProbe):
class ControlNetFolderProbe(FolderProbeBase):
def get_base_type(self) -> BaseModelType:
config_file = self.model_path / "config.json"
if not config_file.exists():
@@ -733,7 +741,7 @@ class CLIPVisionFolderProbe(FolderProbeBase):
return BaseModelType.Any
class T2IAdapterFolderProbe(FolderProbeBase, ControlAdapterProbe):
class T2IAdapterFolderProbe(FolderProbeBase):
def get_base_type(self) -> BaseModelType:
config_file = self.model_path / "config.json"
if not config_file.exists():

View File

@@ -0,0 +1,392 @@
from dataclasses import dataclass
from typing import Optional
from invokeai.backend.model_manager.config import BaseModelType, ModelType
@dataclass
class StarterModel:
description: str
source: str
name: str
base: BaseModelType
type: ModelType
# Optional list of model source dependencies that need to be installed before this model can be used
dependencies: Optional[list[str]] = None
is_installed: bool = False
# List of starter models, displayed on the frontend.
# The order/sort of this list is not changed by the frontend - set it how you want it here.
STARTER_MODELS: list[StarterModel] = [
# region: Main
StarterModel(
name="SD 1.5 (base)",
base=BaseModelType.StableDiffusion1,
source="runwayml/stable-diffusion-v1-5",
description="Stable Diffusion version 1.5 diffusers model (4.27 GB)",
type=ModelType.Main,
),
StarterModel(
name="SD 1.5 (inpainting)",
base=BaseModelType.StableDiffusion1,
source="runwayml/stable-diffusion-inpainting",
description="RunwayML SD 1.5 model optimized for inpainting, diffusers version (4.27 GB)",
type=ModelType.Main,
),
StarterModel(
name="Analog Diffusion",
base=BaseModelType.StableDiffusion1,
source="wavymulder/Analog-Diffusion",
description="An SD-1.5 model trained on diverse analog photographs (2.13 GB)",
type=ModelType.Main,
),
StarterModel(
name="Deliberate v5",
base=BaseModelType.StableDiffusion1,
source="https://huggingface.co/XpucT/Deliberate/resolve/main/Deliberate_v5.safetensors",
description="Versatile model that produces detailed images up to 768px (4.27 GB)",
type=ModelType.Main,
),
StarterModel(
name="Dungeons and Diffusion",
base=BaseModelType.StableDiffusion1,
source="0xJustin/Dungeons-and-Diffusion",
description="Dungeons & Dragons characters (2.13 GB)",
type=ModelType.Main,
),
StarterModel(
name="dreamlike photoreal v2",
base=BaseModelType.StableDiffusion1,
source="dreamlike-art/dreamlike-photoreal-2.0",
description="A photorealistic model trained on 768 pixel images based on SD 1.5 (2.13 GB)",
type=ModelType.Main,
),
StarterModel(
name="Inkpunk Diffusion",
base=BaseModelType.StableDiffusion1,
source="Envvi/Inkpunk-Diffusion",
description='Stylized illustrations inspired by Gorillaz, FLCL and Shinkawa; prompt with "nvinkpunk" (4.27 GB)',
type=ModelType.Main,
),
StarterModel(
name="OpenJourney",
base=BaseModelType.StableDiffusion1,
source="prompthero/openjourney",
description='An SD 1.5 model fine-tuned on Midjourney; prompt with "mdjrny-v4 style" (2.13 GB)',
type=ModelType.Main,
),
StarterModel(
name="seek.art MEGA",
base=BaseModelType.StableDiffusion1,
source="coreco/seek.art_MEGA",
description='A general use SD-1.5 "anything" model that supports multiple styles (2.1 GB)',
type=ModelType.Main,
),
StarterModel(
name="TrinArt v2",
base=BaseModelType.StableDiffusion1,
source="naclbit/trinart_stable_diffusion_v2",
description="An SD-1.5 model finetuned with ~40K assorted high resolution manga/anime-style images (2.13 GB)",
type=ModelType.Main,
),
StarterModel(
name="SD 2.1 (base)",
base=BaseModelType.StableDiffusion2,
source="stabilityai/stable-diffusion-2-1",
description="Stable Diffusion version 2.1 diffusers model, trained on 768 pixel images (5.21 GB)",
type=ModelType.Main,
),
StarterModel(
name="SD 2.0 (inpainting)",
base=BaseModelType.StableDiffusion2,
source="stabilityai/stable-diffusion-2-inpainting",
description="Stable Diffusion version 2.0 inpainting model (5.21 GB)",
type=ModelType.Main,
),
StarterModel(
name="SDXL (base)",
base=BaseModelType.StableDiffusionXL,
source="stabilityai/stable-diffusion-xl-base-1.0",
description="Stable Diffusion XL base model (12 GB)",
type=ModelType.Main,
),
StarterModel(
name="SDXL Refiner",
base=BaseModelType.StableDiffusionXLRefiner,
source="stabilityai/stable-diffusion-xl-refiner-1.0",
description="Stable Diffusion XL refiner model (12 GB)",
type=ModelType.Main,
),
# endregion
# region VAE
StarterModel(
name="sdxl-vae-fp16-fix",
base=BaseModelType.StableDiffusionXL,
source="madebyollin/sdxl-vae-fp16-fix",
description="Version of the SDXL-1.0 VAE that works in half precision mode",
type=ModelType.VAE,
),
# endregion
# region LoRA
StarterModel(
name="FlatColor",
base=BaseModelType.StableDiffusion1,
source="https://civitai.com/models/6433/loraflatcolor",
description="A LoRA that generates scenery using solid blocks of color",
type=ModelType.LoRA,
),
StarterModel(
name="Ink scenery",
base=BaseModelType.StableDiffusion1,
source="https://civitai.com/api/download/models/83390",
description="Generate india ink-like landscapes",
type=ModelType.LoRA,
),
# endregion
# region IP Adapter
StarterModel(
name="IP Adapter",
base=BaseModelType.StableDiffusion1,
source="InvokeAI/ip_adapter_sd15",
description="IP-Adapter for SD 1.5 models",
type=ModelType.IPAdapter,
dependencies=["InvokeAI/ip_adapter_sd_image_encoder"],
),
StarterModel(
name="IP Adapter Plus",
base=BaseModelType.StableDiffusion1,
source="InvokeAI/ip_adapter_plus_sd15",
description="Refined IP-Adapter for SD 1.5 models",
type=ModelType.IPAdapter,
dependencies=["InvokeAI/ip_adapter_sd_image_encoder"],
),
StarterModel(
name="IP Adapter Plus Face",
base=BaseModelType.StableDiffusion1,
source="InvokeAI/ip_adapter_plus_face_sd15",
description="Refined IP-Adapter for SD 1.5 models, adapted for faces",
type=ModelType.IPAdapter,
dependencies=["InvokeAI/ip_adapter_sd_image_encoder"],
),
StarterModel(
name="IP Adapter SDXL",
base=BaseModelType.StableDiffusionXL,
source="InvokeAI/ip_adapter_sdxl",
description="IP-Adapter for SDXL models",
type=ModelType.IPAdapter,
dependencies=["InvokeAI/ip_adapter_sdxl_image_encoder"],
),
# endregion
# region ControlNet
StarterModel(
name="QRCode Monster",
base=BaseModelType.StableDiffusion1,
source="monster-labs/control_v1p_sd15_qrcode_monster",
description="Controlnet model that generates scannable creative QR codes",
type=ModelType.ControlNet,
),
StarterModel(
name="canny",
base=BaseModelType.StableDiffusion1,
source="lllyasviel/control_v11p_sd15_canny",
description="Controlnet weights trained on sd-1.5 with canny conditioning.",
type=ModelType.ControlNet,
),
StarterModel(
name="inpaint",
base=BaseModelType.StableDiffusion1,
source="lllyasviel/control_v11p_sd15_inpaint",
description="Controlnet weights trained on sd-1.5 with canny conditioning, inpaint version",
type=ModelType.ControlNet,
),
StarterModel(
name="mlsd",
base=BaseModelType.StableDiffusion1,
source="lllyasviel/control_v11p_sd15_mlsd",
description="Controlnet weights trained on sd-1.5 with canny conditioning, MLSD version",
type=ModelType.ControlNet,
),
StarterModel(
name="depth",
base=BaseModelType.StableDiffusion1,
source="lllyasviel/control_v11f1p_sd15_depth",
description="Controlnet weights trained on sd-1.5 with depth conditioning",
type=ModelType.ControlNet,
),
StarterModel(
name="normal_bae",
base=BaseModelType.StableDiffusion1,
source="lllyasviel/control_v11p_sd15_normalbae",
description="Controlnet weights trained on sd-1.5 with normalbae image conditioning",
type=ModelType.ControlNet,
),
StarterModel(
name="seg",
base=BaseModelType.StableDiffusion1,
source="lllyasviel/control_v11p_sd15_seg",
description="Controlnet weights trained on sd-1.5 with seg image conditioning",
type=ModelType.ControlNet,
),
StarterModel(
name="lineart",
base=BaseModelType.StableDiffusion1,
source="lllyasviel/control_v11p_sd15_lineart",
description="Controlnet weights trained on sd-1.5 with lineart image conditioning",
type=ModelType.ControlNet,
),
StarterModel(
name="lineart_anime",
base=BaseModelType.StableDiffusion1,
source="lllyasviel/control_v11p_sd15s2_lineart_anime",
description="Controlnet weights trained on sd-1.5 with anime image conditioning",
type=ModelType.ControlNet,
),
StarterModel(
name="openpose",
base=BaseModelType.StableDiffusion1,
source="lllyasviel/control_v11p_sd15_openpose",
description="Controlnet weights trained on sd-1.5 with openpose image conditioning",
type=ModelType.ControlNet,
),
StarterModel(
name="scribble",
base=BaseModelType.StableDiffusion1,
source="lllyasviel/control_v11p_sd15_scribble",
description="Controlnet weights trained on sd-1.5 with scribble image conditioning",
type=ModelType.ControlNet,
),
StarterModel(
name="softedge",
base=BaseModelType.StableDiffusion1,
source="lllyasviel/control_v11p_sd15_softedge",
description="Controlnet weights trained on sd-1.5 with soft edge conditioning",
type=ModelType.ControlNet,
),
StarterModel(
name="shuffle",
base=BaseModelType.StableDiffusion1,
source="lllyasviel/control_v11e_sd15_shuffle",
description="Controlnet weights trained on sd-1.5 with shuffle image conditioning",
type=ModelType.ControlNet,
),
StarterModel(
name="tile",
base=BaseModelType.StableDiffusion1,
source="lllyasviel/control_v11f1e_sd15_tile",
description="Controlnet weights trained on sd-1.5 with tiled image conditioning",
type=ModelType.ControlNet,
),
StarterModel(
name="ip2p",
base=BaseModelType.StableDiffusion1,
source="lllyasviel/control_v11e_sd15_ip2p",
description="Controlnet weights trained on sd-1.5 with ip2p conditioning.",
type=ModelType.ControlNet,
),
StarterModel(
name="canny-sdxl",
base=BaseModelType.StableDiffusionXL,
source="diffusers/controlnet-canny-sdxl-1.0",
description="Controlnet weights trained on sdxl-1.0 with canny conditioning.",
type=ModelType.ControlNet,
),
StarterModel(
name="depth-sdxl",
base=BaseModelType.StableDiffusionXL,
source="diffusers/controlnet-depth-sdxl-1.0",
description="Controlnet weights trained on sdxl-1.0 with depth conditioning.",
type=ModelType.ControlNet,
),
StarterModel(
name="softedge-dexined-sdxl",
base=BaseModelType.StableDiffusionXL,
source="SargeZT/controlnet-sd-xl-1.0-softedge-dexined",
description="Controlnet weights trained on sdxl-1.0 with dexined soft edge preprocessing.",
type=ModelType.ControlNet,
),
StarterModel(
name="depth-16bit-zoe-sdxl",
base=BaseModelType.StableDiffusionXL,
source="SargeZT/controlnet-sd-xl-1.0-depth-16bit-zoe",
description="Controlnet weights trained on sdxl-1.0 with Zoe's preprocessor (16 bits).",
type=ModelType.ControlNet,
),
StarterModel(
name="depth-zoe-sdxl",
base=BaseModelType.StableDiffusionXL,
source="diffusers/controlnet-zoe-depth-sdxl-1.0",
description="Controlnet weights trained on sdxl-1.0 with Zoe's preprocessor (32 bits).",
type=ModelType.ControlNet,
),
# endregion
# region T2I Adapter
StarterModel(
name="canny-sd15",
base=BaseModelType.StableDiffusion1,
source="TencentARC/t2iadapter_canny_sd15v2",
description="T2I Adapter weights trained on sd-1.5 with canny conditioning.",
type=ModelType.T2IAdapter,
),
StarterModel(
name="sketch-sd15",
base=BaseModelType.StableDiffusion1,
source="TencentARC/t2iadapter_sketch_sd15v2",
description="T2I Adapter weights trained on sd-1.5 with sketch conditioning.",
type=ModelType.T2IAdapter,
),
StarterModel(
name="depth-sd15",
base=BaseModelType.StableDiffusion1,
source="TencentARC/t2iadapter_depth_sd15v2",
description="T2I Adapter weights trained on sd-1.5 with depth conditioning.",
type=ModelType.T2IAdapter,
),
StarterModel(
name="zoedepth-sd15",
base=BaseModelType.StableDiffusion1,
source="TencentARC/t2iadapter_zoedepth_sd15v1",
description="T2I Adapter weights trained on sd-1.5 with zoe depth conditioning.",
type=ModelType.T2IAdapter,
),
StarterModel(
name="canny-sdxl",
base=BaseModelType.StableDiffusionXL,
source="TencentARC/t2i-adapter-canny-sdxl-1.0",
description="T2I Adapter weights trained on sdxl-1.0 with canny conditioning.",
type=ModelType.T2IAdapter,
),
StarterModel(
name="zoedepth-sdxl",
base=BaseModelType.StableDiffusionXL,
source="TencentARC/t2i-adapter-depth-zoe-sdxl-1.0",
description="T2I Adapter weights trained on sdxl-1.0 with zoe depth conditioning.",
type=ModelType.T2IAdapter,
),
StarterModel(
name="lineart-sdxl",
base=BaseModelType.StableDiffusionXL,
source="TencentARC/t2i-adapter-lineart-sdxl-1.0",
description="T2I Adapter weights trained on sdxl-1.0 with lineart conditioning.",
type=ModelType.T2IAdapter,
),
StarterModel(
name="sketch-sdxl",
base=BaseModelType.StableDiffusionXL,
source="TencentARC/t2i-adapter-sketch-sdxl-1.0",
description="T2I Adapter weights trained on sdxl-1.0 with sketch conditioning.",
type=ModelType.T2IAdapter,
),
# endregion
# region TI
StarterModel(
name="EasyNegative",
base=BaseModelType.StableDiffusion1,
source="https://huggingface.co/embed/EasyNegative/resolve/main/EasyNegative.safetensors",
description="A textual inversion to use in the negative prompt to reduce bad anatomy",
type=ModelType.TextualInversion,
),
# endregion
]
assert len(STARTER_MODELS) == len({m.source for m in STARTER_MODELS}), "Duplicate starter models"
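
As a usage illustration (not part of the diff), here is a sketch of how a caller might consume a list shaped like STARTER_MODELS, for example to flag which entries are already installed. The simplified dataclass below stands in for the real StarterModel, and mark_installed is a hypothetical helper.

from dataclasses import dataclass, replace
from typing import Optional

@dataclass
class StarterModelSketch:
    # Same fields as StarterModel above, with base/type reduced to plain strings.
    description: str
    source: str
    name: str
    base: str
    type: str
    dependencies: Optional[list[str]] = None
    is_installed: bool = False

def mark_installed(models: list[StarterModelSketch], installed_sources: set[str]) -> list[StarterModelSketch]:
    # An entry counts as installed when its source is already known to the installer.
    return [replace(m, is_installed=m.source in installed_sources) for m in models]

catalog = [
    StarterModelSketch(
        name="SD 1.5 (base)",
        source="runwayml/stable-diffusion-v1-5",
        description="Stable Diffusion version 1.5 diffusers model",
        base="sd-1",
        type="main",
    ),
]
print(mark_installed(catalog, {"runwayml/stable-diffusion-v1-5"})[0].is_installed)  # True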

View File

@@ -15,19 +15,18 @@ from diffusers.models.controlnet import ControlNetModel
from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion import StableDiffusionPipeline
from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker
from diffusers.schedulers import KarrasDiffusionSchedulers
from diffusers.schedulers.scheduling_utils import SchedulerMixin, SchedulerOutput
from diffusers.schedulers.scheduling_utils import SchedulerMixin
from diffusers.utils.import_utils import is_xformers_available
from diffusers.utils.outputs import BaseOutput
from pydantic import Field
from transformers import CLIPFeatureExtractor, CLIPTextModel, CLIPTokenizer
from invokeai.app.services.config import InvokeAIAppConfig
from invokeai.app.services.config.config_default import get_config
from invokeai.backend.ip_adapter.ip_adapter import IPAdapter
from invokeai.backend.ip_adapter.unet_patcher import UNetPatcher
from invokeai.backend.stable_diffusion.diffusion.conditioning_data import ConditioningData
from invokeai.backend.stable_diffusion.diffusion.shared_invokeai_diffusion import InvokeAIDiffuserComponent
from ..util import auto_detect_slice_size, normalize_device
from invokeai.backend.util.attention import auto_detect_slice_size
from invokeai.backend.util.devices import normalize_device
@dataclass
@@ -86,26 +85,8 @@ class AddsMaskGuidance:
noise: torch.Tensor
gradient_mask: bool
def __call__(self, step_output: Union[BaseOutput, SchedulerOutput], t: torch.Tensor, conditioning) -> BaseOutput:
output_class = step_output.__class__ # We'll create a new one with masked data.
# The problem with taking SchedulerOutput instead of the model output is that we're less certain what's in it.
# It's reasonable to assume the first thing is prev_sample, but then does it have other things
# like pred_original_sample? Should we apply the mask to them too?
# But what if there's just some other random field?
prev_sample = step_output[0]
# Mask anything that has the same shape as prev_sample, return others as-is.
return output_class(
{
k: self.apply_mask(v, self._t_for_field(k, t)) if are_like_tensors(prev_sample, v) else v
for k, v in step_output.items()
}
)
def _t_for_field(self, field_name: str, t):
if field_name == "pred_original_sample":
return self.scheduler.timesteps[-1]
return t
def __call__(self, latents: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
return self.apply_mask(latents, t)
def apply_mask(self, latents: torch.Tensor, t) -> torch.Tensor:
batch_size = latents.size(0)
@@ -251,7 +232,7 @@ class StableDiffusionGeneratorPipeline(StableDiffusionPipeline):
"""
if xformers is available, use it, otherwise use sliced attention.
"""
config = InvokeAIAppConfig.get_config()
config = get_config()
if config.attention_type == "xformers":
self.enable_xformers_memory_efficient_attention()
return
@@ -275,7 +256,7 @@ class StableDiffusionGeneratorPipeline(StableDiffusionPipeline):
# the remainder of this code is called when attention_type=='auto'
if self.unet.device.type == "cuda":
if is_xformers_available() and not config.disable_xformers:
if is_xformers_available():
self.enable_xformers_memory_efficient_attention()
return
elif hasattr(torch.nn.functional, "scaled_dot_product_attention"):
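
For context, a condensed sketch of the attention-backend selection that the two hunks above implement: an explicit attention_type setting wins outright; under "auto", xformers is preferred on CUDA when the package is available, then PyTorch's scaled-dot-product attention, then sliced attention. The helper name and string return values are illustrative; only is_xformers_available and the torch capability check are taken from the diff.

import torch
from diffusers.utils.import_utils import is_xformers_available

def pick_attention_backend(attention_type: str, device_type: str) -> str:
    # Mirrors the branch order shown above: an explicit setting wins outright.
    if attention_type != "auto":
        return attention_type
    if device_type == "cuda" and is_xformers_available():
        return "xformers"            # preferred on CUDA when the package is installed
    if hasattr(torch.nn.functional, "scaled_dot_product_attention"):
        return "torch-sdp"           # PyTorch 2.x scaled-dot-product attention
    return "sliced"                  # memory-friendly fallback

print(pick_attention_backend("auto", "cpu"))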
@@ -383,9 +364,15 @@ class StableDiffusionGeneratorPipeline(StableDiffusionPipeline):
finally:
self.invokeai_diffuser.model_forward_callback = self._unet_forward
# restore unmasked part
if mask is not None and not gradient_mask:
latents = torch.lerp(orig_latents, latents.to(dtype=orig_latents.dtype), mask.to(dtype=orig_latents.dtype))
# restore unmasked part after the last step is completed
# in-process masking happens before each step
if mask is not None:
if gradient_mask:
latents = torch.where(mask > 0, latents, orig_latents)
else:
latents = torch.lerp(
orig_latents, latents.to(dtype=orig_latents.dtype), mask.to(dtype=orig_latents.dtype)
)
return latents
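
A self-contained sketch of the restore step above, with made-up tensor shapes: when gradient_mask is set, any positive mask value keeps the denoised latent via torch.where; otherwise the mask is used as a torch.lerp weight, so masked regions keep the denoised result and unmasked regions revert to the original latents.

import torch

def restore_unmasked(orig_latents: torch.Tensor, latents: torch.Tensor,
                     mask: torch.Tensor, gradient_mask: bool) -> torch.Tensor:
    # Mirrors the hunk above: run once, after the final denoising step.
    if gradient_mask:
        return torch.where(mask > 0, latents, orig_latents)
    return torch.lerp(orig_latents, latents.to(orig_latents.dtype), mask.to(orig_latents.dtype))

orig = torch.zeros(1, 4, 8, 8)      # pretend these are the original (unmasked) latents
denoised = torch.ones(1, 4, 8, 8)   # and these came out of the denoising loop
mask = torch.zeros(1, 1, 8, 8)
mask[..., :4] = 1.0                 # left half is the region being inpainted
out = restore_unmasked(orig, denoised, mask, gradient_mask=False)
assert out[..., :4].eq(1.0).all() and out[..., 4:].eq(0.0).all()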
@@ -490,6 +477,10 @@ class StableDiffusionGeneratorPipeline(StableDiffusionPipeline):
if additional_guidance is None:
additional_guidance = []
# one day we will expand this extension point, but for now it just does denoise masking
for guidance in additional_guidance:
latents = guidance(latents, timestep)
# TODO: should this scaling happen here or inside self._unet_forward?
# i.e. before or after passing it to InvokeAIDiffuserComponent
latent_model_input = self.scheduler.scale_model_input(latents, timestep)
@@ -580,21 +571,17 @@ class StableDiffusionGeneratorPipeline(StableDiffusionPipeline):
# compute the previous noisy sample x_t -> x_t-1
step_output = self.scheduler.step(noise_pred, timestep, latents, **conditioning_data.scheduler_args)
# TODO: issue to diffusers?
# undo the internal counter increment done by scheduler.step, so the timestep can be resolved as it was before the call
# this is needed to be able to call scheduler.add_noise with the current timestep
if self.scheduler.order == 2:
self.scheduler._index_counter[timestep.item()] -= 1
# TODO: this additional_guidance extension point feels redundant with InvokeAIDiffusionComponent.
# But the way things are now, scheduler runs _after_ that, so there was
# no way to use it to apply an operation that happens after the last scheduler.step.
# TODO: discuss injection point options. For now this is a patch to get progress images working with inpainting again.
for guidance in additional_guidance:
step_output = guidance(step_output, timestep, conditioning_data)
# restore internal counter
if self.scheduler.order == 2:
self.scheduler._index_counter[timestep.item()] += 1
# apply the mask to any "denoised" or "pred_original_sample" fields
if hasattr(step_output, "denoised"):
step_output.pred_original_sample = guidance(step_output.denoised, self.scheduler.timesteps[-1])
elif hasattr(step_output, "pred_original_sample"):
step_output.pred_original_sample = guidance(
step_output.pred_original_sample, self.scheduler.timesteps[-1]
)
else:
step_output.pred_original_sample = guidance(latents, self.scheduler.timesteps[-1])
return step_output
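
To summarize the replacement logic above in isolation: the scheduler's step output may carry denoised, pred_original_sample, both, or neither, and the mask guidance is applied to whichever predicted-original field exists, falling back to the current latents. The dataclass below is a stand-in for the various diffusers SchedulerOutput types, and the guidance callable is reduced to a single-tensor function (the real code also passes the final timestep).

from dataclasses import dataclass
from typing import Callable, Optional
import torch

@dataclass
class StepOutputSketch:
    prev_sample: torch.Tensor
    pred_original_sample: Optional[torch.Tensor] = None
    denoised: Optional[torch.Tensor] = None

def mask_step_output(step_output: StepOutputSketch, latents: torch.Tensor,
                     guidance: Callable[[torch.Tensor], torch.Tensor]) -> StepOutputSketch:
    # Mirrors the hunk above: prefer "denoised", then "pred_original_sample",
    # otherwise fall back to the latents passed into the step.
    if step_output.denoised is not None:
        step_output.pred_original_sample = guidance(step_output.denoised)
    elif step_output.pred_original_sample is not None:
        step_output.pred_original_sample = guidance(step_output.pred_original_sample)
    else:
        step_output.pred_original_sample = guidance(latents)
    return step_output

latents = torch.zeros(1, 4, 8, 8)
out = mask_step_output(StepOutputSketch(prev_sample=latents), latents, guidance=lambda t: t + 1)
assert out.pred_original_sample.eq(1.0).all()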

View File

@@ -11,7 +11,7 @@ from compel.cross_attention_control import Arguments
from diffusers.models.attention_processor import Attention, SlicedAttnProcessor
from diffusers.models.unets.unet_2d_condition import UNet2DConditionModel
from ...util import torch_dtype
from invokeai.backend.util.devices import torch_dtype
class CrossAttentionType(enum.Enum):

View File

@@ -8,7 +8,7 @@ import torch
from diffusers import UNet2DConditionModel
from typing_extensions import TypeAlias
from invokeai.app.services.config import InvokeAIAppConfig
from invokeai.app.services.config.config_default import get_config
from invokeai.backend.stable_diffusion.diffusion.conditioning_data import (
ConditioningData,
ExtraConditioningInfo,
@@ -54,7 +54,7 @@ class InvokeAIDiffuserComponent:
:param model: the unet model to pass through to cross attention control
:param model_forward_callback: a lambda with arguments (x, sigma, conditioning_to_apply). will be called repeatedly. most likely, this should simply call model.forward(x, sigma, conditioning)
"""
config = InvokeAIAppConfig.get_config()
config = get_config()
self.conditioning = None
self.model = model
self.model_forward_callback = model_forward_callback

View File

@@ -5,6 +5,7 @@ from typing import Callable, List, Union
import torch.nn as nn
from diffusers.models.autoencoders.autoencoder_kl import AutoencoderKL
from diffusers.models.autoencoders.autoencoder_tiny import AutoencoderTiny
from diffusers.models.unets.unet_2d_condition import UNet2DConditionModel
@@ -26,7 +27,7 @@ def _conv_forward_asymmetric(self, input, weight, bias):
@contextmanager
def set_seamless(model: Union[UNet2DConditionModel, AutoencoderKL], seamless_axes: List[str]):
def set_seamless(model: Union[UNet2DConditionModel, AutoencoderKL, AutoencoderTiny], seamless_axes: List[str]):
# Callable: (input: Tensor, weight: Tensor, bias: Optional[Tensor]) -> Tensor
to_restore: list[tuple[nn.Conv2d | nn.ConvTranspose2d, Callable]] = []
try:
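
The hunk above only shows the signature change (AutoencoderTiny added) and the start of the restore bookkeeping, so here is a deliberately simplified, self-contained sketch of the general idea: temporarily switch convolution padding to circular so outputs tile, then put the original settings back. The real set_seamless patches _conv_forward per axis; this version just toggles padding_mode on every Conv2d and is an assumption-level illustration, not the diffed code.

from contextlib import contextmanager
from typing import Iterator
import torch
import torch.nn as nn

@contextmanager
def seamless_convs(model: nn.Module) -> Iterator[None]:
    to_restore: list[tuple[nn.Conv2d, str]] = []
    try:
        for module in model.modules():
            if isinstance(module, nn.Conv2d):
                to_restore.append((module, module.padding_mode))
                module.padding_mode = "circular"   # wrap around at the borders
        yield
    finally:
        for module, mode in to_restore:            # always restore, mirroring the try/finally above
            module.padding_mode = mode

conv = nn.Conv2d(3, 3, kernel_size=3, padding=1)
with seamless_convs(conv):
    _ = conv(torch.randn(1, 3, 16, 16))            # borders now wrap instead of zero-padding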

View File

@@ -1,5 +0,0 @@
"""
Initialization file for invokeai.backend.training
"""
from .textual_inversion_training import do_textual_inversion_training, parse_args # noqa: F401

View File

@@ -1,923 +0,0 @@
# This code was copied from
# https://github.com/huggingface/diffusers/blob/main/examples/textual_inversion/textual_inversion.py
# on January 2, 2023
# and modified slightly by Lincoln Stein (@lstein) to work with InvokeAI
"""
This is the backend to "textual_inversion.py"
"""
import logging
import math
import os
import random
from argparse import Namespace
from pathlib import Path
from typing import Optional
import datasets
import diffusers
import numpy as np
import PIL
import torch
import torch.nn.functional as F
import torch.utils.checkpoint
import transformers
from accelerate import Accelerator
from accelerate.logging import get_logger
from accelerate.utils import ProjectConfiguration, set_seed
from diffusers import AutoencoderKL, DDPMScheduler, StableDiffusionPipeline, UNet2DConditionModel
from diffusers.optimization import get_scheduler
from diffusers.utils import check_min_version
from diffusers.utils.import_utils import is_xformers_available
from huggingface_hub import HfFolder, Repository, whoami
from packaging import version
from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms
from tqdm.auto import tqdm
from transformers import CLIPTextModel, CLIPTokenizer
# invokeai stuff
from invokeai.app.services.config import InvokeAIAppConfig, PagingArgumentParser
from invokeai.backend.install.install_helper import initialize_record_store
from invokeai.backend.model_manager import BaseModelType, ModelType
if version.parse(version.parse(PIL.__version__).base_version) >= version.parse("9.1.0"):
PIL_INTERPOLATION = {
"linear": PIL.Image.Resampling.BILINEAR,
"bilinear": PIL.Image.Resampling.BILINEAR,
"bicubic": PIL.Image.Resampling.BICUBIC,
"lanczos": PIL.Image.Resampling.LANCZOS,
"nearest": PIL.Image.Resampling.NEAREST,
}
else:
PIL_INTERPOLATION = {
"linear": PIL.Image.LINEAR,
"bilinear": PIL.Image.BILINEAR,
"bicubic": PIL.Image.BICUBIC,
"lanczos": PIL.Image.LANCZOS,
"nearest": PIL.Image.NEAREST,
}
# ------------------------------------------------------------------------------
# Will error if the minimal version of diffusers is not installed. Remove at your own risks.
check_min_version("0.10.0.dev0")
logger = get_logger(__name__)
def save_progress(text_encoder, placeholder_token_id, accelerator, placeholder_token, save_path):
logger.info("Saving embeddings")
learned_embeds = accelerator.unwrap_model(text_encoder).get_input_embeddings().weight[placeholder_token_id]
learned_embeds_dict = {placeholder_token: learned_embeds.detach().cpu()}
torch.save(learned_embeds_dict, save_path)
def parse_args() -> Namespace:
config = InvokeAIAppConfig.get_config()
parser = PagingArgumentParser(description="Textual inversion training")
general_group = parser.add_argument_group("General")
model_group = parser.add_argument_group("Models and Paths")
image_group = parser.add_argument_group("Training Image Location and Options")
trigger_group = parser.add_argument_group("Trigger Token")
training_group = parser.add_argument_group("Training Parameters")
checkpointing_group = parser.add_argument_group("Checkpointing and Resume")
integration_group = parser.add_argument_group("Integration")
general_group.add_argument(
"--front_end",
"--gui",
dest="front_end",
action="store_true",
default=False,
help="Activate the text-based graphical front end for collecting parameters. Aside from --root_dir, other parameters will be ignored.",
)
general_group.add_argument(
"--root_dir",
"--root",
type=Path,
default=config.root,
help="Path to the invokeai runtime directory",
)
general_group.add_argument(
"--logging_dir",
type=Path,
default="logs",
help=(
"[TensorBoard](https://www.tensorflow.org/tensorboard) log directory. Will default to"
" *output_dir/runs/**CURRENT_DATETIME_HOSTNAME***."
),
)
general_group.add_argument(
"--output_dir",
type=Path,
default=f"{config.root}/text-inversion-model",
help="The output directory where the model predictions and checkpoints will be written.",
)
model_group.add_argument(
"--model",
type=str,
default="sd-1/main/stable-diffusion-v1-5",
help="Name of the diffusers model to train against.",
)
model_group.add_argument(
"--revision",
type=str,
default=None,
required=False,
help="Revision of pretrained model identifier from huggingface.co/models.",
)
model_group.add_argument(
"--tokenizer_name",
type=str,
default=None,
help="Pretrained tokenizer name or path if not the same as model_name",
)
image_group.add_argument(
"--train_data_dir",
type=Path,
default=None,
help="A folder containing the training data.",
)
image_group.add_argument(
"--resolution",
type=int,
default=512,
help=(
"The resolution for input images, all the images in the train/validation dataset will be resized to this"
" resolution"
),
)
image_group.add_argument(
"--center_crop",
action="store_true",
help="Whether to center crop images before resizing to resolution",
)
trigger_group.add_argument(
"--placeholder_token",
"--trigger_term",
dest="placeholder_token",
type=str,
default=None,
help='A token to use as a placeholder for the concept. This token will trigger the concept when included in the prompt as "<trigger>".',
)
trigger_group.add_argument(
"--learnable_property",
type=str,
choices=["object", "style"],
default="object",
help="Choose between 'object' and 'style'",
)
trigger_group.add_argument(
"--initializer_token",
type=str,
default="*",
help="A symbol to use as the initializer word.",
)
checkpointing_group.add_argument(
"--checkpointing_steps",
type=int,
default=500,
help=(
"Save a checkpoint of the training state every X updates. These checkpoints are only suitable for resuming"
" training using `--resume_from_checkpoint`."
),
)
checkpointing_group.add_argument(
"--resume_from_checkpoint",
type=Path,
default=None,
help=(
"Whether training should be resumed from a previous checkpoint. Use a path saved by"
' `--checkpointing_steps`, or `"latest"` to automatically select the last available checkpoint.'
),
)
checkpointing_group.add_argument(
"--save_steps",
type=int,
default=500,
help="Save learned_embeds.bin every X updates steps.",
)
training_group.add_argument(
"--repeats",
type=int,
default=100,
help="How many times to repeat the training data.",
)
training_group.add_argument("--seed", type=int, default=None, help="A seed for reproducible training.")
training_group.add_argument(
"--train_batch_size",
type=int,
default=16,
help="Batch size (per device) for the training dataloader.",
)
training_group.add_argument("--num_train_epochs", type=int, default=100)
training_group.add_argument(
"--max_train_steps",
type=int,
default=5000,
help="Total number of training steps to perform. If provided, overrides num_train_epochs.",
)
training_group.add_argument(
"--gradient_accumulation_steps",
type=int,
default=1,
help="Number of updates steps to accumulate before performing a backward/update pass.",
)
training_group.add_argument(
"--gradient_checkpointing",
action="store_true",
help="Whether or not to use gradient checkpointing to save memory at the expense of slower backward pass.",
)
training_group.add_argument(
"--learning_rate",
type=float,
default=1e-4,
help="Initial learning rate (after the potential warmup period) to use.",
)
training_group.add_argument(
"--scale_lr",
action="store_true",
default=True,
help="Scale the learning rate by the number of GPUs, gradient accumulation steps, and batch size.",
)
training_group.add_argument(
"--lr_scheduler",
type=str,
default="constant",
help=(
'The scheduler type to use. Choose between ["linear", "cosine", "cosine_with_restarts", "polynomial",'
' "constant", "constant_with_warmup"]'
),
)
training_group.add_argument(
"--lr_warmup_steps",
type=int,
default=500,
help="Number of steps for the warmup in the lr scheduler.",
)
training_group.add_argument(
"--adam_beta1",
type=float,
default=0.9,
help="The beta1 parameter for the Adam optimizer.",
)
training_group.add_argument(
"--adam_beta2",
type=float,
default=0.999,
help="The beta2 parameter for the Adam optimizer.",
)
training_group.add_argument("--adam_weight_decay", type=float, default=1e-2, help="Weight decay to use.")
training_group.add_argument(
"--adam_epsilon",
type=float,
default=1e-08,
help="Epsilon value for the Adam optimizer",
)
training_group.add_argument(
"--mixed_precision",
type=str,
default="no",
choices=["no", "fp16", "bf16"],
help=(
"Whether to use mixed precision. Choose"
"between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >= 1.10."
"and an Nvidia Ampere GPU."
),
)
training_group.add_argument(
"--allow_tf32",
action="store_true",
help=(
"Whether or not to allow TF32 on Ampere GPUs. Can be used to speed up training. For more information, see"
" https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices"
),
)
training_group.add_argument(
"--local_rank",
type=int,
default=-1,
help="For distributed training: local_rank",
)
parser.add_argument(
"--enable_xformers_memory_efficient_attention",
action="store_true",
help="Whether or not to use xformers.",
)
integration_group.add_argument(
"--only_save_embeds",
action="store_true",
default=False,
help="Save only the embeddings for the new concept.",
)
integration_group.add_argument(
"--hub_model_id",
type=str,
default=None,
help="The name of the repository to keep in sync with the local `output_dir`.",
)
integration_group.add_argument(
"--report_to",
type=str,
default="tensorboard",
help=(
'The integration to report the results and logs to. Supported platforms are `"tensorboard"`'
' (default), `"wandb"` and `"comet_ml"`. Use `"all"` to report to all integrations.'
),
)
integration_group.add_argument(
"--push_to_hub",
action="store_true",
help="Whether or not to push the model to the Hub.",
)
integration_group.add_argument(
"--hub_token",
type=str,
default=None,
help="The token to use to push to the Model Hub.",
)
args = parser.parse_args()
return args
imagenet_templates_small = [
"a photo of a {}",
"a rendering of a {}",
"a cropped photo of the {}",
"the photo of a {}",
"a photo of a clean {}",
"a photo of a dirty {}",
"a dark photo of the {}",
"a photo of my {}",
"a photo of the cool {}",
"a close-up photo of a {}",
"a bright photo of the {}",
"a cropped photo of a {}",
"a photo of the {}",
"a good photo of the {}",
"a photo of one {}",
"a close-up photo of the {}",
"a rendition of the {}",
"a photo of the clean {}",
"a rendition of a {}",
"a photo of a nice {}",
"a good photo of a {}",
"a photo of the nice {}",
"a photo of the small {}",
"a photo of the weird {}",
"a photo of the large {}",
"a photo of a cool {}",
"a photo of a small {}",
]
imagenet_style_templates_small = [
"a painting in the style of {}",
"a rendering in the style of {}",
"a cropped painting in the style of {}",
"the painting in the style of {}",
"a clean painting in the style of {}",
"a dirty painting in the style of {}",
"a dark painting in the style of {}",
"a picture in the style of {}",
"a cool painting in the style of {}",
"a close-up painting in the style of {}",
"a bright painting in the style of {}",
"a cropped painting in the style of {}",
"a good painting in the style of {}",
"a close-up painting in the style of {}",
"a rendition in the style of {}",
"a nice painting in the style of {}",
"a small painting in the style of {}",
"a weird painting in the style of {}",
"a large painting in the style of {}",
]
class TextualInversionDataset(Dataset):
def __init__(
self,
data_root,
tokenizer,
learnable_property="object", # [object, style]
size=512,
repeats=100,
interpolation="bicubic",
flip_p=0.5,
set="train",
placeholder_token="*",
center_crop=False,
):
self.data_root = Path(data_root)
self.tokenizer = tokenizer
self.learnable_property = learnable_property
self.size = size
self.placeholder_token = placeholder_token
self.center_crop = center_crop
self.flip_p = flip_p
self.image_paths = [
self.data_root / file_path
for file_path in self.data_root.iterdir()
if file_path.is_file()
and file_path.name.endswith((".png", ".PNG", ".jpg", ".JPG", ".jpeg", ".JPEG", ".gif", ".GIF"))
]
self.num_images = len(self.image_paths)
self._length = self.num_images
if set == "train":
self._length = self.num_images * repeats
self.interpolation = {
"linear": PIL_INTERPOLATION["linear"],
"bilinear": PIL_INTERPOLATION["bilinear"],
"bicubic": PIL_INTERPOLATION["bicubic"],
"lanczos": PIL_INTERPOLATION["lanczos"],
}[interpolation]
self.templates = imagenet_style_templates_small if learnable_property == "style" else imagenet_templates_small
self.flip_transform = transforms.RandomHorizontalFlip(p=self.flip_p)
def __len__(self) -> int:
return self._length
def __getitem__(self, i):
example = {}
image = Image.open(self.image_paths[i % self.num_images])
if not image.mode == "RGB":
image = image.convert("RGB")
placeholder_string = self.placeholder_token
text = random.choice(self.templates).format(placeholder_string)
example["input_ids"] = self.tokenizer(
text,
padding="max_length",
truncation=True,
max_length=self.tokenizer.model_max_length,
return_tensors="pt",
).input_ids[0]
# default to score-sde preprocessing
img = np.array(image).astype(np.uint8)
if self.center_crop:
crop = min(img.shape[0], img.shape[1])
(
h,
w,
) = (
img.shape[0],
img.shape[1],
)
img = img[(h - crop) // 2 : (h + crop) // 2, (w - crop) // 2 : (w + crop) // 2]
image = Image.fromarray(img)
image = image.resize((self.size, self.size), resample=self.interpolation)
image = self.flip_transform(image)
image = np.array(image).astype(np.uint8)
image = (image / 127.5 - 1.0).astype(np.float32)
example["pixel_values"] = torch.from_numpy(image).permute(2, 0, 1)
return example
def get_full_repo_name(model_id: str, organization: Optional[str] = None, token: Optional[str] = None):
if token is None:
token = HfFolder.get_token()
if organization is None:
username = whoami(token)["name"]
return f"{username}/{model_id}"
else:
return f"{organization}/{model_id}"
def do_textual_inversion_training(
config: InvokeAIAppConfig,
model: str,
train_data_dir: Path,
output_dir: Path,
placeholder_token: str,
initializer_token: str,
save_steps: int = 500,
only_save_embeds: bool = False,
tokenizer_name: Optional[str] = None,
learnable_property: str = "object",
repeats: int = 100,
seed: Optional[int] = None,
resolution: int = 512,
center_crop: bool = False,
train_batch_size: int = 16,
num_train_epochs: int = 100,
max_train_steps: int = 5000,
gradient_accumulation_steps: int = 1,
gradient_checkpointing: bool = False,
learning_rate: float = 1e-4,
scale_lr: bool = True,
lr_scheduler: str = "constant",
lr_warmup_steps: int = 500,
adam_beta1: float = 0.9,
adam_beta2: float = 0.999,
adam_weight_decay: float = 1e-02,
adam_epsilon: float = 1e-08,
push_to_hub: bool = False,
hub_token: Optional[str] = None,
logging_dir: Path = Path("logs"),
mixed_precision: str = "fp16",
allow_tf32: bool = False,
report_to: str = "tensorboard",
local_rank: int = -1,
checkpointing_steps: int = 500,
resume_from_checkpoint: Optional[Path] = None,
enable_xformers_memory_efficient_attention: bool = False,
hub_model_id: Optional[str] = None,
**kwargs,
) -> None:
assert model, "Please specify a base model with --model"
assert train_data_dir, "Please specify a directory containing the training images using --train_data_dir"
assert placeholder_token, "Please specify a trigger term using --placeholder_token"
env_local_rank = int(os.environ.get("LOCAL_RANK", -1))
if env_local_rank != -1 and env_local_rank != local_rank:
local_rank = env_local_rank
# setting up things the way invokeai expects them
if not os.path.isabs(output_dir):
output_dir = os.path.join(config.root, output_dir)
logging_dir = output_dir / logging_dir
accelerator_config = ProjectConfiguration()
accelerator_config.logging_dir = logging_dir
accelerator = Accelerator(
gradient_accumulation_steps=gradient_accumulation_steps,
mixed_precision=mixed_precision,
log_with=report_to,
project_config=accelerator_config,
)
# Make one log on every process with the configuration for debugging.
logging.basicConfig(
format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
datefmt="%m/%d/%Y %H:%M:%S",
level=logging.INFO,
)
logger.info(accelerator.state, main_process_only=False)
if accelerator.is_local_main_process:
datasets.utils.logging.set_verbosity_warning()
transformers.utils.logging.set_verbosity_warning()
diffusers.utils.logging.set_verbosity_info()
else:
datasets.utils.logging.set_verbosity_error()
transformers.utils.logging.set_verbosity_error()
diffusers.utils.logging.set_verbosity_error()
# If passed along, set the training seed now.
if seed is not None:
set_seed(seed)
# Handle the repository creation
if accelerator.is_main_process:
if push_to_hub:
if hub_model_id is None:
repo_name = get_full_repo_name(Path(output_dir).name, token=hub_token)
else:
repo_name = hub_model_id
repo = Repository(output_dir, clone_from=repo_name)
with open(os.path.join(output_dir, ".gitignore"), "w+") as gitignore:
if "step_*" not in gitignore:
gitignore.write("step_*\n")
if "epoch_*" not in gitignore:
gitignore.write("epoch_*\n")
elif output_dir is not None:
os.makedirs(output_dir, exist_ok=True)
model_records = initialize_record_store(config)
base, type, name = model.split("/") # note frontend still returns old-style keys
try:
model_config = model_records.search_by_attr(
model_name=name, model_type=ModelType(type), base_model=BaseModelType(base)
)[0]
except IndexError:
raise Exception(f"Unknown model {model}")
model_path = config.models_path / model_config.path
pipeline_args = {"local_files_only": True}
if tokenizer_name:
tokenizer = CLIPTokenizer.from_pretrained(tokenizer_name, **pipeline_args)
else:
tokenizer = CLIPTokenizer.from_pretrained(model_path, subfolder="tokenizer", **pipeline_args)
# Load scheduler and models
noise_scheduler = DDPMScheduler.from_pretrained(model_path, subfolder="scheduler", **pipeline_args)
text_encoder = CLIPTextModel.from_pretrained(
model_path,
subfolder="text_encoder",
**pipeline_args,
)
vae = AutoencoderKL.from_pretrained(
model_path,
subfolder="vae",
**pipeline_args,
)
unet = UNet2DConditionModel.from_pretrained(
model_path,
subfolder="unet",
**pipeline_args,
)
# Add the placeholder token in tokenizer
num_added_tokens = tokenizer.add_tokens(placeholder_token)
if num_added_tokens == 0:
raise ValueError(
f"The tokenizer already contains the token {placeholder_token}. Please pass a different"
" `placeholder_token` that is not already in the tokenizer."
)
# Convert the initializer_token, placeholder_token to ids
token_ids = tokenizer.encode(initializer_token, add_special_tokens=False)
# Check if initializer_token is a single token or a sequence of tokens
if len(token_ids) > 1:
raise ValueError(
f"The initializer token must be a single token. Provided initializer={initializer_token}. Token ids={token_ids}"
)
initializer_token_id = token_ids[0]
placeholder_token_id = tokenizer.convert_tokens_to_ids(placeholder_token)
# Resize the token embeddings as we are adding new special tokens to the tokenizer
text_encoder.resize_token_embeddings(len(tokenizer))
# Initialise the newly added placeholder token with the embeddings of the initializer token
token_embeds = text_encoder.get_input_embeddings().weight.data
token_embeds[placeholder_token_id] = token_embeds[initializer_token_id]
# Freeze vae and unet
vae.requires_grad_(False)
unet.requires_grad_(False)
# Freeze all parameters except for the token embeddings in text encoder
text_encoder.text_model.encoder.requires_grad_(False)
text_encoder.text_model.final_layer_norm.requires_grad_(False)
text_encoder.text_model.embeddings.position_embedding.requires_grad_(False)
if gradient_checkpointing:
# Keep unet in train mode if we are using gradient checkpointing to save memory.
# The dropout cannot be != 0 so it doesn't matter if we are in eval or train mode.
unet.train()
text_encoder.gradient_checkpointing_enable()
unet.enable_gradient_checkpointing()
if enable_xformers_memory_efficient_attention:
if is_xformers_available():
unet.enable_xformers_memory_efficient_attention()
else:
raise ValueError("xformers is not available. Make sure it is installed correctly")
# Enable TF32 for faster training on Ampere GPUs,
# cf https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices
if allow_tf32:
torch.backends.cuda.matmul.allow_tf32 = True
if scale_lr:
learning_rate = learning_rate * gradient_accumulation_steps * train_batch_size * accelerator.num_processes
# Initialize the optimizer
optimizer = torch.optim.AdamW(
text_encoder.get_input_embeddings().parameters(), # only optimize the embeddings
lr=learning_rate,
betas=(adam_beta1, adam_beta2),
weight_decay=adam_weight_decay,
eps=adam_epsilon,
)
# Dataset and DataLoaders creation:
train_dataset = TextualInversionDataset(
data_root=train_data_dir,
tokenizer=tokenizer,
size=resolution,
placeholder_token=placeholder_token,
repeats=repeats,
learnable_property=learnable_property,
center_crop=center_crop,
set="train",
)
train_dataloader = torch.utils.data.DataLoader(train_dataset, batch_size=train_batch_size, shuffle=True)
# Scheduler and math around the number of training steps.
overrode_max_train_steps = False
num_update_steps_per_epoch = math.ceil(len(train_dataloader) / gradient_accumulation_steps)
if max_train_steps is None:
max_train_steps = num_train_epochs * num_update_steps_per_epoch
overrode_max_train_steps = True
scheduler = get_scheduler(
lr_scheduler,
optimizer=optimizer,
num_warmup_steps=lr_warmup_steps * gradient_accumulation_steps,
num_training_steps=max_train_steps * gradient_accumulation_steps,
)
# Prepare everything with our `accelerator`.
text_encoder, optimizer, train_dataloader, lr_scheduler = accelerator.prepare(
text_encoder, optimizer, train_dataloader, scheduler
)
# For mixed precision training we cast the unet and vae weights to half-precision
# as these models are only used for inference, keeping weights in full precision is not required.
weight_dtype = torch.float32
if accelerator.mixed_precision == "fp16":
weight_dtype = torch.float16
elif accelerator.mixed_precision == "bf16":
weight_dtype = torch.bfloat16
# Move vae and unet to device and cast to weight_dtype
unet.to(accelerator.device, dtype=weight_dtype)
vae.to(accelerator.device, dtype=weight_dtype)
# We need to recalculate our total training steps as the size of the training dataloader may have changed.
num_update_steps_per_epoch = math.ceil(len(train_dataloader) / gradient_accumulation_steps)
if overrode_max_train_steps:
max_train_steps = num_train_epochs * num_update_steps_per_epoch
# Afterwards we recalculate our number of training epochs
num_train_epochs = math.ceil(max_train_steps / num_update_steps_per_epoch)
# We need to initialize the trackers we use, and also store our configuration.
# The trackers initialize automatically on the main process.
if accelerator.is_main_process:
params = locals()
for k in params: # init_trackers() doesn't like objects
params[k] = str(params[k]) if isinstance(params[k], object) else params[k]
accelerator.init_trackers("textual_inversion", config=params)
# Train!
total_batch_size = train_batch_size * accelerator.num_processes * gradient_accumulation_steps
logger.info("***** Running training *****")
logger.info(f" Num examples = {len(train_dataset)}")
logger.info(f" Num Epochs = {num_train_epochs}")
logger.info(f" Instantaneous batch size per device = {train_batch_size}")
logger.info(f" Total train batch size (w. parallel, distributed & accumulation) = {total_batch_size}")
logger.info(f" Gradient Accumulation steps = {gradient_accumulation_steps}")
logger.info(f" Total optimization steps = {max_train_steps}")
global_step = 0
first_epoch = 0
resume_step = None
# Potentially load in the weights and states from a previous save
if resume_from_checkpoint:
if resume_from_checkpoint != "latest":
path = os.path.basename(resume_from_checkpoint)
else:
# Get the most recent checkpoint
dirs = os.listdir(output_dir)
dirs = [d for d in dirs if d.startswith("checkpoint")]
dirs = sorted(dirs, key=lambda x: int(x.split("-")[1]))
path = dirs[-1] if len(dirs) > 0 else None
if path is None:
accelerator.print(f"Checkpoint '{resume_from_checkpoint}' does not exist. Starting a new training run.")
resume_from_checkpoint = None
else:
accelerator.print(f"Resuming from checkpoint {path}")
accelerator.load_state(os.path.join(output_dir, path))
global_step = int(path.split("-")[1])
resume_global_step = global_step * gradient_accumulation_steps
first_epoch = global_step // num_update_steps_per_epoch
resume_step = resume_global_step % (num_update_steps_per_epoch * gradient_accumulation_steps)
# Only show the progress bar once on each machine.
progress_bar = tqdm(
range(global_step, max_train_steps),
disable=not accelerator.is_local_main_process,
)
progress_bar.set_description("Steps")
# keep original embeddings as reference
orig_embeds_params = accelerator.unwrap_model(text_encoder).get_input_embeddings().weight.data.clone()
for epoch in range(first_epoch, num_train_epochs):
text_encoder.train()
for step, batch in enumerate(train_dataloader):
# Skip steps until we reach the resumed step
if resume_step and resume_from_checkpoint and epoch == first_epoch and step < resume_step:
if step % gradient_accumulation_steps == 0:
progress_bar.update(1)
continue
with accelerator.accumulate(text_encoder):
# Convert images to latent space
latents = vae.encode(batch["pixel_values"].to(dtype=weight_dtype)).latent_dist.sample().detach()
latents = latents * 0.18215
# Sample noise that we'll add to the latents
noise = torch.randn_like(latents)
bsz = latents.shape[0]
# Sample a random timestep for each image
timesteps = torch.randint(
0,
noise_scheduler.config.num_train_timesteps,
(bsz,),
device=latents.device,
)
timesteps = timesteps.long()
# Add noise to the latents according to the noise magnitude at each timestep
# (this is the forward diffusion process)
noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps)
# Get the text embedding for conditioning
encoder_hidden_states = text_encoder(batch["input_ids"])[0].to(dtype=weight_dtype)
# Predict the noise residual
model_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample
# Get the target for loss depending on the prediction type
if noise_scheduler.config.prediction_type == "epsilon":
target = noise
elif noise_scheduler.config.prediction_type == "v_prediction":
target = noise_scheduler.get_velocity(latents, noise, timesteps)
else:
raise ValueError(f"Unknown prediction type {noise_scheduler.config.prediction_type}")
loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean")
accelerator.backward(loss)
optimizer.step()
scheduler.step()
optimizer.zero_grad()
# Let's make sure we don't update any embedding weights besides the newly added token
index_no_updates = torch.arange(len(tokenizer)) != placeholder_token_id
with torch.no_grad():
accelerator.unwrap_model(text_encoder).get_input_embeddings().weight[index_no_updates] = (
orig_embeds_params[index_no_updates]
)
# Checks if the accelerator has performed an optimization step behind the scenes
if accelerator.sync_gradients:
progress_bar.update(1)
global_step += 1
if global_step % save_steps == 0:
save_path = os.path.join(output_dir, f"learned_embeds-steps-{global_step}.bin")
save_progress(
text_encoder,
placeholder_token_id,
accelerator,
placeholder_token,
save_path,
)
if global_step % checkpointing_steps == 0:
if accelerator.is_main_process:
save_path = os.path.join(output_dir, f"checkpoint-{global_step}")
accelerator.save_state(save_path)
logger.info(f"Saved state to {save_path}")
logs = {"loss": loss.detach().item(), "lr": scheduler.get_last_lr()[0]}
progress_bar.set_postfix(**logs)
accelerator.log(logs, step=global_step)
if global_step >= max_train_steps:
break
# Create the pipeline using the trained modules and save it.
accelerator.wait_for_everyone()
if accelerator.is_main_process:
if push_to_hub and only_save_embeds:
logger.warn("Enabling full model saving because --push_to_hub=True was specified.")
save_full_model = True
else:
save_full_model = not only_save_embeds
if save_full_model:
pipeline = StableDiffusionPipeline.from_pretrained(
model_path,
text_encoder=accelerator.unwrap_model(text_encoder),
vae=vae,
unet=unet,
tokenizer=tokenizer,
**pipeline_args,
)
pipeline.save_pretrained(output_dir)
# Save the newly trained embeddings
save_path = os.path.join(output_dir, "learned_embeds.bin")
save_progress(
text_encoder,
placeholder_token_id,
accelerator,
placeholder_token,
save_path,
)
if push_to_hub:
repo.push_to_hub(commit_message="End of training", blocking=False, auto_lfs_prune=True)
accelerator.end_training()
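
Since the training backend above is being deleted, a short note on the artifact it produced may still help: save_progress writes learned_embeds.bin as a dict mapping the placeholder token to its embedding tensor. Below is a hedged sketch of reading that file back; the tokenizer/text-encoder lines are shown only as comments because they depend on the calling pipeline and are not taken from the diff.

import torch

def load_learned_embedding(path: str) -> tuple[str, torch.Tensor]:
    # learned_embeds.bin, as written by save_progress above, is
    # {placeholder_token: 1-D embedding tensor}.
    embeds = torch.load(path, map_location="cpu")
    (token, vector), = embeds.items()
    return token, vector

# Typical re-use with an SD 1.x pipeline (illustrative only):
#   token, vector = load_learned_embedding("learned_embeds.bin")
#   tokenizer.add_tokens(token)
#   text_encoder.resize_token_embeddings(len(tokenizer))
#   token_id = tokenizer.convert_tokens_to_ids(token)
#   text_encoder.get_input_embeddings().weight.data[token_id] = vector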

View File

@@ -2,32 +2,14 @@
Initialization file for invokeai.backend.util
"""
from .attention import auto_detect_slice_size # noqa: F401
from .devices import ( # noqa: F401
CPU_DEVICE,
CUDA_DEVICE,
MPS_DEVICE,
choose_precision,
choose_torch_device,
normalize_device,
torch_dtype,
)
from .devices import choose_precision, choose_torch_device
from .logging import InvokeAILogger
from .util import ( # TO DO: Clean this up; remove the unused symbols
GIG,
Chdir,
ask_user, # noqa
directory_size,
download_with_resume,
instantiate_from_config, # noqa
url_attachment_name, # noqa
)
from .util import GIG, Chdir, directory_size
__all__ = [
"GIG",
"directory_size",
"Chdir",
"download_with_resume",
"InvokeAILogger",
"choose_precision",
"choose_torch_device",

View File

@@ -7,18 +7,17 @@ import torch
from torch import autocast
from invokeai.app.services.config import InvokeAIAppConfig
from invokeai.app.services.config.config_default import get_config
CPU_DEVICE = torch.device("cpu")
CUDA_DEVICE = torch.device("cuda")
MPS_DEVICE = torch.device("mps")
config = InvokeAIAppConfig.get_config()
def choose_torch_device() -> torch.device:
"""Convenience routine for guessing which GPU device to run model on"""
if config.use_cpu: # legacy setting - force CPU
return CPU_DEVICE
elif config.device == "auto":
config = get_config()
if config.device == "auto":
if torch.cuda.is_available():
return torch.device("cuda")
if hasattr(torch.backends, "mps") and torch.backends.mps.is_available():
@@ -35,7 +34,7 @@ def choose_precision(
device: torch.device, app_config: Optional[InvokeAIAppConfig] = None
) -> Literal["float32", "float16", "bfloat16"]:
"""Return an appropriate precision for the given torch device."""
app_config = app_config or config
app_config = app_config or get_config()
if device.type == "cuda":
device_name = torch.cuda.get_device_name(device)
if not ("GeForce GTX 1660" in device_name or "GeForce GTX 1650" in device_name):

View File

@@ -1,67 +0,0 @@
"""
Functions for better format logging
write_log -- logs the name of the output image, prompt, and prompt args to the terminal and different types of file
1 write_log_message -- Writes a message to the console
2 write_log_files -- Writes a message to files
2.1 write_log_default -- File in plain text
2.2 write_log_txt -- File in txt format
2.3 write_log_markdown -- File in markdown format
"""
import os
def write_log(results, log_path, file_types, output_cntr):
"""
logs the name of the output image, prompt, and prompt args to the terminal and files
"""
output_cntr = write_log_message(results, output_cntr)
write_log_files(results, log_path, file_types)
return output_cntr
def write_log_message(results, output_cntr):
"""logs to the terminal"""
if len(results) == 0:
return output_cntr
log_lines = [f"{path}: {prompt}\n" for path, prompt in results]
if len(log_lines) > 1:
subcntr = 1
for ll in log_lines:
print(f"[{output_cntr}.{subcntr}] {ll}", end="")
subcntr += 1
else:
print(f"[{output_cntr}] {log_lines[0]}", end="")
return output_cntr + 1
def write_log_files(results, log_path, file_types):
for file_type in file_types:
if file_type == "txt":
write_log_txt(log_path, results)
elif file_type == "md" or file_type == "markdown":
write_log_markdown(log_path, results)
else:
print(f"'{file_type}' format is not supported, so write in plain text")
write_log_default(log_path, results, file_type)
def write_log_default(log_path, results, file_type):
plain_txt_lines = [f"{path}: {prompt}\n" for path, prompt in results]
with open(log_path + "." + file_type, "a", encoding="utf-8") as file:
file.writelines(plain_txt_lines)
def write_log_txt(log_path, results):
txt_lines = [f"{path}: {prompt}\n" for path, prompt in results]
with open(log_path + ".txt", "a", encoding="utf-8") as file:
file.writelines(txt_lines)
def write_log_markdown(log_path, results):
md_lines = []
for path, prompt in results:
file_name = os.path.basename(path)
md_lines.append(f"## {file_name}\n![]({file_name})\n\n{prompt}\n")
with open(log_path + ".md", "a", encoding="utf-8") as file:
file.writelines(md_lines)

View File

@@ -181,6 +181,7 @@ from pathlib import Path
from typing import Any, Dict, Optional
from invokeai.app.services.config import InvokeAIAppConfig
from invokeai.app.services.config.config_default import get_config
try:
import syslog
@@ -338,9 +339,8 @@ class InvokeAILogger(object): # noqa D102
loggers: Dict[str, logging.Logger] = {}
@classmethod
def get_logger(
cls, name: str = "InvokeAI", config: InvokeAIAppConfig = InvokeAIAppConfig.get_config()
) -> logging.Logger: # noqa D102
def get_logger(cls, name: str = "InvokeAI", config: Optional[InvokeAIAppConfig] = None) -> logging.Logger: # noqa D102
config = config or get_config()
if name in cls.loggers:
return cls.loggers[name]
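
The change above is the same lazy-configuration pattern applied throughout this diff: replace a default argument that is evaluated at import time with Optional[...] = None plus config = config or get_config(). A generic sketch of why that matters, using a toy config dict rather than InvokeAIAppConfig:

import logging
from typing import Optional

_CONFIG: Optional[dict] = None

def get_config() -> dict:
    # Stand-in for config_default.get_config(): built on first use, not at import.
    global _CONFIG
    if _CONFIG is None:
        _CONFIG = {"log_level": "INFO"}
    return _CONFIG

def get_logger(name: str = "InvokeAI", config: Optional[dict] = None) -> logging.Logger:
    # With `config=... .get_config()` in the signature, the config would be frozen the
    # moment the module is imported; resolving it inside the call avoids that.
    config = config or get_config()
    logger = logging.getLogger(name)
    logger.setLevel(config["log_level"])
    return logger

log = get_logger(__name__)
log.info("configured lazily")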

View File

@@ -1,29 +1,13 @@
import base64
import importlib
import io
import math
import multiprocessing as mp
import os
import re
import warnings
from collections import abc
from inspect import isfunction
from pathlib import Path
from queue import Queue
from threading import Thread
import numpy as np
import requests
import torch
from diffusers import logging as diffusers_logging
from PIL import Image, ImageDraw, ImageFont
from tqdm import tqdm
from PIL import Image
from transformers import logging as transformers_logging
import invokeai.backend.util.logging as logger
from .devices import torch_dtype
# actual size of a gig
GIG = 1073741824
@@ -41,340 +25,6 @@ def directory_size(directory: Path) -> int:
return sum
def log_txt_as_img(wh, xc, size=10):
# wh a tuple of (width, height)
# xc a list of captions to plot
b = len(xc)
txts = []
for bi in range(b):
txt = Image.new("RGB", wh, color="white")
draw = ImageDraw.Draw(txt)
font = ImageFont.load_default()
nc = int(40 * (wh[0] / 256))
lines = "\n".join(xc[bi][start : start + nc] for start in range(0, len(xc[bi]), nc))
try:
draw.text((0, 0), lines, fill="black", font=font)
except UnicodeEncodeError:
logger.warning("Cant encode string for logging. Skipping.")
txt = np.array(txt).transpose(2, 0, 1) / 127.5 - 1.0
txts.append(txt)
txts = np.stack(txts)
txts = torch.tensor(txts)
return txts
def ismap(x):
if not isinstance(x, torch.Tensor):
return False
return (len(x.shape) == 4) and (x.shape[1] > 3)
def isimage(x):
if not isinstance(x, torch.Tensor):
return False
return (len(x.shape) == 4) and (x.shape[1] == 3 or x.shape[1] == 1)
def exists(x):
return x is not None
def default(val, d):
if exists(val):
return val
return d() if isfunction(d) else d
def mean_flat(tensor):
"""
https://github.com/openai/guided-diffusion/blob/27c20a8fab9cb472df5d6bdd6c8d11c8f430b924/guided_diffusion/nn.py#L86
Take the mean over all non-batch dimensions.
"""
return tensor.mean(dim=list(range(1, len(tensor.shape))))
def count_params(model, verbose=False):
total_params = sum(p.numel() for p in model.parameters())
if verbose:
logger.debug(f"{model.__class__.__name__} has {total_params * 1.e-6:.2f} M params.")
return total_params
def instantiate_from_config(config, **kwargs):
    if "target" not in config:
        if config == "__is_first_stage__":
            return None
        elif config == "__is_unconditional__":
            return None
        raise KeyError("Expected key `target` to instantiate.")
    return get_obj_from_str(config["target"])(**config.get("params", {}), **kwargs)


def get_obj_from_str(string, reload=False):
    module, cls = string.rsplit(".", 1)
    if reload:
        module_imp = importlib.import_module(module)
        importlib.reload(module_imp)
    return getattr(importlib.import_module(module, package=None), cls)
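
For context, these two helpers implement the usual config-driven instantiation pattern: "target" names a dotted import path and "params" supplies constructor kwargs. A minimal sketch with a made-up config (any importable class would do):

# Hypothetical config dict; equivalent to calling collections.OrderedDict().
example_config = {"target": "collections.OrderedDict", "params": {}}
obj = instantiate_from_config(example_config)
print(type(obj))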
def _do_parallel_data_prefetch(func, Q, data, idx, idx_to_fn=False):
    # create dummy dataset instance
    # run prefetching
    if idx_to_fn:
        res = func(data, worker_id=idx)
    else:
        res = func(data)
    Q.put([idx, res])
    Q.put("Done")


def parallel_data_prefetch(
    func: callable,
    data,
    n_proc,
    target_data_type="ndarray",
    cpu_intensive=True,
    use_worker_id=False,
):
    # if target_data_type not in ["ndarray", "list"]:
    #     raise ValueError(
    #         "Data, which is passed to parallel_data_prefetch has to be either of type list or ndarray."
    #     )
    if isinstance(data, np.ndarray) and target_data_type == "list":
        raise ValueError("list expected but function got ndarray.")
    elif isinstance(data, abc.Iterable):
        if isinstance(data, dict):
            logger.warning(
                '"data" argument passed to parallel_data_prefetch is a dict: Using only its values and disregarding keys.'
            )
            data = list(data.values())
        if target_data_type == "ndarray":
            data = np.asarray(data)
        else:
            data = list(data)
    else:
        raise TypeError(
            f"The data, that shall be processed parallel has to be either an np.ndarray or an Iterable, but is actually {type(data)}."
        )

    if cpu_intensive:
        Q = mp.Queue(1000)
        proc = mp.Process
    else:
        Q = Queue(1000)
        proc = Thread
    # spawn processes
    if target_data_type == "ndarray":
        arguments = [[func, Q, part, i, use_worker_id] for i, part in enumerate(np.array_split(data, n_proc))]
    else:
        step = int(len(data) / n_proc + 1) if len(data) % n_proc != 0 else int(len(data) / n_proc)
        arguments = [
            [func, Q, part, i, use_worker_id]
            for i, part in enumerate([data[i : i + step] for i in range(0, len(data), step)])
        ]
    processes = []
    for i in range(n_proc):
        p = proc(target=_do_parallel_data_prefetch, args=arguments[i])
        processes += [p]

    # start processes
    logger.info("Start prefetching...")
    import time

    start = time.time()
    gather_res = [[] for _ in range(n_proc)]
    try:
        for p in processes:
            p.start()

        k = 0
        while k < n_proc:
            # get result
            res = Q.get()
            if res == "Done":
                k += 1
            else:
                gather_res[res[0]] = res[1]

    except Exception as e:
        logger.error("Exception: ", e)
        for p in processes:
            p.terminate()

        raise e
    finally:
        for p in processes:
            p.join()
        logger.info(f"Prefetching complete. [{time.time() - start} sec.]")

    if target_data_type == "ndarray":
        if not isinstance(gather_res[0], np.ndarray):
            return np.concatenate([np.asarray(r) for r in gather_res], axis=0)

        # order outputs
        return np.concatenate(gather_res, axis=0)
    elif target_data_type == "list":
        out = []
        for r in gather_res:
            out.extend(r)
        return out
    else:
        return gather_res
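
A minimal usage sketch for the prefetch helper above; the worker function and data are illustrative, and the __main__ guard matters because cpu_intensive=True runs workers in separate processes:

# Illustrative worker: square each chunk of numbers in its own process.
def _square_chunk(chunk):
    return [x * x for x in chunk]

if __name__ == "__main__":
    squares = parallel_data_prefetch(_square_chunk, list(range(100)), n_proc=4, target_data_type="list")
    print(len(squares))  # 100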
def rand_perlin_2d(shape, res, device, fade=lambda t: 6 * t**5 - 15 * t**4 + 10 * t**3):
    delta = (res[0] / shape[0], res[1] / shape[1])
    d = (shape[0] // res[0], shape[1] // res[1])

    grid = (
        torch.stack(
            torch.meshgrid(
                torch.arange(0, res[0], delta[0]),
                torch.arange(0, res[1], delta[1]),
                indexing="ij",
            ),
            dim=-1,
        ).to(device)
        % 1
    )
    rand_val = torch.rand(res[0] + 1, res[1] + 1)
    angles = 2 * math.pi * rand_val
    gradients = torch.stack((torch.cos(angles), torch.sin(angles)), dim=-1).to(device)

    def tile_grads(slice1, slice2):
        return (
            gradients[slice1[0] : slice1[1], slice2[0] : slice2[1]]
            .repeat_interleave(d[0], 0)
            .repeat_interleave(d[1], 1)
        )

    def dot(grad, shift):
        return (
            torch.stack(
                (
                    grid[: shape[0], : shape[1], 0] + shift[0],
                    grid[: shape[0], : shape[1], 1] + shift[1],
                ),
                dim=-1,
            )
            * grad[: shape[0], : shape[1]]
        ).sum(dim=-1)

    n00 = dot(tile_grads([0, -1], [0, -1]), [0, 0]).to(device)
    n10 = dot(tile_grads([1, None], [0, -1]), [-1, 0]).to(device)
    n01 = dot(tile_grads([0, -1], [1, None]), [0, -1]).to(device)
    n11 = dot(tile_grads([1, None], [1, None]), [-1, -1]).to(device)
    t = fade(grid[: shape[0], : shape[1]])
    noise = math.sqrt(2) * torch.lerp(torch.lerp(n00, n10, t[..., 0]), torch.lerp(n01, n11, t[..., 0]), t[..., 1]).to(
        device
    )
    return noise.to(dtype=torch_dtype(device))
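
A brief, illustrative call to the Perlin-noise helper above; shape should be divisible by res, and the CPU device is just an example:

# 512x512 noise field built from an 8x8 lattice of random gradients.
noise = rand_perlin_2d((512, 512), (8, 8), device=torch.device("cpu"))
print(noise.shape)  # torch.Size([512, 512])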
def ask_user(question: str, answers: list):
    from itertools import chain, repeat

    user_prompt = f"\n>> {question} {answers}: "
    invalid_answer_msg = "Invalid answer. Please try again."
    pose_question = chain([user_prompt], repeat("\n".join([invalid_answer_msg, user_prompt])))
    user_answers = map(input, pose_question)
    valid_response = next(filter(answers.__contains__, user_answers))
    return valid_response
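
For illustration, ask_user re-prompts until the reply is one of the allowed answers; the question text below is made-up:

# Interactive example: loops until the user types "y" or "n".
choice = ask_user("Overwrite the existing model?", ["y", "n"])
print(f"user chose {choice}")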
# -------------------------------------
def download_with_resume(url: str, dest: Path, access_token: str = None) -> Path:
    """
    Download a model file.

    :param url: https, http or ftp URL
    :param dest: A Path object. If path exists and is a directory, then we try to derive the filename
                 from the URL's Content-Disposition header and copy the URL contents into
                 dest/filename
    :param access_token: Access token to access this resource
    """
    header = {"Authorization": f"Bearer {access_token}"} if access_token else {}
    open_mode = "wb"
    exist_size = 0

    resp = requests.get(url, headers=header, stream=True, allow_redirects=True)
    content_length = int(resp.headers.get("content-length", 0))

    if dest.is_dir():
        try:
            file_name = re.search('filename="(.+)"', resp.headers.get("Content-Disposition")).group(1)
        except AttributeError:
            file_name = os.path.basename(url)
        dest = dest / file_name
    else:
        dest.parent.mkdir(parents=True, exist_ok=True)

    if dest.exists():
        exist_size = dest.stat().st_size
        header["Range"] = f"bytes={exist_size}-"
        open_mode = "ab"
        resp = requests.get(url, headers=header, stream=True)  # new request with range

    if exist_size > content_length:
        logger.warning("corrupt existing file found. re-downloading")
        os.remove(dest)
        exist_size = 0

    if resp.status_code == 416 or (content_length > 0 and exist_size == content_length):
        logger.warning(f"{dest}: complete file found. Skipping.")
        return dest
    elif resp.status_code == 206 or exist_size > 0:
        logger.warning(f"{dest}: partial file found. Resuming...")
    elif resp.status_code != 200:
        logger.error(f"An error occurred during downloading {dest}: {resp.reason}")
    else:
        logger.info(f"{dest}: Downloading...")

    try:
        if content_length < 2000:
            logger.error(f"ERROR DOWNLOADING {url}: {resp.text}")
            return None

        with (
            open(dest, open_mode) as file,
            tqdm(
                desc=str(dest),
                initial=exist_size,
                total=content_length,
                unit="iB",
                unit_scale=True,
                unit_divisor=1000,
            ) as bar,
        ):
            for data in resp.iter_content(chunk_size=1024):
                size = file.write(data)
                bar.update(size)
    except Exception as e:
        logger.error(f"An error occurred while downloading {dest}: {str(e)}")
        return None

    return dest
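
A minimal sketch of calling the resumable downloader above; the URL is a placeholder, and because dest is a directory the filename comes from the Content-Disposition header or the URL itself:

# Placeholder URL; returns the final Path on success or None on failure.
saved = download_with_resume("https://example.com/checkpoints/model.safetensors", Path("models"))
if saved is None:
    print("download failed")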
def url_attachment_name(url: str) -> dict:
    try:
        resp = requests.get(url, stream=True)
        match = re.search('filename="(.+)"', resp.headers.get("Content-Disposition"))
        return match.group(1)
    except Exception:
        return None


def download_with_progress_bar(url: str, dest: Path) -> bool:
    result = download_with_resume(url, dest, access_token=None)
    return result is not None


def image_to_dataURL(image: Image.Image, image_format: str = "PNG") -> str:
    """
    Converts an image into a base64 image dataURL.


@@ -1,157 +0,0 @@
# This file predefines a few models that the user may want to install.
sd-1/main/stable-diffusion-v1-5:
  description: Stable Diffusion version 1.5 diffusers model (4.27 GB)
  source: runwayml/stable-diffusion-v1-5
  recommended: True
  default: True
sd-1/main/stable-diffusion-v1-5-inpainting:
  description: RunwayML SD 1.5 model optimized for inpainting, diffusers version (4.27 GB)
  source: runwayml/stable-diffusion-inpainting
  recommended: True
sd-2/main/stable-diffusion-2-1:
  description: Stable Diffusion version 2.1 diffusers model, trained on 768 pixel images (5.21 GB)
  source: stabilityai/stable-diffusion-2-1
  recommended: False
sd-2/main/stable-diffusion-2-inpainting:
  description: Stable Diffusion version 2.0 inpainting model (5.21 GB)
  source: stabilityai/stable-diffusion-2-inpainting
  recommended: False
sdxl/main/stable-diffusion-xl-base-1-0:
  description: Stable Diffusion XL base model (12 GB)
  source: stabilityai/stable-diffusion-xl-base-1.0
  recommended: True
sdxl-refiner/main/stable-diffusion-xl-refiner-1-0:
  description: Stable Diffusion XL refiner model (12 GB)
  source: stabilityai/stable-diffusion-xl-refiner-1.0
  recommended: False
sdxl/vae/sdxl-vae-fp16-fix:
  description: Version of the SDXL-1.0 VAE that works in half precision mode
  source: madebyollin/sdxl-vae-fp16-fix
  recommended: True
sd-1/main/Analog-Diffusion:
  description: An SD-1.5 model trained on diverse analog photographs (2.13 GB)
  source: wavymulder/Analog-Diffusion
  recommended: False
sd-1/main/Deliberate:
  description: Versatile model that produces detailed images up to 768px (4.27 GB)
  source: https://huggingface.co/XpucT/Deliberate/resolve/main/Deliberate_v5.safetensors?download=true
  recommended: False
sd-1/main/Dungeons-and-Diffusion:
  description: Dungeons & Dragons characters (2.13 GB)
  source: 0xJustin/Dungeons-and-Diffusion
  recommended: False
sd-1/main/dreamlike-photoreal-2:
  description: A photorealistic model trained on 768 pixel images based on SD 1.5 (2.13 GB)
  source: dreamlike-art/dreamlike-photoreal-2.0
  recommended: False
sd-1/main/Inkpunk-Diffusion:
  description: Stylized illustrations inspired by Gorillaz, FLCL and Shinkawa; prompt with "nvinkpunk" (4.27 GB)
  source: Envvi/Inkpunk-Diffusion
  recommended: False
sd-1/main/openjourney:
  description: An SD 1.5 model fine tuned on Midjourney; prompt with "mdjrny-v4 style" (2.13 GB)
  source: prompthero/openjourney
  recommended: False
sd-1/main/seek.art_MEGA:
  source: coreco/seek.art_MEGA
  description: A general use SD-1.5 "anything" model that supports multiple styles (2.1 GB)
  recommended: False
sd-1/main/trinart_stable_diffusion_v2:
  description: An SD-1.5 model finetuned with ~40K assorted high resolution manga/anime-style images (2.13 GB)
  source: naclbit/trinart_stable_diffusion_v2
  recommended: False
sd-1/controlnet/qrcode_monster:
  source: monster-labs/control_v1p_sd15_qrcode_monster
  subfolder: v2
sd-1/controlnet/canny:
  source: lllyasviel/control_v11p_sd15_canny
  recommended: True
sd-1/controlnet/inpaint:
  source: lllyasviel/control_v11p_sd15_inpaint
sd-1/controlnet/mlsd:
  source: lllyasviel/control_v11p_sd15_mlsd
sd-1/controlnet/depth:
  source: lllyasviel/control_v11f1p_sd15_depth
  recommended: True
sd-1/controlnet/normal_bae:
  source: lllyasviel/control_v11p_sd15_normalbae
sd-1/controlnet/seg:
  source: lllyasviel/control_v11p_sd15_seg
sd-1/controlnet/lineart:
  source: lllyasviel/control_v11p_sd15_lineart
  recommended: True
sd-1/controlnet/lineart_anime:
  source: lllyasviel/control_v11p_sd15s2_lineart_anime
sd-1/controlnet/openpose:
  source: lllyasviel/control_v11p_sd15_openpose
  recommended: True
sd-1/controlnet/scribble:
  source: lllyasviel/control_v11p_sd15_scribble
  recommended: False
sd-1/controlnet/softedge:
  source: lllyasviel/control_v11p_sd15_softedge
sd-1/controlnet/shuffle:
  source: lllyasviel/control_v11e_sd15_shuffle
sd-1/controlnet/tile:
  source: lllyasviel/control_v11f1e_sd15_tile
sd-1/controlnet/ip2p:
  source: lllyasviel/control_v11e_sd15_ip2p
sd-1/t2i_adapter/canny-sd15:
  source: TencentARC/t2iadapter_canny_sd15v2
sd-1/t2i_adapter/sketch-sd15:
  source: TencentARC/t2iadapter_sketch_sd15v2
sd-1/t2i_adapter/depth-sd15:
  source: TencentARC/t2iadapter_depth_sd15v2
sd-1/t2i_adapter/zoedepth-sd15:
  source: TencentARC/t2iadapter_zoedepth_sd15v1
sdxl/t2i_adapter/canny-sdxl:
  source: TencentARC/t2i-adapter-canny-sdxl-1.0
sdxl/t2i_adapter/zoedepth-sdxl:
  source: TencentARC/t2i-adapter-depth-zoe-sdxl-1.0
sdxl/t2i_adapter/lineart-sdxl:
  source: TencentARC/t2i-adapter-lineart-sdxl-1.0
sdxl/t2i_adapter/sketch-sdxl:
  source: TencentARC/t2i-adapter-sketch-sdxl-1.0
sd-1/embedding/EasyNegative:
  source: https://huggingface.co/embed/EasyNegative/resolve/main/EasyNegative.safetensors
  recommended: True
  description: A textual inversion to use in the negative prompt to reduce bad anatomy
sd-1/lora/FlatColor:
  source: https://civitai.com/models/6433/loraflatcolor
  recommended: True
  description: A LoRA that generates scenery using solid blocks of color
sd-1/lora/Ink scenery:
  source: https://civitai.com/api/download/models/83390
  description: Generate india ink-like landscapes
sd-1/ip_adapter/ip_adapter_sd15:
  source: InvokeAI/ip_adapter_sd15
  recommended: True
  requires:
    - InvokeAI/ip_adapter_sd_image_encoder
  description: IP-Adapter for SD 1.5 models
sd-1/ip_adapter/ip_adapter_plus_sd15:
  source: InvokeAI/ip_adapter_plus_sd15
  recommended: False
  requires:
    - InvokeAI/ip_adapter_sd_image_encoder
  description: Refined IP-Adapter for SD 1.5 models
sd-1/ip_adapter/ip_adapter_plus_face_sd15:
  source: InvokeAI/ip_adapter_plus_face_sd15
  recommended: False
  requires:
    - InvokeAI/ip_adapter_sd_image_encoder
  description: Refined IP-Adapter for SD 1.5 models, adapted for faces
sdxl/ip_adapter/ip_adapter_sdxl:
  source: InvokeAI/ip_adapter_sdxl
  recommended: False
  requires:
    - InvokeAI/ip_adapter_sdxl_image_encoder
  description: IP-Adapter for SDXL models
any/clip_vision/ip_adapter_sd_image_encoder:
  source: InvokeAI/ip_adapter_sd_image_encoder
  recommended: False
  description: Required model for using IP-Adapters with SD-1/2 models
any/clip_vision/ip_adapter_sdxl_image_encoder:
  source: InvokeAI/ip_adapter_sdxl_image_encoder
  recommended: False
  description: Required model for using IP-Adapters with SDXL models
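
For context, the deleted file above is a flat YAML map keyed by base/type/name. A minimal sketch (not InvokeAI's actual loader) of reading such a file and listing the recommended entries, assuming PyYAML is available:

from pathlib import Path

import yaml

# Illustrative only: parse the predefined-model map and print recommended sources.
entries = yaml.safe_load(Path("INITIAL_MODELS.yaml").read_text())
for key, attrs in entries.items():
    if attrs.get("recommended"):
        print(f"{key}: {attrs['source']}")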


@@ -1,5 +0,0 @@
"""
Initialization file for invokeai.frontend.CLI
"""
from .CLI import main as invokeai_command_line_interface  # noqa: F401

Some files were not shown because too many files have changed in this diff.