Compare commits

...

492 Commits

Author SHA1 Message Date
653a8f67a4 Run ruff 2024-03-26 15:10:28 -04:00
90f32d17ba Use correct format type on model 2024-03-26 15:10:28 -04:00
a7b8bbc7c6 Use model_path defined in probe run 2024-03-26 15:10:28 -04:00
befcf779dc Fix paths in probe 2024-03-26 15:10:28 -04:00
18098cc4b9 For loras/embeddings treat folders the same as single files 2024-03-26 15:10:28 -04:00
bbecb99eb4 Allow Embedding and Lora Folder Models to have Different Names 2024-03-26 15:10:28 -04:00
3aad6f975b docs: restore mention of WinLongPathsEnabled.reg 2024-03-26 09:18:01 -04:00
4e93b05736 docs: update all installation docs 2024-03-26 09:18:01 -04:00
3493b26c55 docs: remove deprecated installation docs 2024-03-26 09:18:01 -04:00
fb317d0967 feat(installer): update invoke.bat.in
Forgot to update this when removing references to updating the app in invoke.sh.in
2024-03-26 14:48:29 +11:00
56af97866e docs(installer): update docstrings for install methods 2024-03-26 14:48:29 +11:00
44626c9452 tidy(installer): remove no-op --version CLI arg of installer
This wasn't used at all.
2024-03-26 14:48:29 +11:00
dfc08bd964 feat(installer): add support for installing from wheel
This allows us to easily test the installer without needing the desired version to be published on PyPI:
```sh
python3 installer/lib/main.py --wheel installer/dist/InvokeAI-4.0.0rc6-py3-none-any.whl
```

A warning message and confirmation are displayed when the arg is used.

The rest of the installer is unchanged.
2024-03-26 14:48:29 +11:00
eb2a1993f1 fix(installer): resolve all typing issues 2024-03-26 14:48:29 +11:00
743234e3d0 feat(installer): remove updater
Updating should always be done via the installer. We initially planned to only deprecate the updater, but given the scale of changes for v4, there's no point in waiting to remove it entirely.
2024-03-26 14:48:29 +11:00
a3f734af9a Update transformers 4.38.2 -> 4.39.1 and diffusers 0.27.0 -> 0.27.2 2024-03-26 11:51:08 +11:00
b4b1dbdd34 tests: add test for iterate execution order 2024-03-26 09:18:46 +11:00
83b3828b55 prioritize iterate in _get_next_node 2024-03-26 09:18:46 +11:00
3cdfc6ab16 fix tests 2024-03-26 09:06:37 +11:00
42f72b19a6 fix localization and change in settings 2024-03-26 09:06:37 +11:00
148e6d9b97 docs(installer): better comments for PYTORCH_ENABLE_MPS_FALLBACK 2024-03-26 09:04:13 +11:00
354d59df62 fix(installer): typo "10" -> "5" 2024-03-26 09:04:13 +11:00
8205c56c8a feat(installer): remove dialog references in the launcher
The launcher isn't well served by the additional complexity of two different UIs.
2024-03-26 09:04:13 +11:00
c856fab7d1 fix(ui): load default workflows mutation error
Loading default workflows sometimes requires we mutate the workflow object in order to change the category or ID of the workflow.

This happens in `invokeai/frontend/web/src/features/nodes/util/workflow/validateWorkflow.ts`

The data we get back from the query hooks is frozen and sealed by redux, because they are part of redux state. We need to clone the workflow before operating on it.

It's not clear how this ever worked in the past, because redux state has always been frozen and sealed.
2024-03-26 08:30:25 +11:00
d63d26924b tests: default settings union evaluation 2024-03-25 07:40:52 -04:00
2ec03ae95c fix(mm): default settings pydantic error
Add `extra="forbid"` to the default settings models.

Closes #6035.

Pydantic has some quirks related to unions. This affected how the union of default settings was evaluated. See https://github.com/pydantic/pydantic/issues/9095 for a detailed description of the behaviour that this change addresses.
2024-03-25 07:40:52 -04:00
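As a rough illustration of the union behaviour this commit addresses - with `extra="forbid"`, a payload carrying an unexpected key can no longer validate against the wrong union member (the class names below are simplified stand-ins, not the exact InvokeAI models):

```py
from typing import Optional, Union

from pydantic import BaseModel, ConfigDict


class MainModelDefaultSettings(BaseModel):
    model_config = ConfigDict(extra="forbid")
    width: Optional[int] = None
    height: Optional[int] = None


class ControlAdapterDefaultSettings(BaseModel):
    model_config = ConfigDict(extra="forbid")
    preprocessor: Optional[str] = None


class Record(BaseModel):
    default_settings: Optional[Union[MainModelDefaultSettings, ControlAdapterDefaultSettings]] = None


# Without extra="forbid", the unknown "preprocessor" key could be silently dropped and the
# payload could resolve to MainModelDefaultSettings; with it, only the correct member validates.
r = Record.model_validate({"default_settings": {"preprocessor": "canny_image_processor"}})
print(type(r.default_settings).__name__)  # ControlAdapterDefaultSettings
```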
0f02a72cb9 allow deletion of symlinked models in models dir 2024-03-22 18:29:24 -07:00
37fd57d4d9 Update probe.py
Minor case-sensitive typo. `ModelType.Lora` should be `ModelType.LoRA`
2024-03-22 09:09:56 -07:00
cf0c7d66ed chore: v4.0.0rc5 2024-03-22 02:35:16 -07:00
5b016bf376 fix(nodes): esrgan model name typo 2024-03-22 02:22:19 -07:00
e7a096dec1 fix(mm): remove proteus model
This model is SDXL and relies on CLIP Skip. We don't support that yet.
2024-03-22 02:22:03 -07:00
281ecd5a9a chore(nodes): update default workflows for v4
All workflows updated and tested
2024-03-22 02:21:33 -07:00
9cbf78542c remove dangling comment 2024-03-22 16:35:42 +11:00
34f5259980 catch ^C at startup time while models are being scanned 2024-03-22 16:35:42 +11:00
2ecbb9f720 fix(ui): model dependency parsing 2024-03-22 14:59:33 +11:00
ab36d7c0f2 chore(ui): typegen 2024-03-22 14:59:33 +11:00
05d6661877 feat(mm): revised list of starter models
- Enriched dependencies to not just be a string - allows reuse of a dependency as a starter model _and_ dependency of another model. For example, all the SDXL models have the fp16 VAE as a dependency, but you can also download it on its own.
- Looked at popular models on the major model sites to select the list. No SD2 models. All hosted on HF.
2024-03-22 14:59:33 +11:00
eb558d72d8 Fix minor bugs involving model manager handling of model paths (#6024)
* Fix minor bugs involving model manager handling of model paths

- Leave models found in the `autoimport` directory there. Do not move them
  into the `models` hierarchy.
- If model name, type or base is updated and model is in the `models` directory,
  update its path as appropriate.
- On startup during model scanning, if a model's path is a symbolic link, then resolve
  to an absolute path before deciding it is a new model that must be hashed and
  registered. (This prevents needless hashing at startup time).

* fix issue with dropped suffix

---------

Co-authored-by: Lincoln Stein <lstein@gmail.com>
2024-03-22 01:14:45 +00:00
4687739319 ui: Update rgbaToHex to optionally return alpha value or not 2024-03-22 06:23:51 +05:30
168b35f86d fix: make the styling of the hex code component consistent with others 2024-03-22 06:23:51 +05:30
07fe0e8dc8 chore: Move color transformers to new file 2024-03-22 06:23:51 +05:30
45fc7d8054 feat: add Hex Code to ColorField Component 2024-03-22 06:23:51 +05:30
eafc85cfe3 feat: Add Mask from ID Node 2024-03-22 06:23:51 +05:30
ddf917f68c translationBot(ui): update translation (Russian)
Currently translated at 99.5% (1117 of 1122 strings)

Co-authored-by: Васянатор <ilabulanov339@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ru/
Translation: InvokeAI/Web UI
2024-03-22 10:57:47 +11:00
c90807ba33 translationBot(ui): update translation (Italian)
Currently translated at 98.2% (1102 of 1122 strings)

translationBot(ui): update translation (Italian)

Currently translated at 97.9% (1099 of 1122 strings)

translationBot(ui): update translation (Italian)

Currently translated at 97.9% (1099 of 1122 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2024-03-22 10:57:47 +11:00
842b57e57c tests: update config tests
- Add patched rootdir fixture to all config tests. I think this isn't strictly necessary but it does ensure that any config tests that need to write files don't accidentally write to user data locations.
- Be more careful when calling `get_config()` in the tests, by clearing the LRU cache before and after. This ensures a test doesn't reference the singleton config created by a previously run test.
- Add test for env var parsing.
- Add test for config writing in the context of `get_config()`. This is effectively a mini e2e test for the config lifecycle.
2024-03-22 09:53:02 +11:00
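A minimal sketch of the cache-clearing pattern described above, assuming `get_config` is wrapped in `functools.lru_cache` (the import path is illustrative):

```py
import pytest

from invokeai.app.services.config import get_config  # illustrative import path


@pytest.fixture(autouse=True)
def isolated_config():
    # Clear the cached singleton before the test so it never sees a config
    # created by a previously run test...
    get_config.cache_clear()
    yield
    # ...and clear again afterwards so this test's config doesn't leak forward.
    get_config.cache_clear()
```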
f538ed54fb fix(config): do not write env vars to config files
Add class `DefaultInvokeAIAppConfig`, which inherits from `InvokeAIAppConfig`. When instantiated, this class does not parse environment variables, so it outputs a "clean" default config. That's the only difference.

Then, we can use this new class in the 3 places:
- When creating the example config file (no env vars should be here)
- When migrating a v3 config (we want to instantiate the migrated config without env vars, so that when we write it out, they are not written to disk)
- When creating a fresh config file (i.e. on first run with an uninitialized root or new config file path - no env vars here!)
2024-03-22 09:53:02 +11:00
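A hedged sketch of one way such a class can be built with pydantic-settings - overriding `settings_customise_sources` to drop the env/dotenv sources - with a simplified schema standing in for the real one:

```py
from pydantic_settings import BaseSettings, PydanticBaseSettingsSource, SettingsConfigDict


class InvokeAIAppConfig(BaseSettings):
    """Simplified stand-in: normally reads INVOKEAI_-prefixed environment variables."""

    model_config = SettingsConfigDict(env_prefix="INVOKEAI_")
    host: str = "127.0.0.1"
    port: int = 9090


class DefaultInvokeAIAppConfig(InvokeAIAppConfig):
    """Identical schema, but never reads environment variables, so it always yields clean defaults."""

    @classmethod
    def settings_customise_sources(
        cls,
        settings_cls,
        init_settings: PydanticBaseSettingsSource,
        env_settings: PydanticBaseSettingsSource,
        dotenv_settings: PydanticBaseSettingsSource,
        file_secret_settings: PydanticBaseSettingsSource,
    ):
        # Keep only explicit init kwargs; drop env, dotenv, and secret-file sources.
        return (init_settings,)
```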
d0a936ebd4 fix(mm): do not write config file when migrating models.yaml 2024-03-22 09:53:02 +11:00
27622dfd5e allow checkpoint config files to use root-relative paths 2024-03-22 08:57:45 +11:00
72b44f7ebc feat(mm): rename "blake3" to "blake3_multi"
Just make it clearer which is which.
2024-03-22 08:26:36 +11:00
7726d312e1 feat(mm): default hashing algo to blake3_single
For SSDs, `blake3` is about 10x faster than `blake3_single` - roughly 30 files/second with `blake3` vs 3 files/second with `blake3_single`.

For spinning HDDs, `blake3` is about 100x slower than `blake3_single` - roughly 300 seconds/file with `blake3` vs 3 seconds/file with `blake3_single`.

For external drives, `blake3` is always worse, but the difference is highly variable. For external spinning drives, it's probably way worse than internal.

The least offensive algorithm is `blake3_single`, and it's still _much_ faster than any other algorithm.
2024-03-22 08:26:36 +11:00
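A rough sketch of what the two hashing modes look like in code, assuming the `blake3` PyPI package (whose `max_threads` and `update_mmap` options provide the multi-threaded path):

```py
from pathlib import Path

import blake3


def hash_file_single(path: Path) -> str:
    """blake3_single: one thread, sequential chunked reads - friendly to spinning disks."""
    hasher = blake3.blake3()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(2**20), b""):
            hasher.update(chunk)
    return hasher.hexdigest()


def hash_file_multi(path: Path) -> str:
    """blake3: memory-maps the file and hashes on all cores - fast on SSDs, slow on HDDs."""
    hasher = blake3.blake3(max_threads=blake3.blake3.AUTO)
    hasher.update_mmap(path)
    return hasher.hexdigest()
```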
61520dfb86 gh: update pr template
Minor tweaks
2024-03-22 07:56:37 +11:00
6e869e6038 fix(ui): migrate redux state that has models
With the change to model identifiers from v3 to v4, if a user had persisted redux state with the old format, we could get unexpected runtime errors when rehydrating state if we try to access model attributes that no longer exist.

For example, the CLIP Skip component does this:

```ts
CLIP_SKIP_MAP[model.base].maxClip
```

In v3, models had a `base_type` attribute, but it is renamed to `base` in v4. This code therefore causes a runtime error:
- `model.base` is `undefined`
- `CLIP_SKIP_MAP[undefined]` is also undefined
- `undefined.maxClip` is a runtime error!

Resolved by adding a migration for the redux slices that have model identifiers. The migration simply resets the slice or the part of the slice that is affected, when it's simple to do a partial reset.

Closes #6000
2024-03-22 07:55:13 +11:00
9eacc0c189 fix(ui): use the old combobox component for dropdowns
Closes #6011
2024-03-22 07:33:52 +11:00
23606d9e83 pkg: pin version of ruff
If you switch between different branches, by the time you get back to `main`, a different version of `ruff` might be installed that has slightly different formatting rules. This leads to incorrect formatting changes.

Pinning `ruff` avoids this issue.
2024-03-22 07:27:06 +11:00
d4d0fea078 [feature] Add probe for SDXL controlnet models (#5382)
* add probe for SDXL controlnet models

* Update invokeai/backend/model_management/model_probe.py

Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>

* Update invokeai/backend/model_manager/probe.py

Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>

---------

Co-authored-by: Lincoln Stein <lstein@gmail.com>
Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>
2024-03-21 14:49:45 +00:00
a5771f6120 chore(docker): remove outdated comments from docker-compose 2024-03-21 10:34:52 -04:00
35f847d5b7 fix(docker): add env vars for host and port to the Dockerfile 2024-03-21 10:34:52 -04:00
3278497674 feat(docker): remove separate pre-installation of PyTorch in the image 2024-03-21 10:34:52 -04:00
c9350f71be feat(docker): improve directory handling and expand environment variable documentation 2024-03-21 10:34:52 -04:00
b00e27b022 fix(docker): ensure the container has write permission to the runtime directory 2024-03-21 10:34:52 -04:00
a6283b9fb6 tidy: "fit_image_to_resolution" -> "resize_image_to_resolution" 2024-03-21 07:02:57 -07:00
64fb15e117 chore: ruff 2024-03-21 07:02:57 -07:00
7019d93ff0 feat(ui): add missing detect_resolution to processors 2024-03-21 07:02:57 -07:00
7467768d48 chore(ui): typegen 2024-03-21 07:02:57 -07:00
e2d7b514e0 tidy: correct attributions for controlnet processors 2024-03-21 07:02:57 -07:00
c36d12a50f feat: adaptation of Lineart Anime processor
Adapted from https://github.com/huggingface/controlnet_aux
2024-03-21 07:02:57 -07:00
c7f8fe4d5e feat: adaptation of Lineart processor
Adapted from https://github.com/huggingface/controlnet_aux
2024-03-21 07:02:57 -07:00
ffb41c3616 feat: adaptation of HED processor
Adapted from controlnet repo
2024-03-21 07:02:57 -07:00
611006b692 feat: adaptation of Canny processor
Adapted from controlnet processors package

fix: do final resize in canny processor

canny
2024-03-21 07:02:57 -07:00
ca496f0380 feat: add image utils
These all support controlnet processors.

- `pil_to_cv2`
- `cv2_to_pil`
- `pil_to_np`
- `np_to_pil`
- `normalize_image_channel_count` (a readable version of `HWC3` from the controlnet repo)
- `fit_image_to_resolution` (a readable version of `resize_image` from the controlnet repo)
- `non_maximum_suppression` (a readable version of `nms` from the controlnet repo)
- `safe_step` (a readable version of `safe_step` from the controlnet repo)
2024-03-21 07:02:57 -07:00
01d8ab04a5 feat(nodes): add missing detect_resolution to processors
Some processors, like Canny, didn't use `detect_resolution`. The resultant control images were then resized by the processors from 512x512 to the desired dimensions. The result is that the control images are the right size, but very low quality.

Using detect_resolution fixes this.
2024-03-21 07:02:57 -07:00
7a4122235f feat(mm): display progress when hashing files 2024-03-21 17:24:48 +11:00
75f4e27522 tidy(mm): clean up model download/install logs 2024-03-21 16:41:20 +11:00
8ae757334e feat(mm): make installer thread logging stmts debug 2024-03-21 16:41:20 +11:00
2038064a34 add timeouts to the download tests 2024-03-21 16:41:20 +11:00
689cb9d31d after stopping install and download services, wait for thread exit 2024-03-21 16:41:20 +11:00
0cab1d1e04 added debugging statements 2024-03-21 16:41:20 +11:00
9bd7dabed3 refactor big _install_next_item() loop 2024-03-21 16:41:20 +11:00
30283a4767 fix(ui): set aspect ratio to free when using default model settings
We need to use the `widthRecalled` and `heightRecalled` actions, which handle the aspect ratio.

Closes  #5974
2024-03-21 16:30:52 +11:00
dacfe6853e Update rc version, regenerate schema 2024-03-20 08:21:23 -07:00
2b093da4b0 Remove no longer used pwinput dependency 2024-03-20 08:15:37 -07:00
368c1b709c chore: v4.0.0rc3
RC version bump
2024-03-20 05:59:08 -07:00
2269253a6c docs: update installation docs
Remove/edit references to configure script.
2024-03-20 05:48:02 -07:00
3490aee247 tidy(installer): remove unused messages 2024-03-20 05:48:02 -07:00
f592ad3649 fix(installer): remove configure flow from installer 2024-03-20 05:48:02 -07:00
42a2bad936 fix(installer): remove deleted scripts from launcher
These scripts no longer exist and need to be removed from the launcher:
- invokeai-ti
- invokeai-merge
- invokeai-model-install
- invokeai-configure
2024-03-20 05:48:02 -07:00
ba2fd875ad fix(ui): typo 2024-03-20 16:26:14 +11:00
9d30a063e7 fix: remaining strings 2024-03-20 16:26:14 +11:00
dc9a9c0160 fix: not translated strings 2024-03-20 16:26:14 +11:00
d45931a0af fix(ui): localize text 2024-03-20 16:26:14 +11:00
c1de129bbc fix(ui): use refiner's seamless node for i2l VAE
Closes  #5995
2024-03-20 16:08:27 +11:00
bf852348aa Update pytorch and xFormers 2.1.2 -> 2.2.1 2024-03-20 15:16:08 +11:00
fc63419c6e fix(ui): refresh starter models on model add/update/delete 2024-03-20 15:05:25 +11:00
c356cabe97 chore(ui): lint 2024-03-20 15:05:25 +11:00
97fe6e483d fix(mm): do not attempt to reinstall starter model dependencies 2024-03-20 15:05:25 +11:00
1069303309 fix(config): remove configure arg & logic from docker image 2024-03-20 15:05:25 +11:00
eb607498bf fix(config): create parent dir when writing config file 2024-03-20 15:05:25 +11:00
bdb52cfcf7 feat(ui): set HF token in MM tab
- Display a toast on UI launch if the HF token is invalid
- Show form in MM if token is invalid or unable to be verified, let user set the token via this form
2024-03-20 15:05:25 +11:00
3f6f8199f6 chore(ui): typegen 2024-03-20 15:05:25 +11:00
9a5575b46b feat(mm): move HF token helper to route 2024-03-20 15:05:25 +11:00
dea9142cb8 tests: fix config test after changing config schema version format 2024-03-20 15:05:25 +11:00
02329df1df feat(config): write example config file out on app startup 2024-03-20 15:05:25 +11:00
f5337c7ce2 fix(config): handle relative paths to v3 models.yamls 2024-03-20 15:05:25 +11:00
b02f2da71d fix(config): handle legacy_conf_dir setting migration 2024-03-20 15:05:25 +11:00
6c13fa13ea fix(mm): regression from legacy conf dir change 2024-03-20 15:05:25 +11:00
040ea8f41b tidy: do not show msg when loading NSFW checker 2024-03-20 15:05:25 +11:00
13c72206d8 docs: update CONFIGURATION.md 2024-03-20 15:05:25 +11:00
96ef7e3889 docs: add link to docs to invokeai.yaml template 2024-03-20 15:05:25 +11:00
2eacbb4d9d fix(nodes): do not load NSFW checker model on startup
Just check if the path exists to determine if it is "available". When needed, load it.
2024-03-20 15:05:25 +11:00
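A minimal sketch of the lazy-load pattern this commit describes - check the path at startup, load weights only on first use (the loader callable is a hypothetical placeholder):

```py
from pathlib import Path
from typing import Any, Callable, Optional


class LazySafetyChecker:
    """Report availability by checking the path; load the model only when first needed."""

    def __init__(self, model_path: Path, loader: Callable[[Path], Any]):
        self.model_path = model_path
        self.loader = loader  # hypothetical loader injected by the app
        self._model: Optional[Any] = None

    def is_available(self) -> bool:
        # Cheap startup check - no model weights are touched here.
        return self.model_path.exists()

    def __call__(self, image: Any) -> Any:
        if self._model is None:
            self._model = self.loader(self.model_path)  # loaded lazily, on first use
        return self._model(image)
```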
0e51495071 chore(ui): lint 2024-03-20 15:05:25 +11:00
b378cfcb46 cleanup: remove unused scripts, cruft
App runs & tests pass.
2024-03-20 15:05:25 +11:00
6c558279dd feat(config): add CLI arg to specify config file
This allows users to create simple "profiles" via separate `invokeai.yaml` files.

- Remove `InvokeAIAppConfig.set_root()`, it's extraneous
- Remove `InvokeAIAppConfig.merge_from_file()`, it's extraneous
- Add `--config` to the app arg parser, add `InvokeAIAppConfig._config_file`, and consume in the config singleton getter
- `InvokeAIAppConfig.init_file_path` -> `InvokeAIAppConfig.config_file_path`
2024-03-20 15:05:25 +11:00
bd3e8cbdfb feat(ui): add starter models tab to MM
Lists all starter models with an install button if the model is not yet installed.
2024-03-20 15:05:25 +11:00
aa689e5384 style(ui): tweak ModelBaseBadge style 2024-03-20 15:05:25 +11:00
484488dee4 feat(ui): add useStarterModelsToast
This displays a toast linking to the MM tab when there are no main models installed. It is a no-op when the `starterModels` feature is disabled.
2024-03-20 15:05:25 +11:00
e40b715f39 feat(ui): add starterModels feature
This can be disabled to prevent a toast from appearing, linking users to the model manager tab.
2024-03-20 15:05:25 +11:00
e8f4012b56 feat(ui): extract FetchingModelsLoader into reusable component 2024-03-20 15:05:25 +11:00
bc12ca9220 chore(ui): typegen 2024-03-20 15:05:25 +11:00
5ceaeb234d feat(mm): add starter models route
The models from INITIAL_MODELS.yaml have been recreated as a structured python object. This data is served on a new route. The model sources are compared against currently-installed models to determine if they are already installed or not.
2024-03-20 15:05:25 +11:00
429f87c60b fix(mm): HFModelSource string format
The dunder `__str__` method for `HFModelSource` was appending a colon `:` to the end of the source strings.
2024-03-20 15:05:25 +11:00
ee3096f616 feat(config): add flag to indicate if args were parsed
This flag acts as a proxy for the `get_config()` function to determine if the full application is running.

If it is, the config will set the root, do HF login, etc.

If not (e.g. it's called by an external script), all that stuff will be skipped.
2024-03-20 15:05:25 +11:00
6af6673a4f feat: move all config-related initialization to app
HF login, legacy yaml confs, and default init file are all handled during app setup.

All directories are created as they are needed by the app.

No need to check for a valid root dir - we will make it if it doesn't exist.
2024-03-20 15:05:25 +11:00
b173e4c08d tidy(config): type checker ignores + comment 2024-03-20 15:05:25 +11:00
059f869737 tidy(config): remove ignore_missing_core_models CLI arg and setting
This is now a no-op, with all models being downloaded when they are first requested.
2024-03-20 15:05:25 +11:00
813e679b77 feat: add hf_login util
This provides a simple way to provide a HF token. If HF reports no valid token, one is prompted for until a valid token is provided, or the user presses Ctrl + C to cancel.
2024-03-20 15:05:25 +11:00
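A hedged sketch of the prompt-until-valid loop described above, using `huggingface_hub` to validate the token (the app's actual helper may differ):

```py
from getpass import getpass

from huggingface_hub import HfApi, login
from huggingface_hub.utils import HfHubHTTPError


def hf_login() -> None:
    """Prompt for an HF token until a valid one is provided, or the user presses Ctrl+C."""
    api = HfApi()
    while True:
        try:
            token = getpass("HuggingFace token (Ctrl+C to skip): ")
            api.whoami(token=token)  # raises if the token is not valid
        except KeyboardInterrupt:
            print("HF login skipped.")
            return
        except HfHubHTTPError:
            print("Token rejected - please try again.")
            continue
        login(token=token)  # persist the validated token for later downloads
        return
```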
857e9c9b5f feat: add SuppressOutput util
This context manager suppresses/hides stdout.
2024-03-20 15:05:25 +11:00
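A minimal sketch of such a context manager, assuming it only needs to hide `sys.stdout`:

```py
import io
import sys
from contextlib import contextmanager


@contextmanager
def suppress_output():
    """Temporarily redirect stdout to an in-memory buffer, restoring it on exit."""
    saved = sys.stdout
    sys.stdout = io.StringIO()
    try:
        yield
    finally:
        sys.stdout = saved


# Usage: anything printed inside the block is swallowed.
with suppress_output():
    print("this never reaches the terminal")
```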
5c1aa02e7b fix(config): set default legacy_conf_dir to configs
It was `configs/stable-diffusion` before, which broke conversions.
2024-03-20 15:05:25 +11:00
0e3fb4e97a deps: add pwinput, pinned to specific fork
This simple package provides a cross-platform way to type a password on the CLI and have it show up as asterisks.

The fork, pending merge into the upstream package, adds support for Ctrl+C to cancel input.
2024-03-20 15:05:25 +11:00
6e882d3fd6 feat(config): dynamic ram cache size
Use the util function to calculate ram cache size on startup. This way, the `ram` setting will always be optimized for the system, even if the user adds or removes RAM. In other words, the default value is now dynamic.
2024-03-20 15:05:25 +11:00
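A rough sketch of a dynamic default, assuming `psutil` is available; the exact heuristic the app uses may differ:

```py
import psutil


def default_ram_cache_size_gb() -> float:
    """Scale the model RAM cache with total system memory instead of using a fixed default."""
    total_gb = psutil.virtual_memory().total / (1024**3)
    # Reserve roughly half of system RAM for everything else, clamped to a sane range.
    return round(min(max(total_gb * 0.5, 4.0), 32.0), 1)


print(f"ram: {default_ram_cache_size_gb()} GB")
```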
fabef8b45b feat(mm): download upscaling & lama models as they are requested 2024-03-20 15:05:25 +11:00
97f16b2b7e fix(ui): fix model install progress display 2024-03-20 15:05:25 +11:00
609c2c0abf Fix: progress image preview for inpainting 2024-03-20 13:36:05 +11:00
fe5fa7f8cc chore: make ruff 2024-03-20 13:36:05 +11:00
8b30cbe81e chore: clean up old code comments 2024-03-20 13:36:05 +11:00
2af9286345 fix: denoise mask incorrectly applied after step 2024-03-20 13:36:05 +11:00
29b04b7e83 chore: bump nodes versions
Bump all nodes in prep for v4.0.0.
2024-03-20 10:28:07 +11:00
04aa97f0fd gh: update pr template 2024-03-20 10:19:04 +11:00
39fa8874fc undo 2024-03-20 10:05:46 +11:00
4e245e9331 fix refiner metadata 2024-03-20 10:05:46 +11:00
74a51571a0 Fix race condition causing hangs during model install unit tests (#5994)
* fix race condition causing hangs during model install unit tests

* remove extraneous sanity checks

---------

Co-authored-by: Lincoln Stein <lstein@gmail.com>
2024-03-19 20:54:49 +00:00
c87497fd54 record model_variant in t2i and clip_vision configs (#5989)
- Move base of t2i and clip_vision config models to DiffusersBase, which contains
  a field to record the model variant (e.g. "fp16")
- This restores the ability to load fp16 t2i and clip_vision models
- Also add defensive coding to load the vanilla model when the fp16 model
  has been replaced (or more likely, user's preferences changed since installation)

Co-authored-by: Lincoln Stein <lstein@gmail.com>
2024-03-19 20:14:12 +00:00
3f61c51c3a fix(ui): model list refreshes after changes
When consolidating all the model queries I messed up the query tags. Fixed now, so that when a model is installed, removed, or changed, the list refreshes.
2024-03-20 06:25:57 +11:00
07c9c0b0ab translationBot(ui): update translation (Russian)
Currently translated at 99.5% (1091 of 1096 strings)

Co-authored-by: Васянатор <ilabulanov339@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ru/
Translation: InvokeAI/Web UI
2024-03-19 22:34:53 +11:00
2322d3cbbe translationBot(ui): update translation (Japanese)
Currently translated at 52.5% (576 of 1096 strings)

translationBot(ui): update translation (Japanese)

Currently translated at 52.0% (570 of 1096 strings)

Co-authored-by: Gohsuke Shimada <ghoskay@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ja/
Translation: InvokeAI/Web UI
2024-03-19 22:34:53 +11:00
419ce02aae translationBot(ui): update translation (Italian)
Currently translated at 98.2% (1077 of 1096 strings)

translationBot(ui): update translation (Italian)

Currently translated at 98.2% (1077 of 1096 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2024-03-19 22:34:53 +11:00
629ccd059e translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

translationBot(ui): update translation files

Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2024-03-19 22:34:53 +11:00
2dfa51c2e5 translationBot(ui): update translation (Russian)
Currently translated at 99.0% (1518 of 1533 strings)

translationBot(ui): update translation (Russian)

Currently translated at 99.0% (1518 of 1533 strings)

Co-authored-by: Васянатор <ilabulanov339@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ru/
Translation: InvokeAI/Web UI
2024-03-19 22:34:53 +11:00
edde0fe174 translationBot(ui): update translation (Bulgarian)
Currently translated at 3.9% (61 of 1533 strings)

translationBot(ui): update translation (Bulgarian)

Currently translated at 1.8% (28 of 1533 strings)

translationBot(ui): added translation (Bulgarian)

Co-authored-by: Sufi2425 <sufi24251@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/bg/
Translation: InvokeAI/Web UI
2024-03-19 22:34:53 +11:00
3e46f7a010 translationBot(ui): update translation (Italian)
Currently translated at 97.8% (1510 of 1543 strings)

translationBot(ui): update translation (Italian)

Currently translated at 98.1% (1503 of 1532 strings)

translationBot(ui): update translation (Italian)

Currently translated at 98.1% (1503 of 1532 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2024-03-19 22:34:53 +11:00
faa555df20 chore(ui): lint 2024-03-19 22:11:48 +11:00
7a3e19227f feat(ui): display created_by using valueOrNull 2024-03-19 22:11:48 +11:00
e706afe8a6 feat(ui): add valueOrNull to useMetadataItem
In order to allow for null and undefined metadata values, this hook returned a symbol to indicate that parsing failed or was pending.

For values where the parsed value will never be null or undefined, it is useful to get the value or null (instead of a symbol).
2024-03-19 22:11:48 +11:00
acca197893 revert(ui): restore metadata parsers for created_by 2024-03-19 22:11:48 +11:00
aa2c404cab move created_by out of recall panel 2024-03-19 22:11:48 +11:00
300a4693ae prettier 2024-03-19 21:59:51 +11:00
820614e4d8 ruff 2024-03-19 21:59:51 +11:00
fe563f05fc tsc 2024-03-19 21:59:51 +11:00
d89e653588 fix(ui): remove image_resolution from colormap 2024-03-19 21:59:51 +11:00
4e9207a10b fix(worker): remove resolution from zoe as it seems to break it 2024-03-19 21:59:51 +11:00
8c6c33a315 feat(ui): update processor constants to calculate default resolution based on current base model, add image_resolution to the processors that didn't have it in the UI as a configurable option
2024-03-19 21:59:51 +11:00
ed0f9f7d66 feat(worker): add image_resolution as option for all cnet processors 2024-03-19 21:59:51 +11:00
b25850a585 typegen 2024-03-19 21:59:51 +11:00
94257e35f5 Comment out test failing to run due to issue in model install service 2024-03-19 01:16:11 -04:00
19e16384f7 Remove redundant embedding file read 2024-03-19 01:16:11 -04:00
a3ed6e694c Use wait_for_job instead of wait_for_installs 2024-03-19 01:16:11 -04:00
20d9d10798 Wrap in try except for InvalidModelConfigException 2024-03-19 01:16:11 -04:00
77a70a8a9c Skip hashing in test_heuristic_import_with_type 2024-03-19 01:16:11 -04:00
06abea8db0 Increase timeout for test_heuristic_import_with_type, fix Url import 2024-03-19 01:16:11 -04:00
a28f0932e6 Run ruff 2024-03-19 01:16:11 -04:00
6968a068bb Fix test to run on windows vms 2024-03-19 01:16:11 -04:00
9d5b96c119 Pull format type setting out of model_type if statement 2024-03-19 01:16:11 -04:00
5daefccf77 Simplify logic for determining model type in probe 2024-03-19 01:16:11 -04:00
1f3c35ee90 Run Ruff 2024-03-19 01:16:11 -04:00
f78ed3a952 Add unit test 2024-03-19 01:16:11 -04:00
d38262a7ea Allow type field to be a string 2024-03-19 01:16:11 -04:00
5feb62d440 Allow users to specify model type and skip detection step of probe 2024-03-19 01:16:11 -04:00
f8df293d2c Revert "fix(mm): provide ckpt config as stream to diffusers"
This reverts commit 9d045964d6.
2024-03-19 14:24:54 +11:00
9d045964d6 fix(mm): provide ckpt config as stream to diffusers
Fixes converting ckpt main models
2024-03-19 09:24:28 +11:00
9fa9ebe386 fix(config): set ignore_missing_core_models when provided as CLI arg 2024-03-19 09:24:28 +11:00
1cb1b60b4c tidy: "check_root.py" -> "check_directories.py" 2024-03-19 09:24:28 +11:00
1d4517d00d tidy: "validate_root" -> "validate_directories" 2024-03-19 09:24:28 +11:00
982b513af3 tidy(config): move a few get_config calls to inside the functions where they are needed 2024-03-19 09:24:28 +11:00
f1450c2c24 update textual inversion training to use root_path rather than root_dir 2024-03-19 09:24:28 +11:00
5d16a40b95 fix invokeai-configure to use isolated argument-parsing pattern 2024-03-19 09:24:28 +11:00
d871fca643 partially address --root CLI argument handling
- fix places where `get_config()` is being called at import time rather
  than at run time.

- add regression test for import-time get_config() calls.
2024-03-19 09:24:28 +11:00
8cd65755ef make invokeai-model-install use the --root argument correctly 2024-03-19 09:24:28 +11:00
e76cc71e81 fix(config): edge cases in models.yaml migration
When running the configurator, the `legacy_models_conf_path` was stripped when saving the config file. Then the migration logic didn't fire correctly, and the custom models.yaml paths weren't migrated into the db.

- Rework the logic to migrate this path by adding it to the config object as a normal field that is not excluded from serialization.
- Rearrange the models.yaml migration logic to remove the legacy path after migrating, then write the config file. This way, the legacy path doesn't stick around.
- Move the schema version into the config object.
- Back up the config file before attempting migration.
- Add tests to cover this edge case
2024-03-19 09:24:28 +11:00
1ed1c1fb24 chore: ruff 2024-03-19 09:24:28 +11:00
9063ea9173 tests: comprehensive config migration tests
Add testing for the settings that had to be explicitly migrated.
2024-03-19 09:24:28 +11:00
4633242503 tidy(config): move config docstring builder to its script 2024-03-19 09:24:28 +11:00
e8b030427d fix(config): do not discard conf_path, migrate custom models.yaml
Hold onto `conf_path` temporarily while migrating `invokeai.yaml` so that it gets migrated correctly as the model installer starts up. Stashed as `legacy_models_yaml_path` in the config, excluded from serialization.
2024-03-19 09:24:28 +11:00
415a4baf78 docs: add note about pydantic-settings' yaml support 2024-03-19 09:24:28 +11:00
09a8c0328a docs: update CONFIGURATION.md 2024-03-19 09:24:28 +11:00
e32c609fec fix(config): ignore empty environment variables (use default values instead) 2024-03-19 09:24:28 +11:00
beffca6b49 docs: update CONFIGURATION.md 2024-03-19 09:24:28 +11:00
a281671e6c docs: update InvokeAIAppConfig doc generator
It now renders the valid values.
2024-03-19 09:24:28 +11:00
5179587b5a feat(config): restore ignore_missing_core_models arg 2024-03-19 09:24:28 +11:00
cb180909f7 fix(install): resolve config-related issues with configurator
- Do not use the singleton app config, this causes a lot of weirdness
- Update logic to use new config object
- Remove unused code
2024-03-19 09:24:28 +11:00
ce9aeeece3 feat: single app entrypoint with CLI arg parsing
We have two problems with how argparse is being utilized:
- We parse CLI args as the `api_app.py` file is read. This causes a problem with pytest, which has an incompatible set of CLI args. Some tests import the FastAPI app, which triggers the config to parse CLI args, which receives the pytest args and fails.
- We've repeatedly had problems when something that uses the config is imported before the CLI args are parsed. When this happens, the root dir may not be set correctly, so we attempt to operate on incorrect paths.

To resolve these issues, we need to lift CLI arg parsing outside of the application code, but still let the application access the CLI args. We can create an external app entrypoint to do this.

- `InvokeAIArgs` is a simple helper class that parses CLI args and stores the result.
- `run_app()` is the new entrypoint. It first parses CLI args, then runs `invoke_api` to start the app.

The `invokeai-web` project script and `invokeai-web.py` dev script now call `run_app()` instead of `invoke_api()`.

The first time `get_config()` is called to get the singleton config object, it retrieves the args from `InvokeAIArgs`, sets the root dir if provided, then merges settings in from `invokeai.yaml`.

CLI arg parsing is now safely insulated from application code, but still accessible. And we don't need to worry about import order having an impact on anything, because by the time the app is running, we have already parsed CLI args. Whew!
2024-03-19 09:24:28 +11:00
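A hedged sketch of the entrypoint split described above; argument names and the import path are assumptions, not the exact implementation:

```py
from argparse import ArgumentParser, Namespace
from typing import Optional


class InvokeAIArgs:
    """Parses CLI args exactly once, before application code runs, and stores the result."""

    args: Optional[Namespace] = None

    @classmethod
    def parse_args(cls) -> Namespace:
        parser = ArgumentParser(prog="invokeai-web")
        parser.add_argument("--root", type=str, default=None)
        parser.add_argument("--config", type=str, default=None)
        cls.args = parser.parse_args()
        return cls.args


def run_app() -> None:
    # Parse CLI args first, then import and start the app, so import order can never matter.
    InvokeAIArgs.parse_args()
    from invokeai.app.api_app import invoke_api  # illustrative import path

    invoke_api()
```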
5ecfa86cd0 tests: fix test on macos 2024-03-19 09:24:28 +11:00
d09f03ef25 fix(config): if no invokeai.yaml is found, create a default one
This fixes an issue with `test_images.py`, which tests the bulk images routers and imports the whole FastAPI app. This triggers the config logic which fails on the test runner, because it has no `invokeai.yaml`.

Also probably just good for graceful fallback.
2024-03-19 09:24:28 +11:00
3f8e2bfd18 fix(config): migrate deprecated max_cache_size and max_vram_cache_size settings 2024-03-19 09:24:28 +11:00
60492500db chore: ruff 2024-03-19 09:24:28 +11:00
f69938c6a8 fix(config): revised config methods
- `write_file` requires a destination file path
- `read_config` -> `merge_from_file`, if no path is provided, reads from `self.init_file_path`
- update app, tests to use new methods
- fix configurator, was overwriting config file data unexpectedly
2024-03-19 09:24:28 +11:00
5e39e46954 feat(config): more resilient update_config method 2024-03-19 09:24:28 +11:00
Only set values that have changed.
2024-03-19 09:24:28 +11:00
1079bf3ccf feat(config): fix bad compress_level setting
Tweak the name of it so that incoming configs with the old default value of 6 have the setting stripped out. The result is all configs will now have the new, much better default value of 1.
2024-03-19 09:24:28 +11:00
fbbf9c01b5 tests: fix remaining tests 2024-03-19 09:24:28 +11:00
15cef98a8b tests: fix docs test 2024-03-19 09:24:28 +11:00
5606f4d627 tests: redo config tests 2024-03-19 09:24:28 +11:00
53c8f36029 docs(config): clarify comment during config migration 2024-03-19 09:24:28 +11:00
b9884a6166 feat(config): split out parse_args and read_config logic from get_config
Having this all in the `get_config` function makes testing hard. Move these two functions to their own methods, and call them on app startup explicitly.
2024-03-19 09:24:28 +11:00
77b86e9ad5 fix(install): remove broken v2.3 -> v3 migration logic from configurator 2024-03-19 09:24:28 +11:00
a6181b5759 fix(install): update configurator to use new config system 2024-03-19 09:24:28 +11:00
b4b0af7c60 fix(install): do not use deprecated pydantic methods 2024-03-19 09:24:28 +11:00
3d1f3818cb fix(config): use set_root to set root 2024-03-19 09:24:28 +11:00
deffeb9655 fix(config): use get_config singleton, new paths 2024-03-19 09:24:28 +11:00
b8c46fb15b fix(config): split check_invokeai_root into separate function to validate, use this in model_install to determine if the configurator needs to run 2024-03-19 09:24:28 +11:00
9539ecce79 fix(config): use correct config in textual_inversion_training 2024-03-19 09:24:28 +11:00
7716a4a8c7 fix(config): use correct config in install_helper 2024-03-19 09:24:28 +11:00
dedce2d896 fix(config): remove unnecessary resolve on config path 2024-03-19 09:24:28 +11:00
e43bfa3d70 docs(mm): do not hide members in InvokeAIAppConfig autodoc 2024-03-19 09:24:28 +11:00
897fe497dc fix(config): use new get_config across the app, use correct settings 2024-03-19 09:24:28 +11:00
7b1f9409bc fix(config): drop nonexistent config.use_cpu setting 2024-03-19 09:24:28 +11:00
a72cea014c fix(config): drop usage of deprecated config.xformers, just use the existing utility function 2024-03-19 09:24:28 +11:00
b4182b190f fix(config): use new config.patchmatch 2024-03-19 09:24:28 +11:00
22ac204678 fix(config): fix invisible_watermark handling
This setting was hardcoded to True. Simplified logic around it to not have a conditional that does nothing.
2024-03-19 09:24:28 +11:00
7ca447ded1 fix(config): use new config setup in api_app.py 2024-03-19 09:24:28 +11:00
4df28f1de6 fix(config): use yaml module instead of omegaconf when migrating models.yaml
Also use new paths.
2024-03-19 09:24:28 +11:00
ebd0cb6113 fix(config): remove reference to internet_available
Nothing ever set this. Only a debug print statement referenced it.
2024-03-19 09:24:28 +11:00
fbe3afa5e1 fix(config): fix nsfw_checker handling
This setting was hardcoded to True. Rework logic around it to not conditionally check the setting.
2024-03-19 09:24:28 +11:00
3fb116155b refactor(config): simplified config
- Remove OmegaConf. It functioned as an intermediary data format, between YAML/argparse and pydantic. It's not necessary - we can parse YAML or CLI args directly with pydantic.

- Remove dynamic CLI args. Only `root` is explicitly supported. This greatly simplifies config handling. Configuration is done by editing the YAML file. Frequently-used args can be added if there is a demand.

- A separate arg parser is created to handle the slimmed-down CLI args. It's run immediately in the `invokeai-web` script to handle `--version` and `--help`. It is also used inside the singleton config getter (see below).

- Remove categories from the config. Our settings model is mostly flat. Handling categories adds complexity for both us and users - we have to handle transforming a flat config to categorized config (and vice-versa), while users have to be careful with indentation in their YAML file.

- Add a `meta` key to the config file. Currently, this holds the config schema version only. It is not a part of the config object itself.

- Remove legacy settings that are no longer referenced, or were effectively no-op settings when referenced in code.

- Implement simple migration logic for v3 configs. If migration is successful, the v3 config file is backed up to `invokeai.yaml.bak` and the new config written to `invokeai.yaml`.

- Previously, the singleton config was accessed by calling `InvokeAIAppConfig.get_config()`. This returned an instance of `InvokeAIAppConfig`, which _also_ has the `get_config` function. This created a confusing situation where you weren't sure if you needed to call `get_config` or just use the config object. This method is replaced by a standalone `get_config` function which returns a singleton config object.

- Wrap CLI arg parsing (for `root`) and loading/migrating `invokeai.yaml` into the new `get_config()` function.

- Move `generate_config_docstrings` into standalone utility function.

- Make `root` a private attr (`_root`). This reduces the temptation to directly modify and/or use this sensitive field and ensures it is neither serialized nor read from input data. Use `root_path` to access the resolved root path, or `set_root` to set the root to something.
2024-03-19 09:24:28 +11:00
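A minimal sketch of the standalone singleton getter this refactor describes (body and import path are illustrative):

```py
from functools import lru_cache

from invokeai.app.services.config import InvokeAIAppConfig  # illustrative import path


@lru_cache(maxsize=1)
def get_config() -> InvokeAIAppConfig:
    """Return the app-wide config singleton, creating it on first call."""
    config = InvokeAIAppConfig()
    # Root from CLI args, invokeai.yaml loading, and v3 migration would happen here,
    # once, so callers never have to worry about import order or re-parsing.
    return config
```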
7387b0bdc9 install missing clip_vision encoders if required by an ip adapter (#5982)
Co-authored-by: Lincoln Stein <lstein@gmail.com>
2024-03-18 02:19:53 +00:00
7ea9cac9a3 Add sdxl controlnet models (#5980)
* allow removal of models with legacy relative path addressing

* added five controlnet models for sdxl to INITIAL_MODELS

---------

Co-authored-by: Lincoln Stein <lstein@gmail.com>
2024-03-18 01:15:15 +00:00
ea5bc94b9c Resolve when instantiating _cached_model_paths 2024-03-18 11:17:23 +11:00
a1743647b7 Stop registering and moving models which have symlinks in the models dir 2024-03-18 11:17:23 +11:00
a6d64f69e1 Remove core conversion models (#5981)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [X] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description

We've been using a forked copy of the diffusers safetensors->diffusers
model conversion code, which was hacked to read CLIP and the other
models needed for conversion from the local invokeai root models
directory. This was getting unsustainable as the code bases diverged,
and also required the installation and maintenance of the "core/convert"
directory.

This PR gets rid of the hacked conversion code and reverts to using the
native diffusers methods. Core convert models are no longer installed at
root configure time. Instead we rely on the HuggingFace hub system to
download the conversion models if and when they are needed. They are
relatively small and the initial delay seems minor.

Conversion of SD-1, SD-2 (both epsilon and v-prediction), SDXL, VAE and
ControlNet SD-1/2 models has been tested. ControlNet SDXL models are
still a WIP due to the need for some work on the prober.

The main implication of this change is that InvokeAI is no longer
internet-independent and will need an internet connection at least the
first time a safetensors file needs to be converted. However, there are
several other places where the "no internet" rule is already violated,
and I suggest that we abandon this principle.

## Related Tickets & Documents

- Related Issue #
- Closes #5964 

## QA Instructions, Screenshots, Recordings

1. Remove or move `$INVOKEAI_ROOT/models/.cache`
2. Move `$INVOKEAI_ROOT/models/core/convert`
3. Try generating with an unconverted .safetensors model.

## Merge Plan

Merge when approved.

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2024-03-18 11:11:15 +11:00
e74e78894f Merge branch 'main' into optimization/remove-core-conversion-models 2024-03-17 19:25:44 -04:00
71a1740740 Remove core safetensors->diffusers conversion models
- No longer install core conversion models. Use the HuggingFace cache to load
  them if and when needed.

- Call directly into the diffusers library to perform conversions with only shallow
   wrappers around them to massage arguments, etc.

- At root configuration time, do not create all the possible model subdirectories,
  but let them be created and populated at model install time.

- Remove checks for missing core conversion files, since they are no
  longer installed.
2024-03-17 19:13:18 -04:00
b79f2f337e Allow removal of models with legacy relative path addressing (#5979)
* allow removal of models with legacy relative path addressing

* fix ruff error

---------

Co-authored-by: Lincoln Stein <lstein@gmail.com>
2024-03-17 18:07:35 +00:00
a0420d1442 fix ruff error 2024-03-17 14:01:04 -04:00
a17021ba0c allow removal of models with legacy relative path addressing 2024-03-17 09:58:16 -04:00
faa1ffb06f Update diffusers 0.26.3 -> 0.27.0 and other HF packages 2024-03-16 05:44:58 -07:00
8c04eec210 fix initial main model logic 2024-03-15 10:22:16 -04:00
330e1354b4 Run typegen, update version to 4.0.0rc2 2024-03-14 17:01:36 -04:00
21621eebf0 feat(ui): handle control adapter processed images
- Add helper functions to build metadata for control adapters, including the processed images
- Update parses to parse the new metadata
2024-03-14 12:34:03 -07:00
c24f2046e7 chore(ui): typegen 2024-03-14 12:34:03 -07:00
297408d67e feat(nodes): add control adapter processed images to metadata
In the client, a controlnet or t2i adapter has two images:
- The source control image: the image the user selected (required)
- The processed control image: the user's image after we've processed it (optional)

The processed image is optional because a user may provide a pre-processed image.

We only actually use one of these images when building the graph, and until this change, we only stored one of them in image metadata. This created a situation where only a processed image was stored in metadata - say, a canny edge map - and the user-selected image wasn't provided.

By adding the processed image to metadata, we can recall both the control image and optional processed image.

This commit is followed by UI-facing changes to support it.
2024-03-14 12:34:03 -07:00
0131e7d928 fix(ui): recall control adapter metadata fields 2024-03-14 12:34:03 -07:00
06ff105a1f fix(ui): reset loras/control adapters when using recall all or remix 2024-03-14 12:34:03 -07:00
bb8e6bbee6 docs: update CONFIGURATION.md
Update model hashing docs
2024-03-15 00:14:48 +11:00
328dc99f3a fix(ui): log model load events
- Fix types
- Fix logging in listener
2024-03-14 18:29:55 +05:30
ef55077e84 feat(events): add submodel_type to model load events
This was lost during MM2 migration
2024-03-14 18:29:55 +05:30
ba3d8af161 fix(events): dump event payloads to serializable format 2024-03-14 18:29:55 +05:30
b07b7af710 feat(ui): single getModelConfigs query (#5962)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission

## Description

Single query, with simple wrapper hooks (type-safe). Updated everywhere
in frontend.

## QA Instructions, Screenshots, Recordings

Things that use models should work. All of this code is strictly
typechecked, so we can be confident in this change.

## Merge Plan

This PR can be merged when approved

2024-03-14 18:20:38 +05:30
19d66d5ec7 feat(ui): single getModelConfigs query
Single query, with simple wrapper hooks (type-safe). Updated everywhere in frontend.
2024-03-14 23:37:40 +11:00
ed20255abf fix(nodes): depth anything processor (#5956) (#5961)
We were passing a PIL image when we needed to pass the np image.

Closes #5956

## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Description

We were passing a PIL image when we needed to pass the np image.

Closes #5956

## Related Tickets & Documents

- Related Issue #
- Closes #5956

## QA Instructions, Screenshots, Recordings

Depth anything processor should work.

## Merge Plan

This PR can be merged when approved

2024-03-14 14:57:40 +05:30
fed1f983db fix(nodes): depth anything processor (#5956)
We were passing a PIL image when we needed to pass the np image.

Closes #5956
2024-03-14 20:14:53 +11:00
a386544a1d chore: ruff 2024-03-14 17:32:02 +11:00
0851de9090 closes #5932 2024-03-14 17:32:02 +11:00
1bd8e33f8c Work around missing core conversion model issue
- This adds additional logic to the safetensors->diffusers conversion script
  to check for and install missing core conversion models at runtime.

- Fixes #5934
2024-03-14 16:52:01 +11:00
e3f29ed320 tests: update default settings tests 2024-03-14 16:03:37 +11:00
3fd824306c feat(mm): probe for main model default settings
Currently, this is just the width and height, derived from the model base.
2024-03-14 16:03:37 +11:00
2584a950aa feat(ui): add w/h to default model settings 2024-03-14 16:03:37 +11:00
1adaf63253 chore(ui): typegen 2024-03-14 16:03:37 +11:00
b9f1a4bd65 feat(nodes): add w/h defaults for models 2024-03-14 16:03:37 +11:00
731942dbed feat(nodes): add constraints & descriptions to default settings 2024-03-14 16:03:37 +11:00
4117cea5bf tidy(mm): remove misplaced comment 2024-03-14 15:54:42 +11:00
21617f3bc1 docs: update description for hashing_algorithm in config 2024-03-14 15:54:42 +11:00
9fcd67b5c0 feat(mm): add algorithm prefix to hashes
For example:
- md5:a0cd925fc063f98dbf029eee315060c3
- sha1:9e362940e5603fdc60566ea100a288ba2fe48b8c
- blake3:ce3f0c5f3c05d119f4a5dcaf209b50d3149046a0d3a9adee9fed4c83cad6b4d0
2024-03-14 15:54:42 +11:00
a4be935458 docs: update config docs 2024-03-14 15:54:42 +11:00
eb6e6548ed feat(mm): faster hashing for spinning disk HDDs
BLAKE3 has poor performance on spinning disks when parallelized. See https://github.com/BLAKE3-team/BLAKE3/issues/31

- Replace `skip_model_hash` setting with `hashing_algorithm`. Any algorithm we support is accepted.
- Add `random` algorithm: hashes a UUID with BLAKE3 to create a random "hash". Equivalent to the previous skip functionality.
- Add `blake3_single` algorithm: hashes on a single thread using BLAKE3, fixes the aforementioned performance issue
- Update model probe to accept the algorithm to hash with as an optional arg, defaulting to `blake3`
- Update all calls of the probe to use the app's configured hashing algorithm
- Update an external script that probes models
- Update tests
- Move ModelHash into its own module to avoid circular import issues
2024-03-14 15:54:42 +11:00
8287fcf097 feat: ✏️ rename "Workflow Editor" tab label to "Workflows" 2024-03-14 12:22:23 +11:00
dd475e28ed chore(ui): remove unused translation keys via script 2024-03-14 11:38:29 +11:00
24e741e2d1 feat(ui): add script to clean translations
This script removes unused translations from the `en.json` source translation file:
- Parse `en.json` to build a list of all keys, e.g. `controlnet.depthAnything`
- Check every frontend file for every key
- If the key is not found, it is removed from the translation file
- Keys matched either exactly (e.g. `controlnet.depthAnything`) or by stem (e.g. `depthAnything`) are treated as used and kept
2024-03-14 11:38:29 +11:00
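A rough sketch of the key-scanning approach described above (the paths are assumptions):

```py
import json
from pathlib import Path


def collect_keys(obj: dict, prefix: str = "") -> list[str]:
    """Flatten nested translation JSON into dotted keys like 'controlnet.depthAnything'."""
    keys = []
    for name, value in obj.items():
        dotted = f"{prefix}.{name}" if prefix else name
        if isinstance(value, dict):
            keys.extend(collect_keys(value, dotted))
        else:
            keys.append(dotted)
    return keys


source = Path("invokeai/frontend/web/public/locales/en.json")
all_keys = collect_keys(json.loads(source.read_text()))

# Concatenate every frontend source file once, then test each key against it.
code = "\n".join(p.read_text(errors="ignore") for p in Path("invokeai/frontend/web/src").rglob("*.ts*"))

# A key is kept if either its full dotted form or its final stem appears anywhere in the code.
unused = [k for k in all_keys if k not in code and k.split(".")[-1] not in code]
print(f"{len(unused)} unused keys")
```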
e0bf9ce5c6 tidy(ui): use normal quotes in translations 2024-03-14 11:38:29 +11:00
c66e8b395e fix(ui): remove unused input on depth anything processor node 2024-03-14 10:53:57 +11:00
4c417adc82 fix(ui): use revised metadata model types
We can also totally remove the fetch logic because we store the same model data in state now.
2024-03-14 10:53:57 +11:00
437a413ca3 chore(ui): typegen 2024-03-14 10:53:57 +11:00
4492bedd19 tidy(nodes): use ModelIdentifierField for model metadata
Until recently, this had a different shape than the ModelMetadataField. They are now the same, so we can re-use the ModelIdentifierField.
2024-03-14 10:53:57 +11:00
db12ce95a8 fix(ui): invalid collect node error w/ control adapters
The graph builders used awaited functions within `Array.prototype.forEach` loops. This doesn't do what you'd think: `forEach` does not await async callbacks, so graphs were being enqueued before they were fully constructed.

 Changed to `for..of` loops to fix this.
2024-03-14 10:53:57 +11:00
ee3a1a95ef fix(ui): control adapters require control images
There wasn't enough validation of control adapters during graph building. It would be possible for a graph to be built with an empty collect node, causing an error. Addressed with an extra check.

This should never happen in practice, because the invoke button should be disabled if an invalid CA is active.
2024-03-14 10:53:57 +11:00
4bb5aba70e feat(ui): only fetch TIs on first load, add comment 2024-03-14 07:38:09 +11:00
cd55c23713 initiate TI model query when socket connects so the user doesn't have to wait when opening prompt trigger phrases 2024-03-14 07:38:09 +11:00
1d2743af1b remove log 2024-03-14 07:25:48 +11:00
99d2099ccd add key for controladapter CustomSelect too 2024-03-14 07:25:48 +11:00
b64a693f16 try adding a key to force rerender when items load 2024-03-14 07:25:48 +11:00
9d523a3094 chore: cleanup DepthAnything code (#5945)
## What type of PR is this? (check all applicable)

- [x] Optimization

## Description

Was merged into next but never carried over to main. So cleaning up
again.
2024-03-13 20:46:54 +05:30
af660163ca chore: cleanup DepthAnything code 2024-03-13 20:35:52 +05:30
7e4b462fca docs: OVERVIEW.md typo 2024-03-13 22:43:20 +11:00
4468dd6948 docs: update OVERVIEW.md
Update pkg scripts.
2024-03-13 22:43:20 +11:00
4f39e248dd docs: update OVERVIEW.md
Fix links
2024-03-13 22:43:20 +11:00
44b3e5d43f docs: update INVOCATION_API.md
Add blurb about `WithMetadata` and `WithBoard` mixins.
2024-03-13 22:43:20 +11:00
8894a9e48a docs: update WORKFLOWS.md 2024-03-13 22:43:20 +11:00
c73f58e486 docs: move frontend docs to mkdocs 2024-03-13 22:43:20 +11:00
614fece147 chore(ui): prettier 2024-03-13 21:02:29 +11:00
8ef8082d65 feat(ui): style add model panel 2024-03-13 21:02:29 +11:00
d93d4afbb7 feat(ui): style HF scan tab 2024-03-13 21:02:29 +11:00
01207a2fa5 fix(mm): config.json indicates a diffusers model 2024-03-13 21:02:29 +11:00
d0800c4888 ui consistency, moved is_diffusers logic to backend, extended HuggingFaceMetadata, removed logic from service 2024-03-13 21:02:29 +11:00
2a300ecada updated add model copy, added search to hugging face results 2024-03-13 21:02:29 +11:00
90340a39c7 clean up python errors 2024-03-13 21:02:29 +11:00
ee77abb4fe updated simple install button to match other tabs 2024-03-13 21:02:29 +11:00
004bca5c42 updated endpoint types 2024-03-13 21:02:29 +11:00
5ad048a161 fixed error handling 2024-03-13 21:02:29 +11:00
6369ccd05e added placeholders, updated some copy 2024-03-13 21:02:29 +11:00
3a5314f1ca install model if diffusers or single file, cleaned up backend logic to not mess with existing model install 2024-03-13 21:02:29 +11:00
4c0896e436 removed log 2024-03-13 21:02:29 +11:00
f7cd3cf1f4 added hf models import tab and route for getting available hf models 2024-03-13 21:02:29 +11:00
efea1a8a7d ci: add always_run input to checks & tests, use this on release workflow
This bypasses the `changed-files` check, and forces the checks to run. The release workflow sets this flag to ensure that the checks and tests are always run for a release.
2024-03-13 15:32:42 +11:00
d0d695c020 disable trigger phrase form if empty 2024-03-12 21:08:15 -04:00
2a648da557 updated model manager to display when import item is cancelled 2024-03-13 09:18:05 +11:00
54f1a1f952 Update l2i invoke and seamless to support AutoencoderTiny, remove attention processors if no mid_block is detected (#5936)

## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [x] No


## Description
L2i throws an assertion error when run with `madebyollin/taesdxl` due to
it requiring a different class in diffusers to load it. This is a small
PR to update seamless and l2i to accept AutoencoderTiny models and not
throw exceptions while processing them.

## QA Instructions, Screenshots, Recordings

<img width="445" alt="Screenshot 2024-03-12 at 12 04 29 PM"
src="https://github.com/invoke-ai/InvokeAI/assets/58442074/34a17e44-d911-4fef-8fc1-71f7b688688c">
Run an sdxl pipeline using a vae that requires AutoencoderTiny and
validate that the image successfully encodes and decodes.

## Merge Plan

This PR can be merged when approved
2024-03-12 21:52:32 +05:30
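The PR above is about accepting `AutoencoderTiny` VAEs in l2i and seamless, and about skipping attention-processor handling when the decoder has no `mid_block`. Below is a minimal sketch of that kind of guard, assuming the `AutoencoderKL` and `AutoencoderTiny` classes from diffusers; the function and variable names are illustrative, not the actual InvokeAI code.

```python
# Sketch only: accept both AutoencoderKL and AutoencoderTiny, and only touch
# attention processors when the decoder actually has a mid_block.
from diffusers import AutoencoderKL, AutoencoderTiny


def prepare_vae_for_seamless(vae) -> None:
    if not isinstance(vae, (AutoencoderKL, AutoencoderTiny)):
        raise TypeError(f"Unsupported VAE type: {type(vae).__name__}")
    # AutoencoderTiny decoders have no mid_block, so there are no attention
    # processors to remove; skip that step entirely in that case.
    if getattr(vae.decoder, "mid_block", None) is not None:
        pass  # set or remove attention processors here
```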
8d2a4db902 Found another instance of expecting a mid_block on the decoder in a vae 2024-03-12 12:11:38 -04:00
7b393656de Update l2i invoke and seamless to support AutoencoderTiny, remove attention processors if no mid_block is detected 2024-03-12 12:00:24 -04:00
43948e0758 feat(ui): add setting for always show image size badge 2024-03-12 18:52:23 +11:00
cc03fcbcb6 style(ui): tweak image dimension badge overlay styles 2024-03-12 18:52:23 +11:00
d1e445fa49 fix(ui): changed to theme tokens 2024-03-12 18:52:23 +11:00
adba8489f2 fix(ui): made changes to avoid overlapping 2024-03-12 18:52:23 +11:00
d919022ba5 fix(ui): fixed requested changes and made the badge display on hover 2024-03-12 18:52:23 +11:00
e076898798 fix(ui): logic to remove badge for small image size 2024-03-12 18:52:23 +11:00
9f19b766a4 feat(ui): Add image size badge to gallery images 2024-03-12 18:52:23 +11:00
4688623711 ci: add missing permission to release workflow 2024-03-12 10:16:38 +11:00
be951da99d {release} 4.0.0rc1 2024-03-12 10:05:03 +11:00
9ee2e7ff25 Do not override log_memory_usage when debug logs are enabled. The speed cost of log_memory_usage=True is large. It is common to want debug log without enabling log_memory_usage. 2024-03-12 09:48:50 +11:00
149ff758b9 Run ruff 2024-03-11 15:53:00 -04:00
65d415d5aa Remove redundant with_suffix call 2024-03-11 15:53:00 -04:00
c74c1927ec Gracefully error without deleting invokeai.yaml 2024-03-11 15:53:00 -04:00
c454ccc65c Run ruff 2024-03-11 15:53:00 -04:00
46fd3465ce Skip list logic if the list only contains primitives 2024-03-11 15:53:00 -04:00
97afa6e2a6 Allow lists of basemodel objects in omegaconf 2024-03-11 15:53:00 -04:00
96730107d1 chore(py): bump mkdocs deps 2024-03-12 02:21:43 +11:00
6a9dede66f chore: bump app deps
- `fastapi-events`: 0.10.1 -> 1.11.0
- `fastapi`: 0.109.2 -> 0.110.0
- `pydantic-settings`: 2.1.0 -> 2.2.1
- `pydantic`: 2.6.1 -> 2.6.3
- `uvicorn`: 0.27.1 -> 0.28.0
2024-03-12 02:21:43 +11:00
8c2ff794d5 fix(nodes): ip adapter uses valid ModelIdentifierField for image encoder model
- Add class method to `ModelIdentifierField` to construct the field from a model config
- Use this to construct a valid IP adapter model field
2024-03-10 17:28:58 -05:00
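The commit above adds a class method that builds a `ModelIdentifierField` from a model config. A rough sketch of what such a constructor could look like, assuming a pydantic model with the enriched key/hash/name/base/type fields; class and attribute names here are illustrative rather than the actual implementation.

```python
# Sketch only: an identifier field constructed from a model config record.
from pydantic import BaseModel


class ModelIdentifierField(BaseModel):
    key: str
    hash: str
    name: str
    base: str
    type: str

    @classmethod
    def from_config(cls, config) -> "ModelIdentifierField":
        # `config` is any object exposing the same attributes,
        # e.g. a model config record from the model manager.
        return cls(
            key=config.key,
            hash=config.hash,
            name=config.name,
            base=config.base,
            type=config.type,
        )
```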
145bb45858 Remove dead code related to an old symmetry feature. 2024-03-10 00:13:18 -06:00
9376b13435 fix(mm): models lose file extension when syncing
We were stripping the file extension from single-file models when moving them in `_sync_model_path`. For example, `some_model.safetensors` would be moved to `some_model`, which of course breaks things.

Instead of using the model's name as the new path, use the model's path's last segment. This is the same behaviour for directories, but for files, it retains the file extension.
2024-03-10 13:36:09 +11:00
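As the commit above explains, using the model's name as the destination strips the extension for single-file models, while using the last segment of the existing path keeps it. A tiny illustration with `pathlib` (the paths are hypothetical):

```python
from pathlib import Path

old_path = Path("/data/models/sd-1/lora/some_model.safetensors")
models_dir = Path("/data/models/sd-1/lora")

# Using the model name drops the extension:
broken = models_dir / "some_model"       # .../some_model

# Using the path's last segment keeps it, and works the same for directories:
fixed = models_dir / old_path.name       # .../some_model.safetensors
```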
eec82afd89 fix(mm): fix models.yaml backup filename
Was erroneously `models.bak`, now `models.yaml.bak`
2024-03-10 13:36:09 +11:00
c47dbf7258 docs(mm): format docstrings for ModelSearch 2024-03-10 12:09:47 +11:00
92b2e8186a tidy(mm): simplify types for ModelSearch
- Use `set` instead of `Set`
- Methods accept only `Path`s
2024-03-10 12:09:47 +11:00
70a88c6b99 docs(mm): update docstrings for ModelSearch 2024-03-10 12:09:47 +11:00
56e7c04475 tidy(mm): remove extraneous dependencies in model search
- `config` is unused
- `stats` is created on instantiation
- `logger` uses the app logger
2024-03-10 12:09:47 +11:00
bd5b43c00d tidy(mm): ModelSearch cleanup
- No need for it to be a pydantic model. Just a class now.
- Remove ABC, it made it hard to understand what was going on as attributes were spread across the ABC and implementation. Also, there is no other implementation.
- Add tests
2024-03-10 12:09:47 +11:00
631e789195 fix(canvas): create masked latents when None 2024-03-10 11:58:41 +11:00
133c90e116 fix(ui): update all components and logic to use enriched ModelIdentifierField 2024-03-10 11:03:38 +11:00
4433b78e59 chore(ui): typegen 2024-03-10 11:03:38 +11:00
daeb766468 feat(api): add ModelIdentifierField to openapi schema
- Also add `ProgressImage`
2024-03-10 11:03:38 +11:00
92b0d13d0e feat(nodes): "ModelField" -> "ModelIdentifierField", add hash/name/base/type 2024-03-10 11:03:38 +11:00
67d26cd633 docs: update CONFIGURATION.md 2024-03-10 10:38:52 +11:00
9e28317a12 docs: add DATABASE.md 2024-03-10 10:38:52 +11:00
5b51ebf1c4 docs: regenerate config docstrings 2024-03-10 10:38:52 +11:00
59228643a9 docs: skip_model_hash -> model install category, use_memory_db -> development category 2024-03-10 10:38:52 +11:00
b24657df11 docs: roll back adding examples to config docstrings
This isn't valid docstring syntax and breaks the autogeneration
2024-03-10 10:38:52 +11:00
d4686b7f64 fix(mm): yaml migration fixup
- If the metadata yaml has an invalid version, exit the app. If we don't, the app will crawl the models dir and add models to the db without having first parsed `models.yaml`. This should not happen often, as the vast majority of users are on v3.0.0 models.yaml files.
- Fix off-by-one error with models count (need to pop the `__metadata__` stanza)
- After a successful migration, rename `models.yaml` to `models.yaml.bak` to prevent the migration logic from re-running on subsequent app startups.
2024-03-09 08:37:45 -06:00
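A rough sketch of the migration guard described above, assuming a v3 `models.yaml` whose top level holds a `__metadata__` stanza plus one entry per model; the version check, counting, and rename are illustrative, not the exact InvokeAI code.

```python
# Sketch only: validate, count, and retire a legacy models.yaml.
from pathlib import Path

import yaml  # pyyaml


def migrate_models_yaml(path: Path) -> int:
    data = yaml.safe_load(path.read_text())
    meta = data.pop("__metadata__", {})        # pop so it isn't counted as a model
    if str(meta.get("version")) != "3.0.0":
        raise SystemExit(f"Unsupported models.yaml version: {meta.get('version')}")
    model_count = len(data)                    # off-by-one avoided by the pop above
    # ... register each remaining entry with the model manager here ...
    path.rename(path.with_name(path.name + ".bak"))  # stop the migration re-running
    return model_count
```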
67163c2224 fix(mm): only move model files if necessary
The old logic to check if a model needed to be moved relied on the model path being a relative path. Paths are now absolute, causing this check to fail. We then assumed the paths were different and moved the model from its current location to, well, its current location.

Use a more resilient method to check if a model should be moved.
2024-03-09 22:58:26 +11:00
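A sketch of that more resilient check: compare fully resolved paths instead of relying on relative paths, so two representations of the same location don't trigger a pointless move (function name is illustrative).

```python
from pathlib import Path


def model_needs_move(current_path: Path, desired_path: Path) -> bool:
    # Resolving both sides makes absolute and relative spellings of the
    # same location compare equal.
    return current_path.resolve() != desired_path.resolve()
```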
f01e81d382 Run ruff 2024-03-08 18:46:17 -05:00
a50e0a4802 use correct key name from yaml 2024-03-08 18:46:17 -05:00
df0a5aa92a pass config_path to migration path, make sure it uses absolute path 2024-03-08 18:46:17 -05:00
0bd9a0a9ea Add ability to provide config examples in docs 2024-03-08 16:31:39 -05:00
4ae2cd242e Update to include remote_api_tokens in the config docs 2024-03-08 16:31:39 -05:00
0696094d95 tests: fix tests
The tests were testing deprecated settings (not the settings themselves, just the class's functionality).
2024-03-08 16:31:39 -05:00
fb1ae55010 docs: update CONFIGURATION.md to use autogenerated docs 2024-03-08 16:31:39 -05:00
deb1d4eb14 docs: run script to update config class's docstring 2024-03-08 16:31:39 -05:00
d156fd2093 tests: validate config docstring is current 2024-03-08 16:31:39 -05:00
c41e87160a scripts: add script to update config docstring
- Add script to call config docstring helper function and write the docstring to the file directly
- Add `make` target for this script
2024-03-08 16:31:39 -05:00
eba1fc1355 docs: autogenerated app config docs
mkdocs can autogenerate python class docs from its docstrings. Our config is a pydantic model.

It's tedious and error-prone to duplicate docstrings from the pydantic field descriptions to the class docstrings.

- Add helper function to generate a mkdocs-compatible docstring from the InvokeAIAppConfig class fields
2024-03-08 16:31:39 -05:00
96702c395e feat(config): add deprecated category for config settings
It's not clear why these are still in the config class.
2024-03-08 16:31:39 -05:00
3361aec065 docs(nodes): update config field descriptions 2024-03-08 16:31:39 -05:00
8ba4b2a150 Run ruff 2024-03-08 15:36:14 -05:00
df12e12e09 Run ruff 2024-03-08 15:36:14 -05:00
ee38fbe89c Remove check for models dir in model deletion, update tests to always assume the model path is an absolute path 2024-03-08 15:36:14 -05:00
6e2cef1db5 Remove instances making models relative to the model dir 2024-03-08 15:36:14 -05:00
b1f5ac4548 fix path 2024-03-08 15:36:14 -05:00
e52274ecac Experiment with using absolute paths within model management 2024-03-08 15:36:14 -05:00
66f0ff5b13 add ordering to search_by_attr that is used for model lists 2024-03-08 13:38:38 -06:00
cab5b64f0b only render convert button if ckpt model 2024-03-08 13:19:08 -06:00
a42812d78d ui(model_manager): Remember Scan Path 2024-03-08 14:05:57 -05:00
281222df3c remove old data migration from previous schema version 2024-03-08 13:10:27 -05:00
d5674150fa ruff 2024-03-08 13:02:04 -05:00
0cb2cf6644 wrap version check in try/except 2024-03-08 13:02:04 -05:00
da87266c9c remove log 2024-03-08 13:02:04 -05:00
35731a6f51 fix null description, add logging 2024-03-08 13:02:04 -05:00
a3dfa161a8 Run ruff 2024-03-08 13:02:04 -05:00
42d606f07c use register instead of heuristic import, get rid of typing warnings 2024-03-08 13:02:04 -05:00
9063b1ae61 on model manager start, look to see if yaml needs to be migrated and do it if so 2024-03-08 13:02:04 -05:00
6aae88bd88 Rerun typegen 2024-03-08 12:44:58 -05:00
57c1954da7 feat(ui): use control adapter processor helper in metadata parser 2024-03-08 12:44:58 -05:00
2410ed689a tests(mm): add tests for control adapter probe default settings 2024-03-08 12:44:58 -05:00
a10dccdd43 fix(mm): fix bug in control adapter probe default settings
Wasn't checking for matches correctly.
2024-03-08 12:44:58 -05:00
a3570901f7 fix(ui): do not show default settings for refiner models 2024-03-08 12:44:58 -05:00
fd457955bc feat(ui): update default settings for control adapters
- Split out main model defaults
- Add controlnet/t2i defaults (which includes only the preprocessor)
2024-03-08 12:44:58 -05:00
1f69613f5d chore(ui): typegen 2024-03-08 12:44:58 -05:00
7a87ebb3b2 fix(mm): add control adapter default settings to ModelRecordChanges schema
This is needed to update Control Adapter defaults.
2024-03-08 12:44:58 -05:00
4ee4a801c6 feat(ui): update default settings for main models
Needed some massaging now that only main models get main model default settings.
2024-03-08 12:44:58 -05:00
53b7f6be37 feat(ui): use default settings for control adapters for processor 2024-03-08 12:44:58 -05:00
dbd7c94e7c chore(ui): typegen 2024-03-08 12:44:58 -05:00
50bb9a6b41 fix(mm): remove default settings from IP adapter config 2024-03-08 12:44:58 -05:00
13bb3c5e15 feat(mm): add control adapter default settings while probing 2024-03-08 12:44:58 -05:00
80c2a4b925 feat(mm): add AnyDefaultSettings union 2024-03-08 12:44:58 -05:00
8ce485b036 feat(mm): add default settings for control adapters
Only includes `preprocessor` at this time.
2024-03-08 12:44:58 -05:00
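A minimal sketch of what a control-adapter default-settings model containing only a preprocessor might look like, together with the `AnyDefaultSettings` union added in the sibling commit. All class and field names here are illustrative; the main-model fields in particular are assumptions, not the real schema.

```python
from typing import Optional, Union

from pydantic import BaseModel


class ControlAdapterDefaultSettings(BaseModel):
    # Only the preprocessor is included at this time, per the commit above.
    preprocessor: Optional[str] = None


class MainModelDefaultSettings(BaseModel):
    # Illustrative fields; the real main-model defaults may differ.
    vae: Optional[str] = None
    scheduler: Optional[str] = None
    steps: Optional[int] = None
    cfg_scale: Optional[float] = None


# Sketch of the AnyDefaultSettings union.
AnyDefaultSettings = Union[MainModelDefaultSettings, ControlAdapterDefaultSettings]
```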
6fc3e86061 tidy(mm): only main models get the main default settings 2024-03-08 12:44:58 -05:00
33ded359e6 Run typegen 2024-03-08 11:10:44 -05:00
effbd8a1ba chore: ruff 2024-03-08 11:10:44 -05:00
ddde355b09 fix(mm): add ui_type to model fields
Recently the schema for models was changed to a generic `ModelField`, and the UI was unable to derive the type of those fields. This didn't affect functionality, but it did break the styling of handles.

Add `ui_type` to the affected fields and update the UI to use the correct capitalizations.
2024-03-08 11:10:44 -05:00
fe2c6f621a fix(ui): do not allow model add when no location is provided 2024-03-08 14:41:03 +11:00
d0fcdbe8a3 tweak(ui): simplify layout of inplace install form elements 2024-03-08 14:41:03 +11:00
a28547b3dd make inplace optional, default to true 2024-03-08 14:41:03 +11:00
c7b2bdb846 allow inplace installs 2024-03-08 14:41:03 +11:00
4a20377fef tidy(config): move version "setting" to new CLIArgs category
It's not actually a setting.
2024-03-08 13:59:59 +11:00
ed803640f7 tidy(mm): move remote_api_tokens to new ModelInstall category 2024-03-08 13:59:59 +11:00
576bb4a61d feat(mm): support generic API tokens via regex/token pairs in config
A list of regex and token pairs is accepted. As a file is downloaded by the model installer, the URL is tested against the provided regex/token pairs. The token for the first matching regex is used during download, added as a bearer token.
2024-03-08 13:59:59 +11:00
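A sketch of the matching behaviour described above: the download URL is tested against each regex in order, and the token for the first match is sent as a bearer token. The config shape and helper names are illustrative, not the actual installer code.

```python
import re
from typing import NamedTuple, Optional


class URLRegexTokenPair(NamedTuple):
    url_regex: str
    token: str


def bearer_headers_for(url: str, pairs: list[URLRegexTokenPair]) -> Optional[dict[str, str]]:
    for pair in pairs:
        if re.search(pair.url_regex, url):
            # First matching regex wins; sent as an Authorization: Bearer header.
            return {"Authorization": f"Bearer {pair.token}"}
    return None


headers = bearer_headers_for(
    "https://huggingface.co/some-org/some-model/resolve/main/model.safetensors",
    [URLRegexTokenPair(url_regex=r"huggingface\.co", token="hf_xxx")],
)
```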
b6065d6328 Run ruff with newest version of ruff 2024-03-08 13:59:59 +11:00
04229f4a21 Run ruff 2024-03-08 13:59:59 +11:00
73a190fb6e Add remote_repo_api_key config to be added as a token query param for all remote url model downloads 2024-03-08 13:59:59 +11:00
952d97741e Remove civit ai from tests and documentation 2024-03-08 13:59:59 +11:00
afd08c5f46 Regenerate typegen 2024-03-08 13:59:59 +11:00
d1f859a446 Remove civit AI model install resources 2024-03-08 13:59:59 +11:00
5118160282 docs(mm): update comment about model images 2024-03-08 12:26:35 +11:00
8e694992bb chore(ui): lint 2024-03-08 12:26:35 +11:00
4077dfe0c3 fix(ui): clear pending trigger phrase immediately
If we don't clear it, there's an awkward flash of error state as the mutation completes.
2024-03-08 12:26:35 +11:00
fe8e391aad fix(ui): display trigger phrases for loras in mm editor 2024-03-08 12:26:35 +11:00
ac8f606d99 fix(ui): default settings linked incorrectly 2024-03-08 12:26:35 +11:00
0aa2070ce0 perf(mm): add manual query cache updates for the update model route
This greatly reduces the number of network requests when editing models.
2024-03-08 12:26:35 +11:00
ff66779aa3 tweak(ui): add colors to base/format badges 2024-03-08 12:26:35 +11:00
2ca65ab9fa tweak(ui): style trigger phrases 2024-03-08 12:26:35 +11:00
b34624a2a8 tweak(ui): style model edit 2024-03-08 12:26:35 +11:00
b8aa9752f1 tweak(ui): update default settings layouts 2024-03-08 12:26:35 +11:00
1b5d8eb9e7 tweak(ui): use check icon for model save button 2024-03-08 12:26:35 +11:00
773182f425 fix(ui): reset model edit form state with new values
Without this, the form will incorrectly compare its state to its initial default values to determine if it is dirty. Instead, it should reset its default values to the new values after successful submit.
2024-03-08 12:26:35 +11:00
6386109fc5 feat(ui): move model save/close buttons to model header 2024-03-08 12:26:35 +11:00
c008704bc8 feat(ui): model header styling 2024-03-08 12:26:35 +11:00
a3a42d25d3 fix(mm): model images reload when changed
When we change a model image, its URL remains the same. The browser will aggressively cache the image. The easiest way to fix this is to append a random query parameter to the URL whenever we build a model config in the API.
2024-03-08 12:26:35 +11:00
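The commit above works around aggressive browser caching by appending a random query parameter to the image URL each time a model config is built. A minimal sketch of that idea (the helper name is hypothetical):

```python
from uuid import uuid4


def cache_busted_image_url(base_url: str) -> str:
    # Same path, new query string every time, so the browser refetches the image.
    return f"{base_url}?{uuid4().hex}"
```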
8959d1bf51 fix(ui): do not persist model manager state 2024-03-08 12:26:35 +11:00
8fd9342712 fix(ui): typing issues related to trigger phrase changes 2024-03-08 12:26:35 +11:00
f0b815aa9b fix(ui): missing translation 2024-03-08 12:26:35 +11:00
3a5b0b819c chore(ui): typegen 2024-03-08 12:26:35 +11:00
bbcbcd9b63 fix(mm): only loras and main models get trigger_phrases 2024-03-08 12:26:35 +11:00
fdecb886b2 feat(ui): add main model trigger phrases 2024-03-08 12:26:35 +11:00
2f0a653a7f feat(ui): improved model list styling 2024-03-08 12:26:35 +11:00
b0add805c5 feat(ui): use stickyscrollable for models list 2024-03-08 12:26:35 +11:00
ed4e8624dd feat(ui): model manager UI tweaks
- Move image display to left
- Move description into model header
- Move model edit & convert buttons to top right of model header
- Tweak styles for model display component
2024-03-08 12:26:35 +11:00
ad70cdfe87 feat: undo/redo discard canvas staged image 2024-03-07 19:24:55 +11:00
549d461107 refactor: 🚨 satisfy the linter 2024-03-07 19:24:55 +11:00
cab3748010 feat: discard current inpaint item 2024-03-07 19:24:55 +11:00
779b3e0e8e tidy(ui): remove npm lockfile 2024-03-06 21:57:41 -05:00
9b48029bc9 tidy(mm): ModelImages service 2024-03-06 21:57:41 -05:00
347f1fd0b7 fix tests 2024-03-06 21:57:41 -05:00
4af5a09a68 cleanup 2024-03-06 21:57:41 -05:00
8df02623f2 cleanup 2024-03-06 21:57:41 -05:00
aa88fadc30 use webp images 2024-03-06 21:57:41 -05:00
8411029d93 get model image url from model config, added thumbnail formatting for images 2024-03-06 21:57:41 -05:00
239b1e8cc7 moved upload image field and added delete image functionality 2024-03-06 21:57:41 -05:00
8a68355926 got model images displaying, still need to clean up types and unused code 2024-03-06 21:57:41 -05:00
86aef9f31d removed modelimage for now 2024-03-06 21:57:41 -05:00
2f6964bfa5 fetching model image, still not working 2024-03-06 21:57:41 -05:00
c1cdfd132b moved model image to edit page, added model_images service 2024-03-06 21:57:41 -05:00
f6bfe5e6f2 created ugly model image upload component 2024-03-06 21:57:41 -05:00
b5a8455b5f translationBot(ui): update translation (Russian)
Currently translated at 94.6% (1431 of 1512 strings)

translationBot(ui): update translation (Russian)

Currently translated at 94.6% (1431 of 1512 strings)

Co-authored-by: Васянатор <ilabulanov339@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ru/
Translation: InvokeAI/Web UI
2024-03-07 11:47:01 +11:00
645ef081ea translationBot(ui): update translation (Italian)
Currently translated at 98.0% (1487 of 1516 strings)

translationBot(ui): update translation (Italian)

Currently translated at 98.0% (1482 of 1512 strings)

translationBot(ui): update translation (Italian)

Currently translated at 98.0% (1475 of 1505 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2024-03-07 11:47:01 +11:00
e68d7fa6d7 fix(ui): update types 2024-03-07 10:56:59 +11:00
c5ab1c7ad6 chore(ui): typegen 2024-03-07 10:56:59 +11:00
5a561cab78 fix(ui): typo 2024-03-07 10:56:59 +11:00
132790eebe tidy(nodes): use canonical capitalizations 2024-03-07 10:56:59 +11:00
c57f6ee885 fix(ui): fix metadata for graphs to use new enriched format 2024-03-07 10:56:59 +11:00
d4a2ea68fc chore(ui): typegen 2024-03-07 10:56:59 +11:00
528ac5dd25 refactor(nodes): model identifiers
- All models are identified by a key and optionally a submodel type via new model `ModelField`. Previously, a few model types had their own class, but not all of them. This inconsistency just added complexity without any benefit.
- Update all invocations to use the new format.
- In the node API, models are loaded by key or an instance of `ModelField` as a convenience.
- Add an enriched model schema for metadata. It includes key, hash, name, base and type.
2024-03-07 10:56:59 +11:00
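The refactor above identifies every model by a key plus an optional submodel type, and lets the node API accept either a bare key or a full `ModelField`. A sketch of that shape under those assumptions; the enum members and loader are illustrative only.

```python
from enum import Enum
from typing import Optional, Union

from pydantic import BaseModel


class SubModelType(str, Enum):
    UNET = "unet"
    VAE = "vae"
    TEXT_ENCODER = "text_encoder"


class ModelField(BaseModel):
    key: str
    submodel_type: Optional[SubModelType] = None


def load_model(identifier: Union[str, ModelField]) -> ModelField:
    # Convenience: accept a bare key or a full ModelField.
    field = ModelField(key=identifier) if isinstance(identifier, str) else identifier
    # ...look up the config record by field.key and load field.submodel_type if set
    return field
```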
afd9ae7712 tidy(mm): remove convenience methods from high level model manager service
These were added as a hold-me-over for the nodes API changes and are no longer needed. A followup commit will fix the nodes API to not rely on these.
2024-03-07 10:56:59 +11:00
4eefed12f0 refactor: 🚨 please the almighty linter 2024-03-07 10:44:40 +11:00
4301a3d6fd feat: invert scroll direction for brush size 2024-03-07 10:44:40 +11:00
99c0662e3f fix(nodes): load config before doing anything else
This was preventing custom nodes from loading if a custom nodes dir was specified

Closes #5862
2024-03-07 10:36:27 +11:00
cdc0d0c182 add config_path to ModelRecordChanges 2024-03-07 10:29:29 +11:00
a00369a67a add config path as field in model update form when model is a checkpoint 2024-03-07 10:29:29 +11:00
4f096ac3ba feat(scripts): add frontend-types to Makefile to generate types 2024-03-07 10:16:44 +11:00
f5e3341465 feat(scripts): add support for file path & stdin to frontend typegen script 2024-03-07 10:16:44 +11:00
474852ef7e feat(scripts): add script to generate openapi schema 2024-03-07 10:16:44 +11:00
b1d72d411e only show default settings on main models 2024-03-07 09:07:43 +11:00
46614ee28f lint fix 2024-03-06 15:06:27 -05:00
b019f9bb8b make sure all metadata in viewer is rendered at correct font size - specifically fixes control adapter metadata being too big 2024-03-06 15:06:27 -05:00
b857692073 update uploads from canvas to controlnet to be intermediates so they do not appear in gallery 2024-03-06 15:06:27 -05:00
90fb7a1a59 move linear tab to be first on workflow edit mode 2024-03-06 15:06:27 -05:00
56fcf6af78 empty state for workflow with no linear fields in view mode 2024-03-06 15:06:27 -05:00
c4fe7e697b add right-padding to prompt textareas so that text does not go behind icons 2024-03-06 15:06:27 -05:00
2fd483dfc8 use base.800 on invokeBlue.400 for all gallery selected states 2024-03-06 15:06:27 -05:00
b9a9507422 update padding in color picker 2024-03-06 15:06:27 -05:00
f2744fd7d1 fix URL for get image workflow (#5882)
Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2024-03-06 12:46:16 -05:00
fe6e879d38 fix(ui): invalidate InvocationCacheStatus query cache after clearing intermediates 2024-03-06 08:14:12 -05:00
d3ab08fe10 tests: add invocation cache tests 2024-03-06 08:14:12 -05:00
b0615bdfd4 fix(nodes): correctly serialize outputs
In order for delete by match to work, we need the whole invocation output to be stringified.

For some reason, the serialization of the output was set to only include the `type` field. It should instead include the whole output.

I don't understand how this ever worked unless pydantic had different serialization behaviour in v1 (though it appears to have been the same).

Closes #5805
2024-03-06 08:14:12 -05:00
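The fix above is about serializing the entire invocation output rather than only its `type` field, so that delete-by-match can search the full JSON. A small pydantic illustration; the output class is hypothetical.

```python
from pydantic import BaseModel


class ImageOutput(BaseModel):
    type: str = "image_output"
    image_name: str
    width: int
    height: int


output = ImageOutput(image_name="foo.png", width=512, height=512)

# What delete-by-match needs: the whole output as JSON...
full_json = output.model_dump_json()
# {"type":"image_output","image_name":"foo.png","width":512,"height":512}

# ...not a serialization limited to the `type` field:
type_only = output.model_dump_json(include={"type"})
# {"type":"image_output"}
```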
bab20467fb fix(nodes): fix invocation cache clear method args 2024-03-06 08:14:12 -05:00
e24624109e fix(nodes): fix invocation cache ABC typing 2024-03-06 08:14:12 -05:00
458e7185b8 fix: 🐛 didn't include renamed file 2024-03-06 20:06:14 +11:00
a95128f5f2 refactor: ✏️ canvas mask compositor naming
changes `...MaskCompositer` spelling to `...MaskCompositor`
2024-03-06 20:06:14 +11:00
46f32c5e3c Remove references to the no longer existing invokeai.app.services.model_metadata package 2024-03-05 19:58:25 -05:00
420 changed files with 13892 additions and 29512 deletions

View File

@ -1,66 +1,21 @@
## What type of PR is this? (check all applicable)
## Summary
- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
<!--A description of the changes in this PR. Include the kind of change (fix, feature, docs, etc), the "why" and the "how". Screenshots or videos are useful for frontend changes.-->
## Related Issues / Discussions
## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [ ] No, because:
<!--WHEN APPLICABLE: List any related issues or discussions on github or discord. If this PR closes an issue, please use the "Closes #1234" format, so that the issue will be automatically closed when the PR merges.-->
## QA Instructions
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No
## Description
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue #
- Closes #
## QA Instructions, Screenshots, Recordings
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
<!--WHEN APPLICABLE: Describe how we can test the changes in this PR.-->
## Merge Plan
<!--
A merge plan describes how this PR should be handled after it is approved.
<!--WHEN APPLICABLE: Large PRs, or PRs that touch sensitive things like DB schemas, may need some care when merging. For example, a careful rebase by the change author, timing to not interfere with a pending release, or a message to contributors on discord after merging.-->
Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is merged"
## Checklist
A merge plan is particularly important for large PRs or PRs that touch the
database in any way.
-->
## Added/updated tests?
- [ ] Yes
- [ ] No : _please replace this line with details on why tests
have not been included_
## [optional] Are there any post deployment tasks we need to perform?
- [ ] _The PR has a short but descriptive title, suitable for a changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _Documentation added / updated (if applicable)_

View File

@ -1,7 +1,7 @@
# Runs frontend code quality checks.
#
# Checks for changes to frontend files before running the checks.
# When manually triggered or when called from another workflow, always runs the checks.
# If always_run is true, always runs the checks.
name: 'frontend checks'
@ -16,7 +16,19 @@ on:
- 'synchronize'
merge_group:
workflow_dispatch:
inputs:
always_run:
description: 'Always run the checks'
required: true
type: boolean
default: true
workflow_call:
inputs:
always_run:
description: 'Always run the checks'
required: true
type: boolean
default: true
defaults:
run:
@ -30,7 +42,7 @@ jobs:
- uses: actions/checkout@v4
- name: check for changed frontend files
if: ${{ github.event_name != 'workflow_dispatch' && github.event_name != 'workflow_call' }}
if: ${{ inputs.always_run != true }}
id: changed-files
uses: tj-actions/changed-files@v42
with:
@ -39,30 +51,30 @@ jobs:
- 'invokeai/frontend/web/**'
- name: install dependencies
if: ${{ steps.changed-files.outputs.frontend_any_changed == 'true' || github.event_name == 'workflow_dispatch' || github.event_name == 'workflow_call' }}
if: ${{ steps.changed-files.outputs.frontend_any_changed == 'true' || inputs.always_run == true }}
uses: ./.github/actions/install-frontend-deps
- name: tsc
if: ${{ steps.changed-files.outputs.frontend_any_changed == 'true' || github.event_name == 'workflow_dispatch' || github.event_name == 'workflow_call' }}
if: ${{ steps.changed-files.outputs.frontend_any_changed == 'true' || inputs.always_run == true }}
run: 'pnpm lint:tsc'
shell: bash
- name: dpdm
if: ${{ steps.changed-files.outputs.frontend_any_changed == 'true' || github.event_name == 'workflow_dispatch' || github.event_name == 'workflow_call' }}
if: ${{ steps.changed-files.outputs.frontend_any_changed == 'true' || inputs.always_run == true }}
run: 'pnpm lint:dpdm'
shell: bash
- name: eslint
if: ${{ steps.changed-files.outputs.frontend_any_changed == 'true' || github.event_name == 'workflow_dispatch' || github.event_name == 'workflow_call' }}
if: ${{ steps.changed-files.outputs.frontend_any_changed == 'true' || inputs.always_run == true }}
run: 'pnpm lint:eslint'
shell: bash
- name: prettier
if: ${{ steps.changed-files.outputs.frontend_any_changed == 'true' || github.event_name == 'workflow_dispatch' || github.event_name == 'workflow_call' }}
if: ${{ steps.changed-files.outputs.frontend_any_changed == 'true' || inputs.always_run == true }}
run: 'pnpm lint:prettier'
shell: bash
- name: knip
if: ${{ steps.changed-files.outputs.frontend_any_changed == 'true' || github.event_name == 'workflow_dispatch' || github.event_name == 'workflow_call' }}
if: ${{ steps.changed-files.outputs.frontend_any_changed == 'true' || inputs.always_run == true }}
run: 'pnpm lint:knip'
shell: bash

View File

@ -1,7 +1,7 @@
# Runs frontend tests.
#
# Checks for changes to frontend files before running the tests.
# When manually triggered or called from another workflow, always runs the tests.
# If always_run is true, always runs the tests.
name: 'frontend tests'
@ -16,7 +16,19 @@ on:
- 'synchronize'
merge_group:
workflow_dispatch:
inputs:
always_run:
description: 'Always run the tests'
required: true
type: boolean
default: true
workflow_call:
inputs:
always_run:
description: 'Always run the tests'
required: true
type: boolean
default: true
defaults:
run:
@ -30,7 +42,7 @@ jobs:
- uses: actions/checkout@v4
- name: check for changed frontend files
if: ${{ github.event_name != 'workflow_dispatch' && github.event_name != 'workflow_call' }}
if: ${{ inputs.always_run != true }}
id: changed-files
uses: tj-actions/changed-files@v42
with:
@ -39,10 +51,10 @@ jobs:
- 'invokeai/frontend/web/**'
- name: install dependencies
if: ${{ steps.changed-files.outputs.frontend_any_changed == 'true' || github.event_name == 'workflow_dispatch' || github.event_name == 'workflow_call' }}
if: ${{ steps.changed-files.outputs.frontend_any_changed == 'true' || inputs.always_run == true }}
uses: ./.github/actions/install-frontend-deps
- name: vitest
if: ${{ steps.changed-files.outputs.frontend_any_changed == 'true' || github.event_name == 'workflow_dispatch' || github.event_name == 'workflow_call' }}
if: ${{ steps.changed-files.outputs.frontend_any_changed == 'true' || inputs.always_run == true }}
run: 'pnpm test:no-watch'
shell: bash

View File

@ -1,7 +1,7 @@
# Runs python code quality checks.
#
# Checks for changes to python files before running the checks.
# When manually triggered or called from another workflow, always runs the tests.
# If always_run is true, always runs the checks.
#
# TODO: Add mypy or pyright to the checks.
@ -18,7 +18,19 @@ on:
- 'synchronize'
merge_group:
workflow_dispatch:
inputs:
always_run:
description: 'Always run the checks'
required: true
type: boolean
default: true
workflow_call:
inputs:
always_run:
description: 'Always run the checks'
required: true
type: boolean
default: true
jobs:
python-checks:
@ -29,7 +41,7 @@ jobs:
uses: actions/checkout@v4
- name: check for changed python files
if: ${{ github.event_name != 'workflow_dispatch' && github.event_name != 'workflow_call' }}
if: ${{ inputs.always_run != true }}
id: changed-files
uses: tj-actions/changed-files@v42
with:
@ -41,7 +53,7 @@ jobs:
- 'tests/**'
- name: setup python
if: ${{ steps.changed-files.outputs.python_any_changed == 'true' || github.event_name == 'workflow_dispatch' || github.event_name == 'workflow_call' }}
if: ${{ steps.changed-files.outputs.python_any_changed == 'true' || inputs.always_run == true }}
uses: actions/setup-python@v5
with:
python-version: '3.10'
@ -49,16 +61,16 @@ jobs:
cache-dependency-path: pyproject.toml
- name: install ruff
if: ${{ steps.changed-files.outputs.python_any_changed == 'true' || github.event_name == 'workflow_dispatch' || github.event_name == 'workflow_call' }}
if: ${{ steps.changed-files.outputs.python_any_changed == 'true' || inputs.always_run == true }}
run: pip install ruff
shell: bash
- name: ruff check
if: ${{ steps.changed-files.outputs.python_any_changed == 'true' || github.event_name == 'workflow_dispatch' || github.event_name == 'workflow_call' }}
if: ${{ steps.changed-files.outputs.python_any_changed == 'true' || inputs.always_run == true }}
run: ruff check --output-format=github .
shell: bash
- name: ruff format
if: ${{ steps.changed-files.outputs.python_any_changed == 'true' || github.event_name == 'workflow_dispatch' || github.event_name == 'workflow_call' }}
if: ${{ steps.changed-files.outputs.python_any_changed == 'true' || inputs.always_run == true }}
run: ruff format --check .
shell: bash

View File

@ -1,7 +1,7 @@
# Runs python tests on a matrix of python versions and platforms.
#
# Checks for changes to python files before running the tests.
# When manually triggered or called from another workflow, always runs the tests.
# If always_run is true, always runs the tests.
name: 'python tests'
@ -16,7 +16,19 @@ on:
- 'synchronize'
merge_group:
workflow_dispatch:
inputs:
always_run:
description: 'Always run the tests'
required: true
type: boolean
default: true
workflow_call:
inputs:
always_run:
description: 'Always run the tests'
required: true
type: boolean
default: true
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
@ -63,7 +75,7 @@ jobs:
uses: actions/checkout@v4
- name: check for changed python files
if: ${{ github.event_name != 'workflow_dispatch' && github.event_name != 'workflow_call' }}
if: ${{ inputs.always_run != true }}
id: changed-files
uses: tj-actions/changed-files@v42
with:
@ -75,7 +87,7 @@ jobs:
- 'tests/**'
- name: setup python
if: ${{ steps.changed-files.outputs.python_any_changed == 'true' || github.event_name == 'workflow_dispatch' || github.event_name == 'workflow_call' }}
if: ${{ steps.changed-files.outputs.python_any_changed == 'true' || inputs.always_run == true }}
uses: actions/setup-python@v5
with:
python-version: ${{ matrix.python-version }}
@ -83,12 +95,12 @@ jobs:
cache-dependency-path: pyproject.toml
- name: install dependencies
if: ${{ steps.changed-files.outputs.python_any_changed == 'true' || github.event_name == 'workflow_dispatch' || github.event_name == 'workflow_call' }}
if: ${{ steps.changed-files.outputs.python_any_changed == 'true' || inputs.always_run == true }}
env:
PIP_EXTRA_INDEX_URL: ${{ matrix.extra-index-url }}
run: >
pip3 install --editable=".[test]"
- name: run pytest
if: ${{ steps.changed-files.outputs.python_any_changed == 'true' || github.event_name == 'workflow_dispatch' || github.event_name == 'workflow_call' }}
if: ${{ steps.changed-files.outputs.python_any_changed == 'true' || inputs.always_run == true }}
run: pytest

View File

@ -30,15 +30,23 @@ jobs:
frontend-checks:
uses: ./.github/workflows/frontend-checks.yml
with:
always_run: true
frontend-tests:
uses: ./.github/workflows/frontend-tests.yml
with:
always_run: true
python-checks:
uses: ./.github/workflows/python-checks.yml
with:
always_run: true
python-tests:
uses: ./.github/workflows/python-tests.yml
with:
always_run: true
build:
uses: ./.github/workflows/build-installer.yml
@ -58,6 +66,8 @@ jobs:
environment:
name: testpypi
url: https://test.pypi.org/p/invokeai
permissions:
id-token: write
steps:
- name: download distribution from build job
uses: actions/download-artifact@v4
@ -85,6 +95,8 @@ jobs:
environment:
name: pypi
url: https://pypi.org/p/invokeai
permissions:
id-token: write
steps:
- name: download distribution from build job
uses: actions/download-artifact@v4

View File

@ -10,10 +10,12 @@ help:
@echo "ruff-unsafe Run ruff, fixing all fixable errors and formatting"
@echo "mypy Run mypy using the config in pyproject.toml to identify type mismatches and other coding errors"
@echo "mypy-all Run mypy ignoring the config in pyproject.tom but still ignoring missing imports"
@echo "test" Run the unit tests.
@echo "frontend-install" Install the pnpm modules needed for the front end
@echo "test Run the unit tests."
@echo "update-config-docstring Update the app's config docstring so mkdocs can autogenerate it correctly."
@echo "frontend-install Install the pnpm modules needed for the front end"
@echo "frontend-build Build the frontend in order to run on localhost:9090"
@echo "frontend-dev Run the frontend in developer mode on localhost:5173"
@echo "frontend-typegen Generate types for the frontend from the OpenAPI schema"
@echo "installer-zip Build the installer .zip file for the current version"
@echo "tag-release Tag the GitHub repository with the current version (use at release time only!)"
@ -40,6 +42,10 @@ mypy-all:
test:
pytest ./tests
# Update config docstring
update-config-docstring:
python scripts/update_config_docstring.py
# Install the pnpm modules needed for the front end
frontend-install:
rm -rf invokeai/frontend/web/node_modules
@ -53,6 +59,9 @@ frontend-build:
frontend-dev:
cd invokeai/frontend/web && pnpm dev
frontend-typegen:
cd invokeai/frontend/web && python ../../../scripts/generate_openapi_schema.py | pnpm typegen
# Installer zip file
installer-zip:
cd installer && ./create_installer.sh

View File

@ -2,17 +2,25 @@
## Any environment variables supported by InvokeAI can be specified here,
## in addition to the examples below.
# HOST_INVOKEAI_ROOT is the path on the docker host's filesystem where InvokeAI will store data.
# Outputs will also be stored here by default.
# If relative, it will be relative to the docker directory in which the docker-compose.yml file is located
#HOST_INVOKEAI_ROOT=../../invokeai-data
# INVOKEAI_ROOT is the path to the root of the InvokeAI repository within the container.
## INVOKEAI_ROOT is the path *on the host system* where Invoke will store its data.
## It is mounted into the container and allows both containerized and non-containerized usage of Invoke.
# Usually this is the only variable you need to set. It can be relative or absolute.
# INVOKEAI_ROOT=~/invokeai
# Get this value from your HuggingFace account settings page.
# HUGGING_FACE_HUB_TOKEN=
## HOST_INVOKEAI_ROOT and CONTAINER_INVOKEAI_ROOT can be used to control the on-host
## and in-container paths separately, if needed.
## HOST_INVOKEAI_ROOT is the path on the docker host's filesystem where Invoke will store data.
## If relative, it will be relative to the docker directory in which the docker-compose.yml file is located
## CONTAINER_INVOKEAI_ROOT is the path within the container where Invoke will expect to find the runtime directory.
## It MUST be absolute. There is usually no need to change this.
# HOST_INVOKEAI_ROOT=../../invokeai-data
# CONTAINER_INVOKEAI_ROOT=/invokeai
## optional variables specific to the docker setup.
## INVOKEAI_PORT is the port on which the InvokeAI web interface will be available
# INVOKEAI_PORT=9090
## GPU_DRIVER can be set to either `nvidia` or `rocm` to enable GPU support in the container accordingly.
# GPU_DRIVER=nvidia #| rocm
## CONTAINER_UID can be set to the UID of the user on the host system that should own the files in the container.
# CONTAINER_UID=1000

View File

@ -18,8 +18,6 @@ ENV INVOKEAI_SRC=/opt/invokeai
ENV VIRTUAL_ENV=/opt/venv/invokeai
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
ARG TORCH_VERSION=2.1.2
ARG TORCHVISION_VERSION=0.16.2
ARG GPU_DRIVER=cuda
ARG TARGETPLATFORM="linux/amd64"
# unused but available
@ -27,7 +25,12 @@ ARG BUILDPLATFORM
WORKDIR ${INVOKEAI_SRC}
# Install pytorch before all other pip packages
COPY invokeai ./invokeai
COPY pyproject.toml ./
# Editable mode helps use the same image for development:
# the local working copy can be bind-mounted into the image
# at path defined by ${INVOKEAI_SRC}
# NOTE: there are no pytorch builds for arm64 + cuda, only cpu
# x86_64/CUDA is default
RUN --mount=type=cache,target=/root/.cache/pip \
@ -39,20 +42,10 @@ RUN --mount=type=cache,target=/root/.cache/pip \
else \
extra_index_url_arg="--extra-index-url https://download.pytorch.org/whl/cu121"; \
fi &&\
pip install $extra_index_url_arg \
torch==$TORCH_VERSION \
torchvision==$TORCHVISION_VERSION
# Install the local package.
# Editable mode helps use the same image for development:
# the local working copy can be bind-mounted into the image
# at path defined by ${INVOKEAI_SRC}
COPY invokeai ./invokeai
COPY pyproject.toml ./
RUN --mount=type=cache,target=/root/.cache/pip \
# xformers + triton fails to install on arm64
if [ "$GPU_DRIVER" = "cuda" ] && [ "$TARGETPLATFORM" = "linux/amd64" ]; then \
pip install -e ".[xformers]"; \
pip install $extra_index_url_arg -e ".[xformers]"; \
else \
pip install $extra_index_url_arg -e "."; \
fi
@ -101,6 +94,8 @@ RUN apt update && apt install -y --no-install-recommends \
ENV INVOKEAI_SRC=/opt/invokeai
ENV VIRTUAL_ENV=/opt/venv/invokeai
ENV INVOKEAI_ROOT=/invokeai
ENV INVOKEAI_HOST=0.0.0.0
ENV INVOKEAI_PORT=9090
ENV PATH="$VIRTUAL_ENV/bin:$INVOKEAI_SRC:$PATH"
ENV CONTAINER_UID=${CONTAINER_UID:-1000}
ENV CONTAINER_GID=${CONTAINER_GID:-1000}
@ -125,4 +120,4 @@ RUN mkdir -p ${INVOKEAI_ROOT} && chown -R ${CONTAINER_UID}:${CONTAINER_GID} ${IN
COPY docker/docker-entrypoint.sh ./
ENTRYPOINT ["/opt/invokeai/docker-entrypoint.sh"]
CMD ["invokeai-web", "--host", "0.0.0.0"]
CMD ["invokeai-web"]

View File

@ -8,35 +8,28 @@ x-invokeai: &invokeai
context: ..
dockerfile: docker/Dockerfile
# variables without a default will automatically inherit from the host environment
environment:
- INVOKEAI_ROOT
- HF_HOME
# Create a .env file in the same directory as this docker-compose.yml file
# and populate it with environment variables. See .env.sample
env_file:
- .env
# variables without a default will automatically inherit from the host environment
environment:
# if set, CONTAINER_INVOKEAI_ROOT will override the Invoke runtime directory location *inside* the container
- INVOKEAI_ROOT=${CONTAINER_INVOKEAI_ROOT:-/invokeai}
- HF_HOME
ports:
- "${INVOKEAI_PORT:-9090}:9090"
- "${INVOKEAI_PORT:-9090}:${INVOKEAI_PORT:-9090}"
volumes:
- type: bind
source: ${HOST_INVOKEAI_ROOT:-${INVOKEAI_ROOT:-~/invokeai}}
target: ${INVOKEAI_ROOT:-/invokeai}
target: ${CONTAINER_INVOKEAI_ROOT:-/invokeai}
bind:
create_host_path: true
- ${HF_HOME:-~/.cache/huggingface}:${HF_HOME:-/invokeai/.cache/huggingface}
# - ${INVOKEAI_MODELS_DIR:-${INVOKEAI_ROOT:-/invokeai/models}}
# - ${INVOKEAI_MODELS_CONFIG_PATH:-${INVOKEAI_ROOT:-/invokeai/configs/models.yaml}}
tty: true
stdin_open: true
# # Example of running alternative commands/scripts in the container
# command:
# - bash
# - -c
# - |
# invokeai-model-install --yes --default-only --config_file ${INVOKEAI_ROOT}/config_custom.yaml
# invokeai-nodes-web --host 0.0.0.0
services:
invokeai-nvidia:

View File

@ -9,10 +9,6 @@ set -e -o pipefail
### Set INVOKEAI_ROOT pointing to a valid runtime directory
# Otherwise configure the runtime dir first.
### Configure the InvokeAI runtime directory (done by default):
# docker run --rm -it <this image> --configure
# or skip with --no-configure
### Set the CONTAINER_UID envvar to match your user.
# Ensures files created in the container are owned by you:
# docker run --rm -it -v /some/path:/invokeai -e CONTAINER_UID=$(id -u) <this image>
@ -22,27 +18,6 @@ USER_ID=${CONTAINER_UID:-1000}
USER=ubuntu
usermod -u ${USER_ID} ${USER} 1>/dev/null
configure() {
# Configure the runtime directory
if [[ -f ${INVOKEAI_ROOT}/invokeai.yaml ]]; then
echo "${INVOKEAI_ROOT}/invokeai.yaml exists. InvokeAI is already configured."
echo "To reconfigure InvokeAI, delete the above file."
echo "======================================================================"
else
mkdir -p "${INVOKEAI_ROOT}"
chown --recursive ${USER} "${INVOKEAI_ROOT}"
gosu ${USER} invokeai-configure --yes --default_only
fi
}
## Skip attempting to configure.
## Must be passed first, before any other args.
if [[ $1 != "--no-configure" ]]; then
configure
else
shift
fi
### Set the $PUBLIC_KEY env var to enable SSH access.
# We do not install openssh-server in the image by default to avoid bloat.
# but it is useful to have the full SSH server e.g. on Runpod.
@ -58,7 +33,8 @@ if [[ -v "PUBLIC_KEY" ]] && [[ ! -d "${HOME}/.ssh" ]]; then
service ssh start
fi
mkdir -p "${INVOKEAI_ROOT}"
chown --recursive ${USER} "${INVOKEAI_ROOT}"
cd "${INVOKEAI_ROOT}"
# Run the CMD as the Container User (not root).

View File

@ -16,11 +16,6 @@ model. These are the:
information. It is also responsible for managing the InvokeAI
`models` directory and its contents.
* _ModelMetadataStore_ and _ModelMetaDataFetch_ Backend modules that
are able to retrieve metadata from online model repositories,
transform them into Pydantic models, and cache them to the InvokeAI
SQL database.
* _DownloadQueueServiceBase_
A multithreaded downloader responsible
for downloading models from a remote source to disk. The download
@ -383,16 +378,13 @@ functionality:
* Downloading a model from an arbitrary URL and installing it in
`models_dir`.
* Special handling for Civitai model URLs which allow the user to
paste in a model page's URL or download link
* Special handling for HuggingFace repo_ids to recursively download
the contents of the repository, paying attention to alternative
variants such as fp16.
* Saving tags and other metadata about the model into the invokeai database
when fetching from a repo that provides that type of information,
(currently only Civitai and HuggingFace).
(currently only HuggingFace).
### Initializing the installer
@ -436,7 +428,6 @@ required parameters:
| `app_config` | InvokeAIAppConfig | InvokeAI app configuration object |
| `record_store` | ModelRecordServiceBase | Config record storage database |
| `download_queue` | DownloadQueueServiceBase | Download queue object |
| `metadata_store` | Optional[ModelMetadataStore] | Metadata storage object |
|`session` | Optional[requests.Session] | Swap in a different Session object (usually for debugging) |
Once initialized, the installer will provide the following methods:
@ -580,33 +571,7 @@ The `AnyHttpUrl` class can be imported from `pydantic.networks`.
Ordinarily, no metadata is retrieved from these sources. However,
there is special-case code in the installer that looks for HuggingFace
and Civitai URLs and fetches the corresponding model metadata from
the corresponding repo.
#### CivitaiModelSource
This is used for a model that is hosted by the Civitai web site.
| **Argument** | **Type** | **Default** | **Description** |
|------------------|------------------------------|-------------|-------------------------------------------|
| `version_id` | int | None | The ID of the particular version of the desired model. |
| `access_token` | str | None | An access token needed to gain access to a subscriber's-only model. |
Civitai has two model IDs, both of which are integers. The `model_id`
corresponds to a collection of model versions that may different in
arbitrary ways, such as derivation from different checkpoint training
steps, SFW vs NSFW generation, pruned vs non-pruned, etc. The
`version_id` points to a specific version. Please use the latter.
Some Civitai models require an access token to download. These can be
generated from the Civitai profile page of a logged-in
account. Somewhat annoyingly, if you fail to provide the access token
when downloading a model that needs it, Civitai generates a redirect
to a login page rather than a 403 Forbidden error. The installer
attempts to catch this event and issue an informative error
message. Otherwise you will get an "unrecognized model suffix" error
when the model prober tries to identify the type of the HTML login
page.
and fetches the corresponding model metadata from the corresponding repo.
#### HFModelSource
@ -1253,9 +1218,9 @@ queue and have not yet reached a terminal state.
The modules found under `invokeai.backend.model_manager.metadata`
provide a straightforward API for fetching model metadata from online
repositories. Currently two repositories are supported: HuggingFace
and Civitai. However, the modules are easily extended for additional
repos, provided that they have defined APIs for metadata access.
repositories. Currently only HuggingFace is supported. However, the
modules are easily extended for additional repos, provided that they
have defined APIs for metadata access.
Metadata comprises any descriptive information that is not essential
for getting the model to run. For example "author" is metadata, while
@ -1267,37 +1232,16 @@ model's config, as defined in `invokeai.backend.model_manager.config`.
```
from invokeai.backend.model_manager.metadata import (
AnyModelRepoMetadata,
CivitaiMetadataFetch,
CivitaiMetadata
ModelMetadataStore,
)
# to access the initialized sql database
from invokeai.app.api.dependencies import ApiDependencies
civitai = CivitaiMetadataFetch()
hf = HuggingFaceMetadataFetch()
# fetch the metadata
model_metadata = civitai.from_url("https://civitai.com/models/215796")
model_metadata = hf.from_id("<repo_id>")
# get some common metadata fields
author = model_metadata.author
tags = model_metadata.tags
# get some Civitai-specific fields
assert isinstance(model_metadata, CivitaiMetadata)
trained_words = model_metadata.trained_words
base_model = model_metadata.base_model_trained_on
thumbnail = model_metadata.thumbnail_url
# cache the metadata to the database using the key corresponding to
# an existing model config record in the `model_config` table
sql_cache = ModelMetadataStore(ApiDependencies.invoker.services.db)
sql_cache.add_metadata('fb237ace520b6716adc98bcb16e8462c', model_metadata)
# now we can search the database by tag, author or model name
# matches will contain a list of model keys that match the search
matches = sql_cache.search_by_tag({"tool", "turbo"})
assert isinstance(model_metadata, HuggingFaceMetadata)
```
### Structure of the Metadata objects
@ -1334,52 +1278,14 @@ This descends from `ModelMetadataBase` and adds the following fields:
| `last_modified`| datetime | Date of last commit of this model to the repo |
| `files` | List[Path] | List of the files in the model repo |
#### `CivitaiMetadata`
This descends from `ModelMetadataBase` and adds the following fields:
| **Field Name** | **Type** | **Description** |
|----------------|-----------------|------------------|
| `type` | Literal["civitai"] | Used for the discriminated union of metadata classes|
| `id` | int | Civitai model id |
| `version_name` | str | Name of this version of the model (distinct from model name) |
| `version_id` | int | Civitai model version id (distinct from model id) |
| `created` | datetime | Date this version of the model was created |
| `updated` | datetime | Date this version of the model was last updated |
| `published` | datetime | Date this version of the model was published to Civitai |
| `description` | str | Model description. Quite verbose and contains HTML tags |
| `version_description` | str | Model version description, usually describes changes to the model |
| `nsfw` | bool | Whether the model tends to generate NSFW content |
| `restrictions` | LicenseRestrictions | An object that describes what is and isn't allowed with this model |
| `trained_words`| Set[str] | Trigger words for this model, if any |
| `download_url` | AnyHttpUrl | URL for downloading this version of the model |
| `base_model_trained_on` | str | Name of the model that this version was trained on |
| `thumbnail_url` | AnyHttpUrl | URL to access a representative thumbnail image of the model's output |
| `weight_min` | int | For LoRA sliders, the minimum suggested weight to apply |
| `weight_max` | int | For LoRA sliders, the maximum suggested weight to apply |
Note that `weight_min` and `weight_max` are not currently populated
and take the default values of (-1.0, +2.0). The issue is that these
values aren't part of the structured data but appear in the text
description. Some regular expression or LLM coding may be able to
extract these values.
Also be aware that `base_model_trained_on` is free text and doesn't
correspond to our `ModelType` enum.
`CivitaiMetadata` also defines some convenience properties relating to
licensing restrictions: `credit_required`, `allow_commercial_use`,
`allow_derivatives` and `allow_different_license`.
#### `AnyModelRepoMetadata`
This is a discriminated Union of `CivitaiMetadata` and
`HuggingFaceMetadata`.
This is a discriminated Union of `HuggingFaceMetadata`.
### Fetching Metadata from Online Repos
The `HuggingFaceMetadataFetch` and `CivitaiMetadataFetch` classes will
retrieve metadata from their corresponding repositories and return
The `HuggingFaceMetadataFetch` class will
retrieve metadata from its corresponding repository and return
`AnyModelRepoMetadata` objects. Their base class
`ModelMetadataFetchBase` is an abstract class that defines two
methods: `from_url()` and `from_id()`. The former accepts the type of
@ -1397,96 +1303,17 @@ provide a `requests.Session` argument. This allows you to customize
the low-level HTTP fetch requests and is used, for instance, in the
testing suite to avoid hitting the internet.
The HuggingFace and Civitai fetcher subclasses add additional
repo-specific fetching methods:
The HuggingFace fetcher subclass adds additional repo-specific fetching methods:
#### HuggingFaceMetadataFetch
This overrides its base class `from_json()` method to return a
`HuggingFaceMetadata` object directly.
#### CivitaiMetadataFetch
This adds the following methods:
`from_civitai_modelid()` This takes the ID of a model, finds the
default version of the model, and then retrieves the metadata for
that version, returning a `CivitaiMetadata` object directly.
`from_civitai_versionid()` This takes the ID of a model version and
retrieves its metadata. Functionally equivalent to `from_id()`, the
only difference is that it returns a `CivitaiMetadata` object rather
than an `AnyModelRepoMetadata`.
### Metadata Storage
The `ModelMetadataStore` provides a simple facility to store model
metadata in the `invokeai.db` database. The data is stored as a JSON
blob, with a few common fields (`name`, `author`, `tags`) broken out
to be searchable.
When a metadata object is saved to the database, it is identified
using the model key, _and this key must correspond to an existing
model key in the model_config table_. There is a foreign key integrity
constraint between the `model_config.id` field and the
`model_metadata.id` field such that if you attempt to save metadata
under an unknown key, the attempt will result in an
`UnknownModelException`. Likewise, when a model is deleted from
`model_config`, the deletion of the corresponding metadata record will
be triggered.
Tags are stored in a normalized fashion in the tables `model_tags` and
`tags`. Triggers keep the tag table in sync with the `model_metadata`
table.
To create the storage object, initialize it with the InvokeAI
`SqliteDatabase` object. This is often done this way:
```
from invokeai.app.api.dependencies import ApiDependencies
metadata_store = ModelMetadataStore(ApiDependencies.invoker.services.db)
```
You can then access the storage with the following methods:
#### `add_metadata(key, metadata)`
Add the metadata using a previously-defined model key.
There is currently no `delete_metadata()` method. The metadata will
persist until the matching config is deleted from the `model_config`
table.
#### `get_metadata(key) -> AnyModelRepoMetadata`
Retrieve the metadata corresponding to the model key.
#### `update_metadata(key, new_metadata)`
Update an existing metadata record with new metadata.
#### `search_by_tag(tags: Set[str]) -> Set[str]`
Given a set of tags, find models that are tagged with them. If
multiple tags are provided then a matching model must be tagged with
*all* the tags in the set. This method returns a set of model keys and
is intended to be used in conjunction with the `ModelRecordService`:
```
model_config_store = ApiDependencies.invoker.services.model_records
matches = metadata_store.search_by_tag({'license:other'})
models = [model_config_store.get(x) for x in matches]
```
#### `search_by_name(name: str) -> Set[str]`
Find all model metadata records that have the given name and return a
set of keys to the corresponding model config objects.
#### `search_by_author(author: str) -> Set[str]`
Find all model metadata records that have the given author and return
a set of keys to the corresponding model config objects.
The `ModelConfigBase` stores this response in the `source_api_response` field
as a JSON blob.
***

View File

@ -0,0 +1,133 @@
# Invoke UI
Invoke's UI is made possible by many contributors and open-source libraries. Thank you!
## Dev environment
### Setup
1. Install [node] and [pnpm].
1. Run `pnpm i` to install all packages.
#### Run in dev mode
1. From `invokeai/frontend/web/`, run `pnpm dev`.
1. From repo root, run `python scripts/invokeai-web.py`.
1. Point your browser to the dev server address, e.g. <http://localhost:5173/>
### Package scripts
- `dev`: run the frontend in dev mode, enabling hot reloading
- `build`: run all checks (madge, eslint, prettier, tsc) and then build the frontend
- `typegen`: generate types from the OpenAPI schema (see [Type generation])
- `lint:dpdm`: check circular dependencies
- `lint:eslint`: check code quality
- `lint:prettier`: check code formatting
- `lint:tsc`: check type issues
- `lint:knip`: check for unused exports or objects (failures here are just suggestions, not hard fails)
- `lint`: run all checks concurrently
- `fix`: run `eslint` and `prettier`, fixing fixable issues
### Type generation
We use [openapi-typescript] to generate types from the app's OpenAPI schema.
The generated types are committed to the repo in [schema.ts].
```sh
# from the repo root, start the server
python scripts/invokeai-web.py
# from invokeai/frontend/web/, run the script
pnpm typegen
```
### Localization
We use [i18next] for localization, but translation to languages other than English happens on our [Weblate] project.
Only the English source strings should be changed on this repo.
### VSCode
#### Example debugger config
```jsonc
{
"version": "0.2.0",
"configurations": [
{
"type": "chrome",
"request": "launch",
"name": "Invoke UI",
"url": "http://localhost:5173",
"webRoot": "${workspaceFolder}/invokeai/frontend/web"
}
]
}
```
#### Remote dev
We've noticed an intermittent timeout issue with the VSCode remote dev port forwarding.
We suggest disabling the editor's port forwarding feature and doing it manually via SSH:
```sh
ssh -L 9090:localhost:9090 -L 5173:localhost:5173 user@host
```
## Contributing Guidelines
Thanks for your interest in contributing to the Invoke Web UI!
Please follow these guidelines when contributing.
### Check in before investing your time
Please check in before you invest your time on anything besides a trivial fix, in case it conflicts with ongoing work or isn't aligned with the vision for the app.
If a feature request or issue doesn't already exist for the thing you want to work on, please create one.
Ping `@psychedelicious` on [discord] in the `#frontend-dev` channel or in the feature request / issue you want to work on - we're happy to chat.
### Code conventions
- This is a fairly complex app with a deep component tree. Please use memoization (`useCallback`, `useMemo`, `memo`) with enthusiasm.
- If you need to add some global, ephemeral state, please use [nanostores] if possible.
- Be careful with your redux selectors. If they need to be parameterized, consider creating them inside a `useMemo`.
- Feel free to use `lodash` (via `lodash-es`) to make the intent of your code clear.
- Please add comments describing the "why", not the "how" (unless it is really arcane).
### Commit format
Please use the [conventional commits] spec for the web UI, with a scope of "ui":
- `chore(ui): bump deps`
- `chore(ui): lint`
- `feat(ui): add some cool new feature`
- `fix(ui): fix some bug`
### Submitting a PR
- Ensure your branch is tidy. Use an interactive rebase to clean up the commit history and reword the commit messages if they are not descriptive.
- Run `pnpm lint`. Some issues are auto-fixable with `pnpm fix`.
- Fill out the PR form when creating the PR.
- It doesn't need to be super detailed, but a screenshot or video is nice if you changed something visually.
- If a section isn't relevant, delete it. There are no UI tests at this time.
## Other docs
- [Workflows - Design and Implementation]
- [State Management]
[node]: https://nodejs.org/en/download/
[pnpm]: https://github.com/pnpm/pnpm
[discord]: https://discord.gg/ZmtBAhwWhy
[i18next]: https://github.com/i18next/react-i18next
[Weblate]: https://hosted.weblate.org/engage/invokeai/
[openapi-typescript]: https://github.com/drwpow/openapi-typescript
[Type generation]: #type-generation
[schema.ts]: https://github.com/invoke-ai/InvokeAI/blob/main/invokeai/frontend/web/src/services/api/schema.ts
[conventional commits]: https://www.conventionalcommits.org/en/v1.0.0/
[Workflows - Design and Implementation]: ./WORKFLOWS.md
[State Management]: ./STATE_MGMT.md

View File

@ -1,40 +1,5 @@
# Workflows - Design and Implementation
<!-- @import "[TOC]" {cmd="toc" depthFrom=1 depthTo=6 orderedList=false} -->
<!-- code_chunk_output -->
- [Workflows - Design and Implementation](#workflows---design-and-implementation)
- [Design](#design)
- [Linear UI](#linear-ui)
- [Workflow Editor](#workflow-editor)
- [Workflows](#workflows)
- [Workflow -> reactflow state -> InvokeAI graph](#workflow---reactflow-state---invokeai-graph)
- [Nodes vs Invocations](#nodes-vs-invocations)
- [Workflow Linear View](#workflow-linear-view)
- [OpenAPI Schema](#openapi-schema)
- [Field Instances and Templates](#field-instances-and-templates)
- [Stateful vs Stateless Fields](#stateful-vs-stateless-fields)
- [Collection and Polymorphic Fields](#collection-and-polymorphic-fields)
- [Implementation](#implementation)
- [zod Schemas and Types](#zod-schemas-and-types)
- [OpenAPI Schema Parsing](#openapi-schema-parsing)
- [Parsing Field Types](#parsing-field-types)
- [Primitive Types](#primitive-types)
- [Complex Types](#complex-types)
- [Collection Types](#collection-types)
- [Collection or Scalar Types](#collection-or-scalar-types)
- [Optional Fields](#optional-fields)
- [Building Field Input Templates](#building-field-input-templates)
- [Building Field Output Templates](#building-field-output-templates)
- [Managing reactflow State](#managing-reactflow-state)
- [Building Nodes and Edges](#building-nodes-and-edges)
- [Building a Workflow](#building-a-workflow)
- [Loading a Workflow](#loading-a-workflow)
- [Workflow Migrations](#workflow-migrations)
<!-- /code_chunk_output -->
> This document describes, at a high level, the design and implementation of workflows in the InvokeAI frontend. There are a substantial number of implementation details not included, but which are hopefully clear from the code.
InvokeAI's backend uses graphs, composed of **nodes** and **edges**, to process data and generate images.
@ -152,13 +117,13 @@ Stateless fields do not store their value in the node, so their field instances
"Custom" fields will always be treated as stateless fields.
##### Collection and Polymorphic Fields
##### Collection and Scalar Fields
Field types have a name and two flags which may identify it as a **collection** or **polymorphic** field.
Field types have a name and two flags which may identify it as a **collection** or **collection or scalar** field.
If a field is annotated in python as a list, its field type is parsed and flagged as a collection type (e.g. `list[int]`).
If a field is annotated in python as a list, its field type is parsed and flagged as a **collection** type (e.g. `list[int]`).
If it is annotated as a union of a type and list, the type will be flagged as a polymorphic type (e.g. `Union[int, list[int]]`). Fields may not be unions of different types (e.g. `Union[int, list[str]]` and `Union[int, str]` are not allowed).
If it is annotated as a union of a type and list, the type will be flagged as a **collection or scalar** type (e.g. `Union[int, list[int]]`). Fields may not be unions of different types (e.g. `Union[int, list[str]]` and `Union[int, str]` are not allowed).
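As a rough illustration (the class and field names below are invented; real invocations declare their fields through Invoke's own field helpers), the three cases look like this as plain Python annotations:
```python
from typing import Union

class ExampleFields:
    # parsed and flagged as a collection type
    values: list[int]

    # parsed and flagged as a collection-or-scalar type
    scale: Union[int, list[int]]

    # not allowed: unions of different types
    # bad_a: Union[int, list[str]]
    # bad_b: Union[int, str]
```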
## Implementation
@ -338,13 +303,13 @@ Migration logic is in [migrations.ts].
[reactflow]: https://github.com/xyflow/xyflow 'reactflow'
[reactflow-concepts]: https://reactflow.dev/learn/concepts/terms-and-definitions
[reactflow-events]: https://reactflow.dev/api-reference/react-flow#event-handlers
[buildWorkflow.ts]: ../src/features/nodes/util/workflow/buildWorkflow.ts
[nodesSlice.ts]: ../src/features/nodes/store/nodesSlice.ts
[buildLinearTextToImageGraph.ts]: ../src/features/nodes/util/graph/buildLinearTextToImageGraph.ts
[buildNodesGraph.ts]: ../src/features/nodes/util/graph/buildNodesGraph.ts
[buildInvocationNode.ts]: ../src/features/nodes/util/node/buildInvocationNode.ts
[validateWorkflow.ts]: ../src/features/nodes/util/workflow/validateWorkflow.ts
[migrations.ts]: ../src/features/nodes/util/workflow/migrations.ts
[parseSchema.ts]: ../src/features/nodes/util/schema/parseSchema.ts
[buildFieldInputTemplate.ts]: ../src/features/nodes/util/schema/buildFieldInputTemplate.ts
[buildFieldOutputTemplate.ts]: ../src/features/nodes/util/schema/buildFieldOutputTemplate.ts
[buildWorkflow.ts]: https://github.com/invoke-ai/InvokeAI/blob/main/invokeai/frontend/web/src/features/nodes/util/workflow/buildWorkflow.ts
[nodesSlice.ts]: https://github.com/invoke-ai/InvokeAI/blob/main/invokeai/frontend/web/src/features/nodes/store/nodesSlice.ts
[buildLinearTextToImageGraph.ts]: https://github.com/invoke-ai/InvokeAI/blob/main/invokeai/frontend/web/src/features/nodes/util/graph/buildLinearTextToImageGraph.ts
[buildNodesGraph.ts]: https://github.com/invoke-ai/InvokeAI/blob/main/invokeai/frontend/web/src/features/nodes/util/graph/buildNodesGraph.ts
[buildInvocationNode.ts]: https://github.com/invoke-ai/InvokeAI/blob/main/invokeai/frontend/web/src/features/nodes/util/node/buildInvocationNode.ts
[validateWorkflow.ts]: https://github.com/invoke-ai/InvokeAI/blob/main/invokeai/frontend/web/src/features/nodes/util/workflow/validateWorkflow.ts
[migrations.ts]: https://github.com/invoke-ai/InvokeAI/blob/main/invokeai/frontend/web/src/features/nodes/util/workflow/migrations.ts
[parseSchema.ts]: https://github.com/invoke-ai/InvokeAI/blob/main/invokeai/frontend/web/src/features/nodes/util/schema/parseSchema.ts
[buildFieldInputTemplate.ts]: https://github.com/invoke-ai/InvokeAI/blob/main/invokeai/frontend/web/src/features/nodes/util/schema/buildFieldInputTemplate.ts
[buildFieldOutputTemplate.ts]: https://github.com/invoke-ai/InvokeAI/blob/main/invokeai/frontend/web/src/features/nodes/util/schema/buildFieldOutputTemplate.ts

View File

@ -6,259 +6,161 @@ title: Configuration
## Intro
InvokeAI has numerous runtime settings which can be used to adjust
many aspects of its operations, including the location of files and
directories, memory usage, and performance. These settings can be
viewed and customized in several ways:
Runtime settings, including the location of files and
directories, memory usage, and performance, are managed via the
`invokeai.yaml` config file or environment variables. A subset
of settings may be set via commandline arguments.
1. By editing settings in the `invokeai.yaml` file.
2. By setting environment variables.
3. On the command-line, when InvokeAI is launched.
Settings sources are used in this order:
In addition, the most commonly changed settings are accessible
graphically via the `invokeai-configure` script.
- CLI args
- Environment variables
- `invokeai.yaml` settings
- Fallback: defaults
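For intuition, a minimal sketch of how that precedence could be applied (illustrative only, not Invoke's actual configuration code):
```python
# Illustrative only: CLI args override env vars, which override invokeai.yaml,
# which overrides the built-in defaults.
import os

def resolve_setting(name: str, cli_args: dict, yaml_settings: dict, defaults: dict):
    if name in cli_args:
        return cli_args[name]
    env_value = os.environ.get(f"INVOKEAI_{name.upper()}")
    if env_value is not None:
        return env_value
    return yaml_settings.get(name, defaults[name])
```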
### How the Configuration System Works
### InvokeAI Root Directory
When InvokeAI is launched, the very first thing it needs to do is to
find its "root" directory, which contains its configuration files,
installed models, its database of images, and the folder(s) of
generated images themselves. In this document, the root directory will
be referred to as ROOT.
On startup, InvokeAI searches for its "root" directory. This is the directory
that contains models, images, the database, and so on. It also contains
a configuration file called `invokeai.yaml`.
#### Finding the Root Directory
InvokeAI searches for the root directory in this order:
To find its root directory, InvokeAI uses the following recipe:
1. The `--root <path>` CLI arg.
2. The environment variable INVOKEAI_ROOT.
3. The directory containing the currently active virtual environment.
4. Fallback: a directory in the current user's home directory named `invokeai`.
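A rough sketch of this lookup order (not the actual implementation):
```python
# Rough sketch of the root-directory search order described above.
import os
from pathlib import Path

def find_root(cli_root: str | None = None) -> Path:
    if cli_root:
        return Path(cli_root)
    if "INVOKEAI_ROOT" in os.environ:
        return Path(os.environ["INVOKEAI_ROOT"])
    venv = os.environ.get("VIRTUAL_ENV")
    if venv:
        return Path(venv).parent  # the directory containing the virtual environment
    return Path.home() / "invokeai"
```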
1. It first looks for the argument `--root <path>` on the command line
it was launched from, and uses the indicated path if present.
### InvokeAI Configuration File
2. Next it looks for the environment variable INVOKEAI_ROOT, and uses
the directory path found there if present.
Inside the root directory, we read settings from the `invokeai.yaml` file.
3. If neither of these are present, then InvokeAI looks for the
folder containing the `.venv` Python virtual environment directory for
the currently active environment. This directory is checked for files
expected inside the InvokeAI root before it is used.
It has two sections - one for internal use and one for user settings:
4. Finally, InvokeAI looks for a directory in the current user's home
directory named `invokeai`.
```yaml
# Internal metadata - do not edit:
schema_version: 4
#### Reading the InvokeAI Configuration File
Once the root directory has been located, InvokeAI looks for a file
named `ROOT/invokeai.yaml`, and if present reads configuration values
from it. The top of this file looks like this:
```
InvokeAI:
Web Server:
host: localhost
port: 9090
allow_origins: []
allow_credentials: true
allow_methods:
- '*'
allow_headers:
- '*'
Features:
esrgan: true
internet_available: true
log_tokenization: false
patchmatch: true
restore: true
...
# Put user settings here - see https://invoke-ai.github.io/InvokeAI/features/CONFIGURATION/:
host: 0.0.0.0 # serve the app on your local network
models_dir: D:\invokeai\models # store models on an external drive
precision: float16 # always use fp16 precision
```
These lines in this file are used to establish default values for
Invoke's settings. In the above fragment, the Web Server's listening
port is set to 9090 by the `port` setting.
The settings in this file will override the defaults. You only need
to change this file if the default for a particular setting doesn't
work for you.
You can edit this file with a text editor such as "Notepad" (do not
use Word or any other word processor). When editing, be careful to
maintain the indentation, and do not add extraneous text, as syntax
errors will prevent InvokeAI from launching. A basic guide to the
format of YAML files can be found
[here](https://circleci.com/blog/what-is-yaml-a-beginner-s-guide/).
Some settings, like [Model Marketplace API Keys], require the YAML
to be formatted correctly. Here is a [basic guide to YAML files].
You can fix a broken `invokeai.yaml` by deleting it and running the
configuration script again -- option [6] in the launcher, "Re-run the
configure script".
#### Reading Environment Variables
#### Custom Config File Location
Next InvokeAI looks for defined environment variables in the format
`INVOKEAI_<setting_name>`, for example `INVOKEAI_port`. Environment
variable values take precedence over configuration file variables. On
a Macintosh system, for example, you could change the port that the
web server listens on by setting the environment variable this way:
You can use any config file with the `--config` CLI arg. Pass in the path to the `invokeai.yaml` file you want to use.
```
export INVOKEAI_port=8000
invokeai-web
Note that environment variables will trump any settings in the config file.
### Environment Variables
All settings may be set via environment variables by prefixing `INVOKEAI_`
to the variable name. For example, `INVOKEAI_HOST` would set the `host`
setting.
For non-primitive values, pass a JSON-encoded string:
```sh
export INVOKEAI_REMOTE_API_TOKENS='[{"url_regex":"modelmarketplace", "token": "12345"}]'
```
Please check out these
[Macintosh](https://phoenixnap.com/kb/set-environment-variable-mac)
and
[Windows](https://phoenixnap.com/kb/windows-set-environment-variable)
guides for setting temporary and permanent environment variables.
We suggest using `invokeai.yaml`, as it is more user-friendly.
#### Reading the Command Line
### CLI Args
Lastly, InvokeAI takes settings from the command line, which override
everything else. The command-line settings have the same name as the
corresponding configuration file settings, preceded by a `--`, for
example `--port 8000`.
A subset of settings may be specified using CLI args:
If you are using the launcher (`invoke.sh` or `invoke.bat`) to launch
InvokeAI, then just pass the command-line arguments to the launcher:
- `--root`: specify the root directory
- `--config`: override the default `invokeai.yaml` file location
```
invoke.bat --port 8000 --host 0.0.0.0
### All Settings
Following the table are additional explanations for certain settings.
<!-- prettier-ignore-start -->
::: invokeai.app.services.config.config_default.InvokeAIAppConfig
options:
heading_level: 4
members: false
show_docstring_description: false
group_by_category: true
show_category_heading: false
<!-- prettier-ignore-end -->
#### Model Marketplace API Keys
Some model marketplaces require an API key to download models. You can provide a URL pattern and the matching token in your `invokeai.yaml` file to supply that API key.
The pattern can be any valid regex (you may need to surround the pattern with quotes):
```yaml
remote_api_tokens:
# Any URL containing `models.com` will automatically use `your_models_com_token`
- url_regex: models.com
token: your_models_com_token
# Any URL matching this contrived regex will use `some_other_token`
- url_regex: '^[a-z]{3}whatever.*\.com$'
token: some_other_token
```
The arguments will be applied when you select the web server option
(and the other options as well).
The provided token will be added as a `Bearer` token to the network requests to download the model files. As far as we know, this works for all model marketplaces that require authorization.
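For intuition, here is a rough sketch (not the actual download code) of how a `url_regex` entry might be matched against a download URL to pick the token:
```python
# Illustrative only: pick the token whose url_regex matches the download URL
# and attach it as a Bearer header.
import re

remote_api_tokens = [
    {"url_regex": "models.com", "token": "your_models_com_token"},
    {"url_regex": r"^[a-z]{3}whatever.*\.com$", "token": "some_other_token"},
]

def auth_header_for(url: str) -> dict:
    for entry in remote_api_tokens:
        if re.search(entry["url_regex"], url):
            return {"Authorization": f"Bearer {entry['token']}"}
    return {}
```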
If, on the other hand, you prefer to launch InvokeAI directly from the
command line, you would first activate the virtual environment (known
as the "developer's console" in the launcher), and run `invokeai-web`:
#### Model Hashing
```
> C:\Users\Fred\invokeai\.venv\scripts\activate
(.venv) > invokeai-web --port 8000 --host 0.0.0.0
Models are hashed during installation, providing a stable identifier for models across all platforms. Hashing is a one-time operation.
```yaml
hashing_algorithm: blake3_single # default value
```
You can get a listing and brief instructions for each of the
command-line options by giving the `--help` argument:
You might want to change this setting, depending on your system:
```
(.venv) > invokeai-web --help
usage: InvokeAI [-h] [--host HOST] [--port PORT] [--allow_origins [ALLOW_ORIGINS ...]] [--allow_credentials | --no-allow_credentials] [--allow_methods [ALLOW_METHODS ...]]
[--allow_headers [ALLOW_HEADERS ...]] [--esrgan | --no-esrgan] [--internet_available | --no-internet_available] [--log_tokenization | --no-log_tokenization]
[--patchmatch | --no-patchmatch] [--restore | --no-restore]
[--always_use_cpu | --no-always_use_cpu] [--free_gpu_mem | --no-free_gpu_mem] [--max_loaded_models MAX_LOADED_MODELS] [--max_cache_size MAX_CACHE_SIZE]
[--max_vram_cache_size MAX_VRAM_CACHE_SIZE] [--gpu_mem_reserved GPU_MEM_RESERVED] [--precision {auto,float16,float32,autocast}]
[--sequential_guidance | --no-sequential_guidance] [--xformers_enabled | --no-xformers_enabled] [--tiled_decode | --no-tiled_decode] [--root ROOT]
[--autoimport_dir AUTOIMPORT_DIR] [--lora_dir LORA_DIR] [--embedding_dir EMBEDDING_DIR] [--controlnet_dir CONTROLNET_DIR] [--conf_path CONF_PATH]
[--models_dir MODELS_DIR] [--legacy_conf_dir LEGACY_CONF_DIR] [--db_dir DB_DIR] [--outdir OUTDIR] [--from_file FROM_FILE]
[--use_memory_db | --no-use_memory_db] [--model MODEL] [--log_handlers [LOG_HANDLERS ...]] [--log_format {plain,color,syslog,legacy}]
[--log_level {debug,info,warning,error,critical}] [--version | --no-version]
```
- `blake3_single` (default): Single-threaded - best for spinning HDDs, still OK for SSDs
- `blake3_multi`: Parallelized, memory-mapped implementation - best for SSDs, terrible for spinning disks
- `random`: Skip hashing entirely - fastest but of course no hash
## The Configuration Settings
During the first startup after upgrading to v4, all of your models will be hashed. This can take a few minutes.
The configuration settings are divided into several distinct
groups in `invokeai.yaml`:
Most common algorithms are supported, like `md5`, `sha256`, and `sha512`. These are typically much, much slower than either of the BLAKE3 variants.
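For a sense of what the setting controls, here is a rough sketch of hashing a single-file model with one of the standard `hashlib` algorithms (Invoke's own implementation also handles multi-file diffusers models and the BLAKE3 variants):
```python
# Illustrative only: compute a hex digest of a single model file.
import hashlib
from pathlib import Path

def hash_model_file(path: Path, algorithm: str = "sha256") -> str:
    hasher = hashlib.new(algorithm)
    with open(path, "rb") as f:
        while chunk := f.read(2**20):  # read in 1 MiB chunks
            hasher.update(chunk)
    return hasher.hexdigest()
```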
### Web Server
| Setting | Default Value | Description |
|---------------------|---------------|----------------------------------------------------------------------------------------------------------------------------|
| `host` | `localhost` | Name or IP address of the network interface that the web server will listen on |
| `port` | `9090` | Network port number that the web server will listen on |
| `allow_origins` | `[]` | A list of host names or IP addresses that are allowed to connect to the InvokeAI API in the format `['host1','host2',...]` |
| `allow_credentials` | `true` | Require credentials for a foreign host to access the InvokeAI API (don't change this) |
| `allow_methods` | `*` | List of HTTP methods ("GET", "POST") that the web server is allowed to use when accessing the API |
| `allow_headers` | `*` | List of HTTP headers that the web server will accept when accessing the API |
| `ssl_certfile` | null | Path to an SSL certificate file, used to enable HTTPS. |
| `ssl_keyfile` | null | Path to an SSL keyfile, if the key is not included in the certificate file. |
The documentation for InvokeAI's API can be accessed by browsing to the following URL: [http://localhost:9090/docs].
### Features
These configuration settings allow you to enable and disable various InvokeAI features:
| Setting | Default Value | Description |
|----------|----------------|--------------|
| `esrgan` | `true` | Activate the ESRGAN upscaling options|
| `internet_available` | `true` | When a resource is not available locally, try to fetch it via the internet |
| `log_tokenization` | `false` | Before each text2image generation, print a color-coded representation of the prompt to the console; this can help understand why a prompt is not working as expected |
| `patchmatch` | `true` | Activate the "patchmatch" algorithm for improved inpainting |
### Generation
These options tune InvokeAI's memory and performance characteristics.
| Setting | Default Value | Description |
|-----------------------|---------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `sequential_guidance` | `false` | Calculate guidance in serial rather than in parallel, lowering memory requirements at the cost of some performance loss |
| `attention_type` | `auto` | Select the type of attention to use. One of `auto`,`normal`,`xformers`,`sliced`, or `torch-sdp` |
| `attention_slice_size` | `auto` | When "sliced" attention is selected, set the slice size. One of `auto`, `balanced`, `max` or the integers 1-8|
| `force_tiled_decode` | `false` | Force the VAE step to decode in tiles, reducing memory consumption at the cost of performance |
### Device
These options configure the generation execution device.
| Setting | Default Value | Description |
|-----------------------|---------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `device` | `auto` | Preferred execution device. One of `auto`, `cpu`, `cuda`, `cuda:1`, `mps`. `auto` will choose the device depending on the hardware platform and the installed torch capabilities. |
| `precision` | `auto` | Floating point precision. One of `auto`, `float16` or `float32`. `float16` will consume half the memory of `float32` but produce slightly lower-quality images. The `auto` setting will guess the proper precision based on your video card and operating system |
### Paths
#### Path Settings
These options set the paths of various directories and files used by
InvokeAI. Relative paths are interpreted relative to INVOKEAI_ROOT, so
if INVOKEAI_ROOT is `/home/fred/invokeai` and the path is
InvokeAI. Relative paths are interpreted relative to the root directory, so
if root is `/home/fred/invokeai` and the path is
`autoimport/main`, then the corresponding directory will be located at
`/home/fred/invokeai/autoimport/main`.
| Setting | Default Value | Description |
|----------|----------------|--------------|
| `autoimport_dir` | `autoimport/main` | At startup time, read and import any main model files found in this directory |
| `lora_dir` | `autoimport/lora` | At startup time, read and import any LoRA/LyCORIS models found in this directory |
| `embedding_dir` | `autoimport/embedding` | At startup time, read and import any textual inversion (embedding) models found in this directory |
| `controlnet_dir` | `autoimport/controlnet` | At startup time, read and import any ControlNet models found in this directory |
| `conf_path` | `configs/models.yaml` | Location of the `models.yaml` model configuration file |
| `models_dir` | `models` | Location of the directory containing models installed by InvokeAI's model manager |
| `legacy_conf_dir` | `configs/stable-diffusion` | Location of the directory containing the .yaml configuration files for legacy checkpoint models |
| `db_dir` | `databases` | Location of the directory containing InvokeAI's image, schema and session database |
| `outdir` | `outputs` | Location of the directory in which the gallery of generated and uploaded images will be stored |
| `use_memory_db` | `false` | Keep database information in memory rather than on disk; this will not preserve image gallery information across restarts |
Note that the autoimport directories will be searched recursively,
Note that the autoimport directory will be searched recursively,
allowing you to organize the models into folders and subfolders in any
way you wish. In addition, while we have split up autoimport
directories by the type of model they contain, this isn't
necessary. You can combine different model types in the same folder
and InvokeAI will figure out what they are. So you can easily use just
one autoimport directory by commenting out the unneeded paths:
way you wish.
```
Paths:
autoimport_dir: autoimport
# lora_dir: null
# embedding_dir: null
# controlnet_dir: null
```
### Logging
These settings control the information, warning, and debugging
messages printed to the console log while InvokeAI is running:
| Setting | Default Value | Description |
|----------|----------------|--------------|
| `log_handlers` | `console` | This controls where log messages are sent, and can be a list of one or more destinations. Values include `console`, `file`, `syslog` and `http`. These are described in more detail below |
| `log_format` | `color` | This controls the formatting of the log messages. Values are `plain`, `color`, `legacy` and `syslog` |
| `log_level` | `debug` | This filters messages according to the level of severity and can be one of `debug`, `info`, `warning`, `error` and `critical`. For example, setting to `warning` will display all messages at the warning level or higher, but won't display "debug" or "info" messages |
#### Logging
Several different log handler destinations are available, and multiple destinations are supported by providing a list:
```
```yaml
log_handlers:
- console
- syslog=localhost
- file=/var/log/invokeai.log
```
* `console` is the default. It prints log messages to the command-line window from which InvokeAI was launched.
- `console` is the default. It prints log messages to the command-line window from which InvokeAI was launched.
* `syslog` is only available on Linux and Macintosh systems. It uses
- `syslog` is only available on Linux and Macintosh systems. It uses
the operating system's "syslog" facility to write log file entries
locally or to a remote logging machine. `syslog` offers a variety
of configuration options:
@ -271,7 +173,7 @@ Several different log handler destinations are available, and multiple destinati
- Log to LAN-connected server "fredserver" using the facility LOG_USER and datagram packets.
```
* `http` can be used to log to a remote web server. The server must be
- `http` can be used to log to a remote web server. The server must be
properly configured to receive and act on log messages. The option
accepts the URL to the web server, and a `method` argument
indicating whether the message should be submitted using the GET or
@ -283,7 +185,10 @@ Several different log handler destinations are available, and multiple destinati
The `log_format` option provides several alternative formats:
* `color` - default format providing time, date and a message, using text colors to distinguish different log severities
* `plain` - same as above, but monochrome text only
* `syslog` - the log level and error message only, allowing the syslog system to attach the time and date
* `legacy` - a format similar to the one used by the legacy 2.3 InvokeAI releases.
- `color` - default format providing time, date and a message, using text colors to distinguish different log severities
- `plain` - same as above, but monochrome text only
- `syslog` - the log level and error message only, allowing the syslog system to attach the time and date
- `legacy` - a format similar to the one used by the legacy 2.3 InvokeAI releases.
[basic guide to yaml files]: https://circleci.com/blog/what-is-yaml-a-beginner-s-guide/
[Model Marketplace API Keys]: #model-marketplace-api-keys

docs/features/DATABASE.md Normal file
View File

@ -0,0 +1,35 @@
---
title: Database
---
# Invoke's SQLite Database
Invoke uses a SQLite database to store image, workflow, model, and execution data.
We take great care to ensure your data is safe, by utilizing transactions and a database migration system.
Even so, when testing a prerelease version of the app, we strongly suggest either backing up your database or using an in-memory database. This ensures any prerelease hiccups or database schema changes will not cause problems for your data.
## Database Backup
Backing up your database is very simple. Invoke's data is stored in an `$INVOKEAI_ROOT` directory - where your `invoke.sh`/`invoke.bat` and `invokeai.yaml` files live.
To back up your database, copy the `invokeai.db` file from `$INVOKEAI_ROOT/databases/invokeai.db` to somewhere safe.
If anything comes up during prerelease testing, you can simply copy your backup back into `$INVOKEAI_ROOT/databases/`.
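For example, a small sketch using Python's standard library (the root path here is an assumption; point it at your actual `$INVOKEAI_ROOT`):
```python
# Copy the database file to a backup location. Adjust "root" to your
# actual $INVOKEAI_ROOT directory.
import shutil
from pathlib import Path

root = Path.home() / "invokeai"  # assumption: default root location
src = root / "databases" / "invokeai.db"
dest = Path.home() / "invokeai-backups" / "invokeai.db"
dest.parent.mkdir(parents=True, exist_ok=True)
shutil.copy2(src, dest)
```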
## In-Memory Database
SQLite can run on an in-memory database. Your existing database is untouched when this mode is enabled, but your existing data won't be accessible.
This is very useful for testing, as there is no chance of a database change modifying your "physical" database.
To run Invoke with a memory database, edit your `invokeai.yaml` file, and add `use_memory_db: true` to the `Paths:` stanza:
```yaml
InvokeAI:
Development:
use_memory_db: true
```
Delete this line (or set it to `false`) to use your main database.

View File

@ -1,580 +1,105 @@
---
title: Installing with the Automated Installer
---
# Automatic Install
# InvokeAI Automated Installation
The installer is used for both new installs and updates.
## Introduction
Both release and pre-release versions can be installed using it. It also supports installing from a wheel if needed.
The automated installer is a Python script that automates the steps
needed to install and run InvokeAI on a stock computer running recent
versions of Linux, MacOS or Windows. It will leave you with a version
that runs a stable version of InvokeAI with the option to upgrade to
experimental versions later.
Be sure to review the [installation requirements] and ensure your system has everything it needs to install Invoke.
## Walk through
## Getting the Latest Installer
1. <a name="hardware_requirements">**Hardware Requirements**: </a>Make sure that your system meets the [hardware
requirements](../index.md#hardware-requirements) and has the
appropriate GPU drivers installed. For a system with an NVIDIA
card installed, you will need to install the CUDA driver, while
AMD-based cards require the ROCm driver. In most cases, if you've
already used the system for gaming or other graphics-intensive
tasks, the appropriate drivers will already be installed. If
unsure, check the [GPU Driver Guide](030_INSTALL_CUDA_AND_ROCM.md)
Download the `InvokeAI-installer-vX.Y.Z.zip` file from the [latest release] page. It is at the bottom of the page, under **Assets**.
!!! info "Required Space"
After unzipping the installer, you should have an `InvokeAI-Installer` folder with some files inside, including `install.bat` and `install.sh`.
Installation requires roughly 18G of free disk space to load
the libraries and recommended model weights files.
## Running the Installer
Regardless of your destination disk, your *system drive*
(`C:\` on Windows, `/` on macOS/Linux) requires at least 6GB
of free disk space to download and cache python
dependencies.
!!! tip
NOTE for Linux users: if your temporary directory is mounted
as a `tmpfs`, ensure it has sufficient space.
Windows users should first double-click the `WinLongPathsEnabled.reg` file to prevent a failed installation due to long file paths.
2. <a name="software_requirements">**Software Requirements**: </a>Check that your system has an up-to-date Python installed. To do
this, open up a command-line window ("Terminal" on Linux and
Macintosh, "Command" or "Powershell" on Windows) and type `python
--version`. If Python is installed, it will print out the version
number. If it is version `3.10.*` or `3.11.*` you meet
requirements.
!!! warning "What to do if you have an unsupported version"
Go to [Python Downloads](https://www.python.org/downloads/)
and download the appropriate installer package for your
platform. We recommend [Version
3.10.12](https://www.python.org/downloads/release/python-3109/),
which has been extensively tested with InvokeAI.
_Please select your platform in the section below for platform-specific
setup requirements._
Double-click the install script:
=== "Windows"
During the Python configuration process, look out for a
checkbox to add Python to your PATH and select it. If the
install script complains that it can't find python, then open
the Python installer again and choose "Modify" existing
installation.
Installation requires an up to date version of the Microsoft
Visual C libraries. Please install the 2015-2022 libraries
available here:
https://learn.microsoft.com/en-US/cpp/windows/latest-supported-vc-redist?view=msvc-170
Please double-click on the file `WinLongPathsEnabled.reg` and
accept the dialog box that asks you if you wish to modify your registry.
This activates long filename support on your system and will prevent
mysterious errors during installation.
=== "Linux"
To install an appropriate version of Python on Ubuntu 22.04
and higher, run the following:
```
sudo apt update
sudo apt install -y python3 python3-pip python3-venv
sudo update-alternatives --install /usr/local/bin/python python /usr/bin/python3.10 3
```sh
install.bat
```
On Ubuntu 20.04, the process is slightly different:
=== "Linux/macOS"
```
sudo apt update
sudo apt install -y software-properties-common
sudo add-apt-repository -y ppa:deadsnakes/ppa
sudo apt install -y python3.10 python3-pip python3.10-venv
sudo update-alternatives --install /usr/local/bin/python python /usr/bin/python3.10 3
```sh
install.sh
```
Both `python` and `python3` commands are now pointing at
Python3.10. You can still access older versions of Python by
calling `python2`, `python3.8`, etc.
!!! info "Running the Installer from the commandline"
Linux systems require a couple of additional graphics
libraries to be installed for proper functioning of
`python3-opencv`. Please run the following:
You can also run the install script from cmd/powershell (Windows) or terminal (Linux/macOS).
`sudo apt update && sudo apt install -y libglib2.0-0 libgl1-mesa-glx`
!!! warning "Untrusted Publisher (Windows)"
=== "Mac"
You may get a popup saying the file comes from an `Untrusted Publisher`. Click `More Info` and `Run Anyway` to get past this.
After installing Python, you may need to run the
following command from the Terminal in order to install the Web
certificates needed to download model data from https sites. If
you see lots of CERTIFICATE ERRORS during the last part of the
install, this is the problem, and you can fix it with this command:
The installation process is simple, with a few prompts:
`/Applications/Python\ 3.10/Install\ Certificates.command`
- Select the version to install. Unless you have a specific reason to install a specific version, select the default (the latest version).
- Select location for the install. Be sure you have enough space in this folder for the base application, as described in the [installation requirements].
- Select a GPU device. If you are unsure, you can let the installer figure it out.
You may need to install the Xcode command line tools. These
are a set of tools that are needed to run certain applications in a
Terminal, including InvokeAI. This package is provided
directly by Apple. To install, open a terminal window and run `xcode-select --install`. You will get a macOS system popup guiding you through the
install. If you already have them installed, you will instead see some
output in the Terminal advising you that the tools are already installed. More information can be found at [FreeCode Camp](https://www.freecodecamp.org/news/install-xcode-command-line-tools/)
!!! info "Slow Installation"
3. **Download the Installer**: The InvokeAI installer is distributed as a ZIP file. Go to the
[latest release](https://github.com/invoke-ai/InvokeAI/releases/latest),
and look for a file named:
The installer needs to download several GB of data and install it all. It may appear to get stuck at 99.9% when installing `pytorch` or during a step labeled "Installing collected packages".
- InvokeAI-installer-v3.X.X.zip
If it is stuck for over 10 minutes, something has probably gone wrong and you should close the window and restart.
where "3.X.X" is the latest released version. The file is located
at the very bottom of the release page, under **Assets**.
## Running the Application
4. **Unpack the installer**: Unpack the zip file into a convenient directory. This will create a new
directory named "InvokeAI-Installer". When unpacked, the directory
will look like this:
Find the install location you selected earlier. Double-click the launcher script to run the app:
<figure markdown>
![zipfile-screenshot](../assets/installer-walkthrough/unpacked-zipfile.png)
</figure>
=== "Windows"
5. **Launch the installer script from the desktop**: If you are using a desktop GUI, double-click the installer file
appropriate for your platform. It will be named `install.bat` on
Windows systems and `install.sh` on Linux and Macintosh
systems. Be aware that your system's file browser may suppress the
display of the file extension.
On Windows systems you may get an "Untrusted Publisher" warning.
Click on "More Info" and then select "Run Anyway." You trust us, right?
6. **[Alternative] Launch the installer script from the command line**: Alternatively, from the command line, run the shell script or .bat file:
```cmd
C:\Documents\Linco> cd InvokeAI-Installer
C:\Documents\Linco\invokeAI> .\install.bat
```sh
invoke.bat
```
7. **Select the location to install InvokeAI**: The script will ask you to choose where to install InvokeAI. Select a
directory with at least 18G of free space for a full install. InvokeAI and
all its support files will be installed into a new directory named
`invokeai` located at the location you specify.
=== "Linux/macOS"
<figure markdown>
![confirm-install-directory-screenshot](../assets/installer-walkthrough/confirm-directory.png)
</figure>
- The default is to install the `invokeai` directory in your home directory,
usually `C:\Users\YourName\invokeai` on Windows systems,
`/home/YourName/invokeai` on Linux systems, and `/Users/YourName/invokeai`
on Macintoshes, where "YourName" is your login name.
- If you have previously installed InvokeAI, you will be asked to
confirm whether you want to reinstall into this directory. You
may choose to reinstall, in which case your version will be upgraded,
or choose a different directory.
- The script uses tab autocompletion to suggest directory path completions.
Type part of the path (e.g. "C:\Users") and press ++tab++ repeatedly
to suggest completions.
8. **Select your GPU**: The installer will autodetect your platform and will request you to
confirm the type of GPU your graphics card has. On Linux systems,
you will have the choice of CUDA (NVidia cards), ROCm (AMD cards),
or CPU (no graphics acceleration). On Windows, you'll have the
choice of CUDA vs CPU, and on Macs you'll be offered CPU only. When
you select CPU on M1/M2/M3 Macintoshes, you will get MPS-based
graphics acceleration without installing additional drivers. If you
are unsure what GPU you are using, you can ask the installer to
guess.
9. **Watch it go!**: Sit back and let the install script work. It will install the third-party
libraries needed by InvokeAI and the application itself.
Be aware that some of the library download and install steps take a long
time. In particular, the `pytorch` package is quite large and often appears
to get "stuck" at 99.9%. Have patience and the installation step will
eventually resume. However, there are occasions when the library install
does legitimately get stuck. If you have been waiting for more than ten
minutes and nothing is happening, you can interrupt the script with ^C. You
may restart it and it will pick up where it left off.
<figure markdown>
![initial-settings-screenshot](../assets/installer-walkthrough/settings-form.png)
</figure>
10. **Post-install Configuration**: After installation completes, the
installer will launch the configuration form, which will guide you
through the first-time process of adjusting some of InvokeAI's
startup settings. To move around this form use ctrl-N for
&lt;N&gt;ext and ctrl-P for &lt;P&gt;revious, or use &lt;tab&gt;
and shift-&lt;tab&gt; to move forward and back. Once you are in a
multi-checkbox field use the up and down cursor keys to select the
item you want, and &lt;space&gt; to toggle it on and off. Within
a directory field, pressing &lt;tab&gt; will provide autocomplete
options.
Generally the defaults are fine, and you can come back to this screen at
any time to tweak your system. Here are the options you can adjust:
- ***HuggingFace Access Token***
InvokeAI has the ability to download embedded styles and subjects
from the HuggingFace Concept Library on-demand. However, some of
the concept library files are password protected. To make download
smoother, you can set up an account at huggingface.co, obtain an
access token, and paste it into this field. Note that you paste
to this screen using ctrl-shift-V
- ***Free GPU memory after each generation***
This is useful for low-memory machines and helps minimize the
amount of GPU VRAM used by InvokeAI.
- ***Enable xformers support if available***
If the xformers library was successfully installed, this will activate
it to reduce memory consumption and increase rendering speed noticeably.
Note that xformers has the side effect of generating slightly different
images even when presented with the same seed and other settings.
- ***Force CPU to be used on GPU systems***
This will use the (slow) CPU rather than the accelerated GPU. This
can be used to generate images on systems that don't have a compatible
GPU.
- ***Precision***
This controls whether to use float32 or float16 arithmetic.
float16 uses less memory but is also slightly less accurate.
Ordinarily the right arithmetic is picked automatically ("auto"),
but you may have to use float32 to get images on certain systems
and graphics cards. The "autocast" option is deprecated and
shouldn't be used unless you are asked to by a member of the team.
- ***Size of the RAM cache used for fast model switching***
This allows you to keep models in memory and switch rapidly among
them rather than having them load from disk each time. This slider
controls how many models to keep loaded at once. A typical SD-1 or SD-2 model
uses 2-3 GB of memory. A typical SDXL model uses 6-7 GB. Providing more
RAM will allow more models to be co-resident.
- ***Output directory for images***
This is the path to a directory in which InvokeAI will store all its
generated images.
- ***Autoimport Folder***
This is the directory in which you can place models you have
downloaded and wish to load into InvokeAI. You can place a variety
of models in this directory, including diffusers folders, .ckpt files,
.safetensors files, as well as LoRAs, ControlNet and Textual Inversion
files (both folder and file versions). To help organize this folder,
you can create several levels of subfolders and drop your models into
whichever ones you want.
- ***LICENSE***
At the bottom of the screen you will see a checkbox for accepting
the CreativeML Responsible AI Licenses. You need to accept the license
in order to download Stable Diffusion models from the next screen.
_You can come back to the startup options form_ as many times as you like.
From the `invoke.sh` or `invoke.bat` launcher, select option (6) to relaunch
this script. On the command line, it is named `invokeai-configure`.
11. **Downloading Models**: After you press `[NEXT]` on the screen, you will be taken
to another screen that prompts you to download a series of starter models. The ones
we recommend are preselected for you, but you are encouraged to use the checkboxes to
pick and choose.
You will probably wish to download `autoencoder-840000` for use with models that
were trained with an older version of the Stability VAE.
<figure markdown>
![select-models-screenshot](../assets/installer-walkthrough/installing-models.png)
</figure>
Below the preselected list of starter models is a large text field which you can use
to specify a series of models to import. You can specify models in a variety of formats,
each separated by a space or newline. The formats accepted are:
- The path to a .ckpt or .safetensors file. On most systems, you can drag a file from
the file browser to the textfield to automatically paste the path. Be sure to remove
extraneous quotation marks and other things that come along for the ride.
- The path to a directory containing a combination of `.ckpt` and `.safetensors` files.
The directory will be scanned from top to bottom (including subfolders) and any
file that can be imported will be.
- A URL pointing to a `.ckpt` or `.safetensors` file. You can cut
and paste directly from a web page, or simply drag the link from the web page
or navigation bar. (You can also use ctrl-shift-V to paste into this field)
The file will be downloaded and installed.
- The HuggingFace repository ID (repo_id) for a `diffusers` model. These IDs have
the format _author_name/model_name_, as in `andite/anything-v4.0`
- The path to a local directory containing a `diffusers`
model. These directories always have the file `model_index.json`
at their top level.
_Select a directory for models to import_ You may select a local
directory for autoimporting at startup time. If you select this
option, the directory you choose will be scanned for new
.ckpt/.safetensors files each time InvokeAI starts up, and any new
files will be automatically imported and made available for your
use.
_Convert imported models into diffusers_ When legacy checkpoint
files are imported, you may select to use them unmodified (the
default) or to convert them into `diffusers` models. The latter
load much faster and have slightly better rendering performance,
but not all checkpoint files can be converted. Note that Stable Diffusion
Version 2.X files are **only** supported in `diffusers` format and will
be converted regardless.
_You can come back to the model install form_ as many times as you like.
From the `invoke.sh` or `invoke.bat` launcher, select option (5) to relaunch
this script. On the command line, it is named `invokeai-model-install`.
12. **Running InvokeAI for the first time**: The script will now exit and you'll be ready to generate some images. Look
for the directory `invokeai` installed in the location you chose at the
beginning of the install session. Look for a shell script named `invoke.sh`
(Linux/Mac) or `invoke.bat` (Windows). Launch the script by double-clicking
it or typing its name at the command-line:
```cmd
C:\Documents\Linco> cd invokeai
C:\Documents\Linco\invokeAI> invoke.bat
```sh
invoke.sh
```
- The `invoke.bat` (`invoke.sh`) script will give you the choice
of starting (1) the command-line interface, (2) the web GUI, (3)
textual inversion training, and (4) model merging.
Choose the first option to run the UI. After a series of startup messages, you'll see something like this:
- By default, the script will launch the web interface. When you
do this, you'll see a series of startup messages ending with
instructions to point your browser at
http://localhost:9090. Click on this link to open up a browser
and start exploring InvokeAI's features.
12. **InvokeAI Options**: You can launch InvokeAI with several different command-line arguments that
customize its behavior. For example, you can change the location of the
image output directory or balance memory usage vs performance. See
[Configuration](../features/CONFIGURATION.md) for a full list of the options.
- To set defaults that will take effect every time you launch InvokeAI,
use a text editor (e.g. Notepad) to edit the file
`invokeai\invokeai.init`. It contains a variety of examples that you can
follow to add and modify launch options.
- The launcher script also offers you an option labeled "open the developer
console". If you choose this option, you will be dropped into a
command-line interface in which you can run python commands directly,
access developer tools, and launch InvokeAI with customized options.
!!! warning "Do not move or remove the `invokeai` directory"
The `invokeai` directory contains the `invokeai` application, its
configuration files, the model weight files, and outputs of image generation.
Once InvokeAI is installed, do not move or remove this directory.
<a name="troubleshooting"></a>
## Troubleshooting
### _OSErrors on Windows while installing dependencies_
During a zip file installation or an online update, installation stops
with an error like this:
![broken-dependency-screenshot](../assets/troubleshooting/broken-dependency.png){:width="800px"}
This seems to happen particularly often with the `pydantic` and
`numpy` packages. The most reliable solution requires several manual
steps to complete installation.
Open up a Powershell window and navigate to the `invokeai` directory
created by the installer. Then give the following series of commands:
```cmd
rm .\.venv -r -force
python -mvenv .venv
.\.venv\Scripts\activate
pip install invokeai
invokeai-configure --yes --root .
```
Uvicorn running on http://127.0.0.1:9090 (Press CTRL+C to quit)
```
If you see anything marked as an error during this process please stop
and seek help on the Discord [installation support
channel](https://discord.com/channels/1020123559063990373/1041391462190956654). A
few warning messages are OK.
Copy the URL into your browser and you should see the UI.
If you are updating from a previous version, this should restore your
system to a working state. If you are installing from scratch, there
is one additional command to give:
## First-time Setup
```cmd
wget -O invoke.bat https://raw.githubusercontent.com/invoke-ai/InvokeAI/main/installer/templates/invoke.bat.in
```
You will need to [install some models] before you can generate.
This will create the `invoke.bat` script needed to launch InvokeAI and
its related programs.
Check the [configuration docs] for details on configuring the application.
## Updating
### _Stable Diffusion XL Generation Fails after Trying to Load unet_
Updating is exactly the same as installing - download the latest installer, choose the latest version and off you go.
InvokeAI is working in other respects, but when trying to generate
images with Stable Diffusion XL you get a "Server Error". The text log
in the launch window contains this log line above several more lines of
error messages:
!!! info "Dependency Resolution Issues"
```INFO --> Loading model:D:\LONG\PATH\TO\MODEL, type sdxl:main:unet```
We've found that pip's dependency resolution can cause issues when upgrading packages. One very common problem was pip "downgrading" torch from CUDA to CPU, but things broke in other novel ways.
This failure mode occurs when there is a network glitch during
downloading the very large SDXL model.
The installer doesn't have this kind of problem, so we use it for updating as well.
To address this, first go to the Web Model Manager and delete the
Stable-Diffusion-XL-base-1.X model. Then navigate to HuggingFace and
manually download the .safetensors version of the model. The 1.0
version is located at
https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/tree/main
and the file is named `sd_xl_base_1.0.safetensors`.
## Installation Issues
Save this file to disk and then reenter the Model Manager. Navigate to
Import Models->Add Model, then type (or drag-and-drop) the path to the
.safetensors file. Press "Add Model".
If you have installation issues, please review the [installation troubleshooting] docs. You can also [create an issue] or ask for help on [discord].
### _Package dependency conflicts_
If you have previously installed InvokeAI or another Stable Diffusion
package, the installer may occasionally pick up outdated libraries and
either the installer or `invoke` will fail with complaints about
library conflicts. In this case, run the `invoke.sh`/`invoke.bat`
command and enter the Developer's Console by picking option (5). This
will take you to a command-line prompt.
Then give this command:
`pip install InvokeAI --force-reinstall`
This should fix the issues.
### InvokeAI runs extremely slowly on Linux or Windows systems
The most frequent cause of this problem is when the installation
process installed the CPU-only version of the torch machine-learning
library, rather than a version that takes advantage of GPU
acceleration. To confirm this issue, look at the InvokeAI startup
messages. If you see a message saying ">> Using device CPU", then
this is what happened.
To fix this problem, first determine whether you have an NVidia or an
AMD GPU. The former uses the CUDA driver, and the latter uses ROCm
(only available on Linux). Then run the `invoke.sh`/`invoke.bat`
command and enter the Developer's Console by picking option (5). This
will take you to a command-line prompt.
Then type the following commands:
=== "NVIDIA System"
```bash
pip install torch torchvision --force-reinstall --extra-index-url https://download.pytorch.org/whl/cu121
pip install xformers
```
=== "AMD System"
```bash
pip install torch torchvision --force-reinstall --extra-index-url https://download.pytorch.org/whl/rocm5.6
```
### Corrupted configuration file
Everything seems to install ok, but `invokeai` complains of a corrupted
configuration file and goes back into the configuration process (asking you to
download models, etc), but this doesn't fix the problem.
This issue is often caused by a misconfigured configuration directive in the
`invokeai\invokeai.init` initialization file that contains startup settings. The
easiest way to fix the problem is to move the file out of the way and re-run
`invokeai-configure`. Enter the developer's console (option 3 of the launcher
script) and run this command:
```cmd
invokeai-configure --root=.
```
Note the dot (.) after `--root`. It is part of the command.
_If none of these maneuvers fixes the problem_ then please report the problem to
the [InvokeAI Issues](https://github.com/invoke-ai/InvokeAI/issues) section, or
visit our [Discord Server](https://discord.gg/ZmtBAhwWhy) for interactive
assistance.
### Out of Memory Issues
The models are large, VRAM is expensive, and you may find yourself
faced with Out of Memory errors when generating images. Here are some
tips to reduce the problem:
* **4 GB of VRAM**
This should be adequate for 512x512 pixel images using Stable Diffusion 1.5
and derived models, provided that you **disable** the NSFW checker. To
disable the filter, do one of the following:
* Select option (6) "_change InvokeAI startup options_" from the
launcher. This will bring up the console-based startup settings
dialogue and allow you to unselect the "NSFW Checker" option.
* Start the startup settings dialogue directly by running
`invokeai-configure --skip-sd-weights --skip-support-models`
from the command line.
* Find the `invokeai.init` initialization file in the InvokeAI root
directory, open it in a text editor, and change `--nsfw_checker`
to `--no-nsfw_checker`
If you are on a CUDA system, you can realize significant memory
savings by activating the `xformers` library as described above. The
downside is `xformers` introduces non-deterministic behavior, such
that images generated with exactly the same prompt and settings will
be slightly different from each other. See above for more information.
* **6 GB of VRAM**
This is a border case. Using the SD 1.5 series you should be able to
generate images up to 640x640 with the NSFW checker enabled, and up to
1024x1024 with it disabled and `xformers` activated.
If you run into persistent memory issues there are a series of
environment variables that you can set before launching InvokeAI that
alter how the PyTorch machine learning library manages memory. See
https://pytorch.org/docs/stable/notes/cuda.html#memory-management for
a list of these tweaks.
* **12 GB of VRAM**
This should be sufficient to generate larger images up to about
1280x1280. If you wish to push further, consider activating
`xformers`.
### Other Problems
If you run into problems during or after installation, the InvokeAI team is
available to help you. Either create an
[Issue](https://github.com/invoke-ai/InvokeAI/issues) at our GitHub site, or
make a request for help on the "bugs-and-support" channel of our
[Discord server](https://discord.gg/ZmtBAhwWhy). We are a 100% volunteer
organization, but typically somebody will be available to help you within 24
hours, and often much sooner.
## Updating to newer versions
This distribution is changing rapidly, and we add new features
regularly. Releases are announced at
http://github.com/invoke-ai/InvokeAI/releases, and at
https://pypi.org/project/InvokeAI/. To update to the latest released
version (recommended), follow these steps:
1. Start the `invoke.sh`/`invoke.bat` launch script from within the
`invokeai` root directory.
2. Choose menu item (10) "Update InvokeAI".
3. This will launch a menu that gives you the option of:
1. Updating to the latest official release;
2. Updating to the bleeding-edge development version; or
3. Manually entering the tag or branch name of a version of
InvokeAI you wish to try out.
[installation requirements]: INSTALLATION.md#installation-requirements
[installation troubleshooting]: INSTALL_TROUBLESHOOTING.md
[install some models]: 050_INSTALLING_MODELS.md
[configuration docs]: ../features/CONFIGURATION.md
[latest release]: https://github.com/invoke-ai/InvokeAI/releases/latest
[create an issue]: https://github.com/invoke-ai/InvokeAI/issues
[discord]: https://discord.gg/ZmtBAhwWhy

View File

@ -1,122 +1,59 @@
---
title: Installing Manually
---
<figure markdown>
# :fontawesome-brands-linux: Linux | :fontawesome-brands-apple: macOS | :fontawesome-brands-windows: Windows
</figure>
# Manual Install
!!! warning "This is for Advanced Users"
**Python experience is mandatory**
**Python experience is mandatory.**
## Introduction
!!! tip "Conda"
As of InvokeAI v2.3.0 installation using the `conda` package manager is no longer being supported. It will likely still work, but we are not testing this installation method.
On Windows systems, you are encouraged to install and use the
[PowerShell](https://learn.microsoft.com/en-us/powershell/scripting/install/installing-powershell-on-windows?view=powershell-7.3),
which provides compatibility with Linux and Mac shells and nice
features such as command-line completion.
InvokeAI is distributed as a python package on PyPI, installable with `pip`. There are a few things that are handled by the installer that you'll need to manage manually, described in this guide.
### Prerequisites
### Requirements
Before you start, make sure you have the following prerequisites
installed. These are described in more detail in [Automated
Installation](010_INSTALL_AUTOMATED.md), and in many cases will
already be installed (if, for example, you have used your system for
gaming):
* **Python**
version 3.10 through 3.11
* **CUDA Tools**
For those with _NVidia GPUs_, you will need to
install the [CUDA toolkit and optionally the XFormers library](070_INSTALL_XFORMERS.md).
* **ROCm Tools**
For _Linux users with AMD GPUs_, you will need
to install the [ROCm toolkit](./030_INSTALL_CUDA_AND_ROCM.md). Note that
InvokeAI does not support AMD GPUs on Windows systems due to
lack of a Windows ROCm library.
* **Visual C++ Libraries**
_Windows users_ must install the free
[Visual C++ libraries from Microsoft](https://learn.microsoft.com/en-US/cpp/windows/latest-supported-vc-redist?view=msvc-170)
* **The Xcode command line tools**
for _Macintosh users_. Instructions are available at
[Free Code Camp](https://www.freecodecamp.org/news/install-xcode-command-line-tools/)
* _Macintosh users_ may also need to run the `Install Certificates` command
if model downloads give lots of certificate errors. Run:
`/Applications/Python\ 3.10/Install\ Certificates.command`
Before you start, go through the [installation requirements].
### Installation Walkthrough
To install InvokeAI with virtual environments and the PIP package
manager, please follow these steps:
1. Please make sure you are using Python 3.10 through 3.11. The rest of the install
procedure depends on this and will not work with other versions:
```bash
python -V
```
2. Create a directory to contain your InvokeAI library, configuration
1. Create a directory to contain your InvokeAI library, configuration
files, and models. This is known as the "runtime" or "root"
directory, and often lives in your home directory under the name `invokeai`.
Please keep in mind the disk space requirements - you will need at
least 20GB for the models and the virtual environment. From now
on we will refer to this directory as `INVOKEAI_ROOT`. For convenience,
the steps below create a shell variable of that name which contains the
path to `HOME/invokeai`.
We will refer to this directory as `INVOKEAI_ROOT`. For convenience, create an environment variable pointing to the directory.
=== "Linux/Mac"
=== "Linux/macOS"
```bash
export INVOKEAI_ROOT=~/invokeai
mkdir $INVOKEAI_ROOT
```
=== "Windows (Powershell)"
=== "Windows (PowerShell)"
```bash
Set-Variable -Name INVOKEAI_ROOT -Value $Home/invokeai
mkdir $INVOKEAI_ROOT
```
3. Enter the root (invokeai) directory and create a virtual Python
environment within it named `.venv`. If the command `python`
doesn't work, try `python3`. Note that while you may create the
virtual environment anywhere in the file system, we recommend that
you create it within the root directory as shown here. This makes
it possible for the InvokeAI applications to find the model data
and configuration. If you do not choose to install the virtual
environment inside the root directory, then you **must** set the
`INVOKEAI_ROOT` environment variable in your shell environment, for
example, by editing `~/.bashrc` or `~/.zshrc` files, or setting the
Windows environment variable using the Advanced System Settings dialogue.
Refer to your operating system documentation for details.
1. Enter the root (invokeai) directory and create a virtual Python environment within it named `.venv`.
!!! info "Virtual Environment Location"
While you may create the virtual environment anywhere in the file system, we recommend that you create it within the root directory as shown here. This allows the application to automatically detect its data directories.
If you choose a different location for the venv, then you must set the `INVOKEAI_ROOT` environment variable or pass the directory using the `--root` CLI arg.
```terminal
cd $INVOKEAI_ROOT
python -m venv .venv --prompt InvokeAI
python3 -m venv .venv --prompt InvokeAI
```
4. Activate the new environment:
1. Activate the new environment:
=== "Linux/Mac"
=== "Linux/macOS"
```bash
source .venv/bin/activate
@ -128,22 +65,23 @@ manager, please follow these steps:
.venv\Scripts\activate
```
!!! info "Permissions Error (Windows)"
If you get a permissions error at this point, run this command and try again
`Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser`
The command-line prompt should change to show `(InvokeAI)` at the
beginning of the prompt. Note that all the following steps should be
run while inside the INVOKEAI_ROOT directory.
The command-line prompt should change to show `(InvokeAI)` at the beginning of the prompt.
5. Make sure that pip is installed in your virtual environment and up to date:
The following steps should be run while inside the `INVOKEAI_ROOT` directory.
1. Make sure that pip is installed in your virtual environment and up to date:
```bash
python -m pip install --upgrade pip
python3 -m pip install --upgrade pip
```
6. Install the InvokeAI Package. The `--extra-index-url` option is used to select among
CUDA, ROCm and CPU/MPS drivers as shown below:
1. Install the InvokeAI Package. The `--extra-index-url` option is used to select the correct `torch` backend:
=== "CUDA (NVidia)"
@ -163,16 +101,15 @@ manager, please follow these steps:
pip install InvokeAI --use-pep517 --extra-index-url https://download.pytorch.org/whl/cpu
```
=== "MPS (M1 and M2 Macs)"
=== "MPS (Apple Silicon)"
```bash
pip install InvokeAI --use-pep517
```
7. Deactivate and reactivate your runtime directory so that the invokeai-specific commands
become available in the environment
1. Deactivate and reactivate your runtime directory so that the invokeai-specific commands become available in the environment:
=== "Linux/Macintosh"
=== "Linux/macOS"
```bash
deactivate && source .venv/bin/activate
@ -185,221 +122,45 @@ manager, please follow these steps:
.venv\Scripts\activate
```
8. Set up the runtime directory
1. Run the application:
In this step you will initialize your runtime directory with the downloaded
models, model config files, directory for textual inversion embeddings, and
your outputs.
Run `invokeai-web` to start the UI. You must activate the virtual environment before running the app.
```terminal
invokeai-configure --root .
```
Don't miss the dot at the end of the command!
The script `invokeai-configure` will interactively guide you through the
process of downloading and installing the weights files needed for InvokeAI.
Note that the main Stable Diffusion weights file is protected by a license
agreement that you have to agree to. The script will list the steps you need
to take to create an account on the site that hosts the weights files,
accept the agreement, and provide an access token that allows InvokeAI to
legally download and install the weights files.
If you get an error message about a module not being installed, check that
the `invokeai` environment is active and if not, repeat step 5.
!!! tip
If you have already downloaded the weights file(s) for another Stable
Diffusion distribution, you may skip this step (by selecting "skip" when
prompted) and configure InvokeAI to use the previously-downloaded files. The
process for this is described in [Installing Models](050_INSTALLING_MODELS.md).
9. Run the command-line or web interface:
From within INVOKEAI_ROOT, activate the environment
(with `source .venv/bin/activate` or `.venv\scripts\activate`), and then run
the script `invokeai`. If the virtual environment you selected is NOT inside
INVOKEAI_ROOT, then you must specify the path to the root directory by adding
`--root_dir \path\to\invokeai` to the commands below:
!!! example ""
!!! warning "Make sure that the virtual environment is activated, which should create `(.venv)` in front of your prompt!"
=== "local Webserver"
```bash
invokeai-web
```
=== "Public Webserver"
```bash
invokeai-web --host 0.0.0.0
```
=== "CLI"
```bash
invokeai
```
If you choose to run the web interface, point your browser at
http://localhost:9090 to load the GUI.
If the virtual environment you selected is NOT inside `INVOKEAI_ROOT`, then you must specify the path to the root directory by adding
`--root_dir \path\to\invokeai`.
!!! tip
You can permanently set the location of the runtime directory
by setting the environment variable `INVOKEAI_ROOT` to the
path of the directory. As mentioned previously, this is
**highly recommended** if your virtual environment is located outside of
recommended if your virtual environment is located outside of
your runtime directory.
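For example, on Linux or macOS you could persist the variable in your shell profile. A minimal sketch, assuming your runtime directory is `~/invokeai`:
```bash
# Make INVOKEAI_ROOT available in every new shell session
echo 'export INVOKEAI_ROOT="$HOME/invokeai"' >> ~/.bashrc
source ~/.bashrc
```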
!!! tip
On linux, it is recommended to run invokeai with the following env var: `MALLOC_MMAP_THRESHOLD_=1048576`. For example: `MALLOC_MMAP_THRESHOLD_=1048576 invokeai --web`. This helps to prevent memory fragmentation that can lead to memory accumulation over time. This env var is set automatically when running via `invoke.sh`.
On Linux, it is recommended to run invokeai with the following env var: `MALLOC_MMAP_THRESHOLD_=1048576`. For example: `MALLOC_MMAP_THRESHOLD_=1048576 invokeai --web`. This helps to prevent memory fragmentation that can lead to memory accumulation over time. This env var is set automatically when running via `invoke.sh`.
10. Render away!
## Unsupported Conda Install
Browse the [features](../features/index.md) section to learn about all the
things you can do with InvokeAI.
Congratulations, you found the "secret" Conda installation instructions. If you really **really** want to use Conda with InvokeAI, you can do so using this unsupported recipe:
11. Subsequently, to relaunch the script, activate the virtual environment, and
then launch the `invokeai` command. If you forget to activate the virtual
environment you will most likely receive a `command not found` error.
!!! warning
Do not move the runtime directory after installation. The virtual environment will get confused if the directory is moved.
12. Other scripts
The [Textual Inversion](../features/TRAINING.md) script can be launched with the command:
```bash
invokeai-ti --gui
```
Similarly, the [Model Merging](../features/MODEL_MERGING.md) script can be launched with the command:
```bash
invokeai-merge --gui
```
Leave off the `--gui` option to run the script using command-line arguments. Pass the `--help` argument
to get usage instructions.
## Developer Install
!!! warning
InvokeAI uses a SQLite database. By running on `main`, you accept responsibility for your database. This
means making regular backups (especially before pulling) and/or fixing it yourself in the event that a
PR introduces a schema change.
If you don't need persistent backend storage, you can use an ephemeral in-memory database by setting
`use_memory_db: true` under `Path:` in your `invokeai.yaml` file.
If this is untenable, you should run the application via the official installer or a manual install of the
python package from pypi. These releases will not break your database.
If you have an interest in how InvokeAI works, or you would like to
add features or bugfixes, you are encouraged to install the source
code for InvokeAI. For this to work, you will need to install the
`git` source code management program. If it is not already installed
on your system, please see the [Git Installation
Guide](https://github.com/git-guides/install-git).
You will also need to install the [frontend development toolchain](https://github.com/invoke-ai/InvokeAI/blob/main/invokeai/frontend/web/README.md).
If you have a "normal" installation, you should create a totally separate virtual environment for the git-based installation, else the two may interfere.
> **Why do I need the frontend toolchain**?
>
> The InvokeAI project uses trunk-based development. That means our `main` branch is the development branch, and releases are tags on that branch. Because development is very active, we don't keep an updated build of the UI in `main` - we only build it for production releases.
>
> That means that between releases, to have a functioning application when running directly from the repo, you will need to run the UI in dev mode or build it regularly (any time the UI code changes).
1. Create a fork of the InvokeAI repository through the GitHub UI or [this link](https://github.com/invoke-ai/InvokeAI/fork)
2. From the command line, run this command:
```bash
git clone https://github.com/<your_github_username>/InvokeAI.git
```
This will create a directory named `InvokeAI` and populate it with the
full source code from your fork of the InvokeAI repository.
3. Activate the InvokeAI virtual environment as per step (4) of the manual
installation protocol (important!)
4. Enter the InvokeAI repository directory and run one of these
commands, based on your GPU:
=== "CUDA (NVidia)"
```bash
pip install -e .[xformers] --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu121
```
=== "ROCm (AMD)"
```bash
pip install -e . --use-pep517 --extra-index-url https://download.pytorch.org/whl/rocm5.6
```
=== "CPU (Intel Macs & non-GPU systems)"
```bash
pip install -e . --use-pep517 --extra-index-url https://download.pytorch.org/whl/cpu
```
=== "MPS (M1 and M2 Macs)"
```bash
pip install -e . --use-pep517
```
Be sure to pass `-e` (for an editable install) and don't forget the
dot ("."). It is part of the command.
5. Install the [frontend toolchain](https://github.com/invoke-ai/InvokeAI/blob/main/invokeai/frontend/web/README.md) and do a production build of the UI as described.
6. You can now run `invokeai` and its related commands. The code will be
read from the repository, so that you can edit the .py source files
and watch the code's behavior change.
When you pull in new changes to the repo, be sure to re-build the UI.
7. If you wish to contribute to the InvokeAI project, you are
encouraged to establish a GitHub account and "fork"
https://github.com/invoke-ai/InvokeAI into your own copy of the
repository. You can then use GitHub functions to create and submit
pull requests to contribute improvements to the project.
Please see [Contributing](../index.md#contributing) for hints
on getting started.
### Unsupported Conda Install
Congratulations, you found the "secret" Conda installation
instructions. If you really **really** want to use Conda with InvokeAI
you can do so using this unsupported recipe:
```
```sh
mkdir ~/invokeai
conda create -n invokeai python=3.10
conda create -n invokeai python=3.11
conda activate invokeai
# Adjust this as described above for the appropriate torch backend
pip install InvokeAI[xformers] --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu121
invokeai-configure --root ~/invokeai
invokeai --root ~/invokeai --web
invokeai-web --root ~/invokeai
```
The `pip install` command shown in this recipe is for Linux/Windows
systems with an NVIDIA GPU. See step (6) above for the command to use
with other platforms/GPU combinations. If you don't wish to pass the
`--root` argument to `invokeai` with each launch, you may set the
environment variable INVOKEAI_ROOT to point to the installation directory.
environment variable `INVOKEAI_ROOT` to point to the installation directory.
Note that if you run into problems with the Conda installation, the InvokeAI
staff will **not** be able to help you out. Caveat Emptor!
[dev-chat]: https://discord.com/channels/1020123559063990373/1049495067846524939
[installation requirements]: INSTALL_REQUIREMENTS.md

View File

@ -1,149 +0,0 @@
---
title: NVIDIA Cuda / AMD ROCm
---
<figure markdown>
# :simple-nvidia: CUDA | :simple-amd: ROCm
</figure>
In order for InvokeAI to run at full speed, you will need a graphics
card with a supported GPU. InvokeAI supports NVidia cards via the CUDA
driver on Windows and Linux, and AMD cards via the ROCm driver on Linux.
## :simple-nvidia: CUDA
### Linux and Windows Install
If you have used your system for other graphics-intensive tasks, such
as gaming, you may very well already have the CUDA drivers
installed. To confirm, open up a command-line window and type:
```
nvidia-smi
```
If this command produces a status report on the GPU(s) installed on
your system, CUDA is installed and you have no more work to do. If
instead you get "command not found", or similar, then the driver will
need to be installed.
We strongly recommend that you install the CUDA Toolkit package
directly from NVIDIA. **Do not try to install Ubuntu's
nvidia-cuda-toolkit package. It is out of date and will cause
conflicts among the NVIDIA driver and binaries.**
Go to [CUDA Toolkit
Downloads](https://developer.nvidia.com/cuda-downloads), and use the
target selection wizard to choose your operating system, hardware
platform, and preferred installation method (e.g. "local" versus
"network").
This will provide you with a downloadable install file or, depending
on your choices, a recipe for downloading and running an install shell
script. Be sure to read and follow the full installation instructions.
After an install that seems successful, you can confirm by again
running `nvidia-smi` from the command line.
### Linux Install with a Runtime Container
On Linux systems, an alternative to installing CUDA Toolkit directly on
your system is to run an NVIDIA software container that has the CUDA
libraries already in place. This is recommended if you are already
familiar with containerization technologies such as Docker.
For downloads and instructions, visit the [NVIDIA CUDA Container
Runtime Site](https://developer.nvidia.com/nvidia-container-runtime)
### cuDNN Installation for 40/30 Series Optimization (Optional)
1. Find the InvokeAI folder
2. Click on .venv folder - e.g., YourInvokeFolderHere\\.venv
3. Click on Lib folder - e.g., YourInvokeFolderHere\\.venv\Lib
4. Click on site-packages folder - e.g., YourInvokeFolderHere\\.venv\Lib\site-packages
5. Click on Torch directory - e.g., YourInvokeFolderHere\InvokeAI\\.venv\Lib\site-packages\torch
6. Click on the lib folder - e.g., YourInvokeFolderHere\\.venv\Lib\site-packages\torch\lib
7. Copy everything inside the folder and save it elsewhere as a backup.
8. Go to __https://developer.nvidia.com/cudnn__
9. Login or create an Account.
10. Choose the newer version of cuDNN. **Note:**
There are two versions, 11.x and 12.x, for the different GPU architectures (Turing, Maxwell, etc.).
You can find which version you should download from [this link](https://docs.nvidia.com/deeplearning/cudnn/support-matrix/index.html).
11. Download the latest version and extract it from the download location
12. Find the bin folder E\cudnn-windows-x86_64-__Whatever Version__\bin
13. Copy and paste the .dll files into YourInvokeFolderHere\\.venv\Lib\site-packages\torch\lib **Make sure to copy, and not move the files**
14. If prompted, replace any existing files
**Notes:**
* If no change is seen or any issues are encountered, follow the same steps as above and paste the torch/lib backup folder you made earlier and replace it. If you didn't make a backup, you can also uninstall and reinstall torch through the command line to repair this folder.
* This optimization is intended for the newer generation of graphics cards (40/30 series), but improvements have also been reported with older cards.
### Torch Installation
When installing torch and torchvision manually with `pip`, remember to provide
the argument `--extra-index-url
https://download.pytorch.org/whl/cu121` as described in the [Manual
Installation Guide](020_INSTALL_MANUAL.md).
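As a sketch, a manual CUDA-enabled install might look like this (adjust the index URL if you need a different CUDA version):
```sh
pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/cu121
```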
## :simple-amd: ROCm
### Linux Install
AMD GPUs are only supported on Linux platforms due to the lack of a
Windows ROCm driver at the current time. Also be aware that support
for newer AMD GPUs is spotty. Your mileage may vary.
It is possible that the ROCm driver is already installed on your
machine. To test, open up a terminal window and issue the following
command:
```
rocm-smi
```
If you get a table labeled "ROCm System Management Interface" the
driver is installed and you are done. If you get "command not found,"
then the driver needs to be installed.
Go to AMD's [ROCm Downloads
Guide](https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation_new.html#installation-methods)
and scroll to the _Installation Methods_ section. Find the subsection
for the install method for your preferred Linux distribution, and
issue the commands given in the recipe.
Annoyingly, the official AMD site does not have a recipe for the most
recent version of Ubuntu, 22.04. However, this [community-contributed
recipe](https://novaspirit.github.io/amdgpu-rocm-ubu22/) is reported
to work well.
After installation, please run `rocm-smi` a second time to confirm
that the driver is present and the GPU is recognized. You may need to
do a reboot in order to load the driver.
### Linux Install with a ROCm-docker Container
If you are comfortable with the Docker containerization system, then
you can build a ROCm docker file. The source code and installation
recipes are available
[Here](https://github.com/RadeonOpenCompute/ROCm-docker/blob/master/quick-start.md)
### Torch Installation
When installing torch and torchvision manually with `pip`, remember to provide
the argument `--extra-index-url
https://download.pytorch.org/whl/rocm5.6` as described in the [Manual
Installation Guide](020_INSTALL_MANUAL.md).
This will be done automatically for you if you use the installer
script.
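If you do need to install the wheels by hand, the command might look like this (a sketch; match the ROCm version to your installed driver):
```sh
pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/rocm5.6
```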
Be aware that the torch machine learning library does not seamlessly
interoperate with all AMD GPUs and you may experience garbled images,
black images, or long startup delays before rendering commences. Most
of these issues can be solved by Googling for workarounds. If you have
a problem and find a solution, please post an
[Issue](https://github.com/invoke-ai/InvokeAI/issues) so that other
users benefit and we can update this document.

View File

@ -1,186 +1,55 @@
---
title: Installing Models
---
# :octicons-paintbrush-16: Installing Models
# Installing Models
## Checkpoint and Diffusers Models
The model checkpoint files ('\*.ckpt') are the Stable Diffusion
"secret sauce". They are the product of training the AI on millions of
captioned images gathered from multiple sources.
The model checkpoint files (`*.ckpt`) are the Stable Diffusion "secret sauce". They are the product of training the AI on millions of captioned images gathered from multiple sources.
Originally there was only a single Stable Diffusion weights file,
which many people named `model.ckpt`. Now there are dozens or more
that have been fine-tuned to provide particular styles, genres, or
other features. In addition, there are several new formats that
improve on the original checkpoint format: a `.safetensors` format
which prevents malware from masquerading as a model, and `diffusers`
models, the most recent innovation.
Originally there was only a single Stable Diffusion weights file, which many people named `model.ckpt`.
InvokeAI supports all three formats but strongly prefers the
`diffusers` format. These are distributed as directories containing
multiple subfolders, each of which contains a different aspect of the
model. The advantage of this is that the models load from disk really
fast. Another advantage is that `diffusers` models are supported by a
large and active set of open source developers working at and with
HuggingFace organization, and improvements in both rendering quality
and performance are being made at a rapid pace. Among other features
is the ability to download and install a `diffusers` model just by
providing its HuggingFace repository ID.
Today, there are thousands of models, fine tuned to excel at specific styles, genres, or themes.
While InvokeAI will continue to support `.ckpt` and `.safetensors`
models for the near future, these are deprecated and support will
likely be withdrawn at some point in the not-too-distant future.
!!! tip "Model Formats"
This manual will guide you through installing and configuring model
weight files and converting legacy `.ckpt` and `.safetensors` files
into performant `diffusers` models.
We also have two more popular model formats, both created by [HuggingFace]:
## Base Models
- `safetensors`: Single file, like `.ckpt` files. Prevents malware from lurking in a model.
- `diffusers`: Splits the model components into separate files, allowing very fast loading.
InvokeAI comes with support for a good set of starter models. You'll
find them listed in the master models file
`configs/INITIAL_MODELS.yaml` in the InvokeAI root directory. The
subset that are currently installed are found in
`configs/models.yaml`.
InvokeAI supports all three formats. Our backend will convert models to `diffusers` format before running them. This is a transparent process.
Note that these files are covered by an "Ethical AI" license which
forbids certain uses. When you initially download them, you are asked
to accept the license terms. In addition, some of these models carry
additional license terms that limit their use in commercial
applications or on public servers. Be sure to familiarize yourself
with the model terms by visiting the URLs in the table above.
## Starter Models
## Community-Contributed Models
When you first start InvokeAI, you'll see a popup prompting you to install some starter models from the Model Manager.
[HuggingFace](https://huggingface.co/models?library=diffusers)
is a great resource for diffusers models, and is also the home of a
[fast-growing repository](https://huggingface.co/sd-concepts-library)
of embedding (".bin") models that add subjects and/or styles to your
images. The latter are automatically installed on the fly when you
include the text `<concept-name>` in your prompt. See [Concepts
Library](../features/CONCEPTS.md) for more information.
You'll find a collection of popular and high-quality models available for easy download.
Another popular site for community-contributed models is
[CIVITAI](https://civitai.com). This extensive site currently supports
only `.safetensors` and `.ckpt` models, but they can be easily loaded
into InvokeAI and/or converted into optimized `diffusers` models. Be
aware that CIVITAI hosts many models that generate NSFW content.
Some models carry license terms that limit their use in commercial applications or on public servers. It's your responsibility to adhere to the license terms.
## Installation
## Other Models
There are two ways to install and manage models:
You can install other models using the Model Manager. Supported install sources include:
1. The `invokeai-model-install` script which will download and install
them for you. In addition to supporting main models, you can install
ControlNet, LoRA and Textual Inversion models.
- Local path: The file path to the model on your computer.
- URL: A link directly to the model, typically to a model marketplace. Some sites require you to use an API token to download models, which you can [set up in the config file].
- `HuggingFace` repo ID: This points to a HF model. Repo IDs look like this: `XpucT/Deliberate`.
- Folder: Scan a local folder for models. You can install all of the detected models in one click.
2. The web interface (WebUI) has a GUI for importing and managing
models.
### Diffusers models in HF repo subfolders
3. By placing models (or symbolic links to models) inside one of the
InvokeAI root directory's `autoimport` folder.
HuggingFace repos can be structured in any way. Some model authors include multiple models within the same folder.
### Installation via `invokeai-model-install`
In this situation, you may need to provide some additional information to identify the model you want, by adding `:subfolder_name` to the repo ID.
From the `invoke` launcher, choose option [4] "Download and install
models." This will launch the same script that prompted you to select
models at install time. You can use this to add models that you
skipped the first time around. It is all right to specify a model that
was previously downloaded; the script will just confirm that the files
are complete.
!!! example
The installer has different panels for installing main models from
HuggingFace, models from Civitai and other arbitrary web sites,
ControlNet models, LoRA/LyCORIS models, and Textual Inversion
embeddings. Each section has a text box in which you can enter a new
model to install. You can refer to a model using its:
Say you have a repo ID `monster-labs/control_v1p_sd15_qrcode_monster`, and the model you want is inside the `v2` subfolder.
1. Local path to the .ckpt, .safetensors or diffusers folder on your local machine
2. A directory on your machine that contains multiple models
3. A URL that points to a downloadable model
4. A HuggingFace repo id
Add `:v2` to the repo ID and use that when installing the model: `monster-labs/control_v1p_sd15_qrcode_monster:v2`
Previously-installed models are shown with checkboxes. Uncheck a box
to unregister the model from InvokeAI. Models that are physically
installed inside the InvokeAI root directory will be deleted and
purged (after a confirmation warning). Models that are located outside
the InvokeAI root directory will be unregistered but not deleted.
## Autoimport
Note: The installer script uses a console-based text interface that requires
significant amounts of horizontal and vertical space. If the display
looks messed up, just enlarge the terminal window and/or relaunch the
script.
In the InvokeAI root directory you will find an `autoimport` directory. On startup, any models in this directory will be installed and copied into the Invoke-managed models directory.
If you wish you can script model addition and deletion, as well as
listing installed models. Start the "developer's console" and give the
command `invokeai-model-install --help`. This will give you a series
of command-line parameters that will let you control model
installation. Examples:
The location of the autoimport directories is controlled by settings in `invokeai.yaml`. See [Configuration](../features/CONFIGURATION.md).
```
# (list all controlnet models)
invokeai-model-install --list controlnet
# (install the model at the indicated URL)
invokeai-model-install --add https://civitai.com/api/download/models/128713
# (delete the named model)
invokeai-model-install --delete sd-1/main/analog-diffusion
```
### Installation via the Web GUI
To install a new model using the Web GUI, do the following:
1. Open the InvokeAI Model Manager (cube at the bottom of the
left-hand panel) and navigate to *Import Models*
2. In the field labeled *Location* type in the path to the model you
wish to install. You may use a URL, HuggingFace repo id, or a path on
your local disk.
3. Alternatively, the *Scan for Models* button allows you to paste in
the path to a folder somewhere on your machine. It will be scanned for
importable models and prompt you to add the ones of your choice.
4. Press *Add Model* and wait for confirmation that the model
was added.
To delete a model, Select *Model Manager* to list all the currently
installed models. Press the trash can icons to delete any models you
wish to get rid of. Models whose weights are located inside the
InvokeAI `models` directory will be purged from disk, while those
located outside will be unregistered from InvokeAI, but not deleted.
You can see where model weights are located by clicking on the model name.
This will bring up an editable info panel showing the model's characteristics,
including the `Model Location` of its files.
### Installation via the `autoimport` function
In the InvokeAI root directory you will find a series of folders under
`autoimport`, one each for main models, controlnets, embeddings and
Loras. Any models that you add to these directories will be scanned
at startup time and registered automatically.
You may create symbolic links from these folders to models located
elsewhere on disk and they will be autoimported. You can also create
subfolders and organize them as you wish.
The location of the autoimport directories is controlled by settings
in `invokeai.yaml`. See [Configuration](../features/CONFIGURATION.md).
### Installing models that live in HuggingFace subfolders
On rare occasions you may need to install a diffusers-style model that
lives in a subfolder of a HuggingFace repo id. In this event, simply
add ":_subfolder-name_" to the end of the repo id. For example, if the
repo id is "monster-labs/control_v1p_sd15_qrcode_monster" and the model
you wish to fetch lives in a subfolder named "v2", then the repo id to
pass to the various model installers should be
```
monster-labs/control_v1p_sd15_qrcode_monster:v2
```
[set up in the config file]: ../features/CONFIGURATION.md#model-marketplace-api-keys

View File

@ -1,204 +0,0 @@
---
title: Installing xFormers
---
# :material-image-size-select-large: Installing xformers
xFormers is a toolbox that integrates with the pyTorch and CUDA
libraries to provide accelerated performance and reduced memory
consumption for applications using the transformers machine learning
architecture. After installing xFormers, InvokeAI users who have
CUDA GPUs will see a noticeable decrease in GPU memory consumption and
an increase in speed.
xFormers can be installed into a working InvokeAI installation without
any code changes or other updates. This document explains how to
install xFormers.
## Pip Install
For both Windows and Linux, you can install `xformers` in just a
couple of steps from the command line.
If you are used to launching `invoke.sh` or `invoke.bat` to start
InvokeAI, then run the launcher and select the "developer's console"
to get to the command line. If you run invoke.py directly from the
command line, then just be sure to activate its virtual environment.
Then run the following three commands:
```sh
pip install xformers~=0.0.22
pip install triton # WON'T WORK ON WINDOWS
python -m xformers.info
```
The first command installs `xformers`, the second installs the
`triton` training accelerator, and the third prints out the `xformers`
installation status. On Windows, please omit the `triton` package,
which is not available on that platform.
If all goes well, you'll see a report like the
following:
```sh
xFormers 0.0.22
memory_efficient_attention.cutlassF: available
memory_efficient_attention.cutlassB: available
memory_efficient_attention.flshattF: available
memory_efficient_attention.flshattB: available
memory_efficient_attention.smallkF: available
memory_efficient_attention.smallkB: available
memory_efficient_attention.tritonflashattF: available
memory_efficient_attention.tritonflashattB: available
indexing.scaled_index_addF: available
indexing.scaled_index_addB: available
indexing.index_select: available
swiglu.dual_gemm_silu: available
swiglu.gemm_fused_operand_sum: available
swiglu.fused.p.cpp: available
is_triton_available: True
is_functorch_available: False
pytorch.version: 2.1.0+cu121
pytorch.cuda: available
gpu.compute_capability: 8.9
gpu.name: NVIDIA GeForce RTX 4070
build.info: available
build.cuda_version: 1108
build.python_version: 3.10.11
build.torch_version: 2.1.0+cu121
build.env.TORCH_CUDA_ARCH_LIST: 5.0+PTX 6.0 6.1 7.0 7.5 8.0 8.6
build.env.XFORMERS_BUILD_TYPE: Release
build.env.XFORMERS_ENABLE_DEBUG_ASSERTIONS: None
build.env.NVCC_FLAGS: None
build.env.XFORMERS_PACKAGE_FROM: wheel-v0.0.20
build.nvcc_version: 11.8.89
source.privacy: open source
```
## Source Builds
`xformers` is currently under active development and at some point you
may wish to build it from source to get the latest features and
bugfixes.
### Source Build on Linux
Note that xFormers only works with true NVIDIA GPUs and will not work
properly with the ROCm driver for AMD acceleration.
xFormers is not currently available as a pip binary wheel and must be
installed from source. These instructions were written for a system
running Ubuntu 22.04, but other Linux distributions should be able to
adapt this recipe.
#### 1. Install CUDA Toolkit 12.1
You will need the CUDA developer's toolkit in order to compile and
install xFormers. **Do not try to install Ubuntu's nvidia-cuda-toolkit
package.** It is out of date and will cause conflicts among the NVIDIA
driver and binaries. Instead install the CUDA Toolkit package provided
by NVIDIA itself. Go to [CUDA Toolkit 12.1
Downloads](https://developer.nvidia.com/cuda-12-1-0-download-archive)
and use the target selection wizard to choose your platform and Linux
distribution. Select an installer type of "runfile (local)" at the
last step.
This will provide you with a recipe for downloading and running a
install shell script that will install the toolkit and drivers.
#### 2. Confirm/Install pyTorch 2.1.0 with CUDA 12.1 support
If you are using InvokeAI 3.0.2 or higher, these will already be
installed. If not, you can check whether you have the needed libraries
using a quick command. Activate the invokeai virtual environment,
either by entering the "developer's console", or manually with a
command similar to `source ~/invokeai/.venv/bin/activate` (depending
on where your `invokeai` directory is).
Then run the command:
```sh
python -c 'exec("import torch\nprint(torch.__version__)")'
```
If it prints __2.1.0+cu121__ you're good. If not, you can install the
most up to date libraries with this command:
```sh
pip install --upgrade --force-reinstall torch torchvision
```
#### 3. Install the triton module
This module isn't necessary for xFormers image inference optimization,
but avoids a startup warning.
```sh
pip install triton
```
#### 4. Install source code build prerequisites
To build xFormers from source, you will need the `build-essential`
package. If you don't have it installed already, run:
```sh
sudo apt install build-essential
```
#### 5. Build xFormers
There is no pip wheel package for xFormers at this time (January
2023). Although there is a conda package, InvokeAI no longer
officially supports conda installations and you're on your own if you
wish to try this route.
Following the recipe provided at the [xFormers GitHub
page](https://github.com/facebookresearch/xformers), and with the
InvokeAI virtual environment active (see step 1) run the following
commands:
```sh
pip install ninja
export TORCH_CUDA_ARCH_LIST="6.0;6.1;6.2;7.0;7.2;7.5;8.0;8.6"
pip install -v -U git+https://github.com/facebookresearch/xformers.git@main#egg=xformers
```
The TORCH_CUDA_ARCH_LIST is a list of GPU architectures to compile
xFormers support for. You can speed up compilation by selecting
the architecture specific for your system. You'll find the list of
GPUs and their architectures at NVIDIA's [GPU Compute
Capability](https://developer.nvidia.com/cuda-gpus) table.
If the compile and install completes successfully, you can check that
xFormers is installed with this command:
```sh
python -m xformers.info
```
If successful, the top of the listing should indicate "available" for
each of the `memory_efficient_attention` modules, as shown here:
```sh
memory_efficient_attention.cutlassF: available
memory_efficient_attention.cutlassB: available
memory_efficient_attention.flshattF: available
memory_efficient_attention.flshattB: available
memory_efficient_attention.smallkF: available
memory_efficient_attention.smallkB: available
memory_efficient_attention.tritonflashattF: available
memory_efficient_attention.tritonflashattB: available
[...]
```
You can now launch InvokeAI and enjoy the benefits of xFormers.
### Windows
To come
---
(c) Copyright 2023 Lincoln Stein and the InvokeAI Development Team

View File

@ -1,89 +0,0 @@
---
title: build binary installers
---
# :simple-buildkite: How to build "binary" installers (InvokeAI-mac/windows/linux_on_*.zip)
## 1. Ensure `installers/requirements.in` is correct
and up to date on the branch to be installed.
## <a name="step-2"></a> 2. Run `pip-compile` on each platform.
On each target platform, in the branch that is to be installed, and
inside the InvokeAI git root folder, run the following commands:
```commandline
conda activate invokeai # or however you activate python
pip install pip-tools
pip-compile --allow-unsafe --generate-hashes --output-file=binary_installer/<reqsfile>.txt binary_installer/requirements.in
```
where `<reqsfile>.txt` is whichever of
```commandline
py3.10-darwin-arm64-mps-reqs.txt
py3.10-darwin-x86_64-reqs.txt
py3.10-linux-x86_64-cuda-reqs.txt
py3.10-windows-x86_64-cuda-reqs.txt
```
matches the current OS and architecture.
> There is no way to cross-compile these. They must be done on a system matching the target OS and arch.
## <a name="step-3"></a> 3. Set github repository and branch
Once all reqs files have been collected and committed **to the branch
to be installed**, edit `binary_installer/install.sh.in` and `binary_installer/install.bat.in` so that `RELEASE_URL`
and `RELEASE_SOURCEBALL` point to the github repo and branch that is
to be installed.
For example, to install `main` branch of `InvokeAI`, they should be
set as follows:
`install.sh.in`:
```commandline
RELEASE_URL=https://github.com/invoke-ai/InvokeAI
RELEASE_SOURCEBALL=/archive/refs/heads/main.tar.gz
```
`install.bat.in`:
```commandline
set RELEASE_URL=https://github.com/invoke-ai/InvokeAI
set RELEASE_SOURCEBALL=/archive/refs/heads/main.tar.gz
```
Or, to install `damians-cool-feature` branch of `damian0815`, set them
as follows:
`install.sh.in`:
```commandline
RELEASE_URL=https://github.com/damian0815/InvokeAI
RELEASE_SOURCEBALL=/archive/refs/heads/damians-cool-feature.tar.gz
```
`install.bat.in`:
```commandline
set RELEASE_URL=https://github.com/damian0815/InvokeAI
set RELEASE_SOURCEBALL=/archive/refs/heads/damians-cool-feature.tar.gz
```
The branch and repo specified here **must** contain the correct reqs
files. The installer zip files **do not** contain requirements files,
they are pulled from the specified branch during the installation
process.
## 4. Create zip files.
cd into the `installers/` folder and run
`./create_installers.sh`. This will create
`InvokeAI-mac_on_<branch>.zip`,
`InvokeAI-windows_on_<branch>.zip` and
`InvokeAI-linux_on_<branch>.zip`. These files can be distributed to end users.
These zips will continue to function as installers for all future
pushes to those branches, as long as necessary changes to
`requirements.in` are propagated in a timely manner to the
`py3.10-*-reqs.txt` files using pip-compile as outlined in [step
2](#step-2).
To actually install, users should unzip the appropriate zip file into an empty
folder and run `install.sh` on macOS/Linux or `install.bat` on
Windows.

View File

@ -1,88 +1,41 @@
# Overview
# Installation Overview
We offer several ways to install InvokeAI, each one suited to your
experience and preferences. We suggest that everyone start by
reviewing the
[hardware](010_INSTALL_AUTOMATED.md#hardware_requirements) and
[software](010_INSTALL_AUTOMATED.md#software_requirements)
requirements, as they are the same across each install method. Then
pick the install method most suitable to your level of experience and
needs.
Before installing, review the [installation requirements] to ensure your system is set up properly.
See the [troubleshooting
section](010_INSTALL_AUTOMATED.md#troubleshooting) of the automated
install guide for frequently-encountered installation issues.
See the [installation troubleshooting] guide for frequently-encountered installation issues.
This fork is supported across Linux, Windows and Macintosh. Linux users can use
either an Nvidia-based card (with CUDA support) or an AMD card (using the ROCm
driver).
If you need more help, join our [discord] or [create an issue].
<h2>Automatic Install</h2>
## **[Automated Installer (Recommended)](010_INSTALL_AUTOMATED.md)**
✅ This is the recommended installation method for first-time users.
✅ The automatic install is the best way to run InvokeAI. Check out the [installation guide] to get started.
This is a script that will install all of InvokeAI's essential
third party libraries and InvokeAI itself.
<h2>Manual Install</h2>
🖥️ **Download the latest installer .zip file here** : https://github.com/invoke-ai/InvokeAI/releases/latest
If you are familiar with python and want more control over the packages that are installed, you can [install InvokeAI manually via PyPI].
- *Look for the file labelled "InvokeAI-installer-v3.X.X.zip" at the bottom of the page*
- If you experience issues, read through the full [installation instructions](010_INSTALL_AUTOMATED.md) to make sure you have met all of the installation requirements. If you need more help, join the [Discord](https://discord.gg/invoke-ai) or create an issue on [GitHub](https://github.com/invoke-ai/InvokeAI).
<h2>Developer Install</h2>
If you want to contribute to InvokeAI, consult the [developer install guide].
<h2>Docker Install</h2>
## **[Manual Installation](020_INSTALL_MANUAL.md)**
This method is recommended for experienced users and developers.
In this method you will manually run the commands needed to install
InvokeAI and its dependencies. We offer two recipes: one suited to
those who prefer the `conda` tool, and one suited to those who prefer
`pip` and Python virtual environments. In our hands the pip install
is faster and more reliable, but your mileage may vary.
Note that the conda installation method is currently deprecated and
will not be supported at some point in the future.
## **[Docker Installation](040_INSTALL_DOCKER.md)**
This method is recommended for those familiar with running Docker containers.
We offer a method for creating Docker containers containing InvokeAI and its dependencies. This method is recommended for individuals who have experience with Docker containers and understand the pluses and minuses of a container-based install.
## Other Installation Guides
See the [docker installation guide].
<h2>Other Installation Guides</h2>
- [PyPatchMatch](060_INSTALL_PATCHMATCH.md)
- [XFormers](070_INSTALL_XFORMERS.md)
- [CUDA and ROCm Drivers](030_INSTALL_CUDA_AND_ROCM.md)
- [Installing New Models](050_INSTALLING_MODELS.md)
## :fontawesome-solid-computer: Hardware Requirements
### :octicons-cpu-24: System
You will need one of the following:
- :simple-nvidia: An NVIDIA-based graphics card with 4 GB or more VRAM memory.
- :simple-amd: An AMD-based graphics card with 4 GB or more VRAM memory (Linux
only)
- :fontawesome-brands-apple: An Apple computer with an M1 chip.
**SDXL 1.0 Requirements**
To use SDXL, users must have one of the following:
- :simple-nvidia: An NVIDIA-based graphics card with 8 GB or more VRAM memory.
- :simple-amd: An AMD-based graphics card with 16 GB or more VRAM memory (Linux
only)
- :fontawesome-brands-apple: An Apple computer with an M1 chip.
### :fontawesome-solid-memory: Memory and Disk
- At least 12 GB Main Memory RAM.
- At least 18 GB of free disk space for the machine learning model, Python, and
all its dependencies.
We do **not recommend** the following video cards due to issues with their
running in half-precision mode and having insufficient VRAM to render 512x512
images in full-precision mode:
- NVIDIA 10xx series cards such as the 1080ti
- GTX 1650 series cards
- GTX 1660 series cards
- [Installing Models](050_INSTALLING_MODELS.md)
[install InvokeAI manually via PyPI]: 020_INSTALL_MANUAL.md
[developer install guide]: INSTALL_DEVELOPMENT.md
[docker installation guide]: 040_INSTALL_DOCKER.md
[installation guide]: 010_INSTALL_AUTOMATED.md
[installation troubleshooting]: INSTALL_TROUBLESHOOTING.md
[discord]: https://discord.gg/invoke-ai
[create an issue]: https://github.com/invoke-ai/InvokeAI/issues
[installation requirements]: INSTALL_REQUIREMENTS.md

View File

@ -0,0 +1,33 @@
# Developer Install
!!! warning
InvokeAI uses a SQLite database. By running on `main`, you accept responsibility for your database. This
means making regular backups (especially before pulling) and/or fixing it yourself in the event that a
PR introduces a schema change.
If you don't need persistent backend storage, you can use an ephemeral in-memory database by setting
`use_memory_db: true` under `Path:` in your `invokeai.yaml` file.
If this is untenable, you should run the application via the official installer or a manual install of the
python package from pypi. These releases will not break your database.
If you have an interest in how InvokeAI works, or you would like to add features or bugfixes, you are encouraged to install the source code for InvokeAI.
!!! info "Why do I need the frontend toolchain?"
The repo doesn't contain a build of the frontend. You'll be responsible for rebuilding it (or running it in dev mode) to use the app, as described in the [frontend dev toolchain] docs.
<h2> Installation </h2>
1. [Fork and clone] the [InvokeAI repo].
1. Follow the [manual installation] docs to create a new virtual environment for the development install.
1. Install the [frontend dev toolchain] and do a production build of the UI as described.
1. You can now run the app as described in the [manual installation] docs.
As described in the [frontend dev toolchain] docs, you can run the UI using a dev server. If you do this, you won't need to continually rebuild the frontend. Instead, you run the dev server and use the app with the server URL it provides.
[Fork and clone]: https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/fork-a-repo
[InvokeAI repo]: https://github.com/invoke-ai/InvokeAI
[frontend dev toolchain]: ../contributing/frontend/OVERVIEW.md
[manual installation]: installation/020_INSTALL_MANUAL.md

View File

@ -0,0 +1,181 @@
# Requirements
## GPU
!!! warning "Problematic Nvidia GPUs"
We do not recommend these GPUs. They cannot operate with half precision, but have insufficient VRAM to generate 512x512 images at full precision.
- NVIDIA 10xx series cards such as the 1080 TI
- GTX 1650 series cards
- GTX 1660 series cards
Invoke runs best with a dedicated GPU, but will fall back to running on CPU, albeit much slower. You'll need a beefier GPU for SDXL.
!!! example "Stable Diffusion 1.5"
=== "Nvidia"
```
Any GPU with at least 4GB VRAM.
```
=== "AMD"
```
Any GPU with at least 4GB VRAM. Linux only.
```
=== "Mac"
```
Any Apple Silicon Mac with at least 8GB memory.
```
!!! example "Stable Diffusion XL"
=== "Nvidia"
```
Any GPU with at least 8GB VRAM.
```
=== "AMD"
```
Any GPU with at least 16GB VRAM. Linux only.
```
=== "Mac"
```
Any Apple Silicon Mac with at least 16GB memory.
```
## RAM
At least 12GB of RAM.
## Disk
SSDs will, of course, offer the best performance.
The base application disk usage depends on the torch backend.
!!! example "Disk"
=== "Nvidia (CUDA)"
```
~6.5GB
```
=== "AMD (ROCm)"
```
~12GB
```
=== "Mac (MPS)"
```
~3.5GB
```
You'll need to set aside some space for images, depending on how much you generate. A couple GB is enough to get started.
You'll need a good chunk of space for models. Even if you only install the most popular models and the usual support models (ControlNet, IP Adapter, etc.), you will quickly hit 50GB of models.
!!! info "`tmpfs` on Linux"
If your temporary directory is mounted as a `tmpfs`, ensure it has sufficient space.
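A quick way to check, assuming your temporary directory is `/tmp`:
```sh
# Show the size and free space of the filesystem backing /tmp
df -h /tmp
```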
## Python
Invoke requires python 3.10 or 3.11. If you don't already have one of these versions installed, we suggest installing 3.11, as it will be supported for longer.
Check that your system has an up-to-date Python installed by running `python --version` in the terminal (Linux, macOS) or cmd/powershell (Windows).
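For example, a quick check from the terminal:
```sh
# Should report Python 3.10.x or 3.11.x
python --version
```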
<h3>Installing Python (Windows)</h3>
- Install python 3.11 with [an official installer].
- The installer includes an option to add python to your PATH. Be sure to enable this. If you missed it, re-run the installer, choose to modify an existing installation, and tick that checkbox.
- You may need to install [Microsoft Visual C++ Redistributable].
<h3>Installing Python (macOS)</h3>
- Install python 3.11 with [an official installer].
- If model installs fail with a certificate error, you may need to run this command (changing the python version to match what you have installed): `/Applications/Python\ 3.10/Install\ Certificates.command`
- If you haven't already, you will need to install the XCode CLI Tools by running `xcode-select --install` in a terminal.
<h3>Installing Python (Linux)</h3>
- Follow the [linux install instructions], being sure to install python 3.11.
- You'll need to install `libglib2.0-0` and `libgl1-mesa-glx` for OpenCV to work. For example, on a Debian system: `sudo apt update && sudo apt install -y libglib2.0-0 libgl1-mesa-glx`
## Drivers
If you have an Nvidia or AMD GPU, you may need to manually install drivers or other support packages for things to work well or at all.
### Nvidia
Run `nvidia-smi` on your system's command line to verify that drivers and CUDA are installed. If this command fails, or doesn't report versions, you will need to install drivers.
Go to the [CUDA Toolkit Downloads] and carefully follow the instructions for your system to get everything installed.
Confirm that `nvidia-smi` displays driver and CUDA versions after installation.
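For example, a healthy setup prints a status table rather than an error:
```sh
# Should print a table that includes the driver and CUDA versions
nvidia-smi
```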
#### Linux - via Nvidia Container Runtime
An alternative to installing CUDA locally is to use the [Nvidia Container Runtime] to run the application in a container.
#### Windows - Nvidia cuDNN DLLs
An out-of-date cuDNN library can greatly hamper performance on 30-series and 40-series cards. Check with the community on discord to compare your `it/s` if you think you may need this fix.
First, locate the destination for the DLL files and make a quick back up:
1. Find your InvokeAI installation folder, e.g. `C:\Users\Username\InvokeAI\`.
1. Open the `.venv` folder, e.g. `C:\Users\Username\InvokeAI\.venv` (you may need to show hidden files to see it).
1. Navigate deeper to the `torch` package, e.g. `C:\Users\Username\InvokeAI\.venv\Lib\site-packages\torch`.
1. Copy the `lib` folder inside `torch` and back it up somewhere.
Next, download and copy the updated cuDNN DLLs:
1. Go to <https://developer.nvidia.com/cudnn>.
1. Create an account if needed and log in.
1. Choose the newest version of cuDNN that works with your GPU architecture. Consult the [cuDNN support matrix] to determine the correct version for your GPU.
1. Download the latest version and extract it.
1. Find the `bin` folder, e.g. `cudnn-windows-x86_64-SOME_VERSION\bin`.
1. Copy and paste the `.dll` files into the `lib` folder you located earlier. Replace files when prompted.
If, after restarting the app, this doesn't improve your performance, either restore your backup or re-run the installer to reset `torch` back to its original state.
### AMD
!!! info "Linux Only"
AMD GPUs are supported on Linux only, due to ROCm (the AMD equivalent of CUDA) support being Linux only.
!!! warning "Bumps Ahead"
While the application does run on AMD GPUs, there are occasional bumps related to spotty torch support.
Run `rocm-smi` on your system's command line to verify that drivers and ROCm are installed. If this command fails, or doesn't report versions, you will need to install them.
Go to the [ROCm Documentation] and carefully follow the instructions for your system to get everything installed.
Confirm that `rocm-smi` displays driver and ROCm versions after installation.
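For example:
```sh
# Should print the "ROCm System Management Interface" table
rocm-smi
```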
#### Linux - via Docker Container
An alternative to installing ROCm locally is to use a [ROCm docker container] to run the application in a container.
[ROCm docker container]: https://github.com/ROCm/ROCm-docker
[ROCm Documentation]: https://rocm.docs.amd.com/projects/install-on-linux/en/latest/tutorial/quick-start.html
[cuDNN support matrix]: https://docs.nvidia.com/deeplearning/cudnn/support-matrix/index.html
[Nvidia Container Runtime]: https://developer.nvidia.com/container-runtime
[linux install instructions]: https://docs.python-guide.org/starting/install3/linux/
[Microsoft Visual C++ Redistributable]: https://learn.microsoft.com/en-US/cpp/windows/latest-supported-vc-redist?view=msvc-170
[an official installer]: https://www.python.org/downloads/release/python-3118/
[CUDA Toolkit Downloads]: https://developer.nvidia.com/cuda-downloads

View File

@ -0,0 +1,96 @@
# Installation Troubleshooting
!!! info "How to Reinstall"
Many issues can be resolved by re-installing the application. You won't lose any data by re-installing. We suggest downloading the [latest release] and using it to re-install the application.
When you run the installer, you'll have an option to select the version to install. If you aren't ready to upgrade, you can choose the current version to fix a broken install.
If the troubleshooting steps on this page don't get you up and running, please either [create an issue] or hop on [discord] for help.
## OSErrors on Windows while installing dependencies
During a zip file installation or an online update, installation stops
with an error like this:
![broken-dependency-screenshot](../assets/troubleshooting/broken-dependency.png){:width="800px"}
To resolve this, re-install the application as described above.
## Stable Diffusion XL generation fails after trying to load UNet
InvokeAI is working in other respects, but when trying to generate
images with Stable Diffusion XL you get a "Server Error". The text log
in the launch window contains this log line above several more lines of
error messages:
`INFO --> Loading model:D:\LONG\PATH\TO\MODEL, type sdxl:main:unet`
This failure mode occurs when there is a network glitch during the
download of the very large SDXL model.
To address this, first go to the Model Manager and delete the
Stable-Diffusion-XL-base-1.X model. Then, click the HuggingFace tab,
paste the Repo ID `stabilityai/stable-diffusion-xl-base-1.0` and install
the model.
### Package dependency conflicts
If you have previously installed InvokeAI or another Stable Diffusion
package, the installer may occasionally pick up outdated libraries and
either the installer or `invoke` will fail with complaints about
library conflicts.
To resolve this, re-install the application as described above.
### InvokeAI runs extremely slowly on Linux or Windows systems
The most frequent cause of this problem is when the installation
process installed the CPU-only version of the torch machine-learning
library, rather than a version that takes advantage of GPU
acceleration. To confirm this issue, look at the InvokeAI startup
messages. If you see a message saying ">> Using device CPU", then
this is what happened.
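If you'd like to confirm this directly, you can ask torch whether it can see a CUDA device. The path below assumes a default Linux install at `~/invokeai`; on Windows the interpreter lives at `.venv\Scripts\python.exe` inside your install folder:
```bash
# Prints True if a CUDA-enabled torch build is installed and a GPU is visible,
# False if the CPU-only build was installed.
~/invokeai/.venv/bin/python -c "import torch; print(torch.cuda.is_available())"
```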
To resolve this, re-install the application as described above. Be sure to select the correct GPU device.
### Invalid configuration file
Everything seems to install OK, but you get a `ValidationError` when starting up the app.
This is caused by an invalid setting in the `invokeai.yaml` configuration file. The error message should tell you what is wrong.
Check the [configuration docs] for more detail about the settings and how to specify them.
### Out of Memory Issues
The models are large, VRAM is expensive, and you may find yourself
faced with Out of Memory errors when generating images. Here are some
tips to reduce the problem:
#### 4 GB of VRAM
This should be adequate for 512x512 pixel images using Stable Diffusion 1.5
and derived models, provided that you do not use the NSFW checker. The checker won't be loaded unless you go into the UI settings and turn it on.
If you are on a CUDA-enabled GPU, we will automatically use xformers or torch-sdp to reduce VRAM requirements, though you can explicitly configure this. See the [configuration docs].
#### 6 GB of VRAM
This is a border case. Using the SD 1.5 series you should be able to
generate images up to 640x640 with the NSFW checker enabled, and up to
1024x1024 with it disabled.
If you run into persistent memory issues there are a series of
environment variables that you can set before launching InvokeAI that
alter how the PyTorch machine learning library manages memory. See
<https://pytorch.org/docs/stable/notes/cuda.html#memory-management> for
a list of these tweaks.
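For example, one commonly used tweak is the CUDA allocator's `max_split_size_mb` option, which can reduce fragmentation; the value below is only a starting point, not an InvokeAI-specific recommendation:
```bash
# Set in the shell (or launcher script) before starting InvokeAI; tune for your card.
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512
```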
#### 12 GB of VRAM
This should be sufficient to generate larger images up to about 1280x1280.
[create an issue]: https://github.com/invoke-ai/InvokeAI/issues
[discord]: https://discord.gg/ZmtBAhwWhy
[configuration docs]: ../features/CONFIGURATION.md

View File

@ -1,64 +0,0 @@
---
title: InvokeAI Binary Installer
---
The InvokeAI binary installer is a shell script that will install InvokeAI onto a stock
computer running recent versions of Linux, MacOSX or Windows. It will leave you
with a version that runs a stable version of InvokeAI. When a new version of
InvokeAI is released, you will download and reinstall the new version.
If you wish to tinker with unreleased versions of InvokeAI that introduce
potentially unstable new features, you should consider using the
[source installer](INSTALL_SOURCE.md) or one of the
[manual install](../020_INSTALL_MANUAL.md) methods.
**Important Caveats**
- This script does not support AMD GPUs. For Linux AMD support,
please use the manual or source code installer methods.
- This script has difficulty on some Macintosh machines
that have previously been used for Python development due to
conflicting development tools versions. Mac developers may wish
to try the source code installer or one of the manual methods instead.
!!! todo
Before you begin, make sure that you meet
the [hardware requirements](/#hardware-requirements) and have the
appropriate GPU drivers installed. In particular, if you are a Linux user with
an AMD GPU installed, you may need to install the
[ROCm-driver](https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html).
Installation requires roughly 18G of free disk space to load the libraries and
recommended model weights files.
## Steps to Install
1. Download the
[latest release](https://github.com/invoke-ai/InvokeAI/releases/latest) of
InvokeAI's installer for your platform. Look for a file named `InvokeAI-binary-<your platform>.zip`
2. Place the downloaded package someplace where you have plenty of HDD space,
and have full permissions (i.e. `~/` on Lin/Mac; your home folder on Windows)
3. Extract the 'InvokeAI' folder from the downloaded package
4. Open the extracted 'InvokeAI' folder
5. Double-click 'install.bat' (Windows), or 'install.sh' (Lin/Mac) (or run from
a terminal)
6. Follow the prompts
7. After installation, please run the 'invoke.bat' file (on Windows) or
'invoke.sh' file (on Linux/Mac) to start InvokeAI.
## Troubleshooting
If you run into problems during or after installation, the InvokeAI team is
available to help you. Either create an
[Issue](https://github.com/invoke-ai/InvokeAI/issues) at our GitHub site, or
make a request for help on the "bugs-and-support" channel of our
[Discord server](https://discord.gg/ZmtBAhwWhy). We are a 100% volunteer
organization, but typically somebody will be available to help you within 24
hours, and often much sooner.

View File

@ -1,32 +0,0 @@
---
title: Running InvokeAI on Google Colab using a Jupyter Notebook
---
## Introduction
We have a [Jupyter
notebook](https://github.com/invoke-ai/InvokeAI/blob/main/notebooks/Stable_Diffusion_AI_Notebook.ipynb)
with cell-by-cell installation steps. It will download the code in
this repo as one of the steps, so instead of cloning this repo, simply
download the notebook from the link above and load it up in VSCode
(with the appropriate extensions installed)/Jupyter/JupyterLab and
start running the cells one-by-one.
!!! Note "you will need NVIDIA drivers, Python 3.10, and Git installed beforehand"
## Running Online On Google Colaboratory
[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/invoke-ai/InvokeAI/blob/main/notebooks/Stable_Diffusion_AI_Notebook.ipynb)
## Running Locally (Cloning)
1. Install the Jupyter Notebook python library (one-time):
pip install jupyter
2. Clone the InvokeAI repository:
git clone https://github.com/invoke-ai/InvokeAI.git
cd InvokeAI
3. Create a virtual environment using conda:
conda create -n invoke jupyter
4. Activate the environment and start the Jupyter notebook:
conda activate invoke
jupyter notebook

View File

@ -1,135 +0,0 @@
---
title: Manual Installation, Linux
---
# :fontawesome-brands-linux: Linux
## Installation
1. You will need to install the following prerequisites if they are not already
available. Use your operating system's preferred installer.
- Python (version 3.8.5 recommended; higher may work)
- git
2. Install the Python Anaconda environment manager.
```bash
~$ wget https://repo.anaconda.com/archive/Anaconda3-2022.05-Linux-x86_64.sh
~$ chmod +x Anaconda3-2022.05-Linux-x86_64.sh
~$ ./Anaconda3-2022.05-Linux-x86_64.sh
```
After installing anaconda, you should log out of your system and log back
in. If the installation worked, your command prompt will be prefixed by the
name of the current anaconda environment - `(base)`.
3. Copy the InvokeAI source code from GitHub:
```bash
(base) ~$ git clone https://github.com/invoke-ai/InvokeAI.git
```
This will create an InvokeAI folder where you will follow the rest of the
steps.
4. Enter the newly-created InvokeAI folder. From this step forward make sure
that you are working in the InvokeAI directory!
```bash
(base) ~$ cd InvokeAI
(base) ~/InvokeAI$
```
5. Use anaconda to copy necessary python packages, create a new python
environment named `invokeai` and then activate the environment.
!!! todo "For systems with a CUDA (Nvidia) card:"
```bash
(base) rm -rf src # (this is a precaution in case there is already a src directory)
(base) ~/InvokeAI$ conda env create -f environment-cuda.yml
(base) ~/InvokeAI$ conda activate invokeai
(invokeai) ~/InvokeAI$
```
!!! todo "For systems with an AMD card (using ROCm driver):"
```bash
(base) rm -rf src # (this is a precaution in case there is already a src directory)
(base) ~/InvokeAI$ conda env create -f environment-AMD.yml
(base) ~/InvokeAI$ conda activate invokeai
(invokeai) ~/InvokeAI$
```
After these steps, your command prompt will be prefixed by `(invokeai)` as
shown above.
6. Load the big stable diffusion weights files and a couple of smaller
machine-learning models:
```bash
(invokeai) ~/InvokeAI$ python3 scripts/configure_invokeai.py
```
!!! note
This script will lead you through the process of creating an account on Hugging Face,
accepting the terms and conditions of the Stable Diffusion model license,
and obtaining an access token for downloading. It will then download and
install the weights files for you.
Please look [here](../020_INSTALL_MANUAL.md) for a manual process for doing
the same thing.
7. Start generating images!
!!! todo "Run InvokeAI!"
!!! warning "IMPORTANT"
Make sure that the conda environment is activated, which should create
`(invokeai)` in front of your prompt!
=== "CLI"
```bash
python scripts/invoke.py
```
=== "local Webserver"
```bash
python scripts/invoke.py --web
```
=== "Public Webserver"
```bash
python scripts/invoke.py --web --host 0.0.0.0
```
To use an alternative model you may invoke the `!switch` command in
the CLI, or pass `--model <model_name>` during `invoke.py` launch for
either the CLI or the Web UI. See [Command Line
Client](../../deprecated/CLI.md#model-selection-and-importation). The
model names are defined in `configs/models.yaml`.
8. Subsequently, to relaunch the script, be sure to run "conda activate
invokeai" (step 5, second command), enter the `InvokeAI` directory, and then
launch the invoke script (step 7). If you forget to activate the 'invokeai'
environment, the script will fail with multiple `ModuleNotFound` errors.
## Updating to newer versions of the script
This distribution is changing rapidly. If you used the `git clone` method
(step 3) to download the InvokeAI directory, then to update to the latest and
greatest version, launch the Anaconda window, enter `InvokeAI` and type:
```bash
(invokeai) ~/InvokeAI$ git pull
(invokeai) ~/InvokeAI$ rm -rf src # prevents conda freezing errors
(invokeai) ~/InvokeAI$ conda env update -f environment.yml
```
This will bring your local copy into sync with the remote one.

View File

@ -1,525 +0,0 @@
---
title: Manual Installation, macOS
---
# :fontawesome-brands-apple: macOS
Invoke AI runs quite well on M1 Macs and we have a number of M1 users in the
community.
While the repo does run on Intel Macs, we only have a couple reports. If you
have an Intel Mac and run into issues, please create an issue on Github and we
will do our best to help.
## Requirements
- macOS 12.3 Monterey or later
- About 10GB of storage (and 10GB of data if your internet connection has data
caps)
- Any M1 Mac, or an Intel Mac with 4GB+ of VRAM (ideally more)
## Installation
!!! todo "Homebrew"
First you will install the "brew" package manager. Skip this if brew is already installed.
```bash title="install brew (and Xcode command line tools)"
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
```
!!! todo "Conda Installation"
Now there are two different ways to set up the Python (miniconda) environment:
1. Standalone
2. with pyenv
If you don't know what we are talking about, choose Standalone. If you are familiar with python environments, choose "with pyenv"
=== "Standalone"
```bash title="Install cmake, protobuf, and rust"
brew install cmake protobuf rust
```
```bash title="Clone the InvokeAI repository"
# Clone the Invoke AI repo
git clone https://github.com/invoke-ai/InvokeAI.git
cd InvokeAI
```
Choose the appropriate architecture for your system and install miniconda:
=== "M1 arm64"
```bash title="Install miniconda for M1 arm64"
curl https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-arm64.sh \
-o Miniconda3-latest-MacOSX-arm64.sh
/bin/bash Miniconda3-latest-MacOSX-arm64.sh
```
=== "Intel x86_64"
```bash title="Install miniconda for Intel"
curl https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-x86_64.sh \
-o Miniconda3-latest-MacOSX-x86_64.sh
/bin/bash Miniconda3-latest-MacOSX-x86_64.sh
```
=== "with pyenv"
```bash
brew install pyenv-virtualenv
pyenv install anaconda3-2022.05
pyenv virtualenv anaconda3-2022.05
eval "$(pyenv init -)"
pyenv activate anaconda3-2022.05
```
!!! todo "Clone the Invoke AI repo"
```bash
git clone https://github.com/invoke-ai/InvokeAI.git
cd InvokeAI
```
!!! todo "Create the environment & install packages"
=== "M1 Mac"
```bash
PIP_EXISTS_ACTION=w CONDA_SUBDIR=osx-arm64 conda env create -f environment-mac.yml
```
=== "Intel x86_64 Mac"
```bash
PIP_EXISTS_ACTION=w CONDA_SUBDIR=osx-64 conda env create -f environment-mac.yml
```
```bash
# Activate the environment (you need to do this every time you want to run SD)
conda activate invokeai
```
!!! info
`export PIP_EXISTS_ACTION=w` is a precaution to fix `conda env
create -f environment-mac.yml` never finishing in some situations. So
it isn't required but won't hurt.
!!! todo "Download the model weight files"
The `configure_invokeai.py` script downloads and installs the model weight
files for you. It will lead you through the process of getting a Hugging Face
account, accepting the Stable Diffusion model weight license agreement, and
creating a download token:
```bash
# This will take some time, depending on the speed of your internet connection
# and will consume about 10GB of space
python scripts/configure_invokeai.py
```
!!! todo "Run InvokeAI!"
!!! warning "IMPORTANT"
Make sure that the conda environment is activated, which should create
`(invokeai)` in front of your prompt!
=== "CLI"
```bash
python scripts/invoke.py
```
=== "local Webserver"
```bash
python scripts/invoke.py --web
```
=== "Public Webserver"
```bash
python scripts/invoke.py --web --host 0.0.0.0
```
To use an alternative model you may invoke the `!switch` command in
the CLI, or pass `--model <model_name>` during `invoke.py` launch for
either the CLI or the Web UI. See [Command Line
Client](../../deprecated/CLI.md#model-selection-and-importation). The
model names are defined in `configs/models.yaml`.
---
## Common problems
After you have followed all the instructions and try to run invoke.py, you might get
several errors. Here are the errors I've seen and found solutions for.
### Is it slow?
```bash title="Be sure to specify 1 sample and 1 iteration."
python ./scripts/orig_scripts/txt2img.py \
--prompt "ocean" \
--ddim_steps 5 \
--n_samples 1 \
--n_iter 1
```
---
### Doesn't work anymore?
PyTorch nightly includes support for MPS. Because of this, this setup is
inherently unstable. One morning I woke up and it no longer worked no matter
what I did until I switched to miniforge. However, I have another Mac that works
just fine with Anaconda. If you can't get it to work, please search a little
first because many of the errors will get posted and solved. If you can't find a
solution please [create an issue](https://github.com/invoke-ai/InvokeAI/issues).
One debugging step is to update to the latest version of PyTorch nightly.
```bash
conda install \
pytorch \
torchvision \
-c pytorch-nightly \
-n invokeai
```
If it takes forever to run `conda env create -f environment-mac.yml`, try this:
```bash
git clean -f
conda clean \
--yes \
--all
```
Or you could try to completely reset Anaconda:
```bash
conda update \
--force-reinstall \
-y \
-n base \
-c defaults conda
```
---
### "No module named cv2", torch, 'invokeai', 'transformers', 'taming', etc
There are several causes of these errors:
1. Did you remember to `conda activate invokeai`? If your terminal prompt begins
with "(invokeai)" then you activated it. If it begins with "(base)" or
something else you haven't.
2. You might've run `./scripts/configure_invokeai.py` or `./scripts/invoke.py`
instead of `python ./scripts/configure_invokeai.py` or
`python ./scripts/invoke.py`. The cause of this error is long so it's below.
<!-- I could not find out where the error is, otherwise would have marked it as a footnote -->
3. if it says you're missing taming you need to rebuild your virtual
environment.
```bash
conda deactivate
conda env remove -n invokeai
conda env create -f environment-mac.yml
```
4. If you have activated the invokeai virtual environment and tried rebuilding
it, maybe the problem could be that I have something installed that you don't
and you'll just need to manually install it. Make sure you activate the
virtual environment so it installs there instead of globally.
```bash
conda activate invokeai
pip install <package name>
```
You might also need to install Rust (I mention this again below).
---
### How many snakes are living in your computer?
You might have multiple Python installations on your system, in which case it's
important to be explicit and consistent about which one to use for a given
project. This is because virtual environments are coupled to the Python that
created them (and all the associated 'system-level' modules).
When you run `python` or `python3`, your shell searches the colon-delimited
locations in the `PATH` environment variable (`echo $PATH` to see that list) in
that order - first match wins. You can ask for the location of the first
`python3` found in your `PATH` with the `which` command like this:
```bash
% which python3
/usr/bin/python3
```
Anything in `/usr/bin` is
[part of the OS](https://developer.apple.com/library/archive/documentation/FileManagement/Conceptual/FileSystemProgrammingGuide/FileSystemOverview/FileSystemOverview.html#//apple_ref/doc/uid/TP40010672-CH2-SW6).
However, `/usr/bin/python3` is not actually python3, but rather a stub that
offers to install Xcode (which includes python 3). If you have Xcode installed
already, `/usr/bin/python3` will execute
`/Library/Developer/CommandLineTools/usr/bin/python3` or
`/Applications/Xcode.app/Contents/Developer/usr/bin/python3` (depending on which
Xcode you've selected with `xcode-select`).
Note that `/usr/bin/python` is an entirely different python - specifically,
python 2. Note: starting in macOS 12.3, `/usr/bin/python` no longer exists.
```bash
% which python3
/opt/homebrew/bin/python3
```
If you installed python3 with Homebrew and you've modified your path to search
for Homebrew binaries before system ones, you'll see the above path.
```bash
% which python
/opt/anaconda3/bin/python
```
If you have Anaconda installed, you will see the above path. There is a
`/opt/anaconda3/bin/python3` also.
We expect that `/opt/anaconda3/bin/python` and `/opt/anaconda3/bin/python3`
should actually be the _same python_, which you can verify by comparing the
output of `python3 -V` and `python -V`.
```bash
(invokeai) % which python
/Users/name/miniforge3/envs/invokeai/bin/python
```
The above is what you'll see if you have miniforge and correctly activated the
invokeai environment, while using the standalone setup instructions above.
If you otherwise installed via pyenv, you will get this result:
```bash
(anaconda3-2022.05) % which python
/Users/name/.pyenv/shims/python
```
It's all a mess and you should know
[how to modify the path environment variable](https://support.apple.com/guide/terminal/use-environment-variables-apd382cc5fa-4f58-4449-b20a-41c53c006f8f/mac)
if you want to fix it. Here's a brief hint of the most common ways you can
modify it (don't really have the time to explain it all here).
- ~/.zshrc
- ~/.bash_profile
- ~/.bashrc
- /etc/paths.d
- /etc/path
Which one you use will depend on what you have installed, except putting a file
in /etc/paths.d, which is also the way I prefer to do it.
Finally, to answer the question posed by this section's title, it may help to
list all of the `python` / `python3` things found in `$PATH` instead of just the
first hit. To do so, add the `-a` switch to `which`:
```bash
% which -a python3
...
```
This will show a list of all binaries which are actually available in your PATH.
---
### Debugging?
Tired of waiting for your renders to finish before you can see if it works?
Reduce the steps! The image quality will be horrible but at least you'll get
quick feedback.
```bash
python ./scripts/txt2img.py \
--prompt "ocean" \
--ddim_steps 5 \
--n_samples 1 \
--n_iter 1
```
---
### OSError: Can't load tokenizer for 'openai/clip-vit-large-patch14'
```bash
python scripts/configure_invokeai.py
```
---
### "The operator [name] is not current implemented for the MPS device." (sic)
!!! example "example error"
```bash
... NotImplementedError: The operator 'aten::_index_put_impl_' is not current
implemented for the MPS device. If you want this op to be added in priority
during the prototype phase of this feature, please comment on
https://github.com/pytorch/pytorch/issues/77764.
As a temporary fix, you can set the environment variable
`PYTORCH_ENABLE_MPS_FALLBACK=1` to use the CPU as a fallback for this op.
WARNING: this will be slower than running natively on MPS.
```
The InvokeAI version includes this fix in
[environment-mac.yml](https://github.com/invoke-ai/InvokeAI/blob/main/environment-mac.yml).
### "Could not build wheels for tokenizers"
I have not seen this error because I had Rust installed on my computer before I
started playing with Stable Diffusion. The fix is to install Rust.
```bash
curl \
--proto '=https' \
--tlsv1.2 \
-sSf https://sh.rustup.rs | sh
```
---
### How come `--seed` doesn't work?
!!! Information
Completely reproducible results are not guaranteed across PyTorch releases,
individual commits, or different platforms. Furthermore, results may not be
reproducible between CPU and GPU executions, even when using identical seeds.
[PyTorch docs](https://pytorch.org/docs/stable/notes/randomness.html)
Second, we might have a fix that at least gets a consistent seed sort of. We're
still working on it.
### libiomp5.dylib error?
```bash
OMP: Error #15: Initializing libiomp5.dylib, but found libomp.dylib already initialized.
```
You are likely using an Intel package by mistake. Be sure to run conda with the
environment variable `CONDA_SUBDIR=osx-arm64`, like so:
`CONDA_SUBDIR=osx-arm64 conda install ...`
This error happens with Anaconda on Macs when the Intel-only `mkl` is pulled in
by a dependency.
[nomkl](https://stackoverflow.com/questions/66224879/what-is-the-nomkl-python-package-used-for)
is a metapackage designed to prevent this, by making it impossible to install
`mkl`, but if your environment is already broken it may not work.
Do _not_ use `os.environ['KMP_DUPLICATE_LIB_OK']='True'` or equivalents as this
masks the underlying issue of using Intel packages.
---
### Not enough memory
This seems to be a common problem and is probably the underlying problem for a
lot of symptoms (listed below). The fix is to lower your image size or to add
`model.half()` right after the model is loaded. I should probably test it out.
I've read that the reason this fixes problems is because it converts the model
from 32-bit to 16-bit and that leaves more RAM for other things. I have no idea
how that would affect the quality of the images though.
See [this issue](https://github.com/CompVis/stable-diffusion/issues/71).
---
### "Error: product of dimension sizes > 2\*\*31'"
This error happens with img2img, which I haven't played with too much yet. But I
know it's because your image is too big or the resolution isn't a multiple of
32x32. Because the stable-diffusion model was trained on images that were 512 x
512, it's always best to use that output size (which is the default). However,
if you're using that size and you get the above error, try 256 x 256 or 512 x
256 or something as the source image.
BTW, 2\*\*31-1 =
[2,147,483,647](https://en.wikipedia.org/wiki/2,147,483,647#In_computing), which
is also 32-bit signed [LONG_MAX](https://en.wikipedia.org/wiki/C_data_types) in
C.
---
### I just got Rickrolled! Do I have a virus?
You don't have a virus. It's part of the project. Here's
[Rick](https://github.com/invoke-ai/InvokeAI/blob/main/assets/rick.jpeg) and
here's
[the code](https://github.com/invoke-ai/InvokeAI/blob/69ae4b35e0a0f6ee1af8bb9a5d0016ccb27e36dc/scripts/txt2img.py#L79)
that swaps him in. It's an NSFW filter which, IMO, doesn't work very well (and we
call this "computer vision", sheesh).
---
### My images come out black
We might have this fixed, we are still testing.
There's a [similar issue](https://github.com/CompVis/stable-diffusion/issues/69)
on CUDA GPU's where the images come out green. Maybe it's the same issue?
Someone in that issue says to use "--precision full", but this fork actually
disables that flag. I don't know why, someone else provided that code and I
don't know what it does. Maybe the `model.half()` suggestion above would fix
this issue too. I should probably test it.
### "view size is not compatible with input tensor's size and stride"
```bash
File "/opt/anaconda3/envs/invokeai/lib/python3.10/site-packages/torch/nn/functional.py", line 2511, in layer_norm
return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead.
```
Update to the latest version of invoke-ai/InvokeAI. We were patching pytorch but
we found a file in stable-diffusion that we could change instead. This is a
32-bit vs 16-bit problem.
### The processor must support the Intel bla bla bla
What? Intel? On an Apple Silicon?
```bash
Intel MKL FATAL ERROR: This system does not meet the minimum requirements for use of the Intel(R) Math Kernel Library. The processor must support the Intel(R) Supplemental Streaming SIMD Extensions 3 (Intel(R) SSSE3) instructions. The processor must support the Intel(R) Streaming SIMD Extensions 4.2 (Intel(R) SSE4.2) instructions. The processor must support the Intel(R) Advanced Vector Extensions (Intel(R) AVX) instructions.
```
This is due to the Intel `mkl` package getting picked up when you try to install
something that depends on it-- Rosetta can translate some Intel instructions but
not the specialized ones here. To avoid this, make sure to use the environment
variable `CONDA_SUBDIR=osx-arm64`, which restricts the Conda environment to only
use ARM packages, and use `nomkl` as described above.
---
### input types 'tensor<2x1280xf32>' and 'tensor<\*xf16>' are not broadcast compatible
May appear when just starting to generate, e.g.:
```bash
invoke> clouds
Generating: 0%| | 0/1 [00:00<?, ?it/s]/Users/[...]/dev/stable-diffusion/ldm/modules/embedding_manager.py:152: UserWarning: The operator 'aten::nonzero' is not currently supported on the MPS backend and will fall back to run on the CPU. This may have performance implications. (Triggered internally at /Users/runner/work/_temp/anaconda/conda-bld/pytorch_1662016319283/work/aten/src/ATen/mps/MPSFallback.mm:11.)
placeholder_idx = torch.where(
loc("mps_add"("(mpsFileLoc): /AppleInternal/Library/BuildRoots/20d6c351-ee94-11ec-bcaf-7247572f23b4/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShadersGraph/mpsgraph/MetalPerformanceShadersGraph/Core/Files/MPSGraphUtilities.mm":219:0)): error: input types 'tensor<2x1280xf32>' and 'tensor<*xf16>' are not broadcast compatible
LLVM ERROR: Failed to infer result type(s).
Abort trap: 6
/Users/[...]/opt/anaconda3/envs/invokeai/lib/python3.9/multiprocessing/resource_tracker.py:216: UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown
warnings.warn('resource_tracker: There appear to be %d '
```

View File

@ -1,17 +0,0 @@
---
title: Installing InvokeAI with the Pre-Compiled PIP Installer
---
# THIS NEEDS TO BE FLESHED OUT
## Introduction
## Walkthrough
## Updating to newer versions
### Updating the stable version
### Updating to the development version
## Troubleshooting

View File

@ -1,225 +0,0 @@
---
title: Source Installer
---
# The InvokeAI Source Installer
## Introduction
The source installer is a shell script that attempts to automate every step
needed to install and run InvokeAI on a stock computer running recent versions
of Linux, MacOS or Windows. It will leave you with a version that runs a stable
version of InvokeAI with the option to upgrade to experimental versions later.
Before you begin, make sure that you meet the
[hardware requirements](../../index.md#hardware-requirements) and have the appropriate
GPU drivers installed. In particular, if you are a Linux user with an AMD GPU
installed, you may need to install the
[ROCm driver](https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html).
Installation requires roughly 18G of free disk space to load the libraries and
recommended model weights files.
## Walk through
Though there are multiple steps, there really is only one click involved to kick
off the process.
1. The source installer is distributed in ZIP files. Go to the
[latest release](https://github.com/invoke-ai/InvokeAI/releases/latest), and
look for a series of files named:
- [invokeAI-src-installer-2.2.3-mac.zip](https://github.com/invoke-ai/InvokeAI/releases/latest/download/invokeAI-src-installer-2.2.3-mac.zip)
- [invokeAI-src-installer-2.2.3-windows.zip](https://github.com/invoke-ai/InvokeAI/releases/latest/download/invokeAI-src-installer-2.2.3-windows.zip)
- [invokeAI-src-installer-2.2.3-linux.zip](https://github.com/invoke-ai/InvokeAI/releases/latest/download/invokeAI-src-installer-2.2.3-linux.zip)
Download the one that is appropriate for your operating system.
2. Unpack the zip file into a directory that has at least 18G of free space. Do
_not_ unpack into a directory that has an earlier version of InvokeAI.
This will create a new directory named "InvokeAI". This example shows how
this would look using the `unzip` command-line tool, but you may use any
graphical or command-line Zip extractor:
```cmd
C:\Documents\Linco> unzip invokeAI-windows.zip
Archive: C: \Linco\Downloads\invokeAI-linux.zip
creating: invokeAI\
inflating: invokeAI\install.bat
inflating: invokeAI\readme.txt
```
3. If you are a macOS user, you may need to install the Xcode command line tools.
These are a set of tools that are needed to run certain applications in a Terminal,
including InvokeAI. This package is provided directly by Apple.
To install, open a terminal window and run `xcode-select --install`. You will get
a macOS system popup guiding you through the install. If you already have them
installed, you will instead see some output in the Terminal advising you that the
tools are already installed.
More information can be found here:
https://www.freecodecamp.org/news/install-xcode-command-line-tools/
4. If you are using a desktop GUI, double-click the installer file. It will be
named `install.bat` on Windows systems and `install.sh` on Linux and
Macintosh systems.
5. Alternatively, from the command line, run the shell script or .bat file:
```cmd
C:\Documents\Linco> cd invokeAI
C:\Documents\Linco\invokeAI> install.bat
```
6. Sit back and let the install script work. It will install various binary
requirements including Conda, Git and Python, then download the current
InvokeAI code and install it along with its dependencies.
Be aware that some of the library download and install steps take a long time.
In particular, the `pytorch` package is quite large and often appears to get
"stuck" at 99.9%. Similarly, the `pip installing requirements` step may
appear to hang. Have patience and the installation step will eventually
resume. However, there are occasions when the library install does
legitimately get stuck. If you have been waiting for more than ten minutes
and nothing is happening, you can interrupt the script with ^C. You may restart
it and it will pick up where it left off.
7. After installation completes, the installer will launch a script called
`configure_invokeai.py`, which will guide you through the first-time process of
selecting one or more Stable Diffusion model weights files, downloading and
configuring them.
Note that the main Stable Diffusion weights file is protected by a license
agreement that you must agree to in order to use. The script will list the
steps you need to take to create an account on the official site that hosts
the weights files, accept the agreement, and provide an access token that
allows InvokeAI to legally download and install the weights files.
If you have already downloaded the weights file(s) for another Stable
Diffusion distribution, you may skip this step (by selecting "skip" when
prompted) and configure InvokeAI to use the previously-downloaded files. The
process for this is described in [Installing Models](../050_INSTALLING_MODELS.md).
8. The script will now exit and you'll be ready to generate some images. The
invokeAI directory will contain numerous files. Look for a shell script
named `invoke.sh` (Linux/Mac) or `invoke.bat` (Windows). Launch the script
by double-clicking it or typing its name at the command-line:
```cmd
C:\Documents\Linco> cd invokeAI
C:\Documents\Linco\invokeAI> invoke.bat
```
The `invoke.bat` (`invoke.sh`) script will give you the choice of starting (1)
the command-line interface, or (2) the web GUI. If you start the latter, you can
load the user interface by pointing your browser at http://localhost:9090.
The `invoke` script also offers you a third option labeled "open the developer
console". If you choose this option, you will be dropped into a command-line
interface in which you can run python commands directly, access developer tools,
and launch InvokeAI with customized options. To do the latter, you would launch
the script `scripts/invoke.py` as shown in this example:
```cmd
python scripts/invoke.py --web --max_load_models=3 \
--model=waifu-1.3 --steps=30 --outdir=C:/Documents/AIPhotos
```
These options are described in detail in the
[Command-Line Interface](../../deprecated/CLI.md) documentation.
## Troubleshooting
_Package dependency conflicts_ If you have previously installed
InvokeAI or another Stable Diffusion package, the installer may
occasionally pick up outdated libraries and either the installer or
`invoke` will fail with complaints about library conflicts. There are
two steps you can take to clear this problem. Both of these are done
from within the "developer's console", which you can get to by
launching `invoke.sh` (or `invoke.bat`) and selecting launch option
#3:
1. Remove the previous `invokeai` environment completely. From within
the developer's console, give the command `conda env remove -n
invokeai`. This will delete previous files installed by `invoke`.
Then exit from the developer's console and launch the script
`update.sh` (or `update.bat`). This will download the most recent
InvokeAI (including bug fixes) and reinstall the environment.
You should then be able to run `invoke.sh`/`invoke.bat`.
2. If this doesn't work, you can try cleaning your system's conda
cache. This is slightly more extreme, but won't interfere with
any other python-based programs installed on your computer.
From the developer's console, run the command `conda clean -a`
and answer "yes" to all prompts.
After this is done, run `update.sh` and try again as before.
_"Corrupted configuration file."__ Everything seems to install ok, but
`invoke` complains of a corrupted configuration file and goes calls
`configure_invokeai.py` to fix, but this doesn't fix the problem.
This issue is often caused by a misconfigured configuration directive
in the `.invokeai` initialization file that contains startup settings.
This can be corrected by fixing the offending line.
First find `.invokeai`. It is a small text file located in your home
directory, `~/.invokeai` on Mac and Linux systems, and `C:\Users\*your
name*\.invokeai` on Windows systems. Open it with a text editor
(e.g. Notepad on Windows, TextEdit on Macs, or `nano` on Linux)
and look for the lines starting with `--root` and `--outdir`.
An example is here:
```cmd
--root="/home/lstein/invokeai"
--outdir="/home/lstein/invokeai/outputs"
```
There should not be whitespace before or after the directory paths,
and the paths should not end with slashes:
```cmd
--root="/home/lstein/invokeai " # wrong! no whitespace here
--root="/home\lstein\invokeai\" # wrong! shouldn't end in a slash
```
Fix the problem with your text editor and save as a **plain text**
file. This should clear the issue.
_If none of these maneuvers fixes the problem_ then please report the
problem to the [InvokeAI
Issues](https://github.com/invoke-ai/InvokeAI/issues) section, or
visit our [Discord Server](https://discord.gg/ZmtBAhwWhy) for interactive assistance.
## Updating to newer versions
This section describes how to update InvokeAI to new versions of the software.
### Updating the stable version
This distribution is changing rapidly, and we add new features on a daily basis.
To update to the latest released version (recommended), run the `update.sh`
(Linux/Mac) or `update.bat` (Windows) scripts. This will fetch the latest
release and re-run the `configure_invokeai` script to download any updated models
files that may be needed. You can also use this to add additional models that
you did not select at installation time.
You can now close the developer console and run `invoke` as before. If you get
complaints about missing models, then you may need to do the additional step of
running `configure_invokeai.py`. This happens relatively infrequently. To do this,
simply open up the developer's console again and type
`python scripts/configure_invokeai.py`.
## Troubleshooting
If you run into problems during or after installation, the InvokeAI team is
available to help you. Either create an
[Issue](https://github.com/invoke-ai/InvokeAI/issues) at our GitHub site, or
make a request for help on the "bugs-and-support" channel of our
[Discord server](https://discord.gg/ZmtBAhwWhy). We are a 100% volunteer
organization, but typically somebody will be available to help you within 24
hours, and often much sooner.

View File

@ -1,137 +0,0 @@
---
title: Manual Installation, Windows
---
# :fontawesome-brands-windows: Windows
## **Notebook install (semi-automated)**
We have a
[Jupyter notebook](https://github.com/invoke-ai/InvokeAI/blob/main/notebooks/Stable_Diffusion_AI_Notebook.ipynb)
with cell-by-cell installation steps. It will download the code in this repo as
one of the steps, so instead of cloning this repo, simply download the notebook
from the link above and load it up in VSCode (with the appropriate extensions
installed)/Jupyter/JupyterLab and start running the cells one-by-one.
Note that you will need NVIDIA drivers, Python 3.10, and Git installed beforehand.
## **Manual Install with Conda**
1. Install Anaconda3 (miniconda3 version) from [here](https://docs.anaconda.com/anaconda/install/windows/)
2. Install Git from [here](https://git-scm.com/download/win)
3. Launch Anaconda from the Windows Start menu. This will bring up a command
window. Type all the remaining commands in this window.
4. Run the command:
```batch
git clone https://github.com/invoke-ai/InvokeAI.git
```
This will create an InvokeAI folder where you will follow the rest of
the steps.
5. Enter the newly-created InvokeAI folder. From this step forward make sure that you are working in the InvokeAI directory!
```batch
cd InvokeAI
```
6. Run the following commands:
!!! todo "For systems with a CUDA (Nvidia) card:"
```bash
rmdir src # (this is a precaution in case there is already a src directory)
conda env create -f environment-cuda.yml
conda activate invokeai
(invokeai)>
```
!!! todo "For systems with an AMD card (using ROCm driver):"
```bash
rmdir src # (this is a precaution in case there is already a src directory)
conda env create -f environment-AMD.yml
conda activate invokeai
(invokeai)>
```
This will install all python requirements and activate the "invokeai" environment
which sets PATH and other environment variables properly.
7. Load the big stable diffusion weights files and a couple of smaller machine-learning models:
```bash
python scripts/configure_invokeai.py
```
!!! note
This script will lead you through the process of creating an account on Hugging Face,
accepting the terms and conditions of the Stable Diffusion model license, and
obtaining an access token for downloading. It will then download and install the
weights files for you.
Please look [here](../020_INSTALL_MANUAL.md) for a manual process for doing the
same thing.
8. Start generating images!
!!! example ""
!!! warning "IMPORTANT"
Make sure that the conda environment is activated, which should create
`(invokeai)` in front of your prompt!
=== "CLI"
```bash
python scripts/invoke.py
```
=== "local Webserver"
```bash
python scripts/invoke.py --web
```
=== "Public Webserver"
```bash
python scripts/invoke.py --web --host 0.0.0.0
```
To use an alternative model you may invoke the `!switch` command in
the CLI, or pass `--model <model_name>` during `invoke.py` launch for
either the CLI or the Web UI. See [Command Line
Client](../../deprecated/CLI.md#model-selection-and-importation). The
model names are defined in `configs/models.yaml`.
9. Subsequently, to relaunch the script, first activate the Anaconda
command window (step 3), enter the InvokeAI directory (step 5, `cd
\path\to\InvokeAI`), run `conda activate invokeai` (step 6b), and then
launch the invoke script (step 8).
!!! tip "Tildebyte has written an alternative"
["Easy peasy Windows install"](https://github.com/invoke-ai/InvokeAI/wiki/Easy-peasy-Windows-install)
which uses the Windows Powershell and pew. If you are having trouble with
Anaconda on Windows, give this a try (or try it first!)
---
This distribution is changing rapidly. If you used the `git clone` method
(step 4) to download the InvokeAI directory, then to update to the
latest and greatest version, launch the Anaconda window, enter
`InvokeAI`, and type:
```bash
git pull
conda env update
```
This will bring your local copy into sync with the remote one.

View File

@ -22,6 +22,24 @@ class MyInvocation(BaseInvocation):
...
```
The full API is documented below.
## Invocation Mixins
Two important mixins are provided to facilitate working with metadata and gallery boards.
### `WithMetadata`
Inherit from this class (in addition to `BaseInvocation`) to add a `metadata` input to your node. When you do this, you can access the metadata dict from `self.metadata` in the `invoke()` function.
The dict will be populated via the node's input, and you can add any metadata you'd like to it. When you call `context.images.save()`, if the metadata dict has any data, it will be automatically embedded in the image.
### `WithBoard`
Inherit from this class (in addition to `BaseInvocation`) to add a `board` input to your node. This renders as a drop-down to select a board. The user's selection will be accessible from `self.board` in the `invoke()` function.
When you call `context.images.save()`, if a board was selected, the image will be added to that board as it is saved.
<!-- prettier-ignore-start -->
::: invokeai.app.services.shared.invocation_context.InvocationContext
options:

View File

@ -5,6 +5,7 @@ InvokeAI installer script
import os
import platform
import re
import shutil
import subprocess
import sys
@ -22,6 +23,15 @@ ARCH = platform.uname().machine
VERSION = "latest"
def get_version_from_wheel_filename(wheel_filename: str) -> str:
match = re.search(r"-(\d+\.\d+\.\d+)", wheel_filename)
if match:
version = match.group(1)
return version
else:
raise ValueError(f"Could not extract version from wheel filename: {wheel_filename}")
class Installer:
"""
Deploys an InvokeAI installation into a given path
@ -36,7 +46,7 @@ class Installer:
self.bootstrap()
self.available_releases = get_github_releases()
def mktemp_venv(self) -> TemporaryDirectory:
def mktemp_venv(self) -> TemporaryDirectory[str]:
"""
Creates a temporary virtual environment for the installer itself
@ -58,7 +68,7 @@ class Installer:
return venv_dir
def bootstrap(self, verbose: bool = False) -> TemporaryDirectory | None:
def bootstrap(self, verbose: bool = False) -> TemporaryDirectory[str] | None:
"""
Bootstrap the installer venv with packages required at install time
"""
@ -87,7 +97,7 @@ class Installer:
except subprocess.CalledProcessError as e:
print(e)
def app_venv(self, venv_parent) -> Path:
def app_venv(self, venv_parent: Path) -> Path:
"""
Create a virtualenv for the InvokeAI installation
"""
@ -106,25 +116,28 @@ class Installer:
return venv_dir
def install(
self, version=None, root: str = "~/invokeai", yes_to_all=False, find_links: Optional[Path] = None
self,
root: str = "~/invokeai",
yes_to_all: bool = False,
find_links: Optional[str] = None,
wheel: Optional[Path] = None,
) -> None:
"""
Install the InvokeAI application into the given runtime path
"""Install the InvokeAI application into the given runtime path
:param root: Destination path for the installation
:type root: str
:param version: InvokeAI version to install
:type version: str
:param yes: Accept defaults to all questions
:type yes: bool
:param find_links: A local directory to search for requirement wheels before going to remote indexes
:type find_links: Path
Args:
root: Destination path for the installation
yes_to_all: Accept defaults to all questions
find_links: A local directory to search for requirement wheels before going to remote indexes
wheel: A wheel file to install
"""
import messages
if wheel:
messages.installing_from_wheel(wheel.name)
version = get_version_from_wheel_filename(wheel.name)
else:
messages.welcome(self.available_releases)
version = messages.choose_version(self.available_releases)
auto_dest = Path(os.environ.get("INVOKEAI_ROOT", root)).expanduser().resolve()
@ -140,18 +153,11 @@ class Installer:
# install dependencies and the InvokeAI application
(extra_index_url, optional_modules) = get_torch_source() if not yes_to_all else (None, None)
self.instance.install(
extra_index_url,
optional_modules,
find_links,
)
self.instance.install(extra_index_url, optional_modules, find_links, wheel)
# install the launch/update scripts into the runtime directory
self.instance.install_user_scripts()
# run through the configuration flow
self.instance.configure()
class InvokeAiInstance:
"""
@ -181,18 +187,20 @@ class InvokeAiInstance:
return (self.runtime, self.venv)
def install(self, extra_index_url=None, optional_modules=None, find_links=None):
"""
Install the package from PyPi.
def install(
self,
extra_index_url: Optional[str] = None,
optional_modules: Optional[str] = None,
find_links: Optional[str] = None,
wheel: Optional[Path] = None,
):
"""Install the package from PyPi or a wheel, if provided.
:param extra_index_url: the "--extra-index-url ..." line for pip to look in extra indexes.
:type extra_index_url: str
:param optional_modules: optional modules to install using "[module1,module2]" format.
:type optional_modules: str
:param find_links: path to a directory containing wheels to be searched prior to going to the internet
:type find_links: Path
Args:
extra_index_url: the "--extra-index-url ..." line for pip to look in extra indexes.
optional_modules: optional modules to install using "[module1,module2]" format.
find_links: path to a directory containing wheels to be searched prior to going to the internet
wheel: a wheel file to install
"""
import messages
@ -216,7 +224,7 @@ class InvokeAiInstance:
messages.simple_banner("Installing the InvokeAI Application :art:")
from plumbum import FG, ProcessExecutionError, local # type: ignore
from plumbum import FG, ProcessExecutionError, local
pip = local[self.pip]
@ -225,12 +233,12 @@ class InvokeAiInstance:
"--require-virtualenv",
"--force-reinstall",
"--use-pep517",
str(src),
str(src) if not wheel else str(wheel),
"--find-links" if find_links is not None else None,
find_links,
"--extra-index-url" if extra_index_url is not None else None,
extra_index_url,
pre_flag,
pre_flag if not wheel else None, # Ignore the flag if we are installing a wheel
]
try:
@ -242,53 +250,6 @@ class InvokeAiInstance:
)
sys.exit(1)
def configure(self):
"""
Configure the InvokeAI runtime directory
"""
auto_install = False
# set sys.argv to a consistent state
new_argv = [sys.argv[0]]
for i in range(1, len(sys.argv)):
el = sys.argv[i]
if el in ["-r", "--root"]:
new_argv.append(el)
new_argv.append(sys.argv[i + 1])
elif el in ["-y", "--yes", "--yes-to-all"]:
auto_install = True
sys.argv = new_argv
import messages
import requests # to catch download exceptions
auto_install = auto_install or messages.user_wants_auto_configuration()
if auto_install:
sys.argv.append("--yes")
else:
messages.introduction()
from invokeai.frontend.install.invokeai_configure import invokeai_configure
# NOTE: currently the config script does its own arg parsing! this means the command-line switches
# from the installer will also automatically propagate down to the config script.
# this may change in the future with config refactoring!
succeeded = False
try:
invokeai_configure()
succeeded = True
except requests.exceptions.ConnectionError as e:
print(f"\nA network error was encountered during configuration and download: {str(e)}")
except OSError as e:
print(f"\nAn OS error was encountered during configuration and download: {str(e)}")
except Exception as e:
print(f"\nA problem was encountered during the configuration and download steps: {str(e)}")
finally:
if not succeeded:
print('To try again, find the "invokeai" directory, run the script "invoke.sh" or "invoke.bat"')
print("and choose option 7 to fix a broken install, optionally followed by option 5 to install models.")
print("Alternatively you can relaunch the installer.")
def install_user_scripts(self):
"""
Copy the launch and update scripts to the runtime dir
@ -370,7 +331,7 @@ def set_sys_path(venv_path: Path) -> None:
sys.path.append(str(Path(venv_path, lib, "site-packages").expanduser().resolve()))
def get_github_releases() -> tuple[list, list] | None:
def get_github_releases() -> tuple[list[str], list[str]] | None:
"""
Query Github for published (pre-)release versions.
Return a tuple where the first element is a list of stable releases and the second element is a list of pre-releases.
@ -381,7 +342,8 @@ def get_github_releases() -> tuple[list, list] | None:
## get latest releases using github api
url = "https://api.github.com/repos/invoke-ai/InvokeAI/releases"
releases, pre_releases = [], []
releases: list[str] = []
pre_releases: list[str] = []
try:
res = requests.get(url)
res.raise_for_status()

View File

@ -30,16 +30,17 @@ if __name__ == "__main__":
)
parser.add_argument(
"--version",
dest="version",
help="Version of InvokeAI to install. Default to the latest stable release. A special 'pre' value will install the latest published pre-release version.",
"--find-links",
dest="find_links",
help="Specifies a directory of local wheel files to be searched prior to searching the online repositories.",
type=Path,
default=None,
)
parser.add_argument(
"--find-links",
dest="find_links",
help="Specifies a directory of local wheel files to be searched prior to searching the online repositories.",
"--wheel",
dest="wheel",
help="Specifies a wheel for the InvokeAI package. Used for troubleshooting or testing prereleases.",
type=Path,
default=None,
)

View File

@ -7,8 +7,9 @@ import os
import platform
from enum import Enum
from pathlib import Path
from typing import Optional
from prompt_toolkit import HTML, prompt
from prompt_toolkit import prompt
from prompt_toolkit.completion import FuzzyWordCompleter, PathCompleter
from prompt_toolkit.validation import Validator
from rich import box, print
@ -36,7 +37,7 @@ else:
console = Console(style=Style(color="grey74", bgcolor="grey19"))
def welcome(available_releases: tuple | None = None) -> None:
def welcome(available_releases: tuple[list[str], list[str]] | None = None) -> None:
@group()
def text():
if (platform_specific := _platform_specific_help()) is not None:
@ -72,7 +73,34 @@ def welcome(available_releases: tuple | None = None) -> None:
console.line()
def choose_version(available_releases: tuple | None = None) -> str:
def installing_from_wheel(wheel_filename: str) -> None:
"""Display a message about installing from a wheel"""
@group()
def text():
yield Text.from_markup(f"You are installing from a wheel file: [bold]{wheel_filename}\n")
yield Text.from_markup(
"[bold orange3]If you are not sure why you are doing this, you should cancel and install InvokeAI normally."
)
console.print(
Panel(
title="Installing from Wheel",
renderable=text(),
box=box.DOUBLE,
expand=True,
padding=(1, 2),
)
)
should_proceed = Confirm.ask("Do you want to proceed?")
if not should_proceed:
console.print("Installation cancelled.")
exit()
def choose_version(available_releases: tuple[list[str], list[str]] | None = None) -> str:
"""
Prompt the user to choose an Invoke version to install
"""
@ -98,39 +126,6 @@ def choose_version(available_releases: tuple | None = None) -> str:
return "stable" if response == "" else response
def user_wants_auto_configuration() -> bool:
"""Prompt the user to choose between manual and auto configuration."""
console.rule("InvokeAI Configuration Section")
console.print(
Panel(
Group(
"\n".join(
[
"Libraries are installed and InvokeAI will now set up its root directory and configuration. Choose between:",
"",
" * AUTOMATIC configuration: install reasonable defaults and a minimal set of starter models.",
" * MANUAL configuration: manually inspect and adjust configuration options and pick from a larger set of starter models.",
"",
"Later you can fine tune your configuration by selecting option [6] 'Change InvokeAI startup options' from the invoke.bat/invoke.sh launcher script.",
]
),
),
box=box.MINIMAL,
padding=(1, 1),
)
)
choice = (
prompt(
HTML("Choose <b>&lt;a&gt;</b>utomatic or <b>&lt;m&gt;</b>anual configuration [a/m] (a): "),
validator=Validator.from_callable(
lambda n: n == "" or n.startswith(("a", "A", "m", "M")), error_message="Please select 'a' or 'm'"
),
)
or "a"
)
return choice.lower().startswith("a")
def confirm_install(dest: Path) -> bool:
if dest.exists():
print(f":stop_sign: Directory {dest} already exists!")
@ -147,7 +142,7 @@ def confirm_install(dest: Path) -> bool:
return dest_confirmed
def dest_path(dest=None) -> Path | None:
def dest_path(dest: Optional[str | Path] = None) -> Path | None:
"""
Prompt the user for the destination path and create the path
@ -351,34 +346,6 @@ def windows_long_paths_registry() -> None:
)
def introduction() -> None:
"""
Display a banner when starting configuration of the InvokeAI application
"""
console.rule()
console.print(
Panel(
title=":art: Configuring InvokeAI :art:",
renderable=Group(
"",
"[b]This script will:",
"",
"1. Configure the InvokeAI application directory",
"2. Help download the Stable Diffusion weight files",
" and other large models that are needed for text to image generation",
"3. Create initial configuration files.",
"",
"[i]At any point you may interrupt this program and resume later.",
"",
"[b]For the best user experience, please enlarge or maximize this window",
),
)
)
console.line(2)
def _platform_specific_help() -> Text | None:
if OS == "Darwin":
text = Text.from_markup(

View File

@ -9,37 +9,19 @@ set INVOKEAI_ROOT=.
:start
echo Desired action:
echo 1. Generate images with the browser-based interface
echo 2. Run textual inversion training
echo 3. Merge models (diffusers type only)
echo 4. Download and install models
echo 5. Change InvokeAI startup options
echo 6. Re-run the configure script to fix a broken install or to complete a major upgrade
echo 7. Open the developer console
echo 8. Update InvokeAI (DEPRECATED - please use the installer)
echo 9. Run the InvokeAI image database maintenance script
echo 10. Command-line help
echo 2. Open the developer console
echo 3. Run the InvokeAI image database maintenance script
echo 4. Command-line help
echo Q - Quit
set /P choice="Please enter 1-10, Q: [1] "
echo.
echo To update, download and run the installer from https://github.com/invoke-ai/InvokeAI/releases/latest.
echo.
set /P choice="Please enter 1-4, Q: [1] "
if not defined choice set choice=1
IF /I "%choice%" == "1" (
echo Starting the InvokeAI browser-based UI..
python .venv\Scripts\invokeai-web.exe %*
) ELSE IF /I "%choice%" == "2" (
echo Starting textual inversion training..
python .venv\Scripts\invokeai-ti.exe --gui
) ELSE IF /I "%choice%" == "3" (
echo Starting model merging script..
python .venv\Scripts\invokeai-merge.exe --gui
) ELSE IF /I "%choice%" == "4" (
echo Running invokeai-model-install...
python .venv\Scripts\invokeai-model-install.exe
) ELSE IF /I "%choice%" == "5" (
echo Running invokeai-configure...
python .venv\Scripts\invokeai-configure.exe --skip-sd-weight --skip-support-models
) ELSE IF /I "%choice%" == "6" (
echo Running invokeai-configure...
python .venv\Scripts\invokeai-configure.exe --yes --skip-sd-weight
) ELSE IF /I "%choice%" == "7" (
echo Developer Console
echo Python command is:
where python
@ -51,15 +33,10 @@ IF /I "%choice%" == "1" (
echo *************************
echo *** Type `exit` to quit this shell and deactivate the Python virtual environment ***
call cmd /k
) ELSE IF /I "%choice%" == "8" (
echo UPDATING FROM WITHIN THE APP IS BEING DEPRECATED.
echo Please download the installer from https://github.com/invoke-ai/InvokeAI/releases/latest and run it to update your installation.
timeout 4
python -m invokeai.frontend.install.invokeai_update
) ELSE IF /I "%choice%" == "9" (
) ELSE IF /I "%choice%" == "3" (
echo Running the db maintenance script...
python .venv\Scripts\invokeai-db-maintenance.exe
) ELSE IF /I "%choice%" == "10" (
) ELSE IF /I "%choice%" == "4" (
echo Displaying command line help...
python .venv\Scripts\invokeai-web.exe --help %*
pause

View File

@ -25,23 +25,7 @@ cd "$scriptdir"
export INVOKEAI_ROOT="$scriptdir"
PARAMS=$@
# Check to see if dialog is installed (it seems to be fairly standard, but good to check regardless) and if the user has passed the --no-tui argument to disable the dialog TUI
tui=true
if command -v dialog &>/dev/null; then
# This must use $@ to properly loop through the arguments passed by the user
for arg in "$@"; do
if [ "$arg" == "--no-tui" ]; then
tui=false
# Remove the --no-tui argument to avoid errors later on when passing arguments to InvokeAI
PARAMS=$(echo "$PARAMS" | sed 's/--no-tui//')
break
fi
done
else
tui=false
fi
# Set required env var for torch on mac MPS
# This setting allows torch to fall back to CPU for operations that are not supported by MPS on macOS.
if [ "$(uname -s)" == "Darwin" ]; then
export PYTORCH_ENABLE_MPS_FALLBACK=1
fi
@ -58,49 +42,17 @@ do_choice() {
invokeai-web $PARAMS
;;
2)
clear
printf "Textual inversion training\n"
invokeai-ti --gui $PARAMS
;;
3)
clear
printf "Merge models (diffusers type only)\n"
invokeai-merge --gui $PARAMS
;;
4)
clear
printf "Download and install models\n"
invokeai-model-install --root ${INVOKEAI_ROOT}
;;
5)
clear
printf "Change InvokeAI startup options\n"
invokeai-configure --root ${INVOKEAI_ROOT} --skip-sd-weights --skip-support-models
;;
6)
clear
printf "Re-run the configure script to fix a broken install or to complete a major upgrade\n"
invokeai-configure --root ${INVOKEAI_ROOT} --yes --default_only --skip-sd-weights
;;
7)
clear
printf "Open the developer console\n"
file_name=$(basename "${BASH_SOURCE[0]}")
bash --init-file "$file_name"
;;
8)
clear
printf "UPDATING FROM WITHIN THE APP IS BEING DEPRECATED\n"
printf "Please download the installer from https://github.com/invoke-ai/InvokeAI/releases/latest and run it to update your installation.\n"
sleep 4
python -m invokeai.frontend.install.invokeai_update
;;
9)
3)
clear
printf "Running the db maintenance script\n"
invokeai-db-maintenance --root ${INVOKEAI_ROOT}
;;
10)
4)
clear
printf "Command-line help\n"
invokeai-web --help
@ -114,69 +66,26 @@ do_choice() {
clear
}
# Dialog-based TUI for launching Invoke functions
do_dialog() {
options=(
1 "Generate images with a browser-based interface"
2 "Textual inversion training"
3 "Merge models (diffusers type only)"
4 "Download and install models"
5 "Change InvokeAI startup options"
6 "Re-run the configure script to fix a broken install or to complete a major upgrade"
7 "Open the developer console"
8 "Update InvokeAI (DEPRECATED - please use the installer)"
9 "Run the InvokeAI image database maintenance script"
10 "Command-line help"
)
choice=$(dialog --clear \
--backtitle "\Zb\Zu\Z3InvokeAI" \
--colors \
--title "What would you like to do?" \
--ok-label "Run" \
--cancel-label "Exit" \
--help-button \
--help-label "CLI Help" \
--menu "Select an option:" \
0 0 0 \
"${options[@]}" \
2>&1 >/dev/tty) || clear
do_choice "$choice"
clear
}
# Command-line interface for launching Invoke functions
do_line_input() {
clear
printf " ** For a more attractive experience, please install the 'dialog' utility using your package manager. **\n\n"
printf "What would you like to do?\n"
printf "1: Generate images using the browser-based interface\n"
printf "2: Run textual inversion training\n"
printf "3: Merge models (diffusers type only)\n"
printf "4: Download and install models\n"
printf "5: Change InvokeAI startup options\n"
printf "6: Re-run the configure script to fix a broken install\n"
printf "7: Open the developer console\n"
printf "8: Update InvokeAI\n"
printf "9: Run the InvokeAI image database maintenance script\n"
printf "10: Command-line help\n"
printf "2: Open the developer console\n"
printf "3: Run the InvokeAI image database maintenance script\n"
printf "4: Command-line help\n"
printf "Q: Quit\n\n"
read -p "Please enter 1-10, Q: [1] " yn
printf "To update, download and run the installer from https://github.com/invoke-ai/InvokeAI/releases/latest.\n\n"
read -p "Please enter 1-4, Q: [1] " yn
choice=${yn:='1'}
do_choice $choice
clear
}
# Main IF statement for launching Invoke with either the TUI or CLI, and for checking if the user is in the developer console
# Main IF statement for launching Invoke, and for checking if the user is in the developer console
if [ "$0" != "bash" ]; then
while true; do
if $tui; then
# .dialogrc must be located in the same directory as the invoke.sh script
export DIALOGRC="./.dialogrc"
do_dialog
else
do_line_input
fi
done
else # in developer console
python --version

View File

@ -1,11 +0,0 @@
Organization of the source tree:
app -- Home of nodes invocations and services
assets -- Images and other data files used by InvokeAI
backend -- Non-user facing libraries, including the rendering
core.
configs -- Configuration files used at install and run times
frontend -- User-facing scripts, including the CLI and the WebUI
version -- Current InvokeAI version string, stored
in version/invokeai_version.py

View File

@ -25,6 +25,7 @@ from ..services.invocation_cache.invocation_cache_memory import MemoryInvocation
from ..services.invocation_services import InvocationServices
from ..services.invocation_stats.invocation_stats_default import InvocationStatsService
from ..services.invoker import Invoker
from ..services.model_images.model_images_default import ModelImageFileStorageDisk
from ..services.model_manager.model_manager_default import ModelManagerService
from ..services.model_records import ModelRecordServiceSQL
from ..services.names.names_default import SimpleNameService
@ -63,14 +64,15 @@ class ApiDependencies:
def initialize(config: InvokeAIAppConfig, event_handler_id: int, logger: Logger = logger) -> None:
logger.info(f"InvokeAI version {__version__}")
logger.info(f"Root directory = {str(config.root_path)}")
logger.debug(f"Internet connectivity is {config.internet_available}")
output_folder = config.output_path
output_folder = config.outputs_path
if output_folder is None:
raise ValueError("Output folder is not set")
image_files = DiskImageFileStorage(f"{output_folder}/images")
model_images_folder = config.models_path
db = init_db(config=config, logger=logger, image_files=image_files)
configuration = config
@ -92,6 +94,7 @@ class ApiDependencies:
ObjectSerializerDisk[ConditioningFieldData](output_folder / "conditioning", ephemeral=True)
)
download_queue_service = DownloadQueueService(event_bus=events)
model_images_service = ModelImageFileStorageDisk(model_images_folder / "model_images")
model_manager = ModelManagerService.build_model_manager(
app_config=configuration,
model_record_service=ModelRecordServiceSQL(db=db),
@ -118,6 +121,7 @@ class ApiDependencies:
images=images,
invocation_cache=invocation_cache,
logger=logger,
model_images=model_images_service,
model_manager=model_manager,
download_queue=download_queue_service,
names=names,
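The two hunks above create a `model_images` storage service rooted at a `model_images` folder under the models path and register it with the invocation services. The real implementation is `ModelImageFileStorageDisk`; the sketch below only approximates the surface that the routes later in this diff rely on (`get_path`, `get_url`, `save`, `delete`), and the URL shape is an assumption.

```python
# Rough sketch of a disk-backed model image store; not the actual
# ModelImageFileStorageDisk implementation, just the surface the routes use.
from pathlib import Path

from PIL import Image


class DiskModelImageStoreSketch:
    def __init__(self, folder: Path) -> None:
        self._folder = folder
        self._folder.mkdir(parents=True, exist_ok=True)

    def get_path(self, model_key: str) -> Path:
        return self._folder / f"{model_key}.png"

    def get_url(self, model_key: str) -> str | None:
        # Assumed URL shape; the real service derives this from the API routes.
        return f"/api/v2/models/i/{model_key}/image" if self.get_path(model_key).exists() else None

    def save(self, image: Image.Image, model_key: str) -> None:
        image.save(self.get_path(model_key), format="PNG")

    def delete(self, model_key: str) -> None:
        self.get_path(model_key).unlink(missing_ok=True)
```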

View File

@ -12,7 +12,6 @@ from pydantic import BaseModel, Field
from invokeai.app.invocations.upscale import ESRGAN_MODELS
from invokeai.app.services.invocation_cache.invocation_cache_common import InvocationCacheStatus
from invokeai.backend.image_util.invisible_watermark import InvisibleWatermark
from invokeai.backend.image_util.patchmatch import PatchMatch
from invokeai.backend.image_util.safety_checker import SafetyChecker
from invokeai.backend.util.logging import logging
@ -114,9 +113,7 @@ async def get_config() -> AppConfig:
if SafetyChecker.safety_checker_available():
nsfw_methods.append("nsfw_checker")
watermarking_methods = []
if InvisibleWatermark.invisible_watermark_available():
watermarking_methods.append("invisible_watermark")
watermarking_methods = ["invisible_watermark"]
return AppConfig(
infill_methods=infill_methods,

View File

@ -1,22 +1,32 @@
# Copyright (c) 2023 Lincoln D. Stein
"""FastAPI route for model configuration records."""
import contextlib
import io
import pathlib
import shutil
import traceback
from copy import deepcopy
from enum import Enum
from typing import Any, Dict, List, Optional
from fastapi import Body, Path, Query, Response
import huggingface_hub
from fastapi import Body, Path, Query, Response, UploadFile
from fastapi.responses import FileResponse
from fastapi.routing import APIRouter
from pydantic import BaseModel, ConfigDict, Field
from PIL import Image
from pydantic import AnyHttpUrl, BaseModel, ConfigDict, Field
from starlette.exceptions import HTTPException
from typing_extensions import Annotated
from invokeai.app.services.model_install import ModelInstallJob
from invokeai.app.services.model_records import (
DuplicateModelException,
InvalidModelException,
ModelRecordChanges,
UnknownModelException,
)
from invokeai.app.services.model_records.model_records_base import DuplicateModelException, ModelRecordChanges
from invokeai.app.util.suppress_output import SuppressOutput
from invokeai.backend.model_manager.config import (
AnyModelConfig,
BaseModelType,
@ -25,12 +35,18 @@ from invokeai.backend.model_manager.config import (
ModelType,
SubModelType,
)
from invokeai.backend.model_manager.metadata.fetch.huggingface import HuggingFaceMetadataFetch
from invokeai.backend.model_manager.metadata.metadata_base import ModelMetadataWithFiles, UnknownMetadataException
from invokeai.backend.model_manager.search import ModelSearch
from invokeai.backend.model_manager.starter_models import STARTER_MODELS, StarterModel, StarterModelWithoutDependencies
from ..dependencies import ApiDependencies
model_manager_router = APIRouter(prefix="/v2/models", tags=["model_manager"])
# images are immutable; set a high max-age
IMAGE_MAX_AGE = 31536000
class ModelsList(BaseModel):
"""Return list of configs."""
@ -105,6 +121,9 @@ async def list_model_records(
found_models.extend(
record_store.search_by_attr(model_type=model_type, model_name=model_name, model_format=model_format)
)
for model in found_models:
cover_image = ApiDependencies.invoker.services.model_images.get_url(model.key)
model.cover_image = cover_image
return ModelsList(models=found_models)
@ -148,6 +167,8 @@ async def get_model_record(
record_store = ApiDependencies.invoker.services.model_manager.store
try:
config: AnyModelConfig = record_store.get_model(key)
cover_image = ApiDependencies.invoker.services.model_images.get_url(key)
config.cover_image = cover_image
return config
except UnknownModelException as e:
raise HTTPException(status_code=404, detail=str(e))
@ -234,6 +255,40 @@ async def scan_for_models(
return scan_results
class HuggingFaceModels(BaseModel):
urls: List[AnyHttpUrl] | None = Field(description="URLs for all checkpoint format models in the metadata")
is_diffusers: bool = Field(description="Whether the metadata is for a Diffusers format model")
@model_manager_router.get(
"/hugging_face",
operation_id="get_hugging_face_models",
responses={
200: {"description": "Hugging Face repo scanned successfully"},
400: {"description": "Invalid hugging face repo"},
},
status_code=200,
response_model=HuggingFaceModels,
)
async def get_hugging_face_models(
hugging_face_repo: str = Query(description="Hugging face repo to search for models", default=None),
) -> HuggingFaceModels:
try:
metadata = HuggingFaceMetadataFetch().from_id(hugging_face_repo)
except UnknownMetadataException:
raise HTTPException(
status_code=400,
detail="No HuggingFace repository found",
)
assert isinstance(metadata, ModelMetadataWithFiles)
return HuggingFaceModels(
urls=metadata.ckpt_urls,
is_diffusers=metadata.is_diffusers,
)
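A hedged example of calling the new scan endpoint from a client. The host, port, and `/api` mount prefix are assumptions; the query parameter and response fields follow the route and the `HuggingFaceModels` model above, and the repo id is just an example.

```python
# Example request against the new Hugging Face scan endpoint; base URL assumed.
import requests

resp = requests.get(
    "http://127.0.0.1:9090/api/v2/models/hugging_face",
    params={"hugging_face_repo": "stabilityai/sdxl-turbo"},
)
resp.raise_for_status()
data = resp.json()
print("diffusers repo:", data["is_diffusers"])
print("checkpoint URLs:", data["urls"])
```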
@model_manager_router.patch(
"/i/{key}",
operation_id="update_model_record",
@ -255,8 +310,10 @@ async def update_model_record(
"""Update a model's config."""
logger = ApiDependencies.invoker.services.logger
record_store = ApiDependencies.invoker.services.model_manager.store
installer = ApiDependencies.invoker.services.model_manager.install
try:
model_response: AnyModelConfig = record_store.update_model(key, changes=changes)
record_store.update_model(key, changes=changes)
model_response: AnyModelConfig = installer.sync_model_path(key)
logger.info(f"Updated model: {key}")
except UnknownModelException as e:
raise HTTPException(status_code=404, detail=str(e))
@ -266,6 +323,75 @@ async def update_model_record(
return model_response
@model_manager_router.get(
"/i/{key}/image",
operation_id="get_model_image",
responses={
200: {
"description": "The model image was fetched successfully",
},
400: {"description": "Bad request"},
404: {"description": "The model image could not be found"},
},
status_code=200,
)
async def get_model_image(
key: str = Path(description="The name of model image file to get"),
) -> FileResponse:
"""Gets an image file that previews the model"""
try:
path = ApiDependencies.invoker.services.model_images.get_path(key)
response = FileResponse(
path,
media_type="image/png",
filename=key + ".png",
content_disposition_type="inline",
)
response.headers["Cache-Control"] = f"max-age={IMAGE_MAX_AGE}"
return response
except Exception:
raise HTTPException(status_code=404)
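Because the image is served inline with a long `Cache-Control` max-age, fetching a preview is a plain GET. A hedged client sketch, where the base URL and model key are placeholders:

```python
# Fetch a model's preview image; 404 means no image has been uploaded for this key.
import requests

key = "some-model-key"  # placeholder
resp = requests.get(f"http://127.0.0.1:9090/api/v2/models/i/{key}/image")
if resp.ok:
    with open(f"{key}.png", "wb") as f:
        f.write(resp.content)
```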
@model_manager_router.patch(
"/i/{key}/image",
operation_id="update_model_image",
responses={
200: {
"description": "The model image was updated successfully",
},
400: {"description": "Bad request"},
},
status_code=200,
)
async def update_model_image(
key: Annotated[str, Path(description="Unique key of model")],
image: UploadFile,
) -> None:
if not image.content_type or not image.content_type.startswith("image"):
raise HTTPException(status_code=415, detail="Not an image")
contents = await image.read()
try:
pil_image = Image.open(io.BytesIO(contents))
except Exception:
ApiDependencies.invoker.services.logger.error(traceback.format_exc())
raise HTTPException(status_code=415, detail="Failed to read image")
logger = ApiDependencies.invoker.services.logger
model_images = ApiDependencies.invoker.services.model_images
try:
model_images.save(pil_image, key)
logger.info(f"Updated image for model: {key}")
except ValueError as e:
logger.error(str(e))
raise HTTPException(status_code=409, detail=str(e))
return
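The upload route expects a multipart form with a single `image` part and rejects non-image content types with 415. A hedged client sketch, with the base URL, key, and file name as placeholders:

```python
# Upload (or replace) a model's preview image via multipart form data.
import requests

key = "some-model-key"  # placeholder
with open("cover.png", "rb") as f:
    resp = requests.patch(
        f"http://127.0.0.1:9090/api/v2/models/i/{key}/image",
        files={"image": ("cover.png", f, "image/png")},
    )
resp.raise_for_status()
```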
@model_manager_router.delete(
"/i/{key}",
operation_id="delete_model",
@ -296,6 +422,29 @@ async def delete_model(
raise HTTPException(status_code=404, detail=str(e))
@model_manager_router.delete(
"/i/{key}/image",
operation_id="delete_model_image",
responses={
204: {"description": "Model image deleted successfully"},
404: {"description": "Model image not found"},
},
status_code=204,
)
async def delete_model_image(
key: str = Path(description="Unique key of model image to remove from model_images directory."),
) -> None:
logger = ApiDependencies.invoker.services.logger
model_images = ApiDependencies.invoker.services.model_images
try:
model_images.delete(key)
logger.info(f"Deleted model image: {key}")
return
except UnknownModelException as e:
logger.error(str(e))
raise HTTPException(status_code=404, detail=str(e))
# @model_manager_router.post(
# "/i/",
# operation_id="add_model_record",
@ -539,7 +688,7 @@ async def convert_model(
raise HTTPException(400, f"The model with key {key} is not a main checkpoint model.")
# loading the model will convert it into a cached diffusers file
model_manager.load_model_by_config(model_config, submodel_type=SubModelType.Scheduler)
model_manager.load.load_model(model_config, submodel_type=SubModelType.Scheduler)
# Get the path of the converted model from the loader
cache_path = loader.convert_cache.cache_path(key)
@ -640,3 +789,69 @@ async def convert_model(
# except ValueError as e:
# raise HTTPException(status_code=400, detail=str(e))
# return response
@model_manager_router.get("/starter_models", operation_id="get_starter_models", response_model=list[StarterModel])
async def get_starter_models() -> list[StarterModel]:
installed_models = ApiDependencies.invoker.services.model_manager.store.search_by_attr()
installed_model_sources = {m.source for m in installed_models}
starter_models = deepcopy(STARTER_MODELS)
for model in starter_models:
if model.source in installed_model_sources:
model.is_installed = True
# Remove already-installed dependencies
missing_deps: list[StarterModelWithoutDependencies] = []
for dep in model.dependencies or []:
if dep.source not in installed_model_sources:
missing_deps.append(dep)
model.dependencies = missing_deps
return starter_models
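The pruning above drops any dependency whose source is already installed, so the UI only offers what is actually missing. A toy illustration of the same logic, using made-up source identifiers:

```python
# Toy illustration of the dependency pruning performed by get_starter_models.
installed_sources = {"hf://example/vae"}  # made-up source identifiers

starter = {
    "source": "hf://example/main",
    "dependencies": [{"source": "hf://example/vae"}, {"source": "hf://example/lora"}],
}
starter["dependencies"] = [
    dep for dep in starter["dependencies"] if dep["source"] not in installed_sources
]
assert starter["dependencies"] == [{"source": "hf://example/lora"}]
```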
class HFTokenStatus(str, Enum):
VALID = "valid"
INVALID = "invalid"
UNKNOWN = "unknown"
class HFTokenHelper:
@classmethod
def get_status(cls) -> HFTokenStatus:
try:
if huggingface_hub.get_token_permission(huggingface_hub.get_token()):
# Valid token!
return HFTokenStatus.VALID
# No token set
return HFTokenStatus.INVALID
except Exception:
return HFTokenStatus.UNKNOWN
@classmethod
def set_token(cls, token: str) -> HFTokenStatus:
with SuppressOutput(), contextlib.suppress(Exception):
huggingface_hub.login(token=token, add_to_git_credential=False)
return cls.get_status()
@model_manager_router.get("/hf_login", operation_id="get_hf_login_status", response_model=HFTokenStatus)
async def get_hf_login_status() -> HFTokenStatus:
token_status = HFTokenHelper.get_status()
if token_status is HFTokenStatus.UNKNOWN:
ApiDependencies.invoker.services.logger.warning("Unable to verify HF token")
return token_status
@model_manager_router.post("/hf_login", operation_id="do_hf_login", response_model=HFTokenStatus)
async def do_hf_login(
token: str = Body(description="Hugging Face token to use for login", embed=True),
) -> HFTokenStatus:
HFTokenHelper.set_token(token)
token_status = HFTokenHelper.get_status()
if token_status is HFTokenStatus.UNKNOWN:
ApiDependencies.invoker.services.logger.warning("Unable to verify HF token")
return token_status
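A hedged example of driving the two token routes from a client: POST a token (embedded in the JSON body as `token`), then read the status back. The base URL is an assumption and the token value is a placeholder.

```python
# Set a Hugging Face token and verify it through the new endpoints.
import requests

base = "http://127.0.0.1:9090/api/v2/models"
status = requests.post(f"{base}/hf_login", json={"token": "hf_xxx"}).json()  # placeholder token
if status != "valid":
    status = requests.get(f"{base}/hf_login").json()
print("HF token status:", status)  # "valid", "invalid", or "unknown"
```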

View File

@ -1,25 +1,7 @@
# parse_args() must be called before any other imports. if it is not called first, consumers of the config
# which are imported/used before parse_args() is called will get the default config values instead of the
# values from the command line or config file.
import sys
from contextlib import asynccontextmanager
from invokeai.app.api.no_cache_staticfiles import NoCacheStaticFiles
from invokeai.version.invokeai_version import __version__
from .invocations.fields import InputFieldJSONSchemaExtra, OutputFieldJSONSchemaExtra
from .services.config import InvokeAIAppConfig
app_config = InvokeAIAppConfig.get_config()
app_config.parse_args()
if app_config.version:
print(f"InvokeAI version {__version__}")
sys.exit(0)
if True: # hack to make flake8 happy with imports coming after setting up the config
import asyncio
import mimetypes
import socket
from contextlib import asynccontextmanager
from inspect import signature
from pathlib import Path
from typing import Any
@ -40,6 +22,10 @@ if True: # hack to make flake8 happy with imports coming after setting up the c
# noinspection PyUnresolvedReferences
import invokeai.backend.util.hotfixes # noqa: F401 (monkeypatching on import)
import invokeai.frontend.web as web_dir
from invokeai.app.api.no_cache_staticfiles import NoCacheStaticFiles
from invokeai.app.invocations.model import ModelIdentifierField
from invokeai.app.services.config.config_default import get_config
from invokeai.app.services.session_processor.session_processor_common import ProgressImage
from ..backend.util.logging import InvokeAILogger
from .api.dependencies import ApiDependencies
@ -59,13 +45,15 @@ if True: # hack to make flake8 happy with imports coming after setting up the c
BaseInvocation,
UIConfigBase,
)
from .invocations.fields import InputFieldJSONSchemaExtra, OutputFieldJSONSchemaExtra
app_config = get_config()
if is_mps_available():
import invokeai.backend.util.mps_fixes # noqa: F401 (monkeypatching on import)
app_config = InvokeAIAppConfig.get_config()
app_config.parse_args()
logger = InvokeAILogger.get_logger(config=app_config)
# fix for windows mimetypes registry entries being borked
# see https://github.com/invoke-ai/InvokeAI/discussions/3684#discussioncomment-6391352
@ -156,17 +144,19 @@ def custom_openapi() -> dict[str, Any]:
openapi_schema["components"]["schemas"][schema_key] = output_schema
openapi_schema["components"]["schemas"][schema_key]["class"] = "output"
# Add Node Editor UI helper schemas
ui_config_schemas = models_json_schema(
# Some models don't end up in the schemas as standalone definitions
additional_schemas = models_json_schema(
[
(UIConfigBase, "serialization"),
(InputFieldJSONSchemaExtra, "serialization"),
(OutputFieldJSONSchemaExtra, "serialization"),
(ModelIdentifierField, "serialization"),
(ProgressImage, "serialization"),
],
ref_template="#/components/schemas/{model}",
)
for schema_key, ui_config_schema in ui_config_schemas[1]["$defs"].items():
openapi_schema["components"]["schemas"][schema_key] = ui_config_schema
for schema_key, schema_json in additional_schemas[1]["$defs"].items():
openapi_schema["components"]["schemas"][schema_key] = schema_json
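For reference, a small standalone example of `models_json_schema`, the pydantic helper used above to gather extra component schemas into a single `$defs` mapping. The toy models are assumptions; only the helper call and the `ref_template` usage mirror the diff.

```python
# Standalone demonstration of pydantic's models_json_schema helper.
from pydantic import BaseModel
from pydantic.json_schema import models_json_schema


class ProgressImageToy(BaseModel):
    width: int
    height: int


class UIConfigToy(BaseModel):
    title: str


_, top_level = models_json_schema(
    [(ProgressImageToy, "serialization"), (UIConfigToy, "serialization")],
    ref_template="#/components/schemas/{model}",
)
for name, schema in top_level["$defs"].items():
    print(name, sorted(schema["properties"]))  # e.g. ProgressImageToy ['height', 'width']
```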
# Add a reference to the output type to additionalProperties of the invoker schema
for invoker in all_invocations:
@ -243,10 +233,6 @@ def invoke_api() -> None:
else:
return port
from invokeai.backend.install.check_root import check_invokeai_root
check_invokeai_root(app_config) # note, may exit with an exception if root not set up
if app_config.dev_reload:
try:
import jurigged

View File

@ -3,9 +3,9 @@ import sys
from importlib.util import module_from_spec, spec_from_file_location
from pathlib import Path
from invokeai.app.services.config.config_default import InvokeAIAppConfig
from invokeai.app.services.config.config_default import get_config
custom_nodes_path = Path(InvokeAIAppConfig.get_config().custom_nodes_path.resolve())
custom_nodes_path = Path(get_config().custom_nodes_path)
custom_nodes_path.mkdir(parents=True, exist_ok=True)
custom_nodes_init_path = str(custom_nodes_path / "__init__.py")

View File

@ -33,7 +33,7 @@ from invokeai.app.invocations.fields import (
FieldKind,
Input,
)
from invokeai.app.services.config.config_default import InvokeAIAppConfig
from invokeai.app.services.config.config_default import get_config
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.app.util.metaenum import MetaEnum
from invokeai.app.util.misc import uuid_string
@ -191,7 +191,7 @@ class BaseInvocation(ABC, BaseModel):
@classmethod
def get_invocations(cls) -> Iterable[BaseInvocation]:
"""Gets all invocations, respecting the allowlist and denylist."""
app_config = InvokeAIAppConfig.get_config()
app_config = get_config()
allowed_invocations: set[BaseInvocation] = set()
for sc in cls._invocation_classes:
invocation_type = sc.get_type()

View File

@ -20,7 +20,7 @@ from invokeai.backend.stable_diffusion.diffusion.conditioning_data import (
from invokeai.backend.util.devices import torch_dtype
from .baseinvocation import BaseInvocation, BaseInvocationOutput, invocation, invocation_output
from .model import ClipField
from .model import CLIPField
# unconditioned: Optional[torch.Tensor]
@ -36,7 +36,7 @@ from .model import ClipField
title="Prompt",
tags=["prompt", "compel"],
category="conditioning",
version="1.0.1",
version="1.1.1",
)
class CompelInvocation(BaseInvocation):
"""Parse prompt using compel package to conditioning."""
@ -46,7 +46,7 @@ class CompelInvocation(BaseInvocation):
description=FieldDescriptions.compel_prompt,
ui_component=UIComponent.Textarea,
)
clip: ClipField = InputField(
clip: CLIPField = InputField(
title="CLIP",
description=FieldDescriptions.clip,
input=Input.Connection,
@ -54,16 +54,16 @@ class CompelInvocation(BaseInvocation):
@torch.no_grad()
def invoke(self, context: InvocationContext) -> ConditioningOutput:
tokenizer_info = context.models.load(**self.clip.tokenizer.model_dump())
tokenizer_info = context.models.load(self.clip.tokenizer)
tokenizer_model = tokenizer_info.model
assert isinstance(tokenizer_model, CLIPTokenizer)
text_encoder_info = context.models.load(**self.clip.text_encoder.model_dump())
text_encoder_info = context.models.load(self.clip.text_encoder)
text_encoder_model = text_encoder_info.model
assert isinstance(text_encoder_model, CLIPTextModel)
def _lora_loader() -> Iterator[Tuple[LoRAModelRaw, float]]:
for lora in self.clip.loras:
lora_info = context.models.load(**lora.model_dump(exclude={"weight"}))
lora_info = context.models.load(lora.lora)
assert isinstance(lora_info.model, LoRAModelRaw)
yield (lora_info.model, lora.weight)
del lora_info
@ -127,16 +127,16 @@ class SDXLPromptInvocationBase:
def run_clip_compel(
self,
context: InvocationContext,
clip_field: ClipField,
clip_field: CLIPField,
prompt: str,
get_pooled: bool,
lora_prefix: str,
zero_on_empty: bool,
) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[ExtraConditioningInfo]]:
tokenizer_info = context.models.load(**clip_field.tokenizer.model_dump())
tokenizer_info = context.models.load(clip_field.tokenizer)
tokenizer_model = tokenizer_info.model
assert isinstance(tokenizer_model, CLIPTokenizer)
text_encoder_info = context.models.load(**clip_field.text_encoder.model_dump())
text_encoder_info = context.models.load(clip_field.text_encoder)
text_encoder_model = text_encoder_info.model
assert isinstance(text_encoder_model, (CLIPTextModel, CLIPTextModelWithProjection))
@ -163,7 +163,7 @@ class SDXLPromptInvocationBase:
def _lora_loader() -> Iterator[Tuple[LoRAModelRaw, float]]:
for lora in clip_field.loras:
lora_info = context.models.load(**lora.model_dump(exclude={"weight"}))
lora_info = context.models.load(lora.lora)
lora_model = lora_info.model
assert isinstance(lora_model, LoRAModelRaw)
yield (lora_model, lora.weight)
@ -232,7 +232,7 @@ class SDXLPromptInvocationBase:
title="SDXL Prompt",
tags=["sdxl", "compel", "prompt"],
category="conditioning",
version="1.0.1",
version="1.1.1",
)
class SDXLCompelPromptInvocation(BaseInvocation, SDXLPromptInvocationBase):
"""Parse prompt using compel package to conditioning."""
@ -253,8 +253,8 @@ class SDXLCompelPromptInvocation(BaseInvocation, SDXLPromptInvocationBase):
crop_left: int = InputField(default=0, description="")
target_width: int = InputField(default=1024, description="")
target_height: int = InputField(default=1024, description="")
clip: ClipField = InputField(description=FieldDescriptions.clip, input=Input.Connection, title="CLIP 1")
clip2: ClipField = InputField(description=FieldDescriptions.clip, input=Input.Connection, title="CLIP 2")
clip: CLIPField = InputField(description=FieldDescriptions.clip, input=Input.Connection, title="CLIP 1")
clip2: CLIPField = InputField(description=FieldDescriptions.clip, input=Input.Connection, title="CLIP 2")
@torch.no_grad()
def invoke(self, context: InvocationContext) -> ConditioningOutput:
@ -325,7 +325,7 @@ class SDXLCompelPromptInvocation(BaseInvocation, SDXLPromptInvocationBase):
title="SDXL Refiner Prompt",
tags=["sdxl", "compel", "prompt"],
category="conditioning",
version="1.0.1",
version="1.1.1",
)
class SDXLRefinerCompelPromptInvocation(BaseInvocation, SDXLPromptInvocationBase):
"""Parse prompt using compel package to conditioning."""
@ -340,7 +340,7 @@ class SDXLRefinerCompelPromptInvocation(BaseInvocation, SDXLPromptInvocationBase
crop_top: int = InputField(default=0, description="")
crop_left: int = InputField(default=0, description="")
aesthetic_score: float = InputField(default=6.0, description=FieldDescriptions.sdxl_aesthetic)
clip2: ClipField = InputField(description=FieldDescriptions.clip, input=Input.Connection)
clip2: CLIPField = InputField(description=FieldDescriptions.clip, input=Input.Connection)
@torch.no_grad()
def invoke(self, context: InvocationContext) -> ConditioningOutput:
@ -370,10 +370,10 @@ class SDXLRefinerCompelPromptInvocation(BaseInvocation, SDXLPromptInvocationBase
@invocation_output("clip_skip_output")
class ClipSkipInvocationOutput(BaseInvocationOutput):
"""Clip skip node output"""
class CLIPSkipInvocationOutput(BaseInvocationOutput):
"""CLIP skip node output"""
clip: Optional[ClipField] = OutputField(default=None, description=FieldDescriptions.clip, title="CLIP")
clip: Optional[CLIPField] = OutputField(default=None, description=FieldDescriptions.clip, title="CLIP")
@invocation(
@ -381,17 +381,17 @@ class ClipSkipInvocationOutput(BaseInvocationOutput):
title="CLIP Skip",
tags=["clipskip", "clip", "skip"],
category="conditioning",
version="1.0.0",
version="1.1.0",
)
class ClipSkipInvocation(BaseInvocation):
class CLIPSkipInvocation(BaseInvocation):
"""Skip layers in clip text_encoder model."""
clip: ClipField = InputField(description=FieldDescriptions.clip, input=Input.Connection, title="CLIP")
clip: CLIPField = InputField(description=FieldDescriptions.clip, input=Input.Connection, title="CLIP")
skipped_layers: int = InputField(default=0, ge=0, description=FieldDescriptions.skipped_layers)
def invoke(self, context: InvocationContext) -> ClipSkipInvocationOutput:
def invoke(self, context: InvocationContext) -> CLIPSkipInvocationOutput:
self.clip.skipped_layers += self.skipped_layers
return ClipSkipInvocationOutput(
return CLIPSkipInvocationOutput(
clip=self.clip,
)

View File

@ -7,12 +7,8 @@ from typing import Dict, List, Literal, Union
import cv2
import numpy as np
from controlnet_aux import (
CannyDetector,
ContentShuffleDetector,
HEDdetector,
LeresDetector,
LineartAnimeDetector,
LineartDetector,
MediapipeFaceDetector,
MidasDetector,
MLSDdetector,
@ -31,14 +27,20 @@ from invokeai.app.invocations.fields import (
Input,
InputField,
OutputField,
UIType,
WithBoard,
WithMetadata,
)
from invokeai.app.invocations.model import ModelIdentifierField
from invokeai.app.invocations.primitives import ImageOutput
from invokeai.app.invocations.util import validate_begin_end_step, validate_weights
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.image_util.canny import get_canny_edges
from invokeai.backend.image_util.depth_anything import DepthAnythingDetector
from invokeai.backend.image_util.dw_openpose import DWOpenposeDetector
from invokeai.backend.image_util.hed import HEDProcessor
from invokeai.backend.image_util.lineart import LineartProcessor
from invokeai.backend.image_util.lineart_anime import LineartAnimeProcessor
from .baseinvocation import BaseInvocation, BaseInvocationOutput, invocation, invocation_output
@ -51,15 +53,9 @@ CONTROLNET_RESIZE_VALUES = Literal[
]
class ControlNetModelField(BaseModel):
"""ControlNet model field"""
key: str = Field(description="Model config record key for the ControlNet model")
class ControlField(BaseModel):
image: ImageField = Field(description="The control image")
control_model: ControlNetModelField = Field(description="The ControlNet model to use")
control_model: ModelIdentifierField = Field(description="The ControlNet model to use")
control_weight: Union[float, List[float]] = Field(default=1, description="The weight given to the ControlNet")
begin_step_percent: float = Field(
default=0, ge=0, le=1, description="When the ControlNet is first applied (% of total steps)"
@ -95,7 +91,9 @@ class ControlNetInvocation(BaseInvocation):
"""Collects ControlNet info to pass to other nodes"""
image: ImageField = InputField(description="The control image")
control_model: ControlNetModelField = InputField(description=FieldDescriptions.controlnet_model, input=Input.Direct)
control_model: ModelIdentifierField = InputField(
description=FieldDescriptions.controlnet_model, input=Input.Direct, ui_type=UIType.ControlNetModel
)
control_weight: Union[float, List[float]] = InputField(
default=1.0, ge=-1, le=2, description="The weight given to the ControlNet"
)
@ -173,11 +171,13 @@ class ImageProcessorInvocation(BaseInvocation, WithMetadata, WithBoard):
title="Canny Processor",
tags=["controlnet", "canny"],
category="controlnet",
version="1.2.1",
version="1.3.2",
)
class CannyImageProcessorInvocation(ImageProcessorInvocation):
"""Canny edge detection for ControlNet"""
detect_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)
low_threshold: int = InputField(
default=100, ge=0, le=255, description="The low threshold of the Canny pixel gradient (0-255)"
)
@ -189,9 +189,14 @@ class CannyImageProcessorInvocation(ImageProcessorInvocation):
# Keep alpha channel for Canny processing to detect edges of transparent areas
return context.images.get_pil(self.image.image_name, "RGBA")
def run_processor(self, image):
canny_processor = CannyDetector()
processed_image = canny_processor(image, self.low_threshold, self.high_threshold)
def run_processor(self, image: Image.Image) -> Image.Image:
processed_image = get_canny_edges(
image,
self.low_threshold,
self.high_threshold,
detect_resolution=self.detect_resolution,
image_resolution=self.image_resolution,
)
return processed_image
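A hedged usage sketch of the new backend helper, mirroring the call shape above. The import path comes from the earlier hunk in this file; the input path is a placeholder, and the PIL return type is assumed from how the processor uses the result.

```python
# Run Canny edge detection directly through the backend helper.
from PIL import Image

from invokeai.backend.image_util.canny import get_canny_edges

image = Image.open("input.png").convert("RGBA")  # placeholder input
edges = get_canny_edges(
    image,
    100,  # low threshold
    200,  # high threshold
    detect_resolution=512,
    image_resolution=512,
)
edges.save("edges.png")
```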
@ -200,7 +205,7 @@ class CannyImageProcessorInvocation(ImageProcessorInvocation):
title="HED (softedge) Processor",
tags=["controlnet", "hed", "softedge"],
category="controlnet",
version="1.2.1",
version="1.2.2",
)
class HedImageProcessorInvocation(ImageProcessorInvocation):
"""Applies HED edge detection to image"""
@ -211,9 +216,9 @@ class HedImageProcessorInvocation(ImageProcessorInvocation):
# safe: bool = InputField(default=False, description=FieldDescriptions.safe_mode)
scribble: bool = InputField(default=False, description=FieldDescriptions.scribble_mode)
def run_processor(self, image):
hed_processor = HEDdetector.from_pretrained("lllyasviel/Annotators")
processed_image = hed_processor(
def run_processor(self, image: Image.Image) -> Image.Image:
hed_processor = HEDProcessor()
processed_image = hed_processor.run(
image,
detect_resolution=self.detect_resolution,
image_resolution=self.image_resolution,
@ -229,7 +234,7 @@ class HedImageProcessorInvocation(ImageProcessorInvocation):
title="Lineart Processor",
tags=["controlnet", "lineart"],
category="controlnet",
version="1.2.1",
version="1.2.2",
)
class LineartImageProcessorInvocation(ImageProcessorInvocation):
"""Applies line art processing to image"""
@ -238,9 +243,9 @@ class LineartImageProcessorInvocation(ImageProcessorInvocation):
image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)
coarse: bool = InputField(default=False, description="Whether to use coarse mode")
def run_processor(self, image):
lineart_processor = LineartDetector.from_pretrained("lllyasviel/Annotators")
processed_image = lineart_processor(
def run_processor(self, image: Image.Image) -> Image.Image:
lineart_processor = LineartProcessor()
processed_image = lineart_processor.run(
image, detect_resolution=self.detect_resolution, image_resolution=self.image_resolution, coarse=self.coarse
)
return processed_image
@ -251,7 +256,7 @@ class LineartImageProcessorInvocation(ImageProcessorInvocation):
title="Lineart Anime Processor",
tags=["controlnet", "lineart", "anime"],
category="controlnet",
version="1.2.1",
version="1.2.2",
)
class LineartAnimeImageProcessorInvocation(ImageProcessorInvocation):
"""Applies line art anime processing to image"""
@ -259,9 +264,9 @@ class LineartAnimeImageProcessorInvocation(ImageProcessorInvocation):
detect_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)
def run_processor(self, image):
processor = LineartAnimeDetector.from_pretrained("lllyasviel/Annotators")
processed_image = processor(
def run_processor(self, image: Image.Image) -> Image.Image:
processor = LineartAnimeProcessor()
processed_image = processor.run(
image,
detect_resolution=self.detect_resolution,
image_resolution=self.image_resolution,
@ -274,13 +279,15 @@ class LineartAnimeImageProcessorInvocation(ImageProcessorInvocation):
title="Midas Depth Processor",
tags=["controlnet", "midas"],
category="controlnet",
version="1.2.1",
version="1.2.3",
)
class MidasDepthImageProcessorInvocation(ImageProcessorInvocation):
"""Applies Midas depth processing to image"""
a_mult: float = InputField(default=2.0, ge=0, description="Midas parameter `a_mult` (a = a_mult * PI)")
bg_th: float = InputField(default=0.1, ge=0, description="Midas parameter `bg_th`")
detect_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)
# depth_and_normal not supported in controlnet_aux v0.0.3
# depth_and_normal: bool = InputField(default=False, description="whether to use depth and normal mode")
@ -290,6 +297,8 @@ class MidasDepthImageProcessorInvocation(ImageProcessorInvocation):
image,
a=np.pi * self.a_mult,
bg_th=self.bg_th,
image_resolution=self.image_resolution,
detect_resolution=self.detect_resolution,
# depth_and_normal not supported in controlnet_aux v0.0.3
# depth_and_normal=self.depth_and_normal,
)
@ -301,7 +310,7 @@ class MidasDepthImageProcessorInvocation(ImageProcessorInvocation):
title="Normal BAE Processor",
tags=["controlnet"],
category="controlnet",
version="1.2.1",
version="1.2.2",
)
class NormalbaeImageProcessorInvocation(ImageProcessorInvocation):
"""Applies NormalBae processing to image"""
@ -318,7 +327,7 @@ class NormalbaeImageProcessorInvocation(ImageProcessorInvocation):
@invocation(
"mlsd_image_processor", title="MLSD Processor", tags=["controlnet", "mlsd"], category="controlnet", version="1.2.1"
"mlsd_image_processor", title="MLSD Processor", tags=["controlnet", "mlsd"], category="controlnet", version="1.2.2"
)
class MlsdImageProcessorInvocation(ImageProcessorInvocation):
"""Applies MLSD processing to image"""
@ -341,7 +350,7 @@ class MlsdImageProcessorInvocation(ImageProcessorInvocation):
@invocation(
"pidi_image_processor", title="PIDI Processor", tags=["controlnet", "pidi"], category="controlnet", version="1.2.1"
"pidi_image_processor", title="PIDI Processor", tags=["controlnet", "pidi"], category="controlnet", version="1.2.2"
)
class PidiImageProcessorInvocation(ImageProcessorInvocation):
"""Applies PIDI processing to image"""
@ -368,7 +377,7 @@ class PidiImageProcessorInvocation(ImageProcessorInvocation):
title="Content Shuffle Processor",
tags=["controlnet", "contentshuffle"],
category="controlnet",
version="1.2.1",
version="1.2.2",
)
class ContentShuffleImageProcessorInvocation(ImageProcessorInvocation):
"""Applies content shuffle processing to image"""
@ -398,7 +407,7 @@ class ContentShuffleImageProcessorInvocation(ImageProcessorInvocation):
title="Zoe (Depth) Processor",
tags=["controlnet", "zoe", "depth"],
category="controlnet",
version="1.2.1",
version="1.2.2",
)
class ZoeDepthImageProcessorInvocation(ImageProcessorInvocation):
"""Applies Zoe depth processing to image"""
@ -414,17 +423,25 @@ class ZoeDepthImageProcessorInvocation(ImageProcessorInvocation):
title="Mediapipe Face Processor",
tags=["controlnet", "mediapipe", "face"],
category="controlnet",
version="1.2.1",
version="1.2.3",
)
class MediapipeFaceProcessorInvocation(ImageProcessorInvocation):
"""Applies mediapipe face processing to image"""
max_faces: int = InputField(default=1, ge=1, description="Maximum number of faces to detect")
min_confidence: float = InputField(default=0.5, ge=0, le=1, description="Minimum confidence for face detection")
detect_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)
def run_processor(self, image):
mediapipe_face_processor = MediapipeFaceDetector()
processed_image = mediapipe_face_processor(image, max_faces=self.max_faces, min_confidence=self.min_confidence)
processed_image = mediapipe_face_processor(
image,
max_faces=self.max_faces,
min_confidence=self.min_confidence,
image_resolution=self.image_resolution,
detect_resolution=self.detect_resolution,
)
return processed_image
@ -433,7 +450,7 @@ class MediapipeFaceProcessorInvocation(ImageProcessorInvocation):
title="Leres (Depth) Processor",
tags=["controlnet", "leres", "depth"],
category="controlnet",
version="1.2.1",
version="1.2.2",
)
class LeresImageProcessorInvocation(ImageProcessorInvocation):
"""Applies leres processing to image"""
@ -462,7 +479,7 @@ class LeresImageProcessorInvocation(ImageProcessorInvocation):
title="Tile Resample Processor",
tags=["controlnet", "tile"],
category="controlnet",
version="1.2.1",
version="1.2.2",
)
class TileResamplerProcessorInvocation(ImageProcessorInvocation):
"""Tile resampler processor"""
@ -502,18 +519,23 @@ class TileResamplerProcessorInvocation(ImageProcessorInvocation):
title="Segment Anything Processor",
tags=["controlnet", "segmentanything"],
category="controlnet",
version="1.2.1",
version="1.2.3",
)
class SegmentAnythingProcessorInvocation(ImageProcessorInvocation):
"""Applies segment anything processing to image"""
detect_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)
def run_processor(self, image):
# segment_anything_processor = SamDetector.from_pretrained("ybelkada/segment-anything", subfolder="checkpoints")
segment_anything_processor = SamDetectorReproducibleColors.from_pretrained(
"ybelkada/segment-anything", subfolder="checkpoints"
)
np_img = np.array(image, dtype=np.uint8)
processed_image = segment_anything_processor(np_img)
processed_image = segment_anything_processor(
np_img, image_resolution=self.image_resolution, detect_resolution=self.detect_resolution
)
return processed_image
@ -544,7 +566,7 @@ class SamDetectorReproducibleColors(SamDetector):
title="Color Map Processor",
tags=["controlnet"],
category="controlnet",
version="1.2.1",
version="1.2.2",
)
class ColorMapImageProcessorInvocation(ImageProcessorInvocation):
"""Generates a color map from the provided image"""
@ -576,7 +598,7 @@ DEPTH_ANYTHING_MODEL_SIZES = Literal["large", "base", "small"]
title="Depth Anything Processor",
tags=["controlnet", "depth", "depth anything"],
category="controlnet",
version="1.0.0",
version="1.1.1",
)
class DepthAnythingImageProcessorInvocation(ImageProcessorInvocation):
"""Generates a depth map based on the Depth Anything algorithm"""
@ -585,13 +607,12 @@ class DepthAnythingImageProcessorInvocation(ImageProcessorInvocation):
default="small", description="The size of the depth model to use"
)
resolution: int = InputField(default=512, ge=64, multiple_of=64, description=FieldDescriptions.image_res)
offload: bool = InputField(default=False)
def run_processor(self, image: Image.Image):
depth_anything_detector = DepthAnythingDetector()
depth_anything_detector.load_model(model_size=self.model_size)
processed_image = depth_anything_detector(image=image, resolution=self.resolution, offload=self.offload)
processed_image = depth_anything_detector(image=image, resolution=self.resolution)
return processed_image
@ -600,7 +621,7 @@ class DepthAnythingImageProcessorInvocation(ImageProcessorInvocation):
title="DW Openpose Image Processor",
tags=["controlnet", "dwpose", "openpose"],
category="controlnet",
version="1.0.0",
version="1.1.0",
)
class DWOpenposeImageProcessorInvocation(ImageProcessorInvocation):
"""Generates an openpose pose from an image using DWPose"""

View File

@ -13,7 +13,7 @@ from .baseinvocation import BaseInvocation, invocation
from .fields import InputField, WithBoard, WithMetadata
@invocation("cv_inpaint", title="OpenCV Inpaint", tags=["opencv", "inpaint"], category="inpaint", version="1.2.1")
@invocation("cv_inpaint", title="OpenCV Inpaint", tags=["opencv", "inpaint"], category="inpaint", version="1.3.1")
class CvInpaintInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Simple inpaint using opencv."""

View File

@ -435,7 +435,7 @@ def get_faces_list(
return all_faces
@invocation("face_off", title="FaceOff", tags=["image", "faceoff", "face", "mask"], category="image", version="1.2.1")
@invocation("face_off", title="FaceOff", tags=["image", "faceoff", "face", "mask"], category="image", version="1.2.2")
class FaceOffInvocation(BaseInvocation, WithMetadata):
"""Bound, extract, and mask a face from an image using MediaPipe detection"""
@ -514,7 +514,7 @@ class FaceOffInvocation(BaseInvocation, WithMetadata):
return output
@invocation("face_mask_detection", title="FaceMask", tags=["image", "face", "mask"], category="image", version="1.2.1")
@invocation("face_mask_detection", title="FaceMask", tags=["image", "face", "mask"], category="image", version="1.2.2")
class FaceMaskInvocation(BaseInvocation, WithMetadata):
"""Face mask creation using mediapipe face detection"""
@ -617,7 +617,7 @@ class FaceMaskInvocation(BaseInvocation, WithMetadata):
@invocation(
"face_identifier", title="FaceIdentifier", tags=["image", "face", "identifier"], category="image", version="1.2.1"
"face_identifier", title="FaceIdentifier", tags=["image", "face", "identifier"], category="image", version="1.2.2"
)
class FaceIdentifierInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Outputs an image with detected face IDs printed on each face. For use with other FaceTools."""

View File

@ -39,13 +39,15 @@ class UIType(str, Enum, metaclass=MetaEnum):
"""
# region Model Field Types
MainModel = "MainModelField"
SDXLMainModel = "SDXLMainModelField"
SDXLRefinerModel = "SDXLRefinerModelField"
ONNXModel = "ONNXModelField"
VaeModel = "VAEModelField"
VAEModel = "VAEModelField"
LoRAModel = "LoRAModelField"
ControlNetModel = "ControlNetModelField"
IPAdapterModel = "IPAdapterModelField"
T2IAdapterModel = "T2IAdapterModelField"
# endregion
# region Misc Field Types
@ -86,7 +88,6 @@ class UIType(str, Enum, metaclass=MetaEnum):
IntegerPolymorphic = "DEPRECATED_IntegerPolymorphic"
LatentsPolymorphic = "DEPRECATED_LatentsPolymorphic"
StringPolymorphic = "DEPRECATED_StringPolymorphic"
MainModel = "DEPRECATED_MainModel"
UNet = "DEPRECATED_UNet"
Vae = "DEPRECATED_Vae"
CLIP = "DEPRECATED_CLIP"
@ -228,7 +229,7 @@ class ConditioningField(BaseModel):
# endregion
class MetadataField(RootModel):
class MetadataField(RootModel[dict[str, Any]]):
"""
Pydantic model for metadata with custom root of type dict[str, Any].
Metadata is stored without a strict schema.

View File

@ -49,7 +49,7 @@ class ShowImageInvocation(BaseInvocation):
title="Blank Image",
tags=["image"],
category="image",
version="1.2.1",
version="1.2.2",
)
class BlankImageInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Creates a blank image and forwards it to the pipeline"""
@ -72,7 +72,7 @@ class BlankImageInvocation(BaseInvocation, WithMetadata, WithBoard):
title="Crop Image",
tags=["image", "crop"],
category="image",
version="1.2.1",
version="1.2.2",
)
class ImageCropInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Crops an image to a specified box. The box can be outside of the image."""
@ -143,7 +143,7 @@ class CenterPadCropInvocation(BaseInvocation):
title="Paste Image",
tags=["image", "paste"],
category="image",
version="1.2.1",
version="1.2.2",
)
class ImagePasteInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Pastes an image into another image."""
@ -190,7 +190,7 @@ class ImagePasteInvocation(BaseInvocation, WithMetadata, WithBoard):
title="Mask from Alpha",
tags=["image", "mask"],
category="image",
version="1.2.1",
version="1.2.2",
)
class MaskFromAlphaInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Extracts the alpha channel of an image as a mask."""
@ -215,7 +215,7 @@ class MaskFromAlphaInvocation(BaseInvocation, WithMetadata, WithBoard):
title="Multiply Images",
tags=["image", "multiply"],
category="image",
version="1.2.1",
version="1.2.2",
)
class ImageMultiplyInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Multiplies two images together using `PIL.ImageChops.multiply()`."""
@ -242,7 +242,7 @@ IMAGE_CHANNELS = Literal["A", "R", "G", "B"]
title="Extract Image Channel",
tags=["image", "channel"],
category="image",
version="1.2.1",
version="1.2.2",
)
class ImageChannelInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Gets a channel from an image."""
@ -265,7 +265,7 @@ class ImageChannelInvocation(BaseInvocation, WithMetadata, WithBoard):
title="Convert Image Mode",
tags=["image", "convert"],
category="image",
version="1.2.1",
version="1.2.2",
)
class ImageConvertInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Converts an image to a different mode."""
@ -288,7 +288,7 @@ class ImageConvertInvocation(BaseInvocation, WithMetadata, WithBoard):
title="Blur Image",
tags=["image", "blur"],
category="image",
version="1.2.1",
version="1.2.2",
)
class ImageBlurInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Blurs an image"""
@ -316,7 +316,7 @@ class ImageBlurInvocation(BaseInvocation, WithMetadata, WithBoard):
title="Unsharp Mask",
tags=["image", "unsharp_mask"],
category="image",
version="1.2.1",
version="1.2.2",
classification=Classification.Beta,
)
class UnsharpMaskInvocation(BaseInvocation, WithMetadata, WithBoard):
@ -385,7 +385,7 @@ PIL_RESAMPLING_MAP = {
title="Resize Image",
tags=["image", "resize"],
category="image",
version="1.2.1",
version="1.2.2",
)
class ImageResizeInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Resizes an image to specific dimensions"""
@ -415,7 +415,7 @@ class ImageResizeInvocation(BaseInvocation, WithMetadata, WithBoard):
title="Scale Image",
tags=["image", "scale"],
category="image",
version="1.2.1",
version="1.2.2",
)
class ImageScaleInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Scales an image by a factor"""
@ -450,7 +450,7 @@ class ImageScaleInvocation(BaseInvocation, WithMetadata, WithBoard):
title="Lerp Image",
tags=["image", "lerp"],
category="image",
version="1.2.1",
version="1.2.2",
)
class ImageLerpInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Linear interpolation of all pixels of an image"""
@ -477,7 +477,7 @@ class ImageLerpInvocation(BaseInvocation, WithMetadata, WithBoard):
title="Inverse Lerp Image",
tags=["image", "ilerp"],
category="image",
version="1.2.1",
version="1.2.2",
)
class ImageInverseLerpInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Inverse linear interpolation of all pixels of an image"""
@ -504,7 +504,7 @@ class ImageInverseLerpInvocation(BaseInvocation, WithMetadata, WithBoard):
title="Blur NSFW Image",
tags=["image", "nsfw"],
category="image",
version="1.2.1",
version="1.2.2",
)
class ImageNSFWBlurInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Add blur to NSFW-flagged images"""
@ -539,7 +539,7 @@ class ImageNSFWBlurInvocation(BaseInvocation, WithMetadata, WithBoard):
title="Add Invisible Watermark",
tags=["image", "watermark"],
category="image",
version="1.2.1",
version="1.2.2",
)
class ImageWatermarkInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Add an invisible watermark to an image"""
@ -560,7 +560,7 @@ class ImageWatermarkInvocation(BaseInvocation, WithMetadata, WithBoard):
title="Mask Edge",
tags=["image", "mask", "inpaint"],
category="image",
version="1.2.1",
version="1.2.2",
)
class MaskEdgeInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Applies an edge mask to an image"""
@ -599,7 +599,7 @@ class MaskEdgeInvocation(BaseInvocation, WithMetadata, WithBoard):
title="Combine Masks",
tags=["image", "mask", "multiply"],
category="image",
version="1.2.1",
version="1.2.2",
)
class MaskCombineInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Combine two masks together by multiplying them using `PIL.ImageChops.multiply()`."""
@ -623,7 +623,7 @@ class MaskCombineInvocation(BaseInvocation, WithMetadata, WithBoard):
title="Color Correct",
tags=["image", "color"],
category="image",
version="1.2.1",
version="1.2.2",
)
class ColorCorrectInvocation(BaseInvocation, WithMetadata, WithBoard):
"""
@ -727,7 +727,7 @@ class ColorCorrectInvocation(BaseInvocation, WithMetadata, WithBoard):
title="Adjust Image Hue",
tags=["image", "hue"],
category="image",
version="1.2.1",
version="1.2.2",
)
class ImageHueAdjustmentInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Adjusts the Hue of an image."""
@ -816,7 +816,7 @@ CHANNEL_FORMATS = {
"value",
],
category="image",
version="1.2.1",
version="1.2.2",
)
class ImageChannelOffsetInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Add or subtract a value from a specific color channel of an image."""
@ -872,7 +872,7 @@ class ImageChannelOffsetInvocation(BaseInvocation, WithMetadata, WithBoard):
"value",
],
category="image",
version="1.2.1",
version="1.2.2",
)
class ImageChannelMultiplyInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Scale a specific color channel of an image."""
@ -916,7 +916,7 @@ class ImageChannelMultiplyInvocation(BaseInvocation, WithMetadata, WithBoard):
title="Save Image",
tags=["primitives", "image"],
category="primitives",
version="1.2.1",
version="1.2.2",
use_cache=False,
)
class SaveImageInvocation(BaseInvocation, WithMetadata, WithBoard):
@ -967,3 +967,56 @@ class CanvasPasteBackInvocation(BaseInvocation, WithMetadata, WithBoard):
image_dto = context.images.save(image=source_image)
return ImageOutput.build(image_dto)
@invocation(
"mask_from_id",
title="Mask from ID",
tags=["image", "mask", "id"],
category="image",
version="1.0.0",
)
class MaskFromIDInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Generate a mask for a particular color in an ID Map"""
image: ImageField = InputField(description="The image to create the mask from")
color: ColorField = InputField(description="ID color to mask")
threshold: int = InputField(default=100, description="Threshold for color detection")
invert: bool = InputField(default=False, description="Whether or not to invert the mask")
def rgba_to_hex(self, rgba_color: tuple[int, int, int, int]):
r, g, b, a = rgba_color
hex_code = "#{:02X}{:02X}{:02X}{:02X}".format(r, g, b, int(a * 255))
return hex_code
def id_to_mask(self, id_mask: Image.Image, color: tuple[int, int, int, int], threshold: int = 100):
if id_mask.mode != "RGB":
id_mask = id_mask.convert("RGB")
# Can directly just use the tuple but I'll leave this rgba_to_hex here
# in case anyone prefers using hex codes directly instead of the color picker
hex_color_str = self.rgba_to_hex(color)
rgb_color = numpy.array([int(hex_color_str[i : i + 2], 16) for i in (1, 3, 5)])
# Maybe there's a faster way to calculate this distance but I can't think of any right now.
color_distance = numpy.linalg.norm(id_mask - rgb_color, axis=-1)
# Create a mask based on the threshold and the distance calculated above
binary_mask = (color_distance < threshold).astype(numpy.uint8) * 255
# Convert the mask back to PIL
binary_mask_pil = Image.fromarray(binary_mask)
return binary_mask_pil
def invoke(self, context: InvocationContext) -> ImageOutput:
image = context.images.get_pil(self.image.image_name)
mask = self.id_to_mask(image, self.color.tuple(), self.threshold)
if self.invert:
mask = ImageOps.invert(mask)
image_dto = context.images.save(image=mask, image_category=ImageCategory.MASK)
return ImageOutput.build(image_dto)
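A standalone illustration of the colour-distance thresholding that `id_to_mask` performs, using toy values. Casting to a signed dtype before subtracting avoids uint8 wraparound; everything else mirrors the node's logic.

```python
# Toy demonstration of the ID-map masking: pixels within the threshold of the
# target colour become white (255), everything else black (0).
import numpy as np

id_map = np.array(
    [[[255, 0, 0], [0, 255, 0]],
     [[250, 5, 5], [0, 0, 255]]],
    dtype=np.uint8,
)
target = np.array([255, 0, 0])  # toy ID colour
distance = np.linalg.norm(id_map.astype(np.int32) - target, axis=-1)
mask = (distance < 100).astype(np.uint8) * 255
print(mask)
# [[255   0]
#  [255   0]]
```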

View File

@ -9,6 +9,7 @@ from PIL import Image, ImageOps
from invokeai.app.invocations.fields import ColorField, ImageField
from invokeai.app.invocations.primitives import ImageOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.app.util.download_with_progress import download_with_progress_bar
from invokeai.app.util.misc import SEED_MAX
from invokeai.backend.image_util.cv2_inpaint import cv2_inpaint
from invokeai.backend.image_util.lama import LaMA
@ -120,7 +121,7 @@ def tile_fill_missing(im: Image.Image, tile_size: int = 16, seed: Optional[int]
return si
@invocation("infill_rgba", title="Solid Color Infill", tags=["image", "inpaint"], category="inpaint", version="1.2.1")
@invocation("infill_rgba", title="Solid Color Infill", tags=["image", "inpaint"], category="inpaint", version="1.2.2")
class InfillColorInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Infills transparent areas of an image with a solid color"""
@ -143,7 +144,7 @@ class InfillColorInvocation(BaseInvocation, WithMetadata, WithBoard):
return ImageOutput.build(image_dto)
@invocation("infill_tile", title="Tile Infill", tags=["image", "inpaint"], category="inpaint", version="1.2.2")
@invocation("infill_tile", title="Tile Infill", tags=["image", "inpaint"], category="inpaint", version="1.2.3")
class InfillTileInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Infills transparent areas of an image with tiles of the image"""
@ -168,7 +169,7 @@ class InfillTileInvocation(BaseInvocation, WithMetadata, WithBoard):
@invocation(
"infill_patchmatch", title="PatchMatch Infill", tags=["image", "inpaint"], category="inpaint", version="1.2.1"
"infill_patchmatch", title="PatchMatch Infill", tags=["image", "inpaint"], category="inpaint", version="1.2.2"
)
class InfillPatchMatchInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Infills transparent areas of an image using the PatchMatch algorithm"""
@ -208,7 +209,7 @@ class InfillPatchMatchInvocation(BaseInvocation, WithMetadata, WithBoard):
return ImageOutput.build(image_dto)
@invocation("infill_lama", title="LaMa Infill", tags=["image", "inpaint"], category="inpaint", version="1.2.1")
@invocation("infill_lama", title="LaMa Infill", tags=["image", "inpaint"], category="inpaint", version="1.2.2")
class LaMaInfillInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Infills transparent areas of an image using the LaMa model"""
@ -217,6 +218,13 @@ class LaMaInfillInvocation(BaseInvocation, WithMetadata, WithBoard):
def invoke(self, context: InvocationContext) -> ImageOutput:
image = context.images.get_pil(self.image.image_name)
# Downloads the LaMa model if it doesn't already exist
download_with_progress_bar(
name="LaMa Inpainting Model",
url="https://github.com/Sanster/models/releases/download/add_big_lama/big-lama.pt",
dest_path=context.config.get().models_path / "core/misc/lama/lama.pt",
)
infilled = infill_lama(image.copy())
image_dto = context.images.save(image=infilled)
@ -224,7 +232,7 @@ class LaMaInfillInvocation(BaseInvocation, WithMetadata, WithBoard):
return ImageOutput.build(image_dto)
@invocation("infill_cv2", title="CV2 Infill", tags=["image", "inpaint"], category="inpaint", version="1.2.1")
@invocation("infill_cv2", title="CV2 Infill", tags=["image", "inpaint"], category="inpaint", version="1.2.2")
class CV2InfillInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Infills transparent areas of an image using OpenCV Inpainting"""

View File

@ -10,26 +10,18 @@ from invokeai.app.invocations.baseinvocation import (
invocation,
invocation_output,
)
from invokeai.app.invocations.fields import FieldDescriptions, Input, InputField, OutputField
from invokeai.app.invocations.fields import FieldDescriptions, Input, InputField, OutputField, UIType
from invokeai.app.invocations.model import ModelIdentifierField
from invokeai.app.invocations.primitives import ImageField
from invokeai.app.invocations.util import validate_begin_end_step, validate_weights
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.model_manager.config import BaseModelType, ModelType
# LS: Consider moving these two classes into model.py
class IPAdapterModelField(BaseModel):
key: str = Field(description="Key to the IP-Adapter model")
class CLIPVisionModelField(BaseModel):
key: str = Field(description="Key to the CLIP Vision image encoder model")
from invokeai.backend.model_manager.config import AnyModelConfig, BaseModelType, IPAdapterConfig, ModelType
class IPAdapterField(BaseModel):
image: Union[ImageField, List[ImageField]] = Field(description="The IP-Adapter image prompt(s).")
ip_adapter_model: IPAdapterModelField = Field(description="The IP-Adapter model to use.")
image_encoder_model: CLIPVisionModelField = Field(description="The name of the CLIP image encoder model.")
ip_adapter_model: ModelIdentifierField = Field(description="The IP-Adapter model to use.")
image_encoder_model: ModelIdentifierField = Field(description="The name of the CLIP image encoder model.")
weight: Union[float, List[float]] = Field(default=1, description="The weight given to the IP-Adapter")
begin_step_percent: float = Field(
default=0, ge=0, le=1, description="When the IP-Adapter is first applied (% of total steps)"
@ -56,14 +48,18 @@ class IPAdapterOutput(BaseInvocationOutput):
ip_adapter: IPAdapterField = OutputField(description=FieldDescriptions.ip_adapter, title="IP-Adapter")
@invocation("ip_adapter", title="IP-Adapter", tags=["ip_adapter", "control"], category="ip_adapter", version="1.1.2")
@invocation("ip_adapter", title="IP-Adapter", tags=["ip_adapter", "control"], category="ip_adapter", version="1.2.2")
class IPAdapterInvocation(BaseInvocation):
"""Collects IP-Adapter info to pass to other nodes."""
# Inputs
image: Union[ImageField, List[ImageField]] = InputField(description="The IP-Adapter image prompt(s).")
ip_adapter_model: IPAdapterModelField = InputField(
description="The IP-Adapter model.", title="IP-Adapter Model", input=Input.Direct, ui_order=-1
ip_adapter_model: ModelIdentifierField = InputField(
description="The IP-Adapter model.",
title="IP-Adapter Model",
input=Input.Direct,
ui_order=-1,
ui_type=UIType.IPAdapterModel,
)
weight: Union[float, List[float]] = InputField(
@ -90,20 +86,35 @@ class IPAdapterInvocation(BaseInvocation):
def invoke(self, context: InvocationContext) -> IPAdapterOutput:
# Lookup the CLIP Vision encoder that is intended to be used with the IP-Adapter model.
ip_adapter_info = context.models.get_config(self.ip_adapter_model.key)
assert isinstance(ip_adapter_info, IPAdapterConfig)
image_encoder_model_id = ip_adapter_info.image_encoder_model_id
image_encoder_model_name = image_encoder_model_id.split("/")[-1].strip()
image_encoder_models = context.models.search_by_attrs(
name=image_encoder_model_name, base=BaseModelType.Any, type=ModelType.CLIPVision
)
assert len(image_encoder_models) == 1
image_encoder_model = CLIPVisionModelField(key=image_encoder_models[0].key)
image_encoder_model = self._get_image_encoder(context, image_encoder_model_name)
return IPAdapterOutput(
ip_adapter=IPAdapterField(
image=self.image,
ip_adapter_model=self.ip_adapter_model,
image_encoder_model=image_encoder_model,
image_encoder_model=ModelIdentifierField.from_config(image_encoder_model),
weight=self.weight,
begin_step_percent=self.begin_step_percent,
end_step_percent=self.end_step_percent,
),
)
def _get_image_encoder(self, context: InvocationContext, image_encoder_model_name: str) -> AnyModelConfig:
found = False
while not found:
image_encoder_models = context.models.search_by_attrs(
name=image_encoder_model_name, base=BaseModelType.Any, type=ModelType.CLIPVision
)
found = len(image_encoder_models) > 0
if not found:
context.logger.warning(
f"The image encoder required by this IP Adapter ({image_encoder_model_name}) is not installed."
)
context.logger.warning("Downloading and installing now. This may take a while.")
installer = context._services.model_manager.install
job = installer.heuristic_import(f"InvokeAI/{image_encoder_model_name}")
installer.wait_for_job(job, timeout=600) # wait up to 10 minutes - then raise a TimeoutException
assert len(image_encoder_models) == 1
return image_encoder_models[0]
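The encoder lookup hinges on deriving a model name from the image encoder id stored in the IP-Adapter config (typically a Hugging Face-style repo id), then searching installed CLIP Vision models by that name. A small worked example of that derivation; the repo id below is illustrative, not read from a real config:

```python
# Illustrative value only; the real id comes from ip_adapter_info.image_encoder_model_id.
image_encoder_model_id = "InvokeAI/ip_adapter_sd_image_encoder"
image_encoder_model_name = image_encoder_model_id.split("/")[-1].strip()
assert image_encoder_model_name == "ip_adapter_sd_image_encoder"
# _get_image_encoder() searches installed CLIPVision models by this name and,
# if none is found, installs f"InvokeAI/{image_encoder_model_name}" before retrying.
```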

View File

@ -26,6 +26,7 @@ from diffusers.schedulers import SchedulerMixin as Scheduler
from PIL import Image, ImageFilter
from pydantic import field_validator
from torchvision.transforms.functional import resize as tv_resize
from transformers import CLIPVisionModelWithProjection
from invokeai.app.invocations.constants import LATENT_SCALE_FACTOR, SCHEDULER_NAME_VALUES
from invokeai.app.invocations.fields import (
@ -65,7 +66,6 @@ from ...backend.stable_diffusion.diffusers_pipeline import (
T2IAdapterData,
image_resized_to_grid_as_tensor,
)
from ...backend.stable_diffusion.diffusion.shared_invokeai_diffusion import PostprocessingSettings
from ...backend.stable_diffusion.schedulers import SCHEDULER_MAP
from ...backend.util.devices import choose_precision, choose_torch_device
from .baseinvocation import (
@ -75,7 +75,7 @@ from .baseinvocation import (
invocation_output,
)
from .controlnet_image_processors import ControlField
from .model import ModelInfo, UNetField, VaeField
from .model import ModelIdentifierField, UNetField, VAEField
if choose_torch_device() == torch.device("mps"):
from torch import mps
@ -113,12 +113,12 @@ class SchedulerInvocation(BaseInvocation):
title="Create Denoise Mask",
tags=["mask", "denoise"],
category="latents",
version="1.0.1",
version="1.0.2",
)
class CreateDenoiseMaskInvocation(BaseInvocation):
"""Creates mask for denoising model run."""
vae: VaeField = InputField(description=FieldDescriptions.vae, input=Input.Connection, ui_order=0)
vae: VAEField = InputField(description=FieldDescriptions.vae, input=Input.Connection, ui_order=0)
image: Optional[ImageField] = InputField(default=None, description="Image which will be masked", ui_order=1)
mask: ImageField = InputField(description="The mask to use when pasting", ui_order=2)
tiled: bool = InputField(default=False, description=FieldDescriptions.tiled, ui_order=3)
@ -153,7 +153,7 @@ class CreateDenoiseMaskInvocation(BaseInvocation):
)
if image_tensor is not None:
vae_info = context.models.load(**self.vae.vae.model_dump())
vae_info = context.models.load(self.vae.vae)
img_mask = tv_resize(mask, image_tensor.shape[-2:], T.InterpolationMode.BILINEAR, antialias=False)
masked_image = image_tensor * torch.where(img_mask < 0.5, 0.0, 1.0)
@ -244,12 +244,12 @@ class CreateGradientMaskInvocation(BaseInvocation):
def get_scheduler(
context: InvocationContext,
scheduler_info: ModelInfo,
scheduler_info: ModelIdentifierField,
scheduler_name: str,
seed: int,
) -> Scheduler:
scheduler_class, scheduler_extra_config = SCHEDULER_MAP.get(scheduler_name, SCHEDULER_MAP["ddim"])
orig_scheduler_info = context.models.load(**scheduler_info.model_dump())
orig_scheduler_info = context.models.load(scheduler_info)
with orig_scheduler_info as orig_scheduler:
scheduler_config = orig_scheduler.config
@ -279,7 +279,7 @@ def get_scheduler(
title="Denoise Latents",
tags=["latents", "denoise", "txt2img", "t2i", "t2l", "img2img", "i2i", "l2l"],
category="latents",
version="1.5.2",
version="1.5.3",
)
class DenoiseLatentsInvocation(BaseInvocation):
"""Denoises noisy latents to decodable images"""
@ -383,12 +383,6 @@ class DenoiseLatentsInvocation(BaseInvocation):
text_embeddings=c,
guidance_scale=self.cfg_scale,
guidance_rescale_multiplier=self.cfg_rescale_multiplier,
postprocessing_settings=PostprocessingSettings(
threshold=0.0, # threshold,
warmup=0.2, # warmup,
h_symmetry_time_pct=None, # h_symmetry_time_pct,
v_symmetry_time_pct=None, # v_symmetry_time_pct,
),
)
conditioning_data = conditioning_data.add_scheduler_args_if_applicable( # FIXME
@ -461,7 +455,7 @@ class DenoiseLatentsInvocation(BaseInvocation):
# and if weight is None, populate with default 1.0?
controlnet_data = []
for control_info in control_list:
control_model = exit_stack.enter_context(context.models.load(key=control_info.control_model.key))
control_model = exit_stack.enter_context(context.models.load(control_info.control_model))
# control_models.append(control_model)
control_image_field = control_info.image
@ -523,11 +517,10 @@ class DenoiseLatentsInvocation(BaseInvocation):
conditioning_data.ip_adapter_conditioning = []
for single_ip_adapter in ip_adapter:
ip_adapter_model: Union[IPAdapter, IPAdapterPlus] = exit_stack.enter_context(
context.models.load(key=single_ip_adapter.ip_adapter_model.key)
context.models.load(single_ip_adapter.ip_adapter_model)
)
image_encoder_model_info = context.models.load(key=single_ip_adapter.image_encoder_model.key)
image_encoder_model_info = context.models.load(single_ip_adapter.image_encoder_model)
# `single_ip_adapter.image` could be a list or a single ImageField. Normalize to a list here.
single_ipa_image_fields = single_ip_adapter.image
if not isinstance(single_ipa_image_fields, list):
@ -538,6 +531,7 @@ class DenoiseLatentsInvocation(BaseInvocation):
# TODO(ryand): With some effort, the step of running the CLIP Vision encoder could be done before any other
# models are needed in memory. This would help to reduce peak memory utilization in low-memory environments.
with image_encoder_model_info as image_encoder_model:
assert isinstance(image_encoder_model, CLIPVisionModelWithProjection)
# Get image embeddings from CLIP and ImageProjModel.
image_prompt_embeds, uncond_image_prompt_embeds = ip_adapter_model.get_image_embeds(
single_ipa_images, image_encoder_model
@ -577,8 +571,8 @@ class DenoiseLatentsInvocation(BaseInvocation):
t2i_adapter_data = []
for t2i_adapter_field in t2i_adapter:
t2i_adapter_model_config = context.models.get_config(key=t2i_adapter_field.t2i_adapter_model.key)
t2i_adapter_loaded_model = context.models.load(key=t2i_adapter_field.t2i_adapter_model.key)
t2i_adapter_model_config = context.models.get_config(t2i_adapter_field.t2i_adapter_model.key)
t2i_adapter_loaded_model = context.models.load(t2i_adapter_field.t2i_adapter_model)
image = context.images.get_pil(t2i_adapter_field.image.image_name)
# The max_unet_downscale is the maximum amount that the UNet model downscales the latent image internally.
@ -683,7 +677,7 @@ class DenoiseLatentsInvocation(BaseInvocation):
if self.denoise_mask.masked_latents_name is not None:
masked_latents = context.tensors.load(self.denoise_mask.masked_latents_name)
else:
masked_latents = None
masked_latents = torch.where(mask < 0.5, 0.0, latents)
return 1 - mask, masked_latents, self.denoise_mask.gradient
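Where a graph does not supply precomputed masked latents, the node now derives them on the fly: latent values are zeroed wherever the mask is below 0.5 and left untouched elsewhere. A tiny hedged sketch of that masking step, with made-up tensor shapes:

```python
import torch

latents = torch.randn(1, 4, 8, 8)   # hypothetical latent batch
mask = torch.rand(1, 1, 8, 8)       # hypothetical soft mask in [0, 1]

# Zero out masked regions (mask < 0.5); the single-channel mask broadcasts
# across the 4 latent channels.
masked_latents = torch.where(mask < 0.5, 0.0, latents)
```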
@ -731,12 +725,13 @@ class DenoiseLatentsInvocation(BaseInvocation):
def _lora_loader() -> Iterator[Tuple[LoRAModelRaw, float]]:
for lora in self.unet.loras:
lora_info = context.models.load(**lora.model_dump(exclude={"weight"}))
lora_info = context.models.load(lora.lora)
assert isinstance(lora_info.model, LoRAModelRaw)
yield (lora_info.model, lora.weight)
del lora_info
return
unet_info = context.models.load(**self.unet.unet.model_dump())
unet_info = context.models.load(self.unet.unet)
assert isinstance(unet_info.model, UNet2DConditionModel)
with (
ExitStack() as exit_stack,
@ -821,7 +816,7 @@ class DenoiseLatentsInvocation(BaseInvocation):
title="Latents to Image",
tags=["latents", "image", "vae", "l2i"],
category="latents",
version="1.2.1",
version="1.2.2",
)
class LatentsToImageInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Generates an image from latents."""
@ -830,7 +825,7 @@ class LatentsToImageInvocation(BaseInvocation, WithMetadata, WithBoard):
description=FieldDescriptions.latents,
input=Input.Connection,
)
vae: VaeField = InputField(
vae: VAEField = InputField(
description=FieldDescriptions.vae,
input=Input.Connection,
)
@ -841,15 +836,15 @@ class LatentsToImageInvocation(BaseInvocation, WithMetadata, WithBoard):
def invoke(self, context: InvocationContext) -> ImageOutput:
latents = context.tensors.load(self.latents.latents_name)
vae_info = context.models.load(**self.vae.vae.model_dump())
vae_info = context.models.load(self.vae.vae)
assert isinstance(vae_info.model, (UNet2DConditionModel, AutoencoderKL, AutoencoderTiny))
with set_seamless(vae_info.model, self.vae.seamless_axes), vae_info as vae:
assert isinstance(vae, torch.nn.Module)
latents = latents.to(vae.device)
if self.fp32:
vae.to(dtype=torch.float32)
use_torch_2_0_or_xformers = isinstance(
use_torch_2_0_or_xformers = hasattr(vae.decoder, "mid_block") and isinstance(
vae.decoder.mid_block.attentions[0].processor,
(
AttnProcessor2_0,
@ -871,7 +866,7 @@ class LatentsToImageInvocation(BaseInvocation, WithMetadata, WithBoard):
vae.to(dtype=torch.float16)
latents = latents.half()
if self.tiled or context.config.get().tiled_decode:
if self.tiled or context.config.get().force_tiled_decode:
vae.enable_tiling()
else:
vae.disable_tiling()
@ -908,7 +903,7 @@ LATENTS_INTERPOLATION_MODE = Literal["nearest", "linear", "bilinear", "bicubic",
title="Resize Latents",
tags=["latents", "resize"],
category="latents",
version="1.0.1",
version="1.0.2",
)
class ResizeLatentsInvocation(BaseInvocation):
"""Resizes latents to explicit width/height (in pixels). Provided dimensions are floor-divided by 8."""
@ -958,7 +953,7 @@ class ResizeLatentsInvocation(BaseInvocation):
title="Scale Latents",
tags=["latents", "resize"],
category="latents",
version="1.0.1",
version="1.0.2",
)
class ScaleLatentsInvocation(BaseInvocation):
"""Scales latents by a given factor."""
@ -1000,7 +995,7 @@ class ScaleLatentsInvocation(BaseInvocation):
title="Image to Latents",
tags=["latents", "image", "vae", "i2l"],
category="latents",
version="1.0.1",
version="1.0.2",
)
class ImageToLatentsInvocation(BaseInvocation):
"""Encodes an image into latents."""
@ -1008,7 +1003,7 @@ class ImageToLatentsInvocation(BaseInvocation):
image: ImageField = InputField(
description="The image to encode",
)
vae: VaeField = InputField(
vae: VAEField = InputField(
description=FieldDescriptions.vae,
input=Input.Connection,
)
@ -1023,7 +1018,7 @@ class ImageToLatentsInvocation(BaseInvocation):
if upcast:
vae.to(dtype=torch.float32)
use_torch_2_0_or_xformers = isinstance(
use_torch_2_0_or_xformers = hasattr(vae.decoder, "mid_block") and isinstance(
vae.decoder.mid_block.attentions[0].processor,
(
AttnProcessor2_0,
@ -1064,7 +1059,7 @@ class ImageToLatentsInvocation(BaseInvocation):
def invoke(self, context: InvocationContext) -> LatentsOutput:
image = context.images.get_pil(self.image.image_name)
vae_info = context.models.load(**self.vae.vae.model_dump())
vae_info = context.models.load(self.vae.vae)
image_tensor = image_resized_to_grid_as_tensor(image.convert("RGB"))
if image_tensor.dim() == 3:
@ -1099,7 +1094,7 @@ class ImageToLatentsInvocation(BaseInvocation):
title="Blend Latents",
tags=["latents", "blend"],
category="latents",
version="1.0.1",
version="1.0.2",
)
class BlendLatentsInvocation(BaseInvocation):
"""Blend two latents using a given alpha. Latents must have same size."""
@ -1190,7 +1185,7 @@ class BlendLatentsInvocation(BaseInvocation):
title="Crop Latents",
tags=["latents", "crop"],
category="latents",
version="1.0.1",
version="1.0.2",
)
# TODO(ryand): Named `CropLatentsCoreInvocation` to prevent a conflict with custom node `CropLatentsInvocation`.
# Currently, if the class names conflict then 'GET /openapi.json' fails.
@ -1251,7 +1246,7 @@ class IdealSizeOutput(BaseInvocationOutput):
"ideal_size",
title="Ideal Size",
tags=["latents", "math", "ideal_size"],
version="1.0.2",
version="1.0.3",
)
class IdealSizeInvocation(BaseInvocation):
"""Calculates the ideal size for generation to avoid duplication"""

View File

@ -12,7 +12,7 @@ from invokeai.app.services.shared.invocation_context import InvocationContext
from .baseinvocation import BaseInvocation, invocation
@invocation("add", title="Add Integers", tags=["math", "add"], category="math", version="1.0.0")
@invocation("add", title="Add Integers", tags=["math", "add"], category="math", version="1.0.1")
class AddInvocation(BaseInvocation):
"""Adds two numbers"""
@ -23,7 +23,7 @@ class AddInvocation(BaseInvocation):
return IntegerOutput(value=self.a + self.b)
@invocation("sub", title="Subtract Integers", tags=["math", "subtract"], category="math", version="1.0.0")
@invocation("sub", title="Subtract Integers", tags=["math", "subtract"], category="math", version="1.0.1")
class SubtractInvocation(BaseInvocation):
"""Subtracts two numbers"""
@ -34,7 +34,7 @@ class SubtractInvocation(BaseInvocation):
return IntegerOutput(value=self.a - self.b)
@invocation("mul", title="Multiply Integers", tags=["math", "multiply"], category="math", version="1.0.0")
@invocation("mul", title="Multiply Integers", tags=["math", "multiply"], category="math", version="1.0.1")
class MultiplyInvocation(BaseInvocation):
"""Multiplies two numbers"""
@ -45,7 +45,7 @@ class MultiplyInvocation(BaseInvocation):
return IntegerOutput(value=self.a * self.b)
@invocation("div", title="Divide Integers", tags=["math", "divide"], category="math", version="1.0.0")
@invocation("div", title="Divide Integers", tags=["math", "divide"], category="math", version="1.0.1")
class DivideInvocation(BaseInvocation):
"""Divides two numbers"""
@ -61,7 +61,7 @@ class DivideInvocation(BaseInvocation):
title="Random Integer",
tags=["math", "random"],
category="math",
version="1.0.0",
version="1.0.1",
use_cache=False,
)
class RandomIntInvocation(BaseInvocation):
@ -100,7 +100,7 @@ class RandomFloatInvocation(BaseInvocation):
title="Float To Integer",
tags=["math", "round", "integer", "float", "convert"],
category="math",
version="1.0.0",
version="1.0.1",
)
class FloatToIntegerInvocation(BaseInvocation):
"""Rounds a float number to (a multiple of) an integer."""
@ -122,7 +122,7 @@ class FloatToIntegerInvocation(BaseInvocation):
return IntegerOutput(value=int(self.value / self.multiple) * self.multiple)
@invocation("round_float", title="Round Float", tags=["math", "round"], category="math", version="1.0.0")
@invocation("round_float", title="Round Float", tags=["math", "round"], category="math", version="1.0.1")
class RoundInvocation(BaseInvocation):
"""Rounds a float to a specified number of decimal places."""
@ -176,7 +176,7 @@ INTEGER_OPERATIONS_LABELS = {
"max",
],
category="math",
version="1.0.0",
version="1.0.1",
)
class IntegerMathInvocation(BaseInvocation):
"""Performs integer math."""
@ -250,7 +250,7 @@ FLOAT_OPERATIONS_LABELS = {
title="Float Math",
tags=["math", "float", "add", "subtract", "multiply", "divide", "power", "root", "absolute value", "min", "max"],
category="math",
version="1.0.0",
version="1.0.1",
)
class FloatMathInvocation(BaseInvocation):
"""Performs floating point math."""

View File

@ -8,7 +8,10 @@ from invokeai.app.invocations.baseinvocation import (
invocation,
invocation_output,
)
from invokeai.app.invocations.controlnet_image_processors import ControlField
from invokeai.app.invocations.controlnet_image_processors import (
CONTROLNET_MODE_VALUES,
CONTROLNET_RESIZE_VALUES,
)
from invokeai.app.invocations.fields import (
FieldDescriptions,
ImageField,
@ -17,9 +20,7 @@ from invokeai.app.invocations.fields import (
OutputField,
UIType,
)
from invokeai.app.invocations.ip_adapter import IPAdapterModelField
from invokeai.app.invocations.model import LoRAModelField, MainModelField, VAEModelField
from invokeai.app.invocations.t2i_adapter import T2IAdapterField
from invokeai.app.invocations.model import ModelIdentifierField
from invokeai.app.services.shared.invocation_context import InvocationContext
from ...version import __version__
@ -33,7 +34,7 @@ class MetadataItemField(BaseModel):
class LoRAMetadataField(BaseModel):
"""LoRA Metadata Field"""
model: LoRAModelField = Field(description=FieldDescriptions.lora_model)
model: ModelIdentifierField = Field(description=FieldDescriptions.lora_model)
weight: float = Field(description=FieldDescriptions.lora_weight)
@ -41,16 +42,41 @@ class IPAdapterMetadataField(BaseModel):
"""IP Adapter Field, minus the CLIP Vision Encoder model"""
image: ImageField = Field(description="The IP-Adapter image prompt.")
ip_adapter_model: IPAdapterModelField = Field(
description="The IP-Adapter model.",
)
weight: Union[float, list[float]] = Field(
description="The weight given to the IP-Adapter",
)
ip_adapter_model: ModelIdentifierField = Field(description="The IP-Adapter model.")
weight: Union[float, list[float]] = Field(description="The weight given to the IP-Adapter")
begin_step_percent: float = Field(description="When the IP-Adapter is first applied (% of total steps)")
end_step_percent: float = Field(description="When the IP-Adapter is last applied (% of total steps)")
class T2IAdapterMetadataField(BaseModel):
image: ImageField = Field(description="The control image.")
processed_image: Optional[ImageField] = Field(default=None, description="The control image, after processing.")
t2i_adapter_model: ModelIdentifierField = Field(description="The T2I-Adapter model to use.")
weight: Union[float, list[float]] = Field(default=1, description="The weight given to the T2I-Adapter")
begin_step_percent: float = Field(
default=0, ge=0, le=1, description="When the T2I-Adapter is first applied (% of total steps)"
)
end_step_percent: float = Field(
default=1, ge=0, le=1, description="When the T2I-Adapter is last applied (% of total steps)"
)
resize_mode: CONTROLNET_RESIZE_VALUES = Field(default="just_resize", description="The resize mode to use")
class ControlNetMetadataField(BaseModel):
image: ImageField = Field(description="The control image")
processed_image: Optional[ImageField] = Field(default=None, description="The control image, after processing.")
control_model: ModelIdentifierField = Field(description="The ControlNet model to use")
control_weight: Union[float, list[float]] = Field(default=1, description="The weight given to the ControlNet")
begin_step_percent: float = Field(
default=0, ge=0, le=1, description="When the ControlNet is first applied (% of total steps)"
)
end_step_percent: float = Field(
default=1, ge=0, le=1, description="When the ControlNet is last applied (% of total steps)"
)
control_mode: CONTROLNET_MODE_VALUES = Field(default="balanced", description="The control mode to use")
resize_mode: CONTROLNET_RESIZE_VALUES = Field(default="just_resize", description="The resize mode to use")
@invocation_output("metadata_item_output")
class MetadataItemOutput(BaseInvocationOutput):
"""Metadata Item Output"""
@ -58,7 +84,7 @@ class MetadataItemOutput(BaseInvocationOutput):
item: MetadataItemField = OutputField(description="Metadata Item")
@invocation("metadata_item", title="Metadata Item", tags=["metadata"], category="metadata", version="1.0.0")
@invocation("metadata_item", title="Metadata Item", tags=["metadata"], category="metadata", version="1.0.1")
class MetadataItemInvocation(BaseInvocation):
"""Used to create an arbitrary metadata item. Provide "label" and make a connection to "value" to store that data as the value."""
@ -74,7 +100,7 @@ class MetadataOutput(BaseInvocationOutput):
metadata: MetadataField = OutputField(description="Metadata Dict")
@invocation("metadata", title="Metadata", tags=["metadata"], category="metadata", version="1.0.0")
@invocation("metadata", title="Metadata", tags=["metadata"], category="metadata", version="1.0.1")
class MetadataInvocation(BaseInvocation):
"""Takes a MetadataItem or collection of MetadataItems and outputs a MetadataDict."""
@ -95,7 +121,7 @@ class MetadataInvocation(BaseInvocation):
return MetadataOutput(metadata=MetadataField.model_validate(data))
@invocation("merge_metadata", title="Metadata Merge", tags=["metadata"], category="metadata", version="1.0.0")
@invocation("merge_metadata", title="Metadata Merge", tags=["metadata"], category="metadata", version="1.0.1")
class MergeMetadataInvocation(BaseInvocation):
"""Merged a collection of MetadataDict into a single MetadataDict."""
@ -114,7 +140,7 @@ GENERATION_MODES = Literal[
]
@invocation("core_metadata", title="Core Metadata", tags=["metadata"], category="metadata", version="1.1.1")
@invocation("core_metadata", title="Core Metadata", tags=["metadata"], category="metadata", version="2.0.0")
class CoreMetadataInvocation(BaseInvocation):
"""Collects core generation metadata into a MetadataField"""
@ -140,14 +166,14 @@ class CoreMetadataInvocation(BaseInvocation):
default=None,
description="The number of skipped CLIP layers",
)
model: Optional[MainModelField] = InputField(default=None, description="The main model used for inference")
controlnets: Optional[list[ControlField]] = InputField(
model: Optional[ModelIdentifierField] = InputField(default=None, description="The main model used for inference")
controlnets: Optional[list[ControlNetMetadataField]] = InputField(
default=None, description="The ControlNets used for inference"
)
ipAdapters: Optional[list[IPAdapterMetadataField]] = InputField(
default=None, description="The IP Adapters used for inference"
)
t2iAdapters: Optional[list[T2IAdapterField]] = InputField(
t2iAdapters: Optional[list[T2IAdapterMetadataField]] = InputField(
default=None, description="The IP Adapters used for inference"
)
loras: Optional[list[LoRAMetadataField]] = InputField(default=None, description="The LoRAs used for inference")
@ -159,7 +185,7 @@ class CoreMetadataInvocation(BaseInvocation):
default=None,
description="The name of the initial image",
)
vae: Optional[VAEModelField] = InputField(
vae: Optional[ModelIdentifierField] = InputField(
default=None,
description="The VAE used for decoding, if the main model's default was not used",
)
@ -190,7 +216,7 @@ class CoreMetadataInvocation(BaseInvocation):
)
# SDXL Refiner
refiner_model: Optional[MainModelField] = InputField(
refiner_model: Optional[ModelIdentifierField] = InputField(
default=None,
description="The SDXL Refiner model used",
)
@ -222,10 +248,9 @@ class CoreMetadataInvocation(BaseInvocation):
def invoke(self, context: InvocationContext) -> MetadataOutput:
"""Collects and outputs a CoreMetadata object"""
return MetadataOutput(
metadata=MetadataField.model_validate(
self.model_dump(exclude_none=True, exclude={"id", "type", "is_intermediate", "use_cache"})
)
)
as_dict = self.model_dump(exclude_none=True, exclude={"id", "type", "is_intermediate", "use_cache"})
as_dict["app_version"] = __version__
return MetadataOutput(metadata=MetadataField.model_validate(as_dict))
model_config = ConfigDict(extra="allow")

View File

@ -3,11 +3,11 @@ from typing import List, Optional
from pydantic import BaseModel, Field
from invokeai.app.invocations.fields import FieldDescriptions, Input, InputField, OutputField
from invokeai.app.invocations.fields import FieldDescriptions, Input, InputField, OutputField, UIType
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.app.shared.models import FreeUConfig
from invokeai.backend.model_manager.config import AnyModelConfig, BaseModelType, ModelType, SubModelType
from ...backend.model_manager import SubModelType
from .baseinvocation import (
BaseInvocation,
BaseInvocationOutput,
@ -16,33 +16,52 @@ from .baseinvocation import (
)
class ModelInfo(BaseModel):
key: str = Field(description="Key of model as returned by ModelRecordServiceBase.get_model()")
submodel_type: Optional[SubModelType] = Field(default=None, description="Info to load submodel")
class ModelIdentifierField(BaseModel):
key: str = Field(description="The model's unique key")
hash: str = Field(description="The model's BLAKE3 hash")
name: str = Field(description="The model's name")
base: BaseModelType = Field(description="The model's base model type")
type: ModelType = Field(description="The model's type")
submodel_type: Optional[SubModelType] = Field(
description="The submodel to load, if this is a main model", default=None
)
@classmethod
def from_config(
cls, config: "AnyModelConfig", submodel_type: Optional[SubModelType] = None
) -> "ModelIdentifierField":
return cls(
key=config.key,
hash=config.hash,
name=config.name,
base=config.base,
type=config.type,
submodel_type=submodel_type,
)
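ModelIdentifierField replaces the assorted per-type key wrappers: one identifier now carries the key, hash, name, base, type, and optional submodel. A hedged sketch of how the rest of this changeset uses it; `main_model_config` is a hypothetical AnyModelConfig obtained from the model record store and is not defined in this diff:

```python
# Build the identifier once from an installed model's config record.
main_model = ModelIdentifierField.from_config(main_model_config)

# Loader nodes derive per-submodel identifiers by stamping a submodel type
# onto a copy, exactly as the loader invocations below do.
unet_id = main_model.model_copy(update={"submodel_type": SubModelType.UNet})
vae_id = main_model.model_copy(update={"submodel_type": SubModelType.VAE})
```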
class LoraInfo(ModelInfo):
weight: float = Field(description="Lora's weight which to use when apply to model")
class LoRAField(BaseModel):
lora: ModelIdentifierField = Field(description="Info to load lora model")
weight: float = Field(description="Weight to apply to lora model")
class UNetField(BaseModel):
unet: ModelInfo = Field(description="Info to load unet submodel")
scheduler: ModelInfo = Field(description="Info to load scheduler submodel")
loras: List[LoraInfo] = Field(description="Loras to apply on model loading")
unet: ModelIdentifierField = Field(description="Info to load unet submodel")
scheduler: ModelIdentifierField = Field(description="Info to load scheduler submodel")
loras: List[LoRAField] = Field(description="LoRAs to apply on model loading")
seamless_axes: List[str] = Field(default_factory=list, description='Axes("x" and "y") to which apply seamless')
freeu_config: Optional[FreeUConfig] = Field(default=None, description="FreeU configuration")
class ClipField(BaseModel):
tokenizer: ModelInfo = Field(description="Info to load tokenizer submodel")
text_encoder: ModelInfo = Field(description="Info to load text_encoder submodel")
class CLIPField(BaseModel):
tokenizer: ModelIdentifierField = Field(description="Info to load tokenizer submodel")
text_encoder: ModelIdentifierField = Field(description="Info to load text_encoder submodel")
skipped_layers: int = Field(description="Number of skipped layers in text_encoder")
loras: List[LoraInfo] = Field(description="Loras to apply on model loading")
loras: List[LoRAField] = Field(description="LoRAs to apply on model loading")
class VaeField(BaseModel):
# TODO: better naming?
vae: ModelInfo = Field(description="Info to load vae submodel")
class VAEField(BaseModel):
vae: ModelIdentifierField = Field(description="Info to load vae submodel")
seamless_axes: List[str] = Field(default_factory=list, description='Axes("x" and "y") to which apply seamless')
@ -57,14 +76,14 @@ class UNetOutput(BaseInvocationOutput):
class VAEOutput(BaseInvocationOutput):
"""Base class for invocations that output a VAE field"""
vae: VaeField = OutputField(description=FieldDescriptions.vae, title="VAE")
vae: VAEField = OutputField(description=FieldDescriptions.vae, title="VAE")
@invocation_output("clip_output")
class CLIPOutput(BaseInvocationOutput):
"""Base class for invocations that output a CLIP field"""
clip: ClipField = OutputField(description=FieldDescriptions.clip, title="CLIP")
clip: CLIPField = OutputField(description=FieldDescriptions.clip, title="CLIP")
@invocation_output("model_loader_output")
@ -74,84 +93,54 @@ class ModelLoaderOutput(UNetOutput, CLIPOutput, VAEOutput):
pass
class MainModelField(BaseModel):
"""Main model field"""
key: str = Field(description="Model key")
class LoRAModelField(BaseModel):
"""LoRA model field"""
key: str = Field(description="LoRA model key")
@invocation(
"main_model_loader",
title="Main Model",
tags=["model"],
category="model",
version="1.0.1",
version="1.0.2",
)
class MainModelLoaderInvocation(BaseInvocation):
"""Loads a main model, outputting its submodels."""
model: MainModelField = InputField(description=FieldDescriptions.main_model, input=Input.Direct)
model: ModelIdentifierField = InputField(
description=FieldDescriptions.main_model, input=Input.Direct, ui_type=UIType.MainModel
)
# TODO: precision?
def invoke(self, context: InvocationContext) -> ModelLoaderOutput:
key = self.model.key
# TODO: not found exceptions
if not context.models.exists(key):
raise Exception(f"Unknown model {key}")
if not context.models.exists(self.model.key):
raise Exception(f"Unknown model {self.model.key}")
unet = self.model.model_copy(update={"submodel_type": SubModelType.UNet})
scheduler = self.model.model_copy(update={"submodel_type": SubModelType.Scheduler})
tokenizer = self.model.model_copy(update={"submodel_type": SubModelType.Tokenizer})
text_encoder = self.model.model_copy(update={"submodel_type": SubModelType.TextEncoder})
vae = self.model.model_copy(update={"submodel_type": SubModelType.VAE})
return ModelLoaderOutput(
unet=UNetField(
unet=ModelInfo(
key=key,
submodel_type=SubModelType.UNet,
),
scheduler=ModelInfo(
key=key,
submodel_type=SubModelType.Scheduler,
),
loras=[],
),
clip=ClipField(
tokenizer=ModelInfo(
key=key,
submodel_type=SubModelType.Tokenizer,
),
text_encoder=ModelInfo(
key=key,
submodel_type=SubModelType.TextEncoder,
),
loras=[],
skipped_layers=0,
),
vae=VaeField(
vae=ModelInfo(
key=key,
submodel_type=SubModelType.VAE,
),
),
unet=UNetField(unet=unet, scheduler=scheduler, loras=[]),
clip=CLIPField(tokenizer=tokenizer, text_encoder=text_encoder, loras=[], skipped_layers=0),
vae=VAEField(vae=vae),
)
@invocation_output("lora_loader_output")
class LoraLoaderOutput(BaseInvocationOutput):
class LoRALoaderOutput(BaseInvocationOutput):
"""Model loader output"""
unet: Optional[UNetField] = OutputField(default=None, description=FieldDescriptions.unet, title="UNet")
clip: Optional[ClipField] = OutputField(default=None, description=FieldDescriptions.clip, title="CLIP")
clip: Optional[CLIPField] = OutputField(default=None, description=FieldDescriptions.clip, title="CLIP")
@invocation("lora_loader", title="LoRA", tags=["model"], category="model", version="1.0.1")
class LoraLoaderInvocation(BaseInvocation):
@invocation("lora_loader", title="LoRA", tags=["model"], category="model", version="1.0.2")
class LoRALoaderInvocation(BaseInvocation):
"""Apply selected lora to unet and text_encoder."""
lora: LoRAModelField = InputField(description=FieldDescriptions.lora_model, input=Input.Direct, title="LoRA")
lora: ModelIdentifierField = InputField(
description=FieldDescriptions.lora_model, input=Input.Direct, title="LoRA", ui_type=UIType.LoRAModel
)
weight: float = InputField(default=0.75, description=FieldDescriptions.lora_weight)
unet: Optional[UNetField] = InputField(
default=None,
@ -159,46 +148,41 @@ class LoraLoaderInvocation(BaseInvocation):
input=Input.Connection,
title="UNet",
)
clip: Optional[ClipField] = InputField(
clip: Optional[CLIPField] = InputField(
default=None,
description=FieldDescriptions.clip,
input=Input.Connection,
title="CLIP",
)
def invoke(self, context: InvocationContext) -> LoraLoaderOutput:
if self.lora is None:
raise Exception("No LoRA provided")
def invoke(self, context: InvocationContext) -> LoRALoaderOutput:
lora_key = self.lora.key
if not context.models.exists(lora_key):
raise Exception(f"Unkown lora: {lora_key}!")
if self.unet is not None and any(lora.key == lora_key for lora in self.unet.loras):
raise Exception(f'Lora "{lora_key}" already applied to unet')
if self.unet is not None and any(lora.lora.key == lora_key for lora in self.unet.loras):
raise Exception(f'LoRA "{lora_key}" already applied to unet')
if self.clip is not None and any(lora.key == lora_key for lora in self.clip.loras):
raise Exception(f'Lora "{lora_key}" already applied to clip')
if self.clip is not None and any(lora.lora.key == lora_key for lora in self.clip.loras):
raise Exception(f'LoRA "{lora_key}" already applied to clip')
output = LoraLoaderOutput()
output = LoRALoaderOutput()
if self.unet is not None:
output.unet = copy.deepcopy(self.unet)
output.unet = self.unet.model_copy(deep=True)
output.unet.loras.append(
LoraInfo(
key=lora_key,
submodel_type=None,
LoRAField(
lora=self.lora,
weight=self.weight,
)
)
if self.clip is not None:
output.clip = copy.deepcopy(self.clip)
output.clip = self.clip.model_copy(deep=True)
output.clip.loras.append(
LoraInfo(
key=lora_key,
submodel_type=None,
LoRAField(
lora=self.lora,
weight=self.weight,
)
)
@ -207,12 +191,12 @@ class LoraLoaderInvocation(BaseInvocation):
@invocation_output("sdxl_lora_loader_output")
class SDXLLoraLoaderOutput(BaseInvocationOutput):
class SDXLLoRALoaderOutput(BaseInvocationOutput):
"""SDXL LoRA Loader Output"""
unet: Optional[UNetField] = OutputField(default=None, description=FieldDescriptions.unet, title="UNet")
clip: Optional[ClipField] = OutputField(default=None, description=FieldDescriptions.clip, title="CLIP 1")
clip2: Optional[ClipField] = OutputField(default=None, description=FieldDescriptions.clip, title="CLIP 2")
clip: Optional[CLIPField] = OutputField(default=None, description=FieldDescriptions.clip, title="CLIP 1")
clip2: Optional[CLIPField] = OutputField(default=None, description=FieldDescriptions.clip, title="CLIP 2")
@invocation(
@ -220,12 +204,14 @@ class SDXLLoraLoaderOutput(BaseInvocationOutput):
title="SDXL LoRA",
tags=["lora", "model"],
category="model",
version="1.0.1",
version="1.0.2",
)
class SDXLLoraLoaderInvocation(BaseInvocation):
class SDXLLoRALoaderInvocation(BaseInvocation):
"""Apply selected lora to unet and text_encoder."""
lora: LoRAModelField = InputField(description=FieldDescriptions.lora_model, input=Input.Direct, title="LoRA")
lora: ModelIdentifierField = InputField(
description=FieldDescriptions.lora_model, input=Input.Direct, title="LoRA", ui_type=UIType.LoRAModel
)
weight: float = InputField(default=0.75, description=FieldDescriptions.lora_weight)
unet: Optional[UNetField] = InputField(
default=None,
@ -233,65 +219,59 @@ class SDXLLoraLoaderInvocation(BaseInvocation):
input=Input.Connection,
title="UNet",
)
clip: Optional[ClipField] = InputField(
clip: Optional[CLIPField] = InputField(
default=None,
description=FieldDescriptions.clip,
input=Input.Connection,
title="CLIP 1",
)
clip2: Optional[ClipField] = InputField(
clip2: Optional[CLIPField] = InputField(
default=None,
description=FieldDescriptions.clip,
input=Input.Connection,
title="CLIP 2",
)
def invoke(self, context: InvocationContext) -> SDXLLoraLoaderOutput:
if self.lora is None:
raise Exception("No LoRA provided")
def invoke(self, context: InvocationContext) -> SDXLLoRALoaderOutput:
lora_key = self.lora.key
if not context.models.exists(lora_key):
raise Exception(f"Unknown lora: {lora_key}!")
if self.unet is not None and any(lora.key == lora_key for lora in self.unet.loras):
raise Exception(f'Lora "{lora_key}" already applied to unet')
if self.unet is not None and any(lora.lora.key == lora_key for lora in self.unet.loras):
raise Exception(f'LoRA "{lora_key}" already applied to unet')
if self.clip is not None and any(lora.key == lora_key for lora in self.clip.loras):
raise Exception(f'Lora "{lora_key}" already applied to clip')
if self.clip is not None and any(lora.lora.key == lora_key for lora in self.clip.loras):
raise Exception(f'LoRA "{lora_key}" already applied to clip')
if self.clip2 is not None and any(lora.key == lora_key for lora in self.clip2.loras):
raise Exception(f'Lora "{lora_key}" already applied to clip2')
if self.clip2 is not None and any(lora.lora.key == lora_key for lora in self.clip2.loras):
raise Exception(f'LoRA "{lora_key}" already applied to clip2')
output = SDXLLoraLoaderOutput()
output = SDXLLoRALoaderOutput()
if self.unet is not None:
output.unet = copy.deepcopy(self.unet)
output.unet = self.unet.model_copy(deep=True)
output.unet.loras.append(
LoraInfo(
key=lora_key,
submodel_type=None,
LoRAField(
lora=self.lora,
weight=self.weight,
)
)
if self.clip is not None:
output.clip = copy.deepcopy(self.clip)
output.clip = self.clip.model_copy(deep=True)
output.clip.loras.append(
LoraInfo(
key=lora_key,
submodel_type=None,
LoRAField(
lora=self.lora,
weight=self.weight,
)
)
if self.clip2 is not None:
output.clip2 = copy.deepcopy(self.clip2)
output.clip2 = self.clip2.model_copy(deep=True)
output.clip2.loras.append(
LoraInfo(
key=lora_key,
submodel_type=None,
LoRAField(
lora=self.lora,
weight=self.weight,
)
)
@ -299,20 +279,12 @@ class SDXLLoraLoaderInvocation(BaseInvocation):
return output
class VAEModelField(BaseModel):
"""Vae model field"""
key: str = Field(description="Model's key")
@invocation("vae_loader", title="VAE", tags=["vae", "model"], category="model", version="1.0.1")
class VaeLoaderInvocation(BaseInvocation):
@invocation("vae_loader", title="VAE", tags=["vae", "model"], category="model", version="1.0.2")
class VAELoaderInvocation(BaseInvocation):
"""Loads a VAE model, outputting a VaeLoaderOutput"""
vae_model: VAEModelField = InputField(
description=FieldDescriptions.vae_model,
input=Input.Direct,
title="VAE",
vae_model: ModelIdentifierField = InputField(
description=FieldDescriptions.vae_model, input=Input.Direct, title="VAE", ui_type=UIType.VAEModel
)
def invoke(self, context: InvocationContext) -> VAEOutput:
@ -321,7 +293,7 @@ class VaeLoaderInvocation(BaseInvocation):
if not context.models.exists(key):
raise Exception(f"Unkown vae: {key}!")
return VAEOutput(vae=VaeField(vae=ModelInfo(key=key)))
return VAEOutput(vae=VAEField(vae=self.vae_model))
@invocation_output("seamless_output")
@ -329,7 +301,7 @@ class SeamlessModeOutput(BaseInvocationOutput):
"""Modified Seamless Model output"""
unet: Optional[UNetField] = OutputField(default=None, description=FieldDescriptions.unet, title="UNet")
vae: Optional[VaeField] = OutputField(default=None, description=FieldDescriptions.vae, title="VAE")
vae: Optional[VAEField] = OutputField(default=None, description=FieldDescriptions.vae, title="VAE")
@invocation(
@ -337,7 +309,7 @@ class SeamlessModeOutput(BaseInvocationOutput):
title="Seamless",
tags=["seamless", "model"],
category="model",
version="1.0.0",
version="1.0.1",
)
class SeamlessModeInvocation(BaseInvocation):
"""Applies the seamless transformation to the Model UNet and VAE."""
@ -348,7 +320,7 @@ class SeamlessModeInvocation(BaseInvocation):
input=Input.Connection,
title="UNet",
)
vae: Optional[VaeField] = InputField(
vae: Optional[VAEField] = InputField(
default=None,
description=FieldDescriptions.vae_model,
input=Input.Connection,
@ -377,7 +349,7 @@ class SeamlessModeInvocation(BaseInvocation):
return SeamlessModeOutput(unet=unet, vae=vae)
@invocation("freeu", title="FreeU", tags=["freeu"], category="unet", version="1.0.0")
@invocation("freeu", title="FreeU", tags=["freeu"], category="unet", version="1.0.1")
class FreeUInvocation(BaseInvocation):
"""
Applies FreeU to the UNet. Suggested values (b1/b2/s1/s2):

View File

@ -81,7 +81,7 @@ class NoiseOutput(BaseInvocationOutput):
title="Noise",
tags=["latents", "noise"],
category="latents",
version="1.0.1",
version="1.0.2",
)
class NoiseInvocation(BaseInvocation):
"""Generates latent noise."""

View File

@ -51,7 +51,7 @@ from .fields import InputField
title="Float Range",
tags=["math", "range"],
category="math",
version="1.0.0",
version="1.0.1",
)
class FloatLinearRangeInvocation(BaseInvocation):
"""Creates a range"""
@ -111,7 +111,7 @@ EASING_FUNCTION_KEYS = Literal[tuple(EASING_FUNCTIONS_MAP.keys())]
title="Step Param Easing",
tags=["step", "easing"],
category="step",
version="1.0.1",
version="1.0.2",
)
class StepParamEasingInvocation(BaseInvocation):
"""Experimental per-step parameter easing for denoising steps"""

View File

@ -54,7 +54,7 @@ class BooleanCollectionOutput(BaseInvocationOutput):
@invocation(
"boolean", title="Boolean Primitive", tags=["primitives", "boolean"], category="primitives", version="1.0.0"
"boolean", title="Boolean Primitive", tags=["primitives", "boolean"], category="primitives", version="1.0.1"
)
class BooleanInvocation(BaseInvocation):
"""A boolean primitive value"""
@ -70,7 +70,7 @@ class BooleanInvocation(BaseInvocation):
title="Boolean Collection Primitive",
tags=["primitives", "boolean", "collection"],
category="primitives",
version="1.0.1",
version="1.0.2",
)
class BooleanCollectionInvocation(BaseInvocation):
"""A collection of boolean primitive values"""
@ -103,7 +103,7 @@ class IntegerCollectionOutput(BaseInvocationOutput):
@invocation(
"integer", title="Integer Primitive", tags=["primitives", "integer"], category="primitives", version="1.0.0"
"integer", title="Integer Primitive", tags=["primitives", "integer"], category="primitives", version="1.0.1"
)
class IntegerInvocation(BaseInvocation):
"""An integer primitive value"""
@ -119,7 +119,7 @@ class IntegerInvocation(BaseInvocation):
title="Integer Collection Primitive",
tags=["primitives", "integer", "collection"],
category="primitives",
version="1.0.1",
version="1.0.2",
)
class IntegerCollectionInvocation(BaseInvocation):
"""A collection of integer primitive values"""
@ -151,7 +151,7 @@ class FloatCollectionOutput(BaseInvocationOutput):
)
@invocation("float", title="Float Primitive", tags=["primitives", "float"], category="primitives", version="1.0.0")
@invocation("float", title="Float Primitive", tags=["primitives", "float"], category="primitives", version="1.0.1")
class FloatInvocation(BaseInvocation):
"""A float primitive value"""
@ -166,7 +166,7 @@ class FloatInvocation(BaseInvocation):
title="Float Collection Primitive",
tags=["primitives", "float", "collection"],
category="primitives",
version="1.0.1",
version="1.0.2",
)
class FloatCollectionInvocation(BaseInvocation):
"""A collection of float primitive values"""
@ -198,7 +198,7 @@ class StringCollectionOutput(BaseInvocationOutput):
)
@invocation("string", title="String Primitive", tags=["primitives", "string"], category="primitives", version="1.0.0")
@invocation("string", title="String Primitive", tags=["primitives", "string"], category="primitives", version="1.0.1")
class StringInvocation(BaseInvocation):
"""A string primitive value"""
@ -213,7 +213,7 @@ class StringInvocation(BaseInvocation):
title="String Collection Primitive",
tags=["primitives", "string", "collection"],
category="primitives",
version="1.0.1",
version="1.0.2",
)
class StringCollectionInvocation(BaseInvocation):
"""A collection of string primitive values"""
@ -255,7 +255,7 @@ class ImageCollectionOutput(BaseInvocationOutput):
)
@invocation("image", title="Image Primitive", tags=["primitives", "image"], category="primitives", version="1.0.1")
@invocation("image", title="Image Primitive", tags=["primitives", "image"], category="primitives", version="1.0.2")
class ImageInvocation(BaseInvocation):
"""An image primitive value"""
@ -276,7 +276,7 @@ class ImageInvocation(BaseInvocation):
title="Image Collection Primitive",
tags=["primitives", "image", "collection"],
category="primitives",
version="1.0.0",
version="1.0.1",
)
class ImageCollectionInvocation(BaseInvocation):
"""A collection of image primitive values"""
@ -341,7 +341,7 @@ class LatentsCollectionOutput(BaseInvocationOutput):
@invocation(
"latents", title="Latents Primitive", tags=["primitives", "latents"], category="primitives", version="1.0.1"
"latents", title="Latents Primitive", tags=["primitives", "latents"], category="primitives", version="1.0.2"
)
class LatentsInvocation(BaseInvocation):
"""A latents tensor primitive value"""
@ -359,7 +359,7 @@ class LatentsInvocation(BaseInvocation):
title="Latents Collection Primitive",
tags=["primitives", "latents", "collection"],
category="primitives",
version="1.0.0",
version="1.0.1",
)
class LatentsCollectionInvocation(BaseInvocation):
"""A collection of latents tensor primitive values"""
@ -393,7 +393,7 @@ class ColorCollectionOutput(BaseInvocationOutput):
)
@invocation("color", title="Color Primitive", tags=["primitives", "color"], category="primitives", version="1.0.0")
@invocation("color", title="Color Primitive", tags=["primitives", "color"], category="primitives", version="1.0.1")
class ColorInvocation(BaseInvocation):
"""A color primitive value"""
@ -433,7 +433,7 @@ class ConditioningCollectionOutput(BaseInvocationOutput):
title="Conditioning Primitive",
tags=["primitives", "conditioning"],
category="primitives",
version="1.0.0",
version="1.0.1",
)
class ConditioningInvocation(BaseInvocation):
"""A conditioning tensor primitive value"""
@ -449,7 +449,7 @@ class ConditioningInvocation(BaseInvocation):
title="Conditioning Collection Primitive",
tags=["primitives", "conditioning", "collection"],
category="primitives",
version="1.0.1",
version="1.0.2",
)
class ConditioningCollectionInvocation(BaseInvocation):
"""A collection of conditioning tensor primitive values"""

View File

@ -17,7 +17,7 @@ from .fields import InputField, UIComponent
title="Dynamic Prompt",
tags=["prompt", "collection"],
category="prompt",
version="1.0.0",
version="1.0.1",
use_cache=False,
)
class DynamicPromptInvocation(BaseInvocation):
@ -46,7 +46,7 @@ class DynamicPromptInvocation(BaseInvocation):
title="Prompts from File",
tags=["prompt", "file"],
category="prompt",
version="1.0.1",
version="1.0.2",
)
class PromptsFromFileInvocation(BaseInvocation):
"""Loads prompts from a text file"""

View File

@ -8,7 +8,7 @@ from .baseinvocation import (
invocation,
invocation_output,
)
from .model import ClipField, MainModelField, ModelInfo, UNetField, VaeField
from .model import CLIPField, ModelIdentifierField, UNetField, VAEField
@invocation_output("sdxl_model_loader_output")
@ -16,9 +16,9 @@ class SDXLModelLoaderOutput(BaseInvocationOutput):
"""SDXL base model loader output"""
unet: UNetField = OutputField(description=FieldDescriptions.unet, title="UNet")
clip: ClipField = OutputField(description=FieldDescriptions.clip, title="CLIP 1")
clip2: ClipField = OutputField(description=FieldDescriptions.clip, title="CLIP 2")
vae: VaeField = OutputField(description=FieldDescriptions.vae, title="VAE")
clip: CLIPField = OutputField(description=FieldDescriptions.clip, title="CLIP 1")
clip2: CLIPField = OutputField(description=FieldDescriptions.clip, title="CLIP 2")
vae: VAEField = OutputField(description=FieldDescriptions.vae, title="VAE")
@invocation_output("sdxl_refiner_model_loader_output")
@ -26,15 +26,15 @@ class SDXLRefinerModelLoaderOutput(BaseInvocationOutput):
"""SDXL refiner model loader output"""
unet: UNetField = OutputField(description=FieldDescriptions.unet, title="UNet")
clip2: ClipField = OutputField(description=FieldDescriptions.clip, title="CLIP 2")
vae: VaeField = OutputField(description=FieldDescriptions.vae, title="VAE")
clip2: CLIPField = OutputField(description=FieldDescriptions.clip, title="CLIP 2")
vae: VAEField = OutputField(description=FieldDescriptions.vae, title="VAE")
@invocation("sdxl_model_loader", title="SDXL Main Model", tags=["model", "sdxl"], category="model", version="1.0.1")
@invocation("sdxl_model_loader", title="SDXL Main Model", tags=["model", "sdxl"], category="model", version="1.0.2")
class SDXLModelLoaderInvocation(BaseInvocation):
"""Loads an sdxl base model, outputting its submodels."""
model: MainModelField = InputField(
model: ModelIdentifierField = InputField(
description=FieldDescriptions.sdxl_main_model, input=Input.Direct, ui_type=UIType.SDXLMainModel
)
# TODO: precision?
@ -46,48 +46,19 @@ class SDXLModelLoaderInvocation(BaseInvocation):
if not context.models.exists(model_key):
raise Exception(f"Unknown model: {model_key}")
unet = self.model.model_copy(update={"submodel_type": SubModelType.UNet})
scheduler = self.model.model_copy(update={"submodel_type": SubModelType.Scheduler})
tokenizer = self.model.model_copy(update={"submodel_type": SubModelType.Tokenizer})
text_encoder = self.model.model_copy(update={"submodel_type": SubModelType.TextEncoder})
tokenizer2 = self.model.model_copy(update={"submodel_type": SubModelType.Tokenizer2})
text_encoder2 = self.model.model_copy(update={"submodel_type": SubModelType.TextEncoder2})
vae = self.model.model_copy(update={"submodel_type": SubModelType.VAE})
return SDXLModelLoaderOutput(
unet=UNetField(
unet=ModelInfo(
key=model_key,
submodel_type=SubModelType.UNet,
),
scheduler=ModelInfo(
key=model_key,
submodel_type=SubModelType.Scheduler,
),
loras=[],
),
clip=ClipField(
tokenizer=ModelInfo(
key=model_key,
submodel_type=SubModelType.Tokenizer,
),
text_encoder=ModelInfo(
key=model_key,
submodel_type=SubModelType.TextEncoder,
),
loras=[],
skipped_layers=0,
),
clip2=ClipField(
tokenizer=ModelInfo(
key=model_key,
submodel_type=SubModelType.Tokenizer2,
),
text_encoder=ModelInfo(
key=model_key,
submodel_type=SubModelType.TextEncoder2,
),
loras=[],
skipped_layers=0,
),
vae=VaeField(
vae=ModelInfo(
key=model_key,
submodel_type=SubModelType.VAE,
),
),
unet=UNetField(unet=unet, scheduler=scheduler, loras=[]),
clip=CLIPField(tokenizer=tokenizer, text_encoder=text_encoder, loras=[], skipped_layers=0),
clip2=CLIPField(tokenizer=tokenizer2, text_encoder=text_encoder2, loras=[], skipped_layers=0),
vae=VAEField(vae=vae),
)
@ -96,15 +67,13 @@ class SDXLModelLoaderInvocation(BaseInvocation):
title="SDXL Refiner Model",
tags=["model", "sdxl", "refiner"],
category="model",
version="1.0.1",
version="1.0.2",
)
class SDXLRefinerModelLoaderInvocation(BaseInvocation):
"""Loads an sdxl refiner model, outputting its submodels."""
model: MainModelField = InputField(
description=FieldDescriptions.sdxl_refiner_model,
input=Input.Direct,
ui_type=UIType.SDXLRefinerModel,
model: ModelIdentifierField = InputField(
description=FieldDescriptions.sdxl_refiner_model, input=Input.Direct, ui_type=UIType.SDXLRefinerModel
)
# TODO: precision?
@ -115,34 +84,14 @@ class SDXLRefinerModelLoaderInvocation(BaseInvocation):
if not context.models.exists(model_key):
raise Exception(f"Unknown model: {model_key}")
unet = self.model.model_copy(update={"submodel_type": SubModelType.UNet})
scheduler = self.model.model_copy(update={"submodel_type": SubModelType.Scheduler})
tokenizer2 = self.model.model_copy(update={"submodel_type": SubModelType.Tokenizer2})
text_encoder2 = self.model.model_copy(update={"submodel_type": SubModelType.TextEncoder2})
vae = self.model.model_copy(update={"submodel_type": SubModelType.VAE})
return SDXLRefinerModelLoaderOutput(
unet=UNetField(
unet=ModelInfo(
key=model_key,
submodel_type=SubModelType.UNet,
),
scheduler=ModelInfo(
key=model_key,
submodel_type=SubModelType.Scheduler,
),
loras=[],
),
clip2=ClipField(
tokenizer=ModelInfo(
key=model_key,
submodel_type=SubModelType.Tokenizer2,
),
text_encoder=ModelInfo(
key=model_key,
submodel_type=SubModelType.TextEncoder2,
),
loras=[],
skipped_layers=0,
),
vae=VaeField(
vae=ModelInfo(
key=model_key,
submodel_type=SubModelType.VAE,
),
),
unet=UNetField(unet=unet, scheduler=scheduler, loras=[]),
clip2=CLIPField(tokenizer=tokenizer2, text_encoder=text_encoder2, loras=[], skipped_layers=0),
vae=VAEField(vae=vae),
)
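Both loaders above now derive every submodel identifier from the single `ModelIdentifierField` by copying it with a different `submodel_type`, instead of building `ModelInfo` objects by hand. A minimal sketch of that pattern in plain pydantic (the `ModelIdentifier` model and the two-member enum below are simplified stand-ins for illustration, not InvokeAI's actual classes):

```python
from enum import Enum

from pydantic import BaseModel


class SubModelType(str, Enum):
    UNet = "unet"
    VAE = "vae"


class ModelIdentifier(BaseModel):
    key: str
    submodel_type: SubModelType | None = None


base = ModelIdentifier(key="sdxl-base-key")

# model_copy(update=...) returns a new instance with only the given fields changed,
# so a single identifier fans out into one identifier per submodel.
unet = base.model_copy(update={"submodel_type": SubModelType.UNet})
vae = base.model_copy(update={"submodel_type": SubModelType.VAE})

assert unet.key == vae.key == "sdxl-base-key"
assert unet.submodel_type is SubModelType.UNet
```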


@@ -27,7 +27,7 @@ class StringPosNegOutput(BaseInvocationOutput):
title="String Split Negative",
tags=["string", "split", "negative"],
category="string",
version="1.0.0",
version="1.0.1",
)
class StringSplitNegInvocation(BaseInvocation):
"""Splits string into two strings, inside [] goes into negative string everthing else goes into positive string. Each [ and ] character is replaced with a space"""
@@ -69,7 +69,7 @@ class String2Output(BaseInvocationOutput):
string_2: str = OutputField(description="string 2")
@invocation("string_split", title="String Split", tags=["string", "split"], category="string", version="1.0.0")
@invocation("string_split", title="String Split", tags=["string", "split"], category="string", version="1.0.1")
class StringSplitInvocation(BaseInvocation):
"""Splits string into two strings, based on the first occurance of the delimiter. The delimiter will be removed from the string"""
@@ -89,7 +89,7 @@ class StringSplitInvocation(BaseInvocation):
return String2Output(string_1=part1, string_2=part2)
@invocation("string_join", title="String Join", tags=["string", "join"], category="string", version="1.0.0")
@invocation("string_join", title="String Join", tags=["string", "join"], category="string", version="1.0.1")
class StringJoinInvocation(BaseInvocation):
"""Joins string left to string right"""
@@ -100,7 +100,7 @@ class StringJoinInvocation(BaseInvocation):
return StringOutput(value=((self.string_left or "") + (self.string_right or "")))
@invocation("string_join_three", title="String Join Three", tags=["string", "join"], category="string", version="1.0.0")
@invocation("string_join_three", title="String Join Three", tags=["string", "join"], category="string", version="1.0.1")
class StringJoinThreeInvocation(BaseInvocation):
"""Joins string left to string middle to string right"""
@@ -113,7 +113,7 @@ class StringJoinThreeInvocation(BaseInvocation):
@invocation(
"string_replace", title="String Replace", tags=["string", "replace", "regex"], category="string", version="1.0.0"
"string_replace", title="String Replace", tags=["string", "replace", "regex"], category="string", version="1.0.1"
)
class StringReplaceInvocation(BaseInvocation):
"""Replaces the search string with the replace string"""


@@ -9,18 +9,15 @@ from invokeai.app.invocations.baseinvocation import (
invocation_output,
)
from invokeai.app.invocations.controlnet_image_processors import CONTROLNET_RESIZE_VALUES
from invokeai.app.invocations.fields import FieldDescriptions, ImageField, Input, InputField, OutputField
from invokeai.app.invocations.fields import FieldDescriptions, ImageField, Input, InputField, OutputField, UIType
from invokeai.app.invocations.model import ModelIdentifierField
from invokeai.app.invocations.util import validate_begin_end_step, validate_weights
from invokeai.app.services.shared.invocation_context import InvocationContext
class T2IAdapterModelField(BaseModel):
key: str = Field(description="Model record key for the T2I-Adapter model")
class T2IAdapterField(BaseModel):
image: ImageField = Field(description="The T2I-Adapter image prompt.")
t2i_adapter_model: T2IAdapterModelField = Field(description="The T2I-Adapter model to use.")
t2i_adapter_model: ModelIdentifierField = Field(description="The T2I-Adapter model to use.")
weight: Union[float, list[float]] = Field(default=1, description="The weight given to the T2I-Adapter")
begin_step_percent: float = Field(
default=0, ge=0, le=1, description="When the T2I-Adapter is first applied (% of total steps)"
@@ -48,18 +45,19 @@ class T2IAdapterOutput(BaseInvocationOutput):
@invocation(
"t2i_adapter", title="T2I-Adapter", tags=["t2i_adapter", "control"], category="t2i_adapter", version="1.0.1"
"t2i_adapter", title="T2I-Adapter", tags=["t2i_adapter", "control"], category="t2i_adapter", version="1.0.2"
)
class T2IAdapterInvocation(BaseInvocation):
"""Collects T2I-Adapter info to pass to other nodes."""
# Inputs
image: ImageField = InputField(description="The IP-Adapter image prompt.")
t2i_adapter_model: T2IAdapterModelField = InputField(
t2i_adapter_model: ModelIdentifierField = InputField(
description="The T2I-Adapter model.",
title="T2I-Adapter Model",
input=Input.Direct,
ui_order=-1,
ui_type=UIType.T2IAdapterModel,
)
weight: Union[float, list[float]] = InputField(
default=1, ge=0, description="The weight given to the T2I-Adapter", title="Weight"


@@ -39,7 +39,7 @@ class CalculateImageTilesOutput(BaseInvocationOutput):
title="Calculate Image Tiles",
tags=["tiles"],
category="tiles",
version="1.0.0",
version="1.0.1",
classification=Classification.Beta,
)
class CalculateImageTilesInvocation(BaseInvocation):
@@ -73,7 +73,7 @@ class CalculateImageTilesInvocation(BaseInvocation):
title="Calculate Image Tiles Even Split",
tags=["tiles"],
category="tiles",
version="1.1.0",
version="1.1.1",
classification=Classification.Beta,
)
class CalculateImageTilesEvenSplitInvocation(BaseInvocation):
@@ -116,7 +116,7 @@ class CalculateImageTilesEvenSplitInvocation(BaseInvocation):
title="Calculate Image Tiles Minimum Overlap",
tags=["tiles"],
category="tiles",
version="1.0.0",
version="1.0.1",
classification=Classification.Beta,
)
class CalculateImageTilesMinimumOverlapInvocation(BaseInvocation):
@@ -167,7 +167,7 @@ class TileToPropertiesOutput(BaseInvocationOutput):
title="Tile to Properties",
tags=["tiles"],
category="tiles",
version="1.0.0",
version="1.0.1",
classification=Classification.Beta,
)
class TileToPropertiesInvocation(BaseInvocation):
@@ -200,7 +200,7 @@ class PairTileImageOutput(BaseInvocationOutput):
title="Pair Tile with Image",
tags=["tiles"],
category="tiles",
version="1.0.0",
version="1.0.1",
classification=Classification.Beta,
)
class PairTileImageInvocation(BaseInvocation):
@@ -229,7 +229,7 @@ BLEND_MODES = Literal["Linear", "Seam"]
title="Merge Tiles to Image",
tags=["tiles"],
category="tiles",
version="1.1.0",
version="1.1.1",
classification=Classification.Beta,
)
class MergeTilesToImageInvocation(BaseInvocation, WithMetadata, WithBoard):


@@ -11,6 +11,7 @@ from pydantic import ConfigDict
from invokeai.app.invocations.fields import ImageField
from invokeai.app.invocations.primitives import ImageOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.app.util.download_with_progress import download_with_progress_bar
from invokeai.backend.image_util.basicsr.rrdbnet_arch import RRDBNet
from invokeai.backend.image_util.realesrgan.realesrgan import RealESRGAN
from invokeai.backend.util.devices import choose_torch_device
@@ -27,11 +28,18 @@ ESRGAN_MODELS = Literal[
"RealESRGAN_x2plus.pth",
]
ESRGAN_MODEL_URLS: dict[str, str] = {
"RealESRGAN_x4plus.pth": "https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth",
"RealESRGAN_x4plus_anime_6B.pth": "https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth",
"ESRGAN_SRx4_DF2KOST_official-ff704c30.pth": "https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.1/ESRGAN_SRx4_DF2KOST_official-ff704c30.pth",
"RealESRGAN_x2plus.pth": "https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.1/RealESRGAN_x2plus.pth",
}
if choose_torch_device() == torch.device("mps"):
from torch import mps
@invocation("esrgan", title="Upscale (RealESRGAN)", tags=["esrgan", "upscale"], category="esrgan", version="1.3.1")
@invocation("esrgan", title="Upscale (RealESRGAN)", tags=["esrgan", "upscale"], category="esrgan", version="1.3.2")
class ESRGANInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Upscales an image using RealESRGAN."""
@@ -45,7 +53,6 @@ class ESRGANInvocation(BaseInvocation, WithMetadata, WithBoard):
def invoke(self, context: InvocationContext) -> ImageOutput:
image = context.images.get_pil(self.image.image_name)
models_path = context.config.get().models_path
rrdbnet_model = None
netscale = None
@@ -92,11 +99,16 @@ class ESRGANInvocation(BaseInvocation, WithMetadata, WithBoard):
context.logger.error(msg)
raise ValueError(msg)
esrgan_model_path = Path(f"core/upscaling/realesrgan/{self.model_name}")
esrgan_model_path = Path(context.config.get().models_path, f"core/upscaling/realesrgan/{self.model_name}")
# Downloads the ESRGAN model if it doesn't already exist
download_with_progress_bar(
name=self.model_name, url=ESRGAN_MODEL_URLS[self.model_name], dest_path=esrgan_model_path
)
upscaler = RealESRGAN(
scale=netscale,
model_path=models_path / esrgan_model_path,
model_path=esrgan_model_path,
model=rrdbnet_model,
half=False,
tile=self.tile_size,
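The hunk above resolves the ESRGAN weights to an absolute path under `models_path` and downloads them on first use via `download_with_progress_bar`. A minimal sketch of the same download-if-missing idea using only the standard library (illustrative only; InvokeAI's helper adds a progress bar and error handling):

```python
from pathlib import Path
from urllib.request import urlretrieve


def ensure_weights(url: str, dest_path: Path) -> Path:
    """Download a weights file to dest_path if it is not already present."""
    if not dest_path.exists():
        dest_path.parent.mkdir(parents=True, exist_ok=True)
        urlretrieve(url, dest_path)  # no progress reporting in this sketch
    return dest_path
```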

invokeai/app/run_app.py Normal file

@@ -0,0 +1,12 @@
"""This is a wrapper around the main app entrypoint, to allow for CLI args to be parsed before running the app."""
def run_app() -> None:
# Before doing _anything_, parse CLI args!
from invokeai.frontend.cli.arg_parser import InvokeAIArgs
InvokeAIArgs.parse_args()
from invokeai.app.api_app import invoke_api
invoke_api()
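Per its docstring, the wrapper exists so that CLI args are parsed before `invokeai.app.api_app` is imported. A hedged sketch of calling it from a thin launcher script (the script itself is hypothetical, not part of this diff):

```python
# launch.py - hypothetical thin launcher around the new wrapper
from invokeai.app.run_app import run_app

if __name__ == "__main__":
    run_app()  # parses CLI args first, then imports and starts the API server
```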


@@ -2,6 +2,6 @@
from invokeai.app.services.config.config_common import PagingArgumentParser
from .config_default import InvokeAIAppConfig, get_invokeai_config
from .config_default import InvokeAIAppConfig, get_config
__all__ = ["InvokeAIAppConfig", "get_invokeai_config", "PagingArgumentParser"]
__all__ = ["InvokeAIAppConfig", "get_config", "PagingArgumentParser"]


@@ -1,224 +0,0 @@
# Copyright (c) 2023 Lincoln Stein (https://github.com/lstein) and the InvokeAI Development Team
"""
Base class for the InvokeAI configuration system.
It defines a type of pydantic BaseSettings object that
is able to read and write from an omegaconf-based config file,
with overriding of settings from environment variables and/or
the command line.
"""
from __future__ import annotations
import argparse
import os
import sys
from argparse import ArgumentParser
from pathlib import Path
from typing import Any, ClassVar, Dict, List, Literal, Optional, Union, get_args, get_origin, get_type_hints
from omegaconf import DictConfig, ListConfig, OmegaConf
from pydantic_settings import BaseSettings, SettingsConfigDict
from invokeai.app.services.config.config_common import PagingArgumentParser, int_or_float_or_str
class InvokeAISettings(BaseSettings):
"""Runtime configuration settings in which default values are read from an omegaconf .yaml file."""
initconf: ClassVar[Optional[DictConfig]] = None
argparse_groups: ClassVar[Dict[str, Any]] = {}
model_config = SettingsConfigDict(env_file_encoding="utf-8", arbitrary_types_allowed=True, case_sensitive=True)
def parse_args(self, argv: Optional[List[str]] = sys.argv[1:]) -> None:
"""Call to parse command-line arguments."""
parser = self.get_parser()
opt, unknown_opts = parser.parse_known_args(argv)
if len(unknown_opts) > 0:
print("Unknown args:", unknown_opts)
for name in self.model_fields:
if name not in self._excluded():
value = getattr(opt, name)
if isinstance(value, ListConfig):
value = list(value)
elif isinstance(value, DictConfig):
value = dict(value)
setattr(self, name, value)
def to_yaml(self) -> str:
"""Return a YAML string representing our settings. This can be used as the contents of `invokeai.yaml` to restore settings later."""
cls = self.__class__
type = get_args(get_type_hints(cls)["type"])[0]
field_dict: Dict[str, Dict[str, Any]] = {type: {}}
for name, field in self.model_fields.items():
if name in cls._excluded_from_yaml():
continue
assert isinstance(field.json_schema_extra, dict)
category = (
field.json_schema_extra.get("category", "Uncategorized") if field.json_schema_extra else "Uncategorized"
)
value = getattr(self, name)
assert isinstance(category, str)
if category not in field_dict[type]:
field_dict[type][category] = {}
# keep paths as strings to make it easier to read
field_dict[type][category][name] = str(value) if isinstance(value, Path) else value
conf = OmegaConf.create(field_dict)
return OmegaConf.to_yaml(conf)
@classmethod
def add_parser_arguments(cls, parser: ArgumentParser) -> None:
"""Dynamically create arguments for a settings parser."""
if "type" in get_type_hints(cls):
settings_stanza = get_args(get_type_hints(cls)["type"])[0]
else:
settings_stanza = "Uncategorized"
env_prefix = getattr(cls.model_config, "env_prefix", None)
env_prefix = env_prefix if env_prefix is not None else settings_stanza.upper()
initconf = (
cls.initconf.get(settings_stanza)
if cls.initconf and settings_stanza in cls.initconf
else OmegaConf.create()
)
# create an upcase version of the environment in
# order to achieve case-insensitive environment
# variables (the way Windows does)
upcase_environ = {}
for key, value in os.environ.items():
upcase_environ[key.upper()] = value
fields = cls.model_fields
cls.argparse_groups = {}
for name, field in fields.items():
if name not in cls._excluded():
current_default = field.default
category = (
field.json_schema_extra.get("category", "Uncategorized")
if field.json_schema_extra
else "Uncategorized"
)
env_name = env_prefix + "_" + name
if category in initconf and name in initconf.get(category):
field.default = initconf.get(category).get(name)
if env_name.upper() in upcase_environ:
field.default = upcase_environ[env_name.upper()]
cls.add_field_argument(parser, name, field)
field.default = current_default
@classmethod
def cmd_name(cls, command_field: str = "type") -> str:
"""Return the category of a setting."""
hints = get_type_hints(cls)
if command_field in hints:
result: str = get_args(hints[command_field])[0]
return result
else:
return "Uncategorized"
@classmethod
def get_parser(cls) -> ArgumentParser:
"""Get the command-line parser for a setting."""
parser = PagingArgumentParser(
prog=cls.cmd_name(),
description=cls.__doc__,
)
cls.add_parser_arguments(parser)
return parser
@classmethod
def _excluded(cls) -> List[str]:
# internal fields that shouldn't be exposed as command line options
return ["type", "initconf"]
@classmethod
def _excluded_from_yaml(cls) -> List[str]:
# combination of deprecated parameters and internal ones that shouldn't be exposed as invokeai.yaml options
return [
"type",
"initconf",
"version",
"from_file",
"model",
"root",
"max_cache_size",
"max_vram_cache_size",
"always_use_cpu",
"free_gpu_mem",
"xformers_enabled",
"tiled_decode",
"lora_dir",
"embedding_dir",
"controlnet_dir",
"conf_path",
]
@classmethod
def add_field_argument(cls, command_parser, name: str, field, default_override=None) -> None:
"""Add the argparse arguments for a setting parser."""
field_type = get_type_hints(cls).get(name)
default = (
default_override
if default_override is not None
else field.default
if field.default_factory is None
else field.default_factory()
)
if category := (field.json_schema_extra.get("category", None) if field.json_schema_extra else None):
if category not in cls.argparse_groups:
cls.argparse_groups[category] = command_parser.add_argument_group(category)
argparse_group = cls.argparse_groups[category]
else:
argparse_group = command_parser
if get_origin(field_type) == Literal:
allowed_values = get_args(field.annotation)
allowed_types = set()
for val in allowed_values:
allowed_types.add(type(val))
allowed_types_list = list(allowed_types)
field_type = allowed_types_list[0] if len(allowed_types) == 1 else int_or_float_or_str
argparse_group.add_argument(
f"--{name}",
dest=name,
type=field_type,
default=default,
choices=allowed_values,
help=field.description,
)
elif get_origin(field_type) == Union:
argparse_group.add_argument(
f"--{name}",
dest=name,
type=int_or_float_or_str,
default=default,
help=field.description,
)
elif get_origin(field_type) == list:
argparse_group.add_argument(
f"--{name}",
dest=name,
nargs="*",
type=field.annotation,
default=default,
action=argparse.BooleanOptionalAction if field.annotation == bool else "store",
help=field.description,
)
else:
argparse_group.add_argument(
f"--{name}",
dest=name,
type=field.annotation,
default=default,
action=argparse.BooleanOptionalAction if field.annotation == bool else "store",
help=field.description,
)


@@ -12,7 +12,6 @@ from __future__ import annotations
import argparse
import pydoc
from typing import Union
class PagingArgumentParser(argparse.ArgumentParser):
@@ -24,18 +23,3 @@ class PagingArgumentParser(argparse.ArgumentParser):
def print_help(self, file=None) -> None:
text = self.format_help()
pydoc.pager(text)
def int_or_float_or_str(value: str) -> Union[int, float, str]:
"""
Workaround for argparse type checking.
"""
try:
return int(value)
except Exception as e: # noqa F841
pass
try:
return float(value)
except Exception as e: # noqa F841
pass
return str(value)


@@ -1,184 +1,23 @@
# Copyright (c) 2023 Lincoln Stein (https://github.com/lstein) and the InvokeAI Development Team
"""Invokeai configuration system.
Arguments and fields are taken from the pydantic definition of the
model. Defaults can be set by creating a yaml configuration file that
has a top-level key of "InvokeAI" and subheadings for each of the
categories returned by `invokeai --help`. The file looks like this:
[file: invokeai.yaml]
InvokeAI:
Web Server:
host: 127.0.0.1
port: 9090
allow_origins: []
allow_credentials: true
allow_methods:
- '*'
allow_headers:
- '*'
Features:
esrgan: true
internet_available: true
log_tokenization: false
patchmatch: true
ignore_missing_core_models: false
Paths:
autoimport_dir: autoimport
lora_dir: null
embedding_dir: null
controlnet_dir: null
models_dir: models
legacy_conf_dir: configs/stable-diffusion
db_dir: databases
outdir: /home/lstein/invokeai-main/outputs
use_memory_db: false
Logging:
log_handlers:
- console
log_format: plain
log_level: info
Model Cache:
ram: 13.5
vram: 0.25
lazy_offload: true
log_memory_usage: false
Device:
device: auto
precision: auto
Generation:
sequential_guidance: false
attention_type: xformers
attention_slice_size: auto
force_tiled_decode: false
The default name of the configuration file is `invokeai.yaml`, located
in INVOKEAI_ROOT. You can supersede this by providing any
OmegaConf dictionary object at initialization time:
omegaconf = OmegaConf.load('/tmp/init.yaml')
conf = InvokeAIAppConfig()
conf.parse_args(conf=omegaconf)
InvokeAIAppConfig.parse_args() will parse the contents of `sys.argv`
at initialization time. You may pass a list of strings in the optional
`argv` argument to use instead of the system argv:
conf.parse_args(argv=['--log_tokenization'])
It is also possible to set a value at initialization time. However, if
you call parse_args() it may be overwritten.
conf = InvokeAIAppConfig(log_tokenization=True)
conf.parse_args(argv=['--no-log_tokenization'])
conf.log_tokenization
# False
To avoid this, use `get_config()` to retrieve the application-wide
configuration object. This will retain any properties set at object
creation time:
conf = InvokeAIAppConfig.get_config(log_tokenization=True)
conf.parse_args(argv=['--no-log_tokenization'])
conf.log_tokenization
# True
Any setting can be overwritten by setting an environment variable of
form: "INVOKEAI_<setting>", as in:
export INVOKEAI_port=8080
Order of precedence (from highest):
1) initialization options
2) command line options
3) environment variable options
4) config file options
5) pydantic defaults
Typical usage at the top level file:
from invokeai.app.services.config import InvokeAIAppConfig
# get global configuration and print its cache size
conf = InvokeAIAppConfig.get_config()
conf.parse_args()
print(conf.ram_cache_size)
Typical usage in a backend module:
from invokeai.app.services.config import InvokeAIAppConfig
# get global configuration and print its cache size value
conf = InvokeAIAppConfig.get_config()
print(conf.ram_cache_size)
Computed properties:
The InvokeAIAppConfig object has a series of properties that
resolve paths relative to the runtime root directory. They each return
a Path object:
root_path - path to InvokeAI root
output_path - path to default outputs directory
conf - alias for the above
embedding_path - path to the embeddings directory
lora_path - path to the LoRA directory
In most cases, you will want to create a single InvokeAIAppConfig
object for the entire application. The InvokeAIAppConfig.get_config() function
does this:
config = InvokeAIAppConfig.get_config()
config.parse_args() # read values from the command line/config file
print(config.root)
# Subclassing
If you wish to create a similar class, please subclass the
`InvokeAISettings` class and define a Literal field named "type",
which is set to the desired top-level name. For example, to create a
"InvokeBatch" configuration, define like this:
class InvokeBatch(InvokeAISettings):
type: Literal["InvokeBatch"] = "InvokeBatch"
node_count : int = Field(default=1, description="Number of nodes to run on", json_schema_extra=dict(category='Resources'))
cpu_count : int = Field(default=8, description="Number of CPUs to run on per node", json_schema_extra=dict(category='Resources'))
This will now read and write from the "InvokeBatch" section of the
config file, look for environment variables named INVOKEBATCH_*, and
accept the command-line arguments `--node_count` and `--cpu_count`. The
two configs are kept in separate sections of the config file:
# invokeai.yaml
InvokeBatch:
Resources:
node_count: 1
cpu_count: 8
InvokeAI:
Paths:
root: /home/lstein/invokeai-main
legacy_conf_dir: configs/stable-diffusion
outdir: outputs
...
"""
# TODO(psyche): pydantic-settings supports YAML settings sources. If we can figure out a way to integrate the YAML
# migration logic, we could use that for simpler config loading.
from __future__ import annotations
import os
import re
import shutil
from functools import lru_cache
from pathlib import Path
from typing import Any, ClassVar, Dict, List, Literal, Optional
from typing import Any, Literal, Optional
from omegaconf import DictConfig, OmegaConf
from pydantic import Field
from pydantic.config import JsonDict
from pydantic_settings import SettingsConfigDict
import psutil
import yaml
from pydantic import BaseModel, Field, PrivateAttr, field_validator
from pydantic_settings import BaseSettings, PydanticBaseSettingsSource, SettingsConfigDict
from .config_base import InvokeAISettings
import invokeai.configs as model_configs
from invokeai.backend.model_hash.model_hash import HASHING_ALGORITHMS
from invokeai.frontend.cli.arg_parser import InvokeAIArgs
INIT_FILE = Path("invokeai.yaml")
DB_FILE = Path("invokeai.db")
@@ -186,304 +25,303 @@ LEGACY_INIT_FILE = Path("invokeai.init")
DEFAULT_RAM_CACHE = 10.0
DEFAULT_VRAM_CACHE = 0.25
DEFAULT_CONVERT_CACHE = 20.0
DEVICE = Literal["auto", "cpu", "cuda", "cuda:1", "mps"]
PRECISION = Literal["auto", "float16", "bfloat16", "float32", "autocast"]
ATTENTION_TYPE = Literal["auto", "normal", "xformers", "sliced", "torch-sdp"]
ATTENTION_SLICE_SIZE = Literal["auto", "balanced", "max", 1, 2, 3, 4, 5, 6, 7, 8]
LOG_FORMAT = Literal["plain", "color", "syslog", "legacy"]
LOG_LEVEL = Literal["debug", "info", "warning", "error", "critical"]
CONFIG_SCHEMA_VERSION = "4.0.0"
class Categories(object):
"""Category headers for configuration variable groups."""
def get_default_ram_cache_size() -> float:
"""Run a heuristic for the default RAM cache based on installed RAM."""
WebServer: JsonDict = {"category": "Web Server"}
Features: JsonDict = {"category": "Features"}
Paths: JsonDict = {"category": "Paths"}
Logging: JsonDict = {"category": "Logging"}
Development: JsonDict = {"category": "Development"}
Other: JsonDict = {"category": "Other"}
ModelCache: JsonDict = {"category": "Model Cache"}
Device: JsonDict = {"category": "Device"}
Generation: JsonDict = {"category": "Generation"}
Queue: JsonDict = {"category": "Queue"}
Nodes: JsonDict = {"category": "Nodes"}
MemoryPerformance: JsonDict = {"category": "Memory/Performance"}
# On some machines, psutil.virtual_memory().total gives a value that is slightly less than the actual RAM, so the
# limits are set slightly lower than what we expect the actual RAM to be.
GB = 1024**3
max_ram = psutil.virtual_memory().total / GB
if max_ram >= 60:
return 15.0
if max_ram >= 30:
return 7.5
if max_ram >= 14:
return 4.0
return 2.1 # 2.1 is just large enough for sd 1.5 ;-)
class InvokeAIAppConfig(InvokeAISettings):
"""Configuration object for InvokeAI App."""
class URLRegexTokenPair(BaseModel):
url_regex: str = Field(description="Regular expression to match against the URL")
token: str = Field(description="Token to use when the URL matches the regex")
singleton_config: ClassVar[Optional[InvokeAIAppConfig]] = None
singleton_init: ClassVar[Optional[Dict[str, Any]]] = None
@field_validator("url_regex")
@classmethod
def validate_url_regex(cls, v: str) -> str:
"""Validate that the value is a valid regex."""
try:
re.compile(v)
except re.error as e:
raise ValueError(f"Invalid regex: {e}")
return v
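The new `URLRegexTokenPair` model backs the `remote_api_tokens` setting described below. A hedged sketch of how a matching pair would be used to attach a Bearer token (the matching logic here is illustrative, not the actual download-service code; it assumes the class lives alongside `InvokeAIAppConfig` in `config_default.py`):

```python
import re

from invokeai.app.services.config.config_default import URLRegexTokenPair

pair = URLRegexTokenPair(url_regex=r"models\.example\.com", token="my-secret-token")

url = "https://models.example.com/checkpoints/some-model.safetensors"
headers = {"Authorization": f"Bearer {pair.token}"} if re.search(pair.url_regex, url) else {}

# An invalid pattern is rejected up front by the validator above, surfacing as a
# pydantic ValidationError that wraps the ValueError("Invalid regex: ...").
```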
class InvokeAIAppConfig(BaseSettings):
"""Invoke's global app configuration.
Typically, you won't need to interact with this class directly. Instead, use the `get_config` function from `invokeai.app.services.config` to get a singleton config object.
Attributes:
host: IP address to bind to. Use `0.0.0.0` to serve to your local network.
port: Port to bind to.
allow_origins: Allowed CORS origins.
allow_credentials: Allow CORS credentials.
allow_methods: Methods allowed for CORS.
allow_headers: Headers allowed for CORS.
ssl_certfile: SSL certificate file for HTTPS. See https://www.uvicorn.org/settings/#https.
ssl_keyfile: SSL key file for HTTPS. See https://www.uvicorn.org/settings/#https.
log_tokenization: Enable logging of parsed prompt tokens.
patchmatch: Enable patchmatch inpaint code.
autoimport_dir: Path to a directory of models files to be imported on startup.
models_dir: Path to the models directory.
convert_cache_dir: Path to the converted models cache directory. When loading a non-diffusers model, it will be converted and stored on disk at this location.
legacy_conf_dir: Path to directory of legacy checkpoint config files.
db_dir: Path to InvokeAI databases directory.
outputs_dir: Path to directory for outputs.
custom_nodes_dir: Path to directory for custom nodes.
log_handlers: Log handler. Valid options are "console", "file=<path>", "syslog=path|address:host:port", "http=<url>".
log_format: Log format. Use "plain" for text-only, "color" for colorized output, "legacy" for 2.3-style logging and "syslog" for syslog-style.<br>Valid values: `plain`, `color`, `syslog`, `legacy`
log_level: Emit logging messages at this level or higher.<br>Valid values: `debug`, `info`, `warning`, `error`, `critical`
log_sql: Log SQL queries. `log_level` must be `debug` for this to do anything. Extremely verbose.
use_memory_db: Use in-memory database. Useful for development.
dev_reload: Automatically reload when Python sources are changed. Does not reload node definitions.
profile_graphs: Enable graph profiling using `cProfile`.
profile_prefix: An optional prefix for profile output files.
profiles_dir: Path to profiles output directory.
ram: Maximum memory amount used by memory model cache for rapid switching (GB).
vram: Amount of VRAM reserved for model storage (GB).
convert_cache: Maximum size of on-disk converted models cache (GB).
lazy_offload: Keep models in VRAM until their space is needed.
log_memory_usage: If True, a memory snapshot will be captured before and after every model cache operation, and the result will be logged (at debug level). There is a time cost to capturing the memory snapshots, so it is recommended to only enable this feature if you are actively inspecting the model cache's behaviour.
device: Preferred execution device. `auto` will choose the device depending on the hardware platform and the installed torch capabilities.<br>Valid values: `auto`, `cpu`, `cuda`, `cuda:1`, `mps`
precision: Floating point precision. `float16` will consume half the memory of `float32` but produce slightly lower-quality images. The `auto` setting will guess the proper precision based on your video card and operating system.<br>Valid values: `auto`, `float16`, `bfloat16`, `float32`, `autocast`
sequential_guidance: Whether to calculate guidance in serial instead of in parallel, lowering memory requirements.
attention_type: Attention type.<br>Valid values: `auto`, `normal`, `xformers`, `sliced`, `torch-sdp`
attention_slice_size: Slice size, valid when attention_type=="sliced".<br>Valid values: `auto`, `balanced`, `max`, `1`, `2`, `3`, `4`, `5`, `6`, `7`, `8`
force_tiled_decode: Whether to enable tiled VAE decode (reduces memory consumption with some performance penalty).
pil_compress_level: The compress_level setting of PIL.Image.save(), used for PNG encoding. All settings are lossless. 0 = no compression, 1 = fastest with slightly larger filesize, 9 = slowest with smallest filesize. 1 is typically the best setting.
max_queue_size: Maximum number of items in the session queue.
allow_nodes: List of nodes to allow. Omit to allow all.
deny_nodes: List of nodes to deny. Omit to deny none.
node_cache_size: How many cached nodes to keep in memory.
hashing_algorithm: Model hashing algorithm for model installs. 'blake3_multi' is best for SSDs. 'blake3_single' is best for spinning disk HDDs. 'random' disables hashing, instead assigning a UUID to models. Useful when using a memory db to reduce model installation time, or if you don't care about storing stable hashes for models. Alternatively, any other hashlib algorithm is accepted, though these are not nearly as performant as blake3.<br>Valid values: `blake3_multi`, `blake3_single`, `random`, `md5`, `sha1`, `sha224`, `sha256`, `sha384`, `sha512`, `blake2b`, `blake2s`, `sha3_224`, `sha3_256`, `sha3_384`, `sha3_512`, `shake_128`, `shake_256`
remote_api_tokens: List of regular expression and token pairs used when downloading models from URLs. The download URL is tested against the regex, and if it matches, the token is provided as a Bearer token.
"""
_root: Optional[Path] = PrivateAttr(default=None)
_config_file: Optional[Path] = PrivateAttr(default=None)
# fmt: off
type: Literal["InvokeAI"] = "InvokeAI"
# INTERNAL
schema_version: str = Field(default=CONFIG_SCHEMA_VERSION, description="Schema version of the config file. This is not a user-configurable setting.")
# This is only used during v3 models.yaml migration
legacy_models_yaml_path: Optional[Path] = Field(default=None, description="Path to the legacy models.yaml file. This is not a user-configurable setting.")
# WEB
host : str = Field(default="127.0.0.1", description="IP address to bind to", json_schema_extra=Categories.WebServer)
port : int = Field(default=9090, description="Port to bind to", json_schema_extra=Categories.WebServer)
allow_origins : List[str] = Field(default=[], description="Allowed CORS origins", json_schema_extra=Categories.WebServer)
allow_credentials : bool = Field(default=True, description="Allow CORS credentials", json_schema_extra=Categories.WebServer)
allow_methods : List[str] = Field(default=["*"], description="Methods allowed for CORS", json_schema_extra=Categories.WebServer)
allow_headers : List[str] = Field(default=["*"], description="Headers allowed for CORS", json_schema_extra=Categories.WebServer)
# SSL options correspond to https://www.uvicorn.org/settings/#https
ssl_certfile : Optional[Path] = Field(default=None, description="SSL certificate file (for HTTPS)", json_schema_extra=Categories.WebServer)
ssl_keyfile : Optional[Path] = Field(default=None, description="SSL key file", json_schema_extra=Categories.WebServer)
host: str = Field(default="127.0.0.1", description="IP address to bind to. Use `0.0.0.0` to serve to your local network.")
port: int = Field(default=9090, description="Port to bind to.")
allow_origins: list[str] = Field(default=[], description="Allowed CORS origins.")
allow_credentials: bool = Field(default=True, description="Allow CORS credentials.")
allow_methods: list[str] = Field(default=["*"], description="Methods allowed for CORS.")
allow_headers: list[str] = Field(default=["*"], description="Headers allowed for CORS.")
ssl_certfile: Optional[Path] = Field(default=None, description="SSL certificate file for HTTPS. See https://www.uvicorn.org/settings/#https.")
ssl_keyfile: Optional[Path] = Field(default=None, description="SSL key file for HTTPS. See https://www.uvicorn.org/settings/#https.")
# FEATURES
esrgan : bool = Field(default=True, description="Enable/disable upscaling code", json_schema_extra=Categories.Features)
internet_available : bool = Field(default=True, description="If true, attempt to download models on the fly; otherwise only use local models", json_schema_extra=Categories.Features)
log_tokenization : bool = Field(default=False, description="Enable logging of parsed prompt tokens.", json_schema_extra=Categories.Features)
patchmatch : bool = Field(default=True, description="Enable/disable patchmatch inpaint code", json_schema_extra=Categories.Features)
ignore_missing_core_models : bool = Field(default=False, description='Ignore missing models in models/core/convert', json_schema_extra=Categories.Features)
# MISC FEATURES
log_tokenization: bool = Field(default=False, description="Enable logging of parsed prompt tokens.")
patchmatch: bool = Field(default=True, description="Enable patchmatch inpaint code.")
# PATHS
root : Optional[Path] = Field(default=None, description='InvokeAI runtime root directory', json_schema_extra=Categories.Paths)
autoimport_dir : Path = Field(default=Path('autoimport'), description='Path to a directory of models files to be imported on startup.', json_schema_extra=Categories.Paths)
models_dir : Path = Field(default=Path('models'), description='Path to the models directory', json_schema_extra=Categories.Paths)
convert_cache_dir : Path = Field(default=Path('models/.cache'), description='Path to the converted models cache directory', json_schema_extra=Categories.Paths)
legacy_conf_dir : Path = Field(default=Path('configs/stable-diffusion'), description='Path to directory of legacy checkpoint config files', json_schema_extra=Categories.Paths)
db_dir : Path = Field(default=Path('databases'), description='Path to InvokeAI databases directory', json_schema_extra=Categories.Paths)
outdir : Path = Field(default=Path('outputs'), description='Default folder for output images', json_schema_extra=Categories.Paths)
use_memory_db : bool = Field(default=False, description='Use in-memory database for storing image metadata', json_schema_extra=Categories.Paths)
custom_nodes_dir : Path = Field(default=Path('nodes'), description='Path to directory for custom nodes', json_schema_extra=Categories.Paths)
from_file : Optional[Path] = Field(default=None, description='Take command input from the indicated file (command-line client only)', json_schema_extra=Categories.Paths)
autoimport_dir: Path = Field(default=Path("autoimport"), description="Path to a directory of models files to be imported on startup.")
models_dir: Path = Field(default=Path("models"), description="Path to the models directory.")
convert_cache_dir: Path = Field(default=Path("models/.cache"), description="Path to the converted models cache directory. When loading a non-diffusers model, it will be converted and store on disk at this location.")
legacy_conf_dir: Path = Field(default=Path("configs"), description="Path to directory of legacy checkpoint config files.")
db_dir: Path = Field(default=Path("databases"), description="Path to InvokeAI databases directory.")
outputs_dir: Path = Field(default=Path("outputs"), description="Path to directory for outputs.")
custom_nodes_dir: Path = Field(default=Path("nodes"), description="Path to directory for custom nodes.")
# LOGGING
log_handlers : List[str] = Field(default=["console"], description='Log handler. Valid options are "console", "file=<path>", "syslog=path|address:host:port", "http=<url>"', json_schema_extra=Categories.Logging)
log_handlers: list[str] = Field(default=["console"], description='Log handler. Valid options are "console", "file=<path>", "syslog=path|address:host:port", "http=<url>".')
# note - would be better to read the log_format values from logging.py, but this creates circular dependency issues
log_format : Literal['plain', 'color', 'syslog', 'legacy'] = Field(default="color", description='Log format. Use "plain" for text-only, "color" for colorized output, "legacy" for 2.3-style logging and "syslog" for syslog-style', json_schema_extra=Categories.Logging)
log_level : Literal["debug", "info", "warning", "error", "critical"] = Field(default="info", description="Emit logging messages at this level or higher", json_schema_extra=Categories.Logging)
log_sql : bool = Field(default=False, description="Log SQL queries", json_schema_extra=Categories.Logging)
log_format: LOG_FORMAT = Field(default="color", description='Log format. Use "plain" for text-only, "color" for colorized output, "legacy" for 2.3-style logging and "syslog" for syslog-style.')
log_level: LOG_LEVEL = Field(default="info", description="Emit logging messages at this level or higher.")
log_sql: bool = Field(default=False, description="Log SQL queries. `log_level` must be `debug` for this to do anything. Extremely verbose.")
# Development
dev_reload : bool = Field(default=False, description="Automatically reload when Python sources are changed.", json_schema_extra=Categories.Development)
profile_graphs : bool = Field(default=False, description="Enable graph profiling", json_schema_extra=Categories.Development)
profile_prefix : Optional[str] = Field(default=None, description="An optional prefix for profile output files.", json_schema_extra=Categories.Development)
profiles_dir : Path = Field(default=Path('profiles'), description="Directory for graph profiles", json_schema_extra=Categories.Development)
skip_model_hash : bool = Field(default=False, description="Skip model hashing, instead assigning a UUID to models. Useful when using a memory db to reduce startup time.", json_schema_extra=Categories.Development)
version : bool = Field(default=False, description="Show InvokeAI version and exit", json_schema_extra=Categories.Other)
use_memory_db: bool = Field(default=False, description="Use in-memory database. Useful for development.")
dev_reload: bool = Field(default=False, description="Automatically reload when Python sources are changed. Does not reload node definitions.")
profile_graphs: bool = Field(default=False, description="Enable graph profiling using `cProfile`.")
profile_prefix: Optional[str] = Field(default=None, description="An optional prefix for profile output files.")
profiles_dir: Path = Field(default=Path("profiles"), description="Path to profiles output directory.")
# CACHE
ram : float = Field(default=DEFAULT_RAM_CACHE, gt=0, description="Maximum memory amount used by model cache for rapid switching (floating point number, GB)", json_schema_extra=Categories.ModelCache, )
vram : float = Field(default=DEFAULT_VRAM_CACHE, ge=0, description="Amount of VRAM reserved for model storage (floating point number, GB)", json_schema_extra=Categories.ModelCache, )
convert_cache : float = Field(default=DEFAULT_CONVERT_CACHE, ge=0, description="Maximum size of on-disk converted models cache (GB)", json_schema_extra=Categories.ModelCache)
lazy_offload : bool = Field(default=True, description="Keep models in VRAM until their space is needed", json_schema_extra=Categories.ModelCache, )
log_memory_usage : bool = Field(default=False, description="If True, a memory snapshot will be captured before and after every model cache operation, and the result will be logged (at debug level). There is a time cost to capturing the memory snapshots, so it is recommended to only enable this feature if you are actively inspecting the model cache's behaviour.", json_schema_extra=Categories.ModelCache)
ram: float = Field(default_factory=get_default_ram_cache_size, gt=0, description="Maximum memory amount used by memory model cache for rapid switching (GB).")
vram: float = Field(default=DEFAULT_VRAM_CACHE, ge=0, description="Amount of VRAM reserved for model storage (GB).")
convert_cache: float = Field(default=DEFAULT_CONVERT_CACHE, ge=0, description="Maximum size of on-disk converted models cache (GB).")
lazy_offload: bool = Field(default=True, description="Keep models in VRAM until their space is needed.")
log_memory_usage: bool = Field(default=False, description="If True, a memory snapshot will be captured before and after every model cache operation, and the result will be logged (at debug level). There is a time cost to capturing the memory snapshots, so it is recommended to only enable this feature if you are actively inspecting the model cache's behaviour.")
# DEVICE
device : Literal["auto", "cpu", "cuda", "cuda:1", "mps"] = Field(default="auto", description="Generation device", json_schema_extra=Categories.Device)
precision : Literal["auto", "float16", "bfloat16", "float32", "autocast"] = Field(default="auto", description="Floating point precision", json_schema_extra=Categories.Device)
device: DEVICE = Field(default="auto", description="Preferred execution device. `auto` will choose the device depending on the hardware platform and the installed torch capabilities.")
precision: PRECISION = Field(default="auto", description="Floating point precision. `float16` will consume half the memory of `float32` but produce slightly lower-quality images. The `auto` setting will guess the proper precision based on your video card and operating system.")
# GENERATION
sequential_guidance : bool = Field(default=False, description="Whether to calculate guidance in serial instead of in parallel, lowering memory requirements", json_schema_extra=Categories.Generation)
attention_type : Literal["auto", "normal", "xformers", "sliced", "torch-sdp"] = Field(default="auto", description="Attention type", json_schema_extra=Categories.Generation)
attention_slice_size: Literal["auto", "balanced", "max", 1, 2, 3, 4, 5, 6, 7, 8] = Field(default="auto", description='Slice size, valid when attention_type=="sliced"', json_schema_extra=Categories.Generation)
force_tiled_decode : bool = Field(default=False, description="Whether to enable tiled VAE decode (reduces memory consumption with some performance penalty)", json_schema_extra=Categories.Generation)
png_compress_level : int = Field(default=1, description="The compress_level setting of PIL.Image.save(), used for PNG encoding. All settings are lossless. 0 = fastest, largest filesize, 9 = slowest, smallest filesize", json_schema_extra=Categories.Generation)
# QUEUE
max_queue_size : int = Field(default=10000, gt=0, description="Maximum number of items in the session queue", json_schema_extra=Categories.Queue)
sequential_guidance: bool = Field(default=False, description="Whether to calculate guidance in serial instead of in parallel, lowering memory requirements.")
attention_type: ATTENTION_TYPE = Field(default="auto", description="Attention type.")
attention_slice_size: ATTENTION_SLICE_SIZE = Field(default="auto", description='Slice size, valid when attention_type=="sliced".')
force_tiled_decode: bool = Field(default=False, description="Whether to enable tiled VAE decode (reduces memory consumption with some performance penalty).")
pil_compress_level: int = Field(default=1, description="The compress_level setting of PIL.Image.save(), used for PNG encoding. All settings are lossless. 0 = no compression, 1 = fastest with slightly larger filesize, 9 = slowest with smallest filesize. 1 is typically the best setting.")
max_queue_size: int = Field(default=10000, gt=0, description="Maximum number of items in the session queue.")
# NODES
allow_nodes : Optional[List[str]] = Field(default=None, description="List of nodes to allow. Omit to allow all.", json_schema_extra=Categories.Nodes)
deny_nodes : Optional[List[str]] = Field(default=None, description="List of nodes to deny. Omit to deny none.", json_schema_extra=Categories.Nodes)
node_cache_size : int = Field(default=512, description="How many cached nodes to keep in memory", json_schema_extra=Categories.Nodes)
allow_nodes: Optional[list[str]] = Field(default=None, description="List of nodes to allow. Omit to allow all.")
deny_nodes: Optional[list[str]] = Field(default=None, description="List of nodes to deny. Omit to deny none.")
node_cache_size: int = Field(default=512, description="How many cached nodes to keep in memory.")
# MODEL IMPORT
civitai_api_key : Optional[str] = Field(default=os.environ.get("CIVITAI_API_KEY"), description="API key for CivitAI", json_schema_extra=Categories.Other)
# MODEL INSTALL
hashing_algorithm: HASHING_ALGORITHMS = Field(default="blake3_single", description="Model hashing algorithm for model installs. 'blake3_multi' is best for SSDs. 'blake3_single' is best for spinning disk HDDs. 'random' disables hashing, instead assigning a UUID to models. Useful when using a memory db to reduce model installation time, or if you don't care about storing stable hashes for models. Alternatively, any other hashlib algorithm is accepted, though these are not nearly as performant as blake3.")
remote_api_tokens: Optional[list[URLRegexTokenPair]] = Field(default=None, description="List of regular expression and token pairs used when downloading models from URLs. The download URL is tested against the regex, and if it matches, the token is provided as a Bearer token.")
# DEPRECATED FIELDS - STILL HERE IN ORDER TO OBTAIN VALUES FROM PRE-3.1 CONFIG FILES
always_use_cpu : bool = Field(default=False, description="If true, use the CPU for rendering even if a GPU is available.", json_schema_extra=Categories.MemoryPerformance)
max_cache_size : Optional[float] = Field(default=None, gt=0, description="Maximum memory amount used by model cache for rapid switching", json_schema_extra=Categories.MemoryPerformance)
max_vram_cache_size : Optional[float] = Field(default=None, ge=0, description="Amount of VRAM reserved for model storage", json_schema_extra=Categories.MemoryPerformance)
xformers_enabled : bool = Field(default=True, description="Enable/disable memory-efficient attention", json_schema_extra=Categories.MemoryPerformance)
tiled_decode : bool = Field(default=False, description="Whether to enable tiled VAE decode (reduces memory consumption with some performance penalty)", json_schema_extra=Categories.MemoryPerformance)
lora_dir : Optional[Path] = Field(default=None, description='Path to a directory of LoRA/LyCORIS models to be imported on startup.', json_schema_extra=Categories.Paths)
embedding_dir : Optional[Path] = Field(default=None, description='Path to a directory of Textual Inversion embeddings to be imported on startup.', json_schema_extra=Categories.Paths)
controlnet_dir : Optional[Path] = Field(default=None, description='Path to a directory of ControlNet embeddings to be imported on startup.', json_schema_extra=Categories.Paths)
conf_path : Path = Field(default=Path('configs/models.yaml'), description='Path to models definition file', json_schema_extra=Categories.Paths)
# this is not referred to in the source code and can be removed entirely
#free_gpu_mem : Optional[bool] = Field(default=None, description="If true, purge model from GPU after each generation.", json_schema_extra=Categories.MemoryPerformance)
# See InvokeAIAppConfig subclass below for CACHE and DEVICE categories
# fmt: on
model_config = SettingsConfigDict(validate_assignment=True, env_prefix="INVOKEAI")
model_config = SettingsConfigDict(env_prefix="INVOKEAI_", env_ignore_empty=True)
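With `env_prefix="INVOKEAI_"`, any field can be overridden by an environment variable of that form; a small sketch (the port value is chosen purely for illustration):

```python
import os

os.environ["INVOKEAI_PORT"] = "8080"  # must be set before the settings object is constructed

from invokeai.app.services.config.config_default import InvokeAIAppConfig  # noqa: E402

config = InvokeAIAppConfig()
assert config.port == 8080  # pydantic-settings reads and coerces the prefixed env var
```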
def parse_args(
self,
argv: Optional[list[str]] = None,
conf: Optional[DictConfig] = None,
clobber: Optional[bool] = False,
) -> None:
def update_config(self, config: dict[str, Any] | InvokeAIAppConfig, clobber: bool = True) -> None:
"""Updates the config, overwriting existing values.
Args:
config: A dictionary of config settings, or instance of `InvokeAIAppConfig`. If an instance of \
`InvokeAIAppConfig`, only the explicitly set fields will be merged into the singleton config.
clobber: If `True`, overwrite existing values. If `False`, only update fields that are not already set.
"""
Update settings with contents of init file, environment, and command-line settings.
:param conf: alternate Omegaconf dictionary object
:param argv: alternate sys.argv list
:param clobber: overwrite any initialization parameters passed during initialization
if isinstance(config, dict):
new_config = self.model_validate(config)
else:
new_config = config
for field_name in new_config.model_fields_set:
new_value = getattr(new_config, field_name)
current_value = getattr(self, field_name)
if field_name in self.model_fields_set and not clobber:
continue
if new_value != current_value:
setattr(self, field_name, new_value)
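A brief usage sketch for `update_config` (the values below are illustrative):

```python
from invokeai.app.services.config import InvokeAIAppConfig, get_config

config = get_config()

# Merge a plain dict of settings; with clobber=True (the default) existing values are overwritten.
config.update_config({"ram": 12.0, "log_level": "debug"})

# Merging another InvokeAIAppConfig applies only its explicitly-set fields.
overrides = InvokeAIAppConfig(port=8080)
config.update_config(overrides, clobber=False)  # applied only where no explicit value is set yet
```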
def write_file(self, dest_path: Path, as_example: bool = False) -> None:
"""Write the current configuration to file. This will overwrite the existing file.
A `meta` stanza is added to the top of the file, containing metadata about the config file. This is not stored in the config object.
Args:
dest_path: Path to write the config to.
"""
# Set the runtime root directory. We parse command-line switches here
# in order to pick up the --root_dir option.
super().parse_args(argv)
loaded_conf = None
if conf is None:
try:
loaded_conf = OmegaConf.load(self.root_dir / INIT_FILE)
except Exception:
pass
if isinstance(loaded_conf, DictConfig):
InvokeAISettings.initconf = loaded_conf
else:
InvokeAISettings.initconf = conf
dest_path.parent.mkdir(parents=True, exist_ok=True)
with open(dest_path, "w") as file:
# Meta fields should be written in a separate stanza - skip legacy_models_yaml_path
meta_dict = self.model_dump(mode="json", include={"schema_version"})
# parse args again in order to pick up settings in configuration file
super().parse_args(argv)
# User settings
config_dict = self.model_dump(
mode="json",
exclude_unset=False if as_example else True,
exclude_defaults=False if as_example else True,
exclude_none=True if as_example else False,
exclude={"schema_version", "legacy_models_yaml_path"},
)
if self.singleton_init and not clobber:
# When setting values in this way, set validate_assignment to true if you want to validate the value.
for k, v in self.singleton_init.items():
setattr(self, k, v)
@classmethod
def get_config(cls, **kwargs: Any) -> InvokeAIAppConfig:
"""Return a singleton InvokeAIAppConfig configuration object."""
if (
cls.singleton_config is None
or type(cls.singleton_config) is not cls
or (kwargs and cls.singleton_init != kwargs)
):
cls.singleton_config = cls(**kwargs)
cls.singleton_init = kwargs
return cls.singleton_config
@property
def root_path(self) -> Path:
"""Path to the runtime root directory."""
if self.root:
root = Path(self.root).expanduser().absolute()
else:
root = self.find_root().expanduser().absolute()
self.root = root # insulate ourselves from relative paths that may change
return root.resolve()
@property
def root_dir(self) -> Path:
"""Alias for above."""
return self.root_path
if as_example:
file.write(
"# This is an example file with default and example settings. Use the values here as a baseline.\n\n"
)
file.write("# Internal metadata - do not edit:\n")
file.write(yaml.dump(meta_dict, sort_keys=False))
file.write("\n")
file.write("# Put user settings here - see https://invoke-ai.github.io/InvokeAI/features/CONFIGURATION/:\n")
if len(config_dict) > 0:
file.write(yaml.dump(config_dict, sort_keys=False))
def _resolve(self, partial_path: Path) -> Path:
return (self.root_path / partial_path).resolve()
@property
def init_file_path(self) -> Path:
"""Path to invokeai.yaml."""
resolved_path = self._resolve(INIT_FILE)
def root_path(self) -> Path:
"""Path to the runtime root directory, resolved to an absolute path."""
if self._root:
root = Path(self._root).expanduser().absolute()
else:
root = self.find_root().expanduser().absolute()
self._root = root # insulate ourselves from relative paths that may change
return root.resolve()
@property
def config_file_path(self) -> Path:
"""Path to invokeai.yaml, resolved to an absolute path.."""
resolved_path = self._resolve(self._config_file or INIT_FILE)
assert resolved_path is not None
return resolved_path
@property
def output_path(self) -> Optional[Path]:
"""Path to defaults outputs directory."""
return self._resolve(self.outdir)
def autoimport_path(self) -> Path:
"""Path to the autoimports directory, resolved to an absolute path.."""
return self._resolve(self.autoimport_dir)
@property
def outputs_path(self) -> Optional[Path]:
"""Path to the outputs directory, resolved to an absolute path.."""
return self._resolve(self.outputs_dir)
@property
def db_path(self) -> Path:
"""Path to the invokeai.db file."""
"""Path to the invokeai.db file, resolved to an absolute path.."""
db_dir = self._resolve(self.db_dir)
assert db_dir is not None
return db_dir / DB_FILE
@property
def model_conf_path(self) -> Path:
"""Path to models configuration file."""
return self._resolve(self.conf_path)
@property
def legacy_conf_path(self) -> Path:
"""Path to directory of legacy configuration files (e.g. v1-inference.yaml)."""
"""Path to directory of legacy configuration files (e.g. v1-inference.yaml), resolved to an absolute path.."""
return self._resolve(self.legacy_conf_dir)
@property
def models_path(self) -> Path:
"""Path to the models directory."""
"""Path to the models directory, resolved to an absolute path.."""
return self._resolve(self.models_dir)
@property
def models_convert_cache_path(self) -> Path:
"""Path to the converted cache models directory."""
def convert_cache_path(self) -> Path:
"""Path to the converted cache models directory, resolved to an absolute path.."""
return self._resolve(self.convert_cache_dir)
@property
def custom_nodes_path(self) -> Path:
"""Path to the custom nodes directory."""
"""Path to the custom nodes directory, resolved to an absolute path.."""
custom_nodes_path = self._resolve(self.custom_nodes_dir)
assert custom_nodes_path is not None
return custom_nodes_path
# the following methods support legacy calls leftover from the Globals era
@property
def full_precision(self) -> bool:
"""Return true if precision set to float32."""
return self.precision == "float32"
@property
def try_patchmatch(self) -> bool:
"""Return true if patchmatch true."""
return self.patchmatch
@property
def nsfw_checker(self) -> bool:
"""Return value for NSFW checker. The NSFW node is always active and disabled from Web UI."""
return True
@property
def invisible_watermark(self) -> bool:
"""Return value of invisible watermark. It is always active and disabled from Web UI."""
return True
@property
def ram_cache_size(self) -> float:
"""Return the ram cache size using the legacy or modern setting (GB)."""
return self.max_cache_size or self.ram
@property
def vram_cache_size(self) -> float:
"""Return the vram cache size using the legacy or modern setting (GB)."""
return self.max_vram_cache_size or self.vram
@property
def convert_cache_size(self) -> float:
"""Return the convert cache size on disk (GB)."""
return self.convert_cache
@property
def use_cpu(self) -> bool:
"""Return true if the device is set to CPU or the always_use_cpu flag is set."""
return self.always_use_cpu or self.device == "cpu"
@property
def disable_xformers(self) -> bool:
"""Return true if enable_xformers is false (reversed logic) and attention type is not set to xformers."""
disabled_in_config = not self.xformers_enabled
return disabled_in_config and self.attention_type != "xformers"
@property
def profiles_path(self) -> Path:
"""Path to the graph profiles directory."""
"""Path to the graph profiles directory, resolved to an absolute path.."""
return self._resolve(self.profiles_dir)
@staticmethod
def find_root() -> Path:
"""Choose the runtime root directory when not specified on command line or init file."""
return _find_root()
def get_invokeai_config(**kwargs: Any) -> InvokeAIAppConfig:
"""Legacy function which returns InvokeAIAppConfig.get_config()."""
return InvokeAIAppConfig.get_config(**kwargs)
def _find_root() -> Path:
venv = Path(os.environ.get("VIRTUAL_ENV") or ".")
if os.environ.get("INVOKEAI_ROOT"):
root = Path(os.environ["INVOKEAI_ROOT"])
@@ -492,3 +330,158 @@ def _find_root() -> Path:
else:
root = Path("~/invokeai").expanduser().resolve()
return root
class DefaultInvokeAIAppConfig(InvokeAIAppConfig):
"""A version of `InvokeAIAppConfig` that does not automatically parse any settings from environment variables
or any file.
This is useful for writing out a default config file.
Note that init settings are set if provided.
"""
@classmethod
def settings_customise_sources(
cls,
settings_cls: type[BaseSettings],
init_settings: PydanticBaseSettingsSource,
env_settings: PydanticBaseSettingsSource,
dotenv_settings: PydanticBaseSettingsSource,
file_secret_settings: PydanticBaseSettingsSource,
) -> tuple[PydanticBaseSettingsSource, ...]:
return (init_settings,)
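# Illustrative sketch (hypothetical usage): because only `init_settings` is
# returned above, environment variables, dotenv files and secret files are
# ignored, so an instance can be written out as a clean default config file:
#
#     default_config = DefaultInvokeAIAppConfig()
#     default_config.write_file(Path("invokeai.example.yaml"), as_example=True)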
def migrate_v3_config_dict(config_dict: dict[str, Any]) -> InvokeAIAppConfig:
"""Migrate a v3 config dictionary to a current config object.
Args:
config_dict: A dictionary of settings from a v3 config file.
Returns:
An instance of `InvokeAIAppConfig` with the migrated settings.
"""
parsed_config_dict: dict[str, Any] = {}
for _category_name, category_dict in config_dict["InvokeAI"].items():
for k, v in category_dict.items():
# `outdir` was renamed to `outputs_dir` in v4
if k == "outdir":
parsed_config_dict["outputs_dir"] = v
# `max_cache_size` was renamed to `ram` some time in v3, but both names were used
if k == "max_cache_size" and "ram" not in category_dict:
parsed_config_dict["ram"] = v
# `max_vram_cache_size` was renamed to `vram` some time in v3, but both names were used
if k == "max_vram_cache_size" and "vram" not in category_dict:
parsed_config_dict["vram"] = v
if k == "conf_path":
parsed_config_dict["legacy_models_yaml_path"] = v
if k == "legacy_conf_dir":
# The old default for this was "configs/stable-diffusion". If the incoming config has that as the value, we won't set it.
# Else if the path ends in "stable-diffusion", we assume the parent is the new correct path.
# Else we do not attempt to migrate this setting
if v != "configs/stable-diffusion":
parsed_config_dict["legacy_conf_dir"] = v
elif Path(v).name == "stable-diffusion":
parsed_config_dict["legacy_conf_dir"] = str(Path(v).parent)
elif k in InvokeAIAppConfig.model_fields:
# skip unknown fields
parsed_config_dict[k] = v
# When migrating the config file, we should not include currently-set environment variables.
config = DefaultInvokeAIAppConfig.model_validate(parsed_config_dict)
return config
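# Illustrative sketch (hypothetical v3 input; the category names are arbitrary):
#
#     v3_dict = {
#         "InvokeAI": {
#             "Paths": {"outdir": "outputs", "conf_path": "configs/models.yaml"},
#             "Performance": {"max_cache_size": 12.0},
#         }
#     }
#     migrated = migrate_v3_config_dict(v3_dict)
#     # `outdir` -> `outputs_dir`, `conf_path` -> `legacy_models_yaml_path`,
#     # and `max_cache_size` -> `ram` (because no `ram` key is present)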
def load_and_migrate_config(config_path: Path) -> InvokeAIAppConfig:
"""Load and migrate a config file to the latest version.
Args:
config_path: Path to the config file.
Returns:
An instance of `InvokeAIAppConfig` with the loaded and migrated settings.
"""
assert config_path.suffix == ".yaml"
with open(config_path) as file:
loaded_config_dict = yaml.safe_load(file)
assert isinstance(loaded_config_dict, dict)
if "InvokeAI" in loaded_config_dict:
# This is a v3 config file, attempt to migrate it
shutil.copy(config_path, config_path.with_suffix(".yaml.bak"))
try:
# loaded_config_dict could be the wrong shape, but we will catch all exceptions below
migrated_config = migrate_v3_config_dict(loaded_config_dict) # pyright: ignore [reportUnknownArgumentType]
except Exception as e:
shutil.copy(config_path.with_suffix(".yaml.bak"), config_path)
raise RuntimeError(f"Failed to load and migrate v3 config file {config_path}: {e}") from e
migrated_config.write_file(config_path)
return migrated_config
else:
# Attempt to load as a v4 config file
try:
# Meta is not included in the model fields, so we need to validate it separately
config = InvokeAIAppConfig.model_validate(loaded_config_dict)
assert (
config.schema_version == CONFIG_SCHEMA_VERSION
), f"Invalid schema version, expected {CONFIG_SCHEMA_VERSION}: {config.schema_version}"
return config
except Exception as e:
raise RuntimeError(f"Failed to load config file {config_path}: {e}") from e
@lru_cache(maxsize=1)
def get_config() -> InvokeAIAppConfig:
"""Get the global singleton app config.
When first called, this function:
- Creates a config object. `pydantic-settings` handles merging of settings from environment variables, but not the init file.
- Retrieves any provided CLI args from the InvokeAIArgs class. It does not _parse_ the CLI args; that is done in the main entrypoint.
- Sets the root dir, if provided via CLI args.
- Logs in to HF if there is no valid token already.
- Copies all legacy configs to the legacy conf dir (needed for conversion from ckpt to diffusers).
- Reads and merges in settings from the config file if it exists, else writes out a default config file.
On subsequent calls, the object is returned from the cache.
"""
# This object includes environment variables, as parsed by pydantic-settings
config = InvokeAIAppConfig()
args = InvokeAIArgs.args
# This flag serves as a proxy for whether the config was retrieved in the context of the full application or not.
# If it is False, we should just return a default config and not set the root, log in to HF, etc.
if not InvokeAIArgs.did_parse:
return config
# Set CLI args
if root := getattr(args, "root", None):
config._root = Path(root)
if config_file := getattr(args, "config_file", None):
config._config_file = Path(config_file)
# Create the example config file, with some extra example values provided
example_config = DefaultInvokeAIAppConfig()
example_config.remote_api_tokens = [
URLRegexTokenPair(url_regex="cool-models.com", token="my_secret_token"),
URLRegexTokenPair(url_regex="nifty-models.com", token="some_other_token"),
]
example_config.write_file(config.config_file_path.with_suffix(".example.yaml"), as_example=True)
# Copy all legacy configs - We know `__path__[0]` is correct here
configs_src = Path(model_configs.__path__[0]) # pyright: ignore [reportUnknownMemberType, reportUnknownArgumentType, reportAttributeAccessIssue]
shutil.copytree(configs_src, config.legacy_conf_path, dirs_exist_ok=True)
if config.config_file_path.exists():
config_from_file = load_and_migrate_config(config.config_file_path)
# Clobbering here will overwrite any settings that were set via environment variables
config.update_config(config_from_file, clobber=False)
else:
# We should never write env vars to the config file
default_config = DefaultInvokeAIAppConfig()
default_config.write_file(config.config_file_path, as_example=False)
return config
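# Illustrative sketch (hypothetical usage): `get_config()` is memoized via
# `lru_cache(maxsize=1)`, so every caller shares one config object. Tests that
# need a fresh object can clear the cache with the standard functools API:
#
#     config_a = get_config()
#     config_b = get_config()
#     assert config_a is config_b
#     get_config.cache_clear()  # next call rebuilds the singleton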

View File

@ -85,8 +85,10 @@ class DownloadQueueService(DownloadQueueServiceBase):
self._logger.info(f"Waiting for {len(active_jobs)} active download jobs to complete")
with self._queue.mutex:
self._queue.queue.clear()
self.join() # wait for all active jobs to finish
self.cancel_all_jobs()
self._stop_event.set()
for thread in self._worker_pool:
thread.join()
self._worker_pool.clear()
def submit_download_job(

View File

@ -12,6 +12,7 @@ from invokeai.app.services.session_queue.session_queue_common import (
)
from invokeai.app.util.misc import get_timestamp
from invokeai.backend.model_manager import AnyModelConfig
from invokeai.backend.model_manager.config import SubModelType
class EventServiceBase:
@ -80,7 +81,7 @@ class EventServiceBase:
"graph_execution_state_id": graph_execution_state_id,
"node_id": node_id,
"source_node_id": source_node_id,
"progress_image": progress_image.model_dump() if progress_image is not None else None,
"progress_image": progress_image.model_dump(mode="json") if progress_image is not None else None,
"step": step,
"order": order,
"total_steps": total_steps,
@ -180,6 +181,7 @@ class EventServiceBase:
queue_batch_id: str,
graph_execution_state_id: str,
model_config: AnyModelConfig,
submodel_type: Optional[SubModelType] = None,
) -> None:
"""Emitted when a model is requested"""
self.__emit_queue_event(
@ -189,7 +191,8 @@ class EventServiceBase:
"queue_item_id": queue_item_id,
"queue_batch_id": queue_batch_id,
"graph_execution_state_id": graph_execution_state_id,
"model_config": model_config.model_dump(),
"model_config": model_config.model_dump(mode="json"),
"submodel_type": submodel_type,
},
)
@ -200,6 +203,7 @@ class EventServiceBase:
queue_batch_id: str,
graph_execution_state_id: str,
model_config: AnyModelConfig,
submodel_type: Optional[SubModelType] = None,
) -> None:
"""Emitted when a model is correctly loaded (returns model info)"""
self.__emit_queue_event(
@ -209,7 +213,8 @@ class EventServiceBase:
"queue_item_id": queue_item_id,
"queue_batch_id": queue_batch_id,
"graph_execution_state_id": graph_execution_state_id,
"model_config": model_config.model_dump(),
"model_config": model_config.model_dump(mode="json"),
"submodel_type": submodel_type,
},
)
@ -254,8 +259,8 @@ class EventServiceBase:
"started_at": str(session_queue_item.started_at) if session_queue_item.started_at else None,
"completed_at": str(session_queue_item.completed_at) if session_queue_item.completed_at else None,
},
"batch_status": batch_status.model_dump(),
"queue_status": queue_status.model_dump(),
"batch_status": batch_status.model_dump(mode="json"),
"queue_status": queue_status.model_dump(mode="json"),
},
)
@ -381,6 +386,17 @@ class EventServiceBase:
},
)
def emit_model_install_downloads_done(self, source: str) -> None:
"""
Emit once when all parts are downloaded, but before the probing and registration start.
:param source: Source of the model; local path, repo_id or url
"""
self.__emit_model_event(
event_name="model_install_downloads_done",
payload={"source": source},
)
def emit_model_install_running(self, source: str) -> None:
"""
Emit once when an install job becomes active.
@ -405,7 +421,7 @@ class EventServiceBase:
payload={"source": source, "total_bytes": total_bytes, "key": key, "id": id},
)
def emit_model_install_cancelled(self, source: str) -> None:
def emit_model_install_cancelled(self, source: str, id: int) -> None:
"""
Emit when an install job is cancelled.
@ -413,7 +429,7 @@ class EventServiceBase:
"""
self.__emit_model_event(
event_name="model_install_cancelled",
payload={"source": source},
payload={"source": source, "id": id},
)
def emit_model_install_error(self, source: str, error_type: str, error: str, id: int) -> None:

View File

@ -82,7 +82,7 @@ class DiskImageFileStorage(ImageFileStorageBase):
image_path,
"PNG",
pnginfo=pnginfo,
compress_level=self.__invoker.services.configuration.png_compress_level,
compress_level=self.__invoker.services.configuration.pil_compress_level,
)
thumbnail_name = get_thumbnail_name(image_name)

View File

@ -41,8 +41,9 @@ class InvocationCacheBase(ABC):
"""Clears the cache"""
pass
@staticmethod
@abstractmethod
def create_key(self, invocation: BaseInvocation) -> int:
def create_key(invocation: BaseInvocation) -> int:
"""Gets the key for the invocation's cache item"""
pass

View File

@ -61,9 +61,7 @@ class MemoryInvocationCache(InvocationCacheBase):
self._delete_oldest_access(number_to_delete)
self._cache[key] = CachedItem(
invocation_output,
invocation_output.model_dump_json(
warnings=False, exclude_defaults=True, exclude_unset=True, include={"type"}
),
invocation_output.model_dump_json(warnings=False, exclude_defaults=True, exclude_unset=True),
)
def _delete_oldest_access(self, number_to_delete: int) -> None:
@ -81,7 +79,7 @@ class MemoryInvocationCache(InvocationCacheBase):
with self._lock:
return self._delete(key)
def clear(self, *args, **kwargs) -> None:
def clear(self) -> None:
with self._lock:
if self._max_cache_size == 0:
return

View File

@ -25,6 +25,7 @@ if TYPE_CHECKING:
from .images.images_base import ImageServiceABC
from .invocation_cache.invocation_cache_base import InvocationCacheBase
from .invocation_stats.invocation_stats_base import InvocationStatsServiceBase
from .model_images.model_images_base import ModelImageFileStorageBase
from .model_manager.model_manager_base import ModelManagerServiceBase
from .names.names_base import NameServiceBase
from .session_processor.session_processor_base import SessionProcessorBase
@ -49,6 +50,7 @@ class InvocationServices:
image_files: "ImageFileStorageBase",
image_records: "ImageRecordStorageBase",
logger: "Logger",
model_images: "ModelImageFileStorageBase",
model_manager: "ModelManagerServiceBase",
download_queue: "DownloadQueueServiceBase",
performance_statistics: "InvocationStatsServiceBase",
@ -72,6 +74,7 @@ class InvocationServices:
self.image_files = image_files
self.image_records = image_records
self.logger = logger
self.model_images = model_images
self.model_manager = model_manager
self.download_queue = download_queue
self.performance_statistics = performance_statistics

View File

@ -0,0 +1,33 @@
from abc import ABC, abstractmethod
from pathlib import Path
from PIL.Image import Image as PILImageType
class ModelImageFileStorageBase(ABC):
"""Low-level service responsible for storing and retrieving image files."""
@abstractmethod
def get(self, model_key: str) -> PILImageType:
"""Retrieves a model image as PIL Image."""
pass
@abstractmethod
def get_path(self, model_key: str) -> Path:
"""Gets the internal path to a model image."""
pass
@abstractmethod
def get_url(self, model_key: str) -> str | None:
"""Gets the URL to fetch a model image."""
pass
@abstractmethod
def save(self, image: PILImageType, model_key: str) -> None:
"""Saves a model image."""
pass
@abstractmethod
def delete(self, model_key: str) -> None:
"""Deletes a model image."""
pass

View File

@ -0,0 +1,20 @@
# TODO: Should these exceptions subclass existing Python exceptions?
class ModelImageFileNotFoundException(Exception):
"""Raised when an image file is not found in storage."""
def __init__(self, message="Model image file not found"):
super().__init__(message)
class ModelImageFileSaveException(Exception):
"""Raised when an image cannot be saved."""
def __init__(self, message="Model image file not saved"):
super().__init__(message)
class ModelImageFileDeleteException(Exception):
"""Raised when an image cannot be deleted."""
def __init__(self, message="Model image file not deleted"):
super().__init__(message)

View File

@ -0,0 +1,85 @@
from pathlib import Path
from PIL import Image
from PIL.Image import Image as PILImageType
from send2trash import send2trash
from invokeai.app.services.invoker import Invoker
from invokeai.app.util.misc import uuid_string
from invokeai.app.util.thumbnails import make_thumbnail
from .model_images_base import ModelImageFileStorageBase
from .model_images_common import (
ModelImageFileDeleteException,
ModelImageFileNotFoundException,
ModelImageFileSaveException,
)
class ModelImageFileStorageDisk(ModelImageFileStorageBase):
"""Stores images on disk"""
def __init__(self, model_images_folder: Path):
self._model_images_folder = model_images_folder
self._validate_storage_folders()
def start(self, invoker: Invoker) -> None:
self._invoker = invoker
def get(self, model_key: str) -> PILImageType:
try:
path = self.get_path(model_key)
if not self._validate_path(path):
raise ModelImageFileNotFoundException
return Image.open(path)
except FileNotFoundError as e:
raise ModelImageFileNotFoundException from e
def save(self, image: PILImageType, model_key: str) -> None:
try:
self._validate_storage_folders()
image_path = self._model_images_folder / (model_key + ".webp")
thumbnail = make_thumbnail(image, 256)
thumbnail.save(image_path, format="webp")
except Exception as e:
raise ModelImageFileSaveException from e
def get_path(self, model_key: str) -> Path:
path = self._model_images_folder / (model_key + ".webp")
return path
def get_url(self, model_key: str) -> str | None:
path = self.get_path(model_key)
if not self._validate_path(path):
return
url = self._invoker.services.urls.get_model_image_url(model_key)
# The image URL never changes, so we must add a random query string to it to prevent caching
url += f"?{uuid_string()}"
return url
def delete(self, model_key: str) -> None:
try:
path = self.get_path(model_key)
if not self._validate_path(path):
raise ModelImageFileNotFoundException
send2trash(path)
except Exception as e:
raise ModelImageFileDeleteException from e
def _validate_path(self, path: Path) -> bool:
"""Validates the path given for an image."""
return path.exists()
def _validate_storage_folders(self) -> None:
"""Checks if the required folders exist and create them if they don't"""
self._model_images_folder.mkdir(parents=True, exist_ok=True)
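# Illustrative usage sketch (hypothetical folder and image): images are stored as
# 256px WebP thumbnails named `<model_key>.webp` under the configured folder.
#
#     from pathlib import Path
#     from PIL import Image
#
#     storage = ModelImageFileStorageDisk(Path("/tmp/model_images"))
#     storage.save(Image.new("RGB", (512, 512)), model_key="abc123")
#     assert storage.get_path("abc123").name == "abc123.webp"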

View File

@ -1,7 +1,6 @@
"""Initialization file for model install service package."""
from .model_install_base import (
CivitaiModelSource,
HFModelSource,
InstallStatus,
LocalModelSource,
@ -23,5 +22,4 @@ __all__ = [
"LocalModelSource",
"HFModelSource",
"URLModelSource",
"CivitaiModelSource",
]

View File

@ -91,21 +91,6 @@ class LocalModelSource(StringLikeSource):
return Path(self.path).as_posix()
class CivitaiModelSource(StringLikeSource):
"""A Civitai version id, with optional variant and access token."""
version_id: int
variant: Optional[ModelRepoVariant] = None
access_token: Optional[str] = None
type: Literal["civitai"] = "civitai"
def __str__(self) -> str:
"""Return string version of repoid when string rep needed."""
base: str = str(self.version_id)
base += f" ({self.variant})" if self.variant else ""
return base
class HFModelSource(StringLikeSource):
"""
A HuggingFace repo_id with optional variant, sub-folder and access token.
@ -129,8 +114,10 @@ class HFModelSource(StringLikeSource):
def __str__(self) -> str:
"""Return string version of repoid when string rep needed."""
base: str = self.repo_id
if self.variant:
base += f":{self.variant or ''}"
base += f":{self.subfolder}" if self.subfolder else ""
if self.subfolder:
base += f":{self.subfolder}"
return base
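# Illustrative sketch (hypothetical values): the string form appends the variant
# and subfolder only when they are set, e.g.
#
#     str(HFModelSource(repo_id="org/model"))                    # "org/model"
#     str(HFModelSource(repo_id="org/model", subfolder="unet"))  # "org/model:unet"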
@ -146,14 +133,11 @@ class URLModelSource(StringLikeSource):
return str(self.url)
ModelSource = Annotated[
Union[LocalModelSource, HFModelSource, CivitaiModelSource, URLModelSource], Field(discriminator="type")
]
ModelSource = Annotated[Union[LocalModelSource, HFModelSource, URLModelSource], Field(discriminator="type")]
MODEL_SOURCE_TO_TYPE_MAP = {
URLModelSource: ModelSourceType.Url,
HFModelSource: ModelSourceType.HFRepoID,
CivitaiModelSource: ModelSourceType.CivitAI,
LocalModelSource: ModelSourceType.Path,
}
@ -484,6 +468,19 @@ class ModelInstallServiceBase(ABC):
def sync_to_config(self) -> None:
"""Synchronize models on disk to those in the model record database."""
@abstractmethod
def sync_model_path(self, key: str) -> AnyModelConfig:
"""
Move model into the location indicated by its basetype, type and name.
Call this after updating a model's attributes in order to move
the model's path into the location indicated by its basetype, type and
name. Applies only to models whose paths are within the root `models_dir`
directory.
May raise an UnknownModelException.
"""
@abstractmethod
def download_and_cache(self, source: Union[str, AnyHttpUrl], access_token: Optional[str] = None) -> Path:
"""

View File

@ -2,6 +2,7 @@
import os
import re
import signal
import threading
import time
from hashlib import sha256
@ -11,6 +12,7 @@ from shutil import copyfile, copytree, move, rmtree
from tempfile import mkdtemp
from typing import Any, Dict, List, Optional, Set, Union
import yaml
from huggingface_hub import HfFolder
from pydantic.networks import AnyHttpUrl
from requests import Session
@ -21,7 +23,6 @@ from invokeai.app.services.events.events_base import EventServiceBase
from invokeai.app.services.invoker import Invoker
from invokeai.app.services.model_records import DuplicateModelException, ModelRecordServiceBase
from invokeai.app.services.model_records.model_records_base import ModelRecordChanges
from invokeai.app.util.misc import uuid_string
from invokeai.backend.model_manager.config import (
AnyModelConfig,
BaseModelType,
@ -33,12 +34,12 @@ from invokeai.backend.model_manager.config import (
)
from invokeai.backend.model_manager.metadata import (
AnyModelRepoMetadata,
CivitaiMetadataFetch,
HuggingFaceMetadataFetch,
ModelMetadataFetchBase,
ModelMetadataWithFiles,
RemoteModelFile,
)
from invokeai.backend.model_manager.metadata.metadata_base import CivitaiMetadata, HuggingFaceMetadata
from invokeai.backend.model_manager.metadata.metadata_base import HuggingFaceMetadata
from invokeai.backend.model_manager.probe import ModelProbe
from invokeai.backend.model_manager.search import ModelSearch
from invokeai.backend.util import Chdir, InvokeAILogger
@ -46,7 +47,6 @@ from invokeai.backend.util.devices import choose_precision, choose_torch_device
from .model_install_base import (
MODEL_SOURCE_TO_TYPE_MAP,
CivitaiModelSource,
HFModelSource,
InstallStatus,
LocalModelSource,
@ -94,6 +94,7 @@ class ModelInstallService(ModelInstallServiceBase):
self._download_cache: Dict[AnyHttpUrl, ModelInstallJob] = {}
self._running = False
self._session = session
self._install_thread: Optional[threading.Thread] = None
self._next_job_id = 0
@property
@ -112,28 +113,56 @@ class ModelInstallService(ModelInstallServiceBase):
# makes the installer harder to use outside the web app
def start(self, invoker: Optional[Invoker] = None) -> None:
"""Start the installer thread."""
# Yes, this is weird. When the installer thread is running, the
# thread masks the ^C signal. When we receive a
# sigINT, we stop the thread, reset sigINT, and send a new
# sigINT to the parent process.
def sigint_handler(signum, frame):
self.stop()
signal.signal(signal.SIGINT, signal.SIG_DFL)
signal.raise_signal(signal.SIGINT)
signal.signal(signal.SIGINT, sigint_handler)
with self._lock:
if self._running:
raise Exception("Attempt to start the installer service twice")
self._start_installer_thread()
self._remove_dangling_install_dirs()
self._migrate_yaml()
self.sync_to_config()
def stop(self, invoker: Optional[Invoker] = None) -> None:
"""Stop the installer thread; after this the object can be deleted and garbage collected."""
with self._lock:
if not self._running:
raise Exception("Attempt to stop the install service before it was started")
self._logger.debug("calling stop_event.set()")
self._stop_event.set()
with self._install_queue.mutex:
self._install_queue.queue.clear() # get rid of pending jobs
active_jobs = [x for x in self.list_jobs() if x.running]
if active_jobs:
self._logger.warning("Waiting for active install job to complete")
self.wait_for_installs()
self._clear_pending_jobs()
self._download_cache.clear()
assert self._install_thread is not None
self._install_thread.join()
self._running = False
def _clear_pending_jobs(self) -> None:
for job in self.list_jobs():
if not job.in_terminal_state:
self._logger.warning("Cancelling job {job.id}")
self.cancel_job(job)
while True:
try:
job = self._install_queue.get(block=False)
self._install_queue.task_done()
except Empty:
break
def _put_in_queue(self, job: ModelInstallJob) -> None:
if self._stop_event.is_set():
self.cancel_job(job)
else:
self._install_queue.put(job)
def register_path(
self,
model_path: Union[Path, str],
@ -154,10 +183,7 @@ class ModelInstallService(ModelInstallServiceBase):
model_path = Path(model_path)
config = config or {}
if self._app_config.skip_model_hash:
config["hash"] = uuid_string()
info: AnyModelConfig = ModelProbe.probe(Path(model_path), config)
info: AnyModelConfig = ModelProbe.probe(Path(model_path), config, hash_algo=self._app_config.hashing_algorithm)
if preferred_name := config.get("name"):
preferred_name = Path(preferred_name).with_suffix(model_path.suffix)
@ -199,9 +225,16 @@ class ModelInstallService(ModelInstallServiceBase):
access_token=access_token,
)
elif re.match(r"^https?://[^/]+", source):
# Pull the token from config if it exists and matches the URL
_token = access_token
if _token is None:
for pair in self.app_config.remote_api_tokens or []:
if re.search(pair.url_regex, source):
_token = pair.token
break
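# Illustrative sketch (hypothetical values): with a config entry such as
#
#     remote_api_tokens:
#       - url_regex: cool-models.com
#         token: my_secret_token
#
# a URL like "https://cool-models.com/some-model.safetensors" matches via
# re.search() and the download inherits "my_secret_token" as its access token.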
source_obj = URLModelSource(
url=AnyHttpUrl(source),
access_token=access_token,
access_token=_token,
)
else:
raise ValueError(f"Unsupported model source: '{source}'")
@ -215,9 +248,7 @@ class ModelInstallService(ModelInstallServiceBase):
if isinstance(source, LocalModelSource):
install_job = self._import_local_model(source, config)
self._install_queue.put(install_job) # synchronously install
elif isinstance(source, CivitaiModelSource):
install_job = self._import_from_civitai(source, config)
self._put_in_queue(install_job) # synchronously install
elif isinstance(source, HFModelSource):
install_job = self._import_from_hf(source, config)
elif isinstance(source, URLModelSource):
@ -252,7 +283,6 @@ class ModelInstallService(ModelInstallServiceBase):
raise TimeoutError("Timeout exceeded")
return job
# TODO: Better name? Maybe wait_for_jobs()? Maybe too easily confused with above
def wait_for_installs(self, timeout: int = 0) -> List[ModelInstallJob]: # noqa D102
"""Block until all installation jobs are done."""
start = time.time()
@ -262,6 +292,7 @@ class ModelInstallService(ModelInstallServiceBase):
if timeout > 0 and time.time() - start > timeout:
raise TimeoutError("Timeout exceeded")
self._install_queue.join()
return self._install_jobs
def cancel_job(self, job: ModelInstallJob) -> None:
@ -278,16 +309,67 @@ class ModelInstallService(ModelInstallServiceBase):
def sync_to_config(self) -> None:
"""Synchronize models on disk to those in the config record store database."""
self._scan_models_directory()
if autoimport := self._app_config.autoimport_dir:
if self._app_config.autoimport_path:
self._logger.info("Scanning autoimport directory for new models")
installed = self.scan_directory(self._app_config.root_path / autoimport)
installed = self.scan_directory(self._app_config.autoimport_path)
self._logger.info(f"{len(installed)} new models registered")
self._logger.info("Model installer (re)initialized")
def _migrate_yaml(self) -> None:
db_models = self.record_store.all_models()
legacy_models_yaml_path = (
self._app_config.legacy_models_yaml_path or self._app_config.root_path / "configs" / "models.yaml"
)
# The old path may be relative to the root path
if not legacy_models_yaml_path.exists():
legacy_models_yaml_path = Path(self._app_config.root_path, legacy_models_yaml_path)
if legacy_models_yaml_path.exists():
legacy_models_yaml = yaml.safe_load(legacy_models_yaml_path.read_text())
yaml_metadata = legacy_models_yaml.pop("__metadata__")
yaml_version = yaml_metadata.get("version")
if yaml_version != "3.0.0":
raise ValueError(
f"Attempted migration of unsupported `models.yaml` v{yaml_version}. Only v3.0.0 is supported. Exiting."
)
self._logger.info(
f"Starting one-time migration of {len(legacy_models_yaml.items())} models from {str(legacy_models_yaml_path)}. This may take a few minutes."
)
if len(db_models) == 0 and len(legacy_models_yaml.items()) != 0:
for model_key, stanza in legacy_models_yaml.items():
_, _, model_name = str(model_key).split("/")
model_path = Path(stanza["path"])
if not model_path.is_absolute():
model_path = self._app_config.models_path / model_path
model_path = model_path.resolve()
config: dict[str, Any] = {}
config["name"] = model_name
config["description"] = stanza.get("description")
config["config_path"] = stanza.get("config")
try:
id = self.register_path(model_path=model_path, config=config)
self._logger.info(f"Migrated {model_name} with id {id}")
except Exception as e:
self._logger.warning(f"Model at {model_path} could not be migrated: {e}")
# Rename `models.yaml` to `models.yaml.bak` to prevent re-migration
legacy_models_yaml_path.rename(legacy_models_yaml_path.with_suffix(".yaml.bak"))
# Unset the path - we are done with it either way
self._app_config.legacy_models_yaml_path = None
def scan_directory(self, scan_dir: Path, install: bool = False) -> List[str]: # noqa D102
self._cached_model_paths = {Path(x.path).absolute() for x in self.record_store.all_models()}
self._cached_model_paths = {Path(x.path).resolve() for x in self.record_store.all_models()}
callback = self._scan_install if install else self._scan_register
search = ModelSearch(on_model_found=callback, config=self._app_config)
search = ModelSearch(on_model_found=callback)
self._models_installed.clear()
search.search(scan_dir)
return list(self._models_installed)
@ -299,7 +381,7 @@ class ModelInstallService(ModelInstallServiceBase):
"""Unregister the model. Delete its files only if they are within our models directory."""
model = self.record_store.get_model(key)
models_dir = self.app_config.models_path
model_path = models_dir / model.path
model_path = models_dir / Path(model.path) # handle legacy relative model paths
if model_path.is_relative_to(models_dir):
self.unconditionally_delete(key)
else:
@ -307,11 +389,11 @@ class ModelInstallService(ModelInstallServiceBase):
def unconditionally_delete(self, key: str) -> None: # noqa D102
model = self.record_store.get_model(key)
path = self.app_config.models_path / model.path
if path.is_dir():
rmtree(path)
else:
path.unlink()
model_path = self.app_config.models_path / model.path
if model_path.is_file() or model_path.is_symlink():
model_path.unlink()
elif model_path.is_dir():
rmtree(model_path)
self.unregister(key)
def download_and_cache(
@ -322,7 +404,7 @@ class ModelInstallService(ModelInstallServiceBase):
) -> Path:
"""Download the model file located at source to the models cache and return its Path."""
model_hash = sha256(str(source).encode("utf-8")).hexdigest()[0:32]
model_path = self._app_config.models_convert_cache_path / model_hash
model_path = self._app_config.convert_cache_path / model_hash
# We expect the cache directory to contain one and only one downloaded file.
# We don't know the file's name in advance, as it is set by the download
@ -350,20 +432,20 @@ class ModelInstallService(ModelInstallServiceBase):
# Internal functions that manage the installer threads
# --------------------------------------------------------------------------------------------
def _start_installer_thread(self) -> None:
threading.Thread(target=self._install_next_item, daemon=True).start()
self._install_thread = threading.Thread(target=self._install_next_item, daemon=True)
self._install_thread.start()
self._running = True
def _install_next_item(self) -> None:
done = False
while not done:
self._logger.debug(f"Installer thread {threading.get_ident()} starting")
while True:
if self._stop_event.is_set():
done = True
continue
break
self._logger.debug(f"Installer thread {threading.get_ident()} polling")
try:
job = self._install_queue.get(timeout=1)
except Empty:
continue
assert job.local_path is not None
try:
if job.cancelled:
@ -372,19 +454,33 @@ class ModelInstallService(ModelInstallServiceBase):
elif job.errored:
self._signal_job_errored(job)
elif (
job.waiting or job.downloads_done
): # local jobs will be in waiting state, remote jobs will be downloading state
elif job.waiting or job.downloads_done:
self._register_or_install(job)
except InvalidModelConfigException as excp:
self._set_error(job, excp)
except (OSError, DuplicateModelException) as excp:
self._set_error(job, excp)
finally:
# if this is an install of a remote file, then clean up the temporary directory
if job._install_tmpdir is not None:
rmtree(job._install_tmpdir)
self._install_completed_event.set()
self._install_queue.task_done()
self._logger.info(f"Installer thread {threading.get_ident()} exiting")
def _register_or_install(self, job: ModelInstallJob) -> None:
# local jobs will be in waiting state, remote jobs will be downloading state
job.total_bytes = self._stat_size(job.local_path)
job.bytes = job.total_bytes
self._signal_job_running(job)
job.config_in["source"] = str(job.source)
job.config_in["source_type"] = MODEL_SOURCE_TO_TYPE_MAP[job.source.__class__]
# enter the metadata, if there is any
if isinstance(job.source_metadata, (CivitaiMetadata, HuggingFaceMetadata)):
if isinstance(job.source_metadata, (HuggingFaceMetadata)):
job.config_in["source_api_response"] = job.source_metadata.api_response
if isinstance(job.source_metadata, CivitaiMetadata) and job.source_metadata.trigger_phrases:
job.config_in["trigger_phrases"] = job.source_metadata.trigger_phrases
if job.inplace:
key = self.register_path(job.local_path, job.config_in)
@ -393,7 +489,7 @@ class ModelInstallService(ModelInstallServiceBase):
job.config_out = self.record_store.get_model(key)
self._signal_job_completed(job)
except InvalidModelConfigException as excp:
def _set_error(self, job: ModelInstallJob, excp: Exception) -> None:
if any(x.content_type is not None and "text/html" in x.content_type for x in job.download_parts):
job.set_error(
InvalidModelConfigException(
@ -404,19 +500,6 @@ class ModelInstallService(ModelInstallServiceBase):
job.set_error(excp)
self._signal_job_errored(job)
except (OSError, DuplicateModelException) as excp:
job.set_error(excp)
self._signal_job_errored(job)
finally:
# if this is an install of a remote file, then clean up the temporary directory
if job._install_tmpdir is not None:
rmtree(job._install_tmpdir)
self._install_completed_event.set()
self._install_queue.task_done()
self._logger.info("Install thread exiting")
# --------------------------------------------------------------------------------------------
# Internal functions that manage the models directory
# --------------------------------------------------------------------------------------------
@ -450,11 +533,13 @@ class ModelInstallService(ModelInstallServiceBase):
self._logger.info(f"Scanning {self._app_config.models_path} for new and orphaned models")
for cur_base_model in BaseModelType:
for cur_model_type in ModelType:
models_dir = Path(cur_base_model.value, cur_model_type.value)
models_dir = self._app_config.models_path / Path(cur_base_model.value, cur_model_type.value)
if not models_dir.exists():
continue
installed.update(self.scan_directory(models_dir))
self._logger.info(f"{len(installed)} new models registered; {len(defunct_models)} unregistered")
def _sync_model_path(self, key: str) -> AnyModelConfig:
def sync_model_path(self, key: str) -> AnyModelConfig:
"""
Move model into the location indicated by its basetype, type and name.
@ -466,25 +551,29 @@ class ModelInstallService(ModelInstallServiceBase):
May raise an UnknownModelException.
"""
model = self.record_store.get_model(key)
old_path = Path(model.path)
models_dir = self.app_config.models_path
old_path = Path(model.path).resolve()
models_dir = self.app_config.models_path.resolve()
if not old_path.is_relative_to(models_dir):
return model
new_path = models_dir / model.base.value / model.type.value / model.name
new_path = (models_dir / model.base.value / model.type.value / model.name).with_suffix(old_path.suffix)
if old_path == new_path or new_path.exists() and old_path == new_path.resolve():
return model
self._logger.info(f"Moving {model.name} to {new_path}.")
new_path = self._move_model(old_path, new_path)
model.path = new_path.relative_to(models_dir).as_posix()
model.path = new_path.as_posix()
self.record_store.update_model(key, ModelRecordChanges(path=model.path))
return model
def _scan_register(self, model: Path) -> bool:
if model in self._cached_model_paths:
if model.resolve() in self._cached_model_paths:
return True
try:
id = self.register_path(model)
self._sync_model_path(id) # possibly move it to right place in `models`
self.sync_model_path(id) # possibly move it to right place in `models`
self._logger.info(f"Registered {model.name} with id {id}")
self._models_installed.add(id)
except DuplicateModelException:
@ -533,22 +622,16 @@ class ModelInstallService(ModelInstallServiceBase):
) -> str:
config = config or {}
if self._app_config.skip_model_hash:
config["hash"] = uuid_string()
info = info or ModelProbe.probe(model_path, config, hash_algo=self._app_config.hashing_algorithm)
info = info or ModelProbe.probe(model_path, config)
model_path = model_path.absolute()
if model_path.is_relative_to(self.app_config.models_path):
model_path = model_path.relative_to(self.app_config.models_path)
model_path = model_path.resolve()
info.path = model_path.as_posix()
# add 'main' specific fields
# Checkpoints have a config file needed for conversion - resolve this to an absolute path
if isinstance(info, CheckpointConfigBase):
# make config relative to our root
legacy_conf = (self.app_config.root_dir / self.app_config.legacy_conf_dir / info.config_path).resolve()
info.config_path = legacy_conf.relative_to(self.app_config.root_dir).as_posix()
legacy_conf = (self.app_config.legacy_conf_path / info.config_path).resolve()
info.config_path = legacy_conf.as_posix()
self.record_store.add_model(info)
return info.key
@ -573,16 +656,6 @@ class ModelInstallService(ModelInstallServiceBase):
inplace=source.inplace or False,
)
def _import_from_civitai(self, source: CivitaiModelSource, config: Optional[Dict[str, Any]]) -> ModelInstallJob:
if not source.access_token:
self._logger.info("No Civitai access token provided; some models may not be downloadable.")
metadata = CivitaiMetadataFetch(self._session, self.app_config.get_config().civitai_api_key).from_id(
str(source.version_id)
)
assert isinstance(metadata, ModelMetadataWithFiles)
remote_files = metadata.download_urls(session=self._session)
return self._import_remote_model(source=source, config=config, metadata=metadata, remote_files=remote_files)
def _import_from_hf(self, source: HFModelSource, config: Optional[Dict[str, Any]]) -> ModelInstallJob:
# Add user's cached access token to HuggingFace requests
source.access_token = source.access_token or HfFolder.get_token()
@ -605,7 +678,7 @@ class ModelInstallService(ModelInstallServiceBase):
)
def _import_from_url(self, source: URLModelSource, config: Optional[Dict[str, Any]]) -> ModelInstallJob:
# URLs from Civitai or HuggingFace will be handled specially
# URLs from HuggingFace will be handled specially
metadata = None
fetcher = None
try:
@ -613,8 +686,6 @@ class ModelInstallService(ModelInstallServiceBase):
except ValueError:
pass
kwargs: dict[str, Any] = {"session": self._session}
if fetcher is CivitaiMetadataFetch:
kwargs["api_key"] = self._app_config.get_config().civitai_api_key
if fetcher is not None:
metadata = fetcher(**kwargs).from_url(source.url)
self._logger.debug(f"metadata={metadata}")
@ -631,7 +702,7 @@ class ModelInstallService(ModelInstallServiceBase):
def _import_remote_model(
self,
source: HFModelSource | CivitaiModelSource | URLModelSource,
source: HFModelSource | URLModelSource,
remote_files: List[RemoteModelFile],
metadata: Optional[AnyModelRepoMetadata],
config: Optional[Dict[str, Any]],
@ -671,12 +742,13 @@ class ModelInstallService(ModelInstallServiceBase):
install_job._install_tmpdir = tmpdir
assert install_job.total_bytes is not None # to avoid type checking complaints in the loop below
self._logger.info(f"Queuing {source} for downloading")
files_string = "file" if len(remote_files) == 1 else "file"
self._logger.info(f"Queuing model install: {source} ({len(remote_files)} {files_string})")
self._logger.debug(f"remote_files={remote_files}")
for model_file in remote_files:
url = model_file.url
path = root / model_file.path.relative_to(subfolder)
self._logger.info(f"Downloading {url} => {path}")
self._logger.debug(f"Downloading {url} => {path}")
install_job.total_bytes += model_file.size
assert hasattr(source, "access_token")
dest = tmpdir / path.parent
@ -712,7 +784,7 @@ class ModelInstallService(ModelInstallServiceBase):
# Callbacks are executed by the download queue in a separate thread
# ------------------------------------------------------------------
def _download_started_callback(self, download_job: DownloadJob) -> None:
self._logger.info(f"{download_job.source}: model download started")
self._logger.info(f"Model download started: {download_job.source}")
with self._lock:
install_job = self._download_cache[download_job.source]
install_job.status = InstallStatus.DOWNLOADING
@ -738,17 +810,17 @@ class ModelInstallService(ModelInstallServiceBase):
self._signal_job_downloading(install_job)
def _download_complete_callback(self, download_job: DownloadJob) -> None:
self._logger.info(f"{download_job.source}: model download complete")
self._logger.info(f"Model download complete: {download_job.source}")
with self._lock:
install_job = self._download_cache[download_job.source]
self._download_cache.pop(download_job.source, None)
# are there any more active jobs left in this task?
if install_job.downloading and all(x.complete for x in install_job.download_parts):
install_job.status = InstallStatus.DOWNLOADS_DONE
self._install_queue.put(install_job)
self._signal_job_downloads_done(install_job)
self._put_in_queue(install_job)
# Let other threads know that the number of downloads has changed
self._download_cache.pop(download_job.source, None)
self._downloads_changed_event.set()
def _download_error_callback(self, download_job: DownloadJob, excp: Optional[Exception] = None) -> None:
@ -771,7 +843,7 @@ class ModelInstallService(ModelInstallServiceBase):
if not install_job:
return
self._downloads_changed_event.set()
self._logger.warning(f"{download_job.source}: model download cancelled")
self._logger.warning(f"Model download canceled: {download_job.source}")
# if install job has already registered an error, then do not replace its status with cancelled
if not install_job.errored:
install_job.cancel()
@ -788,14 +860,14 @@ class ModelInstallService(ModelInstallServiceBase):
if all(x.in_terminal_state for x in install_job.download_parts):
# When all parts have reached their terminal state, we finalize the job to clean up the temporary directory and other resources
self._install_queue.put(install_job)
self._put_in_queue(install_job)
# ------------------------------------------------------------------------------------------------
# Internal methods that put events on the event bus
# ------------------------------------------------------------------------------------------------
def _signal_job_running(self, job: ModelInstallJob) -> None:
job.status = InstallStatus.RUNNING
self._logger.info(f"{job.source}: model installation started")
self._logger.info(f"Model install started: {job.source}")
if self._event_bus:
self._event_bus.emit_model_install_running(str(job.source))
@ -821,12 +893,17 @@ class ModelInstallService(ModelInstallServiceBase):
id=job.id,
)
def _signal_job_downloads_done(self, job: ModelInstallJob) -> None:
job.status = InstallStatus.DOWNLOADS_DONE
self._logger.info(f"Model download complete: {job.source}")
if self._event_bus:
self._event_bus.emit_model_install_downloads_done(str(job.source))
def _signal_job_completed(self, job: ModelInstallJob) -> None:
job.status = InstallStatus.COMPLETED
assert job.config_out
self._logger.info(
f"{job.source}: model installation completed. {job.local_path} registered key {job.config_out.key}"
)
self._logger.info(f"Model install complete: {job.source}")
self._logger.debug(f"{job.local_path} registered key {job.config_out.key}")
if self._event_bus:
assert job.local_path is not None
assert job.config_out is not None
@ -834,7 +911,7 @@ class ModelInstallService(ModelInstallServiceBase):
self._event_bus.emit_model_install_completed(str(job.source), key, id=job.id)
def _signal_job_errored(self, job: ModelInstallJob) -> None:
self._logger.info(f"{job.source}: model installation encountered an exception: {job.error_type}\n{job.error}")
self._logger.info(f"Model install error: {job.source}, {job.error_type}\n{job.error}")
if self._event_bus:
error_type = job.error_type
error = job.error
@ -843,14 +920,12 @@ class ModelInstallService(ModelInstallServiceBase):
self._event_bus.emit_model_install_error(str(job.source), error_type, error, id=job.id)
def _signal_job_cancelled(self, job: ModelInstallJob) -> None:
self._logger.info(f"{job.source}: model installation was cancelled")
self._logger.info(f"Model install canceled: {job.source}")
if self._event_bus:
self._event_bus.emit_model_install_cancelled(str(job.source))
self._event_bus.emit_model_install_cancelled(str(job.source), id=job.id)
@staticmethod
def get_fetcher_from_url(url: str):
if re.match(r"^https?://civitai.com/", url.lower()):
return CivitaiMetadataFetch
elif re.match(r"^https?://huggingface.co/[^/]+/[^/]+$", url.lower()):
def get_fetcher_from_url(url: str) -> ModelMetadataFetchBase:
if re.match(r"^https?://huggingface.co/[^/]+/[^/]+$", url.lower()):
return HuggingFaceMetadataFetch
raise ValueError(f"Unsupported model source: '{url}'")

View File

@ -68,6 +68,7 @@ class ModelLoadService(ModelLoadServiceBase):
self._emit_load_event(
context_data=context_data,
model_config=model_config,
submodel_type=submodel_type,
)
implementation, model_config, submodel_type = self._registry.get_implementation(model_config, submodel_type) # type: ignore
@ -82,6 +83,7 @@ class ModelLoadService(ModelLoadServiceBase):
self._emit_load_event(
context_data=context_data,
model_config=model_config,
submodel_type=submodel_type,
loaded=True,
)
return loaded_model
@ -91,6 +93,7 @@ class ModelLoadService(ModelLoadServiceBase):
context_data: InvocationContextData,
model_config: AnyModelConfig,
loaded: Optional[bool] = False,
submodel_type: Optional[SubModelType] = None,
) -> None:
if not self._invoker:
return
@ -102,6 +105,7 @@ class ModelLoadService(ModelLoadServiceBase):
queue_batch_id=context_data.queue_item.batch_id,
graph_execution_state_id=context_data.queue_item.session_id,
model_config=model_config,
submodel_type=submodel_type,
)
else:
self._invoker.services.events.emit_model_load_completed(
@ -110,4 +114,5 @@ class ModelLoadService(ModelLoadServiceBase):
queue_batch_id=context_data.queue_item.batch_id,
graph_execution_state_id=context_data.queue_item.session_id,
model_config=model_config,
submodel_type=submodel_type,
)

View File

@ -1,15 +1,11 @@
# Copyright (c) 2023 Lincoln D. Stein and the InvokeAI Team
from abc import ABC, abstractmethod
from typing import Optional
import torch
from typing_extensions import Self
from invokeai.app.services.invoker import Invoker
from invokeai.app.services.shared.invocation_context import InvocationContextData
from invokeai.backend.model_manager.config import AnyModelConfig, BaseModelType, ModelType, SubModelType
from invokeai.backend.model_manager.load.load_base import LoadedModel
from ..config import InvokeAIAppConfig
from ..download import DownloadQueueServiceBase
@ -70,32 +66,3 @@ class ModelManagerServiceBase(ABC):
@abstractmethod
def stop(self, invoker: Invoker) -> None:
pass
@abstractmethod
def load_model_by_config(
self,
model_config: AnyModelConfig,
submodel_type: Optional[SubModelType] = None,
context_data: Optional[InvocationContextData] = None,
) -> LoadedModel:
pass
@abstractmethod
def load_model_by_key(
self,
key: str,
submodel_type: Optional[SubModelType] = None,
context_data: Optional[InvocationContextData] = None,
) -> LoadedModel:
pass
@abstractmethod
def load_model_by_attr(
self,
model_name: str,
base_model: BaseModelType,
model_type: ModelType,
submodel: Optional[SubModelType] = None,
context_data: Optional[InvocationContextData] = None,
) -> LoadedModel:
pass

View File

@ -1,14 +1,10 @@
# Copyright (c) 2023 Lincoln D. Stein and the InvokeAI Team
"""Implementation of ModelManagerServiceBase."""
from typing import Optional
import torch
from typing_extensions import Self
from invokeai.app.services.invoker import Invoker
from invokeai.app.services.shared.invocation_context import InvocationContextData
from invokeai.backend.model_manager import AnyModelConfig, BaseModelType, LoadedModel, ModelType, SubModelType
from invokeai.backend.model_manager.load import ModelCache, ModelConvertCache, ModelLoaderRegistry
from invokeai.backend.util.devices import choose_torch_device
from invokeai.backend.util.logging import InvokeAILogger
@ -18,7 +14,7 @@ from ..download import DownloadQueueServiceBase
from ..events.events_base import EventServiceBase
from ..model_install import ModelInstallService, ModelInstallServiceBase
from ..model_load import ModelLoadService, ModelLoadServiceBase
from ..model_records import ModelRecordServiceBase, UnknownModelException
from ..model_records import ModelRecordServiceBase
from .model_manager_base import ModelManagerServiceBase
@ -64,56 +60,6 @@ class ModelManagerService(ModelManagerServiceBase):
if hasattr(service, "stop"):
service.stop(invoker)
def load_model_by_config(
self,
model_config: AnyModelConfig,
submodel_type: Optional[SubModelType] = None,
context_data: Optional[InvocationContextData] = None,
) -> LoadedModel:
return self.load.load_model(model_config, submodel_type, context_data)
def load_model_by_key(
self,
key: str,
submodel_type: Optional[SubModelType] = None,
context_data: Optional[InvocationContextData] = None,
) -> LoadedModel:
config = self.store.get_model(key)
return self.load.load_model(config, submodel_type, context_data)
def load_model_by_attr(
self,
model_name: str,
base_model: BaseModelType,
model_type: ModelType,
submodel: Optional[SubModelType] = None,
context_data: Optional[InvocationContextData] = None,
) -> LoadedModel:
"""
Given a model's attributes, search the database for it, and if found, load and return the LoadedModel object.
This is provided for API compatibility with the get_model() method
in the original model manager. However, note that LoadedModel is
not the same as the original ModelInfo that was returned.
:param model_name: Name of the model to be fetched.
:param base_model: Base model
:param model_type: Type of the model
:param submodel: For main (pipeline models), the submodel to fetch
:param context: The invocation context.
Exceptions: UnknownModelException -- model with this key not known
NotImplementedException -- a model loader was not provided at initialization time
ValueError -- more than one model matches this combination
"""
configs = self.store.search_by_attr(model_name, base_model, model_type)
if len(configs) == 0:
raise UnknownModelException(f"{base_model}/{model_type}/{model_name}: Unknown model")
elif len(configs) > 1:
raise ValueError(f"{base_model}/{model_type}/{model_name}: More than one model matches.")
else:
return self.load.load_model(configs[0], submodel, context_data)
@classmethod
def build_model_manager(
cls,
@ -132,14 +78,12 @@ class ModelManagerService(ModelManagerServiceBase):
logger.setLevel(app_config.log_level.upper())
ram_cache = ModelCache(
max_cache_size=app_config.ram_cache_size,
max_vram_cache_size=app_config.vram_cache_size,
max_cache_size=app_config.ram,
max_vram_cache_size=app_config.vram,
logger=logger,
execution_device=execution_device,
)
convert_cache = ModelConvertCache(
cache_path=app_config.models_convert_cache_path, max_size=app_config.convert_cache_size
)
convert_cache = ModelConvertCache(cache_path=app_config.convert_cache_path, max_size=app_config.convert_cache)
loader = ModelLoadService(
app_config=app_config,
ram_cache=ram_cache,

View File

@ -6,6 +6,7 @@ from .model_records_base import ( # noqa F401
ModelRecordServiceBase,
UnknownModelException,
ModelSummary,
ModelRecordChanges,
ModelRecordOrderBy,
)
from .model_records_sql import ModelRecordServiceSQL # noqa F401
@ -17,5 +18,6 @@ __all__ = [
"InvalidModelException",
"UnknownModelException",
"ModelSummary",
"ModelRecordChanges",
"ModelRecordOrderBy",
]

View File

@ -18,7 +18,12 @@ from invokeai.backend.model_manager import (
ModelFormat,
ModelType,
)
from invokeai.backend.model_manager.config import ModelDefaultSettings, ModelVariantType, SchedulerPredictionType
from invokeai.backend.model_manager.config import (
ControlAdapterDefaultSettings,
MainModelDefaultSettings,
ModelVariantType,
SchedulerPredictionType,
)
class DuplicateModelException(Exception):
@ -68,7 +73,7 @@ class ModelRecordChanges(BaseModelExcludeNull):
description: Optional[str] = Field(description="Model description", default=None)
base: Optional[BaseModelType] = Field(description="The base model.", default=None)
trigger_phrases: Optional[set[str]] = Field(description="Set of trigger phrases for this model", default=None)
default_settings: Optional[ModelDefaultSettings] = Field(
default_settings: Optional[MainModelDefaultSettings | ControlAdapterDefaultSettings] = Field(
description="Default settings for this model", default=None
)
@ -79,6 +84,7 @@ class ModelRecordChanges(BaseModelExcludeNull):
description="The prediction type of the model.", default=None
)
upcast_attention: Optional[bool] = Field(description="Whether to upcast attention.", default=None)
config_path: Optional[str] = Field(description="Path to config file for model", default=None)
class ModelRecordServiceBase(ABC):
@ -129,6 +135,17 @@ class ModelRecordServiceBase(ABC):
"""
pass
@abstractmethod
def get_model_by_hash(self, hash: str) -> AnyModelConfig:
"""
Retrieve the configuration for the indicated model.
:param hash: Hash of model config to be fetched.
Exceptions: UnknownModelException
"""
pass
@abstractmethod
def list_models(
self, page: int = 0, per_page: int = 10, order_by: ModelRecordOrderBy = ModelRecordOrderBy.Default

View File

@ -203,6 +203,21 @@ class ModelRecordServiceSQL(ModelRecordServiceBase):
model = ModelConfigFactory.make_config(json.loads(rows[0]), timestamp=rows[1])
return model
def get_model_by_hash(self, hash: str) -> AnyModelConfig:
with self._db.lock:
self._cursor.execute(
"""--sql
SELECT config, strftime('%s',updated_at) FROM models
WHERE hash=?;
""",
(hash,),
)
rows = self._cursor.fetchone()
if not rows:
raise UnknownModelException("model not found")
model = ModelConfigFactory.make_config(json.loads(rows[0]), timestamp=rows[1])
return model
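# Illustrative sketch (hypothetical caller): lookup mirrors get_model(), but is
# keyed on the content hash rather than the record key.
#
#     try:
#         config = record_store.get_model_by_hash(model_hash)
#     except UnknownModelException:
#         config = None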
def exists(self, key: str) -> bool:
"""
Return True if a model with the indicated key exists in the database.
@ -227,6 +242,7 @@ class ModelRecordServiceSQL(ModelRecordServiceBase):
base_model: Optional[BaseModelType] = None,
model_type: Optional[ModelType] = None,
model_format: Optional[ModelFormat] = None,
order_by: ModelRecordOrderBy = ModelRecordOrderBy.Default,
) -> List[AnyModelConfig]:
"""
Return models matching name, base and/or type.
@ -235,10 +251,21 @@ class ModelRecordServiceSQL(ModelRecordServiceBase):
:param base_model: Filter by base model (optional)
:param model_type: Filter by type of model (optional)
:param model_format: Filter by model format (e.g. "diffusers") (optional)
:param order_by: Result order
If none of the optional filters are passed, will return all
models in the database.
"""
assert isinstance(order_by, ModelRecordOrderBy)
ordering = {
ModelRecordOrderBy.Default: "type, base, name, format",
ModelRecordOrderBy.Type: "type",
ModelRecordOrderBy.Base: "base",
ModelRecordOrderBy.Name: "name",
ModelRecordOrderBy.Format: "format",
}
where_clause: list[str] = []
bindings: list[str] = []
if model_name:
@ -257,8 +284,10 @@ class ModelRecordServiceSQL(ModelRecordServiceBase):
with self._db.lock:
self._cursor.execute(
f"""--sql
SELECT config, strftime('%s',updated_at) FROM models
{where};
SELECT config, strftime('%s',updated_at)
FROM models
{where}
ORDER BY {ordering[order_by]} -- using ? to bind doesn't work here for some reason;
""",
tuple(bindings),
)
@ -304,7 +333,7 @@ class ModelRecordServiceSQL(ModelRecordServiceBase):
"""Return a paginated summary listing of each model in the database."""
assert isinstance(order_by, ModelRecordOrderBy)
ordering = {
ModelRecordOrderBy.Default: "type, base, format, name",
ModelRecordOrderBy.Default: "type, base, name, format",
ModelRecordOrderBy.Type: "type",
ModelRecordOrderBy.Base: "base",
ModelRecordOrderBy.Name: "name",

View File

@ -151,7 +151,7 @@ class SqliteSessionQueue(SessionQueueBase):
# TODO: how does this work in a multi-user scenario?
current_queue_size = self._get_current_queue_size(queue_id)
max_queue_size = self.__invoker.services.configuration.get_config().max_queue_size
max_queue_size = self.__invoker.services.configuration.max_queue_size
max_new_queue_items = max_queue_size - current_queue_size
priority = 0

View File

@ -1043,24 +1043,33 @@ class GraphExecutionState(BaseModel):
"""Gets the deepest node that is ready to be executed"""
g = self.execution_graph.nx_graph()
# Depth-first search with pre-order traversal is a depth-first topological sort
sorted_nodes = nx.dfs_preorder_nodes(g)
# Perform a topological sort using depth-first search
topo_order = list(nx.dfs_postorder_nodes(g))
next_node = next(
(
n
for n in sorted_nodes
if n not in self.executed # the node must not already be executed...
and all((e[0] in self.executed for e in g.in_edges(n))) # ...and all its inputs must be executed
),
None,
)
# Get all IterateInvocation nodes
iterate_nodes = [n for n in topo_order if isinstance(self.execution_graph.nodes[n], IterateInvocation)]
if next_node is None:
# Sort the IterateInvocation nodes based on their index attribute
iterate_nodes.sort(key=lambda x: self.execution_graph.nodes[x].index)
# Prioritize IterateInvocation nodes and their children
for iterate_node in iterate_nodes:
if iterate_node not in self.executed and all((e[0] in self.executed for e in g.in_edges(iterate_node))):
return self.execution_graph.nodes[iterate_node]
# Check the children of the IterateInvocation node
for child_node in nx.dfs_postorder_nodes(g, iterate_node):
if child_node not in self.executed and all((e[0] in self.executed for e in g.in_edges(child_node))):
return self.execution_graph.nodes[child_node]
# If no IterateInvocation node or its children are ready, return the first ready node in the topological order
for node in topo_order:
if node not in self.executed and all((e[0] in self.executed for e in g.in_edges(node))):
return self.execution_graph.nodes[node]
# If no node is found, return None
return None
return self.execution_graph.nodes[next_node]
def _prepare_inputs(self, node: BaseInvocation):
input_edges = [e for e in self.execution_graph.edges if e.destination.node_id == node.id]
# Inputs must be deep-copied, else if a node mutates the object, other nodes that get the same input

View File

@ -1,7 +1,7 @@
import threading
from dataclasses import dataclass
from pathlib import Path
from typing import TYPE_CHECKING, Optional
from typing import TYPE_CHECKING, Optional, Union
from PIL.Image import Image
from torch import Tensor
@ -13,15 +13,16 @@ from invokeai.app.services.config.config_default import InvokeAIAppConfig
from invokeai.app.services.image_records.image_records_common import ImageCategory, ResourceOrigin
from invokeai.app.services.images.images_common import ImageDTO
from invokeai.app.services.invocation_services import InvocationServices
from invokeai.app.services.model_records.model_records_base import UnknownModelException
from invokeai.app.util.step_callback import stable_diffusion_step_callback
from invokeai.backend.model_manager.config import AnyModelConfig, BaseModelType, ModelFormat, ModelType, SubModelType
from invokeai.backend.model_manager.load.load_base import LoadedModel
from invokeai.backend.model_manager.metadata.metadata_base import AnyModelRepoMetadata
from invokeai.backend.stable_diffusion.diffusers_pipeline import PipelineIntermediateState
from invokeai.backend.stable_diffusion.diffusion.conditioning_data import ConditioningFieldData
if TYPE_CHECKING:
from invokeai.app.invocations.baseinvocation import BaseInvocation
from invokeai.app.invocations.model import ModelIdentifierField
from invokeai.app.services.session_queue.session_queue_common import SessionQueueItem
"""
@ -299,22 +300,27 @@ class ConditioningInterface(InvocationContextInterface):
class ModelsInterface(InvocationContextInterface):
def exists(self, key: str) -> bool:
def exists(self, identifier: Union[str, "ModelIdentifierField"]) -> bool:
"""Checks if a model exists.
Args:
key: The key of the model.
identifier: The key or ModelField representing the model.
Returns:
True if the model exists, False if not.
"""
return self._services.model_manager.store.exists(key)
if isinstance(identifier, str):
return self._services.model_manager.store.exists(identifier)
def load(self, key: str, submodel_type: Optional[SubModelType] = None) -> LoadedModel:
return self._services.model_manager.store.exists(identifier.key)
def load(
self, identifier: Union[str, "ModelIdentifierField"], submodel_type: Optional[SubModelType] = None
) -> LoadedModel:
"""Loads a model.
Args:
key: The key of the model.
identifier: The key or ModelField representing the model.
submodel_type: The submodel of the model to get.
Returns:
@ -324,9 +330,13 @@ class ModelsInterface(InvocationContextInterface):
# The model manager emits events as it loads the model. It needs the context data to build
# the event payloads.
return self._services.model_manager.load_model_by_key(
key=key, submodel_type=submodel_type, context_data=self._data
)
if isinstance(identifier, str):
model = self._services.model_manager.store.get_model(identifier)
return self._services.model_manager.load.load_model(model, submodel_type, self._data)
else:
_submodel_type = submodel_type or identifier.submodel_type
model = self._services.model_manager.store.get_model(identifier.key)
return self._services.model_manager.load.load_model(model, _submodel_type, self._data)
def load_by_attrs(
self, name: str, base: BaseModelType, type: ModelType, submodel_type: Optional[SubModelType] = None
@ -343,35 +353,29 @@ class ModelsInterface(InvocationContextInterface):
Returns:
An object representing the loaded model.
"""
return self._services.model_manager.load_model_by_attr(
model_name=name,
base_model=base,
model_type=type,
submodel=submodel_type,
context_data=self._data,
)
def get_config(self, key: str) -> AnyModelConfig:
configs = self._services.model_manager.store.search_by_attr(model_name=name, base_model=base, model_type=type)
if len(configs) == 0:
raise UnknownModelException(f"No model found with name {name}, base {base}, and type {type}")
if len(configs) > 1:
raise ValueError(f"More than one model found with name {name}, base {base}, and type {type}")
return self._services.model_manager.load.load_model(configs[0], submodel_type, self._data)
def get_config(self, identifier: Union[str, "ModelIdentifierField"]) -> AnyModelConfig:
"""Gets a model's config.
Args:
key: The key of the model.
identifier: The key or ModelField representing the model.
Returns:
The model's config.
"""
return self._services.model_manager.store.get_model(key=key)
if isinstance(identifier, str):
return self._services.model_manager.store.get_model(identifier)
def get_metadata(self, key: str) -> Optional[AnyModelRepoMetadata]:
"""Gets a model's metadata, if it has any.
Args:
key: The key of the model.
Returns:
The model's metadata, if it has any.
"""
return self._services.model_manager.store.get_metadata(key=key)
return self._services.model_manager.store.get_model(identifier.key)
def search_by_path(self, path: Path) -> list[AnyModelConfig]:
"""Searches for models by path.
@ -419,7 +423,7 @@ class ConfigInterface(InvocationContextInterface):
The app's config.
"""
return self._services.configuration.get_config()
return self._services.configuration
class UtilInterface(InvocationContextInterface):
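Taken together, the hunks above let the model methods on the invocation context accept either a raw key string or a `ModelIdentifierField`, and `ConfigInterface` now hands back the configuration object directly instead of calling `get_config()`. A hedged sketch of how calling code might use the updated `ModelsInterface`; the helper, its arguments, and the import paths under `TYPE_CHECKING` are assumptions for illustration, not part of this diff:

```python
from typing import TYPE_CHECKING

from invokeai.backend.model_manager.config import SubModelType

if TYPE_CHECKING:
    from invokeai.app.invocations.model import ModelIdentifierField
    from invokeai.app.services.shared.invocation_context import InvocationContext


def load_unet_sketch(context: "InvocationContext", model_field: "ModelIdentifierField"):
    """Hypothetical helper showing the two identifier forms the interface now accepts."""
    # A bare key string still works for existence checks and config lookups...
    if context.models.exists(model_field.key):
        _config = context.models.get_config(model_field.key)

    # ...and the ModelIdentifierField itself works too; it can carry its own
    # submodel_type, which is used when no explicit submodel_type is passed.
    return context.models.load(model_field, submodel_type=SubModelType.UNet)
```

Note that `load_by_attrs` now resolves the name/base/type lookup itself and raises if it finds nothing or more than one match, rather than delegating to a model-manager helper.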

View File

@ -4,8 +4,6 @@ from logging import Logger
from invokeai.app.services.config import InvokeAIAppConfig
from invokeai.app.services.shared.sqlite_migrator.sqlite_migrator_common import Migration
from .util.migrate_yaml_config_1 import MigrateModelYamlToDb1
class Migration3Callback:
def __init__(self, app_config: InvokeAIAppConfig, logger: Logger) -> None:
@ -15,7 +13,6 @@ class Migration3Callback:
def __call__(self, cursor: sqlite3.Cursor) -> None:
self._drop_model_manager_metadata(cursor)
self._recreate_model_config(cursor)
self._migrate_model_config_records(cursor)
def _drop_model_manager_metadata(self, cursor: sqlite3.Cursor) -> None:
"""Drops the `model_manager_metadata` table."""
@ -55,12 +52,6 @@ class Migration3Callback:
"""
)
def _migrate_model_config_records(self, cursor: sqlite3.Cursor) -> None:
"""After updating the model config table, we repopulate it."""
self._logger.info("Migrating model config records from models.yaml to database")
model_record_migrator = MigrateModelYamlToDb1(self._app_config, self._logger, cursor)
model_record_migrator.migrate()
def build_migration_3(app_config: InvokeAIAppConfig, logger: Logger) -> Migration:
"""

View File

@ -1,163 +0,0 @@
# Copyright (c) 2023 Lincoln D. Stein
"""Migrate from the InvokeAI v2 models.yaml format to the v3 sqlite format."""
import json
import sqlite3
from logging import Logger
from pathlib import Path
from typing import Optional
from omegaconf import DictConfig, OmegaConf
from pydantic import TypeAdapter
from invokeai.app.services.config import InvokeAIAppConfig
from invokeai.app.services.model_records import (
DuplicateModelException,
UnknownModelException,
)
from invokeai.backend.model_manager.config import (
AnyModelConfig,
BaseModelType,
ModelConfigFactory,
ModelType,
)
from invokeai.backend.model_manager.hash import ModelHash
ModelsValidator = TypeAdapter(AnyModelConfig)
class MigrateModelYamlToDb1:
"""
Migrate the InvokeAI models.yaml format (VERSION 3.0.0) to the SQLite 3 database format (VERSION 3.5.0).
The class has one externally useful method, migrate(), which scans the
current models.yaml file and imports all its entries into invokeai.db.
Use this way:
from invokeai.backend.model_manager.migrate_to_db import MigrateModelYamlToDb
MigrateModelYamlToDb().migrate()
"""
config: InvokeAIAppConfig
logger: Logger
cursor: sqlite3.Cursor
def __init__(self, config: InvokeAIAppConfig, logger: Logger, cursor: sqlite3.Cursor = None) -> None:
self.config = config
self.logger = logger
self.cursor = cursor
def get_yaml(self) -> DictConfig:
"""Fetch the models.yaml DictConfig for this installation."""
yaml_path = self.config.model_conf_path
omegaconf = OmegaConf.load(yaml_path)
assert isinstance(omegaconf, DictConfig)
return omegaconf
def migrate(self) -> None:
"""Do the migration from models.yaml to invokeai.db."""
try:
yaml = self.get_yaml()
except OSError:
return
for model_key, stanza in yaml.items():
if model_key == "__metadata__":
assert (
stanza["version"] == "3.0.0"
), f"This script works on version 3.0.0 yaml files, but your configuration points to a {stanza['version']} version"
continue
base_type, model_type, model_name = str(model_key).split("/")
try:
hash = ModelHash().hash(self.config.models_path / stanza.path)
except OSError:
self.logger.warning(f"The model at {stanza.path} is not a valid file or directory. Skipping migration.")
continue
stanza["base"] = BaseModelType(base_type)
stanza["type"] = ModelType(model_type)
stanza["name"] = model_name
stanza["original_hash"] = hash
stanza["current_hash"] = hash
new_key = hash # deterministic key assignment
# special case for ip adapters, which need the new `image_encoder_model_id` field
if stanza["type"] == ModelType.IPAdapter:
try:
stanza["image_encoder_model_id"] = self._get_image_encoder_model_id(
self.config.models_path / stanza.path
)
except OSError:
self.logger.warning(f"Could not determine image encoder for {stanza.path}. Skipping.")
continue
new_config: AnyModelConfig = ModelsValidator.validate_python(stanza) # type: ignore # see https://github.com/pydantic/pydantic/discussions/7094
try:
if original_record := self._search_by_path(stanza.path):
key = original_record.key
self.logger.info(f"Updating model {model_name} with information from models.yaml using key {key}")
self._update_model(key, new_config)
else:
self.logger.info(f"Adding model {model_name} with key {new_key}")
self._add_model(new_key, new_config)
except DuplicateModelException:
self.logger.warning(f"Model {model_name} is already in the database")
except UnknownModelException:
self.logger.warning(f"Model at {stanza.path} could not be found in database")
def _search_by_path(self, path: Path) -> Optional[AnyModelConfig]:
self.cursor.execute(
"""--sql
SELECT config FROM model_config
WHERE path=?;
""",
(str(path),),
)
results = [ModelConfigFactory.make_config(json.loads(x[0])) for x in self.cursor.fetchall()]
return results[0] if results else None
def _update_model(self, key: str, config: AnyModelConfig) -> None:
record = ModelConfigFactory.make_config(config, key=key) # ensure it is a valid config object
json_serialized = record.model_dump_json() # and turn it into a json string.
self.cursor.execute(
"""--sql
UPDATE model_config
SET
config=?
WHERE id=?;
""",
(json_serialized, key),
)
if self.cursor.rowcount == 0:
raise UnknownModelException("model not found")
def _add_model(self, key: str, config: AnyModelConfig) -> None:
record = ModelConfigFactory.make_config(config, key=key) # ensure it is a valid config object.
json_serialized = record.model_dump_json() # and turn it into a json string.
try:
self.cursor.execute(
"""--sql
INSERT INTO model_config (
id,
original_hash,
config
)
VALUES (?,?,?);
""",
(
key,
record.hash,
json_serialized,
),
)
except sqlite3.IntegrityError as exc:
raise DuplicateModelException(f"{record.name}: model is already in database") from exc
def _get_image_encoder_model_id(self, model_path: Path) -> str:
with open(model_path / "image_encoder.txt") as f:
encoder = f.read()
return encoder.strip()
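The removed migrator above relied on pydantic's `TypeAdapter` (`ModelsValidator.validate_python(stanza)`) to coerce each YAML stanza into the correct member of the `AnyModelConfig` union. For readers unfamiliar with the pattern, here is a generic, self-contained sketch; the `Cat`/`Dog` union is purely illustrative and not from this codebase:

```python
from typing import Annotated, Literal, Union

from pydantic import BaseModel, Field, TypeAdapter


class Cat(BaseModel):
    type: Literal["cat"] = "cat"
    lives: int = 9


class Dog(BaseModel):
    type: Literal["dog"] = "dog"
    good_boy: bool = True


# Discriminated union: the "type" field selects the concrete model class,
# much like validate_python() resolves a stanza to one AnyModelConfig variant.
AnimalValidator = TypeAdapter(Annotated[Union[Cat, Dog], Field(discriminator="type")])
print(AnimalValidator.validate_python({"type": "cat", "lives": 7}))  # Cat(type='cat', lives=7)
```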

View File

@ -8,3 +8,8 @@ class UrlServiceBase(ABC):
def get_image_url(self, image_name: str, thumbnail: bool = False) -> str:
"""Gets the URL for an image or thumbnail."""
pass
@abstractmethod
def get_model_image_url(self, model_key: str) -> str:
"""Gets the URL for a model image"""
pass

View File

@ -4,8 +4,9 @@ from .urls_base import UrlServiceBase
class LocalUrlService(UrlServiceBase):
def __init__(self, base_url: str = "api/v1"):
def __init__(self, base_url: str = "api/v1", base_url_v2: str = "api/v2"):
self._base_url = base_url
self._base_url_v2 = base_url_v2
def get_image_url(self, image_name: str, thumbnail: bool = False) -> str:
image_basename = os.path.basename(image_name)
@ -15,3 +16,6 @@ class LocalUrlService(UrlServiceBase):
return f"{self._base_url}/images/i/{image_basename}/thumbnail"
return f"{self._base_url}/images/i/{image_basename}/full"
def get_model_image_url(self, model_key: str) -> str:
return f"{self._base_url_v2}/models/i/{model_key}/image"

Some files were not shown because too many files have changed in this diff.