Setting this to 'auto' works only for the InvokeAI config format: it auto-detects the SD model, but an explicit user setting overrides the auto-detection. If 'auto' is used with checkpoint models, we raise an error; checkpoints always need an explicit, non-auto setting.
The seamless logic errors when a second GPU is selected. I don't understand why, but a workaround is to skip the model patching when there are no seamless axes specified.
This is also just good practice regardless - don't patch the model unless we need to. The perf impact is probably negligible.
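A minimal sketch of the workaround (names are illustrative of the seamless patching context manager, not the exact InvokeAI API):

```python
from contextlib import contextmanager, nullcontext

@contextmanager
def set_seamless(model, seamless_axes: list[str]):
    # Patch the model's Conv2d padding modes for the requested axes; restore on exit.
    yield model

def maybe_seamless(model, seamless_axes: list[str]):
    # Skip patching entirely when no axes are requested - there's nothing to patch,
    # and it sidesteps the failure when a second GPU is selected.
    return set_seamless(model, seamless_axes) if seamless_axes else nullcontext(model)
```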
Closes #6010
These two changes are interrelated.
## Autoimport
The autoimport feature can be easily replicated using the scan folder tab in the model manager. Removing the implicit autoimport reduces surface area and unifies all model installation into the UI.
This functionality is removed, along with the `autoimport_dir` config setting.
## Startup model dir scanning
We scanned the invoke-managed models dir on startup and took certain actions:
- Register orphaned model files
- Remove model records from the db when the model path doesn't exist
### Orphaned model files
We should never have orphaned model files during normal use - we manage the models directory, and we only delete files when the user requests it.
During testing or development, when a fresh DB or memory DB is used, we could end up with orphaned models that should be registered.
Instead of always scanning for orphaned models and registering them, we now only do the scan if the new `scan_models_on_startup` config flag is set.
The description for this setting indicates it is intended for testing only.
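A hedged sketch of the gated scan, assuming the app's `get_config()` and an installer with a directory-scan method (method name illustrative):

```python
def on_startup(installer) -> None:
    config = get_config()
    if config.scan_models_on_startup:
        # Register any model files on disk that have no corresponding DB record.
        installer.scan_directory(config.models_path)
```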
### Remove records for missing model files
This functionality could unexpectedly wipe models from the db.
For example, if your models dir was on external media, and that media was inaccessible during startup, the scan would see all your models as missing and delete them from the db.
The "proactive" scan is removed. Instead, we will scan for missing models and log a warning if we find a model whose path doesn't exist. No possibility of data loss.
I had added this because I mistakenly believed the HF token was required to download HF models.
Turns out this is not the case, and the vast majority of HF models do not need the API token to download.
"Normal" models have 4 in-channels, while "Depth" models have 5 and "Inpaint" models have 9.
We need to explicitly tell diffusers the channel count when converting models.
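A hedged sketch of the distinction, reading the first UNet conv weight from a checkpoint state dict (key layout per standard SD checkpoints; the variant mapping names are illustrative):

```python
def get_unet_variant(state_dict: dict) -> str:
    # The second dim of the first UNet conv weight is the in-channel count.
    in_channels = state_dict["model.diffusion_model.input_blocks.0.0.weight"].shape[1]
    # Pass this through to diffusers when converting, rather than letting it assume 4.
    return {4: "normal", 5: "depth", 9: "inpaint"}[in_channels]
```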
Closes #6058
It's possible for a model's state dict to have integer keys, though we do not actually support such models.
As part of probing, we call `key.startswith(...)` on the state dict keys. This raises an `AttributeError` for integer keys.
This logic is in `invokeai/backend/model_manager/probe.py:get_model_type_from_checkpoint`
To fix this, we can cast the keys to strings first. Models with integer keys will still fail to be probed, but we'll get an `InvalidModelConfigException` instead of an `AttributeError`.
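A minimal sketch of the fix, assuming the probe iterates keys roughly like this (the prefix checks are illustrative):

```python
def get_model_type_from_checkpoint(state_dict: dict):
    for key in state_dict.keys():
        key = str(key)  # integer keys would make key.startswith(...) raise AttributeError
        if key.startswith(("cond_stage_model.", "conditioner.")):
            ...  # classify the checkpoint as before
```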
Closes #6044
Previously we only handled expected error types. If a different error was raised, the install job would end up in an unexpected state where it has failed and isn't doing anything, but its status is still running.
This indirectly prevents the installer threads from exiting - they are waiting for all jobs to be completed, including the failed-but-still-running job.
We need to handle any error here to prevent this.
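A hedged sketch of the broadened handler (an excerpt; the method and status names are illustrative of the installer's job loop):

```python
try:
    self._do_install(job)
    job.status = InstallStatus.COMPLETED
except Exception as e:  # deliberately broad: any error must mark the job as failed
    job.set_error(e)
    self._signal_job_errored(job)
finally:
    self._job_queue.task_done()  # let the worker thread make progress regardless
```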
Updating should always be done via the installer. We initially planned to only deprecate the updater, but given the scale of changes for v4, there's no point in waiting to remove it entirely.
Loading default workflows sometimes requires that we mutate the workflow object in order to change its category or ID.
This happens in `invokeai/frontend/web/src/features/nodes/util/workflow/validateWorkflow.ts`
The data we get back from the query hooks is frozen and sealed by redux, because it is part of redux state. We need to clone the workflow before operating on it.
It's not clear how this ever worked in the past, because redux state has always been frozen and sealed.
Add `extra="forbid"` to the default settings models.
Closes #6035.
Pydantic has some quirks related to unions. This affected how the union of default settings was evaluated. See https://github.com/pydantic/pydantic/issues/9095 for a detailed description of the behaviour that this change addresses.
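A hedged sketch of the change (field names are illustrative; the point is `extra="forbid"` on each member of the union):

```python
from pydantic import BaseModel, ConfigDict

class MainModelDefaultSettings(BaseModel):
    model_config = ConfigDict(extra="forbid")
    vae: str | None = None

class ControlAdapterDefaultSettings(BaseModel):
    model_config = ConfigDict(extra="forbid")
    preprocessor: str | None = None

# With extra="forbid", a dict carrying keys from one model can no longer be
# silently coerced into the other member of the union during validation.
```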
- Enriched dependencies so they are not just strings - this allows a dependency to be reused as a starter model _and_ as a dependency of another model (see the sketch after this list). For example, all the SDXL models have the fp16 VAE as a dependency, but you can also download it on its own.
- Looked at popular models on the major model sites to select the list. No SD2 models. All hosted on HF.
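A hedged sketch of the enriched structure (field names are illustrative):

```python
from pydantic import BaseModel

class StarterModel(BaseModel):
    name: str
    source: str  # HF repo id or URL, compared against installed models
    description: str = ""
    dependencies: list["StarterModel"] | None = None

fp16_vae = StarterModel(name="sdxl-vae-fp16-fix", source="madebyollin/sdxl-vae-fp16-fix")
# The same object serves as a standalone starter model and as a dependency:
sdxl_main = StarterModel(
    name="Some SDXL Model",
    source="...",  # hypothetical source
    dependencies=[fp16_vae],
)
```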
* Fix minor bugs involving model manager handling of model paths
- Leave models found in the `autoimport` directory there. Do not move them into the `models` hierarchy.
- If a model's name, type or base is updated and the model is in the `models` directory, update its path as appropriate.
- On startup during model scanning, if a model's path is a symbolic link, resolve it to an absolute path before deciding it is a new model that must be hashed and registered. (This prevents needless hashing at startup time.)
* fix issue with dropped suffix
---------
Co-authored-by: Lincoln Stein <lstein@gmail.com>
translationBot(ui): update translation (Italian)
Currently translated at 98.2% (1102 of 1122 strings)
Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
Add class `DefaultInvokeAIAppConfig`, which inherits from `InvokeAIAppConfig`. When instantiated, this class does not parse environment variables, so it outputs a "clean" default config. That's the only difference (a sketch follows the list below).
Then, we can use this new class in three places:
- When creating the example config file (no env vars should be here)
- When migrating a v3 config (we want to instantiate the migrated config without env vars, so that when we write it out, they are not written to disk)
- When creating a fresh config file (i.e. on first run with an uninitialized root or new config file path - no env vars here!)
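A hedged sketch of the subclass, assuming pydantic-settings' source-customization hook (import path per the v4 layout):

```python
from invokeai.app.services.config.config_default import InvokeAIAppConfig

class DefaultInvokeAIAppConfig(InvokeAIAppConfig):
    """Identical to InvokeAIAppConfig, except env vars are never read."""

    @classmethod
    def settings_customise_sources(
        cls, settings_cls, init_settings, env_settings, dotenv_settings, file_secret_settings
    ):
        # Keep only explicit init values; drop the env/dotenv/secret sources.
        return (init_settings,)
```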
For SSDs, `blake3` is about 10x faster than `blake3_single` - 30 files/second vs 3 files/second.
For spinning HDDs, `blake3` is about 100x slower than `blake3_single` - 300 seconds/file vs 3 seconds/file.
For external drives, `blake3` is always worse, but the difference is highly variable. For external spinning drives, it's probably way worse than internal.
The least offensive algorithm is `blake3_single`, and it's still _much_ faster than the non-blake3 algorithms.
With the change to model identifiers from v3 to v4, if a user had persisted redux state with the old format, we could get unexpected runtime errors when rehydrating state if we try to access model attributes that no longer exist.
For example, the CLIP Skip component does this:
```ts
CLIP_SKIP_MAP[model.base].maxClip
```
In v3, models had a `base_type` attribute, but it is renamed to `base` in v4. This code therefore causes a runtime error:
- `model.base` is `undefined`
- `CLIP_SKIP_MAP[undefined]` is also undefined
- `undefined.maxClip` is a runtime error!
Resolved by adding a migration for the redux slices that have model identifiers. The migration simply resets the slice or the part of the slice that is affected, when it's simple to do a partial reset.
Closes #6000
* add probe for SDXL controlnet models
* Update invokeai/backend/model_management/model_probe.py
Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>
* Update invokeai/backend/model_manager/probe.py
Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>
---------
Co-authored-by: Lincoln Stein <lstein@gmail.com>
Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>
These all support controlnet processors.
- `pil_to_cv2`
- `cv2_to_pil`
- `pil_to_np`
- `np_to_pil`
- `normalize_image_channel_count` (a readable version of `HWC3` from the controlnet repo; sketched after this list)
- `fit_image_to_resolution` (a readable version of `resize_image` from the controlnet repo)
- `non_maximum_suppression` (a readable version of `nms` from the controlnet repo)
- `safe_step` (a readable version of `safe_step` from the controlnet repo)
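For example, a hedged sketch of `normalize_image_channel_count`, assuming uint8 numpy images like the original `HWC3`:

```python
import numpy as np

def normalize_image_channel_count(image: np.ndarray) -> np.ndarray:
    """Map grayscale and RGBA inputs to 3-channel RGB."""
    assert image.dtype == np.uint8
    if image.ndim == 2:
        image = image[:, :, None]
    if image.shape[2] == 3:
        return image
    if image.shape[2] == 1:
        return np.concatenate([image, image, image], axis=2)
    # RGBA: composite the color channels over white, as HWC3 does.
    color = image[:, :, :3].astype(np.float32)
    alpha = image[:, :, 3:4].astype(np.float32) / 255.0
    return (color * alpha + 255.0 * (1.0 - alpha)).clip(0, 255).astype(np.uint8)
```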
Some processors, like Canny, didn't use `detect_resolution`. The control images were generated at 512x512 and then resized by the processors to the desired dimensions, so they ended up the right size but very low quality. Using `detect_resolution` fixes this.
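A hedged sketch of the corrected flow, using the helpers listed above (signatures are illustrative):

```python
import cv2
import numpy as np

def canny_control_image(pil_image, low=100, high=200, detect_resolution=512, image_resolution=512):
    image = normalize_image_channel_count(np.asarray(pil_image))
    image = fit_image_to_resolution(image, detect_resolution)  # detect at this size...
    edges = normalize_image_channel_count(cv2.Canny(image, low, high))
    return np_to_pil(fit_image_to_resolution(edges, image_resolution))  # ...then fit once
```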
- Display a toast on UI launch if the HF token is invalid
- Show a form in the MM if the token is invalid or unable to be verified, and let the user set the token via this form
This allows users to create simple "profiles" via separate `invokeai.yaml` files.
- Remove `InvokeAIAppConfig.set_root()`, it's extraneous
- Remove `InvokeAIAppConfig.merge_from_file()`, it's extraneous
- Add `--config` to the app arg parser, add `InvokeAIAppConfig._config_file`, and consume in the config singleton getter
- `InvokeAIAppConfig.init_file_path` -> `InvokeAIAppConfig.config_file_path`
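For example (hypothetical paths), you might keep `~/profiles/sd15.yaml` and `~/profiles/sdxl.yaml` and launch with `invokeai-web --config ~/profiles/sdxl.yaml` to switch between them.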
The models from INITIAL_MODELS.yaml have been recreated as a structured Python object. This data is served on a new route. The model sources are compared against currently-installed models to determine if they are already installed or not.
This flag acts as a proxy that lets the `get_config()` function determine whether the full application is running.
If it is, the config will set the root, do HF login, etc.
If not (e.g. it's accessed by an external script), all that stuff will be skipped.
HF login, legacy yaml confs, and default init file are all handled during app setup.
All directories are created as they are needed by the app.
No need to check for a valid root dir - we will make it if it doesn't exist.
This offers a simple way to provide an HF token. If HF reports no valid token, the user is prompted until a valid token is provided, or they press Ctrl + C to cancel.
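A hedged sketch of the prompt loop (the `huggingface_hub` calls are real; the loop itself is illustrative of the app-setup logic):

```python
from huggingface_hub import HfFolder, whoami

def prompt_for_hf_token() -> None:
    while True:
        try:
            whoami()  # raises if the stored token is missing or invalid
            return
        except Exception:
            # Ctrl + C raises KeyboardInterrupt here, cancelling the prompt.
            HfFolder.save_token(input("Hugging Face token: "))
```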
Use the util function to calculate the RAM cache size on startup. This way, the `ram` setting will always be optimized for the system, even if the user adds or removes RAM. In other words, the default value is now dynamic.
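A hedged sketch of a dynamic default (the real util's heuristics may differ; the `psutil` usage and fraction are illustrative):

```python
import psutil

def get_default_ram_cache_size() -> float:
    """Scale the model RAM cache (in GB) with the system's total memory."""
    total_gb = psutil.virtual_memory().total / 2**30
    # Reserve a fraction of total RAM for the cache, with a sensible floor.
    return max(3.0, total_gb * 0.25)
```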
- Move the base of the t2i and clip_vision config models to DiffusersBase, which contains a field to record the model variant (e.g. "fp16")
- This restores the ability to load fp16 t2i and clip_vision models
- Also add defensive coding to load the vanilla model when the fp16 model has been replaced (or, more likely, the user's preferences changed since installation)
Co-authored-by: Lincoln Stein <lstein@gmail.com>
When consolidating all the model queries I messed up the query tags. Fixed now, so that when a model is installed, removed, or changed, the list refreshes.
translationBot(ui): update translation (Japanese)
Currently translated at 52.5% (576 of 1096 strings)
Co-authored-by: Gohsuke Shimada <ghoskay@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ja/
Translation: InvokeAI/Web UI
translationBot(ui): update translation (Italian)
Currently translated at 98.2% (1077 of 1096 strings)
Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
translationBot(ui): update translation (Russian)
Currently translated at 99.0% (1518 of 1533 strings)
Co-authored-by: Васянатор <ilabulanov339@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ru/
Translation: InvokeAI/Web UI
translationBot(ui): update translation (Italian)
Currently translated at 97.8% (1510 of 1543 strings)
Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
In order to allow for null and undefined metadata values, this hook returned a symbol to indicate that parsing failed or was pending.
For values where the parsed value will never be null or undefined, it is useful to get the value or null (instead of a symbol).
When running the configurator, the `legacy_models_conf_path` was stripped when saving the config file. Then the migration logic didn't fire correctly, and the custom models.yaml paths weren't migrated into the db.
- Rework the logic to migrate this path by adding it to the config object as a normal field that is not excluded from serialization.
- Rearrange the models.yaml migration logic to remove the legacy path after migrating, then write the config file. This way, the legacy path doesn't stick around.
- Move the schema version into the config object.
- Back up the config file before attempting migration.
- Add tests to cover this edge case
Hold onto `conf_path` temporarily while migrating `invokeai.yaml` so that it gets migrated correctly as the model installer starts up. Stashed as `legacy_models_yaml_path` in the config, excluded from serialization.
We have two problems with how argparse is being utilized:
- We parse CLI args as the `api_app.py` file is read. This causes a problem for pytest, which has an incompatible set of CLI args. Some tests import the FastAPI app, which triggers the config to parse CLI args, which receives the pytest args and fails.
- We've repeatedly had problems when something that uses the config is imported before the CLI args are parsed. When this happens, the root dir may not be set correctly, so we attempt to operate on incorrect paths.
To resolve these issues, we need to lift CLI arg parsing outside of the application code, but still let the application access the CLI args. We can create an external app entrypoint to do this.
- `InvokeAIArgs` is a simple helper class that parses CLI args and stores the result.
- `run_app()` is the new entrypoint. It first parses CLI args, then runs `invoke_api` to start the app.
The `invokeai-web` project script and `invokeai-web.py` dev script now call `run_app()` instead of `invoke_api()`.
The first time `get_config()` is called to get the singleton config object, it retrieves the args from `InvokeAIArgs`, sets the root dir if provided, then merges settings in from `invokeai.yaml`.
CLI arg parsing is now safely insulated from application code, but still accessible. And we don't need to worry about import order having an impact on anything, because by the time the app is running, we have already parsed CLI args. Whew!
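A hedged sketch of the arrangement (simplified; `InvokeAIArgs` and `run_app` are from this changeset, the parser details and import path are illustrative):

```python
from argparse import ArgumentParser, Namespace
from pathlib import Path

class InvokeAIArgs:
    """Parses CLI args exactly once, before any application code runs."""
    args: Namespace | None = None

    @staticmethod
    def parse_args() -> None:
        parser = ArgumentParser(prog="invokeai-web")
        parser.add_argument("--root", type=Path, help="InvokeAI root dir")
        parser.add_argument("--config", type=Path, help="Path to invokeai.yaml")
        InvokeAIArgs.args = parser.parse_args()

def run_app() -> None:
    InvokeAIArgs.parse_args()
    # Import the app only after args are parsed, so get_config() can consume them.
    from invokeai.app.api_app import invoke_api
    invoke_api()
```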
This fixes an issue with `test_images.py`, which tests the bulk images routers and imports the whole FastAPI app. This triggers the config logic which fails on the test runner, because it has no `invokeai.yaml`.
Also probably just good for graceful fallback.