Commit Graph

249 Commits

Author SHA1 Message Date
psychedelicious
1cb1b60b4c tidy: "check_root.py" -> "check_directories.py" 2024-03-19 09:24:28 +11:00
psychedelicious
1d4517d00d tidy: "validate_root" -> "validate_directories" 2024-03-19 09:24:28 +11:00
Lincoln Stein
5d16a40b95 fix invokeai-configure to use isolated argument-parsing pattern 2024-03-19 09:24:28 +11:00
Lincoln Stein
d871fca643 partially address --root CLI argument handling
- fix places where `get_config()` is being called at import time rather
  than at run time.

- add a regression test for import-time get_config() calls.
2024-03-19 09:24:28 +11:00
psychedelicious
e76cc71e81 fix(config): edge cases in models.yaml migration
When running the configurator, the `legacy_models_conf_path` was stripped when saving the config file. As a result, the migration logic didn't fire correctly, and the custom models.yaml paths weren't migrated into the db.

- Rework the logic to migrate this path by adding it to the config object as a normal field that is not excluded from serialization.
- Rearrange the models.yaml migration logic to remove the legacy path after migrating, then write the config file. This way, the legacy path doesn't stick around.
- Move the schema version into the config object.
- Back up the config file before attempting migration.
- Add tests to cover this edge case (a sketch of the revised ordering follows below).
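
A minimal sketch of the revised ordering, with illustrative names (`migrate_models_to_db` is a hypothetical stand-in, not the actual InvokeAI API):

```py
from pathlib import Path
import shutil

def migrate_models_to_db(models_yaml_path: Path) -> None:
    ...  # stand-in for the actual models.yaml -> db migration

def migrate_and_save(config) -> None:
    config_path = Path(config.init_file_path)

    # Back up the config file before attempting migration.
    shutil.copy(config_path, config_path.with_name(config_path.name + ".bak"))

    if config.legacy_models_conf_path is not None:
        # Migrate the custom models.yaml entries first...
        migrate_models_to_db(Path(config.legacy_models_conf_path))
        # ...then clear the legacy path so it doesn't stick around
        # in the saved config file.
        config.legacy_models_conf_path = None

    config.write_file(config_path)
```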
2024-03-19 09:24:28 +11:00
psychedelicious
5179587b5a feat(config): restore ignore_missing_core_models arg 2024-03-19 09:24:28 +11:00
psychedelicious
cb180909f7 fix(install): resolve config-related issues with configurator
- Do not use the singleton app config; it caused a lot of weirdness
- Update logic to use new config object
- Remove unused code
2024-03-19 09:24:28 +11:00
psychedelicious
f69938c6a8 fix(config): revised config methods
- `write_file` requires a destination file path
- `read_config` -> `merge_from_file`, if no path is provided, reads from `self.init_file_path`
- update app, tests to use new methods
- fix configurator; it was overwriting config file data unexpectedly
2024-03-19 09:24:28 +11:00
psychedelicious
77b86e9ad5 fix(install): remove broken v2.3 -> v3 migration logic from configurator 2024-03-19 09:24:28 +11:00
psychedelicious
a6181b5759 fix(install): update configurator to use new config system 2024-03-19 09:24:28 +11:00
psychedelicious
b4b0af7c60 fix(install): do not use deprecated pydantic methods 2024-03-19 09:24:28 +11:00
psychedelicious
b8c46fb15b fix(config): split check_invokeai_root into a separate validation function; use it in model_install to determine whether the configurator needs to run 2024-03-19 09:24:28 +11:00
psychedelicious
7716a4a8c7 fix(config): use correct config in install_helper 2024-03-19 09:24:28 +11:00
Lincoln Stein
71a1740740 Remove core safetensors->diffusers conversion models
- No longer install core conversion models. Use the HuggingFace cache to load
  them if and when needed.

- Call directly into the diffusers library to perform conversions, with only shallow
   wrappers around it to massage arguments, etc. (a sketch follows below).

- At root configuration time, do not create all the possible model subdirectories,
  but let them be created and populated at model install time.

- Remove checks for missing core conversion files, since they are no
  longer installed.
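
As a rough illustration of the "shallow wrapper" approach: diffusers can convert a single-file checkpoint on its own, resolving any support models via the HuggingFace cache. This is a sketch, not the actual InvokeAI wrapper, which handles more model types and arguments:

```py
from diffusers import StableDiffusionPipeline

def convert_to_diffusers(checkpoint_path: str, output_dir: str) -> None:
    # diffusers pulls tokenizer/scheduler/etc. from the HuggingFace
    # cache, so no locally installed "core" conversion models are needed.
    pipe = StableDiffusionPipeline.from_single_file(checkpoint_path)
    pipe.save_pretrained(output_dir)
```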
2024-03-17 19:13:18 -04:00
Brandon Rising
149ff758b9 Run ruff 2024-03-11 15:53:00 -04:00
Brandon Rising
65d415d5aa Remove redundant with_suffix call 2024-03-11 15:53:00 -04:00
Brandon Rising
c74c1927ec Gracefully error without deleting invokeai.yaml 2024-03-11 15:53:00 -04:00
Brandon Rising
46f32c5e3c Remove references to the no-longer-existing invokeai.app.services.model_metadata package 2024-03-05 19:58:25 -05:00
psychedelicious
dd9daf8efb chore: ruff 2024-03-01 10:42:33 +11:00
Lincoln Stein
cc41e8912c several small model install enhancements
- Support extended HF repo ID syntax in the TUI. This allows
  installation of subfolders and safetensors files, as in
  `XpucT/Deliberate::Deliberate_v5.safetensors` (a parsing sketch follows below).

- Add `error` and `error_traceback` properties to the install
  job objects.

- Rename the `heuristic_import` route to `heuristic_install`.

- Fix the example `config` input in the `heuristic_install` route.
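
For illustration, the extended syntax splits on a `::` separator; a minimal parsing sketch (not the actual TUI code):

```py
def parse_extended_repoid(repoid: str) -> tuple[str, str | None]:
    """Split 'owner/repo::subfolder_or_file' into (repo_id, subpath)."""
    repo_id, _, subpath = repoid.partition("::")
    return repo_id, subpath or None

# e.g. installing a single safetensors file from a repo:
assert parse_extended_repoid("XpucT/Deliberate::Deliberate_v5.safetensors") == (
    "XpucT/Deliberate",
    "Deliberate_v5.safetensors",
)
```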
2024-03-01 10:42:33 +11:00
Lincoln Stein
5bb3aeaccd remove startup dependency on legacy models.yaml file 2024-03-01 10:42:33 +11:00
Lincoln Stein
af2117dc0c remove errant def that was crashing invokeai-configure 2024-03-01 10:42:33 +11:00
psychedelicious
5a3195f757 final tidying before marking PR as ready for review
- Replace AnyModelLoader with ModelLoaderRegistry
- Fix type check errors in multiple files
- Remove apparently unneeded `get_model_config_enum()` method from model manager
- Remove last vestiges of old model manager
- Updated tests and documentation

resolve conflict with seamless.py
2024-03-01 10:42:33 +11:00
Lincoln Stein
996eb96b4e Fix issues identified during PR review by RyanjDick and brandonrising
- ModelMetadataStoreService is now injected into ModelRecordStoreService
  (these two services are really joined at the hip, and should someday be merged)
- ModelRecordStoreService is now injected into ModelManagerService
- Reduced timeout value for the various installer and download wait*() methods
- Introduced a Mock modelmanager for testing
- Replaced a bare print() statement with _logger in the install helper backend.
- Removed unused code from model loader init file
- Made `locker` a private variable in the `LoadedModel` object.
- Fixed up model merge frontend (will be deprecated anyway!)
2024-03-01 10:42:33 +11:00
Lincoln Stein
db340bc253 fix invokeai_configure script to work with new mm; rename CLIs 2024-03-01 10:42:33 +11:00
Lincoln Stein
78ef946e01 BREAKING CHANGES: invocations now require model key, not base/type/name
- Implement new model loader and modify invocations and embeddings

- Finish implementing loaders for all models currently supported by
  InvokeAI.

- Move lora, textual_inversion, and model patching support into
  backend/embeddings.

- Restore support for model cache statistics collection (a little ugly,
  needs work).

- Fixed up invocations that load and patch models.

- Move seamless and silencewarnings utils into a better location
2024-03-01 10:42:33 +11:00
Lincoln Stein
5745ce9c7d Multiple refinements on loaders:
- Cache stat collection enabled.
- Implemented ONNX loading.
- Add ability to specify the repo version variant in installer CLI.
- If the caller asks for a repo version that doesn't exist, fall back
  to the empty version rather than raising an error (sketched below).
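
A sketch of that fallback, assuming (as diffusers does) that a missing variant raises ValueError; this is an illustration, not the installer's actual code:

```py
from diffusers import DiffusionPipeline

def load_with_variant_fallback(repo_id: str, variant: str | None = "fp16"):
    try:
        return DiffusionPipeline.from_pretrained(repo_id, variant=variant)
    except ValueError:
        # The requested variant doesn't exist in the repo; fall back to
        # the empty (default) version rather than raising an error.
        return DiffusionPipeline.from_pretrained(repo_id)
```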
2024-03-01 10:42:33 +11:00
Lincoln Stein
67eb715093 loaders for main, controlnet, ip-adapter, clipvision and t2i 2024-03-01 10:42:33 +11:00
Lincoln Stein
8ba5360269 model loading and conversion implemented for vaes 2024-03-01 10:42:33 +11:00
Lincoln Stein
f2777f5096
Port the command-line tools to use model_manager2 (#5546)
* Port the command-line tools to use model_manager2

1. Reimplement the following:

  - invokeai-model-install
  - invokeai-merge
  - invokeai-ti

  To avoid breaking the original model manager, the updated tools
  have been renamed invokeai-model-install2 and invokeai-merge2. The
  textual inversion training script should continue to work with
  existing installations. The "starter" models now live in
  `invokeai/configs/INITIAL_MODELS2.yaml`.

  When the full model manager 2 is in place and working, I'll rename
  these files and commands.

2. Add the `merge` route to the web API. This will merge two or three models,
   resulting in a new one.

   - Note that because the model installer selectively installs the `fp16` variant
     of models (rather than both 16- and 32-bit versions as previously),
     the diffusers merge script will choke on any huggingface diffusers models
     that were downloaded with the new installer. Previously-downloaded models
     should continue to merge correctly. I have a PR
     upstream https://github.com/huggingface/diffusers/pull/6670 to fix
     this.

3. (more important!)
  While implementing the CLI tools, I found and fixed a number of small
  runtime bugs in the model_manager2 implementation:

  - During model database migration, if a registered models file was
    not found on disk, the migration would be aborted. Now the
    offending model is skipped with a log warning (see the sketch after this list).

  - Caught and fixed a condition in which the installer would download the
    entire diffusers repo when the user provided a single `.safetensors`
    file URL.

  - Caught and fixed a condition in which the installer would raise an
    exception and stop the app when a request for an unknown model's metadata
    was passed to Civitai. Now an error is logged and the installer continues.

  - Replaced the LoWRA starter LoRA with FlatColor. The former has been removed
    from Civitai.
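
The skip-with-warning pattern for the first fix looks roughly like this (the record shape and `register_model` are illustrative stand-ins):

```py
import logging
from pathlib import Path

logger = logging.getLogger(__name__)

def register_model(record: dict) -> None:
    ...  # stand-in for the actual registration step

def migrate_registered_models(records: list[dict]) -> None:
    for record in records:
        path = Path(record["path"])
        if not path.exists():
            # Previously a missing file aborted the whole migration;
            # now the offending model is skipped with a log warning.
            logger.warning("Model file %s not found on disk; skipping.", path)
            continue
        register_model(record)
```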

* fix ruff issue

---------

Co-authored-by: Lincoln Stein <lstein@gmail.com>
2024-02-02 17:18:47 +00:00
Brandon Rising
a0996b1c0a Fix ruff styling 2024-01-31 07:16:14 -06:00
Brandon Rising
522ff4a042 civit -> civitai 2024-01-31 07:16:14 -06:00
Brandon Rising
5d773dc94c Remove debug line 2024-01-31 07:16:14 -06:00
Brandon Rising
088e3420e6 Allow passing of civit api key via config 2024-01-31 07:16:14 -06:00
Brandon Rising
14efc95707 Allow passing of a civit api key 2024-01-31 07:16:14 -06:00
Neil Wang
aa25ea62a5
fix(backend) installed models being redownloaded (#5526)
* fix

* fix ruff errors

---------

Co-authored-by: Lincoln Stein <lstein@gmail.com>
2024-01-18 16:53:53 -05:00
Lincoln Stein
4957a360ff close #5209 2023-12-21 23:02:57 -05:00
psychedelicious
6494e8e551 chore: ruff format 2023-11-11 10:55:40 +11:00
psychedelicious
99a8ebe3a0 chore: ruff check - fix flake8-bugbear 2023-11-11 10:55:28 +11:00
psychedelicious
3a136420d5 chore: ruff check - fix flake8-comprehensions 2023-11-11 10:55:23 +11:00
Lincoln Stein
8702a63197 add support for downloading and installing LCM lora diffusers models 2023-11-10 17:51:30 -05:00
blessedcoolant
356b5a41a9 wip: Add LCMScheduler 2023-11-10 06:54:36 -08:00
psychedelicious
2570497d83 fix(installer): fix import of ValidationError
It was being imported from a deprecated module
2023-11-10 06:11:15 +11:00
Lincoln Stein
6cbc69f3b7 support conversion of controlnets from safetensors to diffusers 2023-10-23 22:06:10 -04:00
psychedelicious
c238a7f18b feat(api): chore: pydantic & fastapi upgrade
Upgrade pydantic and fastapi to latest.

- pydantic~=2.4.2
- fastapi~=0.103.2
- fastapi-events~=0.9.1

**Big Changes**

There are a number of logic changes needed to support pydantic v2. Most changes are very simple, like using the new methods to serialize and deserialize models, but there are a few more complex changes.

**Invocations**

The biggest change relates to invocation creation, instantiation and validation.

Because pydantic v2 moves all validation logic into the rust pydantic-core, we may no longer directly stick our fingers into the validation pie.

Previously, we (ab)used models and fields to allow invocation fields to be optional at instantiation, but required when `invoke()` is called. We directly manipulated the fields and invocation models when calling `invoke()`.

With pydantic v2, this is much more involved. Changes to the python wrapper do not propagate down to the rust validation logic - you have to rebuild the model. This causes problems with concurrent access to the invocation classes and is not a free operation.

This logic has been totally refactored and we do not need to change the model any more. The details are in `baseinvocation.py`, in the `InputField` function and `BaseInvocation.invoke_internal()` method.

In the end, this implementation is cleaner.

**Invocation Fields**

In pydantic v2, you can no longer directly add or remove fields from a model.

Previously, we did this to add the `type` field to invocations.

**Invocation Decorators**

With pydantic v2, we instead use the imperative `create_model()` API to create a new model with the additional field. This is done in `baseinvocation.py` in the `invocation()` wrapper.

A similar technique is used for `invocation_output()`.
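
A simplified sketch of the technique (the decorator below is an illustration, not the exact InvokeAI code):

```py
from typing import Literal
from pydantic import BaseModel, create_model

def invocation(invocation_type: str):
    def wrapper(cls: type[BaseModel]) -> type[BaseModel]:
        # Rather than mutating cls.__fields__ (no longer allowed in v2),
        # build a new model that adds a literal `type` field.
        return create_model(
            cls.__name__,
            __base__=cls,
            type=(Literal[invocation_type], invocation_type),
        )
    return wrapper

@invocation("resize")
class ResizeInvocation(BaseModel):
    width: int = 512

assert ResizeInvocation().type == "resize"
```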

**Minor Changes**

There are a number of minor changes around the pydantic v2 models API.

**Protected `model_` Namespace**

All models' pydantic-provided methods and attributes are prefixed with `model_` and this is considered a protected namespace. This causes some conflict, because "model" means something to us, and we have a ton of pydantic models with attributes starting with "model_".

Fortunately, there are no direct conflicts. However, in any pydantic model where we define an attribute or method that starts with "model_", we must set the protected namespaces to an empty tuple.

```py
class IPAdapterModelField(BaseModel):
    model_name: str = Field(description="Name of the IP-Adapter model")
    base_model: BaseModelType = Field(description="Base model")

    model_config = ConfigDict(protected_namespaces=())
```

**Model Serialization**

Pydantic models no longer have `Model.dict()` or `Model.json()`.

Instead, we use `Model.model_dump()` or `Model.model_dump_json()`.
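
For instance:

```py
from pydantic import BaseModel

class Point(BaseModel):
    x: int
    y: int

p = Point(x=1, y=2)
p.model_dump()       # {'x': 1, 'y': 2} (formerly p.dict())
p.model_dump_json()  # '{"x":1,"y":2}'  (formerly p.json())
```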

**Model Deserialization**

Pydantic models no longer have `Model.parse_obj()` or `Model.parse_raw()`, and there are no `parse_raw_as()` or `parse_obj_as()` functions.

Instead, you need to create a `TypeAdapter` object to parse python objects or JSON into a model.

```py
adapter_graph = TypeAdapter(Graph)
deserialized_graph_from_json = adapter_graph.validate_json(graph_json)
deserialized_graph_from_dict = adapter_graph.validate_python(graph_dict)
```

**Field Customisation**

Pydantic `Field`s no longer accept arbitrary args.

Now, you must put all additional arbitrary args in a `json_schema_extra` arg on the field.
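
For example (the `ui_hidden` key is purely illustrative):

```py
from pydantic import BaseModel, Field

class ImageField(BaseModel):
    # v1 allowed: Field(description="...", ui_hidden=True)
    # v2 requires extras to live in json_schema_extra:
    image_name: str = Field(
        description="The name of the image",
        json_schema_extra={"ui_hidden": True},
    )
```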

**Schema Customisation**

FastAPI and pydantic schema generation now follows the OpenAPI version 3.1 spec.

This necessitates two changes:
- Our schema customization logic has been revised
- Schema parsing to build node templates has been revised

The specifics aren't important, but this does present additional surface area for bugs.

**Performance Improvements**

Pydantic v2 is a full rewrite with a rust backend. This offers a substantial performance improvement (pydantic claims 5x to 50x depending on the task). We'll notice this the most during serialization and deserialization of sessions/graphs, which happens very, very often - a couple of times per node.

I haven't done any benchmarks, but anecdotally, graph execution is much faster. Also, very large graphs - like those with massive iterators - are much, much faster.
2023-10-17 14:59:25 +11:00
blessedcoolant
2c6772f92f
Merge branch 'main' into bugfix/trim-whitespace-from-urls 2023-10-15 17:41:41 +05:30
Lincoln Stein
29c3f49182 enable the ram cache slider in invokeai-configure 2023-10-12 23:04:16 -04:00
Lincoln Stein
21d5969942 strip leading and trailing quotes as well as whitespace 2023-10-12 22:35:02 -04:00
Lincoln Stein
52274087f3 close #4536 2023-10-12 21:24:07 -04:00
Lincoln Stein
fe0cf2c160 remove hardcoded subfolder name from model downloader 2023-10-08 17:45:39 -04:00