Commit Graph

203 Commits

Author SHA1 Message Date
Lincoln Stein
a23dedd2ee make model manager v2 ready for PR review
- Replace legacy model manager service with the v2 manager.

- Update invocations to use new load interface.

- Fixed many, but not all, type-checking errors in the invocations. Most
  were unrelated to the model manager.

- Updated routes. All the new routes live under the route tag
  `model_manager_v2`. To avoid confusion with the old routes,
  they have the URL prefix `/api/v2/models` (see the sketch after
  this list). The old routes have been de-registered.

- Added a pytest for the loader.

- Updated documentation in contributing/MODEL_MANAGER.md
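As a quick illustration of the new prefix (a sketch only: the exact endpoint path and the default port 9090 are assumptions here, not taken from the commit):

```py
import requests

# The old model routes are de-registered; model queries now go through
# the model_manager_v2 routes under the /api/v2/models prefix.
response = requests.get("http://localhost:9090/api/v2/models/")
print(response.status_code)
```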
2024-03-01 10:42:33 +11:00
Lincoln Stein
78ef946e01 BREAKING CHANGES: invocations now require model key, not base/type/name
- Implement new model loader and modify invocations and embeddings

- Finish implementing loaders for all models currently supported by
  InvokeAI.

- Move lora, textual_inversion, and model patching support into
  backend/embeddings.

- Restore support for model cache statistics collection (a little ugly,
  needs work).

- Fixed up invocations that load and patch models.

- Move the seamless and silence-warnings utils into a better location
2024-03-01 10:42:33 +11:00
psychedelicious
05fb485d33 feat(nodes): move ConditioningFieldData to conditioning_data.py 2024-03-01 10:42:33 +11:00
Wubbbi
73a077956b Why did my IDE change the comment? 2024-02-01 20:40:28 -05:00
Wubbbi
5e1e50bd47 Fix hopefully last import 2024-02-01 20:40:28 -05:00
Jonathan
83a9e26cd8
Respect torch-sdp in config.yaml (#5353)
If the user specifies `torch-sdp` as the attention type in `config.yaml`, we can go ahead and use it (if available) rather than always throwing an exception.
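A minimal sketch of the availability check, assuming it amounts to probing for PyTorch 2.0's `scaled_dot_product_attention` (illustrative names, not the actual config-handling code):

```py
import torch

def can_use_torch_sdp() -> bool:
    # torch-sdp relies on torch.nn.functional.scaled_dot_product_attention,
    # which is only present in PyTorch >= 2.0.
    return hasattr(torch.nn.functional, "scaled_dot_product_attention")
```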
2023-12-28 05:46:28 +00:00
Ryan Dick
5127e9df2d Fix error caused by bump to diffusers 0.24. We pass kwargs now instead of positional args so that we are more robust to future changes. 2023-12-13 09:17:30 -05:00
Damian Stewart
0beb08686c
Add CFG Rescale option for supporting zero-terminal SNR models (#4335)
* add support for CFG rescale

* fix typo

* move rescale position and tweak docs

* move input position

* implement suggestions from github and discord

* cleanup unused code

* add back dropped FieldDescription

* fix(ui): revert unrelated UI changes

* chore(nodes): bump denoise_latents version 1.4.0 -> 1.5.0

* feat(nodes): add cfg_rescale_multiplier to metadata node

* feat(ui): add cfg rescale multiplier to linear UI

- add param to state
- update graph builders
- add UI under advanced
- add metadata handling & recall
- regen types

* chore: black

* fix(backend): make `StableDiffusionGeneratorPipeline._rescale_cfg()` staticmethod

This doesn't need access to the class instance (see the rescale sketch below).

* feat(backend): add docstring for `_rescale_cfg()` method

* feat(ui): update cfg rescale mult translation string

---------

Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
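A minimal sketch of the rescale step, following the standard zero-terminal-SNR formulation this feature targets (illustrative names, not the pipeline's exact code):

```py
import torch

def rescale_cfg(guided: torch.Tensor, cond: torch.Tensor, multiplier: float) -> torch.Tensor:
    # Shrink the CFG output's standard deviation back toward that of the
    # conditioned prediction...
    rescaled = guided * (cond.std() / guided.std())
    # ...then blend with the unrescaled output by the multiplier.
    return multiplier * rescaled + (1.0 - multiplier) * guided
```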
2023-11-30 20:55:20 +11:00
psychedelicious
6494e8e551 chore: ruff format 2023-11-11 10:55:40 +11:00
psychedelicious
513fceac82 chore: ruff check - fix pycodestyle 2023-11-11 10:55:33 +11:00
psychedelicious
99a8ebe3a0 chore: ruff check - fix flake8-bugbear 2023-11-11 10:55:28 +11:00
psychedelicious
3a136420d5 chore: ruff check - fix flake8-comprehensions 2023-11-11 10:55:23 +11:00
blessedcoolant
356b5a41a9 wip: Add LCMScheduler 2023-11-10 06:54:36 -08:00
Kent Keirsey
67f2616d5a
Merge branch 'main' into revert-4923-revert-4914-feat/mix-cnet-t2iadapter 2023-11-06 07:34:51 -08:00
Ryan Dick
a078efc0f2 Merge branch 'main' into ryan/multi-image-ip 2023-10-18 08:59:12 -04:00
Kent Keirsey
b7555ddae8 Revert "Revert "chore: lint""
This reverts commit 38e7eb8878.
2023-10-17 11:59:19 -04:00
Kent Keirsey
8afc47018b Revert "Revert "Cleaning up (removing diagnostic prints)""
This reverts commit 6e697b7b6f.
2023-10-17 11:59:19 -04:00
Kent Keirsey
a97ec88e06 Revert "Revert "Changes to _apply_standard_conditioning_sequentially() and _apply_cross_attention_controlled_conditioning() to reflect changes to T2I-Adapter implementation to allow usage of T2I-Adapter and ControlNet at the same time.""
This reverts commit c04fb451ee.
2023-10-17 11:59:19 -04:00
Kent Keirsey
282d36b640 Revert "Revert "Fixing some var and arg names.""
This reverts commit 58a0709c1e.
2023-10-17 11:59:19 -04:00
psychedelicious
58a0709c1e Revert "Fixing some var and arg names."
This reverts commit f11ba81a8d.
2023-10-17 11:59:11 -04:00
psychedelicious
c04fb451ee Revert "Changes to _apply_standard_conditioning_sequentially() and _apply_cross_attention_controlled_conditioning() to reflect changes to T2I-Adapter implementation to allow usage of T2I-Adapter and ControlNet at the same time."
This reverts commit 378689a519.
2023-10-17 11:59:11 -04:00
psychedelicious
6e697b7b6f Revert "Cleaning up (removing diagnostic prints)"
This reverts commit 06f8a3276d.
2023-10-17 11:59:11 -04:00
psychedelicious
38e7eb8878 Revert "chore: lint"
This reverts commit fff29d663d.
2023-10-17 11:59:11 -04:00
psychedelicious
fff29d663d chore: lint 2023-10-17 19:42:06 +11:00
user1
06f8a3276d Cleaning up (removing diagnostic prints) 2023-10-17 19:42:06 +11:00
user1
378689a519 Changes to _apply_standard_conditioning_sequentially() and _apply_cross_attention_controlled_conditioning() to reflect changes to T2I-Adapter implementation to allow usage of T2I-Adapter and ControlNet at the same time.
Also, the PREVIOUS commit (@8d3885d, which was already pushed to the GitHub repo) has the wrong commit message, but it is too late to fix that without a force push or other mucking about that I'm reluctant to do. That commit is actually the one that has all the changes to diffusers_pipeline.py to use the additional arg down_intrablock_additional_residuals (introduced in diffusers PR https://github.com/huggingface/diffusers/pull/5362) to detangle T2I-Adapter inputs from ControlNet inputs to the main UNet.
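A hedged sketch of the resulting UNet call (the kwargs are as in diffusers after PR 5362; the surrounding function and argument names are illustrative):

```py
import torch
from diffusers import UNet2DConditionModel

def denoise_step(
    unet: UNet2DConditionModel,
    latents: torch.Tensor,
    timestep: torch.Tensor,
    prompt_embeds: torch.Tensor,
    t2i_residuals: list[torch.Tensor] | None = None,
    controlnet_down: list[torch.Tensor] | None = None,
    controlnet_mid: torch.Tensor | None = None,
) -> torch.Tensor:
    # T2I-Adapter residuals travel via down_intrablock_additional_residuals,
    # while ControlNet keeps down_block_additional_residuals and
    # mid_block_additional_residual, so the two inputs no longer collide.
    return unet(
        latents,
        timestep,
        encoder_hidden_states=prompt_embeds,
        down_intrablock_additional_residuals=t2i_residuals,
        down_block_additional_residuals=controlnet_down,
        mid_block_additional_residual=controlnet_mid,
    ).sample
```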
2023-10-17 19:42:06 +11:00
user1
f11ba81a8d Fixing some var and arg names. 2023-10-17 19:42:06 +11:00
psychedelicious
c238a7f18b feat(api): chore: pydantic & fastapi upgrade
Upgrade pydantic and fastapi to latest.

- pydantic~=2.4.2
- fastapi~=0.103.2
- fastapi-events~=0.9.1

**Big Changes**

There are a number of logic changes needed to support pydantic v2. Most changes are very simple, like using the new methods to serialize and deserialize models, but there are a few more complex changes.

**Invocations**

The biggest change relates to invocation creation, instantiation and validation.

Because pydantic v2 moves all validation logic into the Rust pydantic-core, we may no longer directly stick our fingers into the validation pie.

Previously, we (ab)used models and fields to allow invocation fields to be optional at instantiation, but required when `invoke()` is called. We directly manipulated the fields and invocation models when calling `invoke()`.

With pydantic v2, this is much more involved. Changes to the Python wrapper do not propagate down to the Rust validation logic - you have to rebuild the model. This causes problems with concurrent access to the invocation classes and is not a free operation.

This logic has been totally refactored and we do not need to change the model any more. The details are in `baseinvocation.py`, in the `InputField` function and `BaseInvocation.invoke_internal()` method.

In the end, this implementation is cleaner.
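A minimal sketch of the pattern, under the assumption that the invoke-time requirement is recorded in `json_schema_extra` and enforced at call time (the real code in `baseinvocation.py` is more involved):

```py
from typing import Any
from pydantic import BaseModel, Field

def InputField(**kwargs: Any) -> Any:
    # Fields stay optional at instantiation; the invoke-time requirement is
    # stashed in json_schema_extra rather than in pydantic's own validation.
    extra = kwargs.setdefault("json_schema_extra", {})
    extra["required_at_invoke"] = True
    return Field(default=None, **kwargs)

class BaseInvocation(BaseModel):
    def invoke(self) -> Any:
        raise NotImplementedError

    def invoke_internal(self) -> Any:
        # Enforce invoke-time requiredness without rebuilding the model.
        for name, field in type(self).model_fields.items():
            extra = field.json_schema_extra
            if isinstance(extra, dict) and extra.get("required_at_invoke") and getattr(self, name) is None:
                raise ValueError(f"Missing required field: {name}")
        return self.invoke()

class ResizeInvocation(BaseInvocation):
    width: int = InputField(description="Target width")

ResizeInvocation()  # OK: width may be omitted at instantiation
# ResizeInvocation().invoke_internal()  # raises: width is required at invoke time
```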

**Invocation Fields**

In pydantic v2, you can no longer directly add or remove fields from a model.

Previously, we did this to add the `type` field to invocations.

**Invocation Decorators**

With pydantic v2, we instead use the imperative `create_model()` API to create a new model with the additional field. This is done in `baseinvocation.py` in the `invocation()` wrapper.

A similar technique is used for `invocation_output()`.
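A minimal sketch of the `create_model()` technique (illustrative only; the real `invocation()` wrapper in `baseinvocation.py` does more than this):

```py
from typing import Literal
from pydantic import BaseModel, create_model

def invocation(invocation_type: str):
    def wrapper(cls: type[BaseModel]) -> type[BaseModel]:
        # Rebuild the class with an added literal `type` field instead of
        # mutating cls.model_fields, which pydantic v2 forbids.
        return create_model(
            cls.__name__,
            __base__=cls,
            type=(Literal[invocation_type], invocation_type),
        )
    return wrapper

@invocation("add")
class AddInvocation(BaseModel):
    a: int = 0
    b: int = 0

assert AddInvocation().type == "add"
```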

**Minor Changes**

There are a number of minor changes around the pydantic v2 models API.

**Protected `model_` Namespace**

All models' pydantic-provided methods and attributes are prefixed with `model_` and this is considered a protected namespace. This causes some conflict, because "model" means something to us, and we have a ton of pydantic models with attributes starting with "model_".

Fortunately, there are no direct conflicts. However, in any pydantic model where we define an attribute or method that starts with "model_", we must set the protected namespaces to an empty tuple.

```py
class IPAdapterModelField(BaseModel):
    model_name: str = Field(description="Name of the IP-Adapter model")
    base_model: BaseModelType = Field(description="Base model")

    model_config = ConfigDict(protected_namespaces=())
```

**Model Serialization**

Pydantic models no longer have `Model.dict()` or `Model.json()`.

Instead, we use `Model.model_dump()` or `Model.model_dump_json()`.
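For example, with a trivial model:

```py
from pydantic import BaseModel

class Point(BaseModel):
    x: int
    y: int

p = Point(x=1, y=2)
p.model_dump()       # {'x': 1, 'y': 2} -- formerly p.dict()
p.model_dump_json()  # '{"x":1,"y":2}'  -- formerly p.json()
```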

**Model Deserialization**

Pydantic models no longer have `Model.parse_obj()` or `Model.parse_raw()`, and there are no `parse_raw_as()` or `parse_obj_as()` functions.

Instead, you need to create a `TypeAdapter` object to parse python objects or JSON into a model.

```py
adapter_graph = TypeAdapter(Graph)
deserialized_graph_from_json = adapter_graph.validate_json(graph_json)
deserialized_graph_from_dict = adapter_graph.validate_python(graph_dict)
```

**Field Customisation**

Pydantic `Field`s no longer accept arbitrary args.

Now, you must put all additional arbitrary args in a `json_schema_extra` arg on the field.
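For example (`ui_hidden` here stands in for any arbitrary extra arg):

```py
from pydantic import BaseModel, Field

class ImageField(BaseModel):
    # v1 allowed Field(description=..., ui_hidden=True); v2 requires
    # arbitrary extras to be packed into json_schema_extra.
    image_name: str = Field(
        description="The name of the image",
        json_schema_extra={"ui_hidden": True},
    )
```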

**Schema Customisation**

FastAPI and pydantic schema generation now follows the OpenAPI version 3.1 spec.

This necessitates two changes:
- Our schema customization logic has been revised
- Schema parsing to build node templates has been revised

The specifics aren't important, but this does present additional surface area for bugs.

**Performance Improvements**

Pydantic v2 is a full rewrite with a Rust backend. This offers a substantial performance improvement (pydantic claims 5x to 50x depending on the task). We'll notice this the most during serialization and deserialization of sessions/graphs, which happens very often - a couple of times per node.

I haven't done any benchmarks, but anecdotally, graph execution is much faster. Also, very large graphs - like those with massive iterators - are much, much faster.
2023-10-17 14:59:25 +11:00
Ryan Dick
8464450a53 Add support for multi-image IP-Adapter. 2023-10-14 12:50:33 -04:00
Ryan Dick
971ccfb081 Refactor multi-IP-Adapter to clean up the interface around changing scales. 2023-10-06 20:43:43 -04:00
Ryan Dick
43a3c3c7ea Fix typo in setting IP-Adapter scales. 2023-10-06 20:43:43 -04:00
Ryan Dick
d8d0c9af09 Fix handling of scales with multiple IP-Adapters. 2023-10-06 20:43:43 -04:00
Ryan Dick
7ca456d674 Update IP-Adapter model to enable running multiple IP-Adapters at once. (Not tested yet.) 2023-10-06 20:43:43 -04:00
Ryan Dick
78828b6b9c WIP - Accept a list of IPAdapterFields in DenoiseLatents. 2023-10-06 20:43:43 -04:00
Ryan Dick
78377469db
Add support for T2I-Adapter in node workflows (#4612)
* Bump diffusers to 0.21.2.

* Add T2IAdapterInvocation boilerplate.

* Add T2I-Adapter model to model-management.

* (minor) Tidy prepare_control_image(...).

* Add logic to run the T2I-Adapter models at the start of the DenoiseLatentsInvocation.

* Add logic for applying T2I-Adapter weights and accumulating.

* Add T2IAdapter to MODEL_CLASSES map.

* yarn typegen

* Add model probes for T2I-Adapter models.

* Add all of the frontend boilerplate required to use T2I-Adapter in the nodes editor.

* Add T2IAdapterModel.convert_if_required(...).

* Fix errors in T2I-Adapter input image sizing logic.

* Fix bug with handling of multiple T2I-Adapters.

* black / flake8

* Fix typo

* yarn build

* Add num_channels param to prepare_control_image(...).

* Link to upstream diffusers bugfix PR that currently requires a workaround.

* feat: Add Color Map Preprocessor

Needed for the color T2I Adapter

* feat: Add Color Map Preprocessor to Linear UI

* Revert "feat: Add Color Map Preprocessor"

This reverts commit a1119a00bf.

* Revert "feat: Add Color Map Preprocessor to Linear UI"

This reverts commit bd8a9b82d8.

* Fix T2I-Adapter field rendering in workflow editor.

* yarn build, yarn typegen

---------

Co-authored-by: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2023-10-05 16:29:16 +11:00
blessedcoolant
e0dddbd38e chore: fix isort issues 2023-09-17 12:13:03 +12:00
blessedcoolant
b7773c9962 chore: black & lint fixes 2023-09-17 12:00:21 +12:00
user1
c48e648cbb Added per-step setting of IP-Adapter weights (for param easing, etc.) 2023-09-16 12:36:16 -07:00
user1
7ee13879e3 Added check in IP-Adapter to avoid begin/end step percent handling if use of IP-Adapter is already turned off due to potential clash with other cross attention control. 2023-09-16 09:29:50 -07:00
user1
ced297ed21 Initial implementation of IP-Adapter "begin_step_percent" and "end_step_percent" for controlling on which steps IP-Adapter is applied in the denoising loop. 2023-09-16 08:24:12 -07:00
Ryan Dick
b57acb7353 Merge branch 'main' into feat/ip-adapter 2023-09-15 13:15:25 -04:00
Ryan Dick
c2f074dc2f Fix python static checks. 2023-09-14 16:48:47 -04:00
Ryan Dick
781e8521d5 Eliminate the need for IPAdapter.initialize(). 2023-09-14 15:02:59 -04:00
Ryan Dick
c34b359c36 (minor) Remove duplicate TODO. 2023-09-13 21:25:20 -04:00
Ryan Dick
1c8991a3df Use CLIPVisionModel under model management for IP-Adapter. 2023-09-13 19:10:02 -04:00
Ryan Dick
3ee9a21647 Initial (barely) working version of IP-Adapter model management. 2023-09-13 08:27:24 -04:00
Martin Kristiansen
e467ca7f1b Apply black, isort, flake8 2023-09-12 13:01:58 -04:00
Martin Kristiansen
caea6d11c6 isort wip 2 2023-09-12 13:01:58 -04:00
Ryan Dick
50a0691514 flake8 2023-09-08 18:05:31 -04:00
Ryan Dick
a255624984 black 2023-09-08 17:55:23 -04:00