Commit Graph

1018 Commits

Author SHA1 Message Date
Lincoln Stein
f65c7e2bfd
Merge branch 'main' into lstein/feat/simple-mm2-api 2024-04-28 13:42:54 -04:00
dunkeroni
f262b9032d fix: changed validation to not error on connection 2024-04-28 12:48:56 -04:00
dunkeroni
71c3197eab fix: denoise latents accepts CFG lists as input 2024-04-28 12:48:56 -04:00
Lincoln Stein
bb04f496e0 Merge branch 'main' into lstein/feat/simple-mm2-api 2024-04-28 11:33:26 -04:00
Lincoln Stein
70903ef057 refactor load_ckpt_from_url() 2024-04-28 11:33:23 -04:00
Lincoln Stein
d72f272f16 Address change requests in first round of PR reviews.
Pending:

- Move model install calls into model manager and create passthroughs in invocation_context.
- Consider splitting load_model_from_url() into a call to get the path and a call to load the path.
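
A hedged sketch of what that split could look like (hypothetical names, not the current API):

```python
# Hypothetical two-step replacement for load_model_from_url() (invented names):
from pathlib import Path

def download_and_cache_model(url: str) -> Path:
    """Step 1: fetch the file (or reuse the cache) and return its local path."""
    ...

def load_model_from_path(path: Path):
    """Step 2: load a model from an already-downloaded local path."""
    ...
```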
2024-04-24 23:53:30 -04:00
psychedelicious
5b8f77f990 tidy(nodes): move cnet mode literals to utils
Now they can be used in type signatures without circular imports.
2024-04-25 13:20:09 +10:00
psychedelicious
1bef13db37 feat(nodes): restore unet check on CreateGradientMaskInvocation
Special handling for inpainting models
2024-04-23 07:32:53 -04:00
dunkeroni
bc12d6654e chore: comments and ruff 2024-04-23 07:32:53 -04:00
dunkeroni
6d7c8d5f57 remove unet test 2024-04-23 07:32:53 -04:00
dunkeroni
781de914f4 fix threshold 2024-04-23 07:32:53 -04:00
dunkeroni
c094bad233 add unet check in gradient mask node 2024-04-23 07:32:53 -04:00
dunkeroni
0063014f2b gradient mask node test for inpaint 2024-04-23 07:32:53 -04:00
psychedelicious
a00e703144 feat(nodes): image mask to tensor invocation
Thanks @JPPhoto!
2024-04-20 11:32:08 -04:00
psychedelicious
ea527f5fe1 feat(nodes): add beta classification to mask tensor nodes 2024-04-19 09:32:56 -04:00
psychedelicious
aace364677 feat(nodes): add InvertTensorMaskInvocation 2024-04-19 09:32:56 -04:00
psychedelicious
602a59066e fix(nodes): handle invert in alpha_mask_to_tensor 2024-04-19 09:32:56 -04:00
psychedelicious
8911017bd1 feat(ui): selectable & draggable layers 2024-04-19 09:32:56 -04:00
psychedelicious
fc26f3e430 feat(nodes): add alpha mask to tensor invocation 2024-04-19 09:32:56 -04:00
Lincoln Stein
34cdfc61ab
Merge branch 'main' into lstein/feat/simple-mm2-api 2024-04-17 17:18:13 -04:00
blessedcoolant
6bab040d24 Merge branch 'main' into ip-adapter-style-comp 2024-04-16 21:14:06 +05:30
blessedcoolant
b39ce642b6 cleanup: raise ValueErrors when target_blocks don't match base model 2024-04-16 04:12:30 +05:30
Lincoln Stein
470a39935c fix merge conflicts with main 2024-04-15 09:24:57 -04:00
Lincoln Stein
e93f4d632d
[util] Add generic torch device class (#6174)
* introduce new abstraction layer for GPU devices

* add unit test for device abstraction

* fix ruff

* convert TorchDeviceSelect into a stateless class

* move logic to select context-specific execution device into context API

* add mock hardware environments to pytest

* remove dangling mocker fixture

* fix unit test for running on non-CUDA systems

* remove unimplemented get_execution_device() call

* remove autocast precision

* Multiple changes:

1. Remove TorchDeviceSelect.get_execution_device(), as well as calls to
   context.models.get_execution_device().
2. Rename TorchDeviceSelect to TorchDevice (see the sketch below).
3. Add back the legacy public API defined in `invocation_api`, including
   choose_precision().
4. Add a config file migration script to accommodate removal of precision=autocast.

* add deprecation warnings to choose_torch_device() and choose_precision()

* fix test crash

* remove app_config argument from choose_torch_device() and choose_torch_dtype()

---------

Co-authored-by: Lincoln Stein <lstein@gmail.com>
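
A rough sketch of what the stateless TorchDevice class might look like (an assumption based on the choose_torch_device()/choose_torch_dtype() names above, not the exact implementation):

```python
import torch

class TorchDevice:
    """Stateless helpers for picking an execution device and dtype (sketch)."""

    @classmethod
    def choose_torch_device(cls) -> torch.device:
        if torch.cuda.is_available():
            return torch.device("cuda")
        if torch.backends.mps.is_available():
            return torch.device("mps")
        return torch.device("cpu")

    @classmethod
    def choose_torch_dtype(cls, device: torch.device) -> torch.dtype:
        # Half precision on GPU-class devices, full precision on CPU.
        return torch.float16 if device.type in ("cuda", "mps") else torch.float32
```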
2024-04-15 13:12:49 +00:00
Lincoln Stein
41b909cbe3 port dw_openpose, depth_anything, and lama processors to new model download scheme 2024-04-14 15:57:03 -04:00
blessedcoolant
2d5786d3bb fix: Incorrect composition blocks for SD1.5 2024-04-13 13:52:10 +05:30
blessedcoolant
27466ffa1a chore: update the ip adapter node version 2024-04-13 13:39:08 +05:30
blessedcoolant
9fc73743b2 feat: support SD1.5 2024-04-13 12:30:39 +05:30
blessedcoolant
d4393e4170 chore: linter fixes 2024-04-13 12:14:45 +05:30
blessedcoolant
7a67fd6a06 Revert "chore: ruff fixes"
This reverts commit af36fe8c1e.
2024-04-13 12:10:20 +05:30
blessedcoolant
af36fe8c1e chore: ruff fixes 2024-04-13 12:08:52 +05:30
blessedcoolant
e9f16ac8c7 feat: add UI for IP Adapter Method 2024-04-13 12:06:59 +05:30
blessedcoolant
6ea183f0d4 wip: Initial Implementation IP Adapter Style & Comp Modes 2024-04-13 11:09:45 +05:30
Lincoln Stein
3a26c7bb9e fix merge conflicts 2024-04-12 00:58:11 -04:00
Lincoln Stein
df5ebdbc4f add invocation_context.load_ckpt_from_url() method 2024-04-12 00:55:21 -04:00
Lincoln Stein
dedf0c6ffa fix ruff issues 2024-04-12 07:19:16 +10:00
Lincoln Stein
579082ac10 [mm] clear the cache entry for a model that got an OOM during loading 2024-04-12 07:19:16 +10:00
psychedelicious
026d095afe fix(nodes): do not set seed on output latents from denoise latents
`LatentsField` objects have an optional `seed` field. This should only be populated when the latents are noise, generated from a seed.

`DenoiseLatentsInvocation` needs a seed value for scheduler initialization. It's used in a few places, and there is some logic for determining the seed to use with a series of fallbacks:
- Use the seed from the noise (a `LatentsField` object)
- Use the seed from the latents (a `LatentsField` object - normally it won't have a seed)
- Use `0` as a final fallback

In `DenoiseLatentsInvocation`, we set the seed in the `LatentsOutput`, even though the output latents are not noise.

This is normally fine, but when we use the refiner, we re-use those same latents for the refiner denoise. This causes that characteristic same-seed-fried look on the refiner pass.

Simple fix - do not set the field in the output latents.
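
The fallback chain reads roughly like this (a minimal sketch with illustrative names):

```python
def resolve_seed(noise, latents) -> int:
    """Pick the seed for scheduler initialization, per the fallbacks above."""
    if noise is not None and noise.seed is not None:
        return noise.seed
    if latents is not None and latents.seed is not None:
        return latents.seed  # normally absent; see above
    return 0  # final fallback
```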
2024-04-11 07:21:50 -04:00
Jonathan
80d631118d
Fix field ordering
Changed fields to go in w/h x/y order.
2024-04-09 14:17:55 -05:00
Ryan Dick
0bdbfd4d1d Add support for IP-Adapter masks. 2024-04-09 15:06:51 -04:00
Ryan Dick
2e27ed5f3d Pass IP-Adapter scales through the cross_attn_kwargs pathway, since they are the same for all attention layers. This change also helps to prepare for adding IP-Adapter region masks. 2024-04-09 15:06:51 -04:00
Ryan Dick
babdc64b17 (minor) Fix typo in IP-Adapter field description. 2024-04-09 15:06:51 -04:00
Ryan Dick
4a828818da Remove support for Prompt-to-Prompt cross-attention control (aka .swap()). This feature is not widely used. It does not work with SDXL and is incompatible with IP-Adapter and regional prompting. The implementation is also intertwined with both text embedding and the UNet attention layers, resulting in a high maintenance burden. For all of these reasons, we have decided to drop support. 2024-04-09 10:57:02 -04:00
psychedelicious
fe386252f3 Revert "feat(nodes): add prompt region from image nodes"
This reverts commit 3a531c5097.
2024-04-09 08:12:12 -04:00
Ryan Dick
182810337c Add utility to_standard_float_mask(...) to convert various mask formats to a standardized format. 2024-04-09 08:12:12 -04:00
Ryan Dick
338bf808d6 Rename MaskField to be a generic TensorField. 2024-04-09 08:12:12 -04:00
Ryan Dick
5b5a4204a1 Fix dimensions of mask produced by ExtractMasksAndPromptsInvocation. Also, added a clearer error message in case the same error is introduced in the future. 2024-04-09 08:12:12 -04:00
psychedelicious
926b8d0efe feat(nodes): add prompt region from image nodes 2024-04-09 08:12:12 -04:00
Ryan Dick
9d9d1761f3 (minor) The latest ruff version has _slightly_ different formatting preferences. 2024-04-09 08:12:12 -04:00
Ryan Dick
dc64fec771 Add support for lists of prompt embeddings to be passed to the DenoiseLatents invocation, and add handling of the conditioning region masks in DenoiseLatents. 2024-04-09 08:12:12 -04:00
Ryan Dick
d1e45585d0 Add TextConditioningRegions to the TextConditioningData data structure. 2024-04-09 08:12:12 -04:00
Ryan Dick
e354c29b52 Rename ConditioningData -> TextConditioningData. 2024-04-09 08:12:12 -04:00
Ryan Dick
a7f363e654 Split ip_adapter_conditioning out from ConditioningData. 2024-04-09 08:12:12 -04:00
Ryan Dick
9b2162e564 Remove scheduler_args from ConditioningData structure. 2024-04-09 08:12:12 -04:00
Ryan Dick
4e64b26702 Update compel nodes to accept an optional prompt mask. 2024-04-09 08:12:12 -04:00
Ryan Dick
c22d772062 Add RectangleMaskInvocation. 2024-04-09 08:12:12 -04:00
Ryan Dick
d6be7662c9 Add a MaskField primitive, and add a mask to the ConditioningField primitive type. 2024-04-09 08:12:12 -04:00
blessedcoolant
540d506ec9 fix: Incorrect default clip vision opt in the node 2024-04-05 15:06:33 -04:00
psychedelicious
d6ccd5bc81 feat(nodes): disable mosaic fill
Needs a bit of tweaking; leaving the code in, just disabled/commented out.
2024-04-05 08:49:13 +11:00
psychedelicious
f0b1bb0327 feat(nodes): redo tile infill
The previous algorithm errored if the image wasn't divisible by the tile size. I've reimplemented it from scratch to mitigate this issue.

The new algorithm is simpler. We create a pool of tiles, then use them to create an image composed completely of tiles. If there is any awkwardly sized space on the edge of the image, the tiles are cropped to fit.

Finally, paste the original image over the tile image.
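
A hedged sketch of that approach in PIL (assumes an RGBA input where transparent pixels mark the area to infill, and that at least one fully opaque tile exists; names are illustrative):

```python
import random
from PIL import Image

def infill_tile(image: Image.Image, tile_size: int = 32, seed: int = 0) -> Image.Image:
    rng = random.Random(seed)
    # Build a pool of fully opaque tiles sampled from the source image.
    tiles = []
    for y in range(0, image.height - tile_size + 1, tile_size):
        for x in range(0, image.width - tile_size + 1, tile_size):
            tile = image.crop((x, y, x + tile_size, y + tile_size))
            if tile.getchannel("A").getextrema()[0] == 255:  # fully opaque
                tiles.append(tile)
    # Cover the canvas in random tiles; PIL clips tiles that overhang the edges.
    result = Image.new("RGBA", image.size)
    for y in range(0, image.height, tile_size):
        for x in range(0, image.width, tile_size):
            result.paste(rng.choice(tiles), (x, y))
    # Paste the original image back over the tiled background.
    result.paste(image, (0, 0), mask=image.getchannel("A"))
    return result
```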

I've added a Jupyter notebook to do a smoke test of infilling methods, and 10 test images.

The other infill algorithms can be easily tested with the notebook on the same images, though I didn't set that up yet.

Tested and confirmed this gives results just as good as the earlier infill, though of course they aren't the same due to the change in the algorithm.
2024-04-05 08:49:13 +11:00
psychedelicious
b061db414f tidy(nodes): abstractmethod is noop 2024-04-05 08:49:13 +11:00
blessedcoolant
32a6b758cd wip: Initial Infill Methods Refactor 2024-04-05 08:49:13 +11:00
Jonathan
3659219f46
Fix IdealSizeInvocation (#6145) 2024-04-05 08:38:40 +11:00
blessedcoolant
d284e0567a fix: ip adapter clip selection being broken 2024-04-05 07:49:04 +11:00
blessedcoolant
dc1681a0de fix: clip vision model auto param
Setting this to 'auto' works only for InvokeAI-format configs: the CLIP Vision model is auto-detected from the SD model, but an explicit user setting overrides the detection. If 'auto' is used with checkpoint models, we raise an error; checkpoints always need an explicit, non-auto setting.
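
Roughly this resolution logic, as a hedged sketch (names assumed):

```python
def resolve_clip_vision(setting: str, model_format: str, detected: str) -> str:
    """Resolve the CLIP Vision model per the rules above (sketch)."""
    if setting != "auto":
        return setting  # an explicit user choice always wins
    if model_format == "checkpoint":
        raise ValueError("CLIP Vision must be set explicitly for checkpoint IP Adapters")
    return detected  # auto-detected from the InvokeAI config
```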
2024-04-03 12:40:11 +05:30
blessedcoolant
a14ce0edab chore: rename IPAdapterDiffusersConfig to IPAdapterInvokeAIConfig 2024-04-03 12:40:10 +05:30
blessedcoolant
91a70c8d07 feat: Let users pick CLIP Vision model for Checkpoint IP Adapters 2024-04-03 12:40:05 +05:30
blessedcoolant
5829b87b8d ui: update the new ip adapter configs on the frontend 2024-04-03 12:40:01 +05:30
blessedcoolant
79f7b61dfe fix: cleanup across various ip adapter files 2024-04-03 12:39:52 +05:30
blessedcoolant
b1c8266e22 feat: add base model recognition for ip adapter safetensor files 2024-04-03 12:39:52 +05:30
blessedcoolant
67afb1763e wip: Initial implementation of safetensor support for IP Adapter 2024-04-03 12:39:52 +05:30
psychedelicious
d4be945dde fix(nodes): gracefully handle custom nodes init error
Previously, exceptions raised while custom nodes were initialized were fatal errors, causing the app to exit.

With this change, any error on import is caught and the error message printed. The app continues to start up without the node.

For example, a custom node that isn't updated for v4.0.0 may raise an error on import if it is attempting to import things that no longer exist.
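
The load loop presumably looks something like this (a sketch, not the exact code):

```python
import importlib
import logging

logger = logging.getLogger(__name__)

def load_custom_nodes(module_names: list[str]) -> None:
    for name in module_names:
        try:
            importlib.import_module(name)
        except Exception as e:
            # Import errors are logged; startup continues without this node.
            logger.error(f"Failed to load custom node '{name}': {e}")
```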
2024-04-02 13:25:14 +11:00
psychedelicious
5b016bf376 fix(nodes): esrgan model name typo 2024-03-22 02:22:19 -07:00
blessedcoolant
eafc85cfe3 feat: Add Mask from ID Node 2024-03-22 06:23:51 +05:30
psychedelicious
c36d12a50f feat: adaptation of Lineart Anime processor
Adapted from https://github.com/huggingface/controlnet_aux
2024-03-21 07:02:57 -07:00
psychedelicious
c7f8fe4d5e feat: adaptation of Lineart processor
Adapted from https://github.com/huggingface/controlnet_aux
2024-03-21 07:02:57 -07:00
psychedelicious
ffb41c3616 feat: adaptation of HED processor
Adapted from controlnet repo
2024-03-21 07:02:57 -07:00
psychedelicious
611006b692 feat: adaptation of Canny processor
Adapted from controlnet processors package

fix: do final resize in canny processor
2024-03-21 07:02:57 -07:00
psychedelicious
01d8ab04a5 feat(nodes): add missing detect_resolution to processors
Some processors, like Canny, didn't use `detect_resolution`. The resultant control images were then resized by the processors from 512x512 to the desired dimensions. The result is that the control images are the right size, but very low quality.

Using detect_resolution fixes this.
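
A self-contained sketch of the pattern with Canny (OpenCV; the resize logic is illustrative):

```python
import cv2
import numpy as np
from PIL import Image

def canny_control_image(image: Image.Image, detect_resolution: int = 512,
                        image_resolution: int = 512) -> Image.Image:
    np_img = np.array(image.convert("RGB"))
    # Run detection at detect_resolution so the detector sees full detail...
    h, w = np_img.shape[:2]
    k = detect_resolution / min(h, w)
    np_img = cv2.resize(np_img, (round(w * k), round(h * k)))
    edges = cv2.Canny(cv2.cvtColor(np_img, cv2.COLOR_RGB2GRAY), 100, 200)
    # ...then resize the detected map to the requested output resolution.
    h, w = edges.shape[:2]
    k = image_resolution / min(h, w)
    edges = cv2.resize(edges, (round(w * k), round(h * k)))
    return Image.fromarray(edges)
```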
2024-03-21 07:02:57 -07:00
psychedelicious
fabef8b45b feat(mm): download upscaling & lama models as they are requested 2024-03-20 15:05:25 +11:00
psychedelicious
29b04b7e83 chore: bump nodes versions
Bump all nodes in prep for v4.0.0.
2024-03-20 10:28:07 +11:00
maryhipp
820614e4d8 ruff 2024-03-19 21:59:51 +11:00
maryhipp
4e9207a10b fix(worker): remove resolution from zoe as it seems to break it 2024-03-19 21:59:51 +11:00
maryhipp
ed0f9f7d66 feat(worker): add image_resolution as option for all cnet processors 2024-03-19 21:59:51 +11:00
psychedelicious
dedce2d896 fix(config): remove unnecessary resolve on config path 2024-03-19 09:24:28 +11:00
psychedelicious
897fe497dc fix(config): use new get_config across the app, use correct settings 2024-03-19 09:24:28 +11:00
Lincoln Stein
7387b0bdc9
install missing clip_vision encoders if required by an ip adapter (#5982)
Co-authored-by: Lincoln Stein <lstein@gmail.com>
2024-03-18 02:19:53 +00:00
psychedelicious
297408d67e feat(nodes): add control adapter processed images to metadata
In the client, a controlnet or t2i adapter has two images:
- The source control image: the image the user selected (required)
- The processed control image: the user's image after we've processed it (optional)

The processed image is optional because a user may provide a pre-processed image.

We only actually use one of these images when building the graph, and until this change, we only stored one of them in image metadata. This created a situation where only a processed image was stored in metadata - say, a canny edge map - and the user-selected image wasn't provided.

By adding the processed image to metadata, we can recall both the control image and optional processed image.

This commit is followed by UI-facing changes to support the change.
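
The resulting metadata shape is roughly this (field names assumed):

```python
from typing import Optional
from pydantic import BaseModel

class ImageField(BaseModel):
    image_name: str

class ControlAdapterMetadata(BaseModel):
    image: ImageField                             # the user-selected source image
    processed_image: Optional[ImageField] = None  # set only if we processed it
```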
2024-03-14 12:34:03 -07:00
psychedelicious
4492bedd19 tidy(nodes): use ModelIdentifierField for model metadata
Until recently, this had a different shape than the ModelMetadataField. They are now the same, so we can re-use the ModelIdentifierField.
2024-03-14 10:53:57 +11:00
blessedcoolant
af660163ca chore: cleanup DepthAnything code 2024-03-13 20:35:52 +05:30
Brandon Rising
8d2a4db902 Found another instance of expecting a mid_block on the decoder in a vae 2024-03-12 12:11:38 -04:00
Brandon Rising
7b393656de Update l2i invoke and seamless to support AutoencoderTiny, remove attention processors if no mid_block is detected 2024-03-12 12:00:24 -04:00
psychedelicious
8c2ff794d5 fix(nodes): ip adapter uses valid ModelIdentifierField for image encoder model
- Add class method to `ModelIdentifierField` to construct the field from a model config
- Use this to construct a valid IP adapter model field
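
A sketch of that class method (fields per the enriched identifier described elsewhere in this log; exact signature assumed):

```python
from pydantic import BaseModel

class ModelIdentifierField(BaseModel):
    key: str
    hash: str
    name: str
    base: str
    type: str

    @classmethod
    def from_config(cls, config: "AnyModelConfig") -> "ModelIdentifierField":
        # Build a valid identifier directly from a model config record.
        return cls(key=config.key, hash=config.hash, name=config.name,
                   base=config.base, type=config.type)
```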
2024-03-10 17:28:58 -05:00
Ryan Dick
145bb45858 Remove dead code related to an old symmetry feature. 2024-03-10 00:13:18 -06:00
dunkeroni
631e789195 fix(canvas): create masked latents when None 2024-03-10 11:58:41 +11:00
psychedelicious
92b0d13d0e feat(nodes): "ModelField" -> "ModelIdentifierField", add hash/name/base/type 2024-03-10 11:03:38 +11:00
psychedelicious
effbd8a1ba chore: ruff 2024-03-08 11:10:44 -05:00
psychedelicious
ddde355b09 fix(mm): add ui_type to model fields
Recently the schema for models was changed to a generic `ModelField`, and the UI was unable to derive the type of those fields. This didn't affect functionality, but it did break the styling of handles.

Add `ui_type` to the affected fields and update the UI to use the correct capitalizations.
2024-03-08 11:10:44 -05:00
psychedelicious
132790eebe tidy(nodes): use canonical capitalizations 2024-03-07 10:56:59 +11:00
psychedelicious
528ac5dd25 refactor(nodes): model identifiers
- All models are identified by a key and, optionally, a submodel type via the new `ModelField` model. Previously, a few model types had their own class, but not all of them. This inconsistency just added complexity without any benefit.
- Update all invocations to use the new format.
- In the node API, models are loaded by key or an instance of `ModelField` as a convenience.
- Add an enriched model schema for metadata. It includes key, hash, name, base and type.
2024-03-07 10:56:59 +11:00
psychedelicious
7c9128b253 tidy(mm): use canonical capitalization for all model-related enums, classes
For example, "Lora" -> "LoRA", "Vae" -> "VAE".
2024-03-05 23:50:19 +11:00
dunkeroni
735857479d fix(canvas): use corrected mask for pasteback 2024-03-03 12:58:47 -05:00
Ryan Dick
cc45007dc4 Remove unused code for attention map saving. 2024-03-02 08:25:41 -05:00
Ryan Dick
ad96857e0f Fix: avoid storing extra conditioning info in two places. 2024-03-01 15:12:03 -05:00
psychedelicious
0b0128647b feat(nodes): revise model load API args 2024-03-01 10:42:33 +11:00
blessedcoolant
ae34bcfbc0 fix: Assertion issue with SDXL Compel 2024-03-01 10:42:33 +11:00
Brandon Rising
f475b78734 Ruff check 2024-03-01 10:42:33 +11:00
Brandon Rising
ca9b815c89 Extract TI loading logic into util, disallow it from ever failing a generation 2024-03-01 10:42:33 +11:00
Brandon Rising
8efd4284e9 Fix one last reference to the uncasted model 2024-03-01 10:42:33 +11:00
Brandon Rising
5922cee541 Allow TIs to be either a key or a name in the prompt during our transition to using keys 2024-03-01 10:42:33 +11:00
psychedelicious
80697a71de feat(nodes): update LoRAMetadataItem model
LoRA model is now under `model`, not `lora`.
2024-03-01 10:42:33 +11:00
psychedelicious
82249cc634 tidy(nodes): rename canvas paste back 2024-03-01 10:42:33 +11:00
blessedcoolant
cc82ce820a fix: outpaint result not getting pasted back correctly 2024-03-01 10:42:33 +11:00
blessedcoolant
8e1fbd6ed1 fix: lint errors 2024-03-01 10:42:33 +11:00
blessedcoolant
68d79c002d canvas: improve paste back (or try to) 2024-03-01 10:42:33 +11:00
dunkeroni
30a374a70f chore: typing 2024-03-01 10:42:33 +11:00
dunkeroni
07dde92664 chore: typing fix 2024-03-01 10:42:33 +11:00
dunkeroni
06cc57d82a feat(nodes): added gradient mask node 2024-03-01 10:42:33 +11:00
psychedelicious
34f3a39cc9 fix(nodes): fix TI loading 2024-03-01 10:42:33 +11:00
psychedelicious
731860c332 feat(nodes): JIT graph nodes validation
We use pydantic to validate a union of valid invocations when instantiating a graph.

Previously, we constructed the union while creating the `Graph` class. This introduces a dependency on the order of imports.

For example, consider a setup where we have 3 invocations in the app:

- Python executes the module where `FirstInvocation` is defined, registering `FirstInvocation`.
- Python executes the module where `SecondInvocation` is defined, registering `SecondInvocation`.
- Python executes the module where `Graph` is defined. A union of invocations is created and used to define the `Graph.nodes` field. The union contains `FirstInvocation` and `SecondInvocation`.
- Python executes the module where `ThirdInvocation` is defined, registering `ThirdInvocation`.
- A graph is created that includes `ThirdInvocation`. Pydantic validates the graph using the union, which does not know about `ThirdInvocation`, raising a `ValidationError` about an unknown invocation type.

This scenario has been particularly problematic in tests, where we may create invocations dynamically. The test files have to be structured in such a way that the imports happen in the right order. It's a major pain.

This PR refactors the validation of graph nodes to resolve this issue:

- `BaseInvocation` gets a new method `get_typeadapter`. This builds a pydantic `TypeAdapter` for the union of all registered invocations, caching it after the first call.
- `Graph.nodes`'s type is widened to `dict[str, BaseInvocation]`. This actually is a nice bonus, because we get better type hints whenever we reference `some_graph.nodes`.
- A "plain" field validator takes over the validation logic for `Graph.nodes`. "Plain" validators totally override pydantic's own validation logic. The validator grabs the `TypeAdapter` from `BaseInvocation`, then validates each node with it. The validation is identical to the previous implementation - we get the same errors.

`BaseInvocationOutput` gets the same treatment.
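
A condensed, module-level sketch of the cached-adapter pattern (the real code hangs this off `BaseInvocation.get_typeadapter`; registration details assumed):

```python
from typing import Optional, Union
from pydantic import BaseModel, TypeAdapter

_registry: list[type[BaseModel]] = []
_adapter: Optional[TypeAdapter] = None

def register_invocation(cls: type[BaseModel]) -> type[BaseModel]:
    """Collect invocation classes as their modules are imported."""
    global _adapter
    _registry.append(cls)
    _adapter = None  # a new registration invalidates the cached adapter
    return cls

def get_typeadapter() -> TypeAdapter:
    """Build the union validator lazily, after all imports have run."""
    global _adapter
    if _adapter is None:
        # Assumes at least one invocation has been registered by now.
        _adapter = TypeAdapter(Union[tuple(_registry)])
    return _adapter
```

Each node dict is then validated with `get_typeadapter().validate_python(...)`, which is how the "plain" field validator on `Graph.nodes` would use it.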
2024-03-01 10:42:33 +11:00
dunkeroni
1242cb4f85 one more redundant RGB convert removed 2024-03-01 10:42:33 +11:00
dunkeroni
cd070d8be9 chore: ruff formatting 2024-03-01 10:42:33 +11:00
dunkeroni
56ac2104e3 chore(invocations): remove redundant RGB conversions 2024-03-01 10:42:33 +11:00
dunkeroni
965867151b chore(invocations): use IMAGE_MODES constant literal 2024-03-01 10:42:33 +11:00
dunkeroni
2d007ce532 fix: removed custom module 2024-03-01 10:42:33 +11:00
dunkeroni
92394ab751 fix(nodes): canny preprocessor uses RGBA again 2024-03-01 10:42:33 +11:00
dunkeroni
43d94c8108 feat(nodes): format option for get_image method
Also default CNet preprocessors to "RGB"
2024-03-01 10:42:33 +11:00
blessedcoolant
fc20822595 fix: Alpha channel causing issue with DW Processor 2024-03-01 10:42:33 +11:00
psychedelicious
5a3195f757 final tidying before marking PR as ready for review
- Replace AnyModelLoader with ModelLoaderRegistry
- Fix type check errors in multiple files
- Remove apparently unneeded `get_model_config_enum()` method from model manager
- Remove last vestiges of old model manager
- Updated tests and documentation

resolve conflict with seamless.py
2024-03-01 10:42:33 +11:00
Lincoln Stein
5d612ec095 Tidy names and locations of modules
- Rename old "model_management" directory to "model_management_OLD" in order to catch
  dangling references to original model manager.
- Caught and fixed most dangling references (still checking)
- Rename lora, textual_inversion and model_patcher modules
- Introduce a RawModel base class to simplify the Union returned by the
  model loaders.
- Tidy up the model manager 2-related tests. Add useful fixtures, and
  a finalizer to the queue and installer fixtures that will stop the
  services and release threads.
2024-03-01 10:42:33 +11:00
psychedelicious
6df3c450e8 fix(nodes): fix t2i adapter model loading 2024-03-01 10:42:33 +11:00
psychedelicious
c80987eb8a chore: ruff 2024-03-01 10:42:33 +11:00
psychedelicious
539570cc7a feat(nodes): update invocation context for mm2, update nodes model usage 2024-03-01 10:42:33 +11:00
Brandon Rising
262cbaacdd context.services.model_manager.store.get_model can only accept keys; remove invalid assertion 2024-03-01 10:42:33 +11:00
Brandon Rising
35e8a33dfd Remove references to model_records service, change submodel property on ModelInfo to submodel_type to support new params in model manager 2024-03-01 10:42:33 +11:00
Lincoln Stein
3e330d7d9d fix a number of typechecking errors 2024-03-01 10:42:33 +11:00
Lincoln Stein
a23dedd2ee make model manager v2 ready for PR review
- Replace legacy model manager service with the v2 manager.

- Update invocations to use new load interface.

- Fixed many but not all type checking errors in the invocations. Most
  were unrelated to the model manager.

- Updated routes. All the new routes live under the route tag
  `model_manager_v2`. To avoid confusion with the old routes,
  they have the URL prefix `/api/v2/models`. The old routes
  have been de-registered.

- Added a pytest for the loader.

- Updated documentation in contributing/MODEL_MANAGER.md
2024-03-01 10:42:33 +11:00
Lincoln Stein
8db01ab1b3 probe for required encoder for IPAdapters and add to config 2024-03-01 10:42:33 +11:00
Lincoln Stein
78ef946e01 BREAKING CHANGES: invocations now require model key, not base/type/name
- Implement new model loader and modify invocations and embeddings

- Finish implementing loaders for all models currently supported by
  InvokeAI.

- Move lora, textual_inversion, and model patching support into
  backend/embeddings.

- Restore support for model cache statistics collection (a little ugly,
  needs work).

- Fixed up invocations that load and patch models.

- Move seamless and silencewarnings utils into a better location
2024-03-01 10:42:33 +11:00
psychedelicious
25f64d5b19 chore(nodes): "SAMPLER_NAME_VALUES" -> "SCHEDULER_NAME_VALUES"
This was named inaccurately.
2024-03-01 10:42:33 +11:00
psychedelicious
e0694a2856 feat(nodes): use LATENT_SCALE_FACTOR in primitives.py, noise.py
- LatentsOutput.build
- NoiseOutput.build
- Noise.width, Noise.height multiple_of
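
For reference, the constant is 8 for Stable Diffusion (the VAE downsamples each spatial dimension by 8); a sketch of the kind of use listed above:

```python
LATENT_SCALE_FACTOR = 8  # SD VAE: 8 pixels per latent along each axis

def latent_dims(width: int, height: int) -> tuple[int, int]:
    """Map image dimensions to latent-tensor dimensions (sketch)."""
    return width // LATENT_SCALE_FACTOR, height // LATENT_SCALE_FACTOR
```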
2024-03-01 10:42:33 +11:00
psychedelicious
e5d8921cf2 feat(nodes): extract LATENT_SCALE_FACTOR to constants.py 2024-03-01 10:42:33 +11:00
psychedelicious
220baae793 Revert "feat(nodes): use LATENT_SCALE_FACTOR const in tensor output builders"
This reverts commit ef18fc546560277302f3886e456da9a47e8edce0.
2024-03-01 10:42:33 +11:00
psychedelicious
e08f16763b feat(nodes): use LATENT_SCALE_FACTOR const in tensor output builders 2024-03-01 10:42:33 +11:00
psychedelicious
9f382419dc revert(nodes): revert making tensors/conditioning use item storage
Turns out they are just different enough in purpose that the implementations would be rather unintuitive. I've made a separate ObjectSerializer service to handle tensors and conditioning.

Refined the class a bit too.
2024-03-01 10:42:33 +11:00
psychedelicious
0710fb3fb0 feat(nodes): replace latents service with tensors and conditioning services
- New generic class `PickleStorageBase`, which implements the same API as `LatentsStorageBase`; used for storing non-serializable data via pickling
- Implementation `PickleStorageTorch` uses `torch.save` and `torch.load`, same as `LatentsStorageDisk`
- Add `tensors: PickleStorageBase[torch.Tensor]` to `InvocationServices`
- Add `conditioning: PickleStorageBase[ConditioningFieldData]` to `InvocationServices`
- Remove `latents` service and all `LatentsStorage` classes
- Update `InvocationContext` and all usage of old `latents` service to use the new services/context wrapper methods
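
A compact sketch of the pair (signatures assumed, not the exact service code):

```python
from abc import ABC, abstractmethod
from pathlib import Path
from typing import Generic, TypeVar

import torch

T = TypeVar("T")

class PickleStorageBase(ABC, Generic[T]):
    """Store and retrieve non-JSON-serializable objects by name (sketch)."""

    @abstractmethod
    def get(self, name: str) -> T: ...

    @abstractmethod
    def save(self, name: str, data: T) -> None: ...

class PickleStorageTorch(PickleStorageBase[T]):
    def __init__(self, output_dir: Path) -> None:
        self._dir = output_dir
        self._dir.mkdir(parents=True, exist_ok=True)

    def get(self, name: str) -> T:
        return torch.load(self._dir / f"{name}.pt")

    def save(self, name: str, data: T) -> None:
        torch.save(data, self._dir / f"{name}.pt")
```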
2024-03-01 10:42:33 +11:00
psychedelicious
31db62ba99 tidy(nodes): delete onnx.py
It doesn't work, and keeping it updated just so it wouldn't prevent the app from starting was getting tedious. Deleted.
2024-03-01 10:42:33 +11:00
psychedelicious
322a60f48f fix(nodes): rearrange fields.py to avoid needing forward refs 2024-03-01 10:42:33 +11:00
psychedelicious
7fbdfbf9e5 feat(nodes): add WithBoard field helper class
This class works the same way as `WithMetadata` - it simply adds a `board` field to the node. The context wrapper function is able to pull the board id from this. This allows image-outputting nodes to get a board field "for free", and have their outputs automatically saved to it.

This is a breaking change for node authors who may have a field called `board`, because it makes `board` a reserved field name. I'll look into how to avoid this - maybe by naming this invoke-managed field `_board` to avoid collisions?

Supporting changes:
- `WithBoard` is added to all image-outputting nodes, giving them the ability to save to board.
- Unused, duplicate `WithMetadata` and `WithWorkflow` classes are deleted from `baseinvocation.py`. The "real" versions are in `fields.py`.
- Remove `LinearUIOutputInvocation`. Now that all nodes that output images also have a `board` field by default, this node is no longer necessary. See comment here for context: https://github.com/invoke-ai/InvokeAI/pull/5491#discussion_r1480760629
- Without `LinearUIOutputInvocation`, the `ImagesInterface.update` method is no longer needed and has been removed.

Note: This commit does not bump all node versions. I will ensure that is done correctly before merging the PR of which this commit is a part.

Note: A followup commit will implement the frontend changes to support this change.
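
The mixin itself is tiny; roughly (a sketch, field description assumed):

```python
from typing import Optional
from pydantic import BaseModel, Field

class BoardField(BaseModel):
    board_id: str

class WithBoard(BaseModel):
    board: Optional[BoardField] = Field(
        default=None, description="The board to save the image to"
    )

# An image-outputting node mixes this in, and the context wrapper reads
# `node.board` when it saves the node's output image.
```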
2024-03-01 10:42:33 +11:00
psychedelicious
cc8d713c57 fix(nodes): restore missing context type annotations 2024-03-01 10:42:33 +11:00