Commit Graph

974 Commits

Author SHA1 Message Date
blessedcoolant
1bdcbe3284 cleanup: use dict update to actually update the scheduler keyword args 2024-05-01 12:22:39 +05:30
Lincoln Stein
49c84cd423 Merge branch 'main' into lstein/feat/simple-mm2-api 2024-04-30 18:13:42 -04:00
blessedcoolant
2ddb82200c fix: Manually update eta(gamma) to 1.0 for TCDScheduler
Seems to work best with Invoke at 4 steps.
2024-05-01 01:20:53 +05:30
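Together with the "use dict update" commit above, the idea is that per-scheduler keyword args are merged into the existing dict rather than replacing it, and that TCD gets eta forced to 1.0. A minimal sketch of that pattern, assuming a helper named `build_scheduler_step_kwargs` (illustrative, not the repo's actual function):

```python
from diffusers import TCDScheduler


def build_scheduler_step_kwargs(scheduler, generator=None) -> dict:
    """Sketch: merge scheduler-specific step kwargs with dict.update()
    so earlier entries (e.g. the generator) are kept, not overwritten."""
    scheduler_step_kwargs: dict = {}
    if generator is not None:
        scheduler_step_kwargs.update({"generator": generator})
    # TCD exposes an eta parameter (called gamma in the paper); 1.0 appears
    # to work best with Invoke at very low step counts (~4 steps).
    if isinstance(scheduler, TCDScheduler):
        scheduler_step_kwargs.update({"eta": 1.0})
    return scheduler_step_kwargs
```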
psychedelicious
e822897b1c feat(nodes): add prototype heuristic image resize node
Uses the fancy cnet resize that retains edges.
2024-04-30 08:10:59 -04:00
psychedelicious
1fe90c357c feat(backend): lift managed model loading out of depthanything class 2024-04-29 08:56:00 +10:00
psychedelicious
fcb071f30c feat(backend): lift managed model loading out of lama class 2024-04-29 08:12:51 +10:00
Lincoln Stein
f65c7e2bfd Merge branch 'main' into lstein/feat/simple-mm2-api 2024-04-28 13:42:54 -04:00
dunkeroni
f262b9032d fix: changed validation to not error on connection 2024-04-28 12:48:56 -04:00
dunkeroni
71c3197eab fix: denoise latents accepts CFG lists as input 2024-04-28 12:48:56 -04:00
Lincoln Stein
bb04f496e0 Merge branch 'main' into lstein/feat/simple-mm2-api 2024-04-28 11:33:26 -04:00
Lincoln Stein
70903ef057 refactor load_ckpt_from_url() 2024-04-28 11:33:23 -04:00
Lincoln Stein
d72f272f16 Address change requests in first round of PR reviews.
Pending:

- Move model install calls into the model manager and create pass-throughs in invocation_context.
- Consider splitting load_model_from_url() into a call to get the path and a call to load the path.
2024-04-24 23:53:30 -04:00
psychedelicious
5b8f77f990 tidy(nodes): move cnet mode literals to utils
Now they can be used in type signatures without circular imports.
2024-04-25 13:20:09 +10:00
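The pattern here is to define the ControlNet mode literals in a leaf utility module so they can be imported into type signatures from multiple node modules without a circular import. A minimal sketch, assuming a hypothetical module name and the commonly used mode values:

```python
# controlnet_utils.py (hypothetical module name)
from typing import Literal

# Defining the literals in a dependency-free module means both the ControlNet
# invocations and any other node can import them for type annotations without
# importing each other, which avoids the circular import.
CONTROLNET_MODE_VALUES = Literal["balanced", "more_prompt", "more_control", "unbalanced"]
CONTROLNET_RESIZE_VALUES = Literal["just_resize", "crop_resize", "fill_resize", "just_resize_simple"]
```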
psychedelicious
1bef13db37 feat(nodes): restore unet check on CreateGradientMaskInvocation
Special handling for inpainting models
2024-04-23 07:32:53 -04:00
dunkeroni
bc12d6654e chore: comments and ruff 2024-04-23 07:32:53 -04:00
dunkeroni
6d7c8d5f57 remove unet test 2024-04-23 07:32:53 -04:00
dunkeroni
781de914f4 fix threshold 2024-04-23 07:32:53 -04:00
dunkeroni
c094bad233 add unet check in gradient mask node 2024-04-23 07:32:53 -04:00
dunkeroni
0063014f2b gradient mask node test for inpaint 2024-04-23 07:32:53 -04:00
psychedelicious
a00e703144 feat(nodes): image mask to tensor invocation
Thanks @JPPhoto!
2024-04-20 11:32:08 -04:00
psychedelicious
ea527f5fe1 feat(nodes): add beta classification to mask tensor nodes 2024-04-19 09:32:56 -04:00
psychedelicious
aace364677 feat(nodes): add InvertTensorMaskInvocation 2024-04-19 09:32:56 -04:00
psychedelicious
602a59066e fix(nodes): handle invert in alpha_mask_to_tensor 2024-04-19 09:32:56 -04:00
psychedelicious
8911017bd1 feat(ui): selectable & draggable layers 2024-04-19 09:32:56 -04:00
psychedelicious
fc26f3e430 feat(nodes): add alpha mask to tensor invocation 2024-04-19 09:32:56 -04:00
Lincoln Stein
34cdfc61ab Merge branch 'main' into lstein/feat/simple-mm2-api 2024-04-17 17:18:13 -04:00
blessedcoolant
6bab040d24 Merge branch 'main' into ip-adapter-style-comp 2024-04-16 21:14:06 +05:30
blessedcoolant
b39ce642b6 cleanup: raise ValueErrors when target_blocks don't match base model 2024-04-16 04:12:30 +05:30
Lincoln Stein
470a39935c fix merge conflicts with main 2024-04-15 09:24:57 -04:00
Lincoln Stein
e93f4d632d [util] Add generic torch device class (#6174)
* introduce new abstraction layer for GPU devices

* add unit test for device abstraction

* fix ruff

* convert TorchDeviceSelect into a stateless class

* move logic to select context-specific execution device into context API

* add mock hardware environments to pytest

* remove dangling mocker fixture

* fix unit test for running on non-CUDA systems

* remove unimplemented get_execution_device() call

* remove autocast precision

* Multiple changes:

1. Remove TorchDeviceSelect.get_execution_device(), as well as calls to
   context.models.get_execution_device().
2. Rename TorchDeviceSelect to TorchDevice
3. Added back the legacy public API defined in `invocation_api`, including
   choose_precision().
4. Added a config file migration script to accommodate removal of precision=autocast.

* add deprecation warnings to choose_torch_device() and choose_precision()

* fix test crash

* remove app_config argument from choose_torch_device() and choose_torch_dtype()

---------

Co-authored-by: Lincoln Stein <lstein@gmail.com>
2024-04-15 13:12:49 +00:00
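A minimal sketch of what a stateless torch-device abstraction of this kind might look like, with autocast removed and device/dtype chosen by classmethods (names and selection logic are illustrative, not the exact `TorchDevice` implementation):

```python
from typing import Optional

import torch


class TorchDevice:
    """Stateless device abstraction: no instance state, only classmethods."""

    @classmethod
    def choose_torch_device(cls) -> torch.device:
        # Prefer CUDA, then Apple's MPS backend, then fall back to CPU.
        if torch.cuda.is_available():
            return torch.device("cuda")
        if torch.backends.mps.is_available():
            return torch.device("mps")
        return torch.device("cpu")

    @classmethod
    def choose_torch_dtype(cls, device: Optional[torch.device] = None) -> torch.dtype:
        # float16 on accelerators, float32 on CPU; no "autocast" option anymore.
        device = device or cls.choose_torch_device()
        return torch.float16 if device.type in ("cuda", "mps") else torch.float32
```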
Lincoln Stein
41b909cbe3 port dw_openpose, depth_anything, and lama processors to new model download scheme 2024-04-14 15:57:03 -04:00
blessedcoolant
2d5786d3bb fix: Incorrect composition blocks for SD1.5 2024-04-13 13:52:10 +05:30
blessedcoolant
27466ffa1a chore: update the ip adapter node version 2024-04-13 13:39:08 +05:30
blessedcoolant
9fc73743b2 feat: support SD1.5 2024-04-13 12:30:39 +05:30
blessedcoolant
d4393e4170 chore: linter fixes 2024-04-13 12:14:45 +05:30
blessedcoolant
7a67fd6a06 Revert "chore: ruff fixes"
This reverts commit af36fe8c1e.
2024-04-13 12:10:20 +05:30
blessedcoolant
af36fe8c1e chore: ruff fixes 2024-04-13 12:08:52 +05:30
blessedcoolant
e9f16ac8c7 feat: add UI for IP Adapter Method 2024-04-13 12:06:59 +05:30
blessedcoolant
6ea183f0d4 wip: Initial Implementation IP Adapter Style & Comp Modes 2024-04-13 11:09:45 +05:30
Lincoln Stein
3a26c7bb9e fix merge conflicts 2024-04-12 00:58:11 -04:00
Lincoln Stein
df5ebdbc4f add invocation_context.load_ckpt_from_url() method 2024-04-12 00:55:21 -04:00
Lincoln Stein
dedf0c6ffa fix ruff issues 2024-04-12 07:19:16 +10:00
Lincoln Stein
579082ac10 [mm] clear the cache entry for a model that got an OOM during loading 2024-04-12 07:19:16 +10:00
psychedelicious
026d095afe fix(nodes): do not set seed on output latents from denoise latents
`LatentsField` objects have an optional `seed` field. This should only be populated when the latents are noise, generated from a seed.

`DenoiseLatentsInvocation` needs a seed value for scheduler initialization. It's used in a few places, and there is some logic for determining the seed to use with a series of fallbacks:
- Use the seed from the noise (a `LatentsField` object)
- Use the seed from the latents (a `LatentsField` object - normally it won't have a seed)
- Use `0` as a final fallback

In `DenoiseLatentsInvocation`, we set the seed in the `LatentsOutput`, even though the output latents are not noise.

This is normally fine, but when we use the refiner, we re-use those same latents for the refiner denoise. This causes that characteristic same-seed-fried look on the refiner pass.

Simple fix - do not set the field in the output latents.
2024-04-11 07:21:50 -04:00
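A minimal sketch of the fallback chain described above, with a simplified stand-in for the real `LatentsField` (field names assumed):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class LatentsField:          # simplified stand-in for the real field
    latents_name: str
    seed: Optional[int] = None


def resolve_scheduler_seed(noise: Optional[LatentsField], latents: Optional[LatentsField]) -> int:
    """Seed used for scheduler initialization, resolved with fallbacks."""
    if noise is not None and noise.seed is not None:
        return noise.seed        # noise latents carry the seed they were generated from
    if latents is not None and latents.seed is not None:
        return latents.seed      # input latents normally have no seed
    return 0                     # final fallback

# The fix itself: build the LatentsOutput *without* copying this seed onto the
# output latents, since the denoised latents are no longer seed-generated noise.
```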
Jonathan
80d631118d Fix field ordering
Changed fields to go in w/h x/y order.
2024-04-09 14:17:55 -05:00
Ryan Dick
0bdbfd4d1d Add support for IP-Adapter masks. 2024-04-09 15:06:51 -04:00
Ryan Dick
2e27ed5f3d Pass IP-Adapter scales through the cross_attn_kwargs pathway, since they are the same for all attention layers. This change also helps to prepare for adding IP-Adapter region masks. 2024-04-09 15:06:51 -04:00
Ryan Dick
babdc64b17 (minor) Fix typo in IP-Adapter field description. 2024-04-09 15:06:51 -04:00
Ryan Dick
4a828818da Remove support for Prompt-to-Prompt cross-attention control (aka .swap()). This feature is not widely used. It does not work with SDXL and is incompatible with IP-Adapter and regional prompting. The implementation is also intertwined with both text embedding and the UNet attention layers, resulting in a high maintenance burden. For all of these reasons, we have decided to drop support. 2024-04-09 10:57:02 -04:00
psychedelicious
fe386252f3 Revert "feat(nodes): add prompt region from image nodes"
This reverts commit 3a531c5097.
2024-04-09 08:12:12 -04:00
Ryan Dick
182810337c Add utility to_standard_float_mask(...) to convert various mask formats to a standardized format. 2024-04-09 08:12:12 -04:00
Ryan Dick
338bf808d6 Rename MaskField to be a generic TensorField. 2024-04-09 08:12:12 -04:00
Ryan Dick
5b5a4204a1 Fix dimensions of mask produced by ExtractMasksAndPromptsInvocation. Also, added a clearer error message in case the same error is introduced in the future. 2024-04-09 08:12:12 -04:00
psychedelicious
926b8d0efe feat(nodes): add prompt region from image nodes 2024-04-09 08:12:12 -04:00
Ryan Dick
9d9d1761f3 (minor) The latest ruff version has _slightly_ different formatting preferences. 2024-04-09 08:12:12 -04:00
Ryan Dick
dc64fec771 Add support for lists of prompt embeddings to be passed to the DenoiseLatents invocation, and add handling of the conditioning region masks in DenoiseLatents. 2024-04-09 08:12:12 -04:00
Ryan Dick
d1e45585d0 Add TextConditioningRegions to the TextConditioningData data structure. 2024-04-09 08:12:12 -04:00
Ryan Dick
e354c29b52 Rename ConditioningData -> TextConditioningData. 2024-04-09 08:12:12 -04:00
Ryan Dick
a7f363e654 Split ip_adapter_conditioning out from ConditioningData. 2024-04-09 08:12:12 -04:00
Ryan Dick
9b2162e564 Remove scheduler_args from ConditioningData structure. 2024-04-09 08:12:12 -04:00
Ryan Dick
4e64b26702 Update compel nodes to accept an optional prompt mask. 2024-04-09 08:12:12 -04:00
Ryan Dick
c22d772062 Add RectangleMaskInvocation. 2024-04-09 08:12:12 -04:00
Ryan Dick
d6be7662c9 Add a MaskField primitive, and add a mask to the ConditioningField primitive type. 2024-04-09 08:12:12 -04:00
blessedcoolant
540d506ec9 fix: Incorrect default clip vision opt in the node 2024-04-05 15:06:33 -04:00
psychedelicious
d6ccd5bc81 feat(nodes): disable mosaic fill
Needs a bit of tweaking; leaving the code in, just disabled/commented out.
2024-04-05 08:49:13 +11:00
psychedelicious
f0b1bb0327 feat(nodes): redo tile infill
The previous algorithm errored if the image wasn't divisible by the tile size. I've reimplemented it from scratch to mitigate this issue.

The new algorithm is simpler. We create a pool of tiles, then use them to create an image composed completely of tiles. If there is any awkwardly sized space on the edge of the image, the tiles are cropped to fit.

Finally, paste the original image over the tile image.

I've added a jupyter notebook to do a smoke test of infilling methods, and 10 test images.

The other infill algorithms can be easily tested with the notebook on the same images, though I didn't set that up yet.

Tested and confirmed this gives results just as good as the earlier infill, though of course they aren't the same due to the change in the algorithm.
2024-04-05 08:49:13 +11:00
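A simplified sketch of the algorithm described above, using only PIL (this is an illustration of the idea, not the repository's implementation; the real version also avoids sampling tiles from transparent regions):

```python
import random

from PIL import Image


def infill_tile(image: Image.Image, tile_size: int = 32, seed: int = 0) -> Image.Image:
    """Fill transparent areas by compositing the image over a canvas of tiles."""
    rng = random.Random(seed)

    # 1. Build a pool of tiles cropped from the source image.
    tiles = []
    for y in range(0, image.height - tile_size + 1, tile_size):
        for x in range(0, image.width - tile_size + 1, tile_size):
            tiles.append(image.crop((x, y, x + tile_size, y + tile_size)))
    if not tiles:
        return image

    # 2. Compose a canvas entirely from randomly chosen tiles. Tiles that
    #    overhang the right/bottom edge are clipped to fit when pasted.
    canvas = Image.new("RGB", image.size)
    for y in range(0, image.height, tile_size):
        for x in range(0, image.width, tile_size):
            canvas.paste(rng.choice(tiles), (x, y))

    # 3. Paste the original image over the tiled canvas, using its alpha
    #    channel so only transparent (to-be-infilled) areas keep the tiles.
    alpha = image.getchannel("A") if "A" in image.getbands() else None
    canvas.paste(image, (0, 0), mask=alpha)
    return canvas
```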
psychedelicious
b061db414f tidy(nodes): abstractmethod is noop 2024-04-05 08:49:13 +11:00
blessedcoolant
32a6b758cd wip: Initial Infill Methods Refactor 2024-04-05 08:49:13 +11:00
Jonathan
3659219f46 Fix IdealSizeInvocation (#6145) 2024-04-05 08:38:40 +11:00
blessedcoolant
d284e0567a fix: ip adapter clip selection being broken 2024-04-05 07:49:04 +11:00
blessedcoolant
dc1681a0de fix: clip vision model auto param
Setting this to 'auto' works only for InvokeAI-format configs: it auto-detects the right model from the SD base model, but is overridden if the user explicitly sets one. If 'auto' is used with checkpoint models, we raise an error; checkpoints always need an explicit, non-auto value.
2024-04-03 12:40:11 +05:30
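A rough sketch of that resolution logic, assuming hypothetical parameter names and an illustrative base-model-to-encoder mapping:

```python
def resolve_clip_vision_model(clip_vision_model: str, ip_adapter_format: str, base_model: str) -> str:
    """Sketch of the 'auto' CLIP Vision resolution described above."""
    if clip_vision_model != "auto":
        return clip_vision_model  # an explicit user choice always wins
    if ip_adapter_format == "checkpoint":
        # Checkpoint/safetensors IP adapters don't carry enough metadata to
        # auto-detect the right encoder, so 'auto' is rejected.
        raise ValueError("CLIP Vision model must be set explicitly for checkpoint IP adapters.")
    # InvokeAI-format configs let us infer the encoder from the base model
    # (mapping shown here is illustrative only).
    return "ViT-H" if base_model == "sd-1" else "ViT-G"
```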
blessedcoolant
a14ce0edab chore: rename IPAdapterDiffusersConfig to IPAdapterInvokeAIConfig 2024-04-03 12:40:10 +05:30
blessedcoolant
91a70c8d07 feat: Let users pick CLIP Vision model for Checkpoint IP Adapters 2024-04-03 12:40:05 +05:30
blessedcoolant
5829b87b8d ui: update the new ip adapter configs on the frontend 2024-04-03 12:40:01 +05:30
blessedcoolant
79f7b61dfe fix: cleanup across various ip adapter files 2024-04-03 12:39:52 +05:30
blessedcoolant
b1c8266e22 feat: add base model recognition for ip adapter safetensor files 2024-04-03 12:39:52 +05:30
blessedcoolant
67afb1763e wip: Initial implementation of safetensor support for IP Adapter 2024-04-03 12:39:52 +05:30
psychedelicious
d4be945dde fix(nodes): gracefully handle custom nodes init error
Previously, exceptions raised while custom nodes were initialized were fatal errors, causing the app to exit.

With this change, any error on import is caught and the error message is printed. The app continues to start up without the node.

For example, a custom node that isn't updated for v4.0.0 may raise an error on import if it is attempting to import things that no longer exist.
2024-04-02 13:25:14 +11:00
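A minimal sketch of that pattern, loading each custom node package inside a try/except so one broken node cannot abort startup (module layout and names are illustrative):

```python
import importlib.util
import logging
from pathlib import Path

logger = logging.getLogger(__name__)


def load_custom_nodes(custom_nodes_dir: Path) -> None:
    """Import each custom node package, but never let one break startup."""
    for init_file in custom_nodes_dir.glob("*/__init__.py"):
        name = init_file.parent.name
        try:
            spec = importlib.util.spec_from_file_location(name, init_file)
            module = importlib.util.module_from_spec(spec)
            spec.loader.exec_module(module)  # may raise if the node targets an old API
        except Exception as exc:
            # Log the failure and continue; the app starts without this node.
            logger.warning("Failed to load custom node '%s': %s", name, exc)
```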
psychedelicious
5b016bf376 fix(nodes): esrgan model name typo 2024-03-22 02:22:19 -07:00
blessedcoolant
eafc85cfe3 feat: Add Mask from ID Node 2024-03-22 06:23:51 +05:30
psychedelicious
c36d12a50f feat: adaptation of Lineart Anime processor
Adapted from https://github.com/huggingface/controlnet_aux
2024-03-21 07:02:57 -07:00
psychedelicious
c7f8fe4d5e feat: adaptation of Lineart processor
Adapted from https://github.com/huggingface/controlnet_aux
2024-03-21 07:02:57 -07:00
psychedelicious
ffb41c3616 feat: adaptation of HED processor
Adapted from controlnet repo
2024-03-21 07:02:57 -07:00
psychedelicious
611006b692 feat: adaptation of Canny processor
Adapted from controlnet processors package

fix: do final resize in canny processor

canny
2024-03-21 07:02:57 -07:00
psychedelicious
01d8ab04a5 feat(nodes): add missing detect_resolution to processors
Some processors, like Canny, didn't use `detect_resolution`. The resultant control images were then resized by the processors from 512x512 to the desired dimensions. The result is that the control images are the right size, but very low quality.

Using detect_resolution fixes this.
2024-03-21 07:02:57 -07:00
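For illustration, here is how a processor call with `detect_resolution` might look using the upstream `controlnet_aux` package (the repo adapted these processors, so this is an analogy rather than the actual node code):

```python
from PIL import Image
from controlnet_aux import CannyDetector


def run_canny(image: Image.Image, image_resolution: int = 512) -> Image.Image:
    """Process at the image's own resolution, then resize to the target size."""
    detector = CannyDetector()
    return detector(
        image,
        # Without detect_resolution, detection happens at a small default
        # (e.g. 512x512) and the result is upscaled afterwards, which yields
        # a correctly sized but very low quality control image.
        detect_resolution=min(image.width, image.height),
        image_resolution=image_resolution,
    )
```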
psychedelicious
fabef8b45b feat(mm): download upscaling & lama models as they are requested 2024-03-20 15:05:25 +11:00
psychedelicious
29b04b7e83 chore: bump nodes versions
Bump all nodes in prep for v4.0.0.
2024-03-20 10:28:07 +11:00
maryhipp
820614e4d8 ruff 2024-03-19 21:59:51 +11:00
maryhipp
4e9207a10b fix(worker): remove resolution from zoe as it seems to break it 2024-03-19 21:59:51 +11:00
maryhipp
ed0f9f7d66 feat(worker): add image_resolution as option for all cnet processors 2024-03-19 21:59:51 +11:00
psychedelicious
dedce2d896 fix(config): remove unnecessary resolve on config path 2024-03-19 09:24:28 +11:00
psychedelicious
897fe497dc fix(config): use new get_config across the app, use correct settings 2024-03-19 09:24:28 +11:00
Lincoln Stein
7387b0bdc9 install missing clip_vision encoders if required by an ip adapter (#5982)
Co-authored-by: Lincoln Stein <lstein@gmail.com>
2024-03-18 02:19:53 +00:00
psychedelicious
297408d67e feat(nodes): add control adapter processed images to metadata
In the client, a controlnet or t2i adapter has two images:
- The source control image: the image the user selected (required)
- The processed control image: the user's image after we've processed it (optional)

The processed image is optional because a user may provide a pre-processed image.

We only actually use one of these images when building the graph, and until this change, we only stored one of them in image metadata. This created a situation where only a processed image was stored in metadata - say, a canny edge map - and the user-selected image wasn't provided.

By adding the processed image to metadata, we can recall both the control image and optional processed image.

This commit is followed by UI-facing changes to support it.
2024-03-14 12:34:03 -07:00
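A sketch of what the metadata entry for a control adapter might look like after this change, with both image references recorded (field names are assumed, not the actual schema):

```python
from typing import Optional

from pydantic import BaseModel


class ControlAdapterMetadata(BaseModel):
    """Records both control images so either can be recalled later."""
    image: str                              # name of the image the user selected (required)
    processed_image: Optional[str] = None   # name of the processed image, if one was generated
    control_model: str
    control_weight: float = 1.0
```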
psychedelicious
4492bedd19 tidy(nodes): use ModelIdentifierField for model metadata
Until recently, this had a different shape than the ModelMetadataField. They are now the same, so we can re-use the ModelIdentifierField.
2024-03-14 10:53:57 +11:00
blessedcoolant
af660163ca chore: cleanup DepthAnything code 2024-03-13 20:35:52 +05:30
Brandon Rising
8d2a4db902 Found another instance of expecting a mid_block on the decoder in a vae 2024-03-12 12:11:38 -04:00
Brandon Rising
7b393656de Update l2i invoke and seamless to support AutoencoderTiny, remove attention processors if no mid_block is detected 2024-03-12 12:00:24 -04:00
psychedelicious
8c2ff794d5 fix(nodes): ip adapter uses valid ModelIdentifierField for image encoder model
- Add class method to `ModelIdentifierField` to construct the field from a model config
- Use this to construct a valid IP adapter model field
2024-03-10 17:28:58 -05:00
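A minimal sketch of a class method like the one described, building the identifier field directly from a model config record (the real field and config carry more information; this is illustrative only):

```python
from pydantic import BaseModel


class ModelIdentifierField(BaseModel):
    """Simplified stand-in for the real identifier field."""
    key: str
    hash: str
    name: str
    base: str
    type: str

    @classmethod
    def from_config(cls, config) -> "ModelIdentifierField":
        # Construct a valid identifier from a model config, so callers such as
        # the IP adapter's image-encoder lookup can't drift out of sync with
        # the config shape.
        return cls(key=config.key, hash=config.hash, name=config.name,
                   base=config.base, type=config.type)
```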
Ryan Dick
145bb45858 Remove dead code related to an old symmetry feature. 2024-03-10 00:13:18 -06:00