Commit Graph

255 Commits

Author SHA1 Message Date
Ryan Dick
d183aa823c wip 2024-04-20 17:09:41 -04:00
blessedcoolant
6bab040d24 Merge branch 'main' into ip-adapter-style-comp 2024-04-16 21:14:06 +05:30
blessedcoolant
f46bbaf8c4 fix: make ip-adapter weights not be optional 2024-04-16 21:12:45 +05:30
blessedcoolant
d27907cc6d fix: entire reshaping block needs to be skipped 2024-04-16 04:29:53 +05:30
blessedcoolant
7ee3fef2db cleanup: better var names for the ip adapter weight collection block 2024-04-16 04:23:50 +05:30
blessedcoolant
a148c4322c fix: IP Adapter weights being incorrectly applied
They were being overwritten rather than being appended
2024-04-16 04:10:41 +05:30
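
The append-vs-overwrite bug described above is easy to picture with a minimal sketch (illustrative values and names only; this is not the actual InvokeAI code):

```python
# Collecting per-adapter weights across a loop.
weights_per_adapter = []

for weight in (0.5, 0.8, 1.0):  # stand-in for each IP Adapter's weights
    # Buggy version: plain assignment overwrote the previous adapter's
    # weights on every iteration.
    #   weights_per_adapter = [weight]
    # Fixed version: append, so every adapter's weights are kept.
    weights_per_adapter.append(weight)

assert weights_per_adapter == [0.5, 0.8, 1.0]
```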
blessedcoolant
5f6c6abf9c chore: change IPAdapterAttentionWeights to a dataclass 2024-04-15 23:38:55 +05:30
Lincoln Stein
e93f4d632d [util] Add generic torch device class (#6174)
* introduce new abstraction layer for GPU devices

* add unit test for device abstraction

* fix ruff

* convert TorchDeviceSelect into a stateless class

* move logic to select context-specific execution device into context API

* add mock hardware environments to pytest

* remove dangling mocker fixture

* fix unit test for running on non-CUDA systems

* remove unimplemented get_execution_device() call

* remove autocast precision

* Multiple changes:

1. Remove TorchDeviceSelect.get_execution_device(), as well as calls to
   context.models.get_execution_device().
2. Rename TorchDeviceSelect to TorchDevice.
3. Add back the legacy public API defined in `invocation_api`, including
   choose_precision().
4. Add a config file migration script to accommodate the removal of precision=autocast.

* add deprecation warnings to choose_torch_device() and choose_precision()

* fix test crash

* remove app_config argument from choose_torch_device() and choose_torch_dtype()

---------

Co-authored-by: Lincoln Stein <lstein@gmail.com>
2024-04-15 13:12:49 +00:00
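
A rough sketch of what a stateless device-abstraction class along these lines can look like. The method names `choose_torch_device()` and `choose_torch_dtype()` come from the PR description; the bodies here are assumptions, not the actual InvokeAI implementation:

```python
import torch


class TorchDevice:
    """Stateless helpers for choosing an execution device and dtype."""

    @classmethod
    def choose_torch_device(cls) -> torch.device:
        # Prefer CUDA, then Apple's MPS, then fall back to CPU.
        if torch.cuda.is_available():
            return torch.device("cuda")
        if torch.backends.mps.is_available():
            return torch.device("mps")
        return torch.device("cpu")

    @classmethod
    def choose_torch_dtype(cls, device: torch.device) -> torch.dtype:
        # Half precision on GPU backends, full precision on CPU.
        return torch.float16 if device.type in ("cuda", "mps") else torch.float32
```

Because the class holds no state, callers can use it directly, e.g. `dtype = TorchDevice.choose_torch_dtype(TorchDevice.choose_torch_device())`.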
blessedcoolant
8426f1e7b2 fix(experimental): Possible fix for conflict with regional embed length mismatch
Pushing this so people can test it out and see if this needs to be handled in a different way.
2024-04-14 12:19:19 +05:30
blessedcoolant
9cb0f63c44 refactor: fix a bunch of type issues in custom_attention 2024-04-13 14:17:25 +05:30
blessedcoolant
d4393e4170 chore: linter fixes 2024-04-13 12:14:45 +05:30
blessedcoolant
6ea183f0d4 wip: Initial Implementation IP Adapter Style & Comp Modes 2024-04-13 11:09:45 +05:30
psychedelicious
7bc77ddb40 fix(nodes): doubly-noised latents
When using refiner with a mask (i.e. inpainting), we don't have noise provided as an input to the node.

This situation uniquely hits a code path that wasn't reviewed when gradient denoising was implemented.

That code path does two things wrong:
- It lerp'd the input latents. This was fixed in 5a1f4cb1ce.
- It added noise to the latents an extra time. This is fixed in this change.

We don't need to add noise in `latents_from_embeddings` because we do it just a few lines later in `AddsMaskGuidance`.

- Remove the extraneous call to `add_noise`
- Make `seed` a required arg. We never call the function without a seed anyway. If we refactor this in the future, it will be clearer that we need to look at how seed is handled.
- Move the call to create the noise to a deeper conditional, just before we call `AddsMaskGuidance`. The created noise tensor is now only used in that function, no need to create it every time.

Note: Whether or not having both noise and latents as inputs on the node is correct is a separate conversation. This change just fixes the issue with the current setup.
2024-04-11 07:21:50 -04:00
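
A hedged sketch of the shape of this fix. Only the names `latents_from_embeddings`, `seed`, and `AddsMaskGuidance` come from the commit message; the signature and mask handling are assumptions:

```python
import torch


def latents_from_embeddings(
    latents: torch.Tensor,
    seed: int,  # now required: the function is never called without one
    mask: torch.Tensor | None = None,
) -> torch.Tensor:
    if mask is not None:
        # Noise is created only on the branch that consumes it, just before
        # the mask-guidance step. Previously it was also added to `latents`
        # earlier in the function, noising them a second time.
        generator = torch.Generator().manual_seed(seed)
        noise = torch.randn(latents.shape, generator=generator, dtype=latents.dtype)
        # Stand-in for AddsMaskGuidance: keep the input latents where masked,
        # use the freshly created noise elsewhere.
        latents = torch.where(mask.bool(), latents, noise)
    return latents
```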
Ryan Dick
f9af32a6d1 Fix the padding behavior when max-pooling regional IP-Adapter masks to mirror the downscaling behavior of SD and SDXL. Prior to this change, denoising with input latent dimensions that were not evenly divisible by 8 would raise an exception. 2024-04-09 16:50:43 -04:00
Ryan Dick
fba40eb1bd Fix the padding behavior when max-pooling regional prompt masks to mirror the downscaling behavior of SD and SDXL. Prior to this change, denoising with input latent dimensions that were not evenly divisible by 8 would raise an exception. 2024-04-09 16:50:43 -04:00
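2024-04-09 16:50:43 -04:00

These two commits apply the same idea to the two kinds of region masks. A minimal sketch of the padding trick (the helper name and pad mode are assumptions):

```python
import torch
import torch.nn.functional as F


def pool_mask(mask: torch.Tensor, factor: int) -> torch.Tensor:
    """Max-pool a (N, 1, H, W) mask, rounding up like SD/SDXL downscaling."""
    _, _, h, w = mask.shape
    # Pad the bottom/right edges so H and W are divisible by `factor`,
    # mirroring how the UNet's strided convolutions round partial tiles up.
    pad_h = (factor - h % factor) % factor
    pad_w = (factor - w % factor) % factor
    mask = F.pad(mask, (0, pad_w, 0, pad_h), mode="replicate")
    return F.max_pool2d(mask, kernel_size=factor, stride=factor)


mask = torch.ones(1, 1, 9, 9)    # 9 is not evenly divisible by 2
print(pool_mask(mask, 2).shape)  # torch.Size([1, 1, 5, 5]), not an exception
```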
Ryan Dick
0c6dd32ece (minor) Fix IP-Adapter conditional logic in CustomAttnProcessor2_0. 2024-04-09 15:06:51 -04:00
Ryan Dick
0bdbfd4d1d Add support for IP-Adapter masks. 2024-04-09 15:06:51 -04:00
Ryan Dick
2e27ed5f3d Pass IP-Adapter scales through the cross_attn_kwargs pathway, since they are the same for all attention layers. This change also helps to prepare for adding IP-Adapter region masks. 2024-04-09 15:06:51 -04:00
Ryan Dick
4a828818da Remove support for Prompt-to-Prompt cross-attention control (aka .swap()). This feature is not widely used. It does not work with SDXL and is incompatible with IP-Adapter and regional prompting. The implementation is also intertwined with both text embedding and the UNet attention layers, resulting in a high maintenance burden. For all of these reasons, we have decided to drop support. 2024-04-09 10:57:02 -04:00
Ryan Dick
182810337c Add utility to_standard_float_mask(...) to convert various mask formats to a standardized format. 2024-04-09 08:12:12 -04:00
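
A hedged sketch of what such a conversion utility can look like. Only the function name comes from the commit message; the accepted formats and defensive clamp are assumptions:

```python
import torch


def to_standard_float_mask(
    mask: torch.Tensor, out_dtype: torch.dtype = torch.float32
) -> torch.Tensor:
    """Convert a bool or float mask to a float mask with values in [0.0, 1.0]."""
    if mask.dtype == torch.bool:
        return mask.to(out_dtype)
    # Assume float masks are nominally in [0, 1]; clamp and cast defensively.
    return mask.clamp(0.0, 1.0).to(out_dtype)
```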
Ryan Dick
75ef473748 Pull the upstream changes from diffusers' AttnProcessor2_0 into CustomAttnProcessor2_0. This fixes a bug in CustomAttnProcessor2_0 that was being triggered when peft was not installed. The bug was present in a block of code that was previously copied from diffusers. The bug seems to have been introduced during diffusers' migration to PEFT for their LoRA handling. The upstream bug was fixed in 531e719163. 2024-04-09 08:12:12 -04:00
Ryan Dick
a78df8123f Update the diffusion logic to use the new regional prompting feature. 2024-04-09 08:12:12 -04:00
Ryan Dick
7ca677578e Create a UNetAttentionPatcher for patching UNet models with CustomAttnProcessor2_0 modules. 2024-04-09 08:12:12 -04:00
Ryan Dick
31c456c1e6 Update CustomAttention to support both IP-Adapters and regional prompting. 2024-04-09 08:12:12 -04:00
Ryan Dick
2ce79b61f5 Initialize a RegionalPromptAttnProcessor2_0 class by copying AttnProcessor2_0 from diffusers. 2024-04-09 08:12:12 -04:00
Ryan Dick
109e3f0e7f Add RegionalPromptData class for managing prompt region masks. 2024-04-09 08:12:12 -04:00
Ryan Dick
d1e45585d0 Add TextConditioningRegions to the TextConditioningData data structure. 2024-04-09 08:12:12 -04:00
Ryan Dick
aba023e0c5 Improve documentation of conditioning_data.py. 2024-04-09 08:12:12 -04:00
Ryan Dick
e354c29b52 Rename ConditioningData -> TextConditioningData. 2024-04-09 08:12:12 -04:00
Ryan Dick
a7f363e654 Split ip_adapter_conditioning out from ConditioningData. 2024-04-09 08:12:12 -04:00
Ryan Dick
9b2162e564 Remove scheduler_args from ConditioningData structure. 2024-04-09 08:12:12 -04:00
blessedcoolant
fd1f240853 fix: SDXL Refiner not working properly with Inpainting 2024-04-09 14:13:10 +10:00
psychedelicious
2526ef52c5 fix(nodes): workaround seamless multi gpu error #6010
The seamless logic errors when a second GPU is selected. I don't understand why, but a workaround is to skip the model patching when there are no seamless axes specified.

This is also just a good practice regardless - don't patch the model unless we need to. Probably a negligible perf impact.

Closes #6010
2024-03-29 08:56:38 +11:00
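
The workaround amounts to making the patch conditional. A minimal sketch, with `apply_seamless` standing in for the real model patcher:

```python
from contextlib import contextmanager, nullcontext


@contextmanager
def apply_seamless(model, axes):
    # Stand-in for the real patcher, which swaps conv padding modes on the
    # requested axes and restores them afterwards.
    try:
        yield model
    finally:
        pass


def denoise(model, seamless_axes: list[str]):
    # Only patch the model when seamless axes are actually requested; the
    # patch/unpatch cycle is what errored when a second GPU was selected.
    ctx = apply_seamless(model, seamless_axes) if seamless_axes else nullcontext(model)
    with ctx as m:
        ...  # run the denoise loop with `m`
```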
psychedelicious
b378cfcb46 cleanup: remove unused scripts, cruft
App runs & tests pass.
2024-03-20 15:05:25 +11:00
dunkeroni
609c2c0abf Fix: progress image preview for inpainting 2024-03-20 13:36:05 +11:00
dunkeroni
fe5fa7f8cc chore: make ruff 2024-03-20 13:36:05 +11:00
dunkeroni
8b30cbe81e chore: clean up old code comments 2024-03-20 13:36:05 +11:00
dunkeroni
2af9286345 fix: denoise mask incorrectly applied after step 2024-03-20 13:36:05 +11:00
psychedelicious
897fe497dc fix(config): use new get_config across the app, use correct settings 2024-03-19 09:24:28 +11:00
psychedelicious
a72cea014c fix(config): drop usage of deprecated config.xformers, just use the existing utility function 2024-03-19 09:24:28 +11:00
Brandon Rising
7b393656de Update l2i invoke and seamless to support AutoencoderTiny, remove attention processors if no mid_block is detected 2024-03-12 12:00:24 -04:00
Ryan Dick
145bb45858 Remove dead code related to an old symmetry feature. 2024-03-10 00:13:18 -06:00
Ryan Dick
cc45007dc4 Remove unused code for attention map saving. 2024-03-02 08:25:41 -05:00
Ryan Dick
73bec56c59 Delete unused functions from shared_invokeai_diffusion.py. 2024-03-02 08:25:41 -05:00
Ryan Dick
6935830f99 Remove unused constructor declared with typo in name: __int__. 2024-03-01 15:12:03 -05:00
Ryan Dick
7651eeea8d Merge sequential conditioning and cac conditioning logic to eliminate a bunch of duplication. 2024-03-01 15:12:03 -05:00
Ryan Dick
204e7d383b Remove outdated comments related to T2I-Adapters and ControlNets. 2024-03-01 15:12:03 -05:00
Ryan Dick
9bc4e7a593 Remove use of **kwargs in do_unet_step(...), where full parameter list is known and supported. 2024-03-01 15:12:03 -05:00
Ryan Dick
ad96857e0f Fix: avoid storing extra conditioning info in two places. 2024-03-01 15:12:03 -05:00
psychedelicious
dd9daf8efb chore: ruff 2024-03-01 10:42:33 +11:00