Commit Graph

192 Commits

Ryan Dick
9a3b8c6fcb Fix handling of init_timestep in StableDiffusionGeneratorPipeline and improve its documentation. 2024-06-26 12:51:51 -04:00
Ryan Dick
bd74b84cc5 Revert "Remove the redundant init_timestep parameter that was being passed around. It is simply the first element of the timesteps array."
This reverts commit fa40061eca.
2024-06-26 12:51:51 -04:00
Ryan Dick
cd9dfefe3c Fix inpainting mask shape assertions. 2024-06-25 11:31:52 -07:00
Ryan Dick
fa40061eca Remove the redundant init_timestep parameter that was being passed around. It is simply the first element of the timesteps array. 2024-06-25 11:31:52 -07:00
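The relationship behind this removal (and the edge case that may have motivated the revert above) is easy to sketch. A minimal illustration, loosely following the get_timesteps slicing pattern common in diffusers img2img pipelines; every name here is illustrative rather than InvokeAI's actual API:

```python
import torch

def get_timesteps(schedule: torch.Tensor, strength: float) -> tuple[torch.Tensor, torch.Tensor]:
    """Slice a full denoising schedule according to img2img strength."""
    t_start = int(len(schedule) * (1.0 - strength))
    timesteps = schedule[t_start:]
    # The "redundant" value: equal to timesteps[0] whenever the sliced
    # schedule is non-empty...
    init_timestep = timesteps[:1]
    # ...but empty when strength leaves zero steps to run, a case in which
    # "the first element of the timesteps array" does not exist.
    return timesteps, init_timestep
```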
Ryan Dick
605f460c7d Add detailed docstring to latents_from_embeddings(). 2024-06-25 11:31:52 -07:00
Ryan Dick
22704dd542 Simplify handling of inpainting models. Improve the in-code documentation around inpainting. 2024-06-25 11:31:52 -07:00
Ryan Dick
875673c9ba Minor tidying of latents_from_embeddings(...). 2024-06-25 11:31:52 -07:00
Ryan Dick
f604575862 Consolidate latents_from_embeddings(...) and generate_latents_from_embeddings(...) into a single function. 2024-06-25 11:31:52 -07:00
Ryan Dick
60ac937698 Improve clarity of comments regarding when 'noise' and 'latents' are expected to be set. 2024-06-25 11:31:52 -07:00
Ryan Dick
1e41949a02 Fix static check errors on imports in diffusers_pipeline.py. 2024-06-25 11:31:52 -07:00
Ryan Dick
5f0e330ed2 Remove a condition for handling inpainting models that never resolves to True. The same logic is already applied earlier by AddsMaskLatents. 2024-06-25 11:31:52 -07:00
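For context on the AddsMaskLatents logic referenced above: Stable Diffusion inpainting UNets take nine input channels rather than four, so the mask and masked-image latents are concatenated onto the noisy latents before every UNet call. A minimal sketch of that preprocessing, with illustrative names:

```python
import torch

def add_mask_latents(latents: torch.Tensor, mask: torch.Tensor, masked_image_latents: torch.Tensor) -> torch.Tensor:
    # latents: (B, 4, H, W); mask: (B, 1, H, W); masked_image_latents: (B, 4, H, W).
    # Inpainting UNets expect the channel-wise concatenation: (B, 9, H, W).
    return torch.cat([latents, mask, masked_image_latents], dim=1)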
Ryan Dick
9dd779b414 Add clarifying comment to explain why noise might be None in latents_from_embeddings(). 2024-06-25 11:31:52 -07:00
Ryan Dick
fa183025ac Remove unused are_like_tensors() function. 2024-06-25 11:31:52 -07:00
Ryan Dick
d3c85aa91a Remove unused StableDiffusionGeneratorPipeline.use_ip_adapter member. 2024-06-25 11:31:52 -07:00
Ryan Dick
82619602a5 Remove unused StableDiffusionGeneratorPipeline.control_model. 2024-06-25 11:31:52 -07:00
Ryan Dick
196f3b721d Stricter typing for is_gradient_mask: bool. 2024-06-25 11:31:52 -07:00
Ryan Dick
244c28859d Fix typing of control_data to reflect that it can be None. 2024-06-25 11:31:52 -07:00
Ryan Dick
40ae174c41 Fix typing of timesteps and init_timestep. 2024-06-25 11:31:52 -07:00
Ryan Dick
afaebdf151 Fix typing to reflect that the callback arg to latents_from_embeddings is never None. 2024-06-25 11:31:52 -07:00
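The four typing commits above make existing nullability explicit rather than changing behavior. The flavor of the annotations, sketched as hypothetical declarations (the names appear in the commits; the exact signatures are not reproduced here):

```python
from typing import Callable, Optional

import torch

timesteps: torch.Tensor           # previously untyped or too loose
init_timestep: torch.Tensor
control_data: Optional[list]      # may be None, so Optional is explicit
callback: Callable[..., None]     # never None, so deliberately not Optional
is_gradient_mask: bool            # a plain bool rather than a looser type
```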
Ryan Dick
d661517d94 Move seed above optional params. 2024-06-25 11:31:52 -07:00
Ryan Dick
82a69a54ac Simplify handling of AddsMaskGuidance, and fix some related type errors. 2024-06-25 11:31:52 -07:00
Ryan Dick
ffc28176fe Remove unused num_inference_steps. 2024-06-25 11:31:52 -07:00
Ryan Dick
d08e405017 Fix ControlNetModel type hint import source. 2024-06-25 11:31:52 -07:00
blessedcoolant
6bab040d24 Merge branch 'main' into ip-adapter-style-comp 2024-04-16 21:14:06 +05:30
Lincoln Stein
e93f4d632d
[util] Add generic torch device class (#6174)
* introduce new abstraction layer for GPU devices

* add unit test for device abstraction

* fix ruff

* convert TorchDeviceSelect into a stateless class

* move logic to select context-specific execution device into context API

* add mock hardware environments to pytest

* remove dangling mocker fixture

* fix unit test for running on non-CUDA systems

* remove unimplemented get_execution_device() call

* remove autocast precision

* Multiple changes:

1. Remove TorchDeviceSelect.get_execution_device(), as well as calls to
   context.models.get_execution_device().
2. Rename TorchDeviceSelect to TorchDevice
3. Added back the legacy public API defined in `invocation_api`, including
   choose_precision().
4. Added a config file migration script to accommodate removal of precision=autocast.

* add deprecation warnings to choose_torch_device() and choose_precision()

* fix test crash

* remove app_config argument from choose_torch_device() and choose_torch_dtype()

---------

Co-authored-by: Lincoln Stein <lstein@gmail.com>
2024-04-15 13:12:49 +00:00
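A simplified sketch of what the stateless TorchDevice class described in this PR might look like. The two method names appear in the PR notes above; the bodies are a plausible reading, not the shipped implementation:

```python
import torch

class TorchDevice:
    """Stateless helpers for picking an execution device and dtype."""

    @classmethod
    def choose_torch_device(cls) -> torch.device:
        if torch.cuda.is_available():
            return torch.device("cuda")
        if torch.backends.mps.is_available():
            return torch.device("mps")
        return torch.device("cpu")

    @classmethod
    def choose_torch_dtype(cls, device: torch.device) -> torch.dtype:
        # Half precision on accelerators, full precision on CPU.
        return torch.float16 if device.type in ("cuda", "mps") else torch.float32
```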
blessedcoolant
6ea183f0d4 wip: Initial implementation of IP Adapter Style & Comp Modes 2024-04-13 11:09:45 +05:30
psychedelicious
7bc77ddb40 fix(nodes): doubly-noised latents
When using refiner with a mask (i.e. inpainting), we don't have noise provided as an input to the node.

This situation uniquely hits a code path that wasn't reviewed when gradient denoising was implemented.

That code path does two things wrong:
- It lerp'd the input latents. This was fixed in 5a1f4cb1ce.
- It added noise to the latents an extra time. This is fixed in this change.

We don't need to add noise in `latents_from_embeddings` because we do it just a few lines later in `AddsMaskGuidance`.

- Remove the extraneous call to `add_noise`
- Make `seed` a required arg. We never call the function without a seed anyway. If we refactor this in the future, it will be clearer that we need to look at how seed is handled.
- Move the call to create the noise to a deeper conditional, just before we call `AddsMaskGuidance`. The created noise tensor is now only used in that function, no need to create it every time.

Note: Whether or not having both noise and latents as inputs on the node is correct is a separate conversation. This change just fixes the issue with the current setup.
2024-04-11 07:21:50 -04:00
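A sketch of the code path being fixed: the original image latents are re-noised to the current timestep inside the mask-guidance step, so noising `latents` beforehand noises them twice. Only `scheduler.add_noise(...)` is a real diffusers call; the surrounding names are illustrative:

```python
import torch

def apply_mask_guidance(latents, init_latents, mask, scheduler, noise, t):
    # Re-noise the *original* image latents to the current timestep t.
    noised_init = scheduler.add_noise(init_latents, noise, t)
    # Blend: keep the re-noised original where the mask says so, and the
    # in-progress denoised latents elsewhere. Calling add_noise on `latents`
    # before this point would noise them a second time, which is the bug.
    return mask * noised_init + (1.0 - mask) * latents
```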
Ryan Dick
2e27ed5f3d Pass IP-Adapter scales through the cross_attn_kwargs pathway, since they are the same for all attention layers. This change also helps to prepare for adding IP-Adapter region masks. 2024-04-09 15:06:51 -04:00
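For readers unfamiliar with the pathway named above: diffusers forwards `cross_attention_kwargs` from the top-level UNet call down to every attention processor, so a per-step value that is identical across layers can ride along instead of being stored on each layer. A sketch in which the wrapper function and the `ip_adapter_scales` key are hypothetical:

```python
import torch

def unet_step(unet, latents: torch.Tensor, t: torch.Tensor,
              prompt_embeds: torch.Tensor, ip_scales: list[float]) -> torch.Tensor:
    return unet(
        latents,
        t,
        encoder_hidden_states=prompt_embeds,
        # Delivered unchanged to every attention processor in the UNet.
        cross_attention_kwargs={"ip_adapter_scales": ip_scales},
    ).sample
```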
Ryan Dick
4a828818da Remove support for Prompt-to-Prompt cross-attention control (aka .swap()). This feature is not widely used. It does not work with SDXL and is incompatible with IP-Adapter and regional prompting. The implementation is also intertwined with both text embedding and the UNet attention layers, resulting in a high maintenance burden. For all of these reasons, we have decided to drop support. 2024-04-09 10:57:02 -04:00
Ryan Dick
a78df8123f Update the diffusion logic to use the new regional prompting feature. 2024-04-09 08:12:12 -04:00
Ryan Dick
7ca677578e Create a UNetAttentionPatcher for patching UNet models with CustomAttnProcessor2_0 modules. 2024-04-09 08:12:12 -04:00
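The patching pattern such a UNetAttentionPatcher presumably uses can be sketched with diffusers' stock API: build a dict keyed by processor name and hand it to set_attn_processor(). The CustomAttnProcessor2_0 subclass below is a stand-in for InvokeAI's class; only the diffusers calls are known-good:

```python
from diffusers.models.attention_processor import AttnProcessor2_0

class CustomAttnProcessor2_0(AttnProcessor2_0):
    """Stand-in for a processor extended with regional-prompting hooks."""

def patch_unet_attention(unet) -> None:
    # unet.attn_processors maps each attention layer's name to its processor.
    procs = {name: CustomAttnProcessor2_0() for name in unet.attn_processors}
    unet.set_attn_processor(procs)
```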
Ryan Dick
d1e45585d0 Add TextConditioningRegions to the TextConditioningData data structure. 2024-04-09 08:12:12 -04:00
Ryan Dick
e354c29b52 Rename ConditioningData -> TextConditioningData. 2024-04-09 08:12:12 -04:00
Ryan Dick
a7f363e654 Split ip_adapter_conditioning out from ConditioningData. 2024-04-09 08:12:12 -04:00
Ryan Dick
9b2162e564 Remove scheduler_args from ConditioningData structure. 2024-04-09 08:12:12 -04:00
blessedcoolant
fd1f240853 fix: SDXL Refiner not working properly with Inpainting 2024-04-09 14:13:10 +10:00
psychedelicious
b378cfcb46 cleanup: remove unused scripts, cruft
App runs & tests pass.
2024-03-20 15:05:25 +11:00
dunkeroni
609c2c0abf Fix: progress image preview for inpainting 2024-03-20 13:36:05 +11:00
dunkeroni
fe5fa7f8cc chore: make ruff 2024-03-20 13:36:05 +11:00
dunkeroni
8b30cbe81e chore: clean up old code comments 2024-03-20 13:36:05 +11:00
dunkeroni
2af9286345 fix: denoise mask incorrectly applied after step 2024-03-20 13:36:05 +11:00
psychedelicious
897fe497dc fix(config): use new get_config across the app, use correct settings 2024-03-19 09:24:28 +11:00
psychedelicious
a72cea014c fix(config): drop usage of deprecated config.xformers, just use the existing utility function 2024-03-19 09:24:28 +11:00
Ryan Dick
145bb45858 Remove dead code related to an old symmetry feature. 2024-03-10 00:13:18 -06:00
Ryan Dick
cc45007dc4 Remove unused code for attention map saving. 2024-03-02 08:25:41 -05:00
Ryan Dick
204e7d383b Remove outdated comments related to T2I-Adapters and ControlNets. 2024-03-01 15:12:03 -05:00
Ryan Dick
9bc4e7a593 Remove use of **kwargs in do_unet_step(...), where the full parameter list is known and supported. 2024-03-01 15:12:03 -05:00
Ryan Dick
ad96857e0f Fix: avoid storing extra conditioning info in two places. 2024-03-01 15:12:03 -05:00
dunkeroni
06cc57d82a feat(nodes): added gradient mask node 2024-03-01 10:42:33 +11:00
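One simple reading of the "gradient mask" idea, for context: a float-valued mask in [0, 1] instead of a binary one, so that edited and preserved regions feather into each other. A hedged illustration, not the node's actual implementation:

```python
import torch

def blend_with_gradient_mask(original: torch.Tensor, denoised: torch.Tensor,
                             mask: torch.Tensor) -> torch.Tensor:
    # mask == 1.0 keeps the denoised result, mask == 0.0 keeps the original,
    # and intermediate values blend the two smoothly.
    return torch.lerp(original, denoised, mask)
```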
Jonathan
83a9e26cd8
Respect torch-sdp in config.yaml (#5353)
If the user specifies `torch-sdp` as the attention type in `config.yaml`, we can go ahead and use it (if available) rather than always throwing an exception.
2023-12-28 05:46:28 +00:00
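A sketch of the dispatch this PR describes: honor `torch-sdp` when PyTorch's scaled_dot_product_attention is available instead of raising unconditionally. The diffusers import and set_attn_processor() call are real; the function name and error message are illustrative:

```python
import torch.nn.functional as F
from diffusers.models.attention_processor import AttnProcessor2_0

def apply_attention_type(unet, attention_type: str) -> None:
    if attention_type == "torch-sdp":
        if not hasattr(F, "scaled_dot_product_attention"):
            raise RuntimeError("torch-sdp requires PyTorch 2.0 or newer")
        unet.set_attn_processor(AttnProcessor2_0())  # SDP-backed attention
```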