InvokeAI/invokeai/backend/stable_diffusion/diffusion
Latest commit: 2024-04-09 10:57:02 -04:00
__init__.py - Remove unused code for attention map saving. (2024-03-02 08:25:41 -05:00)
conditioning_data.py - Remove support for Prompt-to-Prompt cross-attention control (aka .swap()). The feature is not widely used, does not work with SDXL, and is incompatible with IP-Adapter and regional prompting. Its implementation is also intertwined with both text embedding and the UNet attention layers, creating a high maintenance burden, so support has been dropped. (2024-04-09 10:57:02 -04:00)
custom_atttention.py - Pull the upstream changes from diffusers' AttnProcessor2_0 into CustomAttnProcessor2_0. This fixes a bug in CustomAttnProcessor2_0 that was triggered when peft was not installed; the bug sat in a block of code previously copied from diffusers and appears to have been introduced during diffusers' migration to PEFT for LoRA handling. The upstream fix is commit 531e719163. A sketch of the subclassing alternative appears after this listing. (2024-04-09 08:12:12 -04:00)
regional_prompt_data.py - Add a utility, to_standard_float_mask(...), to convert various mask formats to a standardized format (see the sketch after this listing). (2024-04-09 08:12:12 -04:00)
shared_invokeai_diffusion.py - Remove support for Prompt-to-Prompt cross-attention control (aka .swap()); same commit and rationale as conditioning_data.py above. (2024-04-09 10:57:02 -04:00)
unet_attention_patcher.py - Create a UNetAttentionPatcher for patching UNet models with CustomAttnProcessor2_0 modules (a minimal patching sketch follows this listing). (2024-04-09 08:12:12 -04:00)
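
On custom_atttention.py: CustomAttnProcessor2_0 carries a copied-and-modified version of the upstream AttnProcessor2_0 body, which is why upstream fixes must be pulled in by hand. When a customization only needs to wrap the attention call rather than rewrite its internals, subclassing avoids that sync burden. A minimal sketch of that alternative pattern follows; it is not InvokeAI's actual class, and the LoggingAttnProcessor name and behavior are invented for illustration.

    import torch
    from diffusers.models.attention_processor import Attention, AttnProcessor2_0


    class LoggingAttnProcessor(AttnProcessor2_0):
        # Hypothetical processor: it delegates to the upstream implementation,
        # so upstream bug fixes (like the PEFT/LoRA handling fix above) apply
        # automatically instead of having to be copied in.
        def __call__(self, attn: Attention, hidden_states: torch.Tensor, *args, **kwargs) -> torch.Tensor:
            out = super().__call__(attn, hidden_states, *args, **kwargs)
            # Custom behavior goes here; this example only records the output shape.
            self.last_output_shape = tuple(out.shape)
            return out

InvokeAI copies the body instead because features like regional prompting must change the attention computation itself, not just wrap it.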
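
On regional_prompt_data.py: the commit message names to_standard_float_mask(...) but not its contract. A minimal sketch, assuming the standardized format is a float tensor containing only 0.0 and 1.0 in a caller-chosen dtype; the real function's signature and edge-case handling may differ.

    import torch


    def to_standard_float_mask(mask: torch.Tensor, out_dtype: torch.dtype = torch.float32) -> torch.Tensor:
        # Convert a bool or float mask to a float mask whose values are exactly 0.0 or 1.0.
        if mask.dtype == torch.bool:
            return mask.to(out_dtype)
        if mask.dtype.is_floating_point:
            # Assumed convention: values above 0.5 count as "inside" the mask.
            return (mask > 0.5).to(out_dtype)
        raise TypeError(f"Unsupported mask dtype: {mask.dtype}")

For example, to_standard_float_mask(torch.rand(1, 1, 64, 64), out_dtype=torch.float16) yields a binarized float16 mask.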
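
On unet_attention_patcher.py: UNetAttentionPatcher's real interface is not shown in this listing. The sketch below illustrates the general patching pattern with diffusers' public attn_processors/set_attn_processor API; plain AttnProcessor2_0 stands in for CustomAttnProcessor2_0, whose constructor arguments are not given here.

    from contextlib import contextmanager

    from diffusers import UNet2DConditionModel
    from diffusers.models.attention_processor import AttnProcessor2_0


    @contextmanager
    def patch_unet_attention(unet: UNet2DConditionModel):
        # Temporarily replace every attention processor in the UNet,
        # restoring the originals afterwards (even if the body raises).
        original = unet.attn_processors  # dict: layer name -> processor instance
        try:
            unet.set_attn_processor({name: AttnProcessor2_0() for name in original})
            yield unet
        finally:
            unet.set_attn_processor(original)

Wrapping a pipeline call in with patch_unet_attention(pipe.unet): ensures the UNet regains its original processors when the block exits.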