Merge branch 'main' into refactor_use_compel

Author: Lincoln Stein
Date: 2023-02-21 17:19:48 -05:00 (committed via GitHub)
Commit: 0a6d8b4855
12 changed files with 211 additions and 68 deletions

Binary file (image) changed, not shown: 20 KiB → 84 KiB

Binary file (image) added, not shown: 128 KiB

Binary file (image) added, not shown: 114 KiB

View File

@@ -40,9 +40,10 @@ experimental versions later.
 this, open up a command-line window ("Terminal" on Linux and
 Macintosh, "Command" or "Powershell" on Windows) and type `python
 --version`. If Python is installed, it will print out the version
-number. If it is version `3.9.1` or `3.10.x`, you meet
-requirements.
+number. If it is version `3.9.*` or `3.10.*`, you meet
+requirements. We do not recommend using Python 3.11 or higher,
+as not all the libraries that InvokeAI depends on work properly
+with this version.

 !!! warning "What to do if you have an unsupported version"
@@ -50,8 +51,7 @@ experimental versions later.
     and download the appropriate installer package for your
     platform. We recommend [Version
     3.10.9](https://www.python.org/downloads/release/python-3109/),
-    which has been extensively tested with InvokeAI. At this time
-    we do not recommend Python 3.11.
+    which has been extensively tested with InvokeAI.

 _Please select your platform in the section below for platform-specific
 setup requirements._
@@ -150,7 +150,7 @@ experimental versions later.
    ```cmd
    C:\Documents\Linco> cd InvokeAI-Installer
-   C:\Documents\Linco\invokeAI> install.bat
+   C:\Documents\Linco\invokeAI> .\install.bat
    ```

 7. **Select the location to install InvokeAI**: The script will ask you to choose where to install InvokeAI. Select a
@@ -167,6 +167,11 @@ experimental versions later.
    `/home/YourName/invokeai` on Linux systems, and `/Users/YourName/invokeai`
    on Macintoshes, where "YourName" is your login name.

+   - If you have previously installed InvokeAI, you will be asked to
+     confirm whether you want to reinstall into this directory. You
+     may choose to reinstall, in which case your version will be upgraded,
+     or choose a different directory.
+
    - The script uses tab autocompletion to suggest directory path completions.
      Type part of the path (e.g. "C:\Users") and press ++tab++ repeatedly
      to suggest completions.
@@ -181,11 +186,6 @@ experimental versions later.
    are unsure what GPU you are using, you can ask the installer to
    guess.

-   <figure markdown>
-   ![choose-gpu-screenshot](../assets/installer-walkthrough/choose-gpu.png)
-   </figure>
-
 9. **Watch it go!**: Sit back and let the install script work. It will install the third-party
    libraries needed by InvokeAI and the application itself.
@@ -197,25 +197,138 @@ experimental versions later.
    minutes and nothing is happening, you can interrupt the script with ^C. You
    may restart it and it will pick up where it left off.

-10. **Post-install Configuration**: After installation completes, the installer will launch the
-    configuration script, which will guide you through the first-time
-    process of selecting one or more Stable Diffusion model weights
-    files, downloading and configuring them. We provide a list of
-    popular models that InvokeAI performs well with. However, you can
-    add more weight files later on using the command-line client or
-    the Web UI. See [Installing Models](050_INSTALLING_MODELS.md) for
-    details.
-
    <figure markdown>
-   ![downloading-models-screenshot](../assets/installer-walkthrough/downloading-models.png)
+   ![initial-settings-screenshot](../assets/installer-walkthrough/settings-form.png)
    </figure>

-    If you have already downloaded the weights file(s) for another Stable
-    Diffusion distribution, you may skip this step (by selecting "skip" when
-    prompted) and configure InvokeAI to use the previously-downloaded files. The
-    process for this is described in [Installing Models](050_INSTALLING_MODELS.md).
+10. **Post-install Configuration**: After installation completes, the
+    installer will launch the configuration form, which will guide you
+    through the first-time process of adjusting some of InvokeAI's
+    startup settings. To move around this form use ctrl-N for
+    <N>ext and ctrl-P for <P>revious, or use <tab>
+    and shift-<tab> to move forward and back. Once you are in a
+    multi-checkbox field use the up and down cursor keys to select the
+    item you want, and <space> to toggle it on and off. Within
+    a directory field, pressing <tab> will provide autocomplete
+    options.
+
+    Generally the defaults are fine, and you can come back to this screen at
+    any time to tweak your system. Here are the options you can adjust:
+
+    - ***Output directory for images***
+      This is the path to a directory in which InvokeAI will store all its
+      generated images.
+
+    - ***NSFW checker***
+      If checked, InvokeAI will test images for potential sexual content
+      and blur them out if found.
+
+    - ***HuggingFace Access Token***
+      InvokeAI has the ability to download embedded styles and subjects
+      from the HuggingFace Concept Library on-demand. However, some of
+      the concept library files are password protected. To make download
+      smoother, you can set up an account at huggingface.co, obtain an
+      access token, and paste it into this field. Note that you paste
+      to this screen using ctrl-shift-V.
+
+    - ***Free GPU memory after each generation***
+      This is useful for low-memory machines and helps minimize the
+      amount of GPU VRAM used by InvokeAI.
+
+    - ***Enable xformers support if available***
+      If the xformers library was successfully installed, this will activate
+      it to reduce memory consumption and increase rendering speed noticeably.
+      Note that xformers has the side effect of generating slightly different
+      images even when presented with the same seed and other settings.
+
+    - ***Force CPU to be used on GPU systems***
+      This will use the (slow) CPU rather than the accelerated GPU. This
+      can be used to generate images on systems that don't have a compatible
+      GPU.
+
+    - ***Precision***
+      This controls whether to use float32 or float16 arithmetic.
+      float16 uses less memory but is also slightly less accurate.
+      Ordinarily the right arithmetic is picked automatically ("auto"),
+      but you may have to use float32 to get images on certain systems
+      and graphics cards. The "autocast" option is deprecated and
+      shouldn't be used unless you are asked to by a member of the team.
+
+    - ***Number of models to cache in CPU memory***
+      This allows you to keep models in memory and switch rapidly among
+      them rather than having them load from disk each time. This slider
+      controls how many models to keep loaded at once. Each
+      model will use 2-4 GB of RAM, so use this cautiously.
+
+    - ***Directory containing embedding/textual inversion files***
+      This is the directory in which you can place custom embedding
+      files (.pt or .bin). During startup, this directory will be
+      scanned and InvokeAI will print out the text terms that
+      are available to trigger the embeddings.
+
+    At the bottom of the screen you will see a checkbox for accepting
+    the CreativeML Responsible AI License. You need to accept the license
+    in order to download Stable Diffusion models from the next screen.
+
+    _You can come back to the startup options form_ as many times as you like.
+    From the `invoke.sh` or `invoke.bat` launcher, select option (6) to relaunch
+    this script. On the command line, it is named `invokeai-configure`.
+
+11. **Downloading Models**: After you press `[NEXT]` on the screen, you will be taken
+    to another screen that prompts you to download a series of starter models. The ones
+    we recommend are preselected for you, but you are encouraged to use the checkboxes to
+    pick and choose.
+
+    You will probably wish to download `autoencoder-840000` for use with models that
+    were trained with an older version of the Stability VAE.
+
+    <figure markdown>
+    ![select-models-screenshot](../assets/installer-walkthrough/installing-models.png)
+    </figure>
+
+    Below the preselected list of starter models is a large text field which you can use
+    to specify a series of models to import. You can specify models in a variety of formats,
+    each separated by a space or newline. The formats accepted are:
+
+    - The path to a .ckpt or .safetensors file. On most systems, you can drag a file from
+      the file browser to the textfield to automatically paste the path. Be sure to remove
+      extraneous quotation marks and other things that come along for the ride.
+
+    - The path to a directory containing a combination of `.ckpt` and `.safetensors` files.
+      The directory will be scanned from top to bottom (including subfolders) and any
+      file that can be imported will be.
+
+    - A URL pointing to a `.ckpt` or `.safetensors` file. You can cut
+      and paste directly from a web page, or simply drag the link from the web page
+      or navigation bar. (You can also use ctrl-shift-V to paste into this field.)
+      The file will be downloaded and installed.
+
+    - The HuggingFace repository ID (repo_id) for a `diffusers` model. These IDs have
+      the format _author_name/model_name_, as in `andite/anything-v4.0`.
+
+    - The path to a local directory containing a `diffusers`
+      model. These directories always have the file `model_index.json`
+      at their top level.
+
+    _Select a directory for models to import_ You may select a local
+    directory for autoimporting at startup time. If you select this
+    option, the directory you choose will be scanned for new
+    .ckpt/.safetensors files each time InvokeAI starts up, and any new
+    files will be automatically imported and made available for your
+    use.
+
+    _Convert imported models into diffusers_ When legacy checkpoint
+    files are imported, you may select to use them unmodified (the
+    default) or to convert them into `diffusers` models. The latter
+    load much faster and have slightly better rendering performance,
+    but not all checkpoint files can be converted. Note that Stable Diffusion
+    Version 2.X files are **only** supported in `diffusers` format and will
+    be converted regardless.
+
+    _You can come back to the model install form_ as many times as you like.
+    From the `invoke.sh` or `invoke.bat` launcher, select option (5) to relaunch
+    this script. On the command line, it is named `invokeai-model-install`.

-11. **Running InvokeAI for the first time**: The script will now exit and you'll be ready to generate some images. Look
+12. **Running InvokeAI for the first time**: The script will now exit and you'll be ready to generate some images. Look
    for the directory `invokeai` installed in the location you chose at the
    beginning of the install session. Look for a shell script named `invoke.sh`
    (Linux/Mac) or `invoke.bat` (Windows). Launch the script by double-clicking
@@ -348,25 +461,11 @@ version (recommended), follow these steps:
 1. Start the `invoke.sh`/`invoke.bat` launch script from within the
    `invokeai` root directory.

-2. Choose menu item (6) "Developer's Console". This will launch a new
-   command line.
-
-3. Type the following command:
-
-   ```bash
-   pip install InvokeAI --upgrade
-   ```
-
-4. Watch the installation run. Once it is complete, you may exit the
-   command line by typing `exit`, and then start InvokeAI from the
-   launch script as per usual.
-
-   Alternatively, if you wish to get the most recent unreleased
-   development version, perform the same steps to enter the developer's
-   console, and then type:
-
-   ```bash
-   pip install https://github.com/invoke-ai/InvokeAI/archive/refs/heads/main.zip
-   ```
+2. Choose menu item (10) "Update InvokeAI".
+
+3. This will launch a menu that gives you the option of:
+
+   1. Updating to the latest official release;
+   2. Updating to the bleeding-edge development version; or
+   3. Manually entering the tag or branch name of a version of
+      InvokeAI you wish to try out.

View File

@@ -178,7 +178,9 @@ class Generate:
         self.model_hash = None
         self.sampler = None
         self.device = None
-        self.session_peakmem = None
+        self.max_memory_allocated = 0
+        self.memory_allocated = 0
+        self.session_peakmem = 0
         self.base_generator = None
         self.seed = None
         self.outdir = outdir
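The new counters suggest per-session GPU memory bookkeeping. How they are refreshed is not shown in this hunk; below is a minimal sketch of the kind of update that PyTorch's CUDA allocator statistics make possible (the helper name and wiring are assumptions, not the class's actual method).

```python
import torch

def update_memory_stats(generate) -> None:
    # Illustrative only: refresh the counters from the CUDA allocator.
    if torch.cuda.is_available():
        generate.memory_allocated = torch.cuda.memory_allocated()
        generate.max_memory_allocated = torch.cuda.max_memory_allocated()
        generate.session_peakmem = max(generate.session_peakmem,
                                       generate.max_memory_allocated)
```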
@@ -781,6 +783,7 @@ class Generate:
                 embiggen_tiles=opt.embiggen_tiles,
                 embiggen_strength=opt.embiggen_strength,
                 image_callback=callback,
+                clear_cuda_cache=self.clear_cuda_cache,
             )
         elif tool == "outpaint":
             from ldm.invoke.restoration.outpaint import Outpaint

View File

@@ -62,6 +62,7 @@ def main():
     Globals.always_use_cpu = args.always_use_cpu
     Globals.internet_available = args.internet_available and check_internet()
     Globals.disable_xformers = not args.xformers
+    Globals.sequential_guidance = args.sequential_guidance
     Globals.ckpt_convert = args.ckpt_convert

     print(f">> Internet connectivity is {Globals.internet_available}")

View File

@@ -91,13 +91,14 @@ import pydoc
 import re
 import shlex
 import sys
+from argparse import Namespace
+from pathlib import Path

 import ldm.invoke
 import ldm.invoke.pngwriter
+from compel.prompt_parser import split_weighted_subprompts
 from ldm.invoke.globals import Globals
-from argparse import Namespace
-from pathlib import Path

 APP_ID = ldm.invoke.__app_id__
 APP_NAME = ldm.invoke.__app_name__
@@ -488,6 +489,13 @@ class Args(object):
             action='store_true',
             help='Force free gpu memory before final decoding',
         )
+        model_group.add_argument(
+            '--sequential_guidance',
+            dest='sequential_guidance',
+            action='store_true',
+            help="Calculate guidance in serial instead of in parallel, lowering memory requirement "
+                 "at the expense of speed",
+        )
         model_group.add_argument(
             '--xformers',
             action=argparse.BooleanOptionalAction,
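Because the option uses argparse's `store_true` action, it is off by default and enabled only when the flag is present on the command line. A small self-contained sketch of that behavior (the parser here is a stand-in for the real `Args` machinery):

```python
import argparse

# Stand-in parser: --sequential_guidance defaults to False and is later
# copied into Globals.sequential_guidance by the CLI startup code.
parser = argparse.ArgumentParser()
parser.add_argument(
    '--sequential_guidance',
    dest='sequential_guidance',
    action='store_true',
    help='Calculate guidance in serial instead of in parallel, '
         'lowering memory requirement at the expense of speed',
)

assert parser.parse_args([]).sequential_guidance is False
assert parser.parse_args(['--sequential_guidance']).sequential_guidance is True
```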

View File

@@ -126,7 +126,7 @@ class Generator:
                 seed = self.new_seed()

                 # Free up memory from the last generation.
-                clear_cuda_cache = kwargs['clear_cuda_cache'] or None
+                clear_cuda_cache = kwargs['clear_cuda_cache'] if 'clear_cuda_cache' in kwargs else None
                 if clear_cuda_cache is not None:
                     clear_cuda_cache()
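The old expression raised `KeyError` whenever the caller did not pass the argument; the new form (equivalent to `kwargs.get('clear_cuda_cache')`) simply skips the cleanup. The callback itself is not shown in this diff; a plausible sketch of what such a callable does (an assumption for illustration, not the project's actual method):

```python
import gc
import torch

def clear_cuda_cache() -> None:
    # Drop unreachable Python objects, then return cached CUDA blocks to the
    # driver so the next generation starts with a cleaner allocator.
    gc.collect()
    if torch.cuda.is_available():
        torch.cuda.empty_cache()
```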

View File

@@ -330,7 +330,7 @@ class StableDiffusionGeneratorPipeline(StableDiffusionPipeline):
             16 * \
             latents.size(dim=2) * latents.size(dim=3) * latents.size(dim=2) * latents.size(dim=3) * \
             bytes_per_element_needed_for_baddbmm_duplication
-        if max_size_required_for_baddbmm > (mem_free * 3.3 / 4.0): # 3.3 / 4.0 is from old Invoke code
+        if max_size_required_for_baddbmm > (mem_free * 3.0 / 4.0): # 3.3 / 4.0 is from old Invoke code
             self.enable_attention_slicing(slice_size='max')
         else:
             self.disable_attention_slicing()
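For scale, here is a hedged back-of-the-envelope calculation for a 512x512 generation (64x64 latents, 2 bytes per element for float16); the constants mirror the snippet above, and the free-memory figure is invented for the example:

```python
# 512x512 image -> 64x64 latents; float16 -> 2 bytes per element.
latent_h, latent_w = 64, 64
bytes_per_element_needed_for_baddbmm_duplication = 2

max_size_required_for_baddbmm = (
    16 * latent_h * latent_w * latent_h * latent_w
    * bytes_per_element_needed_for_baddbmm_duplication
)
print(max_size_required_for_baddbmm / 2**30)   # ~0.5 GiB

# Slicing is enabled only when the estimate exceeds 3/4 of free VRAM,
# i.e. when less than ~0.67 GiB is free at this point.
mem_free = 8 * 2**30                            # pretend 8 GiB are free
print(max_size_required_for_baddbmm > mem_free * 3.0 / 4.0)  # False -> no slicing
```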

View File

@@ -346,6 +346,7 @@ class Embiggen(Generator):
                 newinitimage = torch.from_numpy(newinitimage)
                 newinitimage = 2.0 * newinitimage - 1.0
                 newinitimage = newinitimage.to(self.model.device)
+                clear_cuda_cache = kwargs['clear_cuda_cache'] if 'clear_cuda_cache' in kwargs else None

                 tile_results = gen_img2img.generate(
                     prompt,
@@ -363,6 +364,7 @@ class Embiggen(Generator):
                     init_image = newinitimage,  # notice that init_image is different from init_img
                     mask_image = None,
                     strength = strength,
+                    clear_cuda_cache = clear_cuda_cache
                 )

                 emb_tile_store.append(tile_results[0][0])

View File

@@ -13,8 +13,8 @@ the attributes:

 import os
 import os.path as osp
-from pathlib import Path
 from argparse import Namespace
+from pathlib import Path
 from typing import Union

 Globals = Namespace()
@@ -48,6 +48,9 @@ Globals.internet_available = True
 # Whether to disable xformers
 Globals.disable_xformers = False

+# Low-memory tradeoff for guidance calculations.
+Globals.sequential_guidance = False
+
 # whether we are forcing full precision
 Globals.full_precision = False

View File

@@ -1,4 +1,3 @@
-import math
 from contextlib import contextmanager
 from dataclasses import dataclass
 from math import ceil
@@ -6,13 +5,20 @@ from typing import Callable, Optional, Union, Any, Dict

 import numpy as np
 import torch
 from diffusers.models.cross_attention import AttnProcessor
+from typing_extensions import TypeAlias

+from ldm.invoke.globals import Globals
 from ldm.models.diffusion.cross_attention_control import Arguments, \
     restore_default_cross_attention, override_cross_attention, Context, get_cross_attention_modules, \
     CrossAttentionType, SwapCrossAttnContext
 from ldm.models.diffusion.cross_attention_map_saving import AttentionMapSaver

+ModelForwardCallback: TypeAlias = Union[
+    # x, t, conditioning, Optional[cross-attention kwargs]
+    Callable[[torch.Tensor, torch.Tensor, torch.Tensor, Optional[dict[str, Any]]], torch.Tensor],
+    Callable[[torch.Tensor, torch.Tensor, torch.Tensor], torch.Tensor]
+]
+

 @dataclass(frozen=True)
 class PostprocessingSettings:
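Any callable with either positional shape satisfies this alias. A hypothetical conforming callback, purely for illustration (the UNet call in the comment is an assumption about typical wiring, not code from this repository):

```python
from typing import Any, Optional
import torch

def example_forward(x: torch.Tensor,
                    t: torch.Tensor,
                    conditioning: torch.Tensor,
                    cross_attention_kwargs: Optional[dict[str, Any]] = None
                    ) -> torch.Tensor:
    # A real callback would run the denoising model, e.g.
    #   unet(x, t, encoder_hidden_states=conditioning,
    #        cross_attention_kwargs=cross_attention_kwargs).sample
    return torch.zeros_like(x)
```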
@@ -32,7 +38,7 @@ class InvokeAIDiffuserComponent:
     * Hybrid conditioning (used for inpainting)
     '''
     debug_thresholding = False
-    last_percent_through = 0.0
+    sequential_guidance = False

     @dataclass
     class ExtraConditioningInfo:
@@ -45,8 +51,7 @@ class InvokeAIDiffuserComponent:
         return self.cross_attention_control_args is not None

-    def __init__(self, model, model_forward_callback:
-                 Callable[[torch.Tensor, torch.Tensor, torch.Tensor, Optional[dict[str,Any]]], torch.Tensor],
+    def __init__(self, model, model_forward_callback: ModelForwardCallback,
                  is_running_diffusers: bool=False,
                  ):
         """
@@ -58,7 +63,7 @@ class InvokeAIDiffuserComponent:
         self.is_running_diffusers = is_running_diffusers
         self.model_forward_callback = model_forward_callback
         self.cross_attention_control_context = None
-        self.last_percent_through = 0.0
+        self.sequential_guidance = Globals.sequential_guidance

     @contextmanager
     def custom_attention_context(self,
@@ -146,11 +151,20 @@ class InvokeAIDiffuserComponent:
         wants_hybrid_conditioning = isinstance(conditioning, dict)

         if wants_hybrid_conditioning:
-            unconditioned_next_x, conditioned_next_x = self.apply_hybrid_conditioning(x, sigma, unconditioning, conditioning)
+            unconditioned_next_x, conditioned_next_x = self._apply_hybrid_conditioning(x, sigma, unconditioning,
+                                                                                       conditioning)
         elif wants_cross_attention_control:
-            unconditioned_next_x, conditioned_next_x = self.apply_cross_attention_controlled_conditioning(x, sigma, unconditioning, conditioning, cross_attention_control_types_to_do)
+            unconditioned_next_x, conditioned_next_x = self._apply_cross_attention_controlled_conditioning(x, sigma,
+                                                                                                           unconditioning,
+                                                                                                           conditioning,
+                                                                                                           cross_attention_control_types_to_do)
+        elif self.sequential_guidance:
+            unconditioned_next_x, conditioned_next_x = self._apply_standard_conditioning_sequentially(
+                x, sigma, unconditioning, conditioning)
+
         else:
-            unconditioned_next_x, conditioned_next_x = self.apply_standard_conditioning(x, sigma, unconditioning, conditioning)
+            unconditioned_next_x, conditioned_next_x = self._apply_standard_conditioning(
+                x, sigma, unconditioning, conditioning)

         combined_next_x = self._combine(unconditioned_next_x, conditioned_next_x, unconditional_guidance_scale)
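Whichever branch runs, the two predictions are merged by `_combine`, which is not shown in this hunk; the standard classifier-free guidance formula it presumably implements is sketched below.

```python
def combine(unconditioned_next_x, conditioned_next_x, guidance_scale):
    # Classifier-free guidance: move the prediction away from the
    # unconditioned result toward the conditioned one by guidance_scale.
    # (Sketch of what _combine presumably does, not copied from the source.)
    delta = conditioned_next_x - unconditioned_next_x
    return unconditioned_next_x + guidance_scale * delta
```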
@@ -185,7 +199,7 @@ class InvokeAIDiffuserComponent:
     # methods below are called from do_diffusion_step and should be considered private to this class.

-    def apply_standard_conditioning(self, x, sigma, unconditioning, conditioning):
+    def _apply_standard_conditioning(self, x, sigma, unconditioning, conditioning):
         # fast batched path
         x_twice = torch.cat([x] * 2)
         sigma_twice = torch.cat([sigma] * 2)
@@ -198,7 +212,17 @@ class InvokeAIDiffuserComponent:
         return unconditioned_next_x, conditioned_next_x

-    def apply_hybrid_conditioning(self, x, sigma, unconditioning, conditioning):
+    def _apply_standard_conditioning_sequentially(self, x: torch.Tensor, sigma, unconditioning: torch.Tensor, conditioning: torch.Tensor):
+        # low-memory sequential path
+        unconditioned_next_x = self.model_forward_callback(x, sigma, unconditioning)
+        conditioned_next_x = self.model_forward_callback(x, sigma, conditioning)
+        if conditioned_next_x.device.type == 'mps':
+            # prevent a result filled with zeros. seems to be a torch bug.
+            conditioned_next_x = conditioned_next_x.clone()
+        return unconditioned_next_x, conditioned_next_x
+
+    def _apply_hybrid_conditioning(self, x, sigma, unconditioning, conditioning):
         assert isinstance(conditioning, dict)
         assert isinstance(unconditioning, dict)
         x_twice = torch.cat([x] * 2)
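The tradeoff against the batched path can be seen in a simplified sketch (the `model` call below is a stand-in, not the class's actual forward callback): one pass over a doubled batch is fastest but roughly doubles peak activation memory, while two single-size passes halve that peak at the cost of latency.

```python
import torch

def batched_guidance(model, x, sigma, uncond, cond):
    # One forward pass over a doubled batch: fastest, ~2x peak memory.
    both = model(torch.cat([x] * 2), torch.cat([sigma] * 2),
                 torch.cat([uncond, cond]))
    return both.chunk(2)

def sequential_guidance(model, x, sigma, uncond, cond):
    # Two single-size passes: slower, but each needs roughly half the memory.
    return model(x, sigma, uncond), model(x, sigma, cond)
```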
@@ -216,18 +240,21 @@ class InvokeAIDiffuserComponent:

         return unconditioned_next_x, conditioned_next_x

-    def apply_cross_attention_controlled_conditioning(self,
+    def _apply_cross_attention_controlled_conditioning(self,
                                                       x: torch.Tensor,
                                                       sigma,
                                                       unconditioning,
                                                       conditioning,
                                                       cross_attention_control_types_to_do):
         if self.is_running_diffusers:
-            return self.apply_cross_attention_controlled_conditioning__diffusers(x, sigma, unconditioning, conditioning, cross_attention_control_types_to_do)
+            return self._apply_cross_attention_controlled_conditioning__diffusers(x, sigma, unconditioning,
+                                                                                  conditioning,
+                                                                                  cross_attention_control_types_to_do)
         else:
-            return self.apply_cross_attention_controlled_conditioning__compvis(x, sigma, unconditioning, conditioning, cross_attention_control_types_to_do)
+            return self._apply_cross_attention_controlled_conditioning__compvis(x, sigma, unconditioning, conditioning,
+                                                                                cross_attention_control_types_to_do)

-    def apply_cross_attention_controlled_conditioning__diffusers(self,
+    def _apply_cross_attention_controlled_conditioning__diffusers(self,
                                                                  x: torch.Tensor,
                                                                  sigma,
                                                                  unconditioning,
@@ -250,7 +277,7 @@ class InvokeAIDiffuserComponent:
         return unconditioned_next_x, conditioned_next_x

-    def apply_cross_attention_controlled_conditioning__compvis(self, x:torch.Tensor, sigma, unconditioning, conditioning, cross_attention_control_types_to_do):
+    def _apply_cross_attention_controlled_conditioning__compvis(self, x:torch.Tensor, sigma, unconditioning, conditioning, cross_attention_control_types_to_do):
         # print('pct', percent_through, ': doing cross attention control on', cross_attention_control_types_to_do)
         # slower non-batched path (20% slower on mac MPS)
         # We are only interested in using attention maps for conditioned_next_x, but batching them with generation of