fix merge issues; likely nonfunctional
docs/assets/gallery/board_settings.png (binary, new file, 23 KiB)
docs/assets/gallery/board_tabs.png (binary, new file, 2.7 KiB)
docs/assets/gallery/board_thumbnails.png (binary, new file, 30 KiB)
docs/assets/gallery/gallery.png (binary, new file, 221 KiB)
docs/assets/gallery/image_menu.png (binary, new file, 53 KiB)
docs/assets/gallery/info_button.png (binary, new file, 786 B)
docs/assets/gallery/thumbnail_menu.png (binary, new file, 27 KiB)
docs/assets/gallery/top_controls.png (binary, new file, 3.3 KiB)
docs/features/GALLERY.md (new file, +92 lines)
@@ -0,0 +1,92 @@
---
title: InvokeAI Gallery Panel
---

# :material-web: InvokeAI Gallery Panel

## Quick guided walkthrough of the Gallery Panel's features

The Gallery Panel is a fast way to review, find, and make use of images you've
generated and loaded. The Gallery is divided into Boards. The Uncategorized board is always
present, but you can create your own boards for better organization.



### Board Display and Settings

At the very top of the Gallery Panel are the boards disclosure and settings buttons.



The disclosure button shows the name of the currently selected board and allows you to show and hide the board thumbnails (shown in the image below).



The settings button opens a list of options.



- ***Image Size*** this slider lets you control the size of the image previews (images of three different sizes).
- ***Auto-Switch to New Images*** if you turn this on, whenever a new image is generated, it will automatically be loaded into the current image panel on the Text to Image tab and into the result panel on the [Image to Image](IMG2IMG.md) tab. This will happen invisibly if you are on any other tab when the image is generated.
- ***Auto-Assign Board on Click*** whenever an image is generated or saved, it is always put in a board. The board it goes into is marked with AUTO (image of board marked). Turning on Auto-Assign Board on Click makes whichever board you last selected the destination when you click Invoke. That means you can click Invoke, select a different board, and then click Invoke again, and the two images will be put in two different boards. **It's the board selected when Invoke is clicked that's used, not the board that's selected when the image finishes generating.** Turning this off enables the Auto-Add Board drop-down, which lets you set one specific board to always put generated images into. This also enables and disables the Auto-add to this Board menu item described below.
- ***Always Show Image Size Badge*** this toggles whether to show image sizes for each image preview (show two images, one with sizes shown, one without)

Below these two buttons, you'll see the Search Boards text entry area. You use this to search for specific boards by the name of the board.
Next to it is the Add Board (+) button, which lets you add new boards. Boards can be renamed by clicking on the name of the board under its thumbnail and typing in the new name.

### Board Thumbnail Menu

Each board has a context menu (ctrl+click / right-click).



- ***Auto-add to this Board*** if you've disabled Auto-Assign Board on Click in the board settings, you can use this option to set this board as the destination for new images.
- ***Download Board*** this will add all the images in the board to a zip file and provide a link to it in a notification (image of notification)
- ***Delete Board*** this will delete the board

> [!CAUTION]
> This will delete all the images in the board and the board itself.

### Board Contents

Every board is organized into two tabs, Images and Assets.



Images are the Invoke-generated images that are placed into the board. Assets are images that you upload into Invoke to be used as an [Image Prompt](https://support.invoke.ai/support/solutions/articles/151000159340-using-the-image-prompt-adapter-ip-adapter-) or in the [Image to Image](IMG2IMG.md) tab.

### Image Thumbnail Menu

Every image generated by Invoke has its generation information stored as text inside the image file itself. This can be read directly by selecting the image and clicking on the Info button ![](../assets/gallery/info_button.png) in any of the image result panels.

Each image also has a context menu (ctrl+click / right-click).



The options are (items marked with an * will not work with images that lack generation information):

- ***Open in New Tab*** this will open the image alone in a new browser tab, separate from the Invoke interface.
- ***Download Image*** this will trigger your browser to download the image.
- ***Load Workflow **** this will load any workflow settings into the Workflow tab and automatically open it.
- ***Remix Image **** this will load all of the image's generation information, **excluding its Seed**, into the left-hand control panel.
- ***Use Prompt **** this will load only the image's text prompts into the left-hand control panel.
- ***Use Seed **** this will load only the image's Seed into the left-hand control panel.
- ***Use All **** this will load all of the image's generation information into the left-hand control panel.
- ***Send to Image to Image*** this will put the image into the left-hand panel in the Image to Image tab and automatically open it.
- ***Send to Unified Canvas*** this will **replace whatever is already present** in the Unified Canvas tab with the image and automatically open the tab.
- ***Change Board*** this will open a small window that lets you move the image to a different board. This is the same as dragging the image to that board's thumbnail.
- ***Star Image*** this will add the image to the board's list of starred images that are always kept at the top of the gallery. This is the same as clicking on the star on the top right-hand side of the image that appears when you hover over the image with the mouse.
- ***Delete Image*** this will delete the image from the board

> [!CAUTION]
> This will delete the image entirely from Invoke.

## Summary

This walkthrough only covers the Gallery interface and Boards. Actually generating images is handled by [Prompts](PROMPTS.md), the [Image to Image](IMG2IMG.md) tab, and the [Unified Canvas](UNIFIED_CANVAS.md).

## Acknowledgements

A huge shout-out to the core team working to make the Web GUI a reality,
including [psychedelicious](https://github.com/psychedelicious),
[Kyle0654](https://github.com/Kyle0654) and
[blessedcoolant](https://github.com/blessedcoolant).
[hipsterusername](https://github.com/hipsterusername) was the team's unofficial
cheerleader and added tooltips/docs.
@@ -108,40 +108,6 @@ Can be used with .and():
 Each will give you different results - try them out and see what you prefer!
 
-### Cross-Attention Control ('prompt2prompt')
-
-Sometimes an image you generate is almost right, and you just want to change one
-detail without affecting the rest. You could use a photo editor and inpainting
-to overpaint the area, but that's a pain. Here's where `prompt2prompt` comes in
-handy.
-
-Generate an image with a given prompt, record the seed of the image, and then
-use the `prompt2prompt` syntax to substitute words in the original prompt for
-words in a new prompt. This works for `img2img` as well.
-
-For example, consider the prompt `a cat.swap(dog) playing with a ball in the forest`. Normally, because the words interact with each other when doing a stable diffusion image generation, these two prompts would generate different compositions:
-
-- `a cat playing with a ball in the forest`
-- `a dog playing with a ball in the forest`
-
-| `a cat playing with a ball in the forest` | `a dog playing with a ball in the forest` |
-| --- | --- |
-| img | img |
-
-- For multiple word swaps, use parentheses: `a (fluffy cat).swap(barking dog) playing with a ball in the forest`.
-- To swap a comma, use quotes: `a ("fluffy, grey cat").swap("big, barking dog") playing with a ball in the forest`.
-- Supports options `t_start` and `t_end` (each 0-1) loosely corresponding to [bloc97's](https://github.com/bloc97/CrossAttentionControl) `prompt_edit_tokens_start/_end`, but with the math swapped to make it easier to
-understand intuitively. `t_start` and `t_end` control on which steps cross-attention control should run. With the default values `t_start=0` and `t_end=1`, cross-attention control is active on every step of image generation. Other values can be used to turn cross-attention control off for part of the image generation process.
-- For example, for a 10-step diffusion with the prompt `a cat.swap(dog, t_start=0.3, t_end=1.0) playing with a ball in the forest`, the first 3 steps will be run as `a cat playing with a ball in the forest`, while the last 7 steps will run as `a dog playing with a ball in the forest`, but the pixels that represent `dog` will be locked to the pixels that would have represented `cat` if the `cat` prompt had been used instead.
-- Conversely, for `a cat.swap(dog, t_start=0, t_end=0.7) playing with a ball in the forest`, the first 7 steps will run as `a dog playing with a ball in the forest` with the pixels that represent `dog` locked to the same pixels that would have represented `cat` if the `cat` prompt was being used instead. The final 3 steps will just run `a cat playing with a ball in the forest`.
-
-> For img2img, the step sequence does not start at 0 but instead at `(1.0-strength)` - so if the img2img `strength` is `0.7`, `t_start` and `t_end` must both be greater than `0.3` (`1.0-0.7`) to have any effect.
-
-Prompt2prompt `.swap()` is not compatible with xformers, which will be temporarily disabled when doing a `.swap()` - so you should expect to use more VRAM and run slower than with xformers enabled.
-
-The `prompt2prompt` code is based off
-[bloc97's colab](https://github.com/bloc97/CrossAttentionControl).
-
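A quick illustration of the `t_start`/`t_end` step arithmetic described above, as a minimal sketch. The helper below is not InvokeAI code; the step-to-`t` mapping is an assumption chosen to match the worked 10-step examples.

```python
def active_swap_steps(num_steps: int, t_start: float, t_end: float, strength: float = 1.0) -> list[int]:
    """Rough sketch: which denoising steps have cross-attention control active.

    For img2img the schedule starts at (1.0 - strength) rather than 0, as noted
    above. This mirrors the documented examples, not the actual implementation.
    """
    first_t = 1.0 - strength
    steps = []
    for i in range(num_steps):
        t = first_t + (1.0 - first_t) * (i / num_steps)
        if t_start <= t < t_end:
            steps.append(i)
    return steps


print(active_swap_steps(10, 0.3, 1.0))  # [3..9] - "first 3 steps as cat, last 7 as dog"
print(active_swap_steps(10, 0.0, 0.7))  # [0..6] - "first 7 steps as dog, final 3 as cat"
```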
 ### Escaping parentheses and speech marks
 
 If the model you are using has parentheses () or speech marks "" as part of its
@@ -54,7 +54,7 @@ main sections:
    of buttons at the top lets you modify and manipulate the image in
    various ways.
 
-3. A **gallery** section on the left that contains a history of the images you
+3. A **gallery** section on the right that contains a history of the images you
    have generated. These images are read and written to the directory specified
    in the `INVOKEAI_ROOT/invokeai.yaml` initialization file, usually a directory
    named `outputs` in `INVOKEAI_ROOT`.
@@ -18,12 +18,47 @@ Note that any releases marked as _pre-release_ are in a beta state. You may expe
 
 The Model Manager tab in the UI provides a few ways to install models, including using your already-downloaded models. You'll see a popup directing you there on first startup. For more information, see the [model install docs].
 
+## Missing models after updating to v4
+
+If you find some models are missing after updating to v4, it's likely they weren't correctly registered before the update and didn't get picked up in the migration.
+
+You can use the `Scan Folder` tab in the Model Manager UI to fix this. The models will either be in the old, now-unused `autoimport` folder or in your `models` folder.
+
+- Find and copy your install's old `autoimport` folder path, inside the main install folder.
+- Go to the Model Manager and click `Scan Folder`.
+- Paste the path and scan.
+- IMPORTANT: Uncheck `Inplace install`.
+- Click `Install All` to install all found models, or just install the models you want.
+
+Next, find and copy your install's `models` folder path (this could be your custom models folder path, or the `models` folder inside the main install folder).
+
+Follow the same steps to scan and import the missing models.
+
 ## Slow generation
 
 - Check the [system requirements] to ensure that your system is capable of generating images.
 - Check the `ram` setting in `invokeai.yaml`. This setting tells Invoke how much of your system RAM can be used to cache models. Having it too high or too low can slow things down. That said, it's generally safest not to set this at all and instead let Invoke manage it.
 - Check the `vram` setting in `invokeai.yaml`. This setting tells Invoke how much of your GPU VRAM can be used to cache models. Counter-intuitively, if this setting is too high, Invoke will need to do a lot of shuffling of models as it juggles the VRAM cache and the currently-loaded model. The default value of 0.25 generally works well for GPUs with less than 16GB of VRAM. Even on a 24GB card, the default works well.
 - Check that your generations are happening on your GPU (if you have one). InvokeAI will log what is being used for generation upon startup. If your GPU isn't used, re-install to ensure the correct versions of torch get installed.
+- If you are on Windows, you may have exceeded your GPU's VRAM capacity and are using slower [shared GPU memory](#shared-gpu-memory-windows). There's a guide to opt out of this behaviour in the linked FAQ entry.
+
+## Shared GPU Memory (Windows)
+
+!!! tip "Nvidia GPUs with driver 536.40"
+
+    This only applies to current Nvidia cards with driver 536.40 or later, released in June 2023.
+
+When the GPU doesn't have enough VRAM for a task, Windows is able to allocate some of its CPU RAM to the GPU. This is much slower than VRAM, but it does allow the system to generate when it otherwise might not have enough VRAM.
+
+When shared GPU memory is used, generation slows down dramatically - but at least it doesn't crash.
+
+If you'd like to opt out of this behavior and instead get an error when you exceed your GPU's VRAM, follow [this guide from Nvidia](https://nvidia.custhelp.com/app/answers/detail/a_id/5490).
+
+Here's how to get the python path required in the linked guide:
+
+- Run `invoke.bat`.
+- Select option 2 for developer console.
+- At least one python path will be printed. Copy the path that includes your invoke installation directory (typically the first).
+
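If it's more convenient, the same path can be printed from the developer console itself; this is just an illustrative alternative, not a step from the FAQ:

```python
# Run at the developer console's Python prompt; prints the interpreter path the Nvidia guide asks for.
import sys

print(sys.executable)
```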
 ## Installer cannot find python (Windows)
 
@@ -44,7 +44,7 @@ The installation process is simple, with a few prompts:
 
 - Select the version to install. Unless you have a specific reason to install a specific version, select the default (the latest version).
 - Select location for the install. Be sure you have enough space in this folder for the base application, as described in the [installation requirements].
-- Select a GPU device. If you are unsure, you can let the installer figure it out.
+- Select a GPU device.
 
 !!! info "Slow Installation"
 
|
|||||||
|
|
||||||
## Introduction
|
## Introduction
|
||||||
|
|
||||||
!!! tip "Conda"
|
InvokeAI is distributed as a python package on PyPI, installable with `pip`. There are a few things that are handled by the installer and launcher that you'll need to manage manually, described in this guide.
|
||||||
|
|
||||||
As of InvokeAI v2.3.0 installation using the `conda` package manager is no longer being supported. It will likely still work, but we are not testing this installation method.
|
|
||||||
|
|
||||||
InvokeAI is distributed as a python package on PyPI, installable with `pip`. There are a few things that are handled by the installer that you'll need to manage manually, described in this guide.
|
|
||||||
|
|
||||||
### Requirements
|
### Requirements
|
||||||
|
|
||||||
@ -40,11 +36,11 @@ Before you start, go through the [installation requirements].
|
|||||||
|
|
||||||
1. Enter the root (invokeai) directory and create a virtual Python environment within it named `.venv`.
|
1. Enter the root (invokeai) directory and create a virtual Python environment within it named `.venv`.
|
||||||
|
|
||||||
!!! info "Virtual Environment Location"
|
!!! warning "Virtual Environment Location"
|
||||||
|
|
||||||
While you may create the virtual environment anywhere in the file system, we recommend that you create it within the root directory as shown here. This allows the application to automatically detect its data directories.
|
While you may create the virtual environment anywhere in the file system, we recommend that you create it within the root directory as shown here. This allows the application to automatically detect its data directories.
|
||||||
|
|
||||||
If you choose a different location for the venv, then you must set the `INVOKEAI_ROOT` environment variable or pass the directory using the `--root` CLI arg.
|
If you choose a different location for the venv, then you _must_ set the `INVOKEAI_ROOT` environment variable or specify the root directory using the `--root` CLI arg.
|
||||||
|
|
||||||
```terminal
|
```terminal
|
||||||
cd $INVOKEAI_ROOT
|
cd $INVOKEAI_ROOT
|
||||||
@@ -81,31 +77,23 @@ Before you start, go through the [installation requirements].
     python3 -m pip install --upgrade pip
     ```
 
-1. Install the InvokeAI Package. The `--extra-index-url` option is used to select the correct `torch` backend:
-
-    === "CUDA (NVidia)"
-
-        ```bash
-        pip install "InvokeAI[xformers]" --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu121
-        ```
-
-    === "ROCm (AMD)"
-
-        ```bash
-        pip install InvokeAI --use-pep517 --extra-index-url https://download.pytorch.org/whl/rocm5.6
-        ```
-
-    === "CPU (Intel Macs & non-GPU systems)"
-
-        ```bash
-        pip install InvokeAI --use-pep517 --extra-index-url https://download.pytorch.org/whl/cpu
-        ```
-
-    === "MPS (Apple Silicon)"
-
-        ```bash
-        pip install InvokeAI --use-pep517
-        ```
-
+1. Install the InvokeAI Package. The base command is `pip install InvokeAI --use-pep517`, but you may need to change this depending on your system and the desired features.
+
+    - You may need to provide an [extra index URL]. Select your platform configuration using [this tool on the PyTorch website]. Copy the `--extra-index-url` string from this and append it to your install command.
+
+        !!! example "Install with an extra index URL"
+
+            ```bash
+            pip install InvokeAI --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu121
+            ```
+
+    - If you have a CUDA GPU and want to install with `xformers`, you need to add an option to the package name. Note that `xformers` is not necessary. PyTorch includes an implementation of the SDP attention algorithm with the same performance.
+
+        !!! example "Install with `xformers`"
+
+            ```bash
+            pip install "InvokeAI[xformers]" --use-pep517
+            ```
+
 1. Deactivate and reactivate your runtime directory so that the invokeai-specific commands become available in the environment:
 
|
|||||||
|
|
||||||
Run `invokeai-web` to start the UI. You must activate the virtual environment before running the app.
|
Run `invokeai-web` to start the UI. You must activate the virtual environment before running the app.
|
||||||
|
|
||||||
If the virtual environment you selected is NOT inside `INVOKEAI_ROOT`, then you must specify the path to the root directory by adding
|
!!! warning
|
||||||
`--root_dir \path\to\invokeai`.
|
|
||||||
|
|
||||||
!!! tip
|
If the virtual environment is _not_ inside the root directory, then you _must_ specify the path to the root directory with `--root_dir \path\to\invokeai` or the `INVOKEAI_ROOT` environment variable.
|
||||||
|
|
||||||
You can permanently set the location of the runtime directory
|
|
||||||
by setting the environment variable `INVOKEAI_ROOT` to the
|
|
||||||
path of the directory. As mentioned previously, this is
|
|
||||||
recommended if your virtual environment is located outside of
|
|
||||||
your runtime directory.
|
|
||||||
|
|
||||||
## Unsupported Conda Install
|
|
||||||
|
|
||||||
Congratulations, you found the "secret" Conda installation instructions. If you really **really** want to use Conda with InvokeAI, you can do so using this unsupported recipe:
|
|
||||||
|
|
||||||
```sh
|
|
||||||
mkdir ~/invokeai
|
|
||||||
conda create -n invokeai python=3.11
|
|
||||||
conda activate invokeai
|
|
||||||
# Adjust this as described above for the appropriate torch backend
|
|
||||||
pip install InvokeAI[xformers] --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu121
|
|
||||||
invokeai-web --root ~/invokeai
|
|
||||||
```
|
|
||||||
|
|
||||||
The `pip install` command shown in this recipe is for Linux/Windows
|
|
||||||
systems with an NVIDIA GPU. See step (6) above for the command to use
|
|
||||||
with other platforms/GPU combinations. If you don't wish to pass the
|
|
||||||
`--root` argument to `invokeai` with each launch, you may set the
|
|
||||||
environment variable `INVOKEAI_ROOT` to point to the installation directory.
|
|
||||||
|
|
||||||
Note that if you run into problems with the Conda installation, the InvokeAI
|
|
||||||
staff will **not** be able to help you out. Caveat Emptor!
|
|
||||||
|
|
||||||
[installation requirements]: INSTALL_REQUIREMENTS.md
|
|
||||||
|
@@ -23,6 +23,7 @@ If you have an interest in how InvokeAI works, or you would like to add features
 
 1. [Fork and clone] the [InvokeAI repo].
 1. Follow the [manual installation] docs to create a new virtual environment for the development install.
+    - Create a new folder outside the repo root for the installation and create the venv inside that folder.
     - When installing the InvokeAI package, add `-e` to the command so you get an [editable install].
 1. Install the [frontend dev toolchain] and do a production build of the UI as described.
 1. You can now run the app as described in the [manual installation] docs.
@@ -32,5 +33,5 @@ As described in the [frontend dev toolchain] docs, you can run the UI using a de
 [Fork and clone]: https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/fork-a-repo
 [InvokeAI repo]: https://github.com/invoke-ai/InvokeAI
 [frontend dev toolchain]: ../contributing/frontend/OVERVIEW.md
-[manual installation]: installation/020_INSTALL_MANUAL.md
+[manual installation]: ./020_INSTALL_MANUAL.md
 [editable install]: https://pip.pypa.io/en/latest/cli/pip_install/#cmdoption-e
@@ -3,6 +3,7 @@
 InvokeAI installer script
 """
 
+import locale
 import os
 import platform
 import re
@@ -316,7 +317,9 @@ def upgrade_pip(venv_path: Path) -> str | None:
     python = str(venv_path.expanduser().resolve() / python)
 
     try:
-        result = subprocess.check_output([python, "-m", "pip", "install", "--upgrade", "pip"]).decode()
+        result = subprocess.check_output([python, "-m", "pip", "install", "--upgrade", "pip"]).decode(
+            encoding=locale.getpreferredencoding()
+        )
     except subprocess.CalledProcessError as e:
         print(e)
         result = None
@@ -404,22 +407,29 @@ def get_torch_source() -> Tuple[str | None, str | None]:
     # device can be one of: "cuda", "rocm", "cpu", "cuda_and_dml, autodetect"
     device = select_gpu()
 
+    # The correct extra index URLs for torch are inconsistent, see https://pytorch.org/get-started/locally/#start-locally
+
     url = None
-    optional_modules = "[onnx]"
+    optional_modules: str | None = None
     if OS == "Linux":
         if device.value == "rocm":
             url = "https://download.pytorch.org/whl/rocm5.6"
         elif device.value == "cpu":
             url = "https://download.pytorch.org/whl/cpu"
+        elif device.value == "cuda":
+            # CUDA uses the default PyPi index
+            optional_modules = "[xformers,onnx-cuda]"
     elif OS == "Windows":
         if device.value == "cuda":
             url = "https://download.pytorch.org/whl/cu121"
             optional_modules = "[xformers,onnx-cuda]"
-        if device.value == "cuda_and_dml":
-            url = "https://download.pytorch.org/whl/cu121"
-            optional_modules = "[xformers,onnx-directml]"
+        elif device.value == "cpu":
+            # CPU uses the default PyPi index, no optional modules
+            pass
+    elif OS == "Darwin":
+        # macOS uses the default PyPi index, no optional modules
+        pass
 
-    # in all other cases, Torch wheels should be coming from PyPi as of Torch 1.13
+    # Fall back to defaults
 
     return (url, optional_modules)
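For context on how the `(url, optional_modules)` tuple might be consumed, here is a rough sketch of turning it into pip arguments. The helper name and flags below are assumptions that mirror the install docs elsewhere in this PR, not the installer's real call site:

```python
def build_pip_args(url: str | None, optional_modules: str | None) -> list[str]:
    # e.g. optional_modules "[xformers,onnx-cuda]" -> package "InvokeAI[xformers,onnx-cuda]"
    package = f"InvokeAI{optional_modules or ''}"
    args = ["pip", "install", package, "--use-pep517"]
    if url is not None:
        # e.g. the rocm5.6 or cu121 wheel index returned by get_torch_source()
        args += ["--extra-index-url", url]
    return args


print(build_pip_args("https://download.pytorch.org/whl/cu121", "[xformers,onnx-cuda]"))
```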
@@ -207,10 +207,8 @@ def dest_path(dest: Optional[str | Path] = None) -> Path | None:
 
 class GpuType(Enum):
     CUDA = "cuda"
-    CUDA_AND_DML = "cuda_and_dml"
     ROCM = "rocm"
     CPU = "cpu"
-    AUTODETECT = "autodetect"
 
 
 def select_gpu() -> GpuType:
@@ -226,10 +224,6 @@ def select_gpu() -> GpuType:
         "an [gold1 b]NVIDIA[/] GPU (using CUDA™)",
         GpuType.CUDA,
     )
-    nvidia_with_dml = (
-        "an [gold1 b]NVIDIA[/] GPU (using CUDA™, and DirectML™ for ONNX) -- ALPHA",
-        GpuType.CUDA_AND_DML,
-    )
     amd = (
         "an [gold1 b]AMD[/] GPU (using ROCm™)",
         GpuType.ROCM,
@@ -238,27 +232,19 @@ def select_gpu() -> GpuType:
         "Do not install any GPU support, use CPU for generation (slow)",
         GpuType.CPU,
     )
-    autodetect = (
-        "I'm not sure what to choose",
-        GpuType.AUTODETECT,
-    )
 
     options = []
     if OS == "Windows":
-        options = [nvidia, nvidia_with_dml, cpu]
+        options = [nvidia, cpu]
     if OS == "Linux":
         options = [nvidia, amd, cpu]
     elif OS == "Darwin":
         options = [cpu]
-        # future CoreML?
 
     if len(options) == 1:
         print(f'Your platform [gold1]{OS}-{ARCH}[/] only supports the "{options[0][1]}" driver. Proceeding with that.')
         return options[0][1]
 
-    # "I don't know" is always added the last option
-    options.append(autodetect)  # type: ignore
-
     options = {str(i): opt for i, opt in enumerate(options, 1)}
 
     console.rule(":space_invader: GPU (Graphics Card) selection :space_invader:")
@@ -292,11 +278,6 @@ def select_gpu() -> GpuType:
         ),
     )
 
-    if options[choice][1] is GpuType.AUTODETECT:
-        console.print(
-            "No problem. We will install CUDA support first :crossed_fingers: If Invoke does not detect a GPU, please re-run the installer and select one of the other GPU types."
-        )
-
     return options[choice][1]
 
 
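A small, self-contained sketch of the menu wiring after this change; only the enum values and the shape of the `options` dict come from the code above, the rest is illustrative:

```python
from enum import Enum


class GpuType(Enum):
    CUDA = "cuda"
    ROCM = "rocm"
    CPU = "cpu"


nvidia = ("an NVIDIA GPU (using CUDA)", GpuType.CUDA)
cpu = ("Do not install any GPU support, use CPU for generation (slow)", GpuType.CPU)

# On Windows the user now chooses between CUDA and CPU only; keys are the printed menu numbers.
options = {str(i): opt for i, opt in enumerate([nvidia, cpu], 1)}
choice = "1"  # pretend the user typed "1"
print(options[choice][1])  # GpuType.CUDA
```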
@@ -12,7 +12,7 @@ from pydantic import BaseModel, Field
 
 from invokeai.app.invocations.upscale import ESRGAN_MODELS
 from invokeai.app.services.invocation_cache.invocation_cache_common import InvocationCacheStatus
-from invokeai.backend.image_util.patchmatch import PatchMatch
+from invokeai.backend.image_util.infill_methods.patchmatch import PatchMatch
 from invokeai.backend.image_util.safety_checker import SafetyChecker
 from invokeai.backend.util.logging import logging
 from invokeai.version import __version__
@@ -100,7 +100,7 @@ async def get_app_deps() -> AppDependencyVersions:
 
 @app_router.get("/config", operation_id="get_config", status_code=200, response_model=AppConfig)
 async def get_config() -> AppConfig:
-    infill_methods = ["tile", "lama", "cv2"]
+    infill_methods = ["tile", "lama", "cv2", "color"]  # TODO: add mosaic back
     if PatchMatch.patchmatch_available():
         infill_methods.append("patchmatch")
 
@@ -219,28 +219,13 @@ async def scan_for_models(
         non_core_model_paths = [p for p in found_model_paths if not p.is_relative_to(core_models_path)]
 
         installed_models = ApiDependencies.invoker.services.model_manager.store.search_by_attr()
-        resolved_installed_model_paths: list[str] = []
-        installed_model_sources: list[str] = []
-
-        # This call lists all installed models.
-        for model in installed_models:
-            path = pathlib.Path(model.path)
-            # If the model has a source, we need to add it to the list of installed sources.
-            if model.source:
-                installed_model_sources.append(model.source)
-            # If the path is not absolute, that means it is in the app models directory, and we need to join it with
-            # the models path before resolving.
-            if not path.is_absolute():
-                resolved_installed_model_paths.append(str(pathlib.Path(models_path, path).resolve()))
-                continue
-            resolved_installed_model_paths.append(str(path.resolve()))
-
         scan_results: list[FoundModel] = []
 
-        # Check if the model is installed by comparing the resolved paths, appending to the scan result.
+        # Check if the model is installed by comparing paths, appending to the scan result.
         for p in non_core_model_paths:
             path = str(p)
-            is_installed = path in resolved_installed_model_paths or path in installed_model_sources
+            is_installed = any(str(models_path / m.path) == path for m in installed_models)
             found_model = FoundModel(path=path, is_installed=is_installed)
             scan_results.append(found_model)
     except Exception as e:
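The rewritten `is_installed` check assumes installed model paths are stored relative to the app's models directory, so joining them with `models_path` yields strings comparable to the scanned absolute paths. A rough illustration with made-up paths:

```python
from pathlib import Path

# Hypothetical values, for illustration only.
models_path = Path("/data/invokeai/models")
installed_relative = [Path("sd-1/main/model-a"), Path("sdxl/main/model-b")]

scanned = "/data/invokeai/models/sd-1/main/model-a"
is_installed = any(str(models_path / p) == scanned for p in installed_relative)
print(is_installed)  # True
```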
@@ -28,7 +28,7 @@ from invokeai.app.api.no_cache_staticfiles import NoCacheStaticFiles
 from invokeai.app.invocations.model import ModelIdentifierField
 from invokeai.app.services.config.config_default import get_config
 from invokeai.app.services.session_processor.session_processor_common import ProgressImage
-from invokeai.backend.util.devices import get_torch_device_name
+from invokeai.backend.util.devices import TorchDevice
 
 from ..backend.util.logging import InvokeAILogger
 from .api.dependencies import ApiDependencies
@@ -63,7 +63,7 @@ logger = InvokeAILogger.get_logger(config=app_config)
 mimetypes.add_type("application/javascript", ".js")
 mimetypes.add_type("text/css", ".css")
 
-torch_device_name = get_torch_device_name()
+torch_device_name = TorchDevice.get_torch_device_name()
 logger.info(f"Using torch device: {torch_device_name}")
 
 
@@ -5,7 +5,15 @@ from compel import Compel, ReturnedEmbeddingsType
 from compel.prompt_parser import Blend, Conjunction, CrossAttentionControlSubstitute, FlattenedPrompt, Fragment
 from transformers import CLIPTextModel, CLIPTextModelWithProjection, CLIPTokenizer
 
-from invokeai.app.invocations.fields import FieldDescriptions, Input, InputField, OutputField, UIComponent
+from invokeai.app.invocations.fields import (
+    ConditioningField,
+    FieldDescriptions,
+    Input,
+    InputField,
+    OutputField,
+    TensorField,
+    UIComponent,
+)
 from invokeai.app.invocations.primitives import ConditioningOutput
 from invokeai.app.services.shared.invocation_context import InvocationContext
 from invokeai.app.util.ti_utils import generate_ti_list
@@ -14,10 +22,9 @@ from invokeai.backend.model_patcher import ModelPatcher
 from invokeai.backend.stable_diffusion.diffusion.conditioning_data import (
     BasicConditioningInfo,
     ConditioningFieldData,
-    ExtraConditioningInfo,
     SDXLConditioningInfo,
 )
-from invokeai.backend.util.devices import torch_dtype
+from invokeai.backend.util.devices import TorchDevice
 
 from .baseinvocation import BaseInvocation, BaseInvocationOutput, invocation, invocation_output
 from .model import CLIPField
@@ -36,7 +43,7 @@ from .model import CLIPField
     title="Prompt",
     tags=["prompt", "compel"],
     category="conditioning",
-    version="1.1.1",
+    version="1.2.0",
 )
 class CompelInvocation(BaseInvocation):
     """Parse prompt using compel package to conditioning."""
@@ -51,6 +58,9 @@ class CompelInvocation(BaseInvocation):
         description=FieldDescriptions.clip,
         input=Input.Connection,
     )
+    mask: Optional[TensorField] = InputField(
+        default=None, description="A mask defining the region that this conditioning prompt applies to."
+    )
 
     @torch.no_grad()
     def invoke(self, context: InvocationContext) -> ConditioningOutput:
@@ -70,52 +80,44 @@ class CompelInvocation(BaseInvocation):
 
         ti_list = generate_ti_list(self.prompt, text_encoder_info.config.base, context)
 
-        with text_encoder_info as text_encoder:
-            with (
-                ModelPatcher.apply_ti(tokenizer_model, text_encoder, ti_list) as (
-                    tokenizer,
-                    ti_manager,
-                ),
+        with (
+            ModelPatcher.apply_ti(tokenizer_model, text_encoder_model, ti_list) as (
+                tokenizer,
+                ti_manager,
+            ),
+            text_encoder_info as text_encoder,
             # Apply the LoRA after text_encoder has been moved to its target device for faster patching.
             ModelPatcher.apply_lora_text_encoder(text_encoder, _lora_loader()),
             # Apply CLIP Skip after LoRA to prevent LoRA application from failing on skipped layers.
-            ModelPatcher.apply_clip_skip(text_encoder, self.clip.skipped_layers),
+            ModelPatcher.apply_clip_skip(text_encoder_model, self.clip.skipped_layers),
         ):
             assert isinstance(text_encoder, CLIPTextModel)
             compel = Compel(
                 tokenizer=tokenizer,
                 text_encoder=text_encoder,
                 textual_inversion_manager=ti_manager,
-                dtype_for_device_getter=torch_dtype,
+                dtype_for_device_getter=TorchDevice.choose_torch_dtype,
                 truncate_long_prompts=False,
             )
 
             conjunction = Compel.parse_prompt_string(self.prompt)
 
             if context.config.get().log_tokenization:
                 log_tokenization_for_conjunction(conjunction, tokenizer)
 
-            c, options = compel.build_conditioning_tensor_for_conjunction(conjunction)
-
-            ec = ExtraConditioningInfo(
-                tokens_count_including_eos_bos=get_max_token_count(tokenizer, conjunction),
-                cross_attention_control_args=options.get("cross_attention_control", None),
-            )
+            c, _options = compel.build_conditioning_tensor_for_conjunction(conjunction)
 
             c = c.detach().to("cpu")
 
-            conditioning_data = ConditioningFieldData(
-                conditionings=[
-                    BasicConditioningInfo(
-                        embeds=c,
-                        extra_conditioning=ec,
-                    )
-                ]
-            )
+            conditioning_data = ConditioningFieldData(conditionings=[BasicConditioningInfo(embeds=c)])
 
         conditioning_name = context.conditioning.save(conditioning_data)
-
-        return ConditioningOutput.build(conditioning_name)
+        return ConditioningOutput(
+            conditioning=ConditioningField(
+                conditioning_name=conditioning_name,
+                mask=self.mask,
+            )
+        )
 
 
 class SDXLPromptInvocationBase:
@@ -129,7 +131,7 @@ class SDXLPromptInvocationBase:
         get_pooled: bool,
         lora_prefix: str,
         zero_on_empty: bool,
-    ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[ExtraConditioningInfo]]:
+    ) -> Tuple[torch.Tensor, Optional[torch.Tensor]]:
         tokenizer_info = context.models.load(clip_field.tokenizer)
         tokenizer_model = tokenizer_info.model
         assert isinstance(tokenizer_model, CLIPTokenizer)
@@ -155,7 +157,7 @@ class SDXLPromptInvocationBase:
                 )
             else:
                 c_pooled = None
-            return c, c_pooled, None
+            return c, c_pooled
 
         def _lora_loader() -> Iterator[Tuple[LoRAModelRaw, float]]:
             for lora in clip_field.loras:
@@ -170,28 +172,28 @@ class SDXLPromptInvocationBase:
 
         ti_list = generate_ti_list(prompt, text_encoder_info.config.base, context)
 
-        with text_encoder_info as text_encoder:
-            with (
-                ModelPatcher.apply_ti(tokenizer_model, text_encoder, ti_list) as (
-                    tokenizer,
-                    ti_manager,
-                ),
+        with (
+            ModelPatcher.apply_ti(tokenizer_model, text_encoder_model, ti_list) as (
+                tokenizer,
+                ti_manager,
+            ),
+            text_encoder_info as text_encoder,
             # Apply the LoRA after text_encoder has been moved to its target device for faster patching.
             ModelPatcher.apply_lora(text_encoder, _lora_loader(), lora_prefix),
             # Apply CLIP Skip after LoRA to prevent LoRA application from failing on skipped layers.
-            ModelPatcher.apply_clip_skip(text_encoder, clip_field.skipped_layers),
+            ModelPatcher.apply_clip_skip(text_encoder_model, clip_field.skipped_layers),
         ):
             assert isinstance(text_encoder, (CLIPTextModel, CLIPTextModelWithProjection))
             text_encoder = cast(CLIPTextModel, text_encoder)
             compel = Compel(
                 tokenizer=tokenizer,
                 text_encoder=text_encoder,
                 textual_inversion_manager=ti_manager,
-                dtype_for_device_getter=torch_dtype,
+                dtype_for_device_getter=TorchDevice.choose_torch_dtype,
                 truncate_long_prompts=False,  # TODO:
                 returned_embeddings_type=ReturnedEmbeddingsType.PENULTIMATE_HIDDEN_STATES_NON_NORMALIZED,  # TODO: clip skip
                 requires_pooled=get_pooled,
             )
 
             conjunction = Compel.parse_prompt_string(prompt)
 
|
|||||||
# TODO: better logging for and syntax
|
# TODO: better logging for and syntax
|
||||||
log_tokenization_for_conjunction(conjunction, tokenizer)
|
log_tokenization_for_conjunction(conjunction, tokenizer)
|
||||||
|
|
||||||
# TODO: ask for optimizations? to not run text_encoder twice
|
# TODO: ask for optimizations? to not run text_encoder twice
|
||||||
c, options = compel.build_conditioning_tensor_for_conjunction(conjunction)
|
c, _options = compel.build_conditioning_tensor_for_conjunction(conjunction)
|
||||||
if get_pooled:
|
if get_pooled:
|
||||||
c_pooled = compel.conditioning_provider.get_pooled_embeddings([prompt])
|
c_pooled = compel.conditioning_provider.get_pooled_embeddings([prompt])
|
||||||
else:
|
else:
|
||||||
c_pooled = None
|
c_pooled = None
|
||||||
|
|
||||||
ec = ExtraConditioningInfo(
|
del tokenizer
|
||||||
tokens_count_including_eos_bos=get_max_token_count(tokenizer, conjunction),
|
del text_encoder
|
||||||
cross_attention_control_args=options.get("cross_attention_control", None),
|
del tokenizer_info
|
||||||
)
|
del text_encoder_info
|
||||||
|
|
||||||
del tokenizer
|
c = c.detach().to("cpu")
|
||||||
del text_encoder
|
if c_pooled is not None:
|
||||||
del tokenizer_info
|
c_pooled = c_pooled.detach().to("cpu")
|
||||||
del text_encoder_info
|
|
||||||
|
|
||||||
c = c.detach().to("cpu")
|
return c, c_pooled
|
||||||
if c_pooled is not None:
|
|
||||||
c_pooled = c_pooled.detach().to("cpu")
|
|
||||||
|
|
||||||
return c, c_pooled, ec
|
|
||||||
|
|
||||||
|
|
||||||
@invocation(
|
@invocation(
|
||||||
@@ -228,7 +225,7 @@ class SDXLPromptInvocationBase:
     title="SDXL Prompt",
     tags=["sdxl", "compel", "prompt"],
     category="conditioning",
-    version="1.1.1",
+    version="1.2.0",
 )
 class SDXLCompelPromptInvocation(BaseInvocation, SDXLPromptInvocationBase):
     """Parse prompt using compel package to conditioning."""
@@ -251,20 +248,19 @@ class SDXLCompelPromptInvocation(BaseInvocation, SDXLPromptInvocationBase):
     target_height: int = InputField(default=1024, description="")
     clip: CLIPField = InputField(description=FieldDescriptions.clip, input=Input.Connection, title="CLIP 1")
     clip2: CLIPField = InputField(description=FieldDescriptions.clip, input=Input.Connection, title="CLIP 2")
+    mask: Optional[TensorField] = InputField(
+        default=None, description="A mask defining the region that this conditioning prompt applies to."
+    )
 
     @torch.no_grad()
     def invoke(self, context: InvocationContext) -> ConditioningOutput:
-        c1, c1_pooled, ec1 = self.run_clip_compel(
-            context, self.clip, self.prompt, False, "lora_te1_", zero_on_empty=True
-        )
+        c1, c1_pooled = self.run_clip_compel(context, self.clip, self.prompt, False, "lora_te1_", zero_on_empty=True)
         if self.style.strip() == "":
-            c2, c2_pooled, ec2 = self.run_clip_compel(
+            c2, c2_pooled = self.run_clip_compel(
                 context, self.clip2, self.prompt, True, "lora_te2_", zero_on_empty=True
             )
         else:
-            c2, c2_pooled, ec2 = self.run_clip_compel(
-                context, self.clip2, self.style, True, "lora_te2_", zero_on_empty=True
-            )
+            c2, c2_pooled = self.run_clip_compel(context, self.clip2, self.style, True, "lora_te2_", zero_on_empty=True)
 
         original_size = (self.original_height, self.original_width)
         crop_coords = (self.crop_top, self.crop_left)
@@ -303,17 +299,19 @@ class SDXLCompelPromptInvocation(BaseInvocation, SDXLPromptInvocationBase):
         conditioning_data = ConditioningFieldData(
             conditionings=[
                 SDXLConditioningInfo(
-                    embeds=torch.cat([c1, c2], dim=-1),
-                    pooled_embeds=c2_pooled,
-                    add_time_ids=add_time_ids,
-                    extra_conditioning=ec1,
+                    embeds=torch.cat([c1, c2], dim=-1), pooled_embeds=c2_pooled, add_time_ids=add_time_ids
                 )
             ]
         )
 
         conditioning_name = context.conditioning.save(conditioning_data)
 
-        return ConditioningOutput.build(conditioning_name)
+        return ConditioningOutput(
+            conditioning=ConditioningField(
+                conditioning_name=conditioning_name,
+                mask=self.mask,
+            )
+        )
 
 
 @invocation(
@@ -341,7 +339,7 @@ class SDXLRefinerCompelPromptInvocation(BaseInvocation, SDXLPromptInvocationBase
     @torch.no_grad()
     def invoke(self, context: InvocationContext) -> ConditioningOutput:
         # TODO: if there will appear lora for refiner - write proper prefix
-        c2, c2_pooled, ec2 = self.run_clip_compel(context, self.clip2, self.style, True, "<NONE>", zero_on_empty=False)
+        c2, c2_pooled = self.run_clip_compel(context, self.clip2, self.style, True, "<NONE>", zero_on_empty=False)
 
         original_size = (self.original_height, self.original_width)
         crop_coords = (self.crop_top, self.crop_left)
@@ -350,14 +348,7 @@ class SDXLRefinerCompelPromptInvocation(BaseInvocation, SDXLPromptInvocationBase
 
         assert c2_pooled is not None
         conditioning_data = ConditioningFieldData(
-            conditionings=[
-                SDXLConditioningInfo(
-                    embeds=c2,
-                    pooled_embeds=c2_pooled,
-                    add_time_ids=add_time_ids,
-                    extra_conditioning=ec2,  # or None
-                )
-            ]
+            conditionings=[SDXLConditioningInfo(embeds=c2, pooled_embeds=c2_pooled, add_time_ids=add_time_ids)]
         )
 
         conditioning_name = context.conditioning.save(conditioning_data)
@@ -3,6 +3,7 @@ Invoke-managed custom node loader. See README.md for more information.
 """
 
 import sys
+import traceback
 from importlib.util import module_from_spec, spec_from_file_location
 from pathlib import Path
 
@@ -41,11 +42,15 @@ for d in Path(__file__).parent.iterdir():
 
     logger.info(f"Loading node pack {module_name}")
 
-    module = module_from_spec(spec)
-    sys.modules[spec.name] = module
-    spec.loader.exec_module(module)
-
-    loaded_count += 1
+    try:
+        module = module_from_spec(spec)
+        sys.modules[spec.name] = module
+        spec.loader.exec_module(module)
+
+        loaded_count += 1
+    except Exception:
+        full_error = traceback.format_exc()
+        logger.error(f"Failed to load node pack {module_name}:\n{full_error}")
 
     del init, module_name
 
@@ -203,6 +203,12 @@ class DenoiseMaskField(BaseModel):
 gradient: bool = Field(default=False, description="Used for gradient inpainting")


+class TensorField(BaseModel):
+"""A tensor primitive field."""
+
+tensor_name: str = Field(description="The name of a tensor.")
+
+
 class LatentsField(BaseModel):
 """A latents tensor primitive field"""

@@ -226,7 +232,11 @@ class ConditioningField(BaseModel):
 """A conditioning tensor primitive value"""

 conditioning_name: str = Field(description="The name of conditioning tensor")
-# endregion
+mask: Optional[TensorField] = Field(
+default=None,
+description="The mask associated with this conditioning tensor. Excluded regions should be set to False, "
+"included regions should be set to True.",
+)


 class MetadataField(RootModel[dict[str, Any]]):
@@ -1,154 +1,91 @@
-# Copyright (c) 2022 Kyle Schouviller (https://github.com/kyle0654) and the InvokeAI Team
+from abc import abstractmethod
+from typing import Literal, get_args

-import math
-from typing import Literal, Optional, get_args
+from PIL import Image

-import numpy as np
-from PIL import Image, ImageOps

 from invokeai.app.invocations.fields import ColorField, ImageField
 from invokeai.app.invocations.primitives import ImageOutput
 from invokeai.app.services.shared.invocation_context import InvocationContext
-from invokeai.app.util.download_with_progress import download_with_progress_bar
 from invokeai.app.util.misc import SEED_MAX
-from invokeai.backend.image_util.cv2_inpaint import cv2_inpaint
-from invokeai.backend.image_util.lama import LaMA
-from invokeai.backend.image_util.patchmatch import PatchMatch
+from invokeai.backend.image_util.infill_methods.cv2_inpaint import cv2_inpaint
+from invokeai.backend.image_util.infill_methods.lama import LaMA
+from invokeai.backend.image_util.infill_methods.mosaic import infill_mosaic
+from invokeai.backend.image_util.infill_methods.patchmatch import PatchMatch, infill_patchmatch
+from invokeai.backend.image_util.infill_methods.tile import infill_tile
+from invokeai.backend.util.logging import InvokeAILogger

 from .baseinvocation import BaseInvocation, invocation
 from .fields import InputField, WithBoard, WithMetadata
 from .image import PIL_RESAMPLING_MAP, PIL_RESAMPLING_MODES

+logger = InvokeAILogger.get_logger()

-def infill_methods() -> list[str]:
-methods = ["tile", "solid", "lama", "cv2"]
+def get_infill_methods():
+methods = Literal["tile", "color", "lama", "cv2"] # TODO: add mosaic back
 if PatchMatch.patchmatch_available():
-methods.insert(0, "patchmatch")
+methods = Literal["patchmatch", "tile", "color", "lama", "cv2"] # TODO: add mosaic back
 return methods


-INFILL_METHODS = Literal[tuple(infill_methods())]
+INFILL_METHODS = get_infill_methods()
 DEFAULT_INFILL_METHOD = "patchmatch" if "patchmatch" in get_args(INFILL_METHODS) else "tile"


-def infill_lama(im: Image.Image) -> Image.Image:
-lama = LaMA()
-return lama(im)
+class InfillImageProcessorInvocation(BaseInvocation, WithMetadata, WithBoard):
+"""Base class for invocations that preprocess images for Infilling"""

+image: ImageField = InputField(description="The image to process")

-def infill_patchmatch(im: Image.Image) -> Image.Image:
-if im.mode != "RGBA":
-return im
+@abstractmethod
+def infill(self, image: Image.Image) -> Image.Image:
+"""Infill the image with the specified method"""
+pass

-# Skip patchmatch if patchmatch isn't available
-if not PatchMatch.patchmatch_available():
-return im
+def load_image(self, context: InvocationContext) -> tuple[Image.Image, bool]:
+"""Process the image to have an alpha channel before being infilled"""
+image = context.images.get_pil(self.image.image_name)
+has_alpha = True if image.mode == "RGBA" else False
+return image, has_alpha

-# Patchmatch (note, we may want to expose patch_size? Increasing it significantly impacts performance though)
-im_patched_np = PatchMatch.inpaint(im.convert("RGB"), ImageOps.invert(im.split()[-1]), patch_size=3)
-im_patched = Image.fromarray(im_patched_np, mode="RGB")
-return im_patched
+def invoke(self, context: InvocationContext) -> ImageOutput:
+# Retrieve and process image to be infilled
+input_image, has_alpha = self.load_image(context)

+# If the input image has no alpha channel, return it
+if has_alpha is False:
+return ImageOutput.build(context.images.get_dto(self.image.image_name))

-def infill_cv2(im: Image.Image) -> Image.Image:
-return cv2_inpaint(im)
+# Perform Infill action
+infilled_image = self.infill(input_image)

+# Create ImageDTO for Infilled Image
+infilled_image_dto = context.images.save(image=infilled_image)

-def get_tile_images(image: np.ndarray, width=8, height=8):
-_nrows, _ncols, depth = image.shape
-_strides = image.strides
+# Return Infilled Image
+return ImageOutput.build(infilled_image_dto)

-nrows, _m = divmod(_nrows, height)
-ncols, _n = divmod(_ncols, width)
-if _m != 0 or _n != 0:
-return None

-return np.lib.stride_tricks.as_strided(
-np.ravel(image),
-shape=(nrows, ncols, height, width, depth),
-strides=(height * _strides[0], width * _strides[1], *_strides),
-writeable=False,
-)


-def tile_fill_missing(im: Image.Image, tile_size: int = 16, seed: Optional[int] = None) -> Image.Image:
-# Only fill if there's an alpha layer
-if im.mode != "RGBA":
-return im

-a = np.asarray(im, dtype=np.uint8)

-tile_size_tuple = (tile_size, tile_size)

-# Get the image as tiles of a specified size
-tiles = get_tile_images(a, *tile_size_tuple).copy()

-# Get the mask as tiles
-tiles_mask = tiles[:, :, :, :, 3]

-# Find any mask tiles with any fully transparent pixels (we will be replacing these later)
-tmask_shape = tiles_mask.shape
-tiles_mask = tiles_mask.reshape(math.prod(tiles_mask.shape))
-n, ny = (math.prod(tmask_shape[0:2])), math.prod(tmask_shape[2:])
-tiles_mask = tiles_mask > 0
-tiles_mask = tiles_mask.reshape((n, ny)).all(axis=1)

-# Get RGB tiles in single array and filter by the mask
-tshape = tiles.shape
-tiles_all = tiles.reshape((math.prod(tiles.shape[0:2]), *tiles.shape[2:]))
-filtered_tiles = tiles_all[tiles_mask]

-if len(filtered_tiles) == 0:
-return im

-# Find all invalid tiles and replace with a random valid tile
-replace_count = (tiles_mask == False).sum() # noqa: E712
-rng = np.random.default_rng(seed=seed)
-tiles_all[np.logical_not(tiles_mask)] = filtered_tiles[rng.choice(filtered_tiles.shape[0], replace_count), :, :, :]

-# Convert back to an image
-tiles_all = tiles_all.reshape(tshape)
-tiles_all = tiles_all.swapaxes(1, 2)
-st = tiles_all.reshape(
-(
-math.prod(tiles_all.shape[0:2]),
-math.prod(tiles_all.shape[2:4]),
-tiles_all.shape[4],
-)
-)
-si = Image.fromarray(st, mode="RGBA")

-return si


 @invocation("infill_rgba", title="Solid Color Infill", tags=["image", "inpaint"], category="inpaint", version="1.2.2")
-class InfillColorInvocation(BaseInvocation, WithMetadata, WithBoard):
+class InfillColorInvocation(InfillImageProcessorInvocation):
 """Infills transparent areas of an image with a solid color"""

-image: ImageField = InputField(description="The image to infill")
 color: ColorField = InputField(
 default=ColorField(r=127, g=127, b=127, a=255),
 description="The color to use to infill",
 )

-def invoke(self, context: InvocationContext) -> ImageOutput:
-image = context.images.get_pil(self.image.image_name)
+def infill(self, image: Image.Image):

 solid_bg = Image.new("RGBA", image.size, self.color.tuple())
 infilled = Image.alpha_composite(solid_bg, image.convert("RGBA"))

 infilled.paste(image, (0, 0), image.split()[-1])
-image_dto = context.images.save(image=infilled)
+return infilled

-return ImageOutput.build(image_dto)


 @invocation("infill_tile", title="Tile Infill", tags=["image", "inpaint"], category="inpaint", version="1.2.3")
-class InfillTileInvocation(BaseInvocation, WithMetadata, WithBoard):
+class InfillTileInvocation(InfillImageProcessorInvocation):
 """Infills transparent areas of an image with tiles of the image"""

-image: ImageField = InputField(description="The image to infill")
 tile_size: int = InputField(default=32, ge=1, description="The tile size (px)")
 seed: int = InputField(
 default=0,
@@ -157,92 +94,74 @@ class InfillTileInvocation(BaseInvocation, WithMetadata, WithBoard):
 description="The seed to use for tile generation (omit for random)",
 )

-def invoke(self, context: InvocationContext) -> ImageOutput:
-image = context.images.get_pil(self.image.image_name)
-infilled = tile_fill_missing(image.copy(), seed=self.seed, tile_size=self.tile_size)
-infilled.paste(image, (0, 0), image.split()[-1])
+def infill(self, image: Image.Image):
+output = infill_tile(image, seed=self.seed, tile_size=self.tile_size)
+return output.infilled

-image_dto = context.images.save(image=infilled)

-return ImageOutput.build(image_dto)


 @invocation(
 "infill_patchmatch", title="PatchMatch Infill", tags=["image", "inpaint"], category="inpaint", version="1.2.2"
 )
-class InfillPatchMatchInvocation(BaseInvocation, WithMetadata, WithBoard):
+class InfillPatchMatchInvocation(InfillImageProcessorInvocation):
 """Infills transparent areas of an image using the PatchMatch algorithm"""

-image: ImageField = InputField(description="The image to infill")
 downscale: float = InputField(default=2.0, gt=0, description="Run patchmatch on downscaled image to speedup infill")
 resample_mode: PIL_RESAMPLING_MODES = InputField(default="bicubic", description="The resampling mode")

-def invoke(self, context: InvocationContext) -> ImageOutput:
-image = context.images.get_pil(self.image.image_name).convert("RGBA")
+def infill(self, image: Image.Image):

 resample_mode = PIL_RESAMPLING_MAP[self.resample_mode]

-infill_image = image.copy()
 width = int(image.width / self.downscale)
 height = int(image.height / self.downscale)
-infill_image = infill_image.resize(
+infilled = image.resize(
 (width, height),
 resample=resample_mode,
 )
-if PatchMatch.patchmatch_available():
-infilled = infill_patchmatch(infill_image)
-else:
-raise ValueError("PatchMatch is not available on this system")
+infilled = infill_patchmatch(image)

 infilled = infilled.resize(
 (image.width, image.height),
 resample=resample_mode,
 )

 infilled.paste(image, (0, 0), mask=image.split()[-1])
-# image.paste(infilled, (0, 0), mask=image.split()[-1])

-image_dto = context.images.save(image=infilled)
+return infilled

-return ImageOutput.build(image_dto)


 @invocation("infill_lama", title="LaMa Infill", tags=["image", "inpaint"], category="inpaint", version="1.2.2")
-class LaMaInfillInvocation(BaseInvocation, WithMetadata, WithBoard):
+class LaMaInfillInvocation(InfillImageProcessorInvocation):
 """Infills transparent areas of an image using the LaMa model"""

-image: ImageField = InputField(description="The image to infill")
-def invoke(self, context: InvocationContext) -> ImageOutput:
-image = context.images.get_pil(self.image.image_name)
+def infill(self, image: Image.Image):
+lama = LaMA()
+return lama(image)

-# Downloads the LaMa model if it doesn't already exist
-download_with_progress_bar(
-name="LaMa Inpainting Model",
-url="https://github.com/Sanster/models/releases/download/add_big_lama/big-lama.pt",
-dest_path=context.config.get().models_path / "core/misc/lama/lama.pt",
-)

-infilled = infill_lama(image.copy())

-image_dto = context.images.save(image=infilled)

-return ImageOutput.build(image_dto)


 @invocation("infill_cv2", title="CV2 Infill", tags=["image", "inpaint"], category="inpaint", version="1.2.2")
-class CV2InfillInvocation(BaseInvocation, WithMetadata, WithBoard):
+class CV2InfillInvocation(InfillImageProcessorInvocation):
 """Infills transparent areas of an image using OpenCV Inpainting"""

+def infill(self, image: Image.Image):
+return cv2_inpaint(image)


+# @invocation(
+# "infill_mosaic", title="Mosaic Infill", tags=["image", "inpaint", "outpaint"], category="inpaint", version="1.0.0"
+# )
+class MosaicInfillInvocation(InfillImageProcessorInvocation):
+"""Infills transparent areas of an image with a mosaic pattern drawing colors from the rest of the image"""

 image: ImageField = InputField(description="The image to infill")
+tile_width: int = InputField(default=64, description="Width of the tile")
+tile_height: int = InputField(default=64, description="Height of the tile")
+min_color: ColorField = InputField(
+default=ColorField(r=0, g=0, b=0, a=255),
+description="The min threshold for color",
+)
+max_color: ColorField = InputField(
+default=ColorField(r=255, g=255, b=255, a=255),
+description="The max threshold for color",
+)

-def invoke(self, context: InvocationContext) -> ImageOutput:
-image = context.images.get_pil(self.image.image_name)
+def infill(self, image: Image.Image):
+return infill_mosaic(image, (self.tile_width, self.tile_height), self.min_color.tuple(), self.max_color.tuple())

-infilled = infill_cv2(image.copy())

-image_dto = context.images.save(image=infilled)

-return ImageOutput.build(image_dto)
@@ -1,5 +1,5 @@
 from builtins import float
-from typing import List, Union
+from typing import List, Literal, Optional, Union

 from pydantic import BaseModel, Field, field_validator, model_validator
 from typing_extensions import Self
@@ -10,25 +10,43 @@ from invokeai.app.invocations.baseinvocation import (
 invocation,
 invocation_output,
 )
-from invokeai.app.invocations.fields import FieldDescriptions, Input, InputField, OutputField, UIType
+from invokeai.app.invocations.fields import (
+FieldDescriptions,
+Input,
+InputField,
+OutputField,
+TensorField,
+UIType,
+)
 from invokeai.app.invocations.model import ModelIdentifierField
 from invokeai.app.invocations.primitives import ImageField
 from invokeai.app.invocations.util import validate_begin_end_step, validate_weights
 from invokeai.app.services.shared.invocation_context import InvocationContext
-from invokeai.backend.model_manager.config import AnyModelConfig, BaseModelType, IPAdapterConfig, ModelType
+from invokeai.backend.model_manager.config import (
+AnyModelConfig,
+BaseModelType,
+IPAdapterCheckpointConfig,
+IPAdapterInvokeAIConfig,
+ModelType,
+)


 class IPAdapterField(BaseModel):
 image: Union[ImageField, List[ImageField]] = Field(description="The IP-Adapter image prompt(s).")
 ip_adapter_model: ModelIdentifierField = Field(description="The IP-Adapter model to use.")
 image_encoder_model: ModelIdentifierField = Field(description="The name of the CLIP image encoder model.")
-weight: Union[float, List[float]] = Field(default=1, description="The weight given to the ControlNet")
+weight: Union[float, List[float]] = Field(default=1, description="The weight given to the IP-Adapter.")
 begin_step_percent: float = Field(
 default=0, ge=0, le=1, description="When the IP-Adapter is first applied (% of total steps)"
 )
 end_step_percent: float = Field(
 default=1, ge=0, le=1, description="When the IP-Adapter is last applied (% of total steps)"
 )
+mask: Optional[TensorField] = Field(
+default=None,
+description="The bool mask associated with this IP-Adapter. Excluded regions should be set to False, included "
+"regions should be set to True.",
+)

 @field_validator("weight")
 @classmethod
@@ -48,12 +66,15 @@ class IPAdapterOutput(BaseInvocationOutput):
 ip_adapter: IPAdapterField = OutputField(description=FieldDescriptions.ip_adapter, title="IP-Adapter")


-@invocation("ip_adapter", title="IP-Adapter", tags=["ip_adapter", "control"], category="ip_adapter", version="1.2.2")
+CLIP_VISION_MODEL_MAP = {"ViT-H": "ip_adapter_sd_image_encoder", "ViT-G": "ip_adapter_sdxl_image_encoder"}


+@invocation("ip_adapter", title="IP-Adapter", tags=["ip_adapter", "control"], category="ip_adapter", version="1.3.0")
 class IPAdapterInvocation(BaseInvocation):
 """Collects IP-Adapter info to pass to other nodes."""

 # Inputs
-image: Union[ImageField, List[ImageField]] = InputField(description="The IP-Adapter image prompt(s).")
+image: Union[ImageField, List[ImageField]] = InputField(description="The IP-Adapter image prompt(s).", ui_order=1)
 ip_adapter_model: ModelIdentifierField = InputField(
 description="The IP-Adapter model.",
 title="IP-Adapter Model",
@@ -61,7 +82,11 @@ class IPAdapterInvocation(BaseInvocation):
 ui_order=-1,
 ui_type=UIType.IPAdapterModel,
 )
+clip_vision_model: Literal["ViT-H", "ViT-G"] = InputField(
+description="CLIP Vision model to use. Overrides model settings. Mandatory for checkpoint models.",
+default="ViT-H",
+ui_order=2,
+)
 weight: Union[float, List[float]] = InputField(
 default=1, description="The weight given to the IP-Adapter", title="Weight"
 )
@@ -71,6 +96,9 @@ class IPAdapterInvocation(BaseInvocation):
 end_step_percent: float = InputField(
 default=1, ge=0, le=1, description="When the IP-Adapter is last applied (% of total steps)"
 )
+mask: Optional[TensorField] = InputField(
+default=None, description="A mask defining the region that this IP-Adapter applies to."
+)

 @field_validator("weight")
 @classmethod
@@ -86,10 +114,16 @@ class IPAdapterInvocation(BaseInvocation):
 def invoke(self, context: InvocationContext) -> IPAdapterOutput:
 # Lookup the CLIP Vision encoder that is intended to be used with the IP-Adapter model.
 ip_adapter_info = context.models.get_config(self.ip_adapter_model.key)
-assert isinstance(ip_adapter_info, IPAdapterConfig)
-image_encoder_model_id = ip_adapter_info.image_encoder_model_id
-image_encoder_model_name = image_encoder_model_id.split("/")[-1].strip()
+assert isinstance(ip_adapter_info, (IPAdapterInvokeAIConfig, IPAdapterCheckpointConfig))
+if isinstance(ip_adapter_info, IPAdapterInvokeAIConfig):
+image_encoder_model_id = ip_adapter_info.image_encoder_model_id
+image_encoder_model_name = image_encoder_model_id.split("/")[-1].strip()
+else:
+image_encoder_model_name = CLIP_VISION_MODEL_MAP[self.clip_vision_model]

 image_encoder_model = self._get_image_encoder(context, image_encoder_model_name)

 return IPAdapterOutput(
 ip_adapter=IPAdapterField(
 image=self.image,
@@ -98,23 +132,30 @@ class IPAdapterInvocation(BaseInvocation):
 weight=self.weight,
 begin_step_percent=self.begin_step_percent,
 end_step_percent=self.end_step_percent,
+mask=self.mask,
 ),
 )

 def _get_image_encoder(self, context: InvocationContext, image_encoder_model_name: str) -> AnyModelConfig:
-found = False
-while not found:
+image_encoder_models = context.models.search_by_attrs(
+name=image_encoder_model_name, base=BaseModelType.Any, type=ModelType.CLIPVision
+)

+if not len(image_encoder_models) > 0:
+context.logger.warning(
+f"The image encoder required by this IP Adapter ({image_encoder_model_name}) is not installed. \
+Downloading and installing now. This may take a while."
+)

+installer = context._services.model_manager.install
+job = installer.heuristic_import(f"InvokeAI/{image_encoder_model_name}")
+installer.wait_for_job(job, timeout=600) # Wait for up to 10 minutes
 image_encoder_models = context.models.search_by_attrs(
 name=image_encoder_model_name, base=BaseModelType.Any, type=ModelType.CLIPVision
 )
-found = len(image_encoder_models) > 0
-if not found:
-context.logger.warning(
-f"The image encoder required by this IP Adapter ({image_encoder_model_name}) is not installed."
-)
-context.logger.warning("Downloading and installing now. This may take a while.")
-installer = context._services.model_manager.install
-job = installer.heuristic_import(f"InvokeAI/{image_encoder_model_name}")
-installer.wait_for_job(job, timeout=600) # wait up to 10 minutes - then raise a TimeoutException
-assert len(image_encoder_models) == 1
+if len(image_encoder_models) == 0:
+context.logger.error("Error while fetching CLIP Vision Image Encoder")
+assert len(image_encoder_models) == 1

 return image_encoder_models[0]
@ -1,5 +1,5 @@
|
|||||||
# Copyright (c) 2023 Kyle Schouviller (https://github.com/kyle0654)
|
# Copyright (c) 2023 Kyle Schouviller (https://github.com/kyle0654)
|
||||||
|
import inspect
|
||||||
import math
|
import math
|
||||||
from contextlib import ExitStack
|
from contextlib import ExitStack
|
||||||
from functools import singledispatchmethod
|
from functools import singledispatchmethod
|
||||||
@ -9,6 +9,7 @@ import einops
|
|||||||
import numpy as np
|
import numpy as np
|
||||||
import numpy.typing as npt
|
import numpy.typing as npt
|
||||||
import torch
|
import torch
|
||||||
|
import torchvision
|
||||||
import torchvision.transforms as T
|
import torchvision.transforms as T
|
||||||
from diffusers import AutoencoderKL, AutoencoderTiny
|
from diffusers import AutoencoderKL, AutoencoderTiny
|
||||||
from diffusers.configuration_utils import ConfigMixin
|
from diffusers.configuration_utils import ConfigMixin
|
||||||
@ -43,11 +44,7 @@ from invokeai.app.invocations.fields import (
|
|||||||
WithMetadata,
|
WithMetadata,
|
||||||
)
|
)
|
||||||
from invokeai.app.invocations.ip_adapter import IPAdapterField
|
from invokeai.app.invocations.ip_adapter import IPAdapterField
|
||||||
from invokeai.app.invocations.primitives import (
|
from invokeai.app.invocations.primitives import DenoiseMaskOutput, ImageOutput, LatentsOutput
|
||||||
DenoiseMaskOutput,
|
|
||||||
ImageOutput,
|
|
||||||
LatentsOutput,
|
|
||||||
)
|
|
||||||
from invokeai.app.invocations.t2i_adapter import T2IAdapterField
|
from invokeai.app.invocations.t2i_adapter import T2IAdapterField
|
||||||
from invokeai.app.services.shared.invocation_context import InvocationContext
|
from invokeai.app.services.shared.invocation_context import InvocationContext
|
||||||
from invokeai.app.util.controlnet_utils import prepare_control_image
|
from invokeai.app.util.controlnet_utils import prepare_control_image
|
||||||
@ -56,31 +53,31 @@ from invokeai.backend.lora import LoRAModelRaw
|
|||||||
from invokeai.backend.model_manager import BaseModelType, LoadedModel
|
from invokeai.backend.model_manager import BaseModelType, LoadedModel
|
||||||
from invokeai.backend.model_patcher import ModelPatcher
|
from invokeai.backend.model_patcher import ModelPatcher
|
||||||
from invokeai.backend.stable_diffusion import PipelineIntermediateState, set_seamless
|
from invokeai.backend.stable_diffusion import PipelineIntermediateState, set_seamless
|
||||||
from invokeai.backend.stable_diffusion.diffusion.conditioning_data import ConditioningData, IPAdapterConditioningInfo
|
from invokeai.backend.stable_diffusion.diffusion.conditioning_data import (
|
||||||
|
BasicConditioningInfo,
|
||||||
|
IPAdapterConditioningInfo,
|
||||||
|
IPAdapterData,
|
||||||
|
Range,
|
||||||
|
SDXLConditioningInfo,
|
||||||
|
TextConditioningData,
|
||||||
|
TextConditioningRegions,
|
||||||
|
)
|
||||||
|
from invokeai.backend.util.mask import to_standard_float_mask
|
||||||
from invokeai.backend.util.silence_warnings import SilenceWarnings
|
from invokeai.backend.util.silence_warnings import SilenceWarnings
|
||||||
|
|
||||||
from ...backend.stable_diffusion.diffusers_pipeline import (
|
from ...backend.stable_diffusion.diffusers_pipeline import (
|
||||||
ControlNetData,
|
ControlNetData,
|
||||||
IPAdapterData,
|
|
||||||
StableDiffusionGeneratorPipeline,
|
StableDiffusionGeneratorPipeline,
|
||||||
T2IAdapterData,
|
T2IAdapterData,
|
||||||
image_resized_to_grid_as_tensor,
|
image_resized_to_grid_as_tensor,
|
||||||
)
|
)
|
||||||
from ...backend.stable_diffusion.schedulers import SCHEDULER_MAP
|
from ...backend.stable_diffusion.schedulers import SCHEDULER_MAP
|
||||||
from ...backend.util.devices import choose_precision, choose_torch_device
|
from ...backend.util.devices import TorchDevice
|
||||||
from .baseinvocation import (
|
from .baseinvocation import BaseInvocation, BaseInvocationOutput, invocation, invocation_output
|
||||||
BaseInvocation,
|
|
||||||
BaseInvocationOutput,
|
|
||||||
invocation,
|
|
||||||
invocation_output,
|
|
||||||
)
|
|
||||||
from .controlnet_image_processors import ControlField
|
from .controlnet_image_processors import ControlField
|
||||||
from .model import ModelIdentifierField, UNetField, VAEField
|
from .model import ModelIdentifierField, UNetField, VAEField
|
||||||
|
|
||||||
if choose_torch_device() == torch.device("mps"):
|
DEFAULT_PRECISION = TorchDevice.choose_torch_dtype()
|
||||||
from torch import mps
|
|
||||||
|
|
||||||
DEFAULT_PRECISION = choose_precision(choose_torch_device())
|
|
||||||
|
|
||||||
|
|
||||||
@invocation_output("scheduler_output")
|
@invocation_output("scheduler_output")
|
||||||
@ -284,10 +281,10 @@ def get_scheduler(
|
|||||||
class DenoiseLatentsInvocation(BaseInvocation):
|
class DenoiseLatentsInvocation(BaseInvocation):
|
||||||
"""Denoises noisy latents to decodable images"""
|
"""Denoises noisy latents to decodable images"""
|
||||||
|
|
||||||
positive_conditioning: ConditioningField = InputField(
|
positive_conditioning: Union[ConditioningField, list[ConditioningField]] = InputField(
|
||||||
description=FieldDescriptions.positive_cond, input=Input.Connection, ui_order=0
|
description=FieldDescriptions.positive_cond, input=Input.Connection, ui_order=0
|
||||||
)
|
)
|
||||||
negative_conditioning: ConditioningField = InputField(
|
negative_conditioning: Union[ConditioningField, list[ConditioningField]] = InputField(
|
||||||
description=FieldDescriptions.negative_cond, input=Input.Connection, ui_order=1
|
description=FieldDescriptions.negative_cond, input=Input.Connection, ui_order=1
|
||||||
)
|
)
|
||||||
noise: Optional[LatentsField] = InputField(
|
noise: Optional[LatentsField] = InputField(
|
||||||
@ -365,33 +362,168 @@ class DenoiseLatentsInvocation(BaseInvocation):
|
|||||||
raise ValueError("cfg_scale must be greater than 1")
|
raise ValueError("cfg_scale must be greater than 1")
|
||||||
return v
|
return v
|
||||||
|
|
||||||
|
def _get_text_embeddings_and_masks(
|
||||||
|
self,
|
||||||
|
cond_list: list[ConditioningField],
|
||||||
|
context: InvocationContext,
|
||||||
|
device: torch.device,
|
||||||
|
dtype: torch.dtype,
|
||||||
|
) -> tuple[Union[list[BasicConditioningInfo], list[SDXLConditioningInfo]], list[Optional[torch.Tensor]]]:
|
||||||
|
"""Get the text embeddings and masks from the input conditioning fields."""
|
||||||
|
text_embeddings: Union[list[BasicConditioningInfo], list[SDXLConditioningInfo]] = []
|
||||||
|
text_embeddings_masks: list[Optional[torch.Tensor]] = []
|
||||||
|
for cond in cond_list:
|
||||||
|
cond_data = context.conditioning.load(cond.conditioning_name)
|
||||||
|
text_embeddings.append(cond_data.conditionings[0].to(device=device, dtype=dtype))
|
||||||
|
|
||||||
|
mask = cond.mask
|
||||||
|
if mask is not None:
|
||||||
|
mask = context.tensors.load(mask.tensor_name)
|
||||||
|
text_embeddings_masks.append(mask)
|
||||||
|
|
||||||
|
return text_embeddings, text_embeddings_masks
|
||||||
|
|
||||||
|
def _preprocess_regional_prompt_mask(
|
||||||
|
self, mask: Optional[torch.Tensor], target_height: int, target_width: int, dtype: torch.dtype
|
||||||
|
) -> torch.Tensor:
|
||||||
|
"""Preprocess a regional prompt mask to match the target height and width.
|
||||||
|
If mask is None, returns a mask of all ones with the target height and width.
|
||||||
|
If mask is not None, resizes the mask to the target height and width using 'nearest' interpolation.
|
||||||
|
|
||||||
|
Returns:
|
||||||
|
torch.Tensor: The processed mask. shape: (1, 1, target_height, target_width).
|
||||||
|
"""
|
||||||
|
|
||||||
|
if mask is None:
|
||||||
|
return torch.ones((1, 1, target_height, target_width), dtype=dtype)
|
||||||
|
|
||||||
|
mask = to_standard_float_mask(mask, out_dtype=dtype)
|
||||||
|
|
||||||
|
tf = torchvision.transforms.Resize(
|
||||||
|
(target_height, target_width), interpolation=torchvision.transforms.InterpolationMode.NEAREST
|
||||||
|
)
|
||||||
|
|
||||||
|
# Add a batch dimension to the mask, because torchvision expects shape (batch, channels, h, w).
|
||||||
|
mask = mask.unsqueeze(0) # Shape: (1, h, w) -> (1, 1, h, w)
|
||||||
|
resized_mask = tf(mask)
|
||||||
|
return resized_mask
|
||||||
|
|
||||||
|
def _concat_regional_text_embeddings(
|
||||||
|
self,
|
||||||
|
text_conditionings: Union[list[BasicConditioningInfo], list[SDXLConditioningInfo]],
|
||||||
|
masks: Optional[list[Optional[torch.Tensor]]],
|
||||||
|
latent_height: int,
|
||||||
|
latent_width: int,
|
||||||
|
dtype: torch.dtype,
|
||||||
|
) -> tuple[Union[BasicConditioningInfo, SDXLConditioningInfo], Optional[TextConditioningRegions]]:
|
||||||
|
"""Concatenate regional text embeddings into a single embedding and track the region masks accordingly."""
|
||||||
|
if masks is None:
|
||||||
|
masks = [None] * len(text_conditionings)
|
||||||
|
assert len(text_conditionings) == len(masks)
|
||||||
|
|
||||||
|
is_sdxl = type(text_conditionings[0]) is SDXLConditioningInfo
|
||||||
|
|
||||||
|
all_masks_are_none = all(mask is None for mask in masks)
|
||||||
|
|
||||||
|
text_embedding = []
|
||||||
|
pooled_embedding = None
|
||||||
|
add_time_ids = None
|
||||||
|
cur_text_embedding_len = 0
|
||||||
|
processed_masks = []
|
||||||
|
embedding_ranges = []
|
||||||
|
|
||||||
|
for prompt_idx, text_embedding_info in enumerate(text_conditionings):
|
||||||
|
mask = masks[prompt_idx]
|
||||||
|
|
||||||
|
if is_sdxl:
|
||||||
|
# We choose a random SDXLConditioningInfo's pooled_embeds and add_time_ids here, with a preference for
|
||||||
|
# prompts without a mask. We prefer prompts without a mask, because they are more likely to contain
|
||||||
|
# global prompt information. In an ideal case, there should be exactly one global prompt without a
|
||||||
|
# mask, but we don't enforce this.
|
||||||
|
|
||||||
|
# HACK(ryand): The fact that we have to choose a single pooled_embedding and add_time_ids here is a
|
||||||
|
# fundamental interface issue. The SDXL Compel nodes are not designed to be used in the way that we use
|
||||||
|
# them for regional prompting. Ideally, the DenoiseLatents invocation should accept a single
|
||||||
|
# pooled_embeds tensor and a list of standard text embeds with region masks. This change would be a
|
||||||
|
# pretty major breaking change to a popular node, so for now we use this hack.
|
||||||
|
if pooled_embedding is None or mask is None:
|
||||||
|
pooled_embedding = text_embedding_info.pooled_embeds
|
||||||
|
if add_time_ids is None or mask is None:
|
||||||
|
add_time_ids = text_embedding_info.add_time_ids
|
||||||
|
|
||||||
|
text_embedding.append(text_embedding_info.embeds)
|
||||||
|
if not all_masks_are_none:
|
||||||
|
embedding_ranges.append(
|
||||||
|
Range(
|
||||||
|
start=cur_text_embedding_len, end=cur_text_embedding_len + text_embedding_info.embeds.shape[1]
|
||||||
|
)
|
||||||
|
)
|
||||||
|
processed_masks.append(
|
||||||
|
self._preprocess_regional_prompt_mask(mask, latent_height, latent_width, dtype=dtype)
|
||||||
|
)
|
||||||
|
|
||||||
|
cur_text_embedding_len += text_embedding_info.embeds.shape[1]
|
||||||
|
|
||||||
|
text_embedding = torch.cat(text_embedding, dim=1)
|
||||||
|
assert len(text_embedding.shape) == 3 # batch_size, seq_len, token_len
|
||||||
|
|
||||||
|
regions = None
|
||||||
|
if not all_masks_are_none:
|
||||||
|
regions = TextConditioningRegions(
|
||||||
|
masks=torch.cat(processed_masks, dim=1),
|
||||||
|
ranges=embedding_ranges,
|
||||||
|
)
|
||||||
|
|
||||||
|
if is_sdxl:
|
||||||
|
return SDXLConditioningInfo(
|
||||||
|
embeds=text_embedding, pooled_embeds=pooled_embedding, add_time_ids=add_time_ids
|
||||||
|
), regions
|
||||||
|
return BasicConditioningInfo(embeds=text_embedding), regions
|
||||||
|
|
||||||
def get_conditioning_data(
|
def get_conditioning_data(
|
||||||
self,
|
self,
|
||||||
context: InvocationContext,
|
context: InvocationContext,
|
||||||
scheduler: Scheduler,
|
|
||||||
unet: UNet2DConditionModel,
|
unet: UNet2DConditionModel,
|
||||||
seed: int,
|
latent_height: int,
|
||||||
) -> ConditioningData:
|
latent_width: int,
|
||||||
positive_cond_data = context.conditioning.load(self.positive_conditioning.conditioning_name)
|
) -> TextConditioningData:
|
||||||
c = positive_cond_data.conditionings[0].to(device=unet.device, dtype=unet.dtype)
|
# Normalize self.positive_conditioning and self.negative_conditioning to lists.
|
||||||
|
cond_list = self.positive_conditioning
|
||||||
|
if not isinstance(cond_list, list):
|
||||||
|
cond_list = [cond_list]
|
||||||
|
uncond_list = self.negative_conditioning
|
||||||
|
if not isinstance(uncond_list, list):
|
||||||
|
uncond_list = [uncond_list]
|
||||||
|
|
||||||
negative_cond_data = context.conditioning.load(self.negative_conditioning.conditioning_name)
|
cond_text_embeddings, cond_text_embedding_masks = self._get_text_embeddings_and_masks(
|
||||||
uc = negative_cond_data.conditionings[0].to(device=unet.device, dtype=unet.dtype)
|
cond_list, context, unet.device, unet.dtype
|
||||||
|
)
|
||||||
conditioning_data = ConditioningData(
|
uncond_text_embeddings, uncond_text_embedding_masks = self._get_text_embeddings_and_masks(
|
||||||
unconditioned_embeddings=uc,
|
uncond_list, context, unet.device, unet.dtype
|
||||||
text_embeddings=c,
|
|
||||||
guidance_scale=self.cfg_scale,
|
|
||||||
guidance_rescale_multiplier=self.cfg_rescale_multiplier,
|
|
||||||
)
|
)
|
||||||
|
|
||||||
conditioning_data = conditioning_data.add_scheduler_args_if_applicable( # FIXME
|
cond_text_embedding, cond_regions = self._concat_regional_text_embeddings(
|
||||||
scheduler,
|
text_conditionings=cond_text_embeddings,
|
||||||
# for ddim scheduler
|
masks=cond_text_embedding_masks,
|
||||||
eta=0.0, # ddim_eta
|
latent_height=latent_height,
|
||||||
# for ancestral and sde schedulers
|
latent_width=latent_width,
|
||||||
# flip all bits to have noise different from initial
|
dtype=unet.dtype,
|
||||||
generator=torch.Generator(device=unet.device).manual_seed(seed ^ 0xFFFFFFFF),
|
)
|
||||||
|
uncond_text_embedding, uncond_regions = self._concat_regional_text_embeddings(
|
||||||
|
text_conditionings=uncond_text_embeddings,
|
||||||
|
masks=uncond_text_embedding_masks,
|
||||||
|
latent_height=latent_height,
|
||||||
|
latent_width=latent_width,
|
||||||
|
dtype=unet.dtype,
|
||||||
|
)
|
||||||
|
|
||||||
|
conditioning_data = TextConditioningData(
|
||||||
|
uncond_text=uncond_text_embedding,
|
||||||
|
cond_text=cond_text_embedding,
|
||||||
|
uncond_regions=uncond_regions,
|
||||||
|
cond_regions=cond_regions,
|
||||||
|
guidance_scale=self.cfg_scale,
|
||||||
|
guidance_rescale_multiplier=self.cfg_rescale_multiplier,
|
||||||
)
|
)
|
||||||
|
|
||||||
if conditioning_data.unconditioned_embeddings.embeds.device != conditioning_data.text_embeddings.embeds.device:
|
if conditioning_data.unconditioned_embeddings.embeds.device != conditioning_data.text_embeddings.embeds.device:
|
||||||
@ -502,8 +634,10 @@ class DenoiseLatentsInvocation(BaseInvocation):
|
|||||||
self,
|
self,
|
||||||
context: InvocationContext,
|
context: InvocationContext,
|
||||||
ip_adapter: Optional[Union[IPAdapterField, list[IPAdapterField]]],
|
ip_adapter: Optional[Union[IPAdapterField, list[IPAdapterField]]],
|
||||||
conditioning_data: ConditioningData,
|
|
||||||
exit_stack: ExitStack,
|
exit_stack: ExitStack,
|
||||||
|
latent_height: int,
|
||||||
|
latent_width: int,
|
||||||
|
dtype: torch.dtype,
|
||||||
) -> Optional[list[IPAdapterData]]:
|
) -> Optional[list[IPAdapterData]]:
|
||||||
"""If IP-Adapter is enabled, then this function loads the requisite models, and adds the image prompt embeddings
|
"""If IP-Adapter is enabled, then this function loads the requisite models, and adds the image prompt embeddings
|
||||||
to the `conditioning_data` (in-place).
|
to the `conditioning_data` (in-place).
|
||||||
@ -519,7 +653,6 @@ class DenoiseLatentsInvocation(BaseInvocation):
|
|||||||
return None
|
return None
|
||||||
|
|
||||||
ip_adapter_data_list = []
|
ip_adapter_data_list = []
|
||||||
conditioning_data.ip_adapter_conditioning = []
|
|
||||||
for single_ip_adapter in ip_adapter:
|
for single_ip_adapter in ip_adapter:
|
||||||
ip_adapter_model: Union[IPAdapter, IPAdapterPlus] = exit_stack.enter_context(
|
ip_adapter_model: Union[IPAdapter, IPAdapterPlus] = exit_stack.enter_context(
|
||||||
context.models.load(single_ip_adapter.ip_adapter_model)
|
context.models.load(single_ip_adapter.ip_adapter_model)
|
||||||
@ -542,9 +675,10 @@ class DenoiseLatentsInvocation(BaseInvocation):
|
|||||||
single_ipa_images, image_encoder_model
|
single_ipa_images, image_encoder_model
|
||||||
)
|
)
|
||||||
|
|
||||||
conditioning_data.ip_adapter_conditioning.append(
|
mask = single_ip_adapter.mask
|
||||||
IPAdapterConditioningInfo(image_prompt_embeds, uncond_image_prompt_embeds)
|
if mask is not None:
|
||||||
)
|
mask = context.tensors.load(mask.tensor_name)
|
||||||
|
mask = self._preprocess_regional_prompt_mask(mask, latent_height, latent_width, dtype=dtype)
|
||||||
|
|
||||||
ip_adapter_data_list.append(
|
ip_adapter_data_list.append(
|
||||||
IPAdapterData(
|
IPAdapterData(
|
||||||
@ -552,6 +686,8 @@ class DenoiseLatentsInvocation(BaseInvocation):
|
|||||||
weight=single_ip_adapter.weight,
|
weight=single_ip_adapter.weight,
|
||||||
begin_step_percent=single_ip_adapter.begin_step_percent,
|
begin_step_percent=single_ip_adapter.begin_step_percent,
|
||||||
end_step_percent=single_ip_adapter.end_step_percent,
|
end_step_percent=single_ip_adapter.end_step_percent,
|
||||||
|
ip_adapter_conditioning=IPAdapterConditioningInfo(image_prompt_embeds, uncond_image_prompt_embeds),
|
||||||
|
mask=mask,
|
||||||
)
|
)
|
||||||
)
|
)
|
||||||
|
|
||||||
@ -641,6 +777,7 @@ class DenoiseLatentsInvocation(BaseInvocation):
|
|||||||
steps: int,
|
steps: int,
|
||||||
denoising_start: float,
|
denoising_start: float,
|
||||||
denoising_end: float,
|
denoising_end: float,
|
||||||
|
seed: int,
|
||||||
) -> Tuple[int, List[int], int]:
|
) -> Tuple[int, List[int], int]:
|
||||||
assert isinstance(scheduler, ConfigMixin)
|
assert isinstance(scheduler, ConfigMixin)
|
||||||
if scheduler.config.get("cpu_only", False):
|
if scheduler.config.get("cpu_only", False):
|
||||||
@ -669,7 +806,15 @@ class DenoiseLatentsInvocation(BaseInvocation):
|
|||||||
timesteps = timesteps[t_start_idx : t_start_idx + t_end_idx]
|
timesteps = timesteps[t_start_idx : t_start_idx + t_end_idx]
|
||||||
num_inference_steps = len(timesteps) // scheduler.order
|
num_inference_steps = len(timesteps) // scheduler.order
|
||||||
|
|
||||||
return num_inference_steps, timesteps, init_timestep
|
scheduler_step_kwargs = {}
|
||||||
|
scheduler_step_signature = inspect.signature(scheduler.step)
|
||||||
|
if "generator" in scheduler_step_signature.parameters:
|
||||||
|
# At some point, someone decided that schedulers that accept a generator should use the original seed with
|
||||||
|
# all bits flipped. I don't know the original rationale for this, but now we must keep it like this for
|
||||||
|
# reproducibility.
|
||||||
|
scheduler_step_kwargs = {"generator": torch.Generator(device=device).manual_seed(seed ^ 0xFFFFFFFF)}
|
||||||
|
|
||||||
|
return num_inference_steps, timesteps, init_timestep, scheduler_step_kwargs
|
||||||
|
|
||||||
def prep_inpaint_mask(
|
def prep_inpaint_mask(
|
||||||
self, context: InvocationContext, latents: torch.Tensor
|
self, context: InvocationContext, latents: torch.Tensor
|
||||||
@ -762,7 +907,11 @@ class DenoiseLatentsInvocation(BaseInvocation):
|
|||||||
)
|
)
|
||||||
|
|
||||||
pipeline = self.create_pipeline(unet, scheduler)
|
pipeline = self.create_pipeline(unet, scheduler)
|
||||||
conditioning_data = self.get_conditioning_data(context, scheduler, unet, seed)
|
|
||||||
|
_, _, latent_height, latent_width = latents.shape
|
||||||
|
conditioning_data = self.get_conditioning_data(
|
||||||
|
context=context, unet=unet, latent_height=latent_height, latent_width=latent_width
|
||||||
|
)
|
||||||
|
|
||||||
controlnet_data = self.prep_control_data(
|
controlnet_data = self.prep_control_data(
|
||||||
context=context,
|
context=context,
|
||||||
@ -776,16 +925,19 @@ class DenoiseLatentsInvocation(BaseInvocation):
|
|||||||
ip_adapter_data = self.prep_ip_adapter_data(
|
ip_adapter_data = self.prep_ip_adapter_data(
|
||||||
context=context,
|
context=context,
|
||||||
ip_adapter=self.ip_adapter,
|
ip_adapter=self.ip_adapter,
|
||||||
conditioning_data=conditioning_data,
|
|
||||||
exit_stack=exit_stack,
|
exit_stack=exit_stack,
|
||||||
|
latent_height=latent_height,
|
||||||
|
latent_width=latent_width,
|
||||||
|
dtype=unet.dtype,
|
||||||
)
|
)
|
||||||
|
|
||||||
num_inference_steps, timesteps, init_timestep = self.init_scheduler(
|
num_inference_steps, timesteps, init_timestep, scheduler_step_kwargs = self.init_scheduler(
|
||||||
scheduler,
|
scheduler,
|
||||||
device=unet.device,
|
device=unet.device,
|
||||||
steps=self.steps,
|
steps=self.steps,
|
||||||
denoising_start=self.denoising_start,
|
denoising_start=self.denoising_start,
|
||||||
denoising_end=self.denoising_end,
|
denoising_end=self.denoising_end,
|
||||||
|
seed=seed,
|
||||||
)
|
)
|
||||||
|
|
||||||
result_latents = pipeline.latents_from_embeddings(
|
result_latents = pipeline.latents_from_embeddings(
|
||||||
@ -798,6 +950,7 @@ class DenoiseLatentsInvocation(BaseInvocation):
|
|||||||
masked_latents=masked_latents,
|
masked_latents=masked_latents,
|
||||||
gradient_mask=gradient_mask,
|
gradient_mask=gradient_mask,
|
||||||
num_inference_steps=num_inference_steps,
|
num_inference_steps=num_inference_steps,
|
||||||
|
scheduler_step_kwargs=scheduler_step_kwargs,
|
||||||
conditioning_data=conditioning_data,
|
conditioning_data=conditioning_data,
|
||||||
control_data=controlnet_data,
|
control_data=controlnet_data,
|
||||||
ip_adapter_data=ip_adapter_data,
|
ip_adapter_data=ip_adapter_data,
|
||||||
@ -807,12 +960,10 @@ class DenoiseLatentsInvocation(BaseInvocation):
|
|||||||
|
|
||||||
# https://discuss.huggingface.co/t/memory-usage-by-later-pipeline-stages/23699
|
# https://discuss.huggingface.co/t/memory-usage-by-later-pipeline-stages/23699
|
||||||
result_latents = result_latents.to("cpu")
|
result_latents = result_latents.to("cpu")
|
||||||
torch.cuda.empty_cache()
|
TorchDevice.empty_cache()
|
||||||
if choose_torch_device() == torch.device("mps"):
|
|
||||||
mps.empty_cache()
|
|
||||||
|
|
||||||
name = context.tensors.save(tensor=result_latents)
|
name = context.tensors.save(tensor=result_latents)
|
||||||
return LatentsOutput.build(latents_name=name, latents=result_latents, seed=seed)
|
return LatentsOutput.build(latents_name=name, latents=result_latents, seed=None)
|
||||||
|
|
||||||
|
|
||||||
@invocation(
|
@invocation(
|
||||||
@ -876,9 +1027,7 @@ class LatentsToImageInvocation(BaseInvocation, WithMetadata, WithBoard):
|
|||||||
vae.disable_tiling()
|
vae.disable_tiling()
|
||||||
|
|
||||||
# clear memory as vae decode can request a lot
|
# clear memory as vae decode can request a lot
|
||||||
torch.cuda.empty_cache()
|
TorchDevice.empty_cache()
|
||||||
if choose_torch_device() == torch.device("mps"):
|
|
||||||
mps.empty_cache()
|
|
||||||
|
|
||||||
with torch.inference_mode():
|
with torch.inference_mode():
|
||||||
# copied from diffusers pipeline
|
# copied from diffusers pipeline
|
||||||
@@ -890,9 +1039,7 @@ class LatentsToImageInvocation(BaseInvocation, WithMetadata, WithBoard):

         image = VaeImageProcessor.numpy_to_pil(np_image)[0]

-        torch.cuda.empty_cache()
-        if choose_torch_device() == torch.device("mps"):
-            mps.empty_cache()
+        TorchDevice.empty_cache()

         image_dto = context.images.save(image=image)

@@ -931,9 +1078,7 @@ class ResizeLatentsInvocation(BaseInvocation):

     def invoke(self, context: InvocationContext) -> LatentsOutput:
         latents = context.tensors.load(self.latents.latents_name)
-
-        # TODO:
-        device = choose_torch_device()
+        device = TorchDevice.choose_torch_device()

         resized_latents = torch.nn.functional.interpolate(
             latents.to(device),
@@ -944,9 +1089,8 @@ class ResizeLatentsInvocation(BaseInvocation):

         # https://discuss.huggingface.co/t/memory-usage-by-later-pipeline-stages/23699
         resized_latents = resized_latents.to("cpu")
-        torch.cuda.empty_cache()
-        if device == torch.device("mps"):
-            mps.empty_cache()
+
+        TorchDevice.empty_cache()

         name = context.tensors.save(tensor=resized_latents)
         return LatentsOutput.build(latents_name=name, latents=resized_latents, seed=self.latents.seed)
@@ -973,8 +1117,7 @@ class ScaleLatentsInvocation(BaseInvocation):
     def invoke(self, context: InvocationContext) -> LatentsOutput:
         latents = context.tensors.load(self.latents.latents_name)

-        # TODO:
-        device = choose_torch_device()
+        device = TorchDevice.choose_torch_device()

         # resizing
         resized_latents = torch.nn.functional.interpolate(
@@ -986,9 +1129,7 @@ class ScaleLatentsInvocation(BaseInvocation):

         # https://discuss.huggingface.co/t/memory-usage-by-later-pipeline-stages/23699
         resized_latents = resized_latents.to("cpu")
-        torch.cuda.empty_cache()
-        if device == torch.device("mps"):
-            mps.empty_cache()
+        TorchDevice.empty_cache()

         name = context.tensors.save(tensor=resized_latents)
         return LatentsOutput.build(latents_name=name, latents=resized_latents, seed=self.latents.seed)
@@ -1120,8 +1261,7 @@ class BlendLatentsInvocation(BaseInvocation):
         if latents_a.shape != latents_b.shape:
             raise Exception("Latents to blend must be the same size.")

-        # TODO:
-        device = choose_torch_device()
+        device = TorchDevice.choose_torch_device()

         def slerp(
             t: Union[float, npt.NDArray[Any]],  # FIXME: maybe use np.float32 here?
@@ -1174,9 +1314,8 @@ class BlendLatentsInvocation(BaseInvocation):

         # https://discuss.huggingface.co/t/memory-usage-by-later-pipeline-stages/23699
         blended_latents = blended_latents.to("cpu")
-        torch.cuda.empty_cache()
-        if device == torch.device("mps"):
-            mps.empty_cache()
+
+        TorchDevice.empty_cache()

         name = context.tensors.save(tensor=blended_latents)
         return LatentsOutput.build(latents_name=name, latents=blended_latents)
@@ -1267,7 +1406,7 @@ class IdealSizeInvocation(BaseInvocation):
         return tuple((x - x % multiple_of) for x in args)

     def invoke(self, context: InvocationContext) -> IdealSizeOutput:
-        unet_config = context.models.get_config(**self.unet.unet.model_dump())
+        unet_config = context.models.get_config(self.unet.unet.key)
         aspect = self.width / self.height
         dimension: float = 512
         if unet_config.base == BaseModelType.StableDiffusion2:
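Taken together, these hunks replace the per-backend cleanup (a CUDA call plus an MPS special case) with a single TorchDevice.empty_cache() call. The TorchDevice class itself is not shown in this diff; the following is only a minimal sketch of what such a device-agnostic helper could look like, with the name and structure assumed rather than taken from the patch.

import torch


def empty_device_cache() -> None:
    # Hypothetical stand-in for TorchDevice.empty_cache():
    # clear whichever accelerator cache is currently in use.
    if torch.cuda.is_available():
        torch.cuda.empty_cache()
    elif torch.backends.mps.is_available():
        torch.mps.empty_cache()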
36
invokeai/app/invocations/mask.py
Normal file
@@ -0,0 +1,36 @@
import torch

from invokeai.app.invocations.baseinvocation import BaseInvocation, InvocationContext, invocation
from invokeai.app.invocations.fields import InputField, TensorField, WithMetadata
from invokeai.app.invocations.primitives import MaskOutput


@invocation(
    "rectangle_mask",
    title="Create Rectangle Mask",
    tags=["conditioning"],
    category="conditioning",
    version="1.0.1",
)
class RectangleMaskInvocation(BaseInvocation, WithMetadata):
    """Create a rectangular mask."""

    width: int = InputField(description="The width of the entire mask.")
    height: int = InputField(description="The height of the entire mask.")
    x_left: int = InputField(description="The left x-coordinate of the rectangular masked region (inclusive).")
    y_top: int = InputField(description="The top y-coordinate of the rectangular masked region (inclusive).")
    rectangle_width: int = InputField(description="The width of the rectangular masked region.")
    rectangle_height: int = InputField(description="The height of the rectangular masked region.")

    def invoke(self, context: InvocationContext) -> MaskOutput:
        mask = torch.zeros((1, self.height, self.width), dtype=torch.bool)
        mask[:, self.y_top : self.y_top + self.rectangle_height, self.x_left : self.x_left + self.rectangle_width] = (
            True
        )

        mask_tensor_name = context.tensors.save(mask)
        return MaskOutput(
            mask=TensorField(tensor_name=mask_tensor_name),
            width=self.width,
            height=self.height,
        )
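The core of RectangleMaskInvocation is a boolean tensor with one rectangular region set to True. A standalone restatement of that slicing logic with concrete example values (the numbers below are illustrative, not from the patch):

import torch

# Example: an 8x6 mask with a 3x2 rectangle starting at (x_left=2, y_top=1).
width, height = 8, 6
x_left, y_top = 2, 1
rectangle_width, rectangle_height = 3, 2

mask = torch.zeros((1, height, width), dtype=torch.bool)
mask[:, y_top : y_top + rectangle_height, x_left : x_left + rectangle_width] = True

assert int(mask.sum()) == rectangle_width * rectangle_height  # 6 pixels are masked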
@@ -2,16 +2,8 @@ from typing import Any, Literal, Optional, Union

 from pydantic import BaseModel, ConfigDict, Field

-from invokeai.app.invocations.baseinvocation import (
-    BaseInvocation,
-    BaseInvocationOutput,
-    invocation,
-    invocation_output,
-)
-from invokeai.app.invocations.controlnet_image_processors import (
-    CONTROLNET_MODE_VALUES,
-    CONTROLNET_RESIZE_VALUES,
-)
+from invokeai.app.invocations.baseinvocation import BaseInvocation, BaseInvocationOutput, invocation, invocation_output
+from invokeai.app.invocations.controlnet_image_processors import CONTROLNET_MODE_VALUES, CONTROLNET_RESIZE_VALUES
 from invokeai.app.invocations.fields import (
     FieldDescriptions,
     ImageField,
@@ -43,6 +35,7 @@ class IPAdapterMetadataField(BaseModel):

     image: ImageField = Field(description="The IP-Adapter image prompt.")
     ip_adapter_model: ModelIdentifierField = Field(description="The IP-Adapter model.")
+    clip_vision_model: Literal["ViT-H", "ViT-G"] = Field(description="The CLIP Vision model")
     weight: Union[float, list[float]] = Field(description="The weight given to the IP-Adapter")
     begin_step_percent: float = Field(description="When the IP-Adapter is first applied (% of total steps)")
     end_step_percent: float = Field(description="When the IP-Adapter is last applied (% of total steps)")
@@ -9,7 +9,7 @@ from invokeai.app.invocations.fields import FieldDescriptions, InputField, Laten
 from invokeai.app.services.shared.invocation_context import InvocationContext
 from invokeai.app.util.misc import SEED_MAX

-from ...backend.util.devices import choose_torch_device, torch_dtype
+from ...backend.util.devices import TorchDevice
 from .baseinvocation import (
     BaseInvocation,
     BaseInvocationOutput,
@@ -46,7 +46,7 @@ def get_noise(
             height // downsampling_factor,
             width // downsampling_factor,
         ],
-        dtype=torch_dtype(device),
+        dtype=TorchDevice.choose_torch_dtype(device=device),
        device=noise_device_type,
        generator=generator,
    ).to("cpu")
@@ -111,14 +111,14 @@ class NoiseInvocation(BaseInvocation):

     @field_validator("seed", mode="before")
     def modulo_seed(cls, v):
-        """Returns the seed modulo (SEED_MAX + 1) to ensure it is within the valid range."""
+        """Return the seed modulo (SEED_MAX + 1) to ensure it is within the valid range."""
         return v % (SEED_MAX + 1)

     def invoke(self, context: InvocationContext) -> NoiseOutput:
         noise = get_noise(
             width=self.width,
             height=self.height,
-            device=choose_torch_device(),
+            device=TorchDevice.choose_torch_device(),
             seed=self.seed,
             use_cpu=self.use_cpu,
         )
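The modulo_seed validator folds any integer seed into the valid range before get_noise runs. A quick illustration of that step; SEED_MAX is assumed here to be 2**32 - 1 for the example (the real value comes from invokeai.app.util.misc):

SEED_MAX = 2**32 - 1  # assumed value, for illustration only


def modulo_seed(v: int) -> int:
    # Mirrors the validator: any integer (even negative) lands in [0, SEED_MAX].
    return v % (SEED_MAX + 1)


assert modulo_seed(SEED_MAX + 5) == 4
assert 0 <= modulo_seed(-1) <= SEED_MAX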
@@ -15,6 +15,7 @@ from invokeai.app.invocations.fields import (
     InputField,
     LatentsField,
     OutputField,
+    TensorField,
     UIComponent,
 )
 from invokeai.app.services.images.images_common import ImageDTO
@@ -405,9 +406,19 @@ class ColorInvocation(BaseInvocation):

 # endregion


 # region Conditioning


+@invocation_output("mask_output")
+class MaskOutput(BaseInvocationOutput):
+    """A torch mask tensor."""
+
+    mask: TensorField = OutputField(description="The mask.")
+    width: int = OutputField(description="The width of the mask in pixels.")
+    height: int = OutputField(description="The height of the mask in pixels.")
+
+
 @invocation_output("conditioning_output")
 class ConditioningOutput(BaseInvocationOutput):
     """Base class for nodes that output a single conditioning tensor"""
@@ -4,7 +4,6 @@ from typing import Literal

 import cv2
 import numpy as np
-import torch
 from PIL import Image
 from pydantic import ConfigDict

@@ -14,7 +13,7 @@ from invokeai.app.services.shared.invocation_context import InvocationContext
 from invokeai.app.util.download_with_progress import download_with_progress_bar
 from invokeai.backend.image_util.basicsr.rrdbnet_arch import RRDBNet
 from invokeai.backend.image_util.realesrgan.realesrgan import RealESRGAN
-from invokeai.backend.util.devices import choose_torch_device
+from invokeai.backend.util.devices import TorchDevice

 from .baseinvocation import BaseInvocation, invocation
 from .fields import InputField, WithBoard, WithMetadata
@@ -35,9 +34,6 @@ ESRGAN_MODEL_URLS: dict[str, str] = {
     "RealESRGAN_x2plus.pth": "https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.1/RealESRGAN_x2plus.pth",
 }

-if choose_torch_device() == torch.device("mps"):
-    from torch import mps
-

 @invocation("esrgan", title="Upscale (RealESRGAN)", tags=["esrgan", "upscale"], category="esrgan", version="1.3.2")
 class ESRGANInvocation(BaseInvocation, WithMetadata, WithBoard):
@@ -120,9 +116,7 @@ class ESRGANInvocation(BaseInvocation, WithMetadata, WithBoard):
         upscaled_image = upscaler.upscale(cv2_image)
         pil_image = Image.fromarray(cv2.cvtColor(upscaled_image, cv2.COLOR_BGR2RGB)).convert("RGBA")

-        torch.cuda.empty_cache()
-        if choose_torch_device() == torch.device("mps"):
-            mps.empty_cache()
+        TorchDevice.empty_cache()

         image_dto = context.images.save(image=pil_image)

@@ -3,6 +3,7 @@

 from __future__ import annotations

+import locale
 import os
 import re
 import shutil
@@ -24,13 +25,13 @@ DB_FILE = Path("invokeai.db")
 LEGACY_INIT_FILE = Path("invokeai.init")
 DEFAULT_RAM_CACHE = 10.0
 DEFAULT_CONVERT_CACHE = 20.0
-DEVICE = Literal["auto", "cpu", "cuda:0", "cuda:1", "cuda:2", "cuda:3", "cuda:4", "cuda:5", "mps"]
+DEVICE = Literal["auto", "cpu", "cuda:0", "cuda:1", "cuda:2", "cuda:3", "cuda:4", "cuda:5", "cuda:6", "cuda:7", "mps"]
 PRECISION = Literal["auto", "float16", "bfloat16", "float32", "autocast"]
 ATTENTION_TYPE = Literal["auto", "normal", "xformers", "sliced", "torch-sdp"]
 ATTENTION_SLICE_SIZE = Literal["auto", "balanced", "max", 1, 2, 3, 4, 5, 6, 7, 8]
 LOG_FORMAT = Literal["plain", "color", "syslog", "legacy"]
 LOG_LEVEL = Literal["debug", "info", "warning", "error", "critical"]
-CONFIG_SCHEMA_VERSION = "4.0.0"
+CONFIG_SCHEMA_VERSION = "4.0.1"


 def get_default_ram_cache_size() -> float:
@@ -100,9 +101,9 @@ class InvokeAIAppConfig(BaseSettings):
         ram: Maximum memory amount used by memory model cache for rapid switching (GB).
         convert_cache: Maximum size of on-disk converted models cache (GB).
         log_memory_usage: If True, a memory snapshot will be captured before and after every model cache operation, and the result will be logged (at debug level). There is a time cost to capturing the memory snapshots, so it is recommended to only enable this feature if you are actively inspecting the model cache's behaviour.
-        device: Preferred execution device. `auto` will choose the device depending on the hardware platform and the installed torch capabilities.<br>Valid values: `auto`, `cpu`, `cuda:0`, `cuda:1`, `cuda:2`, `cuda:3`, `cuda:4`, `cuda:5`, `mps`
-        devices: List of execution devices; will override default device selected.
-        precision: Floating point precision. `float16` will consume half the memory of `float32` but produce slightly lower-quality images. The `auto` setting will guess the proper precision based on your video card and operating system.<br>Valid values: `auto`, `float16`, `bfloat16`, `float32`, `autocast`
+        device: Preferred execution device. `auto` will choose the device depending on the hardware platform and the installed torch capabilities.<br>Valid values: `auto`, `cpu`, `cuda:0`, `cuda:1`, `cuda:2`, `cuda:3`, `cuda:4`, `cuda:5`, `cuda:6`, `cuda:7`, `cuda:8`, `mps`
+        devices: List of execution devices to use in a multi-GPU environment; will override default device selected.
+        precision: Floating point precision. `float16` will consume half the memory of `float32` but produce slightly lower-quality images. The `auto` setting will guess the proper precision based on your video card and operating system.<br>Valid values: `auto`, `float16`, `bfloat16`, `float32`
         sequential_guidance: Whether to calculate guidance in serial instead of in parallel, lowering memory requirements.
         attention_type: Attention type.<br>Valid values: `auto`, `normal`, `xformers`, `sliced`, `torch-sdp`
         attention_slice_size: Slice size, valid when attention_type=="sliced".<br>Valid values: `auto`, `balanced`, `max`, `1`, `2`, `3`, `4`, `5`, `6`, `7`, `8`
@@ -316,11 +317,10 @@ class InvokeAIAppConfig(BaseSettings):
     @staticmethod
     def find_root() -> Path:
         """Choose the runtime root directory when not specified on command line or init file."""
-        venv = Path(os.environ.get("VIRTUAL_ENV") or ".")
         if os.environ.get("INVOKEAI_ROOT"):
             root = Path(os.environ["INVOKEAI_ROOT"])
-        elif any((venv.parent / x).exists() for x in [INIT_FILE, LEGACY_INIT_FILE]):
-            root = (venv.parent).resolve()
+        elif venv := os.environ.get("VIRTUAL_ENV", None):
+            root = Path(venv).parent.resolve()
         else:
             root = Path("~/invokeai").expanduser().resolve()
         return root
@@ -366,16 +366,25 @@ def migrate_v3_config_dict(config_dict: dict[str, Any]) -> InvokeAIAppConfig:
             # `max_cache_size` was renamed to `ram` some time in v3, but both names were used
             if k == "max_cache_size" and "ram" not in category_dict:
                 parsed_config_dict["ram"] = v
+            # `max_vram_cache_size` was renamed to `vram` some time in v3, but both names were used
+            if k == "max_vram_cache_size" and "vram" not in category_dict:
+                parsed_config_dict["vram"] = v
+            # autocast was removed in v4.0.1
+            if k == "precision" and v == "autocast":
+                parsed_config_dict["precision"] = "auto"
             if k == "conf_path":
                 parsed_config_dict["legacy_models_yaml_path"] = v
             if k == "legacy_conf_dir":
-                # The old default for this was "configs/stable-diffusion". If if the incoming config has that as the value, we won't set it.
-                # Else if the path ends in "stable-diffusion", we assume the parent is the new correct path.
-                # Else we do not attempt to migrate this setting
-                if v != "configs/stable-diffusion":
-                    parsed_config_dict["legacy_conf_dir"] = v
+                # The old default for this was "configs/stable-diffusion" ("configs\stable-diffusion" on Windows).
+                if v == "configs/stable-diffusion" or v == "configs\\stable-diffusion":
+                    # If if the incoming config has the default value, skip
+                    continue
                 elif Path(v).name == "stable-diffusion":
+                    # Else if the path ends in "stable-diffusion", we assume the parent is the new correct path.
                     parsed_config_dict["legacy_conf_dir"] = str(Path(v).parent)
+                else:
+                    # Else we do not attempt to migrate this setting
+                    parsed_config_dict["legacy_conf_dir"] = v
             elif k in InvokeAIAppConfig.model_fields:
                 # skip unknown fields
                 parsed_config_dict[k] = v
@@ -385,6 +394,28 @@
     return config


+def migrate_v4_0_0_config_dict(config_dict: dict[str, Any]) -> InvokeAIAppConfig:
+    """Migrate v4.0.0 config dictionary to a current config object.
+
+    Args:
+        config_dict: A dictionary of settings from a v4.0.0 config file.
+
+    Returns:
+        An instance of `InvokeAIAppConfig` with the migrated settings.
+    """
+    parsed_config_dict: dict[str, Any] = {}
+    for k, v in config_dict.items():
+        # autocast was removed from precision in v4.0.1
+        if k == "precision" and v == "autocast":
+            parsed_config_dict["precision"] = "auto"
+        else:
+            parsed_config_dict[k] = v
+        if k == "schema_version":
+            parsed_config_dict[k] = CONFIG_SCHEMA_VERSION
+    config = DefaultInvokeAIAppConfig.model_validate(parsed_config_dict)
+    return config
+
+
 def load_and_migrate_config(config_path: Path) -> InvokeAIAppConfig:
     """Load and migrate a config file to the latest version.

@@ -395,7 +426,7 @@ def load_and_migrate_config(config_path: Path) -> InvokeAIAppConfig:
         An instance of `InvokeAIAppConfig` with the loaded and migrated settings.
     """
     assert config_path.suffix == ".yaml"
-    with open(config_path) as file:
+    with open(config_path, "rt", encoding=locale.getpreferredencoding()) as file:
         loaded_config_dict = yaml.safe_load(file)

     assert isinstance(loaded_config_dict, dict)
@@ -411,17 +442,21 @@ def load_and_migrate_config(config_path: Path) -> InvokeAIAppConfig:
             raise RuntimeError(f"Failed to load and migrate v3 config file {config_path}: {e}") from e
         migrated_config.write_file(config_path)
         return migrated_config
-    else:
-        # Attempt to load as a v4 config file
-        try:
-            # Meta is not included in the model fields, so we need to validate it separately
-            config = InvokeAIAppConfig.model_validate(loaded_config_dict)
-            assert (
-                config.schema_version == CONFIG_SCHEMA_VERSION
-            ), f"Invalid schema version, expected {CONFIG_SCHEMA_VERSION}: {config.schema_version}"
-            return config
-        except Exception as e:
-            raise RuntimeError(f"Failed to load config file {config_path}: {e}") from e
+    if loaded_config_dict["schema_version"] == "4.0.0":
+        loaded_config_dict = migrate_v4_0_0_config_dict(loaded_config_dict)
+        loaded_config_dict.write_file(config_path)
+
+    # Attempt to load as a v4 config file
+    try:
+        # Meta is not included in the model fields, so we need to validate it separately
+        config = InvokeAIAppConfig.model_validate(loaded_config_dict)
+        assert (
+            config.schema_version == CONFIG_SCHEMA_VERSION
+        ), f"Invalid schema version, expected {CONFIG_SCHEMA_VERSION}: {config.schema_version}"
+        return config
+    except Exception as e:
+        raise RuntimeError(f"Failed to load config file {config_path}: {e}") from e


 @lru_cache(maxsize=1)
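The new migrate_v4_0_0_config_dict only has to fold the removed "autocast" precision into "auto" and bump the schema version; other keys pass through. A dict-level sketch of the same transformation (the real function additionally validates the result into an InvokeAIAppConfig):

from typing import Any

CONFIG_SCHEMA_VERSION = "4.0.1"  # mirrors the constant bumped in the hunk above


def migrate_v4_0_0_dict(config_dict: dict[str, Any]) -> dict[str, Any]:
    migrated: dict[str, Any] = {}
    for k, v in config_dict.items():
        if k == "precision" and v == "autocast":
            migrated["precision"] = "auto"  # autocast was removed in v4.0.1
        else:
            migrated[k] = v
        if k == "schema_version":
            migrated[k] = CONFIG_SCHEMA_VERSION
    return migrated


print(migrate_v4_0_0_dict({"schema_version": "4.0.0", "precision": "autocast"}))
# -> {'schema_version': '4.0.1', 'precision': 'auto'}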
@@ -1,5 +1,6 @@
 """Model installation class."""

+import locale
 import os
 import re
 import signal
@@ -12,6 +13,7 @@ from shutil import copyfile, copytree, move, rmtree
 from tempfile import mkdtemp
 from typing import Any, Dict, List, Optional, Union

+import torch
 import yaml
 from huggingface_hub import HfFolder
 from pydantic.networks import AnyHttpUrl
@@ -41,7 +43,7 @@ from invokeai.backend.model_manager.metadata.metadata_base import HuggingFaceMet
 from invokeai.backend.model_manager.probe import ModelProbe
 from invokeai.backend.model_manager.search import ModelSearch
 from invokeai.backend.util import InvokeAILogger
-from invokeai.backend.util.devices import choose_precision, choose_torch_device
+from invokeai.backend.util.devices import TorchDevice

 from .model_install_base import (
     MODEL_SOURCE_TO_TYPE_MAP,
@@ -323,7 +325,8 @@ class ModelInstallService(ModelInstallServiceBase):
             legacy_models_yaml_path = Path(self._app_config.root_path, legacy_models_yaml_path)

         if legacy_models_yaml_path.exists():
-            legacy_models_yaml = yaml.safe_load(legacy_models_yaml_path.read_text())
+            with open(legacy_models_yaml_path, "rt", encoding=locale.getpreferredencoding()) as file:
+                legacy_models_yaml = yaml.safe_load(file)

             yaml_metadata = legacy_models_yaml.pop("__metadata__")
             yaml_version = yaml_metadata.get("version")
@@ -564,7 +567,7 @@ class ModelInstallService(ModelInstallServiceBase):
             # The model is not in the models directory - we don't need to move it.
             return model

-        new_path = (models_dir / model.base.value / model.type.value / model.name).with_suffix(old_path.suffix)
+        new_path = models_dir / model.base.value / model.type.value / old_path.name

         if old_path == new_path or new_path.exists() and old_path == new_path.resolve():
             return model
@@ -632,11 +635,10 @@ class ModelInstallService(ModelInstallServiceBase):
             self._next_job_id += 1
         return id

-    @staticmethod
-    def _guess_variant() -> Optional[ModelRepoVariant]:
+    def _guess_variant(self) -> Optional[ModelRepoVariant]:
         """Guess the best HuggingFace variant type to download."""
-        precision = choose_precision(choose_torch_device())
-        return ModelRepoVariant.FP16 if precision == "float16" else None
+        precision = TorchDevice.choose_torch_dtype()
+        return ModelRepoVariant.FP16 if precision == torch.float16 else None

     def _import_local_model(self, source: LocalModelSource, config: Optional[Dict[str, Any]]) -> ModelInstallJob:
         return ModelInstallJob(
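The _guess_variant change swaps a string comparison for a dtype comparison. A minimal restatement of the new check, assuming TorchDevice.choose_torch_dtype() returns a torch.dtype as the call site above suggests:

from typing import Optional

import torch


def guess_variant(precision: torch.dtype) -> Optional[str]:
    # Download the fp16 repo variant only when the chosen dtype is float16.
    # (The old code compared against the string "float16" instead.)
    return "fp16" if precision == torch.float16 else None


assert guess_variant(torch.float16) == "fp16"
assert guess_variant(torch.float32) is None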
@@ -1,11 +1,14 @@
 # Copyright (c) 2023 Lincoln D. Stein and the InvokeAI Team
 """Implementation of ModelManagerServiceBase."""

+from typing import Optional
+
 import torch
 from typing_extensions import Self

 from invokeai.app.services.invoker import Invoker
 from invokeai.backend.model_manager.load import ModelCache, ModelConvertCache, ModelLoaderRegistry
+from invokeai.backend.util.devices import TorchDevice
 from invokeai.backend.util.logging import InvokeAILogger

 from ..config import InvokeAIAppConfig
@@ -86,6 +89,8 @@ class ModelManagerService(ModelManagerServiceBase):
             max_cache_size=app_config.ram,
             logger=logger,
             execution_devices=execution_devices,
+            max_vram_cache_size=app_config.vram,
+            lazy_offloading=app_config.lazy_offload,
         )
         convert_cache = ModelConvertCache(cache_path=app_config.convert_cache_path, max_size=app_config.convert_cache)
         loader = ModelLoadService(
@@ -98,6 +98,12 @@ class DefaultSessionProcessor(SessionProcessorBase):
             self._poll_now()
         elif event_name == "batch_enqueued":
             self._poll_now()
+        elif event_name == "queue_item_status_changed" and event[1]["data"]["queue_item"]["status"] in [
+            "completed",
+            "failed",
+            "canceled",
+        ]:
+            self._poll_now()

     def resume(self) -> SessionProcessorStatus:
         if not self._resume_event.is_set():
@@ -188,11 +194,7 @@ class DefaultSessionProcessor(SessionProcessorBase):
                         invocation = session.session.next()

                     # Loop over invocations until the session is complete or canceled
-                    while invocation is not None:
-                        if self._stop_event.is_set():
-                            break
-                        self._resume_event.wait()
-
+                    while invocation is not None and not self._cancel_event.is_set():
                         self._process_next_invocation(session, invocation, stats_service)

                         # The session is complete if all invocations are complete or there was an error
@@ -245,6 +245,18 @@ class ImagesInterface(InvocationContextInterface):
         """
         return self._services.images.get_dto(image_name)

+    def get_path(self, image_name: str, thumbnail: bool = False) -> Path:
+        """Gets the internal path to an image or thumbnail.
+
+        Args:
+            image_name: The name of the image to get the path of.
+            thumbnail: Get the path of the thumbnail instead of the full image
+
+        Returns:
+            The local path of the image or thumbnail.
+        """
+        return self._services.images.get_path(image_name, thumbnail)
+

 class TensorsInterface(InvocationContextInterface):
     def save(self, tensor: Tensor) -> str:
@@ -11,6 +11,7 @@ from invokeai.app.services.shared.sqlite_migrator.migrations.migration_5 import
 from invokeai.app.services.shared.sqlite_migrator.migrations.migration_6 import build_migration_6
 from invokeai.app.services.shared.sqlite_migrator.migrations.migration_7 import build_migration_7
 from invokeai.app.services.shared.sqlite_migrator.migrations.migration_8 import build_migration_8
+from invokeai.app.services.shared.sqlite_migrator.migrations.migration_9 import build_migration_9
 from invokeai.app.services.shared.sqlite_migrator.sqlite_migrator_impl import SqliteMigrator


@@ -39,6 +40,7 @@ def init_db(config: InvokeAIAppConfig, logger: Logger, image_files: ImageFileSto
     migrator.register_migration(build_migration_6())
     migrator.register_migration(build_migration_7())
     migrator.register_migration(build_migration_8(app_config=config))
+    migrator.register_migration(build_migration_9())
     migrator.run_migrations()

     return db
@@ -0,0 +1,29 @@
import sqlite3

from invokeai.app.services.shared.sqlite_migrator.sqlite_migrator_common import Migration


class Migration9Callback:
    def __call__(self, cursor: sqlite3.Cursor) -> None:
        self._empty_session_queue(cursor)

    def _empty_session_queue(self, cursor: sqlite3.Cursor) -> None:
        """Empties the session queue. This is done to prevent any lingering session queue items from causing pydantic errors due to changed schemas."""

        cursor.execute("DELETE FROM session_queue;")


def build_migration_9() -> Migration:
    """
    Build the migration from database version 8 to 9.

    This migration does the following:
    - Empties the session queue. This is done to prevent any lingering session queue items from causing pydantic errors due to changed schemas.
    """
    migration_9 = Migration(
        from_version=8,
        to_version=9,
        callback=Migration9Callback(),
    )

    return migration_9
@@ -1,4 +1,6 @@
 import sqlite3
+from contextlib import closing
+from datetime import datetime
 from pathlib import Path
 from typing import Optional

@@ -32,6 +34,7 @@ class SqliteMigrator:
         self._db = db
         self._logger = db.logger
         self._migration_set = MigrationSet()
+        self._backup_path: Optional[Path] = None

     def register_migration(self, migration: Migration) -> None:
         """Registers a migration."""
@@ -55,6 +58,18 @@ class SqliteMigrator:
             return False

         self._logger.info("Database update needed")
+
+        # Make a backup of the db if it needs to be updated and is a file db
+        if self._db.db_path is not None:
+            timestamp = datetime.now().strftime("%Y%m%d-%H%M%S")
+            self._backup_path = self._db.db_path.parent / f"{self._db.db_path.stem}_backup_{timestamp}.db"
+            self._logger.info(f"Backing up database to {str(self._backup_path)}")
+            # Use SQLite to do the backup
+            with closing(sqlite3.connect(self._backup_path)) as backup_conn:
+                self._db.conn.backup(backup_conn)
+        else:
+            self._logger.info("Using in-memory database, no backup needed")
+
         next_migration = self._migration_set.get(from_version=self._get_current_version(cursor))
         while next_migration is not None:
             self._run_migration(next_migration)
@@ -2,7 +2,7 @@
 Initialization file for invokeai.backend.image_util methods.
 """

-from .patchmatch import PatchMatch  # noqa: F401
+from .infill_methods.patchmatch import PatchMatch  # noqa: F401
 from .pngwriter import PngWriter, PromptFormatter, retrieve_metadata, write_metadata  # noqa: F401
 from .seamless import configure_model_padding  # noqa: F401
 from .util import InitImageResizer, make_grid  # noqa: F401
@@ -13,7 +13,7 @@ from invokeai.app.services.config.config_default import get_config
 from invokeai.app.util.download_with_progress import download_with_progress_bar
 from invokeai.backend.image_util.depth_anything.model.dpt import DPT_DINOv2
 from invokeai.backend.image_util.depth_anything.utilities.util import NormalizeImage, PrepareForNet, Resize
-from invokeai.backend.util.devices import choose_torch_device
+from invokeai.backend.util.devices import TorchDevice
 from invokeai.backend.util.logging import InvokeAILogger

 config = get_config()
@@ -56,7 +56,7 @@ class DepthAnythingDetector:
     def __init__(self) -> None:
         self.model = None
         self.model_size: Union[Literal["large", "base", "small"], None] = None
-        self.device = choose_torch_device()
+        self.device = TorchDevice.choose_torch_device()

     def load_model(self, model_size: Literal["large", "base", "small"] = "small"):
         DEPTH_ANYTHING_MODEL_PATH = config.models_path / DEPTH_ANYTHING_MODELS[model_size]["local"]
@@ -81,7 +81,7 @@ class DepthAnythingDetector:
         self.model.load_state_dict(torch.load(DEPTH_ANYTHING_MODEL_PATH.as_posix(), map_location="cpu"))
         self.model.eval()

-        self.model.to(choose_torch_device())
+        self.model.to(self.device)
         return self.model

     def __call__(self, image: Image.Image, resolution: int = 512) -> Image.Image:
@@ -94,7 +94,7 @@ class DepthAnythingDetector:

         image_height, image_width = np_image.shape[:2]
         np_image = transform({"image": np_image})["image"]
-        tensor_image = torch.from_numpy(np_image).unsqueeze(0).to(choose_torch_device())
+        tensor_image = torch.from_numpy(np_image).unsqueeze(0).to(self.device)

         with torch.no_grad():
             depth = self.model(tensor_image)
@@ -7,7 +7,7 @@ import onnxruntime as ort

 from invokeai.app.services.config.config_default import get_config
 from invokeai.app.util.download_with_progress import download_with_progress_bar
-from invokeai.backend.util.devices import choose_torch_device
+from invokeai.backend.util.devices import TorchDevice

 from .onnxdet import inference_detector
 from .onnxpose import inference_pose
@@ -28,9 +28,9 @@ config = get_config()

 class Wholebody:
     def __init__(self):
-        device = choose_torch_device()
+        device = TorchDevice.choose_torch_device()

-        providers = ["CUDAExecutionProvider"] if device == "cuda" else ["CPUExecutionProvider"]
+        providers = ["CUDAExecutionProvider"] if device.type == "cuda" else ["CPUExecutionProvider"]

         DET_MODEL_PATH = config.models_path / DWPOSE_MODELS["yolox_l.onnx"]["local"]
         download_with_progress_bar("yolox_l.onnx", DWPOSE_MODELS["yolox_l.onnx"]["url"], DET_MODEL_PATH)
@@ -7,7 +7,8 @@ from PIL import Image

 import invokeai.backend.util.logging as logger
 from invokeai.app.services.config.config_default import get_config
-from invokeai.backend.util.devices import choose_torch_device
+from invokeai.app.util.download_with_progress import download_with_progress_bar
+from invokeai.backend.util.devices import TorchDevice


 def norm_img(np_img):
@@ -28,8 +29,16 @@ def load_jit_model(url_or_path, device):

 class LaMA:
     def __call__(self, input_image: Image.Image, *args: Any, **kwds: Any) -> Any:
-        device = choose_torch_device()
+        device = TorchDevice.choose_torch_device()
         model_location = get_config().models_path / "core/misc/lama/lama.pt"
+
+        if not model_location.exists():
+            download_with_progress_bar(
+                name="LaMa Inpainting Model",
+                url="https://github.com/Sanster/models/releases/download/add_big_lama/big-lama.pt",
+                dest_path=model_location,
+            )
+
         model = load_jit_model(model_location, device)

         image = np.asarray(input_image.convert("RGB"))
60
invokeai/backend/image_util/infill_methods/mosaic.py
Normal file
@@ -0,0 +1,60 @@
from typing import Tuple

import numpy as np
from PIL import Image


def infill_mosaic(
    image: Image.Image,
    tile_shape: Tuple[int, int] = (64, 64),
    min_color: Tuple[int, int, int, int] = (0, 0, 0, 0),
    max_color: Tuple[int, int, int, int] = (255, 255, 255, 0),
) -> Image.Image:
    """
    image:PIL - A PIL Image
    tile_shape: Tuple[int,int] - Tile width & Tile Height
    min_color: Tuple[int,int,int] - RGB values for the lowest color to clip to (0-255)
    max_color: Tuple[int,int,int] - RGB values for the highest color to clip to (0-255)
    """

    np_image = np.array(image)  # Convert image to np array
    alpha = np_image[:, :, 3]  # Get the mask from the alpha channel of the image
    non_transparent_pixels = np_image[alpha != 0, :3]  # List of non-transparent pixels

    # Create color tiles to paste in the empty areas of the image
    tile_width, tile_height = tile_shape

    # Clip the range of colors in the image to a particular spectrum only
    r_min, g_min, b_min, _ = min_color
    r_max, g_max, b_max, _ = max_color
    non_transparent_pixels[:, 0] = np.clip(non_transparent_pixels[:, 0], r_min, r_max)
    non_transparent_pixels[:, 1] = np.clip(non_transparent_pixels[:, 1], g_min, g_max)
    non_transparent_pixels[:, 2] = np.clip(non_transparent_pixels[:, 2], b_min, b_max)

    tiles = []
    for _ in range(256):
        color = non_transparent_pixels[np.random.randint(len(non_transparent_pixels))]
        tile = np.zeros((tile_height, tile_width, 3), dtype=np.uint8)
        tile[:, :] = color
        tiles.append(tile)

    # Fill the transparent area with tiles
    filled_image = np.zeros((image.height, image.width, 3), dtype=np.uint8)

    for x in range(image.width):
        for y in range(image.height):
            tile = tiles[np.random.randint(len(tiles))]
            try:
                filled_image[
                    y - (y % tile_height) : y - (y % tile_height) + tile_height,
                    x - (x % tile_width) : x - (x % tile_width) + tile_width,
                ] = tile
            except ValueError:
                # Need to handle edge cases - literally
                pass

    filled_image = Image.fromarray(filled_image)  # Convert the filled tiles image to PIL
    image = Image.composite(
        image, filled_image, image.split()[-1]
    )  # Composite the original image on top of the filled tiles
    return image
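infill_mosaic expects an RGBA image and uses the alpha channel as the mask: transparent pixels are covered with random colour tiles sampled (and clipped) from the opaque pixels. A small usage sketch, with made-up image contents for illustration:

from PIL import Image

from invokeai.backend.image_util.infill_methods.mosaic import infill_mosaic

# Solid RGBA image with a transparent hole that will be filled with mosaic tiles.
image = Image.new("RGBA", (256, 256), (120, 80, 200, 255))
image.paste((0, 0, 0, 0), (64, 64, 192, 192))

filled = infill_mosaic(image, tile_shape=(32, 32))
filled.save("mosaic_filled.png")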
67
invokeai/backend/image_util/infill_methods/patchmatch.py
Normal file
@@ -0,0 +1,67 @@
"""
This module defines a singleton object, "patchmatch" that
wraps the actual patchmatch object. It respects the global
"try_patchmatch" attribute, so that patchmatch loading can
be suppressed or deferred
"""

import numpy as np
from PIL import Image

import invokeai.backend.util.logging as logger
from invokeai.app.services.config.config_default import get_config


class PatchMatch:
    """
    Thin class wrapper around the patchmatch function.
    """

    patch_match = None
    tried_load: bool = False

    def __init__(self):
        super().__init__()

    @classmethod
    def _load_patch_match(cls):
        if cls.tried_load:
            return
        if get_config().patchmatch:
            from patchmatch import patch_match as pm

            if pm.patchmatch_available:
                logger.info("Patchmatch initialized")
                cls.patch_match = pm
            else:
                logger.info("Patchmatch not loaded (nonfatal)")
        else:
            logger.info("Patchmatch loading disabled")
        cls.tried_load = True

    @classmethod
    def patchmatch_available(cls) -> bool:
        cls._load_patch_match()
        if not cls.patch_match:
            return False
        return cls.patch_match.patchmatch_available

    @classmethod
    def inpaint(cls, image: Image.Image) -> Image.Image:
        if cls.patch_match is None or not cls.patchmatch_available():
            return image

        np_image = np.array(image)
        mask = 255 - np_image[:, :, 3]
        infilled = cls.patch_match.inpaint(np_image[:, :, :3], mask, patch_size=3)
        return Image.fromarray(infilled, mode="RGB")


def infill_patchmatch(image: Image.Image) -> Image.Image:
    IS_PATCHMATCH_AVAILABLE = PatchMatch.patchmatch_available()

    if not IS_PATCHMATCH_AVAILABLE:
        logger.warning("PatchMatch is not available on this system")
        return image

    return PatchMatch.inpaint(image)
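infill_patchmatch degrades gracefully: if the optional pypatchmatch dependency is missing, or loading is disabled via the patchmatch config flag, it logs a warning and returns the input unchanged. Usage is a single call:

from PIL import Image

from invokeai.backend.image_util.infill_methods.patchmatch import infill_patchmatch

image = Image.new("RGBA", (64, 64), (10, 20, 30, 255))
result = infill_patchmatch(image)  # same image back if patchmatch is unavailable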
BIN: 10 new image files (2.2 KiB to 60 KiB)
95
invokeai/backend/image_util/infill_methods/tile.ipynb
Normal file
@@ -0,0 +1,95 @@
{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "\"\"\"Smoke test for the tile infill\"\"\"\n",
    "\n",
    "from pathlib import Path\n",
    "from typing import Optional\n",
    "from PIL import Image\n",
    "from invokeai.backend.image_util.infill_methods.tile import infill_tile\n",
    "\n",
    "images: list[tuple[str, Image.Image]] = []\n",
    "\n",
    "for i in sorted(Path(\"./test_images/\").glob(\"*.webp\")):\n",
    "    images.append((i.name, Image.open(i)))\n",
    "    images.append((i.name, Image.open(i).transpose(Image.FLIP_LEFT_RIGHT)))\n",
    "    images.append((i.name, Image.open(i).transpose(Image.FLIP_TOP_BOTTOM)))\n",
    "    images.append((i.name, Image.open(i).resize((512, 512))))\n",
    "    images.append((i.name, Image.open(i).resize((1234, 461))))\n",
    "\n",
    "outputs: list[tuple[str, Image.Image, Image.Image, Optional[Image.Image]]] = []\n",
    "\n",
    "for name, image in images:\n",
    "    try:\n",
    "        output = infill_tile(image, seed=0, tile_size=32)\n",
    "        outputs.append((name, image, output.infilled, output.tile_image))\n",
    "    except ValueError as e:\n",
    "        print(f\"Skipping image {name}: {e}\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Display the images in jupyter notebook\n",
    "import matplotlib.pyplot as plt\n",
    "from PIL import ImageOps\n",
    "\n",
    "fig, axes = plt.subplots(len(outputs), 3, figsize=(10, 3 * len(outputs)))\n",
    "plt.subplots_adjust(hspace=0)\n",
    "\n",
    "for i, (name, original, infilled, tile_image) in enumerate(outputs):\n",
    "    # Add a border to each image, helps to see the edges\n",
    "    size = original.size\n",
    "    original = ImageOps.expand(original, border=5, fill=\"red\")\n",
    "    filled = ImageOps.expand(infilled, border=5, fill=\"red\")\n",
    "    if tile_image:\n",
    "        tile_image = ImageOps.expand(tile_image, border=5, fill=\"red\")\n",
    "\n",
    "    axes[i, 0].imshow(original)\n",
    "    axes[i, 0].axis(\"off\")\n",
    "    axes[i, 0].set_title(f\"Original ({name} - {size})\")\n",
    "\n",
    "    if tile_image:\n",
    "        axes[i, 1].imshow(tile_image)\n",
    "        axes[i, 1].axis(\"off\")\n",
    "        axes[i, 1].set_title(\"Tile Image\")\n",
    "    else:\n",
    "        axes[i, 1].axis(\"off\")\n",
    "        axes[i, 1].set_title(\"NO TILES GENERATED (NO TRANSPARENCY)\")\n",
    "\n",
    "    axes[i, 2].imshow(filled)\n",
    "    axes[i, 2].axis(\"off\")\n",
    "    axes[i, 2].set_title(\"Filled\")"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": ".invokeai",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.12"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
122
invokeai/backend/image_util/infill_methods/tile.py
Normal file
@@ -0,0 +1,122 @@
from dataclasses import dataclass
from typing import Optional

import numpy as np
from PIL import Image


def create_tile_pool(img_array: np.ndarray, tile_size: tuple[int, int]) -> list[np.ndarray]:
    """
    Create a pool of tiles from non-transparent areas of the image by systematically walking through the image.

    Args:
        img_array: numpy array of the image.
        tile_size: tuple (tile_width, tile_height) specifying the size of each tile.

    Returns:
        A list of numpy arrays, each representing a tile.
    """
    tiles: list[np.ndarray] = []
    rows, cols = img_array.shape[:2]
    tile_width, tile_height = tile_size

    for y in range(0, rows - tile_height + 1, tile_height):
        for x in range(0, cols - tile_width + 1, tile_width):
            tile = img_array[y : y + tile_height, x : x + tile_width]
            # Check if the image has an alpha channel and the tile is completely opaque
            if img_array.shape[2] == 4 and np.all(tile[:, :, 3] == 255):
                tiles.append(tile)
            elif img_array.shape[2] == 3:  # If no alpha channel, append the tile
                tiles.append(tile)

    if not tiles:
        raise ValueError(
            "Not enough opaque pixels to generate any tiles. Use a smaller tile size or a different image."
        )

    return tiles


def create_filled_image(
    img_array: np.ndarray, tile_pool: list[np.ndarray], tile_size: tuple[int, int], seed: int
) -> np.ndarray:
    """
    Create an image of the same dimensions as the original, filled entirely with tiles from the pool.

    Args:
        img_array: numpy array of the original image.
        tile_pool: A list of numpy arrays, each representing a tile.
        tile_size: tuple (tile_width, tile_height) specifying the size of each tile.

    Returns:
        A numpy array representing the filled image.
    """
    rows, cols, _ = img_array.shape
    tile_width, tile_height = tile_size

    # Prep an empty RGB image
    filled_img_array = np.zeros((rows, cols, 3), dtype=img_array.dtype)

    # Make the random tile selection reproducible
    rng = np.random.default_rng(seed)

    for y in range(0, rows, tile_height):
        for x in range(0, cols, tile_width):
            # Pick a random tile from the pool
            tile = tile_pool[rng.integers(len(tile_pool))]

            # Calculate the space available (may be less than tile size near the edges)
            space_y = min(tile_height, rows - y)
            space_x = min(tile_width, cols - x)

            # Crop the tile if necessary to fit into the available space
            cropped_tile = tile[:space_y, :space_x, :3]

            # Fill the available space with the (possibly cropped) tile
            filled_img_array[y : y + space_y, x : x + space_x, :3] = cropped_tile

    return filled_img_array


@dataclass
class InfillTileOutput:
    infilled: Image.Image
    tile_image: Optional[Image.Image] = None


def infill_tile(image_to_infill: Image.Image, seed: int, tile_size: int) -> InfillTileOutput:
    """Infills an image with random tiles from the image itself.

    If the image is not an RGBA image, it is returned untouched.

    Args:
        image: The image to infill.
        tile_size: The size of the tiles to use for infilling.

    Raises:
        ValueError: If there are not enough opaque pixels to generate any tiles.
    """
    if image_to_infill.mode != "RGBA":
        return InfillTileOutput(infilled=image_to_infill)

    # Internally, we want a tuple of (tile_width, tile_height). In the future, the tile size can be any rectangle.
    _tile_size = (tile_size, tile_size)
    np_image = np.array(image_to_infill, dtype=np.uint8)

    # Create the pool of tiles that we will use to infill
    tile_pool = create_tile_pool(np_image, _tile_size)

    # Create an image from the tiles, same size as the original
    tile_np_image = create_filled_image(np_image, tile_pool, _tile_size, seed)

    # Paste the OG image over the tile image, effectively infilling the area
    tile_image = Image.fromarray(tile_np_image, "RGB")
    infilled = tile_image.copy()
    infilled.paste(image_to_infill, (0, 0), image_to_infill.split()[-1])

    # I think we want this to be "RGBA"?
    infilled.convert("RGBA")

    return InfillTileOutput(infilled=infilled, tile_image=tile_image)
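For context, a minimal usage sketch of the new infill helper; the test image, tile size, and output path below are assumptions for illustration, not part of this diff:

```python
# Minimal usage sketch (assumes the new module above is importable at this path).
from PIL import Image

from invokeai.backend.image_util.infill_methods.tile import infill_tile

# Build a 128x128 RGBA test image with a transparent square punched into the middle (assumed example data).
image = Image.new("RGBA", (128, 128), (200, 120, 80, 255))
hole = Image.new("RGBA", (32, 32), (0, 0, 0, 0))
image.paste(hole, (48, 48))

# Fill the transparent region with random 16x16 tiles sampled from the opaque areas.
result = infill_tile(image_to_infill=image, seed=42, tile_size=16)
result.infilled.save("infilled.png")  # output path is an assumption
```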
@@ -1,49 +0,0 @@
"""
This module defines a singleton object, "patchmatch" that
wraps the actual patchmatch object. It respects the global
"try_patchmatch" attribute, so that patchmatch loading can
be suppressed or deferred
"""

import numpy as np

import invokeai.backend.util.logging as logger
from invokeai.app.services.config.config_default import get_config


class PatchMatch:
    """
    Thin class wrapper around the patchmatch function.
    """

    patch_match = None
    tried_load: bool = False

    def __init__(self):
        super().__init__()

    @classmethod
    def _load_patch_match(self):
        if self.tried_load:
            return
        if get_config().patchmatch:
            from patchmatch import patch_match as pm

            if pm.patchmatch_available:
                logger.info("Patchmatch initialized")
            else:
                logger.info("Patchmatch not loaded (nonfatal)")
            self.patch_match = pm
        else:
            logger.info("Patchmatch loading disabled")
        self.tried_load = True

    @classmethod
    def patchmatch_available(self) -> bool:
        self._load_patch_match()
        return self.patch_match and self.patch_match.patchmatch_available

    @classmethod
    def inpaint(self, *args, **kwargs) -> np.ndarray:
        if self.patchmatch_available():
            return self.patch_match.inpaint(*args, **kwargs)
@@ -11,7 +11,7 @@ from cv2.typing import MatLike
from tqdm import tqdm

from invokeai.backend.image_util.basicsr.rrdbnet_arch import RRDBNet
-from invokeai.backend.util.devices import choose_torch_device
+from invokeai.backend.util.devices import TorchDevice

"""
Adapted from https://github.com/xinntao/Real-ESRGAN/blob/master/realesrgan/utils.py
@@ -65,7 +65,7 @@ class RealESRGAN:
        self.pre_pad = pre_pad
        self.mod_scale: Optional[int] = None
        self.half = half
-        self.device = choose_torch_device()
+        self.device = TorchDevice.choose_torch_device()

        loadnet = torch.load(model_path, map_location=torch.device("cpu"))

@@ -13,7 +13,7 @@ from transformers import AutoFeatureExtractor

import invokeai.backend.util.logging as logger
from invokeai.app.services.config.config_default import get_config
-from invokeai.backend.util.devices import choose_torch_device
+from invokeai.backend.util.devices import TorchDevice
from invokeai.backend.util.silence_warnings import SilenceWarnings

CHECKER_PATH = "core/convert/stable-diffusion-safety-checker"
@@ -51,7 +51,7 @@ class SafetyChecker:
        cls._load_safety_checker()
        if cls.safety_checker is None or cls.feature_extractor is None:
            return False
-        device = choose_torch_device()
+        device = TorchDevice.choose_torch_device()
        features = cls.feature_extractor([image], return_tensors="pt")
        features.to(device)
        cls.safety_checker.to(device)
|
||||||
|
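These hunks (and several later ones) make the same mechanical swap from the old free functions to the `TorchDevice` helper. A hedged sketch of the new call pattern, as it appears elsewhere in this diff; no methods beyond the two shown here are assumed:

```python
# Sketch of the device/dtype selection calls that replace choose_torch_device()/torch_dtype()
# throughout this diff. TorchDevice is the project helper introduced by the change.
from invokeai.backend.util.devices import TorchDevice

device = TorchDevice.choose_torch_device()  # was: choose_torch_device()
dtype = TorchDevice.choose_torch_dtype()    # was: torch_dtype(choose_torch_device())
```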
@ -1,182 +0,0 @@
|
|||||||
# copied from https://github.com/tencent-ailab/IP-Adapter (Apache License 2.0)
|
|
||||||
# and modified as needed
|
|
||||||
|
|
||||||
# tencent-ailab comment:
|
|
||||||
# modified from https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py
|
|
||||||
import torch
|
|
||||||
import torch.nn as nn
|
|
||||||
import torch.nn.functional as F
|
|
||||||
from diffusers.models.attention_processor import AttnProcessor2_0 as DiffusersAttnProcessor2_0
|
|
||||||
|
|
||||||
from invokeai.backend.ip_adapter.ip_attention_weights import IPAttentionProcessorWeights
|
|
||||||
|
|
||||||
|
|
||||||
# Create a version of AttnProcessor2_0 that is a sub-class of nn.Module. This is required for IP-Adapter state_dict
|
|
||||||
# loading.
|
|
||||||
class AttnProcessor2_0(DiffusersAttnProcessor2_0, nn.Module):
|
|
||||||
def __init__(self):
|
|
||||||
DiffusersAttnProcessor2_0.__init__(self)
|
|
||||||
nn.Module.__init__(self)
|
|
||||||
|
|
||||||
def __call__(
|
|
||||||
self,
|
|
||||||
attn,
|
|
||||||
hidden_states,
|
|
||||||
encoder_hidden_states=None,
|
|
||||||
attention_mask=None,
|
|
||||||
temb=None,
|
|
||||||
ip_adapter_image_prompt_embeds=None,
|
|
||||||
):
|
|
||||||
"""Re-definition of DiffusersAttnProcessor2_0.__call__(...) that accepts and ignores the
|
|
||||||
ip_adapter_image_prompt_embeds parameter.
|
|
||||||
"""
|
|
||||||
return DiffusersAttnProcessor2_0.__call__(
|
|
||||||
self, attn, hidden_states, encoder_hidden_states, attention_mask, temb
|
|
||||||
)
|
|
||||||
|
|
||||||
|
|
||||||
class IPAttnProcessor2_0(torch.nn.Module):
|
|
||||||
r"""
|
|
||||||
Attention processor for IP-Adapater for PyTorch 2.0.
|
|
||||||
Args:
|
|
||||||
hidden_size (`int`):
|
|
||||||
The hidden size of the attention layer.
|
|
||||||
cross_attention_dim (`int`):
|
|
||||||
The number of channels in the `encoder_hidden_states`.
|
|
||||||
scale (`float`, defaults to 1.0):
|
|
||||||
the weight scale of image prompt.
|
|
||||||
"""
|
|
||||||
|
|
||||||
def __init__(self, weights: list[IPAttentionProcessorWeights], scales: list[float]):
|
|
||||||
super().__init__()
|
|
||||||
|
|
||||||
if not hasattr(F, "scaled_dot_product_attention"):
|
|
||||||
raise ImportError("AttnProcessor2_0 requires PyTorch 2.0, to use it, please upgrade PyTorch to 2.0.")
|
|
||||||
|
|
||||||
assert len(weights) == len(scales)
|
|
||||||
|
|
||||||
self._weights = weights
|
|
||||||
self._scales = scales
|
|
||||||
|
|
||||||
def __call__(
|
|
||||||
self,
|
|
||||||
attn,
|
|
||||||
hidden_states,
|
|
||||||
encoder_hidden_states=None,
|
|
||||||
attention_mask=None,
|
|
||||||
temb=None,
|
|
||||||
ip_adapter_image_prompt_embeds=None,
|
|
||||||
):
|
|
||||||
"""Apply IP-Adapter attention.
|
|
||||||
|
|
||||||
Args:
|
|
||||||
ip_adapter_image_prompt_embeds (torch.Tensor): The image prompt embeddings.
|
|
||||||
Shape: (batch_size, num_ip_images, seq_len, ip_embedding_len).
|
|
||||||
"""
|
|
||||||
residual = hidden_states
|
|
||||||
|
|
||||||
if attn.spatial_norm is not None:
|
|
||||||
hidden_states = attn.spatial_norm(hidden_states, temb)
|
|
||||||
|
|
||||||
input_ndim = hidden_states.ndim
|
|
||||||
|
|
||||||
if input_ndim == 4:
|
|
||||||
batch_size, channel, height, width = hidden_states.shape
|
|
||||||
hidden_states = hidden_states.view(batch_size, channel, height * width).transpose(1, 2)
|
|
||||||
|
|
||||||
batch_size, sequence_length, _ = (
|
|
||||||
hidden_states.shape if encoder_hidden_states is None else encoder_hidden_states.shape
|
|
||||||
)
|
|
||||||
|
|
||||||
if attention_mask is not None:
|
|
||||||
attention_mask = attn.prepare_attention_mask(attention_mask, sequence_length, batch_size)
|
|
||||||
# scaled_dot_product_attention expects attention_mask shape to be
|
|
||||||
# (batch, heads, source_length, target_length)
|
|
||||||
attention_mask = attention_mask.view(batch_size, attn.heads, -1, attention_mask.shape[-1])
|
|
||||||
|
|
||||||
if attn.group_norm is not None:
|
|
||||||
hidden_states = attn.group_norm(hidden_states.transpose(1, 2)).transpose(1, 2)
|
|
||||||
|
|
||||||
query = attn.to_q(hidden_states)
|
|
||||||
|
|
||||||
if encoder_hidden_states is None:
|
|
||||||
encoder_hidden_states = hidden_states
|
|
||||||
elif attn.norm_cross:
|
|
||||||
encoder_hidden_states = attn.norm_encoder_hidden_states(encoder_hidden_states)
|
|
||||||
|
|
||||||
key = attn.to_k(encoder_hidden_states)
|
|
||||||
value = attn.to_v(encoder_hidden_states)
|
|
||||||
|
|
||||||
inner_dim = key.shape[-1]
|
|
||||||
head_dim = inner_dim // attn.heads
|
|
||||||
|
|
||||||
query = query.view(batch_size, -1, attn.heads, head_dim).transpose(1, 2)
|
|
||||||
|
|
||||||
key = key.view(batch_size, -1, attn.heads, head_dim).transpose(1, 2)
|
|
||||||
value = value.view(batch_size, -1, attn.heads, head_dim).transpose(1, 2)
|
|
||||||
|
|
||||||
# the output of sdp = (batch, num_heads, seq_len, head_dim)
|
|
||||||
# TODO: add support for attn.scale when we move to Torch 2.1
|
|
||||||
hidden_states = F.scaled_dot_product_attention(
|
|
||||||
query, key, value, attn_mask=attention_mask, dropout_p=0.0, is_causal=False
|
|
||||||
)
|
|
||||||
|
|
||||||
hidden_states = hidden_states.transpose(1, 2).reshape(batch_size, -1, attn.heads * head_dim)
|
|
||||||
hidden_states = hidden_states.to(query.dtype)
|
|
||||||
|
|
||||||
if encoder_hidden_states is not None:
|
|
||||||
# If encoder_hidden_states is not None, then we are doing cross-attention, not self-attention. In this case,
|
|
||||||
# we will apply IP-Adapter conditioning. We validate the inputs for IP-Adapter conditioning here.
|
|
||||||
assert ip_adapter_image_prompt_embeds is not None
|
|
||||||
assert len(ip_adapter_image_prompt_embeds) == len(self._weights)
|
|
||||||
|
|
||||||
for ipa_embed, ipa_weights, scale in zip(
|
|
||||||
ip_adapter_image_prompt_embeds, self._weights, self._scales, strict=True
|
|
||||||
):
|
|
||||||
# The batch dimensions should match.
|
|
||||||
assert ipa_embed.shape[0] == encoder_hidden_states.shape[0]
|
|
||||||
# The token_len dimensions should match.
|
|
||||||
assert ipa_embed.shape[-1] == encoder_hidden_states.shape[-1]
|
|
||||||
|
|
||||||
ip_hidden_states = ipa_embed
|
|
||||||
|
|
||||||
# Expected ip_hidden_state shape: (batch_size, num_ip_images, ip_seq_len, ip_image_embedding)
|
|
||||||
|
|
||||||
ip_key = ipa_weights.to_k_ip(ip_hidden_states)
|
|
||||||
ip_value = ipa_weights.to_v_ip(ip_hidden_states)
|
|
||||||
|
|
||||||
# Expected ip_key and ip_value shape: (batch_size, num_ip_images, ip_seq_len, head_dim * num_heads)
|
|
||||||
|
|
||||||
ip_key = ip_key.view(batch_size, -1, attn.heads, head_dim).transpose(1, 2)
|
|
||||||
ip_value = ip_value.view(batch_size, -1, attn.heads, head_dim).transpose(1, 2)
|
|
||||||
|
|
||||||
# Expected ip_key and ip_value shape: (batch_size, num_heads, num_ip_images * ip_seq_len, head_dim)
|
|
||||||
|
|
||||||
# TODO: add support for attn.scale when we move to Torch 2.1
|
|
||||||
ip_hidden_states = F.scaled_dot_product_attention(
|
|
||||||
query, ip_key, ip_value, attn_mask=None, dropout_p=0.0, is_causal=False
|
|
||||||
)
|
|
||||||
|
|
||||||
# Expected ip_hidden_states shape: (batch_size, num_heads, query_seq_len, head_dim)
|
|
||||||
|
|
||||||
ip_hidden_states = ip_hidden_states.transpose(1, 2).reshape(batch_size, -1, attn.heads * head_dim)
|
|
||||||
ip_hidden_states = ip_hidden_states.to(query.dtype)
|
|
||||||
|
|
||||||
# Expected ip_hidden_states shape: (batch_size, query_seq_len, num_heads * head_dim)
|
|
||||||
|
|
||||||
hidden_states = hidden_states + scale * ip_hidden_states
|
|
||||||
|
|
||||||
# linear proj
|
|
||||||
hidden_states = attn.to_out[0](hidden_states)
|
|
||||||
# dropout
|
|
||||||
hidden_states = attn.to_out[1](hidden_states)
|
|
||||||
|
|
||||||
if input_ndim == 4:
|
|
||||||
hidden_states = hidden_states.transpose(-1, -2).reshape(batch_size, channel, height, width)
|
|
||||||
|
|
||||||
if attn.residual_connection:
|
|
||||||
hidden_states = hidden_states + residual
|
|
||||||
|
|
||||||
hidden_states = hidden_states / attn.rescale_output_factor
|
|
||||||
|
|
||||||
return hidden_states
|
|
@@ -1,8 +1,11 @@
# copied from https://github.com/tencent-ailab/IP-Adapter (Apache License 2.0)
# and modified as needed

-from typing import Optional, Union
+import pathlib
+from typing import List, Optional, TypedDict, Union

+import safetensors
+import safetensors.torch
import torch
from PIL import Image
from transformers import CLIPImageProcessor, CLIPVisionModelWithProjection
@@ -13,10 +16,17 @@ from ..raw_model import RawModel
from .resampler import Resampler


+class IPAdapterStateDict(TypedDict):
+    ip_adapter: dict[str, torch.Tensor]
+    image_proj: dict[str, torch.Tensor]
+
+
class ImageProjModel(torch.nn.Module):
    """Image Projection Model"""

-    def __init__(self, cross_attention_dim=1024, clip_embeddings_dim=1024, clip_extra_context_tokens=4):
+    def __init__(
+        self, cross_attention_dim: int = 1024, clip_embeddings_dim: int = 1024, clip_extra_context_tokens: int = 4
+    ):
        super().__init__()

        self.cross_attention_dim = cross_attention_dim
@ -25,7 +35,7 @@ class ImageProjModel(torch.nn.Module):
|
|||||||
self.norm = torch.nn.LayerNorm(cross_attention_dim)
|
self.norm = torch.nn.LayerNorm(cross_attention_dim)
|
||||||
|
|
||||||
@classmethod
|
@classmethod
|
||||||
def from_state_dict(cls, state_dict: dict[torch.Tensor], clip_extra_context_tokens=4):
|
def from_state_dict(cls, state_dict: dict[str, torch.Tensor], clip_extra_context_tokens: int = 4):
|
||||||
"""Initialize an ImageProjModel from a state_dict.
|
"""Initialize an ImageProjModel from a state_dict.
|
||||||
|
|
||||||
The cross_attention_dim and clip_embeddings_dim are inferred from the shape of the tensors in the state_dict.
|
The cross_attention_dim and clip_embeddings_dim are inferred from the shape of the tensors in the state_dict.
|
||||||
@ -45,7 +55,7 @@ class ImageProjModel(torch.nn.Module):
|
|||||||
model.load_state_dict(state_dict)
|
model.load_state_dict(state_dict)
|
||||||
return model
|
return model
|
||||||
|
|
||||||
def forward(self, image_embeds):
|
def forward(self, image_embeds: torch.Tensor):
|
||||||
embeds = image_embeds
|
embeds = image_embeds
|
||||||
clip_extra_context_tokens = self.proj(embeds).reshape(
|
clip_extra_context_tokens = self.proj(embeds).reshape(
|
||||||
-1, self.clip_extra_context_tokens, self.cross_attention_dim
|
-1, self.clip_extra_context_tokens, self.cross_attention_dim
|
||||||
@ -57,7 +67,7 @@ class ImageProjModel(torch.nn.Module):
|
|||||||
class MLPProjModel(torch.nn.Module):
|
class MLPProjModel(torch.nn.Module):
|
||||||
"""SD model with image prompt"""
|
"""SD model with image prompt"""
|
||||||
|
|
||||||
def __init__(self, cross_attention_dim=1024, clip_embeddings_dim=1024):
|
def __init__(self, cross_attention_dim: int = 1024, clip_embeddings_dim: int = 1024):
|
||||||
super().__init__()
|
super().__init__()
|
||||||
|
|
||||||
self.proj = torch.nn.Sequential(
|
self.proj = torch.nn.Sequential(
|
||||||
@ -68,7 +78,7 @@ class MLPProjModel(torch.nn.Module):
|
|||||||
)
|
)
|
||||||
|
|
||||||
@classmethod
|
@classmethod
|
||||||
def from_state_dict(cls, state_dict: dict[torch.Tensor]):
|
def from_state_dict(cls, state_dict: dict[str, torch.Tensor]):
|
||||||
"""Initialize an MLPProjModel from a state_dict.
|
"""Initialize an MLPProjModel from a state_dict.
|
||||||
|
|
||||||
The cross_attention_dim and clip_embeddings_dim are inferred from the shape of the tensors in the state_dict.
|
The cross_attention_dim and clip_embeddings_dim are inferred from the shape of the tensors in the state_dict.
|
||||||
@ -87,7 +97,7 @@ class MLPProjModel(torch.nn.Module):
|
|||||||
model.load_state_dict(state_dict)
|
model.load_state_dict(state_dict)
|
||||||
return model
|
return model
|
||||||
|
|
||||||
def forward(self, image_embeds):
|
def forward(self, image_embeds: torch.Tensor):
|
||||||
clip_extra_context_tokens = self.proj(image_embeds)
|
clip_extra_context_tokens = self.proj(image_embeds)
|
||||||
return clip_extra_context_tokens
|
return clip_extra_context_tokens
|
||||||
|
|
||||||
@ -97,7 +107,7 @@ class IPAdapter(RawModel):
|
|||||||
|
|
||||||
def __init__(
|
def __init__(
|
||||||
self,
|
self,
|
||||||
state_dict: dict[str, torch.Tensor],
|
state_dict: IPAdapterStateDict,
|
||||||
device: torch.device,
|
device: torch.device,
|
||||||
dtype: torch.dtype = torch.float16,
|
dtype: torch.dtype = torch.float16,
|
||||||
num_tokens: int = 4,
|
num_tokens: int = 4,
|
||||||
@@ -129,24 +139,27 @@ class IPAdapter(RawModel):

        return calc_model_size_by_data(self._image_proj_model) + calc_model_size_by_data(self.attn_weights)

-    def _init_image_proj_model(self, state_dict):
+    def _init_image_proj_model(
+        self, state_dict: dict[str, torch.Tensor]
+    ) -> Union[ImageProjModel, Resampler, MLPProjModel]:
        return ImageProjModel.from_state_dict(state_dict, self._num_tokens).to(self.device, dtype=self.dtype)

    @torch.inference_mode()
-    def get_image_embeds(self, pil_image, image_encoder: CLIPVisionModelWithProjection):
-        if isinstance(pil_image, Image.Image):
-            pil_image = [pil_image]
+    def get_image_embeds(self, pil_image: List[Image.Image], image_encoder: CLIPVisionModelWithProjection):
        clip_image = self._clip_image_processor(images=pil_image, return_tensors="pt").pixel_values
        clip_image_embeds = image_encoder(clip_image.to(self.device, dtype=self.dtype)).image_embeds
-        image_prompt_embeds = self._image_proj_model(clip_image_embeds)
-        uncond_image_prompt_embeds = self._image_proj_model(torch.zeros_like(clip_image_embeds))
-        return image_prompt_embeds, uncond_image_prompt_embeds
+        try:
+            image_prompt_embeds = self._image_proj_model(clip_image_embeds)
+            uncond_image_prompt_embeds = self._image_proj_model(torch.zeros_like(clip_image_embeds))
+            return image_prompt_embeds, uncond_image_prompt_embeds
+        except RuntimeError as e:
+            raise RuntimeError("Selected CLIP Vision Model is incompatible with the current IP Adapter") from e


class IPAdapterPlus(IPAdapter):
    """IP-Adapter with fine-grained features"""

-    def _init_image_proj_model(self, state_dict):
+    def _init_image_proj_model(self, state_dict: dict[str, torch.Tensor]) -> Union[Resampler, MLPProjModel]:
        return Resampler.from_state_dict(
            state_dict=state_dict,
            depth=4,
|
||||||
@ -157,31 +170,32 @@ class IPAdapterPlus(IPAdapter):
|
|||||||
).to(self.device, dtype=self.dtype)
|
).to(self.device, dtype=self.dtype)
|
||||||
|
|
||||||
@torch.inference_mode()
|
@torch.inference_mode()
|
||||||
def get_image_embeds(self, pil_image, image_encoder: CLIPVisionModelWithProjection):
|
def get_image_embeds(self, pil_image: List[Image.Image], image_encoder: CLIPVisionModelWithProjection):
|
||||||
if isinstance(pil_image, Image.Image):
|
|
||||||
pil_image = [pil_image]
|
|
||||||
clip_image = self._clip_image_processor(images=pil_image, return_tensors="pt").pixel_values
|
clip_image = self._clip_image_processor(images=pil_image, return_tensors="pt").pixel_values
|
||||||
clip_image = clip_image.to(self.device, dtype=self.dtype)
|
clip_image = clip_image.to(self.device, dtype=self.dtype)
|
||||||
clip_image_embeds = image_encoder(clip_image, output_hidden_states=True).hidden_states[-2]
|
clip_image_embeds = image_encoder(clip_image, output_hidden_states=True).hidden_states[-2]
|
||||||
image_prompt_embeds = self._image_proj_model(clip_image_embeds)
|
|
||||||
uncond_clip_image_embeds = image_encoder(torch.zeros_like(clip_image), output_hidden_states=True).hidden_states[
|
uncond_clip_image_embeds = image_encoder(torch.zeros_like(clip_image), output_hidden_states=True).hidden_states[
|
||||||
-2
|
-2
|
||||||
]
|
]
|
||||||
uncond_image_prompt_embeds = self._image_proj_model(uncond_clip_image_embeds)
|
try:
|
||||||
return image_prompt_embeds, uncond_image_prompt_embeds
|
image_prompt_embeds = self._image_proj_model(clip_image_embeds)
|
||||||
|
uncond_image_prompt_embeds = self._image_proj_model(uncond_clip_image_embeds)
|
||||||
|
return image_prompt_embeds, uncond_image_prompt_embeds
|
||||||
|
except RuntimeError as e:
|
||||||
|
raise RuntimeError("Selected CLIP Vision Model is incompatible with the current IP Adapter") from e
|
||||||
|
|
||||||
|
|
||||||
class IPAdapterFull(IPAdapterPlus):
|
class IPAdapterFull(IPAdapterPlus):
|
||||||
"""IP-Adapter Plus with full features."""
|
"""IP-Adapter Plus with full features."""
|
||||||
|
|
||||||
def _init_image_proj_model(self, state_dict: dict[torch.Tensor]):
|
def _init_image_proj_model(self, state_dict: dict[str, torch.Tensor]):
|
||||||
return MLPProjModel.from_state_dict(state_dict).to(self.device, dtype=self.dtype)
|
return MLPProjModel.from_state_dict(state_dict).to(self.device, dtype=self.dtype)
|
||||||
|
|
||||||
|
|
||||||
class IPAdapterPlusXL(IPAdapterPlus):
|
class IPAdapterPlusXL(IPAdapterPlus):
|
||||||
"""IP-Adapter Plus for SDXL."""
|
"""IP-Adapter Plus for SDXL."""
|
||||||
|
|
||||||
def _init_image_proj_model(self, state_dict):
|
def _init_image_proj_model(self, state_dict: dict[str, torch.Tensor]):
|
||||||
return Resampler.from_state_dict(
|
return Resampler.from_state_dict(
|
||||||
state_dict=state_dict,
|
state_dict=state_dict,
|
||||||
depth=4,
|
depth=4,
|
||||||
@@ -192,24 +206,48 @@ class IPAdapterPlusXL(IPAdapterPlus):
        ).to(self.device, dtype=self.dtype)


-def build_ip_adapter(
-    ip_adapter_ckpt_path: str, device: torch.device, dtype: torch.dtype = torch.float16
-) -> Union[IPAdapter, IPAdapterPlus]:
-    state_dict = torch.load(ip_adapter_ckpt_path, map_location="cpu")
+def load_ip_adapter_tensors(ip_adapter_ckpt_path: pathlib.Path, device: str) -> IPAdapterStateDict:
+    state_dict: IPAdapterStateDict = {"ip_adapter": {}, "image_proj": {}}
+
+    if ip_adapter_ckpt_path.suffix == ".safetensors":
+        model = safetensors.torch.load_file(ip_adapter_ckpt_path, device=device)
+        for key in model.keys():
+            if key.startswith("image_proj."):
+                state_dict["image_proj"][key.replace("image_proj.", "")] = model[key]
+            elif key.startswith("ip_adapter."):
+                state_dict["ip_adapter"][key.replace("ip_adapter.", "")] = model[key]
+            else:
+                raise RuntimeError(f"Encountered unexpected IP Adapter state dict key: '{key}'.")
+    else:
+        ip_adapter_diffusers_checkpoint_path = ip_adapter_ckpt_path / "ip_adapter.bin"
+        state_dict = torch.load(ip_adapter_diffusers_checkpoint_path, map_location="cpu")
+
+    return state_dict
+
+
+def build_ip_adapter(
+    ip_adapter_ckpt_path: pathlib.Path, device: torch.device, dtype: torch.dtype = torch.float16
+) -> Union[IPAdapter, IPAdapterPlus, IPAdapterPlusXL, IPAdapterPlus]:
+    state_dict = load_ip_adapter_tensors(ip_adapter_ckpt_path, device.type)

-    if "proj.weight" in state_dict["image_proj"]:  # IPAdapter (with ImageProjModel).
+    # IPAdapter (with ImageProjModel)
+    if "proj.weight" in state_dict["image_proj"]:
        return IPAdapter(state_dict, device=device, dtype=dtype)
-    elif "proj_in.weight" in state_dict["image_proj"]:  # IPAdaterPlus or IPAdapterPlusXL (with Resampler).
+
+    # IPAdaterPlus or IPAdapterPlusXL (with Resampler)
+    elif "proj_in.weight" in state_dict["image_proj"]:
        cross_attention_dim = state_dict["ip_adapter"]["1.to_k_ip.weight"].shape[-1]
        if cross_attention_dim == 768:
-            # SD1 IP-Adapter Plus
-            return IPAdapterPlus(state_dict, device=device, dtype=dtype)
+            return IPAdapterPlus(state_dict, device=device, dtype=dtype)  # SD1 IP-Adapter Plus
        elif cross_attention_dim == 2048:
-            # SDXL IP-Adapter Plus
-            return IPAdapterPlusXL(state_dict, device=device, dtype=dtype)
+            return IPAdapterPlusXL(state_dict, device=device, dtype=dtype)  # SDXL IP-Adapter Plus
        else:
            raise Exception(f"Unsupported IP-Adapter Plus cross-attention dimension: {cross_attention_dim}.")
-    elif "proj.0.weight" in state_dict["image_proj"]:  # IPAdapterFull (with MLPProjModel).
+
+    # IPAdapterFull (with MLPProjModel)
+    elif "proj.0.weight" in state_dict["image_proj"]:
        return IPAdapterFull(state_dict, device=device, dtype=dtype)
+
+    # Unrecognized IP Adapter Architectures
    else:
        raise ValueError(f"'{ip_adapter_ckpt_path}' has an unrecognized IP-Adapter model architecture.")
|
||||||
|
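A minimal sketch of how the reworked loader might be invoked directly; the checkpoint path below is an assumption for illustration, not a file shipped with this change:

```python
# Hypothetical direct use of the new build_ip_adapter() entry point (the checkpoint path is an assumed example).
import pathlib

import torch

from invokeai.backend.ip_adapter.ip_adapter import build_ip_adapter

ckpt = pathlib.Path("models/ip_adapter/ip-adapter-plus_sd15.safetensors")
ip_adapter = build_ip_adapter(ip_adapter_ckpt_path=ckpt, device=torch.device("cpu"), dtype=torch.float32)
print(type(ip_adapter).__name__)  # e.g. IPAdapterPlus for an SD1 "plus" checkpoint
```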
@ -9,8 +9,8 @@ import torch.nn as nn
|
|||||||
|
|
||||||
|
|
||||||
# FFN
|
# FFN
|
||||||
def FeedForward(dim, mult=4):
|
def FeedForward(dim: int, mult: int = 4):
|
||||||
inner_dim = int(dim * mult)
|
inner_dim = dim * mult
|
||||||
return nn.Sequential(
|
return nn.Sequential(
|
||||||
nn.LayerNorm(dim),
|
nn.LayerNorm(dim),
|
||||||
nn.Linear(dim, inner_dim, bias=False),
|
nn.Linear(dim, inner_dim, bias=False),
|
||||||
@ -19,8 +19,8 @@ def FeedForward(dim, mult=4):
|
|||||||
)
|
)
|
||||||
|
|
||||||
|
|
||||||
def reshape_tensor(x, heads):
|
def reshape_tensor(x: torch.Tensor, heads: int):
|
||||||
bs, length, width = x.shape
|
bs, length, _ = x.shape
|
||||||
# (bs, length, width) --> (bs, length, n_heads, dim_per_head)
|
# (bs, length, width) --> (bs, length, n_heads, dim_per_head)
|
||||||
x = x.view(bs, length, heads, -1)
|
x = x.view(bs, length, heads, -1)
|
||||||
# (bs, length, n_heads, dim_per_head) --> (bs, n_heads, length, dim_per_head)
|
# (bs, length, n_heads, dim_per_head) --> (bs, n_heads, length, dim_per_head)
|
||||||
@ -31,7 +31,7 @@ def reshape_tensor(x, heads):
|
|||||||
|
|
||||||
|
|
||||||
class PerceiverAttention(nn.Module):
|
class PerceiverAttention(nn.Module):
|
||||||
def __init__(self, *, dim, dim_head=64, heads=8):
|
def __init__(self, *, dim: int, dim_head: int = 64, heads: int = 8):
|
||||||
super().__init__()
|
super().__init__()
|
||||||
self.scale = dim_head**-0.5
|
self.scale = dim_head**-0.5
|
||||||
self.dim_head = dim_head
|
self.dim_head = dim_head
|
||||||
@ -45,7 +45,7 @@ class PerceiverAttention(nn.Module):
|
|||||||
self.to_kv = nn.Linear(dim, inner_dim * 2, bias=False)
|
self.to_kv = nn.Linear(dim, inner_dim * 2, bias=False)
|
||||||
self.to_out = nn.Linear(inner_dim, dim, bias=False)
|
self.to_out = nn.Linear(inner_dim, dim, bias=False)
|
||||||
|
|
||||||
def forward(self, x, latents):
|
def forward(self, x: torch.Tensor, latents: torch.Tensor):
|
||||||
"""
|
"""
|
||||||
Args:
|
Args:
|
||||||
x (torch.Tensor): image features
|
x (torch.Tensor): image features
|
||||||
@ -80,14 +80,14 @@ class PerceiverAttention(nn.Module):
|
|||||||
class Resampler(nn.Module):
|
class Resampler(nn.Module):
|
||||||
def __init__(
|
def __init__(
|
||||||
self,
|
self,
|
||||||
dim=1024,
|
dim: int = 1024,
|
||||||
depth=8,
|
depth: int = 8,
|
||||||
dim_head=64,
|
dim_head: int = 64,
|
||||||
heads=16,
|
heads: int = 16,
|
||||||
num_queries=8,
|
num_queries: int = 8,
|
||||||
embedding_dim=768,
|
embedding_dim: int = 768,
|
||||||
output_dim=1024,
|
output_dim: int = 1024,
|
||||||
ff_mult=4,
|
ff_mult: int = 4,
|
||||||
):
|
):
|
||||||
super().__init__()
|
super().__init__()
|
||||||
|
|
||||||
@ -110,7 +110,15 @@ class Resampler(nn.Module):
|
|||||||
)
|
)
|
||||||
|
|
||||||
@classmethod
|
@classmethod
|
||||||
def from_state_dict(cls, state_dict: dict[torch.Tensor], depth=8, dim_head=64, heads=16, num_queries=8, ff_mult=4):
|
def from_state_dict(
|
||||||
|
cls,
|
||||||
|
state_dict: dict[str, torch.Tensor],
|
||||||
|
depth: int = 8,
|
||||||
|
dim_head: int = 64,
|
||||||
|
heads: int = 16,
|
||||||
|
num_queries: int = 8,
|
||||||
|
ff_mult: int = 4,
|
||||||
|
):
|
||||||
"""A convenience function that initializes a Resampler from a state_dict.
|
"""A convenience function that initializes a Resampler from a state_dict.
|
||||||
|
|
||||||
Some of the shape parameters are inferred from the state_dict (e.g. dim, embedding_dim, etc.). At the time of
|
Some of the shape parameters are inferred from the state_dict (e.g. dim, embedding_dim, etc.). At the time of
|
||||||
@ -145,7 +153,7 @@ class Resampler(nn.Module):
|
|||||||
model.load_state_dict(state_dict)
|
model.load_state_dict(state_dict)
|
||||||
return model
|
return model
|
||||||
|
|
||||||
def forward(self, x):
|
def forward(self, x: torch.Tensor):
|
||||||
latents = self.latents.repeat(x.size(0), 1, 1)
|
latents = self.latents.repeat(x.size(0), 1, 1)
|
||||||
|
|
||||||
x = self.proj_in(x)
|
x = self.proj_in(x)
|
||||||
|
@@ -323,10 +323,13 @@ class MainDiffusersConfig(DiffusersConfigBase, MainConfigBase):
        return Tag(f"{ModelType.Main.value}.{ModelFormat.Diffusers.value}")


-class IPAdapterConfig(ModelConfigBase):
-    """Model config for IP Adaptor format models."""
-
+class IPAdapterBaseConfig(ModelConfigBase):
    type: Literal[ModelType.IPAdapter] = ModelType.IPAdapter

+
+class IPAdapterInvokeAIConfig(IPAdapterBaseConfig):
+    """Model config for IP Adapter diffusers format models."""
+
    image_encoder_model_id: str
    format: Literal[ModelFormat.InvokeAI]

@@ -335,6 +338,16 @@ class IPAdapterConfig(ModelConfigBase):
        return Tag(f"{ModelType.IPAdapter.value}.{ModelFormat.InvokeAI.value}")


+class IPAdapterCheckpointConfig(IPAdapterBaseConfig):
+    """Model config for IP Adapter checkpoint format models."""
+
+    format: Literal[ModelFormat.Checkpoint]
+
+    @staticmethod
+    def get_tag() -> Tag:
+        return Tag(f"{ModelType.IPAdapter.value}.{ModelFormat.Checkpoint.value}")
+
+
class CLIPVisionDiffusersConfig(DiffusersConfigBase):
    """Model config for CLIPVision."""

@@ -390,7 +403,8 @@ AnyModelConfig = Annotated[
        Annotated[LoRADiffusersConfig, LoRADiffusersConfig.get_tag()],
        Annotated[TextualInversionFileConfig, TextualInversionFileConfig.get_tag()],
        Annotated[TextualInversionFolderConfig, TextualInversionFolderConfig.get_tag()],
-        Annotated[IPAdapterConfig, IPAdapterConfig.get_tag()],
+        Annotated[IPAdapterInvokeAIConfig, IPAdapterInvokeAIConfig.get_tag()],
+        Annotated[IPAdapterCheckpointConfig, IPAdapterCheckpointConfig.get_tag()],
        Annotated[T2IAdapterConfig, T2IAdapterConfig.get_tag()],
        Annotated[CLIPVisionDiffusersConfig, CLIPVisionDiffusersConfig.get_tag()],
    ],
|
||||||
|
@ -3,10 +3,10 @@
|
|||||||
"""Conversion script for the Stable Diffusion checkpoints."""
|
"""Conversion script for the Stable Diffusion checkpoints."""
|
||||||
|
|
||||||
from pathlib import Path
|
from pathlib import Path
|
||||||
from typing import Dict, Optional
|
from typing import Optional
|
||||||
|
|
||||||
import torch
|
import torch
|
||||||
from diffusers import AutoencoderKL
|
from diffusers.models.autoencoders.autoencoder_kl import AutoencoderKL
|
||||||
from diffusers.pipelines.stable_diffusion.convert_from_ckpt import (
|
from diffusers.pipelines.stable_diffusion.convert_from_ckpt import (
|
||||||
convert_ldm_vae_checkpoint,
|
convert_ldm_vae_checkpoint,
|
||||||
create_vae_diffusers_config,
|
create_vae_diffusers_config,
|
||||||
@ -19,9 +19,10 @@ from . import AnyModel
|
|||||||
|
|
||||||
|
|
||||||
def convert_ldm_vae_to_diffusers(
|
def convert_ldm_vae_to_diffusers(
|
||||||
checkpoint: Dict[str, torch.Tensor],
|
checkpoint: torch.Tensor | dict[str, torch.Tensor],
|
||||||
vae_config: DictConfig,
|
vae_config: DictConfig,
|
||||||
image_size: int,
|
image_size: int,
|
||||||
|
dump_path: Optional[Path] = None,
|
||||||
precision: torch.dtype = torch.float16,
|
precision: torch.dtype = torch.float16,
|
||||||
) -> AutoencoderKL:
|
) -> AutoencoderKL:
|
||||||
"""Convert a checkpoint-style VAE into a Diffusers VAE"""
|
"""Convert a checkpoint-style VAE into a Diffusers VAE"""
|
||||||
@ -30,7 +31,12 @@ def convert_ldm_vae_to_diffusers(
|
|||||||
|
|
||||||
vae = AutoencoderKL(**vae_config)
|
vae = AutoencoderKL(**vae_config)
|
||||||
vae.load_state_dict(converted_vae_checkpoint)
|
vae.load_state_dict(converted_vae_checkpoint)
|
||||||
return vae.to(precision)
|
vae.to(precision)
|
||||||
|
|
||||||
|
if dump_path:
|
||||||
|
vae.save_pretrained(dump_path, safe_serialization=True)
|
||||||
|
|
||||||
|
return vae
|
||||||
|
|
||||||
|
|
||||||
def convert_ckpt_to_diffusers(
|
def convert_ckpt_to_diffusers(
|
||||||
|
@ -18,7 +18,7 @@ from invokeai.backend.model_manager.load.load_base import LoadedModel, ModelLoad
|
|||||||
from invokeai.backend.model_manager.load.model_cache.model_cache_base import ModelCacheBase, ModelLockerBase
|
from invokeai.backend.model_manager.load.model_cache.model_cache_base import ModelCacheBase, ModelLockerBase
|
||||||
from invokeai.backend.model_manager.load.model_util import calc_model_size_by_data, calc_model_size_by_fs
|
from invokeai.backend.model_manager.load.model_util import calc_model_size_by_data, calc_model_size_by_fs
|
||||||
from invokeai.backend.model_manager.load.optimizations import skip_torch_weight_init
|
from invokeai.backend.model_manager.load.optimizations import skip_torch_weight_init
|
||||||
from invokeai.backend.util.devices import choose_torch_device, torch_dtype
|
from invokeai.backend.util.devices import TorchDevice
|
||||||
|
|
||||||
|
|
||||||
# TO DO: The loader is not thread safe!
|
# TO DO: The loader is not thread safe!
|
||||||
@ -37,7 +37,7 @@ class ModelLoader(ModelLoaderBase):
|
|||||||
self._logger = logger
|
self._logger = logger
|
||||||
self._ram_cache = ram_cache
|
self._ram_cache = ram_cache
|
||||||
self._convert_cache = convert_cache
|
self._convert_cache = convert_cache
|
||||||
self._torch_dtype = torch_dtype(choose_torch_device(), app_config)
|
self._torch_dtype = TorchDevice.choose_torch_dtype()
|
||||||
|
|
||||||
def load_model(self, model_config: AnyModelConfig, submodel_type: Optional[SubModelType] = None) -> LoadedModel:
|
def load_model(self, model_config: AnyModelConfig, submodel_type: Optional[SubModelType] = None) -> LoadedModel:
|
||||||
"""
|
"""
|
||||||
|
@ -121,7 +121,7 @@ class ModelCacheBase(ABC, Generic[T]):
|
|||||||
|
|
||||||
@property
|
@property
|
||||||
@abstractmethod
|
@abstractmethod
|
||||||
def stats(self) -> CacheStats:
|
def stats(self) -> Optional[CacheStats]:
|
||||||
"""Return collected CacheStats object."""
|
"""Return collected CacheStats object."""
|
||||||
pass
|
pass
|
||||||
|
|
||||||
|
@ -30,6 +30,7 @@ import torch
|
|||||||
|
|
||||||
from invokeai.backend.model_manager import AnyModel, SubModelType
|
from invokeai.backend.model_manager import AnyModel, SubModelType
|
||||||
from invokeai.backend.model_manager.load.memory_snapshot import MemorySnapshot
|
from invokeai.backend.model_manager.load.memory_snapshot import MemorySnapshot
|
||||||
|
from invokeai.backend.util.devices import TorchDevice
|
||||||
from invokeai.backend.util.logging import InvokeAILogger
|
from invokeai.backend.util.logging import InvokeAILogger
|
||||||
|
|
||||||
from .model_cache_base import CacheRecord, CacheStats, ModelCacheBase, ModelLockerBase
|
from .model_cache_base import CacheRecord, CacheStats, ModelCacheBase, ModelLockerBase
|
||||||
@ -299,11 +300,11 @@ class ModelCache(ModelCacheBase[AnyModel]):
|
|||||||
f" {in_ram_models}/{in_vram_models}({locked_in_vram_models})"
|
f" {in_ram_models}/{in_vram_models}({locked_in_vram_models})"
|
||||||
)
|
)
|
||||||
|
|
||||||
def make_room(self, model_size: int) -> None:
|
def make_room(self, size: int) -> None:
|
||||||
"""Make enough room in the cache to accommodate a new model of indicated size."""
|
"""Make enough room in the cache to accommodate a new model of indicated size."""
|
||||||
# calculate how much memory this model will require
|
# calculate how much memory this model will require
|
||||||
# multiplier = 2 if self.precision==torch.float32 else 1
|
# multiplier = 2 if self.precision==torch.float32 else 1
|
||||||
bytes_needed = model_size
|
bytes_needed = size
|
||||||
maximum_size = self.max_cache_size * GIG # stored in GB, convert to bytes
|
maximum_size = self.max_cache_size * GIG # stored in GB, convert to bytes
|
||||||
current_size = self.cache_size()
|
current_size = self.cache_size()
|
||||||
|
|
||||||
@ -358,12 +359,11 @@ class ModelCache(ModelCacheBase[AnyModel]):
|
|||||||
# 1 from onnx runtime object
|
# 1 from onnx runtime object
|
||||||
if not cache_entry.locked and refs <= (3 if "onnx" in model_key else 2):
|
if not cache_entry.locked and refs <= (3 if "onnx" in model_key else 2):
|
||||||
self.logger.debug(
|
self.logger.debug(
|
||||||
f"Removing {model_key} from RAM cache to free at least {(model_size/GIG):.2f} GB (-{(cache_entry.size/GIG):.2f} GB)"
|
f"Removing {model_key} from RAM cache to free at least {(size/GIG):.2f} GB (-{(cache_entry.size/GIG):.2f} GB)"
|
||||||
)
|
)
|
||||||
current_size -= cache_entry.size
|
current_size -= cache_entry.size
|
||||||
models_cleared += 1
|
models_cleared += 1
|
||||||
del self._cache_stack[pos]
|
self._delete_cache_entry(cache_entry)
|
||||||
del self._cached_models[model_key]
|
|
||||||
del cache_entry
|
del cache_entry
|
||||||
|
|
||||||
else:
|
else:
|
||||||
@ -384,6 +384,7 @@ class ModelCache(ModelCacheBase[AnyModel]):
|
|||||||
if self.stats:
|
if self.stats:
|
||||||
self.stats.cleared = models_cleared
|
self.stats.cleared = models_cleared
|
||||||
gc.collect()
|
gc.collect()
|
||||||
|
TorchDevice.empty_cache()
|
||||||
self.logger.debug(f"After making room: cached_models={len(self._cached_models)}")
|
self.logger.debug(f"After making room: cached_models={len(self._cached_models)}")
|
||||||
|
|
||||||
def _check_free_vram(self, target_device: torch.device, needed_size: int) -> None:
|
def _check_free_vram(self, target_device: torch.device, needed_size: int) -> None:
|
||||||
@ -396,6 +397,10 @@ class ModelCache(ModelCacheBase[AnyModel]):
|
|||||||
if needed_size > free_mem:
|
if needed_size > free_mem:
|
||||||
raise torch.cuda.OutOfMemoryError
|
raise torch.cuda.OutOfMemoryError
|
||||||
|
|
||||||
|
def _delete_cache_entry(self, cache_entry: CacheRecord[AnyModel]) -> None:
|
||||||
|
self._cache_stack.remove(cache_entry.key)
|
||||||
|
del self._cached_models[cache_entry.key]
|
||||||
|
|
||||||
@staticmethod
|
@staticmethod
|
||||||
def _get_execution_devices(devices: Optional[Set[torch.device]] = None) -> Set[torch.device]:
|
def _get_execution_devices(devices: Optional[Set[torch.device]] = None) -> Set[torch.device]:
|
||||||
if not devices:
|
if not devices:
|
||||||
@ -410,3 +415,4 @@ class ModelCache(ModelCacheBase[AnyModel]):
|
|||||||
@staticmethod
|
@staticmethod
|
||||||
def _device_name(device: torch.device) -> str:
|
def _device_name(device: torch.device) -> str:
|
||||||
return f"{device.type}:{device.index}"
|
return f"{device.type}:{device.index}"
|
||||||
|
|
||||||
|
@ -54,7 +54,6 @@ class ModelLocker(ModelLockerBase):
|
|||||||
|
|
||||||
# NOTE that the model has to have the to() method in order for this code to move it into GPU!
|
# NOTE that the model has to have the to() method in order for this code to move it into GPU!
|
||||||
self._cache_entry.lock()
|
self._cache_entry.lock()
|
||||||
|
|
||||||
try:
|
try:
|
||||||
# We wait for a gpu to be free - may raise a ValueError
|
# We wait for a gpu to be free - may raise a ValueError
|
||||||
self._execution_device = self._cache.get_execution_device()
|
self._execution_device = self._cache.get_execution_device()
|
||||||
|
@@ -7,19 +7,13 @@ from typing import Optional
import torch

from invokeai.backend.ip_adapter.ip_adapter import build_ip_adapter
-from invokeai.backend.model_manager import (
-    AnyModel,
-    AnyModelConfig,
-    BaseModelType,
-    ModelFormat,
-    ModelType,
-    SubModelType,
-)
+from invokeai.backend.model_manager import AnyModel, AnyModelConfig, BaseModelType, ModelFormat, ModelType, SubModelType
from invokeai.backend.model_manager.load import ModelLoader, ModelLoaderRegistry
from invokeai.backend.raw_model import RawModel


@ModelLoaderRegistry.register(base=BaseModelType.Any, type=ModelType.IPAdapter, format=ModelFormat.InvokeAI)
+@ModelLoaderRegistry.register(base=BaseModelType.Any, type=ModelType.IPAdapter, format=ModelFormat.Checkpoint)
class IPAdapterInvokeAILoader(ModelLoader):
    """Class to load IP Adapter diffusers models."""

@@ -32,7 +26,7 @@ class IPAdapterInvokeAILoader(ModelLoader):
            raise ValueError("There are no submodels in an IP-Adapter model.")
        model_path = Path(config.path)
        model: RawModel = build_ip_adapter(
-            ip_adapter_ckpt_path=str(model_path / "ip_adapter.bin"),
+            ip_adapter_ckpt_path=model_path,
            device=torch.device("cpu"),
            dtype=self._torch_dtype,
        )
|
||||||
|
@ -2,6 +2,7 @@
|
|||||||
"""Class for VAE model loading in InvokeAI."""
|
"""Class for VAE model loading in InvokeAI."""
|
||||||
|
|
||||||
from pathlib import Path
|
from pathlib import Path
|
||||||
|
from typing import Optional
|
||||||
|
|
||||||
import torch
|
import torch
|
||||||
from omegaconf import DictConfig, OmegaConf
|
from omegaconf import DictConfig, OmegaConf
|
||||||
@ -13,7 +14,7 @@ from invokeai.backend.model_manager import (
|
|||||||
ModelFormat,
|
ModelFormat,
|
||||||
ModelType,
|
ModelType,
|
||||||
)
|
)
|
||||||
from invokeai.backend.model_manager.config import CheckpointConfigBase
|
from invokeai.backend.model_manager.config import AnyModel, CheckpointConfigBase
|
||||||
from invokeai.backend.model_manager.convert_ckpt_to_diffusers import convert_ldm_vae_to_diffusers
|
from invokeai.backend.model_manager.convert_ckpt_to_diffusers import convert_ldm_vae_to_diffusers
|
||||||
|
|
||||||
from .. import ModelLoaderRegistry
|
from .. import ModelLoaderRegistry
|
||||||
@ -38,7 +39,7 @@ class VAELoader(GenericDiffusersLoader):
|
|||||||
else:
|
else:
|
||||||
return True
|
return True
|
||||||
|
|
||||||
def _convert_model(self, config: AnyModelConfig, model_path: Path, output_path: Path) -> Path:
|
def _convert_model(self, config: AnyModelConfig, model_path: Path, output_path: Optional[Path] = None) -> AnyModel:
|
||||||
# TODO(MM2): check whether sdxl VAE models convert.
|
# TODO(MM2): check whether sdxl VAE models convert.
|
||||||
if config.base not in {BaseModelType.StableDiffusion1, BaseModelType.StableDiffusion2}:
|
if config.base not in {BaseModelType.StableDiffusion1, BaseModelType.StableDiffusion2}:
|
||||||
raise Exception(f"VAE conversion not supported for model type: {config.base}")
|
raise Exception(f"VAE conversion not supported for model type: {config.base}")
|
||||||
@ -63,6 +64,6 @@ class VAELoader(GenericDiffusersLoader):
|
|||||||
vae_config=ckpt_config,
|
vae_config=ckpt_config,
|
||||||
image_size=512,
|
image_size=512,
|
||||||
precision=self._torch_dtype,
|
precision=self._torch_dtype,
|
||||||
|
dump_path=output_path,
|
||||||
)
|
)
|
||||||
vae_model.save_pretrained(output_path, safe_serialization=True)
|
return vae_model
|
||||||
return output_path
|
|
||||||
|
@ -17,7 +17,7 @@ from diffusers.utils import logging as dlogging
|
|||||||
|
|
||||||
from invokeai.app.services.model_install import ModelInstallServiceBase
|
from invokeai.app.services.model_install import ModelInstallServiceBase
|
||||||
from invokeai.app.services.model_records.model_records_base import ModelRecordChanges
|
from invokeai.app.services.model_records.model_records_base import ModelRecordChanges
|
||||||
from invokeai.backend.util.devices import choose_torch_device, torch_dtype
|
from invokeai.backend.util.devices import TorchDevice
|
||||||
|
|
||||||
from . import (
|
from . import (
|
||||||
AnyModelConfig,
|
AnyModelConfig,
|
||||||
@ -43,6 +43,7 @@ class ModelMerger(object):
|
|||||||
Initialize a ModelMerger object with the model installer.
|
Initialize a ModelMerger object with the model installer.
|
||||||
"""
|
"""
|
||||||
self._installer = installer
|
self._installer = installer
|
||||||
|
self._dtype = TorchDevice.choose_torch_dtype()
|
||||||
|
|
||||||
def merge_diffusion_models(
|
def merge_diffusion_models(
|
||||||
self,
|
self,
|
||||||
@ -68,7 +69,7 @@ class ModelMerger(object):
|
|||||||
warnings.simplefilter("ignore")
|
warnings.simplefilter("ignore")
|
||||||
verbosity = dlogging.get_verbosity()
|
verbosity = dlogging.get_verbosity()
|
||||||
dlogging.set_verbosity_error()
|
dlogging.set_verbosity_error()
|
||||||
dtype = torch.float16 if variant == "fp16" else torch_dtype(choose_torch_device())
|
dtype = torch.float16 if variant == "fp16" else self._dtype
|
||||||
|
|
||||||
# Note that checkpoint_merger will not work with downloaded HuggingFace fp16 models
|
# Note that checkpoint_merger will not work with downloaded HuggingFace fp16 models
|
||||||
# until upstream https://github.com/huggingface/diffusers/pull/6670 is merged and released.
|
# until upstream https://github.com/huggingface/diffusers/pull/6670 is merged and released.
|
||||||
@ -151,7 +152,7 @@ class ModelMerger(object):
|
|||||||
dump_path.mkdir(parents=True, exist_ok=True)
|
dump_path.mkdir(parents=True, exist_ok=True)
|
||||||
dump_path = dump_path / merged_model_name
|
dump_path = dump_path / merged_model_name
|
||||||
|
|
||||||
dtype = torch.float16 if variant == "fp16" else torch_dtype(choose_torch_device())
|
dtype = torch.float16 if variant == "fp16" else self._dtype
|
||||||
merged_pipe.save_pretrained(dump_path.as_posix(), safe_serialization=True, torch_dtype=dtype, variant=variant)
|
merged_pipe.save_pretrained(dump_path.as_posix(), safe_serialization=True, torch_dtype=dtype, variant=variant)
|
||||||
|
|
||||||
# register model and get its unique key
|
# register model and get its unique key
|
||||||
|
@ -230,9 +230,10 @@ class ModelProbe(object):
|
|||||||
return ModelType.LoRA
|
return ModelType.LoRA
|
||||||
elif any(key.startswith(v) for v in {"controlnet", "control_model", "input_blocks"}):
|
elif any(key.startswith(v) for v in {"controlnet", "control_model", "input_blocks"}):
|
||||||
return ModelType.ControlNet
|
return ModelType.ControlNet
|
||||||
|
elif any(key.startswith(v) for v in {"image_proj.", "ip_adapter."}):
|
||||||
|
return ModelType.IPAdapter
|
||||||
elif key in {"emb_params", "string_to_param"}:
|
elif key in {"emb_params", "string_to_param"}:
|
||||||
return ModelType.TextualInversion
|
return ModelType.TextualInversion
|
||||||
|
|
||||||
else:
|
else:
|
||||||
# diffusers-ti
|
# diffusers-ti
|
||||||
if len(ckpt) < 10 and all(isinstance(v, torch.Tensor) for v in ckpt.values()):
|
if len(ckpt) < 10 and all(isinstance(v, torch.Tensor) for v in ckpt.values()):
|
||||||
@ -323,7 +324,7 @@ class ModelProbe(object):
|
|||||||
with SilenceWarnings():
|
with SilenceWarnings():
|
||||||
if model_path.suffix.endswith((".ckpt", ".pt", ".pth", ".bin")):
|
if model_path.suffix.endswith((".ckpt", ".pt", ".pth", ".bin")):
|
||||||
cls._scan_model(model_path.name, model_path)
|
cls._scan_model(model_path.name, model_path)
|
||||||
model = torch.load(model_path)
|
model = torch.load(model_path, map_location="cpu")
|
||||||
assert isinstance(model, dict)
|
assert isinstance(model, dict)
|
||||||
return model
|
return model
|
||||||
else:
|
else:
|
||||||
@@ -527,8 +528,25 @@ class ControlNetCheckpointProbe(CheckpointProbeBase):


class IPAdapterCheckpointProbe(CheckpointProbeBase):
+    """Class for probing IP Adapters"""
+
    def get_base_type(self) -> BaseModelType:
-        raise NotImplementedError()
+        checkpoint = self.checkpoint
+        for key in checkpoint.keys():
+            if not key.startswith(("image_proj.", "ip_adapter.")):
+                continue
+            cross_attention_dim = checkpoint["ip_adapter.1.to_k_ip.weight"].shape[-1]
+            if cross_attention_dim == 768:
+                return BaseModelType.StableDiffusion1
+            elif cross_attention_dim == 1024:
+                return BaseModelType.StableDiffusion2
+            elif cross_attention_dim == 2048:
+                return BaseModelType.StableDiffusionXL
+            else:
+                raise InvalidModelConfigException(
+                    f"IP-Adapter had unexpected cross-attention dimension: {cross_attention_dim}."
+                )
+        raise InvalidModelConfigException(f"{self.model_path}: Unable to determine base type")


class CLIPVisionCheckpointProbe(CheckpointProbeBase):
|
||||||
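The probe keys entirely off the cross-attention width of the `to_k_ip` projection. A quick, hedged way to check which base model a standalone checkpoint would map to; the file name is an assumption for illustration:

```python
# Hedged inspection snippet; mirrors the dimension test in the probe above (file name is an assumed example).
import safetensors.torch

sd = safetensors.torch.load_file("ip-adapter_sdxl.safetensors", device="cpu")
dim = sd["ip_adapter.1.to_k_ip.weight"].shape[-1]
print(dim, {768: "sd-1", 1024: "sd-2", 2048: "sdxl"}.get(dim, "unknown"))
```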
@ -768,7 +786,7 @@ class T2IAdapterFolderProbe(FolderProbeBase):
|
|||||||
)
|
)
|
||||||
|
|
||||||
|
|
||||||
############## register probe classes ######
|
# Register probe classes
|
||||||
ModelProbe.register_probe("diffusers", ModelType.Main, PipelineFolderProbe)
|
ModelProbe.register_probe("diffusers", ModelType.Main, PipelineFolderProbe)
|
||||||
ModelProbe.register_probe("diffusers", ModelType.VAE, VaeFolderProbe)
|
ModelProbe.register_probe("diffusers", ModelType.VAE, VaeFolderProbe)
|
||||||
ModelProbe.register_probe("diffusers", ModelType.LoRA, LoRAFolderProbe)
|
ModelProbe.register_probe("diffusers", ModelType.LoRA, LoRAFolderProbe)
|
||||||
|
@@ -21,12 +21,14 @@ from pydantic import Field
 from transformers import CLIPFeatureExtractor, CLIPTextModel, CLIPTokenizer

 from invokeai.app.services.config.config_default import get_config
-from invokeai.backend.ip_adapter.ip_adapter import IPAdapter
-from invokeai.backend.ip_adapter.unet_patcher import UNetPatcher
-from invokeai.backend.stable_diffusion.diffusion.conditioning_data import ConditioningData
+from invokeai.backend.stable_diffusion.diffusion.conditioning_data import (
+    IPAdapterData,
+    TextConditioningData,
+)
 from invokeai.backend.stable_diffusion.diffusion.shared_invokeai_diffusion import InvokeAIDiffuserComponent
+from invokeai.backend.stable_diffusion.diffusion.unet_attention_patcher import UNetAttentionPatcher
 from invokeai.backend.util.attention import auto_detect_slice_size
-from invokeai.backend.util.devices import normalize_device
+from invokeai.backend.util.devices import TorchDevice


 @dataclass
@@ -149,16 +151,6 @@ class ControlNetData:
     resize_mode: str = Field(default="just_resize")


-@dataclass
-class IPAdapterData:
-    ip_adapter_model: IPAdapter = Field(default=None)
-    # TODO: change to polymorphic so can do different weights per step (once implemented...)
-    weight: Union[float, List[float]] = Field(default=1.0)
-    # weight: float = Field(default=1.0)
-    begin_step_percent: float = Field(default=0.0)
-    end_step_percent: float = Field(default=1.0)
-
-
 @dataclass
 class T2IAdapterData:
     """A structure containing the information required to apply conditioning from a single T2I-Adapter model."""
@@ -266,7 +258,7 @@ class StableDiffusionGeneratorPipeline(StableDiffusionPipeline):
         if self.unet.device.type == "cpu" or self.unet.device.type == "mps":
             mem_free = psutil.virtual_memory().free
         elif self.unet.device.type == "cuda":
-            mem_free, _ = torch.cuda.mem_get_info(normalize_device(self.unet.device))
+            mem_free, _ = torch.cuda.mem_get_info(TorchDevice.normalize(self.unet.device))
         else:
             raise ValueError(f"unrecognized device {self.unet.device}")
         # input tensor of [1, 4, h/8, w/8]
@@ -295,7 +287,8 @@ class StableDiffusionGeneratorPipeline(StableDiffusionPipeline):
         self,
         latents: torch.Tensor,
         num_inference_steps: int,
-        conditioning_data: ConditioningData,
+        scheduler_step_kwargs: dict[str, Any],
+        conditioning_data: TextConditioningData,
         *,
         noise: Optional[torch.Tensor],
         timesteps: torch.Tensor,
@@ -308,7 +301,7 @@ class StableDiffusionGeneratorPipeline(StableDiffusionPipeline):
         mask: Optional[torch.Tensor] = None,
         masked_latents: Optional[torch.Tensor] = None,
         gradient_mask: Optional[bool] = False,
-        seed: Optional[int] = None,
+        seed: int,
     ) -> torch.Tensor:
         if init_timestep.shape[0] == 0:
             return latents
@@ -326,20 +319,6 @@ class StableDiffusionGeneratorPipeline(StableDiffusionPipeline):
             latents = self.scheduler.add_noise(latents, noise, batched_t)

         if mask is not None:
-            # if no noise provided, noisify unmasked area based on seed(or 0 as fallback)
-            if noise is None:
-                noise = torch.randn(
-                    orig_latents.shape,
-                    dtype=torch.float32,
-                    device="cpu",
-                    generator=torch.Generator(device="cpu").manual_seed(seed or 0),
-                ).to(device=orig_latents.device, dtype=orig_latents.dtype)
-
-                latents = self.scheduler.add_noise(latents, noise, batched_t)
-                latents = torch.lerp(
-                    orig_latents, latents.to(dtype=orig_latents.dtype), mask.to(dtype=orig_latents.dtype)
-                )
-
             if is_inpainting_model(self.unet):
                 if masked_latents is None:
                     raise Exception("Source image required for inpaint mask when inpaint model used!")
@@ -348,6 +327,15 @@ class StableDiffusionGeneratorPipeline(StableDiffusionPipeline):
                     self._unet_forward, mask, masked_latents
                 )
             else:
+                # if no noise provided, noisify unmasked area based on seed
+                if noise is None:
+                    noise = torch.randn(
+                        orig_latents.shape,
+                        dtype=torch.float32,
+                        device="cpu",
+                        generator=torch.Generator(device="cpu").manual_seed(seed),
+                    ).to(device=orig_latents.device, dtype=orig_latents.dtype)
+
                 additional_guidance.append(AddsMaskGuidance(mask, orig_latents, self.scheduler, noise, gradient_mask))

         try:
@@ -355,6 +343,7 @@ class StableDiffusionGeneratorPipeline(StableDiffusionPipeline):
                 latents,
                 timesteps,
                 conditioning_data,
+                scheduler_step_kwargs=scheduler_step_kwargs,
                 additional_guidance=additional_guidance,
                 control_data=control_data,
                 ip_adapter_data=ip_adapter_data,
@@ -380,7 +369,8 @@ class StableDiffusionGeneratorPipeline(StableDiffusionPipeline):
         self,
         latents: torch.Tensor,
         timesteps,
-        conditioning_data: ConditioningData,
+        conditioning_data: TextConditioningData,
+        scheduler_step_kwargs: dict[str, Any],
         *,
         additional_guidance: List[Callable] = None,
         control_data: List[ControlNetData] = None,
@@ -397,22 +387,17 @@ class StableDiffusionGeneratorPipeline(StableDiffusionPipeline):
         if timesteps.shape[0] == 0:
             return latents

-        ip_adapter_unet_patcher = None
-        extra_conditioning_info = conditioning_data.text_embeddings.extra_conditioning
-        if extra_conditioning_info is not None and extra_conditioning_info.wants_cross_attention_control:
-            attn_ctx = self.invokeai_diffuser.custom_attention_context(
-                self.invokeai_diffuser.model,
-                extra_conditioning_info=extra_conditioning_info,
-            )
-            self.use_ip_adapter = False
-        elif ip_adapter_data is not None:
-            # TODO(ryand): Should we raise an exception if both custom attention and IP-Adapter attention are active?
-            # As it is now, the IP-Adapter will silently be skipped.
-            ip_adapter_unet_patcher = UNetPatcher([ipa.ip_adapter_model for ipa in ip_adapter_data])
-            attn_ctx = ip_adapter_unet_patcher.apply_ip_adapter_attention(self.invokeai_diffuser.model)
-            self.use_ip_adapter = True
-        else:
-            attn_ctx = nullcontext()
+        use_ip_adapter = ip_adapter_data is not None
+        use_regional_prompting = (
+            conditioning_data.cond_regions is not None or conditioning_data.uncond_regions is not None
+        )
+        unet_attention_patcher = None
+        self.use_ip_adapter = use_ip_adapter
+        attn_ctx = nullcontext()
+        if use_ip_adapter or use_regional_prompting:
+            ip_adapters = [ipa.ip_adapter_model for ipa in ip_adapter_data] if use_ip_adapter else None
+            unet_attention_patcher = UNetAttentionPatcher(ip_adapters)
+            attn_ctx = unet_attention_patcher.apply_ip_adapter_attention(self.invokeai_diffuser.model)

         # NOTE error is not here!
         if conditioning_data.unconditioned_embeddings.embeds.device != \
@@ -444,11 +429,11 @@ class StableDiffusionGeneratorPipeline(StableDiffusionPipeline):
                     conditioning_data,
                     step_index=i,
                     total_step_count=len(timesteps),
+                    scheduler_step_kwargs=scheduler_step_kwargs,
                     additional_guidance=additional_guidance,
                     control_data=control_data,
                     ip_adapter_data=ip_adapter_data,
                     t2i_adapter_data=t2i_adapter_data,
-                    ip_adapter_unet_patcher=ip_adapter_unet_patcher,
                 )
                 latents = step_output.prev_sample
                 predicted_original = getattr(step_output, "pred_original_sample", None)
@@ -472,14 +457,14 @@ class StableDiffusionGeneratorPipeline(StableDiffusionPipeline):
         self,
         t: torch.Tensor,
         latents: torch.Tensor,
-        conditioning_data: ConditioningData,
+        conditioning_data: TextConditioningData,
         step_index: int,
         total_step_count: int,
+        scheduler_step_kwargs: dict[str, Any],
         additional_guidance: List[Callable] = None,
         control_data: List[ControlNetData] = None,
         ip_adapter_data: Optional[list[IPAdapterData]] = None,
         t2i_adapter_data: Optional[list[T2IAdapterData]] = None,
-        ip_adapter_unet_patcher: Optional[UNetPatcher] = None,
     ):
         # invokeai_diffuser has batched timesteps, but diffusers schedulers expect a single value
@@ -495,23 +480,6 @@ class StableDiffusionGeneratorPipeline(StableDiffusionPipeline):
         # i.e. before or after passing it to InvokeAIDiffuserComponent
         latent_model_input = self.scheduler.scale_model_input(latents, timestep)

-        # handle IP-Adapter
-        if self.use_ip_adapter and ip_adapter_data is not None:  # somewhat redundant but logic is clearer
-            for i, single_ip_adapter_data in enumerate(ip_adapter_data):
-                first_adapter_step = math.floor(single_ip_adapter_data.begin_step_percent * total_step_count)
-                last_adapter_step = math.ceil(single_ip_adapter_data.end_step_percent * total_step_count)
-                weight = (
-                    single_ip_adapter_data.weight[step_index]
-                    if isinstance(single_ip_adapter_data.weight, List)
-                    else single_ip_adapter_data.weight
-                )
-                if step_index >= first_adapter_step and step_index <= last_adapter_step:
-                    # Only apply this IP-Adapter if the current step is within the IP-Adapter's begin/end step range.
-                    ip_adapter_unet_patcher.set_scale(i, weight)
-                else:
-                    # Otherwise, set the IP-Adapter's scale to 0, so it has no effect.
-                    ip_adapter_unet_patcher.set_scale(i, 0.0)
-
         # Handle ControlNet(s)
         down_block_additional_residuals = None
         mid_block_additional_residual = None
@@ -560,6 +528,7 @@ class StableDiffusionGeneratorPipeline(StableDiffusionPipeline):
             step_index=step_index,
             total_step_count=total_step_count,
             conditioning_data=conditioning_data,
+            ip_adapter_data=ip_adapter_data,
             down_block_additional_residuals=down_block_additional_residuals,  # for ControlNet
             mid_block_additional_residual=mid_block_additional_residual,  # for ControlNet
             down_intrablock_additional_residuals=down_intrablock_additional_residuals,  # for T2I-Adapter
@@ -579,7 +548,7 @@ class StableDiffusionGeneratorPipeline(StableDiffusionPipeline):
         )

         # compute the previous noisy sample x_t -> x_t-1
-        step_output = self.scheduler.step(noise_pred, timestep, latents, **conditioning_data.scheduler_args)
+        step_output = self.scheduler.step(noise_pred, timestep, latents, **scheduler_step_kwargs)

        # TODO: discuss injection point options. For now this is a patch to get progress images working with inpainting again.
        for guidance in additional_guidance:
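A minimal sketch (not part of the diff) of the noise pattern moved in the masked-latents hunk above: drawing the noise on the CPU with an explicitly seeded generator, then moving it to the latents' device and dtype, keeps the unmasked-area noise reproducible across CUDA, MPS, and CPU backends. The helper name is illustrative.

import torch

def seeded_noise_like(latents: torch.Tensor, seed: int) -> torch.Tensor:
    # Generate on CPU so the same seed yields the same noise on every backend.
    generator = torch.Generator(device="cpu").manual_seed(seed)
    noise = torch.randn(latents.shape, dtype=torch.float32, device="cpu", generator=generator)
    # Move to the working device/dtype only after sampling.
    return noise.to(device=latents.device, dtype=latents.dtype)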
@@ -1,27 +1,17 @@
-import dataclasses
-import inspect
-from dataclasses import dataclass, field
-from typing import Any, List, Optional, Union
+import math
+from dataclasses import dataclass
+from typing import List, Optional, Union

 import torch

-from .cross_attention_control import Arguments
-
-
-@dataclass
-class ExtraConditioningInfo:
-    tokens_count_including_eos_bos: int
-    cross_attention_control_args: Optional[Arguments] = None
-
-    @property
-    def wants_cross_attention_control(self):
-        return self.cross_attention_control_args is not None
+from invokeai.backend.ip_adapter.ip_adapter import IPAdapter


 @dataclass
 class BasicConditioningInfo:
+    """SD 1/2 text conditioning information produced by Compel."""
+
     embeds: torch.Tensor
-    extra_conditioning: Optional[ExtraConditioningInfo]

     def to(self, device, dtype=None):
         self.embeds = self.embeds.to(device=device, dtype=dtype)
@@ -35,6 +25,8 @@ class ConditioningFieldData:

 @dataclass
 class SDXLConditioningInfo(BasicConditioningInfo):
+    """SDXL text conditioning information produced by Compel."""
+
     pooled_embeds: torch.Tensor
     add_time_ids: torch.Tensor

@@ -57,37 +49,74 @@ class IPAdapterConditioningInfo:


 @dataclass
-class ConditioningData:
-    unconditioned_embeddings: BasicConditioningInfo
-    text_embeddings: BasicConditioningInfo
-    """
-    Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
-    `guidance_scale` is defined as `w` of equation 2. of [Imagen Paper](https://arxiv.org/pdf/2205.11487.pdf).
-    Guidance scale is enabled by setting `guidance_scale > 1`. Higher guidance scale encourages to generate
-    images that are closely linked to the text `prompt`, usually at the expense of lower image quality.
-    """
-    guidance_scale: Union[float, List[float]]
-    """ for models trained using zero-terminal SNR ("ztsnr"), it's suggested to use guidance_rescale_multiplier of 0.7 .
-    ref [Common Diffusion Noise Schedules and Sample Steps are Flawed](https://arxiv.org/pdf/2305.08891.pdf)
-    """
-    guidance_rescale_multiplier: float = 0
-    scheduler_args: dict[str, Any] = field(default_factory=dict)
-
-    ip_adapter_conditioning: Optional[list[IPAdapterConditioningInfo]] = None
-
-    @property
-    def dtype(self):
-        return self.text_embeddings.dtype
-
-    def add_scheduler_args_if_applicable(self, scheduler, **kwargs):
-        scheduler_args = dict(self.scheduler_args)
-        step_method = inspect.signature(scheduler.step)
-        for name, value in kwargs.items():
-            try:
-                step_method.bind_partial(**{name: value})
-            except TypeError:
-                # FIXME: don't silently discard arguments
-                pass  # debug("%s does not accept argument named %r", scheduler, name)
-            else:
-                scheduler_args[name] = value
-        return dataclasses.replace(self, scheduler_args=scheduler_args)
+class IPAdapterData:
+    ip_adapter_model: IPAdapter
+    ip_adapter_conditioning: IPAdapterConditioningInfo
+    mask: torch.Tensor
+
+    # Either a single weight applied to all steps, or a list of weights for each step.
+    weight: Union[float, List[float]] = 1.0
+    begin_step_percent: float = 0.0
+    end_step_percent: float = 1.0
+
+    def scale_for_step(self, step_index: int, total_steps: int) -> float:
+        first_adapter_step = math.floor(self.begin_step_percent * total_steps)
+        last_adapter_step = math.ceil(self.end_step_percent * total_steps)
+        weight = self.weight[step_index] if isinstance(self.weight, List) else self.weight
+        if step_index >= first_adapter_step and step_index <= last_adapter_step:
+            # Only apply this IP-Adapter if the current step is within the IP-Adapter's begin/end step range.
+            return weight
+        # Otherwise, set the IP-Adapter's scale to 0, so it has no effect.
+        return 0.0
+
+
+@dataclass
+class Range:
+    start: int
+    end: int
+
+
+class TextConditioningRegions:
+    def __init__(
+        self,
+        masks: torch.Tensor,
+        ranges: list[Range],
+    ):
+        # A binary mask indicating the regions of the image that the prompt should be applied to.
+        # Shape: (1, num_prompts, height, width)
+        # Dtype: torch.bool
+        self.masks = masks
+
+        # A list of ranges indicating the start and end indices of the embeddings that corresponding mask applies to.
+        # ranges[i] contains the embedding range for the i'th prompt / mask.
+        self.ranges = ranges
+
+        assert self.masks.shape[1] == len(self.ranges)
+
+
+class TextConditioningData:
+    def __init__(
+        self,
+        uncond_text: Union[BasicConditioningInfo, SDXLConditioningInfo],
+        cond_text: Union[BasicConditioningInfo, SDXLConditioningInfo],
+        uncond_regions: Optional[TextConditioningRegions],
+        cond_regions: Optional[TextConditioningRegions],
+        guidance_scale: Union[float, List[float]],
+        guidance_rescale_multiplier: float = 0,
+    ):
+        self.uncond_text = uncond_text
+        self.cond_text = cond_text
+        self.uncond_regions = uncond_regions
+        self.cond_regions = cond_regions
+        # Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
+        # `guidance_scale` is defined as `w` of equation 2. of [Imagen Paper](https://arxiv.org/pdf/2205.11487.pdf).
+        # Guidance scale is enabled by setting `guidance_scale > 1`. Higher guidance scale encourages to generate
+        # images that are closely linked to the text `prompt`, usually at the expense of lower image quality.
+        self.guidance_scale = guidance_scale
+        # For models trained using zero-terminal SNR ("ztsnr"), it's suggested to use guidance_rescale_multiplier of 0.7.
+        # See [Common Diffusion Noise Schedules and Sample Steps are Flawed](https://arxiv.org/pdf/2305.08891.pdf).
+        self.guidance_rescale_multiplier = guidance_rescale_multiplier
+
+    def is_sdxl(self):
+        assert isinstance(self.uncond_text, SDXLConditioningInfo) == isinstance(self.cond_text, SDXLConditioningInfo)
+        return isinstance(self.cond_text, SDXLConditioningInfo)
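A short sketch (not part of the diff) of how the new IPAdapterData.scale_for_step() gates an adapter to a step range: with begin/end percents of 0.25/0.75 over 20 steps, the weight applies only for step indices 5 through 15 and is 0.0 elsewhere. The free-standing function mirrors the method above for illustration.

import math
from typing import List, Union

def scale_for_step(weight: Union[float, List[float]], begin: float, end: float,
                   step_index: int, total_steps: int) -> float:
    first_step = math.floor(begin * total_steps)   # first step the adapter is active
    last_step = math.ceil(end * total_steps)       # last step the adapter is active
    w = weight[step_index] if isinstance(weight, list) else weight
    return w if first_step <= step_index <= last_step else 0.0

# Weight 0.8 active for steps 5..15, zero outside that window.
scales = [scale_for_step(0.8, 0.25, 0.75, i, 20) for i in range(20)]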
@@ -1,218 +0,0 @@
-# adapted from bloc97's CrossAttentionControl colab
-# https://github.com/bloc97/CrossAttentionControl
-
-
-import enum
-from dataclasses import dataclass, field
-from typing import Optional
-
-import torch
-from compel.cross_attention_control import Arguments
-from diffusers.models.attention_processor import Attention, SlicedAttnProcessor
-from diffusers.models.unets.unet_2d_condition import UNet2DConditionModel
-
-from invokeai.backend.util.devices import torch_dtype
-
-
-class CrossAttentionType(enum.Enum):
-    SELF = 1
-    TOKENS = 2
-
-
-class CrossAttnControlContext:
-    def __init__(self, arguments: Arguments):
-        """
-        :param arguments: Arguments for the cross-attention control process
-        """
-        self.cross_attention_mask: Optional[torch.Tensor] = None
-        self.cross_attention_index_map: Optional[torch.Tensor] = None
-        self.arguments = arguments
-
-    def get_active_cross_attention_control_types_for_step(
-        self, percent_through: float = None
-    ) -> list[CrossAttentionType]:
-        """
-        Should cross-attention control be applied on the given step?
-        :param percent_through: How far through the step sequence are we (0.0=pure noise, 1.0=completely denoised image). Expected range 0.0..<1.0.
-        :return: A list of attention types that cross-attention control should be performed for on the given step. May be [].
-        """
-        if percent_through is None:
-            return [CrossAttentionType.SELF, CrossAttentionType.TOKENS]
-
-        opts = self.arguments.edit_options
-        to_control = []
-        if opts["s_start"] <= percent_through < opts["s_end"]:
-            to_control.append(CrossAttentionType.SELF)
-        if opts["t_start"] <= percent_through < opts["t_end"]:
-            to_control.append(CrossAttentionType.TOKENS)
-        return to_control
-
-
-def setup_cross_attention_control_attention_processors(unet: UNet2DConditionModel, context: CrossAttnControlContext):
-    """
-    Inject attention parameters and functions into the passed in model to enable cross attention editing.
-
-    :param model: The unet model to inject into.
-    :return: None
-    """
-
-    # adapted from init_attention_edit
-    device = context.arguments.edited_conditioning.device
-
-    # urgh. should this be hardcoded?
-    max_length = 77
-    # mask=1 means use base prompt attention, mask=0 means use edited prompt attention
-    mask = torch.zeros(max_length, dtype=torch_dtype(device))
-    indices_target = torch.arange(max_length, dtype=torch.long)
-    indices = torch.arange(max_length, dtype=torch.long)
-    for name, a0, a1, b0, b1 in context.arguments.edit_opcodes:
-        if b0 < max_length:
-            if name == "equal":  # or (name == "replace" and a1 - a0 == b1 - b0):
-                # these tokens have not been edited
-                indices[b0:b1] = indices_target[a0:a1]
-                mask[b0:b1] = 1
-
-    context.cross_attention_mask = mask.to(device)
-    context.cross_attention_index_map = indices.to(device)
-    old_attn_processors = unet.attn_processors
-    if torch.backends.mps.is_available():
-        # see note in StableDiffusionGeneratorPipeline.__init__ about borked slicing on MPS
-        unet.set_attn_processor(SwapCrossAttnProcessor())
-    else:
-        # try to re-use an existing slice size
-        default_slice_size = 4
-        slice_size = next(
-            (p.slice_size for p in old_attn_processors.values() if type(p) is SlicedAttnProcessor), default_slice_size
-        )
-        unet.set_attn_processor(SlicedSwapCrossAttnProcesser(slice_size=slice_size))
-
-
-@dataclass
-class SwapCrossAttnContext:
-    modified_text_embeddings: torch.Tensor
-    index_map: torch.Tensor  # maps from original prompt token indices to the equivalent tokens in the modified prompt
-    mask: torch.Tensor  # in the target space of the index_map
-    cross_attention_types_to_do: list[CrossAttentionType] = field(default_factory=list)
-
-    def wants_cross_attention_control(self, attn_type: CrossAttentionType) -> bool:
-        return attn_type in self.cross_attention_types_to_do
-
-    @classmethod
-    def make_mask_and_index_map(
-        cls, edit_opcodes: list[tuple[str, int, int, int, int]], max_length: int
-    ) -> tuple[torch.Tensor, torch.Tensor]:
-        # mask=1 means use original prompt attention, mask=0 means use modified prompt attention
-        mask = torch.zeros(max_length)
-        indices_target = torch.arange(max_length, dtype=torch.long)
-        indices = torch.arange(max_length, dtype=torch.long)
-        for name, a0, a1, b0, b1 in edit_opcodes:
-            if b0 < max_length:
-                if name == "equal":
-                    # these tokens remain the same as in the original prompt
-                    indices[b0:b1] = indices_target[a0:a1]
-                    mask[b0:b1] = 1
-
-        return mask, indices
-
-
-class SlicedSwapCrossAttnProcesser(SlicedAttnProcessor):
-    # TODO: dynamically pick slice size based on memory conditions
-
-    def __call__(
-        self,
-        attn: Attention,
-        hidden_states,
-        encoder_hidden_states=None,
-        attention_mask=None,
-        # kwargs
-        swap_cross_attn_context: SwapCrossAttnContext = None,
-        **kwargs,
-    ):
-        attention_type = CrossAttentionType.SELF if encoder_hidden_states is None else CrossAttentionType.TOKENS
-
-        # if cross-attention control is not in play, just call through to the base implementation.
-        if (
-            attention_type is CrossAttentionType.SELF
-            or swap_cross_attn_context is None
-            or not swap_cross_attn_context.wants_cross_attention_control(attention_type)
-        ):
-            # print(f"SwapCrossAttnContext for {attention_type} not active - passing request to superclass")
-            return super().__call__(attn, hidden_states, encoder_hidden_states, attention_mask)
-        # else:
-        #     print(f"SwapCrossAttnContext for {attention_type} active")
-
-        batch_size, sequence_length, _ = hidden_states.shape
-        attention_mask = attn.prepare_attention_mask(
-            attention_mask=attention_mask,
-            target_length=sequence_length,
-            batch_size=batch_size,
-        )
-
-        query = attn.to_q(hidden_states)
-        dim = query.shape[-1]
-        query = attn.head_to_batch_dim(query)
-
-        original_text_embeddings = encoder_hidden_states
-        modified_text_embeddings = swap_cross_attn_context.modified_text_embeddings
-        original_text_key = attn.to_k(original_text_embeddings)
-        modified_text_key = attn.to_k(modified_text_embeddings)
-        original_value = attn.to_v(original_text_embeddings)
-        modified_value = attn.to_v(modified_text_embeddings)
-
-        original_text_key = attn.head_to_batch_dim(original_text_key)
-        modified_text_key = attn.head_to_batch_dim(modified_text_key)
-        original_value = attn.head_to_batch_dim(original_value)
-        modified_value = attn.head_to_batch_dim(modified_value)
-
-        # compute slices and prepare output tensor
-        batch_size_attention = query.shape[0]
-        hidden_states = torch.zeros(
-            (batch_size_attention, sequence_length, dim // attn.heads),
-            device=query.device,
-            dtype=query.dtype,
-        )
-
-        # do slices
-        for i in range(max(1, hidden_states.shape[0] // self.slice_size)):
-            start_idx = i * self.slice_size
-            end_idx = (i + 1) * self.slice_size
-
-            query_slice = query[start_idx:end_idx]
-            original_key_slice = original_text_key[start_idx:end_idx]
-            modified_key_slice = modified_text_key[start_idx:end_idx]
-            attn_mask_slice = attention_mask[start_idx:end_idx] if attention_mask is not None else None
-
-            original_attn_slice = attn.get_attention_scores(query_slice, original_key_slice, attn_mask_slice)
-            modified_attn_slice = attn.get_attention_scores(query_slice, modified_key_slice, attn_mask_slice)
-
-            # because the prompt modifications may result in token sequences shifted forwards or backwards,
-            # the original attention probabilities must be remapped to account for token index changes in the
-            # modified prompt
-            remapped_original_attn_slice = torch.index_select(
-                original_attn_slice, -1, swap_cross_attn_context.index_map
-            )
-
-            # only some tokens taken from the original attention probabilities. this is controlled by the mask.
-            mask = swap_cross_attn_context.mask
-            inverse_mask = 1 - mask
-            attn_slice = remapped_original_attn_slice * mask + modified_attn_slice * inverse_mask
-
-            del remapped_original_attn_slice, modified_attn_slice
-
-            attn_slice = torch.bmm(attn_slice, modified_value[start_idx:end_idx])
-            hidden_states[start_idx:end_idx] = attn_slice
-
-        # done
-        hidden_states = attn.batch_to_head_dim(hidden_states)
-
-        # linear proj
-        hidden_states = attn.to_out[0](hidden_states)
-        # dropout
-        hidden_states = attn.to_out[1](hidden_states)
-
-        return hidden_states
-
-
-class SwapCrossAttnProcessor(SlicedSwapCrossAttnProcesser):
-    def __init__(self):
-        super(SwapCrossAttnProcessor, self).__init__(slice_size=int(1e9))  # massive slice size = don't slice
invokeai/backend/stable_diffusion/diffusion/custom_atttention.py (new file, 198 lines)
@@ -0,0 +1,198 @@
+from typing import Optional
+
+import torch
+import torch.nn.functional as F
+from diffusers.models.attention_processor import Attention, AttnProcessor2_0
+
+from invokeai.backend.ip_adapter.ip_attention_weights import IPAttentionProcessorWeights
+from invokeai.backend.stable_diffusion.diffusion.regional_ip_data import RegionalIPData
+from invokeai.backend.stable_diffusion.diffusion.regional_prompt_data import RegionalPromptData
+
+
+class CustomAttnProcessor2_0(AttnProcessor2_0):
+    """A custom implementation of AttnProcessor2_0 that supports additional Invoke features.
+
+    This implementation is based on
+    https://github.com/huggingface/diffusers/blame/fcfa270fbd1dc294e2f3a505bae6bcb791d721c3/src/diffusers/models/attention_processor.py#L1204
+
+    Supported custom features:
+    - IP-Adapter
+    - Regional prompt attention
+    """
+
+    def __init__(
+        self,
+        ip_adapter_weights: Optional[list[IPAttentionProcessorWeights]] = None,
+    ):
+        """Initialize a CustomAttnProcessor2_0.
+
+        Note: Arguments that are the same for all attention layers are passed to __call__(). Arguments that are
+        layer-specific are passed to __init__().
+
+        Args:
+            ip_adapter_weights: The IP-Adapter attention weights. ip_adapter_weights[i] contains the attention weights
+                for the i'th IP-Adapter.
+        """
+        super().__init__()
+        self._ip_adapter_weights = ip_adapter_weights
+
+    def _is_ip_adapter_enabled(self) -> bool:
+        return self._ip_adapter_weights is not None
+
+    def __call__(
+        self,
+        attn: Attention,
+        hidden_states: torch.FloatTensor,
+        encoder_hidden_states: Optional[torch.FloatTensor] = None,
+        attention_mask: Optional[torch.FloatTensor] = None,
+        temb: Optional[torch.FloatTensor] = None,
+        # For regional prompting:
+        regional_prompt_data: Optional[RegionalPromptData] = None,
+        percent_through: Optional[torch.FloatTensor] = None,
+        # For IP-Adapter:
+        regional_ip_data: Optional[RegionalIPData] = None,
+    ) -> torch.FloatTensor:
+        """Apply attention.
+
+        Args:
+            regional_prompt_data: The regional prompt data for the current batch. If not None, this will be used to
+                apply regional prompt masking.
+            regional_ip_data: The IP-Adapter data for the current batch.
+        """
+        # If true, we are doing cross-attention, if false we are doing self-attention.
+        is_cross_attention = encoder_hidden_states is not None
+
+        # Start unmodified block from AttnProcessor2_0.
+        # vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
+        residual = hidden_states
+        if attn.spatial_norm is not None:
+            hidden_states = attn.spatial_norm(hidden_states, temb)
+
+        input_ndim = hidden_states.ndim
+
+        if input_ndim == 4:
+            batch_size, channel, height, width = hidden_states.shape
+            hidden_states = hidden_states.view(batch_size, channel, height * width).transpose(1, 2)
+
+        batch_size, sequence_length, _ = (
+            hidden_states.shape if encoder_hidden_states is None else encoder_hidden_states.shape
+        )
+        # ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+        # End unmodified block from AttnProcessor2_0.
+
+        _, query_seq_len, _ = hidden_states.shape
+        # Handle regional prompt attention masks.
+        if regional_prompt_data is not None and is_cross_attention:
+            assert percent_through is not None
+            prompt_region_attention_mask = regional_prompt_data.get_cross_attn_mask(
+                query_seq_len=query_seq_len, key_seq_len=sequence_length
+            )
+
+            if attention_mask is None:
+                attention_mask = prompt_region_attention_mask
+            else:
+                attention_mask = prompt_region_attention_mask + attention_mask
+
+        # Start unmodified block from AttnProcessor2_0.
+        # vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
+        if attention_mask is not None:
+            attention_mask = attn.prepare_attention_mask(attention_mask, sequence_length, batch_size)
+            # scaled_dot_product_attention expects attention_mask shape to be
+            # (batch, heads, source_length, target_length)
+            attention_mask = attention_mask.view(batch_size, attn.heads, -1, attention_mask.shape[-1])
+
+        if attn.group_norm is not None:
+            hidden_states = attn.group_norm(hidden_states.transpose(1, 2)).transpose(1, 2)
+
+        query = attn.to_q(hidden_states)
+
+        if encoder_hidden_states is None:
+            encoder_hidden_states = hidden_states
+        elif attn.norm_cross:
+            encoder_hidden_states = attn.norm_encoder_hidden_states(encoder_hidden_states)
+
+        key = attn.to_k(encoder_hidden_states)
+        value = attn.to_v(encoder_hidden_states)
+
+        inner_dim = key.shape[-1]
+        head_dim = inner_dim // attn.heads
+
+        query = query.view(batch_size, -1, attn.heads, head_dim).transpose(1, 2)
+
+        key = key.view(batch_size, -1, attn.heads, head_dim).transpose(1, 2)
+        value = value.view(batch_size, -1, attn.heads, head_dim).transpose(1, 2)
+
+        # the output of sdp = (batch, num_heads, seq_len, head_dim)
+        # TODO: add support for attn.scale when we move to Torch 2.1
+        hidden_states = F.scaled_dot_product_attention(
+            query, key, value, attn_mask=attention_mask, dropout_p=0.0, is_causal=False
+        )
+
+        hidden_states = hidden_states.transpose(1, 2).reshape(batch_size, -1, attn.heads * head_dim)
+        hidden_states = hidden_states.to(query.dtype)
+        # ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+        # End unmodified block from AttnProcessor2_0.
+
+        # Apply IP-Adapter conditioning.
+        if is_cross_attention:
+            if self._is_ip_adapter_enabled():
+                assert regional_ip_data is not None
+                ip_masks = regional_ip_data.get_masks(query_seq_len=query_seq_len)
+                assert (
+                    len(regional_ip_data.image_prompt_embeds)
+                    == len(self._ip_adapter_weights)
+                    == len(regional_ip_data.scales)
+                    == ip_masks.shape[1]
+                )
+                for ipa_index, ipa_embed in enumerate(regional_ip_data.image_prompt_embeds):
+                    ipa_weights = self._ip_adapter_weights[ipa_index]
+                    ipa_scale = regional_ip_data.scales[ipa_index]
+                    ip_mask = ip_masks[0, ipa_index, ...]
+
+                    # The batch dimensions should match.
+                    assert ipa_embed.shape[0] == encoder_hidden_states.shape[0]
+                    # The token_len dimensions should match.
+                    assert ipa_embed.shape[-1] == encoder_hidden_states.shape[-1]
+
+                    ip_hidden_states = ipa_embed
+
+                    # Expected ip_hidden_state shape: (batch_size, num_ip_images, ip_seq_len, ip_image_embedding)
+
+                    ip_key = ipa_weights.to_k_ip(ip_hidden_states)
+                    ip_value = ipa_weights.to_v_ip(ip_hidden_states)
+
+                    # Expected ip_key and ip_value shape: (batch_size, num_ip_images, ip_seq_len, head_dim * num_heads)
+
+                    ip_key = ip_key.view(batch_size, -1, attn.heads, head_dim).transpose(1, 2)
+                    ip_value = ip_value.view(batch_size, -1, attn.heads, head_dim).transpose(1, 2)
+
+                    # Expected ip_key and ip_value shape: (batch_size, num_heads, num_ip_images * ip_seq_len, head_dim)
+
+                    # TODO: add support for attn.scale when we move to Torch 2.1
+                    ip_hidden_states = F.scaled_dot_product_attention(
+                        query, ip_key, ip_value, attn_mask=None, dropout_p=0.0, is_causal=False
+                    )
+
+                    # Expected ip_hidden_states shape: (batch_size, num_heads, query_seq_len, head_dim)
+
+                    ip_hidden_states = ip_hidden_states.transpose(1, 2).reshape(batch_size, -1, attn.heads * head_dim)
+                    ip_hidden_states = ip_hidden_states.to(query.dtype)
+
+                    # Expected ip_hidden_states shape: (batch_size, query_seq_len, num_heads * head_dim)
+
+                    hidden_states = hidden_states + ipa_scale * ip_hidden_states * ip_mask
+            else:
+                # If IP-Adapter is not enabled, then regional_ip_data should not be passed in.
+                assert regional_ip_data is None
+
+        # Start unmodified block from AttnProcessor2_0.
+        # vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
+        # linear proj
+        hidden_states = attn.to_out[0](hidden_states)
+        # dropout
+        hidden_states = attn.to_out[1](hidden_states)
+
+        if input_ndim == 4:
+            hidden_states = hidden_states.transpose(-1, -2).reshape(batch_size, channel, height, width)
+
+        if attn.residual_connection:
+            hidden_states = hidden_states + residual
+
+        hidden_states = hidden_states / attn.rescale_output_factor
+
+        return hidden_states
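A hypothetical sketch (not part of the diff) of how a processor like the CustomAttnProcessor2_0 above gets installed on every attention layer, using diffusers' standard attn_processors / set_attn_processor hooks. The names swap_attn_processors and new_processor_factory are illustrative and not defined in this PR.

from contextlib import contextmanager
from diffusers import UNet2DConditionModel

@contextmanager
def swap_attn_processors(unet: UNet2DConditionModel, new_processor_factory):
    # unet.attn_processors maps attention-layer names to their current processors.
    old_processors = unet.attn_processors
    try:
        # Install one new processor per layer, then hand control back to the caller.
        unet.set_attn_processor({name: new_processor_factory(name) for name in old_processors})
        yield
    finally:
        # Always restore the original processors, even if denoising raises.
        unet.set_attn_processor(old_processors)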
@@ -0,0 +1,72 @@
+import torch
+
+
+class RegionalIPData:
+    """A class to manage the data for regional IP-Adapter conditioning."""
+
+    def __init__(
+        self,
+        image_prompt_embeds: list[torch.Tensor],
+        scales: list[float],
+        masks: list[torch.Tensor],
+        dtype: torch.dtype,
+        device: torch.device,
+        max_downscale_factor: int = 8,
+    ):
+        """Initialize a `IPAdapterConditioningData` object."""
+        assert len(image_prompt_embeds) == len(scales) == len(masks)
+
+        # The image prompt embeddings.
+        # regional_ip_data[i] contains the image prompt embeddings for the i'th IP-Adapter. Each tensor
+        # has shape (batch_size, num_ip_images, seq_len, ip_embedding_len).
+        self.image_prompt_embeds = image_prompt_embeds
+
+        # The scales for the IP-Adapter attention.
+        # scales[i] contains the attention scale for the i'th IP-Adapter.
+        self.scales = scales
+
+        # The IP-Adapter masks.
+        # self._masks_by_seq_len[s] contains the spatial masks for the downsampling level with query sequence length of
+        # s. It has shape (batch_size, num_ip_images, query_seq_len, 1). The masks have values of 1.0 for included
+        # regions and 0.0 for excluded regions.
+        self._masks_by_seq_len = self._prepare_masks(masks, max_downscale_factor, device, dtype)
+
+    def _prepare_masks(
+        self, masks: list[torch.Tensor], max_downscale_factor: int, device: torch.device, dtype: torch.dtype
+    ) -> dict[int, torch.Tensor]:
+        """Prepare the masks for the IP-Adapter attention."""
+        # Concatenate the masks so that they can be processed more efficiently.
+        mask_tensor = torch.cat(masks, dim=1)
+
+        mask_tensor = mask_tensor.to(device=device, dtype=dtype)
+
+        masks_by_seq_len: dict[int, torch.Tensor] = {}
+
+        # Downsample the spatial dimensions by factors of 2 until max_downscale_factor is reached.
+        downscale_factor = 1
+        while downscale_factor <= max_downscale_factor:
+            b, num_ip_adapters, h, w = mask_tensor.shape
+            # Assert that the batch size is 1, because I haven't thought through batch handling for this feature yet.
+            assert b == 1
+
+            # The IP-Adapters are applied in the cross-attention layers, where the query sequence length is the h * w of
+            # the spatial features.
+            query_seq_len = h * w
+
+            masks_by_seq_len[query_seq_len] = mask_tensor.view((b, num_ip_adapters, -1, 1))
+
+            downscale_factor *= 2
+            if downscale_factor <= max_downscale_factor:
+                # We use max pooling because we downscale to a pretty low resolution, so we don't want small mask
+                # regions to be lost entirely.
+                #
+                # ceil_mode=True is set to mirror the downsampling behavior of SD and SDXL.
+                #
+                # TODO(ryand): In the future, we may want to experiment with other downsampling methods.
+                mask_tensor = torch.nn.functional.max_pool2d(mask_tensor, kernel_size=2, stride=2, ceil_mode=True)
+
+        return masks_by_seq_len
+
+    def get_masks(self, query_seq_len: int) -> torch.Tensor:
+        """Get the mask for the given query sequence length."""
+        return self._masks_by_seq_len[query_seq_len]
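A short standalone sketch (not part of the diff) of why the masks above are keyed by query sequence length: at each UNet resolution the cross-attention query length equals h*w of the feature map, so repeatedly max-pooling a (1, n, H, W) mask by 2 yields one lookup entry per resolution level. The tensor sizes here are illustrative.

import torch
import torch.nn.functional as F

masks = torch.ones((1, 2, 64, 64))  # batch=1, two IP-Adapters, 64x64 latent grid
by_seq_len = {}
factor = 1
while factor <= 8:
    b, n, h, w = masks.shape
    by_seq_len[h * w] = masks.reshape(b, n, -1, 1)  # flatten spatial dims per level
    factor *= 2
    if factor <= 8:
        # Max pooling keeps small regions alive at coarse resolutions.
        masks = F.max_pool2d(masks, kernel_size=2, stride=2, ceil_mode=True)

print(sorted(by_seq_len.keys()))  # [64, 256, 1024, 4096]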
@@ -0,0 +1,105 @@
+import torch
+import torch.nn.functional as F
+
+from invokeai.backend.stable_diffusion.diffusion.conditioning_data import (
+    TextConditioningRegions,
+)
+
+
+class RegionalPromptData:
+    """A class to manage the prompt data for regional conditioning."""
+
+    def __init__(
+        self,
+        regions: list[TextConditioningRegions],
+        device: torch.device,
+        dtype: torch.dtype,
+        max_downscale_factor: int = 8,
+    ):
+        """Initialize a `RegionalPromptData` object.
+        Args:
+            regions (list[TextConditioningRegions]): regions[i] contains the prompt regions for the i'th sample in the
+                batch.
+            device (torch.device): The device to use for the attention masks.
+            dtype (torch.dtype): The data type to use for the attention masks.
+            max_downscale_factor: Spatial masks will be prepared for downscale factors from 1 to max_downscale_factor
+                in steps of 2x.
+        """
+        self._regions = regions
+        self._device = device
+        self._dtype = dtype
+        # self._spatial_masks_by_seq_len[b][s] contains the spatial masks for the b'th batch sample with a query
+        # sequence length of s.
+        self._spatial_masks_by_seq_len: list[dict[int, torch.Tensor]] = self._prepare_spatial_masks(
+            regions, max_downscale_factor
+        )
+        self._negative_cross_attn_mask_score = -10000.0
+
+    def _prepare_spatial_masks(
+        self, regions: list[TextConditioningRegions], max_downscale_factor: int = 8
+    ) -> list[dict[int, torch.Tensor]]:
+        """Prepare the spatial masks for all downscaling factors."""
+        # batch_masks_by_seq_len[b][s] contains the spatial masks for the b'th batch sample with a query sequence length
+        # of s.
+        batch_sample_masks_by_seq_len: list[dict[int, torch.Tensor]] = []
+
+        for batch_sample_regions in regions:
+            batch_sample_masks_by_seq_len.append({})
+
+            batch_sample_masks = batch_sample_regions.masks.to(device=self._device, dtype=self._dtype)
+
+            # Downsample the spatial dimensions by factors of 2 until max_downscale_factor is reached.
+            downscale_factor = 1
+            while downscale_factor <= max_downscale_factor:
+                b, _num_prompts, h, w = batch_sample_masks.shape
+                assert b == 1
+                query_seq_len = h * w
+
+                batch_sample_masks_by_seq_len[-1][query_seq_len] = batch_sample_masks
+
+                downscale_factor *= 2
+                if downscale_factor <= max_downscale_factor:
+                    # We use max pooling because we downscale to a pretty low resolution, so we don't want small prompt
+                    # regions to be lost entirely.
+                    #
+                    # ceil_mode=True is set to mirror the downsampling behavior of SD and SDXL.
+                    #
+                    # TODO(ryand): In the future, we may want to experiment with other downsampling methods (e.g.
+                    # nearest interpolation), and could potentially use a weighted mask rather than a binary mask.
+                    batch_sample_masks = F.max_pool2d(batch_sample_masks, kernel_size=2, stride=2, ceil_mode=True)
+
+        return batch_sample_masks_by_seq_len
+
+    def get_cross_attn_mask(self, query_seq_len: int, key_seq_len: int) -> torch.Tensor:
+        """Get the cross-attention mask for the given query sequence length.
+        Args:
+            query_seq_len: The length of the flattened spatial features at the current downscaling level.
+            key_seq_len (int): The sequence length of the prompt embeddings (which act as the key in the cross-attention
+                layers). This is most likely equal to the max embedding range end, but we pass it explicitly to be sure.
+        Returns:
+            torch.Tensor: The cross-attention score mask.
+                shape: (batch_size, query_seq_len, key_seq_len).
+                dtype: float
+        """
+        batch_size = len(self._spatial_masks_by_seq_len)
+        batch_spatial_masks = [self._spatial_masks_by_seq_len[b][query_seq_len] for b in range(batch_size)]
+
+        # Create an empty attention mask with the correct shape.
+        attn_mask = torch.zeros((batch_size, query_seq_len, key_seq_len), dtype=self._dtype, device=self._device)
+
+        for batch_idx in range(batch_size):
+            batch_sample_spatial_masks = batch_spatial_masks[batch_idx]
+            batch_sample_regions = self._regions[batch_idx]
+
+            # Flatten the spatial dimensions of the mask by reshaping to (1, num_prompts, query_seq_len, 1).
+            _, num_prompts, _, _ = batch_sample_spatial_masks.shape
+            batch_sample_query_masks = batch_sample_spatial_masks.view((1, num_prompts, query_seq_len, 1))
+
+            for prompt_idx, embedding_range in enumerate(batch_sample_regions.ranges):
+                batch_sample_query_scores = batch_sample_query_masks[0, prompt_idx, :, :].clone()
+                batch_sample_query_mask = batch_sample_query_scores > 0.5
+                batch_sample_query_scores[batch_sample_query_mask] = 0.0
+                batch_sample_query_scores[~batch_sample_query_mask] = self._negative_cross_attn_mask_score
+                attn_mask[batch_idx, :, embedding_range.start : embedding_range.end] = batch_sample_query_scores
+
+        return attn_mask
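A reduced sketch (not part of the diff) of the score-mask construction above, for a single sample and a single region: pixels inside a prompt's region contribute 0.0 to that prompt's token range, while pixels outside contribute a large negative score that softmax turns into near-zero attention weight. Sizes and the token range are illustrative.

import torch

query_seq_len, key_seq_len = 16, 77
region = torch.zeros(query_seq_len, dtype=torch.bool)
region[:8] = True                      # first half of the flattened image belongs to this prompt
token_range = (5, 12)                  # embedding indices owned by this prompt

attn_mask = torch.zeros((1, query_seq_len, key_seq_len))
# 0.0 inside the region, -10000.0 outside (same negative score used above).
scores = torch.where(region, torch.tensor(0.0), torch.tensor(-10000.0))
attn_mask[0, :, token_range[0]:token_range[1]] = scores.unsqueeze(-1)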
@ -1,27 +1,21 @@
|
|||||||
from __future__ import annotations
|
from __future__ import annotations
|
||||||
|
|
||||||
import math
|
import math
|
||||||
from contextlib import contextmanager
|
|
||||||
from typing import Any, Callable, Optional, Union
|
from typing import Any, Callable, Optional, Union
|
||||||
|
|
||||||
import torch
|
import torch
|
||||||
import threading
|
import threading
|
||||||
from diffusers import UNet2DConditionModel
|
|
||||||
from typing_extensions import TypeAlias
|
from typing_extensions import TypeAlias
|
||||||
|
|
||||||
from invokeai.app.services.config.config_default import get_config
|
from invokeai.app.services.config.config_default import get_config
|
||||||
from invokeai.backend.stable_diffusion.diffusion.conditioning_data import (
|
from invokeai.backend.stable_diffusion.diffusion.conditioning_data import (
|
||||||
ConditioningData,
|
IPAdapterData,
|
||||||
ExtraConditioningInfo,
|
Range,
|
||||||
SDXLConditioningInfo,
|
TextConditioningData,
|
||||||
)
|
TextConditioningRegions,
|
||||||
|
|
||||||
from .cross_attention_control import (
|
|
||||||
CrossAttentionType,
|
|
||||||
CrossAttnControlContext,
|
|
||||||
SwapCrossAttnContext,
|
|
||||||
setup_cross_attention_control_attention_processors,
|
|
||||||
)
|
)
|
||||||
|
from invokeai.backend.stable_diffusion.diffusion.regional_ip_data import RegionalIPData
|
||||||
|
from invokeai.backend.stable_diffusion.diffusion.regional_prompt_data import RegionalPromptData
|
||||||
|
|
||||||
ModelForwardCallback: TypeAlias = Union[
|
ModelForwardCallback: TypeAlias = Union[
|
||||||
# x, t, conditioning, Optional[cross-attention kwargs]
|
# x, t, conditioning, Optional[cross-attention kwargs]
|
||||||
@ -59,31 +53,8 @@ class InvokeAIDiffuserComponent:
|
|||||||
self.conditioning = None
|
self.conditioning = None
|
||||||
self.model = model
|
self.model = model
|
||||||
self.model_forward_callback = model_forward_callback
|
self.model_forward_callback = model_forward_callback
|
||||||
self.cross_attention_control_context = None
|
|
||||||
self.sequential_guidance = config.sequential_guidance
|
self.sequential_guidance = config.sequential_guidance
|
||||||
|
|
||||||
@contextmanager
|
|
||||||
def custom_attention_context(
|
|
||||||
self,
|
|
||||||
unet: UNet2DConditionModel,
|
|
||||||
extra_conditioning_info: Optional[ExtraConditioningInfo],
|
|
||||||
):
|
|
||||||
old_attn_processors = unet.attn_processors
|
|
||||||
|
|
||||||
try:
|
|
||||||
self.cross_attention_control_context = CrossAttnControlContext(
|
|
||||||
arguments=extra_conditioning_info.cross_attention_control_args,
|
|
||||||
)
|
|
||||||
setup_cross_attention_control_attention_processors(
|
|
||||||
unet,
|
|
||||||
self.cross_attention_control_context,
|
|
||||||
)
|
|
||||||
|
|
||||||
yield None
|
|
||||||
finally:
|
|
||||||
self.cross_attention_control_context = None
|
|
||||||
unet.set_attn_processor(old_attn_processors)
|
|
||||||
|
|
||||||
def do_controlnet_step(
|
def do_controlnet_step(
|
||||||
self,
|
self,
|
||||||
control_data,
|
control_data,
|
||||||
@ -91,7 +62,7 @@ class InvokeAIDiffuserComponent:
|
|||||||
timestep: torch.Tensor,
|
timestep: torch.Tensor,
|
||||||
step_index: int,
|
step_index: int,
|
||||||
total_step_count: int,
|
total_step_count: int,
|
||||||
conditioning_data,
|
conditioning_data: TextConditioningData,
|
||||||
):
|
):
|
||||||
down_block_res_samples, mid_block_res_sample = None, None
|
down_block_res_samples, mid_block_res_sample = None, None
|
||||||
|
|
||||||
@ -124,28 +95,28 @@ class InvokeAIDiffuserComponent:
|
|||||||
added_cond_kwargs = None
|
added_cond_kwargs = None
|
||||||
|
|
||||||
if cfg_injection: # only applying ControlNet to conditional instead of in unconditioned
|
if cfg_injection: # only applying ControlNet to conditional instead of in unconditioned
|
||||||
if type(conditioning_data.text_embeddings) is SDXLConditioningInfo:
|
if conditioning_data.is_sdxl():
|
||||||
added_cond_kwargs = {
|
added_cond_kwargs = {
|
||||||
"text_embeds": conditioning_data.text_embeddings.pooled_embeds,
|
"text_embeds": conditioning_data.cond_text.pooled_embeds,
|
||||||
"time_ids": conditioning_data.text_embeddings.add_time_ids,
|
"time_ids": conditioning_data.cond_text.add_time_ids,
|
||||||
}
|
}
|
||||||
encoder_hidden_states = conditioning_data.text_embeddings.embeds
|
encoder_hidden_states = conditioning_data.cond_text.embeds
|
||||||
encoder_attention_mask = None
|
encoder_attention_mask = None
|
||||||
else:
|
else:
|
||||||
if type(conditioning_data.text_embeddings) is SDXLConditioningInfo:
|
if conditioning_data.is_sdxl():
|
||||||
added_cond_kwargs = {
|
added_cond_kwargs = {
|
||||||
"text_embeds": torch.cat(
|
"text_embeds": torch.cat(
|
||||||
[
|
[
|
||||||
# TODO: how to pad? just by zeros? or even truncate?
|
# TODO: how to pad? just by zeros? or even truncate?
|
||||||
conditioning_data.unconditioned_embeddings.pooled_embeds,
|
conditioning_data.uncond_text.pooled_embeds,
|
||||||
conditioning_data.text_embeddings.pooled_embeds,
|
conditioning_data.cond_text.pooled_embeds,
|
||||||
],
|
],
|
||||||
dim=0,
|
dim=0,
|
||||||
),
|
),
|
||||||
"time_ids": torch.cat(
|
"time_ids": torch.cat(
|
||||||
[
|
[
|
||||||
conditioning_data.unconditioned_embeddings.add_time_ids,
|
conditioning_data.uncond_text.add_time_ids,
|
||||||
conditioning_data.text_embeddings.add_time_ids,
|
conditioning_data.cond_text.add_time_ids,
|
||||||
],
|
],
|
||||||
dim=0,
|
dim=0,
|
||||||
),
|
),
|
||||||
@@ -154,8 +125,8 @@ class InvokeAIDiffuserComponent:
                     encoder_hidden_states,
                     encoder_attention_mask,
                 ) = self._concat_conditionings_for_batch(
-                    conditioning_data.unconditioned_embeddings.embeds,
+                    conditioning_data.uncond_text.embeds,
-                    conditioning_data.text_embeddings.embeds,
+                    conditioning_data.cond_text.embeds,
                 )
             if isinstance(control_datum.weight, list):
                 # if controlnet has multiple weights, use the weight for the current step
@@ -199,24 +170,15 @@ class InvokeAIDiffuserComponent:
         self,
         sample: torch.Tensor,
         timestep: torch.Tensor,
-        conditioning_data: ConditioningData,
+        conditioning_data: TextConditioningData,
+        ip_adapter_data: Optional[list[IPAdapterData]],
         step_index: int,
         total_step_count: int,
         down_block_additional_residuals: Optional[torch.Tensor] = None,  # for ControlNet
         mid_block_additional_residual: Optional[torch.Tensor] = None,  # for ControlNet
         down_intrablock_additional_residuals: Optional[torch.Tensor] = None,  # for T2I-Adapter
     ):
-        cross_attention_control_types_to_do = []
-        if self.cross_attention_control_context is not None:
-            percent_through = step_index / total_step_count
-            cross_attention_control_types_to_do = (
-                self.cross_attention_control_context.get_active_cross_attention_control_types_for_step(percent_through)
-            )
-        wants_cross_attention_control = len(cross_attention_control_types_to_do) > 0
-
-        if wants_cross_attention_control or self.sequential_guidance:
-            # If wants_cross_attention_control is True, we force the sequential mode to be used, because cross-attention
-            # control is currently only supported in sequential mode.
+        if self.sequential_guidance:
             (
                 unconditioned_next_x,
                 conditioned_next_x,
@@ -224,7 +186,9 @@ class InvokeAIDiffuserComponent:
                 x=sample,
                 sigma=timestep,
                 conditioning_data=conditioning_data,
-                cross_attention_control_types_to_do=cross_attention_control_types_to_do,
+                ip_adapter_data=ip_adapter_data,
+                step_index=step_index,
+                total_step_count=total_step_count,
                 down_block_additional_residuals=down_block_additional_residuals,
                 mid_block_additional_residual=mid_block_additional_residual,
                 down_intrablock_additional_residuals=down_intrablock_additional_residuals,
@@ -237,6 +201,9 @@ class InvokeAIDiffuserComponent:
                 x=sample,
                 sigma=timestep,
                 conditioning_data=conditioning_data,
+                ip_adapter_data=ip_adapter_data,
+                step_index=step_index,
+                total_step_count=total_step_count,
                 down_block_additional_residuals=down_block_additional_residuals,
                 mid_block_additional_residual=mid_block_additional_residual,
                 down_intrablock_additional_residuals=down_intrablock_additional_residuals,
@@ -297,53 +264,84 @@ class InvokeAIDiffuserComponent:

     def _apply_standard_conditioning(
         self,
-        x,
+        x: torch.Tensor,
-        sigma,
+        sigma: torch.Tensor,
-        conditioning_data: ConditioningData,
+        conditioning_data: TextConditioningData,
+        ip_adapter_data: Optional[list[IPAdapterData]],
+        step_index: int,
+        total_step_count: int,
         down_block_additional_residuals: Optional[torch.Tensor] = None,  # for ControlNet
         mid_block_additional_residual: Optional[torch.Tensor] = None,  # for ControlNet
         down_intrablock_additional_residuals: Optional[torch.Tensor] = None,  # for T2I-Adapter
-    ):
+    ) -> tuple[torch.Tensor, torch.Tensor]:
         """Runs the conditioned and unconditioned UNet forward passes in a single batch for faster inference speed at
         the cost of higher memory usage.
         """
         x_twice = torch.cat([x] * 2)
         sigma_twice = torch.cat([sigma] * 2)

-        cross_attention_kwargs = None
+        cross_attention_kwargs = {}
-        if conditioning_data.ip_adapter_conditioning is not None:
+        if ip_adapter_data is not None:
+            ip_adapter_conditioning = [ipa.ip_adapter_conditioning for ipa in ip_adapter_data]
             # Note that we 'stack' to produce tensors of shape (batch_size, num_ip_images, seq_len, token_len).
-            cross_attention_kwargs = {
-                "ip_adapter_image_prompt_embeds": [
-                    torch.stack(
-                        [ipa_conditioning.uncond_image_prompt_embeds, ipa_conditioning.cond_image_prompt_embeds]
-                    )
-                    for ipa_conditioning in conditioning_data.ip_adapter_conditioning
-                ]
-            }
+            image_prompt_embeds = [
+                torch.stack([ipa_conditioning.uncond_image_prompt_embeds, ipa_conditioning.cond_image_prompt_embeds])
+                for ipa_conditioning in ip_adapter_conditioning
+            ]
+            scales = [ipa.scale_for_step(step_index, total_step_count) for ipa in ip_adapter_data]
+            ip_masks = [ipa.mask for ipa in ip_adapter_data]
+            regional_ip_data = RegionalIPData(
+                image_prompt_embeds=image_prompt_embeds, scales=scales, masks=ip_masks, dtype=x.dtype, device=x.device
+            )
+            cross_attention_kwargs["regional_ip_data"] = regional_ip_data

         added_cond_kwargs = None
-        if type(conditioning_data.text_embeddings) is SDXLConditioningInfo:
+        if conditioning_data.is_sdxl():
             added_cond_kwargs = {
                 "text_embeds": torch.cat(
                     [
                         # TODO: how to pad? just by zeros? or even truncate?
-                        conditioning_data.unconditioned_embeddings.pooled_embeds,
+                        conditioning_data.uncond_text.pooled_embeds,
-                        conditioning_data.text_embeddings.pooled_embeds,
+                        conditioning_data.cond_text.pooled_embeds,
                     ],
                     dim=0,
                 ),
                 "time_ids": torch.cat(
                     [
-                        conditioning_data.unconditioned_embeddings.add_time_ids,
+                        conditioning_data.uncond_text.add_time_ids,
-                        conditioning_data.text_embeddings.add_time_ids,
+                        conditioning_data.cond_text.add_time_ids,
                     ],
                     dim=0,
                 ),
             }

+        if conditioning_data.cond_regions is not None or conditioning_data.uncond_regions is not None:
+            # TODO(ryand): We currently initialize RegionalPromptData for every denoising step. The text conditionings
+            # and masks are not changing from step-to-step, so this really only needs to be done once. While this seems
+            # painfully inefficient, the time spent is typically negligible compared to the forward inference pass of
+            # the UNet. The main reason that this hasn't been moved up to eliminate redundancy is that it is slightly
+            # awkward to handle both standard conditioning and sequential conditioning further up the stack.
+            regions = []
+            for c, r in [
+                (conditioning_data.uncond_text, conditioning_data.uncond_regions),
+                (conditioning_data.cond_text, conditioning_data.cond_regions),
+            ]:
+                if r is None:
+                    # Create a dummy mask and range for text conditioning that doesn't have region masks.
+                    _, _, h, w = x.shape
+                    r = TextConditioningRegions(
+                        masks=torch.ones((1, 1, h, w), dtype=x.dtype),
+                        ranges=[Range(start=0, end=c.embeds.shape[1])],
+                    )
+                regions.append(r)
+
+            cross_attention_kwargs["regional_prompt_data"] = RegionalPromptData(
+                regions=regions, device=x.device, dtype=x.dtype
+            )
+            cross_attention_kwargs["percent_through"] = step_index / total_step_count

         both_conditionings, encoder_attention_mask = self._concat_conditionings_for_batch(
-            conditioning_data.unconditioned_embeddings.embeds, conditioning_data.text_embeddings.embeds
+            conditioning_data.uncond_text.embeds, conditioning_data.cond_text.embeds
         )
         both_results = self.model_forward_callback(
             x_twice,
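For context on what `_apply_standard_conditioning` does above, here is a minimal, self-contained sketch of batched classifier-free guidance: the unconditioned and conditioned embeddings are concatenated along the batch dimension, one forward pass runs on the doubled batch, and the two halves are recombined with a guidance scale. This is a generic illustration, not InvokeAI's actual API; the toy `fake_unet` stands in for `model_forward_callback`.

```python
import torch


def fake_unet(x: torch.Tensor, t: torch.Tensor, text_embeds: torch.Tensor) -> torch.Tensor:
    # Stand-in for a UNet forward pass: returns a "noise prediction" with the same shape as x.
    return x * 0.1 + text_embeds.mean(dim=(1, 2), keepdim=True).reshape(-1, 1, 1, 1)


def batched_cfg_step(
    x: torch.Tensor, t: torch.Tensor, uncond: torch.Tensor, cond: torch.Tensor, guidance_scale: float
) -> torch.Tensor:
    # Double the latents and timesteps so both passes run in a single batch
    # (faster than two passes, at the cost of roughly double the memory).
    x_twice = torch.cat([x] * 2)
    t_twice = torch.cat([t] * 2)
    both_embeds = torch.cat([uncond, cond], dim=0)
    both_results = fake_unet(x_twice, t_twice, both_embeds)
    uncond_next, cond_next = both_results.chunk(2)
    # Standard classifier-free guidance combination.
    return uncond_next + guidance_scale * (cond_next - uncond_next)


x = torch.randn(1, 4, 64, 64)
t = torch.zeros(1)
uncond = torch.randn(1, 77, 768)
cond = torch.randn(1, 77, 768)
print(batched_cfg_step(x, t, uncond, cond, guidance_scale=7.5).shape)  # torch.Size([1, 4, 64, 64])
```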
@@ -363,8 +361,10 @@ class InvokeAIDiffuserComponent:
         self,
         x: torch.Tensor,
         sigma,
-        conditioning_data: ConditioningData,
+        conditioning_data: TextConditioningData,
-        cross_attention_control_types_to_do: list[CrossAttentionType],
+        ip_adapter_data: Optional[list[IPAdapterData]],
+        step_index: int,
+        total_step_count: int,
         down_block_additional_residuals: Optional[torch.Tensor] = None,  # for ControlNet
         mid_block_additional_residual: Optional[torch.Tensor] = None,  # for ControlNet
         down_intrablock_additional_residuals: Optional[torch.Tensor] = None,  # for T2I-Adapter
@@ -394,53 +394,48 @@ class InvokeAIDiffuserComponent:
         if mid_block_additional_residual is not None:
             uncond_mid_block, cond_mid_block = mid_block_additional_residual.chunk(2)

-        # If cross-attention control is enabled, prepare the SwapCrossAttnContext.
-        cross_attn_processor_context = None
-        if self.cross_attention_control_context is not None:
-            # Note that the SwapCrossAttnContext is initialized with an empty list of cross_attention_types_to_do.
-            # This list is empty because cross-attention control is not applied in the unconditioned pass. This field
-            # will be populated before the conditioned pass.
-            cross_attn_processor_context = SwapCrossAttnContext(
-                modified_text_embeddings=self.cross_attention_control_context.arguments.edited_conditioning,
-                index_map=self.cross_attention_control_context.cross_attention_index_map,
-                mask=self.cross_attention_control_context.cross_attention_mask,
-                cross_attention_types_to_do=[],
-            )

         #####################
         # Unconditioned pass
         #####################

-        cross_attention_kwargs = None
+        cross_attention_kwargs = {}

         # Prepare IP-Adapter cross-attention kwargs for the unconditioned pass.
-        if conditioning_data.ip_adapter_conditioning is not None:
+        if ip_adapter_data is not None:
+            ip_adapter_conditioning = [ipa.ip_adapter_conditioning for ipa in ip_adapter_data]
             # Note that we 'unsqueeze' to produce tensors of shape (batch_size=1, num_ip_images, seq_len, token_len).
-            cross_attention_kwargs = {
-                "ip_adapter_image_prompt_embeds": [
-                    torch.unsqueeze(ipa_conditioning.uncond_image_prompt_embeds, dim=0)
-                    for ipa_conditioning in conditioning_data.ip_adapter_conditioning
-                ]
-            }
-
-        # Prepare cross-attention control kwargs for the unconditioned pass.
-        if cross_attn_processor_context is not None:
-            cross_attention_kwargs = {"swap_cross_attn_context": cross_attn_processor_context}
+            image_prompt_embeds = [
+                torch.unsqueeze(ipa_conditioning.uncond_image_prompt_embeds, dim=0)
+                for ipa_conditioning in ip_adapter_conditioning
+            ]
+            scales = [ipa.scale_for_step(step_index, total_step_count) for ipa in ip_adapter_data]
+            ip_masks = [ipa.mask for ipa in ip_adapter_data]
+            regional_ip_data = RegionalIPData(
+                image_prompt_embeds=image_prompt_embeds, scales=scales, masks=ip_masks, dtype=x.dtype, device=x.device
+            )
+            cross_attention_kwargs["regional_ip_data"] = regional_ip_data

         # Prepare SDXL conditioning kwargs for the unconditioned pass.
         added_cond_kwargs = None
-        is_sdxl = type(conditioning_data.text_embeddings) is SDXLConditioningInfo
-        if is_sdxl:
+        if conditioning_data.is_sdxl():
             added_cond_kwargs = {
-                "text_embeds": conditioning_data.unconditioned_embeddings.pooled_embeds,
+                "text_embeds": conditioning_data.uncond_text.pooled_embeds,
-                "time_ids": conditioning_data.unconditioned_embeddings.add_time_ids,
+                "time_ids": conditioning_data.uncond_text.add_time_ids,
             }

+        # Prepare prompt regions for the unconditioned pass.
+        if conditioning_data.uncond_regions is not None:
+            cross_attention_kwargs["regional_prompt_data"] = RegionalPromptData(
+                regions=[conditioning_data.uncond_regions], device=x.device, dtype=x.dtype
+            )
+            cross_attention_kwargs["percent_through"] = step_index / total_step_count

         # Run unconditioned UNet denoising (i.e. negative prompt).
         unconditioned_next_x = self.model_forward_callback(
             x,
             sigma,
-            conditioning_data.unconditioned_embeddings.embeds,
+            conditioning_data.uncond_text.embeds,
             cross_attention_kwargs=cross_attention_kwargs,
             down_block_additional_residuals=uncond_down_block,
             mid_block_additional_residual=uncond_mid_block,
@@ -452,36 +447,43 @@ class InvokeAIDiffuserComponent:
         # Conditioned pass
         ###################

-        cross_attention_kwargs = None
+        cross_attention_kwargs = {}

-        # Prepare IP-Adapter cross-attention kwargs for the conditioned pass.
-        if conditioning_data.ip_adapter_conditioning is not None:
+        if ip_adapter_data is not None:
+            ip_adapter_conditioning = [ipa.ip_adapter_conditioning for ipa in ip_adapter_data]
             # Note that we 'unsqueeze' to produce tensors of shape (batch_size=1, num_ip_images, seq_len, token_len).
-            cross_attention_kwargs = {
-                "ip_adapter_image_prompt_embeds": [
-                    torch.unsqueeze(ipa_conditioning.cond_image_prompt_embeds, dim=0)
-                    for ipa_conditioning in conditioning_data.ip_adapter_conditioning
-                ]
-            }
-
-        # Prepare cross-attention control kwargs for the conditioned pass.
-        if cross_attn_processor_context is not None:
-            cross_attn_processor_context.cross_attention_types_to_do = cross_attention_control_types_to_do
-            cross_attention_kwargs = {"swap_cross_attn_context": cross_attn_processor_context}
+            image_prompt_embeds = [
+                torch.unsqueeze(ipa_conditioning.cond_image_prompt_embeds, dim=0)
+                for ipa_conditioning in ip_adapter_conditioning
+            ]
+            scales = [ipa.scale_for_step(step_index, total_step_count) for ipa in ip_adapter_data]
+            ip_masks = [ipa.mask for ipa in ip_adapter_data]
+            regional_ip_data = RegionalIPData(
+                image_prompt_embeds=image_prompt_embeds, scales=scales, masks=ip_masks, dtype=x.dtype, device=x.device
+            )
+            cross_attention_kwargs["regional_ip_data"] = regional_ip_data

         # Prepare SDXL conditioning kwargs for the conditioned pass.
         added_cond_kwargs = None
-        if is_sdxl:
+        if conditioning_data.is_sdxl():
             added_cond_kwargs = {
-                "text_embeds": conditioning_data.text_embeddings.pooled_embeds,
+                "text_embeds": conditioning_data.cond_text.pooled_embeds,
-                "time_ids": conditioning_data.text_embeddings.add_time_ids,
+                "time_ids": conditioning_data.cond_text.add_time_ids,
             }

+        # Prepare prompt regions for the conditioned pass.
+        if conditioning_data.cond_regions is not None:
+            cross_attention_kwargs["regional_prompt_data"] = RegionalPromptData(
+                regions=[conditioning_data.cond_regions], device=x.device, dtype=x.dtype
+            )
+            cross_attention_kwargs["percent_through"] = step_index / total_step_count

         # Run conditioned UNet denoising (i.e. positive prompt).
         conditioned_next_x = self.model_forward_callback(
             x,
             sigma,
-            conditioning_data.text_embeddings.embeds,
+            conditioning_data.cond_text.embeds,
             cross_attention_kwargs=cross_attention_kwargs,
             down_block_additional_residuals=cond_down_block,
             mid_block_additional_residual=cond_mid_block,
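The `scale_for_step(step_index, total_step_count)` calls above suggest that IP-Adapter weights can be scheduled per denoising step rather than fixed for the whole run. A small illustrative helper, assuming a weight that is either a single float or a per-step list; the names and behaviour here are hypothetical and are not InvokeAI's API:

```python
from typing import Union


def scale_for_step(weight: Union[float, list[float]], step_index: int, total_step_count: int) -> float:
    # A scalar weight applies uniformly to every step.
    if isinstance(weight, (int, float)):
        return float(weight)
    # A list of weights is mapped onto the step range, picking the entry for the current step.
    idx = min(int(step_index * len(weight) / total_step_count), len(weight) - 1)
    return float(weight[idx])


print(scale_for_step(0.75, step_index=3, total_step_count=20))               # 0.75
print(scale_for_step([0.0, 0.5, 1.0], step_index=10, total_step_count=20))   # 0.5
```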
@@ -1,52 +1,46 @@
 from contextlib import contextmanager
+from typing import Optional

 from diffusers.models import UNet2DConditionModel

-from invokeai.backend.ip_adapter.attention_processor import AttnProcessor2_0, IPAttnProcessor2_0
 from invokeai.backend.ip_adapter.ip_adapter import IPAdapter
+from invokeai.backend.stable_diffusion.diffusion.custom_atttention import CustomAttnProcessor2_0


-class UNetPatcher:
-    """A class that contains multiple IP-Adapters and can apply them to a UNet."""
+class UNetAttentionPatcher:
+    """A class for patching a UNet with CustomAttnProcessor2_0 attention layers."""

-    def __init__(self, ip_adapters: list[IPAdapter]):
+    def __init__(self, ip_adapters: Optional[list[IPAdapter]]):
         self._ip_adapters = ip_adapters
-        self._scales = [1.0] * len(self._ip_adapters)
-
-    def set_scale(self, idx: int, value: float):
-        self._scales[idx] = value

     def _prepare_attention_processors(self, unet: UNet2DConditionModel):
         """Prepare a dict of attention processors that can be injected into a unet, and load the IP-Adapter attention
-        weights into them.
+        weights into them (if IP-Adapters are being applied).

         Note that the `unet` param is only used to determine attention block dimensions and naming.
         """
         # Construct a dict of attention processors based on the UNet's architecture.
         attn_procs = {}
         for idx, name in enumerate(unet.attn_processors.keys()):
-            if name.endswith("attn1.processor"):
-                attn_procs[name] = AttnProcessor2_0()
+            if name.endswith("attn1.processor") or self._ip_adapters is None:
+                # "attn1" processors do not use IP-Adapters.
+                attn_procs[name] = CustomAttnProcessor2_0()
             else:
                 # Collect the weights from each IP Adapter for the idx'th attention processor.
-                attn_procs[name] = IPAttnProcessor2_0(
+                attn_procs[name] = CustomAttnProcessor2_0(
                     [ip_adapter.attn_weights.get_attention_processor_weights(idx) for ip_adapter in self._ip_adapters],
-                    self._scales,
                 )
         return attn_procs

     @contextmanager
     def apply_ip_adapter_attention(self, unet: UNet2DConditionModel):
-        """A context manager that patches `unet` with IP-Adapter attention processors."""
+        """A context manager that patches `unet` with CustomAttnProcessor2_0 attention layers."""

         attn_procs = self._prepare_attention_processors(unet)

         orig_attn_processors = unet.attn_processors

         try:
-            # Note to future devs: set_attn_processor(...) does something slightly unexpected - it pops elements from the
-            # passed dict. So, if you wanted to keep the dict for future use, you'd have to make a moderately-shallow copy
-            # of it. E.g. `attn_procs_copy = {k: v for k, v in attn_procs.items()}`.
+            # Note to future devs: set_attn_processor(...) does something slightly unexpected - it pops elements from
+            # the passed dict. So, if you wanted to keep the dict for future use, you'd have to make a
+            # moderately-shallow copy of it. E.g. `attn_procs_copy = {k: v for k, v in attn_procs.items()}`.
             unet.set_attn_processor(attn_procs)
             yield None
         finally:
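To make the patching flow above concrete, here is a minimal sketch of the same save/patch/restore pattern using the public diffusers API directly. The InvokeAI-specific pieces (`UNetAttentionPatcher`, `CustomAttnProcessor2_0`) are replaced with diffusers' stock `AttnProcessor2_0`; treat this as an illustration of the pattern rather than the patcher's real implementation.

```python
from contextlib import contextmanager

from diffusers.models import UNet2DConditionModel
from diffusers.models.attention_processor import AttnProcessor2_0


@contextmanager
def temporarily_patched_attention(unet: UNet2DConditionModel):
    """Swap in new attention processors for the duration of the context, then restore the originals."""
    # Build one processor per attention layer, keyed by the UNet's processor names.
    attn_procs = {name: AttnProcessor2_0() for name in unet.attn_processors.keys()}
    orig_attn_processors = unet.attn_processors
    try:
        # set_attn_processor(...) consumes the dict it is given, so keep a copy if it is needed later.
        unet.set_attn_processor(attn_procs)
        yield None
    finally:
        # Restore the original processors even if denoising raises.
        unet.set_attn_processor(orig_attn_processors)
```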
@@ -2,7 +2,6 @@
 Initialization file for invokeai.backend.util
 """

-from .devices import choose_precision, choose_torch_device
 from .logging import InvokeAILogger
 from .util import GIG, Chdir, directory_size

@@ -11,6 +10,4 @@ __all__ = [
     "directory_size",
     "Chdir",
     "InvokeAILogger",
-    "choose_precision",
-    "choose_torch_device",
 ]
@@ -1,97 +1,109 @@
-from __future__ import annotations
-
-from contextlib import nullcontext
-from typing import Literal, Optional, Union
-
-import torch
-from torch import autocast
-
-from invokeai.app.services.config import InvokeAIAppConfig
-from invokeai.app.services.config.config_default import get_config
-
-CPU_DEVICE = torch.device("cpu")
-CUDA_DEVICE = torch.device("cuda")
-MPS_DEVICE = torch.device("mps")
-RAM_CACHE = None  # horrible hack
-
-
-def choose_torch_device() -> torch.device:
-    """Convenience routine for guessing which GPU device to run model on."""
-    """Temporarily modified to use the model manager's get_execution_device()"""
-    global RAM_CACHE
-    try:
-        device = RAM_CACHE.get_execution_device()
-        return device
-    except (ValueError, AttributeError):
-        config = get_config()
-        if config.device == "auto":
-            if torch.cuda.is_available():
-                return torch.device("cuda")
-            if hasattr(torch.backends, "mps") and torch.backends.mps.is_available():
-                return torch.device("mps")
-            else:
-                return CPU_DEVICE
-        else:
-            return torch.device(config.device)
-
-
-def get_torch_device_name() -> str:
-    device = choose_torch_device()
-    return torch.cuda.get_device_name(device) if device.type == "cuda" else device.type.upper()
-
-
-# We are in transition here from using a single global AppConfig to allowing multiple
-# configurations. It is strongly recommended to pass the app_config to this function.
-def choose_precision(
-    device: torch.device, app_config: Optional[InvokeAIAppConfig] = None
-) -> Literal["float32", "float16", "bfloat16"]:
-    """Return an appropriate precision for the given torch device."""
-    app_config = app_config or get_config()
-    if device.type == "cuda":
-        device_name = torch.cuda.get_device_name(device)
-        if not ("GeForce GTX 1660" in device_name or "GeForce GTX 1650" in device_name):
-            if app_config.precision == "float32":
-                return "float32"
-            elif app_config.precision == "bfloat16":
-                return "bfloat16"
-            else:
-                return "float16"
-    elif device.type == "mps":
-        return "float16"
-    return "float32"
-
-
-# We are in transition here from using a single global AppConfig to allowing multiple
-# configurations. It is strongly recommended to pass the app_config to this function.
-def torch_dtype(
-    device: Optional[torch.device] = None,
-    app_config: Optional[InvokeAIAppConfig] = None,
-) -> torch.dtype:
-    device = device or choose_torch_device()
-    precision = choose_precision(device, app_config)
-    if precision == "float16":
-        return torch.float16
-    if precision == "bfloat16":
-        return torch.bfloat16
-    else:
-        # "auto", "autocast", "float32"
-        return torch.float32
-
-
-def choose_autocast(precision):
-    """Returns an autocast context or nullcontext for the given precision string"""
-    # float16 currently requires autocast to avoid errors like:
-    # 'expected scalar type Half but found Float'
-    if precision == "autocast" or precision == "float16":
-        return autocast
-    return nullcontext
-
-
-def normalize_device(device: Union[str, torch.device]) -> torch.device:
-    """Ensure device has a device index defined, if appropriate."""
-    device = torch.device(device)
-    if device.index is None:
-        # cuda might be the only torch backend that currently uses the device index?
-        # I don't see anything like `current_device` for cpu or mps.
-        if device.type == "cuda":
-            device = torch.device(device.type, torch.cuda.current_device())
-    return device
+from typing import Dict, Literal, Optional, Union
+
+import torch
+from deprecated import deprecated
+
+from invokeai.app.services.config.config_default import get_config
+
+# legacy APIs
+TorchPrecisionNames = Literal["float32", "float16", "bfloat16"]
+CPU_DEVICE = torch.device("cpu")
+CUDA_DEVICE = torch.device("cuda")
+MPS_DEVICE = torch.device("mps")
+
+
+@deprecated("Use TorchDevice.choose_torch_dtype() instead.")  # type: ignore
+def choose_precision(device: torch.device) -> TorchPrecisionNames:
+    """Return the string representation of the recommended torch device."""
+    torch_dtype = TorchDevice.choose_torch_dtype(device)
+    return PRECISION_TO_NAME[torch_dtype]
+
+
+@deprecated("Use TorchDevice.choose_torch_device() instead.")  # type: ignore
+def choose_torch_device() -> torch.device:
+    """Return the torch.device to use for accelerated inference."""
+    return TorchDevice.choose_torch_device()
+
+
+@deprecated("Use TorchDevice.choose_torch_dtype() instead.")  # type: ignore
+def torch_dtype(device: torch.device) -> torch.dtype:
+    """Return the torch precision for the recommended torch device."""
+    return TorchDevice.choose_torch_dtype(device)
+
+
+NAME_TO_PRECISION: Dict[TorchPrecisionNames, torch.dtype] = {
+    "float32": torch.float32,
+    "float16": torch.float16,
+    "bfloat16": torch.bfloat16,
+}
+PRECISION_TO_NAME: Dict[torch.dtype, TorchPrecisionNames] = {v: k for k, v in NAME_TO_PRECISION.items()}
+
+
+class TorchDevice:
+    """Abstraction layer for torch devices."""
+
+    @classmethod
+    def choose_torch_device(cls) -> torch.device:
+        """Return the torch.device to use for accelerated inference."""
+        app_config = get_config()
+        if app_config.device != "auto":
+            device = torch.device(app_config.device)
+        elif torch.cuda.is_available():
+            device = CUDA_DEVICE
+        elif torch.backends.mps.is_available():
+            device = MPS_DEVICE
+        else:
+            device = CPU_DEVICE
+        return cls.normalize(device)
+
+    @classmethod
+    def choose_torch_dtype(cls, device: Optional[torch.device] = None) -> torch.dtype:
+        """Return the precision to use for accelerated inference."""
+        device = device or cls.choose_torch_device()
+        config = get_config()
+        if device.type == "cuda" and torch.cuda.is_available():
+            device_name = torch.cuda.get_device_name(device)
+            if "GeForce GTX 1660" in device_name or "GeForce GTX 1650" in device_name:
+                # These GPUs have limited support for float16
+                return cls._to_dtype("float32")
+            elif config.precision == "auto":
+                # Default to float16 for CUDA devices
+                return cls._to_dtype("float16")
+            else:
+                # Use the user-defined precision
+                return cls._to_dtype(config.precision)
+        elif device.type == "mps" and torch.backends.mps.is_available():
+            if config.precision == "auto":
+                # Default to float16 for MPS devices
+                return cls._to_dtype("float16")
+            else:
+                # Use the user-defined precision
+                return cls._to_dtype(config.precision)
+        # CPU / safe fallback
+        return cls._to_dtype("float32")
+
+    @classmethod
+    def get_torch_device_name(cls) -> str:
+        """Return the device name for the current torch device."""
+        device = cls.choose_torch_device()
+        return torch.cuda.get_device_name(device) if device.type == "cuda" else device.type.upper()
+
+    @classmethod
+    def normalize(cls, device: Union[str, torch.device]) -> torch.device:
+        """Add the device index to CUDA devices."""
+        device = torch.device(device)
+        if device.index is None and device.type == "cuda" and torch.cuda.is_available():
+            device = torch.device(device.type, torch.cuda.current_device())
+        return device
+
+    @classmethod
+    def empty_cache(cls) -> None:
+        """Clear the GPU device cache."""
+        if torch.backends.mps.is_available():
+            torch.mps.empty_cache()
+        if torch.cuda.is_available():
+            torch.cuda.empty_cache()
+
+    @classmethod
+    def _to_dtype(cls, precision_name: TorchPrecisionNames) -> torch.dtype:
+        return NAME_TO_PRECISION[precision_name]
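A short usage sketch of the new `TorchDevice` helper introduced above. The classmethods shown are taken directly from the diff; the import path is assumed to be `invokeai.backend.util.devices`, matching the `__init__.py` hunk that previously re-exported `choose_torch_device` from `.devices`.

```python
import torch

from invokeai.backend.util.devices import TorchDevice

device = TorchDevice.choose_torch_device()  # e.g. cuda:0, mps, or cpu
dtype = TorchDevice.choose_torch_dtype()    # float16 on most GPUs, float32 on CPU / GTX 16xx cards

# Allocate working tensors on the selected device and precision.
latents = torch.zeros(1, 4, 64, 64, device=device, dtype=dtype)

# Free cached GPU memory after a large operation.
TorchDevice.empty_cache()
```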
53
invokeai/backend/util/mask.py
Normal file
@@ -0,0 +1,53 @@
+import torch
+
+
+def to_standard_mask_dim(mask: torch.Tensor) -> torch.Tensor:
+    """Standardize the dimensions of a mask tensor.
+
+    Args:
+        mask (torch.Tensor): A mask tensor. The shape can be (1, h, w) or (h, w).
+
+    Returns:
+        torch.Tensor: The output mask tensor. The shape is (1, h, w).
+    """
+    # Get the mask height and width.
+    if mask.ndim == 2:
+        mask = mask.unsqueeze(0)
+    elif mask.ndim == 3 and mask.shape[0] == 1:
+        pass
+    else:
+        raise ValueError(f"Unsupported mask shape: {mask.shape}. Expected (1, h, w) or (h, w).")
+
+    return mask
+
+
+def to_standard_float_mask(mask: torch.Tensor, out_dtype: torch.dtype) -> torch.Tensor:
+    """Standardize the format of a mask tensor.
+
+    Args:
+        mask (torch.Tensor): A mask tensor. The dtype can be any bool, float, or int type. The shape must be (1, h, w)
+            or (h, w).
+
+        out_dtype (torch.dtype): The dtype of the output mask tensor. Must be a float type.
+
+    Returns:
+        torch.Tensor: The output mask tensor. The dtype is out_dtype. The shape is (1, h, w). All values are either 0.0
+            or 1.0.
+    """
+
+    if not out_dtype.is_floating_point:
+        raise ValueError(f"out_dtype must be a float type, but got {out_dtype}")
+
+    mask = to_standard_mask_dim(mask)
+    mask = mask.to(out_dtype)
+
+    # Set masked regions to 1.0.
+    if mask.dtype == torch.bool:
+        mask = mask.to(out_dtype)
+    else:
+        mask = mask.to(out_dtype)
+        mask_region = mask > 0.5
+        mask[mask_region] = 1.0
+        mask[~mask_region] = 0.0
+
+    return mask
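A quick usage sketch of the new mask helpers (the file path comes from the diff; behaviour follows the docstrings above):

```python
import torch

from invokeai.backend.util.mask import to_standard_float_mask

# A boolean (h, w) mask becomes a float (1, h, w) mask of 0.0 / 1.0 values.
bool_mask = torch.zeros(64, 64, dtype=torch.bool)
bool_mask[16:48, 16:48] = True

float_mask = to_standard_float_mask(bool_mask, out_dtype=torch.float16)
print(float_mask.shape, float_mask.dtype)  # torch.Size([1, 64, 64]) torch.float16
```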
@@ -8,7 +8,7 @@
     <meta http-equiv="Pragma" content="no-cache">
     <meta http-equiv="Expires" content="0">
     <title>Invoke - Community Edition</title>
-    <link rel="icon" type="icon" href="assets/images/invoke-favicon.svg" />
+    <link id="invoke-favicon" rel="icon" type="icon" href="assets/images/invoke-favicon.svg" />
     <style>
       html,
       body {
@@ -23,4 +23,4 @@
     <script type="module" src="/src/main.tsx"></script>
   </body>

 </html>
@@ -1,6 +1,7 @@
 import type { KnipConfig } from 'knip';

 const config: KnipConfig = {
+  project: ['src/**/*.{ts,tsx}!'],
   ignore: [
     // This file is only used during debugging
     'src/app/store/middleware/debugLoggerMiddleware.ts',
@@ -10,6 +11,9 @@ const config: KnipConfig = {
     'src/features/nodes/types/v2/**',
   ],
   ignoreBinaries: ['only-allow'],
+  paths: {
+    'public/*': ['public/*'],
+  },
 };

 export default config;
@@ -24,7 +24,7 @@
     "build": "pnpm run lint && vite build",
     "typegen": "node scripts/typegen.js",
     "preview": "vite preview",
-    "lint:knip": "knip --tags=-@knipignore",
+    "lint:knip": "knip",
     "lint:dpdm": "dpdm --no-warning --no-tree --transform --exit-code circular:1 src/main.tsx",
     "lint:eslint": "eslint --max-warnings=0 .",
     "lint:prettier": "prettier --check .",
@@ -52,6 +52,7 @@
   },
   "dependencies": {
     "@chakra-ui/react-use-size": "^2.1.0",
+    "@dagrejs/dagre": "^1.1.1",
     "@dagrejs/graphlib": "^2.2.1",
     "@dnd-kit/core": "^6.1.0",
     "@dnd-kit/sortable": "^8.0.0",
@@ -94,6 +95,7 @@
     "reactflow": "^11.10.4",
     "redux-dynamic-middlewares": "^2.2.0",
     "redux-remember": "^5.1.0",
+    "rfdc": "^1.3.1",
     "roarr": "^7.21.1",
     "serialize-error": "^11.0.3",
     "socket.io-client": "^4.7.5",
@@ -11,6 +11,9 @@ dependencies:
   '@chakra-ui/react-use-size':
     specifier: ^2.1.0
     version: 2.1.0(react@18.2.0)
+  '@dagrejs/dagre':
+    specifier: ^1.1.1
+    version: 1.1.1
   '@dagrejs/graphlib':
     specifier: ^2.2.1
     version: 2.2.1
@@ -137,6 +140,9 @@ dependencies:
   redux-remember:
     specifier: ^5.1.0
     version: 5.1.0(redux@5.0.1)
+  rfdc:
+    specifier: ^1.3.1
+    version: 1.3.1
   roarr:
     specifier: ^7.21.1
     version: 7.21.1
@@ -3089,6 +3095,12 @@ packages:
     dev: true
     optional: true

+  /@dagrejs/dagre@1.1.1:
+    resolution: {integrity: sha512-AQfT6pffEuPE32weFzhS/u3UpX+bRXUARIXL7UqLaxz497cN8pjuBlX6axO4IIECE2gBV8eLFQkGCtKX5sDaUA==}
+    dependencies:
+      '@dagrejs/graphlib': 2.2.1
+    dev: false
+
   /@dagrejs/graphlib@2.2.1:
     resolution: {integrity: sha512-xJsN1v6OAxXk6jmNdM+OS/bBE8nDCwM0yDNprXR18ZNatL6to9ggod9+l2XtiLhXfLm0NkE7+Er/cpdlM+SkUA==}
     engines: {node: '>17.0.0'}
@@ -12128,6 +12140,10 @@ packages:
     resolution: {integrity: sha512-/x8uIPdTafBqakK0TmPNJzgkLP+3H+yxpUJhCQHsLBg1rYEVNR2D8BRYNWQhVBjyOd7oo1dZRVzIkwMY2oqfYQ==}
     dev: true

+  /rfdc@1.3.1:
+    resolution: {integrity: sha512-r5a3l5HzYlIC68TpmYKlxWjmOP6wiPJ1vWv2HeLhNsRZMrCkxeqxiHlQ21oXmQ4F3SiryXBHhAD7JZqvOJjFmg==}
+    dev: false
+
   /rimraf@2.6.3:
     resolution: {integrity: sha512-mwqeW5XsA2qAejG46gYdENaxXjx9onRNCfn7L0duuP4hCuTIi/QO7PDK07KJfp1d+izWPrzEJDcSqBa0OZQriA==}
     hasBin: true
@@ -0,0 +1,5 @@
+<svg width="16" height="16" viewBox="0 0 16 16" fill="none" xmlns="http://www.w3.org/2000/svg">
+<rect width="16" height="16" rx="2" fill="#E6FD13"/>
+<path d="M9.61889 5.45H12.5V3.5H3.5V5.45H6.38111L9.61889 10.55H12.5V12.5H3.5V10.55H6.38111" stroke="black"/>
+<circle cx="12" cy="4" r="3" fill="#f5480c" stroke="#0d1117" stroke-width="1"/>
+</svg>
After Width: | Height: | Size: 345 B |
@@ -291,7 +291,6 @@
 "canvasMerged": "تم دمج الخط",
 "sentToImageToImage": "تم إرسال إلى صورة إلى صورة",
 "sentToUnifiedCanvas": "تم إرسال إلى لوحة موحدة",
-"parametersSet": "تم تعيين المعلمات",
 "parametersNotSet": "لم يتم تعيين المعلمات",
 "metadataLoadFailed": "فشل تحميل البيانات الوصفية"
 },
@@ -4,7 +4,7 @@
 "reportBugLabel": "Fehler melden",
 "settingsLabel": "Einstellungen",
 "img2img": "Bild zu Bild",
-"nodes": "Knoten Editor",
+"nodes": "Arbeitsabläufe",
 "upload": "Hochladen",
 "load": "Laden",
 "statusDisconnected": "Getrennt",
@@ -74,7 +74,9 @@
 "updated": "Aktualisiert",
 "copy": "Kopieren",
 "aboutHeading": "Nutzen Sie Ihre kreative Energie",
-"toResolve": "Lösen"
+"toResolve": "Lösen",
+"add": "Hinzufügen",
+"loglevel": "Protokoll Stufe"
 },
 "gallery": {
 "galleryImageSize": "Bildgröße",
@@ -104,11 +106,16 @@
 "dropToUpload": "$t(gallery.drop) zum hochladen",
 "dropOrUpload": "$t(gallery.drop) oder hochladen",
 "drop": "Ablegen",
-"problemDeletingImages": "Problem beim Löschen der Bilder"
+"problemDeletingImages": "Problem beim Löschen der Bilder",
+"bulkDownloadRequested": "Download vorbereiten",
+"bulkDownloadRequestedDesc": "Dein Download wird vorbereitet. Dies kann ein paar Momente dauern.",
+"bulkDownloadRequestFailed": "Problem beim Download vorbereiten",
+"bulkDownloadFailed": "Download fehlgeschlagen",
+"alwaysShowImageSizeBadge": "Zeige immer Bilder Größe Abzeichen"
 },
 "hotkeys": {
 "keyboardShortcuts": "Tastenkürzel",
-"appHotkeys": "App-Tastenkombinationen",
+"appHotkeys": "App",
 "generalHotkeys": "Allgemein",
 "galleryHotkeys": "Galerie",
 "unifiedCanvasHotkeys": "Leinwand",
@@ -382,7 +389,14 @@
 "vaePrecision": "VAE-Präzision",
 "variant": "Variante",
 "modelDeleteFailed": "Modell konnte nicht gelöscht werden",
-"noModelSelected": "Kein Modell ausgewählt"
+"noModelSelected": "Kein Modell ausgewählt",
+"huggingFace": "HuggingFace",
+"defaultSettings": "Standardeinstellungen",
+"edit": "Bearbeiten",
+"cancel": "Stornieren",
+"defaultSettingsSaved": "Standardeinstellungen gespeichert",
+"addModels": "Model hinzufügen",
+"deleteModelImage": "Lösche Model Bild"
 },
 "parameters": {
 "images": "Bilder",
@@ -466,7 +480,6 @@
 "canvasMerged": "Leinwand zusammengeführt",
 "sentToImageToImage": "Gesendet an Bild zu Bild",
 "sentToUnifiedCanvas": "Gesendet an Leinwand",
-"parametersSet": "Parameter festlegen",
 "parametersNotSet": "Parameter nicht festgelegt",
 "metadataLoadFailed": "Metadaten konnten nicht geladen werden",
 "setCanvasInitialImage": "Ausgangsbild setzen",
@@ -671,7 +684,8 @@
 "body": "Körper",
 "hands": "Hände",
 "dwOpenpose": "DW Openpose",
-"dwOpenposeDescription": "Posenschätzung mit DW Openpose"
+"dwOpenposeDescription": "Posenschätzung mit DW Openpose",
+"selectCLIPVisionModel": "Wähle ein CLIP Vision Model aus"
 },
 "queue": {
 "status": "Status",
@@ -757,7 +771,12 @@
 "scheduler": "Planer",
 "noRecallParameters": "Es wurden keine Parameter zum Abrufen gefunden",
 "recallParameters": "Parameter wiederherstellen",
-"cfgRescaleMultiplier": "$t(parameters.cfgRescaleMultiplier)"
+"cfgRescaleMultiplier": "$t(parameters.cfgRescaleMultiplier)",
+"allPrompts": "Alle Prompts",
+"imageDimensions": "Bilder Auslösungen",
+"parameterSet": "Parameter {{parameter}} setzen",
+"recallParameter": "{{label}} Abrufen",
+"parsingFailed": "Parsing Fehlgeschlagen"
 },
 "popovers": {
 "noiseUseCPU": {
@@ -1022,7 +1041,8 @@
 "title": "Bild"
 },
 "advanced": {
-"title": "Erweitert"
+"title": "Erweitert",
+"options": "$t(accordions.advanced.title) Optionen"
 },
 "control": {
 "title": "Kontrolle"
@@ -1068,5 +1088,10 @@
 },
 "dynamicPrompts": {
 "showDynamicPrompts": "Dynamische Prompts anzeigen"
+},
+"prompt": {
+"noMatchingTriggers": "Keine passenden Auslöser",
+"addPromptTrigger": "Auslöse Text hinzufügen",
+"compatibleEmbeddings": "Kompatible Einbettungen"
 }
 }
@@ -217,6 +217,7 @@
 "saveControlImage": "Save Control Image",
 "scribble": "scribble",
 "selectModel": "Select a model",
+"selectCLIPVisionModel": "Select a CLIP Vision model",
 "setControlImageDimensions": "Set Control Image Dimensions To W/H",
 "showAdvanced": "Show Advanced",
 "small": "Small",
@@ -325,7 +326,8 @@
 "drop": "Drop",
 "dropOrUpload": "$t(gallery.drop) or Upload",
 "dropToUpload": "$t(gallery.drop) to Upload",
-"deleteImage": "Delete Image",
+"deleteImage_one": "Delete Image",
+"deleteImage_other": "Delete {{count}} Images",
 "deleteImageBin": "Deleted images will be sent to your operating system's Bin.",
 "deleteImagePermanent": "Deleted images cannot be restored.",
 "download": "Download",
@@ -655,6 +657,7 @@
 "install": "Install",
 "installAll": "Install All",
 "installRepo": "Install Repo",
+"ipAdapters": "IP Adapters",
 "load": "Load",
 "localOnly": "local only",
 "manual": "Manual",
@@ -682,6 +685,7 @@
 "noModelsInstalled": "No Models Installed",
 "noModelsInstalledDesc1": "Install models with the",
 "noModelSelected": "No Model Selected",
+"noMatchingModels": "No matching Models",
 "none": "none",
 "path": "Path",
 "pathToConfig": "Path To Config",
@@ -766,6 +770,8 @@
 "float": "Float",
 "fullyContainNodes": "Fully Contain Nodes to Select",
 "fullyContainNodesHelp": "Nodes must be fully inside the selection box to be selected",
+"showEdgeLabels": "Show Edge Labels",
+"showEdgeLabelsHelp": "Show labels on edges, indicating the connected nodes",
 "hideLegendNodes": "Hide Field Type Legend",
 "hideMinimapnodes": "Hide MiniMap",
 "inputMayOnlyHaveOneConnection": "Input may only have one connection",
@@ -846,6 +852,7 @@
 "version": "Version",
 "versionUnknown": " Version Unknown",
 "workflow": "Workflow",
+"graph": "Graph",
 "workflowAuthor": "Author",
 "workflowContact": "Contact",
 "workflowDescription": "Short Description",
@@ -885,6 +892,11 @@
 "imageFit": "Fit Initial Image To Output Size",
 "images": "Images",
 "infillMethod": "Infill Method",
+"infillMosaicTileWidth": "Tile Width",
+"infillMosaicTileHeight": "Tile Height",
+"infillMosaicMinColor": "Min Color",
+"infillMosaicMaxColor": "Max Color",
+"infillColorValue": "Fill Color",
 "info": "Info",
 "invoke": {
 "addingImagesTo": "Adding images to",
@@ -1033,10 +1045,10 @@
 "metadataLoadFailed": "Failed to load metadata",
 "modelAddedSimple": "Model Added to Queue",
 "modelImportCanceled": "Model Import Canceled",
+"parameters": "Parameters",
 "parameterNotSet": "{{parameter}} not set",
 "parameterSet": "{{parameter}} set",
 "parametersNotSet": "Parameters Not Set",
-"parametersSet": "Parameters Set",
 "problemCopyingCanvas": "Problem Copying Canvas",
 "problemCopyingCanvasDesc": "Unable to export base layer",
 "problemCopyingImage": "Unable to Copy Image",
@@ -1415,6 +1427,8 @@
 "eraseBoundingBox": "Erase Bounding Box",
 "eraser": "Eraser",
 "fillBoundingBox": "Fill Bounding Box",
+"hideBoundingBox": "Hide Bounding Box",
+"initialFitImageSize": "Fit Image Size on Drop",
 "invertBrushSizeScrollDirection": "Invert Scroll for Brush Size",
 "layer": "Layer",
 "limitStrokesToBox": "Limit Strokes to Box",
@@ -1431,6 +1445,7 @@
 "saveMask": "Save $t(unifiedCanvas.mask)",
 "saveToGallery": "Save To Gallery",
 "scaledBoundingBox": "Scaled Bounding Box",
+"showBoundingBox": "Show Bounding Box",
 "showCanvasDebugInfo": "Show Additional Canvas Info",
 "showGrid": "Show Grid",
 "showResultsOn": "Show Results (On)",
@@ -1473,7 +1488,11 @@
 "workflowName": "Workflow Name",
 "newWorkflowCreated": "New Workflow Created",
 "workflowCleared": "Workflow Cleared",
-"workflowEditorMenu": "Workflow Editor Menu"
+"workflowEditorMenu": "Workflow Editor Menu",
+"loadFromGraph": "Load Workflow from Graph",
+"convertGraph": "Convert Graph",
+"loadWorkflow": "$t(common.load) Workflow",
+"autoLayout": "Auto Layout"
 },
 "app": {
 "storeNotInitialized": "Store is not initialized"
@@ -363,7 +363,6 @@
 "canvasMerged": "Lienzo consolidado",
 "sentToImageToImage": "Enviar hacia Imagen a Imagen",
 "sentToUnifiedCanvas": "Enviar hacia Lienzo Consolidado",
-"parametersSet": "Parámetros establecidos",
 "parametersNotSet": "Parámetros no establecidos",
 "metadataLoadFailed": "Error al cargar metadatos",
 "serverError": "Error en el servidor",
@@ -298,7 +298,6 @@
 "canvasMerged": "Canvas fusionné",
 "sentToImageToImage": "Envoyé à Image à Image",
 "sentToUnifiedCanvas": "Envoyé à Canvas unifié",
-"parametersSet": "Paramètres définis",
 "parametersNotSet": "Paramètres non définis",
 "metadataLoadFailed": "Échec du chargement des métadonnées"
 },
@@ -306,7 +306,6 @@
 "canvasMerged": "קנבס מוזג",
 "sentToImageToImage": "נשלח לתמונה לתמונה",
 "sentToUnifiedCanvas": "נשלח אל קנבס מאוחד",
-"parametersSet": "הגדרת פרמטרים",
 "parametersNotSet": "פרמטרים לא הוגדרו",
 "metadataLoadFailed": "טעינת מטא-נתונים נכשלה"
 },
|
@ -73,7 +73,8 @@
|
|||||||
"ai": "ia",
|
"ai": "ia",
|
||||||
"file": "File",
|
"file": "File",
|
||||||
"toResolve": "Da risolvere",
|
"toResolve": "Da risolvere",
|
||||||
"add": "Aggiungi"
|
"add": "Aggiungi",
|
||||||
|
"loglevel": "Livello di log"
|
||||||
},
|
},
|
||||||
"gallery": {
|
"gallery": {
|
||||||
"galleryImageSize": "Dimensione dell'immagine",
|
"galleryImageSize": "Dimensione dell'immagine",
|
||||||
@ -365,7 +366,7 @@
|
|||||||
"modelConverted": "Modello convertito",
|
"modelConverted": "Modello convertito",
|
||||||
"alpha": "Alpha",
|
"alpha": "Alpha",
|
||||||
"convertToDiffusersHelpText1": "Questo modello verrà convertito nel formato 🧨 Diffusori.",
|
"convertToDiffusersHelpText1": "Questo modello verrà convertito nel formato 🧨 Diffusori.",
|
||||||
"convertToDiffusersHelpText3": "Il file Checkpoint su disco verrà eliminato se si trova nella cartella principale di InvokeAI. Se si trova invece in una posizione personalizzata, NON verrà eliminato.",
|
"convertToDiffusersHelpText3": "Il file del modello su disco verrà eliminato se si trova nella cartella principale di InvokeAI. Se si trova invece in una posizione personalizzata, NON verrà eliminato.",
|
||||||
"v2_base": "v2 (512px)",
|
"v2_base": "v2 (512px)",
|
||||||
"v2_768": "v2 (768px)",
|
"v2_768": "v2 (768px)",
|
||||||
"none": "nessuno",
|
"none": "nessuno",
|
||||||
@@ -442,7 +443,9 @@
 "noModelsInstalled": "Nessun modello installato",
 "hfTokenInvalidErrorMessage2": "Aggiornalo in ",
 "main": "Principali",
-"noModelsInstalledDesc1": "Installa i modelli con"
+"noModelsInstalledDesc1": "Installa i modelli con",
+"ipAdapters": "Adattatori IP",
+"noMatchingModels": "Nessun modello corrispondente"
 },
 "parameters": {
 "images": "Immagini",
@@ -524,7 +527,12 @@
 "aspect": "Aspetto",
 "setToOptimalSizeTooLarge": "$t(parameters.setToOptimalSize) (potrebbe essere troppo grande)",
 "remixImage": "Remixa l'immagine",
-"coherenceEdgeSize": "Dim. bordo"
+"coherenceEdgeSize": "Dim. bordo",
+"infillMosaicTileWidth": "Larghezza piastrella",
+"infillMosaicMinColor": "Colore minimo",
+"infillMosaicMaxColor": "Colore massimo",
+"infillMosaicTileHeight": "Altezza piastrella",
+"infillColorValue": "Colore di riempimento"
 },
 "settings": {
 "models": "Modelli",
@@ -567,7 +575,6 @@
 "canvasMerged": "Tela unita",
 "sentToImageToImage": "Inviato a Immagine a Immagine",
 "sentToUnifiedCanvas": "Inviato a Tela Unificata",
-"parametersSet": "Parametri impostati",
 "parametersNotSet": "Parametri non impostati",
 "metadataLoadFailed": "Impossibile caricare i metadati",
 "serverError": "Errore del Server",
|
|||||||
"uploadInitialImage": "Carica l'immagine iniziale",
|
"uploadInitialImage": "Carica l'immagine iniziale",
|
||||||
"problemDownloadingImage": "Impossibile scaricare l'immagine",
|
"problemDownloadingImage": "Impossibile scaricare l'immagine",
|
||||||
"prunedQueue": "Coda ripulita",
|
"prunedQueue": "Coda ripulita",
|
||||||
"modelImportCanceled": "Importazione del modello annullata"
|
"modelImportCanceled": "Importazione del modello annullata",
|
||||||
|
"parameters": "Parametri"
|
||||||
},
|
},
|
||||||
"tooltip": {
|
"tooltip": {
|
||||||
"feature": {
|
"feature": {
|
||||||
@@ -688,7 +696,10 @@
 "coherenceModeBoxBlur": "Sfocatura Box",
 "coherenceModeStaged": "Maschera espansa",
 "invertBrushSizeScrollDirection": "Inverti scorrimento per dimensione pennello",
-"discardCurrent": "Scarta l'attuale"
+"discardCurrent": "Scarta l'attuale",
+"initialFitImageSize": "Adatta dimensione immagine al rilascio",
+"hideBoundingBox": "Nascondi il rettangolo di selezione",
+"showBoundingBox": "Mostra il rettangolo di selezione"
 },
 "accessibility": {
 "invokeProgressBar": "Barra di avanzamento generazione",
@@ -831,7 +842,8 @@
 "editMode": "Modifica nell'editor del flusso di lavoro",
 "resetToDefaultValue": "Ripristina il valore predefinito",
 "noFieldsViewMode": "Questo flusso di lavoro non ha campi selezionati da visualizzare. Visualizza il flusso di lavoro completo per configurare i valori.",
-"edit": "Modifica"
+"edit": "Modifica",
+"graph": "Grafico"
 },
 "boards": {
 "autoAddBoard": "Aggiungi automaticamente bacheca",
@@ -934,7 +946,10 @@
 "base": "Base",
 "lineart": "Linea",
 "controlnet": "$t(controlnet.controlAdapter_one) #{{number}} ($t(common.controlNet))",
-"mediapipeFace": "Mediapipe Volto"
+"mediapipeFace": "Mediapipe Volto",
+"ip_adapter": "$t(controlnet.controlAdapter_one) #{{number}} ($t(common.ipAdapter))",
+"t2i_adapter": "$t(controlnet.controlAdapter_one) #{{number}} ($t(common.t2iAdapter))",
+"selectCLIPVisionModel": "Seleziona un modello CLIP Vision"
 },
 "queue": {
 "queueFront": "Aggiungi all'inizio della coda",
@@ -1342,13 +1357,13 @@
 ]
 },
 "seamlessTilingXAxis": {
-"heading": "Asse X di piastrellatura senza cuciture",
+"heading": "Piastrella senza giunte sull'asse X",
 "paragraphs": [
 "Affianca senza soluzione di continuità un'immagine lungo l'asse orizzontale."
 ]
 },
 "seamlessTilingYAxis": {
-"heading": "Asse Y di piastrellatura senza cuciture",
+"heading": "Piastrella senza giunte sull'asse Y",
 "paragraphs": [
 "Affianca senza soluzione di continuità un'immagine lungo l'asse verticale."
 ]
@@ -1472,7 +1487,11 @@
 "name": "Nome",
 "updated": "Aggiornato",
 "projectWorkflows": "Flussi di lavoro del progetto",
-"opened": "Aperto"
+"opened": "Aperto",
+"convertGraph": "Converti grafico",
+"loadWorkflow": "$t(common.load) Flusso di lavoro",
+"autoLayout": "Disposizione automatica",
+"loadFromGraph": "Carica il flusso di lavoro dal grafico"
 },
 "app": {
 "storeNotInitialized": "Il negozio non è inizializzato"
@@ -1490,7 +1509,8 @@
 "title": "Generazione"
 },
 "advanced": {
-"title": "Avanzate"
+"title": "Avanzate",
+"options": "Opzioni $t(accordions.advanced.title)"
 },
 "image": {
 "title": "Immagine"
@@ -420,7 +420,6 @@
 "canvasMerged": "Canvas samengevoegd",
 "sentToImageToImage": "Gestuurd naar Afbeelding naar afbeelding",
 "sentToUnifiedCanvas": "Gestuurd naar Centraal canvas",
-"parametersSet": "Parameters ingesteld",
 "parametersNotSet": "Parameters niet ingesteld",
 "metadataLoadFailed": "Fout bij laden metagegevens",
 "serverError": "Serverfout",
@@ -267,7 +267,6 @@
 "canvasMerged": "Scalono widoczne warstwy",
 "sentToImageToImage": "Wysłano do Obraz na obraz",
 "sentToUnifiedCanvas": "Wysłano do trybu uniwersalnego",
-"parametersSet": "Ustawiono parametry",
 "parametersNotSet": "Nie ustawiono parametrów",
 "metadataLoadFailed": "Błąd wczytywania metadanych"
 },
@@ -310,7 +310,6 @@
 "canvasMerged": "Tela Fundida",
 "sentToImageToImage": "Mandar Para Imagem Para Imagem",
 "sentToUnifiedCanvas": "Enviada para a Tela Unificada",
-"parametersSet": "Parâmetros Definidos",
 "parametersNotSet": "Parâmetros Não Definidos",
 "metadataLoadFailed": "Falha ao tentar carregar metadados"
 },