diff --git a/README.md b/README.md
index ff06db8d21..f540e7be75 100644
--- a/README.md
+++ b/README.md
@@ -2,21 +2,102 @@
![project hero](https://github.com/invoke-ai/InvokeAI/assets/31807370/6e3728c7-e90e-4711-905c-3b55844ff5be)
-# Invoke - Professional Creative AI Tools for Visual Media
-## To learn more about Invoke, or implement our Business solutions, visit [invoke.com](https://www.invoke.com/about)
-
+# Invoke - Professional Creative AI Tools for Visual Media
+#### To learn more about Invoke, or implement our Business solutions, visit [invoke.com]
-[![discord badge]][discord link]
+[![discord badge]][discord link] [![latest release badge]][latest release link] [![github stars badge]][github stars link] [![github forks badge]][github forks link] [![CI checks on main badge]][CI checks on main link] [![latest commit to main badge]][latest commit to main link] [![github open issues badge]][github open issues link] [![github open prs badge]][github open prs link] [![translation status badge]][translation status link]
-[![latest release badge]][latest release link] [![github stars badge]][github stars link] [![github forks badge]][github forks link]
+
-[![CI checks on main badge]][CI checks on main link] [![latest commit to main badge]][latest commit to main link]
+Invoke is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. Invoke offers an industry-leading web-based UI and serves as the foundation for multiple commercial products.
-[![github open issues badge]][github open issues link] [![github open prs badge]][github open prs link] [![translation status badge]][translation status link]
+[Installation][installation docs] - [Documentation and Tutorials][docs home] - [Bug Reports][github issues] - [Contributing][contributing docs]
+
+
+![Highlighted Features - Canvas and Workflows](https://github.com/invoke-ai/InvokeAI/assets/31807370/708f7a82-084f-4860-bfbe-e2588c53548d)
+
+
+
+## Quick Start
+
+1. Download and unzip the installer from the bottom of the [latest release][latest release link].
+2. Run the installer script.
+
+ - **Windows**: Double-click on the `install.bat` script.
+ - **macOS**: Open a Terminal window, drag the file `install.sh` from Finder into the Terminal, and press enter.
+ - **Linux**: Run `install.sh`.
+
+3. When prompted, enter a location for the install and select your GPU type.
+4. Once the install finishes, find the directory you selected during install. The default location is `C:\Users\Username\invokeai` for Windows or `~/invokeai` for Linux/macOS.
+5. Run the launcher script (`invoke.bat` for Windows, `invoke.sh` for macOS and Linux) the same way you ran the installer script in step 2.
+6. Select option 1 to start the application. Once it starts up, open your browser and go to <http://localhost:9090>.
+7. Open the model manager tab to install a starter model and then you'll be ready to generate.
+
+More details, including hardware requirements and manual install instructions, are available in the [installation documentation][installation docs].
+
+## Troubleshooting, FAQ and Support
+
+Please review our [FAQ][faq] for solutions to common installation problems and other issues.
+
+For more help, please join our [Discord][discord link].
+
+## Features
+
+Full details on features can be found in [our documentation][features docs].
+
+### Web Server & UI
+
+Invoke runs a locally hosted web server & React UI with an industry-leading user experience.
+
+### Unified Canvas
+
+The Unified Canvas is a fully integrated canvas implementation with support for all core generation capabilities, inpainting/outpainting, brush tools, and more. This creative tool unlocks the capability for artists to create with AI as a creative collaborator, and can be used to augment AI-generated imagery, sketches, photography, renders, and more.
+
+### Workflows & Nodes
+
+Invoke offers a fully featured workflow management solution, enabling users to combine the power of node-based workflows with the ease of a UI. This allows customizable generation pipelines to be developed and shared by users looking to create specific workflows to support their production use cases.
+
+### Board & Gallery Management
+
+Invoke features an organized gallery system for easily storing, accessing, and remixing your content in the Invoke workspace. Images can be dragged and dropped onto any image-based UI element in the application, and rich metadata within the image allows for easy recall of key prompts or settings used in your workflow.
+
+### Other features
+
+- Support for both ckpt and diffusers models
+- SD1.5, SD2.0, and SDXL support
+- Upscaling Tools
+- Embedding Manager & Support
+- Model Manager & Support
+- Workflow creation & management
+- Node-Based Architecture
+
+## Contributing
+
+Anyone who wishes to contribute to this project - whether documentation, features, bug fixes, code cleanup, testing, or code reviews - is very much encouraged to do so.
+
+Get started with contributing by reading our [contribution documentation][contributing docs], joining the [#dev-chat] or the GitHub discussion board.
+
+We hope you enjoy using Invoke as much as we enjoy creating it, and we hope you will elect to become part of our community.
+
+## Thanks
+
+Invoke is a combined effort of [passionate and talented people from across the world][contributors]. We thank them for their time, hard work and effort.
+
+Original portions of the software are Copyright © 2024 by respective contributors.
+
+[features docs]: https://invoke-ai.github.io/InvokeAI/features/
+[faq]: https://invoke-ai.github.io/InvokeAI/help/FAQ/
+[contributors]: https://invoke-ai.github.io/InvokeAI/other/CONTRIBUTORS/
+[invoke.com]: https://www.invoke.com/about
+[github issues]: https://github.com/invoke-ai/InvokeAI/issues
+[docs home]: https://invoke-ai.github.io/InvokeAI
+[installation docs]: https://invoke-ai.github.io/InvokeAI/installation/INSTALLATION/
+[#dev-chat]: https://discord.com/channels/1020123559063990373/1049495067846524939
+[contributing docs]: https://invoke-ai.github.io/InvokeAI/contributing/CONTRIBUTING/
[CI checks on main badge]: https://flat.badgen.net/github/checks/invoke-ai/InvokeAI/main?label=CI%20status%20on%20main&cache=900&icon=github
-[CI checks on main link]:https://github.com/invoke-ai/InvokeAI/actions?query=branch%3Amain
+[CI checks on main link]: https://github.com/invoke-ai/InvokeAI/actions?query=branch%3Amain
[discord badge]: https://flat.badgen.net/discord/members/ZmtBAhwWhy?icon=discord
[discord link]: https://discord.gg/ZmtBAhwWhy
[github forks badge]: https://flat.badgen.net/github/forks/invoke-ai/InvokeAI?icon=github
@@ -30,402 +111,6 @@
[latest commit to main badge]: https://flat.badgen.net/github/last-commit/invoke-ai/InvokeAI/main?icon=github&color=yellow&label=last%20dev%20commit&cache=900
[latest commit to main link]: https://github.com/invoke-ai/InvokeAI/commits/main
[latest release badge]: https://flat.badgen.net/github/release/invoke-ai/InvokeAI/development?icon=github
-[latest release link]: https://github.com/invoke-ai/InvokeAI/releases
+[latest release link]: https://github.com/invoke-ai/InvokeAI/releases/latest
[translation status badge]: https://hosted.weblate.org/widgets/invokeai/-/svg-badge.svg
[translation status link]: https://hosted.weblate.org/engage/invokeai/
-
-
-
-InvokeAI is a leading creative engine built to empower professionals
-and enthusiasts alike. Generate and create stunning visual media using
-the latest AI-driven technologies. InvokeAI offers an industry leading
-Web Interface, interactive Command Line Interface, and also serves as
-the foundation for multiple commercial products.
-
-**Quick links**: [[How to
- Install](https://invoke-ai.github.io/InvokeAI/installation/INSTALLATION/)] [Discord Server ] [Documentation and
- Tutorials ]
- [Bug Reports ]
- [Discussion,
- Ideas & Q&A ]
- [Contributing ]
-
-
-
-
-![Highlighted Features - Canvas and Workflows](https://github.com/invoke-ai/InvokeAI/assets/31807370/708f7a82-084f-4860-bfbe-e2588c53548d)
-
-
-
-
-## Table of Contents
-
-Table of Contents 📝
-
-**Getting Started**
-1. 🏁 [Quick Start](#quick-start)
-3. 🖥️ [Hardware Requirements](#hardware-requirements)
-
-**More About Invoke**
-1. 🌟 [Features](#features)
-2. 📣 [Latest Changes](#latest-changes)
-3. 🛠️ [Troubleshooting](#troubleshooting)
-
-**Supporting the Project**
-1. 🤝 [Contributing](#contributing)
-2. 👥 [Contributors](#contributors)
-3. 💕 [Support](#support)
-
-## Quick Start
-
-For full installation and upgrade instructions, please see:
-[InvokeAI Installation Overview](https://invoke-ai.github.io/InvokeAI/installation/INSTALLATION/)
-
-If upgrading from version 2.3, please read [Migrating a 2.3 root
-directory to 3.0](#migrating-to-3) first.
-
-### Automatic Installer (suggested for 1st time users)
-
-1. Go to the bottom of the [Latest Release Page](https://github.com/invoke-ai/InvokeAI/releases/latest)
-
-2. Download the .zip file for your OS (Windows/macOS/Linux).
-
-3. Unzip the file.
-
-4. **Windows:** double-click on the `install.bat` script. **macOS:** Open a Terminal window, drag the file `install.sh` from Finder
-into the Terminal, and press return. **Linux:** run `install.sh`.
-
-5. You'll be asked to confirm the location of the folder in which
-to install InvokeAI and its image generation model files. Pick a
-location with at least 15 GB of free memory. More if you plan on
-installing lots of models.
-
-6. Wait while the installer does its thing. After installing the software,
-the installer will launch a script that lets you configure InvokeAI and
-select a set of starting image generation models.
-
-7. Find the folder that InvokeAI was installed into (it is not the
-same as the unpacked zip file directory!) The default location of this
-folder (if you didn't change it in step 5) is `~/invokeai` on
-Linux/Mac systems, and `C:\Users\YourName\invokeai` on Windows. This directory will contain launcher scripts named `invoke.sh` and `invoke.bat`.
-
-8. On Windows systems, double-click on the `invoke.bat` file. On
-macOS, open a Terminal window, drag `invoke.sh` from the folder into
-the Terminal, and press return. On Linux, run `invoke.sh`
-
-9. Press 2 to open the "browser-based UI", press enter/return, wait a
-minute or two for Stable Diffusion to start up, then open your browser
-and go to http://localhost:9090.
-
-10. Type `banana sushi` in the box on the top left and click `Invoke`
-
-### Command-Line Installation (for developers and users familiar with Terminals)
-
-You must have Python 3.10 through 3.11 installed on your machine. Earlier or
-later versions are not supported.
-Node.js also needs to be installed along with `pnpm` (can be installed with
-the command `npm install -g pnpm` if needed)
-
-1. Open a command-line window on your machine. The PowerShell is recommended for Windows.
-2. Create a directory to install InvokeAI into. You'll need at least 15 GB of free space:
-
- ```terminal
- mkdir invokeai
- ````
-
-3. Create a virtual environment named `.venv` inside this directory and activate it:
-
- ```terminal
- cd invokeai
- python -m venv .venv --prompt InvokeAI
- ```
-
-4. Activate the virtual environment (do it every time you run InvokeAI)
-
- _For Linux/Mac users:_
-
- ```sh
- source .venv/bin/activate
- ```
-
- _For Windows users:_
-
- ```ps
- .venv\Scripts\activate
- ```
-
-5. Install the InvokeAI module and its dependencies. Choose the command suited for your platform & GPU.
-
- _For Windows/Linux with an NVIDIA GPU:_
-
- ```terminal
- pip install "InvokeAI[xformers]" --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu121
- ```
-
- _For Linux with an AMD GPU:_
-
- ```sh
- pip install InvokeAI --use-pep517 --extra-index-url https://download.pytorch.org/whl/rocm5.6
- ```
-
- _For non-GPU systems:_
- ```terminal
- pip install InvokeAI --use-pep517 --extra-index-url https://download.pytorch.org/whl/cpu
- ```
-
- _For Macintoshes, either Intel or M1/M2/M3:_
-
- ```sh
- pip install InvokeAI --use-pep517
- ```
-
-6. Configure InvokeAI and install a starting set of image generation models (you only need to do this once):
-
- ```terminal
- invokeai-configure --root .
- ```
- Don't miss the dot at the end!
-
-7. Launch the web server (do it every time you run InvokeAI):
-
- ```terminal
- invokeai-web
- ```
-
-8. Point your browser to http://localhost:9090 to bring up the web interface.
-
-9. Type `banana sushi` in the box on the top left and click `Invoke`.
-
-Be sure to activate the virtual environment each time before re-launching InvokeAI,
-using `source .venv/bin/activate` or `.venv\Scripts\activate`.
-
-## Detailed Installation Instructions
-
-This fork is supported across Linux, Windows and Macintosh. Linux
-users can use either an Nvidia-based card (with CUDA support) or an
-AMD card (using the ROCm driver). For full installation and upgrade
-instructions, please see:
-[InvokeAI Installation Overview](https://invoke-ai.github.io/InvokeAI/installation/INSTALL_SOURCE/)
-
-
-### Migrating a v2.3 InvokeAI root directory
-
-The InvokeAI root directory is where the InvokeAI startup file,
-installed models, and generated images are stored. It is ordinarily
-named `invokeai` and located in your home directory. The contents and
-layout of this directory has changed between versions 2.3 and 3.0 and
-cannot be used directly.
-
-We currently recommend that you use the installer to create a new root
-directory named differently from the 2.3 one, e.g. `invokeai-3` and
-then use a migration script to copy your 2.3 models into the new
-location. However, if you choose, you can upgrade this directory in
-place. This section gives both recipes.
-
-#### Creating a new root directory and migrating old models
-
-This is the safer recipe because it leaves your old root directory in
-place to fall back on.
-
-1. Follow the instructions above to create and install InvokeAI in a
-directory that has a different name from the 2.3 invokeai directory.
-In this example, we will use "invokeai-3"
-
-2. When you are prompted to select models to install, select a minimal
-set of models, such as stable-diffusion-v1.5 only.
-
-3. After installation is complete launch `invokeai.sh` (Linux/Mac) or
-`invokeai.bat` and select option 8 "Open the developers console". This
-will take you to the command line.
-
-4. Issue the command `invokeai-migrate3 --from /path/to/v2.3-root --to
-/path/to/invokeai-3-root`. Provide the correct `--from` and `--to`
-paths for your v2.3 and v3.0 root directories respectively.
-
-This will copy and convert your old models from 2.3 format to 3.0
-format and create a new `models` directory in the 3.0 directory. The
-old models directory (which contains the models selected at install
-time) will be renamed `models.orig` and can be deleted once you have
-confirmed that the migration was successful.
-
- If you wish, you can pass the 2.3 root directory to both `--from` and
-`--to` in order to update in place. Warning: this directory will no
-longer be usable with InvokeAI 2.3.
-
-#### Migrating in place
-
-For the adventurous, you may do an in-place upgrade from 2.3 to 3.0
-without touching the command line. ***This recipe does not work on
-Windows platforms due to a bug in the Windows version of the 2.3
-upgrade script.** See the next section for a Windows recipe.
-
-##### For Mac and Linux Users:
-
-1. Launch the InvokeAI launcher script in your current v2.3 root directory.
-
-2. Select option [9] "Update InvokeAI" to bring up the updater dialog.
-
-3. Select option [1] to upgrade to the latest release.
-
-4. Once the upgrade is finished you will be returned to the launcher
-menu. Select option [6] "Re-run the configure script to fix a broken
-install or to complete a major upgrade".
-
-This will run the configure script against the v2.3 directory and
-update it to the 3.0 format. The following files will be replaced:
-
- - The invokeai.init file, replaced by invokeai.yaml
- - The models directory
- - The configs/models.yaml model index
-
-The original versions of these files will be saved with the suffix
-".orig" appended to the end. Once you have confirmed that the upgrade
-worked, you can safely remove these files. Alternatively you can
-restore a working v2.3 directory by removing the new files and
-restoring the ".orig" files' original names.
-
-##### For Windows Users:
-
-Windows Users can upgrade with the
-
-1. Enter the 2.3 root directory you wish to upgrade
-2. Launch `invoke.sh` or `invoke.bat`
-3. Select the "Developer's console" option [8]
-4. Type the following commands
-
-```
-pip install "invokeai @ https://github.com/invoke-ai/InvokeAI/archive/refs/tags/v3.0.0" --use-pep517 --upgrade
-invokeai-configure --root .
-```
-(Replace `v3.0.0` with the current release number if this document is out of date).
-
-The first command will install and upgrade new software to run
-InvokeAI. The second will prepare the 2.3 directory for use with 3.0.
-You may now launch the WebUI in the usual way, by selecting option [1]
-from the launcher script
-
-#### Migrating Images
-
-The migration script will migrate your invokeai settings and models,
-including textual inversion models, LoRAs and merges that you may have
-installed previously. However it does **not** migrate the generated
-images stored in your 2.3-format outputs directory. To do this, you
-need to run an additional step:
-
-1. From a working InvokeAI 3.0 root directory, start the launcher and
-enter menu option [8] to open the "developer's console".
-
-2. At the developer's console command line, type the command:
-
-```bash
-invokeai-import-images
-```
-
-3. This will lead you through the process of confirming the desired
- source and destination for the imported images. The images will
- appear in the gallery board of your choice, and contain the
- original prompt, model name, and other parameters used to generate
- the image.
-
-(Many kudos to **techjedi** for contributing this script.)
-
-## Hardware Requirements
-
-InvokeAI is supported across Linux, Windows and macOS. Linux
-users can use either an Nvidia-based card (with CUDA support) or an
-AMD card (using the ROCm driver).
-
-### System
-
-You will need one of the following:
-
-- An NVIDIA-based graphics card with 4 GB or more VRAM memory. 6-8 GB
- of VRAM is highly recommended for rendering using the Stable
- Diffusion XL models
-- An Apple computer with an M1 chip.
-- An AMD-based graphics card with 4GB or more VRAM memory (Linux
- only), 6-8 GB for XL rendering.
-
-We do not recommend the GTX 1650 or 1660 series video cards. They are
-unable to run in half-precision mode and do not have sufficient VRAM
-to render 512x512 images.
-
-**Memory** - At least 12 GB Main Memory RAM.
-
-**Disk** - At least 12 GB of free disk space for the machine learning model, Python, and all its dependencies.
-
-## Features
-
-Feature documentation can be reviewed by navigating to [the InvokeAI Documentation page](https://invoke-ai.github.io/InvokeAI/features/)
-
-### *Web Server & UI*
-
-InvokeAI offers a locally hosted Web Server & React Frontend, with an industry leading user experience. The Web-based UI allows for simple and intuitive workflows, and is responsive for use on mobile devices and tablets accessing the web server.
-
-### *Unified Canvas*
-
-The Unified Canvas is a fully integrated canvas implementation with support for all core generation capabilities, in/outpainting, brush tools, and more. This creative tool unlocks the capability for artists to create with AI as a creative collaborator, and can be used to augment AI-generated imagery, sketches, photography, renders, and more.
-
-### *Workflows & Nodes*
-
-InvokeAI offers a fully featured workflow management solution, enabling users to combine the power of nodes based workflows with the easy of a UI. This allows for customizable generation pipelines to be developed and shared by users looking to create specific workflows to support their production use-cases.
-
-### *Board & Gallery Management*
-
-Invoke AI provides an organized gallery system for easily storing, accessing, and remixing your content in the Invoke workspace. Images can be dragged/dropped onto any Image-base UI element in the application, and rich metadata within the Image allows for easy recall of key prompts or settings used in your workflow.
-
-### Other features
-
-- *Support for both ckpt and diffusers models*
-- *SD 2.0, 2.1, XL support*
-- *Upscaling Tools*
-- *Embedding Manager & Support*
-- *Model Manager & Support*
-- *Workflow creation & management*
-- *Node-Based Architecture*
-
-
-### Latest Changes
-
-For our latest changes, view our [Release
-Notes](https://github.com/invoke-ai/InvokeAI/releases) and the
-[CHANGELOG](docs/CHANGELOG.md).
-
-### Troubleshooting / FAQ
-
-Please check out our **[FAQ](https://invoke-ai.github.io/InvokeAI/help/FAQ/)** to get solutions for common installation
-problems and other issues. For more help, please join our [Discord][discord link]
-
-## Contributing
-
-Anyone who wishes to contribute to this project, whether documentation, features, bug fixes, code
-cleanup, testing, or code reviews, is very much encouraged to do so.
-
-Get started with contributing by reading our [Contribution documentation](https://invoke-ai.github.io/InvokeAI/contributing/CONTRIBUTING/), joining the [#dev-chat](https://discord.com/channels/1020123559063990373/1049495067846524939) or the GitHub discussion board.
-
-If you are unfamiliar with how
-to contribute to GitHub projects, we have a new contributor checklist you can follow to get started contributing:
-[New Contributor Checklist](https://invoke-ai.github.io/InvokeAI/contributing/contribution_guides/newContributorChecklist/).
-
-We hope you enjoy using our software as much as we enjoy creating it,
-and we hope that some of those of you who are reading this will elect
-to become part of our community.
-
-Welcome to InvokeAI!
-
-### Contributors
-
-This fork is a combined effort of various people from across the world.
-[Check out the list of all these amazing people](https://invoke-ai.github.io/InvokeAI/other/CONTRIBUTORS/). We thank them for
-their time, hard work and effort.
-
-### Support
-
-For support, please use this repository's GitHub Issues tracking service, or join the [Discord][discord link].
-
-Original portions of the software are Copyright (c) 2023 by respective contributors.
-
diff --git a/docs/features/CONFIGURATION.md b/docs/features/CONFIGURATION.md
index 41f7a3ced3..d6bfe44901 100644
--- a/docs/features/CONFIGURATION.md
+++ b/docs/features/CONFIGURATION.md
@@ -51,13 +51,11 @@ The settings in this file will override the defaults. You only need
to change this file if the default for a particular setting doesn't
work for you.
+You'll find an example file next to `invokeai.yaml` that shows the default values.
+
Some settings, like [Model Marketplace API Keys], require the YAML
to be formatted correctly. Here is a [basic guide to YAML files].
-You can fix a broken `invokeai.yaml` by deleting it and running the
-configuration script again -- option [6] in the launcher, "Re-run the
-configure script".
-
#### Custom Config File Location
You can use any config file with the `--config` CLI arg. Pass in the path to the `invokeai.yaml` file you want to use.
diff --git a/invokeai/app/invocations/controlnet_image_processors.py b/invokeai/app/invocations/controlnet_image_processors.py
index a49c910eeb..6510d2f74a 100644
--- a/invokeai/app/invocations/controlnet_image_processors.py
+++ b/invokeai/app/invocations/controlnet_image_processors.py
@@ -35,22 +35,16 @@ from invokeai.app.invocations.model import ModelIdentifierField
from invokeai.app.invocations.primitives import ImageOutput
from invokeai.app.invocations.util import validate_begin_end_step, validate_weights
from invokeai.app.services.shared.invocation_context import InvocationContext
+from invokeai.app.util.controlnet_utils import CONTROLNET_MODE_VALUES, CONTROLNET_RESIZE_VALUES, heuristic_resize
from invokeai.backend.image_util.canny import get_canny_edges
from invokeai.backend.image_util.depth_anything import DepthAnythingDetector
from invokeai.backend.image_util.dw_openpose import DWOpenposeDetector
from invokeai.backend.image_util.hed import HEDProcessor
from invokeai.backend.image_util.lineart import LineartProcessor
from invokeai.backend.image_util.lineart_anime import LineartAnimeProcessor
+from invokeai.backend.image_util.util import np_to_pil, pil_to_np
-from .baseinvocation import BaseInvocation, BaseInvocationOutput, invocation, invocation_output
-
-CONTROLNET_MODE_VALUES = Literal["balanced", "more_prompt", "more_control", "unbalanced"]
-CONTROLNET_RESIZE_VALUES = Literal[
- "just_resize",
- "crop_resize",
- "fill_resize",
- "just_resize_simple",
-]
+from .baseinvocation import BaseInvocation, BaseInvocationOutput, Classification, invocation, invocation_output
class ControlField(BaseModel):
@@ -641,3 +635,27 @@ class DWOpenposeImageProcessorInvocation(ImageProcessorInvocation):
resolution=self.image_resolution,
)
return processed_image
+
+
+@invocation(
+ "heuristic_resize",
+ title="Heuristic Resize",
+    tags=["image", "controlnet"],
+ category="image",
+ version="1.0.0",
+ classification=Classification.Prototype,
+)
+class HeuristicResizeInvocation(BaseInvocation):
+ """Resize an image using a heuristic method. Preserves edge maps."""
+
+ image: ImageField = InputField(description="The image to resize")
+ width: int = InputField(default=512, gt=0, description="The width to resize to (px)")
+ height: int = InputField(default=512, gt=0, description="The height to resize to (px)")
+
+ def invoke(self, context: InvocationContext) -> ImageOutput:
+ image = context.images.get_pil(self.image.image_name, "RGB")
+ np_img = pil_to_np(image)
+ np_resized = heuristic_resize(np_img, (self.width, self.height))
+ resized = np_to_pil(np_resized)
+ image_dto = context.images.save(image=resized)
+ return ImageOutput.build(image_dto)
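The `heuristic_resize` helper above is imported from `controlnet_utils`, so its implementation is not shown in this diff. The idea its docstring names, preserving edge maps during resize, amounts to choosing a non-interpolating filter for binary inputs. A minimal sketch of that idea (not the actual `heuristic_resize` implementation; the function name and binary check are illustrative):

```python
import numpy as np
from PIL import Image

def resize_preserving_edges(np_img: np.ndarray, size: tuple[int, int]) -> np.ndarray:
    """Resize a single-channel map; use nearest-neighbor for binary edge maps
    so no intermediate gray values are introduced, bilinear otherwise."""
    width, height = size
    is_binary = set(np.unique(np_img).tolist()) <= {0, 255}
    resample = Image.NEAREST if is_binary else Image.BILINEAR
    return np.array(Image.fromarray(np_img).resize((width, height), resample))

# A 4x4 binary edge map upscaled to 8x8 stays strictly binary:
edges = np.zeros((4, 4), dtype=np.uint8)
edges[1, :] = 255
resized = resize_preserving_edges(edges, (8, 8))
```

A bilinear filter would blur the single-pixel edge row into gray values, which downstream ControlNet preprocessors treat as spurious edges; nearest-neighbor keeps the map hard.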
diff --git a/invokeai/app/invocations/latent.py b/invokeai/app/invocations/latent.py
index 4534df89c1..4ad63f4f89 100644
--- a/invokeai/app/invocations/latent.py
+++ b/invokeai/app/invocations/latent.py
@@ -51,6 +51,7 @@ from invokeai.app.util.controlnet_utils import prepare_control_image
from invokeai.backend.ip_adapter.ip_adapter import IPAdapter, IPAdapterPlus
from invokeai.backend.lora import LoRAModelRaw
from invokeai.backend.model_manager import BaseModelType, LoadedModel
+from invokeai.backend.model_manager.config import MainConfigBase, ModelVariantType
from invokeai.backend.model_patcher import ModelPatcher
from invokeai.backend.stable_diffusion import PipelineIntermediateState, set_seamless
from invokeai.backend.stable_diffusion.diffusion.conditioning_data import (
@@ -185,7 +186,7 @@ class GradientMaskOutput(BaseInvocationOutput):
title="Create Gradient Mask",
tags=["mask", "denoise"],
category="latents",
- version="1.0.0",
+ version="1.1.0",
)
class CreateGradientMaskInvocation(BaseInvocation):
"""Creates mask for denoising model run."""
@@ -198,6 +199,32 @@ class CreateGradientMaskInvocation(BaseInvocation):
minimum_denoise: float = InputField(
default=0.0, ge=0, le=1, description="Minimum denoise level for the coherence region", ui_order=4
)
+ image: Optional[ImageField] = InputField(
+ default=None,
+ description="OPTIONAL: Only connect for specialized Inpainting models, masked_latents will be generated from the image with the VAE",
+ title="[OPTIONAL] Image",
+ ui_order=6,
+ )
+ unet: Optional[UNetField] = InputField(
+ description="OPTIONAL: If the Unet is a specialized Inpainting model, masked_latents will be generated from the image with the VAE",
+ default=None,
+ input=Input.Connection,
+ title="[OPTIONAL] UNet",
+ ui_order=5,
+ )
+ vae: Optional[VAEField] = InputField(
+ default=None,
+ description="OPTIONAL: Only connect for specialized Inpainting models, masked_latents will be generated from the image with the VAE",
+ title="[OPTIONAL] VAE",
+ input=Input.Connection,
+ ui_order=7,
+ )
+ tiled: bool = InputField(default=False, description=FieldDescriptions.tiled, ui_order=8)
+ fp32: bool = InputField(
+ default=DEFAULT_PRECISION == "float32",
+ description=FieldDescriptions.fp32,
+ ui_order=9,
+ )
@torch.no_grad()
def invoke(self, context: InvocationContext) -> GradientMaskOutput:
@@ -233,8 +260,27 @@ class CreateGradientMaskInvocation(BaseInvocation):
expanded_mask_image = Image.fromarray((expanded_mask.squeeze(0).numpy() * 255).astype(np.uint8), mode="L")
expanded_image_dto = context.images.save(expanded_mask_image)
+ masked_latents_name = None
+ if self.unet is not None and self.vae is not None and self.image is not None:
+ # all three fields must be present at the same time
+ main_model_config = context.models.get_config(self.unet.unet.key)
+ assert isinstance(main_model_config, MainConfigBase)
+ if main_model_config.variant is ModelVariantType.Inpaint:
+ mask = blur_tensor
+ vae_info: LoadedModel = context.models.load(self.vae.vae)
+ image = context.images.get_pil(self.image.image_name)
+ image_tensor = image_resized_to_grid_as_tensor(image.convert("RGB"))
+ if image_tensor.dim() == 3:
+ image_tensor = image_tensor.unsqueeze(0)
+ img_mask = tv_resize(mask, image_tensor.shape[-2:], T.InterpolationMode.BILINEAR, antialias=False)
+ masked_image = image_tensor * torch.where(img_mask < 0.5, 0.0, 1.0)
+ masked_latents = ImageToLatentsInvocation.vae_encode(
+ vae_info, self.fp32, self.tiled, masked_image.clone()
+ )
+ masked_latents_name = context.tensors.save(tensor=masked_latents)
+
return GradientMaskOutput(
- denoise_mask=DenoiseMaskField(mask_name=mask_name, masked_latents_name=None, gradient=True),
+ denoise_mask=DenoiseMaskField(mask_name=mask_name, masked_latents_name=masked_latents_name, gradient=True),
expanded_mask_area=ImageField(image_name=expanded_image_dto.image_name),
)
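The `masked_image = image_tensor * torch.where(img_mask < 0.5, 0.0, 1.0)` step above zeroes out the pixels the mask marks for inpainting before they are VAE-encoded. A toy NumPy sketch of that masking (values are illustrative):

```python
import numpy as np

# Image pixels and a mask where values < 0.5 mean "masked out".
image = np.array([[0.2, 0.4],
                  [0.6, 0.8]])
mask = np.array([[0.0, 1.0],
                 [1.0, 0.0]])

# Masked-out pixels go to zero; kept pixels pass through unchanged.
masked = image * np.where(mask < 0.5, 0.0, 1.0)
```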
@@ -295,7 +341,7 @@ class DenoiseLatentsInvocation(BaseInvocation):
)
steps: int = InputField(default=10, gt=0, description=FieldDescriptions.steps)
cfg_scale: Union[float, List[float]] = InputField(
- default=7.5, ge=1, description=FieldDescriptions.cfg_scale, title="CFG Scale"
+ default=7.5, description=FieldDescriptions.cfg_scale, title="CFG Scale"
)
denoising_start: float = InputField(
default=0.0,
@@ -517,6 +563,11 @@ class DenoiseLatentsInvocation(BaseInvocation):
dtype=unet.dtype,
)
+ if isinstance(self.cfg_scale, list):
+ assert (
+ len(self.cfg_scale) == self.steps
+ ), "cfg_scale (list) must have the same length as the number of steps"
+
conditioning_data = TextConditioningData(
uncond_text=uncond_text_embedding,
cond_text=cond_text_embedding,
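With the new assertion, a list-valued `cfg_scale` must contain exactly one value per denoising step. For example, a linear ramp from strong to weak guidance (the schedule and its endpoints are illustrative, not a recommended setting):

```python
steps = 10
start, end = 7.5, 3.0

# One CFG value per denoising step, interpolated linearly.
cfg_scale = [start + (end - start) * i / (steps - 1) for i in range(steps)]

assert len(cfg_scale) == steps  # the invocation enforces this for list inputs
```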
diff --git a/invokeai/app/invocations/mask.py b/invokeai/app/invocations/mask.py
index a7f3207764..6f54660847 100644
--- a/invokeai/app/invocations/mask.py
+++ b/invokeai/app/invocations/mask.py
@@ -1,7 +1,8 @@
+import numpy as np
import torch
-from invokeai.app.invocations.baseinvocation import BaseInvocation, InvocationContext, invocation
-from invokeai.app.invocations.fields import InputField, TensorField, WithMetadata
+from invokeai.app.invocations.baseinvocation import BaseInvocation, Classification, InvocationContext, invocation
+from invokeai.app.invocations.fields import ImageField, InputField, TensorField, WithMetadata
from invokeai.app.invocations.primitives import MaskOutput
@@ -34,3 +35,86 @@ class RectangleMaskInvocation(BaseInvocation, WithMetadata):
width=self.width,
height=self.height,
)
+
+
+@invocation(
+ "alpha_mask_to_tensor",
+ title="Alpha Mask to Tensor",
+ tags=["conditioning"],
+ category="conditioning",
+ version="1.0.0",
+ classification=Classification.Beta,
+)
+class AlphaMaskToTensorInvocation(BaseInvocation):
+ """Convert a mask image to a tensor. Opaque regions are 1 and transparent regions are 0."""
+
+ image: ImageField = InputField(description="The mask image to convert.")
+ invert: bool = InputField(default=False, description="Whether to invert the mask.")
+
+ def invoke(self, context: InvocationContext) -> MaskOutput:
+ image = context.images.get_pil(self.image.image_name)
+ mask = torch.zeros((1, image.height, image.width), dtype=torch.bool)
+ if self.invert:
+ mask[0] = torch.tensor(np.array(image)[:, :, 3] == 0, dtype=torch.bool)
+ else:
+ mask[0] = torch.tensor(np.array(image)[:, :, 3] > 0, dtype=torch.bool)
+
+ return MaskOutput(
+ mask=TensorField(tensor_name=context.tensors.save(mask)),
+ height=mask.shape[1],
+ width=mask.shape[2],
+ )
+
+
+@invocation(
+ "invert_tensor_mask",
+ title="Invert Tensor Mask",
+ tags=["conditioning"],
+ category="conditioning",
+ version="1.0.0",
+ classification=Classification.Beta,
+)
+class InvertTensorMaskInvocation(BaseInvocation):
+ """Inverts a tensor mask."""
+
+    mask: TensorField = InputField(description="The tensor mask to invert.")
+
+ def invoke(self, context: InvocationContext) -> MaskOutput:
+ mask = context.tensors.load(self.mask.tensor_name)
+ inverted = ~mask
+
+ return MaskOutput(
+ mask=TensorField(tensor_name=context.tensors.save(inverted)),
+ height=inverted.shape[1],
+ width=inverted.shape[2],
+ )
+
+
+@invocation(
+ "image_mask_to_tensor",
+ title="Image Mask to Tensor",
+ tags=["conditioning"],
+ category="conditioning",
+ version="1.0.0",
+)
+class ImageMaskToTensorInvocation(BaseInvocation, WithMetadata):
+ """Convert a mask image to a tensor. Converts the image to grayscale and uses thresholding at the specified value."""
+
+ image: ImageField = InputField(description="The mask image to convert.")
+ cutoff: int = InputField(ge=0, le=255, description="Cutoff (<)", default=128)
+ invert: bool = InputField(default=False, description="Whether to invert the mask.")
+
+ def invoke(self, context: InvocationContext) -> MaskOutput:
+ image = context.images.get_pil(self.image.image_name, mode="L")
+
+ mask = torch.zeros((1, image.height, image.width), dtype=torch.bool)
+ if self.invert:
+ mask[0] = torch.tensor(np.array(image)[:, :] >= self.cutoff, dtype=torch.bool)
+ else:
+ mask[0] = torch.tensor(np.array(image)[:, :] < self.cutoff, dtype=torch.bool)
+
+ return MaskOutput(
+ mask=TensorField(tensor_name=context.tensors.save(mask)),
+ height=mask.shape[1],
+ width=mask.shape[2],
+ )
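The alpha-channel thresholding inside `AlphaMaskToTensorInvocation` can be sketched without the invocation plumbing. A numpy-only version (torch and the image context omitted; the helper name is an illustration, not part of the PR):

```python
import numpy as np

def alpha_to_bool_mask(rgba: np.ndarray, invert: bool = False) -> np.ndarray:
    """Boolean HxW mask from an RGBA uint8 array: opaque pixels (alpha > 0) are True,
    or the transparent pixels instead when `invert` is set."""
    alpha = rgba[:, :, 3]
    return (alpha == 0) if invert else (alpha > 0)

rgba = np.zeros((2, 2, 4), dtype=np.uint8)
rgba[0, 0, 3] = 255  # one opaque pixel
mask = alpha_to_bool_mask(rgba)
```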
diff --git a/invokeai/app/invocations/metadata.py b/invokeai/app/invocations/metadata.py
index a02d0a57ef..9c7264a9bb 100644
--- a/invokeai/app/invocations/metadata.py
+++ b/invokeai/app/invocations/metadata.py
@@ -3,7 +3,6 @@ from typing import Any, Literal, Optional, Union
from pydantic import BaseModel, ConfigDict, Field
from invokeai.app.invocations.baseinvocation import BaseInvocation, BaseInvocationOutput, invocation, invocation_output
-from invokeai.app.invocations.controlnet_image_processors import CONTROLNET_MODE_VALUES, CONTROLNET_RESIZE_VALUES
from invokeai.app.invocations.fields import (
FieldDescriptions,
ImageField,
@@ -14,6 +13,7 @@ from invokeai.app.invocations.fields import (
)
from invokeai.app.invocations.model import ModelIdentifierField
from invokeai.app.services.shared.invocation_context import InvocationContext
+from invokeai.app.util.controlnet_utils import CONTROLNET_MODE_VALUES, CONTROLNET_RESIZE_VALUES
from ...version import __version__
diff --git a/invokeai/app/invocations/t2i_adapter.py b/invokeai/app/invocations/t2i_adapter.py
index e550a7b313..b22a089d3f 100644
--- a/invokeai/app/invocations/t2i_adapter.py
+++ b/invokeai/app/invocations/t2i_adapter.py
@@ -8,11 +8,11 @@ from invokeai.app.invocations.baseinvocation import (
invocation,
invocation_output,
)
-from invokeai.app.invocations.controlnet_image_processors import CONTROLNET_RESIZE_VALUES
from invokeai.app.invocations.fields import FieldDescriptions, ImageField, Input, InputField, OutputField, UIType
from invokeai.app.invocations.model import ModelIdentifierField
from invokeai.app.invocations.util import validate_begin_end_step, validate_weights
from invokeai.app.services.shared.invocation_context import InvocationContext
+from invokeai.app.util.controlnet_utils import CONTROLNET_RESIZE_VALUES
class T2IAdapterField(BaseModel):
diff --git a/invokeai/app/services/download/download_default.py b/invokeai/app/services/download/download_default.py
index f393a18dcb..7d8229fba1 100644
--- a/invokeai/app/services/download/download_default.py
+++ b/invokeai/app/services/download/download_default.py
@@ -318,10 +318,8 @@ class DownloadQueueService(DownloadQueueServiceBase):
in_progress_path.rename(job.download_path)
def _validate_filename(self, directory: str, filename: str) -> bool:
- pc_name_max = os.pathconf(directory, "PC_NAME_MAX") if hasattr(os, "pathconf") else 260 # hardcoded for windows
- pc_path_max = (
- os.pathconf(directory, "PC_PATH_MAX") if hasattr(os, "pathconf") else 32767
- ) # hardcoded for windows with long names enabled
+ pc_name_max = get_pc_name_max(directory)
+ pc_path_max = get_pc_path_max(directory)
if "/" in filename:
return False
if filename.startswith(".."):
@@ -419,6 +417,26 @@ class DownloadQueueService(DownloadQueueServiceBase):
self._logger.warning(excp)
+def get_pc_name_max(directory: str) -> int:
+ if hasattr(os, "pathconf"):
+ try:
+ return os.pathconf(directory, "PC_NAME_MAX")
+ except OSError:
+            # macOS raises OSError for some external drives
+ pass
+ return 260 # hardcoded for windows
+
+
+def get_pc_path_max(directory: str) -> int:
+ if hasattr(os, "pathconf"):
+ try:
+ return os.pathconf(directory, "PC_PATH_MAX")
+ except OSError:
+ # some platforms may not have this value
+ pass
+ return 32767 # hardcoded for windows with long names enabled
+
+
# Example on_progress event handler to display a TQDM status bar
# Activate with:
# download_service.download(DownloadJob('http://foo.bar/baz', '/tmp', on_progress=TqdmProgress().update))
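Both fallback helpers follow the same pattern: try `os.pathconf` where it exists, and fall back to the hardcoded Windows limit otherwise. A self-contained sketch of `get_pc_name_max` that runs on any platform:

```python
import os

def get_pc_name_max(directory: str) -> int:
    """Maximum filename length for `directory`, falling back to the Windows limit
    of 260 when os.pathconf is unavailable (Windows) or raises (e.g. macOS with
    some external drives)."""
    if hasattr(os, "pathconf"):
        try:
            return os.pathconf(directory, "PC_NAME_MAX")
        except OSError:
            pass
    return 260

limit = get_pc_name_max(".")
```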
diff --git a/invokeai/app/services/model_install/model_install_default.py b/invokeai/app/services/model_install/model_install_default.py
index 6a3117bcb8..6eb9549ef0 100644
--- a/invokeai/app/services/model_install/model_install_default.py
+++ b/invokeai/app/services/model_install/model_install_default.py
@@ -3,7 +3,6 @@
import locale
import os
import re
-import signal
import threading
import time
from hashlib import sha256
@@ -43,6 +42,7 @@ from invokeai.backend.model_manager.metadata.metadata_base import HuggingFaceMet
from invokeai.backend.model_manager.probe import ModelProbe
from invokeai.backend.model_manager.search import ModelSearch
from invokeai.backend.util import InvokeAILogger
+from invokeai.backend.util.catch_sigint import catch_sigint
from invokeai.backend.util.devices import TorchDevice
from .model_install_base import (
@@ -112,17 +112,6 @@ class ModelInstallService(ModelInstallServiceBase):
def start(self, invoker: Optional[Invoker] = None) -> None:
"""Start the installer thread."""
- # Yes, this is weird. When the installer thread is running, the
- # thread masks the ^C signal. When we receive a
- # sigINT, we stop the thread, reset sigINT, and send a new
- # sigINT to the parent process.
- def sigint_handler(signum, frame):
- self.stop()
- signal.signal(signal.SIGINT, signal.SIG_DFL)
- signal.raise_signal(signal.SIGINT)
-
- signal.signal(signal.SIGINT, sigint_handler)
-
with self._lock:
if self._running:
raise Exception("Attempt to start the installer service twice")
@@ -132,7 +121,8 @@ class ModelInstallService(ModelInstallServiceBase):
# In normal use, we do not want to scan the models directory - it should never have orphaned models.
# We should only do the scan when the flag is set (which should only be set when testing).
if self.app_config.scan_models_on_startup:
- self._register_orphaned_models()
+ with catch_sigint():
+ self._register_orphaned_models()
# Check all models' paths and confirm they exist. A model could be missing if it was installed on a volume
# that isn't currently mounted. In this case, we don't want to delete the model from the database, but we do
diff --git a/invokeai/app/services/object_serializer/object_serializer_disk.py b/invokeai/app/services/object_serializer/object_serializer_disk.py
index 354a9b0c04..d3171f8530 100644
--- a/invokeai/app/services/object_serializer/object_serializer_disk.py
+++ b/invokeai/app/services/object_serializer/object_serializer_disk.py
@@ -1,7 +1,7 @@
+import shutil
import tempfile
import threading
import typing
-from dataclasses import dataclass
from pathlib import Path
from typing import TYPE_CHECKING, Optional, TypeVar
@@ -18,12 +18,6 @@ if TYPE_CHECKING:
T = TypeVar("T")
-@dataclass
-class DeleteAllResult:
- deleted_count: int
- freed_space_bytes: float
-
-
class ObjectSerializerDisk(ObjectSerializerBase[T]):
"""Disk-backed storage for arbitrary python objects. Serialization is handled by `torch.save` and `torch.load`.
@@ -36,6 +30,12 @@ class ObjectSerializerDisk(ObjectSerializerBase[T]):
self._ephemeral = ephemeral
self._base_output_dir = output_dir
self._base_output_dir.mkdir(parents=True, exist_ok=True)
+
+ if self._ephemeral:
+ # Remove dangling tempdirs that might have been left over from an earlier unplanned shutdown.
+ for temp_dir in filter(Path.is_dir, self._base_output_dir.glob("tmp*")):
+ shutil.rmtree(temp_dir)
+
# Must specify `ignore_cleanup_errors` to avoid fatal errors during cleanup on Windows
self._tempdir = (
tempfile.TemporaryDirectory(dir=self._base_output_dir, ignore_cleanup_errors=True) if ephemeral else None
diff --git a/invokeai/app/util/controlnet_utils.py b/invokeai/app/util/controlnet_utils.py
index b3e2560211..fde8d52ee6 100644
--- a/invokeai/app/util/controlnet_utils.py
+++ b/invokeai/app/util/controlnet_utils.py
@@ -1,13 +1,21 @@
-from typing import Union
+from typing import Any, Literal, Union
import cv2
import numpy as np
import torch
-from controlnet_aux.util import HWC3
-from diffusers.utils import PIL_INTERPOLATION
from einops import rearrange
from PIL import Image
+from invokeai.backend.image_util.util import nms, normalize_image_channel_count
+
+CONTROLNET_RESIZE_VALUES = Literal[
+ "just_resize",
+ "crop_resize",
+ "fill_resize",
+ "just_resize_simple",
+]
+CONTROLNET_MODE_VALUES = Literal["balanced", "more_prompt", "more_control", "unbalanced"]
+
###################################################################
# Copy of scripts/lvminthin.py from Mikubill/sd-webui-controlnet
###################################################################
@@ -68,17 +76,6 @@ def lvmin_thin(x, prunings=True):
return y
-def nake_nms(x):
- f1 = np.array([[0, 0, 0], [1, 1, 1], [0, 0, 0]], dtype=np.uint8)
- f2 = np.array([[0, 1, 0], [0, 1, 0], [0, 1, 0]], dtype=np.uint8)
- f3 = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=np.uint8)
- f4 = np.array([[0, 0, 1], [0, 1, 0], [1, 0, 0]], dtype=np.uint8)
- y = np.zeros_like(x)
- for f in [f1, f2, f3, f4]:
- np.putmask(y, cv2.dilate(x, kernel=f) == x, x)
- return y
-
-
################################################################################
# copied from Mikubill/sd-webui-controlnet external_code.py and modified for InvokeAI
################################################################################
@@ -134,98 +131,122 @@ def pixel_perfect_resolution(
return int(np.round(estimation))
+def clone_contiguous(x: np.ndarray[Any, Any]) -> np.ndarray[Any, Any]:
+ """Get a memory-contiguous clone of the given numpy array, as a safety measure and to improve computation efficiency."""
+ return np.ascontiguousarray(x).copy()
+
+
+def np_img_to_torch(np_img: np.ndarray[Any, Any], device: torch.device) -> torch.Tensor:
+ """Convert a numpy image to a PyTorch tensor. The image is normalized to 0-1, rearranged to BCHW format and sent to
+ the specified device."""
+
+ torch_img = torch.from_numpy(np_img)
+ normalized = torch_img.float() / 255.0
+ bchw = rearrange(normalized, "h w c -> 1 c h w")
+ on_device = bchw.to(device)
+ return on_device.clone()
+
+
+def heuristic_resize(np_img: np.ndarray[Any, Any], size: tuple[int, int]) -> np.ndarray[Any, Any]:
+ """Resizes an image using a heuristic to choose the best resizing strategy.
+
+ - If the image appears to be an edge map, special handling will be applied to ensure the edges are not distorted.
+ - Single-pixel edge maps use NMS and thinning to keep the edges as single-pixel lines.
+    - Low-color-count images are resized with nearest-neighbor to preserve color information (e.g. segmentation maps).
+ - The alpha channel is handled separately to ensure it is resized correctly.
+
+ Args:
+ np_img (np.ndarray): The input image.
+ size (tuple[int, int]): The target size for the image.
+
+ Returns:
+ np.ndarray: The resized image.
+
+ Adapted from https://github.com/Mikubill/sd-webui-controlnet.
+ """
+
+ # Return early if the image is already at the requested size
+ if np_img.shape[0] == size[1] and np_img.shape[1] == size[0]:
+ return np_img
+
+ # If the image has an alpha channel, separate it for special handling later.
+ inpaint_mask = None
+ if np_img.ndim == 3 and np_img.shape[2] == 4:
+ inpaint_mask = np_img[:, :, 3]
+ np_img = np_img[:, :, 0:3]
+
+ new_size_is_smaller = (size[0] * size[1]) < (np_img.shape[0] * np_img.shape[1])
+ new_size_is_bigger = (size[0] * size[1]) > (np_img.shape[0] * np_img.shape[1])
+ unique_color_count = np.unique(np_img.reshape(-1, np_img.shape[2]), axis=0).shape[0]
+ is_one_pixel_edge = False
+ is_binary = False
+
+ if unique_color_count == 2:
+ # If the image has only two colors, it is likely binary. Check if the image has one-pixel edges.
+ is_binary = np.min(np_img) < 16 and np.max(np_img) > 240
+ if is_binary:
+ eroded = cv2.erode(np_img, np.ones(shape=(3, 3), dtype=np.uint8), iterations=1)
+ dilated = cv2.dilate(eroded, np.ones(shape=(3, 3), dtype=np.uint8), iterations=1)
+ one_pixel_edge_count = np.where(dilated < np_img)[0].shape[0]
+ all_edge_count = np.where(np_img > 127)[0].shape[0]
+ is_one_pixel_edge = one_pixel_edge_count * 2 > all_edge_count
+
+ if 2 < unique_color_count < 200:
+        # With a low color count, we assume this is a map where exact colors are important. Nearest-neighbor
+        # interpolation preserves the colors as needed.
+ interpolation = cv2.INTER_NEAREST
+ elif new_size_is_smaller:
+ # This works best for downscaling
+ interpolation = cv2.INTER_AREA
+ else:
+ # Fall back for other cases
+ interpolation = cv2.INTER_CUBIC # Must be CUBIC because we now use nms. NEVER CHANGE THIS
+
+ # This may be further transformed depending on the binary nature of the image.
+ resized = cv2.resize(np_img, size, interpolation=interpolation)
+
+ if inpaint_mask is not None:
+ # Resize the inpaint mask to match the resized image using the same interpolation method.
+ inpaint_mask = cv2.resize(inpaint_mask, size, interpolation=interpolation)
+
+ # If the image is binary, we will perform some additional processing to ensure the edges are preserved.
+ if is_binary:
+ resized = np.mean(resized.astype(np.float32), axis=2).clip(0, 255).astype(np.uint8)
+ if is_one_pixel_edge:
+ # Use NMS and thinning to keep the edges as single-pixel lines.
+ resized = nms(resized)
+ _, resized = cv2.threshold(resized, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
+ resized = lvmin_thin(resized, prunings=new_size_is_bigger)
+ else:
+ _, resized = cv2.threshold(resized, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
+ resized = np.stack([resized] * 3, axis=2)
+
+ # Restore the alpha channel if it was present.
+ if inpaint_mask is not None:
+ inpaint_mask = (inpaint_mask > 127).astype(np.float32) * 255.0
+ inpaint_mask = inpaint_mask[:, :, None].clip(0, 255).astype(np.uint8)
+ resized = np.concatenate([resized, inpaint_mask], axis=2)
+
+ return resized
+
+
###########################################################################
# Copied from detectmap_proc method in scripts/detectmap_proc.py in Mikubill/sd-webui-controlnet
# modified for InvokeAI
###########################################################################
-# def detectmap_proc(detected_map, module, resize_mode, h, w):
-def np_img_resize(np_img: np.ndarray, resize_mode: str, h: int, w: int, device: torch.device = torch.device("cpu")):
- # if 'inpaint' in module:
- # np_img = np_img.astype(np.float32)
- # else:
- # np_img = HWC3(np_img)
- np_img = HWC3(np_img)
+def np_img_resize(
+ np_img: np.ndarray,
+ resize_mode: CONTROLNET_RESIZE_VALUES,
+ h: int,
+ w: int,
+ device: torch.device = torch.device("cpu"),
+) -> tuple[torch.Tensor, np.ndarray[Any, Any]]:
+ np_img = normalize_image_channel_count(np_img)
- def safe_numpy(x):
- # A very safe method to make sure that Apple/Mac works
- y = x
-
- # below is very boring but do not change these. If you change these Apple or Mac may fail.
- y = y.copy()
- y = np.ascontiguousarray(y)
- y = y.copy()
- return y
-
- def get_pytorch_control(x):
- # A very safe method to make sure that Apple/Mac works
- y = x
-
- # below is very boring but do not change these. If you change these Apple or Mac may fail.
- y = torch.from_numpy(y)
- y = y.float() / 255.0
- y = rearrange(y, "h w c -> 1 c h w")
- y = y.clone()
- # y = y.to(devices.get_device_for("controlnet"))
- y = y.to(device)
- y = y.clone()
- return y
-
- def high_quality_resize(x: np.ndarray, size):
- # Written by lvmin
- # Super high-quality control map up-scaling, considering binary, seg, and one-pixel edges
- inpaint_mask = None
- if x.ndim == 3 and x.shape[2] == 4:
- inpaint_mask = x[:, :, 3]
- x = x[:, :, 0:3]
-
- new_size_is_smaller = (size[0] * size[1]) < (x.shape[0] * x.shape[1])
- new_size_is_bigger = (size[0] * size[1]) > (x.shape[0] * x.shape[1])
- unique_color_count = np.unique(x.reshape(-1, x.shape[2]), axis=0).shape[0]
- is_one_pixel_edge = False
- is_binary = False
- if unique_color_count == 2:
- is_binary = np.min(x) < 16 and np.max(x) > 240
- if is_binary:
- xc = x
- xc = cv2.erode(xc, np.ones(shape=(3, 3), dtype=np.uint8), iterations=1)
- xc = cv2.dilate(xc, np.ones(shape=(3, 3), dtype=np.uint8), iterations=1)
- one_pixel_edge_count = np.where(xc < x)[0].shape[0]
- all_edge_count = np.where(x > 127)[0].shape[0]
- is_one_pixel_edge = one_pixel_edge_count * 2 > all_edge_count
-
- if 2 < unique_color_count < 200:
- interpolation = cv2.INTER_NEAREST
- elif new_size_is_smaller:
- interpolation = cv2.INTER_AREA
- else:
- interpolation = cv2.INTER_CUBIC # Must be CUBIC because we now use nms. NEVER CHANGE THIS
-
- y = cv2.resize(x, size, interpolation=interpolation)
- if inpaint_mask is not None:
- inpaint_mask = cv2.resize(inpaint_mask, size, interpolation=interpolation)
-
- if is_binary:
- y = np.mean(y.astype(np.float32), axis=2).clip(0, 255).astype(np.uint8)
- if is_one_pixel_edge:
- y = nake_nms(y)
- _, y = cv2.threshold(y, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
- y = lvmin_thin(y, prunings=new_size_is_bigger)
- else:
- _, y = cv2.threshold(y, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
- y = np.stack([y] * 3, axis=2)
-
- if inpaint_mask is not None:
- inpaint_mask = (inpaint_mask > 127).astype(np.float32) * 255.0
- inpaint_mask = inpaint_mask[:, :, None].clip(0, 255).astype(np.uint8)
- y = np.concatenate([y, inpaint_mask], axis=2)
-
- return y
-
- # if resize_mode == external_code.ResizeMode.RESIZE:
if resize_mode == "just_resize": # RESIZE
- np_img = high_quality_resize(np_img, (w, h))
- np_img = safe_numpy(np_img)
- return get_pytorch_control(np_img), np_img
+ np_img = heuristic_resize(np_img, (w, h))
+ np_img = clone_contiguous(np_img)
+ return np_img_to_torch(np_img, device), np_img
old_h, old_w, _ = np_img.shape
old_w = float(old_w)
@@ -236,7 +257,6 @@ def np_img_resize(np_img: np.ndarray, resize_mode: str, h: int, w: int, device:
def safeint(x: Union[int, float]) -> int:
return int(np.round(x))
- # if resize_mode == external_code.ResizeMode.OUTER_FIT:
if resize_mode == "fill_resize": # OUTER_FIT
k = min(k0, k1)
borders = np.concatenate([np_img[0, :, :], np_img[-1, :, :], np_img[:, 0, :], np_img[:, -1, :]], axis=0)
@@ -245,23 +265,23 @@ def np_img_resize(np_img: np.ndarray, resize_mode: str, h: int, w: int, device:
# Inpaint hijack
high_quality_border_color[3] = 255
high_quality_background = np.tile(high_quality_border_color[None, None], [h, w, 1])
- np_img = high_quality_resize(np_img, (safeint(old_w * k), safeint(old_h * k)))
+ np_img = heuristic_resize(np_img, (safeint(old_w * k), safeint(old_h * k)))
new_h, new_w, _ = np_img.shape
pad_h = max(0, (h - new_h) // 2)
pad_w = max(0, (w - new_w) // 2)
high_quality_background[pad_h : pad_h + new_h, pad_w : pad_w + new_w] = np_img
np_img = high_quality_background
- np_img = safe_numpy(np_img)
- return get_pytorch_control(np_img), np_img
+ np_img = clone_contiguous(np_img)
+ return np_img_to_torch(np_img, device), np_img
else: # resize_mode == "crop_resize" (INNER_FIT)
k = max(k0, k1)
- np_img = high_quality_resize(np_img, (safeint(old_w * k), safeint(old_h * k)))
+ np_img = heuristic_resize(np_img, (safeint(old_w * k), safeint(old_h * k)))
new_h, new_w, _ = np_img.shape
pad_h = max(0, (new_h - h) // 2)
pad_w = max(0, (new_w - w) // 2)
np_img = np_img[pad_h : pad_h + h, pad_w : pad_w + w]
- np_img = safe_numpy(np_img)
- return get_pytorch_control(np_img), np_img
+ np_img = clone_contiguous(np_img)
+ return np_img_to_torch(np_img, device), np_img
def prepare_control_image(
@@ -269,12 +289,12 @@ def prepare_control_image(
width: int,
height: int,
num_channels: int = 3,
- device="cuda",
- dtype=torch.float16,
- do_classifier_free_guidance=True,
- control_mode="balanced",
- resize_mode="just_resize_simple",
-):
+ device: str = "cuda",
+ dtype: torch.dtype = torch.float16,
+ control_mode: CONTROLNET_MODE_VALUES = "balanced",
+ resize_mode: CONTROLNET_RESIZE_VALUES = "just_resize_simple",
+ do_classifier_free_guidance: bool = True,
+) -> torch.Tensor:
"""Pre-process images for ControlNets or T2I-Adapters.
Args:
@@ -292,26 +312,15 @@ def prepare_control_image(
resize_mode (str, optional): Defaults to "just_resize_simple".
Raises:
- NotImplementedError: If resize_mode == "crop_resize_simple".
- NotImplementedError: If resize_mode == "fill_resize_simple".
ValueError: If `resize_mode` is not recognized.
ValueError: If `num_channels` is out of range.
Returns:
torch.Tensor: The pre-processed input tensor.
"""
- if (
- resize_mode == "just_resize_simple"
- or resize_mode == "crop_resize_simple"
- or resize_mode == "fill_resize_simple"
- ):
+ if resize_mode == "just_resize_simple":
image = image.convert("RGB")
- if resize_mode == "just_resize_simple":
- image = image.resize((width, height), resample=PIL_INTERPOLATION["lanczos"])
- elif resize_mode == "crop_resize_simple":
- raise NotImplementedError(f"prepare_control_image is not implemented for resize_mode='{resize_mode}'.")
- elif resize_mode == "fill_resize_simple":
- raise NotImplementedError(f"prepare_control_image is not implemented for resize_mode='{resize_mode}'.")
+ image = image.resize((width, height), resample=Image.LANCZOS)
nimage = np.array(image)
nimage = nimage[None, :]
nimage = np.concatenate([nimage], axis=0)
@@ -328,8 +337,7 @@ def prepare_control_image(
resize_mode=resize_mode,
h=height,
w=width,
- # device=torch.device('cpu')
- device=device,
+ device=torch.device(device),
)
else:
raise ValueError(f"Unsupported resize_mode: '{resize_mode}'.")
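The interpolation choice at the heart of `heuristic_resize` stands on its own. A dependency-light sketch of just that decision, returning the flag name as a string so cv2 isn't required (helper name is illustrative):

```python
import numpy as np

def choose_interpolation(np_img: np.ndarray, size: tuple) -> str:
    """Pick a resize strategy: nearest-neighbor for low-color maps (e.g. segmentation),
    area for downscaling, and cubic otherwise (as the later NMS step requires)."""
    unique_color_count = np.unique(np_img.reshape(-1, np_img.shape[2]), axis=0).shape[0]
    new_size_is_smaller = (size[0] * size[1]) < (np_img.shape[0] * np_img.shape[1])
    if 2 < unique_color_count < 200:
        return "INTER_NEAREST"
    if new_size_is_smaller:
        return "INTER_AREA"
    return "INTER_CUBIC"

seg = np.zeros((8, 8, 3), dtype=np.uint8)
seg[:4] = 10
seg[4:6] = 20  # three unique colors -> treated as a segmentation-style map
```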
diff --git a/invokeai/backend/image_util/hed.py b/invokeai/backend/image_util/hed.py
index 378e3b96e9..97706df8b9 100644
--- a/invokeai/backend/image_util/hed.py
+++ b/invokeai/backend/image_util/hed.py
@@ -8,7 +8,7 @@ from huggingface_hub import hf_hub_download
from PIL import Image
from invokeai.backend.image_util.util import (
- non_maximum_suppression,
+ nms,
normalize_image_channel_count,
np_to_pil,
pil_to_np,
@@ -134,7 +134,7 @@ class HEDProcessor:
detected_map = cv2.resize(detected_map, (width, height), interpolation=cv2.INTER_LINEAR)
if scribble:
- detected_map = non_maximum_suppression(detected_map, 127, 3.0)
+ detected_map = nms(detected_map, 127, 3.0)
detected_map = cv2.GaussianBlur(detected_map, (0, 0), 3.0)
detected_map[detected_map > 4] = 255
detected_map[detected_map < 255] = 0
diff --git a/invokeai/backend/image_util/util.py b/invokeai/backend/image_util/util.py
index 7cfe0ad1a5..5b2116975f 100644
--- a/invokeai/backend/image_util/util.py
+++ b/invokeai/backend/image_util/util.py
@@ -1,4 +1,5 @@
from math import ceil, floor, sqrt
+from typing import Optional
import cv2
import numpy as np
@@ -143,20 +144,21 @@ def resize_image_to_resolution(input_image: np.ndarray, resolution: int) -> np.n
h = float(input_image.shape[0])
w = float(input_image.shape[1])
scaling_factor = float(resolution) / min(h, w)
- h *= scaling_factor
- w *= scaling_factor
- h = int(np.round(h / 64.0)) * 64
- w = int(np.round(w / 64.0)) * 64
+ h = int(h * scaling_factor)
+ w = int(w * scaling_factor)
if scaling_factor > 1:
return cv2.resize(input_image, (w, h), interpolation=cv2.INTER_LANCZOS4)
else:
return cv2.resize(input_image, (w, h), interpolation=cv2.INTER_AREA)
-def non_maximum_suppression(image: np.ndarray, threshold: int, sigma: float):
+def nms(np_img: np.ndarray, threshold: Optional[int] = None, sigma: Optional[float] = None) -> np.ndarray:
"""
Apply non-maximum suppression to an image.
+    If both threshold and sigma are provided, the image will be blurred before the suppression and thresholded afterwards,
+ resulting in a binary output image.
+
This function is adapted from https://github.com/lllyasviel/ControlNet.
Args:
@@ -166,23 +168,36 @@ def non_maximum_suppression(image: np.ndarray, threshold: int, sigma: float):
Returns:
The image after non-maximum suppression.
+
+ Raises:
+        ValueError: If only one of threshold and sigma is provided.
"""
- image = cv2.GaussianBlur(image.astype(np.float32), (0, 0), sigma)
+ # Raise a value error if only one of threshold and sigma is provided
+ if (threshold is None) != (sigma is None):
+ raise ValueError("Both threshold and sigma must be provided if one is provided.")
+
+ if sigma is not None and threshold is not None:
+ # Blurring the image can help to thin out features
+ np_img = cv2.GaussianBlur(np_img.astype(np.float32), (0, 0), sigma)
filter_1 = np.array([[0, 0, 0], [1, 1, 1], [0, 0, 0]], dtype=np.uint8)
filter_2 = np.array([[0, 1, 0], [0, 1, 0], [0, 1, 0]], dtype=np.uint8)
filter_3 = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=np.uint8)
filter_4 = np.array([[0, 0, 1], [0, 1, 0], [1, 0, 0]], dtype=np.uint8)
- y = np.zeros_like(image)
+ nms_img = np.zeros_like(np_img)
for f in [filter_1, filter_2, filter_3, filter_4]:
- np.putmask(y, cv2.dilate(image, kernel=f) == image, image)
+ np.putmask(nms_img, cv2.dilate(np_img, kernel=f) == np_img, np_img)
- z = np.zeros_like(y, dtype=np.uint8)
- z[y > threshold] = 255
- return z
+ if sigma is not None and threshold is not None:
+ # We blurred - now threshold to get a binary image
+ thresholded = np.zeros_like(nms_img, dtype=np.uint8)
+ thresholded[nms_img > threshold] = 255
+ return thresholded
+
+ return nms_img
def safe_step(x: np.ndarray, step: int = 2) -> np.ndarray:
diff --git a/invokeai/backend/model_manager/config.py b/invokeai/backend/model_manager/config.py
index 82f88c0e81..b19501843c 100644
--- a/invokeai/backend/model_manager/config.py
+++ b/invokeai/backend/model_manager/config.py
@@ -301,12 +301,12 @@ class MainConfigBase(ModelConfigBase):
default_settings: Optional[MainModelDefaultSettings] = Field(
description="Default settings for this model", default=None
)
+ variant: ModelVariantType = ModelVariantType.Normal
class MainCheckpointConfig(CheckpointConfigBase, MainConfigBase):
"""Model config for main checkpoint models."""
- variant: ModelVariantType = ModelVariantType.Normal
prediction_type: SchedulerPredictionType = SchedulerPredictionType.Epsilon
upcast_attention: bool = False
diff --git a/invokeai/backend/model_manager/probe.py b/invokeai/backend/model_manager/probe.py
index bf21a7fe7b..8f33e4b49f 100644
--- a/invokeai/backend/model_manager/probe.py
+++ b/invokeai/backend/model_manager/probe.py
@@ -51,6 +51,7 @@ LEGACY_CONFIGS: Dict[BaseModelType, Dict[ModelVariantType, Union[str, Dict[Sched
},
BaseModelType.StableDiffusionXL: {
ModelVariantType.Normal: "sd_xl_base.yaml",
+ ModelVariantType.Inpaint: "sd_xl_inpaint.yaml",
},
BaseModelType.StableDiffusionXLRefiner: {
ModelVariantType.Normal: "sd_xl_refiner.yaml",
diff --git a/invokeai/backend/model_manager/starter_models.py b/invokeai/backend/model_manager/starter_models.py
index f92f5e08d5..31b16d9c8a 100644
--- a/invokeai/backend/model_manager/starter_models.py
+++ b/invokeai/backend/model_manager/starter_models.py
@@ -155,7 +155,7 @@ STARTER_MODELS: list[StarterModel] = [
StarterModel(
name="IP Adapter",
base=BaseModelType.StableDiffusion1,
- source="InvokeAI/ip_adapter_sd15",
+ source="https://huggingface.co/InvokeAI/ip_adapter_sd15/resolve/main/ip-adapter_sd15.safetensors",
description="IP-Adapter for SD 1.5 models",
type=ModelType.IPAdapter,
dependencies=[ip_adapter_sd_image_encoder],
@@ -163,7 +163,7 @@ STARTER_MODELS: list[StarterModel] = [
StarterModel(
name="IP Adapter Plus",
base=BaseModelType.StableDiffusion1,
- source="InvokeAI/ip_adapter_plus_sd15",
+ source="https://huggingface.co/InvokeAI/ip_adapter_plus_sd15/resolve/main/ip-adapter-plus_sd15.safetensors",
description="Refined IP-Adapter for SD 1.5 models",
type=ModelType.IPAdapter,
dependencies=[ip_adapter_sd_image_encoder],
@@ -171,7 +171,7 @@ STARTER_MODELS: list[StarterModel] = [
StarterModel(
name="IP Adapter Plus Face",
base=BaseModelType.StableDiffusion1,
- source="InvokeAI/ip_adapter_plus_face_sd15",
+ source="https://huggingface.co/InvokeAI/ip_adapter_plus_face_sd15/resolve/main/ip-adapter-plus-face_sd15.safetensors",
description="Refined IP-Adapter for SD 1.5 models, adapted for faces",
type=ModelType.IPAdapter,
dependencies=[ip_adapter_sd_image_encoder],
@@ -179,7 +179,7 @@ STARTER_MODELS: list[StarterModel] = [
StarterModel(
name="IP Adapter SDXL",
base=BaseModelType.StableDiffusionXL,
- source="InvokeAI/ip_adapter_sdxl",
+ source="https://huggingface.co/InvokeAI/ip_adapter_sdxl_vit_h/resolve/main/ip-adapter_sdxl_vit-h.safetensors",
description="IP-Adapter for SDXL models",
type=ModelType.IPAdapter,
dependencies=[ip_adapter_sdxl_image_encoder],
diff --git a/invokeai/backend/util/catch_sigint.py b/invokeai/backend/util/catch_sigint.py
new file mode 100644
index 0000000000..b9735d94f9
--- /dev/null
+++ b/invokeai/backend/util/catch_sigint.py
@@ -0,0 +1,29 @@
+"""
+This module defines a context manager `catch_sigint()` which temporarily replaces
+the SIGINT handler installed by the ASGI server, allowing the user to ^C the
+application and shut it down immediately. It was implemented to allow the user to
+interrupt slow model hashing during startup.
+
+Use like this:
+
+ from invokeai.backend.util.catch_sigint import catch_sigint
+ with catch_sigint():
+ run_some_hard_to_interrupt_process()
+"""
+
+import signal
+from contextlib import contextmanager
+from typing import Generator
+
+
+def sigint_handler(signum, frame): # type: ignore
+ signal.signal(signal.SIGINT, signal.SIG_DFL)
+ signal.raise_signal(signal.SIGINT)
+
+
+@contextmanager
+def catch_sigint() -> Generator[None, None, None]:
+ original_handler = signal.getsignal(signal.SIGINT)
+ signal.signal(signal.SIGINT, sigint_handler)
+ yield
+ signal.signal(signal.SIGINT, original_handler)
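One caveat with the context manager as written: if the wrapped call raises, the original handler is never restored. A variant using `try`/`finally` would restore it on any exit path; this is a sketch, not what the PR ships:

```python
import signal
from contextlib import contextmanager
from typing import Generator

def _sigint_handler(signum, frame):  # type: ignore
    # Restore the default disposition, then re-deliver SIGINT to the process.
    signal.signal(signal.SIGINT, signal.SIG_DFL)
    signal.raise_signal(signal.SIGINT)

@contextmanager
def catch_sigint() -> Generator[None, None, None]:
    original_handler = signal.getsignal(signal.SIGINT)
    signal.signal(signal.SIGINT, _sigint_handler)
    try:
        yield
    finally:
        # Put the original handler back even if the body raised.
        signal.signal(signal.SIGINT, original_handler)
```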
diff --git a/invokeai/configs/stable-diffusion/sd_xl_inpaint.yaml b/invokeai/configs/stable-diffusion/sd_xl_inpaint.yaml
new file mode 100644
index 0000000000..eea5c15a49
--- /dev/null
+++ b/invokeai/configs/stable-diffusion/sd_xl_inpaint.yaml
@@ -0,0 +1,98 @@
+model:
+  target: sgm.models.diffusion.DiffusionEngine
+  params:
+    scale_factor: 0.13025
+    disable_first_stage_autocast: True
+
+    denoiser_config:
+      target: sgm.modules.diffusionmodules.denoiser.DiscreteDenoiser
+      params:
+        num_idx: 1000
+
+        weighting_config:
+          target: sgm.modules.diffusionmodules.denoiser_weighting.EpsWeighting
+        scaling_config:
+          target: sgm.modules.diffusionmodules.denoiser_scaling.EpsScaling
+        discretization_config:
+          target: sgm.modules.diffusionmodules.discretizer.LegacyDDPMDiscretization
+
+    network_config:
+      target: sgm.modules.diffusionmodules.openaimodel.UNetModel
+      params:
+        adm_in_channels: 2816
+        num_classes: sequential
+        use_checkpoint: True
+        in_channels: 9
+        out_channels: 4
+        model_channels: 320
+        attention_resolutions: [4, 2]
+        num_res_blocks: 2
+        channel_mult: [1, 2, 4]
+        num_head_channels: 64
+        use_spatial_transformer: True
+        use_linear_in_transformer: True
+        transformer_depth: [1, 2, 10]  # note: the first is unused (due to attn_res starting at 2) 32, 16, 8 --> 64, 32, 16
+        context_dim: 2048
+        spatial_transformer_attn_type: softmax-xformers
+        legacy: False
+
+    conditioner_config:
+      target: sgm.modules.GeneralConditioner
+      params:
+        emb_models:
+          # crossattn cond
+          - is_trainable: False
+            input_key: txt
+            target: sgm.modules.encoders.modules.FrozenCLIPEmbedder
+            params:
+              layer: hidden
+              layer_idx: 11
+          # crossattn and vector cond
+          - is_trainable: False
+            input_key: txt
+            target: sgm.modules.encoders.modules.FrozenOpenCLIPEmbedder2
+            params:
+              arch: ViT-bigG-14
+              version: laion2b_s39b_b160k
+              freeze: True
+              layer: penultimate
+              always_return_pooled: True
+              legacy: False
+          # vector cond
+          - is_trainable: False
+            input_key: original_size_as_tuple
+            target: sgm.modules.encoders.modules.ConcatTimestepEmbedderND
+            params:
+              outdim: 256  # multiplied by two
+          # vector cond
+          - is_trainable: False
+            input_key: crop_coords_top_left
+            target: sgm.modules.encoders.modules.ConcatTimestepEmbedderND
+            params:
+              outdim: 256  # multiplied by two
+          # vector cond
+          - is_trainable: False
+            input_key: target_size_as_tuple
+            target: sgm.modules.encoders.modules.ConcatTimestepEmbedderND
+            params:
+              outdim: 256  # multiplied by two
+
+    first_stage_config:
+      target: sgm.models.autoencoder.AutoencoderKLInferenceWrapper
+      params:
+        embed_dim: 4
+        monitor: val/rec_loss
+        ddconfig:
+          attn_type: vanilla-xformers
+          double_z: true
+          z_channels: 4
+          resolution: 256
+          in_channels: 3
+          out_ch: 3
+          ch: 128
+          ch_mult: [1, 2, 4, 4]
+          num_res_blocks: 2
+          attn_resolutions: []
+          dropout: 0.0
+        lossconfig:
+          target: torch.nn.Identity
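Configs in this style are consumed by resolving each `target:` dotted path to a class and instantiating it with the sibling `params` mapping as keyword arguments. A minimal sketch of that common pattern (function names are illustrative, not the exact sgm API):

```python
import importlib
from typing import Any


def get_obj_from_str(path: str) -> Any:
    """Resolve a dotted 'package.module.Class' path to the class object."""
    module_name, cls_name = path.rsplit(".", 1)
    return getattr(importlib.import_module(module_name), cls_name)


def instantiate_from_config(config: dict) -> Any:
    """Instantiate config['target'] with config.get('params', {}) as kwargs."""
    return get_obj_from_str(config["target"])(**config.get("params", {}))


# Stdlib class standing in for an sgm module target:
frac = instantiate_from_config(
    {"target": "fractions.Fraction", "params": {"numerator": 13, "denominator": 4}}
)
print(frac)  # 13/4
```

Nested `*_config` blocks are handled the same way recursively, which is why every sub-section above carries its own `target`/`params` pair.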
\ No newline at end of file
diff --git a/invokeai/frontend/web/.storybook/preview.tsx b/invokeai/frontend/web/.storybook/preview.tsx
index 791a48ab9e..8b21b48230 100644
--- a/invokeai/frontend/web/.storybook/preview.tsx
+++ b/invokeai/frontend/web/.storybook/preview.tsx
@@ -11,6 +11,7 @@ import { createStore } from '../src/app/store/store';
// @ts-ignore
import translationEN from '../public/locales/en.json';
import { ReduxInit } from './ReduxInit';
+import { $store } from 'app/store/nanostores/store';
i18n.use(initReactI18next).init({
lng: 'en',
@@ -25,6 +26,7 @@ i18n.use(initReactI18next).init({
});
const store = createStore(undefined, false);
+$store.set(store);
$baseUrl.set('http://localhost:9090');
const preview: Preview = {
diff --git a/invokeai/frontend/web/package.json b/invokeai/frontend/web/package.json
index aabaa17c73..9e661e0737 100644
--- a/invokeai/frontend/web/package.json
+++ b/invokeai/frontend/web/package.json
@@ -25,7 +25,7 @@
"typegen": "node scripts/typegen.js",
"preview": "vite preview",
"lint:knip": "knip",
- "lint:dpdm": "dpdm --no-warning --no-tree --transform --exit-code circular:1 src/main.tsx",
+ "lint:dpdm": "dpdm --no-warning --no-tree --transform --exit-code circular:0 src/main.tsx",
"lint:eslint": "eslint --max-warnings=0 .",
"lint:prettier": "prettier --check .",
"lint:tsc": "tsc --noEmit",
@@ -95,11 +95,13 @@
"reactflow": "^11.10.4",
"redux-dynamic-middlewares": "^2.2.0",
"redux-remember": "^5.1.0",
+ "redux-undo": "^1.1.0",
"rfdc": "^1.3.1",
"roarr": "^7.21.1",
"serialize-error": "^11.0.3",
"socket.io-client": "^4.7.5",
"use-debounce": "^10.0.0",
+ "use-device-pixel-ratio": "^1.1.2",
"use-image": "^1.1.1",
"uuid": "^9.0.1",
"zod": "^3.22.4",
diff --git a/invokeai/frontend/web/pnpm-lock.yaml b/invokeai/frontend/web/pnpm-lock.yaml
index bf423c3d46..9910e32391 100644
--- a/invokeai/frontend/web/pnpm-lock.yaml
+++ b/invokeai/frontend/web/pnpm-lock.yaml
@@ -140,6 +140,9 @@ dependencies:
redux-remember:
specifier: ^5.1.0
version: 5.1.0(redux@5.0.1)
+ redux-undo:
+ specifier: ^1.1.0
+ version: 1.1.0
rfdc:
specifier: ^1.3.1
version: 1.3.1
@@ -155,6 +158,9 @@ dependencies:
use-debounce:
specifier: ^10.0.0
version: 10.0.0(react@18.2.0)
+ use-device-pixel-ratio:
+ specifier: ^1.1.2
+ version: 1.1.2(react@18.2.0)
use-image:
specifier: ^1.1.1
version: 1.1.1(react-dom@18.2.0)(react@18.2.0)
@@ -11962,6 +11968,10 @@ packages:
redux: 5.0.1
dev: false
+ /redux-undo@1.1.0:
+ resolution: {integrity: sha512-zzLFh2qeF0MTIlzDhDLm9NtkfBqCllQJ3OCuIl5RKlG/ayHw6GUdIFdMhzMS9NnrnWdBX5u//ExMOHpfudGGOg==}
+ dev: false
+
/redux@5.0.1:
resolution: {integrity: sha512-M9/ELqF6fy8FwmkpnF0S3YKOqMyoWJ4+CS5Efg2ct3oY9daQvd/Pc71FpGZsVsbl3Cpb+IIcjBDUnnyBdQbq4w==}
dev: false
@@ -13317,6 +13327,14 @@ packages:
react: 18.2.0
dev: false
+ /use-device-pixel-ratio@1.1.2(react@18.2.0):
+ resolution: {integrity: sha512-nFxV0HwLdRUt20kvIgqHYZe6PK/v4mU1X8/eLsT1ti5ck0l2ob0HDRziaJPx+YWzBo6dMm4cTac3mcyk68Gh+A==}
+ peerDependencies:
+ react: '>=16.8.0'
+ dependencies:
+ react: 18.2.0
+ dev: false
+
/use-image@1.1.1(react-dom@18.2.0)(react@18.2.0):
resolution: {integrity: sha512-n4YO2k8AJG/BcDtxmBx8Aa+47kxY5m335dJiCQA5tTeVU4XdhrhqR6wT0WISRXwdMEOv5CSjqekDZkEMiiWaYQ==}
peerDependencies:
diff --git a/invokeai/frontend/web/public/assets/images/transparent_bg.png b/invokeai/frontend/web/public/assets/images/transparent_bg.png
new file mode 100644
index 0000000000..e1a3c339ce
Binary files /dev/null and b/invokeai/frontend/web/public/assets/images/transparent_bg.png differ
diff --git a/invokeai/frontend/web/public/locales/de.json b/invokeai/frontend/web/public/locales/de.json
index 033dffdc44..0a104c083b 100644
--- a/invokeai/frontend/web/public/locales/de.json
+++ b/invokeai/frontend/web/public/locales/de.json
@@ -85,7 +85,8 @@
"loadMore": "Mehr laden",
"noImagesInGallery": "Keine Bilder in der Galerie",
"loading": "Lade",
- "deleteImage": "Lösche Bild",
+ "deleteImage_one": "Lösche Bild",
+ "deleteImage_other": "",
"copy": "Kopieren",
"download": "Runterladen",
"setCurrentImage": "Setze aktuelle Bild",
diff --git a/invokeai/frontend/web/public/locales/en.json b/invokeai/frontend/web/public/locales/en.json
index f69f09552a..885a937de3 100644
--- a/invokeai/frontend/web/public/locales/en.json
+++ b/invokeai/frontend/web/public/locales/en.json
@@ -69,6 +69,7 @@
"auto": "Auto",
"back": "Back",
"batch": "Batch Manager",
+ "beta": "Beta",
"cancel": "Cancel",
"copy": "Copy",
"copyError": "$t(gallery.copy) Error",
@@ -83,6 +84,8 @@
"direction": "Direction",
"ipAdapter": "IP Adapter",
"t2iAdapter": "T2I Adapter",
+ "positivePrompt": "Positive Prompt",
+ "negativePrompt": "Negative Prompt",
"discordLabel": "Discord",
"dontAskMeAgain": "Don't ask me again",
"error": "Error",
@@ -135,7 +138,9 @@
"red": "Red",
"green": "Green",
"blue": "Blue",
- "alpha": "Alpha"
+ "alpha": "Alpha",
+ "selected": "Selected",
+ "viewer": "Viewer"
},
"controlnet": {
"controlAdapter_one": "Control Adapter",
@@ -151,6 +156,7 @@
"balanced": "Balanced",
"base": "Base",
"beginEndStepPercent": "Begin / End Step Percentage",
+ "beginEndStepPercentShort": "Begin/End %",
"bgth": "bg_th",
"canny": "Canny",
"cannyDescription": "Canny edge detection",
@@ -222,7 +228,8 @@
"scribble": "scribble",
"selectModel": "Select a model",
"selectCLIPVisionModel": "Select a CLIP Vision model",
- "setControlImageDimensions": "Set Control Image Dimensions To W/H",
+ "setControlImageDimensions": "Copy size to W/H (optimize for model)",
+ "setControlImageDimensionsForce": "Copy size to W/H (ignore model)",
"showAdvanced": "Show Advanced",
"small": "Small",
"toggleControlNet": "Toggle this ControlNet",
@@ -892,6 +899,7 @@
"denoisingStrength": "Denoising Strength",
"downloadImage": "Download Image",
"general": "General",
+ "globalSettings": "Global Settings",
"height": "Height",
"imageFit": "Fit Initial Image To Output Size",
"images": "Images",
@@ -1182,6 +1190,10 @@
"heading": "Resize Mode",
"paragraphs": ["Method to fit Control Adapter's input image size to the output generation size."]
},
+ "ipAdapterMethod": {
+ "heading": "Method",
+ "paragraphs": ["Method by which to apply the current IP Adapter."]
+ },
"controlNetWeight": {
"heading": "Weight",
"paragraphs": [
@@ -1500,5 +1512,36 @@
},
"app": {
"storeNotInitialized": "Store is not initialized"
+ },
+ "controlLayers": {
+ "deleteAll": "Delete All",
+ "addLayer": "Add Layer",
+ "moveToFront": "Move to Front",
+ "moveToBack": "Move to Back",
+ "moveForward": "Move Forward",
+ "moveBackward": "Move Backward",
+ "brushSize": "Brush Size",
+ "controlLayers": "Control Layers (BETA)",
+ "globalMaskOpacity": "Global Mask Opacity",
+ "autoNegative": "Auto Negative",
+ "toggleVisibility": "Toggle Layer Visibility",
+ "deletePrompt": "Delete Prompt",
+ "resetRegion": "Reset Region",
+ "debugLayers": "Debug Layers",
+ "rectangle": "Rectangle",
+ "maskPreviewColor": "Mask Preview Color",
+ "addPositivePrompt": "Add $t(common.positivePrompt)",
+ "addNegativePrompt": "Add $t(common.negativePrompt)",
+ "addIPAdapter": "Add $t(common.ipAdapter)",
+ "regionalGuidance": "Regional Guidance",
+ "regionalGuidanceLayer": "$t(controlLayers.regionalGuidance) $t(unifiedCanvas.layer)",
+ "controlNetLayer": "$t(common.controlNet) $t(unifiedCanvas.layer)",
+ "ipAdapterLayer": "$t(common.ipAdapter) $t(unifiedCanvas.layer)",
+ "opacity": "Opacity",
+ "globalControlAdapter": "Global $t(controlnet.controlAdapter_one)",
+ "globalControlAdapterLayer": "Global $t(controlnet.controlAdapter_one) $t(unifiedCanvas.layer)",
+ "globalIPAdapter": "Global $t(common.ipAdapter)",
+ "globalIPAdapterLayer": "Global $t(common.ipAdapter) $t(unifiedCanvas.layer)",
+ "opacityFilter": "Opacity Filter"
}
}
diff --git a/invokeai/frontend/web/public/locales/es.json b/invokeai/frontend/web/public/locales/es.json
index 3037045db5..6b410cd0bf 100644
--- a/invokeai/frontend/web/public/locales/es.json
+++ b/invokeai/frontend/web/public/locales/es.json
@@ -33,7 +33,9 @@
"autoSwitchNewImages": "Auto seleccionar Imágenes nuevas",
"loadMore": "Cargar más",
"noImagesInGallery": "No hay imágenes para mostrar",
- "deleteImage": "Eliminar Imagen",
+ "deleteImage_one": "Eliminar Imagen",
+ "deleteImage_many": "",
+ "deleteImage_other": "",
"deleteImageBin": "Las imágenes eliminadas se enviarán a la papelera de tu sistema operativo.",
"deleteImagePermanent": "Las imágenes eliminadas no se pueden restaurar.",
"assets": "Activos",
diff --git a/invokeai/frontend/web/public/locales/it.json b/invokeai/frontend/web/public/locales/it.json
index db01e0af4b..491b31907b 100644
--- a/invokeai/frontend/web/public/locales/it.json
+++ b/invokeai/frontend/web/public/locales/it.json
@@ -82,7 +82,9 @@
"autoSwitchNewImages": "Passaggio automatico a nuove immagini",
"loadMore": "Carica altro",
"noImagesInGallery": "Nessuna immagine da visualizzare",
- "deleteImage": "Elimina l'immagine",
+ "deleteImage_one": "Elimina l'immagine",
+ "deleteImage_many": "Elimina {{count}} immagini",
+ "deleteImage_other": "Elimina {{count}} immagini",
"deleteImagePermanent": "Le immagini eliminate non possono essere ripristinate.",
"deleteImageBin": "Le immagini eliminate verranno spostate nel cestino del tuo sistema operativo.",
"assets": "Risorse",
diff --git a/invokeai/frontend/web/public/locales/ja.json b/invokeai/frontend/web/public/locales/ja.json
index d13b1e4cb0..264593153a 100644
--- a/invokeai/frontend/web/public/locales/ja.json
+++ b/invokeai/frontend/web/public/locales/ja.json
@@ -90,7 +90,7 @@
"problemDeletingImages": "画像の削除中に問題が発生",
"drop": "ドロップ",
"dropOrUpload": "$t(gallery.drop) またはアップロード",
- "deleteImage": "画像を削除",
+ "deleteImage_other": "画像を削除",
"deleteImageBin": "削除された画像はOSのゴミ箱に送られます。",
"deleteImagePermanent": "削除された画像は復元できません。",
"download": "ダウンロード",
diff --git a/invokeai/frontend/web/public/locales/ko.json b/invokeai/frontend/web/public/locales/ko.json
index 44f0f5eac6..1c02d86105 100644
--- a/invokeai/frontend/web/public/locales/ko.json
+++ b/invokeai/frontend/web/public/locales/ko.json
@@ -82,7 +82,7 @@
"drop": "드랍",
"problemDeletingImages": "이미지 삭제 중 발생한 문제",
"downloadSelection": "선택 항목 다운로드",
- "deleteImage": "이미지 삭제",
+ "deleteImage_other": "이미지 삭제",
"currentlyInUse": "이 이미지는 현재 다음 기능에서 사용되고 있습니다:",
"dropOrUpload": "$t(gallery.drop) 또는 업로드",
"copy": "복사",
diff --git a/invokeai/frontend/web/public/locales/nl.json b/invokeai/frontend/web/public/locales/nl.json
index 70adbb371d..29ceb3227b 100644
--- a/invokeai/frontend/web/public/locales/nl.json
+++ b/invokeai/frontend/web/public/locales/nl.json
@@ -42,7 +42,8 @@
"autoSwitchNewImages": "Wissel autom. naar nieuwe afbeeldingen",
"loadMore": "Laad meer",
"noImagesInGallery": "Geen afbeeldingen om te tonen",
- "deleteImage": "Verwijder afbeelding",
+ "deleteImage_one": "Verwijder afbeelding",
+ "deleteImage_other": "",
"deleteImageBin": "Verwijderde afbeeldingen worden naar de prullenbak van je besturingssysteem gestuurd.",
"deleteImagePermanent": "Verwijderde afbeeldingen kunnen niet worden hersteld.",
"assets": "Eigen onderdelen",
diff --git a/invokeai/frontend/web/public/locales/ru.json b/invokeai/frontend/web/public/locales/ru.json
index 8ac36ef2de..f254b7faa5 100644
--- a/invokeai/frontend/web/public/locales/ru.json
+++ b/invokeai/frontend/web/public/locales/ru.json
@@ -86,7 +86,9 @@
"noImagesInGallery": "Изображений нет",
"deleteImagePermanent": "Удаленные изображения невозможно восстановить.",
"deleteImageBin": "Удаленные изображения будут отправлены в корзину вашей операционной системы.",
- "deleteImage": "Удалить изображение",
+ "deleteImage_one": "Удалить изображение",
+ "deleteImage_few": "",
+ "deleteImage_many": "",
"assets": "Ресурсы",
"autoAssignBoardOnClick": "Авто-назначение доски по клику",
"deleteSelection": "Удалить выделенное",
diff --git a/invokeai/frontend/web/public/locales/tr.json b/invokeai/frontend/web/public/locales/tr.json
index 2a666a128c..415bd2d744 100644
--- a/invokeai/frontend/web/public/locales/tr.json
+++ b/invokeai/frontend/web/public/locales/tr.json
@@ -298,7 +298,8 @@
"noImagesInGallery": "Gösterilecek Görsel Yok",
"autoSwitchNewImages": "Yeni Görseli Biter Bitmez Gör",
"currentlyInUse": "Bu görsel şurada kullanımda:",
- "deleteImage": "Görseli Sil",
+ "deleteImage_one": "Görseli Sil",
+ "deleteImage_other": "",
"loadMore": "Daha Getir",
"setCurrentImage": "Çalışma Görseli Yap",
"unableToLoad": "Galeri Yüklenemedi",
diff --git a/invokeai/frontend/web/public/locales/zh_CN.json b/invokeai/frontend/web/public/locales/zh_CN.json
index e2cb35af74..8aff73d2a1 100644
--- a/invokeai/frontend/web/public/locales/zh_CN.json
+++ b/invokeai/frontend/web/public/locales/zh_CN.json
@@ -78,7 +78,7 @@
"autoSwitchNewImages": "自动切换到新图像",
"loadMore": "加载更多",
"noImagesInGallery": "无图像可用于显示",
- "deleteImage": "删除图片",
+ "deleteImage_other": "删除图片",
"deleteImageBin": "被删除的图片会发送到你操作系统的回收站。",
"deleteImagePermanent": "删除的图片无法被恢复。",
"assets": "素材",
diff --git a/invokeai/frontend/web/src/app/logging/logger.ts b/invokeai/frontend/web/src/app/logging/logger.ts
index d0e6340625..ca7a24201a 100644
--- a/invokeai/frontend/web/src/app/logging/logger.ts
+++ b/invokeai/frontend/web/src/app/logging/logger.ts
@@ -27,7 +27,8 @@ export type LoggerNamespace =
| 'socketio'
| 'session'
| 'queue'
- | 'dnd';
+ | 'dnd'
+ | 'controlLayers';
export const logger = (namespace: LoggerNamespace) => $logger.get().child({ namespace });
diff --git a/invokeai/frontend/web/src/app/store/middleware/listenerMiddleware/index.ts b/invokeai/frontend/web/src/app/store/middleware/listenerMiddleware/index.ts
index cd0c1290e9..ac039c2df6 100644
--- a/invokeai/frontend/web/src/app/store/middleware/listenerMiddleware/index.ts
+++ b/invokeai/frontend/web/src/app/store/middleware/listenerMiddleware/index.ts
@@ -16,6 +16,7 @@ import { addCanvasMaskSavedToGalleryListener } from 'app/store/middleware/listen
import { addCanvasMaskToControlNetListener } from 'app/store/middleware/listenerMiddleware/listeners/canvasMaskToControlNet';
import { addCanvasMergedListener } from 'app/store/middleware/listenerMiddleware/listeners/canvasMerged';
import { addCanvasSavedToGalleryListener } from 'app/store/middleware/listenerMiddleware/listeners/canvasSavedToGallery';
+import { addControlLayersToControlAdapterBridge } from 'app/store/middleware/listenerMiddleware/listeners/controlLayersToControlAdapterBridge';
import { addControlNetAutoProcessListener } from 'app/store/middleware/listenerMiddleware/listeners/controlNetAutoProcess';
import { addControlNetImageProcessedListener } from 'app/store/middleware/listenerMiddleware/listeners/controlNetImageProcessed';
import { addEnqueueRequestedCanvasListener } from 'app/store/middleware/listenerMiddleware/listeners/enqueueRequestedCanvas';
@@ -157,3 +158,5 @@ addUpscaleRequestedListener(startAppListening);
addDynamicPromptsListener(startAppListening);
addSetDefaultSettingsListener(startAppListening);
+
+addControlLayersToControlAdapterBridge(startAppListening);
diff --git a/invokeai/frontend/web/src/app/store/middleware/listenerMiddleware/listeners/canvasImageToControlNet.ts b/invokeai/frontend/web/src/app/store/middleware/listenerMiddleware/listeners/canvasImageToControlNet.ts
index 55392ebff4..b1b19b35dc 100644
--- a/invokeai/frontend/web/src/app/store/middleware/listenerMiddleware/listeners/canvasImageToControlNet.ts
+++ b/invokeai/frontend/web/src/app/store/middleware/listenerMiddleware/listeners/canvasImageToControlNet.ts
@@ -48,12 +48,10 @@ export const addCanvasImageToControlNetListener = (startAppListening: AppStartLi
})
).unwrap();
- const { image_name } = imageDTO;
-
dispatch(
controlAdapterImageChanged({
id,
- controlImage: image_name,
+ controlImage: imageDTO,
})
);
},
diff --git a/invokeai/frontend/web/src/app/store/middleware/listenerMiddleware/listeners/canvasMaskToControlNet.ts b/invokeai/frontend/web/src/app/store/middleware/listenerMiddleware/listeners/canvasMaskToControlNet.ts
index 569b4badc7..b3014277f1 100644
--- a/invokeai/frontend/web/src/app/store/middleware/listenerMiddleware/listeners/canvasMaskToControlNet.ts
+++ b/invokeai/frontend/web/src/app/store/middleware/listenerMiddleware/listeners/canvasMaskToControlNet.ts
@@ -58,12 +58,10 @@ export const addCanvasMaskToControlNetListener = (startAppListening: AppStartLis
})
).unwrap();
- const { image_name } = imageDTO;
-
dispatch(
controlAdapterImageChanged({
id,
- controlImage: image_name,
+ controlImage: imageDTO,
})
);
},
diff --git a/invokeai/frontend/web/src/app/store/middleware/listenerMiddleware/listeners/controlLayersToControlAdapterBridge.ts b/invokeai/frontend/web/src/app/store/middleware/listenerMiddleware/listeners/controlLayersToControlAdapterBridge.ts
new file mode 100644
index 0000000000..bc14277f88
--- /dev/null
+++ b/invokeai/frontend/web/src/app/store/middleware/listenerMiddleware/listeners/controlLayersToControlAdapterBridge.ts
@@ -0,0 +1,144 @@
+import { createAction } from '@reduxjs/toolkit';
+import type { AppStartListening } from 'app/store/middleware/listenerMiddleware';
+import { CONTROLNET_PROCESSORS } from 'features/controlAdapters/store/constants';
+import { controlAdapterAdded, controlAdapterRemoved } from 'features/controlAdapters/store/controlAdaptersSlice';
+import type { ControlNetConfig, IPAdapterConfig } from 'features/controlAdapters/store/types';
+import { isControlAdapterProcessorType } from 'features/controlAdapters/store/types';
+import {
+ controlAdapterLayerAdded,
+ ipAdapterLayerAdded,
+ layerDeleted,
+ maskLayerIPAdapterAdded,
+ maskLayerIPAdapterDeleted,
+ regionalGuidanceLayerAdded,
+} from 'features/controlLayers/store/controlLayersSlice';
+import type { Layer } from 'features/controlLayers/store/types';
+import { modelConfigsAdapterSelectors, modelsApi } from 'services/api/endpoints/models';
+import { isControlNetModelConfig, isIPAdapterModelConfig } from 'services/api/types';
+import { assert } from 'tsafe';
+import { v4 as uuidv4 } from 'uuid';
+
+export const guidanceLayerAdded = createAction<Layer['type']>('controlLayers/guidanceLayerAdded');
+export const guidanceLayerDeleted = createAction<string>('controlLayers/guidanceLayerDeleted');
+export const allLayersDeleted = createAction('controlLayers/allLayersDeleted');
+export const guidanceLayerIPAdapterAdded = createAction<string>('controlLayers/guidanceLayerIPAdapterAdded');
+export const guidanceLayerIPAdapterDeleted = createAction<{ layerId: string; ipAdapterId: string }>(
+ 'controlLayers/guidanceLayerIPAdapterDeleted'
+);
+
+export const addControlLayersToControlAdapterBridge = (startAppListening: AppStartListening) => {
+ startAppListening({
+ actionCreator: guidanceLayerAdded,
+ effect: (action, { dispatch, getState }) => {
+ const type = action.payload;
+ const layerId = uuidv4();
+ if (type === 'regional_guidance_layer') {
+ dispatch(regionalGuidanceLayerAdded({ layerId }));
+ return;
+ }
+
+ const state = getState();
+ const baseModel = state.generation.model?.base;
+ const modelConfigs = modelsApi.endpoints.getModelConfigs.select(undefined)(state).data;
+
+ if (type === 'ip_adapter_layer') {
+ const ipAdapterId = uuidv4();
+ const overrides: Partial<IPAdapterConfig> = {
+ id: ipAdapterId,
+ };
+
+ // Find and select the first matching model
+ if (modelConfigs) {
+ const models = modelConfigsAdapterSelectors.selectAll(modelConfigs).filter(isIPAdapterModelConfig);
+ overrides.model = models.find((m) => m.base === baseModel) ?? null;
+ }
+ dispatch(controlAdapterAdded({ type: 'ip_adapter', overrides }));
+ dispatch(ipAdapterLayerAdded({ layerId, ipAdapterId }));
+ return;
+ }
+
+ if (type === 'control_adapter_layer') {
+ const controlNetId = uuidv4();
+ const overrides: Partial<ControlNetConfig> = {
+ id: controlNetId,
+ };
+
+ // Find and select the first matching model
+ if (modelConfigs) {
+ const models = modelConfigsAdapterSelectors.selectAll(modelConfigs).filter(isControlNetModelConfig);
+ const model = models.find((m) => m.base === baseModel) ?? null;
+ overrides.model = model;
+ const defaultPreprocessor = model?.default_settings?.preprocessor;
+ overrides.processorType = isControlAdapterProcessorType(defaultPreprocessor) ? defaultPreprocessor : 'none';
+ overrides.processorNode = CONTROLNET_PROCESSORS[overrides.processorType].buildDefaults(baseModel);
+ }
+ dispatch(controlAdapterAdded({ type: 'controlnet', overrides }));
+ dispatch(controlAdapterLayerAdded({ layerId, controlNetId }));
+ return;
+ }
+ },
+ });
+
+ startAppListening({
+ actionCreator: guidanceLayerDeleted,
+ effect: (action, { getState, dispatch }) => {
+ const layerId = action.payload;
+ const state = getState();
+ const layer = state.controlLayers.present.layers.find((l) => l.id === layerId);
+ assert(layer, `Layer ${layerId} not found`);
+
+ if (layer.type === 'ip_adapter_layer') {
+ dispatch(controlAdapterRemoved({ id: layer.ipAdapterId }));
+ } else if (layer.type === 'control_adapter_layer') {
+ dispatch(controlAdapterRemoved({ id: layer.controlNetId }));
+ } else if (layer.type === 'regional_guidance_layer') {
+ for (const ipAdapterId of layer.ipAdapterIds) {
+ dispatch(controlAdapterRemoved({ id: ipAdapterId }));
+ }
+ }
+ dispatch(layerDeleted(layerId));
+ },
+ });
+
+ startAppListening({
+ actionCreator: allLayersDeleted,
+ effect: (action, { dispatch, getOriginalState }) => {
+ const state = getOriginalState();
+ for (const layer of state.controlLayers.present.layers) {
+ dispatch(guidanceLayerDeleted(layer.id));
+ }
+ },
+ });
+
+ startAppListening({
+ actionCreator: guidanceLayerIPAdapterAdded,
+ effect: (action, { dispatch, getState }) => {
+ const layerId = action.payload;
+ const ipAdapterId = uuidv4();
+ const overrides: Partial<IPAdapterConfig> = {
+ id: ipAdapterId,
+ };
+
+ // Find and select the first matching model
+ const state = getState();
+ const baseModel = state.generation.model?.base;
+ const modelConfigs = modelsApi.endpoints.getModelConfigs.select(undefined)(state).data;
+ if (modelConfigs) {
+ const models = modelConfigsAdapterSelectors.selectAll(modelConfigs).filter(isIPAdapterModelConfig);
+ overrides.model = models.find((m) => m.base === baseModel) ?? null;
+ }
+
+ dispatch(controlAdapterAdded({ type: 'ip_adapter', overrides }));
+ dispatch(maskLayerIPAdapterAdded({ layerId, ipAdapterId }));
+ },
+ });
+
+ startAppListening({
+ actionCreator: guidanceLayerIPAdapterDeleted,
+ effect: (action, { dispatch }) => {
+ const { layerId, ipAdapterId } = action.payload;
+ dispatch(controlAdapterRemoved({ id: ipAdapterId }));
+ dispatch(maskLayerIPAdapterDeleted({ layerId, ipAdapterId }));
+ },
+ });
+};
diff --git a/invokeai/frontend/web/src/app/store/middleware/listenerMiddleware/listeners/controlNetAutoProcess.ts b/invokeai/frontend/web/src/app/store/middleware/listenerMiddleware/listeners/controlNetAutoProcess.ts
index e52df30681..14af0246a2 100644
--- a/invokeai/frontend/web/src/app/store/middleware/listenerMiddleware/listeners/controlNetAutoProcess.ts
+++ b/invokeai/frontend/web/src/app/store/middleware/listenerMiddleware/listeners/controlNetAutoProcess.ts
@@ -12,6 +12,7 @@ import {
selectControlAdapterById,
} from 'features/controlAdapters/store/controlAdaptersSlice';
import { isControlNetOrT2IAdapter } from 'features/controlAdapters/store/types';
+import { isEqual } from 'lodash-es';
type AnyControlAdapterParamChangeAction =
| ReturnType
@@ -52,6 +53,11 @@ const predicate: AnyListenerPredicate = (action, state, prevState) =>
return false;
}
+ if (prevCA.controlImage === ca.controlImage && isEqual(prevCA.processorNode, ca.processorNode)) {
+ // Don't re-process if the processor hasn't changed
+ return false;
+ }
+
const isProcessorSelected = processorType !== 'none';
const hasControlImage = Boolean(controlImage);
diff --git a/invokeai/frontend/web/src/app/store/middleware/listenerMiddleware/listeners/controlNetImageProcessed.ts b/invokeai/frontend/web/src/app/store/middleware/listenerMiddleware/listeners/controlNetImageProcessed.ts
index 0055866aa7..08afc98836 100644
--- a/invokeai/frontend/web/src/app/store/middleware/listenerMiddleware/listeners/controlNetImageProcessed.ts
+++ b/invokeai/frontend/web/src/app/store/middleware/listenerMiddleware/listeners/controlNetImageProcessed.ts
@@ -91,7 +91,7 @@ export const addControlNetImageProcessedListener = (startAppListening: AppStartL
dispatch(
controlAdapterProcessedImageChanged({
id,
- processedControlImage: processedControlImage.image_name,
+ processedControlImage,
})
);
}
diff --git a/invokeai/frontend/web/src/app/store/middleware/listenerMiddleware/listeners/imageDropped.ts b/invokeai/frontend/web/src/app/store/middleware/listenerMiddleware/listeners/imageDropped.ts
index 5c1f321b64..307e3487dd 100644
--- a/invokeai/frontend/web/src/app/store/middleware/listenerMiddleware/listeners/imageDropped.ts
+++ b/invokeai/frontend/web/src/app/store/middleware/listenerMiddleware/listeners/imageDropped.ts
@@ -71,7 +71,7 @@ export const addImageDroppedListener = (startAppListening: AppStartListening) =>
dispatch(
controlAdapterImageChanged({
id,
- controlImage: activeData.payload.imageDTO.image_name,
+ controlImage: activeData.payload.imageDTO,
})
);
dispatch(
diff --git a/invokeai/frontend/web/src/app/store/middleware/listenerMiddleware/listeners/imageUploaded.ts b/invokeai/frontend/web/src/app/store/middleware/listenerMiddleware/listeners/imageUploaded.ts
index 2cebf0aef8..a2ca4baeb1 100644
--- a/invokeai/frontend/web/src/app/store/middleware/listenerMiddleware/listeners/imageUploaded.ts
+++ b/invokeai/frontend/web/src/app/store/middleware/listenerMiddleware/listeners/imageUploaded.ts
@@ -96,7 +96,7 @@ export const addImageUploadedFulfilledListener = (startAppListening: AppStartLis
dispatch(
controlAdapterImageChanged({
id,
- controlImage: imageDTO.image_name,
+ controlImage: imageDTO,
})
);
dispatch(
diff --git a/invokeai/frontend/web/src/app/store/middleware/listenerMiddleware/listeners/modelSelected.ts b/invokeai/frontend/web/src/app/store/middleware/listenerMiddleware/listeners/modelSelected.ts
index bc049cf498..b69e56e84a 100644
--- a/invokeai/frontend/web/src/app/store/middleware/listenerMiddleware/listeners/modelSelected.ts
+++ b/invokeai/frontend/web/src/app/store/middleware/listenerMiddleware/listeners/modelSelected.ts
@@ -1,7 +1,7 @@
import { logger } from 'app/logging/logger';
import type { AppStartListening } from 'app/store/middleware/listenerMiddleware';
import {
- controlAdapterIsEnabledChanged,
+ controlAdapterModelChanged,
selectControlAdapterAll,
} from 'features/controlAdapters/store/controlAdaptersSlice';
import { loraRemoved } from 'features/lora/store/loraSlice';
@@ -54,7 +54,7 @@ export const addModelSelectedListener = (startAppListening: AppStartListening) =
// handle incompatible controlnets
selectControlAdapterAll(state.controlAdapters).forEach((ca) => {
if (ca.model?.base !== newBaseModel) {
- dispatch(controlAdapterIsEnabledChanged({ id: ca.id, isEnabled: false }));
+ dispatch(controlAdapterModelChanged({ id: ca.id, modelConfig: null }));
modelsCleared += 1;
}
});
diff --git a/invokeai/frontend/web/src/app/store/middleware/listenerMiddleware/listeners/modelsLoaded.ts b/invokeai/frontend/web/src/app/store/middleware/listenerMiddleware/listeners/modelsLoaded.ts
index 2ba9aa3cbf..eb86f54c84 100644
--- a/invokeai/frontend/web/src/app/store/middleware/listenerMiddleware/listeners/modelsLoaded.ts
+++ b/invokeai/frontend/web/src/app/store/middleware/listenerMiddleware/listeners/modelsLoaded.ts
@@ -6,9 +6,10 @@ import {
controlAdapterModelCleared,
selectControlAdapterAll,
} from 'features/controlAdapters/store/controlAdaptersSlice';
+import { heightChanged, widthChanged } from 'features/controlLayers/store/controlLayersSlice';
import { loraRemoved } from 'features/lora/store/loraSlice';
import { calculateNewSize } from 'features/parameters/components/ImageSize/calculateNewSize';
-import { heightChanged, modelChanged, vaeSelected, widthChanged } from 'features/parameters/store/generationSlice';
+import { modelChanged, vaeSelected } from 'features/parameters/store/generationSlice';
import { zParameterModel, zParameterVAEModel } from 'features/parameters/types/parameterSchemas';
import { getIsSizeOptimal, getOptimalDimension } from 'features/parameters/util/optimalDimension';
import { refinerModelChanged } from 'features/sdxl/store/sdxlSlice';
@@ -69,16 +70,22 @@ const handleMainModels: ModelHandler = (models, state, dispatch, log) => {
dispatch(modelChanged(defaultModelInList, currentModel));
const optimalDimension = getOptimalDimension(defaultModelInList);
- if (getIsSizeOptimal(state.generation.width, state.generation.height, optimalDimension)) {
+ if (
+ getIsSizeOptimal(
+ state.controlLayers.present.size.width,
+ state.controlLayers.present.size.height,
+ optimalDimension
+ )
+ ) {
return;
}
const { width, height } = calculateNewSize(
- state.generation.aspectRatio.value,
+ state.controlLayers.present.size.aspectRatio.value,
optimalDimension * optimalDimension
);
- dispatch(widthChanged(width));
- dispatch(heightChanged(height));
+ dispatch(widthChanged({ width }));
+ dispatch(heightChanged({ height }));
return;
}
}
diff --git a/invokeai/frontend/web/src/app/store/middleware/listenerMiddleware/listeners/promptChanged.ts b/invokeai/frontend/web/src/app/store/middleware/listenerMiddleware/listeners/promptChanged.ts
index b78ddc3f69..4633eb45a5 100644
--- a/invokeai/frontend/web/src/app/store/middleware/listenerMiddleware/listeners/promptChanged.ts
+++ b/invokeai/frontend/web/src/app/store/middleware/listenerMiddleware/listeners/promptChanged.ts
@@ -1,5 +1,6 @@
import { isAnyOf } from '@reduxjs/toolkit';
import type { AppStartListening } from 'app/store/middleware/listenerMiddleware';
+import { positivePromptChanged } from 'features/controlLayers/store/controlLayersSlice';
import {
combinatorialToggled,
isErrorChanged,
@@ -10,11 +11,16 @@ import {
promptsChanged,
} from 'features/dynamicPrompts/store/dynamicPromptsSlice';
import { getShouldProcessPrompt } from 'features/dynamicPrompts/util/getShouldProcessPrompt';
-import { setPositivePrompt } from 'features/parameters/store/generationSlice';
import { utilitiesApi } from 'services/api/endpoints/utilities';
import { socketConnected } from 'services/events/actions';
-const matcher = isAnyOf(setPositivePrompt, combinatorialToggled, maxPromptsChanged, maxPromptsReset, socketConnected);
+const matcher = isAnyOf(
+ positivePromptChanged,
+ combinatorialToggled,
+ maxPromptsChanged,
+ maxPromptsReset,
+ socketConnected
+);
export const addDynamicPromptsListener = (startAppListening: AppStartListening) => {
startAppListening({
@@ -22,7 +28,7 @@ export const addDynamicPromptsListener = (startAppListening: AppStartListening)
effect: async (action, { dispatch, getState, cancelActiveListeners, delay }) => {
cancelActiveListeners();
const state = getState();
- const { positivePrompt } = state.generation;
+ const { positivePrompt } = state.controlLayers.present;
const { maxPrompts } = state.dynamicPrompts;
if (state.config.disabledFeatures.includes('dynamicPrompting')) {
@@ -32,7 +38,7 @@ export const addDynamicPromptsListener = (startAppListening: AppStartListening)
const cachedPrompts = utilitiesApi.endpoints.dynamicPrompts.select({
prompt: positivePrompt,
max_prompts: maxPrompts,
- })(getState()).data;
+ })(state).data;
if (cachedPrompts) {
dispatch(promptsChanged(cachedPrompts.prompts));
@@ -40,8 +46,8 @@ export const addDynamicPromptsListener = (startAppListening: AppStartListening)
return;
}
- if (!getShouldProcessPrompt(state.generation.positivePrompt)) {
- dispatch(promptsChanged([state.generation.positivePrompt]));
+ if (!getShouldProcessPrompt(positivePrompt)) {
+ dispatch(promptsChanged([positivePrompt]));
dispatch(parsingErrorChanged(undefined));
dispatch(isErrorChanged(false));
return;
diff --git a/invokeai/frontend/web/src/app/store/middleware/listenerMiddleware/listeners/setDefaultSettings.ts b/invokeai/frontend/web/src/app/store/middleware/listenerMiddleware/listeners/setDefaultSettings.ts
index 7fbb55845f..6f3aa9756a 100644
--- a/invokeai/frontend/web/src/app/store/middleware/listenerMiddleware/listeners/setDefaultSettings.ts
+++ b/invokeai/frontend/web/src/app/store/middleware/listenerMiddleware/listeners/setDefaultSettings.ts
@@ -1,14 +1,13 @@
import type { AppStartListening } from 'app/store/middleware/listenerMiddleware';
+import { heightChanged, widthChanged } from 'features/controlLayers/store/controlLayersSlice';
import { setDefaultSettings } from 'features/parameters/store/actions';
import {
- heightRecalled,
setCfgRescaleMultiplier,
setCfgScale,
setScheduler,
setSteps,
vaePrecisionChanged,
vaeSelected,
- widthRecalled,
} from 'features/parameters/store/generationSlice';
import {
isParameterCFGRescaleMultiplier,
@@ -100,13 +99,13 @@ export const addSetDefaultSettingsListener = (startAppListening: AppStartListeni
if (width) {
if (isParameterWidth(width)) {
- dispatch(widthRecalled(width));
+ dispatch(widthChanged({ width, updateAspectRatio: true }));
}
}
if (height) {
if (isParameterHeight(height)) {
- dispatch(heightRecalled(height));
+ dispatch(heightChanged({ height, updateAspectRatio: true }));
}
}
diff --git a/invokeai/frontend/web/src/app/store/store.ts b/invokeai/frontend/web/src/app/store/store.ts
index b538a3eaeb..9661f57f99 100644
--- a/invokeai/frontend/web/src/app/store/store.ts
+++ b/invokeai/frontend/web/src/app/store/store.ts
@@ -10,6 +10,11 @@ import {
controlAdaptersPersistConfig,
controlAdaptersSlice,
} from 'features/controlAdapters/store/controlAdaptersSlice';
+import {
+ controlLayersPersistConfig,
+ controlLayersSlice,
+ controlLayersUndoableConfig,
+} from 'features/controlLayers/store/controlLayersSlice';
import { deleteImageModalSlice } from 'features/deleteImageModal/store/slice';
import { dynamicPromptsPersistConfig, dynamicPromptsSlice } from 'features/dynamicPrompts/store/dynamicPromptsSlice';
import { galleryPersistConfig, gallerySlice } from 'features/gallery/store/gallerySlice';
@@ -30,6 +35,7 @@ import { defaultsDeep, keys, omit, pick } from 'lodash-es';
import dynamicMiddlewares from 'redux-dynamic-middlewares';
import type { SerializeFunction, UnserializeFunction } from 'redux-remember';
import { rememberEnhancer, rememberReducer } from 'redux-remember';
+import undoable from 'redux-undo';
import { serializeError } from 'serialize-error';
import { api } from 'services/api';
import { authToastMiddleware } from 'services/api/authToastMiddleware';
@@ -59,6 +65,7 @@ const allReducers = {
[queueSlice.name]: queueSlice.reducer,
[workflowSlice.name]: workflowSlice.reducer,
[hrfSlice.name]: hrfSlice.reducer,
+ [controlLayersSlice.name]: undoable(controlLayersSlice.reducer, controlLayersUndoableConfig),
[api.reducerPath]: api.reducer,
};
@@ -103,6 +110,7 @@ const persistConfigs: { [key in keyof typeof allReducers]?: PersistConfig } = {
[loraPersistConfig.name]: loraPersistConfig,
[modelManagerV2PersistConfig.name]: modelManagerV2PersistConfig,
[hrfPersistConfig.name]: hrfPersistConfig,
+ [controlLayersPersistConfig.name]: controlLayersPersistConfig,
};
const unserialize: UnserializeFunction = (data, key) => {
@@ -114,6 +122,7 @@ const unserialize: UnserializeFunction = (data, key) => {
try {
const { initialState, migrate } = persistConfig;
const parsed = JSON.parse(data);
+
// strip out old keys
const stripped = pick(parsed, keys(initialState));
// run (additive) migrations
@@ -141,7 +150,9 @@ const serialize: SerializeFunction = (data, key) => {
if (!persistConfig) {
throw new Error(`No persist config for slice "${key}"`);
}
- const result = omit(data, persistConfig.persistDenylist);
+ // Heuristic to determine if the slice is undoable - could just hardcode it in the persistConfig
+ const isUndoable = 'present' in data && 'past' in data && 'future' in data && '_latestUnfiltered' in data;
+ const result = omit(isUndoable ? data.present : data, persistConfig.persistDenylist);
return JSON.stringify(result);
};
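A note on the `serialize` heuristic above: redux-undo wraps a slice as `{ past, present, future }` plus an internal `_latestUnfiltered` field, so probing for those four keys distinguishes undoable slices from plain ones. A minimal standalone sketch (the `UndoableState` type and `toPersist` helper are illustrative, not part of the diff):

```typescript
// Illustrative shape of a redux-undo wrapped slice (cf. its StateWithHistory type).
type UndoableState<T> = {
  past: T[];
  present: T;
  future: T[];
  _latestUnfiltered: T;
};

// Same heuristic as the diff's `serialize`: an undoable slice exposes all four keys.
const isUndoable = (data: Record<string, unknown>): boolean =>
  'present' in data && 'past' in data && 'future' in data && '_latestUnfiltered' in data;

// Persist only the current snapshot; undo history is dropped across reloads.
const toPersist = (data: Record<string, unknown>): unknown =>
  isUndoable(data) ? (data as UndoableState<unknown>).present : data;
```

This keeps the persisted JSON small and avoids rehydrating a stale undo stack.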
diff --git a/invokeai/frontend/web/src/common/components/IAIColorPicker.tsx b/invokeai/frontend/web/src/common/components/IAIColorPicker.tsx
index 68ffa5369e..25b129f678 100644
--- a/invokeai/frontend/web/src/common/components/IAIColorPicker.tsx
+++ b/invokeai/frontend/web/src/common/components/IAIColorPicker.tsx
@@ -26,7 +26,7 @@ const sx: ChakraProps['sx'] = {
const colorPickerStyles: CSSProperties = { width: '100%' };
-const numberInputWidth: ChakraProps['w'] = '4.2rem';
+const numberInputWidth: ChakraProps['w'] = '3.5rem';
const IAIColorPicker = (props: IAIColorPickerProps) => {
const { color, onChange, withNumberInput, ...rest } = props;
@@ -41,7 +41,7 @@ const IAIColorPicker = (props: IAIColorPickerProps) => {
 {withNumberInput && (
 <Flex>
 <FormControl>
- <FormLabel>{t('common.red')}</FormLabel>
+ <FormLabel>{t('common.red')[0]}</FormLabel>
 <CompositeNumberInput value={color.r} onChange={handleChangeR} min={0} max={255} step={1} w={numberInputWidth} />
 </FormControl>
 <FormControl>
- <FormLabel>{t('common.green')}</FormLabel>
+ <FormLabel>{t('common.green')[0]}</FormLabel>
 <CompositeNumberInput value={color.g} onChange={handleChangeG} min={0} max={255} step={1} w={numberInputWidth} />
 </FormControl>
 <FormControl>
- <FormLabel>{t('common.blue')}</FormLabel>
+ <FormLabel>{t('common.blue')[0]}</FormLabel>
 <CompositeNumberInput value={color.b} onChange={handleChangeB} min={0} max={255} step={1} w={numberInputWidth} />
 </FormControl>
 <FormControl>
- <FormLabel>{t('common.alpha')}</FormLabel>
+ <FormLabel>{t('common.alpha')[0]}</FormLabel>
 <CompositeNumberInput value={color.a} onChange={handleChangeA} min={0} max={1} step={0.1} w={numberInputWidth} />
 </FormControl>
 </Flex>
 )}
diff --git a/invokeai/frontend/web/src/common/components/RgbColorPicker.tsx b/invokeai/frontend/web/src/common/components/RgbColorPicker.tsx
new file mode 100644
--- /dev/null
+++ b/invokeai/frontend/web/src/common/components/RgbColorPicker.tsx
+import type { ChakraProps } from '@invoke-ai/ui-library';
+import { CompositeNumberInput, Flex, FormControl, FormLabel } from '@invoke-ai/ui-library';
+import type { CSSProperties } from 'react';
+import { memo, useCallback } from 'react';
+import type { ColorPickerBaseProps, RgbColor } from 'react-colorful';
+import { RgbColorPicker as ColorfulRgbColorPicker } from 'react-colorful';
+import { useTranslation } from 'react-i18next';
+
+type RgbColorPickerProps = ColorPickerBaseProps<RgbColor> & {
+ withNumberInput?: boolean;
+};
+
+const colorPickerPointerStyles: NonNullable<ChakraProps['sx']> = {
+ width: 6,
+ height: 6,
+ borderColor: 'base.100',
+};
+
+const sx: ChakraProps['sx'] = {
+ '.react-colorful__hue-pointer': colorPickerPointerStyles,
+ '.react-colorful__saturation-pointer': colorPickerPointerStyles,
+ '.react-colorful__alpha-pointer': colorPickerPointerStyles,
+ gap: 5,
+ flexDir: 'column',
+};
+
+const colorPickerStyles: CSSProperties = { width: '100%' };
+
+const numberInputWidth: ChakraProps['w'] = '3.5rem';
+
+const RgbColorPicker = (props: RgbColorPickerProps) => {
+ const { color, onChange, withNumberInput, ...rest } = props;
+ const { t } = useTranslation();
+ const handleChangeR = useCallback((r: number) => onChange({ ...color, r }), [color, onChange]);
+ const handleChangeG = useCallback((g: number) => onChange({ ...color, g }), [color, onChange]);
+ const handleChangeB = useCallback((b: number) => onChange({ ...color, b }), [color, onChange]);
+ return (
+ <Flex sx={sx}>
+ <ColorfulRgbColorPicker color={color} onChange={onChange} style={colorPickerStyles} {...rest} />
+ {withNumberInput && (
+ <Flex>
+ <FormControl>
+ <FormLabel>{t('common.red')[0]}</FormLabel>
+ <CompositeNumberInput value={color.r} onChange={handleChangeR} min={0} max={255} step={1} w={numberInputWidth} />
+ </FormControl>
+ <FormControl>
+ <FormLabel>{t('common.green')[0]}</FormLabel>
+ <CompositeNumberInput value={color.g} onChange={handleChangeG} min={0} max={255} step={1} w={numberInputWidth} />
+ </FormControl>
+ <FormControl>
+ <FormLabel>{t('common.blue')[0]}</FormLabel>
+ <CompositeNumberInput value={color.b} onChange={handleChangeB} min={0} max={255} step={1} w={numberInputWidth} />
+ </FormControl>
+ </Flex>
+ )}
+ </Flex>
+ );
+};
+
+export default memo(RgbColorPicker);
diff --git a/invokeai/frontend/web/src/common/hooks/useIsReadyToEnqueue.ts b/invokeai/frontend/web/src/common/hooks/useIsReadyToEnqueue.ts
index b31efed970..d765e987eb 100644
--- a/invokeai/frontend/web/src/common/hooks/useIsReadyToEnqueue.ts
+++ b/invokeai/frontend/web/src/common/hooks/useIsReadyToEnqueue.ts
@@ -5,6 +5,7 @@ import {
selectControlAdaptersSlice,
} from 'features/controlAdapters/store/controlAdaptersSlice';
import { isControlNetOrT2IAdapter } from 'features/controlAdapters/store/types';
+import { selectControlLayersSlice } from 'features/controlLayers/store/controlLayersSlice';
import { selectDynamicPromptsSlice } from 'features/dynamicPrompts/store/dynamicPromptsSlice';
import { getShouldProcessPrompt } from 'features/dynamicPrompts/util/getShouldProcessPrompt';
import { selectNodesSlice } from 'features/nodes/store/nodesSlice';
@@ -23,10 +24,12 @@ const selector = createMemoizedSelector(
selectSystemSlice,
selectNodesSlice,
selectDynamicPromptsSlice,
+ selectControlLayersSlice,
activeTabNameSelector,
],
- (controlAdapters, generation, system, nodes, dynamicPrompts, activeTabName) => {
- const { initialImage, model, positivePrompt } = generation;
+ (controlAdapters, generation, system, nodes, dynamicPrompts, controlLayers, activeTabName) => {
+ const { initialImage, model } = generation;
+ const { positivePrompt } = controlLayers.present;
const { isConnected } = system;
@@ -94,7 +97,41 @@ const selector = createMemoizedSelector(
reasons.push(i18n.t('parameters.invoke.noModelSelected'));
}
- selectControlAdapterAll(controlAdapters).forEach((ca, i) => {
+ let enabledControlAdapters = selectControlAdapterAll(controlAdapters).filter((ca) => ca.isEnabled);
+
+ if (activeTabName === 'txt2img') {
+ // Special handling for control layers on txt2img
+ const enabledControlLayersAdapterIds = controlLayers.present.layers
+ .filter((l) => l.isEnabled)
+ .flatMap((layer) => {
+ if (layer.type === 'regional_guidance_layer') {
+ return layer.ipAdapterIds;
+ }
+ if (layer.type === 'control_adapter_layer') {
+ return [layer.controlNetId];
+ }
+ if (layer.type === 'ip_adapter_layer') {
+ return [layer.ipAdapterId];
+ }
+ return [];
+ });
+
+ enabledControlAdapters = enabledControlAdapters.filter((ca) => enabledControlLayersAdapterIds.includes(ca.id));
+ } else {
+ const allControlLayerAdapterIds = controlLayers.present.layers.flatMap((layer) => {
+ if (layer.type === 'regional_guidance_layer') {
+ return layer.ipAdapterIds;
+ }
+ if (layer.type === 'control_adapter_layer') {
+ return [layer.controlNetId];
+ }
+ if (layer.type === 'ip_adapter_layer') {
+ return [layer.ipAdapterId];
+ }
+ return [];
+ });
+ enabledControlAdapters = enabledControlAdapters.filter((ca) => !allControlLayerAdapterIds.includes(ca.id));
+ }
+
+ enabledControlAdapters.forEach((ca, i) => {
if (!ca.isEnabled) {
return;
}
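The id-collection pattern in the selector above can be isolated. One detail matters: every branch of a `flatMap` callback must return an array, because a fall-through `return` pushes `undefined` into the flattened result and silently breaks the subsequent id filter. A self-contained sketch with hypothetical layer data:

```typescript
// Hypothetical, simplified layer union mirroring the three layer types in the diff.
type Layer =
  | { type: 'regional_guidance_layer'; ipAdapterIds: string[] }
  | { type: 'control_adapter_layer'; controlNetId: string }
  | { type: 'ip_adapter_layer'; ipAdapterId: string };

// Collect every control/IP adapter id referenced by a list of layers.
const collectAdapterIds = (layers: Layer[]): string[] =>
  layers.flatMap((layer) => {
    if (layer.type === 'regional_guidance_layer') {
      return layer.ipAdapterIds;
    }
    if (layer.type === 'control_adapter_layer') {
      return [layer.controlNetId];
    }
    if (layer.type === 'ip_adapter_layer') {
      return [layer.ipAdapterId];
    }
    // Explicit empty array: omitting this would flatten `undefined` into the result.
    return [];
  });
```

The resulting id list is then used to include (txt2img) or exclude (other tabs) the matching control adapters.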
diff --git a/invokeai/frontend/web/src/common/util/arrayUtils.test.ts b/invokeai/frontend/web/src/common/util/arrayUtils.test.ts
new file mode 100644
index 0000000000..5d0fd090f7
--- /dev/null
+++ b/invokeai/frontend/web/src/common/util/arrayUtils.test.ts
@@ -0,0 +1,85 @@
+import { moveBackward, moveForward, moveToBack, moveToFront } from 'common/util/arrayUtils';
+import { describe, expect, it } from 'vitest';
+
+describe('Array Manipulation Functions', () => {
+ const originalArray = ['a', 'b', 'c', 'd'];
+ describe('moveForward', () => {
+ it('should move an item forward by one position', () => {
+ const array = [...originalArray];
+ const result = moveForward(array, (item) => item === 'b');
+ expect(result).toEqual(['a', 'c', 'b', 'd']);
+ });
+
+ it('should do nothing if the item is at the end', () => {
+ const array = [...originalArray];
+ const result = moveForward(array, (item) => item === 'd');
+ expect(result).toEqual(['a', 'b', 'c', 'd']);
+ });
+
+ it("should leave the array unchanged if the item isn't in the array", () => {
+ const array = [...originalArray];
+ const result = moveForward(array, (item) => item === 'z');
+ expect(result).toEqual(originalArray);
+ });
+ });
+
+ describe('moveToFront', () => {
+ it('should move an item to the front', () => {
+ const array = [...originalArray];
+ const result = moveToFront(array, (item) => item === 'c');
+ expect(result).toEqual(['c', 'a', 'b', 'd']);
+ });
+
+ it('should do nothing if the item is already at the front', () => {
+ const array = [...originalArray];
+ const result = moveToFront(array, (item) => item === 'a');
+ expect(result).toEqual(['a', 'b', 'c', 'd']);
+ });
+
+ it("should leave the array unchanged if the item isn't in the array", () => {
+ const array = [...originalArray];
+ const result = moveToFront(array, (item) => item === 'z');
+ expect(result).toEqual(originalArray);
+ });
+ });
+
+ describe('moveBackward', () => {
+ it('should move an item backward by one position', () => {
+ const array = [...originalArray];
+ const result = moveBackward(array, (item) => item === 'c');
+ expect(result).toEqual(['a', 'c', 'b', 'd']);
+ });
+
+ it('should do nothing if the item is at the beginning', () => {
+ const array = [...originalArray];
+ const result = moveBackward(array, (item) => item === 'a');
+ expect(result).toEqual(['a', 'b', 'c', 'd']);
+ });
+
+ it("should leave the array unchanged if the item isn't in the array", () => {
+ const array = [...originalArray];
+ const result = moveBackward(array, (item) => item === 'z');
+ expect(result).toEqual(originalArray);
+ });
+ });
+
+ describe('moveToBack', () => {
+ it('should move an item to the back', () => {
+ const array = [...originalArray];
+ const result = moveToBack(array, (item) => item === 'b');
+ expect(result).toEqual(['a', 'c', 'd', 'b']);
+ });
+
+ it('should do nothing if the item is already at the back', () => {
+ const array = [...originalArray];
+ const result = moveToBack(array, (item) => item === 'd');
+ expect(result).toEqual(['a', 'b', 'c', 'd']);
+ });
+
+ it("should leave the array unchanged if the item isn't in the array", () => {
+ const array = [...originalArray];
+ const result = moveToBack(array, (item) => item === 'z');
+ expect(result).toEqual(originalArray);
+ });
+ });
+});
diff --git a/invokeai/frontend/web/src/common/util/arrayUtils.ts b/invokeai/frontend/web/src/common/util/arrayUtils.ts
new file mode 100644
index 0000000000..38c99b63ec
--- /dev/null
+++ b/invokeai/frontend/web/src/common/util/arrayUtils.ts
@@ -0,0 +1,37 @@
+export const moveForward = <T>(array: T[], callback: (item: T) => boolean): T[] => {
+ const index = array.findIndex(callback);
+ if (index >= 0 && index < array.length - 1) {
+ //@ts-expect-error - These indices are safe per the previous check
+ [array[index], array[index + 1]] = [array[index + 1], array[index]];
+ }
+ return array;
+};
+
+export const moveToFront = <T>(array: T[], callback: (item: T) => boolean): T[] => {
+ const index = array.findIndex(callback);
+ if (index > 0) {
+ const [item] = array.splice(index, 1);
+ //@ts-expect-error - These indices are safe per the previous check
+ array.unshift(item);
+ }
+ return array;
+};
+
+export const moveBackward = <T>(array: T[], callback: (item: T) => boolean): T[] => {
+ const index = array.findIndex(callback);
+ if (index > 0) {
+ //@ts-expect-error - These indices are safe per the previous check
+ [array[index], array[index - 1]] = [array[index - 1], array[index]];
+ }
+ return array;
+};
+
+export const moveToBack = <T>(array: T[], callback: (item: T) => boolean): T[] => {
+ const index = array.findIndex(callback);
+ if (index >= 0 && index < array.length - 1) {
+ const [item] = array.splice(index, 1);
+ //@ts-expect-error - These indices are safe per the previous check
+ array.push(item);
+ }
+ return array;
+};
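Each helper above boils down to one `findIndex` plus an in-place swap or splice. A self-contained usage sketch of two of them (re-declared locally so the example stands on its own), showing how layer ordering composes:

```typescript
// Move the matched item one position toward the end of the array (in-place swap).
const moveForward = <T>(array: T[], callback: (item: T) => boolean): T[] => {
  const index = array.findIndex(callback);
  if (index >= 0 && index < array.length - 1) {
    [array[index], array[index + 1]] = [array[index + 1], array[index]];
  }
  return array;
};

// Move the matched item to index 0 (splice it out, then unshift it back in).
const moveToFront = <T>(array: T[], callback: (item: T) => boolean): T[] => {
  const index = array.findIndex(callback);
  if (index > 0) {
    const [item] = array.splice(index, 1);
    array.unshift(item);
  }
  return array;
};

// Layer ordering example: bump 'c' forward one step, then bring 'd' to the front.
const layers = ['a', 'b', 'c', 'd'];
moveForward(layers, (l) => l === 'c'); // ['a', 'b', 'd', 'c']
moveToFront(layers, (l) => l === 'd'); // ['d', 'a', 'b', 'c']
```

Note the functions mutate their argument and return it, so callers that need immutability (e.g. outside an Immer reducer) must pass a copy, as the tests above do with `[...originalArray]`.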
diff --git a/invokeai/frontend/web/src/common/util/stopPropagation.ts b/invokeai/frontend/web/src/common/util/stopPropagation.ts
new file mode 100644
index 0000000000..b3481b7c0e
--- /dev/null
+++ b/invokeai/frontend/web/src/common/util/stopPropagation.ts
@@ -0,0 +1,3 @@
+export const stopPropagation = (e: React.MouseEvent) => {
+ e.stopPropagation();
+};
diff --git a/invokeai/frontend/web/src/features/canvas/hooks/useCanvasZoom.ts b/invokeai/frontend/web/src/features/canvas/hooks/useCanvasZoom.ts
index ef6a74ae9c..1434bc9afc 100644
--- a/invokeai/frontend/web/src/features/canvas/hooks/useCanvasZoom.ts
+++ b/invokeai/frontend/web/src/features/canvas/hooks/useCanvasZoom.ts
@@ -10,6 +10,18 @@ import { clamp } from 'lodash-es';
import type { MutableRefObject } from 'react';
import { useCallback } from 'react';
+export const calculateNewBrushSize = (brushSize: number, delta: number) => {
+ // This equation was derived by fitting a curve to the desired brush sizes and deltas
+ // see https://github.com/invoke-ai/InvokeAI/pull/5542#issuecomment-1915847565
+ const targetDelta = Math.sign(delta) * 0.7363 * Math.pow(1.0394, brushSize);
+ // This needs to be clamped to prevent the delta from getting too large
+ const finalDelta = clamp(targetDelta, -20, 20);
+ // The new brush size is also clamped to prevent it from getting too large or small
+ const newBrushSize = clamp(brushSize + finalDelta, 1, 500);
+
+ return newBrushSize;
+};
+
const useCanvasWheel = (stageRef: MutableRefObject<Konva.Stage | null>) => {
const dispatch = useAppDispatch();
const stageScale = useAppSelector((s) => s.canvas.stageScale);
@@ -36,15 +48,7 @@ const useCanvasWheel = (stageRef: MutableRefObject) => {
}
if ($ctrl.get() || $meta.get()) {
- // This equation was derived by fitting a curve to the desired brush sizes and deltas
- // see https://github.com/invoke-ai/InvokeAI/pull/5542#issuecomment-1915847565
- const targetDelta = Math.sign(delta) * 0.7363 * Math.pow(1.0394, brushSize);
- // This needs to be clamped to prevent the delta from getting too large
- const finalDelta = clamp(targetDelta, -20, 20);
- // The new brush size is also clamped to prevent it from getting too large or small
- const newBrushSize = clamp(brushSize + finalDelta, 1, 500);
-
- dispatch(setBrushSize(newBrushSize));
+ dispatch(setBrushSize(calculateNewBrushSize(brushSize, delta)));
} else {
const cursorPos = stageRef.current.getPointerPosition();
let delta = e.evt.deltaY;
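The extracted `calculateNewBrushSize` is easy to sanity-check numerically. A standalone sketch with lodash's `clamp` inlined (the local `clamp` is an assumption standing in for the lodash import):

```typescript
// Inlined equivalent of lodash's clamp(n, min, max).
const clamp = (n: number, min: number, max: number): number => Math.min(Math.max(n, min), max);

const calculateNewBrushSize = (brushSize: number, delta: number): number => {
  // Fitted curve: small brushes change by roughly 1px per wheel tick,
  // large brushes change faster, capped at +/-20px per tick.
  const targetDelta = Math.sign(delta) * 0.7363 * Math.pow(1.0394, brushSize);
  const finalDelta = clamp(targetDelta, -20, 20);
  // The brush size itself stays within [1, 500].
  return clamp(brushSize + finalDelta, 1, 500);
};
```

At size 100 a single wheel tick already hits the ±20px cap, so large brushes resize in fixed 20px steps, while a size-10 brush moves by only about 1px per tick.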
diff --git a/invokeai/frontend/web/src/features/canvas/util/blobToDataURL.ts b/invokeai/frontend/web/src/features/canvas/util/blobToDataURL.ts
index 2443396105..f29010c99c 100644
--- a/invokeai/frontend/web/src/features/canvas/util/blobToDataURL.ts
+++ b/invokeai/frontend/web/src/features/canvas/util/blobToDataURL.ts
export const blobToDataURL = (blob: Blob): Promise<string> => {
reader.readAsDataURL(blob);
});
};
+
+export function imageDataToDataURL(imageData: ImageData): string {
+ const { width, height } = imageData;
+
+ // Create a canvas to transfer the ImageData to
+ const canvas = document.createElement('canvas');
+ canvas.width = width;
+ canvas.height = height;
+
+ // Draw the ImageData onto the canvas
+ const ctx = canvas.getContext('2d');
+ if (!ctx) {
+ throw new Error('Unable to get canvas context');
+ }
+ ctx.putImageData(imageData, 0, 0);
+
+ // Convert the canvas to a data URL (base64)
+ return canvas.toDataURL();
+}
diff --git a/invokeai/frontend/web/src/features/canvas/util/colorToString.ts b/invokeai/frontend/web/src/features/canvas/util/colorToString.ts
index a4b619c5de..25d79fed5a 100644
--- a/invokeai/frontend/web/src/features/canvas/util/colorToString.ts
+++ b/invokeai/frontend/web/src/features/canvas/util/colorToString.ts
@@ -1,6 +1,11 @@
-import type { RgbaColor } from 'react-colorful';
+import type { RgbaColor, RgbColor } from 'react-colorful';
export const rgbaColorToString = (color: RgbaColor): string => {
const { r, g, b, a } = color;
return `rgba(${r}, ${g}, ${b}, ${a})`;
};
+
+export const rgbColorToString = (color: RgbColor): string => {
+ const { r, g, b } = color;
+ return `rgb(${r}, ${g}, ${b})`;
+};
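A standalone sketch of the two converters, using the conventional `rgb()` prefix for the three-channel form (the local type aliases stand in for the `react-colorful` imports):

```typescript
// Shapes matching react-colorful's RgbaColor / RgbColor.
type RgbaColor = { r: number; g: number; b: number; a: number };
type RgbColor = { r: number; g: number; b: number };

// Four channels -> CSS rgba() functional notation.
const rgbaColorToString = ({ r, g, b, a }: RgbaColor): string => `rgba(${r}, ${g}, ${b}, ${a})`;

// Three channels -> CSS rgb() functional notation.
const rgbColorToString = ({ r, g, b }: RgbColor): string => `rgb(${r}, ${g}, ${b})`;
```

These strings feed directly into canvas fill/stroke styles and CSS properties.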
diff --git a/invokeai/frontend/web/src/features/controlAdapters/components/ControlAdapterConfig.tsx b/invokeai/frontend/web/src/features/controlAdapters/components/ControlAdapterConfig.tsx
index fcc816d75f..032e46f477 100644
--- a/invokeai/frontend/web/src/features/controlAdapters/components/ControlAdapterConfig.tsx
+++ b/invokeai/frontend/web/src/features/controlAdapters/components/ControlAdapterConfig.tsx
@@ -113,7 +113,7 @@ const ControlAdapterConfig = (props: { id: string; number: number }) => {
- <ParamControlAdapterIPMethod id={id} />
+ {controlAdapterType === 'ip_adapter' && <ParamControlAdapterIPMethod id={id} />}
diff --git a/invokeai/frontend/web/src/features/controlAdapters/components/ControlAdapterImagePreview.tsx b/invokeai/frontend/web/src/features/controlAdapters/components/ControlAdapterImagePreview.tsx
index c136fbe064..56589fe613 100644
--- a/invokeai/frontend/web/src/features/controlAdapters/components/ControlAdapterImagePreview.tsx
+++ b/invokeai/frontend/web/src/features/controlAdapters/components/ControlAdapterImagePreview.tsx
@@ -13,9 +13,10 @@ import {
controlAdapterImageChanged,
selectControlAdaptersSlice,
} from 'features/controlAdapters/store/controlAdaptersSlice';
+import { heightChanged, widthChanged } from 'features/controlLayers/store/controlLayersSlice';
import type { TypesafeDraggableData, TypesafeDroppableData } from 'features/dnd/types';
import { calculateNewSize } from 'features/parameters/components/ImageSize/calculateNewSize';
-import { heightChanged, selectOptimalDimension, widthChanged } from 'features/parameters/store/generationSlice';
+import { selectOptimalDimension } from 'features/parameters/store/generationSlice';
import { activeTabNameSelector } from 'features/ui/store/uiSelectors';
import { memo, useCallback, useEffect, useMemo, useState } from 'react';
import { useTranslation } from 'react-i18next';
@@ -99,8 +100,8 @@ const ControlAdapterImagePreview = ({ isSmall, id }: Props) => {
controlImage.width / controlImage.height,
optimalDimension * optimalDimension
);
- dispatch(widthChanged(width));
- dispatch(heightChanged(height));
+ dispatch(widthChanged({ width, updateAspectRatio: true }));
+ dispatch(heightChanged({ height, updateAspectRatio: true }));
}
}, [controlImage, activeTabName, dispatch, optimalDimension]);
diff --git a/invokeai/frontend/web/src/features/controlAdapters/components/parameters/ParamControlAdapterIPMethod.tsx b/invokeai/frontend/web/src/features/controlAdapters/components/parameters/ParamControlAdapterIPMethod.tsx
index 7385997804..d7d91ab780 100644
--- a/invokeai/frontend/web/src/features/controlAdapters/components/parameters/ParamControlAdapterIPMethod.tsx
+++ b/invokeai/frontend/web/src/features/controlAdapters/components/parameters/ParamControlAdapterIPMethod.tsx
@@ -23,8 +23,8 @@ const ParamControlAdapterIPMethod = ({ id }: Props) => {
const options: { label: string; value: IPMethod }[] = useMemo(
() => [
{ label: t('controlnet.full'), value: 'full' },
- { label: t('controlnet.style'), value: 'style' },
- { label: t('controlnet.composition'), value: 'composition' },
+ { label: `${t('controlnet.style')} (${t('common.beta')})`, value: 'style' },
+ { label: `${t('controlnet.composition')} (${t('common.beta')})`, value: 'composition' },
],
[t]
);
@@ -46,13 +46,9 @@ const ParamControlAdapterIPMethod = ({ id }: Props) => {
const value = useMemo(() => options.find((o) => o.value === method), [options, method]);
- if (!method) {
- return null;
- }
-
return (
-
+
{t('controlnet.ipAdapterMethod')}
diff --git a/invokeai/frontend/web/src/features/controlAdapters/components/parameters/ParamControlAdapterModel.tsx b/invokeai/frontend/web/src/features/controlAdapters/components/parameters/ParamControlAdapterModel.tsx
index 00c7d5859d..73a7d695b3 100644
--- a/invokeai/frontend/web/src/features/controlAdapters/components/parameters/ParamControlAdapterModel.tsx
+++ b/invokeai/frontend/web/src/features/controlAdapters/components/parameters/ParamControlAdapterModel.tsx
@@ -102,13 +102,9 @@ const ParamControlAdapterModel = ({ id }: ParamControlAdapterModelProps) => {
);
return (
-
+
-
+
{
{
const selector = useMemo(
() =>
createMemoizedSelector(selectControlAdaptersSlice, (controlAdapters) => {
- const cn = selectControlAdapterById(controlAdapters, id);
- if (cn && cn?.type === 'ip_adapter') {
- return cn.method;
- }
+ const ca = selectControlAdapterById(controlAdapters, id);
+ assert(ca?.type === 'ip_adapter');
+ return ca.method;
}),
[id]
);
diff --git a/invokeai/frontend/web/src/features/controlAdapters/store/controlAdaptersSlice.ts b/invokeai/frontend/web/src/features/controlAdapters/store/controlAdaptersSlice.ts
index 9a1ce5e984..0c1ac20200 100644
--- a/invokeai/frontend/web/src/features/controlAdapters/store/controlAdaptersSlice.ts
+++ b/invokeai/frontend/web/src/features/controlAdapters/store/controlAdaptersSlice.ts
@@ -7,7 +7,7 @@ import { buildControlAdapter } from 'features/controlAdapters/util/buildControlA
import { buildControlAdapterProcessor } from 'features/controlAdapters/util/buildControlAdapterProcessor';
import { zModelIdentifierField } from 'features/nodes/types/common';
import { merge, uniq } from 'lodash-es';
-import type { ControlNetModelConfig, IPAdapterModelConfig, T2IAdapterModelConfig } from 'services/api/types';
+import type { ControlNetModelConfig, ImageDTO, IPAdapterModelConfig, T2IAdapterModelConfig } from 'services/api/types';
import { socketInvocationError } from 'services/events/actions';
import { v4 as uuidv4 } from 'uuid';
@@ -134,23 +134,46 @@ export const controlAdaptersSlice = createSlice({
const { id, isEnabled } = action.payload;
caAdapter.updateOne(state, { id, changes: { isEnabled } });
},
- controlAdapterImageChanged: (
- state,
- action: PayloadAction<{
- id: string;
- controlImage: string | null;
- }>
- ) => {
+ controlAdapterImageChanged: (state, action: PayloadAction<{ id: string; controlImage: ImageDTO | null }>) => {
const { id, controlImage } = action.payload;
const ca = selectControlAdapterById(state, id);
if (!ca) {
return;
}
- caAdapter.updateOne(state, {
- id,
- changes: { controlImage, processedControlImage: null },
- });
+ if (isControlNetOrT2IAdapter(ca)) {
+ if (controlImage) {
+ const { image_name, width, height } = controlImage;
+ const processorNode = deepClone(ca.processorNode);
+ const minDim = Math.min(controlImage.width, controlImage.height);
+ if ('detect_resolution' in processorNode) {
+ processorNode.detect_resolution = minDim;
+ }
+ if ('image_resolution' in processorNode) {
+ processorNode.image_resolution = minDim;
+ }
+ if ('resolution' in processorNode) {
+ processorNode.resolution = minDim;
+ }
+ caAdapter.updateOne(state, {
+ id,
+ changes: {
+ processorNode,
+ controlImage: image_name,
+ controlImageDimensions: { width, height },
+ processedControlImage: null,
+ },
+ });
+ } else {
+ caAdapter.updateOne(state, {
+ id,
+ changes: { controlImage: null, controlImageDimensions: null, processedControlImage: null },
+ });
+ }
+ } else {
+ // ip adapter
+ caAdapter.updateOne(state, { id, changes: { controlImage: controlImage?.image_name ?? null } });
+ }
if (controlImage !== null && isControlNetOrT2IAdapter(ca) && ca.processorType !== 'none') {
state.pendingControlImages.push(id);
@@ -160,7 +183,7 @@ export const controlAdaptersSlice = createSlice({
state,
action: PayloadAction<{
id: string;
- processedControlImage: string | null;
+ processedControlImage: ImageDTO | null;
}>
) => {
const { id, processedControlImage } = action.payload;
@@ -173,12 +196,24 @@ export const controlAdaptersSlice = createSlice({
return;
}
- caAdapter.updateOne(state, {
- id,
- changes: {
- processedControlImage,
- },
- });
+ if (processedControlImage) {
+ const { image_name, width, height } = processedControlImage;
+ caAdapter.updateOne(state, {
+ id,
+ changes: {
+ processedControlImage: image_name,
+ processedControlImageDimensions: { width, height },
+ },
+ });
+ } else {
+ caAdapter.updateOne(state, {
+ id,
+ changes: {
+ processedControlImage: null,
+ processedControlImageDimensions: null,
+ },
+ });
+ }
state.pendingControlImages = state.pendingControlImages.filter((pendingId) => pendingId !== id);
},
@@ -192,7 +227,7 @@ export const controlAdaptersSlice = createSlice({
state,
action: PayloadAction<{
id: string;
- modelConfig: ControlNetModelConfig | T2IAdapterModelConfig | IPAdapterModelConfig;
+ modelConfig: ControlNetModelConfig | T2IAdapterModelConfig | IPAdapterModelConfig | null;
}>
) => {
const { id, modelConfig } = action.payload;
@@ -201,6 +236,11 @@ export const controlAdaptersSlice = createSlice({
return;
}
+ if (modelConfig === null) {
+ caAdapter.updateOne(state, { id, changes: { model: null } });
+ return;
+ }
+
const model = zModelIdentifierField.parse(modelConfig);
if (!isControlNetOrT2IAdapter(cn)) {
@@ -208,22 +248,36 @@ export const controlAdaptersSlice = createSlice({
return;
}
- const update: Update = {
- id,
- changes: { model, shouldAutoConfig: true },
- };
-
- update.changes.processedControlImage = null;
-
if (modelConfig.type === 'ip_adapter') {
// should never happen...
return;
}
- const processor = buildControlAdapterProcessor(modelConfig);
- update.changes.processorType = processor.processorType;
- update.changes.processorNode = processor.processorNode;
+ // We always update the model
+ const update: Update = { id, changes: { model } };
+ // Build the default processor for this model
+ const processor = buildControlAdapterProcessor(modelConfig);
+ if (processor.processorType !== cn.processorNode.type) {
+ // If the processor type has changed, update the processor node
+ update.changes.shouldAutoConfig = true;
+ update.changes.processedControlImage = null;
+ update.changes.processorType = processor.processorType;
+ update.changes.processorNode = processor.processorNode;
+
+ if (cn.controlImageDimensions) {
+ const minDim = Math.min(cn.controlImageDimensions.width, cn.controlImageDimensions.height);
+ if ('detect_resolution' in update.changes.processorNode) {
+ update.changes.processorNode.detect_resolution = minDim;
+ }
+ if ('image_resolution' in update.changes.processorNode) {
+ update.changes.processorNode.image_resolution = minDim;
+ }
+ if ('resolution' in update.changes.processorNode) {
+ update.changes.processorNode.resolution = minDim;
+ }
+ }
+ }
caAdapter.updateOne(state, update);
},
controlAdapterWeightChanged: (state, action: PayloadAction<{ id: string; weight: number }>) => {
@@ -340,8 +394,23 @@ export const controlAdaptersSlice = createSlice({
if (update.changes.shouldAutoConfig && modelConfig) {
const processor = buildControlAdapterProcessor(modelConfig);
- update.changes.processorType = processor.processorType;
- update.changes.processorNode = processor.processorNode;
+ if (processor.processorType !== cn.processorNode.type) {
+ update.changes.processorType = processor.processorType;
+ update.changes.processorNode = processor.processorNode;
+ // Copy image resolution settings, urgh
+ if (cn.controlImageDimensions) {
+ const minDim = Math.min(cn.controlImageDimensions.width, cn.controlImageDimensions.height);
+ if ('detect_resolution' in update.changes.processorNode) {
+ update.changes.processorNode.detect_resolution = minDim;
+ }
+ if ('image_resolution' in update.changes.processorNode) {
+ update.changes.processorNode.image_resolution = minDim;
+ }
+ if ('resolution' in update.changes.processorNode) {
+ update.changes.processorNode.resolution = minDim;
+ }
+ }
+ }
}
caAdapter.updateOne(state, update);
diff --git a/invokeai/frontend/web/src/features/controlAdapters/store/types.ts b/invokeai/frontend/web/src/features/controlAdapters/store/types.ts
index 7e2f18af5c..80af59cd01 100644
--- a/invokeai/frontend/web/src/features/controlAdapters/store/types.ts
+++ b/invokeai/frontend/web/src/features/controlAdapters/store/types.ts
@@ -225,7 +225,9 @@ export type ControlNetConfig = {
controlMode: ControlMode;
resizeMode: ResizeMode;
controlImage: string | null;
+ controlImageDimensions: { width: number; height: number } | null;
processedControlImage: string | null;
+ processedControlImageDimensions: { width: number; height: number } | null;
processorType: ControlAdapterProcessorType;
processorNode: RequiredControlAdapterProcessorNode;
shouldAutoConfig: boolean;
@@ -241,7 +243,9 @@ export type T2IAdapterConfig = {
endStepPct: number;
resizeMode: ResizeMode;
controlImage: string | null;
+ controlImageDimensions: { width: number; height: number } | null;
processedControlImage: string | null;
+ processedControlImageDimensions: { width: number; height: number } | null;
processorType: ControlAdapterProcessorType;
processorNode: RequiredControlAdapterProcessorNode;
shouldAutoConfig: boolean;
diff --git a/invokeai/frontend/web/src/features/controlAdapters/util/buildControlAdapter.ts b/invokeai/frontend/web/src/features/controlAdapters/util/buildControlAdapter.ts
index ad7bdba363..7c9c28e2b3 100644
--- a/invokeai/frontend/web/src/features/controlAdapters/util/buildControlAdapter.ts
+++ b/invokeai/frontend/web/src/features/controlAdapters/util/buildControlAdapter.ts
@@ -20,7 +20,9 @@ export const initialControlNet: Omit<ControlNetConfig, 'id'> = {
controlMode: 'balanced',
resizeMode: 'just_resize',
controlImage: null,
+ controlImageDimensions: null,
processedControlImage: null,
+ processedControlImageDimensions: null,
processorType: 'canny_image_processor',
processorNode: CONTROLNET_PROCESSORS.canny_image_processor.buildDefaults() as RequiredCannyImageProcessorInvocation,
shouldAutoConfig: true,
@@ -35,7 +37,9 @@ export const initialT2IAdapter: Omit<T2IAdapterConfig, 'id'> = {
endStepPct: 1,
resizeMode: 'just_resize',
controlImage: null,
+ controlImageDimensions: null,
processedControlImage: null,
+ processedControlImageDimensions: null,
processorType: 'canny_image_processor',
processorNode: CONTROLNET_PROCESSORS.canny_image_processor.buildDefaults() as RequiredCannyImageProcessorInvocation,
shouldAutoConfig: true,
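The hunks above add two nullable dimension fields to both adapter configs and default them to `null` in the builders. A minimal sketch of the resulting invariant (field names come from the diff; the builder function itself is hypothetical): a freshly built adapter has no image yet, so both dimension fields start as `null` and are only populated once an image is set.

```typescript
type Dims = { width: number; height: number } | null;

// Hypothetical subset of ControlNetConfig / T2IAdapterConfig covering only
// the image fields touched by this diff.
type AdapterImageState = {
  controlImage: string | null;
  controlImageDimensions: Dims;
  processedControlImage: string | null;
  processedControlImageDimensions: Dims;
};

// Mirrors the defaults in buildControlAdapter.ts: no image, no dimensions.
const buildInitialImageState = (): AdapterImageState => ({
  controlImage: null,
  controlImageDimensions: null,
  processedControlImage: null,
  processedControlImageDimensions: null,
});
```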
diff --git a/invokeai/frontend/web/src/features/controlLayers/components/AddLayerButton.tsx b/invokeai/frontend/web/src/features/controlLayers/components/AddLayerButton.tsx
new file mode 100644
index 0000000000..b521153239
--- /dev/null
+++ b/invokeai/frontend/web/src/features/controlLayers/components/AddLayerButton.tsx
@@ -0,0 +1,41 @@
+import { Button, Menu, MenuButton, MenuItem, MenuList } from '@invoke-ai/ui-library';
+import { guidanceLayerAdded } from 'app/store/middleware/listenerMiddleware/listeners/controlLayersToControlAdapterBridge';
+import { useAppDispatch } from 'app/store/storeHooks';
+import { memo, useCallback } from 'react';
+import { useTranslation } from 'react-i18next';
+import { PiPlusBold } from 'react-icons/pi';
+
+export const AddLayerButton = memo(() => {
+ const { t } = useTranslation();
+ const dispatch = useAppDispatch();
+ const addRegionalGuidanceLayer = useCallback(() => {
+ dispatch(guidanceLayerAdded('regional_guidance_layer'));
+ }, [dispatch]);
+ const addControlAdapterLayer = useCallback(() => {
+ dispatch(guidanceLayerAdded('control_adapter_layer'));
+ }, [dispatch]);
+ const addIPAdapterLayer = useCallback(() => {
+ dispatch(guidanceLayerAdded('ip_adapter_layer'));
+ }, [dispatch]);
+
+  return (
+    <Menu>
+      <MenuButton as={Button} leftIcon={<PiPlusBold />} variant="ghost">
+        {t('controlLayers.addLayer')}
+      </MenuButton>
+      <MenuList>
+        <MenuItem icon={<PiPlusBold />} onClick={addRegionalGuidanceLayer}>
+          {t('controlLayers.regionalGuidanceLayer')}
+        </MenuItem>
+        <MenuItem icon={<PiPlusBold />} onClick={addControlAdapterLayer}>
+          {t('controlLayers.globalControlAdapterLayer')}
+        </MenuItem>
+        <MenuItem icon={<PiPlusBold />} onClick={addIPAdapterLayer}>
+          {t('controlLayers.globalIPAdapterLayer')}
+        </MenuItem>
+      </MenuList>
+    </Menu>
+  );
+});
+
+AddLayerButton.displayName = 'AddLayerButton';
diff --git a/invokeai/frontend/web/src/features/controlLayers/components/AddPromptButtons.tsx b/invokeai/frontend/web/src/features/controlLayers/components/AddPromptButtons.tsx
new file mode 100644
index 0000000000..88eac207b2
--- /dev/null
+++ b/invokeai/frontend/web/src/features/controlLayers/components/AddPromptButtons.tsx
@@ -0,0 +1,70 @@
+import { Button, Flex } from '@invoke-ai/ui-library';
+import { createMemoizedSelector } from 'app/store/createMemoizedSelector';
+import { guidanceLayerIPAdapterAdded } from 'app/store/middleware/listenerMiddleware/listeners/controlLayersToControlAdapterBridge';
+import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
+import {
+ isRegionalGuidanceLayer,
+ maskLayerNegativePromptChanged,
+ maskLayerPositivePromptChanged,
+ selectControlLayersSlice,
+} from 'features/controlLayers/store/controlLayersSlice';
+import { useCallback, useMemo } from 'react';
+import { useTranslation } from 'react-i18next';
+import { PiPlusBold } from 'react-icons/pi';
+import { assert } from 'tsafe';
+type AddPromptButtonProps = {
+ layerId: string;
+};
+
+export const AddPromptButtons = ({ layerId }: AddPromptButtonProps) => {
+ const { t } = useTranslation();
+ const dispatch = useAppDispatch();
+ const selectValidActions = useMemo(
+ () =>
+ createMemoizedSelector(selectControlLayersSlice, (controlLayers) => {
+ const layer = controlLayers.present.layers.find((l) => l.id === layerId);
+ assert(isRegionalGuidanceLayer(layer), `Layer ${layerId} not found or not an RP layer`);
+ return {
+ canAddPositivePrompt: layer.positivePrompt === null,
+ canAddNegativePrompt: layer.negativePrompt === null,
+ };
+ }),
+ [layerId]
+ );
+ const validActions = useAppSelector(selectValidActions);
+ const addPositivePrompt = useCallback(() => {
+ dispatch(maskLayerPositivePromptChanged({ layerId, prompt: '' }));
+ }, [dispatch, layerId]);
+ const addNegativePrompt = useCallback(() => {
+ dispatch(maskLayerNegativePromptChanged({ layerId, prompt: '' }));
+ }, [dispatch, layerId]);
+ const addIPAdapter = useCallback(() => {
+ dispatch(guidanceLayerIPAdapterAdded(layerId));
+ }, [dispatch, layerId]);
+
+  return (
+    <Flex w="full" p={2} justifyContent="space-between">
+      <Button
+        size="sm"
+        variant="ghost"
+        leftIcon={<PiPlusBold />}
+        onClick={addPositivePrompt}
+        isDisabled={!validActions.canAddPositivePrompt}
+      >
+        {t('common.positivePrompt')}
+      </Button>
+      <Button
+        size="sm"
+        variant="ghost"
+        leftIcon={<PiPlusBold />}
+        onClick={addNegativePrompt}
+        isDisabled={!validActions.canAddNegativePrompt}
+      >
+        {t('common.negativePrompt')}
+      </Button>
+      <Button size="sm" variant="ghost" leftIcon={<PiPlusBold />} onClick={addIPAdapter}>
+        {t('common.ipAdapter')}
+      </Button>
+    </Flex>
+  );
+};
diff --git a/invokeai/frontend/web/src/features/controlLayers/components/BrushSize.tsx b/invokeai/frontend/web/src/features/controlLayers/components/BrushSize.tsx
new file mode 100644
index 0000000000..a34250c29f
--- /dev/null
+++ b/invokeai/frontend/web/src/features/controlLayers/components/BrushSize.tsx
@@ -0,0 +1,63 @@
+import {
+ CompositeNumberInput,
+ CompositeSlider,
+ FormControl,
+ FormLabel,
+ Popover,
+ PopoverArrow,
+ PopoverBody,
+ PopoverContent,
+ PopoverTrigger,
+} from '@invoke-ai/ui-library';
+import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
+import { brushSizeChanged, initialControlLayersState } from 'features/controlLayers/store/controlLayersSlice';
+import { memo, useCallback } from 'react';
+import { useTranslation } from 'react-i18next';
+
+const marks = [0, 100, 200, 300];
+const formatPx = (v: number | string) => `${v} px`;
+
+export const BrushSize = memo(() => {
+ const dispatch = useAppDispatch();
+ const { t } = useTranslation();
+ const brushSize = useAppSelector((s) => s.controlLayers.present.brushSize);
+ const onChange = useCallback(
+ (v: number) => {
+ dispatch(brushSizeChanged(Math.round(v)));
+ },
+ [dispatch]
+ );
+  return (
+    <FormControl w="min-content" gap={2}>
+      <FormLabel m={0}>{t('controlLayers.brushSize')}</FormLabel>
+      <Popover isLazy>
+        <PopoverTrigger>
+          <CompositeNumberInput
+            min={1}
+            max={300}
+            defaultValue={initialControlLayersState.brushSize}
+            value={brushSize}
+            onChange={onChange}
+            w={24}
+            format={formatPx}
+          />
+        </PopoverTrigger>
+        <PopoverContent>
+          <PopoverArrow />
+          <PopoverBody>
+            <CompositeSlider
+              min={1}
+              max={300}
+              defaultValue={initialControlLayersState.brushSize}
+              value={brushSize}
+              onChange={onChange}
+              marks={marks}
+            />
+          </PopoverBody>
+        </PopoverContent>
+      </Popover>
+    </FormControl>
+  );
+});
+
+BrushSize.displayName = 'BrushSize';
diff --git a/invokeai/frontend/web/src/features/controlLayers/components/CALayerListItem.tsx b/invokeai/frontend/web/src/features/controlLayers/components/CALayerListItem.tsx
new file mode 100644
index 0000000000..f97546c4fe
--- /dev/null
+++ b/invokeai/frontend/web/src/features/controlLayers/components/CALayerListItem.tsx
@@ -0,0 +1,71 @@
+import { Flex, Spacer, useDisclosure } from '@invoke-ai/ui-library';
+import { createMemoizedSelector } from 'app/store/createMemoizedSelector';
+import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
+import CALayerOpacity from 'features/controlLayers/components/CALayerOpacity';
+import ControlAdapterLayerConfig from 'features/controlLayers/components/controlAdapterOverrides/ControlAdapterLayerConfig';
+import { LayerDeleteButton } from 'features/controlLayers/components/LayerDeleteButton';
+import { LayerMenu } from 'features/controlLayers/components/LayerMenu';
+import { LayerTitle } from 'features/controlLayers/components/LayerTitle';
+import { LayerVisibilityToggle } from 'features/controlLayers/components/LayerVisibilityToggle';
+import {
+ isControlAdapterLayer,
+ layerSelected,
+ selectControlLayersSlice,
+} from 'features/controlLayers/store/controlLayersSlice';
+import { memo, useCallback, useMemo } from 'react';
+import { assert } from 'tsafe';
+
+type Props = {
+ layerId: string;
+};
+
+export const CALayerListItem = memo(({ layerId }: Props) => {
+ const dispatch = useAppDispatch();
+ const selector = useMemo(
+ () =>
+ createMemoizedSelector(selectControlLayersSlice, (controlLayers) => {
+ const layer = controlLayers.present.layers.find((l) => l.id === layerId);
+ assert(isControlAdapterLayer(layer), `Layer ${layerId} not found or not a ControlNet layer`);
+ return {
+ controlNetId: layer.controlNetId,
+ isSelected: layerId === controlLayers.present.selectedLayerId,
+ };
+ }),
+ [layerId]
+ );
+ const { controlNetId, isSelected } = useAppSelector(selector);
+ const onClickCapture = useCallback(() => {
+ // Must be capture so that the layer is selected before deleting/resetting/etc
+ dispatch(layerSelected(layerId));
+ }, [dispatch, layerId]);
+ const { isOpen, onToggle } = useDisclosure({ defaultIsOpen: true });
+
+  return (
+    <Flex
+      gap={2}
+      onClickCapture={onClickCapture}
+      bg={isSelected ? 'base.400' : 'base.800'}
+      px={2}
+      borderRadius="base"
+      py="1px"
+    >
+      <Flex flexDir="column" w="full" bg="base.850" borderRadius="base">
+        <Flex gap={3} alignItems="center" p={3} cursor="pointer" onDoubleClick={onToggle}>
+          <LayerVisibilityToggle layerId={layerId} />
+          <LayerTitle type="control_adapter_layer" />
+          <Spacer />
+          <CALayerOpacity layerId={layerId} />
+          <LayerMenu layerId={layerId} />
+          <LayerDeleteButton layerId={layerId} />
+        </Flex>
+        {isOpen && (
+          <Flex flexDir="column" gap={3} px={3} pb={3}>
+            <ControlAdapterLayerConfig id={controlNetId} />
+          </Flex>
+        )}
+      </Flex>
+    </Flex>
+  );
+});
+
+CALayerListItem.displayName = 'CALayerListItem';
diff --git a/invokeai/frontend/web/src/features/controlLayers/components/CALayerOpacity.tsx b/invokeai/frontend/web/src/features/controlLayers/components/CALayerOpacity.tsx
new file mode 100644
index 0000000000..a6107da1ec
--- /dev/null
+++ b/invokeai/frontend/web/src/features/controlLayers/components/CALayerOpacity.tsx
@@ -0,0 +1,98 @@
+import {
+ CompositeNumberInput,
+ CompositeSlider,
+ Flex,
+ FormControl,
+ FormLabel,
+ IconButton,
+ Popover,
+ PopoverArrow,
+ PopoverBody,
+ PopoverContent,
+ PopoverTrigger,
+ Switch,
+} from '@invoke-ai/ui-library';
+import { useAppDispatch } from 'app/store/storeHooks';
+import { stopPropagation } from 'common/util/stopPropagation';
+import { useLayerOpacity } from 'features/controlLayers/hooks/layerStateHooks';
+import { isFilterEnabledChanged, layerOpacityChanged } from 'features/controlLayers/store/controlLayersSlice';
+import type { ChangeEvent } from 'react';
+import { memo, useCallback } from 'react';
+import { useTranslation } from 'react-i18next';
+import { PiDropHalfFill } from 'react-icons/pi';
+
+type Props = {
+ layerId: string;
+};
+
+const marks = [0, 25, 50, 75, 100];
+const formatPct = (v: number | string) => `${v} %`;
+
+const CALayerOpacity = ({ layerId }: Props) => {
+ const { t } = useTranslation();
+ const dispatch = useAppDispatch();
+ const { opacity, isFilterEnabled } = useLayerOpacity(layerId);
+ const onChangeOpacity = useCallback(
+ (v: number) => {
+ dispatch(layerOpacityChanged({ layerId, opacity: v / 100 }));
+ },
+ [dispatch, layerId]
+ );
+ const onChangeFilter = useCallback(
+    (e: ChangeEvent<HTMLInputElement>) => {
+ dispatch(isFilterEnabledChanged({ layerId, isFilterEnabled: e.target.checked }));
+ },
+ [dispatch, layerId]
+ );
+  return (
+    <Popover isLazy>
+      <PopoverTrigger>
+        <IconButton
+          aria-label={t('controlLayers.opacity')}
+          size="sm"
+          icon={<PiDropHalfFill size={16} />}
+          variant="ghost"
+          onDoubleClick={stopPropagation}
+        />
+      </PopoverTrigger>
+      <PopoverContent onDoubleClick={stopPropagation}>
+        <PopoverArrow />
+        <PopoverBody>
+          <Flex direction="column" gap={2}>
+            <FormControl orientation="horizontal">
+              <FormLabel m={0}>{t('controlLayers.opacityFilter')}</FormLabel>
+              <Switch isChecked={isFilterEnabled} onChange={onChangeFilter} />
+            </FormControl>
+            <FormControl orientation="horizontal">
+              <FormLabel m={0}>{t('controlLayers.opacity')}</FormLabel>
+              <CompositeSlider
+                min={0}
+                max={100}
+                step={1}
+                value={opacity}
+                defaultValue={100}
+                onChange={onChangeOpacity}
+                marks={marks}
+                w={48}
+              />
+              <CompositeNumberInput
+                min={0}
+                max={100}
+                step={1}
+                value={opacity}
+                defaultValue={100}
+                onChange={onChangeOpacity}
+                w={24}
+                format={formatPct}
+              />
+            </FormControl>
+          </Flex>
+        </PopoverBody>
+      </PopoverContent>
+    </Popover>
+  );
+};
+
+export default memo(CALayerOpacity);
diff --git a/invokeai/frontend/web/src/features/controlLayers/components/ControlLayersEditor.stories.tsx b/invokeai/frontend/web/src/features/controlLayers/components/ControlLayersEditor.stories.tsx
new file mode 100644
index 0000000000..c0fa306c6b
--- /dev/null
+++ b/invokeai/frontend/web/src/features/controlLayers/components/ControlLayersEditor.stories.tsx
@@ -0,0 +1,24 @@
+import { Flex } from '@invoke-ai/ui-library';
+import type { Meta, StoryObj } from '@storybook/react';
+import { ControlLayersEditor } from 'features/controlLayers/components/ControlLayersEditor';
+
+const meta: Meta<typeof ControlLayersEditor> = {
+ title: 'Feature/ControlLayers',
+ tags: ['autodocs'],
+ component: ControlLayersEditor,
+};
+
+export default meta;
+type Story = StoryObj<typeof ControlLayersEditor>;
+
+const Component = () => {
+  return (
+    <Flex w="full" h="full">
+      <ControlLayersEditor />
+    </Flex>
+  );
+};
+
+export const Default: Story = {
+ render: Component,
+};
diff --git a/invokeai/frontend/web/src/features/controlLayers/components/ControlLayersEditor.tsx b/invokeai/frontend/web/src/features/controlLayers/components/ControlLayersEditor.tsx
new file mode 100644
index 0000000000..e9275426fe
--- /dev/null
+++ b/invokeai/frontend/web/src/features/controlLayers/components/ControlLayersEditor.tsx
@@ -0,0 +1,24 @@
+/* eslint-disable i18next/no-literal-string */
+import { Flex } from '@invoke-ai/ui-library';
+import { ControlLayersToolbar } from 'features/controlLayers/components/ControlLayersToolbar';
+import { StageComponent } from 'features/controlLayers/components/StageComponent';
+import { memo } from 'react';
+
+export const ControlLayersEditor = memo(() => {
+  return (
+    <Flex
+      p={2}
+      flexDirection="column"
+      w="full"
+      h="full"
+      alignItems="center"
+      justifyContent="center"
+      gap={2}
+    >
+      <ControlLayersToolbar />
+      <StageComponent />
+    </Flex>
+  );
+});
+
+ControlLayersEditor.displayName = 'ControlLayersEditor';
diff --git a/invokeai/frontend/web/src/features/controlLayers/components/ControlLayersPanelContent.tsx b/invokeai/frontend/web/src/features/controlLayers/components/ControlLayersPanelContent.tsx
new file mode 100644
index 0000000000..e2865be356
--- /dev/null
+++ b/invokeai/frontend/web/src/features/controlLayers/components/ControlLayersPanelContent.tsx
@@ -0,0 +1,59 @@
+/* eslint-disable i18next/no-literal-string */
+import { Flex } from '@invoke-ai/ui-library';
+import { createMemoizedSelector } from 'app/store/createMemoizedSelector';
+import { useAppSelector } from 'app/store/storeHooks';
+import ScrollableContent from 'common/components/OverlayScrollbars/ScrollableContent';
+import { AddLayerButton } from 'features/controlLayers/components/AddLayerButton';
+import { CALayerListItem } from 'features/controlLayers/components/CALayerListItem';
+import { DeleteAllLayersButton } from 'features/controlLayers/components/DeleteAllLayersButton';
+import { IPLayerListItem } from 'features/controlLayers/components/IPLayerListItem';
+import { RGLayerListItem } from 'features/controlLayers/components/RGLayerListItem';
+import { isRenderableLayer, selectControlLayersSlice } from 'features/controlLayers/store/controlLayersSlice';
+import type { Layer } from 'features/controlLayers/store/types';
+import { partition } from 'lodash-es';
+import { memo } from 'react';
+
+const selectLayerIdTypePairs = createMemoizedSelector(selectControlLayersSlice, (controlLayers) => {
+ const [renderableLayers, ipAdapterLayers] = partition(controlLayers.present.layers, isRenderableLayer);
+ return [...ipAdapterLayers, ...renderableLayers].map((l) => ({ id: l.id, type: l.type })).reverse();
+});
+
+export const ControlLayersPanelContent = memo(() => {
+ const layerIdTypePairs = useAppSelector(selectLayerIdTypePairs);
+  return (
+    <Flex flexDir="column" gap={4} w="full" h="full">
+      <Flex justifyContent="space-around">
+        <AddLayerButton />
+        <DeleteAllLayersButton />
+      </Flex>
+      <ScrollableContent>
+        <Flex flexDir="column" gap={4}>
+          {layerIdTypePairs.map(({ id, type }) => (
+            <LayerWrapper key={id} id={id} type={type} />
+          ))}
+        </Flex>
+      </ScrollableContent>
+    </Flex>
+  );
+});
+
+ControlLayersPanelContent.displayName = 'ControlLayersPanelContent';
+
+type LayerWrapperProps = {
+ id: string;
+ type: Layer['type'];
+};
+
+const LayerWrapper = memo(({ id, type }: LayerWrapperProps) => {
+  if (type === 'regional_guidance_layer') {
+    return <RGLayerListItem layerId={id} />;
+  }
+  if (type === 'control_adapter_layer') {
+    return <CALayerListItem layerId={id} />;
+  }
+  if (type === 'ip_adapter_layer') {
+    return <IPLayerListItem layerId={id} />;
+  }
+});
+
+LayerWrapper.displayName = 'LayerWrapper';
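The `selectLayerIdTypePairs` selector above groups non-renderable IP adapter layers first, appends the renderable layers, then reverses the whole list so the topmost layer appears first in the panel. A dependency-free sketch of that ordering (using two `filter` passes in place of lodash `partition`; `orderForPanel` and `MiniLayer` are hypothetical names for illustration):

```typescript
type MiniLayer = { id: string; type: string };

// Mirrors isRenderableLayer: IP adapter layers have no canvas representation.
const isRenderable = (l: MiniLayer) => l.type !== 'ip_adapter_layer';

const orderForPanel = (layers: MiniLayer[]): string[] => {
  const renderable = layers.filter(isRenderable);
  const ipAdapters = layers.filter((l) => !isRenderable(l));
  // Same effect as lodash `partition` followed by concat + reverse.
  return [...ipAdapters, ...renderable].map((l) => l.id).reverse();
};
```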
diff --git a/invokeai/frontend/web/src/features/controlLayers/components/ControlLayersSettingsPopover.tsx b/invokeai/frontend/web/src/features/controlLayers/components/ControlLayersSettingsPopover.tsx
new file mode 100644
index 0000000000..89032b7c76
--- /dev/null
+++ b/invokeai/frontend/web/src/features/controlLayers/components/ControlLayersSettingsPopover.tsx
@@ -0,0 +1,26 @@
+import { Flex, IconButton, Popover, PopoverBody, PopoverContent, PopoverTrigger } from '@invoke-ai/ui-library';
+import { GlobalMaskLayerOpacity } from 'features/controlLayers/components/GlobalMaskLayerOpacity';
+import { memo } from 'react';
+import { useTranslation } from 'react-i18next';
+import { RiSettings4Fill } from 'react-icons/ri';
+
+const ControlLayersSettingsPopover = () => {
+ const { t } = useTranslation();
+
+  return (
+    <Popover isLazy>
+      <PopoverTrigger>
+        <IconButton aria-label={t('common.settingsLabel')} icon={<RiSettings4Fill />} />
+      </PopoverTrigger>
+      <PopoverContent>
+        <PopoverBody>
+          <Flex direction="column" gap={2}>
+            <GlobalMaskLayerOpacity />
+          </Flex>
+        </PopoverBody>
+      </PopoverContent>
+    </Popover>
+  );
+};
+
+export default memo(ControlLayersSettingsPopover);
diff --git a/invokeai/frontend/web/src/features/controlLayers/components/ControlLayersToolbar.tsx b/invokeai/frontend/web/src/features/controlLayers/components/ControlLayersToolbar.tsx
new file mode 100644
index 0000000000..15a74a332a
--- /dev/null
+++ b/invokeai/frontend/web/src/features/controlLayers/components/ControlLayersToolbar.tsx
@@ -0,0 +1,20 @@
+/* eslint-disable i18next/no-literal-string */
+import { Flex } from '@invoke-ai/ui-library';
+import { BrushSize } from 'features/controlLayers/components/BrushSize';
+import ControlLayersSettingsPopover from 'features/controlLayers/components/ControlLayersSettingsPopover';
+import { ToolChooser } from 'features/controlLayers/components/ToolChooser';
+import { UndoRedoButtonGroup } from 'features/controlLayers/components/UndoRedoButtonGroup';
+import { memo } from 'react';
+
+export const ControlLayersToolbar = memo(() => {
+  return (
+    <Flex w="full" gap={2} alignItems="center">
+      <BrushSize />
+      <ToolChooser />
+      <UndoRedoButtonGroup />
+      <ControlLayersSettingsPopover />
+    </Flex>
+  );
+});
+
+ControlLayersToolbar.displayName = 'ControlLayersToolbar';
diff --git a/invokeai/frontend/web/src/features/controlLayers/components/DeleteAllLayersButton.tsx b/invokeai/frontend/web/src/features/controlLayers/components/DeleteAllLayersButton.tsx
new file mode 100644
index 0000000000..c55864afa5
--- /dev/null
+++ b/invokeai/frontend/web/src/features/controlLayers/components/DeleteAllLayersButton.tsx
@@ -0,0 +1,22 @@
+import { Button } from '@invoke-ai/ui-library';
+import { allLayersDeleted } from 'app/store/middleware/listenerMiddleware/listeners/controlLayersToControlAdapterBridge';
+import { useAppDispatch } from 'app/store/storeHooks';
+import { memo, useCallback } from 'react';
+import { useTranslation } from 'react-i18next';
+import { PiTrashSimpleBold } from 'react-icons/pi';
+
+export const DeleteAllLayersButton = memo(() => {
+ const { t } = useTranslation();
+ const dispatch = useAppDispatch();
+ const onClick = useCallback(() => {
+ dispatch(allLayersDeleted());
+ }, [dispatch]);
+
+  return (
+    <Button onClick={onClick} leftIcon={<PiTrashSimpleBold />} variant="ghost" colorScheme="error">
+      {t('controlLayers.deleteAll')}
+    </Button>
+  );
+});
+
+DeleteAllLayersButton.displayName = 'DeleteAllLayersButton';
diff --git a/invokeai/frontend/web/src/features/controlLayers/components/GlobalMaskLayerOpacity.tsx b/invokeai/frontend/web/src/features/controlLayers/components/GlobalMaskLayerOpacity.tsx
new file mode 100644
index 0000000000..40985499db
--- /dev/null
+++ b/invokeai/frontend/web/src/features/controlLayers/components/GlobalMaskLayerOpacity.tsx
@@ -0,0 +1,54 @@
+import { CompositeNumberInput, CompositeSlider, Flex, FormControl, FormLabel } from '@invoke-ai/ui-library';
+import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
+import {
+ globalMaskLayerOpacityChanged,
+ initialControlLayersState,
+} from 'features/controlLayers/store/controlLayersSlice';
+import { memo, useCallback } from 'react';
+import { useTranslation } from 'react-i18next';
+
+const marks = [0, 25, 50, 75, 100];
+const formatPct = (v: number | string) => `${v} %`;
+
+export const GlobalMaskLayerOpacity = memo(() => {
+ const dispatch = useAppDispatch();
+ const { t } = useTranslation();
+ const globalMaskLayerOpacity = useAppSelector((s) =>
+ Math.round(s.controlLayers.present.globalMaskLayerOpacity * 100)
+ );
+ const onChange = useCallback(
+ (v: number) => {
+ dispatch(globalMaskLayerOpacityChanged(v / 100));
+ },
+ [dispatch]
+ );
+  return (
+    <FormControl orientation="vertical">
+      <FormLabel m={0}>{t('controlLayers.globalMaskOpacity')}</FormLabel>
+      <Flex gap={4}>
+        <CompositeSlider
+          min={0}
+          max={100}
+          step={1}
+          value={globalMaskLayerOpacity}
+          defaultValue={initialControlLayersState.globalMaskLayerOpacity * 100}
+          onChange={onChange}
+          marks={marks}
+          w={48}
+        />
+        <CompositeNumberInput
+          min={0}
+          max={100}
+          step={1}
+          value={globalMaskLayerOpacity}
+          defaultValue={initialControlLayersState.globalMaskLayerOpacity * 100}
+          onChange={onChange}
+          w={24}
+          format={formatPct}
+        />
+      </Flex>
+    </FormControl>
+  );
+});
+
+GlobalMaskLayerOpacity.displayName = 'GlobalMaskLayerOpacity';
diff --git a/invokeai/frontend/web/src/features/controlLayers/components/IPLayerListItem.tsx b/invokeai/frontend/web/src/features/controlLayers/components/IPLayerListItem.tsx
new file mode 100644
index 0000000000..bdc54373a0
--- /dev/null
+++ b/invokeai/frontend/web/src/features/controlLayers/components/IPLayerListItem.tsx
@@ -0,0 +1,47 @@
+import { Flex, Spacer, useDisclosure } from '@invoke-ai/ui-library';
+import { createMemoizedSelector } from 'app/store/createMemoizedSelector';
+import { useAppSelector } from 'app/store/storeHooks';
+import ControlAdapterLayerConfig from 'features/controlLayers/components/controlAdapterOverrides/ControlAdapterLayerConfig';
+import { LayerDeleteButton } from 'features/controlLayers/components/LayerDeleteButton';
+import { LayerTitle } from 'features/controlLayers/components/LayerTitle';
+import { LayerVisibilityToggle } from 'features/controlLayers/components/LayerVisibilityToggle';
+import { isIPAdapterLayer, selectControlLayersSlice } from 'features/controlLayers/store/controlLayersSlice';
+import { memo, useMemo } from 'react';
+import { assert } from 'tsafe';
+
+type Props = {
+ layerId: string;
+};
+
+export const IPLayerListItem = memo(({ layerId }: Props) => {
+ const selector = useMemo(
+ () =>
+ createMemoizedSelector(selectControlLayersSlice, (controlLayers) => {
+ const layer = controlLayers.present.layers.find((l) => l.id === layerId);
+ assert(isIPAdapterLayer(layer), `Layer ${layerId} not found or not an IP Adapter layer`);
+ return layer.ipAdapterId;
+ }),
+ [layerId]
+ );
+ const ipAdapterId = useAppSelector(selector);
+ const { isOpen, onToggle } = useDisclosure({ defaultIsOpen: true });
+  return (
+    <Flex gap={2} bg="base.800" borderRadius="base" px={2} py="1px">
+      <Flex flexDir="column" w="full" bg="base.850" borderRadius="base">
+        <Flex gap={3} alignItems="center" p={3} cursor="pointer" onDoubleClick={onToggle}>
+          <LayerVisibilityToggle layerId={layerId} />
+          <LayerTitle type="ip_adapter_layer" />
+          <Spacer />
+          <LayerDeleteButton layerId={layerId} />
+        </Flex>
+        {isOpen && (
+          <Flex flexDir="column" gap={3} px={3} pb={3}>
+            <ControlAdapterLayerConfig id={ipAdapterId} />
+          </Flex>
+        )}
+      </Flex>
+    </Flex>
+  );
+});
+
+IPLayerListItem.displayName = 'IPLayerListItem';
diff --git a/invokeai/frontend/web/src/features/controlLayers/components/LayerDeleteButton.tsx b/invokeai/frontend/web/src/features/controlLayers/components/LayerDeleteButton.tsx
new file mode 100644
index 0000000000..0c74b2a9ea
--- /dev/null
+++ b/invokeai/frontend/web/src/features/controlLayers/components/LayerDeleteButton.tsx
@@ -0,0 +1,30 @@
+import { IconButton } from '@invoke-ai/ui-library';
+import { guidanceLayerDeleted } from 'app/store/middleware/listenerMiddleware/listeners/controlLayersToControlAdapterBridge';
+import { useAppDispatch } from 'app/store/storeHooks';
+import { stopPropagation } from 'common/util/stopPropagation';
+import { memo, useCallback } from 'react';
+import { useTranslation } from 'react-i18next';
+import { PiTrashSimpleBold } from 'react-icons/pi';
+
+type Props = { layerId: string };
+
+export const LayerDeleteButton = memo(({ layerId }: Props) => {
+ const { t } = useTranslation();
+ const dispatch = useAppDispatch();
+ const deleteLayer = useCallback(() => {
+ dispatch(guidanceLayerDeleted(layerId));
+ }, [dispatch, layerId]);
+  return (
+    <IconButton
+      size="sm"
+      colorScheme="error"
+      aria-label={t('common.delete')}
+      tooltip={t('common.delete')}
+      icon={<PiTrashSimpleBold />}
+      onClick={deleteLayer}
+      onDoubleClick={stopPropagation} // double click expands the layer
+    />
+  );
+});
+
+LayerDeleteButton.displayName = 'LayerDeleteButton';
diff --git a/invokeai/frontend/web/src/features/controlLayers/components/LayerMenu.tsx b/invokeai/frontend/web/src/features/controlLayers/components/LayerMenu.tsx
new file mode 100644
index 0000000000..e5c8cc0aac
--- /dev/null
+++ b/invokeai/frontend/web/src/features/controlLayers/components/LayerMenu.tsx
@@ -0,0 +1,59 @@
+import { IconButton, Menu, MenuButton, MenuDivider, MenuItem, MenuList } from '@invoke-ai/ui-library';
+import { useAppDispatch } from 'app/store/storeHooks';
+import { stopPropagation } from 'common/util/stopPropagation';
+import { LayerMenuArrangeActions } from 'features/controlLayers/components/LayerMenuArrangeActions';
+import { LayerMenuRGActions } from 'features/controlLayers/components/LayerMenuRGActions';
+import { useLayerType } from 'features/controlLayers/hooks/layerStateHooks';
+import { layerDeleted, layerReset } from 'features/controlLayers/store/controlLayersSlice';
+import { memo, useCallback } from 'react';
+import { useTranslation } from 'react-i18next';
+import { PiArrowCounterClockwiseBold, PiDotsThreeVerticalBold, PiTrashSimpleBold } from 'react-icons/pi';
+
+type Props = { layerId: string };
+
+export const LayerMenu = memo(({ layerId }: Props) => {
+ const dispatch = useAppDispatch();
+ const { t } = useTranslation();
+ const layerType = useLayerType(layerId);
+ const resetLayer = useCallback(() => {
+ dispatch(layerReset(layerId));
+ }, [dispatch, layerId]);
+ const deleteLayer = useCallback(() => {
+ dispatch(layerDeleted(layerId));
+ }, [dispatch, layerId]);
+  return (
+    <Menu>
+      <MenuButton
+        as={IconButton}
+        aria-label="Layer menu"
+        size="sm"
+        icon={<PiDotsThreeVerticalBold />}
+        onDoubleClick={stopPropagation} // double click expands the layer
+      />
+      <MenuList>
+        {layerType === 'regional_guidance_layer' && (
+          <>
+            <LayerMenuRGActions layerId={layerId} />
+            <MenuDivider />
+          </>
+        )}
+        {(layerType === 'regional_guidance_layer' || layerType === 'control_adapter_layer') && (
+          <>
+            <LayerMenuArrangeActions layerId={layerId} />
+            <MenuDivider />
+          </>
+        )}
+        {layerType === 'regional_guidance_layer' && (
+          <MenuItem onClick={resetLayer} icon={<PiArrowCounterClockwiseBold />}>
+            {t('accessibility.reset')}
+          </MenuItem>
+        )}
+        <MenuItem onClick={deleteLayer} icon={<PiTrashSimpleBold />} color="error.300">
+          {t('common.delete')}
+        </MenuItem>
+      </MenuList>
+    </Menu>
+  );
+});
+
+LayerMenu.displayName = 'LayerMenu';
diff --git a/invokeai/frontend/web/src/features/controlLayers/components/LayerMenuArrangeActions.tsx b/invokeai/frontend/web/src/features/controlLayers/components/LayerMenuArrangeActions.tsx
new file mode 100644
index 0000000000..9c51671a39
--- /dev/null
+++ b/invokeai/frontend/web/src/features/controlLayers/components/LayerMenuArrangeActions.tsx
@@ -0,0 +1,69 @@
+import { MenuItem } from '@invoke-ai/ui-library';
+import { createMemoizedSelector } from 'app/store/createMemoizedSelector';
+import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
+import {
+ isRenderableLayer,
+ layerMovedBackward,
+ layerMovedForward,
+ layerMovedToBack,
+ layerMovedToFront,
+ selectControlLayersSlice,
+} from 'features/controlLayers/store/controlLayersSlice';
+import { memo, useCallback, useMemo } from 'react';
+import { useTranslation } from 'react-i18next';
+import { PiArrowDownBold, PiArrowLineDownBold, PiArrowLineUpBold, PiArrowUpBold } from 'react-icons/pi';
+import { assert } from 'tsafe';
+
+type Props = { layerId: string };
+
+export const LayerMenuArrangeActions = memo(({ layerId }: Props) => {
+ const dispatch = useAppDispatch();
+ const { t } = useTranslation();
+ const selectValidActions = useMemo(
+ () =>
+ createMemoizedSelector(selectControlLayersSlice, (controlLayers) => {
+ const layer = controlLayers.present.layers.find((l) => l.id === layerId);
+ assert(isRenderableLayer(layer), `Layer ${layerId} not found or not an RP layer`);
+ const layerIndex = controlLayers.present.layers.findIndex((l) => l.id === layerId);
+ const layerCount = controlLayers.present.layers.length;
+ return {
+ canMoveForward: layerIndex < layerCount - 1,
+ canMoveBackward: layerIndex > 0,
+ canMoveToFront: layerIndex < layerCount - 1,
+ canMoveToBack: layerIndex > 0,
+ };
+ }),
+ [layerId]
+ );
+ const validActions = useAppSelector(selectValidActions);
+ const moveForward = useCallback(() => {
+ dispatch(layerMovedForward(layerId));
+ }, [dispatch, layerId]);
+ const moveToFront = useCallback(() => {
+ dispatch(layerMovedToFront(layerId));
+ }, [dispatch, layerId]);
+ const moveBackward = useCallback(() => {
+ dispatch(layerMovedBackward(layerId));
+ }, [dispatch, layerId]);
+ const moveToBack = useCallback(() => {
+ dispatch(layerMovedToBack(layerId));
+ }, [dispatch, layerId]);
+  return (
+    <>
+      <MenuItem onClick={moveToFront} isDisabled={!validActions.canMoveToFront} icon={<PiArrowLineUpBold />}>
+        {t('controlLayers.moveToFront')}
+      </MenuItem>
+      <MenuItem onClick={moveForward} isDisabled={!validActions.canMoveForward} icon={<PiArrowUpBold />}>
+        {t('controlLayers.moveForward')}
+      </MenuItem>
+      <MenuItem onClick={moveBackward} isDisabled={!validActions.canMoveBackward} icon={<PiArrowDownBold />}>
+        {t('controlLayers.moveBackward')}
+      </MenuItem>
+      <MenuItem onClick={moveToBack} isDisabled={!validActions.canMoveToBack} icon={<PiArrowLineDownBold />}>
+        {t('controlLayers.moveToBack')}
+      </MenuItem>
+    </>
+  );
+});
+
+LayerMenuArrangeActions.displayName = 'LayerMenuArrangeActions';
diff --git a/invokeai/frontend/web/src/features/controlLayers/components/LayerMenuRGActions.tsx b/invokeai/frontend/web/src/features/controlLayers/components/LayerMenuRGActions.tsx
new file mode 100644
index 0000000000..6c2bb4c26b
--- /dev/null
+++ b/invokeai/frontend/web/src/features/controlLayers/components/LayerMenuRGActions.tsx
@@ -0,0 +1,58 @@
+import { MenuItem } from '@invoke-ai/ui-library';
+import { createMemoizedSelector } from 'app/store/createMemoizedSelector';
+import { guidanceLayerIPAdapterAdded } from 'app/store/middleware/listenerMiddleware/listeners/controlLayersToControlAdapterBridge';
+import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
+import {
+ isRegionalGuidanceLayer,
+ maskLayerNegativePromptChanged,
+ maskLayerPositivePromptChanged,
+ selectControlLayersSlice,
+} from 'features/controlLayers/store/controlLayersSlice';
+import { memo, useCallback, useMemo } from 'react';
+import { useTranslation } from 'react-i18next';
+import { PiPlusBold } from 'react-icons/pi';
+import { assert } from 'tsafe';
+
+type Props = { layerId: string };
+
+export const LayerMenuRGActions = memo(({ layerId }: Props) => {
+ const dispatch = useAppDispatch();
+ const { t } = useTranslation();
+ const selectValidActions = useMemo(
+ () =>
+ createMemoizedSelector(selectControlLayersSlice, (controlLayers) => {
+ const layer = controlLayers.present.layers.find((l) => l.id === layerId);
+ assert(isRegionalGuidanceLayer(layer), `Layer ${layerId} not found or not an RP layer`);
+ return {
+ canAddPositivePrompt: layer.positivePrompt === null,
+ canAddNegativePrompt: layer.negativePrompt === null,
+ };
+ }),
+ [layerId]
+ );
+ const validActions = useAppSelector(selectValidActions);
+ const addPositivePrompt = useCallback(() => {
+ dispatch(maskLayerPositivePromptChanged({ layerId, prompt: '' }));
+ }, [dispatch, layerId]);
+ const addNegativePrompt = useCallback(() => {
+ dispatch(maskLayerNegativePromptChanged({ layerId, prompt: '' }));
+ }, [dispatch, layerId]);
+ const addIPAdapter = useCallback(() => {
+ dispatch(guidanceLayerIPAdapterAdded(layerId));
+ }, [dispatch, layerId]);
+  return (
+    <>
+      <MenuItem onClick={addPositivePrompt} isDisabled={!validActions.canAddPositivePrompt} icon={<PiPlusBold />}>
+        {t('controlLayers.addPositivePrompt')}
+      </MenuItem>
+      <MenuItem onClick={addNegativePrompt} isDisabled={!validActions.canAddNegativePrompt} icon={<PiPlusBold />}>
+        {t('controlLayers.addNegativePrompt')}
+      </MenuItem>
+      <MenuItem onClick={addIPAdapter} icon={<PiPlusBold />}>
+        {t('controlLayers.addIPAdapter')}
+      </MenuItem>
+    </>
+  );
+});
+
+LayerMenuRGActions.displayName = 'LayerMenuRGActions';
diff --git a/invokeai/frontend/web/src/features/controlLayers/components/LayerTitle.tsx b/invokeai/frontend/web/src/features/controlLayers/components/LayerTitle.tsx
new file mode 100644
index 0000000000..ec13ff7bcc
--- /dev/null
+++ b/invokeai/frontend/web/src/features/controlLayers/components/LayerTitle.tsx
@@ -0,0 +1,29 @@
+import { Text } from '@invoke-ai/ui-library';
+import type { Layer } from 'features/controlLayers/store/types';
+import { memo, useMemo } from 'react';
+import { useTranslation } from 'react-i18next';
+
+type Props = {
+ type: Layer['type'];
+};
+
+export const LayerTitle = memo(({ type }: Props) => {
+ const { t } = useTranslation();
+ const title = useMemo(() => {
+ if (type === 'regional_guidance_layer') {
+ return t('controlLayers.regionalGuidance');
+ } else if (type === 'control_adapter_layer') {
+ return t('controlLayers.globalControlAdapter');
+ } else if (type === 'ip_adapter_layer') {
+ return t('controlLayers.globalIPAdapter');
+ }
+ }, [t, type]);
+
+  return (
+    <Text size="sm" fontWeight="semibold" userSelect="none" color="base.300">
+      {title}
+    </Text>
+  );
+});
+
+LayerTitle.displayName = 'LayerTitle';
diff --git a/invokeai/frontend/web/src/features/controlLayers/components/LayerVisibilityToggle.tsx b/invokeai/frontend/web/src/features/controlLayers/components/LayerVisibilityToggle.tsx
new file mode 100644
index 0000000000..d2dab39e36
--- /dev/null
+++ b/invokeai/frontend/web/src/features/controlLayers/components/LayerVisibilityToggle.tsx
@@ -0,0 +1,36 @@
+import { IconButton } from '@invoke-ai/ui-library';
+import { useAppDispatch } from 'app/store/storeHooks';
+import { stopPropagation } from 'common/util/stopPropagation';
+import { useLayerIsVisible } from 'features/controlLayers/hooks/layerStateHooks';
+import { layerVisibilityToggled } from 'features/controlLayers/store/controlLayersSlice';
+import { memo, useCallback } from 'react';
+import { useTranslation } from 'react-i18next';
+import { PiCheckBold } from 'react-icons/pi';
+
+type Props = {
+ layerId: string;
+};
+
+export const LayerVisibilityToggle = memo(({ layerId }: Props) => {
+ const { t } = useTranslation();
+ const dispatch = useAppDispatch();
+ const isVisible = useLayerIsVisible(layerId);
+ const onClick = useCallback(() => {
+ dispatch(layerVisibilityToggled(layerId));
+ }, [dispatch, layerId]);
+
+  return (
+    <IconButton
+      size="sm"
+      aria-label={t('controlLayers.toggleVisibility')}
+      tooltip={t('controlLayers.toggleVisibility')}
+      variant={isVisible ? 'outline' : 'ghost'}
+      icon={isVisible ? <PiCheckBold /> : undefined}
+      onClick={onClick}
+      colorScheme="base"
+      onDoubleClick={stopPropagation} // double click expands the layer
+    />
+  );
+});
+
+LayerVisibilityToggle.displayName = 'LayerVisibilityToggle';
diff --git a/invokeai/frontend/web/src/features/controlLayers/components/RGLayerAutoNegativeCheckbox.tsx b/invokeai/frontend/web/src/features/controlLayers/components/RGLayerAutoNegativeCheckbox.tsx
new file mode 100644
index 0000000000..6f03d4b28d
--- /dev/null
+++ b/invokeai/frontend/web/src/features/controlLayers/components/RGLayerAutoNegativeCheckbox.tsx
@@ -0,0 +1,51 @@
+import { Checkbox, FormControl, FormLabel } from '@invoke-ai/ui-library';
+import { createSelector } from '@reduxjs/toolkit';
+import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
+import {
+ isRegionalGuidanceLayer,
+ maskLayerAutoNegativeChanged,
+ selectControlLayersSlice,
+} from 'features/controlLayers/store/controlLayersSlice';
+import type { ChangeEvent } from 'react';
+import { memo, useCallback, useMemo } from 'react';
+import { useTranslation } from 'react-i18next';
+import { assert } from 'tsafe';
+
+type Props = {
+ layerId: string;
+};
+
+const useAutoNegative = (layerId: string) => {
+ const selectAutoNegative = useMemo(
+ () =>
+ createSelector(selectControlLayersSlice, (controlLayers) => {
+ const layer = controlLayers.present.layers.find((l) => l.id === layerId);
+ assert(isRegionalGuidanceLayer(layer), `Layer ${layerId} not found or not an RP layer`);
+ return layer.autoNegative;
+ }),
+ [layerId]
+ );
+ const autoNegative = useAppSelector(selectAutoNegative);
+ return autoNegative;
+};
+
+export const RGLayerAutoNegativeCheckbox = memo(({ layerId }: Props) => {
+ const { t } = useTranslation();
+ const dispatch = useAppDispatch();
+ const autoNegative = useAutoNegative(layerId);
+ const onChange = useCallback(
+    (e: ChangeEvent<HTMLInputElement>) => {
+ dispatch(maskLayerAutoNegativeChanged({ layerId, autoNegative: e.target.checked ? 'invert' : 'off' }));
+ },
+ [dispatch, layerId]
+ );
+
+  return (
+    <FormControl w="full">
+      <FormLabel m={0} flexGrow={1}>
+        {t('controlLayers.autoNegative')}
+      </FormLabel>
+      <Checkbox isChecked={autoNegative === 'invert'} onChange={onChange} />
+    </FormControl>
+  );
+});
+
+RGLayerAutoNegativeCheckbox.displayName = 'RGLayerAutoNegativeCheckbox';
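The checkbox above maps a boolean change event onto the two-valued `autoNegative` setting it dispatches (`'invert'` when checked, `'off'` otherwise). That mapping, extracted as a pair of pure helpers:

```typescript
// The two-valued setting used by maskLayerAutoNegativeChanged in the diff above.
type AutoNegative = 'invert' | 'off';

// Checkbox checked-state -> setting value (as dispatched in onChange).
function autoNegativeFromChecked(checked: boolean): AutoNegative {
  return checked ? 'invert' : 'off';
}

// Setting value -> checkbox checked-state (as rendered via isChecked).
function checkedFromAutoNegative(autoNegative: AutoNegative): boolean {
  return autoNegative === 'invert';
}
```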
diff --git a/invokeai/frontend/web/src/features/controlLayers/components/RGLayerColorPicker.tsx b/invokeai/frontend/web/src/features/controlLayers/components/RGLayerColorPicker.tsx
new file mode 100644
index 0000000000..e76ab57a51
--- /dev/null
+++ b/invokeai/frontend/web/src/features/controlLayers/components/RGLayerColorPicker.tsx
@@ -0,0 +1,69 @@
+import { Flex, Popover, PopoverBody, PopoverContent, PopoverTrigger, Tooltip } from '@invoke-ai/ui-library';
+import { createMemoizedSelector } from 'app/store/createMemoizedSelector';
+import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
+import RgbColorPicker from 'common/components/RgbColorPicker';
+import { stopPropagation } from 'common/util/stopPropagation';
+import { rgbColorToString } from 'features/canvas/util/colorToString';
+import {
+ isRegionalGuidanceLayer,
+ maskLayerPreviewColorChanged,
+ selectControlLayersSlice,
+} from 'features/controlLayers/store/controlLayersSlice';
+import { memo, useCallback, useMemo } from 'react';
+import type { RgbColor } from 'react-colorful';
+import { useTranslation } from 'react-i18next';
+import { assert } from 'tsafe';
+
+type Props = {
+ layerId: string;
+};
+
+export const RGLayerColorPicker = memo(({ layerId }: Props) => {
+ const { t } = useTranslation();
+ const selectColor = useMemo(
+ () =>
+ createMemoizedSelector(selectControlLayersSlice, (controlLayers) => {
+ const layer = controlLayers.present.layers.find((l) => l.id === layerId);
+      assert(isRegionalGuidanceLayer(layer), `Layer ${layerId} not found or not a regional guidance layer`);
+ return layer.previewColor;
+ }),
+ [layerId]
+ );
+ const color = useAppSelector(selectColor);
+ const dispatch = useAppDispatch();
+ const onColorChange = useCallback(
+ (color: RgbColor) => {
+ dispatch(maskLayerPreviewColorChanged({ layerId, color }));
+ },
+ [dispatch, layerId]
+ );
+  return (
+    <Popover isLazy>
+      <PopoverTrigger>
+        <Tooltip label={t('controlLayers.maskPreviewColor')}>
+          <Flex
+            as="button"
+            aria-label={t('controlLayers.maskPreviewColor')}
+            borderRadius="full"
+            borderWidth={1}
+            bg={rgbColorToString(color)}
+            w={8}
+            h={8}
+            cursor="pointer"
+            onDoubleClick={stopPropagation} // double click expands the layer
+          />
+        </Tooltip>
+      </PopoverTrigger>
+      <PopoverContent>
+        <PopoverBody minH={64}>
+          <RgbColorPicker color={color} onChange={onColorChange} withNumberInput />
+        </PopoverBody>
+      </PopoverContent>
+    </Popover>
+  );
+});
+
+RGLayerColorPicker.displayName = 'RGLayerColorPicker';
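The color picker renders its swatch via `rgbColorToString` from `features/canvas`, which is not part of this diff. A plausible sketch of such a serializer — the name and exact output format here are assumptions, not the repo's implementation:

```typescript
// RgbColor shape matches react-colorful's RgbColor used by the picker above.
type RgbColor = { r: number; g: number; b: number };

// Hypothetical serializer: RGB components -> CSS color string.
function rgbColorToCss(color: RgbColor): string {
  return `rgb(${color.r}, ${color.g}, ${color.b})`;
}
```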
diff --git a/invokeai/frontend/web/src/features/controlLayers/components/RGLayerIPAdapterList.tsx b/invokeai/frontend/web/src/features/controlLayers/components/RGLayerIPAdapterList.tsx
new file mode 100644
index 0000000000..464bd41897
--- /dev/null
+++ b/invokeai/frontend/web/src/features/controlLayers/components/RGLayerIPAdapterList.tsx
@@ -0,0 +1,80 @@
+import { Divider, Flex, IconButton, Spacer, Text } from '@invoke-ai/ui-library';
+import { createMemoizedSelector } from 'app/store/createMemoizedSelector';
+import { guidanceLayerIPAdapterDeleted } from 'app/store/middleware/listenerMiddleware/listeners/controlLayersToControlAdapterBridge';
+import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
+import ControlAdapterLayerConfig from 'features/controlLayers/components/controlAdapterOverrides/ControlAdapterLayerConfig';
+import { isRegionalGuidanceLayer, selectControlLayersSlice } from 'features/controlLayers/store/controlLayersSlice';
+import { memo, useCallback, useMemo } from 'react';
+import { PiTrashSimpleBold } from 'react-icons/pi';
+import { assert } from 'tsafe';
+
+type Props = {
+ layerId: string;
+};
+
+export const RGLayerIPAdapterList = memo(({ layerId }: Props) => {
+ const selectIPAdapterIds = useMemo(
+ () =>
+ createMemoizedSelector(selectControlLayersSlice, (controlLayers) => {
+ const layer = controlLayers.present.layers.filter(isRegionalGuidanceLayer).find((l) => l.id === layerId);
+ assert(layer, `Layer ${layerId} not found`);
+ return layer.ipAdapterIds;
+ }),
+ [layerId]
+ );
+ const ipAdapterIds = useAppSelector(selectIPAdapterIds);
+
+ if (ipAdapterIds.length === 0) {
+ return null;
+ }
+
+  return (
+    <>
+      {ipAdapterIds.map((id, index) => (
+        <Flex flexDir="column" key={id}>
+          {index > 0 && (
+            <Flex pb={3}>
+              <Divider />
+            </Flex>
+          )}
+          <RGLayerIPAdapterListItem layerId={layerId} ipAdapterId={id} ipAdapterNumber={index + 1} />
+        </Flex>
+      ))}
+    </>
+  );
+});
+
+RGLayerIPAdapterList.displayName = 'RGLayerIPAdapterList';
+
+type IPAdapterListItemProps = {
+ layerId: string;
+ ipAdapterId: string;
+ ipAdapterNumber: number;
+};
+
+const RGLayerIPAdapterListItem = memo(({ layerId, ipAdapterId, ipAdapterNumber }: IPAdapterListItemProps) => {
+ const dispatch = useAppDispatch();
+ const onDeleteIPAdapter = useCallback(() => {
+ dispatch(guidanceLayerIPAdapterDeleted({ layerId, ipAdapterId }));
+ }, [dispatch, ipAdapterId, layerId]);
+
+  return (
+    <Flex flexDir="column" gap={3}>
+      <Flex alignItems="center" gap={3}>
+        <Text fontWeight="semibold">{`IP Adapter ${ipAdapterNumber}`}</Text>
+        <Spacer />
+        <IconButton
+          size="sm"
+          icon={<PiTrashSimpleBold />}
+          aria-label="Delete IP Adapter"
+          onClick={onDeleteIPAdapter}
+          variant="ghost"
+          colorScheme="error"
+        />
+      </Flex>
+      <ControlAdapterLayerConfig id={ipAdapterId} />
+    </Flex>
+  );
+});
+
+RGLayerIPAdapterListItem.displayName = 'RGLayerIPAdapterListItem';
diff --git a/invokeai/frontend/web/src/features/controlLayers/components/RGLayerListItem.tsx b/invokeai/frontend/web/src/features/controlLayers/components/RGLayerListItem.tsx
new file mode 100644
index 0000000000..3c126cabaa
--- /dev/null
+++ b/invokeai/frontend/web/src/features/controlLayers/components/RGLayerListItem.tsx
@@ -0,0 +1,84 @@
+import { Badge, Flex, Spacer, useDisclosure } from '@invoke-ai/ui-library';
+import { createMemoizedSelector } from 'app/store/createMemoizedSelector';
+import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
+import { rgbColorToString } from 'features/canvas/util/colorToString';
+import { LayerDeleteButton } from 'features/controlLayers/components/LayerDeleteButton';
+import { LayerMenu } from 'features/controlLayers/components/LayerMenu';
+import { LayerTitle } from 'features/controlLayers/components/LayerTitle';
+import { LayerVisibilityToggle } from 'features/controlLayers/components/LayerVisibilityToggle';
+import { RGLayerColorPicker } from 'features/controlLayers/components/RGLayerColorPicker';
+import { RGLayerIPAdapterList } from 'features/controlLayers/components/RGLayerIPAdapterList';
+import { RGLayerNegativePrompt } from 'features/controlLayers/components/RGLayerNegativePrompt';
+import { RGLayerPositivePrompt } from 'features/controlLayers/components/RGLayerPositivePrompt';
+import RGLayerSettingsPopover from 'features/controlLayers/components/RGLayerSettingsPopover';
+import {
+ isRegionalGuidanceLayer,
+ layerSelected,
+ selectControlLayersSlice,
+} from 'features/controlLayers/store/controlLayersSlice';
+import { memo, useCallback, useMemo } from 'react';
+import { useTranslation } from 'react-i18next';
+import { assert } from 'tsafe';
+
+import { AddPromptButtons } from './AddPromptButtons';
+
+type Props = {
+ layerId: string;
+};
+
+export const RGLayerListItem = memo(({ layerId }: Props) => {
+ const { t } = useTranslation();
+ const dispatch = useAppDispatch();
+ const selector = useMemo(
+ () =>
+ createMemoizedSelector(selectControlLayersSlice, (controlLayers) => {
+ const layer = controlLayers.present.layers.find((l) => l.id === layerId);
+ assert(isRegionalGuidanceLayer(layer), `Layer ${layerId} not found or not an RP layer`);
+ return {
+ color: rgbColorToString(layer.previewColor),
+ hasPositivePrompt: layer.positivePrompt !== null,
+ hasNegativePrompt: layer.negativePrompt !== null,
+ hasIPAdapters: layer.ipAdapterIds.length > 0,
+ isSelected: layerId === controlLayers.present.selectedLayerId,
+ autoNegative: layer.autoNegative,
+ };
+ }),
+ [layerId]
+ );
+ const { autoNegative, color, hasPositivePrompt, hasNegativePrompt, hasIPAdapters, isSelected } =
+ useAppSelector(selector);
+ const { isOpen, onToggle } = useDisclosure({ defaultIsOpen: true });
+ const onClick = useCallback(() => {
+ dispatch(layerSelected(layerId));
+ }, [dispatch, layerId]);
+  return (
+    <Flex gap={2} onClick={onClick} bg={isSelected ? color : 'base.800'} px={2} borderRadius="base" py="1px">
+      <Flex flexDir="column" w="full" bg="base.850" borderRadius="base">
+        <Flex gap={3} alignItems="center" p={3} cursor="pointer" onDoubleClick={onToggle}>
+          <LayerVisibilityToggle layerId={layerId} />
+          <LayerTitle type="regional_guidance_layer" />
+          <Spacer />
+          {autoNegative === 'invert' && (
+            <Badge color="base.300" bg="transparent" borderWidth={1} userSelect="none">
+              {t('controlLayers.autoNegative')}
+            </Badge>
+          )}
+          <RGLayerColorPicker layerId={layerId} />
+          <RGLayerSettingsPopover layerId={layerId} />
+          <LayerMenu layerId={layerId} />
+          <LayerDeleteButton layerId={layerId} />
+        </Flex>
+        {isOpen && (
+          <Flex flexDir="column" gap={3} px={3} pb={3}>
+            {!hasPositivePrompt && !hasNegativePrompt && !hasIPAdapters && <AddPromptButtons layerId={layerId} />}
+            {hasPositivePrompt && <RGLayerPositivePrompt layerId={layerId} />}
+            {hasNegativePrompt && <RGLayerNegativePrompt layerId={layerId} />}
+            {hasIPAdapters && <RGLayerIPAdapterList layerId={layerId} />}
+          </Flex>
+        )}
+      </Flex>
+    </Flex>
+  );
+});
+
+RGLayerListItem.displayName = 'RGLayerListItem';
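The memoized selector in `RGLayerListItem` reduces a layer to a handful of derived booleans that drive conditional rendering. The derivation, as a pure function over a simplified layer shape (field names mirror the component above):

```typescript
// Simplified regional guidance layer; the real type lives in controlLayersSlice.
type RegionalGuidanceLayer = {
  id: string;
  positivePrompt: string | null;
  negativePrompt: string | null;
  ipAdapterIds: string[];
  autoNegative: 'invert' | 'off';
};

// Derived props as computed by the component's createMemoizedSelector.
function deriveListItemProps(layer: RegionalGuidanceLayer, selectedLayerId: string | null) {
  return {
    hasPositivePrompt: layer.positivePrompt !== null, // null means "prompt removed", not empty string
    hasNegativePrompt: layer.negativePrompt !== null,
    hasIPAdapters: layer.ipAdapterIds.length > 0,
    isSelected: layer.id === selectedLayerId,
    autoNegative: layer.autoNegative,
  };
}
```

Note that an empty-string prompt still renders its textarea; only a `null` prompt hides it, which is why the delete button dispatches `prompt: null` rather than `''`.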
diff --git a/invokeai/frontend/web/src/features/controlLayers/components/RGLayerNegativePrompt.tsx b/invokeai/frontend/web/src/features/controlLayers/components/RGLayerNegativePrompt.tsx
new file mode 100644
index 0000000000..e869c8809a
--- /dev/null
+++ b/invokeai/frontend/web/src/features/controlLayers/components/RGLayerNegativePrompt.tsx
@@ -0,0 +1,58 @@
+import { Box, Textarea } from '@invoke-ai/ui-library';
+import { useAppDispatch } from 'app/store/storeHooks';
+import { RGLayerPromptDeleteButton } from 'features/controlLayers/components/RGLayerPromptDeleteButton';
+import { useLayerNegativePrompt } from 'features/controlLayers/hooks/layerStateHooks';
+import { maskLayerNegativePromptChanged } from 'features/controlLayers/store/controlLayersSlice';
+import { PromptOverlayButtonWrapper } from 'features/parameters/components/Prompts/PromptOverlayButtonWrapper';
+import { AddPromptTriggerButton } from 'features/prompt/AddPromptTriggerButton';
+import { PromptPopover } from 'features/prompt/PromptPopover';
+import { usePrompt } from 'features/prompt/usePrompt';
+import { memo, useCallback, useRef } from 'react';
+import { useTranslation } from 'react-i18next';
+
+type Props = {
+ layerId: string;
+};
+
+export const RGLayerNegativePrompt = memo(({ layerId }: Props) => {
+ const prompt = useLayerNegativePrompt(layerId);
+ const dispatch = useAppDispatch();
+  const textareaRef = useRef<HTMLTextAreaElement>(null);
+ const { t } = useTranslation();
+ const _onChange = useCallback(
+ (v: string) => {
+ dispatch(maskLayerNegativePromptChanged({ layerId, prompt: v }));
+ },
+ [dispatch, layerId]
+ );
+ const { onChange, isOpen, onClose, onOpen, onSelect, onKeyDown } = usePrompt({
+ prompt,
+ textareaRef,
+ onChange: _onChange,
+ });
+
+  return (
+    <PromptPopover isOpen={isOpen} onClose={onClose} onSelect={onSelect} width={textareaRef.current?.clientWidth}>
+      <Box pos="relative" w="full">
+        <Textarea
+          ref={textareaRef}
+          value={prompt}
+          placeholder={t('common.negativePrompt')}
+          onChange={onChange}
+          onKeyDown={onKeyDown}
+          variant="darkFilled"
+          paddingRight={30}
+          fontSize="sm"
+        />
+        <PromptOverlayButtonWrapper>
+          <RGLayerPromptDeleteButton layerId={layerId} polarity="negative" />
+          <AddPromptTriggerButton isOpen={isOpen} onOpen={onOpen} />
+        </PromptOverlayButtonWrapper>
+      </Box>
+    </PromptPopover>
+  );
+});
+
+RGLayerNegativePrompt.displayName = 'RGLayerNegativePrompt';
diff --git a/invokeai/frontend/web/src/features/controlLayers/components/RGLayerPositivePrompt.tsx b/invokeai/frontend/web/src/features/controlLayers/components/RGLayerPositivePrompt.tsx
new file mode 100644
index 0000000000..6d508338c1
--- /dev/null
+++ b/invokeai/frontend/web/src/features/controlLayers/components/RGLayerPositivePrompt.tsx
@@ -0,0 +1,58 @@
+import { Box, Textarea } from '@invoke-ai/ui-library';
+import { useAppDispatch } from 'app/store/storeHooks';
+import { RGLayerPromptDeleteButton } from 'features/controlLayers/components/RGLayerPromptDeleteButton';
+import { useLayerPositivePrompt } from 'features/controlLayers/hooks/layerStateHooks';
+import { maskLayerPositivePromptChanged } from 'features/controlLayers/store/controlLayersSlice';
+import { PromptOverlayButtonWrapper } from 'features/parameters/components/Prompts/PromptOverlayButtonWrapper';
+import { AddPromptTriggerButton } from 'features/prompt/AddPromptTriggerButton';
+import { PromptPopover } from 'features/prompt/PromptPopover';
+import { usePrompt } from 'features/prompt/usePrompt';
+import { memo, useCallback, useRef } from 'react';
+import { useTranslation } from 'react-i18next';
+
+type Props = {
+ layerId: string;
+};
+
+export const RGLayerPositivePrompt = memo(({ layerId }: Props) => {
+ const prompt = useLayerPositivePrompt(layerId);
+ const dispatch = useAppDispatch();
+  const textareaRef = useRef<HTMLTextAreaElement>(null);
+ const { t } = useTranslation();
+ const _onChange = useCallback(
+ (v: string) => {
+ dispatch(maskLayerPositivePromptChanged({ layerId, prompt: v }));
+ },
+ [dispatch, layerId]
+ );
+ const { onChange, isOpen, onClose, onOpen, onSelect, onKeyDown } = usePrompt({
+ prompt,
+ textareaRef,
+ onChange: _onChange,
+ });
+
+  return (
+    <PromptPopover isOpen={isOpen} onClose={onClose} onSelect={onSelect} width={textareaRef.current?.clientWidth}>
+      <Box pos="relative" w="full">
+        <Textarea
+          ref={textareaRef}
+          value={prompt}
+          placeholder={t('common.positivePrompt')}
+          onChange={onChange}
+          onKeyDown={onKeyDown}
+          variant="darkFilled"
+          paddingRight={30}
+          fontSize="sm"
+        />
+        <PromptOverlayButtonWrapper>
+          <RGLayerPromptDeleteButton layerId={layerId} polarity="positive" />
+          <AddPromptTriggerButton isOpen={isOpen} onOpen={onOpen} />
+        </PromptOverlayButtonWrapper>
+      </Box>
+    </PromptPopover>
+  );
+});
+
+RGLayerPositivePrompt.displayName = 'RGLayerPositivePrompt';
diff --git a/invokeai/frontend/web/src/features/controlLayers/components/RGLayerPromptDeleteButton.tsx b/invokeai/frontend/web/src/features/controlLayers/components/RGLayerPromptDeleteButton.tsx
new file mode 100644
index 0000000000..9a32bb68ad
--- /dev/null
+++ b/invokeai/frontend/web/src/features/controlLayers/components/RGLayerPromptDeleteButton.tsx
@@ -0,0 +1,38 @@
+import { IconButton, Tooltip } from '@invoke-ai/ui-library';
+import { useAppDispatch } from 'app/store/storeHooks';
+import {
+ maskLayerNegativePromptChanged,
+ maskLayerPositivePromptChanged,
+} from 'features/controlLayers/store/controlLayersSlice';
+import { memo, useCallback } from 'react';
+import { useTranslation } from 'react-i18next';
+import { PiTrashSimpleBold } from 'react-icons/pi';
+
+type Props = {
+ layerId: string;
+ polarity: 'positive' | 'negative';
+};
+
+export const RGLayerPromptDeleteButton = memo(({ layerId, polarity }: Props) => {
+ const { t } = useTranslation();
+ const dispatch = useAppDispatch();
+ const onClick = useCallback(() => {
+ if (polarity === 'positive') {
+ dispatch(maskLayerPositivePromptChanged({ layerId, prompt: null }));
+ } else {
+ dispatch(maskLayerNegativePromptChanged({ layerId, prompt: null }));
+ }
+ }, [dispatch, layerId, polarity]);
+  return (
+    <Tooltip label={t('controlLayers.deletePrompt')}>
+      <IconButton
+        variant="promptOverlay"
+        aria-label={t('controlLayers.deletePrompt')}
+        icon={<PiTrashSimpleBold />}
+        onClick={onClick}
+      />
+    </Tooltip>
+  );
+});
+
+RGLayerPromptDeleteButton.displayName = 'RGLayerPromptDeleteButton';
diff --git a/invokeai/frontend/web/src/features/controlLayers/components/RGLayerSettingsPopover.tsx b/invokeai/frontend/web/src/features/controlLayers/components/RGLayerSettingsPopover.tsx
new file mode 100644
index 0000000000..e270748b9b
--- /dev/null
+++ b/invokeai/frontend/web/src/features/controlLayers/components/RGLayerSettingsPopover.tsx
@@ -0,0 +1,55 @@
+import type { FormLabelProps } from '@invoke-ai/ui-library';
+import {
+ Flex,
+ FormControlGroup,
+ IconButton,
+ Popover,
+ PopoverArrow,
+ PopoverBody,
+ PopoverContent,
+ PopoverTrigger,
+} from '@invoke-ai/ui-library';
+import { stopPropagation } from 'common/util/stopPropagation';
+import { RGLayerAutoNegativeCheckbox } from 'features/controlLayers/components/RGLayerAutoNegativeCheckbox';
+import { memo } from 'react';
+import { useTranslation } from 'react-i18next';
+import { PiGearSixBold } from 'react-icons/pi';
+
+type Props = {
+ layerId: string;
+};
+
+const formLabelProps: FormLabelProps = {
+ flexGrow: 1,
+ minW: 32,
+};
+
+const RGLayerSettingsPopover = ({ layerId }: Props) => {
+ const { t } = useTranslation();
+
+  return (
+    <Popover isLazy>
+      <PopoverTrigger>
+        <IconButton
+          tooltip={t('common.settingsLabel')}
+          aria-label={t('common.settingsLabel')}
+          size="sm"
+          icon={<PiGearSixBold />}
+          onDoubleClick={stopPropagation} // double click expands the layer
+        />
+      </PopoverTrigger>
+      <PopoverContent>
+        <PopoverArrow />
+        <PopoverBody>
+          <Flex direction="column" gap={2}>
+            <FormControlGroup formLabelProps={formLabelProps}>
+              <RGLayerAutoNegativeCheckbox layerId={layerId} />
+            </FormControlGroup>
+          </Flex>
+        </PopoverBody>
+      </PopoverContent>
+    </Popover>
+  );
+};
+
+export default memo(RGLayerSettingsPopover);
diff --git a/invokeai/frontend/web/src/features/controlLayers/components/StageComponent.tsx b/invokeai/frontend/web/src/features/controlLayers/components/StageComponent.tsx
new file mode 100644
index 0000000000..ecf1121b41
--- /dev/null
+++ b/invokeai/frontend/web/src/features/controlLayers/components/StageComponent.tsx
@@ -0,0 +1,248 @@
+import { Flex } from '@invoke-ai/ui-library';
+import { useStore } from '@nanostores/react';
+import { createSelector } from '@reduxjs/toolkit';
+import { logger } from 'app/logging/logger';
+import { createMemoizedSelector } from 'app/store/createMemoizedSelector';
+import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
+import { useMouseEvents } from 'features/controlLayers/hooks/mouseEventHooks';
+import {
+ $cursorPosition,
+ $isMouseOver,
+ $lastMouseDownPos,
+ $tool,
+ isRegionalGuidanceLayer,
+ layerBboxChanged,
+ layerTranslated,
+ selectControlLayersSlice,
+} from 'features/controlLayers/store/controlLayersSlice';
+import { debouncedRenderers, renderers as normalRenderers } from 'features/controlLayers/util/renderers';
+import Konva from 'konva';
+import type { IRect } from 'konva/lib/types';
+import { memo, useCallback, useLayoutEffect, useMemo, useState } from 'react';
+import { useDevicePixelRatio } from 'use-device-pixel-ratio';
+import { v4 as uuidv4 } from 'uuid';
+
+// This will log warnings when layers > 5 - maybe use `import.meta.env.MODE === 'development'` instead?
+Konva.showWarnings = false;
+
+const log = logger('controlLayers');
+
+const selectSelectedLayerColor = createMemoizedSelector(selectControlLayersSlice, (controlLayers) => {
+ const layer = controlLayers.present.layers
+ .filter(isRegionalGuidanceLayer)
+ .find((l) => l.id === controlLayers.present.selectedLayerId);
+ return layer?.previewColor ?? null;
+});
+
+const selectSelectedLayerType = createSelector(selectControlLayersSlice, (controlLayers) => {
+ const selectedLayer = controlLayers.present.layers.find((l) => l.id === controlLayers.present.selectedLayerId);
+ return selectedLayer?.type ?? null;
+});
+
+const useStageRenderer = (
+ stage: Konva.Stage,
+ container: HTMLDivElement | null,
+ wrapper: HTMLDivElement | null,
+ asPreview: boolean
+) => {
+ const dispatch = useAppDispatch();
+ const state = useAppSelector((s) => s.controlLayers.present);
+ const tool = useStore($tool);
+ const { onMouseDown, onMouseUp, onMouseMove, onMouseEnter, onMouseLeave, onMouseWheel } = useMouseEvents();
+ const cursorPosition = useStore($cursorPosition);
+ const lastMouseDownPos = useStore($lastMouseDownPos);
+ const isMouseOver = useStore($isMouseOver);
+ const selectedLayerIdColor = useAppSelector(selectSelectedLayerColor);
+ const selectedLayerType = useAppSelector(selectSelectedLayerType);
+ const layerIds = useMemo(() => state.layers.map((l) => l.id), [state.layers]);
+ const layerCount = useMemo(() => state.layers.length, [state.layers]);
+ const renderers = useMemo(() => (asPreview ? debouncedRenderers : normalRenderers), [asPreview]);
+ const dpr = useDevicePixelRatio({ round: false });
+
+ const onLayerPosChanged = useCallback(
+ (layerId: string, x: number, y: number) => {
+ dispatch(layerTranslated({ layerId, x, y }));
+ },
+ [dispatch]
+ );
+
+ const onBboxChanged = useCallback(
+ (layerId: string, bbox: IRect | null) => {
+ dispatch(layerBboxChanged({ layerId, bbox }));
+ },
+ [dispatch]
+ );
+
+ useLayoutEffect(() => {
+ log.trace('Initializing stage');
+ if (!container) {
+ return;
+ }
+ stage.container(container);
+ return () => {
+ log.trace('Cleaning up stage');
+ stage.destroy();
+ };
+ }, [container, stage]);
+
+ useLayoutEffect(() => {
+ log.trace('Adding stage listeners');
+ if (asPreview) {
+ return;
+ }
+ stage.on('mousedown', onMouseDown);
+ stage.on('mouseup', onMouseUp);
+ stage.on('mousemove', onMouseMove);
+ stage.on('mouseenter', onMouseEnter);
+ stage.on('mouseleave', onMouseLeave);
+ stage.on('wheel', onMouseWheel);
+
+ return () => {
+ log.trace('Cleaning up stage listeners');
+ stage.off('mousedown', onMouseDown);
+ stage.off('mouseup', onMouseUp);
+ stage.off('mousemove', onMouseMove);
+ stage.off('mouseenter', onMouseEnter);
+ stage.off('mouseleave', onMouseLeave);
+ stage.off('wheel', onMouseWheel);
+ };
+ }, [stage, asPreview, onMouseDown, onMouseUp, onMouseMove, onMouseEnter, onMouseLeave, onMouseWheel]);
+
+ useLayoutEffect(() => {
+ log.trace('Updating stage dimensions');
+ if (!wrapper) {
+ return;
+ }
+
+ const fitStageToContainer = () => {
+ const newXScale = wrapper.offsetWidth / state.size.width;
+ const newYScale = wrapper.offsetHeight / state.size.height;
+ const newScale = Math.min(newXScale, newYScale, 1);
+ stage.width(state.size.width * newScale);
+ stage.height(state.size.height * newScale);
+ stage.scaleX(newScale);
+ stage.scaleY(newScale);
+ };
+
+ const resizeObserver = new ResizeObserver(fitStageToContainer);
+ resizeObserver.observe(wrapper);
+ fitStageToContainer();
+
+ return () => {
+ resizeObserver.disconnect();
+ };
+ }, [stage, state.size.width, state.size.height, wrapper]);
+
+ useLayoutEffect(() => {
+ log.trace('Rendering tool preview');
+ if (asPreview) {
+ // Preview should not display tool
+ return;
+ }
+ renderers.renderToolPreview(
+ stage,
+ tool,
+ selectedLayerIdColor,
+ selectedLayerType,
+ state.globalMaskLayerOpacity,
+ cursorPosition,
+ lastMouseDownPos,
+ isMouseOver,
+ state.brushSize
+ );
+ }, [
+ asPreview,
+ stage,
+ tool,
+ selectedLayerIdColor,
+ selectedLayerType,
+ state.globalMaskLayerOpacity,
+ cursorPosition,
+ lastMouseDownPos,
+ isMouseOver,
+ state.brushSize,
+ renderers,
+ ]);
+
+ useLayoutEffect(() => {
+ log.trace('Rendering layers');
+ renderers.renderLayers(stage, state.layers, state.globalMaskLayerOpacity, tool, onLayerPosChanged);
+ }, [
+ stage,
+ state.layers,
+ state.globalMaskLayerOpacity,
+ tool,
+ onLayerPosChanged,
+ renderers,
+ state.size.width,
+ state.size.height,
+ ]);
+
+ useLayoutEffect(() => {
+ log.trace('Rendering bbox');
+ if (asPreview) {
+ // Preview should not display bboxes
+ return;
+ }
+ renderers.renderBbox(stage, state.layers, tool, onBboxChanged);
+ }, [stage, asPreview, state.layers, tool, onBboxChanged, renderers]);
+
+ useLayoutEffect(() => {
+ log.trace('Rendering background');
+ if (asPreview) {
+ // The preview should not have a background
+ return;
+ }
+ renderers.renderBackground(stage, state.size.width, state.size.height);
+ }, [stage, asPreview, state.size.width, state.size.height, renderers]);
+
+ useLayoutEffect(() => {
+ log.trace('Arranging layers');
+ renderers.arrangeLayers(stage, layerIds);
+ }, [stage, layerIds, renderers]);
+
+ useLayoutEffect(() => {
+ log.trace('Rendering no layers message');
+ if (asPreview) {
+ // The preview should not display the no layers message
+ return;
+ }
+ renderers.renderNoLayersMessage(stage, layerCount, state.size.width, state.size.height);
+ }, [stage, layerCount, renderers, asPreview, state.size.width, state.size.height]);
+
+ useLayoutEffect(() => {
+ Konva.pixelRatio = dpr;
+ }, [dpr]);
+};
+
+type Props = {
+ asPreview?: boolean;
+};
+
+export const StageComponent = memo(({ asPreview = false }: Props) => {
+ const [stage] = useState(
+ () => new Konva.Stage({ id: uuidv4(), container: document.createElement('div'), listening: !asPreview })
+ );
+  const [container, setContainer] = useState<HTMLDivElement | null>(null);
+  const [wrapper, setWrapper] = useState<HTMLDivElement | null>(null);
+
+ const containerRef = useCallback((el: HTMLDivElement | null) => {
+ setContainer(el);
+ }, []);
+
+ const wrapperRef = useCallback((el: HTMLDivElement | null) => {
+ setWrapper(el);
+ }, []);
+
+ useStageRenderer(stage, container, wrapper, asPreview);
+
+  return (
+    <Flex overflow="hidden" w="full" h="full">
+      <Flex ref={wrapperRef} w="full" h="full" alignItems="center" justifyContent="center">
+        <Flex ref={containerRef} tabIndex={-1} bg="base.850" borderRadius="base" overflow="hidden" />
+      </Flex>
+    </Flex>
+  );
+});
+
+StageComponent.displayName = 'StageComponent';
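The `fitStageToContainer` callback in `useStageRenderer` fits the Konva stage into its wrapper by taking the smaller of the two axis scales, capped at 1 so the stage is never upscaled. That math, extracted as a pure helper:

```typescript
// Uniform scale that fits a stage of (stageWidth x stageHeight) into a wrapper
// of (wrapperWidth x wrapperHeight), matching the ResizeObserver callback above.
function fitStageScale(
  wrapperWidth: number,
  wrapperHeight: number,
  stageWidth: number,
  stageHeight: number
): number {
  const xScale = wrapperWidth / stageWidth;
  const yScale = wrapperHeight / stageHeight;
  // Cap at 1: shrink to fit, but never enlarge past native size.
  return Math.min(xScale, yScale, 1);
}
```

The component then applies the result to `stage.width()`, `stage.height()`, `stage.scaleX()`, and `stage.scaleY()` so the canvas buffer and the coordinate system stay in sync.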
diff --git a/invokeai/frontend/web/src/features/controlLayers/components/ToolChooser.tsx b/invokeai/frontend/web/src/features/controlLayers/components/ToolChooser.tsx
new file mode 100644
index 0000000000..53535b4248
--- /dev/null
+++ b/invokeai/frontend/web/src/features/controlLayers/components/ToolChooser.tsx
@@ -0,0 +1,90 @@
+import { ButtonGroup, IconButton } from '@invoke-ai/ui-library';
+import { useStore } from '@nanostores/react';
+import { createSelector } from '@reduxjs/toolkit';
+import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
+import {
+ $tool,
+ selectControlLayersSlice,
+ selectedLayerDeleted,
+ selectedLayerReset,
+} from 'features/controlLayers/store/controlLayersSlice';
+import { useCallback } from 'react';
+import { useHotkeys } from 'react-hotkeys-hook';
+import { useTranslation } from 'react-i18next';
+import { PiArrowsOutCardinalBold, PiEraserBold, PiPaintBrushBold, PiRectangleBold } from 'react-icons/pi';
+
+const selectIsDisabled = createSelector(selectControlLayersSlice, (controlLayers) => {
+ const selectedLayer = controlLayers.present.layers.find((l) => l.id === controlLayers.present.selectedLayerId);
+ return selectedLayer?.type !== 'regional_guidance_layer';
+});
+
+export const ToolChooser: React.FC = () => {
+ const { t } = useTranslation();
+ const dispatch = useAppDispatch();
+ const isDisabled = useAppSelector(selectIsDisabled);
+ const tool = useStore($tool);
+
+ const setToolToBrush = useCallback(() => {
+ $tool.set('brush');
+ }, []);
+ useHotkeys('b', setToolToBrush, { enabled: !isDisabled }, [isDisabled]);
+ const setToolToEraser = useCallback(() => {
+ $tool.set('eraser');
+ }, []);
+ useHotkeys('e', setToolToEraser, { enabled: !isDisabled }, [isDisabled]);
+ const setToolToRect = useCallback(() => {
+ $tool.set('rect');
+ }, []);
+ useHotkeys('u', setToolToRect, { enabled: !isDisabled }, [isDisabled]);
+ const setToolToMove = useCallback(() => {
+ $tool.set('move');
+ }, []);
+ useHotkeys('v', setToolToMove, { enabled: !isDisabled }, [isDisabled]);
+
+ const resetSelectedLayer = useCallback(() => {
+ dispatch(selectedLayerReset());
+ }, [dispatch]);
+ useHotkeys('shift+c', resetSelectedLayer);
+
+ const deleteSelectedLayer = useCallback(() => {
+ dispatch(selectedLayerDeleted());
+ }, [dispatch]);
+ useHotkeys('shift+d', deleteSelectedLayer);
+
+  return (
+    <ButtonGroup isAttached={false}>
+      <IconButton
+        aria-label={`${t('unifiedCanvas.brush')} (B)`}
+        tooltip={`${t('unifiedCanvas.brush')} (B)`}
+        icon={<PiPaintBrushBold />}
+        variant={tool === 'brush' ? 'solid' : 'outline'}
+        onClick={setToolToBrush}
+        isDisabled={isDisabled}
+      />
+      <IconButton
+        aria-label={`${t('unifiedCanvas.eraser')} (E)`}
+        tooltip={`${t('unifiedCanvas.eraser')} (E)`}
+        icon={<PiEraserBold />}
+        variant={tool === 'eraser' ? 'solid' : 'outline'}
+        onClick={setToolToEraser}
+        isDisabled={isDisabled}
+      />
+      <IconButton
+        aria-label={`${t('controlLayers.rectangle')} (U)`}
+        tooltip={`${t('controlLayers.rectangle')} (U)`}
+        icon={<PiRectangleBold />}
+        variant={tool === 'rect' ? 'solid' : 'outline'}
+        onClick={setToolToRect}
+        isDisabled={isDisabled}
+      />
+      <IconButton
+        aria-label={`${t('controlLayers.moveTool')} (V)`}
+        tooltip={`${t('controlLayers.moveTool')} (V)`}
+        icon={<PiArrowsOutCardinalBold />}
+        variant={tool === 'move' ? 'solid' : 'outline'}
+        onClick={setToolToMove}
+        isDisabled={isDisabled}
+      />
+    </ButtonGroup>
+  );
+};
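`ToolChooser` binds the single-key hotkeys `b`, `e`, `u`, and `v` to the four tools, enabled only when a regional guidance layer is selected. The key-to-tool mapping as a pure lookup (the `Tool` union mirrors the `$tool` values used above):

```typescript
// Tool values match those set on the $tool atom in the component above.
type Tool = 'brush' | 'eraser' | 'rect' | 'move';

// Single-key hotkey -> tool, or null for an unbound key.
function toolForHotkey(key: string): Tool | null {
  const map: Record<string, Tool> = { b: 'brush', e: 'eraser', u: 'rect', v: 'move' };
  return map[key] ?? null;
}
```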
diff --git a/invokeai/frontend/web/src/features/controlLayers/components/UndoRedoButtonGroup.tsx b/invokeai/frontend/web/src/features/controlLayers/components/UndoRedoButtonGroup.tsx
new file mode 100644
index 0000000000..8babae7fcc
--- /dev/null
+++ b/invokeai/frontend/web/src/features/controlLayers/components/UndoRedoButtonGroup.tsx
@@ -0,0 +1,49 @@
+/* eslint-disable i18next/no-literal-string */
+import { ButtonGroup, IconButton } from '@invoke-ai/ui-library';
+import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
+import { redo, undo } from 'features/controlLayers/store/controlLayersSlice';
+import { memo, useCallback } from 'react';
+import { useHotkeys } from 'react-hotkeys-hook';
+import { useTranslation } from 'react-i18next';
+import { PiArrowClockwiseBold, PiArrowCounterClockwiseBold } from 'react-icons/pi';
+
+export const UndoRedoButtonGroup = memo(() => {
+ const { t } = useTranslation();
+ const dispatch = useAppDispatch();
+
+ const mayUndo = useAppSelector((s) => s.controlLayers.past.length > 0);
+ const handleUndo = useCallback(() => {
+ dispatch(undo());
+ }, [dispatch]);
+ useHotkeys(['meta+z', 'ctrl+z'], handleUndo, { enabled: mayUndo, preventDefault: true }, [mayUndo, handleUndo]);
+
+ const mayRedo = useAppSelector((s) => s.controlLayers.future.length > 0);
+ const handleRedo = useCallback(() => {
+ dispatch(redo());
+ }, [dispatch]);
+ useHotkeys(['meta+shift+z', 'ctrl+shift+z'], handleRedo, { enabled: mayRedo, preventDefault: true }, [
+ mayRedo,
+ handleRedo,
+ ]);
+
+  return (
+    <ButtonGroup>
+      <IconButton
+        aria-label={t('unifiedCanvas.undo')}
+        tooltip={t('unifiedCanvas.undo')}
+        onClick={handleUndo}
+        icon={<PiArrowCounterClockwiseBold />}
+        isDisabled={!mayUndo}
+      />
+      <IconButton
+        aria-label={t('unifiedCanvas.redo')}
+        tooltip={t('unifiedCanvas.redo')}
+        onClick={handleRedo}
+        icon={<PiArrowClockwiseBold />}
+        isDisabled={!mayRedo}
+      />
+    </ButtonGroup>
+  );
+});
+
+UndoRedoButtonGroup.displayName = 'UndoRedoButtonGroup';
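The buttons above derive `mayUndo`/`mayRedo` from a redux-undo style history shape (`s.controlLayers.past` / `s.controlLayers.future`). A minimal sketch of that `{ past, present, future }` state and its transitions — illustrative, not the actual redux-undo implementation used by the slice:

```typescript
// redux-undo style history: past and future stacks around a present value.
type UndoableState<T> = { past: T[]; present: T; future: T[] };

// Undo: pop the most recent past state; push present onto future.
function undoState<T>(s: UndoableState<T>): UndoableState<T> {
  if (s.past.length === 0) return s; // nothing to undo (mayUndo === false)
  const previous = s.past[s.past.length - 1] as T;
  return { past: s.past.slice(0, -1), present: previous, future: [s.present, ...s.future] };
}

// Redo: shift the nearest future state; push present onto past.
function redoState<T>(s: UndoableState<T>): UndoableState<T> {
  if (s.future.length === 0) return s; // nothing to redo (mayRedo === false)
  const next = s.future[0] as T;
  return { past: [...s.past, s.present], present: next, future: s.future.slice(1) };
}
```

`mayUndo` is simply `past.length > 0` and `mayRedo` is `future.length > 0`, exactly the selectors used by the component.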
diff --git a/invokeai/frontend/web/src/features/controlLayers/components/controlAdapterOverrides/ControlAdapterImagePreview.tsx b/invokeai/frontend/web/src/features/controlLayers/components/controlAdapterOverrides/ControlAdapterImagePreview.tsx
new file mode 100644
index 0000000000..b3094e5599
--- /dev/null
+++ b/invokeai/frontend/web/src/features/controlLayers/components/controlAdapterOverrides/ControlAdapterImagePreview.tsx
@@ -0,0 +1,237 @@
+import type { SystemStyleObject } from '@invoke-ai/ui-library';
+import { Box, Flex, Spinner, useShiftModifier } from '@invoke-ai/ui-library';
+import { skipToken } from '@reduxjs/toolkit/query';
+import { createMemoizedSelector } from 'app/store/createMemoizedSelector';
+import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
+import IAIDndImage from 'common/components/IAIDndImage';
+import IAIDndImageIcon from 'common/components/IAIDndImageIcon';
+import { setBoundingBoxDimensions } from 'features/canvas/store/canvasSlice';
+import { useControlAdapterControlImage } from 'features/controlAdapters/hooks/useControlAdapterControlImage';
+import { useControlAdapterProcessedControlImage } from 'features/controlAdapters/hooks/useControlAdapterProcessedControlImage';
+import { useControlAdapterProcessorType } from 'features/controlAdapters/hooks/useControlAdapterProcessorType';
+import {
+ controlAdapterImageChanged,
+ selectControlAdaptersSlice,
+} from 'features/controlAdapters/store/controlAdaptersSlice';
+import { heightChanged, widthChanged } from 'features/controlLayers/store/controlLayersSlice';
+import type { TypesafeDraggableData, TypesafeDroppableData } from 'features/dnd/types';
+import { calculateNewSize } from 'features/parameters/components/ImageSize/calculateNewSize';
+import { selectOptimalDimension } from 'features/parameters/store/generationSlice';
+import { activeTabNameSelector } from 'features/ui/store/uiSelectors';
+import { memo, useCallback, useEffect, useMemo, useState } from 'react';
+import { useTranslation } from 'react-i18next';
+import { PiArrowCounterClockwiseBold, PiFloppyDiskBold, PiRulerBold } from 'react-icons/pi';
+import {
+ useAddImageToBoardMutation,
+ useChangeImageIsIntermediateMutation,
+ useGetImageDTOQuery,
+ useRemoveImageFromBoardMutation,
+} from 'services/api/endpoints/images';
+import type { PostUploadAction } from 'services/api/types';
+
+type Props = {
+ id: string;
+ isSmall?: boolean;
+};
+
+const selectPendingControlImages = createMemoizedSelector(
+ selectControlAdaptersSlice,
+ (controlAdapters) => controlAdapters.pendingControlImages
+);
+
+const ControlAdapterImagePreview = ({ isSmall, id }: Props) => {
+ const { t } = useTranslation();
+ const dispatch = useAppDispatch();
+ const controlImageName = useControlAdapterControlImage(id);
+ const processedControlImageName = useControlAdapterProcessedControlImage(id);
+ const processorType = useControlAdapterProcessorType(id);
+ const autoAddBoardId = useAppSelector((s) => s.gallery.autoAddBoardId);
+ const isConnected = useAppSelector((s) => s.system.isConnected);
+ const activeTabName = useAppSelector(activeTabNameSelector);
+ const optimalDimension = useAppSelector(selectOptimalDimension);
+ const pendingControlImages = useAppSelector(selectPendingControlImages);
+ const shift = useShiftModifier();
+
+ const [isMouseOverImage, setIsMouseOverImage] = useState(false);
+
+ const { currentData: controlImage, isError: isErrorControlImage } = useGetImageDTOQuery(
+ controlImageName ?? skipToken
+ );
+
+ const { currentData: processedControlImage, isError: isErrorProcessedControlImage } = useGetImageDTOQuery(
+ processedControlImageName ?? skipToken
+ );
+
+ const [changeIsIntermediate] = useChangeImageIsIntermediateMutation();
+ const [addToBoard] = useAddImageToBoardMutation();
+ const [removeFromBoard] = useRemoveImageFromBoardMutation();
+ const handleResetControlImage = useCallback(() => {
+ dispatch(controlAdapterImageChanged({ id, controlImage: null }));
+ }, [id, dispatch]);
+
+ const handleSaveControlImage = useCallback(async () => {
+ if (!processedControlImage) {
+ return;
+ }
+
+ await changeIsIntermediate({
+ imageDTO: processedControlImage,
+ is_intermediate: false,
+ }).unwrap();
+
+ if (autoAddBoardId !== 'none') {
+ addToBoard({
+ imageDTO: processedControlImage,
+ board_id: autoAddBoardId,
+ });
+ } else {
+ removeFromBoard({ imageDTO: processedControlImage });
+ }
+ }, [processedControlImage, changeIsIntermediate, autoAddBoardId, addToBoard, removeFromBoard]);
+
+ const handleSetControlImageToDimensions = useCallback(() => {
+ if (!controlImage) {
+ return;
+ }
+
+ if (activeTabName === 'unifiedCanvas') {
+ dispatch(setBoundingBoxDimensions({ width: controlImage.width, height: controlImage.height }, optimalDimension));
+ } else {
+ if (shift) {
+ const { width, height } = controlImage;
+ dispatch(widthChanged({ width, updateAspectRatio: true }));
+ dispatch(heightChanged({ height, updateAspectRatio: true }));
+ } else {
+ const { width, height } = calculateNewSize(
+ controlImage.width / controlImage.height,
+ optimalDimension * optimalDimension
+ );
+ dispatch(widthChanged({ width, updateAspectRatio: true }));
+ dispatch(heightChanged({ height, updateAspectRatio: true }));
+ }
+ }
+ }, [controlImage, activeTabName, dispatch, optimalDimension, shift]);
+
+ const handleMouseEnter = useCallback(() => {
+ setIsMouseOverImage(true);
+ }, []);
+
+ const handleMouseLeave = useCallback(() => {
+ setIsMouseOverImage(false);
+ }, []);
+
+ const draggableData = useMemo(() => {
+ if (controlImage) {
+ return {
+ id,
+ payloadType: 'IMAGE_DTO',
+ payload: { imageDTO: controlImage },
+ };
+ }
+ }, [controlImage, id]);
+
+ const droppableData = useMemo(
+ () => ({
+ id,
+ actionType: 'SET_CONTROL_ADAPTER_IMAGE',
+ context: { id },
+ }),
+ [id]
+ );
+
+ const postUploadAction = useMemo<PostUploadAction>(() => ({ type: 'SET_CONTROL_ADAPTER_IMAGE', id }), [id]);
+
+ const shouldShowProcessedImage =
+ controlImage &&
+ processedControlImage &&
+ !isMouseOverImage &&
+ !pendingControlImages.includes(id) &&
+ processorType !== 'none';
+
+ useEffect(() => {
+ if (isConnected && (isErrorControlImage || isErrorProcessedControlImage)) {
+ handleResetControlImage();
+ }
+ }, [handleResetControlImage, isConnected, isErrorControlImage, isErrorProcessedControlImage]);
+
+ return (
+
+
+
+
+
+
+
+ <>
+ : undefined}
+ tooltip={t('controlnet.resetControlImage')}
+ />
+ : undefined}
+ tooltip={t('controlnet.saveControlImage')}
+ styleOverrides={saveControlImageStyleOverrides}
+ />
+ : undefined}
+ tooltip={shift ? t('controlnet.setControlImageDimensionsForce') : t('controlnet.setControlImageDimensions')}
+ styleOverrides={setControlImageDimensionsStyleOverrides}
+ />
+ >
+
+ {pendingControlImages.includes(id) && (
+
+
+
+ )}
+
+ );
+};
+
+export default memo(ControlAdapterImagePreview);
+
+const saveControlImageStyleOverrides: SystemStyleObject = { mt: 6 };
+const setControlImageDimensionsStyleOverrides: SystemStyleObject = { mt: 12 };
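`handleSetControlImageToDimensions` above delegates to `calculateNewSize` to fit the control image's aspect ratio into a target pixel area (`optimalDimension * optimalDimension`). A rough sketch of that calculation, assuming the helper snaps dimensions to a multiple of 8 — the names and rounding here are illustrative, not the actual implementation:

```typescript
// Hypothetical sketch of an aspect-ratio-preserving resize: choose a
// width/height pair whose product is close to `targetArea` while keeping
// `aspectRatio` (width / height), snapped to a multiple of 8.
const roundToMultipleOf8 = (v: number): number => Math.round(v / 8) * 8;

const calculateNewSizeSketch = (aspectRatio: number, targetArea: number) => {
  // From width * height = targetArea and width = aspectRatio * height:
  const width = roundToMultipleOf8(Math.sqrt(targetArea * aspectRatio));
  const height = roundToMultipleOf8(width / aspectRatio);
  return { width, height };
};
```

With `shift` held the raw image dimensions are used instead, which is why the branch above skips this fitting step.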
diff --git a/invokeai/frontend/web/src/features/controlLayers/components/controlAdapterOverrides/ControlAdapterLayerConfig.tsx b/invokeai/frontend/web/src/features/controlLayers/components/controlAdapterOverrides/ControlAdapterLayerConfig.tsx
new file mode 100644
index 0000000000..29a3502d37
--- /dev/null
+++ b/invokeai/frontend/web/src/features/controlLayers/components/controlAdapterOverrides/ControlAdapterLayerConfig.tsx
@@ -0,0 +1,72 @@
+import { Box, Flex, Icon, IconButton } from '@invoke-ai/ui-library';
+import ControlAdapterProcessorComponent from 'features/controlAdapters/components/ControlAdapterProcessorComponent';
+import ControlAdapterShouldAutoConfig from 'features/controlAdapters/components/ControlAdapterShouldAutoConfig';
+import ParamControlAdapterIPMethod from 'features/controlAdapters/components/parameters/ParamControlAdapterIPMethod';
+import ParamControlAdapterProcessorSelect from 'features/controlAdapters/components/parameters/ParamControlAdapterProcessorSelect';
+import { useControlAdapterType } from 'features/controlAdapters/hooks/useControlAdapterType';
+import { memo } from 'react';
+import { useTranslation } from 'react-i18next';
+import { PiCaretUpBold } from 'react-icons/pi';
+import { useToggle } from 'react-use';
+
+import ControlAdapterImagePreview from './ControlAdapterImagePreview';
+import { ParamControlAdapterBeginEnd } from './ParamControlAdapterBeginEnd';
+import ParamControlAdapterControlMode from './ParamControlAdapterControlMode';
+import ParamControlAdapterModel from './ParamControlAdapterModel';
+import ParamControlAdapterWeight from './ParamControlAdapterWeight';
+
+const ControlAdapterLayerConfig = (props: { id: string }) => {
+ const { id } = props;
+ const controlAdapterType = useControlAdapterType(id);
+ const { t } = useTranslation();
+ const [isExpanded, toggleIsExpanded] = useToggle(false);
+
+ return (
+
+
+
+ {' '}
+
+
+ {controlAdapterType !== 'ip_adapter' && (
+
+ }
+ />
+ )}
+
+
+
+ {controlAdapterType === 'ip_adapter' && }
+ {controlAdapterType === 'controlnet' && }
+
+
+
+
+
+
+
+ {isExpanded && (
+ <>
+
+
+
+ >
+ )}
+
+ );
+};
+
+export default memo(ControlAdapterLayerConfig);
diff --git a/invokeai/frontend/web/src/features/controlLayers/components/controlAdapterOverrides/ParamControlAdapterBeginEnd.tsx b/invokeai/frontend/web/src/features/controlLayers/components/controlAdapterOverrides/ParamControlAdapterBeginEnd.tsx
new file mode 100644
index 0000000000..e4bc07e0b4
--- /dev/null
+++ b/invokeai/frontend/web/src/features/controlLayers/components/controlAdapterOverrides/ParamControlAdapterBeginEnd.tsx
@@ -0,0 +1,89 @@
+import { CompositeRangeSlider, FormControl, FormLabel } from '@invoke-ai/ui-library';
+import { useAppDispatch } from 'app/store/storeHooks';
+import { InformationalPopover } from 'common/components/InformationalPopover/InformationalPopover';
+import { useControlAdapterBeginEndStepPct } from 'features/controlAdapters/hooks/useControlAdapterBeginEndStepPct';
+import { useControlAdapterIsEnabled } from 'features/controlAdapters/hooks/useControlAdapterIsEnabled';
+import {
+ controlAdapterBeginStepPctChanged,
+ controlAdapterEndStepPctChanged,
+} from 'features/controlAdapters/store/controlAdaptersSlice';
+import { memo, useCallback, useMemo } from 'react';
+import { useTranslation } from 'react-i18next';
+
+type Props = {
+ id: string;
+};
+
+const formatPct = (v: number) => `${Math.round(v * 100)}%`;
+
+export const ParamControlAdapterBeginEnd = memo(({ id }: Props) => {
+ const isEnabled = useControlAdapterIsEnabled(id);
+ const stepPcts = useControlAdapterBeginEndStepPct(id);
+ const dispatch = useAppDispatch();
+ const { t } = useTranslation();
+
+ const onChange = useCallback(
+ (v: [number, number]) => {
+ dispatch(
+ controlAdapterBeginStepPctChanged({
+ id,
+ beginStepPct: v[0],
+ })
+ );
+ dispatch(
+ controlAdapterEndStepPctChanged({
+ id,
+ endStepPct: v[1],
+ })
+ );
+ },
+ [dispatch, id]
+ );
+
+ const onReset = useCallback(() => {
+ dispatch(
+ controlAdapterBeginStepPctChanged({
+ id,
+ beginStepPct: 0,
+ })
+ );
+ dispatch(
+ controlAdapterEndStepPctChanged({
+ id,
+ endStepPct: 1,
+ })
+ );
+ }, [dispatch, id]);
+
+ const value = useMemo<[number, number]>(() => [stepPcts?.beginStepPct ?? 0, stepPcts?.endStepPct ?? 1], [stepPcts]);
+
+ if (!stepPcts) {
+ return null;
+ }
+
+ return (
+
+
+ {t('controlnet.beginEndStepPercentShort')}
+
+
+
+ );
+});
+
+ParamControlAdapterBeginEnd.displayName = 'ParamControlAdapterBeginEnd';
+
+const ariaLabel = ['Begin Step %', 'End Step %'];
diff --git a/invokeai/frontend/web/src/features/controlLayers/components/controlAdapterOverrides/ParamControlAdapterControlMode.tsx b/invokeai/frontend/web/src/features/controlLayers/components/controlAdapterOverrides/ParamControlAdapterControlMode.tsx
new file mode 100644
index 0000000000..6b5d34c106
--- /dev/null
+++ b/invokeai/frontend/web/src/features/controlLayers/components/controlAdapterOverrides/ParamControlAdapterControlMode.tsx
@@ -0,0 +1,66 @@
+import type { ComboboxOnChange } from '@invoke-ai/ui-library';
+import { Combobox, FormControl, FormLabel } from '@invoke-ai/ui-library';
+import { useAppDispatch } from 'app/store/storeHooks';
+import { InformationalPopover } from 'common/components/InformationalPopover/InformationalPopover';
+import { useControlAdapterControlMode } from 'features/controlAdapters/hooks/useControlAdapterControlMode';
+import { useControlAdapterIsEnabled } from 'features/controlAdapters/hooks/useControlAdapterIsEnabled';
+import { controlAdapterControlModeChanged } from 'features/controlAdapters/store/controlAdaptersSlice';
+import type { ControlMode } from 'features/controlAdapters/store/types';
+import { memo, useCallback, useMemo } from 'react';
+import { useTranslation } from 'react-i18next';
+
+type Props = {
+ id: string;
+};
+
+const ParamControlAdapterControlMode = ({ id }: Props) => {
+ const isEnabled = useControlAdapterIsEnabled(id);
+ const controlMode = useControlAdapterControlMode(id);
+ const dispatch = useAppDispatch();
+ const { t } = useTranslation();
+
+ const CONTROL_MODE_DATA = useMemo(
+ () => [
+ { label: t('controlnet.balanced'), value: 'balanced' },
+ { label: t('controlnet.prompt'), value: 'more_prompt' },
+ { label: t('controlnet.control'), value: 'more_control' },
+ { label: t('controlnet.megaControl'), value: 'unbalanced' },
+ ],
+ [t]
+ );
+
+ const handleControlModeChange = useCallback<ComboboxOnChange>(
+ (v) => {
+ if (!v) {
+ return;
+ }
+ dispatch(
+ controlAdapterControlModeChanged({
+ id,
+ controlMode: v.value as ControlMode,
+ })
+ );
+ },
+ [id, dispatch]
+ );
+
+ const value = useMemo(
+ () => CONTROL_MODE_DATA.find((o) => o.value === controlMode),
+ [CONTROL_MODE_DATA, controlMode]
+ );
+
+ if (!controlMode) {
+ return null;
+ }
+
+ return (
+
+
+ {t('controlnet.control')}
+
+
+
+ );
+};
+
+export default memo(ParamControlAdapterControlMode);
diff --git a/invokeai/frontend/web/src/features/controlLayers/components/controlAdapterOverrides/ParamControlAdapterModel.tsx b/invokeai/frontend/web/src/features/controlLayers/components/controlAdapterOverrides/ParamControlAdapterModel.tsx
new file mode 100644
index 0000000000..73a7d695b3
--- /dev/null
+++ b/invokeai/frontend/web/src/features/controlLayers/components/controlAdapterOverrides/ParamControlAdapterModel.tsx
@@ -0,0 +1,136 @@
+import type { ComboboxOnChange, ComboboxOption } from '@invoke-ai/ui-library';
+import { Combobox, Flex, FormControl, Tooltip } from '@invoke-ai/ui-library';
+import { createMemoizedSelector } from 'app/store/createMemoizedSelector';
+import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
+import { useGroupedModelCombobox } from 'common/hooks/useGroupedModelCombobox';
+import { useControlAdapterCLIPVisionModel } from 'features/controlAdapters/hooks/useControlAdapterCLIPVisionModel';
+import { useControlAdapterIsEnabled } from 'features/controlAdapters/hooks/useControlAdapterIsEnabled';
+import { useControlAdapterModel } from 'features/controlAdapters/hooks/useControlAdapterModel';
+import { useControlAdapterModels } from 'features/controlAdapters/hooks/useControlAdapterModels';
+import { useControlAdapterType } from 'features/controlAdapters/hooks/useControlAdapterType';
+import {
+ controlAdapterCLIPVisionModelChanged,
+ controlAdapterModelChanged,
+} from 'features/controlAdapters/store/controlAdaptersSlice';
+import type { CLIPVisionModel } from 'features/controlAdapters/store/types';
+import { selectGenerationSlice } from 'features/parameters/store/generationSlice';
+import { memo, useCallback, useMemo } from 'react';
+import { useTranslation } from 'react-i18next';
+import type {
+ AnyModelConfig,
+ ControlNetModelConfig,
+ IPAdapterModelConfig,
+ T2IAdapterModelConfig,
+} from 'services/api/types';
+
+type ParamControlAdapterModelProps = {
+ id: string;
+};
+
+const selectMainModel = createMemoizedSelector(selectGenerationSlice, (generation) => generation.model);
+
+const ParamControlAdapterModel = ({ id }: ParamControlAdapterModelProps) => {
+ const isEnabled = useControlAdapterIsEnabled(id);
+ const controlAdapterType = useControlAdapterType(id);
+ const { modelConfig } = useControlAdapterModel(id);
+ const dispatch = useAppDispatch();
+ const currentBaseModel = useAppSelector((s) => s.generation.model?.base);
+ const currentCLIPVisionModel = useControlAdapterCLIPVisionModel(id);
+ const mainModel = useAppSelector(selectMainModel);
+ const { t } = useTranslation();
+
+ const [modelConfigs, { isLoading }] = useControlAdapterModels(controlAdapterType);
+
+ const _onChange = useCallback(
+ (modelConfig: ControlNetModelConfig | IPAdapterModelConfig | T2IAdapterModelConfig | null) => {
+ if (!modelConfig) {
+ return;
+ }
+ dispatch(
+ controlAdapterModelChanged({
+ id,
+ modelConfig,
+ })
+ );
+ },
+ [dispatch, id]
+ );
+
+ const onCLIPVisionModelChange = useCallback<ComboboxOnChange>(
+ (v) => {
+ if (!v?.value) {
+ return;
+ }
+ dispatch(controlAdapterCLIPVisionModelChanged({ id, clipVisionModel: v.value as CLIPVisionModel }));
+ },
+ [dispatch, id]
+ );
+
+ const selectedModel = useMemo(
+ () => (modelConfig && controlAdapterType ? { ...modelConfig, model_type: controlAdapterType } : null),
+ [controlAdapterType, modelConfig]
+ );
+
+ const getIsDisabled = useCallback(
+ (model: AnyModelConfig): boolean => {
+ const isCompatible = currentBaseModel === model.base;
+ const hasMainModel = Boolean(currentBaseModel);
+ return !hasMainModel || !isCompatible;
+ },
+ [currentBaseModel]
+ );
+
+ const { options, value, onChange, noOptionsMessage } = useGroupedModelCombobox({
+ modelConfigs,
+ onChange: _onChange,
+ selectedModel,
+ getIsDisabled,
+ isLoading,
+ });
+
+ const clipVisionOptions = useMemo(
+ () => [
+ { label: 'ViT-H', value: 'ViT-H' },
+ { label: 'ViT-G', value: 'ViT-G' },
+ ],
+ []
+ );
+
+ const clipVisionModel = useMemo(
+ () => clipVisionOptions.find((o) => o.value === currentCLIPVisionModel),
+ [clipVisionOptions, currentCLIPVisionModel]
+ );
+
+ return (
+
+
+
+
+
+
+ {modelConfig?.type === 'ip_adapter' && modelConfig.format === 'checkpoint' && (
+
+
+
+ )}
+
+ );
+};
+
+export default memo(ParamControlAdapterModel);
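`getIsDisabled` above encodes the compatibility rule for the model combobox: an adapter model is selectable only when a main model is chosen and both share the same base architecture. Reduced to a standalone predicate (a sketch for illustration, not the component's actual export):

```typescript
// A control adapter model is disabled when no main model is selected,
// or when its base architecture differs from the main model's base.
const isModelDisabled = (
  mainModelBase: string | undefined,
  adapterModelBase: string
): boolean => {
  const hasMainModel = Boolean(mainModelBase);
  const isCompatible = mainModelBase === adapterModelBase;
  return !hasMainModel || !isCompatible;
};
```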
diff --git a/invokeai/frontend/web/src/features/controlLayers/components/controlAdapterOverrides/ParamControlAdapterWeight.tsx b/invokeai/frontend/web/src/features/controlLayers/components/controlAdapterOverrides/ParamControlAdapterWeight.tsx
new file mode 100644
index 0000000000..5e456fc792
--- /dev/null
+++ b/invokeai/frontend/web/src/features/controlLayers/components/controlAdapterOverrides/ParamControlAdapterWeight.tsx
@@ -0,0 +1,74 @@
+import { CompositeNumberInput, CompositeSlider, FormControl, FormLabel } from '@invoke-ai/ui-library';
+import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
+import { InformationalPopover } from 'common/components/InformationalPopover/InformationalPopover';
+import { useControlAdapterIsEnabled } from 'features/controlAdapters/hooks/useControlAdapterIsEnabled';
+import { useControlAdapterWeight } from 'features/controlAdapters/hooks/useControlAdapterWeight';
+import { controlAdapterWeightChanged } from 'features/controlAdapters/store/controlAdaptersSlice';
+import { isNil } from 'lodash-es';
+import { memo, useCallback } from 'react';
+import { useTranslation } from 'react-i18next';
+
+type ParamControlAdapterWeightProps = {
+ id: string;
+};
+
+const formatValue = (v: number) => v.toFixed(2);
+
+const ParamControlAdapterWeight = ({ id }: ParamControlAdapterWeightProps) => {
+ const { t } = useTranslation();
+ const dispatch = useAppDispatch();
+ const isEnabled = useControlAdapterIsEnabled(id);
+ const weight = useControlAdapterWeight(id);
+ const initial = useAppSelector((s) => s.config.sd.ca.weight.initial);
+ const sliderMin = useAppSelector((s) => s.config.sd.ca.weight.sliderMin);
+ const sliderMax = useAppSelector((s) => s.config.sd.ca.weight.sliderMax);
+ const numberInputMin = useAppSelector((s) => s.config.sd.ca.weight.numberInputMin);
+ const numberInputMax = useAppSelector((s) => s.config.sd.ca.weight.numberInputMax);
+ const coarseStep = useAppSelector((s) => s.config.sd.ca.weight.coarseStep);
+ const fineStep = useAppSelector((s) => s.config.sd.ca.weight.fineStep);
+
+ const onChange = useCallback(
+ (weight: number) => {
+ dispatch(controlAdapterWeightChanged({ id, weight }));
+ },
+ [dispatch, id]
+ );
+
+ if (isNil(weight)) {
+ // should never happen
+ return null;
+ }
+
+ return (
+
+
+ {t('controlnet.weight')}
+
+
+
+
+ );
+};
+
+export default memo(ParamControlAdapterWeight);
+
+const marks = [0, 1, 2];
diff --git a/invokeai/frontend/web/src/features/controlLayers/hooks/layerStateHooks.ts b/invokeai/frontend/web/src/features/controlLayers/hooks/layerStateHooks.ts
new file mode 100644
index 0000000000..b4880d1dc6
--- /dev/null
+++ b/invokeai/frontend/web/src/features/controlLayers/hooks/layerStateHooks.ts
@@ -0,0 +1,81 @@
+import { createSelector } from '@reduxjs/toolkit';
+import { useAppSelector } from 'app/store/storeHooks';
+import {
+ isControlAdapterLayer,
+ isRegionalGuidanceLayer,
+ selectControlLayersSlice,
+} from 'features/controlLayers/store/controlLayersSlice';
+import { useMemo } from 'react';
+import { assert } from 'tsafe';
+
+export const useLayerPositivePrompt = (layerId: string) => {
+ const selectLayer = useMemo(
+ () =>
+ createSelector(selectControlLayersSlice, (controlLayers) => {
+ const layer = controlLayers.present.layers.find((l) => l.id === layerId);
+ assert(isRegionalGuidanceLayer(layer), `Layer ${layerId} not found or not an RP layer`);
+ assert(layer.positivePrompt !== null, `Layer ${layerId} does not have a positive prompt`);
+ return layer.positivePrompt;
+ }),
+ [layerId]
+ );
+ const prompt = useAppSelector(selectLayer);
+ return prompt;
+};
+
+export const useLayerNegativePrompt = (layerId: string) => {
+ const selectLayer = useMemo(
+ () =>
+ createSelector(selectControlLayersSlice, (controlLayers) => {
+ const layer = controlLayers.present.layers.find((l) => l.id === layerId);
+ assert(isRegionalGuidanceLayer(layer), `Layer ${layerId} not found or not an RP layer`);
+ assert(layer.negativePrompt !== null, `Layer ${layerId} does not have a negative prompt`);
+ return layer.negativePrompt;
+ }),
+ [layerId]
+ );
+ const prompt = useAppSelector(selectLayer);
+ return prompt;
+};
+
+export const useLayerIsVisible = (layerId: string) => {
+ const selectLayer = useMemo(
+ () =>
+ createSelector(selectControlLayersSlice, (controlLayers) => {
+ const layer = controlLayers.present.layers.find((l) => l.id === layerId);
+ assert(layer, `Layer ${layerId} not found`);
+ return layer.isEnabled;
+ }),
+ [layerId]
+ );
+ const isVisible = useAppSelector(selectLayer);
+ return isVisible;
+};
+
+export const useLayerType = (layerId: string) => {
+ const selectLayer = useMemo(
+ () =>
+ createSelector(selectControlLayersSlice, (controlLayers) => {
+ const layer = controlLayers.present.layers.find((l) => l.id === layerId);
+ assert(layer, `Layer ${layerId} not found`);
+ return layer.type;
+ }),
+ [layerId]
+ );
+ const type = useAppSelector(selectLayer);
+ return type;
+};
+
+export const useLayerOpacity = (layerId: string) => {
+ const selectLayer = useMemo(
+ () =>
+ createSelector(selectControlLayersSlice, (controlLayers) => {
+ const layer = controlLayers.present.layers.filter(isControlAdapterLayer).find((l) => l.id === layerId);
+ assert(layer, `Layer ${layerId} not found`);
+ return { opacity: Math.round(layer.opacity * 100), isFilterEnabled: layer.isFilterEnabled };
+ }),
+ [layerId]
+ );
+ const opacity = useAppSelector(selectLayer);
+ return opacity;
+};
diff --git a/invokeai/frontend/web/src/features/controlLayers/hooks/mouseEventHooks.ts b/invokeai/frontend/web/src/features/controlLayers/hooks/mouseEventHooks.ts
new file mode 100644
index 0000000000..bab7ef263f
--- /dev/null
+++ b/invokeai/frontend/web/src/features/controlLayers/hooks/mouseEventHooks.ts
@@ -0,0 +1,217 @@
+import { $ctrl, $meta } from '@invoke-ai/ui-library';
+import { useStore } from '@nanostores/react';
+import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
+import { calculateNewBrushSize } from 'features/canvas/hooks/useCanvasZoom';
+import {
+ $cursorPosition,
+ $isMouseDown,
+ $isMouseOver,
+ $lastMouseDownPos,
+ $tool,
+ brushSizeChanged,
+ maskLayerLineAdded,
+ maskLayerPointsAdded,
+ maskLayerRectAdded,
+} from 'features/controlLayers/store/controlLayersSlice';
+import type Konva from 'konva';
+import type { KonvaEventObject } from 'konva/lib/Node';
+import type { Vector2d } from 'konva/lib/types';
+import { useCallback, useRef } from 'react';
+
+const getIsFocused = (stage: Konva.Stage) => {
+ return stage.container().contains(document.activeElement);
+};
+
+export const getScaledFlooredCursorPosition = (stage: Konva.Stage) => {
+ const pointerPosition = stage.getPointerPosition();
+ const stageTransform = stage.getAbsoluteTransform().copy();
+ if (!pointerPosition || !stageTransform) {
+ return;
+ }
+ const scaledCursorPosition = stageTransform.invert().point(pointerPosition);
+ return {
+ x: Math.floor(scaledCursorPosition.x),
+ y: Math.floor(scaledCursorPosition.y),
+ };
+};
+
+const syncCursorPos = (stage: Konva.Stage): Vector2d | null => {
+ const pos = getScaledFlooredCursorPosition(stage);
+ if (!pos) {
+ return null;
+ }
+ $cursorPosition.set(pos);
+ return pos;
+};
+
+const BRUSH_SPACING = 20;
+
+export const useMouseEvents = () => {
+ const dispatch = useAppDispatch();
+ const selectedLayerId = useAppSelector((s) => s.controlLayers.present.selectedLayerId);
+ const tool = useStore($tool);
+ const lastCursorPosRef = useRef<[number, number] | null>(null);
+ const shouldInvertBrushSizeScrollDirection = useAppSelector((s) => s.canvas.shouldInvertBrushSizeScrollDirection);
+ const brushSize = useAppSelector((s) => s.controlLayers.present.brushSize);
+
+ const onMouseDown = useCallback(
+ (e: KonvaEventObject<MouseEvent>) => {
+ const stage = e.target.getStage();
+ if (!stage) {
+ return;
+ }
+ const pos = syncCursorPos(stage);
+ if (!pos) {
+ return;
+ }
+ $isMouseDown.set(true);
+ $lastMouseDownPos.set(pos);
+ if (!selectedLayerId) {
+ return;
+ }
+ if (tool === 'brush' || tool === 'eraser') {
+ dispatch(
+ maskLayerLineAdded({
+ layerId: selectedLayerId,
+ points: [pos.x, pos.y, pos.x, pos.y],
+ tool,
+ })
+ );
+ }
+ },
+ [dispatch, selectedLayerId, tool]
+ );
+
+ const onMouseUp = useCallback(
+ (e: KonvaEventObject<MouseEvent>) => {
+ const stage = e.target.getStage();
+ if (!stage) {
+ return;
+ }
+ $isMouseDown.set(false);
+ const pos = $cursorPosition.get();
+ const lastPos = $lastMouseDownPos.get();
+ const tool = $tool.get();
+ if (pos && lastPos && selectedLayerId && tool === 'rect') {
+ dispatch(
+ maskLayerRectAdded({
+ layerId: selectedLayerId,
+ rect: {
+ x: Math.min(pos.x, lastPos.x),
+ y: Math.min(pos.y, lastPos.y),
+ width: Math.abs(pos.x - lastPos.x),
+ height: Math.abs(pos.y - lastPos.y),
+ },
+ })
+ );
+ }
+ $lastMouseDownPos.set(null);
+ },
+ [dispatch, selectedLayerId]
+ );
+
+ const onMouseMove = useCallback(
+ (e: KonvaEventObject<MouseEvent>) => {
+ const stage = e.target.getStage();
+ if (!stage) {
+ return;
+ }
+ const pos = syncCursorPos(stage);
+ if (!pos || !selectedLayerId) {
+ return;
+ }
+ if (getIsFocused(stage) && $isMouseOver.get() && $isMouseDown.get() && (tool === 'brush' || tool === 'eraser')) {
+ if (lastCursorPosRef.current) {
+ // Dispatching redux events impacts perf substantially - using brush spacing keeps dispatches to a reasonable number
+ if (Math.hypot(lastCursorPosRef.current[0] - pos.x, lastCursorPosRef.current[1] - pos.y) < BRUSH_SPACING) {
+ return;
+ }
+ }
+ lastCursorPosRef.current = [pos.x, pos.y];
+ dispatch(maskLayerPointsAdded({ layerId: selectedLayerId, point: lastCursorPosRef.current }));
+ }
+ },
+ [dispatch, selectedLayerId, tool]
+ );
+
+ const onMouseLeave = useCallback(
+ (e: KonvaEventObject<MouseEvent>) => {
+ const stage = e.target.getStage();
+ if (!stage) {
+ return;
+ }
+ const pos = syncCursorPos(stage);
+ if (
+ pos &&
+ selectedLayerId &&
+ getIsFocused(stage) &&
+ $isMouseOver.get() &&
+ $isMouseDown.get() &&
+ (tool === 'brush' || tool === 'eraser')
+ ) {
+ dispatch(maskLayerPointsAdded({ layerId: selectedLayerId, point: [pos.x, pos.y] }));
+ }
+ $isMouseOver.set(false);
+ $isMouseDown.set(false);
+ $cursorPosition.set(null);
+ },
+ [selectedLayerId, tool, dispatch]
+ );
+
+ const onMouseEnter = useCallback(
+ (e: KonvaEventObject<MouseEvent>) => {
+ const stage = e.target.getStage();
+ if (!stage) {
+ return;
+ }
+ $isMouseOver.set(true);
+ const pos = syncCursorPos(stage);
+ if (!pos) {
+ return;
+ }
+ if (!getIsFocused(stage)) {
+ return;
+ }
+ if (e.evt.buttons !== 1) {
+ $isMouseDown.set(false);
+ } else {
+ $isMouseDown.set(true);
+ if (!selectedLayerId) {
+ return;
+ }
+ if (tool === 'brush' || tool === 'eraser') {
+ dispatch(
+ maskLayerLineAdded({
+ layerId: selectedLayerId,
+ points: [pos.x, pos.y, pos.x, pos.y],
+ tool,
+ })
+ );
+ }
+ }
+ },
+ [dispatch, selectedLayerId, tool]
+ );
+
+ const onMouseWheel = useCallback(
+ (e: KonvaEventObject<WheelEvent>) => {
+ e.evt.preventDefault();
+
+ // Check whether the ctrl (or meta) key is held, so that brush size
+ // can be adjusted with ctrl + scroll up/down
+
+ // Invert the delta if the property is set to true
+ let delta = e.evt.deltaY;
+ if (shouldInvertBrushSizeScrollDirection) {
+ delta = -delta;
+ }
+
+ if ($ctrl.get() || $meta.get()) {
+ dispatch(brushSizeChanged(calculateNewBrushSize(brushSize, delta)));
+ }
+ },
+ [shouldInvertBrushSizeScrollDirection, brushSize, dispatch]
+ );
+
+ return { onMouseDown, onMouseUp, onMouseMove, onMouseEnter, onMouseLeave, onMouseWheel };
+};
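The `BRUSH_SPACING` check in `onMouseMove` is a distance-based throttle: a new point is recorded only once the cursor has traveled a minimum Euclidean distance from the last recorded point, which keeps Redux dispatches per stroke bounded. The same idea, framework-free (a sketch, not the hook itself):

```typescript
// Record a point only when it is at least `spacing` px from the last
// recorded one; returns true when the point should be dispatched.
const makeSpacedRecorder = (spacing: number) => {
  let last: [number, number] | null = null;
  return (x: number, y: number): boolean => {
    if (last && Math.hypot(last[0] - x, last[1] - y) < spacing) {
      return false; // too close to the previous point: skip this one
    }
    last = [x, y];
    return true;
  };
};
```

The stroke still looks continuous because Konva interpolates a line between the recorded points.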
diff --git a/invokeai/frontend/web/src/features/controlLayers/hooks/useControlLayersTitle.ts b/invokeai/frontend/web/src/features/controlLayers/hooks/useControlLayersTitle.ts
new file mode 100644
index 0000000000..93c8bec8a6
--- /dev/null
+++ b/invokeai/frontend/web/src/features/controlLayers/hooks/useControlLayersTitle.ts
@@ -0,0 +1,31 @@
+import { createSelector } from '@reduxjs/toolkit';
+import { useAppSelector } from 'app/store/storeHooks';
+import { isRegionalGuidanceLayer, selectControlLayersSlice } from 'features/controlLayers/store/controlLayersSlice';
+import { useMemo } from 'react';
+import { useTranslation } from 'react-i18next';
+
+const selectValidLayerCount = createSelector(selectControlLayersSlice, (controlLayers) => {
+ if (!controlLayers.present.isEnabled) {
+ return 0;
+ }
+ const validLayers = controlLayers.present.layers
+ .filter(isRegionalGuidanceLayer)
+ .filter((l) => l.isEnabled)
+ .filter((l) => {
+ const hasTextPrompt = Boolean(l.positivePrompt || l.negativePrompt);
+ const hasAtLeastOneImagePrompt = l.ipAdapterIds.length > 0;
+ return hasTextPrompt || hasAtLeastOneImagePrompt;
+ });
+
+ return validLayers.length;
+});
+
+export const useControlLayersTitle = () => {
+ const { t } = useTranslation();
+ const validLayerCount = useAppSelector(selectValidLayerCount);
+ const title = useMemo(() => {
+ const suffix = validLayerCount > 0 ? ` (${validLayerCount})` : '';
+ return `${t('controlLayers.controlLayers')}${suffix}`;
+ }, [t, validLayerCount]);
+ return title;
+};
diff --git a/invokeai/frontend/web/src/features/controlLayers/store/controlLayersSlice.ts b/invokeai/frontend/web/src/features/controlLayers/store/controlLayersSlice.ts
new file mode 100644
index 0000000000..6d351d4d0d
--- /dev/null
+++ b/invokeai/frontend/web/src/features/controlLayers/store/controlLayersSlice.ts
@@ -0,0 +1,676 @@
+import type { PayloadAction, UnknownAction } from '@reduxjs/toolkit';
+import { createSlice, isAnyOf } from '@reduxjs/toolkit';
+import type { PersistConfig, RootState } from 'app/store/store';
+import { moveBackward, moveForward, moveToBack, moveToFront } from 'common/util/arrayUtils';
+import { deepClone } from 'common/util/deepClone';
+import { roundToMultiple } from 'common/util/roundDownToMultiple';
+import {
+ controlAdapterImageChanged,
+ controlAdapterProcessedImageChanged,
+ isAnyControlAdapterAdded,
+} from 'features/controlAdapters/store/controlAdaptersSlice';
+import { calculateNewSize } from 'features/parameters/components/ImageSize/calculateNewSize';
+import { initialAspectRatioState } from 'features/parameters/components/ImageSize/constants';
+import type { AspectRatioState } from 'features/parameters/components/ImageSize/types';
+import { modelChanged } from 'features/parameters/store/generationSlice';
+import type { ParameterAutoNegative } from 'features/parameters/types/parameterSchemas';
+import { getIsSizeOptimal, getOptimalDimension } from 'features/parameters/util/optimalDimension';
+import type { IRect, Vector2d } from 'konva/lib/types';
+import { isEqual, partition } from 'lodash-es';
+import { atom } from 'nanostores';
+import type { RgbColor } from 'react-colorful';
+import type { UndoableOptions } from 'redux-undo';
+import { assert } from 'tsafe';
+import { v4 as uuidv4 } from 'uuid';
+
+import type {
+ ControlAdapterLayer,
+ ControlLayersState,
+ DrawingTool,
+ IPAdapterLayer,
+ Layer,
+ RegionalGuidanceLayer,
+ Tool,
+ VectorMaskLine,
+ VectorMaskRect,
+} from './types';
+
+export const initialControlLayersState: ControlLayersState = {
+ _version: 1,
+ selectedLayerId: null,
+ brushSize: 100,
+ layers: [],
+ globalMaskLayerOpacity: 0.3, // this globally changes all mask layers' opacity
+ isEnabled: true,
+ positivePrompt: '',
+ negativePrompt: '',
+ positivePrompt2: '',
+ negativePrompt2: '',
+ shouldConcatPrompts: true,
+ initialImage: null,
+ size: {
+ width: 512,
+ height: 512,
+ aspectRatio: deepClone(initialAspectRatioState),
+ },
+};
+
+const isLine = (obj: VectorMaskLine | VectorMaskRect): obj is VectorMaskLine => obj.type === 'vector_mask_line';
+export const isRegionalGuidanceLayer = (layer?: Layer): layer is RegionalGuidanceLayer =>
+ layer?.type === 'regional_guidance_layer';
+export const isControlAdapterLayer = (layer?: Layer): layer is ControlAdapterLayer =>
+ layer?.type === 'control_adapter_layer';
+export const isIPAdapterLayer = (layer?: Layer): layer is IPAdapterLayer => layer?.type === 'ip_adapter_layer';
+export const isRenderableLayer = (layer?: Layer): layer is RegionalGuidanceLayer | ControlAdapterLayer =>
+ layer?.type === 'regional_guidance_layer' || layer?.type === 'control_adapter_layer';
+const resetLayer = (layer: Layer) => {
+ if (layer.type === 'regional_guidance_layer') {
+ layer.maskObjects = [];
+ layer.bbox = null;
+ layer.isEnabled = true;
+ layer.needsPixelBbox = false;
+ layer.bboxNeedsUpdate = false;
+ return;
+ }
+
+ if (layer.type === 'control_adapter_layer') {
+ // TODO
+ }
+};
+const getVectorMaskPreviewColor = (state: ControlLayersState): RgbColor => {
+ const vmLayers = state.layers.filter(isRegionalGuidanceLayer);
+ const lastColor = vmLayers[vmLayers.length - 1]?.previewColor;
+ return LayerColors.next(lastColor);
+};
+
+export const controlLayersSlice = createSlice({
+ name: 'controlLayers',
+ initialState: initialControlLayersState,
+ reducers: {
+ //#region All Layers
+ regionalGuidanceLayerAdded: (state, action: PayloadAction<{ layerId: string }>) => {
+ const { layerId } = action.payload;
+ const layer: RegionalGuidanceLayer = {
+ id: getRegionalGuidanceLayerId(layerId),
+ type: 'regional_guidance_layer',
+ isEnabled: true,
+ bbox: null,
+ bboxNeedsUpdate: false,
+ maskObjects: [],
+ previewColor: getVectorMaskPreviewColor(state),
+ x: 0,
+ y: 0,
+ autoNegative: 'invert',
+ needsPixelBbox: false,
+ positivePrompt: '',
+ negativePrompt: null,
+ ipAdapterIds: [],
+ isSelected: true,
+ };
+ state.layers.push(layer);
+ state.selectedLayerId = layer.id;
+ for (const layer of state.layers.filter(isRenderableLayer)) {
+ if (layer.id !== layerId) {
+ layer.isSelected = false;
+ }
+ }
+ return;
+ },
+ ipAdapterLayerAdded: (state, action: PayloadAction<{ layerId: string; ipAdapterId: string }>) => {
+ const { layerId, ipAdapterId } = action.payload;
+ const layer: IPAdapterLayer = {
+ id: getIPAdapterLayerId(layerId),
+ type: 'ip_adapter_layer',
+ isEnabled: true,
+ ipAdapterId,
+ };
+ state.layers.push(layer);
+ return;
+ },
+ controlAdapterLayerAdded: (state, action: PayloadAction<{ layerId: string; controlNetId: string }>) => {
+ const { layerId, controlNetId } = action.payload;
+ const layer: ControlAdapterLayer = {
+ id: getControlNetLayerId(layerId),
+ type: 'control_adapter_layer',
+ controlNetId,
+ x: 0,
+ y: 0,
+ bbox: null,
+ bboxNeedsUpdate: false,
+ isEnabled: true,
+ imageName: null,
+ opacity: 1,
+ isSelected: true,
+ isFilterEnabled: true,
+ };
+ state.layers.push(layer);
+ state.selectedLayerId = layer.id;
+      // Deselect all other layers. The new layer's id is prefixed, so compare against `layer.id`, not the raw `layerId`.
+      for (const other of state.layers.filter(isRenderableLayer)) {
+        if (other.id !== layer.id) {
+          other.isSelected = false;
+        }
+      }
+ return;
+ },
+    layerSelected: (state, action: PayloadAction<string>) => {
+ for (const layer of state.layers.filter(isRenderableLayer)) {
+ if (layer.id === action.payload) {
+ layer.isSelected = true;
+ state.selectedLayerId = action.payload;
+ } else {
+ layer.isSelected = false;
+ }
+ }
+ },
+    layerVisibilityToggled: (state, action: PayloadAction<string>) => {
+ const layer = state.layers.find((l) => l.id === action.payload);
+ if (layer) {
+ layer.isEnabled = !layer.isEnabled;
+ }
+ },
+ layerTranslated: (state, action: PayloadAction<{ layerId: string; x: number; y: number }>) => {
+ const { layerId, x, y } = action.payload;
+ const layer = state.layers.find((l) => l.id === layerId);
+ if (isRenderableLayer(layer)) {
+ layer.x = x;
+ layer.y = y;
+ }
+ },
+ layerBboxChanged: (state, action: PayloadAction<{ layerId: string; bbox: IRect | null }>) => {
+ const { layerId, bbox } = action.payload;
+ const layer = state.layers.find((l) => l.id === layerId);
+ if (isRenderableLayer(layer)) {
+ layer.bbox = bbox;
+ layer.bboxNeedsUpdate = false;
+ if (bbox === null && layer.type === 'regional_guidance_layer') {
+ // The layer was fully erased, empty its objects to prevent accumulation of invisible objects
+ layer.maskObjects = [];
+ layer.needsPixelBbox = false;
+ }
+ }
+ },
+    layerReset: (state, action: PayloadAction<string>) => {
+ const layer = state.layers.find((l) => l.id === action.payload);
+ if (layer) {
+ resetLayer(layer);
+ }
+ },
+    layerDeleted: (state, action: PayloadAction<string>) => {
+ state.layers = state.layers.filter((l) => l.id !== action.payload);
+ state.selectedLayerId = state.layers[0]?.id ?? null;
+ },
+    layerMovedForward: (state, action: PayloadAction<string>) => {
+ const cb = (l: Layer) => l.id === action.payload;
+ const [renderableLayers, ipAdapterLayers] = partition(state.layers, isRenderableLayer);
+ moveForward(renderableLayers, cb);
+ state.layers = [...ipAdapterLayers, ...renderableLayers];
+ },
+    layerMovedToFront: (state, action: PayloadAction<string>) => {
+ const cb = (l: Layer) => l.id === action.payload;
+ const [renderableLayers, ipAdapterLayers] = partition(state.layers, isRenderableLayer);
+ // Because the layers are in reverse order, moving to the front is equivalent to moving to the back
+ moveToBack(renderableLayers, cb);
+ state.layers = [...ipAdapterLayers, ...renderableLayers];
+ },
+    layerMovedBackward: (state, action: PayloadAction<string>) => {
+ const cb = (l: Layer) => l.id === action.payload;
+ const [renderableLayers, ipAdapterLayers] = partition(state.layers, isRenderableLayer);
+ moveBackward(renderableLayers, cb);
+ state.layers = [...ipAdapterLayers, ...renderableLayers];
+ },
+    layerMovedToBack: (state, action: PayloadAction<string>) => {
+ const cb = (l: Layer) => l.id === action.payload;
+ const [renderableLayers, ipAdapterLayers] = partition(state.layers, isRenderableLayer);
+ // Because the layers are in reverse order, moving to the back is equivalent to moving to the front
+ moveToFront(renderableLayers, cb);
+ state.layers = [...ipAdapterLayers, ...renderableLayers];
+ },
+ selectedLayerReset: (state) => {
+ const layer = state.layers.find((l) => l.id === state.selectedLayerId);
+ if (layer) {
+ resetLayer(layer);
+ }
+ },
+ selectedLayerDeleted: (state) => {
+ state.layers = state.layers.filter((l) => l.id !== state.selectedLayerId);
+ state.selectedLayerId = state.layers[0]?.id ?? null;
+ },
+ layerOpacityChanged: (state, action: PayloadAction<{ layerId: string; opacity: number }>) => {
+ const { layerId, opacity } = action.payload;
+ const layer = state.layers.filter(isControlAdapterLayer).find((l) => l.id === layerId);
+ if (layer) {
+ layer.opacity = opacity;
+ }
+ },
+ //#endregion
+
+ //#region CA Layers
+ isFilterEnabledChanged: (state, action: PayloadAction<{ layerId: string; isFilterEnabled: boolean }>) => {
+ const { layerId, isFilterEnabled } = action.payload;
+ const layer = state.layers.filter(isControlAdapterLayer).find((l) => l.id === layerId);
+ if (layer) {
+ layer.isFilterEnabled = isFilterEnabled;
+ }
+ },
+ //#endregion
+
+ //#region Mask Layers
+ maskLayerPositivePromptChanged: (state, action: PayloadAction<{ layerId: string; prompt: string | null }>) => {
+ const { layerId, prompt } = action.payload;
+ const layer = state.layers.find((l) => l.id === layerId);
+ if (layer?.type === 'regional_guidance_layer') {
+ layer.positivePrompt = prompt;
+ }
+ },
+ maskLayerNegativePromptChanged: (state, action: PayloadAction<{ layerId: string; prompt: string | null }>) => {
+ const { layerId, prompt } = action.payload;
+ const layer = state.layers.find((l) => l.id === layerId);
+ if (layer?.type === 'regional_guidance_layer') {
+ layer.negativePrompt = prompt;
+ }
+ },
+ maskLayerIPAdapterAdded: (state, action: PayloadAction<{ layerId: string; ipAdapterId: string }>) => {
+ const { layerId, ipAdapterId } = action.payload;
+ const layer = state.layers.find((l) => l.id === layerId);
+ if (layer?.type === 'regional_guidance_layer') {
+ layer.ipAdapterIds.push(ipAdapterId);
+ }
+ },
+ maskLayerIPAdapterDeleted: (state, action: PayloadAction<{ layerId: string; ipAdapterId: string }>) => {
+ const { layerId, ipAdapterId } = action.payload;
+ const layer = state.layers.find((l) => l.id === layerId);
+ if (layer?.type === 'regional_guidance_layer') {
+ layer.ipAdapterIds = layer.ipAdapterIds.filter((id) => id !== ipAdapterId);
+ }
+ },
+ maskLayerPreviewColorChanged: (state, action: PayloadAction<{ layerId: string; color: RgbColor }>) => {
+ const { layerId, color } = action.payload;
+ const layer = state.layers.find((l) => l.id === layerId);
+ if (layer?.type === 'regional_guidance_layer') {
+ layer.previewColor = color;
+ }
+ },
+ maskLayerLineAdded: {
+ reducer: (
+ state,
+ action: PayloadAction<
+ { layerId: string; points: [number, number, number, number]; tool: DrawingTool },
+ string,
+ { uuid: string }
+ >
+ ) => {
+ const { layerId, points, tool } = action.payload;
+ const layer = state.layers.find((l) => l.id === layerId);
+ if (layer?.type === 'regional_guidance_layer') {
+ const lineId = getRegionalGuidanceLayerLineId(layer.id, action.meta.uuid);
+ layer.maskObjects.push({
+ type: 'vector_mask_line',
+ tool: tool,
+ id: lineId,
+ // Points must be offset by the layer's x and y coordinates
+ // TODO: Handle this in the event listener?
+ points: [points[0] - layer.x, points[1] - layer.y, points[2] - layer.x, points[3] - layer.y],
+ strokeWidth: state.brushSize,
+ });
+ layer.bboxNeedsUpdate = true;
+ if (!layer.needsPixelBbox && tool === 'eraser') {
+ layer.needsPixelBbox = true;
+ }
+ }
+ },
+ prepare: (payload: { layerId: string; points: [number, number, number, number]; tool: DrawingTool }) => ({
+ payload,
+ meta: { uuid: uuidv4() },
+ }),
+ },
+ maskLayerPointsAdded: (state, action: PayloadAction<{ layerId: string; point: [number, number] }>) => {
+ const { layerId, point } = action.payload;
+ const layer = state.layers.find((l) => l.id === layerId);
+ if (layer?.type === 'regional_guidance_layer') {
+ const lastLine = layer.maskObjects.findLast(isLine);
+ if (!lastLine) {
+ return;
+ }
+ // Points must be offset by the layer's x and y coordinates
+ // TODO: Handle this in the event listener
+ lastLine.points.push(point[0] - layer.x, point[1] - layer.y);
+ layer.bboxNeedsUpdate = true;
+ }
+ },
+ maskLayerRectAdded: {
+ reducer: (state, action: PayloadAction<{ layerId: string; rect: IRect }, string, { uuid: string }>) => {
+ const { layerId, rect } = action.payload;
+ if (rect.height === 0 || rect.width === 0) {
+ // Ignore zero-area rectangles
+ return;
+ }
+ const layer = state.layers.find((l) => l.id === layerId);
+ if (layer?.type === 'regional_guidance_layer') {
+ const id = getMaskedGuidnaceLayerRectId(layer.id, action.meta.uuid);
+ layer.maskObjects.push({
+ type: 'vector_mask_rect',
+ id,
+ x: rect.x - layer.x,
+ y: rect.y - layer.y,
+ width: rect.width,
+ height: rect.height,
+ });
+ layer.bboxNeedsUpdate = true;
+ }
+ },
+ prepare: (payload: { layerId: string; rect: IRect }) => ({ payload, meta: { uuid: uuidv4() } }),
+ },
+ maskLayerAutoNegativeChanged: (
+ state,
+ action: PayloadAction<{ layerId: string; autoNegative: ParameterAutoNegative }>
+ ) => {
+ const { layerId, autoNegative } = action.payload;
+ const layer = state.layers.find((l) => l.id === layerId);
+ if (layer?.type === 'regional_guidance_layer') {
+ layer.autoNegative = autoNegative;
+ }
+ },
+ //#endregion
+
+ //#region Base Layer
+    positivePromptChanged: (state, action: PayloadAction<string>) => {
+      state.positivePrompt = action.payload;
+    },
+    negativePromptChanged: (state, action: PayloadAction<string>) => {
+      state.negativePrompt = action.payload;
+    },
+    positivePrompt2Changed: (state, action: PayloadAction<string>) => {
+      state.positivePrompt2 = action.payload;
+    },
+    negativePrompt2Changed: (state, action: PayloadAction<string>) => {
+      state.negativePrompt2 = action.payload;
+    },
+    shouldConcatPromptsChanged: (state, action: PayloadAction<boolean>) => {
+      state.shouldConcatPrompts = action.payload;
+    },
+ widthChanged: (state, action: PayloadAction<{ width: number; updateAspectRatio?: boolean }>) => {
+ const { width, updateAspectRatio } = action.payload;
+ state.size.width = width;
+ if (updateAspectRatio) {
+ state.size.aspectRatio.value = width / state.size.height;
+ state.size.aspectRatio.id = 'Free';
+ state.size.aspectRatio.isLocked = false;
+ }
+ },
+ heightChanged: (state, action: PayloadAction<{ height: number; updateAspectRatio?: boolean }>) => {
+ const { height, updateAspectRatio } = action.payload;
+ state.size.height = height;
+ if (updateAspectRatio) {
+ state.size.aspectRatio.value = state.size.width / height;
+ state.size.aspectRatio.id = 'Free';
+ state.size.aspectRatio.isLocked = false;
+ }
+ },
+    aspectRatioChanged: (state, action: PayloadAction<AspectRatioState>) => {
+ state.size.aspectRatio = action.payload;
+ },
+ //#endregion
+
+ //#region General
+    brushSizeChanged: (state, action: PayloadAction<number>) => {
+      state.brushSize = Math.round(action.payload);
+    },
+    globalMaskLayerOpacityChanged: (state, action: PayloadAction<number>) => {
+      state.globalMaskLayerOpacity = action.payload;
+    },
+    isEnabledChanged: (state, action: PayloadAction<boolean>) => {
+      state.isEnabled = action.payload;
+    },
+ undo: (state) => {
+ // Invalidate the bbox for all layers to prevent stale bboxes
+ for (const layer of state.layers.filter(isRenderableLayer)) {
+ layer.bboxNeedsUpdate = true;
+ }
+ },
+ redo: (state) => {
+ // Invalidate the bbox for all layers to prevent stale bboxes
+ for (const layer of state.layers.filter(isRenderableLayer)) {
+ layer.bboxNeedsUpdate = true;
+ }
+ },
+ //#endregion
+ },
+ extraReducers(builder) {
+ builder.addCase(modelChanged, (state, action) => {
+ const newModel = action.payload;
+ if (!newModel || action.meta.previousModel?.base === newModel.base) {
+ // Model was cleared or the base didn't change
+ return;
+ }
+ const optimalDimension = getOptimalDimension(newModel);
+ if (getIsSizeOptimal(state.size.width, state.size.height, optimalDimension)) {
+ return;
+ }
+ const { width, height } = calculateNewSize(state.size.aspectRatio.value, optimalDimension * optimalDimension);
+ state.size.width = width;
+ state.size.height = height;
+ });
+
+ builder.addCase(controlAdapterImageChanged, (state, action) => {
+ const { id, controlImage } = action.payload;
+ const layer = state.layers.filter(isControlAdapterLayer).find((l) => l.controlNetId === id);
+ if (layer) {
+ layer.bbox = null;
+ layer.bboxNeedsUpdate = true;
+ layer.isEnabled = true;
+ layer.imageName = controlImage?.image_name ?? null;
+ }
+ });
+
+ builder.addCase(controlAdapterProcessedImageChanged, (state, action) => {
+ const { id, processedControlImage } = action.payload;
+ const layer = state.layers.filter(isControlAdapterLayer).find((l) => l.controlNetId === id);
+ if (layer) {
+ layer.bbox = null;
+ layer.bboxNeedsUpdate = true;
+ layer.isEnabled = true;
+ layer.imageName = processedControlImage?.image_name ?? null;
+ }
+ });
+
+ // TODO: This is a temp fix to reduce issues with T2I adapter having a different downscaling
+ // factor than the UNet. Hopefully we get an upstream fix in diffusers.
+ builder.addMatcher(isAnyControlAdapterAdded, (state, action) => {
+ if (action.payload.type === 't2i_adapter') {
+ state.size.width = roundToMultiple(state.size.width, 64);
+ state.size.height = roundToMultiple(state.size.height, 64);
+ }
+ });
+ },
+});
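`roundToMultiple`, used by the T2I-adapter matcher above, is imported from elsewhere in the codebase and not shown in this diff. For the matcher to behave as described it presumably snaps a dimension to the nearest multiple of the given factor; a hypothetical sketch of that assumed behavior:

```typescript
// Hypothetical stand-in for the roundToMultiple helper used by the
// isAnyControlAdapterAdded matcher: rounds to the *nearest* multiple,
// so dimensions may round up or down.
const roundToMultiple = (value: number, multiple: number): number =>
  Math.round(value / multiple) * multiple;
```

Under this assumption, a 1000x1000 canvas would become 1024x1024 when a T2I adapter is added, while a 512x512 canvas is left untouched.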
+
+/**
+ * This class is used to cycle through a set of colors for the prompt region layers.
+ */
+class LayerColors {
+ static COLORS: RgbColor[] = [
+ { r: 121, g: 157, b: 219 }, // rgb(121, 157, 219)
+ { r: 131, g: 214, b: 131 }, // rgb(131, 214, 131)
+ { r: 250, g: 225, b: 80 }, // rgb(250, 225, 80)
+ { r: 220, g: 144, b: 101 }, // rgb(220, 144, 101)
+ { r: 224, g: 117, b: 117 }, // rgb(224, 117, 117)
+ { r: 213, g: 139, b: 202 }, // rgb(213, 139, 202)
+ { r: 161, g: 120, b: 214 }, // rgb(161, 120, 214)
+ ];
+ static i = this.COLORS.length - 1;
+ /**
+ * Get the next color in the sequence. If a known color is provided, the next color will be the one after it.
+ */
+ static next(currentColor?: RgbColor): RgbColor {
+ if (currentColor) {
+ const i = this.COLORS.findIndex((c) => isEqual(c, currentColor));
+ if (i !== -1) {
+ this.i = i;
+ }
+ }
+ this.i = (this.i + 1) % this.COLORS.length;
+ const color = this.COLORS[this.i];
+ assert(color);
+ return color;
+ }
+}
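A self-contained sketch of the cycling behavior `LayerColors.next` implements, with a truncated palette and lodash's `isEqual` inlined: the cursor starts past the last slot so the first call wraps to the first color, and passing a known color re-anchors the cursor so the next layer gets the color after it.

```typescript
// Sketch of LayerColors.next with a 3-color palette (illustrative values
// taken from the palette above).
type Rgb = { r: number; g: number; b: number };

const COLORS: Rgb[] = [
  { r: 121, g: 157, b: 219 },
  { r: 131, g: 214, b: 131 },
  { r: 250, g: 225, b: 80 },
];

// Start at the last index so the first call to next() wraps to COLORS[0].
let cursor = COLORS.length - 1;

const isEqual = (a: Rgb, b: Rgb) => a.r === b.r && a.g === b.g && a.b === b.b;

const next = (current?: Rgb): Rgb => {
  // If the caller passes a known color, re-anchor the cursor to it.
  if (current) {
    const found = COLORS.findIndex((c) => isEqual(c, current));
    if (found !== -1) {
      cursor = found;
    }
  }
  cursor = (cursor + 1) % COLORS.length;
  return COLORS[cursor]!;
};
```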
+
+export const {
+ // All layer actions
+ layerDeleted,
+ layerMovedBackward,
+ layerMovedForward,
+ layerMovedToBack,
+ layerMovedToFront,
+ layerReset,
+ layerSelected,
+ layerTranslated,
+ layerBboxChanged,
+ layerVisibilityToggled,
+ selectedLayerReset,
+ selectedLayerDeleted,
+ regionalGuidanceLayerAdded,
+ ipAdapterLayerAdded,
+ controlAdapterLayerAdded,
+ layerOpacityChanged,
+ // CA layer actions
+ isFilterEnabledChanged,
+ // Mask layer actions
+ maskLayerLineAdded,
+ maskLayerPointsAdded,
+ maskLayerRectAdded,
+ maskLayerNegativePromptChanged,
+ maskLayerPositivePromptChanged,
+ maskLayerIPAdapterAdded,
+ maskLayerIPAdapterDeleted,
+ maskLayerAutoNegativeChanged,
+ maskLayerPreviewColorChanged,
+ // Base layer actions
+ positivePromptChanged,
+ negativePromptChanged,
+ positivePrompt2Changed,
+ negativePrompt2Changed,
+ shouldConcatPromptsChanged,
+ widthChanged,
+ heightChanged,
+ aspectRatioChanged,
+ // General actions
+ brushSizeChanged,
+ globalMaskLayerOpacityChanged,
+ undo,
+ redo,
+} = controlLayersSlice.actions;
+
+export const selectAllControlAdapterIds = (controlLayers: ControlLayersState) =>
+ controlLayers.layers.flatMap((l) => {
+ if (l.type === 'control_adapter_layer') {
+ return [l.controlNetId];
+ }
+ if (l.type === 'ip_adapter_layer') {
+ return [l.ipAdapterId];
+ }
+ if (l.type === 'regional_guidance_layer') {
+ return l.ipAdapterIds;
+ }
+ return [];
+ });
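`selectAllControlAdapterIds` fans each layer out to zero or more adapter ids via `flatMap`. A minimal sketch of the same pattern with simplified, illustrative layer shapes (not the real types):

```typescript
// Illustrative stand-ins for the three layer variants: control adapter and
// IP adapter layers each contribute one id; regional guidance layers may
// contribute several.
type DemoLayer =
  | { type: 'control'; controlNetId: string }
  | { type: 'ip'; ipAdapterId: string }
  | { type: 'regional'; ipAdapterIds: string[] };

const allAdapterIds = (layers: DemoLayer[]): string[] =>
  layers.flatMap((l) => {
    if (l.type === 'control') {
      return [l.controlNetId];
    }
    if (l.type === 'ip') {
      return [l.ipAdapterId];
    }
    return l.ipAdapterIds;
  });
```

Returning an array from every branch lets `flatMap` flatten one level, so a layer with no adapters could simply return `[]`.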
+
+export const selectControlLayersSlice = (state: RootState) => state.controlLayers;
+
+/* eslint-disable-next-line @typescript-eslint/no-explicit-any */
+const migrateControlLayersState = (state: any): any => {
+ return state;
+};
+
+export const $isMouseDown = atom(false);
+export const $isMouseOver = atom(false);
+export const $lastMouseDownPos = atom<Vector2d | null>(null);
+export const $tool = atom<Tool>('brush');
+export const $cursorPosition = atom<Vector2d | null>(null);
+
+// IDs for singleton Konva layers and objects
+export const TOOL_PREVIEW_LAYER_ID = 'tool_preview_layer';
+export const TOOL_PREVIEW_BRUSH_GROUP_ID = 'tool_preview_layer.brush_group';
+export const TOOL_PREVIEW_BRUSH_FILL_ID = 'tool_preview_layer.brush_fill';
+export const TOOL_PREVIEW_BRUSH_BORDER_INNER_ID = 'tool_preview_layer.brush_border_inner';
+export const TOOL_PREVIEW_BRUSH_BORDER_OUTER_ID = 'tool_preview_layer.brush_border_outer';
+export const TOOL_PREVIEW_RECT_ID = 'tool_preview_layer.rect';
+export const BACKGROUND_LAYER_ID = 'background_layer';
+export const BACKGROUND_RECT_ID = 'background_layer.rect';
+export const NO_LAYERS_MESSAGE_LAYER_ID = 'no_layers_message';
+
+// Names (aka classes) for Konva layers and objects
+export const CONTROLNET_LAYER_NAME = 'control_adapter_layer';
+export const CONTROLNET_LAYER_IMAGE_NAME = 'control_adapter_layer.image';
+export const regional_guidance_layer_NAME = 'regional_guidance_layer';
+export const regional_guidance_layer_LINE_NAME = 'regional_guidance_layer.line';
+export const regional_guidance_layer_OBJECT_GROUP_NAME = 'regional_guidance_layer.object_group';
+export const regional_guidance_layer_RECT_NAME = 'regional_guidance_layer.rect';
+export const LAYER_BBOX_NAME = 'layer.bbox';
+
+// Getters for non-singleton layer and object IDs
+const getRegionalGuidanceLayerId = (layerId: string) => `${regional_guidance_layer_NAME}_${layerId}`;
+const getRegionalGuidanceLayerLineId = (layerId: string, lineId: string) => `${layerId}.line_${lineId}`;
+const getMaskedGuidnaceLayerRectId = (layerId: string, rectId: string) => `${layerId}.rect_${rectId}`;
+export const getRegionalGuidanceLayerObjectGroupId = (layerId: string, groupId: string) =>
+ `${layerId}.objectGroup_${groupId}`;
+export const getLayerBboxId = (layerId: string) => `${layerId}.bbox`;
+const getControlNetLayerId = (layerId: string) => `control_adapter_layer_${layerId}`;
+export const getControlNetLayerImageId = (layerId: string, imageName: string) => `${layerId}.image_${imageName}`;
+const getIPAdapterLayerId = (layerId: string) => `ip_adapter_layer_${layerId}`;
+
+export const controlLayersPersistConfig: PersistConfig<ControlLayersState> = {
+ name: controlLayersSlice.name,
+ initialState: initialControlLayersState,
+ migrate: migrateControlLayersState,
+ persistDenylist: [],
+};
+
+// These actions are _individually_ grouped together as single undoable actions
+const undoableGroupByMatcher = isAnyOf(
+ layerTranslated,
+ brushSizeChanged,
+ globalMaskLayerOpacityChanged,
+ maskLayerPositivePromptChanged,
+ maskLayerNegativePromptChanged,
+ maskLayerPreviewColorChanged
+);
+
+// These tokens are used to group line-drawing actions into logical lines below (constants to avoid typos)
+const LINE_1 = 'LINE_1';
+const LINE_2 = 'LINE_2';
+
+export const controlLayersUndoableConfig: UndoableOptions<ControlLayersState, UnknownAction> = {
+ limit: 64,
+ undoType: controlLayersSlice.actions.undo.type,
+ redoType: controlLayersSlice.actions.redo.type,
+ groupBy: (action, state, history) => {
+ // Lines are started with `maskLayerLineAdded` and may have any number of subsequent `maskLayerPointsAdded` events.
+ // We can use a double-buffer-esque trick to group each "logical" line as a single undoable action, without grouping
+ // separate logical lines as a single undo action.
+ if (maskLayerLineAdded.match(action)) {
+ return history.group === LINE_1 ? LINE_2 : LINE_1;
+ }
+ if (maskLayerPointsAdded.match(action)) {
+ if (history.group === LINE_1 || history.group === LINE_2) {
+ return history.group;
+ }
+ }
+ if (undoableGroupByMatcher(action)) {
+ return action.type;
+ }
+ return null;
+ },
+ filter: (action, _state, _history) => {
+ // Ignore all actions from other slices
+ if (!action.type.startsWith(controlLayersSlice.name)) {
+ return false;
+ }
+ // This action is triggered on state changes, including when we undo. If we do not ignore this action, when we
+ // undo, this action triggers and empties the future states array. Therefore, we must ignore this action.
+ if (layerBboxChanged.match(action)) {
+ return false;
+ }
+ return true;
+ },
+};
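To make the double-buffer grouping trick concrete, here is a stripped-down model of the `groupBy` logic above (redux-undo supplies the current group via `history.group`; this sketch threads it in explicitly):

```typescript
// Minimal model of the line-grouping logic: a new line always takes the
// token the previous line did NOT use, so adjacent lines never merge into
// one undo step, while point events join the line that started them.
type DemoAction = { type: 'lineAdded' | 'pointsAdded' | 'other' };

const LINE_1 = 'LINE_1';
const LINE_2 = 'LINE_2';

const groupBy = (action: DemoAction, currentGroup: string | null): string | null => {
  if (action.type === 'lineAdded') {
    // Flip to the other token on each new line.
    return currentGroup === LINE_1 ? LINE_2 : LINE_1;
  }
  if (action.type === 'pointsAdded' && (currentGroup === LINE_1 || currentGroup === LINE_2)) {
    // Points extend the current line's group.
    return currentGroup;
  }
  // Ungrouped: each dispatch is its own undo step.
  return null;
};
```

Because consecutive lines land in alternating groups, undo steps back one logical line at a time, even though each line may comprise dozens of `maskLayerPointsAdded` dispatches.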
diff --git a/invokeai/frontend/web/src/features/controlLayers/store/types.ts b/invokeai/frontend/web/src/features/controlLayers/store/types.ts
new file mode 100644
index 0000000000..58b25f967b
--- /dev/null
+++ b/invokeai/frontend/web/src/features/controlLayers/store/types.ts
@@ -0,0 +1,92 @@
+import type { AspectRatioState } from 'features/parameters/components/ImageSize/types';
+import type {
+ ParameterAutoNegative,
+ ParameterHeight,
+ ParameterNegativePrompt,
+ ParameterNegativeStylePromptSDXL,
+ ParameterPositivePrompt,
+ ParameterPositiveStylePromptSDXL,
+ ParameterWidth,
+} from 'features/parameters/types/parameterSchemas';
+import type { IRect } from 'konva/lib/types';
+import type { RgbColor } from 'react-colorful';
+
+export type DrawingTool = 'brush' | 'eraser';
+
+export type Tool = DrawingTool | 'move' | 'rect';
+
+export type VectorMaskLine = {
+ id: string;
+ type: 'vector_mask_line';
+ tool: DrawingTool;
+ strokeWidth: number;
+ points: number[];
+};
+
+export type VectorMaskRect = {
+ id: string;
+ type: 'vector_mask_rect';
+ x: number;
+ y: number;
+ width: number;
+ height: number;
+};
+
+type LayerBase = {
+ id: string;
+ isEnabled: boolean;
+};
+
+type RenderableLayerBase = LayerBase & {
+ x: number;
+ y: number;
+ bbox: IRect | null;
+ bboxNeedsUpdate: boolean;
+ isSelected: boolean;
+};
+
+export type ControlAdapterLayer = RenderableLayerBase & {
+ type: 'control_adapter_layer'; // technically, also t2i adapter layer
+ controlNetId: string;
+ imageName: string | null;
+ opacity: number;
+ isFilterEnabled: boolean;
+};
+
+export type IPAdapterLayer = LayerBase & {
+  type: 'ip_adapter_layer';
+ ipAdapterId: string;
+};
+
+export type RegionalGuidanceLayer = RenderableLayerBase & {
+ type: 'regional_guidance_layer';
+ maskObjects: (VectorMaskLine | VectorMaskRect)[];
+ positivePrompt: ParameterPositivePrompt | null;
+ negativePrompt: ParameterNegativePrompt | null; // Up to one text prompt per mask
+ ipAdapterIds: string[]; // Any number of image prompts
+ previewColor: RgbColor;
+ autoNegative: ParameterAutoNegative;
+  needsPixelBbox: boolean; // Needs the slower pixel-based bbox calculation - set to true when there is an eraser object
+};
+
+export type Layer = RegionalGuidanceLayer | ControlAdapterLayer | IPAdapterLayer;
+
+export type ControlLayersState = {
+ _version: 1;
+ selectedLayerId: string | null;
+ layers: Layer[];
+ brushSize: number;
+ globalMaskLayerOpacity: number;
+ isEnabled: boolean;
+ positivePrompt: ParameterPositivePrompt;
+ negativePrompt: ParameterNegativePrompt;
+ positivePrompt2: ParameterPositiveStylePromptSDXL;
+ negativePrompt2: ParameterNegativeStylePromptSDXL;
+ shouldConcatPrompts: boolean;
+ initialImage: string | null;
+ size: {
+ width: ParameterWidth;
+ height: ParameterHeight;
+ aspectRatio: AspectRatioState;
+ };
+};
diff --git a/invokeai/frontend/web/src/features/controlLayers/util/bbox.ts b/invokeai/frontend/web/src/features/controlLayers/util/bbox.ts
new file mode 100644
index 0000000000..3c2915e0ab
--- /dev/null
+++ b/invokeai/frontend/web/src/features/controlLayers/util/bbox.ts
@@ -0,0 +1,134 @@
+import openBase64ImageInTab from 'common/util/openBase64ImageInTab';
+import { imageDataToDataURL } from 'features/canvas/util/blobToDataURL';
+import { regional_guidance_layer_OBJECT_GROUP_NAME } from 'features/controlLayers/store/controlLayersSlice';
+import Konva from 'konva';
+import type { Layer as KonvaLayerType } from 'konva/lib/Layer';
+import type { IRect } from 'konva/lib/types';
+import { assert } from 'tsafe';
+
+const GET_CLIENT_RECT_CONFIG = { skipTransform: true };
+
+type Extents = {
+ minX: number;
+ minY: number;
+ maxX: number;
+ maxY: number;
+};
+
+/**
+ * Get the bounding box of an image.
+ * @param imageData The ImageData object to get the bounding box of.
+ * @returns The minimum and maximum x and y values of the image's bounding box.
+ */
+const getImageDataBbox = (imageData: ImageData): Extents | null => {
+ const { data, width, height } = imageData;
+ let minX = width;
+ let minY = height;
+ let maxX = -1;
+ let maxY = -1;
+ let alpha = 0;
+ let isEmpty = true;
+
+ for (let y = 0; y < height; y++) {
+ for (let x = 0; x < width; x++) {
+ alpha = data[(y * width + x) * 4 + 3] ?? 0;
+ if (alpha > 0) {
+ isEmpty = false;
+ if (x < minX) {
+ minX = x;
+ }
+ if (x > maxX) {
+ maxX = x;
+ }
+ if (y < minY) {
+ minY = y;
+ }
+ if (y > maxY) {
+ maxY = y;
+ }
+ }
+ }
+ }
+
+ return isEmpty ? null : { minX, minY, maxX, maxY };
+};
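The alpha scan above can be exercised directly on a raw RGBA buffer; this sketch mirrors `getImageDataBbox` without the DOM `ImageData` wrapper:

```typescript
// Mirror of getImageDataBbox operating on a bare RGBA buffer: any pixel
// with alpha > 0 expands the extents; a fully transparent buffer yields null.
type DemoExtents = { minX: number; minY: number; maxX: number; maxY: number };

const bboxOf = (data: Uint8ClampedArray, width: number, height: number): DemoExtents | null => {
  let minX = width;
  let minY = height;
  let maxX = -1;
  let maxY = -1;
  let isEmpty = true;
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      // Alpha channel is every 4th byte, offset 3.
      if ((data[(y * width + x) * 4 + 3] ?? 0) > 0) {
        isEmpty = false;
        if (x < minX) {
          minX = x;
        }
        if (x > maxX) {
          maxX = x;
        }
        if (y < minY) {
          minY = y;
        }
        if (y > maxY) {
          maxY = y;
        }
      }
    }
  }
  return isEmpty ? null : { minX, minY, maxX, maxY };
};
```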
+
+/**
+ * Get the pixel-perfect bounding box of a konva layer, with special handling for regional guidance layers.
+ * @param layer The konva layer to get the bounding box of.
+ * @param preview Whether to open a new tab displaying the rendered layer, which is used to calculate the bbox.
+ */
+export const getLayerBboxPixels = (layer: KonvaLayerType, preview: boolean = false): IRect | null => {
+ // To calculate the layer's bounding box, we must first export it to a pixel array, then do some math.
+ //
+ // Though it is relatively fast, we can't use Konva's `getClientRect`. It programmatically determines the rect
+ // by calculating the extents of individual shapes from their "vector" shape data.
+ //
+ // This doesn't work when some shapes are drawn with composite operations that "erase" pixels, like eraser lines.
+ // These shapes' extents are still calculated as if they were solid, leading to a bounding box that is too large.
+ const stage = layer.getStage();
+
+  // Construct an offscreen canvas on which we will do the bbox calculations.
+ const offscreenStageContainer = document.createElement('div');
+ const offscreenStage = new Konva.Stage({
+ container: offscreenStageContainer,
+ width: stage.width(),
+ height: stage.height(),
+ });
+
+ // Clone the layer and filter out unwanted children.
+ const layerClone = layer.clone();
+ offscreenStage.add(layerClone);
+
+ for (const child of layerClone.getChildren()) {
+ if (child.name() === regional_guidance_layer_OBJECT_GROUP_NAME) {
+ // We need to cache the group to ensure it composites out eraser strokes correctly
+ child.opacity(1);
+ child.cache();
+ } else {
+ // Filter out unwanted children.
+ child.destroy();
+ }
+ }
+
+ // Get a worst-case rect using the relatively fast `getClientRect`.
+ const layerRect = layerClone.getClientRect();
+
+ // Capture the image data with the above rect.
+ const layerImageData = offscreenStage
+ .toCanvas(layerRect)
+ .getContext('2d')
+ ?.getImageData(0, 0, layerRect.width, layerRect.height);
+ assert(layerImageData, "Unable to get layer's image data");
+
+ if (preview) {
+ openBase64ImageInTab([{ base64: imageDataToDataURL(layerImageData), caption: layer.id() }]);
+ }
+
+ // Calculate the layer's bounding box.
+ const layerBbox = getImageDataBbox(layerImageData);
+
+ if (!layerBbox) {
+ return null;
+ }
+
+ // Correct the bounding box to be relative to the layer's position.
+ const correctedLayerBbox = {
+ x: layerBbox.minX - Math.floor(stage.x()) + layerRect.x - Math.floor(layer.x()),
+ y: layerBbox.minY - Math.floor(stage.y()) + layerRect.y - Math.floor(layer.y()),
+ width: layerBbox.maxX - layerBbox.minX,
+ height: layerBbox.maxY - layerBbox.minY,
+ };
+
+ return correctedLayerBbox;
+};
+
+export const getLayerBboxFast = (layer: KonvaLayerType): IRect | null => {
+ const bbox = layer.getClientRect(GET_CLIENT_RECT_CONFIG);
+ return {
+ x: Math.floor(bbox.x),
+ y: Math.floor(bbox.y),
+ width: Math.floor(bbox.width),
+ height: Math.floor(bbox.height),
+ };
+};
diff --git a/invokeai/frontend/web/src/features/controlLayers/util/getLayerBlobs.ts b/invokeai/frontend/web/src/features/controlLayers/util/getLayerBlobs.ts
new file mode 100644
index 0000000000..1b0808c5f1
--- /dev/null
+++ b/invokeai/frontend/web/src/features/controlLayers/util/getLayerBlobs.ts
@@ -0,0 +1,66 @@
+import { getStore } from 'app/store/nanostores/store';
+import openBase64ImageInTab from 'common/util/openBase64ImageInTab';
+import { blobToDataURL } from 'features/canvas/util/blobToDataURL';
+import { isRegionalGuidanceLayer, regional_guidance_layer_NAME } from 'features/controlLayers/store/controlLayersSlice';
+import { renderers } from 'features/controlLayers/util/renderers';
+import Konva from 'konva';
+import { assert } from 'tsafe';
+
+/**
+ * Get the blobs of all regional prompt layers. Only visible layers are returned.
+ * @param layerIds The IDs of the layers to get blobs for. If not provided, all regional prompt layers are used.
+ * @param preview Whether to open a new tab displaying each layer.
+ * @returns A map of layer IDs to blobs.
+ */
+export const getRegionalPromptLayerBlobs = async (
+ layerIds?: string[],
+ preview: boolean = false
+): Promise<Record<string, Blob>> => {
+ const state = getStore().getState();
+ const { layers } = state.controlLayers.present;
+ const { width, height } = state.controlLayers.present.size;
+ const reduxLayers = layers.filter(isRegionalGuidanceLayer);
+ const container = document.createElement('div');
+ const stage = new Konva.Stage({ container, width, height });
+ renderers.renderLayers(stage, reduxLayers, 1, 'brush');
+
+ const konvaLayers = stage.find(`.${regional_guidance_layer_NAME}`);
+  const blobs: Record<string, Blob> = {};
+
+ // First remove all layers
+ for (const layer of konvaLayers) {
+ layer.remove();
+ }
+
+ // Next render each layer to a blob
+ for (const layer of konvaLayers) {
+ if (layerIds && !layerIds.includes(layer.id())) {
+ continue;
+ }
+ const reduxLayer = reduxLayers.find((l) => l.id === layer.id());
+ assert(reduxLayer, `Redux layer ${layer.id()} not found`);
+ stage.add(layer);
+    const blob = await new Promise<Blob>((resolve) => {
+ stage.toBlob({
+ callback: (blob) => {
+ assert(blob, 'Blob is null');
+ resolve(blob);
+ },
+ });
+ });
+
+ if (preview) {
+ const base64 = await blobToDataURL(blob);
+ openBase64ImageInTab([
+ {
+ base64,
+ caption: `${reduxLayer.id}: ${reduxLayer.positivePrompt} / ${reduxLayer.negativePrompt}`,
+ },
+ ]);
+ }
+ layer.remove();
+ blobs[layer.id()] = blob;
+ }
+
+ return blobs;
+};
diff --git a/invokeai/frontend/web/src/features/controlLayers/util/renderers.ts b/invokeai/frontend/web/src/features/controlLayers/util/renderers.ts
new file mode 100644
index 0000000000..b2f04a88c1
--- /dev/null
+++ b/invokeai/frontend/web/src/features/controlLayers/util/renderers.ts
@@ -0,0 +1,776 @@
+import { getStore } from 'app/store/nanostores/store';
+import { rgbaColorToString, rgbColorToString } from 'features/canvas/util/colorToString';
+import { getScaledFlooredCursorPosition } from 'features/controlLayers/hooks/mouseEventHooks';
+import {
+ $tool,
+ BACKGROUND_LAYER_ID,
+ BACKGROUND_RECT_ID,
+ CONTROLNET_LAYER_IMAGE_NAME,
+ CONTROLNET_LAYER_NAME,
+ getControlNetLayerImageId,
+ getLayerBboxId,
+ getRegionalGuidanceLayerObjectGroupId,
+ isControlAdapterLayer,
+ isRegionalGuidanceLayer,
+ isRenderableLayer,
+ LAYER_BBOX_NAME,
+ NO_LAYERS_MESSAGE_LAYER_ID,
+ regional_guidance_layer_LINE_NAME,
+ regional_guidance_layer_NAME,
+ regional_guidance_layer_OBJECT_GROUP_NAME,
+ regional_guidance_layer_RECT_NAME,
+ TOOL_PREVIEW_BRUSH_BORDER_INNER_ID,
+ TOOL_PREVIEW_BRUSH_BORDER_OUTER_ID,
+ TOOL_PREVIEW_BRUSH_FILL_ID,
+ TOOL_PREVIEW_BRUSH_GROUP_ID,
+ TOOL_PREVIEW_LAYER_ID,
+ TOOL_PREVIEW_RECT_ID,
+} from 'features/controlLayers/store/controlLayersSlice';
+import type {
+ ControlAdapterLayer,
+ Layer,
+ RegionalGuidanceLayer,
+ Tool,
+ VectorMaskLine,
+ VectorMaskRect,
+} from 'features/controlLayers/store/types';
+import { getLayerBboxFast, getLayerBboxPixels } from 'features/controlLayers/util/bbox';
+import Konva from 'konva';
+import type { IRect, Vector2d } from 'konva/lib/types';
+import { debounce } from 'lodash-es';
+import type { RgbColor } from 'react-colorful';
+import { imagesApi } from 'services/api/endpoints/images';
+import { assert } from 'tsafe';
+import { v4 as uuidv4 } from 'uuid';
+
+const BBOX_SELECTED_STROKE = 'rgba(78, 190, 255, 1)';
+const BRUSH_BORDER_INNER_COLOR = 'rgba(0,0,0,1)';
+const BRUSH_BORDER_OUTER_COLOR = 'rgba(255,255,255,0.8)';
+// This is invokeai/frontend/web/public/assets/images/transparent_bg.png as a dataURL
+const STAGE_BG_DATAURL =
+  'data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABQAAAAUCAIAAAAC64paAAAEsmlUWHRYTUw6Y29tLmFkb2JlLnhtcAAAAAAAPD94cGFja2V0IGJlZ2luPSLvu78iIGlkPSJXNU0wTXBDZWhpSHpyZVN6TlRjemtjOWQiPz4KPHg6eG1wbWV0YSB4bWxuczp4PSJhZG9iZTpuczptZXRhLyIgeDp4bXB0az0iWE1QIENvcmUgNS41LjAiPgogPHJkZjpSREYgeG1sbnM6cmRmPSJodHRwOi8vd3d3LnczLm9yZy8xOTk5LzAyLzIyLXJkZi1zeW50YXgtbnMjIj4KICA8cmRmOkRlc2NyaXB0aW9uIHJkZjphYm91dD0iIgogICAgeG1sbnM6ZXhpZj0iaHR0cDovL25zLmFkb2JlLmNvbS9leGlmLzEuMC8iCiAgICB4bWxuczp0aWZmPSJodHRwOi8vbnMuYWRvYmUuY29tL3RpZmYvMS4wLyIKICAgIHhtbG5zOnBob3Rvc2hvcD0iaHR0cDovL25zLmFkb2JlLmNvbS9waG90b3Nob3AvMS4wLyIKICAgIHhtbG5zOnhtcD0iaHR0cDovL25zLmFkb2JlLmNvbS94YXAvMS4wLyIKICAgIHhtbG5zOnhtcE1NPSJodHRwOi8vbnMuYWRvYmUuY29tL3hhcC8xLjAvbW0vIgogICAgeG1sbnM6c3RFdnQ9Imh0dHA6Ly9ucy5hZG9iZS5jb20veGFwLzEuMC9zVHlwZS9SZXNvdXJjZUV2ZW50IyIKICAgZXhpZjpQaXhlbFhEaW1lbnNpb249IjIwIgogICBleGlmOlBpeGVsWURpbWVuc2lvbj0iMjAiCiAgIGV4aWY6Q29sb3JTcGFjZT0iMSIKICAgdGlmZjpJbWFnZVdpZHRoPSIyMCIKICAgdGlmZjpJbWFnZUxlbmd0aD0iMjAiCiAgIHRpZmY6UmVzb2x1dGlvblVuaXQ9IjIiCiAgIHRpZmY6WFJlc29sdXRpb249IjMwMC8xIgogICB0aWZmOllSZXNvbHV0aW9uPSIzMDAvMSIKICAgcGhvdG9zaG9wOkNvbG9yTW9kZT0iMyIKICAgcGhvdG9zaG9wOklDQ1Byb2ZpbGU9InNSR0IgSUVDNjE5NjYtMi4xIgogICB4bXA6TW9kaWZ5RGF0ZT0iMjAyNC0wNC0yM1QwODoyMDo0NysxMDowMCIKICAgeG1wOk1ldGFkYXRhRGF0ZT0iMjAyNC0wNC0yM1QwODoyMDo0NysxMDowMCI+CiAgIDx4bXBNTTpIaXN0b3J5PgogICAgPHJkZjpTZXE+CiAgICAgPHJkZjpsaQogICAgICBzdEV2dDphY3Rpb249InByb2R1Y2VkIgogICAgICBzdEV2dDpzb2Z0d2FyZUFnZW50PSJBZmZpbml0eSBQaG90byAxLjEwLjgiCiAgICAgIHN0RXZ0OndoZW49IjIwMjQtMDQtMjNUMDg6MjA6NDcrMTA6MDAiLz4KICAgIDwvcmRmOlNlcT4KICAgPC94bXBNTTpIaXN0b3J5PgogIDwvcmRmOkRlc2NyaXB0aW9uPgogPC9yZGY6UkRGPgo8L3g6eG1wbWV0YT4KPD94cGFja2V0IGVuZD0iciI/Pn9pdVgAAAGBaUNDUHNSR0IgSUVDNjE5NjYtMi4xAAAokXWR3yuDURjHP5uJmKghFy6WxpVpqMWNMgm1tGbKr5vt3S+1d3t73y3JrXKrKHHj1wV/AbfKtVJESq53TdywXs9rakv2nJ7zfM73nOfpnOeAPZJRVMPhAzWb18NTAffC4pK7oYiDTjpw4YgqhjYeCgWpaR8P2Kx457Vq1T73rzXHE4YCtkbhMUXT88LTwsG1vGbxrnC7ko7Ghc+F+3W5oPC9pcfKXLQ4VeYvi/VIeALsbcLuVBXHqlhJ66qwvByPmikov/exXuJMZOfnJPaId2MQZooAbmaYZAI/g4zK7MfLEAOyoka+7yd/lpzkKjJrrKOzSoo0efpFLUj1hMSk6AkZGdat/v/tq5EcHipXdwag/sU033qhYQdK26b5eWyapROoe4arbCU/dwQj76JvVzTPIbRuwsV1RYvtweUWdD1pUT36I9WJ25NJeD2DlkVw3ULTcrlnv/ucPkJkQ77qBvYPoE/Ot658AxagZ8FoS/a7AAAACXBIWXMAAC4jAAAuIwF4pT92AAAAL0lEQVQ4jWM8ffo0A25gYmKCR5YJjxxBMKp5ZGhm/P//Px7pM2fO0MrmUc0jQzMAB2EIhZC3pUYAAAAASUVORK5CYII=';
+
+const mapId = (object: { id: string }) => object.id;
+
+const selectRenderableLayers = (n: Konva.Node) =>
+ n.name() === regional_guidance_layer_NAME || n.name() === CONTROLNET_LAYER_NAME;
+
+const selectVectorMaskObjects = (node: Konva.Node) => {
+ return node.name() === regional_guidance_layer_LINE_NAME || node.name() === regional_guidance_layer_RECT_NAME;
+};
+
+/**
+ * Creates the tool preview layer, which houses the brush and rect previews.
+ * @param stage The konva stage to render on.
+ * @returns The tool preview layer.
+ */
+const createToolPreviewLayer = (stage: Konva.Stage) => {
+ // Initialize the brush preview layer & add to the stage
+ const toolPreviewLayer = new Konva.Layer({ id: TOOL_PREVIEW_LAYER_ID, visible: false, listening: false });
+ stage.add(toolPreviewLayer);
+
+ // Add handlers to show/hide the brush preview layer
+ stage.on('mousemove', (e) => {
+ const tool = $tool.get();
+ e.target
+ .getStage()
+ ?.findOne(`#${TOOL_PREVIEW_LAYER_ID}`)
+ ?.visible(tool === 'brush' || tool === 'eraser');
+ });
+ stage.on('mouseleave', (e) => {
+ e.target.getStage()?.findOne(`#${TOOL_PREVIEW_LAYER_ID}`)?.visible(false);
+ });
+ stage.on('mouseenter', (e) => {
+ const tool = $tool.get();
+ e.target
+ .getStage()
+ ?.findOne(`#${TOOL_PREVIEW_LAYER_ID}`)
+ ?.visible(tool === 'brush' || tool === 'eraser');
+ });
+
+ // Create the brush preview group & circles
+ const brushPreviewGroup = new Konva.Group({ id: TOOL_PREVIEW_BRUSH_GROUP_ID });
+ const brushPreviewFill = new Konva.Circle({
+ id: TOOL_PREVIEW_BRUSH_FILL_ID,
+ listening: false,
+ strokeEnabled: false,
+ });
+ brushPreviewGroup.add(brushPreviewFill);
+ const brushPreviewBorderInner = new Konva.Circle({
+ id: TOOL_PREVIEW_BRUSH_BORDER_INNER_ID,
+ listening: false,
+ stroke: BRUSH_BORDER_INNER_COLOR,
+ strokeWidth: 1,
+ strokeEnabled: true,
+ });
+ brushPreviewGroup.add(brushPreviewBorderInner);
+ const brushPreviewBorderOuter = new Konva.Circle({
+ id: TOOL_PREVIEW_BRUSH_BORDER_OUTER_ID,
+ listening: false,
+ stroke: BRUSH_BORDER_OUTER_COLOR,
+ strokeWidth: 1,
+ strokeEnabled: true,
+ });
+ brushPreviewGroup.add(brushPreviewBorderOuter);
+ toolPreviewLayer.add(brushPreviewGroup);
+
+ // Create the rect preview
+ const rectPreview = new Konva.Rect({ id: TOOL_PREVIEW_RECT_ID, listening: false, stroke: 'white', strokeWidth: 1 });
+ toolPreviewLayer.add(rectPreview);
+
+ return toolPreviewLayer;
+};
+
+/**
+ * Renders the preview for the selected tool.
+ * @param stage The konva stage to render on.
+ * @param tool The selected tool.
+ * @param color The selected layer's color.
+ * @param selectedLayerType The selected layer's type.
+ * @param globalMaskLayerOpacity The global mask layer opacity.
+ * @param cursorPos The cursor position.
+ * @param lastMouseDownPos The position of the last mouse down event - used for the rect tool.
+ * @param isMouseOver Whether the mouse is over the stage.
+ * @param brushSize The brush size.
+ */
+const renderToolPreview = (
+ stage: Konva.Stage,
+ tool: Tool,
+ color: RgbColor | null,
+ selectedLayerType: Layer['type'] | null,
+ globalMaskLayerOpacity: number,
+ cursorPos: Vector2d | null,
+ lastMouseDownPos: Vector2d | null,
+ isMouseOver: boolean,
+ brushSize: number
+) => {
+ const layerCount = stage.find(`.${regional_guidance_layer_NAME}`).length;
+ // Update the stage's pointer style
+ if (layerCount === 0) {
+ // We have no layers, so we should not render any tool
+ stage.container().style.cursor = 'default';
+ } else if (selectedLayerType !== 'regional_guidance_layer') {
+ // Non-regional-guidance layers don't have tools
+ stage.container().style.cursor = 'not-allowed';
+ } else if (tool === 'move') {
+ // Move tool gets the default arrow cursor
+ stage.container().style.cursor = 'default';
+ } else if (tool === 'rect') {
+ // Rect tool gets a crosshair
+ stage.container().style.cursor = 'crosshair';
+ } else {
+ // Else we use the brush preview
+ stage.container().style.cursor = 'none';
+ }
+
+ const toolPreviewLayer = stage.findOne(`#${TOOL_PREVIEW_LAYER_ID}`) ?? createToolPreviewLayer(stage);
+
+ if (!isMouseOver || layerCount === 0) {
+ // We can bail early if the mouse isn't over the stage or there are no layers
+ toolPreviewLayer.visible(false);
+ return;
+ }
+
+ toolPreviewLayer.visible(true);
+
+ const brushPreviewGroup = stage.findOne(`#${TOOL_PREVIEW_BRUSH_GROUP_ID}`);
+ assert(brushPreviewGroup, 'Brush preview group not found');
+
+ const rectPreview = stage.findOne(`#${TOOL_PREVIEW_RECT_ID}`);
+ assert(rectPreview, 'Rect preview not found');
+
+ // No need to render the brush preview if the cursor position or color is missing
+ if (cursorPos && color && (tool === 'brush' || tool === 'eraser')) {
+ // Update the fill circle
+ const brushPreviewFill = brushPreviewGroup.findOne(`#${TOOL_PREVIEW_BRUSH_FILL_ID}`);
+ brushPreviewFill?.setAttrs({
+ x: cursorPos.x,
+ y: cursorPos.y,
+ radius: brushSize / 2,
+ fill: rgbaColorToString({ ...color, a: globalMaskLayerOpacity }),
+ globalCompositeOperation: tool === 'brush' ? 'source-over' : 'destination-out',
+ });
+
+ // Update the inner border of the brush preview
+ const brushPreviewInner = toolPreviewLayer.findOne(`#${TOOL_PREVIEW_BRUSH_BORDER_INNER_ID}`);
+ brushPreviewInner?.setAttrs({ x: cursorPos.x, y: cursorPos.y, radius: brushSize / 2 });
+
+ // Update the outer border of the brush preview
+ const brushPreviewOuter = toolPreviewLayer.findOne(`#${TOOL_PREVIEW_BRUSH_BORDER_OUTER_ID}`);
+ brushPreviewOuter?.setAttrs({
+ x: cursorPos.x,
+ y: cursorPos.y,
+ radius: brushSize / 2 + 1,
+ });
+
+ brushPreviewGroup.visible(true);
+ } else {
+ brushPreviewGroup.visible(false);
+ }
+
+ if (cursorPos && lastMouseDownPos && tool === 'rect') {
+ // Reuse the rect preview node found (and asserted) above
+ rectPreview.setAttrs({
+ x: Math.min(cursorPos.x, lastMouseDownPos.x),
+ y: Math.min(cursorPos.y, lastMouseDownPos.y),
+ width: Math.abs(cursorPos.x - lastMouseDownPos.x),
+ height: Math.abs(cursorPos.y - lastMouseDownPos.y),
+ });
+ rectPreview.visible(true);
+ } else {
+ rectPreview.visible(false);
+ }
+};
+
+/**
+ * Creates a regional guidance layer.
+ * @param stage The konva stage to attach the layer to.
+ * @param reduxLayer The redux layer to create the konva layer from.
+ * @param onLayerPosChanged Callback for when the layer's position changes.
+ */
+const createRegionalGuidanceLayer = (
+ stage: Konva.Stage,
+ reduxLayer: RegionalGuidanceLayer,
+ onLayerPosChanged?: (layerId: string, x: number, y: number) => void
+) => {
+ // This layer hasn't been added to the konva state yet
+ const konvaLayer = new Konva.Layer({
+ id: reduxLayer.id,
+ name: regional_guidance_layer_NAME,
+ draggable: true,
+ dragDistance: 0,
+ });
+
+ // Create a `dragend` listener for this layer
+ if (onLayerPosChanged) {
+ konvaLayer.on('dragend', function (e) {
+ onLayerPosChanged(reduxLayer.id, Math.floor(e.target.x()), Math.floor(e.target.y()));
+ });
+ }
+
+ // The dragBoundFunc limits how far the layer can be dragged
+ konvaLayer.dragBoundFunc(function (pos) {
+ const cursorPos = getScaledFlooredCursorPosition(stage);
+ if (!cursorPos) {
+ return this.getAbsolutePosition();
+ }
+ // Prevent the user from dragging the layer out of the stage bounds.
+ if (
+ cursorPos.x < 0 ||
+ cursorPos.x > stage.width() / stage.scaleX() ||
+ cursorPos.y < 0 ||
+ cursorPos.y > stage.height() / stage.scaleY()
+ ) {
+ return this.getAbsolutePosition();
+ }
+ return pos;
+ });
+
+ // The object group holds all of the layer's objects (e.g. lines and rects)
+ const konvaObjectGroup = new Konva.Group({
+ id: getRegionalGuidanceLayerObjectGroupId(reduxLayer.id, uuidv4()),
+ name: regional_guidance_layer_OBJECT_GROUP_NAME,
+ listening: false,
+ });
+ konvaLayer.add(konvaObjectGroup);
+
+ stage.add(konvaLayer);
+
+ return konvaLayer;
+};
+
+/**
+ * Creates a konva line from a redux vector mask line.
+ * @param reduxObject The redux object to create the konva line from.
+ * @param konvaGroup The konva group to add the line to.
+ */
+const createVectorMaskLine = (reduxObject: VectorMaskLine, konvaGroup: Konva.Group): Konva.Line => {
+ const vectorMaskLine = new Konva.Line({
+ id: reduxObject.id,
+ key: reduxObject.id,
+ name: regional_guidance_layer_LINE_NAME,
+ strokeWidth: reduxObject.strokeWidth,
+ tension: 0,
+ lineCap: 'round',
+ lineJoin: 'round',
+ shadowForStrokeEnabled: false,
+ globalCompositeOperation: reduxObject.tool === 'brush' ? 'source-over' : 'destination-out',
+ listening: false,
+ });
+ konvaGroup.add(vectorMaskLine);
+ return vectorMaskLine;
+};
+
+/**
+ * Creates a konva rect from a redux vector mask rect.
+ * @param reduxObject The redux object to create the konva rect from.
+ * @param konvaGroup The konva group to add the rect to.
+ */
+const createVectorMaskRect = (reduxObject: VectorMaskRect, konvaGroup: Konva.Group): Konva.Rect => {
+ const vectorMaskRect = new Konva.Rect({
+ id: reduxObject.id,
+ key: reduxObject.id,
+ name: regional_guidance_layer_RECT_NAME,
+ x: reduxObject.x,
+ y: reduxObject.y,
+ width: reduxObject.width,
+ height: reduxObject.height,
+ listening: false,
+ });
+ konvaGroup.add(vectorMaskRect);
+ return vectorMaskRect;
+};
+
+/**
+ * Renders a regional guidance layer.
+ * @param stage The konva stage to render on.
+ * @param reduxLayer The redux regional guidance layer to render.
+ * @param globalMaskLayerOpacity The opacity of the global mask layer.
+ * @param tool The current tool.
+ * @param onLayerPosChanged Callback for when the layer's position changes.
+ */
+const renderRegionalGuidanceLayer = (
+ stage: Konva.Stage,
+ reduxLayer: RegionalGuidanceLayer,
+ globalMaskLayerOpacity: number,
+ tool: Tool,
+ onLayerPosChanged?: (layerId: string, x: number, y: number) => void
+): void => {
+ const konvaLayer =
+ stage.findOne(`#${reduxLayer.id}`) ??
+ createRegionalGuidanceLayer(stage, reduxLayer, onLayerPosChanged);
+
+ // Update the layer's position and listening state
+ konvaLayer.setAttrs({
+ listening: tool === 'move', // The layer only listens when using the move tool - otherwise the stage is handling mouse events
+ x: Math.floor(reduxLayer.x),
+ y: Math.floor(reduxLayer.y),
+ });
+
+ // Convert the color to a string, stripping the alpha - the object group will handle opacity.
+ const rgbColor = rgbColorToString(reduxLayer.previewColor);
+
+ const konvaObjectGroup = konvaLayer.findOne(`.${regional_guidance_layer_OBJECT_GROUP_NAME}`);
+ assert(konvaObjectGroup, `Object group not found for layer ${reduxLayer.id}`);
+
+ // We use caching to handle "global" layer opacity, but caching is expensive and we should only do it when required.
+ let groupNeedsCache = false;
+
+ const objectIds = reduxLayer.maskObjects.map(mapId);
+ for (const objectNode of konvaObjectGroup.find(selectVectorMaskObjects)) {
+ if (!objectIds.includes(objectNode.id())) {
+ objectNode.destroy();
+ groupNeedsCache = true;
+ }
+ }
+
+ for (const reduxObject of reduxLayer.maskObjects) {
+ if (reduxObject.type === 'vector_mask_line') {
+ const vectorMaskLine =
+ stage.findOne(`#${reduxObject.id}`) ?? createVectorMaskLine(reduxObject, konvaObjectGroup);
+
+ // Only update the points if they have changed. The point values are never mutated, they are only added to the
+ // array, so checking the length is sufficient to determine if we need to re-cache.
+ if (vectorMaskLine.points().length !== reduxObject.points.length) {
+ vectorMaskLine.points(reduxObject.points);
+ groupNeedsCache = true;
+ }
+ // Only update the color if it has changed.
+ if (vectorMaskLine.stroke() !== rgbColor) {
+ vectorMaskLine.stroke(rgbColor);
+ groupNeedsCache = true;
+ }
+ } else if (reduxObject.type === 'vector_mask_rect') {
+ const konvaObject =
+ stage.findOne(`#${reduxObject.id}`) ?? createVectorMaskRect(reduxObject, konvaObjectGroup);
+
+ // Only update the color if it has changed.
+ if (konvaObject.fill() !== rgbColor) {
+ konvaObject.fill(rgbColor);
+ groupNeedsCache = true;
+ }
+ }
+ }
+
+ // Only update layer visibility if it has changed.
+ if (konvaLayer.visible() !== reduxLayer.isEnabled) {
+ konvaLayer.visible(reduxLayer.isEnabled);
+ groupNeedsCache = true;
+ }
+
+ if (konvaObjectGroup.children.length === 0) {
+ // No objects - clear the cache to reset the previous pixel data
+ konvaObjectGroup.clearCache();
+ } else if (groupNeedsCache) {
+ konvaObjectGroup.cache();
+ }
+
+ // Updating group opacity does not require re-caching
+ if (konvaObjectGroup.opacity() !== globalMaskLayerOpacity) {
+ konvaObjectGroup.opacity(globalMaskLayerOpacity);
+ }
+};
+
+const createControlNetLayer = (stage: Konva.Stage, reduxLayer: ControlAdapterLayer): Konva.Layer => {
+ const konvaLayer = new Konva.Layer({
+ id: reduxLayer.id,
+ name: CONTROLNET_LAYER_NAME,
+ imageSmoothingEnabled: true,
+ });
+ stage.add(konvaLayer);
+ return konvaLayer;
+};
+
+const createControlNetLayerImage = (konvaLayer: Konva.Layer, image: HTMLImageElement): Konva.Image => {
+ const konvaImage = new Konva.Image({
+ name: CONTROLNET_LAYER_IMAGE_NAME,
+ image,
+ });
+ konvaLayer.add(konvaImage);
+ return konvaImage;
+};
+
+const updateControlNetLayerImageSource = async (
+ stage: Konva.Stage,
+ konvaLayer: Konva.Layer,
+ reduxLayer: ControlAdapterLayer
+) => {
+ if (reduxLayer.imageName) {
+ const imageName = reduxLayer.imageName;
+ const req = getStore().dispatch(imagesApi.endpoints.getImageDTO.initiate(reduxLayer.imageName));
+ const imageDTO = await req.unwrap();
+ req.unsubscribe();
+ const image = new Image();
+ const imageId = getControlNetLayerImageId(reduxLayer.id, imageName);
+ image.onload = () => {
+ // Find the existing image or create a new one - must find using the name, because the id may have just changed
+ const konvaImage =
+ konvaLayer.findOne(`.${CONTROLNET_LAYER_IMAGE_NAME}`) ??
+ createControlNetLayerImage(konvaLayer, image);
+
+ // Update the image's attributes
+ konvaImage.setAttrs({
+ id: imageId,
+ image,
+ });
+ updateControlNetLayerImageAttrs(stage, konvaImage, reduxLayer);
+ // Must cache after this to apply the filters
+ konvaImage.cache();
+ image.id = imageId;
+ };
+ image.src = imageDTO.image_url;
+ } else {
+ konvaLayer.findOne(`.${CONTROLNET_LAYER_IMAGE_NAME}`)?.destroy();
+ }
+};
+
+const updateControlNetLayerImageAttrs = (
+ stage: Konva.Stage,
+ konvaImage: Konva.Image,
+ reduxLayer: ControlAdapterLayer
+) => {
+ let needsCache = false;
+ const newWidth = stage.width() / stage.scaleX();
+ const newHeight = stage.height() / stage.scaleY();
+ const hasFilter = konvaImage.filters() !== null && konvaImage.filters().length > 0;
+ if (
+ konvaImage.width() !== newWidth ||
+ konvaImage.height() !== newHeight ||
+ konvaImage.visible() !== reduxLayer.isEnabled ||
+ hasFilter !== reduxLayer.isFilterEnabled
+ ) {
+ konvaImage.setAttrs({
+ opacity: reduxLayer.opacity,
+ scaleX: 1,
+ scaleY: 1,
+ width: stage.width() / stage.scaleX(),
+ height: stage.height() / stage.scaleY(),
+ visible: reduxLayer.isEnabled,
+ filters: reduxLayer.isFilterEnabled ? [LightnessToAlphaFilter] : [],
+ });
+ needsCache = true;
+ }
+ if (konvaImage.opacity() !== reduxLayer.opacity) {
+ konvaImage.opacity(reduxLayer.opacity);
+ }
+ if (needsCache) {
+ konvaImage.cache();
+ }
+};
+
+const renderControlNetLayer = (stage: Konva.Stage, reduxLayer: ControlAdapterLayer) => {
+ const konvaLayer = stage.findOne(`#${reduxLayer.id}`) ?? createControlNetLayer(stage, reduxLayer);
+ const konvaImage = konvaLayer.findOne(`.${CONTROLNET_LAYER_IMAGE_NAME}`);
+ const canvasImageSource = konvaImage?.image();
+ let imageSourceNeedsUpdate = false;
+ if (canvasImageSource instanceof HTMLImageElement) {
+ if (
+ reduxLayer.imageName &&
+ canvasImageSource.id !== getControlNetLayerImageId(reduxLayer.id, reduxLayer.imageName)
+ ) {
+ imageSourceNeedsUpdate = true;
+ } else if (!reduxLayer.imageName) {
+ imageSourceNeedsUpdate = true;
+ }
+ } else if (!canvasImageSource) {
+ imageSourceNeedsUpdate = true;
+ }
+
+ if (imageSourceNeedsUpdate) {
+ updateControlNetLayerImageSource(stage, konvaLayer, reduxLayer);
+ } else if (konvaImage) {
+ updateControlNetLayerImageAttrs(stage, konvaImage, reduxLayer);
+ }
+};
+
+/**
+ * Renders the layers on the stage.
+ * @param stage The konva stage to render on.
+ * @param reduxLayers Array of the layers from the redux store.
+ * @param globalMaskLayerOpacity The opacity of the global mask layer.
+ * @param tool The current tool.
+ * @param onLayerPosChanged Callback for when the layer's position changes. This is optional to allow for offscreen rendering.
+ */
+const renderLayers = (
+ stage: Konva.Stage,
+ reduxLayers: Layer[],
+ globalMaskLayerOpacity: number,
+ tool: Tool,
+ onLayerPosChanged?: (layerId: string, x: number, y: number) => void
+) => {
+ const reduxLayerIds = reduxLayers.filter(isRenderableLayer).map(mapId);
+ // Remove un-rendered layers
+ for (const konvaLayer of stage.find(selectRenderableLayers)) {
+ if (!reduxLayerIds.includes(konvaLayer.id())) {
+ konvaLayer.destroy();
+ }
+ }
+
+ for (const reduxLayer of reduxLayers) {
+ if (isRegionalGuidanceLayer(reduxLayer)) {
+ renderRegionalGuidanceLayer(stage, reduxLayer, globalMaskLayerOpacity, tool, onLayerPosChanged);
+ }
+ if (isControlAdapterLayer(reduxLayer)) {
+ renderControlNetLayer(stage, reduxLayer);
+ }
+ }
+};
+
+/**
+ * Creates a bounding box rect for a layer.
+ * @param reduxLayer The redux layer to create the bounding box for.
+ * @param konvaLayer The konva layer to attach the bounding box to.
+ */
+const createBboxRect = (reduxLayer: Layer, konvaLayer: Konva.Layer) => {
+ const rect = new Konva.Rect({
+ id: getLayerBboxId(reduxLayer.id),
+ name: LAYER_BBOX_NAME,
+ strokeWidth: 1,
+ });
+ konvaLayer.add(rect);
+ return rect;
+};
+
+/**
+ * Renders the bounding boxes for the layers.
+ * @param stage The konva stage to render on
+ * @param reduxLayers An array of all redux layers to draw bboxes for
+ * @param tool The current tool
+ * @param onBboxChanged Callback for when a bbox is changed
+ */
+const renderBbox = (
+ stage: Konva.Stage,
+ reduxLayers: Layer[],
+ tool: Tool,
+ onBboxChanged: (layerId: string, bbox: IRect | null) => void
+) => {
+ // Hide all bboxes so they don't interfere with getClientRect
+ for (const bboxRect of stage.find(`.${LAYER_BBOX_NAME}`)) {
+ bboxRect.visible(false);
+ bboxRect.listening(false);
+ }
+ // Not using the move tool - nothing more to do here
+ if (tool !== 'move') {
+ return;
+ }
+
+ for (const reduxLayer of reduxLayers) {
+ if (reduxLayer.type === 'regional_guidance_layer') {
+ const konvaLayer = stage.findOne(`#${reduxLayer.id}`);
+ assert(konvaLayer, `Layer ${reduxLayer.id} not found in stage`);
+
+ let bbox = reduxLayer.bbox;
+
+ // We only need to recalculate the bbox if the layer has changed and it has objects
+ if (reduxLayer.bboxNeedsUpdate && reduxLayer.maskObjects.length) {
+ // We only need to use the pixel-perfect bounding box if the layer has eraser strokes
+ bbox = reduxLayer.needsPixelBbox ? getLayerBboxPixels(konvaLayer) : getLayerBboxFast(konvaLayer);
+ // Update the layer's bbox in the redux store
+ onBboxChanged(reduxLayer.id, bbox);
+ }
+
+ if (!bbox) {
+ continue;
+ }
+
+ const rect = konvaLayer.findOne(`.${LAYER_BBOX_NAME}`) ?? createBboxRect(reduxLayer, konvaLayer);
+
+ rect.setAttrs({
+ visible: true,
+ listening: reduxLayer.isSelected,
+ x: bbox.x,
+ y: bbox.y,
+ width: bbox.width,
+ height: bbox.height,
+ stroke: reduxLayer.isSelected ? BBOX_SELECTED_STROKE : '',
+ });
+ }
+ }
+};
+
+/**
+ * Creates the background layer for the stage.
+ * @param stage The konva stage to render on
+ */
+const createBackgroundLayer = (stage: Konva.Stage): Konva.Layer => {
+ const layer = new Konva.Layer({
+ id: BACKGROUND_LAYER_ID,
+ });
+ const background = new Konva.Rect({
+ id: BACKGROUND_RECT_ID,
+ x: stage.x(),
+ y: 0,
+ width: stage.width() / stage.scaleX(),
+ height: stage.height() / stage.scaleY(),
+ listening: false,
+ opacity: 0.2,
+ });
+ layer.add(background);
+ stage.add(layer);
+ const image = new Image();
+ image.onload = () => {
+ background.fillPatternImage(image);
+ };
+ image.src = STAGE_BG_DATAURL;
+ return layer;
+};
+
+/**
+ * Renders the background layer for the stage.
+ * @param stage The konva stage to render on
+ * @param width The unscaled width of the canvas
+ * @param height The unscaled height of the canvas
+ */
+const renderBackground = (stage: Konva.Stage, width: number, height: number) => {
+ const layer = stage.findOne(`#${BACKGROUND_LAYER_ID}`) ?? createBackgroundLayer(stage);
+
+ const background = layer.findOne(`#${BACKGROUND_RECT_ID}`);
+ assert(background, 'Background rect not found');
+ // ensure background rect is in the top-left of the canvas
+ background.absolutePosition({ x: 0, y: 0 });
+
+ // set the dimensions of the background rect to match the canvas - not the stage!!!
+ background.size({
+ width: width / stage.scaleX(),
+ height: height / stage.scaleY(),
+ });
+
+ // Calculate the amount the stage is moved - including the effect of scaling
+ const stagePos = {
+ x: -stage.x() / stage.scaleX(),
+ y: -stage.y() / stage.scaleY(),
+ };
+
+ // Apply that movement to the fill pattern
+ background.fillPatternOffset(stagePos);
+};
+
+/**
+ * Arranges all layers in the z-axis by updating their z-indices.
+ * @param stage The konva stage
+ * @param layerIds An array of redux layer ids, in their z-index order
+ */
+const arrangeLayers = (stage: Konva.Stage, layerIds: string[]): void => {
+ let nextZIndex = 0;
+ // Background is the first layer
+ stage.findOne(`#${BACKGROUND_LAYER_ID}`)?.zIndex(nextZIndex++);
+ // Then arrange the redux layers in order
+ for (const layerId of layerIds) {
+ stage.findOne(`#${layerId}`)?.zIndex(nextZIndex++);
+ }
+ // Finally, the tool preview layer is always on top
+ stage.findOne(`#${TOOL_PREVIEW_LAYER_ID}`)?.zIndex(nextZIndex++);
+};
+
+const createNoLayersMessageLayer = (stage: Konva.Stage): Konva.Layer => {
+ const noLayersMessageLayer = new Konva.Layer({
+ id: NO_LAYERS_MESSAGE_LAYER_ID,
+ opacity: 0.7,
+ listening: false,
+ });
+ const text = new Konva.Text({
+ x: 0,
+ y: 0,
+ align: 'center',
+ verticalAlign: 'middle',
+ text: 'No Layers Added',
+ fontFamily: '"Inter Variable", sans-serif',
+ fontStyle: '600',
+ fill: 'white',
+ });
+ noLayersMessageLayer.add(text);
+ stage.add(noLayersMessageLayer);
+ return noLayersMessageLayer;
+};
+
+const renderNoLayersMessage = (stage: Konva.Stage, layerCount: number, width: number, height: number) => {
+ const noLayersMessageLayer =
+ stage.findOne(`#${NO_LAYERS_MESSAGE_LAYER_ID}`) ?? createNoLayersMessageLayer(stage);
+ if (layerCount === 0) {
+ noLayersMessageLayer.findOne('Text')?.setAttrs({
+ width,
+ height,
+ fontSize: 32 / stage.scaleX(),
+ });
+ } else {
+ noLayersMessageLayer?.destroy();
+ }
+};
+
+export const renderers = {
+ renderToolPreview,
+ renderLayers,
+ renderBbox,
+ renderBackground,
+ renderNoLayersMessage,
+ arrangeLayers,
+};
+
+const DEBOUNCE_MS = 300;
+
+export const debouncedRenderers = {
+ renderToolPreview: debounce(renderToolPreview, DEBOUNCE_MS),
+ renderLayers: debounce(renderLayers, DEBOUNCE_MS),
+ renderBbox: debounce(renderBbox, DEBOUNCE_MS),
+ renderBackground: debounce(renderBackground, DEBOUNCE_MS),
+ renderNoLayersMessage: debounce(renderNoLayersMessage, DEBOUNCE_MS),
+ arrangeLayers: debounce(arrangeLayers, DEBOUNCE_MS),
+};
+
+/**
+ * Calculates the lightness (HSL) of a given pixel and sets the alpha channel to that value.
+ * This is useful for edge maps and other masks, to make the black areas transparent.
+ * @param imageData The image data to apply the filter to
+ */
+const LightnessToAlphaFilter = (imageData: ImageData) => {
+ const len = imageData.data.length / 4;
+ for (let i = 0; i < len; i++) {
+ const r = imageData.data[i * 4 + 0] as number;
+ const g = imageData.data[i * 4 + 1] as number;
+ const b = imageData.data[i * 4 + 2] as number;
+ const cMin = Math.min(r, g, b);
+ const cMax = Math.max(r, g, b);
+ imageData.data[i * 4 + 3] = (cMin + cMax) / 2;
+ }
+};
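The lightness-to-alpha logic above is easy to exercise outside Konva and `ImageData`. A minimal sketch operating on a raw RGBA buffer (the `lightnessToAlpha` name and signature are illustrative stand-ins, not part of the codebase):

```typescript
// Same idea as LightnessToAlphaFilter: alpha := HSL lightness of the RGB
// channels ((min + max) / 2, already on the 0-255 scale), so darker
// pixels become more transparent.
const lightnessToAlpha = (data: Uint8ClampedArray): void => {
  for (let i = 0; i < data.length; i += 4) {
    const r = data[i] as number;
    const g = data[i + 1] as number;
    const b = data[i + 2] as number;
    data[i + 3] = (Math.min(r, g, b) + Math.max(r, g, b)) / 2;
  }
};

// A white pixel stays fully opaque; a black pixel becomes fully transparent.
const px = new Uint8ClampedArray([255, 255, 255, 255, 0, 0, 0, 255]);
lightnessToAlpha(px);
// px[3] === 255, px[7] === 0
```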
diff --git a/invokeai/frontend/web/src/features/gallery/components/ImageMetadataViewer/DataViewer.tsx b/invokeai/frontend/web/src/features/gallery/components/ImageMetadataViewer/DataViewer.tsx
index 8f6cef2c20..a6d0481b89 100644
--- a/invokeai/frontend/web/src/features/gallery/components/ImageMetadataViewer/DataViewer.tsx
+++ b/invokeai/frontend/web/src/features/gallery/components/ImageMetadataViewer/DataViewer.tsx
@@ -1,4 +1,4 @@
-import { Box, Flex, IconButton, Tooltip } from '@invoke-ai/ui-library';
+import { Box, Flex, IconButton, Tooltip, useShiftModifier } from '@invoke-ai/ui-library';
import { getOverlayScrollbarsParams } from 'common/components/OverlayScrollbars/constants';
import { isString } from 'lodash-es';
import { OverlayScrollbarsComponent } from 'overlayscrollbars-react';
@@ -9,18 +9,19 @@ import { PiCopyBold, PiDownloadSimpleBold } from 'react-icons/pi';
type Props = {
label: string;
- data: object | string;
+ data: unknown;
fileName?: string;
withDownload?: boolean;
withCopy?: boolean;
+ extraCopyActions?: { label: string; getData: (data: unknown) => unknown }[];
};
const overlayscrollbarsOptions = getOverlayScrollbarsParams('scroll', 'scroll').options;
const DataViewer = (props: Props) => {
- const { label, data, fileName, withDownload = true, withCopy = true } = props;
+ const { label, data, fileName, withDownload = true, withCopy = true, extraCopyActions } = props;
const dataString = useMemo(() => (isString(data) ? data : JSON.stringify(data, null, 2)), [data]);
-
+ const shift = useShiftModifier();
const handleCopy = useCallback(() => {
navigator.clipboard.writeText(dataString);
}, [dataString]);
@@ -67,6 +68,10 @@ const DataViewer = (props: Props) => {
/>
)}
+ {shift &&
+ extraCopyActions?.map(({ label, getData }) => (
+
+ ))}
);
@@ -78,3 +83,27 @@ const overlayScrollbarsStyles: CSSProperties = {
height: '100%',
width: '100%',
};
+
+type ExtraCopyActionProps = {
+ label: string;
+ data: unknown;
+ getData: (data: unknown) => unknown;
+};
+const ExtraCopyAction = ({ label, data, getData }: ExtraCopyActionProps) => {
+ const { t } = useTranslation();
+ const handleCopy = useCallback(() => {
+ navigator.clipboard.writeText(JSON.stringify(getData(data), null, 2));
+ }, [data, getData]);
+
+ return (
+
+ }
+ variant="ghost"
+ opacity={0.7}
+ onClick={handleCopy}
+ />
+
+ );
+};
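The shift-gated `extraCopyActions` added to `DataViewer` follow a simple contract: each action derives a sub-view of the viewer's `unknown` data, which is then serialized the same way the main copy button serializes the full payload. A hedged sketch of that contract outside React (the names here are illustrative, not the component's API):

```typescript
// One copy action: a label plus a selector over the viewer's data.
type CopyAction = { label: string; getData: (data: unknown) => unknown };

// Mirrors the dataString memo: strings pass through, everything else is
// pretty-printed JSON.
const serialize = (data: unknown): string =>
  typeof data === 'string' ? data : JSON.stringify(data, null, 2);

// What an ExtraCopyAction button would put on the clipboard.
const runCopyAction = (action: CopyAction, data: unknown): string =>
  serialize(action.getData(data));

// e.g. copy only the `metadata` field of a larger payload
const copyMetadata: CopyAction = {
  label: 'Copy Metadata',
  getData: (data) => (data as { metadata?: unknown })?.metadata,
};
```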
diff --git a/invokeai/frontend/web/src/features/metadata/util/parsers.ts b/invokeai/frontend/web/src/features/metadata/util/parsers.ts
index 3decea6737..5d2bd78784 100644
--- a/invokeai/frontend/web/src/features/metadata/util/parsers.ts
+++ b/invokeai/frontend/web/src/features/metadata/util/parsers.ts
@@ -286,7 +286,9 @@ const parseControlNet: MetadataParseFunc = async (meta
controlMode: control_mode ?? initialControlNet.controlMode,
resizeMode: resize_mode ?? initialControlNet.resizeMode,
controlImage: image?.image_name ?? null,
+ controlImageDimensions: null,
processedControlImage: processedImage?.image_name ?? null,
+ processedControlImageDimensions: null,
processorType,
processorNode,
shouldAutoConfig: true,
@@ -350,9 +352,11 @@ const parseT2IAdapter: MetadataParseFunc = async (meta
endStepPct: end_step_percent ?? initialT2IAdapter.endStepPct,
resizeMode: resize_mode ?? initialT2IAdapter.resizeMode,
controlImage: image?.image_name ?? null,
+ controlImageDimensions: null,
processedControlImage: processedImage?.image_name ?? null,
- processorType,
+ processedControlImageDimensions: null,
processorNode,
+ processorType,
shouldAutoConfig: true,
id: uuidv4(),
};
diff --git a/invokeai/frontend/web/src/features/metadata/util/recallers.ts b/invokeai/frontend/web/src/features/metadata/util/recallers.ts
index 4f332e23a9..f07b2ab8b6 100644
--- a/invokeai/frontend/web/src/features/metadata/util/recallers.ts
+++ b/invokeai/frontend/web/src/features/metadata/util/recallers.ts
@@ -5,6 +5,14 @@ import {
ipAdaptersReset,
t2iAdaptersReset,
} from 'features/controlAdapters/store/controlAdaptersSlice';
+import {
+ heightChanged,
+ negativePrompt2Changed,
+ negativePromptChanged,
+ positivePrompt2Changed,
+ positivePromptChanged,
+ widthChanged,
+} from 'features/controlLayers/store/controlLayersSlice';
import { setHrfEnabled, setHrfMethod, setHrfStrength } from 'features/hrf/store/hrfSlice';
import type { LoRA } from 'features/lora/store/loraSlice';
import { loraRecalled, lorasReset } from 'features/lora/store/loraSlice';
@@ -16,18 +24,14 @@ import type {
} from 'features/metadata/types';
import { modelSelected } from 'features/parameters/store/actions';
import {
- heightRecalled,
initialImageChanged,
setCfgRescaleMultiplier,
setCfgScale,
setImg2imgStrength,
- setNegativePrompt,
- setPositivePrompt,
setScheduler,
setSeed,
setSteps,
vaeSelected,
- widthRecalled,
} from 'features/parameters/store/generationSlice';
import type {
ParameterCFGRescaleMultiplier,
@@ -53,8 +57,6 @@ import type {
} from 'features/parameters/types/parameterSchemas';
import {
refinerModelChanged,
- setNegativeStylePromptSDXL,
- setPositiveStylePromptSDXL,
setRefinerCFGScale,
setRefinerNegativeAestheticScore,
setRefinerPositiveAestheticScore,
@@ -65,19 +67,19 @@ import {
import type { ImageDTO } from 'services/api/types';
const recallPositivePrompt: MetadataRecallFunc = (positivePrompt) => {
- getStore().dispatch(setPositivePrompt(positivePrompt));
+ getStore().dispatch(positivePromptChanged(positivePrompt));
};
const recallNegativePrompt: MetadataRecallFunc = (negativePrompt) => {
- getStore().dispatch(setNegativePrompt(negativePrompt));
+ getStore().dispatch(negativePromptChanged(negativePrompt));
};
const recallSDXLPositiveStylePrompt: MetadataRecallFunc = (positiveStylePrompt) => {
- getStore().dispatch(setPositiveStylePromptSDXL(positiveStylePrompt));
+ getStore().dispatch(positivePrompt2Changed(positiveStylePrompt));
};
const recallSDXLNegativeStylePrompt: MetadataRecallFunc = (negativeStylePrompt) => {
- getStore().dispatch(setNegativeStylePromptSDXL(negativeStylePrompt));
+ getStore().dispatch(negativePrompt2Changed(negativeStylePrompt));
};
const recallSeed: MetadataRecallFunc = (seed) => {
@@ -101,11 +103,11 @@ const recallInitialImage: MetadataRecallFunc = async (imageDTO) => {
};
const recallWidth: MetadataRecallFunc = (width) => {
- getStore().dispatch(widthRecalled(width));
+ getStore().dispatch(widthChanged({ width, updateAspectRatio: true }));
};
const recallHeight: MetadataRecallFunc = (height) => {
- getStore().dispatch(heightRecalled(height));
+ getStore().dispatch(heightChanged({ height, updateAspectRatio: true }));
};
const recallSteps: MetadataRecallFunc = (steps) => {
diff --git a/invokeai/frontend/web/src/features/modelManagerV2/subpanels/ModelPanel/ControlNetOrT2IAdapterDefaultSettings/ControlNetOrT2IAdapterDefaultSettings.tsx b/invokeai/frontend/web/src/features/modelManagerV2/subpanels/ModelPanel/ControlNetOrT2IAdapterDefaultSettings/ControlNetOrT2IAdapterDefaultSettings.tsx
index 93750348c0..dcdc4e2a36 100644
--- a/invokeai/frontend/web/src/features/modelManagerV2/subpanels/ModelPanel/ControlNetOrT2IAdapterDefaultSettings/ControlNetOrT2IAdapterDefaultSettings.tsx
+++ b/invokeai/frontend/web/src/features/modelManagerV2/subpanels/ModelPanel/ControlNetOrT2IAdapterDefaultSettings/ControlNetOrT2IAdapterDefaultSettings.tsx
@@ -1,4 +1,4 @@
-import { Button, Flex, Heading, Text } from '@invoke-ai/ui-library';
+import { Button, Flex, Heading, SimpleGrid, Text } from '@invoke-ai/ui-library';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import { useControlNetOrT2IAdapterDefaultSettings } from 'features/modelManagerV2/hooks/useControlNetOrT2IAdapterDefaultSettings';
import { DefaultPreprocessor } from 'features/modelManagerV2/subpanels/ModelPanel/ControlNetOrT2IAdapterDefaultSettings/DefaultPreprocessor';
@@ -92,13 +92,9 @@ export const ControlNetOrT2IAdapterDefaultSettings = () => {
-
-
-
-
-
-
-
+
+
+
>
);
};
diff --git a/invokeai/frontend/web/src/features/modelManagerV2/subpanels/ModelPanel/MainModelDefaultSettings/MainModelDefaultSettings.tsx b/invokeai/frontend/web/src/features/modelManagerV2/subpanels/ModelPanel/MainModelDefaultSettings/MainModelDefaultSettings.tsx
index 0027bd12e3..e096b11209 100644
--- a/invokeai/frontend/web/src/features/modelManagerV2/subpanels/ModelPanel/MainModelDefaultSettings/MainModelDefaultSettings.tsx
+++ b/invokeai/frontend/web/src/features/modelManagerV2/subpanels/ModelPanel/MainModelDefaultSettings/MainModelDefaultSettings.tsx
@@ -1,4 +1,4 @@
-import { Button, Flex, Heading, Text } from '@invoke-ai/ui-library';
+import { Button, Flex, Heading, SimpleGrid, Text } from '@invoke-ai/ui-library';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import { useMainModelDefaultSettings } from 'features/modelManagerV2/hooks/useMainModelDefaultSettings';
import { DefaultHeight } from 'features/modelManagerV2/subpanels/ModelPanel/MainModelDefaultSettings/DefaultHeight';
@@ -122,40 +122,16 @@ export const MainModelDefaultSettings = () => {
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
+
+
+
+
+
+
+
+
+
+
>
);
};
diff --git a/invokeai/frontend/web/src/features/modelManagerV2/subpanels/ModelPanel/ModelEdit.tsx b/invokeai/frontend/web/src/features/modelManagerV2/subpanels/ModelPanel/ModelEdit.tsx
index c73b6e52ed..1f8e50b9da 100644
--- a/invokeai/frontend/web/src/features/modelManagerV2/subpanels/ModelPanel/ModelEdit.tsx
+++ b/invokeai/frontend/web/src/features/modelManagerV2/subpanels/ModelPanel/ModelEdit.tsx
@@ -6,6 +6,7 @@ import {
FormLabel,
Heading,
Input,
+ SimpleGrid,
Text,
Textarea,
} from '@invoke-ai/ui-library';
@@ -66,25 +67,21 @@ export const ModelEdit = ({ form }: Props) => {
{t('modelManager.modelSettings')}
-
+
{t('modelManager.baseModel')}
-
- {data.type === 'main' && data.format === 'checkpoint' && (
- <>
-
+
+ {t('modelManager.variant')}
+
+
+ {data.type === 'main' && data.format === 'checkpoint' && (
+ <>
{t('modelManager.pathToConfig')}
-
- {t('modelManager.variant')}
-
-
-
-
{t('modelManager.predictionType')}
@@ -93,9 +90,9 @@ export const ModelEdit = ({ form }: Props) => {
{t('modelManager.upcastAttention')}
-
- >
- )}
+ >
+ )}
+
diff --git a/invokeai/frontend/web/src/features/modelManagerV2/subpanels/ModelPanel/ModelView.tsx b/invokeai/frontend/web/src/features/modelManagerV2/subpanels/ModelPanel/ModelView.tsx
index 0618af5dd0..83ae94c9bb 100644
--- a/invokeai/frontend/web/src/features/modelManagerV2/subpanels/ModelPanel/ModelView.tsx
+++ b/invokeai/frontend/web/src/features/modelManagerV2/subpanels/ModelPanel/ModelView.tsx
@@ -1,4 +1,4 @@
-import { Box, Flex, Text } from '@invoke-ai/ui-library';
+import { Box, Flex, SimpleGrid, Text } from '@invoke-ai/ui-library';
import { skipToken } from '@reduxjs/toolkit/query';
import { useAppSelector } from 'app/store/storeHooks';
import { ControlNetOrT2IAdapterDefaultSettings } from 'features/modelManagerV2/subpanels/ModelPanel/ControlNetOrT2IAdapterDefaultSettings/ControlNetOrT2IAdapterDefaultSettings';
@@ -24,57 +24,32 @@ export const ModelView = () => {
return (
-
-
-
-
-
-
-
-
-
-
+
+
+
+
+
+ {data.type === 'main' && }
{data.type === 'main' && data.format === 'diffusers' && data.repo_variant && (
-
-
-
+
)}
-
{data.type === 'main' && data.format === 'checkpoint' && (
<>
-
-
-
-
-
-
-
-
+
+
+
>
)}
-
{data.type === 'ip_adapter' && data.format === 'invokeai' && (
-
-
-
+
)}
-
+
+
+
+ {data.type === 'main' && data.base !== 'sdxl-refiner' && }
+ {(data.type === 'controlnet' || data.type === 't2i_adapter') && }
+ {(data.type === 'main' || data.type === 'lora') && }
- {data.type === 'main' && data.base !== 'sdxl-refiner' && (
-
-
-
- )}
- {(data.type === 'controlnet' || data.type === 't2i_adapter') && (
-
-
-
- )}
- {(data.type === 'main' || data.type === 'lora') && (
-
-
-
- )}
);
};
diff --git a/invokeai/frontend/web/src/features/modelManagerV2/subpanels/ModelPanel/TriggerPhrases.tsx b/invokeai/frontend/web/src/features/modelManagerV2/subpanels/ModelPanel/TriggerPhrases.tsx
index ec707352c7..9352d7996c 100644
--- a/invokeai/frontend/web/src/features/modelManagerV2/subpanels/ModelPanel/TriggerPhrases.tsx
+++ b/invokeai/frontend/web/src/features/modelManagerV2/subpanels/ModelPanel/TriggerPhrases.tsx
@@ -77,9 +77,17 @@ export const TriggerPhrases = () => {
[updateModel, selectedModelKey, triggerPhrases]
);
+ const onTriggerPhraseAddFormSubmit = useCallback(
+ (e: React.FormEvent) => {
+ e.preventDefault();
+ addTriggerPhrase();
+ },
+ [addTriggerPhrase]
+ );
+
return (
-
);
-};
+});
+
+AspectRatioIconPreview.displayName = 'AspectRatioIconPreview';
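The `TriggerPhrases` change above adds a memoized submit handler that calls `preventDefault()` before dispatching the add action, so pressing Enter in the form doesn't reload the page. A minimal, React-free sketch of the same handler shape (the event type is stubbed; in the component this lives inside `useCallback`):

```typescript
// Sketch of the form-submit handler added in TriggerPhrases: suppress the
// browser's default <form> submission, then run the add action.
interface FormEventLike {
  defaultPrevented: boolean;
  preventDefault(): void;
}

const makeSubmitHandler =
  (addTriggerPhrase: () => void) =>
  (e: FormEventLike) => {
    e.preventDefault(); // keep the page from navigating on submit
    addTriggerPhrase();
  };

// usage
let added = 0;
const event: FormEventLike = {
  defaultPrevented: false,
  preventDefault() {
    this.defaultPrevented = true;
  },
};
makeSubmitHandler(() => added++)(event);
```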
diff --git a/invokeai/frontend/web/src/features/parameters/components/ImageSize/ImageSize.tsx b/invokeai/frontend/web/src/features/parameters/components/ImageSize/ImageSize.tsx
index 811578263a..0c702c77f1 100644
--- a/invokeai/frontend/web/src/features/parameters/components/ImageSize/ImageSize.tsx
+++ b/invokeai/frontend/web/src/features/parameters/components/ImageSize/ImageSize.tsx
@@ -1,6 +1,5 @@
import type { FormLabelProps } from '@invoke-ai/ui-library';
import { Flex, FormControlGroup } from '@invoke-ai/ui-library';
-import { AspectRatioPreview } from 'features/parameters/components/ImageSize/AspectRatioPreview';
import { AspectRatioSelect } from 'features/parameters/components/ImageSize/AspectRatioSelect';
import type { ImageSizeContextInnerValue } from 'features/parameters/components/ImageSize/ImageSizeContext';
import { ImageSizeContext } from 'features/parameters/components/ImageSize/ImageSizeContext';
@@ -13,10 +12,11 @@ import { memo } from 'react';
type ImageSizeProps = ImageSizeContextInnerValue & {
widthComponent: ReactNode;
heightComponent: ReactNode;
+ previewComponent: ReactNode;
};
export const ImageSize = memo((props: ImageSizeProps) => {
- const { widthComponent, heightComponent, ...ctx } = props;
+ const { widthComponent, heightComponent, previewComponent, ...ctx } = props;
return (
@@ -33,7 +33,7 @@ export const ImageSize = memo((props: ImageSizeProps) => {
-
+ {previewComponent}
diff --git a/invokeai/frontend/web/src/features/parameters/components/ImageSize/constants.ts b/invokeai/frontend/web/src/features/parameters/components/ImageSize/constants.ts
index b8c46005e6..0e435e795e 100644
--- a/invokeai/frontend/web/src/features/parameters/components/ImageSize/constants.ts
+++ b/invokeai/frontend/web/src/features/parameters/components/ImageSize/constants.ts
@@ -1,7 +1,6 @@
import type { ComboboxOption } from '@invoke-ai/ui-library';
import type { AspectRatioID, AspectRatioState } from './types';
-
// When the aspect ratio is between these two values, we show the icon (experimentally determined)
export const ICON_LOW_CUTOFF = 0.23;
export const ICON_HIGH_CUTOFF = 1 / ICON_LOW_CUTOFF;
@@ -25,7 +24,6 @@ export const ICON_CONTAINER_STYLES = {
alignItems: 'center',
justifyContent: 'center',
};
-
export const ASPECT_RATIO_OPTIONS: ComboboxOption[] = [
{ label: 'Free' as const, value: 'Free' },
{ label: '16:9' as const, value: '16:9' },
diff --git a/invokeai/frontend/web/src/features/parameters/store/generationSlice.ts b/invokeai/frontend/web/src/features/parameters/store/generationSlice.ts
index 0da6e21d9f..18180455ce 100644
--- a/invokeai/frontend/web/src/features/parameters/store/generationSlice.ts
+++ b/invokeai/frontend/web/src/features/parameters/store/generationSlice.ts
@@ -1,11 +1,7 @@
import type { PayloadAction } from '@reduxjs/toolkit';
import { createSlice } from '@reduxjs/toolkit';
import type { PersistConfig, RootState } from 'app/store/store';
-import { roundToMultiple } from 'common/util/roundDownToMultiple';
-import { isAnyControlAdapterAdded } from 'features/controlAdapters/store/controlAdaptersSlice';
-import { calculateNewSize } from 'features/parameters/components/ImageSize/calculateNewSize';
import { initialAspectRatioState } from 'features/parameters/components/ImageSize/constants';
-import type { AspectRatioState } from 'features/parameters/components/ImageSize/types';
import { CLIP_SKIP_MAP } from 'features/parameters/types/constants';
import type {
ParameterCanvasCoherenceMode,
@@ -16,7 +12,7 @@ import type {
ParameterScheduler,
ParameterVAEModel,
} from 'features/parameters/types/parameterSchemas';
-import { getIsSizeOptimal, getOptimalDimension } from 'features/parameters/util/optimalDimension';
+import { getOptimalDimension } from 'features/parameters/util/optimalDimension';
import { configChanged } from 'features/system/store/configSlice';
import { clamp } from 'lodash-es';
import type { RgbaColor } from 'react-colorful';
@@ -28,12 +24,9 @@ const initialGenerationState: GenerationState = {
_version: 2,
cfgScale: 7.5,
cfgRescaleMultiplier: 0,
- height: 512,
img2imgStrength: 0.75,
infillMethod: 'patchmatch',
iterations: 1,
- positivePrompt: '',
- negativePrompt: '',
scheduler: 'euler',
maskBlur: 16,
maskBlurMethod: 'box',
@@ -44,7 +37,6 @@ const initialGenerationState: GenerationState = {
shouldFitToWidthHeight: true,
shouldRandomizeSeed: true,
steps: 50,
- width: 512,
model: null,
vae: null,
vaePrecision: 'fp32',
@@ -53,7 +45,6 @@ const initialGenerationState: GenerationState = {
clipSkip: 0,
shouldUseCpuNoise: true,
shouldShowAdvancedOptions: false,
- aspectRatio: { ...initialAspectRatioState },
infillTileSize: 32,
infillPatchmatchDownscaleSize: 1,
infillMosaicTileWidth: 64,
@@ -67,12 +58,6 @@ export const generationSlice = createSlice({
name: 'generation',
initialState: initialGenerationState,
reducers: {
- setPositivePrompt: (state, action: PayloadAction) => {
- state.positivePrompt = action.payload;
- },
- setNegativePrompt: (state, action: PayloadAction) => {
- state.negativePrompt = action.payload;
- },
setIterations: (state, action: PayloadAction) => {
state.iterations = action.payload;
},
@@ -148,19 +133,6 @@ export const generationSlice = createSlice({
const { maxClip } = CLIP_SKIP_MAP[newModel.base];
state.clipSkip = clamp(state.clipSkip, 0, maxClip);
}
-
- if (action.meta.previousModel?.base === newModel.base) {
- // The base model hasn't changed, we don't need to optimize the size
- return;
- }
-
- const optimalDimension = getOptimalDimension(newModel);
- if (getIsSizeOptimal(state.width, state.height, optimalDimension)) {
- return;
- }
- const { width, height } = calculateNewSize(state.aspectRatio.value, optimalDimension * optimalDimension);
- state.width = width;
- state.height = height;
},
prepare: (payload: ParameterModel | null, previousModel?: ParameterModel | null) => ({
payload,
@@ -182,27 +154,6 @@ export const generationSlice = createSlice({
shouldUseCpuNoiseChanged: (state, action: PayloadAction) => {
state.shouldUseCpuNoise = action.payload;
},
- widthChanged: (state, action: PayloadAction) => {
- state.width = action.payload;
- },
- heightChanged: (state, action: PayloadAction) => {
- state.height = action.payload;
- },
- widthRecalled: (state, action: PayloadAction) => {
- state.width = action.payload;
- state.aspectRatio.value = action.payload / state.height;
- state.aspectRatio.id = 'Free';
- state.aspectRatio.isLocked = false;
- },
- heightRecalled: (state, action: PayloadAction) => {
- state.height = action.payload;
- state.aspectRatio.value = state.width / action.payload;
- state.aspectRatio.id = 'Free';
- state.aspectRatio.isLocked = false;
- },
- aspectRatioChanged: (state, action: PayloadAction) => {
- state.aspectRatio = action.payload;
- },
setInfillMethod: (state, action: PayloadAction) => {
state.infillMethod = action.payload;
},
@@ -237,15 +188,6 @@ export const generationSlice = createSlice({
state.vaePrecision = action.payload.sd.vaePrecision;
}
});
-
- // TODO: This is a temp fix to reduce issues with T2I adapter having a different downscaling
- // factor than the UNet. Hopefully we get an upstream fix in diffusers.
- builder.addMatcher(isAnyControlAdapterAdded, (state, action) => {
- if (action.payload.type === 't2i_adapter') {
- state.width = roundToMultiple(state.width, 64);
- state.height = roundToMultiple(state.height, 64);
- }
- });
},
selectors: {
selectOptimalDimension: (slice) => getOptimalDimension(slice.model),
@@ -259,8 +201,6 @@ export const {
setImg2imgStrength,
setInfillMethod,
setIterations,
- setPositivePrompt,
- setNegativePrompt,
setScheduler,
setMaskBlur,
setCanvasCoherenceMode,
@@ -278,11 +218,6 @@ export const {
setClipSkip,
shouldUseCpuNoiseChanged,
vaePrecisionChanged,
- aspectRatioChanged,
- widthChanged,
- heightChanged,
- widthRecalled,
- heightRecalled,
setInfillTileSize,
setInfillPatchmatchDownscaleSize,
setInfillMosaicTileWidth,
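The generation-slice hunks above delete `width`, `height`, `aspectRatio`, and the prompt fields; other diffs in this PR read them from `controlLayers.present` instead. That `present` access is the signature of an undo/redo-wrapped slice. An illustrative sketch (types and field names assumed for the example):

```typescript
// Why selectors now read controlLayers.present: the control-layers slice is
// wrapped in an undoable history, so the current value lives under `present`
// while `past`/`future` hold the undo/redo stacks.
interface Undoable<T> {
  past: T[];
  present: T;
  future: T[];
}

interface ControlLayersState {
  positivePrompt: string;
  size: { width: number; height: number };
}

const state: { controlLayers: Undoable<ControlLayersState> } = {
  controlLayers: {
    past: [],
    present: { positivePrompt: 'a cat', size: { width: 512, height: 512 } },
    future: [],
  },
};

const selectPositivePrompt = (s: typeof state) => s.controlLayers.present.positivePrompt;
const selectSize = (s: typeof state) => s.controlLayers.present.size;
```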
diff --git a/invokeai/frontend/web/src/features/parameters/store/types.ts b/invokeai/frontend/web/src/features/parameters/store/types.ts
index 773cfbf925..9314f8d076 100644
--- a/invokeai/frontend/web/src/features/parameters/store/types.ts
+++ b/invokeai/frontend/web/src/features/parameters/store/types.ts
@@ -1,21 +1,16 @@
import type { PayloadAction } from '@reduxjs/toolkit';
-import type { AspectRatioState } from 'features/parameters/components/ImageSize/types';
import type {
ParameterCanvasCoherenceMode,
ParameterCFGRescaleMultiplier,
ParameterCFGScale,
- ParameterHeight,
ParameterMaskBlurMethod,
ParameterModel,
- ParameterNegativePrompt,
- ParameterPositivePrompt,
ParameterPrecision,
ParameterScheduler,
ParameterSeed,
ParameterSteps,
ParameterStrength,
ParameterVAEModel,
- ParameterWidth,
} from 'features/parameters/types/parameterSchemas';
import type { RgbaColor } from 'react-colorful';
@@ -23,13 +18,10 @@ export interface GenerationState {
_version: 2;
cfgScale: ParameterCFGScale;
cfgRescaleMultiplier: ParameterCFGRescaleMultiplier;
- height: ParameterHeight;
img2imgStrength: ParameterStrength;
infillMethod: string;
initialImage?: { imageName: string; width: number; height: number };
iterations: number;
- positivePrompt: ParameterPositivePrompt;
- negativePrompt: ParameterNegativePrompt;
scheduler: ParameterScheduler;
maskBlur: number;
maskBlurMethod: ParameterMaskBlurMethod;
@@ -40,7 +32,6 @@ export interface GenerationState {
shouldFitToWidthHeight: boolean;
shouldRandomizeSeed: boolean;
steps: ParameterSteps;
- width: ParameterWidth;
model: ParameterModel | null;
vae: ParameterVAEModel | null;
vaePrecision: ParameterPrecision;
@@ -49,7 +40,6 @@ export interface GenerationState {
clipSkip: number;
shouldUseCpuNoise: boolean;
shouldShowAdvancedOptions: boolean;
- aspectRatio: AspectRatioState;
infillTileSize: number;
infillPatchmatchDownscaleSize: number;
infillMosaicTileWidth: number;
diff --git a/invokeai/frontend/web/src/features/parameters/types/parameterSchemas.ts b/invokeai/frontend/web/src/features/parameters/types/parameterSchemas.ts
index 75693cd47f..a18cc7f86d 100644
--- a/invokeai/frontend/web/src/features/parameters/types/parameterSchemas.ts
+++ b/invokeai/frontend/web/src/features/parameters/types/parameterSchemas.ts
@@ -196,3 +196,8 @@ const zLoRAWeight = z.number();
type ParameterLoRAWeight = z.infer;
export const isParameterLoRAWeight = (val: unknown): val is ParameterLoRAWeight => zLoRAWeight.safeParse(val).success;
// #endregion
+
+// #region Regional Prompts AutoNegative
+const zAutoNegative = z.enum(['off', 'invert']);
+export type ParameterAutoNegative = z.infer;
+// #endregion
diff --git a/invokeai/frontend/web/src/features/queue/components/QueueButtonTooltip.tsx b/invokeai/frontend/web/src/features/queue/components/QueueButtonTooltip.tsx
index 5d1b7264ea..f63e96c45f 100644
--- a/invokeai/frontend/web/src/features/queue/components/QueueButtonTooltip.tsx
+++ b/invokeai/frontend/web/src/features/queue/components/QueueButtonTooltip.tsx
@@ -2,19 +2,19 @@ import { Divider, Flex, ListItem, Text, UnorderedList } from '@invoke-ai/ui-libr
import { createSelector } from '@reduxjs/toolkit';
import { useAppSelector } from 'app/store/storeHooks';
import { useIsReadyToEnqueue } from 'common/hooks/useIsReadyToEnqueue';
+import { selectControlLayersSlice } from 'features/controlLayers/store/controlLayersSlice';
import { selectDynamicPromptsSlice } from 'features/dynamicPrompts/store/dynamicPromptsSlice';
import { getShouldProcessPrompt } from 'features/dynamicPrompts/util/getShouldProcessPrompt';
-import { selectGenerationSlice } from 'features/parameters/store/generationSlice';
import { memo, useMemo } from 'react';
import { useTranslation } from 'react-i18next';
import { useEnqueueBatchMutation } from 'services/api/endpoints/queue';
import { useBoardName } from 'services/api/hooks/useBoardName';
const selectPromptsCount = createSelector(
- selectGenerationSlice,
+ selectControlLayersSlice,
selectDynamicPromptsSlice,
- (generation, dynamicPrompts) =>
- getShouldProcessPrompt(generation.positivePrompt) ? dynamicPrompts.prompts.length : 1
+ (controlLayers, dynamicPrompts) =>
+ getShouldProcessPrompt(controlLayers.present.positivePrompt) ? dynamicPrompts.prompts.length : 1
);
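The `selectPromptsCount` change above re-points the selector from the generation slice to the undoable control-layers slice. A framework-free sketch of the same logic (`getShouldProcessPrompt` is stubbed as a simple dynamic-prompt syntax check; the real implementation may differ):

```typescript
// Cross-slice selector sketch: count queued prompts from dynamic-prompt
// expansion only when the positive prompt uses dynamic-prompt syntax.
interface AppState {
  controlLayers: { present: { positivePrompt: string } };
  dynamicPrompts: { prompts: string[] };
}

// Stub: treat any '{' as dynamic-prompt syntax (assumption for the example).
const getShouldProcessPrompt = (prompt: string) => prompt.includes('{');

const selectPromptsCount = (s: AppState) =>
  getShouldProcessPrompt(s.controlLayers.present.positivePrompt)
    ? s.dynamicPrompts.prompts.length
    : 1;

const s: AppState = {
  controlLayers: { present: { positivePrompt: '{red|blue} car' } },
  dynamicPrompts: { prompts: ['red car', 'blue car'] },
};
```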
type Props = {
diff --git a/invokeai/frontend/web/src/features/queue/components/QueueList/QueueItemDetail.tsx b/invokeai/frontend/web/src/features/queue/components/QueueList/QueueItemDetail.tsx
index a26a7d4360..b719ae0a92 100644
--- a/invokeai/frontend/web/src/features/queue/components/QueueList/QueueItemDetail.tsx
+++ b/invokeai/frontend/web/src/features/queue/components/QueueList/QueueItemDetail.tsx
@@ -3,6 +3,7 @@ import DataViewer from 'features/gallery/components/ImageMetadataViewer/DataView
import { useCancelBatch } from 'features/queue/hooks/useCancelBatch';
import { useCancelQueueItem } from 'features/queue/hooks/useCancelQueueItem';
import { getSecondsFromTimestamps } from 'features/queue/util/getSecondsFromTimestamps';
+import { get } from 'lodash-es';
import type { ReactNode } from 'react';
import { memo, useMemo } from 'react';
import { useTranslation } from 'react-i18next';
@@ -92,7 +93,15 @@ const QueueItemComponent = ({ queueItemDTO }: Props) => {
)}
- {queueItem ? : }
+ {queueItem ? (
+ get(data, 'session.graph') }]}
+ />
+ ) : (
+
+ )}
);
diff --git a/invokeai/frontend/web/src/features/sdxl/components/SDXLPrompts/ParamSDXLNegativeStylePrompt.tsx b/invokeai/frontend/web/src/features/sdxl/components/SDXLPrompts/ParamSDXLNegativeStylePrompt.tsx
index 067319817a..bba9e0b32d 100644
--- a/invokeai/frontend/web/src/features/sdxl/components/SDXLPrompts/ParamSDXLNegativeStylePrompt.tsx
+++ b/invokeai/frontend/web/src/features/sdxl/components/SDXLPrompts/ParamSDXLNegativeStylePrompt.tsx
@@ -1,22 +1,22 @@
import { Box, Textarea } from '@invoke-ai/ui-library';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
+import { negativePrompt2Changed } from 'features/controlLayers/store/controlLayersSlice';
import { PromptOverlayButtonWrapper } from 'features/parameters/components/Prompts/PromptOverlayButtonWrapper';
import { AddPromptTriggerButton } from 'features/prompt/AddPromptTriggerButton';
import { PromptPopover } from 'features/prompt/PromptPopover';
import { usePrompt } from 'features/prompt/usePrompt';
-import { setNegativeStylePromptSDXL } from 'features/sdxl/store/sdxlSlice';
import { memo, useCallback, useRef } from 'react';
import { useHotkeys } from 'react-hotkeys-hook';
import { useTranslation } from 'react-i18next';
export const ParamSDXLNegativeStylePrompt = memo(() => {
const dispatch = useAppDispatch();
- const prompt = useAppSelector((s) => s.sdxl.negativeStylePrompt);
+ const prompt = useAppSelector((s) => s.controlLayers.present.negativePrompt2);
const textareaRef = useRef(null);
const { t } = useTranslation();
const handleChange = useCallback(
(v: string) => {
- dispatch(setNegativeStylePromptSDXL(v));
+ dispatch(negativePrompt2Changed(v));
},
[dispatch]
);
diff --git a/invokeai/frontend/web/src/features/sdxl/components/SDXLPrompts/ParamSDXLPositiveStylePrompt.tsx b/invokeai/frontend/web/src/features/sdxl/components/SDXLPrompts/ParamSDXLPositiveStylePrompt.tsx
index 6fc302cd9c..3828136c74 100644
--- a/invokeai/frontend/web/src/features/sdxl/components/SDXLPrompts/ParamSDXLPositiveStylePrompt.tsx
+++ b/invokeai/frontend/web/src/features/sdxl/components/SDXLPrompts/ParamSDXLPositiveStylePrompt.tsx
@@ -1,21 +1,21 @@
import { Box, Textarea } from '@invoke-ai/ui-library';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
+import { positivePrompt2Changed } from 'features/controlLayers/store/controlLayersSlice';
import { PromptOverlayButtonWrapper } from 'features/parameters/components/Prompts/PromptOverlayButtonWrapper';
import { AddPromptTriggerButton } from 'features/prompt/AddPromptTriggerButton';
import { PromptPopover } from 'features/prompt/PromptPopover';
import { usePrompt } from 'features/prompt/usePrompt';
-import { setPositiveStylePromptSDXL } from 'features/sdxl/store/sdxlSlice';
import { memo, useCallback, useRef } from 'react';
import { useTranslation } from 'react-i18next';
export const ParamSDXLPositiveStylePrompt = memo(() => {
const dispatch = useAppDispatch();
- const prompt = useAppSelector((s) => s.sdxl.positiveStylePrompt);
+ const prompt = useAppSelector((s) => s.controlLayers.present.positivePrompt2);
const textareaRef = useRef(null);
const { t } = useTranslation();
const handleChange = useCallback(
(v: string) => {
- dispatch(setPositiveStylePromptSDXL(v));
+ dispatch(positivePrompt2Changed(v));
},
[dispatch]
);
diff --git a/invokeai/frontend/web/src/features/sdxl/components/SDXLPrompts/SDXLConcatButton.tsx b/invokeai/frontend/web/src/features/sdxl/components/SDXLPrompts/SDXLConcatButton.tsx
index 31df7d62d0..0af3dfcee4 100644
--- a/invokeai/frontend/web/src/features/sdxl/components/SDXLPrompts/SDXLConcatButton.tsx
+++ b/invokeai/frontend/web/src/features/sdxl/components/SDXLPrompts/SDXLConcatButton.tsx
@@ -1,23 +1,23 @@
import { IconButton, Tooltip } from '@invoke-ai/ui-library';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
-import { setShouldConcatSDXLStylePrompt } from 'features/sdxl/store/sdxlSlice';
+import { shouldConcatPromptsChanged } from 'features/controlLayers/store/controlLayersSlice';
import { memo, useCallback, useMemo } from 'react';
import { useTranslation } from 'react-i18next';
import { PiLinkSimpleBold, PiLinkSimpleBreakBold } from 'react-icons/pi';
export const SDXLConcatButton = memo(() => {
- const shouldConcatSDXLStylePrompt = useAppSelector((s) => s.sdxl.shouldConcatSDXLStylePrompt);
+ const shouldConcatPrompts = useAppSelector((s) => s.controlLayers.present.shouldConcatPrompts);
const dispatch = useAppDispatch();
const { t } = useTranslation();
const handleShouldConcatPromptChange = useCallback(() => {
- dispatch(setShouldConcatSDXLStylePrompt(!shouldConcatSDXLStylePrompt));
- }, [dispatch, shouldConcatSDXLStylePrompt]);
+ dispatch(shouldConcatPromptsChanged(!shouldConcatPrompts));
+ }, [dispatch, shouldConcatPrompts]);
const label = useMemo(
- () => (shouldConcatSDXLStylePrompt ? t('sdxl.concatPromptStyle') : t('sdxl.freePromptStyle')),
- [shouldConcatSDXLStylePrompt, t]
+ () => (shouldConcatPrompts ? t('sdxl.concatPromptStyle') : t('sdxl.freePromptStyle')),
+ [shouldConcatPrompts, t]
);
return (
@@ -25,7 +25,7 @@ export const SDXLConcatButton = memo(() => {
: }
+ icon={shouldConcatPrompts ? : }
variant="promptOverlay"
fontSize={12}
px={0.5}
diff --git a/invokeai/frontend/web/src/features/sdxl/components/SDXLPrompts/SDXLPrompts.tsx b/invokeai/frontend/web/src/features/sdxl/components/SDXLPrompts/SDXLPrompts.tsx
index 4aca9a85a6..b585e92a5f 100644
--- a/invokeai/frontend/web/src/features/sdxl/components/SDXLPrompts/SDXLPrompts.tsx
+++ b/invokeai/frontend/web/src/features/sdxl/components/SDXLPrompts/SDXLPrompts.tsx
@@ -8,13 +8,13 @@ import { ParamSDXLNegativeStylePrompt } from './ParamSDXLNegativeStylePrompt';
import { ParamSDXLPositiveStylePrompt } from './ParamSDXLPositiveStylePrompt';
export const SDXLPrompts = memo(() => {
- const shouldConcatSDXLStylePrompt = useAppSelector((s) => s.sdxl.shouldConcatSDXLStylePrompt);
+ const shouldConcatPrompts = useAppSelector((s) => s.controlLayers.present.shouldConcatPrompts);
return (
- {!shouldConcatSDXLStylePrompt && }
+ {!shouldConcatPrompts && }
- {!shouldConcatSDXLStylePrompt && }
+ {!shouldConcatPrompts && }
);
});
diff --git a/invokeai/frontend/web/src/features/sdxl/store/sdxlSlice.ts b/invokeai/frontend/web/src/features/sdxl/store/sdxlSlice.ts
index 91e1418e1d..10a8f861f1 100644
--- a/invokeai/frontend/web/src/features/sdxl/store/sdxlSlice.ts
+++ b/invokeai/frontend/web/src/features/sdxl/store/sdxlSlice.ts
@@ -1,18 +1,10 @@
import type { PayloadAction } from '@reduxjs/toolkit';
import { createSlice } from '@reduxjs/toolkit';
import type { PersistConfig, RootState } from 'app/store/store';
-import type {
- ParameterNegativeStylePromptSDXL,
- ParameterPositiveStylePromptSDXL,
- ParameterScheduler,
- ParameterSDXLRefinerModel,
-} from 'features/parameters/types/parameterSchemas';
+import type { ParameterScheduler, ParameterSDXLRefinerModel } from 'features/parameters/types/parameterSchemas';
type SDXLState = {
_version: 2;
- positiveStylePrompt: ParameterPositiveStylePromptSDXL;
- negativeStylePrompt: ParameterNegativeStylePromptSDXL;
- shouldConcatSDXLStylePrompt: boolean;
refinerModel: ParameterSDXLRefinerModel | null;
refinerSteps: number;
refinerCFGScale: number;
@@ -24,9 +16,6 @@ type SDXLState = {
const initialSDXLState: SDXLState = {
_version: 2,
- positiveStylePrompt: '',
- negativeStylePrompt: '',
- shouldConcatSDXLStylePrompt: true,
refinerModel: null,
refinerSteps: 20,
refinerCFGScale: 7.5,
@@ -40,15 +29,6 @@ export const sdxlSlice = createSlice({
name: 'sdxl',
initialState: initialSDXLState,
reducers: {
- setPositiveStylePromptSDXL: (state, action: PayloadAction) => {
- state.positiveStylePrompt = action.payload;
- },
- setNegativeStylePromptSDXL: (state, action: PayloadAction) => {
- state.negativeStylePrompt = action.payload;
- },
- setShouldConcatSDXLStylePrompt: (state, action: PayloadAction) => {
- state.shouldConcatSDXLStylePrompt = action.payload;
- },
refinerModelChanged: (state, action: PayloadAction) => {
state.refinerModel = action.payload;
},
@@ -74,9 +54,6 @@ export const sdxlSlice = createSlice({
});
export const {
- setPositiveStylePromptSDXL,
- setNegativeStylePromptSDXL,
- setShouldConcatSDXLStylePrompt,
refinerModelChanged,
setRefinerSteps,
setRefinerCFGScale,
diff --git a/invokeai/frontend/web/src/features/settingsAccordions/components/ControlSettingsAccordion/ControlSettingsAccordion.tsx b/invokeai/frontend/web/src/features/settingsAccordions/components/ControlSettingsAccordion/ControlSettingsAccordion.tsx
index ec81b0b211..d072cfde0f 100644
--- a/invokeai/frontend/web/src/features/settingsAccordions/components/ControlSettingsAccordion/ControlSettingsAccordion.tsx
+++ b/invokeai/frontend/web/src/features/settingsAccordions/components/ControlSettingsAccordion/ControlSettingsAccordion.tsx
@@ -13,51 +13,66 @@ import {
selectValidIPAdapters,
selectValidT2IAdapters,
} from 'features/controlAdapters/store/controlAdaptersSlice';
+import { selectAllControlAdapterIds, selectControlLayersSlice } from 'features/controlLayers/store/controlLayersSlice';
import { useStandaloneAccordionToggle } from 'features/settingsAccordions/hooks/useStandaloneAccordionToggle';
import { useFeatureStatus } from 'features/system/hooks/useFeatureStatus';
import { Fragment, memo } from 'react';
import { useTranslation } from 'react-i18next';
import { PiPlusBold } from 'react-icons/pi';
-const selector = createMemoizedSelector(selectControlAdaptersSlice, (controlAdapters) => {
- const badges: string[] = [];
- let isError = false;
+const selector = createMemoizedSelector(
+ [selectControlAdaptersSlice, selectControlLayersSlice],
+ (controlAdapters, controlLayers) => {
+ const badges: string[] = [];
+ let isError = false;
- const enabledIPAdapterCount = selectAllIPAdapters(controlAdapters).filter((ca) => ca.isEnabled).length;
- const validIPAdapterCount = selectValidIPAdapters(controlAdapters).length;
- if (enabledIPAdapterCount > 0) {
- badges.push(`${enabledIPAdapterCount} IP`);
- }
- if (enabledIPAdapterCount > validIPAdapterCount) {
- isError = true;
- }
+ const controlLayersAdapterIds = selectAllControlAdapterIds(controlLayers.present);
- const enabledControlNetCount = selectAllControlNets(controlAdapters).filter((ca) => ca.isEnabled).length;
- const validControlNetCount = selectValidControlNets(controlAdapters).length;
- if (enabledControlNetCount > 0) {
- badges.push(`${enabledControlNetCount} ControlNet`);
- }
- if (enabledControlNetCount > validControlNetCount) {
- isError = true;
- }
+ const enabledNonRegionalIPAdapterCount = selectAllIPAdapters(controlAdapters)
+ .filter((ca) => !controlLayersAdapterIds.includes(ca.id))
+ .filter((ca) => ca.isEnabled).length;
- const enabledT2IAdapterCount = selectAllT2IAdapters(controlAdapters).filter((ca) => ca.isEnabled).length;
- const validT2IAdapterCount = selectValidT2IAdapters(controlAdapters).length;
- if (enabledT2IAdapterCount > 0) {
- badges.push(`${enabledT2IAdapterCount} T2I`);
- }
- if (enabledT2IAdapterCount > validT2IAdapterCount) {
- isError = true;
- }
+ const validIPAdapterCount = selectValidIPAdapters(controlAdapters).length;
+ if (enabledNonRegionalIPAdapterCount > 0) {
+ badges.push(`${enabledNonRegionalIPAdapterCount} IP`);
+ }
+ if (enabledNonRegionalIPAdapterCount > validIPAdapterCount) {
+ isError = true;
+ }
- const controlAdapterIds = selectControlAdapterIds(controlAdapters);
+ const enabledControlNetCount = selectAllControlNets(controlAdapters)
+ .filter((ca) => !controlLayersAdapterIds.includes(ca.id))
+ .filter((ca) => ca.isEnabled).length;
+ const validControlNetCount = selectValidControlNets(controlAdapters).length;
+ if (enabledControlNetCount > 0) {
+ badges.push(`${enabledControlNetCount} ControlNet`);
+ }
+ if (enabledControlNetCount > validControlNetCount) {
+ isError = true;
+ }
- return {
- controlAdapterIds,
- badges,
- isError, // TODO: Add some visual indicator that the control adapters are in an error state
- };
-});
+ const enabledT2IAdapterCount = selectAllT2IAdapters(controlAdapters)
+ .filter((ca) => !controlLayersAdapterIds.includes(ca.id))
+ .filter((ca) => ca.isEnabled).length;
+ const validT2IAdapterCount = selectValidT2IAdapters(controlAdapters).length;
+ if (enabledT2IAdapterCount > 0) {
+ badges.push(`${enabledT2IAdapterCount} T2I`);
+ }
+ if (enabledT2IAdapterCount > validT2IAdapterCount) {
+ isError = true;
+ }
+
+ const controlAdapterIds = selectControlAdapterIds(controlAdapters).filter(
+ (id) => !controlLayersAdapterIds.includes(id)
+ );
+
+ return {
+ controlAdapterIds,
+ badges,
+ isError, // TODO: Add some visual indicator that the control adapters are in an error state
+ };
+ }
+);
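The selector rewrite above excludes adapters that now belong to control layers before computing the badge counts, applying the same two-step filter to IP adapters, ControlNets, and T2I adapters. The shared counting step, sketched with illustrative types and data:

```typescript
// Sketch of the badge-count filter: drop adapters owned by control layers,
// then count the remaining enabled ones.
interface Adapter {
  id: string;
  isEnabled: boolean;
}

const countEnabledNonRegional = (adapters: Adapter[], controlLayersAdapterIds: string[]) =>
  adapters
    .filter((ca) => !controlLayersAdapterIds.includes(ca.id))
    .filter((ca) => ca.isEnabled).length;

const adapters: Adapter[] = [
  { id: 'a', isEnabled: true },
  { id: 'b', isEnabled: true },
  { id: 'c', isEnabled: false },
];
// 'b' is excluded as a control-layers adapter; 'c' is disabled; only 'a' counts.
const count = countEnabledNonRegional(adapters, ['b']);
```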
export const ControlSettingsAccordion: React.FC = memo(() => {
const { t } = useTranslation();
diff --git a/invokeai/frontend/web/src/features/settingsAccordions/components/ImageSettingsAccordion/ImageSettingsAccordion.tsx b/invokeai/frontend/web/src/features/settingsAccordions/components/ImageSettingsAccordion/ImageSettingsAccordion.tsx
index 125a611876..bb9cfd36ce 100644
--- a/invokeai/frontend/web/src/features/settingsAccordions/components/ImageSettingsAccordion/ImageSettingsAccordion.tsx
+++ b/invokeai/frontend/web/src/features/settingsAccordions/components/ImageSettingsAccordion/ImageSettingsAccordion.tsx
@@ -3,6 +3,7 @@ import { Expander, Flex, FormControlGroup, StandaloneAccordion } from '@invoke-a
import { createMemoizedSelector } from 'app/store/createMemoizedSelector';
import { useAppSelector } from 'app/store/storeHooks';
import { selectCanvasSlice } from 'features/canvas/store/canvasSlice';
+import { selectControlLayersSlice } from 'features/controlLayers/store/controlLayersSlice';
import { HrfSettings } from 'features/hrf/components/HrfSettings';
import { selectHrfSlice } from 'features/hrf/store/hrfSlice';
import ParamScaleBeforeProcessing from 'features/parameters/components/Canvas/InfillAndScaling/ParamScaleBeforeProcessing';
@@ -24,8 +25,8 @@ import { ImageSizeCanvas } from './ImageSizeCanvas';
import { ImageSizeLinear } from './ImageSizeLinear';
const selector = createMemoizedSelector(
- [selectGenerationSlice, selectCanvasSlice, selectHrfSlice, activeTabNameSelector],
- (generation, canvas, hrf, activeTabName) => {
+ [selectGenerationSlice, selectCanvasSlice, selectHrfSlice, selectControlLayersSlice, activeTabNameSelector],
+ (generation, canvas, hrf, controlLayers, activeTabName) => {
const { shouldRandomizeSeed, model } = generation;
const { hrfEnabled } = hrf;
const badges: string[] = [];
@@ -42,7 +43,7 @@ const selector = createMemoizedSelector(
badges.push('locked');
}
} else {
- const { aspectRatio, width, height } = generation;
+ const { aspectRatio, width, height } = controlLayers.present.size;
badges.push(`${width}×${height}`);
badges.push(aspectRatio.id);
if (aspectRatio.isLocked) {
diff --git a/invokeai/frontend/web/src/features/settingsAccordions/components/ImageSettingsAccordion/ImageSizeCanvas.tsx b/invokeai/frontend/web/src/features/settingsAccordions/components/ImageSettingsAccordion/ImageSizeCanvas.tsx
index eaf7b25730..878174fe75 100644
--- a/invokeai/frontend/web/src/features/settingsAccordions/components/ImageSettingsAccordion/ImageSizeCanvas.tsx
+++ b/invokeai/frontend/web/src/features/settingsAccordions/components/ImageSettingsAccordion/ImageSizeCanvas.tsx
@@ -2,6 +2,7 @@ import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import { aspectRatioChanged, setBoundingBoxDimensions } from 'features/canvas/store/canvasSlice';
import ParamBoundingBoxHeight from 'features/parameters/components/Canvas/BoundingBox/ParamBoundingBoxHeight';
import ParamBoundingBoxWidth from 'features/parameters/components/Canvas/BoundingBox/ParamBoundingBoxWidth';
+import { AspectRatioIconPreview } from 'features/parameters/components/ImageSize/AspectRatioIconPreview';
import { ImageSize } from 'features/parameters/components/ImageSize/ImageSize';
import type { AspectRatioState } from 'features/parameters/components/ImageSize/types';
import { selectOptimalDimension } from 'features/parameters/store/generationSlice';
@@ -41,6 +42,7 @@ export const ImageSizeCanvas = memo(() => {
aspectRatioState={aspectRatioState}
        heightComponent={<ParamBoundingBoxHeight />}
        widthComponent={<ParamBoundingBoxWidth />}
+        previewComponent={<AspectRatioIconPreview />}
onChangeAspectRatioState={onChangeAspectRatioState}
onChangeWidth={onChangeWidth}
onChangeHeight={onChangeHeight}
diff --git a/invokeai/frontend/web/src/features/settingsAccordions/components/ImageSettingsAccordion/ImageSizeLinear.tsx b/invokeai/frontend/web/src/features/settingsAccordions/components/ImageSettingsAccordion/ImageSizeLinear.tsx
index 9d5d2eb284..7e436556da 100644
--- a/invokeai/frontend/web/src/features/settingsAccordions/components/ImageSettingsAccordion/ImageSizeLinear.tsx
+++ b/invokeai/frontend/web/src/features/settingsAccordions/components/ImageSettingsAccordion/ImageSizeLinear.tsx
@@ -1,27 +1,31 @@
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
+import { aspectRatioChanged, heightChanged, widthChanged } from 'features/controlLayers/store/controlLayersSlice';
import { ParamHeight } from 'features/parameters/components/Core/ParamHeight';
import { ParamWidth } from 'features/parameters/components/Core/ParamWidth';
+import { AspectRatioCanvasPreview } from 'features/parameters/components/ImageSize/AspectRatioCanvasPreview';
+import { AspectRatioIconPreview } from 'features/parameters/components/ImageSize/AspectRatioIconPreview';
import { ImageSize } from 'features/parameters/components/ImageSize/ImageSize';
import type { AspectRatioState } from 'features/parameters/components/ImageSize/types';
-import { aspectRatioChanged, heightChanged, widthChanged } from 'features/parameters/store/generationSlice';
+import { activeTabNameSelector } from 'features/ui/store/uiSelectors';
import { memo, useCallback } from 'react';
export const ImageSizeLinear = memo(() => {
const dispatch = useAppDispatch();
- const width = useAppSelector((s) => s.generation.width);
- const height = useAppSelector((s) => s.generation.height);
- const aspectRatioState = useAppSelector((s) => s.generation.aspectRatio);
+ const tab = useAppSelector(activeTabNameSelector);
+ const width = useAppSelector((s) => s.controlLayers.present.size.width);
+ const height = useAppSelector((s) => s.controlLayers.present.size.height);
+ const aspectRatioState = useAppSelector((s) => s.controlLayers.present.size.aspectRatio);
const onChangeWidth = useCallback(
(width: number) => {
- dispatch(widthChanged(width));
+ dispatch(widthChanged({ width }));
},
[dispatch]
);
const onChangeHeight = useCallback(
(height: number) => {
- dispatch(heightChanged(height));
+ dispatch(heightChanged({ height }));
},
[dispatch]
);
@@ -40,6 +44,7 @@ export const ImageSizeLinear = memo(() => {
aspectRatioState={aspectRatioState}
        heightComponent={<ParamHeight />}
        widthComponent={<ParamWidth />}
+        previewComponent={tab === 'txt2img' ? <AspectRatioCanvasPreview /> : <AspectRatioIconPreview />}
onChangeAspectRatioState={onChangeAspectRatioState}
onChangeWidth={onChangeWidth}
onChangeHeight={onChangeHeight}
diff --git a/invokeai/frontend/web/src/features/ui/components/InvokeTabs.tsx b/invokeai/frontend/web/src/features/ui/components/InvokeTabs.tsx
index cb49696dbf..9ac5324d41 100644
--- a/invokeai/frontend/web/src/features/ui/components/InvokeTabs.tsx
+++ b/invokeai/frontend/web/src/features/ui/components/InvokeTabs.tsx
@@ -11,6 +11,7 @@ import StatusIndicator from 'features/system/components/StatusIndicator';
import { selectConfigSlice } from 'features/system/store/configSlice';
import FloatingGalleryButton from 'features/ui/components/FloatingGalleryButton';
import FloatingParametersPanelButtons from 'features/ui/components/FloatingParametersPanelButtons';
+import ParametersPanelTextToImage from 'features/ui/components/ParametersPanelTextToImage';
import type { UsePanelOptions } from 'features/ui/hooks/usePanel';
import { usePanel } from 'features/ui/hooks/usePanel';
import { usePanelStorage } from 'features/ui/hooks/usePanelStorage';
@@ -249,7 +250,7 @@ const InvokeTabs = () => {
onExpand={optionsPanel.onExpand}
collapsible
>
-          {activeTabName === 'nodes' ? <NodeEditorPanelGroup /> : <ParametersPanel />}
+          <ParametersPanelComponent />
{
};
export default memo(InvokeTabs);
+
+const ParametersPanelComponent = memo(() => {
+ const activeTabName = useAppSelector(activeTabNameSelector);
+
+  if (activeTabName === 'nodes') {
+    return <NodeEditorPanelGroup />;
+  }
+  if (activeTabName === 'txt2img') {
+    return <ParametersPanelTextToImage />;
+  }
+  return <ParametersPanel />;
+});
+ParametersPanelComponent.displayName = 'ParametersPanelComponent';
diff --git a/invokeai/frontend/web/src/features/ui/components/ParametersPanel.tsx b/invokeai/frontend/web/src/features/ui/components/ParametersPanel.tsx
index a74d132bd6..b8d35976e3 100644
--- a/invokeai/frontend/web/src/features/ui/components/ParametersPanel.tsx
+++ b/invokeai/frontend/web/src/features/ui/components/ParametersPanel.tsx
@@ -34,7 +34,7 @@ const ParametersPanel = () => {
        {isSDXL ? <SDXLPrompts /> : <Prompts />}
-        <ControlSettingsAccordion />
+        {activeTabName !== 'txt2img' && <ControlSettingsAccordion />}
        {activeTabName === 'unifiedCanvas' && <CompositingSettingsAccordion />}
        {isSDXL && <RefinerSettingsAccordion />}
diff --git a/invokeai/frontend/web/src/features/ui/components/ParametersPanelTextToImage.tsx b/invokeai/frontend/web/src/features/ui/components/ParametersPanelTextToImage.tsx
new file mode 100644
index 0000000000..2d14a50856
--- /dev/null
+++ b/invokeai/frontend/web/src/features/ui/components/ParametersPanelTextToImage.tsx
@@ -0,0 +1,70 @@
+import { Box, Flex, Tab, TabList, TabPanel, TabPanels, Tabs } from '@invoke-ai/ui-library';
+import { useAppSelector } from 'app/store/storeHooks';
+import { overlayScrollbarsParams } from 'common/components/OverlayScrollbars/constants';
+import { ControlLayersPanelContent } from 'features/controlLayers/components/ControlLayersPanelContent';
+import { useControlLayersTitle } from 'features/controlLayers/hooks/useControlLayersTitle';
+import { Prompts } from 'features/parameters/components/Prompts/Prompts';
+import QueueControls from 'features/queue/components/QueueControls';
+import { SDXLPrompts } from 'features/sdxl/components/SDXLPrompts/SDXLPrompts';
+import { AdvancedSettingsAccordion } from 'features/settingsAccordions/components/AdvancedSettingsAccordion/AdvancedSettingsAccordion';
+import { CompositingSettingsAccordion } from 'features/settingsAccordions/components/CompositingSettingsAccordion/CompositingSettingsAccordion';
+import { ControlSettingsAccordion } from 'features/settingsAccordions/components/ControlSettingsAccordion/ControlSettingsAccordion';
+import { GenerationSettingsAccordion } from 'features/settingsAccordions/components/GenerationSettingsAccordion/GenerationSettingsAccordion';
+import { ImageSettingsAccordion } from 'features/settingsAccordions/components/ImageSettingsAccordion/ImageSettingsAccordion';
+import { RefinerSettingsAccordion } from 'features/settingsAccordions/components/RefinerSettingsAccordion/RefinerSettingsAccordion';
+import { activeTabNameSelector } from 'features/ui/store/uiSelectors';
+import { OverlayScrollbarsComponent } from 'overlayscrollbars-react';
+import type { CSSProperties } from 'react';
+import { memo } from 'react';
+import { useTranslation } from 'react-i18next';
+
+const overlayScrollbarsStyles: CSSProperties = {
+ height: '100%',
+ width: '100%',
+};
+
+const ParametersPanelTextToImage = () => {
+ const { t } = useTranslation();
+ const activeTabName = useAppSelector(activeTabNameSelector);
+ const controlLayersTitle = useControlLayersTitle();
+ const isSDXL = useAppSelector((s) => s.generation.model?.base === 'sdxl');
+
+  return (
+    <Flex w="full" h="full" flexDir="column" gap={2}>
+      <QueueControls />
+      <Flex w="full" h="full" position="relative">
+        <Box position="absolute" top={0} left={0} right={0} bottom={0}>
+          <OverlayScrollbarsComponent defer style={overlayScrollbarsStyles} options={overlayScrollbarsParams.options}>
+            <Flex gap={2} flexDirection="column" h="full" w="full">
+              {isSDXL ? <SDXLPrompts /> : <Prompts />}
+              <Tabs variant="line" isLazy={true} display="flex" flexDir="column" w="full" h="full">
+                <TabList>
+                  <Tab>{t('common.settingsLabel')}</Tab>
+                  <Tab>{controlLayersTitle}</Tab>
+                </TabList>
+                <TabPanels w="full" h="full">
+                  <TabPanel p={0} w="full" h="full">
+                    <Flex gap={2} flexDirection="column" h="full" w="full">
+                      <ImageSettingsAccordion />
+                      <GenerationSettingsAccordion />
+                      {activeTabName !== 'txt2img' && <ControlSettingsAccordion />}
+                      {activeTabName === 'unifiedCanvas' && <CompositingSettingsAccordion />}
+                      {isSDXL && <RefinerSettingsAccordion />}
+                      <AdvancedSettingsAccordion />
+                    </Flex>
+                  </TabPanel>
+                  <TabPanel p={0} w="full" h="full">
+                    <ControlLayersPanelContent />
+                  </TabPanel>
+                </TabPanels>
+              </Tabs>
+            </Flex>
+          </OverlayScrollbarsComponent>
+        </Box>
+      </Flex>
+    </Flex>
+  );
+};
+
+export default memo(ParametersPanelTextToImage);
diff --git a/invokeai/frontend/web/src/features/ui/components/tabs/ImageToImageTab.tsx b/invokeai/frontend/web/src/features/ui/components/tabs/ImageToImageTab.tsx
index dcacdbdff4..07e87d202c 100644
--- a/invokeai/frontend/web/src/features/ui/components/tabs/ImageToImageTab.tsx
+++ b/invokeai/frontend/web/src/features/ui/components/tabs/ImageToImageTab.tsx
@@ -1,7 +1,7 @@
-import { Box } from '@invoke-ai/ui-library';
+import { Box, Flex } from '@invoke-ai/ui-library';
+import CurrentImageDisplay from 'features/gallery/components/CurrentImage/CurrentImageDisplay';
import InitialImageDisplay from 'features/parameters/components/ImageToImage/InitialImageDisplay';
import ResizeHandle from 'features/ui/components/tabs/ResizeHandle';
-import TextToImageTabMain from 'features/ui/components/tabs/TextToImageTab';
import { usePanelStorage } from 'features/ui/hooks/usePanelStorage';
import type { CSSProperties } from 'react';
import { memo, useCallback, useRef } from 'react';
@@ -42,7 +42,11 @@ const ImageToImageTab = () => {
-        <TextToImageTabMain />
+        <Flex w="full" h="full" flexDir="column" gap={2}>
+          <Box position="relative" w="full" h="full">
+            <CurrentImageDisplay />
+          </Box>
+        </Flex>
diff --git a/invokeai/frontend/web/src/features/ui/components/tabs/TextToImageTab.tsx b/invokeai/frontend/web/src/features/ui/components/tabs/TextToImageTab.tsx
index fd38514edc..f9b760bcd5 100644
--- a/invokeai/frontend/web/src/features/ui/components/tabs/TextToImageTab.tsx
+++ b/invokeai/frontend/web/src/features/ui/components/tabs/TextToImageTab.tsx
@@ -1,13 +1,31 @@
-import { Box, Flex } from '@invoke-ai/ui-library';
+import { Box, Tab, TabList, TabPanel, TabPanels, Tabs } from '@invoke-ai/ui-library';
+import { ControlLayersEditor } from 'features/controlLayers/components/ControlLayersEditor';
+import { useControlLayersTitle } from 'features/controlLayers/hooks/useControlLayersTitle';
import CurrentImageDisplay from 'features/gallery/components/CurrentImage/CurrentImageDisplay';
import { memo } from 'react';
+import { useTranslation } from 'react-i18next';
const TextToImageTab = () => {
+ const { t } = useTranslation();
+ const controlLayersTitle = useControlLayersTitle();
+
return (
-    <Flex layerStyle="first" w="full" h="full" p={2} borderRadius="base">
-      <CurrentImageDisplay />
-    </Flex>
+    <Tabs variant="line" isLazy={true} display="flex" flexDir="column" w="full" h="full">
+      <TabList>
+        <Tab>{t('common.viewer')}</Tab>
+        <Tab>{controlLayersTitle}</Tab>
+      </TabList>
+      <TabPanels w="full" h="full">
+        <TabPanel p={0} w="full" h="full">
+          <Box position="relative" w="full" h="full">
+            <CurrentImageDisplay />
+          </Box>
+        </TabPanel>
+        <TabPanel p={0} w="full" h="full">
+          <ControlLayersEditor />
+        </TabPanel>
+      </TabPanels>
+    </Tabs>
);
};
diff --git a/invokeai/frontend/web/src/features/ui/hooks/usePanel.ts b/invokeai/frontend/web/src/features/ui/hooks/usePanel.ts
index efb0f3cdd1..f9ebe97064 100644
--- a/invokeai/frontend/web/src/features/ui/hooks/usePanel.ts
+++ b/invokeai/frontend/web/src/features/ui/hooks/usePanel.ts
@@ -124,7 +124,9 @@ export const usePanel = (arg: UsePanelOptions): UsePanelReturn => {
*
* For now, we'll just resize the panel to the min size every time the panel group is resized.
*/
- panelHandleRef.current.resize(minSizePct);
+ if (!panelHandleRef.current.isCollapsed()) {
+ panelHandleRef.current.resize(minSizePct);
+ }
});
resizeObserver.observe(panelGroupElement);
diff --git a/invokeai/frontend/web/src/services/api/hooks/useGetModelConfigWithTypeGuard.ts b/invokeai/frontend/web/src/services/api/hooks/useGetModelConfigWithTypeGuard.ts
index 6de2941403..8ff4db1acc 100644
--- a/invokeai/frontend/web/src/services/api/hooks/useGetModelConfigWithTypeGuard.ts
+++ b/invokeai/frontend/web/src/services/api/hooks/useGetModelConfigWithTypeGuard.ts
@@ -8,7 +8,7 @@ export const useGetModelConfigWithTypeGuard = (
) => {
const result = useGetModelConfigQuery(key ?? skipToken, {
selectFromResult: (result) => {
- const modelConfig = result.data;
+ const modelConfig = result.currentData;
return {
...result,
modelConfig: modelConfig && typeGuard(modelConfig) ? modelConfig : undefined,
diff --git a/invokeai/frontend/web/src/services/api/schema.ts b/invokeai/frontend/web/src/services/api/schema.ts
index 7157de227b..727fad6f81 100644
--- a/invokeai/frontend/web/src/services/api/schema.ts
+++ b/invokeai/frontend/web/src/services/api/schema.ts
@@ -584,6 +584,43 @@ export type components = {
*/
type: "add";
};
+ /**
+ * Alpha Mask to Tensor
+ * @description Convert a mask image to a tensor. Opaque regions are 1 and transparent regions are 0.
+ */
+ AlphaMaskToTensorInvocation: {
+ /**
+ * Id
+ * @description The id of this instance of an invocation. Must be unique among all instances of invocations.
+ */
+ id: string;
+ /**
+ * Is Intermediate
+ * @description Whether or not this is an intermediate invocation.
+ * @default false
+ */
+ is_intermediate?: boolean;
+ /**
+ * Use Cache
+ * @description Whether or not to use the cache
+ * @default true
+ */
+ use_cache?: boolean;
+ /** @description The mask image to convert. */
+ image?: components["schemas"]["ImageField"];
+ /**
+ * Invert
+ * @description Whether to invert the mask.
+ * @default false
+ */
+ invert?: boolean;
+ /**
+ * type
+ * @default alpha_mask_to_tensor
+ * @constant
+ */
+ type: "alpha_mask_to_tensor";
+ };
/**
* AppConfig
* @description App Config Response
@@ -2841,6 +2878,28 @@ export type components = {
* @default 0
*/
minimum_denoise?: number;
+ /**
+ * [OPTIONAL] Image
+ * @description OPTIONAL: Only connect for specialized Inpainting models, masked_latents will be generated from the image with the VAE
+ */
+ image?: components["schemas"]["ImageField"] | null;
+ /**
+ * [OPTIONAL] VAE
+ * @description OPTIONAL: Only connect for specialized Inpainting models, masked_latents will be generated from the image with the VAE
+ */
+ vae?: components["schemas"]["VAEField"] | null;
+ /**
+ * Tiled
+ * @description Processing using overlapping tiles (reduce memory consumption)
+ * @default false
+ */
+ tiled?: boolean;
+ /**
+ * Fp32
+ * @description Whether or not to use full float32 precision
+ * @default false
+ */
+ fp32?: boolean;
/**
* type
* @default create_gradient_mask
@@ -4125,7 +4184,7 @@ export type components = {
* @description The nodes in this graph
*/
nodes: {
- [key: string]: components["schemas"]["CalculateImageTilesMinimumOverlapInvocation"] | components["schemas"]["ScaleLatentsInvocation"] | components["schemas"]["IterateInvocation"] | components["schemas"]["ImageChannelInvocation"] | components["schemas"]["RangeOfSizeInvocation"] | components["schemas"]["AddInvocation"] | components["schemas"]["CoreMetadataInvocation"] | components["schemas"]["T2IAdapterInvocation"] | components["schemas"]["RandomRangeInvocation"] | components["schemas"]["LineartImageProcessorInvocation"] | components["schemas"]["DenoiseLatentsInvocation"] | components["schemas"]["CollectInvocation"] | components["schemas"]["MergeMetadataInvocation"] | components["schemas"]["FreeUInvocation"] | components["schemas"]["ZoeDepthImageProcessorInvocation"] | components["schemas"]["RandomFloatInvocation"] | components["schemas"]["ConditioningInvocation"] | components["schemas"]["NoiseInvocation"] | components["schemas"]["StringReplaceInvocation"] | components["schemas"]["ImageCropInvocation"] | components["schemas"]["ImageBlurInvocation"] | components["schemas"]["SDXLRefinerModelLoaderInvocation"] | components["schemas"]["MlsdImageProcessorInvocation"] | components["schemas"]["DWOpenposeImageProcessorInvocation"] | components["schemas"]["ControlNetInvocation"] | components["schemas"]["TileResamplerProcessorInvocation"] | components["schemas"]["SchedulerInvocation"] | components["schemas"]["LatentsCollectionInvocation"] | components["schemas"]["BooleanInvocation"] | components["schemas"]["StringInvocation"] | components["schemas"]["ImageNSFWBlurInvocation"] | components["schemas"]["CLIPSkipInvocation"] | components["schemas"]["PairTileImageInvocation"] | components["schemas"]["FaceMaskInvocation"] | components["schemas"]["CenterPadCropInvocation"] | components["schemas"]["InfillColorInvocation"] | components["schemas"]["BooleanCollectionInvocation"] | components["schemas"]["FloatCollectionInvocation"] | components["schemas"]["MaskFromIDInvocation"] | 
components["schemas"]["StringJoinThreeInvocation"] | components["schemas"]["IntegerCollectionInvocation"] | components["schemas"]["CompelInvocation"] | components["schemas"]["ColorCorrectInvocation"] | components["schemas"]["StringSplitNegInvocation"] | components["schemas"]["FaceIdentifierInvocation"] | components["schemas"]["CanvasPasteBackInvocation"] | components["schemas"]["LineartAnimeImageProcessorInvocation"] | components["schemas"]["MultiplyInvocation"] | components["schemas"]["ColorInvocation"] | components["schemas"]["NormalbaeImageProcessorInvocation"] | components["schemas"]["CropLatentsCoreInvocation"] | components["schemas"]["MaskFromAlphaInvocation"] | components["schemas"]["StringCollectionInvocation"] | components["schemas"]["ImageLerpInvocation"] | components["schemas"]["InfillTileInvocation"] | components["schemas"]["VAELoaderInvocation"] | components["schemas"]["SDXLModelLoaderInvocation"] | components["schemas"]["DepthAnythingImageProcessorInvocation"] | components["schemas"]["CreateDenoiseMaskInvocation"] | components["schemas"]["FloatLinearRangeInvocation"] | components["schemas"]["LoRALoaderInvocation"] | components["schemas"]["MetadataInvocation"] | components["schemas"]["MediapipeFaceProcessorInvocation"] | components["schemas"]["FloatInvocation"] | components["schemas"]["ESRGANInvocation"] | components["schemas"]["CannyImageProcessorInvocation"] | components["schemas"]["TileToPropertiesInvocation"] | components["schemas"]["ImageToLatentsInvocation"] | components["schemas"]["ImageInvocation"] | components["schemas"]["ImageScaleInvocation"] | components["schemas"]["MidasDepthImageProcessorInvocation"] | components["schemas"]["CalculateImageTilesInvocation"] | components["schemas"]["FloatToIntegerInvocation"] | components["schemas"]["MergeTilesToImageInvocation"] | components["schemas"]["ImageInverseLerpInvocation"] | components["schemas"]["ImageHueAdjustmentInvocation"] | components["schemas"]["PromptsFromFileInvocation"] | 
components["schemas"]["SegmentAnythingProcessorInvocation"] | components["schemas"]["DivideInvocation"] | components["schemas"]["InfillPatchMatchInvocation"] | components["schemas"]["FaceOffInvocation"] | components["schemas"]["ImageResizeInvocation"] | components["schemas"]["RangeInvocation"] | components["schemas"]["ImageChannelOffsetInvocation"] | components["schemas"]["IdealSizeInvocation"] | components["schemas"]["RoundInvocation"] | components["schemas"]["ImageMultiplyInvocation"] | components["schemas"]["MetadataItemInvocation"] | components["schemas"]["StepParamEasingInvocation"] | components["schemas"]["SDXLLoRALoaderInvocation"] | components["schemas"]["ResizeLatentsInvocation"] | components["schemas"]["ImageWatermarkInvocation"] | components["schemas"]["PidiImageProcessorInvocation"] | components["schemas"]["SeamlessModeInvocation"] | components["schemas"]["ImageCollectionInvocation"] | components["schemas"]["LaMaInfillInvocation"] | components["schemas"]["CvInpaintInvocation"] | components["schemas"]["LatentsInvocation"] | components["schemas"]["RectangleMaskInvocation"] | components["schemas"]["ConditioningCollectionInvocation"] | components["schemas"]["ImageChannelMultiplyInvocation"] | components["schemas"]["ColorMapImageProcessorInvocation"] | components["schemas"]["LatentsToImageInvocation"] | components["schemas"]["ShowImageInvocation"] | components["schemas"]["LeresImageProcessorInvocation"] | components["schemas"]["UnsharpMaskInvocation"] | components["schemas"]["SDXLCompelPromptInvocation"] | components["schemas"]["IntegerInvocation"] | components["schemas"]["StringJoinInvocation"] | components["schemas"]["BlankImageInvocation"] | components["schemas"]["IntegerMathInvocation"] | components["schemas"]["CV2InfillInvocation"] | components["schemas"]["DynamicPromptInvocation"] | components["schemas"]["ContentShuffleImageProcessorInvocation"] | components["schemas"]["CreateGradientMaskInvocation"] | 
components["schemas"]["HedImageProcessorInvocation"] | components["schemas"]["CalculateImageTilesEvenSplitInvocation"] | components["schemas"]["RandomIntInvocation"] | components["schemas"]["MaskEdgeInvocation"] | components["schemas"]["IPAdapterInvocation"] | components["schemas"]["StringSplitInvocation"] | components["schemas"]["ImagePasteInvocation"] | components["schemas"]["FloatMathInvocation"] | components["schemas"]["SaveImageInvocation"] | components["schemas"]["MainModelLoaderInvocation"] | components["schemas"]["SubtractInvocation"] | components["schemas"]["BlendLatentsInvocation"] | components["schemas"]["MaskCombineInvocation"] | components["schemas"]["ImageConvertInvocation"] | components["schemas"]["SDXLRefinerCompelPromptInvocation"];
+ [key: string]: components["schemas"]["HedImageProcessorInvocation"] | components["schemas"]["CropLatentsCoreInvocation"] | components["schemas"]["FaceOffInvocation"] | components["schemas"]["StepParamEasingInvocation"] | components["schemas"]["InfillPatchMatchInvocation"] | components["schemas"]["ZoeDepthImageProcessorInvocation"] | components["schemas"]["ImageConvertInvocation"] | components["schemas"]["ImageNSFWBlurInvocation"] | components["schemas"]["MetadataItemInvocation"] | components["schemas"]["MaskFromAlphaInvocation"] | components["schemas"]["MergeMetadataInvocation"] | components["schemas"]["InfillColorInvocation"] | components["schemas"]["StringInvocation"] | components["schemas"]["ImageInvocation"] | components["schemas"]["DepthAnythingImageProcessorInvocation"] | components["schemas"]["SubtractInvocation"] | components["schemas"]["RangeOfSizeInvocation"] | components["schemas"]["ImageScaleInvocation"] | components["schemas"]["ImageWatermarkInvocation"] | components["schemas"]["FloatLinearRangeInvocation"] | components["schemas"]["ImageChannelOffsetInvocation"] | components["schemas"]["MultiplyInvocation"] | components["schemas"]["RandomRangeInvocation"] | components["schemas"]["ImageBlurInvocation"] | components["schemas"]["FloatCollectionInvocation"] | components["schemas"]["SDXLRefinerModelLoaderInvocation"] | components["schemas"]["MergeTilesToImageInvocation"] | components["schemas"]["StringSplitInvocation"] | components["schemas"]["BooleanInvocation"] | components["schemas"]["MaskCombineInvocation"] | components["schemas"]["DivideInvocation"] | components["schemas"]["IterateInvocation"] | components["schemas"]["PairTileImageInvocation"] | components["schemas"]["IntegerCollectionInvocation"] | components["schemas"]["PidiImageProcessorInvocation"] | components["schemas"]["StringJoinThreeInvocation"] | components["schemas"]["MediapipeFaceProcessorInvocation"] | components["schemas"]["IPAdapterInvocation"] | 
components["schemas"]["CreateGradientMaskInvocation"] | components["schemas"]["IdealSizeInvocation"] | components["schemas"]["ImageMaskToTensorInvocation"] | components["schemas"]["LineartAnimeImageProcessorInvocation"] | components["schemas"]["MetadataInvocation"] | components["schemas"]["CV2InfillInvocation"] | components["schemas"]["ImageChannelMultiplyInvocation"] | components["schemas"]["MlsdImageProcessorInvocation"] | components["schemas"]["ImagePasteInvocation"] | components["schemas"]["ControlNetInvocation"] | components["schemas"]["SDXLLoRALoaderInvocation"] | components["schemas"]["ImageResizeInvocation"] | components["schemas"]["CalculateImageTilesEvenSplitInvocation"] | components["schemas"]["ColorMapImageProcessorInvocation"] | components["schemas"]["CenterPadCropInvocation"] | components["schemas"]["ConditioningInvocation"] | components["schemas"]["AddInvocation"] | components["schemas"]["StringReplaceInvocation"] | components["schemas"]["RangeInvocation"] | components["schemas"]["SaveImageInvocation"] | components["schemas"]["SDXLRefinerCompelPromptInvocation"] | components["schemas"]["ImageCropInvocation"] | components["schemas"]["UnsharpMaskInvocation"] | components["schemas"]["SeamlessModeInvocation"] | components["schemas"]["RandomFloatInvocation"] | components["schemas"]["ESRGANInvocation"] | components["schemas"]["ImageCollectionInvocation"] | components["schemas"]["MaskFromIDInvocation"] | components["schemas"]["ResizeLatentsInvocation"] | components["schemas"]["LineartImageProcessorInvocation"] | components["schemas"]["SDXLModelLoaderInvocation"] | components["schemas"]["NoiseInvocation"] | components["schemas"]["FloatMathInvocation"] | components["schemas"]["LoRALoaderInvocation"] | components["schemas"]["MaskEdgeInvocation"] | components["schemas"]["RectangleMaskInvocation"] | components["schemas"]["CanvasPasteBackInvocation"] | components["schemas"]["CreateDenoiseMaskInvocation"] | components["schemas"]["DenoiseLatentsInvocation"] | 
components["schemas"]["SchedulerInvocation"] | components["schemas"]["CoreMetadataInvocation"] | components["schemas"]["RoundInvocation"] | components["schemas"]["IntegerMathInvocation"] | components["schemas"]["SDXLCompelPromptInvocation"] | components["schemas"]["ImageInverseLerpInvocation"] | components["schemas"]["TileToPropertiesInvocation"] | components["schemas"]["FloatInvocation"] | components["schemas"]["ColorCorrectInvocation"] | components["schemas"]["ImageToLatentsInvocation"] | components["schemas"]["ColorInvocation"] | components["schemas"]["StringSplitNegInvocation"] | components["schemas"]["FaceMaskInvocation"] | components["schemas"]["FloatToIntegerInvocation"] | components["schemas"]["FreeUInvocation"] | components["schemas"]["CollectInvocation"] | components["schemas"]["CompelInvocation"] | components["schemas"]["TileResamplerProcessorInvocation"] | components["schemas"]["PromptsFromFileInvocation"] | components["schemas"]["CLIPSkipInvocation"] | components["schemas"]["T2IAdapterInvocation"] | components["schemas"]["VAELoaderInvocation"] | components["schemas"]["FaceIdentifierInvocation"] | components["schemas"]["BooleanCollectionInvocation"] | components["schemas"]["BlendLatentsInvocation"] | components["schemas"]["SegmentAnythingProcessorInvocation"] | components["schemas"]["LeresImageProcessorInvocation"] | components["schemas"]["ImageHueAdjustmentInvocation"] | components["schemas"]["InfillTileInvocation"] | components["schemas"]["ScaleLatentsInvocation"] | components["schemas"]["NormalbaeImageProcessorInvocation"] | components["schemas"]["IntegerInvocation"] | components["schemas"]["ContentShuffleImageProcessorInvocation"] | components["schemas"]["StringCollectionInvocation"] | components["schemas"]["LatentsToImageInvocation"] | components["schemas"]["LatentsInvocation"] | components["schemas"]["LatentsCollectionInvocation"] | components["schemas"]["CalculateImageTilesMinimumOverlapInvocation"] | components["schemas"]["ImageLerpInvocation"] 
| components["schemas"]["CalculateImageTilesInvocation"] | components["schemas"]["ConditioningCollectionInvocation"] | components["schemas"]["ImageMultiplyInvocation"] | components["schemas"]["ImageChannelInvocation"] | components["schemas"]["DynamicPromptInvocation"] | components["schemas"]["BlankImageInvocation"] | components["schemas"]["StringJoinInvocation"] | components["schemas"]["AlphaMaskToTensorInvocation"] | components["schemas"]["ShowImageInvocation"] | components["schemas"]["DWOpenposeImageProcessorInvocation"] | components["schemas"]["RandomIntInvocation"] | components["schemas"]["InvertTensorMaskInvocation"] | components["schemas"]["LaMaInfillInvocation"] | components["schemas"]["MainModelLoaderInvocation"] | components["schemas"]["CvInpaintInvocation"] | components["schemas"]["CannyImageProcessorInvocation"] | components["schemas"]["MidasDepthImageProcessorInvocation"];
};
/**
* Edges
@@ -4162,7 +4221,7 @@ export type components = {
* @description The results of node executions
*/
results: {
- [key: string]: components["schemas"]["ColorCollectionOutput"] | components["schemas"]["NoiseOutput"] | components["schemas"]["IntegerCollectionOutput"] | components["schemas"]["T2IAdapterOutput"] | components["schemas"]["ColorOutput"] | components["schemas"]["SDXLLoRALoaderOutput"] | components["schemas"]["ConditioningCollectionOutput"] | components["schemas"]["CLIPSkipInvocationOutput"] | components["schemas"]["IPAdapterOutput"] | components["schemas"]["String2Output"] | components["schemas"]["MaskOutput"] | components["schemas"]["FloatOutput"] | components["schemas"]["ModelLoaderOutput"] | components["schemas"]["StringOutput"] | components["schemas"]["ImageCollectionOutput"] | components["schemas"]["CLIPOutput"] | components["schemas"]["IntegerOutput"] | components["schemas"]["BooleanCollectionOutput"] | components["schemas"]["GradientMaskOutput"] | components["schemas"]["ControlOutput"] | components["schemas"]["UNetOutput"] | components["schemas"]["SeamlessModeOutput"] | components["schemas"]["StringPosNegOutput"] | components["schemas"]["FaceOffOutput"] | components["schemas"]["FloatCollectionOutput"] | components["schemas"]["CollectInvocationOutput"] | components["schemas"]["MetadataOutput"] | components["schemas"]["SchedulerOutput"] | components["schemas"]["DenoiseMaskOutput"] | components["schemas"]["ImageOutput"] | components["schemas"]["IterateInvocationOutput"] | components["schemas"]["LatentsOutput"] | components["schemas"]["LoRALoaderOutput"] | components["schemas"]["SDXLRefinerModelLoaderOutput"] | components["schemas"]["StringCollectionOutput"] | components["schemas"]["SDXLModelLoaderOutput"] | components["schemas"]["FaceMaskOutput"] | components["schemas"]["LatentsCollectionOutput"] | components["schemas"]["VAEOutput"] | components["schemas"]["CalculateImageTilesOutput"] | components["schemas"]["PairTileImageOutput"] | components["schemas"]["BooleanOutput"] | components["schemas"]["ConditioningOutput"] | components["schemas"]["IdealSizeOutput"] | 
components["schemas"]["MetadataItemOutput"] | components["schemas"]["TileToPropertiesOutput"];
+ [key: string]: components["schemas"]["SDXLRefinerModelLoaderOutput"] | components["schemas"]["ImageCollectionOutput"] | components["schemas"]["ConditioningOutput"] | components["schemas"]["FaceOffOutput"] | components["schemas"]["SchedulerOutput"] | components["schemas"]["ColorOutput"] | components["schemas"]["String2Output"] | components["schemas"]["BooleanCollectionOutput"] | components["schemas"]["ColorCollectionOutput"] | components["schemas"]["DenoiseMaskOutput"] | components["schemas"]["T2IAdapterOutput"] | components["schemas"]["GradientMaskOutput"] | components["schemas"]["NoiseOutput"] | components["schemas"]["BooleanOutput"] | components["schemas"]["CalculateImageTilesOutput"] | components["schemas"]["MaskOutput"] | components["schemas"]["ConditioningCollectionOutput"] | components["schemas"]["StringCollectionOutput"] | components["schemas"]["PairTileImageOutput"] | components["schemas"]["SDXLModelLoaderOutput"] | components["schemas"]["CollectInvocationOutput"] | components["schemas"]["ControlOutput"] | components["schemas"]["IPAdapterOutput"] | components["schemas"]["LatentsCollectionOutput"] | components["schemas"]["SeamlessModeOutput"] | components["schemas"]["IdealSizeOutput"] | components["schemas"]["SDXLLoRALoaderOutput"] | components["schemas"]["ModelLoaderOutput"] | components["schemas"]["MetadataItemOutput"] | components["schemas"]["IntegerOutput"] | components["schemas"]["FloatOutput"] | components["schemas"]["StringOutput"] | components["schemas"]["FaceMaskOutput"] | components["schemas"]["VAEOutput"] | components["schemas"]["StringPosNegOutput"] | components["schemas"]["CLIPOutput"] | components["schemas"]["TileToPropertiesOutput"] | components["schemas"]["FloatCollectionOutput"] | components["schemas"]["ImageOutput"] | components["schemas"]["MetadataOutput"] | components["schemas"]["LatentsOutput"] | components["schemas"]["UNetOutput"] | components["schemas"]["CLIPSkipInvocationOutput"] | components["schemas"]["IntegerCollectionOutput"] | 
components["schemas"]["IterateInvocationOutput"] | components["schemas"]["LoRALoaderOutput"];
};
/**
* Errors
@@ -5293,6 +5352,51 @@ export type components = {
*/
type: "img_lerp";
};
+ /**
+ * Image Mask to Tensor
+ * @description Convert a mask image to a tensor. Converts the image to grayscale and uses thresholding at the specified value.
+ */
+ ImageMaskToTensorInvocation: {
+ /** @description Optional metadata to be saved with the image */
+ metadata?: components["schemas"]["MetadataField"] | null;
+ /**
+ * Id
+ * @description The id of this instance of an invocation. Must be unique among all instances of invocations.
+ */
+ id: string;
+ /**
+ * Is Intermediate
+ * @description Whether or not this is an intermediate invocation.
+ * @default false
+ */
+ is_intermediate?: boolean;
+ /**
+ * Use Cache
+ * @description Whether or not to use the cache
+ * @default true
+ */
+ use_cache?: boolean;
+ /** @description The mask image to convert. */
+ image?: components["schemas"]["ImageField"];
+ /**
+ * Cutoff
+ * @description Cutoff (<)
+ * @default 128
+ */
+ cutoff?: number;
+ /**
+ * Invert
+ * @description Whether to invert the mask.
+ * @default false
+ */
+ invert?: boolean;
+ /**
+ * type
+ * @default image_mask_to_tensor
+ * @constant
+ */
+ type: "image_mask_to_tensor";
+ };
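The schema above mirrors the new backend invocation. A minimal sketch of the grayscale-threshold conversion it describes, assuming a NumPy mask image (the function name and the strict `<` masking direction are assumptions read off the `Cutoff (<)` description; the real invocation produces a torch tensor):

```python
import numpy as np

def image_mask_to_bool(image: np.ndarray, cutoff: int = 128, invert: bool = False) -> np.ndarray:
    """Hypothetical sketch of the conversion described by ImageMaskToTensorInvocation.

    `image` is a grayscale uint8 array (HxW). Pixels strictly below `cutoff`
    become True, matching the "Cutoff (<)" description; `invert` flips the
    result.
    """
    mask = image < cutoff
    return np.logical_not(mask) if invert else mask
```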
/**
* Multiply Images
* @description Multiplies two images together using `PIL.ImageChops.multiply()`.
@@ -6004,6 +6108,37 @@ export type components = {
*/
type: "integer_output";
};
+ /**
+ * Invert Tensor Mask
+ * @description Inverts a tensor mask.
+ */
+ InvertTensorMaskInvocation: {
+ /**
+ * Id
+ * @description The id of this instance of an invocation. Must be unique among all instances of invocations.
+ */
+ id: string;
+ /**
+ * Is Intermediate
+ * @description Whether or not this is an intermediate invocation.
+ * @default false
+ */
+ is_intermediate?: boolean;
+ /**
+ * Use Cache
+ * @description Whether or not to use the cache
+ * @default true
+ */
+ use_cache?: boolean;
+ /** @description The tensor mask to invert. */
+ mask?: components["schemas"]["TensorField"];
+ /**
+ * type
+ * @default invert_tensor_mask
+ * @constant
+ */
+ type: "invert_tensor_mask";
+ };
/** InvocationCacheStatus */
InvocationCacheStatus: {
/**
@@ -6790,6 +6925,8 @@ export type components = {
trigger_phrases?: string[] | null;
/** @description Default settings for this model */
default_settings?: components["schemas"]["MainModelDefaultSettings"] | null;
+ /** @default normal */
+ variant?: components["schemas"]["ModelVariantType"];
/**
* Format
* @default checkpoint
@@ -6806,8 +6943,6 @@ export type components = {
* @description When this model was last converted to diffusers
*/
converted_at?: number | null;
- /** @default normal */
- variant?: components["schemas"]["ModelVariantType"];
/** @default epsilon */
prediction_type?: components["schemas"]["SchedulerPredictionType"];
/**
@@ -6878,6 +7013,8 @@ export type components = {
trigger_phrases?: string[] | null;
/** @description Default settings for this model */
default_settings?: components["schemas"]["MainModelDefaultSettings"] | null;
+ /** @default normal */
+ variant?: components["schemas"]["ModelVariantType"];
/**
* Format
* @default diffusers
diff --git a/invokeai/version/invokeai_version.py b/invokeai/version/invokeai_version.py
index 4b56dfc53e..7c223b74a7 100644
--- a/invokeai/version/invokeai_version.py
+++ b/invokeai/version/invokeai_version.py
@@ -1 +1 @@
-__version__ = "4.0.4"
+__version__ = "4.2.0a4"
diff --git a/tests/app/util/test_controlnet_utils.py b/tests/app/util/test_controlnet_utils.py
index 21662cce8d..9806fe7806 100644
--- a/tests/app/util/test_controlnet_utils.py
+++ b/tests/app/util/test_controlnet_utils.py
@@ -3,6 +3,7 @@ import pytest
from PIL import Image
from invokeai.app.util.controlnet_utils import prepare_control_image
+from invokeai.backend.image_util.util import nms
@pytest.mark.parametrize("num_channels", [1, 2, 3])
@@ -40,3 +41,10 @@ def test_prepare_control_image_num_channels_too_large(num_channels):
device="cpu",
do_classifier_free_guidance=False,
)
+
+
+@pytest.mark.parametrize("threshold,sigma", [(None, 1.0), (1, None)])
+def test_nms_invalid_options(threshold: None | int, sigma: None | float):
+ """Test that an exception is raised in nms(...) if only one of the `threshold` or `sigma` parameters is provided."""
+ with pytest.raises(ValueError):
+ nms(np.zeros((256, 256, 3), dtype=np.uint8), threshold, sigma)
diff --git a/tests/test_model_probe.py b/tests/test_model_probe.py
index 8be7089cf5..95929e3aa3 100644
--- a/tests/test_model_probe.py
+++ b/tests/test_model_probe.py
@@ -4,7 +4,7 @@ import pytest
from torch import tensor
from invokeai.backend.model_manager import BaseModelType, ModelRepoVariant
-from invokeai.backend.model_manager.config import InvalidModelConfigException
+from invokeai.backend.model_manager.config import InvalidModelConfigException, MainDiffusersConfig, ModelVariantType
from invokeai.backend.model_manager.probe import (
CkptType,
ModelProbe,
@@ -78,3 +78,11 @@ def test_probe_handles_state_dict_with_integer_keys():
}
with pytest.raises(InvalidModelConfigException):
ModelProbe.get_model_type_from_checkpoint(Path("embedding.pt"), state_dict_with_integer_keys)
+
+
+def test_probe_sd1_diffusers_inpainting(datadir: Path):
+ config = ModelProbe.probe(datadir / "sd-1/main/dreamshaper-8-inpainting")
+ assert isinstance(config, MainDiffusersConfig)
+ assert config.base is BaseModelType.StableDiffusion1
+ assert config.variant is ModelVariantType.Inpaint
+ assert config.repo_variant is ModelRepoVariant.FP16
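The inpainting assertion above works because the variant can be read off the UNet config: an SD1/SD2 inpainting UNet takes 9 input channels (4 noise latents + 4 masked-image latents + 1 mask channel) versus 4 for a normal model. A hedged sketch of that inference (the helper name is hypothetical, not InvokeAI's actual probe API):

```python
def infer_variant_from_unet_config(unet_config: dict) -> str:
    """Hypothetical helper: map UNet input channels to a model variant name."""
    in_channels = unet_config.get("in_channels", 4)
    if in_channels == 9:
        # 4 latent + 4 masked-image latent + 1 mask channel
        return "inpaint"
    if in_channels == 5:
        # SD2-depth concatenates a single depth channel
        return "depth"
    return "normal"
```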
diff --git a/tests/test_model_probe/sd-1/main/dreamshaper-8-inpainting/README b/tests/test_model_probe/sd-1/main/dreamshaper-8-inpainting/README
new file mode 100644
index 0000000000..15349b5f0a
--- /dev/null
+++ b/tests/test_model_probe/sd-1/main/dreamshaper-8-inpainting/README
@@ -0,0 +1 @@
+This folder contains config files copied from [Lykon/dreamshaper-8-inpainting](https://huggingface.co/Lykon/dreamshaper-8-inpainting).
diff --git a/tests/test_model_probe/sd-1/main/dreamshaper-8-inpainting/model_index.json b/tests/test_model_probe/sd-1/main/dreamshaper-8-inpainting/model_index.json
new file mode 100644
index 0000000000..e4e983f7ba
--- /dev/null
+++ b/tests/test_model_probe/sd-1/main/dreamshaper-8-inpainting/model_index.json
@@ -0,0 +1,34 @@
+{
+ "_class_name": "StableDiffusionInpaintPipeline",
+ "_diffusers_version": "0.21.0.dev0",
+ "_name_or_path": "lykon-models/dreamshaper-8-inpainting",
+ "feature_extractor": [
+ "transformers",
+ "CLIPFeatureExtractor"
+ ],
+ "requires_safety_checker": true,
+ "safety_checker": [
+ "stable_diffusion",
+ "StableDiffusionSafetyChecker"
+ ],
+ "scheduler": [
+ "diffusers",
+ "DEISMultistepScheduler"
+ ],
+ "text_encoder": [
+ "transformers",
+ "CLIPTextModel"
+ ],
+ "tokenizer": [
+ "transformers",
+ "CLIPTokenizer"
+ ],
+ "unet": [
+ "diffusers",
+ "UNet2DConditionModel"
+ ],
+ "vae": [
+ "diffusers",
+ "AutoencoderKL"
+ ]
+}
diff --git a/tests/test_model_probe/sd-1/main/dreamshaper-8-inpainting/scheduler/scheduler_config.json b/tests/test_model_probe/sd-1/main/dreamshaper-8-inpainting/scheduler/scheduler_config.json
new file mode 100644
index 0000000000..a63f334bd6
--- /dev/null
+++ b/tests/test_model_probe/sd-1/main/dreamshaper-8-inpainting/scheduler/scheduler_config.json
@@ -0,0 +1,23 @@
+{
+ "_class_name": "DEISMultistepScheduler",
+ "_diffusers_version": "0.21.0.dev0",
+ "algorithm_type": "deis",
+ "beta_end": 0.012,
+ "beta_schedule": "scaled_linear",
+ "beta_start": 0.00085,
+ "clip_sample": false,
+ "dynamic_thresholding_ratio": 0.995,
+ "lower_order_final": true,
+ "num_train_timesteps": 1000,
+ "prediction_type": "epsilon",
+ "sample_max_value": 1.0,
+ "set_alpha_to_one": false,
+ "skip_prk_steps": true,
+ "solver_order": 2,
+ "solver_type": "logrho",
+ "steps_offset": 1,
+ "thresholding": false,
+ "timestep_spacing": "leading",
+ "trained_betas": null,
+ "use_karras_sigmas": false
+}
diff --git a/tests/test_model_probe/sd-1/main/dreamshaper-8-inpainting/unet/config.json b/tests/test_model_probe/sd-1/main/dreamshaper-8-inpainting/unet/config.json
new file mode 100644
index 0000000000..d9f3b21d92
--- /dev/null
+++ b/tests/test_model_probe/sd-1/main/dreamshaper-8-inpainting/unet/config.json
@@ -0,0 +1,66 @@
+{
+ "_class_name": "UNet2DConditionModel",
+ "_diffusers_version": "0.21.0.dev0",
+ "_name_or_path": "/home/patrick/.cache/huggingface/hub/models--lykon-models--dreamshaper-8-inpainting/snapshots/15dcb9dec91a39ee498e3917c9ef6174b103862d/unet",
+ "act_fn": "silu",
+ "addition_embed_type": null,
+ "addition_embed_type_num_heads": 64,
+ "addition_time_embed_dim": null,
+ "attention_head_dim": 8,
+ "attention_type": "default",
+ "block_out_channels": [
+ 320,
+ 640,
+ 1280,
+ 1280
+ ],
+ "center_input_sample": false,
+ "class_embed_type": null,
+ "class_embeddings_concat": false,
+ "conv_in_kernel": 3,
+ "conv_out_kernel": 3,
+ "cross_attention_dim": 768,
+ "cross_attention_norm": null,
+ "down_block_types": [
+ "CrossAttnDownBlock2D",
+ "CrossAttnDownBlock2D",
+ "CrossAttnDownBlock2D",
+ "DownBlock2D"
+ ],
+ "downsample_padding": 1,
+ "dual_cross_attention": false,
+ "encoder_hid_dim": null,
+ "encoder_hid_dim_type": null,
+ "flip_sin_to_cos": true,
+ "freq_shift": 0,
+ "in_channels": 9,
+ "layers_per_block": 2,
+ "mid_block_only_cross_attention": null,
+ "mid_block_scale_factor": 1,
+ "mid_block_type": "UNetMidBlock2DCrossAttn",
+ "norm_eps": 1e-05,
+ "norm_num_groups": 32,
+ "num_attention_heads": null,
+ "num_class_embeds": null,
+ "only_cross_attention": false,
+ "out_channels": 4,
+ "projection_class_embeddings_input_dim": null,
+ "resnet_out_scale_factor": 1.0,
+ "resnet_skip_time_act": false,
+ "resnet_time_scale_shift": "default",
+ "sample_size": 64,
+ "time_cond_proj_dim": null,
+ "time_embedding_act_fn": null,
+ "time_embedding_dim": null,
+ "time_embedding_type": "positional",
+ "timestep_post_act": null,
+ "transformer_layers_per_block": 1,
+ "up_block_types": [
+ "UpBlock2D",
+ "CrossAttnUpBlock2D",
+ "CrossAttnUpBlock2D",
+ "CrossAttnUpBlock2D"
+ ],
+ "upcast_attention": null,
+ "use_linear_projection": false
+}
diff --git a/tests/test_model_probe/sd-1/main/dreamshaper-8-inpainting/unet/diffusion_pytorch_model.fp16.safetensors b/tests/test_model_probe/sd-1/main/dreamshaper-8-inpainting/unet/diffusion_pytorch_model.fp16.safetensors
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/tests/test_object_serializer_disk.py b/tests/test_object_serializer_disk.py
index 125534c500..84c6e876fc 100644
--- a/tests/test_object_serializer_disk.py
+++ b/tests/test_object_serializer_disk.py
@@ -99,6 +99,20 @@ def test_obj_serializer_ephemeral_writes_to_tempdir(tmp_path: Path):
assert not Path(tmp_path, obj_1_name).exists()
+def test_obj_serializer_ephemeral_deletes_dangling_tempdirs_on_init(tmp_path: Path):
+ tempdir = tmp_path / "tmpdir"
+ tempdir.mkdir()
+ ObjectSerializerDisk[MockDataclass](tmp_path, ephemeral=True)
+ assert not tempdir.exists()
+
+
+def test_obj_serializer_does_not_delete_tempdirs_on_init(tmp_path: Path):
+ tempdir = tmp_path / "tmpdir"
+ tempdir.mkdir()
+ ObjectSerializerDisk[MockDataclass](tmp_path, ephemeral=False)
+ assert tempdir.exists()
+
+
def test_obj_serializer_disk_different_types(tmp_path: Path):
obj_serializer_1 = ObjectSerializerDisk[MockDataclass](tmp_path)
obj_1 = MockDataclass(foo="bar")
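The two new serializer tests pin down the init behavior: constructed with `ephemeral=True`, `ObjectSerializerDisk` sweeps dangling temp directories left in its output folder by previous runs; with `ephemeral=False` it leaves them alone. A minimal sketch of that cleanup (the function name and the `tmp` prefix are assumptions inferred from the tests, which create a directory named `tmpdir`):

```python
import shutil
from pathlib import Path

def delete_dangling_tempdirs(output_dir: Path, prefix: str = "tmp") -> None:
    """Hypothetical sketch: remove leftover temp directories from prior runs.

    Mirrors the behavior the ephemeral tests exercise; presumably the real
    class does this during __init__ when ephemeral=True.
    """
    for child in output_dir.iterdir():
        if child.is_dir() and child.name.startswith(prefix):
            shutil.rmtree(child)
```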