diff --git a/.github/actions/install-frontend-deps/action.yml b/.github/actions/install-frontend-deps/action.yml
index 32b4987249..6152da80c6 100644
--- a/.github/actions/install-frontend-deps/action.yml
+++ b/.github/actions/install-frontend-deps/action.yml
@@ -9,9 +9,9 @@ runs:
node-version: '18'
- name: setup pnpm
- uses: pnpm/action-setup@v2
+ uses: pnpm/action-setup@v4
with:
- version: 8
+ version: 8.15.6
run_install: false
- name: get pnpm store directory
diff --git a/.github/pull_request_template.md b/.github/pull_request_template.md
index 6c8fee470e..e29d481b41 100644
--- a/.github/pull_request_template.md
+++ b/.github/pull_request_template.md
@@ -8,7 +8,7 @@
## QA Instructions
-
+
## Merge Plan
diff --git a/README.md b/README.md
index 41de4882ee..96c3210c66 100644
--- a/README.md
+++ b/README.md
@@ -12,12 +12,24 @@
Invoke is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. Invoke offers an industry-leading web-based UI, and serves as the foundation for multiple commercial products.
-[Installation and Updates][installation docs] - [Documentation and Tutorials][docs home] - [Bug Reports][github issues] - [Contributing][contributing docs]
+Invoke is available in two editions:
+
+| **Community Edition** | **Professional Edition** |
+|----------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------|
+| **For users looking for a locally installed, self-hosted and self-managed service** | **For users or teams looking for a cloud-hosted, fully managed service** |
+| - Free to use under a commercially-friendly license | - Monthly subscription fee with three different plan levels |
+| - Download and install on compatible hardware | - Offers additional benefits, including multi-user support, improved model training, and more |
+| - Includes all core studio features: generate, refine, iterate on images, and build workflows | - Hosted in the cloud for easy, secure model access and scalability |
+| Quick Start -> [Installation and Updates][installation docs] | More Information -> [www.invoke.com/pricing](https://www.invoke.com/pricing) |
-

+# Documentation
+| **Quick Links** |
+|----------------------------------------------------------------------------------------------------------------------------|
+| [Installation and Updates][installation docs] - [Documentation and Tutorials][docs home] - [Bug Reports][github issues] - [Contributing][contributing docs] |
+
## Quick Start
@@ -37,6 +49,33 @@ Invoke is a leading creative engine built to empower professionals and enthusias
More detail, including hardware requirements and manual install instructions, is available in the [installation documentation][installation docs].
+## Docker Container
+
+We publish official container images in the GitHub Container Registry: https://github.com/invoke-ai/InvokeAI/pkgs/container/invokeai. Both CUDA and ROCm images are available. Check the link above for the relevant tags.
+
+> [!IMPORTANT]
+> Ensure that Docker is set up to use the GPU. Refer to [NVIDIA][nvidia docker docs] or [AMD][amd docker docs] documentation.
+
+### Generate!
+
+Run the container, modifying the command as necessary:
+
+```bash
+docker run --runtime=nvidia --gpus=all --publish 9090:9090 ghcr.io/invoke-ai/invokeai
+```
+
+Then open `http://localhost:9090` and install some models using the Model Manager tab to begin generating.
+
+For ROCm, add `--device /dev/kfd --device /dev/dri` to the `docker run` command.
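+
+For example, using the ROCm image tag from the registry:
+
+```bash
+docker run --device /dev/kfd --device /dev/dri --publish 9090:9090 ghcr.io/invoke-ai/invokeai:main-rocm
+```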
+
+### Persist your data
+
+You will likely want to persist your workspace outside of the container. Use the `--volume /home/myuser/invokeai:/invokeai` flag to mount some local directory (using its **absolute** path) to the `/invokeai` path inside the container. Your generated images and models will reside there. You can use this directory with other InvokeAI installations, or switch between runtime directories as needed.
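+
+For example, on an NVIDIA system:
+
+```bash
+docker run --runtime=nvidia --gpus=all --publish 9090:9090 --volume /home/myuser/invokeai:/invokeai ghcr.io/invoke-ai/invokeai
+```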
+
+### DIY
+
+Build your own image and customize the environment to match your needs using our `docker-compose` stack. See [README.md](./docker/README.md) in the [docker](./docker) directory.
+
## Troubleshooting, FAQ and Support
Please review our [FAQ][faq] for solutions to common installation problems and other issues.
@@ -114,3 +153,5 @@ Original portions of the software are Copyright © 2024 by respective contributo
[latest release link]: https://github.com/invoke-ai/InvokeAI/releases/latest
[translation status badge]: https://hosted.weblate.org/widgets/invokeai/-/svg-badge.svg
[translation status link]: https://hosted.weblate.org/engage/invokeai/
+[nvidia docker docs]: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html
+[amd docker docs]: https://rocm.docs.amd.com/projects/install-on-linux/en/latest/how-to/docker.html
diff --git a/docker/.env.sample b/docker/.env.sample
index aeb69bfd27..eef690a808 100644
--- a/docker/.env.sample
+++ b/docker/.env.sample
@@ -19,8 +19,9 @@
## INVOKEAI_PORT is the port on which the InvokeAI web interface will be available
# INVOKEAI_PORT=9090
-## GPU_DRIVER can be set to either `nvidia` or `rocm` to enable GPU support in the container accordingly.
-# GPU_DRIVER=nvidia #| rocm
+## GPU_DRIVER can be set to either `cuda` or `rocm` to enable GPU support in the container accordingly.
+# GPU_DRIVER=cuda #| rocm
## CONTAINER_UID can be set to the UID of the user on the host system that should own the files in the container.
+## It is usually not necessary to change this. Use `id -u` on the host system to find the UID.
# CONTAINER_UID=1000
diff --git a/docker/README.md b/docker/README.md
index 9e7ac15145..fc6edeacd3 100644
--- a/docker/README.md
+++ b/docker/README.md
@@ -1,41 +1,75 @@
-# InvokeAI Containerized
+# Invoke in Docker
-All commands should be run within the `docker` directory: `cd docker`
+- Ensure that Docker can use the GPU on your system
+- This documentation assumes Linux, but should work similarly under Windows with WSL2
+- We don't recommend running Invoke in Docker on macOS at this time. It works, but it is very slow.
-## Quickstart :rocket:
+## Quickstart :zap:
-On a known working Linux+Docker+CUDA (Nvidia) system, execute `./run.sh` in this directory. It will take a few minutes - depending on your internet speed - to install the core models. Once the application starts up, open `http://localhost:9090` in your browser to Invoke!
+No `docker compose`, no persistence, just a simple one-liner using the official images:
-For more configuration options (using an AMD GPU, custom root directory location, etc): read on.
+**CUDA:**
-## Detailed setup
+```bash
+docker run --runtime=nvidia --gpus=all --publish 9090:9090 ghcr.io/invoke-ai/invokeai
+```
+
+**ROCm:**
+
+```bash
+docker run --device /dev/kfd --device /dev/dri --publish 9090:9090 ghcr.io/invoke-ai/invokeai:main-rocm
+```
+
+Open `http://localhost:9090` in your browser once the container finishes booting, install some models, and generate away!
+
+> [!TIP]
+> To persist your data (including downloaded models) outside of the container, add a `--volume/-v` flag to the above command, e.g.: `docker run --volume /some/local/path:/invokeai <...the rest of the command>`
+
+## Customize the container
+
+We ship the `run.sh` script, which is a convenient wrapper around `docker compose` for cases where custom image build args are needed. Alternatively, the familiar `docker compose` commands work just as well.
+
+```bash
+cd docker
+cp .env.sample .env
+# edit .env to your liking if you need to; it is well commented.
+./run.sh
+```
+
+It will take a few minutes to build the image the first time. Once the application starts up, open `http://localhost:9090` in your browser to invoke!
+
+## Docker setup in detail
#### Linux
1. Ensure BuildKit is enabled in the Docker daemon settings (`/etc/docker/daemon.json`)
2. Install the `docker compose` plugin using your package manager, or follow a [tutorial](https://docs.docker.com/compose/install/linux/#install-using-the-repository).
- - The deprecated `docker-compose` (hyphenated) CLI continues to work for now.
+ - The deprecated `docker-compose` (hyphenated) CLI probably won't work. Update to a recent version.
3. Ensure docker daemon is able to access the GPU.
- - You may need to install [nvidia-container-toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html)
+ - [NVIDIA docs](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html)
+ - [AMD docs](https://rocm.docs.amd.com/projects/install-on-linux/en/latest/how-to/docker.html)
#### macOS
+> [!TIP]
+> You'll be better off installing Invoke directly on your system, because Docker cannot use the GPU on macOS.
+
+If you are still reading:
+
1. Ensure Docker has at least 16GB RAM
2. Enable VirtioFS for file sharing
3. Enable `docker compose` V2 support
-This is done via Docker Desktop preferences
+This is done via Docker Desktop preferences.
-### Configure Invoke environment
+### Configure the Invoke Environment
-1. Make a copy of `.env.sample` and name it `.env` (`cp .env.sample .env` (Mac/Linux) or `copy example.env .env` (Windows)). Make changes as necessary. Set `INVOKEAI_ROOT` to an absolute path to:
- a. the desired location of the InvokeAI runtime directory, or
- b. an existing, v3.0.0 compatible runtime directory.
+1. Make a copy of `.env.sample` and name it `.env` (`cp .env.sample .env` (Mac/Linux) or `copy .env.sample .env` (Windows)). Make changes as necessary. Set `INVOKEAI_ROOT` to an absolute path to the desired location of the InvokeAI runtime directory. It may be an existing directory from a previous installation (post-4.0.0).
1. Execute `run.sh`
The image will be built automatically if needed.
-The runtime directory (holding models and outputs) will be created in the location specified by `INVOKEAI_ROOT`. The default location is `~/invokeai`. The runtime directory will be populated with the base configs and models necessary to start generating.
+The runtime directory (holding models and outputs) will be created in the location specified by `INVOKEAI_ROOT`. The default location is `~/invokeai`. Navigate to the Model Manager tab and install some models before generating.
### Use a GPU
@@ -43,9 +77,9 @@ The runtime directory (holding models and outputs) will be created in the locati
- WSL2 is *required* for Windows.
- only `x86_64` architecture is supported.
-The Docker daemon on the system must be already set up to use the GPU. In case of Linux, this involves installing `nvidia-docker-runtime` and configuring the `nvidia` runtime as default. Steps will be different for AMD. Please see Docker documentation for the most up-to-date instructions for using your GPU with Docker.
+The Docker daemon on the system must already be set up to use the GPU. On Linux, this involves installing the NVIDIA Container Toolkit and configuring the `nvidia` runtime as default. Steps differ for AMD. See the Docker/NVIDIA/AMD documentation for the most up-to-date instructions on using your GPU with Docker.
-To use an AMD GPU, set `GPU_DRIVER=rocm` in your `.env` file.
+To use an AMD GPU, set `GPU_DRIVER=rocm` in your `.env` file before running `./run.sh`.
## Customize
@@ -59,10 +93,10 @@ Values are optional, but setting `INVOKEAI_ROOT` is highly recommended. The defa
INVOKEAI_ROOT=/Volumes/WorkDrive/invokeai
HUGGINGFACE_TOKEN=the_actual_token
CONTAINER_UID=1000
-GPU_DRIVER=nvidia
+GPU_DRIVER=cuda
```
-Any environment variables supported by InvokeAI can be set here - please see the [Configuration docs](https://invoke-ai.github.io/InvokeAI/features/CONFIGURATION/) for further detail.
+Any environment variables supported by InvokeAI can be set here. See the [Configuration docs](https://invoke-ai.github.io/InvokeAI/features/CONFIGURATION/) for further detail.
## Even More Customizing!
diff --git a/docker/docker-compose.yml b/docker/docker-compose.yml
index 2ad50e74a1..af96cc1c8f 100644
--- a/docker/docker-compose.yml
+++ b/docker/docker-compose.yml
@@ -1,7 +1,5 @@
# Copyright (c) 2023 Eugene Brodsky https://github.com/ebr
-version: '3.8'
-
x-invokeai: &invokeai
image: "local/invokeai:latest"
build:
@@ -32,7 +30,7 @@ x-invokeai: &invokeai
services:
- invokeai-nvidia:
+ invokeai-cuda:
<<: *invokeai
deploy:
resources:
diff --git a/docker/docker-entrypoint.sh b/docker/docker-entrypoint.sh
index 7fb52f3af9..686bd9630f 100755
--- a/docker/docker-entrypoint.sh
+++ b/docker/docker-entrypoint.sh
@@ -23,18 +23,18 @@ usermod -u ${USER_ID} ${USER} 1>/dev/null
# but it is useful to have the full SSH server e.g. on Runpod.
# (use SCP to copy files to/from the image, etc)
if [[ -v "PUBLIC_KEY" ]] && [[ ! -d "${HOME}/.ssh" ]]; then
- apt-get update
- apt-get install -y openssh-server
- pushd "$HOME"
- mkdir -p .ssh
- echo "${PUBLIC_KEY}" > .ssh/authorized_keys
- chmod -R 700 .ssh
- popd
- service ssh start
+ apt-get update
+ apt-get install -y openssh-server
+ pushd "$HOME"
+ mkdir -p .ssh
+ echo "${PUBLIC_KEY}" >.ssh/authorized_keys
+ chmod -R 700 .ssh
+ popd
+ service ssh start
fi
mkdir -p "${INVOKEAI_ROOT}"
-chown --recursive ${USER} "${INVOKEAI_ROOT}"
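+# chown may fail on mounts whose ownership can't be changed; don't abort startup if it does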
+chown --recursive ${USER} "${INVOKEAI_ROOT}" || true
cd "${INVOKEAI_ROOT}"
# Run the CMD as the Container User (not root).
diff --git a/docker/run.sh b/docker/run.sh
index d413e53453..1040e865bf 100755
--- a/docker/run.sh
+++ b/docker/run.sh
@@ -8,11 +8,15 @@ run() {
local build_args=""
local profile=""
+ # create .env file if it doesn't exist, otherwise docker compose will fail
touch .env
+
+ # parse .env file for build args
build_args=$(awk '$1 ~ /=[^$]/ && $0 !~ /^#/ {print "--build-arg " $0 " "}' .env) &&
profile="$(awk -F '=' '/GPU_DRIVER/ {print $2}' .env)"
- [[ -z "$profile" ]] && profile="nvidia"
+ # default to 'cuda' profile
+ [[ -z "$profile" ]] && profile="cuda"
local service_name="invokeai-$profile"
diff --git a/docs/contributing/MODEL_MANAGER.md b/docs/contributing/MODEL_MANAGER.md
index 9c5cb8e589..487c859dec 100644
--- a/docs/contributing/MODEL_MANAGER.md
+++ b/docs/contributing/MODEL_MANAGER.md
@@ -408,7 +408,7 @@ config = get_config()
logger = InvokeAILogger.get_logger(config=config)
db = SqliteDatabase(config.db_path, logger)
-record_store = ModelRecordServiceSQL(db)
+record_store = ModelRecordServiceSQL(db, logger)
queue = DownloadQueueService()
queue.start()
diff --git a/docs/installation/040_INSTALL_DOCKER.md b/docs/installation/040_INSTALL_DOCKER.md
index 3814b72e80..119cff93d2 100644
--- a/docs/installation/040_INSTALL_DOCKER.md
+++ b/docs/installation/040_INSTALL_DOCKER.md
@@ -4,50 +4,37 @@ title: Installing with Docker
# :fontawesome-brands-docker: Docker
-!!! warning "macOS and AMD GPU Users"
+!!! warning "macOS users"
- We highly recommend to Install InvokeAI locally using [these instructions](INSTALLATION.md),
- because Docker containers can not access the GPU on macOS.
-
-!!! warning "AMD GPU Users"
-
- Container support for AMD GPUs has been reported to work by the community, but has not received
- extensive testing. Please make sure to set the `GPU_DRIVER=rocm` environment variable (see below), and
- use the `build.sh` script to build the image for this to take effect at build time.
+    Docker cannot access the GPU on macOS, so your generation speeds will be slow. [Install InvokeAI](INSTALLATION.md) instead.
!!! tip "Linux and Windows Users"
- For optimal performance, configure your Docker daemon to access your machine's GPU.
+ Configure Docker to access your machine's GPU.
Docker Desktop on Windows [includes GPU support](https://www.docker.com/blog/wsl-2-gpu-support-for-docker-desktop-on-nvidia-gpus/).
- Linux users should install and configure the [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html)
-
-## Why containers?
-
-They provide a flexible, reliable way to build and deploy InvokeAI.
-See [Processes](https://12factor.net/processes) under the Twelve-Factor App
-methodology for details on why running applications in such a stateless fashion is important.
-
-The container is configured for CUDA by default, but can be built to support AMD GPUs
-by setting the `GPU_DRIVER=rocm` environment variable at Docker image build time.
-
-Developers on Apple silicon (M1/M2/M3): You
-[can't access your GPU cores from Docker containers](https://github.com/pytorch/pytorch/issues/81224)
-and performance is reduced compared with running it directly on macOS but for
-development purposes it's fine. Once you're done with development tasks on your
-laptop you can build for the target platform and architecture and deploy to
-another environment with NVIDIA GPUs on-premises or in the cloud.
+ Linux users should follow the [NVIDIA](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html) or [AMD](https://rocm.docs.amd.com/projects/install-on-linux/en/latest/how-to/docker.html) documentation.
## TL;DR
-This assumes properly configured Docker on Linux or Windows/WSL2. Read on for detailed customization options.
+Ensure your Docker setup is able to use your GPU. Then:
+
+ ```bash
+ docker run --runtime=nvidia --gpus=all --publish 9090:9090 ghcr.io/invoke-ai/invokeai
+ ```
+
+Once the container starts up, open http://localhost:9090 in your browser, install some models, and start generating.
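+
+For AMD GPUs, the equivalent ROCm one-liner (using the `main-rocm` image tag, as in `docker/README.md`) is:
+
+ ```bash
+ docker run --device /dev/kfd --device /dev/dri --publish 9090:9090 ghcr.io/invoke-ai/invokeai:main-rocm
+ ```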
+
+## Build-It-Yourself
+
+All the Docker materials are located inside the [docker](https://github.com/invoke-ai/InvokeAI/tree/main/docker) directory in the Git repo.
```bash
- # docker compose commands should be run from the `docker` directory
cd docker
+ cp .env.sample .env
docker compose up
```
-## Installation in a Linux container (desktop)
+We also ship the `run.sh` convenience script. See the `docker/README.md` file for detailed instructions on how to customize the Docker setup to your needs.
### Prerequisites
@@ -58,18 +45,9 @@ Preferences, Resources, Advanced. Increase the CPUs and Memory to avoid this
[Issue](https://github.com/invoke-ai/InvokeAI/issues/342). You may need to
increase Swap and Disk image size too.
-#### Get a Huggingface-Token
-
-Besides the Docker Agent you will need an Account on
-[huggingface.co](https://huggingface.co/join).
-
-After you succesfully registered your account, go to
-[huggingface.co/settings/tokens](https://huggingface.co/settings/tokens), create
-a token and copy it, since you will need in for the next step.
-
### Setup
-Set up your environmnent variables. In the `docker` directory, make a copy of `.env.sample` and name it `.env`. Make changes as necessary.
+Set up your environment variables. In the `docker` directory, make a copy of `.env.sample` and name it `.env`. Make changes as necessary.
Any environment variables supported by InvokeAI can be set here - please see the [CONFIGURATION](../features/CONFIGURATION.md) for further detail.
@@ -103,10 +81,9 @@ Once the container starts up (and configures the InvokeAI root directory if this
## Troubleshooting / FAQ
- Q: I am running on Windows under WSL2, and am seeing a "no such file or directory" error.
-- A: Your `docker-entrypoint.sh` file likely has Windows (CRLF) as opposed to Unix (LF) line endings,
- and you may have cloned this repository before the issue was fixed. To solve this, please change
- the line endings in the `docker-entrypoint.sh` file to `LF`. You can do this in VSCode
+- A: Your `docker-entrypoint.sh` might have Windows (CRLF) line endings, depending on how you cloned the repository.
+ To solve this, change the line endings in the `docker-entrypoint.sh` file to `LF`. You can do this in VSCode
(`Ctrl+P` and search for "line endings"), or by using the `dos2unix` utility in WSL.
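+  For example, from the repository root inside WSL:
+
+  ```bash
+  dos2unix docker/docker-entrypoint.sh
+  ```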
Finally, you may delete `docker-entrypoint.sh` followed by `git pull; git checkout docker/docker-entrypoint.sh`
to reset the file to its most recent version.
- For more information on this issue, please see the [Docker Desktop documentation](https://docs.docker.com/desktop/troubleshoot/topics/#avoid-unexpected-syntax-errors-use-unix-style-line-endings-for-files-in-containers)
+    For more information on this issue, see the [Docker Desktop documentation](https://docs.docker.com/desktop/troubleshoot/topics/#avoid-unexpected-syntax-errors-use-unix-style-line-endings-for-files-in-containers)
diff --git a/installer/templates/invoke.bat.in b/installer/templates/invoke.bat.in
index c8ef19710b..774b667c08 100644
--- a/installer/templates/invoke.bat.in
+++ b/installer/templates/invoke.bat.in
@@ -13,7 +13,7 @@ echo 2. Open the developer console
echo 3. Command-line help
echo Q - Quit
echo.
-echo To update, download and run the installer from https://github.com/invoke-ai/InvokeAI/releases/latest.
+echo To update, download and run the installer from https://github.com/invoke-ai/InvokeAI/releases/latest
echo.
set /P choice="Please enter 1-4, Q: [1] "
if not defined choice set choice=1
diff --git a/invokeai/app/api/dependencies.py b/invokeai/app/api/dependencies.py
index 19a7bb083d..6e049399db 100644
--- a/invokeai/app/api/dependencies.py
+++ b/invokeai/app/api/dependencies.py
@@ -4,37 +4,39 @@ from logging import Logger
import torch
+from invokeai.app.services.board_image_records.board_image_records_sqlite import SqliteBoardImageRecordStorage
+from invokeai.app.services.board_images.board_images_default import BoardImagesService
+from invokeai.app.services.board_records.board_records_sqlite import SqliteBoardRecordStorage
+from invokeai.app.services.boards.boards_default import BoardService
+from invokeai.app.services.bulk_download.bulk_download_default import BulkDownloadService
+from invokeai.app.services.config.config_default import InvokeAIAppConfig
+from invokeai.app.services.download.download_default import DownloadQueueService
+from invokeai.app.services.events.events_fastapievents import FastAPIEventService
+from invokeai.app.services.image_files.image_files_disk import DiskImageFileStorage
+from invokeai.app.services.image_records.image_records_sqlite import SqliteImageRecordStorage
+from invokeai.app.services.images.images_default import ImageService
+from invokeai.app.services.invocation_cache.invocation_cache_memory import MemoryInvocationCache
+from invokeai.app.services.invocation_services import InvocationServices
+from invokeai.app.services.invocation_stats.invocation_stats_default import InvocationStatsService
+from invokeai.app.services.invoker import Invoker
+from invokeai.app.services.model_images.model_images_default import ModelImageFileStorageDisk
+from invokeai.app.services.model_manager.model_manager_default import ModelManagerService
+from invokeai.app.services.model_records.model_records_sql import ModelRecordServiceSQL
+from invokeai.app.services.names.names_default import SimpleNameService
from invokeai.app.services.object_serializer.object_serializer_disk import ObjectSerializerDisk
from invokeai.app.services.object_serializer.object_serializer_forward_cache import ObjectSerializerForwardCache
+from invokeai.app.services.session_processor.session_processor_default import (
+ DefaultSessionProcessor,
+ DefaultSessionRunner,
+)
+from invokeai.app.services.session_queue.session_queue_sqlite import SqliteSessionQueue
from invokeai.app.services.shared.sqlite.sqlite_util import init_db
+from invokeai.app.services.urls.urls_default import LocalUrlService
+from invokeai.app.services.workflow_records.workflow_records_sqlite import SqliteWorkflowRecordsStorage
from invokeai.backend.stable_diffusion.diffusion.conditioning_data import ConditioningFieldData
from invokeai.backend.util.logging import InvokeAILogger
from invokeai.version.invokeai_version import __version__
-from ..services.board_image_records.board_image_records_sqlite import SqliteBoardImageRecordStorage
-from ..services.board_images.board_images_default import BoardImagesService
-from ..services.board_records.board_records_sqlite import SqliteBoardRecordStorage
-from ..services.boards.boards_default import BoardService
-from ..services.bulk_download.bulk_download_default import BulkDownloadService
-from ..services.config import InvokeAIAppConfig
-from ..services.download import DownloadQueueService
-from ..services.events.events_fastapievents import FastAPIEventService
-from ..services.image_files.image_files_disk import DiskImageFileStorage
-from ..services.image_records.image_records_sqlite import SqliteImageRecordStorage
-from ..services.images.images_default import ImageService
-from ..services.invocation_cache.invocation_cache_memory import MemoryInvocationCache
-from ..services.invocation_services import InvocationServices
-from ..services.invocation_stats.invocation_stats_default import InvocationStatsService
-from ..services.invoker import Invoker
-from ..services.model_images.model_images_default import ModelImageFileStorageDisk
-from ..services.model_manager.model_manager_default import ModelManagerService
-from ..services.model_records import ModelRecordServiceSQL
-from ..services.names.names_default import SimpleNameService
-from ..services.session_processor.session_processor_default import DefaultSessionProcessor, DefaultSessionRunner
-from ..services.session_queue.session_queue_sqlite import SqliteSessionQueue
-from ..services.urls.urls_default import LocalUrlService
-from ..services.workflow_records.workflow_records_sqlite import SqliteWorkflowRecordsStorage
-
# TODO: is there a better way to achieve this?
def check_internet() -> bool:
@@ -97,7 +99,7 @@ class ApiDependencies:
model_images_service = ModelImageFileStorageDisk(model_images_folder / "model_images")
model_manager = ModelManagerService.build_model_manager(
app_config=configuration,
- model_record_service=ModelRecordServiceSQL(db=db),
+ model_record_service=ModelRecordServiceSQL(db=db, logger=logger),
download_queue=download_queue_service,
events=events,
)
diff --git a/invokeai/app/api/routers/app_info.py b/invokeai/app/api/routers/app_info.py
index c3bc98a038..3206adb242 100644
--- a/invokeai/app/api/routers/app_info.py
+++ b/invokeai/app/api/routers/app_info.py
@@ -10,14 +10,13 @@ from fastapi import Body
from fastapi.routing import APIRouter
from pydantic import BaseModel, Field
+from invokeai.app.api.dependencies import ApiDependencies
from invokeai.app.invocations.upscale import ESRGAN_MODELS
from invokeai.app.services.invocation_cache.invocation_cache_common import InvocationCacheStatus
from invokeai.backend.image_util.infill_methods.patchmatch import PatchMatch
from invokeai.backend.util.logging import logging
from invokeai.version import __version__
-from ..dependencies import ApiDependencies
-
class LogLevel(int, Enum):
NotSet = logging.NOTSET
diff --git a/invokeai/app/api/routers/board_images.py b/invokeai/app/api/routers/board_images.py
index 8e36a682d2..eb193f6585 100644
--- a/invokeai/app/api/routers/board_images.py
+++ b/invokeai/app/api/routers/board_images.py
@@ -2,7 +2,7 @@ from fastapi import Body, HTTPException
from fastapi.routing import APIRouter
from pydantic import BaseModel, Field
-from ..dependencies import ApiDependencies
+from invokeai.app.api.dependencies import ApiDependencies
board_images_router = APIRouter(prefix="/v1/board_images", tags=["boards"])
diff --git a/invokeai/app/api/routers/boards.py b/invokeai/app/api/routers/boards.py
index 19c2b330f0..926c0f7fd2 100644
--- a/invokeai/app/api/routers/boards.py
+++ b/invokeai/app/api/routers/boards.py
@@ -4,12 +4,11 @@ from fastapi import Body, HTTPException, Path, Query
from fastapi.routing import APIRouter
from pydantic import BaseModel, Field
+from invokeai.app.api.dependencies import ApiDependencies
from invokeai.app.services.board_records.board_records_common import BoardChanges
from invokeai.app.services.boards.boards_common import BoardDTO
from invokeai.app.services.shared.pagination import OffsetPaginatedResults
-from ..dependencies import ApiDependencies
-
boards_router = APIRouter(prefix="/v1/boards", tags=["boards"])
@@ -32,6 +31,7 @@ class DeleteBoardResult(BaseModel):
)
async def create_board(
board_name: str = Query(description="The name of the board to create"),
+ is_private: bool = Query(default=False, description="Whether the board is private"),
) -> BoardDTO:
"""Creates a board"""
try:
diff --git a/invokeai/app/api/routers/download_queue.py b/invokeai/app/api/routers/download_queue.py
index a6e53c7a5c..2633b28bca 100644
--- a/invokeai/app/api/routers/download_queue.py
+++ b/invokeai/app/api/routers/download_queue.py
@@ -8,13 +8,12 @@ from fastapi.routing import APIRouter
from pydantic.networks import AnyHttpUrl
from starlette.exceptions import HTTPException
+from invokeai.app.api.dependencies import ApiDependencies
from invokeai.app.services.download import (
DownloadJob,
UnknownJobIDException,
)
-from ..dependencies import ApiDependencies
-
download_queue_router = APIRouter(prefix="/v1/download_queue", tags=["download_queue"])
diff --git a/invokeai/app/api/routers/images.py b/invokeai/app/api/routers/images.py
index 991681026c..2bc0b48251 100644
--- a/invokeai/app/api/routers/images.py
+++ b/invokeai/app/api/routers/images.py
@@ -8,6 +8,7 @@ from fastapi.routing import APIRouter
from PIL import Image
from pydantic import BaseModel, Field, JsonValue
+from invokeai.app.api.dependencies import ApiDependencies
from invokeai.app.invocations.fields import MetadataField
from invokeai.app.services.image_records.image_records_common import (
ImageCategory,
@@ -18,8 +19,6 @@ from invokeai.app.services.images.images_common import ImageDTO, ImageUrlsDTO
from invokeai.app.services.shared.pagination import OffsetPaginatedResults
from invokeai.app.services.shared.sqlite.sqlite_common import SQLiteDirection
-from ..dependencies import ApiDependencies
-
images_router = APIRouter(prefix="/v1/images", tags=["images"])
@@ -234,21 +233,14 @@ async def get_image_workflow(
)
async def get_image_full(
image_name: str = Path(description="The name of full-resolution image file to get"),
-) -> FileResponse:
+) -> Response:
"""Gets a full-resolution image file"""
try:
path = ApiDependencies.invoker.services.images.get_path(image_name)
-
- if not ApiDependencies.invoker.services.images.validate_path(path):
- raise HTTPException(status_code=404)
-
- response = FileResponse(
- path,
- media_type="image/png",
- filename=image_name,
- content_disposition_type="inline",
- )
+ with open(path, "rb") as f:
+ content = f.read()
+ response = Response(content, media_type="image/png")
response.headers["Cache-Control"] = f"max-age={IMAGE_MAX_AGE}"
return response
except Exception:
@@ -269,15 +261,14 @@ async def get_image_full(
)
async def get_image_thumbnail(
image_name: str = Path(description="The name of thumbnail image file to get"),
-) -> FileResponse:
+) -> Response:
"""Gets a thumbnail image file"""
try:
path = ApiDependencies.invoker.services.images.get_path(image_name, thumbnail=True)
- if not ApiDependencies.invoker.services.images.validate_path(path):
- raise HTTPException(status_code=404)
-
- response = FileResponse(path, media_type="image/webp", content_disposition_type="inline")
+ with open(path, "rb") as f:
+ content = f.read()
+ response = Response(content, media_type="image/webp")
response.headers["Cache-Control"] = f"max-age={IMAGE_MAX_AGE}"
return response
except Exception:
@@ -323,18 +314,12 @@ async def list_image_dtos(
limit: int = Query(default=10, description="The number of images per page"),
order_dir: SQLiteDirection = Query(default=SQLiteDirection.Descending, description="The order of sort"),
starred_first: bool = Query(default=True, description="Whether to sort by starred images first"),
+ search_term: Optional[str] = Query(default=None, description="The term to search for"),
) -> OffsetPaginatedResults[ImageDTO]:
"""Gets a list of image DTOs"""
image_dtos = ApiDependencies.invoker.services.images.get_many(
- offset,
- limit,
- starred_first,
- order_dir,
- image_origin,
- categories,
- is_intermediate,
- board_id,
+ offset, limit, starred_first, order_dir, image_origin, categories, is_intermediate, board_id, search_term
)
return image_dtos
diff --git a/invokeai/app/api/routers/model_manager.py b/invokeai/app/api/routers/model_manager.py
index 298756d175..f73b7a86b1 100644
--- a/invokeai/app/api/routers/model_manager.py
+++ b/invokeai/app/api/routers/model_manager.py
@@ -16,6 +16,7 @@ from pydantic import AnyHttpUrl, BaseModel, ConfigDict, Field
from starlette.exceptions import HTTPException
from typing_extensions import Annotated
+from invokeai.app.api.dependencies import ApiDependencies
from invokeai.app.services.model_images.model_images_common import ModelImageFileNotFoundException
from invokeai.app.services.model_install.model_install_common import ModelInstallJob
from invokeai.app.services.model_records import (
@@ -35,8 +36,6 @@ from invokeai.backend.model_manager.metadata.metadata_base import ModelMetadataW
from invokeai.backend.model_manager.search import ModelSearch
from invokeai.backend.model_manager.starter_models import STARTER_MODELS, StarterModel, StarterModelWithoutDependencies
-from ..dependencies import ApiDependencies
-
model_manager_router = APIRouter(prefix="/v2/models", tags=["model_manager"])
# images are immutable; set a high max-age
diff --git a/invokeai/app/api/routers/session_queue.py b/invokeai/app/api/routers/session_queue.py
index 7161e54a41..5dd4693795 100644
--- a/invokeai/app/api/routers/session_queue.py
+++ b/invokeai/app/api/routers/session_queue.py
@@ -4,6 +4,7 @@ from fastapi import Body, Path, Query
from fastapi.routing import APIRouter
from pydantic import BaseModel
+from invokeai.app.api.dependencies import ApiDependencies
from invokeai.app.services.session_processor.session_processor_common import SessionProcessorStatus
from invokeai.app.services.session_queue.session_queue_common import (
QUEUE_ITEM_STATUS,
@@ -19,8 +20,6 @@ from invokeai.app.services.session_queue.session_queue_common import (
)
from invokeai.app.services.shared.pagination import CursorPaginatedResults
-from ..dependencies import ApiDependencies
-
session_queue_router = APIRouter(prefix="/v1/queue", tags=["queue"])
diff --git a/invokeai/app/api_app.py b/invokeai/app/api_app.py
index e69d95af71..88820a0c4c 100644
--- a/invokeai/app/api_app.py
+++ b/invokeai/app/api_app.py
@@ -20,14 +20,9 @@ from torch.backends.mps import is_available as is_mps_available
# noinspection PyUnresolvedReferences
import invokeai.backend.util.hotfixes # noqa: F401 (monkeypatching on import)
import invokeai.frontend.web as web_dir
+from invokeai.app.api.dependencies import ApiDependencies
from invokeai.app.api.no_cache_staticfiles import NoCacheStaticFiles
-from invokeai.app.services.config.config_default import get_config
-from invokeai.app.util.custom_openapi import get_openapi_func
-from invokeai.backend.util.devices import TorchDevice
-
-from ..backend.util.logging import InvokeAILogger
-from .api.dependencies import ApiDependencies
-from .api.routers import (
+from invokeai.app.api.routers import (
app_info,
board_images,
boards,
@@ -38,7 +33,11 @@ from .api.routers import (
utilities,
workflows,
)
-from .api.sockets import SocketIO
+from invokeai.app.api.sockets import SocketIO
+from invokeai.app.services.config.config_default import get_config
+from invokeai.app.util.custom_openapi import get_openapi_func
+from invokeai.backend.util.devices import TorchDevice
+from invokeai.backend.util.logging import InvokeAILogger
app_config = get_config()
@@ -162,6 +161,7 @@ def invoke_api() -> None:
# Taken from https://waylonwalker.com/python-find-available-port/, thanks Waylon!
# https://github.com/WaylonWalker
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
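+            # Bound the connection test so probing an unresponsive port can't hang startup.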
+ s.settimeout(1)
if s.connect_ex(("localhost", port)) == 0:
return find_port(port=port + 1)
else:
diff --git a/invokeai/app/invocations/baseinvocation.py b/invokeai/app/invocations/baseinvocation.py
index 1d169f0a82..b527de41bc 100644
--- a/invokeai/app/invocations/baseinvocation.py
+++ b/invokeai/app/invocations/baseinvocation.py
@@ -40,7 +40,7 @@ from invokeai.app.util.misc import uuid_string
from invokeai.backend.util.logging import InvokeAILogger
if TYPE_CHECKING:
- from ..services.invocation_services import InvocationServices
+ from invokeai.app.services.invocation_services import InvocationServices
logger = InvokeAILogger.get_logger()
diff --git a/invokeai/app/invocations/collections.py b/invokeai/app/invocations/collections.py
index e02291980f..bd3dedb3f8 100644
--- a/invokeai/app/invocations/collections.py
+++ b/invokeai/app/invocations/collections.py
@@ -4,13 +4,12 @@
import numpy as np
from pydantic import ValidationInfo, field_validator
+from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation
+from invokeai.app.invocations.fields import InputField
from invokeai.app.invocations.primitives import IntegerCollectionOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.app.util.misc import SEED_MAX
-from .baseinvocation import BaseInvocation, invocation
-from .fields import InputField
-
@invocation(
"range", title="Integer Range", tags=["collection", "integer", "range"], category="collections", version="1.0.0"
diff --git a/invokeai/app/invocations/compel.py b/invokeai/app/invocations/compel.py
index 1e78e10d38..fffb09e654 100644
--- a/invokeai/app/invocations/compel.py
+++ b/invokeai/app/invocations/compel.py
@@ -5,6 +5,7 @@ from compel import Compel, ReturnedEmbeddingsType
from compel.prompt_parser import Blend, Conjunction, CrossAttentionControlSubstitute, FlattenedPrompt, Fragment
from transformers import CLIPTextModel, CLIPTextModelWithProjection, CLIPTokenizer
+from invokeai.app.invocations.baseinvocation import BaseInvocation, BaseInvocationOutput, invocation, invocation_output
from invokeai.app.invocations.fields import (
ConditioningField,
FieldDescriptions,
@@ -14,6 +15,7 @@ from invokeai.app.invocations.fields import (
TensorField,
UIComponent,
)
+from invokeai.app.invocations.model import CLIPField
from invokeai.app.invocations.primitives import ConditioningOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.app.util.ti_utils import generate_ti_list
@@ -26,9 +28,6 @@ from invokeai.backend.stable_diffusion.diffusion.conditioning_data import (
)
from invokeai.backend.util.devices import TorchDevice
-from .baseinvocation import BaseInvocation, BaseInvocationOutput, invocation, invocation_output
-from .model import CLIPField
-
# unconditioned: Optional[torch.Tensor]
diff --git a/invokeai/app/invocations/constants.py b/invokeai/app/invocations/constants.py
index e01589be81..e97275e4fd 100644
--- a/invokeai/app/invocations/constants.py
+++ b/invokeai/app/invocations/constants.py
@@ -1,6 +1,5 @@
from typing import Literal
-from invokeai.backend.stable_diffusion.schedulers import SCHEDULER_MAP
from invokeai.backend.util.devices import TorchDevice
LATENT_SCALE_FACTOR = 8
@@ -11,9 +10,6 @@ factor is hard-coded to a literal '8' rather than using this constant.
The ratio of image:latent dimensions is LATENT_SCALE_FACTOR:1, or 8:1.
"""
-SCHEDULER_NAME_VALUES = Literal[tuple(SCHEDULER_MAP.keys())]
-"""A literal type representing the valid scheduler names."""
-
IMAGE_MODES = Literal["L", "RGB", "RGBA", "CMYK", "YCbCr", "LAB", "HSV", "I", "F"]
"""A literal type for PIL image modes supported by Invoke"""
diff --git a/invokeai/app/invocations/controlnet_image_processors.py b/invokeai/app/invocations/controlnet_image_processors.py
index 9778c3d22d..4a74624818 100644
--- a/invokeai/app/invocations/controlnet_image_processors.py
+++ b/invokeai/app/invocations/controlnet_image_processors.py
@@ -22,6 +22,13 @@ from controlnet_aux.util import HWC3, ade_palette
from PIL import Image
from pydantic import BaseModel, Field, field_validator, model_validator
+from invokeai.app.invocations.baseinvocation import (
+ BaseInvocation,
+ BaseInvocationOutput,
+ Classification,
+ invocation,
+ invocation_output,
+)
from invokeai.app.invocations.fields import (
FieldDescriptions,
ImageField,
@@ -46,8 +53,6 @@ from invokeai.backend.image_util.util import np_to_pil, pil_to_np
from invokeai.backend.model_manager.load import LoadedModelWithoutConfig
from invokeai.backend.util.devices import TorchDevice
-from .baseinvocation import BaseInvocation, BaseInvocationOutput, Classification, invocation, invocation_output
-
class ControlField(BaseModel):
image: ImageField = Field(description="The control image")
diff --git a/invokeai/app/invocations/cv.py b/invokeai/app/invocations/cv.py
index a7c394deb2..f7951ccfeb 100644
--- a/invokeai/app/invocations/cv.py
+++ b/invokeai/app/invocations/cv.py
@@ -5,13 +5,11 @@ import cv2 as cv
import numpy
from PIL import Image, ImageOps
-from invokeai.app.invocations.fields import ImageField
+from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation
+from invokeai.app.invocations.fields import ImageField, InputField, WithBoard, WithMetadata
from invokeai.app.invocations.primitives import ImageOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
-from .baseinvocation import BaseInvocation, invocation
-from .fields import InputField, WithBoard, WithMetadata
-
@invocation("cv_inpaint", title="OpenCV Inpaint", tags=["opencv", "inpaint"], category="inpaint", version="1.3.1")
class CvInpaintInvocation(BaseInvocation, WithMetadata, WithBoard):
diff --git a/invokeai/app/invocations/denoise_latents.py b/invokeai/app/invocations/denoise_latents.py
index fd901298f7..ccacc3303c 100644
--- a/invokeai/app/invocations/denoise_latents.py
+++ b/invokeai/app/invocations/denoise_latents.py
@@ -1,5 +1,6 @@
# Copyright (c) 2023 Kyle Schouviller (https://github.com/kyle0654)
import inspect
+import os
from contextlib import ExitStack
from typing import Any, Dict, Iterator, List, Optional, Tuple, Union
@@ -17,7 +18,7 @@ from torchvision.transforms.functional import resize as tv_resize
from transformers import CLIPVisionModelWithProjection
from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation
-from invokeai.app.invocations.constants import LATENT_SCALE_FACTOR, SCHEDULER_NAME_VALUES
+from invokeai.app.invocations.constants import LATENT_SCALE_FACTOR
from invokeai.app.invocations.controlnet_image_processors import ControlField
from invokeai.app.invocations.fields import (
ConditioningField,
@@ -39,6 +40,7 @@ from invokeai.backend.lora import LoRAModelRaw
from invokeai.backend.model_manager import BaseModelType
from invokeai.backend.model_patcher import ModelPatcher
from invokeai.backend.stable_diffusion import PipelineIntermediateState, set_seamless
+from invokeai.backend.stable_diffusion.denoise_context import DenoiseContext, DenoiseInputs
from invokeai.backend.stable_diffusion.diffusers_pipeline import (
ControlNetData,
StableDiffusionGeneratorPipeline,
@@ -53,7 +55,13 @@ from invokeai.backend.stable_diffusion.diffusion.conditioning_data import (
TextConditioningData,
TextConditioningRegions,
)
+from invokeai.backend.stable_diffusion.diffusion.custom_atttention import CustomAttnProcessor2_0
+from invokeai.backend.stable_diffusion.diffusion_backend import StableDiffusionBackend
+from invokeai.backend.stable_diffusion.extension_callback_type import ExtensionCallbackType
+from invokeai.backend.stable_diffusion.extensions.preview import PreviewExt
+from invokeai.backend.stable_diffusion.extensions_manager import ExtensionsManager
from invokeai.backend.stable_diffusion.schedulers import SCHEDULER_MAP
+from invokeai.backend.stable_diffusion.schedulers.schedulers import SCHEDULER_NAME_VALUES
from invokeai.backend.util.devices import TorchDevice
from invokeai.backend.util.hotfixes import ControlNetModel
from invokeai.backend.util.mask import to_standard_float_mask
@@ -313,9 +321,10 @@ class DenoiseLatentsInvocation(BaseInvocation):
context: InvocationContext,
positive_conditioning_field: Union[ConditioningField, list[ConditioningField]],
negative_conditioning_field: Union[ConditioningField, list[ConditioningField]],
- unet: UNet2DConditionModel,
latent_height: int,
latent_width: int,
+ device: torch.device,
+ dtype: torch.dtype,
cfg_scale: float | list[float],
steps: int,
cfg_rescale_multiplier: float,
@@ -329,10 +338,10 @@ class DenoiseLatentsInvocation(BaseInvocation):
uncond_list = [uncond_list]
cond_text_embeddings, cond_text_embedding_masks = DenoiseLatentsInvocation._get_text_embeddings_and_masks(
- cond_list, context, unet.device, unet.dtype
+ cond_list, context, device, dtype
)
uncond_text_embeddings, uncond_text_embedding_masks = DenoiseLatentsInvocation._get_text_embeddings_and_masks(
- uncond_list, context, unet.device, unet.dtype
+ uncond_list, context, device, dtype
)
cond_text_embedding, cond_regions = DenoiseLatentsInvocation._concat_regional_text_embeddings(
@@ -340,14 +349,14 @@ class DenoiseLatentsInvocation(BaseInvocation):
masks=cond_text_embedding_masks,
latent_height=latent_height,
latent_width=latent_width,
- dtype=unet.dtype,
+ dtype=dtype,
)
uncond_text_embedding, uncond_regions = DenoiseLatentsInvocation._concat_regional_text_embeddings(
text_conditionings=uncond_text_embeddings,
masks=uncond_text_embedding_masks,
latent_height=latent_height,
latent_width=latent_width,
- dtype=unet.dtype,
+ dtype=dtype,
)
if isinstance(cfg_scale, list):
@@ -706,9 +715,108 @@ class DenoiseLatentsInvocation(BaseInvocation):
return seed, noise, latents
+ def invoke(self, context: InvocationContext) -> LatentsOutput:
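+        # The modular denoise backend is opt-in via the USE_MODULAR_DENOISE env var.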
+ if os.environ.get("USE_MODULAR_DENOISE", False):
+ return self._new_invoke(context)
+ else:
+ return self._old_invoke(context)
+
@torch.no_grad()
@SilenceWarnings() # This quenches the NSFW nag from diffusers.
- def invoke(self, context: InvocationContext) -> LatentsOutput:
+ def _new_invoke(self, context: InvocationContext) -> LatentsOutput:
+ ext_manager = ExtensionsManager(is_canceled=context.util.is_canceled)
+
+ device = TorchDevice.choose_torch_device()
+ dtype = TorchDevice.choose_torch_dtype()
+
+ seed, noise, latents = self.prepare_noise_and_latents(context, self.noise, self.latents)
+ latents = latents.to(device=device, dtype=dtype)
+ if noise is not None:
+ noise = noise.to(device=device, dtype=dtype)
+
+ _, _, latent_height, latent_width = latents.shape
+
+ conditioning_data = self.get_conditioning_data(
+ context=context,
+ positive_conditioning_field=self.positive_conditioning,
+ negative_conditioning_field=self.negative_conditioning,
+ cfg_scale=self.cfg_scale,
+ steps=self.steps,
+ latent_height=latent_height,
+ latent_width=latent_width,
+ device=device,
+ dtype=dtype,
+ # TODO: old backend, remove
+ cfg_rescale_multiplier=self.cfg_rescale_multiplier,
+ )
+
+ scheduler = get_scheduler(
+ context=context,
+ scheduler_info=self.unet.scheduler,
+ scheduler_name=self.scheduler,
+ seed=seed,
+ )
+
+ timesteps, init_timestep, scheduler_step_kwargs = self.init_scheduler(
+ scheduler,
+ seed=seed,
+ device=device,
+ steps=self.steps,
+ denoising_start=self.denoising_start,
+ denoising_end=self.denoising_end,
+ )
+
+ denoise_ctx = DenoiseContext(
+ inputs=DenoiseInputs(
+ orig_latents=latents,
+ timesteps=timesteps,
+ init_timestep=init_timestep,
+ noise=noise,
+ seed=seed,
+ scheduler_step_kwargs=scheduler_step_kwargs,
+ conditioning_data=conditioning_data,
+ attention_processor_cls=CustomAttnProcessor2_0,
+ ),
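+            # unet is attached below, after the model has been loaded onto the device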
+ unet=None,
+ scheduler=scheduler,
+ )
+
+ # get the unet's config so that we can pass the base to sd_step_callback()
+ unet_config = context.models.get_config(self.unet.unet.key)
+
+ ### preview
+ def step_callback(state: PipelineIntermediateState) -> None:
+ context.util.sd_step_callback(state, unet_config.base)
+
+ ext_manager.add_extension(PreviewExt(step_callback))
+
+ # ext: t2i/ip adapter
+ ext_manager.run_callback(ExtensionCallbackType.SETUP, denoise_ctx)
+
+ unet_info = context.models.load(self.unet.unet)
+ assert isinstance(unet_info.model, UNet2DConditionModel)
+ with (
+ unet_info.model_on_device() as (model_state_dict, unet),
+ ModelPatcher.patch_unet_attention_processor(unet, denoise_ctx.inputs.attention_processor_cls),
+ # ext: controlnet
+ ext_manager.patch_extensions(unet),
+ # ext: freeu, seamless, ip adapter, lora
+ ext_manager.patch_unet(model_state_dict, unet),
+ ):
+ sd_backend = StableDiffusionBackend(unet, scheduler)
+ denoise_ctx.unet = unet
+ result_latents = sd_backend.latents_from_embeddings(denoise_ctx, ext_manager)
+
+ # https://discuss.huggingface.co/t/memory-usage-by-later-pipeline-stages/23699
+ result_latents = result_latents.detach().to("cpu")
+ TorchDevice.empty_cache()
+
+ name = context.tensors.save(tensor=result_latents)
+ return LatentsOutput.build(latents_name=name, latents=result_latents, seed=None)
+
+ @torch.no_grad()
+ @SilenceWarnings() # This quenches the NSFW nag from diffusers.
+ def _old_invoke(self, context: InvocationContext) -> LatentsOutput:
seed, noise, latents = self.prepare_noise_and_latents(context, self.noise, self.latents)
mask, masked_latents, gradient_mask = self.prep_inpaint_mask(context, latents)
@@ -787,7 +895,8 @@ class DenoiseLatentsInvocation(BaseInvocation):
context=context,
positive_conditioning_field=self.positive_conditioning,
negative_conditioning_field=self.negative_conditioning,
- unet=unet,
+ device=unet.device,
+ dtype=unet.dtype,
latent_height=latent_height,
latent_width=latent_width,
cfg_scale=self.cfg_scale,
diff --git a/invokeai/app/invocations/fields.py b/invokeai/app/invocations/fields.py
index 0fa0216f1c..f9a483f84c 100644
--- a/invokeai/app/invocations/fields.py
+++ b/invokeai/app/invocations/fields.py
@@ -48,6 +48,7 @@ class UIType(str, Enum, metaclass=MetaEnum):
ControlNetModel = "ControlNetModelField"
IPAdapterModel = "IPAdapterModelField"
T2IAdapterModel = "T2IAdapterModelField"
+ SpandrelImageToImageModel = "SpandrelImageToImageModelField"
# endregion
# region Misc Field Types
@@ -134,6 +135,7 @@ class FieldDescriptions:
sdxl_main_model = "SDXL Main model (UNet, VAE, CLIP1, CLIP2) to load"
sdxl_refiner_model = "SDXL Refiner Main Model (UNet, VAE, CLIP2) to load"
onnx_main_model = "ONNX Main model (UNet, VAE, CLIP) to load"
+ spandrel_image_to_image_model = "Image-to-Image model"
lora_weight = "The weight at which the LoRA is applied to each model"
compel_prompt = "Prompt to be parsed by Compel to create a conditioning tensor"
raw_prompt = "Raw prompt text (no parsing)"
@@ -160,6 +162,7 @@ class FieldDescriptions:
fp32 = "Whether or not to use full float32 precision"
precision = "Precision to use"
tiled = "Processing using overlapping tiles (reduce memory consumption)"
+ vae_tile_size = "The tile size for VAE tiling in pixels (image space). If set to 0, the default tile size for the model will be used. Larger tile sizes generally produce better results at the cost of higher memory usage."
detect_res = "Pixel resolution for detection"
image_res = "Pixel resolution for output image"
safe_mode = "Whether or not to use safe mode"
diff --git a/invokeai/app/invocations/image.py b/invokeai/app/invocations/image.py
index 65e7ce5e06..a551f8df8a 100644
--- a/invokeai/app/invocations/image.py
+++ b/invokeai/app/invocations/image.py
@@ -6,6 +6,7 @@ import cv2
import numpy
from PIL import Image, ImageChops, ImageFilter, ImageOps
+from invokeai.app.invocations.baseinvocation import BaseInvocation, Classification, invocation
from invokeai.app.invocations.constants import IMAGE_MODES
from invokeai.app.invocations.fields import (
ColorField,
@@ -21,8 +22,6 @@ from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.image_util.invisible_watermark import InvisibleWatermark
from invokeai.backend.image_util.safety_checker import SafetyChecker
-from .baseinvocation import BaseInvocation, Classification, invocation
-
@invocation("show_image", title="Show Image", tags=["image"], category="image", version="1.0.1")
class ShowImageInvocation(BaseInvocation):
diff --git a/invokeai/app/invocations/image_to_latents.py b/invokeai/app/invocations/image_to_latents.py
index 06de530154..dadd8bb3a1 100644
--- a/invokeai/app/invocations/image_to_latents.py
+++ b/invokeai/app/invocations/image_to_latents.py
@@ -1,3 +1,4 @@
+from contextlib import nullcontext
from functools import singledispatchmethod
import einops
@@ -12,7 +13,7 @@ from diffusers.models.autoencoders.autoencoder_kl import AutoencoderKL
from diffusers.models.autoencoders.autoencoder_tiny import AutoencoderTiny
from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation
-from invokeai.app.invocations.constants import DEFAULT_PRECISION
+from invokeai.app.invocations.constants import DEFAULT_PRECISION, LATENT_SCALE_FACTOR
from invokeai.app.invocations.fields import (
FieldDescriptions,
ImageField,
@@ -24,6 +25,7 @@ from invokeai.app.invocations.primitives import LatentsOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.model_manager import LoadedModel
from invokeai.backend.stable_diffusion.diffusers_pipeline import image_resized_to_grid_as_tensor
+from invokeai.backend.stable_diffusion.vae_tiling import patch_vae_tiling_params
@invocation(
@@ -31,7 +33,7 @@ from invokeai.backend.stable_diffusion.diffusers_pipeline import image_resized_t
title="Image to Latents",
tags=["latents", "image", "vae", "i2l"],
category="latents",
- version="1.0.2",
+ version="1.1.0",
)
class ImageToLatentsInvocation(BaseInvocation):
"""Encodes an image into latents."""
@@ -44,12 +46,17 @@ class ImageToLatentsInvocation(BaseInvocation):
input=Input.Connection,
)
tiled: bool = InputField(default=False, description=FieldDescriptions.tiled)
+ # NOTE: tile_size = 0 is a special value. We use this rather than `int | None`, because the workflow UI does not
+ # offer a way to directly set None values.
+ tile_size: int = InputField(default=0, multiple_of=8, description=FieldDescriptions.vae_tile_size)
fp32: bool = InputField(default=DEFAULT_PRECISION == torch.float32, description=FieldDescriptions.fp32)
@staticmethod
- def vae_encode(vae_info: LoadedModel, upcast: bool, tiled: bool, image_tensor: torch.Tensor) -> torch.Tensor:
+ def vae_encode(
+ vae_info: LoadedModel, upcast: bool, tiled: bool, image_tensor: torch.Tensor, tile_size: int = 0
+ ) -> torch.Tensor:
with vae_info as vae:
- assert isinstance(vae, torch.nn.Module)
+ assert isinstance(vae, (AutoencoderKL, AutoencoderTiny))
orig_dtype = vae.dtype
if upcast:
vae.to(dtype=torch.float32)
@@ -81,9 +88,18 @@ class ImageToLatentsInvocation(BaseInvocation):
else:
vae.disable_tiling()
+ tiling_context = nullcontext()
+ if tile_size > 0:
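+            # Temporarily override the VAE's tiling params; 25% overlap blends seams between tiles.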
+ tiling_context = patch_vae_tiling_params(
+ vae,
+ tile_sample_min_size=tile_size,
+ tile_latent_min_size=tile_size // LATENT_SCALE_FACTOR,
+ tile_overlap_factor=0.25,
+ )
+
# non_noised_latents_from_image
image_tensor = image_tensor.to(device=vae.device, dtype=vae.dtype)
- with torch.inference_mode():
+ with torch.inference_mode(), tiling_context:
latents = ImageToLatentsInvocation._encode_to_tensor(vae, image_tensor)
latents = vae.config.scaling_factor * latents
@@ -101,7 +117,9 @@ class ImageToLatentsInvocation(BaseInvocation):
if image_tensor.dim() == 3:
image_tensor = einops.rearrange(image_tensor, "c h w -> 1 c h w")
- latents = self.vae_encode(vae_info, self.fp32, self.tiled, image_tensor)
+ latents = self.vae_encode(
+ vae_info=vae_info, upcast=self.fp32, tiled=self.tiled, image_tensor=image_tensor, tile_size=self.tile_size
+ )
latents = latents.to("cpu")
name = context.tensors.save(tensor=latents)
diff --git a/invokeai/app/invocations/infill.py b/invokeai/app/invocations/infill.py
index 7e1a2ee322..3314d72620 100644
--- a/invokeai/app/invocations/infill.py
+++ b/invokeai/app/invocations/infill.py
@@ -3,7 +3,9 @@ from typing import Literal, get_args
from PIL import Image
-from invokeai.app.invocations.fields import ColorField, ImageField
+from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation
+from invokeai.app.invocations.fields import ColorField, ImageField, InputField, WithBoard, WithMetadata
+from invokeai.app.invocations.image import PIL_RESAMPLING_MAP, PIL_RESAMPLING_MODES
from invokeai.app.invocations.primitives import ImageOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.app.util.misc import SEED_MAX
@@ -14,10 +16,6 @@ from invokeai.backend.image_util.infill_methods.patchmatch import PatchMatch, in
from invokeai.backend.image_util.infill_methods.tile import infill_tile
from invokeai.backend.util.logging import InvokeAILogger
-from .baseinvocation import BaseInvocation, invocation
-from .fields import InputField, WithBoard, WithMetadata
-from .image import PIL_RESAMPLING_MAP, PIL_RESAMPLING_MODES
-
logger = InvokeAILogger.get_logger()
diff --git a/invokeai/app/invocations/latents_to_image.py b/invokeai/app/invocations/latents_to_image.py
index 202e8bfa1b..cc8a9c44a3 100644
--- a/invokeai/app/invocations/latents_to_image.py
+++ b/invokeai/app/invocations/latents_to_image.py
@@ -1,3 +1,5 @@
+from contextlib import nullcontext
+
import torch
from diffusers.image_processor import VaeImageProcessor
from diffusers.models.attention_processor import (
@@ -8,10 +10,9 @@ from diffusers.models.attention_processor import (
)
from diffusers.models.autoencoders.autoencoder_kl import AutoencoderKL
from diffusers.models.autoencoders.autoencoder_tiny import AutoencoderTiny
-from diffusers.models.unets.unet_2d_condition import UNet2DConditionModel
from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation
-from invokeai.app.invocations.constants import DEFAULT_PRECISION
+from invokeai.app.invocations.constants import DEFAULT_PRECISION, LATENT_SCALE_FACTOR
from invokeai.app.invocations.fields import (
FieldDescriptions,
Input,
@@ -24,6 +25,7 @@ from invokeai.app.invocations.model import VAEField
from invokeai.app.invocations.primitives import ImageOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.stable_diffusion import set_seamless
+from invokeai.backend.stable_diffusion.vae_tiling import patch_vae_tiling_params
from invokeai.backend.util.devices import TorchDevice
@@ -32,7 +34,7 @@ from invokeai.backend.util.devices import TorchDevice
title="Latents to Image",
tags=["latents", "image", "vae", "l2i"],
category="latents",
- version="1.2.2",
+ version="1.3.0",
)
class LatentsToImageInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Generates an image from latents."""
@@ -46,6 +48,9 @@ class LatentsToImageInvocation(BaseInvocation, WithMetadata, WithBoard):
input=Input.Connection,
)
tiled: bool = InputField(default=False, description=FieldDescriptions.tiled)
+ # NOTE: tile_size = 0 is a special value. We use it rather than `int | None` because the workflow UI offers
+ # no way to set a field to None directly.
+ tile_size: int = InputField(default=0, multiple_of=8, description=FieldDescriptions.vae_tile_size)
fp32: bool = InputField(default=DEFAULT_PRECISION == torch.float32, description=FieldDescriptions.fp32)
@torch.no_grad()
@@ -53,9 +58,9 @@ class LatentsToImageInvocation(BaseInvocation, WithMetadata, WithBoard):
latents = context.tensors.load(self.latents.latents_name)
vae_info = context.models.load(self.vae.vae)
- assert isinstance(vae_info.model, (UNet2DConditionModel, AutoencoderKL, AutoencoderTiny))
+ assert isinstance(vae_info.model, (AutoencoderKL, AutoencoderTiny))
with set_seamless(vae_info.model, self.vae.seamless_axes), vae_info as vae:
- assert isinstance(vae, torch.nn.Module)
+ assert isinstance(vae, (AutoencoderKL, AutoencoderTiny))
latents = latents.to(vae.device)
if self.fp32:
vae.to(dtype=torch.float32)
@@ -87,10 +92,19 @@ class LatentsToImageInvocation(BaseInvocation, WithMetadata, WithBoard):
else:
vae.disable_tiling()
+ tiling_context = nullcontext()
+ if self.tile_size > 0:
+ tiling_context = patch_vae_tiling_params(
+ vae,
+ tile_sample_min_size=self.tile_size,
+ tile_latent_min_size=self.tile_size // LATENT_SCALE_FACTOR,
+ tile_overlap_factor=0.25,
+ )
+
# clear memory as vae decode can request a lot
TorchDevice.empty_cache()
- with torch.inference_mode():
+ with torch.inference_mode(), tiling_context:
# copied from diffusers pipeline
latents = latents / vae.config.scaling_factor
image = vae.decode(latents, return_dict=False)[0]
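
The `tile_size // LATENT_SCALE_FACTOR` conversion above is why the field is constrained to `multiple_of=8`. A worked example with illustrative numbers:

```python
# LATENT_SCALE_FACTOR is 8 for Stable Diffusion VAEs: each latent covers an
# 8x8 pixel patch, so a 512px tile in image space is 64 latents wide.
LATENT_SCALE_FACTOR = 8

tile_size = 512  # image-space tile edge in pixels; must be a multiple of 8
tile_latent_min_size = tile_size // LATENT_SCALE_FACTOR
assert tile_latent_min_size == 64
```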
diff --git a/invokeai/app/invocations/math.py b/invokeai/app/invocations/math.py
index dad000d411..5d3988031b 100644
--- a/invokeai/app/invocations/math.py
+++ b/invokeai/app/invocations/math.py
@@ -5,12 +5,11 @@ from typing import Literal
import numpy as np
from pydantic import ValidationInfo, field_validator
+from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation
from invokeai.app.invocations.fields import FieldDescriptions, InputField
from invokeai.app.invocations.primitives import FloatOutput, IntegerOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
-from .baseinvocation import BaseInvocation, invocation
-
@invocation("add", title="Add Integers", tags=["math", "add"], category="math", version="1.0.1")
class AddInvocation(BaseInvocation):
diff --git a/invokeai/app/invocations/metadata.py b/invokeai/app/invocations/metadata.py
index 9c7264a9bb..17b68ffc0b 100644
--- a/invokeai/app/invocations/metadata.py
+++ b/invokeai/app/invocations/metadata.py
@@ -14,8 +14,7 @@ from invokeai.app.invocations.fields import (
from invokeai.app.invocations.model import ModelIdentifierField
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.app.util.controlnet_utils import CONTROLNET_MODE_VALUES, CONTROLNET_RESIZE_VALUES
-
-from ...version import __version__
+from invokeai.version.invokeai_version import __version__
class MetadataItemField(BaseModel):
diff --git a/invokeai/app/invocations/model.py b/invokeai/app/invocations/model.py
index 94a6136fcb..c0d067c0a7 100644
--- a/invokeai/app/invocations/model.py
+++ b/invokeai/app/invocations/model.py
@@ -3,18 +3,17 @@ from typing import List, Optional
from pydantic import BaseModel, Field
-from invokeai.app.invocations.fields import FieldDescriptions, Input, InputField, OutputField, UIType
-from invokeai.app.services.shared.invocation_context import InvocationContext
-from invokeai.app.shared.models import FreeUConfig
-from invokeai.backend.model_manager.config import AnyModelConfig, BaseModelType, ModelType, SubModelType
-
-from .baseinvocation import (
+from invokeai.app.invocations.baseinvocation import (
BaseInvocation,
BaseInvocationOutput,
Classification,
invocation,
invocation_output,
)
+from invokeai.app.invocations.fields import FieldDescriptions, Input, InputField, OutputField, UIType
+from invokeai.app.services.shared.invocation_context import InvocationContext
+from invokeai.app.shared.models import FreeUConfig
+from invokeai.backend.model_manager.config import AnyModelConfig, BaseModelType, ModelType, SubModelType
class ModelIdentifierField(BaseModel):
diff --git a/invokeai/app/invocations/noise.py b/invokeai/app/invocations/noise.py
index 931e639106..1d3ff3a29c 100644
--- a/invokeai/app/invocations/noise.py
+++ b/invokeai/app/invocations/noise.py
@@ -4,18 +4,12 @@
import torch
from pydantic import field_validator
+from invokeai.app.invocations.baseinvocation import BaseInvocation, BaseInvocationOutput, invocation, invocation_output
from invokeai.app.invocations.constants import LATENT_SCALE_FACTOR
from invokeai.app.invocations.fields import FieldDescriptions, InputField, LatentsField, OutputField
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.app.util.misc import SEED_MAX
-
-from ...backend.util.devices import TorchDevice
-from .baseinvocation import (
- BaseInvocation,
- BaseInvocationOutput,
- invocation,
- invocation_output,
-)
+from invokeai.backend.util.devices import TorchDevice
"""
Utilities
diff --git a/invokeai/app/invocations/param_easing.py b/invokeai/app/invocations/param_easing.py
index 0e590f4e2b..3e785ef545 100644
--- a/invokeai/app/invocations/param_easing.py
+++ b/invokeai/app/invocations/param_easing.py
@@ -39,12 +39,11 @@ from easing_functions import (
)
from matplotlib.ticker import MaxNLocator
+from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation
+from invokeai.app.invocations.fields import InputField
from invokeai.app.invocations.primitives import FloatCollectionOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
-from .baseinvocation import BaseInvocation, invocation
-from .fields import InputField
-
@invocation(
"float_range",
diff --git a/invokeai/app/invocations/primitives.py b/invokeai/app/invocations/primitives.py
index 28f72fb377..e5056e3775 100644
--- a/invokeai/app/invocations/primitives.py
+++ b/invokeai/app/invocations/primitives.py
@@ -4,6 +4,7 @@ from typing import Optional
import torch
+from invokeai.app.invocations.baseinvocation import BaseInvocation, BaseInvocationOutput, invocation, invocation_output
from invokeai.app.invocations.constants import LATENT_SCALE_FACTOR
from invokeai.app.invocations.fields import (
ColorField,
@@ -21,13 +22,6 @@ from invokeai.app.invocations.fields import (
from invokeai.app.services.images.images_common import ImageDTO
from invokeai.app.services.shared.invocation_context import InvocationContext
-from .baseinvocation import (
- BaseInvocation,
- BaseInvocationOutput,
- invocation,
- invocation_output,
-)
-
"""
Primitives: Boolean, Integer, Float, String, Image, Latents, Conditioning, Color
- primitive nodes
diff --git a/invokeai/app/invocations/prompt.py b/invokeai/app/invocations/prompt.py
index 64a06d2f18..48eec0ac0e 100644
--- a/invokeai/app/invocations/prompt.py
+++ b/invokeai/app/invocations/prompt.py
@@ -5,12 +5,11 @@ import numpy as np
from dynamicprompts.generators import CombinatorialPromptGenerator, RandomPromptGenerator
from pydantic import field_validator
+from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation
+from invokeai.app.invocations.fields import InputField, UIComponent
from invokeai.app.invocations.primitives import StringCollectionOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
-from .baseinvocation import BaseInvocation, invocation
-from .fields import InputField, UIComponent
-
@invocation(
"dynamic_prompt",
diff --git a/invokeai/app/invocations/scheduler.py b/invokeai/app/invocations/scheduler.py
index 52af20378e..a870a442ef 100644
--- a/invokeai/app/invocations/scheduler.py
+++ b/invokeai/app/invocations/scheduler.py
@@ -1,5 +1,4 @@
from invokeai.app.invocations.baseinvocation import BaseInvocation, BaseInvocationOutput, invocation, invocation_output
-from invokeai.app.invocations.constants import SCHEDULER_NAME_VALUES
from invokeai.app.invocations.fields import (
FieldDescriptions,
InputField,
@@ -7,6 +6,7 @@ from invokeai.app.invocations.fields import (
UIType,
)
from invokeai.app.services.shared.invocation_context import InvocationContext
+from invokeai.backend.stable_diffusion.schedulers.schedulers import SCHEDULER_NAME_VALUES
@invocation_output("scheduler_output")
diff --git a/invokeai/app/invocations/sdxl.py b/invokeai/app/invocations/sdxl.py
index 1c0817cb92..8eed158a61 100644
--- a/invokeai/app/invocations/sdxl.py
+++ b/invokeai/app/invocations/sdxl.py
@@ -1,15 +1,9 @@
+from invokeai.app.invocations.baseinvocation import BaseInvocation, BaseInvocationOutput, invocation, invocation_output
from invokeai.app.invocations.fields import FieldDescriptions, InputField, OutputField, UIType
+from invokeai.app.invocations.model import CLIPField, ModelIdentifierField, UNetField, VAEField
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.model_manager import SubModelType
-from .baseinvocation import (
- BaseInvocation,
- BaseInvocationOutput,
- invocation,
- invocation_output,
-)
-from .model import CLIPField, ModelIdentifierField, UNetField, VAEField
-
@invocation_output("sdxl_model_loader_output")
class SDXLModelLoaderOutput(BaseInvocationOutput):
diff --git a/invokeai/app/invocations/spandrel_image_to_image.py b/invokeai/app/invocations/spandrel_image_to_image.py
new file mode 100644
index 0000000000..bbe31af644
--- /dev/null
+++ b/invokeai/app/invocations/spandrel_image_to_image.py
@@ -0,0 +1,144 @@
+import numpy as np
+import torch
+from PIL import Image
+from tqdm import tqdm
+
+from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation
+from invokeai.app.invocations.fields import (
+ FieldDescriptions,
+ ImageField,
+ InputField,
+ UIType,
+ WithBoard,
+ WithMetadata,
+)
+from invokeai.app.invocations.model import ModelIdentifierField
+from invokeai.app.invocations.primitives import ImageOutput
+from invokeai.app.services.session_processor.session_processor_common import CanceledException
+from invokeai.app.services.shared.invocation_context import InvocationContext
+from invokeai.backend.spandrel_image_to_image_model import SpandrelImageToImageModel
+from invokeai.backend.tiles.tiles import calc_tiles_min_overlap
+from invokeai.backend.tiles.utils import TBLR, Tile
+
+
+@invocation("spandrel_image_to_image", title="Image-to-Image", tags=["upscale"], category="upscale", version="1.1.0")
+class SpandrelImageToImageInvocation(BaseInvocation, WithMetadata, WithBoard):
+ """Run any spandrel image-to-image model (https://github.com/chaiNNer-org/spandrel)."""
+
+ image: ImageField = InputField(description="The input image")
+ image_to_image_model: ModelIdentifierField = InputField(
+ title="Image-to-Image Model",
+ description=FieldDescriptions.spandrel_image_to_image_model,
+ ui_type=UIType.SpandrelImageToImageModel,
+ )
+ tile_size: int = InputField(
+ default=512, description="The tile size for tiled image-to-image. Set to 0 to disable tiling."
+ )
+
+ def _scale_tile(self, tile: Tile, scale: int) -> Tile:
+ return Tile(
+ coords=TBLR(
+ top=tile.coords.top * scale,
+ bottom=tile.coords.bottom * scale,
+ left=tile.coords.left * scale,
+ right=tile.coords.right * scale,
+ ),
+ overlap=TBLR(
+ top=tile.overlap.top * scale,
+ bottom=tile.overlap.bottom * scale,
+ left=tile.overlap.left * scale,
+ right=tile.overlap.right * scale,
+ ),
+ )
+
+ @torch.inference_mode()
+ def invoke(self, context: InvocationContext) -> ImageOutput:
+ # The image is converted to RGB because most models don't support an alpha channel. We may want to
+ # revisit this in the future.
+ image = context.images.get_pil(self.image.image_name, mode="RGB")
+
+ # Compute the image tiles.
+ if self.tile_size > 0:
+ min_overlap = 20
+ tiles = calc_tiles_min_overlap(
+ image_height=image.height,
+ image_width=image.width,
+ tile_height=self.tile_size,
+ tile_width=self.tile_size,
+ min_overlap=min_overlap,
+ )
+ else:
+ # No tiling. Generate a single tile that covers the entire image.
+ min_overlap = 0
+ tiles = [
+ Tile(
+ coords=TBLR(top=0, bottom=image.height, left=0, right=image.width),
+ overlap=TBLR(top=0, bottom=0, left=0, right=0),
+ )
+ ]
+
+ # Sort tiles first by left x coordinate, then by top y coordinate. Python's sort is stable, so the second
+ # sort makes the top coordinate the primary key: tiles are processed left-to-right, top-to-bottom.
+ tiles = sorted(tiles, key=lambda x: x.coords.left)
+ tiles = sorted(tiles, key=lambda x: x.coords.top)
+
+ # Prepare input image for inference.
+ image_tensor = SpandrelImageToImageModel.pil_to_tensor(image)
+
+ # Load the model.
+ spandrel_model_info = context.models.load(self.image_to_image_model)
+
+ # Run the model on each tile.
+ with spandrel_model_info as spandrel_model:
+ assert isinstance(spandrel_model, SpandrelImageToImageModel)
+
+ # Scale the tiles for re-assembling the final image.
+ scale = spandrel_model.scale
+ scaled_tiles = [self._scale_tile(tile, scale=scale) for tile in tiles]
+
+ # Prepare the output tensor.
+ _, channels, height, width = image_tensor.shape
+ output_tensor = torch.zeros(
+ (height * scale, width * scale, channels), dtype=torch.uint8, device=torch.device("cpu")
+ )
+
+ image_tensor = image_tensor.to(device=spandrel_model.device, dtype=spandrel_model.dtype)
+
+ for tile, scaled_tile in tqdm(list(zip(tiles, scaled_tiles, strict=True)), desc="Upscaling Tiles"):
+ # Exit early if the invocation has been canceled.
+ if context.util.is_canceled():
+ raise CanceledException
+
+ # Extract the current tile from the input tensor.
+ input_tile = image_tensor[
+ :, :, tile.coords.top : tile.coords.bottom, tile.coords.left : tile.coords.right
+ ].to(device=spandrel_model.device, dtype=spandrel_model.dtype)
+
+ # Run the model on the tile.
+ output_tile = spandrel_model.run(input_tile)
+
+ # Convert the output tile into the output tensor's format.
+ # (N, C, H, W) -> (C, H, W)
+ output_tile = output_tile.squeeze(0)
+ # (C, H, W) -> (H, W, C)
+ output_tile = output_tile.permute(1, 2, 0)
+ output_tile = output_tile.clamp(0, 1)
+ output_tile = (output_tile * 255).to(dtype=torch.uint8, device=torch.device("cpu"))
+
+ # Merge the output tile into the output tensor.
+ # We keep only half of the overlap on the top and left sides of the tile, in case the model produces
+ # artifacts near tile edges. We don't do any 'blending' in the current implementation; it seems
+ # unnecessary for most upscalers, but we may add it in the future.
+ top_overlap = scaled_tile.overlap.top // 2
+ left_overlap = scaled_tile.overlap.left // 2
+ output_tensor[
+ scaled_tile.coords.top + top_overlap : scaled_tile.coords.bottom,
+ scaled_tile.coords.left + left_overlap : scaled_tile.coords.right,
+ :,
+ ] = output_tile[top_overlap:, left_overlap:, :]
+
+ # Convert the output tensor to a PIL image.
+ np_image = output_tensor.detach().numpy().astype(np.uint8)
+ pil_image = Image.fromarray(np_image)
+ image_dto = context.images.save(image=pil_image)
+ return ImageOutput.build(image_dto)
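
The merge rule in the loop above is easiest to see in one dimension: each tile after the first skips the first half of its overlap, so the previous tile keeps the other half and the seam lands mid-overlap. A small self-contained sketch with illustrative sizes:

```python
import numpy as np

# 1-D sketch of the half-overlap merge: tiles are pasted left-to-right, and
# each later tile skips the first half of its overlap with the previous tile.
image = np.arange(12)
tile_a, tile_b = image[0:8], image[4:12]  # 4 elements of overlap
overlap = 4

out = np.zeros_like(image)
out[0:8] = tile_a                 # first tile: no left overlap to trim
half = overlap // 2
out[4 + half : 12] = tile_b[half:]  # later tile: discard half the overlap

assert (out == image).all()
```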
diff --git a/invokeai/app/invocations/strings.py b/invokeai/app/invocations/strings.py
index 46ef35cbbf..2b6bf300b9 100644
--- a/invokeai/app/invocations/strings.py
+++ b/invokeai/app/invocations/strings.py
@@ -2,17 +2,11 @@
import re
+from invokeai.app.invocations.baseinvocation import BaseInvocation, BaseInvocationOutput, invocation, invocation_output
+from invokeai.app.invocations.fields import InputField, OutputField, UIComponent
+from invokeai.app.invocations.primitives import StringOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
-from .baseinvocation import (
- BaseInvocation,
- BaseInvocationOutput,
- invocation,
- invocation_output,
-)
-from .fields import InputField, OutputField, UIComponent
-from .primitives import StringOutput
-
@invocation_output("string_pos_neg_output")
class StringPosNegOutput(BaseInvocationOutput):
diff --git a/invokeai/app/invocations/tiled_multi_diffusion_denoise_latents.py b/invokeai/app/invocations/tiled_multi_diffusion_denoise_latents.py
index 2566fd2551..409171794e 100644
--- a/invokeai/app/invocations/tiled_multi_diffusion_denoise_latents.py
+++ b/invokeai/app/invocations/tiled_multi_diffusion_denoise_latents.py
@@ -8,7 +8,7 @@ from diffusers.schedulers.scheduling_utils import SchedulerMixin
from pydantic import field_validator
from invokeai.app.invocations.baseinvocation import BaseInvocation, Classification, invocation
-from invokeai.app.invocations.constants import LATENT_SCALE_FACTOR, SCHEDULER_NAME_VALUES
+from invokeai.app.invocations.constants import LATENT_SCALE_FACTOR
from invokeai.app.invocations.controlnet_image_processors import ControlField
from invokeai.app.invocations.denoise_latents import DenoiseLatentsInvocation, get_scheduler
from invokeai.app.invocations.fields import (
@@ -29,6 +29,7 @@ from invokeai.backend.stable_diffusion.multi_diffusion_pipeline import (
MultiDiffusionPipeline,
MultiDiffusionRegionConditioning,
)
+from invokeai.backend.stable_diffusion.schedulers.schedulers import SCHEDULER_NAME_VALUES
from invokeai.backend.tiles.tiles import (
calc_tiles_min_overlap,
)
@@ -174,6 +175,10 @@ class TiledMultiDiffusionDenoiseLatents(BaseInvocation):
_, _, latent_height, latent_width = latents.shape
# Calculate the tile locations to cover the latent-space image.
+ # TODO(ryand): In the future, we may want to revisit the tile overlap strategy. Things to consider:
+ # - How much overlap 'context' to provide for each denoising step.
+ # - How much overlap to use during merging/blending.
+ # - Should we 'jitter' the tile locations in each step so that the seams are in different places?
tiles = calc_tiles_min_overlap(
image_height=latent_height,
image_width=latent_width,
@@ -217,7 +222,8 @@ class TiledMultiDiffusionDenoiseLatents(BaseInvocation):
context=context,
positive_conditioning_field=self.positive_conditioning,
negative_conditioning_field=self.negative_conditioning,
- unet=unet,
+ device=unet.device,
+ dtype=unet.dtype,
latent_height=latent_tile_height,
latent_width=latent_tile_width,
cfg_scale=self.cfg_scale,
diff --git a/invokeai/app/invocations/upscale.py b/invokeai/app/invocations/upscale.py
index f93060f8d3..e7b3968aec 100644
--- a/invokeai/app/invocations/upscale.py
+++ b/invokeai/app/invocations/upscale.py
@@ -6,15 +6,13 @@ import numpy as np
from PIL import Image
from pydantic import ConfigDict
-from invokeai.app.invocations.fields import ImageField
+from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation
+from invokeai.app.invocations.fields import ImageField, InputField, WithBoard, WithMetadata
from invokeai.app.invocations.primitives import ImageOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.image_util.basicsr.rrdbnet_arch import RRDBNet
from invokeai.backend.image_util.realesrgan.realesrgan import RealESRGAN
-from .baseinvocation import BaseInvocation, invocation
-from .fields import InputField, WithBoard, WithMetadata
-
# TODO: Populate this from disk?
# TODO: Use model manager to load?
ESRGAN_MODELS = Literal[
diff --git a/invokeai/app/services/board_image_records/board_image_records_sqlite.py b/invokeai/app/services/board_image_records/board_image_records_sqlite.py
index cde810a739..33ac76b06f 100644
--- a/invokeai/app/services/board_image_records/board_image_records_sqlite.py
+++ b/invokeai/app/services/board_image_records/board_image_records_sqlite.py
@@ -2,12 +2,11 @@ import sqlite3
import threading
from typing import Optional, cast
+from invokeai.app.services.board_image_records.board_image_records_base import BoardImageRecordStorageBase
from invokeai.app.services.image_records.image_records_common import ImageRecord, deserialize_image_record
from invokeai.app.services.shared.pagination import OffsetPaginatedResults
from invokeai.app.services.shared.sqlite.sqlite_database import SqliteDatabase
-from .board_image_records_base import BoardImageRecordStorageBase
-
class SqliteBoardImageRecordStorage(BoardImageRecordStorageBase):
_conn: sqlite3.Connection
diff --git a/invokeai/app/services/board_images/board_images_default.py b/invokeai/app/services/board_images/board_images_default.py
index 85e478619c..6a564f5a91 100644
--- a/invokeai/app/services/board_images/board_images_default.py
+++ b/invokeai/app/services/board_images/board_images_default.py
@@ -1,9 +1,8 @@
from typing import Optional
+from invokeai.app.services.board_images.board_images_base import BoardImagesServiceABC
from invokeai.app.services.invoker import Invoker
-from .board_images_base import BoardImagesServiceABC
-
class BoardImagesService(BoardImagesServiceABC):
__invoker: Invoker
diff --git a/invokeai/app/services/board_records/board_records_base.py b/invokeai/app/services/board_records/board_records_base.py
index 9d065b3750..9d16dacf60 100644
--- a/invokeai/app/services/board_records/board_records_base.py
+++ b/invokeai/app/services/board_records/board_records_base.py
@@ -1,9 +1,8 @@
from abc import ABC, abstractmethod
+from invokeai.app.services.board_records.board_records_common import BoardChanges, BoardRecord
from invokeai.app.services.shared.pagination import OffsetPaginatedResults
-from .board_records_common import BoardChanges, BoardRecord
-
class BoardRecordStorageBase(ABC):
"""Low-level service responsible for interfacing with the board record store."""
diff --git a/invokeai/app/services/board_records/board_records_common.py b/invokeai/app/services/board_records/board_records_common.py
index d763480a9f..0dda8a8b6b 100644
--- a/invokeai/app/services/board_records/board_records_common.py
+++ b/invokeai/app/services/board_records/board_records_common.py
@@ -24,6 +24,8 @@ class BoardRecord(BaseModelExcludeNull):
"""The name of the cover image of the board."""
archived: bool = Field(description="Whether or not the board is archived.")
"""Whether or not the board is archived."""
+ is_private: Optional[bool] = Field(default=None, description="Whether the board is private.")
+ """Whether the board is private."""
def deserialize_board_record(board_dict: dict) -> BoardRecord:
@@ -38,6 +40,7 @@ def deserialize_board_record(board_dict: dict) -> BoardRecord:
updated_at = board_dict.get("updated_at", get_iso_timestamp())
deleted_at = board_dict.get("deleted_at", get_iso_timestamp())
archived = board_dict.get("archived", False)
+ is_private = board_dict.get("is_private", False)
return BoardRecord(
board_id=board_id,
@@ -47,6 +50,7 @@ def deserialize_board_record(board_dict: dict) -> BoardRecord:
updated_at=updated_at,
deleted_at=deleted_at,
archived=archived,
+ is_private=is_private,
)
diff --git a/invokeai/app/services/board_records/board_records_sqlite.py b/invokeai/app/services/board_records/board_records_sqlite.py
index 9d81e2f1e7..c64e060b95 100644
--- a/invokeai/app/services/board_records/board_records_sqlite.py
+++ b/invokeai/app/services/board_records/board_records_sqlite.py
@@ -2,12 +2,8 @@ import sqlite3
import threading
from typing import Union, cast
-from invokeai.app.services.shared.pagination import OffsetPaginatedResults
-from invokeai.app.services.shared.sqlite.sqlite_database import SqliteDatabase
-from invokeai.app.util.misc import uuid_string
-
-from .board_records_base import BoardRecordStorageBase
-from .board_records_common import (
+from invokeai.app.services.board_records.board_records_base import BoardRecordStorageBase
+from invokeai.app.services.board_records.board_records_common import (
BoardChanges,
BoardRecord,
BoardRecordDeleteException,
@@ -15,6 +11,9 @@ from .board_records_common import (
BoardRecordSaveException,
deserialize_board_record,
)
+from invokeai.app.services.shared.pagination import OffsetPaginatedResults
+from invokeai.app.services.shared.sqlite.sqlite_database import SqliteDatabase
+from invokeai.app.util.misc import uuid_string
class SqliteBoardRecordStorage(BoardRecordStorageBase):
diff --git a/invokeai/app/services/boards/boards_base.py b/invokeai/app/services/boards/boards_base.py
index fddd5a7954..3c2d148cb9 100644
--- a/invokeai/app/services/boards/boards_base.py
+++ b/invokeai/app/services/boards/boards_base.py
@@ -1,10 +1,9 @@
from abc import ABC, abstractmethod
from invokeai.app.services.board_records.board_records_common import BoardChanges
+from invokeai.app.services.boards.boards_common import BoardDTO
from invokeai.app.services.shared.pagination import OffsetPaginatedResults
-from .boards_common import BoardDTO
-
class BoardServiceABC(ABC):
"""High-level service for board management."""
diff --git a/invokeai/app/services/boards/boards_common.py b/invokeai/app/services/boards/boards_common.py
index 0cb54102bb..15d0b3c37f 100644
--- a/invokeai/app/services/boards/boards_common.py
+++ b/invokeai/app/services/boards/boards_common.py
@@ -2,7 +2,7 @@ from typing import Optional
from pydantic import Field
-from ..board_records.board_records_common import BoardRecord
+from invokeai.app.services.board_records.board_records_common import BoardRecord
class BoardDTO(BoardRecord):
diff --git a/invokeai/app/services/boards/boards_default.py b/invokeai/app/services/boards/boards_default.py
index 6457aee1d2..97fd3059a9 100644
--- a/invokeai/app/services/boards/boards_default.py
+++ b/invokeai/app/services/boards/boards_default.py
@@ -1,11 +1,9 @@
from invokeai.app.services.board_records.board_records_common import BoardChanges
-from invokeai.app.services.boards.boards_common import BoardDTO
+from invokeai.app.services.boards.boards_base import BoardServiceABC
+from invokeai.app.services.boards.boards_common import BoardDTO, board_record_to_dto
from invokeai.app.services.invoker import Invoker
from invokeai.app.services.shared.pagination import OffsetPaginatedResults
-from .boards_base import BoardServiceABC
-from .boards_common import board_record_to_dto
-
class BoardService(BoardServiceABC):
__invoker: Invoker
diff --git a/invokeai/app/services/bulk_download/bulk_download_default.py b/invokeai/app/services/bulk_download/bulk_download_default.py
index d4bf059b8f..4ebbd10d4f 100644
--- a/invokeai/app/services/bulk_download/bulk_download_default.py
+++ b/invokeai/app/services/bulk_download/bulk_download_default.py
@@ -4,6 +4,7 @@ from typing import Optional, Union
from zipfile import ZipFile
from invokeai.app.services.board_records.board_records_common import BoardRecordNotFoundException
+from invokeai.app.services.bulk_download.bulk_download_base import BulkDownloadBase
from invokeai.app.services.bulk_download.bulk_download_common import (
DEFAULT_BULK_DOWNLOAD_ID,
BulkDownloadException,
@@ -15,8 +16,6 @@ from invokeai.app.services.images.images_common import ImageDTO
from invokeai.app.services.invoker import Invoker
from invokeai.app.util.misc import uuid_string
-from .bulk_download_base import BulkDownloadBase
-
class BulkDownloadService(BulkDownloadBase):
def start(self, invoker: Invoker) -> None:
diff --git a/invokeai/app/services/config/__init__.py b/invokeai/app/services/config/__init__.py
index 126692f08a..df1acbf104 100644
--- a/invokeai/app/services/config/__init__.py
+++ b/invokeai/app/services/config/__init__.py
@@ -1,7 +1,6 @@
"""Init file for InvokeAI configure package."""
from invokeai.app.services.config.config_common import PagingArgumentParser
-
-from .config_default import InvokeAIAppConfig, get_config
+from invokeai.app.services.config.config_default import InvokeAIAppConfig, get_config
__all__ = ["InvokeAIAppConfig", "get_config", "PagingArgumentParser"]
diff --git a/invokeai/app/services/download/__init__.py b/invokeai/app/services/download/__init__.py
index 33b0025809..48ded7d549 100644
--- a/invokeai/app/services/download/__init__.py
+++ b/invokeai/app/services/download/__init__.py
@@ -1,13 +1,13 @@
"""Init file for download queue."""
-from .download_base import (
+from invokeai.app.services.download.download_base import (
DownloadJob,
DownloadJobStatus,
DownloadQueueServiceBase,
MultiFileDownloadJob,
UnknownJobIDException,
)
-from .download_default import DownloadQueueService, TqdmProgress
+from invokeai.app.services.download.download_default import DownloadQueueService, TqdmProgress
__all__ = [
"DownloadJob",
diff --git a/invokeai/app/services/download/download_default.py b/invokeai/app/services/download/download_default.py
index f6c7c1a1a0..b97f61657c 100644
--- a/invokeai/app/services/download/download_default.py
+++ b/invokeai/app/services/download/download_default.py
@@ -16,12 +16,7 @@ from requests import HTTPError
from tqdm import tqdm
from invokeai.app.services.config import InvokeAIAppConfig, get_config
-from invokeai.app.services.events.events_base import EventServiceBase
-from invokeai.app.util.misc import get_iso_timestamp
-from invokeai.backend.model_manager.metadata import RemoteModelFile
-from invokeai.backend.util.logging import InvokeAILogger
-
-from .download_base import (
+from invokeai.app.services.download.download_base import (
DownloadEventHandler,
DownloadExceptionHandler,
DownloadJob,
@@ -33,6 +28,10 @@ from .download_base import (
ServiceInactiveException,
UnknownJobIDException,
)
+from invokeai.app.services.events.events_base import EventServiceBase
+from invokeai.app.util.misc import get_iso_timestamp
+from invokeai.backend.model_manager.metadata import RemoteModelFile
+from invokeai.backend.util.logging import InvokeAILogger
# Maximum number of bytes to download during each call to requests.iter_content()
DOWNLOAD_CHUNK_SIZE = 100000
@@ -185,7 +184,7 @@ class DownloadQueueService(DownloadQueueServiceBase):
job = DownloadJob(
source=url,
dest=path,
- access_token=access_token,
+ access_token=access_token or self._lookup_access_token(url),
)
mfdj.download_parts.add(job)
self._download_part2parent[job.source] = mfdj
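
The `access_token or self._lookup_access_token(url)` change gives each download part a token fallback: an explicit token wins, otherwise a per-URL lookup is consulted. A sketch of that resolution order, where the lookup table is a hypothetical stand-in for the service's `_lookup_access_token`:

```python
def lookup_access_token(url: str) -> str | None:
    # Illustrative per-URL-prefix token table; the real lookup is opaque here.
    tokens = {"https://huggingface.co/": "hf_example_token"}
    for prefix, token in tokens.items():
        if url.startswith(prefix):
            return token
    return None


def resolve_token(url: str, access_token: str | None) -> str | None:
    # An explicitly supplied token takes precedence over the lookup.
    return access_token or lookup_access_token(url)


assert resolve_token("https://huggingface.co/org/model", None) == "hf_example_token"
assert resolve_token("https://huggingface.co/org/model", "explicit") == "explicit"
assert resolve_token("https://example.com/file.bin", None) is None
```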
diff --git a/invokeai/app/services/events/events_fastapievents.py b/invokeai/app/services/events/events_fastapievents.py
index 8279d3bb34..d514a06b67 100644
--- a/invokeai/app/services/events/events_fastapievents.py
+++ b/invokeai/app/services/events/events_fastapievents.py
@@ -6,12 +6,11 @@ from queue import Empty, Queue
from fastapi_events.dispatcher import dispatch
+from invokeai.app.services.events.events_base import EventServiceBase
from invokeai.app.services.events.events_common import (
EventBase,
)
-from .events_base import EventServiceBase
-
class FastAPIEventService(EventServiceBase):
def __init__(self, event_handler_id: int) -> None:
diff --git a/invokeai/app/services/image_files/image_files_disk.py b/invokeai/app/services/image_files/image_files_disk.py
index 15d0be31f8..95ab052520 100644
--- a/invokeai/app/services/image_files/image_files_disk.py
+++ b/invokeai/app/services/image_files/image_files_disk.py
@@ -7,12 +7,15 @@ from PIL import Image, PngImagePlugin
from PIL.Image import Image as PILImageType
from send2trash import send2trash
+from invokeai.app.services.image_files.image_files_base import ImageFileStorageBase
+from invokeai.app.services.image_files.image_files_common import (
+ ImageFileDeleteException,
+ ImageFileNotFoundException,
+ ImageFileSaveException,
+)
from invokeai.app.services.invoker import Invoker
from invokeai.app.util.thumbnails import get_thumbnail_name, make_thumbnail
-from .image_files_base import ImageFileStorageBase
-from .image_files_common import ImageFileDeleteException, ImageFileNotFoundException, ImageFileSaveException
-
class DiskImageFileStorage(ImageFileStorageBase):
"""Stores images on disk"""
diff --git a/invokeai/app/services/image_records/image_records_base.py b/invokeai/app/services/image_records/image_records_base.py
index 23aa30a1c3..1211c9762c 100644
--- a/invokeai/app/services/image_records/image_records_base.py
+++ b/invokeai/app/services/image_records/image_records_base.py
@@ -3,11 +3,15 @@ from datetime import datetime
from typing import Optional
from invokeai.app.invocations.fields import MetadataField
+from invokeai.app.services.image_records.image_records_common import (
+ ImageCategory,
+ ImageRecord,
+ ImageRecordChanges,
+ ResourceOrigin,
+)
from invokeai.app.services.shared.pagination import OffsetPaginatedResults
from invokeai.app.services.shared.sqlite.sqlite_common import SQLiteDirection
-from .image_records_common import ImageCategory, ImageRecord, ImageRecordChanges, ResourceOrigin
-
class ImageRecordStorageBase(ABC):
"""Low-level service responsible for interfacing with the image record store."""
@@ -44,6 +48,7 @@ class ImageRecordStorageBase(ABC):
categories: Optional[list[ImageCategory]] = None,
is_intermediate: Optional[bool] = None,
board_id: Optional[str] = None,
+ search_term: Optional[str] = None,
) -> OffsetPaginatedResults[ImageRecord]:
"""Gets a page of image records."""
pass
diff --git a/invokeai/app/services/image_records/image_records_sqlite.py b/invokeai/app/services/image_records/image_records_sqlite.py
index 2aab844f46..b0c2155a18 100644
--- a/invokeai/app/services/image_records/image_records_sqlite.py
+++ b/invokeai/app/services/image_records/image_records_sqlite.py
@@ -4,12 +4,8 @@ from datetime import datetime
from typing import Optional, Union, cast
from invokeai.app.invocations.fields import MetadataField, MetadataFieldValidator
-from invokeai.app.services.shared.pagination import OffsetPaginatedResults
-from invokeai.app.services.shared.sqlite.sqlite_common import SQLiteDirection
-from invokeai.app.services.shared.sqlite.sqlite_database import SqliteDatabase
-
-from .image_records_base import ImageRecordStorageBase
-from .image_records_common import (
+from invokeai.app.services.image_records.image_records_base import ImageRecordStorageBase
+from invokeai.app.services.image_records.image_records_common import (
IMAGE_DTO_COLS,
ImageCategory,
ImageRecord,
@@ -20,6 +16,9 @@ from .image_records_common import (
ResourceOrigin,
deserialize_image_record,
)
+from invokeai.app.services.shared.pagination import OffsetPaginatedResults
+from invokeai.app.services.shared.sqlite.sqlite_common import SQLiteDirection
+from invokeai.app.services.shared.sqlite.sqlite_database import SqliteDatabase
class SqliteImageRecordStorage(ImageRecordStorageBase):
@@ -151,6 +150,7 @@ class SqliteImageRecordStorage(ImageRecordStorageBase):
categories: Optional[list[ImageCategory]] = None,
is_intermediate: Optional[bool] = None,
board_id: Optional[str] = None,
+ search_term: Optional[str] = None,
) -> OffsetPaginatedResults[ImageRecord]:
try:
self._lock.acquire()
@@ -211,6 +211,13 @@ class SqliteImageRecordStorage(ImageRecordStorageBase):
"""
query_params.append(board_id)
+ # Search term condition
+ if search_term:
+ query_conditions += """--sql
+ AND images.metadata LIKE ?
+ """
+ query_params.append(f"%{search_term.lower()}%")
+
if starred_first:
query_pagination = f"""--sql
ORDER BY images.starred DESC, images.created_at {order_dir.value} LIMIT ? OFFSET ?
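
The new `search_term` filter binds the term as a query parameter and wraps it in `%` wildcards, so user input is never spliced into the SQL string. A minimal sketch of the same pattern against an in-memory database; the table mirrors the column names above, but the row data is illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE images (image_name TEXT, metadata TEXT)")
conn.execute(
    "INSERT INTO images VALUES (?, ?)",
    ("a.png", '{"positive_prompt": "a red fox"}'),
)

# SQLite's LIKE is already case-insensitive for ASCII, so lowering the term
# is belt-and-braces rather than load-bearing.
search_term = "Red Fox"
rows = conn.execute(
    "SELECT image_name FROM images WHERE metadata LIKE ?",
    (f"%{search_term.lower()}%",),
).fetchall()
assert rows == [("a.png",)]
```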
diff --git a/invokeai/app/services/images/images_base.py b/invokeai/app/services/images/images_base.py
index e07a6cc468..5328c1854e 100644
--- a/invokeai/app/services/images/images_base.py
+++ b/invokeai/app/services/images/images_base.py
@@ -123,6 +123,7 @@ class ImageServiceABC(ABC):
categories: Optional[list[ImageCategory]] = None,
is_intermediate: Optional[bool] = None,
board_id: Optional[str] = None,
+ search_term: Optional[str] = None,
) -> OffsetPaginatedResults[ImageDTO]:
"""Gets a paginated list of image DTOs."""
pass
diff --git a/invokeai/app/services/images/images_default.py b/invokeai/app/services/images/images_default.py
index 0d792157b8..15d950bab8 100644
--- a/invokeai/app/services/images/images_default.py
+++ b/invokeai/app/services/images/images_default.py
@@ -3,16 +3,12 @@ from typing import Optional
from PIL.Image import Image as PILImageType
from invokeai.app.invocations.fields import MetadataField
-from invokeai.app.services.invoker import Invoker
-from invokeai.app.services.shared.pagination import OffsetPaginatedResults
-from invokeai.app.services.shared.sqlite.sqlite_common import SQLiteDirection
-
-from ..image_files.image_files_common import (
+from invokeai.app.services.image_files.image_files_common import (
ImageFileDeleteException,
ImageFileNotFoundException,
ImageFileSaveException,
)
-from ..image_records.image_records_common import (
+from invokeai.app.services.image_records.image_records_common import (
ImageCategory,
ImageRecord,
ImageRecordChanges,
@@ -23,8 +19,11 @@ from ..image_records.image_records_common import (
InvalidOriginException,
ResourceOrigin,
)
-from .images_base import ImageServiceABC
-from .images_common import ImageDTO, image_record_to_dto
+from invokeai.app.services.images.images_base import ImageServiceABC
+from invokeai.app.services.images.images_common import ImageDTO, image_record_to_dto
+from invokeai.app.services.invoker import Invoker
+from invokeai.app.services.shared.pagination import OffsetPaginatedResults
+from invokeai.app.services.shared.sqlite.sqlite_common import SQLiteDirection
class ImageService(ImageServiceABC):
@@ -214,6 +213,7 @@ class ImageService(ImageServiceABC):
categories: Optional[list[ImageCategory]] = None,
is_intermediate: Optional[bool] = None,
board_id: Optional[str] = None,
+ search_term: Optional[str] = None,
) -> OffsetPaginatedResults[ImageDTO]:
try:
results = self.__invoker.services.image_records.get_many(
@@ -225,6 +225,7 @@ class ImageService(ImageServiceABC):
categories,
is_intermediate,
board_id,
+ search_term,
)
image_dtos = [
diff --git a/invokeai/app/services/invocation_services.py b/invokeai/app/services/invocation_services.py
index f4fce6098f..90ca613074 100644
--- a/invokeai/app/services/invocation_services.py
+++ b/invokeai/app/services/invocation_services.py
@@ -10,29 +10,28 @@ if TYPE_CHECKING:
import torch
+ from invokeai.app.services.board_image_records.board_image_records_base import BoardImageRecordStorageBase
+ from invokeai.app.services.board_images.board_images_base import BoardImagesServiceABC
+ from invokeai.app.services.board_records.board_records_base import BoardRecordStorageBase
+ from invokeai.app.services.boards.boards_base import BoardServiceABC
+ from invokeai.app.services.bulk_download.bulk_download_base import BulkDownloadBase
+ from invokeai.app.services.config import InvokeAIAppConfig
+ from invokeai.app.services.download import DownloadQueueServiceBase
+ from invokeai.app.services.events.events_base import EventServiceBase
+ from invokeai.app.services.image_files.image_files_base import ImageFileStorageBase
+ from invokeai.app.services.image_records.image_records_base import ImageRecordStorageBase
+ from invokeai.app.services.images.images_base import ImageServiceABC
+ from invokeai.app.services.invocation_cache.invocation_cache_base import InvocationCacheBase
+ from invokeai.app.services.invocation_stats.invocation_stats_base import InvocationStatsServiceBase
+ from invokeai.app.services.model_images.model_images_base import ModelImageFileStorageBase
+ from invokeai.app.services.model_manager.model_manager_base import ModelManagerServiceBase
+ from invokeai.app.services.names.names_base import NameServiceBase
+ from invokeai.app.services.session_processor.session_processor_base import SessionProcessorBase
+ from invokeai.app.services.session_queue.session_queue_base import SessionQueueBase
+ from invokeai.app.services.urls.urls_base import UrlServiceBase
+ from invokeai.app.services.workflow_records.workflow_records_base import WorkflowRecordsStorageBase
from invokeai.backend.stable_diffusion.diffusion.conditioning_data import ConditioningFieldData
- from .board_image_records.board_image_records_base import BoardImageRecordStorageBase
- from .board_images.board_images_base import BoardImagesServiceABC
- from .board_records.board_records_base import BoardRecordStorageBase
- from .boards.boards_base import BoardServiceABC
- from .bulk_download.bulk_download_base import BulkDownloadBase
- from .config import InvokeAIAppConfig
- from .download import DownloadQueueServiceBase
- from .events.events_base import EventServiceBase
- from .image_files.image_files_base import ImageFileStorageBase
- from .image_records.image_records_base import ImageRecordStorageBase
- from .images.images_base import ImageServiceABC
- from .invocation_cache.invocation_cache_base import InvocationCacheBase
- from .invocation_stats.invocation_stats_base import InvocationStatsServiceBase
- from .model_images.model_images_base import ModelImageFileStorageBase
- from .model_manager.model_manager_base import ModelManagerServiceBase
- from .names.names_base import NameServiceBase
- from .session_processor.session_processor_base import SessionProcessorBase
- from .session_queue.session_queue_base import SessionQueueBase
- from .urls.urls_base import UrlServiceBase
- from .workflow_records.workflow_records_base import WorkflowRecordsStorageBase
-
class InvocationServices:
"""Services that can be used by invocations"""
diff --git a/invokeai/app/services/invocation_stats/invocation_stats_default.py b/invokeai/app/services/invocation_stats/invocation_stats_default.py
index 5a41f1f5d6..5533657dc7 100644
--- a/invokeai/app/services/invocation_stats/invocation_stats_default.py
+++ b/invokeai/app/services/invocation_stats/invocation_stats_default.py
@@ -9,11 +9,8 @@ import torch
import invokeai.backend.util.logging as logger
from invokeai.app.invocations.baseinvocation import BaseInvocation
-from invokeai.app.services.invoker import Invoker
-from invokeai.backend.model_manager.load.model_cache import CacheStats
-
-from .invocation_stats_base import InvocationStatsServiceBase
-from .invocation_stats_common import (
+from invokeai.app.services.invocation_stats.invocation_stats_base import InvocationStatsServiceBase
+from invokeai.app.services.invocation_stats.invocation_stats_common import (
GESStatsNotFoundError,
GraphExecutionStats,
GraphExecutionStatsSummary,
@@ -22,6 +19,8 @@ from .invocation_stats_common import (
NodeExecutionStats,
NodeExecutionStatsSummary,
)
+from invokeai.app.services.invoker import Invoker
+from invokeai.backend.model_manager.load.model_cache import CacheStats
# Size of 1GB in bytes.
GB = 2**30
diff --git a/invokeai/app/services/invoker.py b/invokeai/app/services/invoker.py
index 527afb37f4..64f83725a1 100644
--- a/invokeai/app/services/invoker.py
+++ b/invokeai/app/services/invoker.py
@@ -1,7 +1,7 @@
# Copyright (c) 2022 Kyle Schouviller (https://github.com/kyle0654)
-from .invocation_services import InvocationServices
+from invokeai.app.services.invocation_services import InvocationServices
class Invoker:
diff --git a/invokeai/app/services/model_images/model_images_default.py b/invokeai/app/services/model_images/model_images_default.py
index 0ab79df3ed..36f04a93b5 100644
--- a/invokeai/app/services/model_images/model_images_default.py
+++ b/invokeai/app/services/model_images/model_images_default.py
@@ -5,15 +5,14 @@ from PIL.Image import Image as PILImageType
from send2trash import send2trash
from invokeai.app.services.invoker import Invoker
-from invokeai.app.util.misc import uuid_string
-from invokeai.app.util.thumbnails import make_thumbnail
-
-from .model_images_base import ModelImageFileStorageBase
-from .model_images_common import (
+from invokeai.app.services.model_images.model_images_base import ModelImageFileStorageBase
+from invokeai.app.services.model_images.model_images_common import (
ModelImageFileDeleteException,
ModelImageFileNotFoundException,
ModelImageFileSaveException,
)
+from invokeai.app.util.misc import uuid_string
+from invokeai.app.util.thumbnails import make_thumbnail
class ModelImageFileStorageDisk(ModelImageFileStorageBase):
diff --git a/invokeai/app/services/model_install/__init__.py b/invokeai/app/services/model_install/__init__.py
index 941485a134..d96e86cbfe 100644
--- a/invokeai/app/services/model_install/__init__.py
+++ b/invokeai/app/services/model_install/__init__.py
@@ -1,9 +1,7 @@
"""Initialization file for model install service package."""
-from .model_install_base import (
- ModelInstallServiceBase,
-)
-from .model_install_common import (
+from invokeai.app.services.model_install.model_install_base import ModelInstallServiceBase
+from invokeai.app.services.model_install.model_install_common import (
HFModelSource,
InstallStatus,
LocalModelSource,
@@ -12,7 +10,7 @@ from .model_install_common import (
UnknownInstallJobException,
URLModelSource,
)
-from .model_install_default import ModelInstallService
+from invokeai.app.services.model_install.model_install_default import ModelInstallService
__all__ = [
"ModelInstallServiceBase",
diff --git a/invokeai/app/services/model_install/model_install_default.py b/invokeai/app/services/model_install/model_install_default.py
index 558d8a3b59..7bcda6606b 100644
--- a/invokeai/app/services/model_install/model_install_default.py
+++ b/invokeai/app/services/model_install/model_install_default.py
@@ -23,6 +23,16 @@ from invokeai.app.services.download import DownloadQueueServiceBase, MultiFileDo
from invokeai.app.services.events.events_base import EventServiceBase
from invokeai.app.services.invoker import Invoker
from invokeai.app.services.model_install.model_install_base import ModelInstallServiceBase
+from invokeai.app.services.model_install.model_install_common import (
+ MODEL_SOURCE_TO_TYPE_MAP,
+ HFModelSource,
+ InstallStatus,
+ LocalModelSource,
+ ModelInstallJob,
+ ModelSource,
+ StringLikeSource,
+ URLModelSource,
+)
from invokeai.app.services.model_records import DuplicateModelException, ModelRecordServiceBase
from invokeai.app.services.model_records.model_records_base import ModelRecordChanges
from invokeai.backend.model_manager.config import (
@@ -47,17 +57,6 @@ from invokeai.backend.util.catch_sigint import catch_sigint
from invokeai.backend.util.devices import TorchDevice
from invokeai.backend.util.util import slugify
-from .model_install_common import (
- MODEL_SOURCE_TO_TYPE_MAP,
- HFModelSource,
- InstallStatus,
- LocalModelSource,
- ModelInstallJob,
- ModelSource,
- StringLikeSource,
- URLModelSource,
-)
-
TMPDIR_PREFIX = "tmpinstall_"
@@ -857,7 +856,7 @@ class ModelInstallService(ModelInstallServiceBase):
with self._lock:
if install_job := self._download_cache.pop(download_job.id, None):
assert excp is not None
- install_job.set_error(excp)
+ self._set_error(install_job, excp)
self._download_queue.cancel_job(download_job)
# Let other threads know that the number of downloads has changed
diff --git a/invokeai/app/services/model_load/__init__.py b/invokeai/app/services/model_load/__init__.py
index b4a86e9348..4c7e40c8c7 100644
--- a/invokeai/app/services/model_load/__init__.py
+++ b/invokeai/app/services/model_load/__init__.py
@@ -1,6 +1,6 @@
"""Initialization file for model load service module."""
-from .model_load_base import ModelLoadServiceBase
-from .model_load_default import ModelLoadService
+from invokeai.app.services.model_load.model_load_base import ModelLoadServiceBase
+from invokeai.app.services.model_load.model_load_default import ModelLoadService
__all__ = ["ModelLoadServiceBase", "ModelLoadService"]
diff --git a/invokeai/app/services/model_load/model_load_default.py b/invokeai/app/services/model_load/model_load_default.py
index 8eb94616be..be2cc2478a 100644
--- a/invokeai/app/services/model_load/model_load_default.py
+++ b/invokeai/app/services/model_load/model_load_default.py
@@ -10,6 +10,7 @@ from torch import load as torch_load
from invokeai.app.services.config import InvokeAIAppConfig
from invokeai.app.services.invoker import Invoker
+from invokeai.app.services.model_load.model_load_base import ModelLoadServiceBase
from invokeai.backend.model_manager import AnyModel, AnyModelConfig, SubModelType
from invokeai.backend.model_manager.load import (
LoadedModel,
@@ -22,8 +23,6 @@ from invokeai.backend.model_manager.load.model_loaders.generic_diffusers import
from invokeai.backend.util.devices import TorchDevice
from invokeai.backend.util.logging import InvokeAILogger
-from .model_load_base import ModelLoadServiceBase
-
class ModelLoadService(ModelLoadServiceBase):
"""Wrapper around ModelLoaderRegistry."""
diff --git a/invokeai/app/services/model_manager/__init__.py b/invokeai/app/services/model_manager/__init__.py
index 5455577266..07c27cee31 100644
--- a/invokeai/app/services/model_manager/__init__.py
+++ b/invokeai/app/services/model_manager/__init__.py
@@ -1,10 +1,9 @@
"""Initialization file for model manager service."""
+from invokeai.app.services.model_manager.model_manager_default import ModelManagerService, ModelManagerServiceBase
from invokeai.backend.model_manager import AnyModel, AnyModelConfig, BaseModelType, ModelType, SubModelType
from invokeai.backend.model_manager.load import LoadedModel
-from .model_manager_default import ModelManagerService, ModelManagerServiceBase
-
__all__ = [
"ModelManagerServiceBase",
"ModelManagerService",
diff --git a/invokeai/app/services/model_manager/model_manager_base.py b/invokeai/app/services/model_manager/model_manager_base.py
index af1b68e1ec..a906076b16 100644
--- a/invokeai/app/services/model_manager/model_manager_base.py
+++ b/invokeai/app/services/model_manager/model_manager_base.py
@@ -5,14 +5,13 @@ from abc import ABC, abstractmethod
import torch
from typing_extensions import Self
+from invokeai.app.services.config.config_default import InvokeAIAppConfig
+from invokeai.app.services.download.download_base import DownloadQueueServiceBase
+from invokeai.app.services.events.events_base import EventServiceBase
from invokeai.app.services.invoker import Invoker
-
-from ..config import InvokeAIAppConfig
-from ..download import DownloadQueueServiceBase
-from ..events.events_base import EventServiceBase
-from ..model_install import ModelInstallServiceBase
-from ..model_load import ModelLoadServiceBase
-from ..model_records import ModelRecordServiceBase
+from invokeai.app.services.model_install.model_install_base import ModelInstallServiceBase
+from invokeai.app.services.model_load.model_load_base import ModelLoadServiceBase
+from invokeai.app.services.model_records.model_records_base import ModelRecordServiceBase
class ModelManagerServiceBase(ABC):
diff --git a/invokeai/app/services/model_manager/model_manager_default.py b/invokeai/app/services/model_manager/model_manager_default.py
index f695c3c8c1..78f8e09e74 100644
--- a/invokeai/app/services/model_manager/model_manager_default.py
+++ b/invokeai/app/services/model_manager/model_manager_default.py
@@ -6,19 +6,20 @@ from typing import Optional
import torch
from typing_extensions import Self
+from invokeai.app.services.config.config_default import InvokeAIAppConfig
+from invokeai.app.services.download.download_base import DownloadQueueServiceBase
+from invokeai.app.services.events.events_base import EventServiceBase
from invokeai.app.services.invoker import Invoker
+from invokeai.app.services.model_install.model_install_base import ModelInstallServiceBase
+from invokeai.app.services.model_install.model_install_default import ModelInstallService
+from invokeai.app.services.model_load.model_load_base import ModelLoadServiceBase
+from invokeai.app.services.model_load.model_load_default import ModelLoadService
+from invokeai.app.services.model_manager.model_manager_base import ModelManagerServiceBase
+from invokeai.app.services.model_records.model_records_base import ModelRecordServiceBase
from invokeai.backend.model_manager.load import ModelCache, ModelLoaderRegistry
from invokeai.backend.util.devices import TorchDevice
from invokeai.backend.util.logging import InvokeAILogger
-from ..config import InvokeAIAppConfig
-from ..download import DownloadQueueServiceBase
-from ..events.events_base import EventServiceBase
-from ..model_install import ModelInstallService, ModelInstallServiceBase
-from ..model_load import ModelLoadService, ModelLoadServiceBase
-from ..model_records import ModelRecordServiceBase
-from .model_manager_base import ModelManagerServiceBase
-
class ModelManagerService(ModelManagerServiceBase):
"""
diff --git a/invokeai/app/services/model_records/model_records_sql.py b/invokeai/app/services/model_records/model_records_sql.py
index 16abf4c523..1d0780efe1 100644
--- a/invokeai/app/services/model_records/model_records_sql.py
+++ b/invokeai/app/services/model_records/model_records_sql.py
@@ -40,12 +40,24 @@ Typical usage:
"""
import json
+import logging
import sqlite3
from math import ceil
from pathlib import Path
from typing import List, Optional, Union
+import pydantic
+
+from invokeai.app.services.model_records.model_records_base import (
+ DuplicateModelException,
+ ModelRecordChanges,
+ ModelRecordOrderBy,
+ ModelRecordServiceBase,
+ ModelSummary,
+ UnknownModelException,
+)
from invokeai.app.services.shared.pagination import PaginatedResults
+from invokeai.app.services.shared.sqlite.sqlite_database import SqliteDatabase
from invokeai.backend.model_manager.config import (
AnyModelConfig,
BaseModelType,
@@ -54,21 +66,11 @@ from invokeai.backend.model_manager.config import (
ModelType,
)
-from ..shared.sqlite.sqlite_database import SqliteDatabase
-from .model_records_base import (
- DuplicateModelException,
- ModelRecordChanges,
- ModelRecordOrderBy,
- ModelRecordServiceBase,
- ModelSummary,
- UnknownModelException,
-)
-
class ModelRecordServiceSQL(ModelRecordServiceBase):
"""Implementation of the ModelConfigStore ABC using a SQL database."""
- def __init__(self, db: SqliteDatabase):
+ def __init__(self, db: SqliteDatabase, logger: logging.Logger):
"""
Initialize a new object from preexisting sqlite3 connection and threading lock objects.
@@ -77,6 +79,7 @@ class ModelRecordServiceSQL(ModelRecordServiceBase):
super().__init__()
self._db = db
self._cursor = db.conn.cursor()
+ self._logger = logger
@property
def db(self) -> SqliteDatabase:
@@ -292,7 +295,20 @@ class ModelRecordServiceSQL(ModelRecordServiceBase):
tuple(bindings),
)
result = self._cursor.fetchall()
- results = [ModelConfigFactory.make_config(json.loads(x[0]), timestamp=x[1]) for x in result]
+
+ # Parse the model configs.
+ results: list[AnyModelConfig] = []
+ for row in result:
+ try:
+ model_config = ModelConfigFactory.make_config(json.loads(row[0]), timestamp=row[1])
+ except pydantic.ValidationError:
+ # We catch this error so that the app can still run if there are invalid model configs in the database.
+ # One reason an invalid model config might be in the database is that someone had to roll back from a
+ # newer version of the app that added a new model type.
+ self._logger.warning(f"Found an invalid model config in the database. Ignoring this model. ({row[0]})")
+ else:
+ results.append(model_config)
+
return results
def search_by_path(self, path: Union[str, Path]) -> List[AnyModelConfig]:
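
The try/except/else loop above trades a hard failure for a logged skip: one bad row no longer takes down every model query. A sketch of the same tolerant-deserialization pattern, assuming pydantic v2 (`model_validate`); the `ModelConfig` model and rows are illustrative stand-ins for `AnyModelConfig` and the cursor results:

```python
import json
import logging

import pydantic


class ModelConfig(pydantic.BaseModel):
    name: str
    type: str


logger = logging.getLogger(__name__)
rows = ['{"name": "sdxl-base", "type": "main"}', '{"name": "missing-type"}']

results: list[ModelConfig] = []
for raw in rows:
    try:
        config = ModelConfig.model_validate(json.loads(raw))
    except pydantic.ValidationError:
        # Skip invalid rows so the rest of the query still succeeds.
        logger.warning("Found an invalid model config in the database. Ignoring this model. (%s)", raw)
    else:
        results.append(config)

assert len(results) == 1
```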
diff --git a/invokeai/app/services/names/names_default.py b/invokeai/app/services/names/names_default.py
index 104268c8bd..5804a937d6 100644
--- a/invokeai/app/services/names/names_default.py
+++ b/invokeai/app/services/names/names_default.py
@@ -1,7 +1,6 @@
+from invokeai.app.services.names.names_base import NameServiceBase
from invokeai.app.util.misc import uuid_string
-from .names_base import NameServiceBase
-
class SimpleNameService(NameServiceBase):
"""Creates image names from UUIDs."""
diff --git a/invokeai/app/services/session_processor/session_processor_default.py b/invokeai/app/services/session_processor/session_processor_default.py
index 3f348fb239..e4faaeb911 100644
--- a/invokeai/app/services/session_processor/session_processor_default.py
+++ b/invokeai/app/services/session_processor/session_processor_default.py
@@ -13,24 +13,24 @@ from invokeai.app.services.events.events_common import (
register_events,
)
from invokeai.app.services.invocation_stats.invocation_stats_common import GESStatsNotFoundError
+from invokeai.app.services.invoker import Invoker
from invokeai.app.services.session_processor.session_processor_base import (
+ InvocationServices,
OnAfterRunNode,
OnAfterRunSession,
OnBeforeRunNode,
OnBeforeRunSession,
OnNodeError,
OnNonFatalProcessorError,
+ SessionProcessorBase,
+ SessionRunnerBase,
)
-from invokeai.app.services.session_processor.session_processor_common import CanceledException
+from invokeai.app.services.session_processor.session_processor_common import CanceledException, SessionProcessorStatus
from invokeai.app.services.session_queue.session_queue_common import SessionQueueItem, SessionQueueItemNotFoundError
from invokeai.app.services.shared.graph import NodeInputError
from invokeai.app.services.shared.invocation_context import InvocationContextData, build_invocation_context
from invokeai.app.util.profiler import Profiler
-from ..invoker import Invoker
-from .session_processor_base import InvocationServices, SessionProcessorBase, SessionRunnerBase
-from .session_processor_common import SessionProcessorStatus
-
class DefaultSessionRunner(SessionRunnerBase):
"""Processes a single session's invocations."""
diff --git a/invokeai/app/services/urls/urls_default.py b/invokeai/app/services/urls/urls_default.py
index ff5071333f..d570521fb8 100644
--- a/invokeai/app/services/urls/urls_default.py
+++ b/invokeai/app/services/urls/urls_default.py
@@ -1,6 +1,6 @@
import os
-from .urls_base import UrlServiceBase
+from invokeai.app.services.urls.urls_base import UrlServiceBase
class LocalUrlService(UrlServiceBase):
diff --git a/invokeai/app/services/workflow_records/default_workflows/ESRGAN Upscaling with Canny ControlNet.json b/invokeai/app/services/workflow_records/default_workflows/ESRGAN Upscaling with Canny ControlNet.json
index dd98eca18f..2cadcae961 100644
--- a/invokeai/app/services/workflow_records/default_workflows/ESRGAN Upscaling with Canny ControlNet.json
+++ b/invokeai/app/services/workflow_records/default_workflows/ESRGAN Upscaling with Canny ControlNet.json
@@ -2,7 +2,7 @@
"name": "ESRGAN Upscaling with Canny ControlNet",
"author": "InvokeAI",
"description": "Sample workflow for using Upscaling with ControlNet with SD1.5",
- "version": "2.0.0",
+ "version": "2.1.0",
"contact": "invoke@invoke.ai",
"tags": "upscale, controlnet, default",
"notes": "",
@@ -36,14 +36,13 @@
"version": "3.0.0",
"category": "default"
},
- "id": "0e71a27e-a22b-4a9b-b20a-6d789abff2bc",
"nodes": [
{
- "id": "e8bf67fe-67de-4227-87eb-79e86afdfc74",
+ "id": "63b6ab7e-5b05-4d1b-a3b1-42d8e53ce16b",
"type": "invocation",
"data": {
- "id": "e8bf67fe-67de-4227-87eb-79e86afdfc74",
- "version": "1.1.1",
+ "id": "63b6ab7e-5b05-4d1b-a3b1-42d8e53ce16b",
+ "version": "1.2.0",
"nodePack": "invokeai",
"label": "",
"notes": "",
@@ -57,6 +56,10 @@
"clip": {
"name": "clip",
"label": ""
+ },
+ "mask": {
+ "name": "mask",
+ "label": ""
}
},
"isOpen": true,
@@ -65,122 +68,63 @@
},
"position": {
"x": 1250,
- "y": 1500
+ "y": 1200
}
},
{
- "id": "d8ace142-c05f-4f1d-8982-88dc7473958d",
+ "id": "5ca498a4-c8c8-4580-a396-0c984317205d",
"type": "invocation",
"data": {
- "id": "d8ace142-c05f-4f1d-8982-88dc7473958d",
- "version": "1.0.2",
+ "id": "5ca498a4-c8c8-4580-a396-0c984317205d",
+ "version": "1.1.0",
"nodePack": "invokeai",
"label": "",
"notes": "",
- "type": "main_model_loader",
+ "type": "i2l",
"inputs": {
- "model": {
- "name": "model",
+ "image": {
+ "name": "image",
+ "label": ""
+ },
+ "vae": {
+ "name": "vae",
+ "label": ""
+ },
+ "tiled": {
+ "name": "tiled",
"label": "",
- "value": {
- "key": "5cd43ca0-dd0a-418d-9f7e-35b2b9d5e106",
- "hash": "blake3:6987f323017f597213cc3264250edf57056d21a40a0a85d83a1a33a7d44dc41a",
- "name": "Deliberate_v5",
- "base": "sd-1",
- "type": "main"
- }
- }
- },
- "isOpen": true,
- "isIntermediate": true,
- "useCache": true
- },
- "position": {
- "x": 700,
- "y": 1375
- }
- },
- {
- "id": "771bdf6a-0813-4099-a5d8-921a138754d4",
- "type": "invocation",
- "data": {
- "id": "771bdf6a-0813-4099-a5d8-921a138754d4",
- "version": "1.0.2",
- "nodePack": "invokeai",
- "label": "",
- "notes": "",
- "type": "image",
- "inputs": {
- "image": {
- "name": "image",
- "label": "Image To Upscale",
- "value": {
- "image_name": "d2e42ba6-d420-496b-82db-91c9b75956c1.png"
- }
- }
- },
- "isOpen": true,
- "isIntermediate": true,
- "useCache": true
- },
- "position": {
- "x": 344.5593065887157,
- "y": 1698.161491368619
- }
- },
- {
- "id": "f7564dd2-9539-47f2-ac13-190804461f4e",
- "type": "invocation",
- "data": {
- "id": "f7564dd2-9539-47f2-ac13-190804461f4e",
- "version": "1.3.2",
- "nodePack": "invokeai",
- "label": "",
- "notes": "",
- "type": "esrgan",
- "inputs": {
- "board": {
- "name": "board",
- "label": ""
- },
- "metadata": {
- "name": "metadata",
- "label": ""
- },
- "image": {
- "name": "image",
- "label": ""
- },
- "model_name": {
- "name": "model_name",
- "label": "Upscaler Model",
- "value": "RealESRGAN_x2plus.pth"
+ "value": false
},
"tile_size": {
"name": "tile_size",
"label": "",
- "value": 400
+ "value": 0
+ },
+ "fp32": {
+ "name": "fp32",
+ "label": "",
+ "value": false
}
},
- "isOpen": true,
+ "isOpen": false,
"isIntermediate": true,
"useCache": true
},
"position": {
- "x": 717.3863693661265,
- "y": 1721.9215053134815
+ "x": 1650,
+ "y": 1675
}
},
{
- "id": "1d887701-df21-4966-ae6e-a7d82307d7bd",
+ "id": "3ed9b2ef-f4ec-40a7-94db-92e63b583ec0",
"type": "invocation",
"data": {
- "id": "1d887701-df21-4966-ae6e-a7d82307d7bd",
- "version": "1.3.2",
+ "id": "3ed9b2ef-f4ec-40a7-94db-92e63b583ec0",
+ "version": "1.3.0",
"nodePack": "invokeai",
"label": "",
"notes": "",
- "type": "canny_image_processor",
+ "type": "l2i",
"inputs": {
"board": {
"name": "board",
@@ -190,38 +134,37 @@
"name": "metadata",
"label": ""
},
- "image": {
- "name": "image",
+ "latents": {
+ "name": "latents",
"label": ""
},
- "detect_resolution": {
- "name": "detect_resolution",
- "label": "",
- "value": 512
+ "vae": {
+ "name": "vae",
+ "label": ""
},
- "image_resolution": {
- "name": "image_resolution",
+ "tiled": {
+ "name": "tiled",
"label": "",
- "value": 512
+ "value": false
},
- "low_threshold": {
- "name": "low_threshold",
+ "tile_size": {
+ "name": "tile_size",
"label": "",
- "value": 100
+ "value": 0
},
- "high_threshold": {
- "name": "high_threshold",
+ "fp32": {
+ "name": "fp32",
"label": "",
- "value": 200
+ "value": false
}
},
"isOpen": true,
- "isIntermediate": true,
+ "isIntermediate": false,
"useCache": true
},
"position": {
- "x": 1200,
- "y": 1900
+ "x": 2559.4751127537957,
+ "y": 1246.6000376741406
}
},
{
@@ -229,7 +172,7 @@
"type": "invocation",
"data": {
"id": "ca1d020c-89a8-4958-880a-016d28775cfa",
- "version": "1.1.1",
+ "version": "1.1.2",
"nodePack": "invokeai",
"label": "",
"notes": "",
@@ -285,6 +228,193 @@
"y": 1902.9649340196056
}
},
+ {
+ "id": "1d887701-df21-4966-ae6e-a7d82307d7bd",
+ "type": "invocation",
+ "data": {
+ "id": "1d887701-df21-4966-ae6e-a7d82307d7bd",
+ "version": "1.3.3",
+ "nodePack": "invokeai",
+ "label": "",
+ "notes": "",
+ "type": "canny_image_processor",
+ "inputs": {
+ "board": {
+ "name": "board",
+ "label": ""
+ },
+ "metadata": {
+ "name": "metadata",
+ "label": ""
+ },
+ "image": {
+ "name": "image",
+ "label": ""
+ },
+ "detect_resolution": {
+ "name": "detect_resolution",
+ "label": "",
+ "value": 512
+ },
+ "image_resolution": {
+ "name": "image_resolution",
+ "label": "",
+ "value": 512
+ },
+ "low_threshold": {
+ "name": "low_threshold",
+ "label": "",
+ "value": 100
+ },
+ "high_threshold": {
+ "name": "high_threshold",
+ "label": "",
+ "value": 200
+ }
+ },
+ "isOpen": true,
+ "isIntermediate": true,
+ "useCache": true
+ },
+ "position": {
+ "x": 1200,
+ "y": 1900
+ }
+ },
+ {
+ "id": "d8ace142-c05f-4f1d-8982-88dc7473958d",
+ "type": "invocation",
+ "data": {
+ "id": "d8ace142-c05f-4f1d-8982-88dc7473958d",
+ "version": "1.0.3",
+ "nodePack": "invokeai",
+ "label": "",
+ "notes": "",
+ "type": "main_model_loader",
+ "inputs": {
+ "model": {
+ "name": "model",
+ "label": "",
+ "value": {
+ "key": "5cd43ca0-dd0a-418d-9f7e-35b2b9d5e106",
+ "hash": "blake3:6987f323017f597213cc3264250edf57056d21a40a0a85d83a1a33a7d44dc41a",
+ "name": "Deliberate_v5",
+ "base": "sd-1",
+ "type": "main"
+ }
+ }
+ },
+ "isOpen": true,
+ "isIntermediate": true,
+ "useCache": true
+ },
+ "position": {
+ "x": 700,
+ "y": 1375
+ }
+ },
+ {
+ "id": "e8bf67fe-67de-4227-87eb-79e86afdfc74",
+ "type": "invocation",
+ "data": {
+ "id": "e8bf67fe-67de-4227-87eb-79e86afdfc74",
+ "version": "1.2.0",
+ "nodePack": "invokeai",
+ "label": "",
+ "notes": "",
+ "type": "compel",
+ "inputs": {
+ "prompt": {
+ "name": "prompt",
+ "label": "",
+ "value": ""
+ },
+ "clip": {
+ "name": "clip",
+ "label": ""
+ },
+ "mask": {
+ "name": "mask",
+ "label": ""
+ }
+ },
+ "isOpen": true,
+ "isIntermediate": true,
+ "useCache": true
+ },
+ "position": {
+ "x": 1250,
+ "y": 1500
+ }
+ },
+ {
+ "id": "771bdf6a-0813-4099-a5d8-921a138754d4",
+ "type": "invocation",
+ "data": {
+ "id": "771bdf6a-0813-4099-a5d8-921a138754d4",
+ "version": "1.0.2",
+ "nodePack": "invokeai",
+ "label": "",
+ "notes": "",
+ "type": "image",
+ "inputs": {
+ "image": {
+ "name": "image",
+ "label": "Image To Upscale"
+ }
+ },
+ "isOpen": true,
+ "isIntermediate": true,
+ "useCache": true
+ },
+ "position": {
+ "x": 344.5593065887157,
+ "y": 1698.161491368619
+ }
+ },
+ {
+ "id": "f7564dd2-9539-47f2-ac13-190804461f4e",
+ "type": "invocation",
+ "data": {
+ "id": "f7564dd2-9539-47f2-ac13-190804461f4e",
+ "version": "1.3.2",
+ "nodePack": "invokeai",
+ "label": "",
+ "notes": "",
+ "type": "esrgan",
+ "inputs": {
+ "board": {
+ "name": "board",
+ "label": ""
+ },
+ "metadata": {
+ "name": "metadata",
+ "label": ""
+ },
+ "image": {
+ "name": "image",
+ "label": ""
+ },
+ "model_name": {
+ "name": "model_name",
+ "label": "Upscaler Model",
+ "value": "RealESRGAN_x2plus.pth"
+ },
+ "tile_size": {
+ "name": "tile_size",
+ "label": "",
+ "value": 400
+ }
+ },
+ "isOpen": true,
+ "isIntermediate": true,
+ "useCache": true
+ },
+ "position": {
+ "x": 717.3863693661265,
+ "y": 1721.9215053134815
+ }
+ },
{
"id": "f50624ce-82bf-41d0-bdf7-8aab11a80d48",
"type": "invocation",
@@ -413,122 +543,6 @@
"y": 1232.6219060454753
}
},
- {
- "id": "3ed9b2ef-f4ec-40a7-94db-92e63b583ec0",
- "type": "invocation",
- "data": {
- "id": "3ed9b2ef-f4ec-40a7-94db-92e63b583ec0",
- "version": "1.2.2",
- "nodePack": "invokeai",
- "label": "",
- "notes": "",
- "type": "l2i",
- "inputs": {
- "board": {
- "name": "board",
- "label": ""
- },
- "metadata": {
- "name": "metadata",
- "label": ""
- },
- "latents": {
- "name": "latents",
- "label": ""
- },
- "vae": {
- "name": "vae",
- "label": ""
- },
- "tiled": {
- "name": "tiled",
- "label": "",
- "value": false
- },
- "fp32": {
- "name": "fp32",
- "label": "",
- "value": false
- }
- },
- "isOpen": true,
- "isIntermediate": false,
- "useCache": true
- },
- "position": {
- "x": 2559.4751127537957,
- "y": 1246.6000376741406
- }
- },
- {
- "id": "5ca498a4-c8c8-4580-a396-0c984317205d",
- "type": "invocation",
- "data": {
- "id": "5ca498a4-c8c8-4580-a396-0c984317205d",
- "version": "1.0.2",
- "nodePack": "invokeai",
- "label": "",
- "notes": "",
- "type": "i2l",
- "inputs": {
- "image": {
- "name": "image",
- "label": ""
- },
- "vae": {
- "name": "vae",
- "label": ""
- },
- "tiled": {
- "name": "tiled",
- "label": "",
- "value": false
- },
- "fp32": {
- "name": "fp32",
- "label": "",
- "value": false
- }
- },
- "isOpen": false,
- "isIntermediate": true,
- "useCache": true
- },
- "position": {
- "x": 1650,
- "y": 1675
- }
- },
- {
- "id": "63b6ab7e-5b05-4d1b-a3b1-42d8e53ce16b",
- "type": "invocation",
- "data": {
- "id": "63b6ab7e-5b05-4d1b-a3b1-42d8e53ce16b",
- "version": "1.1.1",
- "nodePack": "invokeai",
- "label": "",
- "notes": "",
- "type": "compel",
- "inputs": {
- "prompt": {
- "name": "prompt",
- "label": "",
- "value": ""
- },
- "clip": {
- "name": "clip",
- "label": ""
- }
- },
- "isOpen": true,
- "isIntermediate": true,
- "useCache": true
- },
- "position": {
- "x": 1250,
- "y": 1200
- }
- },
{
"id": "eb8f6f8a-c7b1-4914-806e-045ee2717a35",
"type": "invocation",
diff --git a/invokeai/app/services/workflow_records/default_workflows/Face Detailer with IP-Adapter & Canny (See Note in Details).json b/invokeai/app/services/workflow_records/default_workflows/Face Detailer with IP-Adapter & Canny (See Note in Details).json
index 8c7dcee30c..481ba85e64 100644
--- a/invokeai/app/services/workflow_records/default_workflows/Face Detailer with IP-Adapter & Canny (See Note in Details).json
+++ b/invokeai/app/services/workflow_records/default_workflows/Face Detailer with IP-Adapter & Canny (See Note in Details).json
@@ -2,7 +2,7 @@
"name": "Face Detailer with IP-Adapter & Canny (See Note in Details)",
"author": "kosmoskatten",
"description": "A workflow to add detail to and improve faces. This workflow is most effective when used with a model that creates realistic outputs. ",
- "version": "2.0.0",
+ "version": "2.1.0",
"contact": "invoke@invoke.ai",
"tags": "face detailer, IP-Adapter, Canny",
"notes": "Set this image as the blur mask: https://i.imgur.com/Gxi61zP.png",
@@ -37,16 +37,219 @@
}
],
"meta": {
- "category": "default",
- "version": "3.0.0"
+ "version": "3.0.0",
+ "category": "default"
},
"nodes": [
{
- "id": "44f2c190-eb03-460d-8d11-a94d13b33f19",
+ "id": "c6359181-6479-40ec-bf3a-b7e8451683b8",
"type": "invocation",
"data": {
- "id": "44f2c190-eb03-460d-8d11-a94d13b33f19",
- "version": "1.1.1",
+ "id": "c6359181-6479-40ec-bf3a-b7e8451683b8",
+ "version": "1.0.3",
+ "label": "",
+ "notes": "",
+ "type": "main_model_loader",
+ "inputs": {
+ "model": {
+ "name": "model",
+ "label": ""
+ }
+ },
+ "isOpen": true,
+ "isIntermediate": true,
+ "useCache": true
+ },
+ "position": {
+ "x": 2031.5518710051792,
+ "y": -492.1742944307074
+ }
+ },
+ {
+ "id": "8fe598c6-d447-44fa-a165-4975af77d080",
+ "type": "invocation",
+ "data": {
+ "id": "8fe598c6-d447-44fa-a165-4975af77d080",
+ "version": "1.3.3",
+ "nodePack": "invokeai",
+ "label": "",
+ "notes": "",
+ "type": "canny_image_processor",
+ "inputs": {
+ "board": {
+ "name": "board",
+ "label": ""
+ },
+ "metadata": {
+ "name": "metadata",
+ "label": ""
+ },
+ "image": {
+ "name": "image",
+ "label": ""
+ },
+ "detect_resolution": {
+ "name": "detect_resolution",
+ "label": "",
+ "value": 512
+ },
+ "image_resolution": {
+ "name": "image_resolution",
+ "label": "",
+ "value": 512
+ },
+ "low_threshold": {
+ "name": "low_threshold",
+ "label": "",
+ "value": 100
+ },
+ "high_threshold": {
+ "name": "high_threshold",
+ "label": "",
+ "value": 200
+ }
+ },
+ "isOpen": true,
+ "isIntermediate": true,
+ "useCache": true
+ },
+ "position": {
+ "x": 3519.4131037388597,
+ "y": 576.7946795840575
+ }
+ },
+ {
+ "id": "f60b6161-8f26-42f6-89ff-545e6011e501",
+ "type": "invocation",
+ "data": {
+ "id": "f60b6161-8f26-42f6-89ff-545e6011e501",
+ "version": "1.1.2",
+ "nodePack": "invokeai",
+ "label": "",
+ "notes": "",
+ "type": "controlnet",
+ "inputs": {
+ "image": {
+ "name": "image",
+ "label": ""
+ },
+ "control_model": {
+ "name": "control_model",
+ "label": "Control Model (select canny)",
+ "value": {
+ "key": "5bdaacf7-a7a3-4fb8-b394-cc0ffbb8941d",
+ "hash": "blake3:260c7f8e10aefea9868cfc68d89970e91033bd37132b14b903e70ee05ebf530e",
+ "name": "sd-controlnet-canny",
+ "base": "sd-1",
+ "type": "controlnet"
+ }
+ },
+ "control_weight": {
+ "name": "control_weight",
+ "label": "",
+ "value": 0.5
+ },
+ "begin_step_percent": {
+ "name": "begin_step_percent",
+ "label": "",
+ "value": 0
+ },
+ "end_step_percent": {
+ "name": "end_step_percent",
+ "label": "",
+ "value": 0.5
+ },
+ "control_mode": {
+ "name": "control_mode",
+ "label": "",
+ "value": "balanced"
+ },
+ "resize_mode": {
+ "name": "resize_mode",
+ "label": "",
+ "value": "just_resize"
+ }
+ },
+ "isOpen": true,
+ "isIntermediate": true,
+ "useCache": true
+ },
+ "position": {
+ "x": 3950,
+ "y": 150
+ }
+ },
+ {
+ "id": "22b750db-b85e-486b-b278-ac983e329813",
+ "type": "invocation",
+ "data": {
+ "id": "22b750db-b85e-486b-b278-ac983e329813",
+ "version": "1.4.1",
+ "nodePack": "invokeai",
+ "label": "",
+ "notes": "",
+ "type": "ip_adapter",
+ "inputs": {
+ "image": {
+ "name": "image",
+ "label": ""
+ },
+ "ip_adapter_model": {
+ "name": "ip_adapter_model",
+ "label": "IP-Adapter Model (select IP Adapter Face)",
+ "value": {
+ "key": "1cc210bb-4d0a-4312-b36c-b5d46c43768e",
+ "hash": "blake3:3d669dffa7471b357b4df088b99ffb6bf4d4383d5e0ef1de5ec1c89728a3d5a5",
+ "name": "ip_adapter_sd15",
+ "base": "sd-1",
+ "type": "ip_adapter"
+ }
+ },
+ "clip_vision_model": {
+ "name": "clip_vision_model",
+ "label": "",
+ "value": "ViT-H"
+ },
+ "weight": {
+ "name": "weight",
+ "label": "",
+ "value": 0.5
+ },
+ "method": {
+ "name": "method",
+ "label": "",
+ "value": "full"
+ },
+ "begin_step_percent": {
+ "name": "begin_step_percent",
+ "label": "",
+ "value": 0
+ },
+ "end_step_percent": {
+ "name": "end_step_percent",
+ "label": "",
+ "value": 0.8
+ },
+ "mask": {
+ "name": "mask",
+ "label": ""
+ }
+ },
+ "isOpen": true,
+ "isIntermediate": true,
+ "useCache": true
+ },
+ "position": {
+ "x": 3575,
+ "y": -200
+ }
+ },
+ {
+ "id": "f4d15b64-c4a6-42a5-90fc-e4ed07a0ca65",
+ "type": "invocation",
+ "data": {
+ "id": "f4d15b64-c4a6-42a5-90fc-e4ed07a0ca65",
+ "version": "1.2.0",
"nodePack": "invokeai",
"label": "",
"notes": "",
@@ -60,6 +263,140 @@
"clip": {
"name": "clip",
"label": ""
+ },
+ "mask": {
+ "name": "mask",
+ "label": ""
+ }
+ },
+ "isOpen": true,
+ "isIntermediate": true,
+ "useCache": true
+ },
+ "position": {
+ "x": 2550,
+ "y": -525
+ }
+ },
+ {
+ "id": "2224ed72-2453-4252-bd89-3085240e0b6f",
+ "type": "invocation",
+ "data": {
+ "id": "2224ed72-2453-4252-bd89-3085240e0b6f",
+ "version": "1.3.0",
+ "nodePack": "invokeai",
+ "label": "",
+ "notes": "",
+ "type": "l2i",
+ "inputs": {
+ "board": {
+ "name": "board",
+ "label": ""
+ },
+ "metadata": {
+ "name": "metadata",
+ "label": ""
+ },
+ "latents": {
+ "name": "latents",
+ "label": ""
+ },
+ "vae": {
+ "name": "vae",
+ "label": ""
+ },
+ "tiled": {
+ "name": "tiled",
+ "label": "",
+ "value": false
+ },
+ "tile_size": {
+ "name": "tile_size",
+ "label": "",
+ "value": 0
+ },
+ "fp32": {
+ "name": "fp32",
+ "label": "",
+ "value": true
+ }
+ },
+ "isOpen": true,
+ "isIntermediate": false,
+ "useCache": true
+ },
+ "position": {
+ "x": 4980.1395106966565,
+ "y": -255.9158921745602
+ }
+ },
+ {
+ "id": "de8b1a48-a2e4-42ca-90bb-66058bffd534",
+ "type": "invocation",
+ "data": {
+ "id": "de8b1a48-a2e4-42ca-90bb-66058bffd534",
+ "version": "1.1.0",
+ "nodePack": "invokeai",
+ "label": "",
+ "notes": "",
+ "type": "i2l",
+ "inputs": {
+ "image": {
+ "name": "image",
+ "label": ""
+ },
+ "vae": {
+ "name": "vae",
+ "label": ""
+ },
+ "tiled": {
+ "name": "tiled",
+ "label": "",
+ "value": false
+ },
+ "tile_size": {
+ "name": "tile_size",
+ "label": "",
+ "value": 0
+ },
+ "fp32": {
+ "name": "fp32",
+ "label": "",
+ "value": true
+ }
+ },
+ "isOpen": false,
+ "isIntermediate": true,
+ "useCache": true
+ },
+ "position": {
+ "x": 3100,
+ "y": -275
+ }
+ },
+ {
+ "id": "44f2c190-eb03-460d-8d11-a94d13b33f19",
+ "type": "invocation",
+ "data": {
+ "id": "44f2c190-eb03-460d-8d11-a94d13b33f19",
+ "version": "1.2.0",
+ "nodePack": "invokeai",
+ "label": "",
+ "notes": "",
+ "type": "compel",
+ "inputs": {
+ "prompt": {
+ "name": "prompt",
+ "label": "",
+ "value": ""
+ },
+ "clip": {
+ "name": "clip",
+ "label": ""
+ },
+ "mask": {
+ "name": "mask",
+ "label": ""
}
},
"isOpen": true,
@@ -251,45 +588,6 @@
"y": 0
}
},
- {
- "id": "de8b1a48-a2e4-42ca-90bb-66058bffd534",
- "type": "invocation",
- "data": {
- "id": "de8b1a48-a2e4-42ca-90bb-66058bffd534",
- "version": "1.0.2",
- "nodePack": "invokeai",
- "label": "",
- "notes": "",
- "type": "i2l",
- "inputs": {
- "image": {
- "name": "image",
- "label": ""
- },
- "vae": {
- "name": "vae",
- "label": ""
- },
- "tiled": {
- "name": "tiled",
- "label": "",
- "value": false
- },
- "fp32": {
- "name": "fp32",
- "label": "",
- "value": true
- }
- },
- "isOpen": false,
- "isIntermediate": true,
- "useCache": true
- },
- "position": {
- "x": 3100,
- "y": -275
- }
- },
{
"id": "bd06261d-a74a-4d1f-8374-745ed6194bc2",
"type": "invocation",
@@ -418,53 +716,6 @@
"y": -175
}
},
- {
- "id": "2224ed72-2453-4252-bd89-3085240e0b6f",
- "type": "invocation",
- "data": {
- "id": "2224ed72-2453-4252-bd89-3085240e0b6f",
- "version": "1.2.2",
- "nodePack": "invokeai",
- "label": "",
- "notes": "",
- "type": "l2i",
- "inputs": {
- "board": {
- "name": "board",
- "label": ""
- },
- "metadata": {
- "name": "metadata",
- "label": ""
- },
- "latents": {
- "name": "latents",
- "label": ""
- },
- "vae": {
- "name": "vae",
- "label": ""
- },
- "tiled": {
- "name": "tiled",
- "label": "",
- "value": false
- },
- "fp32": {
- "name": "fp32",
- "label": "",
- "value": true
- }
- },
- "isOpen": true,
- "isIntermediate": false,
- "useCache": true
- },
- "position": {
- "x": 4980.1395106966565,
- "y": -255.9158921745602
- }
- },
{
"id": "2974e5b3-3d41-4b6f-9953-cd21e8f3a323",
"type": "invocation",
@@ -692,201 +943,6 @@
"y": -275
}
},
- {
- "id": "f4d15b64-c4a6-42a5-90fc-e4ed07a0ca65",
- "type": "invocation",
- "data": {
- "id": "f4d15b64-c4a6-42a5-90fc-e4ed07a0ca65",
- "version": "1.1.1",
- "nodePack": "invokeai",
- "label": "",
- "notes": "",
- "type": "compel",
- "inputs": {
- "prompt": {
- "name": "prompt",
- "label": "",
- "value": ""
- },
- "clip": {
- "name": "clip",
- "label": ""
- }
- },
- "isOpen": true,
- "isIntermediate": true,
- "useCache": true
- },
- "position": {
- "x": 2550,
- "y": -525
- }
- },
- {
- "id": "22b750db-b85e-486b-b278-ac983e329813",
- "type": "invocation",
- "data": {
- "id": "22b750db-b85e-486b-b278-ac983e329813",
- "version": "1.2.2",
- "nodePack": "invokeai",
- "label": "",
- "notes": "",
- "type": "ip_adapter",
- "inputs": {
- "image": {
- "name": "image",
- "label": ""
- },
- "ip_adapter_model": {
- "name": "ip_adapter_model",
- "label": "IP-Adapter Model (select IP Adapter Face)",
- "value": {
- "key": "1cc210bb-4d0a-4312-b36c-b5d46c43768e",
- "hash": "blake3:3d669dffa7471b357b4df088b99ffb6bf4d4383d5e0ef1de5ec1c89728a3d5a5",
- "name": "ip_adapter_sd15",
- "base": "sd-1",
- "type": "ip_adapter"
- }
- },
- "weight": {
- "name": "weight",
- "label": "",
- "value": 0.5
- },
- "begin_step_percent": {
- "name": "begin_step_percent",
- "label": "",
- "value": 0
- },
- "end_step_percent": {
- "name": "end_step_percent",
- "label": "",
- "value": 0.8
- }
- },
- "isOpen": true,
- "isIntermediate": true,
- "useCache": true
- },
- "position": {
- "x": 3575,
- "y": -200
- }
- },
- {
- "id": "f60b6161-8f26-42f6-89ff-545e6011e501",
- "type": "invocation",
- "data": {
- "id": "f60b6161-8f26-42f6-89ff-545e6011e501",
- "version": "1.1.1",
- "nodePack": "invokeai",
- "label": "",
- "notes": "",
- "type": "controlnet",
- "inputs": {
- "image": {
- "name": "image",
- "label": ""
- },
- "control_model": {
- "name": "control_model",
- "label": "Control Model (select canny)",
- "value": {
- "key": "5bdaacf7-a7a3-4fb8-b394-cc0ffbb8941d",
- "hash": "blake3:260c7f8e10aefea9868cfc68d89970e91033bd37132b14b903e70ee05ebf530e",
- "name": "sd-controlnet-canny",
- "base": "sd-1",
- "type": "controlnet"
- }
- },
- "control_weight": {
- "name": "control_weight",
- "label": "",
- "value": 0.5
- },
- "begin_step_percent": {
- "name": "begin_step_percent",
- "label": "",
- "value": 0
- },
- "end_step_percent": {
- "name": "end_step_percent",
- "label": "",
- "value": 0.5
- },
- "control_mode": {
- "name": "control_mode",
- "label": "",
- "value": "balanced"
- },
- "resize_mode": {
- "name": "resize_mode",
- "label": "",
- "value": "just_resize"
- }
- },
- "isOpen": true,
- "isIntermediate": true,
- "useCache": true
- },
- "position": {
- "x": 3950,
- "y": 150
- }
- },
- {
- "id": "8fe598c6-d447-44fa-a165-4975af77d080",
- "type": "invocation",
- "data": {
- "id": "8fe598c6-d447-44fa-a165-4975af77d080",
- "version": "1.3.2",
- "nodePack": "invokeai",
- "label": "",
- "notes": "",
- "type": "canny_image_processor",
- "inputs": {
- "board": {
- "name": "board",
- "label": ""
- },
- "metadata": {
- "name": "metadata",
- "label": ""
- },
- "image": {
- "name": "image",
- "label": ""
- },
- "detect_resolution": {
- "name": "detect_resolution",
- "label": "",
- "value": 512
- },
- "image_resolution": {
- "name": "image_resolution",
- "label": "",
- "value": 512
- },
- "low_threshold": {
- "name": "low_threshold",
- "label": "",
- "value": 100
- },
- "high_threshold": {
- "name": "high_threshold",
- "label": "",
- "value": 200
- }
- },
- "isOpen": true,
- "isIntermediate": true,
- "useCache": true
- },
- "position": {
- "x": 3519.4131037388597,
- "y": 576.7946795840575
- }
- },
{
"id": "4bd4ae80-567f-4366-b8c6-3bb06f4fb46a",
"type": "invocation",
@@ -1035,30 +1091,6 @@
"x": 2578.2364832140506,
"y": 78.7948456497351
}
- },
- {
- "id": "c6359181-6479-40ec-bf3a-b7e8451683b8",
- "type": "invocation",
- "data": {
- "id": "c6359181-6479-40ec-bf3a-b7e8451683b8",
- "version": "1.0.2",
- "label": "",
- "notes": "",
- "type": "main_model_loader",
- "inputs": {
- "model": {
- "name": "model",
- "label": ""
- }
- },
- "isOpen": true,
- "isIntermediate": true,
- "useCache": true
- },
- "position": {
- "x": 2031.5518710051792,
- "y": -492.1742944307074
- }
}
],
"edges": [
diff --git a/invokeai/app/services/workflow_records/default_workflows/Multi ControlNet (Canny & Depth).json b/invokeai/app/services/workflow_records/default_workflows/Multi ControlNet (Canny & Depth).json
index d859094216..3ff99b5eb3 100644
--- a/invokeai/app/services/workflow_records/default_workflows/Multi ControlNet (Canny & Depth).json
+++ b/invokeai/app/services/workflow_records/default_workflows/Multi ControlNet (Canny & Depth).json
@@ -2,7 +2,7 @@
"name": "Multi ControlNet (Canny & Depth)",
"author": "InvokeAI",
"description": "A sample workflow using canny & depth ControlNets to guide the generation process. ",
- "version": "2.0.0",
+ "version": "2.1.0",
"contact": "invoke@invoke.ai",
"tags": "ControlNet, canny, depth",
"notes": "",
@@ -37,140 +37,104 @@
}
],
"meta": {
- "category": "default",
- "version": "3.0.0"
+ "version": "3.0.0",
+ "category": "default"
},
"nodes": [
{
- "id": "8e860e51-5045-456e-bf04-9a62a2a5c49e",
+ "id": "9db25398-c869-4a63-8815-c6559341ef12",
"type": "invocation",
"data": {
- "id": "8e860e51-5045-456e-bf04-9a62a2a5c49e",
- "version": "1.0.2",
+ "id": "9db25398-c869-4a63-8815-c6559341ef12",
+ "version": "1.3.0",
"nodePack": "invokeai",
"label": "",
"notes": "",
- "type": "image",
+ "type": "l2i",
"inputs": {
- "image": {
- "name": "image",
- "label": "Depth Input Image"
- }
- },
- "isOpen": true,
- "isIntermediate": true,
- "useCache": true
- },
- "position": {
- "x": 3666.135718057363,
- "y": 186.66887319822808
- }
- },
- {
- "id": "a33199c2-8340-401e-b8a2-42ffa875fc1c",
- "type": "invocation",
- "data": {
- "id": "a33199c2-8340-401e-b8a2-42ffa875fc1c",
- "version": "1.1.1",
- "nodePack": "invokeai",
- "label": "",
- "notes": "",
- "type": "controlnet",
- "inputs": {
- "image": {
- "name": "image",
+ "board": {
+ "name": "board",
"label": ""
},
- "control_model": {
- "name": "control_model",
- "label": "Control Model (select depth)",
- "value": {
- "key": "87e8855c-671f-4c9e-bbbb-8ed47ccb4aac",
- "hash": "blake3:2550bf22a53942dfa28ab2fed9d10d80851112531f44d977168992edf9d0534c",
- "name": "control_v11f1p_sd15_depth",
- "base": "sd-1",
- "type": "controlnet"
- }
+ "metadata": {
+ "name": "metadata",
+ "label": ""
},
- "control_weight": {
- "name": "control_weight",
+ "latents": {
+ "name": "latents",
+ "label": ""
+ },
+ "vae": {
+ "name": "vae",
+ "label": ""
+ },
+ "tiled": {
+ "name": "tiled",
"label": "",
- "value": 1
+ "value": false
},
- "begin_step_percent": {
- "name": "begin_step_percent",
+ "tile_size": {
+ "name": "tile_size",
"label": "",
"value": 0
},
- "end_step_percent": {
- "name": "end_step_percent",
+ "fp32": {
+ "name": "fp32",
"label": "",
- "value": 1
- },
- "control_mode": {
- "name": "control_mode",
- "label": "",
- "value": "balanced"
- },
- "resize_mode": {
- "name": "resize_mode",
- "label": "",
- "value": "just_resize"
+ "value": false
}
},
"isOpen": true,
- "isIntermediate": true,
+ "isIntermediate": false,
"useCache": true
},
"position": {
- "x": 4477.604342844504,
- "y": -49.39005411272677
- }
- },
- {
- "id": "273e3f96-49ea-4dc5-9d5b-9660390f14e1",
- "type": "invocation",
- "data": {
- "id": "273e3f96-49ea-4dc5-9d5b-9660390f14e1",
- "version": "1.1.1",
- "nodePack": "invokeai",
- "label": "",
- "notes": "",
- "type": "compel",
- "inputs": {
- "prompt": {
- "name": "prompt",
- "label": "Negative Prompt",
- "value": ""
- },
- "clip": {
- "name": "clip",
- "label": ""
- }
- },
- "isOpen": true,
- "isIntermediate": true,
- "useCache": true
- },
- "position": {
- "x": 4075,
+ "x": 5675,
"y": -825
}
},
{
- "id": "54486974-835b-4d81-8f82-05f9f32ce9e9",
+ "id": "c826ba5e-9676-4475-b260-07b85e88753c",
"type": "invocation",
"data": {
- "id": "54486974-835b-4d81-8f82-05f9f32ce9e9",
- "version": "1.0.2",
+ "id": "c826ba5e-9676-4475-b260-07b85e88753c",
+ "version": "1.3.3",
"nodePack": "invokeai",
"label": "",
"notes": "",
- "type": "main_model_loader",
+ "type": "canny_image_processor",
"inputs": {
- "model": {
- "name": "model",
+ "board": {
+ "name": "board",
"label": ""
+ },
+ "metadata": {
+ "name": "metadata",
+ "label": ""
+ },
+ "image": {
+ "name": "image",
+ "label": ""
+ },
+ "detect_resolution": {
+ "name": "detect_resolution",
+ "label": "",
+ "value": 512
+ },
+ "image_resolution": {
+ "name": "image_resolution",
+ "label": "",
+ "value": 512
+ },
+ "low_threshold": {
+ "name": "low_threshold",
+ "label": "",
+ "value": 100
+ },
+ "high_threshold": {
+ "name": "high_threshold",
+ "label": "",
+ "value": 200
}
},
"isOpen": true,
@@ -178,29 +142,52 @@
"useCache": true
},
"position": {
- "x": 3600,
- "y": -1000
+ "x": 4095.757337055795,
+ "y": -455.63440891935863
}
},
{
- "id": "7ce68934-3419-42d4-ac70-82cfc9397306",
+ "id": "018b1214-c2af-43a7-9910-fb687c6726d7",
"type": "invocation",
"data": {
- "id": "7ce68934-3419-42d4-ac70-82cfc9397306",
- "version": "1.1.1",
+ "id": "018b1214-c2af-43a7-9910-fb687c6726d7",
+ "version": "1.2.4",
"nodePack": "invokeai",
"label": "",
"notes": "",
- "type": "compel",
+ "type": "midas_depth_image_processor",
"inputs": {
- "prompt": {
- "name": "prompt",
- "label": "Positive Prompt",
- "value": ""
- },
- "clip": {
- "name": "clip",
+ "board": {
+ "name": "board",
"label": ""
+ },
+ "metadata": {
+ "name": "metadata",
+ "label": ""
+ },
+ "image": {
+ "name": "image",
+ "label": ""
+ },
+ "a_mult": {
+ "name": "a_mult",
+ "label": "",
+ "value": 2
+ },
+ "bg_th": {
+ "name": "bg_th",
+ "label": "",
+ "value": 0.1
+ },
+ "detect_resolution": {
+ "name": "detect_resolution",
+ "label": "",
+ "value": 512
+ },
+ "image_resolution": {
+ "name": "image_resolution",
+ "label": "",
+ "value": 512
}
},
"isOpen": true,
@@ -208,8 +195,8 @@
"useCache": true
},
"position": {
- "x": 4075,
- "y": -1125
+ "x": 4082.783145980783,
+ "y": 0.01629251229994111
}
},
{
@@ -217,7 +204,7 @@
"type": "invocation",
"data": {
"id": "d204d184-f209-4fae-a0a1-d152800844e1",
- "version": "1.1.1",
+ "version": "1.1.2",
"nodePack": "invokeai",
"label": "",
"notes": "",
@@ -273,6 +260,185 @@
"y": -618.4221638099414
}
},
+ {
+ "id": "7ce68934-3419-42d4-ac70-82cfc9397306",
+ "type": "invocation",
+ "data": {
+ "id": "7ce68934-3419-42d4-ac70-82cfc9397306",
+ "version": "1.2.0",
+ "nodePack": "invokeai",
+ "label": "",
+ "notes": "",
+ "type": "compel",
+ "inputs": {
+ "prompt": {
+ "name": "prompt",
+ "label": "Positive Prompt",
+ "value": ""
+ },
+ "clip": {
+ "name": "clip",
+ "label": ""
+ },
+ "mask": {
+ "name": "mask",
+ "label": ""
+ }
+ },
+ "isOpen": true,
+ "isIntermediate": true,
+ "useCache": true
+ },
+ "position": {
+ "x": 4075,
+ "y": -1125
+ }
+ },
+ {
+ "id": "54486974-835b-4d81-8f82-05f9f32ce9e9",
+ "type": "invocation",
+ "data": {
+ "id": "54486974-835b-4d81-8f82-05f9f32ce9e9",
+ "version": "1.0.3",
+ "nodePack": "invokeai",
+ "label": "",
+ "notes": "",
+ "type": "main_model_loader",
+ "inputs": {
+ "model": {
+ "name": "model",
+ "label": ""
+ }
+ },
+ "isOpen": true,
+ "isIntermediate": true,
+ "useCache": true
+ },
+ "position": {
+ "x": 3600,
+ "y": -1000
+ }
+ },
+ {
+ "id": "273e3f96-49ea-4dc5-9d5b-9660390f14e1",
+ "type": "invocation",
+ "data": {
+ "id": "273e3f96-49ea-4dc5-9d5b-9660390f14e1",
+ "version": "1.2.0",
+ "nodePack": "invokeai",
+ "label": "",
+ "notes": "",
+ "type": "compel",
+ "inputs": {
+ "prompt": {
+ "name": "prompt",
+ "label": "Negative Prompt",
+ "value": ""
+ },
+ "clip": {
+ "name": "clip",
+ "label": ""
+ },
+ "mask": {
+ "name": "mask",
+ "label": ""
+ }
+ },
+ "isOpen": true,
+ "isIntermediate": true,
+ "useCache": true
+ },
+ "position": {
+ "x": 4075,
+ "y": -825
+ }
+ },
+ {
+ "id": "a33199c2-8340-401e-b8a2-42ffa875fc1c",
+ "type": "invocation",
+ "data": {
+ "id": "a33199c2-8340-401e-b8a2-42ffa875fc1c",
+ "version": "1.1.2",
+ "nodePack": "invokeai",
+ "label": "",
+ "notes": "",
+ "type": "controlnet",
+ "inputs": {
+ "image": {
+ "name": "image",
+ "label": ""
+ },
+ "control_model": {
+ "name": "control_model",
+ "label": "Control Model (select depth)",
+ "value": {
+ "key": "87e8855c-671f-4c9e-bbbb-8ed47ccb4aac",
+ "hash": "blake3:2550bf22a53942dfa28ab2fed9d10d80851112531f44d977168992edf9d0534c",
+ "name": "control_v11f1p_sd15_depth",
+ "base": "sd-1",
+ "type": "controlnet"
+ }
+ },
+ "control_weight": {
+ "name": "control_weight",
+ "label": "",
+ "value": 1
+ },
+ "begin_step_percent": {
+ "name": "begin_step_percent",
+ "label": "",
+ "value": 0
+ },
+ "end_step_percent": {
+ "name": "end_step_percent",
+ "label": "",
+ "value": 1
+ },
+ "control_mode": {
+ "name": "control_mode",
+ "label": "",
+ "value": "balanced"
+ },
+ "resize_mode": {
+ "name": "resize_mode",
+ "label": "",
+ "value": "just_resize"
+ }
+ },
+ "isOpen": true,
+ "isIntermediate": true,
+ "useCache": true
+ },
+ "position": {
+ "x": 4477.604342844504,
+ "y": -49.39005411272677
+ }
+ },
+ {
+ "id": "8e860e51-5045-456e-bf04-9a62a2a5c49e",
+ "type": "invocation",
+ "data": {
+ "id": "8e860e51-5045-456e-bf04-9a62a2a5c49e",
+ "version": "1.0.2",
+ "nodePack": "invokeai",
+ "label": "",
+ "notes": "",
+ "type": "image",
+ "inputs": {
+ "image": {
+ "name": "image",
+ "label": "Depth Input Image"
+ }
+ },
+ "isOpen": true,
+ "isIntermediate": true,
+ "useCache": true
+ },
+ "position": {
+ "x": 3666.135718057363,
+ "y": 186.66887319822808
+ }
+ },
{
"id": "c4b23e64-7986-40c4-9cad-46327b12e204",
"type": "invocation",
@@ -322,159 +488,6 @@
"y": -575
}
},
- {
- "id": "018b1214-c2af-43a7-9910-fb687c6726d7",
- "type": "invocation",
- "data": {
- "id": "018b1214-c2af-43a7-9910-fb687c6726d7",
- "version": "1.2.3",
- "nodePack": "invokeai",
- "label": "",
- "notes": "",
- "type": "midas_depth_image_processor",
- "inputs": {
- "board": {
- "name": "board",
- "label": ""
- },
- "metadata": {
- "name": "metadata",
- "label": ""
- },
- "image": {
- "name": "image",
- "label": ""
- },
- "a_mult": {
- "name": "a_mult",
- "label": "",
- "value": 2
- },
- "bg_th": {
- "name": "bg_th",
- "label": "",
- "value": 0.1
- },
- "detect_resolution": {
- "name": "detect_resolution",
- "label": "",
- "value": 512
- },
- "image_resolution": {
- "name": "image_resolution",
- "label": "",
- "value": 512
- }
- },
- "isOpen": true,
- "isIntermediate": true,
- "useCache": true
- },
- "position": {
- "x": 4082.783145980783,
- "y": 0.01629251229994111
- }
- },
- {
- "id": "c826ba5e-9676-4475-b260-07b85e88753c",
- "type": "invocation",
- "data": {
- "id": "c826ba5e-9676-4475-b260-07b85e88753c",
- "version": "1.3.2",
- "nodePack": "invokeai",
- "label": "",
- "notes": "",
- "type": "canny_image_processor",
- "inputs": {
- "board": {
- "name": "board",
- "label": ""
- },
- "metadata": {
- "name": "metadata",
- "label": ""
- },
- "image": {
- "name": "image",
- "label": ""
- },
- "detect_resolution": {
- "name": "detect_resolution",
- "label": "",
- "value": 512
- },
- "image_resolution": {
- "name": "image_resolution",
- "label": "",
- "value": 512
- },
- "low_threshold": {
- "name": "low_threshold",
- "label": "",
- "value": 100
- },
- "high_threshold": {
- "name": "high_threshold",
- "label": "",
- "value": 200
- }
- },
- "isOpen": true,
- "isIntermediate": true,
- "useCache": true
- },
- "position": {
- "x": 4095.757337055795,
- "y": -455.63440891935863
- }
- },
- {
- "id": "9db25398-c869-4a63-8815-c6559341ef12",
- "type": "invocation",
- "data": {
- "id": "9db25398-c869-4a63-8815-c6559341ef12",
- "version": "1.2.2",
- "nodePack": "invokeai",
- "label": "",
- "notes": "",
- "type": "l2i",
- "inputs": {
- "board": {
- "name": "board",
- "label": ""
- },
- "metadata": {
- "name": "metadata",
- "label": ""
- },
- "latents": {
- "name": "latents",
- "label": ""
- },
- "vae": {
- "name": "vae",
- "label": ""
- },
- "tiled": {
- "name": "tiled",
- "label": "",
- "value": false
- },
- "fp32": {
- "name": "fp32",
- "label": "",
- "value": false
- }
- },
- "isOpen": true,
- "isIntermediate": false,
- "useCache": true
- },
- "position": {
- "x": 5675,
- "y": -825
- }
- },
{
"id": "ac481b7f-08bf-4a9d-9e0c-3a82ea5243ce",
"type": "invocation",
diff --git a/invokeai/app/services/workflow_records/default_workflows/MultiDiffusion SDXL (Beta).json b/invokeai/app/services/workflow_records/default_workflows/MultiDiffusion SDXL (Beta).json
new file mode 100644
index 0000000000..b2842315c4
--- /dev/null
+++ b/invokeai/app/services/workflow_records/default_workflows/MultiDiffusion SDXL (Beta).json
@@ -0,0 +1,2181 @@
+{
+ "name": "MultiDiffusion SDXL (Beta)",
+ "author": "Invoke",
+ "description": "A workflow to upscale an input image with tiled upscaling, using SDXL based models.",
+ "version": "1.0.0",
+ "contact": "invoke@invoke.ai",
+ "tags": "tiled, upscaling, sdxl",
+ "notes": "",
+ "exposedFields": [
+ {
+ "nodeId": "1ba845a6-eb88-49a1-a490-5fe6754f3ec9",
+ "fieldName": "value"
+ },
+ {
+ "nodeId": "c3b60a50-8039-4924-90e3-8c608e1fecb5",
+ "fieldName": "board"
+ },
+ {
+ "nodeId": "5ca87ace-edf9-49c7-a424-cd42416b86a7",
+ "fieldName": "image"
+ },
+ {
+ "nodeId": "1dd915a3-6756-48ed-b68b-ee3b4bd06c1d",
+ "fieldName": "a"
+ },
+ {
+ "nodeId": "696de0e1-cdd2-42e8-abeb-57a926bc6df6",
+ "fieldName": "a"
+ },
+ {
+ "nodeId": "bd094e2f-41e5-4b61-9f7b-56cf337d53fa",
+ "fieldName": "a"
+ },
+ {
+ "nodeId": "e277e4b7-01cd-4daa-86ab-7bfa3cdcd9fd",
+ "fieldName": "model"
+ },
+ {
+ "nodeId": "f0cd0d2f-9614-43f7-9944-a75b8d5ccd65",
+ "fieldName": "model_name"
+ },
+ {
+ "nodeId": "c26bff37-4f12-482f-ba45-3a5d729b4c4f",
+ "fieldName": "value"
+ },
+ {
+ "nodeId": "f5ca24ee-21c5-4c8c-8d3c-371b5079b086",
+ "fieldName": "value"
+ },
+ {
+ "nodeId": "094bc4ed-5c68-4342-84f4-51056c755796",
+ "fieldName": "value"
+ },
+ {
+ "nodeId": "100b3143-b3fb-4ff3-bb3c-8d4d3f89ae3a",
+ "fieldName": "vae_model"
+ },
+ {
+ "nodeId": "f936ebb3-6902-4df9-a775-6a68bac2da70",
+ "fieldName": "model"
+ }
+ ],
+ "meta": {
+ "version": "3.0.0",
+ "category": "default"
+ },
+ "nodes": [
+ {
+ "id": "f936ebb3-6902-4df9-a775-6a68bac2da70",
+ "type": "invocation",
+ "data": {
+ "id": "f936ebb3-6902-4df9-a775-6a68bac2da70",
+ "type": "model_identifier",
+ "version": "1.0.0",
+ "label": "",
+ "notes": "",
+ "isOpen": true,
+ "isIntermediate": true,
+ "useCache": true,
+ "inputs": {
+ "model": {
+ "name": "model",
+ "label": "ControlNet Model - choose xinsir's tile ControlNet",
+ "value": {
+ "key": "845b6959-1657-4164-be33-fe0f63ad1752",
+ "hash": "random:3b602344599a53b4e4c80a2259362e122543e6f9e8e428be76ab910f9368704b",
+ "name": "controlnet-tile-sdxl-1.0",
+ "base": "sdxl",
+ "type": "controlnet"
+ }
+ }
+ }
+ },
+ "position": {
+ "x": -3983.6167650620723,
+ "y": -1329.1431151846386
+ }
+ },
+ {
+ "id": "00239057-20d4-4cd2-a010-28727b256ea2",
+ "type": "invocation",
+ "data": {
+ "id": "00239057-20d4-4cd2-a010-28727b256ea2",
+ "type": "rand_int",
+ "version": "1.0.1",
+ "label": "",
+ "notes": "",
+ "isOpen": false,
+ "isIntermediate": true,
+ "useCache": false,
+ "inputs": {
+ "low": {
+ "name": "low",
+ "label": "",
+ "value": 0
+ },
+ "high": {
+ "name": "high",
+ "label": "",
+ "value": 2147483647
+ }
+ }
+ },
+ "position": {
+ "x": -4000,
+ "y": -1800
+ }
+ },
+ {
+ "id": "094bc4ed-5c68-4342-84f4-51056c755796",
+ "type": "invocation",
+ "data": {
+ "id": "094bc4ed-5c68-4342-84f4-51056c755796",
+ "type": "boolean",
+ "version": "1.0.1",
+ "label": "Tiled Option",
+ "notes": "",
+ "isOpen": true,
+ "isIntermediate": true,
+ "useCache": true,
+ "inputs": {
+ "value": {
+ "name": "value",
+ "label": "Tiled VAE (Saves VRAM, Color Inconsistency)",
+ "value": false
+ }
+ }
+ },
+ "position": {
+ "x": -2746.0467136971292,
+ "y": -2219.070070545694
+ }
+ },
+ {
+ "id": "f5ca24ee-21c5-4c8c-8d3c-371b5079b086",
+ "type": "invocation",
+ "data": {
+ "id": "f5ca24ee-21c5-4c8c-8d3c-371b5079b086",
+ "type": "string",
+ "version": "1.0.1",
+ "label": "",
+ "notes": "",
+ "isOpen": true,
+ "isIntermediate": true,
+ "useCache": true,
+ "inputs": {
+ "value": {
+ "name": "value",
+ "label": "Negative Prompt (Optional)",
+ "value": ""
+ }
+ }
+ },
+ "position": {
+ "x": -3525,
+ "y": -2525
+ }
+ },
+ {
+ "id": "c26bff37-4f12-482f-ba45-3a5d729b4c4f",
+ "type": "invocation",
+ "data": {
+ "id": "c26bff37-4f12-482f-ba45-3a5d729b4c4f",
+ "type": "string",
+ "version": "1.0.1",
+ "label": "",
+ "notes": "",
+ "isOpen": true,
+ "isIntermediate": true,
+ "useCache": true,
+ "inputs": {
+ "value": {
+ "name": "value",
+ "label": "Positive Prompt (Optional)",
+ "value": ""
+ }
+ }
+ },
+ "position": {
+ "x": -3525,
+ "y": -2825
+ }
+ },
+ {
+ "id": "6daa9526-382b-491d-964f-f53fc308664f",
+ "type": "invocation",
+ "data": {
+ "id": "6daa9526-382b-491d-964f-f53fc308664f",
+ "type": "float_math",
+ "version": "1.0.1",
+ "label": "",
+ "notes": "",
+ "isOpen": false,
+ "isIntermediate": true,
+ "useCache": true,
+ "inputs": {
+ "operation": {
+ "name": "operation",
+ "label": "",
+ "value": "ADD"
+ },
+ "a": {
+ "name": "a",
+ "label": "",
+ "value": 0.35
+ },
+ "b": {
+ "name": "b",
+ "label": "",
+ "value": 100
+ }
+ }
+ },
+ "position": {
+ "x": -3500,
+ "y": -1450
+ }
+ },
+ {
+ "id": "f1afd295-860f-48b6-a76a-90609bf2cc11",
+ "type": "invocation",
+ "data": {
+ "id": "f1afd295-860f-48b6-a76a-90609bf2cc11",
+ "type": "float_math",
+ "version": "1.0.1",
+ "label": "",
+ "notes": "",
+ "isOpen": false,
+ "isIntermediate": true,
+ "useCache": true,
+ "inputs": {
+ "operation": {
+ "name": "operation",
+ "label": "",
+ "value": "MUL"
+ },
+ "a": {
+ "name": "a",
+ "label": "",
+ "value": 1
+ },
+ "b": {
+ "name": "b",
+ "label": "",
+ "value": 0.013
+ }
+ }
+ },
+ "position": {
+ "x": -3500,
+ "y": -1550
+ }
+ },
+ {
+ "id": "88ae723e-4933-4371-b52d-3ada52a59d36",
+ "type": "invocation",
+ "data": {
+ "id": "88ae723e-4933-4371-b52d-3ada52a59d36",
+ "type": "float_math",
+ "version": "1.0.1",
+ "label": "",
+ "notes": "",
+ "isOpen": false,
+ "isIntermediate": true,
+ "useCache": true,
+ "inputs": {
+ "operation": {
+ "name": "operation",
+ "label": "",
+ "value": "ADD"
+ },
+ "a": {
+ "name": "a",
+ "label": "",
+ "value": 0
+ },
+ "b": {
+ "name": "b",
+ "label": "",
+ "value": 100
+ }
+ }
+ },
+ "position": {
+ "x": -3500,
+ "y": -1500
+ }
+ },
+ {
+ "id": "1dd915a3-6756-48ed-b68b-ee3b4bd06c1d",
+ "type": "invocation",
+ "data": {
+ "id": "1dd915a3-6756-48ed-b68b-ee3b4bd06c1d",
+ "type": "float_math",
+ "version": "1.0.1",
+ "label": "Creativity Input",
+ "notes": "",
+ "isOpen": false,
+ "isIntermediate": true,
+ "useCache": true,
+ "inputs": {
+ "operation": {
+ "name": "operation",
+ "label": "",
+ "value": "MUL"
+ },
+ "a": {
+ "name": "a",
+ "label": "Creativity Control (-10 to 10)",
+ "value": 5
+ },
+ "b": {
+ "name": "b",
+ "label": "",
+ "value": -1
+ }
+ }
+ },
+ "position": {
+ "x": -3500,
+ "y": -2125
+ }
+ },
+ {
+ "id": "c8f5c671-8c87-4d96-a75e-a9937ac6bc03",
+ "type": "invocation",
+ "data": {
+ "id": "c8f5c671-8c87-4d96-a75e-a9937ac6bc03",
+ "type": "float_math",
+ "version": "1.0.1",
+ "label": "",
+ "notes": "",
+ "isOpen": false,
+ "isIntermediate": true,
+ "useCache": true,
+ "inputs": {
+ "operation": {
+ "name": "operation",
+ "label": "",
+ "value": "DIV"
+ },
+ "a": {
+ "name": "a",
+ "label": "",
+ "value": 1
+ },
+ "b": {
+ "name": "b",
+ "label": "",
+ "value": 100
+ }
+ }
+ },
+ "position": {
+ "x": -3500,
+ "y": -1975
+ }
+ },
+ {
+ "id": "14e65dbe-4249-4b25-9a63-3a10cfaeb61c",
+ "type": "invocation",
+ "data": {
+ "id": "14e65dbe-4249-4b25-9a63-3a10cfaeb61c",
+ "type": "float_math",
+ "version": "1.0.1",
+ "label": "",
+ "notes": "",
+ "isOpen": false,
+ "isIntermediate": true,
+ "useCache": true,
+ "inputs": {
+ "operation": {
+ "name": "operation",
+ "label": "",
+ "value": "ADD"
+ },
+ "a": {
+ "name": "a",
+ "label": "A",
+ "value": 0
+ },
+ "b": {
+ "name": "b",
+ "label": "",
+ "value": 10
+ }
+ }
+ },
+ "position": {
+ "x": -3500,
+ "y": -2075
+ }
+ },
+ {
+ "id": "49a8cc12-aa19-48c5-b6b3-04e0b603b384",
+ "type": "invocation",
+ "data": {
+ "id": "49a8cc12-aa19-48c5-b6b3-04e0b603b384",
+ "type": "float_math",
+ "version": "1.0.1",
+ "label": "",
+ "notes": "",
+ "isOpen": false,
+ "isIntermediate": true,
+ "useCache": true,
+ "inputs": {
+ "operation": {
+ "name": "operation",
+ "label": "",
+ "value": "MUL"
+ },
+ "a": {
+ "name": "a",
+ "label": "",
+ "value": 1
+ },
+ "b": {
+ "name": "b",
+ "label": "",
+ "value": 4.99
+ }
+ }
+ },
+ "position": {
+ "x": -3500,
+ "y": -2025
+ }
+ },
+ {
+ "id": "e4d5ca7c-8fcf-4c59-9c58-67194c80dc73",
+ "type": "invocation",
+ "data": {
+ "id": "e4d5ca7c-8fcf-4c59-9c58-67194c80dc73",
+ "type": "float_math",
+ "version": "1.0.1",
+ "label": "",
+ "notes": "",
+ "isOpen": false,
+ "isIntermediate": true,
+ "useCache": true,
+ "inputs": {
+ "operation": {
+ "name": "operation",
+ "label": "",
+ "value": "ADD"
+ },
+ "a": {
+ "name": "a",
+ "label": "",
+ "value": 0
+ },
+ "b": {
+ "name": "b",
+ "label": "",
+ "value": 1
+ }
+ }
+ },
+ "position": {
+ "x": -3500,
+ "y": -1925
+ }
+ },
+ {
+ "id": "696de0e1-cdd2-42e8-abeb-57a926bc6df6",
+ "type": "invocation",
+ "data": {
+ "id": "696de0e1-cdd2-42e8-abeb-57a926bc6df6",
+ "type": "float_math",
+ "version": "1.0.1",
+ "label": "Sharpness Input",
+ "notes": "",
+ "isOpen": true,
+ "isIntermediate": true,
+ "useCache": true,
+ "inputs": {
+ "operation": {
+ "name": "operation",
+ "label": "",
+ "value": "ADD"
+ },
+ "a": {
+ "name": "a",
+ "label": "Sharpness Control (-10 to 10)",
+ "value": 0
+ },
+ "b": {
+ "name": "b",
+ "label": "",
+ "value": 10
+ }
+ }
+ },
+ "position": {
+ "x": -4750,
+ "y": -2275
+ }
+ },
+ {
+ "id": "79390b60-4077-4f94-ad0a-4229cc73ddb2",
+ "type": "invocation",
+ "data": {
+ "id": "79390b60-4077-4f94-ad0a-4229cc73ddb2",
+ "type": "float_math",
+ "version": "1.0.1",
+ "label": "",
+ "notes": "",
+ "isOpen": false,
+ "isIntermediate": true,
+ "useCache": true,
+ "inputs": {
+ "operation": {
+ "name": "operation",
+ "label": "",
+ "value": "MUL"
+ },
+ "a": {
+ "name": "a",
+ "label": "",
+ "value": 1
+ },
+ "b": {
+ "name": "b",
+ "label": "",
+ "value": 3.75
+ }
+ }
+ },
+ "position": {
+ "x": -4750,
+ "y": -2000
+ }
+ },
+ {
+ "id": "4950132a-2d06-4571-b2c0-55cb37a31e9b",
+ "type": "invocation",
+ "data": {
+ "id": "4950132a-2d06-4571-b2c0-55cb37a31e9b",
+ "type": "float_math",
+ "version": "1.0.1",
+ "label": "",
+ "notes": "",
+ "isOpen": false,
+ "isIntermediate": true,
+ "useCache": true,
+ "inputs": {
+ "operation": {
+ "name": "operation",
+ "label": "",
+ "value": "ADD"
+ },
+ "a": {
+ "name": "a",
+ "label": "",
+ "value": 25
+ },
+ "b": {
+ "name": "b",
+ "label": "",
+ "value": 1
+ }
+ }
+ },
+ "position": {
+ "x": -4750,
+ "y": -1950
+ }
+ },
+ {
+ "id": "bd094e2f-41e5-4b61-9f7b-56cf337d53fa",
+ "type": "invocation",
+ "data": {
+ "id": "bd094e2f-41e5-4b61-9f7b-56cf337d53fa",
+ "type": "float_math",
+ "version": "1.0.1",
+ "label": "Structural Input",
+ "notes": "",
+ "isOpen": false,
+ "isIntermediate": true,
+ "useCache": true,
+ "inputs": {
+ "operation": {
+ "name": "operation",
+ "label": "",
+ "value": "ADD"
+ },
+ "a": {
+ "name": "a",
+ "label": "Structural Control (-10 to 10)",
+ "value": 0
+ },
+ "b": {
+ "name": "b",
+ "label": "",
+ "value": 10
+ }
+ }
+ },
+ "position": {
+ "x": -3500,
+ "y": -1700
+ }
+ },
+ {
+ "id": "bc53651f-208b-440c-be30-f93f72ae700e",
+ "type": "invocation",
+ "data": {
+ "id": "bc53651f-208b-440c-be30-f93f72ae700e",
+ "type": "float_math",
+ "version": "1.0.1",
+ "label": "",
+ "notes": "",
+ "isOpen": false,
+ "isIntermediate": true,
+ "useCache": true,
+ "inputs": {
+ "operation": {
+ "name": "operation",
+ "label": "",
+ "value": "MUL"
+ },
+ "a": {
+ "name": "a",
+ "label": "",
+ "value": 1
+ },
+ "b": {
+ "name": "b",
+ "label": "",
+ "value": 0.025
+ }
+ }
+ },
+ "position": {
+ "x": -3500,
+ "y": -1650
+ }
+ },
+ {
+ "id": "67346654-cac0-446a-8cde-9af4b5a029a6",
+ "type": "invocation",
+ "data": {
+ "id": "67346654-cac0-446a-8cde-9af4b5a029a6",
+ "type": "float_math",
+ "version": "1.0.1",
+ "label": "",
+ "notes": "",
+ "isOpen": false,
+ "isIntermediate": true,
+ "useCache": true,
+ "inputs": {
+ "operation": {
+ "name": "operation",
+ "label": "",
+ "value": "ADD"
+ },
+ "a": {
+ "name": "a",
+ "label": "",
+ "value": 0.3
+ },
+ "b": {
+ "name": "b",
+ "label": "",
+ "value": 1
+ }
+ }
+ },
+ "position": {
+ "x": -3500,
+ "y": -1600
+ }
+ },
+ {
+ "id": "6636a27a-f130-4a13-b3e5-50b44e4a566f",
+ "type": "invocation",
+ "data": {
+ "id": "6636a27a-f130-4a13-b3e5-50b44e4a566f",
+ "type": "collect",
+ "version": "1.0.0",
+ "label": "",
+ "notes": "",
+ "isOpen": true,
+ "isIntermediate": true,
+ "useCache": true,
+ "inputs": {
+ "item": {
+ "name": "item",
+ "label": ""
+ }
+ }
+ },
+ "position": {
+ "x": -3125,
+ "y": -1500
+ }
+ },
+ {
+ "id": "b78f53b6-2eae-4956-97b4-7e73768d1491",
+ "type": "invocation",
+ "data": {
+ "id": "b78f53b6-2eae-4956-97b4-7e73768d1491",
+ "type": "controlnet",
+ "version": "1.1.2",
+ "label": "ControlNet (use xinsir's tile ControlNet)",
+ "notes": "",
+ "isOpen": true,
+ "isIntermediate": true,
+ "useCache": true,
+ "inputs": {
+ "image": {
+ "name": "image",
+ "label": ""
+ },
+ "control_model": {
+ "name": "control_model",
+ "label": ""
+ },
+ "control_weight": {
+ "name": "control_weight",
+ "label": "",
+ "value": 0.6
+ },
+ "begin_step_percent": {
+ "name": "begin_step_percent",
+ "label": "",
+ "value": 0
+ },
+ "end_step_percent": {
+ "name": "end_step_percent",
+ "label": "",
+ "value": 0.5
+ },
+ "control_mode": {
+ "name": "control_mode",
+ "label": "",
+ "value": "balanced"
+ },
+ "resize_mode": {
+ "name": "resize_mode",
+ "label": "",
+ "value": "just_resize"
+ }
+ }
+ },
+ "position": {
+ "x": -3493.4229674963885,
+ "y": -1359.2223984776113
+ }
+ },
+ {
+ "id": "27215391-b20e-412a-b854-7fa5927f5437",
+ "type": "invocation",
+ "data": {
+ "id": "27215391-b20e-412a-b854-7fa5927f5437",
+ "type": "sdxl_compel_prompt",
+ "version": "1.2.0",
+ "label": "",
+ "notes": "",
+ "isOpen": false,
+ "isIntermediate": true,
+ "useCache": true,
+ "inputs": {
+ "prompt": {
+ "name": "prompt",
+ "label": "",
+ "value": ""
+ },
+ "style": {
+ "name": "style",
+ "label": "",
+ "value": ""
+ },
+ "original_width": {
+ "name": "original_width",
+ "label": "",
+ "value": 4096
+ },
+ "original_height": {
+ "name": "original_height",
+ "label": "",
+ "value": 4096
+ },
+ "crop_top": {
+ "name": "crop_top",
+ "label": "",
+ "value": 0
+ },
+ "crop_left": {
+ "name": "crop_left",
+ "label": "",
+ "value": 0
+ },
+ "target_width": {
+ "name": "target_width",
+ "label": "",
+ "value": 1024
+ },
+ "target_height": {
+ "name": "target_height",
+ "label": "",
+ "value": 1024
+ },
+ "clip": {
+ "name": "clip",
+ "label": ""
+ },
+ "clip2": {
+ "name": "clip2",
+ "label": ""
+ },
+ "mask": {
+ "name": "mask",
+ "label": ""
+ }
+ }
+ },
+ "position": {
+ "x": -3525,
+ "y": -2300
+ }
+ },
+ {
+ "id": "100b3143-b3fb-4ff3-bb3c-8d4d3f89ae3a",
+ "type": "invocation",
+ "data": {
+ "id": "100b3143-b3fb-4ff3-bb3c-8d4d3f89ae3a",
+ "type": "vae_loader",
+ "version": "1.0.3",
+ "label": "",
+ "notes": "",
+ "isOpen": true,
+ "isIntermediate": true,
+ "useCache": true,
+ "inputs": {
+ "vae_model": {
+ "name": "vae_model",
+ "label": "",
+ "value": {
+ "key": "4bc2bddf-94d9-4efe-a8e2-5eda28710f4c",
+ "hash": "random:67e47a77a1fcef9c0f5cd5d889d71c191f07383a0bf587f1849b2bc3f359440a",
+ "name": "sdxl-vae-fp16-fix",
+ "base": "sdxl",
+ "type": "vae"
+ }
+ }
+ }
+ },
+ "position": {
+ "x": -4000,
+ "y": -2575
+ }
+ },
+ {
+ "id": "e277e4b7-01cd-4daa-86ab-7bfa3cdcd9fd",
+ "type": "invocation",
+ "data": {
+ "id": "e277e4b7-01cd-4daa-86ab-7bfa3cdcd9fd",
+ "type": "sdxl_model_loader",
+ "version": "1.0.3",
+ "label": "",
+ "notes": "",
+ "isOpen": true,
+ "isIntermediate": true,
+ "useCache": true,
+ "inputs": {
+ "model": {
+ "name": "model",
+ "label": "SDXL Model"
+ }
+ }
+ },
+ "position": {
+ "x": -4000,
+ "y": -2825
+ }
+ },
+ {
+ "id": "6142b69a-323f-4ecd-a7e5-67dc61349c51",
+ "type": "invocation",
+ "data": {
+ "id": "6142b69a-323f-4ecd-a7e5-67dc61349c51",
+ "type": "sdxl_compel_prompt",
+ "version": "1.2.0",
+ "label": "",
+ "notes": "",
+ "isOpen": false,
+ "isIntermediate": true,
+ "useCache": true,
+ "inputs": {
+ "prompt": {
+ "name": "prompt",
+ "label": "",
+ "value": ""
+ },
+ "style": {
+ "name": "style",
+ "label": "",
+ "value": ""
+ },
+ "original_width": {
+ "name": "original_width",
+ "label": "",
+ "value": 4096
+ },
+ "original_height": {
+ "name": "original_height",
+ "label": "",
+ "value": 4096
+ },
+ "crop_top": {
+ "name": "crop_top",
+ "label": "",
+ "value": 0
+ },
+ "crop_left": {
+ "name": "crop_left",
+ "label": "",
+ "value": 0
+ },
+ "target_width": {
+ "name": "target_width",
+ "label": "",
+ "value": 1024
+ },
+ "target_height": {
+ "name": "target_height",
+ "label": "",
+ "value": 1024
+ },
+ "clip": {
+ "name": "clip",
+ "label": ""
+ },
+ "clip2": {
+ "name": "clip2",
+ "label": ""
+ },
+ "mask": {
+ "name": "mask",
+ "label": ""
+ }
+ }
+ },
+ "position": {
+ "x": -3525,
+ "y": -2600
+ }
+ },
+ {
+ "id": "041c59cc-f9e4-4dc9-8b31-84648c5f3ebe",
+ "type": "invocation",
+ "data": {
+ "id": "041c59cc-f9e4-4dc9-8b31-84648c5f3ebe",
+ "type": "unsharp_mask",
+ "version": "1.2.2",
+ "label": "",
+ "notes": "",
+ "isOpen": false,
+ "isIntermediate": true,
+ "useCache": true,
+ "inputs": {
+ "board": {
+ "name": "board",
+ "label": ""
+ },
+ "metadata": {
+ "name": "metadata",
+ "label": ""
+ },
+ "image": {
+ "name": "image",
+ "label": ""
+ },
+ "radius": {
+ "name": "radius",
+ "label": "",
+ "value": 2
+ },
+ "strength": {
+ "name": "strength",
+ "label": "",
+ "value": 50
+ }
+ }
+ },
+ "position": {
+ "x": -4400,
+ "y": -1875
+ }
+ },
+ {
+ "id": "53c2d5fd-863d-4950-93e0-628f3d61b493",
+ "type": "invocation",
+ "data": {
+ "id": "53c2d5fd-863d-4950-93e0-628f3d61b493",
+ "type": "unsharp_mask",
+ "version": "1.2.2",
+ "label": "",
+ "notes": "",
+ "isOpen": false,
+ "isIntermediate": true,
+ "useCache": true,
+ "inputs": {
+ "board": {
+ "name": "board",
+ "label": ""
+ },
+ "metadata": {
+ "name": "metadata",
+ "label": ""
+ },
+ "image": {
+ "name": "image",
+ "label": ""
+ },
+ "radius": {
+ "name": "radius",
+ "label": "",
+ "value": 2
+ },
+ "strength": {
+ "name": "strength",
+ "label": "",
+ "value": 50
+ }
+ }
+ },
+ "position": {
+ "x": -4750,
+ "y": -1875
+ }
+ },
+ {
+ "id": "117f982a-03da-49b1-bf9f-29711160ac02",
+ "type": "invocation",
+ "data": {
+ "id": "117f982a-03da-49b1-bf9f-29711160ac02",
+ "type": "i2l",
+ "version": "1.1.0",
+ "label": "",
+ "notes": "",
+ "isOpen": false,
+ "isIntermediate": true,
+ "useCache": true,
+ "inputs": {
+ "image": {
+ "name": "image",
+ "label": ""
+ },
+ "vae": {
+ "name": "vae",
+ "label": ""
+ },
+ "tiled": {
+ "name": "tiled",
+ "label": "",
+ "value": false
+ },
+ "tile_size": {
+ "name": "tile_size",
+ "label": "",
+ "value": 0
+ },
+ "fp32": {
+ "name": "fp32",
+ "label": "",
+ "value": false
+ }
+ }
+ },
+ "position": {
+ "x": -4000,
+ "y": -1875
+ }
+ },
+ {
+ "id": "c3b60a50-8039-4924-90e3-8c608e1fecb5",
+ "type": "invocation",
+ "data": {
+ "id": "c3b60a50-8039-4924-90e3-8c608e1fecb5",
+ "type": "l2i",
+ "version": "1.3.0",
+ "label": "",
+ "notes": "",
+ "isOpen": true,
+ "isIntermediate": false,
+ "useCache": true,
+ "inputs": {
+ "board": {
+ "name": "board",
+ "label": "Output Board"
+ },
+ "metadata": {
+ "name": "metadata",
+ "label": ""
+ },
+ "latents": {
+ "name": "latents",
+ "label": ""
+ },
+ "vae": {
+ "name": "vae",
+ "label": ""
+ },
+ "tiled": {
+ "name": "tiled",
+ "label": "",
+ "value": false
+ },
+ "tile_size": {
+ "name": "tile_size",
+ "label": "",
+ "value": 0
+ },
+ "fp32": {
+ "name": "fp32",
+ "label": "",
+ "value": false
+ }
+ }
+ },
+ "position": {
+ "x": -2750,
+ "y": -2575
+ }
+ },
+ {
+ "id": "8dba0d37-cd2e-4fe5-ae9f-5464b85a8a7a",
+ "type": "invocation",
+ "data": {
+ "id": "8dba0d37-cd2e-4fe5-ae9f-5464b85a8a7a",
+ "type": "tiled_multi_diffusion_denoise_latents",
+ "version": "1.0.0",
+ "label": "",
+ "notes": "",
+ "isOpen": true,
+ "isIntermediate": true,
+ "useCache": true,
+ "inputs": {
+ "positive_conditioning": {
+ "name": "positive_conditioning",
+ "label": ""
+ },
+ "negative_conditioning": {
+ "name": "negative_conditioning",
+ "label": ""
+ },
+ "noise": {
+ "name": "noise",
+ "label": ""
+ },
+ "latents": {
+ "name": "latents",
+ "label": ""
+ },
+ "tile_height": {
+ "name": "tile_height",
+ "label": "",
+ "value": 1024
+ },
+ "tile_width": {
+ "name": "tile_width",
+ "label": "",
+ "value": 1024
+ },
+ "tile_overlap": {
+ "name": "tile_overlap",
+ "label": "",
+ "value": 128
+ },
+ "steps": {
+ "name": "steps",
+ "label": "",
+ "value": 25
+ },
+ "cfg_scale": {
+ "name": "cfg_scale",
+ "label": "",
+ "value": 5
+ },
+ "denoising_start": {
+ "name": "denoising_start",
+ "label": "",
+ "value": 0.6
+ },
+ "denoising_end": {
+ "name": "denoising_end",
+ "label": "",
+ "value": 1
+ },
+ "scheduler": {
+ "name": "scheduler",
+ "label": "",
+ "value": "kdpm_2"
+ },
+ "unet": {
+ "name": "unet",
+ "label": ""
+ },
+ "cfg_rescale_multiplier": {
+ "name": "cfg_rescale_multiplier",
+ "label": "",
+ "value": 0
+ },
+ "control": {
+ "name": "control",
+ "label": ""
+ }
+ }
+ },
+ "position": {
+ "x": -3125,
+ "y": -2575
+ }
+ },
+ {
+ "id": "1ba845a6-eb88-49a1-a490-5fe6754f3ec9",
+ "type": "invocation",
+ "data": {
+ "id": "1ba845a6-eb88-49a1-a490-5fe6754f3ec9",
+ "type": "integer",
+ "version": "1.0.1",
+ "label": "",
+ "notes": "",
+ "isOpen": true,
+ "isIntermediate": true,
+ "useCache": true,
+ "inputs": {
+ "value": {
+ "name": "value",
+ "label": "Scale (2x, 4x)",
+ "value": 2
+ }
+ }
+ },
+ "position": {
+ "x": -4400,
+ "y": -2175
+ }
+ },
+ {
+ "id": "d350feac-9686-4e0d-bd46-a96bd2630818",
+ "type": "invocation",
+ "data": {
+ "id": "d350feac-9686-4e0d-bd46-a96bd2630818",
+ "type": "integer_math",
+ "version": "1.0.1",
+ "label": "",
+ "notes": "",
+ "isOpen": false,
+ "isIntermediate": true,
+ "useCache": true,
+ "inputs": {
+ "operation": {
+ "name": "operation",
+ "label": "",
+ "value": "MUL"
+ },
+ "a": {
+ "name": "a",
+ "label": "",
+ "value": 1
+ },
+ "b": {
+ "name": "b",
+ "label": "",
+ "value": 1
+ }
+ }
+ },
+ "position": {
+ "x": -4400,
+ "y": -1950
+ }
+ },
+ {
+ "id": "5b256f14-caab-40ff-b8f0-9679cd542163",
+ "type": "invocation",
+ "data": {
+ "id": "5b256f14-caab-40ff-b8f0-9679cd542163",
+ "type": "integer_math",
+ "version": "1.0.1",
+ "label": "",
+ "notes": "",
+ "isOpen": false,
+ "isIntermediate": true,
+ "useCache": true,
+ "inputs": {
+ "operation": {
+ "name": "operation",
+ "label": "",
+ "value": "MUL"
+ },
+ "a": {
+ "name": "a",
+ "label": "",
+ "value": 1
+ },
+ "b": {
+ "name": "b",
+ "label": "",
+ "value": 1
+ }
+ }
+ },
+ "position": {
+ "x": -4400,
+ "y": -2000
+ }
+ },
+ {
+ "id": "7671553a-cd4b-4e25-8332-9d5667e64493",
+ "type": "invocation",
+ "data": {
+ "id": "7671553a-cd4b-4e25-8332-9d5667e64493",
+ "type": "img_resize",
+ "version": "1.2.2",
+ "label": "",
+ "notes": "",
+ "isOpen": true,
+ "isIntermediate": true,
+ "useCache": true,
+ "inputs": {
+ "board": {
+ "name": "board",
+ "label": ""
+ },
+ "metadata": {
+ "name": "metadata",
+ "label": ""
+ },
+ "image": {
+ "name": "image",
+ "label": ""
+ },
+ "width": {
+ "name": "width",
+ "label": "",
+ "value": 512
+ },
+ "height": {
+ "name": "height",
+ "label": "",
+ "value": 512
+ },
+ "resample_mode": {
+ "name": "resample_mode",
+ "label": "",
+ "value": "lanczos"
+ }
+ }
+ },
+ "position": {
+ "x": -4375,
+ "y": -1825
+ }
+ },
+ {
+ "id": "be4082d6-e238-40ea-a9df-fc0d725e8895",
+ "type": "invocation",
+ "data": {
+ "id": "be4082d6-e238-40ea-a9df-fc0d725e8895",
+ "type": "controlnet",
+ "version": "1.1.2",
+ "label": "ControlNet (use xinsir's tile ControlNet)",
+ "notes": "",
+ "isOpen": true,
+ "isIntermediate": true,
+ "useCache": true,
+ "inputs": {
+ "image": {
+ "name": "image",
+ "label": ""
+ },
+ "control_model": {
+ "name": "control_model",
+ "label": ""
+ },
+ "control_weight": {
+ "name": "control_weight",
+ "label": "",
+ "value": 0.25
+ },
+ "begin_step_percent": {
+ "name": "begin_step_percent",
+ "label": "",
+ "value": 0.5
+ },
+ "end_step_percent": {
+ "name": "end_step_percent",
+ "label": "",
+ "value": 0.8
+ },
+ "control_mode": {
+ "name": "control_mode",
+ "label": "Control Mode",
+ "value": "balanced"
+ },
+ "resize_mode": {
+ "name": "resize_mode",
+ "label": "",
+ "value": "just_resize"
+ }
+ }
+ },
+ "position": {
+ "x": -3131.577032503611,
+ "y": -1392.1075609956667
+ }
+ },
+ {
+ "id": "8923451b-5a27-4395-b7f2-dce875fca6f5",
+ "type": "invocation",
+ "data": {
+ "id": "8923451b-5a27-4395-b7f2-dce875fca6f5",
+ "type": "noise",
+ "version": "1.0.2",
+ "label": "",
+ "notes": "",
+ "isOpen": true,
+ "isIntermediate": true,
+ "useCache": true,
+ "inputs": {
+ "seed": {
+ "name": "seed",
+ "label": "",
+ "value": 3
+ },
+ "width": {
+ "name": "width",
+ "label": "",
+ "value": 512
+ },
+ "height": {
+ "name": "height",
+ "label": "",
+ "value": 512
+ },
+ "use_cpu": {
+ "name": "use_cpu",
+ "label": "",
+ "value": true
+ }
+ }
+ },
+ "position": {
+ "x": -4000,
+ "y": -1750
+ }
+ },
+ {
+ "id": "f0cd0d2f-9614-43f7-9944-a75b8d5ccd65",
+ "type": "invocation",
+ "data": {
+ "id": "f0cd0d2f-9614-43f7-9944-a75b8d5ccd65",
+ "type": "esrgan",
+ "version": "1.3.2",
+ "label": "",
+ "notes": "",
+ "isOpen": true,
+ "isIntermediate": true,
+ "useCache": true,
+ "inputs": {
+ "board": {
+ "name": "board",
+ "label": ""
+ },
+ "metadata": {
+ "name": "metadata",
+ "label": ""
+ },
+ "image": {
+ "name": "image",
+ "label": ""
+ },
+ "model_name": {
+ "name": "model_name",
+ "label": "Upscaling Model",
+ "value": "RealESRGAN_x4plus.pth"
+ },
+ "tile_size": {
+ "name": "tile_size",
+ "label": "",
+ "value": 500
+ }
+ }
+ },
+ "position": {
+ "x": -4750,
+ "y": -1825
+ }
+ },
+ {
+ "id": "7dbb756b-7d79-431c-a46d-d8f7b082c127",
+ "type": "invocation",
+ "data": {
+ "id": "7dbb756b-7d79-431c-a46d-d8f7b082c127",
+ "version": "1.0.1",
+ "label": "",
+ "notes": "",
+ "type": "float_to_int",
+ "inputs": {
+ "value": {
+ "name": "value",
+ "label": "",
+ "value": 0
+ },
+ "multiple": {
+ "name": "multiple",
+ "label": "",
+ "value": 8
+ },
+ "method": {
+ "name": "method",
+ "label": "",
+ "value": "Floor"
+ }
+ },
+ "isOpen": false,
+ "isIntermediate": true,
+ "useCache": true
+ },
+ "position": {
+ "x": -4000,
+ "y": -1950
+ }
+ },
+ {
+ "id": "5ca87ace-edf9-49c7-a424-cd42416b86a7",
+ "type": "invocation",
+ "data": {
+ "id": "5ca87ace-edf9-49c7-a424-cd42416b86a7",
+ "version": "1.0.2",
+ "label": "",
+ "notes": "",
+ "type": "image",
+ "inputs": {
+ "image": {
+ "name": "image",
+ "label": "Image to Upscale"
+ }
+ },
+ "isOpen": true,
+ "isIntermediate": true,
+ "useCache": true
+ },
+ "position": {
+ "x": -4750,
+ "y": -2850
+ }
+ },
+ {
+ "id": "f5d9bf3b-2646-4b17-9894-20fd2b4218ea",
+ "type": "invocation",
+ "data": {
+ "id": "f5d9bf3b-2646-4b17-9894-20fd2b4218ea",
+ "version": "1.0.1",
+ "label": "",
+ "notes": "",
+ "type": "float_to_int",
+ "inputs": {
+ "value": {
+ "name": "value",
+ "label": "",
+ "value": 8
+ },
+ "multiple": {
+ "name": "multiple",
+ "label": "",
+ "value": 8
+ },
+ "method": {
+ "name": "method",
+ "label": "",
+ "value": "Floor"
+ }
+ },
+ "isOpen": false,
+ "isIntermediate": true,
+ "useCache": true
+ },
+ "position": {
+ "x": -4000,
+ "y": -2000
+ }
+ }
+ ],
+ "edges": [
+ {
+ "id": "reactflow__edge-f936ebb3-6902-4df9-a775-6a68bac2da70model-be4082d6-e238-40ea-a9df-fc0d725e8895control_model",
+ "type": "default",
+ "source": "f936ebb3-6902-4df9-a775-6a68bac2da70",
+ "target": "be4082d6-e238-40ea-a9df-fc0d725e8895",
+ "sourceHandle": "model",
+ "targetHandle": "control_model"
+ },
+ {
+ "id": "reactflow__edge-f936ebb3-6902-4df9-a775-6a68bac2da70model-b78f53b6-2eae-4956-97b4-7e73768d1491control_model",
+ "type": "default",
+ "source": "f936ebb3-6902-4df9-a775-6a68bac2da70",
+ "target": "b78f53b6-2eae-4956-97b4-7e73768d1491",
+ "sourceHandle": "model",
+ "targetHandle": "control_model"
+ },
+ {
+ "id": "49a8cc12-aa19-48c5-b6b3-04e0b603b384-c8f5c671-8c87-4d96-a75e-a9937ac6bc03-collapsed",
+ "type": "collapsed",
+ "source": "49a8cc12-aa19-48c5-b6b3-04e0b603b384",
+ "target": "c8f5c671-8c87-4d96-a75e-a9937ac6bc03"
+ },
+ {
+ "id": "14e65dbe-4249-4b25-9a63-3a10cfaeb61c-49a8cc12-aa19-48c5-b6b3-04e0b603b384-collapsed",
+ "type": "collapsed",
+ "source": "14e65dbe-4249-4b25-9a63-3a10cfaeb61c",
+ "target": "49a8cc12-aa19-48c5-b6b3-04e0b603b384"
+ },
+ {
+ "id": "1dd915a3-6756-48ed-b68b-ee3b4bd06c1d-14e65dbe-4249-4b25-9a63-3a10cfaeb61c-collapsed",
+ "type": "collapsed",
+ "source": "1dd915a3-6756-48ed-b68b-ee3b4bd06c1d",
+ "target": "14e65dbe-4249-4b25-9a63-3a10cfaeb61c"
+ },
+ {
+ "id": "reactflow__edge-00239057-20d4-4cd2-a010-28727b256ea2value-8923451b-5a27-4395-b7f2-dce875fca6f5seed",
+ "type": "default",
+ "source": "00239057-20d4-4cd2-a010-28727b256ea2",
+ "target": "8923451b-5a27-4395-b7f2-dce875fca6f5",
+ "sourceHandle": "value",
+ "targetHandle": "seed"
+ },
+ {
+ "id": "reactflow__edge-094bc4ed-5c68-4342-84f4-51056c755796value-c3b60a50-8039-4924-90e3-8c608e1fecb5tiled",
+ "type": "default",
+ "source": "094bc4ed-5c68-4342-84f4-51056c755796",
+ "target": "c3b60a50-8039-4924-90e3-8c608e1fecb5",
+ "sourceHandle": "value",
+ "targetHandle": "tiled"
+ },
+ {
+ "id": "reactflow__edge-094bc4ed-5c68-4342-84f4-51056c755796value-117f982a-03da-49b1-bf9f-29711160ac02tiled",
+ "type": "default",
+ "source": "094bc4ed-5c68-4342-84f4-51056c755796",
+ "target": "117f982a-03da-49b1-bf9f-29711160ac02",
+ "sourceHandle": "value",
+ "targetHandle": "tiled"
+ },
+ {
+ "id": "c8f5c671-8c87-4d96-a75e-a9937ac6bc03-e4d5ca7c-8fcf-4c59-9c58-67194c80dc73-collapsed",
+ "type": "collapsed",
+ "source": "c8f5c671-8c87-4d96-a75e-a9937ac6bc03",
+ "target": "e4d5ca7c-8fcf-4c59-9c58-67194c80dc73"
+ },
+ {
+ "id": "d350feac-9686-4e0d-bd46-a96bd2630818-7dbb756b-7d79-431c-a46d-d8f7b082c127-collapsed",
+ "type": "collapsed",
+ "source": "d350feac-9686-4e0d-bd46-a96bd2630818",
+ "target": "7dbb756b-7d79-431c-a46d-d8f7b082c127"
+ },
+ {
+ "id": "5b256f14-caab-40ff-b8f0-9679cd542163-f5d9bf3b-2646-4b17-9894-20fd2b4218ea-collapsed",
+ "type": "collapsed",
+ "source": "5b256f14-caab-40ff-b8f0-9679cd542163",
+ "target": "f5d9bf3b-2646-4b17-9894-20fd2b4218ea"
+ },
+ {
+ "id": "4950132a-2d06-4571-b2c0-55cb37a31e9b-041c59cc-f9e4-4dc9-8b31-84648c5f3ebe-collapsed",
+ "type": "collapsed",
+ "source": "4950132a-2d06-4571-b2c0-55cb37a31e9b",
+ "target": "041c59cc-f9e4-4dc9-8b31-84648c5f3ebe"
+ },
+ {
+ "id": "4950132a-2d06-4571-b2c0-55cb37a31e9b-53c2d5fd-863d-4950-93e0-628f3d61b493-collapsed",
+ "type": "collapsed",
+ "source": "4950132a-2d06-4571-b2c0-55cb37a31e9b",
+ "target": "53c2d5fd-863d-4950-93e0-628f3d61b493"
+ },
+ {
+ "id": "reactflow__edge-f5ca24ee-21c5-4c8c-8d3c-371b5079b086value-27215391-b20e-412a-b854-7fa5927f5437style",
+ "type": "default",
+ "source": "f5ca24ee-21c5-4c8c-8d3c-371b5079b086",
+ "target": "27215391-b20e-412a-b854-7fa5927f5437",
+ "sourceHandle": "value",
+ "targetHandle": "style"
+ },
+ {
+ "id": "reactflow__edge-f5ca24ee-21c5-4c8c-8d3c-371b5079b086value-27215391-b20e-412a-b854-7fa5927f5437prompt",
+ "type": "default",
+ "source": "f5ca24ee-21c5-4c8c-8d3c-371b5079b086",
+ "target": "27215391-b20e-412a-b854-7fa5927f5437",
+ "sourceHandle": "value",
+ "targetHandle": "prompt"
+ },
+ {
+ "id": "reactflow__edge-c26bff37-4f12-482f-ba45-3a5d729b4c4fvalue-6142b69a-323f-4ecd-a7e5-67dc61349c51style",
+ "type": "default",
+ "source": "c26bff37-4f12-482f-ba45-3a5d729b4c4f",
+ "target": "6142b69a-323f-4ecd-a7e5-67dc61349c51",
+ "sourceHandle": "value",
+ "targetHandle": "style"
+ },
+ {
+ "id": "reactflow__edge-c26bff37-4f12-482f-ba45-3a5d729b4c4fvalue-6142b69a-323f-4ecd-a7e5-67dc61349c51prompt",
+ "type": "default",
+ "source": "c26bff37-4f12-482f-ba45-3a5d729b4c4f",
+ "target": "6142b69a-323f-4ecd-a7e5-67dc61349c51",
+ "sourceHandle": "value",
+ "targetHandle": "prompt"
+ },
+ {
+ "id": "88ae723e-4933-4371-b52d-3ada52a59d36-6daa9526-382b-491d-964f-f53fc308664f-collapsed",
+ "type": "collapsed",
+ "source": "88ae723e-4933-4371-b52d-3ada52a59d36",
+ "target": "6daa9526-382b-491d-964f-f53fc308664f"
+ },
+ {
+ "id": "f1afd295-860f-48b6-a76a-90609bf2cc11-88ae723e-4933-4371-b52d-3ada52a59d36-collapsed",
+ "type": "collapsed",
+ "source": "f1afd295-860f-48b6-a76a-90609bf2cc11",
+ "target": "88ae723e-4933-4371-b52d-3ada52a59d36"
+ },
+ {
+ "id": "bc53651f-208b-440c-be30-f93f72ae700e-67346654-cac0-446a-8cde-9af4b5a029a6-collapsed",
+ "type": "collapsed",
+ "source": "bc53651f-208b-440c-be30-f93f72ae700e",
+ "target": "67346654-cac0-446a-8cde-9af4b5a029a6"
+ },
+ {
+ "id": "reactflow__edge-67346654-cac0-446a-8cde-9af4b5a029a6value-be4082d6-e238-40ea-a9df-fc0d725e8895begin_step_percent",
+ "type": "default",
+ "source": "67346654-cac0-446a-8cde-9af4b5a029a6",
+ "target": "be4082d6-e238-40ea-a9df-fc0d725e8895",
+ "sourceHandle": "value",
+ "targetHandle": "begin_step_percent"
+ },
+ {
+ "id": "reactflow__edge-67346654-cac0-446a-8cde-9af4b5a029a6value-b78f53b6-2eae-4956-97b4-7e73768d1491end_step_percent",
+ "type": "default",
+ "source": "67346654-cac0-446a-8cde-9af4b5a029a6",
+ "target": "b78f53b6-2eae-4956-97b4-7e73768d1491",
+ "sourceHandle": "value",
+ "targetHandle": "end_step_percent"
+ },
+ {
+ "id": "bd094e2f-41e5-4b61-9f7b-56cf337d53fa-f1afd295-860f-48b6-a76a-90609bf2cc11-collapsed",
+ "type": "collapsed",
+ "source": "bd094e2f-41e5-4b61-9f7b-56cf337d53fa",
+ "target": "f1afd295-860f-48b6-a76a-90609bf2cc11"
+ },
+ {
+ "id": "bd094e2f-41e5-4b61-9f7b-56cf337d53fa-bc53651f-208b-440c-be30-f93f72ae700e-collapsed",
+ "type": "collapsed",
+ "source": "bd094e2f-41e5-4b61-9f7b-56cf337d53fa",
+ "target": "bc53651f-208b-440c-be30-f93f72ae700e"
+ },
+ {
+ "id": "reactflow__edge-bc53651f-208b-440c-be30-f93f72ae700evalue-67346654-cac0-446a-8cde-9af4b5a029a6b",
+ "type": "default",
+ "source": "bc53651f-208b-440c-be30-f93f72ae700e",
+ "target": "67346654-cac0-446a-8cde-9af4b5a029a6",
+ "sourceHandle": "value",
+ "targetHandle": "b",
+ "hidden": true
+ },
+ {
+ "id": "reactflow__edge-6daa9526-382b-491d-964f-f53fc308664fvalue-b78f53b6-2eae-4956-97b4-7e73768d1491control_weight",
+ "type": "default",
+ "source": "6daa9526-382b-491d-964f-f53fc308664f",
+ "target": "b78f53b6-2eae-4956-97b4-7e73768d1491",
+ "sourceHandle": "value",
+ "targetHandle": "control_weight"
+ },
+ {
+ "id": "reactflow__edge-88ae723e-4933-4371-b52d-3ada52a59d36value-6daa9526-382b-491d-964f-f53fc308664fb",
+ "type": "default",
+ "source": "88ae723e-4933-4371-b52d-3ada52a59d36",
+ "target": "6daa9526-382b-491d-964f-f53fc308664f",
+ "sourceHandle": "value",
+ "targetHandle": "b",
+ "hidden": true
+ },
+ {
+ "id": "reactflow__edge-88ae723e-4933-4371-b52d-3ada52a59d36value-be4082d6-e238-40ea-a9df-fc0d725e8895control_weight",
+ "type": "default",
+ "source": "88ae723e-4933-4371-b52d-3ada52a59d36",
+ "target": "be4082d6-e238-40ea-a9df-fc0d725e8895",
+ "sourceHandle": "value",
+ "targetHandle": "control_weight"
+ },
+ {
+ "id": "reactflow__edge-f1afd295-860f-48b6-a76a-90609bf2cc11value-88ae723e-4933-4371-b52d-3ada52a59d36b",
+ "type": "default",
+ "source": "f1afd295-860f-48b6-a76a-90609bf2cc11",
+ "target": "88ae723e-4933-4371-b52d-3ada52a59d36",
+ "sourceHandle": "value",
+ "targetHandle": "b",
+ "hidden": true
+ },
+ {
+ "id": "reactflow__edge-bd094e2f-41e5-4b61-9f7b-56cf337d53favalue-f1afd295-860f-48b6-a76a-90609bf2cc11a",
+ "type": "default",
+ "source": "bd094e2f-41e5-4b61-9f7b-56cf337d53fa",
+ "target": "f1afd295-860f-48b6-a76a-90609bf2cc11",
+ "sourceHandle": "value",
+ "targetHandle": "a",
+ "hidden": true
+ },
+ {
+ "id": "reactflow__edge-1dd915a3-6756-48ed-b68b-ee3b4bd06c1dvalue-14e65dbe-4249-4b25-9a63-3a10cfaeb61ca",
+ "type": "default",
+ "source": "1dd915a3-6756-48ed-b68b-ee3b4bd06c1d",
+ "target": "14e65dbe-4249-4b25-9a63-3a10cfaeb61c",
+ "sourceHandle": "value",
+ "targetHandle": "a",
+ "hidden": true
+ },
+ {
+ "id": "reactflow__edge-e4d5ca7c-8fcf-4c59-9c58-67194c80dc73value-8dba0d37-cd2e-4fe5-ae9f-5464b85a8a7adenoising_start",
+ "type": "default",
+ "source": "e4d5ca7c-8fcf-4c59-9c58-67194c80dc73",
+ "target": "8dba0d37-cd2e-4fe5-ae9f-5464b85a8a7a",
+ "sourceHandle": "value",
+ "targetHandle": "denoising_start"
+ },
+ {
+ "id": "reactflow__edge-c8f5c671-8c87-4d96-a75e-a9937ac6bc03value-e4d5ca7c-8fcf-4c59-9c58-67194c80dc73b",
+ "type": "default",
+ "source": "c8f5c671-8c87-4d96-a75e-a9937ac6bc03",
+ "target": "e4d5ca7c-8fcf-4c59-9c58-67194c80dc73",
+ "sourceHandle": "value",
+ "targetHandle": "b",
+ "hidden": true
+ },
+ {
+ "id": "reactflow__edge-49a8cc12-aa19-48c5-b6b3-04e0b603b384value-c8f5c671-8c87-4d96-a75e-a9937ac6bc03a",
+ "type": "default",
+ "source": "49a8cc12-aa19-48c5-b6b3-04e0b603b384",
+ "target": "c8f5c671-8c87-4d96-a75e-a9937ac6bc03",
+ "sourceHandle": "value",
+ "targetHandle": "a",
+ "hidden": true
+ },
+ {
+ "id": "reactflow__edge-14e65dbe-4249-4b25-9a63-3a10cfaeb61cvalue-49a8cc12-aa19-48c5-b6b3-04e0b603b384a",
+ "type": "default",
+ "source": "14e65dbe-4249-4b25-9a63-3a10cfaeb61c",
+ "target": "49a8cc12-aa19-48c5-b6b3-04e0b603b384",
+ "sourceHandle": "value",
+ "targetHandle": "a",
+ "hidden": true
+ },
+ {
+ "id": "79390b60-4077-4f94-ad0a-4229cc73ddb2-4950132a-2d06-4571-b2c0-55cb37a31e9b-collapsed",
+ "type": "collapsed",
+ "source": "79390b60-4077-4f94-ad0a-4229cc73ddb2",
+ "target": "4950132a-2d06-4571-b2c0-55cb37a31e9b"
+ },
+ {
+ "id": "reactflow__edge-4950132a-2d06-4571-b2c0-55cb37a31e9bvalue-041c59cc-f9e4-4dc9-8b31-84648c5f3ebestrength",
+ "type": "default",
+ "source": "4950132a-2d06-4571-b2c0-55cb37a31e9b",
+ "target": "041c59cc-f9e4-4dc9-8b31-84648c5f3ebe",
+ "sourceHandle": "value",
+ "targetHandle": "strength",
+ "hidden": true
+ },
+ {
+ "id": "reactflow__edge-4950132a-2d06-4571-b2c0-55cb37a31e9bvalue-53c2d5fd-863d-4950-93e0-628f3d61b493strength",
+ "type": "default",
+ "source": "4950132a-2d06-4571-b2c0-55cb37a31e9b",
+ "target": "53c2d5fd-863d-4950-93e0-628f3d61b493",
+ "sourceHandle": "value",
+ "targetHandle": "strength",
+ "hidden": true
+ },
+ {
+ "id": "reactflow__edge-79390b60-4077-4f94-ad0a-4229cc73ddb2value-4950132a-2d06-4571-b2c0-55cb37a31e9bb",
+ "type": "default",
+ "source": "79390b60-4077-4f94-ad0a-4229cc73ddb2",
+ "target": "4950132a-2d06-4571-b2c0-55cb37a31e9b",
+ "sourceHandle": "value",
+ "targetHandle": "b",
+ "hidden": true
+ },
+ {
+ "id": "reactflow__edge-696de0e1-cdd2-42e8-abeb-57a926bc6df6value-79390b60-4077-4f94-ad0a-4229cc73ddb2a",
+ "type": "default",
+ "source": "696de0e1-cdd2-42e8-abeb-57a926bc6df6",
+ "target": "79390b60-4077-4f94-ad0a-4229cc73ddb2",
+ "sourceHandle": "value",
+ "targetHandle": "a"
+ },
+ {
+ "id": "reactflow__edge-bd094e2f-41e5-4b61-9f7b-56cf337d53favalue-bc53651f-208b-440c-be30-f93f72ae700ea",
+ "type": "default",
+ "source": "bd094e2f-41e5-4b61-9f7b-56cf337d53fa",
+ "target": "bc53651f-208b-440c-be30-f93f72ae700e",
+ "sourceHandle": "value",
+ "targetHandle": "a",
+ "hidden": true
+ },
+ {
+ "id": "reactflow__edge-6636a27a-f130-4a13-b3e5-50b44e4a566fcollection-8dba0d37-cd2e-4fe5-ae9f-5464b85a8a7acontrol",
+ "type": "default",
+ "source": "6636a27a-f130-4a13-b3e5-50b44e4a566f",
+ "target": "8dba0d37-cd2e-4fe5-ae9f-5464b85a8a7a",
+ "sourceHandle": "collection",
+ "targetHandle": "control"
+ },
+ {
+ "id": "reactflow__edge-b78f53b6-2eae-4956-97b4-7e73768d1491control-6636a27a-f130-4a13-b3e5-50b44e4a566fitem",
+ "type": "default",
+ "source": "b78f53b6-2eae-4956-97b4-7e73768d1491",
+ "target": "6636a27a-f130-4a13-b3e5-50b44e4a566f",
+ "sourceHandle": "control",
+ "targetHandle": "item"
+ },
+ {
+ "id": "reactflow__edge-be4082d6-e238-40ea-a9df-fc0d725e8895control-6636a27a-f130-4a13-b3e5-50b44e4a566fitem",
+ "type": "default",
+ "source": "be4082d6-e238-40ea-a9df-fc0d725e8895",
+ "target": "6636a27a-f130-4a13-b3e5-50b44e4a566f",
+ "sourceHandle": "control",
+ "targetHandle": "item"
+ },
+ {
+ "id": "reactflow__edge-7671553a-cd4b-4e25-8332-9d5667e64493image-b78f53b6-2eae-4956-97b4-7e73768d1491image",
+ "type": "default",
+ "source": "7671553a-cd4b-4e25-8332-9d5667e64493",
+ "target": "b78f53b6-2eae-4956-97b4-7e73768d1491",
+ "sourceHandle": "image",
+ "targetHandle": "image"
+ },
+ {
+ "id": "reactflow__edge-e277e4b7-01cd-4daa-86ab-7bfa3cdcd9fdclip2-27215391-b20e-412a-b854-7fa5927f5437clip2",
+ "type": "default",
+ "source": "e277e4b7-01cd-4daa-86ab-7bfa3cdcd9fd",
+ "target": "27215391-b20e-412a-b854-7fa5927f5437",
+ "sourceHandle": "clip2",
+ "targetHandle": "clip2"
+ },
+ {
+ "id": "reactflow__edge-e277e4b7-01cd-4daa-86ab-7bfa3cdcd9fdclip-27215391-b20e-412a-b854-7fa5927f5437clip",
+ "type": "default",
+ "source": "e277e4b7-01cd-4daa-86ab-7bfa3cdcd9fd",
+ "target": "27215391-b20e-412a-b854-7fa5927f5437",
+ "sourceHandle": "clip",
+ "targetHandle": "clip"
+ },
+ {
+ "id": "reactflow__edge-e277e4b7-01cd-4daa-86ab-7bfa3cdcd9fdclip2-6142b69a-323f-4ecd-a7e5-67dc61349c51clip2",
+ "type": "default",
+ "source": "e277e4b7-01cd-4daa-86ab-7bfa3cdcd9fd",
+ "target": "6142b69a-323f-4ecd-a7e5-67dc61349c51",
+ "sourceHandle": "clip2",
+ "targetHandle": "clip2"
+ },
+ {
+ "id": "reactflow__edge-6142b69a-323f-4ecd-a7e5-67dc61349c51conditioning-8dba0d37-cd2e-4fe5-ae9f-5464b85a8a7apositive_conditioning",
+ "type": "default",
+ "source": "6142b69a-323f-4ecd-a7e5-67dc61349c51",
+ "target": "8dba0d37-cd2e-4fe5-ae9f-5464b85a8a7a",
+ "sourceHandle": "conditioning",
+ "targetHandle": "positive_conditioning"
+ },
+ {
+ "id": "reactflow__edge-27215391-b20e-412a-b854-7fa5927f5437conditioning-8dba0d37-cd2e-4fe5-ae9f-5464b85a8a7anegative_conditioning",
+ "type": "default",
+ "source": "27215391-b20e-412a-b854-7fa5927f5437",
+ "target": "8dba0d37-cd2e-4fe5-ae9f-5464b85a8a7a",
+ "sourceHandle": "conditioning",
+ "targetHandle": "negative_conditioning"
+ },
+ {
+ "id": "reactflow__edge-e277e4b7-01cd-4daa-86ab-7bfa3cdcd9fdunet-8dba0d37-cd2e-4fe5-ae9f-5464b85a8a7aunet",
+ "type": "default",
+ "source": "e277e4b7-01cd-4daa-86ab-7bfa3cdcd9fd",
+ "target": "8dba0d37-cd2e-4fe5-ae9f-5464b85a8a7a",
+ "sourceHandle": "unet",
+ "targetHandle": "unet"
+ },
+ {
+ "id": "reactflow__edge-100b3143-b3fb-4ff3-bb3c-8d4d3f89ae3avae-117f982a-03da-49b1-bf9f-29711160ac02vae",
+ "type": "default",
+ "source": "100b3143-b3fb-4ff3-bb3c-8d4d3f89ae3a",
+ "target": "117f982a-03da-49b1-bf9f-29711160ac02",
+ "sourceHandle": "vae",
+ "targetHandle": "vae"
+ },
+ {
+ "id": "reactflow__edge-100b3143-b3fb-4ff3-bb3c-8d4d3f89ae3avae-c3b60a50-8039-4924-90e3-8c608e1fecb5vae",
+ "type": "default",
+ "source": "100b3143-b3fb-4ff3-bb3c-8d4d3f89ae3a",
+ "target": "c3b60a50-8039-4924-90e3-8c608e1fecb5",
+ "sourceHandle": "vae",
+ "targetHandle": "vae"
+ },
+ {
+ "id": "reactflow__edge-e277e4b7-01cd-4daa-86ab-7bfa3cdcd9fdclip-6142b69a-323f-4ecd-a7e5-67dc61349c51clip",
+ "type": "default",
+ "source": "e277e4b7-01cd-4daa-86ab-7bfa3cdcd9fd",
+ "target": "6142b69a-323f-4ecd-a7e5-67dc61349c51",
+ "sourceHandle": "clip",
+ "targetHandle": "clip"
+ },
+ {
+ "id": "reactflow__edge-041c59cc-f9e4-4dc9-8b31-84648c5f3ebeimage-7671553a-cd4b-4e25-8332-9d5667e64493image",
+ "type": "default",
+ "source": "041c59cc-f9e4-4dc9-8b31-84648c5f3ebe",
+ "target": "7671553a-cd4b-4e25-8332-9d5667e64493",
+ "sourceHandle": "image",
+ "targetHandle": "image"
+ },
+ {
+ "id": "reactflow__edge-f0cd0d2f-9614-43f7-9944-a75b8d5ccd65image-041c59cc-f9e4-4dc9-8b31-84648c5f3ebeimage",
+ "type": "default",
+ "source": "f0cd0d2f-9614-43f7-9944-a75b8d5ccd65",
+ "target": "041c59cc-f9e4-4dc9-8b31-84648c5f3ebe",
+ "sourceHandle": "image",
+ "targetHandle": "image"
+ },
+ {
+ "id": "reactflow__edge-53c2d5fd-863d-4950-93e0-628f3d61b493image-f0cd0d2f-9614-43f7-9944-a75b8d5ccd65image",
+ "type": "default",
+ "source": "53c2d5fd-863d-4950-93e0-628f3d61b493",
+ "target": "f0cd0d2f-9614-43f7-9944-a75b8d5ccd65",
+ "sourceHandle": "image",
+ "targetHandle": "image"
+ },
+ {
+ "id": "reactflow__edge-5ca87ace-edf9-49c7-a424-cd42416b86a7image-53c2d5fd-863d-4950-93e0-628f3d61b493image",
+ "type": "default",
+ "source": "5ca87ace-edf9-49c7-a424-cd42416b86a7",
+ "target": "53c2d5fd-863d-4950-93e0-628f3d61b493",
+ "sourceHandle": "image",
+ "targetHandle": "image"
+ },
+ {
+ "id": "reactflow__edge-8dba0d37-cd2e-4fe5-ae9f-5464b85a8a7alatents-c3b60a50-8039-4924-90e3-8c608e1fecb5latents",
+ "type": "default",
+ "source": "8dba0d37-cd2e-4fe5-ae9f-5464b85a8a7a",
+ "target": "c3b60a50-8039-4924-90e3-8c608e1fecb5",
+ "sourceHandle": "latents",
+ "targetHandle": "latents"
+ },
+ {
+ "id": "reactflow__edge-117f982a-03da-49b1-bf9f-29711160ac02latents-8dba0d37-cd2e-4fe5-ae9f-5464b85a8a7alatents",
+ "type": "default",
+ "source": "117f982a-03da-49b1-bf9f-29711160ac02",
+ "target": "8dba0d37-cd2e-4fe5-ae9f-5464b85a8a7a",
+ "sourceHandle": "latents",
+ "targetHandle": "latents"
+ },
+ {
+ "id": "reactflow__edge-8923451b-5a27-4395-b7f2-dce875fca6f5noise-8dba0d37-cd2e-4fe5-ae9f-5464b85a8a7anoise",
+ "type": "default",
+ "source": "8923451b-5a27-4395-b7f2-dce875fca6f5",
+ "target": "8dba0d37-cd2e-4fe5-ae9f-5464b85a8a7a",
+ "sourceHandle": "noise",
+ "targetHandle": "noise"
+ },
+ {
+ "id": "reactflow__edge-d350feac-9686-4e0d-bd46-a96bd2630818value-7dbb756b-7d79-431c-a46d-d8f7b082c127value",
+ "type": "default",
+ "source": "d350feac-9686-4e0d-bd46-a96bd2630818",
+ "target": "7dbb756b-7d79-431c-a46d-d8f7b082c127",
+ "sourceHandle": "value",
+ "targetHandle": "value",
+ "hidden": true
+ },
+ {
+ "id": "reactflow__edge-5b256f14-caab-40ff-b8f0-9679cd542163value-f5d9bf3b-2646-4b17-9894-20fd2b4218eavalue",
+ "type": "default",
+ "source": "5b256f14-caab-40ff-b8f0-9679cd542163",
+ "target": "f5d9bf3b-2646-4b17-9894-20fd2b4218ea",
+ "sourceHandle": "value",
+ "targetHandle": "value",
+ "hidden": true
+ },
+ {
+ "id": "reactflow__edge-7671553a-cd4b-4e25-8332-9d5667e64493height-8923451b-5a27-4395-b7f2-dce875fca6f5height",
+ "type": "default",
+ "source": "7671553a-cd4b-4e25-8332-9d5667e64493",
+ "target": "8923451b-5a27-4395-b7f2-dce875fca6f5",
+ "sourceHandle": "height",
+ "targetHandle": "height"
+ },
+ {
+ "id": "reactflow__edge-7671553a-cd4b-4e25-8332-9d5667e64493width-8923451b-5a27-4395-b7f2-dce875fca6f5width",
+ "type": "default",
+ "source": "7671553a-cd4b-4e25-8332-9d5667e64493",
+ "target": "8923451b-5a27-4395-b7f2-dce875fca6f5",
+ "sourceHandle": "width",
+ "targetHandle": "width"
+ },
+ {
+ "id": "reactflow__edge-7671553a-cd4b-4e25-8332-9d5667e64493image-117f982a-03da-49b1-bf9f-29711160ac02image",
+ "type": "default",
+ "source": "7671553a-cd4b-4e25-8332-9d5667e64493",
+ "target": "117f982a-03da-49b1-bf9f-29711160ac02",
+ "sourceHandle": "image",
+ "targetHandle": "image"
+ },
+ {
+ "id": "reactflow__edge-7671553a-cd4b-4e25-8332-9d5667e64493image-be4082d6-e238-40ea-a9df-fc0d725e8895image",
+ "type": "default",
+ "source": "7671553a-cd4b-4e25-8332-9d5667e64493",
+ "target": "be4082d6-e238-40ea-a9df-fc0d725e8895",
+ "sourceHandle": "image",
+ "targetHandle": "image"
+ },
+ {
+ "id": "reactflow__edge-7dbb756b-7d79-431c-a46d-d8f7b082c127value-7671553a-cd4b-4e25-8332-9d5667e64493height",
+ "type": "default",
+ "source": "7dbb756b-7d79-431c-a46d-d8f7b082c127",
+ "target": "7671553a-cd4b-4e25-8332-9d5667e64493",
+ "sourceHandle": "value",
+ "targetHandle": "height"
+ },
+ {
+ "id": "reactflow__edge-f5d9bf3b-2646-4b17-9894-20fd2b4218eavalue-7671553a-cd4b-4e25-8332-9d5667e64493width",
+ "type": "default",
+ "source": "f5d9bf3b-2646-4b17-9894-20fd2b4218ea",
+ "target": "7671553a-cd4b-4e25-8332-9d5667e64493",
+ "sourceHandle": "value",
+ "targetHandle": "width"
+ },
+ {
+ "id": "reactflow__edge-5ca87ace-edf9-49c7-a424-cd42416b86a7height-d350feac-9686-4e0d-bd46-a96bd2630818a",
+ "type": "default",
+ "source": "5ca87ace-edf9-49c7-a424-cd42416b86a7",
+ "target": "d350feac-9686-4e0d-bd46-a96bd2630818",
+ "sourceHandle": "height",
+ "targetHandle": "a"
+ },
+ {
+ "id": "reactflow__edge-1ba845a6-eb88-49a1-a490-5fe6754f3ec9value-d350feac-9686-4e0d-bd46-a96bd2630818b",
+ "type": "default",
+ "source": "1ba845a6-eb88-49a1-a490-5fe6754f3ec9",
+ "target": "d350feac-9686-4e0d-bd46-a96bd2630818",
+ "sourceHandle": "value",
+ "targetHandle": "b"
+ },
+ {
+ "id": "reactflow__edge-1ba845a6-eb88-49a1-a490-5fe6754f3ec9value-5b256f14-caab-40ff-b8f0-9679cd542163b",
+ "type": "default",
+ "source": "1ba845a6-eb88-49a1-a490-5fe6754f3ec9",
+ "target": "5b256f14-caab-40ff-b8f0-9679cd542163",
+ "sourceHandle": "value",
+ "targetHandle": "b"
+ },
+ {
+ "id": "reactflow__edge-5ca87ace-edf9-49c7-a424-cd42416b86a7width-5b256f14-caab-40ff-b8f0-9679cd542163a",
+ "type": "default",
+ "source": "5ca87ace-edf9-49c7-a424-cd42416b86a7",
+ "target": "5b256f14-caab-40ff-b8f0-9679cd542163",
+ "sourceHandle": "width",
+ "targetHandle": "a"
+ }
+ ]
+}
diff --git a/invokeai/app/services/workflow_records/default_workflows/Prompt from File.json b/invokeai/app/services/workflow_records/default_workflows/Prompt from File.json
index 765b236714..de902bc77e 100644
--- a/invokeai/app/services/workflow_records/default_workflows/Prompt from File.json
+++ b/invokeai/app/services/workflow_records/default_workflows/Prompt from File.json
@@ -2,7 +2,7 @@
"name": "Prompt from File",
"author": "InvokeAI",
"description": "Sample workflow using Prompt from File node",
- "version": "2.0.0",
+ "version": "2.1.0",
"contact": "invoke@invoke.ai",
"tags": "text2image, prompt from file, default",
"notes": "",
@@ -37,16 +37,68 @@
}
],
"meta": {
- "category": "default",
- "version": "3.0.0"
+ "version": "3.0.0",
+ "category": "default"
},
"nodes": [
{
- "id": "c2eaf1ba-5708-4679-9e15-945b8b432692",
+ "id": "491ec988-3c77-4c37-af8a-39a0c4e7a2a1",
"type": "invocation",
"data": {
- "id": "c2eaf1ba-5708-4679-9e15-945b8b432692",
- "version": "1.1.1",
+ "id": "491ec988-3c77-4c37-af8a-39a0c4e7a2a1",
+ "version": "1.3.0",
+ "nodePack": "invokeai",
+ "label": "",
+ "notes": "",
+ "type": "l2i",
+ "inputs": {
+ "board": {
+ "name": "board",
+ "label": ""
+ },
+ "metadata": {
+ "name": "metadata",
+ "label": ""
+ },
+ "latents": {
+ "name": "latents",
+ "label": ""
+ },
+ "vae": {
+ "name": "vae",
+ "label": ""
+ },
+ "tiled": {
+ "name": "tiled",
+ "label": "",
+ "value": false
+ },
+ "tile_size": {
+ "name": "tile_size",
+ "label": "",
+ "value": 0
+ },
+ "fp32": {
+ "name": "fp32",
+ "label": "",
+ "value": false
+ }
+ },
+ "isOpen": true,
+ "isIntermediate": true,
+ "useCache": true
+ },
+ "position": {
+ "x": 2037.861329274915,
+ "y": -329.8393457509562
+ }
+ },
+ {
+ "id": "fc9d0e35-a6de-4a19-84e1-c72497c823f6",
+ "type": "invocation",
+ "data": {
+ "id": "fc9d0e35-a6de-4a19-84e1-c72497c823f6",
+ "version": "1.2.0",
"nodePack": "invokeai",
"label": "",
"notes": "",
@@ -60,6 +112,69 @@
"clip": {
"name": "clip",
"label": ""
+ },
+ "mask": {
+ "name": "mask",
+ "label": ""
+ }
+ },
+ "isOpen": false,
+ "isIntermediate": true,
+ "useCache": true
+ },
+ "position": {
+ "x": 925,
+ "y": -275
+ }
+ },
+ {
+ "id": "d6353b7f-b447-4e17-8f2e-80a88c91d426",
+ "type": "invocation",
+ "data": {
+ "id": "d6353b7f-b447-4e17-8f2e-80a88c91d426",
+ "version": "1.0.3",
+ "nodePack": "invokeai",
+ "label": "",
+ "notes": "",
+ "type": "main_model_loader",
+ "inputs": {
+ "model": {
+ "name": "model",
+ "label": ""
+ }
+ },
+ "isOpen": true,
+ "isIntermediate": true,
+ "useCache": true
+ },
+ "position": {
+ "x": 0,
+ "y": -375
+ }
+ },
+ {
+ "id": "c2eaf1ba-5708-4679-9e15-945b8b432692",
+ "type": "invocation",
+ "data": {
+ "id": "c2eaf1ba-5708-4679-9e15-945b8b432692",
+ "version": "1.2.0",
+ "nodePack": "invokeai",
+ "label": "",
+ "notes": "",
+ "type": "compel",
+ "inputs": {
+ "prompt": {
+ "name": "prompt",
+ "label": "",
+ "value": ""
+ },
+ "clip": {
+ "name": "clip",
+ "label": ""
+ },
+ "mask": {
+ "name": "mask",
+ "label": ""
}
},
"isOpen": false,
@@ -141,61 +256,6 @@
"y": -400
}
},
- {
- "id": "d6353b7f-b447-4e17-8f2e-80a88c91d426",
- "type": "invocation",
- "data": {
- "id": "d6353b7f-b447-4e17-8f2e-80a88c91d426",
- "version": "1.0.2",
- "nodePack": "invokeai",
- "label": "",
- "notes": "",
- "type": "main_model_loader",
- "inputs": {
- "model": {
- "name": "model",
- "label": ""
- }
- },
- "isOpen": true,
- "isIntermediate": true,
- "useCache": true
- },
- "position": {
- "x": 0,
- "y": -375
- }
- },
- {
- "id": "fc9d0e35-a6de-4a19-84e1-c72497c823f6",
- "type": "invocation",
- "data": {
- "id": "fc9d0e35-a6de-4a19-84e1-c72497c823f6",
- "version": "1.1.1",
- "nodePack": "invokeai",
- "label": "",
- "notes": "",
- "type": "compel",
- "inputs": {
- "prompt": {
- "name": "prompt",
- "label": "",
- "value": ""
- },
- "clip": {
- "name": "clip",
- "label": ""
- }
- },
- "isOpen": false,
- "isIntermediate": true,
- "useCache": true
- },
- "position": {
- "x": 925,
- "y": -275
- }
- },
{
"id": "0eb5f3f5-1b91-49eb-9ef0-41d67c7eae77",
"type": "invocation",
@@ -268,53 +328,6 @@
"y": -50
}
},
- {
- "id": "491ec988-3c77-4c37-af8a-39a0c4e7a2a1",
- "type": "invocation",
- "data": {
- "id": "491ec988-3c77-4c37-af8a-39a0c4e7a2a1",
- "version": "1.2.2",
- "nodePack": "invokeai",
- "label": "",
- "notes": "",
- "type": "l2i",
- "inputs": {
- "board": {
- "name": "board",
- "label": ""
- },
- "metadata": {
- "name": "metadata",
- "label": ""
- },
- "latents": {
- "name": "latents",
- "label": ""
- },
- "vae": {
- "name": "vae",
- "label": ""
- },
- "tiled": {
- "name": "tiled",
- "label": "",
- "value": false
- },
- "fp32": {
- "name": "fp32",
- "label": "",
- "value": false
- }
- },
- "isOpen": true,
- "isIntermediate": true,
- "useCache": true
- },
- "position": {
- "x": 2037.861329274915,
- "y": -329.8393457509562
- }
- },
{
"id": "2fb1577f-0a56-4f12-8711-8afcaaaf1d5e",
"type": "invocation",
diff --git a/invokeai/app/services/workflow_records/default_workflows/Text to Image - SD1.5.json b/invokeai/app/services/workflow_records/default_workflows/Text to Image - SD1.5.json
index d3d52150bc..65f894724c 100644
--- a/invokeai/app/services/workflow_records/default_workflows/Text to Image - SD1.5.json
+++ b/invokeai/app/services/workflow_records/default_workflows/Text to Image - SD1.5.json
@@ -2,7 +2,7 @@
"name": "Text to Image - SD1.5",
"author": "InvokeAI",
"description": "Sample text to image workflow for Stable Diffusion 1.5/2",
- "version": "2.0.0",
+ "version": "2.1.0",
"contact": "invoke@invoke.ai",
"tags": "text2image, SD1.5, SD2, default",
"notes": "",
@@ -33,16 +33,127 @@
}
],
"meta": {
- "category": "default",
- "version": "3.0.0"
+ "version": "3.0.0",
+ "category": "default"
},
"nodes": [
+ {
+ "id": "58c957f5-0d01-41fc-a803-b2bbf0413d4f",
+ "type": "invocation",
+ "data": {
+ "id": "58c957f5-0d01-41fc-a803-b2bbf0413d4f",
+ "version": "1.3.0",
+ "nodePack": "invokeai",
+ "label": "",
+ "notes": "",
+ "type": "l2i",
+ "inputs": {
+ "board": {
+ "name": "board",
+ "label": ""
+ },
+ "metadata": {
+ "name": "metadata",
+ "label": ""
+ },
+ "latents": {
+ "name": "latents",
+ "label": ""
+ },
+ "vae": {
+ "name": "vae",
+ "label": ""
+ },
+ "tiled": {
+ "name": "tiled",
+ "label": "",
+ "value": false
+ },
+ "tile_size": {
+ "name": "tile_size",
+ "label": "",
+ "value": 0
+ },
+ "fp32": {
+ "name": "fp32",
+ "label": "",
+ "value": true
+ }
+ },
+ "isOpen": true,
+ "isIntermediate": false,
+ "useCache": true
+ },
+ "position": {
+ "x": 1800,
+ "y": 25
+ }
+ },
+ {
+ "id": "7d8bf987-284f-413a-b2fd-d825445a5d6c",
+ "type": "invocation",
+ "data": {
+ "id": "7d8bf987-284f-413a-b2fd-d825445a5d6c",
+ "version": "1.2.0",
+ "nodePack": "invokeai",
+ "label": "Positive Compel Prompt",
+ "notes": "",
+ "type": "compel",
+ "inputs": {
+ "prompt": {
+ "name": "prompt",
+ "label": "Positive Prompt",
+ "value": "Super cute tiger cub, national geographic award-winning photograph"
+ },
+ "clip": {
+ "name": "clip",
+ "label": ""
+ },
+ "mask": {
+ "name": "mask",
+ "label": ""
+ }
+ },
+ "isOpen": true,
+ "isIntermediate": true,
+ "useCache": true
+ },
+ "position": {
+ "x": 1000,
+ "y": 25
+ }
+ },
+ {
+ "id": "c8d55139-f380-4695-b7f2-8b3d1e1e3db8",
+ "type": "invocation",
+ "data": {
+ "id": "c8d55139-f380-4695-b7f2-8b3d1e1e3db8",
+ "version": "1.0.3",
+ "nodePack": "invokeai",
+ "label": "",
+ "notes": "",
+ "type": "main_model_loader",
+ "inputs": {
+ "model": {
+ "name": "model",
+ "label": ""
+ }
+ },
+ "isOpen": true,
+ "isIntermediate": true,
+ "useCache": true
+ },
+ "position": {
+ "x": 600,
+ "y": 25
+ }
+ },
{
"id": "93dc02a4-d05b-48ed-b99c-c9b616af3402",
"type": "invocation",
"data": {
"id": "93dc02a4-d05b-48ed-b99c-c9b616af3402",
- "version": "1.1.1",
+ "version": "1.2.0",
"nodePack": "invokeai",
"label": "Negative Compel Prompt",
"notes": "",
@@ -56,6 +167,10 @@
"clip": {
"name": "clip",
"label": ""
+ },
+ "mask": {
+ "name": "mask",
+ "label": ""
}
},
"isOpen": true,
@@ -108,61 +223,6 @@
"y": 325
}
},
- {
- "id": "c8d55139-f380-4695-b7f2-8b3d1e1e3db8",
- "type": "invocation",
- "data": {
- "id": "c8d55139-f380-4695-b7f2-8b3d1e1e3db8",
- "version": "1.0.2",
- "nodePack": "invokeai",
- "label": "",
- "notes": "",
- "type": "main_model_loader",
- "inputs": {
- "model": {
- "name": "model",
- "label": ""
- }
- },
- "isOpen": true,
- "isIntermediate": true,
- "useCache": true
- },
- "position": {
- "x": 600,
- "y": 25
- }
- },
- {
- "id": "7d8bf987-284f-413a-b2fd-d825445a5d6c",
- "type": "invocation",
- "data": {
- "id": "7d8bf987-284f-413a-b2fd-d825445a5d6c",
- "version": "1.1.1",
- "nodePack": "invokeai",
- "label": "Positive Compel Prompt",
- "notes": "",
- "type": "compel",
- "inputs": {
- "prompt": {
- "name": "prompt",
- "label": "Positive Prompt",
- "value": "Super cute tiger cub, national geographic award-winning photograph"
- },
- "clip": {
- "name": "clip",
- "label": ""
- }
- },
- "isOpen": true,
- "isIntermediate": true,
- "useCache": true
- },
- "position": {
- "x": 1000,
- "y": 25
- }
- },
{
"id": "ea94bc37-d995-4a83-aa99-4af42479f2f2",
"type": "invocation",
@@ -280,53 +340,6 @@
"x": 1400,
"y": 25
}
- },
- {
- "id": "58c957f5-0d01-41fc-a803-b2bbf0413d4f",
- "type": "invocation",
- "data": {
- "id": "58c957f5-0d01-41fc-a803-b2bbf0413d4f",
- "version": "1.2.2",
- "nodePack": "invokeai",
- "label": "",
- "notes": "",
- "type": "l2i",
- "inputs": {
- "board": {
- "name": "board",
- "label": ""
- },
- "metadata": {
- "name": "metadata",
- "label": ""
- },
- "latents": {
- "name": "latents",
- "label": ""
- },
- "vae": {
- "name": "vae",
- "label": ""
- },
- "tiled": {
- "name": "tiled",
- "label": "",
- "value": false
- },
- "fp32": {
- "name": "fp32",
- "label": "",
- "value": true
- }
- },
- "isOpen": true,
- "isIntermediate": false,
- "useCache": true
- },
- "position": {
- "x": 1800,
- "y": 25
- }
}
],
"edges": [
diff --git a/invokeai/app/services/workflow_records/default_workflows/Text to Image - SDXL.json b/invokeai/app/services/workflow_records/default_workflows/Text to Image - SDXL.json
index 1527bbceb1..0f4777169e 100644
--- a/invokeai/app/services/workflow_records/default_workflows/Text to Image - SDXL.json
+++ b/invokeai/app/services/workflow_records/default_workflows/Text to Image - SDXL.json
@@ -2,7 +2,7 @@
"name": "Text to Image - SDXL",
"author": "InvokeAI",
"description": "Sample text to image workflow for SDXL",
- "version": "2.0.0",
+ "version": "2.1.0",
"contact": "invoke@invoke.ai",
"tags": "text2image, SDXL, default",
"notes": "",
@@ -29,10 +29,271 @@
}
],
"meta": {
- "category": "default",
- "version": "3.0.0"
+ "version": "3.0.0",
+ "category": "default"
},
"nodes": [
+ {
+ "id": "0093692f-9cf4-454d-a5b8-62f0e3eb3bb8",
+ "type": "invocation",
+ "data": {
+ "id": "0093692f-9cf4-454d-a5b8-62f0e3eb3bb8",
+ "version": "1.0.3",
+ "label": "",
+ "notes": "",
+ "type": "vae_loader",
+ "inputs": {
+ "vae_model": {
+ "name": "vae_model",
+ "label": "VAE (use the FP16 model)",
+ "value": {
+ "key": "f20f9e5c-1bce-4c46-a84d-34ebfa7df069",
+ "hash": "blake3:9705ab1c31fa96b308734214fb7571a958621c7a9247eed82b7d277145f8d9fa",
+ "name": "sdxl-vae-fp16-fix",
+ "base": "sdxl",
+ "type": "vae"
+ }
+ }
+ },
+ "isOpen": true,
+ "isIntermediate": true,
+ "useCache": true
+ },
+ "position": {
+ "x": 375,
+ "y": -225
+ }
+ },
+ {
+ "id": "63e91020-83b2-4f35-b174-ad9692aabb48",
+ "type": "invocation",
+ "data": {
+ "id": "63e91020-83b2-4f35-b174-ad9692aabb48",
+ "version": "1.3.0",
+ "nodePack": "invokeai",
+ "label": "",
+ "notes": "",
+ "type": "l2i",
+ "inputs": {
+ "board": {
+ "name": "board",
+ "label": ""
+ },
+ "metadata": {
+ "name": "metadata",
+ "label": ""
+ },
+ "latents": {
+ "name": "latents",
+ "label": ""
+ },
+ "vae": {
+ "name": "vae",
+ "label": ""
+ },
+ "tiled": {
+ "name": "tiled",
+ "label": "",
+ "value": false
+ },
+ "tile_size": {
+ "name": "tile_size",
+ "label": "",
+ "value": 0
+ },
+ "fp32": {
+ "name": "fp32",
+ "label": "",
+ "value": false
+ }
+ },
+ "isOpen": true,
+ "isIntermediate": false,
+ "useCache": false
+ },
+ "position": {
+ "x": 1475,
+ "y": -500
+ }
+ },
+ {
+ "id": "faf965a4-7530-427b-b1f3-4ba6505c2a08",
+ "type": "invocation",
+ "data": {
+ "id": "faf965a4-7530-427b-b1f3-4ba6505c2a08",
+ "version": "1.2.0",
+ "nodePack": "invokeai",
+ "label": "SDXL Positive Compel Prompt",
+ "notes": "",
+ "type": "sdxl_compel_prompt",
+ "inputs": {
+ "prompt": {
+ "name": "prompt",
+ "label": "Positive Prompt",
+ "value": ""
+ },
+ "style": {
+ "name": "style",
+ "label": "Positive Style",
+ "value": ""
+ },
+ "original_width": {
+ "name": "original_width",
+ "label": "",
+ "value": 1024
+ },
+ "original_height": {
+ "name": "original_height",
+ "label": "",
+ "value": 1024
+ },
+ "crop_top": {
+ "name": "crop_top",
+ "label": "",
+ "value": 0
+ },
+ "crop_left": {
+ "name": "crop_left",
+ "label": "",
+ "value": 0
+ },
+ "target_width": {
+ "name": "target_width",
+ "label": "",
+ "value": 1024
+ },
+ "target_height": {
+ "name": "target_height",
+ "label": "",
+ "value": 1024
+ },
+ "clip": {
+ "name": "clip",
+ "label": ""
+ },
+ "clip2": {
+ "name": "clip2",
+ "label": ""
+ },
+ "mask": {
+ "name": "mask",
+ "label": ""
+ }
+ },
+ "isOpen": false,
+ "isIntermediate": true,
+ "useCache": true
+ },
+ "position": {
+ "x": 750,
+ "y": -175
+ }
+ },
+ {
+ "id": "30d3289c-773c-4152-a9d2-bd8a99c8fd22",
+ "type": "invocation",
+ "data": {
+ "id": "30d3289c-773c-4152-a9d2-bd8a99c8fd22",
+ "version": "1.0.3",
+ "nodePack": "invokeai",
+ "label": "",
+ "notes": "",
+ "type": "sdxl_model_loader",
+ "inputs": {
+ "model": {
+ "name": "model",
+ "label": "",
+ "value": {
+ "key": "4a63b226-e8ff-4da4-854e-0b9f04b562ba",
+ "hash": "blake3:d279309ea6e5ee6e8fd52504275865cc280dac71cbf528c5b07c98b888bddaba",
+ "name": "dreamshaper-xl-v2-turbo",
+ "base": "sdxl",
+ "type": "main"
+ }
+ }
+ },
+ "isOpen": true,
+ "isIntermediate": true,
+ "useCache": true
+ },
+ "position": {
+ "x": 375,
+ "y": -500
+ }
+ },
+ {
+ "id": "3193ad09-a7c2-4bf4-a3a9-1c61cc33a204",
+ "type": "invocation",
+ "data": {
+ "id": "3193ad09-a7c2-4bf4-a3a9-1c61cc33a204",
+ "version": "1.2.0",
+ "nodePack": "invokeai",
+ "label": "SDXL Negative Compel Prompt",
+ "notes": "",
+ "type": "sdxl_compel_prompt",
+ "inputs": {
+ "prompt": {
+ "name": "prompt",
+ "label": "Negative Prompt",
+ "value": ""
+ },
+ "style": {
+ "name": "style",
+ "label": "Negative Style",
+ "value": ""
+ },
+ "original_width": {
+ "name": "original_width",
+ "label": "",
+ "value": 1024
+ },
+ "original_height": {
+ "name": "original_height",
+ "label": "",
+ "value": 1024
+ },
+ "crop_top": {
+ "name": "crop_top",
+ "label": "",
+ "value": 0
+ },
+ "crop_left": {
+ "name": "crop_left",
+ "label": "",
+ "value": 0
+ },
+ "target_width": {
+ "name": "target_width",
+ "label": "",
+ "value": 1024
+ },
+ "target_height": {
+ "name": "target_height",
+ "label": "",
+ "value": 1024
+ },
+ "clip": {
+ "name": "clip",
+ "label": ""
+ },
+ "clip2": {
+ "name": "clip2",
+ "label": ""
+ },
+ "mask": {
+ "name": "mask",
+ "label": ""
+ }
+ },
+ "isOpen": false,
+ "isIntermediate": true,
+ "useCache": true
+ },
+ "position": {
+ "x": 750,
+ "y": 200
+ }
+ },
{
"id": "3774ec24-a69e-4254-864c-097d07a6256f",
"type": "invocation",
@@ -88,75 +349,6 @@
"y": -125
}
},
- {
- "id": "3193ad09-a7c2-4bf4-a3a9-1c61cc33a204",
- "type": "invocation",
- "data": {
- "id": "3193ad09-a7c2-4bf4-a3a9-1c61cc33a204",
- "version": "1.1.1",
- "nodePack": "invokeai",
- "label": "SDXL Negative Compel Prompt",
- "notes": "",
- "type": "sdxl_compel_prompt",
- "inputs": {
- "prompt": {
- "name": "prompt",
- "label": "Negative Prompt",
- "value": ""
- },
- "style": {
- "name": "style",
- "label": "Negative Style",
- "value": ""
- },
- "original_width": {
- "name": "original_width",
- "label": "",
- "value": 1024
- },
- "original_height": {
- "name": "original_height",
- "label": "",
- "value": 1024
- },
- "crop_top": {
- "name": "crop_top",
- "label": "",
- "value": 0
- },
- "crop_left": {
- "name": "crop_left",
- "label": "",
- "value": 0
- },
- "target_width": {
- "name": "target_width",
- "label": "",
- "value": 1024
- },
- "target_height": {
- "name": "target_height",
- "label": "",
- "value": 1024
- },
- "clip": {
- "name": "clip",
- "label": ""
- },
- "clip2": {
- "name": "clip2",
- "label": ""
- }
- },
- "isOpen": false,
- "isIntermediate": true,
- "useCache": true
- },
- "position": {
- "x": 750,
- "y": 200
- }
- },
{
"id": "55705012-79b9-4aac-9f26-c0b10309785b",
"type": "invocation",
@@ -229,154 +421,6 @@
"y": -50
}
},
- {
- "id": "30d3289c-773c-4152-a9d2-bd8a99c8fd22",
- "type": "invocation",
- "data": {
- "id": "30d3289c-773c-4152-a9d2-bd8a99c8fd22",
- "version": "1.0.2",
- "nodePack": "invokeai",
- "label": "",
- "notes": "",
- "type": "sdxl_model_loader",
- "inputs": {
- "model": {
- "name": "model",
- "label": "",
- "value": {
- "key": "4a63b226-e8ff-4da4-854e-0b9f04b562ba",
- "hash": "blake3:d279309ea6e5ee6e8fd52504275865cc280dac71cbf528c5b07c98b888bddaba",
- "name": "dreamshaper-xl-v2-turbo",
- "base": "sdxl",
- "type": "main"
- }
- }
- },
- "isOpen": true,
- "isIntermediate": true,
- "useCache": true
- },
- "position": {
- "x": 375,
- "y": -500
- }
- },
- {
- "id": "faf965a4-7530-427b-b1f3-4ba6505c2a08",
- "type": "invocation",
- "data": {
- "id": "faf965a4-7530-427b-b1f3-4ba6505c2a08",
- "version": "1.1.1",
- "nodePack": "invokeai",
- "label": "SDXL Positive Compel Prompt",
- "notes": "",
- "type": "sdxl_compel_prompt",
- "inputs": {
- "prompt": {
- "name": "prompt",
- "label": "Positive Prompt",
- "value": ""
- },
- "style": {
- "name": "style",
- "label": "Positive Style",
- "value": ""
- },
- "original_width": {
- "name": "original_width",
- "label": "",
- "value": 1024
- },
- "original_height": {
- "name": "original_height",
- "label": "",
- "value": 1024
- },
- "crop_top": {
- "name": "crop_top",
- "label": "",
- "value": 0
- },
- "crop_left": {
- "name": "crop_left",
- "label": "",
- "value": 0
- },
- "target_width": {
- "name": "target_width",
- "label": "",
- "value": 1024
- },
- "target_height": {
- "name": "target_height",
- "label": "",
- "value": 1024
- },
- "clip": {
- "name": "clip",
- "label": ""
- },
- "clip2": {
- "name": "clip2",
- "label": ""
- }
- },
- "isOpen": false,
- "isIntermediate": true,
- "useCache": true
- },
- "position": {
- "x": 750,
- "y": -175
- }
- },
- {
- "id": "63e91020-83b2-4f35-b174-ad9692aabb48",
- "type": "invocation",
- "data": {
- "id": "63e91020-83b2-4f35-b174-ad9692aabb48",
- "version": "1.2.2",
- "nodePack": "invokeai",
- "label": "",
- "notes": "",
- "type": "l2i",
- "inputs": {
- "board": {
- "name": "board",
- "label": ""
- },
- "metadata": {
- "name": "metadata",
- "label": ""
- },
- "latents": {
- "name": "latents",
- "label": ""
- },
- "vae": {
- "name": "vae",
- "label": ""
- },
- "tiled": {
- "name": "tiled",
- "label": "",
- "value": false
- },
- "fp32": {
- "name": "fp32",
- "label": "",
- "value": false
- }
- },
- "isOpen": true,
- "isIntermediate": false,
- "useCache": false
- },
- "position": {
- "x": 1475,
- "y": -500
- }
- },
{
"id": "50a36525-3c0a-4cc5-977c-e4bfc3fd6dfb",
"type": "invocation",
@@ -464,37 +508,6 @@
"y": -500
}
},
- {
- "id": "0093692f-9cf4-454d-a5b8-62f0e3eb3bb8",
- "type": "invocation",
- "data": {
- "id": "0093692f-9cf4-454d-a5b8-62f0e3eb3bb8",
- "version": "1.0.2",
- "label": "",
- "notes": "",
- "type": "vae_loader",
- "inputs": {
- "vae_model": {
- "name": "vae_model",
- "label": "VAE (use the FP16 model)",
- "value": {
- "key": "f20f9e5c-1bce-4c46-a84d-34ebfa7df069",
- "hash": "blake3:9705ab1c31fa96b308734214fb7571a958621c7a9247eed82b7d277145f8d9fa",
- "name": "sdxl-vae-fp16-fix",
- "base": "sdxl",
- "type": "vae"
- }
- }
- },
- "isOpen": true,
- "isIntermediate": true,
- "useCache": true
- },
- "position": {
- "x": 375,
- "y": -225
- }
- },
{
"id": "ade2c0d3-0384-4157-b39b-29ce429cfa15",
"type": "invocation",
diff --git a/invokeai/app/services/workflow_records/default_workflows/Text to Image with LoRA.json b/invokeai/app/services/workflow_records/default_workflows/Text to Image with LoRA.json
index 6df02b675d..b4df4b921c 100644
--- a/invokeai/app/services/workflow_records/default_workflows/Text to Image with LoRA.json
+++ b/invokeai/app/services/workflow_records/default_workflows/Text to Image with LoRA.json
@@ -2,7 +2,7 @@
"name": "Text to Image with LoRA",
"author": "InvokeAI",
"description": "Simple text to image workflow with a LoRA",
- "version": "2.0.0",
+ "version": "2.1.0",
"contact": "invoke@invoke.ai",
"tags": "text to image, lora, default",
"notes": "",
@@ -37,28 +37,83 @@
}
],
"meta": {
- "category": "default",
- "version": "3.0.0"
+ "version": "3.0.0",
+ "category": "default"
},
"nodes": [
{
- "id": "85b77bb2-c67a-416a-b3e8-291abe746c44",
+ "id": "a9683c0a-6b1f-4a5e-8187-c57e764b3400",
"type": "invocation",
"data": {
- "id": "85b77bb2-c67a-416a-b3e8-291abe746c44",
- "version": "1.1.1",
+ "id": "a9683c0a-6b1f-4a5e-8187-c57e764b3400",
+ "version": "1.3.0",
+ "label": "",
+ "notes": "",
+ "type": "l2i",
+ "inputs": {
+ "board": {
+ "name": "board",
+ "label": ""
+ },
+ "metadata": {
+ "name": "metadata",
+ "label": ""
+ },
+ "latents": {
+ "name": "latents",
+ "label": ""
+ },
+ "vae": {
+ "name": "vae",
+ "label": ""
+ },
+ "tiled": {
+ "name": "tiled",
+ "label": "",
+ "value": false
+ },
+ "tile_size": {
+ "name": "tile_size",
+ "label": "",
+ "value": 0
+ },
+ "fp32": {
+ "name": "fp32",
+ "label": "",
+ "value": false
+ }
+ },
+ "isOpen": true,
+ "isIntermediate": false,
+ "useCache": true
+ },
+ "position": {
+ "x": 4450,
+ "y": -550
+ }
+ },
+ {
+ "id": "c3fa6872-2599-4a82-a596-b3446a66cf8b",
+ "type": "invocation",
+ "data": {
+ "id": "c3fa6872-2599-4a82-a596-b3446a66cf8b",
+ "version": "1.2.0",
"label": "",
"notes": "",
"type": "compel",
"inputs": {
"prompt": {
"name": "prompt",
- "label": "Negative Prompt",
- "value": ""
+ "label": "Positive Prompt",
+ "value": "super cute tiger cub"
},
"clip": {
"name": "clip",
"label": ""
+ },
+ "mask": {
+ "name": "mask",
+ "label": ""
}
},
"isOpen": true,
@@ -67,31 +122,7 @@
},
"position": {
"x": 3425,
- "y": -300
- }
- },
- {
- "id": "24e9d7ed-4836-4ec4-8f9e-e747721f9818",
- "type": "invocation",
- "data": {
- "id": "24e9d7ed-4836-4ec4-8f9e-e747721f9818",
- "version": "1.0.2",
- "label": "",
- "notes": "",
- "type": "main_model_loader",
- "inputs": {
- "model": {
- "name": "model",
- "label": ""
- }
- },
- "isOpen": true,
- "isIntermediate": true,
- "useCache": true
- },
- "position": {
- "x": 2500,
- "y": -600
+ "y": -575
}
},
{
@@ -99,7 +130,7 @@
"type": "invocation",
"data": {
"id": "c41e705b-f2e3-4d1a-83c4-e34bb9344966",
- "version": "1.0.2",
+ "version": "1.0.3",
"label": "",
"notes": "",
"type": "lora_loader",
@@ -132,23 +163,51 @@
}
},
{
- "id": "c3fa6872-2599-4a82-a596-b3446a66cf8b",
+ "id": "24e9d7ed-4836-4ec4-8f9e-e747721f9818",
"type": "invocation",
"data": {
- "id": "c3fa6872-2599-4a82-a596-b3446a66cf8b",
- "version": "1.1.1",
+ "id": "24e9d7ed-4836-4ec4-8f9e-e747721f9818",
+ "version": "1.0.3",
+ "label": "",
+ "notes": "",
+ "type": "main_model_loader",
+ "inputs": {
+ "model": {
+ "name": "model",
+ "label": ""
+ }
+ },
+ "isOpen": true,
+ "isIntermediate": true,
+ "useCache": true
+ },
+ "position": {
+ "x": 2500,
+ "y": -600
+ }
+ },
+ {
+ "id": "85b77bb2-c67a-416a-b3e8-291abe746c44",
+ "type": "invocation",
+ "data": {
+ "id": "85b77bb2-c67a-416a-b3e8-291abe746c44",
+ "version": "1.2.0",
"label": "",
"notes": "",
"type": "compel",
"inputs": {
"prompt": {
"name": "prompt",
- "label": "Positive Prompt",
- "value": "super cute tiger cub"
+ "label": "Negative Prompt",
+ "value": ""
},
"clip": {
"name": "clip",
"label": ""
+ },
+ "mask": {
+ "name": "mask",
+ "label": ""
}
},
"isOpen": true,
@@ -157,7 +216,7 @@
},
"position": {
"x": 3425,
- "y": -575
+ "y": -300
}
},
{
@@ -315,52 +374,6 @@
"x": 3425,
"y": 0
}
- },
- {
- "id": "a9683c0a-6b1f-4a5e-8187-c57e764b3400",
- "type": "invocation",
- "data": {
- "id": "a9683c0a-6b1f-4a5e-8187-c57e764b3400",
- "version": "1.2.2",
- "label": "",
- "notes": "",
- "type": "l2i",
- "inputs": {
- "board": {
- "name": "board",
- "label": ""
- },
- "metadata": {
- "name": "metadata",
- "label": ""
- },
- "latents": {
- "name": "latents",
- "label": ""
- },
- "vae": {
- "name": "vae",
- "label": ""
- },
- "tiled": {
- "name": "tiled",
- "label": "",
- "value": false
- },
- "fp32": {
- "name": "fp32",
- "label": "",
- "value": false
- }
- },
- "isOpen": true,
- "isIntermediate": false,
- "useCache": true
- },
- "position": {
- "x": 4450,
- "y": -550
- }
}
],
"edges": [
diff --git a/invokeai/app/services/workflow_records/default_workflows/Tiled Upscaling (Beta).json b/invokeai/app/services/workflow_records/default_workflows/Tiled Upscaling (Beta).json
index bb0e9062e4..426fe49c41 100644
--- a/invokeai/app/services/workflow_records/default_workflows/Tiled Upscaling (Beta).json
+++ b/invokeai/app/services/workflow_records/default_workflows/Tiled Upscaling (Beta).json
@@ -2,7 +2,7 @@
"name": "Tiled Upscaling (Beta)",
"author": "Invoke",
"description": "A workflow to upscale an input image with tiled upscaling. ",
- "version": "2.0.0",
+ "version": "2.1.0",
"contact": "invoke@invoke.ai",
"tags": "tiled, upscaling, sd1.5",
"notes": "",
@@ -41,10 +41,318 @@
}
],
"meta": {
- "category": "default",
- "version": "3.0.0"
+ "version": "3.0.0",
+ "category": "default"
},
"nodes": [
+ {
+ "id": "2ff466b8-5e2a-4d8f-923a-a3884c7ecbc5",
+ "type": "invocation",
+ "data": {
+ "id": "2ff466b8-5e2a-4d8f-923a-a3884c7ecbc5",
+ "version": "1.0.3",
+ "label": "",
+ "notes": "",
+ "type": "main_model_loader",
+ "inputs": {
+ "model": {
+ "name": "model",
+ "label": ""
+ }
+ },
+ "isOpen": true,
+ "isIntermediate": true,
+ "useCache": true
+ },
+ "position": {
+ "x": -4514.466823162653,
+ "y": -1235.7908800002283
+ }
+ },
+ {
+ "id": "287f134f-da8d-41d1-884e-5940e8f7b816",
+ "type": "invocation",
+ "data": {
+ "id": "287f134f-da8d-41d1-884e-5940e8f7b816",
+ "version": "1.4.1",
+ "label": "",
+ "notes": "",
+ "type": "ip_adapter",
+ "inputs": {
+ "image": {
+ "name": "image",
+ "label": ""
+ },
+ "ip_adapter_model": {
+ "name": "ip_adapter_model",
+ "label": "IP-Adapter Model (select ip_adapter_sd15)",
+ "value": {
+ "key": "1cc210bb-4d0a-4312-b36c-b5d46c43768e",
+ "hash": "blake3:3d669dffa7471b357b4df088b99ffb6bf4d4383d5e0ef1de5ec1c89728a3d5a5",
+ "name": "ip_adapter_sd15",
+ "base": "sd-1",
+ "type": "ip_adapter"
+ }
+ },
+ "clip_vision_model": {
+ "name": "clip_vision_model",
+ "label": "",
+ "value": "ViT-H"
+ },
+ "weight": {
+ "name": "weight",
+ "label": "",
+ "value": 0.2
+ },
+ "method": {
+ "name": "method",
+ "label": "",
+ "value": "full"
+ },
+ "begin_step_percent": {
+ "name": "begin_step_percent",
+ "label": "",
+ "value": 0
+ },
+ "end_step_percent": {
+ "name": "end_step_percent",
+ "label": "",
+ "value": 1
+ },
+ "mask": {
+ "name": "mask",
+ "label": ""
+ }
+ },
+ "isOpen": true,
+ "isIntermediate": true,
+ "useCache": true
+ },
+ "position": {
+ "x": -2855.8555540799207,
+ "y": -183.58854843775742
+ }
+ },
+ {
+ "id": "b76fe66f-7884-43ad-b72c-fadc81d7a73c",
+ "type": "invocation",
+ "data": {
+ "id": "b76fe66f-7884-43ad-b72c-fadc81d7a73c",
+ "version": "1.3.0",
+ "label": "",
+ "notes": "",
+ "type": "l2i",
+ "inputs": {
+ "board": {
+ "name": "board",
+ "label": ""
+ },
+ "metadata": {
+ "name": "metadata",
+ "label": ""
+ },
+ "latents": {
+ "name": "latents",
+ "label": ""
+ },
+ "vae": {
+ "name": "vae",
+ "label": ""
+ },
+ "tiled": {
+ "name": "tiled",
+ "label": "",
+ "value": false
+ },
+ "tile_size": {
+ "name": "tile_size",
+ "label": "",
+ "value": 0
+ },
+ "fp32": {
+ "name": "fp32",
+ "label": "",
+ "value": false
+ }
+ },
+ "isOpen": true,
+ "isIntermediate": true,
+ "useCache": true
+ },
+ "position": {
+ "x": -1999.770193862987,
+ "y": -1075
+ }
+ },
+ {
+ "id": "d334f2da-016a-4524-9911-bdab85546888",
+ "type": "invocation",
+ "data": {
+ "id": "d334f2da-016a-4524-9911-bdab85546888",
+ "version": "1.1.2",
+ "label": "",
+ "notes": "",
+ "type": "controlnet",
+ "inputs": {
+ "image": {
+ "name": "image",
+ "label": ""
+ },
+ "control_model": {
+ "name": "control_model",
+ "label": "Control Model (select contro_v11f1e_sd15_tile)",
+ "value": {
+ "key": "773843c8-db1f-4502-8f65-59782efa7960",
+ "hash": "blake3:f0812e13758f91baf4e54b7dbb707b70642937d3b2098cd2b94cc36d3eba308e",
+ "name": "control_v11f1e_sd15_tile",
+ "base": "sd-1",
+ "type": "controlnet"
+ }
+ },
+ "control_weight": {
+ "name": "control_weight",
+ "label": "",
+ "value": 1
+ },
+ "begin_step_percent": {
+ "name": "begin_step_percent",
+ "label": "",
+ "value": 0
+ },
+ "end_step_percent": {
+ "name": "end_step_percent",
+ "label": "Structural Control",
+ "value": 1
+ },
+ "control_mode": {
+ "name": "control_mode",
+ "label": "",
+ "value": "more_control"
+ },
+ "resize_mode": {
+ "name": "resize_mode",
+ "label": "",
+ "value": "just_resize"
+ }
+ },
+ "isOpen": true,
+ "isIntermediate": true,
+ "useCache": true
+ },
+ "position": {
+ "x": -2481.9569385477016,
+ "y": -181.06590482739782
+ }
+ },
+ {
+ "id": "338b883c-3728-4f18-b3a6-6e7190c2f850",
+ "type": "invocation",
+ "data": {
+ "id": "338b883c-3728-4f18-b3a6-6e7190c2f850",
+ "version": "1.1.0",
+ "label": "",
+ "notes": "",
+ "type": "i2l",
+ "inputs": {
+ "image": {
+ "name": "image",
+ "label": ""
+ },
+ "vae": {
+ "name": "vae",
+ "label": ""
+ },
+ "tiled": {
+ "name": "tiled",
+ "label": "",
+ "value": false
+ },
+ "tile_size": {
+ "name": "tile_size",
+ "label": "",
+ "value": 0
+ },
+ "fp32": {
+ "name": "fp32",
+ "label": "",
+ "value": false
+ }
+ },
+ "isOpen": false,
+ "isIntermediate": true,
+ "useCache": true
+ },
+ "position": {
+ "x": -2908.4791167517287,
+ "y": -408.87504820159086
+ }
+ },
+ {
+ "id": "947c3f88-0305-4695-8355-df4abac64b1c",
+ "type": "invocation",
+ "data": {
+ "id": "947c3f88-0305-4695-8355-df4abac64b1c",
+ "version": "1.2.0",
+ "label": "",
+ "notes": "",
+ "type": "compel",
+ "inputs": {
+ "prompt": {
+ "name": "prompt",
+ "label": "",
+ "value": ""
+ },
+ "clip": {
+ "name": "clip",
+ "label": ""
+ },
+ "mask": {
+ "name": "mask",
+ "label": ""
+ }
+ },
+ "isOpen": true,
+ "isIntermediate": true,
+ "useCache": true
+ },
+ "position": {
+ "x": -4014.4136788915944,
+ "y": -968.5677253775948
+ }
+ },
+ {
+ "id": "9b2d8c58-ce8f-4162-a5a1-48de854040d6",
+ "type": "invocation",
+ "data": {
+ "id": "9b2d8c58-ce8f-4162-a5a1-48de854040d6",
+ "version": "1.2.0",
+ "label": "",
+ "notes": "",
+ "type": "compel",
+ "inputs": {
+ "prompt": {
+ "name": "prompt",
+ "label": "Positive Prompt",
+ "value": ""
+ },
+ "clip": {
+ "name": "clip",
+ "label": ""
+ },
+ "mask": {
+ "name": "mask",
+ "label": ""
+ }
+ },
+ "isOpen": true,
+ "isIntermediate": true,
+ "useCache": true
+ },
+ "position": {
+ "x": -4014.4136788915944,
+ "y": -1243.5677253775948
+ }
+ },
{
"id": "b875cae6-d8a3-4fdc-b969-4d53cbd03f9a",
"type": "invocation",
@@ -181,64 +489,6 @@
"y": 3.422855503409039
}
},
- {
- "id": "9b2d8c58-ce8f-4162-a5a1-48de854040d6",
- "type": "invocation",
- "data": {
- "id": "9b2d8c58-ce8f-4162-a5a1-48de854040d6",
- "version": "1.1.1",
- "label": "",
- "notes": "",
- "type": "compel",
- "inputs": {
- "prompt": {
- "name": "prompt",
- "label": "Positive Prompt",
- "value": ""
- },
- "clip": {
- "name": "clip",
- "label": ""
- }
- },
- "isOpen": true,
- "isIntermediate": true,
- "useCache": true
- },
- "position": {
- "x": -4014.4136788915944,
- "y": -1243.5677253775948
- }
- },
- {
- "id": "947c3f88-0305-4695-8355-df4abac64b1c",
- "type": "invocation",
- "data": {
- "id": "947c3f88-0305-4695-8355-df4abac64b1c",
- "version": "1.1.1",
- "label": "",
- "notes": "",
- "type": "compel",
- "inputs": {
- "prompt": {
- "name": "prompt",
- "label": "",
- "value": ""
- },
- "clip": {
- "name": "clip",
- "label": ""
- }
- },
- "isOpen": true,
- "isIntermediate": true,
- "useCache": true
- },
- "position": {
- "x": -4014.4136788915944,
- "y": -968.5677253775948
- }
- },
{
"id": "b3513fed-ed42-408d-b382-128fdb0de523",
"type": "invocation",
@@ -379,104 +629,6 @@
"y": -29.08699277598673
}
},
- {
- "id": "338b883c-3728-4f18-b3a6-6e7190c2f850",
- "type": "invocation",
- "data": {
- "id": "338b883c-3728-4f18-b3a6-6e7190c2f850",
- "version": "1.0.2",
- "label": "",
- "notes": "",
- "type": "i2l",
- "inputs": {
- "image": {
- "name": "image",
- "label": ""
- },
- "vae": {
- "name": "vae",
- "label": ""
- },
- "tiled": {
- "name": "tiled",
- "label": "",
- "value": false
- },
- "fp32": {
- "name": "fp32",
- "label": "",
- "value": false
- }
- },
- "isOpen": false,
- "isIntermediate": true,
- "useCache": true
- },
- "position": {
- "x": -2908.4791167517287,
- "y": -408.87504820159086
- }
- },
- {
- "id": "d334f2da-016a-4524-9911-bdab85546888",
- "type": "invocation",
- "data": {
- "id": "d334f2da-016a-4524-9911-bdab85546888",
- "version": "1.1.1",
- "label": "",
- "notes": "",
- "type": "controlnet",
- "inputs": {
- "image": {
- "name": "image",
- "label": ""
- },
- "control_model": {
- "name": "control_model",
- "label": "Control Model (select contro_v11f1e_sd15_tile)",
- "value": {
- "key": "773843c8-db1f-4502-8f65-59782efa7960",
- "hash": "blake3:f0812e13758f91baf4e54b7dbb707b70642937d3b2098cd2b94cc36d3eba308e",
- "name": "control_v11f1e_sd15_tile",
- "base": "sd-1",
- "type": "controlnet"
- }
- },
- "control_weight": {
- "name": "control_weight",
- "label": "",
- "value": 1
- },
- "begin_step_percent": {
- "name": "begin_step_percent",
- "label": "",
- "value": 0
- },
- "end_step_percent": {
- "name": "end_step_percent",
- "label": "Structural Control",
- "value": 1
- },
- "control_mode": {
- "name": "control_mode",
- "label": "",
- "value": "more_control"
- },
- "resize_mode": {
- "name": "resize_mode",
- "label": "",
- "value": "just_resize"
- }
- },
- "isOpen": true,
- "isIntermediate": true,
- "useCache": true
- },
- "position": {
- "x": -2481.9569385477016,
- "y": -181.06590482739782
- }
- },
{
"id": "1011539e-85de-4e02-a003-0b22358491b8",
"type": "invocation",
@@ -563,52 +715,6 @@
"y": -1006.415909408244
}
},
- {
- "id": "b76fe66f-7884-43ad-b72c-fadc81d7a73c",
- "type": "invocation",
- "data": {
- "id": "b76fe66f-7884-43ad-b72c-fadc81d7a73c",
- "version": "1.2.2",
- "label": "",
- "notes": "",
- "type": "l2i",
- "inputs": {
- "board": {
- "name": "board",
- "label": ""
- },
- "metadata": {
- "name": "metadata",
- "label": ""
- },
- "latents": {
- "name": "latents",
- "label": ""
- },
- "vae": {
- "name": "vae",
- "label": ""
- },
- "tiled": {
- "name": "tiled",
- "label": "",
- "value": false
- },
- "fp32": {
- "name": "fp32",
- "label": "",
- "value": false
- }
- },
- "isOpen": true,
- "isIntermediate": true,
- "useCache": true
- },
- "position": {
- "x": -1999.770193862987,
- "y": -1075
- }
- },
{
"id": "ab6f5dda-4b60-4ddf-99f2-f61fb5937527",
"type": "invocation",
@@ -779,56 +885,6 @@
"y": -78.2819050861178
}
},
- {
- "id": "287f134f-da8d-41d1-884e-5940e8f7b816",
- "type": "invocation",
- "data": {
- "id": "287f134f-da8d-41d1-884e-5940e8f7b816",
- "version": "1.2.2",
- "label": "",
- "notes": "",
- "type": "ip_adapter",
- "inputs": {
- "image": {
- "name": "image",
- "label": ""
- },
- "ip_adapter_model": {
- "name": "ip_adapter_model",
- "label": "IP-Adapter Model (select ip_adapter_sd15)",
- "value": {
- "key": "1cc210bb-4d0a-4312-b36c-b5d46c43768e",
- "hash": "blake3:3d669dffa7471b357b4df088b99ffb6bf4d4383d5e0ef1de5ec1c89728a3d5a5",
- "name": "ip_adapter_sd15",
- "base": "sd-1",
- "type": "ip_adapter"
- }
- },
- "weight": {
- "name": "weight",
- "label": "",
- "value": 0.2
- },
- "begin_step_percent": {
- "name": "begin_step_percent",
- "label": "",
- "value": 0
- },
- "end_step_percent": {
- "name": "end_step_percent",
- "label": "",
- "value": 1
- }
- },
- "isOpen": true,
- "isIntermediate": true,
- "useCache": true
- },
- "position": {
- "x": -2855.8555540799207,
- "y": -183.58854843775742
- }
- },
{
"id": "1f86c8bf-06f9-4e28-abee-02f46f445ac4",
"type": "invocation",
@@ -899,30 +955,6 @@
"y": -41.810810454906914
}
},
- {
- "id": "2ff466b8-5e2a-4d8f-923a-a3884c7ecbc5",
- "type": "invocation",
- "data": {
- "id": "2ff466b8-5e2a-4d8f-923a-a3884c7ecbc5",
- "version": "1.0.2",
- "label": "",
- "notes": "",
- "type": "main_model_loader",
- "inputs": {
- "model": {
- "name": "model",
- "label": ""
- }
- },
- "isOpen": true,
- "isIntermediate": true,
- "useCache": true
- },
- "position": {
- "x": -4514.466823162653,
- "y": -1235.7908800002283
- }
- },
{
"id": "f5d9bf3b-2646-4b17-9894-20fd2b4218ea",
"type": "invocation",
diff --git a/invokeai/app/util/step_callback.py b/invokeai/app/util/step_callback.py
index 8992e59ace..c0c101cd75 100644
--- a/invokeai/app/util/step_callback.py
+++ b/invokeai/app/util/step_callback.py
@@ -5,9 +5,8 @@ from PIL import Image
from invokeai.app.services.session_processor.session_processor_common import CanceledException, ProgressImage
from invokeai.backend.model_manager.config import BaseModelType
-
-from ...backend.stable_diffusion import PipelineIntermediateState
-from ...backend.util.util import image_to_dataURL
+from invokeai.backend.stable_diffusion.diffusers_pipeline import PipelineIntermediateState
+from invokeai.backend.util.util import image_to_dataURL
if TYPE_CHECKING:
from invokeai.app.services.events.events_base import EventServiceBase
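Like many hunks in this patch, this one replaces relative imports with absolute ones rooted at the `invokeai` package. The convention, in brief (illustrative, using names from the hunk above):

```python
# Removed: relative form, resolved against the importing module's package.
# from ...backend.stable_diffusion import PipelineIntermediateState

# Added: absolute form, explicit and grep-able, pointing at the concrete
# module rather than a package-level re-export.
from invokeai.backend.stable_diffusion.diffusers_pipeline import PipelineIntermediateState
from invokeai.backend.util.util import image_to_dataURL
```

The model-loader hunks later in this patch make the same substitution, which also sidesteps the import-ordering pitfalls of `from .. import X` against package re-exports.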
diff --git a/invokeai/backend/image_util/__init__.py b/invokeai/backend/image_util/__init__.py
index f45af9feb4..bc5eed7ddd 100644
--- a/invokeai/backend/image_util/__init__.py
+++ b/invokeai/backend/image_util/__init__.py
@@ -2,6 +2,11 @@
Initialization file for invokeai.backend.image_util methods.
"""
-from .infill_methods.patchmatch import PatchMatch # noqa: F401
-from .pngwriter import PngWriter, PromptFormatter, retrieve_metadata, write_metadata # noqa: F401
-from .util import InitImageResizer, make_grid # noqa: F401
+from invokeai.backend.image_util.infill_methods.patchmatch import PatchMatch # noqa: F401
+from invokeai.backend.image_util.pngwriter import ( # noqa: F401
+ PngWriter,
+ PromptFormatter,
+ retrieve_metadata,
+ write_metadata,
+)
+from invokeai.backend.image_util.util import InitImageResizer, make_grid # noqa: F401
diff --git a/invokeai/backend/image_util/basicsr/rrdbnet_arch.py b/invokeai/backend/image_util/basicsr/rrdbnet_arch.py
index cdb77f3c21..a99a697123 100644
--- a/invokeai/backend/image_util/basicsr/rrdbnet_arch.py
+++ b/invokeai/backend/image_util/basicsr/rrdbnet_arch.py
@@ -2,7 +2,7 @@ import torch
from torch import nn as nn
from torch.nn import functional as F
-from .arch_util import default_init_weights, make_layer, pixel_unshuffle
+from invokeai.backend.image_util.basicsr.arch_util import default_init_weights, make_layer, pixel_unshuffle
class ResidualDenseBlock(nn.Module):
diff --git a/invokeai/backend/image_util/depth_anything/model/dpt.py b/invokeai/backend/image_util/depth_anything/model/dpt.py
index e1101b3c39..9b1e84c7bd 100644
--- a/invokeai/backend/image_util/depth_anything/model/dpt.py
+++ b/invokeai/backend/image_util/depth_anything/model/dpt.py
@@ -4,7 +4,7 @@ import torch
import torch.nn as nn
import torch.nn.functional as F
-from .blocks import FeatureFusionBlock, _make_scratch
+from invokeai.backend.image_util.depth_anything.model.blocks import FeatureFusionBlock, _make_scratch
torchhub_path = Path(__file__).parent.parent / "torchhub"
diff --git a/invokeai/backend/image_util/dw_openpose/wholebody.py b/invokeai/backend/image_util/dw_openpose/wholebody.py
index 3f77f20b9c..ce028df1fe 100644
--- a/invokeai/backend/image_util/dw_openpose/wholebody.py
+++ b/invokeai/backend/image_util/dw_openpose/wholebody.py
@@ -8,11 +8,10 @@ import numpy as np
import onnxruntime as ort
from invokeai.app.services.config.config_default import get_config
+from invokeai.backend.image_util.dw_openpose.onnxdet import inference_detector
+from invokeai.backend.image_util.dw_openpose.onnxpose import inference_pose
from invokeai.backend.util.devices import TorchDevice
-from .onnxdet import inference_detector
-from .onnxpose import inference_pose
-
config = get_config()
diff --git a/invokeai/backend/image_util/lineart_anime.py b/invokeai/backend/image_util/lineart_anime.py
index 6c2d2117e9..11d5aef80c 100644
--- a/invokeai/backend/image_util/lineart_anime.py
+++ b/invokeai/backend/image_util/lineart_anime.py
@@ -99,7 +99,7 @@ class UnetSkipConnectionBlock(nn.Module):
"""
super(UnetSkipConnectionBlock, self).__init__()
self.outermost = outermost
- if type(norm_layer) == functools.partial:
+ if isinstance(norm_layer, functools.partial):
use_bias = norm_layer.func == nn.InstanceNorm2d
else:
use_bias = norm_layer == nn.InstanceNorm2d
diff --git a/invokeai/backend/ip_adapter/ip_adapter.py b/invokeai/backend/ip_adapter/ip_adapter.py
index c33cb3f4ab..87ce029a87 100644
--- a/invokeai/backend/ip_adapter/ip_adapter.py
+++ b/invokeai/backend/ip_adapter/ip_adapter.py
@@ -11,9 +11,8 @@ from PIL import Image
from transformers import CLIPImageProcessor, CLIPVisionModelWithProjection
from invokeai.backend.ip_adapter.ip_attention_weights import IPAttentionWeights
-
-from ..raw_model import RawModel
-from .resampler import Resampler
+from invokeai.backend.ip_adapter.resampler import Resampler
+from invokeai.backend.raw_model import RawModel
class IPAdapterStateDict(TypedDict):
@@ -125,22 +124,20 @@ class IPAdapter(RawModel):
self.device, dtype=self.dtype
)
- def to(
- self, device: Optional[torch.device] = None, dtype: Optional[torch.dtype] = None, non_blocking: bool = False
- ):
+ def to(self, device: Optional[torch.device] = None, dtype: Optional[torch.dtype] = None):
if device is not None:
self.device = device
if dtype is not None:
self.dtype = dtype
- self._image_proj_model.to(device=self.device, dtype=self.dtype, non_blocking=non_blocking)
- self.attn_weights.to(device=self.device, dtype=self.dtype, non_blocking=non_blocking)
+ self._image_proj_model.to(device=self.device, dtype=self.dtype)
+ self.attn_weights.to(device=self.device, dtype=self.dtype)
- def calc_size(self):
- # workaround for circular import
- from invokeai.backend.model_manager.load.model_util import calc_model_size_by_data
+ def calc_size(self) -> int:
+ # HACK(ryand): Fix this issue with circular imports.
+ from invokeai.backend.model_manager.load.model_util import calc_module_size
- return calc_model_size_by_data(self._image_proj_model) + calc_model_size_by_data(self.attn_weights)
+ return calc_module_size(self._image_proj_model) + calc_module_size(self.attn_weights)
def _init_image_proj_model(
self, state_dict: dict[str, torch.Tensor]
diff --git a/invokeai/backend/lora.py b/invokeai/backend/lora.py
index 8d17de0837..8ef81915f1 100644
--- a/invokeai/backend/lora.py
+++ b/invokeai/backend/lora.py
@@ -10,9 +10,7 @@ from safetensors.torch import load_file
from typing_extensions import Self
from invokeai.backend.model_manager import BaseModelType
-from invokeai.backend.util.devices import TorchDevice
-
-from .raw_model import RawModel
+from invokeai.backend.raw_model import RawModel
class LoRALayerBase:
@@ -58,14 +56,9 @@ class LoRALayerBase:
model_size += val.nelement() * val.element_size()
return model_size
- def to(
- self,
- device: Optional[torch.device] = None,
- dtype: Optional[torch.dtype] = None,
- non_blocking: bool = False,
- ) -> None:
+ def to(self, device: Optional[torch.device] = None, dtype: Optional[torch.dtype] = None) -> None:
if self.bias is not None:
- self.bias = self.bias.to(device=device, dtype=dtype, non_blocking=non_blocking)
+ self.bias = self.bias.to(device=device, dtype=dtype)
# TODO: find and debug lora/locon with bias
@@ -107,19 +100,14 @@ class LoRALayer(LoRALayerBase):
model_size += val.nelement() * val.element_size()
return model_size
- def to(
- self,
- device: Optional[torch.device] = None,
- dtype: Optional[torch.dtype] = None,
- non_blocking: bool = False,
- ) -> None:
- super().to(device=device, dtype=dtype, non_blocking=non_blocking)
+ def to(self, device: Optional[torch.device] = None, dtype: Optional[torch.dtype] = None) -> None:
+ super().to(device=device, dtype=dtype)
- self.up = self.up.to(device=device, dtype=dtype, non_blocking=non_blocking)
- self.down = self.down.to(device=device, dtype=dtype, non_blocking=non_blocking)
+ self.up = self.up.to(device=device, dtype=dtype)
+ self.down = self.down.to(device=device, dtype=dtype)
if self.mid is not None:
- self.mid = self.mid.to(device=device, dtype=dtype, non_blocking=non_blocking)
+ self.mid = self.mid.to(device=device, dtype=dtype)
class LoHALayer(LoRALayerBase):
@@ -168,23 +156,18 @@ class LoHALayer(LoRALayerBase):
model_size += val.nelement() * val.element_size()
return model_size
- def to(
- self,
- device: Optional[torch.device] = None,
- dtype: Optional[torch.dtype] = None,
- non_blocking: bool = False,
- ) -> None:
+ def to(self, device: Optional[torch.device] = None, dtype: Optional[torch.dtype] = None) -> None:
super().to(device=device, dtype=dtype)
- self.w1_a = self.w1_a.to(device=device, dtype=dtype, non_blocking=non_blocking)
- self.w1_b = self.w1_b.to(device=device, dtype=dtype, non_blocking=non_blocking)
+ self.w1_a = self.w1_a.to(device=device, dtype=dtype)
+ self.w1_b = self.w1_b.to(device=device, dtype=dtype)
if self.t1 is not None:
- self.t1 = self.t1.to(device=device, dtype=dtype, non_blocking=non_blocking)
+ self.t1 = self.t1.to(device=device, dtype=dtype)
- self.w2_a = self.w2_a.to(device=device, dtype=dtype, non_blocking=non_blocking)
- self.w2_b = self.w2_b.to(device=device, dtype=dtype, non_blocking=non_blocking)
+ self.w2_a = self.w2_a.to(device=device, dtype=dtype)
+ self.w2_b = self.w2_b.to(device=device, dtype=dtype)
if self.t2 is not None:
- self.t2 = self.t2.to(device=device, dtype=dtype, non_blocking=non_blocking)
+ self.t2 = self.t2.to(device=device, dtype=dtype)
class LoKRLayer(LoRALayerBase):
@@ -265,12 +248,7 @@ class LoKRLayer(LoRALayerBase):
model_size += val.nelement() * val.element_size()
return model_size
- def to(
- self,
- device: Optional[torch.device] = None,
- dtype: Optional[torch.dtype] = None,
- non_blocking: bool = False,
- ) -> None:
+ def to(self, device: Optional[torch.device] = None, dtype: Optional[torch.dtype] = None) -> None:
super().to(device=device, dtype=dtype)
if self.w1 is not None:
@@ -278,19 +256,19 @@ class LoKRLayer(LoRALayerBase):
else:
assert self.w1_a is not None
assert self.w1_b is not None
- self.w1_a = self.w1_a.to(device=device, dtype=dtype, non_blocking=non_blocking)
- self.w1_b = self.w1_b.to(device=device, dtype=dtype, non_blocking=non_blocking)
+ self.w1_a = self.w1_a.to(device=device, dtype=dtype)
+ self.w1_b = self.w1_b.to(device=device, dtype=dtype)
if self.w2 is not None:
- self.w2 = self.w2.to(device=device, dtype=dtype, non_blocking=non_blocking)
+ self.w2 = self.w2.to(device=device, dtype=dtype)
else:
assert self.w2_a is not None
assert self.w2_b is not None
- self.w2_a = self.w2_a.to(device=device, dtype=dtype, non_blocking=non_blocking)
- self.w2_b = self.w2_b.to(device=device, dtype=dtype, non_blocking=non_blocking)
+ self.w2_a = self.w2_a.to(device=device, dtype=dtype)
+ self.w2_b = self.w2_b.to(device=device, dtype=dtype)
if self.t2 is not None:
- self.t2 = self.t2.to(device=device, dtype=dtype, non_blocking=non_blocking)
+ self.t2 = self.t2.to(device=device, dtype=dtype)
class FullLayer(LoRALayerBase):
@@ -320,15 +298,10 @@ class FullLayer(LoRALayerBase):
model_size += self.weight.nelement() * self.weight.element_size()
return model_size
- def to(
- self,
- device: Optional[torch.device] = None,
- dtype: Optional[torch.dtype] = None,
- non_blocking: bool = False,
- ) -> None:
+ def to(self, device: Optional[torch.device] = None, dtype: Optional[torch.dtype] = None) -> None:
super().to(device=device, dtype=dtype)
- self.weight = self.weight.to(device=device, dtype=dtype, non_blocking=non_blocking)
+ self.weight = self.weight.to(device=device, dtype=dtype)
class IA3Layer(LoRALayerBase):
@@ -360,16 +333,11 @@ class IA3Layer(LoRALayerBase):
model_size += self.on_input.nelement() * self.on_input.element_size()
return model_size
- def to(
- self,
- device: Optional[torch.device] = None,
- dtype: Optional[torch.dtype] = None,
- non_blocking: bool = False,
- ):
+ def to(self, device: Optional[torch.device] = None, dtype: Optional[torch.dtype] = None):
super().to(device=device, dtype=dtype)
- self.weight = self.weight.to(device=device, dtype=dtype, non_blocking=non_blocking)
- self.on_input = self.on_input.to(device=device, dtype=dtype, non_blocking=non_blocking)
+ self.weight = self.weight.to(device=device, dtype=dtype)
+ self.on_input = self.on_input.to(device=device, dtype=dtype)
AnyLoRALayer = Union[LoRALayer, LoHALayer, LoKRLayer, FullLayer, IA3Layer]
@@ -391,15 +359,10 @@ class LoRAModelRaw(RawModel): # (torch.nn.Module):
def name(self) -> str:
return self._name
- def to(
- self,
- device: Optional[torch.device] = None,
- dtype: Optional[torch.dtype] = None,
- non_blocking: bool = False,
- ) -> None:
+ def to(self, device: Optional[torch.device] = None, dtype: Optional[torch.dtype] = None) -> None:
# TODO: try revert if exception?
for _key, layer in self.layers.items():
- layer.to(device=device, dtype=dtype, non_blocking=non_blocking)
+ layer.to(device=device, dtype=dtype)
def calc_size(self) -> int:
model_size = 0
@@ -522,7 +485,7 @@ class LoRAModelRaw(RawModel): # (torch.nn.Module):
# lower memory consumption by removing already parsed layer values
state_dict[layer_key].clear()
- layer.to(device=device, dtype=dtype, non_blocking=TorchDevice.get_non_blocking(device))
+ layer.to(device=device, dtype=dtype)
model.layers[layer_key] = layer
return model
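All of the lora.py hunks make the same mechanical change: the `non_blocking` parameter is dropped from every `to()` override, so layer tensors now move synchronously. The resulting pattern, as a self-contained sketch (simplified; not the actual class):

```python
from typing import Optional

import torch


class ExampleLayer:
    """Sketch of the simplified to() contract shared by the LoRA layer classes above."""

    def __init__(self, weight: torch.Tensor, bias: Optional[torch.Tensor] = None) -> None:
        self.weight = weight
        self.bias = bias

    def to(self, device: Optional[torch.device] = None, dtype: Optional[torch.dtype] = None) -> None:
        # Tensor.to() returns a new tensor, so each attribute must be reassigned.
        self.weight = self.weight.to(device=device, dtype=dtype)
        if self.bias is not None:
            self.bias = self.bias.to(device=device, dtype=dtype)
```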
diff --git a/invokeai/backend/model_manager/__init__.py b/invokeai/backend/model_manager/__init__.py
index 98cc5054c7..199c0c01f7 100644
--- a/invokeai/backend/model_manager/__init__.py
+++ b/invokeai/backend/model_manager/__init__.py
@@ -1,6 +1,6 @@
"""Re-export frequently-used symbols from the Model Manager backend."""
-from .config import (
+from invokeai.backend.model_manager.config import (
AnyModel,
AnyModelConfig,
BaseModelType,
@@ -13,9 +13,9 @@ from .config import (
SchedulerPredictionType,
SubModelType,
)
-from .load import LoadedModel
-from .probe import ModelProbe
-from .search import ModelSearch
+from invokeai.backend.model_manager.load import LoadedModel
+from invokeai.backend.model_manager.probe import ModelProbe
+from invokeai.backend.model_manager.search import ModelSearch
__all__ = [
"AnyModel",
diff --git a/invokeai/backend/model_manager/config.py b/invokeai/backend/model_manager/config.py
index d788012dc7..f6cc5929c8 100644
--- a/invokeai/backend/model_manager/config.py
+++ b/invokeai/backend/model_manager/config.py
@@ -30,11 +30,10 @@ from diffusers.models.modeling_utils import ModelMixin
from pydantic import BaseModel, ConfigDict, Discriminator, Field, Tag, TypeAdapter
from typing_extensions import Annotated, Any, Dict
-from invokeai.app.invocations.constants import SCHEDULER_NAME_VALUES
from invokeai.app.util.misc import uuid_string
from invokeai.backend.model_hash.hash_validator import validate_hash
-
-from ..raw_model import RawModel
+from invokeai.backend.raw_model import RawModel
+from invokeai.backend.stable_diffusion.schedulers.schedulers import SCHEDULER_NAME_VALUES
# ModelMixin is the base class for all diffusers and transformers models
# RawModel is the InvokeAI wrapper class for ip_adapters, loras, textual_inversion and onnx runtime
@@ -68,6 +67,7 @@ class ModelType(str, Enum):
IPAdapter = "ip_adapter"
CLIPVision = "clip_vision"
T2IAdapter = "t2i_adapter"
+ SpandrelImageToImage = "spandrel_image_to_image"
class SubModelType(str, Enum):
@@ -372,6 +372,17 @@ class T2IAdapterConfig(DiffusersConfigBase, ControlAdapterConfigBase):
return Tag(f"{ModelType.T2IAdapter.value}.{ModelFormat.Diffusers.value}")
+class SpandrelImageToImageConfig(ModelConfigBase):
+ """Model config for Spandrel Image to Image models."""
+
+ type: Literal[ModelType.SpandrelImageToImage] = ModelType.SpandrelImageToImage
+ format: Literal[ModelFormat.Checkpoint] = ModelFormat.Checkpoint
+
+ @staticmethod
+ def get_tag() -> Tag:
+ return Tag(f"{ModelType.SpandrelImageToImage.value}.{ModelFormat.Checkpoint.value}")
+
+
def get_model_discriminator_value(v: Any) -> str:
"""
Computes the discriminator value for a model config.
@@ -408,6 +419,7 @@ AnyModelConfig = Annotated[
Annotated[IPAdapterInvokeAIConfig, IPAdapterInvokeAIConfig.get_tag()],
Annotated[IPAdapterCheckpointConfig, IPAdapterCheckpointConfig.get_tag()],
Annotated[T2IAdapterConfig, T2IAdapterConfig.get_tag()],
+ Annotated[SpandrelImageToImageConfig, SpandrelImageToImageConfig.get_tag()],
Annotated[CLIPVisionDiffusersConfig, CLIPVisionDiffusersConfig.get_tag()],
],
Discriminator(get_model_discriminator_value),
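Adding `SpandrelImageToImage` support to the config layer takes three coordinated edits, all visible above: a new `ModelType` member, a `SpandrelImageToImageConfig` class with a `get_tag()` static method, and an entry in the `AnyModelConfig` discriminated union. A stripped-down sketch of how the tag-based discriminator resolves raw data (the two config classes and the payload are hypothetical stand-ins):

```python
from typing import Any, Literal, Union

from pydantic import BaseModel, Discriminator, Tag, TypeAdapter
from typing_extensions import Annotated


class SpandrelCheckpointConfig(BaseModel):
    type: Literal["spandrel_image_to_image"] = "spandrel_image_to_image"
    format: Literal["checkpoint"] = "checkpoint"
    path: str


class ControlNetDiffusersConfig(BaseModel):
    type: Literal["controlnet"] = "controlnet"
    format: Literal["diffusers"] = "diffusers"
    path: str


def discriminator(v: Any) -> str:
    # Mirrors get_model_discriminator_value: the tag is "<type>.<format>".
    if isinstance(v, dict):
        return f"{v['type']}.{v['format']}"
    return f"{v.type}.{v.format}"


AnyConfig = Annotated[
    Union[
        Annotated[SpandrelCheckpointConfig, Tag("spandrel_image_to_image.checkpoint")],
        Annotated[ControlNetDiffusersConfig, Tag("controlnet.diffusers")],
    ],
    Discriminator(discriminator),
]

config = TypeAdapter(AnyConfig).validate_python(
    {"type": "spandrel_image_to_image", "format": "checkpoint", "path": "/models/RealESRGAN_x4plus.pth"}
)
assert isinstance(config, SpandrelCheckpointConfig)
```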
diff --git a/invokeai/backend/model_manager/libc_util.py b/invokeai/backend/model_manager/libc_util.py
deleted file mode 100644
index 1fbcae0a93..0000000000
--- a/invokeai/backend/model_manager/libc_util.py
+++ /dev/null
@@ -1,75 +0,0 @@
-import ctypes
-
-
-class Struct_mallinfo2(ctypes.Structure):
- """A ctypes Structure that matches the libc mallinfo2 struct.
-
- Docs:
- - https://man7.org/linux/man-pages/man3/mallinfo.3.html
- - https://www.gnu.org/software/libc/manual/html_node/Statistics-of-Malloc.html
-
- struct mallinfo2 {
- size_t arena; /* Non-mmapped space allocated (bytes) */
- size_t ordblks; /* Number of free chunks */
- size_t smblks; /* Number of free fastbin blocks */
- size_t hblks; /* Number of mmapped regions */
- size_t hblkhd; /* Space allocated in mmapped regions (bytes) */
- size_t usmblks; /* See below */
- size_t fsmblks; /* Space in freed fastbin blocks (bytes) */
- size_t uordblks; /* Total allocated space (bytes) */
- size_t fordblks; /* Total free space (bytes) */
- size_t keepcost; /* Top-most, releasable space (bytes) */
- };
- """
-
- _fields_ = [
- ("arena", ctypes.c_size_t),
- ("ordblks", ctypes.c_size_t),
- ("smblks", ctypes.c_size_t),
- ("hblks", ctypes.c_size_t),
- ("hblkhd", ctypes.c_size_t),
- ("usmblks", ctypes.c_size_t),
- ("fsmblks", ctypes.c_size_t),
- ("uordblks", ctypes.c_size_t),
- ("fordblks", ctypes.c_size_t),
- ("keepcost", ctypes.c_size_t),
- ]
-
- def __str__(self):
- s = ""
- s += f"{'arena': <10}= {(self.arena/2**30):15.5f} # Non-mmapped space allocated (GB) (uordblks + fordblks)\n"
- s += f"{'ordblks': <10}= {(self.ordblks): >15} # Number of free chunks\n"
- s += f"{'smblks': <10}= {(self.smblks): >15} # Number of free fastbin blocks \n"
- s += f"{'hblks': <10}= {(self.hblks): >15} # Number of mmapped regions \n"
- s += f"{'hblkhd': <10}= {(self.hblkhd/2**30):15.5f} # Space allocated in mmapped regions (GB)\n"
- s += f"{'usmblks': <10}= {(self.usmblks): >15} # Unused\n"
- s += f"{'fsmblks': <10}= {(self.fsmblks/2**30):15.5f} # Space in freed fastbin blocks (GB)\n"
- s += (
- f"{'uordblks': <10}= {(self.uordblks/2**30):15.5f} # Space used by in-use allocations (non-mmapped)"
- " (GB)\n"
- )
- s += f"{'fordblks': <10}= {(self.fordblks/2**30):15.5f} # Space in free blocks (non-mmapped) (GB)\n"
- s += f"{'keepcost': <10}= {(self.keepcost/2**30):15.5f} # Top-most, releasable space (GB)\n"
- return s
-
-
-class LibcUtil:
- """A utility class for interacting with the C Standard Library (`libc`) via ctypes.
-
- Note that this class will raise on __init__() if 'libc.so.6' can't be found. Take care to handle environments where
- this shared library is not available.
-
- TODO: Improve cross-OS compatibility of this class.
- """
-
- def __init__(self):
- self._libc = ctypes.cdll.LoadLibrary("libc.so.6")
-
- def mallinfo2(self) -> Struct_mallinfo2:
- """Calls `libc` `mallinfo2`.
-
- Docs: https://man7.org/linux/man-pages/man3/mallinfo.3.html
- """
- mallinfo2 = self._libc.mallinfo2
- mallinfo2.restype = Struct_mallinfo2
- return mallinfo2()
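The deletion above is a move, not a removal: the `memory_snapshot.py` hunk below now imports `LibcUtil` and `Struct_mallinfo2` from `invokeai.backend.model_manager.util.libc_util`. A usage sketch assuming that new location (Linux-only, since the class loads `libc.so.6` on construction):

```python
from invokeai.backend.model_manager.util.libc_util import LibcUtil

try:
    info = LibcUtil().mallinfo2()  # LoadLibrary raises OSError where libc.so.6 is absent
    print(f"in-use heap, non-mmapped (GB): {info.uordblks / 2**30:.3f}")
except OSError:
    print("libc.so.6 not available on this platform")
```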
diff --git a/invokeai/backend/model_manager/load/__init__.py b/invokeai/backend/model_manager/load/__init__.py
index 3d34f02727..d9a07bc250 100644
--- a/invokeai/backend/model_manager/load/__init__.py
+++ b/invokeai/backend/model_manager/load/__init__.py
@@ -6,10 +6,10 @@ Init file for the model loader.
from importlib import import_module
from pathlib import Path
-from .load_base import LoadedModel, LoadedModelWithoutConfig, ModelLoaderBase
-from .load_default import ModelLoader
-from .model_cache.model_cache_default import ModelCache
-from .model_loader_registry import ModelLoaderRegistry, ModelLoaderRegistryBase
+from invokeai.backend.model_manager.load.load_base import LoadedModel, LoadedModelWithoutConfig, ModelLoaderBase
+from invokeai.backend.model_manager.load.load_default import ModelLoader
+from invokeai.backend.model_manager.load.model_cache.model_cache_default import ModelCache
+from invokeai.backend.model_manager.load.model_loader_registry import ModelLoaderRegistry, ModelLoaderRegistryBase
# This registers the subclasses that implement loaders of specific model types
loaders = [x.stem for x in Path(Path(__file__).parent, "model_loaders").glob("*.py") if x.stem != "__init__"]
diff --git a/invokeai/backend/model_manager/load/memory_snapshot.py b/invokeai/backend/model_manager/load/memory_snapshot.py
index 195e39361b..66dd070963 100644
--- a/invokeai/backend/model_manager/load/memory_snapshot.py
+++ b/invokeai/backend/model_manager/load/memory_snapshot.py
@@ -5,7 +5,7 @@ import psutil
import torch
from typing_extensions import Self
-from ..util.libc_util import LibcUtil, Struct_mallinfo2
+from invokeai.backend.model_manager.util.libc_util import LibcUtil, Struct_mallinfo2
GB = 2**30 # 1 GB
diff --git a/invokeai/backend/model_manager/load/model_cache/model_cache_default.py b/invokeai/backend/model_manager/load/model_cache/model_cache_default.py
index ff9d8e918e..d1a9740fbd 100644
--- a/invokeai/backend/model_manager/load/model_cache/model_cache_default.py
+++ b/invokeai/backend/model_manager/load/model_cache/model_cache_default.py
@@ -29,13 +29,17 @@ import torch
from invokeai.backend.model_manager import AnyModel, SubModelType
from invokeai.backend.model_manager.load.memory_snapshot import MemorySnapshot, get_pretty_snapshot_diff
+from invokeai.backend.model_manager.load.model_cache.model_cache_base import (
+ CacheRecord,
+ CacheStats,
+ ModelCacheBase,
+ ModelLockerBase,
+)
+from invokeai.backend.model_manager.load.model_cache.model_locker import ModelLocker
from invokeai.backend.model_manager.load.model_util import calc_model_size_by_data
from invokeai.backend.util.devices import TorchDevice
from invokeai.backend.util.logging import InvokeAILogger
-from .model_cache_base import CacheRecord, CacheStats, ModelCacheBase, ModelLockerBase
-from .model_locker import ModelLocker
-
# Maximum size of the cache, in gigs
# Default is roughly enough to hold three fp16 diffusers models in RAM simultaneously
DEFAULT_MAX_CACHE_SIZE = 6.0
@@ -160,10 +164,11 @@ class ModelCache(ModelCacheBase[AnyModel]):
key = self._make_cache_key(key, submodel_type)
if key in self._cached_models:
return
- size = calc_model_size_by_data(model)
+ size = calc_model_size_by_data(self.logger, model)
self.make_room(size)
- state_dict = model.state_dict() if isinstance(model, torch.nn.Module) else None
+ running_on_cpu = self.execution_device == torch.device("cpu")
+ state_dict = model.state_dict() if isinstance(model, torch.nn.Module) and not running_on_cpu else None
cache_record = CacheRecord(key=key, model=model, device=self.storage_device, state_dict=state_dict, size=size)
self._cached_models[key] = cache_record
self._cache_stack.append(key)
@@ -285,9 +290,7 @@ class ModelCache(ModelCacheBase[AnyModel]):
else:
new_dict: Dict[str, torch.Tensor] = {}
for k, v in cache_entry.state_dict.items():
- new_dict[k] = v.to(
- target_device, copy=True, non_blocking=TorchDevice.get_non_blocking(target_device)
- )
+ new_dict[k] = v.to(target_device, copy=True)
cache_entry.model.load_state_dict(new_dict, assign=True)
try:
cache_entry.model.to(target_device, non_blocking=TorchDevice.get_non_blocking(target_device))
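Two behavioral tweaks in the cache hunks above: `calc_model_size_by_data()` now takes the cache's logger, and the CPU `state_dict` snapshot is skipped when the execution device is already the CPU (there is nothing to shuttle between devices). The copy loop also drops `non_blocking`. The remaining move-to-device pattern, as a self-contained sketch:

```python
from typing import Dict

import torch

model = torch.nn.Linear(8, 8)
cpu_state_dict = model.state_dict()  # snapshot kept only when executing on another device
target_device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")

# Copy every tensor to the target device, then rebind the module to the copies via
# assign=True instead of copying element-wise into the existing parameters.
new_dict: Dict[str, torch.Tensor] = {k: v.to(target_device, copy=True) for k, v in cpu_state_dict.items()}
model.load_state_dict(new_dict, assign=True)
```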
diff --git a/invokeai/backend/model_manager/load/model_cache/model_locker.py b/invokeai/backend/model_manager/load/model_cache/model_locker.py
index 9de17ca5f5..efbfc726f7 100644
--- a/invokeai/backend/model_manager/load/model_cache/model_locker.py
+++ b/invokeai/backend/model_manager/load/model_cache/model_locker.py
@@ -7,8 +7,11 @@ from typing import Dict, Optional
import torch
from invokeai.backend.model_manager import AnyModel
-
-from .model_cache_base import CacheRecord, ModelCacheBase, ModelLockerBase
+from invokeai.backend.model_manager.load.model_cache.model_cache_base import (
+ CacheRecord,
+ ModelCacheBase,
+ ModelLockerBase,
+)
class ModelLocker(ModelLockerBase):
diff --git a/invokeai/backend/model_manager/load/model_loader_registry.py b/invokeai/backend/model_manager/load/model_loader_registry.py
index bb6bd18d7f..0ce8f8a6b4 100644
--- a/invokeai/backend/model_manager/load/model_loader_registry.py
+++ b/invokeai/backend/model_manager/load/model_loader_registry.py
@@ -18,7 +18,7 @@ Use like this:
from abc import ABC, abstractmethod
from typing import Callable, Dict, Optional, Tuple, Type, TypeVar
-from ..config import (
+from invokeai.backend.model_manager.config import (
AnyModelConfig,
BaseModelType,
ModelConfigBase,
@@ -26,7 +26,7 @@ from ..config import (
ModelType,
SubModelType,
)
-from . import ModelLoaderBase
+from invokeai.backend.model_manager.load import ModelLoaderBase
class ModelLoaderRegistryBase(ABC):
diff --git a/invokeai/backend/model_manager/load/model_loaders/controlnet.py b/invokeai/backend/model_manager/load/model_loaders/controlnet.py
index b2fae37d29..82091874df 100644
--- a/invokeai/backend/model_manager/load/model_loaders/controlnet.py
+++ b/invokeai/backend/model_manager/load/model_loaders/controlnet.py
@@ -13,9 +13,8 @@ from invokeai.backend.model_manager import (
ModelType,
)
from invokeai.backend.model_manager.config import ControlNetCheckpointConfig, SubModelType
-
-from .. import ModelLoaderRegistry
-from .generic_diffusers import GenericDiffusersLoader
+from invokeai.backend.model_manager.load.model_loader_registry import ModelLoaderRegistry
+from invokeai.backend.model_manager.load.model_loaders.generic_diffusers import GenericDiffusersLoader
@ModelLoaderRegistry.register(base=BaseModelType.Any, type=ModelType.ControlNet, format=ModelFormat.Diffusers)
diff --git a/invokeai/backend/model_manager/load/model_loaders/generic_diffusers.py b/invokeai/backend/model_manager/load/model_loaders/generic_diffusers.py
index 6320797b8a..dfe38aa79c 100644
--- a/invokeai/backend/model_manager/load/model_loaders/generic_diffusers.py
+++ b/invokeai/backend/model_manager/load/model_loaders/generic_diffusers.py
@@ -18,8 +18,8 @@ from invokeai.backend.model_manager import (
SubModelType,
)
from invokeai.backend.model_manager.config import DiffusersConfigBase
-
-from .. import ModelLoader, ModelLoaderRegistry
+from invokeai.backend.model_manager.load.load_default import ModelLoader
+from invokeai.backend.model_manager.load.model_loader_registry import ModelLoaderRegistry
@ModelLoaderRegistry.register(base=BaseModelType.Any, type=ModelType.CLIPVision, format=ModelFormat.Diffusers)
diff --git a/invokeai/backend/model_manager/load/model_loaders/lora.py b/invokeai/backend/model_manager/load/model_loaders/lora.py
index aa0acab6bc..367107c662 100644
--- a/invokeai/backend/model_manager/load/model_loaders/lora.py
+++ b/invokeai/backend/model_manager/load/model_loaders/lora.py
@@ -15,9 +15,9 @@ from invokeai.backend.model_manager import (
ModelType,
SubModelType,
)
+from invokeai.backend.model_manager.load.load_default import ModelLoader
from invokeai.backend.model_manager.load.model_cache.model_cache_base import ModelCacheBase
-
-from .. import ModelLoader, ModelLoaderRegistry
+from invokeai.backend.model_manager.load.model_loader_registry import ModelLoaderRegistry
@ModelLoaderRegistry.register(base=BaseModelType.Any, type=ModelType.LoRA, format=ModelFormat.Diffusers)
diff --git a/invokeai/backend/model_manager/load/model_loaders/onnx.py b/invokeai/backend/model_manager/load/model_loaders/onnx.py
index b43e0a1bdf..0a5d8477c4 100644
--- a/invokeai/backend/model_manager/load/model_loaders/onnx.py
+++ b/invokeai/backend/model_manager/load/model_loaders/onnx.py
@@ -13,9 +13,8 @@ from invokeai.backend.model_manager import (
ModelType,
SubModelType,
)
-
-from .. import ModelLoaderRegistry
-from .generic_diffusers import GenericDiffusersLoader
+from invokeai.backend.model_manager.load.model_loader_registry import ModelLoaderRegistry
+from invokeai.backend.model_manager.load.model_loaders.generic_diffusers import GenericDiffusersLoader
@ModelLoaderRegistry.register(base=BaseModelType.Any, type=ModelType.ONNX, format=ModelFormat.ONNX)
diff --git a/invokeai/backend/model_manager/load/model_loaders/spandrel_image_to_image.py b/invokeai/backend/model_manager/load/model_loaders/spandrel_image_to_image.py
new file mode 100644
index 0000000000..7a57c5cf59
--- /dev/null
+++ b/invokeai/backend/model_manager/load/model_loaders/spandrel_image_to_image.py
@@ -0,0 +1,45 @@
+from pathlib import Path
+from typing import Optional
+
+import torch
+
+from invokeai.backend.model_manager.config import (
+ AnyModel,
+ AnyModelConfig,
+ BaseModelType,
+ ModelFormat,
+ ModelType,
+ SubModelType,
+)
+from invokeai.backend.model_manager.load.load_default import ModelLoader
+from invokeai.backend.model_manager.load.model_loader_registry import ModelLoaderRegistry
+from invokeai.backend.spandrel_image_to_image_model import SpandrelImageToImageModel
+
+
+@ModelLoaderRegistry.register(
+ base=BaseModelType.Any, type=ModelType.SpandrelImageToImage, format=ModelFormat.Checkpoint
+)
+class SpandrelImageToImageModelLoader(ModelLoader):
+ """Class for loading Spandrel Image-to-Image models (i.e. models wrapped by spandrel.ImageModelDescriptor)."""
+
+ def _load_model(
+ self,
+ config: AnyModelConfig,
+ submodel_type: Optional[SubModelType] = None,
+ ) -> AnyModel:
+ if submodel_type is not None:
+ raise ValueError("Unexpected submodel requested for Spandrel model.")
+
+ model_path = Path(config.path)
+ model = SpandrelImageToImageModel.load_from_file(model_path)
+
+ torch_dtype = self._torch_dtype
+ if not model.supports_dtype(torch_dtype):
+ self._logger.warning(
+ f"The configured dtype ('{self._torch_dtype}') is not supported by the {model.get_model_type_name()} "
+ "model. Falling back to 'float32'."
+ )
+ torch_dtype = torch.float32
+ model.to(dtype=torch_dtype)
+
+ return model
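For context, `SpandrelImageToImageModel` wraps the spandrel library, which infers an upscaler's architecture from the checkpoint itself. A rough sketch of the load-and-fallback flow above, expressed directly against spandrel's public API (the checkpoint filename is illustrative):

```python
import spandrel
import torch

# spandrel detects the architecture (ESRGAN, SwinIR, ...) from the state dict.
descriptor = spandrel.ModelLoader().load_from_file("RealESRGAN_x4plus.pth")
assert isinstance(descriptor, spandrel.ImageModelDescriptor)

# Mirror the loader's dtype fallback: use the configured dtype when supported, else float32.
dtype = torch.float16 if descriptor.supports_half else torch.float32
descriptor.model.to(dtype=dtype)
```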
diff --git a/invokeai/backend/model_manager/load/model_loaders/stable_diffusion.py b/invokeai/backend/model_manager/load/model_loaders/stable_diffusion.py
index 95caf848e5..d90352f0e6 100644
--- a/invokeai/backend/model_manager/load/model_loaders/stable_diffusion.py
+++ b/invokeai/backend/model_manager/load/model_loaders/stable_diffusion.py
@@ -25,11 +25,10 @@ from invokeai.backend.model_manager.config import (
DiffusersConfigBase,
MainCheckpointConfig,
)
+from invokeai.backend.model_manager.load.model_loader_registry import ModelLoaderRegistry
+from invokeai.backend.model_manager.load.model_loaders.generic_diffusers import GenericDiffusersLoader
from invokeai.backend.util.silence_warnings import SilenceWarnings
-from .. import ModelLoaderRegistry
-from .generic_diffusers import GenericDiffusersLoader
-
VARIANT_TO_IN_CHANNEL_MAP = {
ModelVariantType.Normal: 4,
ModelVariantType.Depth: 5,
diff --git a/invokeai/backend/model_manager/load/model_loaders/textual_inversion.py b/invokeai/backend/model_manager/load/model_loaders/textual_inversion.py
index cfdc689cc8..8d0f08f91a 100644
--- a/invokeai/backend/model_manager/load/model_loaders/textual_inversion.py
+++ b/invokeai/backend/model_manager/load/model_loaders/textual_inversion.py
@@ -12,10 +12,10 @@ from invokeai.backend.model_manager import (
ModelType,
SubModelType,
)
+from invokeai.backend.model_manager.load.load_default import ModelLoader
+from invokeai.backend.model_manager.load.model_loader_registry import ModelLoaderRegistry
from invokeai.backend.textual_inversion import TextualInversionModelRaw
-from .. import ModelLoader, ModelLoaderRegistry
-
@ModelLoaderRegistry.register(base=BaseModelType.Any, type=ModelType.TextualInversion, format=ModelFormat.EmbeddingFile)
@ModelLoaderRegistry.register(
diff --git a/invokeai/backend/model_manager/load/model_loaders/vae.py b/invokeai/backend/model_manager/load/model_loaders/vae.py
index 3c496f59ab..bae29ea773 100644
--- a/invokeai/backend/model_manager/load/model_loaders/vae.py
+++ b/invokeai/backend/model_manager/load/model_loaders/vae.py
@@ -12,9 +12,8 @@ from invokeai.backend.model_manager import (
ModelType,
)
from invokeai.backend.model_manager.config import AnyModel, SubModelType, VAECheckpointConfig
-
-from .. import ModelLoaderRegistry
-from .generic_diffusers import GenericDiffusersLoader
+from invokeai.backend.model_manager.load.model_loader_registry import ModelLoaderRegistry
+from invokeai.backend.model_manager.load.model_loaders.generic_diffusers import GenericDiffusersLoader
@ModelLoaderRegistry.register(base=BaseModelType.Any, type=ModelType.VAE, format=ModelFormat.Diffusers)
diff --git a/invokeai/backend/model_manager/load/model_util.py b/invokeai/backend/model_manager/load/model_util.py
index c55eee48fa..f070a42965 100644
--- a/invokeai/backend/model_manager/load/model_util.py
+++ b/invokeai/backend/model_manager/load/model_util.py
@@ -2,25 +2,47 @@
"""Various utility functions needed by the loader and caching system."""
import json
+import logging
from pathlib import Path
from typing import Optional
import torch
-from diffusers import DiffusionPipeline
+from diffusers.pipelines.pipeline_utils import DiffusionPipeline
+from diffusers.schedulers.scheduling_utils import SchedulerMixin
+from transformers import CLIPTokenizer
+from invokeai.backend.ip_adapter.ip_adapter import IPAdapter
+from invokeai.backend.lora import LoRAModelRaw
from invokeai.backend.model_manager.config import AnyModel
from invokeai.backend.onnx.onnx_runtime import IAIOnnxRuntimeModel
+from invokeai.backend.spandrel_image_to_image_model import SpandrelImageToImageModel
+from invokeai.backend.textual_inversion import TextualInversionModelRaw
-def calc_model_size_by_data(model: AnyModel) -> int:
+def calc_model_size_by_data(logger: logging.Logger, model: AnyModel) -> int:
"""Get size of a model in memory in bytes."""
+ # TODO(ryand): We should create a CacheableModel interface for all models, and move the size calculations down to
+ # the models themselves.
if isinstance(model, DiffusionPipeline):
return _calc_pipeline_by_data(model)
elif isinstance(model, torch.nn.Module):
- return _calc_model_by_data(model)
+ return calc_module_size(model)
elif isinstance(model, IAIOnnxRuntimeModel):
return _calc_onnx_model_by_data(model)
+ elif isinstance(model, SchedulerMixin):
+ return 0
+ elif isinstance(model, CLIPTokenizer):
+ # TODO(ryand): Accurately calculate the tokenizer's size. It's small enough that it shouldn't matter for now.
+ return 0
+ elif isinstance(model, (TextualInversionModelRaw, IPAdapter, LoRAModelRaw, SpandrelImageToImageModel)):
+ return model.calc_size()
else:
+ # TODO(ryand): Promote this from a log to an exception once we are confident that we are handling all of the
+ # supported model types.
+ logger.warning(
+ f"Failed to calculate model size for unexpected model type: {type(model)}. The model will be treated as "
+ "having size 0."
+ )
return 0
@@ -30,11 +52,12 @@ def _calc_pipeline_by_data(pipeline: DiffusionPipeline) -> int:
for submodel_key in pipeline.components.keys():
submodel = getattr(pipeline, submodel_key)
if submodel is not None and isinstance(submodel, torch.nn.Module):
- res += _calc_model_by_data(submodel)
+ res += calc_module_size(submodel)
return res
-def _calc_model_by_data(model: torch.nn.Module) -> int:
+def calc_module_size(model: torch.nn.Module) -> int:
+ """Calculate the size (in bytes) of a torch.nn.Module."""
mem_params = sum([param.nelement() * param.element_size() for param in model.parameters()])
mem_bufs = sum([buf.nelement() * buf.element_size() for buf in model.buffers()])
mem: int = mem_params + mem_bufs # in bytes
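`calc_module_size()` is the renamed, now-public `_calc_model_by_data()`; it sums parameter bytes and buffer bytes. A quick sanity check of the arithmetic (a sketch):

```python
import torch

from invokeai.backend.model_manager.load.model_util import calc_module_size

linear = torch.nn.Linear(4, 4)  # 16 weight + 4 bias = 20 fp32 params, no buffers
assert calc_module_size(linear) == 20 * 4  # 80 bytes

bn = torch.nn.BatchNorm1d(4)  # 8 fp32 params, plus running_mean/running_var/num_batches_tracked buffers
assert calc_module_size(bn) == 8 * 4 + (4 + 4) * 4 + 8  # num_batches_tracked is a single int64
```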
diff --git a/invokeai/backend/model_manager/merge.py b/invokeai/backend/model_manager/merge.py
index 125e99be93..b00bc99f3e 100644
--- a/invokeai/backend/model_manager/merge.py
+++ b/invokeai/backend/model_manager/merge.py
@@ -17,16 +17,10 @@ from diffusers.utils import logging as dlogging
from invokeai.app.services.model_install import ModelInstallServiceBase
from invokeai.app.services.model_records.model_records_base import ModelRecordChanges
+from invokeai.backend.model_manager import AnyModelConfig, BaseModelType, ModelType, ModelVariantType
+from invokeai.backend.model_manager.config import MainDiffusersConfig
from invokeai.backend.util.devices import TorchDevice
-from . import (
- AnyModelConfig,
- BaseModelType,
- ModelType,
- ModelVariantType,
-)
-from .config import MainDiffusersConfig
-
class MergeInterpolationMethod(str, Enum):
WeightedSum = "weighted_sum"
diff --git a/invokeai/backend/model_manager/metadata/__init__.py b/invokeai/backend/model_manager/metadata/__init__.py
index 1fd080b679..76da268153 100644
--- a/invokeai/backend/model_manager/metadata/__init__.py
+++ b/invokeai/backend/model_manager/metadata/__init__.py
@@ -16,8 +16,8 @@ data = HuggingFaceMetadataFetch().from_id("")
assert isinstance(data, HuggingFaceMetadata)
"""
-from .fetch import HuggingFaceMetadataFetch, ModelMetadataFetchBase
-from .metadata_base import (
+from invokeai.backend.model_manager.metadata.fetch import HuggingFaceMetadataFetch, ModelMetadataFetchBase
+from invokeai.backend.model_manager.metadata.metadata_base import (
AnyModelRepoMetadata,
AnyModelRepoMetadataValidator,
BaseMetadata,
diff --git a/invokeai/backend/model_manager/metadata/fetch/__init__.py b/invokeai/backend/model_manager/metadata/fetch/__init__.py
index 652a3cf6b7..62b3dc4d54 100644
--- a/invokeai/backend/model_manager/metadata/fetch/__init__.py
+++ b/invokeai/backend/model_manager/metadata/fetch/__init__.py
@@ -10,7 +10,7 @@ data = HuggingFaceMetadataFetch().from_id("")
assert isinstance(data, HuggingFaceMetadata)
"""
-from .fetch_base import ModelMetadataFetchBase
-from .huggingface import HuggingFaceMetadataFetch
+from invokeai.backend.model_manager.metadata.fetch.fetch_base import ModelMetadataFetchBase
+from invokeai.backend.model_manager.metadata.fetch.huggingface import HuggingFaceMetadataFetch
__all__ = ["ModelMetadataFetchBase", "HuggingFaceMetadataFetch"]
diff --git a/invokeai/backend/model_manager/metadata/fetch/fetch_base.py b/invokeai/backend/model_manager/metadata/fetch/fetch_base.py
index f84479404e..b86a029b3e 100644
--- a/invokeai/backend/model_manager/metadata/fetch/fetch_base.py
+++ b/invokeai/backend/model_manager/metadata/fetch/fetch_base.py
@@ -18,8 +18,11 @@ from pydantic.networks import AnyHttpUrl
from requests.sessions import Session
from invokeai.backend.model_manager import ModelRepoVariant
-
-from ..metadata_base import AnyModelRepoMetadata, AnyModelRepoMetadataValidator, BaseMetadata
+from invokeai.backend.model_manager.metadata.metadata_base import (
+ AnyModelRepoMetadata,
+ AnyModelRepoMetadataValidator,
+ BaseMetadata,
+)
class ModelMetadataFetchBase(ABC):
diff --git a/invokeai/backend/model_manager/metadata/fetch/huggingface.py b/invokeai/backend/model_manager/metadata/fetch/huggingface.py
index ab78b3e064..8787ceeb36 100644
--- a/invokeai/backend/model_manager/metadata/fetch/huggingface.py
+++ b/invokeai/backend/model_manager/metadata/fetch/huggingface.py
@@ -25,14 +25,13 @@ from pydantic.networks import AnyHttpUrl
from requests.sessions import Session
from invokeai.backend.model_manager.config import ModelRepoVariant
-
-from ..metadata_base import (
+from invokeai.backend.model_manager.metadata.fetch.fetch_base import ModelMetadataFetchBase
+from invokeai.backend.model_manager.metadata.metadata_base import (
AnyModelRepoMetadata,
HuggingFaceMetadata,
RemoteModelFile,
UnknownMetadataException,
)
-from .fetch_base import ModelMetadataFetchBase
HF_MODEL_RE = r"https?://huggingface.co/([\w\-.]+/[\w\-.]+)"
diff --git a/invokeai/backend/model_manager/metadata/metadata_base.py b/invokeai/backend/model_manager/metadata/metadata_base.py
index f9f5335d17..97fc598380 100644
--- a/invokeai/backend/model_manager/metadata/metadata_base.py
+++ b/invokeai/backend/model_manager/metadata/metadata_base.py
@@ -24,8 +24,7 @@ from requests.sessions import Session
from typing_extensions import Annotated
from invokeai.backend.model_manager import ModelRepoVariant
-
-from ..util import select_hf_files
+from invokeai.backend.model_manager.util.select_hf_files import filter_files
class UnknownMetadataException(Exception):
@@ -112,9 +111,7 @@ class HuggingFaceMetadata(ModelMetadataWithFiles):
session = session or Session()
configure_http_backend(backend_factory=lambda: session) # used in testing
- paths = select_hf_files.filter_files(
- [x.path for x in self.files], variant, subfolder
- ) # all files in the model
+ paths = filter_files([x.path for x in self.files], variant, subfolder) # all files in the model
prefix = f"{subfolder}/" if subfolder else ""
# the next step reads model_index.json to determine which subdirectories belong
# to the model
diff --git a/invokeai/backend/model_manager/probe.py b/invokeai/backend/model_manager/probe.py
index 2f18f1a8a6..1929b3f4fd 100644
--- a/invokeai/backend/model_manager/probe.py
+++ b/invokeai/backend/model_manager/probe.py
@@ -4,15 +4,14 @@ from pathlib import Path
from typing import Any, Dict, Literal, Optional, Union
import safetensors.torch
+import spandrel
import torch
from picklescan.scanner import scan_file_path
import invokeai.backend.util.logging as logger
from invokeai.app.util.misc import uuid_string
from invokeai.backend.model_hash.model_hash import HASHING_ALGORITHMS, ModelHash
-from invokeai.backend.util.silence_warnings import SilenceWarnings
-
-from .config import (
+from invokeai.backend.model_manager.config import (
AnyModelConfig,
BaseModelType,
ControlAdapterDefaultSettings,
@@ -26,7 +25,9 @@ from .config import (
ModelVariantType,
SchedulerPredictionType,
)
-from .util.model_util import lora_token_vector_length, read_checkpoint_meta
+from invokeai.backend.model_manager.util.model_util import lora_token_vector_length, read_checkpoint_meta
+from invokeai.backend.spandrel_image_to_image_model import SpandrelImageToImageModel
+from invokeai.backend.util.silence_warnings import SilenceWarnings
CkptType = Dict[str | int, Any]
@@ -221,24 +222,46 @@ class ModelProbe(object):
ckpt = ckpt.get("state_dict", ckpt)
for key in [str(k) for k in ckpt.keys()]:
- if any(key.startswith(v) for v in {"cond_stage_model.", "first_stage_model.", "model.diffusion_model."}):
+ if key.startswith(("cond_stage_model.", "first_stage_model.", "model.diffusion_model.")):
return ModelType.Main
- elif any(key.startswith(v) for v in {"encoder.conv_in", "decoder.conv_in"}):
+ elif key.startswith(("encoder.conv_in", "decoder.conv_in")):
return ModelType.VAE
- elif any(key.startswith(v) for v in {"lora_te_", "lora_unet_"}):
+ elif key.startswith(("lora_te_", "lora_unet_")):
return ModelType.LoRA
- elif any(key.endswith(v) for v in {"to_k_lora.up.weight", "to_q_lora.down.weight"}):
+ elif key.endswith(("to_k_lora.up.weight", "to_q_lora.down.weight")):
return ModelType.LoRA
- elif any(key.startswith(v) for v in {"controlnet", "control_model", "input_blocks"}):
+ elif key.startswith(("controlnet", "control_model", "input_blocks")):
return ModelType.ControlNet
- elif any(key.startswith(v) for v in {"image_proj.", "ip_adapter."}):
+ elif key.startswith(("image_proj.", "ip_adapter.")):
return ModelType.IPAdapter
elif key in {"emb_params", "string_to_param"}:
return ModelType.TextualInversion
- else:
- # diffusers-ti
- if len(ckpt) < 10 and all(isinstance(v, torch.Tensor) for v in ckpt.values()):
- return ModelType.TextualInversion
+
+ # diffusers-ti
+ if len(ckpt) < 10 and all(isinstance(v, torch.Tensor) for v in ckpt.values()):
+ return ModelType.TextualInversion
+
+ # Check if the model can be loaded as a SpandrelImageToImageModel.
+ # This check is intentionally performed last, as it can be expensive (it requires loading the model from disk).
+ try:
+ # It would be nice to avoid having to load the Spandrel model from disk here. A couple of options were
+ # explored to avoid this:
+ # 1. Call `SpandrelImageToImageModel.load_from_state_dict(ckpt)`, where `ckpt` is a state_dict on the meta
+ # device. Unfortunately, some Spandrel models perform operations during initialization that are not
+ # supported on meta tensors.
+ # 2. Spandrel has internal logic to determine a model's type from its state_dict before loading the model.
+ # This logic is not exposed in spandrel's public API. We could copy the logic here, but then we have to
+ # maintain it, and the risk of false positive detections is higher.
+ SpandrelImageToImageModel.load_from_file(model_path)
+ return ModelType.SpandrelImageToImage
+ except spandrel.UnsupportedModelError:
+ pass
+ except RuntimeError as e:
+ if "No such file or directory" in str(e):
+ # This error is expected if the model_path does not exist (which is the case in some unit tests).
+ pass
+ else:
+ raise e
raise InvalidModelConfigException(f"Unable to determine model type for {model_path}")
@@ -570,6 +593,11 @@ class T2IAdapterCheckpointProbe(CheckpointProbeBase):
raise NotImplementedError()
+class SpandrelImageToImageCheckpointProbe(CheckpointProbeBase):
+ def get_base_type(self) -> BaseModelType:
+ return BaseModelType.Any
+
+
########################################################
# classes for probing folders
#######################################################
@@ -777,6 +805,11 @@ class CLIPVisionFolderProbe(FolderProbeBase):
return BaseModelType.Any
+class SpandrelImageToImageFolderProbe(FolderProbeBase):
+ def get_base_type(self) -> BaseModelType:
+ raise NotImplementedError()
+
+
class T2IAdapterFolderProbe(FolderProbeBase):
def get_base_type(self) -> BaseModelType:
config_file = self.model_path / "config.json"
@@ -806,6 +839,7 @@ ModelProbe.register_probe("diffusers", ModelType.ControlNet, ControlNetFolderPro
ModelProbe.register_probe("diffusers", ModelType.IPAdapter, IPAdapterFolderProbe)
ModelProbe.register_probe("diffusers", ModelType.CLIPVision, CLIPVisionFolderProbe)
ModelProbe.register_probe("diffusers", ModelType.T2IAdapter, T2IAdapterFolderProbe)
+ModelProbe.register_probe("diffusers", ModelType.SpandrelImageToImage, SpandrelImageToImageFolderProbe)
ModelProbe.register_probe("checkpoint", ModelType.Main, PipelineCheckpointProbe)
ModelProbe.register_probe("checkpoint", ModelType.VAE, VaeCheckpointProbe)
@@ -815,5 +849,6 @@ ModelProbe.register_probe("checkpoint", ModelType.ControlNet, ControlNetCheckpoi
ModelProbe.register_probe("checkpoint", ModelType.IPAdapter, IPAdapterCheckpointProbe)
ModelProbe.register_probe("checkpoint", ModelType.CLIPVision, CLIPVisionCheckpointProbe)
ModelProbe.register_probe("checkpoint", ModelType.T2IAdapter, T2IAdapterCheckpointProbe)
+ModelProbe.register_probe("checkpoint", ModelType.SpandrelImageToImage, SpandrelImageToImageCheckpointProbe)
ModelProbe.register_probe("onnx", ModelType.ONNX, ONNXFolderProbe)
diff --git a/invokeai/backend/model_manager/starter_models.py b/invokeai/backend/model_manager/starter_models.py
index a397a2a5dc..e1526392f6 100644
--- a/invokeai/backend/model_manager/starter_models.py
+++ b/invokeai/backend/model_manager/starter_models.py
@@ -340,6 +340,13 @@ STARTER_MODELS: list[StarterModel] = [
description="Controlnet weights trained on sdxl-1.0 compatible with various lineart processors and black/white sketches by Xinsir.",
type=ModelType.ControlNet,
),
+ StarterModel(
+ name="tile-sdxl",
+ base=BaseModelType.StableDiffusionXL,
+ source="xinsir/controlnet-tile-sdxl-1.0",
+ description="Controlnet weights trained on sdxl-1.0 with tiled image conditioning",
+ type=ModelType.ControlNet,
+ ),
# endregion
# region T2I Adapter
StarterModel(
@@ -399,6 +406,43 @@ STARTER_MODELS: list[StarterModel] = [
type=ModelType.T2IAdapter,
),
# endregion
+ # region SpandrelImageToImage
+ StarterModel(
+ name="RealESRGAN_x4plus_anime_6B",
+ base=BaseModelType.Any,
+ source="https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth",
+ description="A Real-ESRGAN 4x upscaling model (optimized for anime images).",
+ type=ModelType.SpandrelImageToImage,
+ ),
+ StarterModel(
+ name="RealESRGAN_x4plus",
+ base=BaseModelType.Any,
+ source="https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth",
+ description="A Real-ESRGAN 4x upscaling model (general-purpose).",
+ type=ModelType.SpandrelImageToImage,
+ ),
+ StarterModel(
+ name="ESRGAN_SRx4_DF2KOST_official",
+ base=BaseModelType.Any,
+ source="https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.1/ESRGAN_SRx4_DF2KOST_official-ff704c30.pth",
+ description="The official ESRGAN 4x upscaling model.",
+ type=ModelType.SpandrelImageToImage,
+ ),
+ StarterModel(
+ name="RealESRGAN_x2plus",
+ base=BaseModelType.Any,
+ source="https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.1/RealESRGAN_x2plus.pth",
+ description="A Real-ESRGAN 2x upscaling model (general-purpose).",
+ type=ModelType.SpandrelImageToImage,
+ ),
+ StarterModel(
+ name="SwinIR - realSR_BSRGAN_DFOWMFC_s64w8_SwinIR-L_x4_GAN",
+ base=BaseModelType.Any,
+ source="https://github.com/JingyunLiang/SwinIR/releases/download/v0.0/003_realSR_BSRGAN_DFOWMFC_s64w8_SwinIR-L_x4_GAN-with-dict-keys-params-and-params_ema.pth",
+ description="A SwinIR 4x upscaling model.",
+ type=ModelType.SpandrelImageToImage,
+ ),
+ # endregion
]
assert len(STARTER_MODELS) == len({m.source for m in STARTER_MODELS}), "Duplicate starter models"
diff --git a/invokeai/backend/model_manager/util/select_hf_files.py b/invokeai/backend/model_manager/util/select_hf_files.py
index 4a63ab27b7..b0a9551437 100644
--- a/invokeai/backend/model_manager/util/select_hf_files.py
+++ b/invokeai/backend/model_manager/util/select_hf_files.py
@@ -17,7 +17,7 @@ from dataclasses import dataclass
from pathlib import Path
from typing import Dict, List, Optional, Set
-from ..config import ModelRepoVariant
+from invokeai.backend.model_manager.config import ModelRepoVariant
def filter_files(
diff --git a/invokeai/backend/model_patcher.py b/invokeai/backend/model_patcher.py
index 051d114276..d30f7b3167 100644
--- a/invokeai/backend/model_patcher.py
+++ b/invokeai/backend/model_patcher.py
@@ -5,7 +5,7 @@ from __future__ import annotations
import pickle
from contextlib import contextmanager
-from typing import Any, Dict, Generator, Iterator, List, Optional, Tuple, Union
+from typing import Any, Dict, Generator, Iterator, List, Optional, Tuple, Type, Union
import numpy as np
import torch
@@ -13,14 +13,13 @@ from diffusers import OnnxRuntimeModel, UNet2DConditionModel
from transformers import CLIPTextModel, CLIPTextModelWithProjection, CLIPTokenizer
from invokeai.app.shared.models import FreeUConfig
+from invokeai.backend.lora import LoRAModelRaw
from invokeai.backend.model_manager import AnyModel
from invokeai.backend.model_manager.load.optimizations import skip_torch_weight_init
from invokeai.backend.onnx.onnx_runtime import IAIOnnxRuntimeModel
+from invokeai.backend.textual_inversion import TextualInversionManager, TextualInversionModelRaw
from invokeai.backend.util.devices import TorchDevice
-from .lora import LoRAModelRaw
-from .textual_inversion import TextualInversionManager, TextualInversionModelRaw
-
"""
loras = [
(lora_model1, 0.7),
@@ -33,8 +32,27 @@ with LoRAHelper.apply_lora_unet(unet, loras):
"""
-# TODO: rename smth like ModelPatcher and add TI method?
class ModelPatcher:
+ @staticmethod
+ @contextmanager
+ def patch_unet_attention_processor(unet: UNet2DConditionModel, processor_cls: Type[Any]):
+ """A context manager that patches `unet` with the provided attention processor.
+
+ Args:
+ unet (UNet2DConditionModel): The UNet model to patch.
+ processor (Type[Any]): Class which will be initialized for each key and passed to set_attn_processor(...).
+ """
+ unet_orig_processors = unet.attn_processors
+
+ # Create a separate instance for each attention layer so that each one can be modified independently.
+ unet_new_processors = {key: processor_cls() for key in unet_orig_processors.keys()}
+ try:
+ unet.set_attn_processor(unet_new_processors)
+ yield None
+
+ finally:
+ unet.set_attn_processor(unet_orig_processors)
+
@staticmethod
def _resolve_lora_key(model: torch.nn.Module, lora_key: str, prefix: str) -> Tuple[str, torch.nn.Module]:
assert "." not in lora_key
@@ -140,15 +158,12 @@ class ModelPatcher:
# We intentionally move to the target device first, then cast. Experimentally, this was found to
# be significantly faster for 16-bit CPU tensors being moved to a CUDA device than doing the
# same thing in a single call to '.to(...)'.
- layer.to(device=device, non_blocking=TorchDevice.get_non_blocking(device))
- layer.to(dtype=torch.float32, non_blocking=TorchDevice.get_non_blocking(device))
+ layer.to(device=device)
+ layer.to(dtype=torch.float32)
# TODO(ryand): Using torch.autocast(...) over explicit casting may offer a speed benefit on CUDA
# devices here. Experimentally, it was found to be very slow on CPU. More investigation needed.
layer_weight = layer.get_weight(module.weight) * (lora_weight * layer_scale)
- layer.to(
- device=TorchDevice.CPU_DEVICE,
- non_blocking=TorchDevice.get_non_blocking(TorchDevice.CPU_DEVICE),
- )
+ layer.to(device=TorchDevice.CPU_DEVICE)
assert isinstance(layer_weight, torch.Tensor) # mypy thinks layer_weight is a float|Any ??!
if module.weight.shape != layer_weight.shape:
@@ -157,7 +172,7 @@ class ModelPatcher:
layer_weight = layer_weight.reshape(module.weight.shape)
assert isinstance(layer_weight, torch.Tensor) # mypy thinks layer_weight is a float|Any ??!
- module.weight += layer_weight.to(dtype=dtype, non_blocking=TorchDevice.get_non_blocking(device))
+ module.weight += layer_weight.to(dtype=dtype)
yield # wait for context manager exit
@@ -165,9 +180,7 @@ class ModelPatcher:
assert hasattr(model, "get_submodule") # mypy not picking up fact that torch.nn.Module has get_submodule()
with torch.no_grad():
for module_key, weight in original_weights.items():
- model.get_submodule(module_key).weight.copy_(
- weight, non_blocking=TorchDevice.get_non_blocking(weight.device)
- )
+ model.get_submodule(module_key).weight.copy_(weight)
@classmethod
@contextmanager
@@ -338,7 +351,7 @@ class ONNXModelPatcher:
loras: List[Tuple[LoRAModelRaw, float]],
prefix: str,
) -> None:
- from .models.base import IAIOnnxRuntimeModel
+ from invokeai.backend.models.base import IAIOnnxRuntimeModel
if not isinstance(model, IAIOnnxRuntimeModel):
raise Exception("Only IAIOnnxRuntimeModel models supported")
@@ -425,7 +438,7 @@ class ONNXModelPatcher:
text_encoder: IAIOnnxRuntimeModel,
ti_list: List[Tuple[str, Any]],
) -> Iterator[Tuple[CLIPTokenizer, TextualInversionManager]]:
- from .models.base import IAIOnnxRuntimeModel
+ from invokeai.backend.models.base import IAIOnnxRuntimeModel
if not isinstance(text_encoder, IAIOnnxRuntimeModel):
raise Exception("Only IAIOnnxRuntimeModel models supported")
diff --git a/invokeai/backend/onnx/onnx_runtime.py b/invokeai/backend/onnx/onnx_runtime.py
index 9fcd4d093f..a8132d4b23 100644
--- a/invokeai/backend/onnx/onnx_runtime.py
+++ b/invokeai/backend/onnx/onnx_runtime.py
@@ -10,7 +10,7 @@ import torch
from onnx import numpy_helper
from onnxruntime import InferenceSession, SessionOptions, get_available_providers
-from ..raw_model import RawModel
+from invokeai.backend.raw_model import RawModel
ONNX_WEIGHTS_NAME = "model.onnx"
@@ -190,12 +190,7 @@ class IAIOnnxRuntimeModel(RawModel):
return self.session.run(None, inputs)
# compatibility with RawModel ABC
- def to(
- self,
- device: Optional[torch.device] = None,
- dtype: Optional[torch.dtype] = None,
- non_blocking: bool = False,
- ) -> None:
+ def to(self, device: Optional[torch.device] = None, dtype: Optional[torch.dtype] = None) -> None:
pass
# compatibility with diffusers load code
diff --git a/invokeai/backend/raw_model.py b/invokeai/backend/raw_model.py
index 7bca6945d9..23502b20cb 100644
--- a/invokeai/backend/raw_model.py
+++ b/invokeai/backend/raw_model.py
@@ -1,15 +1,3 @@
-"""Base class for 'Raw' models.
-
-The RawModel class is the base class of LoRAModelRaw and TextualInversionModelRaw,
-and is used for type checking of calls to the model patcher. Its main purpose
-is to avoid a circular import issues when lora.py tries to import BaseModelType
-from invokeai.backend.model_manager.config, and the latter tries to import LoRAModelRaw
-from lora.py.
-
-The term 'raw' was introduced to describe a wrapper around a torch.nn.Module
-that adds additional methods and attributes.
-"""
-
from abc import ABC, abstractmethod
from typing import Optional
@@ -17,13 +5,18 @@ import torch
class RawModel(ABC):
- """Abstract base class for 'Raw' model wrappers."""
+ """Base class for 'Raw' models.
+
+ The RawModel class is the base class of LoRAModelRaw, TextualInversionModelRaw, etc.,
+ and is used for type checking of calls to the model patcher. Its main purpose
+ is to avoid circular import issues when lora.py tries to import BaseModelType
+ from invokeai.backend.model_manager.config, and the latter tries to import LoRAModelRaw
+ from lora.py.
+
+
+ The term 'raw' was introduced to describe a wrapper around a torch.nn.Module
+ that adds additional methods and attributes.
+ """
@abstractmethod
- def to(
- self,
- device: Optional[torch.device] = None,
- dtype: Optional[torch.dtype] = None,
- non_blocking: bool = False,
- ) -> None:
+ def to(self, device: Optional[torch.device] = None, dtype: Optional[torch.dtype] = None) -> None:
pass
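The slimmed-down ABC leaves subclasses with a single method to implement. A minimal sketch of a conforming subclass (hypothetical, for illustration only):

```python
from typing import Optional

import torch

from invokeai.backend.raw_model import RawModel


class WrappedModule(RawModel):
    """Hypothetical raw-model wrapper around a plain torch.nn.Module."""

    def __init__(self, module: torch.nn.Module):
        self._module = module

    def to(self, device: Optional[torch.device] = None, dtype: Optional[torch.dtype] = None) -> None:
        # Delegate device/dtype moves to the wrapped module; note that the
        # non_blocking parameter is gone from the ABC after this change.
        self._module.to(device=device, dtype=dtype)
```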
diff --git a/invokeai/backend/spandrel_image_to_image_model.py b/invokeai/backend/spandrel_image_to_image_model.py
new file mode 100644
index 0000000000..ccf02c57ac
--- /dev/null
+++ b/invokeai/backend/spandrel_image_to_image_model.py
@@ -0,0 +1,139 @@
+from pathlib import Path
+from typing import Any, Optional
+
+import numpy as np
+import torch
+from PIL import Image
+from spandrel import ImageModelDescriptor, ModelLoader
+
+from invokeai.backend.raw_model import RawModel
+
+
+class SpandrelImageToImageModel(RawModel):
+ """A wrapper for a Spandrel Image-to-Image model.
+
+ The main reason for having a wrapper class is to integrate with the type handling of RawModel.
+ """
+
+ def __init__(self, spandrel_model: ImageModelDescriptor[Any]):
+ self._spandrel_model = spandrel_model
+
+ @staticmethod
+ def pil_to_tensor(image: Image.Image) -> torch.Tensor:
+ """Convert PIL Image to the torch.Tensor format expected by SpandrelImageToImageModel.run().
+
+ Args:
+ image (Image.Image): A PIL Image with shape (H, W, C) and values in the range [0, 255].
+
+ Returns:
+ torch.Tensor: A torch.Tensor with shape (N, C, H, W) and values in the range [0, 1].
+ """
+ image_np = np.array(image)
+ # (H, W, C) -> (C, H, W)
+ image_np = np.transpose(image_np, (2, 0, 1))
+ image_np = image_np / 255
+ image_tensor = torch.from_numpy(image_np).float()
+ # (C, H, W) -> (N, C, H, W)
+ image_tensor = image_tensor.unsqueeze(0)
+ return image_tensor
+
+ @staticmethod
+ def tensor_to_pil(tensor: torch.Tensor) -> Image.Image:
+ """Convert a torch.Tensor produced by SpandrelImageToImageModel.run() to a PIL Image.
+
+ Args:
+ tensor (torch.Tensor): A torch.Tensor with shape (N, C, H, W) and values in the range [0, 1].
+
+ Returns:
+ Image.Image: A PIL Image with shape (H, W, C) and values in the range [0, 255].
+ """
+ # (N, C, H, W) -> (C, H, W)
+ tensor = tensor.squeeze(0)
+ # (C, H, W) -> (H, W, C)
+ tensor = tensor.permute(1, 2, 0)
+ tensor = tensor.clamp(0, 1)
+ tensor = (tensor * 255).cpu().detach().numpy().astype(np.uint8)
+ image = Image.fromarray(tensor)
+ return image
+
+ def run(self, image_tensor: torch.Tensor) -> torch.Tensor:
+ """Run the image-to-image model.
+
+ Args:
+ image_tensor (torch.Tensor): A torch.Tensor with shape (N, C, H, W) and values in the range [0, 1].
+ """
+ return self._spandrel_model(image_tensor)
+
+ @classmethod
+ def load_from_file(cls, file_path: str | Path):
+ model = ModelLoader().load_from_file(file_path)
+ if not isinstance(model, ImageModelDescriptor):
+ raise ValueError(
+ f"Loaded a spandrel model of type '{type(model)}'. Only image-to-image models are supported "
+ "('ImageModelDescriptor')."
+ )
+
+ return cls(spandrel_model=model)
+
+ @classmethod
+ def load_from_state_dict(cls, state_dict: dict[str, torch.Tensor]):
+ model = ModelLoader().load_from_state_dict(state_dict)
+ if not isinstance(model, ImageModelDescriptor):
+ raise ValueError(
+ f"Loaded a spandrel model of type '{type(model)}'. Only image-to-image models are supported "
+ "('ImageModelDescriptor')."
+ )
+
+ return cls(spandrel_model=model)
+
+ def supports_dtype(self, dtype: torch.dtype) -> bool:
+ """Check if the model supports the given dtype."""
+ if dtype == torch.float16:
+ return self._spandrel_model.supports_half
+ elif dtype == torch.bfloat16:
+ return self._spandrel_model.supports_bfloat16
+ elif dtype == torch.float32:
+ # All models support float32.
+ return True
+ else:
+ raise ValueError(f"Unexpected dtype '{dtype}'.")
+
+ def get_model_type_name(self) -> str:
+ """The model type name. Intended for logging / debugging purposes. Do not rely on this field remaining
+ consistent over time.
+ """
+ return str(type(self._spandrel_model.model))
+
+ def to(
+ self,
+ device: Optional[torch.device] = None,
+ dtype: Optional[torch.dtype] = None,
+ non_blocking: bool = False,
+ ) -> None:
+ """Note: Some models have limited dtype support. Call supports_dtype(...) to check if the dtype is supported.
+ Note: The non_blocking parameter is currently ignored."""
+ # TODO(ryand): spandrel.ImageModelDescriptor.to(...) does not support non_blocking. We will have to access the
+ # model directly if we want to apply this optimization.
+ self._spandrel_model.to(device=device, dtype=dtype)
+
+ @property
+ def device(self) -> torch.device:
+ """The device of the underlying model."""
+ return self._spandrel_model.device
+
+ @property
+ def dtype(self) -> torch.dtype:
+ """The dtype of the underlying model."""
+ return self._spandrel_model.dtype
+
+ @property
+ def scale(self) -> int:
+ """The scale of the model (e.g. 1x, 2x, 4x, etc.)."""
+ return self._spandrel_model.scale
+
+ def calc_size(self) -> int:
+ """Get size of the model in memory in bytes."""
+ # HACK(ryand): Fix this issue with circular imports.
+ from invokeai.backend.model_manager.load.model_util import calc_module_size
+
+ return calc_module_size(self._spandrel_model.model)
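End to end, the new wrapper might be used like this (a sketch; the file paths are placeholders and a CUDA device is assumed to be available):

```python
import torch
from PIL import Image

from invokeai.backend.spandrel_image_to_image_model import SpandrelImageToImageModel

# Load a spandrel-supported checkpoint (e.g. one of the new starter models).
model = SpandrelImageToImageModel.load_from_file("models/RealESRGAN_x4plus.pth")
model.to(device=torch.device("cuda"))

image = Image.open("input.png").convert("RGB")
with torch.no_grad():
    in_tensor = SpandrelImageToImageModel.pil_to_tensor(image).to(model.device)
    out_tensor = model.run(in_tensor)

# For RealESRGAN_x4plus, model.scale == 4, so the output image is 4x larger.
SpandrelImageToImageModel.tensor_to_pil(out_tensor).save("output.png")
```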
diff --git a/invokeai/backend/stable_diffusion/__init__.py b/invokeai/backend/stable_diffusion/__init__.py
index ed6782eefa..440cb4410b 100644
--- a/invokeai/backend/stable_diffusion/__init__.py
+++ b/invokeai/backend/stable_diffusion/__init__.py
@@ -2,9 +2,12 @@
Initialization file for the invokeai.backend.stable_diffusion package
"""
-from .diffusers_pipeline import PipelineIntermediateState, StableDiffusionGeneratorPipeline # noqa: F401
-from .diffusion import InvokeAIDiffuserComponent # noqa: F401
-from .seamless import set_seamless # noqa: F401
+from invokeai.backend.stable_diffusion.diffusers_pipeline import ( # noqa: F401
+ PipelineIntermediateState,
+ StableDiffusionGeneratorPipeline,
+)
+from invokeai.backend.stable_diffusion.diffusion import InvokeAIDiffuserComponent # noqa: F401
+from invokeai.backend.stable_diffusion.seamless import set_seamless # noqa: F401
__all__ = [
"PipelineIntermediateState",
diff --git a/invokeai/backend/stable_diffusion/denoise_context.py b/invokeai/backend/stable_diffusion/denoise_context.py
new file mode 100644
index 0000000000..2b43d3fb0f
--- /dev/null
+++ b/invokeai/backend/stable_diffusion/denoise_context.py
@@ -0,0 +1,131 @@
+from __future__ import annotations
+
+from dataclasses import dataclass, field
+from typing import TYPE_CHECKING, Any, Dict, Optional, Tuple, Type, Union
+
+import torch
+from diffusers import UNet2DConditionModel
+from diffusers.schedulers.scheduling_utils import SchedulerMixin, SchedulerOutput
+
+if TYPE_CHECKING:
+ from invokeai.backend.stable_diffusion.diffusion.conditioning_data import ConditioningMode, TextConditioningData
+
+
+@dataclass
+class UNetKwargs:
+ sample: torch.Tensor
+ timestep: Union[torch.Tensor, float, int]
+ encoder_hidden_states: torch.Tensor
+
+ class_labels: Optional[torch.Tensor] = None
+ timestep_cond: Optional[torch.Tensor] = None
+ attention_mask: Optional[torch.Tensor] = None
+ cross_attention_kwargs: Optional[Dict[str, Any]] = None
+ added_cond_kwargs: Optional[Dict[str, torch.Tensor]] = None
+ down_block_additional_residuals: Optional[Tuple[torch.Tensor]] = None
+ mid_block_additional_residual: Optional[torch.Tensor] = None
+ down_intrablock_additional_residuals: Optional[Tuple[torch.Tensor]] = None
+ encoder_attention_mask: Optional[torch.Tensor] = None
+ # return_dict: bool = True
+
+
+@dataclass
+class DenoiseInputs:
+ """Initial variables passed to denoise. Supposed to be unchanged."""
+
+ # The latent-space image to denoise.
+ # Shape: [batch, channels, latent_height, latent_width]
+ # - If we are inpainting, this is the initial latent image before noise has been added.
+ # - If we are generating a new image, this should be initialized to zeros.
+ # - In some cases, this may be a partially-noised latent image (e.g. when running the SDXL refiner).
+ orig_latents: torch.Tensor
+
+ # kwargs forwarded to the scheduler.step() method.
+ scheduler_step_kwargs: dict[str, Any]
+
+ # Text conditioning data.
+ conditioning_data: TextConditioningData
+
+ # Noise used for two purposes:
+ # 1. Used by the scheduler to noise the initial `latents` before denoising.
+ # 2. Used to noise the `masked_latents` when inpainting.
+ # `noise` should be None if the `latents` tensor has already been noised.
+ # Shape: [1 or batch, channels, latent_height, latent_width]
+ noise: Optional[torch.Tensor]
+
+ # The seed used to generate the noise for the denoising process.
+ # HACK(ryand): seed is only used in a particular case when `noise` is None, but we need to re-generate the
+ # same noise used earlier in the pipeline. This should really be handled in a clearer way.
+ seed: int
+
+ # The timestep schedule for the denoising process.
+ timesteps: torch.Tensor
+
+ # The first timestep in the schedule. This is used to determine the initial noise level, so it
+ # should be populated if you want noise applied even when `timesteps` is empty.
+ init_timestep: torch.Tensor
+
+ # Class of attention processor that is used.
+ attention_processor_cls: Type[Any]
+
+
+@dataclass
+class DenoiseContext:
+ """Context with all variables in denoise"""
+
+ # Initial variables passed to denoise. These are expected to remain unchanged.
+ inputs: DenoiseInputs
+
+ # Scheduler used to apply noise predictions.
+ scheduler: SchedulerMixin
+
+ # UNet model.
+ unet: Optional[UNet2DConditionModel] = None
+
+ # Current state of latent-space image in denoising process.
+ # None until `pre_denoise_loop` callback.
+ # Shape: [batch, channels, latent_height, latent_width]
+ latents: Optional[torch.Tensor] = None
+
+ # Current denoising step index.
+ # None until `pre_step` callback.
+ step_index: Optional[int] = None
+
+ # Current denoising step timestep.
+ # None until `pre_step` callback.
+ timestep: Optional[torch.Tensor] = None
+
+ # Arguments that will be passed to the UNet model.
+ # Available in `pre_unet`/`post_unet` callbacks; otherwise None.
+ unet_kwargs: Optional[UNetKwargs] = None
+
+ # SchedulerOutput returned from the step function (normally generated by the scheduler).
+ # Intended to be used only in the `post_step` callback; otherwise it can be None.
+ step_output: Optional[SchedulerOutput] = None
+
+ # Scaled version of `latents`, which will be passed to unet_kwargs initialization.
+ # Available in callbacks inside step (between `pre_step` and `post_step`).
+ # Shape: [batch, channels, latent_height, latent_width]
+ latent_model_input: Optional[torch.Tensor] = None
+
+ # [TMP] Defines which conditionings the current unet call will be run on.
+ # Available in `pre_unet`/`post_unet` callbacks; otherwise None.
+ conditioning_mode: Optional[ConditioningMode] = None
+
+ # [TMP] Noise predictions from negative conditioning.
+ # Available in `apply_cfg` and `post_apply_cfg` callbacks; otherwise None.
+ # Shape: [batch, channels, latent_height, latent_width]
+ negative_noise_pred: Optional[torch.Tensor] = None
+
+ # [TMP] Noise predictions from positive conditioning.
+ # Available in `apply_cfg` and `post_apply_cfg` callbacks; otherwise None.
+ # Shape: [batch, channels, latent_height, latent_width]
+ positive_noise_pred: Optional[torch.Tensor] = None
+
+ # Combined noise prediction from passed conditionings.
+ # Available in `apply_cfg` and `post_apply_cfg` callbacks; otherwise None.
+ # Shape: [batch, channels, latent_height, latent_width]
+ noise_pred: Optional[torch.Tensor] = None
+
+ # Dictionary for extensions to pass extra info about the denoise process to other extensions.
+ extra: dict = field(default_factory=dict)
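A sketch of how these pieces might be assembled. All tensors, the `scheduler` (after `set_timesteps(...)` has been called), the `unet`, and the `conditioning_data` are assumed to exist; shapes follow the field comments above:

```python
import torch
from diffusers.models.attention_processor import AttnProcessor2_0

from invokeai.backend.stable_diffusion.denoise_context import DenoiseContext, DenoiseInputs

inputs = DenoiseInputs(
    orig_latents=torch.zeros(1, 4, 64, 64),  # start from zeros for txt2img
    scheduler_step_kwargs={},
    conditioning_data=conditioning_data,     # TextConditioningData, built elsewhere
    noise=torch.randn(1, 4, 64, 64),
    seed=42,
    timesteps=scheduler.timesteps,
    init_timestep=scheduler.timesteps[:1],
    attention_processor_cls=AttnProcessor2_0,
)
ctx = DenoiseContext(inputs=inputs, scheduler=scheduler, unet=unet)
# ctx.latents, ctx.step_index, etc. are filled in as the denoise loop runs.
```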
diff --git a/invokeai/backend/stable_diffusion/diffusers_pipeline.py b/invokeai/backend/stable_diffusion/diffusers_pipeline.py
index ee464f73e1..b3a668518b 100644
--- a/invokeai/backend/stable_diffusion/diffusers_pipeline.py
+++ b/invokeai/backend/stable_diffusion/diffusers_pipeline.py
@@ -23,21 +23,12 @@ from invokeai.app.services.config.config_default import get_config
from invokeai.backend.stable_diffusion.diffusion.conditioning_data import IPAdapterData, TextConditioningData
from invokeai.backend.stable_diffusion.diffusion.shared_invokeai_diffusion import InvokeAIDiffuserComponent
from invokeai.backend.stable_diffusion.diffusion.unet_attention_patcher import UNetAttentionPatcher, UNetIPAdapterData
+from invokeai.backend.stable_diffusion.extensions.preview import PipelineIntermediateState
from invokeai.backend.util.attention import auto_detect_slice_size
from invokeai.backend.util.devices import TorchDevice
from invokeai.backend.util.hotfixes import ControlNetModel
-@dataclass
-class PipelineIntermediateState:
- step: int
- order: int
- total_steps: int
- timestep: int
- latents: torch.Tensor
- predicted_original: Optional[torch.Tensor] = None
-
-
@dataclass
class AddsMaskGuidance:
mask: torch.Tensor
diff --git a/invokeai/backend/stable_diffusion/diffusion/__init__.py b/invokeai/backend/stable_diffusion/diffusion/__init__.py
index 854d127a36..712542f79c 100644
--- a/invokeai/backend/stable_diffusion/diffusion/__init__.py
+++ b/invokeai/backend/stable_diffusion/diffusion/__init__.py
@@ -2,4 +2,6 @@
Initialization file for invokeai.models.diffusion
"""
-from .shared_invokeai_diffusion import InvokeAIDiffuserComponent # noqa: F401
+from invokeai.backend.stable_diffusion.diffusion.shared_invokeai_diffusion import (
+ InvokeAIDiffuserComponent, # noqa: F401
+)
diff --git a/invokeai/backend/stable_diffusion/diffusion/conditioning_data.py b/invokeai/backend/stable_diffusion/diffusion/conditioning_data.py
index 85950a01df..5fe1483ebc 100644
--- a/invokeai/backend/stable_diffusion/diffusion/conditioning_data.py
+++ b/invokeai/backend/stable_diffusion/diffusion/conditioning_data.py
@@ -1,10 +1,17 @@
+from __future__ import annotations
+
import math
from dataclasses import dataclass
-from typing import List, Optional, Union
+from enum import Enum
+from typing import TYPE_CHECKING, List, Optional, Tuple, Union
import torch
-from invokeai.backend.ip_adapter.ip_adapter import IPAdapter
+from invokeai.backend.stable_diffusion.diffusion.regional_prompt_data import RegionalPromptData
+
+if TYPE_CHECKING:
+ from invokeai.backend.ip_adapter.ip_adapter import IPAdapter
+ from invokeai.backend.stable_diffusion.denoise_context import UNetKwargs
@dataclass
@@ -95,6 +102,12 @@ class TextConditioningRegions:
assert self.masks.shape[1] == len(self.ranges)
+class ConditioningMode(Enum):
+ Both = "both"
+ Negative = "negative"
+ Positive = "positive"
+
+
class TextConditioningData:
def __init__(
self,
@@ -103,7 +116,7 @@ class TextConditioningData:
uncond_regions: Optional[TextConditioningRegions],
cond_regions: Optional[TextConditioningRegions],
guidance_scale: Union[float, List[float]],
- guidance_rescale_multiplier: float = 0,
+ guidance_rescale_multiplier: float = 0, # TODO: old backend, remove
):
self.uncond_text = uncond_text
self.cond_text = cond_text
@@ -114,6 +127,7 @@ class TextConditioningData:
# Guidance scale is enabled by setting `guidance_scale > 1`. Higher guidance scale encourages to generate
# images that are closely linked to the text `prompt`, usually at the expense of lower image quality.
self.guidance_scale = guidance_scale
+ # TODO: old backend, remove
# For models trained using zero-terminal SNR ("ztsnr"), it's suggested to use guidance_rescale_multiplier of 0.7.
# See [Common Diffusion Noise Schedules and Sample Steps are Flawed](https://arxiv.org/pdf/2305.08891.pdf).
self.guidance_rescale_multiplier = guidance_rescale_multiplier
@@ -121,3 +135,114 @@ class TextConditioningData:
def is_sdxl(self):
assert isinstance(self.uncond_text, SDXLConditioningInfo) == isinstance(self.cond_text, SDXLConditioningInfo)
return isinstance(self.cond_text, SDXLConditioningInfo)
+
+ def to_unet_kwargs(self, unet_kwargs: UNetKwargs, conditioning_mode: ConditioningMode):
+ """Fills unet arguments with data from provided conditionings.
+
+ Args:
+ unet_kwargs (UNetKwargs): Object which stores UNet model arguments.
+ conditioning_mode (ConditioningMode): Describes which conditionings should be used.
+ """
+ _, _, h, w = unet_kwargs.sample.shape
+ device = unet_kwargs.sample.device
+ dtype = unet_kwargs.sample.dtype
+
+ # TODO: combine regions with conditionings
+ if conditioning_mode == ConditioningMode.Both:
+ conditionings = [self.uncond_text, self.cond_text]
+ c_regions = [self.uncond_regions, self.cond_regions]
+ elif conditioning_mode == ConditioningMode.Positive:
+ conditionings = [self.cond_text]
+ c_regions = [self.cond_regions]
+ elif conditioning_mode == ConditioningMode.Negative:
+ conditionings = [self.uncond_text]
+ c_regions = [self.uncond_regions]
+ else:
+ raise ValueError(f"Unexpected conditioning mode: {conditioning_mode}")
+
+ encoder_hidden_states, encoder_attention_mask = self._concat_conditionings_for_batch(
+ [c.embeds for c in conditionings]
+ )
+
+ unet_kwargs.encoder_hidden_states = encoder_hidden_states
+ unet_kwargs.encoder_attention_mask = encoder_attention_mask
+
+ if self.is_sdxl():
+ added_cond_kwargs = dict( # noqa: C408
+ text_embeds=torch.cat([c.pooled_embeds for c in conditionings]),
+ time_ids=torch.cat([c.add_time_ids for c in conditionings]),
+ )
+
+ unet_kwargs.added_cond_kwargs = added_cond_kwargs
+
+ if any(r is not None for r in c_regions):
+ tmp_regions = []
+ for c, r in zip(conditionings, c_regions, strict=True):
+ if r is None:
+ r = TextConditioningRegions(
+ masks=torch.ones((1, 1, h, w), dtype=dtype),
+ ranges=[Range(start=0, end=c.embeds.shape[1])],
+ )
+ tmp_regions.append(r)
+
+ if unet_kwargs.cross_attention_kwargs is None:
+ unet_kwargs.cross_attention_kwargs = {}
+
+ unet_kwargs.cross_attention_kwargs.update(
+ regional_prompt_data=RegionalPromptData(regions=tmp_regions, device=device, dtype=dtype),
+ )
+
+ @staticmethod
+ def _pad_zeros(t: torch.Tensor, pad_shape: tuple, dim: int) -> torch.Tensor:
+ return torch.cat([t, torch.zeros(pad_shape, device=t.device, dtype=t.dtype)], dim=dim)
+
+ @classmethod
+ def _pad_conditioning(
+ cls,
+ cond: torch.Tensor,
+ target_len: int,
+ ) -> Tuple[torch.Tensor, torch.Tensor]:
+ """Pad provided conditioning tensor to target_len by zeros and returns mask of unpadded bytes.
+
+ Args:
+ cond (torch.Tensor): Conditioning tensor which to pads by zeros.
+ target_len (int): To which length(tokens count) pad tensor.
+ """
+ conditioning_attention_mask = torch.ones((cond.shape[0], cond.shape[1]), device=cond.device, dtype=cond.dtype)
+
+ if cond.shape[1] < target_len:
+ conditioning_attention_mask = cls._pad_zeros(
+ conditioning_attention_mask,
+ pad_shape=(cond.shape[0], target_len - cond.shape[1]),
+ dim=1,
+ )
+
+ cond = cls._pad_zeros(
+ cond,
+ pad_shape=(cond.shape[0], target_len - cond.shape[1], cond.shape[2]),
+ dim=1,
+ )
+
+ return cond, conditioning_attention_mask
+
+ @classmethod
+ def _concat_conditionings_for_batch(
+ cls,
+ conditionings: List[torch.Tensor],
+ ) -> Tuple[torch.Tensor, Optional[torch.Tensor]]:
+ """Concatenate provided conditioning tensors to one batched tensor.
+ If tensors have different sizes then pad them by zeros and creates
+ encoder_attention_mask to exclude padding from attention.
+
+ Args:
+ conditionings (List[torch.Tensor]): List of conditioning tensors to concatenate.
+ """
+ encoder_attention_mask = None
+ max_len = max([c.shape[1] for c in conditionings])
+ if any(c.shape[1] != max_len for c in conditionings):
+ encoder_attention_masks = [None] * len(conditionings)
+ for i in range(len(conditionings)):
+ conditionings[i], encoder_attention_masks[i] = cls._pad_conditioning(conditionings[i], max_len)
+ encoder_attention_mask = torch.cat(encoder_attention_masks)
+
+ return torch.cat(conditionings), encoder_attention_mask
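To illustrate the padding behavior in isolation, here is a standalone sketch; the embedding sizes are illustrative (SD1.5-style 77-token chunks with 768-dim embeddings), not taken from this diff:

```python
import torch

uncond = torch.randn(1, 77, 768)  # short negative prompt
cond = torch.randn(1, 154, 768)   # long positive prompt (two 77-token chunks)

max_len = max(uncond.shape[1], cond.shape[1])
padded, masks = [], []
for c in (uncond, cond):
    mask = torch.ones(c.shape[0], max_len)
    mask[:, c.shape[1]:] = 0  # mark the zero-padded tail so attention ignores it
    c = torch.cat([c, torch.zeros(c.shape[0], max_len - c.shape[1], c.shape[2])], dim=1)
    padded.append(c)
    masks.append(mask)

encoder_hidden_states = torch.cat(padded)  # shape: (2, 154, 768)
encoder_attention_mask = torch.cat(masks)  # shape: (2, 154)
```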
diff --git a/invokeai/backend/stable_diffusion/diffusion/regional_prompt_data.py b/invokeai/backend/stable_diffusion/diffusion/regional_prompt_data.py
index f09cc0a0d2..eddd31f0c4 100644
--- a/invokeai/backend/stable_diffusion/diffusion/regional_prompt_data.py
+++ b/invokeai/backend/stable_diffusion/diffusion/regional_prompt_data.py
@@ -1,9 +1,14 @@
+from __future__ import annotations
+
+from typing import TYPE_CHECKING
+
import torch
import torch.nn.functional as F
-from invokeai.backend.stable_diffusion.diffusion.conditioning_data import (
- TextConditioningRegions,
-)
+if TYPE_CHECKING:
+ from invokeai.backend.stable_diffusion.diffusion.conditioning_data import (
+ TextConditioningRegions,
+ )
class RegionalPromptData:
diff --git a/invokeai/backend/stable_diffusion/diffusion_backend.py b/invokeai/backend/stable_diffusion/diffusion_backend.py
new file mode 100644
index 0000000000..806deb5e03
--- /dev/null
+++ b/invokeai/backend/stable_diffusion/diffusion_backend.py
@@ -0,0 +1,140 @@
+from __future__ import annotations
+
+import torch
+from diffusers.models.unets.unet_2d_condition import UNet2DConditionModel
+from diffusers.schedulers.scheduling_utils import SchedulerMixin, SchedulerOutput
+from tqdm.auto import tqdm
+
+from invokeai.app.services.config.config_default import get_config
+from invokeai.backend.stable_diffusion.denoise_context import DenoiseContext, UNetKwargs
+from invokeai.backend.stable_diffusion.diffusion.conditioning_data import ConditioningMode
+from invokeai.backend.stable_diffusion.extension_callback_type import ExtensionCallbackType
+from invokeai.backend.stable_diffusion.extensions_manager import ExtensionsManager
+
+
+class StableDiffusionBackend:
+ def __init__(
+ self,
+ unet: UNet2DConditionModel,
+ scheduler: SchedulerMixin,
+ ):
+ self.unet = unet
+ self.scheduler = scheduler
+ config = get_config()
+ self._sequential_guidance = config.sequential_guidance
+
+ def latents_from_embeddings(self, ctx: DenoiseContext, ext_manager: ExtensionsManager):
+ if ctx.inputs.init_timestep.shape[0] == 0:
+ return ctx.inputs.orig_latents
+
+ ctx.latents = ctx.inputs.orig_latents.clone()
+
+ if ctx.inputs.noise is not None:
+ batch_size = ctx.latents.shape[0]
+ # latents = noise * self.scheduler.init_noise_sigma # it's like in t2l according to diffusers
+ ctx.latents = ctx.scheduler.add_noise(
+ ctx.latents, ctx.inputs.noise, ctx.inputs.init_timestep.expand(batch_size)
+ )
+
+ # if no work to do, return latents
+ if ctx.inputs.timesteps.shape[0] == 0:
+ return ctx.latents
+
+ # ext: inpaint[pre_denoise_loop, priority=normal] (maybe init, but not sure if it's needed)
+ # ext: preview[pre_denoise_loop, priority=low]
+ ext_manager.run_callback(ExtensionCallbackType.PRE_DENOISE_LOOP, ctx)
+
+ for ctx.step_index, ctx.timestep in enumerate(tqdm(ctx.inputs.timesteps)): # noqa: B020
+ # ext: inpaint (apply mask to latents on non-inpaint models)
+ ext_manager.run_callback(ExtensionCallbackType.PRE_STEP, ctx)
+
+ # ext: tiles? [override: step]
+ ctx.step_output = self.step(ctx, ext_manager)
+
+ # ext: inpaint[post_step, priority=high] (apply mask to preview on non-inpaint models)
+ # ext: preview[post_step, priority=low]
+ ext_manager.run_callback(ExtensionCallbackType.POST_STEP, ctx)
+
+ ctx.latents = ctx.step_output.prev_sample
+
+ # ext: inpaint[post_denoise_loop] (restore unmasked part)
+ ext_manager.run_callback(ExtensionCallbackType.POST_DENOISE_LOOP, ctx)
+ return ctx.latents
+
+ @torch.inference_mode()
+ def step(self, ctx: DenoiseContext, ext_manager: ExtensionsManager) -> SchedulerOutput:
+ ctx.latent_model_input = ctx.scheduler.scale_model_input(ctx.latents, ctx.timestep)
+
+ # TODO: conditionings as list(conditioning_data.to_unet_kwargs - ready)
+ # Note: The current handling of conditioning doesn't feel very future-proof.
+ # This might change in the future as new requirements come up, but for now,
+ # this is the rough plan.
+ if self._sequential_guidance:
+ ctx.negative_noise_pred = self.run_unet(ctx, ext_manager, ConditioningMode.Negative)
+ ctx.positive_noise_pred = self.run_unet(ctx, ext_manager, ConditioningMode.Positive)
+ else:
+ both_noise_pred = self.run_unet(ctx, ext_manager, ConditioningMode.Both)
+ ctx.negative_noise_pred, ctx.positive_noise_pred = both_noise_pred.chunk(2)
+
+ # ext: override apply_cfg
+ ctx.noise_pred = self.apply_cfg(ctx)
+
+ # ext: cfg_rescale [modify_noise_prediction]
+ # TODO: rename
+ ext_manager.run_callback(ExtensionCallbackType.POST_APPLY_CFG, ctx)
+
+ # compute the previous noisy sample x_t -> x_t-1
+ step_output = ctx.scheduler.step(ctx.noise_pred, ctx.timestep, ctx.latents, **ctx.inputs.scheduler_step_kwargs)
+
+ # clean up locals
+ ctx.latent_model_input = None
+ ctx.negative_noise_pred = None
+ ctx.positive_noise_pred = None
+ ctx.noise_pred = None
+
+ return step_output
+
+ @staticmethod
+ def apply_cfg(ctx: DenoiseContext) -> torch.Tensor:
+ guidance_scale = ctx.inputs.conditioning_data.guidance_scale
+ if isinstance(guidance_scale, list):
+ guidance_scale = guidance_scale[ctx.step_index]
+
+ return torch.lerp(ctx.negative_noise_pred, ctx.positive_noise_pred, guidance_scale)
+ # return ctx.negative_noise_pred + guidance_scale * (ctx.positive_noise_pred - ctx.negative_noise_pred)
+
+ def run_unet(self, ctx: DenoiseContext, ext_manager: ExtensionsManager, conditioning_mode: ConditioningMode):
+ sample = ctx.latent_model_input
+ if conditioning_mode == ConditioningMode.Both:
+ sample = torch.cat([sample] * 2)
+
+ ctx.unet_kwargs = UNetKwargs(
+ sample=sample,
+ timestep=ctx.timestep,
+ encoder_hidden_states=None, # set later by conditioning
+ cross_attention_kwargs=dict( # noqa: C408
+ percent_through=ctx.step_index / len(ctx.inputs.timesteps),
+ ),
+ )
+
+ ctx.conditioning_mode = conditioning_mode
+ ctx.inputs.conditioning_data.to_unet_kwargs(ctx.unet_kwargs, ctx.conditioning_mode)
+
+ # ext: controlnet/ip/t2i [pre_unet]
+ ext_manager.run_callback(ExtensionCallbackType.PRE_UNET, ctx)
+
+ # ext: inpaint [pre_unet, priority=low]
+ # or
+ # ext: inpaint [override: unet_forward]
+ noise_pred = self._unet_forward(**vars(ctx.unet_kwargs))
+
+ ext_manager.run_callback(ExtensionCallbackType.POST_UNET, ctx)
+
+ # clean up locals
+ ctx.unet_kwargs = None
+ ctx.conditioning_mode = None
+
+ return noise_pred
+
+ def _unet_forward(self, **kwargs) -> torch.Tensor:
+ return self.unet(**kwargs).sample
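The commented-out line in `apply_cfg` is the familiar classifier-free guidance formula; `torch.lerp` computes exactly the same thing. A quick standalone check:

```python
import torch

neg = torch.randn(1, 4, 64, 64)  # negative noise prediction
pos = torch.randn(1, 4, 64, 64)  # positive noise prediction
guidance_scale = 7.5

# torch.lerp(a, b, w) == a + w * (b - a)
assert torch.allclose(
    torch.lerp(neg, pos, guidance_scale),
    neg + guidance_scale * (pos - neg),
)
```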
diff --git a/invokeai/backend/stable_diffusion/extension_callback_type.py b/invokeai/backend/stable_diffusion/extension_callback_type.py
new file mode 100644
index 0000000000..aaefbd7ed0
--- /dev/null
+++ b/invokeai/backend/stable_diffusion/extension_callback_type.py
@@ -0,0 +1,12 @@
+from enum import Enum
+
+
+class ExtensionCallbackType(Enum):
+ SETUP = "setup"
+ PRE_DENOISE_LOOP = "pre_denoise_loop"
+ POST_DENOISE_LOOP = "post_denoise_loop"
+ PRE_STEP = "pre_step"
+ POST_STEP = "post_step"
+ PRE_UNET = "pre_unet"
+ POST_UNET = "post_unet"
+ POST_APPLY_CFG = "post_apply_cfg"
diff --git a/invokeai/backend/stable_diffusion/extensions/base.py b/invokeai/backend/stable_diffusion/extensions/base.py
new file mode 100644
index 0000000000..802af86e6d
--- /dev/null
+++ b/invokeai/backend/stable_diffusion/extensions/base.py
@@ -0,0 +1,60 @@
+from __future__ import annotations
+
+from contextlib import contextmanager
+from dataclasses import dataclass
+from typing import TYPE_CHECKING, Callable, Dict, List
+
+import torch
+from diffusers import UNet2DConditionModel
+
+if TYPE_CHECKING:
+ from invokeai.backend.stable_diffusion.denoise_context import DenoiseContext
+ from invokeai.backend.stable_diffusion.extension_callback_type import ExtensionCallbackType
+
+
+@dataclass
+class CallbackMetadata:
+ callback_type: ExtensionCallbackType
+ order: int
+
+
+@dataclass
+class CallbackFunctionWithMetadata:
+ metadata: CallbackMetadata
+ function: Callable[[DenoiseContext], None]
+
+
+def callback(callback_type: ExtensionCallbackType, order: int = 0):
+ def _decorator(function):
+ function._ext_metadata = CallbackMetadata(
+ callback_type=callback_type,
+ order=order,
+ )
+ return function
+
+ return _decorator
+
+
+class ExtensionBase:
+ def __init__(self):
+ self._callbacks: Dict[ExtensionCallbackType, List[CallbackFunctionWithMetadata]] = {}
+
+ # Register all of the callback methods for this instance.
+ for func_name in dir(self):
+ func = getattr(self, func_name)
+ metadata = getattr(func, "_ext_metadata", None)
+ if metadata is not None and isinstance(metadata, CallbackMetadata):
+ if metadata.callback_type not in self._callbacks:
+ self._callbacks[metadata.callback_type] = []
+ self._callbacks[metadata.callback_type].append(CallbackFunctionWithMetadata(metadata, func))
+
+ def get_callbacks(self):
+ return self._callbacks
+
+ @contextmanager
+ def patch_extension(self, context: DenoiseContext):
+ yield None
+
+ @contextmanager
+ def patch_unet(self, state_dict: Dict[str, torch.Tensor], unet: UNet2DConditionModel):
+ yield None
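A hypothetical extension built on this base class, to show what the `@callback` decorator and the automatic registration in `ExtensionBase.__init__` provide (not part of this diff):

```python
from invokeai.backend.stable_diffusion.denoise_context import DenoiseContext
from invokeai.backend.stable_diffusion.extension_callback_type import ExtensionCallbackType
from invokeai.backend.stable_diffusion.extensions.base import ExtensionBase, callback


class LoggingExt(ExtensionBase):
    """Hypothetical extension that logs progress around the denoise loop."""

    @callback(ExtensionCallbackType.PRE_STEP, order=-100)  # negative order runs early
    def log_step(self, ctx: DenoiseContext):
        print(f"step {ctx.step_index} at timestep {int(ctx.timestep)}")

    @callback(ExtensionCallbackType.POST_DENOISE_LOOP)
    def log_done(self, ctx: DenoiseContext):
        print("denoise loop finished")
```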
diff --git a/invokeai/backend/stable_diffusion/extensions/preview.py b/invokeai/backend/stable_diffusion/extensions/preview.py
new file mode 100644
index 0000000000..fd0cc0b9a3
--- /dev/null
+++ b/invokeai/backend/stable_diffusion/extensions/preview.py
@@ -0,0 +1,63 @@
+from __future__ import annotations
+
+from dataclasses import dataclass
+from typing import TYPE_CHECKING, Callable, Optional
+
+import torch
+
+from invokeai.backend.stable_diffusion.extension_callback_type import ExtensionCallbackType
+from invokeai.backend.stable_diffusion.extensions.base import ExtensionBase, callback
+
+if TYPE_CHECKING:
+ from invokeai.backend.stable_diffusion.denoise_context import DenoiseContext
+
+
+# TODO: change event to accept image instead of latents
+@dataclass
+class PipelineIntermediateState:
+ step: int
+ order: int
+ total_steps: int
+ timestep: int
+ latents: torch.Tensor
+ predicted_original: Optional[torch.Tensor] = None
+
+
+class PreviewExt(ExtensionBase):
+ def __init__(self, callback: Callable[[PipelineIntermediateState], None]):
+ super().__init__()
+ self.callback = callback
+
+ # Run last so that all other changes are shown.
+ @callback(ExtensionCallbackType.PRE_DENOISE_LOOP, order=1000)
+ def initial_preview(self, ctx: DenoiseContext):
+ self.callback(
+ PipelineIntermediateState(
+ step=-1,
+ order=ctx.scheduler.order,
+ total_steps=len(ctx.inputs.timesteps),
+ timestep=int(ctx.scheduler.config.num_train_timesteps), # TODO: is there any code which uses it?
+ latents=ctx.latents,
+ )
+ )
+
+ # Run last so that all other changes are shown.
+ @callback(ExtensionCallbackType.POST_STEP, order=1000)
+ def step_preview(self, ctx: DenoiseContext):
+ if hasattr(ctx.step_output, "denoised"):
+ predicted_original = ctx.step_output.denoised
+ elif hasattr(ctx.step_output, "pred_original_sample"):
+ predicted_original = ctx.step_output.pred_original_sample
+ else:
+ predicted_original = ctx.step_output.prev_sample
+
+ self.callback(
+ PipelineIntermediateState(
+ step=ctx.step_index,
+ order=ctx.scheduler.order,
+ total_steps=len(ctx.inputs.timesteps),
+ timestep=int(ctx.timestep), # TODO: is there any code which uses it?
+ latents=ctx.step_output.prev_sample,
+ predicted_original=predicted_original, # TODO: is there any reason for additional field?
+ )
+ )
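Wiring the preview extension up looks roughly like this (a sketch; in the real invocation code the callback would forward progress events to the frontend rather than print):

```python
from invokeai.backend.stable_diffusion.extensions.preview import (
    PipelineIntermediateState,
    PreviewExt,
)
from invokeai.backend.stable_diffusion.extensions_manager import ExtensionsManager


def on_progress(state: PipelineIntermediateState) -> None:
    print(f"step {state.step + 1}/{state.total_steps}")


ext_manager = ExtensionsManager()
ext_manager.add_extension(PreviewExt(on_progress))
# StableDiffusionBackend.latents_from_embeddings(ctx, ext_manager) will now call
# on_progress once before the loop (step=-1) and once after every step.
```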
diff --git a/invokeai/backend/stable_diffusion/extensions_manager.py b/invokeai/backend/stable_diffusion/extensions_manager.py
new file mode 100644
index 0000000000..1cae2e4219
--- /dev/null
+++ b/invokeai/backend/stable_diffusion/extensions_manager.py
@@ -0,0 +1,71 @@
+from __future__ import annotations
+
+from contextlib import ExitStack, contextmanager
+from typing import TYPE_CHECKING, Callable, Dict, List, Optional
+
+import torch
+from diffusers import UNet2DConditionModel
+
+from invokeai.app.services.session_processor.session_processor_common import CanceledException
+
+if TYPE_CHECKING:
+ from invokeai.backend.stable_diffusion.denoise_context import DenoiseContext
+ from invokeai.backend.stable_diffusion.extension_callback_type import ExtensionCallbackType
+ from invokeai.backend.stable_diffusion.extensions.base import CallbackFunctionWithMetadata, ExtensionBase
+
+
+class ExtensionsManager:
+ def __init__(self, is_canceled: Optional[Callable[[], bool]] = None):
+ self._is_canceled = is_canceled
+
+ # A list of extensions in the order that they were added to the ExtensionsManager.
+ self._extensions: List[ExtensionBase] = []
+ self._ordered_callbacks: Dict[ExtensionCallbackType, List[CallbackFunctionWithMetadata]] = {}
+
+ def add_extension(self, extension: ExtensionBase):
+ self._extensions.append(extension)
+ self._regenerate_ordered_callbacks()
+
+ def _regenerate_ordered_callbacks(self):
+ """Regenerates self._ordered_callbacks. Intended to be called each time a new extension is added."""
+ self._ordered_callbacks = {}
+
+ # Fill the ordered callbacks dictionary.
+ for extension in self._extensions:
+ for callback_type, callbacks in extension.get_callbacks().items():
+ if callback_type not in self._ordered_callbacks:
+ self._ordered_callbacks[callback_type] = []
+ self._ordered_callbacks[callback_type].extend(callbacks)
+
+ # Sort each callback list.
+ for callback_type, callbacks in self._ordered_callbacks.items():
+ # Note that sorted() is stable, so if two callbacks have the same order, the order in which their
+ # extensions were added is preserved.
+ self._ordered_callbacks[callback_type] = sorted(callbacks, key=lambda x: x.metadata.order)
+
+ def run_callback(self, callback_type: ExtensionCallbackType, ctx: DenoiseContext):
+ if self._is_canceled and self._is_canceled():
+ raise CanceledException
+
+ callbacks = self._ordered_callbacks.get(callback_type, [])
+ for cb in callbacks:
+ cb.function(ctx)
+
+ @contextmanager
+ def patch_extensions(self, context: DenoiseContext):
+ if self._is_canceled and self._is_canceled():
+ raise CanceledException
+
+ with ExitStack() as exit_stack:
+ for ext in self._extensions:
+ exit_stack.enter_context(ext.patch_extension(context))
+
+ yield None
+
+ @contextmanager
+ def patch_unet(self, state_dict: Dict[str, torch.Tensor], unet: UNet2DConditionModel):
+ if self._is_canceled and self._is_canceled():
+ raise CanceledException
+
+ # TODO: implement this in the PR that introduces an extension which uses it.
+ yield None
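Because `_regenerate_ordered_callbacks` uses the stable `sorted()`, execution order is determined by `order` first and registration order second. A small sketch, reusing the hypothetical `LoggingExt` and `on_progress` from the earlier sketches (`ctx` is an assumed `DenoiseContext` mid-denoise):

```python
ext_manager = ExtensionsManager(is_canceled=lambda: False)
ext_manager.add_extension(LoggingExt())            # hypothetical, defined above
ext_manager.add_extension(PreviewExt(on_progress))

# For PRE_DENOISE_LOOP, PreviewExt.initial_preview has order=1000, so it runs
# after any default-order (0) callbacks; ties keep registration order.
ext_manager.run_callback(ExtensionCallbackType.PRE_DENOISE_LOOP, ctx)
```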
diff --git a/invokeai/backend/stable_diffusion/multi_diffusion_pipeline.py b/invokeai/backend/stable_diffusion/multi_diffusion_pipeline.py
index 0ddcfdd380..6c07fc1c2c 100644
--- a/invokeai/backend/stable_diffusion/multi_diffusion_pipeline.py
+++ b/invokeai/backend/stable_diffusion/multi_diffusion_pipeline.py
@@ -61,6 +61,7 @@ class MultiDiffusionPipeline(StableDiffusionGeneratorPipeline):
# full noise. Investigate the history of why this got commented out.
# latents = noise * self.scheduler.init_noise_sigma # it's like in t2l according to diffusers
latents = self.scheduler.add_noise(latents, noise, batched_init_timestep)
+ assert isinstance(latents, torch.Tensor) # For static type checking.
# TODO(ryand): Look into the implications of passing in latents here that are larger than they will be after
# cropping into regions.
@@ -122,19 +123,42 @@ class MultiDiffusionPipeline(StableDiffusionGeneratorPipeline):
control_data=region_conditioning.control_data,
)
- # Store the results from the region.
- # If two tiles overlap by more than the target overlap amount, crop the left and top edges of the
- # affected tiles to achieve the target overlap.
+ # Build a region_weight matrix that applies gradient blending to the edges of the region.
region = region_conditioning.region
- top_adjustment = max(0, region.overlap.top - target_overlap)
- left_adjustment = max(0, region.overlap.left - target_overlap)
- region_height_slice = slice(region.coords.top + top_adjustment, region.coords.bottom)
- region_width_slice = slice(region.coords.left + left_adjustment, region.coords.right)
- merged_latents[:, :, region_height_slice, region_width_slice] += step_output.prev_sample[
- :, :, top_adjustment:, left_adjustment:
- ]
- # For now, we treat every region as having the same weight.
- merged_latents_weights[:, :, region_height_slice, region_width_slice] += 1.0
+ _, _, region_height, region_width = step_output.prev_sample.shape
+ region_weight = torch.ones(
+ (1, 1, region_height, region_width),
+ dtype=latents.dtype,
+ device=latents.device,
+ )
+ if region.overlap.left > 0:
+ left_grad = torch.linspace(
+ 0, 1, region.overlap.left, device=latents.device, dtype=latents.dtype
+ ).view((1, 1, 1, -1))
+ region_weight[:, :, :, : region.overlap.left] *= left_grad
+ if region.overlap.top > 0:
+ top_grad = torch.linspace(
+ 0, 1, region.overlap.top, device=latents.device, dtype=latents.dtype
+ ).view((1, 1, -1, 1))
+ region_weight[:, :, : region.overlap.top, :] *= top_grad
+ if region.overlap.right > 0:
+ right_grad = torch.linspace(
+ 1, 0, region.overlap.right, device=latents.device, dtype=latents.dtype
+ ).view((1, 1, 1, -1))
+ region_weight[:, :, :, -region.overlap.right :] *= right_grad
+ if region.overlap.bottom > 0:
+ bottom_grad = torch.linspace(
+ 1, 0, region.overlap.bottom, device=latents.device, dtype=latents.dtype
+ ).view((1, 1, -1, 1))
+ region_weight[:, :, -region.overlap.bottom :, :] *= bottom_grad
+
+ # Update the merged results with the region results.
+ merged_latents[
+ :, :, region.coords.top : region.coords.bottom, region.coords.left : region.coords.right
+ ] += step_output.prev_sample * region_weight
+ merged_latents_weights[
+ :, :, region.coords.top : region.coords.bottom, region.coords.left : region.coords.right
+ ] += region_weight
pred_orig_sample = getattr(step_output, "pred_original_sample", None)
if pred_orig_sample is not None:
@@ -142,9 +166,9 @@ class MultiDiffusionPipeline(StableDiffusionGeneratorPipeline):
# they all use the same scheduler.
if merged_pred_original is None:
merged_pred_original = torch.zeros_like(latents)
- merged_pred_original[:, :, region_height_slice, region_width_slice] += pred_orig_sample[
- :, :, top_adjustment:, left_adjustment:
- ]
+ merged_pred_original[
+ :, :, region.coords.top : region.coords.bottom, region.coords.left : region.coords.right
+ ] += pred_orig_sample
# Normalize the merged results.
latents = torch.where(merged_latents_weights > 0, merged_latents / merged_latents_weights, merged_latents)
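The linear ramps have a useful property: where one tile's trailing `1 -> 0` ramp overlaps its neighbor's leading `0 -> 1` ramp of the same width, the weights sum to exactly 1, so the blend is seamless even before the final normalization. A quick check:

```python
import torch

overlap = 8
leaving = torch.linspace(1, 0, overlap)   # right/bottom edge of one tile
entering = torch.linspace(0, 1, overlap)  # left/top edge of the next tile

assert torch.allclose(leaving + entering, torch.ones(overlap))
```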
diff --git a/invokeai/backend/stable_diffusion/schedulers/__init__.py b/invokeai/backend/stable_diffusion/schedulers/__init__.py
index 0b780d3ee2..6c02acda51 100644
--- a/invokeai/backend/stable_diffusion/schedulers/__init__.py
+++ b/invokeai/backend/stable_diffusion/schedulers/__init__.py
@@ -1,3 +1,3 @@
-from .schedulers import SCHEDULER_MAP # noqa: F401
+from invokeai.backend.stable_diffusion.schedulers.schedulers import SCHEDULER_MAP # noqa: F401
__all__ = ["SCHEDULER_MAP"]
diff --git a/invokeai/backend/stable_diffusion/schedulers/schedulers.py b/invokeai/backend/stable_diffusion/schedulers/schedulers.py
index 3a55d52d4a..7d6851e278 100644
--- a/invokeai/backend/stable_diffusion/schedulers/schedulers.py
+++ b/invokeai/backend/stable_diffusion/schedulers/schedulers.py
@@ -1,3 +1,5 @@
+from typing import Any, Literal, Type
+
from diffusers import (
DDIMScheduler,
DDPMScheduler,
@@ -16,8 +18,36 @@ from diffusers import (
TCDScheduler,
UniPCMultistepScheduler,
)
+from diffusers.schedulers.scheduling_utils import SchedulerMixin
-SCHEDULER_MAP = {
+SCHEDULER_NAME_VALUES = Literal[
+ "ddim",
+ "ddpm",
+ "deis",
+ "lms",
+ "lms_k",
+ "pndm",
+ "heun",
+ "heun_k",
+ "euler",
+ "euler_k",
+ "euler_a",
+ "kdpm_2",
+ "kdpm_2_a",
+ "dpmpp_2s",
+ "dpmpp_2s_k",
+ "dpmpp_2m",
+ "dpmpp_2m_k",
+ "dpmpp_2m_sde",
+ "dpmpp_2m_sde_k",
+ "dpmpp_sde",
+ "dpmpp_sde_k",
+ "unipc",
+ "lcm",
+ "tcd",
+]
+
+SCHEDULER_MAP: dict[SCHEDULER_NAME_VALUES, tuple[Type[SchedulerMixin], dict[str, Any]]] = {
"ddim": (DDIMScheduler, {}),
"ddpm": (DDPMScheduler, {}),
"deis": (DEISMultistepScheduler, {}),
diff --git a/invokeai/backend/stable_diffusion/vae_tiling.py b/invokeai/backend/stable_diffusion/vae_tiling.py
new file mode 100644
index 0000000000..d31cb331f4
--- /dev/null
+++ b/invokeai/backend/stable_diffusion/vae_tiling.py
@@ -0,0 +1,35 @@
+from contextlib import contextmanager
+
+from diffusers.models.autoencoders.autoencoder_kl import AutoencoderKL
+from diffusers.models.autoencoders.autoencoder_tiny import AutoencoderTiny
+
+
+@contextmanager
+def patch_vae_tiling_params(
+ vae: AutoencoderKL | AutoencoderTiny,
+ tile_sample_min_size: int,
+ tile_latent_min_size: int,
+ tile_overlap_factor: float,
+):
+ """Patch the parameters that control the VAE tiling tile size and overlap.
+
+ These parameters are not explicitly exposed in the VAE's API, but they have a significant impact on the quality of
+ the outputs. As a general rule, bigger tiles produce better results, but this comes at the cost of higher memory
+ usage.
+ """
+ # Record initial config.
+ orig_tile_sample_min_size = vae.tile_sample_min_size
+ orig_tile_latent_min_size = vae.tile_latent_min_size
+ orig_tile_overlap_factor = vae.tile_overlap_factor
+
+ try:
+ # Apply target config.
+ vae.tile_sample_min_size = tile_sample_min_size
+ vae.tile_latent_min_size = tile_latent_min_size
+ vae.tile_overlap_factor = tile_overlap_factor
+ yield
+ finally:
+ # Restore initial config.
+ vae.tile_sample_min_size = orig_tile_sample_min_size
+ vae.tile_latent_min_size = orig_tile_latent_min_size
+ vae.tile_overlap_factor = orig_tile_overlap_factor
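A usage sketch: the `vae` and `latents` are assumed to exist, and the tile sizes are illustrative values following the usual 8x pixel-to-latent ratio:

```python
from invokeai.backend.stable_diffusion.vae_tiling import patch_vae_tiling_params

vae.enable_tiling()
with patch_vae_tiling_params(
    vae,
    tile_sample_min_size=1024,  # pixel-space tile size
    tile_latent_min_size=128,   # latent-space tile size (1024 / 8)
    tile_overlap_factor=0.25,
):
    image = vae.decode(latents).sample
# The original tiling parameters are restored here, even if decode() raised.
```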
diff --git a/invokeai/backend/textual_inversion.py b/invokeai/backend/textual_inversion.py
index 0408176edb..0345478b97 100644
--- a/invokeai/backend/textual_inversion.py
+++ b/invokeai/backend/textual_inversion.py
@@ -9,7 +9,7 @@ from safetensors.torch import load_file
from transformers import CLIPTokenizer
from typing_extensions import Self
-from .raw_model import RawModel
+from invokeai.backend.raw_model import RawModel
class TextualInversionModelRaw(RawModel):
@@ -65,17 +65,20 @@ class TextualInversionModelRaw(RawModel):
return result
- def to(
- self,
- device: Optional[torch.device] = None,
- dtype: Optional[torch.dtype] = None,
- non_blocking: bool = False,
- ) -> None:
+ def to(self, device: Optional[torch.device] = None, dtype: Optional[torch.dtype] = None) -> None:
if not torch.cuda.is_available():
return
for emb in [self.embedding, self.embedding_2]:
if emb is not None:
- emb.to(device=device, dtype=dtype, non_blocking=non_blocking)
+ emb.to(device=device, dtype=dtype)
+
+ def calc_size(self) -> int:
+ """Get the size of this model in bytes."""
+ embedding_size = self.embedding.element_size() * self.embedding.nelement()
+ embedding_2_size = 0
+ if self.embedding_2 is not None:
+ embedding_2_size = self.embedding_2.element_size() * self.embedding_2.nelement()
+ return embedding_size + embedding_2_size
class TextualInversionManager(BaseTextualInversionManager):
diff --git a/invokeai/backend/util/__init__.py b/invokeai/backend/util/__init__.py
index 1e4d467cd0..101215640a 100644
--- a/invokeai/backend/util/__init__.py
+++ b/invokeai/backend/util/__init__.py
@@ -2,8 +2,8 @@
Initialization file for invokeai.backend.util
"""
-from .logging import InvokeAILogger
-from .util import GIG, Chdir, directory_size
+from invokeai.backend.util.logging import InvokeAILogger
+from invokeai.backend.util.util import GIG, Chdir, directory_size
__all__ = [
"GIG",
diff --git a/invokeai/backend/util/devices.py b/invokeai/backend/util/devices.py
index 1cba70c662..83ce055024 100644
--- a/invokeai/backend/util/devices.py
+++ b/invokeai/backend/util/devices.py
@@ -112,15 +112,3 @@ class TorchDevice:
@classmethod
def _to_dtype(cls, precision_name: TorchPrecisionNames) -> torch.dtype:
return NAME_TO_PRECISION[precision_name]
-
- @staticmethod
- def get_non_blocking(to_device: torch.device) -> bool:
- """Return the non_blocking flag to be used when moving a tensor to a given device.
- MPS may have unexpected errors with non-blocking operations - we should not use non-blocking when moving _to_ MPS.
- When moving _from_ MPS, we can use non-blocking operations.
-
- See:
- - https://github.com/pytorch/pytorch/issues/107455
- - https://discuss.pytorch.org/t/should-we-set-non-blocking-to-true/38234/28
- """
- return False if to_device.type == "mps" else True
diff --git a/invokeai/frontend/web/public/locales/de.json b/invokeai/frontend/web/public/locales/de.json
index 2da27264a1..352b6ffcbf 100644
--- a/invokeai/frontend/web/public/locales/de.json
+++ b/invokeai/frontend/web/public/locales/de.json
@@ -4,7 +4,7 @@
"reportBugLabel": "Fehler melden",
"settingsLabel": "Einstellungen",
"img2img": "Bild zu Bild",
- "nodes": "Arbeitsabläufe",
+ "nodes": "Workflows",
"upload": "Hochladen",
"load": "Laden",
"statusDisconnected": "Getrennt",
@@ -18,16 +18,16 @@
"postprocessing": "Nachbearbeitung",
"t2iAdapter": "T2I Adapter",
"communityLabel": "Gemeinschaft",
- "dontAskMeAgain": "Frag mich nicht nochmal",
- "areYouSure": "Bist du dir sicher?",
+ "dontAskMeAgain": "Nicht nochmal fragen",
+ "areYouSure": "Bist du sicher?",
"on": "An",
- "nodeEditor": "Knoten Editor",
+ "nodeEditor": "Node-Editor",
"ipAdapter": "IP Adapter",
- "auto": "Automatisch",
+ "auto": "Auto",
"controlNet": "ControlNet",
"imageFailedToLoad": "Kann Bild nicht laden",
"modelManager": "Model Manager",
- "learnMore": "Mehr lernen",
+ "learnMore": "Mehr erfahren",
"loading": "Lade",
"random": "Zufall",
"batch": "Stapel-Manager",
@@ -42,7 +42,7 @@
"outputs": "Ausgabe",
"data": "Daten",
"safetensors": "Safe-Tensors",
- "outpaint": "Outpaint (Außen ausmalen)",
+ "outpaint": "Outpaint",
"details": "Details",
"format": "Format",
"unknown": "Unbekannt",
@@ -78,7 +78,20 @@
"add": "Hinzufügen",
"loglevel": "Protokoll Stufe",
"selected": "Ausgewählt",
- "beta": "Beta"
+ "beta": "Beta",
+ "comparing": "Vergleichen",
+ "comparingDesc": "Bilder vergleichen",
+ "editor": "Editor",
+ "goTo": "Gehe zu",
+ "positivePrompt": "Positiv-Prompt",
+ "negativePrompt": "Negativ-Prompt",
+ "editing": "Bearbeiten",
+ "editingDesc": "Bearbeiten auf der Kontrollebenen-Leinwand",
+ "viewing": "Ansehen",
+ "viewingDesc": "Bilder in großer Galerie ansehen",
+ "tab": "Tabulator",
+ "enabled": "Aktiviert",
+ "disabled": "Ausgeschaltet"
},
"gallery": {
"galleryImageSize": "Bildgröße",
@@ -585,17 +598,18 @@
"mode": "Modus",
"resetUI": "$t(accessibility.reset) von UI",
"createIssue": "Ticket erstellen",
- "about": "Über"
+ "about": "Über",
+ "submitSupportTicket": "Support-Ticket senden"
},
"boards": {
- "autoAddBoard": "Automatisches Hinzufügen zum Board",
- "topMessage": "Dieser Ordner enthält Bilder die in den folgenden Funktionen verwendet werden:",
+ "autoAddBoard": "Board automatisch erstellen",
+ "topMessage": "Dieser Ordner enthält Bilder, die in den folgenden Funktionen verwendet werden:",
"move": "Bewegen",
"menuItemAutoAdd": "Auto-Hinzufügen zu diesem Ordner",
"myBoard": "Meine Ordner",
"searchBoard": "Ordner durchsuchen...",
"noMatching": "Keine passenden Ordner",
- "selectBoard": "Ordner aussuchen",
+ "selectBoard": "Ordner wählen",
"cancel": "Abbrechen",
"addBoard": "Board hinzufügen",
"uncategorized": "Ohne Kategorie",
@@ -605,30 +619,41 @@
"clearSearch": "Suche leeren",
"bottomMessage": "Löschen des Boards und seiner Bilder setzt alle Funktionen zurück, die sie gerade verwenden.",
"deleteBoardOnly": "Nur Ordner löschen",
- "deleteBoard": "Löschen Ordner",
- "deleteBoardAndImages": "Löschen Ordner und Bilder",
- "deletedBoardsCannotbeRestored": "Gelöschte Ordner könnte nicht wiederhergestellt werden",
- "movingImagesToBoard_one": "Verschiebe {{count}} Bild zu Ordner:",
- "movingImagesToBoard_other": "Verschiebe {{count}} Bilder in Ordner:"
+ "deleteBoard": "Lösche Ordner",
+ "deleteBoardAndImages": "Lösche Ordner und Bilder",
+ "deletedBoardsCannotbeRestored": "Gelöschte Ordner können nicht wiederhergestellt werden",
+ "movingImagesToBoard_one": "Verschiebe {{count}} Bild in Ordner:",
+ "movingImagesToBoard_other": "Verschiebe {{count}} Bilder in Ordner:",
+ "selectedForAutoAdd": "Ausgewählt für Automatisches hinzufügen",
+ "imagesWithCount_one": "{{count}} Bild",
+ "imagesWithCount_other": "{{count}} Bilder",
+ "addPrivateBoard": "Privaten Ordner hinzufügen",
+ "addSharedBoard": "Geteilten Ordner hinzufügen",
+ "boards": "Ordner",
+ "unarchiveBoard": "Unarchive Ordner",
+ "private": "Private Ordner",
+ "shared": "Geteilte Ordner",
+ "archiveBoard": "Ordner archivieren",
+ "archived": "Archiviert"
},
"controlnet": {
"showAdvanced": "Zeige Erweitert",
- "contentShuffleDescription": "Mischt den Inhalt von einem Bild",
+ "contentShuffleDescription": "Mischt den Inhalt des Bilds",
"addT2IAdapter": "$t(common.t2iAdapter) hinzufügen",
"importImageFromCanvas": "Bild von Zeichenfläche importieren",
- "lineartDescription": "Konvertiere Bild in Strichzeichnung",
+ "lineartDescription": "Konvertiert Bild in Linienzeichnung",
"importMaskFromCanvas": "Importiere Maske von Zeichenfläche",
"hed": "HED",
"hideAdvanced": "Verstecke Erweitert",
"contentShuffle": "Inhalt mischen",
- "beginEndStepPercent": "Start / Ende Step Prozent",
- "duplicate": "Kopieren",
+ "beginEndStepPercent": "Start/Ende Step Prozent",
+ "duplicate": "Duplizieren",
"f": "F",
"h": "H",
- "depthMidasDescription": "Tiefenmap erstellen mit Midas",
+ "depthMidasDescription": "Z-Map erstellen mit Midas",
"controlnet": "$t(controlnet.controlAdapter_one) #{{number}} ($t(common.controlNet))",
- "weight": "Einfluss",
- "selectModel": "Wähle ein Modell",
+ "weight": "Gewichtung",
+ "selectModel": "Modell wählen",
"depthMidas": "Tiefe (Midas)",
"w": "W",
"addControlNet": "$t(common.controlNet) hinzufügen",
@@ -637,7 +662,7 @@
"ip_adapter": "$t(controlnet.controlAdapter_one) #{{number}} ($t(common.ipAdapter))",
"fill": "Füllen",
"addIPAdapter": "$t(common.ipAdapter) hinzufügen",
- "colorMapDescription": "Erstelle eine Farbkarte von diesem Bild",
+ "colorMapDescription": "Erstellt eine color-map von diesem Bild",
"t2i_adapter": "$t(controlnet.controlAdapter_one) #{{number}} ($t(common.t2iAdapter))",
"imageResolution": "Bild Auflösung",
"depthZoe": "Tiefe (Zoe)",
@@ -646,41 +671,41 @@
"highThreshold": "Hohe Schwelle",
"toggleControlNet": "Dieses ControlNet ein- oder ausschalten",
"delete": "Löschen",
- "controlAdapter_one": "Control Adapter",
- "controlAdapter_other": "Control Adapter",
+ "controlAdapter_one": "Control-Adapter",
+ "controlAdapter_other": "Control-Adapter",
"colorMapTileSize": "Kachelgröße",
- "depthZoeDescription": "Tiefenmap erstellen mit Zoe",
- "setControlImageDimensions": "Setze Control-Bild Auflösung auf Breite/Höhe",
+ "depthZoeDescription": "Z-Map erstellen mit Zoe",
+ "setControlImageDimensions": "Bildauflösung auf optimalen Wert setzen",
"resize": "Größe ändern",
- "resetControlImage": "Zurücksetzen vom Referenz Bild",
+ "resetControlImage": "Zurücksetzen vom Kontrollbild",
"balanced": "Ausgewogen",
"prompt": "Prompt",
- "resizeMode": "Größe",
+ "resizeMode": "Skalierungs-Modus",
"processor": "Prozessor",
- "saveControlImage": "Speichere Referenz Bild",
+ "saveControlImage": "Speichere Kontrollbild",
"safe": "Speichern",
"pidi": "PIDI",
- "normalBae": "Normales BAE",
+ "normalBae": "Normale BAE",
"mlsdDescription": "Minimalistischer Liniensegmentdetektor",
"control": "Kontrolle",
"coarse": "Grob",
"crop": "Zuschneiden",
"pidiDescription": "PIDI-Bildverarbeitung",
- "mediapipeFace": "Mediapipe Gesichter",
+ "mediapipeFace": "Mediapipe Gesicht",
"mlsd": "M-LSD",
- "controlMode": "Steuermodus",
+ "controlMode": "Kontrollmodus",
"cannyDescription": "Canny Umrisserkennung",
"lineart": "Linienzeichnung",
"lineartAnimeDescription": "Lineart-Verarbeitung im Anime-Stil",
"minConfidence": "Minimales Vertrauen",
"megaControl": "Mega-Kontrolle",
- "autoConfigure": "Prozessor Auto-konfig",
+ "autoConfigure": "Prozessor Auto-Konfig",
"normalBaeDescription": "Normale BAE-Verarbeitung",
- "noneDescription": "Es wurde keine Verarbeitung angewendet",
- "lineartAnime": "Lineart Anime / \"Strichzeichnung Anime\"",
+ "noneDescription": "Es wurde nichts angewendet",
+ "lineartAnime": "Strichzeichnung Anime",
"mediapipeFaceDescription": "Gesichtserkennung mit Mediapipe",
- "canny": "\"Canny\"",
- "hedDescription": "Ganzheitlich verschachtelte Kantenerkennung",
+ "canny": "Canny",
+ "hedDescription": "Ganzheitlich-verschachtelte Kantenerkennung",
"scribble": "Scribble",
"maxFaces": "Maximale Anzahl Gesichter",
"resizeSimple": "Größe ändern (einfach)",
@@ -689,21 +714,23 @@
"small": "Klein",
"base": "Basis",
"depthAnything": "Depth Anything",
- "depthAnythingDescription": "Erstellung einer Tiefenkarte mit der Depth-Anything-Technik",
+ "depthAnythingDescription": "Erstellung einer Z-Map mit Depth-Anything-Technik",
"face": "Gesicht",
"body": "Körper",
"hands": "Hände",
"dwOpenpose": "DW Openpose",
"dwOpenposeDescription": "Posenschätzung mit DW Openpose",
- "selectCLIPVisionModel": "Wähle ein CLIP Vision Model aus",
+ "selectCLIPVisionModel": "Wähle ein CLIP-Vision Modell aus",
"ipAdapterMethod": "Methode",
"composition": "Nur Komposition",
"full": "Voll",
- "style": "Nur Style"
+ "style": "Nur Stil",
+ "setControlImageDimensionsForce": "Bildauflösung setzen (ev. nicht optimal)",
+ "beginEndStepPercentShort": "Start/Ende %"
},
"queue": {
"status": "Status",
- "cancelTooltip": "Aktuellen Aufgabe abbrechen",
+ "cancelTooltip": "Aufgabe abbrechen",
"queueEmpty": "Warteschlange leer",
"in_progress": "In Arbeit",
"queueFront": "Am Anfang der Warteschlange einreihen",
@@ -739,7 +766,7 @@
"clearQueueAlertDialog2": "Warteschlange wirklich leeren?",
"pruneSucceeded": "{{item_count}} abgeschlossene Elemente aus der Warteschlange entfernt",
"pauseSucceeded": "Prozess angehalten",
- "cancelFailed": "Problem beim Stornieren des Auftrags",
+ "cancelFailed": "Problem beim Abbrechen",
"pauseFailed": "Problem beim Anhalten des Prozesses",
"front": "Vorne",
"pruneTooltip": "Bereinigen Sie {{item_count}} abgeschlossene Aufträge",
@@ -1026,13 +1053,13 @@
},
"hrf": {
"enableHrf": "Korrektur für hohe Auflösungen",
- "upscaleMethod": "Vergrößerungsmethoden",
+ "upscaleMethod": "Vergrößerungsmethode",
"metadata": {
- "strength": "Hochauflösender Fix Stärke",
- "enabled": "Hochauflösender Fix aktiviert",
- "method": "Hochauflösender Fix Methode"
+ "strength": "Auflösungs-Fix Stärke",
+ "enabled": "Auflösungs-Fix aktiviert",
+ "method": "Auflösungs-Fix Methode"
},
- "hrf": "Hochauflösender Fix"
+ "hrf": "Hohe-Auflösung-Fix"
},
"models": {
"noMatchingModels": "Keine passenden Modelle",
@@ -1063,7 +1090,7 @@
},
"compositing": {
"coherenceTab": "Kohärenzpass",
- "infillTab": "Füllung / Infill",
+ "infillTab": "Infill",
"title": "Compositing"
}
},
@@ -1104,8 +1131,8 @@
"showDynamicPrompts": "Dynamische Prompts anzeigen"
},
"prompt": {
- "noMatchingTriggers": "Keine passenden Auslöser",
- "addPromptTrigger": "Auslöse Text hinzufügen",
+ "noMatchingTriggers": "Keine passenden Trigger",
+ "addPromptTrigger": "Prompt-Trigger hinzufügen",
"compatibleEmbeddings": "Kompatible Einbettungen"
}
}
diff --git a/invokeai/frontend/web/public/locales/en.json b/invokeai/frontend/web/public/locales/en.json
index 6d3829bc71..91a36bcee3 100644
--- a/invokeai/frontend/web/public/locales/en.json
+++ b/invokeai/frontend/web/public/locales/en.json
@@ -17,9 +17,13 @@
},
"boards": {
"addBoard": "Add Board",
+ "addPrivateBoard": "Add Private Board",
+ "addSharedBoard": "Add Shared Board",
"archiveBoard": "Archive Board",
"archived": "Archived",
"autoAddBoard": "Auto-Add Board",
+ "boards": "Boards",
+ "selectedForAutoAdd": "Selected for Auto-Add",
"bottomMessage": "Deleting this board and its images will reset any features currently using them.",
"cancel": "Cancel",
"changeBoard": "Change Board",
@@ -35,8 +39,10 @@
"movingImagesToBoard_other": "Moving {{count}} images to board:",
"myBoard": "My Board",
"noMatching": "No matching Boards",
+ "private": "Private Boards",
"searchBoard": "Search Boards...",
"selectBoard": "Select a Board",
+ "shared": "Shared Boards",
"topMessage": "This board contains images used in the following features:",
"unarchiveBoard": "Unarchive Board",
"uncategorized": "Uncategorized",
@@ -364,6 +370,8 @@
"deleteImage_other": "Delete {{count}} Images",
"deleteImageBin": "Deleted images will be sent to your operating system's Bin.",
"deleteImagePermanent": "Deleted images cannot be restored.",
+ "displayBoardSearch": "Display Board Search",
+ "displaySearch": "Display Search",
"download": "Download",
"featuresWillReset": "If you delete this image, those features will immediately be reset.",
"galleryImageSize": "Image Size",
@@ -392,6 +400,7 @@
"viewerImage": "Viewer Image",
"compareImage": "Compare Image",
"openInViewer": "Open in Viewer",
+ "searchImages": "Search by Metadata",
"selectAllOnPage": "Select All On Page",
"selectAllOnBoard": "Select All On Board",
"showArchivedBoards": "Show Archived Boards",
diff --git a/invokeai/frontend/web/public/locales/fr.json b/invokeai/frontend/web/public/locales/fr.json
index b8f560e265..558545b2f4 100644
--- a/invokeai/frontend/web/public/locales/fr.json
+++ b/invokeai/frontend/web/public/locales/fr.json
@@ -1,7 +1,7 @@
{
"common": {
"hotkeysLabel": "Raccourcis clavier",
- "languagePickerLabel": "Sélecteur de langue",
+ "languagePickerLabel": "Langue",
"reportBugLabel": "Signaler un bug",
"settingsLabel": "Paramètres",
"img2img": "Image en image",
@@ -17,7 +17,53 @@
"cancel": "Annuler",
"loading": "Chargement",
"txt2img": "Texte vers image",
- "postprocessing": "Post-Traitement"
+ "postprocessing": "Post-Traitement",
+ "file": "Fichier",
+ "orderBy": "Trier par",
+ "comparing": "Comparaison",
+ "add": "Ajouter",
+ "dontAskMeAgain": "Ne plus me demander",
+ "nodeEditor": "Éditeur de nœud",
+ "outputs": "Sorties",
+ "unknown": "Inconnu",
+ "editor": "Éditeur",
+ "error": "Erreur",
+ "installed": "Installé",
+ "format": "format",
+ "goTo": "Aller à",
+ "input": "Saisie",
+ "linear": "Linéaire",
+ "localSystem": "Système local",
+ "learnMore": "En savoir plus",
+ "modelManager": "Gestionnaire de modèle",
+ "notInstalled": "Non $t(common.installed)",
+ "openInNewTab": "Ouvrir dans un nouvel onglet",
+ "somethingWentWrong": "Une erreur s'est produite",
+ "created": "Créé",
+ "tab": "Onglet",
+ "folder": "Dossier",
+ "imageFailedToLoad": "Impossible de charger l'image",
+ "prevPage": "Page précédente",
+ "nextPage": "Page suivante",
+ "selected": "Sélectionné",
+ "save": "Enregistrer",
+ "updated": "Mis à jour",
+ "random": "Aléatoire",
+ "unknownError": "Erreur inconnue",
+ "red": "Rouge",
+ "green": "Vert",
+ "delete": "Supprimer",
+ "simple": "Simple",
+ "template": "Modèle",
+ "advanced": "Avancé",
+ "copy": "Copier",
+ "saveAs": "Enregistrer sous",
+ "blue": "Bleu",
+ "alpha": "Alpha",
+ "editing": "Édition",
+ "enabled": "Activé",
+ "disabled": "Désactivé",
+ "direction": "Direction"
},
"gallery": {
"galleryImageSize": "Taille de l'image",
@@ -368,6 +414,63 @@
"previousImage": "Image précédente",
"showOptionsPanel": "Montrer la page d'options",
"invokeProgressBar": "Barre de Progression Invoke",
- "menu": "Menu"
+ "menu": "Menu",
+ "loadMore": "Charger plus",
+ "about": "À propos",
+ "mode": "Mode"
+ },
+ "boards": {
+ "move": "Déplacer",
+ "cancel": "Annuler",
+ "loading": "Chargement…",
+ "archived": "Archivé",
+ "clearSearch": "Effacer la recherche",
+ "imagesWithCount_one": "{{count}} image",
+ "imagesWithCount_many": "{{count}} images",
+ "imagesWithCount_other": "{{count}} images"
+ },
+ "accordions": {
+ "advanced": {
+ "title": "Avancé"
+ },
+ "image": {
+ "title": "Image"
+ }
+ },
+ "controlnet": {
+ "none": "Aucun",
+ "detectResolution": "Détecter la résolution",
+ "balanced": "Équilibré",
+ "colorMap": "Couleur",
+ "control": "Contrôle",
+ "controlMode": "Mode de contrôle",
+ "processor": "Processeur",
+ "ipAdapterMethod": "Méthode",
+ "delete": "Supprimer",
+ "duplicate": "Dupliquer",
+ "crop": "Rogner",
+ "imageResolution": "Résolution d'image",
+ "resize": "Redimensionner"
+ },
+ "queue": {
+ "clear": "Effacer",
+ "failed": "Échec",
+ "session": "Session",
+ "queueEmpty": "File d'attente vide",
+ "next": "Suivant",
+ "queue": "File d'attente",
+ "clearSucceeded": "File d'attente effacée",
+ "total": "Total",
+ "pending": "En attente",
+ "in_progress": "En cours",
+ "time": "Heure",
+ "status": "État",
+ "openQueue": "Ouvrir la file d'attente",
+ "queueFront": "Ajouter en premier",
+ "cancel": "Annuler",
+ "canceled": "Annulé",
+ "clearQueueAlertDialog2": "Voulez-vous vraiment effacer la file d'attente ?",
+ "queueBack": "Ajouter à la file d'attente",
+ "completed": "Terminé"
}
}
diff --git a/invokeai/frontend/web/public/locales/it.json b/invokeai/frontend/web/public/locales/it.json
index 3c0079de59..eced64a1e3 100644
--- a/invokeai/frontend/web/public/locales/it.json
+++ b/invokeai/frontend/web/public/locales/it.json
@@ -116,9 +116,9 @@
"deleteSelection": "Elimina la selezione",
"image": "immagine",
"drop": "Rilascia",
- "unstarImage": "Rimuovi preferenza immagine",
+ "unstarImage": "Rimuovi contrassegno immagine",
"dropOrUpload": "$t(gallery.drop) o carica",
- "starImage": "Immagine preferita",
+ "starImage": "Contrassegna l'immagine",
"dropToUpload": "$t(gallery.drop) per aggiornare",
"problemDeletingImagesDesc": "Impossibile eliminare una o più immagini",
"problemDeletingImages": "Problema durante l'eliminazione delle immagini",
@@ -142,7 +142,15 @@
"compareHelp1": "Tieni premuto Alt mentre fai clic su un'immagine della galleria o usi i tasti freccia per cambiare l'immagine di confronto.",
"compareHelp2": "Premi M per scorrere le modalità di confronto.",
"compareHelp3": "Premi C per scambiare le immagini confrontate.",
- "compareHelp4": "Premi Z o Esc per uscire."
+ "compareHelp4": "Premi Z o Esc per uscire.",
+ "newestFirst": "Prima i più nuovi",
+ "oldestFirst": "Prima i più vecchi",
+ "sortDirection": "Direzione dell'ordinamento",
+ "showStarredImagesFirst": "Mostra prima le immagini contrassegnate",
+ "showArchivedBoards": "Mostra le bacheche archiviate",
+ "searchImages": "Ricerca per metadati",
+ "displayBoardSearch": "Mostra la ricerca nelle Bacheche",
+ "displaySearch": "Mostra la ricerca"
},
"hotkeys": {
"keyboardShortcuts": "Tasti di scelta rapida",
@@ -941,7 +949,22 @@
"deletedBoardsCannotbeRestored": "Le bacheche eliminate non possono essere ripristinate",
"movingImagesToBoard_one": "Spostare {{count}} immagine nella bacheca:",
"movingImagesToBoard_many": "Spostare {{count}} immagini nella bacheca:",
- "movingImagesToBoard_other": "Spostare {{count}} immagini nella bacheca:"
+ "movingImagesToBoard_other": "Spostare {{count}} immagini nella bacheca:",
+ "imagesWithCount_one": "{{count}} immagine",
+ "imagesWithCount_many": "{{count}} immagini",
+ "imagesWithCount_other": "{{count}} immagini",
+ "assetsWithCount_one": "{{count}} risorsa",
+ "assetsWithCount_many": "{{count}} risorse",
+ "assetsWithCount_other": "{{count}} risorse",
+ "archiveBoard": "Archivia la bacheca",
+ "archived": "Archiviato",
+ "unarchiveBoard": "Annulla l'archiviazione della bacheca",
+ "selectedForAutoAdd": "Selezionato per l'aggiunta automatica",
+ "addSharedBoard": "Aggiungi una Bacheca Condivisa",
+ "boards": "Bacheche",
+ "private": "Bacheche private",
+ "shared": "Bacheche condivise",
+ "addPrivateBoard": "Aggiungi una Bacheca Privata"
},
"controlnet": {
"contentShuffleDescription": "Rimescola il contenuto di un'immagine",
@@ -1005,7 +1028,7 @@
"minConfidence": "Confidenza minima",
"scribble": "Scarabocchio",
"amult": "Angolo di illuminazione",
- "coarse": "Approssimativo",
+ "coarse": "Grossolano",
"resizeSimple": "Ridimensiona (semplice)",
"large": "Grande",
"small": "Piccolo",
@@ -1330,7 +1353,7 @@
"lora": {
"heading": "LoRA",
"paragraphs": [
- "Modelli leggeri utilizzati insieme ai modelli base."
+ "Modelli concettuali utilizzati insieme ai modelli di base."
]
},
"controlNet": {
diff --git a/invokeai/frontend/web/public/locales/ja.json b/invokeai/frontend/web/public/locales/ja.json
index e953944c44..6d5775ecd6 100644
--- a/invokeai/frontend/web/public/locales/ja.json
+++ b/invokeai/frontend/web/public/locales/ja.json
@@ -70,7 +70,25 @@
"outputs": "アウトプット",
"prevPage": "前のページ",
"unknownError": "未知のエラー",
- "orderBy": "並び順:"
+ "orderBy": "並び順:",
+ "comparing": "比較中",
+ "comparingDesc": "2 つの画像の比較する",
+ "enabled": "有効",
+ "notInstalled": "未インストール",
+ "positivePrompt": "プロンプト",
+ "negativePrompt": "除外する要素",
+ "selected": "選択済み",
+ "aboutDesc": "Invokeを業務で利用する場合はマークしてください:",
+ "beta": "ベータ",
+ "disabled": "無効",
+ "loglevel": "ログラベル",
+ "editor": "エディタ",
+ "safetensors": "Safetensors",
+ "tab": "タブ",
+ "viewingDesc": "画像を大きなギャラリービューで開く",
+ "editing": "編集",
+ "editingDesc": "コントロールレイヤキャンバスで編集",
+ "toResolve": "解決方法"
},
"gallery": {
"galleryImageSize": "画像のサイズ",
@@ -105,7 +123,23 @@
"noImageSelected": "画像が選択されていません",
"deleteSelection": "選択中のものを削除",
"downloadSelection": "選択中のものをダウンロード",
- "starImage": "スターをつける"
+ "starImage": "スターをつける",
+ "viewerImage": "閲覧画像",
+ "compareImage": "比較画像",
+ "openInViewer": "ビューアで開く",
+ "selectForCompare": "比較対象として選択",
+ "selectAnImageToCompare": "比較する画像を選択",
+ "slider": "スライダー",
+ "sideBySide": "横並び",
+ "hover": "ホバー",
+ "swapImages": "画像を入れ替える",
+ "compareOptions": "比較オプション",
+ "stretchToFit": "画面に合わせる",
+ "exitCompare": "比較を終了する",
+ "compareHelp1": "Alt キーを押しながらギャラリー画像をクリックするか、矢印キーを使用して比較画像を変更します。",
+ "compareHelp3": "C を押して、比較した画像を入れ替えます。",
+ "compareHelp4": "[Z ]または[Esc ]を押して終了します。",
+ "compareHelp2": "M キーを押して比較モードを切り替えます。"
},
"hotkeys": {
"keyboardShortcuts": "ホットキー",
@@ -282,7 +316,8 @@
"title": "手のひらツール"
},
"nextStagingImage": {
- "desc": "次のプレビュー画像"
+ "desc": "次のプレビュー画像",
+ "title": "次のステージング画像"
},
"cancelAndClear": {
"desc": "生成をキャンセルしキューもクリアします",
@@ -302,6 +337,35 @@
"redoStroke": {
"title": "ストロークをやり直す",
"desc": "ブラシストロークのやり直し"
+ },
+ "resetOptionsAndGallery": {
+ "title": "オプションとギャラリーをリセット",
+ "desc": "オプションとギャラリーパネルをリセット"
+ },
+ "quickToggleMove": {
+ "title": "高速トグル切り替え",
+ "desc": "一時的に移動モードを切り替える"
+ },
+ "toggleSnap": {
+ "title": "スナップの切り替え",
+ "desc": "グリッドへのスナップの切り替え"
+ },
+ "previousStagingImage": {
+ "desc": "ステージング領域の前の画像",
+ "title": "前のステージング画像"
+ },
+ "nodesHotkeys": "ノード",
+ "toggleOptionsAndGallery": {
+ "desc": "オプションとギャラリーパネルのオンオフを切り替える",
+ "title": "オプションとギャラリーを切り替える"
+ },
+ "undoStroke": {
+ "desc": "ブラシストロークの取り消し",
+ "title": "ストロークの取り消し"
+ },
+ "toggleViewer": {
+ "desc": "イメージ ビューアーと現在のタブのワークスペースを切り替えます。",
+ "title": "画像ビューアの切り替え"
}
},
"modelManager": {
@@ -464,10 +528,11 @@
"showGalleryPanel": "ギャラリーパネルを表示",
"menu": "メニュー",
"loadMore": "さらに読み込む",
- "createIssue": "課題の作成",
+ "createIssue": "問題を報告",
"resetUI": "$t(accessibility.reset) UI",
"mode": "モード:",
- "about": "Invoke について"
+ "about": "Invoke について",
+ "submitSupportTicket": "サポート依頼を送信する"
},
"controlnet": {
"resize": "リサイズ",
@@ -551,7 +616,13 @@
"lineartAnime": "アニメ線画",
"mlsdDescription": "最小線分検知",
"dwOpenpose": "DW オープンポーズ",
- "dwOpenposeDescription": "DW オープンポーズによる人体ポーズの推定"
+ "dwOpenposeDescription": "DW オープンポーズによる人体ポーズの推定",
+ "ipAdapterMethod": "方式",
+ "setControlImageDimensionsForce": "モデルを無視してサイズを W/H にコピー",
+ "style": "スタイルのみ",
+ "selectCLIPVisionModel": "CLIP Visionのモデルを選択",
+ "composition": "構図のみ",
+ "beginEndStepPercentShort": "開始 / 終了 %"
},
"metadata": {
"seamless": "シームレス",
@@ -620,7 +691,10 @@
"time": "時間",
"completedIn": "完了まで",
"back": "戻る",
- "prune": "刈り込み"
+ "prune": "刈り込み",
+ "prompts_other": "プロンプト",
+ "iterations_other": "繰り返し",
+ "generations_other": "生成"
},
"models": {
"noMatchingModels": "一致するモデルがありません",
@@ -745,5 +819,10 @@
"addPromptTrigger": "プロンプトトリガーを追加",
"compatibleEmbeddings": "互換性のある埋め込み",
"noMatchingTriggers": "一致するトリガーがありません"
+ },
+ "ui": {
+ "tabs": {
+ "queue": "キュー"
+ }
}
}
diff --git a/invokeai/frontend/web/public/locales/ru.json b/invokeai/frontend/web/public/locales/ru.json
index 2f7c711bf2..ecceaf42aa 100644
--- a/invokeai/frontend/web/public/locales/ru.json
+++ b/invokeai/frontend/web/public/locales/ru.json
@@ -144,7 +144,17 @@
"compareOptions": "Варианты сравнения",
"compareHelp1": "Удерживайте Alt при нажатии на изображение в галерее или при помощи клавиш со стрелками, чтобы изменить сравниваемое изображение.",
"compareHelp2": "Нажмите M , чтобы переключиться между режимами сравнения.",
- "compareHelp3": "Нажмите C , чтобы поменять местами сравниваемые изображения."
+ "compareHelp3": "Нажмите C , чтобы поменять местами сравниваемые изображения.",
+ "newestFirst": "Сначала новые",
+ "sortDirection": "Направление сортировки",
+ "oldestFirst": "Сначала старые",
+ "showStarredImagesFirst": "Сначала избранные изображения",
+ "selectAllOnPage": "Выбрать все на странице",
+ "selectAllOnBoard": "Выбрать все на доске",
+ "showArchivedBoards": "Показать архивированные доски",
+ "searchImages": "Поиск по метаданным",
+ "displayBoardSearch": "Отобразить поиск досок",
+ "displaySearch": "Отобразить поиск"
},
"hotkeys": {
"keyboardShortcuts": "Горячие клавиши",
@@ -1043,7 +1053,22 @@
"downloadBoard": "Скачать доску",
"deleteBoard": "Удалить доску",
"deleteBoardAndImages": "Удалить доску и изображения",
- "deletedBoardsCannotbeRestored": "Удаленные доски не подлежат восстановлению"
+ "deletedBoardsCannotbeRestored": "Удаленные доски не подлежат восстановлению",
+ "assetsWithCount_one": "{{count}} ассет",
+ "assetsWithCount_few": "{{count}} ассета",
+ "assetsWithCount_many": "{{count}} ассетов",
+ "imagesWithCount_one": "{{count}} изображение",
+ "imagesWithCount_few": "{{count}} изображения",
+ "imagesWithCount_many": "{{count}} изображений",
+ "archiveBoard": "Архивировать доску",
+ "archived": "Заархивировано",
+ "unarchiveBoard": "Разархивировать доску",
+ "selectedForAutoAdd": "Выбрано для автодобавления",
+ "addSharedBoard": "Добавить общую доску",
+ "boards": "Доски",
+ "addPrivateBoard": "Добавить личную доску",
+ "private": "Личные доски",
+ "shared": "Общие доски"
},
"dynamicPrompts": {
"seedBehaviour": {
diff --git a/invokeai/frontend/web/public/locales/zh_CN.json b/invokeai/frontend/web/public/locales/zh_CN.json
index 45bab5c6da..c4bb52e7cc 100644
--- a/invokeai/frontend/web/public/locales/zh_CN.json
+++ b/invokeai/frontend/web/public/locales/zh_CN.json
@@ -70,7 +70,23 @@
"add": "添加",
"loglevel": "日志级别",
"copy": "复制",
- "localSystem": "本地系统"
+ "localSystem": "本地系统",
+ "aboutHeading": "掌握你的创造力",
+ "comparing": "对比中",
+ "comparingDesc": "正在对比两张图片",
+ "enabled": "已启用",
+ "disabled": "已禁用",
+ "red": "红",
+ "editor": "编辑器",
+ "positivePrompt": "正向提示词",
+ "negativePrompt": "反向提示词",
+ "selected": "选中的",
+ "viewing": "查看",
+ "viewingDesc": "在大型画廊视图中查看图片",
+ "editing": "编辑中",
+ "green": "绿",
+ "blue": "蓝",
+ "editingDesc": "在控制图层画布上编辑"
},
"gallery": {
"galleryImageSize": "预览大小",
@@ -100,7 +116,24 @@
"problemDeletingImagesDesc": "有一张或多张图像无法被删除",
"problemDeletingImages": "删除图像时出现问题",
"unstarImage": "取消收藏图像",
- "starImage": "收藏图像"
+ "starImage": "收藏图像",
+ "alwaysShowImageSizeBadge": "始终显示图像尺寸",
+ "selectForCompare": "选择以比较",
+ "selectAnImageToCompare": "选择一个图像进行比较",
+ "slider": "滑块",
+ "sideBySide": "并排",
+ "bulkDownloadFailed": "下载失败",
+ "bulkDownloadRequested": "准备下载",
+ "bulkDownloadRequestedDesc": "您的下载请求正在准备中,这可能需要一些时间。",
+ "bulkDownloadRequestFailed": "下载准备过程中出现问题",
+ "viewerImage": "查看器图像",
+ "compareImage": "对比图像",
+ "openInViewer": "在查看器中打开",
+ "selectAllOnBoard": "选择板块全部",
+ "hover": "悬停",
+ "selectAllOnPage": "选择本页全部",
+ "swapImages": "交换图像",
+ "compareOptions": "比较选项"
},
"hotkeys": {
"keyboardShortcuts": "快捷键",
@@ -604,7 +637,8 @@
"mode": "模式",
"resetUI": "$t(accessibility.reset) UI",
"createIssue": "创建问题",
- "about": "关于"
+ "about": "关于",
+ "submitSupportTicket": "提交支持工单"
},
"tooltip": {
"feature": {
@@ -806,7 +840,21 @@
"controlAdapter_other": "Control Adapters",
"lineartAnime": "Lineart Anime",
"canny": "Canny",
- "resizeSimple": "缩放(简单)"
+ "resizeSimple": "缩放(简单)",
+ "body": "身体",
+ "ipAdapterMethod": "方法",
+ "setControlImageDimensionsForce": "将尺寸复制到宽/高(忽略模型)",
+ "depthAnythingDescription": "使用Depth Anything技术生成深度图",
+ "selectCLIPVisionModel": "选择一个CLIP视觉模型",
+ "small": "小",
+ "full": "全部",
+ "large": "大",
+ "face": "脸",
+ "style": "仅风格",
+ "hands": "手",
+ "composition": "仅构图",
+ "modelSize": "模型尺寸",
+ "dwOpenposeDescription": "使用DW Openpose进行人体姿态预估"
},
"queue": {
"status": "状态",
@@ -863,7 +911,10 @@
"graphFailedToQueue": "节点图加入队列失败",
"batchFieldValues": "批处理值",
"time": "时间",
- "openQueue": "打开队列"
+ "openQueue": "打开队列",
+ "prompts_other": "提示词",
+ "iterations_other": "迭代",
+ "generations_other": "生成"
},
"sdxl": {
"refinerStart": "Refiner 开始作用时机",
@@ -1238,5 +1289,9 @@
"image": {
"title": "图像"
}
+ },
+ "prompt": {
+ "addPromptTrigger": "添加提示词触发器",
+ "noMatchingTriggers": "没有匹配的触发器"
}
}
diff --git a/invokeai/frontend/web/src/app/components/InvokeAIUI.tsx b/invokeai/frontend/web/src/app/components/InvokeAIUI.tsx
index 12611943bc..1dd1a265fb 100644
--- a/invokeai/frontend/web/src/app/components/InvokeAIUI.tsx
+++ b/invokeai/frontend/web/src/app/components/InvokeAIUI.tsx
@@ -7,11 +7,10 @@ import { $baseUrl } from 'app/store/nanostores/baseUrl';
import { $customNavComponent } from 'app/store/nanostores/customNavComponent';
import type { CustomStarUi } from 'app/store/nanostores/customStarUI';
import { $customStarUI } from 'app/store/nanostores/customStarUI';
-import { $galleryHeader } from 'app/store/nanostores/galleryHeader';
import { $isDebugging } from 'app/store/nanostores/isDebugging';
import { $logo } from 'app/store/nanostores/logo';
import { $openAPISchemaUrl } from 'app/store/nanostores/openAPISchemaUrl';
-import { $projectId } from 'app/store/nanostores/projectId';
+import { $projectId, $projectName, $projectUrl } from 'app/store/nanostores/projectId';
import { $queueId, DEFAULT_QUEUE_ID } from 'app/store/nanostores/queueId';
import { $store } from 'app/store/nanostores/store';
import { $workflowCategories } from 'app/store/nanostores/workflowCategories';
@@ -37,7 +36,8 @@ interface Props extends PropsWithChildren {
customNavComponent?: ReactNode;
middleware?: Middleware[];
projectId?: string;
- galleryHeader?: ReactNode;
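+ // Optional project display metadata. Hypothetical host usage:
+ //   <InvokeAIUI projectId="p1" projectName="My Project" projectUrl="https://example.com/projects/p1" />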
+ projectName?: string;
+ projectUrl?: string;
queueId?: string;
selectedImage?: {
imageName: string;
@@ -58,7 +58,8 @@ const InvokeAIUI = ({
customNavComponent,
middleware,
projectId,
- galleryHeader,
+ projectName,
+ projectUrl,
queueId,
selectedImage,
customStarUi,
@@ -108,7 +109,7 @@ const InvokeAIUI = ({
$projectId.set(undefined);
$queueId.set(DEFAULT_QUEUE_ID);
};
- }, [apiUrl, token, middleware, projectId, queueId]);
+ }, [apiUrl, token, middleware, projectId, queueId, projectName, projectUrl]);
useEffect(() => {
if (customStarUi) {
@@ -141,14 +142,20 @@ const InvokeAIUI = ({
}, [openAPISchemaUrl]);
useEffect(() => {
- if (galleryHeader) {
- $galleryHeader.set(galleryHeader);
- }
+ $projectName.set(projectName);
return () => {
- $galleryHeader.set(undefined);
+ $projectName.set(undefined);
};
- }, [galleryHeader]);
+ }, [projectName]);
+
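+ // Mirror the projectUrl prop into its nanostore and clear it on unmount.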
+ useEffect(() => {
+ $projectUrl.set(projectUrl);
+
+ return () => {
+ $projectUrl.set(undefined);
+ };
+ }, [projectUrl]);
useEffect(() => {
if (logo) {
diff --git a/invokeai/frontend/web/src/app/store/middleware/listenerMiddleware/index.ts b/invokeai/frontend/web/src/app/store/middleware/listenerMiddleware/index.ts
index 48e7fa800c..9698f85219 100644
--- a/invokeai/frontend/web/src/app/store/middleware/listenerMiddleware/index.ts
+++ b/invokeai/frontend/web/src/app/store/middleware/listenerMiddleware/index.ts
@@ -1,7 +1,6 @@
import type { TypedStartListening } from '@reduxjs/toolkit';
import { createListenerMiddleware } from '@reduxjs/toolkit';
import { addCommitStagingAreaImageListener } from 'app/store/middleware/listenerMiddleware/listeners/addCommitStagingAreaImageListener';
-import { addFirstListImagesListener } from 'app/store/middleware/listenerMiddleware/listeners/addFirstListImagesListener.ts';
import { addAnyEnqueuedListener } from 'app/store/middleware/listenerMiddleware/listeners/anyEnqueued';
import { addAppConfigReceivedListener } from 'app/store/middleware/listenerMiddleware/listeners/appConfigReceived';
import { addAppStartedListener } from 'app/store/middleware/listenerMiddleware/listeners/appStarted';
@@ -23,9 +22,10 @@ import { addEnqueueRequestedCanvasListener } from 'app/store/middleware/listener
import { addEnqueueRequestedLinear } from 'app/store/middleware/listenerMiddleware/listeners/enqueueRequestedLinear';
import { addEnqueueRequestedNodes } from 'app/store/middleware/listenerMiddleware/listeners/enqueueRequestedNodes';
import { addGalleryImageClickedListener } from 'app/store/middleware/listenerMiddleware/listeners/galleryImageClicked';
+import { addGalleryOffsetChangedListener } from 'app/store/middleware/listenerMiddleware/listeners/galleryOffsetChanged';
import { addGetOpenAPISchemaListener } from 'app/store/middleware/listenerMiddleware/listeners/getOpenAPISchema';
import { addImageAddedToBoardFulfilledListener } from 'app/store/middleware/listenerMiddleware/listeners/imageAddedToBoard';
-import { addRequestedSingleImageDeletionListener } from 'app/store/middleware/listenerMiddleware/listeners/imageDeleted';
+import { addImageDeletionListeners } from 'app/store/middleware/listenerMiddleware/listeners/imageDeletionListeners';
import { addImageDroppedListener } from 'app/store/middleware/listenerMiddleware/listeners/imageDropped';
import { addImageRemovedFromBoardFulfilledListener } from 'app/store/middleware/listenerMiddleware/listeners/imageRemovedFromBoard';
import { addImagesStarredListener } from 'app/store/middleware/listenerMiddleware/listeners/imagesStarred';
@@ -69,7 +69,7 @@ const startAppListening = listenerMiddleware.startListening as AppStartListening
addImageUploadedFulfilledListener(startAppListening);
// Image deleted
-addRequestedSingleImageDeletionListener(startAppListening);
+addImageDeletionListeners(startAppListening);
addDeleteBoardAndImagesFulfilledListener(startAppListening);
addImageToDeleteSelectedListener(startAppListening);
@@ -79,6 +79,7 @@ addImagesUnstarredListener(startAppListening);
// Gallery
addGalleryImageClickedListener(startAppListening);
+addGalleryOffsetChangedListener(startAppListening);
// User Invoked
addEnqueueRequestedCanvasListener(startAppListening);
@@ -137,7 +138,6 @@ addModelSelectedListener(startAppListening);
addAppStartedListener(startAppListening);
addModelsLoadedListener(startAppListening);
addAppConfigReceivedListener(startAppListening);
-addFirstListImagesListener(startAppListening);
// Ad-hoc upscale workflow
addUpscaleRequestedListener(startAppListening);
diff --git a/invokeai/frontend/web/src/app/store/middleware/listenerMiddleware/listeners/addArchivedOrDeletedBoardListener.ts b/invokeai/frontend/web/src/app/store/middleware/listenerMiddleware/listeners/addArchivedOrDeletedBoardListener.ts
index 0915929245..1581da9b37 100644
--- a/invokeai/frontend/web/src/app/store/middleware/listenerMiddleware/listeners/addArchivedOrDeletedBoardListener.ts
+++ b/invokeai/frontend/web/src/app/store/middleware/listenerMiddleware/listeners/addArchivedOrDeletedBoardListener.ts
@@ -11,38 +11,114 @@ import { boardsApi } from 'services/api/endpoints/boards';
import { imagesApi } from 'services/api/endpoints/images';
export const addArchivedOrDeletedBoardListener = (startAppListening: AppStartListening) => {
+ /**
+ * The auto-add board shouldn't be set to an archived or deleted board. When we archive a board, delete a board,
+ * or change the archived-board visibility flag, we may need to reset the auto-add board.
+ */
startAppListening({
matcher: isAnyOf(
- // Updating a board may change its archived status
- boardsApi.endpoints.updateBoard.matchFulfilled,
- // If the selected/auto-add board was deleted from a different session, we'll only know during the list request,
- boardsApi.endpoints.listAllBoards.matchFulfilled,
// If a board is deleted, we'll need to reset the auto-add board
imagesApi.endpoints.deleteBoard.matchFulfilled,
- imagesApi.endpoints.deleteBoardAndImages.matchFulfilled,
- // When we change the visibility of archived boards, we may need to reset the auto-add board
- shouldShowArchivedBoardsChanged
+ imagesApi.endpoints.deleteBoardAndImages.matchFulfilled
),
effect: async (action, { dispatch, getState }) => {
- /**
- * The auto-add board shouldn't be set to an archived board or deleted board. When we archive a board, delete
- * a board, or change a the archived board visibility flag, we may need to reset the auto-add board.
- */
-
const state = getState();
const queryArgs = selectListBoardsQueryArgs(state);
const queryResult = boardsApi.endpoints.listAllBoards.select(queryArgs)(state);
- const autoAddBoardId = state.gallery.autoAddBoardId;
+ const { autoAddBoardId, selectedBoardId } = state.gallery;
if (!queryResult.data) {
return;
}
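+ // If the selected board was just deleted, select "none" and switch back to the images view.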
+ if (!queryResult.data.find((board) => board.board_id === selectedBoardId)) {
+ dispatch(boardIdSelected({ boardId: 'none' }));
+ dispatch(galleryViewChanged('images'));
+ }
if (!queryResult.data.find((board) => board.board_id === autoAddBoardId)) {
dispatch(autoAddBoardIdChanged('none'));
+ }
+ },
+ });
+
+ // If we archived a board, it may end up hidden. If it's selected or the auto-add board, we should reset those.
+ startAppListening({
+ matcher: boardsApi.endpoints.updateBoard.matchFulfilled,
+ effect: async (action, { dispatch, getState }) => {
+ const state = getState();
+ const queryArgs = selectListBoardsQueryArgs(state);
+ const queryResult = boardsApi.endpoints.listAllBoards.select(queryArgs)(state);
+ const { shouldShowArchivedBoards } = state.gallery;
+
+ if (!queryResult.data) {
+ return;
+ }
+
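+ // The update's original args tell us whether this request archived the board.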
+ const wasArchived = action.meta.arg.originalArgs.changes.archived === true;
+
+ if (wasArchived && !shouldShowArchivedBoards) {
+ dispatch(autoAddBoardIdChanged('none'));
dispatch(boardIdSelected({ boardId: 'none' }));
dispatch(galleryViewChanged('images'));
}
},
});
+
+ // When we hide archived boards, if the selected or the auto-add board is archived, we should reset those.
+ startAppListening({
+ actionCreator: shouldShowArchivedBoardsChanged,
+ effect: async (action, { dispatch, getState }) => {
+ const shouldShowArchivedBoards = action.payload;
+
+ // We only need to take action if we have just hidden archived boards.
+ if (shouldShowArchivedBoards) {
+ return;
+ }
+
+ const state = getState();
+ const queryArgs = selectListBoardsQueryArgs(state);
+ const queryResult = boardsApi.endpoints.listAllBoards.select(queryArgs)(state);
+ const { selectedBoardId, autoAddBoardId } = state.gallery;
+
+ if (!queryResult.data) {
+ return;
+ }
+
+ // Handle the case where selected board is archived
+ const selectedBoard = queryResult.data.find((b) => b.board_id === selectedBoardId);
+ if (selectedBoard && selectedBoard.archived) {
+ dispatch(boardIdSelected({ boardId: 'none' }));
+ dispatch(galleryViewChanged('images'));
+ }
+
+ // Handle the case where auto-add board is archived
+ const autoAddBoard = queryResult.data.find((b) => b.board_id === autoAddBoardId);
+ if (autoAddBoard && autoAddBoard.archived) {
+ dispatch(autoAddBoardIdChanged('none'));
+ }
+ },
+ });
+
+ /**
+ * When listing boards, if the selected or auto-add boards are no longer in the list, we should reset them.
+ */
+ startAppListening({
+ matcher: boardsApi.endpoints.listAllBoards.matchFulfilled,
+ effect: async (action, { dispatch, getState }) => {
+ const boards = action.payload;
+ const state = getState();
+ const { selectedBoardId, autoAddBoardId } = state.gallery;
+
+ // Handle the case where selected board isn't in the list of boards
+ if (!boards.find((b) => b.board_id === selectedBoardId)) {
+ dispatch(boardIdSelected({ boardId: 'none' }));
+ dispatch(galleryViewChanged('images'));
+ }
+
+ // Handle the case where auto-add board isn't in the list of boards
+ if (!boards.find((b) => b.board_id === autoAddBoardId)) {
+ dispatch(autoAddBoardIdChanged('none'));
+ }
+ },
+ });
};
diff --git a/invokeai/frontend/web/src/app/store/middleware/listenerMiddleware/listeners/addFirstListImagesListener.ts.ts b/invokeai/frontend/web/src/app/store/middleware/listenerMiddleware/listeners/addFirstListImagesListener.ts.ts
deleted file mode 100644
index 5db5f687a1..0000000000
--- a/invokeai/frontend/web/src/app/store/middleware/listenerMiddleware/listeners/addFirstListImagesListener.ts.ts
+++ /dev/null
@@ -1,27 +0,0 @@
-import type { AppStartListening } from 'app/store/middleware/listenerMiddleware';
-import { imageSelected } from 'features/gallery/store/gallerySlice';
-import { IMAGE_CATEGORIES } from 'features/gallery/store/types';
-import { imagesApi } from 'services/api/endpoints/images';
-import { getListImagesUrl } from 'services/api/util';
-
-export const addFirstListImagesListener = (startAppListening: AppStartListening) => {
- startAppListening({
- matcher: imagesApi.endpoints.listImages.matchFulfilled,
- effect: async (action, { dispatch, unsubscribe, cancelActiveListeners }) => {
- // Only run this listener on the first listImages request for no-board images
- if (action.meta.arg.queryCacheKey !== getListImagesUrl({ board_id: 'none', categories: IMAGE_CATEGORIES })) {
- return;
- }
-
- // this should only run once
- cancelActiveListeners();
- unsubscribe();
-
- const data = action.payload;
-
- if (data.items.length > 0) {
- dispatch(imageSelected(data.items[0] ?? null));
- }
- },
- });
-};
diff --git a/invokeai/frontend/web/src/app/store/middleware/listenerMiddleware/listeners/galleryOffsetChanged.ts b/invokeai/frontend/web/src/app/store/middleware/listenerMiddleware/listeners/galleryOffsetChanged.ts
new file mode 100644
index 0000000000..51095700e3
--- /dev/null
+++ b/invokeai/frontend/web/src/app/store/middleware/listenerMiddleware/listeners/galleryOffsetChanged.ts
@@ -0,0 +1,119 @@
+import type { AppStartListening } from 'app/store/middleware/listenerMiddleware';
+import { selectListImagesQueryArgs } from 'features/gallery/store/gallerySelectors';
+import { imageToCompareChanged, offsetChanged, selectionChanged } from 'features/gallery/store/gallerySlice';
+import { imagesApi } from 'services/api/endpoints/images';
+
+export const addGalleryOffsetChangedListener = (startAppListening: AppStartListening) => {
+ /**
+ * When the user changes pages in the gallery, we need to wait until the next page of images is loaded, then maybe
+ * update the selection.
+ *
+ * There are three scenarios:
+ *
+ * 1. The page is changed by clicking the pagination buttons. No changes to selection are needed.
+ *
+ * 2. The page is changed by using the arrow keys (without alt).
+ * - When going backwards, select the last image.
+ * - When going forwards, select the first image.
+ *
+ * 3. The page is changed by using the arrow keys with alt. This means the user is changing the comparison image.
+ * - When going backwards, select the last image _as the comparison image_.
+ * - When going forwards, select the first image _as the comparison image_.
+ */
+ startAppListening({
+ actionCreator: offsetChanged,
+ effect: async (action, { dispatch, getState, getOriginalState, take, cancelActiveListeners }) => {
+ // Cancel any active listeners to prevent the selection from changing without user input
+ cancelActiveListeners();
+
+ const { withHotkey } = action.payload;
+
+ if (!withHotkey) {
+ // User changed pages by clicking the pagination buttons - no changes to selection
+ return;
+ }
+
+ const originalState = getOriginalState();
+ const prevOffset = originalState.gallery.offset;
+ const offset = getState().gallery.offset;
+
+ if (offset === prevOffset) {
+ // The page didn't change - bail
+ return;
+ }
+
+ /**
+ * We need to wait until the next page of images is loaded before updating the selection, so we use the correct
+ * page of images.
+ *
+ * The simplest way to do it would be to use `take` to wait for the next fulfilled action, but RTK-Q doesn't
+ * dispatch an action on cache hits. This means the `take` will only return if the cache is empty. If the user
+ * changes to a cached page - a common situation - the `take` will never resolve.
+ *
+ * So we need to take a two-step approach. First, check if we have data in the cache for the page of images. If
+ * we have data cached, use it to update the selection. If we don't have data cached, wait for the next fulfilled
+ * action, which updates the cache, then use the cache to update the selection.
+ */
+
+ // Check if we have data in the cache for the page of images
+ const queryArgs = selectListImagesQueryArgs(getState());
+ let { data } = imagesApi.endpoints.listImages.select(queryArgs)(getState());
+
+ // No data yet - wait for the network request to complete
+ if (!data) {
+ const takeResult = await take(imagesApi.endpoints.listImages.matchFulfilled, 5000);
+ if (!takeResult) {
+ // The request didn't complete in time - bail
+ return;
+ }
+ data = takeResult[0].payload;
+ }
+
+ // We awaited a network request - state could have changed, get fresh state
+ const state = getState();
+ const { selection, imageToCompare } = state.gallery;
+ const imageDTOs = data?.items;
+
+ if (!imageDTOs) {
+ // The page didn't load - bail
+ return;
+ }
+
+ if (withHotkey === 'arrow') {
+ // User changed pages by using the arrow keys - select the first or last image, depending on direction
+ if (offset < prevOffset) {
+ // We've gone backwards
+ const lastImage = imageDTOs[imageDTOs.length - 1];
+ if (!selection.some((selectedImage) => selectedImage.image_name === lastImage?.image_name)) {
+ dispatch(selectionChanged(lastImage ? [lastImage] : []));
+ }
+ } else {
+ // We've gone forwards
+ const firstImage = imageDTOs[0];
+ if (!selection.some((selectedImage) => selectedImage.image_name === firstImage?.image_name)) {
+ dispatch(selectionChanged(firstImage ? [firstImage] : []));
+ }
+ }
+ return;
+ }
+
+ if (withHotkey === 'alt+arrow') {
+ // User changed pages by using the arrow keys with alt - the comparison image becomes the first or last image, depending on direction
+ if (offset < prevOffset) {
+ // We've gone backwards
+ const lastImage = imageDTOs[imageDTOs.length - 1];
+ if (lastImage && imageToCompare?.image_name !== lastImage.image_name) {
+ dispatch(imageToCompareChanged(lastImage));
+ }
+ } else {
+ // We've gone forwards
+ const firstImage = imageDTOs[0];
+ if (firstImage && imageToCompare?.image_name !== firstImage.image_name) {
+ dispatch(imageToCompareChanged(firstImage));
+ }
+ }
+ return;
+ }
+ },
+ });
+};
diff --git a/invokeai/frontend/web/src/app/store/middleware/listenerMiddleware/listeners/imageDeleted.ts b/invokeai/frontend/web/src/app/store/middleware/listenerMiddleware/listeners/imageDeletionListeners.ts
similarity index 65%
rename from invokeai/frontend/web/src/app/store/middleware/listenerMiddleware/listeners/imageDeleted.ts
rename to invokeai/frontend/web/src/app/store/middleware/listenerMiddleware/listeners/imageDeletionListeners.ts
index 916ec2c47f..489adb7476 100644
--- a/invokeai/frontend/web/src/app/store/middleware/listenerMiddleware/listeners/imageDeleted.ts
+++ b/invokeai/frontend/web/src/app/store/middleware/listenerMiddleware/listeners/imageDeletionListeners.ts
@@ -22,11 +22,11 @@ import { imageSelected } from 'features/gallery/store/gallerySlice';
import { fieldImageValueChanged } from 'features/nodes/store/nodesSlice';
import { isImageFieldInputInstance } from 'features/nodes/types/field';
import { isInvocationNode } from 'features/nodes/types/invocation';
-import { forEach } from 'lodash-es';
-import { api } from 'services/api';
+import { forEach, intersectionBy } from 'lodash-es';
import { imagesApi } from 'services/api/endpoints/images';
import type { ImageDTO } from 'services/api/types';
+// Utils to clear a deleted image out of the various app features that reference it
const deleteNodesImages = (state: RootState, dispatch: AppDispatch, imageDTO: ImageDTO) => {
state.nodes.present.nodes.forEach((node) => {
if (!isInvocationNode(node)) {
@@ -97,10 +97,11 @@ const deleteControlLayerImages = (state: RootState, dispatch: AppDispatch, image
});
};
-export const addRequestedSingleImageDeletionListener = (startAppListening: AppStartListening) => {
+export const addImageDeletionListeners = (startAppListening: AppStartListening) => {
+ // Handle single image deletion
startAppListening({
actionCreator: imageDeletionConfirmed,
- effect: async (action, { dispatch, getState, condition }) => {
+ effect: async (action, { dispatch, getState }) => {
const { imageDTOs, imagesUsage } = action.payload;
if (imageDTOs.length !== 1 || imagesUsage.length !== 1) {
@@ -116,49 +117,51 @@ export const addRequestedSingleImageDeletionListener = (startAppListening: AppSt
return;
}
- dispatch(isModalOpenChanged(false));
- const state = getState();
+ try {
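+ // Capture state before deleting - the pre-deletion image list is used below to pick the next selection.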
+ const state = getState();
+ await dispatch(imagesApi.endpoints.deleteImage.initiate(imageDTO)).unwrap();
- // We need to reset the features where the image is in use - none of these work if their image(s) don't exist
- if (imageUsage.isCanvasImage) {
- dispatch(resetCanvas());
- }
+ if (state.gallery.selection.some((i) => i.image_name === imageDTO.image_name)) {
+ // The deleted image was selected - we need to select the next image
+ const newSelection = state.gallery.selection.filter((i) => i.image_name !== imageDTO.image_name);
+
+ if (newSelection.length > 0) {
+ return;
+ }
+
+ // Get the current list of images and select the same index
+ const baseQueryArgs = selectListImagesQueryArgs(state);
+ const data = imagesApi.endpoints.listImages.select(baseQueryArgs)(state).data;
+
+ if (data) {
+ const deletedImageIndex = data.items.findIndex((i) => i.image_name === imageDTO.image_name);
+ const nextImage = data.items[deletedImageIndex + 1] ?? data.items[0] ?? null;
+ if (nextImage?.image_name === imageDTO.image_name) {
+ // If the "next" image is the deleted one, it was the only image in the list - reset the selection
+ dispatch(imageSelected(null));
+ } else {
+ dispatch(imageSelected(nextImage));
+ }
+ }
+ }
+
+ // We need to reset the features where the image is in use - none of these work if their image(s) don't exist
+ if (imageUsage.isCanvasImage) {
+ dispatch(resetCanvas());
+ }
- imageDTOs.forEach((imageDTO) => {
deleteControlAdapterImages(state, dispatch, imageDTO);
deleteNodesImages(state, dispatch, imageDTO);
deleteControlLayerImages(state, dispatch, imageDTO);
- });
-
- // Delete from server
- const { requestId } = dispatch(imagesApi.endpoints.deleteImage.initiate(imageDTO));
-
- // Wait for successful deletion, then trigger boards to re-fetch
- const wasImageDeleted = await condition(
- (action) => imagesApi.endpoints.deleteImage.matchFulfilled(action) && action.meta.requestId === requestId,
- 30000
- );
-
- if (wasImageDeleted) {
- dispatch(api.util.invalidateTags([{ type: 'Board', id: imageDTO.board_id ?? 'none' }]));
- }
-
- const lastSelectedImage = state.gallery.selection[state.gallery.selection.length - 1]?.image_name;
-
- if (imageDTO && imageDTO?.image_name === lastSelectedImage) {
- const baseQueryArgs = selectListImagesQueryArgs(state);
- const { data } = imagesApi.endpoints.listImages.select(baseQueryArgs)(state);
-
- if (data && data.items) {
- const newlySelectedImage = data?.items.find((img) => img.image_name !== imageDTO?.image_name);
- dispatch(imageSelected(newlySelectedImage || null));
- } else {
- dispatch(imageSelected(null));
- }
+ } catch {
+ // no-op
+ } finally {
+ dispatch(isModalOpenChanged(false));
}
},
});
+ // Handle multiple image deletion
startAppListening({
actionCreator: imageDeletionConfirmed,
effect: async (action, { dispatch, getState }) => {
@@ -170,20 +173,20 @@ export const addRequestedSingleImageDeletionListener = (startAppListening: AppSt
}
try {
- // Delete from server
- await dispatch(imagesApi.endpoints.deleteImages.initiate({ imageDTOs })).unwrap();
const state = getState();
- const queryArgs = selectListImagesQueryArgs(state);
- const { data } = imagesApi.endpoints.listImages.select(queryArgs)(state);
+ await dispatch(imagesApi.endpoints.deleteImages.initiate({ imageDTOs })).unwrap();
- if (data && data.items[0]) {
- dispatch(imageSelected(data.items[0]));
- } else {
- dispatch(imageSelected(null));
+ if (intersectionBy(state.gallery.selection, imageDTOs, 'image_name').length > 0) {
+ // Some selected images were deleted, need to select the next image
+ const queryArgs = selectListImagesQueryArgs(state);
+ const { data } = imagesApi.endpoints.listImages.select(queryArgs)(state);
+ if (data) {
+ // When we delete multiple images, we clear the selection. Then, the next time we load images, we will
+ // select the first one. This is handled below in the listener for `imagesApi.endpoints.listImages.matchFulfilled`.
+ dispatch(imageSelected(null));
+ }
}
- dispatch(isModalOpenChanged(false));
-
// We need to reset the features where the image is in use - none of these work if their image(s) don't exist
if (imagesUsage.some((i) => i.isCanvasImage)) {
@@ -197,14 +200,20 @@ export const addRequestedSingleImageDeletionListener = (startAppListening: AppSt
});
} catch {
// no-op
+ } finally {
+ dispatch(isModalOpenChanged(false));
}
},
});
+ // When we list images, if no image is selected, select the first one.
startAppListening({
- matcher: imagesApi.endpoints.deleteImage.matchPending,
- effect: () => {
- //
+ matcher: imagesApi.endpoints.listImages.matchFulfilled,
+ effect: (action, { dispatch, getState }) => {
+ const selection = getState().gallery.selection;
+ if (selection.length === 0) {
+ dispatch(imageSelected(action.payload.items[0] ?? null));
+ }
},
});
diff --git a/invokeai/frontend/web/src/app/store/middleware/listenerMiddleware/listeners/socketio/socketInvocationComplete.ts b/invokeai/frontend/web/src/app/store/middleware/listenerMiddleware/listeners/socketio/socketInvocationComplete.ts
index 40cacc78cb..2fc6210397 100644
--- a/invokeai/frontend/web/src/app/store/middleware/listenerMiddleware/listeners/socketio/socketInvocationComplete.ts
+++ b/invokeai/frontend/web/src/app/store/middleware/listenerMiddleware/listeners/socketio/socketInvocationComplete.ts
@@ -91,7 +91,7 @@ export const addInvocationCompleteEventListener = (startAppListening: AppStartLi
);
}
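+ // offsetChanged now takes an object payload; omitting withHotkey leaves the gallery selection unchanged.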
- dispatch(offsetChanged(0));
+ dispatch(offsetChanged({ offset: 0 }));
if (!imageDTO.board_id && gallery.selectedBoardId !== 'none') {
dispatch(
diff --git a/invokeai/frontend/web/src/app/store/nanostores/galleryHeader.ts b/invokeai/frontend/web/src/app/store/nanostores/galleryHeader.ts
deleted file mode 100644
index 5de7b1dd40..0000000000
--- a/invokeai/frontend/web/src/app/store/nanostores/galleryHeader.ts
+++ /dev/null
@@ -1,4 +0,0 @@
-import { atom } from 'nanostores';
-import type { ReactNode } from 'react';
-
-export const $galleryHeader = atom(undefined);
diff --git a/invokeai/frontend/web/src/app/store/nanostores/projectId.ts b/invokeai/frontend/web/src/app/store/nanostores/projectId.ts
index 2268ccdff1..c2b14e91ac 100644
--- a/invokeai/frontend/web/src/app/store/nanostores/projectId.ts
+++ b/invokeai/frontend/web/src/app/store/nanostores/projectId.ts
@@ -4,3 +4,6 @@ import { atom } from 'nanostores';
* The optional project-id header.
*/
export const $projectId = atom<string | undefined>();
+
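+/**
+ * The optional project name and URL, set by the embedding app alongside the project id.
+ */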
+export const $projectName = atom<string | undefined>();
+export const $projectUrl = atom<string | undefined>();
diff --git a/invokeai/frontend/web/src/app/types/invokeai.ts b/invokeai/frontend/web/src/app/types/invokeai.ts
index 21636ada49..6d7416d95d 100644
--- a/invokeai/frontend/web/src/app/types/invokeai.ts
+++ b/invokeai/frontend/web/src/app/types/invokeai.ts
@@ -65,6 +65,7 @@ export type AppConfig = {
*/
shouldUpdateImagesOnConnect: boolean;
shouldFetchMetadataFromApi: boolean;
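+  // Enables the private/shared boards split in the gallery's boards list.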
+ allowPrivateBoards: boolean;
disabledTabs: InvokeTabName[];
disabledFeatures: AppFeature[];
disabledSDFeatures: SDFeature[];
diff --git a/invokeai/frontend/web/src/common/components/IAIDropOverlay.tsx b/invokeai/frontend/web/src/common/components/IAIDropOverlay.tsx
index cd3e0cbee1..51e5583bc6 100644
--- a/invokeai/frontend/web/src/common/components/IAIDropOverlay.tsx
+++ b/invokeai/frontend/web/src/common/components/IAIDropOverlay.tsx
@@ -52,8 +52,8 @@ const IAIDropOverlay = (props: Props) => {
bottom={0.5}
opacity={1}
borderWidth={2}
- borderColor={isOver ? 'base.50' : 'base.300'}
- borderRadius="lg"
+ borderColor={isOver ? 'base.300' : 'base.500'}
+ borderRadius="base"
borderStyle="dashed"
transitionProperty="common"
transitionDuration="0.1s"
diff --git a/invokeai/frontend/web/src/features/gallery/components/Boards/AutoAddBadge.tsx b/invokeai/frontend/web/src/features/gallery/components/Boards/AutoAddBadge.tsx
new file mode 100644
index 0000000000..a8b1f9f4fb
--- /dev/null
+++ b/invokeai/frontend/web/src/features/gallery/components/Boards/AutoAddBadge.tsx
@@ -0,0 +1,14 @@
+import { Badge } from '@invoke-ai/ui-library';
+import { memo } from 'react';
+import { useTranslation } from 'react-i18next';
+
+export const AutoAddBadge = memo(() => {
+ const { t } = useTranslation();
+  return (
+    <Badge>
+      {t('common.auto')}
+    </Badge>
+  );
+});
+
+AutoAddBadge.displayName = 'AutoAddBadge';
diff --git a/invokeai/frontend/web/src/features/gallery/components/Boards/AutoAddIcon.tsx b/invokeai/frontend/web/src/features/gallery/components/Boards/AutoAddIcon.tsx
deleted file mode 100644
index 9dd6a59c49..0000000000
--- a/invokeai/frontend/web/src/features/gallery/components/Boards/AutoAddIcon.tsx
+++ /dev/null
@@ -1,16 +0,0 @@
-import { Badge, Flex } from '@invoke-ai/ui-library';
-import { memo } from 'react';
-import { useTranslation } from 'react-i18next';
-
-const AutoAddIcon = () => {
- const { t } = useTranslation();
-  return (
-    <Flex>
-      <Badge>
-        {t('common.auto')}
-      </Badge>
-    </Flex>
-  );
-};
-
-export default memo(AutoAddIcon);
diff --git a/invokeai/frontend/web/src/features/gallery/components/Boards/BoardContextMenu.tsx b/invokeai/frontend/web/src/features/gallery/components/Boards/BoardContextMenu.tsx
index 677919bd30..4c05d78bb8 100644
--- a/invokeai/frontend/web/src/features/gallery/components/Boards/BoardContextMenu.tsx
+++ b/invokeai/frontend/web/src/features/gallery/components/Boards/BoardContextMenu.tsx
@@ -71,13 +71,11 @@ const BoardContextMenu = ({ board, setBoardToDelete, children }: Props) => {
() => (
-          <MenuItem
-            icon={<PiPlusBold />}
-            isDisabled={isSelectedForAutoAdd || autoAssignBoardOnClick}
-            onClick={handleSetAutoAdd}
-          >
-            {t('boards.menuItemAutoAdd')}
-          </MenuItem>
+          {!autoAssignBoardOnClick && (
+            <MenuItem icon={<PiPlusBold />} isDisabled={isSelectedForAutoAdd} onClick={handleSetAutoAdd}>
+              {isSelectedForAutoAdd ? t('boards.selectedForAutoAdd') : t('boards.menuItemAutoAdd')}
+            </MenuItem>
+          )}
{isBulkDownloadEnabled && (
          <MenuItem icon={<PiDownloadSimpleBold />} onClickCapture={handleBulkDownload}>
{t('boards.downloadBoard')}
diff --git a/invokeai/frontend/web/src/features/gallery/components/Boards/BoardsList/AddBoardButton.tsx b/invokeai/frontend/web/src/features/gallery/components/Boards/BoardsList/AddBoardButton.tsx
index 5cd4d001f4..60468de145 100644
--- a/invokeai/frontend/web/src/features/gallery/components/Boards/BoardsList/AddBoardButton.tsx
+++ b/invokeai/frontend/web/src/features/gallery/components/Boards/BoardsList/AddBoardButton.tsx
@@ -1,26 +1,52 @@
import { IconButton } from '@invoke-ai/ui-library';
-import { memo, useCallback } from 'react';
+import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
+import { boardIdSelected, boardSearchTextChanged } from 'features/gallery/store/gallerySlice';
+import { memo, useCallback, useMemo } from 'react';
import { useTranslation } from 'react-i18next';
import { PiPlusBold } from 'react-icons/pi';
import { useCreateBoardMutation } from 'services/api/endpoints/boards';
-const AddBoardButton = () => {
+type Props = {
+ isPrivateBoard: boolean;
+};
+
+const AddBoardButton = ({ isPrivateBoard }: Props) => {
const { t } = useTranslation();
+ const dispatch = useAppDispatch();
+ const allowPrivateBoards = useAppSelector((s) => s.config.allowPrivateBoards);
const [createBoard, { isLoading }] = useCreateBoardMutation();
- const DEFAULT_BOARD_NAME = t('boards.myBoard');
- const handleCreateBoard = useCallback(() => {
- createBoard(DEFAULT_BOARD_NAME);
- }, [createBoard, DEFAULT_BOARD_NAME]);
+ const label = useMemo(() => {
+ if (!allowPrivateBoards) {
+ return t('boards.addBoard');
+ }
+ if (isPrivateBoard) {
+ return t('boards.addPrivateBoard');
+ }
+ return t('boards.addSharedBoard');
+ }, [allowPrivateBoards, isPrivateBoard, t]);
+
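+ // Create the board, select it, and clear the board search so the new board is visible in the list.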
+ const handleCreateBoard = useCallback(async () => {
+ try {
+ const board = await createBoard({ board_name: t('boards.myBoard'), is_private: isPrivateBoard }).unwrap();
+ dispatch(boardIdSelected({ boardId: board.board_id }));
+ dispatch(boardSearchTextChanged(''));
+ } catch {
+ // no-op
+ }
+ }, [t, createBoard, isPrivateBoard, dispatch]);
return (
    <IconButton
      icon={<PiPlusBold />}
      isLoading={isLoading}
- tooltip={t('boards.addBoard')}
- aria-label={t('boards.addBoard')}
+ tooltip={label}
+ aria-label={label}
onClick={handleCreateBoard}
- size="sm"
+ size="md"
data-testid="add-board-button"
+ variant="link"
+ w={8}
+ h={8}
/>
);
};
diff --git a/invokeai/frontend/web/src/features/gallery/components/Boards/BoardsList/BoardsList.tsx b/invokeai/frontend/web/src/features/gallery/components/Boards/BoardsList/BoardsList.tsx
index e47edd21fc..4325281e0f 100644
--- a/invokeai/frontend/web/src/features/gallery/components/Boards/BoardsList/BoardsList.tsx
+++ b/invokeai/frontend/web/src/features/gallery/components/Boards/BoardsList/BoardsList.tsx
@@ -1,16 +1,17 @@
-import { Collapse, Flex, Grid, GridItem } from '@invoke-ai/ui-library';
+import { Box, Flex, Text } from '@invoke-ai/ui-library';
+import { EMPTY_ARRAY } from 'app/store/constants';
import { useAppSelector } from 'app/store/storeHooks';
import { overlayScrollbarsParams } from 'common/components/OverlayScrollbars/constants';
import DeleteBoardModal from 'features/gallery/components/Boards/DeleteBoardModal';
import { selectListBoardsQueryArgs } from 'features/gallery/store/gallerySelectors';
import { OverlayScrollbarsComponent } from 'overlayscrollbars-react';
import type { CSSProperties } from 'react';
-import { memo, useState } from 'react';
+import { memo, useMemo, useState } from 'react';
+import { useTranslation } from 'react-i18next';
import { useListAllBoardsQuery } from 'services/api/endpoints/boards';
import type { BoardDTO } from 'services/api/types';
import AddBoardButton from './AddBoardButton';
-import BoardsSearch from './BoardsSearch';
import GalleryBoard from './GalleryBoard';
import NoBoardBoard from './NoBoardBoard';
@@ -19,56 +20,96 @@ const overlayScrollbarsStyles: CSSProperties = {
width: '100%',
};
-type Props = {
- isOpen: boolean;
-};
-
-const BoardsList = (props: Props) => {
- const { isOpen } = props;
+const BoardsList = () => {
const selectedBoardId = useAppSelector((s) => s.gallery.selectedBoardId);
const boardSearchText = useAppSelector((s) => s.gallery.boardSearchText);
+ const allowPrivateBoards = useAppSelector((s) => s.config.allowPrivateBoards);
const queryArgs = useAppSelector(selectListBoardsQueryArgs);
const { data: boards } = useListAllBoardsQuery(queryArgs);
- const filteredBoards = boardSearchText
- ? boards?.filter((board) => board.board_name.toLowerCase().includes(boardSearchText.toLowerCase()))
- : boards;
const [boardToDelete, setBoardToDelete] = useState<BoardDTO | undefined>();
+ const { t } = useTranslation();
+
+ const { filteredPrivateBoards, filteredSharedBoards } = useMemo(() => {
+ const filteredBoards = boardSearchText
+ ? boards?.filter((board) => board.board_name.toLowerCase().includes(boardSearchText.toLowerCase()))
+ : boards;
+ const filteredPrivateBoards = filteredBoards?.filter((board) => board.is_private) ?? EMPTY_ARRAY;
+ const filteredSharedBoards = filteredBoards?.filter((board) => !board.is_private) ?? EMPTY_ARRAY;
+ return { filteredPrivateBoards, filteredSharedBoards };
+ }, [boardSearchText, boards]);
return (
<>
-      <Flex layerStyle="first" flexDir="column" gap={2} p={2} borderRadius="base">
-        <Flex gap={2} alignItems="center">
-          <BoardsSearch />
-          <AddBoardButton />
-        </Flex>
-        <Collapse in={isOpen} animateOpacity>
-          <OverlayScrollbarsComponent defer style={overlayScrollbarsStyles} options={overlayScrollbarsParams.options}>
-            <Grid className="list-container" data-testid="boards-list" gridTemplateColumns="repeat(auto-fill, minmax(108px, 1fr))">
-              <GridItem p={1.5} data-testid="no-board">
-                <NoBoardBoard isSelected={selectedBoardId === 'none'} />
-              </GridItem>
-              {filteredBoards &&
-                filteredBoards.map((board, index) => (
-                  <GridItem key={board.board_id} p={1.5} data-testid={`board-${index}`}>
-                    <GalleryBoard board={board} isSelected={selectedBoardId === board.board_id} setBoardToDelete={setBoardToDelete} />
-                  </GridItem>
-                ))}
-            </Grid>
-          </OverlayScrollbarsComponent>
-        </Collapse>
-      </Flex>
+      <Flex flexDir="column" gap={2} borderRadius="base" maxHeight="100%">
+        {allowPrivateBoards && (
+          <Flex flexDir="column" gap={1}>
+            <Flex w="full" justifyContent="space-between" alignItems="center" ps={2}>
+              <Text fontSize="md" fontWeight="semibold" userSelect="none">
+                {t('boards.private')}
+              </Text>
+              <AddBoardButton isPrivateBoard={true} />
+            </Flex>
+            <OverlayScrollbarsComponent defer style={overlayScrollbarsStyles} options={overlayScrollbarsParams.options}>
+              <Flex direction="column" gap={1}>
+                {filteredPrivateBoards.map((board) => (
+                  <GalleryBoard board={board} isSelected={selectedBoardId === board.board_id} setBoardToDelete={setBoardToDelete} key={board.board_id} />
+                ))}
+              </Flex>
+            </OverlayScrollbarsComponent>
+          </Flex>
+        )}
+        <Flex flexDir="column" gap={1}>
+          <Flex w="full" justifyContent="space-between" alignItems="center" ps={2}>
+            <Text fontSize="md" fontWeight="semibold" userSelect="none">
+              {allowPrivateBoards ? t('boards.shared') : t('boards.boards')}
+            </Text>
+            <AddBoardButton isPrivateBoard={false} />
+          </Flex>
+          <OverlayScrollbarsComponent defer style={overlayScrollbarsStyles} options={overlayScrollbarsParams.options}>
+            <Flex direction="column" gap={1}>
+              {!allowPrivateBoards && <NoBoardBoard isSelected={selectedBoardId === 'none'} />}
+              {filteredSharedBoards.map((board) => (
+                <GalleryBoard board={board} isSelected={selectedBoardId === board.board_id} setBoardToDelete={setBoardToDelete} key={board.board_id} />
+              ))}
+            </Flex>
+          </OverlayScrollbarsComponent>
+        </Flex>
+      </Flex>
<DeleteBoardModal boardToDelete={boardToDelete} setBoardToDelete={setBoardToDelete} />
</>
);
};
-
export default memo(BoardsList);
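A note on the `useMemo` above: the boards are filtered once by the search text, then partitioned into private and shared lists, and `EMPTY_ARRAY` is returned when the query has no data so the memo result stays referentially stable. A self-contained sketch of the same split (types are simplified stand-ins):

```ts
// Single filter pass, then a private/shared partition. Reusing one shared
// EMPTY_ARRAY constant avoids handing consumers a fresh [] on every render.
type Board = { board_name: string; is_private?: boolean | null };
const EMPTY_ARRAY: Board[] = [];

const splitBoards = (boards: Board[] | undefined, searchText: string) => {
  const filtered = searchText
    ? boards?.filter((b) => b.board_name.toLowerCase().includes(searchText.toLowerCase()))
    : boards;
  return {
    filteredPrivateBoards: filtered?.filter((b) => b.is_private) ?? EMPTY_ARRAY,
    filteredSharedBoards: filtered?.filter((b) => !b.is_private) ?? EMPTY_ARRAY,
  };
};
```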
diff --git a/invokeai/frontend/web/src/features/gallery/components/Boards/BoardsList/BoardsSearch.tsx b/invokeai/frontend/web/src/features/gallery/components/Boards/BoardsList/BoardsSearch.tsx
index 931c1e6cbb..fb53a13795 100644
--- a/invokeai/frontend/web/src/features/gallery/components/Boards/BoardsList/BoardsSearch.tsx
+++ b/invokeai/frontend/web/src/features/gallery/components/Boards/BoardsList/BoardsSearch.tsx
@@ -40,7 +40,7 @@ const BoardsSearch = () => {
);
return (
-
+
diff --git a/invokeai/frontend/web/src/features/gallery/components/Boards/BoardsList/GalleryBoard.tsx b/invokeai/frontend/web/src/features/gallery/components/Boards/BoardsList/GalleryBoard.tsx
--- a/invokeai/frontend/web/src/features/gallery/components/Boards/BoardsList/GalleryBoard.tsx
+++ b/invokeai/frontend/web/src/features/gallery/components/Boards/BoardsList/GalleryBoard.tsx
{
- return (
-
-
-
- );
+const _hover: SystemStyleObject = {
+ bg: 'base.850',
};
interface GalleryBoardProps {
@@ -42,170 +49,165 @@ interface GalleryBoardProps {
const GalleryBoard = ({ board, isSelected, setBoardToDelete }: GalleryBoardProps) => {
const dispatch = useAppDispatch();
const { t } = useTranslation();
+ const autoAddBoardId = useAppSelector((s) => s.gallery.autoAddBoardId);
const autoAssignBoardOnClick = useAppSelector((s) => s.gallery.autoAssignBoardOnClick);
- const selectIsSelectedForAutoAdd = useMemo(
- () => createSelector(selectGallerySlice, (gallery) => board.board_id === gallery.autoAddBoardId),
- [board.board_id]
- );
+ const selectedBoardId = useAppSelector((s) => s.gallery.selectedBoardId);
+ const editingDisclosure = useDisclosure();
+ const [localBoardName, setLocalBoardName] = useState(board.board_name);
+ const onStartEditingRef = useRef<MouseEventHandler | undefined>(undefined);
- const isSelectedForAutoAdd = useAppSelector(selectIsSelectedForAutoAdd);
- const [isHovered, setIsHovered] = useState(false);
- const handleMouseOver = useCallback(() => {
- setIsHovered(true);
- }, []);
- const handleMouseOut = useCallback(() => {
- setIsHovered(false);
- }, []);
-
- const { currentData: coverImage } = useGetImageDTOQuery(board.cover_image_name ?? skipToken);
-
- const { board_name, board_id } = board;
- const [localBoardName, setLocalBoardName] = useState(board_name);
-
- const handleSelectBoard = useCallback(() => {
- dispatch(boardIdSelected({ boardId: board_id }));
- if (autoAssignBoardOnClick && !board.archived) {
- dispatch(autoAddBoardIdChanged(board_id));
+ const onClick = useCallback(() => {
+ if (selectedBoardId !== board.board_id) {
+ dispatch(boardIdSelected({ boardId: board.board_id }));
}
- }, [board_id, autoAssignBoardOnClick, dispatch, board.archived]);
+ if (autoAssignBoardOnClick && autoAddBoardId !== board.board_id) {
+ dispatch(autoAddBoardIdChanged(board.board_id));
+ }
+ }, [selectedBoardId, board.board_id, autoAssignBoardOnClick, autoAddBoardId, dispatch]);
const [updateBoard, { isLoading: isUpdateBoardLoading }] = useUpdateBoardMutation();
const droppableData: AddToBoardDropData = useMemo(
() => ({
- id: board_id,
+ id: board.board_id,
actionType: 'ADD_TO_BOARD',
- context: { boardId: board_id },
+ context: { boardId: board.board_id },
}),
- [board_id]
+ [board.board_id]
);
- const handleSubmit = useCallback(
+ const onSubmit = useCallback(
async (newBoardName: string) => {
- // empty strings are not allowed
if (!newBoardName.trim()) {
- setLocalBoardName(board_name);
- return;
- }
+ // empty strings are not allowed
+ setLocalBoardName(board.board_name);
+ } else if (newBoardName === board.board_name) {
+ // don't update the board name if it hasn't changed
+ } else {
+ try {
+ const { board_name } = await updateBoard({
+ board_id: board.board_id,
+ changes: { board_name: newBoardName },
+ }).unwrap();
- // don't updated the board name if it hasn't changed
- if (newBoardName === board_name) {
- return;
- }
-
- try {
- const { board_name } = await updateBoard({
- board_id,
- changes: { board_name: newBoardName },
- }).unwrap();
-
- // update local state
- setLocalBoardName(board_name);
- } catch {
- // revert on error
- setLocalBoardName(board_name);
+ // update local state
+ setLocalBoardName(board_name);
+ } catch {
+ // revert on error
+ setLocalBoardName(board.board_name);
+ }
}
+ editingDisclosure.onClose();
},
- [board_id, board_name, updateBoard]
+ [board.board_id, board.board_name, editingDisclosure, updateBoard]
);
- const handleChange = useCallback((newBoardName: string) => {
+ const onChange = useCallback((newBoardName: string) => {
setLocalBoardName(newBoardName);
}, []);
-  return (
-    <Box w="full" h="full" userSelect="none">
-      <Flex onMouseOver={handleMouseOver} onMouseOut={handleMouseOut} position="relative" justifyContent="center" alignItems="center" aspectRatio="1/1" w="full" h="full">
-        <BoardContextMenu board={board} setBoardToDelete={setBoardToDelete}>
-          {(ref) => (
-            <Tooltip
-              label={<BoardTotalsTooltip board_id={board.board_id} isArchived={Boolean(board.archived)} />}
-              openDelay={1000}
-            >
-              <Flex ref={ref} onClick={handleSelectBoard} w="full" h="full" position="relative" justifyContent="center" alignItems="center" borderRadius="base" cursor="pointer" bg="base.800">
-                {board.archived && <Icon as={PiArchiveBold} fill="base.300" />}
-                {coverImage?.thumbnail_url ? (
-                  <Image src={coverImage.thumbnail_url} draggable={false} objectFit="cover" w="full" h="full" borderRadius="base" />
-                ) : (
-                  <Flex w="full" h="full" justifyContent="center" alignItems="center">
-                    <Image src={InvokeLogoSVG} alt="invoke-ai-logo" opacity={0.4} filter="grayscale(1)" userSelect="none" />
-                  </Flex>
-                )}
-                {isSelectedForAutoAdd && <AutoAddIcon />}
-                <SelectionOverlay isSelected={isSelected} isHovered={isHovered} />
-                <Editable value={localBoardName} isDisabled={isUpdateBoardLoading} submitOnBlur={true} onChange={handleChange} onSubmit={handleSubmit}>
-                  <EditablePreview cursor="pointer" p={0} noOfLines={1} />
-                  <EditableInput sx={editableInputStyles} />
-                </Editable>
-                <IAIDroppable data={droppableData} dropLabel={<Text fontSize="md">{t('unifiedCanvas.move')}</Text>} />
-              </Flex>
-            </Tooltip>
-          )}
-        </BoardContextMenu>
-      </Flex>
-    </Box>
-  );
+  const onDoubleClick = useCallback((e: MouseEvent) => {
+    if (onStartEditingRef.current) {
+      onStartEditingRef.current(e);
+    }
+  }, []);
+
+  return (
+    <BoardContextMenu board={board} setBoardToDelete={setBoardToDelete}>
+      {(ref) => (
+        <Tooltip
+          label={<BoardTotalsTooltip board_id={board.board_id} isArchived={Boolean(board.archived)} />}
+          openDelay={1000}
+          placement="left"
+          closeOnScroll
+        >
+          <Flex
+            position="relative"
+            ref={ref}
+            onClick={onClick}
+            onDoubleClick={onDoubleClick}
+            w="full"
+            alignItems="center"
+            borderRadius="base"
+            cursor="pointer"
+            py={1}
+            px={2}
+            gap={2}
+            bg={isSelected ? 'base.850' : undefined}
+            _hover={_hover}
+          >
+            <CoverImage board={board} />
+            <Editable
+              as={Flex}
+              alignItems="center"
+              gap={4}
+              flexGrow={1}
+              onEdit={editingDisclosure.onOpen}
+              value={localBoardName}
+              isDisabled={isUpdateBoardLoading}
+              submitOnBlur={true}
+              onChange={onChange}
+              onSubmit={onSubmit}
+              isPreviewFocusable={false}
+            >
+              <EditablePreview cursor="pointer" p={0} fontSize="sm" noOfLines={1} w="fit-content" wordBreak="break-all" />
+              <EditableInput sx={editableInputStyles} />
+              <JankEditableHijack onStartEditingRef={onStartEditingRef} />
+            </Editable>
+            {autoAddBoardId === board.board_id && !editingDisclosure.isOpen && <AutoAddBadge />}
+            {board.archived && !editingDisclosure.isOpen && <Icon as={PiArchiveBold} fill="base.300" />}
+            {!editingDisclosure.isOpen && <Text variant="subtext">{board.image_count}</Text>}
+            <IAIDroppable data={droppableData} dropLabel={<Text fontSize="md">{t('unifiedCanvas.move')}</Text>} />
+          </Flex>
+        </Tooltip>
+      )}
+    </BoardContextMenu>
+  );
};
+const JankEditableHijack = memo((props: { onStartEditingRef: MutableRefObject<MouseEventHandler | undefined> }) => {
+ const editableControls = useEditableControls();
+ useEffect(() => {
+ props.onStartEditingRef.current = editableControls.getEditButtonProps().onClick;
+ }, [props, editableControls]);
+ return null;
+});
+
+JankEditableHijack.displayName = 'JankEditableHijack';
+
export default memo(GalleryBoard);
+
+const CoverImage = ({ board }: { board: BoardDTO }) => {
+ const { currentData: coverImage } = useGetImageDTOQuery(board.cover_image_name ?? skipToken);
+
+ if (coverImage) {
+    return (
+      <Image src={coverImage.thumbnail_url} draggable={false} objectFit="cover" w={8} h={8} borderRadius="base" />
+    );
+  }
+
+  return (
+    <Flex w={8} h={8} justifyContent="center" alignItems="center">
+      <Icon boxSize={8} as={PiImagesSquare} opacity={0.7} color="base.500" />
+    </Flex>
+  );
+};
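The self-deprecating `JankEditableHijack` deserves a short explanation: a `useEditableControls`-style hook only works from a component rendered inside the `Editable`, so a null-rendering child copies the "start editing" trigger into a ref that the outer row's `onDoubleClick` can invoke. A sketch of the pattern, assuming plain Chakra UI in place of `@invoke-ai/ui-library`:

```tsx
import { Editable, EditableInput, EditablePreview, useEditableControls } from '@chakra-ui/react';
import type { MouseEventHandler, MutableRefObject } from 'react';
import { memo, useEffect, useRef } from 'react';

// Rendered inside <Editable>; exposes the edit trigger to the outside world.
const EditTriggerCapture = memo((props: { triggerRef: MutableRefObject<MouseEventHandler | undefined> }) => {
  const controls = useEditableControls();
  useEffect(() => {
    // getEditButtonProps().onClick flips the Editable into edit mode
    props.triggerRef.current = controls.getEditButtonProps().onClick;
  }, [props, controls]);
  return null;
});
EditTriggerCapture.displayName = 'EditTriggerCapture';

export const RenamableRow = () => {
  const triggerRef = useRef<MouseEventHandler | undefined>(undefined);
  return (
    <div onDoubleClick={(e) => triggerRef.current?.(e)}>
      <Editable defaultValue="My Board" isPreviewFocusable={false}>
        <EditablePreview />
        <EditableInput />
        <EditTriggerCapture triggerRef={triggerRef} />
      </Editable>
    </div>
  );
};
```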
diff --git a/invokeai/frontend/web/src/features/gallery/components/Boards/BoardsList/NoBoardBoard.tsx b/invokeai/frontend/web/src/features/gallery/components/Boards/BoardsList/NoBoardBoard.tsx
index 2e823ea25b..14bf3d5742 100644
--- a/invokeai/frontend/web/src/features/gallery/components/Boards/BoardsList/NoBoardBoard.tsx
+++ b/invokeai/frontend/web/src/features/gallery/components/Boards/BoardsList/NoBoardBoard.tsx
@@ -1,25 +1,35 @@
-import { Box, Flex, Image, Text, Tooltip } from '@invoke-ai/ui-library';
+import type { SystemStyleObject } from '@invoke-ai/ui-library';
+import { Flex, Icon, Text, Tooltip } from '@invoke-ai/ui-library';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import IAIDroppable from 'common/components/IAIDroppable';
-import SelectionOverlay from 'common/components/SelectionOverlay';
import type { RemoveFromBoardDropData } from 'features/dnd/types';
-import AutoAddIcon from 'features/gallery/components/Boards/AutoAddIcon';
+import { AutoAddBadge } from 'features/gallery/components/Boards/AutoAddBadge';
import { BoardTotalsTooltip } from 'features/gallery/components/Boards/BoardsList/BoardTotalsTooltip';
import NoBoardBoardContextMenu from 'features/gallery/components/Boards/NoBoardBoardContextMenu';
import { autoAddBoardIdChanged, boardIdSelected } from 'features/gallery/store/gallerySlice';
-import InvokeLogoSVG from 'public/assets/images/invoke-symbol-wht-lrg.svg';
-import { memo, useCallback, useMemo, useState } from 'react';
+import { memo, useCallback, useMemo } from 'react';
import { useTranslation } from 'react-i18next';
+import { useGetBoardImagesTotalQuery } from 'services/api/endpoints/boards';
import { useBoardName } from 'services/api/hooks/useBoardName';
interface Props {
isSelected: boolean;
}
+const _hover: SystemStyleObject = {
+ bg: 'base.850',
+};
+
const NoBoardBoard = memo(({ isSelected }: Props) => {
const dispatch = useAppDispatch();
+ const { imagesTotal } = useGetBoardImagesTotalQuery('none', {
+ selectFromResult: ({ data }) => {
+ return { imagesTotal: data?.total ?? 0 };
+ },
+ });
const autoAddBoardId = useAppSelector((s) => s.gallery.autoAddBoardId);
const autoAssignBoardOnClick = useAppSelector((s) => s.gallery.autoAssignBoardOnClick);
+ const boardSearchText = useAppSelector((s) => s.gallery.boardSearchText);
const boardName = useBoardName('none');
const handleSelectBoard = useCallback(() => {
dispatch(boardIdSelected({ boardId: 'none' }));
@@ -27,15 +37,6 @@ const NoBoardBoard = memo(({ isSelected }: Props) => {
dispatch(autoAddBoardIdChanged('none'));
}
}, [dispatch, autoAssignBoardOnClick]);
- const [isHovered, setIsHovered] = useState(false);
-
- const handleMouseOver = useCallback(() => {
- setIsHovered(true);
- }, []);
-
- const handleMouseOut = useCallback(() => {
- setIsHovered(false);
- }, []);
const droppableData: RemoveFromBoardDropData = useMemo(
() => ({
@@ -44,76 +45,66 @@ const NoBoardBoard = memo(({ isSelected }: Props) => {
}),
[]
);
+
+ const filteredOut = useMemo(() => {
+ return boardSearchText ? !boardName.toLowerCase().includes(boardSearchText.toLowerCase()) : false;
+ }, [boardName, boardSearchText]);
+
const { t } = useTranslation();
+
+ if (filteredOut) {
+ return null;
+ }
+
return (
-    <Box w="full" h="full" userSelect="none">
-      <Flex onMouseOver={handleMouseOver} onMouseOut={handleMouseOut} position="relative" justifyContent="center" alignItems="center" aspectRatio="1/1" borderRadius="base" w="full" h="full">
-        <NoBoardBoardContextMenu>
-          {(ref) => (
-            <Tooltip label={<BoardTotalsTooltip board_id="none" isArchived={false} />} openDelay={1000}>
-              <Flex ref={ref} onClick={handleSelectBoard} w="full" h="full" position="relative" justifyContent="center" alignItems="center" borderRadius="base" cursor="pointer" bg="base.800">
-                <Flex w="full" h="full" justifyContent="center" alignItems="center">
-                  <Image src={InvokeLogoSVG} alt="invoke-ai-logo" opacity={0.4} filter="grayscale(1)" userSelect="none" />
-                </Flex>
-                {autoAddBoardId === 'none' && <AutoAddIcon />}
-                <Flex position="absolute" bottom={0} left={0} w="full" maxW="full" borderBottomRadius="base" bg={isSelected ? 'invokeBlue.500' : 'base.600'} color={isSelected ? 'base.50' : 'base.100'} lineHeight="short" fontSize="xs">
-                  {boardName}
-                </Flex>
-                <SelectionOverlay isSelected={isSelected} isHovered={isHovered} />
-                <IAIDroppable data={droppableData} dropLabel={<Text fontSize="md">{t('unifiedCanvas.move')}</Text>} />
-              </Flex>
-            </Tooltip>
-          )}
-        </NoBoardBoardContextMenu>
-      </Flex>
-    </Box>
+    <NoBoardBoardContextMenu>
+      {(ref) => (
+        <Tooltip
+          label={<BoardTotalsTooltip board_id="none" isArchived={false} />}
+          openDelay={1000}
+          placement="left"
+          closeOnScroll
+        >
+          <Flex
+            position="relative"
+            ref={ref}
+            onClick={handleSelectBoard}
+            w="full"
+            alignItems="center"
+            borderRadius="base"
+            cursor="pointer"
+            py={1}
+            px={2}
+            gap={2}
+            bg={isSelected ? 'base.850' : undefined}
+            _hover={_hover}
+          >
+            <Flex w={8} h={8} justifyContent="center" alignItems="center">
+              {/* iconified from public/assets/images/invoke-symbol-wht-lrg.svg */}
+              <Icon boxSize={6} opacity={1} stroke="base.500" viewBox="0 0 66 66" fill="none">
+                <path d="…" fill="currentColor" />
+              </Icon>
+            </Flex>
+            <Text fontSize="sm" fontWeight="semibold" noOfLines={1} color={isSelected ? 'base.100' : 'base.300'}>
+              {boardName}
+            </Text>
+            {autoAddBoardId === 'none' && <AutoAddBadge />}
+            <Text variant="subtext">{imagesTotal}</Text>
+            <IAIDroppable data={droppableData} dropLabel={<Text fontSize="md">{t('unifiedCanvas.move')}</Text>} />
+          </Flex>
+        </Tooltip>
+      )}
+    </NoBoardBoardContextMenu>
);
});
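`NoBoardBoard` now reads its image count with RTK Query's `selectFromResult`, which narrows what the component subscribes to: it re-renders when the derived number changes, not on every cache update for that endpoint. Sketch with the generated hook stubbed:

```ts
// selectFromResult projects the query result down to just what is rendered.
type TotalResult = { data?: { total: number } };
declare function useGetBoardImagesTotalQuery<T>(
  boardId: string,
  options: { selectFromResult: (res: TotalResult) => T }
): T;

const useNoBoardImagesTotal = () => {
  const { imagesTotal } = useGetBoardImagesTotalQuery('none', {
    selectFromResult: ({ data }) => ({ imagesTotal: data?.total ?? 0 }),
  });
  return imagesTotal;
};
```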
diff --git a/invokeai/frontend/web/src/features/gallery/components/Boards/NoBoardBoardContextMenu.tsx b/invokeai/frontend/web/src/features/gallery/components/Boards/NoBoardBoardContextMenu.tsx
index b15226b9e6..3836f53e0d 100644
--- a/invokeai/frontend/web/src/features/gallery/components/Boards/NoBoardBoardContextMenu.tsx
+++ b/invokeai/frontend/web/src/features/gallery/components/Boards/NoBoardBoardContextMenu.tsx
@@ -33,13 +33,11 @@ const NoBoardBoardContextMenu = ({ children }: Props) => {
() => (
-        <MenuItem
-          icon={<PiPlusBold />}
-          isDisabled={isSelectedForAutoAdd || autoAssignBoardOnClick}
-          onClick={handleSetAutoAdd}
-        >
-          {t('boards.menuItemAutoAdd')}
-        </MenuItem>
+        {!autoAssignBoardOnClick && (
+          <MenuItem icon={<PiPlusBold />} isDisabled={isSelectedForAutoAdd} onClick={handleSetAutoAdd}>
+            {isSelectedForAutoAdd ? t('boards.selectedForAutoAdd') : t('boards.menuItemAutoAdd')}
+          </MenuItem>
+        )}
{isBulkDownloadEnabled && (
} onClickCapture={handleBulkDownload}>
{t('boards.downloadBoard')}
diff --git a/invokeai/frontend/web/src/features/gallery/components/GalleryBoardName.tsx b/invokeai/frontend/web/src/features/gallery/components/GalleryBoardName.tsx
index 55aec17ab2..8ede311f9e 100644
--- a/invokeai/frontend/web/src/features/gallery/components/GalleryBoardName.tsx
+++ b/invokeai/frontend/web/src/features/gallery/components/GalleryBoardName.tsx
@@ -1,48 +1,31 @@
-import { Button, Flex, Icon, Spacer } from '@invoke-ai/ui-library';
+import { Flex, Text } from '@invoke-ai/ui-library';
import { useAppSelector } from 'app/store/storeHooks';
-import { memo, useMemo } from 'react';
-import { PiCaretUpBold } from 'react-icons/pi';
+import { memo } from 'react';
import { useBoardName } from 'services/api/hooks/useBoardName';
type Props = {
- isOpen: boolean;
- onToggle: () => void;
+ onClick: () => void;
};
const GalleryBoardName = (props: Props) => {
- const { isOpen, onToggle } = props;
const selectedBoardId = useAppSelector((s) => s.gallery.selectedBoardId);
const boardName = useBoardName(selectedBoardId);
- const formattedBoardName = useMemo(() => {
- if (boardName.length > 20) {
- return `${boardName.substring(0, 20)}...`;
- }
- return boardName;
- }, [boardName]);
-
return (
-    <Flex as={Button} onClick={onToggle} size="sm" gap={2} w="full" justifyContent="space-between" alignItems="center" px={2}>
-      {formattedBoardName}
-      <Icon as={PiCaretUpBold} transform={isOpen ? 'rotate(0deg)' : 'rotate(180deg)'} transitionProperty="common" transitionDuration="normal" />
-    </Flex>
+    <Flex onClick={props.onClick} cursor="pointer" w="full" justifyContent="center" alignItems="center">
+      <Text fontSize="md" fontWeight="semibold" noOfLines={1} wordBreak="break-all">{boardName}</Text>
+    </Flex>
);
};
diff --git a/invokeai/frontend/web/src/features/gallery/components/GalleryHeader.tsx b/invokeai/frontend/web/src/features/gallery/components/GalleryHeader.tsx
new file mode 100644
index 0000000000..69c2c8fe35
--- /dev/null
+++ b/invokeai/frontend/web/src/features/gallery/components/GalleryHeader.tsx
@@ -0,0 +1,34 @@
+import { Flex, Link, Text } from '@invoke-ai/ui-library';
+import { useStore } from '@nanostores/react';
+import { $projectName, $projectUrl } from 'app/store/nanostores/projectId';
+import { memo } from 'react';
+
+import GalleryBoardName from './GalleryBoardName';
+
+type Props = {
+ onClickBoardName: () => void;
+};
+
+export const GalleryHeader = memo((props: Props) => {
+ const projectName = useStore($projectName);
+ const projectUrl = useStore($projectUrl);
+
+  if (projectName && projectUrl) {
+    return (
+      <Flex gap={2} w="full" alignItems="center" justifyContent="space-evenly" pe={2}>
+        <Text fontSize="md" fontWeight="semibold" noOfLines={1} wordBreak="break-all">
+          <Link href={projectUrl}>{projectName}</Link>
+        </Text>
+        <GalleryBoardName onClick={props.onClickBoardName} />
+      </Flex>
+    );
+  }
+
+  return (
+    <Flex w="full" pe={2}>
+      <GalleryBoardName onClick={props.onClickBoardName} />
+    </Flex>
+  );
+});
+
+GalleryHeader.displayName = 'GalleryHeader';
diff --git a/invokeai/frontend/web/src/features/gallery/components/GallerySettingsPopover/GallerySettingsPopover.tsx b/invokeai/frontend/web/src/features/gallery/components/GallerySettingsPopover/GallerySettingsPopover.tsx
index b03c606eb6..c6a4005f15 100644
--- a/invokeai/frontend/web/src/features/gallery/components/GallerySettingsPopover/GallerySettingsPopover.tsx
+++ b/invokeai/frontend/web/src/features/gallery/components/GallerySettingsPopover/GallerySettingsPopover.tsx
@@ -17,7 +17,7 @@ const GallerySettingsPopover = () => {
return (
- } />
+ } variant="link" h="full" />
diff --git a/invokeai/frontend/web/src/features/gallery/components/ImageGalleryContent.tsx b/invokeai/frontend/web/src/features/gallery/components/ImageGalleryContent.tsx
index 8f173965a9..5a096f5cef 100644
--- a/invokeai/frontend/web/src/features/gallery/components/ImageGalleryContent.tsx
+++ b/invokeai/frontend/web/src/features/gallery/components/ImageGalleryContent.tsx
@@ -1,25 +1,71 @@
-import { Box, Button, ButtonGroup, Flex, Tab, TabList, Tabs, useDisclosure } from '@invoke-ai/ui-library';
-import { useStore } from '@nanostores/react';
-import { $galleryHeader } from 'app/store/nanostores/galleryHeader';
+import type { ChakraProps } from '@invoke-ai/ui-library';
+import {
+ Box,
+ Collapse,
+ Divider,
+ Flex,
+ IconButton,
+ Spacer,
+ Tab,
+ TabList,
+ Tabs,
+ useDisclosure,
+} from '@invoke-ai/ui-library';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
+import { GalleryHeader } from 'features/gallery/components/GalleryHeader';
import { galleryViewChanged } from 'features/gallery/store/gallerySlice';
-import { memo, useCallback } from 'react';
+import ResizeHandle from 'features/ui/components/tabs/ResizeHandle';
+import { usePanel, type UsePanelOptions } from 'features/ui/hooks/usePanel';
+import type { CSSProperties } from 'react';
+import { memo, useCallback, useMemo, useRef } from 'react';
import { useTranslation } from 'react-i18next';
-import { PiImagesBold } from 'react-icons/pi';
-import { RiServerLine } from 'react-icons/ri';
+import { PiMagnifyingGlassBold } from 'react-icons/pi';
+import type { ImperativePanelGroupHandle } from 'react-resizable-panels';
+import { Panel, PanelGroup } from 'react-resizable-panels';
import BoardsList from './Boards/BoardsList/BoardsList';
-import GalleryBoardName from './GalleryBoardName';
+import BoardsSearch from './Boards/BoardsList/BoardsSearch';
import GallerySettingsPopover from './GallerySettingsPopover/GallerySettingsPopover';
import GalleryImageGrid from './ImageGrid/GalleryImageGrid';
import { GalleryPagination } from './ImageGrid/GalleryPagination';
+import { GallerySearch } from './ImageGrid/GallerySearch';
+
+const COLLAPSE_STYLES: CSSProperties = { flexShrink: 0, minHeight: 0 };
+
+const BASE_STYLES: ChakraProps['sx'] = {
+ fontWeight: 'semibold',
+ fontSize: 'sm',
+ color: 'base.300',
+};
+
+const SELECTED_STYLES: ChakraProps['sx'] = {
+ borderColor: 'base.800',
+ borderBottomColor: 'base.900',
+ color: 'invokeBlue.300',
+};
const ImageGalleryContent = () => {
const { t } = useTranslation();
const galleryView = useAppSelector((s) => s.gallery.galleryView);
+ const searchTerm = useAppSelector((s) => s.gallery.searchTerm);
+ const boardSearchText = useAppSelector((s) => s.gallery.boardSearchText);
const dispatch = useAppDispatch();
- const galleryHeader = useStore($galleryHeader);
- const { isOpen: isBoardListOpen, onToggle: onToggleBoardList } = useDisclosure({ defaultIsOpen: true });
+ const searchDisclosure = useDisclosure({ defaultIsOpen: false });
+ const boardSearchDisclosure = useDisclosure({ defaultIsOpen: false });
+ const panelGroupRef = useRef<ImperativePanelGroupHandle>(null);
+
+ const boardsListPanelOptions = useMemo<UsePanelOptions>(
+ () => ({
+ unit: 'pixels',
+ minSize: 128,
+ defaultSize: 256,
+ fallbackMinSizePct: 20,
+ panelGroupRef,
+ panelGroupDirection: 'vertical',
+ }),
+ []
+ );
+ const boardsListPanel = usePanel(boardsListPanelOptions);
const handleClickImages = useCallback(() => {
dispatch(galleryViewChanged('images'));
@@ -30,59 +76,103 @@ const ImageGalleryContent = () => {
}, [dispatch]);
return (
-    <Flex layerStyle="first" position="relative" flexDirection="column" h="full" w="full" p={2} borderRadius="base">
-      {galleryHeader}
-      <Flex alignItems="center" justifyContent="space-between" gap={2}>
-        <GalleryBoardName isOpen={isBoardListOpen} onToggle={onToggleBoardList} />
-        <GallerySettingsPopover />
-      </Flex>
-      <Box>
-        <BoardsList isOpen={isBoardListOpen} />
-      </Box>
+    <Flex position="relative" flexDirection="column" h="full" w="full" pt={2}>
+      <Flex alignItems="center" gap={0}>
+        <GalleryHeader onClickBoardName={boardsListPanel.toggle} />
+        <Flex alignItems="center" justifyContent="space-between">
+          <IconButton
+            onClick={boardSearchDisclosure.onToggle}
+            tooltip={t('gallery.displayBoardSearch')}
+            aria-label={t('gallery.displayBoardSearch')}
+            icon={<PiMagnifyingGlassBold />}
+            variant="link"
+          />
+          {boardSearchText && (
+            <Box w={2} h={2} bg="invokeBlue.300" borderRadius="full" />
+          )}
+        </Flex>
+      </Flex>
-      <Flex flexDirection="column" alignItems="center" justifyContent="space-between" h="full" w="full">
-        <Tabs index={galleryView === 'images' ? 0 : 1} variant="enclosed" display="flex" flexDir="column" w="full" h="full">
-          <TabList>
-            <ButtonGroup w="full">
-              <Tab
-                as={Button}
-                size="sm"
-                onClick={handleClickImages}
-                icon={<PiImagesBold />}
-                data-testid="images-tab"
-              >
-                {t('parameters.images')}
-              </Tab>
-              <Tab
-                as={Button}
-                size="sm"
-                onClick={handleClickAssets}
-                icon={<RiServerLine />}
-                data-testid="assets-tab"
-              >
-                {t('gallery.assets')}
-              </Tab>
-            </ButtonGroup>
-          </TabList>
-          <GalleryImageGrid />
-          <GalleryPagination />
-        </Tabs>
-      </Flex>
-    </Flex>
+      <PanelGroup ref={panelGroupRef} direction="vertical">
+        <Panel collapsible {...boardsListPanel.panelProps}>
+          <Flex flexDir="column" w="full" h="full">
+            <Collapse in={boardSearchDisclosure.isOpen} style={COLLAPSE_STYLES}>
+              <Box w="full" pt={2}>
+                <BoardsSearch />
+              </Box>
+            </Collapse>
+            <Divider pt={2} />
+            <BoardsList />
+          </Flex>
+        </Panel>
+        <ResizeHandle orientation="horizontal" {...boardsListPanel.resizeHandleProps} />
+        <Panel>
+          <Flex flexDirection="column" alignItems="center" justifyContent="space-between" h="full" w="full">
+            <Tabs index={galleryView === 'images' ? 0 : 1} variant="enclosed" display="flex" flexDir="column" w="full" h="full">
+              <TabList gap={2} fontSize="sm" borderColor="base.800" alignItems="center" w="full">
+                <Tab sx={BASE_STYLES} _selected={SELECTED_STYLES} onClick={handleClickImages} data-testid="images-tab">
+                  {t('parameters.images')}
+                </Tab>
+                <Tab sx={BASE_STYLES} _selected={SELECTED_STYLES} onClick={handleClickAssets} data-testid="assets-tab">
+                  {t('gallery.assets')}
+                </Tab>
+                <Spacer />
+                <IconButton
+                  onClick={searchDisclosure.onToggle}
+                  tooltip={t('gallery.displaySearch')}
+                  aria-label={t('gallery.displaySearch')}
+                  icon={<PiMagnifyingGlassBold />}
+                  variant="link"
+                />
+                {searchTerm && (
+                  <Box w={2} h={2} bg="invokeBlue.300" borderRadius="full" />
+                )}
+                <GallerySettingsPopover />
+              </TabList>
+              <Collapse in={searchDisclosure.isOpen} style={COLLAPSE_STYLES}>
+                <Box w="full" pt={2}>
+                  <GallerySearch />
+                </Box>
+              </Collapse>
+              <GalleryImageGrid />
+              <GalleryPagination />
+            </Tabs>
+          </Flex>
+        </Panel>
+      </PanelGroup>
+    </Flex>
);
};
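The gallery now hosts the boards list in a vertically resizable panel. The options object handed to `usePanel` is memoized with an empty dependency list so the hook sees a stable reference, and `panelGroupRef` lets pixel sizes be converted into percentages of the group's measured height. A sketch of just that wiring (`usePanel`/`UsePanelOptions` are the app's own hook and type, not shown here):

```ts
import { useMemo, useRef } from 'react';
import type { ImperativePanelGroupHandle } from 'react-resizable-panels';

const useBoardsListPanelOptions = () => {
  const panelGroupRef = useRef<ImperativePanelGroupHandle>(null);
  const options = useMemo(
    () => ({
      unit: 'pixels' as const, // sizes below are px, converted to % internally
      minSize: 128, // smallest useful boards list
      defaultSize: 256, // initial height
      fallbackMinSizePct: 20, // used before the group has been measured
      panelGroupRef,
      panelGroupDirection: 'vertical' as const,
    }),
    []
  );
  return { panelGroupRef, options };
};
```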
diff --git a/invokeai/frontend/web/src/features/gallery/components/ImageGrid/GalleryImageGrid.tsx b/invokeai/frontend/web/src/features/gallery/components/ImageGrid/GalleryImageGrid.tsx
index 0be30505d5..9e0ba18f8a 100644
--- a/invokeai/frontend/web/src/features/gallery/components/ImageGrid/GalleryImageGrid.tsx
+++ b/invokeai/frontend/web/src/features/gallery/components/ImageGrid/GalleryImageGrid.tsx
@@ -124,7 +124,7 @@ const Content = () => {
}, [calculateNewLimit, container, dispatch]);
return (
-
+
diff --git a/invokeai/frontend/web/src/features/gallery/components/ImageGrid/GalleryPagination.tsx b/invokeai/frontend/web/src/features/gallery/components/ImageGrid/GalleryPagination.tsx
--- a/invokeai/frontend/web/src/features/gallery/components/ImageGrid/GalleryPagination.tsx
+++ b/invokeai/frontend/web/src/features/gallery/components/ImageGrid/GalleryPagination.tsx
export const GalleryPagination = () => {
const { goPrev, goNext, isPrevEnabled, isNextEnabled, pageButtons, goToPage, currentPage, total } =
useGalleryPagination();
+ const onClickPrev = useCallback(() => {
+ goPrev();
+ }, [goPrev]);
+
+ const onClickNext = useCallback(() => {
+ goNext();
+ }, [goNext]);
+
if (!total) {
return null;
}
@@ -16,7 +25,7 @@ export const GalleryPagination = () => {
size="sm"
aria-label="prev"
icon={ }
- onClick={goPrev}
+ onClick={onClickPrev}
isDisabled={!isPrevEnabled}
variant="ghost"
/>
@@ -45,7 +54,7 @@ export const GalleryPagination = () => {
size="sm"
aria-label="next"
icon={ }
- onClick={goNext}
+ onClick={onClickNext}
isDisabled={!isNextEnabled}
variant="ghost"
/>
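The `onClickPrev`/`onClickNext` wrappers look redundant but are not: `goPrev` and `goNext` now take an optional `withHotkey` argument, and wiring them directly to `onClick` would pass the click's `MouseEvent` in that position at runtime. A zero-argument wrapper keeps the click path clean:

```ts
declare const goPrev: (withHotkey?: 'arrow' | 'alt+arrow') => void;

// onClick={goPrev} would effectively call goPrev(mouseEvent).
// The wrapper guarantees an argument-free call:
const onClickPrev = () => {
  goPrev();
};
```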
diff --git a/invokeai/frontend/web/src/features/gallery/components/ImageGrid/GallerySearch.tsx b/invokeai/frontend/web/src/features/gallery/components/ImageGrid/GallerySearch.tsx
new file mode 100644
index 0000000000..9e3a1bf487
--- /dev/null
+++ b/invokeai/frontend/web/src/features/gallery/components/ImageGrid/GallerySearch.tsx
@@ -0,0 +1,66 @@
+import { IconButton, Input, InputGroup, InputRightElement, Spinner } from '@invoke-ai/ui-library';
+import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
+import { selectListImagesQueryArgs } from 'features/gallery/store/gallerySelectors';
+import { searchTermChanged } from 'features/gallery/store/gallerySlice';
+import { debounce } from 'lodash-es';
+import type { ChangeEvent } from 'react';
+import { useCallback, useMemo, useState } from 'react';
+import { useTranslation } from 'react-i18next';
+import { PiXBold } from 'react-icons/pi';
+import { useListImagesQuery } from 'services/api/endpoints/images';
+
+export const GallerySearch = () => {
+ const dispatch = useAppDispatch();
+ const searchTerm = useAppSelector((s) => s.gallery.searchTerm);
+ const { t } = useTranslation();
+ const [searchTermInput, setSearchTermInput] = useState(searchTerm);
+ const queryArgs = useAppSelector(selectListImagesQueryArgs);
+ const { isPending } = useListImagesQuery(queryArgs, {
+ selectFromResult: ({ isLoading, isFetching }) => ({ isPending: isLoading || isFetching }),
+ });
+ const debouncedSetSearchTerm = useMemo(() => {
+ return debounce((value: string) => {
+ dispatch(searchTermChanged(value));
+ }, 1000);
+ }, [dispatch]);
+
+ const handleChangeInput = useCallback(
+ (e: ChangeEvent<HTMLInputElement>) => {
+ setSearchTermInput(e.target.value);
+ debouncedSetSearchTerm(e.target.value);
+ },
+ [debouncedSetSearchTerm]
+ );
+
+ const handleClearInput = useCallback(() => {
+ setSearchTermInput('');
+ dispatch(searchTermChanged(''));
+ }, [dispatch]);
+
+  return (
+    <InputGroup>
+      <Input placeholder={t('gallery.searchImages')} value={searchTermInput} onChange={handleChangeInput} data-testid="image-search-input" />
+      {isPending && (
+        <InputRightElement h="full" pe={2}>
+          <Spinner size="sm" opacity={0.5} />
+        </InputRightElement>
+      )}
+      {!isPending && searchTermInput.length && (
+        <InputRightElement h="full" pe={2}>
+          <IconButton
+            onClick={handleClearInput}
+            size="sm"
+            variant="link"
+            opacity={0.5}
+            aria-label={t('boards.clearSearch')}
+            icon={<PiXBold />}
+          />
+        </InputRightElement>
+      )}
+    </InputGroup>
+  );
+};
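The search field keeps two copies of the term: local state updated on every keystroke (so typing feels instant) and the store value updated through a memoized lodash-es `debounce`, which in turn re-runs the list query because `searchTerm` is part of its arguments. Clearing bypasses the debounce and dispatches immediately. The core of that pattern:

```ts
import { debounce } from 'lodash-es';
import { useMemo, useState } from 'react';

const useDebouncedSearch = (dispatchSearch: (value: string) => void) => {
  const [input, setInput] = useState('');
  // Memoized so the same timer instance survives re-renders.
  const debounced = useMemo(() => debounce(dispatchSearch, 1000), [dispatchSearch]);
  const onChange = (value: string) => {
    setInput(value); // immediate: responsive text field
    debounced(value); // deferred: at most one query per pause in typing
  };
  const onClear = () => {
    setInput('');
    dispatchSearch(''); // immediate, skipping the debounce
  };
  return { input, onChange, onClear };
};
```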
diff --git a/invokeai/frontend/web/src/features/gallery/components/ImageGrid/GallerySelectionCountTag.tsx b/invokeai/frontend/web/src/features/gallery/components/ImageGrid/GallerySelectionCountTag.tsx
index e82a575991..6e111e59c0 100644
--- a/invokeai/frontend/web/src/features/gallery/components/ImageGrid/GallerySelectionCountTag.tsx
+++ b/invokeai/frontend/web/src/features/gallery/components/ImageGrid/GallerySelectionCountTag.tsx
@@ -17,8 +17,8 @@ export const GallerySelectionCountTag = () => {
}, [dispatch]);
const onSelectPage = useCallback(() => {
- dispatch(selectionChanged(imageDTOs));
- }, [dispatch, imageDTOs]);
+ dispatch(selectionChanged([...selection, ...imageDTOs]));
+ }, [dispatch, selection, imageDTOs]);
useHotkeys(['ctrl+a', 'meta+a'], onSelectPage, { preventDefault: true }, [onSelectPage]);
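The ctrl/meta+A change is a one-liner with a visible behavioral shift: select-all is now additive across pages instead of replacing the selection with the current page. Note the plain spread can duplicate entries when part of the page is already selected; a map keyed on `image_name` would deduplicate, as sketched:

```ts
type ImageDTO = { image_name: string };
declare const selection: ImageDTO[];
declare const imageDTOs: ImageDTO[];

const combined = [...selection, ...imageDTOs]; // what the diff dispatches
const deduped = Array.from(new Map(combined.map((i) => [i.image_name, i])).values());
```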
diff --git a/invokeai/frontend/web/src/features/gallery/components/NextPrevImageButtons.tsx b/invokeai/frontend/web/src/features/gallery/components/NextPrevImageButtons.tsx
index 400eb38a94..7b2c06359b 100644
--- a/invokeai/frontend/web/src/features/gallery/components/NextPrevImageButtons.tsx
+++ b/invokeai/frontend/web/src/features/gallery/components/NextPrevImageButtons.tsx
@@ -1,11 +1,11 @@
import type { ChakraProps } from '@invoke-ai/ui-library';
-import { Box, Flex, IconButton, Spinner } from '@invoke-ai/ui-library';
+import { Box, IconButton } from '@invoke-ai/ui-library';
import { useGalleryImages } from 'features/gallery/hooks/useGalleryImages';
import { useGalleryNavigation } from 'features/gallery/hooks/useGalleryNavigation';
import { useGalleryPagination } from 'features/gallery/hooks/useGalleryPagination';
-import { memo } from 'react';
+import { memo, useCallback, useMemo } from 'react';
import { useTranslation } from 'react-i18next';
-import { PiCaretDoubleRightBold, PiCaretLeftBold, PiCaretRightBold } from 'react-icons/pi';
+import { PiCaretLeftBold, PiCaretRightBold } from 'react-icons/pi';
const nextPrevButtonStyles: ChakraProps['sx'] = {
color: 'base.100',
@@ -14,52 +14,78 @@ const nextPrevButtonStyles: ChakraProps['sx'] = {
const NextPrevImageButtons = () => {
const { t } = useTranslation();
-
- const { prevImage, nextImage, isOnFirstImage, isOnLastImage } = useGalleryNavigation();
+ const { prevImage, nextImage, isOnFirstImageOfView, isOnLastImageOfView } = useGalleryNavigation();
const { isFetching } = useGalleryImages().queryResult;
- const { isNextEnabled, goNext } = useGalleryPagination();
+ const { isNextEnabled, goNext, isPrevEnabled, goPrev } = useGalleryPagination();
+
+ const shouldShowLeftArrow = useMemo(() => {
+ if (!isOnFirstImageOfView) {
+ return true;
+ }
+ if (isPrevEnabled) {
+ return true;
+ }
+ return false;
+ }, [isOnFirstImageOfView, isPrevEnabled]);
+
+ const onClickLeftArrow = useCallback(() => {
+ if (isOnFirstImageOfView) {
+ if (isPrevEnabled && !isFetching) {
+ goPrev('arrow');
+ }
+ } else {
+ prevImage();
+ }
+ }, [goPrev, isFetching, isOnFirstImageOfView, isPrevEnabled, prevImage]);
+
+ const shouldShowRightArrow = useMemo(() => {
+ if (!isOnLastImageOfView) {
+ return true;
+ }
+ if (isNextEnabled) {
+ return true;
+ }
+ return false;
+ }, [isNextEnabled, isOnLastImageOfView]);
+
+ const onClickRightArrow = useCallback(() => {
+ if (isOnLastImageOfView) {
+ if (isNextEnabled && !isFetching) {
+ goNext('arrow');
+ }
+ } else {
+ nextImage();
+ }
+ }, [goNext, isFetching, isNextEnabled, isOnLastImageOfView, nextImage]);
return (
    <Box pos="relative" h="full" w="full">
-      {!isOnFirstImage && (
+      {shouldShowLeftArrow && (
        <IconButton
          aria-label={t('accessibility.previousImage')}
          icon={<PiCaretLeftBold size={64} />}
          variant="unstyled"
-          onClick={prevImage}
+          onClick={onClickLeftArrow}
          boxSize={16}
          sx={nextPrevButtonStyles}
+          isDisabled={isFetching}
        />
      )}
-      {!isOnLastImage && (
+      {shouldShowRightArrow && (
        <IconButton
          aria-label={t('accessibility.nextImage')}
          icon={<PiCaretRightBold size={64} />}
          variant="unstyled"
-          onClick={nextImage}
+          onClick={onClickRightArrow}
          boxSize={16}
          sx={nextPrevButtonStyles}
+          isDisabled={isFetching}
        />
      )}
-      {isOnLastImage && isNextEnabled && !isFetching && (
-        <IconButton
-          aria-label={t('gallery.loadMore')}
-          icon={<PiCaretDoubleRightBold size={64} />}
-          variant="unstyled"
-          onClick={goNext}
-          boxSize={16}
-          sx={nextPrevButtonStyles}
-        />
-      )}
-      {isOnLastImage && isNextEnabled && isFetching && (
-        <Flex w={16} h={16} alignItems="center" justifyContent="center">
-          <Spinner opacity={0.5} size="xl" />
-        </Flex>
-      )}
    </Box>
  );
diff --git a/invokeai/frontend/web/src/features/gallery/hooks/useGalleryHotkeys.ts b/invokeai/frontend/web/src/features/gallery/hooks/useGalleryHotkeys.ts
index 6dd7065204..5b14fb7a8c 100644
--- a/invokeai/frontend/web/src/features/gallery/hooks/useGalleryHotkeys.ts
+++ b/invokeai/frontend/web/src/features/gallery/hooks/useGalleryHotkeys.ts
@@ -28,7 +28,8 @@ export const useGalleryHotkeys = () => {
handleRightImage,
handleUpImage,
handleDownImage,
- areImagesBelowCurrent,
+ isOnFirstRow,
+ isOnLastRow,
isOnFirstImageOfView,
isOnLastImageOfView,
} = useGalleryNavigation();
@@ -37,7 +38,7 @@ export const useGalleryHotkeys = () => {
['left', 'alt+left'],
(e) => {
if (isOnFirstImageOfView && isPrevEnabled && !queryResult.isFetching) {
- goPrev();
+ goPrev(e.altKey ? 'alt+arrow' : 'arrow');
return;
}
canNavigateGallery && handleLeftImage(e.altKey);
@@ -52,7 +53,7 @@ export const useGalleryHotkeys = () => {
return;
}
if (isOnLastImageOfView && isNextEnabled && !queryResult.isFetching) {
- goNext();
+ goNext(e.altKey ? 'alt+arrow' : 'arrow');
return;
}
if (!isOnLastImageOfView) {
@@ -65,22 +66,26 @@ export const useGalleryHotkeys = () => {
useHotkeys(
['up', 'alt+up'],
(e) => {
+ if (isOnFirstRow && isPrevEnabled && !queryResult.isFetching) {
+ goPrev(e.altKey ? 'alt+arrow' : 'arrow');
+ return;
+ }
handleUpImage(e.altKey);
},
{ preventDefault: true },
- [handleUpImage]
+ [handleUpImage, canNavigateGallery, isOnFirstRow, goPrev, isPrevEnabled, queryResult.isFetching]
);
useHotkeys(
['down', 'alt+down'],
(e) => {
- if (!areImagesBelowCurrent && isNextEnabled && !queryResult.isFetching) {
- goNext();
+ if (isOnLastRow && isNextEnabled && !queryResult.isFetching) {
+ goNext(e.altKey ? 'alt+arrow' : 'arrow');
return;
}
handleDownImage(e.altKey);
},
{ preventDefault: true },
- [areImagesBelowCurrent, goNext, isNextEnabled, queryResult.isFetching, handleDownImage]
+ [isOnLastRow, goNext, isNextEnabled, queryResult.isFetching, handleDownImage]
);
};
diff --git a/invokeai/frontend/web/src/features/gallery/hooks/useGalleryNavigation.ts b/invokeai/frontend/web/src/features/gallery/hooks/useGalleryNavigation.ts
index 3227cbdfa1..1655e284cb 100644
--- a/invokeai/frontend/web/src/features/gallery/hooks/useGalleryNavigation.ts
+++ b/invokeai/frontend/web/src/features/gallery/hooks/useGalleryNavigation.ts
@@ -112,7 +112,8 @@ type UseGalleryNavigationReturn = {
nextImage: () => void;
isOnFirstImage: boolean;
isOnLastImage: boolean;
- areImagesBelowCurrent: boolean;
+ isOnFirstRow: boolean;
+ isOnLastRow: boolean;
isOnFirstImageOfView: boolean;
isOnLastImageOfView: boolean;
};
@@ -167,10 +168,11 @@ export const useGalleryNavigation = (): UseGalleryNavigationReturn => {
[lastSelectedImageIndex, loadedImagesCount]
);
- const areImagesBelowCurrent = useMemo(() => {
- const imagesPerRow = getImagesPerRow();
- return lastSelectedImageIndex + imagesPerRow < loadedImagesCount;
- }, [lastSelectedImageIndex, loadedImagesCount]);
+ const isOnFirstRow = useMemo(() => lastSelectedImageIndex < getImagesPerRow(), [lastSelectedImageIndex]);
+ const isOnLastRow = useMemo(
+ () => lastSelectedImageIndex >= loadedImagesCount - getImagesPerRow(),
+ [lastSelectedImageIndex, loadedImagesCount]
+ );
const isOnFirstImageOfView = useMemo(() => {
return lastSelectedImageIndex === 0;
@@ -223,7 +225,8 @@ export const useGalleryNavigation = (): UseGalleryNavigationReturn => {
handleDownImage,
isOnFirstImage,
isOnLastImage,
- areImagesBelowCurrent,
+ isOnFirstRow,
+ isOnLastRow,
nextImage,
prevImage,
isOnFirstImageOfView,
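The replaced `areImagesBelowCurrent` is effectively split into two symmetric predicates. A worked example: with `getImagesPerRow()` returning 4 and 10 loaded images (indices 0-9), indices 0-3 are on the first row (`index < 4`) and indices 6-9 are on the last (`index >= 10 - 4`), which is what lets up/down navigation trigger page changes exactly at the grid edges:

```ts
const isOnFirstRow = (index: number, perRow: number) => index < perRow;
const isOnLastRow = (index: number, perRow: number, loaded: number) => index >= loaded - perRow;

console.log(isOnFirstRow(2, 4)); // true
console.log(isOnLastRow(5, 4, 10)); // false (5 < 6)
console.log(isOnLastRow(6, 4, 10)); // true
```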
diff --git a/invokeai/frontend/web/src/features/gallery/hooks/useGalleryPagination.ts b/invokeai/frontend/web/src/features/gallery/hooks/useGalleryPagination.ts
index cfd4a488c3..75bea2dcb1 100644
--- a/invokeai/frontend/web/src/features/gallery/hooks/useGalleryPagination.ts
+++ b/invokeai/frontend/web/src/features/gallery/hooks/useGalleryPagination.ts
@@ -80,25 +80,31 @@ export const useGalleryPagination = () => {
return offset > 0;
}, [count, offset]);
- const goNext = useCallback(() => {
- dispatch(offsetChanged(offset + (limit || 0)));
- }, [dispatch, offset, limit]);
+ const goNext = useCallback(
+ (withHotkey?: 'arrow' | 'alt+arrow') => {
+ dispatch(offsetChanged({ offset: offset + (limit || 0), withHotkey }));
+ },
+ [dispatch, offset, limit]
+ );
- const goPrev = useCallback(() => {
- dispatch(offsetChanged(Math.max(offset - (limit || 0), 0)));
- }, [dispatch, offset, limit]);
+ const goPrev = useCallback(
+ (withHotkey?: 'arrow' | 'alt+arrow') => {
+ dispatch(offsetChanged({ offset: Math.max(offset - (limit || 0), 0), withHotkey }));
+ },
+ [dispatch, offset, limit]
+ );
const goToPage = useCallback(
(page: number) => {
- dispatch(offsetChanged(page * (limit || 0)));
+ dispatch(offsetChanged({ offset: page * (limit || 0) }));
},
[dispatch, limit]
);
const goToFirst = useCallback(() => {
- dispatch(offsetChanged(0));
+ dispatch(offsetChanged({ offset: 0 }));
}, [dispatch]);
const goToLast = useCallback(() => {
- dispatch(offsetChanged((pages - 1) * (limit || 0)));
+ dispatch(offsetChanged({ offset: (pages - 1) * (limit || 0) }));
}, [dispatch, pages, limit]);
// handle when total/pages decrease and user is on high page number (ie bulk removing or deleting)
@@ -127,44 +133,22 @@ export const useGalleryPagination = () => {
return Math.min((currentPage + 1) * (limit || 0), total);
}, [currentPage, limit, total]);
- const api = useMemo(
- () => ({
- count,
- total,
- currentPage,
- pages,
- isNextEnabled,
- isPrevEnabled,
- goNext,
- goPrev,
- goToPage,
- goToFirst,
- goToLast,
- pageButtons,
- isFirstEnabled,
- isLastEnabled,
- rangeDisplay,
- numberOnPage,
- }),
- [
- count,
- total,
- currentPage,
- pages,
- isNextEnabled,
- isPrevEnabled,
- goNext,
- goPrev,
- goToPage,
- goToFirst,
- goToLast,
- pageButtons,
- isFirstEnabled,
- isLastEnabled,
- rangeDisplay,
- numberOnPage,
- ]
- );
-
- return api;
+ return {
+ count,
+ total,
+ currentPage,
+ pages,
+ isNextEnabled,
+ isPrevEnabled,
+ goNext,
+ goPrev,
+ goToPage,
+ goToFirst,
+ goToLast,
+ pageButtons,
+ isFirstEnabled,
+ isLastEnabled,
+ rangeDisplay,
+ numberOnPage,
+ };
};
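Dropping the `useMemo` around the hook's return value is a simplification, not a regression: the members that matter for identity (`goNext`, `goPrev`, `pageButtons`, ...) are individually memoized, and consumers destructure them, so only the short-lived wrapper object is rebuilt each render. The caveat, sketched below, is that the wrapper itself is no longer a stable value:

```ts
declare const goNext: () => void; // stable via useCallback

const usePagination = () => ({ goNext }); // new wrapper object every render

// Fine: depend on the stable member...
const { goNext: next } = usePagination();
// ...but the whole object returned by usePagination() now changes identity
// each render, so it should not appear directly in dependency arrays.
```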
diff --git a/invokeai/frontend/web/src/features/gallery/store/gallerySelectors.ts b/invokeai/frontend/web/src/features/gallery/store/gallerySelectors.ts
index 832da474ca..76ed4c6659 100644
--- a/invokeai/frontend/web/src/features/gallery/store/gallerySelectors.ts
+++ b/invokeai/frontend/web/src/features/gallery/store/gallerySelectors.ts
@@ -22,6 +22,7 @@ export const selectListImagesQueryArgs = createMemoizedSelector(
is_intermediate: false,
starred_first: gallery.starredFirst,
order_dir: gallery.orderDir,
+ search_term: gallery.searchTerm,
}
: skipToken
);
diff --git a/invokeai/frontend/web/src/features/gallery/store/gallerySlice.ts b/invokeai/frontend/web/src/features/gallery/store/gallerySlice.ts
index d5d83e53e8..487c07dbc2 100644
--- a/invokeai/frontend/web/src/features/gallery/store/gallerySlice.ts
+++ b/invokeai/frontend/web/src/features/gallery/store/gallerySlice.ts
@@ -19,7 +19,8 @@ const initialGalleryState: GalleryState = {
limit: 20,
offset: 0,
starredFirst: true,
- orderDir: 'ASC',
+ orderDir: 'DESC',
+ searchTerm: '',
isImageViewerOpen: true,
imageToCompare: null,
comparisonMode: 'slider',
@@ -103,8 +104,9 @@ export const gallerySlice = createSlice({
comparisonFitChanged: (state, action: PayloadAction<'contain' | 'fill'>) => {
state.comparisonFit = action.payload;
},
- offsetChanged: (state, action: PayloadAction<number>) => {
- state.offset = action.payload;
+ offsetChanged: (state, action: PayloadAction<{ offset: number; withHotkey?: 'arrow' | 'alt+arrow' }>) => {
+ const { offset } = action.payload;
+ state.offset = offset;
},
limitChanged: (state, action: PayloadAction<number>) => {
state.limit = action.payload;
@@ -118,6 +120,10 @@ export const gallerySlice = createSlice({
orderDirChanged: (state, action: PayloadAction<OrderDir>) => {
state.orderDir = action.payload;
},
+ searchTermChanged: (state, action: PayloadAction<string>) => {
+ state.searchTerm = action.payload;
+ state.offset = 0;
+ },
},
});
@@ -143,6 +149,7 @@ export const {
orderDirChanged,
starredFirstChanged,
shouldShowArchivedBoardsChanged,
+ searchTermChanged,
} = gallerySlice.actions;
export const selectGallerySlice = (state: RootState) => state.gallery;
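Two reducer details worth calling out: `offsetChanged` now takes an object so paging actions can carry their trigger (`'arrow' | 'alt+arrow'`) for listeners even though the reducer only reads `offset`, and `searchTermChanged` rewinds `offset` to 0 so a new search always starts on page one. Hypothetical dispatches:

```ts
declare const dispatch: (action: unknown) => void;
declare const offsetChanged: (p: { offset: number; withHotkey?: 'arrow' | 'alt+arrow' }) => unknown;
declare const searchTermChanged: (term: string) => unknown;

dispatch(offsetChanged({ offset: 0 })); // goToFirst
dispatch(offsetChanged({ offset: 100, withHotkey: 'arrow' })); // arrow-key paging
dispatch(searchTermChanged('cat')); // reducer also resets offset to 0
```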
diff --git a/invokeai/frontend/web/src/features/gallery/store/types.ts b/invokeai/frontend/web/src/features/gallery/store/types.ts
index 16c899e6a7..7d7a321515 100644
--- a/invokeai/frontend/web/src/features/gallery/store/types.ts
+++ b/invokeai/frontend/web/src/features/gallery/store/types.ts
@@ -22,6 +22,7 @@ export type GalleryState = {
limit: number;
starredFirst: boolean;
orderDir: OrderDir;
+ searchTerm: string;
alwaysShowImageSizeBadge: boolean;
imageToCompare: ImageDTO | null;
comparisonMode: ComparisonMode;
diff --git a/invokeai/frontend/web/src/features/modelManagerV2/subpanels/ModelManagerPanel/ModelList.tsx b/invokeai/frontend/web/src/features/modelManagerV2/subpanels/ModelManagerPanel/ModelList.tsx
index 67e65dbfb6..b82917221e 100644
--- a/invokeai/frontend/web/src/features/modelManagerV2/subpanels/ModelManagerPanel/ModelList.tsx
+++ b/invokeai/frontend/web/src/features/modelManagerV2/subpanels/ModelManagerPanel/ModelList.tsx
@@ -11,6 +11,7 @@ import {
useLoRAModels,
useMainModels,
useRefinerModels,
+ useSpandrelImageToImageModels,
useT2IAdapterModels,
useVAEModels,
} from 'services/api/hooks/modelsByType';
@@ -71,6 +72,13 @@ const ModelList = () => {
[vaeModels, searchTerm, filteredModelType]
);
+ const [spandrelImageToImageModels, { isLoading: isLoadingSpandrelImageToImageModels }] =
+ useSpandrelImageToImageModels();
+ const filteredSpandrelImageToImageModels = useMemo(
+ () => modelsFilter(spandrelImageToImageModels, searchTerm, filteredModelType),
+ [spandrelImageToImageModels, searchTerm, filteredModelType]
+ );
+
const totalFilteredModels = useMemo(() => {
return (
filteredMainModels.length +
@@ -80,7 +88,8 @@ const ModelList = () => {
filteredControlNetModels.length +
filteredT2IAdapterModels.length +
filteredIPAdapterModels.length +
- filteredVAEModels.length
+ filteredVAEModels.length +
+ filteredSpandrelImageToImageModels.length
);
}, [
filteredControlNetModels.length,
@@ -91,6 +100,7 @@ const ModelList = () => {
filteredRefinerModels.length,
filteredT2IAdapterModels.length,
filteredVAEModels.length,
+ filteredSpandrelImageToImageModels.length,
]);
return (
@@ -143,6 +153,17 @@ const ModelList = () => {
{!isLoadingT2IAdapterModels && filteredT2IAdapterModels.length > 0 && (
  <ModelListWrapper title={t('common.t2iAdapter')} modelList={filteredT2IAdapterModels} key="t2i-adapters" />
)}
+        {/* Spandrel Image to Image List */}
+        {isLoadingSpandrelImageToImageModels && (
+          <FetchingModelsLoader loadingMessage="Loading Image-to-Image Models..." />
+        )}
+        {!isLoadingSpandrelImageToImageModels && filteredSpandrelImageToImageModels.length > 0 && (
+          <ModelListWrapper title="Image-to-Image" modelList={filteredSpandrelImageToImageModels} key="spandrel-image-to-image" />
+        )}
{totalFilteredModels === 0 && (
{t('modelManager.noMatchingModels')}
diff --git a/invokeai/frontend/web/src/features/modelManagerV2/subpanels/ModelManagerPanel/ModelTypeFilter.tsx b/invokeai/frontend/web/src/features/modelManagerV2/subpanels/ModelManagerPanel/ModelTypeFilter.tsx
index 76802b36e7..1a2444870b 100644
--- a/invokeai/frontend/web/src/features/modelManagerV2/subpanels/ModelManagerPanel/ModelTypeFilter.tsx
+++ b/invokeai/frontend/web/src/features/modelManagerV2/subpanels/ModelManagerPanel/ModelTypeFilter.tsx
@@ -21,6 +21,7 @@ export const ModelTypeFilter = () => {
t2i_adapter: t('common.t2iAdapter'),
ip_adapter: t('common.ipAdapter'),
clip_vision: 'Clip Vision',
+ spandrel_image_to_image: 'Image-to-Image',
}),
[t]
);
diff --git a/invokeai/frontend/web/src/features/nodes/components/flow/nodes/Invocation/fields/InputFieldRenderer.tsx b/invokeai/frontend/web/src/features/nodes/components/flow/nodes/Invocation/fields/InputFieldRenderer.tsx
index 99937ceec4..d863def973 100644
--- a/invokeai/frontend/web/src/features/nodes/components/flow/nodes/Invocation/fields/InputFieldRenderer.tsx
+++ b/invokeai/frontend/web/src/features/nodes/components/flow/nodes/Invocation/fields/InputFieldRenderer.tsx
@@ -32,6 +32,8 @@ import {
isSDXLMainModelFieldInputTemplate,
isSDXLRefinerModelFieldInputInstance,
isSDXLRefinerModelFieldInputTemplate,
+ isSpandrelImageToImageModelFieldInputInstance,
+ isSpandrelImageToImageModelFieldInputTemplate,
isStringFieldInputInstance,
isStringFieldInputTemplate,
isT2IAdapterModelFieldInputInstance,
@@ -54,6 +56,7 @@ import NumberFieldInputComponent from './inputs/NumberFieldInputComponent';
import RefinerModelFieldInputComponent from './inputs/RefinerModelFieldInputComponent';
import SchedulerFieldInputComponent from './inputs/SchedulerFieldInputComponent';
import SDXLMainModelFieldInputComponent from './inputs/SDXLMainModelFieldInputComponent';
+import SpandrelImageToImageModelFieldInputComponent from './inputs/SpandrelImageToImageModelFieldInputComponent';
import StringFieldInputComponent from './inputs/StringFieldInputComponent';
import T2IAdapterModelFieldInputComponent from './inputs/T2IAdapterModelFieldInputComponent';
import VAEModelFieldInputComponent from './inputs/VAEModelFieldInputComponent';
@@ -125,6 +128,20 @@ const InputFieldRenderer = ({ nodeId, fieldName }: InputFieldProps) => {
if (isT2IAdapterModelFieldInputInstance(fieldInstance) && isT2IAdapterModelFieldInputTemplate(fieldTemplate)) {
return <T2IAdapterModelFieldInputComponent nodeId={nodeId} field={fieldInstance} fieldTemplate={fieldTemplate} />;
}
+
+ if (
+ isSpandrelImageToImageModelFieldInputInstance(fieldInstance) &&
+ isSpandrelImageToImageModelFieldInputTemplate(fieldTemplate)
+ ) {
+ return (
+ <SpandrelImageToImageModelFieldInputComponent nodeId={nodeId} field={fieldInstance} fieldTemplate={fieldTemplate} />
+ );
+ }
+
if (isColorFieldInputInstance(fieldInstance) && isColorFieldInputTemplate(fieldTemplate)) {
return <ColorFieldInputComponent nodeId={nodeId} field={fieldInstance} fieldTemplate={fieldTemplate} />;
}
diff --git a/invokeai/frontend/web/src/features/nodes/components/flow/nodes/Invocation/fields/inputs/SpandrelImageToImageModelFieldInputComponent.tsx b/invokeai/frontend/web/src/features/nodes/components/flow/nodes/Invocation/fields/inputs/SpandrelImageToImageModelFieldInputComponent.tsx
new file mode 100644
index 0000000000..ccd4eaa797
--- /dev/null
+++ b/invokeai/frontend/web/src/features/nodes/components/flow/nodes/Invocation/fields/inputs/SpandrelImageToImageModelFieldInputComponent.tsx
@@ -0,0 +1,55 @@
+import { Combobox, FormControl, Tooltip } from '@invoke-ai/ui-library';
+import { useAppDispatch } from 'app/store/storeHooks';
+import { useGroupedModelCombobox } from 'common/hooks/useGroupedModelCombobox';
+import { fieldSpandrelImageToImageModelValueChanged } from 'features/nodes/store/nodesSlice';
+import type {
+ SpandrelImageToImageModelFieldInputInstance,
+ SpandrelImageToImageModelFieldInputTemplate,
+} from 'features/nodes/types/field';
+import { memo, useCallback } from 'react';
+import { useSpandrelImageToImageModels } from 'services/api/hooks/modelsByType';
+import type { SpandrelImageToImageModelConfig } from 'services/api/types';
+
+import type { FieldComponentProps } from './types';
+
+const SpandrelImageToImageModelFieldInputComponent = (
+  props: FieldComponentProps<SpandrelImageToImageModelFieldInputInstance, SpandrelImageToImageModelFieldInputTemplate>
+) => {
+ const { nodeId, field } = props;
+ const dispatch = useAppDispatch();
+
+ const [modelConfigs, { isLoading }] = useSpandrelImageToImageModels();
+
+ const _onChange = useCallback(
+ (value: SpandrelImageToImageModelConfig | null) => {
+ if (!value) {
+ return;
+ }
+ dispatch(
+ fieldSpandrelImageToImageModelValueChanged({
+ nodeId,
+ fieldName: field.name,
+ value,
+ })
+ );
+ },
+ [dispatch, field.name, nodeId]
+ );
+
+ const { options, value, onChange } = useGroupedModelCombobox({
+ modelConfigs,
+ onChange: _onChange,
+ selectedModel: field.value,
+ isLoading,
+ });
+
+  return (
+    <Tooltip label={value?.description}>
+      <FormControl className="nowheel nodrag" isInvalid={!value}>
+        <Combobox value={value} placeholder="Pick one" options={options} onChange={onChange} />
+      </FormControl>
+    </Tooltip>
+  );
+};
+
+export default memo(SpandrelImageToImageModelFieldInputComponent);
diff --git a/invokeai/frontend/web/src/features/nodes/store/nodesSlice.ts b/invokeai/frontend/web/src/features/nodes/store/nodesSlice.ts
index 5ebc5de147..f9214c1572 100644
--- a/invokeai/frontend/web/src/features/nodes/store/nodesSlice.ts
+++ b/invokeai/frontend/web/src/features/nodes/store/nodesSlice.ts
@@ -19,6 +19,7 @@ import type {
ModelIdentifierFieldValue,
SchedulerFieldValue,
SDXLRefinerModelFieldValue,
+ SpandrelImageToImageModelFieldValue,
StatefulFieldValue,
StringFieldValue,
T2IAdapterModelFieldValue,
@@ -39,6 +40,7 @@ import {
zModelIdentifierFieldValue,
zSchedulerFieldValue,
zSDXLRefinerModelFieldValue,
+ zSpandrelImageToImageModelFieldValue,
zStatefulFieldValue,
zStringFieldValue,
zT2IAdapterModelFieldValue,
@@ -333,6 +335,12 @@ export const nodesSlice = createSlice({
fieldT2IAdapterModelValueChanged: (state, action: FieldValueAction<T2IAdapterModelFieldValue>) => {
fieldValueReducer(state, action, zT2IAdapterModelFieldValue);
},
+ fieldSpandrelImageToImageModelValueChanged: (
+ state,
+ action: FieldValueAction<SpandrelImageToImageModelFieldValue>
+ ) => {
+ fieldValueReducer(state, action, zSpandrelImageToImageModelFieldValue);
+ },
fieldEnumModelValueChanged: (state, action: FieldValueAction<EnumFieldValue>) => {
fieldValueReducer(state, action, zEnumFieldValue);
},
@@ -384,6 +392,7 @@ export const {
fieldImageValueChanged,
fieldIPAdapterModelValueChanged,
fieldT2IAdapterModelValueChanged,
+ fieldSpandrelImageToImageModelValueChanged,
fieldLabelChanged,
fieldLoRAModelValueChanged,
fieldModelIdentifierValueChanged,
diff --git a/invokeai/frontend/web/src/features/nodes/types/common.ts b/invokeai/frontend/web/src/features/nodes/types/common.ts
index 54e126af3a..2ea8900281 100644
--- a/invokeai/frontend/web/src/features/nodes/types/common.ts
+++ b/invokeai/frontend/web/src/features/nodes/types/common.ts
@@ -66,6 +66,7 @@ const zModelType = z.enum([
'embedding',
'onnx',
'clip_vision',
+ 'spandrel_image_to_image',
]);
const zSubModelType = z.enum([
'unet',
diff --git a/invokeai/frontend/web/src/features/nodes/types/constants.ts b/invokeai/frontend/web/src/features/nodes/types/constants.ts
index 4ede5cd479..05697c384c 100644
--- a/invokeai/frontend/web/src/features/nodes/types/constants.ts
+++ b/invokeai/frontend/web/src/features/nodes/types/constants.ts
@@ -38,6 +38,7 @@ export const MODEL_TYPES = [
'VAEField',
'CLIPField',
'T2IAdapterModelField',
+ 'SpandrelImageToImageModelField',
];
/**
@@ -62,6 +63,7 @@ export const FIELD_COLORS: { [key: string]: string } = {
MainModelField: 'teal.500',
SDXLMainModelField: 'teal.500',
SDXLRefinerModelField: 'teal.500',
+ SpandrelImageToImageModelField: 'teal.500',
StringField: 'yellow.500',
T2IAdapterField: 'teal.500',
T2IAdapterModelField: 'teal.500',
diff --git a/invokeai/frontend/web/src/features/nodes/types/field.ts b/invokeai/frontend/web/src/features/nodes/types/field.ts
index e2a84e3390..925bd40b9d 100644
--- a/invokeai/frontend/web/src/features/nodes/types/field.ts
+++ b/invokeai/frontend/web/src/features/nodes/types/field.ts
@@ -139,6 +139,10 @@ const zT2IAdapterModelFieldType = zFieldTypeBase.extend({
name: z.literal('T2IAdapterModelField'),
originalType: zStatelessFieldType.optional(),
});
+const zSpandrelImageToImageModelFieldType = zFieldTypeBase.extend({
+ name: z.literal('SpandrelImageToImageModelField'),
+ originalType: zStatelessFieldType.optional(),
+});
const zSchedulerFieldType = zFieldTypeBase.extend({
name: z.literal('SchedulerField'),
originalType: zStatelessFieldType.optional(),
@@ -160,6 +164,7 @@ const zStatefulFieldType = z.union([
zControlNetModelFieldType,
zIPAdapterModelFieldType,
zT2IAdapterModelFieldType,
+ zSpandrelImageToImageModelFieldType,
zColorFieldType,
zSchedulerFieldType,
]);
@@ -581,6 +586,33 @@ export const isT2IAdapterModelFieldInputTemplate = (val: unknown): val is T2IAda
zT2IAdapterModelFieldInputTemplate.safeParse(val).success;
// #endregion
+// #region SpandrelModelToModelField
+
+export const zSpandrelImageToImageModelFieldValue = zModelIdentifierField.optional();
+const zSpandrelImageToImageModelFieldInputInstance = zFieldInputInstanceBase.extend({
+ value: zSpandrelImageToImageModelFieldValue,
+});
+const zSpandrelImageToImageModelFieldInputTemplate = zFieldInputTemplateBase.extend({
+ type: zSpandrelImageToImageModelFieldType,
+ originalType: zFieldType.optional(),
+ default: zSpandrelImageToImageModelFieldValue,
+});
+const zSpandrelImageToImageModelFieldOutputTemplate = zFieldOutputTemplateBase.extend({
+ type: zSpandrelImageToImageModelFieldType,
+});
+export type SpandrelImageToImageModelFieldValue = z.infer<typeof zSpandrelImageToImageModelFieldValue>;
+export type SpandrelImageToImageModelFieldInputInstance = z.infer<typeof zSpandrelImageToImageModelFieldInputInstance>;
+export type SpandrelImageToImageModelFieldInputTemplate = z.infer<typeof zSpandrelImageToImageModelFieldInputTemplate>;
+export const isSpandrelImageToImageModelFieldInputInstance = (
+ val: unknown
+): val is SpandrelImageToImageModelFieldInputInstance =>
+ zSpandrelImageToImageModelFieldInputInstance.safeParse(val).success;
+export const isSpandrelImageToImageModelFieldInputTemplate = (
+ val: unknown
+): val is SpandrelImageToImageModelFieldInputTemplate =>
+ zSpandrelImageToImageModelFieldInputTemplate.safeParse(val).success;
+// #endregion
+
// #region SchedulerField
export const zSchedulerFieldValue = zSchedulerField.optional();
@@ -667,6 +699,7 @@ export const zStatefulFieldValue = z.union([
zControlNetModelFieldValue,
zIPAdapterModelFieldValue,
zT2IAdapterModelFieldValue,
+ zSpandrelImageToImageModelFieldValue,
zColorFieldValue,
zSchedulerFieldValue,
]);
@@ -694,6 +727,7 @@ const zStatefulFieldInputInstance = z.union([
zControlNetModelFieldInputInstance,
zIPAdapterModelFieldInputInstance,
zT2IAdapterModelFieldInputInstance,
+ zSpandrelImageToImageModelFieldInputInstance,
zColorFieldInputInstance,
zSchedulerFieldInputInstance,
]);
@@ -722,6 +756,7 @@ const zStatefulFieldInputTemplate = z.union([
zControlNetModelFieldInputTemplate,
zIPAdapterModelFieldInputTemplate,
zT2IAdapterModelFieldInputTemplate,
+ zSpandrelImageToImageModelFieldInputTemplate,
zColorFieldInputTemplate,
zSchedulerFieldInputTemplate,
zStatelessFieldInputTemplate,
@@ -751,6 +786,7 @@ const zStatefulFieldOutputTemplate = z.union([
zControlNetModelFieldOutputTemplate,
zIPAdapterModelFieldOutputTemplate,
zT2IAdapterModelFieldOutputTemplate,
+ zSpandrelImageToImageModelFieldOutputTemplate,
zColorFieldOutputTemplate,
zSchedulerFieldOutputTemplate,
]);
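The Spandrel additions to `field.ts` follow the file's standing recipe for a stateful field: a zod value schema, an input instance extending the base with that value, a template carrying the type tag and default, plus inferred types and `safeParse`-based guards. Shown generically with stubbed base schemas:

```ts
import { z } from 'zod';

const zModelIdentifierField = z.object({ key: z.string() }); // stub of the real base schema
const zFieldInputInstanceBase = z.object({ name: z.string() }); // stub

const zExampleModelFieldValue = zModelIdentifierField.optional();
const zExampleModelFieldInputInstance = zFieldInputInstanceBase.extend({
  value: zExampleModelFieldValue,
});

export type ExampleModelFieldValue = z.infer<typeof zExampleModelFieldValue>;
export const isExampleModelFieldInputInstance = (
  val: unknown
): val is z.infer<typeof zExampleModelFieldInputInstance> =>
  zExampleModelFieldInputInstance.safeParse(val).success;
```

The new schemas then have to be registered in each stateful union (value, input instance, input template, output template), which is exactly what the remaining hunks in this file do.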
diff --git a/invokeai/frontend/web/src/features/nodes/util/schema/buildFieldInputInstance.ts b/invokeai/frontend/web/src/features/nodes/util/schema/buildFieldInputInstance.ts
index 597779fd61..a5a2d89f03 100644
--- a/invokeai/frontend/web/src/features/nodes/util/schema/buildFieldInputInstance.ts
+++ b/invokeai/frontend/web/src/features/nodes/util/schema/buildFieldInputInstance.ts
@@ -18,6 +18,7 @@ const FIELD_VALUE_FALLBACK_MAP: Record<StatefulFieldType, StatefulFieldValue> =
SDXLRefinerModelField: undefined,
StringField: '',
T2IAdapterModelField: undefined,
+ SpandrelImageToImageModelField: undefined,
VAEModelField: undefined,
ControlNetModelField: undefined,
};
diff --git a/invokeai/frontend/web/src/features/nodes/util/schema/buildFieldInputTemplate.ts b/invokeai/frontend/web/src/features/nodes/util/schema/buildFieldInputTemplate.ts
index 2b77274526..8478415cd1 100644
--- a/invokeai/frontend/web/src/features/nodes/util/schema/buildFieldInputTemplate.ts
+++ b/invokeai/frontend/web/src/features/nodes/util/schema/buildFieldInputTemplate.ts
@@ -17,6 +17,7 @@ import type {
SchedulerFieldInputTemplate,
SDXLMainModelFieldInputTemplate,
SDXLRefinerModelFieldInputTemplate,
+ SpandrelImageToImageModelFieldInputTemplate,
StatefulFieldType,
StatelessFieldInputTemplate,
StringFieldInputTemplate,
@@ -263,6 +264,17 @@ const buildT2IAdapterModelFieldInputTemplate: FieldInputTemplateBuilder<T2IAdapterModelFieldInputTemplate>
+const buildSpandrelImageToImageModelFieldInputTemplate: FieldInputTemplateBuilder<SpandrelImageToImageModelFieldInputTemplate> = ({
+  schemaObject,
+  baseField,
+  fieldType,
+}) => {
+ const template: SpandrelImageToImageModelFieldInputTemplate = {
+ ...baseField,
+ type: fieldType,
+ default: schemaObject.default ?? undefined,
+ };
+
+ return template;
+};
const buildBoardFieldInputTemplate: FieldInputTemplateBuilder = ({
schemaObject,
baseField,
@@ -377,6 +389,7 @@ export const TEMPLATE_BUILDER_MAP: Record<StatefulFieldType, FieldInputTemplateBuilder> = {
+  SpandrelImageToImageModelField: buildSpandrelImageToImageModelFieldInputTemplate,
diff --git a/invokeai/frontend/web/src/features/ui/hooks/usePanel.ts b/invokeai/frontend/web/src/features/ui/hooks/usePanel.ts
--- a/invokeai/frontend/web/src/features/ui/hooks/usePanel.ts
+++ b/invokeai/frontend/web/src/features/ui/hooks/usePanel.ts
export const usePanel = (arg: UsePanelOptions): UsePanelReturn => {
const panelHandleRef = useRef<ImperativePanelHandle>(null);
const [_minSize, _setMinSize] = useState(arg.unit === 'percentages' ? arg.minSize : 0);
+ const [_defaultSize, _setDefaultSize] = useState(arg.defaultSize ?? arg.minSize);
// If the units are pixels, we need to calculate the min size as a percentage of the available space,
// then resize the panel if it is too small.
@@ -113,18 +126,16 @@ export const usePanel = (arg: UsePanelOptions): UsePanelReturn => {
}
const minSizePct = getSizeAsPercentage(arg.minSize, arg.panelGroupRef, arg.panelGroupDirection);
-
_setMinSize(minSizePct);
- /**
- * TODO(psyche): Ideally, we only resize the panel if there is not enough room for it in the
- * panel group. This is a bit tricky, though. We'd need to track the last known panel size
- * and compare it to the new size before resizing. This introduces some complexity that I'd
- * rather not need to maintain.
- *
- * For now, we'll just resize the panel to the min size every time the panel group is resized.
- */
- if (!panelHandleRef.current.isCollapsed()) {
+ const defaultSizePct = getSizeAsPercentage(
+ arg.defaultSize ?? arg.minSize,
+ arg.panelGroupRef,
+ arg.panelGroupDirection
+ );
+ _setDefaultSize(defaultSizePct);
+
+ if (!panelHandleRef.current.isCollapsed() && panelHandleRef.current.getSize() < minSizePct && minSizePct > 0) {
panelHandleRef.current.resize(minSizePct);
}
});
@@ -133,8 +144,12 @@ export const usePanel = (arg: UsePanelOptions): UsePanelReturn => {
panelGroupHandleElements.forEach((el) => resizeObserver.observe(el));
// Resize the panel to the min size once on startup
- const minSizePct = getSizeAsPercentage(arg.minSize, arg.panelGroupRef, arg.panelGroupDirection);
- panelHandleRef.current?.resize(minSizePct);
+ const defaultSizePct = getSizeAsPercentage(
+ arg.defaultSize ?? arg.minSize,
+ arg.panelGroupRef,
+ arg.panelGroupDirection
+ );
+ panelHandleRef.current?.resize(defaultSizePct);
return () => {
resizeObserver.disconnect();
@@ -188,14 +203,14 @@ export const usePanel = (arg: UsePanelOptions): UsePanelReturn => {
const onDoubleClickHandle = useCallback(() => {
// If the panel is really super close to the min size, collapse it
- if (Math.abs((panelHandleRef.current?.getSize() ?? 0) - _minSize) < 0.01) {
+ if (Math.abs((panelHandleRef.current?.getSize() ?? 0) - _defaultSize) < 0.01) {
collapse();
return;
}
// Otherwise, resize to the min size
- panelHandleRef.current?.resize(_minSize);
- }, [_minSize, collapse]);
+ panelHandleRef.current?.resize(_defaultSize);
+ }, [_defaultSize, collapse]);
return {
ref: panelHandleRef,
@@ -209,6 +224,7 @@ export const usePanel = (arg: UsePanelOptions): UsePanelReturn => {
collapse,
resize,
onDoubleClickHandle,
+ defaultSize: _defaultSize,
};
};
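`usePanel`'s double-click-to-reset now pivots on the panel's default size rather than its minimum: at (nearly) the default size the panel collapses, otherwise it snaps back to the default. Reduced to its essentials:

```ts
declare const getSize: () => number; // current size, percent
declare const collapse: () => void;
declare const resize: (pct: number) => void;

const onDoubleClickHandle = (defaultSizePct: number) => {
  if (Math.abs(getSize() - defaultSizePct) < 0.01) {
    collapse(); // already at the default: toggle closed
    return;
  }
  resize(defaultSizePct); // otherwise snap back to the default
};
```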
diff --git a/invokeai/frontend/web/src/services/api/authToastMiddleware.ts b/invokeai/frontend/web/src/services/api/authToastMiddleware.ts
index 3a906a613b..fd8483e8b2 100644
--- a/invokeai/frontend/web/src/services/api/authToastMiddleware.ts
+++ b/invokeai/frontend/web/src/services/api/authToastMiddleware.ts
@@ -29,7 +29,7 @@ export const authToastMiddleware: Middleware = () => (next) => (action) => {
const endpointName = parsed.meta?.arg?.endpointName;
if (endpointName === 'getImageDTO') {
// do not show toast if problem is image access
- return;
+ return next(action);
}
const customMessage = parsed.payload.data.detail !== 'Forbidden' ? parsed.payload.data.detail : undefined;
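The `return next(action)` fix matters because a Redux middleware must pass every action along the chain; the old bare `return` swallowed rejected `getImageDTO` actions, hiding them from reducers and RTK Query's own cache bookkeeping. The general shape of a well-behaved "ignore this case" branch:

```ts
import type { Middleware } from '@reduxjs/toolkit';

declare const shouldIgnore: (action: unknown) => boolean;

const exampleMiddleware: Middleware = () => (next) => (action) => {
  if (shouldIgnore(action)) {
    return next(action); // skip *our* handling, but keep the chain intact
  }
  // ...custom handling, e.g. surfacing a toast...
  return next(action);
};
```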
diff --git a/invokeai/frontend/web/src/services/api/endpoints/boards.ts b/invokeai/frontend/web/src/services/api/endpoints/boards.ts
index 177aa0e340..55ebeab318 100644
--- a/invokeai/frontend/web/src/services/api/endpoints/boards.ts
+++ b/invokeai/frontend/web/src/services/api/endpoints/boards.ts
@@ -1,5 +1,11 @@
import { ASSETS_CATEGORIES, IMAGE_CATEGORIES } from 'features/gallery/store/types';
-import type { BoardDTO, ListBoardsArgs, OffsetPaginatedResults_ImageDTO_, UpdateBoardArg } from 'services/api/types';
+import type {
+ BoardDTO,
+ CreateBoardArg,
+ ListBoardsArgs,
+ OffsetPaginatedResults_ImageDTO_,
+ UpdateBoardArg,
+} from 'services/api/types';
import { getListImagesUrl } from 'services/api/util';
import type { ApiTagDescription } from '..';
@@ -87,11 +93,11 @@ export const boardsApi = api.injectEndpoints({
* Boards Mutations
*/
-    createBoard: build.mutation<BoardDTO, string>({
-      query: (board_name) => ({
+    createBoard: build.mutation<BoardDTO, CreateBoardArg>({
+ query: ({ board_name, is_private }) => ({
url: buildBoardsUrl(),
method: 'POST',
- params: { board_name },
+ params: { board_name, is_private },
}),
invalidatesTags: [{ type: 'Board', id: LIST_TAG }],
}),
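`createBoard`'s argument widened from a bare `board_name` string to an object so `is_private` can ride along as a query param. `CreateBoardArg` is imported above; its exact shape isn't shown in this diff, but it is presumably close to:

```ts
// Assumed shape of CreateBoardArg, inferred from the query destructuring.
type CreateBoardArg = { board_name: string; is_private?: boolean };

declare const createBoard: (arg: CreateBoardArg) => void;

// Call sites move from createBoard('My Board') to:
createBoard({ board_name: 'My Board', is_private: true });
```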
diff --git a/invokeai/frontend/web/src/services/api/endpoints/images.ts b/invokeai/frontend/web/src/services/api/endpoints/images.ts
index 9605bc29c4..2040021d6d 100644
--- a/invokeai/frontend/web/src/services/api/endpoints/images.ts
+++ b/invokeai/frontend/web/src/services/api/endpoints/images.ts
@@ -93,7 +93,6 @@ export const imagesApi = api.injectEndpoints({
const boardId = imageDTO.board_id ?? 'none';
return [
- { type: 'Image', id: imageDTO.image_name },
{
type: 'ImageList',
id: getListImagesUrl({
@@ -138,9 +137,6 @@ export const imagesApi = api.injectEndpoints({
id: boardId,
},
];
- for (const imageDTO of imageDTOs) {
- tags.push({ type: 'Image', id: imageDTO.image_name });
- }
return tags;
}
@@ -508,7 +504,6 @@ export const imagesApi = api.injectEndpoints({
export const {
useGetIntermediatesCountQuery,
useListImagesQuery,
- useGetImageDTOQuery,
useGetImageMetadataQuery,
useGetImageWorkflowQuery,
useLazyGetImageWorkflowQuery,
@@ -526,6 +521,10 @@ export const {
useBulkDownloadImagesMutation,
} = imagesApi;
+export const useGetImageDTOQuery = (...args: Parameters<typeof imagesApi.useGetImageDTOQuery>) => {
+ return imagesApi.useGetImageDTOQuery(...args);
+};
+
/**
* Imperative RTKQ helper to fetch an ImageDTO.
* @param image_name The name of the image to fetch
diff --git a/invokeai/frontend/web/src/services/api/hooks/modelsByType.ts b/invokeai/frontend/web/src/services/api/hooks/modelsByType.ts
index a42df8f600..0f1a81c26f 100644
--- a/invokeai/frontend/web/src/services/api/hooks/modelsByType.ts
+++ b/invokeai/frontend/web/src/services/api/hooks/modelsByType.ts
@@ -11,6 +11,7 @@ import {
isNonSDXLMainModelConfig,
isRefinerMainModelModelConfig,
isSDXLMainModelModelConfig,
+ isSpandrelImageToImageModelConfig,
isT2IAdapterModelConfig,
isTIModelConfig,
isVAEModelConfig,
@@ -39,6 +40,7 @@ export const useLoRAModels = buildModelsHook(isLoRAModelConfig);
export const useControlNetAndT2IAdapterModels = buildModelsHook(isControlNetOrT2IAdapterModelConfig);
export const useControlNetModels = buildModelsHook(isControlNetModelConfig);
export const useT2IAdapterModels = buildModelsHook(isT2IAdapterModelConfig);
+export const useSpandrelImageToImageModels = buildModelsHook(isSpandrelImageToImageModelConfig);
export const useIPAdapterModels = buildModelsHook(isIPAdapterModelConfig);
export const useEmbeddingModels = buildModelsHook(isTIModelConfig);
export const useVAEModels = buildModelsHook(isVAEModelConfig);
diff --git a/invokeai/frontend/web/src/services/api/schema.ts b/invokeai/frontend/web/src/services/api/schema.ts
index 07f1cdb34b..890de69e35 100644
--- a/invokeai/frontend/web/src/services/api/schema.ts
+++ b/invokeai/frontend/web/src/services/api/schema.ts
@@ -1058,6 +1058,11 @@ export type components = {
* @description Whether or not the board is archived.
*/
archived: boolean;
+ /**
+ * Is Private
+ * @description Whether the board is private.
+ */
+ is_private?: boolean | null;
/**
* Image Count
* @description The number of images in the board.
@@ -4784,7 +4789,7 @@ export type components = {
* @description The nodes in this graph
*/
nodes?: {
- [key: string]: components["schemas"]["AddInvocation"] | components["schemas"]["AlphaMaskToTensorInvocation"] | components["schemas"]["BlankImageInvocation"] | components["schemas"]["BlendLatentsInvocation"] | components["schemas"]["BooleanCollectionInvocation"] | components["schemas"]["BooleanInvocation"] | components["schemas"]["CLIPSkipInvocation"] | components["schemas"]["CV2InfillInvocation"] | components["schemas"]["CalculateImageTilesEvenSplitInvocation"] | components["schemas"]["CalculateImageTilesInvocation"] | components["schemas"]["CalculateImageTilesMinimumOverlapInvocation"] | components["schemas"]["CannyImageProcessorInvocation"] | components["schemas"]["CanvasPasteBackInvocation"] | components["schemas"]["CenterPadCropInvocation"] | components["schemas"]["CollectInvocation"] | components["schemas"]["ColorCorrectInvocation"] | components["schemas"]["ColorInvocation"] | components["schemas"]["ColorMapImageProcessorInvocation"] | components["schemas"]["CompelInvocation"] | components["schemas"]["ConditioningCollectionInvocation"] | components["schemas"]["ConditioningInvocation"] | components["schemas"]["ContentShuffleImageProcessorInvocation"] | components["schemas"]["ControlNetInvocation"] | components["schemas"]["CoreMetadataInvocation"] | components["schemas"]["CreateDenoiseMaskInvocation"] | components["schemas"]["CreateGradientMaskInvocation"] | components["schemas"]["CropLatentsCoreInvocation"] | components["schemas"]["CvInpaintInvocation"] | components["schemas"]["DWOpenposeImageProcessorInvocation"] | components["schemas"]["DenoiseLatentsInvocation"] | components["schemas"]["DepthAnythingImageProcessorInvocation"] | components["schemas"]["DivideInvocation"] | components["schemas"]["DynamicPromptInvocation"] | components["schemas"]["ESRGANInvocation"] | components["schemas"]["FaceIdentifierInvocation"] | components["schemas"]["FaceMaskInvocation"] | components["schemas"]["FaceOffInvocation"] | components["schemas"]["FloatCollectionInvocation"] | components["schemas"]["FloatInvocation"] | components["schemas"]["FloatLinearRangeInvocation"] | components["schemas"]["FloatMathInvocation"] | components["schemas"]["FloatToIntegerInvocation"] | components["schemas"]["FreeUInvocation"] | components["schemas"]["HedImageProcessorInvocation"] | components["schemas"]["HeuristicResizeInvocation"] | components["schemas"]["IPAdapterInvocation"] | components["schemas"]["IdealSizeInvocation"] | components["schemas"]["ImageBlurInvocation"] | components["schemas"]["ImageChannelInvocation"] | components["schemas"]["ImageChannelMultiplyInvocation"] | components["schemas"]["ImageChannelOffsetInvocation"] | components["schemas"]["ImageCollectionInvocation"] | components["schemas"]["ImageConvertInvocation"] | components["schemas"]["ImageCropInvocation"] | components["schemas"]["ImageHueAdjustmentInvocation"] | components["schemas"]["ImageInverseLerpInvocation"] | components["schemas"]["ImageInvocation"] | components["schemas"]["ImageLerpInvocation"] | components["schemas"]["ImageMaskToTensorInvocation"] | components["schemas"]["ImageMultiplyInvocation"] | components["schemas"]["ImageNSFWBlurInvocation"] | components["schemas"]["ImagePasteInvocation"] | components["schemas"]["ImageResizeInvocation"] | components["schemas"]["ImageScaleInvocation"] | components["schemas"]["ImageToLatentsInvocation"] | components["schemas"]["ImageWatermarkInvocation"] | components["schemas"]["InfillColorInvocation"] | components["schemas"]["InfillPatchMatchInvocation"] | 
components["schemas"]["InfillTileInvocation"] | components["schemas"]["IntegerCollectionInvocation"] | components["schemas"]["IntegerInvocation"] | components["schemas"]["IntegerMathInvocation"] | components["schemas"]["InvertTensorMaskInvocation"] | components["schemas"]["IterateInvocation"] | components["schemas"]["LaMaInfillInvocation"] | components["schemas"]["LatentsCollectionInvocation"] | components["schemas"]["LatentsInvocation"] | components["schemas"]["LatentsToImageInvocation"] | components["schemas"]["LeresImageProcessorInvocation"] | components["schemas"]["LineartAnimeImageProcessorInvocation"] | components["schemas"]["LineartImageProcessorInvocation"] | components["schemas"]["LoRACollectionLoader"] | components["schemas"]["LoRALoaderInvocation"] | components["schemas"]["LoRASelectorInvocation"] | components["schemas"]["MainModelLoaderInvocation"] | components["schemas"]["MaskCombineInvocation"] | components["schemas"]["MaskEdgeInvocation"] | components["schemas"]["MaskFromAlphaInvocation"] | components["schemas"]["MaskFromIDInvocation"] | components["schemas"]["MediapipeFaceProcessorInvocation"] | components["schemas"]["MergeMetadataInvocation"] | components["schemas"]["MergeTilesToImageInvocation"] | components["schemas"]["MetadataInvocation"] | components["schemas"]["MetadataItemInvocation"] | components["schemas"]["MidasDepthImageProcessorInvocation"] | components["schemas"]["MlsdImageProcessorInvocation"] | components["schemas"]["ModelIdentifierInvocation"] | components["schemas"]["MultiplyInvocation"] | components["schemas"]["NoiseInvocation"] | components["schemas"]["NormalbaeImageProcessorInvocation"] | components["schemas"]["PairTileImageInvocation"] | components["schemas"]["PidiImageProcessorInvocation"] | components["schemas"]["PromptsFromFileInvocation"] | components["schemas"]["RandomFloatInvocation"] | components["schemas"]["RandomIntInvocation"] | components["schemas"]["RandomRangeInvocation"] | components["schemas"]["RangeInvocation"] | components["schemas"]["RangeOfSizeInvocation"] | components["schemas"]["RectangleMaskInvocation"] | components["schemas"]["ResizeLatentsInvocation"] | components["schemas"]["RoundInvocation"] | components["schemas"]["SDXLCompelPromptInvocation"] | components["schemas"]["SDXLLoRACollectionLoader"] | components["schemas"]["SDXLLoRALoaderInvocation"] | components["schemas"]["SDXLModelLoaderInvocation"] | components["schemas"]["SDXLRefinerCompelPromptInvocation"] | components["schemas"]["SDXLRefinerModelLoaderInvocation"] | components["schemas"]["SaveImageInvocation"] | components["schemas"]["ScaleLatentsInvocation"] | components["schemas"]["SchedulerInvocation"] | components["schemas"]["SeamlessModeInvocation"] | components["schemas"]["SegmentAnythingProcessorInvocation"] | components["schemas"]["ShowImageInvocation"] | components["schemas"]["StepParamEasingInvocation"] | components["schemas"]["StringCollectionInvocation"] | components["schemas"]["StringInvocation"] | components["schemas"]["StringJoinInvocation"] | components["schemas"]["StringJoinThreeInvocation"] | components["schemas"]["StringReplaceInvocation"] | components["schemas"]["StringSplitInvocation"] | components["schemas"]["StringSplitNegInvocation"] | components["schemas"]["SubtractInvocation"] | components["schemas"]["T2IAdapterInvocation"] | components["schemas"]["TileResamplerProcessorInvocation"] | components["schemas"]["TileToPropertiesInvocation"] | components["schemas"]["TiledMultiDiffusionDenoiseLatents"] | components["schemas"]["UnsharpMaskInvocation"] | 
components["schemas"]["VAELoaderInvocation"] | components["schemas"]["ZoeDepthImageProcessorInvocation"];
+ [key: string]: components["schemas"]["AddInvocation"] | components["schemas"]["AlphaMaskToTensorInvocation"] | components["schemas"]["BlankImageInvocation"] | components["schemas"]["BlendLatentsInvocation"] | components["schemas"]["BooleanCollectionInvocation"] | components["schemas"]["BooleanInvocation"] | components["schemas"]["CLIPSkipInvocation"] | components["schemas"]["CV2InfillInvocation"] | components["schemas"]["CalculateImageTilesEvenSplitInvocation"] | components["schemas"]["CalculateImageTilesInvocation"] | components["schemas"]["CalculateImageTilesMinimumOverlapInvocation"] | components["schemas"]["CannyImageProcessorInvocation"] | components["schemas"]["CanvasPasteBackInvocation"] | components["schemas"]["CenterPadCropInvocation"] | components["schemas"]["CollectInvocation"] | components["schemas"]["ColorCorrectInvocation"] | components["schemas"]["ColorInvocation"] | components["schemas"]["ColorMapImageProcessorInvocation"] | components["schemas"]["CompelInvocation"] | components["schemas"]["ConditioningCollectionInvocation"] | components["schemas"]["ConditioningInvocation"] | components["schemas"]["ContentShuffleImageProcessorInvocation"] | components["schemas"]["ControlNetInvocation"] | components["schemas"]["CoreMetadataInvocation"] | components["schemas"]["CreateDenoiseMaskInvocation"] | components["schemas"]["CreateGradientMaskInvocation"] | components["schemas"]["CropLatentsCoreInvocation"] | components["schemas"]["CvInpaintInvocation"] | components["schemas"]["DWOpenposeImageProcessorInvocation"] | components["schemas"]["DenoiseLatentsInvocation"] | components["schemas"]["DepthAnythingImageProcessorInvocation"] | components["schemas"]["DivideInvocation"] | components["schemas"]["DynamicPromptInvocation"] | components["schemas"]["ESRGANInvocation"] | components["schemas"]["FaceIdentifierInvocation"] | components["schemas"]["FaceMaskInvocation"] | components["schemas"]["FaceOffInvocation"] | components["schemas"]["FloatCollectionInvocation"] | components["schemas"]["FloatInvocation"] | components["schemas"]["FloatLinearRangeInvocation"] | components["schemas"]["FloatMathInvocation"] | components["schemas"]["FloatToIntegerInvocation"] | components["schemas"]["FreeUInvocation"] | components["schemas"]["HedImageProcessorInvocation"] | components["schemas"]["HeuristicResizeInvocation"] | components["schemas"]["IPAdapterInvocation"] | components["schemas"]["IdealSizeInvocation"] | components["schemas"]["ImageBlurInvocation"] | components["schemas"]["ImageChannelInvocation"] | components["schemas"]["ImageChannelMultiplyInvocation"] | components["schemas"]["ImageChannelOffsetInvocation"] | components["schemas"]["ImageCollectionInvocation"] | components["schemas"]["ImageConvertInvocation"] | components["schemas"]["ImageCropInvocation"] | components["schemas"]["ImageHueAdjustmentInvocation"] | components["schemas"]["ImageInverseLerpInvocation"] | components["schemas"]["ImageInvocation"] | components["schemas"]["ImageLerpInvocation"] | components["schemas"]["ImageMaskToTensorInvocation"] | components["schemas"]["ImageMultiplyInvocation"] | components["schemas"]["ImageNSFWBlurInvocation"] | components["schemas"]["ImagePasteInvocation"] | components["schemas"]["ImageResizeInvocation"] | components["schemas"]["ImageScaleInvocation"] | components["schemas"]["ImageToLatentsInvocation"] | components["schemas"]["ImageWatermarkInvocation"] | components["schemas"]["InfillColorInvocation"] | components["schemas"]["InfillPatchMatchInvocation"] | 
components["schemas"]["InfillTileInvocation"] | components["schemas"]["IntegerCollectionInvocation"] | components["schemas"]["IntegerInvocation"] | components["schemas"]["IntegerMathInvocation"] | components["schemas"]["InvertTensorMaskInvocation"] | components["schemas"]["IterateInvocation"] | components["schemas"]["LaMaInfillInvocation"] | components["schemas"]["LatentsCollectionInvocation"] | components["schemas"]["LatentsInvocation"] | components["schemas"]["LatentsToImageInvocation"] | components["schemas"]["LeresImageProcessorInvocation"] | components["schemas"]["LineartAnimeImageProcessorInvocation"] | components["schemas"]["LineartImageProcessorInvocation"] | components["schemas"]["LoRACollectionLoader"] | components["schemas"]["LoRALoaderInvocation"] | components["schemas"]["LoRASelectorInvocation"] | components["schemas"]["MainModelLoaderInvocation"] | components["schemas"]["MaskCombineInvocation"] | components["schemas"]["MaskEdgeInvocation"] | components["schemas"]["MaskFromAlphaInvocation"] | components["schemas"]["MaskFromIDInvocation"] | components["schemas"]["MediapipeFaceProcessorInvocation"] | components["schemas"]["MergeMetadataInvocation"] | components["schemas"]["MergeTilesToImageInvocation"] | components["schemas"]["MetadataInvocation"] | components["schemas"]["MetadataItemInvocation"] | components["schemas"]["MidasDepthImageProcessorInvocation"] | components["schemas"]["MlsdImageProcessorInvocation"] | components["schemas"]["ModelIdentifierInvocation"] | components["schemas"]["MultiplyInvocation"] | components["schemas"]["NoiseInvocation"] | components["schemas"]["NormalbaeImageProcessorInvocation"] | components["schemas"]["PairTileImageInvocation"] | components["schemas"]["PidiImageProcessorInvocation"] | components["schemas"]["PromptsFromFileInvocation"] | components["schemas"]["RandomFloatInvocation"] | components["schemas"]["RandomIntInvocation"] | components["schemas"]["RandomRangeInvocation"] | components["schemas"]["RangeInvocation"] | components["schemas"]["RangeOfSizeInvocation"] | components["schemas"]["RectangleMaskInvocation"] | components["schemas"]["ResizeLatentsInvocation"] | components["schemas"]["RoundInvocation"] | components["schemas"]["SDXLCompelPromptInvocation"] | components["schemas"]["SDXLLoRACollectionLoader"] | components["schemas"]["SDXLLoRALoaderInvocation"] | components["schemas"]["SDXLModelLoaderInvocation"] | components["schemas"]["SDXLRefinerCompelPromptInvocation"] | components["schemas"]["SDXLRefinerModelLoaderInvocation"] | components["schemas"]["SaveImageInvocation"] | components["schemas"]["ScaleLatentsInvocation"] | components["schemas"]["SchedulerInvocation"] | components["schemas"]["SeamlessModeInvocation"] | components["schemas"]["SegmentAnythingProcessorInvocation"] | components["schemas"]["ShowImageInvocation"] | components["schemas"]["SpandrelImageToImageInvocation"] | components["schemas"]["StepParamEasingInvocation"] | components["schemas"]["StringCollectionInvocation"] | components["schemas"]["StringInvocation"] | components["schemas"]["StringJoinInvocation"] | components["schemas"]["StringJoinThreeInvocation"] | components["schemas"]["StringReplaceInvocation"] | components["schemas"]["StringSplitInvocation"] | components["schemas"]["StringSplitNegInvocation"] | components["schemas"]["SubtractInvocation"] | components["schemas"]["T2IAdapterInvocation"] | components["schemas"]["TileResamplerProcessorInvocation"] | components["schemas"]["TileToPropertiesInvocation"] | 
components["schemas"]["TiledMultiDiffusionDenoiseLatents"] | components["schemas"]["UnsharpMaskInvocation"] | components["schemas"]["VAELoaderInvocation"] | components["schemas"]["ZoeDepthImageProcessorInvocation"];
};
/**
* Edges
@@ -6561,6 +6566,12 @@ export type components = {
* @default false
*/
tiled?: boolean;
+ /**
+ * Tile Size
+ * @description The tile size for VAE tiling in pixels (image space). If set to 0, the default tile size for the model will be used.
+ * @default 0
+ */
+ tile_size?: number;
/**
* Fp32
* @description Whether or not to use full float32 precision
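
The new `tile_size` field complements the existing `tiled` flag; 0 keeps the model's default tile size. A small sketch of a graph node payload using it (which VAE node this hunk belongs to is not visible here, so the l2i decode node is an assumption):

```ts
import type { components } from 'services/api/schema';

// Illustrative node payload; assumes this hunk's component is the
// LatentsToImageInvocation (l2i) decode node.
type L2INode = components['schemas']['LatentsToImageInvocation'];

const l2i: Partial<L2INode> = {
  id: 'l2i',
  type: 'l2i',
  tiled: true,
  tile_size: 512, // image-space pixels; 0 would fall back to the model default
};
```
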
@@ -7149,7 +7160,7 @@ export type components = {
* Invocation
* @description The ID of the invocation
*/
- invocation: components["schemas"]["AddInvocation"] | components["schemas"]["AlphaMaskToTensorInvocation"] | components["schemas"]["BlankImageInvocation"] | components["schemas"]["BlendLatentsInvocation"] | components["schemas"]["BooleanCollectionInvocation"] | components["schemas"]["BooleanInvocation"] | components["schemas"]["CLIPSkipInvocation"] | components["schemas"]["CV2InfillInvocation"] | components["schemas"]["CalculateImageTilesEvenSplitInvocation"] | components["schemas"]["CalculateImageTilesInvocation"] | components["schemas"]["CalculateImageTilesMinimumOverlapInvocation"] | components["schemas"]["CannyImageProcessorInvocation"] | components["schemas"]["CanvasPasteBackInvocation"] | components["schemas"]["CenterPadCropInvocation"] | components["schemas"]["CollectInvocation"] | components["schemas"]["ColorCorrectInvocation"] | components["schemas"]["ColorInvocation"] | components["schemas"]["ColorMapImageProcessorInvocation"] | components["schemas"]["CompelInvocation"] | components["schemas"]["ConditioningCollectionInvocation"] | components["schemas"]["ConditioningInvocation"] | components["schemas"]["ContentShuffleImageProcessorInvocation"] | components["schemas"]["ControlNetInvocation"] | components["schemas"]["CoreMetadataInvocation"] | components["schemas"]["CreateDenoiseMaskInvocation"] | components["schemas"]["CreateGradientMaskInvocation"] | components["schemas"]["CropLatentsCoreInvocation"] | components["schemas"]["CvInpaintInvocation"] | components["schemas"]["DWOpenposeImageProcessorInvocation"] | components["schemas"]["DenoiseLatentsInvocation"] | components["schemas"]["DepthAnythingImageProcessorInvocation"] | components["schemas"]["DivideInvocation"] | components["schemas"]["DynamicPromptInvocation"] | components["schemas"]["ESRGANInvocation"] | components["schemas"]["FaceIdentifierInvocation"] | components["schemas"]["FaceMaskInvocation"] | components["schemas"]["FaceOffInvocation"] | components["schemas"]["FloatCollectionInvocation"] | components["schemas"]["FloatInvocation"] | components["schemas"]["FloatLinearRangeInvocation"] | components["schemas"]["FloatMathInvocation"] | components["schemas"]["FloatToIntegerInvocation"] | components["schemas"]["FreeUInvocation"] | components["schemas"]["HedImageProcessorInvocation"] | components["schemas"]["HeuristicResizeInvocation"] | components["schemas"]["IPAdapterInvocation"] | components["schemas"]["IdealSizeInvocation"] | components["schemas"]["ImageBlurInvocation"] | components["schemas"]["ImageChannelInvocation"] | components["schemas"]["ImageChannelMultiplyInvocation"] | components["schemas"]["ImageChannelOffsetInvocation"] | components["schemas"]["ImageCollectionInvocation"] | components["schemas"]["ImageConvertInvocation"] | components["schemas"]["ImageCropInvocation"] | components["schemas"]["ImageHueAdjustmentInvocation"] | components["schemas"]["ImageInverseLerpInvocation"] | components["schemas"]["ImageInvocation"] | components["schemas"]["ImageLerpInvocation"] | components["schemas"]["ImageMaskToTensorInvocation"] | components["schemas"]["ImageMultiplyInvocation"] | components["schemas"]["ImageNSFWBlurInvocation"] | components["schemas"]["ImagePasteInvocation"] | components["schemas"]["ImageResizeInvocation"] | components["schemas"]["ImageScaleInvocation"] | components["schemas"]["ImageToLatentsInvocation"] | components["schemas"]["ImageWatermarkInvocation"] | components["schemas"]["InfillColorInvocation"] | components["schemas"]["InfillPatchMatchInvocation"] | components["schemas"]["InfillTileInvocation"] 
| components["schemas"]["IntegerCollectionInvocation"] | components["schemas"]["IntegerInvocation"] | components["schemas"]["IntegerMathInvocation"] | components["schemas"]["InvertTensorMaskInvocation"] | components["schemas"]["IterateInvocation"] | components["schemas"]["LaMaInfillInvocation"] | components["schemas"]["LatentsCollectionInvocation"] | components["schemas"]["LatentsInvocation"] | components["schemas"]["LatentsToImageInvocation"] | components["schemas"]["LeresImageProcessorInvocation"] | components["schemas"]["LineartAnimeImageProcessorInvocation"] | components["schemas"]["LineartImageProcessorInvocation"] | components["schemas"]["LoRACollectionLoader"] | components["schemas"]["LoRALoaderInvocation"] | components["schemas"]["LoRASelectorInvocation"] | components["schemas"]["MainModelLoaderInvocation"] | components["schemas"]["MaskCombineInvocation"] | components["schemas"]["MaskEdgeInvocation"] | components["schemas"]["MaskFromAlphaInvocation"] | components["schemas"]["MaskFromIDInvocation"] | components["schemas"]["MediapipeFaceProcessorInvocation"] | components["schemas"]["MergeMetadataInvocation"] | components["schemas"]["MergeTilesToImageInvocation"] | components["schemas"]["MetadataInvocation"] | components["schemas"]["MetadataItemInvocation"] | components["schemas"]["MidasDepthImageProcessorInvocation"] | components["schemas"]["MlsdImageProcessorInvocation"] | components["schemas"]["ModelIdentifierInvocation"] | components["schemas"]["MultiplyInvocation"] | components["schemas"]["NoiseInvocation"] | components["schemas"]["NormalbaeImageProcessorInvocation"] | components["schemas"]["PairTileImageInvocation"] | components["schemas"]["PidiImageProcessorInvocation"] | components["schemas"]["PromptsFromFileInvocation"] | components["schemas"]["RandomFloatInvocation"] | components["schemas"]["RandomIntInvocation"] | components["schemas"]["RandomRangeInvocation"] | components["schemas"]["RangeInvocation"] | components["schemas"]["RangeOfSizeInvocation"] | components["schemas"]["RectangleMaskInvocation"] | components["schemas"]["ResizeLatentsInvocation"] | components["schemas"]["RoundInvocation"] | components["schemas"]["SDXLCompelPromptInvocation"] | components["schemas"]["SDXLLoRACollectionLoader"] | components["schemas"]["SDXLLoRALoaderInvocation"] | components["schemas"]["SDXLModelLoaderInvocation"] | components["schemas"]["SDXLRefinerCompelPromptInvocation"] | components["schemas"]["SDXLRefinerModelLoaderInvocation"] | components["schemas"]["SaveImageInvocation"] | components["schemas"]["ScaleLatentsInvocation"] | components["schemas"]["SchedulerInvocation"] | components["schemas"]["SeamlessModeInvocation"] | components["schemas"]["SegmentAnythingProcessorInvocation"] | components["schemas"]["ShowImageInvocation"] | components["schemas"]["StepParamEasingInvocation"] | components["schemas"]["StringCollectionInvocation"] | components["schemas"]["StringInvocation"] | components["schemas"]["StringJoinInvocation"] | components["schemas"]["StringJoinThreeInvocation"] | components["schemas"]["StringReplaceInvocation"] | components["schemas"]["StringSplitInvocation"] | components["schemas"]["StringSplitNegInvocation"] | components["schemas"]["SubtractInvocation"] | components["schemas"]["T2IAdapterInvocation"] | components["schemas"]["TileResamplerProcessorInvocation"] | components["schemas"]["TileToPropertiesInvocation"] | components["schemas"]["TiledMultiDiffusionDenoiseLatents"] | components["schemas"]["UnsharpMaskInvocation"] | components["schemas"]["VAELoaderInvocation"] | 
components["schemas"]["ZoeDepthImageProcessorInvocation"];
+ invocation: components["schemas"]["AddInvocation"] | components["schemas"]["AlphaMaskToTensorInvocation"] | components["schemas"]["BlankImageInvocation"] | components["schemas"]["BlendLatentsInvocation"] | components["schemas"]["BooleanCollectionInvocation"] | components["schemas"]["BooleanInvocation"] | components["schemas"]["CLIPSkipInvocation"] | components["schemas"]["CV2InfillInvocation"] | components["schemas"]["CalculateImageTilesEvenSplitInvocation"] | components["schemas"]["CalculateImageTilesInvocation"] | components["schemas"]["CalculateImageTilesMinimumOverlapInvocation"] | components["schemas"]["CannyImageProcessorInvocation"] | components["schemas"]["CanvasPasteBackInvocation"] | components["schemas"]["CenterPadCropInvocation"] | components["schemas"]["CollectInvocation"] | components["schemas"]["ColorCorrectInvocation"] | components["schemas"]["ColorInvocation"] | components["schemas"]["ColorMapImageProcessorInvocation"] | components["schemas"]["CompelInvocation"] | components["schemas"]["ConditioningCollectionInvocation"] | components["schemas"]["ConditioningInvocation"] | components["schemas"]["ContentShuffleImageProcessorInvocation"] | components["schemas"]["ControlNetInvocation"] | components["schemas"]["CoreMetadataInvocation"] | components["schemas"]["CreateDenoiseMaskInvocation"] | components["schemas"]["CreateGradientMaskInvocation"] | components["schemas"]["CropLatentsCoreInvocation"] | components["schemas"]["CvInpaintInvocation"] | components["schemas"]["DWOpenposeImageProcessorInvocation"] | components["schemas"]["DenoiseLatentsInvocation"] | components["schemas"]["DepthAnythingImageProcessorInvocation"] | components["schemas"]["DivideInvocation"] | components["schemas"]["DynamicPromptInvocation"] | components["schemas"]["ESRGANInvocation"] | components["schemas"]["FaceIdentifierInvocation"] | components["schemas"]["FaceMaskInvocation"] | components["schemas"]["FaceOffInvocation"] | components["schemas"]["FloatCollectionInvocation"] | components["schemas"]["FloatInvocation"] | components["schemas"]["FloatLinearRangeInvocation"] | components["schemas"]["FloatMathInvocation"] | components["schemas"]["FloatToIntegerInvocation"] | components["schemas"]["FreeUInvocation"] | components["schemas"]["HedImageProcessorInvocation"] | components["schemas"]["HeuristicResizeInvocation"] | components["schemas"]["IPAdapterInvocation"] | components["schemas"]["IdealSizeInvocation"] | components["schemas"]["ImageBlurInvocation"] | components["schemas"]["ImageChannelInvocation"] | components["schemas"]["ImageChannelMultiplyInvocation"] | components["schemas"]["ImageChannelOffsetInvocation"] | components["schemas"]["ImageCollectionInvocation"] | components["schemas"]["ImageConvertInvocation"] | components["schemas"]["ImageCropInvocation"] | components["schemas"]["ImageHueAdjustmentInvocation"] | components["schemas"]["ImageInverseLerpInvocation"] | components["schemas"]["ImageInvocation"] | components["schemas"]["ImageLerpInvocation"] | components["schemas"]["ImageMaskToTensorInvocation"] | components["schemas"]["ImageMultiplyInvocation"] | components["schemas"]["ImageNSFWBlurInvocation"] | components["schemas"]["ImagePasteInvocation"] | components["schemas"]["ImageResizeInvocation"] | components["schemas"]["ImageScaleInvocation"] | components["schemas"]["ImageToLatentsInvocation"] | components["schemas"]["ImageWatermarkInvocation"] | components["schemas"]["InfillColorInvocation"] | components["schemas"]["InfillPatchMatchInvocation"] | components["schemas"]["InfillTileInvocation"] 
| components["schemas"]["IntegerCollectionInvocation"] | components["schemas"]["IntegerInvocation"] | components["schemas"]["IntegerMathInvocation"] | components["schemas"]["InvertTensorMaskInvocation"] | components["schemas"]["IterateInvocation"] | components["schemas"]["LaMaInfillInvocation"] | components["schemas"]["LatentsCollectionInvocation"] | components["schemas"]["LatentsInvocation"] | components["schemas"]["LatentsToImageInvocation"] | components["schemas"]["LeresImageProcessorInvocation"] | components["schemas"]["LineartAnimeImageProcessorInvocation"] | components["schemas"]["LineartImageProcessorInvocation"] | components["schemas"]["LoRACollectionLoader"] | components["schemas"]["LoRALoaderInvocation"] | components["schemas"]["LoRASelectorInvocation"] | components["schemas"]["MainModelLoaderInvocation"] | components["schemas"]["MaskCombineInvocation"] | components["schemas"]["MaskEdgeInvocation"] | components["schemas"]["MaskFromAlphaInvocation"] | components["schemas"]["MaskFromIDInvocation"] | components["schemas"]["MediapipeFaceProcessorInvocation"] | components["schemas"]["MergeMetadataInvocation"] | components["schemas"]["MergeTilesToImageInvocation"] | components["schemas"]["MetadataInvocation"] | components["schemas"]["MetadataItemInvocation"] | components["schemas"]["MidasDepthImageProcessorInvocation"] | components["schemas"]["MlsdImageProcessorInvocation"] | components["schemas"]["ModelIdentifierInvocation"] | components["schemas"]["MultiplyInvocation"] | components["schemas"]["NoiseInvocation"] | components["schemas"]["NormalbaeImageProcessorInvocation"] | components["schemas"]["PairTileImageInvocation"] | components["schemas"]["PidiImageProcessorInvocation"] | components["schemas"]["PromptsFromFileInvocation"] | components["schemas"]["RandomFloatInvocation"] | components["schemas"]["RandomIntInvocation"] | components["schemas"]["RandomRangeInvocation"] | components["schemas"]["RangeInvocation"] | components["schemas"]["RangeOfSizeInvocation"] | components["schemas"]["RectangleMaskInvocation"] | components["schemas"]["ResizeLatentsInvocation"] | components["schemas"]["RoundInvocation"] | components["schemas"]["SDXLCompelPromptInvocation"] | components["schemas"]["SDXLLoRACollectionLoader"] | components["schemas"]["SDXLLoRALoaderInvocation"] | components["schemas"]["SDXLModelLoaderInvocation"] | components["schemas"]["SDXLRefinerCompelPromptInvocation"] | components["schemas"]["SDXLRefinerModelLoaderInvocation"] | components["schemas"]["SaveImageInvocation"] | components["schemas"]["ScaleLatentsInvocation"] | components["schemas"]["SchedulerInvocation"] | components["schemas"]["SeamlessModeInvocation"] | components["schemas"]["SegmentAnythingProcessorInvocation"] | components["schemas"]["ShowImageInvocation"] | components["schemas"]["SpandrelImageToImageInvocation"] | components["schemas"]["StepParamEasingInvocation"] | components["schemas"]["StringCollectionInvocation"] | components["schemas"]["StringInvocation"] | components["schemas"]["StringJoinInvocation"] | components["schemas"]["StringJoinThreeInvocation"] | components["schemas"]["StringReplaceInvocation"] | components["schemas"]["StringSplitInvocation"] | components["schemas"]["StringSplitNegInvocation"] | components["schemas"]["SubtractInvocation"] | components["schemas"]["T2IAdapterInvocation"] | components["schemas"]["TileResamplerProcessorInvocation"] | components["schemas"]["TileToPropertiesInvocation"] | components["schemas"]["TiledMultiDiffusionDenoiseLatents"] | 
components["schemas"]["UnsharpMaskInvocation"] | components["schemas"]["VAELoaderInvocation"] | components["schemas"]["ZoeDepthImageProcessorInvocation"];
/**
* Invocation Source Id
* @description The ID of the prepared invocation's source node
@@ -7195,7 +7206,7 @@ export type components = {
* Invocation
* @description The ID of the invocation
*/
- invocation: components["schemas"]["AddInvocation"] | components["schemas"]["AlphaMaskToTensorInvocation"] | components["schemas"]["BlankImageInvocation"] | components["schemas"]["BlendLatentsInvocation"] | components["schemas"]["BooleanCollectionInvocation"] | components["schemas"]["BooleanInvocation"] | components["schemas"]["CLIPSkipInvocation"] | components["schemas"]["CV2InfillInvocation"] | components["schemas"]["CalculateImageTilesEvenSplitInvocation"] | components["schemas"]["CalculateImageTilesInvocation"] | components["schemas"]["CalculateImageTilesMinimumOverlapInvocation"] | components["schemas"]["CannyImageProcessorInvocation"] | components["schemas"]["CanvasPasteBackInvocation"] | components["schemas"]["CenterPadCropInvocation"] | components["schemas"]["CollectInvocation"] | components["schemas"]["ColorCorrectInvocation"] | components["schemas"]["ColorInvocation"] | components["schemas"]["ColorMapImageProcessorInvocation"] | components["schemas"]["CompelInvocation"] | components["schemas"]["ConditioningCollectionInvocation"] | components["schemas"]["ConditioningInvocation"] | components["schemas"]["ContentShuffleImageProcessorInvocation"] | components["schemas"]["ControlNetInvocation"] | components["schemas"]["CoreMetadataInvocation"] | components["schemas"]["CreateDenoiseMaskInvocation"] | components["schemas"]["CreateGradientMaskInvocation"] | components["schemas"]["CropLatentsCoreInvocation"] | components["schemas"]["CvInpaintInvocation"] | components["schemas"]["DWOpenposeImageProcessorInvocation"] | components["schemas"]["DenoiseLatentsInvocation"] | components["schemas"]["DepthAnythingImageProcessorInvocation"] | components["schemas"]["DivideInvocation"] | components["schemas"]["DynamicPromptInvocation"] | components["schemas"]["ESRGANInvocation"] | components["schemas"]["FaceIdentifierInvocation"] | components["schemas"]["FaceMaskInvocation"] | components["schemas"]["FaceOffInvocation"] | components["schemas"]["FloatCollectionInvocation"] | components["schemas"]["FloatInvocation"] | components["schemas"]["FloatLinearRangeInvocation"] | components["schemas"]["FloatMathInvocation"] | components["schemas"]["FloatToIntegerInvocation"] | components["schemas"]["FreeUInvocation"] | components["schemas"]["HedImageProcessorInvocation"] | components["schemas"]["HeuristicResizeInvocation"] | components["schemas"]["IPAdapterInvocation"] | components["schemas"]["IdealSizeInvocation"] | components["schemas"]["ImageBlurInvocation"] | components["schemas"]["ImageChannelInvocation"] | components["schemas"]["ImageChannelMultiplyInvocation"] | components["schemas"]["ImageChannelOffsetInvocation"] | components["schemas"]["ImageCollectionInvocation"] | components["schemas"]["ImageConvertInvocation"] | components["schemas"]["ImageCropInvocation"] | components["schemas"]["ImageHueAdjustmentInvocation"] | components["schemas"]["ImageInverseLerpInvocation"] | components["schemas"]["ImageInvocation"] | components["schemas"]["ImageLerpInvocation"] | components["schemas"]["ImageMaskToTensorInvocation"] | components["schemas"]["ImageMultiplyInvocation"] | components["schemas"]["ImageNSFWBlurInvocation"] | components["schemas"]["ImagePasteInvocation"] | components["schemas"]["ImageResizeInvocation"] | components["schemas"]["ImageScaleInvocation"] | components["schemas"]["ImageToLatentsInvocation"] | components["schemas"]["ImageWatermarkInvocation"] | components["schemas"]["InfillColorInvocation"] | components["schemas"]["InfillPatchMatchInvocation"] | components["schemas"]["InfillTileInvocation"] 
| components["schemas"]["IntegerCollectionInvocation"] | components["schemas"]["IntegerInvocation"] | components["schemas"]["IntegerMathInvocation"] | components["schemas"]["InvertTensorMaskInvocation"] | components["schemas"]["IterateInvocation"] | components["schemas"]["LaMaInfillInvocation"] | components["schemas"]["LatentsCollectionInvocation"] | components["schemas"]["LatentsInvocation"] | components["schemas"]["LatentsToImageInvocation"] | components["schemas"]["LeresImageProcessorInvocation"] | components["schemas"]["LineartAnimeImageProcessorInvocation"] | components["schemas"]["LineartImageProcessorInvocation"] | components["schemas"]["LoRACollectionLoader"] | components["schemas"]["LoRALoaderInvocation"] | components["schemas"]["LoRASelectorInvocation"] | components["schemas"]["MainModelLoaderInvocation"] | components["schemas"]["MaskCombineInvocation"] | components["schemas"]["MaskEdgeInvocation"] | components["schemas"]["MaskFromAlphaInvocation"] | components["schemas"]["MaskFromIDInvocation"] | components["schemas"]["MediapipeFaceProcessorInvocation"] | components["schemas"]["MergeMetadataInvocation"] | components["schemas"]["MergeTilesToImageInvocation"] | components["schemas"]["MetadataInvocation"] | components["schemas"]["MetadataItemInvocation"] | components["schemas"]["MidasDepthImageProcessorInvocation"] | components["schemas"]["MlsdImageProcessorInvocation"] | components["schemas"]["ModelIdentifierInvocation"] | components["schemas"]["MultiplyInvocation"] | components["schemas"]["NoiseInvocation"] | components["schemas"]["NormalbaeImageProcessorInvocation"] | components["schemas"]["PairTileImageInvocation"] | components["schemas"]["PidiImageProcessorInvocation"] | components["schemas"]["PromptsFromFileInvocation"] | components["schemas"]["RandomFloatInvocation"] | components["schemas"]["RandomIntInvocation"] | components["schemas"]["RandomRangeInvocation"] | components["schemas"]["RangeInvocation"] | components["schemas"]["RangeOfSizeInvocation"] | components["schemas"]["RectangleMaskInvocation"] | components["schemas"]["ResizeLatentsInvocation"] | components["schemas"]["RoundInvocation"] | components["schemas"]["SDXLCompelPromptInvocation"] | components["schemas"]["SDXLLoRACollectionLoader"] | components["schemas"]["SDXLLoRALoaderInvocation"] | components["schemas"]["SDXLModelLoaderInvocation"] | components["schemas"]["SDXLRefinerCompelPromptInvocation"] | components["schemas"]["SDXLRefinerModelLoaderInvocation"] | components["schemas"]["SaveImageInvocation"] | components["schemas"]["ScaleLatentsInvocation"] | components["schemas"]["SchedulerInvocation"] | components["schemas"]["SeamlessModeInvocation"] | components["schemas"]["SegmentAnythingProcessorInvocation"] | components["schemas"]["ShowImageInvocation"] | components["schemas"]["StepParamEasingInvocation"] | components["schemas"]["StringCollectionInvocation"] | components["schemas"]["StringInvocation"] | components["schemas"]["StringJoinInvocation"] | components["schemas"]["StringJoinThreeInvocation"] | components["schemas"]["StringReplaceInvocation"] | components["schemas"]["StringSplitInvocation"] | components["schemas"]["StringSplitNegInvocation"] | components["schemas"]["SubtractInvocation"] | components["schemas"]["T2IAdapterInvocation"] | components["schemas"]["TileResamplerProcessorInvocation"] | components["schemas"]["TileToPropertiesInvocation"] | components["schemas"]["TiledMultiDiffusionDenoiseLatents"] | components["schemas"]["UnsharpMaskInvocation"] | components["schemas"]["VAELoaderInvocation"] | 
components["schemas"]["ZoeDepthImageProcessorInvocation"];
+ invocation: components["schemas"]["AddInvocation"] | components["schemas"]["AlphaMaskToTensorInvocation"] | components["schemas"]["BlankImageInvocation"] | components["schemas"]["BlendLatentsInvocation"] | components["schemas"]["BooleanCollectionInvocation"] | components["schemas"]["BooleanInvocation"] | components["schemas"]["CLIPSkipInvocation"] | components["schemas"]["CV2InfillInvocation"] | components["schemas"]["CalculateImageTilesEvenSplitInvocation"] | components["schemas"]["CalculateImageTilesInvocation"] | components["schemas"]["CalculateImageTilesMinimumOverlapInvocation"] | components["schemas"]["CannyImageProcessorInvocation"] | components["schemas"]["CanvasPasteBackInvocation"] | components["schemas"]["CenterPadCropInvocation"] | components["schemas"]["CollectInvocation"] | components["schemas"]["ColorCorrectInvocation"] | components["schemas"]["ColorInvocation"] | components["schemas"]["ColorMapImageProcessorInvocation"] | components["schemas"]["CompelInvocation"] | components["schemas"]["ConditioningCollectionInvocation"] | components["schemas"]["ConditioningInvocation"] | components["schemas"]["ContentShuffleImageProcessorInvocation"] | components["schemas"]["ControlNetInvocation"] | components["schemas"]["CoreMetadataInvocation"] | components["schemas"]["CreateDenoiseMaskInvocation"] | components["schemas"]["CreateGradientMaskInvocation"] | components["schemas"]["CropLatentsCoreInvocation"] | components["schemas"]["CvInpaintInvocation"] | components["schemas"]["DWOpenposeImageProcessorInvocation"] | components["schemas"]["DenoiseLatentsInvocation"] | components["schemas"]["DepthAnythingImageProcessorInvocation"] | components["schemas"]["DivideInvocation"] | components["schemas"]["DynamicPromptInvocation"] | components["schemas"]["ESRGANInvocation"] | components["schemas"]["FaceIdentifierInvocation"] | components["schemas"]["FaceMaskInvocation"] | components["schemas"]["FaceOffInvocation"] | components["schemas"]["FloatCollectionInvocation"] | components["schemas"]["FloatInvocation"] | components["schemas"]["FloatLinearRangeInvocation"] | components["schemas"]["FloatMathInvocation"] | components["schemas"]["FloatToIntegerInvocation"] | components["schemas"]["FreeUInvocation"] | components["schemas"]["HedImageProcessorInvocation"] | components["schemas"]["HeuristicResizeInvocation"] | components["schemas"]["IPAdapterInvocation"] | components["schemas"]["IdealSizeInvocation"] | components["schemas"]["ImageBlurInvocation"] | components["schemas"]["ImageChannelInvocation"] | components["schemas"]["ImageChannelMultiplyInvocation"] | components["schemas"]["ImageChannelOffsetInvocation"] | components["schemas"]["ImageCollectionInvocation"] | components["schemas"]["ImageConvertInvocation"] | components["schemas"]["ImageCropInvocation"] | components["schemas"]["ImageHueAdjustmentInvocation"] | components["schemas"]["ImageInverseLerpInvocation"] | components["schemas"]["ImageInvocation"] | components["schemas"]["ImageLerpInvocation"] | components["schemas"]["ImageMaskToTensorInvocation"] | components["schemas"]["ImageMultiplyInvocation"] | components["schemas"]["ImageNSFWBlurInvocation"] | components["schemas"]["ImagePasteInvocation"] | components["schemas"]["ImageResizeInvocation"] | components["schemas"]["ImageScaleInvocation"] | components["schemas"]["ImageToLatentsInvocation"] | components["schemas"]["ImageWatermarkInvocation"] | components["schemas"]["InfillColorInvocation"] | components["schemas"]["InfillPatchMatchInvocation"] | components["schemas"]["InfillTileInvocation"] 
| components["schemas"]["IntegerCollectionInvocation"] | components["schemas"]["IntegerInvocation"] | components["schemas"]["IntegerMathInvocation"] | components["schemas"]["InvertTensorMaskInvocation"] | components["schemas"]["IterateInvocation"] | components["schemas"]["LaMaInfillInvocation"] | components["schemas"]["LatentsCollectionInvocation"] | components["schemas"]["LatentsInvocation"] | components["schemas"]["LatentsToImageInvocation"] | components["schemas"]["LeresImageProcessorInvocation"] | components["schemas"]["LineartAnimeImageProcessorInvocation"] | components["schemas"]["LineartImageProcessorInvocation"] | components["schemas"]["LoRACollectionLoader"] | components["schemas"]["LoRALoaderInvocation"] | components["schemas"]["LoRASelectorInvocation"] | components["schemas"]["MainModelLoaderInvocation"] | components["schemas"]["MaskCombineInvocation"] | components["schemas"]["MaskEdgeInvocation"] | components["schemas"]["MaskFromAlphaInvocation"] | components["schemas"]["MaskFromIDInvocation"] | components["schemas"]["MediapipeFaceProcessorInvocation"] | components["schemas"]["MergeMetadataInvocation"] | components["schemas"]["MergeTilesToImageInvocation"] | components["schemas"]["MetadataInvocation"] | components["schemas"]["MetadataItemInvocation"] | components["schemas"]["MidasDepthImageProcessorInvocation"] | components["schemas"]["MlsdImageProcessorInvocation"] | components["schemas"]["ModelIdentifierInvocation"] | components["schemas"]["MultiplyInvocation"] | components["schemas"]["NoiseInvocation"] | components["schemas"]["NormalbaeImageProcessorInvocation"] | components["schemas"]["PairTileImageInvocation"] | components["schemas"]["PidiImageProcessorInvocation"] | components["schemas"]["PromptsFromFileInvocation"] | components["schemas"]["RandomFloatInvocation"] | components["schemas"]["RandomIntInvocation"] | components["schemas"]["RandomRangeInvocation"] | components["schemas"]["RangeInvocation"] | components["schemas"]["RangeOfSizeInvocation"] | components["schemas"]["RectangleMaskInvocation"] | components["schemas"]["ResizeLatentsInvocation"] | components["schemas"]["RoundInvocation"] | components["schemas"]["SDXLCompelPromptInvocation"] | components["schemas"]["SDXLLoRACollectionLoader"] | components["schemas"]["SDXLLoRALoaderInvocation"] | components["schemas"]["SDXLModelLoaderInvocation"] | components["schemas"]["SDXLRefinerCompelPromptInvocation"] | components["schemas"]["SDXLRefinerModelLoaderInvocation"] | components["schemas"]["SaveImageInvocation"] | components["schemas"]["ScaleLatentsInvocation"] | components["schemas"]["SchedulerInvocation"] | components["schemas"]["SeamlessModeInvocation"] | components["schemas"]["SegmentAnythingProcessorInvocation"] | components["schemas"]["ShowImageInvocation"] | components["schemas"]["SpandrelImageToImageInvocation"] | components["schemas"]["StepParamEasingInvocation"] | components["schemas"]["StringCollectionInvocation"] | components["schemas"]["StringInvocation"] | components["schemas"]["StringJoinInvocation"] | components["schemas"]["StringJoinThreeInvocation"] | components["schemas"]["StringReplaceInvocation"] | components["schemas"]["StringSplitInvocation"] | components["schemas"]["StringSplitNegInvocation"] | components["schemas"]["SubtractInvocation"] | components["schemas"]["T2IAdapterInvocation"] | components["schemas"]["TileResamplerProcessorInvocation"] | components["schemas"]["TileToPropertiesInvocation"] | components["schemas"]["TiledMultiDiffusionDenoiseLatents"] | 
components["schemas"]["UnsharpMaskInvocation"] | components["schemas"]["VAELoaderInvocation"] | components["schemas"]["ZoeDepthImageProcessorInvocation"];
/**
* Invocation Source Id
* @description The ID of the prepared invocation's source node
@@ -7258,7 +7269,7 @@ export type components = {
* Invocation
* @description The ID of the invocation
*/
- invocation: components["schemas"]["AddInvocation"] | components["schemas"]["AlphaMaskToTensorInvocation"] | components["schemas"]["BlankImageInvocation"] | components["schemas"]["BlendLatentsInvocation"] | components["schemas"]["BooleanCollectionInvocation"] | components["schemas"]["BooleanInvocation"] | components["schemas"]["CLIPSkipInvocation"] | components["schemas"]["CV2InfillInvocation"] | components["schemas"]["CalculateImageTilesEvenSplitInvocation"] | components["schemas"]["CalculateImageTilesInvocation"] | components["schemas"]["CalculateImageTilesMinimumOverlapInvocation"] | components["schemas"]["CannyImageProcessorInvocation"] | components["schemas"]["CanvasPasteBackInvocation"] | components["schemas"]["CenterPadCropInvocation"] | components["schemas"]["CollectInvocation"] | components["schemas"]["ColorCorrectInvocation"] | components["schemas"]["ColorInvocation"] | components["schemas"]["ColorMapImageProcessorInvocation"] | components["schemas"]["CompelInvocation"] | components["schemas"]["ConditioningCollectionInvocation"] | components["schemas"]["ConditioningInvocation"] | components["schemas"]["ContentShuffleImageProcessorInvocation"] | components["schemas"]["ControlNetInvocation"] | components["schemas"]["CoreMetadataInvocation"] | components["schemas"]["CreateDenoiseMaskInvocation"] | components["schemas"]["CreateGradientMaskInvocation"] | components["schemas"]["CropLatentsCoreInvocation"] | components["schemas"]["CvInpaintInvocation"] | components["schemas"]["DWOpenposeImageProcessorInvocation"] | components["schemas"]["DenoiseLatentsInvocation"] | components["schemas"]["DepthAnythingImageProcessorInvocation"] | components["schemas"]["DivideInvocation"] | components["schemas"]["DynamicPromptInvocation"] | components["schemas"]["ESRGANInvocation"] | components["schemas"]["FaceIdentifierInvocation"] | components["schemas"]["FaceMaskInvocation"] | components["schemas"]["FaceOffInvocation"] | components["schemas"]["FloatCollectionInvocation"] | components["schemas"]["FloatInvocation"] | components["schemas"]["FloatLinearRangeInvocation"] | components["schemas"]["FloatMathInvocation"] | components["schemas"]["FloatToIntegerInvocation"] | components["schemas"]["FreeUInvocation"] | components["schemas"]["HedImageProcessorInvocation"] | components["schemas"]["HeuristicResizeInvocation"] | components["schemas"]["IPAdapterInvocation"] | components["schemas"]["IdealSizeInvocation"] | components["schemas"]["ImageBlurInvocation"] | components["schemas"]["ImageChannelInvocation"] | components["schemas"]["ImageChannelMultiplyInvocation"] | components["schemas"]["ImageChannelOffsetInvocation"] | components["schemas"]["ImageCollectionInvocation"] | components["schemas"]["ImageConvertInvocation"] | components["schemas"]["ImageCropInvocation"] | components["schemas"]["ImageHueAdjustmentInvocation"] | components["schemas"]["ImageInverseLerpInvocation"] | components["schemas"]["ImageInvocation"] | components["schemas"]["ImageLerpInvocation"] | components["schemas"]["ImageMaskToTensorInvocation"] | components["schemas"]["ImageMultiplyInvocation"] | components["schemas"]["ImageNSFWBlurInvocation"] | components["schemas"]["ImagePasteInvocation"] | components["schemas"]["ImageResizeInvocation"] | components["schemas"]["ImageScaleInvocation"] | components["schemas"]["ImageToLatentsInvocation"] | components["schemas"]["ImageWatermarkInvocation"] | components["schemas"]["InfillColorInvocation"] | components["schemas"]["InfillPatchMatchInvocation"] | components["schemas"]["InfillTileInvocation"] 
| components["schemas"]["IntegerCollectionInvocation"] | components["schemas"]["IntegerInvocation"] | components["schemas"]["IntegerMathInvocation"] | components["schemas"]["InvertTensorMaskInvocation"] | components["schemas"]["IterateInvocation"] | components["schemas"]["LaMaInfillInvocation"] | components["schemas"]["LatentsCollectionInvocation"] | components["schemas"]["LatentsInvocation"] | components["schemas"]["LatentsToImageInvocation"] | components["schemas"]["LeresImageProcessorInvocation"] | components["schemas"]["LineartAnimeImageProcessorInvocation"] | components["schemas"]["LineartImageProcessorInvocation"] | components["schemas"]["LoRACollectionLoader"] | components["schemas"]["LoRALoaderInvocation"] | components["schemas"]["LoRASelectorInvocation"] | components["schemas"]["MainModelLoaderInvocation"] | components["schemas"]["MaskCombineInvocation"] | components["schemas"]["MaskEdgeInvocation"] | components["schemas"]["MaskFromAlphaInvocation"] | components["schemas"]["MaskFromIDInvocation"] | components["schemas"]["MediapipeFaceProcessorInvocation"] | components["schemas"]["MergeMetadataInvocation"] | components["schemas"]["MergeTilesToImageInvocation"] | components["schemas"]["MetadataInvocation"] | components["schemas"]["MetadataItemInvocation"] | components["schemas"]["MidasDepthImageProcessorInvocation"] | components["schemas"]["MlsdImageProcessorInvocation"] | components["schemas"]["ModelIdentifierInvocation"] | components["schemas"]["MultiplyInvocation"] | components["schemas"]["NoiseInvocation"] | components["schemas"]["NormalbaeImageProcessorInvocation"] | components["schemas"]["PairTileImageInvocation"] | components["schemas"]["PidiImageProcessorInvocation"] | components["schemas"]["PromptsFromFileInvocation"] | components["schemas"]["RandomFloatInvocation"] | components["schemas"]["RandomIntInvocation"] | components["schemas"]["RandomRangeInvocation"] | components["schemas"]["RangeInvocation"] | components["schemas"]["RangeOfSizeInvocation"] | components["schemas"]["RectangleMaskInvocation"] | components["schemas"]["ResizeLatentsInvocation"] | components["schemas"]["RoundInvocation"] | components["schemas"]["SDXLCompelPromptInvocation"] | components["schemas"]["SDXLLoRACollectionLoader"] | components["schemas"]["SDXLLoRALoaderInvocation"] | components["schemas"]["SDXLModelLoaderInvocation"] | components["schemas"]["SDXLRefinerCompelPromptInvocation"] | components["schemas"]["SDXLRefinerModelLoaderInvocation"] | components["schemas"]["SaveImageInvocation"] | components["schemas"]["ScaleLatentsInvocation"] | components["schemas"]["SchedulerInvocation"] | components["schemas"]["SeamlessModeInvocation"] | components["schemas"]["SegmentAnythingProcessorInvocation"] | components["schemas"]["ShowImageInvocation"] | components["schemas"]["StepParamEasingInvocation"] | components["schemas"]["StringCollectionInvocation"] | components["schemas"]["StringInvocation"] | components["schemas"]["StringJoinInvocation"] | components["schemas"]["StringJoinThreeInvocation"] | components["schemas"]["StringReplaceInvocation"] | components["schemas"]["StringSplitInvocation"] | components["schemas"]["StringSplitNegInvocation"] | components["schemas"]["SubtractInvocation"] | components["schemas"]["T2IAdapterInvocation"] | components["schemas"]["TileResamplerProcessorInvocation"] | components["schemas"]["TileToPropertiesInvocation"] | components["schemas"]["TiledMultiDiffusionDenoiseLatents"] | components["schemas"]["UnsharpMaskInvocation"] | components["schemas"]["VAELoaderInvocation"] | 
components["schemas"]["ZoeDepthImageProcessorInvocation"];
+ invocation: components["schemas"]["AddInvocation"] | components["schemas"]["AlphaMaskToTensorInvocation"] | components["schemas"]["BlankImageInvocation"] | components["schemas"]["BlendLatentsInvocation"] | components["schemas"]["BooleanCollectionInvocation"] | components["schemas"]["BooleanInvocation"] | components["schemas"]["CLIPSkipInvocation"] | components["schemas"]["CV2InfillInvocation"] | components["schemas"]["CalculateImageTilesEvenSplitInvocation"] | components["schemas"]["CalculateImageTilesInvocation"] | components["schemas"]["CalculateImageTilesMinimumOverlapInvocation"] | components["schemas"]["CannyImageProcessorInvocation"] | components["schemas"]["CanvasPasteBackInvocation"] | components["schemas"]["CenterPadCropInvocation"] | components["schemas"]["CollectInvocation"] | components["schemas"]["ColorCorrectInvocation"] | components["schemas"]["ColorInvocation"] | components["schemas"]["ColorMapImageProcessorInvocation"] | components["schemas"]["CompelInvocation"] | components["schemas"]["ConditioningCollectionInvocation"] | components["schemas"]["ConditioningInvocation"] | components["schemas"]["ContentShuffleImageProcessorInvocation"] | components["schemas"]["ControlNetInvocation"] | components["schemas"]["CoreMetadataInvocation"] | components["schemas"]["CreateDenoiseMaskInvocation"] | components["schemas"]["CreateGradientMaskInvocation"] | components["schemas"]["CropLatentsCoreInvocation"] | components["schemas"]["CvInpaintInvocation"] | components["schemas"]["DWOpenposeImageProcessorInvocation"] | components["schemas"]["DenoiseLatentsInvocation"] | components["schemas"]["DepthAnythingImageProcessorInvocation"] | components["schemas"]["DivideInvocation"] | components["schemas"]["DynamicPromptInvocation"] | components["schemas"]["ESRGANInvocation"] | components["schemas"]["FaceIdentifierInvocation"] | components["schemas"]["FaceMaskInvocation"] | components["schemas"]["FaceOffInvocation"] | components["schemas"]["FloatCollectionInvocation"] | components["schemas"]["FloatInvocation"] | components["schemas"]["FloatLinearRangeInvocation"] | components["schemas"]["FloatMathInvocation"] | components["schemas"]["FloatToIntegerInvocation"] | components["schemas"]["FreeUInvocation"] | components["schemas"]["HedImageProcessorInvocation"] | components["schemas"]["HeuristicResizeInvocation"] | components["schemas"]["IPAdapterInvocation"] | components["schemas"]["IdealSizeInvocation"] | components["schemas"]["ImageBlurInvocation"] | components["schemas"]["ImageChannelInvocation"] | components["schemas"]["ImageChannelMultiplyInvocation"] | components["schemas"]["ImageChannelOffsetInvocation"] | components["schemas"]["ImageCollectionInvocation"] | components["schemas"]["ImageConvertInvocation"] | components["schemas"]["ImageCropInvocation"] | components["schemas"]["ImageHueAdjustmentInvocation"] | components["schemas"]["ImageInverseLerpInvocation"] | components["schemas"]["ImageInvocation"] | components["schemas"]["ImageLerpInvocation"] | components["schemas"]["ImageMaskToTensorInvocation"] | components["schemas"]["ImageMultiplyInvocation"] | components["schemas"]["ImageNSFWBlurInvocation"] | components["schemas"]["ImagePasteInvocation"] | components["schemas"]["ImageResizeInvocation"] | components["schemas"]["ImageScaleInvocation"] | components["schemas"]["ImageToLatentsInvocation"] | components["schemas"]["ImageWatermarkInvocation"] | components["schemas"]["InfillColorInvocation"] | components["schemas"]["InfillPatchMatchInvocation"] | components["schemas"]["InfillTileInvocation"] | components["schemas"]["IntegerCollectionInvocation"] | components["schemas"]["IntegerInvocation"] | components["schemas"]["IntegerMathInvocation"] | components["schemas"]["InvertTensorMaskInvocation"] | components["schemas"]["IterateInvocation"] | components["schemas"]["LaMaInfillInvocation"] | components["schemas"]["LatentsCollectionInvocation"] | components["schemas"]["LatentsInvocation"] | components["schemas"]["LatentsToImageInvocation"] | components["schemas"]["LeresImageProcessorInvocation"] | components["schemas"]["LineartAnimeImageProcessorInvocation"] | components["schemas"]["LineartImageProcessorInvocation"] | components["schemas"]["LoRACollectionLoader"] | components["schemas"]["LoRALoaderInvocation"] | components["schemas"]["LoRASelectorInvocation"] | components["schemas"]["MainModelLoaderInvocation"] | components["schemas"]["MaskCombineInvocation"] | components["schemas"]["MaskEdgeInvocation"] | components["schemas"]["MaskFromAlphaInvocation"] | components["schemas"]["MaskFromIDInvocation"] | components["schemas"]["MediapipeFaceProcessorInvocation"] | components["schemas"]["MergeMetadataInvocation"] | components["schemas"]["MergeTilesToImageInvocation"] | components["schemas"]["MetadataInvocation"] | components["schemas"]["MetadataItemInvocation"] | components["schemas"]["MidasDepthImageProcessorInvocation"] | components["schemas"]["MlsdImageProcessorInvocation"] | components["schemas"]["ModelIdentifierInvocation"] | components["schemas"]["MultiplyInvocation"] | components["schemas"]["NoiseInvocation"] | components["schemas"]["NormalbaeImageProcessorInvocation"] | components["schemas"]["PairTileImageInvocation"] | components["schemas"]["PidiImageProcessorInvocation"] | components["schemas"]["PromptsFromFileInvocation"] | components["schemas"]["RandomFloatInvocation"] | components["schemas"]["RandomIntInvocation"] | components["schemas"]["RandomRangeInvocation"] | components["schemas"]["RangeInvocation"] | components["schemas"]["RangeOfSizeInvocation"] | components["schemas"]["RectangleMaskInvocation"] | components["schemas"]["ResizeLatentsInvocation"] | components["schemas"]["RoundInvocation"] | components["schemas"]["SDXLCompelPromptInvocation"] | components["schemas"]["SDXLLoRACollectionLoader"] | components["schemas"]["SDXLLoRALoaderInvocation"] | components["schemas"]["SDXLModelLoaderInvocation"] | components["schemas"]["SDXLRefinerCompelPromptInvocation"] | components["schemas"]["SDXLRefinerModelLoaderInvocation"] | components["schemas"]["SaveImageInvocation"] | components["schemas"]["ScaleLatentsInvocation"] | components["schemas"]["SchedulerInvocation"] | components["schemas"]["SeamlessModeInvocation"] | components["schemas"]["SegmentAnythingProcessorInvocation"] | components["schemas"]["ShowImageInvocation"] | components["schemas"]["SpandrelImageToImageInvocation"] | components["schemas"]["StepParamEasingInvocation"] | components["schemas"]["StringCollectionInvocation"] | components["schemas"]["StringInvocation"] | components["schemas"]["StringJoinInvocation"] | components["schemas"]["StringJoinThreeInvocation"] | components["schemas"]["StringReplaceInvocation"] | components["schemas"]["StringSplitInvocation"] | components["schemas"]["StringSplitNegInvocation"] | components["schemas"]["SubtractInvocation"] | components["schemas"]["T2IAdapterInvocation"] | components["schemas"]["TileResamplerProcessorInvocation"] | components["schemas"]["TileToPropertiesInvocation"] | components["schemas"]["TiledMultiDiffusionDenoiseLatents"] | components["schemas"]["UnsharpMaskInvocation"] | components["schemas"]["VAELoaderInvocation"] | components["schemas"]["ZoeDepthImageProcessorInvocation"];
/**
* Invocation Source Id
* @description The ID of the prepared invocation's source node
@@ -7293,145 +7304,146 @@ export type components = {
project_id: string | null;
};
InvocationOutputMap: {
- img_channel_offset: components["schemas"]["ImageOutput"];
- create_denoise_mask: components["schemas"]["DenoiseMaskOutput"];
- round_float: components["schemas"]["FloatOutput"];
- img_blur: components["schemas"]["ImageOutput"];
- sdxl_model_loader: components["schemas"]["SDXLModelLoaderOutput"];
- lblend: components["schemas"]["LatentsOutput"];
- img_lerp: components["schemas"]["ImageOutput"];
- zoe_depth_image_processor: components["schemas"]["ImageOutput"];
- range_of_size: components["schemas"]["IntegerCollectionOutput"];
- freeu: components["schemas"]["UNetOutput"];
- alpha_mask_to_tensor: components["schemas"]["MaskOutput"];
- latents: components["schemas"]["LatentsOutput"];
- controlnet: components["schemas"]["ControlOutput"];
- canny_image_processor: components["schemas"]["ImageOutput"];
- color_map_image_processor: components["schemas"]["ImageOutput"];
- l2i: components["schemas"]["ImageOutput"];
- string_join_three: components["schemas"]["StringOutput"];
- crop_latents: components["schemas"]["LatentsOutput"];
- img_resize: components["schemas"]["ImageOutput"];
- img_crop: components["schemas"]["ImageOutput"];
- face_off: components["schemas"]["FaceOffOutput"];
- seamless: components["schemas"]["SeamlessModeOutput"];
- string: components["schemas"]["StringOutput"];
- img_watermark: components["schemas"]["ImageOutput"];
- img_paste: components["schemas"]["ImageOutput"];
- ideal_size: components["schemas"]["IdealSizeOutput"];
- img_scale: components["schemas"]["ImageOutput"];
- mul: components["schemas"]["IntegerOutput"];
- conditioning: components["schemas"]["ConditioningOutput"];
- add: components["schemas"]["IntegerOutput"];
- conditioning_collection: components["schemas"]["ConditioningCollectionOutput"];
- create_gradient_mask: components["schemas"]["GradientMaskOutput"];
- float: components["schemas"]["FloatOutput"];
- sdxl_refiner_compel_prompt: components["schemas"]["ConditioningOutput"];
- string_collection: components["schemas"]["StringCollectionOutput"];
- image_mask_to_tensor: components["schemas"]["MaskOutput"];
- infill_cv2: components["schemas"]["ImageOutput"];
- boolean_collection: components["schemas"]["BooleanCollectionOutput"];
- color: components["schemas"]["ColorOutput"];
- lresize: components["schemas"]["LatentsOutput"];
- image: components["schemas"]["ImageOutput"];
- esrgan: components["schemas"]["ImageOutput"];
- image_collection: components["schemas"]["ImageCollectionOutput"];
- metadata: components["schemas"]["MetadataOutput"];
- scheduler: components["schemas"]["SchedulerOutput"];
- img_pad_crop: components["schemas"]["ImageOutput"];
- integer: components["schemas"]["IntegerOutput"];
- boolean: components["schemas"]["BooleanOutput"];
- float_math: components["schemas"]["FloatOutput"];
- sdxl_compel_prompt: components["schemas"]["ConditioningOutput"];
- tile_image_processor: components["schemas"]["ImageOutput"];
- img_channel_multiply: components["schemas"]["ImageOutput"];
- tomask: components["schemas"]["ImageOutput"];
- sub: components["schemas"]["IntegerOutput"];
- img_mul: components["schemas"]["ImageOutput"];
- string_split: components["schemas"]["String2Output"];
- model_identifier: components["schemas"]["ModelIdentifierOutput"];
- pair_tile_image: components["schemas"]["PairTileImageOutput"];
- heuristic_resize: components["schemas"]["ImageOutput"];
- infill_lama: components["schemas"]["ImageOutput"];
- lineart_anime_image_processor: components["schemas"]["ImageOutput"];
- float_range: components["schemas"]["FloatCollectionOutput"];
- infill_patchmatch: components["schemas"]["ImageOutput"];
- mediapipe_face_processor: components["schemas"]["ImageOutput"];
- cv_inpaint: components["schemas"]["ImageOutput"];
- denoise_latents: components["schemas"]["LatentsOutput"];
- range: components["schemas"]["IntegerCollectionOutput"];
- content_shuffle_image_processor: components["schemas"]["ImageOutput"];
- compel: components["schemas"]["ConditioningOutput"];
- leres_image_processor: components["schemas"]["ImageOutput"];
- prompt_from_file: components["schemas"]["StringCollectionOutput"];
noise: components["schemas"]["NoiseOutput"];
- i2l: components["schemas"]["LatentsOutput"];
- pidi_image_processor: components["schemas"]["ImageOutput"];
- merge_metadata: components["schemas"]["MetadataOutput"];
- img_conv: components["schemas"]["ImageOutput"];
- ip_adapter: components["schemas"]["IPAdapterOutput"];
- div: components["schemas"]["IntegerOutput"];
- dw_openpose_image_processor: components["schemas"]["ImageOutput"];
- rand_int: components["schemas"]["IntegerOutput"];
- calculate_image_tiles_even_split: components["schemas"]["CalculateImageTilesOutput"];
- show_image: components["schemas"]["ImageOutput"];
- calculate_image_tiles_min_overlap: components["schemas"]["CalculateImageTilesOutput"];
- float_to_int: components["schemas"]["IntegerOutput"];
- infill_rgba: components["schemas"]["ImageOutput"];
- face_mask_detection: components["schemas"]["FaceMaskOutput"];
- sdxl_lora_collection_loader: components["schemas"]["SDXLLoRALoaderOutput"];
- iterate: components["schemas"]["IterateInvocationOutput"];
- rectangle_mask: components["schemas"]["MaskOutput"];
- merge_tiles_to_image: components["schemas"]["ImageOutput"];
- lineart_image_processor: components["schemas"]["ImageOutput"];
- lora_collection_loader: components["schemas"]["LoRALoaderOutput"];
- lora_selector: components["schemas"]["LoRASelectorOutput"];
- midas_depth_image_processor: components["schemas"]["ImageOutput"];
- face_identifier: components["schemas"]["ImageOutput"];
- save_image: components["schemas"]["ImageOutput"];
- unsharp_mask: components["schemas"]["ImageOutput"];
- string_split_neg: components["schemas"]["StringPosNegOutput"];
- img_chan: components["schemas"]["ImageOutput"];
- float_collection: components["schemas"]["FloatCollectionOutput"];
- lora_loader: components["schemas"]["LoRALoaderOutput"];
- random_range: components["schemas"]["IntegerCollectionOutput"];
- invert_tensor_mask: components["schemas"]["MaskOutput"];
- clip_skip: components["schemas"]["CLIPSkipInvocationOutput"];
- sdxl_lora_loader: components["schemas"]["SDXLLoRALoaderOutput"];
- string_join: components["schemas"]["StringOutput"];
- img_hue_adjust: components["schemas"]["ImageOutput"];
- img_nsfw: components["schemas"]["ImageOutput"];
- tile_to_properties: components["schemas"]["TileToPropertiesOutput"];
- calculate_image_tiles: components["schemas"]["CalculateImageTilesOutput"];
- mask_combine: components["schemas"]["ImageOutput"];
- main_model_loader: components["schemas"]["ModelLoaderOutput"];
- img_ilerp: components["schemas"]["ImageOutput"];
- string_replace: components["schemas"]["StringOutput"];
- sdxl_refiner_model_loader: components["schemas"]["SDXLRefinerModelLoaderOutput"];
- hed_image_processor: components["schemas"]["ImageOutput"];
- latents_collection: components["schemas"]["LatentsCollectionOutput"];
- infill_tile: components["schemas"]["ImageOutput"];
- vae_loader: components["schemas"]["VAEOutput"];
- depth_anything_image_processor: components["schemas"]["ImageOutput"];
- lscale: components["schemas"]["LatentsOutput"];
- t2i_adapter: components["schemas"]["T2IAdapterOutput"];
- metadata_item: components["schemas"]["MetadataItemOutput"];
- blank_image: components["schemas"]["ImageOutput"];
- tiled_multi_diffusion_denoise_latents: components["schemas"]["LatentsOutput"];
- canvas_paste_back: components["schemas"]["ImageOutput"];
- rand_float: components["schemas"]["FloatOutput"];
- mask_from_id: components["schemas"]["ImageOutput"];
- segment_anything_processor: components["schemas"]["ImageOutput"];
- normalbae_image_processor: components["schemas"]["ImageOutput"];
- mask_edge: components["schemas"]["ImageOutput"];
- dynamic_prompt: components["schemas"]["StringCollectionOutput"];
+ pair_tile_image: components["schemas"]["PairTileImageOutput"];
color_correct: components["schemas"]["ImageOutput"];
- integer_math: components["schemas"]["IntegerOutput"];
- core_metadata: components["schemas"]["MetadataOutput"];
+ tile_to_properties: components["schemas"]["TileToPropertiesOutput"];
+ float_to_int: components["schemas"]["IntegerOutput"];
+ rand_int: components["schemas"]["IntegerOutput"];
+ latents: components["schemas"]["LatentsOutput"];
+ canvas_paste_back: components["schemas"]["ImageOutput"];
+ controlnet: components["schemas"]["ControlOutput"];
+ img_blur: components["schemas"]["ImageOutput"];
+ freeu: components["schemas"]["UNetOutput"];
+ string: components["schemas"]["StringOutput"];
+ boolean_collection: components["schemas"]["BooleanCollectionOutput"];
+ boolean: components["schemas"]["BooleanOutput"];
+ lresize: components["schemas"]["LatentsOutput"];
+ mask_from_id: components["schemas"]["ImageOutput"];
+ string_split: components["schemas"]["String2Output"];
+ create_gradient_mask: components["schemas"]["GradientMaskOutput"];
+ seamless: components["schemas"]["SeamlessModeOutput"];
+ merge_tiles_to_image: components["schemas"]["ImageOutput"];
+ canny_image_processor: components["schemas"]["ImageOutput"];
+ crop_latents: components["schemas"]["LatentsOutput"];
+ mask_edge: components["schemas"]["ImageOutput"];
+ img_paste: components["schemas"]["ImageOutput"];
+ zoe_depth_image_processor: components["schemas"]["ImageOutput"];
+ img_nsfw: components["schemas"]["ImageOutput"];
+ img_mul: components["schemas"]["ImageOutput"];
+ spandrel_image_to_image: components["schemas"]["ImageOutput"];
+ tomask: components["schemas"]["ImageOutput"];
+ color_map_image_processor: components["schemas"]["ImageOutput"];
+ sdxl_refiner_model_loader: components["schemas"]["SDXLRefinerModelLoaderOutput"];
+ infill_rgba: components["schemas"]["ImageOutput"];
+ model_identifier: components["schemas"]["ModelIdentifierOutput"];
+ metadata: components["schemas"]["MetadataOutput"];
+ img_ilerp: components["schemas"]["ImageOutput"];
+ add: components["schemas"]["IntegerOutput"];
+ img_channel_multiply: components["schemas"]["ImageOutput"];
+ integer: components["schemas"]["IntegerOutput"];
integer_collection: components["schemas"]["IntegerCollectionOutput"];
+ img_crop: components["schemas"]["ImageOutput"];
+ show_image: components["schemas"]["ImageOutput"];
+ string_replace: components["schemas"]["StringOutput"];
+ prompt_from_file: components["schemas"]["StringCollectionOutput"];
+ string_join: components["schemas"]["StringOutput"];
+ metadata_item: components["schemas"]["MetadataItemOutput"];
+ lblend: components["schemas"]["LatentsOutput"];
+ t2i_adapter: components["schemas"]["T2IAdapterOutput"];
+ infill_cv2: components["schemas"]["ImageOutput"];
+ sdxl_refiner_compel_prompt: components["schemas"]["ConditioningOutput"];
+ core_metadata: components["schemas"]["MetadataOutput"];
+ invert_tensor_mask: components["schemas"]["MaskOutput"];
+ integer_math: components["schemas"]["IntegerOutput"];
+ content_shuffle_image_processor: components["schemas"]["ImageOutput"];
+ dynamic_prompt: components["schemas"]["StringCollectionOutput"];
+ lineart_anime_image_processor: components["schemas"]["ImageOutput"];
+ string_split_neg: components["schemas"]["StringPosNegOutput"];
+ round_float: components["schemas"]["FloatOutput"];
+ rand_float: components["schemas"]["FloatOutput"];
+ lora_collection_loader: components["schemas"]["LoRALoaderOutput"];
+ midas_depth_image_processor: components["schemas"]["ImageOutput"];
+ random_range: components["schemas"]["IntegerCollectionOutput"];
+ sub: components["schemas"]["IntegerOutput"];
+ infill_lama: components["schemas"]["ImageOutput"];
+ float_range: components["schemas"]["FloatCollectionOutput"];
+ save_image: components["schemas"]["ImageOutput"];
+ iterate: components["schemas"]["IterateInvocationOutput"];
+ hed_image_processor: components["schemas"]["ImageOutput"];
+ dw_openpose_image_processor: components["schemas"]["ImageOutput"];
+ scheduler: components["schemas"]["SchedulerOutput"];
+ string_collection: components["schemas"]["StringCollectionOutput"];
+ lineart_image_processor: components["schemas"]["ImageOutput"];
+ image: components["schemas"]["ImageOutput"];
+ merge_metadata: components["schemas"]["MetadataOutput"];
+ image_collection: components["schemas"]["ImageCollectionOutput"];
+ img_watermark: components["schemas"]["ImageOutput"];
+ pidi_image_processor: components["schemas"]["ImageOutput"];
+ sdxl_lora_collection_loader: components["schemas"]["SDXLLoRALoaderOutput"];
collect: components["schemas"]["CollectInvocationOutput"];
- mlsd_image_processor: components["schemas"]["ImageOutput"];
+ lora_selector: components["schemas"]["LoRASelectorOutput"];
+ tile_image_processor: components["schemas"]["ImageOutput"];
+ denoise_latents: components["schemas"]["LatentsOutput"];
+ sdxl_lora_loader: components["schemas"]["SDXLLoRALoaderOutput"];
+ img_conv: components["schemas"]["ImageOutput"];
+ face_mask_detection: components["schemas"]["FaceMaskOutput"];
+ infill_patchmatch: components["schemas"]["ImageOutput"];
+ rectangle_mask: components["schemas"]["MaskOutput"];
+ img_lerp: components["schemas"]["ImageOutput"];
+ tiled_multi_diffusion_denoise_latents: components["schemas"]["LatentsOutput"];
+ face_identifier: components["schemas"]["ImageOutput"];
step_param_easing: components["schemas"]["FloatCollectionOutput"];
+ unsharp_mask: components["schemas"]["ImageOutput"];
+ mediapipe_face_processor: components["schemas"]["ImageOutput"];
+ calculate_image_tiles: components["schemas"]["CalculateImageTilesOutput"];
+ lscale: components["schemas"]["LatentsOutput"];
+ color: components["schemas"]["ColorOutput"];
+ lora_loader: components["schemas"]["LoRALoaderOutput"];
+ sdxl_compel_prompt: components["schemas"]["ConditioningOutput"];
+ calculate_image_tiles_even_split: components["schemas"]["CalculateImageTilesOutput"];
+ conditioning: components["schemas"]["ConditioningOutput"];
+ float_collection: components["schemas"]["FloatCollectionOutput"];
+ img_pad_crop: components["schemas"]["ImageOutput"];
+ mul: components["schemas"]["IntegerOutput"];
+ heuristic_resize: components["schemas"]["ImageOutput"];
+ create_denoise_mask: components["schemas"]["DenoiseMaskOutput"];
+ img_chan: components["schemas"]["ImageOutput"];
+ leres_image_processor: components["schemas"]["ImageOutput"];
+ infill_tile: components["schemas"]["ImageOutput"];
+ i2l: components["schemas"]["LatentsOutput"];
+ string_join_three: components["schemas"]["StringOutput"];
+ ip_adapter: components["schemas"]["IPAdapterOutput"];
+ main_model_loader: components["schemas"]["ModelLoaderOutput"];
+ float: components["schemas"]["FloatOutput"];
+ compel: components["schemas"]["ConditioningOutput"];
+ range_of_size: components["schemas"]["IntegerCollectionOutput"];
+ normalbae_image_processor: components["schemas"]["ImageOutput"];
+ ideal_size: components["schemas"]["IdealSizeOutput"];
+ conditioning_collection: components["schemas"]["ConditioningCollectionOutput"];
+ depth_anything_image_processor: components["schemas"]["ImageOutput"];
+ mask_combine: components["schemas"]["ImageOutput"];
+ l2i: components["schemas"]["ImageOutput"];
+ latents_collection: components["schemas"]["LatentsCollectionOutput"];
+ float_math: components["schemas"]["FloatOutput"];
+ img_hue_adjust: components["schemas"]["ImageOutput"];
+ img_scale: components["schemas"]["ImageOutput"];
+ esrgan: components["schemas"]["ImageOutput"];
+ vae_loader: components["schemas"]["VAEOutput"];
+ sdxl_model_loader: components["schemas"]["SDXLModelLoaderOutput"];
+ clip_skip: components["schemas"]["CLIPSkipInvocationOutput"];
+ segment_anything_processor: components["schemas"]["ImageOutput"];
+ img_resize: components["schemas"]["ImageOutput"];
+ range: components["schemas"]["IntegerCollectionOutput"];
+ calculate_image_tiles_min_overlap: components["schemas"]["CalculateImageTilesOutput"];
+ mlsd_image_processor: components["schemas"]["ImageOutput"];
+ img_channel_offset: components["schemas"]["ImageOutput"];
+ cv_inpaint: components["schemas"]["ImageOutput"];
+ image_mask_to_tensor: components["schemas"]["MaskOutput"];
+ blank_image: components["schemas"]["ImageOutput"];
+ div: components["schemas"]["IntegerOutput"];
+ alpha_mask_to_tensor: components["schemas"]["MaskOutput"];
+ face_off: components["schemas"]["FaceOffOutput"];
};
/**
* InvocationStartedEvent
@@ -7467,7 +7479,7 @@ export type components = {
* Invocation
* @description The ID of the invocation
*/
- invocation: components["schemas"]["AddInvocation"] | components["schemas"]["AlphaMaskToTensorInvocation"] | components["schemas"]["BlankImageInvocation"] | components["schemas"]["BlendLatentsInvocation"] | components["schemas"]["BooleanCollectionInvocation"] | components["schemas"]["BooleanInvocation"] | components["schemas"]["CLIPSkipInvocation"] | components["schemas"]["CV2InfillInvocation"] | components["schemas"]["CalculateImageTilesEvenSplitInvocation"] | components["schemas"]["CalculateImageTilesInvocation"] | components["schemas"]["CalculateImageTilesMinimumOverlapInvocation"] | components["schemas"]["CannyImageProcessorInvocation"] | components["schemas"]["CanvasPasteBackInvocation"] | components["schemas"]["CenterPadCropInvocation"] | components["schemas"]["CollectInvocation"] | components["schemas"]["ColorCorrectInvocation"] | components["schemas"]["ColorInvocation"] | components["schemas"]["ColorMapImageProcessorInvocation"] | components["schemas"]["CompelInvocation"] | components["schemas"]["ConditioningCollectionInvocation"] | components["schemas"]["ConditioningInvocation"] | components["schemas"]["ContentShuffleImageProcessorInvocation"] | components["schemas"]["ControlNetInvocation"] | components["schemas"]["CoreMetadataInvocation"] | components["schemas"]["CreateDenoiseMaskInvocation"] | components["schemas"]["CreateGradientMaskInvocation"] | components["schemas"]["CropLatentsCoreInvocation"] | components["schemas"]["CvInpaintInvocation"] | components["schemas"]["DWOpenposeImageProcessorInvocation"] | components["schemas"]["DenoiseLatentsInvocation"] | components["schemas"]["DepthAnythingImageProcessorInvocation"] | components["schemas"]["DivideInvocation"] | components["schemas"]["DynamicPromptInvocation"] | components["schemas"]["ESRGANInvocation"] | components["schemas"]["FaceIdentifierInvocation"] | components["schemas"]["FaceMaskInvocation"] | components["schemas"]["FaceOffInvocation"] | components["schemas"]["FloatCollectionInvocation"] | components["schemas"]["FloatInvocation"] | components["schemas"]["FloatLinearRangeInvocation"] | components["schemas"]["FloatMathInvocation"] | components["schemas"]["FloatToIntegerInvocation"] | components["schemas"]["FreeUInvocation"] | components["schemas"]["HedImageProcessorInvocation"] | components["schemas"]["HeuristicResizeInvocation"] | components["schemas"]["IPAdapterInvocation"] | components["schemas"]["IdealSizeInvocation"] | components["schemas"]["ImageBlurInvocation"] | components["schemas"]["ImageChannelInvocation"] | components["schemas"]["ImageChannelMultiplyInvocation"] | components["schemas"]["ImageChannelOffsetInvocation"] | components["schemas"]["ImageCollectionInvocation"] | components["schemas"]["ImageConvertInvocation"] | components["schemas"]["ImageCropInvocation"] | components["schemas"]["ImageHueAdjustmentInvocation"] | components["schemas"]["ImageInverseLerpInvocation"] | components["schemas"]["ImageInvocation"] | components["schemas"]["ImageLerpInvocation"] | components["schemas"]["ImageMaskToTensorInvocation"] | components["schemas"]["ImageMultiplyInvocation"] | components["schemas"]["ImageNSFWBlurInvocation"] | components["schemas"]["ImagePasteInvocation"] | components["schemas"]["ImageResizeInvocation"] | components["schemas"]["ImageScaleInvocation"] | components["schemas"]["ImageToLatentsInvocation"] | components["schemas"]["ImageWatermarkInvocation"] | components["schemas"]["InfillColorInvocation"] | components["schemas"]["InfillPatchMatchInvocation"] | components["schemas"]["InfillTileInvocation"] | components["schemas"]["IntegerCollectionInvocation"] | components["schemas"]["IntegerInvocation"] | components["schemas"]["IntegerMathInvocation"] | components["schemas"]["InvertTensorMaskInvocation"] | components["schemas"]["IterateInvocation"] | components["schemas"]["LaMaInfillInvocation"] | components["schemas"]["LatentsCollectionInvocation"] | components["schemas"]["LatentsInvocation"] | components["schemas"]["LatentsToImageInvocation"] | components["schemas"]["LeresImageProcessorInvocation"] | components["schemas"]["LineartAnimeImageProcessorInvocation"] | components["schemas"]["LineartImageProcessorInvocation"] | components["schemas"]["LoRACollectionLoader"] | components["schemas"]["LoRALoaderInvocation"] | components["schemas"]["LoRASelectorInvocation"] | components["schemas"]["MainModelLoaderInvocation"] | components["schemas"]["MaskCombineInvocation"] | components["schemas"]["MaskEdgeInvocation"] | components["schemas"]["MaskFromAlphaInvocation"] | components["schemas"]["MaskFromIDInvocation"] | components["schemas"]["MediapipeFaceProcessorInvocation"] | components["schemas"]["MergeMetadataInvocation"] | components["schemas"]["MergeTilesToImageInvocation"] | components["schemas"]["MetadataInvocation"] | components["schemas"]["MetadataItemInvocation"] | components["schemas"]["MidasDepthImageProcessorInvocation"] | components["schemas"]["MlsdImageProcessorInvocation"] | components["schemas"]["ModelIdentifierInvocation"] | components["schemas"]["MultiplyInvocation"] | components["schemas"]["NoiseInvocation"] | components["schemas"]["NormalbaeImageProcessorInvocation"] | components["schemas"]["PairTileImageInvocation"] | components["schemas"]["PidiImageProcessorInvocation"] | components["schemas"]["PromptsFromFileInvocation"] | components["schemas"]["RandomFloatInvocation"] | components["schemas"]["RandomIntInvocation"] | components["schemas"]["RandomRangeInvocation"] | components["schemas"]["RangeInvocation"] | components["schemas"]["RangeOfSizeInvocation"] | components["schemas"]["RectangleMaskInvocation"] | components["schemas"]["ResizeLatentsInvocation"] | components["schemas"]["RoundInvocation"] | components["schemas"]["SDXLCompelPromptInvocation"] | components["schemas"]["SDXLLoRACollectionLoader"] | components["schemas"]["SDXLLoRALoaderInvocation"] | components["schemas"]["SDXLModelLoaderInvocation"] | components["schemas"]["SDXLRefinerCompelPromptInvocation"] | components["schemas"]["SDXLRefinerModelLoaderInvocation"] | components["schemas"]["SaveImageInvocation"] | components["schemas"]["ScaleLatentsInvocation"] | components["schemas"]["SchedulerInvocation"] | components["schemas"]["SeamlessModeInvocation"] | components["schemas"]["SegmentAnythingProcessorInvocation"] | components["schemas"]["ShowImageInvocation"] | components["schemas"]["StepParamEasingInvocation"] | components["schemas"]["StringCollectionInvocation"] | components["schemas"]["StringInvocation"] | components["schemas"]["StringJoinInvocation"] | components["schemas"]["StringJoinThreeInvocation"] | components["schemas"]["StringReplaceInvocation"] | components["schemas"]["StringSplitInvocation"] | components["schemas"]["StringSplitNegInvocation"] | components["schemas"]["SubtractInvocation"] | components["schemas"]["T2IAdapterInvocation"] | components["schemas"]["TileResamplerProcessorInvocation"] | components["schemas"]["TileToPropertiesInvocation"] | components["schemas"]["TiledMultiDiffusionDenoiseLatents"] | components["schemas"]["UnsharpMaskInvocation"] | components["schemas"]["VAELoaderInvocation"] | components["schemas"]["ZoeDepthImageProcessorInvocation"];
+ invocation: components["schemas"]["AddInvocation"] | components["schemas"]["AlphaMaskToTensorInvocation"] | components["schemas"]["BlankImageInvocation"] | components["schemas"]["BlendLatentsInvocation"] | components["schemas"]["BooleanCollectionInvocation"] | components["schemas"]["BooleanInvocation"] | components["schemas"]["CLIPSkipInvocation"] | components["schemas"]["CV2InfillInvocation"] | components["schemas"]["CalculateImageTilesEvenSplitInvocation"] | components["schemas"]["CalculateImageTilesInvocation"] | components["schemas"]["CalculateImageTilesMinimumOverlapInvocation"] | components["schemas"]["CannyImageProcessorInvocation"] | components["schemas"]["CanvasPasteBackInvocation"] | components["schemas"]["CenterPadCropInvocation"] | components["schemas"]["CollectInvocation"] | components["schemas"]["ColorCorrectInvocation"] | components["schemas"]["ColorInvocation"] | components["schemas"]["ColorMapImageProcessorInvocation"] | components["schemas"]["CompelInvocation"] | components["schemas"]["ConditioningCollectionInvocation"] | components["schemas"]["ConditioningInvocation"] | components["schemas"]["ContentShuffleImageProcessorInvocation"] | components["schemas"]["ControlNetInvocation"] | components["schemas"]["CoreMetadataInvocation"] | components["schemas"]["CreateDenoiseMaskInvocation"] | components["schemas"]["CreateGradientMaskInvocation"] | components["schemas"]["CropLatentsCoreInvocation"] | components["schemas"]["CvInpaintInvocation"] | components["schemas"]["DWOpenposeImageProcessorInvocation"] | components["schemas"]["DenoiseLatentsInvocation"] | components["schemas"]["DepthAnythingImageProcessorInvocation"] | components["schemas"]["DivideInvocation"] | components["schemas"]["DynamicPromptInvocation"] | components["schemas"]["ESRGANInvocation"] | components["schemas"]["FaceIdentifierInvocation"] | components["schemas"]["FaceMaskInvocation"] | components["schemas"]["FaceOffInvocation"] | components["schemas"]["FloatCollectionInvocation"] | components["schemas"]["FloatInvocation"] | components["schemas"]["FloatLinearRangeInvocation"] | components["schemas"]["FloatMathInvocation"] | components["schemas"]["FloatToIntegerInvocation"] | components["schemas"]["FreeUInvocation"] | components["schemas"]["HedImageProcessorInvocation"] | components["schemas"]["HeuristicResizeInvocation"] | components["schemas"]["IPAdapterInvocation"] | components["schemas"]["IdealSizeInvocation"] | components["schemas"]["ImageBlurInvocation"] | components["schemas"]["ImageChannelInvocation"] | components["schemas"]["ImageChannelMultiplyInvocation"] | components["schemas"]["ImageChannelOffsetInvocation"] | components["schemas"]["ImageCollectionInvocation"] | components["schemas"]["ImageConvertInvocation"] | components["schemas"]["ImageCropInvocation"] | components["schemas"]["ImageHueAdjustmentInvocation"] | components["schemas"]["ImageInverseLerpInvocation"] | components["schemas"]["ImageInvocation"] | components["schemas"]["ImageLerpInvocation"] | components["schemas"]["ImageMaskToTensorInvocation"] | components["schemas"]["ImageMultiplyInvocation"] | components["schemas"]["ImageNSFWBlurInvocation"] | components["schemas"]["ImagePasteInvocation"] | components["schemas"]["ImageResizeInvocation"] | components["schemas"]["ImageScaleInvocation"] | components["schemas"]["ImageToLatentsInvocation"] | components["schemas"]["ImageWatermarkInvocation"] | components["schemas"]["InfillColorInvocation"] | components["schemas"]["InfillPatchMatchInvocation"] | components["schemas"]["InfillTileInvocation"] | components["schemas"]["IntegerCollectionInvocation"] | components["schemas"]["IntegerInvocation"] | components["schemas"]["IntegerMathInvocation"] | components["schemas"]["InvertTensorMaskInvocation"] | components["schemas"]["IterateInvocation"] | components["schemas"]["LaMaInfillInvocation"] | components["schemas"]["LatentsCollectionInvocation"] | components["schemas"]["LatentsInvocation"] | components["schemas"]["LatentsToImageInvocation"] | components["schemas"]["LeresImageProcessorInvocation"] | components["schemas"]["LineartAnimeImageProcessorInvocation"] | components["schemas"]["LineartImageProcessorInvocation"] | components["schemas"]["LoRACollectionLoader"] | components["schemas"]["LoRALoaderInvocation"] | components["schemas"]["LoRASelectorInvocation"] | components["schemas"]["MainModelLoaderInvocation"] | components["schemas"]["MaskCombineInvocation"] | components["schemas"]["MaskEdgeInvocation"] | components["schemas"]["MaskFromAlphaInvocation"] | components["schemas"]["MaskFromIDInvocation"] | components["schemas"]["MediapipeFaceProcessorInvocation"] | components["schemas"]["MergeMetadataInvocation"] | components["schemas"]["MergeTilesToImageInvocation"] | components["schemas"]["MetadataInvocation"] | components["schemas"]["MetadataItemInvocation"] | components["schemas"]["MidasDepthImageProcessorInvocation"] | components["schemas"]["MlsdImageProcessorInvocation"] | components["schemas"]["ModelIdentifierInvocation"] | components["schemas"]["MultiplyInvocation"] | components["schemas"]["NoiseInvocation"] | components["schemas"]["NormalbaeImageProcessorInvocation"] | components["schemas"]["PairTileImageInvocation"] | components["schemas"]["PidiImageProcessorInvocation"] | components["schemas"]["PromptsFromFileInvocation"] | components["schemas"]["RandomFloatInvocation"] | components["schemas"]["RandomIntInvocation"] | components["schemas"]["RandomRangeInvocation"] | components["schemas"]["RangeInvocation"] | components["schemas"]["RangeOfSizeInvocation"] | components["schemas"]["RectangleMaskInvocation"] | components["schemas"]["ResizeLatentsInvocation"] | components["schemas"]["RoundInvocation"] | components["schemas"]["SDXLCompelPromptInvocation"] | components["schemas"]["SDXLLoRACollectionLoader"] | components["schemas"]["SDXLLoRALoaderInvocation"] | components["schemas"]["SDXLModelLoaderInvocation"] | components["schemas"]["SDXLRefinerCompelPromptInvocation"] | components["schemas"]["SDXLRefinerModelLoaderInvocation"] | components["schemas"]["SaveImageInvocation"] | components["schemas"]["ScaleLatentsInvocation"] | components["schemas"]["SchedulerInvocation"] | components["schemas"]["SeamlessModeInvocation"] | components["schemas"]["SegmentAnythingProcessorInvocation"] | components["schemas"]["ShowImageInvocation"] | components["schemas"]["SpandrelImageToImageInvocation"] | components["schemas"]["StepParamEasingInvocation"] | components["schemas"]["StringCollectionInvocation"] | components["schemas"]["StringInvocation"] | components["schemas"]["StringJoinInvocation"] | components["schemas"]["StringJoinThreeInvocation"] | components["schemas"]["StringReplaceInvocation"] | components["schemas"]["StringSplitInvocation"] | components["schemas"]["StringSplitNegInvocation"] | components["schemas"]["SubtractInvocation"] | components["schemas"]["T2IAdapterInvocation"] | components["schemas"]["TileResamplerProcessorInvocation"] | components["schemas"]["TileToPropertiesInvocation"] | components["schemas"]["TiledMultiDiffusionDenoiseLatents"] | components["schemas"]["UnsharpMaskInvocation"] | components["schemas"]["VAELoaderInvocation"] | components["schemas"]["ZoeDepthImageProcessorInvocation"];
/**
* Invocation Source Id
* @description The ID of the prepared invocation's source node
@@ -7769,6 +7781,12 @@ export type components = {
* @default false
*/
tiled?: boolean;
+ /**
+ * Tile Size
+ * @description The tile size for VAE tiling in pixels (image space). If set to 0, the default tile size for the model will be used.
+ * @default 0
+ */
+ tile_size?: number;
/**
* Fp32
* @description Whether or not to use full float32 precision
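
The new `tile_size` field exposes the VAE tile size on the tiled encode/decode invocations (they share the `tiled`/`fp32` fields shown here). A minimal sketch of a node payload that opts in, assuming the `l2i` (latents-to-image) node type listed in `InvocationOutputMap` above; the node id and surrounding graph are hypothetical:

```python
# Minimal sketch: an "l2i" node dict that enables VAE tiling with an explicit
# tile size. The node id is hypothetical; field names come from the schema above.
l2i_node = {
    "id": "l2i_1",      # hypothetical node id
    "type": "l2i",
    "tiled": True,      # enable VAE tiling
    "tile_size": 512,   # new in this diff; 0 falls back to the model default
    "fp32": False,
}
```
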
@@ -9585,7 +9603,7 @@ export type components = {
* Config Out
* @description After successful installation, this will hold the configuration object.
*/
- config_out?: (components["schemas"]["MainDiffusersConfig"] | components["schemas"]["MainCheckpointConfig"] | components["schemas"]["VAEDiffusersConfig"] | components["schemas"]["VAECheckpointConfig"] | components["schemas"]["ControlNetDiffusersConfig"] | components["schemas"]["ControlNetCheckpointConfig"] | components["schemas"]["LoRALyCORISConfig"] | components["schemas"]["LoRADiffusersConfig"] | components["schemas"]["TextualInversionFileConfig"] | components["schemas"]["TextualInversionFolderConfig"] | components["schemas"]["IPAdapterInvokeAIConfig"] | components["schemas"]["IPAdapterCheckpointConfig"] | components["schemas"]["T2IAdapterConfig"] | components["schemas"]["CLIPVisionDiffusersConfig"]) | null;
+ config_out?: (components["schemas"]["MainDiffusersConfig"] | components["schemas"]["MainCheckpointConfig"] | components["schemas"]["VAEDiffusersConfig"] | components["schemas"]["VAECheckpointConfig"] | components["schemas"]["ControlNetDiffusersConfig"] | components["schemas"]["ControlNetCheckpointConfig"] | components["schemas"]["LoRALyCORISConfig"] | components["schemas"]["LoRADiffusersConfig"] | components["schemas"]["TextualInversionFileConfig"] | components["schemas"]["TextualInversionFolderConfig"] | components["schemas"]["IPAdapterInvokeAIConfig"] | components["schemas"]["IPAdapterCheckpointConfig"] | components["schemas"]["T2IAdapterConfig"] | components["schemas"]["SpandrelImageToImageConfig"] | components["schemas"]["CLIPVisionDiffusersConfig"]) | null;
/**
* Inplace
* @description Leave model in its current location; otherwise install under models directory
@@ -9671,7 +9689,7 @@ export type components = {
* Config
* @description The model's config
*/
- config: components["schemas"]["MainDiffusersConfig"] | components["schemas"]["MainCheckpointConfig"] | components["schemas"]["VAEDiffusersConfig"] | components["schemas"]["VAECheckpointConfig"] | components["schemas"]["ControlNetDiffusersConfig"] | components["schemas"]["ControlNetCheckpointConfig"] | components["schemas"]["LoRALyCORISConfig"] | components["schemas"]["LoRADiffusersConfig"] | components["schemas"]["TextualInversionFileConfig"] | components["schemas"]["TextualInversionFolderConfig"] | components["schemas"]["IPAdapterInvokeAIConfig"] | components["schemas"]["IPAdapterCheckpointConfig"] | components["schemas"]["T2IAdapterConfig"] | components["schemas"]["CLIPVisionDiffusersConfig"];
+ config: components["schemas"]["MainDiffusersConfig"] | components["schemas"]["MainCheckpointConfig"] | components["schemas"]["VAEDiffusersConfig"] | components["schemas"]["VAECheckpointConfig"] | components["schemas"]["ControlNetDiffusersConfig"] | components["schemas"]["ControlNetCheckpointConfig"] | components["schemas"]["LoRALyCORISConfig"] | components["schemas"]["LoRADiffusersConfig"] | components["schemas"]["TextualInversionFileConfig"] | components["schemas"]["TextualInversionFolderConfig"] | components["schemas"]["IPAdapterInvokeAIConfig"] | components["schemas"]["IPAdapterCheckpointConfig"] | components["schemas"]["T2IAdapterConfig"] | components["schemas"]["SpandrelImageToImageConfig"] | components["schemas"]["CLIPVisionDiffusersConfig"];
/**
* @description The submodel type, if any
* @default null
@@ -9692,7 +9710,7 @@ export type components = {
* Config
* @description The model's config
*/
- config: components["schemas"]["MainDiffusersConfig"] | components["schemas"]["MainCheckpointConfig"] | components["schemas"]["VAEDiffusersConfig"] | components["schemas"]["VAECheckpointConfig"] | components["schemas"]["ControlNetDiffusersConfig"] | components["schemas"]["ControlNetCheckpointConfig"] | components["schemas"]["LoRALyCORISConfig"] | components["schemas"]["LoRADiffusersConfig"] | components["schemas"]["TextualInversionFileConfig"] | components["schemas"]["TextualInversionFolderConfig"] | components["schemas"]["IPAdapterInvokeAIConfig"] | components["schemas"]["IPAdapterCheckpointConfig"] | components["schemas"]["T2IAdapterConfig"] | components["schemas"]["CLIPVisionDiffusersConfig"];
+ config: components["schemas"]["MainDiffusersConfig"] | components["schemas"]["MainCheckpointConfig"] | components["schemas"]["VAEDiffusersConfig"] | components["schemas"]["VAECheckpointConfig"] | components["schemas"]["ControlNetDiffusersConfig"] | components["schemas"]["ControlNetCheckpointConfig"] | components["schemas"]["LoRALyCORISConfig"] | components["schemas"]["LoRADiffusersConfig"] | components["schemas"]["TextualInversionFileConfig"] | components["schemas"]["TextualInversionFolderConfig"] | components["schemas"]["IPAdapterInvokeAIConfig"] | components["schemas"]["IPAdapterCheckpointConfig"] | components["schemas"]["T2IAdapterConfig"] | components["schemas"]["SpandrelImageToImageConfig"] | components["schemas"]["CLIPVisionDiffusersConfig"];
/**
* @description The submodel type, if any
* @default null
@@ -9791,7 +9809,7 @@ export type components = {
* @description Model type.
* @enum {string}
*/
- ModelType: "onnx" | "main" | "vae" | "lora" | "controlnet" | "embedding" | "ip_adapter" | "clip_vision" | "t2i_adapter";
+ ModelType: "onnx" | "main" | "vae" | "lora" | "controlnet" | "embedding" | "ip_adapter" | "clip_vision" | "t2i_adapter" | "spandrel_image_to_image";
/**
* ModelVariantType
* @description Variant type.
@@ -9804,7 +9822,7 @@ export type components = {
*/
ModelsList: {
/** Models */
- models: (components["schemas"]["MainDiffusersConfig"] | components["schemas"]["MainCheckpointConfig"] | components["schemas"]["VAEDiffusersConfig"] | components["schemas"]["VAECheckpointConfig"] | components["schemas"]["ControlNetDiffusersConfig"] | components["schemas"]["ControlNetCheckpointConfig"] | components["schemas"]["LoRALyCORISConfig"] | components["schemas"]["LoRADiffusersConfig"] | components["schemas"]["TextualInversionFileConfig"] | components["schemas"]["TextualInversionFolderConfig"] | components["schemas"]["IPAdapterInvokeAIConfig"] | components["schemas"]["IPAdapterCheckpointConfig"] | components["schemas"]["T2IAdapterConfig"] | components["schemas"]["CLIPVisionDiffusersConfig"])[];
+ models: (components["schemas"]["MainDiffusersConfig"] | components["schemas"]["MainCheckpointConfig"] | components["schemas"]["VAEDiffusersConfig"] | components["schemas"]["VAECheckpointConfig"] | components["schemas"]["ControlNetDiffusersConfig"] | components["schemas"]["ControlNetCheckpointConfig"] | components["schemas"]["LoRALyCORISConfig"] | components["schemas"]["LoRADiffusersConfig"] | components["schemas"]["TextualInversionFileConfig"] | components["schemas"]["TextualInversionFolderConfig"] | components["schemas"]["IPAdapterInvokeAIConfig"] | components["schemas"]["IPAdapterCheckpointConfig"] | components["schemas"]["T2IAdapterConfig"] | components["schemas"]["SpandrelImageToImageConfig"] | components["schemas"]["CLIPVisionDiffusersConfig"])[];
};
/**
* Multiply Integers
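
With `spandrel_image_to_image` added to `ModelType` and `SpandrelImageToImageConfig` added to the `ModelsList` union, clients can enumerate installed Spandrel models. A hedged sketch using `requests`; the `/api/v2/models/` path and its `model_type` query parameter are assumptions inferred from this schema rather than verified against the API here:

```python
# Hedged sketch: list installed Spandrel image-to-image models.
# The endpoint path and query parameter are assumptions, not confirmed by this diff.
import requests

resp = requests.get(
    "http://localhost:9090/api/v2/models/",
    params={"model_type": "spandrel_image_to_image"},  # new ModelType value
    timeout=10,
)
resp.raise_for_status()
for model in resp.json()["models"]:  # ModelsList.models per the schema above
    print(model["name"], model["format"])  # format is always "checkpoint" for this type
```
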
@@ -11857,6 +11875,121 @@ export type components = {
*/
type: "show_image";
};
+ /**
+ * SpandrelImageToImageConfig
+ * @description Model config for Spandrel Image to Image models.
+ */
+ SpandrelImageToImageConfig: {
+ /**
+ * Key
+ * @description A unique key for this model.
+ */
+ key: string;
+ /**
+ * Hash
+ * @description The hash of the model file(s).
+ */
+ hash: string;
+ /**
+ * Path
+ * @description Path to the model on the filesystem. Relative paths are relative to the Invoke root directory.
+ */
+ path: string;
+ /**
+ * Name
+ * @description Name of the model.
+ */
+ name: string;
+ /** @description The base model. */
+ base: components["schemas"]["BaseModelType"];
+ /**
+ * Description
+ * @description Model description
+ */
+ description?: string | null;
+ /**
+ * Source
+ * @description The original source of the model (path, URL or repo_id).
+ */
+ source: string;
+ /** @description The type of source */
+ source_type: components["schemas"]["ModelSourceType"];
+ /**
+ * Source Api Response
+ * @description The original API response from the source, as stringified JSON.
+ */
+ source_api_response?: string | null;
+ /**
+ * Cover Image
+ * @description Url for image to preview model
+ */
+ cover_image?: string | null;
+ /**
+ * Type
+ * @default spandrel_image_to_image
+ * @constant
+ * @enum {string}
+ */
+ type: "spandrel_image_to_image";
+ /**
+ * Format
+ * @default checkpoint
+ * @constant
+ * @enum {string}
+ */
+ format: "checkpoint";
+ };
+ /**
+ * Image-to-Image
+ * @description Run any spandrel image-to-image model (https://github.com/chaiNNer-org/spandrel).
+ */
+ SpandrelImageToImageInvocation: {
+ /**
+ * @description The board to save the image to
+ * @default null
+ */
+ board?: components["schemas"]["BoardField"] | null;
+ /**
+ * @description Optional metadata to be saved with the image
+ * @default null
+ */
+ metadata?: components["schemas"]["MetadataField"] | null;
+ /**
+ * Id
+ * @description The id of this instance of an invocation. Must be unique among all instances of invocations.
+ */
+ id: string;
+ /**
+ * Is Intermediate
+ * @description Whether or not this is an intermediate invocation.
+ * @default false
+ */
+ is_intermediate?: boolean;
+ /**
+ * Use Cache
+ * @description Whether or not to use the cache
+ * @default true
+ */
+ use_cache?: boolean;
+ /**
+ * @description The input image
+ * @default null
+ */
+ image?: components["schemas"]["ImageField"];
+ /**
+ * Image-to-Image Model
+ * @description Image-to-Image model
+ * @default null
+ */
+ image_to_image_model?: components["schemas"]["ModelIdentifierField"];
+ /**
+ * type
+ * @default spandrel_image_to_image
+ * @constant
+ * @enum {string}
+ */
+ type: "spandrel_image_to_image";
+ };
/** StarterModel */
StarterModel: {
/** Description */
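
Per the `SpandrelImageToImageInvocation` schema above (and its `ImageOutput` entry in `InvocationOutputMap`), a graph node for this invocation is small. A sketch with hypothetical image and model identifier values; the `ModelIdentifierField` sub-fields are assumed from the config schema:

```python
# Sketch of a spandrel_image_to_image node matching the schema above.
# All concrete values are hypothetical; only the field names come from the diff.
spandrel_node = {
    "id": "upscale_1",                     # hypothetical
    "type": "spandrel_image_to_image",
    "image": {"image_name": "input.png"},  # ImageField
    "image_to_image_model": {              # ModelIdentifierField (sub-fields assumed)
        "key": "<model key>",
        "hash": "<model hash>",
        "name": "example-spandrel-model",
        "base": "any",
        "type": "spandrel_image_to_image",
    },
    "is_intermediate": False,
    "use_cache": True,
}
```
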
@@ -13128,7 +13261,7 @@ export type components = {
* used, and the type will be ignored. They are included here for backwards compatibility.
* @enum {string}
*/
- UIType: "MainModelField" | "SDXLMainModelField" | "SDXLRefinerModelField" | "ONNXModelField" | "VAEModelField" | "LoRAModelField" | "ControlNetModelField" | "IPAdapterModelField" | "T2IAdapterModelField" | "SchedulerField" | "AnyField" | "CollectionField" | "CollectionItemField" | "DEPRECATED_Boolean" | "DEPRECATED_Color" | "DEPRECATED_Conditioning" | "DEPRECATED_Control" | "DEPRECATED_Float" | "DEPRECATED_Image" | "DEPRECATED_Integer" | "DEPRECATED_Latents" | "DEPRECATED_String" | "DEPRECATED_BooleanCollection" | "DEPRECATED_ColorCollection" | "DEPRECATED_ConditioningCollection" | "DEPRECATED_ControlCollection" | "DEPRECATED_FloatCollection" | "DEPRECATED_ImageCollection" | "DEPRECATED_IntegerCollection" | "DEPRECATED_LatentsCollection" | "DEPRECATED_StringCollection" | "DEPRECATED_BooleanPolymorphic" | "DEPRECATED_ColorPolymorphic" | "DEPRECATED_ConditioningPolymorphic" | "DEPRECATED_ControlPolymorphic" | "DEPRECATED_FloatPolymorphic" | "DEPRECATED_ImagePolymorphic" | "DEPRECATED_IntegerPolymorphic" | "DEPRECATED_LatentsPolymorphic" | "DEPRECATED_StringPolymorphic" | "DEPRECATED_UNet" | "DEPRECATED_Vae" | "DEPRECATED_CLIP" | "DEPRECATED_Collection" | "DEPRECATED_CollectionItem" | "DEPRECATED_Enum" | "DEPRECATED_WorkflowField" | "DEPRECATED_IsIntermediate" | "DEPRECATED_BoardField" | "DEPRECATED_MetadataItem" | "DEPRECATED_MetadataItemCollection" | "DEPRECATED_MetadataItemPolymorphic" | "DEPRECATED_MetadataDict";
+ UIType: "MainModelField" | "SDXLMainModelField" | "SDXLRefinerModelField" | "ONNXModelField" | "VAEModelField" | "LoRAModelField" | "ControlNetModelField" | "IPAdapterModelField" | "T2IAdapterModelField" | "SpandrelImageToImageModelField" | "SchedulerField" | "AnyField" | "CollectionField" | "CollectionItemField" | "DEPRECATED_Boolean" | "DEPRECATED_Color" | "DEPRECATED_Conditioning" | "DEPRECATED_Control" | "DEPRECATED_Float" | "DEPRECATED_Image" | "DEPRECATED_Integer" | "DEPRECATED_Latents" | "DEPRECATED_String" | "DEPRECATED_BooleanCollection" | "DEPRECATED_ColorCollection" | "DEPRECATED_ConditioningCollection" | "DEPRECATED_ControlCollection" | "DEPRECATED_FloatCollection" | "DEPRECATED_ImageCollection" | "DEPRECATED_IntegerCollection" | "DEPRECATED_LatentsCollection" | "DEPRECATED_StringCollection" | "DEPRECATED_BooleanPolymorphic" | "DEPRECATED_ColorPolymorphic" | "DEPRECATED_ConditioningPolymorphic" | "DEPRECATED_ControlPolymorphic" | "DEPRECATED_FloatPolymorphic" | "DEPRECATED_ImagePolymorphic" | "DEPRECATED_IntegerPolymorphic" | "DEPRECATED_LatentsPolymorphic" | "DEPRECATED_StringPolymorphic" | "DEPRECATED_UNet" | "DEPRECATED_Vae" | "DEPRECATED_CLIP" | "DEPRECATED_Collection" | "DEPRECATED_CollectionItem" | "DEPRECATED_Enum" | "DEPRECATED_WorkflowField" | "DEPRECATED_IsIntermediate" | "DEPRECATED_BoardField" | "DEPRECATED_MetadataItem" | "DEPRECATED_MetadataItemCollection" | "DEPRECATED_MetadataItemPolymorphic" | "DEPRECATED_MetadataDict";
/** UNetField */
UNetField: {
/** @description Info to load unet submodel */
@@ -13831,7 +13964,7 @@ export type operations = {
/** @description Successful Response */
200: {
content: {
- "application/json": components["schemas"]["MainDiffusersConfig"] | components["schemas"]["MainCheckpointConfig"] | components["schemas"]["VAEDiffusersConfig"] | components["schemas"]["VAECheckpointConfig"] | components["schemas"]["ControlNetDiffusersConfig"] | components["schemas"]["ControlNetCheckpointConfig"] | components["schemas"]["LoRALyCORISConfig"] | components["schemas"]["LoRADiffusersConfig"] | components["schemas"]["TextualInversionFileConfig"] | components["schemas"]["TextualInversionFolderConfig"] | components["schemas"]["IPAdapterInvokeAIConfig"] | components["schemas"]["IPAdapterCheckpointConfig"] | components["schemas"]["T2IAdapterConfig"] | components["schemas"]["CLIPVisionDiffusersConfig"];
+ "application/json": components["schemas"]["MainDiffusersConfig"] | components["schemas"]["MainCheckpointConfig"] | components["schemas"]["VAEDiffusersConfig"] | components["schemas"]["VAECheckpointConfig"] | components["schemas"]["ControlNetDiffusersConfig"] | components["schemas"]["ControlNetCheckpointConfig"] | components["schemas"]["LoRALyCORISConfig"] | components["schemas"]["LoRADiffusersConfig"] | components["schemas"]["TextualInversionFileConfig"] | components["schemas"]["TextualInversionFolderConfig"] | components["schemas"]["IPAdapterInvokeAIConfig"] | components["schemas"]["IPAdapterCheckpointConfig"] | components["schemas"]["T2IAdapterConfig"] | components["schemas"]["SpandrelImageToImageConfig"] | components["schemas"]["CLIPVisionDiffusersConfig"];
};
};
/** @description Validation Error */
@@ -13857,7 +13990,7 @@ export type operations = {
/** @description The model configuration was retrieved successfully */
200: {
content: {
- "application/json": components["schemas"]["MainDiffusersConfig"] | components["schemas"]["MainCheckpointConfig"] | components["schemas"]["VAEDiffusersConfig"] | components["schemas"]["VAECheckpointConfig"] | components["schemas"]["ControlNetDiffusersConfig"] | components["schemas"]["ControlNetCheckpointConfig"] | components["schemas"]["LoRALyCORISConfig"] | components["schemas"]["LoRADiffusersConfig"] | components["schemas"]["TextualInversionFileConfig"] | components["schemas"]["TextualInversionFolderConfig"] | components["schemas"]["IPAdapterInvokeAIConfig"] | components["schemas"]["IPAdapterCheckpointConfig"] | components["schemas"]["T2IAdapterConfig"] | components["schemas"]["CLIPVisionDiffusersConfig"];
+ "application/json": components["schemas"]["MainDiffusersConfig"] | components["schemas"]["MainCheckpointConfig"] | components["schemas"]["VAEDiffusersConfig"] | components["schemas"]["VAECheckpointConfig"] | components["schemas"]["ControlNetDiffusersConfig"] | components["schemas"]["ControlNetCheckpointConfig"] | components["schemas"]["LoRALyCORISConfig"] | components["schemas"]["LoRADiffusersConfig"] | components["schemas"]["TextualInversionFileConfig"] | components["schemas"]["TextualInversionFolderConfig"] | components["schemas"]["IPAdapterInvokeAIConfig"] | components["schemas"]["IPAdapterCheckpointConfig"] | components["schemas"]["T2IAdapterConfig"] | components["schemas"]["SpandrelImageToImageConfig"] | components["schemas"]["CLIPVisionDiffusersConfig"];
};
};
/** @description Bad request */
@@ -13939,7 +14072,7 @@ export type operations = {
/** @description The model was updated successfully */
200: {
content: {
- "application/json": components["schemas"]["MainDiffusersConfig"] | components["schemas"]["MainCheckpointConfig"] | components["schemas"]["VAEDiffusersConfig"] | components["schemas"]["VAECheckpointConfig"] | components["schemas"]["ControlNetDiffusersConfig"] | components["schemas"]["ControlNetCheckpointConfig"] | components["schemas"]["LoRALyCORISConfig"] | components["schemas"]["LoRADiffusersConfig"] | components["schemas"]["TextualInversionFileConfig"] | components["schemas"]["TextualInversionFolderConfig"] | components["schemas"]["IPAdapterInvokeAIConfig"] | components["schemas"]["IPAdapterCheckpointConfig"] | components["schemas"]["T2IAdapterConfig"] | components["schemas"]["CLIPVisionDiffusersConfig"];
+ "application/json": components["schemas"]["MainDiffusersConfig"] | components["schemas"]["MainCheckpointConfig"] | components["schemas"]["VAEDiffusersConfig"] | components["schemas"]["VAECheckpointConfig"] | components["schemas"]["ControlNetDiffusersConfig"] | components["schemas"]["ControlNetCheckpointConfig"] | components["schemas"]["LoRALyCORISConfig"] | components["schemas"]["LoRADiffusersConfig"] | components["schemas"]["TextualInversionFileConfig"] | components["schemas"]["TextualInversionFolderConfig"] | components["schemas"]["IPAdapterInvokeAIConfig"] | components["schemas"]["IPAdapterCheckpointConfig"] | components["schemas"]["T2IAdapterConfig"] | components["schemas"]["SpandrelImageToImageConfig"] | components["schemas"]["CLIPVisionDiffusersConfig"];
};
};
/** @description Bad request */
@@ -14347,7 +14480,7 @@ export type operations = {
/** @description Model converted successfully */
200: {
content: {
- "application/json": components["schemas"]["MainDiffusersConfig"] | components["schemas"]["MainCheckpointConfig"] | components["schemas"]["VAEDiffusersConfig"] | components["schemas"]["VAECheckpointConfig"] | components["schemas"]["ControlNetDiffusersConfig"] | components["schemas"]["ControlNetCheckpointConfig"] | components["schemas"]["LoRALyCORISConfig"] | components["schemas"]["LoRADiffusersConfig"] | components["schemas"]["TextualInversionFileConfig"] | components["schemas"]["TextualInversionFolderConfig"] | components["schemas"]["IPAdapterInvokeAIConfig"] | components["schemas"]["IPAdapterCheckpointConfig"] | components["schemas"]["T2IAdapterConfig"] | components["schemas"]["CLIPVisionDiffusersConfig"];
+ "application/json": components["schemas"]["MainDiffusersConfig"] | components["schemas"]["MainCheckpointConfig"] | components["schemas"]["VAEDiffusersConfig"] | components["schemas"]["VAECheckpointConfig"] | components["schemas"]["ControlNetDiffusersConfig"] | components["schemas"]["ControlNetCheckpointConfig"] | components["schemas"]["LoRALyCORISConfig"] | components["schemas"]["LoRADiffusersConfig"] | components["schemas"]["TextualInversionFileConfig"] | components["schemas"]["TextualInversionFolderConfig"] | components["schemas"]["IPAdapterInvokeAIConfig"] | components["schemas"]["IPAdapterCheckpointConfig"] | components["schemas"]["T2IAdapterConfig"] | components["schemas"]["SpandrelImageToImageConfig"] | components["schemas"]["CLIPVisionDiffusersConfig"];
};
};
/** @description Bad request */
@@ -14836,6 +14969,8 @@ export type operations = {
order_dir?: components["schemas"]["SQLiteDirection"];
/** @description Whether to sort by starred images first */
starred_first?: boolean;
+ /** @description The term to search for */
+ search_term?: string | null;
};
};
responses: {
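
The new `search_term` parameter slots into the existing image-list query alongside `order_dir` and `starred_first`. A hedged sketch; the `/api/v1/images/` path is an assumption based on those surrounding parameters:

```python
# Hedged sketch: filter the image list by a search term.
# The endpoint path is assumed; the search_term parameter comes from this diff.
import requests

resp = requests.get(
    "http://localhost:9090/api/v1/images/",
    params={"search_term": "sunset", "starred_first": True, "limit": 20},
    timeout=10,
)
resp.raise_for_status()
print(resp.json())
```
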
@@ -15012,6 +15147,8 @@ export type operations = {
query: {
/** @description The name of the board to create */
board_name: string;
+ /** @description Whether the board is private */
+ is_private?: boolean;
};
};
responses: {
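
Board creation now takes an optional `is_private` flag next to `board_name`. A sketch, assuming both are query parameters on the board-creation endpoint as the schema indicates; the exact path is an assumption:

```python
# Hedged sketch: create a private board.
# board_name and is_private are query parameters per the schema; the path is assumed.
import requests

resp = requests.post(
    "http://localhost:9090/api/v1/boards/",
    params={"board_name": "private tests", "is_private": True},
    timeout=10,
)
resp.raise_for_status()
```
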
diff --git a/invokeai/frontend/web/src/services/api/types.ts b/invokeai/frontend/web/src/services/api/types.ts
index 162bdf6abc..b8ffa46c82 100644
--- a/invokeai/frontend/web/src/services/api/types.ts
+++ b/invokeai/frontend/web/src/services/api/types.ts
@@ -11,6 +11,10 @@ export type ListBoardsArgs = NonNullable<paths['/api/v1/boards/']['get']['parameters']['query']>;
+export const isSpandrelImageToImageModelConfig = (config: AnyModelConfig): config is SpandrelImageToImageModelConfig => {
+  return config.type === 'spandrel_image_to_image';
+};
+
export const isControlAdapterModelConfig = (
config: AnyModelConfig
): config is ControlNetModelConfig | T2IAdapterModelConfig | IPAdapterModelConfig => {
diff --git a/invokeai/invocation_api/__init__.py b/invokeai/invocation_api/__init__.py
index 97260c4dfe..586f85b9c2 100644
--- a/invokeai/invocation_api/__init__.py
+++ b/invokeai/invocation_api/__init__.py
@@ -11,7 +11,6 @@ from invokeai.app.invocations.baseinvocation import (
invocation,
invocation_output,
)
-from invokeai.app.invocations.constants import SCHEDULER_NAME_VALUES
from invokeai.app.invocations.fields import (
BoardField,
ColorField,
@@ -78,6 +77,7 @@ from invokeai.backend.stable_diffusion.diffusion.conditioning_data import (
ConditioningFieldData,
SDXLConditioningInfo,
)
+from invokeai.backend.stable_diffusion.schedulers.schedulers import SCHEDULER_NAME_VALUES
from invokeai.backend.util.devices import CPU_DEVICE, CUDA_DEVICE, MPS_DEVICE, choose_precision, choose_torch_device
from invokeai.version import __version__
@@ -163,7 +163,7 @@ __all__ = [
"BaseModelType",
"ModelType",
"SubModelType",
- # invokeai.app.invocations.constants
+ # invokeai.backend.stable_diffusion.schedulers.schedulers
"SCHEDULER_NAME_VALUES",
# invokeai.version
"__version__",
diff --git a/invokeai/version/__init__.py b/invokeai/version/__init__.py
index dbc30bc636..57efb1af95 100644
--- a/invokeai/version/__init__.py
+++ b/invokeai/version/__init__.py
@@ -2,7 +2,7 @@
initialization file for invokeai
"""
-from .invokeai_version import __version__ # noqa: F401
+from invokeai.version.invokeai_version import __version__ # noqa: F401
__app_id__ = "invoke-ai/InvokeAI"
__app_name__ = "InvokeAI"
diff --git a/invokeai/version/invokeai_version.py b/invokeai/version/invokeai_version.py
index 1bdb160e96..09545bce26 100644
--- a/invokeai/version/invokeai_version.py
+++ b/invokeai/version/invokeai_version.py
@@ -1 +1 @@
-__version__ = "4.2.5"
+__version__ = "4.2.6post1"
diff --git a/pyproject.toml b/pyproject.toml
index fcc0aff60c..9953c1c1a0 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -46,6 +46,7 @@ dependencies = [
"opencv-python==4.9.0.80",
"pytorch-lightning==2.1.3",
"safetensors==0.4.3",
+ "spandrel==0.3.4",
"timm==0.6.13", # needed to override timm latest in controlnet_aux, see https://github.com/isl-org/ZoeDepth/issues/26
"torch==2.2.2",
"torchmetrics==0.11.4",
@@ -206,7 +207,12 @@ ignore = [
"B008", # https://docs.astral.sh/ruff/rules/function-call-in-default-argument/
"B904", # https://docs.astral.sh/ruff/rules/raise-without-from-inside-except/
]
-select = ["B", "C", "E", "F", "W", "I"]
+select = ["B", "C", "E", "F", "W", "I", "TID"]
+
+[tool.ruff.lint.flake8-tidy-imports]
+# Disallow all relative imports.
+ban-relative-imports = "all"
+
#=== End: Ruff
#=== Begin: MyPy
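
The new `TID` rule set with `ban-relative-imports = "all"` makes ruff reject every relative import; the `invokeai/version/__init__.py` hunk above is exactly the rewrite it forces. A before/after sketch:

```python
# With ban-relative-imports = "all", ruff (rule TID252) flags the relative form.
# Before (now an error):
#   from .invokeai_version import __version__
# After (absolute import, as in the version/__init__.py hunk above):
from invokeai.version.invokeai_version import __version__  # noqa: F401
```
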
diff --git a/tests/app/services/model_install/test_model_install.py b/tests/app/services/model_install/test_model_install.py
index 0c212cca76..5e4f2a389f 100644
--- a/tests/app/services/model_install/test_model_install.py
+++ b/tests/app/services/model_install/test_model_install.py
@@ -18,6 +18,7 @@ from invokeai.app.services.events.events_common import (
ModelInstallDownloadProgressEvent,
ModelInstallDownloadsCompleteEvent,
ModelInstallDownloadStartedEvent,
+ ModelInstallErrorEvent,
ModelInstallStartedEvent,
)
from invokeai.app.services.model_install import (
@@ -339,7 +340,13 @@ def test_404_download(mm2_installer: ModelInstallServiceBase, mm2_app_config: In
assert job.error_type == "HTTPError"
assert job.error
assert "NOT FOUND" in job.error
+ assert job.error_traceback is not None
assert job.error_traceback.startswith("Traceback")
+ bus = mm2_installer.event_bus
+ assert bus is not None
+ assert hasattr(bus, "events") # the TestEventService test double records emitted events here
+ event_types = [type(x) for x in bus.events]
+ assert ModelInstallErrorEvent in event_types
def test_other_error_during_install(
diff --git a/tests/app/services/model_records/test_model_records_sql.py b/tests/app/services/model_records/test_model_records_sql.py
index d39e95ab3d..e6a89dff06 100644
--- a/tests/app/services/model_records/test_model_records_sql.py
+++ b/tests/app/services/model_records/test_model_records_sql.py
@@ -40,7 +40,7 @@ def store(
config._root = datadir
logger = InvokeAILogger.get_logger(config=config)
db = create_mock_sqlite_database(config, logger)
- return ModelRecordServiceSQL(db)
+ return ModelRecordServiceSQL(db, logger)
def example_ti_config(key: Optional[str] = None) -> TextualInversionFileConfig:
diff --git a/tests/backend/model_manager/model_manager_fixtures.py b/tests/backend/model_manager/model_manager_fixtures.py
index 6fd8c51b54..621b7c65b4 100644
--- a/tests/backend/model_manager/model_manager_fixtures.py
+++ b/tests/backend/model_manager/model_manager_fixtures.py
@@ -110,7 +110,7 @@ def mm2_installer(
logger = InvokeAILogger.get_logger()
db = create_mock_sqlite_database(mm2_app_config, logger)
events = TestEventService()
- store = ModelRecordServiceSQL(db)
+ store = ModelRecordServiceSQL(db, logger)
installer = ModelInstallService(
app_config=mm2_app_config,
@@ -128,7 +128,7 @@ def mm2_installer(
def mm2_record_store(mm2_app_config: InvokeAIAppConfig) -> ModelRecordServiceBase:
logger = InvokeAILogger.get_logger(config=mm2_app_config)
db = create_mock_sqlite_database(mm2_app_config, logger)
- store = ModelRecordServiceSQL(db)
+ store = ModelRecordServiceSQL(db, logger)
# add five simple config records to the database
config1 = VAEDiffusersConfig(
key="test_config_1",
diff --git a/tests/backend/stable_diffusion/extensions/test_base.py b/tests/backend/stable_diffusion/extensions/test_base.py
new file mode 100644
index 0000000000..d024c551a2
--- /dev/null
+++ b/tests/backend/stable_diffusion/extensions/test_base.py
@@ -0,0 +1,46 @@
+from unittest import mock
+
+from invokeai.backend.stable_diffusion.denoise_context import DenoiseContext
+from invokeai.backend.stable_diffusion.extension_callback_type import ExtensionCallbackType
+from invokeai.backend.stable_diffusion.extensions.base import ExtensionBase, callback
+
+
+class MockExtension(ExtensionBase):
+ """A mock ExtensionBase subclass for testing purposes."""
+
+ def __init__(self, x: int):
+ super().__init__()
+ self._x = x
+
+ @callback(ExtensionCallbackType.PRE_DENOISE_LOOP)
+ def set_step_index(self, ctx: DenoiseContext):
+ ctx.step_index = self._x
+
+
+def test_extension_base_callback_registration():
+ """Test that a callback can be successfully registered with an extension."""
+ val = 5
+ mock_extension = MockExtension(val)
+
+ mock_ctx = mock.MagicMock()
+
+ callbacks = mock_extension.get_callbacks()
+ pre_denoise_loop_cbs = callbacks.get(ExtensionCallbackType.PRE_DENOISE_LOOP, [])
+ assert len(pre_denoise_loop_cbs) == 1
+
+ # Call the mock callback.
+ pre_denoise_loop_cbs[0].function(mock_ctx)
+
+ # Confirm that the callback ran.
+ assert mock_ctx.step_index == val
+
+
+def test_extension_base_empty_callback_type():
+ """Test that an empty list is returned when no callbacks are registered for a given callback type."""
+ mock_extension = MockExtension(5)
+
+ # There should be no callbacks registered for POST_DENOISE_LOOP.
+ callbacks = mock_extension.get_callbacks()
+
+ post_denoise_loop_cbs = callbacks.get(ExtensionCallbackType.POST_DENOISE_LOOP, [])
+ assert len(post_denoise_loop_cbs) == 0
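These tests exercise the `@callback` decorator and `get_callbacks()` from `ExtensionBase`. The mechanism is presumably a method-tagging pattern along these lines; note the real implementation wraps each callback in a record exposing `.function` (and an `order`), which this sketch omits for brevity:

```python
from collections import defaultdict
from enum import Enum
from typing import Callable

class CallbackType(Enum):
    PRE = "pre"
    POST = "post"

def callback(kind: CallbackType):
    """Tag a method so the base class can discover it later."""
    def decorator(fn: Callable) -> Callable:
        fn._callback_type = kind  # type: ignore[attr-defined]
        return fn
    return decorator

class ExtensionBase:
    def get_callbacks(self) -> dict[CallbackType, list[Callable]]:
        found: dict[CallbackType, list[Callable]] = defaultdict(list)
        for name in dir(self):
            member = getattr(self, name)
            kind = getattr(member, "_callback_type", None)
            if kind is not None:
                found[kind].append(member)
        return found
```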
diff --git a/tests/backend/stable_diffusion/schedulers/test_schedulers.py b/tests/backend/stable_diffusion/schedulers/test_schedulers.py
new file mode 100644
index 0000000000..bb49fc4f3b
--- /dev/null
+++ b/tests/backend/stable_diffusion/schedulers/test_schedulers.py
@@ -0,0 +1,10 @@
+from typing import get_args
+
+from invokeai.backend.stable_diffusion.schedulers.schedulers import SCHEDULER_MAP, SCHEDULER_NAME_VALUES
+
+
+def test_scheduler_map_has_all_keys():
+ # Assert that SCHEDULER_MAP has all keys from SCHEDULER_NAME_VALUES.
+ # TODO(ryand): This feels like it should be a type check, but I couldn't find a clean way to do this and didn't want
+ # to spend more time on it.
+ assert set(SCHEDULER_MAP.keys()) == set(get_args(SCHEDULER_NAME_VALUES))
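The trick here: `SCHEDULER_NAME_VALUES` is a `typing.Literal`, and `get_args` recovers its member strings at runtime, so the test keeps the map and the type alias from drifting apart. In miniature (hypothetical names):

```python
from typing import Literal, get_args

ColorName = Literal["red", "green", "blue"]
COLOR_MAP = {"red": 0xFF0000, "green": 0x00FF00, "blue": 0x0000FF}

# get_args(ColorName) == ("red", "green", "blue")
assert set(COLOR_MAP.keys()) == set(get_args(ColorName))
```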
diff --git a/tests/backend/stable_diffusion/test_extension_manager.py b/tests/backend/stable_diffusion/test_extension_manager.py
new file mode 100644
index 0000000000..889f8316e5
--- /dev/null
+++ b/tests/backend/stable_diffusion/test_extension_manager.py
@@ -0,0 +1,112 @@
+from unittest import mock
+
+import pytest
+
+from invokeai.backend.stable_diffusion.denoise_context import DenoiseContext
+from invokeai.backend.stable_diffusion.extension_callback_type import ExtensionCallbackType
+from invokeai.backend.stable_diffusion.extensions.base import ExtensionBase, callback
+from invokeai.backend.stable_diffusion.extensions_manager import ExtensionsManager
+
+
+class MockExtension(ExtensionBase):
+ """A mock ExtensionBase subclass for testing purposes."""
+
+ def __init__(self, x: int):
+ super().__init__()
+ self._x = x
+
+ # Note that order is not specified. It should default to 0.
+ @callback(ExtensionCallbackType.PRE_DENOISE_LOOP)
+ def set_step_index(self, ctx: DenoiseContext):
+ ctx.step_index = self._x
+
+
+class MockExtensionLate(ExtensionBase):
+ """A mock ExtensionBase subclass with a high order value on its PRE_DENOISE_LOOP callback."""
+
+ def __init__(self, x: int):
+ super().__init__()
+ self._x = x
+
+ @callback(ExtensionCallbackType.PRE_DENOISE_LOOP, order=1000)
+ def set_step_index(self, ctx: DenoiseContext):
+ ctx.step_index = self._x
+
+
+def test_extension_manager_run_callback():
+ """Test that run_callback runs all callbacks for the given callback type."""
+
+ em = ExtensionsManager()
+ mock_extension_1 = MockExtension(1)
+ em.add_extension(mock_extension_1)
+
+ mock_ctx = mock.MagicMock()
+ em.run_callback(ExtensionCallbackType.PRE_DENOISE_LOOP, mock_ctx)
+
+ assert mock_ctx.step_index == 1
+
+
+def test_extension_manager_run_callback_no_callbacks():
+ """Test that run_callback does not raise an error when there are no callbacks for the given callback type."""
+ em = ExtensionsManager()
+ mock_ctx = mock.MagicMock()
+ em.run_callback(ExtensionCallbackType.PRE_DENOISE_LOOP, mock_ctx)
+
+
+@pytest.mark.parametrize(
+ ["extension_1", "extension_2"],
+ # Regardless of initialization order, we expect MockExtensionLate to run last.
+ [(MockExtension(1), MockExtensionLate(2)), (MockExtensionLate(2), MockExtension(1))],
+)
+def test_extension_manager_order_callbacks(extension_1: ExtensionBase, extension_2: ExtensionBase):
+ """Test that run_callback runs callbacks in the correct order."""
+ em = ExtensionsManager()
+ em.add_extension(extension_1)
+ em.add_extension(extension_2)
+
+ mock_ctx = mock.MagicMock()
+ em.run_callback(ExtensionCallbackType.PRE_DENOISE_LOOP, mock_ctx)
+
+ assert mock_ctx.step_index == 2
+
+
+class MockExtensionStableSort(ExtensionBase):
+ """A mock extension with three PRE_DENOISE_LOOP callbacks, each with a different order value."""
+
+ @callback(ExtensionCallbackType.PRE_DENOISE_LOOP, order=-1000)
+ def early(self, ctx: DenoiseContext):
+ pass
+
+ @callback(ExtensionCallbackType.PRE_DENOISE_LOOP)
+ def middle(self, ctx: DenoiseContext):
+ pass
+
+ @callback(ExtensionCallbackType.PRE_DENOISE_LOOP, order=1000)
+ def late(self, ctx: DenoiseContext):
+ pass
+
+
+def test_extension_manager_stable_sort():
+ """Test that when two callbacks have the same 'order' value, they are sorted based on the order they were added to
+ the ExtensionsManager."""
+
+ em = ExtensionsManager()
+
+ mock_extension_1 = MockExtensionStableSort()
+ mock_extension_2 = MockExtensionStableSort()
+
+ em.add_extension(mock_extension_1)
+ em.add_extension(mock_extension_2)
+
+ expected_order = [
+ mock_extension_1.early,
+ mock_extension_2.early,
+ mock_extension_1.middle,
+ mock_extension_2.middle,
+ mock_extension_1.late,
+ mock_extension_2.late,
+ ]
+
+ # It's not ideal that we are accessing a private attribute here, but this was the most direct way to assert the
+ # desired behaviour.
+ assert [cb.function for cb in em._ordered_callbacks[ExtensionCallbackType.PRE_DENOISE_LOOP]] == expected_order
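The stable-sort expectation holds because Python's `sorted` is guaranteed stable: callbacks comparing equal on `order` keep their registration order. A minimal demonstration:

```python
# sorted() is stable, so equal keys preserve insertion order.
callbacks = [("ext1.middle", 0), ("ext2.middle", 0), ("ext1.early", -1000)]
ordered = sorted(callbacks, key=lambda cb: cb[1])
assert [name for name, _ in ordered] == ["ext1.early", "ext1.middle", "ext2.middle"]
```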
diff --git a/tests/backend/stable_diffusion/test_vae_tiling.py b/tests/backend/stable_diffusion/test_vae_tiling.py
new file mode 100644
index 0000000000..4d97a8869b
--- /dev/null
+++ b/tests/backend/stable_diffusion/test_vae_tiling.py
@@ -0,0 +1,13 @@
+from diffusers.models.autoencoders.autoencoder_kl import AutoencoderKL
+
+from invokeai.backend.stable_diffusion.vae_tiling import patch_vae_tiling_params
+
+
+def test_patch_vae_tiling_params():
+ """Smoke test the patch_vae_tiling_params(...) context manager. The main purpose of this unit test is to detect if
+ diffusers ever changes the attributes of the AutoencoderKL class that we expect to exist.
+ """
+ vae = AutoencoderKL()
+
+ with patch_vae_tiling_params(vae, 1, 2, 3):
+ pass
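As context for this smoke test: `patch_vae_tiling_params` presumably swaps the VAE's tiling attributes in and restores them on exit. A generic sketch of that save/patch/restore pattern (a hypothetical helper, not InvokeAI's implementation):

```python
from contextlib import contextmanager
from typing import Any, Iterator

@contextmanager
def patch_attrs(obj: Any, **attrs: Any) -> Iterator[None]:
    """Temporarily set attributes on obj, restoring the originals on exit."""
    saved = {name: getattr(obj, name) for name in attrs}
    try:
        for name, value in attrs.items():
            setattr(obj, name, value)
        yield
    finally:
        for name, value in saved.items():
            setattr(obj, name, value)
```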
diff --git a/tests/test_graph_execution_state.py b/tests/test_graph_execution_state.py
index d8cbc38860..9aee5febc9 100644
--- a/tests/test_graph_execution_state.py
+++ b/tests/test_graph_execution_state.py
@@ -3,13 +3,6 @@ from unittest.mock import Mock
import pytest
-# This import must happen before other invoke imports or test in other files(!!) break
-from .test_nodes import ( # isort: split
- PromptCollectionTestInvocation,
- PromptTestInvocation,
- TextToImageTestInvocation,
-)
-
from invokeai.app.invocations.baseinvocation import BaseInvocation, BaseInvocationOutput, InvocationContext
from invokeai.app.invocations.collections import RangeInvocation
from invokeai.app.invocations.math import AddInvocation, MultiplyInvocation
@@ -20,7 +13,13 @@ from invokeai.app.services.shared.graph import (
IterateInvocation,
)
-from .test_nodes import create_edge
+# This import must happen before other invoke imports, or tests in other files (!!) break
+from tests.test_nodes import (
+ PromptCollectionTestInvocation,
+ PromptTestInvocation,
+ TextToImageTestInvocation,
+ create_edge,
+)
@pytest.fixture
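The "(!!)"-flagged comment survives the refactor because `tests.test_nodes` presumably registers its invocation classes as an import side effect, so whichever module imports it first determines when registration happens. A stripped-down version of that pattern (hypothetical names):

```python
# Registration-at-import: defining the class mutates a global registry,
# so merely importing this module has side effects.
INVOCATION_REGISTRY: dict[str, type] = {}

def invocation(name: str):
    def decorator(cls: type) -> type:
        INVOCATION_REGISTRY[name] = cls
        return cls
    return decorator

@invocation("prompt_test")
class PromptTestInvocation:
    pass
```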
diff --git a/tests/test_node_graph.py b/tests/test_node_graph.py
index 861f1bd07b..ebaab9ff54 100644
--- a/tests/test_node_graph.py
+++ b/tests/test_node_graph.py
@@ -27,8 +27,7 @@ from invokeai.app.services.shared.graph import (
NodeNotFoundError,
are_connections_compatible,
)
-
-from .test_nodes import (
+from tests.test_nodes import (
AnyTypeTestInvocation,
ImageToImageTestInvocation,
ListPassThroughInvocation,
diff --git a/tests/test_session_queue.py b/tests/test_session_queue.py
index bf26b9b002..f0a9dd4808 100644
--- a/tests/test_session_queue.py
+++ b/tests/test_session_queue.py
@@ -11,8 +11,7 @@ from invokeai.app.services.session_queue.session_queue_common import (
prepare_values_to_insert,
)
from invokeai.app.services.shared.graph import Graph, GraphExecutionState
-
-from .test_nodes import PromptTestInvocation
+from tests.test_nodes import PromptTestInvocation
@pytest.fixture