Mirror of https://github.com/invoke-ai/InvokeAI, synced 2024-08-30 20:32:17 +00:00

Commit 1e357bd21b: merge with main & resolve conflicts
@@ -9,9 +9,9 @@ runs:
         node-version: '18'

     - name: setup pnpm
-      uses: pnpm/action-setup@v2
+      uses: pnpm/action-setup@v4
       with:
-        version: 8
+        version: 8.15.6
         run_install: false

    - name: get pnpm store directory
.github/pull_request_template.md (vendored): 2 changes
@@ -8,7 +8,7 @@

 ## QA Instructions

-<!--WHEN APPLICABLE: Describe how we can test the changes in this PR.-->
+<!--WHEN APPLICABLE: Describe how you have tested the changes in this PR. Provide enough detail that a reviewer can reproduce your tests.-->

 ## Merge Plan
README.md: 45 changes
@@ -12,12 +12,24 @@

 Invoke is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. Invoke offers an industry leading web-based UI, and serves as the foundation for multiple commercial products.

-[Installation and Updates][installation docs] - [Documentation and Tutorials][docs home] - [Bug Reports][github issues] - [Contributing][contributing docs]
+Invoke is available in two editions:
+
+| **Community Edition**                                                                          | **Professional Edition**                                                                       |
+|------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------|
+| **For users looking for a locally installed, self-hosted and self-managed service**             | **For users or teams looking for a cloud-hosted, fully managed service**                         |
+| - Free to use under a commercially-friendly license                                             | - Monthly subscription fee with three different plan levels                                      |
+| - Download and install on compatible hardware                                                   | - Offers additional benefits, including multi-user support, improved model training, and more    |
+| - Includes all core studio features: generate, refine, iterate on images, and build workflows   | - Hosted in the cloud for easy, secure model access and scalability                              |
+| Quick Start -> [Installation and Updates][installation docs]                                    | More Information -> [www.invoke.com/pricing](https://www.invoke.com/pricing)                     |

 <div align="center">

 ![canary-title-all-green](https://github.com/invoke-ai/InvokeAI/assets/31807370/6e3728c7-e90e-4711-905c-3b55844ff5be)

+# Documentation
+| **Quick Links**                                                                                                                                             |
+|----------------------------------------------------------------------------------------------------------------------------------------------------------|
+| [Installation and Updates][installation docs] - [Documentation and Tutorials][docs home] - [Bug Reports][github issues] - [Contributing][contributing docs] |
+
 </div>

 ## Quick Start
@@ -37,6 +49,33 @@ Invoke is a leading creative engine built to empower professionals and enthusias

 More detail, including hardware requirements and manual install instructions, is available in the [installation documentation][installation docs].

+## Docker Container
+
+We publish official container images in Github Container Registry: https://github.com/invoke-ai/InvokeAI/pkgs/container/invokeai. Both CUDA and ROCm images are available. Check the above link for relevant tags.
+
+> [!IMPORTANT]
+> Ensure that Docker is set up to use the GPU. Refer to [NVIDIA][nvidia docker docs] or [AMD][amd docker docs] documentation.
+
+### Generate!
+
+Run the container, modifying the command as necessary:
+
+```bash
+docker run --runtime=nvidia --gpus=all --publish 9090:9090 ghcr.io/invoke-ai/invokeai
+```
+
+Then open `http://localhost:9090` and install some models using the Model Manager tab to begin generating.
+
+For ROCm, add `--device /dev/kfd --device /dev/dri` to the `docker run` command.
+
+### Persist your data
+
+You will likely want to persist your workspace outside of the container. Use the `--volume /home/myuser/invokeai:/invokeai` flag to mount some local directory (using its **absolute** path) to the `/invokeai` path inside the container. Your generated images and models will reside there. You can use this directory with other InvokeAI installations, or switch between runtime directories as needed.
+
+### DIY
+
+Build your own image and customize the environment to match your needs using our `docker-compose` stack. See [README.md](./docker/README.md) in the [docker](./docker) directory.
+
 ## Troubleshooting, FAQ and Support

 Please review our [FAQ][faq] for solutions to common installation problems and other issues.
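Putting the new README instructions together: a complete CUDA invocation that both publishes the port and persists the runtime directory might look like the following sketch (the host path is illustrative; use any absolute path):

```bash
# GPU-enabled container with a persistent runtime directory on the host.
docker run --runtime=nvidia --gpus=all \
  --publish 9090:9090 \
  --volume /home/myuser/invokeai:/invokeai \
  ghcr.io/invoke-ai/invokeai
```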
@@ -114,3 +153,5 @@ Original portions of the software are Copyright © 2024 by respective contributo

 [latest release link]: https://github.com/invoke-ai/InvokeAI/releases/latest
 [translation status badge]: https://hosted.weblate.org/widgets/invokeai/-/svg-badge.svg
 [translation status link]: https://hosted.weblate.org/engage/invokeai/
+[nvidia docker docs]: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html
+[amd docker docs]: https://rocm.docs.amd.com/projects/install-on-linux/en/latest/how-to/docker.html
@@ -19,8 +19,9 @@

 ## INVOKEAI_PORT is the port on which the InvokeAI web interface will be available
 # INVOKEAI_PORT=9090

-## GPU_DRIVER can be set to either `nvidia` or `rocm` to enable GPU support in the container accordingly.
-# GPU_DRIVER=nvidia #| rocm
+## GPU_DRIVER can be set to either `cuda` or `rocm` to enable GPU support in the container accordingly.
+# GPU_DRIVER=cuda #| rocm

 ## CONTAINER_UID can be set to the UID of the user on the host system that should own the files in the container.
 ## It is usually not necessary to change this. Use `id -u` on the host system to find the UID.
 # CONTAINER_UID=1000
@@ -1,41 +1,75 @@
-# InvokeAI Containerized
+# Invoke in Docker

-All commands should be run within the `docker` directory: `cd docker`
+- Ensure that Docker can use the GPU on your system
+- This documentation assumes Linux, but should work similarly under Windows with WSL2
+- We don't recommend running Invoke in Docker on macOS at this time. It works, but very slowly.

-## Quickstart :rocket:
+## Quickstart :lightning:

-On a known working Linux+Docker+CUDA (Nvidia) system, execute `./run.sh` in this directory. It will take a few minutes - depending on your internet speed - to install the core models. Once the application starts up, open `http://localhost:9090` in your browser to Invoke!
+No `docker compose`, no persistence, just a simple one-liner using the official images:

-For more configuration options (using an AMD GPU, custom root directory location, etc): read on.
+**CUDA:**

-## Detailed setup
+```bash
+docker run --runtime=nvidia --gpus=all --publish 9090:9090 ghcr.io/invoke-ai/invokeai
+```
+
+**ROCm:**
+
+```bash
+docker run --device /dev/kfd --device /dev/dri --publish 9090:9090 ghcr.io/invoke-ai/invokeai:main-rocm
+```
+
+Open `http://localhost:9090` in your browser once the container finishes booting, install some models, and generate away!
+
+> [!TIP]
+> To persist your data (including downloaded models) outside of the container, add a `--volume/-v` flag to the above command, e.g.: `docker run --volume /some/local/path:/invokeai <...the rest of the command>`
+
+## Customize the container
+
+We ship the `run.sh` script, which is a convenient wrapper around `docker compose` for cases where custom image build args are needed. Alternatively, the familiar `docker compose` commands work just as well.
+
+```bash
+cd docker
+cp .env.sample .env
+# edit .env to your liking if you need to; it is well commented.
+./run.sh
+```
+
+It will take a few minutes to build the image the first time. Once the application starts up, open `http://localhost:9090` in your browser to invoke!
+
+## Docker setup in detail

 #### Linux

 1. Ensure buildkit is enabled in the Docker daemon settings (`/etc/docker/daemon.json`)
 2. Install the `docker compose` plugin using your package manager, or follow a [tutorial](https://docs.docker.com/compose/install/linux/#install-using-the-repository).
-   - The deprecated `docker-compose` (hyphenated) CLI continues to work for now.
+   - The deprecated `docker-compose` (hyphenated) CLI probably won't work. Update to a recent version.
 3. Ensure docker daemon is able to access the GPU.
-   - You may need to install [nvidia-container-toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html)
+   - [NVIDIA docs](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html)
+   - [AMD docs](https://rocm.docs.amd.com/projects/install-on-linux/en/latest/how-to/docker.html)

 #### macOS

+> [!TIP]
+> You'll be better off installing Invoke directly on your system, because Docker can not use the GPU on macOS.
+
+If you are still reading:
+
 1. Ensure Docker has at least 16GB RAM
 2. Enable VirtioFS for file sharing
 3. Enable `docker compose` V2 support

-This is done via Docker Desktop preferences
+This is done via Docker Desktop preferences.

-### Configure Invoke environment
+### Configure the Invoke Environment

-1. Make a copy of `.env.sample` and name it `.env` (`cp .env.sample .env` (Mac/Linux) or `copy example.env .env` (Windows)). Make changes as necessary. Set `INVOKEAI_ROOT` to an absolute path to:
-   a. the desired location of the InvokeAI runtime directory, or
-   b. an existing, v3.0.0 compatible runtime directory.
+1. Make a copy of `.env.sample` and name it `.env` (`cp .env.sample .env` (Mac/Linux) or `copy example.env .env` (Windows)). Make changes as necessary. Set `INVOKEAI_ROOT` to an absolute path to the desired location of the InvokeAI runtime directory. It may be an existing directory from a previous installation (post 4.0.0).
 1. Execute `run.sh`

    The image will be built automatically if needed.

-   The runtime directory (holding models and outputs) will be created in the location specified by `INVOKEAI_ROOT`. The default location is `~/invokeai`. The runtime directory will be populated with the base configs and models necessary to start generating.
+   The runtime directory (holding models and outputs) will be created in the location specified by `INVOKEAI_ROOT`. The default location is `~/invokeai`. Navigate to the Model Manager tab and install some models before generating.

 ### Use a GPU

@@ -43,9 +77,9 @@ The runtime directory (holding models and outputs) will be created in the locati

 - WSL2 is *required* for Windows.
 - only `x86_64` architecture is supported.

-The Docker daemon on the system must be already set up to use the GPU. In case of Linux, this involves installing `nvidia-docker-runtime` and configuring the `nvidia` runtime as default. Steps will be different for AMD. Please see Docker documentation for the most up-to-date instructions for using your GPU with Docker.
+The Docker daemon on the system must be already set up to use the GPU. In case of Linux, this involves installing `nvidia-docker-runtime` and configuring the `nvidia` runtime as default. Steps will be different for AMD. Please see Docker/NVIDIA/AMD documentation for the most up-to-date instructions for using your GPU with Docker.

-To use an AMD GPU, set `GPU_DRIVER=rocm` in your `.env` file.
+To use an AMD GPU, set `GPU_DRIVER=rocm` in your `.env` file before running `./run.sh`.

 ## Customize

@@ -59,10 +93,10 @@ Values are optional, but setting `INVOKEAI_ROOT` is highly recommended. The defa

 INVOKEAI_ROOT=/Volumes/WorkDrive/invokeai
 HUGGINGFACE_TOKEN=the_actual_token
 CONTAINER_UID=1000
-GPU_DRIVER=nvidia
+GPU_DRIVER=cuda
 ```

-Any environment variables supported by InvokeAI can be set here - please see the [Configuration docs](https://invoke-ai.github.io/InvokeAI/features/CONFIGURATION/) for further detail.
+Any environment variables supported by InvokeAI can be set here. See the [Configuration docs](https://invoke-ai.github.io/InvokeAI/features/CONFIGURATION/) for further detail.

 ## Even More Customizing!
@@ -1,7 +1,5 @@
 # Copyright (c) 2023 Eugene Brodsky https://github.com/ebr

-version: '3.8'
-
 x-invokeai: &invokeai
     image: "local/invokeai:latest"
     build:
@@ -32,7 +30,7 @@ x-invokeai: &invokeai


 services:
-  invokeai-nvidia:
+  invokeai-cuda:
     <<: *invokeai
     deploy:
       resources:
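With the service renamed from `invokeai-nvidia` to `invokeai-cuda`, anything that starts the stack by service name needs the new name. A minimal sketch, assuming the compose file in the `docker` directory:

```bash
cd docker
docker compose up invokeai-cuda   # was: docker compose up invokeai-nvidia
```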
@@ -23,18 +23,18 @@ usermod -u ${USER_ID} ${USER} 1>/dev/null
 # but it is useful to have the full SSH server e.g. on Runpod.
 # (use SCP to copy files to/from the image, etc)
 if [[ -v "PUBLIC_KEY" ]] && [[ ! -d "${HOME}/.ssh" ]]; then
-  apt-get update
-  apt-get install -y openssh-server
-  pushd "$HOME"
-  mkdir -p .ssh
-  echo "${PUBLIC_KEY}" > .ssh/authorized_keys
-  chmod -R 700 .ssh
-  popd
-  service ssh start
+    apt-get update
+    apt-get install -y openssh-server
+    pushd "$HOME"
+    mkdir -p .ssh
+    echo "${PUBLIC_KEY}" >.ssh/authorized_keys
+    chmod -R 700 .ssh
+    popd
+    service ssh start
 fi

 mkdir -p "${INVOKEAI_ROOT}"
-chown --recursive ${USER} "${INVOKEAI_ROOT}"
+chown --recursive ${USER} "${INVOKEAI_ROOT}" || true
 cd "${INVOKEAI_ROOT}"

 # Run the CMD as the Container User (not root).
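The `|| true` on the `chown` presumably tolerates mounted runtime directories whose ownership cannot be changed (for example, NFS exports with root squash), where the entrypoint previously aborted even though the directory itself was writable. A hedged illustration of that scenario:

```bash
# Hypothetical NFS-backed runtime dir: chown fails under root squash,
# but with `|| true` the entrypoint proceeds and the app still starts.
docker run --volume /mnt/nfs/invokeai:/invokeai ghcr.io/invoke-ai/invokeai
```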
@@ -8,11 +8,15 @@ run() {
   local build_args=""
   local profile=""

+  # create .env file if it doesn't exist, otherwise docker compose will fail
+  touch .env
+
   # parse .env file for build args
   build_args=$(awk '$1 ~ /=[^$]/ && $0 !~ /^#/ {print "--build-arg " $0 " "}' .env) &&
     profile="$(awk -F '=' '/GPU_DRIVER/ {print $2}' .env)"

-  [[ -z "$profile" ]] && profile="nvidia"
+  # default to 'cuda' profile
+  [[ -z "$profile" ]] && profile="cuda"

   local service_name="invokeai-$profile"
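The effect of the new default is easiest to see end to end. Assuming an `invokeai-rocm` service exists alongside `invokeai-cuda` in the compose file, `run.sh` now resolves an empty or missing `GPU_DRIVER` to the `cuda` profile, and a `.env` override selects the ROCm service:

```bash
cd docker
./run.sh                        # no GPU_DRIVER set -> service "invokeai-cuda"
echo "GPU_DRIVER=rocm" >> .env
./run.sh                        # -> service "invokeai-rocm"
```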
@@ -408,7 +408,7 @@ config = get_config()

 logger = InvokeAILogger.get_logger(config=config)
 db = SqliteDatabase(config.db_path, logger)
-record_store = ModelRecordServiceSQL(db)
+record_store = ModelRecordServiceSQL(db, logger)
 queue = DownloadQueueService()
 queue.start()
@@ -4,50 +4,37 @@ title: Installing with Docker

 # :fontawesome-brands-docker: Docker

-!!! warning "macOS and AMD GPU Users"
+!!! warning "macOS users"

-    We highly recommend to Install InvokeAI locally using [these instructions](INSTALLATION.md),
-    because Docker containers can not access the GPU on macOS.
-
-!!! warning "AMD GPU Users"
-
-    Container support for AMD GPUs has been reported to work by the community, but has not received
-    extensive testing. Please make sure to set the `GPU_DRIVER=rocm` environment variable (see below), and
-    use the `build.sh` script to build the image for this to take effect at build time.
+    Docker can not access the GPU on macOS, so your generation speeds will be slow. [Install InvokeAI](INSTALLATION.md) instead.

 !!! tip "Linux and Windows Users"

-    For optimal performance, configure your Docker daemon to access your machine's GPU.
-    Docker Desktop on Windows [includes GPU support](https://www.docker.com/blog/wsl-2-gpu-support-for-docker-desktop-on-nvidia-gpus/).
-    Linux users should install and configure the [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html)
-
-## Why containers?
-
-They provide a flexible, reliable way to build and deploy InvokeAI.
-See [Processes](https://12factor.net/processes) under the Twelve-Factor App
-methodology for details on why running applications in such a stateless fashion is important.
-
-The container is configured for CUDA by default, but can be built to support AMD GPUs
-by setting the `GPU_DRIVER=rocm` environment variable at Docker image build time.
-
-Developers on Apple silicon (M1/M2/M3): You
-[can't access your GPU cores from Docker containers](https://github.com/pytorch/pytorch/issues/81224)
-and performance is reduced compared with running it directly on macOS but for
-development purposes it's fine. Once you're done with development tasks on your
-laptop you can build for the target platform and architecture and deploy to
-another environment with NVIDIA GPUs on-premises or in the cloud.
+    Configure Docker to access your machine's GPU.
+    Docker Desktop on Windows [includes GPU support](https://www.docker.com/blog/wsl-2-gpu-support-for-docker-desktop-on-nvidia-gpus/).
+    Linux users should follow the [NVIDIA](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html) or [AMD](https://rocm.docs.amd.com/projects/install-on-linux/en/latest/how-to/docker.html) documentation.

 ## TL;DR

-This assumes properly configured Docker on Linux or Windows/WSL2. Read on for detailed customization options.
+Ensure your Docker setup is able to use your GPU. Then:
+
+```bash
+docker run --runtime=nvidia --gpus=all --publish 9090:9090 ghcr.io/invoke-ai/invokeai
+```
+
+Once the container starts up, open http://localhost:9090 in your browser, install some models, and start generating.
+
+## Build-It-Yourself

 All the docker materials are located inside the [docker](https://github.com/invoke-ai/InvokeAI/tree/main/docker) directory in the Git repo.

 ```bash
 # docker compose commands should be run from the `docker` directory
 cd docker
 cp .env.sample .env
 docker compose up
 ```

-## Installation in a Linux container (desktop)
+We also ship the `run.sh` convenience script. See the `docker/README.md` file for detailed instructions on how to customize the docker setup to your needs.

 ### Prerequisites

@@ -58,18 +45,9 @@ Preferences, Resources, Advanced. Increase the CPUs and Memory to avoid this
 [Issue](https://github.com/invoke-ai/InvokeAI/issues/342). You may need to
 increase Swap and Disk image size too.

-#### Get a Huggingface-Token
-
-Besides the Docker Agent you will need an Account on
-[huggingface.co](https://huggingface.co/join).
-
-After you succesfully registered your account, go to
-[huggingface.co/settings/tokens](https://huggingface.co/settings/tokens), create
-a token and copy it, since you will need in for the next step.
-
 ### Setup

-Set up your environmnent variables. In the `docker` directory, make a copy of `.env.sample` and name it `.env`. Make changes as necessary.
+Set up your environment variables. In the `docker` directory, make a copy of `.env.sample` and name it `.env`. Make changes as necessary.

 Any environment variables supported by InvokeAI can be set here - please see the [CONFIGURATION](../features/CONFIGURATION.md) for further detail.

@@ -103,10 +81,9 @@ Once the container starts up (and configures the InvokeAI root directory if this

 ## Troubleshooting / FAQ

 - Q: I am running on Windows under WSL2, and am seeing a "no such file or directory" error.
-- A: Your `docker-entrypoint.sh` file likely has Windows (CRLF) as opposed to Unix (LF) line endings,
-  and you may have cloned this repository before the issue was fixed. To solve this, please change
-  the line endings in the `docker-entrypoint.sh` file to `LF`. You can do this in VSCode
+- A: Your `docker-entrypoint.sh` might have Windows (CRLF) line endings, depending how you cloned the repository.
+  To solve this, change the line endings in the `docker-entrypoint.sh` file to `LF`. You can do this in VSCode
   (`Ctrl+P` and search for "line endings"), or by using the `dos2unix` utility in WSL.
   Finally, you may delete `docker-entrypoint.sh` followed by `git pull; git checkout docker/docker-entrypoint.sh`
   to reset the file to its most recent version.
-  For more information on this issue, please see the [Docker Desktop documentation](https://docs.docker.com/desktop/troubleshoot/topics/#avoid-unexpected-syntax-errors-use-unix-style-line-endings-for-files-in-containers)
+  For more information on this issue, see [Docker Desktop documentation](https://docs.docker.com/desktop/troubleshoot/topics/#avoid-unexpected-syntax-errors-use-unix-style-line-endings-for-files-in-containers)
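For the CRLF fix mentioned in the updated FAQ answer, the `dos2unix` route is a one-liner from the repo root inside WSL:

```bash
# converts CRLF -> LF in place
dos2unix docker/docker-entrypoint.sh
```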
@@ -13,7 +13,7 @@ echo 2. Open the developer console
 echo 3. Command-line help
 echo Q - Quit
 echo.
-echo To update, download and run the installer from https://github.com/invoke-ai/InvokeAI/releases/latest.
+echo To update, download and run the installer from https://github.com/invoke-ai/InvokeAI/releases/latest
 echo.
 set /P choice="Please enter 1-4, Q: [1] "
 if not defined choice set choice=1
@@ -4,37 +4,39 @@ from logging import Logger

 import torch

-from ..services.board_image_records.board_image_records_sqlite import SqliteBoardImageRecordStorage
-from ..services.board_images.board_images_default import BoardImagesService
-from ..services.board_records.board_records_sqlite import SqliteBoardRecordStorage
-from ..services.boards.boards_default import BoardService
-from ..services.bulk_download.bulk_download_default import BulkDownloadService
-from ..services.config import InvokeAIAppConfig
-from ..services.download import DownloadQueueService
-from ..services.events.events_fastapievents import FastAPIEventService
-from ..services.image_files.image_files_disk import DiskImageFileStorage
-from ..services.image_records.image_records_sqlite import SqliteImageRecordStorage
-from ..services.images.images_default import ImageService
-from ..services.invocation_cache.invocation_cache_memory import MemoryInvocationCache
-from ..services.invocation_services import InvocationServices
-from ..services.invocation_stats.invocation_stats_default import InvocationStatsService
-from ..services.invoker import Invoker
-from ..services.model_images.model_images_default import ModelImageFileStorageDisk
-from ..services.model_manager.model_manager_default import ModelManagerService
-from ..services.model_records import ModelRecordServiceSQL
-from ..services.names.names_default import SimpleNameService
-from ..services.session_processor.session_processor_default import DefaultSessionProcessor, DefaultSessionRunner
-from ..services.session_queue.session_queue_sqlite import SqliteSessionQueue
-from ..services.urls.urls_default import LocalUrlService
-from ..services.workflow_records.workflow_records_sqlite import SqliteWorkflowRecordsStorage
+from invokeai.app.services.board_image_records.board_image_records_sqlite import SqliteBoardImageRecordStorage
+from invokeai.app.services.board_images.board_images_default import BoardImagesService
+from invokeai.app.services.board_records.board_records_sqlite import SqliteBoardRecordStorage
+from invokeai.app.services.boards.boards_default import BoardService
+from invokeai.app.services.bulk_download.bulk_download_default import BulkDownloadService
+from invokeai.app.services.config.config_default import InvokeAIAppConfig
+from invokeai.app.services.download.download_default import DownloadQueueService
+from invokeai.app.services.events.events_fastapievents import FastAPIEventService
+from invokeai.app.services.image_files.image_files_disk import DiskImageFileStorage
+from invokeai.app.services.image_records.image_records_sqlite import SqliteImageRecordStorage
+from invokeai.app.services.images.images_default import ImageService
+from invokeai.app.services.invocation_cache.invocation_cache_memory import MemoryInvocationCache
+from invokeai.app.services.invocation_services import InvocationServices
+from invokeai.app.services.invocation_stats.invocation_stats_default import InvocationStatsService
+from invokeai.app.services.invoker import Invoker
+from invokeai.app.services.model_images.model_images_default import ModelImageFileStorageDisk
+from invokeai.app.services.model_manager.model_manager_default import ModelManagerService
+from invokeai.app.services.model_records.model_records_sql import ModelRecordServiceSQL
+from invokeai.app.services.names.names_default import SimpleNameService
+from invokeai.app.services.object_serializer.object_serializer_disk import ObjectSerializerDisk
+from invokeai.app.services.object_serializer.object_serializer_forward_cache import ObjectSerializerForwardCache
+from invokeai.app.services.session_processor.session_processor_default import (
+    DefaultSessionProcessor,
+    DefaultSessionRunner,
+)
+from invokeai.app.services.session_queue.session_queue_sqlite import SqliteSessionQueue
+from invokeai.app.services.shared.sqlite.sqlite_util import init_db
+from invokeai.app.services.urls.urls_default import LocalUrlService
+from invokeai.app.services.workflow_records.workflow_records_sqlite import SqliteWorkflowRecordsStorage
+from invokeai.backend.stable_diffusion.diffusion.conditioning_data import ConditioningFieldData
+from invokeai.backend.util.logging import InvokeAILogger
+from invokeai.version.invokeai_version import __version__


 # TODO: is there a better way to achieve this?
 def check_internet() -> bool:
@@ -97,7 +99,7 @@ class ApiDependencies:
         model_images_service = ModelImageFileStorageDisk(model_images_folder / "model_images")
         model_manager = ModelManagerService.build_model_manager(
             app_config=configuration,
-            model_record_service=ModelRecordServiceSQL(db=db),
+            model_record_service=ModelRecordServiceSQL(db=db, logger=logger),
             download_queue=download_queue_service,
             events=events,
         )
@@ -10,14 +10,13 @@ from fastapi import Body
 from fastapi.routing import APIRouter
 from pydantic import BaseModel, Field

+from invokeai.app.api.dependencies import ApiDependencies
 from invokeai.app.invocations.upscale import ESRGAN_MODELS
 from invokeai.app.services.invocation_cache.invocation_cache_common import InvocationCacheStatus
 from invokeai.backend.image_util.infill_methods.patchmatch import PatchMatch
 from invokeai.backend.util.logging import logging
 from invokeai.version import __version__

-from ..dependencies import ApiDependencies
-

 class LogLevel(int, Enum):
     NotSet = logging.NOTSET
@@ -2,7 +2,7 @@ from fastapi import Body, HTTPException
 from fastapi.routing import APIRouter
 from pydantic import BaseModel, Field

-from ..dependencies import ApiDependencies
+from invokeai.app.api.dependencies import ApiDependencies

 board_images_router = APIRouter(prefix="/v1/board_images", tags=["boards"])
@@ -4,12 +4,11 @@ from fastapi import Body, HTTPException, Path, Query
 from fastapi.routing import APIRouter
 from pydantic import BaseModel, Field

+from invokeai.app.api.dependencies import ApiDependencies
 from invokeai.app.services.board_records.board_records_common import BoardChanges
 from invokeai.app.services.boards.boards_common import BoardDTO
 from invokeai.app.services.shared.pagination import OffsetPaginatedResults

-from ..dependencies import ApiDependencies
-
 boards_router = APIRouter(prefix="/v1/boards", tags=["boards"])

@@ -32,6 +31,7 @@ class DeleteBoardResult(BaseModel):
 )
 async def create_board(
     board_name: str = Query(description="The name of the board to create"),
+    is_private: bool = Query(default=False, description="Whether the board is private"),
 ) -> BoardDTO:
     """Creates a board"""
     try:
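The new `is_private` query parameter is exercised like any other FastAPI `Query` field. A sketch against a local instance (port and `/api` mount per the defaults seen elsewhere in this diff; the board name is illustrative):

```bash
curl -s -X POST "http://localhost:9090/api/v1/boards/?board_name=experiments&is_private=true"
```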
@@ -8,13 +8,12 @@ from fastapi.routing import APIRouter
 from pydantic.networks import AnyHttpUrl
 from starlette.exceptions import HTTPException

+from invokeai.app.api.dependencies import ApiDependencies
 from invokeai.app.services.download import (
     DownloadJob,
     UnknownJobIDException,
 )

-from ..dependencies import ApiDependencies
-
 download_queue_router = APIRouter(prefix="/v1/download_queue", tags=["download_queue"])
@@ -8,6 +8,7 @@ from fastapi.routing import APIRouter
 from PIL import Image
 from pydantic import BaseModel, Field, JsonValue

+from invokeai.app.api.dependencies import ApiDependencies
 from invokeai.app.invocations.fields import MetadataField
 from invokeai.app.services.image_records.image_records_common import (
     ImageCategory,
@@ -18,8 +19,6 @@ from invokeai.app.services.images.images_common import ImageDTO, ImageUrlsDTO
 from invokeai.app.services.shared.pagination import OffsetPaginatedResults
 from invokeai.app.services.shared.sqlite.sqlite_common import SQLiteDirection

-from ..dependencies import ApiDependencies
-
 images_router = APIRouter(prefix="/v1/images", tags=["images"])

@@ -234,21 +233,14 @@ async def get_image_workflow(
 )
 async def get_image_full(
     image_name: str = Path(description="The name of full-resolution image file to get"),
-) -> FileResponse:
+) -> Response:
     """Gets a full-resolution image file"""

     try:
         path = ApiDependencies.invoker.services.images.get_path(image_name)
-
         if not ApiDependencies.invoker.services.images.validate_path(path):
             raise HTTPException(status_code=404)

-        response = FileResponse(
-            path,
-            media_type="image/png",
-            filename=image_name,
-            content_disposition_type="inline",
-        )
+        with open(path, "rb") as f:
+            content = f.read()
+        response = Response(content, media_type="image/png")
         response.headers["Cache-Control"] = f"max-age={IMAGE_MAX_AGE}"
         return response
     except Exception:
@@ -269,15 +261,14 @@ async def get_image_full(
 )
 async def get_image_thumbnail(
     image_name: str = Path(description="The name of thumbnail image file to get"),
-) -> FileResponse:
+) -> Response:
     """Gets a thumbnail image file"""

     try:
         path = ApiDependencies.invoker.services.images.get_path(image_name, thumbnail=True)
         if not ApiDependencies.invoker.services.images.validate_path(path):
             raise HTTPException(status_code=404)

-        response = FileResponse(path, media_type="image/webp", content_disposition_type="inline")
+        with open(path, "rb") as f:
+            content = f.read()
+        response = Response(content, media_type="image/webp")
         response.headers["Cache-Control"] = f"max-age={IMAGE_MAX_AGE}"
         return response
     except Exception:
@@ -323,18 +314,12 @@ async def list_image_dtos(
     limit: int = Query(default=10, description="The number of images per page"),
     order_dir: SQLiteDirection = Query(default=SQLiteDirection.Descending, description="The order of sort"),
     starred_first: bool = Query(default=True, description="Whether to sort by starred images first"),
     search_term: Optional[str] = Query(default=None, description="The term to search for"),
 ) -> OffsetPaginatedResults[ImageDTO]:
     """Gets a list of image DTOs"""

     image_dtos = ApiDependencies.invoker.services.images.get_many(
-        offset,
-        limit,
-        starred_first,
-        order_dir,
-        image_origin,
-        categories,
-        is_intermediate,
-        board_id,
+        offset, limit, starred_first, order_dir, image_origin, categories, is_intermediate, board_id, search_term
     )

     return image_dtos
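Since `get_image_full` and `get_image_thumbnail` now read the file eagerly and return a plain `Response`, the caching behavior is easiest to verify from the headers. A sketch (the image name placeholder is illustrative, and the route prefix assumes the standard `/api` mount):

```bash
curl -sI "http://localhost:9090/api/v1/images/i/<image_name>.png/full" | grep -i cache-control
```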
@@ -16,6 +16,7 @@ from pydantic import AnyHttpUrl, BaseModel, ConfigDict, Field
 from starlette.exceptions import HTTPException
 from typing_extensions import Annotated

+from invokeai.app.api.dependencies import ApiDependencies
 from invokeai.app.services.model_images.model_images_common import ModelImageFileNotFoundException
 from invokeai.app.services.model_install.model_install_common import ModelInstallJob
 from invokeai.app.services.model_records import (
@@ -35,8 +36,6 @@ from invokeai.backend.model_manager.metadata.metadata_base import ModelMetadataW
 from invokeai.backend.model_manager.search import ModelSearch
 from invokeai.backend.model_manager.starter_models import STARTER_MODELS, StarterModel, StarterModelWithoutDependencies

-from ..dependencies import ApiDependencies
-
 model_manager_router = APIRouter(prefix="/v2/models", tags=["model_manager"])

 # images are immutable; set a high max-age
@@ -4,6 +4,7 @@ from fastapi import Body, Path, Query
 from fastapi.routing import APIRouter
 from pydantic import BaseModel

+from invokeai.app.api.dependencies import ApiDependencies
 from invokeai.app.services.session_processor.session_processor_common import SessionProcessorStatus
 from invokeai.app.services.session_queue.session_queue_common import (
     QUEUE_ITEM_STATUS,
@@ -19,8 +20,6 @@ from invokeai.app.services.session_queue.session_queue_common import (
 )
 from invokeai.app.services.shared.pagination import CursorPaginatedResults

-from ..dependencies import ApiDependencies
-
 session_queue_router = APIRouter(prefix="/v1/queue", tags=["queue"])
@@ -20,14 +20,9 @@ from torch.backends.mps import is_available as is_mps_available
 # noinspection PyUnresolvedReferences
 import invokeai.backend.util.hotfixes  # noqa: F401 (monkeypatching on import)
 import invokeai.frontend.web as web_dir
+from invokeai.app.api.dependencies import ApiDependencies
 from invokeai.app.api.no_cache_staticfiles import NoCacheStaticFiles
-from invokeai.app.services.config.config_default import get_config
-from invokeai.app.util.custom_openapi import get_openapi_func
-from invokeai.backend.util.devices import TorchDevice
-
-from ..backend.util.logging import InvokeAILogger
-from .api.dependencies import ApiDependencies
-from .api.routers import (
+from invokeai.app.api.routers import (
     app_info,
     board_images,
     boards,
@@ -38,7 +33,11 @@ from .api.routers import (
     utilities,
     workflows,
 )
-from .api.sockets import SocketIO
+from invokeai.app.api.sockets import SocketIO
+from invokeai.app.services.config.config_default import get_config
+from invokeai.app.util.custom_openapi import get_openapi_func
+from invokeai.backend.util.devices import TorchDevice
+from invokeai.backend.util.logging import InvokeAILogger

 app_config = get_config()

@@ -162,6 +161,7 @@ def invoke_api() -> None:
     # Taken from https://waylonwalker.com/python-find-available-port/, thanks Waylon!
     # https://github.com/WaylonWalker
     with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
         s.settimeout(1)
         if s.connect_ex(("localhost", port)) == 0:
             return find_port(port=port + 1)
         else:
@@ -40,7 +40,7 @@ from invokeai.app.util.misc import uuid_string
 from invokeai.backend.util.logging import InvokeAILogger

 if TYPE_CHECKING:
-    from ..services.invocation_services import InvocationServices
+    from invokeai.app.services.invocation_services import InvocationServices

 logger = InvokeAILogger.get_logger()
@@ -4,13 +4,12 @@
 import numpy as np
 from pydantic import ValidationInfo, field_validator

+from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation
+from invokeai.app.invocations.fields import InputField
 from invokeai.app.invocations.primitives import IntegerCollectionOutput
 from invokeai.app.services.shared.invocation_context import InvocationContext
 from invokeai.app.util.misc import SEED_MAX

-from .baseinvocation import BaseInvocation, invocation
-from .fields import InputField
-

 @invocation(
     "range", title="Integer Range", tags=["collection", "integer", "range"], category="collections", version="1.0.0"
@@ -5,6 +5,7 @@ from compel import Compel, ReturnedEmbeddingsType
 from compel.prompt_parser import Blend, Conjunction, CrossAttentionControlSubstitute, FlattenedPrompt, Fragment
 from transformers import CLIPTextModel, CLIPTextModelWithProjection, CLIPTokenizer

+from invokeai.app.invocations.baseinvocation import BaseInvocation, BaseInvocationOutput, invocation, invocation_output
 from invokeai.app.invocations.fields import (
     ConditioningField,
     FieldDescriptions,
@@ -14,6 +15,7 @@ from invokeai.app.invocations.fields import (
     TensorField,
     UIComponent,
 )
+from invokeai.app.invocations.model import CLIPField
 from invokeai.app.invocations.primitives import ConditioningOutput
 from invokeai.app.services.shared.invocation_context import InvocationContext
 from invokeai.app.util.ti_utils import generate_ti_list
@@ -26,9 +28,6 @@ from invokeai.backend.stable_diffusion.diffusion.conditioning_data import (
 )
 from invokeai.backend.util.devices import TorchDevice

-from .baseinvocation import BaseInvocation, BaseInvocationOutput, invocation, invocation_output
-from .model import CLIPField
-
 # unconditioned: Optional[torch.Tensor]
@@ -1,6 +1,5 @@
 from typing import Literal

-from invokeai.backend.stable_diffusion.schedulers import SCHEDULER_MAP
 from invokeai.backend.util.devices import TorchDevice

 LATENT_SCALE_FACTOR = 8
@@ -11,9 +10,6 @@ factor is hard-coded to a literal '8' rather than using this constant.
 The ratio of image:latent dimensions is LATENT_SCALE_FACTOR:1, or 8:1.
 """

-SCHEDULER_NAME_VALUES = Literal[tuple(SCHEDULER_MAP.keys())]
-"""A literal type representing the valid scheduler names."""
-
 IMAGE_MODES = Literal["L", "RGB", "RGBA", "CMYK", "YCbCr", "LAB", "HSV", "I", "F"]
 """A literal type for PIL image modes supported by Invoke"""
@@ -22,6 +22,13 @@ from controlnet_aux.util import HWC3, ade_palette
 from PIL import Image
 from pydantic import BaseModel, Field, field_validator, model_validator

+from invokeai.app.invocations.baseinvocation import (
+    BaseInvocation,
+    BaseInvocationOutput,
+    Classification,
+    invocation,
+    invocation_output,
+)
 from invokeai.app.invocations.fields import (
     FieldDescriptions,
     ImageField,
@@ -46,8 +53,6 @@ from invokeai.backend.image_util.util import np_to_pil, pil_to_np
 from invokeai.backend.model_manager.load import LoadedModelWithoutConfig
 from invokeai.backend.util.devices import TorchDevice

-from .baseinvocation import BaseInvocation, BaseInvocationOutput, Classification, invocation, invocation_output
-

 class ControlField(BaseModel):
     image: ImageField = Field(description="The control image")
@@ -5,13 +5,11 @@ import cv2 as cv
 import numpy
 from PIL import Image, ImageOps

-from invokeai.app.invocations.fields import ImageField
+from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation
+from invokeai.app.invocations.fields import ImageField, InputField, WithBoard, WithMetadata
 from invokeai.app.invocations.primitives import ImageOutput
 from invokeai.app.services.shared.invocation_context import InvocationContext

-from .baseinvocation import BaseInvocation, invocation
-from .fields import InputField, WithBoard, WithMetadata
-

 @invocation("cv_inpaint", title="OpenCV Inpaint", tags=["opencv", "inpaint"], category="inpaint", version="1.3.1")
 class CvInpaintInvocation(BaseInvocation, WithMetadata, WithBoard):
@@ -1,5 +1,6 @@
 # Copyright (c) 2023 Kyle Schouviller (https://github.com/kyle0654)
 import inspect
+import os
 from contextlib import ExitStack
 from typing import Any, Dict, Iterator, List, Optional, Tuple, Union

@@ -17,7 +18,7 @@ from torchvision.transforms.functional import resize as tv_resize
 from transformers import CLIPVisionModelWithProjection

 from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation
-from invokeai.app.invocations.constants import LATENT_SCALE_FACTOR, SCHEDULER_NAME_VALUES
+from invokeai.app.invocations.constants import LATENT_SCALE_FACTOR
 from invokeai.app.invocations.controlnet_image_processors import ControlField
 from invokeai.app.invocations.fields import (
     ConditioningField,
@@ -39,6 +40,7 @@ from invokeai.backend.lora import LoRAModelRaw
 from invokeai.backend.model_manager import BaseModelType
 from invokeai.backend.model_patcher import ModelPatcher
 from invokeai.backend.stable_diffusion import PipelineIntermediateState, set_seamless
+from invokeai.backend.stable_diffusion.denoise_context import DenoiseContext, DenoiseInputs
 from invokeai.backend.stable_diffusion.diffusers_pipeline import (
     ControlNetData,
     StableDiffusionGeneratorPipeline,
@@ -53,7 +55,13 @@ from invokeai.backend.stable_diffusion.diffusion.conditioning_data import (
     TextConditioningData,
     TextConditioningRegions,
 )
+from invokeai.backend.stable_diffusion.diffusion.custom_atttention import CustomAttnProcessor2_0
+from invokeai.backend.stable_diffusion.diffusion_backend import StableDiffusionBackend
+from invokeai.backend.stable_diffusion.extension_callback_type import ExtensionCallbackType
+from invokeai.backend.stable_diffusion.extensions.preview import PreviewExt
+from invokeai.backend.stable_diffusion.extensions_manager import ExtensionsManager
 from invokeai.backend.stable_diffusion.schedulers import SCHEDULER_MAP
+from invokeai.backend.stable_diffusion.schedulers.schedulers import SCHEDULER_NAME_VALUES
 from invokeai.backend.util.devices import TorchDevice
 from invokeai.backend.util.hotfixes import ControlNetModel
 from invokeai.backend.util.mask import to_standard_float_mask
@@ -313,9 +321,10 @@ class DenoiseLatentsInvocation(BaseInvocation):
         context: InvocationContext,
         positive_conditioning_field: Union[ConditioningField, list[ConditioningField]],
         negative_conditioning_field: Union[ConditioningField, list[ConditioningField]],
-        unet: UNet2DConditionModel,
         latent_height: int,
         latent_width: int,
+        device: torch.device,
+        dtype: torch.dtype,
         cfg_scale: float | list[float],
         steps: int,
         cfg_rescale_multiplier: float,
@@ -329,10 +338,10 @@ class DenoiseLatentsInvocation(BaseInvocation):
             uncond_list = [uncond_list]

         cond_text_embeddings, cond_text_embedding_masks = DenoiseLatentsInvocation._get_text_embeddings_and_masks(
-            cond_list, context, unet.device, unet.dtype
+            cond_list, context, device, dtype
         )
         uncond_text_embeddings, uncond_text_embedding_masks = DenoiseLatentsInvocation._get_text_embeddings_and_masks(
-            uncond_list, context, unet.device, unet.dtype
+            uncond_list, context, device, dtype
         )

         cond_text_embedding, cond_regions = DenoiseLatentsInvocation._concat_regional_text_embeddings(
@@ -340,14 +349,14 @@ class DenoiseLatentsInvocation(BaseInvocation):
             masks=cond_text_embedding_masks,
             latent_height=latent_height,
             latent_width=latent_width,
-            dtype=unet.dtype,
+            dtype=dtype,
         )
         uncond_text_embedding, uncond_regions = DenoiseLatentsInvocation._concat_regional_text_embeddings(
             text_conditionings=uncond_text_embeddings,
             masks=uncond_text_embedding_masks,
             latent_height=latent_height,
             latent_width=latent_width,
-            dtype=unet.dtype,
+            dtype=dtype,
         )

         if isinstance(cfg_scale, list):
@@ -706,9 +715,108 @@ class DenoiseLatentsInvocation(BaseInvocation):

         return seed, noise, latents

+    def invoke(self, context: InvocationContext) -> LatentsOutput:
+        if os.environ.get("USE_MODULAR_DENOISE", False):
+            return self._new_invoke(context)
+        else:
+            return self._old_invoke(context)
+
     @torch.no_grad()
     @SilenceWarnings()  # This quenches the NSFW nag from diffusers.
-    def invoke(self, context: InvocationContext) -> LatentsOutput:
+    def _new_invoke(self, context: InvocationContext) -> LatentsOutput:
+        ext_manager = ExtensionsManager(is_canceled=context.util.is_canceled)
+
+        device = TorchDevice.choose_torch_device()
+        dtype = TorchDevice.choose_torch_dtype()
+
+        seed, noise, latents = self.prepare_noise_and_latents(context, self.noise, self.latents)
+        latents = latents.to(device=device, dtype=dtype)
+        if noise is not None:
+            noise = noise.to(device=device, dtype=dtype)
+
+        _, _, latent_height, latent_width = latents.shape
+
+        conditioning_data = self.get_conditioning_data(
+            context=context,
+            positive_conditioning_field=self.positive_conditioning,
+            negative_conditioning_field=self.negative_conditioning,
+            cfg_scale=self.cfg_scale,
+            steps=self.steps,
+            latent_height=latent_height,
+            latent_width=latent_width,
+            device=device,
+            dtype=dtype,
+            # TODO: old backend, remove
+            cfg_rescale_multiplier=self.cfg_rescale_multiplier,
+        )
+
+        scheduler = get_scheduler(
+            context=context,
+            scheduler_info=self.unet.scheduler,
+            scheduler_name=self.scheduler,
+            seed=seed,
+        )
+
+        timesteps, init_timestep, scheduler_step_kwargs = self.init_scheduler(
+            scheduler,
+            seed=seed,
+            device=device,
+            steps=self.steps,
+            denoising_start=self.denoising_start,
+            denoising_end=self.denoising_end,
+        )
+
+        denoise_ctx = DenoiseContext(
+            inputs=DenoiseInputs(
+                orig_latents=latents,
+                timesteps=timesteps,
+                init_timestep=init_timestep,
+                noise=noise,
+                seed=seed,
+                scheduler_step_kwargs=scheduler_step_kwargs,
+                conditioning_data=conditioning_data,
+                attention_processor_cls=CustomAttnProcessor2_0,
+            ),
+            unet=None,
+            scheduler=scheduler,
+        )
+
+        # get the unet's config so that we can pass the base to sd_step_callback()
+        unet_config = context.models.get_config(self.unet.unet.key)
+
+        ### preview
+        def step_callback(state: PipelineIntermediateState) -> None:
+            context.util.sd_step_callback(state, unet_config.base)
+
+        ext_manager.add_extension(PreviewExt(step_callback))
+
+        # ext: t2i/ip adapter
+        ext_manager.run_callback(ExtensionCallbackType.SETUP, denoise_ctx)
+
+        unet_info = context.models.load(self.unet.unet)
+        assert isinstance(unet_info.model, UNet2DConditionModel)
+        with (
+            unet_info.model_on_device() as (model_state_dict, unet),
+            ModelPatcher.patch_unet_attention_processor(unet, denoise_ctx.inputs.attention_processor_cls),
+            # ext: controlnet
+            ext_manager.patch_extensions(unet),
+            # ext: freeu, seamless, ip adapter, lora
+            ext_manager.patch_unet(model_state_dict, unet),
+        ):
+            sd_backend = StableDiffusionBackend(unet, scheduler)
+            denoise_ctx.unet = unet
+            result_latents = sd_backend.latents_from_embeddings(denoise_ctx, ext_manager)
+
+        # https://discuss.huggingface.co/t/memory-usage-by-later-pipeline-stages/23699
+        result_latents = result_latents.detach().to("cpu")
+        TorchDevice.empty_cache()
+
+        name = context.tensors.save(tensor=result_latents)
+        return LatentsOutput.build(latents_name=name, latents=result_latents, seed=None)
+
+    @torch.no_grad()
+    @SilenceWarnings()  # This quenches the NSFW nag from diffusers.
+    def _old_invoke(self, context: InvocationContext) -> LatentsOutput:
         seed, noise, latents = self.prepare_noise_and_latents(context, self.noise, self.latents)

         mask, masked_latents, gradient_mask = self.prep_inpaint_mask(context, latents)
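The dispatch at the top of the new `invoke` keys off a plain environment variable, so opting into the modular denoise backend requires no config change. A sketch, assuming the standard `invokeai-web` entry point:

```bash
# any non-empty value enables _new_invoke; unset it to use _old_invoke
USE_MODULAR_DENOISE=1 invokeai-web
```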
@@ -787,7 +895,8 @@ class DenoiseLatentsInvocation(BaseInvocation):
                 context=context,
                 positive_conditioning_field=self.positive_conditioning,
                 negative_conditioning_field=self.negative_conditioning,
-                unet=unet,
+                device=unet.device,
+                dtype=unet.dtype,
                 latent_height=latent_height,
                 latent_width=latent_width,
                 cfg_scale=self.cfg_scale,
@@ -48,6 +48,7 @@ class UIType(str, Enum, metaclass=MetaEnum):
     ControlNetModel = "ControlNetModelField"
     IPAdapterModel = "IPAdapterModelField"
     T2IAdapterModel = "T2IAdapterModelField"
+    SpandrelImageToImageModel = "SpandrelImageToImageModelField"
     # endregion

     # region Misc Field Types
@@ -134,6 +135,7 @@ class FieldDescriptions:
     sdxl_main_model = "SDXL Main model (UNet, VAE, CLIP1, CLIP2) to load"
     sdxl_refiner_model = "SDXL Refiner Main Modde (UNet, VAE, CLIP2) to load"
     onnx_main_model = "ONNX Main model (UNet, VAE, CLIP) to load"
+    spandrel_image_to_image_model = "Image-to-Image model"
     lora_weight = "The weight at which the LoRA is applied to each model"
     compel_prompt = "Prompt to be parsed by Compel to create a conditioning tensor"
     raw_prompt = "Raw prompt text (no parsing)"
@@ -160,6 +162,7 @@ class FieldDescriptions:
     fp32 = "Whether or not to use full float32 precision"
     precision = "Precision to use"
     tiled = "Processing using overlapping tiles (reduce memory consumption)"
+    vae_tile_size = "The tile size for VAE tiling in pixels (image space). If set to 0, the default tile size for the model will be used. Larger tile sizes generally produce better results at the cost of higher memory usage."
     detect_res = "Pixel resolution for detection"
     image_res = "Pixel resolution for output image"
     safe_mode = "Whether or not to use safe mode"
@@ -6,6 +6,7 @@ import cv2
 import numpy
 from PIL import Image, ImageChops, ImageFilter, ImageOps

+from invokeai.app.invocations.baseinvocation import BaseInvocation, Classification, invocation
 from invokeai.app.invocations.constants import IMAGE_MODES
 from invokeai.app.invocations.fields import (
     ColorField,
@@ -21,8 +22,6 @@ from invokeai.app.services.shared.invocation_context import InvocationContext
 from invokeai.backend.image_util.invisible_watermark import InvisibleWatermark
 from invokeai.backend.image_util.safety_checker import SafetyChecker

-from .baseinvocation import BaseInvocation, Classification, invocation
-

 @invocation("show_image", title="Show Image", tags=["image"], category="image", version="1.0.1")
 class ShowImageInvocation(BaseInvocation):
@ -1,3 +1,4 @@
|
||||
from contextlib import nullcontext
|
||||
from functools import singledispatchmethod
|
||||
|
||||
import einops
|
||||
@ -12,7 +13,7 @@ from diffusers.models.autoencoders.autoencoder_kl import AutoencoderKL
|
||||
from diffusers.models.autoencoders.autoencoder_tiny import AutoencoderTiny

from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation
from invokeai.app.invocations.constants import DEFAULT_PRECISION
from invokeai.app.invocations.constants import DEFAULT_PRECISION, LATENT_SCALE_FACTOR
from invokeai.app.invocations.fields import (
    FieldDescriptions,
    ImageField,
@ -24,6 +25,7 @@ from invokeai.app.invocations.primitives import LatentsOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.model_manager import LoadedModel
from invokeai.backend.stable_diffusion.diffusers_pipeline import image_resized_to_grid_as_tensor
from invokeai.backend.stable_diffusion.vae_tiling import patch_vae_tiling_params


@invocation(
@ -31,7 +33,7 @@ from invokeai.backend.stable_diffusion.diffusers_pipeline import image_resized_t
    title="Image to Latents",
    tags=["latents", "image", "vae", "i2l"],
    category="latents",
    version="1.0.2",
    version="1.1.0",
)
class ImageToLatentsInvocation(BaseInvocation):
    """Encodes an image into latents."""
@ -44,12 +46,17 @@ class ImageToLatentsInvocation(BaseInvocation):
        input=Input.Connection,
    )
    tiled: bool = InputField(default=False, description=FieldDescriptions.tiled)
    # NOTE: tile_size = 0 is a special value. We use this rather than `int | None`, because the workflow UI does not
    # offer a way to directly set None values.
    tile_size: int = InputField(default=0, multiple_of=8, description=FieldDescriptions.vae_tile_size)
    fp32: bool = InputField(default=DEFAULT_PRECISION == torch.float32, description=FieldDescriptions.fp32)

    @staticmethod
    def vae_encode(vae_info: LoadedModel, upcast: bool, tiled: bool, image_tensor: torch.Tensor) -> torch.Tensor:
    def vae_encode(
        vae_info: LoadedModel, upcast: bool, tiled: bool, image_tensor: torch.Tensor, tile_size: int = 0
    ) -> torch.Tensor:
        with vae_info as vae:
            assert isinstance(vae, torch.nn.Module)
            assert isinstance(vae, (AutoencoderKL, AutoencoderTiny))
            orig_dtype = vae.dtype
            if upcast:
                vae.to(dtype=torch.float32)
@ -81,9 +88,18 @@ class ImageToLatentsInvocation(BaseInvocation):
            else:
                vae.disable_tiling()

            tiling_context = nullcontext()
            if tile_size > 0:
                tiling_context = patch_vae_tiling_params(
                    vae,
                    tile_sample_min_size=tile_size,
                    tile_latent_min_size=tile_size // LATENT_SCALE_FACTOR,
                    tile_overlap_factor=0.25,
                )

            # non_noised_latents_from_image
            image_tensor = image_tensor.to(device=vae.device, dtype=vae.dtype)
            with torch.inference_mode():
            with torch.inference_mode(), tiling_context:
                latents = ImageToLatentsInvocation._encode_to_tensor(vae, image_tensor)

                latents = vae.config.scaling_factor * latents
@ -101,7 +117,9 @@ class ImageToLatentsInvocation(BaseInvocation):
        if image_tensor.dim() == 3:
            image_tensor = einops.rearrange(image_tensor, "c h w -> 1 c h w")

        latents = self.vae_encode(vae_info, self.fp32, self.tiled, image_tensor)
        latents = self.vae_encode(
            vae_info=vae_info, upcast=self.fp32, tiled=self.tiled, image_tensor=image_tensor, tile_size=self.tile_size
        )

        latents = latents.to("cpu")
        name = context.tensors.save(tensor=latents)

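The control flow in the hunk above is worth spelling out: `tile_size == 0` means "use the model's default tiling parameters", and rather than branching around the encode call, the node picks between `nullcontext()` and a parameter-patching context manager, so the `with torch.inference_mode(), tiling_context:` line stays identical in both cases. A minimal, self-contained sketch of that pattern (the `patch_params` helper below is a hypothetical stand-in for `patch_vae_tiling_params`, written only to illustrate the save/override/restore shape):

```python
from contextlib import contextmanager, nullcontext


@contextmanager
def patch_params(obj, **overrides):
    # Hypothetical stand-in for patch_vae_tiling_params: temporarily override
    # attributes on `obj` and restore the originals on exit, even on error.
    saved = {name: getattr(obj, name) for name in overrides}
    try:
        for name, value in overrides.items():
            setattr(obj, name, value)
        yield obj
    finally:
        for name, value in saved.items():
            setattr(obj, name, value)


class FakeVae:
    tile_sample_min_size = 512  # the model's built-in default


vae = FakeVae()
tile_size = 256  # 0 would mean "leave the defaults alone"
tiling_context = patch_params(vae, tile_sample_min_size=tile_size) if tile_size > 0 else nullcontext()
with tiling_context:
    assert vae.tile_sample_min_size == 256  # override active only inside the block
assert vae.tile_sample_min_size == 512

```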
@ -3,7 +3,9 @@ from typing import Literal, get_args

from PIL import Image

from invokeai.app.invocations.fields import ColorField, ImageField
from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation
from invokeai.app.invocations.fields import ColorField, ImageField, InputField, WithBoard, WithMetadata
from invokeai.app.invocations.image import PIL_RESAMPLING_MAP, PIL_RESAMPLING_MODES
from invokeai.app.invocations.primitives import ImageOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.app.util.misc import SEED_MAX
@ -14,10 +16,6 @@ from invokeai.backend.image_util.infill_methods.patchmatch import PatchMatch, in
from invokeai.backend.image_util.infill_methods.tile import infill_tile
from invokeai.backend.util.logging import InvokeAILogger

from .baseinvocation import BaseInvocation, invocation
from .fields import InputField, WithBoard, WithMetadata
from .image import PIL_RESAMPLING_MAP, PIL_RESAMPLING_MODES

logger = InvokeAILogger.get_logger()

@ -1,3 +1,5 @@
from contextlib import nullcontext

import torch
from diffusers.image_processor import VaeImageProcessor
from diffusers.models.attention_processor import (
@ -8,10 +10,9 @@ from diffusers.models.attention_processor import (
)
from diffusers.models.autoencoders.autoencoder_kl import AutoencoderKL
from diffusers.models.autoencoders.autoencoder_tiny import AutoencoderTiny
from diffusers.models.unets.unet_2d_condition import UNet2DConditionModel

from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation
from invokeai.app.invocations.constants import DEFAULT_PRECISION
from invokeai.app.invocations.constants import DEFAULT_PRECISION, LATENT_SCALE_FACTOR
from invokeai.app.invocations.fields import (
    FieldDescriptions,
    Input,
@ -24,6 +25,7 @@ from invokeai.app.invocations.model import VAEField
from invokeai.app.invocations.primitives import ImageOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.stable_diffusion import set_seamless
from invokeai.backend.stable_diffusion.vae_tiling import patch_vae_tiling_params
from invokeai.backend.util.devices import TorchDevice


@ -32,7 +34,7 @@ from invokeai.backend.util.devices import TorchDevice
    title="Latents to Image",
    tags=["latents", "image", "vae", "l2i"],
    category="latents",
    version="1.2.2",
    version="1.3.0",
)
class LatentsToImageInvocation(BaseInvocation, WithMetadata, WithBoard):
    """Generates an image from latents."""
@ -46,6 +48,9 @@ class LatentsToImageInvocation(BaseInvocation, WithMetadata, WithBoard):
        input=Input.Connection,
    )
    tiled: bool = InputField(default=False, description=FieldDescriptions.tiled)
    # NOTE: tile_size = 0 is a special value. We use this rather than `int | None`, because the workflow UI does not
    # offer a way to directly set None values.
    tile_size: int = InputField(default=0, multiple_of=8, description=FieldDescriptions.vae_tile_size)
    fp32: bool = InputField(default=DEFAULT_PRECISION == torch.float32, description=FieldDescriptions.fp32)

    @torch.no_grad()
@ -53,9 +58,9 @@ class LatentsToImageInvocation(BaseInvocation, WithMetadata, WithBoard):
        latents = context.tensors.load(self.latents.latents_name)

        vae_info = context.models.load(self.vae.vae)
        assert isinstance(vae_info.model, (UNet2DConditionModel, AutoencoderKL, AutoencoderTiny))
        assert isinstance(vae_info.model, (AutoencoderKL, AutoencoderTiny))
        with set_seamless(vae_info.model, self.vae.seamless_axes), vae_info as vae:
            assert isinstance(vae, torch.nn.Module)
            assert isinstance(vae, (AutoencoderKL, AutoencoderTiny))
            latents = latents.to(vae.device)
            if self.fp32:
                vae.to(dtype=torch.float32)
@ -87,10 +92,19 @@ class LatentsToImageInvocation(BaseInvocation, WithMetadata, WithBoard):
            else:
                vae.disable_tiling()

            tiling_context = nullcontext()
            if self.tile_size > 0:
                tiling_context = patch_vae_tiling_params(
                    vae,
                    tile_sample_min_size=self.tile_size,
                    tile_latent_min_size=self.tile_size // LATENT_SCALE_FACTOR,
                    tile_overlap_factor=0.25,
                )

            # clear memory as vae decode can request a lot
            TorchDevice.empty_cache()

            with torch.inference_mode():
            with torch.inference_mode(), tiling_context:
                # copied from diffusers pipeline
                latents = latents / vae.config.scaling_factor
                image = vae.decode(latents, return_dict=False)[0]

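Both the i2l and l2i hunks convert the user-facing `tile_size`, given in image-space pixels (hence `multiple_of=8`), into latent units with an integer divide. `LATENT_SCALE_FACTOR` is 8 for Stable Diffusion, matching the VAE's 8x-per-side downsampling, so the conversion is just:

```python
LATENT_SCALE_FACTOR = 8  # SD VAEs downsample height/width by 8x

tile_size = 512  # pixel-space tile edge, constrained to a multiple of 8
tile_latent_min_size = tile_size // LATENT_SCALE_FACTOR
assert tile_latent_min_size == 64  # the same tile, measured in latent cells
```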
@ -5,12 +5,11 @@ from typing import Literal
import numpy as np
from pydantic import ValidationInfo, field_validator

from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation
from invokeai.app.invocations.fields import FieldDescriptions, InputField
from invokeai.app.invocations.primitives import FloatOutput, IntegerOutput
from invokeai.app.services.shared.invocation_context import InvocationContext

from .baseinvocation import BaseInvocation, invocation


@invocation("add", title="Add Integers", tags=["math", "add"], category="math", version="1.0.1")
class AddInvocation(BaseInvocation):

@ -14,8 +14,7 @@ from invokeai.app.invocations.fields import (
from invokeai.app.invocations.model import ModelIdentifierField
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.app.util.controlnet_utils import CONTROLNET_MODE_VALUES, CONTROLNET_RESIZE_VALUES

from ...version import __version__
from invokeai.version.invokeai_version import __version__


class MetadataItemField(BaseModel):

@ -3,18 +3,17 @@ from typing import List, Optional

from pydantic import BaseModel, Field

from invokeai.app.invocations.fields import FieldDescriptions, Input, InputField, OutputField, UIType
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.app.shared.models import FreeUConfig
from invokeai.backend.model_manager.config import AnyModelConfig, BaseModelType, ModelType, SubModelType

from .baseinvocation import (
from invokeai.app.invocations.baseinvocation import (
    BaseInvocation,
    BaseInvocationOutput,
    Classification,
    invocation,
    invocation_output,
)
from invokeai.app.invocations.fields import FieldDescriptions, Input, InputField, OutputField, UIType
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.app.shared.models import FreeUConfig
from invokeai.backend.model_manager.config import AnyModelConfig, BaseModelType, ModelType, SubModelType


class ModelIdentifierField(BaseModel):

@ -4,18 +4,12 @@

import torch
from pydantic import field_validator

from invokeai.app.invocations.baseinvocation import BaseInvocation, BaseInvocationOutput, invocation, invocation_output
from invokeai.app.invocations.constants import LATENT_SCALE_FACTOR
from invokeai.app.invocations.fields import FieldDescriptions, InputField, LatentsField, OutputField
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.app.util.misc import SEED_MAX

from ...backend.util.devices import TorchDevice
from .baseinvocation import (
    BaseInvocation,
    BaseInvocationOutput,
    invocation,
    invocation_output,
)
from invokeai.backend.util.devices import TorchDevice

"""
Utilities

@ -39,12 +39,11 @@ from easing_functions import (
)
from matplotlib.ticker import MaxNLocator

from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation
from invokeai.app.invocations.fields import InputField
from invokeai.app.invocations.primitives import FloatCollectionOutput
from invokeai.app.services.shared.invocation_context import InvocationContext

from .baseinvocation import BaseInvocation, invocation
from .fields import InputField


@invocation(
    "float_range",

@ -4,6 +4,7 @@ from typing import Optional

import torch

from invokeai.app.invocations.baseinvocation import BaseInvocation, BaseInvocationOutput, invocation, invocation_output
from invokeai.app.invocations.constants import LATENT_SCALE_FACTOR
from invokeai.app.invocations.fields import (
    ColorField,
@ -21,13 +22,6 @@ from invokeai.app.invocations.fields import (
from invokeai.app.services.images.images_common import ImageDTO
from invokeai.app.services.shared.invocation_context import InvocationContext

from .baseinvocation import (
    BaseInvocation,
    BaseInvocationOutput,
    invocation,
    invocation_output,
)

"""
Primitives: Boolean, Integer, Float, String, Image, Latents, Conditioning, Color
- primitive nodes

@ -5,12 +5,11 @@ import numpy as np
from dynamicprompts.generators import CombinatorialPromptGenerator, RandomPromptGenerator
from pydantic import field_validator

from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation
from invokeai.app.invocations.fields import InputField, UIComponent
from invokeai.app.invocations.primitives import StringCollectionOutput
from invokeai.app.services.shared.invocation_context import InvocationContext

from .baseinvocation import BaseInvocation, invocation
from .fields import InputField, UIComponent


@invocation(
    "dynamic_prompt",

@ -1,5 +1,4 @@
from invokeai.app.invocations.baseinvocation import BaseInvocation, BaseInvocationOutput, invocation, invocation_output
from invokeai.app.invocations.constants import SCHEDULER_NAME_VALUES
from invokeai.app.invocations.fields import (
    FieldDescriptions,
    InputField,
@ -7,6 +6,7 @@ from invokeai.app.invocations.fields import (
    UIType,
)
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.stable_diffusion.schedulers.schedulers import SCHEDULER_NAME_VALUES


@invocation_output("scheduler_output")

@ -1,15 +1,9 @@
from invokeai.app.invocations.baseinvocation import BaseInvocation, BaseInvocationOutput, invocation, invocation_output
from invokeai.app.invocations.fields import FieldDescriptions, InputField, OutputField, UIType
from invokeai.app.invocations.model import CLIPField, ModelIdentifierField, UNetField, VAEField
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.model_manager import SubModelType

from .baseinvocation import (
    BaseInvocation,
    BaseInvocationOutput,
    invocation,
    invocation_output,
)
from .model import CLIPField, ModelIdentifierField, UNetField, VAEField


@invocation_output("sdxl_model_loader_output")
class SDXLModelLoaderOutput(BaseInvocationOutput):

144
invokeai/app/invocations/spandrel_image_to_image.py
Normal file
@ -0,0 +1,144 @@
import numpy as np
import torch
from PIL import Image
from tqdm import tqdm

from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation
from invokeai.app.invocations.fields import (
    FieldDescriptions,
    ImageField,
    InputField,
    UIType,
    WithBoard,
    WithMetadata,
)
from invokeai.app.invocations.model import ModelIdentifierField
from invokeai.app.invocations.primitives import ImageOutput
from invokeai.app.services.session_processor.session_processor_common import CanceledException
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.spandrel_image_to_image_model import SpandrelImageToImageModel
from invokeai.backend.tiles.tiles import calc_tiles_min_overlap
from invokeai.backend.tiles.utils import TBLR, Tile


@invocation("spandrel_image_to_image", title="Image-to-Image", tags=["upscale"], category="upscale", version="1.1.0")
class SpandrelImageToImageInvocation(BaseInvocation, WithMetadata, WithBoard):
    """Run any spandrel image-to-image model (https://github.com/chaiNNer-org/spandrel)."""

    image: ImageField = InputField(description="The input image")
    image_to_image_model: ModelIdentifierField = InputField(
        title="Image-to-Image Model",
        description=FieldDescriptions.spandrel_image_to_image_model,
        ui_type=UIType.SpandrelImageToImageModel,
    )
    tile_size: int = InputField(
        default=512, description="The tile size for tiled image-to-image. Set to 0 to disable tiling."
    )

    def _scale_tile(self, tile: Tile, scale: int) -> Tile:
        return Tile(
            coords=TBLR(
                top=tile.coords.top * scale,
                bottom=tile.coords.bottom * scale,
                left=tile.coords.left * scale,
                right=tile.coords.right * scale,
            ),
            overlap=TBLR(
                top=tile.overlap.top * scale,
                bottom=tile.overlap.bottom * scale,
                left=tile.overlap.left * scale,
                right=tile.overlap.right * scale,
            ),
        )

    @torch.inference_mode()
    def invoke(self, context: InvocationContext) -> ImageOutput:
        # Images are converted to RGB, because most models don't support an alpha channel. In the future, we may want to
        # revisit this.
        image = context.images.get_pil(self.image.image_name, mode="RGB")

        # Compute the image tiles.
        if self.tile_size > 0:
            min_overlap = 20
            tiles = calc_tiles_min_overlap(
                image_height=image.height,
                image_width=image.width,
                tile_height=self.tile_size,
                tile_width=self.tile_size,
                min_overlap=min_overlap,
            )
        else:
            # No tiling. Generate a single tile that covers the entire image.
            min_overlap = 0
            tiles = [
                Tile(
                    coords=TBLR(top=0, bottom=image.height, left=0, right=image.width),
                    overlap=TBLR(top=0, bottom=0, left=0, right=0),
                )
            ]

        # Sort tiles first by left x coordinate, then by top y coordinate. During tile processing, we want to iterate
        # over tiles left-to-right, top-to-bottom.
        tiles = sorted(tiles, key=lambda x: x.coords.left)
        tiles = sorted(tiles, key=lambda x: x.coords.top)

        # Prepare input image for inference.
        image_tensor = SpandrelImageToImageModel.pil_to_tensor(image)

        # Load the model.
        spandrel_model_info = context.models.load(self.image_to_image_model)

        # Run the model on each tile.
        with spandrel_model_info as spandrel_model:
            assert isinstance(spandrel_model, SpandrelImageToImageModel)

            # Scale the tiles for re-assembling the final image.
            scale = spandrel_model.scale
            scaled_tiles = [self._scale_tile(tile, scale=scale) for tile in tiles]

            # Prepare the output tensor.
            _, channels, height, width = image_tensor.shape
            output_tensor = torch.zeros(
                (height * scale, width * scale, channels), dtype=torch.uint8, device=torch.device("cpu")
            )

            image_tensor = image_tensor.to(device=spandrel_model.device, dtype=spandrel_model.dtype)

            for tile, scaled_tile in tqdm(list(zip(tiles, scaled_tiles, strict=True)), desc="Upscaling Tiles"):
                # Exit early if the invocation has been canceled.
                if context.util.is_canceled():
                    raise CanceledException

                # Extract the current tile from the input tensor.
                input_tile = image_tensor[
                    :, :, tile.coords.top : tile.coords.bottom, tile.coords.left : tile.coords.right
                ].to(device=spandrel_model.device, dtype=spandrel_model.dtype)

                # Run the model on the tile.
                output_tile = spandrel_model.run(input_tile)

                # Convert the output tile into the output tensor's format.
                # (N, C, H, W) -> (C, H, W)
                output_tile = output_tile.squeeze(0)
                # (C, H, W) -> (H, W, C)
                output_tile = output_tile.permute(1, 2, 0)
                output_tile = output_tile.clamp(0, 1)
                output_tile = (output_tile * 255).to(dtype=torch.uint8, device=torch.device("cpu"))

                # Merge the output tile into the output tensor.
                # We only keep half of the overlap on the top and left side of the tile. We do this in case there are
                # edge artifacts. We don't bother with any 'blending' in the current implementation - for most upscalers
                # it seems unnecessary, but we may find a need in the future.
                top_overlap = scaled_tile.overlap.top // 2
                left_overlap = scaled_tile.overlap.left // 2
                output_tensor[
                    scaled_tile.coords.top + top_overlap : scaled_tile.coords.bottom,
                    scaled_tile.coords.left + left_overlap : scaled_tile.coords.right,
                    :,
                ] = output_tile[top_overlap:, left_overlap:, :]

        # Convert the output tensor to a PIL image.
        np_image = output_tensor.detach().numpy().astype(np.uint8)
        pil_image = Image.fromarray(np_image)
        image_dto = context.images.save(image=pil_image)
        return ImageOutput.build(image_dto)

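One detail of the new node that rewards a second look is the merge step: instead of blending, each tile simply gives up the first half of its top/left overlap, so neighboring tiles meet in the middle of the shared band. A toy one-dimensional version of that bookkeeping (the `paste_1d` helper is hypothetical, for illustration only):

```python
def paste_1d(output: list[int], tile: list[int], start: int, overlap: int) -> None:
    # Drop the first half of the tile's leading overlap, as the node does with
    # `output_tile[top_overlap:, left_overlap:, :]`; no blending is performed.
    keep_from = overlap // 2
    output[start + keep_from : start + len(tile)] = tile[keep_from:]


out = [0] * 12
tile_a = [1] * 8  # covers cells 0..7
tile_b = [2] * 8  # covers cells 4..11, overlapping tile_a by 4 cells
paste_1d(out, tile_a, start=0, overlap=0)
paste_1d(out, tile_b, start=4, overlap=4)
assert out == [1] * 6 + [2] * 6  # the seam lands mid-overlap, at cell 6
```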
@ -2,17 +2,11 @@

import re

from invokeai.app.invocations.baseinvocation import BaseInvocation, BaseInvocationOutput, invocation, invocation_output
from invokeai.app.invocations.fields import InputField, OutputField, UIComponent
from invokeai.app.invocations.primitives import StringOutput
from invokeai.app.services.shared.invocation_context import InvocationContext

from .baseinvocation import (
    BaseInvocation,
    BaseInvocationOutput,
    invocation,
    invocation_output,
)
from .fields import InputField, OutputField, UIComponent
from .primitives import StringOutput


@invocation_output("string_pos_neg_output")
class StringPosNegOutput(BaseInvocationOutput):

@ -8,7 +8,7 @@ from diffusers.schedulers.scheduling_utils import SchedulerMixin
from pydantic import field_validator

from invokeai.app.invocations.baseinvocation import BaseInvocation, Classification, invocation
from invokeai.app.invocations.constants import LATENT_SCALE_FACTOR, SCHEDULER_NAME_VALUES
from invokeai.app.invocations.constants import LATENT_SCALE_FACTOR
from invokeai.app.invocations.controlnet_image_processors import ControlField
from invokeai.app.invocations.denoise_latents import DenoiseLatentsInvocation, get_scheduler
from invokeai.app.invocations.fields import (
@ -29,6 +29,7 @@ from invokeai.backend.stable_diffusion.multi_diffusion_pipeline import (
    MultiDiffusionPipeline,
    MultiDiffusionRegionConditioning,
)
from invokeai.backend.stable_diffusion.schedulers.schedulers import SCHEDULER_NAME_VALUES
from invokeai.backend.tiles.tiles import (
    calc_tiles_min_overlap,
)
@ -174,6 +175,10 @@ class TiledMultiDiffusionDenoiseLatents(BaseInvocation):
        _, _, latent_height, latent_width = latents.shape

        # Calculate the tile locations to cover the latent-space image.
        # TODO(ryand): In the future, we may want to revisit the tile overlap strategy. Things to consider:
        # - How much overlap 'context' to provide for each denoising step.
        # - How much overlap to use during merging/blending.
        # - Should we 'jitter' the tile locations in each step so that the seams are in different places?
        tiles = calc_tiles_min_overlap(
            image_height=latent_height,
            image_width=latent_width,
@ -217,7 +222,8 @@ class TiledMultiDiffusionDenoiseLatents(BaseInvocation):
            context=context,
            positive_conditioning_field=self.positive_conditioning,
            negative_conditioning_field=self.negative_conditioning,
            unet=unet,
            device=unet.device,
            dtype=unet.dtype,
            latent_height=latent_tile_height,
            latent_width=latent_tile_width,
            cfg_scale=self.cfg_scale,

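The TODO above concerns how `calc_tiles_min_overlap` lays tiles over the latent canvas. The contract, as the name suggests, is to cover the full height and width with fixed-size tiles whose neighbors share at least `min_overlap` cells. A one-axis sketch of one way to satisfy that contract (this illustrates the idea only; it is not the exact algorithm in `invokeai.backend.tiles.tiles`):

```python
import math


def tile_starts(image_len: int, tile_len: int, min_overlap: int) -> list[int]:
    # Cover [0, image_len) with tiles of tile_len cells such that adjacent
    # tiles share at least min_overlap cells, spacing the starts evenly.
    if tile_len >= image_len:
        return [0]
    stride = tile_len - min_overlap
    num_tiles = math.ceil((image_len - tile_len) / stride) + 1
    last_start = image_len - tile_len
    return [round(i * last_start / (num_tiles - 1)) for i in range(num_tiles)]


# A 96-cell-wide latent canvas with 64-cell tiles and >= 8 cells of overlap:
print(tile_starts(96, 64, 8))  # -> [0, 32]; tiles [0, 64) and [32, 96) overlap by 32
```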
@ -6,15 +6,13 @@ import numpy as np
from PIL import Image
from pydantic import ConfigDict

from invokeai.app.invocations.fields import ImageField
from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation
from invokeai.app.invocations.fields import ImageField, InputField, WithBoard, WithMetadata
from invokeai.app.invocations.primitives import ImageOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.image_util.basicsr.rrdbnet_arch import RRDBNet
from invokeai.backend.image_util.realesrgan.realesrgan import RealESRGAN

from .baseinvocation import BaseInvocation, invocation
from .fields import InputField, WithBoard, WithMetadata

# TODO: Populate this from disk?
# TODO: Use model manager to load?
ESRGAN_MODELS = Literal[

@ -2,12 +2,11 @@ import sqlite3
import threading
from typing import Optional, cast

from invokeai.app.services.board_image_records.board_image_records_base import BoardImageRecordStorageBase
from invokeai.app.services.image_records.image_records_common import ImageRecord, deserialize_image_record
from invokeai.app.services.shared.pagination import OffsetPaginatedResults
from invokeai.app.services.shared.sqlite.sqlite_database import SqliteDatabase

from .board_image_records_base import BoardImageRecordStorageBase


class SqliteBoardImageRecordStorage(BoardImageRecordStorageBase):
    _conn: sqlite3.Connection

@ -1,9 +1,8 @@
from typing import Optional

from invokeai.app.services.board_images.board_images_base import BoardImagesServiceABC
from invokeai.app.services.invoker import Invoker

from .board_images_base import BoardImagesServiceABC


class BoardImagesService(BoardImagesServiceABC):
    __invoker: Invoker

@ -1,9 +1,8 @@
from abc import ABC, abstractmethod

from invokeai.app.services.board_records.board_records_common import BoardChanges, BoardRecord
from invokeai.app.services.shared.pagination import OffsetPaginatedResults

from .board_records_common import BoardChanges, BoardRecord


class BoardRecordStorageBase(ABC):
    """Low-level service responsible for interfacing with the board record store."""

@ -24,6 +24,8 @@ class BoardRecord(BaseModelExcludeNull):
    """The name of the cover image of the board."""
    archived: bool = Field(description="Whether or not the board is archived.")
    """Whether or not the board is archived."""
    is_private: Optional[bool] = Field(default=None, description="Whether the board is private.")
    """Whether the board is private."""


def deserialize_board_record(board_dict: dict) -> BoardRecord:
@ -38,6 +40,7 @@ def deserialize_board_record(board_dict: dict) -> BoardRecord:
    updated_at = board_dict.get("updated_at", get_iso_timestamp())
    deleted_at = board_dict.get("deleted_at", get_iso_timestamp())
    archived = board_dict.get("archived", False)
    is_private = board_dict.get("is_private", False)

    return BoardRecord(
        board_id=board_id,
@ -47,6 +50,7 @@ def deserialize_board_record(board_dict: dict) -> BoardRecord:
        updated_at=updated_at,
        deleted_at=deleted_at,
        archived=archived,
        is_private=is_private,
    )

@ -2,12 +2,8 @@ import sqlite3
import threading
from typing import Union, cast

from invokeai.app.services.shared.pagination import OffsetPaginatedResults
from invokeai.app.services.shared.sqlite.sqlite_database import SqliteDatabase
from invokeai.app.util.misc import uuid_string

from .board_records_base import BoardRecordStorageBase
from .board_records_common import (
from invokeai.app.services.board_records.board_records_base import BoardRecordStorageBase
from invokeai.app.services.board_records.board_records_common import (
    BoardChanges,
    BoardRecord,
    BoardRecordDeleteException,
@ -15,6 +11,9 @@ from .board_records_common import (
    BoardRecordSaveException,
    deserialize_board_record,
)
from invokeai.app.services.shared.pagination import OffsetPaginatedResults
from invokeai.app.services.shared.sqlite.sqlite_database import SqliteDatabase
from invokeai.app.util.misc import uuid_string


class SqliteBoardRecordStorage(BoardRecordStorageBase):

@ -1,10 +1,9 @@
from abc import ABC, abstractmethod

from invokeai.app.services.board_records.board_records_common import BoardChanges
from invokeai.app.services.boards.boards_common import BoardDTO
from invokeai.app.services.shared.pagination import OffsetPaginatedResults

from .boards_common import BoardDTO


class BoardServiceABC(ABC):
    """High-level service for board management."""

@ -2,7 +2,7 @@ from typing import Optional

from pydantic import Field

from ..board_records.board_records_common import BoardRecord
from invokeai.app.services.board_records.board_records_common import BoardRecord


class BoardDTO(BoardRecord):

@ -1,11 +1,9 @@
from invokeai.app.services.board_records.board_records_common import BoardChanges
from invokeai.app.services.boards.boards_common import BoardDTO
from invokeai.app.services.boards.boards_base import BoardServiceABC
from invokeai.app.services.boards.boards_common import BoardDTO, board_record_to_dto
from invokeai.app.services.invoker import Invoker
from invokeai.app.services.shared.pagination import OffsetPaginatedResults

from .boards_base import BoardServiceABC
from .boards_common import board_record_to_dto


class BoardService(BoardServiceABC):
    __invoker: Invoker

@ -4,6 +4,7 @@ from typing import Optional, Union
from zipfile import ZipFile

from invokeai.app.services.board_records.board_records_common import BoardRecordNotFoundException
from invokeai.app.services.bulk_download.bulk_download_base import BulkDownloadBase
from invokeai.app.services.bulk_download.bulk_download_common import (
    DEFAULT_BULK_DOWNLOAD_ID,
    BulkDownloadException,
@ -15,8 +16,6 @@ from invokeai.app.services.images.images_common import ImageDTO
from invokeai.app.services.invoker import Invoker
from invokeai.app.util.misc import uuid_string

from .bulk_download_base import BulkDownloadBase


class BulkDownloadService(BulkDownloadBase):
    def start(self, invoker: Invoker) -> None:

@ -1,7 +1,6 @@
"""Init file for InvokeAI configure package."""

from invokeai.app.services.config.config_common import PagingArgumentParser

from .config_default import InvokeAIAppConfig, get_config
from invokeai.app.services.config.config_default import InvokeAIAppConfig, get_config

__all__ = ["InvokeAIAppConfig", "get_config", "PagingArgumentParser"]

@ -1,13 +1,13 @@
"""Init file for download queue."""

from .download_base import (
from invokeai.app.services.download.download_base import (
    DownloadJob,
    DownloadJobStatus,
    DownloadQueueServiceBase,
    MultiFileDownloadJob,
    UnknownJobIDException,
)
from .download_default import DownloadQueueService, TqdmProgress
from invokeai.app.services.download.download_default import DownloadQueueService, TqdmProgress

__all__ = [
    "DownloadJob",

@ -16,12 +16,7 @@ from requests import HTTPError
from tqdm import tqdm

from invokeai.app.services.config import InvokeAIAppConfig, get_config
from invokeai.app.services.events.events_base import EventServiceBase
from invokeai.app.util.misc import get_iso_timestamp
from invokeai.backend.model_manager.metadata import RemoteModelFile
from invokeai.backend.util.logging import InvokeAILogger

from .download_base import (
from invokeai.app.services.download.download_base import (
    DownloadEventHandler,
    DownloadExceptionHandler,
    DownloadJob,
@ -33,6 +28,10 @@ from .download_base import (
    ServiceInactiveException,
    UnknownJobIDException,
)
from invokeai.app.services.events.events_base import EventServiceBase
from invokeai.app.util.misc import get_iso_timestamp
from invokeai.backend.model_manager.metadata import RemoteModelFile
from invokeai.backend.util.logging import InvokeAILogger

# Maximum number of bytes to download during each call to requests.iter_content()
DOWNLOAD_CHUNK_SIZE = 100000
@ -185,7 +184,7 @@ class DownloadQueueService(DownloadQueueServiceBase):
                job = DownloadJob(
                    source=url,
                    dest=path,
                    access_token=access_token,
                    access_token=access_token or self._lookup_access_token(url),
                )
                mfdj.download_parts.add(job)
                self._download_part2parent[job.source] = mfdj

@ -6,12 +6,11 @@ from queue import Empty, Queue

from fastapi_events.dispatcher import dispatch

from invokeai.app.services.events.events_base import EventServiceBase
from invokeai.app.services.events.events_common import (
    EventBase,
)

from .events_base import EventServiceBase


class FastAPIEventService(EventServiceBase):
    def __init__(self, event_handler_id: int) -> None:

@ -7,12 +7,15 @@ from PIL import Image, PngImagePlugin
from PIL.Image import Image as PILImageType
from send2trash import send2trash

from invokeai.app.services.image_files.image_files_base import ImageFileStorageBase
from invokeai.app.services.image_files.image_files_common import (
    ImageFileDeleteException,
    ImageFileNotFoundException,
    ImageFileSaveException,
)
from invokeai.app.services.invoker import Invoker
from invokeai.app.util.thumbnails import get_thumbnail_name, make_thumbnail

from .image_files_base import ImageFileStorageBase
from .image_files_common import ImageFileDeleteException, ImageFileNotFoundException, ImageFileSaveException


class DiskImageFileStorage(ImageFileStorageBase):
    """Stores images on disk"""

@ -3,11 +3,15 @@ from datetime import datetime
from typing import Optional

from invokeai.app.invocations.fields import MetadataField
from invokeai.app.services.image_records.image_records_common import (
    ImageCategory,
    ImageRecord,
    ImageRecordChanges,
    ResourceOrigin,
)
from invokeai.app.services.shared.pagination import OffsetPaginatedResults
from invokeai.app.services.shared.sqlite.sqlite_common import SQLiteDirection

from .image_records_common import ImageCategory, ImageRecord, ImageRecordChanges, ResourceOrigin


class ImageRecordStorageBase(ABC):
    """Low-level service responsible for interfacing with the image record store."""
@ -44,6 +48,7 @@ class ImageRecordStorageBase(ABC):
        categories: Optional[list[ImageCategory]] = None,
        is_intermediate: Optional[bool] = None,
        board_id: Optional[str] = None,
        search_term: Optional[str] = None,
    ) -> OffsetPaginatedResults[ImageRecord]:
        """Gets a page of image records."""
        pass

@ -4,12 +4,8 @@ from datetime import datetime
from typing import Optional, Union, cast

from invokeai.app.invocations.fields import MetadataField, MetadataFieldValidator
from invokeai.app.services.shared.pagination import OffsetPaginatedResults
from invokeai.app.services.shared.sqlite.sqlite_common import SQLiteDirection
from invokeai.app.services.shared.sqlite.sqlite_database import SqliteDatabase

from .image_records_base import ImageRecordStorageBase
from .image_records_common import (
from invokeai.app.services.image_records.image_records_base import ImageRecordStorageBase
from invokeai.app.services.image_records.image_records_common import (
    IMAGE_DTO_COLS,
    ImageCategory,
    ImageRecord,
@ -20,6 +16,9 @@ from .image_records_common import (
    ResourceOrigin,
    deserialize_image_record,
)
from invokeai.app.services.shared.pagination import OffsetPaginatedResults
from invokeai.app.services.shared.sqlite.sqlite_common import SQLiteDirection
from invokeai.app.services.shared.sqlite.sqlite_database import SqliteDatabase


class SqliteImageRecordStorage(ImageRecordStorageBase):
@ -151,6 +150,7 @@ class SqliteImageRecordStorage(ImageRecordStorageBase):
        categories: Optional[list[ImageCategory]] = None,
        is_intermediate: Optional[bool] = None,
        board_id: Optional[str] = None,
        search_term: Optional[str] = None,
    ) -> OffsetPaginatedResults[ImageRecord]:
        try:
            self._lock.acquire()
@ -211,6 +211,13 @@ class SqliteImageRecordStorage(ImageRecordStorageBase):
            """
            query_params.append(board_id)

        # Search term condition
        if search_term:
            query_conditions += """--sql
                AND images.metadata LIKE ?
            """
            query_params.append(f"%{search_term.lower()}%")

        if starred_first:
            query_pagination = f"""--sql
                ORDER BY images.starred DESC, images.created_at {order_dir.value} LIMIT ? OFFSET ?

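The new `search_term` condition above is built the safe way: the SQL fragment and the bindings list grow together, so user input only ever reaches SQLite as a bound `?` parameter, never via string interpolation. A stripped-down, runnable sketch of the same construction:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE images (image_name TEXT, metadata TEXT)")
conn.execute("""INSERT INTO images VALUES ('a.png', '{"prompt": "a red fox"}')""")


def search(search_term: str | None) -> list[tuple[str, str]]:
    # Conditions and bindings are appended in lockstep, mirroring the hunk above.
    query = "SELECT image_name, metadata FROM images WHERE 1=1"
    params: list[str] = []
    if search_term:
        query += " AND images.metadata LIKE ?"
        params.append(f"%{search_term.lower()}%")
    return conn.execute(query, params).fetchall()


print(search("fox"))  # -> [('a.png', '{"prompt": "a red fox"}')]
```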
@ -123,6 +123,7 @@ class ImageServiceABC(ABC):
        categories: Optional[list[ImageCategory]] = None,
        is_intermediate: Optional[bool] = None,
        board_id: Optional[str] = None,
        search_term: Optional[str] = None,
    ) -> OffsetPaginatedResults[ImageDTO]:
        """Gets a paginated list of image DTOs."""
        pass

@ -3,16 +3,12 @@ from typing import Optional
from PIL.Image import Image as PILImageType

from invokeai.app.invocations.fields import MetadataField
from invokeai.app.services.invoker import Invoker
from invokeai.app.services.shared.pagination import OffsetPaginatedResults
from invokeai.app.services.shared.sqlite.sqlite_common import SQLiteDirection

from ..image_files.image_files_common import (
from invokeai.app.services.image_files.image_files_common import (
    ImageFileDeleteException,
    ImageFileNotFoundException,
    ImageFileSaveException,
)
from ..image_records.image_records_common import (
from invokeai.app.services.image_records.image_records_common import (
    ImageCategory,
    ImageRecord,
    ImageRecordChanges,
@ -23,8 +19,11 @@ from ..image_records.image_records_common import (
    InvalidOriginException,
    ResourceOrigin,
)
from .images_base import ImageServiceABC
from .images_common import ImageDTO, image_record_to_dto
from invokeai.app.services.images.images_base import ImageServiceABC
from invokeai.app.services.images.images_common import ImageDTO, image_record_to_dto
from invokeai.app.services.invoker import Invoker
from invokeai.app.services.shared.pagination import OffsetPaginatedResults
from invokeai.app.services.shared.sqlite.sqlite_common import SQLiteDirection


class ImageService(ImageServiceABC):
@ -214,6 +213,7 @@ class ImageService(ImageServiceABC):
        categories: Optional[list[ImageCategory]] = None,
        is_intermediate: Optional[bool] = None,
        board_id: Optional[str] = None,
        search_term: Optional[str] = None,
    ) -> OffsetPaginatedResults[ImageDTO]:
        try:
            results = self.__invoker.services.image_records.get_many(
@ -225,6 +225,7 @@ class ImageService(ImageServiceABC):
                categories,
                is_intermediate,
                board_id,
                search_term,
            )

            image_dtos = [

@ -10,29 +10,28 @@ if TYPE_CHECKING:

    import torch

    from invokeai.app.services.board_image_records.board_image_records_base import BoardImageRecordStorageBase
    from invokeai.app.services.board_images.board_images_base import BoardImagesServiceABC
    from invokeai.app.services.board_records.board_records_base import BoardRecordStorageBase
    from invokeai.app.services.boards.boards_base import BoardServiceABC
    from invokeai.app.services.bulk_download.bulk_download_base import BulkDownloadBase
    from invokeai.app.services.config import InvokeAIAppConfig
    from invokeai.app.services.download import DownloadQueueServiceBase
    from invokeai.app.services.events.events_base import EventServiceBase
    from invokeai.app.services.image_files.image_files_base import ImageFileStorageBase
    from invokeai.app.services.image_records.image_records_base import ImageRecordStorageBase
    from invokeai.app.services.images.images_base import ImageServiceABC
    from invokeai.app.services.invocation_cache.invocation_cache_base import InvocationCacheBase
    from invokeai.app.services.invocation_stats.invocation_stats_base import InvocationStatsServiceBase
    from invokeai.app.services.model_images.model_images_base import ModelImageFileStorageBase
    from invokeai.app.services.model_manager.model_manager_base import ModelManagerServiceBase
    from invokeai.app.services.names.names_base import NameServiceBase
    from invokeai.app.services.session_processor.session_processor_base import SessionProcessorBase
    from invokeai.app.services.session_queue.session_queue_base import SessionQueueBase
    from invokeai.app.services.urls.urls_base import UrlServiceBase
    from invokeai.app.services.workflow_records.workflow_records_base import WorkflowRecordsStorageBase
    from invokeai.backend.stable_diffusion.diffusion.conditioning_data import ConditioningFieldData

    from .board_image_records.board_image_records_base import BoardImageRecordStorageBase
    from .board_images.board_images_base import BoardImagesServiceABC
    from .board_records.board_records_base import BoardRecordStorageBase
    from .boards.boards_base import BoardServiceABC
    from .bulk_download.bulk_download_base import BulkDownloadBase
    from .config import InvokeAIAppConfig
    from .download import DownloadQueueServiceBase
    from .events.events_base import EventServiceBase
    from .image_files.image_files_base import ImageFileStorageBase
    from .image_records.image_records_base import ImageRecordStorageBase
    from .images.images_base import ImageServiceABC
    from .invocation_cache.invocation_cache_base import InvocationCacheBase
    from .invocation_stats.invocation_stats_base import InvocationStatsServiceBase
    from .model_images.model_images_base import ModelImageFileStorageBase
    from .model_manager.model_manager_base import ModelManagerServiceBase
    from .names.names_base import NameServiceBase
    from .session_processor.session_processor_base import SessionProcessorBase
    from .session_queue.session_queue_base import SessionQueueBase
    from .urls.urls_base import UrlServiceBase
    from .workflow_records.workflow_records_base import WorkflowRecordsStorageBase


class InvocationServices:
    """Services that can be used by invocations"""

@ -9,11 +9,8 @@ import torch

import invokeai.backend.util.logging as logger
from invokeai.app.invocations.baseinvocation import BaseInvocation
from invokeai.app.services.invoker import Invoker
from invokeai.backend.model_manager.load.model_cache import CacheStats

from .invocation_stats_base import InvocationStatsServiceBase
from .invocation_stats_common import (
from invokeai.app.services.invocation_stats.invocation_stats_base import InvocationStatsServiceBase
from invokeai.app.services.invocation_stats.invocation_stats_common import (
    GESStatsNotFoundError,
    GraphExecutionStats,
    GraphExecutionStatsSummary,
@ -22,6 +19,8 @@ from .invocation_stats_common import (
    NodeExecutionStats,
    NodeExecutionStatsSummary,
)
from invokeai.app.services.invoker import Invoker
from invokeai.backend.model_manager.load.model_cache import CacheStats

# Size of 1GB in bytes.
GB = 2**30

@ -1,7 +1,7 @@
# Copyright (c) 2022 Kyle Schouviller (https://github.com/kyle0654)


from .invocation_services import InvocationServices
from invokeai.app.services.invocation_services import InvocationServices


class Invoker:

@ -5,15 +5,14 @@ from PIL.Image import Image as PILImageType
from send2trash import send2trash

from invokeai.app.services.invoker import Invoker
from invokeai.app.util.misc import uuid_string
from invokeai.app.util.thumbnails import make_thumbnail

from .model_images_base import ModelImageFileStorageBase
from .model_images_common import (
from invokeai.app.services.model_images.model_images_base import ModelImageFileStorageBase
from invokeai.app.services.model_images.model_images_common import (
    ModelImageFileDeleteException,
    ModelImageFileNotFoundException,
    ModelImageFileSaveException,
)
from invokeai.app.util.misc import uuid_string
from invokeai.app.util.thumbnails import make_thumbnail


class ModelImageFileStorageDisk(ModelImageFileStorageBase):

@ -1,9 +1,7 @@
"""Initialization file for model install service package."""

from .model_install_base import (
    ModelInstallServiceBase,
)
from .model_install_common import (
from invokeai.app.services.model_install.model_install_base import ModelInstallServiceBase
from invokeai.app.services.model_install.model_install_common import (
    HFModelSource,
    InstallStatus,
    LocalModelSource,
@ -12,7 +10,7 @@ from .model_install_common import (
    UnknownInstallJobException,
    URLModelSource,
)
from .model_install_default import ModelInstallService
from invokeai.app.services.model_install.model_install_default import ModelInstallService

__all__ = [
    "ModelInstallServiceBase",

@ -23,6 +23,16 @@ from invokeai.app.services.download import DownloadQueueServiceBase, MultiFileDo
from invokeai.app.services.events.events_base import EventServiceBase
from invokeai.app.services.invoker import Invoker
from invokeai.app.services.model_install.model_install_base import ModelInstallServiceBase
from invokeai.app.services.model_install.model_install_common import (
    MODEL_SOURCE_TO_TYPE_MAP,
    HFModelSource,
    InstallStatus,
    LocalModelSource,
    ModelInstallJob,
    ModelSource,
    StringLikeSource,
    URLModelSource,
)
from invokeai.app.services.model_records import DuplicateModelException, ModelRecordServiceBase
from invokeai.app.services.model_records.model_records_base import ModelRecordChanges
from invokeai.backend.model_manager.config import (
@ -47,17 +57,6 @@ from invokeai.backend.util.catch_sigint import catch_sigint
from invokeai.backend.util.devices import TorchDevice
from invokeai.backend.util.util import slugify

from .model_install_common import (
    MODEL_SOURCE_TO_TYPE_MAP,
    HFModelSource,
    InstallStatus,
    LocalModelSource,
    ModelInstallJob,
    ModelSource,
    StringLikeSource,
    URLModelSource,
)

TMPDIR_PREFIX = "tmpinstall_"


@ -857,7 +856,7 @@ class ModelInstallService(ModelInstallServiceBase):
        with self._lock:
            if install_job := self._download_cache.pop(download_job.id, None):
                assert excp is not None
                install_job.set_error(excp)
                self._set_error(install_job, excp)
                self._download_queue.cancel_job(download_job)

                # Let other threads know that the number of downloads has changed

@ -1,6 +1,6 @@
"""Initialization file for model load service module."""

from .model_load_base import ModelLoadServiceBase
from .model_load_default import ModelLoadService
from invokeai.app.services.model_load.model_load_base import ModelLoadServiceBase
from invokeai.app.services.model_load.model_load_default import ModelLoadService

__all__ = ["ModelLoadServiceBase", "ModelLoadService"]

@ -10,6 +10,7 @@ from torch import load as torch_load

from invokeai.app.services.config import InvokeAIAppConfig
from invokeai.app.services.invoker import Invoker
from invokeai.app.services.model_load.model_load_base import ModelLoadServiceBase
from invokeai.backend.model_manager import AnyModel, AnyModelConfig, SubModelType
from invokeai.backend.model_manager.load import (
    LoadedModel,
@ -22,8 +23,6 @@ from invokeai.backend.model_manager.load.model_loaders.generic_diffusers import
from invokeai.backend.util.devices import TorchDevice
from invokeai.backend.util.logging import InvokeAILogger

from .model_load_base import ModelLoadServiceBase


class ModelLoadService(ModelLoadServiceBase):
    """Wrapper around ModelLoaderRegistry."""

@ -1,10 +1,9 @@
"""Initialization file for model manager service."""

from invokeai.app.services.model_manager.model_manager_default import ModelManagerService, ModelManagerServiceBase
from invokeai.backend.model_manager import AnyModel, AnyModelConfig, BaseModelType, ModelType, SubModelType
from invokeai.backend.model_manager.load import LoadedModel

from .model_manager_default import ModelManagerService, ModelManagerServiceBase

__all__ = [
    "ModelManagerServiceBase",
    "ModelManagerService",

@ -5,14 +5,13 @@ from abc import ABC, abstractmethod
import torch
from typing_extensions import Self

from invokeai.app.services.config.config_default import InvokeAIAppConfig
from invokeai.app.services.download.download_base import DownloadQueueServiceBase
from invokeai.app.services.events.events_base import EventServiceBase
from invokeai.app.services.invoker import Invoker

from ..config import InvokeAIAppConfig
from ..download import DownloadQueueServiceBase
from ..events.events_base import EventServiceBase
from ..model_install import ModelInstallServiceBase
from ..model_load import ModelLoadServiceBase
from ..model_records import ModelRecordServiceBase
from invokeai.app.services.model_install.model_install_base import ModelInstallServiceBase
from invokeai.app.services.model_load.model_load_base import ModelLoadServiceBase
from invokeai.app.services.model_records.model_records_base import ModelRecordServiceBase


class ModelManagerServiceBase(ABC):

@ -6,19 +6,20 @@ from typing import Optional
import torch
from typing_extensions import Self

from invokeai.app.services.config.config_default import InvokeAIAppConfig
from invokeai.app.services.download.download_base import DownloadQueueServiceBase
from invokeai.app.services.events.events_base import EventServiceBase
from invokeai.app.services.invoker import Invoker
from invokeai.app.services.model_install.model_install_base import ModelInstallServiceBase
from invokeai.app.services.model_install.model_install_default import ModelInstallService
from invokeai.app.services.model_load.model_load_base import ModelLoadServiceBase
from invokeai.app.services.model_load.model_load_default import ModelLoadService
from invokeai.app.services.model_manager.model_manager_base import ModelManagerServiceBase
from invokeai.app.services.model_records.model_records_base import ModelRecordServiceBase
from invokeai.backend.model_manager.load import ModelCache, ModelLoaderRegistry
from invokeai.backend.util.devices import TorchDevice
from invokeai.backend.util.logging import InvokeAILogger

from ..config import InvokeAIAppConfig
from ..download import DownloadQueueServiceBase
from ..events.events_base import EventServiceBase
from ..model_install import ModelInstallService, ModelInstallServiceBase
from ..model_load import ModelLoadService, ModelLoadServiceBase
from ..model_records import ModelRecordServiceBase
from .model_manager_base import ModelManagerServiceBase


class ModelManagerService(ModelManagerServiceBase):
    """

@ -40,12 +40,24 @@ Typical usage:
"""

import json
import logging
import sqlite3
from math import ceil
from pathlib import Path
from typing import List, Optional, Union

import pydantic

from invokeai.app.services.model_records.model_records_base import (
    DuplicateModelException,
    ModelRecordChanges,
    ModelRecordOrderBy,
    ModelRecordServiceBase,
    ModelSummary,
    UnknownModelException,
)
from invokeai.app.services.shared.pagination import PaginatedResults
from invokeai.app.services.shared.sqlite.sqlite_database import SqliteDatabase
from invokeai.backend.model_manager.config import (
    AnyModelConfig,
    BaseModelType,
@ -54,21 +66,11 @@ from invokeai.backend.model_manager.config import (
    ModelType,
)

from ..shared.sqlite.sqlite_database import SqliteDatabase
from .model_records_base import (
    DuplicateModelException,
    ModelRecordChanges,
    ModelRecordOrderBy,
    ModelRecordServiceBase,
    ModelSummary,
    UnknownModelException,
)


class ModelRecordServiceSQL(ModelRecordServiceBase):
    """Implementation of the ModelConfigStore ABC using a SQL database."""

    def __init__(self, db: SqliteDatabase):
    def __init__(self, db: SqliteDatabase, logger: logging.Logger):
        """
        Initialize a new object from preexisting sqlite3 connection and threading lock objects.

@ -77,6 +79,7 @@ class ModelRecordServiceSQL(ModelRecordServiceBase):
        super().__init__()
        self._db = db
        self._cursor = db.conn.cursor()
        self._logger = logger

    @property
    def db(self) -> SqliteDatabase:
@ -292,7 +295,20 @@ class ModelRecordServiceSQL(ModelRecordServiceBase):
            tuple(bindings),
        )
        result = self._cursor.fetchall()
        results = [ModelConfigFactory.make_config(json.loads(x[0]), timestamp=x[1]) for x in result]

        # Parse the model configs.
        results: list[AnyModelConfig] = []
        for row in result:
            try:
                model_config = ModelConfigFactory.make_config(json.loads(row[0]), timestamp=row[1])
            except pydantic.ValidationError:
                # We catch this error so that the app can still run if there are invalid model configs in the database.
                # One reason that an invalid model config might be in the database is if someone had to rollback from a
                # newer version of the app that added a new model type.
                self._logger.warning(f"Found an invalid model config in the database. Ignoring this model. ({row[0]})")
            else:
                results.append(model_config)

        return results

    def search_by_path(self, path: Union[str, Path]) -> List[AnyModelConfig]:

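The reworked loop above changes failure semantics: previously one malformed row made the list comprehension raise and the whole query fail; now each row is validated on its own, invalid rows are logged and skipped, and everything else is returned. A self-contained sketch of the same try/except/else pattern (the `Config` model below is a hypothetical stand-in for `AnyModelConfig`):

```python
import logging

import pydantic

logging.basicConfig(level=logging.WARNING)
logger = logging.getLogger("model_records")


class Config(pydantic.BaseModel):
    # Hypothetical stand-in for AnyModelConfig.
    name: str
    type: str


rows = [{"name": "good", "type": "main"}, {"name": "bad"}]  # second row lacks "type"

results: list[Config] = []
for row in rows:
    try:
        config = Config.model_validate(row)
    except pydantic.ValidationError:
        # Skip the bad record so the app can still start (e.g. after rolling
        # back from a newer schema version) instead of crashing the query.
        logger.warning("Found an invalid model config in the database. Ignoring this model. (%s)", row)
    else:
        results.append(config)

assert [c.name for c in results] == ["good"]
```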
@ -1,7 +1,6 @@
from invokeai.app.services.names.names_base import NameServiceBase
from invokeai.app.util.misc import uuid_string

from .names_base import NameServiceBase


class SimpleNameService(NameServiceBase):
    """Creates image names from UUIDs."""

@ -13,24 +13,24 @@ from invokeai.app.services.events.events_common import (
    register_events,
)
from invokeai.app.services.invocation_stats.invocation_stats_common import GESStatsNotFoundError
from invokeai.app.services.invoker import Invoker
from invokeai.app.services.session_processor.session_processor_base import (
    InvocationServices,
    OnAfterRunNode,
    OnAfterRunSession,
    OnBeforeRunNode,
    OnBeforeRunSession,
    OnNodeError,
    OnNonFatalProcessorError,
    SessionProcessorBase,
    SessionRunnerBase,
)
from invokeai.app.services.session_processor.session_processor_common import CanceledException
from invokeai.app.services.session_processor.session_processor_common import CanceledException, SessionProcessorStatus
from invokeai.app.services.session_queue.session_queue_common import SessionQueueItem, SessionQueueItemNotFoundError
from invokeai.app.services.shared.graph import NodeInputError
from invokeai.app.services.shared.invocation_context import InvocationContextData, build_invocation_context
from invokeai.app.util.profiler import Profiler

from ..invoker import Invoker
from .session_processor_base import InvocationServices, SessionProcessorBase, SessionRunnerBase
from .session_processor_common import SessionProcessorStatus


class DefaultSessionRunner(SessionRunnerBase):
    """Processes a single session's invocations."""

@ -1,6 +1,6 @@
import os

from .urls_base import UrlServiceBase
from invokeai.app.services.urls.urls_base import UrlServiceBase


class LocalUrlService(UrlServiceBase):

@ -2,7 +2,7 @@
"name": "ESRGAN Upscaling with Canny ControlNet",
"author": "InvokeAI",
"description": "Sample workflow for using Upscaling with ControlNet with SD1.5",
"version": "2.0.0",
"version": "2.1.0",
"contact": "invoke@invoke.ai",
"tags": "upscale, controlnet, default",
"notes": "",
@ -36,14 +36,13 @@
"version": "3.0.0",
"category": "default"
},
"id": "0e71a27e-a22b-4a9b-b20a-6d789abff2bc",
"nodes": [
{
"id": "e8bf67fe-67de-4227-87eb-79e86afdfc74",
"id": "63b6ab7e-5b05-4d1b-a3b1-42d8e53ce16b",
"type": "invocation",
"data": {
"id": "e8bf67fe-67de-4227-87eb-79e86afdfc74",
"version": "1.1.1",
"id": "63b6ab7e-5b05-4d1b-a3b1-42d8e53ce16b",
"version": "1.2.0",
"nodePack": "invokeai",
"label": "",
"notes": "",
@ -57,6 +56,10 @@
"clip": {
"name": "clip",
"label": ""
},
"mask": {
"name": "mask",
"label": ""
}
},
"isOpen": true,
@ -65,122 +68,63 @@
},
"position": {
"x": 1250,
"y": 1500
"y": 1200
}
},
{
"id": "d8ace142-c05f-4f1d-8982-88dc7473958d",
"id": "5ca498a4-c8c8-4580-a396-0c984317205d",
"type": "invocation",
"data": {
"id": "d8ace142-c05f-4f1d-8982-88dc7473958d",
"version": "1.0.2",
"id": "5ca498a4-c8c8-4580-a396-0c984317205d",
"version": "1.1.0",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "main_model_loader",
"type": "i2l",
"inputs": {
"model": {
"name": "model",
"image": {
"name": "image",
"label": ""
},
"vae": {
"name": "vae",
"label": ""
},
"tiled": {
"name": "tiled",
"label": "",
"value": {
"key": "5cd43ca0-dd0a-418d-9f7e-35b2b9d5e106",
"hash": "blake3:6987f323017f597213cc3264250edf57056d21a40a0a85d83a1a33a7d44dc41a",
"name": "Deliberate_v5",
"base": "sd-1",
"type": "main"
}
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 700,
"y": 1375
}
},
{
"id": "771bdf6a-0813-4099-a5d8-921a138754d4",
"type": "invocation",
"data": {
"id": "771bdf6a-0813-4099-a5d8-921a138754d4",
"version": "1.0.2",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "image",
"inputs": {
"image": {
"name": "image",
"label": "Image To Upscale",
"value": {
"image_name": "d2e42ba6-d420-496b-82db-91c9b75956c1.png"
}
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 344.5593065887157,
"y": 1698.161491368619
}
},
{
"id": "f7564dd2-9539-47f2-ac13-190804461f4e",
"type": "invocation",
"data": {
"id": "f7564dd2-9539-47f2-ac13-190804461f4e",
"version": "1.3.2",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "esrgan",
"inputs": {
"board": {
"name": "board",
"label": ""
},
"metadata": {
"name": "metadata",
"label": ""
},
"image": {
"name": "image",
"label": ""
},
"model_name": {
"name": "model_name",
"label": "Upscaler Model",
"value": "RealESRGAN_x2plus.pth"
"value": false
},
"tile_size": {
"name": "tile_size",
"label": "",
"value": 400
"value": 0
},
"fp32": {
"name": "fp32",
"label": "",
"value": false
}
},
"isOpen": true,
"isOpen": false,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 717.3863693661265,
"y": 1721.9215053134815
"x": 1650,
"y": 1675
}
},
{
"id": "1d887701-df21-4966-ae6e-a7d82307d7bd",
"id": "3ed9b2ef-f4ec-40a7-94db-92e63b583ec0",
"type": "invocation",
"data": {
"id": "1d887701-df21-4966-ae6e-a7d82307d7bd",
"version": "1.3.2",
"id": "3ed9b2ef-f4ec-40a7-94db-92e63b583ec0",
"version": "1.3.0",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "canny_image_processor",
"type": "l2i",
"inputs": {
"board": {
"name": "board",
@ -190,38 +134,37 @@
"name": "metadata",
"label": ""
},
"image": {
"name": "image",
"latents": {
"name": "latents",
"label": ""
},
"detect_resolution": {
"name": "detect_resolution",
"label": "",
"value": 512
"vae": {
"name": "vae",
"label": ""
},
"image_resolution": {
"name": "image_resolution",
"tiled": {
"name": "tiled",
"label": "",
"value": 512
"value": false
},
"low_threshold": {
"name": "low_threshold",
"tile_size": {
"name": "tile_size",
"label": "",
"value": 100
"value": 0
},
"high_threshold": {
"name": "high_threshold",
"fp32": {
"name": "fp32",
"label": "",
"value": 200
"value": false
}
},
"isOpen": true,
"isIntermediate": true,
"isIntermediate": false,
"useCache": true
},
"position": {
"x": 1200,
"y": 1900
"x": 2559.4751127537957,
"y": 1246.6000376741406
}
},
{
@ -229,7 +172,7 @@
"type": "invocation",
"data": {
"id": "ca1d020c-89a8-4958-880a-016d28775cfa",
"version": "1.1.1",
"version": "1.1.2",
"nodePack": "invokeai",
"label": "",
"notes": "",
@ -285,6 +228,193 @@
"y": 1902.9649340196056
}
},
{
"id": "1d887701-df21-4966-ae6e-a7d82307d7bd",
"type": "invocation",
"data": {
"id": "1d887701-df21-4966-ae6e-a7d82307d7bd",
"version": "1.3.3",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "canny_image_processor",
"inputs": {
"board": {
"name": "board",
"label": ""
},
"metadata": {
"name": "metadata",
"label": ""
},
"image": {
"name": "image",
"label": ""
},
"detect_resolution": {
"name": "detect_resolution",
"label": "",
"value": 512
},
"image_resolution": {
"name": "image_resolution",
"label": "",
"value": 512
},
"low_threshold": {
"name": "low_threshold",
"label": "",
"value": 100
},
"high_threshold": {
"name": "high_threshold",
"label": "",
"value": 200
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 1200,
"y": 1900
}
},
{
"id": "d8ace142-c05f-4f1d-8982-88dc7473958d",
"type": "invocation",
"data": {
"id": "d8ace142-c05f-4f1d-8982-88dc7473958d",
"version": "1.0.3",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "main_model_loader",
"inputs": {
"model": {
"name": "model",
"label": "",
"value": {
"key": "5cd43ca0-dd0a-418d-9f7e-35b2b9d5e106",
"hash": "blake3:6987f323017f597213cc3264250edf57056d21a40a0a85d83a1a33a7d44dc41a",
"name": "Deliberate_v5",
"base": "sd-1",
"type": "main"
}
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 700,
"y": 1375
}
},
{
"id": "e8bf67fe-67de-4227-87eb-79e86afdfc74",
"type": "invocation",
"data": {
"id": "e8bf67fe-67de-4227-87eb-79e86afdfc74",
"version": "1.2.0",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "compel",
"inputs": {
"prompt": {
"name": "prompt",
"label": "",
"value": ""
},
"clip": {
"name": "clip",
"label": ""
},
"mask": {
"name": "mask",
"label": ""
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 1250,
"y": 1500
}
},
{
"id": "771bdf6a-0813-4099-a5d8-921a138754d4",
"type": "invocation",
"data": {
"id": "771bdf6a-0813-4099-a5d8-921a138754d4",
"version": "1.0.2",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "image",
"inputs": {
"image": {
"name": "image",
"label": "Image To Upscale"
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 344.5593065887157,
"y": 1698.161491368619
}
},
{
"id": "f7564dd2-9539-47f2-ac13-190804461f4e",
"type": "invocation",
"data": {
"id": "f7564dd2-9539-47f2-ac13-190804461f4e",
"version": "1.3.2",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "esrgan",
"inputs": {
"board": {
"name": "board",
"label": ""
},
"metadata": {
"name": "metadata",
"label": ""
},
"image": {
"name": "image",
"label": ""
},
"model_name": {
"name": "model_name",
"label": "Upscaler Model",
"value": "RealESRGAN_x2plus.pth"
},
"tile_size": {
"name": "tile_size",
"label": "",
"value": 400
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 717.3863693661265,
"y": 1721.9215053134815
}
},
{
"id": "f50624ce-82bf-41d0-bdf7-8aab11a80d48",
"type": "invocation",
@ -413,122 +543,6 @@
"y": 1232.6219060454753
}
},
{
"id": "3ed9b2ef-f4ec-40a7-94db-92e63b583ec0",
"type": "invocation",
"data": {
"id": "3ed9b2ef-f4ec-40a7-94db-92e63b583ec0",
"version": "1.2.2",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "l2i",
"inputs": {
"board": {
"name": "board",
"label": ""
},
"metadata": {
"name": "metadata",
"label": ""
},
"latents": {
"name": "latents",
"label": ""
},
"vae": {
"name": "vae",
"label": ""
},
"tiled": {
"name": "tiled",
"label": "",
"value": false
},
"fp32": {
"name": "fp32",
"label": "",
"value": false
}
},
"isOpen": true,
"isIntermediate": false,
"useCache": true
},
"position": {
"x": 2559.4751127537957,
"y": 1246.6000376741406
}
},
{
"id": "5ca498a4-c8c8-4580-a396-0c984317205d",
"type": "invocation",
"data": {
"id": "5ca498a4-c8c8-4580-a396-0c984317205d",
"version": "1.0.2",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "i2l",
"inputs": {
"image": {
"name": "image",
"label": ""
},
"vae": {
"name": "vae",
"label": ""
},
"tiled": {
"name": "tiled",
"label": "",
"value": false
},
"fp32": {
"name": "fp32",
"label": "",
"value": false
}
},
"isOpen": false,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 1650,
"y": 1675
}
},
{
"id": "63b6ab7e-5b05-4d1b-a3b1-42d8e53ce16b",
"type": "invocation",
"data": {
"id": "63b6ab7e-5b05-4d1b-a3b1-42d8e53ce16b",
"version": "1.1.1",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "compel",
"inputs": {
"prompt": {
"name": "prompt",
"label": "",
"value": ""
},
"clip": {
"name": "clip",
"label": ""
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 1250,
"y": 1200
}
},
{
"id": "eb8f6f8a-c7b1-4914-806e-045ee2717a35",
"type": "invocation",

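The workflow hunks above, and the ones that follow, repeat a small set of schema changes rather than anything workflow-specific: each default workflow's version moves from 2.0.0 to 2.1.0, compel nodes move to 1.2.0 and gain a mask input, and the i2l/l2i nodes gain a tile_size input. A hedged sketch of the new input shapes as Python dicts; the variable names are illustrative, and the reading that tile_size 0 selects a default is an assumption, not stated in the diff.

# Illustrative shapes of the inputs added in these workflow diffs.
# Field names and values are copied from the hunks; the variable names are not.
compel_inputs = {
    "prompt": {"name": "prompt", "label": "", "value": ""},
    "clip": {"name": "clip", "label": ""},
    "mask": {"name": "mask", "label": ""},  # new in compel 1.2.0
}

l2i_inputs = {
    "latents": {"name": "latents", "label": ""},
    "vae": {"name": "vae", "label": ""},
    "tiled": {"name": "tiled", "label": "", "value": False},
    "tile_size": {"name": "tile_size", "label": "", "value": 0},  # new in l2i 1.3.0; 0 presumably selects a default
    "fp32": {"name": "fp32", "label": "", "value": False},
}
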
@ -2,7 +2,7 @@
"name": "Face Detailer with IP-Adapter & Canny (See Note in Details)",
"author": "kosmoskatten",
"description": "A workflow to add detail to and improve faces. This workflow is most effective when used with a model that creates realistic outputs. ",
"version": "2.0.0",
"version": "2.1.0",
"contact": "invoke@invoke.ai",
"tags": "face detailer, IP-Adapter, Canny",
"notes": "Set this image as the blur mask: https://i.imgur.com/Gxi61zP.png",
@ -37,16 +37,219 @@
}
],
"meta": {
"category": "default",
"version": "3.0.0"
"version": "3.0.0",
"category": "default"
},
"nodes": [
{
"id": "44f2c190-eb03-460d-8d11-a94d13b33f19",
"id": "c6359181-6479-40ec-bf3a-b7e8451683b8",
"type": "invocation",
"data": {
"id": "44f2c190-eb03-460d-8d11-a94d13b33f19",
"version": "1.1.1",
"id": "c6359181-6479-40ec-bf3a-b7e8451683b8",
"version": "1.0.3",
"label": "",
"notes": "",
"type": "main_model_loader",
"inputs": {
"model": {
"name": "model",
"label": ""
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 2031.5518710051792,
"y": -492.1742944307074
}
},
{
"id": "8fe598c6-d447-44fa-a165-4975af77d080",
"type": "invocation",
"data": {
"id": "8fe598c6-d447-44fa-a165-4975af77d080",
"version": "1.3.3",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "canny_image_processor",
"inputs": {
"board": {
"name": "board",
"label": ""
},
"metadata": {
"name": "metadata",
"label": ""
},
"image": {
"name": "image",
"label": ""
},
"detect_resolution": {
"name": "detect_resolution",
"label": "",
"value": 512
},
"image_resolution": {
"name": "image_resolution",
"label": "",
"value": 512
},
"low_threshold": {
"name": "low_threshold",
"label": "",
"value": 100
},
"high_threshold": {
"name": "high_threshold",
"label": "",
"value": 200
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 3519.4131037388597,
"y": 576.7946795840575
}
},
{
"id": "f60b6161-8f26-42f6-89ff-545e6011e501",
"type": "invocation",
"data": {
"id": "f60b6161-8f26-42f6-89ff-545e6011e501",
"version": "1.1.2",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "controlnet",
"inputs": {
"image": {
"name": "image",
"label": ""
},
"control_model": {
"name": "control_model",
"label": "Control Model (select canny)",
"value": {
"key": "5bdaacf7-a7a3-4fb8-b394-cc0ffbb8941d",
"hash": "blake3:260c7f8e10aefea9868cfc68d89970e91033bd37132b14b903e70ee05ebf530e",
"name": "sd-controlnet-canny",
"base": "sd-1",
"type": "controlnet"
}
},
"control_weight": {
"name": "control_weight",
"label": "",
"value": 0.5
},
"begin_step_percent": {
"name": "begin_step_percent",
"label": "",
"value": 0
},
"end_step_percent": {
"name": "end_step_percent",
"label": "",
"value": 0.5
},
"control_mode": {
"name": "control_mode",
"label": "",
"value": "balanced"
},
"resize_mode": {
"name": "resize_mode",
"label": "",
"value": "just_resize"
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 3950,
"y": 150
}
},
{
"id": "22b750db-b85e-486b-b278-ac983e329813",
"type": "invocation",
"data": {
"id": "22b750db-b85e-486b-b278-ac983e329813",
"version": "1.4.1",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "ip_adapter",
"inputs": {
"image": {
"name": "image",
"label": ""
},
"ip_adapter_model": {
"name": "ip_adapter_model",
"label": "IP-Adapter Model (select IP Adapter Face)",
"value": {
"key": "1cc210bb-4d0a-4312-b36c-b5d46c43768e",
"hash": "blake3:3d669dffa7471b357b4df088b99ffb6bf4d4383d5e0ef1de5ec1c89728a3d5a5",
"name": "ip_adapter_sd15",
"base": "sd-1",
"type": "ip_adapter"
}
},
"clip_vision_model": {
"name": "clip_vision_model",
"label": "",
"value": "ViT-H"
},
"weight": {
"name": "weight",
"label": "",
"value": 0.5
},
"method": {
"name": "method",
"label": "",
"value": "full"
},
"begin_step_percent": {
"name": "begin_step_percent",
"label": "",
"value": 0
},
"end_step_percent": {
"name": "end_step_percent",
"label": "",
"value": 0.8
},
"mask": {
"name": "mask",
"label": ""
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 3575,
"y": -200
}
},
{
"id": "f4d15b64-c4a6-42a5-90fc-e4ed07a0ca65",
"type": "invocation",
"data": {
"id": "f4d15b64-c4a6-42a5-90fc-e4ed07a0ca65",
"version": "1.2.0",
"nodePack": "invokeai",
"label": "",
"notes": "",
@ -60,6 +263,140 @@
"clip": {
"name": "clip",
"label": ""
},
"mask": {
"name": "mask",
"label": ""
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 2550,
"y": -525
}
},
{
"id": "2224ed72-2453-4252-bd89-3085240e0b6f",
"type": "invocation",
"data": {
"id": "2224ed72-2453-4252-bd89-3085240e0b6f",
"version": "1.3.0",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "l2i",
"inputs": {
"board": {
"name": "board",
"label": ""
},
"metadata": {
"name": "metadata",
"label": ""
},
"latents": {
"name": "latents",
"label": ""
},
"vae": {
"name": "vae",
"label": ""
},
"tiled": {
"name": "tiled",
"label": "",
"value": false
},
"tile_size": {
"name": "tile_size",
"label": "",
"value": 0
},
"fp32": {
"name": "fp32",
"label": "",
"value": true
}
},
"isOpen": true,
"isIntermediate": false,
"useCache": true
},
"position": {
"x": 4980.1395106966565,
"y": -255.9158921745602
}
},
{
"id": "de8b1a48-a2e4-42ca-90bb-66058bffd534",
"type": "invocation",
"data": {
"id": "de8b1a48-a2e4-42ca-90bb-66058bffd534",
"version": "1.1.0",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "i2l",
"inputs": {
"image": {
"name": "image",
"label": ""
},
"vae": {
"name": "vae",
"label": ""
},
"tiled": {
"name": "tiled",
"label": "",
"value": false
},
"tile_size": {
"name": "tile_size",
"label": "",
"value": 0
},
"fp32": {
"name": "fp32",
"label": "",
"value": true
}
},
"isOpen": false,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 3100,
"y": -275
}
},
{
"id": "44f2c190-eb03-460d-8d11-a94d13b33f19",
"type": "invocation",
"data": {
"id": "44f2c190-eb03-460d-8d11-a94d13b33f19",
"version": "1.2.0",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "compel",
"inputs": {
"prompt": {
"name": "prompt",
"label": "",
"value": ""
},
"clip": {
"name": "clip",
"label": ""
},
"mask": {
"name": "mask",
"label": ""
}
},
"isOpen": true,
@ -251,45 +588,6 @@
"y": 0
}
},
{
"id": "de8b1a48-a2e4-42ca-90bb-66058bffd534",
"type": "invocation",
"data": {
"id": "de8b1a48-a2e4-42ca-90bb-66058bffd534",
"version": "1.0.2",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "i2l",
"inputs": {
"image": {
"name": "image",
"label": ""
},
"vae": {
"name": "vae",
"label": ""
},
"tiled": {
"name": "tiled",
"label": "",
"value": false
},
"fp32": {
"name": "fp32",
"label": "",
"value": true
}
},
"isOpen": false,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 3100,
"y": -275
}
},
{
"id": "bd06261d-a74a-4d1f-8374-745ed6194bc2",
"type": "invocation",
@ -418,53 +716,6 @@
"y": -175
}
},
{
"id": "2224ed72-2453-4252-bd89-3085240e0b6f",
"type": "invocation",
"data": {
"id": "2224ed72-2453-4252-bd89-3085240e0b6f",
"version": "1.2.2",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "l2i",
"inputs": {
"board": {
"name": "board",
"label": ""
},
"metadata": {
"name": "metadata",
"label": ""
},
"latents": {
"name": "latents",
"label": ""
},
"vae": {
"name": "vae",
"label": ""
},
"tiled": {
"name": "tiled",
"label": "",
"value": false
},
"fp32": {
"name": "fp32",
"label": "",
"value": true
}
},
"isOpen": true,
"isIntermediate": false,
"useCache": true
},
"position": {
"x": 4980.1395106966565,
"y": -255.9158921745602
}
},
{
"id": "2974e5b3-3d41-4b6f-9953-cd21e8f3a323",
"type": "invocation",
@ -692,201 +943,6 @@
"y": -275
}
},
{
"id": "f4d15b64-c4a6-42a5-90fc-e4ed07a0ca65",
"type": "invocation",
"data": {
"id": "f4d15b64-c4a6-42a5-90fc-e4ed07a0ca65",
"version": "1.1.1",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "compel",
"inputs": {
"prompt": {
"name": "prompt",
"label": "",
"value": ""
},
"clip": {
"name": "clip",
"label": ""
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 2550,
"y": -525
}
},
{
"id": "22b750db-b85e-486b-b278-ac983e329813",
"type": "invocation",
"data": {
"id": "22b750db-b85e-486b-b278-ac983e329813",
"version": "1.2.2",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "ip_adapter",
"inputs": {
"image": {
"name": "image",
"label": ""
},
"ip_adapter_model": {
"name": "ip_adapter_model",
"label": "IP-Adapter Model (select IP Adapter Face)",
"value": {
"key": "1cc210bb-4d0a-4312-b36c-b5d46c43768e",
"hash": "blake3:3d669dffa7471b357b4df088b99ffb6bf4d4383d5e0ef1de5ec1c89728a3d5a5",
"name": "ip_adapter_sd15",
"base": "sd-1",
"type": "ip_adapter"
}
},
"weight": {
"name": "weight",
"label": "",
"value": 0.5
},
"begin_step_percent": {
"name": "begin_step_percent",
"label": "",
"value": 0
},
"end_step_percent": {
"name": "end_step_percent",
"label": "",
"value": 0.8
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 3575,
"y": -200
}
},
{
"id": "f60b6161-8f26-42f6-89ff-545e6011e501",
"type": "invocation",
"data": {
"id": "f60b6161-8f26-42f6-89ff-545e6011e501",
"version": "1.1.1",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "controlnet",
"inputs": {
"image": {
"name": "image",
"label": ""
},
"control_model": {
"name": "control_model",
"label": "Control Model (select canny)",
"value": {
"key": "5bdaacf7-a7a3-4fb8-b394-cc0ffbb8941d",
"hash": "blake3:260c7f8e10aefea9868cfc68d89970e91033bd37132b14b903e70ee05ebf530e",
"name": "sd-controlnet-canny",
"base": "sd-1",
"type": "controlnet"
}
},
"control_weight": {
"name": "control_weight",
"label": "",
"value": 0.5
},
"begin_step_percent": {
"name": "begin_step_percent",
"label": "",
"value": 0
},
"end_step_percent": {
"name": "end_step_percent",
"label": "",
"value": 0.5
},
"control_mode": {
"name": "control_mode",
"label": "",
"value": "balanced"
},
"resize_mode": {
"name": "resize_mode",
"label": "",
"value": "just_resize"
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 3950,
"y": 150
}
},
{
"id": "8fe598c6-d447-44fa-a165-4975af77d080",
"type": "invocation",
"data": {
"id": "8fe598c6-d447-44fa-a165-4975af77d080",
"version": "1.3.2",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "canny_image_processor",
"inputs": {
"board": {
"name": "board",
"label": ""
},
"metadata": {
"name": "metadata",
"label": ""
},
"image": {
"name": "image",
"label": ""
},
"detect_resolution": {
"name": "detect_resolution",
"label": "",
"value": 512
},
"image_resolution": {
"name": "image_resolution",
"label": "",
"value": 512
},
"low_threshold": {
"name": "low_threshold",
"label": "",
"value": 100
},
"high_threshold": {
"name": "high_threshold",
"label": "",
"value": 200
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 3519.4131037388597,
"y": 576.7946795840575
}
},
{
"id": "4bd4ae80-567f-4366-b8c6-3bb06f4fb46a",
"type": "invocation",
@ -1035,30 +1091,6 @@
"x": 2578.2364832140506,
"y": 78.7948456497351
}
},
{
"id": "c6359181-6479-40ec-bf3a-b7e8451683b8",
"type": "invocation",
"data": {
"id": "c6359181-6479-40ec-bf3a-b7e8451683b8",
"version": "1.0.2",
"label": "",
"notes": "",
"type": "main_model_loader",
"inputs": {
"model": {
"name": "model",
"label": ""
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 2031.5518710051792,
"y": -492.1742944307074
}
}
],
"edges": [

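The Face Detailer hunks above also show the ip_adapter node moving from 1.2.2 to 1.4.1, which adds clip_vision_model, method, and mask inputs alongside the existing weight and step-percent fields. A sketch mirroring the values in that diff; the dict wrapper and variable name are illustrative only.

# Values copied from the Face Detailer diff above; the variable name is illustrative.
ip_adapter_inputs = {
    "image": {"name": "image", "label": ""},
    "ip_adapter_model": {"name": "ip_adapter_model", "label": "IP-Adapter Model (select IP Adapter Face)"},
    "clip_vision_model": {"name": "clip_vision_model", "label": "", "value": "ViT-H"},  # new in 1.4.1
    "weight": {"name": "weight", "label": "", "value": 0.5},
    "method": {"name": "method", "label": "", "value": "full"},  # new in 1.4.1
    "begin_step_percent": {"name": "begin_step_percent", "label": "", "value": 0},
    "end_step_percent": {"name": "end_step_percent", "label": "", "value": 0.8},
    "mask": {"name": "mask", "label": ""},  # new in 1.4.1
}
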
@ -2,7 +2,7 @@
"name": "Multi ControlNet (Canny & Depth)",
"author": "InvokeAI",
"description": "A sample workflow using canny & depth ControlNets to guide the generation process. ",
"version": "2.0.0",
"version": "2.1.0",
"contact": "invoke@invoke.ai",
"tags": "ControlNet, canny, depth",
"notes": "",
@ -37,140 +37,104 @@
}
],
"meta": {
"category": "default",
"version": "3.0.0"
"version": "3.0.0",
"category": "default"
},
"nodes": [
{
"id": "8e860e51-5045-456e-bf04-9a62a2a5c49e",
"id": "9db25398-c869-4a63-8815-c6559341ef12",
"type": "invocation",
"data": {
"id": "8e860e51-5045-456e-bf04-9a62a2a5c49e",
"version": "1.0.2",
"id": "9db25398-c869-4a63-8815-c6559341ef12",
"version": "1.3.0",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "image",
"type": "l2i",
"inputs": {
"image": {
"name": "image",
"label": "Depth Input Image"
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 3666.135718057363,
"y": 186.66887319822808
}
},
{
"id": "a33199c2-8340-401e-b8a2-42ffa875fc1c",
"type": "invocation",
"data": {
"id": "a33199c2-8340-401e-b8a2-42ffa875fc1c",
"version": "1.1.1",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "controlnet",
"inputs": {
"image": {
"name": "image",
"board": {
"name": "board",
"label": ""
},
"control_model": {
"name": "control_model",
"label": "Control Model (select depth)",
"value": {
"key": "87e8855c-671f-4c9e-bbbb-8ed47ccb4aac",
"hash": "blake3:2550bf22a53942dfa28ab2fed9d10d80851112531f44d977168992edf9d0534c",
"name": "control_v11f1p_sd15_depth",
"base": "sd-1",
"type": "controlnet"
}
"metadata": {
"name": "metadata",
"label": ""
},
"control_weight": {
"name": "control_weight",
"latents": {
"name": "latents",
"label": ""
},
"vae": {
"name": "vae",
"label": ""
},
"tiled": {
"name": "tiled",
"label": "",
"value": 1
"value": false
},
"begin_step_percent": {
"name": "begin_step_percent",
"tile_size": {
"name": "tile_size",
"label": "",
"value": 0
},
"end_step_percent": {
"name": "end_step_percent",
"fp32": {
"name": "fp32",
"label": "",
"value": 1
},
"control_mode": {
"name": "control_mode",
"label": "",
"value": "balanced"
},
"resize_mode": {
"name": "resize_mode",
"label": "",
"value": "just_resize"
"value": false
}
},
"isOpen": true,
"isIntermediate": true,
"isIntermediate": false,
"useCache": true
},
"position": {
"x": 4477.604342844504,
"y": -49.39005411272677
}
},
{
"id": "273e3f96-49ea-4dc5-9d5b-9660390f14e1",
"type": "invocation",
"data": {
"id": "273e3f96-49ea-4dc5-9d5b-9660390f14e1",
"version": "1.1.1",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "compel",
"inputs": {
"prompt": {
"name": "prompt",
"label": "Negative Prompt",
"value": ""
},
"clip": {
"name": "clip",
"label": ""
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 4075,
"x": 5675,
"y": -825
}
},
{
"id": "54486974-835b-4d81-8f82-05f9f32ce9e9",
"id": "c826ba5e-9676-4475-b260-07b85e88753c",
"type": "invocation",
"data": {
"id": "54486974-835b-4d81-8f82-05f9f32ce9e9",
"version": "1.0.2",
"id": "c826ba5e-9676-4475-b260-07b85e88753c",
"version": "1.3.3",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "main_model_loader",
"type": "canny_image_processor",
"inputs": {
"model": {
"name": "model",
"board": {
"name": "board",
"label": ""
},
"metadata": {
"name": "metadata",
"label": ""
},
"image": {
"name": "image",
"label": ""
},
"detect_resolution": {
"name": "detect_resolution",
"label": "",
"value": 512
},
"image_resolution": {
"name": "image_resolution",
"label": "",
"value": 512
},
"low_threshold": {
"name": "low_threshold",
"label": "",
"value": 100
},
"high_threshold": {
"name": "high_threshold",
"label": "",
"value": 200
}
},
"isOpen": true,
@ -178,29 +142,52 @@
"useCache": true
},
"position": {
"x": 3600,
"y": -1000
"x": 4095.757337055795,
"y": -455.63440891935863
}
},
{
"id": "7ce68934-3419-42d4-ac70-82cfc9397306",
"id": "018b1214-c2af-43a7-9910-fb687c6726d7",
"type": "invocation",
"data": {
"id": "7ce68934-3419-42d4-ac70-82cfc9397306",
"version": "1.1.1",
"id": "018b1214-c2af-43a7-9910-fb687c6726d7",
"version": "1.2.4",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "compel",
"type": "midas_depth_image_processor",
"inputs": {
"prompt": {
"name": "prompt",
"label": "Positive Prompt",
"value": ""
},
"clip": {
"name": "clip",
"board": {
"name": "board",
"label": ""
},
"metadata": {
"name": "metadata",
"label": ""
},
"image": {
"name": "image",
"label": ""
},
"a_mult": {
"name": "a_mult",
"label": "",
"value": 2
},
"bg_th": {
"name": "bg_th",
"label": "",
"value": 0.1
},
"detect_resolution": {
"name": "detect_resolution",
"label": "",
"value": 512
},
"image_resolution": {
"name": "image_resolution",
"label": "",
"value": 512
}
},
"isOpen": true,
@ -208,8 +195,8 @@
"useCache": true
},
"position": {
"x": 4075,
"y": -1125
"x": 4082.783145980783,
"y": 0.01629251229994111
}
},
{
@ -217,7 +204,7 @@
"type": "invocation",
"data": {
"id": "d204d184-f209-4fae-a0a1-d152800844e1",
"version": "1.1.1",
"version": "1.1.2",
"nodePack": "invokeai",
"label": "",
"notes": "",
@ -273,6 +260,185 @@
"y": -618.4221638099414
}
},
{
"id": "7ce68934-3419-42d4-ac70-82cfc9397306",
"type": "invocation",
"data": {
"id": "7ce68934-3419-42d4-ac70-82cfc9397306",
"version": "1.2.0",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "compel",
"inputs": {
"prompt": {
"name": "prompt",
"label": "Positive Prompt",
"value": ""
},
"clip": {
"name": "clip",
"label": ""
},
"mask": {
"name": "mask",
"label": ""
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 4075,
"y": -1125
}
},
{
"id": "54486974-835b-4d81-8f82-05f9f32ce9e9",
"type": "invocation",
"data": {
"id": "54486974-835b-4d81-8f82-05f9f32ce9e9",
"version": "1.0.3",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "main_model_loader",
"inputs": {
"model": {
"name": "model",
"label": ""
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 3600,
"y": -1000
}
},
{
"id": "273e3f96-49ea-4dc5-9d5b-9660390f14e1",
"type": "invocation",
"data": {
"id": "273e3f96-49ea-4dc5-9d5b-9660390f14e1",
"version": "1.2.0",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "compel",
"inputs": {
"prompt": {
"name": "prompt",
"label": "Negative Prompt",
"value": ""
},
"clip": {
"name": "clip",
"label": ""
},
"mask": {
"name": "mask",
"label": ""
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 4075,
"y": -825
}
},
{
"id": "a33199c2-8340-401e-b8a2-42ffa875fc1c",
"type": "invocation",
"data": {
"id": "a33199c2-8340-401e-b8a2-42ffa875fc1c",
"version": "1.1.2",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "controlnet",
"inputs": {
"image": {
"name": "image",
"label": ""
},
"control_model": {
"name": "control_model",
"label": "Control Model (select depth)",
"value": {
"key": "87e8855c-671f-4c9e-bbbb-8ed47ccb4aac",
"hash": "blake3:2550bf22a53942dfa28ab2fed9d10d80851112531f44d977168992edf9d0534c",
"name": "control_v11f1p_sd15_depth",
"base": "sd-1",
"type": "controlnet"
}
},
"control_weight": {
"name": "control_weight",
"label": "",
"value": 1
},
"begin_step_percent": {
"name": "begin_step_percent",
"label": "",
"value": 0
},
"end_step_percent": {
"name": "end_step_percent",
"label": "",
"value": 1
},
"control_mode": {
"name": "control_mode",
"label": "",
"value": "balanced"
},
"resize_mode": {
"name": "resize_mode",
"label": "",
"value": "just_resize"
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 4477.604342844504,
"y": -49.39005411272677
}
},
{
"id": "8e860e51-5045-456e-bf04-9a62a2a5c49e",
"type": "invocation",
"data": {
"id": "8e860e51-5045-456e-bf04-9a62a2a5c49e",
"version": "1.0.2",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "image",
"inputs": {
"image": {
"name": "image",
"label": "Depth Input Image"
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 3666.135718057363,
"y": 186.66887319822808
}
},
{
"id": "c4b23e64-7986-40c4-9cad-46327b12e204",
"type": "invocation",
@ -322,159 +488,6 @@
"y": -575
}
},
{
"id": "018b1214-c2af-43a7-9910-fb687c6726d7",
"type": "invocation",
"data": {
"id": "018b1214-c2af-43a7-9910-fb687c6726d7",
"version": "1.2.3",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "midas_depth_image_processor",
"inputs": {
"board": {
"name": "board",
"label": ""
},
"metadata": {
"name": "metadata",
"label": ""
},
"image": {
"name": "image",
"label": ""
},
"a_mult": {
"name": "a_mult",
"label": "",
"value": 2
},
"bg_th": {
"name": "bg_th",
"label": "",
"value": 0.1
},
"detect_resolution": {
"name": "detect_resolution",
"label": "",
"value": 512
},
"image_resolution": {
"name": "image_resolution",
"label": "",
"value": 512
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 4082.783145980783,
"y": 0.01629251229994111
}
},
{
"id": "c826ba5e-9676-4475-b260-07b85e88753c",
"type": "invocation",
"data": {
"id": "c826ba5e-9676-4475-b260-07b85e88753c",
"version": "1.3.2",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "canny_image_processor",
"inputs": {
"board": {
"name": "board",
"label": ""
},
"metadata": {
"name": "metadata",
"label": ""
},
"image": {
"name": "image",
"label": ""
},
"detect_resolution": {
"name": "detect_resolution",
"label": "",
"value": 512
},
"image_resolution": {
"name": "image_resolution",
"label": "",
"value": 512
},
"low_threshold": {
"name": "low_threshold",
"label": "",
"value": 100
},
"high_threshold": {
"name": "high_threshold",
"label": "",
"value": 200
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 4095.757337055795,
"y": -455.63440891935863
}
},
{
"id": "9db25398-c869-4a63-8815-c6559341ef12",
"type": "invocation",
"data": {
"id": "9db25398-c869-4a63-8815-c6559341ef12",
"version": "1.2.2",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "l2i",
"inputs": {
"board": {
"name": "board",
"label": ""
},
"metadata": {
"name": "metadata",
"label": ""
},
"latents": {
"name": "latents",
"label": ""
},
"vae": {
"name": "vae",
"label": ""
},
"tiled": {
"name": "tiled",
"label": "",
"value": false
},
"fp32": {
"name": "fp32",
"label": "",
"value": false
}
},
"isOpen": true,
"isIntermediate": false,
"useCache": true
},
"position": {
"x": 5675,
"y": -825
}
},
{
"id": "ac481b7f-08bf-4a9d-9e0c-3a82ea5243ce",
"type": "invocation",

File diff suppressed because it is too large

@ -2,7 +2,7 @@
"name": "Prompt from File",
"author": "InvokeAI",
"description": "Sample workflow using Prompt from File node",
"version": "2.0.0",
"version": "2.1.0",
"contact": "invoke@invoke.ai",
"tags": "text2image, prompt from file, default",
"notes": "",
@ -37,16 +37,68 @@
}
],
"meta": {
"category": "default",
"version": "3.0.0"
"version": "3.0.0",
"category": "default"
},
"nodes": [
{
"id": "c2eaf1ba-5708-4679-9e15-945b8b432692",
"id": "491ec988-3c77-4c37-af8a-39a0c4e7a2a1",
"type": "invocation",
"data": {
"id": "c2eaf1ba-5708-4679-9e15-945b8b432692",
"version": "1.1.1",
"id": "491ec988-3c77-4c37-af8a-39a0c4e7a2a1",
"version": "1.3.0",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "l2i",
"inputs": {
"board": {
"name": "board",
"label": ""
},
"metadata": {
"name": "metadata",
"label": ""
},
"latents": {
"name": "latents",
"label": ""
},
"vae": {
"name": "vae",
"label": ""
},
"tiled": {
"name": "tiled",
"label": "",
"value": false
},
"tile_size": {
"name": "tile_size",
"label": "",
"value": 0
},
"fp32": {
"name": "fp32",
"label": "",
"value": false
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 2037.861329274915,
"y": -329.8393457509562
}
},
{
"id": "fc9d0e35-a6de-4a19-84e1-c72497c823f6",
"type": "invocation",
"data": {
"id": "fc9d0e35-a6de-4a19-84e1-c72497c823f6",
"version": "1.2.0",
"nodePack": "invokeai",
"label": "",
"notes": "",
@ -60,6 +112,69 @@
"clip": {
"name": "clip",
"label": ""
},
"mask": {
"name": "mask",
"label": ""
}
},
"isOpen": false,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 925,
"y": -275
}
},
{
"id": "d6353b7f-b447-4e17-8f2e-80a88c91d426",
"type": "invocation",
"data": {
"id": "d6353b7f-b447-4e17-8f2e-80a88c91d426",
"version": "1.0.3",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "main_model_loader",
"inputs": {
"model": {
"name": "model",
"label": ""
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 0,
"y": -375
}
},
{
"id": "c2eaf1ba-5708-4679-9e15-945b8b432692",
"type": "invocation",
"data": {
"id": "c2eaf1ba-5708-4679-9e15-945b8b432692",
"version": "1.2.0",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "compel",
"inputs": {
"prompt": {
"name": "prompt",
"label": "",
"value": ""
},
"clip": {
"name": "clip",
"label": ""
},
"mask": {
"name": "mask",
"label": ""
}
},
"isOpen": false,
@ -141,61 +256,6 @@
"y": -400
}
},
{
"id": "d6353b7f-b447-4e17-8f2e-80a88c91d426",
"type": "invocation",
"data": {
"id": "d6353b7f-b447-4e17-8f2e-80a88c91d426",
"version": "1.0.2",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "main_model_loader",
"inputs": {
"model": {
"name": "model",
"label": ""
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 0,
"y": -375
}
},
{
"id": "fc9d0e35-a6de-4a19-84e1-c72497c823f6",
"type": "invocation",
"data": {
"id": "fc9d0e35-a6de-4a19-84e1-c72497c823f6",
"version": "1.1.1",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "compel",
"inputs": {
"prompt": {
"name": "prompt",
"label": "",
"value": ""
},
"clip": {
"name": "clip",
"label": ""
}
},
"isOpen": false,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 925,
"y": -275
}
},
{
"id": "0eb5f3f5-1b91-49eb-9ef0-41d67c7eae77",
"type": "invocation",
@ -268,53 +328,6 @@
"y": -50
}
},
{
"id": "491ec988-3c77-4c37-af8a-39a0c4e7a2a1",
"type": "invocation",
"data": {
"id": "491ec988-3c77-4c37-af8a-39a0c4e7a2a1",
"version": "1.2.2",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "l2i",
"inputs": {
"board": {
"name": "board",
"label": ""
},
"metadata": {
"name": "metadata",
"label": ""
},
"latents": {
"name": "latents",
"label": ""
},
"vae": {
"name": "vae",
"label": ""
},
"tiled": {
"name": "tiled",
"label": "",
"value": false
},
"fp32": {
"name": "fp32",
"label": "",
"value": false
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 2037.861329274915,
"y": -329.8393457509562
}
},
{
"id": "2fb1577f-0a56-4f12-8711-8afcaaaf1d5e",
"type": "invocation",

@ -2,7 +2,7 @@
"name": "Text to Image - SD1.5",
"author": "InvokeAI",
"description": "Sample text to image workflow for Stable Diffusion 1.5/2",
"version": "2.0.0",
"version": "2.1.0",
"contact": "invoke@invoke.ai",
"tags": "text2image, SD1.5, SD2, default",
"notes": "",
@ -33,16 +33,127 @@
}
],
"meta": {
"category": "default",
"version": "3.0.0"
"version": "3.0.0",
"category": "default"
},
"nodes": [
{
"id": "58c957f5-0d01-41fc-a803-b2bbf0413d4f",
"type": "invocation",
"data": {
"id": "58c957f5-0d01-41fc-a803-b2bbf0413d4f",
"version": "1.3.0",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "l2i",
"inputs": {
"board": {
"name": "board",
"label": ""
},
"metadata": {
"name": "metadata",
"label": ""
},
"latents": {
"name": "latents",
"label": ""
},
"vae": {
"name": "vae",
"label": ""
},
"tiled": {
"name": "tiled",
"label": "",
"value": false
},
"tile_size": {
"name": "tile_size",
"label": "",
"value": 0
},
"fp32": {
"name": "fp32",
"label": "",
"value": true
}
},
"isOpen": true,
"isIntermediate": false,
"useCache": true
},
"position": {
"x": 1800,
"y": 25
}
},
{
"id": "7d8bf987-284f-413a-b2fd-d825445a5d6c",
"type": "invocation",
"data": {
"id": "7d8bf987-284f-413a-b2fd-d825445a5d6c",
"version": "1.2.0",
"nodePack": "invokeai",
"label": "Positive Compel Prompt",
"notes": "",
"type": "compel",
"inputs": {
"prompt": {
"name": "prompt",
"label": "Positive Prompt",
"value": "Super cute tiger cub, national geographic award-winning photograph"
},
"clip": {
"name": "clip",
"label": ""
},
"mask": {
"name": "mask",
"label": ""
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 1000,
"y": 25
}
},
{
"id": "c8d55139-f380-4695-b7f2-8b3d1e1e3db8",
"type": "invocation",
"data": {
"id": "c8d55139-f380-4695-b7f2-8b3d1e1e3db8",
"version": "1.0.3",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "main_model_loader",
"inputs": {
"model": {
"name": "model",
"label": ""
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 600,
"y": 25
}
},
{
"id": "93dc02a4-d05b-48ed-b99c-c9b616af3402",
"type": "invocation",
"data": {
"id": "93dc02a4-d05b-48ed-b99c-c9b616af3402",
"version": "1.1.1",
"version": "1.2.0",
"nodePack": "invokeai",
"label": "Negative Compel Prompt",
"notes": "",
@ -56,6 +167,10 @@
"clip": {
"name": "clip",
"label": ""
},
"mask": {
"name": "mask",
"label": ""
}
},
"isOpen": true,
@ -108,61 +223,6 @@
"y": 325
}
},
{
"id": "c8d55139-f380-4695-b7f2-8b3d1e1e3db8",
"type": "invocation",
"data": {
"id": "c8d55139-f380-4695-b7f2-8b3d1e1e3db8",
"version": "1.0.2",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "main_model_loader",
"inputs": {
"model": {
"name": "model",
"label": ""
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 600,
"y": 25
}
},
{
"id": "7d8bf987-284f-413a-b2fd-d825445a5d6c",
"type": "invocation",
"data": {
"id": "7d8bf987-284f-413a-b2fd-d825445a5d6c",
"version": "1.1.1",
"nodePack": "invokeai",
"label": "Positive Compel Prompt",
"notes": "",
"type": "compel",
"inputs": {
"prompt": {
"name": "prompt",
"label": "Positive Prompt",
"value": "Super cute tiger cub, national geographic award-winning photograph"
},
"clip": {
"name": "clip",
"label": ""
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 1000,
"y": 25
}
},
{
"id": "ea94bc37-d995-4a83-aa99-4af42479f2f2",
"type": "invocation",
@ -280,53 +340,6 @@
"x": 1400,
"y": 25
}
},
{
"id": "58c957f5-0d01-41fc-a803-b2bbf0413d4f",
"type": "invocation",
"data": {
"id": "58c957f5-0d01-41fc-a803-b2bbf0413d4f",
"version": "1.2.2",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "l2i",
"inputs": {
"board": {
"name": "board",
"label": ""
},
"metadata": {
"name": "metadata",
"label": ""
},
"latents": {
"name": "latents",
"label": ""
},
"vae": {
"name": "vae",
"label": ""
},
"tiled": {
"name": "tiled",
"label": "",
"value": false
},
"fp32": {
"name": "fp32",
"label": "",
"value": true
}
},
"isOpen": true,
"isIntermediate": false,
"useCache": true
},
"position": {
"x": 1800,
"y": 25
}
}
],
"edges": [

@ -2,7 +2,7 @@
  "name": "Text to Image - SDXL",
  "author": "InvokeAI",
  "description": "Sample text to image workflow for SDXL",
  "version": "2.0.0",
  "version": "2.1.0",
  "contact": "invoke@invoke.ai",
  "tags": "text2image, SDXL, default",
  "notes": "",
@ -29,10 +29,271 @@
    }
  ],
  "meta": {
    "category": "default",
    "version": "3.0.0"
    "version": "3.0.0",
    "category": "default"
  },
  "nodes": [
    {
      "id": "0093692f-9cf4-454d-a5b8-62f0e3eb3bb8",
      "type": "invocation",
      "data": {
        "id": "0093692f-9cf4-454d-a5b8-62f0e3eb3bb8",
        "version": "1.0.3",
        "label": "",
        "notes": "",
        "type": "vae_loader",
        "inputs": {
          "vae_model": {
            "name": "vae_model",
            "label": "VAE (use the FP16 model)",
            "value": {
              "key": "f20f9e5c-1bce-4c46-a84d-34ebfa7df069",
              "hash": "blake3:9705ab1c31fa96b308734214fb7571a958621c7a9247eed82b7d277145f8d9fa",
              "name": "sdxl-vae-fp16-fix",
              "base": "sdxl",
              "type": "vae"
            }
          }
        },
        "isOpen": true,
        "isIntermediate": true,
        "useCache": true
      },
      "position": {
        "x": 375,
        "y": -225
      }
    },
    {
      "id": "63e91020-83b2-4f35-b174-ad9692aabb48",
      "type": "invocation",
      "data": {
        "id": "63e91020-83b2-4f35-b174-ad9692aabb48",
        "version": "1.3.0",
        "nodePack": "invokeai",
        "label": "",
        "notes": "",
        "type": "l2i",
        "inputs": {
          "board": {
            "name": "board",
            "label": ""
          },
          "metadata": {
            "name": "metadata",
            "label": ""
          },
          "latents": {
            "name": "latents",
            "label": ""
          },
          "vae": {
            "name": "vae",
            "label": ""
          },
          "tiled": {
            "name": "tiled",
            "label": "",
            "value": false
          },
          "tile_size": {
            "name": "tile_size",
            "label": "",
            "value": 0
          },
          "fp32": {
            "name": "fp32",
            "label": "",
            "value": false
          }
        },
        "isOpen": true,
        "isIntermediate": false,
        "useCache": false
      },
      "position": {
        "x": 1475,
        "y": -500
      }
    },
    {
      "id": "faf965a4-7530-427b-b1f3-4ba6505c2a08",
      "type": "invocation",
      "data": {
        "id": "faf965a4-7530-427b-b1f3-4ba6505c2a08",
        "version": "1.2.0",
        "nodePack": "invokeai",
        "label": "SDXL Positive Compel Prompt",
        "notes": "",
        "type": "sdxl_compel_prompt",
        "inputs": {
          "prompt": {
            "name": "prompt",
            "label": "Positive Prompt",
            "value": ""
          },
          "style": {
            "name": "style",
            "label": "Positive Style",
            "value": ""
          },
          "original_width": {
            "name": "original_width",
            "label": "",
            "value": 1024
          },
          "original_height": {
            "name": "original_height",
            "label": "",
            "value": 1024
          },
          "crop_top": {
            "name": "crop_top",
            "label": "",
            "value": 0
          },
          "crop_left": {
            "name": "crop_left",
            "label": "",
            "value": 0
          },
          "target_width": {
            "name": "target_width",
            "label": "",
            "value": 1024
          },
          "target_height": {
            "name": "target_height",
            "label": "",
            "value": 1024
          },
          "clip": {
            "name": "clip",
            "label": ""
          },
          "clip2": {
            "name": "clip2",
            "label": ""
          },
          "mask": {
            "name": "mask",
            "label": ""
          }
        },
        "isOpen": false,
        "isIntermediate": true,
        "useCache": true
      },
      "position": {
        "x": 750,
        "y": -175
      }
    },
    {
      "id": "30d3289c-773c-4152-a9d2-bd8a99c8fd22",
      "type": "invocation",
      "data": {
        "id": "30d3289c-773c-4152-a9d2-bd8a99c8fd22",
        "version": "1.0.3",
        "nodePack": "invokeai",
        "label": "",
        "notes": "",
        "type": "sdxl_model_loader",
        "inputs": {
          "model": {
            "name": "model",
            "label": "",
            "value": {
              "key": "4a63b226-e8ff-4da4-854e-0b9f04b562ba",
              "hash": "blake3:d279309ea6e5ee6e8fd52504275865cc280dac71cbf528c5b07c98b888bddaba",
              "name": "dreamshaper-xl-v2-turbo",
              "base": "sdxl",
              "type": "main"
            }
          }
        },
        "isOpen": true,
        "isIntermediate": true,
        "useCache": true
      },
      "position": {
        "x": 375,
        "y": -500
      }
    },
    {
      "id": "3193ad09-a7c2-4bf4-a3a9-1c61cc33a204",
      "type": "invocation",
      "data": {
        "id": "3193ad09-a7c2-4bf4-a3a9-1c61cc33a204",
        "version": "1.2.0",
        "nodePack": "invokeai",
        "label": "SDXL Negative Compel Prompt",
        "notes": "",
        "type": "sdxl_compel_prompt",
        "inputs": {
          "prompt": {
            "name": "prompt",
            "label": "Negative Prompt",
            "value": ""
          },
          "style": {
            "name": "style",
            "label": "Negative Style",
            "value": ""
          },
          "original_width": {
            "name": "original_width",
            "label": "",
            "value": 1024
          },
          "original_height": {
            "name": "original_height",
            "label": "",
            "value": 1024
          },
          "crop_top": {
            "name": "crop_top",
            "label": "",
            "value": 0
          },
          "crop_left": {
            "name": "crop_left",
            "label": "",
            "value": 0
          },
          "target_width": {
            "name": "target_width",
            "label": "",
            "value": 1024
          },
          "target_height": {
            "name": "target_height",
            "label": "",
            "value": 1024
          },
          "clip": {
            "name": "clip",
            "label": ""
          },
          "clip2": {
            "name": "clip2",
            "label": ""
          },
          "mask": {
            "name": "mask",
            "label": ""
          }
        },
        "isOpen": false,
        "isIntermediate": true,
        "useCache": true
      },
      "position": {
        "x": 750,
        "y": 200
      }
    },
    {
      "id": "3774ec24-a69e-4254-864c-097d07a6256f",
      "type": "invocation",
@ -88,75 +349,6 @@
        "y": -125
      }
    },
    {
      "id": "3193ad09-a7c2-4bf4-a3a9-1c61cc33a204",
      "type": "invocation",
      "data": {
        "id": "3193ad09-a7c2-4bf4-a3a9-1c61cc33a204",
        "version": "1.1.1",
        "nodePack": "invokeai",
        "label": "SDXL Negative Compel Prompt",
        "notes": "",
        "type": "sdxl_compel_prompt",
        "inputs": {
          "prompt": {
            "name": "prompt",
            "label": "Negative Prompt",
            "value": ""
          },
          "style": {
            "name": "style",
            "label": "Negative Style",
            "value": ""
          },
          "original_width": {
            "name": "original_width",
            "label": "",
            "value": 1024
          },
          "original_height": {
            "name": "original_height",
            "label": "",
            "value": 1024
          },
          "crop_top": {
            "name": "crop_top",
            "label": "",
            "value": 0
          },
          "crop_left": {
            "name": "crop_left",
            "label": "",
            "value": 0
          },
          "target_width": {
            "name": "target_width",
            "label": "",
            "value": 1024
          },
          "target_height": {
            "name": "target_height",
            "label": "",
            "value": 1024
          },
          "clip": {
            "name": "clip",
            "label": ""
          },
          "clip2": {
            "name": "clip2",
            "label": ""
          }
        },
        "isOpen": false,
        "isIntermediate": true,
        "useCache": true
      },
      "position": {
        "x": 750,
        "y": 200
      }
    },
    {
      "id": "55705012-79b9-4aac-9f26-c0b10309785b",
      "type": "invocation",
@ -229,154 +421,6 @@
        "y": -50
      }
    },
    {
      "id": "30d3289c-773c-4152-a9d2-bd8a99c8fd22",
      "type": "invocation",
      "data": {
        "id": "30d3289c-773c-4152-a9d2-bd8a99c8fd22",
        "version": "1.0.2",
        "nodePack": "invokeai",
        "label": "",
        "notes": "",
        "type": "sdxl_model_loader",
        "inputs": {
          "model": {
            "name": "model",
            "label": "",
            "value": {
              "key": "4a63b226-e8ff-4da4-854e-0b9f04b562ba",
              "hash": "blake3:d279309ea6e5ee6e8fd52504275865cc280dac71cbf528c5b07c98b888bddaba",
              "name": "dreamshaper-xl-v2-turbo",
              "base": "sdxl",
              "type": "main"
            }
          }
        },
        "isOpen": true,
        "isIntermediate": true,
        "useCache": true
      },
      "position": {
        "x": 375,
        "y": -500
      }
    },
    {
      "id": "faf965a4-7530-427b-b1f3-4ba6505c2a08",
      "type": "invocation",
      "data": {
        "id": "faf965a4-7530-427b-b1f3-4ba6505c2a08",
        "version": "1.1.1",
        "nodePack": "invokeai",
        "label": "SDXL Positive Compel Prompt",
        "notes": "",
        "type": "sdxl_compel_prompt",
        "inputs": {
          "prompt": {
            "name": "prompt",
            "label": "Positive Prompt",
            "value": ""
          },
          "style": {
            "name": "style",
            "label": "Positive Style",
            "value": ""
          },
          "original_width": {
            "name": "original_width",
            "label": "",
            "value": 1024
          },
          "original_height": {
            "name": "original_height",
            "label": "",
            "value": 1024
          },
          "crop_top": {
            "name": "crop_top",
            "label": "",
            "value": 0
          },
          "crop_left": {
            "name": "crop_left",
            "label": "",
            "value": 0
          },
          "target_width": {
            "name": "target_width",
            "label": "",
            "value": 1024
          },
          "target_height": {
            "name": "target_height",
            "label": "",
            "value": 1024
          },
          "clip": {
            "name": "clip",
            "label": ""
          },
          "clip2": {
            "name": "clip2",
            "label": ""
          }
        },
        "isOpen": false,
        "isIntermediate": true,
        "useCache": true
      },
      "position": {
        "x": 750,
        "y": -175
      }
    },
    {
      "id": "63e91020-83b2-4f35-b174-ad9692aabb48",
      "type": "invocation",
      "data": {
        "id": "63e91020-83b2-4f35-b174-ad9692aabb48",
        "version": "1.2.2",
        "nodePack": "invokeai",
        "label": "",
        "notes": "",
        "type": "l2i",
        "inputs": {
          "board": {
            "name": "board",
            "label": ""
          },
          "metadata": {
            "name": "metadata",
            "label": ""
          },
          "latents": {
            "name": "latents",
            "label": ""
          },
          "vae": {
            "name": "vae",
            "label": ""
          },
          "tiled": {
            "name": "tiled",
            "label": "",
            "value": false
          },
          "fp32": {
            "name": "fp32",
            "label": "",
            "value": false
          }
        },
        "isOpen": true,
        "isIntermediate": false,
        "useCache": false
      },
      "position": {
        "x": 1475,
        "y": -500
      }
    },
    {
      "id": "50a36525-3c0a-4cc5-977c-e4bfc3fd6dfb",
      "type": "invocation",
@ -464,37 +508,6 @@
        "y": -500
      }
    },
    {
      "id": "0093692f-9cf4-454d-a5b8-62f0e3eb3bb8",
      "type": "invocation",
      "data": {
        "id": "0093692f-9cf4-454d-a5b8-62f0e3eb3bb8",
        "version": "1.0.2",
        "label": "",
        "notes": "",
        "type": "vae_loader",
        "inputs": {
          "vae_model": {
            "name": "vae_model",
            "label": "VAE (use the FP16 model)",
            "value": {
              "key": "f20f9e5c-1bce-4c46-a84d-34ebfa7df069",
              "hash": "blake3:9705ab1c31fa96b308734214fb7571a958621c7a9247eed82b7d277145f8d9fa",
              "name": "sdxl-vae-fp16-fix",
              "base": "sdxl",
              "type": "vae"
            }
          }
        },
        "isOpen": true,
        "isIntermediate": true,
        "useCache": true
      },
      "position": {
        "x": 375,
        "y": -225
      }
    },
    {
      "id": "ade2c0d3-0384-4157-b39b-29ce429cfa15",
      "type": "invocation",

@ -2,7 +2,7 @@
  "name": "Text to Image with LoRA",
  "author": "InvokeAI",
  "description": "Simple text to image workflow with a LoRA",
  "version": "2.0.0",
  "version": "2.1.0",
  "contact": "invoke@invoke.ai",
  "tags": "text to image, lora, default",
  "notes": "",
@ -37,28 +37,83 @@
    }
  ],
  "meta": {
    "category": "default",
    "version": "3.0.0"
    "version": "3.0.0",
    "category": "default"
  },
  "nodes": [
    {
      "id": "85b77bb2-c67a-416a-b3e8-291abe746c44",
      "id": "a9683c0a-6b1f-4a5e-8187-c57e764b3400",
      "type": "invocation",
      "data": {
        "id": "85b77bb2-c67a-416a-b3e8-291abe746c44",
        "version": "1.1.1",
        "id": "a9683c0a-6b1f-4a5e-8187-c57e764b3400",
        "version": "1.3.0",
        "label": "",
        "notes": "",
        "type": "l2i",
        "inputs": {
          "board": {
            "name": "board",
            "label": ""
          },
          "metadata": {
            "name": "metadata",
            "label": ""
          },
          "latents": {
            "name": "latents",
            "label": ""
          },
          "vae": {
            "name": "vae",
            "label": ""
          },
          "tiled": {
            "name": "tiled",
            "label": "",
            "value": false
          },
          "tile_size": {
            "name": "tile_size",
            "label": "",
            "value": 0
          },
          "fp32": {
            "name": "fp32",
            "label": "",
            "value": false
          }
        },
        "isOpen": true,
        "isIntermediate": false,
        "useCache": true
      },
      "position": {
        "x": 4450,
        "y": -550
      }
    },
    {
      "id": "c3fa6872-2599-4a82-a596-b3446a66cf8b",
      "type": "invocation",
      "data": {
        "id": "c3fa6872-2599-4a82-a596-b3446a66cf8b",
        "version": "1.2.0",
        "label": "",
        "notes": "",
        "type": "compel",
        "inputs": {
          "prompt": {
            "name": "prompt",
            "label": "Negative Prompt",
            "value": ""
            "label": "Positive Prompt",
            "value": "super cute tiger cub"
          },
          "clip": {
            "name": "clip",
            "label": ""
          },
          "mask": {
            "name": "mask",
            "label": ""
          }
        },
        "isOpen": true,
@ -67,31 +122,7 @@
      },
      "position": {
        "x": 3425,
        "y": -300
      }
    },
    {
      "id": "24e9d7ed-4836-4ec4-8f9e-e747721f9818",
      "type": "invocation",
      "data": {
        "id": "24e9d7ed-4836-4ec4-8f9e-e747721f9818",
        "version": "1.0.2",
        "label": "",
        "notes": "",
        "type": "main_model_loader",
        "inputs": {
          "model": {
            "name": "model",
            "label": ""
          }
        },
        "isOpen": true,
        "isIntermediate": true,
        "useCache": true
      },
      "position": {
        "x": 2500,
        "y": -600
        "y": -575
      }
    },
    {
@ -99,7 +130,7 @@
      "type": "invocation",
      "data": {
        "id": "c41e705b-f2e3-4d1a-83c4-e34bb9344966",
        "version": "1.0.2",
        "version": "1.0.3",
        "label": "",
        "notes": "",
        "type": "lora_loader",
@ -132,23 +163,51 @@
      }
    },
    {
      "id": "c3fa6872-2599-4a82-a596-b3446a66cf8b",
      "id": "24e9d7ed-4836-4ec4-8f9e-e747721f9818",
      "type": "invocation",
      "data": {
        "id": "c3fa6872-2599-4a82-a596-b3446a66cf8b",
        "version": "1.1.1",
        "id": "24e9d7ed-4836-4ec4-8f9e-e747721f9818",
        "version": "1.0.3",
        "label": "",
        "notes": "",
        "type": "main_model_loader",
        "inputs": {
          "model": {
            "name": "model",
            "label": ""
          }
        },
        "isOpen": true,
        "isIntermediate": true,
        "useCache": true
      },
      "position": {
        "x": 2500,
        "y": -600
      }
    },
    {
      "id": "85b77bb2-c67a-416a-b3e8-291abe746c44",
      "type": "invocation",
      "data": {
        "id": "85b77bb2-c67a-416a-b3e8-291abe746c44",
        "version": "1.2.0",
        "label": "",
        "notes": "",
        "type": "compel",
        "inputs": {
          "prompt": {
            "name": "prompt",
            "label": "Positive Prompt",
            "value": "super cute tiger cub"
            "label": "Negative Prompt",
            "value": ""
          },
          "clip": {
            "name": "clip",
            "label": ""
          },
          "mask": {
            "name": "mask",
            "label": ""
          }
        },
        "isOpen": true,
@ -157,7 +216,7 @@
      },
      "position": {
        "x": 3425,
        "y": -575
        "y": -300
      }
    },
    {
@ -315,52 +374,6 @@
        "x": 3425,
        "y": 0
      }
    },
    {
      "id": "a9683c0a-6b1f-4a5e-8187-c57e764b3400",
      "type": "invocation",
      "data": {
        "id": "a9683c0a-6b1f-4a5e-8187-c57e764b3400",
        "version": "1.2.2",
        "label": "",
        "notes": "",
        "type": "l2i",
        "inputs": {
          "board": {
            "name": "board",
            "label": ""
          },
          "metadata": {
            "name": "metadata",
            "label": ""
          },
          "latents": {
            "name": "latents",
            "label": ""
          },
          "vae": {
            "name": "vae",
            "label": ""
          },
          "tiled": {
            "name": "tiled",
            "label": "",
            "value": false
          },
          "fp32": {
            "name": "fp32",
            "label": "",
            "value": false
          }
        },
        "isOpen": true,
        "isIntermediate": false,
        "useCache": true
      },
      "position": {
        "x": 4450,
        "y": -550
      }
    }
  ],
  "edges": [

@ -2,7 +2,7 @@
  "name": "Tiled Upscaling (Beta)",
  "author": "Invoke",
  "description": "A workflow to upscale an input image with tiled upscaling. ",
  "version": "2.0.0",
  "version": "2.1.0",
  "contact": "invoke@invoke.ai",
  "tags": "tiled, upscaling, sd1.5",
  "notes": "",
@ -41,10 +41,318 @@
    }
  ],
  "meta": {
    "category": "default",
    "version": "3.0.0"
    "version": "3.0.0",
    "category": "default"
  },
  "nodes": [
    {
      "id": "2ff466b8-5e2a-4d8f-923a-a3884c7ecbc5",
      "type": "invocation",
      "data": {
        "id": "2ff466b8-5e2a-4d8f-923a-a3884c7ecbc5",
        "version": "1.0.3",
        "label": "",
        "notes": "",
        "type": "main_model_loader",
        "inputs": {
          "model": {
            "name": "model",
            "label": ""
          }
        },
        "isOpen": true,
        "isIntermediate": true,
        "useCache": true
      },
      "position": {
        "x": -4514.466823162653,
        "y": -1235.7908800002283
      }
    },
    {
      "id": "287f134f-da8d-41d1-884e-5940e8f7b816",
      "type": "invocation",
      "data": {
        "id": "287f134f-da8d-41d1-884e-5940e8f7b816",
        "version": "1.4.1",
        "label": "",
        "notes": "",
        "type": "ip_adapter",
        "inputs": {
          "image": {
            "name": "image",
            "label": ""
          },
          "ip_adapter_model": {
            "name": "ip_adapter_model",
            "label": "IP-Adapter Model (select ip_adapter_sd15)",
            "value": {
              "key": "1cc210bb-4d0a-4312-b36c-b5d46c43768e",
              "hash": "blake3:3d669dffa7471b357b4df088b99ffb6bf4d4383d5e0ef1de5ec1c89728a3d5a5",
              "name": "ip_adapter_sd15",
              "base": "sd-1",
              "type": "ip_adapter"
            }
          },
          "clip_vision_model": {
            "name": "clip_vision_model",
            "label": "",
            "value": "ViT-H"
          },
          "weight": {
            "name": "weight",
            "label": "",
            "value": 0.2
          },
          "method": {
            "name": "method",
            "label": "",
            "value": "full"
          },
          "begin_step_percent": {
            "name": "begin_step_percent",
            "label": "",
            "value": 0
          },
          "end_step_percent": {
            "name": "end_step_percent",
            "label": "",
            "value": 1
          },
          "mask": {
            "name": "mask",
            "label": ""
          }
        },
        "isOpen": true,
        "isIntermediate": true,
        "useCache": true
      },
      "position": {
        "x": -2855.8555540799207,
        "y": -183.58854843775742
      }
    },
    {
      "id": "b76fe66f-7884-43ad-b72c-fadc81d7a73c",
      "type": "invocation",
      "data": {
        "id": "b76fe66f-7884-43ad-b72c-fadc81d7a73c",
        "version": "1.3.0",
        "label": "",
        "notes": "",
        "type": "l2i",
        "inputs": {
          "board": {
            "name": "board",
            "label": ""
          },
          "metadata": {
            "name": "metadata",
            "label": ""
          },
          "latents": {
            "name": "latents",
            "label": ""
          },
          "vae": {
            "name": "vae",
            "label": ""
          },
          "tiled": {
            "name": "tiled",
            "label": "",
            "value": false
          },
          "tile_size": {
            "name": "tile_size",
            "label": "",
            "value": 0
          },
          "fp32": {
            "name": "fp32",
            "label": "",
            "value": false
          }
        },
        "isOpen": true,
        "isIntermediate": true,
        "useCache": true
      },
      "position": {
        "x": -1999.770193862987,
        "y": -1075
      }
    },
    {
      "id": "d334f2da-016a-4524-9911-bdab85546888",
      "type": "invocation",
      "data": {
        "id": "d334f2da-016a-4524-9911-bdab85546888",
        "version": "1.1.2",
        "label": "",
        "notes": "",
        "type": "controlnet",
        "inputs": {
          "image": {
            "name": "image",
            "label": ""
          },
          "control_model": {
            "name": "control_model",
            "label": "Control Model (select contro_v11f1e_sd15_tile)",
            "value": {
              "key": "773843c8-db1f-4502-8f65-59782efa7960",
              "hash": "blake3:f0812e13758f91baf4e54b7dbb707b70642937d3b2098cd2b94cc36d3eba308e",
              "name": "control_v11f1e_sd15_tile",
              "base": "sd-1",
              "type": "controlnet"
            }
          },
          "control_weight": {
            "name": "control_weight",
            "label": "",
            "value": 1
          },
          "begin_step_percent": {
            "name": "begin_step_percent",
            "label": "",
            "value": 0
          },
          "end_step_percent": {
            "name": "end_step_percent",
            "label": "Structural Control",
            "value": 1
          },
          "control_mode": {
            "name": "control_mode",
            "label": "",
            "value": "more_control"
          },
          "resize_mode": {
            "name": "resize_mode",
            "label": "",
            "value": "just_resize"
          }
        },
        "isOpen": true,
        "isIntermediate": true,
        "useCache": true
      },
      "position": {
        "x": -2481.9569385477016,
        "y": -181.06590482739782
      }
    },
    {
      "id": "338b883c-3728-4f18-b3a6-6e7190c2f850",
      "type": "invocation",
      "data": {
        "id": "338b883c-3728-4f18-b3a6-6e7190c2f850",
        "version": "1.1.0",
        "label": "",
        "notes": "",
        "type": "i2l",
        "inputs": {
          "image": {
            "name": "image",
            "label": ""
          },
          "vae": {
            "name": "vae",
            "label": ""
          },
          "tiled": {
            "name": "tiled",
            "label": "",
            "value": false
          },
          "tile_size": {
            "name": "tile_size",
            "label": "",
            "value": 0
          },
          "fp32": {
            "name": "fp32",
            "label": "",
            "value": false
          }
        },
        "isOpen": false,
        "isIntermediate": true,
        "useCache": true
      },
      "position": {
        "x": -2908.4791167517287,
        "y": -408.87504820159086
      }
    },
    {
      "id": "947c3f88-0305-4695-8355-df4abac64b1c",
      "type": "invocation",
      "data": {
        "id": "947c3f88-0305-4695-8355-df4abac64b1c",
        "version": "1.2.0",
        "label": "",
        "notes": "",
        "type": "compel",
        "inputs": {
          "prompt": {
            "name": "prompt",
            "label": "",
            "value": ""
          },
          "clip": {
            "name": "clip",
            "label": ""
          },
          "mask": {
            "name": "mask",
            "label": ""
          }
        },
        "isOpen": true,
        "isIntermediate": true,
        "useCache": true
      },
      "position": {
        "x": -4014.4136788915944,
        "y": -968.5677253775948
      }
    },
    {
      "id": "9b2d8c58-ce8f-4162-a5a1-48de854040d6",
      "type": "invocation",
      "data": {
        "id": "9b2d8c58-ce8f-4162-a5a1-48de854040d6",
        "version": "1.2.0",
        "label": "",
        "notes": "",
        "type": "compel",
        "inputs": {
          "prompt": {
            "name": "prompt",
            "label": "Positive Prompt",
            "value": ""
          },
          "clip": {
            "name": "clip",
            "label": ""
          },
          "mask": {
            "name": "mask",
            "label": ""
          }
        },
        "isOpen": true,
        "isIntermediate": true,
        "useCache": true
      },
      "position": {
        "x": -4014.4136788915944,
        "y": -1243.5677253775948
      }
    },
    {
      "id": "b875cae6-d8a3-4fdc-b969-4d53cbd03f9a",
      "type": "invocation",
@ -181,64 +489,6 @@
        "y": 3.422855503409039
      }
    },
    {
      "id": "9b2d8c58-ce8f-4162-a5a1-48de854040d6",
      "type": "invocation",
      "data": {
        "id": "9b2d8c58-ce8f-4162-a5a1-48de854040d6",
        "version": "1.1.1",
        "label": "",
        "notes": "",
        "type": "compel",
        "inputs": {
          "prompt": {
            "name": "prompt",
            "label": "Positive Prompt",
            "value": ""
          },
          "clip": {
            "name": "clip",
            "label": ""
          }
        },
        "isOpen": true,
        "isIntermediate": true,
        "useCache": true
      },
      "position": {
        "x": -4014.4136788915944,
        "y": -1243.5677253775948
      }
    },
    {
      "id": "947c3f88-0305-4695-8355-df4abac64b1c",
      "type": "invocation",
      "data": {
        "id": "947c3f88-0305-4695-8355-df4abac64b1c",
        "version": "1.1.1",
        "label": "",
        "notes": "",
        "type": "compel",
        "inputs": {
          "prompt": {
            "name": "prompt",
            "label": "",
            "value": ""
          },
          "clip": {
            "name": "clip",
            "label": ""
          }
        },
        "isOpen": true,
        "isIntermediate": true,
        "useCache": true
      },
      "position": {
        "x": -4014.4136788915944,
        "y": -968.5677253775948
      }
    },
    {
      "id": "b3513fed-ed42-408d-b382-128fdb0de523",
      "type": "invocation",
@ -379,104 +629,6 @@
        "y": -29.08699277598673
      }
    },
    {
      "id": "338b883c-3728-4f18-b3a6-6e7190c2f850",
      "type": "invocation",
      "data": {
        "id": "338b883c-3728-4f18-b3a6-6e7190c2f850",
        "version": "1.0.2",
        "label": "",
        "notes": "",
        "type": "i2l",
        "inputs": {
          "image": {
            "name": "image",
            "label": ""
          },
          "vae": {
            "name": "vae",
            "label": ""
          },
          "tiled": {
            "name": "tiled",
            "label": "",
            "value": false
          },
          "fp32": {
            "name": "fp32",
            "label": "",
            "value": false
          }
        },
        "isOpen": false,
        "isIntermediate": true,
        "useCache": true
      },
      "position": {
        "x": -2908.4791167517287,
        "y": -408.87504820159086
      }
    },
    {
      "id": "d334f2da-016a-4524-9911-bdab85546888",
      "type": "invocation",
      "data": {
        "id": "d334f2da-016a-4524-9911-bdab85546888",
        "version": "1.1.1",
        "label": "",
        "notes": "",
        "type": "controlnet",
        "inputs": {
          "image": {
            "name": "image",
            "label": ""
          },
          "control_model": {
            "name": "control_model",
            "label": "Control Model (select contro_v11f1e_sd15_tile)",
            "value": {
              "key": "773843c8-db1f-4502-8f65-59782efa7960",
              "hash": "blake3:f0812e13758f91baf4e54b7dbb707b70642937d3b2098cd2b94cc36d3eba308e",
              "name": "control_v11f1e_sd15_tile",
              "base": "sd-1",
              "type": "controlnet"
            }
          },
          "control_weight": {
            "name": "control_weight",
            "label": "",
            "value": 1
          },
          "begin_step_percent": {
            "name": "begin_step_percent",
            "label": "",
            "value": 0
          },
          "end_step_percent": {
            "name": "end_step_percent",
            "label": "Structural Control",
            "value": 1
          },
          "control_mode": {
            "name": "control_mode",
            "label": "",
            "value": "more_control"
          },
          "resize_mode": {
            "name": "resize_mode",
            "label": "",
            "value": "just_resize"
          }
        },
        "isOpen": true,
        "isIntermediate": true,
        "useCache": true
      },
      "position": {
        "x": -2481.9569385477016,
        "y": -181.06590482739782
      }
    },
    {
      "id": "1011539e-85de-4e02-a003-0b22358491b8",
      "type": "invocation",
@ -563,52 +715,6 @@
        "y": -1006.415909408244
      }
    },
    {
      "id": "b76fe66f-7884-43ad-b72c-fadc81d7a73c",
      "type": "invocation",
      "data": {
        "id": "b76fe66f-7884-43ad-b72c-fadc81d7a73c",
        "version": "1.2.2",
        "label": "",
        "notes": "",
        "type": "l2i",
        "inputs": {
          "board": {
            "name": "board",
            "label": ""
          },
          "metadata": {
            "name": "metadata",
            "label": ""
          },
          "latents": {
            "name": "latents",
            "label": ""
          },
          "vae": {
            "name": "vae",
            "label": ""
          },
          "tiled": {
            "name": "tiled",
            "label": "",
            "value": false
          },
          "fp32": {
            "name": "fp32",
            "label": "",
            "value": false
          }
        },
        "isOpen": true,
        "isIntermediate": true,
        "useCache": true
      },
      "position": {
        "x": -1999.770193862987,
        "y": -1075
      }
    },
    {
      "id": "ab6f5dda-4b60-4ddf-99f2-f61fb5937527",
      "type": "invocation",
@ -779,56 +885,6 @@
        "y": -78.2819050861178
      }
    },
    {
      "id": "287f134f-da8d-41d1-884e-5940e8f7b816",
      "type": "invocation",
      "data": {
        "id": "287f134f-da8d-41d1-884e-5940e8f7b816",
        "version": "1.2.2",
        "label": "",
        "notes": "",
        "type": "ip_adapter",
        "inputs": {
          "image": {
            "name": "image",
            "label": ""
          },
          "ip_adapter_model": {
            "name": "ip_adapter_model",
            "label": "IP-Adapter Model (select ip_adapter_sd15)",
            "value": {
              "key": "1cc210bb-4d0a-4312-b36c-b5d46c43768e",
              "hash": "blake3:3d669dffa7471b357b4df088b99ffb6bf4d4383d5e0ef1de5ec1c89728a3d5a5",
              "name": "ip_adapter_sd15",
              "base": "sd-1",
              "type": "ip_adapter"
            }
          },
          "weight": {
            "name": "weight",
            "label": "",
            "value": 0.2
          },
          "begin_step_percent": {
            "name": "begin_step_percent",
            "label": "",
            "value": 0
          },
          "end_step_percent": {
            "name": "end_step_percent",
            "label": "",
            "value": 1
          }
        },
        "isOpen": true,
        "isIntermediate": true,
        "useCache": true
      },
      "position": {
        "x": -2855.8555540799207,
        "y": -183.58854843775742
      }
    },
    {
      "id": "1f86c8bf-06f9-4e28-abee-02f46f445ac4",
      "type": "invocation",
@ -899,30 +955,6 @@
        "y": -41.810810454906914
      }
    },
    {
      "id": "2ff466b8-5e2a-4d8f-923a-a3884c7ecbc5",
      "type": "invocation",
      "data": {
        "id": "2ff466b8-5e2a-4d8f-923a-a3884c7ecbc5",
        "version": "1.0.2",
        "label": "",
        "notes": "",
        "type": "main_model_loader",
        "inputs": {
          "model": {
            "name": "model",
            "label": ""
          }
        },
        "isOpen": true,
        "isIntermediate": true,
        "useCache": true
      },
      "position": {
        "x": -4514.466823162653,
        "y": -1235.7908800002283
      }
    },
    {
      "id": "f5d9bf3b-2646-4b17-9894-20fd2b4218ea",
      "type": "invocation",

@ -5,9 +5,8 @@ from PIL import Image

from invokeai.app.services.session_processor.session_processor_common import CanceledException, ProgressImage
from invokeai.backend.model_manager.config import BaseModelType

from ...backend.stable_diffusion import PipelineIntermediateState
from ...backend.util.util import image_to_dataURL
from invokeai.backend.stable_diffusion.diffusers_pipeline import PipelineIntermediateState
from invokeai.backend.util.util import image_to_dataURL

if TYPE_CHECKING:
    from invokeai.app.services.events.events_base import EventServiceBase

@ -2,6 +2,11 @@
Initialization file for invokeai.backend.image_util methods.
"""

from .infill_methods.patchmatch import PatchMatch  # noqa: F401
from .pngwriter import PngWriter, PromptFormatter, retrieve_metadata, write_metadata  # noqa: F401
from .util import InitImageResizer, make_grid  # noqa: F401
from invokeai.backend.image_util.infill_methods.patchmatch import PatchMatch  # noqa: F401
from invokeai.backend.image_util.pngwriter import (  # noqa: F401
    PngWriter,
    PromptFormatter,
    retrieve_metadata,
    write_metadata,
)
from invokeai.backend.image_util.util import InitImageResizer, make_grid  # noqa: F401

@ -2,7 +2,7 @@ import torch
from torch import nn as nn
from torch.nn import functional as F

from .arch_util import default_init_weights, make_layer, pixel_unshuffle
from invokeai.backend.image_util.basicsr.arch_util import default_init_weights, make_layer, pixel_unshuffle


class ResidualDenseBlock(nn.Module):

@ -4,7 +4,7 @@ import torch
import torch.nn as nn
import torch.nn.functional as F

from .blocks import FeatureFusionBlock, _make_scratch
from invokeai.backend.image_util.depth_anything.model.blocks import FeatureFusionBlock, _make_scratch

torchhub_path = Path(__file__).parent.parent / "torchhub"

@ -8,11 +8,10 @@ import numpy as np
import onnxruntime as ort

from invokeai.app.services.config.config_default import get_config
from invokeai.backend.image_util.dw_openpose.onnxdet import inference_detector
from invokeai.backend.image_util.dw_openpose.onnxpose import inference_pose
from invokeai.backend.util.devices import TorchDevice

from .onnxdet import inference_detector
from .onnxpose import inference_pose

config = get_config()

@ -99,7 +99,7 @@ class UnetSkipConnectionBlock(nn.Module):
        """
        super(UnetSkipConnectionBlock, self).__init__()
        self.outermost = outermost
        if type(norm_layer) == functools.partial:
        if isinstance(norm_layer, functools.partial):
            use_bias = norm_layer.func == nn.InstanceNorm2d
        else:
            use_bias = norm_layer == nn.InstanceNorm2d

@ -11,9 +11,8 @@ from PIL import Image
from transformers import CLIPImageProcessor, CLIPVisionModelWithProjection

from invokeai.backend.ip_adapter.ip_attention_weights import IPAttentionWeights

from ..raw_model import RawModel
from .resampler import Resampler
from invokeai.backend.ip_adapter.resampler import Resampler
from invokeai.backend.raw_model import RawModel


class IPAdapterStateDict(TypedDict):
@ -125,22 +124,20 @@ class IPAdapter(RawModel):
            self.device, dtype=self.dtype
        )

    def to(
        self, device: Optional[torch.device] = None, dtype: Optional[torch.dtype] = None, non_blocking: bool = False
    ):
    def to(self, device: Optional[torch.device] = None, dtype: Optional[torch.dtype] = None):
        if device is not None:
            self.device = device
        if dtype is not None:
            self.dtype = dtype

        self._image_proj_model.to(device=self.device, dtype=self.dtype, non_blocking=non_blocking)
        self.attn_weights.to(device=self.device, dtype=self.dtype, non_blocking=non_blocking)
        self._image_proj_model.to(device=self.device, dtype=self.dtype)
        self.attn_weights.to(device=self.device, dtype=self.dtype)

    def calc_size(self):
        # workaround for circular import
        from invokeai.backend.model_manager.load.model_util import calc_model_size_by_data
    def calc_size(self) -> int:
        # HACK(ryand): Fix this issue with circular imports.
        from invokeai.backend.model_manager.load.model_util import calc_module_size

        return calc_model_size_by_data(self._image_proj_model) + calc_model_size_by_data(self.attn_weights)
        return calc_module_size(self._image_proj_model) + calc_module_size(self.attn_weights)

    def _init_image_proj_model(
        self, state_dict: dict[str, torch.Tensor]

@ -10,9 +10,7 @@ from safetensors.torch import load_file
from typing_extensions import Self

from invokeai.backend.model_manager import BaseModelType
from invokeai.backend.util.devices import TorchDevice

from .raw_model import RawModel
from invokeai.backend.raw_model import RawModel


class LoRALayerBase:
@ -58,14 +56,9 @@ class LoRALayerBase:
                model_size += val.nelement() * val.element_size()
        return model_size

    def to(
        self,
        device: Optional[torch.device] = None,
        dtype: Optional[torch.dtype] = None,
        non_blocking: bool = False,
    ) -> None:
    def to(self, device: Optional[torch.device] = None, dtype: Optional[torch.dtype] = None) -> None:
        if self.bias is not None:
            self.bias = self.bias.to(device=device, dtype=dtype, non_blocking=non_blocking)
            self.bias = self.bias.to(device=device, dtype=dtype)


# TODO: find and debug lora/locon with bias
@ -107,19 +100,14 @@ class LoRALayer(LoRALayerBase):
                model_size += val.nelement() * val.element_size()
        return model_size

    def to(
        self,
        device: Optional[torch.device] = None,
        dtype: Optional[torch.dtype] = None,
        non_blocking: bool = False,
    ) -> None:
        super().to(device=device, dtype=dtype, non_blocking=non_blocking)
    def to(self, device: Optional[torch.device] = None, dtype: Optional[torch.dtype] = None) -> None:
        super().to(device=device, dtype=dtype)

        self.up = self.up.to(device=device, dtype=dtype, non_blocking=non_blocking)
        self.down = self.down.to(device=device, dtype=dtype, non_blocking=non_blocking)
        self.up = self.up.to(device=device, dtype=dtype)
        self.down = self.down.to(device=device, dtype=dtype)

        if self.mid is not None:
            self.mid = self.mid.to(device=device, dtype=dtype, non_blocking=non_blocking)
            self.mid = self.mid.to(device=device, dtype=dtype)


class LoHALayer(LoRALayerBase):
@ -168,23 +156,18 @@ class LoHALayer(LoRALayerBase):
                model_size += val.nelement() * val.element_size()
        return model_size

    def to(
        self,
        device: Optional[torch.device] = None,
        dtype: Optional[torch.dtype] = None,
        non_blocking: bool = False,
    ) -> None:
    def to(self, device: Optional[torch.device] = None, dtype: Optional[torch.dtype] = None) -> None:
        super().to(device=device, dtype=dtype)

        self.w1_a = self.w1_a.to(device=device, dtype=dtype, non_blocking=non_blocking)
        self.w1_b = self.w1_b.to(device=device, dtype=dtype, non_blocking=non_blocking)
        self.w1_a = self.w1_a.to(device=device, dtype=dtype)
        self.w1_b = self.w1_b.to(device=device, dtype=dtype)
        if self.t1 is not None:
            self.t1 = self.t1.to(device=device, dtype=dtype, non_blocking=non_blocking)
            self.t1 = self.t1.to(device=device, dtype=dtype)

        self.w2_a = self.w2_a.to(device=device, dtype=dtype, non_blocking=non_blocking)
        self.w2_b = self.w2_b.to(device=device, dtype=dtype, non_blocking=non_blocking)
        self.w2_a = self.w2_a.to(device=device, dtype=dtype)
        self.w2_b = self.w2_b.to(device=device, dtype=dtype)
        if self.t2 is not None:
            self.t2 = self.t2.to(device=device, dtype=dtype, non_blocking=non_blocking)
            self.t2 = self.t2.to(device=device, dtype=dtype)


class LoKRLayer(LoRALayerBase):
@ -265,12 +248,7 @@ class LoKRLayer(LoRALayerBase):
                model_size += val.nelement() * val.element_size()
        return model_size

    def to(
        self,
        device: Optional[torch.device] = None,
        dtype: Optional[torch.dtype] = None,
        non_blocking: bool = False,
    ) -> None:
    def to(self, device: Optional[torch.device] = None, dtype: Optional[torch.dtype] = None) -> None:
        super().to(device=device, dtype=dtype)

        if self.w1 is not None:
@ -278,19 +256,19 @@ class LoKRLayer(LoRALayerBase):
        else:
            assert self.w1_a is not None
            assert self.w1_b is not None
            self.w1_a = self.w1_a.to(device=device, dtype=dtype, non_blocking=non_blocking)
            self.w1_b = self.w1_b.to(device=device, dtype=dtype, non_blocking=non_blocking)
            self.w1_a = self.w1_a.to(device=device, dtype=dtype)
            self.w1_b = self.w1_b.to(device=device, dtype=dtype)

        if self.w2 is not None:
            self.w2 = self.w2.to(device=device, dtype=dtype, non_blocking=non_blocking)
            self.w2 = self.w2.to(device=device, dtype=dtype)
        else:
            assert self.w2_a is not None
            assert self.w2_b is not None
            self.w2_a = self.w2_a.to(device=device, dtype=dtype, non_blocking=non_blocking)
            self.w2_b = self.w2_b.to(device=device, dtype=dtype, non_blocking=non_blocking)
            self.w2_a = self.w2_a.to(device=device, dtype=dtype)
            self.w2_b = self.w2_b.to(device=device, dtype=dtype)

        if self.t2 is not None:
            self.t2 = self.t2.to(device=device, dtype=dtype, non_blocking=non_blocking)
            self.t2 = self.t2.to(device=device, dtype=dtype)


class FullLayer(LoRALayerBase):
@ -320,15 +298,10 @@ class FullLayer(LoRALayerBase):
        model_size += self.weight.nelement() * self.weight.element_size()
        return model_size

    def to(
        self,
        device: Optional[torch.device] = None,
        dtype: Optional[torch.dtype] = None,
        non_blocking: bool = False,
    ) -> None:
    def to(self, device: Optional[torch.device] = None, dtype: Optional[torch.dtype] = None) -> None:
        super().to(device=device, dtype=dtype)

        self.weight = self.weight.to(device=device, dtype=dtype, non_blocking=non_blocking)
        self.weight = self.weight.to(device=device, dtype=dtype)


class IA3Layer(LoRALayerBase):
@ -360,16 +333,11 @@ class IA3Layer(LoRALayerBase):
        model_size += self.on_input.nelement() * self.on_input.element_size()
        return model_size

    def to(
        self,
        device: Optional[torch.device] = None,
        dtype: Optional[torch.dtype] = None,
        non_blocking: bool = False,
    ):
    def to(self, device: Optional[torch.device] = None, dtype: Optional[torch.dtype] = None):
        super().to(device=device, dtype=dtype)

        self.weight = self.weight.to(device=device, dtype=dtype, non_blocking=non_blocking)
        self.on_input = self.on_input.to(device=device, dtype=dtype, non_blocking=non_blocking)
        self.weight = self.weight.to(device=device, dtype=dtype)
        self.on_input = self.on_input.to(device=device, dtype=dtype)


AnyLoRALayer = Union[LoRALayer, LoHALayer, LoKRLayer, FullLayer, IA3Layer]
@ -391,15 +359,10 @@ class LoRAModelRaw(RawModel):  # (torch.nn.Module):
    def name(self) -> str:
        return self._name

    def to(
        self,
        device: Optional[torch.device] = None,
        dtype: Optional[torch.dtype] = None,
        non_blocking: bool = False,
    ) -> None:
    def to(self, device: Optional[torch.device] = None, dtype: Optional[torch.dtype] = None) -> None:
        # TODO: try revert if exception?
        for _key, layer in self.layers.items():
            layer.to(device=device, dtype=dtype, non_blocking=non_blocking)
            layer.to(device=device, dtype=dtype)

    def calc_size(self) -> int:
        model_size = 0
@ -522,7 +485,7 @@ class LoRAModelRaw(RawModel):  # (torch.nn.Module):
            # lower memory consumption by removing already parsed layer values
            state_dict[layer_key].clear()

            layer.to(device=device, dtype=dtype, non_blocking=TorchDevice.get_non_blocking(device))
            layer.to(device=device, dtype=dtype)
            model.layers[layer_key] = layer

        return model

@ -1,6 +1,6 @@
"""Re-export frequently-used symbols from the Model Manager backend."""

from .config import (
from invokeai.backend.model_manager.config import (
    AnyModel,
    AnyModelConfig,
    BaseModelType,
@ -13,9 +13,9 @@ from .config import (
    SchedulerPredictionType,
    SubModelType,
)
from .load import LoadedModel
from .probe import ModelProbe
from .search import ModelSearch
from invokeai.backend.model_manager.load import LoadedModel
from invokeai.backend.model_manager.probe import ModelProbe
from invokeai.backend.model_manager.search import ModelSearch

__all__ = [
    "AnyModel",

@ -30,11 +30,10 @@ from diffusers.models.modeling_utils import ModelMixin
from pydantic import BaseModel, ConfigDict, Discriminator, Field, Tag, TypeAdapter
from typing_extensions import Annotated, Any, Dict

from invokeai.app.invocations.constants import SCHEDULER_NAME_VALUES
from invokeai.app.util.misc import uuid_string
from invokeai.backend.model_hash.hash_validator import validate_hash

from ..raw_model import RawModel
from invokeai.backend.raw_model import RawModel
from invokeai.backend.stable_diffusion.schedulers.schedulers import SCHEDULER_NAME_VALUES

# ModelMixin is the base class for all diffusers and transformers models
# RawModel is the InvokeAI wrapper class for ip_adapters, loras, textual_inversion and onnx runtime
@ -68,6 +67,7 @@ class ModelType(str, Enum):
    IPAdapter = "ip_adapter"
    CLIPVision = "clip_vision"
    T2IAdapter = "t2i_adapter"
    SpandrelImageToImage = "spandrel_image_to_image"


class SubModelType(str, Enum):
@ -372,6 +372,17 @@ class T2IAdapterConfig(DiffusersConfigBase, ControlAdapterConfigBase):
        return Tag(f"{ModelType.T2IAdapter.value}.{ModelFormat.Diffusers.value}")


class SpandrelImageToImageConfig(ModelConfigBase):
    """Model config for Spandrel Image to Image models."""

    type: Literal[ModelType.SpandrelImageToImage] = ModelType.SpandrelImageToImage
    format: Literal[ModelFormat.Checkpoint] = ModelFormat.Checkpoint

    @staticmethod
    def get_tag() -> Tag:
        return Tag(f"{ModelType.SpandrelImageToImage.value}.{ModelFormat.Checkpoint.value}")


def get_model_discriminator_value(v: Any) -> str:
    """
    Computes the discriminator value for a model config.
@ -408,6 +419,7 @@ AnyModelConfig = Annotated[
        Annotated[IPAdapterInvokeAIConfig, IPAdapterInvokeAIConfig.get_tag()],
        Annotated[IPAdapterCheckpointConfig, IPAdapterCheckpointConfig.get_tag()],
        Annotated[T2IAdapterConfig, T2IAdapterConfig.get_tag()],
        Annotated[SpandrelImageToImageConfig, SpandrelImageToImageConfig.get_tag()],
        Annotated[CLIPVisionDiffusersConfig, CLIPVisionDiffusersConfig.get_tag()],
    ],
    Discriminator(get_model_discriminator_value),

@ -1,75 +0,0 @@
import ctypes


class Struct_mallinfo2(ctypes.Structure):
    """A ctypes Structure that matches the libc mallinfo2 struct.

    Docs:
    - https://man7.org/linux/man-pages/man3/mallinfo.3.html
    - https://www.gnu.org/software/libc/manual/html_node/Statistics-of-Malloc.html

    struct mallinfo2 {
        size_t arena;     /* Non-mmapped space allocated (bytes) */
        size_t ordblks;   /* Number of free chunks */
        size_t smblks;    /* Number of free fastbin blocks */
        size_t hblks;     /* Number of mmapped regions */
        size_t hblkhd;    /* Space allocated in mmapped regions (bytes) */
        size_t usmblks;   /* See below */
        size_t fsmblks;   /* Space in freed fastbin blocks (bytes) */
        size_t uordblks;  /* Total allocated space (bytes) */
        size_t fordblks;  /* Total free space (bytes) */
        size_t keepcost;  /* Top-most, releasable space (bytes) */
    };
    """

    _fields_ = [
        ("arena", ctypes.c_size_t),
        ("ordblks", ctypes.c_size_t),
        ("smblks", ctypes.c_size_t),
        ("hblks", ctypes.c_size_t),
        ("hblkhd", ctypes.c_size_t),
        ("usmblks", ctypes.c_size_t),
        ("fsmblks", ctypes.c_size_t),
        ("uordblks", ctypes.c_size_t),
        ("fordblks", ctypes.c_size_t),
        ("keepcost", ctypes.c_size_t),
    ]

    def __str__(self):
        s = ""
        s += f"{'arena': <10}= {(self.arena/2**30):15.5f} # Non-mmapped space allocated (GB) (uordblks + fordblks)\n"
        s += f"{'ordblks': <10}= {(self.ordblks): >15} # Number of free chunks\n"
        s += f"{'smblks': <10}= {(self.smblks): >15} # Number of free fastbin blocks \n"
        s += f"{'hblks': <10}= {(self.hblks): >15} # Number of mmapped regions \n"
        s += f"{'hblkhd': <10}= {(self.hblkhd/2**30):15.5f} # Space allocated in mmapped regions (GB)\n"
        s += f"{'usmblks': <10}= {(self.usmblks): >15} # Unused\n"
        s += f"{'fsmblks': <10}= {(self.fsmblks/2**30):15.5f} # Space in freed fastbin blocks (GB)\n"
        s += (
            f"{'uordblks': <10}= {(self.uordblks/2**30):15.5f} # Space used by in-use allocations (non-mmapped)"
            " (GB)\n"
        )
        s += f"{'fordblks': <10}= {(self.fordblks/2**30):15.5f} # Space in free blocks (non-mmapped) (GB)\n"
        s += f"{'keepcost': <10}= {(self.keepcost/2**30):15.5f} # Top-most, releasable space (GB)\n"
        return s


class LibcUtil:
    """A utility class for interacting with the C Standard Library (`libc`) via ctypes.

    Note that this class will raise on __init__() if 'libc.so.6' can't be found. Take care to handle environments where
    this shared library is not available.

    TODO: Improve cross-OS compatibility of this class.
    """

    def __init__(self):
        self._libc = ctypes.cdll.LoadLibrary("libc.so.6")

    def mallinfo2(self) -> Struct_mallinfo2:
        """Calls `libc` `mallinfo2`.

        Docs: https://man7.org/linux/man-pages/man3/mallinfo.3.html
        """
        mallinfo2 = self._libc.mallinfo2
        mallinfo2.restype = Struct_mallinfo2
        return mallinfo2()

@ -6,10 +6,10 @@ Init file for the model loader.
from importlib import import_module
from pathlib import Path

from .load_base import LoadedModel, LoadedModelWithoutConfig, ModelLoaderBase
from .load_default import ModelLoader
from .model_cache.model_cache_default import ModelCache
from .model_loader_registry import ModelLoaderRegistry, ModelLoaderRegistryBase
from invokeai.backend.model_manager.load.load_base import LoadedModel, LoadedModelWithoutConfig, ModelLoaderBase
from invokeai.backend.model_manager.load.load_default import ModelLoader
from invokeai.backend.model_manager.load.model_cache.model_cache_default import ModelCache
from invokeai.backend.model_manager.load.model_loader_registry import ModelLoaderRegistry, ModelLoaderRegistryBase

# This registers the subclasses that implement loaders of specific model types
loaders = [x.stem for x in Path(Path(__file__).parent, "model_loaders").glob("*.py") if x.stem != "__init__"]

@ -5,7 +5,7 @@ import psutil
import torch
from typing_extensions import Self

from ..util.libc_util import LibcUtil, Struct_mallinfo2
from invokeai.backend.model_manager.util.libc_util import LibcUtil, Struct_mallinfo2

GB = 2**30  # 1 GB