Merge branch 'main' into lstein/feat/config-migration

This commit is contained in:
psychedelicious 2024-05-14 16:56:40 +10:00 committed by GitHub
commit 00ccd73d53
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194
111 changed files with 5106 additions and 5216 deletions
README.md
docs
invokeai
app
backend
frontend/web
package.json
pnpm-lock.yaml
public/locales
src
app/store/middleware/listenerMiddleware/listeners
common
features
canvas
components/IAICanvasToolbar
store
controlLayers
deleteImageModal/store
gallery
metadata
nodes
parameters
queue/components

View File

@ -12,7 +12,7 @@
Invoke is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. Invoke offers an industry leading web-based UI, and serves as the foundation for multiple commercial products.
[Installation][installation docs] - [Documentation and Tutorials][docs home] - [Bug Reports][github issues] - [Contributing][contributing docs]
[Installation and Updates][installation docs] - [Documentation and Tutorials][docs home] - [Bug Reports][github issues] - [Contributing][contributing docs]
<div align="center">

View File

@ -4,278 +4,6 @@ title: Training
# :material-file-document: Training
# Textual Inversion Training
## **Personalizing Text-to-Image Generation**
Invoke Training has moved to its own repository, with a dedicated UI for accessing common scripts like Textual Inversion and LoRA training.
You may personalize the generated images to provide your own styles or objects
by training a new LDM checkpoint and introducing a new vocabulary to the fixed
model as a (.pt) embeddings file. Alternatively, you may use or train
HuggingFace Concepts embeddings files (.bin) from
<https://huggingface.co/sd-concepts-library> and its associated
notebooks.
## **Hardware and Software Requirements**
You will need a GPU to perform training in a reasonable length of
time, and at least 12 GB of VRAM. We recommend using the [`xformers`
library](../installation/070_INSTALL_XFORMERS.md) to accelerate the
training process further. During training, about 8 GB is temporarily
needed in order to store intermediate models, checkpoints and logs.
## **Preparing for Training**
To train, prepare a folder that contains 3-5 images that illustrate
the object or concept. It is good to provide a variety of examples or
poses to avoid overtraining the system. Format these images as PNG
(preferred) or JPG. You do not need to resize or crop the images in
advance, but for more control you may wish to do so.
Place the training images in a directory on the machine InvokeAI runs
on. We recommend placing them in a subdirectory of the
`text-inversion-training-data` folder located in the InvokeAI root
directory, ordinarily `~/invokeai` (Linux/Mac), or
`C:\Users\your_name\invokeai` (Windows). For example, to create an
embedding for the "psychedelic" style, you'd place the training images
into the directory
`~/invokeai/text-inversion-training-data/psychedelic`.
## **Launching Training Using the Console Front End**
InvokeAI 2.3 and higher comes with a text console-based training front
end. From within the `invoke.sh`/`invoke.bat` Invoke launcher script,
start the training tool by selecting choice (3):
```sh
1 "Generate images with a browser-based interface"
2 "Explore InvokeAI nodes using a command-line interface"
3 "Textual inversion training"
4 "Merge models (diffusers type only)"
5 "Download and install models"
6 "Change InvokeAI startup options"
7 "Re-run the configure script to fix a broken install or to complete a major upgrade"
8 "Open the developer console"
9 "Update InvokeAI"
```
Alternatively, you can select option (8) to open the developer console. From the command line, with the InvokeAI virtual environment active,
you can then launch the front end with the command `invokeai-ti --gui`.
This will launch a text-based front end that will look like this:
<figure markdown>
![ti-frontend](../assets/textual-inversion/ti-frontend.png)
</figure>
The interface is keyboard-based. Move from field to field using
control-N (^N) to move to the next field and control-P (^P) to the
previous one. <Tab> and <shift-TAB> work as well. Once a field is
active, use the cursor keys. In a checkbox group, use the up and down
cursor keys to move from choice to choice, and <space> to select a
choice. In a scrollbar, use the left and right cursor keys to increase
and decrease the value of the scroll. In textfields, type the desired
values.
The number of parameters may look intimidating, but in most cases the
predefined defaults work fine. The red circled fields in the above
illustration are the ones you will adjust most frequently.
### Model Name
This will list all the diffusers models that are currently
installed. Select the one you wish to use as the basis for your
embedding. Be aware that if you use an SD-1.X-based model for your
training, you will only be able to use this embedding with other
SD-1.X-based models. Similarly, if you train on SD-2.X, you will only
be able to use the embeddings with models based on SD-2.X.
### Trigger Term
This is the prompt term you will use to trigger the embedding. Type a
single word or phrase you wish to use as the trigger, for example
"psychedelic" (without angle brackets). Within InvokeAI, you will then
be able to activate the trigger using the syntax `<psychedelic>`.
### Initializer
This is a single character that is used internally during the training
process as a placeholder for the trigger term. It defaults to "*" and
can usually be left alone.
### Resume from last saved checkpoint
As training proceeds, textual inversion will write a series of
intermediate files that can be used to resume training from where it
was left off in the case of an interruption. This checkbox will be
automatically selected if you provide a previously used trigger term
and at least one checkpoint file is found on disk.
Note that as of 20 January 2023, resume does not seem to be working
properly due to an issue with the upstream code.
### Data Training Directory
This is the location of the images to be used for training. When you
select a trigger term like "my-trigger", the frontend will prepopulate
this field with `~/invokeai/text-inversion-training-data/my-trigger`,
but you can change the path to wherever you want.
### Output Destination Directory
This is the location of the logs, checkpoint files, and embedding
files created during training. When you select a trigger term like
"my-trigger", the frontend will prepopulate this field with
`~/invokeai/text-inversion-output/my-trigger`, but you can change the
path to wherever you want.
### Image resolution
The images in the training directory will be automatically scaled to
the value you use here. For best results, use the default
resolution of the underlying model (512 pixels for
SD-1.5, 768 for the larger version of SD-2.1).
### Center crop images
If this is selected, your images will be center cropped to make them
square before resizing them to the desired resolution. Center cropping
can indiscriminately cut off the top of subjects' heads for portrait
aspect images, so if you have images like this, you may wish to use a
photoeditor to manually crop them to a square aspect ratio.
### Mixed precision
Select the floating point precision for the embedding. "no" will
result in full 32-bit precision, "fp16" will provide 16-bit
precision, and "bf16" will provide mixed precision (only available
when XFormers is used).
### Max training steps
How many steps the training will take before the model converges. Most
training sets will converge with 2000-3000 steps.
### Batch size
This adjusts how many training images are processed simultaneously in
each step. Higher values will cause the training process to run more
quickly, but use more memory. The default size will run with GPUs with
as little as 12 GB.
### Learning rate
The rate at which the system adjusts its internal weights during
training. Higher values risk overtraining (getting the same image each
time), and lower values will take more steps to train a good
model. The default of 0.0005 is conservative; you may wish to increase
it to 0.005 to speed up training.
### Scale learning rate by number of GPUs, steps and batch size
If this is selected (the default) the system will adjust the provided
learning rate to improve performance.
### Use xformers acceleration
This will activate XFormers memory-efficient attention. You need to
have XFormers installed for this to have an effect.
### Learning rate scheduler
This adjusts how the learning rate changes over the course of
training. The default "constant" means to use a constant learning rate
for the entire training session. The other values scale the learning
rate according to various formulas.
Only "constant" is supported by the XFormers library.
### Gradient accumulation steps
This is a parameter that allows you to use bigger batch sizes than
your GPU's VRAM would ordinarily accommodate, at the cost of some
performance.
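For example, with the values used in the typical command-line invocation
shown later in this document (`--train_batch_size=8` and
`--gradient_accumulation_steps=4`), gradients are accumulated across 4
steps before each weight update, giving an effective batch size of
8 × 4 = 32 while only ever holding 8 images in VRAM at once.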
### Warmup steps
If "constant_with_warmup" is selected in the learning rate scheduler,
then this provides the number of warmup steps. Warmup steps have a
very low learning rate, and are one way of preventing early
overtraining.
## The training run
Start the training run by advancing to the OK button (bottom right)
and pressing <enter>. A series of progress messages will be displayed
as the training process proceeds. This may take an hour or two,
depending on settings and the speed of your system. Various log and
checkpoint files will be written into the output directory (ordinarily
`~/invokeai/text-inversion-output/my-model/`).
At the end of successful training, the system will copy the file
`learned_embeds.bin` into the InvokeAI root directory's `embeddings`
directory, using a subdirectory named after the trigger token. For
example, if the trigger token was `psychedelic`, then look for the
embeddings file in
`~/invokeai/embeddings/psychedelic/learned_embeds.bin`.
You may now launch InvokeAI and try out a prompt that uses the trigger
term. For example, `a plate of banana sushi in <psychedelic> style`.
## **Training with the Command-Line Script**
Training can also be done using a traditional command-line script. It
can be launched from within the "developer's console", or from the
command line after activating InvokeAI's virtual environment.
It accepts a large number of arguments, which can be summarized by
passing the `--help` argument:
```sh
invokeai-ti --help
```
Typical usage is shown here:
```sh
invokeai-ti \
--model=stable-diffusion-1.5 \
--resolution=512 \
--learnable_property=style \
--initializer_token='*' \
--placeholder_token='<psychedelic>' \
--train_data_dir=/home/lstein/invokeai/training-data/psychedelic \
--output_dir=/home/lstein/invokeai/text-inversion-training/psychedelic \
--scale_lr \
--train_batch_size=8 \
--gradient_accumulation_steps=4 \
--max_train_steps=3000 \
--learning_rate=0.0005 \
--resume_from_checkpoint=latest \
--lr_scheduler=constant \
--mixed_precision=fp16 \
--only_save_embeds
```
## Troubleshooting
### `Cannot load embedding for <trigger>. It was trained on a model with token dimension 1024, but the current model has token dimension 768`
Messages like this indicate you trained the embedding on a different base model than the currently selected one.
For example, in the error above, the training was done on SD2.1 (768x768) but it was used on SD1.5 (512x512).
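If you are unsure which base model an embedding was trained on, you can inspect the saved file directly. A minimal sketch, assuming the diffusers-style `{placeholder_token: tensor}` layout that the textual inversion script saves (the file path is illustrative):
```python
import torch

embeds = torch.load("learned_embeds.bin", map_location="cpu")
for token, tensor in embeds.items():
    # token dimension 768 -> trained on an SD-1.x base;
    # token dimension 1024 -> trained on an SD-2.x base
    print(token, tensor.shape[-1])
```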
## Reading
For more information on textual inversion, please see the following
resources:
* The [textual inversion repository](https://github.com/rinongal/textual_inversion) and
associated paper for details and limitations.
* [HuggingFace's textual inversion training
page](https://huggingface.co/docs/diffusers/training/text_inversion)
* [HuggingFace example script
documentation](https://github.com/huggingface/diffusers/tree/main/examples/textual_inversion)
(Note that this script is similar to, but not identical to,
`textual_inversion`, and produces embedding files that are completely compatible.)
---
copyright (c) 2023, Lincoln Stein and the InvokeAI Development Team
You can find more by visiting the repo at https://github.com/invoke-ai/invoke-training

View File

@ -1,8 +1,10 @@
# Automatic Install
# Automatic Install & Updates
The installer is used for both new installs and updates.
**The same packaged installer file can be used for both new installs and updates.**
Using the installer for updates will leave everything you've added since installation in place, and just update the core libraries used to run Invoke.
Simply use the same path you installed to originally.
Both release and pre-release versions can be installed using it. It also supports install a wheel if needed.
Both release and pre-release versions can be installed using the installer. It also supports installing from a wheel if needed.
Be sure to review the [installation requirements] and ensure your system has everything it needs to install Invoke.

View File

@ -1,4 +1,4 @@
# Installation Overview
# Installation and Updating Overview
Before installing, review the [installation requirements] to ensure your system is set up properly.
@ -6,14 +6,21 @@ See the [FAQ] for frequently-encountered installation issues.
If you need more help, join our [discord] or [create an issue].
<h2>Automatic Install</h2>
<h2>Automatic Install & Updates</h2>
✅ The automatic install is the best way to run InvokeAI. Check out the [installation guide] to get started.
⬆️ The same installer is also the best way to update InvokeAI: simply rerun it, selecting the same folder you installed to.
The installation process simply manages the core libraries & application dependencies that run Invoke.
Any models, images, or other assets in the Invoke root folder won't be affected by the installation process.
<h2>Manual Install</h2>
If you are familiar with python and want more control over the packages that are installed, you can [install InvokeAI manually via PyPI].
Updates are managed by reinstalling the latest version through PyPI.
<h2>Developer Install</h2>
If you want to contribute to InvokeAI, consult the [developer install guide].

View File

@ -37,13 +37,13 @@ Invoke runs best with a dedicated GPU, but will fall back to running on CPU, alb
=== "Nvidia"
```
Any GPU with at least 8GB VRAM. Linux only.
Any GPU with at least 8GB VRAM.
```
=== "AMD"
```
Any GPU with at least 16GB VRAM.
Any GPU with at least 16GB VRAM. Linux only.
```
=== "Mac"

View File

@ -13,7 +13,6 @@ from pydantic import BaseModel, Field
from invokeai.app.invocations.upscale import ESRGAN_MODELS
from invokeai.app.services.invocation_cache.invocation_cache_common import InvocationCacheStatus
from invokeai.backend.image_util.infill_methods.patchmatch import PatchMatch
from invokeai.backend.image_util.safety_checker import SafetyChecker
from invokeai.backend.util.logging import logging
from invokeai.version import __version__
@ -109,9 +108,7 @@ async def get_config() -> AppConfig:
upscaling_models.append(str(Path(model).stem))
upscaler = Upscaler(upscaling_method="esrgan", upscaling_models=upscaling_models)
nsfw_methods = []
if SafetyChecker.safety_checker_available():
nsfw_methods.append("nsfw_checker")
nsfw_methods = ["nsfw_checker"]
watermarking_methods = ["invisible_watermark"]

View File

@ -6,7 +6,7 @@ import pathlib
import shutil
import traceback
from copy import deepcopy
from typing import Any, Dict, List, Optional
from typing import Any, Dict, List, Optional, Type
from fastapi import Body, Path, Query, Response, UploadFile
from fastapi.responses import FileResponse
@ -16,6 +16,7 @@ from pydantic import AnyHttpUrl, BaseModel, ConfigDict, Field
from starlette.exceptions import HTTPException
from typing_extensions import Annotated
from invokeai.app.services.model_images.model_images_common import ModelImageFileNotFoundException
from invokeai.app.services.model_install import ModelInstallJob
from invokeai.app.services.model_records import (
DuplicateModelException,
@ -52,6 +53,13 @@ class ModelsList(BaseModel):
model_config = ConfigDict(use_enum_values=True)
def add_cover_image_to_model_config(config: AnyModelConfig, dependencies: Type[ApiDependencies]) -> AnyModelConfig:
"""Add a cover image URL to a model configuration."""
cover_image = dependencies.invoker.services.model_images.get_url(config.key)
config.cover_image = cover_image
return config
##############################################################################
# These are example inputs and outputs that are used in places where Swagger
# is unable to generate a correct example.
@ -118,8 +126,7 @@ async def list_model_records(
record_store.search_by_attr(model_type=model_type, model_name=model_name, model_format=model_format)
)
for model in found_models:
cover_image = ApiDependencies.invoker.services.model_images.get_url(model.key)
model.cover_image = cover_image
model = add_cover_image_to_model_config(model, ApiDependencies)
return ModelsList(models=found_models)
@ -160,12 +167,9 @@ async def get_model_record(
key: str = Path(description="Key of the model record to fetch."),
) -> AnyModelConfig:
"""Get a model record"""
record_store = ApiDependencies.invoker.services.model_manager.store
try:
config: AnyModelConfig = record_store.get_model(key)
cover_image = ApiDependencies.invoker.services.model_images.get_url(key)
config.cover_image = cover_image
return config
config = ApiDependencies.invoker.services.model_manager.store.get_model(key)
return add_cover_image_to_model_config(config, ApiDependencies)
except UnknownModelException as e:
raise HTTPException(status_code=404, detail=str(e))
@ -294,14 +298,15 @@ async def update_model_record(
installer = ApiDependencies.invoker.services.model_manager.install
try:
record_store.update_model(key, changes=changes)
model_response: AnyModelConfig = installer.sync_model_path(key)
config = installer.sync_model_path(key)
config = add_cover_image_to_model_config(config, ApiDependencies)
logger.info(f"Updated model: {key}")
except UnknownModelException as e:
raise HTTPException(status_code=404, detail=str(e))
except ValueError as e:
logger.error(str(e))
raise HTTPException(status_code=409, detail=str(e))
return model_response
return config
@model_manager_router.get(
@ -648,6 +653,14 @@ async def convert_model(
logger.error(str(e))
raise HTTPException(status_code=409, detail=str(e))
# Update the model image if the model had one
try:
model_image = ApiDependencies.invoker.services.model_images.get(key)
ApiDependencies.invoker.services.model_images.save(model_image, new_key)
ApiDependencies.invoker.services.model_images.delete(key)
except ModelImageFileNotFoundException:
pass
# delete the original safetensors file
installer.delete(key)
@ -655,7 +668,8 @@ async def convert_model(
shutil.rmtree(cache_path)
# return the config record for the new diffusers directory
new_config: AnyModelConfig = store.get_model(new_key)
new_config = store.get_model(new_key)
new_config = add_cover_image_to_model_config(new_config, ApiDependencies)
return new_config

View File

@ -1,6 +1,5 @@
# Copyright (c) 2022 Kyle Schouviller (https://github.com/kyle0654)
from pathlib import Path
from typing import Literal, Optional
import cv2
@ -504,7 +503,7 @@ class ImageInverseLerpInvocation(BaseInvocation, WithMetadata, WithBoard):
title="Blur NSFW Image",
tags=["image", "nsfw"],
category="image",
version="1.2.2",
version="1.2.3",
)
class ImageNSFWBlurInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Add blur to NSFW-flagged images"""
@ -516,23 +515,12 @@ class ImageNSFWBlurInvocation(BaseInvocation, WithMetadata, WithBoard):
logger = context.logger
logger.debug("Running NSFW checker")
if SafetyChecker.has_nsfw_concept(image):
logger.info("A potentially NSFW image has been detected. Image will be blurred.")
blurry_image = image.filter(filter=ImageFilter.GaussianBlur(radius=32))
caution = self._get_caution_img()
blurry_image.paste(caution, (0, 0), caution)
image = blurry_image
image = SafetyChecker.blur_if_nsfw(image)
image_dto = context.images.save(image=image)
return ImageOutput.build(image_dto)
def _get_caution_img(self) -> Image.Image:
import invokeai.app.assets.images as image_assets
caution = Image.open(Path(image_assets.__path__[0]) / "caution.png")
return caution.resize((caution.width // 2, caution.height // 2))
@invocation(
"img_watermark",

View File

@ -586,13 +586,6 @@ class DenoiseLatentsInvocation(BaseInvocation):
unet: UNet2DConditionModel,
scheduler: Scheduler,
) -> StableDiffusionGeneratorPipeline:
# TODO:
# configure_model_padding(
# unet,
# self.seamless,
# self.seamless_axes,
# )
class FakeVae:
class FakeVaeConfig:
def __init__(self) -> None:

View File

@ -4,5 +4,4 @@ Initialization file for invokeai.backend.image_util methods.
from .infill_methods.patchmatch import PatchMatch # noqa: F401
from .pngwriter import PngWriter, PromptFormatter, retrieve_metadata, write_metadata # noqa: F401
from .seamless import configure_model_padding # noqa: F401
from .util import InitImageResizer, make_grid # noqa: F401

View File

@ -8,7 +8,7 @@ from pathlib import Path
import numpy as np
from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker
from PIL import Image
from PIL import Image, ImageFilter
from transformers import AutoFeatureExtractor
import invokeai.backend.util.logging as logger
@ -16,6 +16,7 @@ from invokeai.app.services.config import get_config
from invokeai.backend.util.devices import TorchDevice
from invokeai.backend.util.silence_warnings import SilenceWarnings
repo_id = "CompVis/stable-diffusion-safety-checker"
CHECKER_PATH = "core/convert/stable-diffusion-safety-checker"
@ -24,30 +25,30 @@ class SafetyChecker:
Wrapper around SafetyChecker model.
"""
safety_checker = None
feature_extractor = None
tried_load: bool = False
safety_checker = None
@classmethod
def _load_safety_checker(cls):
if cls.tried_load:
if cls.safety_checker is not None and cls.feature_extractor is not None:
return
try:
cls.safety_checker = StableDiffusionSafetyChecker.from_pretrained(get_config().models_path / CHECKER_PATH)
cls.feature_extractor = AutoFeatureExtractor.from_pretrained(get_config().models_path / CHECKER_PATH)
model_path = get_config().models_path / CHECKER_PATH
if model_path.exists():
cls.feature_extractor = AutoFeatureExtractor.from_pretrained(model_path)
cls.safety_checker = StableDiffusionSafetyChecker.from_pretrained(model_path)
else:
model_path.mkdir(parents=True, exist_ok=True)
cls.feature_extractor = AutoFeatureExtractor.from_pretrained(repo_id)
cls.feature_extractor.save_pretrained(model_path, safe_serialization=True)
cls.safety_checker = StableDiffusionSafetyChecker.from_pretrained(repo_id)
cls.safety_checker.save_pretrained(model_path, safe_serialization=True)
except Exception as e:
logger.warning(f"Could not load NSFW checker: {str(e)}")
cls.tried_load = True
@classmethod
def safety_checker_available(cls) -> bool:
return Path(get_config().models_path, CHECKER_PATH).exists()
@classmethod
def has_nsfw_concept(cls, image: Image.Image) -> bool:
if not cls.safety_checker_available() and cls.tried_load:
return False
cls._load_safety_checker()
if cls.safety_checker is None or cls.feature_extractor is None:
return False
@ -60,3 +61,24 @@ class SafetyChecker:
with SilenceWarnings():
checked_image, has_nsfw_concept = cls.safety_checker(images=x_image, clip_input=features.pixel_values)
return has_nsfw_concept[0]
@classmethod
def blur_if_nsfw(cls, image: Image.Image) -> Image.Image:
if cls.has_nsfw_concept(image):
logger.warning("A potentially NSFW image has been detected. Image will be blurred.")
blurry_image = image.filter(filter=ImageFilter.GaussianBlur(radius=32))
caution = cls._get_caution_img()
# Center the caution image on the blurred image
x = (blurry_image.width - caution.width) // 2
y = (blurry_image.height - caution.height) // 2
blurry_image.paste(caution, (x, y), caution)
image = blurry_image
return image
@classmethod
def _get_caution_img(cls) -> Image.Image:
import invokeai.app.assets.images as image_assets
caution = Image.open(Path(image_assets.__path__[0]) / "caution.png")
return caution.resize((caution.width // 2, caution.height // 2))
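For reference, a minimal usage sketch of the new `blur_if_nsfw` classmethod, assuming you already have a PIL image in hand (file names are illustrative):
```python
from PIL import Image

from invokeai.backend.image_util.safety_checker import SafetyChecker

image = Image.open("generated.png")  # illustrative path
# Returns the image unchanged if no NSFW concept is detected; otherwise
# returns a Gaussian-blurred copy with the caution overlay pasted at the center.
checked = SafetyChecker.blur_if_nsfw(image)
checked.save("generated_checked.png")
```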

View File

@ -1,52 +0,0 @@
import torch.nn as nn
def _conv_forward_asymmetric(self, input, weight, bias):
"""
Patch for Conv2d._conv_forward that supports asymmetric padding
"""
working = nn.functional.pad(input, self.asymmetric_padding["x"], mode=self.asymmetric_padding_mode["x"])
working = nn.functional.pad(working, self.asymmetric_padding["y"], mode=self.asymmetric_padding_mode["y"])
return nn.functional.conv2d(
working,
weight,
bias,
self.stride,
nn.modules.utils._pair(0),
self.dilation,
self.groups,
)
def configure_model_padding(model, seamless, seamless_axes):
"""
Modifies the 2D convolution layers to use a circular padding mode based on
the `seamless` and `seamless_axes` options.
"""
# TODO: get an explicit interface for this in diffusers: https://github.com/huggingface/diffusers/issues/556
for m in model.modules():
if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d)):
if seamless:
m.asymmetric_padding_mode = {}
m.asymmetric_padding = {}
m.asymmetric_padding_mode["x"] = "circular" if ("x" in seamless_axes) else "constant"
m.asymmetric_padding["x"] = (
m._reversed_padding_repeated_twice[0],
m._reversed_padding_repeated_twice[1],
0,
0,
)
m.asymmetric_padding_mode["y"] = "circular" if ("y" in seamless_axes) else "constant"
m.asymmetric_padding["y"] = (
0,
0,
m._reversed_padding_repeated_twice[2],
m._reversed_padding_repeated_twice[3],
)
m._conv_forward = _conv_forward_asymmetric.__get__(m, nn.Conv2d)
else:
m._conv_forward = nn.Conv2d._conv_forward.__get__(m, nn.Conv2d)
if hasattr(m, "asymmetric_padding_mode"):
del m.asymmetric_padding_mode
if hasattr(m, "asymmetric_padding"):
del m.asymmetric_padding

View File

@ -1,89 +1,51 @@
from __future__ import annotations
from contextlib import contextmanager
from typing import Callable, List, Union
from typing import Callable, List, Optional, Tuple, Union
import torch
import torch.nn as nn
from diffusers.models.autoencoders.autoencoder_kl import AutoencoderKL
from diffusers.models.autoencoders.autoencoder_tiny import AutoencoderTiny
from diffusers.models.lora import LoRACompatibleConv
from diffusers.models.unets.unet_2d_condition import UNet2DConditionModel
def _conv_forward_asymmetric(self, input, weight, bias):
"""
Patch for Conv2d._conv_forward that supports asymmetric padding
"""
working = nn.functional.pad(input, self.asymmetric_padding["x"], mode=self.asymmetric_padding_mode["x"])
working = nn.functional.pad(working, self.asymmetric_padding["y"], mode=self.asymmetric_padding_mode["y"])
return nn.functional.conv2d(
working,
weight,
bias,
self.stride,
nn.modules.utils._pair(0),
self.dilation,
self.groups,
)
@contextmanager
def set_seamless(model: Union[UNet2DConditionModel, AutoencoderKL, AutoencoderTiny], seamless_axes: List[str]):
if not seamless_axes:
yield
return
# Callable: (input: Tensor, weight: Tensor, bias: Optional[Tensor]) -> Tensor
to_restore: list[tuple[nn.Conv2d | nn.ConvTranspose2d, Callable]] = []
# override conv_forward
# https://github.com/huggingface/diffusers/issues/556#issuecomment-1993287019
def _conv_forward_asymmetric(self, input: torch.Tensor, weight: torch.Tensor, bias: Optional[torch.Tensor] = None):
self.paddingX = (self._reversed_padding_repeated_twice[0], self._reversed_padding_repeated_twice[1], 0, 0)
self.paddingY = (0, 0, self._reversed_padding_repeated_twice[2], self._reversed_padding_repeated_twice[3])
working = torch.nn.functional.pad(input, self.paddingX, mode=x_mode)
working = torch.nn.functional.pad(working, self.paddingY, mode=y_mode)
return torch.nn.functional.conv2d(
working, weight, bias, self.stride, torch.nn.modules.utils._pair(0), self.dilation, self.groups
)
original_layers: List[Tuple[nn.Conv2d, Callable]] = []
try:
# Hard coded to skip down block layers, allowing for seamless tiling at the expense of prompt adherence
skipped_layers = 1
for m_name, m in model.named_modules():
if not isinstance(m, (nn.Conv2d, nn.ConvTranspose2d)):
continue
x_mode = "circular" if "x" in seamless_axes else "constant"
y_mode = "circular" if "y" in seamless_axes else "constant"
if isinstance(model, UNet2DConditionModel) and m_name.startswith("down_blocks.") and ".resnets." in m_name:
# down_blocks.1.resnets.1.conv1
_, block_num, _, resnet_num, submodule_name = m_name.split(".")
block_num = int(block_num)
resnet_num = int(resnet_num)
conv_layers: List[torch.nn.Conv2d] = []
if block_num >= len(model.down_blocks) - skipped_layers:
continue
for module in model.modules():
if isinstance(module, torch.nn.Conv2d):
conv_layers.append(module)
# Skip the second resnet (could be configurable)
if resnet_num > 0:
continue
# Skip Conv2d layers (could be configurable)
if submodule_name == "conv2":
continue
m.asymmetric_padding_mode = {}
m.asymmetric_padding = {}
m.asymmetric_padding_mode["x"] = "circular" if ("x" in seamless_axes) else "constant"
m.asymmetric_padding["x"] = (
m._reversed_padding_repeated_twice[0],
m._reversed_padding_repeated_twice[1],
0,
0,
)
m.asymmetric_padding_mode["y"] = "circular" if ("y" in seamless_axes) else "constant"
m.asymmetric_padding["y"] = (
0,
0,
m._reversed_padding_repeated_twice[2],
m._reversed_padding_repeated_twice[3],
)
to_restore.append((m, m._conv_forward))
m._conv_forward = _conv_forward_asymmetric.__get__(m, nn.Conv2d)
for layer in conv_layers:
if isinstance(layer, LoRACompatibleConv) and layer.lora_layer is None:
layer.lora_layer = lambda *x: 0
original_layers.append((layer, layer._conv_forward))
layer._conv_forward = _conv_forward_asymmetric.__get__(layer, torch.nn.Conv2d)
yield
finally:
for module, orig_conv_forward in to_restore:
module._conv_forward = orig_conv_forward
if hasattr(module, "asymmetric_padding_mode"):
del module.asymmetric_padding_mode
if hasattr(module, "asymmetric_padding"):
del module.asymmetric_padding
for layer, orig_conv_forward in original_layers:
layer._conv_forward = orig_conv_forward
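To make the patching mechanics concrete, here is a standalone sketch of the same technique on a single `nn.Conv2d`: pad circularly along the x axis, pad normally along y, run the convolution with no further padding, and restore the original `_conv_forward` afterwards (which is what the context manager's `finally` block does for every patched layer):
```python
import torch
import torch.nn as nn

conv = nn.Conv2d(3, 3, kernel_size=3, padding=1)

def _conv_forward_asymmetric(self, input, weight, bias=None):
    # Wrap along x (seamless) and zero-pad along y, then convolve with no
    # built-in padding so the padding is not applied twice.
    px = (self._reversed_padding_repeated_twice[0], self._reversed_padding_repeated_twice[1], 0, 0)
    py = (0, 0, self._reversed_padding_repeated_twice[2], self._reversed_padding_repeated_twice[3])
    working = nn.functional.pad(input, px, mode="circular")
    working = nn.functional.pad(working, py, mode="constant")
    return nn.functional.conv2d(
        working, weight, bias, self.stride, nn.modules.utils._pair(0), self.dilation, self.groups
    )

original = conv._conv_forward
conv._conv_forward = _conv_forward_asymmetric.__get__(conv, nn.Conv2d)
out = conv(torch.randn(1, 3, 8, 8))  # left/right image edges now tile seamlessly
conv._conv_forward = original  # restore the stock implementation
```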

View File

@ -52,47 +52,48 @@
},
"dependencies": {
"@chakra-ui/react-use-size": "^2.1.0",
"@dagrejs/dagre": "^1.1.1",
"@dagrejs/graphlib": "^2.2.1",
"@dagrejs/dagre": "^1.1.2",
"@dagrejs/graphlib": "^2.2.2",
"@dnd-kit/core": "^6.1.0",
"@dnd-kit/sortable": "^8.0.0",
"@dnd-kit/utilities": "^3.2.2",
"@fontsource-variable/inter": "^5.0.17",
"@fontsource-variable/inter": "^5.0.18",
"@invoke-ai/ui-library": "^0.0.25",
"@nanostores/react": "^0.7.2",
"@reduxjs/toolkit": "2.2.2",
"@reduxjs/toolkit": "2.2.3",
"@roarr/browser-log-writer": "^1.3.0",
"chakra-react-select": "^4.7.6",
"compare-versions": "^6.1.0",
"dateformat": "^5.0.3",
"framer-motion": "^11.0.22",
"i18next": "^23.10.1",
"i18next-http-backend": "^2.5.0",
"fracturedjsonjs": "^4.0.1",
"framer-motion": "^11.1.8",
"i18next": "^23.11.3",
"i18next-http-backend": "^2.5.1",
"idb-keyval": "^6.2.1",
"jsondiffpatch": "^0.6.0",
"konva": "^9.3.6",
"lodash-es": "^4.17.21",
"nanostores": "^0.10.0",
"nanostores": "^0.10.3",
"new-github-issue-url": "^1.0.0",
"overlayscrollbars": "^2.6.1",
"overlayscrollbars-react": "^0.5.5",
"overlayscrollbars": "^2.7.3",
"overlayscrollbars-react": "^0.5.6",
"query-string": "^9.0.0",
"react": "^18.2.0",
"react": "^18.3.1",
"react-colorful": "^5.6.1",
"react-dom": "^18.2.0",
"react-dom": "^18.3.1",
"react-dropzone": "^14.2.3",
"react-error-boundary": "^4.0.13",
"react-hook-form": "^7.51.2",
"react-hook-form": "^7.51.4",
"react-hotkeys-hook": "4.5.0",
"react-i18next": "^14.1.0",
"react-icons": "^5.0.1",
"react-i18next": "^14.1.1",
"react-icons": "^5.2.0",
"react-konva": "^18.2.10",
"react-redux": "9.1.0",
"react-resizable-panels": "^2.0.16",
"react-redux": "9.1.2",
"react-resizable-panels": "^2.0.19",
"react-select": "5.8.0",
"react-use": "^17.5.0",
"react-virtuoso": "^4.7.5",
"reactflow": "^11.10.4",
"react-virtuoso": "^4.7.10",
"reactflow": "^11.11.3",
"redux-dynamic-middlewares": "^2.2.0",
"redux-remember": "^5.1.0",
"redux-undo": "^1.1.0",
@ -104,8 +105,8 @@
"use-device-pixel-ratio": "^1.1.2",
"use-image": "^1.1.1",
"uuid": "^9.0.1",
"zod": "^3.22.4",
"zod-validation-error": "^3.0.3"
"zod": "^3.23.6",
"zod-validation-error": "^3.2.0"
},
"peerDependencies": {
"@chakra-ui/react": "^2.8.2",
@ -116,19 +117,19 @@
"devDependencies": {
"@invoke-ai/eslint-config-react": "^0.0.14",
"@invoke-ai/prettier-config-react": "^0.0.7",
"@storybook/addon-essentials": "^8.0.4",
"@storybook/addon-interactions": "^8.0.4",
"@storybook/addon-links": "^8.0.4",
"@storybook/addon-storysource": "^8.0.4",
"@storybook/manager-api": "^8.0.4",
"@storybook/react": "^8.0.4",
"@storybook/react-vite": "^8.0.4",
"@storybook/theming": "^8.0.4",
"@storybook/addon-essentials": "^8.0.10",
"@storybook/addon-interactions": "^8.0.10",
"@storybook/addon-links": "^8.0.10",
"@storybook/addon-storysource": "^8.0.10",
"@storybook/manager-api": "^8.0.10",
"@storybook/react": "^8.0.10",
"@storybook/react-vite": "^8.0.10",
"@storybook/theming": "^8.0.10",
"@types/dateformat": "^5.0.2",
"@types/lodash-es": "^4.17.12",
"@types/node": "^20.11.30",
"@types/react": "^18.2.73",
"@types/react-dom": "^18.2.22",
"@types/node": "^20.12.10",
"@types/react": "^18.3.1",
"@types/react-dom": "^18.3.0",
"@types/uuid": "^9.0.8",
"@vitejs/plugin-react-swc": "^3.6.0",
"concurrently": "^8.2.2",
@ -136,20 +137,20 @@
"eslint": "^8.57.0",
"eslint-plugin-i18next": "^6.0.3",
"eslint-plugin-path": "^1.3.0",
"knip": "^5.6.1",
"knip": "^5.12.3",
"openapi-types": "^12.1.3",
"openapi-typescript": "^6.7.5",
"prettier": "^3.2.5",
"rollup-plugin-visualizer": "^5.12.0",
"storybook": "^8.0.4",
"storybook": "^8.0.10",
"ts-toolbelt": "^9.6.0",
"tsafe": "^1.6.6",
"typescript": "^5.4.3",
"vite": "^5.2.6",
"vite-plugin-css-injected-by-js": "^3.5.0",
"vite-plugin-dts": "^3.8.0",
"typescript": "^5.4.5",
"vite": "^5.2.11",
"vite-plugin-css-injected-by-js": "^3.5.1",
"vite-plugin-dts": "^3.9.1",
"vite-plugin-eslint": "^1.8.1",
"vite-tsconfig-paths": "^4.3.2",
"vitest": "^1.4.0"
"vitest": "^1.6.0"
}
}

File diff suppressed because it is too large

View File

@ -76,7 +76,9 @@
"aboutHeading": "Nutzen Sie Ihre kreative Energie",
"toResolve": "Lösen",
"add": "Hinzufügen",
"loglevel": "Protokoll Stufe"
"loglevel": "Protokoll Stufe",
"selected": "Ausgewählt",
"beta": "Beta"
},
"gallery": {
"galleryImageSize": "Bildgröße",
@ -86,7 +88,7 @@
"noImagesInGallery": "Keine Bilder in der Galerie",
"loading": "Lade",
"deleteImage_one": "Lösche Bild",
"deleteImage_other": "",
"deleteImage_other": "Lösche {{count}} Bilder",
"copy": "Kopieren",
"download": "Runterladen",
"setCurrentImage": "Setze aktuelle Bild",
@ -397,7 +399,14 @@
"cancel": "Stornieren",
"defaultSettingsSaved": "Standardeinstellungen gespeichert",
"addModels": "Model hinzufügen",
"deleteModelImage": "Lösche Model Bild"
"deleteModelImage": "Lösche Model Bild",
"hfTokenInvalidErrorMessage": "Falscher oder fehlender HuggingFace Schlüssel.",
"huggingFaceRepoID": "HuggingFace Repo ID",
"hfToken": "HuggingFace Schlüssel",
"hfTokenInvalid": "Falscher oder fehlender HF Schlüssel",
"huggingFacePlaceholder": "besitzer/model-name",
"hfTokenSaved": "HF Schlüssel gespeichert",
"hfTokenUnableToVerify": "Konnte den HF Schlüssel nicht validieren"
},
"parameters": {
"images": "Bilder",
@ -686,7 +695,11 @@
"hands": "Hände",
"dwOpenpose": "DW Openpose",
"dwOpenposeDescription": "Posenschätzung mit DW Openpose",
"selectCLIPVisionModel": "Wähle ein CLIP Vision Model aus"
"selectCLIPVisionModel": "Wähle ein CLIP Vision Model aus",
"ipAdapterMethod": "Methode",
"composition": "Nur Komposition",
"full": "Voll",
"style": "Nur Style"
},
"queue": {
"status": "Status",
@ -717,7 +730,6 @@
"resume": "Wieder aufnehmen",
"item": "Auftrag",
"notReady": "Warteschlange noch nicht bereit",
"queueCountPrediction": "{{promptsCount}} Prompts × {{iterations}} Iterationen -> {{count}} Generationen",
"clearQueueAlertDialog": "\"Die Warteschlange leeren\" stoppt den aktuellen Prozess und leert die Warteschlange komplett.",
"completedIn": "Fertig in",
"cancelBatchSucceeded": "Stapel abgebrochen",

View File

@ -142,8 +142,11 @@
"blue": "Blue",
"alpha": "Alpha",
"selected": "Selected",
"viewer": "Viewer",
"tab": "Tab"
"tab": "Tab",
"viewing": "Viewing",
"viewingDesc": "Review images in a large gallery view",
"editing": "Editing",
"editingDesc": "Edit on the Control Layers canvas"
},
"controlnet": {
"controlAdapter_one": "Control Adapter",
@ -258,7 +261,6 @@
"queue": "Queue",
"queueFront": "Add to Front of Queue",
"queueBack": "Add to Queue",
"queueCountPrediction": "{{promptsCount}} prompts \u00d7 {{iterations}} iterations -> {{count}} generations",
"queueEmpty": "Queue Empty",
"enqueueing": "Queueing Batch",
"resume": "Resume",
@ -311,7 +313,13 @@
"batchFailedToQueue": "Failed to Queue Batch",
"graphQueued": "Graph queued",
"graphFailedToQueue": "Failed to queue graph",
"openQueue": "Open Queue"
"openQueue": "Open Queue",
"prompts_one": "Prompt",
"prompts_other": "Prompts",
"iterations_one": "Iteration",
"iterations_other": "Iterations",
"generations_one": "Generation",
"generations_other": "Generations"
},
"invocationCache": {
"invocationCache": "Invocation Cache",
@ -364,8 +372,7 @@
"bulkDownloadRequestFailed": "Problem Preparing Download",
"bulkDownloadFailed": "Download Failed",
"problemDeletingImages": "Problem Deleting Images",
"problemDeletingImagesDesc": "One or more images could not be deleted",
"switchTo": "Switch to {{ tab }} (Z)"
"problemDeletingImagesDesc": "One or more images could not be deleted"
},
"hotkeys": {
"searchHotkeys": "Search Hotkeys",
@ -589,13 +596,9 @@
"desc": "Upscale the current image",
"title": "Upscale"
},
"backToEditor": {
"desc": "Closes the Image Viewer and shows the Editor View (Text to Image tab only)",
"title": "Back to Editor"
},
"openImageViewer": {
"desc": "Opens the Image Viewer (Text to Image tab only)",
"title": "Open Image Viewer"
"toggleViewer": {
"desc": "Switches between the Image Viewer and workspace for the current tab.",
"title": "Toggle Image Viewer"
}
},
"metadata": {
@ -936,17 +939,30 @@
"noModelSelected": "No model selected",
"noPrompts": "No prompts generated",
"noNodesInGraph": "No nodes in graph",
"systemDisconnected": "System disconnected"
"systemDisconnected": "System disconnected",
"layer": {
"initialImageNoImageSelected": "no initial image selected",
"controlAdapterNoModelSelected": "no Control Adapter model selected",
"controlAdapterIncompatibleBaseModel": "incompatible Control Adapter base model",
"controlAdapterNoImageSelected": "no Control Adapter image selected",
"controlAdapterImageNotProcessed": "Control Adapter image not processed",
"t2iAdapterIncompatibleDimensions": "T2I Adapter requires image dimension to be multiples of 64",
"ipAdapterNoModelSelected": "no IP adapter selected",
"ipAdapterIncompatibleBaseModel": "incompatible IP Adapter base model",
"ipAdapterNoImageSelected": "no IP Adapter image selected",
"rgNoPromptsOrIPAdapters": "no text prompts or IP Adapters",
"rgNoRegion": "no region selected"
}
},
"maskBlur": "Mask Blur",
"negativePromptPlaceholder": "Negative Prompt",
"globalNegativePromptPlaceholder": "Global Negative Prompt",
"noiseThreshold": "Noise Threshold",
"patchmatchDownScaleSize": "Downscale",
"perlinNoise": "Perlin Noise",
"positivePromptPlaceholder": "Positive Prompt",
"globalPositivePromptPlaceholder": "Global Positive Prompt",
"iterations": "Iterations",
"iterationsWithCount_one": "{{count}} Iteration",
"iterationsWithCount_other": "{{count}} Iterations",
"scale": "Scale",
"scaleBeforeProcessing": "Scale Before Processing",
"scaledHeight": "Scaled H",
@ -1534,7 +1550,7 @@
"moveForward": "Move Forward",
"moveBackward": "Move Backward",
"brushSize": "Brush Size",
"controlLayers": "Control Layers (BETA)",
"controlLayers": "Control Layers",
"globalMaskOpacity": "Global Mask Opacity",
"autoNegative": "Auto Negative",
"toggleVisibility": "Toggle Layer Visibility",
@ -1548,8 +1564,6 @@
"addIPAdapter": "Add $t(common.ipAdapter)",
"regionalGuidance": "Regional Guidance",
"regionalGuidanceLayer": "$t(controlLayers.regionalGuidance) $t(unifiedCanvas.layer)",
"controlNetLayer": "$t(common.controlNet) $t(unifiedCanvas.layer)",
"ipAdapterLayer": "$t(common.ipAdapter) $t(unifiedCanvas.layer)",
"opacity": "Opacity",
"globalControlAdapter": "Global $t(controlnet.controlAdapter_one)",
"globalControlAdapterLayer": "Global $t(controlnet.controlAdapter_one) $t(unifiedCanvas.layer)",
@ -1560,7 +1574,9 @@
"opacityFilter": "Opacity Filter",
"clearProcessor": "Clear Processor",
"resetProcessor": "Reset Processor to Defaults",
"noLayersAdded": "No Layers Added"
"noLayersAdded": "No Layers Added",
"layers_one": "Layer",
"layers_other": "Layers"
},
"ui": {
"tabs": {

View File

@ -25,7 +25,24 @@
"areYouSure": "¿Estas seguro?",
"batch": "Administrador de lotes",
"modelManager": "Administrador de modelos",
"communityLabel": "Comunidad"
"communityLabel": "Comunidad",
"direction": "Dirección",
"ai": "Ia",
"add": "Añadir",
"auto": "Automático",
"copyError": "Error $t(gallery.copy)",
"details": "Detalles",
"or": "o",
"checkpoint": "Punto de control",
"controlNet": "ControlNet",
"aboutHeading": "Sea dueño de su poder creativo",
"advanced": "Avanzado",
"data": "Fecha",
"delete": "Borrar",
"copy": "Copiar",
"beta": "Beta",
"on": "En",
"aboutDesc": "¿Utilizas Invoke para trabajar? Mira aquí:"
},
"gallery": {
"galleryImageSize": "Tamaño de la imagen",
@ -443,7 +460,13 @@
"previousImage": "Imagen anterior",
"nextImage": "Siguiente imagen",
"showOptionsPanel": "Mostrar el panel lateral",
"menu": "Menú"
"menu": "Menú",
"showGalleryPanel": "Mostrar panel de galería",
"loadMore": "Cargar más",
"about": "Acerca de",
"createIssue": "Crear un problema",
"resetUI": "Interfaz de usuario $t(accessibility.reset)",
"mode": "Modo"
},
"nodes": {
"zoomInNodes": "Acercar",
@ -456,5 +479,68 @@
"reloadNodeTemplates": "Recargar las plantillas de nodos",
"loadWorkflow": "Cargar el flujo de trabajo",
"downloadWorkflow": "Descargar el flujo de trabajo en un archivo JSON"
},
"boards": {
"autoAddBoard": "Agregar panel automáticamente",
"changeBoard": "Cambiar el panel",
"clearSearch": "Borrar la búsqueda",
"deleteBoard": "Borrar el panel",
"selectBoard": "Seleccionar un panel",
"uncategorized": "Sin categoría",
"cancel": "Cancelar",
"addBoard": "Agregar un panel",
"movingImagesToBoard_one": "Moviendo {{count}} imagen al panel:",
"movingImagesToBoard_many": "Moviendo {{count}} imágenes al panel:",
"movingImagesToBoard_other": "Moviendo {{count}} imágenes al panel:",
"bottomMessage": "Al eliminar este panel y las imágenes que contiene, se restablecerán las funciones que los estén utilizando actualmente.",
"deleteBoardAndImages": "Borrar el panel y las imágenes",
"loading": "Cargando...",
"deletedBoardsCannotbeRestored": "Los paneles eliminados no se pueden restaurar",
"move": "Mover",
"menuItemAutoAdd": "Agregar automáticamente a este panel",
"searchBoard": "Buscando paneles…",
"topMessage": "Este panel contiene imágenes utilizadas en las siguientes funciones:",
"downloadBoard": "Descargar panel",
"deleteBoardOnly": "Borrar solo el panel",
"myBoard": "Mi panel",
"noMatching": "No hay paneles que coincidan"
},
"accordions": {
"compositing": {
"title": "Composición",
"infillTab": "Relleno"
},
"generation": {
"title": "Generación"
},
"image": {
"title": "Imagen"
},
"control": {
"title": "Control"
},
"advanced": {
"options": "$t(accordions.advanced.title) opciones",
"title": "Avanzado"
}
},
"ui": {
"tabs": {
"generationTab": "$t(ui.tabs.generation) $t(common.tab)",
"canvas": "Lienzo",
"generation": "Generación",
"queue": "Cola",
"queueTab": "$t(ui.tabs.queue) $t(common.tab)",
"workflows": "Flujos de trabajo",
"models": "Modelos",
"modelsTab": "$t(ui.tabs.models) $t(common.tab)",
"canvasTab": "$t(ui.tabs.canvas) $t(common.tab)",
"workflowsTab": "$t(ui.tabs.workflows) $t(common.tab)"
}
},
"controlLayers": {
"layers_one": "Capa",
"layers_many": "Capas",
"layers_other": "Capas"
}
}

View File

@ -5,7 +5,7 @@
"reportBugLabel": "Segnala un errore",
"settingsLabel": "Impostazioni",
"img2img": "Immagine a Immagine",
"unifiedCanvas": "Tela unificata",
"unifiedCanvas": "Tela",
"nodes": "Flussi di lavoro",
"upload": "Caricamento",
"load": "Carica",
@ -74,7 +74,18 @@
"file": "File",
"toResolve": "Da risolvere",
"add": "Aggiungi",
"loglevel": "Livello di log"
"loglevel": "Livello di log",
"beta": "Beta",
"positivePrompt": "Prompt positivo",
"negativePrompt": "Prompt negativo",
"selected": "Selezionato",
"goTo": "Vai a",
"editor": "Editor",
"tab": "Scheda",
"viewing": "Visualizza",
"viewingDesc": "Rivedi le immagini in un'ampia vista della galleria",
"editing": "Modifica",
"editingDesc": "Modifica nell'area Livelli di controllo"
},
"gallery": {
"galleryImageSize": "Dimensione dell'immagine",
@ -180,8 +191,8 @@
"desc": "Mostra le informazioni sui metadati dell'immagine corrente"
},
"sendToImageToImage": {
"title": "Invia a Immagine a Immagine",
"desc": "Invia l'immagine corrente a da Immagine a Immagine"
"title": "Invia a Generazione da immagine",
"desc": "Invia l'immagine corrente a Generazione da immagine"
},
"deleteImage": {
"title": "Elimina immagine",
@ -334,6 +345,10 @@
"remixImage": {
"desc": "Utilizza tutti i parametri tranne il seme dell'immagine corrente",
"title": "Remixa l'immagine"
},
"toggleViewer": {
"title": "Attiva/disattiva il visualizzatore di immagini",
"desc": "Passa dal Visualizzatore immagini all'area di lavoro per la scheda corrente."
}
},
"modelManager": {
@ -471,8 +486,8 @@
"scaledHeight": "Altezza ridimensionata",
"infillMethod": "Metodo di riempimento",
"tileSize": "Dimensione piastrella",
"sendToImg2Img": "Invia a Immagine a Immagine",
"sendToUnifiedCanvas": "Invia a Tela Unificata",
"sendToImg2Img": "Invia a Generazione da immagine",
"sendToUnifiedCanvas": "Invia alla Tela",
"downloadImage": "Scarica l'immagine",
"usePrompt": "Usa Prompt",
"useSeed": "Usa Seme",
@ -508,13 +523,11 @@
"incompatibleBaseModelForControlAdapter": "Il modello dell'adattatore di controllo #{{number}} non è compatibile con il modello principale.",
"missingNodeTemplate": "Modello di nodo mancante",
"missingInputForField": "{{nodeLabel}} -> {{fieldLabel}} ingresso mancante",
"missingFieldTemplate": "Modello di campo mancante"
"missingFieldTemplate": "Modello di campo mancante",
"imageNotProcessedForControlAdapter": "L'immagine dell'adattatore di controllo #{{number}} non è stata elaborata"
},
"useCpuNoise": "Usa la CPU per generare rumore",
"iterations": "Iterazioni",
"iterationsWithCount_one": "{{count}} Iterazione",
"iterationsWithCount_many": "{{count}} Iterazioni",
"iterationsWithCount_other": "{{count}} Iterazioni",
"isAllowedToUpscale": {
"useX2Model": "L'immagine è troppo grande per l'ampliamento con il modello x4, utilizza il modello x2",
"tooLarge": "L'immagine è troppo grande per l'ampliamento, seleziona un'immagine più piccola"
@ -534,7 +547,10 @@
"infillMosaicMinColor": "Colore minimo",
"infillMosaicMaxColor": "Colore massimo",
"infillMosaicTileHeight": "Altezza piastrella",
"infillColorValue": "Colore di riempimento"
"infillColorValue": "Colore di riempimento",
"globalSettings": "Impostazioni globali",
"globalPositivePromptPlaceholder": "Prompt positivo globale",
"globalNegativePromptPlaceholder": "Prompt negativo globale"
},
"settings": {
"models": "Modelli",
@ -559,7 +575,7 @@
"intermediatesCleared_one": "Cancellata {{count}} immagine intermedia",
"intermediatesCleared_many": "Cancellate {{count}} immagini intermedie",
"intermediatesCleared_other": "Cancellate {{count}} immagini intermedie",
"clearIntermediatesDesc1": "La cancellazione delle immagini intermedie ripristinerà lo stato di Tela Unificata e ControlNet.",
"clearIntermediatesDesc1": "La cancellazione delle immagini intermedie ripristinerà lo stato della Tela e degli Adattatori di Controllo.",
"intermediatesClearedFailed": "Problema con la cancellazione delle immagini intermedie",
"clearIntermediatesWithCount_one": "Cancella {{count}} immagine intermedia",
"clearIntermediatesWithCount_many": "Cancella {{count}} immagini intermedie",
@ -575,8 +591,8 @@
"imageCopied": "Immagine copiata",
"imageNotLoadedDesc": "Impossibile trovare l'immagine",
"canvasMerged": "Tela unita",
"sentToImageToImage": "Inviato a Immagine a Immagine",
"sentToUnifiedCanvas": "Inviato a Tela Unificata",
"sentToImageToImage": "Inviato a Generazione da immagine",
"sentToUnifiedCanvas": "Inviato alla Tela",
"parametersNotSet": "Parametri non impostati",
"metadataLoadFailed": "Impossibile caricare i metadati",
"serverError": "Errore del Server",
@ -795,7 +811,7 @@
"float": "In virgola mobile",
"currentImageDescription": "Visualizza l'immagine corrente nell'editor dei nodi",
"fieldTypesMustMatch": "I tipi di campo devono corrispondere",
"edge": "Bordo",
"edge": "Collegamento",
"currentImage": "Immagine corrente",
"integer": "Numero Intero",
"inputMayOnlyHaveOneConnection": "L'ingresso può avere solo una connessione",
@ -845,7 +861,9 @@
"resetToDefaultValue": "Ripristina il valore predefinito",
"noFieldsViewMode": "Questo flusso di lavoro non ha campi selezionati da visualizzare. Visualizza il flusso di lavoro completo per configurare i valori.",
"edit": "Modifica",
"graph": "Grafico"
"graph": "Grafico",
"showEdgeLabelsHelp": "Mostra etichette sui collegamenti, che indicano i nodi collegati",
"showEdgeLabels": "Mostra le etichette del collegamento"
},
"boards": {
"autoAddBoard": "Aggiungi automaticamente bacheca",
@ -922,7 +940,7 @@
"colorMapTileSize": "Dimensione piastrella",
"mediapipeFaceDescription": "Rilevamento dei volti tramite Mediapipe",
"hedDescription": "Rilevamento dei bordi nidificati olisticamente",
"setControlImageDimensions": "Imposta le dimensioni dell'immagine di controllo su L/A",
"setControlImageDimensions": "Copia le dimensioni in L/A (ottimizza per il modello)",
"maxFaces": "Numero massimo di volti",
"addT2IAdapter": "Aggiungi $t(common.t2iAdapter)",
"addControlNet": "Aggiungi $t(common.controlNet)",
@ -951,12 +969,17 @@
"mediapipeFace": "Mediapipe Volto",
"ip_adapter": "$t(controlnet.controlAdapter_one) #{{number}} ($t(common.ipAdapter))",
"t2i_adapter": "$t(controlnet.controlAdapter_one) #{{number}} ($t(common.t2iAdapter))",
"selectCLIPVisionModel": "Seleziona un modello CLIP Vision"
"selectCLIPVisionModel": "Seleziona un modello CLIP Vision",
"ipAdapterMethod": "Metodo",
"full": "Completo",
"composition": "Solo la composizione",
"style": "Solo lo stile",
"beginEndStepPercentShort": "Inizio/Fine %",
"setControlImageDimensionsForce": "Copia le dimensioni in L/A (ignora il modello)"
},
"queue": {
"queueFront": "Aggiungi all'inizio della coda",
"queueBack": "Aggiungi alla coda",
"queueCountPrediction": "{{promptsCount}} prompt × {{iterations}} iterazioni -> {{count}} generazioni",
"queue": "Coda",
"status": "Stato",
"pruneSucceeded": "Rimossi {{item_count}} elementi completati dalla coda",
@ -993,7 +1016,7 @@
"cancelBatchSucceeded": "Lotto annullato",
"clearTooltip": "Annulla e cancella tutti gli elementi",
"current": "Attuale",
"pauseTooltip": "Sospende l'elaborazione",
"pauseTooltip": "Sospendi l'elaborazione",
"failed": "Falliti",
"cancelItem": "Annulla l'elemento",
"next": "Prossimo",
@ -1394,6 +1417,12 @@
"paragraphs": [
"La dimensione del bordo del passaggio di coerenza."
]
},
"ipAdapterMethod": {
"heading": "Metodo",
"paragraphs": [
"Metodo con cui applicare l'adattatore IP corrente."
]
}
},
"sdxl": {
@ -1522,5 +1551,56 @@
"compatibleEmbeddings": "Incorporamenti compatibili",
"addPromptTrigger": "Aggiungi Trigger nel prompt",
"noMatchingTriggers": "Nessun Trigger corrispondente"
},
"controlLayers": {
"opacityFilter": "Filtro opacità",
"deleteAll": "Cancella tutto",
"addLayer": "Aggiungi Livello",
"moveToFront": "Sposta in primo piano",
"moveToBack": "Sposta in fondo",
"moveForward": "Sposta avanti",
"moveBackward": "Sposta indietro",
"brushSize": "Dimensioni del pennello",
"globalMaskOpacity": "Opacità globale della maschera",
"autoNegative": "Auto Negativo",
"toggleVisibility": "Attiva/disattiva la visibilità dei livelli",
"deletePrompt": "Cancella il prompt",
"debugLayers": "Debug dei Livelli",
"rectangle": "Rettangolo",
"maskPreviewColor": "Colore anteprima maschera",
"addPositivePrompt": "Aggiungi $t(common.positivePrompt)",
"addNegativePrompt": "Aggiungi $t(common.negativePrompt)",
"addIPAdapter": "Aggiungi $t(common.ipAdapter)",
"regionalGuidance": "Guida regionale",
"regionalGuidanceLayer": "$t(unifiedCanvas.layer) $t(controlLayers.regionalGuidance)",
"opacity": "Opacità",
"globalControlAdapter": "$t(controlnet.controlAdapter_one) Globale",
"globalControlAdapterLayer": "$t(controlnet.controlAdapter_one) - $t(unifiedCanvas.layer) Globale",
"globalIPAdapter": "$t(common.ipAdapter) Globale",
"globalIPAdapterLayer": "$t(common.ipAdapter) - $t(unifiedCanvas.layer) Globale",
"globalInitialImage": "Immagine iniziale",
"globalInitialImageLayer": "$t(controlLayers.globalInitialImage) - $t(unifiedCanvas.layer) Globale",
"clearProcessor": "Cancella processore",
"resetProcessor": "Ripristina il processore alle impostazioni predefinite",
"noLayersAdded": "Nessun livello aggiunto",
"resetRegion": "Reimposta la regione",
"controlLayers": "Livelli di controllo",
"layers_one": "Livello",
"layers_many": "Livelli",
"layers_other": "Livelli"
},
"ui": {
"tabs": {
"generation": "Generazione",
"generationTab": "$t(ui.tabs.generation) $t(common.tab)",
"canvas": "Tela",
"canvasTab": "$t(ui.tabs.canvas) $t(common.tab)",
"workflows": "Flussi di lavoro",
"workflowsTab": "$t(ui.tabs.workflows) $t(common.tab)",
"models": "Modelli",
"modelsTab": "$t(ui.tabs.models) $t(common.tab)",
"queue": "Coda",
"queueTab": "$t(ui.tabs.queue) $t(common.tab)"
}
}
}

View File

@ -570,7 +570,6 @@
"pauseSucceeded": "処理が一時停止されました",
"queueFront": "キューの先頭へ追加",
"queueBack": "キューに追加",
"queueCountPrediction": "{{promptsCount}} プロンプト × {{iterations}} イテレーション -> {{count}} 枚生成",
"pause": "一時停止",
"queue": "キュー",
"pauseTooltip": "処理を一時停止",

View File

@ -505,7 +505,6 @@
"completed": "완성된",
"queueBack": "Queue에 추가",
"cancelFailed": "항목 취소 중 발생한 문제",
"queueCountPrediction": "Queue에 {{predicted}} 추가",
"batchQueued": "Batch Queued",
"pauseFailed": "프로세서 중지 중 발생한 문제",
"clearFailed": "Queue 제거 중 발생한 문제",

View File

@ -383,8 +383,6 @@
"useCpuNoise": "Gebruik CPU-ruis",
"imageActions": "Afbeeldingshandeling",
"iterations": "Iteraties",
"iterationsWithCount_one": "{{count}} iteratie",
"iterationsWithCount_other": "{{count}} iteraties",
"coherenceMode": "Modus"
},
"settings": {
@ -940,7 +938,6 @@
"completed": "Voltooid",
"queueBack": "Voeg toe aan wachtrij",
"cancelFailed": "Fout bij annuleren onderdeel",
"queueCountPrediction": "Voeg {{predicted}} toe aan wachtrij",
"batchQueued": "Reeks in wachtrij geplaatst",
"pauseFailed": "Fout bij onderbreken verwerker",
"clearFailed": "Fout bij wissen van wachtrij",

View File

@ -76,7 +76,18 @@
"localSystem": "Локальная система",
"aboutDesc": "Используя Invoke для работы? Проверьте это:",
"add": "Добавить",
"loglevel": "Уровень логов"
"loglevel": "Уровень логов",
"beta": "Бета",
"selected": "Выбрано",
"positivePrompt": "Позитивный запрос",
"negativePrompt": "Негативный запрос",
"editor": "Редактор",
"goTo": "Перейти к",
"tab": "Вкладка",
"viewing": "Просмотр",
"editing": "Редактирование",
"viewingDesc": "Просмотр изображений в режиме большой галереи",
"editingDesc": "Редактировать на холсте слоёв управления"
},
"gallery": {
"galleryImageSize": "Размер изображений",
@ -87,8 +98,8 @@
"deleteImagePermanent": "Удаленные изображения невозможно восстановить.",
"deleteImageBin": "Удаленные изображения будут отправлены в корзину вашей операционной системы.",
"deleteImage_one": "Удалить изображение",
"deleteImage_few": "",
"deleteImage_many": "",
"deleteImage_few": "Удалить {{count}} изображения",
"deleteImage_many": "Удалить {{count}} изображений",
"assets": "Ресурсы",
"autoAssignBoardOnClick": "Авто-назначение доски по клику",
"deleteSelection": "Удалить выделенное",
@ -336,6 +347,10 @@
"remixImage": {
"desc": "Используйте все параметры, кроме сида из текущего изображения",
"title": "Ремикс изображения"
},
"toggleViewer": {
"title": "Переключить просмотр изображений",
"desc": "Переключение между средством просмотра изображений и рабочей областью для текущей вкладки."
}
},
"modelManager": {
@ -512,7 +527,8 @@
"missingNodeTemplate": "Отсутствует шаблон узла",
"missingFieldTemplate": "Отсутствует шаблон поля",
"addingImagesTo": "Добавление изображений в",
"invoke": "Создать"
"invoke": "Создать",
"imageNotProcessedForControlAdapter": "Изображение адаптера контроля №{{number}} не обрабатывается"
},
"isAllowedToUpscale": {
"useX2Model": "Изображение слишком велико для увеличения с помощью модели x4. Используйте модель x2",
@ -523,9 +539,6 @@
"useCpuNoise": "Использовать шум CPU",
"imageActions": "Действия с изображениями",
"iterations": "Кол-во",
"iterationsWithCount_one": "{{count}} Интеграция",
"iterationsWithCount_few": "{{count}} Итерации",
"iterationsWithCount_many": "{{count}} Итераций",
"useSize": "Использовать размер",
"coherenceMode": "Режим",
"aspect": "Соотношение",
@ -541,7 +554,10 @@
"infillMosaicTileHeight": "Высота плиток",
"infillMosaicMinColor": "Мин цвет",
"infillMosaicMaxColor": "Макс цвет",
"infillColorValue": "Цвет заливки"
"infillColorValue": "Цвет заливки",
"globalSettings": "Глобальные настройки",
"globalNegativePromptPlaceholder": "Глобальный негативный запрос",
"globalPositivePromptPlaceholder": "Глобальный запрос"
},
"settings": {
"models": "Модели",
@ -706,7 +722,9 @@
"coherenceModeBoxBlur": "коробчатое размытие",
"discardCurrent": "Отбросить текущее",
"invertBrushSizeScrollDirection": "Инвертировать прокрутку для размера кисти",
"initialFitImageSize": "Подогнать размер изображения при перебросе"
"initialFitImageSize": "Подогнать размер изображения при перебросе",
"hideBoundingBox": "Скрыть ограничительную рамку",
"showBoundingBox": "Показать ограничительную рамку"
},
"accessibility": {
"uploadImage": "Загрузить изображение",
@ -849,7 +867,10 @@
"editMode": "Открыть в редакторе узлов",
"resetToDefaultValue": "Сбросить к стандартному значкнию",
"edit": "Редактировать",
"noFieldsViewMode": "В этом рабочем процессе нет выбранных полей для отображения. Просмотрите полный рабочий процесс для настройки значений."
"noFieldsViewMode": "В этом рабочем процессе нет выбранных полей для отображения. Просмотрите полный рабочий процесс для настройки значений.",
"graph": "График",
"showEdgeLabels": "Показать метки на ребрах",
"showEdgeLabelsHelp": "Показать метки на ребрах, указывающие на соединенные узлы"
},
"controlnet": {
"amult": "a_mult",
@ -917,8 +938,8 @@
"lineartAnime": "Контурный рисунок в стиле аниме",
"mediapipeFaceDescription": "Обнаружение лиц с помощью Mediapipe",
"hedDescription": "Целостное обнаружение границ",
"setControlImageDimensions": "Установите размеры контрольного изображения на Ш/В",
"scribble": "каракули",
"setControlImageDimensions": "Скопируйте размер в Ш/В (оптимизируйте для модели)",
"scribble": "Штрихи",
"maxFaces": "Макс Лица",
"mlsdDescription": "Минималистичный детектор отрезков линии",
"resizeSimple": "Изменить размер (простой)",
@ -933,7 +954,18 @@
"small": "Маленький",
"body": "Тело",
"hands": "Руки",
"selectCLIPVisionModel": "Выбрать модель CLIP Vision"
"selectCLIPVisionModel": "Выбрать модель CLIP Vision",
"ipAdapterMethod": "Метод",
"full": "Всё",
"mlsd": "M-LSD",
"h": "H",
"style": "Только стиль",
"dwOpenpose": "DW Openpose",
"pidi": "PIDI",
"composition": "Только композиция",
"hed": "HED",
"beginEndStepPercentShort": "Начало/конец %",
"setControlImageDimensionsForce": "Скопируйте размер в Ш/В (игнорируйте модель)"
},
"boards": {
"autoAddBoard": "Авто добавление Доски",
@ -1312,6 +1344,12 @@
"paragraphs": [
"Плавно укладывайте изображение вдоль вертикальной оси."
]
},
"ipAdapterMethod": {
"heading": "Метод",
"paragraphs": [
"Метод, с помощью которого применяется текущий IP-адаптер."
]
}
},
"metadata": {
@ -1359,7 +1397,6 @@
"completed": "Выполнено",
"queueBack": "Добавить в очередь",
"cancelFailed": "Проблема с отменой элемента",
"queueCountPrediction": "{{promptsCount}} запросов × {{iterations}} изображений -> {{count}} генераций",
"batchQueued": "Пакетная очередь",
"pauseFailed": "Проблема с приостановкой рендеринга",
"clearFailed": "Проблема с очисткой очереди",
@ -1475,7 +1512,11 @@
"projectWorkflows": "Рабочие процессы проекта",
"defaultWorkflows": "Стандартные рабочие процессы",
"name": "Имя",
"noRecentWorkflows": "Нет последних рабочих процессов"
"noRecentWorkflows": "Нет последних рабочих процессов",
"loadWorkflow": "Рабочий процесс $t(common.load)",
"convertGraph": "Конвертировать график",
"loadFromGraph": "Загрузка рабочего процесса из графика",
"autoLayout": "Автоматическое расположение"
},
"hrf": {
"enableHrf": "Включить исправление высокого разрешения",
@ -1528,5 +1569,56 @@
"addPromptTrigger": "Добавить триггер запроса",
"compatibleEmbeddings": "Совместимые встраивания",
"noMatchingTriggers": "Нет соответствующих триггеров"
},
"controlLayers": {
"moveToBack": "На задний план",
"moveForward": "Переместить вперёд",
"moveBackward": "Переместить назад",
"brushSize": "Размер кисти",
"controlLayers": "Слои управления",
"globalMaskOpacity": "Глобальная непрозрачность маски",
"autoNegative": "Авто негатив",
"deletePrompt": "Удалить запрос",
"resetRegion": "Сбросить регион",
"debugLayers": "Слои отладки",
"rectangle": "Прямоугольник",
"maskPreviewColor": "Цвет предпросмотра маски",
"addNegativePrompt": "Добавить $t(common.negativePrompt)",
"regionalGuidance": "Региональная точность",
"opacity": "Непрозрачность",
"globalControlAdapter": "Глобальный $t(controlnet.controlAdapter_one)",
"globalControlAdapterLayer": "Глобальный $t(controlnet.controlAdapter_one) $t(unifiedCanvas.layer)",
"globalIPAdapter": "Глобальный $t(common.ipAdapter)",
"globalIPAdapterLayer": "Глобальный $t(common.ipAdapter) $t(unifiedCanvas.layer)",
"opacityFilter": "Фильтр непрозрачности",
"deleteAll": "Удалить всё",
"addLayer": "Добавить слой",
"moveToFront": "На передний план",
"toggleVisibility": "Переключить видимость слоя",
"addPositivePrompt": "Добавить $t(common.positivePrompt)",
"addIPAdapter": "Добавить $t(common.ipAdapter)",
"regionalGuidanceLayer": "$t(controlLayers.regionalGuidance) $t(unifiedCanvas.layer)",
"resetProcessor": "Сброс процессора по умолчанию",
"clearProcessor": "Чистый процессор",
"globalInitialImage": "Глобальное исходное изображение",
"globalInitialImageLayer": "$t(controlLayers.globalInitialImage) $t(unifiedCanvas.layer)",
"noLayersAdded": "Без слоев",
"layers_one": "Слой",
"layers_few": "Слоя",
"layers_many": "Слоев"
},
"ui": {
"tabs": {
"generation": "Генерация",
"canvas": "Холст",
"workflowsTab": "$t(ui.tabs.workflows) $t(common.tab)",
"models": "Модели",
"generationTab": "$t(ui.tabs.generation) $t(common.tab)",
"workflows": "Рабочие процессы",
"canvasTab": "$t(ui.tabs.canvas) $t(common.tab)",
"queueTab": "$t(ui.tabs.queue) $t(common.tab)",
"modelsTab": "$t(ui.tabs.models) $t(common.tab)",
"queue": "Очередь"
}
}
}

View File

@ -66,7 +66,7 @@
"saveAs": "保存为",
"ai": "ai",
"or": "或",
"aboutDesc": "使用 Invoke 工作?看:",
"aboutDesc": "使用 Invoke 工作?来看看:",
"add": "添加",
"loglevel": "日志级别",
"copy": "复制",
@ -445,7 +445,6 @@
"useX2Model": "图像太大,无法使用 x4 模型,使用 x2 模型作为替代",
"tooLarge": "图像太大无法进行放大,请选择更小的图像"
},
"iterationsWithCount_other": "{{count}} 次迭代生成",
"cfgRescaleMultiplier": "CFG 重缩放倍数",
"useSize": "使用尺寸",
"setToOptimalSize": "优化模型大小",
@ -853,7 +852,6 @@
"pruneSucceeded": "从队列修剪 {{item_count}} 个已完成的项目",
"notReady": "无法排队",
"batchFailedToQueue": "批次加入队列失败",
"queueCountPrediction": "{{promptsCount}} 提示词 × {{iterations}} 迭代次数 -> {{count}} 次生成",
"batchQueued": "加入队列的批次",
"front": "前",
"pruneTooltip": "修剪 {{item_count}} 个已完成的项目",

View File

@ -1,60 +1,54 @@
import { isAnyOf } from '@reduxjs/toolkit';
import { logger } from 'app/logging/logger';
import type { AppStartListening } from 'app/store/middleware/listenerMiddleware';
import type { AppDispatch } from 'app/store/store';
import { parseify } from 'common/util/serialize';
import {
caLayerImageChanged,
caLayerIsProcessingImageChanged,
caLayerModelChanged,
caLayerProcessedImageChanged,
caLayerProcessorConfigChanged,
caLayerProcessorPendingBatchIdChanged,
caLayerRecalled,
isControlAdapterLayer,
} from 'features/controlLayers/store/controlLayersSlice';
import { CA_PROCESSOR_DATA } from 'features/controlLayers/util/controlAdapters';
import { isImageOutput } from 'features/nodes/types/common';
import { addToast } from 'features/system/store/systemSlice';
import { t } from 'i18next';
import { isEqual } from 'lodash-es';
import { imagesApi } from 'services/api/endpoints/images';
import { getImageDTO } from 'services/api/endpoints/images';
import { queueApi } from 'services/api/endpoints/queue';
import type { BatchConfig, ImageDTO } from 'services/api/types';
import type { BatchConfig } from 'services/api/types';
import { socketInvocationComplete } from 'services/events/actions';
import { assert } from 'tsafe';
const matcher = isAnyOf(caLayerImageChanged, caLayerProcessorConfigChanged, caLayerModelChanged);
const matcher = isAnyOf(caLayerImageChanged, caLayerProcessorConfigChanged, caLayerModelChanged, caLayerRecalled);
const DEBOUNCE_MS = 300;
const log = logger('session');
/**
* Simple helper to cancel a batch and reset the pending batch ID
*/
const cancelProcessorBatch = async (dispatch: AppDispatch, layerId: string, batchId: string) => {
const req = dispatch(queueApi.endpoints.cancelByBatchIds.initiate({ batch_ids: [batchId] }));
log.trace({ batchId }, 'Cancelling existing preprocessor batch');
try {
await req.unwrap();
} catch {
// no-op
} finally {
req.reset();
// Always reset the pending batch ID - the cancel req could fail if the batch doesn't exist
dispatch(caLayerProcessorPendingBatchIdChanged({ layerId, batchId: null }));
}
};
export const addControlAdapterPreprocessor = (startAppListening: AppStartListening) => {
startAppListening({
matcher,
effect: async (action, { dispatch, getState, getOriginalState, cancelActiveListeners, delay, take }) => {
const { layerId } = action.payload;
const precheckLayerOriginal = getOriginalState()
.controlLayers.present.layers.filter(isControlAdapterLayer)
.find((l) => l.id === layerId);
const precheckLayer = getState()
.controlLayers.present.layers.filter(isControlAdapterLayer)
.find((l) => l.id === layerId);
// Conditions to bail
const layerDoesNotExist = !precheckLayer;
const layerHasNoImage = !precheckLayer?.controlAdapter.image;
const layerHasNoProcessorConfig = !precheckLayer?.controlAdapter.processorConfig;
const layerIsAlreadyProcessingImage = precheckLayer?.controlAdapter.isProcessingImage;
const areImageAndProcessorUnchanged =
isEqual(precheckLayer?.controlAdapter.image, precheckLayerOriginal?.controlAdapter.image) &&
isEqual(precheckLayer?.controlAdapter.processorConfig, precheckLayerOriginal?.controlAdapter.processorConfig);
if (
layerDoesNotExist ||
layerHasNoImage ||
layerHasNoProcessorConfig ||
areImageAndProcessorUnchanged ||
layerIsAlreadyProcessingImage
) {
return;
}
effect: async (action, { dispatch, getState, cancelActiveListeners, delay, take, signal }) => {
const layerId = caLayerRecalled.match(action) ? action.payload.id : action.payload.layerId;
// Cancel any in-progress instances of this listener
cancelActiveListeners();
@ -62,19 +56,31 @@ export const addControlAdapterPreprocessor = (startAppListening: AppStartListeni
// Delay before starting actual work
await delay(DEBOUNCE_MS);
dispatch(caLayerIsProcessingImageChanged({ layerId, isProcessingImage: true }));
// Double-check that we are still eligible for processing
const state = getState();
const layer = state.controlLayers.present.layers.filter(isControlAdapterLayer).find((l) => l.id === layerId);
const image = layer?.controlAdapter.image;
const config = layer?.controlAdapter.processorConfig;
// If we have no image or there is no processor config, bail
if (!layer || !image || !config) {
if (!layer) {
return;
}
const image = layer.controlAdapter.image;
const config = layer.controlAdapter.processorConfig;
if (!image || !config) {
// The user has reset the image or config, so we should clear the processed image
dispatch(caLayerProcessedImageChanged({ layerId, imageDTO: null }));
// Bail out here - without this return, buildNode below would be called with a null image/config
return;
}
// At this point, the user has stopped fiddling with the processor settings and there is a processor selected.
// If there is a pending processor batch, cancel it.
if (layer.controlAdapter.processorPendingBatchId) {
cancelProcessorBatch(dispatch, layerId, layer.controlAdapter.processorPendingBatchId);
}
// @ts-expect-error: TS isn't able to narrow the typing of buildNode and `config` will error...
const processorNode = CA_PROCESSOR_DATA[config.type].buildNode(image, config);
const enqueueBatchArg: BatchConfig = {
@ -82,7 +88,11 @@ export const addControlAdapterPreprocessor = (startAppListening: AppStartListeni
batch: {
graph: {
nodes: {
[processorNode.id]: { ...processorNode, is_intermediate: true },
[processorNode.id]: {
...processorNode,
// Control images are always intermediate - do not save to gallery
is_intermediate: true,
},
},
edges: [],
},
@ -90,16 +100,21 @@ export const addControlAdapterPreprocessor = (startAppListening: AppStartListeni
},
};
// Kick off the processor batch
const req = dispatch(
queueApi.endpoints.enqueueBatch.initiate(enqueueBatchArg, {
fixedCacheKey: 'enqueueBatch',
})
);
try {
const req = dispatch(
queueApi.endpoints.enqueueBatch.initiate(enqueueBatchArg, {
fixedCacheKey: 'enqueueBatch',
})
);
const enqueueResult = await req.unwrap();
req.reset();
// TODO(psyche): Update the pydantic models, pretty sure we will _always_ have a batch_id here, but the model says it's optional
assert(enqueueResult.batch.batch_id, 'Batch ID not returned from queue');
dispatch(caLayerProcessorPendingBatchIdChanged({ layerId, batchId: enqueueResult.batch.batch_id }));
log.debug({ enqueueResult: parseify(enqueueResult) }, t('queue.graphQueued'));
// Wait for the processor node to complete
const [invocationCompleteAction] = await take(
(action): action is ReturnType<typeof socketInvocationComplete> =>
socketInvocationComplete.match(action) &&
@ -108,48 +123,52 @@ export const addControlAdapterPreprocessor = (startAppListening: AppStartListeni
);
// We still have to check the output type
if (isImageOutput(invocationCompleteAction.payload.data.result)) {
const { image_name } = invocationCompleteAction.payload.data.result.image;
assert(
isImageOutput(invocationCompleteAction.payload.data.result),
`Processor did not return an image output, got: ${invocationCompleteAction.payload.data.result}`
);
const { image_name } = invocationCompleteAction.payload.data.result.image;
// Wait for the ImageDTO to be received
const [{ payload }] = await take(
(action) =>
imagesApi.endpoints.getImageDTO.matchFulfilled(action) && action.payload.image_name === image_name
);
const imageDTO = await getImageDTO(image_name);
assert(imageDTO, "Failed to fetch processor output's image DTO");
const imageDTO = payload as ImageDTO;
log.debug({ layerId, imageDTO }, 'ControlNet image processed');
// Update the processed image in the store
dispatch(
caLayerProcessedImageChanged({
layerId,
imageDTO,
})
);
dispatch(caLayerIsProcessingImageChanged({ layerId, isProcessingImage: false }));
}
// Whew! We made it. Update the layer with the processed image
log.debug({ layerId, imageDTO }, 'ControlNet image processed');
dispatch(caLayerProcessedImageChanged({ layerId, imageDTO }));
dispatch(caLayerProcessorPendingBatchIdChanged({ layerId, batchId: null }));
} catch (error) {
console.log(error);
log.error({ enqueueBatchArg: parseify(enqueueBatchArg) }, t('queue.graphFailedToQueue'));
dispatch(caLayerIsProcessingImageChanged({ layerId, isProcessingImage: false }));
if (signal.aborted) {
// The listener was canceled - we need to cancel the pending processor batch, if there is one (could have changed by now).
const pendingBatchId = getState()
.controlLayers.present.layers.filter(isControlAdapterLayer)
.find((l) => l.id === layerId)?.controlAdapter.processorPendingBatchId;
if (pendingBatchId) {
cancelProcessorBatch(dispatch, layerId, pendingBatchId);
}
log.trace('Control Adapter preprocessor cancelled');
} else {
// Some other error condition...
console.log(error);
log.error({ enqueueBatchArg: parseify(enqueueBatchArg) }, t('queue.graphFailedToQueue'));
if (error instanceof Object) {
if ('data' in error && 'status' in error) {
if (error.status === 403) {
dispatch(caLayerImageChanged({ layerId, imageDTO: null }));
return;
if (error instanceof Object) {
if ('data' in error && 'status' in error) {
if (error.status === 403) {
dispatch(caLayerImageChanged({ layerId, imageDTO: null }));
return;
}
}
}
}
dispatch(
addToast({
title: t('queue.graphFailedToQueue'),
status: 'error',
})
);
dispatch(
addToast({
title: t('queue.graphFailedToQueue'),
status: 'error',
})
);
}
} finally {
req.reset();
}
},
});
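The rewritten listener leans on three Redux Toolkit listener-middleware primitives: `cancelActiveListeners()` to abort older in-flight runs, the cancellation-aware `delay()` for debouncing, and `signal.aborted` to tell cancellation apart from real failures in the `catch` block. A minimal sketch of the same pattern, using hypothetical `somethingChanged`/`somethingRecalled` actions rather than the control-layer actions above:

```ts
import { createAction, createListenerMiddleware, isAnyOf } from '@reduxjs/toolkit';

// Hypothetical actions standing in for caLayerImageChanged et al.
const somethingChanged = createAction<{ id: string }>('app/somethingChanged');
const somethingRecalled = createAction<{ id: string }>('app/somethingRecalled');

const listenerMiddleware = createListenerMiddleware();

listenerMiddleware.startListening({
  matcher: isAnyOf(somethingChanged, somethingRecalled),
  effect: async (action, { cancelActiveListeners, delay, signal }) => {
    // Abort any older in-flight run of this listener; its signal flips to aborted.
    cancelActiveListeners();
    // Debounce: if another matching action arrives within 300 ms, this run is
    // cancelled while parked here and delay() rejects, ending the effect.
    await delay(300);
    try {
      // ...do the actual, cancellable work here...
    } catch (error) {
      if (signal.aborted) {
        // Cancelled by a newer run - clean up quietly rather than reporting an error.
        return;
      }
      throw error;
    }
  },
});
```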

View File

@ -1,5 +1,6 @@
import { enqueueRequested } from 'app/store/actions';
import type { AppStartListening } from 'app/store/middleware/listenerMiddleware';
import { isImageViewerOpenChanged } from 'features/gallery/store/gallerySlice';
import { buildGenerationTabGraph } from 'features/nodes/util/graph/buildGenerationTabGraph';
import { buildGenerationTabSDXLGraph } from 'features/nodes/util/graph/buildGenerationTabSDXLGraph';
import { prepareLinearUIBatch } from 'features/nodes/util/graph/buildLinearBatchConfig';
@ -11,6 +12,7 @@ export const addEnqueueRequestedLinear = (startAppListening: AppStartListening)
enqueueRequested.match(action) && action.payload.tabName === 'generation',
effect: async (action, { getState, dispatch }) => {
const state = getState();
const { shouldShowProgressInViewer } = state.ui;
const model = state.generation.model;
const { prepend } = action.payload;
@ -29,7 +31,14 @@ export const addEnqueueRequestedLinear = (startAppListening: AppStartListening)
fixedCacheKey: 'enqueueBatch',
})
);
req.reset();
try {
await req.unwrap();
if (shouldShowProgressInViewer) {
dispatch(isImageViewerOpenChanged(true));
}
} finally {
req.reset();
}
},
});
};
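Both enqueue listeners now use the same try/finally shape: `unwrap()` surfaces a rejected mutation as a thrown error, and `reset()` in `finally` releases the RTK Query cache entry whether or not enqueueing succeeded. A sketch of the pattern, assuming a hypothetical `api` slice with an `enqueueBatch` mutation and hypothetical `AppDispatch`/`BatchConfig` imports:

```ts
// Assumed imports for this sketch - an RTK Query API slice exposing an
// `enqueueBatch` mutation, the store's dispatch type, and the batch arg type.
import { api } from './api';
import type { AppDispatch } from './store';
import type { BatchConfig } from './types';

export const enqueueSafely = async (dispatch: AppDispatch, arg: BatchConfig) => {
  const req = dispatch(api.endpoints.enqueueBatch.initiate(arg, { fixedCacheKey: 'enqueueBatch' }));
  try {
    // Throws if the mutation rejected; callers get the unwrapped result on success.
    return await req.unwrap();
  } finally {
    // Always release the mutation result held under the fixed cache key -
    // skipping this on the error path leaks the cache entry.
    req.reset();
  }
};
```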

View File

@ -39,7 +39,11 @@ export const addEnqueueRequestedNodes = (startAppListening: AppStartListening) =
fixedCacheKey: 'enqueueBatch',
})
);
req.reset();
try {
await req.unwrap();
} finally {
req.reset();
}
},
});
};

View File

@ -73,25 +73,25 @@ const deleteControlAdapterImages = (state: RootState, dispatch: AppDispatch, ima
const deleteControlLayerImages = (state: RootState, dispatch: AppDispatch, imageDTO: ImageDTO) => {
state.controlLayers.present.layers.forEach((l) => {
if (isRegionalGuidanceLayer(l)) {
if (l.ipAdapters.some((ipa) => ipa.image?.imageName === imageDTO.image_name)) {
if (l.ipAdapters.some((ipa) => ipa.image?.name === imageDTO.image_name)) {
dispatch(layerDeleted(l.id));
}
}
if (isControlAdapterLayer(l)) {
if (
l.controlAdapter.image?.imageName === imageDTO.image_name ||
l.controlAdapter.processedImage?.imageName === imageDTO.image_name
l.controlAdapter.image?.name === imageDTO.image_name ||
l.controlAdapter.processedImage?.name === imageDTO.image_name
) {
dispatch(layerDeleted(l.id));
}
}
if (isIPAdapterLayer(l)) {
if (l.ipAdapter.image?.imageName === imageDTO.image_name) {
if (l.ipAdapter.image?.name === imageDTO.image_name) {
dispatch(layerDeleted(l.id));
}
}
if (isInitialImageLayer(l)) {
if (l.image?.imageName === imageDTO.image_name) {
if (l.image?.name === imageDTO.image_name) {
dispatch(layerDeleted(l.id));
}
}
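The `imageName` → `name` rename aside, this cleanup works because `Layer` is a discriminated union: each `is*Layer` guard narrows on the `type` tag, which is what lets each branch touch layer-specific fields (`ipAdapters`, `controlAdapter`, `image`) without casts. A reduced sketch of that shape - abbreviated stand-ins, not the real store types:

```ts
type ImageWithDims = { name: string; width: number; height: number };

// Reduced stand-ins for the real controlLayers store types.
type IPAdapterLayer = { type: 'ip_adapter_layer'; id: string; ipAdapter: { image: ImageWithDims | null } };
type InitialImageLayer = { type: 'initial_image_layer'; id: string; image: ImageWithDims | null };
type Layer = IPAdapterLayer | InitialImageLayer;

// Each guard narrows on the discriminant, so layer-specific fields type-check.
const isIPAdapterLayer = (l: Layer): l is IPAdapterLayer => l.type === 'ip_adapter_layer';
const isInitialImageLayer = (l: Layer): l is InitialImageLayer => l.type === 'initial_image_layer';

const usesImage = (l: Layer, imageName: string): boolean => {
  if (isIPAdapterLayer(l)) {
    return l.ipAdapter.image?.name === imageName; // `ipAdapter` is visible here
  }
  if (isInitialImageLayer(l)) {
    return l.image?.name === imageName; // and `image` here
  }
  return false;
};
```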

View File

@ -2,7 +2,12 @@ import { logger } from 'app/logging/logger';
import type { AppStartListening } from 'app/store/middleware/listenerMiddleware';
import { parseify } from 'common/util/serialize';
import { addImageToStagingArea } from 'features/canvas/store/canvasSlice';
import { boardIdSelected, galleryViewChanged, imageSelected } from 'features/gallery/store/gallerySlice';
import {
boardIdSelected,
galleryViewChanged,
imageSelected,
isImageViewerOpenChanged,
} from 'features/gallery/store/gallerySlice';
import { IMAGE_CATEGORIES } from 'features/gallery/store/types';
import { isImageOutput } from 'features/nodes/types/common';
import { CANVAS_OUTPUT } from 'features/nodes/util/graph/constants';
@ -101,6 +106,7 @@ export const addInvocationCompleteEventListener = (startAppListening: AppStartLi
}
dispatch(imageSelected(imageDTO));
dispatch(isImageViewerOpenChanged(true));
}
}
}

View File

@ -70,6 +70,7 @@ const IAIDndImage = (props: IAIDndImageProps) => {
onMouseOver,
onMouseOut,
dataTestId,
...rest
} = props;
const [isHovered, setIsHovered] = useState(false);
@ -138,6 +139,7 @@ const IAIDndImage = (props: IAIDndImageProps) => {
minH={minSize ? minSize : undefined}
userSelect="none"
cursor={isDragDisabled || !imageDTO ? 'default' : 'pointer'}
{...rest}
>
{imageDTO && (
<Flex

View File

@ -13,6 +13,7 @@ type UseGroupedModelComboboxArg<T extends AnyModelConfig> = {
onChange: (value: T | null) => void;
getIsDisabled?: (model: T) => boolean;
isLoading?: boolean;
groupByType?: boolean;
};
type UseGroupedModelComboboxReturn = {
@ -23,17 +24,21 @@ type UseGroupedModelComboboxReturn = {
noOptionsMessage: () => string;
};
const groupByBaseFunc = <T extends AnyModelConfig>(model: T) => model.base.toUpperCase();
const groupByBaseAndTypeFunc = <T extends AnyModelConfig>(model: T) =>
`${model.base.toUpperCase()} / ${model.type.replaceAll('_', ' ').toUpperCase()}`;
export const useGroupedModelCombobox = <T extends AnyModelConfig>(
arg: UseGroupedModelComboboxArg<T>
): UseGroupedModelComboboxReturn => {
const { t } = useTranslation();
const base_model = useAppSelector((s) => s.generation.model?.base ?? 'sdxl');
const { modelConfigs, selectedModel, getIsDisabled, onChange, isLoading } = arg;
const { modelConfigs, selectedModel, getIsDisabled, onChange, isLoading, groupByType = false } = arg;
const options = useMemo<GroupBase<ComboboxOption>[]>(() => {
if (!modelConfigs) {
return [];
}
const groupedModels = groupBy(modelConfigs, 'base');
const groupedModels = groupBy(modelConfigs, groupByType ? groupByBaseAndTypeFunc : groupByBaseFunc);
const _options = reduce(
groupedModels,
(acc, val, label) => {
@ -49,9 +54,9 @@ export const useGroupedModelCombobox = <T extends AnyModelConfig>(
},
[] as GroupBase<ComboboxOption>[]
);
_options.sort((a) => (a.label === base_model ? -1 : 1));
_options.sort((a) => (a.label?.split('/')[0]?.toLowerCase().includes(base_model) ? -1 : 1));
return _options;
}, [getIsDisabled, modelConfigs, base_model]);
}, [modelConfigs, groupByType, getIsDisabled, base_model]);
const value = useMemo(
() =>
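The new optional `groupByType` flag swaps the `groupBy` key function from the bare base model to a `BASE / TYPE` compound, and the sort comparator accordingly matches the current base by prefix (`label?.split('/')[0]`) rather than strict equality, since labels may now be compound. A standalone sketch of the two key functions over a minimal model shape (not the real `AnyModelConfig`):

```ts
import { groupBy } from 'lodash-es';

// Minimal stand-in for AnyModelConfig.
type ModelConfig = { name: string; base: string; type: string };

const byBase = (m: ModelConfig) => m.base.toUpperCase();
const byBaseAndType = (m: ModelConfig) =>
  `${m.base.toUpperCase()} / ${m.type.replaceAll('_', ' ').toUpperCase()}`;

const models: ModelConfig[] = [
  { name: 'canny', base: 'sdxl', type: 'controlnet' },
  { name: 'face', base: 'sdxl', type: 'ip_adapter' },
  { name: 'depth', base: 'sd-1', type: 'controlnet' },
];

// { SDXL: [...], 'SD-1': [...] }
console.log(groupBy(models, byBase));
// { 'SDXL / CONTROLNET': [...], 'SDXL / IP ADAPTER': [...], 'SD-1 / CONTROLNET': [...] }
console.log(groupBy(models, byBaseAndType));
```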

View File

@ -6,6 +6,7 @@ import {
} from 'features/controlAdapters/store/controlAdaptersSlice';
import { isControlNetOrT2IAdapter } from 'features/controlAdapters/store/types';
import { selectControlLayersSlice } from 'features/controlLayers/store/controlLayersSlice';
import type { Layer } from 'features/controlLayers/store/types';
import { selectDynamicPromptsSlice } from 'features/dynamicPrompts/store/dynamicPromptsSlice';
import { getShouldProcessPrompt } from 'features/dynamicPrompts/util/getShouldProcessPrompt';
import { selectNodesSlice } from 'features/nodes/store/nodesSlice';
@ -14,9 +15,16 @@ import { selectGenerationSlice } from 'features/parameters/store/generationSlice
import { selectSystemSlice } from 'features/system/store/systemSlice';
import { activeTabNameSelector } from 'features/ui/store/uiSelectors';
import i18n from 'i18next';
import { forEach } from 'lodash-es';
import { forEach, upperFirst } from 'lodash-es';
import { getConnectedEdges } from 'reactflow';
const LAYER_TYPE_TO_TKEY: Record<Layer['type'], string> = {
initial_image_layer: 'controlLayers.globalInitialImage',
control_adapter_layer: 'controlLayers.globalControlAdapter',
ip_adapter_layer: 'controlLayers.globalIPAdapter',
regional_guidance_layer: 'controlLayers.regionalGuidance',
};
const selector = createMemoizedSelector(
[
selectControlAdaptersSlice,
@ -29,21 +37,22 @@ const selector = createMemoizedSelector(
],
(controlAdapters, generation, system, nodes, dynamicPrompts, controlLayers, activeTabName) => {
const { model } = generation;
const { size } = controlLayers.present;
const { positivePrompt } = controlLayers.present;
const { isConnected } = system;
const reasons: string[] = [];
const reasons: { prefix?: string; content: string }[] = [];
// Cannot generate if not connected
if (!isConnected) {
reasons.push(i18n.t('parameters.invoke.systemDisconnected'));
reasons.push({ content: i18n.t('parameters.invoke.systemDisconnected') });
}
if (activeTabName === 'workflows') {
if (nodes.shouldValidateGraph) {
if (!nodes.nodes.length) {
reasons.push(i18n.t('parameters.invoke.noNodesInGraph'));
reasons.push({ content: i18n.t('parameters.invoke.noNodesInGraph') });
}
nodes.nodes.forEach((node) => {
@ -55,7 +64,7 @@ const selector = createMemoizedSelector(
if (!nodeTemplate) {
// Node type not found
reasons.push(i18n.t('parameters.invoke.missingNodeTemplate'));
reasons.push({ content: i18n.t('parameters.invoke.missingNodeTemplate') });
return;
}
@ -68,17 +77,17 @@ const selector = createMemoizedSelector(
);
if (!fieldTemplate) {
reasons.push(i18n.t('parameters.invoke.missingFieldTemplate'));
reasons.push({ content: i18n.t('parameters.invoke.missingFieldTemplate') });
return;
}
if (fieldTemplate.required && field.value === undefined && !hasConnection) {
reasons.push(
i18n.t('parameters.invoke.missingInputForField', {
reasons.push({
content: i18n.t('parameters.invoke.missingInputForField', {
nodeLabel: node.data.label || nodeTemplate.title,
fieldLabel: field.label || fieldTemplate.title,
})
);
}),
});
return;
}
});
@ -86,62 +95,94 @@ const selector = createMemoizedSelector(
}
} else {
if (dynamicPrompts.prompts.length === 0 && getShouldProcessPrompt(positivePrompt)) {
reasons.push(i18n.t('parameters.invoke.noPrompts'));
reasons.push({ content: i18n.t('parameters.invoke.noPrompts') });
}
if (!model) {
reasons.push(i18n.t('parameters.invoke.noModelSelected'));
reasons.push({ content: i18n.t('parameters.invoke.noModelSelected') });
}
if (activeTabName === 'generation') {
// Handling for generation tab
controlLayers.present.layers
.filter((l) => l.isEnabled)
.flatMap((l) => {
.forEach((l, i) => {
const layerLiteral = i18n.t('controlLayers.layers_one');
const layerNumber = i + 1;
const layerType = i18n.t(LAYER_TYPE_TO_TKEY[l.type]);
const prefix = `${layerLiteral} #${layerNumber} (${layerType})`;
const problems: string[] = [];
if (l.type === 'control_adapter_layer') {
return l.controlAdapter;
} else if (l.type === 'ip_adapter_layer') {
return l.ipAdapter;
} else if (l.type === 'regional_guidance_layer') {
return l.ipAdapters;
// Must have model
if (!l.controlAdapter.model) {
problems.push(i18n.t('parameters.invoke.layer.controlAdapterNoModelSelected'));
}
// Model base must match
if (l.controlAdapter.model?.base !== model?.base) {
problems.push(i18n.t('parameters.invoke.layer.controlAdapterIncompatibleBaseModel'));
}
// Must have a control image OR, if it has a processor, it must have a processed image
if (!l.controlAdapter.image) {
problems.push(i18n.t('parameters.invoke.layer.controlAdapterNoImageSelected'));
} else if (l.controlAdapter.processorConfig && !l.controlAdapter.processedImage) {
problems.push(i18n.t('parameters.invoke.layer.controlAdapterImageNotProcessed'));
}
// T2I Adapters require images have dimensions that are multiples of 64
if (l.controlAdapter.type === 't2i_adapter' && (size.width % 64 !== 0 || size.height % 64 !== 0)) {
problems.push(i18n.t('parameters.invoke.layer.t2iAdapterIncompatibleDimensions'));
}
}
return [];
})
.forEach((ca, i) => {
const hasNoModel = !ca.model;
const mismatchedModelBase = ca.model?.base !== model?.base;
const hasNoImage = !ca.image;
const imageNotProcessed =
(ca.type === 'controlnet' || ca.type === 't2i_adapter') && !ca.processedImage && ca.processorConfig;
if (hasNoModel) {
reasons.push(
i18n.t('parameters.invoke.noModelForControlAdapter', {
number: i + 1,
})
);
if (l.type === 'ip_adapter_layer') {
// Must have model
if (!l.ipAdapter.model) {
problems.push(i18n.t('parameters.invoke.layer.ipAdapterNoModelSelected'));
}
// Model base must match
if (l.ipAdapter.model?.base !== model?.base) {
problems.push(i18n.t('parameters.invoke.layer.ipAdapterIncompatibleBaseModel'));
}
// Must have an image
if (!l.ipAdapter.image) {
problems.push(i18n.t('parameters.invoke.layer.ipAdapterNoImageSelected'));
}
}
if (mismatchedModelBase) {
// This should never happen, just a sanity check
reasons.push(
i18n.t('parameters.invoke.incompatibleBaseModelForControlAdapter', {
number: i + 1,
})
);
if (l.type === 'initial_image_layer') {
// Must have an image
if (!l.image) {
problems.push(i18n.t('parameters.invoke.layer.initialImageNoImageSelected'));
}
}
if (hasNoImage) {
reasons.push(
i18n.t('parameters.invoke.noControlImageForControlAdapter', {
number: i + 1,
})
);
if (l.type === 'regional_guidance_layer') {
// Must have a region
if (l.maskObjects.length === 0) {
problems.push(i18n.t('parameters.invoke.layer.rgNoRegion'));
}
// Must have at least 1 prompt or IP Adapter
if (l.positivePrompt === null && l.negativePrompt === null && l.ipAdapters.length === 0) {
problems.push(i18n.t('parameters.invoke.layer.rgNoPromptsOrIPAdapters'));
}
l.ipAdapters.forEach((ipAdapter) => {
// Must have model
if (!ipAdapter.model) {
problems.push(i18n.t('parameters.invoke.layer.ipAdapterNoModelSelected'));
}
// Model base must match
if (ipAdapter.model?.base !== model?.base) {
problems.push(i18n.t('parameters.invoke.layer.ipAdapterIncompatibleBaseModel'));
}
// Must have an image
if (!ipAdapter.image) {
problems.push(i18n.t('parameters.invoke.layer.ipAdapterNoImageSelected'));
}
});
}
if (imageNotProcessed) {
reasons.push(
i18n.t('parameters.invoke.imageNotProcessedForControlAdapter', {
number: i + 1,
})
);
if (problems.length) {
const content = upperFirst(problems.join(', '));
reasons.push({ prefix, content });
}
});
} else {
@ -154,29 +195,19 @@ const selector = createMemoizedSelector(
}
if (!ca.model) {
reasons.push(
i18n.t('parameters.invoke.noModelForControlAdapter', {
number: i + 1,
})
);
reasons.push({ content: i18n.t('parameters.invoke.noModelForControlAdapter', { number: i + 1 }) });
} else if (ca.model.base !== model?.base) {
// This should never happen, just a sanity check
reasons.push(
i18n.t('parameters.invoke.incompatibleBaseModelForControlAdapter', {
number: i + 1,
})
);
reasons.push({
content: i18n.t('parameters.invoke.incompatibleBaseModelForControlAdapter', { number: i + 1 }),
});
}
if (
!ca.controlImage ||
(isControlNetOrT2IAdapter(ca) && !ca.processedControlImage && ca.processorType !== 'none')
) {
reasons.push(
i18n.t('parameters.invoke.noControlImageForControlAdapter', {
number: i + 1,
})
);
reasons.push({ content: i18n.t('parameters.invoke.noControlImageForControlAdapter', { number: i + 1 }) });
}
});
}
@ -187,6 +218,6 @@ const selector = createMemoizedSelector(
);
export const useIsReadyToEnqueue = () => {
const { isReady, reasons } = useAppSelector(selector);
return { isReady, reasons };
const value = useAppSelector(selector);
return value;
};
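The selector's `reasons` entries change from bare strings to `{ prefix?: string; content: string }`, so all of a layer's problems collapse into one entry labelled with the layer's number and type instead of several disconnected strings. A sketch of that accumulation step, with a hypothetical `collectLayerReasons` helper:

```ts
import { upperFirst } from 'lodash-es';

type Reason = { prefix?: string; content: string };

// Collect all problems for one layer, then emit a single prefixed reason.
const collectLayerReasons = (layerIndex: number, problems: string[]): Reason[] => {
  const reasons: Reason[] = [];
  if (problems.length) {
    const prefix = `Layer #${layerIndex + 1}`;
    // Join the problems into one readable string, capitalizing the first.
    const content = upperFirst(problems.join(', '));
    reasons.push({ prefix, content });
  }
  return reasons;
};

// e.g. [{ prefix: 'Layer #1', content: 'No model selected, no image selected' }]
console.log(collectLayerReasons(0, ['no model selected', 'no image selected']));
```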

View File

@ -21,8 +21,6 @@ import {
setShouldShowBoundingBox,
} from 'features/canvas/store/canvasSlice';
import type { CanvasLayer } from 'features/canvas/store/canvasTypes';
import { LAYER_NAMES_DICT } from 'features/canvas/store/canvasTypes';
import { ViewerButton } from 'features/gallery/components/ImageViewer/ViewerButton';
import { memo, useCallback, useMemo } from 'react';
import { useHotkeys } from 'react-hotkeys-hook';
import { useTranslation } from 'react-i18next';
@ -217,110 +215,107 @@ const IAICanvasToolbar = () => {
[dispatch, isMaskEnabled]
);
const value = useMemo(() => LAYER_NAMES_DICT.filter((o) => o.value === layer)[0], [layer]);
const layerOptions = useMemo<{ label: string; value: CanvasLayer }[]>(
() => [
{ label: t('unifiedCanvas.base'), value: 'base' },
{ label: t('unifiedCanvas.mask'), value: 'mask' },
],
[t]
);
const layerValue = useMemo(() => layerOptions.filter((o) => o.value === layer)[0] ?? null, [layer, layerOptions]);
return (
<Flex w="full" gap={2} alignItems="center">
<Flex flex={1} justifyContent="center">
<Flex gap={2} marginInlineEnd="auto" />
</Flex>
<Flex flex={1} gap={2} justifyContent="center">
<Tooltip label={`${t('unifiedCanvas.layer')} (Q)`}>
<FormControl isDisabled={isStaging} w="5rem">
<Combobox value={value} options={LAYER_NAMES_DICT} onChange={handleChangeLayer} />
</FormControl>
</Tooltip>
<Flex alignItems="center" gap={2} flexWrap="wrap">
<Tooltip label={`${t('unifiedCanvas.layer')} (Q)`}>
<FormControl isDisabled={isStaging} w="5rem">
<Combobox value={layerValue} options={layerOptions} onChange={handleChangeLayer} />
</FormControl>
</Tooltip>
<IAICanvasMaskOptions />
<IAICanvasToolChooserOptions />
<IAICanvasMaskOptions />
<IAICanvasToolChooserOptions />
<ButtonGroup>
<IconButton
aria-label={`${t('unifiedCanvas.move')} (V)`}
tooltip={`${t('unifiedCanvas.move')} (V)`}
icon={<PiHandGrabbingBold />}
isChecked={tool === 'move' || isStaging}
onClick={handleSelectMoveTool}
/>
<IconButton
aria-label={`${shouldShowBoundingBox ? t('unifiedCanvas.hideBoundingBox') : t('unifiedCanvas.showBoundingBox')} (Shift + H)`}
tooltip={`${shouldShowBoundingBox ? t('unifiedCanvas.hideBoundingBox') : t('unifiedCanvas.showBoundingBox')} (Shift + H)`}
icon={shouldShowBoundingBox ? <PiEyeBold /> : <PiEyeSlashBold />}
onClick={handleSetShouldShowBoundingBox}
isDisabled={isStaging}
/>
<IconButton
aria-label={`${t('unifiedCanvas.resetView')} (R)`}
tooltip={`${t('unifiedCanvas.resetView')} (R)`}
icon={<PiCrosshairSimpleBold />}
onClick={handleClickResetCanvasView}
/>
</ButtonGroup>
<ButtonGroup>
<IconButton
aria-label={`${t('unifiedCanvas.move')} (V)`}
tooltip={`${t('unifiedCanvas.move')} (V)`}
icon={<PiHandGrabbingBold />}
isChecked={tool === 'move' || isStaging}
onClick={handleSelectMoveTool}
/>
<IconButton
aria-label={`${shouldShowBoundingBox ? t('unifiedCanvas.hideBoundingBox') : t('unifiedCanvas.showBoundingBox')} (Shift + H)`}
tooltip={`${shouldShowBoundingBox ? t('unifiedCanvas.hideBoundingBox') : t('unifiedCanvas.showBoundingBox')} (Shift + H)`}
icon={shouldShowBoundingBox ? <PiEyeBold /> : <PiEyeSlashBold />}
onClick={handleSetShouldShowBoundingBox}
isDisabled={isStaging}
/>
<IconButton
aria-label={`${t('unifiedCanvas.resetView')} (R)`}
tooltip={`${t('unifiedCanvas.resetView')} (R)`}
icon={<PiCrosshairSimpleBold />}
onClick={handleClickResetCanvasView}
/>
</ButtonGroup>
<ButtonGroup>
<ButtonGroup>
<IconButton
aria-label={`${t('unifiedCanvas.mergeVisible')} (Shift+M)`}
tooltip={`${t('unifiedCanvas.mergeVisible')} (Shift+M)`}
icon={<PiStackBold />}
onClick={handleMergeVisible}
isDisabled={isStaging}
/>
<IconButton
aria-label={`${t('unifiedCanvas.saveToGallery')} (Shift+S)`}
tooltip={`${t('unifiedCanvas.saveToGallery')} (Shift+S)`}
icon={<PiFloppyDiskBold />}
onClick={handleSaveToGallery}
isDisabled={isStaging}
/>
{isClipboardAPIAvailable && (
<IconButton
aria-label={`${t('unifiedCanvas.mergeVisible')} (Shift+M)`}
tooltip={`${t('unifiedCanvas.mergeVisible')} (Shift+M)`}
icon={<PiStackBold />}
onClick={handleMergeVisible}
aria-label={`${t('unifiedCanvas.copyToClipboard')} (Cmd/Ctrl+C)`}
tooltip={`${t('unifiedCanvas.copyToClipboard')} (Cmd/Ctrl+C)`}
icon={<PiCopyBold />}
onClick={handleCopyImageToClipboard}
isDisabled={isStaging}
/>
<IconButton
aria-label={`${t('unifiedCanvas.saveToGallery')} (Shift+S)`}
tooltip={`${t('unifiedCanvas.saveToGallery')} (Shift+S)`}
icon={<PiFloppyDiskBold />}
onClick={handleSaveToGallery}
isDisabled={isStaging}
/>
{isClipboardAPIAvailable && (
<IconButton
aria-label={`${t('unifiedCanvas.copyToClipboard')} (Cmd/Ctrl+C)`}
tooltip={`${t('unifiedCanvas.copyToClipboard')} (Cmd/Ctrl+C)`}
icon={<PiCopyBold />}
onClick={handleCopyImageToClipboard}
isDisabled={isStaging}
/>
)}
<IconButton
aria-label={`${t('unifiedCanvas.downloadAsImage')} (Shift+D)`}
tooltip={`${t('unifiedCanvas.downloadAsImage')} (Shift+D)`}
icon={<PiDownloadSimpleBold />}
onClick={handleDownloadAsImage}
isDisabled={isStaging}
/>
</ButtonGroup>
<ButtonGroup>
<IAICanvasUndoButton />
<IAICanvasRedoButton />
</ButtonGroup>
)}
<IconButton
aria-label={`${t('unifiedCanvas.downloadAsImage')} (Shift+D)`}
tooltip={`${t('unifiedCanvas.downloadAsImage')} (Shift+D)`}
icon={<PiDownloadSimpleBold />}
onClick={handleDownloadAsImage}
isDisabled={isStaging}
/>
</ButtonGroup>
<ButtonGroup>
<IAICanvasUndoButton />
<IAICanvasRedoButton />
</ButtonGroup>
<ButtonGroup>
<IconButton
aria-label={`${t('common.upload')}`}
tooltip={`${t('common.upload')}`}
icon={<PiUploadSimpleBold />}
isDisabled={isStaging}
{...getUploadButtonProps()}
/>
<input {...getUploadInputProps()} />
<IconButton
aria-label={`${t('unifiedCanvas.clearCanvas')}`}
tooltip={`${t('unifiedCanvas.clearCanvas')}`}
icon={<PiTrashSimpleBold />}
onClick={handleResetCanvas}
colorScheme="error"
isDisabled={isStaging}
/>
</ButtonGroup>
<ButtonGroup>
<IAICanvasSettingsButtonPopover />
</ButtonGroup>
</Flex>
<Flex flex={1} justifyContent="center">
<Flex gap={2} marginInlineStart="auto">
<ViewerButton />
</Flex>
</Flex>
<ButtonGroup>
<IconButton
aria-label={`${t('common.upload')}`}
tooltip={`${t('common.upload')}`}
icon={<PiUploadSimpleBold />}
isDisabled={isStaging}
{...getUploadButtonProps()}
/>
<input {...getUploadInputProps()} />
<IconButton
aria-label={`${t('unifiedCanvas.clearCanvas')}`}
tooltip={`${t('unifiedCanvas.clearCanvas')}`}
icon={<PiTrashSimpleBold />}
onClick={handleResetCanvas}
colorScheme="error"
isDisabled={isStaging}
/>
</ButtonGroup>
<ButtonGroup>
<IAICanvasSettingsButtonPopover />
</ButtonGroup>
</Flex>
);
};

View File

@ -5,11 +5,6 @@ import { z } from 'zod';
export type CanvasLayer = 'base' | 'mask';
export const LAYER_NAMES_DICT: { label: string; value: CanvasLayer }[] = [
{ label: 'Base', value: 'base' },
{ label: 'Mask', value: 'mask' },
];
const zBoundingBoxScaleMethod = z.enum(['none', 'auto', 'manual']);
export type BoundingBoxScaleMethod = z.infer<typeof zBoundingBoxScaleMethod>;
export const isBoundingBoxScaleMethod = (v: unknown): v is BoundingBoxScaleMethod =>

View File

@ -18,7 +18,12 @@ export const AddLayerButton = memo(() => {
return (
<Menu>
<MenuButton as={Button} leftIcon={<PiPlusBold />} variant="ghost">
<MenuButton
as={Button}
leftIcon={<PiPlusBold />}
variant="ghost"
data-testid="control-layers-add-layer-menu-button"
>
{t('controlLayers.addLayer')}
</MenuButton>
<MenuList>

View File

@ -19,7 +19,6 @@ export const CALayer = memo(({ layerId }: Props) => {
const dispatch = useAppDispatch();
const isSelected = useAppSelector((s) => selectCALayerOrThrow(s.controlLayers.present, layerId).isSelected);
const onClick = useCallback(() => {
// Must be capture so that the layer is selected before deleting/resetting/etc
dispatch(layerSelected(layerId));
}, [dispatch, layerId]);
const { isOpen, onToggle } = useDisclosure({ defaultIsOpen: true });

View File

@ -42,10 +42,10 @@ export const ControlAdapterImagePreview = memo(
const [isMouseOverImage, setIsMouseOverImage] = useState(false);
const { currentData: controlImage, isError: isErrorControlImage } = useGetImageDTOQuery(
controlAdapter.image?.imageName ?? skipToken
controlAdapter.image?.name ?? skipToken
);
const { currentData: processedControlImage, isError: isErrorProcessedControlImage } = useGetImageDTOQuery(
controlAdapter.processedImage?.imageName ?? skipToken
controlAdapter.processedImage?.name ?? skipToken
);
const [changeIsIntermediate] = useChangeImageIsIntermediateMutation();
@ -124,7 +124,7 @@ export const ControlAdapterImagePreview = memo(
controlImage &&
processedControlImage &&
!isMouseOverImage &&
!controlAdapter.isProcessingImage &&
!controlAdapter.processorPendingBatchId &&
controlAdapter.processorConfig !== null;
useEffect(() => {
@ -190,7 +190,7 @@ export const ControlAdapterImagePreview = memo(
/>
</>
{controlAdapter.isProcessingImage && (
{controlAdapter.processorPendingBatchId !== null && (
<Flex
position="absolute"
top={0}

View File

@ -42,6 +42,7 @@ export const ControlAdapterModelCombobox = memo(({ modelKey, onChange: onChangeM
selectedModel,
getIsDisabled,
isLoading,
groupByType: true,
});
return (

View File

@ -34,9 +34,7 @@ export const IPAdapterImagePreview = memo(
const optimalDimension = useAppSelector(selectOptimalDimension);
const shift = useShiftModifier();
const { currentData: controlImage, isError: isErrorControlImage } = useGetImageDTOQuery(
image?.imageName ?? skipToken
);
const { currentData: controlImage, isError: isErrorControlImage } = useGetImageDTOQuery(image?.name ?? skipToken);
const handleResetControlImage = useCallback(() => {
onChangeImage(null);
}, [onChangeImage]);

View File

@ -2,14 +2,13 @@ import type { ComboboxOnChange } from '@invoke-ai/ui-library';
import { Combobox, FormControl, FormLabel } from '@invoke-ai/ui-library';
import type { ProcessorComponentProps } from 'features/controlLayers/components/ControlAndIPAdapter/processors/types';
import type { DepthAnythingModelSize, DepthAnythingProcessorConfig } from 'features/controlLayers/util/controlAdapters';
import { CA_PROCESSOR_DATA, isDepthAnythingModelSize } from 'features/controlLayers/util/controlAdapters';
import { isDepthAnythingModelSize } from 'features/controlLayers/util/controlAdapters';
import { memo, useCallback, useMemo } from 'react';
import { useTranslation } from 'react-i18next';
import ProcessorWrapper from './ProcessorWrapper';
type Props = ProcessorComponentProps<DepthAnythingProcessorConfig>;
const DEFAULTS = CA_PROCESSOR_DATA['depth_anything_image_processor'].buildDefaults();
export const DepthAnythingProcessor = memo(({ onChange, config }: Props) => {
const { t } = useTranslation();
@ -38,12 +37,7 @@ export const DepthAnythingProcessor = memo(({ onChange, config }: Props) => {
<ProcessorWrapper>
<FormControl>
<FormLabel m={0}>{t('controlnet.modelSize')}</FormLabel>
<Combobox
value={value}
defaultInputValue={DEFAULTS.model_size}
options={options}
onChange={handleModelSizeChange}
/>
<Combobox value={value} options={options} onChange={handleModelSizeChange} isSearchable={false} />
</FormControl>
</ProcessorWrapper>
);

View File

@ -32,7 +32,7 @@ export const ControlLayersPanelContent = memo(() => {
</Flex>
{layerIdTypePairs.length > 0 && (
<ScrollableContent>
<Flex flexDir="column" gap={2}>
<Flex flexDir="column" gap={2} data-testid="control-layers-layer-list">
{layerIdTypePairs.map(({ id, type }) => (
<LayerWrapper key={id} id={id} type={type} />
))}

View File

@ -1,12 +1,30 @@
import { Flex, IconButton, Popover, PopoverBody, PopoverContent, PopoverTrigger } from '@invoke-ai/ui-library';
import {
Checkbox,
Flex,
FormControl,
FormLabel,
IconButton,
Popover,
PopoverBody,
PopoverContent,
PopoverTrigger,
} from '@invoke-ai/ui-library';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import { setShouldInvertBrushSizeScrollDirection } from 'features/canvas/store/canvasSlice';
import { GlobalMaskLayerOpacity } from 'features/controlLayers/components/GlobalMaskLayerOpacity';
import { memo } from 'react';
import type { ChangeEvent } from 'react';
import { memo, useCallback } from 'react';
import { useTranslation } from 'react-i18next';
import { RiSettings4Fill } from 'react-icons/ri';
const ControlLayersSettingsPopover = () => {
const { t } = useTranslation();
const dispatch = useAppDispatch();
const shouldInvertBrushSizeScrollDirection = useAppSelector((s) => s.canvas.shouldInvertBrushSizeScrollDirection);
const handleChangeShouldInvertBrushSizeScrollDirection = useCallback(
(e: ChangeEvent<HTMLInputElement>) => dispatch(setShouldInvertBrushSizeScrollDirection(e.target.checked)),
[dispatch]
);
return (
<Popover isLazy>
<PopoverTrigger>
@ -16,6 +34,13 @@ const ControlLayersSettingsPopover = () => {
<PopoverBody>
<Flex direction="column" gap={2}>
<GlobalMaskLayerOpacity />
<FormControl w="full">
<FormLabel flexGrow={1}>{t('unifiedCanvas.invertBrushSizeScrollDirection')}</FormLabel>
<Checkbox
isChecked={shouldInvertBrushSizeScrollDirection}
onChange={handleChangeShouldInvertBrushSizeScrollDirection}
/>
</FormControl>
</Flex>
</PopoverBody>
</PopoverContent>

View File

@ -4,14 +4,17 @@ import { BrushSize } from 'features/controlLayers/components/BrushSize';
import ControlLayersSettingsPopover from 'features/controlLayers/components/ControlLayersSettingsPopover';
import { ToolChooser } from 'features/controlLayers/components/ToolChooser';
import { UndoRedoButtonGroup } from 'features/controlLayers/components/UndoRedoButtonGroup';
import { ViewerButton } from 'features/gallery/components/ImageViewer/ViewerButton';
import { ToggleProgressButton } from 'features/gallery/components/ImageViewer/ToggleProgressButton';
import { ViewerToggleMenu } from 'features/gallery/components/ImageViewer/ViewerToggleMenu';
import { memo } from 'react';
export const ControlLayersToolbar = memo(() => {
return (
<Flex w="full" gap={2}>
<Flex flex={1} justifyContent="center">
<Flex gap={2} marginInlineEnd="auto" />
<Flex gap={2} marginInlineEnd="auto">
<ToggleProgressButton />
</Flex>
</Flex>
<Flex flex={1} gap={2} justifyContent="center">
<BrushSize />
@ -21,7 +24,7 @@ export const ControlLayersToolbar = memo(() => {
</Flex>
<Flex flex={1} justifyContent="center">
<Flex gap={2} marginInlineStart="auto">
<ViewerButton />
<ViewerToggleMenu />
</Flex>
</Flex>
</Flex>

View File

@ -20,6 +20,7 @@ export const DeleteAllLayersButton = memo(() => {
variant="ghost"
colorScheme="error"
isDisabled={isDisabled}
data-testid="control-layers-delete-all-layers-button"
>
{t('controlLayers.deleteAll')}
</Button>

View File

@ -8,6 +8,7 @@ import { LayerTitle } from 'features/controlLayers/components/LayerCommon/LayerT
import { LayerVisibilityToggle } from 'features/controlLayers/components/LayerCommon/LayerVisibilityToggle';
import { LayerWrapper } from 'features/controlLayers/components/LayerCommon/LayerWrapper';
import {
iiLayerDenoisingStrengthChanged,
iiLayerImageChanged,
layerSelected,
selectIILayerOrThrow,
@ -36,6 +37,13 @@ export const IILayer = memo(({ layerId }: Props) => {
[dispatch, layerId]
);
const onChangeDenoisingStrength = useCallback(
(denoisingStrength: number) => {
dispatch(iiLayerDenoisingStrengthChanged({ layerId, denoisingStrength }));
},
[dispatch, layerId]
);
const droppableData = useMemo<IILayerImageDropData>(
() => ({
actionType: 'SET_II_LAYER_IMAGE',
@ -67,7 +75,7 @@ export const IILayer = memo(({ layerId }: Props) => {
</Flex>
{isOpen && (
<Flex flexDir="column" gap={3} px={3} pb={3}>
<ImageToImageStrength />
<ImageToImageStrength value={layer.denoisingStrength} onChange={onChangeDenoisingStrength} />
<InitialImagePreview
image={layer.image}
onChangeImage={onChangeImage}

View File

@ -32,7 +32,7 @@ export const InitialImagePreview = memo(({ image, onChangeImage, droppableData,
const optimalDimension = useAppSelector(selectOptimalDimension);
const shift = useShiftModifier();
const { currentData: imageDTO, isError: isErrorControlImage } = useGetImageDTOQuery(image?.imageName ?? skipToken);
const { currentData: imageDTO, isError: isErrorControlImage } = useGetImageDTOQuery(image?.name ?? skipToken);
const onReset = useCallback(() => {
onChangeImage(null);

View File

@ -1,19 +1,26 @@
import { Flex, Spacer, useDisclosure } from '@invoke-ai/ui-library';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import { IPALayerIPAdapterWrapper } from 'features/controlLayers/components/IPALayer/IPALayerIPAdapterWrapper';
import { LayerDeleteButton } from 'features/controlLayers/components/LayerCommon/LayerDeleteButton';
import { LayerTitle } from 'features/controlLayers/components/LayerCommon/LayerTitle';
import { LayerVisibilityToggle } from 'features/controlLayers/components/LayerCommon/LayerVisibilityToggle';
import { LayerWrapper } from 'features/controlLayers/components/LayerCommon/LayerWrapper';
import { memo } from 'react';
import { layerSelected, selectIPALayerOrThrow } from 'features/controlLayers/store/controlLayersSlice';
import { memo, useCallback } from 'react';
type Props = {
layerId: string;
};
export const IPALayer = memo(({ layerId }: Props) => {
const dispatch = useAppDispatch();
const isSelected = useAppSelector((s) => selectIPALayerOrThrow(s.controlLayers.present, layerId).isSelected);
const { isOpen, onToggle } = useDisclosure({ defaultIsOpen: true });
const onClick = useCallback(() => {
dispatch(layerSelected(layerId));
}, [dispatch, layerId]);
return (
<LayerWrapper borderColor="base.800">
<LayerWrapper onClick={onClick} borderColor={isSelected ? 'base.400' : 'base.800'}>
<Flex gap={3} alignItems="center" p={3} cursor="pointer" onDoubleClick={onToggle}>
<LayerVisibilityToggle layerId={layerId} />
<LayerTitle type="ip_adapter_layer" />

View File

@ -10,7 +10,16 @@ type Props = PropsWithChildren<{
export const LayerWrapper = memo(({ onClick, borderColor, children }: Props) => {
return (
<Flex gap={2} onClick={onClick} bg={borderColor} px={2} borderRadius="base" py="1px">
<Flex
gap={2}
onClick={onClick}
bg={borderColor}
px={2}
borderRadius="base"
py="1px"
transitionProperty="all"
transitionDuration="0.2s"
>
<Flex flexDir="column" w="full" bg="base.850" borderRadius="base">
{children}
</Flex>

View File

@ -45,6 +45,7 @@ export const RGLayerNegativePrompt = memo(({ layerId }: Props) => {
variant="darkFilled"
paddingRight={30}
fontSize="sm"
spellCheck={false}
/>
<PromptOverlayButtonWrapper>
<RGLayerPromptDeleteButton layerId={layerId} polarity="negative" />

View File

@ -45,6 +45,7 @@ export const RGLayerPositivePrompt = memo(({ layerId }: Props) => {
variant="darkFilled"
paddingRight={30}
minH={28}
spellCheck={false}
/>
<PromptOverlayButtonWrapper>
<RGLayerPromptDeleteButton layerId={layerId} polarity="positive" />

View File

@ -6,7 +6,7 @@ import { createMemoizedSelector } from 'app/store/createMemoizedSelector';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import { useMouseEvents } from 'features/controlLayers/hooks/mouseEventHooks';
import {
$cursorPosition,
$lastCursorPos,
$lastMouseDownPos,
$tool,
isRegionalGuidanceLayer,
@ -48,7 +48,7 @@ const useStageRenderer = (
const state = useAppSelector((s) => s.controlLayers.present);
const tool = useStore($tool);
const mouseEventHandlers = useMouseEvents();
const cursorPosition = useStore($cursorPosition);
const lastCursorPos = useStore($lastCursorPos);
const lastMouseDownPos = useStore($lastMouseDownPos);
const selectedLayerIdColor = useAppSelector(selectSelectedLayerColor);
const selectedLayerType = useAppSelector(selectSelectedLayerType);
@ -130,18 +130,18 @@ const useStageRenderer = (
}, [stage, state.size.width, state.size.height, wrapper]);
useLayoutEffect(() => {
log.trace('Rendering tool preview');
if (asPreview) {
// Preview should not display tool
return;
}
log.trace('Rendering tool preview');
renderers.renderToolPreview(
stage,
tool,
selectedLayerIdColor,
selectedLayerType,
state.globalMaskLayerOpacity,
cursorPosition,
lastCursorPos,
lastMouseDownPos,
state.brushSize
);
@ -152,7 +152,7 @@ const useStageRenderer = (
selectedLayerIdColor,
selectedLayerType,
state.globalMaskLayerOpacity,
cursorPosition,
lastCursorPos,
lastMouseDownPos,
state.brushSize,
renderers,
@ -178,15 +178,24 @@ const useStageRenderer = (
// Preview should not display bboxes
return;
}
renderers.renderBbox(stage, state.layers, tool, onBboxChanged);
renderers.renderBboxes(stage, state.layers, tool);
}, [stage, asPreview, state.layers, tool, onBboxChanged, renderers]);
useLayoutEffect(() => {
log.trace('Rendering background');
if (asPreview) {
// Preview should not check for transparency
return;
}
log.trace('Updating bboxes');
debouncedRenderers.updateBboxes(stage, state.layers, onBboxChanged);
}, [stage, asPreview, state.layers, onBboxChanged]);
useLayoutEffect(() => {
if (asPreview) {
// The preview should not have a background
return;
}
log.trace('Rendering background');
renderers.renderBackground(stage, state.size.width, state.size.height);
}, [stage, asPreview, state.size.width, state.size.height, renderers]);
@ -196,11 +205,11 @@ const useStageRenderer = (
}, [stage, layerIds, renderers]);
useLayoutEffect(() => {
log.trace('Rendering no layers message');
if (asPreview) {
// The preview should not display the no layers message
return;
}
log.trace('Rendering no layers message');
renderers.renderNoLayersMessage(stage, layerCount, state.size.width, state.size.height);
}, [stage, layerCount, renderers, asPreview, state.size.width, state.size.height]);
@ -233,7 +242,14 @@ export const StageComponent = memo(({ asPreview = false }: Props) => {
return (
<Flex overflow="hidden" w="full" h="full">
<Flex ref={wrapperRef} w="full" h="full" alignItems="center" justifyContent="center">
<Flex ref={containerRef} tabIndex={-1} bg="base.850" borderRadius="base" overflow="hidden" />
<Flex
ref={containerRef}
tabIndex={-1}
bg="base.850"
borderRadius="base"
overflow="hidden"
data-testid="control-layers-canvas"
/>
</Flex>
</Flex>
);
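Several components in this commit pick up `data-testid` attributes (`control-layers-add-layer-menu-button`, `control-layers-layer-list`, `control-layers-delete-all-layers-button`, and the `control-layers-canvas` container above), presumably as stable hooks for end-to-end tests. With a runner such as Playwright they could be targeted like this - a hypothetical test, not one shipped in this commit:

```ts
import { expect, test } from '@playwright/test';

test('can add a control layer', async ({ page }) => {
  await page.goto('/'); // assumes the app is served locally
  // Open the "Add Layer" menu via its new test id.
  await page.getByTestId('control-layers-add-layer-menu-button').click();
  // ...pick a layer type from the menu...
  // The layer list container should now be rendered.
  await expect(page.getByTestId('control-layers-layer-list')).toBeVisible();
});
```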

View File

@ -4,9 +4,9 @@ import { createSelector } from '@reduxjs/toolkit';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import {
$tool,
layerReset,
selectControlLayersSlice,
selectedLayerDeleted,
selectedLayerReset,
} from 'features/controlLayers/store/controlLayersSlice';
import { useCallback } from 'react';
import { useHotkeys } from 'react-hotkeys-hook';
@ -22,6 +22,7 @@ export const ToolChooser: React.FC = () => {
const { t } = useTranslation();
const dispatch = useAppDispatch();
const isDisabled = useAppSelector(selectIsDisabled);
const selectedLayerId = useAppSelector((s) => s.controlLayers.present.selectedLayerId);
const tool = useStore($tool);
const setToolToBrush = useCallback(() => {
@ -42,8 +43,11 @@ export const ToolChooser: React.FC = () => {
useHotkeys('v', setToolToMove, { enabled: !isDisabled }, [isDisabled]);
const resetSelectedLayer = useCallback(() => {
dispatch(selectedLayerReset());
}, [dispatch]);
if (selectedLayerId === null) {
return;
}
dispatch(layerReset(selectedLayerId));
}, [dispatch, selectedLayerId]);
useHotkeys('shift+c', resetSelectedLayer);
const deleteSelectedLayer = useCallback(() => {

View File

@ -1,4 +1,5 @@
import { createSelector } from '@reduxjs/toolkit';
import { createMemoizedSelector } from 'app/store/createMemoizedSelector';
import { useAppSelector } from 'app/store/storeHooks';
import {
isControlAdapterLayer,
@ -69,7 +70,7 @@ export const useLayerType = (layerId: string) => {
export const useLayerOpacity = (layerId: string) => {
const selectLayer = useMemo(
() =>
createSelector(selectControlLayersSlice, (controlLayers) => {
createMemoizedSelector(selectControlLayersSlice, (controlLayers) => {
const layer = controlLayers.present.layers.filter(isControlAdapterLayer).find((l) => l.id === layerId);
assert(layer, `Layer ${layerId} not found`);
return { opacity: Math.round(layer.opacity * 100), isFilterEnabled: layer.isFilterEnabled };
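The swap from `createSelector` to `createMemoizedSelector` matters because this result function builds a fresh `{ opacity, isFilterEnabled }` object on every call, so any `controlLayers` update would hand subscribers a new reference and force a re-render even when nothing changed. Assuming `createMemoizedSelector` is a reselect wrapper with a result equality check, the underlying idea looks like this sketch using reselect's own options:

```ts
import { createSelector } from 'reselect';
import { shallowEqual } from 'react-redux';

type State = { layers: { id: string; opacity: number; isFilterEnabled: boolean }[] };

// resultEqualityCheck makes the selector hand back the previous object when the
// newly computed result is shallowly equal, keeping the reference stable.
const makeSelectLayerOpacity = (layerId: string) =>
  createSelector(
    (s: State) => s.layers,
    (layers) => {
      const layer = layers.find((l) => l.id === layerId);
      return { opacity: layer?.opacity ?? 1, isFilterEnabled: layer?.isFilterEnabled ?? false };
    },
    { memoizeOptions: { resultEqualityCheck: shallowEqual } }
  );
```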

View File

@ -3,8 +3,8 @@ import { useStore } from '@nanostores/react';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import { calculateNewBrushSize } from 'features/canvas/hooks/useCanvasZoom';
import {
$cursorPosition,
$isDrawing,
$lastCursorPos,
$lastMouseDownPos,
$tool,
brushSizeChanged,
@ -15,17 +15,41 @@ import {
import type Konva from 'konva';
import type { KonvaEventObject } from 'konva/lib/Node';
import type { Vector2d } from 'konva/lib/types';
import { useCallback, useRef } from 'react';
import { clamp } from 'lodash-es';
import { useCallback, useMemo, useRef } from 'react';
const getIsFocused = (stage: Konva.Stage) => {
return stage.container().contains(document.activeElement);
};
const getIsMouseDown = (e: KonvaEventObject<MouseEvent>) => e.evt.buttons === 1;
const SNAP_PX = 10;
export const snapPosToStage = (pos: Vector2d, stage: Konva.Stage) => {
const snappedPos = { ...pos };
// Get the normalized threshold for snapping to the edge of the stage
const thresholdX = SNAP_PX / stage.scaleX();
const thresholdY = SNAP_PX / stage.scaleY();
const stageWidth = stage.width() / stage.scaleX();
const stageHeight = stage.height() / stage.scaleY();
// Snap to the edge of the stage if within threshold
if (pos.x - thresholdX < 0) {
snappedPos.x = 0;
} else if (pos.x + thresholdX > stageWidth) {
snappedPos.x = Math.floor(stageWidth);
}
if (pos.y - thresholdY < 0) {
snappedPos.y = 0;
} else if (pos.y + thresholdY > stageHeight) {
snappedPos.y = Math.floor(stageHeight);
}
return snappedPos;
};
export const getScaledFlooredCursorPosition = (stage: Konva.Stage) => {
const pointerPosition = stage.getPointerPosition();
const stageTransform = stage.getAbsoluteTransform().copy();
if (!pointerPosition || !stageTransform) {
if (!pointerPosition) {
return;
}
const scaledCursorPosition = stageTransform.invert().point(pointerPosition);
@ -40,11 +64,13 @@ const syncCursorPos = (stage: Konva.Stage): Vector2d | null => {
if (!pos) {
return null;
}
$cursorPosition.set(pos);
$lastCursorPos.set(pos);
return pos;
};
const BRUSH_SPACING = 20;
const BRUSH_SPACING_PCT = 10;
const MIN_BRUSH_SPACING_PX = 5;
const MAX_BRUSH_SPACING_PX = 15;
export const useMouseEvents = () => {
const dispatch = useAppDispatch();
@ -60,6 +86,10 @@ export const useMouseEvents = () => {
const lastCursorPosRef = useRef<[number, number] | null>(null);
const shouldInvertBrushSizeScrollDirection = useAppSelector((s) => s.canvas.shouldInvertBrushSizeScrollDirection);
const brushSize = useAppSelector((s) => s.controlLayers.present.brushSize);
const brushSpacingPx = useMemo(
() => clamp(brushSize / BRUSH_SPACING_PCT, MIN_BRUSH_SPACING_PX, MAX_BRUSH_SPACING_PX),
[brushSize]
);
const onMouseDown = useCallback(
(e: KonvaEventObject<MouseEvent>) => {
@ -71,7 +101,6 @@ export const useMouseEvents = () => {
if (!pos || !selectedLayerId || selectedLayerType !== 'regional_guidance_layer') {
return;
}
$lastMouseDownPos.set(pos);
if (tool === 'brush' || tool === 'eraser') {
dispatch(
rgLayerLineAdded({
@ -81,6 +110,9 @@ export const useMouseEvents = () => {
})
);
$isDrawing.set(true);
$lastMouseDownPos.set(pos);
} else if (tool === 'rect') {
$lastMouseDownPos.set(snapPosToStage(pos, stage));
}
},
[dispatch, selectedLayerId, selectedLayerType, tool]
@ -92,21 +124,22 @@ export const useMouseEvents = () => {
if (!stage) {
return;
}
const pos = $cursorPosition.get();
const pos = $lastCursorPos.get();
if (!pos || !selectedLayerId || selectedLayerType !== 'regional_guidance_layer') {
return;
}
const lastPos = $lastMouseDownPos.get();
const tool = $tool.get();
if (lastPos && selectedLayerId && tool === 'rect') {
const snappedPos = snapPosToStage(pos, stage);
dispatch(
rgLayerRectAdded({
layerId: selectedLayerId,
rect: {
x: Math.min(pos.x, lastPos.x),
y: Math.min(pos.y, lastPos.y),
width: Math.abs(pos.x - lastPos.x),
height: Math.abs(pos.y - lastPos.y),
x: Math.min(snappedPos.x, lastPos.x),
y: Math.min(snappedPos.y, lastPos.y),
width: Math.abs(snappedPos.x - lastPos.x),
height: Math.abs(snappedPos.y - lastPos.y),
},
})
);
@ -132,7 +165,7 @@ export const useMouseEvents = () => {
// Continue the last line
if (lastCursorPosRef.current) {
// Dispatching redux events impacts perf substantially - using brush spacing keeps dispatches to a reasonable number
if (Math.hypot(lastCursorPosRef.current[0] - pos.x, lastCursorPosRef.current[1] - pos.y) < BRUSH_SPACING) {
if (Math.hypot(lastCursorPosRef.current[0] - pos.x, lastCursorPosRef.current[1] - pos.y) < brushSpacingPx) {
return;
}
}
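The new constants replace the fixed 20px spacing with one derived from brush size and clamped to 5-15px. A rough sketch of the gating math under those constants (`spacingFor` and `shouldAddPoint` are hypothetical helper names, not app code):

```ts
import { clamp } from 'lodash-es';

const BRUSH_SPACING_PCT = 10;
const MIN_BRUSH_SPACING_PX = 5;
const MAX_BRUSH_SPACING_PX = 15;

// brushSize 100 -> 10px; brushSize 30 -> 5px (clamped up); brushSize 300 -> 15px (clamped down)
const spacingFor = (brushSize: number) =>
  clamp(brushSize / BRUSH_SPACING_PCT, MIN_BRUSH_SPACING_PX, MAX_BRUSH_SPACING_PX);

// A point is only dispatched once the cursor has travelled at least `spacing` px, so the
// number of redux actions scales with stroke length / spacing, not with mousemove frequency.
const shouldAddPoint = (last: [number, number], pos: { x: number; y: number }, spacing: number) =>
  Math.hypot(last[0] - pos.x, last[1] - pos.y) >= spacing;
```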
@ -145,7 +178,7 @@ export const useMouseEvents = () => {
$isDrawing.set(true);
}
},
[dispatch, selectedLayerId, selectedLayerType, tool]
[brushSpacingPx, dispatch, selectedLayerId, selectedLayerType, tool]
);
const onMouseLeave = useCallback(
@ -155,14 +188,15 @@ export const useMouseEvents = () => {
return;
}
const pos = syncCursorPos(stage);
$isDrawing.set(false);
$lastCursorPos.set(null);
$lastMouseDownPos.set(null);
if (!pos || !selectedLayerId || selectedLayerType !== 'regional_guidance_layer') {
return;
}
if (getIsFocused(stage) && getIsMouseDown(e) && (tool === 'brush' || tool === 'eraser')) {
dispatch(rgLayerPointsAdded({ layerId: selectedLayerId, point: [pos.x, pos.y] }));
}
$isDrawing.set(false);
$cursorPosition.set(null);
},
[selectedLayerId, selectedLayerType, tool, dispatch]
);
@ -190,5 +224,10 @@ export const useMouseEvents = () => {
[selectedLayerType, tool, shouldInvertBrushSizeScrollDirection, dispatch, brushSize]
);
return { onMouseDown, onMouseUp, onMouseMove, onMouseLeave, onMouseWheel };
const handlers = useMemo(
() => ({ onMouseDown, onMouseUp, onMouseMove, onMouseLeave, onMouseWheel }),
[onMouseDown, onMouseUp, onMouseMove, onMouseLeave, onMouseWheel]
);
return handlers;
};
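Wrapping the five callbacks in a single memoized object gives consumers a stable reference. A hypothetical consumer (the `useAttachHandlers` hook below is illustrative, not app code) shows why that matters:

```ts
import { useEffect } from 'react';
import type Konva from 'konva';

const useAttachHandlers = (stage: Konva.Stage) => {
  const handlers = useMouseEvents();
  useEffect(() => {
    // Re-subscribes only when a callback actually changes - `handlers` keeps
    // the same identity across unrelated re-renders thanks to the useMemo.
    stage.on('mousedown', handlers.onMouseDown);
    return () => {
      stage.off('mousedown', handlers.onMouseDown);
    };
  }, [stage, handlers]);
};
```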

View File

@ -1,51 +0,0 @@
import { createSelector } from '@reduxjs/toolkit';
import { useAppSelector } from 'app/store/storeHooks';
import {
isControlAdapterLayer,
isInitialImageLayer,
isIPAdapterLayer,
isRegionalGuidanceLayer,
selectControlLayersSlice,
} from 'features/controlLayers/store/controlLayersSlice';
import { useMemo } from 'react';
import { useTranslation } from 'react-i18next';
const selectValidLayerCount = createSelector(selectControlLayersSlice, (controlLayers) => {
let count = 0;
controlLayers.present.layers.forEach((l) => {
if (isRegionalGuidanceLayer(l)) {
const hasTextPrompt = Boolean(l.positivePrompt || l.negativePrompt);
const hasAtLeastOneImagePrompt = l.ipAdapters.filter((ipa) => Boolean(ipa.image)).length > 0;
if (hasTextPrompt || hasAtLeastOneImagePrompt) {
count += 1;
}
}
if (isControlAdapterLayer(l)) {
if (l.controlAdapter.image || l.controlAdapter.processedImage) {
count += 1;
}
}
if (isIPAdapterLayer(l)) {
if (l.ipAdapter.image) {
count += 1;
}
}
if (isInitialImageLayer(l)) {
if (l.image) {
count += 1;
}
}
});
return count;
});
export const useControlLayersTitle = () => {
const { t } = useTranslation();
const validLayerCount = useAppSelector(selectValidLayerCount);
const title = useMemo(() => {
const suffix = validLayerCount > 0 ? ` (${validLayerCount})` : '';
return `${t('controlLayers.controlLayers')}${suffix}`;
}, [t, validLayerCount]);
return title;
};

View File

@ -27,7 +27,7 @@ import { modelChanged } from 'features/parameters/store/generationSlice';
import type { ParameterAutoNegative } from 'features/parameters/types/parameterSchemas';
import { getIsSizeOptimal, getOptimalDimension } from 'features/parameters/util/optimalDimension';
import type { IRect, Vector2d } from 'konva/lib/types';
import { isEqual, partition } from 'lodash-es';
import { isEqual, partition, unset } from 'lodash-es';
import { atom } from 'nanostores';
import type { RgbColor } from 'react-colorful';
import type { UndoableOptions } from 'redux-undo';
@ -49,7 +49,7 @@ import type {
} from './types';
export const initialControlLayersState: ControlLayersState = {
_version: 1,
_version: 3,
selectedLayerId: null,
brushSize: 100,
layers: [],
@ -79,16 +79,6 @@ export const isRenderableLayer = (
layer?.type === 'regional_guidance_layer' ||
layer?.type === 'control_adapter_layer' ||
layer?.type === 'initial_image_layer';
const resetLayer = (layer: Layer) => {
if (layer.type === 'regional_guidance_layer') {
layer.maskObjects = [];
layer.bbox = null;
layer.isEnabled = true;
layer.needsPixelBbox = false;
layer.bboxNeedsUpdate = false;
return;
}
};
export const selectCALayerOrThrow = (state: ControlLayersState, layerId: string): ControlAdapterLayer => {
const layer = state.layers.find((l) => l.id === layerId);
@ -134,6 +124,12 @@ const getVectorMaskPreviewColor = (state: ControlLayersState): RgbColor => {
const lastColor = rgLayers[rgLayers.length - 1]?.previewColor;
return LayerColors.next(lastColor);
};
const exclusivelySelectLayer = (state: ControlLayersState, layerId: string) => {
for (const layer of state.layers) {
layer.isSelected = layer.id === layerId;
}
state.selectedLayerId = layerId;
};
export const controlLayersSlice = createSlice({
name: 'controlLayers',
@ -141,14 +137,7 @@ export const controlLayersSlice = createSlice({
reducers: {
//#region Any Layer Type
layerSelected: (state, action: PayloadAction<string>) => {
for (const layer of state.layers.filter(isRenderableLayer)) {
if (layer.id === action.payload) {
layer.isSelected = true;
state.selectedLayerId = action.payload;
} else {
layer.isSelected = false;
}
}
exclusivelySelectLayer(state, action.payload);
},
layerVisibilityToggled: (state, action: PayloadAction<string>) => {
const layer = state.layers.find((l) => l.id === action.payload);
@ -163,6 +152,9 @@ export const controlLayersSlice = createSlice({
layer.x = x;
layer.y = y;
}
if (isRegionalGuidanceLayer(layer)) {
layer.uploadedMaskImage = null;
}
},
layerBboxChanged: (state, action: PayloadAction<{ layerId: string; bbox: IRect | null }>) => {
const { layerId, bbox } = action.payload;
@ -173,14 +165,19 @@ export const controlLayersSlice = createSlice({
if (bbox === null && layer.type === 'regional_guidance_layer') {
// The layer was fully erased; empty its objects to prevent accumulation of invisible objects
layer.maskObjects = [];
layer.needsPixelBbox = false;
layer.uploadedMaskImage = null;
}
}
},
layerReset: (state, action: PayloadAction<string>) => {
const layer = state.layers.find((l) => l.id === action.payload);
if (layer) {
resetLayer(layer);
// TODO(psyche): Should other layer types also have reset functionality?
if (isRegionalGuidanceLayer(layer)) {
layer.maskObjects = [];
layer.bbox = null;
layer.isEnabled = true;
layer.bboxNeedsUpdate = false;
layer.uploadedMaskImage = null;
}
},
layerDeleted: (state, action: PayloadAction<string>) => {
@ -213,12 +210,6 @@ export const controlLayersSlice = createSlice({
moveToFront(renderableLayers, cb);
state.layers = [...ipAdapterLayers, ...renderableLayers];
},
selectedLayerReset: (state) => {
const layer = state.layers.find((l) => l.id === state.selectedLayerId);
if (layer) {
resetLayer(layer);
}
},
selectedLayerDeleted: (state) => {
state.layers = state.layers.filter((l) => l.id !== state.selectedLayerId);
state.selectedLayerId = state.layers[0]?.id ?? null;
@ -250,17 +241,16 @@ export const controlLayersSlice = createSlice({
controlAdapter,
};
state.layers.push(layer);
state.selectedLayerId = layer.id;
for (const layer of state.layers.filter(isRenderableLayer)) {
if (layer.id !== layerId) {
layer.isSelected = false;
}
}
exclusivelySelectLayer(state, layer.id);
},
prepare: (controlAdapter: ControlNetConfigV2 | T2IAdapterConfigV2) => ({
payload: { layerId: uuidv4(), controlAdapter },
}),
},
caLayerRecalled: (state, action: PayloadAction<ControlAdapterLayer>) => {
state.layers.push({ ...action.payload, isSelected: true });
exclusivelySelectLayer(state, action.payload.id);
},
caLayerImageChanged: (state, action: PayloadAction<{ layerId: string; imageDTO: ImageDTO | null }>) => {
const { layerId, imageDTO } = action.payload;
const layer = selectCALayerOrThrow(state, layerId);
@ -344,19 +334,13 @@ export const controlLayersSlice = createSlice({
const layer = selectCALayerOrThrow(state, layerId);
layer.opacity = opacity;
},
caLayerIsProcessingImageChanged: (
caLayerProcessorPendingBatchIdChanged: (
state,
action: PayloadAction<{ layerId: string; isProcessingImage: boolean }>
action: PayloadAction<{ layerId: string; batchId: string | null }>
) => {
const { layerId, isProcessingImage } = action.payload;
const { layerId, batchId } = action.payload;
const layer = selectCALayerOrThrow(state, layerId);
layer.controlAdapter.isProcessingImage = isProcessingImage;
},
caLayerControlNetsDeleted: (state) => {
state.layers = state.layers.filter((l) => !isControlAdapterLayer(l) || l.controlAdapter.type !== 'controlnet');
},
caLayerT2IAdaptersDeleted: (state) => {
state.layers = state.layers.filter((l) => !isControlAdapterLayer(l) || l.controlAdapter.type !== 't2i_adapter');
layer.controlAdapter.processorPendingBatchId = batchId;
},
//#endregion
@ -368,12 +352,17 @@ export const controlLayersSlice = createSlice({
id: getIPALayerId(layerId),
type: 'ip_adapter_layer',
isEnabled: true,
isSelected: true,
ipAdapter,
};
state.layers.push(layer);
exclusivelySelectLayer(state, layer.id);
},
prepare: (ipAdapter: IPAdapterConfigV2) => ({ payload: { layerId: uuidv4(), ipAdapter } }),
},
ipaLayerRecalled: (state, action: PayloadAction<IPAdapterLayer>) => {
state.layers.push(action.payload);
},
ipaLayerImageChanged: (state, action: PayloadAction<{ layerId: string; imageDTO: ImageDTO | null }>) => {
const { layerId, imageDTO } = action.payload;
const layer = selectIPALayerOrThrow(state, layerId);
@ -407,9 +396,6 @@ export const controlLayersSlice = createSlice({
const layer = selectIPALayerOrThrow(state, layerId);
layer.ipAdapter.clipVisionModel = clipVisionModel;
},
ipaLayersDeleted: (state) => {
state.layers = state.layers.filter((l) => !isIPAdapterLayer(l));
},
//#endregion
//#region CA or IPA Layers
@ -451,22 +437,21 @@ export const controlLayersSlice = createSlice({
x: 0,
y: 0,
autoNegative: 'invert',
needsPixelBbox: false,
positivePrompt: '',
negativePrompt: null,
ipAdapters: [],
isSelected: true,
uploadedMaskImage: null,
};
state.layers.push(layer);
state.selectedLayerId = layer.id;
for (const layer of state.layers.filter(isRenderableLayer)) {
if (layer.id !== layerId) {
layer.isSelected = false;
}
}
exclusivelySelectLayer(state, layer.id);
},
prepare: () => ({ payload: { layerId: uuidv4() } }),
},
rgLayerRecalled: (state, action: PayloadAction<RegionalGuidanceLayer>) => {
state.layers.push({ ...action.payload, isSelected: true });
exclusivelySelectLayer(state, action.payload.id);
},
rgLayerPositivePromptChanged: (state, action: PayloadAction<{ layerId: string; prompt: string | null }>) => {
const { layerId, prompt } = action.payload;
const layer = selectRGLayerOrThrow(state, layerId);
@ -505,9 +490,7 @@ export const controlLayersSlice = createSlice({
strokeWidth: state.brushSize,
});
layer.bboxNeedsUpdate = true;
if (!layer.needsPixelBbox && tool === 'eraser') {
layer.needsPixelBbox = true;
}
layer.uploadedMaskImage = null;
},
prepare: (payload: { layerId: string; points: [number, number, number, number]; tool: DrawingTool }) => ({
payload: { ...payload, lineUuid: uuidv4() },
@ -524,6 +507,7 @@ export const controlLayersSlice = createSlice({
// TODO: Handle this in the event listener
lastLine.points.push(point[0] - layer.x, point[1] - layer.y);
layer.bboxNeedsUpdate = true;
layer.uploadedMaskImage = null;
},
rgLayerRectAdded: {
reducer: (state, action: PayloadAction<{ layerId: string; rect: IRect; rectUuid: string }>) => {
@ -543,9 +527,15 @@ export const controlLayersSlice = createSlice({
height: rect.height,
});
layer.bboxNeedsUpdate = true;
layer.uploadedMaskImage = null;
},
prepare: (payload: { layerId: string; rect: IRect }) => ({ payload: { ...payload, rectUuid: uuidv4() } }),
},
rgLayerMaskImageUploaded: (state, action: PayloadAction<{ layerId: string; imageDTO: ImageDTO }>) => {
const { layerId, imageDTO } = action.payload;
const layer = selectRGLayerOrThrow(state, layerId);
layer.uploadedMaskImage = imageDTOToImageWithDims(imageDTO);
},
rgLayerAutoNegativeChanged: (
state,
action: PayloadAction<{ layerId: string; autoNegative: ParameterAutoNegative }>
@ -639,16 +629,17 @@ export const controlLayersSlice = createSlice({
isEnabled: true,
image: imageDTO ? imageDTOToImageWithDims(imageDTO) : null,
isSelected: true,
denoisingStrength: 0.75,
};
state.layers.push(layer);
state.selectedLayerId = layer.id;
for (const layer of state.layers.filter(isRenderableLayer)) {
if (layer.id !== layerId) {
layer.isSelected = false;
}
}
exclusivelySelectLayer(state, layer.id);
},
prepare: (imageDTO: ImageDTO | null) => ({ payload: { layerId: 'initial_image_layer', imageDTO } }),
prepare: (imageDTO: ImageDTO | null) => ({ payload: { layerId: INITIAL_IMAGE_LAYER_ID, imageDTO } }),
},
iiLayerRecalled: (state, action: PayloadAction<InitialImageLayer>) => {
state.layers = state.layers.filter((l) => (isInitialImageLayer(l) ? false : true));
state.layers.push({ ...action.payload, isSelected: true });
exclusivelySelectLayer(state, action.payload.id);
},
iiLayerImageChanged: (state, action: PayloadAction<{ layerId: string; imageDTO: ImageDTO | null }>) => {
const { layerId, imageDTO } = action.payload;
@ -663,6 +654,11 @@ export const controlLayersSlice = createSlice({
const layer = selectIILayerOrThrow(state, layerId);
layer.opacity = opacity;
},
iiLayerDenoisingStrengthChanged: (state, action: PayloadAction<{ layerId: string; denoisingStrength: number }>) => {
const { layerId, denoisingStrength } = action.payload;
const layer = selectIILayerOrThrow(state, layerId);
layer.denoisingStrength = denoisingStrength;
},
//#endregion
//#region Globals
@ -792,11 +788,11 @@ export const {
layerMovedToFront,
layerMovedBackward,
layerMovedToBack,
selectedLayerReset,
selectedLayerDeleted,
allLayersDeleted,
// CA Layers
caLayerAdded,
caLayerRecalled,
caLayerImageChanged,
caLayerProcessedImageChanged,
caLayerModelChanged,
@ -804,27 +800,27 @@ export const {
caLayerProcessorConfigChanged,
caLayerIsFilterEnabledChanged,
caLayerOpacityChanged,
caLayerIsProcessingImageChanged,
caLayerControlNetsDeleted,
caLayerT2IAdaptersDeleted,
caLayerProcessorPendingBatchIdChanged,
// IPA Layers
ipaLayerAdded,
ipaLayerRecalled,
ipaLayerImageChanged,
ipaLayerMethodChanged,
ipaLayerModelChanged,
ipaLayerCLIPVisionModelChanged,
ipaLayersDeleted,
// CA or IPA Layers
caOrIPALayerWeightChanged,
caOrIPALayerBeginEndStepPctChanged,
// RG Layers
rgLayerAdded,
rgLayerRecalled,
rgLayerPositivePromptChanged,
rgLayerNegativePromptChanged,
rgLayerPreviewColorChanged,
rgLayerLineAdded,
rgLayerPointsAdded,
rgLayerRectAdded,
rgLayerMaskImageUploaded,
rgLayerAutoNegativeChanged,
rgLayerIPAdapterAdded,
rgLayerIPAdapterDeleted,
@ -836,8 +832,10 @@ export const {
rgLayerIPAdapterCLIPVisionModelChanged,
// II Layer
iiLayerAdded,
iiLayerRecalled,
iiLayerImageChanged,
iiLayerOpacityChanged,
iiLayerDenoisingStrengthChanged,
// Globals
positivePromptChanged,
negativePromptChanged,
@ -857,13 +855,26 @@ export const selectControlLayersSlice = (state: RootState) => state.controlLayer
/* eslint-disable-next-line @typescript-eslint/no-explicit-any */
const migrateControlLayersState = (state: any): any => {
if (state._version === 1) {
// Reset state for users on v1 (e.g. beta users); some changes could cause errors, so we fall back to the initial state
state = deepClone(initialControlLayersState);
}
if (state._version === 2) {
// The CA `isProcessingImage` flag was replaced with a `processorPendingBatchId` property, fix up CA layers
for (const layer of (state as ControlLayersState).layers) {
if (layer.type === 'control_adapter_layer') {
layer.controlAdapter.processorPendingBatchId = null;
unset(layer.controlAdapter, 'isProcessingImage');
}
}
}
return state;
};
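The migration runs version by version: v1 state is discarded wholesale, while v2 state gets a targeted fix-up. A generic sketch of the pattern (hypothetical field names and a hypothetical `freshInitialState`, not the app's actual state shape):

```ts
/* eslint-disable @typescript-eslint/no-explicit-any */
const migrate = (state: any): any => {
  if (state._version === 1) {
    // Irrecoverable schema changes: start over from a fresh initial state
    state = structuredClone(freshInitialState);
  }
  if (state._version === 2) {
    // Targeted fix-up: move/rename a field, then bump the version so any
    // later migration blocks can run in sequence on very old states
    state.newField = state.oldField ?? null;
    delete state.oldField;
    state._version = 3;
  }
  return state;
};
```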
export const $isDrawing = atom(false);
export const $lastMouseDownPos = atom<Vector2d | null>(null);
export const $tool = atom<Tool>('brush');
export const $cursorPosition = atom<Vector2d | null>(null);
export const $lastCursorPos = atom<Vector2d | null>(null);
// IDs for singleton Konva layers and objects
export const TOOL_PREVIEW_LAYER_ID = 'tool_preview_layer';
@ -883,20 +894,22 @@ export const RG_LAYER_NAME = 'regional_guidance_layer';
export const RG_LAYER_LINE_NAME = 'regional_guidance_layer.line';
export const RG_LAYER_OBJECT_GROUP_NAME = 'regional_guidance_layer.object_group';
export const RG_LAYER_RECT_NAME = 'regional_guidance_layer.rect';
export const INITIAL_IMAGE_LAYER_ID = 'singleton_initial_image_layer';
export const INITIAL_IMAGE_LAYER_NAME = 'initial_image_layer';
export const INITIAL_IMAGE_LAYER_IMAGE_NAME = 'initial_image_layer.image';
export const LAYER_BBOX_NAME = 'layer.bbox';
export const COMPOSITING_RECT_NAME = 'compositing-rect';
// Getters for non-singleton layer and object IDs
const getRGLayerId = (layerId: string) => `${RG_LAYER_NAME}_${layerId}`;
export const getRGLayerId = (layerId: string) => `${RG_LAYER_NAME}_${layerId}`;
const getRGLayerLineId = (layerId: string, lineId: string) => `${layerId}.line_${lineId}`;
const getRGLayerRectId = (layerId: string, lineId: string) => `${layerId}.rect_${lineId}`;
export const getRGLayerObjectGroupId = (layerId: string, groupId: string) => `${layerId}.objectGroup_${groupId}`;
export const getLayerBboxId = (layerId: string) => `${layerId}.bbox`;
const getCALayerId = (layerId: string) => `control_adapter_layer_${layerId}`;
export const getCALayerId = (layerId: string) => `control_adapter_layer_${layerId}`;
export const getCALayerImageId = (layerId: string, imageName: string) => `${layerId}.image_${imageName}`;
export const getIILayerImageId = (layerId: string, imageName: string) => `${layerId}.image_${imageName}`;
const getIPALayerId = (layerId: string) => `ip_adapter_layer_${layerId}`;
export const getIPALayerId = (layerId: string) => `ip_adapter_layer_${layerId}`;
export const controlLayersPersistConfig: PersistConfig<ControlLayersState> = {
name: controlLayersSlice.name,

View File

@ -1,89 +1,119 @@
import type {
ControlNetConfigV2,
ImageWithDims,
IPAdapterConfigV2,
T2IAdapterConfigV2,
import {
zControlNetConfigV2,
zImageWithDims,
zIPAdapterConfigV2,
zT2IAdapterConfigV2,
} from 'features/controlLayers/util/controlAdapters';
import type { AspectRatioState } from 'features/parameters/components/ImageSize/types';
import type {
ParameterAutoNegative,
ParameterHeight,
ParameterNegativePrompt,
ParameterNegativeStylePromptSDXL,
ParameterPositivePrompt,
ParameterPositiveStylePromptSDXL,
ParameterWidth,
import {
type ParameterHeight,
type ParameterNegativePrompt,
type ParameterNegativeStylePromptSDXL,
type ParameterPositivePrompt,
type ParameterPositiveStylePromptSDXL,
type ParameterWidth,
zAutoNegative,
zParameterNegativePrompt,
zParameterPositivePrompt,
zParameterStrength,
} from 'features/parameters/types/parameterSchemas';
import type { IRect } from 'konva/lib/types';
import type { RgbColor } from 'react-colorful';
import { z } from 'zod';
export type DrawingTool = 'brush' | 'eraser';
const zTool = z.enum(['brush', 'eraser', 'move', 'rect']);
export type Tool = z.infer<typeof zTool>;
const zDrawingTool = zTool.extract(['brush', 'eraser']);
export type DrawingTool = z.infer<typeof zDrawingTool>;
export type Tool = DrawingTool | 'move' | 'rect';
const zPoints = z.array(z.number()).refine((points) => points.length % 2 === 0, {
message: 'Must have an even number of points',
});
const zVectorMaskLine = z.object({
id: z.string(),
type: z.literal('vector_mask_line'),
tool: zDrawingTool,
strokeWidth: z.number().min(1),
points: zPoints,
});
export type VectorMaskLine = z.infer<typeof zVectorMaskLine>;
export type VectorMaskLine = {
id: string;
type: 'vector_mask_line';
tool: DrawingTool;
strokeWidth: number;
points: number[];
};
const zVectorMaskRect = z.object({
id: z.string(),
type: z.literal('vector_mask_rect'),
x: z.number(),
y: z.number(),
width: z.number().min(1),
height: z.number().min(1),
});
export type VectorMaskRect = z.infer<typeof zVectorMaskRect>;
export type VectorMaskRect = {
id: string;
type: 'vector_mask_rect';
x: number;
y: number;
width: number;
height: number;
};
const zLayerBase = z.object({
id: z.string(),
isEnabled: z.boolean().default(true),
isSelected: z.boolean().default(true),
});
type LayerBase = {
id: string;
isEnabled: boolean;
};
const zRect = z.object({
x: z.number(),
y: z.number(),
width: z.number().min(1),
height: z.number().min(1),
});
const zRenderableLayerBase = zLayerBase.extend({
x: z.number(),
y: z.number(),
bbox: zRect.nullable(),
bboxNeedsUpdate: z.boolean(),
});
type RenderableLayerBase = LayerBase & {
x: number;
y: number;
bbox: IRect | null;
bboxNeedsUpdate: boolean;
isSelected: boolean;
};
const zControlAdapterLayer = zRenderableLayerBase.extend({
type: z.literal('control_adapter_layer'),
opacity: z.number().gte(0).lte(1),
isFilterEnabled: z.boolean(),
controlAdapter: z.discriminatedUnion('type', [zControlNetConfigV2, zT2IAdapterConfigV2]),
});
export type ControlAdapterLayer = z.infer<typeof zControlAdapterLayer>;
export type ControlAdapterLayer = RenderableLayerBase & {
type: 'control_adapter_layer'; // technically, also t2i adapter layer
opacity: number;
isFilterEnabled: boolean;
controlAdapter: ControlNetConfigV2 | T2IAdapterConfigV2;
};
const zIPAdapterLayer = zLayerBase.extend({
type: z.literal('ip_adapter_layer'),
ipAdapter: zIPAdapterConfigV2,
});
export type IPAdapterLayer = z.infer<typeof zIPAdapterLayer>;
export type IPAdapterLayer = LayerBase & {
type: 'ip_adapter_layer';
ipAdapter: IPAdapterConfigV2;
};
const zRgbColor = z.object({
r: z.number().int().min(0).max(255),
g: z.number().int().min(0).max(255),
b: z.number().int().min(0).max(255),
});
const zRegionalGuidanceLayer = zRenderableLayerBase.extend({
type: z.literal('regional_guidance_layer'),
maskObjects: z.array(z.discriminatedUnion('type', [zVectorMaskLine, zVectorMaskRect])),
positivePrompt: zParameterPositivePrompt.nullable(),
negativePrompt: zParameterNegativePrompt.nullable(),
ipAdapters: z.array(zIPAdapterConfigV2),
previewColor: zRgbColor,
autoNegative: zAutoNegative,
uploadedMaskImage: zImageWithDims.nullable(),
});
export type RegionalGuidanceLayer = z.infer<typeof zRegionalGuidanceLayer>;
export type RegionalGuidanceLayer = RenderableLayerBase & {
type: 'regional_guidance_layer';
maskObjects: (VectorMaskLine | VectorMaskRect)[];
positivePrompt: ParameterPositivePrompt | null;
negativePrompt: ParameterNegativePrompt | null; // Up to one text prompt per mask
ipAdapters: IPAdapterConfigV2[]; // Any number of image prompts
previewColor: RgbColor;
autoNegative: ParameterAutoNegative;
needsPixelBbox: boolean; // Needs the slower pixel-based bbox calculation - set to true when there is an eraser object
};
const zInitialImageLayer = zRenderableLayerBase.extend({
type: z.literal('initial_image_layer'),
opacity: z.number().gte(0).lte(1),
image: zImageWithDims.nullable(),
denoisingStrength: zParameterStrength,
});
export type InitialImageLayer = z.infer<typeof zInitialImageLayer>;
export type InitialImageLayer = RenderableLayerBase & {
type: 'initial_image_layer';
opacity: number;
image: ImageWithDims | null;
};
export type Layer = RegionalGuidanceLayer | ControlAdapterLayer | IPAdapterLayer | InitialImageLayer;
export const zLayer = z.discriminatedUnion('type', [
zRegionalGuidanceLayer,
zControlAdapterLayer,
zIPAdapterLayer,
zInitialImageLayer,
]);
export type Layer = z.infer<typeof zLayer>;
export type ControlLayersState = {
_version: 1;
_version: 3;
selectedLayerId: string | null;
layers: Layer[];
brushSize: number;

View File

@ -2,7 +2,6 @@ import openBase64ImageInTab from 'common/util/openBase64ImageInTab';
import { imageDataToDataURL } from 'features/canvas/util/blobToDataURL';
import { RG_LAYER_OBJECT_GROUP_NAME } from 'features/controlLayers/store/controlLayersSlice';
import Konva from 'konva';
import type { Layer as KonvaLayerType } from 'konva/lib/Layer';
import type { IRect } from 'konva/lib/types';
import { assert } from 'tsafe';
@ -54,34 +53,30 @@ const getImageDataBbox = (imageData: ImageData): Extents | null => {
};
/**
* Get the bounding box of a regional prompt konva layer. This function has special handling for regional prompt layers.
* @param layer The konva layer to get the bounding box of.
* @param preview Whether to open a new tab displaying the rendered layer, which is used to calculate the bbox.
* Clones a regional guidance konva layer onto an offscreen stage/canvas. This allows the pixel data for a given layer
* to be captured, manipulated or analyzed without interference from other layers.
* @param layer The konva layer to clone.
* @returns The cloned stage and layer.
*/
export const getLayerBboxPixels = (layer: KonvaLayerType, preview: boolean = false): IRect | null => {
// To calculate the layer's bounding box, we must first export it to a pixel array, then do some math.
//
// Though it is relatively fast, we can't use Konva's `getClientRect`. It programmatically determines the rect
// by calculating the extents of individual shapes from their "vector" shape data.
//
// This doesn't work when some shapes are drawn with composite operations that "erase" pixels, like eraser lines.
// These shapes' extents are still calculated as if they were solid, leading to a bounding box that is too large.
const getIsolatedRGLayerClone = (layer: Konva.Layer): { stageClone: Konva.Stage; layerClone: Konva.Layer } => {
const stage = layer.getStage();
// Construct an offscreen canvas on which we will do the bbox calculations.
// Construct an offscreen canvas with the same dimensions as the layer's stage.
const offscreenStageContainer = document.createElement('div');
const offscreenStage = new Konva.Stage({
const stageClone = new Konva.Stage({
container: offscreenStageContainer,
x: stage.x(),
y: stage.y(),
width: stage.width(),
height: stage.height(),
});
// Clone the layer and filter out unwanted children.
const layerClone = layer.clone();
offscreenStage.add(layerClone);
stageClone.add(layerClone);
for (const child of layerClone.getChildren()) {
if (child.name() === RG_LAYER_OBJECT_GROUP_NAME) {
if (child.name() === RG_LAYER_OBJECT_GROUP_NAME && child.hasChildren()) {
// We need to cache the group to ensure it composites out eraser strokes correctly
child.opacity(1);
child.cache();
@ -91,11 +86,31 @@ export const getLayerBboxPixels = (layer: KonvaLayerType, preview: boolean = fal
}
}
return { stageClone, layerClone };
};
/**
* Get the bounding box of a regional prompt konva layer. This function has special handling for regional prompt layers.
* @param layer The konva layer to get the bounding box of.
* @param preview Whether to open a new tab displaying the rendered layer, which is used to calculate the bbox.
*/
export const getLayerBboxPixels = (layer: Konva.Layer, preview: boolean = false): IRect | null => {
// To calculate the layer's bounding box, we must first export it to a pixel array, then do some math.
//
// Though it is relatively fast, we can't use Konva's `getClientRect`. It programmatically determines the rect
// by calculating the extents of individual shapes from their "vector" shape data.
//
// This doesn't work when some shapes are drawn with composite operations that "erase" pixels, like eraser lines.
// These shapes' extents are still calculated as if they were solid, leading to a bounding box that is too large.
const { stageClone, layerClone } = getIsolatedRGLayerClone(layer);
// Get a worst-case rect using the relatively fast `getClientRect`.
const layerRect = layerClone.getClientRect();
if (layerRect.width === 0 || layerRect.height === 0) {
return null;
}
// Capture the image data with the above rect.
const layerImageData = offscreenStage
const layerImageData = stageClone
.toCanvas(layerRect)
.getContext('2d')
?.getImageData(0, 0, layerRect.width, layerRect.height);
@ -114,8 +129,8 @@ export const getLayerBboxPixels = (layer: KonvaLayerType, preview: boolean = fal
// Correct the bounding box to be relative to the layer's position.
const correctedLayerBbox = {
x: layerBbox.minX - Math.floor(stage.x()) + layerRect.x - Math.floor(layer.x()),
y: layerBbox.minY - Math.floor(stage.y()) + layerRect.y - Math.floor(layer.y()),
x: layerBbox.minX - Math.floor(stageClone.x()) + layerRect.x - Math.floor(layer.x()),
y: layerBbox.minY - Math.floor(stageClone.y()) + layerRect.y - Math.floor(layer.y()),
width: layerBbox.maxX - layerBbox.minX,
height: layerBbox.maxY - layerBbox.minY,
};
@ -123,7 +138,13 @@ export const getLayerBboxPixels = (layer: KonvaLayerType, preview: boolean = fal
return correctedLayerBbox;
};
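For reference, the alpha scan that a helper like `getImageDataBbox` performs can be sketched as a simplified re-implementation (not the app's exact code): walk the RGBA buffer and track the extents of pixels with non-zero alpha, so pixels cleared by `destination-out` eraser strokes are correctly excluded.

```ts
const scanAlphaExtents = (imageData: ImageData): { minX: number; minY: number; maxX: number; maxY: number } | null => {
  const { data, width, height } = imageData;
  let minX = width;
  let minY = height;
  let maxX = -1;
  let maxY = -1;
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      // Alpha channel is every 4th byte of the RGBA buffer
      if ((data[(y * width + x) * 4 + 3] ?? 0) > 0) {
        minX = Math.min(minX, x);
        minY = Math.min(minY, y);
        maxX = Math.max(maxX, x);
        maxY = Math.max(maxY, y);
      }
    }
  }
  return maxX === -1 ? null : { minX, minY, maxX, maxY };
};
```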
export const getLayerBboxFast = (layer: KonvaLayerType): IRect | null => {
/**
* Get the bounding box of a konva layer. This function is faster than `getLayerBboxPixels` but less accurate. It
* should only be used when there are no eraser strokes or shapes in the layer.
* @param layer The konva layer to get the bounding box of.
* @returns The bounding box of the layer.
*/
export const getLayerBboxFast = (layer: Konva.Layer): IRect => {
const bbox = layer.getClientRect(GET_CLIENT_RECT_CONFIG);
return {
x: Math.floor(bbox.x),

View File

@ -4,20 +4,74 @@ import { assert } from 'tsafe';
import { describe, test } from 'vitest';
import type {
_CannyProcessorConfig,
_ColorMapProcessorConfig,
_ContentShuffleProcessorConfig,
_DepthAnythingProcessorConfig,
_DWOpenposeProcessorConfig,
_HedProcessorConfig,
_LineartAnimeProcessorConfig,
_LineartProcessorConfig,
_MediapipeFaceProcessorConfig,
_MidasDepthProcessorConfig,
_MlsdProcessorConfig,
_NormalbaeProcessorConfig,
_PidiProcessorConfig,
_ZoeDepthProcessorConfig,
CannyProcessorConfig,
CLIPVisionModelV2,
ColorMapProcessorConfig,
ContentShuffleProcessorConfig,
ControlModeV2,
DepthAnythingModelSize,
DepthAnythingProcessorConfig,
DWOpenposeProcessorConfig,
HedProcessorConfig,
IPMethodV2,
LineartAnimeProcessorConfig,
LineartProcessorConfig,
MediapipeFaceProcessorConfig,
MidasDepthProcessorConfig,
MlsdProcessorConfig,
NormalbaeProcessorConfig,
PidiProcessorConfig,
ProcessorConfig,
ProcessorTypeV2,
ZoeDepthProcessorConfig,
} from './controlAdapters';
describe('Control Adapter Types', () => {
test('ProcessorType', () => assert<Equals<ProcessorConfig['type'], ProcessorTypeV2>>());
test('IP Adapter Method', () => assert<Equals<NonNullable<S['IPAdapterInvocation']['method']>, IPMethodV2>>());
test('CLIP Vision Model', () =>
assert<Equals<NonNullable<S['IPAdapterInvocation']['clip_vision_model']>, CLIPVisionModelV2>>());
test('Control Mode', () => assert<Equals<NonNullable<S['ControlNetInvocation']['control_mode']>, ControlModeV2>>());
test('DepthAnything Model Size', () =>
assert<Equals<NonNullable<S['DepthAnythingImageProcessorInvocation']['model_size']>, DepthAnythingModelSize>>());
test('ProcessorType', () => {
assert<Equals<ProcessorConfig['type'], ProcessorTypeV2>>();
});
test('IP Adapter Method', () => {
assert<Equals<NonNullable<S['IPAdapterInvocation']['method']>, IPMethodV2>>();
});
test('CLIP Vision Model', () => {
assert<Equals<NonNullable<S['IPAdapterInvocation']['clip_vision_model']>, CLIPVisionModelV2>>();
});
test('Control Mode', () => {
assert<Equals<NonNullable<S['ControlNetInvocation']['control_mode']>, ControlModeV2>>();
});
test('DepthAnything Model Size', () => {
assert<Equals<NonNullable<S['DepthAnythingImageProcessorInvocation']['model_size']>, DepthAnythingModelSize>>();
});
test('Processor Configs', () => {
// The processor configs are manually modeled zod schemas. This test ensures that the inferred types are correct.
// The types prefixed with `_` are types generated from OpenAPI, while the types without the prefix are manually modeled.
assert<Equals<_CannyProcessorConfig, CannyProcessorConfig>>();
assert<Equals<_ColorMapProcessorConfig, ColorMapProcessorConfig>>();
assert<Equals<_ContentShuffleProcessorConfig, ContentShuffleProcessorConfig>>();
assert<Equals<_DepthAnythingProcessorConfig, DepthAnythingProcessorConfig>>();
assert<Equals<_HedProcessorConfig, HedProcessorConfig>>();
assert<Equals<_LineartAnimeProcessorConfig, LineartAnimeProcessorConfig>>();
assert<Equals<_LineartProcessorConfig, LineartProcessorConfig>>();
assert<Equals<_MediapipeFaceProcessorConfig, MediapipeFaceProcessorConfig>>();
assert<Equals<_MidasDepthProcessorConfig, MidasDepthProcessorConfig>>();
assert<Equals<_MlsdProcessorConfig, MlsdProcessorConfig>>();
assert<Equals<_NormalbaeProcessorConfig, NormalbaeProcessorConfig>>();
assert<Equals<_DWOpenposeProcessorConfig, DWOpenposeProcessorConfig>>();
assert<Equals<_PidiProcessorConfig, PidiProcessorConfig>>();
assert<Equals<_ZoeDepthProcessorConfig, ZoeDepthProcessorConfig>>();
});
});
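The pattern these tests rely on: `assert<Equals<A, B>>()` from `tsafe` only compiles when the two types are exactly identical, so drift between a hand-written zod schema and its OpenAPI-generated counterpart becomes a build error rather than a runtime bug. A minimal standalone sketch with stand-in types:

```ts
import { assert } from 'tsafe';
import type { Equals } from 'tsafe';
import { z } from 'zod';

type Generated = { id: string; coarse: boolean }; // stand-in for an OpenAPI-generated type
const zManual = z.object({ id: z.string(), coarse: z.boolean() });

assert<Equals<z.infer<typeof zManual>, Generated>>(); // compiles only if the types match
```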

View File

@ -1,9 +1,5 @@
import { deepClone } from 'common/util/deepClone';
import type {
ParameterControlNetModel,
ParameterIPAdapterModel,
ParameterT2IAdapterModel,
} from 'features/parameters/types/parameterSchemas';
import { zModelIdentifierField } from 'features/nodes/types/common';
import { merge, omit } from 'lodash-es';
import type {
BaseModelType,
@ -28,90 +24,201 @@ import type {
} from 'services/api/types';
import { z } from 'zod';
const zId = z.string().min(1);
const zCannyProcessorConfig = z.object({
id: zId,
type: z.literal('canny_image_processor'),
low_threshold: z.number().int().gte(0).lte(255),
high_threshold: z.number().int().gte(0).lte(255),
});
export type _CannyProcessorConfig = Required<
Pick<CannyImageProcessorInvocation, 'id' | 'type' | 'low_threshold' | 'high_threshold'>
>;
export type CannyProcessorConfig = z.infer<typeof zCannyProcessorConfig>;
const zColorMapProcessorConfig = z.object({
id: zId,
type: z.literal('color_map_image_processor'),
color_map_tile_size: z.number().int().gte(1),
});
export type _ColorMapProcessorConfig = Required<
Pick<ColorMapImageProcessorInvocation, 'id' | 'type' | 'color_map_tile_size'>
>;
export type ColorMapProcessorConfig = z.infer<typeof zColorMapProcessorConfig>;
const zContentShuffleProcessorConfig = z.object({
id: zId,
type: z.literal('content_shuffle_image_processor'),
w: z.number().int().gte(0),
h: z.number().int().gte(0),
f: z.number().int().gte(0),
});
export type _ContentShuffleProcessorConfig = Required<
Pick<ContentShuffleImageProcessorInvocation, 'id' | 'type' | 'w' | 'h' | 'f'>
>;
export type ContentShuffleProcessorConfig = z.infer<typeof zContentShuffleProcessorConfig>;
const zDepthAnythingModelSize = z.enum(['large', 'base', 'small']);
export type DepthAnythingModelSize = z.infer<typeof zDepthAnythingModelSize>;
export const isDepthAnythingModelSize = (v: unknown): v is DepthAnythingModelSize =>
zDepthAnythingModelSize.safeParse(v).success;
export type CannyProcessorConfig = Required<
Pick<CannyImageProcessorInvocation, 'id' | 'type' | 'low_threshold' | 'high_threshold'>
>;
export type ColorMapProcessorConfig = Required<
Pick<ColorMapImageProcessorInvocation, 'id' | 'type' | 'color_map_tile_size'>
>;
export type ContentShuffleProcessorConfig = Required<
Pick<ContentShuffleImageProcessorInvocation, 'id' | 'type' | 'w' | 'h' | 'f'>
>;
export type DepthAnythingProcessorConfig = Required<
const zDepthAnythingProcessorConfig = z.object({
id: zId,
type: z.literal('depth_anything_image_processor'),
model_size: zDepthAnythingModelSize,
});
export type _DepthAnythingProcessorConfig = Required<
Pick<DepthAnythingImageProcessorInvocation, 'id' | 'type' | 'model_size'>
>;
export type HedProcessorConfig = Required<Pick<HedImageProcessorInvocation, 'id' | 'type' | 'scribble'>>;
type LineartAnimeProcessorConfig = Required<Pick<LineartAnimeImageProcessorInvocation, 'id' | 'type'>>;
export type LineartProcessorConfig = Required<Pick<LineartImageProcessorInvocation, 'id' | 'type' | 'coarse'>>;
export type MediapipeFaceProcessorConfig = Required<
export type DepthAnythingProcessorConfig = z.infer<typeof zDepthAnythingProcessorConfig>;
const zHedProcessorConfig = z.object({
id: zId,
type: z.literal('hed_image_processor'),
scribble: z.boolean(),
});
export type _HedProcessorConfig = Required<Pick<HedImageProcessorInvocation, 'id' | 'type' | 'scribble'>>;
export type HedProcessorConfig = z.infer<typeof zHedProcessorConfig>;
const zLineartAnimeProcessorConfig = z.object({
id: zId,
type: z.literal('lineart_anime_image_processor'),
});
export type _LineartAnimeProcessorConfig = Required<Pick<LineartAnimeImageProcessorInvocation, 'id' | 'type'>>;
export type LineartAnimeProcessorConfig = z.infer<typeof zLineartAnimeProcessorConfig>;
const zLineartProcessorConfig = z.object({
id: zId,
type: z.literal('lineart_image_processor'),
coarse: z.boolean(),
});
export type _LineartProcessorConfig = Required<Pick<LineartImageProcessorInvocation, 'id' | 'type' | 'coarse'>>;
export type LineartProcessorConfig = z.infer<typeof zLineartProcessorConfig>;
const zMediapipeFaceProcessorConfig = z.object({
id: zId,
type: z.literal('mediapipe_face_processor'),
max_faces: z.number().int().gte(1),
min_confidence: z.number().gte(0).lte(1),
});
export type _MediapipeFaceProcessorConfig = Required<
Pick<MediapipeFaceProcessorInvocation, 'id' | 'type' | 'max_faces' | 'min_confidence'>
>;
export type MidasDepthProcessorConfig = Required<
export type MediapipeFaceProcessorConfig = z.infer<typeof zMediapipeFaceProcessorConfig>;
const zMidasDepthProcessorConfig = z.object({
id: zId,
type: z.literal('midas_depth_image_processor'),
a_mult: z.number().gte(0),
bg_th: z.number().gte(0),
});
export type _MidasDepthProcessorConfig = Required<
Pick<MidasDepthImageProcessorInvocation, 'id' | 'type' | 'a_mult' | 'bg_th'>
>;
export type MlsdProcessorConfig = Required<Pick<MlsdImageProcessorInvocation, 'id' | 'type' | 'thr_v' | 'thr_d'>>;
type NormalbaeProcessorConfig = Required<Pick<NormalbaeImageProcessorInvocation, 'id' | 'type'>>;
export type DWOpenposeProcessorConfig = Required<
export type MidasDepthProcessorConfig = z.infer<typeof zMidasDepthProcessorConfig>;
const zMlsdProcessorConfig = z.object({
id: zId,
type: z.literal('mlsd_image_processor'),
thr_v: z.number().gte(0),
thr_d: z.number().gte(0),
});
export type _MlsdProcessorConfig = Required<Pick<MlsdImageProcessorInvocation, 'id' | 'type' | 'thr_v' | 'thr_d'>>;
export type MlsdProcessorConfig = z.infer<typeof zMlsdProcessorConfig>;
const zNormalbaeProcessorConfig = z.object({
id: zId,
type: z.literal('normalbae_image_processor'),
});
export type _NormalbaeProcessorConfig = Required<Pick<NormalbaeImageProcessorInvocation, 'id' | 'type'>>;
export type NormalbaeProcessorConfig = z.infer<typeof zNormalbaeProcessorConfig>;
const zDWOpenposeProcessorConfig = z.object({
id: zId,
type: z.literal('dw_openpose_image_processor'),
draw_body: z.boolean(),
draw_face: z.boolean(),
draw_hands: z.boolean(),
});
export type _DWOpenposeProcessorConfig = Required<
Pick<DWOpenposeImageProcessorInvocation, 'id' | 'type' | 'draw_body' | 'draw_face' | 'draw_hands'>
>;
export type PidiProcessorConfig = Required<Pick<PidiImageProcessorInvocation, 'id' | 'type' | 'safe' | 'scribble'>>;
type ZoeDepthProcessorConfig = Required<Pick<ZoeDepthImageProcessorInvocation, 'id' | 'type'>>;
export type DWOpenposeProcessorConfig = z.infer<typeof zDWOpenposeProcessorConfig>;
export type ProcessorConfig =
| CannyProcessorConfig
| ColorMapProcessorConfig
| ContentShuffleProcessorConfig
| DepthAnythingProcessorConfig
| HedProcessorConfig
| LineartAnimeProcessorConfig
| LineartProcessorConfig
| MediapipeFaceProcessorConfig
| MidasDepthProcessorConfig
| MlsdProcessorConfig
| NormalbaeProcessorConfig
| DWOpenposeProcessorConfig
| PidiProcessorConfig
| ZoeDepthProcessorConfig;
const zPidiProcessorConfig = z.object({
id: zId,
type: z.literal('pidi_image_processor'),
safe: z.boolean(),
scribble: z.boolean(),
});
export type _PidiProcessorConfig = Required<Pick<PidiImageProcessorInvocation, 'id' | 'type' | 'safe' | 'scribble'>>;
export type PidiProcessorConfig = z.infer<typeof zPidiProcessorConfig>;
export type ImageWithDims = {
imageName: string;
width: number;
height: number;
};
const zZoeDepthProcessorConfig = z.object({
id: zId,
type: z.literal('zoe_depth_image_processor'),
});
export type _ZoeDepthProcessorConfig = Required<Pick<ZoeDepthImageProcessorInvocation, 'id' | 'type'>>;
export type ZoeDepthProcessorConfig = z.infer<typeof zZoeDepthProcessorConfig>;
type ControlAdapterBase = {
id: string;
weight: number;
image: ImageWithDims | null;
processedImage: ImageWithDims | null;
isProcessingImage: boolean;
processorConfig: ProcessorConfig | null;
beginEndStepPct: [number, number];
};
const zProcessorConfig = z.discriminatedUnion('type', [
zCannyProcessorConfig,
zColorMapProcessorConfig,
zContentShuffleProcessorConfig,
zDepthAnythingProcessorConfig,
zHedProcessorConfig,
zLineartAnimeProcessorConfig,
zLineartProcessorConfig,
zMediapipeFaceProcessorConfig,
zMidasDepthProcessorConfig,
zMlsdProcessorConfig,
zNormalbaeProcessorConfig,
zDWOpenposeProcessorConfig,
zPidiProcessorConfig,
zZoeDepthProcessorConfig,
]);
export type ProcessorConfig = z.infer<typeof zProcessorConfig>;
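Because `type` is the discriminator, zod picks the correct member schema at parse time and TypeScript narrows the result accordingly. A small sketch, assuming access to `zProcessorConfig` from this module:

```ts
const parsed = zProcessorConfig.safeParse({
  id: 'proc-1',
  type: 'canny_image_processor',
  low_threshold: 100,
  high_threshold: 200,
});
if (parsed.success && parsed.data.type === 'canny_image_processor') {
  // parsed.data is narrowed to CannyProcessorConfig here
  console.log(parsed.data.low_threshold); // 100
}
```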
export const zImageWithDims = z.object({
name: z.string(),
width: z.number().int().positive(),
height: z.number().int().positive(),
});
export type ImageWithDims = z.infer<typeof zImageWithDims>;
const zBeginEndStepPct = z
.tuple([z.number().gte(0).lte(1), z.number().gte(0).lte(1)])
.refine(([begin, end]) => begin < end, {
message: 'Begin must be less than end',
});
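The `refine` makes the ordering constraint part of validation, so e.g. recalled metadata with an inverted range fails to parse. For illustration (assuming the schema were in scope):

```ts
zBeginEndStepPct.safeParse([0.2, 0.8]); // success: true
zBeginEndStepPct.safeParse([0.8, 0.2]); // success: false - "Begin must be less than end"
zBeginEndStepPct.safeParse([0, 1.5]);   // success: false - 1.5 is outside [0, 1]
```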
const zControlAdapterBase = z.object({
id: zId,
weight: z.number().gte(0).lte(1),
image: zImageWithDims.nullable(),
processedImage: zImageWithDims.nullable(),
processorConfig: zProcessorConfig.nullable(),
processorPendingBatchId: z.string().nullable().default(null),
beginEndStepPct: zBeginEndStepPct,
});
const zControlModeV2 = z.enum(['balanced', 'more_prompt', 'more_control', 'unbalanced']);
export type ControlModeV2 = z.infer<typeof zControlModeV2>;
export const isControlModeV2 = (v: unknown): v is ControlModeV2 => zControlModeV2.safeParse(v).success;
export type ControlNetConfigV2 = ControlAdapterBase & {
type: 'controlnet';
model: ParameterControlNetModel | null;
controlMode: ControlModeV2;
};
export const isControlNetConfigV2 = (ca: ControlNetConfigV2 | T2IAdapterConfigV2): ca is ControlNetConfigV2 =>
ca.type === 'controlnet';
export const zControlNetConfigV2 = zControlAdapterBase.extend({
type: z.literal('controlnet'),
model: zModelIdentifierField.nullable(),
controlMode: zControlModeV2,
});
export type ControlNetConfigV2 = z.infer<typeof zControlNetConfigV2>;
export type T2IAdapterConfigV2 = ControlAdapterBase & {
type: 't2i_adapter';
model: ParameterT2IAdapterModel | null;
};
export const isT2IAdapterConfigV2 = (ca: ControlNetConfigV2 | T2IAdapterConfigV2): ca is T2IAdapterConfigV2 =>
ca.type === 't2i_adapter';
export const zT2IAdapterConfigV2 = zControlAdapterBase.extend({
type: z.literal('t2i_adapter'),
model: zModelIdentifierField.nullable(),
});
export type T2IAdapterConfigV2 = z.infer<typeof zT2IAdapterConfigV2>;
const zCLIPVisionModelV2 = z.enum(['ViT-H', 'ViT-G']);
export type CLIPVisionModelV2 = z.infer<typeof zCLIPVisionModelV2>;
@ -121,16 +228,17 @@ const zIPMethodV2 = z.enum(['full', 'style', 'composition']);
export type IPMethodV2 = z.infer<typeof zIPMethodV2>;
export const isIPMethodV2 = (v: unknown): v is IPMethodV2 => zIPMethodV2.safeParse(v).success;
export type IPAdapterConfigV2 = {
id: string;
type: 'ip_adapter';
weight: number;
method: IPMethodV2;
image: ImageWithDims | null;
model: ParameterIPAdapterModel | null;
clipVisionModel: CLIPVisionModelV2;
beginEndStepPct: [number, number];
};
export const zIPAdapterConfigV2 = z.object({
id: zId,
type: z.literal('ip_adapter'),
weight: z.number().gte(0).lte(1),
method: zIPMethodV2,
image: zImageWithDims.nullable(),
model: zModelIdentifierField.nullable(),
clipVisionModel: zCLIPVisionModelV2,
beginEndStepPct: zBeginEndStepPct,
});
export type IPAdapterConfigV2 = z.infer<typeof zIPAdapterConfigV2>;
const zProcessorTypeV2 = z.enum([
'canny_image_processor',
@ -190,7 +298,7 @@ export const CA_PROCESSOR_DATA: CAProcessorsData = {
buildNode: (image, config) => ({
...config,
type: 'canny_image_processor',
image: { image_name: image.imageName },
image: { image_name: image.name },
detect_resolution: minDim(image),
image_resolution: minDim(image),
}),
@ -207,7 +315,7 @@ export const CA_PROCESSOR_DATA: CAProcessorsData = {
buildNode: (image, config) => ({
...config,
type: 'color_map_image_processor',
image: { image_name: image.imageName },
image: { image_name: image.name },
}),
},
content_shuffle_image_processor: {
@ -223,7 +331,7 @@ export const CA_PROCESSOR_DATA: CAProcessorsData = {
}),
buildNode: (image, config) => ({
...config,
image: { image_name: image.imageName },
image: { image_name: image.name },
detect_resolution: minDim(image),
image_resolution: minDim(image),
}),
@ -239,7 +347,7 @@ export const CA_PROCESSOR_DATA: CAProcessorsData = {
}),
buildNode: (image, config) => ({
...config,
image: { image_name: image.imageName },
image: { image_name: image.name },
resolution: minDim(image),
}),
},
@ -254,7 +362,7 @@ export const CA_PROCESSOR_DATA: CAProcessorsData = {
}),
buildNode: (image, config) => ({
...config,
image: { image_name: image.imageName },
image: { image_name: image.name },
detect_resolution: minDim(image),
image_resolution: minDim(image),
}),
@ -269,7 +377,7 @@ export const CA_PROCESSOR_DATA: CAProcessorsData = {
}),
buildNode: (image, config) => ({
...config,
image: { image_name: image.imageName },
image: { image_name: image.name },
detect_resolution: minDim(image),
image_resolution: minDim(image),
}),
@ -285,7 +393,7 @@ export const CA_PROCESSOR_DATA: CAProcessorsData = {
}),
buildNode: (image, config) => ({
...config,
image: { image_name: image.imageName },
image: { image_name: image.name },
detect_resolution: minDim(image),
image_resolution: minDim(image),
}),
@ -302,7 +410,7 @@ export const CA_PROCESSOR_DATA: CAProcessorsData = {
}),
buildNode: (image, config) => ({
...config,
image: { image_name: image.imageName },
image: { image_name: image.name },
detect_resolution: minDim(image),
image_resolution: minDim(image),
}),
@ -319,7 +427,7 @@ export const CA_PROCESSOR_DATA: CAProcessorsData = {
}),
buildNode: (image, config) => ({
...config,
image: { image_name: image.imageName },
image: { image_name: image.name },
detect_resolution: minDim(image),
image_resolution: minDim(image),
}),
@ -336,7 +444,7 @@ export const CA_PROCESSOR_DATA: CAProcessorsData = {
}),
buildNode: (image, config) => ({
...config,
image: { image_name: image.imageName },
image: { image_name: image.name },
detect_resolution: minDim(image),
image_resolution: minDim(image),
}),
@ -351,7 +459,7 @@ export const CA_PROCESSOR_DATA: CAProcessorsData = {
}),
buildNode: (image, config) => ({
...config,
image: { image_name: image.imageName },
image: { image_name: image.name },
detect_resolution: minDim(image),
image_resolution: minDim(image),
}),
@ -369,7 +477,7 @@ export const CA_PROCESSOR_DATA: CAProcessorsData = {
}),
buildNode: (image, config) => ({
...config,
image: { image_name: image.imageName },
image: { image_name: image.name },
image_resolution: minDim(image),
}),
},
@ -385,7 +493,7 @@ export const CA_PROCESSOR_DATA: CAProcessorsData = {
}),
buildNode: (image, config) => ({
...config,
image: { image_name: image.imageName },
image: { image_name: image.name },
detect_resolution: minDim(image),
image_resolution: minDim(image),
}),
@ -400,7 +508,7 @@ export const CA_PROCESSOR_DATA: CAProcessorsData = {
}),
buildNode: (image, config) => ({
...config,
image: { image_name: image.imageName },
image: { image_name: image.name },
}),
},
};
@ -413,8 +521,8 @@ export const initialControlNetV2: Omit<ControlNetConfigV2, 'id'> = {
controlMode: 'balanced',
image: null,
processedImage: null,
isProcessingImage: false,
processorConfig: CA_PROCESSOR_DATA.canny_image_processor.buildDefaults(),
processorPendingBatchId: null,
};
export const initialT2IAdapterV2: Omit<T2IAdapterConfigV2, 'id'> = {
@ -424,8 +532,8 @@ export const initialT2IAdapterV2: Omit<T2IAdapterConfigV2, 'id'> = {
beginEndStepPct: [0, 1],
image: null,
processedImage: null,
isProcessingImage: false,
processorConfig: CA_PROCESSOR_DATA.canny_image_processor.buildDefaults(),
processorPendingBatchId: null,
};
export const initialIPAdapterV2: Omit<IPAdapterConfigV2, 'id'> = {
@ -462,7 +570,7 @@ export const buildControlAdapterProcessorV2 = (
};
export const imageDTOToImageWithDims = ({ image_name, width, height }: ImageDTO): ImageWithDims => ({
imageName: image_name,
name: image_name,
width,
height,
});

View File

@ -1,12 +1,13 @@
import { getStore } from 'app/store/nanostores/store';
import { rgbaColorToString, rgbColorToString } from 'features/canvas/util/colorToString';
import { getScaledFlooredCursorPosition } from 'features/controlLayers/hooks/mouseEventHooks';
import { getScaledFlooredCursorPosition, snapPosToStage } from 'features/controlLayers/hooks/mouseEventHooks';
import {
$tool,
BACKGROUND_LAYER_ID,
BACKGROUND_RECT_ID,
CA_LAYER_IMAGE_NAME,
CA_LAYER_NAME,
COMPOSITING_RECT_NAME,
getCALayerImageId,
getIILayerImageId,
getLayerBboxId,
@ -211,12 +212,13 @@ const renderToolPreview = (
}
if (cursorPos && lastMouseDownPos && tool === 'rect') {
const snappedPos = snapPosToStage(cursorPos, stage);
const rectPreview = toolPreviewLayer.findOne<Konva.Rect>(`#${TOOL_PREVIEW_RECT_ID}`);
rectPreview?.setAttrs({
x: Math.min(cursorPos.x, lastMouseDownPos.x),
y: Math.min(cursorPos.y, lastMouseDownPos.y),
width: Math.abs(cursorPos.x - lastMouseDownPos.x),
height: Math.abs(cursorPos.y - lastMouseDownPos.y),
x: Math.min(snappedPos.x, lastMouseDownPos.x),
y: Math.min(snappedPos.y, lastMouseDownPos.y),
width: Math.abs(snappedPos.x - lastMouseDownPos.x),
height: Math.abs(snappedPos.y - lastMouseDownPos.y),
});
rectPreview?.visible(true);
} else {
@ -323,6 +325,12 @@ const createVectorMaskRect = (reduxObject: VectorMaskRect, konvaGroup: Konva.Gro
return vectorMaskRect;
};
const createCompositingRect = (konvaLayer: Konva.Layer): Konva.Rect => {
const compositingRect = new Konva.Rect({ name: COMPOSITING_RECT_NAME, listening: false });
konvaLayer.add(compositingRect);
return compositingRect;
};
/**
* Renders a vector mask layer.
* @param stage The konva stage to render on.
@ -400,15 +408,53 @@ const renderRegionalGuidanceLayer = (
groupNeedsCache = true;
}
if (konvaObjectGroup.children.length === 0) {
if (konvaObjectGroup.getChildren().length === 0) {
// No objects - clear the cache to reset the previous pixel data
konvaObjectGroup.clearCache();
} else if (groupNeedsCache) {
konvaObjectGroup.cache();
return;
}
// Updating group opacity does not require re-caching
if (konvaObjectGroup.opacity() !== globalMaskLayerOpacity) {
const compositingRect =
konvaLayer.findOne<Konva.Rect>(`.${COMPOSITING_RECT_NAME}`) ?? createCompositingRect(konvaLayer);
/**
* When the group is selected, we use a rect of the selected preview color, composited over the shapes. This allows
* shapes to render as a "raster" layer with all pixels drawn at the same color and opacity.
*
* Without this special handling, each shape is drawn individually with the given opacity, atop the other shapes. The
* effect is as if you had a Photoshop Group consisting of many shapes, each of which has the given opacity.
* Overlapping shapes will have their colors blended together, and the final color is the result of all the shapes.
*
* Instead, with the special handling, the effect is as if you drew all the shapes at 100% opacity, flattened them to
* a single raster image, and _then_ applied the 50% opacity.
*/
if (reduxLayer.isSelected && tool !== 'move') {
// We must clear the cache first so Konva will re-draw the group with the new compositing rect
if (konvaObjectGroup.isCached()) {
konvaObjectGroup.clearCache();
}
// The user is allowed to reduce mask opacity to 0, but we need the opacity for the compositing rect to work
konvaObjectGroup.opacity(1);
compositingRect.setAttrs({
// The rect should be the size of the layer - use the fast method if we don't have a pixel-perfect bbox already
...(!reduxLayer.bboxNeedsUpdate && reduxLayer.bbox ? reduxLayer.bbox : getLayerBboxFast(konvaLayer)),
fill: rgbColor,
opacity: globalMaskLayerOpacity,
// Draw this rect only where there are non-transparent pixels under it (e.g. the mask shapes)
globalCompositeOperation: 'source-in',
visible: true,
// This rect must always be on top of all other shapes
zIndex: konvaObjectGroup.getChildren().length,
});
} else {
// The compositing rect should only be shown when the layer is selected.
compositingRect.visible(false);
// Cache only if needed - or if we are on this code path and _don't_ have a cache
if (groupNeedsCache || !konvaObjectGroup.isCached()) {
konvaObjectGroup.cache();
}
// Updating group opacity does not require re-caching
konvaObjectGroup.opacity(globalMaskLayerOpacity);
}
};
@ -418,6 +464,7 @@ const createInitialImageLayer = (stage: Konva.Stage, reduxLayer: InitialImageLay
id: reduxLayer.id,
name: INITIAL_IMAGE_LAYER_NAME,
imageSmoothingEnabled: true,
listening: false,
});
stage.add(konvaLayer);
return konvaLayer;
@ -437,6 +484,9 @@ const updateInitialImageLayerImageAttrs = (
konvaImage: Konva.Image,
reduxLayer: InitialImageLayer
) => {
// Konva erroneously reports NaN for width and height when the stage is hidden. This causes errors when caching,
// but it doesn't seem to break anything.
// TODO(psyche): Investigate and report upstream.
const newWidth = stage.width() / stage.scaleX();
const newHeight = stage.height() / stage.scaleY();
if (
@ -464,7 +514,7 @@ const updateInitialImageLayerImageSource = async (
reduxLayer: InitialImageLayer
) => {
if (reduxLayer.image) {
const { imageName } = reduxLayer.image;
const imageName = reduxLayer.image.name;
const req = getStore().dispatch(imagesApi.endpoints.getImageDTO.initiate(imageName));
const imageDTO = await req.unwrap();
req.unsubscribe();
@ -497,7 +547,7 @@ const renderInitialImageLayer = (stage: Konva.Stage, reduxLayer: InitialImageLay
let imageSourceNeedsUpdate = false;
if (canvasImageSource instanceof HTMLImageElement) {
const image = reduxLayer.image;
if (image && canvasImageSource.id !== getCALayerImageId(reduxLayer.id, image.imageName)) {
if (image && canvasImageSource.id !== getCALayerImageId(reduxLayer.id, image.name)) {
imageSourceNeedsUpdate = true;
} else if (!image) {
imageSourceNeedsUpdate = true;
@ -518,6 +568,7 @@ const createControlNetLayer = (stage: Konva.Stage, reduxLayer: ControlAdapterLay
id: reduxLayer.id,
name: CA_LAYER_NAME,
imageSmoothingEnabled: true,
listening: false,
});
stage.add(konvaLayer);
return konvaLayer;
@ -539,7 +590,7 @@ const updateControlNetLayerImageSource = async (
) => {
const image = reduxLayer.controlAdapter.processedImage ?? reduxLayer.controlAdapter.image;
if (image) {
const { imageName } = image;
const imageName = image.name;
const req = getStore().dispatch(imagesApi.endpoints.getImageDTO.initiate(imageName));
const imageDTO = await req.unwrap();
req.unsubscribe();
@ -572,6 +623,9 @@ const updateControlNetLayerImageAttrs = (
reduxLayer: ControlAdapterLayer
) => {
let needsCache = false;
// Konva erroneously reports NaN for width and height when the stage is hidden. This causes errors when caching,
// but it doesn't seem to break anything.
// TODO(psyche): Investigate and report upstream.
const newWidth = stage.width() / stage.scaleX();
const newHeight = stage.height() / stage.scaleY();
const hasFilter = konvaImage.filters() !== null && konvaImage.filters().length > 0;
@ -607,7 +661,7 @@ const renderControlNetLayer = (stage: Konva.Stage, reduxLayer: ControlAdapterLay
let imageSourceNeedsUpdate = false;
if (canvasImageSource instanceof HTMLImageElement) {
const image = reduxLayer.controlAdapter.processedImage ?? reduxLayer.controlAdapter.image;
if (image && canvasImageSource.id !== getCALayerImageId(reduxLayer.id, image.imageName)) {
if (image && canvasImageSource.id !== getCALayerImageId(reduxLayer.id, image.name)) {
imageSourceNeedsUpdate = true;
} else if (!image) {
imageSourceNeedsUpdate = true;
@ -656,6 +710,7 @@ const renderLayers = (
if (isInitialImageLayer(reduxLayer)) {
renderInitialImageLayer(stage, reduxLayer);
}
// IP Adapter layers are not rendered
}
};
@ -670,6 +725,7 @@ const createBboxRect = (reduxLayer: Layer, konvaLayer: Konva.Layer) => {
id: getLayerBboxId(reduxLayer.id),
name: LAYER_BBOX_NAME,
strokeWidth: 1,
visible: false,
});
konvaLayer.add(rect);
return rect;
@ -679,18 +735,10 @@ const createBboxRect = (reduxLayer: Layer, konvaLayer: Konva.Layer) => {
* Renders the bounding boxes for the layers.
* @param stage The konva stage to render on
* @param reduxLayers An array of all redux layers to draw bboxes for
* @param selectedLayerId The selected layer's id
* @param tool The current tool
* @param onBboxChanged Callback for when the bbox is changed
* @param onBboxMouseDown Callback for when the bbox is clicked
* @returns
*/
const renderBbox = (
stage: Konva.Stage,
reduxLayers: Layer[],
tool: Tool,
onBboxChanged: (layerId: string, bbox: IRect | null) => void
) => {
const renderBboxes = (stage: Konva.Stage, reduxLayers: Layer[], tool: Tool) => {
// Hide all bboxes so they don't interfere with getClientRect
for (const bboxRect of stage.find<Konva.Rect>(`.${LAYER_BBOX_NAME}`)) {
bboxRect.visible(false);
@ -701,36 +749,59 @@ const renderBbox = (
return;
}
for (const reduxLayer of reduxLayers) {
if (reduxLayer.type === 'regional_guidance_layer') {
const konvaLayer = stage.findOne<Konva.Layer>(`#${reduxLayer.id}`);
assert(konvaLayer, `Layer ${reduxLayer.id} not found in stage`);
for (const reduxLayer of reduxLayers.filter(isRegionalGuidanceLayer)) {
if (!reduxLayer.bbox) {
continue;
}
const konvaLayer = stage.findOne<Konva.Layer>(`#${reduxLayer.id}`);
assert(konvaLayer, `Layer ${reduxLayer.id} not found in stage`);
let bbox = reduxLayer.bbox;
const bboxRect = konvaLayer.findOne<Konva.Rect>(`.${LAYER_BBOX_NAME}`) ?? createBboxRect(reduxLayer, konvaLayer);
// We only need to recalculate the bbox if the layer has changed and it has objects
if (reduxLayer.bboxNeedsUpdate && reduxLayer.maskObjects.length) {
// We only need to use the pixel-perfect bounding box if the layer has eraser strokes
bbox = reduxLayer.needsPixelBbox ? getLayerBboxPixels(konvaLayer) : getLayerBboxFast(konvaLayer);
// Update the layer's bbox in the redux store
onBboxChanged(reduxLayer.id, bbox);
bboxRect.setAttrs({
visible: !reduxLayer.bboxNeedsUpdate,
listening: reduxLayer.isSelected,
x: reduxLayer.bbox.x,
y: reduxLayer.bbox.y,
width: reduxLayer.bbox.width,
height: reduxLayer.bbox.height,
stroke: reduxLayer.isSelected ? BBOX_SELECTED_STROKE : '',
});
}
};
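// A minimal sketch of why bbox rects are hidden before measuring (helper name
// assumed): Konva's getClientRect() includes every visible child, so a
// previously drawn bbox rect would otherwise inflate its own measurement.
const measureLayer = (konvaLayer: Konva.Layer, bboxRect: Konva.Rect): IRect => {
  const wasVisible = bboxRect.visible();
  bboxRect.visible(false); // exclude the rect itself from the measurement
  const clientRect = konvaLayer.getClientRect({ skipTransform: true });
  bboxRect.visible(wasVisible); // restore prior visibility
  return clientRect;
};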
/**
* Calculates the bbox of each regional guidance layer. Only calculates if the mask has changed.
* @param stage The konva stage to render on.
* @param reduxLayers An array of redux layers to calculate bboxes for
* @param onBboxChanged Callback for when the bounding box changes
*/
const updateBboxes = (
stage: Konva.Stage,
reduxLayers: Layer[],
onBboxChanged: (layerId: string, bbox: IRect | null) => void
) => {
for (const rgLayer of reduxLayers.filter(isRegionalGuidanceLayer)) {
const konvaLayer = stage.findOne<Konva.Layer>(`#${rgLayer.id}`);
assert(konvaLayer, `Layer ${rgLayer.id} not found in stage`);
// We only need to recalculate the bbox if the layer has changed
if (rgLayer.bboxNeedsUpdate) {
const bboxRect = konvaLayer.findOne<Konva.Rect>(`.${LAYER_BBOX_NAME}`) ?? createBboxRect(rgLayer, konvaLayer);
// Hide the bbox while we calculate the new bbox, otherwise the bbox rect itself will be included in the calculation
const visible = bboxRect.visible();
bboxRect.visible(false);
if (rgLayer.maskObjects.length === 0) {
// No objects - no bbox to calculate
onBboxChanged(rgLayer.id, null);
} else {
// Calculate the bbox by rendering the layer and checking its pixels
onBboxChanged(rgLayer.id, getLayerBboxPixels(konvaLayer));
}
if (!bbox) {
continue;
}
const rect = konvaLayer.findOne<Konva.Rect>(`.${LAYER_BBOX_NAME}`) ?? createBboxRect(reduxLayer, konvaLayer);
rect.setAttrs({
visible: true,
listening: reduxLayer.isSelected,
x: bbox.x,
y: bbox.y,
width: bbox.width,
height: bbox.height,
stroke: reduxLayer.isSelected ? BBOX_SELECTED_STROKE : '',
});
// Restore the visibility of the bbox
bboxRect.visible(visible);
}
}
};
@ -847,10 +918,11 @@ const renderNoLayersMessage = (stage: Konva.Stage, layerCount: number, width: nu
export const renderers = {
renderToolPreview,
renderLayers,
renderBbox,
renderBboxes,
renderBackground,
renderNoLayersMessage,
arrangeLayers,
updateBboxes,
};
const DEBOUNCE_MS = 300;
@ -858,10 +930,11 @@ const DEBOUNCE_MS = 300;
export const debouncedRenderers = {
renderToolPreview: debounce(renderToolPreview, DEBOUNCE_MS),
renderLayers: debounce(renderLayers, DEBOUNCE_MS),
renderBbox: debounce(renderBbox, DEBOUNCE_MS),
renderBboxes: debounce(renderBboxes, DEBOUNCE_MS),
renderBackground: debounce(renderBackground, DEBOUNCE_MS),
renderNoLayersMessage: debounce(renderNoLayersMessage, DEBOUNCE_MS),
arrangeLayers: debounce(arrangeLayers, DEBOUNCE_MS),
updateBboxes: debounce(updateBboxes, DEBOUNCE_MS),
};
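// Illustrative sketch of the debounced-renderer pattern above (debounce is
// lodash-es's, already used in this module): rapid successive calls, e.g.
// while the user draws, collapse into one trailing invocation after
// DEBOUNCE_MS of quiet.
const renderExample = (id: string) => console.log(`render ${id}`);
const debouncedRenderExample = debounce(renderExample, DEBOUNCE_MS);
debouncedRenderExample('a');
debouncedRenderExample('b'); // only 'b' logs, ~300 ms after the last call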
/**

View File

@ -46,18 +46,16 @@ export const getImageUsage = (
const isControlLayerImage = controlLayers.layers.some((l) => {
if (isRegionalGuidanceLayer(l)) {
return l.ipAdapters.some((ipa) => ipa.image?.imageName === image_name);
return l.ipAdapters.some((ipa) => ipa.image?.name === image_name);
}
if (isControlAdapterLayer(l)) {
return (
l.controlAdapter.image?.imageName === image_name || l.controlAdapter.processedImage?.imageName === image_name
);
return l.controlAdapter.image?.name === image_name || l.controlAdapter.processedImage?.name === image_name;
}
if (isIPAdapterLayer(l)) {
return l.ipAdapter.image?.imageName === image_name;
return l.ipAdapter.image?.name === image_name;
}
if (isInitialImageLayer(l)) {
return l.image?.imageName === image_name;
return l.image?.name === image_name;
}
return false;
});

View File

@ -73,6 +73,7 @@ const SingleSelectionMenuItems = (props: SingleSelectionMenuItemsProps) => {
const handleSendToImageToImage = useCallback(() => {
dispatch(sentImageToImg2Img());
dispatch(iiLayerAdded(imageDTO));
dispatch(setActiveTab('generation'));
}, [dispatch, imageDTO]);
const handleSendToCanvas = useCallback(() => {

View File

@ -11,6 +11,7 @@ import type { GallerySelectionDraggableData, ImageDraggableData, TypesafeDraggab
import { getGalleryImageDataTestId } from 'features/gallery/components/ImageGrid/getGalleryImageDataTestId';
import { useMultiselect } from 'features/gallery/hooks/useMultiselect';
import { useScrollIntoView } from 'features/gallery/hooks/useScrollIntoView';
import { isImageViewerOpenChanged } from 'features/gallery/store/gallerySlice';
import type { MouseEvent } from 'react';
import { memo, useCallback, useMemo, useState } from 'react';
import { useTranslation } from 'react-i18next';
@ -102,6 +103,10 @@ const GalleryImage = (props: HoverableImageProps) => {
setIsHovered(true);
}, []);
const onDoubleClick = useCallback(() => {
dispatch(isImageViewerOpenChanged(true));
}, [dispatch]);
const handleMouseOut = useCallback(() => {
setIsHovered(false);
}, []);
@ -143,6 +148,7 @@ const GalleryImage = (props: HoverableImageProps) => {
>
<IAIDndImage
onClick={handleClick}
onDoubleClick={onDoubleClick}
imageDTO={imageDTO}
draggableData={draggableData}
isSelected={isSelected}

View File

@ -1,5 +1,6 @@
import { Box, Flex, IconButton, Tooltip, useShiftModifier } from '@invoke-ai/ui-library';
import { getOverlayScrollbarsParams } from 'common/components/OverlayScrollbars/constants';
import { Formatter } from 'fracturedjsonjs';
import { isString } from 'lodash-es';
import { OverlayScrollbarsComponent } from 'overlayscrollbars-react';
import type { CSSProperties } from 'react';
@ -7,6 +8,8 @@ import { memo, useCallback, useMemo } from 'react';
import { useTranslation } from 'react-i18next';
import { PiCopyBold, PiDownloadSimpleBold } from 'react-icons/pi';
const formatter = new Formatter();
type Props = {
label: string;
data: unknown;
@ -20,7 +23,7 @@ const overlayscrollbarsOptions = getOverlayScrollbarsParams('scroll', 'scroll').
const DataViewer = (props: Props) => {
const { label, data, fileName, withDownload = true, withCopy = true, extraCopyActions } = props;
const dataString = useMemo(() => (isString(data) ? data : JSON.stringify(data, null, 2)), [data]);
const dataString = useMemo(() => (isString(data) ? data : formatter.Serialize(data)) ?? '', [data]);
const shift = useShiftModifier();
const handleCopy = useCallback(() => {
navigator.clipboard.writeText(dataString);
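// Illustrative sketch of the fracturedjsonjs Formatter adopted above
// (Serialize may return undefined, hence the `?? ''` fallback): unlike
// JSON.stringify(data, null, 2), FracturedJson keeps small arrays and objects
// on one line while still expanding deeper structures.
const exampleFormatter = new Formatter();
const pretty = exampleFormatter.Serialize({ seed: 123, size: [512, 512] }) ?? '';
console.log(pretty);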

View File

@ -1,12 +1,10 @@
import { useAppSelector } from 'app/store/storeHooks';
import { MetadataControlNets } from 'features/metadata/components/MetadataControlNets';
import { MetadataControlNetsV2 } from 'features/metadata/components/MetadataControlNetsV2';
import { MetadataIPAdapters } from 'features/metadata/components/MetadataIPAdapters';
import { MetadataIPAdaptersV2 } from 'features/metadata/components/MetadataIPAdaptersV2';
import { MetadataItem } from 'features/metadata/components/MetadataItem';
import { MetadataLayers } from 'features/metadata/components/MetadataLayers';
import { MetadataLoRAs } from 'features/metadata/components/MetadataLoRAs';
import { MetadataT2IAdapters } from 'features/metadata/components/MetadataT2IAdapters';
import { MetadataT2IAdaptersV2 } from 'features/metadata/components/MetadataT2IAdaptersV2';
import { handlers } from 'features/metadata/util/handlers';
import { activeTabNameSelector } from 'features/ui/store/uiSelectors';
import { memo } from 'react';
@ -39,8 +37,7 @@ const ImageMetadataActions = (props: Props) => {
<MetadataItem metadata={metadata} handlers={handlers.scheduler} />
<MetadataItem metadata={metadata} handlers={handlers.cfgScale} />
<MetadataItem metadata={metadata} handlers={handlers.cfgRescaleMultiplier} />
<MetadataItem metadata={metadata} handlers={handlers.initialImage} />
<MetadataItem metadata={metadata} handlers={handlers.strength} />
{activeTabName !== 'generation' && <MetadataItem metadata={metadata} handlers={handlers.strength} />}
<MetadataItem metadata={metadata} handlers={handlers.hrfEnabled} />
<MetadataItem metadata={metadata} handlers={handlers.hrfMethod} />
<MetadataItem metadata={metadata} handlers={handlers.hrfStrength} />
@ -52,12 +49,10 @@ const ImageMetadataActions = (props: Props) => {
<MetadataItem metadata={metadata} handlers={handlers.refinerStart} />
<MetadataItem metadata={metadata} handlers={handlers.refinerSteps} />
<MetadataLoRAs metadata={metadata} />
{activeTabName === 'generation' && <MetadataLayers metadata={metadata} />}
{activeTabName !== 'generation' && <MetadataControlNets metadata={metadata} />}
{activeTabName !== 'generation' && <MetadataT2IAdapters metadata={metadata} />}
{activeTabName !== 'generation' && <MetadataIPAdapters metadata={metadata} />}
{activeTabName === 'generation' && <MetadataControlNetsV2 metadata={metadata} />}
{activeTabName === 'generation' && <MetadataT2IAdaptersV2 metadata={metadata} />}
{activeTabName === 'generation' && <MetadataIPAdaptersV2 metadata={metadata} />}
</>
);
};

View File

@ -16,6 +16,7 @@ import ParamUpscalePopover from 'features/parameters/components/Upscale/ParamUps
import { useIsQueueMutationInProgress } from 'features/queue/hooks/useIsQueueMutationInProgress';
import { useFeatureStatus } from 'features/system/hooks/useFeatureStatus';
import { selectSystemSlice } from 'features/system/store/systemSlice';
import { setActiveTab } from 'features/ui/store/uiSlice';
import { useGetAndLoadEmbeddedWorkflow } from 'features/workflowLibrary/hooks/useGetAndLoadEmbeddedWorkflow';
import { memo, useCallback } from 'react';
import { useHotkeys } from 'react-hotkeys-hook';
@ -84,6 +85,7 @@ const CurrentImageButtons = () => {
}
dispatch(sentImageToImg2Img());
dispatch(iiLayerAdded(imageDTO));
dispatch(setActiveTab('generation'));
}, [dispatch, imageDTO]);
useHotkeys('shift+i', handleSendToImageToImage, [imageDTO]);

View File

@ -22,7 +22,21 @@ const selectLastSelectedImageName = createSelector(
(lastSelectedImage) => lastSelectedImage?.image_name
);
const CurrentImagePreview = () => {
type Props = {
isDragDisabled?: boolean;
isDropDisabled?: boolean;
withNextPrevButtons?: boolean;
withMetadata?: boolean;
alwaysShowProgress?: boolean;
};
const CurrentImagePreview = ({
isDragDisabled = false,
isDropDisabled = false,
withNextPrevButtons = true,
withMetadata = true,
alwaysShowProgress = false,
}: Props) => {
const { t } = useTranslation();
const shouldShowImageDetails = useAppSelector((s) => s.ui.shouldShowImageDetails);
const imageName = useAppSelector(selectLastSelectedImageName);
@ -52,30 +66,35 @@ const CurrentImagePreview = () => {
// Show and hide the next/prev buttons on mouse move
const [shouldShowNextPrevButtons, setShouldShowNextPrevButtons] = useState<boolean>(false);
const timeoutId = useRef(0);
const onMouseMove = useCallback(() => {
const onMouseOver = useCallback(() => {
setShouldShowNextPrevButtons(true);
window.clearTimeout(timeoutId.current);
}, []);
const onMouseOut = useCallback(() => {
timeoutId.current = window.setTimeout(() => {
setShouldShowNextPrevButtons(false);
}, 1000);
}, 500);
}, []);
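// The show/hide pattern above, sketched in isolation (standalone sketch;
// `setVisible` stands in for the state setter): mouse-over cancels any
// pending hide, mouse-out schedules one after 500 ms, so the buttons don't
// flicker as the cursor moves between child elements.
let hideTimeoutId = 0;
const show = (setVisible: (v: boolean) => void) => {
  window.clearTimeout(hideTimeoutId);
  setVisible(true);
};
const hide = (setVisible: (v: boolean) => void) => {
  hideTimeoutId = window.setTimeout(() => setVisible(false), 500);
};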
return (
<Flex
onMouseMove={onMouseMove}
onMouseOver={onMouseOver}
onMouseOut={onMouseOut}
width="full"
height="full"
alignItems="center"
justifyContent="center"
position="relative"
>
{hasDenoiseProgress && shouldShowProgressInViewer ? (
{hasDenoiseProgress && (shouldShowProgressInViewer || alwaysShowProgress) ? (
<ProgressImage />
) : (
<IAIDndImage
imageDTO={imageDTO}
droppableData={droppableData}
draggableData={draggableData}
isDragDisabled={isDragDisabled}
isDropDisabled={isDropDisabled}
isUploadDisabled={true}
fitContainer
useThumbailFallback
@ -84,26 +103,13 @@ const CurrentImagePreview = () => {
dataTestId="image-preview"
/>
)}
{shouldShowImageDetails && imageDTO && withMetadata && (
<Box position="absolute" opacity={0.8} top={0} width="full" height="full" borderRadius="base">
<ImageMetadataViewer image={imageDTO} />
</Box>
)}
<AnimatePresence>
{shouldShowImageDetails && imageDTO && (
<Box
as={motion.div}
key="metadataViewer"
initial={initial}
animate={animateMetadata}
exit={exit}
position="absolute"
top={0}
width="full"
height="full"
borderRadius="base"
>
<ImageMetadataViewer image={imageDTO} />
</Box>
)}
</AnimatePresence>
<AnimatePresence>
{shouldShowNextPrevButtons && imageDTO && (
{withNextPrevButtons && shouldShowNextPrevButtons && imageDTO && (
<Box
as={motion.div}
key="nextPrevButtons"
@ -133,10 +139,6 @@ const animateArrows: AnimationProps['animate'] = {
opacity: 1,
transition: { duration: 0.07 },
};
const animateMetadata: AnimationProps['animate'] = {
opacity: 0.8,
transition: { duration: 0.07 },
};
const exit: AnimationProps['exit'] = {
opacity: 0,
transition: { duration: 0.07 },

View File

@ -1,37 +0,0 @@
import { Button } from '@invoke-ai/ui-library';
import { useAppSelector } from 'app/store/storeHooks';
import type { InvokeTabName } from 'features/ui/store/tabMap';
import { activeTabNameSelector } from 'features/ui/store/uiSelectors';
import { useMemo } from 'react';
import { useTranslation } from 'react-i18next';
import { useImageViewer } from './useImageViewer';
const TAB_NAME_TO_TKEY: Record<InvokeTabName, string> = {
generation: 'ui.tabs.generationTab',
canvas: 'ui.tabs.canvasTab',
workflows: 'ui.tabs.workflowsTab',
models: 'ui.tabs.modelsTab',
queue: 'ui.tabs.queueTab',
};
const TAB_NAME_TO_TKEY_SHORT: Record<InvokeTabName, string> = {
generation: 'ui.tabs.generation',
canvas: 'ui.tabs.canvas',
workflows: 'ui.tabs.workflows',
models: 'ui.tabs.models',
queue: 'ui.tabs.queue',
};
export const EditorButton = () => {
const { t } = useTranslation();
const { onClose } = useImageViewer();
const activeTabName = useAppSelector(activeTabNameSelector);
const tooltip = useMemo(() => t('gallery.switchTo', { tab: t(TAB_NAME_TO_TKEY[activeTabName]) }), [t, activeTabName]);
return (
<Button aria-label={tooltip} tooltip={tooltip} onClick={onClose} variant="ghost">
{t(TAB_NAME_TO_TKEY_SHORT[activeTabName])}
</Button>
);
};

View File

@ -5,26 +5,12 @@ import { ToggleProgressButton } from 'features/gallery/components/ImageViewer/To
import { useImageViewer } from 'features/gallery/components/ImageViewer/useImageViewer';
import type { InvokeTabName } from 'features/ui/store/tabMap';
import { activeTabNameSelector } from 'features/ui/store/uiSelectors';
import type { AnimationProps } from 'framer-motion';
import { AnimatePresence, motion } from 'framer-motion';
import { memo, useMemo } from 'react';
import { useHotkeys } from 'react-hotkeys-hook';
import CurrentImageButtons from './CurrentImageButtons';
import CurrentImagePreview from './CurrentImagePreview';
import { EditorButton } from './EditorButton';
const initial: AnimationProps['initial'] = {
opacity: 0,
};
const animate: AnimationProps['animate'] = {
opacity: 1,
transition: { duration: 0.07 },
};
const exit: AnimationProps['exit'] = {
opacity: 0,
transition: { duration: 0.07 },
};
import { ViewerToggleMenu } from './ViewerToggleMenu';
const VIEWER_ENABLED_TABS: InvokeTabName[] = ['canvas', 'generation', 'workflows'];
@ -42,49 +28,44 @@ export const ImageViewer = memo(() => {
useHotkeys('z', onToggle, { enabled: isViewerEnabled }, [isViewerEnabled, onToggle]);
useHotkeys('esc', onClose, { enabled: isViewerEnabled }, [isViewerEnabled, onClose]);
if (!shouldShowViewer) {
return null;
}
return (
<AnimatePresence>
{shouldShowViewer && (
<Flex
key="imageViewer"
as={motion.div}
initial={initial}
animate={animate}
exit={exit}
layerStyle="first"
borderRadius="base"
position="absolute"
flexDirection="column"
top={0}
right={0}
bottom={0}
left={0}
p={2}
rowGap={4}
alignItems="center"
justifyContent="center"
zIndex={10} // reactflow puts its minimap at 5, so we need to be above that
>
<Flex w="full" gap={2}>
<Flex flex={1} justifyContent="center">
<Flex gap={2} marginInlineEnd="auto">
<ToggleProgressButton />
<ToggleMetadataViewerButton />
</Flex>
</Flex>
<Flex flex={1} gap={2} justifyContent="center">
<CurrentImageButtons />
</Flex>
<Flex flex={1} justifyContent="center">
<Flex gap={2} marginInlineStart="auto">
<EditorButton />
</Flex>
</Flex>
<Flex
layerStyle="first"
borderRadius="base"
position="absolute"
flexDirection="column"
top={0}
right={0}
bottom={0}
left={0}
p={2}
rowGap={4}
alignItems="center"
justifyContent="center"
zIndex={10} // reactflow puts its minimap at 5, so we need to be above that
>
<Flex w="full" gap={2}>
<Flex flex={1} justifyContent="center">
<Flex gap={2} marginInlineEnd="auto">
<ToggleProgressButton />
<ToggleMetadataViewerButton />
</Flex>
<CurrentImagePreview />
</Flex>
)}
</AnimatePresence>
<Flex flex={1} gap={2} justifyContent="center">
<CurrentImageButtons />
</Flex>
<Flex flex={1} justifyContent="center">
<Flex gap={2} marginInlineStart="auto">
<ViewerToggleMenu />
</Flex>
</Flex>
</Flex>
<CurrentImagePreview />
</Flex>
);
});

View File

@ -0,0 +1,45 @@
import { Flex } from '@invoke-ai/ui-library';
import { ToggleMetadataViewerButton } from 'features/gallery/components/ImageViewer/ToggleMetadataViewerButton';
import { ToggleProgressButton } from 'features/gallery/components/ImageViewer/ToggleProgressButton';
import { memo } from 'react';
import CurrentImageButtons from './CurrentImageButtons';
import CurrentImagePreview from './CurrentImagePreview';
export const ImageViewerWorkflows = memo(() => {
return (
<Flex
layerStyle="first"
borderRadius="base"
position="absolute"
flexDirection="column"
top={0}
right={0}
bottom={0}
left={0}
p={2}
rowGap={4}
alignItems="center"
justifyContent="center"
zIndex={10} // reactflow puts its minimap at 5, so we need to be above that
>
<Flex w="full" gap={2}>
<Flex flex={1} justifyContent="center">
<Flex gap={2} marginInlineEnd="auto">
<ToggleProgressButton />
<ToggleMetadataViewerButton />
</Flex>
</Flex>
<Flex flex={1} gap={2} justifyContent="center">
<CurrentImageButtons />
</Flex>
<Flex flex={1} justifyContent="center">
<Flex gap={2} marginInlineStart="auto" />
</Flex>
</Flex>
<CurrentImagePreview />
</Flex>
);
});
ImageViewerWorkflows.displayName = 'ImageViewerWorkflows';

View File

@ -35,6 +35,7 @@ export const ToggleMetadataViewerButton = memo(() => {
isDisabled={!imageDTO}
variant="outline"
colorScheme={shouldShowImageDetails ? 'invokeBlue' : 'base'}
data-testid="toggle-show-metadata-button"
/>
);
});

View File

@ -22,6 +22,7 @@ export const ToggleProgressButton = memo(() => {
onClick={onClick}
variant="outline"
colorScheme={shouldShowProgressInViewer ? 'invokeBlue' : 'base'}
data-testid="toggle-show-progress-button"
/>
);
});

View File

@ -1,17 +0,0 @@
import { Button } from '@invoke-ai/ui-library';
import { useMemo } from 'react';
import { useTranslation } from 'react-i18next';
import { useImageViewer } from './useImageViewer';
export const ViewerButton = () => {
const { t } = useTranslation();
const { onOpen } = useImageViewer();
const tooltip = useMemo(() => t('gallery.switchTo', { tab: t('common.viewer') }), [t]);
return (
<Button aria-label={tooltip} tooltip={tooltip} onClick={onOpen} variant="ghost" pointerEvents="auto">
{t('common.viewer')}
</Button>
);
};

View File

@ -0,0 +1,67 @@
import {
Button,
Flex,
Icon,
Popover,
PopoverArrow,
PopoverBody,
PopoverContent,
PopoverTrigger,
Text,
} from '@invoke-ai/ui-library';
import { useTranslation } from 'react-i18next';
import { PiCaretDownBold, PiCheckBold, PiEyeBold, PiPencilBold } from 'react-icons/pi';
import { useImageViewer } from './useImageViewer';
export const ViewerToggleMenu = () => {
const { t } = useTranslation();
const { isOpen, onClose, onOpen } = useImageViewer();
return (
<Popover isLazy>
<PopoverTrigger>
<Button variant="outline" data-testid="toggle-viewer-menu-button">
<Flex gap={3} w="full" alignItems="center">
{isOpen ? <Icon as={PiEyeBold} /> : <Icon as={PiPencilBold} />}
<Text fontSize="md">{isOpen ? t('common.viewing') : t('common.editing')}</Text>
<Icon as={PiCaretDownBold} />
</Flex>
</Button>
</PopoverTrigger>
<PopoverContent p={2}>
<PopoverArrow />
<PopoverBody>
<Flex flexDir="column">
<Button onClick={onOpen} variant="ghost" h="auto" w="auto" p={2}>
<Flex gap={2} w="full">
<Icon as={PiCheckBold} visibility={isOpen ? 'visible' : 'hidden'} />
<Flex flexDir="column" gap={2} alignItems="flex-start">
<Text fontWeight="semibold" color="base.100">
{t('common.viewing')}
</Text>
<Text fontWeight="normal" color="base.300">
{t('common.viewingDesc')}
</Text>
</Flex>
</Flex>
</Button>
<Button onClick={onClose} variant="ghost" h="auto" w="auto" p={2}>
<Flex gap={2} w="full">
<Icon as={PiCheckBold} visibility={isOpen ? 'hidden' : 'visible'} />
<Flex flexDir="column" gap={2} alignItems="flex-start">
<Text fontWeight="semibold" color="base.100">
{t('common.editing')}
</Text>
<Text fontWeight="normal" color="base.300">
{t('common.editingDesc')}
</Text>
</Flex>
</Flex>
</Button>
</Flex>
</PopoverBody>
</PopoverContent>
</Popover>
);
};

View File

@ -21,7 +21,7 @@ const initialGalleryState: GalleryState = {
boardSearchText: '',
limit: INITIAL_IMAGE_LIMIT,
offset: 0,
isImageViewerOpen: false,
isImageViewerOpen: true,
};
export const gallerySlice = createSlice({
@ -30,11 +30,9 @@ export const gallerySlice = createSlice({
reducers: {
imageSelected: (state, action: PayloadAction<ImageDTO | null>) => {
state.selection = action.payload ? [action.payload] : [];
state.isImageViewerOpen = true;
},
selectionChanged: (state, action: PayloadAction<ImageDTO[]>) => {
state.selection = uniqBy(action.payload, (i) => i.image_name);
state.isImageViewerOpen = true;
},
shouldAutoSwitchChanged: (state, action: PayloadAction<boolean>) => {
state.shouldAutoSwitch = action.payload;

View File

@ -1,72 +0,0 @@
import { MetadataItemView } from 'features/metadata/components/MetadataItemView';
import type { ControlNetConfigV2Metadata, MetadataHandlers } from 'features/metadata/types';
import { handlers } from 'features/metadata/util/handlers';
import { useCallback, useEffect, useMemo, useState } from 'react';
type Props = {
metadata: unknown;
};
export const MetadataControlNetsV2 = ({ metadata }: Props) => {
const [controlNets, setControlNets] = useState<ControlNetConfigV2Metadata[]>([]);
useEffect(() => {
const parse = async () => {
try {
const parsed = await handlers.controlNetsV2.parse(metadata);
setControlNets(parsed);
} catch (e) {
setControlNets([]);
}
};
parse();
}, [metadata]);
const label = useMemo(() => handlers.controlNetsV2.getLabel(), []);
return (
<>
{controlNets.map((controlNet) => (
<MetadataViewControlNet
key={controlNet.id}
label={label}
controlNet={controlNet}
handlers={handlers.controlNetsV2}
/>
))}
</>
);
};
const MetadataViewControlNet = ({
label,
controlNet,
handlers,
}: {
label: string;
controlNet: ControlNetConfigV2Metadata;
handlers: MetadataHandlers<ControlNetConfigV2Metadata[], ControlNetConfigV2Metadata>;
}) => {
const onRecall = useCallback(() => {
if (!handlers.recallItem) {
return;
}
handlers.recallItem(controlNet, true);
}, [handlers, controlNet]);
const [renderedValue, setRenderedValue] = useState<React.ReactNode>(null);
useEffect(() => {
const _renderValue = async () => {
if (!handlers.renderItemValue) {
setRenderedValue(null);
return;
}
const rendered = await handlers.renderItemValue(controlNet);
setRenderedValue(rendered);
};
_renderValue();
}, [handlers, controlNet]);
return <MetadataItemView label={label} isDisabled={false} onRecall={onRecall} renderedValue={renderedValue} />;
};

View File

@ -3,6 +3,7 @@ import { MetadataItemView } from 'features/metadata/components/MetadataItemView'
import { useMetadataItem } from 'features/metadata/hooks/useMetadataItem';
import type { MetadataHandlers } from 'features/metadata/types';
import { MetadataParseFailedToken } from 'features/metadata/util/parsers';
import { isSymbol } from 'lodash-es';
type MetadataItemProps<T> = {
metadata: unknown;
@ -17,6 +18,10 @@ const _MetadataItem = typedMemo(<T,>({ metadata, handlers, direction = 'row' }:
return null;
}
if (handlers.getIsVisible && !isSymbol(value) && !handlers.getIsVisible(value)) {
return null;
}
return (
<MetadataItemView
label={label}

View File

@ -1,5 +1,6 @@
import type { Layer } from 'features/controlLayers/store/types';
import { MetadataItemView } from 'features/metadata/components/MetadataItemView';
import type { IPAdapterConfigV2Metadata, MetadataHandlers } from 'features/metadata/types';
import type { MetadataHandlers } from 'features/metadata/types';
import { handlers } from 'features/metadata/util/handlers';
import { useCallback, useEffect, useMemo, useState } from 'react';
@ -7,52 +8,47 @@ type Props = {
metadata: unknown;
};
export const MetadataIPAdaptersV2 = ({ metadata }: Props) => {
const [ipAdapters, setIPAdapters] = useState<IPAdapterConfigV2Metadata[]>([]);
export const MetadataLayers = ({ metadata }: Props) => {
const [layers, setLayers] = useState<Layer[]>([]);
useEffect(() => {
const parse = async () => {
try {
const parsed = await handlers.ipAdaptersV2.parse(metadata);
setIPAdapters(parsed);
const parsed = await handlers.layers.parse(metadata);
setLayers(parsed);
} catch (e) {
setIPAdapters([]);
setLayers([]);
}
};
parse();
}, [metadata]);
const label = useMemo(() => handlers.ipAdaptersV2.getLabel(), []);
const label = useMemo(() => handlers.layers.getLabel(), []);
return (
<>
{ipAdapters.map((ipAdapter) => (
<MetadataViewIPAdapter
key={ipAdapter.id}
label={label}
ipAdapter={ipAdapter}
handlers={handlers.ipAdaptersV2}
/>
{layers.map((layer) => (
<MetadataViewLayer key={layer.id} label={label} layer={layer} handlers={handlers.layers} />
))}
</>
);
};
const MetadataViewIPAdapter = ({
const MetadataViewLayer = ({
label,
ipAdapter,
layer,
handlers,
}: {
label: string;
ipAdapter: IPAdapterConfigV2Metadata;
handlers: MetadataHandlers<IPAdapterConfigV2Metadata[], IPAdapterConfigV2Metadata>;
layer: Layer;
handlers: MetadataHandlers<Layer[], Layer>;
}) => {
const onRecall = useCallback(() => {
if (!handlers.recallItem) {
return;
}
handlers.recallItem(ipAdapter, true);
}, [handlers, ipAdapter]);
handlers.recallItem(layer, true);
}, [handlers, layer]);
const [renderedValue, setRenderedValue] = useState<React.ReactNode>(null);
useEffect(() => {
@ -61,12 +57,12 @@ const MetadataViewIPAdapter = ({
setRenderedValue(null);
return;
}
const rendered = await handlers.renderItemValue(ipAdapter);
const rendered = await handlers.renderItemValue(layer);
setRenderedValue(rendered);
};
_renderValue();
}, [handlers, ipAdapter]);
}, [handlers, layer]);
return <MetadataItemView label={label} isDisabled={false} onRecall={onRecall} renderedValue={renderedValue} />;
};

View File

@ -1,72 +0,0 @@
import { MetadataItemView } from 'features/metadata/components/MetadataItemView';
import type { MetadataHandlers, T2IAdapterConfigV2Metadata } from 'features/metadata/types';
import { handlers } from 'features/metadata/util/handlers';
import { useCallback, useEffect, useMemo, useState } from 'react';
type Props = {
metadata: unknown;
};
export const MetadataT2IAdaptersV2 = ({ metadata }: Props) => {
const [t2iAdapters, setT2IAdapters] = useState<T2IAdapterConfigV2Metadata[]>([]);
useEffect(() => {
const parse = async () => {
try {
const parsed = await handlers.t2iAdaptersV2.parse(metadata);
setT2IAdapters(parsed);
} catch (e) {
setT2IAdapters([]);
}
};
parse();
}, [metadata]);
const label = useMemo(() => handlers.t2iAdaptersV2.getLabel(), []);
return (
<>
{t2iAdapters.map((t2iAdapter) => (
<MetadataViewT2IAdapter
key={t2iAdapter.id}
label={label}
t2iAdapter={t2iAdapter}
handlers={handlers.t2iAdaptersV2}
/>
))}
</>
);
};
const MetadataViewT2IAdapter = ({
label,
t2iAdapter,
handlers,
}: {
label: string;
t2iAdapter: T2IAdapterConfigV2Metadata;
handlers: MetadataHandlers<T2IAdapterConfigV2Metadata[], T2IAdapterConfigV2Metadata>;
}) => {
const onRecall = useCallback(() => {
if (!handlers.recallItem) {
return;
}
handlers.recallItem(t2iAdapter, true);
}, [handlers, t2iAdapter]);
const [renderedValue, setRenderedValue] = useState<React.ReactNode>(null);
useEffect(() => {
const _renderValue = async () => {
if (!handlers.renderItemValue) {
setRenderedValue(null);
return;
}
const rendered = await handlers.renderItemValue(t2iAdapter);
setRenderedValue(rendered);
};
_renderValue();
}, [handlers, t2iAdapter]);
return <MetadataItemView label={label} isDisabled={false} onRecall={onRecall} renderedValue={renderedValue} />;
};

View File

@ -1,9 +1,4 @@
import type { ControlNetConfig, IPAdapterConfig, T2IAdapterConfig } from 'features/controlAdapters/store/types';
import type {
ControlNetConfigV2,
IPAdapterConfigV2,
T2IAdapterConfigV2,
} from 'features/controlLayers/util/controlAdapters';
import type { O } from 'ts-toolbelt';
/**
@ -50,6 +45,14 @@ export type MetadataParseFunc<T = unknown> = (metadata: unknown) => Promise<T>;
*/
export type MetadataValidateFunc<T> = (value: T) => Promise<T>;
/**
* A function that determines whether a metadata item should be visible.
*
* @param value The value to check.
* @returns True if the item should be visible, false otherwise.
*/
type MetadataGetIsVisibleFunc<T> = (value: T) => boolean;
export type MetadataHandlers<TValue = unknown, TItem = unknown> = {
/**
* Gets the label of the current metadata item as a string.
@ -111,6 +114,14 @@ export type MetadataHandlers<TValue = unknown, TItem = unknown> = {
* @returns The rendered item.
*/
renderItemValue?: MetadataRenderValueFunc<TItem>;
/**
* Checks if a parsed metadata value should be visible.
* If not provided, the item is always visible.
*
* @param value The value to check.
* @returns True if the item should be visible, false otherwise.
*/
getIsVisible?: MetadataGetIsVisibleFunc<TValue>;
};
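// A minimal sketch of how the optional getIsVisible hook is consumed (the
// MetadataItem component elsewhere in this diff follows the same shape): an
// absent hook means the item is always visible.
const isItemVisible = <TValue>(value: TValue, getIsVisible?: (v: TValue) => boolean): boolean =>
  getIsVisible ? getIsVisible(value) : true;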
// TODO(psyche): The types for item handlers should be able to be inferred from the type of the value:
@ -127,6 +138,7 @@ type BuildMetadataHandlersArg<TValue, TItem> = {
getLabel: MetadataGetLabelFunc;
renderValue?: MetadataRenderValueFunc<TValue>;
renderItemValue?: MetadataRenderValueFunc<TItem>;
getIsVisible?: MetadataGetIsVisibleFunc<TValue>;
};
export type BuildMetadataHandlers = <TValue, TItem>(
@ -140,11 +152,3 @@ export type AnyControlAdapterConfigMetadata =
| ControlNetConfigMetadata
| T2IAdapterConfigMetadata
| IPAdapterConfigMetadata;
export type ControlNetConfigV2Metadata = O.NonNullable<ControlNetConfigV2, 'model'>;
export type T2IAdapterConfigV2Metadata = O.NonNullable<T2IAdapterConfigV2, 'model'>;
export type IPAdapterConfigV2Metadata = O.NonNullable<IPAdapterConfigV2, 'model'>;
export type AnyControlAdapterConfigV2Metadata =
| ControlNetConfigV2Metadata
| T2IAdapterConfigV2Metadata
| IPAdapterConfigV2Metadata;

View File

@ -1,9 +1,9 @@
import { objectKeys } from 'common/util/objectKeys';
import { toast } from 'common/util/toast';
import type { Layer } from 'features/controlLayers/store/types';
import type { LoRA } from 'features/lora/store/loraSlice';
import type {
AnyControlAdapterConfigMetadata,
AnyControlAdapterConfigV2Metadata,
BuildMetadataHandlers,
MetadataGetLabelFunc,
MetadataHandlers,
@ -16,6 +16,7 @@ import { fetchModelConfig } from 'features/metadata/util/modelFetchingHelpers';
import { validators } from 'features/metadata/util/validators';
import type { ModelIdentifierField } from 'features/nodes/types/common';
import { t } from 'i18next';
import { assert } from 'tsafe';
import { parsers } from './parsers';
import { recallers } from './recallers';
@ -44,13 +45,48 @@ const renderControlAdapterValue: MetadataRenderValueFunc<AnyControlAdapterConfig
return `${value.model.key} (${value.model.base.toUpperCase()}) - ${value.weight}`;
}
};
const renderControlAdapterValueV2: MetadataRenderValueFunc<AnyControlAdapterConfigV2Metadata> = async (value) => {
try {
const modelConfig = await fetchModelConfig(value.model.key ?? 'none');
return `${modelConfig.name} (${modelConfig.base.toUpperCase()}) - ${value.weight}`;
} catch {
return `${value.model.key} (${value.model.base.toUpperCase()}) - ${value.weight}`;
const renderLayerValue: MetadataRenderValueFunc<Layer> = async (layer) => {
if (layer.type === 'initial_image_layer') {
let rendered = t('controlLayers.globalInitialImageLayer');
if (layer.image) {
rendered += ` (${layer.image})`;
}
return rendered;
}
if (layer.type === 'control_adapter_layer') {
let rendered = t('controlLayers.globalControlAdapterLayer');
const model = layer.controlAdapter.model;
if (model) {
rendered += ` (${model.name} - ${model.base.toUpperCase()})`;
}
return rendered;
}
if (layer.type === 'ip_adapter_layer') {
let rendered = t('controlLayers.globalIPAdapterLayer');
const model = layer.ipAdapter.model;
if (model) {
rendered += ` (${model.name} - ${model.base.toUpperCase()})`;
}
return rendered;
}
if (layer.type === 'regional_guidance_layer') {
const rendered = t('controlLayers.regionalGuidanceLayer');
const items: string[] = [];
if (layer.positivePrompt) {
items.push(`Positive: ${layer.positivePrompt}`);
}
if (layer.negativePrompt) {
items.push(`Negative: ${layer.negativePrompt}`);
}
if (layer.ipAdapters.length > 0) {
items.push(`${layer.ipAdapters.length} IP Adapters`);
}
return `${rendered} (${items.join(', ')})`;
}
assert(false, 'Unknown layer type');
};
const renderLayersValue: MetadataRenderValueFunc<Layer[]> = async (layers) => {
return `${layers.length} ${t('controlLayers.layers', { count: layers.length })}`;
};
const parameterSetToast = (parameter: string, description?: string) => {
@ -73,26 +109,6 @@ const parameterNotSetToast = (parameter: string, description?: string) => {
});
};
// const allParameterSetToast = (description?: string) => {
// toast({
// title: t('toast.parametersSet'),
// status: 'info',
// description,
// duration: 2500,
// isClosable: true,
// });
// };
// const allParameterNotSetToast = (description?: string) => {
// toast({
// title: t('toast.parametersNotSet'),
// status: 'warning',
// description,
// duration: 2500,
// isClosable: true,
// });
// };
const buildParse =
<TValue, TItem>(arg: {
parser: MetadataParseFunc<TValue>;
@ -171,6 +187,7 @@ const buildHandlers: BuildMetadataHandlers = ({
itemValidator,
renderValue,
renderItemValue,
getIsVisible,
}) => ({
parse: buildParse({ parser, getLabel }),
parseItem: itemParser ? buildParseItem({ itemParser, getLabel }) : undefined,
@ -179,6 +196,7 @@ const buildHandlers: BuildMetadataHandlers = ({
getLabel,
renderValue: renderValue ?? resolveToString,
renderItemValue: renderItemValue ?? resolveToString,
getIsVisible,
});
export const handlers = {
@ -198,12 +216,6 @@ export const handlers = {
recaller: recallers.cfgScale,
}),
height: buildHandlers({ getLabel: () => t('metadata.height'), parser: parsers.height, recaller: recallers.height }),
initialImage: buildHandlers({
getLabel: () => t('metadata.initImage'),
parser: parsers.initialImage,
recaller: recallers.initialImage,
renderValue: async (imageDTO) => imageDTO.image_name,
}),
negativePrompt: buildHandlers({
getLabel: () => t('metadata.negativePrompt'),
parser: parsers.negativePrompt,
@ -350,35 +362,17 @@ export const handlers = {
itemValidator: validators.t2iAdapter,
renderItemValue: renderControlAdapterValue,
}),
controlNetsV2: buildHandlers({
getLabel: () => t('common.controlNet'),
parser: parsers.controlNetsV2,
itemParser: parsers.controlNetV2,
recaller: recallers.controlNetsV2,
itemRecaller: recallers.controlNetV2,
validator: validators.controlNetsV2,
itemValidator: validators.controlNetV2,
renderItemValue: renderControlAdapterValueV2,
}),
ipAdaptersV2: buildHandlers({
getLabel: () => t('common.ipAdapter'),
parser: parsers.ipAdaptersV2,
itemParser: parsers.ipAdapterV2,
recaller: recallers.ipAdaptersV2,
itemRecaller: recallers.ipAdapterV2,
validator: validators.ipAdaptersV2,
itemValidator: validators.ipAdapterV2,
renderItemValue: renderControlAdapterValueV2,
}),
t2iAdaptersV2: buildHandlers({
getLabel: () => t('common.t2iAdapter'),
parser: parsers.t2iAdaptersV2,
itemParser: parsers.t2iAdapterV2,
recaller: recallers.t2iAdaptersV2,
itemRecaller: recallers.t2iAdapterV2,
validator: validators.t2iAdaptersV2,
itemValidator: validators.t2iAdapterV2,
renderItemValue: renderControlAdapterValueV2,
layers: buildHandlers({
getLabel: () => t('controlLayers.layers_one'),
parser: parsers.layers,
itemParser: parsers.layer,
recaller: recallers.layers,
itemRecaller: recallers.layer,
validator: validators.layers,
itemValidator: validators.layer,
renderItemValue: renderLayerValue,
renderValue: renderLayersValue,
getIsVisible: (value) => value.length > 0,
}),
} as const;
@ -435,9 +429,9 @@ export const parseAndRecallImageDimensions = async (metadata: unknown) => {
};
// These handlers should be omitted when recalling to control layers
const TO_CONTROL_LAYERS_SKIP_KEYS: (keyof typeof handlers)[] = ['controlNets', 'ipAdapters', 't2iAdapters'];
const TO_CONTROL_LAYERS_SKIP_KEYS: (keyof typeof handlers)[] = ['controlNets', 'ipAdapters', 't2iAdapters', 'strength'];
// These handlers should be omitted when recalling to the rest of the app
const NOT_TO_CONTROL_LAYERS_SKIP_KEYS: (keyof typeof handlers)[] = ['controlNetsV2', 'ipAdaptersV2', 't2iAdaptersV2'];
const NOT_TO_CONTROL_LAYERS_SKIP_KEYS: (keyof typeof handlers)[] = ['layers'];
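// Hypothetical sketch of how these skip lists are applied (the full body of
// parseAndRecallAllMetadata is elided here; `toControlLayers` is an assumed
// flag name): recall every handler except those skipped for the target.
const recallableKeys = (toControlLayers: boolean) => {
  const skip = toControlLayers ? TO_CONTROL_LAYERS_SKIP_KEYS : NOT_TO_CONTROL_LAYERS_SKIP_KEYS;
  return objectKeys(handlers).filter((key) => !skip.includes(key));
};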
export const parseAndRecallAllMetadata = async (
metadata: unknown,

View File

@ -1,4 +1,5 @@
import { getStore } from 'app/store/nanostores/store';
import type { ModelIdentifierField } from 'features/nodes/types/common';
import { isModelIdentifier, isModelIdentifierV2 } from 'features/nodes/types/common';
import { modelsApi } from 'services/api/endpoints/models';
import type { AnyModelConfig, BaseModelType, ModelType } from 'services/api/types';
@ -68,6 +69,24 @@ const fetchModelConfigByAttrs = async (name: string, base: BaseModelType, type:
}
};
/**
* Fetches the model config given an identifier. First attempts to fetch by key, then falls back to fetching by attrs.
* @param identifier The model identifier.
* @returns A promise that resolves to the model config.
* @throws {ModelConfigNotFoundError} If the model config is unable to be fetched.
*/
export const fetchModelConfigByIdentifier = async (identifier: ModelIdentifierField): Promise<AnyModelConfig> => {
try {
return await fetchModelConfig(identifier.key);
} catch {
try {
return await fetchModelConfigByAttrs(identifier.name, identifier.base, identifier.type);
} catch {
throw new ModelConfigNotFoundError(`Unable to retrieve model config for identifier ${JSON.stringify(identifier)}`);
}
}
};
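// Illustrative usage of the key-then-attrs fallback above: both lookups miss
// only for models no longer installed, surfacing as ModelConfigNotFoundError.
const tryLoadModelConfig = async (identifier: ModelIdentifierField): Promise<AnyModelConfig | null> => {
  try {
    return await fetchModelConfigByIdentifier(identifier);
  } catch {
    return null; // neither the key nor (name, base, type) matched an installed model
  }
};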
/**
* Fetches the model config for a given model key and type, and ensures that the model config is of a specific type.
* @param key The model key.

View File

@ -1,10 +1,12 @@
import { getStore } from 'app/store/nanostores/store';
import {
initialControlNet,
initialIPAdapter,
initialT2IAdapter,
} from 'features/controlAdapters/util/buildControlAdapter';
import { buildControlAdapterProcessor } from 'features/controlAdapters/util/buildControlAdapterProcessor';
import { getCALayerId, getIPALayerId, INITIAL_IMAGE_LAYER_ID } from 'features/controlLayers/store/controlLayersSlice';
import type { ControlAdapterLayer, InitialImageLayer, IPAdapterLayer, Layer } from 'features/controlLayers/store/types';
import { zLayer } from 'features/controlLayers/store/types';
import {
CA_PROCESSOR_DATA,
imageDTOToImageWithDims,
@ -17,12 +19,9 @@ import type { LoRA } from 'features/lora/store/loraSlice';
import { defaultLoRAConfig } from 'features/lora/store/loraSlice';
import type {
ControlNetConfigMetadata,
ControlNetConfigV2Metadata,
IPAdapterConfigMetadata,
IPAdapterConfigV2Metadata,
MetadataParseFunc,
T2IAdapterConfigMetadata,
T2IAdapterConfigV2Metadata,
} from 'features/metadata/types';
import { fetchModelConfigWithTypeGuard, getModelKey } from 'features/metadata/util/modelFetchingHelpers';
import { zControlField, zIPAdapterField, zModelIdentifierField, zT2IAdapterField } from 'features/nodes/types/common';
@ -69,8 +68,7 @@ import {
isParameterWidth,
} from 'features/parameters/types/parameterSchemas';
import { get, isArray, isString } from 'lodash-es';
import { getImageDTO, imagesApi } from 'services/api/endpoints/images';
import type { ImageDTO } from 'services/api/types';
import { getImageDTO } from 'services/api/endpoints/images';
import {
isControlNetModelConfig,
isIPAdapterModelConfig,
@ -80,6 +78,7 @@ import {
isT2IAdapterModelConfig,
isVAEModelConfig,
} from 'services/api/types';
import { assert } from 'tsafe';
import { v4 as uuidv4 } from 'uuid';
export const MetadataParsePendingToken = Symbol('pending');
@ -149,14 +148,6 @@ const parseCFGRescaleMultiplier: MetadataParseFunc<ParameterCFGRescaleMultiplier
const parseScheduler: MetadataParseFunc<ParameterScheduler> = (metadata) =>
getProperty(metadata, 'scheduler', isParameterScheduler);
const parseInitialImage: MetadataParseFunc<ImageDTO> = async (metadata) => {
const imageName = await getProperty(metadata, 'init_image', isString);
const imageDTORequest = getStore().dispatch(imagesApi.endpoints.getImageDTO.initiate(imageName));
const imageDTO = await imageDTORequest.unwrap();
imageDTORequest.unsubscribe();
return imageDTO;
};
const parseWidth: MetadataParseFunc<ParameterWidth> = (metadata) => getProperty(metadata, 'width', isParameterWidth);
const parseHeight: MetadataParseFunc<ParameterHeight> = (metadata) =>
@ -309,7 +300,7 @@ const parseControlNet: MetadataParseFunc<ControlNetConfigMetadata> = async (meta
const parseAllControlNets: MetadataParseFunc<ControlNetConfigMetadata[]> = async (metadata) => {
try {
const controlNetsRaw = await getProperty(metadata, 'controlnets', isArray || undefined);
const controlNetsRaw = await getProperty(metadata, 'controlnets', isArray);
const parseResults = await Promise.allSettled(controlNetsRaw.map((cn) => parseControlNet(cn)));
const controlNets = parseResults
.filter((result): result is PromiseFulfilledResult<ControlNetConfigMetadata> => result.status === 'fulfilled')
@ -439,8 +430,103 @@ const parseAllIPAdapters: MetadataParseFunc<IPAdapterConfigMetadata[]> = async (
}
};
//#region V2/Control Layers
const parseControlNetV2: MetadataParseFunc<ControlNetConfigV2Metadata> = async (metadataItem) => {
//#region Control Layers
const parseLayer: MetadataParseFunc<Layer> = async (metadataItem) => zLayer.parseAsync(metadataItem);
const parseLayers: MetadataParseFunc<Layer[]> = async (metadata) => {
// We need to support recalling pre-Control Layers metadata into Control Layers. A separate set of parsers
// converts pre-CL metadata into layers. The mapping isn't always 1-to-1, so this is best-effort. For example,
// CL Control Adapters don't support resize mode, so we simply omit that property.
try {
const layers: Layer[] = [];
try {
const control_layers = await getProperty(metadata, 'control_layers');
const controlLayersRaw = await getProperty(control_layers, 'layers', isArray);
const controlLayersParseResults = await Promise.allSettled(controlLayersRaw.map(parseLayer));
const controlLayers = controlLayersParseResults
.filter((result): result is PromiseFulfilledResult<Layer> => result.status === 'fulfilled')
.map((result) => result.value);
layers.push(...controlLayers);
} catch {
// no-op
}
try {
const controlNetsRaw = await getProperty(metadata, 'controlnets', isArray);
const controlNetsParseResults = await Promise.allSettled(
controlNetsRaw.map(async (cn) => await parseControlNetToControlAdapterLayer(cn))
);
const controlNetsAsLayers = controlNetsParseResults
.filter((result): result is PromiseFulfilledResult<ControlAdapterLayer> => result.status === 'fulfilled')
.map((result) => result.value);
layers.push(...controlNetsAsLayers);
} catch {
// no-op
}
try {
const t2iAdaptersRaw = await getProperty(metadata, 't2iAdapters', isArray);
const t2iAdaptersParseResults = await Promise.allSettled(
t2iAdaptersRaw.map(async (cn) => await parseT2IAdapterToControlAdapterLayer(cn))
);
const t2iAdaptersAsLayers = t2iAdaptersParseResults
.filter((result): result is PromiseFulfilledResult<ControlAdapterLayer> => result.status === 'fulfilled')
.map((result) => result.value);
layers.push(...t2iAdaptersAsLayers);
} catch {
// no-op
}
try {
const ipAdaptersRaw = await getProperty(metadata, 'ipAdapters', isArray);
const ipAdaptersParseResults = await Promise.allSettled(
ipAdaptersRaw.map(async (cn) => await parseIPAdapterToIPAdapterLayer(cn))
);
const ipAdaptersAsLayers = ipAdaptersParseResults
.filter((result): result is PromiseFulfilledResult<IPAdapterLayer> => result.status === 'fulfilled')
.map((result) => result.value);
layers.push(...ipAdaptersAsLayers);
} catch {
// no-op
}
try {
const initialImageLayer = await parseInitialImageToInitialImageLayer(metadata);
layers.push(initialImageLayer);
} catch {
// no-op
}
return layers;
} catch {
return [];
}
};
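// The allSettled-then-filter idiom repeated above, sketched as a generic
// helper (not present in this file): attempt every item independently and
// keep only the fulfilled results, so one malformed entry cannot sink the
// whole best-effort parse.
const parseAllSettled = async <T>(items: unknown[], parse: (item: unknown) => Promise<T>): Promise<T[]> => {
  const results = await Promise.allSettled(items.map((item) => parse(item)));
  return results
    .filter((result): result is PromiseFulfilledResult<T> => result.status === 'fulfilled')
    .map((result) => result.value);
};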
const parseInitialImageToInitialImageLayer: MetadataParseFunc<InitialImageLayer> = async (metadata) => {
const denoisingStrength = await getProperty(metadata, 'strength', isParameterStrength);
const imageName = await getProperty(metadata, 'init_image', isString);
const imageDTO = await getImageDTO(imageName);
assert(imageDTO, 'ImageDTO is null');
const layer: InitialImageLayer = {
id: INITIAL_IMAGE_LAYER_ID,
type: 'initial_image_layer',
bbox: null,
bboxNeedsUpdate: true,
x: 0,
y: 0,
isEnabled: true,
opacity: 1,
image: imageDTOToImageWithDims(imageDTO),
isSelected: true,
denoisingStrength,
};
return layer;
};
const parseControlNetToControlAdapterLayer: MetadataParseFunc<ControlAdapterLayer> = async (metadataItem) => {
const control_model = await getProperty(metadataItem, 'control_model');
const key = await getModelKey(control_model, 'controlnet');
const controlNetModel = await fetchModelConfigWithTypeGuard(key, isControlNetModelConfig);
@ -469,7 +555,6 @@ const parseControlNetV2: MetadataParseFunc<ControlNetConfigV2Metadata> = async (
.catch(null)
.parse(await getProperty(metadataItem, 'control_mode'));
const id = uuidv4();
const defaultPreprocessor = controlNetModel.default_settings?.preprocessor;
const processorConfig = isProcessorTypeV2(defaultPreprocessor)
? CA_PROCESSOR_DATA[defaultPreprocessor].buildDefaults()
@ -481,36 +566,35 @@ const parseControlNetV2: MetadataParseFunc<ControlNetConfigV2Metadata> = async (
const imageDTO = image ? await getImageDTO(image.image_name) : null;
const processedImageDTO = processedImage ? await getImageDTO(processedImage.image_name) : null;
const controlNet: ControlNetConfigV2Metadata = {
id,
type: 'controlnet',
model: zModelIdentifierField.parse(controlNetModel),
weight: typeof control_weight === 'number' ? control_weight : initialControlNetV2.weight,
beginEndStepPct,
controlMode: control_mode ?? initialControlNetV2.controlMode,
image: imageDTO ? imageDTOToImageWithDims(imageDTO) : null,
processedImage: processedImageDTO ? imageDTOToImageWithDims(processedImageDTO) : null,
processorConfig,
isProcessingImage: false,
const layer: ControlAdapterLayer = {
id: getCALayerId(uuidv4()),
bbox: null,
bboxNeedsUpdate: true,
isEnabled: true,
isFilterEnabled: true,
isSelected: true,
opacity: 1,
type: 'control_adapter_layer',
x: 0,
y: 0,
controlAdapter: {
id: uuidv4(),
type: 'controlnet',
model: zModelIdentifierField.parse(controlNetModel),
weight: typeof control_weight === 'number' ? control_weight : initialControlNetV2.weight,
beginEndStepPct,
controlMode: control_mode ?? initialControlNetV2.controlMode,
image: imageDTO ? imageDTOToImageWithDims(imageDTO) : null,
processedImage: processedImageDTO ? imageDTOToImageWithDims(processedImageDTO) : null,
processorConfig,
processorPendingBatchId: null,
},
};
return controlNet;
return layer;
};
const parseAllControlNetsV2: MetadataParseFunc<ControlNetConfigV2Metadata[]> = async (metadata) => {
try {
const controlNetsRaw = await getProperty(metadata, 'controlnets', isArray || undefined);
const parseResults = await Promise.allSettled(controlNetsRaw.map((cn) => parseControlNetV2(cn)));
const controlNets = parseResults
.filter((result): result is PromiseFulfilledResult<ControlNetConfigV2Metadata> => result.status === 'fulfilled')
.map((result) => result.value);
return controlNets;
} catch {
return [];
}
};
const parseT2IAdapterV2: MetadataParseFunc<T2IAdapterConfigV2Metadata> = async (metadataItem) => {
const parseT2IAdapterToControlAdapterLayer: MetadataParseFunc<ControlAdapterLayer> = async (metadataItem) => {
const t2i_adapter_model = await getProperty(metadataItem, 't2i_adapter_model');
const key = await getModelKey(t2i_adapter_model, 't2i_adapter');
const t2iAdapterModel = await fetchModelConfigWithTypeGuard(key, isT2IAdapterModelConfig);
@ -536,7 +620,6 @@ const parseT2IAdapterV2: MetadataParseFunc<T2IAdapterConfigV2Metadata> = async (
.catch(null)
.parse(await getProperty(metadataItem, 'end_step_percent'));
const id = uuidv4();
const defaultPreprocessor = t2iAdapterModel.default_settings?.preprocessor;
const processorConfig = isProcessorTypeV2(defaultPreprocessor)
? CA_PROCESSOR_DATA[defaultPreprocessor].buildDefaults()
@ -548,35 +631,34 @@ const parseT2IAdapterV2: MetadataParseFunc<T2IAdapterConfigV2Metadata> = async (
const imageDTO = image ? await getImageDTO(image.image_name) : null;
const processedImageDTO = processedImage ? await getImageDTO(processedImage.image_name) : null;
const t2iAdapter: T2IAdapterConfigV2Metadata = {
id,
type: 't2i_adapter',
model: zModelIdentifierField.parse(t2iAdapterModel),
weight: typeof weight === 'number' ? weight : initialT2IAdapterV2.weight,
beginEndStepPct,
image: imageDTO ? imageDTOToImageWithDims(imageDTO) : null,
processedImage: processedImageDTO ? imageDTOToImageWithDims(processedImageDTO) : null,
processorConfig,
isProcessingImage: false,
const layer: ControlAdapterLayer = {
id: getCALayerId(uuidv4()),
bbox: null,
bboxNeedsUpdate: true,
isEnabled: true,
isFilterEnabled: true,
isSelected: true,
opacity: 1,
type: 'control_adapter_layer',
x: 0,
y: 0,
controlAdapter: {
id: uuidv4(),
type: 't2i_adapter',
model: zModelIdentifierField.parse(t2iAdapterModel),
weight: typeof weight === 'number' ? weight : initialT2IAdapterV2.weight,
beginEndStepPct,
image: imageDTO ? imageDTOToImageWithDims(imageDTO) : null,
processedImage: processedImageDTO ? imageDTOToImageWithDims(processedImageDTO) : null,
processorConfig,
processorPendingBatchId: null,
},
};
return t2iAdapter;
return layer;
};
const parseAllT2IAdaptersV2: MetadataParseFunc<T2IAdapterConfigV2Metadata[]> = async (metadata) => {
try {
const t2iAdaptersRaw = await getProperty(metadata, 't2iAdapters', isArray);
const parseResults = await Promise.allSettled(t2iAdaptersRaw.map((t2iAdapter) => parseT2IAdapterV2(t2iAdapter)));
const t2iAdapters = parseResults
.filter((result): result is PromiseFulfilledResult<T2IAdapterConfigV2Metadata> => result.status === 'fulfilled')
.map((result) => result.value);
return t2iAdapters;
} catch {
return [];
}
};
const parseIPAdapterV2: MetadataParseFunc<IPAdapterConfigV2Metadata> = async (metadataItem) => {
const parseIPAdapterToIPAdapterLayer: MetadataParseFunc<IPAdapterLayer> = async (metadataItem) => {
const ip_adapter_model = await getProperty(metadataItem, 'ip_adapter_model');
const key = await getModelKey(ip_adapter_model, 'ip_adapter');
const ipAdapterModel = await fetchModelConfigWithTypeGuard(key, isIPAdapterModelConfig);
@ -602,39 +684,32 @@ const parseIPAdapterV2: MetadataParseFunc<IPAdapterConfigV2Metadata> = async (me
.catch(null)
.parse(await getProperty(metadataItem, 'end_step_percent'));
const id = uuidv4();
const beginEndStepPct: [number, number] = [
begin_step_percent ?? initialIPAdapterV2.beginEndStepPct[0],
end_step_percent ?? initialIPAdapterV2.beginEndStepPct[1],
];
const imageDTO = image ? await getImageDTO(image.image_name) : null;
const ipAdapter: IPAdapterConfigV2Metadata = {
id,
type: 'ip_adapter',
model: zModelIdentifierField.parse(ipAdapterModel),
weight: typeof weight === 'number' ? weight : initialIPAdapterV2.weight,
beginEndStepPct,
image: imageDTO ? imageDTOToImageWithDims(imageDTO) : null,
clipVisionModel: initialIPAdapterV2.clipVisionModel, // TODO: This needs to be added to the zIPAdapterField...
method: method ?? initialIPAdapterV2.method,
const layer: IPAdapterLayer = {
id: getIPALayerId(uuidv4()),
type: 'ip_adapter_layer',
isEnabled: true,
isSelected: true,
ipAdapter: {
id: uuidv4(),
type: 'ip_adapter',
model: zModelIdentifierField.parse(ipAdapterModel),
weight: typeof weight === 'number' ? weight : initialIPAdapterV2.weight,
beginEndStepPct,
image: imageDTO ? imageDTOToImageWithDims(imageDTO) : null,
clipVisionModel: initialIPAdapterV2.clipVisionModel, // TODO: This needs to be added to the zIPAdapterField...
method: method ?? initialIPAdapterV2.method,
},
};
return ipAdapter;
};
const parseAllIPAdaptersV2: MetadataParseFunc<IPAdapterConfigV2Metadata[]> = async (metadata) => {
try {
const ipAdaptersRaw = await getProperty(metadata, 'ipAdapters', isArray);
const parseResults = await Promise.allSettled(ipAdaptersRaw.map((ipAdapter) => parseIPAdapterV2(ipAdapter)));
const ipAdapters = parseResults
.filter((result): result is PromiseFulfilledResult<IPAdapterConfigV2Metadata> => result.status === 'fulfilled')
.map((result) => result.value);
return ipAdapters;
} catch {
return [];
}
return layer;
};
//#endregion
export const parsers = {
createdBy: parseCreatedBy,
@ -647,7 +722,6 @@ export const parsers = {
cfgScale: parseCFGScale,
cfgRescaleMultiplier: parseCFGRescaleMultiplier,
scheduler: parseScheduler,
initialImage: parseInitialImage,
width: parseWidth,
height: parseHeight,
steps: parseSteps,
@ -672,10 +746,9 @@ export const parsers = {
t2iAdapters: parseAllT2IAdapters,
ipAdapter: parseIPAdapter,
ipAdapters: parseAllIPAdapters,
controlNetV2: parseControlNetV2,
controlNetsV2: parseAllControlNetsV2,
t2iAdapterV2: parseT2IAdapterV2,
t2iAdaptersV2: parseAllT2IAdaptersV2,
ipAdapterV2: parseIPAdapterV2,
ipAdaptersV2: parseAllIPAdaptersV2,
controlNetToControlLayer: parseControlNetToControlAdapterLayer,
t2iAdapterToControlAdapterLayer: parseT2IAdapterToControlAdapterLayer,
ipAdapterToIPAdapterLayer: parseIPAdapterToIPAdapterLayer,
layer: parseLayer,
layers: parseLayers,
} as const;
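// Illustrative sketch (helper name assumed) of consuming the registry above:
// each entry is an independent MetadataParseFunc, so recall-all logic can try
// them one by one and treat any rejection as "field absent".
const safeParse = async <T>(parse: MetadataParseFunc<T>, metadata: unknown): Promise<T | null> => {
  try {
    return await parse(metadata);
  } catch {
    return null; // this field is missing or invalid in the metadata
  }
};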

View File

@ -1,4 +1,5 @@
import { getStore } from 'app/store/nanostores/store';
import { deepClone } from 'common/util/deepClone';
import {
controlAdapterRecalled,
controlNetsReset,
@ -6,31 +7,32 @@ import {
t2iAdaptersReset,
} from 'features/controlAdapters/store/controlAdaptersSlice';
import {
caLayerAdded,
caLayerControlNetsDeleted,
caLayerT2IAdaptersDeleted,
allLayersDeleted,
caLayerRecalled,
getCALayerId,
getIPALayerId,
getRGLayerId,
heightChanged,
iiLayerAdded,
ipaLayerAdded,
ipaLayersDeleted,
iiLayerRecalled,
ipaLayerRecalled,
negativePrompt2Changed,
negativePromptChanged,
positivePrompt2Changed,
positivePromptChanged,
rgLayerRecalled,
widthChanged,
} from 'features/controlLayers/store/controlLayersSlice';
import type { Layer } from 'features/controlLayers/store/types';
import { setHrfEnabled, setHrfMethod, setHrfStrength } from 'features/hrf/store/hrfSlice';
import type { LoRA } from 'features/lora/store/loraSlice';
import { loraRecalled, lorasReset } from 'features/lora/store/loraSlice';
import type {
ControlNetConfigMetadata,
ControlNetConfigV2Metadata,
IPAdapterConfigMetadata,
IPAdapterConfigV2Metadata,
MetadataRecallFunc,
T2IAdapterConfigMetadata,
T2IAdapterConfigV2Metadata,
} from 'features/metadata/types';
import { fetchModelConfigByIdentifier } from 'features/metadata/util/modelFetchingHelpers';
import { modelSelected } from 'features/parameters/store/actions';
import {
setCfgRescaleMultiplier,
@ -72,7 +74,8 @@ import {
setRefinerStart,
setRefinerSteps,
} from 'features/sdxl/store/sdxlSlice';
import type { ImageDTO } from 'services/api/types';
import { getImageDTO } from 'services/api/endpoints/images';
import { v4 as uuidv4 } from 'uuid';
const recallPositivePrompt: MetadataRecallFunc<ParameterPositivePrompt> = (positivePrompt) => {
getStore().dispatch(positivePromptChanged(positivePrompt));
@ -106,10 +109,6 @@ const recallScheduler: MetadataRecallFunc<ParameterScheduler> = (scheduler) => {
getStore().dispatch(setScheduler(scheduler));
};
const recallInitialImage: MetadataRecallFunc<ImageDTO> = async (imageDTO) => {
getStore().dispatch(iiLayerAdded(imageDTO));
};
const setSizeOptions = { updateAspectRatio: true, clamp: true };
const recallWidth: MetadataRecallFunc<ParameterWidth> = (width) => {
@ -244,50 +243,96 @@ const recallIPAdapters: MetadataRecallFunc<IPAdapterConfigMetadata[]> = (ipAdapt
});
};
//#region V2/Control Layer
const recallControlNetV2: MetadataRecallFunc<ControlNetConfigV2Metadata> = (controlNet) => {
getStore().dispatch(caLayerAdded(controlNet));
};
const recallControlNetsV2: MetadataRecallFunc<ControlNetConfigV2Metadata[]> = (controlNets) => {
//#region Control Layers
const recallLayer: MetadataRecallFunc<Layer> = async (layer) => {
const { dispatch } = getStore();
dispatch(caLayerControlNetsDeleted());
if (!controlNets.length) {
// We need to check for the existence of all images and models when recalling. If they do not exist, SMITE THEM!
// Also, we need fresh IDs for all objects when recalling, to prevent multiple layers with the same ID.
if (layer.type === 'control_adapter_layer') {
const clone = deepClone(layer);
if (clone.controlAdapter.image) {
const imageDTO = await getImageDTO(clone.controlAdapter.image.name);
if (!imageDTO) {
clone.controlAdapter.image = null;
}
}
if (clone.controlAdapter.processedImage) {
const imageDTO = await getImageDTO(clone.controlAdapter.processedImage.name);
if (!imageDTO) {
clone.controlAdapter.processedImage = null;
}
}
if (clone.controlAdapter.model) {
try {
await fetchModelConfigByIdentifier(clone.controlAdapter.model);
} catch {
clone.controlAdapter.model = null;
}
}
clone.id = getCALayerId(uuidv4());
clone.controlAdapter.id = uuidv4();
dispatch(caLayerRecalled(clone));
return;
}
controlNets.forEach((controlNet) => {
dispatch(caLayerAdded(controlNet));
});
};
const recallT2IAdapterV2: MetadataRecallFunc<T2IAdapterConfigV2Metadata> = (t2iAdapter) => {
getStore().dispatch(caLayerAdded(t2iAdapter));
};
const recallT2IAdaptersV2: MetadataRecallFunc<T2IAdapterConfigV2Metadata[]> = (t2iAdapters) => {
const { dispatch } = getStore();
dispatch(caLayerT2IAdaptersDeleted());
if (!t2iAdapters.length) {
if (layer.type === 'ip_adapter_layer') {
const clone = deepClone(layer);
if (clone.ipAdapter.image) {
const imageDTO = await getImageDTO(clone.ipAdapter.image.name);
if (!imageDTO) {
clone.ipAdapter.image = null;
}
}
if (clone.ipAdapter.model) {
try {
await fetchModelConfigByIdentifier(clone.ipAdapter.model);
} catch {
clone.ipAdapter.model = null;
}
}
clone.id = getIPALayerId(uuidv4());
clone.ipAdapter.id = uuidv4();
dispatch(ipaLayerRecalled(clone));
return;
}
t2iAdapters.forEach((t2iAdapter) => {
dispatch(caLayerAdded(t2iAdapter));
});
};
const recallIPAdapterV2: MetadataRecallFunc<IPAdapterConfigV2Metadata> = (ipAdapter) => {
getStore().dispatch(ipaLayerAdded(ipAdapter));
};
if (layer.type === 'regional_guidance_layer') {
const clone = deepClone(layer);
// Strip out the uploaded mask image property - this is an intermediate image
clone.uploadedMaskImage = null;
const recallIPAdaptersV2: MetadataRecallFunc<IPAdapterConfigV2Metadata[]> = (ipAdapters) => {
const { dispatch } = getStore();
dispatch(ipaLayersDeleted());
if (!ipAdapters.length) {
for (const ipAdapter of clone.ipAdapters) {
if (ipAdapter.image) {
const imageDTO = await getImageDTO(ipAdapter.image.name);
if (!imageDTO) {
ipAdapter.image = null;
}
}
if (ipAdapter.model) {
try {
await fetchModelConfigByIdentifier(ipAdapter.model);
} catch {
ipAdapter.model = null;
}
}
ipAdapter.id = uuidv4();
}
clone.id = getRGLayerId(uuidv4());
dispatch(rgLayerRecalled(clone));
return;
}
ipAdapters.forEach((ipAdapter) => {
dispatch(ipaLayerAdded(ipAdapter));
});
if (layer.type === 'initial_image_layer') {
dispatch(iiLayerRecalled(layer));
return;
}
};
const recallLayers: MetadataRecallFunc<Layer[]> = (layers) => {
const { dispatch } = getStore();
dispatch(allLayersDeleted());
for (const l of layers) {
recallLayer(l);
}
};
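Each branch of `recallLayer` applies the same pruning rule before dispatching: any referenced image or model that no longer exists on the server is nulled out rather than failing the whole recall, and every recalled object gets a fresh ID. A minimal sketch of the image half of that rule (the `pruneMissingImage` wrapper is hypothetical; `getImageDTO` and `ImageWithDims` are the real imports used elsewhere in this commit):

```ts
import { getImageDTO } from 'services/api/endpoints/images';
import type { ImageWithDims } from 'features/controlLayers/util/controlAdapters';

// Null out an image reference whose backing image has been deleted server-side.
const pruneMissingImage = async (image: ImageWithDims | null): Promise<ImageWithDims | null> => {
  if (!image) {
    return null;
  }
  const imageDTO = await getImageDTO(image.name);
  return imageDTO ? image : null;
};
```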
export const recallers = {
@ -299,7 +344,6 @@ export const recallers = {
cfgScale: recallCFGScale,
cfgRescaleMultiplier: recallCFGRescaleMultiplier,
scheduler: recallScheduler,
initialImage: recallInitialImage,
width: recallWidth,
height: recallHeight,
steps: recallSteps,
@ -324,10 +368,6 @@ export const recallers = {
t2iAdapter: recallT2IAdapter,
ipAdapters: recallIPAdapters,
ipAdapter: recallIPAdapter,
controlNetV2: recallControlNetV2,
controlNetsV2: recallControlNetsV2,
t2iAdapterV2: recallT2IAdapterV2,
t2iAdaptersV2: recallT2IAdaptersV2,
ipAdapterV2: recallIPAdapterV2,
ipAdaptersV2: recallIPAdaptersV2,
layer: recallLayer,
layers: recallLayers,
} as const;

View File

@ -1,17 +1,16 @@
import { getStore } from 'app/store/nanostores/store';
import type { Layer } from 'features/controlLayers/store/types';
import type { LoRA } from 'features/lora/store/loraSlice';
import type {
ControlNetConfigMetadata,
ControlNetConfigV2Metadata,
IPAdapterConfigMetadata,
IPAdapterConfigV2Metadata,
MetadataValidateFunc,
T2IAdapterConfigMetadata,
T2IAdapterConfigV2Metadata,
} from 'features/metadata/types';
import { InvalidModelConfigError } from 'features/metadata/util/modelFetchingHelpers';
import type { ParameterSDXLRefinerModel, ParameterVAEModel } from 'features/parameters/types/parameterSchemas';
import type { BaseModelType } from 'services/api/types';
import { assert } from 'tsafe';
/**
* Checks the given base model type against the currently-selected model's base type and throws an error if they are
@ -111,58 +110,39 @@ const validateIPAdapters: MetadataValidateFunc<IPAdapterConfigMetadata[]> = (ipA
return new Promise((resolve) => resolve(validatedIPAdapters));
};
const validateControlNetV2: MetadataValidateFunc<ControlNetConfigV2Metadata> = (controlNet) => {
validateBaseCompatibility(controlNet.model?.base, 'ControlNet incompatible with currently-selected model');
return new Promise((resolve) => resolve(controlNet));
};
const validateControlNetsV2: MetadataValidateFunc<ControlNetConfigV2Metadata[]> = (controlNets) => {
const validatedControlNets: ControlNetConfigV2Metadata[] = [];
controlNets.forEach((controlNet) => {
try {
validateBaseCompatibility(controlNet.model?.base, 'ControlNet incompatible with currently-selected model');
validatedControlNets.push(controlNet);
} catch {
// This is a no-op - we want to continue validating the rest of the ControlNets, and an empty list is valid.
const validateLayer: MetadataValidateFunc<Layer> = async (layer) => {
if (layer.type === 'control_adapter_layer') {
const model = layer.controlAdapter.model;
assert(model, 'Control Adapter layer missing model');
validateBaseCompatibility(model.base, 'Layer incompatible with currently-selected model');
}
if (layer.type === 'ip_adapter_layer') {
const model = layer.ipAdapter.model;
assert(model, 'IP Adapter layer missing model');
validateBaseCompatibility(model.base, 'Layer incompatible with currently-selected model');
}
if (layer.type === 'regional_guidance_layer') {
for (const ipa of layer.ipAdapters) {
const model = ipa.model;
assert(model, 'Regional guidance IP Adapter missing model');
validateBaseCompatibility(model.base, 'Layer incompatible with currently-selected model');
}
});
return new Promise((resolve) => resolve(validatedControlNets));
}
return layer;
};
const validateT2IAdapterV2: MetadataValidateFunc<T2IAdapterConfigV2Metadata> = (t2iAdapter) => {
validateBaseCompatibility(t2iAdapter.model?.base, 'T2I Adapter incompatible with currently-selected model');
return new Promise((resolve) => resolve(t2iAdapter));
};
const validateT2IAdaptersV2: MetadataValidateFunc<T2IAdapterConfigV2Metadata[]> = (t2iAdapters) => {
const validatedT2IAdapters: T2IAdapterConfigV2Metadata[] = [];
t2iAdapters.forEach((t2iAdapter) => {
const validateLayers: MetadataValidateFunc<Layer[]> = async (layers) => {
const validatedLayers: Layer[] = [];
for (const l of layers) {
try {
validateBaseCompatibility(t2iAdapter.model?.base, 'T2I Adapter incompatible with currently-selected model');
validatedT2IAdapters.push(t2iAdapter);
const validated = await validateLayer(l);
validatedLayers.push(validated);
} catch {
// This is a no-op - we want to continue validating the rest of the T2I Adapters, and an empty list is valid.
// This is a no-op - we want to continue validating the rest of the layers, and an empty list is valid.
}
});
return new Promise((resolve) => resolve(validatedT2IAdapters));
};
const validateIPAdapterV2: MetadataValidateFunc<IPAdapterConfigV2Metadata> = (ipAdapter) => {
validateBaseCompatibility(ipAdapter.model?.base, 'IP Adapter incompatible with currently-selected model');
return new Promise((resolve) => resolve(ipAdapter));
};
const validateIPAdaptersV2: MetadataValidateFunc<IPAdapterConfigV2Metadata[]> = (ipAdapters) => {
const validatedIPAdapters: IPAdapterConfigV2Metadata[] = [];
ipAdapters.forEach((ipAdapter) => {
try {
validateBaseCompatibility(ipAdapter.model?.base, 'IP Adapter incompatible with currently-selected model');
validatedIPAdapters.push(ipAdapter);
} catch {
// This is a no-op - we want to continue validating the rest of the IP Adapters, and an empty list is valid.
}
});
return new Promise((resolve) => resolve(validatedIPAdapters));
}
return new Promise((resolve) => resolve(validatedLayers));
};
export const validators = {
@ -176,10 +156,6 @@ export const validators = {
t2iAdapters: validateT2IAdapters,
ipAdapter: validateIPAdapter,
ipAdapters: validateIPAdapters,
controlNetV2: validateControlNetV2,
controlNetsV2: validateControlNetsV2,
t2iAdapterV2: validateT2IAdapterV2,
t2iAdaptersV2: validateT2IAdaptersV2,
ipAdapterV2: validateIPAdapterV2,
ipAdaptersV2: validateIPAdaptersV2,
layer: validateLayer,
layers: validateLayers,
} as const;
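A hypothetical end-to-end recall flow under these two registries: the validator drops incompatible layers instead of failing the batch, and the recaller then rebuilds state from whatever survives.

```ts
// Hypothetical glue, assuming `validators`, `recallers`, and the `Layer` type
// are imported from their respective modules.
const recallValidatedLayers = async (layers: Layer[]) => {
  const validated = await validators.layers(layers);
  recallers.layers(validated);
};
```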

View File

@ -1,6 +1,5 @@
import { Flex, Spacer } from '@invoke-ai/ui-library';
import { useAppSelector } from 'app/store/storeHooks';
import { ViewerButton } from 'features/gallery/components/ImageViewer/ViewerButton';
import AddNodeButton from 'features/nodes/components/flow/panels/TopPanel/AddNodeButton';
import ClearFlowButton from 'features/nodes/components/flow/panels/TopPanel/ClearFlowButton';
import SaveWorkflowButton from 'features/nodes/components/flow/panels/TopPanel/SaveWorkflowButton';
@ -23,7 +22,6 @@ const TopCenterPanel = () => {
<ClearFlowButton />
<SaveWorkflowButton />
<WorkflowLibraryMenu />
<ViewerButton />
</Flex>
);
};

View File

@ -2,25 +2,28 @@ import { getStore } from 'app/store/nanostores/store';
import type { RootState } from 'app/store/store';
import {
isControlAdapterLayer,
isInitialImageLayer,
isIPAdapterLayer,
isRegionalGuidanceLayer,
rgLayerMaskImageUploaded,
} from 'features/controlLayers/store/controlLayersSlice';
import {
type ControlNetConfigV2,
type ImageWithDims,
type IPAdapterConfigV2,
isControlNetConfigV2,
isT2IAdapterConfigV2,
type ProcessorConfig,
type T2IAdapterConfigV2,
import type { InitialImageLayer, Layer, RegionalGuidanceLayer } from 'features/controlLayers/store/types';
import type {
ControlNetConfigV2,
ImageWithDims,
IPAdapterConfigV2,
ProcessorConfig,
T2IAdapterConfigV2,
} from 'features/controlLayers/util/controlAdapters';
import { getRegionalPromptLayerBlobs } from 'features/controlLayers/util/getLayerBlobs';
import type { ImageField } from 'features/nodes/types/common';
import {
CONTROL_NET_COLLECT,
IMAGE_TO_LATENTS,
IP_ADAPTER_COLLECT,
NEGATIVE_CONDITIONING,
NEGATIVE_CONDITIONING_COLLECT,
NOISE,
POSITIVE_CONDITIONING,
POSITIVE_CONDITIONING_COLLECT,
PROMPT_REGION_INVERT_TENSOR_MASK_PREFIX,
@ -28,16 +31,20 @@ import {
PROMPT_REGION_NEGATIVE_COND_PREFIX,
PROMPT_REGION_POSITIVE_COND_INVERTED_PREFIX,
PROMPT_REGION_POSITIVE_COND_PREFIX,
RESIZE,
T2I_ADAPTER_COLLECT,
} from 'features/nodes/util/graph/constants';
import { upsertMetadata } from 'features/nodes/util/graph/metadata';
import { size } from 'lodash-es';
import { imagesApi } from 'services/api/endpoints/images';
import { getImageDTO, imagesApi } from 'services/api/endpoints/images';
import type {
BaseModelType,
CollectInvocation,
ControlNetInvocation,
CoreMetadataInvocation,
Edge,
ImageDTO,
ImageResizeInvocation,
ImageToLatentsInvocation,
IPAdapterInvocation,
NonNullableGraph,
S,
@ -45,369 +52,33 @@ import type {
} from 'services/api/types';
import { assert } from 'tsafe';
const buildControlImage = (
image: ImageWithDims | null,
processedImage: ImageWithDims | null,
processorConfig: ProcessorConfig | null
): ImageField => {
if (processedImage && processorConfig) {
// We've processed the image in the app - use it for the control image.
return {
image_name: processedImage.imageName,
};
} else if (image) {
// No processor selected but we have an image - assume the user supplied an already-processed image and use it as-is.
return {
image_name: image.imageName,
};
}
assert(false, 'Attempted to add unprocessed control image');
};
const buildControlNetMetadata = (controlNet: ControlNetConfigV2): S['ControlNetMetadataField'] => {
const { beginEndStepPct, controlMode, image, model, processedImage, processorConfig, weight } = controlNet;
assert(model, 'ControlNet model is required');
assert(image, 'ControlNet image is required');
const processed_image =
processedImage && processorConfig
? {
image_name: processedImage.imageName,
}
: null;
return {
control_model: model,
control_weight: weight,
control_mode: controlMode,
begin_step_percent: beginEndStepPct[0],
end_step_percent: beginEndStepPct[1],
resize_mode: 'just_resize',
image: {
image_name: image.imageName,
},
processed_image,
};
};
const addControlNetCollectorSafe = (graph: NonNullableGraph, denoiseNodeId: string) => {
if (graph.nodes[CONTROL_NET_COLLECT]) {
// You see, we've already got one!
return;
}
// Add the ControlNet collector
const controlNetCollectNode: CollectInvocation = {
id: CONTROL_NET_COLLECT,
type: 'collect',
is_intermediate: true,
};
graph.nodes[CONTROL_NET_COLLECT] = controlNetCollectNode;
graph.edges.push({
source: { node_id: CONTROL_NET_COLLECT, field: 'collection' },
destination: {
node_id: denoiseNodeId,
field: 'control',
},
});
};
const addGlobalControlNetsToGraph = async (
controlNets: ControlNetConfigV2[],
export const addControlLayersToGraph = async (
state: RootState,
graph: NonNullableGraph,
denoiseNodeId: string
) => {
if (controlNets.length === 0) {
return;
}
const controlNetMetadata: CoreMetadataInvocation['controlnets'] = [];
addControlNetCollectorSafe(graph, denoiseNodeId);
for (const controlNet of controlNets) {
if (!controlNet.model) {
return;
}
const { id, beginEndStepPct, controlMode, image, model, processedImage, processorConfig, weight } = controlNet;
const controlNetNode: ControlNetInvocation = {
id: `control_net_${id}`,
type: 'controlnet',
is_intermediate: true,
begin_step_percent: beginEndStepPct[0],
end_step_percent: beginEndStepPct[1],
control_mode: controlMode,
resize_mode: 'just_resize',
control_model: model,
control_weight: weight,
image: buildControlImage(image, processedImage, processorConfig),
};
graph.nodes[controlNetNode.id] = controlNetNode;
controlNetMetadata.push(buildControlNetMetadata(controlNet));
graph.edges.push({
source: { node_id: controlNetNode.id, field: 'control' },
destination: {
node_id: CONTROL_NET_COLLECT,
field: 'item',
},
});
}
upsertMetadata(graph, { controlnets: controlNetMetadata });
};
const buildT2IAdapterMetadata = (t2iAdapter: T2IAdapterConfigV2): S['T2IAdapterMetadataField'] => {
const { beginEndStepPct, image, model, processedImage, processorConfig, weight } = t2iAdapter;
assert(model, 'T2I Adapter model is required');
assert(image, 'T2I Adapter image is required');
const processed_image =
processedImage && processorConfig
? {
image_name: processedImage.imageName,
}
: null;
return {
t2i_adapter_model: model,
weight,
begin_step_percent: beginEndStepPct[0],
end_step_percent: beginEndStepPct[1],
resize_mode: 'just_resize',
image: {
image_name: image.imageName,
},
processed_image,
};
};
const addT2IAdapterCollectorSafe = (graph: NonNullableGraph, denoiseNodeId: string) => {
if (graph.nodes[T2I_ADAPTER_COLLECT]) {
// You see, we've already got one!
return;
}
// Even though denoise_latents' t2i_adapter input accepts a collection or a scalar, keep it simple and always use a collect
const t2iAdapterCollectNode: CollectInvocation = {
id: T2I_ADAPTER_COLLECT,
type: 'collect',
is_intermediate: true,
};
graph.nodes[T2I_ADAPTER_COLLECT] = t2iAdapterCollectNode;
graph.edges.push({
source: { node_id: T2I_ADAPTER_COLLECT, field: 'collection' },
destination: {
node_id: denoiseNodeId,
field: 't2i_adapter',
},
});
};
const addGlobalT2IAdaptersToGraph = async (
t2iAdapters: T2IAdapterConfigV2[],
graph: NonNullableGraph,
denoiseNodeId: string
) => {
if (t2iAdapters.length === 0) {
return;
}
const t2iAdapterMetadata: CoreMetadataInvocation['t2iAdapters'] = [];
addT2IAdapterCollectorSafe(graph, denoiseNodeId);
for (const t2iAdapter of t2iAdapters) {
if (!t2iAdapter.model) {
return;
}
const { id, beginEndStepPct, image, model, processedImage, processorConfig, weight } = t2iAdapter;
const t2iAdapterNode: T2IAdapterInvocation = {
id: `t2i_adapter_${id}`,
type: 't2i_adapter',
is_intermediate: true,
begin_step_percent: beginEndStepPct[0],
end_step_percent: beginEndStepPct[1],
resize_mode: 'just_resize',
t2i_adapter_model: model,
weight: weight,
image: buildControlImage(image, processedImage, processorConfig),
};
graph.nodes[t2iAdapterNode.id] = t2iAdapterNode;
t2iAdapterMetadata.push(buildT2IAdapterMetadata(t2iAdapter));
graph.edges.push({
source: { node_id: t2iAdapterNode.id, field: 't2i_adapter' },
destination: {
node_id: T2I_ADAPTER_COLLECT,
field: 'item',
},
});
}
upsertMetadata(graph, { t2iAdapters: t2iAdapterMetadata });
};
const buildIPAdapterMetadata = (ipAdapter: IPAdapterConfigV2): S['IPAdapterMetadataField'] => {
const { weight, model, clipVisionModel, method, beginEndStepPct, image } = ipAdapter;
assert(model, 'IP Adapter model is required');
assert(image, 'IP Adapter image is required');
return {
ip_adapter_model: model,
clip_vision_model: clipVisionModel,
weight,
method,
begin_step_percent: beginEndStepPct[0],
end_step_percent: beginEndStepPct[1],
image: {
image_name: image.imageName,
},
};
};
const addIPAdapterCollectorSafe = (graph: NonNullableGraph, denoiseNodeId: string) => {
if (graph.nodes[IP_ADAPTER_COLLECT]) {
// You see, we've already got one!
return;
}
const ipAdapterCollectNode: CollectInvocation = {
id: IP_ADAPTER_COLLECT,
type: 'collect',
is_intermediate: true,
};
graph.nodes[IP_ADAPTER_COLLECT] = ipAdapterCollectNode;
graph.edges.push({
source: { node_id: IP_ADAPTER_COLLECT, field: 'collection' },
destination: {
node_id: denoiseNodeId,
field: 'ip_adapter',
},
});
};
const addGlobalIPAdaptersToGraph = async (
ipAdapters: IPAdapterConfigV2[],
graph: NonNullableGraph,
denoiseNodeId: string
) => {
if (ipAdapters.length === 0) {
return;
}
const ipAdapterMetadata: CoreMetadataInvocation['ipAdapters'] = [];
addIPAdapterCollectorSafe(graph, denoiseNodeId);
for (const ipAdapter of ipAdapters) {
const { id, weight, model, clipVisionModel, method, beginEndStepPct, image } = ipAdapter;
assert(image, 'IP Adapter image is required');
assert(model, 'IP Adapter model is required');
const ipAdapterNode: IPAdapterInvocation = {
id: `ip_adapter_${id}`,
type: 'ip_adapter',
is_intermediate: true,
weight,
method,
ip_adapter_model: model,
clip_vision_model: clipVisionModel,
begin_step_percent: beginEndStepPct[0],
end_step_percent: beginEndStepPct[1],
image: {
image_name: image.imageName,
},
};
graph.nodes[ipAdapterNode.id] = ipAdapterNode;
ipAdapterMetadata.push(buildIPAdapterMetadata(ipAdapter));
graph.edges.push({
source: { node_id: ipAdapterNode.id, field: 'ip_adapter' },
destination: {
node_id: IP_ADAPTER_COLLECT,
field: 'item',
},
});
}
upsertMetadata(graph, { ipAdapters: ipAdapterMetadata });
};
export const addControlLayersToGraph = async (state: RootState, graph: NonNullableGraph, denoiseNodeId: string) => {
const { dispatch } = getStore();
): Promise<Layer[]> => {
const mainModel = state.generation.model;
assert(mainModel, 'Missing main model when building graph');
const isSDXL = mainModel.base === 'sdxl';
// Add global control adapters
const globalControlNets = state.controlLayers.present.layers
// Must be a CA layer
.filter(isControlAdapterLayer)
// Must be enabled
.filter((l) => l.isEnabled)
// We want the CAs themselves
.map((l) => l.controlAdapter)
// Must be a ControlNet
.filter(isControlNetConfigV2)
.filter((ca) => {
const hasModel = Boolean(ca.model);
const modelMatchesBase = ca.model?.base === mainModel.base;
const hasControlImage = ca.image || (ca.processedImage && ca.processorConfig);
return hasModel && modelMatchesBase && hasControlImage;
});
addGlobalControlNetsToGraph(globalControlNets, graph, denoiseNodeId);
// Filter out layers with incompatible base model, missing control image
const validLayers = state.controlLayers.present.layers.filter((l) => isValidLayer(l, mainModel.base));
const globalT2IAdapters = state.controlLayers.present.layers
// Must be a CA layer
.filter(isControlAdapterLayer)
// Must be enabled
.filter((l) => l.isEnabled)
// We want the CAs themselves
.map((l) => l.controlAdapter)
// Must be a T2I Adapter
.filter(isT2IAdapterConfigV2)
.filter((ca) => {
const hasModel = Boolean(ca.model);
const modelMatchesBase = ca.model?.base === mainModel.base;
const hasControlImage = ca.image || (ca.processedImage && ca.processorConfig);
return hasModel && modelMatchesBase && hasControlImage;
});
addGlobalT2IAdaptersToGraph(globalT2IAdapters, graph, denoiseNodeId);
const validControlAdapters = validLayers.filter(isControlAdapterLayer).map((l) => l.controlAdapter);
for (const ca of validControlAdapters) {
addGlobalControlAdapterToGraph(ca, graph, denoiseNodeId);
}
const globalIPAdapters = state.controlLayers.present.layers
// Must be an IP Adapter layer
.filter(isIPAdapterLayer)
// Must be enabled
.filter((l) => l.isEnabled)
// We want the IP Adapters themselves
.map((l) => l.ipAdapter)
.filter((ca) => {
const hasModel = Boolean(ca.model);
const modelMatchesBase = ca.model?.base === mainModel.base;
const hasControlImage = Boolean(ca.image);
return hasModel && modelMatchesBase && hasControlImage;
});
addGlobalIPAdaptersToGraph(globalIPAdapters, graph, denoiseNodeId);
const rgLayers = state.controlLayers.present.layers
// Only RG layers get masks
.filter(isRegionalGuidanceLayer)
// Only visible layers are rendered on the canvas
.filter((l) => l.isEnabled)
// Only layers with prompts or IP adapters get added to the graph
.filter((l) => {
const hasTextPrompt = Boolean(l.positivePrompt || l.negativePrompt);
const hasIPAdapter = l.ipAdapters.length !== 0;
return hasTextPrompt || hasIPAdapter;
});
const layerIds = rgLayers.map((l) => l.id);
const blobs = await getRegionalPromptLayerBlobs(layerIds);
assert(size(blobs) === size(layerIds), 'Mismatch between layer IDs and blobs');
const validIPAdapters = validLayers.filter(isIPAdapterLayer).map((l) => l.ipAdapter);
for (const ipAdapter of validIPAdapters) {
addGlobalIPAdapterToGraph(ipAdapter, graph, denoiseNodeId);
}
const initialImageLayers = validLayers.filter(isInitialImageLayer);
assert(initialImageLayers.length <= 1, 'Only one initial image layer allowed');
if (initialImageLayers[0]) {
addInitialImageLayerToGraph(state, graph, denoiseNodeId, initialImageLayers[0]);
}
// TODO: We should probably just use conditioning collectors by default, and skip all this finagling with re-routing
// the existing conditioning nodes.
@ -470,22 +141,16 @@ export const addControlLayersToGraph = async (state: RootState, graph: NonNullab
},
});
// Upload the blobs to the backend, add each to graph
// TODO: Store the uploaded image names in redux to reuse them, so long as the layer hasn't otherwise changed. This
// would be a great perf win - not only would we skip re-uploading the same image, but we'd be able to use the node
// cache (currently, when we re-use the same mask data, since it is a different image, the node cache is not used).
for (const layer of rgLayers) {
const validRGLayers = validLayers.filter(isRegionalGuidanceLayer);
const layerIds = validRGLayers.map((l) => l.id);
const blobs = await getRegionalPromptLayerBlobs(layerIds);
assert(size(blobs) === size(layerIds), 'Mismatch between layer IDs and blobs');
for (const layer of validRGLayers) {
const blob = blobs[layer.id];
assert(blob, `Blob for layer ${layer.id} not found`);
const file = new File([blob], `${layer.id}_mask.png`, { type: 'image/png' });
const req = dispatch(
imagesApi.endpoints.uploadImage.initiate({ file, image_category: 'mask', is_intermediate: true })
);
req.reset();
// TODO: This will raise on network error
const { image_name } = await req.unwrap();
// Upload the mask image, or get the cached image if it exists
const { image_name } = await getMaskImage(layer, blob);
// The main mask-to-tensor node
const maskToTensorNode: S['AlphaMaskToTensorInvocation'] = {
@ -632,15 +297,11 @@ export const addControlLayersToGraph = async (state: RootState, graph: NonNullab
}
}
// TODO(psyche): For some reason, I have to explicitly annotate regionalIPAdapters here. Not sure why.
const regionalIPAdapters: IPAdapterConfigV2[] = layer.ipAdapters.filter((ipAdapter) => {
const hasModel = Boolean(ipAdapter.model);
const modelMatchesBase = ipAdapter.model?.base === mainModel.base;
const hasControlImage = Boolean(ipAdapter.image);
return hasModel && modelMatchesBase && hasControlImage;
});
const validRegionalIPAdapters: IPAdapterConfigV2[] = layer.ipAdapters.filter((ipa) =>
isValidIPAdapter(ipa, mainModel.base)
);
for (const ipAdapter of regionalIPAdapters) {
for (const ipAdapter of validRegionalIPAdapters) {
addIPAdapterCollectorSafe(graph, denoiseNodeId);
const { id, weight, model, clipVisionModel, method, beginEndStepPct, image } = ipAdapter;
assert(model, 'IP Adapter model is required');
@ -657,7 +318,7 @@ export const addControlLayersToGraph = async (state: RootState, graph: NonNullab
begin_step_percent: beginEndStepPct[0],
end_step_percent: beginEndStepPct[1],
image: {
image_name: image.imageName,
image_name: image.name,
},
};
@ -678,4 +339,369 @@ export const addControlLayersToGraph = async (state: RootState, graph: NonNullab
});
}
}
upsertMetadata(graph, { control_layers: { layers: validLayers, version: state.controlLayers.present._version } });
return validLayers;
};
const getMaskImage = async (layer: RegionalGuidanceLayer, blob: Blob): Promise<ImageDTO> => {
if (layer.uploadedMaskImage) {
const imageDTO = await getImageDTO(layer.uploadedMaskImage.name);
if (imageDTO) {
return imageDTO;
}
}
const { dispatch } = getStore();
// No cached mask, or the cached image no longer exists - we need to upload the mask image
const file = new File([blob], `${layer.id}_mask.png`, { type: 'image/png' });
const req = dispatch(
imagesApi.endpoints.uploadImage.initiate({ file, image_category: 'mask', is_intermediate: true })
);
req.reset();
const imageDTO = await req.unwrap();
dispatch(rgLayerMaskImageUploaded({ layerId: layer.id, imageDTO }));
return imageDTO;
};
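`getMaskImage` implements the caching TODO noted earlier in this file: the first upload is recorded on the layer via `rgLayerMaskImageUploaded`, and later builds reuse it unless the image has since been deleted. A sketch of the call site (types as imported above; top-level await assumed):

```ts
declare const layer: RegionalGuidanceLayer;
declare const blob: Blob;

// Re-running the graph builder with an unchanged mask hits the cached upload,
// so the downstream node cache can also be reused.
const maskDTO = await getMaskImage(layer, blob);
```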
const buildControlImage = (
image: ImageWithDims | null,
processedImage: ImageWithDims | null,
processorConfig: ProcessorConfig | null
): ImageField => {
if (processedImage && processorConfig) {
// We've processed the image in the app - use it for the control image.
return {
image_name: processedImage.name,
};
} else if (image) {
// No processor selected but we have an image - assume the user supplied an already-processed image and use it as-is.
return {
image_name: image.name,
};
}
assert(false, 'Attempted to add unprocessed control image');
};
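Worked through the three branches (values hypothetical): a processed image plus a processor config wins; a raw image alone is assumed to be pre-processed by the user; neither image is a programming error, caught by the assert.

```ts
declare const raw: ImageWithDims;
declare const processed: ImageWithDims;
declare const config: ProcessorConfig;

buildControlImage(raw, processed, config); // -> { image_name: processed.name }
buildControlImage(raw, null, null);        // -> { image_name: raw.name }
// buildControlImage(null, null, null)     // -> assertion failure
```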
const addGlobalControlAdapterToGraph = (
controlAdapter: ControlNetConfigV2 | T2IAdapterConfigV2,
graph: NonNullableGraph,
denoiseNodeId: string
) => {
if (controlAdapter.type === 'controlnet') {
addGlobalControlNetToGraph(controlAdapter, graph, denoiseNodeId);
}
if (controlAdapter.type === 't2i_adapter') {
addGlobalT2IAdapterToGraph(controlAdapter, graph, denoiseNodeId);
}
};
const addControlNetCollectorSafe = (graph: NonNullableGraph, denoiseNodeId: string) => {
if (graph.nodes[CONTROL_NET_COLLECT]) {
// You see, we've already got one!
return;
}
// Add the ControlNet collector
const controlNetCollectNode: CollectInvocation = {
id: CONTROL_NET_COLLECT,
type: 'collect',
is_intermediate: true,
};
graph.nodes[CONTROL_NET_COLLECT] = controlNetCollectNode;
graph.edges.push({
source: { node_id: CONTROL_NET_COLLECT, field: 'collection' },
destination: {
node_id: denoiseNodeId,
field: 'control',
},
});
};
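This helper and the two collector helpers below share one idempotent shape: create the collect node once, wire its `collection` output into the denoise node, and no-op on repeat calls. A generic sketch of that shape (the `addCollectorSafe` helper is hypothetical, not part of this commit):

```ts
const addCollectorSafe = (
  graph: NonNullableGraph,
  collectId: string,
  denoiseNodeId: string,
  denoiseField: string
) => {
  if (graph.nodes[collectId]) {
    // Already created by an earlier adapter of the same kind.
    return;
  }
  graph.nodes[collectId] = { id: collectId, type: 'collect', is_intermediate: true };
  graph.edges.push({
    source: { node_id: collectId, field: 'collection' },
    destination: { node_id: denoiseNodeId, field: denoiseField },
  });
};
```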
const addGlobalControlNetToGraph = (controlNet: ControlNetConfigV2, graph: NonNullableGraph, denoiseNodeId: string) => {
const { id, beginEndStepPct, controlMode, image, model, processedImage, processorConfig, weight } = controlNet;
assert(model, 'ControlNet model is required');
const controlImage = buildControlImage(image, processedImage, processorConfig);
addControlNetCollectorSafe(graph, denoiseNodeId);
const controlNetNode: ControlNetInvocation = {
id: `control_net_${id}`,
type: 'controlnet',
is_intermediate: true,
begin_step_percent: beginEndStepPct[0],
end_step_percent: beginEndStepPct[1],
control_mode: controlMode,
resize_mode: 'just_resize',
control_model: model,
control_weight: weight,
image: controlImage,
};
graph.nodes[controlNetNode.id] = controlNetNode;
graph.edges.push({
source: { node_id: controlNetNode.id, field: 'control' },
destination: {
node_id: CONTROL_NET_COLLECT,
field: 'item',
},
});
};
const addT2IAdapterCollectorSafe = (graph: NonNullableGraph, denoiseNodeId: string) => {
if (graph.nodes[T2I_ADAPTER_COLLECT]) {
// You see, we've already got one!
return;
}
// Even though denoise_latents' t2i_adapter input accepts a collection or a scalar, keep it simple and always use a collect
const t2iAdapterCollectNode: CollectInvocation = {
id: T2I_ADAPTER_COLLECT,
type: 'collect',
is_intermediate: true,
};
graph.nodes[T2I_ADAPTER_COLLECT] = t2iAdapterCollectNode;
graph.edges.push({
source: { node_id: T2I_ADAPTER_COLLECT, field: 'collection' },
destination: {
node_id: denoiseNodeId,
field: 't2i_adapter',
},
});
};
const addGlobalT2IAdapterToGraph = (t2iAdapter: T2IAdapterConfigV2, graph: NonNullableGraph, denoiseNodeId: string) => {
const { id, beginEndStepPct, image, model, processedImage, processorConfig, weight } = t2iAdapter;
assert(model, 'T2I Adapter model is required');
const controlImage = buildControlImage(image, processedImage, processorConfig);
addT2IAdapterCollectorSafe(graph, denoiseNodeId);
const t2iAdapterNode: T2IAdapterInvocation = {
id: `t2i_adapter_${id}`,
type: 't2i_adapter',
is_intermediate: true,
begin_step_percent: beginEndStepPct[0],
end_step_percent: beginEndStepPct[1],
resize_mode: 'just_resize',
t2i_adapter_model: model,
weight: weight,
image: controlImage,
};
graph.nodes[t2iAdapterNode.id] = t2iAdapterNode;
graph.edges.push({
source: { node_id: t2iAdapterNode.id, field: 't2i_adapter' },
destination: {
node_id: T2I_ADAPTER_COLLECT,
field: 'item',
},
});
};
const addIPAdapterCollectorSafe = (graph: NonNullableGraph, denoiseNodeId: string) => {
if (graph.nodes[IP_ADAPTER_COLLECT]) {
// You see, we've already got one!
return;
}
const ipAdapterCollectNode: CollectInvocation = {
id: IP_ADAPTER_COLLECT,
type: 'collect',
is_intermediate: true,
};
graph.nodes[IP_ADAPTER_COLLECT] = ipAdapterCollectNode;
graph.edges.push({
source: { node_id: IP_ADAPTER_COLLECT, field: 'collection' },
destination: {
node_id: denoiseNodeId,
field: 'ip_adapter',
},
});
};
const addGlobalIPAdapterToGraph = (ipAdapter: IPAdapterConfigV2, graph: NonNullableGraph, denoiseNodeId: string) => {
addIPAdapterCollectorSafe(graph, denoiseNodeId);
const { id, weight, model, clipVisionModel, method, beginEndStepPct, image } = ipAdapter;
assert(image, 'IP Adapter image is required');
assert(model, 'IP Adapter model is required');
const ipAdapterNode: IPAdapterInvocation = {
id: `ip_adapter_${id}`,
type: 'ip_adapter',
is_intermediate: true,
weight,
method,
ip_adapter_model: model,
clip_vision_model: clipVisionModel,
begin_step_percent: beginEndStepPct[0],
end_step_percent: beginEndStepPct[1],
image: {
image_name: image.name,
},
};
graph.nodes[ipAdapterNode.id] = ipAdapterNode;
graph.edges.push({
source: { node_id: ipAdapterNode.id, field: 'ip_adapter' },
destination: {
node_id: IP_ADAPTER_COLLECT,
field: 'item',
},
});
};
const addInitialImageLayerToGraph = (
state: RootState,
graph: NonNullableGraph,
denoiseNodeId: string,
layer: InitialImageLayer
) => {
const { vaePrecision, model } = state.generation;
const { refinerModel, refinerStart } = state.sdxl;
const { width, height } = state.controlLayers.present.size;
assert(layer.isEnabled, 'Initial image layer is not enabled');
assert(layer.image, 'Initial image layer has no image');
const isSDXL = model?.base === 'sdxl';
const useRefinerStartEnd = isSDXL && Boolean(refinerModel);
const denoiseNode = graph.nodes[denoiseNodeId];
assert(denoiseNode?.type === 'denoise_latents', `Missing denoise node or incorrect type: ${denoiseNode?.type}`);
const { denoisingStrength } = layer;
denoiseNode.denoising_start = useRefinerStartEnd
? Math.min(refinerStart, 1 - denoisingStrength)
: 1 - denoisingStrength;
denoiseNode.denoising_end = useRefinerStartEnd ? refinerStart : 1;
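// Worked example: denoisingStrength = 0.7 with no refiner gives
// denoising_start = 0.3 and denoising_end = 1 (the last 70% of the schedule is denoised).
// With an SDXL refiner at refinerStart = 0.8: denoising_start = min(0.8, 0.3) = 0.3,
// denoising_end = 0.8, and the [0.8, 1] tail is left for the refiner.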
const i2lNode: ImageToLatentsInvocation = {
type: 'i2l',
id: IMAGE_TO_LATENTS,
is_intermediate: true,
use_cache: true,
fp32: vaePrecision === 'fp32',
};
graph.nodes[i2lNode.id] = i2lNode;
graph.edges.push({
source: {
node_id: IMAGE_TO_LATENTS,
field: 'latents',
},
destination: {
node_id: denoiseNode.id,
field: 'latents',
},
});
if (layer.image.width !== width || layer.image.height !== height) {
// The init image needs to be resized to the specified width and height before being passed to `IMAGE_TO_LATENTS`
// Create a resize node, explicitly setting its image
const resizeNode: ImageResizeInvocation = {
id: RESIZE,
type: 'img_resize',
image: {
image_name: layer.image.name,
},
is_intermediate: true,
width,
height,
};
graph.nodes[RESIZE] = resizeNode;
// The `RESIZE` node then passes its image to `IMAGE_TO_LATENTS`
graph.edges.push({
source: { node_id: RESIZE, field: 'image' },
destination: {
node_id: IMAGE_TO_LATENTS,
field: 'image',
},
});
// The `RESIZE` node also passes its width and height to `NOISE`
graph.edges.push({
source: { node_id: RESIZE, field: 'width' },
destination: {
node_id: NOISE,
field: 'width',
},
});
graph.edges.push({
source: { node_id: RESIZE, field: 'height' },
destination: {
node_id: NOISE,
field: 'height',
},
});
} else {
// We are not resizing, so we need to set the image on the `IMAGE_TO_LATENTS` node explicitly
i2lNode.image = {
image_name: layer.image.name,
};
// Pass the image's dimensions to the `NOISE` node
graph.edges.push({
source: { node_id: IMAGE_TO_LATENTS, field: 'width' },
destination: {
node_id: NOISE,
field: 'width',
},
});
graph.edges.push({
source: { node_id: IMAGE_TO_LATENTS, field: 'height' },
destination: {
node_id: NOISE,
field: 'height',
},
});
}
upsertMetadata(graph, { generation_mode: isSDXL ? 'sdxl_img2img' : 'img2img' });
};
const isValidControlAdapter = (ca: ControlNetConfigV2 | T2IAdapterConfigV2, base: BaseModelType): boolean => {
// Must have a model that matches the current base and a control image
const hasModel = Boolean(ca.model);
const modelMatchesBase = ca.model?.base === base;
const hasControlImage = Boolean(ca.image || (ca.processedImage && ca.processorConfig));
return hasModel && modelMatchesBase && hasControlImage;
};
const isValidIPAdapter = (ipa: IPAdapterConfigV2, base: BaseModelType): boolean => {
// Must have a model that matches the current base and an image
const hasModel = Boolean(ipa.model);
const modelMatchesBase = ipa.model?.base === base;
const hasImage = Boolean(ipa.image);
return hasModel && modelMatchesBase && hasImage;
};
const isValidLayer = (layer: Layer, base: BaseModelType) => {
if (!layer.isEnabled) {
return false;
}
if (isControlAdapterLayer(layer)) {
return isValidControlAdapter(layer.controlAdapter, base);
}
if (isIPAdapterLayer(layer)) {
return isValidIPAdapter(layer.ipAdapter, base);
}
if (isInitialImageLayer(layer)) {
if (!layer.image) {
return false;
}
return true;
}
if (isRegionalGuidanceLayer(layer)) {
if (layer.maskObjects.length === 0) {
// Layer has no mask, meaning any guidance would be applied to an empty region.
return false;
}
const hasTextPrompt = Boolean(layer.positivePrompt) || Boolean(layer.negativePrompt);
const hasIPAdapter = layer.ipAdapters.filter((ipa) => isValidIPAdapter(ipa, base)).length > 0;
return hasTextPrompt || hasIPAdapter;
}
return false;
};

View File

@ -1,125 +0,0 @@
import type { RootState } from 'app/store/store';
import { isInitialImageLayer } from 'features/controlLayers/store/controlLayersSlice';
import { upsertMetadata } from 'features/nodes/util/graph/metadata';
import type { ImageResizeInvocation, ImageToLatentsInvocation, NonNullableGraph } from 'services/api/types';
import { assert } from 'tsafe';
import { IMAGE_TO_LATENTS, NOISE, RESIZE } from './constants';
export const addInitialImageToLinearGraph = (
state: RootState,
graph: NonNullableGraph,
denoiseNodeId: string
): void => {
// Remove Existing UNet Connections
const { img2imgStrength, vaePrecision, model } = state.generation;
const { refinerModel, refinerStart } = state.sdxl;
const { width, height } = state.controlLayers.present.size;
const initialImageLayer = state.controlLayers.present.layers.find(isInitialImageLayer);
const initialImage = initialImageLayer?.isEnabled ? initialImageLayer?.image : null;
if (!initialImage) {
return;
}
const isSDXL = model?.base === 'sdxl';
const useRefinerStartEnd = isSDXL && Boolean(refinerModel);
const denoiseNode = graph.nodes[denoiseNodeId];
assert(denoiseNode?.type === 'denoise_latents', `Missing denoise node or incorrect type: ${denoiseNode?.type}`);
denoiseNode.denoising_start = useRefinerStartEnd ? Math.min(refinerStart, 1 - img2imgStrength) : 1 - img2imgStrength;
denoiseNode.denoising_end = useRefinerStartEnd ? refinerStart : 1;
// We conditionally hook the image in depending on whether a resize is needed
const i2lNode: ImageToLatentsInvocation = {
type: 'i2l',
id: IMAGE_TO_LATENTS,
is_intermediate: true,
use_cache: true,
fp32: vaePrecision === 'fp32',
};
graph.nodes[i2lNode.id] = i2lNode;
graph.edges.push({
source: {
node_id: IMAGE_TO_LATENTS,
field: 'latents',
},
destination: {
node_id: denoiseNode.id,
field: 'latents',
},
});
if (initialImage.width !== width || initialImage.height !== height) {
// The init image needs to be resized to the specified width and height before being passed to `IMAGE_TO_LATENTS`
// Create a resize node, explicitly setting its image
const resizeNode: ImageResizeInvocation = {
id: RESIZE,
type: 'img_resize',
image: {
image_name: initialImage.imageName,
},
is_intermediate: true,
width,
height,
};
graph.nodes[RESIZE] = resizeNode;
// The `RESIZE` node then passes its image to `IMAGE_TO_LATENTS`
graph.edges.push({
source: { node_id: RESIZE, field: 'image' },
destination: {
node_id: IMAGE_TO_LATENTS,
field: 'image',
},
});
// The `RESIZE` node also passes its width and height to `NOISE`
graph.edges.push({
source: { node_id: RESIZE, field: 'width' },
destination: {
node_id: NOISE,
field: 'width',
},
});
graph.edges.push({
source: { node_id: RESIZE, field: 'height' },
destination: {
node_id: NOISE,
field: 'height',
},
});
} else {
// We are not resizing, so we need to set the image on the `IMAGE_TO_LATENTS` node explicitly
i2lNode.image = {
image_name: initialImage.imageName,
};
// Pass the image's dimensions to the `NOISE` node
graph.edges.push({
source: { node_id: IMAGE_TO_LATENTS, field: 'width' },
destination: {
node_id: NOISE,
field: 'width',
},
});
graph.edges.push({
source: { node_id: IMAGE_TO_LATENTS, field: 'height' },
destination: {
node_id: NOISE,
field: 'height',
},
});
}
upsertMetadata(graph, {
generation_mode: isSDXL ? 'sdxl_img2img' : 'img2img',
strength: img2imgStrength,
init_image: initialImage.imageName,
});
};

View File

@ -1,5 +1,5 @@
import type { RootState } from 'app/store/store';
import { getBoardField, getIsIntermediate } from 'features/nodes/util/graph/graphBuilderUtils';
import { getBoardField } from 'features/nodes/util/graph/graphBuilderUtils';
import type { ESRGANInvocation, Graph, NonNullableGraph } from 'services/api/types';
import { ESRGAN } from './constants';
@ -18,7 +18,7 @@ export const buildAdHocUpscaleGraph = ({ image_name, state }: Arg): Graph => {
type: 'esrgan',
image: { image_name },
model_name: esrganModelName,
is_intermediate: getIsIntermediate(state),
is_intermediate: false,
board: getBoardField(state),
};

View File

@ -1,8 +1,8 @@
import { logger } from 'app/logging/logger';
import type { RootState } from 'app/store/store';
import { isInitialImageLayer, isRegionalGuidanceLayer } from 'features/controlLayers/store/controlLayersSlice';
import { fetchModelConfigWithTypeGuard } from 'features/metadata/util/modelFetchingHelpers';
import { addControlLayersToGraph } from 'features/nodes/util/graph/addControlLayersToGraph';
import { addInitialImageToLinearGraph } from 'features/nodes/util/graph/addInitialImageToLinearGraph';
import { getBoardField, getIsIntermediate } from 'features/nodes/util/graph/graphBuilderUtils';
import { isNonRefinerMainModelConfig, type NonNullableGraph } from 'services/api/types';
@ -232,24 +232,23 @@ export const buildGenerationTabGraph = async (state: RootState): Promise<NonNull
LATENTS_TO_IMAGE
);
addInitialImageToLinearGraph(state, graph, DENOISE_LATENTS);
// Add Seamless To Graph
if (seamlessXAxis || seamlessYAxis) {
addSeamlessToLinearGraph(state, graph, modelLoaderNodeId);
modelLoaderNodeId = SEAMLESS;
}
// optionally add custom VAE
await addVAEToGraph(state, graph, modelLoaderNodeId);
// add LoRA support
await addLoRAsToGraph(state, graph, DENOISE_LATENTS, modelLoaderNodeId);
await addControlLayersToGraph(state, graph, DENOISE_LATENTS);
const addedLayers = await addControlLayersToGraph(state, graph, DENOISE_LATENTS);
// optionally add custom VAE
await addVAEToGraph(state, graph, modelLoaderNodeId);
const shouldUseHRF = !addedLayers.some((l) => isInitialImageLayer(l) || isRegionalGuidanceLayer(l));
// High resolution fix.
if (state.hrf.hrfEnabled) {
if (state.hrf.hrfEnabled && shouldUseHRF) {
addHrfToGraph(state, graph);
}

View File

@ -2,7 +2,6 @@ import { logger } from 'app/logging/logger';
import type { RootState } from 'app/store/store';
import { fetchModelConfigWithTypeGuard } from 'features/metadata/util/modelFetchingHelpers';
import { addControlLayersToGraph } from 'features/nodes/util/graph/addControlLayersToGraph';
import { addInitialImageToLinearGraph } from 'features/nodes/util/graph/addInitialImageToLinearGraph';
import { isNonRefinerMainModelConfig, type NonNullableGraph } from 'services/api/types';
import { addNSFWCheckerToGraph } from './addNSFWCheckerToGraph';
@ -242,8 +241,6 @@ export const buildGenerationTabSDXLGraph = async (state: RootState): Promise<Non
LATENTS_TO_IMAGE
);
addInitialImageToLinearGraph(state, graph, SDXL_DENOISE_LATENTS);
// Add Seamless To Graph
if (seamlessXAxis || seamlessYAxis) {
addSeamlessToLinearGraph(state, graph, modelLoaderNodeId);
@ -258,14 +255,14 @@ export const buildGenerationTabSDXLGraph = async (state: RootState): Promise<Non
}
}
// optionally add custom VAE
await addVAEToGraph(state, graph, modelLoaderNodeId);
// add LoRA support
await addSDXLLoRAsToGraph(state, graph, SDXL_DENOISE_LATENTS, modelLoaderNodeId);
await addControlLayersToGraph(state, graph, SDXL_DENOISE_LATENTS);
// optionally add custom VAE
await addVAEToGraph(state, graph, modelLoaderNodeId);
// NSFW & watermark - must be last thing added to graph
if (state.system.shouldUseNSFWChecker) {
// must add before watermarker!

View File

@ -0,0 +1,20 @@
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import ImageToImageStrength from 'features/parameters/components/ImageToImage/ImageToImageStrength';
import { setImg2imgStrength } from 'features/parameters/store/generationSlice';
import { memo, useCallback } from 'react';
const ParamImageToImageStrength = () => {
const img2imgStrength = useAppSelector((s) => s.generation.img2imgStrength);
const dispatch = useAppDispatch();
const onChange = useCallback(
(v: number) => {
dispatch(setImg2imgStrength(v));
},
[dispatch]
);
return <ImageToImageStrength value={img2imgStrength} onChange={onChange} />;
};
export default memo(ParamImageToImageStrength);

View File

@ -33,12 +33,13 @@ export const ParamNegativePrompt = memo(() => {
name="negativePrompt"
ref={textareaRef}
value={prompt}
placeholder={t('parameters.negativePromptPlaceholder')}
placeholder={t('parameters.globalNegativePromptPlaceholder')}
onChange={onChange}
onKeyDown={onKeyDown}
fontSize="sm"
variant="darkFilled"
paddingRight={30}
spellCheck={false}
/>
<PromptOverlayButtonWrapper>
<AddPromptTriggerButton isOpen={isOpen} onOpen={onOpen} />

View File

@ -49,12 +49,13 @@ export const ParamPositivePrompt = memo(() => {
name="prompt"
ref={textareaRef}
value={prompt}
placeholder={t('parameters.positivePromptPlaceholder')}
placeholder={t('parameters.globalPositivePromptPlaceholder')}
onChange={onChange}
minH={28}
onKeyDown={onKeyDown}
variant="darkFilled"
paddingRight={30}
spellCheck={false}
/>
<PromptOverlayButtonWrapper>
<AddPromptTriggerButton isOpen={isOpen} onOpen={onOpen} />

View File

@ -1,14 +1,17 @@
import { CompositeNumberInput, CompositeSlider, FormControl, FormLabel } from '@invoke-ai/ui-library';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import { useAppSelector } from 'app/store/storeHooks';
import { InformationalPopover } from 'common/components/InformationalPopover/InformationalPopover';
import { setImg2imgStrength } from 'features/parameters/store/generationSlice';
import { memo, useCallback } from 'react';
import { memo } from 'react';
import { useTranslation } from 'react-i18next';
const marks = [0, 0.5, 1];
const ImageToImageStrength = () => {
const img2imgStrength = useAppSelector((s) => s.generation.img2imgStrength);
type Props = {
value: number;
onChange: (v: number) => void;
};
const ImageToImageStrength = ({ value, onChange }: Props) => {
const initial = useAppSelector((s) => s.config.sd.img2imgStrength.initial);
const sliderMin = useAppSelector((s) => s.config.sd.img2imgStrength.sliderMin);
const sliderMax = useAppSelector((s) => s.config.sd.img2imgStrength.sliderMax);
@ -16,11 +19,8 @@ const ImageToImageStrength = () => {
const numberInputMax = useAppSelector((s) => s.config.sd.img2imgStrength.numberInputMax);
const coarseStep = useAppSelector((s) => s.config.sd.img2imgStrength.coarseStep);
const fineStep = useAppSelector((s) => s.config.sd.img2imgStrength.fineStep);
const dispatch = useAppDispatch();
const { t } = useTranslation();
const handleChange = useCallback((v: number) => dispatch(setImg2imgStrength(v)), [dispatch]);
return (
<FormControl>
<InformationalPopover feature="paramDenoisingStrength">
@ -31,8 +31,8 @@ const ImageToImageStrength = () => {
fineStep={fineStep}
min={sliderMin}
max={sliderMax}
onChange={handleChange}
value={img2imgStrength}
onChange={onChange}
value={value}
defaultValue={initial}
marks={marks}
/>
@ -41,8 +41,8 @@ const ImageToImageStrength = () => {
fineStep={fineStep}
min={numberInputMin}
max={numberInputMax}
onChange={handleChange}
value={img2imgStrength}
onChange={onChange}
value={value}
defaultValue={initial}
/>
</FormControl>

View File

@ -38,6 +38,7 @@ export const usePreselectedImage = (selectedImage?: {
const handleSendToImg2Img = useCallback(() => {
if (selectedImageDto) {
dispatch(iiLayerAdded(selectedImageDto));
dispatch(setActiveTab('generation'));
}
}, [dispatch, selectedImageDto]);

View File

@ -16,14 +16,14 @@ import { z } from 'zod';
*/
// #region Positive prompt
const zParameterPositivePrompt = z.string();
export const zParameterPositivePrompt = z.string();
export type ParameterPositivePrompt = z.infer<typeof zParameterPositivePrompt>;
export const isParameterPositivePrompt = (val: unknown): val is ParameterPositivePrompt =>
zParameterPositivePrompt.safeParse(val).success;
// #endregion
// #region Negative prompt
const zParameterNegativePrompt = z.string();
export const zParameterNegativePrompt = z.string();
export type ParameterNegativePrompt = z.infer<typeof zParameterNegativePrompt>;
export const isParameterNegativePrompt = (val: unknown): val is ParameterNegativePrompt =>
zParameterNegativePrompt.safeParse(val).success;
@ -127,7 +127,7 @@ export type ParameterT2IAdapterModel = z.infer<typeof zParameterT2IAdapterModel>
// #endregion
// #region Strength (l2l strength)
const zParameterStrength = z.number().min(0).max(1);
export const zParameterStrength = z.number().min(0).max(1);
export type ParameterStrength = z.infer<typeof zParameterStrength>;
export const isParameterStrength = (val: unknown): val is ParameterStrength =>
zParameterStrength.safeParse(val).success;
@ -198,6 +198,6 @@ export const isParameterLoRAWeight = (val: unknown): val is ParameterLoRAWeight
// #endregion
// #region Regional Prompts AutoNegative
const zAutoNegative = z.enum(['off', 'invert']);
export const zAutoNegative = z.enum(['off', 'invert']);
export type ParameterAutoNegative = z.infer<typeof zAutoNegative>;
// #endregion
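Every `#region` in this file follows the same three-step pattern, which the newly-exported schemas let other modules reuse: the zod schema is the single source of truth, the TS type is inferred from it, and the runtime type guard is derived via `safeParse`. In miniature (`zParameterExample` is hypothetical):

```ts
import { z } from 'zod';

const zParameterExample = z.number().min(0).max(1);
type ParameterExample = z.infer<typeof zParameterExample>;
const isParameterExample = (val: unknown): val is ParameterExample =>
  zParameterExample.safeParse(val).success;
```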

View File

@ -16,25 +16,26 @@ export const InvokeQueueBackButton = memo(() => {
return (
<Flex pos="relative" flexGrow={1} minW="240px">
<QueueIterationsNumberInput />
<Button
onClick={queueBack}
isLoading={isLoading || isLoadingDynamicPrompts}
loadingText={invoke}
isDisabled={isDisabled}
rightIcon={<RiSparkling2Fill />}
tooltip={<QueueButtonTooltip />}
variant="solid"
zIndex={1}
colorScheme="invokeYellow"
size="lg"
w="calc(100% - 60px)"
flexShrink={0}
justifyContent="space-between"
spinnerPlacement="end"
>
<span>{invoke}</span>
<Spacer />
</Button>
<QueueButtonTooltip>
<Button
onClick={queueBack}
isLoading={isLoading || isLoadingDynamicPrompts}
loadingText={invoke}
isDisabled={isDisabled}
rightIcon={<RiSparkling2Fill />}
variant="solid"
zIndex={1}
colorScheme="invokeYellow"
size="lg"
w="calc(100% - 60px)"
flexShrink={0}
justifyContent="space-between"
spinnerPlacement="end"
>
<span>{invoke}</span>
<Spacer />
</Button>
</QueueButtonTooltip>
</Flex>
);
});

View File

@ -1,10 +1,11 @@
import { Divider, Flex, ListItem, Text, UnorderedList } from '@invoke-ai/ui-library';
import { Divider, Flex, ListItem, Text, Tooltip, UnorderedList } from '@invoke-ai/ui-library';
import { createSelector } from '@reduxjs/toolkit';
import { useAppSelector } from 'app/store/storeHooks';
import { useIsReadyToEnqueue } from 'common/hooks/useIsReadyToEnqueue';
import { selectControlLayersSlice } from 'features/controlLayers/store/controlLayersSlice';
import { selectDynamicPromptsSlice } from 'features/dynamicPrompts/store/dynamicPromptsSlice';
import { getShouldProcessPrompt } from 'features/dynamicPrompts/util/getShouldProcessPrompt';
import type { PropsWithChildren } from 'react';
import { memo, useMemo } from 'react';
import { useTranslation } from 'react-i18next';
import { useEnqueueBatchMutation } from 'services/api/endpoints/queue';
@ -21,17 +22,32 @@ type Props = {
prepend?: boolean;
};
export const QueueButtonTooltip = memo(({ prepend = false }: Props) => {
export const QueueButtonTooltip = (props: PropsWithChildren<Props>) => {
return (
<Tooltip label={<TooltipContent prepend={props.prepend} />} maxW={512}>
{props.children}
</Tooltip>
);
};
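With the tooltip now wrapping its trigger as `children` rather than being threaded through Button's `tooltip` prop, any element can opt in. A hypothetical consumer (component name and import path are illustrative):

```tsx
import { Button } from '@invoke-ai/ui-library';
import { memo } from 'react';
import { QueueButtonTooltip } from './QueueButtonTooltip';

const QueueFrontButton = memo(({ onClick }: { onClick: () => void }) => (
  <QueueButtonTooltip prepend>
    <Button onClick={onClick}>Queue Front</Button>
  </QueueButtonTooltip>
));
QueueFrontButton.displayName = 'QueueFrontButton';
```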
const TooltipContent = memo(({ prepend = false }: Props) => {
const { t } = useTranslation();
const { isReady, reasons } = useIsReadyToEnqueue();
const isLoadingDynamicPrompts = useAppSelector((s) => s.dynamicPrompts.isLoading);
const promptsCount = useAppSelector(selectPromptsCount);
const iterations = useAppSelector((s) => s.generation.iterations);
const iterationsCount = useAppSelector((s) => s.generation.iterations);
const autoAddBoardId = useAppSelector((s) => s.gallery.autoAddBoardId);
const autoAddBoardName = useBoardName(autoAddBoardId);
const [_, { isLoading }] = useEnqueueBatchMutation({
fixedCacheKey: 'enqueueBatch',
});
const queueCountPredictionLabel = useMemo(() => {
const generationCount = Math.min(promptsCount * iterationsCount, 10000);
const prompts = t('queue.prompts', { count: promptsCount });
const iterations = t('queue.iterations', { count: iterationsCount });
const generations = t('queue.generations', { count: generationCount });
return `${promptsCount} ${prompts} \u00d7 ${iterationsCount} ${iterations} -> ${generationCount} ${generations}`.toLowerCase();
}, [iterationsCount, promptsCount, t]);
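// e.g. promptsCount = 3, iterationsCount = 4 ->
// "3 prompts × 4 iterations -> 12 generations" (the product is capped at 10,000).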
const label = useMemo(() => {
if (isLoading) {
@ -52,20 +68,21 @@ export const QueueButtonTooltip = memo(({ prepend = false }: Props) => {
return (
<Flex flexDir="column" gap={1}>
<Text fontWeight="semibold">{label}</Text>
<Text>
{t('queue.queueCountPrediction', {
promptsCount,
iterations,
count: Math.min(promptsCount * iterations, 10000),
})}
</Text>
<Text>{queueCountPredictionLabel}</Text>
{reasons.length > 0 && (
<>
<Divider opacity={0.2} borderColor="base.900" />
<UnorderedList>
{reasons.map((reason, i) => (
<ListItem key={`${reason}.${i}`}>
<Text>{reason}</Text>
<ListItem key={`${reason.content}.${i}`}>
<span>
{reason.prefix && (
<Text as="span" fontWeight="semibold">
{reason.prefix}:{' '}
</Text>
)}
<Text as="span">{reason.content}</Text>
</span>
</ListItem>
))}
</UnorderedList>
@ -82,4 +99,4 @@ export const QueueButtonTooltip = memo(({ prepend = false }: Props) => {
);
});
QueueButtonTooltip.displayName = 'QueueButtonTooltip';
TooltipContent.displayName = 'QueueButtonTooltipContent';

Some files were not shown because too many files have changed in this diff.