Merge branch 'main' into refactor/model-manager-3

Lincoln Stein 2023-12-01 09:15:18 -05:00
commit 778fd55f0d
403 changed files with 8703 additions and 6513 deletions

View File

@ -8,7 +8,7 @@ To use a node, add the node to the `nodes` folder found in your InvokeAI install
The suggested method is to use `git clone` to clone the repository the node is found in. This allows for easy updates of the node in the future.
If you'd prefer, you can also just download the `.py` file from the linked repository and add it to the `nodes` folder.
If you'd prefer, you can also just download the whole node folder from the linked repository and add it to the `nodes` folder.
To use a community workflow, download the `.json` node graph file and load it into Invoke AI via the **Load Workflow** button in the Workflow Editor.
@ -26,6 +26,7 @@ To use a community workflow, download the the `.json` node graph file and load i
+ [Image Picker](#image-picker)
+ [Load Video Frame](#load-video-frame)
+ [Make 3D](#make-3d)
+ [Match Histogram](#match-histogram)
+ [Oobabooga](#oobabooga)
+ [Prompt Tools](#prompt-tools)
+ [Remote Image](#remote-image)
@ -208,6 +209,23 @@ This includes 15 Nodes:
<img src="https://gitlab.com/srcrr/shift3d/-/raw/main/example-1.png" width="300" />
<img src="https://gitlab.com/srcrr/shift3d/-/raw/main/example-2.png" width="300" />
--------------------------------
### Match Histogram
**Description:** An InvokeAI node that matches the histogram of one image to another. It is similar to the `color correct` node in the main InvokeAI, but it works in the YCbCr colourspace, can handle images of different sizes, and does not require a mask input.
- Option to transfer only the luminance channel.
- Option to save the output as grayscale.
A good use case for this node is to normalize the colors of an image that has been through the tiled scaling workflow of my XYGrid Nodes.
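For intuition, the effect is roughly what this scikit-image sketch does (an illustrative assumption, not the node's actual implementation; file names are placeholders):
import numpy as np
from PIL import Image
from skimage.exposure import match_histograms

# Convert both images to YCbCr, match the histograms per channel, then convert back to RGB.
source = np.asarray(Image.open("generated.png").convert("YCbCr"))
reference = np.asarray(Image.open("reference.png").convert("YCbCr"))
matched = match_histograms(source, reference, channel_axis=-1)
Image.fromarray(matched.astype(np.uint8), mode="YCbCr").convert("RGB").save("matched.png")
To transfer only luminance, match channel 0 (Y) alone and keep the source Cb/Cr channels.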
See full docs here: https://github.com/skunkworxdark/Prompt-tools-nodes/edit/main/README.md
**Node Link:** https://github.com/skunkworxdark/match_histogram
**Output Examples**
<img src="https://github.com/skunkworxdark/match_histogram/assets/21961335/ed12f329-a0ef-444a-9bae-129ed60d6097" width="300" />
--------------------------------
### Oobabooga
@ -237,22 +255,30 @@ This node works best with SDXL models, especially as the style can be described
--------------------------------
### Prompt Tools
**Description:** A set of InvokeAI nodes that add general prompt manipulation tools. These were written to accompany the PromptsFromFile node and other prompt generation nodes.
**Description:** A set of InvokeAI nodes that add general prompt (string) manipulation tools. Designed to accompany the `Prompts From File` node and other prompt generation nodes.
1. `Prompt To File` - Saves a prompt or collection of prompts to a file, one per line. There is an append/overwrite option.
2. `PTFields Collect` - Converts image generation fields into a JSON string that can be passed to `Prompt To File`.
3. `PTFields Expand` - Takes a JSON string and converts it into individual generation parameters. This can be fed from the `Prompts From File` node.
4. `Prompt Strength` - Formats a prompt with a strength value, like compel's weighted format.
5. `Prompt Strength Combine` - Combines weighted prompts for compel's `.and()`/`.blend()` syntax (see the example after this list).
6. `CSV To Index String` - Gets a string from a CSV by index. Includes a random index option.
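For illustration, the weighted and combined prompt strings these two nodes produce follow compel's syntax and look roughly like this (the example prompts are made up):
(a castle at dusk)1.2
("a castle at dusk", "oil painting").blend(0.7, 0.3)
("a castle at dusk", "dramatic lighting").and()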
The following nodes are now included in v3.2 of Invoke and are no longer part of this set of tools.<br>
- `Prompt Join` -> `String Join`
- `Prompt Join Three` -> `String Join Three`
- `Prompt Replace` -> `String Replace`
- `Prompt Split Neg` -> `String Split Neg`
1. PromptJoin - Joins two prompts into one.
2. PromptReplace - Performs a search and replace on a prompt, with the option of using regex.
3. PromptSplitNeg - Splits a prompt into positive and negative using the old V2 method of [] for negative.
4. PromptToFile - Saves a prompt or collection of prompts to a file, one per line. There is an append/overwrite option.
5. PTFieldsCollect - Converts image generation fields into a JSON string that can be passed to PromptToFile.
6. PTFieldsExpand - Takes a JSON string and converts it into individual generation parameters. This can be fed from the PromptToFile node.
7. PromptJoinThree - Joins three prompts together.
8. PromptStrength - Takes a string and a float and outputs another string in the format of (string)strength, like compel's weighted format.
9. PromptStrengthCombine - Takes a collection of prompt strength strings and outputs a string in the .and() or .blend() format that can be fed into a proper prompt node.
See full docs here: https://github.com/skunkworxdark/Prompt-tools-nodes/edit/main/README.md
**Node Link:** https://github.com/skunkworxdark/Prompt-tools-nodes
**Workflow Examples**
<img src="https://github.com/skunkworxdark/prompt-tools/blob/main/images/CSVToIndexStringNode.png" width="300" />
--------------------------------
### Remote Image
@ -339,15 +365,27 @@ Highlights/Midtones/Shadows (with LUT blur enabled):
--------------------------------
### XY Image to Grid and Images to Grids nodes
**Description:** Image to grid nodes and supporting tools.
**Description:** These nodes add the following to InvokeAI:
- Generate grids of images from multiple input images
- Create XY grid images with labels from parameters
- Split images into overlapping tiles for processing in super-resolution workflows
- Recombine image tiles into a single output image, blending the seams
1. "Images To Grids" node - Takes a collection of images and creates a grid(s) of images. If there are more images than the size of a single grid then multiple grids will be created until it runs out of images.
2. "XYImage To Grid" node - Converts a collection of XYImages into a labeled Grid of images. The XYImages collection has to be built using the supporting nodes. See example node setups for more details.
The nodes include:
1. `Images To Grids` - Combine multiple images into a grid of images
2. `XYImage To Grid` - Takes X & Y params and creates a labeled image grid.
3. `XYImage Tiles` - Super-resolution (embiggen) style tiled resizing
4. `Image Tot XYImages` - Takes an image and cuts it up into a number of columns and rows.
5. Multiple supporting nodes - Helper nodes for data wrangling and building `XYImage` collections
See full docs here: https://github.com/skunkworxdark/XYGrid_nodes/edit/main/README.md
**Node Link:** https://github.com/skunkworxdark/XYGrid_nodes
**Output Examples**
<img src="https://github.com/skunkworxdark/XYGrid_nodes/blob/main/images/collage.png" width="300" />
--------------------------------
### Example Node Template

View File

@ -1,104 +1,106 @@
# List of Default Nodes
The table below contains a list of the default nodes shipped with InvokeAI and their descriptions.
The table below contains a list of the default nodes shipped with InvokeAI and
their descriptions.
| Node <img width=160 align="right"> | Function |
|: ---------------------------------- | :--------------------------------------------------------------------------------------|
|Add Integers | Adds two numbers|
|Boolean Primitive Collection | A collection of boolean primitive values|
|Boolean Primitive | A boolean primitive value|
|Canny Processor | Canny edge detection for ControlNet|
|CLIP Skip | Skip layers in clip text_encoder model.|
|Collect | Collects values into a collection|
|Color Correct | Shifts the colors of a target image to match the reference image, optionally using a mask to only color-correct certain regions of the target image.|
|Color Primitive | A color primitive value|
|Compel Prompt | Parse prompt using compel package to conditioning.|
|Conditioning Primitive Collection | A collection of conditioning tensor primitive values|
|Conditioning Primitive | A conditioning tensor primitive value|
|Content Shuffle Processor | Applies content shuffle processing to image|
|ControlNet | Collects ControlNet info to pass to other nodes|
|Denoise Latents | Denoises noisy latents to decodable images|
|Divide Integers | Divides two numbers|
|Dynamic Prompt | Parses a prompt using adieyal/dynamicprompts' random or combinatorial generator|
|[FaceMask](./detailedNodes/faceTools.md#facemask) | Generates masks for faces in an image to use with Inpainting|
|[FaceIdentifier](./detailedNodes/faceTools.md#faceidentifier) | Identifies and labels faces in an image|
|[FaceOff](./detailedNodes/faceTools.md#faceoff) | Creates a new image that is a scaled bounding box with a mask on the face for Inpainting|
|Float Math | Perform basic math operations on two floats|
|Float Primitive Collection | A collection of float primitive values|
|Float Primitive | A float primitive value|
|Float Range | Creates a range|
|HED (softedge) Processor | Applies HED edge detection to image|
|Blur Image | Blurs an image|
|Extract Image Channel | Gets a channel from an image.|
|Image Primitive Collection | A collection of image primitive values|
|Integer Math | Perform basic math operations on two integers|
|Convert Image Mode | Converts an image to a different mode.|
|Crop Image | Crops an image to a specified box. The box can be outside of the image.|
|Image Hue Adjustment | Adjusts the Hue of an image.|
|Inverse Lerp Image | Inverse linear interpolation of all pixels of an image|
|Image Primitive | An image primitive value|
|Lerp Image | Linear interpolation of all pixels of an image|
|Offset Image Channel | Add to or subtract from an image color channel by a uniform value.|
|Multiply Image Channel | Multiply or Invert an image color channel by a scalar value.|
|Multiply Images | Multiplies two images together using `PIL.ImageChops.multiply()`.|
|Blur NSFW Image | Add blur to NSFW-flagged images|
|Paste Image | Pastes an image into another image.|
|ImageProcessor | Base class for invocations that preprocess images for ControlNet|
|Resize Image | Resizes an image to specific dimensions|
|Round Float | Rounds a float to a specified number of decimal places|
|Float to Integer | Converts a float to an integer. Optionally rounds to an even multiple of an input number.|
|Scale Image | Scales an image by a factor|
|Image to Latents | Encodes an image into latents.|
|Add Invisible Watermark | Add an invisible watermark to an image|
|Solid Color Infill | Infills transparent areas of an image with a solid color|
|PatchMatch Infill | Infills transparent areas of an image using the PatchMatch algorithm|
|Tile Infill | Infills transparent areas of an image with tiles of the image|
|Integer Primitive Collection | A collection of integer primitive values|
|Integer Primitive | An integer primitive value|
|Iterate | Iterates over a list of items|
|Latents Primitive Collection | A collection of latents tensor primitive values|
|Latents Primitive | A latents tensor primitive value|
|Latents to Image | Generates an image from latents.|
|Leres (Depth) Processor | Applies leres processing to image|
|Lineart Anime Processor | Applies line art anime processing to image|
|Lineart Processor | Applies line art processing to image|
|LoRA Loader | Apply selected lora to unet and text_encoder.|
|Main Model Loader | Loads a main model, outputting its submodels.|
|Combine Mask | Combine two masks together by multiplying them using `PIL.ImageChops.multiply()`.|
|Mask Edge | Applies an edge mask to an image|
|Mask from Alpha | Extracts the alpha channel of an image as a mask.|
|Mediapipe Face Processor | Applies mediapipe face processing to image|
|Midas (Depth) Processor | Applies Midas depth processing to image|
|MLSD Processor | Applies MLSD processing to image|
|Multiply Integers | Multiplies two numbers|
|Noise | Generates latent noise.|
|Normal BAE Processor | Applies NormalBae processing to image|
|ONNX Latents to Image | Generates an image from latents.|
|ONNX Prompt (Raw) | A node to process inputs and produce outputs. May use dependency injection in __init__ to receive providers.|
|ONNX Text to Latents | Generates latents from conditionings.|
|ONNX Model Loader | Loads a main model, outputting its submodels.|
|OpenCV Inpaint | Simple inpaint using opencv.|
|Openpose Processor | Applies Openpose processing to image|
|PIDI Processor | Applies PIDI processing to image|
|Prompts from File | Loads prompts from a text file|
|Random Integer | Outputs a single random integer.|
|Random Range | Creates a collection of random numbers|
|Integer Range | Creates a range of numbers from start to stop with step|
|Integer Range of Size | Creates a range from start to start + size with step|
|Resize Latents | Resizes latents to explicit width/height (in pixels). Provided dimensions are floor-divided by 8.|
|SDXL Compel Prompt | Parse prompt using compel package to conditioning.|
|SDXL LoRA Loader | Apply selected lora to unet and text_encoder.|
|SDXL Main Model Loader | Loads an sdxl base model, outputting its submodels.|
|SDXL Refiner Compel Prompt | Parse prompt using compel package to conditioning.|
|SDXL Refiner Model Loader | Loads an sdxl refiner model, outputting its submodels.|
|Scale Latents | Scales latents by a given factor.|
|Segment Anything Processor | Applies segment anything processing to image|
|Show Image | Displays a provided image, and passes it forward in the pipeline.|
|Step Param Easing | Experimental per-step parameter easing for denoising steps|
|String Primitive Collection | A collection of string primitive values|
|String Primitive | A string primitive value|
|Subtract Integers | Subtracts two numbers|
|Tile Resample Processor | Tile resampler processor|
|Upscale (RealESRGAN) | Upscales an image using RealESRGAN.|
|VAE Loader | Loads a VAE model, outputting a VaeLoaderOutput|
|Zoe (Depth) Processor | Applies Zoe depth processing to image|
| :------------------------------------------------------------ | :--------------------------------------------------------------------------------------------------------------------------------------------------- |
| Add Integers | Adds two numbers |
| Boolean Primitive Collection | A collection of boolean primitive values |
| Boolean Primitive | A boolean primitive value |
| Canny Processor | Canny edge detection for ControlNet |
| CenterPadCrop | Pad or crop an image's sides from the center by specified pixels. Positive values are outside of the image. |
| CLIP Skip | Skip layers in clip text_encoder model. |
| Collect | Collects values into a collection |
| Color Correct | Shifts the colors of a target image to match the reference image, optionally using a mask to only color-correct certain regions of the target image. |
| Color Primitive | A color primitive value |
| Compel Prompt | Parse prompt using compel package to conditioning. |
| Conditioning Primitive Collection | A collection of conditioning tensor primitive values |
| Conditioning Primitive | A conditioning tensor primitive value |
| Content Shuffle Processor | Applies content shuffle processing to image |
| ControlNet | Collects ControlNet info to pass to other nodes |
| Denoise Latents | Denoises noisy latents to decodable images |
| Divide Integers | Divides two numbers |
| Dynamic Prompt | Parses a prompt using adieyal/dynamicprompts' random or combinatorial generator |
| [FaceMask](./detailedNodes/faceTools.md#facemask) | Generates masks for faces in an image to use with Inpainting |
| [FaceIdentifier](./detailedNodes/faceTools.md#faceidentifier) | Identifies and labels faces in an image |
| [FaceOff](./detailedNodes/faceTools.md#faceoff) | Creates a new image that is a scaled bounding box with a mask on the face for Inpainting |
| Float Math | Perform basic math operations on two floats |
| Float Primitive Collection | A collection of float primitive values |
| Float Primitive | A float primitive value |
| Float Range | Creates a range |
| HED (softedge) Processor | Applies HED edge detection to image |
| Blur Image | Blurs an image |
| Extract Image Channel | Gets a channel from an image. |
| Image Primitive Collection | A collection of image primitive values |
| Integer Math | Perform basic math operations on two integers |
| Convert Image Mode | Converts an image to a different mode. |
| Crop Image | Crops an image to a specified box. The box can be outside of the image. |
| Image Hue Adjustment | Adjusts the Hue of an image. |
| Inverse Lerp Image | Inverse linear interpolation of all pixels of an image |
| Image Primitive | An image primitive value |
| Lerp Image | Linear interpolation of all pixels of an image |
| Offset Image Channel | Add to or subtract from an image color channel by a uniform value. |
| Multiply Image Channel | Multiply or Invert an image color channel by a scalar value. |
| Multiply Images | Multiplies two images together using `PIL.ImageChops.multiply()`. |
| Blur NSFW Image | Add blur to NSFW-flagged images |
| Paste Image | Pastes an image into another image. |
| ImageProcessor | Base class for invocations that preprocess images for ControlNet |
| Resize Image | Resizes an image to specific dimensions |
| Round Float | Rounds a float to a specified number of decimal places |
| Float to Integer | Converts a float to an integer. Optionally rounds to an even multiple of an input number. |
| Scale Image | Scales an image by a factor |
| Image to Latents | Encodes an image into latents. |
| Add Invisible Watermark | Add an invisible watermark to an image |
| Solid Color Infill | Infills transparent areas of an image with a solid color |
| PatchMatch Infill | Infills transparent areas of an image using the PatchMatch algorithm |
| Tile Infill | Infills transparent areas of an image with tiles of the image |
| Integer Primitive Collection | A collection of integer primitive values |
| Integer Primitive | An integer primitive value |
| Iterate | Iterates over a list of items |
| Latents Primitive Collection | A collection of latents tensor primitive values |
| Latents Primitive | A latents tensor primitive value |
| Latents to Image | Generates an image from latents. |
| Leres (Depth) Processor | Applies leres processing to image |
| Lineart Anime Processor | Applies line art anime processing to image |
| Lineart Processor | Applies line art processing to image |
| LoRA Loader | Apply selected lora to unet and text_encoder. |
| Main Model Loader | Loads a main model, outputting its submodels. |
| Combine Mask | Combine two masks together by multiplying them using `PIL.ImageChops.multiply()`. |
| Mask Edge | Applies an edge mask to an image |
| Mask from Alpha | Extracts the alpha channel of an image as a mask. |
| Mediapipe Face Processor | Applies mediapipe face processing to image |
| Midas (Depth) Processor | Applies Midas depth processing to image |
| MLSD Processor | Applies MLSD processing to image |
| Multiply Integers | Multiplies two numbers |
| Noise | Generates latent noise. |
| Normal BAE Processor | Applies NormalBae processing to image |
| ONNX Latents to Image | Generates an image from latents. |
| ONNX Prompt (Raw) | A node to process inputs and produce outputs. May use dependency injection in `__init__` to receive providers. |
| ONNX Text to Latents | Generates latents from conditionings. |
| ONNX Model Loader | Loads a main model, outputting its submodels. |
| OpenCV Inpaint | Simple inpaint using opencv. |
| Openpose Processor | Applies Openpose processing to image |
| PIDI Processor | Applies PIDI processing to image |
| Prompts from File | Loads prompts from a text file |
| Random Integer | Outputs a single random integer. |
| Random Range | Creates a collection of random numbers |
| Integer Range | Creates a range of numbers from start to stop with step |
| Integer Range of Size | Creates a range from start to start + size with step |
| Resize Latents | Resizes latents to explicit width/height (in pixels). Provided dimensions are floor-divided by 8. |
| SDXL Compel Prompt | Parse prompt using compel package to conditioning. |
| SDXL LoRA Loader | Apply selected lora to unet and text_encoder. |
| SDXL Main Model Loader | Loads an sdxl base model, outputting its submodels. |
| SDXL Refiner Compel Prompt | Parse prompt using compel package to conditioning. |
| SDXL Refiner Model Loader | Loads an sdxl refiner model, outputting its submodels. |
| Scale Latents | Scales latents by a given factor. |
| Segment Anything Processor | Applies segment anything processing to image |
| Show Image | Displays a provided image, and passes it forward in the pipeline. |
| Step Param Easing | Experimental per-step parameter easing for denoising steps |
| String Primitive Collection | A collection of string primitive values |
| String Primitive | A string primitive value |
| Subtract Integers | Subtracts two numbers |
| Tile Resample Processor | Tile resampler processor |
| Upscale (RealESRGAN) | Upscales an image using RealESRGAN. |
| VAE Loader | Loads a VAE model, outputting a VaeLoaderOutput |
| Zoe (Depth) Processor | Applies Zoe depth processing to image |

View File

@ -1,11 +1,8 @@
import sys
from typing import Any
from fastapi.responses import HTMLResponse
# parse_args() must be called before any other imports. if it is not called first, consumers of the config
# which are imported/used before parse_args() is called will get the default config values instead of the
# values from the command line or config file.
import sys
from invokeai.version.invokeai_version import __version__
from .services.config import InvokeAIAppConfig
@ -22,6 +19,7 @@ if True: # hack to make flake8 happy with imports coming after setting up the c
import socket
from inspect import signature
from pathlib import Path
from typing import Any
import uvicorn
from fastapi import FastAPI
@ -29,7 +27,7 @@ if True: # hack to make flake8 happy with imports coming after setting up the c
from fastapi.middleware.gzip import GZipMiddleware
from fastapi.openapi.docs import get_redoc_html, get_swagger_ui_html
from fastapi.openapi.utils import get_openapi
from fastapi.responses import FileResponse
from fastapi.responses import FileResponse, HTMLResponse
from fastapi.staticfiles import StaticFiles
from fastapi_events.handlers.local import local_handler
from fastapi_events.middleware import EventHandlerASGIMiddleware
@ -58,9 +56,9 @@ if True: # hack to make flake8 happy with imports coming after setting up the c
from .api.sockets import SocketIO
from .invocations.baseinvocation import (
BaseInvocation,
InputFieldJSONSchemaExtra,
OutputFieldJSONSchemaExtra,
UIConfigBase,
_InputField,
_OutputField,
)
if is_mps_available():
@ -157,7 +155,11 @@ def custom_openapi() -> dict[str, Any]:
# Add Node Editor UI helper schemas
ui_config_schemas = models_json_schema(
[(UIConfigBase, "serialization"), (_InputField, "serialization"), (_OutputField, "serialization")],
[
(UIConfigBase, "serialization"),
(InputFieldJSONSchemaExtra, "serialization"),
(OutputFieldJSONSchemaExtra, "serialization"),
],
ref_template="#/components/schemas/{model}",
)
for schema_key, ui_config_schema in ui_config_schemas[1]["$defs"].items():
@ -165,7 +167,7 @@ def custom_openapi() -> dict[str, Any]:
# Add a reference to the output type to additionalProperties of the invoker schema
for invoker in all_invocations:
invoker_name = invoker.__name__
invoker_name = invoker.__name__ # type: ignore [attr-defined] # this is a valid attribute
output_type = signature(obj=invoker.invoke).return_annotation
output_type_title = output_type_titles[output_type.__name__]
invoker_schema = openapi_schema["components"]["schemas"][f"{invoker_name}"]

View File

@ -1,4 +1,4 @@
# Copyright (c) 2022 Kyle Schouviller (https://github.com/kyle0654)
# Copyright (c) 2022 Kyle Schouviller (https://github.com/kyle0654) and the InvokeAI team
from __future__ import annotations
@ -8,7 +8,7 @@ from abc import ABC, abstractmethod
from enum import Enum
from inspect import signature
from types import UnionType
from typing import TYPE_CHECKING, Any, Callable, ClassVar, Iterable, Literal, Optional, Type, TypeVar, Union
from typing import TYPE_CHECKING, Any, Callable, ClassVar, Iterable, Literal, Optional, Type, TypeVar, Union, cast
import semver
from pydantic import BaseModel, ConfigDict, Field, RootModel, TypeAdapter, create_model
@ -17,11 +17,17 @@ from pydantic_core import PydanticUndefined
from invokeai.app.services.config.config_default import InvokeAIAppConfig
from invokeai.app.shared.fields import FieldDescriptions
from invokeai.app.util.metaenum import MetaEnum
from invokeai.app.util.misc import uuid_string
from invokeai.backend.util.logging import InvokeAILogger
if TYPE_CHECKING:
from ..services.invocation_services import InvocationServices
logger = InvokeAILogger.get_logger()
CUSTOM_NODE_PACK_SUFFIX = "__invokeai-custom-node"
class InvalidVersionError(ValueError):
pass
@ -31,7 +37,7 @@ class InvalidFieldError(TypeError):
pass
class Input(str, Enum):
class Input(str, Enum, metaclass=MetaEnum):
"""
The type of input a field accepts.
- `Input.Direct`: The field must have its value provided directly, when the invocation and field \
@ -45,86 +51,124 @@ class Input(str, Enum):
Any = "any"
class UIType(str, Enum):
class FieldKind(str, Enum, metaclass=MetaEnum):
"""
Type hints for the UI.
If a field should be provided a data type that does not exactly match the python type of the field, \
use this to provide the type that should be used instead. See the node development docs for detail \
on adding a new field type, which involves client-side changes.
The kind of field.
- `Input`: An input field on a node.
- `Output`: An output field on a node.
- `Internal`: A field which is treated as an input, but cannot be used in node definitions. Metadata is
one example. It is provided to nodes via the WithMetadata class, and we want to reserve the field name
"metadata" for this on all nodes. `FieldKind` is used to short-circuit the field name validation logic,
allowing "metadata" for that field.
- `NodeAttribute`: The field is a node attribute. These are fields which are not inputs or outputs,
but which are used to store information about the node. For example, the `id` and `type` fields are node
attributes.
The presence of this in `json_schema_extra["field_kind"]` is used when initializing node schemas on app
startup, and when generating the OpenAPI schema for the workflow editor.
"""
# region Primitives
Boolean = "boolean"
Color = "ColorField"
Conditioning = "ConditioningField"
Control = "ControlField"
Float = "float"
Image = "ImageField"
Integer = "integer"
Latents = "LatentsField"
String = "string"
# endregion
Input = "input"
Output = "output"
Internal = "internal"
NodeAttribute = "node_attribute"
# region Collection Primitives
BooleanCollection = "BooleanCollection"
ColorCollection = "ColorCollection"
ConditioningCollection = "ConditioningCollection"
ControlCollection = "ControlCollection"
FloatCollection = "FloatCollection"
ImageCollection = "ImageCollection"
IntegerCollection = "IntegerCollection"
LatentsCollection = "LatentsCollection"
StringCollection = "StringCollection"
# endregion
# region Polymorphic Primitives
BooleanPolymorphic = "BooleanPolymorphic"
ColorPolymorphic = "ColorPolymorphic"
ConditioningPolymorphic = "ConditioningPolymorphic"
ControlPolymorphic = "ControlPolymorphic"
FloatPolymorphic = "FloatPolymorphic"
ImagePolymorphic = "ImagePolymorphic"
IntegerPolymorphic = "IntegerPolymorphic"
LatentsPolymorphic = "LatentsPolymorphic"
StringPolymorphic = "StringPolymorphic"
# endregion
class UIType(str, Enum, metaclass=MetaEnum):
"""
Type hints for the UI for situations in which the field type is not enough to infer the correct UI type.
# region Models
MainModel = "MainModelField"
- Model Fields
The most common node-author-facing use will be for model fields. Internally, there is no difference
between SD-1, SD-2 and SDXL model fields - they all use the class `MainModelField`. To ensure the
base-model-specific UI is rendered, use e.g. `ui_type=UIType.SDXLMainModel` to indicate that
the field is an SDXL main model field.
- Any Field
We cannot infer the usage of `typing.Any` via schema parsing, so you *must* use `ui_type=UIType.Any` to
indicate that the field accepts any type. Use with caution. This cannot be used on outputs.
- Scheduler Field
Special handling in the UI is needed for this field, which otherwise would be parsed as a plain enum field.
- Internal Fields
Similar to the Any Field, the `collect` and `iterate` nodes make use of `typing.Any`. To facilitate
handling these types in the client, we use `UIType._Collection` and `UIType._CollectionItem`. These
should not be used by node authors.
- DEPRECATED Fields
These types are deprecated and should not be used by node authors. A warning will be logged if one is
used, and the type will be ignored. They are included here for backwards compatibility.
"""
# region Model Field Types
SDXLMainModel = "SDXLMainModelField"
SDXLRefinerModel = "SDXLRefinerModelField"
ONNXModel = "ONNXModelField"
VaeModel = "VaeModelField"
VaeModel = "VAEModelField"
LoRAModel = "LoRAModelField"
ControlNetModel = "ControlNetModelField"
IPAdapterModel = "IPAdapterModelField"
UNet = "UNetField"
Vae = "VaeField"
CLIP = "ClipField"
# endregion
# region Iterate/Collect
Collection = "Collection"
CollectionItem = "CollectionItem"
# region Misc Field Types
Scheduler = "SchedulerField"
Any = "AnyField"
# endregion
# region Misc
Enum = "enum"
Scheduler = "Scheduler"
WorkflowField = "WorkflowField"
IsIntermediate = "IsIntermediate"
BoardField = "BoardField"
Any = "Any"
MetadataItem = "MetadataItem"
MetadataItemCollection = "MetadataItemCollection"
MetadataItemPolymorphic = "MetadataItemPolymorphic"
MetadataDict = "MetadataDict"
# region Internal Field Types
_Collection = "CollectionField"
_CollectionItem = "CollectionItemField"
# endregion
# region DEPRECATED
Boolean = "DEPRECATED_Boolean"
Color = "DEPRECATED_Color"
Conditioning = "DEPRECATED_Conditioning"
Control = "DEPRECATED_Control"
Float = "DEPRECATED_Float"
Image = "DEPRECATED_Image"
Integer = "DEPRECATED_Integer"
Latents = "DEPRECATED_Latents"
String = "DEPRECATED_String"
BooleanCollection = "DEPRECATED_BooleanCollection"
ColorCollection = "DEPRECATED_ColorCollection"
ConditioningCollection = "DEPRECATED_ConditioningCollection"
ControlCollection = "DEPRECATED_ControlCollection"
FloatCollection = "DEPRECATED_FloatCollection"
ImageCollection = "DEPRECATED_ImageCollection"
IntegerCollection = "DEPRECATED_IntegerCollection"
LatentsCollection = "DEPRECATED_LatentsCollection"
StringCollection = "DEPRECATED_StringCollection"
BooleanPolymorphic = "DEPRECATED_BooleanPolymorphic"
ColorPolymorphic = "DEPRECATED_ColorPolymorphic"
ConditioningPolymorphic = "DEPRECATED_ConditioningPolymorphic"
ControlPolymorphic = "DEPRECATED_ControlPolymorphic"
FloatPolymorphic = "DEPRECATED_FloatPolymorphic"
ImagePolymorphic = "DEPRECATED_ImagePolymorphic"
IntegerPolymorphic = "DEPRECATED_IntegerPolymorphic"
LatentsPolymorphic = "DEPRECATED_LatentsPolymorphic"
StringPolymorphic = "DEPRECATED_StringPolymorphic"
MainModel = "DEPRECATED_MainModel"
UNet = "DEPRECATED_UNet"
Vae = "DEPRECATED_Vae"
CLIP = "DEPRECATED_CLIP"
Collection = "DEPRECATED_Collection"
CollectionItem = "DEPRECATED_CollectionItem"
Enum = "DEPRECATED_Enum"
WorkflowField = "DEPRECATED_WorkflowField"
IsIntermediate = "DEPRECATED_IsIntermediate"
BoardField = "DEPRECATED_BoardField"
MetadataItem = "DEPRECATED_MetadataItem"
MetadataItemCollection = "DEPRECATED_MetadataItemCollection"
MetadataItemPolymorphic = "DEPRECATED_MetadataItemPolymorphic"
MetadataDict = "DEPRECATED_MetadataDict"
# endregion
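# Illustrative usage (an assumption, not part of this commit): inside a node class
# definition, a main-model input that should render the SDXL-specific model picker
# would be declared roughly as
#     model: MainModelField = InputField(
#         description="SDXL main model", input=Input.Direct, ui_type=UIType.SDXLMainModel
#     )
# where MainModelField is assumed to come from invokeai.app.invocations.model.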
class UIComponent(str, Enum):
class UIComponent(str, Enum, metaclass=MetaEnum):
"""
The type of UI component to use for a field, used to override the default components, which are \
The type of UI component to use for a field, used to override the default components, which are
inferred from the field type.
"""
@ -133,21 +177,22 @@ class UIComponent(str, Enum):
Slider = "slider"
class _InputField(BaseModel):
class InputFieldJSONSchemaExtra(BaseModel):
"""
*DO NOT USE*
This helper class is used to tell the client about our custom field attributes via OpenAPI
schema generation, and Typescript type generation from that schema. It serves no functional
purpose in the backend.
Extra attributes to be added to input fields and their OpenAPI schema. Used during graph execution,
and by the workflow editor during schema parsing and UI rendering.
"""
input: Input
ui_hidden: bool
ui_type: Optional[UIType]
ui_component: Optional[UIComponent]
ui_order: Optional[int]
ui_choice_labels: Optional[dict[str, str]]
item_default: Optional[Any]
orig_required: bool
field_kind: FieldKind
default: Optional[Any] = None
orig_default: Optional[Any] = None
ui_hidden: bool = False
ui_type: Optional[UIType] = None
ui_component: Optional[UIComponent] = None
ui_order: Optional[int] = None
ui_choice_labels: Optional[dict[str, str]] = None
model_config = ConfigDict(
validate_assignment=True,
@ -155,14 +200,13 @@ class _InputField(BaseModel):
)
class _OutputField(BaseModel):
class OutputFieldJSONSchemaExtra(BaseModel):
"""
*DO NOT USE*
This helper class is used to tell the client about our custom field attributes via OpenAPI
schema generation, and Typescript type generation from that schema. It serves no functional
purpose in the backend.
Extra attributes to be added to output fields and their OpenAPI schema. Used by the workflow editor
during schema parsing and UI rendering.
"""
field_kind: FieldKind
ui_hidden: bool
ui_type: Optional[UIType]
ui_order: Optional[int]
@ -173,13 +217,9 @@ class _OutputField(BaseModel):
)
def get_type(klass: BaseModel) -> str:
"""Helper function to get an invocation or invocation output's type. This is the default value of the `type` field."""
return klass.model_fields["type"].default
def InputField(
# copied from pydantic's Field
# TODO: Can we support default_factory?
default: Any = _Unset,
default_factory: Callable[[], Any] | None = _Unset,
title: str | None = _Unset,
@ -203,12 +243,11 @@ def InputField(
ui_hidden: bool = False,
ui_order: Optional[int] = None,
ui_choice_labels: Optional[dict[str, str]] = None,
item_default: Optional[Any] = None,
) -> Any:
"""
Creates an input field for an invocation.
This is a wrapper for Pydantic's [Field](https://docs.pydantic.dev/1.10/usage/schema/#field-customization) \
This is a wrapper for Pydantic's [Field](https://docs.pydantic.dev/latest/api/fields/#pydantic.fields.Field) \
that adds a few extra parameters to support graph execution and the node editor UI.
:param Input input: [Input.Any] The kind of input this field requires. \
@ -228,28 +267,58 @@ def InputField(
For example, a `string` field will default to a single-line input, but you may want a multi-line textarea instead. \
For this case, you could provide `UIComponent.Textarea`.
: param bool ui_hidden: [False] Specifies whether or not this field should be hidden in the UI.
:param bool ui_hidden: [False] Specifies whether or not this field should be hidden in the UI.
: param int ui_order: [None] Specifies the order in which this field should be rendered in the UI. \
:param int ui_order: [None] Specifies the order in which this field should be rendered in the UI.
: param bool item_default: [None] Specifies the default item value, if this is a collection input. \
Ignored for non-collection fields.
:param dict[str, str] ui_choice_labels: [None] Specifies the labels to use for the choices in an enum field.
"""
json_schema_extra_: dict[str, Any] = {
"input": input,
"ui_type": ui_type,
"ui_component": ui_component,
"ui_hidden": ui_hidden,
"ui_order": ui_order,
"item_default": item_default,
"ui_choice_labels": ui_choice_labels,
"_field_kind": "input",
}
json_schema_extra_ = InputFieldJSONSchemaExtra(
input=input,
ui_type=ui_type,
ui_component=ui_component,
ui_hidden=ui_hidden,
ui_order=ui_order,
ui_choice_labels=ui_choice_labels,
field_kind=FieldKind.Input,
orig_required=True,
)
"""
There is a conflict between the typing of invocation definitions and the typing of an invocation's
`invoke()` function.
On instantiation of a node, the invocation definition is used to create the python class. At this time,
any number of fields may be optional, because they may be provided by connections.
On calling of `invoke()`, however, those fields may be required.
For example, consider a ResizeImageInvocation with an `image: ImageField` field.
`image` is required during the call to `invoke()`, but when the python class is instantiated,
the field may not be present. This is fine, because that image field will be provided by a
connection from an ancestor node, which outputs an image.
This means we want to type the `image` field as optional for the node class definition, but required
for the `invoke()` function.
If we use `typing.Optional` in the node class definition, the field will be typed as optional in the
`invoke()` method, and we'll have to do a lot of runtime checks to ensure the field is present - or
any static type analysis tools will complain.
To get around this, in node class definitions, we type all fields correctly for the `invoke()` function,
but secretly make them optional in `InputField()`. We also store the original required bool and/or default
value. When we call `invoke()`, we use this stored information to do an additional check on the class.
"""
if default_factory is not _Unset and default_factory is not None:
default = default_factory()
logger.warn('"default_factory" is not supported, calling it now to set "default"')
# These are the args we may wish to pass to the pydantic `Field()` function
field_args = {
"default": default,
"default_factory": default_factory,
"title": title,
"description": description,
"pattern": pattern,
@ -266,70 +335,34 @@ def InputField(
"max_length": max_length,
}
"""
Invocation definitions have their fields typed correctly for their `invoke()` functions.
This typing is often more specific than the actual invocation definition requires, because
fields may have values provided only by connections.
For example, consider a ResizeImageInvocation with an `image: ImageField` field.
`image` is required during the call to `invoke()`, but when the python class is instantiated,
the field may not be present. This is fine, because that image field will be provided by
an ancestor node that outputs the image.
So we'd like to type that `image` field as `Optional[ImageField]`. If we do that, however, then
we need to handle a lot of extra logic in the `invoke()` function to check if the field has a
value or not. This is very tedious.
Ideally, the invocation definition would be able to specify that the field is required during
invocation, but optional during instantiation. So the field would be typed as `image: ImageField`,
but when calling the `invoke()` function, we raise an error if the field is not present.
To do this, we need to do a bit of finagling to make the pydantic field optional, and then do
extra validation when calling `invoke()`.
There is some additional logic here to cleanly create the pydantic field via the wrapper.
"""
# Filter out field args not provided
# We only want to pass the args that were provided, otherwise the `Field()` function won't work as expected
provided_args = {k: v for (k, v) in field_args.items() if v is not PydanticUndefined}
if (default is not PydanticUndefined) and (default_factory is not PydanticUndefined):
raise ValueError("Cannot specify both default and default_factory")
# Because we are manually making fields optional, we need to store the original required bool for reference later
json_schema_extra_.orig_required = default is PydanticUndefined
# because we are manually making fields optional, we need to store the original required bool for reference later
if default is PydanticUndefined and default_factory is PydanticUndefined:
json_schema_extra_.update({"orig_required": True})
else:
json_schema_extra_.update({"orig_required": False})
# make Input.Any and Input.Connection fields optional, providing None as a default if the field doesn't already have one
if (input is Input.Any or input is Input.Connection) and default_factory is PydanticUndefined:
# Make Input.Any and Input.Connection fields optional, providing None as a default if the field doesn't already have one
if input is Input.Any or input is Input.Connection:
default_ = None if default is PydanticUndefined else default
provided_args.update({"default": default_})
if default is not PydanticUndefined:
# before invoking, we'll grab the original default value and set it on the field if the field wasn't provided a value
json_schema_extra_.update({"default": default})
json_schema_extra_.update({"orig_default": default})
elif default is not PydanticUndefined and default_factory is PydanticUndefined:
# Before invoking, we'll check for the original default value and set it on the field if the field has no value
json_schema_extra_.default = default
json_schema_extra_.orig_default = default
elif default is not PydanticUndefined:
default_ = default
provided_args.update({"default": default_})
json_schema_extra_.update({"orig_default": default_})
elif default_factory is not PydanticUndefined:
provided_args.update({"default_factory": default_factory})
# TODO: cannot serialize default_factory...
# json_schema_extra_.update(dict(orig_default_factory=default_factory))
json_schema_extra_.orig_default = default_
return Field(
**provided_args,
json_schema_extra=json_schema_extra_,
json_schema_extra=json_schema_extra_.model_dump(exclude_none=True),
)
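# A minimal standalone sketch of the pattern described in the docstring above (an
# illustrative assumption, not InvokeAI code): the field is annotated as required for
# static analysis and invoke-time use, but given a None default so the model can be
# instantiated before a connection supplies the value, with the original "required"
# flag recorded in json_schema_extra for a later runtime check.
from pydantic import BaseModel, Field

class _ExampleNodeFields(BaseModel):
    prompt: str = Field(default=None, json_schema_extra={"orig_required": True})

node = _ExampleNodeFields()  # allowed: the value may arrive later via a connection
assert node.prompt is None
assert _ExampleNodeFields.model_fields["prompt"].json_schema_extra["orig_required"] is True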
def OutputField(
# copied from pydantic's Field
default: Any = _Unset,
default_factory: Callable[[], Any] | None = _Unset,
title: str | None = _Unset,
description: str | None = _Unset,
pattern: str | None = _Unset,
@ -362,13 +395,12 @@ def OutputField(
`MainModelField`. So to ensure the base-model-specific UI is rendered, you can use \
`UIType.SDXLMainModel` to indicate that the field is an SDXL main model field.
: param bool ui_hidden: [False] Specifies whether or not this field should be hidden in the UI. \
:param bool ui_hidden: [False] Specifies whether or not this field should be hidden in the UI. \
: param int ui_order: [None] Specifies the order in which this field should be rendered in the UI. \
:param int ui_order: [None] Specifies the order in which this field should be rendered in the UI. \
"""
return Field(
default=default,
default_factory=default_factory,
title=title,
description=description,
pattern=pattern,
@ -383,12 +415,12 @@ def OutputField(
decimal_places=decimal_places,
min_length=min_length,
max_length=max_length,
json_schema_extra={
"ui_type": ui_type,
"ui_hidden": ui_hidden,
"ui_order": ui_order,
"_field_kind": "output",
},
json_schema_extra=OutputFieldJSONSchemaExtra(
ui_type=ui_type,
ui_hidden=ui_hidden,
ui_order=ui_order,
field_kind=FieldKind.Output,
).model_dump(exclude_none=True),
)
@ -401,10 +433,10 @@ class UIConfigBase(BaseModel):
tags: Optional[list[str]] = Field(default_factory=None, description="The node's tags")
title: Optional[str] = Field(default=None, description="The node's display name")
category: Optional[str] = Field(default=None, description="The node's category")
version: Optional[str] = Field(
default=None,
version: str = Field(
description='The node\'s version. Should be a valid semver string e.g. "1.0.0" or "3.8.13".',
)
node_pack: Optional[str] = Field(default=None, description="Whether or not this is a custom node")
model_config = ConfigDict(
validate_assignment=True,
@ -447,29 +479,39 @@ class BaseInvocationOutput(BaseModel):
@classmethod
def register_output(cls, output: BaseInvocationOutput) -> None:
"""Registers an invocation output."""
cls._output_classes.add(output)
@classmethod
def get_outputs(cls) -> Iterable[BaseInvocationOutput]:
"""Gets all invocation outputs."""
return cls._output_classes
@classmethod
def get_outputs_union(cls) -> UnionType:
"""Gets a union of all invocation outputs."""
outputs_union = Union[tuple(cls._output_classes)] # type: ignore [valid-type]
return outputs_union # type: ignore [return-value]
@classmethod
def get_output_types(cls) -> Iterable[str]:
return (get_type(i) for i in BaseInvocationOutput.get_outputs())
"""Gets all invocation output types."""
return (i.get_type() for i in BaseInvocationOutput.get_outputs())
@staticmethod
def json_schema_extra(schema: dict[str, Any], model_class: Type[BaseModel]) -> None:
"""Adds various UI-facing attributes to the invocation output's OpenAPI schema."""
# Because we use a pydantic Literal field with default value for the invocation type,
# it will be typed as optional in the OpenAPI schema. Make it required manually.
if "required" not in schema or not isinstance(schema["required"], list):
schema["required"] = []
schema["required"].extend(["type"])
@classmethod
def get_type(cls) -> str:
"""Gets the invocation output's type, as provided by the `@invocation_output` decorator."""
return cls.model_fields["type"].default
model_config = ConfigDict(
protected_namespaces=(),
validate_assignment=True,
@ -499,21 +541,29 @@ class BaseInvocation(ABC, BaseModel):
_invocation_classes: ClassVar[set[BaseInvocation]] = set()
@classmethod
def get_type(cls) -> str:
"""Gets the invocation's type, as provided by the `@invocation` decorator."""
return cls.model_fields["type"].default
@classmethod
def register_invocation(cls, invocation: BaseInvocation) -> None:
"""Registers an invocation."""
cls._invocation_classes.add(invocation)
@classmethod
def get_invocations_union(cls) -> UnionType:
"""Gets a union of all invocation types."""
invocations_union = Union[tuple(cls._invocation_classes)] # type: ignore [valid-type]
return invocations_union # type: ignore [return-value]
@classmethod
def get_invocations(cls) -> Iterable[BaseInvocation]:
"""Gets all invocations, respecting the allowlist and denylist."""
app_config = InvokeAIAppConfig.get_config()
allowed_invocations: set[BaseInvocation] = set()
for sc in cls._invocation_classes:
invocation_type = get_type(sc)
invocation_type = sc.get_type()
is_in_allowlist = (
invocation_type in app_config.allow_nodes if isinstance(app_config.allow_nodes, list) else True
)
@ -526,28 +576,32 @@ class BaseInvocation(ABC, BaseModel):
@classmethod
def get_invocations_map(cls) -> dict[str, BaseInvocation]:
# Get the type strings out of the literals and into a dictionary
return {get_type(i): i for i in BaseInvocation.get_invocations()}
"""Gets a map of all invocation types to their invocation classes."""
return {i.get_type(): i for i in BaseInvocation.get_invocations()}
@classmethod
def get_invocation_types(cls) -> Iterable[str]:
return (get_type(i) for i in BaseInvocation.get_invocations())
"""Gets all invocation types."""
return (i.get_type() for i in BaseInvocation.get_invocations())
@classmethod
def get_output_type(cls) -> BaseInvocationOutput:
def get_output_annotation(cls) -> BaseInvocationOutput:
"""Gets the invocation's output annotation (i.e. the return annotation of its `invoke()` method)."""
return signature(cls.invoke).return_annotation
@staticmethod
def json_schema_extra(schema: dict[str, Any], model_class: Type[BaseModel]) -> None:
# Add the various UI-facing attributes to the schema. These are used to build the invocation templates.
uiconfig = getattr(model_class, "UIConfig", None)
if uiconfig and hasattr(uiconfig, "title"):
def json_schema_extra(schema: dict[str, Any], model_class: Type[BaseModel], *args, **kwargs) -> None:
"""Adds various UI-facing attributes to the invocation's OpenAPI schema."""
uiconfig = cast(UIConfigBase | None, getattr(model_class, "UIConfig", None))
if uiconfig is not None:
if uiconfig.title is not None:
schema["title"] = uiconfig.title
if uiconfig and hasattr(uiconfig, "tags"):
if uiconfig.tags is not None:
schema["tags"] = uiconfig.tags
if uiconfig and hasattr(uiconfig, "category"):
if uiconfig.category is not None:
schema["category"] = uiconfig.category
if uiconfig and hasattr(uiconfig, "version"):
if uiconfig.node_pack is not None:
schema["node_pack"] = uiconfig.node_pack
schema["version"] = uiconfig.version
if "required" not in schema or not isinstance(schema["required"], list):
schema["required"] = []
@ -559,6 +613,10 @@ class BaseInvocation(ABC, BaseModel):
pass
def invoke_internal(self, context: InvocationContext) -> BaseInvocationOutput:
"""
Internal invoke method, calls `invoke()` after some prep.
Handles optional fields that are required to call `invoke()`, as well as the invocation cache.
"""
for field_name, field in self.model_fields.items():
if not field.json_schema_extra or callable(field.json_schema_extra):
# something has gone terribly awry, we should always have this and it should be a dict
@ -598,21 +656,20 @@ class BaseInvocation(ABC, BaseModel):
context.services.logger.debug(f'Skipping invocation cache for "{self.get_type()}": {self.id}')
return self.invoke(context)
def get_type(self) -> str:
return self.model_fields["type"].default
id: str = Field(
default_factory=uuid_string,
description="The id of this instance of an invocation. Must be unique among all instances of invocations.",
json_schema_extra={"_field_kind": "internal"},
json_schema_extra={"field_kind": FieldKind.NodeAttribute},
)
is_intermediate: bool = Field(
default=False,
description="Whether or not this is an intermediate invocation.",
json_schema_extra={"ui_type": UIType.IsIntermediate, "_field_kind": "internal"},
json_schema_extra={"ui_type": "IsIntermediate", "field_kind": FieldKind.NodeAttribute},
)
use_cache: bool = Field(
default=True, description="Whether or not to use the cache", json_schema_extra={"_field_kind": "internal"}
default=True,
description="Whether or not to use the cache",
json_schema_extra={"field_kind": FieldKind.NodeAttribute},
)
UIConfig: ClassVar[Type[UIConfigBase]]
@ -629,12 +686,15 @@ class BaseInvocation(ABC, BaseModel):
TBaseInvocation = TypeVar("TBaseInvocation", bound=BaseInvocation)
RESERVED_INPUT_FIELD_NAMES = {
RESERVED_NODE_ATTRIBUTE_FIELD_NAMES = {
"id",
"is_intermediate",
"use_cache",
"type",
"workflow",
}
RESERVED_INPUT_FIELD_NAMES = {
"metadata",
}
@ -652,40 +712,59 @@ RESERVED_PYDANTIC_FIELD_NAMES = {m[0] for m in inspect.getmembers(_Model())}
def validate_fields(model_fields: dict[str, FieldInfo], model_type: str) -> None:
"""
Validates the fields of an invocation or invocation output:
- must not override any pydantic reserved fields
- must be created via `InputField`, `OutputField`, or be an internal field defined in this file
- Must not override any pydantic reserved fields
- Must have a type annotation
- Must have a json_schema_extra dict
- Must have field_kind in json_schema_extra
- Field name must not be reserved, according to its field_kind
"""
for name, field in model_fields.items():
if name in RESERVED_PYDANTIC_FIELD_NAMES:
raise InvalidFieldError(f'Invalid field name "{name}" on "{model_type}" (reserved by pydantic)')
field_kind = (
# _field_kind is defined via InputField(), OutputField() or by one of the internal fields defined in this file
field.json_schema_extra.get("_field_kind", None) if field.json_schema_extra else None
if not field.annotation:
raise InvalidFieldError(f'Invalid field type "{name}" on "{model_type}" (missing annotation)')
if not isinstance(field.json_schema_extra, dict):
raise InvalidFieldError(
f'Invalid field definition for "{name}" on "{model_type}" (missing json_schema_extra dict)'
)
field_kind = field.json_schema_extra.get("field_kind", None)
# must have a field_kind
if field_kind is None or field_kind not in {"input", "output", "internal"}:
if not isinstance(field_kind, FieldKind):
raise InvalidFieldError(
f'Invalid field definition for "{name}" on "{model_type}" (maybe it\'s not an InputField or OutputField?)'
)
if field_kind == "input" and name in RESERVED_INPUT_FIELD_NAMES:
if field_kind is FieldKind.Input and (
name in RESERVED_NODE_ATTRIBUTE_FIELD_NAMES or name in RESERVED_INPUT_FIELD_NAMES
):
raise InvalidFieldError(f'Invalid field name "{name}" on "{model_type}" (reserved input field name)')
if field_kind == "output" and name in RESERVED_OUTPUT_FIELD_NAMES:
if field_kind is FieldKind.Output and name in RESERVED_OUTPUT_FIELD_NAMES:
raise InvalidFieldError(f'Invalid field name "{name}" on "{model_type}" (reserved output field name)')
# internal fields *must* be in the reserved list
if (
field_kind == "internal"
and name not in RESERVED_INPUT_FIELD_NAMES
and name not in RESERVED_OUTPUT_FIELD_NAMES
):
if (field_kind is FieldKind.Internal) and name not in RESERVED_INPUT_FIELD_NAMES:
raise InvalidFieldError(
f'Invalid field name "{name}" on "{model_type}" (internal field without reserved name)'
)
# node attribute fields *must* be in the reserved list
if (
field_kind is FieldKind.NodeAttribute
and name not in RESERVED_NODE_ATTRIBUTE_FIELD_NAMES
and name not in RESERVED_OUTPUT_FIELD_NAMES
):
raise InvalidFieldError(
f'Invalid field name "{name}" on "{model_type}" (node attribute field without reserved name)'
)
ui_type = field.json_schema_extra.get("ui_type", None)
if isinstance(ui_type, str) and ui_type.startswith("DEPRECATED_"):
logger.warn(f"\"UIType.{ui_type.split('_')[-1]}\" is deprecated, ignoring")
field.json_schema_extra.pop("ui_type")
return None
@ -720,21 +799,30 @@ def invocation(
validate_fields(cls.model_fields, invocation_type)
# Add OpenAPI schema extras
uiconf_name = cls.__qualname__ + ".UIConfig"
if not hasattr(cls, "UIConfig") or cls.UIConfig.__qualname__ != uiconf_name:
cls.UIConfig = type(uiconf_name, (UIConfigBase,), {})
if title is not None:
uiconfig_name = cls.__qualname__ + ".UIConfig"
if not hasattr(cls, "UIConfig") or cls.UIConfig.__qualname__ != uiconfig_name:
cls.UIConfig = type(uiconfig_name, (UIConfigBase,), {})
cls.UIConfig.title = title
if tags is not None:
cls.UIConfig.tags = tags
if category is not None:
cls.UIConfig.category = category
# Grab the node pack's name from the module name, if it's a custom node
module_name = cls.__module__.split(".")[0]
if module_name.endswith(CUSTOM_NODE_PACK_SUFFIX):
cls.UIConfig.node_pack = module_name.split(CUSTOM_NODE_PACK_SUFFIX)[0]
else:
cls.UIConfig.node_pack = None
if version is not None:
try:
semver.Version.parse(version)
except ValueError as e:
raise InvalidVersionError(f'Invalid version string for node "{invocation_type}": "{version}"') from e
cls.UIConfig.version = version
else:
logger.warn(f'No version specified for node "{invocation_type}", using "1.0.0"')
cls.UIConfig.version = "1.0.0"
if use_cache is not None:
cls.model_fields["use_cache"].default = use_cache
@ -749,7 +837,7 @@ def invocation(
invocation_type_annotation = Literal[invocation_type] # type: ignore
invocation_type_field = Field(
title="type", default=invocation_type, json_schema_extra={"_field_kind": "internal"}
title="type", default=invocation_type, json_schema_extra={"field_kind": FieldKind.NodeAttribute}
)
docstring = cls.__doc__
@ -795,7 +883,9 @@ def invocation_output(
# Add the output type to the model.
output_type_annotation = Literal[output_type] # type: ignore
output_type_field = Field(title="type", default=output_type, json_schema_extra={"_field_kind": "internal"})
output_type_field = Field(
title="type", default=output_type, json_schema_extra={"field_kind": FieldKind.NodeAttribute}
)
docstring = cls.__doc__
cls = create_model(
@ -827,7 +917,7 @@ WorkflowFieldValidator = TypeAdapter(WorkflowField)
class WithWorkflow(BaseModel):
workflow: Optional[WorkflowField] = Field(
default=None, description=FieldDescriptions.workflow, json_schema_extra={"_field_kind": "internal"}
default=None, description=FieldDescriptions.workflow, json_schema_extra={"field_kind": FieldKind.NodeAttribute}
)
@ -845,5 +935,11 @@ MetadataFieldValidator = TypeAdapter(MetadataField)
class WithMetadata(BaseModel):
metadata: Optional[MetadataField] = Field(
default=None, description=FieldDescriptions.metadata, json_schema_extra={"_field_kind": "internal"}
default=None,
description=FieldDescriptions.metadata,
json_schema_extra=InputFieldJSONSchemaExtra(
field_kind=FieldKind.Internal,
input=Input.Connection,
orig_required=False,
).model_dump(exclude_none=True),
)
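# Illustrative sketch (not part of this commit): a minimal custom node written against the
# decorator and field helpers above, as it might appear in its own module. The invocation
# type "example_add_one" and the class name are made up; IntegerOutput is assumed to come
# from invokeai.app.invocations.primitives.
from invokeai.app.invocations.baseinvocation import BaseInvocation, InputField, InvocationContext, invocation
from invokeai.app.invocations.primitives import IntegerOutput

@invocation("example_add_one", title="Add One", tags=["example", "math"], category="math", version="1.0.0")
class AddOneInvocation(BaseInvocation):
    """Adds one to an integer."""

    value: int = InputField(default=0, description="The value to increment")

    def invoke(self, context: InvocationContext) -> IntegerOutput:
        return IntegerOutput(value=self.value + 1)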

View File

@ -5,7 +5,7 @@ import numpy as np
from pydantic import ValidationInfo, field_validator
from invokeai.app.invocations.primitives import IntegerCollectionOutput
from invokeai.app.util.misc import SEED_MAX, get_random_seed
from invokeai.app.util.misc import SEED_MAX
from .baseinvocation import BaseInvocation, InputField, InvocationContext, invocation
@ -55,7 +55,7 @@ class RangeOfSizeInvocation(BaseInvocation):
title="Random Range",
tags=["range", "integer", "random", "collection"],
category="collections",
version="1.0.0",
version="1.0.1",
use_cache=False,
)
class RandomRangeInvocation(BaseInvocation):
@ -65,10 +65,10 @@ class RandomRangeInvocation(BaseInvocation):
high: int = InputField(default=np.iinfo(np.int32).max, description="The exclusive high value")
size: int = InputField(default=1, description="The number of values to generate")
seed: int = InputField(
default=0,
ge=0,
le=SEED_MAX,
description="The seed for the RNG (omit for random)",
default_factory=get_random_seed,
)
def invoke(self, context: InvocationContext) -> IntegerCollectionOutput:

View File

@ -6,6 +6,7 @@ import sys
from importlib.util import module_from_spec, spec_from_file_location
from pathlib import Path
from invokeai.app.invocations.baseinvocation import CUSTOM_NODE_PACK_SUFFIX
from invokeai.backend.util.logging import InvokeAILogger
logger = InvokeAILogger.get_logger()
@ -32,13 +33,15 @@ for d in Path(__file__).parent.iterdir():
if module_name in globals():
continue
# we have a legit module to import
spec = spec_from_file_location(module_name, init.absolute())
# load the module, adding a suffix to identify it as a custom node pack
spec = spec_from_file_location(f"{module_name}{CUSTOM_NODE_PACK_SUFFIX}", init.absolute())
if spec is None or spec.loader is None:
logger.warn(f"Could not load {init}")
continue
logger.info(f"Loading node pack {module_name}")
module = module_from_spec(spec)
sys.modules[spec.name] = module
spec.loader.exec_module(module)
@ -47,5 +50,5 @@ for d in Path(__file__).parent.iterdir():
del init, module_name
logger.info(f"Loaded {loaded_count} modules from {Path(__file__).parent}")
if loaded_count > 0:
logger.info(f"Loaded {loaded_count} node packs from {Path(__file__).parent}")

View File

@ -100,6 +100,61 @@ class ImageCropInvocation(BaseInvocation, WithWorkflow, WithMetadata):
)
@invocation(
invocation_type="img_pad_crop",
title="Center Pad or Crop Image",
category="image",
tags=["image", "pad", "crop"],
version="1.0.0",
)
class CenterPadCropInvocation(BaseInvocation):
"""Pad or crop an image's sides from the center by specified pixels. Positive values are outside of the image."""
image: ImageField = InputField(description="The image to crop")
left: int = InputField(
default=0,
description="Number of pixels to pad/crop from the left (negative values crop inwards, positive values pad outwards)",
)
right: int = InputField(
default=0,
description="Number of pixels to pad/crop from the right (negative values crop inwards, positive values pad outwards)",
)
top: int = InputField(
default=0,
description="Number of pixels to pad/crop from the top (negative values crop inwards, positive values pad outwards)",
)
bottom: int = InputField(
default=0,
description="Number of pixels to pad/crop from the bottom (negative values crop inwards, positive values pad outwards)",
)
def invoke(self, context: InvocationContext) -> ImageOutput:
image = context.services.images.get_pil_image(self.image.image_name)
# Calculate and create new image dimensions
new_width = image.width + self.right + self.left
new_height = image.height + self.top + self.bottom
image_crop = Image.new(mode="RGBA", size=(new_width, new_height), color=(0, 0, 0, 0))
# Paste the input image onto the new, padded/cropped canvas
image_crop.paste(image, (self.left, self.top))
image_dto = context.services.images.create(
image=image_crop,
image_origin=ResourceOrigin.INTERNAL,
image_category=ImageCategory.GENERAL,
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
)
return ImageOutput(
image=ImageField(image_name=image_dto.image_name),
width=image_dto.width,
height=image_dto.height,
)
@invocation("img_paste", title="Paste Image", tags=["image", "paste"], category="image", version="1.1.0")
class ImagePasteInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Pastes an image into another image."""

View File

@ -8,7 +8,7 @@ from PIL import Image, ImageOps
from invokeai.app.invocations.primitives import ColorField, ImageField, ImageOutput
from invokeai.app.services.image_records.image_records_common import ImageCategory, ResourceOrigin
from invokeai.app.util.misc import SEED_MAX, get_random_seed
from invokeai.app.util.misc import SEED_MAX
from invokeai.backend.image_util.cv2_inpaint import cv2_inpaint
from invokeai.backend.image_util.lama import LaMA
from invokeai.backend.image_util.patchmatch import PatchMatch
@ -154,17 +154,17 @@ class InfillColorInvocation(BaseInvocation, WithWorkflow, WithMetadata):
)
@invocation("infill_tile", title="Tile Infill", tags=["image", "inpaint"], category="inpaint", version="1.1.0")
@invocation("infill_tile", title="Tile Infill", tags=["image", "inpaint"], category="inpaint", version="1.1.1")
class InfillTileInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Infills transparent areas of an image with tiles of the image"""
image: ImageField = InputField(description="The image to infill")
tile_size: int = InputField(default=32, ge=1, description="The tile size (px)")
seed: int = InputField(
default=0,
ge=0,
le=SEED_MAX,
description="The seed to use for tile generation (omit for random)",
default_factory=get_random_seed,
)
def invoke(self, context: InvocationContext) -> ImageOutput:

View File

@ -11,7 +11,6 @@ from invokeai.app.invocations.baseinvocation import (
InputField,
InvocationContext,
OutputField,
UIType,
invocation,
invocation_output,
)
@ -67,7 +66,7 @@ class IPAdapterInvocation(BaseInvocation):
# weight: float = InputField(default=1.0, description="The weight of the IP-Adapter.", ui_type=UIType.Float)
weight: Union[float, List[float]] = InputField(
default=1, ge=-1, description="The weight given to the IP-Adapter", ui_type=UIType.Float, title="Weight"
default=1, ge=-1, description="The weight given to the IP-Adapter", title="Weight"
)
begin_step_percent: float = InputField(

View File

@ -79,6 +79,12 @@ DEFAULT_PRECISION = choose_precision(choose_torch_device())
SAMPLER_NAME_VALUES = Literal[tuple(SCHEDULER_MAP.keys())]
# HACK: Many nodes are currently hard-coded to use a fixed latent scale factor of 8. This is fragile, and will need to
# be addressed if future models use a different latent scale factor. Also, note that there may be places where the scale
# factor is hard-coded to a literal '8' rather than using this constant.
# The ratio of image:latent dimensions is LATENT_SCALE_FACTOR:1, or 8:1.
LATENT_SCALE_FACTOR = 8
@invocation_output("scheduler_output")
class SchedulerOutput(BaseInvocationOutput):
@ -215,7 +221,7 @@ def get_scheduler(
title="Denoise Latents",
tags=["latents", "denoise", "txt2img", "t2i", "t2l", "img2img", "i2i", "l2l"],
category="latents",
version="1.4.0",
version="1.5.0",
)
class DenoiseLatentsInvocation(BaseInvocation):
"""Denoises noisy latents to decodable images"""
@ -273,8 +279,14 @@ class DenoiseLatentsInvocation(BaseInvocation):
input=Input.Connection,
ui_order=7,
)
cfg_rescale_multiplier: float = InputField(
default=0, ge=0, lt=1, description=FieldDescriptions.cfg_rescale_multiplier
)
latents: Optional[LatentsField] = InputField(
default=None, description=FieldDescriptions.latents, input=Input.Connection
default=None,
description=FieldDescriptions.latents,
input=Input.Connection,
ui_order=4,
)
denoise_mask: Optional[DenoiseMaskField] = InputField(
default=None,
@ -329,6 +341,7 @@ class DenoiseLatentsInvocation(BaseInvocation):
unconditioned_embeddings=uc,
text_embeddings=c,
guidance_scale=self.cfg_scale,
guidance_rescale_multiplier=self.cfg_rescale_multiplier,
extra=extra_conditioning_info,
postprocessing_settings=PostprocessingSettings(
threshold=0.0, # threshold,
@ -387,9 +400,9 @@ class DenoiseLatentsInvocation(BaseInvocation):
exit_stack: ExitStack,
do_classifier_free_guidance: bool = True,
) -> List[ControlNetData]:
# assuming fixed dimensional scaling of 8:1 for image:latents
control_height_resize = latents_shape[2] * 8
control_width_resize = latents_shape[3] * 8
# Assuming fixed dimensional scaling of LATENT_SCALE_FACTOR.
control_height_resize = latents_shape[2] * LATENT_SCALE_FACTOR
control_width_resize = latents_shape[3] * LATENT_SCALE_FACTOR
if control_input is None:
control_list = None
elif isinstance(control_input, list) and len(control_input) == 0:
@ -902,12 +915,12 @@ class ResizeLatentsInvocation(BaseInvocation):
)
width: int = InputField(
ge=64,
multiple_of=8,
multiple_of=LATENT_SCALE_FACTOR,
description=FieldDescriptions.width,
)
height: int = InputField(
ge=64,
multiple_of=8,
multiple_of=LATENT_SCALE_FACTOR,
description=FieldDescriptions.height,
)
mode: LATENTS_INTERPOLATION_MODE = InputField(default="bilinear", description=FieldDescriptions.interp_mode)
@ -921,7 +934,7 @@ class ResizeLatentsInvocation(BaseInvocation):
resized_latents = torch.nn.functional.interpolate(
latents.to(device),
size=(self.height // 8, self.width // 8),
size=(self.height // LATENT_SCALE_FACTOR, self.width // LATENT_SCALE_FACTOR),
mode=self.mode,
antialias=self.antialias if self.mode in ["bilinear", "bicubic"] else False,
)
@ -1159,3 +1172,60 @@ class BlendLatentsInvocation(BaseInvocation):
# context.services.latents.set(name, resized_latents)
context.services.latents.save(name, blended_latents)
return build_latents_output(latents_name=name, latents=blended_latents)
# The Crop Latents node was copied from @skunkworxdark's implementation here:
# https://github.com/skunkworxdark/XYGrid_nodes/blob/74647fa9c1fa57d317a94bd43ca689af7f0aae5e/images_to_grids.py#L1117C1-L1167C80
@invocation(
"crop_latents",
title="Crop Latents",
tags=["latents", "crop"],
category="latents",
version="1.0.0",
)
# TODO(ryand): Named `CropLatentsCoreInvocation` to prevent a conflict with custom node `CropLatentsInvocation`.
# Currently, if the class names conflict then 'GET /openapi.json' fails.
class CropLatentsCoreInvocation(BaseInvocation):
"""Crops a latent-space tensor to a box specified in image-space. The box dimensions and coordinates must be
divisible by the latent scale factor of 8.
"""
latents: LatentsField = InputField(
description=FieldDescriptions.latents,
input=Input.Connection,
)
x: int = InputField(
ge=0,
multiple_of=LATENT_SCALE_FACTOR,
description="The left x coordinate (in px) of the crop rectangle in image space. This value will be converted to a dimension in latent space.",
)
y: int = InputField(
ge=0,
multiple_of=LATENT_SCALE_FACTOR,
description="The top y coordinate (in px) of the crop rectangle in image space. This value will be converted to a dimension in latent space.",
)
width: int = InputField(
ge=1,
multiple_of=LATENT_SCALE_FACTOR,
description="The width (in px) of the crop rectangle in image space. This value will be converted to a dimension in latent space.",
)
height: int = InputField(
ge=1,
multiple_of=LATENT_SCALE_FACTOR,
description="The height (in px) of the crop rectangle in image space. This value will be converted to a dimension in latent space.",
)
def invoke(self, context: InvocationContext) -> LatentsOutput:
latents = context.services.latents.get(self.latents.latents_name)
x1 = self.x // LATENT_SCALE_FACTOR
y1 = self.y // LATENT_SCALE_FACTOR
x2 = x1 + (self.width // LATENT_SCALE_FACTOR)
y2 = y1 + (self.height // LATENT_SCALE_FACTOR)
cropped_latents = latents[..., y1:y2, x1:x2]
name = f"{context.graph_execution_state_id}__{self.id}"
context.services.latents.save(name, cropped_latents)
return build_latents_output(latents_name=name, latents=cropped_latents)

View File

@ -127,6 +127,9 @@ class CoreMetadataInvocation(BaseInvocation):
seed: Optional[int] = InputField(default=None, description="The seed used for noise generation")
rand_device: Optional[str] = InputField(default=None, description="The device used for random number generation")
cfg_scale: Optional[float] = InputField(default=None, description="The classifier-free guidance scale parameter")
cfg_rescale_multiplier: Optional[float] = InputField(
default=None, description=FieldDescriptions.cfg_rescale_multiplier
)
steps: Optional[int] = InputField(default=None, description="The number of steps used for inference")
scheduler: Optional[str] = InputField(default=None, description="The scheduler used for inference")
seamless_x: Optional[bool] = InputField(default=None, description="Whether seamless tiling was used on the X axis")

View File

@ -14,7 +14,6 @@ from .baseinvocation import (
InputField,
InvocationContext,
OutputField,
UIType,
invocation,
invocation_output,
)
@ -395,7 +394,6 @@ class VaeLoaderInvocation(BaseInvocation):
vae_model: VAEModelField = InputField(
description=FieldDescriptions.vae_model,
input=Input.Direct,
ui_type=UIType.VaeModel,
title="VAE",
)

View File

@ -6,7 +6,7 @@ from pydantic import field_validator
from invokeai.app.invocations.latent import LatentsField
from invokeai.app.shared.fields import FieldDescriptions
from invokeai.app.util.misc import SEED_MAX, get_random_seed
from invokeai.app.util.misc import SEED_MAX
from ...backend.util.devices import choose_torch_device, torch_dtype
from .baseinvocation import (
@ -83,16 +83,16 @@ def build_noise_output(latents_name: str, latents: torch.Tensor, seed: int):
title="Noise",
tags=["latents", "noise"],
category="latents",
version="1.0.0",
version="1.0.1",
)
class NoiseInvocation(BaseInvocation):
"""Generates latent noise."""
seed: int = InputField(
default=0,
ge=0,
le=SEED_MAX,
description=FieldDescriptions.seed,
default_factory=get_random_seed,
)
width: int = InputField(
default=512,

View File

@ -62,12 +62,12 @@ class BooleanInvocation(BaseInvocation):
title="Boolean Collection Primitive",
tags=["primitives", "boolean", "collection"],
category="primitives",
version="1.0.0",
version="1.0.1",
)
class BooleanCollectionInvocation(BaseInvocation):
"""A collection of boolean primitive values"""
collection: list[bool] = InputField(default_factory=list, description="The collection of boolean values")
collection: list[bool] = InputField(default=[], description="The collection of boolean values")
def invoke(self, context: InvocationContext) -> BooleanCollectionOutput:
return BooleanCollectionOutput(collection=self.collection)
@ -111,12 +111,12 @@ class IntegerInvocation(BaseInvocation):
title="Integer Collection Primitive",
tags=["primitives", "integer", "collection"],
category="primitives",
version="1.0.0",
version="1.0.1",
)
class IntegerCollectionInvocation(BaseInvocation):
"""A collection of integer primitive values"""
collection: list[int] = InputField(default_factory=list, description="The collection of integer values")
collection: list[int] = InputField(default=[], description="The collection of integer values")
def invoke(self, context: InvocationContext) -> IntegerCollectionOutput:
return IntegerCollectionOutput(collection=self.collection)
@ -158,12 +158,12 @@ class FloatInvocation(BaseInvocation):
title="Float Collection Primitive",
tags=["primitives", "float", "collection"],
category="primitives",
version="1.0.0",
version="1.0.1",
)
class FloatCollectionInvocation(BaseInvocation):
"""A collection of float primitive values"""
collection: list[float] = InputField(default_factory=list, description="The collection of float values")
collection: list[float] = InputField(default=[], description="The collection of float values")
def invoke(self, context: InvocationContext) -> FloatCollectionOutput:
return FloatCollectionOutput(collection=self.collection)
@ -205,12 +205,12 @@ class StringInvocation(BaseInvocation):
title="String Collection Primitive",
tags=["primitives", "string", "collection"],
category="primitives",
version="1.0.0",
version="1.0.1",
)
class StringCollectionInvocation(BaseInvocation):
"""A collection of string primitive values"""
collection: list[str] = InputField(default_factory=list, description="The collection of string values")
collection: list[str] = InputField(default=[], description="The collection of string values")
def invoke(self, context: InvocationContext) -> StringCollectionOutput:
return StringCollectionOutput(collection=self.collection)
@ -467,13 +467,13 @@ class ConditioningInvocation(BaseInvocation):
title="Conditioning Collection Primitive",
tags=["primitives", "conditioning", "collection"],
category="primitives",
version="1.0.0",
version="1.0.1",
)
class ConditioningCollectionInvocation(BaseInvocation):
"""A collection of conditioning tensor primitive values"""
collection: list[ConditioningField] = InputField(
default_factory=list,
default=[],
description="The collection of conditioning tensors",
)

View File

@ -44,7 +44,7 @@ class DynamicPromptInvocation(BaseInvocation):
title="Prompts from File",
tags=["prompt", "file"],
category="prompt",
version="1.0.0",
version="1.0.1",
)
class PromptsFromFileInvocation(BaseInvocation):
"""Loads prompts from a text file"""
@ -82,7 +82,7 @@ class PromptsFromFileInvocation(BaseInvocation):
end_line = start_line + max_prompts
if max_prompts <= 0:
end_line = np.iinfo(np.int32).max
with open(file_path) as f:
with open(file_path, encoding="utf-8") as f:
for i, line in enumerate(f):
if i >= start_line and i < end_line:
prompts.append((pre_prompt or "") + line.strip() + (post_prompt or ""))

View File

@ -9,7 +9,6 @@ from invokeai.app.invocations.baseinvocation import (
InputField,
InvocationContext,
OutputField,
UIType,
invocation,
invocation_output,
)
@ -59,7 +58,7 @@ class T2IAdapterInvocation(BaseInvocation):
ui_order=-1,
)
weight: Union[float, list[float]] = InputField(
default=1, ge=0, description="The weight given to the T2I-Adapter", ui_type=UIType.Float, title="Weight"
default=1, ge=0, description="The weight given to the T2I-Adapter", title="Weight"
)
begin_step_percent: float = InputField(
default=0, ge=-1, le=2, description="When the T2I-Adapter is first applied (% of total steps)"

View File

@ -0,0 +1,181 @@
import numpy as np
from PIL import Image
from pydantic import BaseModel
from invokeai.app.invocations.baseinvocation import (
BaseInvocation,
BaseInvocationOutput,
InputField,
InvocationContext,
OutputField,
WithMetadata,
WithWorkflow,
invocation,
invocation_output,
)
from invokeai.app.invocations.primitives import ImageField, ImageOutput
from invokeai.app.services.image_records.image_records_common import ImageCategory, ResourceOrigin
from invokeai.backend.tiles.tiles import calc_tiles_with_overlap, merge_tiles_with_linear_blending
from invokeai.backend.tiles.utils import Tile
class TileWithImage(BaseModel):
tile: Tile
image: ImageField
@invocation_output("calculate_image_tiles_output")
class CalculateImageTilesOutput(BaseInvocationOutput):
tiles: list[Tile] = OutputField(description="The tiles coordinates that cover a particular image shape.")
@invocation("calculate_image_tiles", title="Calculate Image Tiles", tags=["tiles"], category="tiles", version="1.0.0")
class CalculateImageTilesInvocation(BaseInvocation):
"""Calculate the coordinates and overlaps of tiles that cover a target image shape."""
image_width: int = InputField(ge=1, default=1024, description="The image width, in pixels, to calculate tiles for.")
image_height: int = InputField(
ge=1, default=1024, description="The image height, in pixels, to calculate tiles for."
)
tile_width: int = InputField(ge=1, default=576, description="The tile width, in pixels.")
tile_height: int = InputField(ge=1, default=576, description="The tile height, in pixels.")
overlap: int = InputField(
ge=0,
default=128,
description="The target overlap, in pixels, between adjacent tiles. Adjacent tiles will overlap by at least this amount",
)
def invoke(self, context: InvocationContext) -> CalculateImageTilesOutput:
tiles = calc_tiles_with_overlap(
image_height=self.image_height,
image_width=self.image_width,
tile_height=self.tile_height,
tile_width=self.tile_width,
overlap=self.overlap,
)
return CalculateImageTilesOutput(tiles=tiles)
@invocation_output("tile_to_properties_output")
class TileToPropertiesOutput(BaseInvocationOutput):
coords_left: int = OutputField(description="Left coordinate of the tile relative to its parent image.")
coords_right: int = OutputField(description="Right coordinate of the tile relative to its parent image.")
coords_top: int = OutputField(description="Top coordinate of the tile relative to its parent image.")
coords_bottom: int = OutputField(description="Bottom coordinate of the tile relative to its parent image.")
# HACK: The width and height fields are 'meta' fields that can easily be calculated from the other fields on this
# object. Including redundant fields that can cheaply/easily be re-calculated goes against conventional API design
# principles. These fields are included because 1) they are often useful in tiled workflows, and 2) they are
# difficult to calculate in a workflow (even though it's just a couple of subtraction nodes, the graph gets
# surprisingly complicated).
width: int = OutputField(description="The width of the tile. Equal to coords_right - coords_left.")
height: int = OutputField(description="The height of the tile. Equal to coords_bottom - coords_top.")
overlap_top: int = OutputField(description="Overlap between this tile and its top neighbor.")
overlap_bottom: int = OutputField(description="Overlap between this tile and its bottom neighbor.")
overlap_left: int = OutputField(description="Overlap between this tile and its left neighbor.")
overlap_right: int = OutputField(description="Overlap between this tile and its right neighbor.")
@invocation("tile_to_properties", title="Tile to Properties", tags=["tiles"], category="tiles", version="1.0.0")
class TileToPropertiesInvocation(BaseInvocation):
"""Split a Tile into its individual properties."""
tile: Tile = InputField(description="The tile to split into properties.")
def invoke(self, context: InvocationContext) -> TileToPropertiesOutput:
return TileToPropertiesOutput(
coords_left=self.tile.coords.left,
coords_right=self.tile.coords.right,
coords_top=self.tile.coords.top,
coords_bottom=self.tile.coords.bottom,
width=self.tile.coords.right - self.tile.coords.left,
height=self.tile.coords.bottom - self.tile.coords.top,
overlap_top=self.tile.overlap.top,
overlap_bottom=self.tile.overlap.bottom,
overlap_left=self.tile.overlap.left,
overlap_right=self.tile.overlap.right,
)
@invocation_output("pair_tile_image_output")
class PairTileImageOutput(BaseInvocationOutput):
tile_with_image: TileWithImage = OutputField(description="A tile description with its corresponding image.")
@invocation("pair_tile_image", title="Pair Tile with Image", tags=["tiles"], category="tiles", version="1.0.0")
class PairTileImageInvocation(BaseInvocation):
"""Pair an image with its tile properties."""
# TODO(ryand): The only reason that PairTileImage is needed is because the iterate/collect nodes don't preserve
# order. Can this be fixed?
image: ImageField = InputField(description="The tile image.")
tile: Tile = InputField(description="The tile properties.")
def invoke(self, context: InvocationContext) -> PairTileImageOutput:
return PairTileImageOutput(
tile_with_image=TileWithImage(
tile=self.tile,
image=self.image,
)
)
@invocation("merge_tiles_to_image", title="Merge Tiles to Image", tags=["tiles"], category="tiles", version="1.0.0")
class MergeTilesToImageInvocation(BaseInvocation, WithMetadata, WithWorkflow):
"""Merge multiple tile images into a single image."""
# Inputs
tiles_with_images: list[TileWithImage] = InputField(description="A list of tile images with tile properties.")
blend_amount: int = InputField(
ge=0,
description="The amount to blend adjacent tiles in pixels. Must be <= the amount of overlap between adjacent tiles.",
)
def invoke(self, context: InvocationContext) -> ImageOutput:
images = [twi.image for twi in self.tiles_with_images]
tiles = [twi.tile for twi in self.tiles_with_images]
# Infer the output image dimensions from the max/min tile limits.
height = 0
width = 0
for tile in tiles:
height = max(height, tile.coords.bottom)
width = max(width, tile.coords.right)
# Get all tile images for processing.
# TODO(ryand): It pains me that we spend time PNG decoding each tile from disk when they almost certainly
# existed in memory at an earlier point in the graph.
tile_np_images: list[np.ndarray] = []
for image in images:
pil_image = context.services.images.get_pil_image(image.image_name)
pil_image = pil_image.convert("RGB")
tile_np_images.append(np.array(pil_image))
# Prepare the output image buffer.
# Check the first tile to determine how many image channels are expected in the output.
channels = tile_np_images[0].shape[-1]
dtype = tile_np_images[0].dtype
np_image = np.zeros(shape=(height, width, channels), dtype=dtype)
merge_tiles_with_linear_blending(
dst_image=np_image, tiles=tiles, tile_images=tile_np_images, blend_amount=self.blend_amount
)
pil_image = Image.fromarray(np_image)
image_dto = context.services.images.create(
image=pil_image,
image_origin=ResourceOrigin.INTERNAL,
image_category=ImageCategory.GENERAL,
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
metadata=self.metadata,
workflow=self.workflow,
)
return ImageOutput(
image=ImageField(image_name=image_dto.image_name),
width=image_dto.width,
height=image_dto.height,
)

View File

@ -49,7 +49,7 @@ class Edge(BaseModel):
def get_output_field(node: BaseInvocation, field: str) -> Any:
node_type = type(node)
node_outputs = get_type_hints(node_type.get_output_type())
node_outputs = get_type_hints(node_type.get_output_annotation())
node_output_field = node_outputs.get(field) or None
return node_output_field
@ -188,7 +188,7 @@ class GraphInvocationOutput(BaseInvocationOutput):
# TODO: Fill this out and move to invocations
@invocation("graph")
@invocation("graph", version="1.0.0")
class GraphInvocation(BaseInvocation):
"""Execute a graph"""
@ -205,7 +205,7 @@ class IterateInvocationOutput(BaseInvocationOutput):
"""Used to connect iteration outputs. Will be expanded to a specific output."""
item: Any = OutputField(
description="The item being iterated over", title="Collection Item", ui_type=UIType.CollectionItem
description="The item being iterated over", title="Collection Item", ui_type=UIType._CollectionItem
)
@ -215,7 +215,7 @@ class IterateInvocation(BaseInvocation):
"""Iterates over a list of items"""
collection: list[Any] = InputField(
description="The list of items to iterate over", default_factory=list, ui_type=UIType.Collection
description="The list of items to iterate over", default=[], ui_type=UIType._Collection
)
index: int = InputField(description="The index, will be provided on executed iterators", default=0, ui_hidden=True)
@ -227,7 +227,7 @@ class IterateInvocation(BaseInvocation):
@invocation_output("collect_output")
class CollectInvocationOutput(BaseInvocationOutput):
collection: list[Any] = OutputField(
description="The collection of input items", title="Collection", ui_type=UIType.Collection
description="The collection of input items", title="Collection", ui_type=UIType._Collection
)
@ -238,12 +238,12 @@ class CollectInvocation(BaseInvocation):
item: Optional[Any] = InputField(
default=None,
description="The item to collect (all inputs must be of the same type)",
ui_type=UIType.CollectionItem,
ui_type=UIType._CollectionItem,
title="Collection Item",
input=Input.Connection,
)
collection: list[Any] = InputField(
description="The collection, will be provided on execution", default_factory=list, ui_hidden=True
description="The collection, will be provided on execution", default=[], ui_hidden=True
)
def invoke(self, context: InvocationContext) -> CollectInvocationOutput:
@ -379,7 +379,7 @@ class Graph(BaseModel):
raise NodeNotFoundError(f"Edge destination node {edge.destination.node_id} does not exist in the graph")
# output fields are not on the node object directly, they are on the output type
if edge.source.field not in source_node.get_output_type().model_fields:
if edge.source.field not in source_node.get_output_annotation().model_fields:
raise NodeFieldNotFoundError(
f"Edge source field {edge.source.field} does not exist in node {edge.source.node_id}"
)

View File

@ -2,6 +2,7 @@ class FieldDescriptions:
denoising_start = "When to start denoising, expressed as a percentage of total steps"
denoising_end = "When to stop denoising, expressed as a percentage of total steps"
cfg_scale = "Classifier-Free Guidance scale"
cfg_rescale_multiplier = "Rescale multiplier for CFG guidance, used for models trained with zero-terminal SNR"
scheduler = "Scheduler to use during inference"
positive_cond = "Positive conditioning tensor"
negative_cond = "Negative conditioning tensor"

View File

@ -54,6 +54,44 @@ class ImageProjModel(torch.nn.Module):
return clip_extra_context_tokens
class MLPProjModel(torch.nn.Module):
"""SD model with image prompt"""
def __init__(self, cross_attention_dim=1024, clip_embeddings_dim=1024):
super().__init__()
self.proj = torch.nn.Sequential(
torch.nn.Linear(clip_embeddings_dim, clip_embeddings_dim),
torch.nn.GELU(),
torch.nn.Linear(clip_embeddings_dim, cross_attention_dim),
torch.nn.LayerNorm(cross_attention_dim),
)
@classmethod
def from_state_dict(cls, state_dict: dict[torch.Tensor]):
"""Initialize an MLPProjModel from a state_dict.
The cross_attention_dim and clip_embeddings_dim are inferred from the shape of the tensors in the state_dict.
Args:
state_dict (dict[torch.Tensor]): The state_dict of model weights.
Returns:
MLPProjModel
"""
cross_attention_dim = state_dict["proj.3.weight"].shape[0]
clip_embeddings_dim = state_dict["proj.0.weight"].shape[0]
model = cls(cross_attention_dim, clip_embeddings_dim)
model.load_state_dict(state_dict)
return model
def forward(self, image_embeds):
clip_extra_context_tokens = self.proj(image_embeds)
return clip_extra_context_tokens
class IPAdapter:
"""IP-Adapter: https://arxiv.org/pdf/2308.06721.pdf"""
@ -130,6 +168,13 @@ class IPAdapterPlus(IPAdapter):
return image_prompt_embeds, uncond_image_prompt_embeds
class IPAdapterFull(IPAdapterPlus):
"""IP-Adapter Plus with full features."""
def _init_image_proj_model(self, state_dict: dict[torch.Tensor]):
return MLPProjModel.from_state_dict(state_dict).to(self.device, dtype=self.dtype)
class IPAdapterPlusXL(IPAdapterPlus):
"""IP-Adapter Plus for SDXL."""
@ -149,11 +194,9 @@ def build_ip_adapter(
) -> Union[IPAdapter, IPAdapterPlus]:
state_dict = torch.load(ip_adapter_ckpt_path, map_location="cpu")
# Determine if the state_dict is from an IPAdapter or IPAdapterPlus based on the image_proj weights that it
# contains.
is_plus = "proj.weight" not in state_dict["image_proj"]
if is_plus:
if "proj.weight" in state_dict["image_proj"]: # IPAdapter (with ImageProjModel).
return IPAdapter(state_dict, device=device, dtype=dtype)
elif "proj_in.weight" in state_dict["image_proj"]: # IPAdaterPlus or IPAdapterPlusXL (with Resampler).
cross_attention_dim = state_dict["ip_adapter"]["1.to_k_ip.weight"].shape[-1]
if cross_attention_dim == 768:
# SD1 IP-Adapter Plus
@ -163,5 +206,7 @@ def build_ip_adapter(
return IPAdapterPlusXL(state_dict, device=device, dtype=dtype)
else:
raise Exception(f"Unsupported IP-Adapter Plus cross-attention dimension: {cross_attention_dim}.")
elif "proj.0.weight" in state_dict["image_proj"]: # IPAdapterFull (with MLPProjModel).
return IPAdapterFull(state_dict, device=device, dtype=dtype)
else:
return IPAdapter(state_dict, device=device, dtype=dtype)
raise ValueError(f"'{ip_adapter_ckpt_path}' has an unrecognized IP-Adapter model architecture.")

View File

@ -607,10 +607,13 @@ class StableDiffusionGeneratorPipeline(StableDiffusionPipeline):
if isinstance(guidance_scale, list):
guidance_scale = guidance_scale[step_index]
noise_pred = self.invokeai_diffuser._combine(
uc_noise_pred,
noise_pred = self.invokeai_diffuser._combine(uc_noise_pred, c_noise_pred, guidance_scale)
guidance_rescale_multiplier = conditioning_data.guidance_rescale_multiplier
if guidance_rescale_multiplier > 0:
noise_pred = self._rescale_cfg(
noise_pred,
c_noise_pred,
guidance_scale,
guidance_rescale_multiplier,
)
# compute the previous noisy sample x_t -> x_t-1
@ -634,6 +637,16 @@ class StableDiffusionGeneratorPipeline(StableDiffusionPipeline):
return step_output
@staticmethod
def _rescale_cfg(total_noise_pred, pos_noise_pred, multiplier=0.7):
"""Implementation of Algorithm 2 from https://arxiv.org/pdf/2305.08891.pdf."""
ro_pos = torch.std(pos_noise_pred, dim=(1, 2, 3), keepdim=True)
ro_cfg = torch.std(total_noise_pred, dim=(1, 2, 3), keepdim=True)
x_rescaled = total_noise_pred * (ro_pos / ro_cfg)
x_final = multiplier * x_rescaled + (1.0 - multiplier) * total_noise_pred
return x_final
def _unet_forward(
self,
latents,

View File

@ -67,13 +67,17 @@ class IPAdapterConditioningInfo:
class ConditioningData:
unconditioned_embeddings: BasicConditioningInfo
text_embeddings: BasicConditioningInfo
guidance_scale: Union[float, List[float]]
"""
Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
`guidance_scale` is defined as `w` of equation 2. of [Imagen Paper](https://arxiv.org/pdf/2205.11487.pdf).
Guidance scale is enabled by setting `guidance_scale > 1`. Higher guidance scale encourages to generate
images that are closely linked to the text `prompt`, usually at the expense of lower image quality.
"""
guidance_scale: Union[float, List[float]]
""" for models trained using zero-terminal SNR ("ztsnr"), it's suggested to use guidance_rescale_multiplier of 0.7 .
ref [Common Diffusion Noise Schedules and Sample Steps are Flawed](https://arxiv.org/pdf/2305.08891.pdf)
"""
guidance_rescale_multiplier: float = 0
extra: Optional[ExtraConditioningInfo] = None
scheduler_args: dict[str, Any] = field(default_factory=dict)
"""

View File

View File

@ -0,0 +1,201 @@
import math
from typing import Union
import numpy as np
from invokeai.backend.tiles.utils import TBLR, Tile, paste
def calc_tiles_with_overlap(
image_height: int, image_width: int, tile_height: int, tile_width: int, overlap: int = 0
) -> list[Tile]:
"""Calculate the tile coordinates for a given image shape under a simple tiling scheme with overlaps.
Args:
image_height (int): The image height in px.
image_width (int): The image width in px.
tile_height (int): The tile height in px. All tiles will have this height.
tile_width (int): The tile width in px. All tiles will have this width.
overlap (int, optional): The target overlap between adjacent tiles. If the tiles do not evenly cover the image
shape, then the last row/column of tiles will overlap more than this. Defaults to 0.
Returns:
list[Tile]: A list of tiles that cover the image shape. Ordered from left-to-right, top-to-bottom.
"""
assert image_height >= tile_height
assert image_width >= tile_width
assert overlap < tile_height
assert overlap < tile_width
non_overlap_per_tile_height = tile_height - overlap
non_overlap_per_tile_width = tile_width - overlap
num_tiles_y = math.ceil((image_height - overlap) / non_overlap_per_tile_height)
num_tiles_x = math.ceil((image_width - overlap) / non_overlap_per_tile_width)
# tiles[y * num_tiles_x + x] is the tile for the y'th row, x'th column.
tiles: list[Tile] = []
# Calculate tile coordinates. (Ignore overlap values for now.)
for tile_idx_y in range(num_tiles_y):
for tile_idx_x in range(num_tiles_x):
tile = Tile(
coords=TBLR(
top=tile_idx_y * non_overlap_per_tile_height,
bottom=tile_idx_y * non_overlap_per_tile_height + tile_height,
left=tile_idx_x * non_overlap_per_tile_width,
right=tile_idx_x * non_overlap_per_tile_width + tile_width,
),
overlap=TBLR(top=0, bottom=0, left=0, right=0),
)
if tile.coords.bottom > image_height:
# If this tile would go off the bottom of the image, shift it so that it is aligned with the bottom
# of the image.
tile.coords.bottom = image_height
tile.coords.top = image_height - tile_height
if tile.coords.right > image_width:
# If this tile would go off the right edge of the image, shift it so that it is aligned with the
# right edge of the image.
tile.coords.right = image_width
tile.coords.left = image_width - tile_width
tiles.append(tile)
def get_tile_or_none(idx_y: int, idx_x: int) -> Union[Tile, None]:
if idx_y < 0 or idx_y >= num_tiles_y or idx_x < 0 or idx_x >= num_tiles_x:
return None
return tiles[idx_y * num_tiles_x + idx_x]
# Iterate over tiles again and calculate overlaps.
for tile_idx_y in range(num_tiles_y):
for tile_idx_x in range(num_tiles_x):
cur_tile = get_tile_or_none(tile_idx_y, tile_idx_x)
top_neighbor_tile = get_tile_or_none(tile_idx_y - 1, tile_idx_x)
left_neighbor_tile = get_tile_or_none(tile_idx_y, tile_idx_x - 1)
assert cur_tile is not None
# Update cur_tile top-overlap and corresponding top-neighbor bottom-overlap.
if top_neighbor_tile is not None:
cur_tile.overlap.top = max(0, top_neighbor_tile.coords.bottom - cur_tile.coords.top)
top_neighbor_tile.overlap.bottom = cur_tile.overlap.top
# Update cur_tile left-overlap and corresponding left-neighbor right-overlap.
if left_neighbor_tile is not None:
cur_tile.overlap.left = max(0, left_neighbor_tile.coords.right - cur_tile.coords.left)
left_neighbor_tile.overlap.right = cur_tile.overlap.left
return tiles
def merge_tiles_with_linear_blending(
dst_image: np.ndarray, tiles: list[Tile], tile_images: list[np.ndarray], blend_amount: int
):
"""Merge a set of image tiles into `dst_image` with linear blending between the tiles.
We expect every tile edge to either:
1) have an overlap of 0, because it is aligned with the image edge, or
2) have an overlap >= blend_amount.
If neither of these conditions are satisfied, we raise an exception.
The linear blending is centered at the halfway point of the overlap between adjacent tiles.
Args:
dst_image (np.ndarray): The destination image. Shape: (H, W, C).
tiles (list[Tile]): The list of tiles describing the locations of the respective `tile_images`.
tile_images (list[np.ndarray]): The tile images to merge into `dst_image`.
blend_amount (int): The amount of blending (in px) between adjacent overlapping tiles.
"""
# Sort tiles and images first by left x coordinate, then by top y coordinate. During tile processing, we want to
# iterate over tiles left-to-right, top-to-bottom.
tiles_and_images = list(zip(tiles, tile_images, strict=True))
tiles_and_images = sorted(tiles_and_images, key=lambda x: x[0].coords.left)
tiles_and_images = sorted(tiles_and_images, key=lambda x: x[0].coords.top)
# Organize tiles into rows.
tile_and_image_rows: list[list[tuple[Tile, np.ndarray]]] = []
cur_tile_and_image_row: list[tuple[Tile, np.ndarray]] = []
first_tile_in_cur_row, _ = tiles_and_images[0]
for tile_and_image in tiles_and_images:
tile, _ = tile_and_image
if not (
tile.coords.top == first_tile_in_cur_row.coords.top
and tile.coords.bottom == first_tile_in_cur_row.coords.bottom
):
# Store the previous row, and start a new one.
tile_and_image_rows.append(cur_tile_and_image_row)
cur_tile_and_image_row = []
first_tile_in_cur_row, _ = tile_and_image
cur_tile_and_image_row.append(tile_and_image)
tile_and_image_rows.append(cur_tile_and_image_row)
# Prepare 1D linear gradients for blending.
gradient_left_x = np.linspace(start=0.0, stop=1.0, num=blend_amount)
gradient_top_y = np.linspace(start=0.0, stop=1.0, num=blend_amount)
# Convert shape: (blend_amount, ) -> (blend_amount, 1). The extra dimension enables the gradient to be applied
# to a 2D image via broadcasting. Note that no additional dimension is needed on gradient_left_x for
# broadcasting to work correctly.
gradient_top_y = np.expand_dims(gradient_top_y, axis=1)
for tile_and_image_row in tile_and_image_rows:
first_tile_in_row, _ = tile_and_image_row[0]
row_height = first_tile_in_row.coords.bottom - first_tile_in_row.coords.top
row_image = np.zeros((row_height, dst_image.shape[1], dst_image.shape[2]), dtype=dst_image.dtype)
# Blend the tiles in the row horizontally.
for tile, tile_image in tile_and_image_row:
# We expect the tiles to be ordered left-to-right. For each tile, we construct a mask that applies linear
# blending to the left of the current tile. The inverse linear blending is automatically applied to the
# right of the tiles that have already been pasted by the paste(...) operation.
tile_height, tile_width, _ = tile_image.shape
mask = np.ones(shape=(tile_height, tile_width), dtype=np.float64)
# Left blending:
if tile.overlap.left > 0:
assert tile.overlap.left >= blend_amount
# Center the blending gradient in the middle of the overlap.
blend_start_left = tile.overlap.left // 2 - blend_amount // 2
# The region left of the blending region is masked completely.
mask[:, :blend_start_left] = 0.0
# Apply the blend gradient to the mask.
mask[:, blend_start_left : blend_start_left + blend_amount] = gradient_left_x
# For visual debugging:
# tile_image[:, blend_start_left : blend_start_left + blend_amount] = 0
paste(
dst_image=row_image,
src_image=tile_image,
box=TBLR(
top=0, bottom=tile.coords.bottom - tile.coords.top, left=tile.coords.left, right=tile.coords.right
),
mask=mask,
)
# Blend the row into the dst_image vertically.
# We construct a mask that applies linear blending to the top of the current row. The inverse linear blending is
# automatically applied to the bottom of the tiles that have already been pasted by the paste(...) operation.
mask = np.ones(shape=(row_image.shape[0], row_image.shape[1]), dtype=np.float64)
# Top blending:
# (See comments under 'Left blending' for an explanation of the logic.)
# We assume that the entire row has the same vertical overlaps as the first_tile_in_row.
if first_tile_in_row.overlap.top > 0:
assert first_tile_in_row.overlap.top >= blend_amount
blend_start_top = first_tile_in_row.overlap.top // 2 - blend_amount // 2
mask[:blend_start_top, :] = 0.0
mask[blend_start_top : blend_start_top + blend_amount, :] = gradient_top_y
# For visual debugging:
# row_image[blend_start_top : blend_start_top + blend_amount, :] = 0
paste(
dst_image=dst_image,
src_image=row_image,
box=TBLR(
top=first_tile_in_row.coords.top,
bottom=first_tile_in_row.coords.bottom,
left=0,
right=row_image.shape[1],
),
mask=mask,
)

View File

@ -0,0 +1,47 @@
from typing import Optional
import numpy as np
from pydantic import BaseModel, Field
class TBLR(BaseModel):
top: int
bottom: int
left: int
right: int
def __eq__(self, other):
return (
self.top == other.top
and self.bottom == other.bottom
and self.left == other.left
and self.right == other.right
)
class Tile(BaseModel):
coords: TBLR = Field(description="The coordinates of this tile relative to its parent image.")
overlap: TBLR = Field(description="The amount of overlap with adjacent tiles on each side of this tile.")
def __eq__(self, other):
return self.coords == other.coords and self.overlap == other.overlap
def paste(dst_image: np.ndarray, src_image: np.ndarray, box: TBLR, mask: Optional[np.ndarray] = None):
"""Paste a source image into a destination image.
Args:
dst_image (np.ndarray): The destination image to paste into. Shape: (H, W, C).
src_image (np.ndarray): The source image to paste. Shape: (H, W, C). H and W must be compatible with 'box'.
box (TBLR): Box defining the region in the 'dst_image' where 'src_image' will be pasted.
mask (Optional[np.ndarray]): A mask that defines the blending between 'src_image' and 'dst_image'.
Range: [0.0, 1.0], Shape: (H, W). The output is calculated per-pixel according to
`src * mask + dst * (1 - mask)`.
"""
if mask is None:
dst_image[box.top : box.bottom, box.left : box.right] = src_image
else:
mask = np.expand_dims(mask, -1)
dst_image_box = dst_image[box.top : box.bottom, box.left : box.right]
dst_image[box.top : box.bottom, box.left : box.right] = src_image * mask + dst_image_box * (1.0 - mask)

View File

@ -25,9 +25,11 @@ module.exports = {
'@typescript-eslint',
'eslint-plugin-react-hooks',
'i18next',
'path',
],
root: true,
rules: {
'path/no-relative-imports': ['error', { maxDepth: 0 }],
curly: 'error',
'i18next/no-literal-string': 2,
'react/jsx-no-bind': ['error', { allowBind: true }],

View File

@ -9,6 +9,5 @@ index.html
.yalc/
*.scss
src/services/api/schema.d.ts
docs/
static/
src/theme/css/overlayscrollbars.css

View File

@ -13,6 +13,8 @@
- [Vite](#vite)
- [i18next & Weblate](#i18next--weblate)
- [openapi-typescript](#openapi-typescript)
- [reactflow](#reactflow)
- [zod](#zod)
- [Client Types Generation](#client-types-generation)
- [Package Scripts](#package-scripts)
- [Contributing](#contributing)
@ -26,46 +28,54 @@ The UI is a fairly straightforward Typescript React app.
## Core Libraries
The app makes heavy use of a handful of libraries.
InvokeAI's UI is made possible by a number of excellent open-source libraries. The most heavily-used are listed below, but there are many others.
### Redux Toolkit
[Redux Toolkit](https://github.com/reduxjs/redux-toolkit) is used for state management and fetching/caching:
[Redux Toolkit] is used for state management and fetching/caching:
- `RTK-Query` for data fetching and caching
- `createAsyncThunk` for a couple other HTTP requests
- `createEntityAdapter` to normalize things like images and models
- `createListenerMiddleware` for async workflows
We use [redux-remember](https://github.com/zewish/redux-remember) for persistence.
We use [redux-remember] for persistence.
### Socket\.IO
[Socket\.IO](https://github.com/socketio/socket.io) is used for server-to-client events, like generation process and queue state changes.
[Socket.IO] is used for server-to-client events, like generation process and queue state changes.
### Chakra UI
[Chakra UI](https://github.com/chakra-ui/chakra-ui) is our primary UI library, but we also use a few components from [Mantine v6](https://v6.mantine.dev/).
[Chakra UI] is our primary UI library, but we also use a few components from [Mantine v6].
### KonvaJS
[KonvaJS](https://github.com/konvajs/react-konva) powers the canvas. In the future, we'd like to explore [PixiJS](https://github.com/pixijs/pixijs) or WebGPU.
[KonvaJS] powers the canvas. In the future, we'd like to explore [PixiJS] or WebGPU.
### Vite
[Vite](https://github.com/vitejs/vite) is our bundler.
[Vite] is our bundler.
### i18next & Weblate
We use [i18next](https://github.com/i18next/react-i18next) for localisation, but translation to languages other than English happens on our [Weblate](https://hosted.weblate.org/engage/invokeai/) project. **Only the English source strings should be changed on this repo.**
We use [i18next] for localization, but translation to languages other than English happens on our [Weblate] project. **Only the English source strings should be changed on this repo.**
### openapi-typescript
[openapi-typescript](https://github.com/drwpow/openapi-typescript) is used to generate types from the server's OpenAPI schema. See TYPES_CODEGEN.md.
[openapi-typescript] is used to generate types from the server's OpenAPI schema. See TYPES_CODEGEN.md.
### reactflow
[reactflow] powers the Workflow Editor.
### zod
[zod] schemas are used to model data structures and provide runtime validation.
## Client Types Generation
We use [`openapi-typescript`](https://github.com/drwpow/openapi-typescript) to generate types from the app's OpenAPI schema.
We use [openapi-typescript] to generate types from the app's OpenAPI schema.
The generated types are written to `invokeai/frontend/web/src/services/api/schema.d.ts`. This file is committed to the repo.
@ -98,11 +108,11 @@ Run with `yarn <script name>`.
Thanks for your interest in contributing to the InvokeAI Web UI!
We encourage you to ping @psychedelicious and @blessedcoolant on [Discord](https://discord.gg/ZmtBAhwWhy) if you want to contribute, just to touch base and ensure your work doesn't conflict with anything else going on. The project is very active.
We encourage you to ping @psychedelicious and @blessedcoolant on [discord] if you want to contribute, just to touch base and ensure your work doesn't conflict with anything else going on. The project is very active.
### Dev Environment
Install [node](https://nodejs.org/en/download/) and [yarn classic](https://classic.yarnpkg.com/lang/en/).
Install [node] and [yarn classic].
From `invokeai/frontend/web/` run `yarn install` to get everything set up.
@ -125,3 +135,20 @@ For a number of technical and logistical reasons, we need to commit UI build art
If you submit a PR, there is a good chance we will ask you to include a separate commit with a build of the app.
To build for production, run `yarn build`.
[node]: https://nodejs.org/en/download/
[yarn classic]: https://classic.yarnpkg.com/lang/en/
[discord]: https://discord.gg/ZmtBAhwWhy
[Redux Toolkit]: https://github.com/reduxjs/redux-toolkit
[redux-remember]: https://github.com/zewish/redux-remember
[Socket.IO]: https://github.com/socketio/socket.io
[Chakra UI]: https://github.com/chakra-ui/chakra-ui
[Mantine v6]: https://v6.mantine.dev/
[KonvaJS]: https://github.com/konvajs/react-konva
[PixiJS]: https://github.com/pixijs/pixijs
[Vite]: https://github.com/vitejs/vite
[i18next]: https://github.com/i18next/react-i18next
[Weblate]: https://hosted.weblate.org/engage/invokeai/
[openapi-typescript]: https://github.com/drwpow/openapi-typescript
[reactflow]: https://github.com/xyflow/xyflow
[zod]: https://github.com/colinhacks/zod

View File

@ -0,0 +1,350 @@
# Workflows - Design and Implementation
<!-- @import "[TOC]" {cmd="toc" depthFrom=1 depthTo=6 orderedList=false} -->
<!-- code_chunk_output -->
- [Workflows - Design and Implementation](#workflows---design-and-implementation)
- [Design](#design)
- [Linear UI](#linear-ui)
- [Workflow Editor](#workflow-editor)
- [Workflows](#workflows)
- [Workflow -> reactflow state -> InvokeAI graph](#workflow---reactflow-state---invokeai-graph)
- [Nodes vs Invocations](#nodes-vs-invocations)
- [Workflow Linear View](#workflow-linear-view)
- [OpenAPI Schema](#openapi-schema)
- [Field Instances and Templates](#field-instances-and-templates)
- [Stateful vs Stateless Fields](#stateful-vs-stateless-fields)
- [Collection and Polymorphic Fields](#collection-and-polymorphic-fields)
- [Implementation](#implementation)
- [zod Schemas and Types](#zod-schemas-and-types)
- [OpenAPI Schema Parsing](#openapi-schema-parsing)
- [Parsing Field Types](#parsing-field-types)
- [Primitive Types](#primitive-types)
- [Complex Types](#complex-types)
- [Collection Types](#collection-types)
- [Polymorphic Types](#polymorphic-types)
- [Optional Fields](#optional-fields)
- [Building Field Input Templates](#building-field-input-templates)
- [Building Field Output Templates](#building-field-output-templates)
- [Managing reactflow State](#managing-reactflow-state)
- [Building Nodes and Edges](#building-nodes-and-edges)
- [Building a Workflow](#building-a-workflow)
- [Loading a Workflow](#loading-a-workflow)
- [Workflow Migrations](#workflow-migrations)
<!-- /code_chunk_output -->
> This document describes, at a high level, the design and implementation of workflows in the InvokeAI frontend. A substantial number of implementation details are not included, but they are hopefully clear from the code.
InvokeAI's backend uses graphs, composed of **nodes** and **edges**, to process data and generate images.
Nodes have any number of **input fields** and **output fields**. Edges connect nodes together via their inputs and outputs. Fields have data types which dictate how they may be connected.
During execution, a node's outputs may be passed along to any number of other nodes' inputs.
Workflows are an enriched abstraction over a graph.
## Design
InvokeAI provides two ways to build graphs in the frontend: the [Linear UI](#linear-ui) and [Workflow Editor](#workflow-editor).
To better understand the use case and challenges related to workflows, we will review both of these modes.
### Linear UI
This includes the **Text to Image**, **Image to Image** and **Unified Canvas** tabs.
The user-managed parameters on these tabs are stored as simple objects in the application state. When the user invokes, adding a generation to the queue, we internally build a graph from these parameters.
This logic can be fairly complex due to the range of features available and their interactions. Depending on the parameters selected, the graph may be very different. Building graphs in code can be challenging - you are trying to construct a non-linear structure in a linear context.
The simplest graph building logic is for **Text to Image** with a SD1.5 model: [buildLinearTextToImageGraph.ts]
There are many other graph builders in the same directory for different tabs or base models (e.g. SDXL). Some are pretty hairy.
In the Linear UI, we go straight from **simple application state** to **graph** via these builders.
### Workflow Editor
The Workflow Editor is a visual graph editor, allowing users to draw edges from node to node to construct a graph. This is a _far_ more approachable way to create complex graphs.
InvokeAI uses the [reactflow] library to power the Workflow Editor. It provides the graph editor UI and manages its own internal graph state.
#### Workflows
A workflow is a representation of a graph plus additional metadata:
- Name
- Description
- Version
- Notes
- [Exposed fields](#workflow-linear-view)
- Author, tags, category, etc.
Workflows should have other qualities:
- Portable: you should be able to load a workflow created by another person.
- Resilient: you should be able to "upgrade" a workflow as the application changes.
- Abstract: as much as is possible, workflows should not be married to the specific implementation details of the application.
To support these qualities, workflows are serializable, have versioned schemas, and represent graphs as minimally as possible. Fortunately, the reactflow state for nodes and edges works perfectly for this.
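As a rough illustration of that minimal, serializable shape (the property names here are indicative only, not the exact workflow schema):

```ts
// An illustrative workflow object: metadata plus a minimal reactflow-style graph.
const workflow = {
  name: 'Tiled upscale',
  description: 'Upscales an image by processing it tile-by-tile.',
  version: '1.0.0',
  notes: '',
  // Exposed fields drive the Linear View, described below.
  exposedFields: [{ nodeId: 'noise', fieldName: 'seed' }],
  // Nodes and edges are stored as minimal reactflow state.
  nodes: [],
  edges: [],
};
```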
##### Workflow -> reactflow state -> InvokeAI graph
Given a workflow, we need to be able to derive reactflow state and/or an InvokeAI graph from it.
The first step - workflow to reactflow state - is very simple. The logic is in [nodesSlice.ts], in the `workflowLoaded` reducer.
The reactflow state is, however, structurally incompatible with our backend's graph structure. When a user invokes on a Workflow, we need to convert the reactflow state into an InvokeAI graph. This is far simpler than the graph building logic from the Linear UI:
[buildNodesGraph.ts]
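As a highly simplified sketch of that conversion (this is not the actual [buildNodesGraph.ts] logic; the shapes are pared down to the essentials):

```ts
// Pared-down shapes: a reactflow node carries the invocation data, a reactflow edge connects two handles.
type FlowNode = { id: string; data: { type: string; inputs: Record<string, { value?: unknown }> } };
type FlowEdge = { source: string; sourceHandle: string; target: string; targetHandle: string };

const buildGraph = (nodes: FlowNode[], edges: FlowEdge[]) => {
  const graphNodes: Record<string, Record<string, unknown>> = {};
  for (const node of nodes) {
    // Each reactflow node becomes a graph node keyed by id, carrying its type and any stateful field values.
    const graphNode: Record<string, unknown> = { id: node.id, type: node.data.type };
    for (const [fieldName, field] of Object.entries(node.data.inputs)) {
      if (field.value !== undefined) {
        graphNode[fieldName] = field.value;
      }
    }
    graphNodes[node.id] = graphNode;
  }
  // Each reactflow edge becomes a graph edge from an output field to an input field.
  const graphEdges = edges.map((e) => ({
    source: { node_id: e.source, field: e.sourceHandle },
    destination: { node_id: e.target, field: e.targetHandle },
  }));
  return { nodes: graphNodes, edges: graphEdges };
};
```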
##### Nodes vs Invocations
We often use the terms "node" and "invocation" interchangeably, but they may refer to different things in the frontend.
reactflow [has its own definitions][reactflow-concepts] of "node", "edge" and "handle" which are closely related to InvokeAI graph concepts.
- A reactflow node is related to an InvokeAI invocation. It has a "data" property, which holds the InvokeAI-specific invocation data.
- A reactflow edge is roughly equivalent to an InvokeAI edge.
- A reactflow handle is roughly equivalent to an InvokeAI input or output field.
##### Workflow Linear View
Graphs are very capable data structures, but not everyone wants to work with them all the time.
To allow less technical users - or anyone who wants a less visually noisy workspace - to benefit from the power of nodes, InvokeAI has a workflow feature called the Linear View.
A workflow input field can be added to this Linear View, and its input component can be presented similarly to the Linear UI tabs. Internally, we add the field to the workflow's list of exposed fields.
#### OpenAPI Schema
OpenAPI is a schema specification that can represent complex data structures and relationships. The backend is capable of generating an OpenAPI schema for all invocations.
When the UI connects, it requests this schema and parses each invocation into an **invocation template**. Invocation templates have a number of properties, like title, description and type, but the most important ones are their input and output **field templates**.
Invocation and field templates are the "source of truth" for graphs, because they indicate what the backend is able to process.
When a user adds a new node to their workflow, these templates are used to instantiate the node, with its fields created from the input and output field templates.
##### Field Instances and Templates
Field templates consist of:
- Name: the identifier of the field, its variable name in python
- Type: derived from the field's type annotation in python (e.g. IntegerField, ImageField, MainModelField)
- Constraints: derived from the field's creation args in python (e.g. minimum value for an integer)
- Default value: optionally provided in the field's creation args (e.g. 42 for an integer)
Field instances are created from the templates and have a name, a type and, optionally, a value.
The type of the field determines the UI components that are rendered for it.
A field instance's name associates it with its template.
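For illustration, the two shapes might look roughly like this (simplified; the real templates and instances carry more properties):

```ts
// A field *template* describes what the field can be; it is parsed from the OpenAPI schema.
const widthTemplate = {
  name: 'width',
  type: 'IntegerField',
  minimum: 64, // constraint derived from the python field's creation args
  default: 512, // default value provided in the python field's creation args
};

// A field *instance* lives on a node in the workflow; its name ties it back to the template,
// and only stateful fields hold a value.
const widthInstance = {
  name: 'width',
  type: 'IntegerField',
  value: 768,
};
```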
##### Stateful vs Stateless Fields
**Stateful** fields store their value in the frontend graph. Think primitives, model identifiers, images, etc. Fields are only stateful if the frontend allows the user to directly input a value for them.
Many field types, however, are **stateless**. An example is a `UNetField`, which contains some data describing a UNet. Users cannot directly provide this data - it is created and consumed in the backend.
Stateless fields do not store their value in the node, so their field instances do not have values.
"Custom" fields will always be treated as stateless fields.
##### Collection and Polymorphic Fields
Field types have a name and two flags which may identify them as a **collection** or **polymorphic** field.
If a field is annotated in python as a list, its field type is parsed and flagged as a collection type (e.g. `list[int]`).
If it is annotated as a union of a type and list, the type will be flagged as a polymorphic type (e.g. `Union[int, list[int]]`). Fields may not be unions of different types (e.g. `Union[int, list[str]]` and `Union[int, str]` are not allowed).
## Implementation
The majority of data structures in the backend are [pydantic] models. Pydantic provides OpenAPI schemas for all models and we then generate TypeScript types from those.
The OpenAPI schema is parsed at runtime into our invocation templates.
Workflows and all related data are modeled in the frontend using [zod]. Related types are inferred from the zod schemas.
> In python, invocations are pydantic models with fields. These fields become node inputs. The invocation's `invoke()` function returns a pydantic model - its output. Like the invocation itself, the output model has any number of fields, which become node outputs.
### zod Schemas and Types
The zod schemas, inferred types, and type guards are in [types/].
Roughly ordered from lowest-level to highest:
- `common.ts`: stateful field data, and a couple of other misc types
- `field.ts`: fields - types, values, instances, templates
- `invocation.ts`: invocations and other node types
- `workflow.ts`: workflows and constituents
We customize the OpenAPI schema to include additional properties on invocation and field schemas. To facilitate parsing this schema into templates, we modify/wrap the types from [openapi-types] in `openapi.ts`.
### OpenAPI Schema Parsing
The entrypoint for OpenAPI schema parsing is [parseSchema.ts].
General logic flow:
- Iterate over all invocation schema objects
- Extract relevant invocation-level attributes (e.g. title, type, version, etc)
- Iterate over the invocation's input fields
- [Parse each field's type](#parsing-field-types)
- [Build a field input template](#building-field-input-templates) from the type - either a stateful template or "generic" stateless template
- Iterate over the invocation's output fields
- Parse the field's type (same as inputs)
- [Build a field output template](#building-field-output-templates)
- Assemble the attributes and fields into an invocation template
Most of these involve very straightforward `reduce`s, but the less intuitive steps are detailed below.
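A minimal sketch of that flow, assuming hypothetical helper names (the real logic is in [parseSchema.ts]):

```ts
import type { OpenAPIV3_1 } from 'openapi-types';

// Hypothetical helpers standing in for the steps described above.
declare function isInvocationSchema(schema: unknown): boolean;
declare function buildInvocationTemplate(schema: unknown): unknown;

const parseSchemaSketch = (schema: OpenAPIV3_1.Document) =>
  Object.entries(schema.components?.schemas ?? {})
    .filter(([, invocationSchema]) => isInvocationSchema(invocationSchema))
    .reduce<Record<string, unknown>>((templates, [name, invocationSchema]) => {
      templates[name] = buildInvocationTemplate(invocationSchema);
      return templates;
    }, {});
```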
#### Parsing Field Types
Field types are represented as structured objects:
```ts
type FieldType = {
name: string;
isCollection: boolean;
isCollectionOrScalar: boolean;
};
```
The parsing logic is in `parseFieldType.ts`.
There are 4 general cases for field type parsing.
##### Primitive Types
When a field is annotated as a primitive value (e.g. `int`, `str`, `float`), the field type parsing is fairly straightforward. The field is represented by a simple OpenAPI **schema object**, which has a `type` property.
We create a field type name from this `type` string (e.g. `string` -> `StringField`).
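A sketch of that mapping (the entries are illustrative; the real lookup is in `parseFieldType.ts`):

```ts
// Illustrative primitive-to-field-type mapping; not the real table.
const PRIMITIVE_TYPE_MAP: Record<string, string> = {
  integer: 'IntegerField',
  number: 'FloatField',
  string: 'StringField',
  boolean: 'BooleanField',
};
```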
##### Complex Types
When a field is annotated as a pydantic model (e.g. `ImageField`, `MainModelField`, `ControlField`), it is represented as a **reference object**. Reference objects are pointers to another schema or reference object within the schema.
We need to **dereference** the schema to pull these out. Dereferencing may require recursion. We use the reference object's name directly for the field type name.
> Unfortunately, at this time, we've had limited success using external libraries to dereference at runtime, so we do this ourselves.
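A minimal sketch of manual dereferencing, assuming the schema's components are available as a lookup table (names and error handling are simplified):

```ts
import type { OpenAPIV3_1 } from 'openapi-types';

type SchemaOrRef = OpenAPIV3_1.SchemaObject | OpenAPIV3_1.ReferenceObject;

const isRefObject = (obj: SchemaOrRef): obj is OpenAPIV3_1.ReferenceObject =>
  '$ref' in obj;

// Follows references until a concrete schema object is found.
const deref = (
  obj: SchemaOrRef,
  schemas: Record<string, SchemaOrRef>
): OpenAPIV3_1.SchemaObject => {
  if (isRefObject(obj)) {
    // '#/components/schemas/ImageField' -> 'ImageField'
    const name = obj.$ref.split('/').pop() as string;
    return deref(schemas[name], schemas); // references may point to other references
  }
  return obj;
};
```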
##### Collection Types
When a field is annotated as a list of a single type, the schema object has an `items` property. It may be a schema object or a reference object and must be parsed to determine the item type.
We use the item type for the field type name, adding `isCollection: true` to the field type.
##### Collection or Scalar Types
When a field is annotated as a union of a type and list of that type, the schema object has an `anyOf` property, which holds a list of valid types for the union.
After verifying that the union has two members (a type and a list of the same type), we use that type for the field type name, adding `isCollectionOrScalar: true` to the field type.
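For example, these hypothetical annotations would parse to the following field types (using the `FieldType` shape defined above):

```ts
// int -> plain scalar
const intField: FieldType = {
  name: 'IntegerField',
  isCollection: false,
  isCollectionOrScalar: false,
};

// list[int] -> collection
const intCollection: FieldType = {
  name: 'IntegerField',
  isCollection: true,
  isCollectionOrScalar: false,
};

// Union[int, list[int]] -> collection or scalar
const intCollectionOrScalar: FieldType = {
  name: 'IntegerField',
  isCollection: false,
  isCollectionOrScalar: true,
};
```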
##### Optional Fields
In OpenAPI v3.1, when an object is optional, it is put into an `anyOf` along with a primitive schema object with `type: 'null'`.
Handling this adds a fair bit of complexity, as we now must filter out the `'null'` types and work with the remaining types as described above.
If a single schema object remains, we recursively call `parseFieldType()` on it to parse it.
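A sketch of the null-filtering step (types from [openapi-types]; the helper name is illustrative):

```ts
import type { OpenAPIV3_1 } from 'openapi-types';

// Drop the `type: 'null'` member from an optional field's anyOf,
// then parse whatever remains as usual.
const filterOutNullMembers = (
  anyOf: (OpenAPIV3_1.ReferenceObject | OpenAPIV3_1.SchemaObject)[]
) => anyOf.filter((member) => !('type' in member && member.type === 'null'));
```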
#### Building Field Input Templates
Now that we have a field type, we can build an input template for the field.
Stateful fields all get a function to build their template, while stateless fields are constructed directly. This is possible because stateless fields have no default value or constraints.
See [buildFieldInputTemplate.ts].
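A rough sketch of that dispatch, with a hypothetical builder lookup standing in for the real per-type builders:

```ts
// STATEFUL_TEMPLATE_BUILDERS is hypothetical, standing in for the real lookup.
declare const STATEFUL_TEMPLATE_BUILDERS: Record<
  string,
  (fieldName: string, fieldSchema: unknown) => unknown
>;

const buildInputTemplateSketch = (
  fieldName: string,
  fieldType: FieldType, // FieldType as defined above
  fieldSchema: unknown
) => {
  const builder = STATEFUL_TEMPLATE_BUILDERS[fieldType.name];
  if (builder) {
    // e.g. pull the default value and constraints out of the schema object
    return builder(fieldName, fieldSchema);
  }
  // Stateless fields have no default value or constraints, so a generic
  // template is enough.
  return { name: fieldName, type: fieldType };
};
```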
#### Building Field Output Templates
Field outputs are similar to stateless fields - they do not have any value in the frontend. When building their templates, we don't need a special function for each field type.
See [buildFieldOutputTemplate.ts].
### Managing reactflow State
As described above, the workflow editor state is essentially the reactflow state, plus some extra metadata.
We provide reactflow with an array of nodes and edges via redux, and a number of [event handlers][reactflow-events]. These handlers dispatch redux actions, managing nodes and edges.
The pieces of redux state relevant to workflows are:
- `state.nodes.nodes`: the reactflow nodes state
- `state.nodes.edges`: the reactflow edges state
- `state.nodes.workflow`: the workflow metadata
#### Building Nodes and Edges
A reactflow node has a few important top-level properties:
- `id`: unique identifier
- `type`: a string that maps to a react component to render the node
- `position`: XY coordinates
- `data`: arbitrary data
When the user adds a node, we build **invocation node data**, storing it in `data`. Invocation properties (e.g. type, version, label, etc.) are copied from the invocation template. Inputs and outputs are built from the invocation template's field templates.
See [buildInvocationNode.ts].
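A simplified sketch of what such a node might look like (the values are illustrative, not the real invocation node shape):

```ts
// Illustrative only; the real invocation node data shape is in invocation.ts.
const exampleNode = {
  id: 'some-uuid',
  type: 'invocation', // maps to the react component that renders invocation nodes
  position: { x: 100, y: 200 },
  data: {
    id: 'some-uuid',
    type: 'some_invocation_type', // placeholder invocation type, copied from the template
    version: '1.0.0',
    label: '',
    inputs: {
      // field instances built from the template's input field templates
    },
    outputs: {
      // field instances built from the template's output field templates
    },
  },
};
```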
Edges are managed by reactflow, but briefly, they consist of:
- `source`: id of the source node
- `sourceHandle`: id of the source node handle (output field)
- `target`: id of the target node
- `targetHandle`: id of the target node handle (input field)
> Edge creation is gated behind validation logic. This validation compares the input and output field types and overall graph state.
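A sketch of that gating logic, with a hypothetical `areTypesCompatible` helper standing in for the real field type comparison:

```ts
// A connection is allowed only if the field types are compatible and the
// edge would not duplicate an existing one.
type EdgeSketch = {
  source: string;
  sourceHandle: string;
  target: string;
  targetHandle: string;
};

declare function areTypesCompatible(
  source: FieldType, // FieldType as defined above
  target: FieldType
): boolean;

const canConnectSketch = (
  sourceType: FieldType,
  targetType: FieldType,
  candidate: EdgeSketch,
  existingEdges: EdgeSketch[]
): boolean => {
  const isDuplicate = existingEdges.some(
    (e) =>
      e.source === candidate.source &&
      e.sourceHandle === candidate.sourceHandle &&
      e.target === candidate.target &&
      e.targetHandle === candidate.targetHandle
  );
  return !isDuplicate && areTypesCompatible(sourceType, targetType);
};
```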
#### Building a Workflow
Building a workflow entity is as simple as dropping the nodes, edges and metadata into an object.
Each node and edge is parsed with a zod schema, which serves to strip out any unneeded data.
See [buildWorkflow.ts].
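A minimal sketch of the stripping behaviour (the schema below is simplified; the real schemas live in `workflow.ts`):

```ts
import { z } from 'zod';

// zod object schemas strip unknown keys by default, so parsing each node and
// edge through a workflow schema drops reactflow's runtime-only properties.
const zWorkflowNodeSketch = z.object({
  id: z.string(),
  type: z.string(),
  position: z.object({ x: z.number(), y: z.number() }),
  data: z.unknown(),
});

const cleaned = zWorkflowNodeSketch.parse({
  id: 'n1',
  type: 'invocation',
  position: { x: 0, y: 0 },
  data: {},
  selected: true, // stripped
  dragging: false, // stripped
});
```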
#### Loading a Workflow
Workflows may be loaded from external sources or the user's local instance. In all cases, the workflow needs to be handled with care, as an untrusted object.
Loading has a few stages which may throw or warn if there are problems:
- Parsing the workflow data structure itself, [migrating](#workflow-migrations) it if necessary (throws)
- Check for a template for each node (warns)
- Check each node's version against its template (warns)
- Validate the source and target of each edge (warns)
This validation occurs in [validateWorkflow.ts].
If there are no fatal errors, the workflow is then stored in redux state.
### Workflow Migrations
When the workflow schema changes, we may need to perform some data migrations. This occurs as workflows are loaded. A zod schema for each workflow schema version is retained to facilitate migrations.
Previous schemas live in folders under `invokeai/frontend/web/src/features/nodes/types/`, e.g. `v1/`.
Migration logic is in [migrations.ts].
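A sketch of how such a migration chain might look (version strings and migration helpers are hypothetical):

```ts
// Migrate a parsed workflow forward one schema version at a time.
type AnyWorkflow = { meta: { version: string } } & Record<string, unknown>;

declare function migrateV1toV2(workflow: AnyWorkflow): AnyWorkflow;
declare function migrateV2toV3(workflow: AnyWorkflow): AnyWorkflow;

const migrateWorkflowSketch = (workflow: AnyWorkflow): AnyWorkflow => {
  let migrated = workflow;
  if (migrated.meta.version === '1.0.0') {
    migrated = migrateV1toV2(migrated); // each step bumps meta.version
  }
  if (migrated.meta.version === '2.0.0') {
    migrated = migrateV2toV3(migrated);
  }
  return migrated;
};
```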
<!-- links -->
[pydantic]: https://github.com/pydantic/pydantic 'pydantic'
[zod]: https://github.com/colinhacks/zod 'zod'
[openapi-types]: https://github.com/kogosoftwarellc/open-api/tree/main/packages/openapi-types 'openapi-types'
[reactflow]: https://github.com/xyflow/xyflow 'reactflow'
[reactflow-concepts]: https://reactflow.dev/learn/concepts/terms-and-definitions
[reactflow-events]: https://reactflow.dev/api-reference/react-flow#event-handlers
[buildWorkflow.ts]: ../src/features/nodes/util/workflow/buildWorkflow.ts
[nodesSlice.ts]: ../src/features/nodes/store/nodesSlice.ts
[buildLinearTextToImageGraph.ts]: ../src/features/nodes/util/graph/buildLinearTextToImageGraph.ts
[buildNodesGraph.ts]: ../src/features/nodes/util/graph/buildNodesGraph.ts
[buildInvocationNode.ts]: ../src/features/nodes/util/node/buildInvocationNode.ts
[validateWorkflow.ts]: ../src/features/nodes/util/workflow/validateWorkflow.ts
[migrations.ts]: ../src/features/nodes/util/workflow/migrations.ts
[parseSchema.ts]: ../src/features/nodes/util/schema/parseSchema.ts
[buildFieldInputTemplate.ts]: ../src/features/nodes/util/schema/buildFieldInputTemplate.ts
[buildFieldOutputTemplate.ts]: ../src/features/nodes/util/schema/buildFieldOutputTemplate.ts

View File

@ -29,7 +29,7 @@
"lint:prettier": "prettier --check .",
"lint:tsc": "tsc --noEmit",
"lint": "concurrently -g -n eslint,prettier,tsc,madge -c cyan,green,magenta,yellow \"yarn run lint:eslint\" \"yarn run lint:prettier\" \"yarn run lint:tsc\" \"yarn run lint:madge\"",
"fix": "eslint --fix . && prettier --loglevel warn --write .",
"fix": "eslint --fix . && prettier --log-level warn --write .",
"lint-staged": "lint-staged",
"postinstall": "patch-package && yarn run theme",
"theme": "chakra-cli tokens src/theme/theme.ts",
@ -75,6 +75,7 @@
"framer-motion": "^10.16.4",
"i18next": "^23.6.0",
"i18next-http-backend": "^2.3.1",
"idb-keyval": "^6.2.1",
"konva": "^9.2.3",
"lodash-es": "^4.17.21",
"nanostores": "^0.9.4",
@ -132,6 +133,7 @@
"eslint": "^8.53.0",
"eslint-config-prettier": "^9.0.0",
"eslint-plugin-i18next": "^6.0.3",
"eslint-plugin-path": "^1.2.2",
"eslint-plugin-react": "^7.33.2",
"eslint-plugin-react-hooks": "^4.6.0",
"husky": "^8.0.3",

View File

@ -803,8 +803,7 @@
"canny": "Canny",
"hedDescription": "Ganzheitlich verschachtelte Kantenerkennung",
"scribble": "Scribble",
"maxFaces": "Maximal Anzahl Gesichter",
"unstarImage": "Markierung aufheben"
"maxFaces": "Maximal Anzahl Gesichter"
},
"queue": {
"status": "Status",

View File

@ -1,7 +1,7 @@
{
"accessibility": {
"copyMetadataJson": "Copy metadata JSON",
"createIssue":"Create Issue",
"createIssue": "Create Issue",
"exitViewer": "Exit Viewer",
"flipHorizontally": "Flip Horizontally",
"flipVertically": "Flip Vertically",
@ -13,7 +13,7 @@
"nextImage": "Next Image",
"previousImage": "Previous Image",
"reset": "Reset",
"resetUI":"$t(accessibility.reset) UI",
"resetUI": "$t(accessibility.reset) UI",
"rotateClockwise": "Rotate Clockwise",
"rotateCounterClockwise": "Rotate Counter-Clockwise",
"showGalleryPanel": "Show Gallery Panel",
@ -59,7 +59,7 @@
"back": "Back",
"batch": "Batch Manager",
"cancel": "Cancel",
"copyError":"$t(gallery.copy) Error",
"copyError": "$t(gallery.copy) Error",
"close": "Close",
"on": "On",
"checkpoint": "Checkpoint",
@ -76,7 +76,7 @@
"error": "Error",
"file": "File",
"folder": "Folder",
"format":"format",
"format": "format",
"generate": "Generate",
"githubLabel": "Github",
"hotkeysLabel": "Hotkeys",
@ -160,6 +160,7 @@
"trainingDesc2": "InvokeAI already supports training custom embeddourings using Textual Inversion using the main script.",
"txt2img": "Text To Image",
"unifiedCanvas": "Unified Canvas",
"unknown": "Unknown",
"upload": "Upload"
},
"controlnet": {
@ -242,7 +243,6 @@
"setControlImageDimensions": "Set Control Image Dimensions To W/H",
"showAdvanced": "Show Advanced",
"toggleControlNet": "Toggle this ControlNet",
"unstarImage": "Unstar Image",
"w": "W",
"weight": "Weight",
"enableIPAdapter": "Enable IP Adapter",
@ -355,9 +355,9 @@
"autoSwitchNewImages": "Auto-Switch to New Images",
"copy": "Copy",
"currentlyInUse": "This image is currently in use in the following features:",
"drop":"Drop",
"dropOrUpload":"$t(gallery.drop) or Upload",
"dropToUpload":"$t(gallery.drop) to Upload",
"drop": "Drop",
"dropOrUpload": "$t(gallery.drop) or Upload",
"dropToUpload": "$t(gallery.drop) to Upload",
"deleteImage": "Delete Image",
"deleteImageBin": "Deleted images will be sent to your operating system's Bin.",
"deleteImagePermanent": "Deleted images cannot be restored.",
@ -377,6 +377,8 @@
"showGenerations": "Show Generations",
"showUploads": "Show Uploads",
"singleColumnLayout": "Single Column Layout",
"starImage": "Star Image",
"unstarImage": "Unstar Image",
"unableToLoad": "Unable to load Gallery",
"uploads": "Uploads",
"deleteSelection": "Delete Selection",
@ -598,6 +600,7 @@
},
"metadata": {
"cfgScale": "CFG scale",
"cfgRescaleMultiplier": "$t(parameters.cfgRescaleMultiplier)",
"createdBy": "Created By",
"fit": "Image to image fit",
"generationMode": "Generation Mode",
@ -775,7 +778,7 @@
"esrganModel": "ESRGAN Model",
"loading": "loading",
"noLoRAsAvailable": "No LoRAs available",
"noLoRAsLoaded":"No LoRAs Loaded",
"noLoRAsLoaded": "No LoRAs Loaded",
"noMatchingLoRAs": "No matching LoRAs",
"noMatchingModels": "No matching Models",
"noModelsAvailable": "No models available",
@ -787,7 +790,7 @@
"nodes": {
"addNode": "Add Node",
"addNodeToolTip": "Add Node (Shift+A, Space)",
"addLinearView":"Add to Linear View",
"addLinearView": "Add to Linear View",
"animatedEdges": "Animated Edges",
"animatedEdgesHelp": "Animate selected edges and edges connected to selected nodes",
"boardField": "Board",
@ -802,9 +805,12 @@
"cannotConnectOutputToOutput": "Cannot connect output to output",
"cannotConnectToSelf": "Cannot connect to self",
"cannotDuplicateConnection": "Cannot create duplicate connections",
"nodePack": "Node pack",
"clipField": "Clip",
"clipFieldDescription": "Tokenizer and text_encoder submodels.",
"collection": "Collection",
"collectionFieldType": "{{name}} Collection",
"collectionOrScalarFieldType": "{{name}} Collection|Scalar",
"collectionDescription": "TODO",
"collectionItem": "Collection Item",
"collectionItemDescription": "TODO",
@ -891,10 +897,15 @@
"mainModelField": "Model",
"mainModelFieldDescription": "TODO",
"maybeIncompatible": "May be Incompatible With Installed",
"mismatchedVersion": "Has Mismatched Version",
"mismatchedVersion": "Invalid node: node {{node}} of type {{type}} has mismatched version (try updating?)",
"missingCanvaInitImage": "Missing canvas init image",
"missingCanvaInitMaskImages": "Missing canvas init and mask images",
"missingTemplate": "Missing Template",
"missingTemplate": "Invalid node: node {{node}} of type {{type}} missing template (not installed?)",
"sourceNodeDoesNotExist": "Invalid edge: source/output node {{node}} does not exist",
"targetNodeDoesNotExist": "Invalid edge: target/input node {{node}} does not exist",
"sourceNodeFieldDoesNotExist": "Invalid edge: source/output field {{node}}.{{field}} does not exist",
"targetNodeFieldDoesNotExist": "Invalid edge: target/input field {{node}}.{{field}} does not exist",
"deletedInvalidEdge": "Deleted invalid edge {{source}} -> {{target}}",
"noConnectionData": "No connection data",
"noConnectionInProgress": "No connection in progress",
"node": "Node",
@ -954,25 +965,37 @@
"stringDescription": "Strings are text.",
"stringPolymorphic": "String Polymorphic",
"stringPolymorphicDescription": "A collection of strings.",
"unableToLoadWorkflow": "Unable to Validate Workflow",
"unableToLoadWorkflow": "Unable to Load Workflow",
"unableToParseEdge": "Unable to parse edge",
"unableToParseNode": "Unable to parse node",
"unableToUpdateNode": "Unable to update node",
"unableToValidateWorkflow": "Unable to Validate Workflow",
"unableToMigrateWorkflow": "Unable to Migrate Workflow",
"unknownErrorValidatingWorkflow": "Unknown error validating workflow",
"inputFieldTypeParseError": "Unable to parse type of input field {{node}}.{{field}} ({{message}})",
"outputFieldTypeParseError": "Unable to parse type of output field {{node}}.{{field}} ({{message}})",
"unableToExtractSchemaNameFromRef": "unable to extract schema name from ref",
"unsupportedArrayItemType": "unsupported array item type \"{{type}}\"",
"unsupportedAnyOfLength": "too many union members ({{count}})",
"unsupportedMismatchedUnion": "mismatched CollectionOrScalar type with base types {{firstType}} and {{secondType}}",
"unableToParseFieldType": "unable to parse field type",
"unableToExtractEnumOptions": "unable to extract enum options",
"uNetField": "UNet",
"uNetFieldDescription": "UNet submodel.",
"unhandledInputProperty": "Unhandled input property",
"unhandledOutputProperty": "Unhandled output property",
"unknownField": "Unknown field",
"unknownFieldType": "$(nodes.unknownField) type",
"unknownFieldType": "$t(nodes.unknownField) type: {{type}}",
"unknownNode": "Unknown Node",
"unknownNodeType":"$t(nodes.unknownNode) type",
"unknownNodeType": "Unknown node type",
"unknownTemplate": "Unknown Template",
"unknownInput": "Unknown input",
"unknownInput": "Unknown input: {{name}}",
"unkownInvocation": "Unknown Invocation type",
"unknownOutput": "Unknown output",
"unknownOutput": "Unknown output: {{name}}",
"updateNode": "Update Node",
"updateAllNodes": "Update All Nodes",
"updateApp": "Update App",
"updateAllNodes": "Update Nodes",
"allNodesUpdated": "All Nodes Updated",
"unableToUpdateNodes_one": "Unable to update {{count}} node",
"unableToUpdateNodes_other": "Unable to update {{count}} nodes",
"vaeField": "Vae",
@ -981,6 +1004,8 @@
"vaeModelFieldDescription": "TODO",
"validateConnections": "Validate Connections and Graph",
"validateConnectionsHelp": "Prevent invalid connections from being made, and invalid graphs from being invoked",
"unableToGetWorkflowVersion": "Unable to get workflow schema version",
"unrecognizedWorkflowVersion": "Unrecognized workflow schema version {{version}}",
"version": "Version",
"versionUnknown": " Version Unknown",
"workflow": "Workflow",
@ -1010,6 +1035,8 @@
"setType": "Set cancel type"
},
"cfgScale": "CFG Scale",
"cfgRescaleMultiplier": "CFG Rescale Multiplier",
"cfgRescale": "CFG Rescale",
"clipSkip": "CLIP Skip",
"clipSkipWithLayerCount": "CLIP Skip {{layerCount}}",
"closeViewer": "Close Viewer",
@ -1336,15 +1363,11 @@
},
"compositingBlur": {
"heading": "Blur",
"paragraphs": [
"The blur radius of the mask."
]
"paragraphs": ["The blur radius of the mask."]
},
"compositingBlurMethod": {
"heading": "Blur Method",
"paragraphs": [
"The method of blur applied to the masked area."
]
"paragraphs": ["The method of blur applied to the masked area."]
},
"compositingCoherencePass": {
"heading": "Coherence Pass",
@ -1354,9 +1377,7 @@
},
"compositingCoherenceMode": {
"heading": "Mode",
"paragraphs": [
"The mode of the Coherence Pass."
]
"paragraphs": ["The mode of the Coherence Pass."]
},
"compositingCoherenceSteps": {
"heading": "Steps",
@ -1374,9 +1395,7 @@
},
"compositingMaskAdjustments": {
"heading": "Mask Adjustments",
"paragraphs": [
"Adjust the mask."
]
"paragraphs": ["Adjust the mask."]
},
"controlNetBeginEnd": {
"heading": "Begin / End Step Percentage",
@ -1434,9 +1453,7 @@
},
"infillMethod": {
"heading": "Infill Method",
"paragraphs": [
"Method to infill the selected area."
]
"paragraphs": ["Method to infill the selected area."]
},
"lora": {
"heading": "LoRA Weight",
@ -1458,6 +1475,12 @@
"Controls how much your prompt influences the generation process."
]
},
"paramCFGRescaleMultiplier": {
"heading": "CFG Rescale Multiplier",
"paragraphs": [
"Rescale multiplier for CFG guidance, used for models trained using zero-terminal SNR (ztsnr). Suggested value 0.7."
]
},
"paramDenoisingStrength": {
"heading": "Denoising Strength",
"paragraphs": [
@ -1576,7 +1599,7 @@
"redo": "Redo",
"resetView": "Reset View",
"saveBoxRegionOnly": "Save Box Region Only",
"saveMask":"Save $t(unifiedCanvas.mask)",
"saveMask": "Save $t(unifiedCanvas.mask)",
"saveToGallery": "Save To Gallery",
"scaledBoundingBox": "Scaled Bounding Box",
"showCanvasDebugInfo": "Show Additional Canvas Info",

View File

@ -1137,8 +1137,7 @@
"openPose": "Openpose",
"controlAdapter_other": "Control Adapters",
"lineartAnime": "Lineart Anime",
"canny": "Canny",
"unstarImage": "取消收藏图像"
"canny": "Canny"
},
"queue": {
"status": "状态",

View File

@ -20,7 +20,8 @@ import AppErrorBoundaryFallback from './AppErrorBoundaryFallback';
import GlobalHotkeys from './GlobalHotkeys';
import PreselectedImage from './PreselectedImage';
import Toaster from './Toaster';
import { useSocketIO } from '../hooks/useSocketIO';
import { useSocketIO } from 'app/hooks/useSocketIO';
import { useClearStorage } from 'common/hooks/useClearStorage';
const DEFAULT_CONFIG = {};
@ -36,15 +37,16 @@ const App = ({ config = DEFAULT_CONFIG, selectedImage }: Props) => {
const language = useAppSelector(languageSelector);
const logger = useLogger('system');
const dispatch = useAppDispatch();
const clearStorage = useClearStorage();
// singleton!
useSocketIO();
const handleReset = useCallback(() => {
localStorage.clear();
clearStorage();
location.reload();
return false;
}, []);
}, [clearStorage]);
useEffect(() => {
i18n.changeLanguage(language);

View File

@ -7,21 +7,23 @@ import { $headerComponent } from 'app/store/nanostores/headerComponent';
import { $isDebugging } from 'app/store/nanostores/isDebugging';
import { $projectId } from 'app/store/nanostores/projectId';
import { $queueId, DEFAULT_QUEUE_ID } from 'app/store/nanostores/queueId';
import { store } from 'app/store/store';
import { $store } from 'app/store/nanostores/store';
import { createStore } from 'app/store/store';
import { PartialAppConfig } from 'app/types/invokeai';
import Loading from 'common/components/Loading/Loading';
import AppDndContext from 'features/dnd/components/AppDndContext';
import 'i18n';
import React, {
PropsWithChildren,
ReactNode,
lazy,
memo,
useEffect,
useMemo,
} from 'react';
import { Provider } from 'react-redux';
import { addMiddleware, resetMiddlewares } from 'redux-dynamic-middlewares';
import { ManagerOptions, SocketOptions } from 'socket.io-client';
import Loading from '../../common/components/Loading/Loading';
import AppDndContext from '../../features/dnd/components/AppDndContext';
import '../../i18n';
const App = lazy(() => import('./App'));
const ThemeLocaleProvider = lazy(() => import('./ThemeLocaleProvider'));
@ -137,6 +139,14 @@ const InvokeAIUI = ({
};
}, [isDebugging]);
const store = useMemo(() => {
return createStore(projectId);
}, [projectId]);
useEffect(() => {
$store.set(store);
}, [store]);
return (
<React.StrictMode>
<Provider store={store}>

View File

@ -9,9 +9,9 @@ import { TOAST_OPTIONS, theme as invokeAITheme } from 'theme/theme';
import '@fontsource-variable/inter';
import { MantineProvider } from '@mantine/core';
import { useMantineTheme } from 'mantine-theme/theme';
import 'overlayscrollbars/overlayscrollbars.css';
import 'theme/css/overlayscrollbars.css';
import { useMantineTheme } from 'mantine-theme/theme';
type ThemeLocaleProviderProps = {
children: ReactNode;

View File

@ -1,8 +1 @@
export const LOCALSTORAGE_KEYS = [
'chakra-ui-color-mode',
'i18nextLng',
'ROARR_FILTER',
'ROARR_LOG',
];
export const LOCALSTORAGE_PREFIX = '@@invokeai-';
export const STORAGE_PREFIX = '@@invokeai-';

View File

@ -6,7 +6,7 @@ import {
createListenerMiddleware,
} from '@reduxjs/toolkit';
import type { AppDispatch, RootState } from '../../store';
import type { AppDispatch, RootState } from 'app/store/store';
import { addCommitStagingAreaImageListener } from './listeners/addCommitStagingAreaImageListener';
import { addFirstListImagesListener } from './listeners/addFirstListImagesListener.ts';
import { addAnyEnqueuedListener } from './listeners/anyEnqueued';
@ -71,7 +71,7 @@ import { addSocketUnsubscribedEventListener as addSocketUnsubscribedListener } f
import { addStagingAreaImageSavedListener } from './listeners/stagingAreaImageSaved';
import { addTabChangedListener } from './listeners/tabChanged';
import { addUpscaleRequestedListener } from './listeners/upscaleRequested';
import { addWorkflowLoadedListener } from './listeners/workflowLoaded';
import { addWorkflowLoadRequestedListener } from './listeners/workflowLoadRequested';
import { addUpdateAllNodesRequestedListener } from './listeners/updateAllNodesRequested';
export const listenerMiddleware = createListenerMiddleware();
@ -178,7 +178,7 @@ addBoardIdSelectedListener();
addReceivedOpenAPISchemaListener();
// Workflows
addWorkflowLoadedListener();
addWorkflowLoadRequestedListener();
addUpdateAllNodesRequestedListener();
// DND

View File

@ -12,10 +12,10 @@ import { addToast } from 'features/system/store/systemSlice';
import { t } from 'i18next';
import { imagesApi } from 'services/api/endpoints/images';
import { queueApi } from 'services/api/endpoints/queue';
import { isImageOutput } from 'services/api/guards';
import { BatchConfig, ImageDTO } from 'services/api/types';
import { socketInvocationComplete } from 'services/events/actions';
import { startAppListening } from '..';
import { isImageOutput } from 'features/nodes/types/common';
export const addControlNetImageProcessedListener = () => {
startAppListening({

View File

@ -10,8 +10,8 @@ import { blobToDataURL } from 'features/canvas/util/blobToDataURL';
import { getCanvasData } from 'features/canvas/util/getCanvasData';
import { getCanvasGenerationMode } from 'features/canvas/util/getCanvasGenerationMode';
import { canvasGraphBuilt } from 'features/nodes/store/actions';
import { buildCanvasGraph } from 'features/nodes/util/graphBuilders/buildCanvasGraph';
import { prepareLinearUIBatch } from 'features/nodes/util/graphBuilders/buildLinearBatchConfig';
import { buildCanvasGraph } from 'features/nodes/util/graph/buildCanvasGraph';
import { prepareLinearUIBatch } from 'features/nodes/util/graph/buildLinearBatchConfig';
import { imagesApi } from 'services/api/endpoints/images';
import { queueApi } from 'services/api/endpoints/queue';
import { ImageDTO } from 'services/api/types';

View File

@ -1,9 +1,9 @@
import { enqueueRequested } from 'app/store/actions';
import { prepareLinearUIBatch } from 'features/nodes/util/graphBuilders/buildLinearBatchConfig';
import { buildLinearImageToImageGraph } from 'features/nodes/util/graphBuilders/buildLinearImageToImageGraph';
import { buildLinearSDXLImageToImageGraph } from 'features/nodes/util/graphBuilders/buildLinearSDXLImageToImageGraph';
import { buildLinearSDXLTextToImageGraph } from 'features/nodes/util/graphBuilders/buildLinearSDXLTextToImageGraph';
import { buildLinearTextToImageGraph } from 'features/nodes/util/graphBuilders/buildLinearTextToImageGraph';
import { prepareLinearUIBatch } from 'features/nodes/util/graph/buildLinearBatchConfig';
import { buildLinearImageToImageGraph } from 'features/nodes/util/graph/buildLinearImageToImageGraph';
import { buildLinearSDXLImageToImageGraph } from 'features/nodes/util/graph/buildLinearSDXLImageToImageGraph';
import { buildLinearSDXLTextToImageGraph } from 'features/nodes/util/graph/buildLinearSDXLTextToImageGraph';
import { buildLinearTextToImageGraph } from 'features/nodes/util/graph/buildLinearTextToImageGraph';
import { queueApi } from 'services/api/endpoints/queue';
import { startAppListening } from '..';

View File

@ -1,5 +1,5 @@
import { enqueueRequested } from 'app/store/actions';
import { buildNodesGraph } from 'features/nodes/util/graphBuilders/buildNodesGraph';
import { buildNodesGraph } from 'features/nodes/util/graph/buildNodesGraph';
import { queueApi } from 'services/api/endpoints/queue';
import { BatchConfig } from 'services/api/types';
import { startAppListening } from '..';

View File

@ -5,19 +5,20 @@ import {
controlAdapterProcessedImageChanged,
selectControlAdapterAll,
} from 'features/controlAdapters/store/controlAdaptersSlice';
import { isControlNetOrT2IAdapter } from 'features/controlAdapters/store/types';
import { imageDeletionConfirmed } from 'features/deleteImageModal/store/actions';
import { isModalOpenChanged } from 'features/deleteImageModal/store/slice';
import { selectListImagesBaseQueryArgs } from 'features/gallery/store/gallerySelectors';
import { imageSelected } from 'features/gallery/store/gallerySlice';
import { fieldImageValueChanged } from 'features/nodes/store/nodesSlice';
import { isInvocationNode } from 'features/nodes/types/types';
import { isImageFieldInputInstance } from 'features/nodes/types/field';
import { isInvocationNode } from 'features/nodes/types/invocation';
import { clearInitialImage } from 'features/parameters/store/generationSlice';
import { clamp, forEach } from 'lodash-es';
import { api } from 'services/api';
import { imagesApi } from 'services/api/endpoints/images';
import { imagesAdapter } from 'services/api/util';
import { startAppListening } from '..';
import { isControlNetOrT2IAdapter } from 'features/controlAdapters/store/types';
export const addRequestedSingleImageDeletionListener = () => {
startAppListening({
@ -121,7 +122,7 @@ export const addRequestedSingleImageDeletionListener = () => {
forEach(node.data.inputs, (input) => {
if (
input.type === 'ImageField' &&
isImageFieldInputInstance(input) &&
input.value?.image_name === imageDTO.image_name
) {
dispatch(
@ -241,7 +242,7 @@ export const addRequestedMultipleImageDeletionListener = () => {
forEach(node.data.inputs, (input) => {
if (
input.type === 'ImageField' &&
isImageFieldInputInstance(input) &&
input.value?.image_name === imageDTO.image_name
) {
dispatch(

View File

@ -12,7 +12,7 @@ import { t } from 'i18next';
import { omit } from 'lodash-es';
import { boardsApi } from 'services/api/endpoints/boards';
import { startAppListening } from '..';
import { imagesApi } from '../../../../../services/api/endpoints/images';
import { imagesApi } from 'services/api/endpoints/images';
export const addImageUploadedFulfilledListener = () => {
startAppListening({

View File

@ -1,7 +1,7 @@
import { imagesApi } from 'services/api/endpoints/images';
import { startAppListening } from '..';
import { selectionChanged } from '../../../../../features/gallery/store/gallerySlice';
import { ImageDTO } from '../../../../../services/api/types';
import { selectionChanged } from 'features/gallery/store/gallerySlice';
import { ImageDTO } from 'services/api/types';
export const addImagesStarredListener = () => {
startAppListening({

View File

@ -1,7 +1,7 @@
import { imagesApi } from 'services/api/endpoints/images';
import { startAppListening } from '..';
import { selectionChanged } from '../../../../../features/gallery/store/gallerySlice';
import { ImageDTO } from '../../../../../services/api/types';
import { selectionChanged } from 'features/gallery/store/gallerySlice';
import { ImageDTO } from 'services/api/types';
export const addImagesUnstarredListener = () => {
startAppListening({

View File

@ -12,12 +12,12 @@ import {
setWidth,
vaeSelected,
} from 'features/parameters/store/generationSlice';
import { zMainOrOnnxModel } from 'features/parameters/types/parameterSchemas';
import { addToast } from 'features/system/store/systemSlice';
import { makeToast } from 'features/system/util/makeToast';
import { t } from 'i18next';
import { forEach } from 'lodash-es';
import { startAppListening } from '..';
import { zParameterModel } from 'features/parameters/types/parameterSchemas';
export const addModelSelectedListener = () => {
startAppListening({
@ -26,7 +26,7 @@ export const addModelSelectedListener = () => {
const log = logger('models');
const state = getState();
const result = zMainOrOnnxModel.safeParse(action.payload);
const result = zParameterModel.safeParse(action.payload);
if (!result.success) {
log.error(

View File

@ -11,9 +11,9 @@ import {
vaeSelected,
} from 'features/parameters/store/generationSlice';
import {
zMainOrOnnxModel,
zSDXLRefinerModel,
zVaeModel,
zParameterModel,
zParameterSDXLRefinerModel,
zParameterVAEModel,
} from 'features/parameters/types/parameterSchemas';
import {
refinerModelChanged,
@ -67,7 +67,7 @@ export const addModelsLoadedListener = () => {
return;
}
const result = zMainOrOnnxModel.safeParse(models[0]);
const result = zParameterModel.safeParse(models[0]);
if (!result.success) {
log.error(
@ -119,7 +119,7 @@ export const addModelsLoadedListener = () => {
return;
}
const result = zSDXLRefinerModel.safeParse(models[0]);
const result = zParameterSDXLRefinerModel.safeParse(models[0]);
if (!result.success) {
log.error(
@ -170,7 +170,7 @@ export const addModelsLoadedListener = () => {
return;
}
const result = zVaeModel.safeParse(firstModel);
const result = zParameterVAEModel.safeParse(firstModel);
if (!result.success) {
log.error(

View File

@ -1,7 +1,7 @@
import { logger } from 'app/logging/logger';
import { parseify } from 'common/util/serialize';
import { nodeTemplatesBuilt } from 'features/nodes/store/nodesSlice';
import { parseSchema } from 'features/nodes/util/parseSchema';
import { parseSchema } from 'features/nodes/util/schema/parseSchema';
import { size } from 'lodash-es';
import { receivedOpenAPISchema } from 'services/api/thunks/schema';
import { startAppListening } from '..';
@ -15,6 +15,7 @@ export const addReceivedOpenAPISchemaListener = () => {
log.debug({ schemaJSON }, 'Received OpenAPI schema');
const { nodesAllowlist, nodesDenylist } = getState().config;
const nodeTemplates = parseSchema(
schemaJSON,
nodesAllowlist,

View File

@ -10,16 +10,16 @@ import { IMAGE_CATEGORIES } from 'features/gallery/store/types';
import {
LINEAR_UI_OUTPUT,
nodeIDDenyList,
} from 'features/nodes/util/graphBuilders/constants';
} from 'features/nodes/util/graph/constants';
import { boardsApi } from 'services/api/endpoints/boards';
import { imagesApi } from 'services/api/endpoints/images';
import { isImageOutput } from 'services/api/guards';
import { imagesAdapter } from 'services/api/util';
import {
appSocketInvocationComplete,
socketInvocationComplete,
} from 'services/events/actions';
import { startAppListening } from '../..';
import { isImageOutput } from 'features/nodes/types/common';
// These nodes output an image, but do not actually *save* an image, so we don't want to handle the gallery logic on them
const nodeTypeDenylist = ['load_image', 'image'];

View File

@ -1,14 +1,16 @@
import { logger } from 'app/logging/logger';
import { updateAllNodesRequested } from 'features/nodes/store/actions';
import { nodeReplaced } from 'features/nodes/store/nodesSlice';
import {
getNeedsUpdate,
updateNode,
} from 'features/nodes/hooks/useNodeVersion';
import { updateAllNodesRequested } from 'features/nodes/store/actions';
import { nodeReplaced } from 'features/nodes/store/nodesSlice';
import { startAppListening } from '..';
import { logger } from 'app/logging/logger';
} from 'features/nodes/util/node/nodeUpdate';
import { NodeUpdateError } from 'features/nodes/types/error';
import { isInvocationNode } from 'features/nodes/types/invocation';
import { addToast } from 'features/system/store/systemSlice';
import { makeToast } from 'features/system/util/makeToast';
import { t } from 'i18next';
import { startAppListening } from '..';
export const addUpdateAllNodesRequestedListener = () => {
startAppListening({
@ -20,22 +22,31 @@ export const addUpdateAllNodesRequestedListener = () => {
let unableToUpdateCount = 0;
nodes.forEach((node) => {
nodes.filter(isInvocationNode).forEach((node) => {
const template = templates[node.data.type];
const needsUpdate = getNeedsUpdate(node, template);
const updatedNode = updateNode(node, template);
if (!updatedNode) {
if (needsUpdate) {
if (!template) {
unableToUpdateCount++;
}
return;
}
if (!getNeedsUpdate(node, template)) {
// No need to increment the count here, since we're not actually updating
return;
}
try {
const updatedNode = updateNode(node, template);
dispatch(nodeReplaced({ nodeId: updatedNode.id, node: updatedNode }));
} catch (e) {
if (e instanceof NodeUpdateError) {
unableToUpdateCount++;
}
}
});
if (unableToUpdateCount) {
log.warn(
`Unable to update ${unableToUpdateCount} nodes. Please report this issue.`
t('nodes.unableToUpdateNodes', {
count: unableToUpdateCount,
})
);
dispatch(
addToast(
@ -46,6 +57,15 @@ export const addUpdateAllNodesRequestedListener = () => {
})
)
);
} else {
dispatch(
addToast(
makeToast({
title: t('nodes.allNodesUpdated'),
status: 'success',
})
)
);
}
},
});

View File

@ -1,7 +1,7 @@
import { createAction } from '@reduxjs/toolkit';
import { logger } from 'app/logging/logger';
import { parseify } from 'common/util/serialize';
import { buildAdHocUpscaleGraph } from 'features/nodes/util/graphBuilders/buildAdHocUpscaleGraph';
import { buildAdHocUpscaleGraph } from 'features/nodes/util/graph/buildAdHocUpscaleGraph';
import { addToast } from 'features/system/store/systemSlice';
import { t } from 'i18next';
import { queueApi } from 'services/api/endpoints/queue';

View File

@ -0,0 +1,120 @@
import { logger } from 'app/logging/logger';
import { parseify } from 'common/util/serialize';
import { workflowLoadRequested } from 'features/nodes/store/actions';
import { workflowLoaded } from 'features/nodes/store/nodesSlice';
import { $flow } from 'features/nodes/store/reactFlowInstance';
import {
WorkflowMigrationError,
WorkflowVersionError,
} from 'features/nodes/types/error';
import { validateWorkflow } from 'features/nodes/util/workflow/validateWorkflow';
import { addToast } from 'features/system/store/systemSlice';
import { makeToast } from 'features/system/util/makeToast';
import { setActiveTab } from 'features/ui/store/uiSlice';
import { t } from 'i18next';
import { z } from 'zod';
import { fromZodError } from 'zod-validation-error';
import { startAppListening } from '..';
export const addWorkflowLoadRequestedListener = () => {
startAppListening({
actionCreator: workflowLoadRequested,
effect: (action, { dispatch, getState }) => {
const log = logger('nodes');
const workflow = action.payload;
const nodeTemplates = getState().nodes.nodeTemplates;
try {
const { workflow: validatedWorkflow, warnings } = validateWorkflow(
workflow,
nodeTemplates
);
dispatch(workflowLoaded(validatedWorkflow));
if (!warnings.length) {
dispatch(
addToast(
makeToast({
title: t('toast.workflowLoaded'),
status: 'success',
})
)
);
} else {
dispatch(
addToast(
makeToast({
title: t('toast.loadedWithWarnings'),
status: 'warning',
})
)
);
warnings.forEach(({ message, ...rest }) => {
log.warn(rest, message);
});
}
dispatch(setActiveTab('nodes'));
requestAnimationFrame(() => {
$flow.get()?.fitView();
});
} catch (e) {
if (e instanceof WorkflowVersionError) {
// The workflow version was not recognized in the valid list of versions
log.error({ error: parseify(e) }, e.message);
dispatch(
addToast(
makeToast({
title: t('nodes.unableToValidateWorkflow'),
status: 'error',
description: e.message,
})
)
);
} else if (e instanceof WorkflowMigrationError) {
// There was a problem migrating the workflow to the latest version
log.error({ error: parseify(e) }, e.message);
dispatch(
addToast(
makeToast({
title: t('nodes.unableToValidateWorkflow'),
status: 'error',
description: e.message,
})
)
);
} else if (e instanceof z.ZodError) {
// There was a problem validating the workflow itself
const { message } = fromZodError(e, {
prefix: t('nodes.workflowValidation'),
});
log.error({ error: parseify(e) }, message);
dispatch(
addToast(
makeToast({
title: t('nodes.unableToValidateWorkflow'),
status: 'error',
description: message,
})
)
);
} else {
// Some other error occurred
console.log(e);
log.error(
{ error: parseify(e) },
t('nodes.unknownErrorValidatingWorkflow')
);
dispatch(
addToast(
makeToast({
title: t('nodes.unableToValidateWorkflow'),
status: 'error',
description: t('nodes.unknownErrorValidatingWorkflow'),
})
)
);
}
}
},
});
};

View File

@ -1,56 +0,0 @@
import { logger } from 'app/logging/logger';
import { workflowLoadRequested } from 'features/nodes/store/actions';
import { workflowLoaded } from 'features/nodes/store/nodesSlice';
import { $flow } from 'features/nodes/store/reactFlowInstance';
import { validateWorkflow } from 'features/nodes/util/validateWorkflow';
import { addToast } from 'features/system/store/systemSlice';
import { makeToast } from 'features/system/util/makeToast';
import { setActiveTab } from 'features/ui/store/uiSlice';
import { startAppListening } from '..';
import { t } from 'i18next';
export const addWorkflowLoadedListener = () => {
startAppListening({
actionCreator: workflowLoadRequested,
effect: (action, { dispatch, getState }) => {
const log = logger('nodes');
const workflow = action.payload;
const nodeTemplates = getState().nodes.nodeTemplates;
const { workflow: validatedWorkflow, errors } = validateWorkflow(
workflow,
nodeTemplates
);
dispatch(workflowLoaded(validatedWorkflow));
if (!errors.length) {
dispatch(
addToast(
makeToast({
title: t('toast.workflowLoaded'),
status: 'success',
})
)
);
} else {
dispatch(
addToast(
makeToast({
title: t('toast.loadedWithWarnings'),
status: 'warning',
})
)
);
errors.forEach(({ message, ...rest }) => {
log.warn(rest, message);
});
}
dispatch(setActiveTab('nodes'));
requestAnimationFrame(() => {
$flow.get()?.fitView();
});
},
});
};

View File

@ -23,16 +23,16 @@ import systemReducer from 'features/system/store/systemSlice';
import hotkeysReducer from 'features/ui/store/hotkeysSlice';
import uiReducer from 'features/ui/store/uiSlice';
import dynamicMiddlewares from 'redux-dynamic-middlewares';
import { rememberEnhancer, rememberReducer } from 'redux-remember';
import { Driver, rememberEnhancer, rememberReducer } from 'redux-remember';
import { api } from 'services/api';
import { LOCALSTORAGE_PREFIX } from './constants';
import { STORAGE_PREFIX } from './constants';
import { serialize } from './enhancers/reduxRemember/serialize';
import { unserialize } from './enhancers/reduxRemember/unserialize';
import { actionSanitizer } from './middleware/devtools/actionSanitizer';
import { actionsDenylist } from './middleware/devtools/actionsDenylist';
import { stateSanitizer } from './middleware/devtools/stateSanitizer';
import { listenerMiddleware } from './middleware/listenerMiddleware';
import { $store } from './nanostores/store';
import { createStore as createIDBKeyValStore, get, set } from 'idb-keyval';
const allReducers = {
canvas: canvasReducer,
@ -74,16 +74,28 @@ const rememberedKeys: (keyof typeof allReducers)[] = [
'modelmanager',
];
export const store = configureStore({
// Create a custom idb-keyval store (just needed to customize the name)
export const idbKeyValStore = createIDBKeyValStore('invoke', 'invoke-store');
// Create redux-remember driver, wrapping idb-keyval
const idbKeyValDriver: Driver = {
getItem: (key) => get(key, idbKeyValStore),
setItem: (key, value) => set(key, value, idbKeyValStore),
};
export const createStore = (uniqueStoreKey?: string) =>
configureStore({
reducer: rememberedRootReducer,
enhancers: (existingEnhancers) => {
return existingEnhancers
.concat(
rememberEnhancer(window.localStorage, rememberedKeys, {
rememberEnhancer(idbKeyValDriver, rememberedKeys, {
persistDebounce: 300,
serialize,
unserialize,
prefix: LOCALSTORAGE_PREFIX,
prefix: uniqueStoreKey
? `${STORAGE_PREFIX}${uniqueStoreKey}-`
: STORAGE_PREFIX,
})
)
.concat(autoBatchEnhancer());
@ -119,12 +131,13 @@ export const store = configureStore({
return true;
},
},
});
});
export type AppGetState = typeof store.getState;
export type RootState = ReturnType<typeof store.getState>;
export type AppGetState = ReturnType<
ReturnType<typeof createStore>['getState']
>;
export type RootState = ReturnType<ReturnType<typeof createStore>['getState']>;
// eslint-disable-next-line @typescript-eslint/no-explicit-any
export type AppThunkDispatch = ThunkDispatch<RootState, any, AnyAction>;
export type AppDispatch = typeof store.dispatch;
export type AppDispatch = ReturnType<typeof createStore>['dispatch'];
export const stateSelector = (state: RootState) => state;
$store.set(store);

View File

@ -20,7 +20,7 @@ import { merge, omit } from 'lodash-es';
import { PropsWithChildren, memo, useCallback, useMemo } from 'react';
import { useTranslation } from 'react-i18next';
import { FaExternalLinkAlt } from 'react-icons/fa';
import { useAppSelector } from '../../../app/store/storeHooks';
import { useAppSelector } from 'app/store/storeHooks';
import {
Feature,
OPEN_DELAY,

View File

@ -25,6 +25,7 @@ export type Feature =
| 'lora'
| 'noiseUseCPU'
| 'paramCFGScale'
| 'paramCFGRescaleMultiplier'
| 'paramDenoisingStrength'
| 'paramIterations'
| 'paramModel'

View File

@ -0,0 +1,12 @@
import { idbKeyValStore } from 'app/store/store';
import { clear } from 'idb-keyval';
import { useCallback } from 'react';
export const useClearStorage = () => {
const clearStorage = useCallback(() => {
clear(idbKeyValStore);
localStorage.clear();
}, []);
return clearStorage;
};

View File

@ -4,7 +4,7 @@ import { useAppSelector } from 'app/store/storeHooks';
import { defaultSelectorOptions } from 'app/store/util/defaultMemoizeOptions';
import { selectControlAdapterAll } from 'features/controlAdapters/store/controlAdaptersSlice';
import { isControlNetOrT2IAdapter } from 'features/controlAdapters/store/types';
import { isInvocationNode } from 'features/nodes/types/types';
import { isInvocationNode } from 'features/nodes/types/invocation';
import { activeTabNameSelector } from 'features/ui/store/uiSelectors';
import i18n from 'i18next';
import { forEach } from 'lodash-es';

View File

@ -4,7 +4,7 @@ import IAIButton from 'common/components/IAIButton';
import { clearCanvasHistory } from 'features/canvas/store/canvasSlice';
import { useTranslation } from 'react-i18next';
import { FaTrash } from 'react-icons/fa';
import { isStagingSelector } from '../store/canvasSelectors';
import { isStagingSelector } from 'features/canvas/store/canvasSelectors';
import { memo, useCallback } from 'react';
const ClearCanvasHistoryButtonModal = () => {

View File

@ -11,18 +11,18 @@ import { KonvaEventObject } from 'konva/lib/Node';
import { Vector2d } from 'konva/lib/types';
import { memo, useCallback, useEffect, useRef } from 'react';
import { Layer, Stage } from 'react-konva';
import useCanvasDragMove from '../hooks/useCanvasDragMove';
import useCanvasHotkeys from '../hooks/useCanvasHotkeys';
import useCanvasMouseDown from '../hooks/useCanvasMouseDown';
import useCanvasMouseMove from '../hooks/useCanvasMouseMove';
import useCanvasMouseOut from '../hooks/useCanvasMouseOut';
import useCanvasMouseUp from '../hooks/useCanvasMouseUp';
import useCanvasWheel from '../hooks/useCanvasZoom';
import { canvasResized } from '../store/canvasSlice';
import useCanvasDragMove from 'features/canvas/hooks/useCanvasDragMove';
import useCanvasHotkeys from 'features/canvas/hooks/useCanvasHotkeys';
import useCanvasMouseDown from 'features/canvas/hooks/useCanvasMouseDown';
import useCanvasMouseMove from 'features/canvas/hooks/useCanvasMouseMove';
import useCanvasMouseOut from 'features/canvas/hooks/useCanvasMouseOut';
import useCanvasMouseUp from 'features/canvas/hooks/useCanvasMouseUp';
import useCanvasWheel from 'features/canvas/hooks/useCanvasZoom';
import { canvasResized } from 'features/canvas/store/canvasSlice';
import {
setCanvasBaseLayer,
setCanvasStage,
} from '../util/konvaInstanceProvider';
} from 'features/canvas/util/konvaInstanceProvider';
import IAICanvasBoundingBoxOverlay from './IAICanvasBoundingBoxOverlay';
import IAICanvasGrid from './IAICanvasGrid';
import IAICanvasIntermediateImage from './IAICanvasIntermediateImage';

View File

@ -3,7 +3,7 @@ import { useAppSelector } from 'app/store/storeHooks';
import { isEqual } from 'lodash-es';
import { Group, Rect } from 'react-konva';
import { canvasSelector } from '../store/canvasSelectors';
import { canvasSelector } from 'features/canvas/store/canvasSelectors';
import { memo } from 'react';
const selector = createSelector(

View File

@ -4,7 +4,7 @@ import { memo } from 'react';
import { Image } from 'react-konva';
import { useGetImageDTOQuery } from 'services/api/endpoints/images';
import useImage from 'use-image';
import { CanvasImage } from '../store/canvasTypes';
import { CanvasImage } from 'features/canvas/store/canvasTypes';
import IAICanvasImageErrorFallback from './IAICanvasImageErrorFallback';
type IAICanvasImageProps = {

View File

@ -2,7 +2,7 @@ import { useColorModeValue, useToken } from '@chakra-ui/react';
import { memo } from 'react';
import { useTranslation } from 'react-i18next';
import { Group, Rect, Text } from 'react-konva';
import { CanvasImage } from '../store/canvasTypes';
import { CanvasImage } from 'features/canvas/store/canvasTypes';
type IAICanvasImageErrorFallbackProps = {
canvasImage: CanvasImage;

View File

@ -5,7 +5,7 @@ import { GroupConfig } from 'konva/lib/Group';
import { isEqual } from 'lodash-es';
import { Group, Line } from 'react-konva';
import { isCanvasMaskLine } from '../store/canvasTypes';
import { isCanvasMaskLine } from 'features/canvas/store/canvasTypes';
import { memo } from 'react';
export const canvasLinesSelector = createSelector(

View File

@ -10,7 +10,7 @@ import {
isCanvasBaseLine,
isCanvasEraseRect,
isCanvasFillRect,
} from '../store/canvasTypes';
} from 'features/canvas/store/canvasTypes';
import IAICanvasImage from './IAICanvasImage';
import { memo } from 'react';

View File

@ -28,7 +28,7 @@ import {
FaTimes,
} from 'react-icons/fa';
import { useGetImageDTOQuery } from 'services/api/endpoints/images';
import { stagingAreaImageSaved } from '../store/actions';
import { stagingAreaImageSaved } from 'features/canvas/store/actions';
const selector = createSelector(
[canvasSelector],

View File

@ -5,7 +5,7 @@ import { canvasSelector } from 'features/canvas/store/canvasSelectors';
import GenerationModeStatusText from 'features/parameters/components/Parameters/Canvas/GenerationModeStatusText';
import { isEqual } from 'lodash-es';
import { useTranslation } from 'react-i18next';
import roundToHundreth from '../util/roundToHundreth';
import roundToHundreth from 'features/canvas/util/roundToHundreth';
import IAICanvasStatusTextCursorPos from './IAICanvasStatusText/IAICanvasStatusTextCursorPos';
import { memo } from 'react';

View File

@ -9,7 +9,7 @@ import { Circle, Group } from 'react-konva';
import {
COLOR_PICKER_SIZE,
COLOR_PICKER_STROKE_RADIUS,
} from '../util/constants';
} from 'features/canvas/util/constants';
import { memo } from 'react';
const canvasBrushPreviewSelector = createSelector(

View File

@ -22,7 +22,7 @@ import { ChangeEvent, memo, useCallback } from 'react';
import { useHotkeys } from 'react-hotkeys-hook';
import { useTranslation } from 'react-i18next';
import { FaWrench } from 'react-icons/fa';
import ClearCanvasHistoryButtonModal from '../ClearCanvasHistoryButtonModal';
import ClearCanvasHistoryButtonModal from 'features/canvas/components/ClearCanvasHistoryButtonModal';
export const canvasControlsSelector = createSelector(
[canvasSelector],

View File

@ -17,8 +17,8 @@ import { isEqual } from 'lodash-es';
import { useRef } from 'react';
import { useHotkeys } from 'react-hotkeys-hook';
import { CanvasTool } from '../store/canvasTypes';
import { getCanvasStage } from '../util/konvaInstanceProvider';
import { CanvasTool } from 'features/canvas/store/canvasTypes';
import { getCanvasStage } from 'features/canvas/util/konvaInstanceProvider';
const selector = createSelector(
[canvasSelector, activeTabNameSelector, isStagingSelector],

View File

@ -15,7 +15,7 @@ import { KonvaEventObject } from 'konva/lib/Node';
import { isEqual } from 'lodash-es';
import { MutableRefObject, useCallback } from 'react';
import getScaledCursorPosition from '../util/getScaledCursorPosition';
import getScaledCursorPosition from 'features/canvas/util/getScaledCursorPosition';
import useColorPicker from './useColorUnderCursor';
const selector = createSelector(

View File

@ -14,7 +14,7 @@ import { Vector2d } from 'konva/lib/types';
import { isEqual } from 'lodash-es';
import { MutableRefObject, useCallback } from 'react';
import getScaledCursorPosition from '../util/getScaledCursorPosition';
import getScaledCursorPosition from 'features/canvas/util/getScaledCursorPosition';
import useColorPicker from './useColorUnderCursor';
const selector = createSelector(

View File

@ -15,7 +15,7 @@ import Konva from 'konva';
import { isEqual } from 'lodash-es';
import { MutableRefObject, useCallback } from 'react';
import getScaledCursorPosition from '../util/getScaledCursorPosition';
import getScaledCursorPosition from 'features/canvas/util/getScaledCursorPosition';
const selector = createSelector(
[activeTabNameSelector, canvasSelector, isStagingSelector],

View File

@ -14,7 +14,7 @@ import {
CANVAS_SCALE_BY,
MAX_CANVAS_SCALE,
MIN_CANVAS_SCALE,
} from '../util/constants';
} from 'features/canvas/util/constants';
const selector = createSelector(
[canvasSelector],

View File

@ -3,11 +3,11 @@ import Konva from 'konva';
import {
commitColorPickerColor,
setColorPickerColor,
} from '../store/canvasSlice';
} from 'features/canvas/store/canvasSlice';
import {
getCanvasBaseLayer,
getCanvasStage,
} from '../util/konvaInstanceProvider';
} from 'features/canvas/util/konvaInstanceProvider';
const useColorPicker = () => {
const dispatch = useAppDispatch();

View File

@ -9,12 +9,12 @@ import { IRect, Vector2d } from 'konva/lib/types';
import { clamp, cloneDeep } from 'lodash-es';
import { RgbaColor } from 'react-colorful';
import { ImageDTO } from 'services/api/types';
import calculateCoordinates from '../util/calculateCoordinates';
import calculateScale from '../util/calculateScale';
import { STAGE_PADDING_PERCENTAGE } from '../util/constants';
import floorCoordinates from '../util/floorCoordinates';
import getScaledBoundingBoxDimensions from '../util/getScaledBoundingBoxDimensions';
import roundDimensionsTo64 from '../util/roundDimensionsTo64';
import calculateCoordinates from 'features/canvas/util/calculateCoordinates';
import calculateScale from 'features/canvas/util/calculateScale';
import { STAGE_PADDING_PERCENTAGE } from 'features/canvas/util/constants';
import floorCoordinates from 'features/canvas/util/floorCoordinates';
import getScaledBoundingBoxDimensions from 'features/canvas/util/getScaledBoundingBoxDimensions';
import roundDimensionsTo64 from 'features/canvas/util/roundDimensionsTo64';
import {
BoundingBoxScale,
CanvasBaseLine,

View File

@ -4,7 +4,7 @@ import {
CanvasLayerState,
Dimensions,
isCanvasMaskLine,
} from '../store/canvasTypes';
} from 'features/canvas/store/canvasTypes';
import createMaskStage from './createMaskStage';
import { getCanvasBaseLayer, getCanvasStage } from './konvaInstanceProvider';
import { konvaNodeToBlob } from './konvaNodeToBlob';

View File

@ -2,7 +2,7 @@ import {
areAnyPixelsBlack,
getImageDataTransparency,
} from 'common/util/arrayBuffer';
import { GenerationMode } from '../store/canvasTypes';
import { GenerationMode } from 'features/canvas/store/canvasTypes';
export const getCanvasGenerationMode = (
baseImageData: ImageData,

View File

@ -1,5 +1,5 @@
import { roundToMultiple } from 'common/util/roundDownToMultiple';
import { Dimensions } from '../store/canvasTypes';
import { Dimensions } from 'features/canvas/store/canvasTypes';
const getScaledBoundingBoxDimensions = (dimensions: Dimensions) => {
const { width, height } = dimensions;

View File

@ -1,5 +1,5 @@
import { roundToMultiple } from 'common/util/roundDownToMultiple';
import { Dimensions } from '../store/canvasTypes';
import { Dimensions } from 'features/canvas/store/canvasTypes';
const roundDimensionsTo64 = (dimensions: Dimensions): Dimensions => {
return {

View File

@ -20,7 +20,10 @@ import {
useAddImagesToBoardMutation,
useRemoveImagesFromBoardMutation,
} from 'services/api/endpoints/images';
import { changeBoardReset, isModalOpenChanged } from '../store/slice';
import {
changeBoardReset,
isModalOpenChanged,
} from 'features/changeBoardModal/store/slice';
import { useTranslation } from 'react-i18next';
const selector = createSelector(

View File

@ -6,7 +6,7 @@ import {
controlAdapterDuplicated,
controlAdapterIsEnabledChanged,
controlAdapterRemoved,
} from '../store/controlAdaptersSlice';
} from 'features/controlAdapters/store/controlAdaptersSlice';
import ParamControlAdapterModel from './parameters/ParamControlAdapterModel';
import ParamControlAdapterWeight from './parameters/ParamControlAdapterWeight';
@ -16,8 +16,8 @@ import IAISwitch from 'common/components/IAISwitch';
import { activeTabNameSelector } from 'features/ui/store/uiSelectors';
import { useTranslation } from 'react-i18next';
import { useToggle } from 'react-use';
import { useControlAdapterIsEnabled } from '../hooks/useControlAdapterIsEnabled';
import { useControlAdapterType } from '../hooks/useControlAdapterType';
import { useControlAdapterIsEnabled } from 'features/controlAdapters/hooks/useControlAdapterIsEnabled';
import { useControlAdapterType } from 'features/controlAdapters/hooks/useControlAdapterType';
import ControlAdapterImagePreview from './ControlAdapterImagePreview';
import ControlAdapterProcessorComponent from './ControlAdapterProcessorComponent';
import ControlAdapterShouldAutoConfig from './ControlAdapterShouldAutoConfig';

View File

@ -5,14 +5,19 @@ import { stateSelector } from 'app/store/store';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import { defaultSelectorOptions } from 'app/store/util/defaultMemoizeOptions';
import IAIDndImage from 'common/components/IAIDndImage';
import IAIDndImageIcon from 'common/components/IAIDndImageIcon';
import { setBoundingBoxDimensions } from 'features/canvas/store/canvasSlice';
import { useControlAdapterControlImage } from 'features/controlAdapters/hooks/useControlAdapterControlImage';
import { useControlAdapterProcessedControlImage } from 'features/controlAdapters/hooks/useControlAdapterProcessedControlImage';
import { useControlAdapterProcessorType } from 'features/controlAdapters/hooks/useControlAdapterProcessorType';
import { controlAdapterImageChanged } from 'features/controlAdapters/store/controlAdaptersSlice';
import {
TypesafeDraggableData,
TypesafeDroppableData,
} from 'features/dnd/types';
import { setHeight, setWidth } from 'features/parameters/store/generationSlice';
import { activeTabNameSelector } from 'features/ui/store/uiSelectors';
import { memo, useCallback, useMemo, useState } from 'react';
import { memo, useCallback, useEffect, useMemo, useState } from 'react';
import { useTranslation } from 'react-i18next';
import { FaRulerVertical, FaSave, FaUndo } from 'react-icons/fa';
import {
@@ -22,11 +27,6 @@ import {
useRemoveImageFromBoardMutation,
} from 'services/api/endpoints/images';
import { PostUploadAction } from 'services/api/types';
import IAIDndImageIcon from '../../../common/components/IAIDndImageIcon';
import { controlAdapterImageChanged } from '../store/controlAdaptersSlice';
import { useControlAdapterControlImage } from '../hooks/useControlAdapterControlImage';
import { useControlAdapterProcessedControlImage } from '../hooks/useControlAdapterProcessedControlImage';
import { useControlAdapterProcessorType } from '../hooks/useControlAdapterProcessorType';
type Props = {
id: string;
@@ -35,13 +35,15 @@ type Props = {
const selector = createSelector(
stateSelector,
({ controlAdapters, gallery }) => {
({ controlAdapters, gallery, system }) => {
const { pendingControlImages } = controlAdapters;
const { autoAddBoardId } = gallery;
const { isConnected } = system;
return {
pendingControlImages,
autoAddBoardId,
isConnected,
};
},
defaultSelectorOptions
@@ -55,18 +57,19 @@ const ControlAdapterImagePreview = ({ isSmall, id }: Props) => {
const dispatch = useAppDispatch();
const { t } = useTranslation();
const { pendingControlImages, autoAddBoardId } = useAppSelector(selector);
const { pendingControlImages, autoAddBoardId, isConnected } =
useAppSelector(selector);
const activeTabName = useAppSelector(activeTabNameSelector);
const [isMouseOverImage, setIsMouseOverImage] = useState(false);
const { currentData: controlImage } = useGetImageDTOQuery(
controlImageName ?? skipToken
);
const { currentData: controlImage, isError: isErrorControlImage } =
useGetImageDTOQuery(controlImageName ?? skipToken);
const { currentData: processedControlImage } = useGetImageDTOQuery(
processedControlImageName ?? skipToken
);
const {
currentData: processedControlImage,
isError: isErrorProcessedControlImage,
} = useGetImageDTOQuery(processedControlImageName ?? skipToken);
const [changeIsIntermediate] = useChangeImageIsIntermediateMutation();
const [addToBoard] = useAddImageToBoardMutation();
@@ -158,6 +161,17 @@ const ControlAdapterImagePreview = ({ isSmall, id }: Props) => {
!pendingControlImages.includes(id) &&
processorType !== 'none';
useEffect(() => {
if (isConnected && (isErrorControlImage || isErrorProcessedControlImage)) {
handleResetControlImage();
}
}, [
handleResetControlImage,
isConnected,
isErrorControlImage,
isErrorProcessedControlImage,
]);
return (
<Flex
onMouseEnter={handleMouseEnter}
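The new `isConnected` selector field and the `useEffect` in this hunk form a reset-on-error guard: when an image query fails while the client is still connected (so the failure is not just a transient disconnect), the stale control-image reference is cleared. A minimal sketch of that pattern as a standalone hook, with hypothetical names (`useAutoResetImage`, `onReset`) that are not part of this commit:

```ts
// Illustrative sketch only — not the InvokeAI implementation.
import { useEffect } from 'react';

type UseAutoResetImageArgs = {
  isConnected: boolean; // e.g. socket/connection status from app state
  isError: boolean; // e.g. `isError` from an RTK Query image fetch
  onReset: () => void; // clears the stale image reference
};

// Clears a stale image reference when its fetch fails while the client
// is still connected, i.e. the failure is not just a dropped connection.
export const useAutoResetImage = ({
  isConnected,
  isError,
  onReset,
}: UseAutoResetImageArgs) => {
  useEffect(() => {
    if (isConnected && isError) {
      onReset();
    }
  }, [isConnected, isError, onReset]);
};
```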

View File

@@ -12,8 +12,8 @@ import NormalBaeProcessor from './processors/NormalBaeProcessor';
import OpenposeProcessor from './processors/OpenposeProcessor';
import PidiProcessor from './processors/PidiProcessor';
import ZoeDepthProcessor from './processors/ZoeDepthProcessor';
import { useControlAdapterIsEnabled } from '../hooks/useControlAdapterIsEnabled';
import { useControlAdapterProcessorNode } from '../hooks/useControlAdapterProcessorNode';
import { useControlAdapterIsEnabled } from 'features/controlAdapters/hooks/useControlAdapterIsEnabled';
import { useControlAdapterProcessorNode } from 'features/controlAdapters/hooks/useControlAdapterProcessorNode';
export type Props = {
id: string;

View File

@@ -3,8 +3,8 @@ import IAISwitch from 'common/components/IAISwitch';
import { controlAdapterAutoConfigToggled } from 'features/controlAdapters/store/controlAdaptersSlice';
import { memo, useCallback } from 'react';
import { useTranslation } from 'react-i18next';
import { useControlAdapterIsEnabled } from '../hooks/useControlAdapterIsEnabled';
import { useControlAdapterShouldAutoConfig } from '../hooks/useControlAdapterShouldAutoConfig';
import { useControlAdapterIsEnabled } from 'features/controlAdapters/hooks/useControlAdapterIsEnabled';
import { useControlAdapterShouldAutoConfig } from 'features/controlAdapters/hooks/useControlAdapterShouldAutoConfig';
import { isNil } from 'lodash-es';
type Props = {

View File

@@ -19,7 +19,7 @@ import { useFeatureStatus } from 'features/system/hooks/useFeatureStatus';
import { Fragment, memo } from 'react';
import { useTranslation } from 'react-i18next';
import { FaPlus } from 'react-icons/fa';
import { useAddControlAdapter } from '../hooks/useAddControlAdapter';
import { useAddControlAdapter } from 'features/controlAdapters/hooks/useAddControlAdapter';
const selector = createSelector(
[stateSelector],

View File

@@ -11,9 +11,9 @@ import { configSelector } from 'features/system/store/configSelectors';
import { map } from 'lodash-es';
import { memo, useCallback } from 'react';
import { useTranslation } from 'react-i18next';
import { CONTROLNET_PROCESSORS } from '../../store/constants';
import { controlAdapterProcessortTypeChanged } from '../../store/controlAdaptersSlice';
import { ControlAdapterProcessorType } from '../../store/types';
import { CONTROLNET_PROCESSORS } from 'features/controlAdapters/store/constants';
import { controlAdapterProcessortTypeChanged } from 'features/controlAdapters/store/controlAdaptersSlice';
import { ControlAdapterProcessorType } from 'features/controlAdapters/store/types';
type Props = {
id: string;

View File

@@ -3,7 +3,7 @@ import { CONTROLNET_PROCESSORS } from 'features/controlAdapters/store/constants'
import { RequiredCannyImageProcessorInvocation } from 'features/controlAdapters/store/types';
import { memo, useCallback } from 'react';
import { useTranslation } from 'react-i18next';
import { useProcessorNodeChanged } from '../hooks/useProcessorNodeChanged';
import { useProcessorNodeChanged } from 'features/controlAdapters/components/hooks/useProcessorNodeChanged';
import ProcessorWrapper from './common/ProcessorWrapper';
const DEFAULTS = CONTROLNET_PROCESSORS.canny_image_processor

View File

@@ -3,7 +3,7 @@ import { CONTROLNET_PROCESSORS } from 'features/controlAdapters/store/constants'
import { RequiredColorMapImageProcessorInvocation } from 'features/controlAdapters/store/types';
import { memo, useCallback } from 'react';
import { useTranslation } from 'react-i18next';
import { useProcessorNodeChanged } from '../hooks/useProcessorNodeChanged';
import { useProcessorNodeChanged } from 'features/controlAdapters/components/hooks/useProcessorNodeChanged';
import ProcessorWrapper from './common/ProcessorWrapper';
const DEFAULTS = CONTROLNET_PROCESSORS.color_map_image_processor

View File

@@ -3,7 +3,7 @@ import { CONTROLNET_PROCESSORS } from 'features/controlAdapters/store/constants'
import { RequiredContentShuffleImageProcessorInvocation } from 'features/controlAdapters/store/types';
import { memo, useCallback } from 'react';
import { useTranslation } from 'react-i18next';
import { useProcessorNodeChanged } from '../hooks/useProcessorNodeChanged';
import { useProcessorNodeChanged } from 'features/controlAdapters/components/hooks/useProcessorNodeChanged';
import ProcessorWrapper from './common/ProcessorWrapper';
const DEFAULTS = CONTROLNET_PROCESSORS.content_shuffle_image_processor

View File

@@ -4,7 +4,7 @@ import { CONTROLNET_PROCESSORS } from 'features/controlAdapters/store/constants'
import { RequiredHedImageProcessorInvocation } from 'features/controlAdapters/store/types';
import { ChangeEvent, memo, useCallback } from 'react';
import { useTranslation } from 'react-i18next';
import { useProcessorNodeChanged } from '../hooks/useProcessorNodeChanged';
import { useProcessorNodeChanged } from 'features/controlAdapters/components/hooks/useProcessorNodeChanged';
import ProcessorWrapper from './common/ProcessorWrapper';
const DEFAULTS = CONTROLNET_PROCESSORS.hed_image_processor

View File

@@ -3,7 +3,7 @@ import { CONTROLNET_PROCESSORS } from 'features/controlAdapters/store/constants'
import { RequiredLineartAnimeImageProcessorInvocation } from 'features/controlAdapters/store/types';
import { memo, useCallback } from 'react';
import { useTranslation } from 'react-i18next';
import { useProcessorNodeChanged } from '../hooks/useProcessorNodeChanged';
import { useProcessorNodeChanged } from 'features/controlAdapters/components/hooks/useProcessorNodeChanged';
import ProcessorWrapper from './common/ProcessorWrapper';
const DEFAULTS = CONTROLNET_PROCESSORS.lineart_anime_image_processor

View File

@@ -4,7 +4,7 @@ import { CONTROLNET_PROCESSORS } from 'features/controlAdapters/store/constants'
import { RequiredLineartImageProcessorInvocation } from 'features/controlAdapters/store/types';
import { ChangeEvent, memo, useCallback } from 'react';
import { useTranslation } from 'react-i18next';
import { useProcessorNodeChanged } from '../hooks/useProcessorNodeChanged';
import { useProcessorNodeChanged } from 'features/controlAdapters/components/hooks/useProcessorNodeChanged';
import ProcessorWrapper from './common/ProcessorWrapper';
const DEFAULTS = CONTROLNET_PROCESSORS.lineart_image_processor

View File

@@ -3,7 +3,7 @@ import { CONTROLNET_PROCESSORS } from 'features/controlAdapters/store/constants'
import { RequiredMediapipeFaceProcessorInvocation } from 'features/controlAdapters/store/types';
import { memo, useCallback } from 'react';
import { useTranslation } from 'react-i18next';
import { useProcessorNodeChanged } from '../hooks/useProcessorNodeChanged';
import { useProcessorNodeChanged } from 'features/controlAdapters/components/hooks/useProcessorNodeChanged';
import ProcessorWrapper from './common/ProcessorWrapper';
const DEFAULTS = CONTROLNET_PROCESSORS.mediapipe_face_processor

View File

@@ -3,7 +3,7 @@ import { CONTROLNET_PROCESSORS } from 'features/controlAdapters/store/constants'
import { RequiredMidasDepthImageProcessorInvocation } from 'features/controlAdapters/store/types';
import { memo, useCallback } from 'react';
import { useTranslation } from 'react-i18next';
import { useProcessorNodeChanged } from '../hooks/useProcessorNodeChanged';
import { useProcessorNodeChanged } from 'features/controlAdapters/components/hooks/useProcessorNodeChanged';
import ProcessorWrapper from './common/ProcessorWrapper';
const DEFAULTS = CONTROLNET_PROCESSORS.midas_depth_image_processor

View File

@@ -3,7 +3,7 @@ import { CONTROLNET_PROCESSORS } from 'features/controlAdapters/store/constants'
import { RequiredMlsdImageProcessorInvocation } from 'features/controlAdapters/store/types';
import { memo, useCallback } from 'react';
import { useTranslation } from 'react-i18next';
import { useProcessorNodeChanged } from '../hooks/useProcessorNodeChanged';
import { useProcessorNodeChanged } from 'features/controlAdapters/components/hooks/useProcessorNodeChanged';
import ProcessorWrapper from './common/ProcessorWrapper';
const DEFAULTS = CONTROLNET_PROCESSORS.mlsd_image_processor

View File

@@ -3,7 +3,7 @@ import { CONTROLNET_PROCESSORS } from 'features/controlAdapters/store/constants'
import { RequiredNormalbaeImageProcessorInvocation } from 'features/controlAdapters/store/types';
import { memo, useCallback } from 'react';
import { useTranslation } from 'react-i18next';
import { useProcessorNodeChanged } from '../hooks/useProcessorNodeChanged';
import { useProcessorNodeChanged } from 'features/controlAdapters/components/hooks/useProcessorNodeChanged';
import ProcessorWrapper from './common/ProcessorWrapper';
const DEFAULTS = CONTROLNET_PROCESSORS.normalbae_image_processor

View File

@@ -4,7 +4,7 @@ import { CONTROLNET_PROCESSORS } from 'features/controlAdapters/store/constants'
import { RequiredOpenposeImageProcessorInvocation } from 'features/controlAdapters/store/types';
import { ChangeEvent, memo, useCallback } from 'react';
import { useTranslation } from 'react-i18next';
import { useProcessorNodeChanged } from '../hooks/useProcessorNodeChanged';
import { useProcessorNodeChanged } from 'features/controlAdapters/components/hooks/useProcessorNodeChanged';
import ProcessorWrapper from './common/ProcessorWrapper';
const DEFAULTS = CONTROLNET_PROCESSORS.openpose_image_processor

View File

@@ -4,7 +4,7 @@ import { CONTROLNET_PROCESSORS } from 'features/controlAdapters/store/constants'
import { RequiredPidiImageProcessorInvocation } from 'features/controlAdapters/store/types';
import { ChangeEvent, memo, useCallback } from 'react';
import { useTranslation } from 'react-i18next';
import { useProcessorNodeChanged } from '../hooks/useProcessorNodeChanged';
import { useProcessorNodeChanged } from 'features/controlAdapters/components/hooks/useProcessorNodeChanged';
import ProcessorWrapper from './common/ProcessorWrapper';
const DEFAULTS = CONTROLNET_PROCESSORS.pidi_image_processor

Some files were not shown because too many files have changed in this diff.