Compare commits

...

23 Commits

Author SHA1 Message Date
338081f855 feat(nodes,ui): use new metadata nodes for all metadata 2023-09-23 14:51:39 +10:00
57b0e175da chore(ui): typegen 2023-09-23 12:35:11 +10:00
38fbc7ef2b feat(nodes): improved api for workflow and metadata embedding
- Invocations that want metadata or workflows now must inherit from `WithMetadata` and `WithWorkflow` models (in addition to `BaseInvocation`). These provide the appropriate `workflow` and `metadata` fields to the invocation. The alternative API is to use the `@invocation` decorator, but this doesn't give type hints in invocation definitions and is also much harder to understand. I tried and scrapped it.
- Add `MergeMetadataDictInvocation`, which takes a collection of `MetadataDict` and merges them together. This adds additional flexibility to the construction of metadata. Without this, the linear UI would need a separate MetadataItemInvocation for *every* piece of metadata - with this merge node, far fewer nodes are needed to construct metadata.
- Update all invocations to use this new setup
- Update UI with new field types and logic to handle this new way of specifying workflows and metadata
2023-09-23 12:34:59 +10:00
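The mixin API described above can be sketched with plain dataclasses (the real models are pydantic classes with `InputField` definitions; only the class names `WithMetadata`, `WithWorkflow`, and `BaseInvocation` come from the commit):

```python
from dataclasses import dataclass
from typing import Optional

# Plain-dataclass sketch of the opt-in mixin pattern; a stand-in, not the
# real pydantic implementation.
@dataclass
class WithMetadata:
    metadata: Optional[dict] = None

@dataclass
class WithWorkflow:
    workflow: Optional[str] = None

@dataclass
class BaseInvocation:
    is_intermediate: bool = False

# An invocation opts in to metadata and workflow embedding by inheriting both:
@dataclass
class SaveImageSketch(BaseInvocation, WithMetadata, WithWorkflow):
    pass

node = SaveImageSketch(metadata={"seed": 123})
```

Listing the mixins explicitly in the class definition keeps the extra fields visible to type checkers, which is the advantage over the scrapped decorator-based alternative the message mentions.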
23771aadf5 feat(nodes): refine validation for collector outputs
- Remove the check for `Any` inside the validation function; instead, only run the validation function if the Collect node is not outputting to an `Any` or `list[Any]` input (both are valid)
- Add validation for Collect -> Iterate, and refine the test for this
2023-09-23 12:34:47 +10:00
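A minimal standalone sketch of the refined check (the function name is hypothetical; the real validation lives in the graph code):

```python
from typing import Any, get_args, get_origin

def skip_collect_validation(dest_type) -> bool:
    # Validation of a Collect edge is skipped when the destination input is
    # Any or list[Any], since both accept any collect output.
    if dest_type is Any:
        return True
    return get_origin(dest_type) is list and get_args(dest_type) == (Any,)
```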
aade61701c feat(nodes): add more tests
- Add test, currently xfail'd, for collect -> iterate node connections
- Add more polymorphic tests
2023-09-23 12:34:47 +10:00
692e104c9c chore: isort 2023-09-23 12:34:47 +10:00
a33d3eaf8e feat(nodes): make MetadataInvocation.items polymorphic 2023-09-23 12:34:47 +10:00
809be4c584 feat(nodes): allow collect out to connect Any and polymorphic inputs
- Add condition for destination fields being `Any` during validation of Collect node edges
- Add condition for polymorphic inputs to accept collect outputs when the base type matches, e.g. `str -> collect -> Union[str, list[str]]`
- Add tests for `Any` and polymorphic handling (which previously had no tests)
2023-09-23 12:34:47 +10:00
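The base-type matching rule can be sketched as follows (hypothetical helper name; the real check lives in the graph validation code):

```python
from typing import Union, get_args, get_origin

def accepts_collect_output(base_type, dest_type) -> bool:
    # A Collect node outputs list[base_type]; a destination accepts it when it
    # is exactly list[base_type], or a Union containing list[base_type]
    # (the polymorphic case, e.g. Union[str, list[str]]).
    if get_origin(dest_type) is Union:
        return list[base_type] in get_args(dest_type)
    return dest_type == list[base_type]
```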
1171706535 feat(nodes): generalised metadata handling for nodes 2023-09-23 12:34:47 +10:00
78658dcd91 fix(ui): do not process gallery logic for image primitive node 2023-09-23 12:34:38 +10:00
33dc7ead71 fix(ui): hide workflow and gallery checkboxes on image primitive
This node doesn't actually *save* the image, so these checkboxes do nothing on it.
2023-09-23 12:34:38 +10:00
e05179e3f8 fix(ui): fix node glow styling 2023-09-23 12:34:38 +10:00
bd8ea26f38 feat: Add Color PreProcessor to Linear UI 2023-09-23 12:34:38 +10:00
021f77e35e fix: Handle cases where tile size > image size 2023-09-23 12:34:22 +10:00
f718de98f8 feat: Add Color Map Preprocessor 2023-09-23 12:34:22 +10:00
5d274c25f5 feat(ui): refactor informational popover
- Change translations to use arrays of paragraphs instead of a single paragraph.
- Change component to accept a `feature` prop to identify the feature which the popover describes.
- Add optional `wrapperProps`: passed to the wrapper element, allowing more flexibility when using the popover
- Add optional `popoverProps`: passed to the `<Popover />` component, allowing for overriding individual instances of the popover's props
- Move definitions of features and popover settings to `invokeai/frontend/web/src/common/components/IAIInformationalPopover/constants.ts`
  - Add some type safety to the `feature` prop
  - Edit `POPOVER_DATA` to provide `image`, `href`, `buttonLabel`, and any popover props. The popover props are applied to all instances of the popover for the given feature. Note that the component prop `popoverProps` will override settings here.
- Remove the popover's arrow. Because the popover is wrapping groups of components, sometimes the arrow ends up pointing at nothing, which looks kinda janky. I've just removed the arrow entirely, but feel free to add it back if you think it looks better.
- Use a `link` variant button with external link icon to better communicate that clicking the button will open a new tab.
- Default the link button label to "Learn More" (if a label is provided, that will be used instead)
- Make the default position `top`, but manually set some to `right` - namely, anything with a dropdown. This prevents the popovers from obscuring or being obscured by the dropdowns.
- Do a bit more restructuring of the Popover component itself, and how it is integrated with other components
- More ref forwarding
- Make the open delay 1s
- Set the popovers to use lazy mounting (i.e. do not mount until the user opens the popover)
- Update the verbiage for many popover items and add missing dynamic prompts stuff
2023-09-23 12:34:22 +10:00
e8c0f040ad fix(nodes): do not use double-underscores in cache service 2023-09-23 12:34:22 +10:00
3c38e735b6 fix(nodes): do not disable invocation cache delete methods
When the runtime disabled flag is on, do not skip the delete methods; skipping them could lead to a cache hit on a deleted resource.

Do skip them when the cache size is 0, because the user cannot change that at runtime (the app must be restarted to change it).
2023-09-23 12:34:22 +10:00
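A minimal sketch of the described behavior, with assumed names rather than the real service API:

```python
class InvocationCacheSketch:
    # Sketch of the fix in 3c38e735b6: deletes still run while the cache is
    # disabled at runtime, but are no-ops for a size-0 cache.
    def __init__(self, max_size: int):
        self.max_size = max_size  # fixed at startup; requires a restart to change
        self.disabled = False     # user-togglable at runtime
        self._store = {}

    def get(self, key):
        if self.disabled or self.max_size == 0:
            return None
        return self._store.get(key)

    def put(self, key, value):
        if self.disabled or self.max_size == 0:
            return
        self._store[key] = value

    def delete(self, key):
        # Runs even when self.disabled is True: otherwise re-enabling the
        # cache could serve a hit for a resource deleted while it was off.
        if self.max_size == 0:
            return
        self._store.pop(key, None)
```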
f59c3f560a feat(nodes): provide board_id in image creation 2023-09-23 12:34:22 +10:00
faabfe9520 feat(ui): update linear UI to use new board field on save_image
- Remove the now-unneeded network request to add the image to a board after generation finishes
- Update linear graphs & upscale graph to save image to the board
- Update autoSwitch logic so that when an image is generated we still switch to the right board
2023-09-23 12:34:22 +10:00
ba5f001645 feat: move board logic to save_image node
- Remove the add-to-board node
- Create `BoardField` field type & add it to `save_image` node
- Add UI for `BoardField`
- Tighten up some loose types
- Make `save_image` node, in workflow editor, default to not intermediate
- Patch bump `save_image`
2023-09-23 12:34:22 +10:00
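A rough sketch of the new field and how `save_image` resolves it at image-creation time (the real `BoardField` is a pydantic model; only `board_id` is assumed here):

```python
from typing import Optional

class BoardFieldSketch:
    # Stand-in for the BoardField type added in ba5f001645.
    def __init__(self, board_id: str):
        self.board_id = board_id

def resolve_board_id(board: Optional[BoardFieldSketch]) -> Optional[str]:
    # save_image passes the board directly when creating the image record,
    # replacing the separate add-to-board node/request.
    return board.board_id if board else None
```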
3d2a28b6e0 Add images to a board through nodes 2023-09-23 12:34:07 +10:00
78dda533e2 feat: save workflow to images db
- Add `workflow` column to `images` table
- Revise image saving and uploading logic to save workflow and metadata to db
- Update UI queries to fetch metadata and workflow from db instead of file
2023-09-22 23:12:13 +10:00
113 changed files with 3057 additions and 2366 deletions

View File

@ -45,13 +45,17 @@ async def upload_image(
if not file.content_type.startswith("image"):
raise HTTPException(status_code=415, detail="Not an image")
contents = await file.read()
metadata: Optional[str] = None
workflow: Optional[str] = None
contents = await file.read()
try:
pil_image = Image.open(io.BytesIO(contents))
if crop_visible:
bbox = pil_image.getbbox()
pil_image = pil_image.crop(bbox)
metadata = pil_image.info.get("invokeai_metadata", None)
workflow = pil_image.info.get("invokeai_workflow", None)
except Exception:
# Error opening the image
raise HTTPException(status_code=415, detail="Failed to read image")
@ -63,6 +67,8 @@ async def upload_image(
image_category=image_category,
session_id=session_id,
board_id=board_id,
metadata=metadata,
workflow=workflow,
is_intermediate=is_intermediate,
)
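For reference, `pil_image.info` in the hunk above behaves like a plain dict of PNG text chunks, so the extraction reduces to (hypothetical helper name; key names from the diff):

```python
def read_embedded(info: dict):
    # Both values default to None when the uploaded image carries no
    # embedded metadata or workflow.
    metadata = info.get("invokeai_metadata", None)
    workflow = info.get("invokeai_workflow", None)
    return metadata, workflow
```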

View File

@ -71,7 +71,12 @@ class FieldDescriptions:
denoised_latents = "Denoised latents tensor"
latents = "Latents tensor"
strength = "Strength of denoising (proportional to steps)"
core_metadata = "Optional core metadata to be written to image"
metadata = "Optional metadata to be saved with the image"
metadata_dict_collection = "Collection of MetadataDicts"
metadata_item_polymorphic = "A single metadata item or collection of metadata items"
metadata_item_label = "Label for this metadata item"
metadata_item_value = "The value for this metadata item (may be any type)"
workflow = "Optional workflow to be saved with the image"
interp_mode = "Interpolation mode"
torch_antialias = "Whether or not to apply antialiasing (bilinear or bicubic only)"
fp32 = "Whether or not to use full float32 precision"
@ -88,6 +93,9 @@ class FieldDescriptions:
num_1 = "The first number"
num_2 = "The second number"
mask = "The mask to use for the operation"
board = "The board to save the image to"
image = "The image to process"
tile_size = "Tile size"
class Input(str, Enum):
@ -172,7 +180,12 @@ class UIType(str, Enum):
Scheduler = "Scheduler"
WorkflowField = "WorkflowField"
IsIntermediate = "IsIntermediate"
MetadataField = "MetadataField"
BoardField = "BoardField"
Any = "Any"
MetadataItem = "MetadataItem"
MetadataItemCollection = "MetadataItemCollection"
MetadataItemPolymorphic = "MetadataItemPolymorphic"
MetadataDict = "MetadataDict"
# endregion
@ -618,23 +631,8 @@ class BaseInvocation(ABC, BaseModel):
is_intermediate: bool = InputField(
default=False, description="Whether or not this is an intermediate invocation.", ui_type=UIType.IsIntermediate
)
workflow: Optional[str] = InputField(
default=None,
description="The workflow to save with the image",
ui_type=UIType.WorkflowField,
)
use_cache: bool = InputField(default=True, description="Whether or not to use the cache")
@validator("workflow", pre=True)
def validate_workflow_is_json(cls, v):
if v is None:
return None
try:
json.loads(v)
except json.decoder.JSONDecodeError:
raise ValueError("Workflow must be valid JSON")
return v
UIConfig: ClassVar[Type[UIConfigBase]]
@ -656,6 +654,8 @@ def invocation(
:param Optional[str] title: Adds a title to the invocation. Use if the auto-generated title isn't quite right. Defaults to None.
:param Optional[list[str]] tags: Adds tags to the invocation. Invocations may be searched for by their tags. Defaults to None.
:param Optional[str] category: Adds a category to the invocation. Used to group the invocations in the UI. Defaults to None.
:param Optional[str] version: Adds a version to the invocation. Must be a valid semver string. Defaults to None.
:param Optional[bool] use_cache: Whether or not to use the invocation cache. Defaults to True. The user may override this in the workflow editor.
"""
def wrapper(cls: Type[GenericBaseInvocation]) -> Type[GenericBaseInvocation]:
@ -737,3 +737,19 @@ def invocation_output(
return cls
return wrapper
class WithWorkflow(BaseModel):
workflow: Optional[str] = InputField(
default=None, description=FieldDescriptions.workflow, ui_type=UIType.WorkflowField
)
@validator("workflow", pre=True)
def validate_workflow_is_json(cls, v):
if v is None:
return None
try:
json.loads(v)
except json.decoder.JSONDecodeError:
raise ValueError("Workflow must be valid JSON")
return v
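The validator above, extracted as a plain function for illustration (identical logic, minus the pydantic wiring):

```python
import json

def validate_workflow_is_json(v):
    # None passes through; any other value must be a valid JSON string.
    if v is None:
        return None
    try:
        json.loads(v)
    except json.decoder.JSONDecodeError:
        raise ValueError("Workflow must be valid JSON")
    return v
```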

View File

@ -25,6 +25,7 @@ from controlnet_aux import (
from controlnet_aux.util import HWC3, ade_palette
from PIL import Image
from pydantic import BaseModel, Field, validator
from invokeai.app.invocations.metadata import WithMetadata
from invokeai.app.invocations.primitives import ImageField, ImageOutput
@ -38,6 +39,7 @@ from .baseinvocation import (
InputField,
InvocationContext,
OutputField,
WithWorkflow,
invocation,
invocation_output,
)
@ -127,7 +129,7 @@ class ControlNetInvocation(BaseInvocation):
@invocation(
"image_processor", title="Base Image Processor", tags=["controlnet"], category="controlnet", version="1.0.0"
)
class ImageProcessorInvocation(BaseInvocation):
class ImageProcessorInvocation(BaseInvocation, WithMetadata, WithWorkflow):
"""Base class for invocations that preprocess images for ControlNet"""
image: ImageField = InputField(description="The image to process")
@ -150,6 +152,7 @@ class ImageProcessorInvocation(BaseInvocation):
session_id=context.graph_execution_state_id,
node_id=self.id,
is_intermediate=self.is_intermediate,
metadata=self.metadata.data if self.metadata else None,
workflow=self.workflow,
)
@ -559,3 +562,33 @@ class SamDetectorReproducibleColors(SamDetector):
img[:, :] = ann_color
final_img.paste(Image.fromarray(img, mode="RGB"), (0, 0), Image.fromarray(np.uint8(m * 255)))
return np.array(final_img, dtype=np.uint8)
@invocation(
"color_map_image_processor",
title="Color Map Processor",
tags=["controlnet"],
category="controlnet",
version="1.0.0",
)
class ColorMapImageProcessorInvocation(ImageProcessorInvocation):
"""Generates a color map from the provided image"""
color_map_tile_size: int = InputField(default=64, ge=0, description=FieldDescriptions.tile_size)
def run_processor(self, image: Image.Image):
image = image.convert("RGB")
image = np.array(image, dtype=np.uint8)
height, width = image.shape[:2]
width_tile_size = min(self.color_map_tile_size, width)
height_tile_size = min(self.color_map_tile_size, height)
color_map = cv2.resize(
image,
(width // width_tile_size, height // height_tile_size),
interpolation=cv2.INTER_CUBIC,
)
color_map = cv2.resize(color_map, (width, height), interpolation=cv2.INTER_NEAREST)
color_map = Image.fromarray(color_map)
return color_map
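The resize targets in `run_processor` above, including the tile-size clamping from 021f77e35e, reduce to this arithmetic (hypothetical helper name):

```python
def color_map_grid(width: int, height: int, tile_size: int) -> tuple:
    # Clamp the tile size to the image dimensions (the tile size > image size
    # fix), then shrink to one pixel per tile; the result is scaled back up
    # with nearest-neighbor interpolation to produce the mosaic.
    w_tile = min(tile_size, width)
    h_tile = min(tile_size, height)
    return (width // w_tile, height // h_tile)
```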

View File

@ -7,13 +7,21 @@ import cv2
import numpy
from PIL import Image, ImageChops, ImageFilter, ImageOps
from invokeai.app.invocations.metadata import CoreMetadata
from invokeai.app.invocations.primitives import ColorField, ImageField, ImageOutput
from invokeai.app.invocations.metadata import WithMetadata
from invokeai.app.invocations.primitives import BoardField, ColorField, ImageField, ImageOutput
from invokeai.backend.image_util.invisible_watermark import InvisibleWatermark
from invokeai.backend.image_util.safety_checker import SafetyChecker
from ..models.image import ImageCategory, ResourceOrigin
from .baseinvocation import BaseInvocation, FieldDescriptions, InputField, InvocationContext, invocation
from .baseinvocation import (
BaseInvocation,
FieldDescriptions,
Input,
InputField,
InvocationContext,
WithWorkflow,
invocation,
)
@invocation("show_image", title="Show Image", tags=["image"], category="image", version="1.0.0")
@ -37,7 +45,7 @@ class ShowImageInvocation(BaseInvocation):
@invocation("blank_image", title="Blank Image", tags=["image"], category="image", version="1.0.0")
class BlankImageInvocation(BaseInvocation):
class BlankImageInvocation(BaseInvocation, WithMetadata, WithWorkflow):
"""Creates a blank image and forwards it to the pipeline"""
width: int = InputField(default=512, description="The width of the image")
@ -55,6 +63,7 @@ class BlankImageInvocation(BaseInvocation):
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
metadata=self.metadata.data if self.metadata else None,
workflow=self.workflow,
)
@ -66,7 +75,7 @@ class BlankImageInvocation(BaseInvocation):
@invocation("img_crop", title="Crop Image", tags=["image", "crop"], category="image", version="1.0.0")
class ImageCropInvocation(BaseInvocation):
class ImageCropInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Crops an image to a specified box. The box can be outside of the image."""
image: ImageField = InputField(description="The image to crop")
@ -88,6 +97,7 @@ class ImageCropInvocation(BaseInvocation):
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
metadata=self.metadata.data if self.metadata else None,
workflow=self.workflow,
)
@ -99,7 +109,7 @@ class ImageCropInvocation(BaseInvocation):
@invocation("img_paste", title="Paste Image", tags=["image", "paste"], category="image", version="1.0.1")
class ImagePasteInvocation(BaseInvocation):
class ImagePasteInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Pastes an image into another image."""
base_image: ImageField = InputField(description="The base image")
@ -141,6 +151,7 @@ class ImagePasteInvocation(BaseInvocation):
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
metadata=self.metadata.data if self.metadata else None,
workflow=self.workflow,
)
@ -152,7 +163,7 @@ class ImagePasteInvocation(BaseInvocation):
@invocation("tomask", title="Mask from Alpha", tags=["image", "mask"], category="image", version="1.0.0")
class MaskFromAlphaInvocation(BaseInvocation):
class MaskFromAlphaInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Extracts the alpha channel of an image as a mask."""
image: ImageField = InputField(description="The image to create the mask from")
@ -172,6 +183,7 @@ class MaskFromAlphaInvocation(BaseInvocation):
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
metadata=self.metadata.data if self.metadata else None,
workflow=self.workflow,
)
@ -183,7 +195,7 @@ class MaskFromAlphaInvocation(BaseInvocation):
@invocation("img_mul", title="Multiply Images", tags=["image", "multiply"], category="image", version="1.0.0")
class ImageMultiplyInvocation(BaseInvocation):
class ImageMultiplyInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Multiplies two images together using `PIL.ImageChops.multiply()`."""
image1: ImageField = InputField(description="The first image to multiply")
@ -202,6 +214,7 @@ class ImageMultiplyInvocation(BaseInvocation):
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
metadata=self.metadata.data if self.metadata else None,
workflow=self.workflow,
)
@ -216,7 +229,7 @@ IMAGE_CHANNELS = Literal["A", "R", "G", "B"]
@invocation("img_chan", title="Extract Image Channel", tags=["image", "channel"], category="image", version="1.0.0")
class ImageChannelInvocation(BaseInvocation):
class ImageChannelInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Gets a channel from an image."""
image: ImageField = InputField(description="The image to get the channel from")
@ -234,6 +247,7 @@ class ImageChannelInvocation(BaseInvocation):
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
metadata=self.metadata.data if self.metadata else None,
workflow=self.workflow,
)
@ -248,7 +262,7 @@ IMAGE_MODES = Literal["L", "RGB", "RGBA", "CMYK", "YCbCr", "LAB", "HSV", "I", "F
@invocation("img_conv", title="Convert Image Mode", tags=["image", "convert"], category="image", version="1.0.0")
class ImageConvertInvocation(BaseInvocation):
class ImageConvertInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Converts an image to a different mode."""
image: ImageField = InputField(description="The image to convert")
@ -266,6 +280,7 @@ class ImageConvertInvocation(BaseInvocation):
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
metadata=self.metadata.data if self.metadata else None,
workflow=self.workflow,
)
@ -277,7 +292,7 @@ class ImageConvertInvocation(BaseInvocation):
@invocation("img_blur", title="Blur Image", tags=["image", "blur"], category="image", version="1.0.0")
class ImageBlurInvocation(BaseInvocation):
class ImageBlurInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Blurs an image"""
image: ImageField = InputField(description="The image to blur")
@ -300,6 +315,7 @@ class ImageBlurInvocation(BaseInvocation):
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
metadata=self.metadata.data if self.metadata else None,
workflow=self.workflow,
)
@ -331,16 +347,13 @@ PIL_RESAMPLING_MAP = {
@invocation("img_resize", title="Resize Image", tags=["image", "resize"], category="image", version="1.0.0")
class ImageResizeInvocation(BaseInvocation):
class ImageResizeInvocation(BaseInvocation, WithMetadata, WithWorkflow):
"""Resizes an image to specific dimensions"""
image: ImageField = InputField(description="The image to resize")
width: int = InputField(default=512, gt=0, description="The width to resize to (px)")
height: int = InputField(default=512, gt=0, description="The height to resize to (px)")
resample_mode: PIL_RESAMPLING_MODES = InputField(default="bicubic", description="The resampling mode")
metadata: Optional[CoreMetadata] = InputField(
default=None, description=FieldDescriptions.core_metadata, ui_hidden=True
)
def invoke(self, context: InvocationContext) -> ImageOutput:
image = context.services.images.get_pil_image(self.image.image_name)
@ -359,7 +372,7 @@ class ImageResizeInvocation(BaseInvocation):
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
metadata=self.metadata.dict() if self.metadata else None,
metadata=self.metadata.data if self.metadata else None,
workflow=self.workflow,
)
@ -371,7 +384,7 @@ class ImageResizeInvocation(BaseInvocation):
@invocation("img_scale", title="Scale Image", tags=["image", "scale"], category="image", version="1.0.0")
class ImageScaleInvocation(BaseInvocation):
class ImageScaleInvocation(BaseInvocation, WithMetadata, WithWorkflow):
"""Scales an image by a factor"""
image: ImageField = InputField(description="The image to scale")
@ -401,6 +414,7 @@ class ImageScaleInvocation(BaseInvocation):
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
metadata=self.metadata.data if self.metadata else None,
workflow=self.workflow,
)
@ -412,7 +426,7 @@ class ImageScaleInvocation(BaseInvocation):
@invocation("img_lerp", title="Lerp Image", tags=["image", "lerp"], category="image", version="1.0.0")
class ImageLerpInvocation(BaseInvocation):
class ImageLerpInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Linear interpolation of all pixels of an image"""
image: ImageField = InputField(description="The image to lerp")
@ -434,6 +448,7 @@ class ImageLerpInvocation(BaseInvocation):
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
metadata=self.metadata.data if self.metadata else None,
workflow=self.workflow,
)
@ -445,7 +460,7 @@ class ImageLerpInvocation(BaseInvocation):
@invocation("img_ilerp", title="Inverse Lerp Image", tags=["image", "ilerp"], category="image", version="1.0.0")
class ImageInverseLerpInvocation(BaseInvocation):
class ImageInverseLerpInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Inverse linear interpolation of all pixels of an image"""
image: ImageField = InputField(description="The image to lerp")
@ -467,6 +482,7 @@ class ImageInverseLerpInvocation(BaseInvocation):
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
metadata=self.metadata.data if self.metadata else None,
workflow=self.workflow,
)
@ -478,13 +494,10 @@ class ImageInverseLerpInvocation(BaseInvocation):
@invocation("img_nsfw", title="Blur NSFW Image", tags=["image", "nsfw"], category="image", version="1.0.0")
class ImageNSFWBlurInvocation(BaseInvocation):
class ImageNSFWBlurInvocation(BaseInvocation, WithMetadata, WithWorkflow):
"""Add blur to NSFW-flagged images"""
image: ImageField = InputField(description="The image to check")
metadata: Optional[CoreMetadata] = InputField(
default=None, description=FieldDescriptions.core_metadata, ui_hidden=True
)
def invoke(self, context: InvocationContext) -> ImageOutput:
image = context.services.images.get_pil_image(self.image.image_name)
@ -505,7 +518,7 @@ class ImageNSFWBlurInvocation(BaseInvocation):
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
metadata=self.metadata.dict() if self.metadata else None,
metadata=self.metadata.data if self.metadata else None,
workflow=self.workflow,
)
@ -525,14 +538,11 @@ class ImageNSFWBlurInvocation(BaseInvocation):
@invocation(
"img_watermark", title="Add Invisible Watermark", tags=["image", "watermark"], category="image", version="1.0.0"
)
class ImageWatermarkInvocation(BaseInvocation):
class ImageWatermarkInvocation(BaseInvocation, WithMetadata, WithWorkflow):
"""Add an invisible watermark to an image"""
image: ImageField = InputField(description="The image to check")
text: str = InputField(default="InvokeAI", description="Watermark text")
metadata: Optional[CoreMetadata] = InputField(
default=None, description=FieldDescriptions.core_metadata, ui_hidden=True
)
def invoke(self, context: InvocationContext) -> ImageOutput:
image = context.services.images.get_pil_image(self.image.image_name)
@ -544,7 +554,7 @@ class ImageWatermarkInvocation(BaseInvocation):
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
metadata=self.metadata.dict() if self.metadata else None,
metadata=self.metadata.data if self.metadata else None,
workflow=self.workflow,
)
@ -556,7 +566,7 @@ class ImageWatermarkInvocation(BaseInvocation):
@invocation("mask_edge", title="Mask Edge", tags=["image", "mask", "inpaint"], category="image", version="1.0.0")
class MaskEdgeInvocation(BaseInvocation):
class MaskEdgeInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Applies an edge mask to an image"""
image: ImageField = InputField(description="The image to apply the mask to")
@ -590,6 +600,7 @@ class MaskEdgeInvocation(BaseInvocation):
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
metadata=self.metadata.data if self.metadata else None,
workflow=self.workflow,
)
@ -603,7 +614,7 @@ class MaskEdgeInvocation(BaseInvocation):
@invocation(
"mask_combine", title="Combine Masks", tags=["image", "mask", "multiply"], category="image", version="1.0.0"
)
class MaskCombineInvocation(BaseInvocation):
class MaskCombineInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Combine two masks together by multiplying them using `PIL.ImageChops.multiply()`."""
mask1: ImageField = InputField(description="The first mask to combine")
@ -622,6 +633,7 @@ class MaskCombineInvocation(BaseInvocation):
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
metadata=self.metadata.data if self.metadata else None,
workflow=self.workflow,
)
@ -633,7 +645,7 @@ class MaskCombineInvocation(BaseInvocation):
@invocation("color_correct", title="Color Correct", tags=["image", "color"], category="image", version="1.0.0")
class ColorCorrectInvocation(BaseInvocation):
class ColorCorrectInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""
Shifts the colors of a target image to match the reference image, optionally
using a mask to only color-correct certain regions of the target image.
@ -732,6 +744,7 @@ class ColorCorrectInvocation(BaseInvocation):
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
metadata=self.metadata.data if self.metadata else None,
workflow=self.workflow,
)
@ -743,7 +756,7 @@ class ColorCorrectInvocation(BaseInvocation):
@invocation("img_hue_adjust", title="Adjust Image Hue", tags=["image", "hue"], category="image", version="1.0.0")
class ImageHueAdjustmentInvocation(BaseInvocation):
class ImageHueAdjustmentInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Adjusts the Hue of an image."""
image: ImageField = InputField(description="The image to adjust")
@ -771,6 +784,7 @@ class ImageHueAdjustmentInvocation(BaseInvocation):
node_id=self.id,
is_intermediate=self.is_intermediate,
session_id=context.graph_execution_state_id,
metadata=self.metadata.data if self.metadata else None,
workflow=self.workflow,
)
@ -846,7 +860,7 @@ CHANNEL_FORMATS = {
category="image",
version="1.0.0",
)
class ImageChannelOffsetInvocation(BaseInvocation):
class ImageChannelOffsetInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Add or subtract a value from a specific color channel of an image."""
image: ImageField = InputField(description="The image to adjust")
@ -880,6 +894,7 @@ class ImageChannelOffsetInvocation(BaseInvocation):
node_id=self.id,
is_intermediate=self.is_intermediate,
session_id=context.graph_execution_state_id,
metadata=self.metadata.data if self.metadata else None,
workflow=self.workflow,
)
@ -916,7 +931,7 @@ class ImageChannelOffsetInvocation(BaseInvocation):
category="image",
version="1.0.0",
)
class ImageChannelMultiplyInvocation(BaseInvocation):
class ImageChannelMultiplyInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Scale a specific color channel of an image."""
image: ImageField = InputField(description="The image to adjust")
@ -956,6 +971,7 @@ class ImageChannelMultiplyInvocation(BaseInvocation):
is_intermediate=self.is_intermediate,
session_id=context.graph_execution_state_id,
workflow=self.workflow,
metadata=self.metadata.data if self.metadata else None,
)
return ImageOutput(
@ -972,18 +988,14 @@ class ImageChannelMultiplyInvocation(BaseInvocation):
title="Save Image",
tags=["primitives", "image"],
category="primitives",
version="1.0.0",
version="1.0.1",
use_cache=False,
)
class SaveImageInvocation(BaseInvocation):
class SaveImageInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Saves an image. Unlike an image primitive, this invocation stores a copy of the image."""
image: ImageField = InputField(description="The image to load")
metadata: CoreMetadata = InputField(
default=None,
description=FieldDescriptions.core_metadata,
ui_hidden=True,
)
image: ImageField = InputField(description=FieldDescriptions.image)
board: Optional[BoardField] = InputField(default=None, description=FieldDescriptions.board, input=Input.Direct)
def invoke(self, context: InvocationContext) -> ImageOutput:
image = context.services.images.get_pil_image(self.image.image_name)
@ -992,10 +1004,11 @@ class SaveImageInvocation(BaseInvocation):
image=image,
image_origin=ResourceOrigin.INTERNAL,
image_category=ImageCategory.GENERAL,
board_id=self.board.board_id if self.board else None,
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
metadata=self.metadata.dict() if self.metadata else None,
metadata=self.metadata.data if self.metadata else None,
workflow=self.workflow,
)

View File

@ -5,6 +5,7 @@ from typing import Literal, Optional, get_args
import numpy as np
from PIL import Image, ImageOps
from invokeai.app.invocations.metadata import WithMetadata
from invokeai.app.invocations.primitives import ColorField, ImageField, ImageOutput
from invokeai.app.util.misc import SEED_MAX, get_random_seed
@ -13,7 +14,7 @@ from invokeai.backend.image_util.lama import LaMA
from invokeai.backend.image_util.patchmatch import PatchMatch
from ..models.image import ImageCategory, ResourceOrigin
from .baseinvocation import BaseInvocation, InputField, InvocationContext, invocation
from .baseinvocation import BaseInvocation, InputField, InvocationContext, WithWorkflow, invocation
from .image import PIL_RESAMPLING_MAP, PIL_RESAMPLING_MODES
@ -119,7 +120,7 @@ def tile_fill_missing(im: Image.Image, tile_size: int = 16, seed: Optional[int]
@invocation("infill_rgba", title="Solid Color Infill", tags=["image", "inpaint"], category="inpaint", version="1.0.0")
class InfillColorInvocation(BaseInvocation):
class InfillColorInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Infills transparent areas of an image with a solid color"""
image: ImageField = InputField(description="The image to infill")
@ -143,6 +144,7 @@ class InfillColorInvocation(BaseInvocation):
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
metadata=self.metadata.data if self.metadata else None,
workflow=self.workflow,
)
@ -154,7 +156,7 @@ class InfillColorInvocation(BaseInvocation):
@invocation("infill_tile", title="Tile Infill", tags=["image", "inpaint"], category="inpaint", version="1.0.0")
class InfillTileInvocation(BaseInvocation):
class InfillTileInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Infills transparent areas of an image with tiles of the image"""
image: ImageField = InputField(description="The image to infill")
@ -179,6 +181,7 @@ class InfillTileInvocation(BaseInvocation):
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
metadata=self.metadata.data if self.metadata else None,
workflow=self.workflow,
)
@ -192,7 +195,7 @@ class InfillTileInvocation(BaseInvocation):
@invocation(
"infill_patchmatch", title="PatchMatch Infill", tags=["image", "inpaint"], category="inpaint", version="1.0.0"
)
class InfillPatchMatchInvocation(BaseInvocation):
class InfillPatchMatchInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Infills transparent areas of an image using the PatchMatch algorithm"""
image: ImageField = InputField(description="The image to infill")
@ -232,6 +235,7 @@ class InfillPatchMatchInvocation(BaseInvocation):
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
metadata=self.metadata.data if self.metadata else None,
workflow=self.workflow,
)
@ -243,7 +247,7 @@ class InfillPatchMatchInvocation(BaseInvocation):
@invocation("infill_lama", title="LaMa Infill", tags=["image", "inpaint"], category="inpaint", version="1.0.0")
class LaMaInfillInvocation(BaseInvocation):
class LaMaInfillInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Infills transparent areas of an image using the LaMa model"""
image: ImageField = InputField(description="The image to infill")
@ -260,6 +264,8 @@ class LaMaInfillInvocation(BaseInvocation):
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
metadata=self.metadata.data if self.metadata else None,
workflow=self.workflow,
)
return ImageOutput(
@ -270,7 +276,7 @@ class LaMaInfillInvocation(BaseInvocation):
@invocation("infill_cv2", title="CV2 Infill", tags=["image", "inpaint"], category="inpaint")
class CV2InfillInvocation(BaseInvocation):
class CV2InfillInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Infills transparent areas of an image using OpenCV Inpainting"""
image: ImageField = InputField(description="The image to infill")
@ -287,6 +293,8 @@ class CV2InfillInvocation(BaseInvocation):
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
metadata=self.metadata.data if self.metadata else None,
workflow=self.workflow,
)
return ImageOutput(
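The `WithMetadata`/`WithWorkflow` mixin API used throughout this file can be illustrated outside InvokeAI with a minimal, self-contained sketch. Plain dataclasses stand in for the pydantic models here, and the `id` default is hypothetical; only the class names and inheritance order mirror the diff:

```python
from dataclasses import dataclass
from typing import Optional

# Simplified stand-ins for the real models: an invocation opts in to the
# `metadata` and `workflow` fields by inheriting the mixins alongside
# BaseInvocation, so the fields appear on the node without redeclaration.

@dataclass
class BaseInvocation:
    id: str = "node-1"  # hypothetical default; real ids are assigned by the graph

@dataclass
class WithMetadata:
    metadata: Optional[dict] = None

@dataclass
class WithWorkflow:
    workflow: Optional[str] = None

@dataclass
class InfillColorInvocation(BaseInvocation, WithWorkflow, WithMetadata):
    """Infills transparent areas of an image with a solid color"""

node = InfillColorInvocation(metadata={"seed": 1}, workflow='{"nodes": []}')
print(node.metadata)  # {'seed': 1}
print(node.workflow)  # {"nodes": []}
```

With all fields defaulted, dataclass inheritance collects them in reverse MRO order, so the mixin order in the class statement is mostly cosmetic, matching how the real pydantic models behave.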


@ -23,7 +23,7 @@ from pydantic import validator
from torchvision.transforms.functional import resize as tv_resize
from invokeai.app.invocations.ip_adapter import IPAdapterField
from invokeai.app.invocations.metadata import CoreMetadata
from invokeai.app.invocations.metadata import WithMetadata
from invokeai.app.invocations.primitives import (
DenoiseMaskField,
DenoiseMaskOutput,
@ -62,6 +62,7 @@ from .baseinvocation import (
InvocationContext,
OutputField,
UIType,
WithWorkflow,
invocation,
invocation_output,
)
@ -621,7 +622,7 @@ class DenoiseLatentsInvocation(BaseInvocation):
@invocation(
"l2i", title="Latents to Image", tags=["latents", "image", "vae", "l2i"], category="latents", version="1.0.0"
)
class LatentsToImageInvocation(BaseInvocation):
class LatentsToImageInvocation(BaseInvocation, WithMetadata, WithWorkflow):
"""Generates an image from latents."""
latents: LatentsField = InputField(
@ -634,11 +635,6 @@ class LatentsToImageInvocation(BaseInvocation):
)
tiled: bool = InputField(default=False, description=FieldDescriptions.tiled)
fp32: bool = InputField(default=DEFAULT_PRECISION == "float32", description=FieldDescriptions.fp32)
metadata: CoreMetadata = InputField(
default=None,
description=FieldDescriptions.core_metadata,
ui_hidden=True,
)
@torch.no_grad()
def invoke(self, context: InvocationContext) -> ImageOutput:
@ -707,7 +703,7 @@ class LatentsToImageInvocation(BaseInvocation):
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
metadata=self.metadata.dict() if self.metadata else None,
metadata=self.metadata.data if self.metadata else None,
workflow=self.workflow,
)


@ -1,18 +1,19 @@
from typing import Optional
from typing import Any, Optional, Union
from pydantic import Field
from pydantic import BaseModel, Field
from invokeai.app.invocations.baseinvocation import (
BaseInvocation,
BaseInvocationOutput,
FieldDescriptions,
InputField,
InvocationContext,
OutputField,
UIType,
invocation,
invocation_output,
)
from invokeai.app.invocations.controlnet_image_processors import ControlField
from invokeai.app.invocations.model import LoRAModelField, MainModelField, VAEModelField
from invokeai.app.invocations.model import LoRAModelField
from invokeai.app.util.model_exclude_null import BaseModelExcludeNull
from ...version import __version__
@ -25,159 +26,78 @@ class LoRAMetadataField(BaseModelExcludeNull):
weight: float = Field(description="The weight of the LoRA model")
class CoreMetadata(BaseModelExcludeNull):
"""Core generation metadata for an image generated in InvokeAI."""
app_version: str = Field(default=__version__, description="The version of InvokeAI used to generate this image")
generation_mode: str = Field(
description="The generation mode that output this image",
)
created_by: Optional[str] = Field(description="The name of the creator of the image")
positive_prompt: str = Field(description="The positive prompt parameter")
negative_prompt: str = Field(description="The negative prompt parameter")
width: int = Field(description="The width parameter")
height: int = Field(description="The height parameter")
seed: int = Field(description="The seed used for noise generation")
rand_device: str = Field(description="The device used for random number generation")
cfg_scale: float = Field(description="The classifier-free guidance scale parameter")
steps: int = Field(description="The number of steps used for inference")
scheduler: str = Field(description="The scheduler used for inference")
clip_skip: Optional[int] = Field(
default=None,
description="The number of skipped CLIP layers",
)
model: MainModelField = Field(description="The main model used for inference")
controlnets: list[ControlField] = Field(description="The ControlNets used for inference")
loras: list[LoRAMetadataField] = Field(description="The LoRAs used for inference")
vae: Optional[VAEModelField] = Field(
default=None,
description="The VAE used for decoding, if the main model's default was not used",
)
# Latents-to-Latents
strength: Optional[float] = Field(
default=None,
description="The strength used for latents-to-latents",
)
init_image: Optional[str] = Field(default=None, description="The name of the initial image")
# SDXL
positive_style_prompt: Optional[str] = Field(default=None, description="The positive style prompt parameter")
negative_style_prompt: Optional[str] = Field(default=None, description="The negative style prompt parameter")
# SDXL Refiner
refiner_model: Optional[MainModelField] = Field(default=None, description="The SDXL Refiner model used")
refiner_cfg_scale: Optional[float] = Field(
default=None,
description="The classifier-free guidance scale parameter used for the refiner",
)
refiner_steps: Optional[int] = Field(default=None, description="The number of steps used for the refiner")
refiner_scheduler: Optional[str] = Field(default=None, description="The scheduler used for the refiner")
refiner_positive_aesthetic_score: Optional[float] = Field(
default=None, description="The aesthetic score used for the refiner"
)
refiner_negative_aesthetic_score: Optional[float] = Field(
default=None, description="The aesthetic score used for the refiner"
)
refiner_start: Optional[float] = Field(default=None, description="The start value used for refiner denoising")
class ImageMetadata(BaseModelExcludeNull):
"""An image's generation metadata"""
metadata: Optional[dict] = Field(
default=None,
description="The image's core metadata, if it was created in the Linear or Canvas UI",
)
graph: Optional[dict] = Field(default=None, description="The graph that created the image")
metadata: Optional[dict] = Field(default=None, description="The metadata associated with the image")
workflow: Optional[dict] = Field(default=None, description="The workflow associated with the image")
@invocation_output("metadata_accumulator_output")
class MetadataAccumulatorOutput(BaseInvocationOutput):
"""The output of the MetadataAccumulator node"""
metadata: CoreMetadata = OutputField(description="The core metadata for the image")
class MetadataItem(BaseModel):
label: str = Field(description=FieldDescriptions.metadata_item_label)
value: Any = Field(description=FieldDescriptions.metadata_item_value)
@invocation(
"metadata_accumulator", title="Metadata Accumulator", tags=["metadata"], category="metadata", version="1.0.0"
)
class MetadataAccumulatorInvocation(BaseInvocation):
"""Outputs a Core Metadata Object"""
@invocation_output("metadata_item_output")
class MetadataItemOutput(BaseInvocationOutput):
"""Metadata Item Output"""
generation_mode: str = InputField(
description="The generation mode that output this image",
)
positive_prompt: str = InputField(description="The positive prompt parameter")
negative_prompt: str = InputField(description="The negative prompt parameter")
width: int = InputField(description="The width parameter")
height: int = InputField(description="The height parameter")
seed: int = InputField(description="The seed used for noise generation")
rand_device: str = InputField(description="The device used for random number generation")
cfg_scale: float = InputField(description="The classifier-free guidance scale parameter")
steps: int = InputField(description="The number of steps used for inference")
scheduler: str = InputField(description="The scheduler used for inference")
clip_skip: Optional[int] = Field(
default=None,
description="The number of skipped CLIP layers",
)
model: MainModelField = InputField(description="The main model used for inference")
controlnets: list[ControlField] = InputField(description="The ControlNets used for inference")
loras: list[LoRAMetadataField] = InputField(description="The LoRAs used for inference")
strength: Optional[float] = InputField(
default=None,
description="The strength used for latents-to-latents",
)
init_image: Optional[str] = InputField(
default=None,
description="The name of the initial image",
)
vae: Optional[VAEModelField] = InputField(
default=None,
description="The VAE used for decoding, if the main model's default was not used",
)
item: MetadataItem = OutputField(description="Metadata Item")
# SDXL
positive_style_prompt: Optional[str] = InputField(
default=None,
description="The positive style prompt parameter",
)
negative_style_prompt: Optional[str] = InputField(
default=None,
description="The negative style prompt parameter",
)
# SDXL Refiner
refiner_model: Optional[MainModelField] = InputField(
default=None,
description="The SDXL Refiner model used",
)
refiner_cfg_scale: Optional[float] = InputField(
default=None,
description="The classifier-free guidance scale parameter used for the refiner",
)
refiner_steps: Optional[int] = InputField(
default=None,
description="The number of steps used for the refiner",
)
refiner_scheduler: Optional[str] = InputField(
default=None,
description="The scheduler used for the refiner",
)
refiner_positive_aesthetic_score: Optional[float] = InputField(
default=None,
description="The aesthetic score used for the refiner",
)
refiner_negative_aesthetic_score: Optional[float] = InputField(
default=None,
description="The aesthetic score used for the refiner",
)
refiner_start: Optional[float] = InputField(
default=None,
description="The start value used for refiner denoising",
)
@invocation("metadata_item", title="Metadata Item", tags=["metadata"], category="metadata", version="1.0.0")
class MetadataItemInvocation(BaseInvocation):
"""Used to create an arbitrary metadata item. Provide "label" and make a connection to "value" to store that data as the value."""
def invoke(self, context: InvocationContext) -> MetadataAccumulatorOutput:
"""Collects and outputs a CoreMetadata object"""
label: str = InputField(description=FieldDescriptions.metadata_item_label)
value: Any = InputField(description=FieldDescriptions.metadata_item_value, ui_type=UIType.Any)
return MetadataAccumulatorOutput(metadata=CoreMetadata(**self.dict()))
def invoke(self, context: InvocationContext) -> MetadataItemOutput:
return MetadataItemOutput(item=MetadataItem(label=self.label, value=self.value))
class MetadataDict(BaseModel):
"""Accepts a single MetadataItem or collection of MetadataItems (use a Collect node)."""
data: dict[str, Any] = Field(description="Metadata dict")
@invocation_output("metadata_dict")
class MetadataDictOutput(BaseInvocationOutput):
metadata_dict: MetadataDict = OutputField(description="Metadata Dict")
@invocation("metadata", title="Metadata", tags=["metadata"], category="metadata", version="1.0.0")
class MetadataInvocation(BaseInvocation):
"""Takes a MetadataItem or collection of MetadataItems and outputs a MetadataDict."""
items: Union[list[MetadataItem], MetadataItem] = InputField(description=FieldDescriptions.metadata_item_polymorphic)
def invoke(self, context: InvocationContext) -> MetadataDictOutput:
if isinstance(self.items, MetadataItem):
# single metadata item
data = {self.items.label: self.items.value}
else:
# collection of metadata items
data = {item.label: item.value for item in self.items}
data.update({"app_version": __version__})
return MetadataDictOutput(metadata_dict=(MetadataDict(data=data)))
@invocation("merge_metadata_dict", title="Metadata Merge", tags=["metadata"], category="metadata", version="1.0.0")
class MergeMetadataDictInvocation(BaseInvocation):
"""Merged a collection of MetadataDict into a single MetadataDict."""
collection: list[MetadataDict] = InputField(description=FieldDescriptions.metadata_dict_collection)
def invoke(self, context: InvocationContext) -> MetadataDictOutput:
data = {}
for item in self.collection:
data.update(item.data)
return MetadataDictOutput(metadata_dict=MetadataDict(data=data))
class WithMetadata(BaseModel):
metadata: Optional[MetadataDict] = InputField(default=None, description=FieldDescriptions.metadata)
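The composition of the new metadata nodes can be sketched with plain functions standing in for the invocations. The `"app_version"` stamp mirrors `MetadataInvocation`, and the merge semantics (later dicts win on key collisions) mirror `MergeMetadataDictInvocation`; the version string itself is a placeholder:

```python
from typing import Any, Union

# metadata_item ~ MetadataItemInvocation: produces a (label, value) pair.
# metadata_dict ~ MetadataInvocation: folds one item or a collection into a dict.
# merge_metadata ~ MergeMetadataDictInvocation: merges several dicts into one.

def metadata_item(label: str, value: Any) -> tuple[str, Any]:
    return (label, value)

def metadata_dict(items: Union[tuple, list]) -> dict[str, Any]:
    if isinstance(items, tuple):      # single MetadataItem
        items = [items]
    data = {label: value for label, value in items}
    data["app_version"] = "3.x.x"     # placeholder for __version__
    return data

def merge_metadata(collection: list[dict[str, Any]]) -> dict[str, Any]:
    data: dict[str, Any] = {}
    for item in collection:
        data.update(item)             # later dicts win on key collisions
    return data

a = metadata_dict([metadata_item("seed", 123), metadata_item("steps", 30)])
b = metadata_dict(metadata_item("scheduler", "euler"))
merged = merge_metadata([a, b])
print(merged["seed"], merged["scheduler"])  # 123 euler
```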


@ -12,7 +12,7 @@ from diffusers.image_processor import VaeImageProcessor
from pydantic import BaseModel, Field, validator
from tqdm import tqdm
from invokeai.app.invocations.metadata import CoreMetadata
from invokeai.app.invocations.metadata import WithMetadata
from invokeai.app.invocations.primitives import ConditioningField, ConditioningOutput, ImageField, ImageOutput
from invokeai.app.util.step_callback import stable_diffusion_step_callback
from invokeai.backend import BaseModelType, ModelType, SubModelType
@ -28,6 +28,7 @@ from .baseinvocation import (
Input,
InputField,
InvocationContext,
WithWorkflow,
OutputField,
UIComponent,
UIType,
@ -321,7 +322,7 @@ class ONNXTextToLatentsInvocation(BaseInvocation):
category="image",
version="1.0.0",
)
class ONNXLatentsToImageInvocation(BaseInvocation):
class ONNXLatentsToImageInvocation(BaseInvocation, WithMetadata, WithWorkflow):
"""Generates an image from latents."""
latents: LatentsField = InputField(
@ -332,11 +333,6 @@ class ONNXLatentsToImageInvocation(BaseInvocation):
description=FieldDescriptions.vae,
input=Input.Connection,
)
metadata: Optional[CoreMetadata] = InputField(
default=None,
description=FieldDescriptions.core_metadata,
ui_hidden=True,
)
# tiled: bool = InputField(default=False, description="Decode latents by overlapping tiles (less memory consumption)")
def invoke(self, context: InvocationContext) -> ImageOutput:
@ -375,7 +371,7 @@ class ONNXLatentsToImageInvocation(BaseInvocation):
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
metadata=self.metadata.dict() if self.metadata else None,
metadata=self.metadata.data if self.metadata else None,
workflow=self.workflow,
)


@ -226,6 +226,12 @@ class ImageField(BaseModel):
image_name: str = Field(description="The name of the image")
class BoardField(BaseModel):
"""A board primitive field"""
board_id: str = Field(description="The id of the board")
@invocation_output("image_output")
class ImageOutput(BaseInvocationOutput):
"""Base class for nodes that output a single image"""
@ -245,7 +251,9 @@ class ImageCollectionOutput(BaseInvocationOutput):
@invocation("image", title="Image Primitive", tags=["primitives", "image"], category="primitives", version="1.0.0")
class ImageInvocation(BaseInvocation):
class ImageInvocation(
BaseInvocation,
):
"""An image primitive value"""
image: ImageField = InputField(description="The image to load")


@ -7,11 +7,12 @@ import numpy as np
from basicsr.archs.rrdbnet_arch import RRDBNet
from PIL import Image
from realesrgan import RealESRGANer
from invokeai.app.invocations.metadata import WithMetadata
from invokeai.app.invocations.primitives import ImageField, ImageOutput
from invokeai.app.models.image import ImageCategory, ResourceOrigin
from .baseinvocation import BaseInvocation, InputField, InvocationContext, invocation
from .baseinvocation import BaseInvocation, InputField, InvocationContext, WithWorkflow, invocation
# TODO: Populate this from disk?
# TODO: Use model manager to load?
@ -24,7 +25,7 @@ ESRGAN_MODELS = Literal[
@invocation("esrgan", title="Upscale (RealESRGAN)", tags=["esrgan", "upscale"], category="esrgan", version="1.0.0")
class ESRGANInvocation(BaseInvocation):
class ESRGANInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Upscales an image using RealESRGAN."""
image: ImageField = InputField(description="The input image")
@ -106,6 +107,7 @@ class ESRGANInvocation(BaseInvocation):
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
metadata=self.metadata.data if self.metadata else None,
workflow=self.workflow,
)


@ -421,6 +421,14 @@ class Graph(BaseModel):
return True
def _is_destination_field_Any(self, edge: Edge) -> bool:
"""Checks if the destination field for an edge is of type typing.Any"""
return get_input_field(self.get_node(edge.destination.node_id), edge.destination.field) == Any
def _is_destination_field_list_of_Any(self, edge: Edge) -> bool:
"""Checks if the destination field for an edge is of type typing.Any"""
return get_input_field(self.get_node(edge.destination.node_id), edge.destination.field) == list[Any]
def _validate_edge(self, edge: Edge):
"""Validates that a new edge doesn't create a cycle in the graph"""
@ -473,8 +481,19 @@ class Graph(BaseModel):
f"Collector output type does not match collector input type: {edge.source.node_id}.{edge.source.field} to {edge.destination.node_id}.{edge.destination.field}"
)
# Validate if collector output type matches input type (if this edge results in both being set)
if isinstance(from_node, CollectInvocation) and edge.source.field == "collection":
# Validate that we are not connecting collector to iterator (currently unsupported)
if isinstance(from_node, CollectInvocation) and isinstance(to_node, IterateInvocation):
raise InvalidEdgeError(
f"Cannot connect collector to iterator: {edge.source.node_id}.{edge.source.field} to {edge.destination.node_id}.{edge.destination.field}"
)
# Validate if collector output type matches input type (if this edge results in both being set) - skip if the destination field is not Any or list[Any]
if (
isinstance(from_node, CollectInvocation)
and edge.source.field == "collection"
and not self._is_destination_field_list_of_Any(edge)
and not self._is_destination_field_Any(edge)
):
if not self._is_collector_connection_valid(edge.source.node_id, new_output=edge.destination):
raise InvalidEdgeError(
f"Collector input type does not match collector output type: {edge.source.node_id}.{edge.source.field} to {edge.destination.node_id}.{edge.destination.field}"
@ -707,16 +726,15 @@ class Graph(BaseModel):
# Get the input root type
input_root_type = next(t[0] for t in type_degrees if t[1] == 0) # type: ignore
# Verify that all outputs are lists
# if not all((get_origin(f) == list for f in output_fields)):
# return False
# Verify that all outputs are lists
if not all(is_list_or_contains_list(f) for f in output_fields):
return False
# Verify that all outputs match the input type (are a base class or the same class)
if not all((issubclass(input_root_type, get_args(f)[0]) for f in output_fields)):
if not all(
is_union_subtype(input_root_type, get_args(f)[0]) or issubclass(input_root_type, get_args(f)[0])
for f in output_fields
):
return False
return True


@ -59,7 +59,7 @@ class ImageFileStorageBase(ABC):
self,
image: PILImageType,
image_name: str,
metadata: Optional[dict] = None,
metadata: Optional[Union[str, dict]] = None,
workflow: Optional[str] = None,
thumbnail_size: int = 256,
) -> None:
@ -109,7 +109,7 @@ class DiskImageFileStorage(ImageFileStorageBase):
self,
image: PILImageType,
image_name: str,
metadata: Optional[dict] = None,
metadata: Optional[Union[str, dict]] = None,
workflow: Optional[str] = None,
thumbnail_size: int = 256,
) -> None:
@ -119,20 +119,10 @@ class DiskImageFileStorage(ImageFileStorageBase):
pnginfo = PngImagePlugin.PngInfo()
if metadata is not None or workflow is not None:
if metadata is not None:
pnginfo.add_text("invokeai_metadata", json.dumps(metadata))
if workflow is not None:
pnginfo.add_text("invokeai_workflow", workflow)
else:
# For uploaded images, we want to retain metadata. PIL strips it on save; manually add it back
# TODO: retain non-invokeai metadata on save...
original_metadata = image.info.get("invokeai_metadata", None)
if original_metadata is not None:
pnginfo.add_text("invokeai_metadata", original_metadata)
original_workflow = image.info.get("invokeai_workflow", None)
if original_workflow is not None:
pnginfo.add_text("invokeai_workflow", original_workflow)
if metadata is not None:
pnginfo.add_text("invokeai_metadata", json.dumps(metadata) if type(metadata) is dict else metadata)
if workflow is not None:
pnginfo.add_text("invokeai_workflow", workflow)
image.save(image_path, "PNG", pnginfo=pnginfo)
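The save path now tolerates metadata arriving either as a dict or as a pre-serialized JSON string. The serialize-only-if-needed step can be factored into a tiny helper; this is a sketch (using `isinstance` where the diff uses a `type(...) is dict` check), not the actual code:

```python
import json
from typing import Optional, Union

def to_png_text(metadata: Optional[Union[str, dict]]) -> Optional[str]:
    """Return the string to embed as the invokeai_metadata tEXt chunk."""
    if metadata is None:
        return None
    # Strings are assumed to already be JSON and are passed through unchanged.
    return metadata if isinstance(metadata, str) else json.dumps(metadata)

print(to_png_text({"seed": 1}))    # {"seed": 1}
print(to_png_text('{"seed": 1}'))  # {"seed": 1}  (passed through)
print(to_png_text(None))           # None
```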


@ -3,11 +3,12 @@ import sqlite3
import threading
from abc import ABC, abstractmethod
from datetime import datetime
from typing import Generic, Optional, TypeVar, cast
from typing import Generic, Optional, TypeVar, Union, cast
from pydantic import BaseModel, Field
from pydantic.generics import GenericModel
from invokeai.app.invocations.metadata import ImageMetadata
from invokeai.app.models.image import ImageCategory, ResourceOrigin
from invokeai.app.services.models.image_record import ImageRecord, ImageRecordChanges, deserialize_image_record
@ -81,7 +82,7 @@ class ImageRecordStorageBase(ABC):
pass
@abstractmethod
def get_metadata(self, image_name: str) -> Optional[dict]:
def get_metadata(self, image_name: str) -> ImageMetadata:
"""Gets an image's metadata'."""
pass
@ -134,7 +135,8 @@ class ImageRecordStorageBase(ABC):
height: int,
session_id: Optional[str],
node_id: Optional[str],
metadata: Optional[dict],
metadata: Optional[Union[str, dict]],
workflow: Optional[str],
is_intermediate: bool = False,
starred: bool = False,
) -> datetime:
@ -204,6 +206,13 @@ class SqliteImageRecordStorage(ImageRecordStorageBase):
"""
)
if "workflow" not in columns:
self._cursor.execute(
"""--sql
ALTER TABLE images ADD COLUMN workflow TEXT;
"""
)
# Create the `images` table indices.
self._cursor.execute(
"""--sql
@ -269,22 +278,31 @@ class SqliteImageRecordStorage(ImageRecordStorageBase):
return deserialize_image_record(dict(result))
def get_metadata(self, image_name: str) -> Optional[dict]:
def get_metadata(self, image_name: str) -> ImageMetadata:
try:
self._lock.acquire()
self._cursor.execute(
"""--sql
SELECT images.metadata FROM images
SELECT metadata, workflow FROM images
WHERE image_name = ?;
""",
(image_name,),
)
result = cast(Optional[sqlite3.Row], self._cursor.fetchone())
if not result or not result[0]:
return None
return json.loads(result[0])
if not result:
return ImageMetadata()
as_dict = dict(result)
metadata_raw = cast(Optional[str], as_dict.get("metadata", None))
workflow_raw = cast(Optional[str], as_dict.get("workflow", None))
return ImageMetadata(
metadata=json.loads(metadata_raw) if metadata_raw is not None else None,
workflow=json.loads(workflow_raw) if workflow_raw is not None else None,
)
except sqlite3.Error as e:
self._conn.rollback()
raise ImageRecordNotFoundException from e
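The reworked query can be exercised against an in-memory database. This sketch uses a trimmed-down `images` schema with only the columns the query touches; both columns store JSON strings (or NULL), exactly as in the diff:

```python
import json
import sqlite3

# Minimal stand-in for the images table: metadata and workflow are TEXT
# columns holding JSON, read back with one SELECT and decoded if present.
conn = sqlite3.connect(":memory:")
conn.row_factory = sqlite3.Row
conn.execute("CREATE TABLE images (image_name TEXT PRIMARY KEY, metadata TEXT, workflow TEXT)")
conn.execute(
    "INSERT INTO images VALUES (?, ?, ?)",
    ("img.png", json.dumps({"seed": 42}), None),
)

row = conn.execute(
    "SELECT metadata, workflow FROM images WHERE image_name = ?;", ("img.png",)
).fetchone()
as_dict = dict(row)
metadata_raw = as_dict.get("metadata")
workflow_raw = as_dict.get("workflow")
metadata = json.loads(metadata_raw) if metadata_raw is not None else None
workflow = json.loads(workflow_raw) if workflow_raw is not None else None
print(metadata)  # {'seed': 42}
print(workflow)  # None
```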
@ -519,12 +537,15 @@ class SqliteImageRecordStorage(ImageRecordStorageBase):
width: int,
height: int,
node_id: Optional[str],
metadata: Optional[dict],
metadata: Optional[Union[str, dict]],
workflow: Optional[str],
is_intermediate: bool = False,
starred: bool = False,
) -> datetime:
try:
metadata_json = None if metadata is None else json.dumps(metadata)
metadata_json: Optional[str] = None
if metadata is not None:
metadata_json = metadata if type(metadata) is str else json.dumps(metadata)
self._lock.acquire()
self._cursor.execute(
"""--sql
@ -537,10 +558,11 @@ class SqliteImageRecordStorage(ImageRecordStorageBase):
node_id,
session_id,
metadata,
workflow,
is_intermediate,
starred
)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?);
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?);
""",
(
image_name,
@ -551,6 +573,7 @@ class SqliteImageRecordStorage(ImageRecordStorageBase):
node_id,
session_id,
metadata_json,
workflow,
is_intermediate,
starred,
),


@ -1,6 +1,6 @@
from abc import ABC, abstractmethod
from logging import Logger
from typing import TYPE_CHECKING, Callable, Optional
from typing import TYPE_CHECKING, Callable, Optional, Union
from PIL.Image import Image as PILImageType
@ -29,7 +29,6 @@ from invokeai.app.services.item_storage import ItemStorageABC
from invokeai.app.services.models.image_record import ImageDTO, ImageRecord, ImageRecordChanges, image_record_to_dto
from invokeai.app.services.resource_name import NameServiceBase
from invokeai.app.services.urls import UrlServiceBase
from invokeai.app.util.metadata import get_metadata_graph_from_raw_session
if TYPE_CHECKING:
from invokeai.app.services.graph import GraphExecutionState
@ -71,7 +70,7 @@ class ImageServiceABC(ABC):
session_id: Optional[str] = None,
board_id: Optional[str] = None,
is_intermediate: bool = False,
metadata: Optional[dict] = None,
metadata: Optional[Union[str, dict]] = None,
workflow: Optional[str] = None,
) -> ImageDTO:
"""Creates an image, storing the file and its metadata."""
@ -196,7 +195,7 @@ class ImageService(ImageServiceABC):
session_id: Optional[str] = None,
board_id: Optional[str] = None,
is_intermediate: bool = False,
metadata: Optional[dict] = None,
metadata: Optional[Union[str, dict]] = None,
workflow: Optional[str] = None,
) -> ImageDTO:
if image_origin not in ResourceOrigin:
@ -234,6 +233,7 @@ class ImageService(ImageServiceABC):
# Nullable fields
node_id=node_id,
metadata=metadata,
workflow=workflow,
session_id=session_id,
)
if board_id is not None:
@ -311,23 +311,7 @@ class ImageService(ImageServiceABC):
def get_metadata(self, image_name: str) -> Optional[ImageMetadata]:
try:
image_record = self._services.image_records.get(image_name)
metadata = self._services.image_records.get_metadata(image_name)
if not image_record.session_id:
return ImageMetadata(metadata=metadata)
session_raw = self._services.graph_execution_manager.get_raw(image_record.session_id)
graph = None
if session_raw:
try:
graph = get_metadata_graph_from_raw_session(session_raw)
except Exception as e:
self._services.logger.warn(f"Failed to parse session graph: {e}")
graph = None
return ImageMetadata(graph=graph, metadata=metadata)
return self._services.image_records.get_metadata(image_name)
except ImageRecordNotFoundException:
self._services.logger.error("Image record not found")
raise


@ -8,97 +8,97 @@ from invokeai.app.services.invoker import Invoker
class MemoryInvocationCache(InvocationCacheBase):
__cache: dict[Union[int, str], tuple[BaseInvocationOutput, str]]
__max_cache_size: int
__disabled: bool
__hits: int
__misses: int
__cache_ids: Queue
__invoker: Invoker
_cache: dict[Union[int, str], tuple[BaseInvocationOutput, str]]
_max_cache_size: int
_disabled: bool
_hits: int
_misses: int
_cache_ids: Queue
_invoker: Invoker
def __init__(self, max_cache_size: int = 0) -> None:
self.__cache = dict()
self.__max_cache_size = max_cache_size
self.__disabled = False
self.__hits = 0
self.__misses = 0
self.__cache_ids = Queue()
self._cache = dict()
self._max_cache_size = max_cache_size
self._disabled = False
self._hits = 0
self._misses = 0
self._cache_ids = Queue()
def start(self, invoker: Invoker) -> None:
self.__invoker = invoker
if self.__max_cache_size == 0:
self._invoker = invoker
if self._max_cache_size == 0:
return
self.__invoker.services.images.on_deleted(self._delete_by_match)
self.__invoker.services.latents.on_deleted(self._delete_by_match)
self._invoker.services.images.on_deleted(self._delete_by_match)
self._invoker.services.latents.on_deleted(self._delete_by_match)
def get(self, key: Union[int, str]) -> Optional[BaseInvocationOutput]:
if self.__max_cache_size == 0 or self.__disabled:
if self._max_cache_size == 0 or self._disabled:
return
item = self.__cache.get(key, None)
item = self._cache.get(key, None)
if item is not None:
self.__hits += 1
self._hits += 1
return item[0]
self.__misses += 1
self._misses += 1
def save(self, key: Union[int, str], invocation_output: BaseInvocationOutput) -> None:
if self.__max_cache_size == 0 or self.__disabled:
if self._max_cache_size == 0 or self._disabled:
return
if key not in self.__cache:
self.__cache[key] = (invocation_output, invocation_output.json())
self.__cache_ids.put(key)
if self.__cache_ids.qsize() > self.__max_cache_size:
if key not in self._cache:
self._cache[key] = (invocation_output, invocation_output.json())
self._cache_ids.put(key)
if self._cache_ids.qsize() > self._max_cache_size:
try:
self.__cache.pop(self.__cache_ids.get())
self._cache.pop(self._cache_ids.get())
except KeyError:
# this means the cache_ids are somehow out of sync w/ the cache
pass
def delete(self, key: Union[int, str]) -> None:
if self.__max_cache_size == 0 or self.__disabled:
if self._max_cache_size == 0:
return
if key in self.__cache:
del self.__cache[key]
if key in self._cache:
del self._cache[key]
def clear(self, *args, **kwargs) -> None:
if self.__max_cache_size == 0 or self.__disabled:
if self._max_cache_size == 0:
return
self.__cache.clear()
self.__cache_ids = Queue()
self.__misses = 0
self.__hits = 0
self._cache.clear()
self._cache_ids = Queue()
self._misses = 0
self._hits = 0
def create_key(self, invocation: BaseInvocation) -> int:
return hash(invocation.json(exclude={"id"}))
def disable(self) -> None:
if self.__max_cache_size == 0:
if self._max_cache_size == 0:
return
self.__disabled = True
self._disabled = True
def enable(self) -> None:
if self.__max_cache_size == 0:
if self._max_cache_size == 0:
return
self.__disabled = False
self._disabled = False
def get_status(self) -> InvocationCacheStatus:
return InvocationCacheStatus(
hits=self.__hits,
misses=self.__misses,
enabled=not self.__disabled and self.__max_cache_size > 0,
size=len(self.__cache),
max_size=self.__max_cache_size,
hits=self._hits,
misses=self._misses,
enabled=not self._disabled and self._max_cache_size > 0,
size=len(self._cache),
max_size=self._max_cache_size,
)
def _delete_by_match(self, to_match: str) -> None:
if self.__max_cache_size == 0 or self.__disabled:
if self._max_cache_size == 0:
return
keys_to_delete = set()
for key, value_tuple in self.__cache.items():
for key, value_tuple in self._cache.items():
if to_match in value_tuple[1]:
keys_to_delete.add(key)
@ -108,4 +108,4 @@ class MemoryInvocationCache(InvocationCacheBase):
for key in keys_to_delete:
self.delete(key)
self.__invoker.services.logger.debug(f"Deleted {len(keys_to_delete)} cached invocation outputs for {to_match}")
self._invoker.services.logger.debug(f"Deleted {len(keys_to_delete)} cached invocation outputs for {to_match}")
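The rename from double- to single-underscore attributes sidesteps Python's name mangling, which rewrites `__name` to `_ClassName__name` at compile time and makes the attributes awkward to reach from subclasses or tests. A minimal demonstration:

```python
class Base:
    def __init__(self) -> None:
        self.__mangled = 1   # actually stored as _Base__mangled
        self._plain = 2      # stored under its own name

class Child(Base):
    def read(self) -> int:
        try:
            # Inside Child this compiles to self._Child__mangled,
            # which Base never set, so the lookup fails.
            return self.__mangled
        except AttributeError:
            return self._plain

print(Child().read())           # 2
print(sorted(Base().__dict__))  # ['_Base__mangled', '_plain']
```

A single leading underscore still signals "internal", but leaves the attribute accessible under the name it was written with.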


@ -81,6 +81,7 @@
"load": "Load",
"loading": "Loading",
"loadingInvokeAI": "Loading Invoke AI",
"learnMore": "Learn More",
"modelManager": "Model Manager",
"nodeEditor": "Node Editor",
"nodes": "Workflow Editor",
@ -135,6 +136,8 @@
"bgth": "bg_th",
"canny": "Canny",
"cannyDescription": "Canny edge detection",
"colorMap": "Color",
"colorMapDescription": "Generates a color map from the image",
"coarse": "Coarse",
"contentShuffle": "Content Shuffle",
"contentShuffleDescription": "Shuffles the content in an image",
@ -158,6 +161,7 @@
"hideAdvanced": "Hide Advanced",
"highThreshold": "High Threshold",
"imageResolution": "Image Resolution",
"colorMapTileSize": "Tile Size",
"importImageFromCanvas": "Import Image From Canvas",
"importMaskFromCanvas": "Import Mask From Canvas",
"incompatibleBaseModel": "Incompatible base model:",
@@ -701,6 +705,8 @@
"addNodeToolTip": "Add Node (Shift+A, Space)",
"animatedEdges": "Animated Edges",
"animatedEdgesHelp": "Animate selected edges and edges connected to selected nodes",
"boardField": "Board",
"boardFieldDescription": "A gallery board",
"boolean": "Booleans",
"booleanCollection": "Boolean Collection",
"booleanCollectionDescription": "A collection of booleans.",
@@ -888,7 +894,7 @@
"zoomOutNodes": "Zoom Out"
},
"parameters": {
"aspectRatio": "Ratio",
"aspectRatio": "Aspect Ratio",
"boundingBoxHeader": "Bounding Box",
"boundingBoxHeight": "Bounding Box Height",
"boundingBoxWidth": "Bounding Box Width",
@@ -1020,8 +1026,8 @@
"label": "Seed Behaviour",
"perIterationLabel": "Seed per Iteration",
"perIterationDesc": "Use a different seed for each iteration",
"perPromptLabel": "Seed per Prompt",
"perPromptDesc": "Use a different seed for each prompt"
"perPromptLabel": "Seed per Image",
"perPromptDesc": "Use a different seed for each image"
}
},
"sdxl": {
@@ -1173,131 +1179,205 @@
"popovers": {
"clipSkip": {
"heading": "CLIP Skip",
"paragraph": "Choose how many layers of the CLIP model to skip. Certain models are better suited to be used with CLIP Skip."
},
"compositingBlur": {
"heading": "Blur",
"paragraph": "The blur radius of the mask."
},
"compositingBlurMethod": {
"heading": "Blur Method",
"paragraph": "The method of blur applied to the masked area."
},
"compositingCoherencePass": {
"heading": "Coherence Pass",
"paragraph": "Composite the Inpainted/Outpainted images."
},
"compositingCoherenceMode": {
"heading": "Mode",
"paragraph": "The mode of the Coherence Pass."
},
"compositingCoherenceSteps": {
"heading": "Steps",
"paragraph": "Number of steps in the Coherence Pass. Similar to Denoising Steps."
},
"compositingStrength": {
"heading": "Strength",
"paragraph": "Amount of noise added for the Coherence Pass. Similar to Denoising Strength."
},
"compositingMaskAdjustments": {
"heading": "Mask Adjustments",
"paragraph": "Adjust the mask."
},
"controlNetBeginEnd": {
"heading": "Begin / End Step Percentage",
"paragraph": "Which parts of the denoising process will have the ControlNet applied. ControlNets applied at the start of the process guide composition, and ControlNets applied at the end guide details."
},
"controlNetControlMode": {
"heading": "Control Mode",
"paragraph": "Lends more weight to either the prompt or ControlNet."
},
"controlNetResizeMode": {
"heading": "Resize Mode",
"paragraph": "How the ControlNet image will be fit to the image generation Ratio"
},
"controlNetToggle": {
"heading": "Enable ControlNet",
"paragraph": "ControlNets provide guidance to the generation process, helping create images with controlled composition, structure, or style, depending on the model selected."
},
"controlNetWeight": {
"heading": "Weight",
"paragraph": "How strongly the ControlNet will impact the generated image."
},
"dynamicPromptsToggle": {
"heading": "Enable Dynamic Prompts",
"paragraph": "Dynamic prompts allow multiple options within a prompt. Dynamic prompts can be used by: {option1|option2|option3}. Combinations of prompts will be randomly generated until the “Images” number has been reached."
},
"dynamicPromptsCombinatorial": {
"heading": "Combinatorial Generation",
"paragraph": "Generate an image for every possible combination of Dynamic Prompts until the Max Prompts is reached."
},
"infillMethod": {
"heading": "Infill Method",
"paragraph": "Method to infill the selected area."
},
"lora": {
"heading": "LoRA Weight",
"paragraph": "Weight of the LoRA. Higher weight will lead to larger impacts on the final image."
},
"noiseEnable": {
"heading": "Enable Noise Settings",
"paragraph": "Advanced control over noise generation."
},
"noiseUseCPU": {
"heading": "Use CPU Noise",
"paragraph": "Uses the CPU to generate random noise."
},
"paramCFGScale": {
"heading": "CFG Scale",
"paragraph": "Controls how much your prompt influences the generation process."
},
"paramDenoisingStrength": {
"heading": "Denoising Strength",
"paragraph": "How much noise is added to the input image. 0 will result in an identical image, while 1 will result in a completely new image."
},
"paramIterations": {
"heading": "Iterations",
"paragraph": "The number of images to generate. If Dynamic Prompts is enabled, each of the prompts will be generated this many times."
},
"paramModel": {
"heading": "Model",
"paragraph": "Model used for the denoising steps. Different models are trained to specialize in producing different aesthetic results and content."
"paragraphs": [
"Choose how many layers of the CLIP model to skip.",
"Some models work better with certain CLIP Skip settings.",
"A higher value typically results in a less detailed image."
]
},
"paramNegativeConditioning": {
"heading": "Negative Prompt",
"paragraph": "The generation process avoids the concepts in the negative prompt. Use this to exclude qualities or objects from the output. Supports Compel syntax and embeddings."
"paragraphs": [
"The generation process avoids the concepts in the negative prompt. Use this to exclude qualities or objects from the output.",
"Supports Compel syntax and embeddings."
]
},
"paramPositiveConditioning": {
"heading": "Positive Prompt",
"paragraph": "Guides the generation process. You may use any words or phrases. Supports Compel and Dynamic Prompts syntaxes and embeddings."
},
"paramRatio": {
"heading": "Ratio",
"paragraph": "The ratio of the dimensions of the image generated. An image size (in number of pixels) equivalent to 512x512 is recommended for SD1.5 models and a size equivalent to 1024x1024 is recommended for SDXL models."
"paragraphs": [
"Guides the generation process. You may use any words or phrases.",
"Compel and Dynamic Prompts syntaxes and embeddings."
]
},
"paramScheduler": {
"heading": "Scheduler",
"paragraph": "Scheduler defines how to iteratively add noise to an image or how to update a sample based on a model's output."
"paragraphs": [
"Scheduler defines how to iteratively add noise to an image or how to update a sample based on a model's output."
]
},
"compositingBlur": {
"heading": "Blur",
"paragraphs": ["The blur radius of the mask."]
},
"compositingBlurMethod": {
"heading": "Blur Method",
"paragraphs": ["The method of blur applied to the masked area."]
},
"compositingCoherencePass": {
"heading": "Coherence Pass",
"paragraphs": [
"A second round of denoising helps to composite the Inpainted/Outpainted image."
]
},
"compositingCoherenceMode": {
"heading": "Mode",
"paragraphs": ["The mode of the Coherence Pass."]
},
"compositingCoherenceSteps": {
"heading": "Steps",
"paragraphs": [
"Number of denoising steps used in the Coherence Pass.",
"Same as the main Steps parameter."
]
},
"compositingStrength": {
"heading": "Strength",
"paragraphs": [
"Denoising strength for the Coherence Pass.",
"Same as the Image to Image Denoising Strength parameter."
]
},
"compositingMaskAdjustments": {
"heading": "Mask Adjustments",
"paragraphs": ["Adjust the mask."]
},
"controlNetBeginEnd": {
"heading": "Begin / End Step Percentage",
"paragraphs": [
"Which steps of the denoising process will have the ControlNet applied.",
"ControlNets applied at the beginning of the process guide composition, and ControlNets applied at the end guide details."
]
},
"controlNetControlMode": {
"heading": "Control Mode",
"paragraphs": [
"Lends more weight to either the prompt or ControlNet."
]
},
"controlNetResizeMode": {
"heading": "Resize Mode",
"paragraphs": [
"How the ControlNet image will be fit to the image output size."
]
},
"controlNet": {
"heading": "ControlNet",
"paragraphs": [
"ControlNets provide guidance to the generation process, helping create images with controlled composition, structure, or style, depending on the model selected."
]
},
"controlNetWeight": {
"heading": "Weight",
"paragraphs": [
"How strongly the ControlNet will impact the generated image."
]
},
"dynamicPrompts": {
"heading": "Dynamic Prompts",
"paragraphs": [
"Dynamic Prompts parses a single prompt into many.",
"The basic syntax is \"a {red|green|blue} ball\". This will produce three prompts: \"a red ball\", \"a green ball\" and \"a blue ball\".",
"You can use the syntax as many times as you like in a single prompt, but be sure to keep the number of prompts generated in check with the Max Prompts setting."
]
},
"dynamicPromptsMaxPrompts": {
"heading": "Max Prompts",
"paragraphs": [
"Limits the number of prompts that can be generated by Dynamic Prompts."
]
},
"dynamicPromptsSeedBehaviour": {
"heading": "Seed Behaviour",
"paragraphs": [
"Controls how the seed is used when generating prompts.",
"Per Iteration will use a unique seed for each iteration. Use this to explore prompt variations on a single seed.",
"For example, if you have 5 prompts, each image will use the same seed.",
"Per Image will use a unique seed for each image. This provides more variation."
]
},
"infillMethod": {
"heading": "Infill Method",
"paragraphs": ["Method to infill the selected area."]
},
"lora": {
"heading": "LoRA Weight",
"paragraphs": [
"Higher LoRA weight will lead to larger impacts on the final image."
]
},
"noiseUseCPU": {
"heading": "Use CPU Noise",
"paragraphs": [
"Controls whether noise is generated on the CPU or GPU.",
"With CPU Noise enabled, a particular seed will produce the same image on any machine.",
"There is no performance impact to enabling CPU Noise."
]
},
"paramCFGScale": {
"heading": "CFG Scale",
"paragraphs": [
"Controls how much your prompt influences the generation process."
]
},
"paramDenoisingStrength": {
"heading": "Denoising Strength",
"paragraphs": [
"How much noise is added to the input image.",
"0 will result in an identical image, while 1 will result in a completely new image."
]
},
"paramIterations": {
"heading": "Iterations",
"paragraphs": [
"The number of images to generate.",
"If Dynamic Prompts is enabled, each of the prompts will be generated this many times."
]
},
"paramModel": {
"heading": "Model",
"paragraphs": [
"Model used for the denoising steps.",
"Different models are typically trained to specialize in producing particular aesthetic results and content."
]
},
"paramRatio": {
"heading": "Aspect Ratio",
"paragraphs": [
"The aspect ratio of the dimensions of the image generated.",
"An image size (in number of pixels) equivalent to 512x512 is recommended for SD1.5 models and a size equivalent to 1024x1024 is recommended for SDXL models."
]
},
"paramSeed": {
"heading": "Seed",
"paragraph": "Controls the starting noise used for generation. Disable “Random Seed” to produce identical results with the same generation settings."
"paragraphs": [
"Controls the starting noise used for generation.",
"Disable “Random Seed” to produce identical results with the same generation settings."
]
},
"paramSteps": {
"heading": "Steps",
"paragraph": "Number of steps that will be performed in each generation. Higher step counts will typically create better images but will require more generation time."
"paragraphs": [
"Number of steps that will be performed in each generation.",
"Higher step counts will typically create better images but will require more generation time."
]
},
"paramVAE": {
"heading": "VAE",
"paragraph": "Model used for translating AI output into the final image."
"paragraphs": [
"Model used for translating AI output into the final image."
]
},
"paramVAEPrecision": {
"heading": "VAE Precision",
"paragraph": "The precision used during VAE encoding and decoding. Fp16/Half precision is more efficient, at the expense of minor image variations."
"paragraphs": [
"The precision used during VAE encoding and decoding. FP16/half precision is more efficient, at the expense of minor image variations."
]
},
"scaleBeforeProcessing": {
"heading": "Scale Before Processing",
"paragraph": "Scales the selected area to the size best suited for the model before the image generation process."
"paragraphs": [
"Scales the selected area to the size best suited for the model before the image generation process."
]
}
},
"ui": {


@@ -17,7 +17,8 @@ import {
} from 'services/events/actions';
import { startAppListening } from '../..';
const nodeDenylist = ['load_image'];
// These nodes output an image, but do not actually *save* an image, so we don't want to handle the gallery logic on them
const nodeDenylist = ['load_image', 'image'];
export const addInvocationCompleteEventListener = () => {
startAppListening({
@@ -37,6 +38,7 @@ export const addInvocationCompleteEventListener = () => {
const { image_name } = result.image;
const { canvas, gallery } = getState();
// This populates the `getImageDTO` cache
const imageDTO = await dispatch(
imagesApi.endpoints.getImageDTO.initiate(image_name)
).unwrap();
@@ -52,54 +54,36 @@ export const addInvocationCompleteEventListener = () => {
if (!imageDTO.is_intermediate) {
/**
* Cache updates for when an image result is received
* - *add* to getImageDTO
* - IF `autoAddBoardId` is set:
* - THEN add it to the board_id/images
* - ELSE (`autoAddBoardId` is not set):
* - THEN add it to the no_board/images
* - add it to the no_board/images
*/
const { autoAddBoardId } = gallery;
if (autoAddBoardId && autoAddBoardId !== 'none') {
dispatch(
imagesApi.endpoints.addImageToBoard.initiate({
board_id: autoAddBoardId,
imageDTO,
})
);
} else {
dispatch(
imagesApi.util.updateQueryData(
'listImages',
{
board_id: 'none',
categories: IMAGE_CATEGORIES,
},
(draft) => {
imagesAdapter.addOne(draft, imageDTO);
}
)
);
}
dispatch(
imagesApi.util.updateQueryData(
'listImages',
{
board_id: imageDTO.board_id ?? 'none',
categories: IMAGE_CATEGORIES,
},
(draft) => {
imagesAdapter.addOne(draft, imageDTO);
}
)
);
dispatch(
imagesApi.util.invalidateTags([
{ type: 'BoardImagesTotal', id: autoAddBoardId },
{ type: 'BoardAssetsTotal', id: autoAddBoardId },
{ type: 'BoardImagesTotal', id: imageDTO.board_id },
{ type: 'BoardAssetsTotal', id: imageDTO.board_id },
])
);
const { selectedBoardId, shouldAutoSwitch } = gallery;
const { shouldAutoSwitch } = gallery;
// If auto-switch is enabled, select the new image
if (shouldAutoSwitch) {
// if auto-add is enabled, switch the board as the image comes in
if (autoAddBoardId && autoAddBoardId !== selectedBoardId) {
dispatch(boardIdSelected(autoAddBoardId));
dispatch(galleryViewChanged('images'));
} else if (!autoAddBoardId) {
dispatch(galleryViewChanged('images'));
}
dispatch(galleryViewChanged('images'));
dispatch(boardIdSelected(imageDTO.board_id ?? 'none'));
dispatch(imageSelected(imageDTO));
}
}
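The rewritten listener no longer branches on `autoAddBoardId` when updating the gallery caches; it keys the `listImages` cache entry off the image's own `board_id`, falling back to the board-less list. A minimal sketch of that key selection (`ImageDTO` here is a simplified stand-in for the real DTO type):

```typescript
// Simplified stand-in for the real ImageDTO type.
type ImageDTO = { image_name: string; board_id?: string };

// The listImages cache entry an incoming image should be added to:
// the image's own board if it has one, otherwise the 'none' board.
function listImagesCacheKey(imageDTO: ImageDTO): { board_id: string } {
  return { board_id: imageDTO.board_id ?? 'none' };
}
```

Because the server has already assigned the board (via the graph's board field), the client no longer needs to issue a separate `addImageToBoard` mutation here.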


@@ -18,11 +18,14 @@ export const addUpscaleRequestedListener = () => {
const log = logger('session');
const { image_name } = action.payload;
const { esrganModelName } = getState().postprocessing;
const state = getState();
const { esrganModelName } = state.postprocessing;
const { autoAddBoardId } = state.gallery;
const graph = buildAdHocUpscaleGraph({
image_name,
esrganModelName,
autoAddBoardId,
});
try {


@@ -1,124 +0,0 @@
import {
Box,
Button,
Divider,
Flex,
Heading,
Image,
Popover,
PopoverArrow,
PopoverBody,
PopoverCloseButton,
PopoverContent,
PopoverProps,
PopoverTrigger,
Portal,
Text,
} from '@chakra-ui/react';
import { ReactNode, memo } from 'react';
import { useTranslation } from 'react-i18next';
import { useAppSelector } from '../../app/store/storeHooks';
const OPEN_DELAY = 1500;
type Props = Omit<PopoverProps, 'children'> & {
details: string;
children: ReactNode;
image?: string;
buttonLabel?: string;
buttonHref?: string;
placement?: PopoverProps['placement'];
};
const IAIInformationalPopover = ({
details,
image,
buttonLabel,
buttonHref,
children,
placement,
}: Props) => {
const shouldEnableInformationalPopovers = useAppSelector(
(state) => state.system.shouldEnableInformationalPopovers
);
const { t } = useTranslation();
const heading = t(`popovers.${details}.heading`);
const paragraph = t(`popovers.${details}.paragraph`);
if (!shouldEnableInformationalPopovers) {
return <>{children}</>;
}
return (
<Popover
placement={placement || 'top'}
closeOnBlur={false}
trigger="hover"
variant="informational"
openDelay={OPEN_DELAY}
>
<PopoverTrigger>
<Box w="full">{children}</Box>
</PopoverTrigger>
<Portal>
<PopoverContent>
<PopoverArrow />
<PopoverCloseButton />
<PopoverBody>
<Flex
sx={{
gap: 3,
flexDirection: 'column',
width: '100%',
alignItems: 'center',
}}
>
{image && (
<Image
sx={{
objectFit: 'contain',
maxW: '60%',
maxH: '60%',
backgroundColor: 'white',
}}
src={image}
alt="Optional Image"
/>
)}
<Flex
sx={{
gap: 3,
flexDirection: 'column',
width: '100%',
}}
>
{heading && (
<>
<Heading size="sm">{heading}</Heading>
<Divider />
</>
)}
<Text>{paragraph}</Text>
{buttonLabel && (
<Flex justifyContent="flex-end">
<Button
onClick={() => window.open(buttonHref)}
size="sm"
variant="invokeAIOutline"
>
{buttonLabel}
</Button>
</Flex>
)}
</Flex>
</Flex>
</PopoverBody>
</PopoverContent>
</Portal>
</Popover>
);
};
export default memo(IAIInformationalPopover);


@@ -0,0 +1,155 @@
import {
Box,
BoxProps,
Button,
Divider,
Flex,
Heading,
Image,
Popover,
PopoverBody,
PopoverCloseButton,
PopoverContent,
PopoverProps,
PopoverTrigger,
Portal,
Text,
forwardRef,
} from '@chakra-ui/react';
import { merge, omit } from 'lodash-es';
import { PropsWithChildren, memo, useCallback, useMemo } from 'react';
import { useTranslation } from 'react-i18next';
import { FaExternalLinkAlt } from 'react-icons/fa';
import { useAppSelector } from '../../../app/store/storeHooks';
import {
Feature,
OPEN_DELAY,
POPOVER_DATA,
POPPER_MODIFIERS,
} from './constants';
type Props = PropsWithChildren & {
feature: Feature;
wrapperProps?: BoxProps;
popoverProps?: PopoverProps;
};
const IAIInformationalPopover = forwardRef(
({ feature, children, wrapperProps, ...rest }: Props, ref) => {
const { t } = useTranslation();
const shouldEnableInformationalPopovers = useAppSelector(
(state) => state.system.shouldEnableInformationalPopovers
);
const data = useMemo(() => POPOVER_DATA[feature], [feature]);
const popoverProps = useMemo(
() => merge(omit(data, ['image', 'href', 'buttonLabel']), rest),
[data, rest]
);
const heading = useMemo<string | undefined>(
() => t(`popovers.${feature}.heading`),
[feature, t]
);
const paragraphs = useMemo<string[]>(
() =>
t(`popovers.${feature}.paragraphs`, {
returnObjects: true,
}) ?? [],
[feature, t]
);
const handleClick = useCallback(() => {
if (!data?.href) {
return;
}
window.open(data.href);
}, [data?.href]);
if (!shouldEnableInformationalPopovers) {
return (
<Box ref={ref} w="full" {...wrapperProps}>
{children}
</Box>
);
}
return (
<Popover
isLazy
closeOnBlur={false}
trigger="hover"
variant="informational"
openDelay={OPEN_DELAY}
modifiers={POPPER_MODIFIERS}
placement="top"
{...popoverProps}
>
<PopoverTrigger>
<Box ref={ref} w="full" {...wrapperProps}>
{children}
</Box>
</PopoverTrigger>
<Portal>
<PopoverContent w={96}>
<PopoverCloseButton />
<PopoverBody>
<Flex
sx={{
gap: 2,
flexDirection: 'column',
alignItems: 'flex-start',
}}
>
{heading && (
<>
<Heading size="sm">{heading}</Heading>
<Divider />
</>
)}
{data?.image && (
<>
<Image
sx={{
objectFit: 'contain',
maxW: '60%',
maxH: '60%',
backgroundColor: 'white',
}}
src={data.image}
alt="Optional Image"
/>
<Divider />
</>
)}
{paragraphs.map((p) => (
<Text key={p}>{p}</Text>
))}
{data?.href && (
<>
<Divider />
<Button
pt={1}
onClick={handleClick}
leftIcon={<FaExternalLinkAlt />}
alignSelf="flex-end"
variant="link"
>
{t('common.learnMore') ?? heading}
</Button>
</>
)}
</Flex>
</PopoverBody>
</PopoverContent>
</Portal>
</Popover>
);
}
);
IAIInformationalPopover.displayName = 'IAIInformationalPopover';
export default memo(IAIInformationalPopover);
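The new component reads `paragraphs` with `returnObjects: true`, which makes i18next return the raw string array from the resource file instead of coercing it to a string. A toy stand-in for that lookup behavior (not the real i18next implementation, just a sketch of what the component relies on):

```typescript
// Minimal nested resource tree, mirroring the locale file's shape.
const resources: Record<string, unknown> = {
  popovers: {
    paramSteps: {
      heading: 'Steps',
      paragraphs: [
        'Number of steps that will be performed in each generation.',
        'Higher step counts will typically create better images but will require more generation time.',
      ],
    },
  },
};

// Walk a dot-separated key path, as i18next does for nested keys.
function lookup(key: string): unknown {
  let node: unknown = resources;
  for (const part of key.split('.')) {
    if (typeof node !== 'object' || node === null) {
      return undefined;
    }
    node = (node as Record<string, unknown>)[part];
  }
  return node;
}

// Toy `t`: arrays/objects only come back when returnObjects is set,
// modeled here by returning undefined for non-string values otherwise.
function t(key: string, opts?: { returnObjects?: boolean }): unknown {
  const value = lookup(key);
  if (typeof value !== 'string' && !opts?.returnObjects) {
    return undefined;
  }
  return value;
}

const paragraphs = (t('popovers.paramSteps.paragraphs', {
  returnObjects: true,
}) ?? []) as string[];
```

This is why the locale migration above changes every `paragraph` string into a `paragraphs` array: each entry renders as its own `<Text>` element.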


@@ -0,0 +1,98 @@
import { PopoverProps } from '@chakra-ui/react';
export type Feature =
| 'clipSkip'
| 'paramNegativeConditioning'
| 'paramPositiveConditioning'
| 'paramScheduler'
| 'compositingBlur'
| 'compositingBlurMethod'
| 'compositingCoherencePass'
| 'compositingCoherenceMode'
| 'compositingCoherenceSteps'
| 'compositingStrength'
| 'compositingMaskAdjustments'
| 'controlNetBeginEnd'
| 'controlNetControlMode'
| 'controlNetResizeMode'
| 'controlNet'
| 'controlNetWeight'
| 'dynamicPrompts'
| 'dynamicPromptsMaxPrompts'
| 'dynamicPromptsSeedBehaviour'
| 'infillMethod'
| 'lora'
| 'noiseUseCPU'
| 'paramCFGScale'
| 'paramDenoisingStrength'
| 'paramIterations'
| 'paramModel'
| 'paramRatio'
| 'paramSeed'
| 'paramSteps'
| 'paramVAE'
| 'paramVAEPrecision'
| 'scaleBeforeProcessing';
export type PopoverData = PopoverProps & {
image?: string;
href?: string;
buttonLabel?: string;
};
export const POPOVER_DATA: { [key in Feature]?: PopoverData } = {
paramNegativeConditioning: {
placement: 'right',
},
controlNet: {
href: 'https://support.invoke.ai/support/solutions/articles/151000105880',
},
lora: {
href: 'https://support.invoke.ai/support/solutions/articles/151000159072',
},
compositingCoherenceMode: {
href: 'https://support.invoke.ai/support/solutions/articles/151000158838',
},
infillMethod: {
href: 'https://support.invoke.ai/support/solutions/articles/151000158841',
},
scaleBeforeProcessing: {
href: 'https://support.invoke.ai/support/solutions/articles/151000158841',
},
paramIterations: {
href: 'https://support.invoke.ai/support/solutions/articles/151000159073',
},
paramPositiveConditioning: {
href: 'https://support.invoke.ai/support/solutions/articles/151000096606-tips-on-crafting-prompts',
placement: 'right',
},
paramScheduler: {
placement: 'right',
href: 'https://support.invoke.ai/support/solutions/articles/151000159073',
},
paramModel: {
placement: 'right',
href: 'https://support.invoke.ai/support/solutions/articles/151000096601-what-is-a-model-which-should-i-use-',
},
paramRatio: {
gutter: 16,
},
controlNetControlMode: {
placement: 'right',
},
controlNetResizeMode: {
placement: 'right',
},
paramVAE: {
placement: 'right',
},
paramVAEPrecision: {
placement: 'right',
},
} as const;
export const OPEN_DELAY = 1000; // in milliseconds
export const POPPER_MODIFIERS: PopoverProps['modifiers'] = [
{ name: 'preventOverflow', options: { padding: 10 } },
];
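In the component, the per-feature defaults from `POPOVER_DATA` are combined with call-site overrides via lodash's `merge(omit(data, ['image', 'href', 'buttonLabel']), rest)`: non-popover fields are stripped, then overrides win. A shallow sketch of that composition (lodash's `merge` is a deep merge; plain object spread and a hand-rolled `omit` are used here only to keep the snippet dependency-free, and the URL is hypothetical):

```typescript
type PopoverData = {
  placement?: string;
  gutter?: number;
  image?: string;
  href?: string;
  buttonLabel?: string;
};

// Minimal omit: copy every entry whose key is not in `keys`.
function omit<T extends object>(obj: T, keys: string[]): Partial<T> {
  const out: Partial<T> = {};
  for (const [k, v] of Object.entries(obj)) {
    if (!keys.includes(k)) {
      (out as Record<string, unknown>)[k] = v;
    }
  }
  return out;
}

const data: PopoverData = {
  placement: 'right',
  href: 'https://example.com/docs', // hypothetical URL, not from POPOVER_DATA
};

// Strip the non-popover fields, then let call-site overrides win.
const popoverProps = {
  ...omit(data, ['image', 'href', 'buttonLabel']),
  placement: 'top',
};
```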


@@ -44,23 +44,19 @@ const IAIMantineMultiSelect = forwardRef((props: IAIMultiSelectProps, ref) => {
return (
<Tooltip label={tooltip} placement="top" hasArrow isOpen={true}>
<MultiSelect
label={
label ? (
<FormControl ref={ref} isDisabled={disabled}>
<FormLabel>{label}</FormLabel>
</FormControl>
) : undefined
}
ref={inputRef}
disabled={disabled}
onKeyDown={handleKeyDown}
onKeyUp={handleKeyUp}
searchable={searchable}
maxDropdownHeight={300}
styles={styles}
{...rest}
/>
<FormControl ref={ref} isDisabled={disabled}>
{label && <FormLabel>{label}</FormLabel>}
<MultiSelect
ref={inputRef}
disabled={disabled}
onKeyDown={handleKeyDown}
onKeyUp={handleKeyUp}
searchable={searchable}
maxDropdownHeight={300}
styles={styles}
{...rest}
/>
</FormControl>
</Tooltip>
);
});


@@ -70,26 +70,23 @@ const IAIMantineSearchableSelect = forwardRef((props: IAISelectProps, ref) => {
return (
<Tooltip label={tooltip} placement="top" hasArrow>
<Select
ref={inputRef}
label={
label ? (
<FormControl ref={ref} isDisabled={disabled}>
<FormLabel>{label}</FormLabel>
</FormControl>
) : undefined
}
disabled={disabled}
searchValue={searchValue}
onSearchChange={setSearchValue}
onChange={handleChange}
onKeyDown={handleKeyDown}
onKeyUp={handleKeyUp}
searchable={searchable}
maxDropdownHeight={300}
styles={styles}
{...rest}
/>
<FormControl ref={ref} isDisabled={disabled}>
{label && <FormLabel>{label}</FormLabel>}
<Select
ref={inputRef}
withinPortal
disabled={disabled}
searchValue={searchValue}
onSearchChange={setSearchValue}
onChange={handleChange}
onKeyDown={handleKeyDown}
onKeyUp={handleKeyUp}
searchable={searchable}
maxDropdownHeight={300}
styles={styles}
{...rest}
/>
</FormControl>
</Tooltip>
);
});


@@ -22,19 +22,10 @@ const IAIMantineSelect = forwardRef((props: IAISelectProps, ref) => {
return (
<Tooltip label={tooltip} placement="top" hasArrow>
<Select
label={
label ? (
<FormControl ref={ref} isRequired={required} isDisabled={disabled}>
<FormLabel>{label}</FormLabel>
</FormControl>
) : undefined
}
disabled={disabled}
ref={inputRef}
styles={styles}
{...rest}
/>
<FormControl ref={ref} isRequired={required} isDisabled={disabled}>
<FormLabel>{label}</FormLabel>
<Select disabled={disabled} ref={inputRef} styles={styles} {...rest} />
</FormControl>
</Tooltip>
);
});


@@ -1,6 +1,7 @@
import { memo } from 'react';
import { ControlNetConfig } from '../store/controlNetSlice';
import CannyProcessor from './processors/CannyProcessor';
import ColorMapProcessor from './processors/ColorMapProcessor';
import ContentShuffleProcessor from './processors/ContentShuffleProcessor';
import HedProcessor from './processors/HedProcessor';
import LineartAnimeProcessor from './processors/LineartAnimeProcessor';
@@ -30,6 +31,16 @@ const ControlNetProcessorComponent = (props: ControlNetProcessorProps) => {
);
}
if (processorNode.type === 'color_map_image_processor') {
return (
<ColorMapProcessor
controlNetId={controlNetId}
processorNode={processorNode}
isEnabled={isEnabled}
/>
);
}
if (processorNode.type === 'hed_image_processor') {
return (
<HedProcessor


@@ -10,7 +10,7 @@ import {
Tooltip,
} from '@chakra-ui/react';
import { useAppDispatch } from 'app/store/storeHooks';
import IAIInformationalPopover from 'common/components/IAIInformationalPopover';
import IAIInformationalPopover from 'common/components/IAIInformationalPopover/IAIInformationalPopover';
import {
ControlNetConfig,
controlNetBeginStepPctChanged,
@@ -50,7 +50,7 @@ const ParamControlNetBeginEnd = (props: Props) => {
);
return (
<IAIInformationalPopover details="controlNetBeginEnd">
<IAIInformationalPopover feature="controlNetBeginEnd">
<FormControl isDisabled={!isEnabled}>
<FormLabel>{t('controlnet.beginEndStepPercent')}</FormLabel>
<HStack w="100%" gap={2} alignItems="center">


@@ -1,5 +1,5 @@
import { useAppDispatch } from 'app/store/storeHooks';
import IAIInformationalPopover from 'common/components/IAIInformationalPopover';
import IAIInformationalPopover from 'common/components/IAIInformationalPopover/IAIInformationalPopover';
import IAIMantineSelect from 'common/components/IAIMantineSelect';
import {
ControlModes,
@@ -35,7 +35,7 @@ export default function ParamControlNetControlMode(
);
return (
<IAIInformationalPopover details="controlNetControlMode">
<IAIInformationalPopover feature="controlNetControlMode">
<IAIMantineSelect
disabled={!isEnabled}
label={t('controlnet.controlMode')}


@@ -3,7 +3,7 @@ import { createSelector } from '@reduxjs/toolkit';
import { stateSelector } from 'app/store/store';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import { defaultSelectorOptions } from 'app/store/util/defaultMemoizeOptions';
import IAIInformationalPopover from 'common/components/IAIInformationalPopover';
import IAIInformationalPopover from 'common/components/IAIInformationalPopover/IAIInformationalPopover';
import IAISwitch from 'common/components/IAISwitch';
import { isControlNetEnabledToggled } from 'features/controlNet/store/controlNetSlice';
import { memo, useCallback } from 'react';
@@ -28,7 +28,7 @@ const ParamControlNetFeatureToggle = () => {
return (
<Box width="100%">
<IAIInformationalPopover details="controlNetToggle">
<IAIInformationalPopover feature="controlNet">
<IAISwitch
label="Enable ControlNet"
isChecked={isEnabled}


@@ -1,5 +1,5 @@
import { useAppDispatch } from 'app/store/storeHooks';
import IAIInformationalPopover from 'common/components/IAIInformationalPopover';
import IAIInformationalPopover from 'common/components/IAIInformationalPopover/IAIInformationalPopover';
import IAIMantineSelect from 'common/components/IAIMantineSelect';
import {
ControlNetConfig,
@@ -34,7 +34,7 @@ export default function ParamControlNetResizeMode(
);
return (
<IAIInformationalPopover details="controlNetResizeMode">
<IAIInformationalPopover feature="controlNetResizeMode">
<IAIMantineSelect
disabled={!isEnabled}
label={t('controlnet.resizeMode')}


@@ -1,5 +1,5 @@
import { useAppDispatch } from 'app/store/storeHooks';
import IAIInformationalPopover from 'common/components/IAIInformationalPopover';
import IAIInformationalPopover from 'common/components/IAIInformationalPopover/IAIInformationalPopover';
import IAISlider from 'common/components/IAISlider';
import {
ControlNetConfig,
@@ -24,7 +24,7 @@ const ParamControlNetWeight = (props: ParamControlNetWeightProps) => {
);
return (
<IAIInformationalPopover details="controlNetWeight">
<IAIInformationalPopover feature="controlNetWeight">
<IAISlider
isDisabled={!isEnabled}
label={t('controlnet.weight')}


@@ -0,0 +1,59 @@
import IAISlider from 'common/components/IAISlider';
import { CONTROLNET_PROCESSORS } from 'features/controlNet/store/constants';
import { RequiredColorMapImageProcessorInvocation } from 'features/controlNet/store/types';
import { memo, useCallback } from 'react';
import { useTranslation } from 'react-i18next';
import { useProcessorNodeChanged } from '../hooks/useProcessorNodeChanged';
import ProcessorWrapper from './common/ProcessorWrapper';
const DEFAULTS = CONTROLNET_PROCESSORS.color_map_image_processor
.default as RequiredColorMapImageProcessorInvocation;
type ColorMapProcessorProps = {
controlNetId: string;
processorNode: RequiredColorMapImageProcessorInvocation;
isEnabled: boolean;
};
const ColorMapProcessor = (props: ColorMapProcessorProps) => {
const { controlNetId, processorNode, isEnabled } = props;
const { color_map_tile_size } = processorNode;
const processorChanged = useProcessorNodeChanged();
const { t } = useTranslation();
const handleColorMapTileSizeChanged = useCallback(
(v: number) => {
processorChanged(controlNetId, { color_map_tile_size: v });
},
[controlNetId, processorChanged]
);
const handleColorMapTileSizeReset = useCallback(() => {
processorChanged(controlNetId, {
color_map_tile_size: DEFAULTS.color_map_tile_size,
});
}, [controlNetId, processorChanged]);
return (
<ProcessorWrapper>
<IAISlider
isDisabled={!isEnabled}
label={t('controlnet.colorMapTileSize')}
value={color_map_tile_size}
onChange={handleColorMapTileSizeChanged}
handleReset={handleColorMapTileSizeReset}
withReset
min={1}
max={256}
step={1}
withInput
withSliderMarks
sliderNumberInputProps={{
max: 4096,
}}
/>
</ProcessorWrapper>
);
};
export default memo(ColorMapProcessor);


@@ -4,5 +4,9 @@ import { PropsWithChildren } from 'react';
type Props = PropsWithChildren;
export default function ProcessorWrapper(props: Props) {
return <Flex sx={{ flexDirection: 'column', gap: 2 }}>{props.children}</Flex>;
return (
<Flex sx={{ flexDirection: 'column', gap: 2, pb: 2 }}>
{props.children}
</Flex>
);
}


@@ -1,8 +1,8 @@
import i18n from 'i18next';
import {
ControlNetProcessorType,
RequiredControlNetProcessorNode,
} from './types';
import i18n from 'i18next';
type ControlNetProcessorsDict = Record<
ControlNetProcessorType,
@@ -50,6 +50,20 @@ export const CONTROLNET_PROCESSORS: ControlNetProcessorsDict = {
high_threshold: 200,
},
},
color_map_image_processor: {
type: 'color_map_image_processor',
get label() {
return i18n.t('controlnet.colorMap');
},
get description() {
return i18n.t('controlnet.colorMapDescription');
},
default: {
id: 'color_map_image_processor',
type: 'color_map_image_processor',
color_map_tile_size: 64,
},
},
content_shuffle_image_processor: {
type: 'content_shuffle_image_processor',
get label() {


@@ -1,6 +1,7 @@
import { isObject } from 'lodash-es';
import {
CannyImageProcessorInvocation,
ColorMapImageProcessorInvocation,
ContentShuffleImageProcessorInvocation,
HedImageProcessorInvocation,
LineartAnimeImageProcessorInvocation,
@@ -20,6 +21,7 @@ import { O } from 'ts-toolbelt';
*/
export type ControlNetProcessorNode =
| CannyImageProcessorInvocation
| ColorMapImageProcessorInvocation
| ContentShuffleImageProcessorInvocation
| HedImageProcessorInvocation
| LineartAnimeImageProcessorInvocation
@@ -47,6 +49,14 @@ export type RequiredCannyImageProcessorInvocation = O.Required<
'type' | 'low_threshold' | 'high_threshold'
>;
/**
* The Color Map processor node, with parameters flagged as required
*/
export type RequiredColorMapImageProcessorInvocation = O.Required<
ColorMapImageProcessorInvocation,
'type' | 'color_map_tile_size'
>;
/**
* The ContentShuffle processor node, with parameters flagged as required
*/
@ -140,6 +150,7 @@ export type RequiredZoeDepthImageProcessorInvocation = O.Required<
*/
export type RequiredControlNetProcessorNode = O.Required<
| RequiredCannyImageProcessorInvocation
| RequiredColorMapImageProcessorInvocation
| RequiredContentShuffleImageProcessorInvocation
| RequiredHedImageProcessorInvocation
| RequiredLineartAnimeImageProcessorInvocation
@ -166,6 +177,22 @@ export const isCannyImageProcessorInvocation = (
return false;
};
/**
* Type guard for ColorMapImageProcessorInvocation
*/
export const isColorMapImageProcessorInvocation = (
obj: unknown
): obj is ColorMapImageProcessorInvocation => {
if (
isObject(obj) &&
'type' in obj &&
obj.type === 'color_map_image_processor'
) {
return true;
}
return false;
};
/**
* Type guard for ContentShuffleImageProcessorInvocation
*/
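The new `isColorMapImageProcessorInvocation` guard follows the file's existing pattern: check for an object, check for a `type` key, compare the literal. A minimal self-contained sketch (the invocation type here is a hypothetical reduced shape, not the generated API type) shows how the predicate narrows an `unknown` value:

```typescript
// Hypothetical minimal shape of the invocation, for illustration only.
type ColorMapImageProcessorInvocation = {
  type: 'color_map_image_processor';
  color_map_tile_size?: number;
};

const isColorMapImageProcessorInvocation = (
  obj: unknown
): obj is ColorMapImageProcessorInvocation =>
  typeof obj === 'object' &&
  obj !== null &&
  'type' in obj &&
  (obj as { type: unknown }).type === 'color_map_image_processor';

const node: unknown = {
  type: 'color_map_image_processor',
  color_map_tile_size: 64,
};

// Once the guard passes, TypeScript narrows `node` to the invocation type,
// so its fields are accessible without casts.
const tileSize = isColorMapImageProcessorInvocation(node)
  ? node.color_map_tile_size
  : undefined;
```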

View File

@ -43,8 +43,8 @@ const ParamDynamicPromptsCollapse = () => {
activeLabel={activeLabel}
>
<Flex sx={{ gap: 2, flexDir: 'column' }}>
<ParamDynamicPromptsSeedBehaviour />
<ParamDynamicPromptsPreview />
<ParamDynamicPromptsSeedBehaviour />
<ParamDynamicPromptsMaxPrompts />
</Flex>
</IAICollapse>

View File

@ -4,9 +4,8 @@ import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import { defaultSelectorOptions } from 'app/store/util/defaultMemoizeOptions';
import IAISwitch from 'common/components/IAISwitch';
import { memo, useCallback } from 'react';
import { combinatorialToggled } from '../store/dynamicPromptsSlice';
import { useTranslation } from 'react-i18next';
import IAIInformationalPopover from 'common/components/IAIInformationalPopover';
import { combinatorialToggled } from '../store/dynamicPromptsSlice';
const selector = createSelector(
stateSelector,
@ -28,13 +27,11 @@ const ParamDynamicPromptsCombinatorial = () => {
}, [dispatch]);
return (
<IAIInformationalPopover details="dynamicPromptsCombinatorial">
<IAISwitch
label={t('dynamicPrompts.combinatorial')}
isChecked={combinatorial}
onChange={handleChange}
/>
</IAIInformationalPopover>
<IAISwitch
label={t('dynamicPrompts.combinatorial')}
isChecked={combinatorial}
onChange={handleChange}
/>
);
};

View File

@ -9,6 +9,7 @@ import {
maxPromptsReset,
} from '../store/dynamicPromptsSlice';
import { useTranslation } from 'react-i18next';
import IAIInformationalPopover from 'common/components/IAIInformationalPopover/IAIInformationalPopover';
const selector = createSelector(
stateSelector,
@ -46,19 +47,21 @@ const ParamDynamicPromptsMaxPrompts = () => {
}, [dispatch]);
return (
<IAISlider
label={t('dynamicPrompts.maxPrompts')}
isDisabled={isDisabled}
min={min}
max={sliderMax}
value={maxPrompts}
onChange={handleChange}
sliderNumberInputProps={{ max: inputMax }}
withSliderMarks
withInput
withReset
handleReset={handleReset}
/>
<IAIInformationalPopover feature="dynamicPromptsMaxPrompts">
<IAISlider
label={t('dynamicPrompts.maxPrompts')}
isDisabled={isDisabled}
min={min}
max={sliderMax}
value={maxPrompts}
onChange={handleChange}
sliderNumberInputProps={{ max: inputMax }}
withSliderMarks
withInput
withReset
handleReset={handleReset}
/>
</IAIInformationalPopover>
);
};

View File

@ -13,6 +13,7 @@ import { stateSelector } from 'app/store/store';
import { useAppSelector } from 'app/store/storeHooks';
import { defaultSelectorOptions } from 'app/store/util/defaultMemoizeOptions';
import { IAINoContentFallback } from 'common/components/IAIImageFallback';
import IAIInformationalPopover from 'common/components/IAIInformationalPopover/IAIInformationalPopover';
import ScrollableContent from 'features/nodes/components/sidePanel/ScrollableContent';
import { memo } from 'react';
import { FaCircleExclamation } from 'react-icons/fa6';
@ -42,58 +43,73 @@ const ParamDynamicPromptsPreview = () => {
if (isError) {
return (
<Flex
w="full"
h="full"
layerStyle="second"
alignItems="center"
justifyContent="center"
p={8}
>
<IAINoContentFallback
icon={FaCircleExclamation}
label="Problem generating prompts"
/>
</Flex>
<IAIInformationalPopover feature="dynamicPrompts">
<Flex
w="full"
h="full"
layerStyle="second"
alignItems="center"
justifyContent="center"
p={8}
>
<IAINoContentFallback
icon={FaCircleExclamation}
label="Problem generating prompts"
/>
</Flex>
</IAIInformationalPopover>
);
}
return (
<FormControl isInvalid={Boolean(parsingError)}>
<FormLabel whiteSpace="nowrap" overflow="hidden" textOverflow="ellipsis">
Prompts Preview ({prompts.length}){parsingError && ` - ${parsingError}`}
</FormLabel>
<Flex h={64} pos="relative" layerStyle="third" borderRadius="base" p={2}>
<ScrollableContent>
<OrderedList stylePosition="inside" ms={0}>
{prompts.map((prompt, i) => (
<ListItem
fontSize="sm"
key={`${prompt}.${i}`}
sx={listItemStyles}
>
<Text as="span">{prompt}</Text>
</ListItem>
))}
</OrderedList>
</ScrollableContent>
{isLoading && (
<Flex
pos="absolute"
w="full"
h="full"
top={0}
insetInlineStart={0}
layerStyle="second"
opacity={0.7}
alignItems="center"
justifyContent="center"
>
<Spinner />
</Flex>
)}
</Flex>
</FormControl>
<IAIInformationalPopover feature="dynamicPrompts">
<FormControl isInvalid={Boolean(parsingError)}>
<FormLabel
whiteSpace="nowrap"
overflow="hidden"
textOverflow="ellipsis"
>
Prompts Preview ({prompts.length})
{parsingError && ` - ${parsingError}`}
</FormLabel>
<Flex
h={64}
pos="relative"
layerStyle="third"
borderRadius="base"
p={2}
>
<ScrollableContent>
<OrderedList stylePosition="inside" ms={0}>
{prompts.map((prompt, i) => (
<ListItem
fontSize="sm"
key={`${prompt}.${i}`}
sx={listItemStyles}
>
<Text as="span">{prompt}</Text>
</ListItem>
))}
</OrderedList>
</ScrollableContent>
{isLoading && (
<Flex
pos="absolute"
w="full"
h="full"
top={0}
insetInlineStart={0}
layerStyle="second"
opacity={0.7}
alignItems="center"
justifyContent="center"
>
<Spinner />
</Flex>
)}
</Flex>
</FormControl>
</IAIInformationalPopover>
);
};

View File

@ -7,6 +7,7 @@ import {
seedBehaviourChanged,
} from '../store/dynamicPromptsSlice';
import IAIMantineSelectItemWithDescription from 'common/components/IAIMantineSelectItemWithDescription';
import IAIInformationalPopover from 'common/components/IAIInformationalPopover/IAIInformationalPopover';
type Item = {
label: string;
@ -47,13 +48,15 @@ const ParamDynamicPromptsSeedBehaviour = () => {
);
return (
<IAIMantineSelect
label={t('dynamicPrompts.seedBehaviour.label')}
value={seedBehaviour}
data={data}
itemComponent={IAIMantineSelectItemWithDescription}
onChange={handleChange}
/>
<IAIInformationalPopover feature="dynamicPromptsSeedBehaviour">
<IAIMantineSelect
label={t('dynamicPrompts.seedBehaviour.label')}
value={seedBehaviour}
data={data}
itemComponent={IAIMantineSelectItemWithDescription}
onChange={handleChange}
/>
</IAIInformationalPopover>
);
};

View File

@ -28,7 +28,7 @@ import {
setShouldShowImageDetails,
setShouldShowProgressInViewer,
} from 'features/ui/store/uiSlice';
import { memo, useCallback, useMemo } from 'react';
import { memo, useCallback } from 'react';
import { useHotkeys } from 'react-hotkeys-hook';
import { useTranslation } from 'react-i18next';
import {
@ -41,9 +41,10 @@ import {
import { FaCircleNodes, FaEllipsis } from 'react-icons/fa6';
import {
useGetImageDTOQuery,
useGetImageMetadataFromFileQuery,
useGetImageMetadataQuery,
} from 'services/api/endpoints/images';
import { menuListMotionProps } from 'theme/components/menu';
import { useDebounce } from 'use-debounce';
import { sentImageToImg2Img } from '../../store/actions';
import SingleSelectionMenuItems from '../ImageContextMenu/SingleSelectionMenuItems';
@ -92,7 +93,6 @@ const CurrentImageButtons = (props: CurrentImageButtonsProps) => {
shouldShowImageDetails,
lastSelectedImage,
shouldShowProgressInViewer,
shouldFetchMetadataFromApi,
} = useAppSelector(currentImageButtonsSelector);
const isUpscalingEnabled = useFeatureStatus('upscaling').isFeatureEnabled;
@ -107,16 +107,10 @@ const CurrentImageButtons = (props: CurrentImageButtonsProps) => {
lastSelectedImage?.image_name ?? skipToken
);
const getMetadataArg = useMemo(() => {
if (lastSelectedImage) {
return { image: lastSelectedImage, shouldFetchMetadataFromApi };
} else {
return skipToken;
}
}, [lastSelectedImage, shouldFetchMetadataFromApi]);
const [debouncedImageName] = useDebounce(lastSelectedImage?.image_name, 300);
const { metadata, workflow, isLoading } = useGetImageMetadataFromFileQuery(
getMetadataArg,
const { metadata, workflow, isLoading } = useGetImageMetadataQuery(
debouncedImageName ?? skipToken,
{
selectFromResult: (res) => ({
isLoading: res.isFetching,

View File

@ -1,8 +1,9 @@
import { Flex, MenuItem, Spinner } from '@chakra-ui/react';
import { useStore } from '@nanostores/react';
import { skipToken } from '@reduxjs/toolkit/dist/query';
import { useAppToaster } from 'app/components/Toaster';
import { $customStarUI } from 'app/store/nanostores/customStarUI';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import { useAppDispatch } from 'app/store/storeHooks';
import { setInitialCanvasImage } from 'features/canvas/store/canvasSlice';
import {
imagesToChangeSelected,
@ -32,12 +33,12 @@ import {
import { FaCircleNodes } from 'react-icons/fa6';
import { MdStar, MdStarBorder } from 'react-icons/md';
import {
useGetImageMetadataFromFileQuery,
useGetImageMetadataQuery,
useStarImagesMutation,
useUnstarImagesMutation,
} from 'services/api/endpoints/images';
import { ImageDTO } from 'services/api/types';
import { configSelector } from '../../../system/store/configSelectors';
import { useDebounce } from 'use-debounce';
import { sentImageToCanvas, sentImageToImg2Img } from '../../store/actions';
type SingleSelectionMenuItemsProps = {
@ -53,11 +54,12 @@ const SingleSelectionMenuItems = (props: SingleSelectionMenuItemsProps) => {
const toaster = useAppToaster();
const isCanvasEnabled = useFeatureStatus('unifiedCanvas').isFeatureEnabled;
const { shouldFetchMetadataFromApi } = useAppSelector(configSelector);
const customStarUi = useStore($customStarUI);
const { metadata, workflow, isLoading } = useGetImageMetadataFromFileQuery(
{ image: imageDTO, shouldFetchMetadataFromApi },
const [debouncedImageName] = useDebounce(imageDTO.image_name, 300);
const { metadata, workflow, isLoading } = useGetImageMetadataQuery(
debouncedImageName ?? skipToken,
{
selectFromResult: (res) => ({
isLoading: res.isFetching,

View File

@ -9,15 +9,15 @@ import {
Tabs,
Text,
} from '@chakra-ui/react';
import { skipToken } from '@reduxjs/toolkit/dist/query';
import { IAINoContentFallback } from 'common/components/IAIImageFallback';
import { memo } from 'react';
import { useGetImageMetadataFromFileQuery } from 'services/api/endpoints/images';
import { useTranslation } from 'react-i18next';
import { useGetImageMetadataQuery } from 'services/api/endpoints/images';
import { ImageDTO } from 'services/api/types';
import { useDebounce } from 'use-debounce';
import DataViewer from './DataViewer';
import ImageMetadataActions from './ImageMetadataActions';
import { useAppSelector } from '../../../../app/store/storeHooks';
import { configSelector } from '../../../system/store/configSelectors';
import { useTranslation } from 'react-i18next';
type ImageMetadataViewerProps = {
image: ImageDTO;
@ -31,10 +31,10 @@ const ImageMetadataViewer = ({ image }: ImageMetadataViewerProps) => {
// });
const { t } = useTranslation();
const { shouldFetchMetadataFromApi } = useAppSelector(configSelector);
const [debouncedImageName] = useDebounce(image.image_name, 300);
const { metadata, workflow } = useGetImageMetadataFromFileQuery(
{ image, shouldFetchMetadataFromApi },
const { metadata, workflow } = useGetImageMetadataQuery(
debouncedImageName ?? skipToken,
{
selectFromResult: (res) => ({
metadata: res?.currentData?.metadata,

View File

@ -10,7 +10,7 @@ import {
loraWeightChanged,
loraWeightReset,
} from '../store/loraSlice';
import IAIInformationalPopover from 'common/components/IAIInformationalPopover';
import IAIInformationalPopover from 'common/components/IAIInformationalPopover/IAIInformationalPopover';
type Props = {
lora: LoRA;
@ -36,7 +36,7 @@ const ParamLora = (props: Props) => {
}, [dispatch, lora.id]);
return (
<IAIInformationalPopover details="lora">
<IAIInformationalPopover feature="lora">
<Flex sx={{ gap: 2.5, alignItems: 'flex-end' }}>
<IAISlider
label={lora.model_name}

View File

@ -1,13 +1,13 @@
import { Checkbox, Flex, FormControl, FormLabel } from '@chakra-ui/react';
import { useAppDispatch } from 'app/store/storeHooks';
import { useEmbedWorkflow } from 'features/nodes/hooks/useEmbedWorkflow';
import { useHasImageOutput } from 'features/nodes/hooks/useHasImageOutput';
import { useWithWorkflow } from 'features/nodes/hooks/useWithWorkflow';
import { nodeEmbedWorkflowChanged } from 'features/nodes/store/nodesSlice';
import { ChangeEvent, memo, useCallback } from 'react';
const EmbedWorkflowCheckbox = ({ nodeId }: { nodeId: string }) => {
const dispatch = useAppDispatch();
const hasImageOutput = useHasImageOutput(nodeId);
const withWorkflow = useWithWorkflow(nodeId);
const embedWorkflow = useEmbedWorkflow(nodeId);
const handleChange = useCallback(
(e: ChangeEvent<HTMLInputElement>) => {
@ -21,7 +21,7 @@ const EmbedWorkflowCheckbox = ({ nodeId }: { nodeId: string }) => {
[dispatch, nodeId]
);
if (!hasImageOutput) {
if (!withWorkflow) {
return null;
}

View File

@ -1,10 +1,10 @@
import { Flex } from '@chakra-ui/react';
import { useHasImageOutput } from 'features/nodes/hooks/useHasImageOutput';
import { DRAG_HANDLE_CLASSNAME } from 'features/nodes/types/constants';
import { memo } from 'react';
import EmbedWorkflowCheckbox from './EmbedWorkflowCheckbox';
import SaveToGalleryCheckbox from './SaveToGalleryCheckbox';
import UseCacheCheckbox from './UseCacheCheckbox';
import { useHasImageOutput } from 'features/nodes/hooks/useHasImageOutput';
type Props = {
nodeId: string;
@ -25,8 +25,8 @@ const InvocationNodeFooter = ({ nodeId }: Props) => {
justifyContent: 'space-between',
}}
>
{hasImageOutput && <EmbedWorkflowCheckbox nodeId={nodeId} />}
<UseCacheCheckbox nodeId={nodeId} />
{hasImageOutput && <EmbedWorkflowCheckbox nodeId={nodeId} />}
{hasImageOutput && <SaveToGalleryCheckbox nodeId={nodeId} />}
</Flex>
);

View File

@ -16,6 +16,7 @@ import SchedulerInputField from './inputs/SchedulerInputField';
import StringInputField from './inputs/StringInputField';
import VaeModelInputField from './inputs/VaeModelInputField';
import IPAdapterModelInputField from './inputs/IPAdapterModelInputField';
import BoardInputField from './inputs/BoardInputField';
type InputFieldProps = {
nodeId: string;
@ -99,6 +100,16 @@ const InputFieldRenderer = ({ nodeId, fieldName }: InputFieldProps) => {
);
}
if (field?.type === 'BoardField' && fieldTemplate?.type === 'BoardField') {
return (
<BoardInputField
nodeId={nodeId}
field={field}
fieldTemplate={fieldTemplate}
/>
);
}
if (
field?.type === 'MainModelField' &&
fieldTemplate?.type === 'MainModelField'

View File

@ -0,0 +1,64 @@
import { SelectItem } from '@mantine/core';
import { useAppDispatch } from 'app/store/storeHooks';
import IAIMantineSearchableSelect from 'common/components/IAIMantineSearchableSelect';
import { fieldBoardValueChanged } from 'features/nodes/store/nodesSlice';
import {
BoardInputFieldTemplate,
BoardInputFieldValue,
FieldComponentProps,
} from 'features/nodes/types/types';
import { memo, useCallback } from 'react';
import { useListAllBoardsQuery } from 'services/api/endpoints/boards';
const BoardInputFieldComponent = (
props: FieldComponentProps<BoardInputFieldValue, BoardInputFieldTemplate>
) => {
const { nodeId, field } = props;
const dispatch = useAppDispatch();
const { data, hasBoards } = useListAllBoardsQuery(undefined, {
selectFromResult: ({ data }) => {
const boards: SelectItem[] = [
{
label: 'None',
value: 'none',
},
];
data?.forEach(({ board_id, board_name }) => {
boards.push({
label: board_name,
value: board_id,
});
});
return {
data: boards,
hasBoards: boards.length > 1,
};
},
});
const handleChange = useCallback(
(v: string | null) => {
dispatch(
fieldBoardValueChanged({
nodeId,
fieldName: field.name,
value: v && v !== 'none' ? { board_id: v } : undefined,
})
);
},
[dispatch, field.name, nodeId]
);
return (
<IAIMantineSearchableSelect
className="nowheel nodrag"
value={field.value?.board_id ?? 'none'}
data={data}
onChange={handleChange}
disabled={!hasBoards}
/>
);
};
export default memo(BoardInputFieldComponent);

View File

@ -65,11 +65,6 @@ const SchedulerInputField = (
return (
<IAIMantineSearchableSelect
className="nowheel nodrag"
sx={{
'.mantine-Select-dropdown': {
width: '14rem !important',
},
}}
value={field.value}
data={data}
onChange={handleChange}

View File

@ -143,7 +143,7 @@ export const useBuildNodeData = () => {
notes: '',
isOpen: true,
embedWorkflow: false,
isIntermediate: true,
isIntermediate: type === 'save_image' ? false : true,
inputs,
outputs,
useCache: template.useCache,

View File

@ -17,8 +17,12 @@ export const useHasImageOutput = (nodeId: string) => {
if (!isInvocationNode(node)) {
return false;
}
return some(node.data.outputs, (output) =>
IMAGE_FIELDS.includes(output.type)
return some(
node.data.outputs,
(output) =>
IMAGE_FIELDS.includes(output.type) &&
// the image primitive node does not actually save the image, so do not show the image-saving checkboxes
node.data.type !== 'image'
);
},
defaultSelectorOptions

View File

@ -122,13 +122,16 @@ export const useIsValidConnection = () => {
const isIntToFloat = sourceType === 'integer' && targetType === 'float';
const isEitherAnyType = sourceType === 'Any' || targetType === 'Any';
return (
isCollectionItemToNonCollection ||
isNonCollectionToCollectionItem ||
isAnythingToPolymorphicOfSameBaseType ||
isGenericCollectionToAnyCollectionOrPolymorphic ||
isCollectionToGenericCollection ||
isIntToFloat
isIntToFloat ||
isEitherAnyType
);
}
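The change above adds `isEitherAnyType` to the list of accepted cases, so a connection is permitted whenever either endpoint is the catch-all `Any` type. Stripped of the collection and polymorphic cases, the rule can be sketched as follows (`FieldType` is simplified to `string` here; the real code uses the zod-derived union):

```typescript
// Simplified stand-in for the UI's FieldType union.
type FieldType = string;

// A connection is valid when the types match exactly, when an integer output
// feeds a float input, or when either endpoint is the catch-all 'Any' type.
// (The real validator also handles collection and polymorphic cases.)
const isValidConnection = (
  sourceType: FieldType,
  targetType: FieldType
): boolean => {
  const isIntToFloat = sourceType === 'integer' && targetType === 'float';
  const isEitherAnyType = sourceType === 'Any' || targetType === 'Any';
  return sourceType === targetType || isIntToFloat || isEitherAnyType;
};
```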

View File

@ -0,0 +1,31 @@
import { createSelector } from '@reduxjs/toolkit';
import { stateSelector } from 'app/store/store';
import { useAppSelector } from 'app/store/storeHooks';
import { defaultSelectorOptions } from 'app/store/util/defaultMemoizeOptions';
import { useMemo } from 'react';
import { isInvocationNode } from '../types/types';
export const useWithWorkflow = (nodeId: string) => {
const selector = useMemo(
() =>
createSelector(
stateSelector,
({ nodes }) => {
const node = nodes.nodes.find((node) => node.id === nodeId);
if (!isInvocationNode(node)) {
return false;
}
const nodeTemplate = nodes.nodeTemplates[node?.data.type ?? ''];
if (!nodeTemplate) {
return false;
}
return nodeTemplate.withWorkflow;
},
defaultSelectorOptions
),
[nodeId]
);
const withWorkflow = useAppSelector(selector);
return withWorkflow;
};

View File

@ -30,6 +30,7 @@ import {
import { v4 as uuidv4 } from 'uuid';
import { DRAG_HANDLE_CLASSNAME } from '../types/constants';
import {
BoardInputFieldValue,
BooleanInputFieldValue,
ColorInputFieldValue,
ControlNetModelInputFieldValue,
@ -494,6 +495,12 @@ const nodesSlice = createSlice({
) => {
fieldValueReducer(state, action);
},
fieldBoardValueChanged: (
state,
action: FieldValueAction<BoardInputFieldValue>
) => {
fieldValueReducer(state, action);
},
fieldImageValueChanged: (
state,
action: FieldValueAction<ImageInputFieldValue>
@ -871,7 +878,7 @@ const nodesSlice = createSlice({
builder.addCase(appSocketQueueItemStatusChanged, (state, action) => {
if (['in_progress'].includes(action.payload.data.status)) {
forEach(state.nodeExecutionStates, (nes) => {
nes.status = NodeStatus.IN_PROGRESS;
nes.status = NodeStatus.PENDING;
nes.error = null;
nes.progress = null;
nes.progressImage = null;
@ -897,6 +904,7 @@ export const {
imageCollectionFieldValueChanged,
fieldStringValueChanged,
fieldNumberValueChanged,
fieldBoardValueChanged,
fieldBooleanValueChanged,
fieldImageValueChanged,
fieldColorValueChanged,

View File

@ -116,6 +116,8 @@ export const makeConnectionErrorSelector = (
const isIntToFloat = sourceType === 'integer' && targetType === 'float';
const isEitherAnyType = sourceType === 'Any' || targetType === 'Any';
if (
!(
isCollectionItemToNonCollection ||
@ -123,7 +125,8 @@ export const makeConnectionErrorSelector = (
isAnythingToPolymorphicOfSameBaseType ||
isGenericCollectionToAnyCollectionOrPolymorphic ||
isCollectionToGenericCollection ||
isIntToFloat
isIntToFloat ||
isEitherAnyType
)
) {
return i18n.t('nodes.fieldTypesMustMatch');

View File

@ -1,4 +1,9 @@
import { FieldType, FieldUIConfig } from './types';
import {
FieldType,
FieldTypeMap,
FieldTypeMapWithNumber,
FieldUIConfig,
} from './types';
import { t } from 'i18next';
export const HANDLE_TOOLTIP_OPEN_DELAY = 500;
@ -26,9 +31,11 @@ export const COLLECTION_TYPES: FieldType[] = [
'ConditioningCollection',
'ControlCollection',
'ColorCollection',
'MetadataItemCollection',
'MetadataDictCollection',
];
export const POLYMORPHIC_TYPES = [
export const POLYMORPHIC_TYPES: FieldType[] = [
'IntegerPolymorphic',
'BooleanPolymorphic',
'FloatPolymorphic',
@ -38,9 +45,10 @@ export const POLYMORPHIC_TYPES = [
'ConditioningPolymorphic',
'ControlPolymorphic',
'ColorPolymorphic',
'MetadataItemPolymorphic',
];
export const MODEL_TYPES = [
export const MODEL_TYPES: FieldType[] = [
'IPAdapterModelField',
'ControlNetModelField',
'LoRAModelField',
@ -54,7 +62,7 @@ export const MODEL_TYPES = [
'ClipField',
];
export const COLLECTION_MAP = {
export const COLLECTION_MAP: FieldTypeMapWithNumber = {
integer: 'IntegerCollection',
boolean: 'BooleanCollection',
number: 'FloatCollection',
@ -65,13 +73,15 @@ export const COLLECTION_MAP = {
ConditioningField: 'ConditioningCollection',
ControlField: 'ControlCollection',
ColorField: 'ColorCollection',
MetadataItem: 'MetadataItemCollection',
MetadataDict: 'MetadataDictCollection',
};
export const isCollectionItemType = (
itemType: string | undefined
): itemType is keyof typeof COLLECTION_MAP =>
Boolean(itemType && itemType in COLLECTION_MAP);
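The `itemType is keyof typeof COLLECTION_MAP` predicate is what lets callers index into the map without a cast. A minimal sketch with an abbreviated copy of the map (the real one covers every item type) illustrates the pattern:

```typescript
// Abbreviated copy of the map, for illustration only.
const COLLECTION_MAP = {
  integer: 'IntegerCollection',
  MetadataItem: 'MetadataItemCollection',
  MetadataDict: 'MetadataDictCollection',
} as const;

// The `is keyof typeof` predicate narrows the argument to a valid key.
const isCollectionItemType = (
  itemType: string | undefined
): itemType is keyof typeof COLLECTION_MAP =>
  Boolean(itemType && itemType in COLLECTION_MAP);

// Indexed access typechecks because `t` is narrowed inside the branch.
const toCollectionType = (t: string | undefined) =>
  isCollectionItemType(t) ? COLLECTION_MAP[t] : undefined;
```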
export const SINGLE_TO_POLYMORPHIC_MAP = {
export const SINGLE_TO_POLYMORPHIC_MAP: FieldTypeMapWithNumber = {
integer: 'IntegerPolymorphic',
boolean: 'BooleanPolymorphic',
number: 'FloatPolymorphic',
@ -82,9 +92,10 @@ export const SINGLE_TO_POLYMORPHIC_MAP = {
ConditioningField: 'ConditioningPolymorphic',
ControlField: 'ControlPolymorphic',
ColorField: 'ColorPolymorphic',
MetadataItem: 'MetadataItemPolymorphic',
};
export const POLYMORPHIC_TO_SINGLE_MAP = {
export const POLYMORPHIC_TO_SINGLE_MAP: FieldTypeMap = {
IntegerPolymorphic: 'integer',
BooleanPolymorphic: 'boolean',
FloatPolymorphic: 'float',
@ -94,9 +105,10 @@ export const POLYMORPHIC_TO_SINGLE_MAP = {
ConditioningPolymorphic: 'ConditioningField',
ControlPolymorphic: 'ControlField',
ColorPolymorphic: 'ColorField',
MetadataItemPolymorphic: 'MetadataItem',
};
export const TYPES_WITH_INPUT_COMPONENTS = [
export const TYPES_WITH_INPUT_COMPONENTS: FieldType[] = [
'string',
'StringPolymorphic',
'boolean',
@ -117,6 +129,7 @@ export const TYPES_WITH_INPUT_COMPONENTS = [
'SDXLMainModelField',
'Scheduler',
'IPAdapterModelField',
'BoardField',
];
export const isPolymorphicItemType = (
@ -125,6 +138,37 @@ export const isPolymorphicItemType = (
Boolean(itemType && itemType in SINGLE_TO_POLYMORPHIC_MAP);
export const FIELDS: Record<FieldType, FieldUIConfig> = {
Any: {
color: 'gray.500',
description: 'Any field type is accepted.',
title: 'Any',
},
MetadataDict: {
color: 'gray.500',
description: 'A metadata dict.',
title: 'Metadata Dict',
},
MetadataDictCollection: {
color: 'gray.500',
description: 'A collection of metadata dicts.',
title: 'Metadata Dict Collection',
},
MetadataItem: {
color: 'gray.500',
description: 'A metadata item.',
title: 'Metadata Item',
},
MetadataItemCollection: {
color: 'gray.500',
description: 'A collection of metadata items.',
title: 'Metadata Item Collection',
},
MetadataItemPolymorphic: {
color: 'gray.500',
description:
'MetadataItem or MetadataItemCollection field types are accepted.',
title: 'Metadata Item Polymorphic',
},
boolean: {
color: 'green.500',
description: t('nodes.booleanDescription'),
@ -240,6 +284,11 @@ export const FIELDS: Record<FieldType, FieldUIConfig> = {
description: t('nodes.imageFieldDescription'),
title: t('nodes.imageField'),
},
BoardField: {
color: 'purple.500',
description: 'A board field.',
title: 'Board',
},
ImagePolymorphic: {
color: 'purple.500',
description: t('nodes.imagePolymorphicDescription'),

View File

@ -54,6 +54,10 @@ export type InvocationTemplate = {
* The type of this node's output
*/
outputType: string; // TODO: generate a union of output types
/**
* Whether or not this invocation supports workflows
*/
withWorkflow: boolean;
/**
* The invocation's version.
*/
@ -72,6 +76,8 @@ export type FieldUIConfig = {
// TODO: Get this from the OpenAPI schema? may be tricky...
export const zFieldType = z.enum([
'Any',
'BoardField',
'boolean',
'BooleanCollection',
'BooleanPolymorphic',
@ -106,6 +112,11 @@ export const zFieldType = z.enum([
'LatentsPolymorphic',
'LoRAModelField',
'MainModelField',
'MetadataDict',
'MetadataDictCollection',
'MetadataItem',
'MetadataItemCollection',
'MetadataItemPolymorphic',
'ONNXModelField',
'Scheduler',
'SDXLMainModelField',
@ -119,6 +130,10 @@ export const zFieldType = z.enum([
]);
export type FieldType = z.infer<typeof zFieldType>;
export type FieldTypeMap = { [key in FieldType]?: FieldType };
export type FieldTypeMapWithNumber = {
[key in FieldType | 'number']?: FieldType;
};
export const zReservedFieldType = z.enum([
'WorkflowField',
@ -187,6 +202,11 @@ export const zImageField = z.object({
});
export type ImageField = z.infer<typeof zImageField>;
export const zBoardField = z.object({
board_id: z.string().trim().min(1),
});
export type BoardField = z.infer<typeof zBoardField>;
export const zLatentsField = z.object({
latents_name: z.string().trim().min(1),
seed: z.number().int().optional(),
@ -494,6 +514,12 @@ export const zImageInputFieldValue = zInputFieldValueBase.extend({
});
export type ImageInputFieldValue = z.infer<typeof zImageInputFieldValue>;
export const zBoardInputFieldValue = zInputFieldValueBase.extend({
type: z.literal('BoardField'),
value: zBoardField.optional(),
});
export type BoardInputFieldValue = z.infer<typeof zBoardInputFieldValue>;
export const zImagePolymorphicInputFieldValue = zInputFieldValueBase.extend({
type: z.literal('ImagePolymorphic'),
value: zImageField.optional(),
@ -591,6 +617,58 @@ export type CollectionItemInputFieldValue = z.infer<
typeof zCollectionItemInputFieldValue
>;
export const zMetadataItem = z.object({
label: z.string(),
value: z.any(),
});
export type MetadataItem = z.infer<typeof zMetadataItem>;
export const zMetadataItemInputFieldValue = zInputFieldValueBase.extend({
type: z.literal('MetadataItem'),
value: zMetadataItem.optional(),
});
export type MetadataItemInputFieldValue = z.infer<
typeof zMetadataItemInputFieldValue
>;
export const zMetadataItemCollectionInputFieldValue =
zInputFieldValueBase.extend({
type: z.literal('MetadataItemCollection'),
value: z.array(zMetadataItem).optional(),
});
export type MetadataItemCollectionInputFieldValue = z.infer<
typeof zMetadataItemCollectionInputFieldValue
>;
export const zMetadataItemPolymorphicInputFieldValue =
zInputFieldValueBase.extend({
type: z.literal('MetadataItemPolymorphic'),
value: z.union([zMetadataItem, z.array(zMetadataItem)]).optional(),
});
export type MetadataItemPolymorphicInputFieldValue = z.infer<
typeof zMetadataItemPolymorphicInputFieldValue
>;
export const zMetadataDict = z.record(z.any());
export type MetadataDict = z.infer<typeof zMetadataDict>;
export const zMetadataDictInputFieldValue = zInputFieldValueBase.extend({
type: z.literal('MetadataDict'),
value: zMetadataDict.optional(),
});
export type MetadataDictInputFieldValue = z.infer<
typeof zMetadataDictInputFieldValue
>;
export const zMetadataDictCollectionInputFieldValue =
zInputFieldValueBase.extend({
type: z.literal('MetadataDictCollection'),
value: z.array(zMetadataDict).optional(),
});
export type MetadataDictCollectionInputFieldValue = z.infer<
typeof zMetadataDictCollectionInputFieldValue
>;
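`MetadataDict` is a plain string-keyed record, and the commit description notes a new `MergeMetadataDictInvocation` that merges a collection of dicts. Its merge semantics can be sketched in plain TypeScript; the assumption that later dicts override earlier keys is mine, not stated by the source:

```typescript
type MetadataDict = Record<string, unknown>;

// Merge a collection of metadata dicts into one; later dicts are assumed to
// override earlier ones on key conflicts.
const mergeMetadataDicts = (dicts: MetadataDict[]): MetadataDict =>
  dicts.reduce<MetadataDict>((acc, d) => ({ ...acc, ...d }), {});

const merged = mergeMetadataDicts([
  { seed: 123, steps: 30 },
  { steps: 50, cfg_scale: 7.5 },
]);
// merged: { seed: 123, steps: 50, cfg_scale: 7.5 }
```

This is what makes the merge node useful in the linear UI: many small dicts can be built independently and combined in a single node rather than chaining one item node per metadata field.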
export const zColorField = z.object({
r: z.number().int().min(0).max(255),
g: z.number().int().min(0).max(255),
@ -629,7 +707,14 @@ export type SchedulerInputFieldValue = z.infer<
typeof zSchedulerInputFieldValue
>;
export const zAnyInputFieldValue = zInputFieldValueBase.extend({
type: z.literal('Any'),
value: z.any().optional(),
});
export const zInputFieldValue = z.discriminatedUnion('type', [
zAnyInputFieldValue,
zBoardInputFieldValue,
zBooleanCollectionInputFieldValue,
zBooleanInputFieldValue,
zBooleanPolymorphicInputFieldValue,
@ -673,6 +758,11 @@ export const zInputFieldValue = z.discriminatedUnion('type', [
zUNetInputFieldValue,
zVaeInputFieldValue,
zVaeModelInputFieldValue,
zMetadataItemInputFieldValue,
zMetadataItemCollectionInputFieldValue,
zMetadataItemPolymorphicInputFieldValue,
zMetadataDictInputFieldValue,
zMetadataDictCollectionInputFieldValue,
]);
export type InputFieldValue = z.infer<typeof zInputFieldValue>;
@ -685,6 +775,11 @@ export type InputFieldTemplateBase = {
fieldKind: 'input';
} & _InputField;
export type AnyInputFieldTemplate = InputFieldTemplateBase & {
type: 'Any';
default: undefined;
};
export type IntegerInputFieldTemplate = InputFieldTemplateBase & {
type: 'integer';
default: number;
@ -770,6 +865,11 @@ export type BooleanPolymorphicInputFieldTemplate = Omit<
type: 'BooleanPolymorphic';
};
export type BoardInputFieldTemplate = InputFieldTemplateBase & {
default: BoardField;
type: 'BoardField';
};
export type ImageInputFieldTemplate = InputFieldTemplateBase & {
default: ImageField;
type: 'ImageField';
@ -833,6 +933,11 @@ export type UNetInputFieldTemplate = InputFieldTemplateBase & {
type: 'UNetField';
};
export type MetadataItemFieldTemplate = InputFieldTemplateBase & {
default: undefined;
type: 'MetadataItem';
};
export type ClipInputFieldTemplate = InputFieldTemplateBase & {
default: undefined;
type: 'ClipField';
@ -945,6 +1050,35 @@ export type WorkflowInputFieldTemplate = InputFieldTemplateBase & {
type: 'WorkflowField';
};
export type MetadataItemInputFieldTemplate = InputFieldTemplateBase & {
default: undefined;
type: 'MetadataItem';
};
export type MetadataItemCollectionInputFieldTemplate =
InputFieldTemplateBase & {
default: undefined;
type: 'MetadataItemCollection';
};
export type MetadataItemPolymorphicInputFieldTemplate = Omit<
MetadataItemInputFieldTemplate,
'type'
> & {
type: 'MetadataItemPolymorphic';
};
export type MetadataDictInputFieldTemplate = InputFieldTemplateBase & {
default: undefined;
type: 'MetadataDict';
};
export type MetadataDictCollectionInputFieldTemplate =
InputFieldTemplateBase & {
default: undefined;
type: 'MetadataDictCollection';
};
/**
* An input field template is generated on each page load from the OpenAPI schema.
*
@ -952,6 +1086,8 @@ export type WorkflowInputFieldTemplate = InputFieldTemplateBase & {
* maximum length, pattern to match, etc).
*/
export type InputFieldTemplate =
| AnyInputFieldTemplate
| BoardInputFieldTemplate
| BooleanCollectionInputFieldTemplate
| BooleanPolymorphicInputFieldTemplate
| BooleanInputFieldTemplate
@ -994,7 +1130,12 @@ export type InputFieldTemplate =
| StringInputFieldTemplate
| UNetInputFieldTemplate
| VaeInputFieldTemplate
| VaeModelInputFieldTemplate;
| VaeModelInputFieldTemplate
| MetadataItemInputFieldTemplate
| MetadataItemCollectionInputFieldTemplate
| MetadataDictInputFieldTemplate
| MetadataItemPolymorphicInputFieldTemplate
| MetadataDictCollectionInputFieldTemplate;
export const isInputFieldValue = (
field?: InputFieldValue | OutputFieldValue
@ -1111,7 +1252,7 @@ export const isInvocationFieldSchema = (
export type InvocationEdgeExtra = { type: 'default' | 'collapsed' };
const zLoRAMetadataItem = z.object({
export const zLoRAMetadataItem = z.object({
lora: zLoRAModelField.deepPartial(),
weight: z.number(),
});
@ -1138,15 +1279,7 @@ export const zCoreMetadata = z
.nullish()
.catch(null),
controlnets: z.array(zControlField.deepPartial()).nullish().catch(null),
loras: z
.array(
z.object({
lora: zLoRAModelField.deepPartial(),
weight: z.number(),
})
)
.nullish()
.catch(null),
loras: z.array(zLoRAMetadataItem).nullish().catch(null),
vae: zVaeModelField.nullish().catch(null),
strength: z.number().nullish().catch(null),
init_image: z.string().nullish().catch(null),

View File

@ -1,5 +1,6 @@
import { isBoolean, isInteger, isNumber, isString } from 'lodash-es';
import { OpenAPIV3 } from 'openapi-types';
import { ControlField } from 'services/api/types';
import {
COLLECTION_MAP,
POLYMORPHIC_TYPES,
@ -8,36 +9,61 @@ import {
isPolymorphicItemType,
} from '../types/constants';
import {
AnyInputFieldTemplate,
BoardInputFieldTemplate,
BooleanCollectionInputFieldTemplate,
BooleanInputFieldTemplate,
BooleanPolymorphicInputFieldTemplate,
ClipInputFieldTemplate,
CollectionInputFieldTemplate,
CollectionItemInputFieldTemplate,
ColorCollectionInputFieldTemplate,
ColorInputFieldTemplate,
ColorPolymorphicInputFieldTemplate,
ConditioningCollectionInputFieldTemplate,
ConditioningField,
ConditioningInputFieldTemplate,
ConditioningPolymorphicInputFieldTemplate,
ControlCollectionInputFieldTemplate,
ControlInputFieldTemplate,
ControlNetModelInputFieldTemplate,
ControlPolymorphicInputFieldTemplate,
DenoiseMaskInputFieldTemplate,
EnumInputFieldTemplate,
FieldType,
FloatCollectionInputFieldTemplate,
FloatPolymorphicInputFieldTemplate,
FloatInputFieldTemplate,
FloatPolymorphicInputFieldTemplate,
IPAdapterInputFieldTemplate,
IPAdapterModelInputFieldTemplate,
ImageCollectionInputFieldTemplate,
ImageField,
ImageInputFieldTemplate,
ImagePolymorphicInputFieldTemplate,
InputFieldTemplate,
InputFieldTemplateBase,
IntegerCollectionInputFieldTemplate,
IntegerInputFieldTemplate,
IntegerPolymorphicInputFieldTemplate,
InvocationFieldSchema,
InvocationSchemaObject,
LatentsCollectionInputFieldTemplate,
LatentsField,
LatentsInputFieldTemplate,
LatentsPolymorphicInputFieldTemplate,
LoRAModelInputFieldTemplate,
MainModelInputFieldTemplate,
MetadataDictCollectionInputFieldTemplate,
MetadataDictInputFieldTemplate,
MetadataItemCollectionInputFieldTemplate,
MetadataItemInputFieldTemplate,
MetadataItemPolymorphicInputFieldTemplate,
SDXLMainModelInputFieldTemplate,
SDXLRefinerModelInputFieldTemplate,
SchedulerInputFieldTemplate,
StringCollectionInputFieldTemplate,
StringInputFieldTemplate,
StringPolymorphicInputFieldTemplate,
UNetInputFieldTemplate,
VaeInputFieldTemplate,
VaeModelInputFieldTemplate,
@ -45,25 +71,7 @@ import {
isNonArraySchemaObject,
isRefObject,
isSchemaObject,
ControlPolymorphicInputFieldTemplate,
ColorPolymorphicInputFieldTemplate,
ColorCollectionInputFieldTemplate,
IntegerPolymorphicInputFieldTemplate,
StringPolymorphicInputFieldTemplate,
BooleanPolymorphicInputFieldTemplate,
ImagePolymorphicInputFieldTemplate,
LatentsPolymorphicInputFieldTemplate,
LatentsCollectionInputFieldTemplate,
ConditioningPolymorphicInputFieldTemplate,
ConditioningCollectionInputFieldTemplate,
ControlCollectionInputFieldTemplate,
ImageField,
LatentsField,
ConditioningField,
IPAdapterInputFieldTemplate,
IPAdapterModelInputFieldTemplate,
} from '../types/types';
import { ControlField } from 'services/api/types';
export type BaseFieldProperties = 'name' | 'title' | 'description';
@ -450,6 +458,19 @@ const buildIPAdapterModelInputFieldTemplate = ({
return template;
};
const buildBoardInputFieldTemplate = ({
schemaObject,
baseField,
}: BuildInputFieldArg): BoardInputFieldTemplate => {
const template: BoardInputFieldTemplate = {
...baseField,
type: 'BoardField',
default: schemaObject.default ?? undefined,
};
return template;
};
const buildImageInputFieldTemplate = ({
schemaObject,
baseField,
@ -716,6 +737,78 @@ const buildCollectionItemInputFieldTemplate = ({
return template;
};
const buildAnyInputFieldTemplate = ({
baseField,
}: BuildInputFieldArg): AnyInputFieldTemplate => {
const template: AnyInputFieldTemplate = {
...baseField,
type: 'Any',
default: undefined,
};
return template;
};
const buildMetadataItemInputFieldTemplate = ({
baseField,
}: BuildInputFieldArg): MetadataItemInputFieldTemplate => {
const template: MetadataItemInputFieldTemplate = {
...baseField,
type: 'MetadataItem',
default: undefined,
};
return template;
};
const buildMetadataItemCollectionInputFieldTemplate = ({
baseField,
}: BuildInputFieldArg): MetadataItemCollectionInputFieldTemplate => {
const template: MetadataItemCollectionInputFieldTemplate = {
...baseField,
type: 'MetadataItemCollection',
default: undefined,
};
return template;
};
const buildMetadataItemPolymorphicInputFieldTemplate = ({
baseField,
}: BuildInputFieldArg): MetadataItemPolymorphicInputFieldTemplate => {
const template: MetadataItemPolymorphicInputFieldTemplate = {
...baseField,
type: 'MetadataItemPolymorphic',
default: undefined,
};
return template;
};
const buildMetadataDictInputFieldTemplate = ({
baseField,
}: BuildInputFieldArg): MetadataDictInputFieldTemplate => {
const template: MetadataDictInputFieldTemplate = {
...baseField,
type: 'MetadataDict',
default: undefined,
};
return template;
};
const buildMetadataDictCollectionInputFieldTemplate = ({
baseField,
}: BuildInputFieldArg): MetadataDictCollectionInputFieldTemplate => {
const template: MetadataDictCollectionInputFieldTemplate = {
...baseField,
type: 'MetadataDictCollection',
default: undefined,
};
return template;
};
const buildColorInputFieldTemplate = ({
schemaObject,
baseField,
@ -851,7 +944,11 @@ export const getFieldType = (
return;
};
const TEMPLATE_BUILDER_MAP = {
const TEMPLATE_BUILDER_MAP: {
[key in FieldType]?: (arg: BuildInputFieldArg) => InputFieldTemplate;
} = {
BoardField: buildBoardInputFieldTemplate,
Any: buildAnyInputFieldTemplate,
boolean: buildBooleanInputFieldTemplate,
BooleanCollection: buildBooleanCollectionInputFieldTemplate,
BooleanPolymorphic: buildBooleanPolymorphicInputFieldTemplate,
@ -885,6 +982,11 @@ const TEMPLATE_BUILDER_MAP = {
LatentsField: buildLatentsInputFieldTemplate,
LatentsPolymorphic: buildLatentsPolymorphicInputFieldTemplate,
LoRAModelField: buildLoRAModelInputFieldTemplate,
MetadataItem: buildMetadataItemInputFieldTemplate,
MetadataItemCollection: buildMetadataItemCollectionInputFieldTemplate,
MetadataItemPolymorphic: buildMetadataItemPolymorphicInputFieldTemplate,
MetadataDict: buildMetadataDictInputFieldTemplate,
MetadataDictCollection: buildMetadataDictCollectionInputFieldTemplate,
MainModelField: buildMainModelInputFieldTemplate,
Scheduler: buildSchedulerInputFieldTemplate,
SDXLMainModelField: buildSDXLMainModelInputFieldTemplate,
@ -937,7 +1039,13 @@ export const buildInputFieldTemplate = (
return;
}
return TEMPLATE_BUILDER_MAP[fieldType]({
const builder = TEMPLATE_BUILDER_MAP[fieldType];
if (!builder) {
return;
}
return builder({
schemaObject: fieldSchema,
baseField,
});
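The typed builder map plus the `if (!builder) return;` guard above form one pattern: an optional mapped type over a string-literal union, whose lookups are `T | undefined` and therefore must be guarded before the call. A minimal standalone sketch of that pattern (simplified, hypothetical field types, not the app's real `FieldType` union):

```typescript
// Optional mapped type over a string-literal union: lookups are
// `(() => Template) | undefined`, so callers must guard before invoking.
type FieldType = 'MetadataItem' | 'MetadataDict' | 'Unhandled';

type Template = { type: FieldType; default: undefined };

const TEMPLATE_BUILDER_MAP: { [key in FieldType]?: () => Template } = {
  MetadataItem: () => ({ type: 'MetadataItem', default: undefined }),
  MetadataDict: () => ({ type: 'MetadataDict', default: undefined }),
  // 'Unhandled' is intentionally omitted - the `?` makes that legal
};

const buildTemplate = (fieldType: FieldType): Template | undefined => {
  const builder = TEMPLATE_BUILDER_MAP[fieldType];
  if (!builder) {
    // mirrors the early return in buildInputFieldTemplate above
    return undefined;
  }
  return builder();
};
```

The `?` on the mapped key is what allows builders to be added incrementally as new field types land, at the cost of the runtime guard.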

View File

@ -1,7 +1,11 @@
import { InputFieldTemplate, InputFieldValue } from '../types/types';
import { FieldType, InputFieldTemplate, InputFieldValue } from '../types/types';
const FIELD_VALUE_FALLBACK_MAP = {
const FIELD_VALUE_FALLBACK_MAP: {
[key in FieldType]: InputFieldValue['value'];
} = {
Any: undefined,
enum: '',
BoardField: undefined,
boolean: false,
BooleanCollection: [],
BooleanPolymorphic: false,
@ -33,6 +37,11 @@ const FIELD_VALUE_FALLBACK_MAP = {
LatentsCollection: [],
LatentsField: undefined,
LatentsPolymorphic: undefined,
MetadataItem: undefined,
MetadataItemCollection: [],
MetadataItemPolymorphic: undefined,
MetadataDict: undefined,
MetadataDictCollection: [],
LoRAModelField: undefined,
MainModelField: undefined,
ONNXModelField: undefined,

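In contrast to the builder map, the fallback map above uses a non-optional mapped type, so the compiler rejects the literal if any member of `FieldType` lacks an entry. A standalone sketch of that exhaustiveness pattern, using a hypothetical three-member union rather than the app's full `FieldType`:

```typescript
// Non-optional mapped type: every member of the union MUST appear,
// so adding a new field type without a fallback is a compile error.
type FieldType = 'MetadataItem' | 'MetadataItemCollection' | 'MetadataDict';
type FieldValue = undefined | unknown[];

const FIELD_VALUE_FALLBACK_MAP: { [key in FieldType]: FieldValue } = {
  MetadataItem: undefined,
  MetadataItemCollection: [],
  MetadataDict: undefined,
  // deleting any line above breaks the build
};

const getFallback = (t: FieldType): FieldValue => FIELD_VALUE_FALLBACK_MAP[t];
```

This is the usual trade-off between the two maps: the optional one tolerates gaps and pays with a runtime guard, the required one guarantees coverage at compile time.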
View File

@ -1,18 +1,16 @@
import { RootState } from 'app/store/store';
import { getValidControlNets } from 'features/controlNet/util/getValidControlNets';
import { omit } from 'lodash-es';
import {
CollectInvocation,
ControlField,
ControlNetInvocation,
MetadataAccumulatorInvocation,
} from 'services/api/types';
import { NonNullableGraph } from '../../types/types';
import { NonNullableGraph, zControlField } from '../../types/types';
import {
CANVAS_COHERENCE_DENOISE_LATENTS,
CONTROL_NET_COLLECT,
METADATA_ACCUMULATOR,
} from './constants';
import { addMainMetadata } from './metadata';
export const addControlNetToLinearGraph = (
state: RootState,
@ -23,12 +21,9 @@ export const addControlNetToLinearGraph = (
const validControlNets = getValidControlNets(controlNets);
const metadataAccumulator = graph.nodes[METADATA_ACCUMULATOR] as
| MetadataAccumulatorInvocation
| undefined;
if (isControlNetEnabled && Boolean(validControlNets.length)) {
if (validControlNets.length) {
const controlnets: ControlField[] = [];
// We have multiple controlnets, add ControlNet collector
const controlNetIterateNode: CollectInvocation = {
id: CONTROL_NET_COLLECT,
@ -87,15 +82,7 @@ export const addControlNetToLinearGraph = (
graph.nodes[controlNetNode.id] = controlNetNode as ControlNetInvocation;
if (metadataAccumulator?.controlnets) {
// metadata accumulator only needs a control field - not the whole node
// extract what we need and add to the accumulator
const controlField = omit(controlNetNode, [
'id',
'type',
]) as ControlField;
metadataAccumulator.controlnets.push(controlField);
}
controlnets.push(zControlField.parse(controlNetNode));
graph.edges.push({
source: { node_id: controlNetNode.id, field: 'control' },
@ -115,6 +102,8 @@ export const addControlNetToLinearGraph = (
});
}
});
addMainMetadata(graph, { controlnets });
}
}
};

View File

@ -38,15 +38,7 @@ export const addIPAdapterToLinearGraph = (
graph.nodes[ipAdapterNode.id] = ipAdapterNode as IPAdapterInvocation;
// if (metadataAccumulator?.ip_adapters) {
// // metadata accumulator only needs the ip_adapter field - not the whole node
// // extract what we need and add to the accumulator
// const ipAdapterField = omit(ipAdapterNode, [
// 'id',
// 'type',
// ]) as IPAdapterField;
// metadataAccumulator.ip_adapters.push(ipAdapterField);
// }
// TODO: add metadata
graph.edges.push({
source: { node_id: ipAdapterNode.id, field: 'ip_adapter' },

View File

@ -1,21 +1,22 @@
import { RootState } from 'app/store/store';
import { NonNullableGraph } from 'features/nodes/types/types';
import {
LoRAMetadataItem,
NonNullableGraph,
zLoRAMetadataItem,
} from 'features/nodes/types/types';
import { forEach, size } from 'lodash-es';
import { LoraLoaderInvocation } from 'services/api/types';
import {
LoraLoaderInvocation,
MetadataAccumulatorInvocation,
} from 'services/api/types';
import {
CANVAS_COHERENCE_DENOISE_LATENTS,
CANVAS_INPAINT_GRAPH,
CANVAS_OUTPAINT_GRAPH,
CANVAS_COHERENCE_DENOISE_LATENTS,
CLIP_SKIP,
LORA_LOADER,
MAIN_MODEL_LOADER,
METADATA_ACCUMULATOR,
NEGATIVE_CONDITIONING,
POSITIVE_CONDITIONING,
} from './constants';
import { addMainMetadata } from './metadata';
export const addLoRAsToGraph = (
state: RootState,
@ -33,29 +34,29 @@ export const addLoRAsToGraph = (
const { loras } = state.lora;
const loraCount = size(loras);
const metadataAccumulator = graph.nodes[METADATA_ACCUMULATOR] as
| MetadataAccumulatorInvocation
| undefined;
if (loraCount > 0) {
// Remove modelLoaderNodeId unet connection to feed it to LoRAs
graph.edges = graph.edges.filter(
(e) =>
!(
e.source.node_id === modelLoaderNodeId &&
['unet'].includes(e.source.field)
)
);
// Remove CLIP_SKIP connections to conditionings to feed it through LoRAs
graph.edges = graph.edges.filter(
(e) =>
!(e.source.node_id === CLIP_SKIP && ['clip'].includes(e.source.field))
);
if (loraCount === 0) {
return;
}
// Remove modelLoaderNodeId unet connection to feed it to LoRAs
graph.edges = graph.edges.filter(
(e) =>
!(
e.source.node_id === modelLoaderNodeId &&
['unet'].includes(e.source.field)
)
);
// Remove CLIP_SKIP connections to conditionings to feed it through LoRAs
graph.edges = graph.edges.filter(
(e) =>
!(e.source.node_id === CLIP_SKIP && ['clip'].includes(e.source.field))
);
// we need to remember the last lora so we can chain from it
let lastLoraNodeId = '';
let currentLoraIndex = 0;
const loraMetadata: LoRAMetadataItem[] = [];
forEach(loras, (lora) => {
const { model_name, base_model, weight } = lora;
@ -69,13 +70,12 @@ export const addLoRAsToGraph = (
weight,
};
// add the lora to the metadata accumulator
if (metadataAccumulator?.loras) {
metadataAccumulator.loras.push({
loraMetadata.push(
zLoRAMetadataItem.parse({
lora: { model_name, base_model },
weight,
});
}
})
);
// add to graph
graph.nodes[currentLoraNodeId] = loraLoaderNode;
@ -182,4 +182,6 @@ export const addLoRAsToGraph = (
lastLoraNodeId = currentLoraNodeId;
currentLoraIndex += 1;
});
addMainMetadata(graph, { loras: loraMetadata });
};

View File

@ -1,14 +1,14 @@
import { RootState } from 'app/store/store';
import { NonNullableGraph } from 'features/nodes/types/types';
import { forEach, size } from 'lodash-es';
import {
MetadataAccumulatorInvocation,
SDXLLoraLoaderInvocation,
} from 'services/api/types';
LoRAMetadataItem,
NonNullableGraph,
zLoRAMetadataItem,
} from 'features/nodes/types/types';
import { forEach, size } from 'lodash-es';
import { SDXLLoraLoaderInvocation } from 'services/api/types';
import {
CANVAS_COHERENCE_DENOISE_LATENTS,
LORA_LOADER,
METADATA_ACCUMULATOR,
NEGATIVE_CONDITIONING,
POSITIVE_CONDITIONING,
SDXL_CANVAS_INPAINT_GRAPH,
@ -17,6 +17,7 @@ import {
SDXL_REFINER_INPAINT_CREATE_MASK,
SEAMLESS,
} from './constants';
import { addMainMetadata } from './metadata';
export const addSDXLLoRAsToGraph = (
state: RootState,
@ -34,9 +35,12 @@ export const addSDXLLoRAsToGraph = (
const { loras } = state.lora;
const loraCount = size(loras);
const metadataAccumulator = graph.nodes[METADATA_ACCUMULATOR] as
| MetadataAccumulatorInvocation
| undefined;
if (loraCount === 0) {
return;
}
const loraMetadata: LoRAMetadataItem[] = [];
// Handle Seamless Plugs
const unetLoaderId = modelLoaderNodeId;
@ -47,22 +51,17 @@ export const addSDXLLoRAsToGraph = (
clipLoaderId = SDXL_MODEL_LOADER;
}
if (loraCount > 0) {
// Remove modelLoaderNodeId unet/clip/clip2 connections to feed it to LoRAs
graph.edges = graph.edges.filter(
(e) =>
!(
e.source.node_id === unetLoaderId && ['unet'].includes(e.source.field)
) &&
!(
e.source.node_id === clipLoaderId && ['clip'].includes(e.source.field)
) &&
!(
e.source.node_id === clipLoaderId &&
['clip2'].includes(e.source.field)
)
);
}
// Remove modelLoaderNodeId unet/clip/clip2 connections to feed it to LoRAs
graph.edges = graph.edges.filter(
(e) =>
!(
e.source.node_id === unetLoaderId && ['unet'].includes(e.source.field)
) &&
!(
e.source.node_id === clipLoaderId && ['clip'].includes(e.source.field)
) &&
!(e.source.node_id === clipLoaderId && ['clip2'].includes(e.source.field))
);
// we need to remember the last lora so we can chain from it
let lastLoraNodeId = '';
@ -80,16 +79,12 @@ export const addSDXLLoRAsToGraph = (
weight,
};
// add the lora to the metadata accumulator
if (metadataAccumulator) {
if (!metadataAccumulator.loras) {
metadataAccumulator.loras = [];
}
metadataAccumulator.loras.push({
loraMetadata.push(
zLoRAMetadataItem.parse({
lora: { model_name, base_model },
weight,
});
}
})
);
// add to graph
graph.nodes[currentLoraNodeId] = loraLoaderNode;
@ -242,4 +237,6 @@ export const addSDXLLoRAsToGraph = (
lastLoraNodeId = currentLoraNodeId;
currentLoraIndex += 1;
});
addMainMetadata(graph, { loras: loraMetadata });
};

View File

@ -2,7 +2,6 @@ import { RootState } from 'app/store/store';
import {
CreateDenoiseMaskInvocation,
ImageDTO,
MetadataAccumulatorInvocation,
SeamlessModeInvocation,
} from 'services/api/types';
import { NonNullableGraph } from '../../types/types';
@ -12,7 +11,6 @@ import {
LATENTS_TO_IMAGE,
MASK_COMBINE,
MASK_RESIZE_UP,
METADATA_ACCUMULATOR,
SDXL_CANVAS_IMAGE_TO_IMAGE_GRAPH,
SDXL_CANVAS_INPAINT_GRAPH,
SDXL_CANVAS_OUTPAINT_GRAPH,
@ -26,6 +24,7 @@ import {
SDXL_REFINER_SEAMLESS,
} from './constants';
import { buildSDXLStylePrompts } from './helpers/craftSDXLStylePrompt';
import { addMainMetadata } from './metadata';
export const addSDXLRefinerToGraph = (
state: RootState,
@ -57,21 +56,15 @@ export const addSDXLRefinerToGraph = (
return;
}
const metadataAccumulator = graph.nodes[METADATA_ACCUMULATOR] as
| MetadataAccumulatorInvocation
| undefined;
if (metadataAccumulator) {
metadataAccumulator.refiner_model = refinerModel;
metadataAccumulator.refiner_positive_aesthetic_score =
refinerPositiveAestheticScore;
metadataAccumulator.refiner_negative_aesthetic_score =
refinerNegativeAestheticScore;
metadataAccumulator.refiner_cfg_scale = refinerCFGScale;
metadataAccumulator.refiner_scheduler = refinerScheduler;
metadataAccumulator.refiner_start = refinerStart;
metadataAccumulator.refiner_steps = refinerSteps;
}
addMainMetadata(graph, {
refiner_model: refinerModel,
refiner_positive_aesthetic_score: refinerPositiveAestheticScore,
refiner_negative_aesthetic_score: refinerNegativeAestheticScore,
refiner_cfg_scale: refinerCFGScale,
refiner_scheduler: refinerScheduler,
refiner_start: refinerStart,
refiner_steps: refinerSteps,
});
const modelLoaderId = modelLoaderNodeId
? modelLoaderNodeId

View File

@ -1,18 +1,14 @@
import { RootState } from 'app/store/store';
import { NonNullableGraph } from 'features/nodes/types/types';
import { activeTabNameSelector } from 'features/ui/store/uiSelectors';
import { SaveImageInvocation } from 'services/api/types';
import {
CANVAS_OUTPUT,
LATENTS_TO_IMAGE,
METADATA_ACCUMULATOR,
NSFW_CHECKER,
SAVE_IMAGE,
WATERMARKER,
} from './constants';
import {
MetadataAccumulatorInvocation,
SaveImageInvocation,
} from 'services/api/types';
import { RootState } from 'app/store/store';
import { activeTabNameSelector } from 'features/ui/store/uiSelectors';
/**
* Adds a `save_image` output node to the linear/canvas graph with `use_cache` set to `false`, and assigns the gallery's auto-add board.
@ -24,33 +20,18 @@ export const addSaveImageNode = (
const activeTabName = activeTabNameSelector(state);
const is_intermediate =
activeTabName === 'unifiedCanvas' ? !state.canvas.shouldAutoSave : false;
const { autoAddBoardId } = state.gallery;
const saveImageNode: SaveImageInvocation = {
id: SAVE_IMAGE,
type: 'save_image',
is_intermediate,
use_cache: false,
board: autoAddBoardId === 'none' ? undefined : { board_id: autoAddBoardId },
};
graph.nodes[SAVE_IMAGE] = saveImageNode;
const metadataAccumulator = graph.nodes[METADATA_ACCUMULATOR] as
| MetadataAccumulatorInvocation
| undefined;
if (metadataAccumulator) {
graph.edges.push({
source: {
node_id: METADATA_ACCUMULATOR,
field: 'metadata',
},
destination: {
node_id: SAVE_IMAGE,
field: 'metadata',
},
});
}
const destination = {
node_id: SAVE_IMAGE,
field: 'image',

View File

@ -1,6 +1,7 @@
import { RootState } from 'app/store/store';
import { SeamlessModeInvocation } from 'services/api/types';
import { NonNullableGraph } from '../../types/types';
import { addMainMetadata } from './metadata';
import {
CANVAS_COHERENCE_DENOISE_LATENTS,
CANVAS_INPAINT_GRAPH,
@ -31,6 +32,11 @@ export const addSeamlessToLinearGraph = (
seamless_y: seamlessYAxis,
} as SeamlessModeInvocation;
addMainMetadata(graph, {
seamless_x: seamlessXAxis,
seamless_y: seamlessYAxis,
});
let denoisingNodeId = DENOISE_LATENTS;
if (

View File

@ -1,6 +1,5 @@
import { RootState } from 'app/store/store';
import { NonNullableGraph } from 'features/nodes/types/types';
import { MetadataAccumulatorInvocation } from 'services/api/types';
import {
CANVAS_COHERENCE_INPAINT_CREATE_MASK,
CANVAS_IMAGE_TO_IMAGE_GRAPH,
@ -14,7 +13,6 @@ import {
INPAINT_IMAGE,
LATENTS_TO_IMAGE,
MAIN_MODEL_LOADER,
METADATA_ACCUMULATOR,
ONNX_MODEL_LOADER,
SDXL_CANVAS_IMAGE_TO_IMAGE_GRAPH,
SDXL_CANVAS_INPAINT_GRAPH,
@ -26,6 +24,7 @@ import {
TEXT_TO_IMAGE_GRAPH,
VAE_LOADER,
} from './constants';
import { addMainMetadata } from './metadata';
export const addVAEToGraph = (
state: RootState,
@ -41,9 +40,6 @@ export const addVAEToGraph = (
);
const isAutoVae = !vae;
const metadataAccumulator = graph.nodes[METADATA_ACCUMULATOR] as
| MetadataAccumulatorInvocation
| undefined;
if (!isAutoVae) {
graph.nodes[VAE_LOADER] = {
@ -181,7 +177,7 @@ export const addVAEToGraph = (
}
}
if (vae && metadataAccumulator) {
metadataAccumulator.vae = vae;
if (vae) {
addMainMetadata(graph, { vae });
}
};

View File

@ -5,14 +5,8 @@ import {
ImageNSFWBlurInvocation,
ImageWatermarkInvocation,
LatentsToImageInvocation,
MetadataAccumulatorInvocation,
} from 'services/api/types';
import {
LATENTS_TO_IMAGE,
METADATA_ACCUMULATOR,
NSFW_CHECKER,
WATERMARKER,
} from './constants';
import { LATENTS_TO_IMAGE, NSFW_CHECKER, WATERMARKER } from './constants';
export const addWatermarkerToGraph = (
state: RootState,
@ -32,10 +26,6 @@ export const addWatermarkerToGraph = (
| ImageNSFWBlurInvocation
| undefined;
const metadataAccumulator = graph.nodes[METADATA_ACCUMULATOR] as
| MetadataAccumulatorInvocation
| undefined;
if (!nodeToAddTo) {
// something has gone terribly awry
return;
@ -80,17 +70,4 @@ export const addWatermarkerToGraph = (
},
});
}
if (metadataAccumulator) {
graph.edges.push({
source: {
node_id: METADATA_ACCUMULATOR,
field: 'metadata',
},
destination: {
node_id: WATERMARKER,
field: 'metadata',
},
});
}
};

View File

@ -1,20 +1,24 @@
import { BoardId } from 'features/gallery/store/types';
import { NonNullableGraph } from 'features/nodes/types/types';
import { ESRGANModelName } from 'features/parameters/store/postprocessingSlice';
import {
Graph,
ESRGANInvocation,
Graph,
SaveImageInvocation,
} from 'services/api/types';
import { REALESRGAN as ESRGAN, SAVE_IMAGE } from './constants';
import { addMainMetadataNodeToGraph } from './metadata';
type Arg = {
image_name: string;
esrganModelName: ESRGANModelName;
autoAddBoardId: BoardId;
};
export const buildAdHocUpscaleGraph = ({
image_name,
esrganModelName,
autoAddBoardId,
}: Arg): Graph => {
const realesrganNode: ESRGANInvocation = {
id: ESRGAN,
@ -28,6 +32,8 @@ export const buildAdHocUpscaleGraph = ({
id: SAVE_IMAGE,
type: 'save_image',
use_cache: false,
is_intermediate: false,
board: autoAddBoardId === 'none' ? undefined : { board_id: autoAddBoardId },
};
const graph: NonNullableGraph = {
@ -50,5 +56,9 @@ export const buildAdHocUpscaleGraph = ({
],
};
addMainMetadataNodeToGraph(graph, {
model: esrganModelName,
});
return graph;
};

View File

@ -19,12 +19,12 @@ import {
IMG2IMG_RESIZE,
LATENTS_TO_IMAGE,
MAIN_MODEL_LOADER,
METADATA_ACCUMULATOR,
NEGATIVE_CONDITIONING,
NOISE,
POSITIVE_CONDITIONING,
SEAMLESS,
} from './constants';
import { addMainMetadataNodeToGraph } from './metadata';
/**
* Builds the Canvas tab's Image to Image graph.
@ -307,10 +307,7 @@ export const buildCanvasImageToImageGraph = (
});
}
// add the main metadata node, which is only mostly populated - some fields are added later
graph.nodes[METADATA_ACCUMULATOR] = {
id: METADATA_ACCUMULATOR,
type: 'metadata_accumulator',
addMainMetadataNodeToGraph(graph, {
generation_mode: 'img2img',
cfg_scale,
width: !isUsingScaledDimensions ? width : scaledBoundingBoxDimensions.width,
@ -324,13 +321,10 @@ export const buildCanvasImageToImageGraph = (
steps,
rand_device: use_cpu ? 'cpu' : 'cuda',
scheduler,
vae: undefined, // option; set in addVAEToGraph
controlnets: [], // populated in addControlNetToLinearGraph
loras: [], // populated in addLoRAsToGraph
clip_skip: clipSkip,
strength,
init_image: initialImage.image_name,
};
});
// Add Seamless To Graph
if (seamlessXAxis || seamlessYAxis) {

View File

@ -16,7 +16,6 @@ import {
IMAGE_TO_LATENTS,
IMG2IMG_RESIZE,
LATENTS_TO_IMAGE,
METADATA_ACCUMULATOR,
NEGATIVE_CONDITIONING,
NOISE,
POSITIVE_CONDITIONING,
@ -27,6 +26,7 @@ import {
SEAMLESS,
} from './constants';
import { buildSDXLStylePrompts } from './helpers/craftSDXLStylePrompt';
import { addMainMetadataNodeToGraph } from './metadata';
/**
* Builds the Canvas tab's SDXL Image to Image graph.
@ -318,10 +318,7 @@ export const buildCanvasSDXLImageToImageGraph = (
});
}
// add the main metadata node, which is only mostly populated - some fields are added later
graph.nodes[METADATA_ACCUMULATOR] = {
id: METADATA_ACCUMULATOR,
type: 'metadata_accumulator',
addMainMetadataNodeToGraph(graph, {
generation_mode: 'img2img',
cfg_scale,
width: !isUsingScaledDimensions ? width : scaledBoundingBoxDimensions.width,
@ -335,22 +332,8 @@ export const buildCanvasSDXLImageToImageGraph = (
steps,
rand_device: use_cpu ? 'cpu' : 'cuda',
scheduler,
vae: undefined, // option; set in addVAEToGraph
controlnets: [], // populated in addControlNetToLinearGraph
loras: [], // populated in addLoRAsToGraph
strength,
init_image: initialImage.image_name,
};
graph.edges.push({
source: {
node_id: METADATA_ACCUMULATOR,
field: 'metadata',
},
destination: {
node_id: CANVAS_OUTPUT,
field: 'metadata',
},
});
// Add Seamless To Graph

View File

@ -17,7 +17,6 @@ import { addWatermarkerToGraph } from './addWatermarkerToGraph';
import {
CANVAS_OUTPUT,
LATENTS_TO_IMAGE,
METADATA_ACCUMULATOR,
NEGATIVE_CONDITIONING,
NOISE,
ONNX_MODEL_LOADER,
@ -29,6 +28,7 @@ import {
SEAMLESS,
} from './constants';
import { buildSDXLStylePrompts } from './helpers/craftSDXLStylePrompt';
import { addMainMetadataNodeToGraph } from './metadata';
/**
* Builds the Canvas tab's SDXL Text to Image graph.
@ -300,10 +300,7 @@ export const buildCanvasSDXLTextToImageGraph = (
});
}
// add the main metadata node, which is only mostly populated - some fields are added later
graph.nodes[METADATA_ACCUMULATOR] = {
id: METADATA_ACCUMULATOR,
type: 'metadata_accumulator',
addMainMetadataNodeToGraph(graph, {
generation_mode: 'txt2img',
cfg_scale,
width: !isUsingScaledDimensions ? width : scaledBoundingBoxDimensions.width,
@ -317,20 +314,6 @@ export const buildCanvasSDXLTextToImageGraph = (
steps,
rand_device: use_cpu ? 'cpu' : 'cuda',
scheduler,
vae: undefined, // option; set in addVAEToGraph
controlnets: [], // populated in addControlNetToLinearGraph
loras: [], // populated in addLoRAsToGraph
};
graph.edges.push({
source: {
node_id: METADATA_ACCUMULATOR,
field: 'metadata',
},
destination: {
node_id: CANVAS_OUTPUT,
field: 'metadata',
},
});
// Add Seamless To Graph

View File

@ -20,13 +20,13 @@ import {
DENOISE_LATENTS,
LATENTS_TO_IMAGE,
MAIN_MODEL_LOADER,
METADATA_ACCUMULATOR,
NEGATIVE_CONDITIONING,
NOISE,
ONNX_MODEL_LOADER,
POSITIVE_CONDITIONING,
SEAMLESS,
} from './constants';
import { addMainMetadataNodeToGraph } from './metadata';
/**
* Builds the Canvas tab's Text to Image graph.
@ -288,10 +288,7 @@ export const buildCanvasTextToImageGraph = (
});
}
// add the main metadata node, which is only mostly populated - some fields are added later
graph.nodes[METADATA_ACCUMULATOR] = {
id: METADATA_ACCUMULATOR,
type: 'metadata_accumulator',
addMainMetadataNodeToGraph(graph, {
generation_mode: 'txt2img',
cfg_scale,
width: !isUsingScaledDimensions ? width : scaledBoundingBoxDimensions.width,
@ -305,21 +302,7 @@ export const buildCanvasTextToImageGraph = (
steps,
rand_device: use_cpu ? 'cpu' : 'cuda',
scheduler,
vae: undefined, // option; set in addVAEToGraph
controlnets: [], // populated in addControlNetToLinearGraph
loras: [], // populated in addLoRAsToGraph
clip_skip: clipSkip,
};
graph.edges.push({
source: {
node_id: METADATA_ACCUMULATOR,
field: 'metadata',
},
destination: {
node_id: CANVAS_OUTPUT,
field: 'metadata',
},
});
// Add Seamless To Graph

View File

@ -4,13 +4,20 @@ import { generateSeeds } from 'common/util/generateSeeds';
import { NonNullableGraph } from 'features/nodes/types/types';
import { range, unset } from 'lodash-es';
import { components } from 'services/api/schema';
import { Batch, BatchConfig } from 'services/api/types';
import { Batch, BatchConfig, MetadataItemInvocation } from 'services/api/types';
import {
BATCH_PROMPT,
BATCH_SEED,
BATCH_STYLE_PROMPT,
CANVAS_COHERENCE_NOISE,
METADATA_ACCUMULATOR,
NOISE,
POSITIVE_CONDITIONING,
} from './constants';
import {
addBatchMetadataNodeToGraph,
removeMetadataFromMainMetadataNode,
} from './metadata';
export const prepareLinearUIBatch = (
state: RootState,
@ -23,8 +30,27 @@ export const prepareLinearUIBatch = (
const data: Batch['data'] = [];
const seedMetadataItemNode: MetadataItemInvocation = {
id: BATCH_SEED,
type: 'metadata_item',
label: 'seed',
};
const promptMetadataItemNode: MetadataItemInvocation = {
id: BATCH_PROMPT,
type: 'metadata_item',
label: 'positive_prompt',
};
const stylePromptMetadataItemNode: MetadataItemInvocation = {
id: BATCH_STYLE_PROMPT,
type: 'metadata_item',
label: 'positive_style_prompt',
};
const itemNodesIds: string[] = [];
if (prompts.length === 1) {
unset(graph.nodes[METADATA_ACCUMULATOR], 'seed');
const seeds = generateSeeds({
count: iterations,
start: shouldRandomizeSeed ? undefined : seed,
@ -40,13 +66,15 @@ export const prepareLinearUIBatch = (
});
}
if (graph.nodes[METADATA_ACCUMULATOR]) {
zipped.push({
node_path: METADATA_ACCUMULATOR,
field_name: 'seed',
items: seeds,
});
}
// add to metadata
removeMetadataFromMainMetadataNode(graph, 'seed');
itemNodesIds.push(BATCH_SEED);
graph.nodes[BATCH_SEED] = seedMetadataItemNode;
zipped.push({
node_path: BATCH_SEED,
field_name: 'value',
items: seeds,
});
if (graph.nodes[CANVAS_COHERENCE_NOISE]) {
zipped.push({
@ -77,13 +105,15 @@ export const prepareLinearUIBatch = (
});
}
if (graph.nodes[METADATA_ACCUMULATOR]) {
firstBatchDatumList.push({
node_path: METADATA_ACCUMULATOR,
field_name: 'seed',
items: seeds,
});
}
// add to metadata
removeMetadataFromMainMetadataNode(graph, 'seed');
itemNodesIds.push(BATCH_SEED);
graph.nodes[BATCH_SEED] = seedMetadataItemNode;
firstBatchDatumList.push({
node_path: BATCH_SEED,
field_name: 'value',
items: seeds,
});
if (graph.nodes[CANVAS_COHERENCE_NOISE]) {
firstBatchDatumList.push({
@ -106,13 +136,17 @@ export const prepareLinearUIBatch = (
items: seeds,
});
}
if (graph.nodes[METADATA_ACCUMULATOR]) {
secondBatchDatumList.push({
node_path: METADATA_ACCUMULATOR,
field_name: 'seed',
items: seeds,
});
}
// add to metadata
removeMetadataFromMainMetadataNode(graph, 'seed');
itemNodesIds.push(BATCH_SEED);
graph.nodes[BATCH_SEED] = seedMetadataItemNode;
secondBatchDatumList.push({
node_path: BATCH_SEED,
field_name: 'value',
items: seeds,
});
if (graph.nodes[CANVAS_COHERENCE_NOISE]) {
secondBatchDatumList.push({
node_path: CANVAS_COHERENCE_NOISE,
@ -137,13 +171,15 @@ export const prepareLinearUIBatch = (
});
}
if (graph.nodes[METADATA_ACCUMULATOR]) {
firstBatchDatumList.push({
node_path: METADATA_ACCUMULATOR,
field_name: 'positive_prompt',
items: extendedPrompts,
});
}
// add to metadata
removeMetadataFromMainMetadataNode(graph, 'positive_prompt');
itemNodesIds.push(BATCH_PROMPT);
graph.nodes[BATCH_PROMPT] = promptMetadataItemNode;
firstBatchDatumList.push({
node_path: BATCH_PROMPT,
field_name: 'value',
items: extendedPrompts,
});
if (shouldConcatSDXLStylePrompt && model?.base_model === 'sdxl') {
unset(graph.nodes[METADATA_ACCUMULATOR], 'positive_style_prompt');
@ -160,18 +196,22 @@ export const prepareLinearUIBatch = (
});
}
if (graph.nodes[METADATA_ACCUMULATOR]) {
firstBatchDatumList.push({
node_path: METADATA_ACCUMULATOR,
field_name: 'positive_style_prompt',
items: stylePrompts,
});
}
// add to metadata
removeMetadataFromMainMetadataNode(graph, 'positive_style_prompt');
itemNodesIds.push(BATCH_STYLE_PROMPT);
graph.nodes[BATCH_STYLE_PROMPT] = stylePromptMetadataItemNode;
firstBatchDatumList.push({
node_path: BATCH_STYLE_PROMPT,
field_name: 'value',
items: stylePrompts,
});
}
data.push(firstBatchDatumList);
}
addBatchMetadataNodeToGraph(graph, itemNodesIds);
const enqueueBatchArg: BatchConfig = {
prepend,
batch: {

View File

@ -20,13 +20,13 @@ import {
IMAGE_TO_LATENTS,
LATENTS_TO_IMAGE,
MAIN_MODEL_LOADER,
METADATA_ACCUMULATOR,
NEGATIVE_CONDITIONING,
NOISE,
POSITIVE_CONDITIONING,
RESIZE,
SEAMLESS,
} from './constants';
import { addMainMetadataNodeToGraph } from './metadata';
/**
* Builds the Image to Image tab graph.
@ -310,10 +310,7 @@ export const buildLinearImageToImageGraph = (
});
}
// add the main metadata node, which is only mostly populated - some fields are added later
graph.nodes[METADATA_ACCUMULATOR] = {
id: METADATA_ACCUMULATOR,
type: 'metadata_accumulator',
addMainMetadataNodeToGraph(graph, {
generation_mode: 'img2img',
cfg_scale,
height,
@ -325,23 +322,9 @@ export const buildLinearImageToImageGraph = (
steps,
rand_device: use_cpu ? 'cpu' : 'cuda',
scheduler,
vae: undefined, // option; set in addVAEToGraph
controlnets: [], // populated in addControlNetToLinearGraph
loras: [], // populated in addLoRAsToGraph
clip_skip: clipSkip,
strength,
init_image: initialImage.imageName,
};
graph.edges.push({
source: {
node_id: METADATA_ACCUMULATOR,
field: 'metadata',
},
destination: {
node_id: LATENTS_TO_IMAGE,
field: 'metadata',
},
});
// Add Seamless To Graph

View File

@@ -17,7 +17,6 @@ import { addWatermarkerToGraph } from './addWatermarkerToGraph';
import {
IMAGE_TO_LATENTS,
LATENTS_TO_IMAGE,
METADATA_ACCUMULATOR,
NEGATIVE_CONDITIONING,
NOISE,
POSITIVE_CONDITIONING,
@@ -29,6 +28,7 @@ import {
SEAMLESS,
} from './constants';
import { buildSDXLStylePrompts } from './helpers/craftSDXLStylePrompt';
import { addMainMetadataNodeToGraph } from './metadata';
/**
* Builds the Image to Image tab graph.
@@ -330,10 +330,7 @@ export const buildLinearSDXLImageToImageGraph = (
});
}
// add metadata accumulator, which is only mostly populated - some fields are added later
graph.nodes[METADATA_ACCUMULATOR] = {
id: METADATA_ACCUMULATOR,
type: 'metadata_accumulator',
addMainMetadataNodeToGraph(graph, {
generation_mode: 'sdxl_img2img',
cfg_scale,
height,
@@ -345,24 +342,10 @@ export const buildLinearSDXLImageToImageGraph = (
steps,
rand_device: use_cpu ? 'cpu' : 'cuda',
scheduler,
vae: undefined,
controlnets: [],
loras: [],
strength: strength,
strength,
init_image: initialImage.imageName,
positive_style_prompt: positiveStylePrompt,
negative_style_prompt: negativeStylePrompt,
};
graph.edges.push({
source: {
node_id: METADATA_ACCUMULATOR,
field: 'metadata',
},
destination: {
node_id: LATENTS_TO_IMAGE,
field: 'metadata',
},
});
// Add Seamless To Graph

View File

@@ -10,9 +10,9 @@ import { addSaveImageNode } from './addSaveImageNode';
import { addSeamlessToLinearGraph } from './addSeamlessToLinearGraph';
import { addVAEToGraph } from './addVAEToGraph';
import { addWatermarkerToGraph } from './addWatermarkerToGraph';
import { addMainMetadataNodeToGraph } from './metadata';
import {
LATENTS_TO_IMAGE,
METADATA_ACCUMULATOR,
NEGATIVE_CONDITIONING,
NOISE,
POSITIVE_CONDITIONING,
@@ -224,10 +224,7 @@ export const buildLinearSDXLTextToImageGraph = (
],
};
// add metadata accumulator, which is only mostly populated - some fields are added later
graph.nodes[METADATA_ACCUMULATOR] = {
id: METADATA_ACCUMULATOR,
type: 'metadata_accumulator',
addMainMetadataNodeToGraph(graph, {
generation_mode: 'sdxl_txt2img',
cfg_scale,
height,
@@ -239,22 +236,8 @@ export const buildLinearSDXLTextToImageGraph = (
steps,
rand_device: use_cpu ? 'cpu' : 'cuda',
scheduler,
vae: undefined,
controlnets: [],
loras: [],
positive_style_prompt: positiveStylePrompt,
negative_style_prompt: negativeStylePrompt,
};
graph.edges.push({
source: {
node_id: METADATA_ACCUMULATOR,
field: 'metadata',
},
destination: {
node_id: LATENTS_TO_IMAGE,
field: 'metadata',
},
});
// Add Seamless To Graph

View File

@@ -13,12 +13,12 @@ import { addSaveImageNode } from './addSaveImageNode';
import { addSeamlessToLinearGraph } from './addSeamlessToLinearGraph';
import { addVAEToGraph } from './addVAEToGraph';
import { addWatermarkerToGraph } from './addWatermarkerToGraph';
import { addMainMetadataNodeToGraph } from './metadata';
import {
CLIP_SKIP,
DENOISE_LATENTS,
LATENTS_TO_IMAGE,
MAIN_MODEL_LOADER,
METADATA_ACCUMULATOR,
NEGATIVE_CONDITIONING,
NOISE,
ONNX_MODEL_LOADER,
@@ -232,10 +232,7 @@ export const buildLinearTextToImageGraph = (
],
};
// add metadata accumulator, which is only mostly populated - some fields are added later
graph.nodes[METADATA_ACCUMULATOR] = {
id: METADATA_ACCUMULATOR,
type: 'metadata_accumulator',
addMainMetadataNodeToGraph(graph, {
generation_mode: 'txt2img',
cfg_scale,
height,
@@ -247,21 +244,7 @@ export const buildLinearTextToImageGraph = (
steps,
rand_device: use_cpu ? 'cpu' : 'cuda',
scheduler,
vae: undefined, // option; set in addVAEToGraph
controlnets: [], // populated in addControlNetToLinearGraph
loras: [], // populated in addLoRAsToGraph
clip_skip: clipSkip,
};
graph.edges.push({
source: {
node_id: METADATA_ACCUMULATOR,
field: 'metadata',
},
destination: {
node_id: LATENTS_TO_IMAGE,
field: 'metadata',
},
});
// Add Seamless To Graph

View File

@@ -50,7 +50,15 @@ export const IP_ADAPTER = 'ip_adapter';
export const DYNAMIC_PROMPT = 'dynamic_prompt';
export const IMAGE_COLLECTION = 'image_collection';
export const IMAGE_COLLECTION_ITERATE = 'image_collection_iterate';
export const METADATA = 'metadata';
export const BATCH_METADATA = 'batch_metadata';
export const BATCH_METADATA_COLLECT = 'batch_metadata_collect';
export const BATCH_SEED = 'batch_seed';
export const BATCH_PROMPT = 'batch_prompt';
export const BATCH_STYLE_PROMPT = 'batch_style_prompt';
export const METADATA_COLLECT = 'metadata_collect';
export const METADATA_ACCUMULATOR = 'metadata_accumulator';
export const MERGE_METADATA = 'merge_metadata';
export const REALESRGAN = 'esrgan';
export const DIVIDE = 'divide';
export const SCALE = 'scale_image';

View File

@@ -0,0 +1,151 @@
import { NonNullableGraph } from 'features/nodes/types/types';
import { map } from 'lodash-es';
import { MetadataInvocationAsCollection } from 'services/api/types';
import { JsonObject } from 'type-fest';
import {
BATCH_METADATA,
BATCH_METADATA_COLLECT,
MERGE_METADATA,
METADATA,
METADATA_COLLECT,
SAVE_IMAGE,
} from './constants';
export const addMainMetadataNodeToGraph = (
graph: NonNullableGraph,
metadata: JsonObject
): void => {
graph.nodes[METADATA] = {
id: METADATA,
type: 'metadata',
items: map(metadata, (value, label) => ({ label, value })),
};
graph.nodes[METADATA_COLLECT] = {
id: METADATA_COLLECT,
type: 'collect',
};
graph.nodes[MERGE_METADATA] = {
id: MERGE_METADATA,
type: 'merge_metadata_dict',
};
graph.edges.push({
source: {
node_id: METADATA,
field: 'metadata_dict',
},
destination: {
node_id: METADATA_COLLECT,
field: 'item',
},
});
graph.edges.push({
source: {
node_id: METADATA_COLLECT,
field: 'collection',
},
destination: {
node_id: MERGE_METADATA,
field: 'collection',
},
});
graph.edges.push({
source: {
node_id: MERGE_METADATA,
field: 'metadata_dict',
},
destination: {
node_id: SAVE_IMAGE,
field: 'metadata',
},
});
return;
};
export const addMainMetadata = (
graph: NonNullableGraph,
metadata: JsonObject
): void => {
const metadataNode = graph.nodes[METADATA] as
| MetadataInvocationAsCollection
| undefined;
if (!metadataNode) {
return;
}
metadataNode.items.push(
...map(metadata, (value, label) => ({ label, value }))
);
};
export const removeMetadataFromMainMetadataNode = (
graph: NonNullableGraph,
label: string
): void => {
const metadataNode = graph.nodes[METADATA] as
| MetadataInvocationAsCollection
| undefined;
if (!metadataNode) {
return;
}
metadataNode.items = metadataNode.items.filter(
(item) => item.label !== label
);
};
export const addBatchMetadataNodeToGraph = (
graph: NonNullableGraph,
itemNodeIds: string[]
) => {
graph.nodes[BATCH_METADATA] = {
id: BATCH_METADATA,
type: 'metadata',
};
graph.nodes[BATCH_METADATA_COLLECT] = {
id: BATCH_METADATA_COLLECT,
type: 'collect',
};
itemNodeIds.forEach((id) => {
graph.edges.push({
source: {
node_id: id,
field: 'item',
},
destination: {
node_id: BATCH_METADATA_COLLECT,
field: 'item',
},
});
});
graph.edges.push({
source: {
node_id: BATCH_METADATA_COLLECT,
field: 'collection',
},
destination: {
node_id: BATCH_METADATA,
field: 'items',
},
});
graph.edges.push({
source: {
node_id: BATCH_METADATA,
field: 'metadata_dict',
},
destination: {
node_id: METADATA_COLLECT,
field: 'item',
},
});
};
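The helper above wires a `metadata` node through a `collect` node into `merge_metadata_dict` and finally into `save_image`'s `metadata` field. That wiring can be exercised in isolation; the sketch below uses simplified stand-in types (the real `NonNullableGraph` and invocation types live in the app) and is illustrative only:

```typescript
// Simplified stand-ins for the app's graph types (assumptions for illustration).
type Edge = {
  source: { node_id: string; field: string };
  destination: { node_id: string; field: string };
};
type Graph = { nodes: Record<string, Record<string, unknown>>; edges: Edge[] };

const METADATA = 'metadata';
const METADATA_COLLECT = 'metadata_collect';
const MERGE_METADATA = 'merge_metadata';
const SAVE_IMAGE = 'save_image';

// Mirrors the wiring above: metadata -> collect -> merge_metadata_dict -> save_image.
function addMainMetadataNodeToGraph(
  graph: Graph,
  metadata: Record<string, unknown>
): void {
  graph.nodes[METADATA] = {
    id: METADATA,
    type: 'metadata',
    // Equivalent of lodash's map(metadata, (value, label) => ({ label, value })).
    items: Object.entries(metadata).map(([label, value]) => ({ label, value })),
  };
  graph.nodes[METADATA_COLLECT] = { id: METADATA_COLLECT, type: 'collect' };
  graph.nodes[MERGE_METADATA] = { id: MERGE_METADATA, type: 'merge_metadata_dict' };
  graph.edges.push(
    {
      source: { node_id: METADATA, field: 'metadata_dict' },
      destination: { node_id: METADATA_COLLECT, field: 'item' },
    },
    {
      source: { node_id: METADATA_COLLECT, field: 'collection' },
      destination: { node_id: MERGE_METADATA, field: 'collection' },
    },
    {
      source: { node_id: MERGE_METADATA, field: 'metadata_dict' },
      destination: { node_id: SAVE_IMAGE, field: 'metadata' },
    }
  );
}

const graph: Graph = {
  nodes: { [SAVE_IMAGE]: { id: SAVE_IMAGE, type: 'save_image' } },
  edges: [],
};
addMainMetadataNodeToGraph(graph, { generation_mode: 'txt2img', cfg_scale: 7.5 });
// graph now holds the three metadata nodes and the three edges between them.
```

Batch metadata follows the same shape: each per-item node feeds `batch_metadata_collect`, whose collection lands on the `batch_metadata` node's `items` field, which in turn feeds the main `metadata_collect` node.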

View File

@@ -4,7 +4,6 @@ import { reduce } from 'lodash-es';
import { OpenAPIV3 } from 'openapi-types';
import { AnyInvocationType } from 'services/events/types';
import {
FieldType,
InputFieldTemplate,
InvocationSchemaObject,
InvocationTemplate,
@@ -16,18 +15,11 @@
} from '../types/types';
import { buildInputFieldTemplate, getFieldType } from './fieldTemplateBuilders';
const RESERVED_INPUT_FIELD_NAMES = ['id', 'type', 'metadata', 'use_cache'];
const RESERVED_INPUT_FIELD_NAMES = ['id', 'type', 'use_cache'];
const RESERVED_OUTPUT_FIELD_NAMES = ['type'];
const RESERVED_FIELD_TYPES = [
'WorkflowField',
'MetadataField',
'IsIntermediate',
];
const RESERVED_FIELD_TYPES = ['IsIntermediate', 'WorkflowField'];
const invocationDenylist: AnyInvocationType[] = [
'graph',
'metadata_accumulator',
];
const invocationDenylist: AnyInvocationType[] = ['graph'];
const isReservedInputField = (nodeType: string, fieldName: string) => {
if (RESERVED_INPUT_FIELD_NAMES.includes(fieldName)) {
@@ -42,7 +34,7 @@ const isReservedInputField = (nodeType: string, fieldName: string) => {
return false;
};
const isReservedFieldType = (fieldType: FieldType) => {
const isReservedFieldType = (fieldType: string) => {
if (RESERVED_FIELD_TYPES.includes(fieldType)) {
return true;
}
@@ -86,6 +78,7 @@ export const parseSchema = (
const tags = schema.tags ?? [];
const description = schema.description ?? '';
const version = schema.version;
let withWorkflow = false;
const inputs = reduce(
schema.properties,
@@ -112,7 +105,7 @@
const fieldType = getFieldType(property);
if (!isFieldType(fieldType)) {
if (!fieldType) {
logger('nodes').warn(
{
node: type,
@@ -120,11 +113,16 @@
fieldType,
field: parseify(property),
},
'Skipping unknown input field type'
'Missing input field type'
);
return inputsAccumulator;
}
if (fieldType === 'WorkflowField') {
withWorkflow = true;
return inputsAccumulator;
}
if (isReservedFieldType(fieldType)) {
logger('nodes').trace(
{
@@ -133,7 +131,20 @@
fieldType,
field: parseify(property),
},
'Skipping reserved field type'
`Skipping reserved input field type: ${fieldType}`
);
return inputsAccumulator;
}
if (!isFieldType(fieldType)) {
logger('nodes').warn(
{
node: type,
fieldName: propertyName,
fieldType,
field: parseify(property),
},
`Skipping unknown input field type: ${fieldType}`
);
return inputsAccumulator;
}
@@ -146,7 +157,7 @@
);
if (!field) {
logger('nodes').debug(
logger('nodes').warn(
{
node: type,
fieldName: propertyName,
@@ -247,6 +258,7 @@ export const parseSchema = (
inputs,
outputs,
useCache,
withWorkflow,
};
Object.assign(invocationsAccumulator, { [type]: invocation });

View File

@@ -1,6 +1,6 @@
import { RootState } from 'app/store/store';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import IAIInformationalPopover from 'common/components/IAIInformationalPopover';
import IAIInformationalPopover from 'common/components/IAIInformationalPopover/IAIInformationalPopover';
import IAISlider from 'common/components/IAISlider';
import { setClipSkip } from 'features/parameters/store/generationSlice';
import { clipSkipMap } from 'features/parameters/types/constants';
@@ -47,7 +47,7 @@ export default function ParamClipSkip() {
}
return (
<IAIInformationalPopover details="clipSkip">
<IAIInformationalPopover feature="clipSkip" placement="top">
<IAISlider
label={t('parameters.clipSkip')}
aria-label={t('parameters.clipSkip')}

View File

@@ -1,7 +1,8 @@
import { Box, Flex, Spacer, Text } from '@chakra-ui/react';
import { Flex, FormControl, FormLabel, Spacer } from '@chakra-ui/react';
import { createSelector } from '@reduxjs/toolkit';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import IAIIconButton from 'common/components/IAIIconButton';
import IAIInformationalPopover from 'common/components/IAIInformationalPopover/IAIInformationalPopover';
import { canvasSelector } from 'features/canvas/store/canvasSelectors';
import { flipBoundingBoxAxes } from 'features/canvas/store/canvasSlice';
import { generationSelector } from 'features/parameters/store/generationSelectors';
@@ -18,7 +19,6 @@ import ParamAspectRatio, {
} from '../../Core/ParamAspectRatio';
import ParamBoundingBoxHeight from './ParamBoundingBoxHeight';
import ParamBoundingBoxWidth from './ParamBoundingBoxWidth';
import IAIInformationalPopover from 'common/components/IAIInformationalPopover';
const sizeOptsSelector = createSelector(
[generationSelector, canvasSelector],
@@ -93,42 +93,29 @@ export default function ParamBoundingBoxSize() {
},
}}
>
<Flex alignItems="center" gap={2}>
<Box width="full">
<IAIInformationalPopover details="paramRatio">
<Text
sx={{
fontSize: 'sm',
width: 'full',
color: 'base.700',
_dark: {
color: 'base.300',
},
}}
>
{t('parameters.aspectRatio')}
</Text>
</IAIInformationalPopover>
</Box>
<Spacer />
<ParamAspectRatio />
<IAIIconButton
tooltip={t('ui.swapSizes')}
aria-label={t('ui.swapSizes')}
size="sm"
icon={<MdOutlineSwapVert />}
fontSize={20}
onClick={handleToggleSize}
/>
<IAIIconButton
tooltip={t('ui.lockRatio')}
aria-label={t('ui.lockRatio')}
size="sm"
icon={<FaLock />}
isChecked={shouldLockAspectRatio}
onClick={handleLockRatio}
/>
</Flex>
<IAIInformationalPopover feature="paramRatio">
<FormControl as={Flex} flexDir="row" alignItems="center" gap={2}>
<FormLabel>{t('parameters.aspectRatio')}</FormLabel>
<Spacer />
<ParamAspectRatio />
<IAIIconButton
tooltip={t('ui.swapSizes')}
aria-label={t('ui.swapSizes')}
size="sm"
icon={<MdOutlineSwapVert />}
fontSize={20}
onClick={handleToggleSize}
/>
<IAIIconButton
tooltip={t('ui.lockRatio')}
aria-label={t('ui.lockRatio')}
size="sm"
icon={<FaLock />}
isChecked={shouldLockAspectRatio}
onClick={handleLockRatio}
/>
</FormControl>
</IAIInformationalPopover>
<ParamBoundingBoxWidth />
<ParamBoundingBoxHeight />
</Flex>

View File

@@ -1,6 +1,6 @@
import type { RootState } from 'app/store/store';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import IAIInformationalPopover from 'common/components/IAIInformationalPopover';
import IAIInformationalPopover from 'common/components/IAIInformationalPopover/IAIInformationalPopover';
import { IAISelectDataType } from 'common/components/IAIMantineSearchableSelect';
import IAIMantineSelect from 'common/components/IAIMantineSelect';
import { setCanvasCoherenceMode } from 'features/parameters/store/generationSlice';
@@ -31,7 +31,7 @@ const ParamCanvasCoherenceMode = () => {
};
return (
<IAIInformationalPopover details="compositingCoherenceMode">
<IAIInformationalPopover feature="compositingCoherenceMode">
<IAIMantineSelect
label={t('parameters.coherenceMode')}
data={coherenceModeSelectData}

View File

@@ -1,6 +1,6 @@
import type { RootState } from 'app/store/store';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import IAIInformationalPopover from 'common/components/IAIInformationalPopover';
import IAIInformationalPopover from 'common/components/IAIInformationalPopover/IAIInformationalPopover';
import IAISlider from 'common/components/IAISlider';
import { setCanvasCoherenceSteps } from 'features/parameters/store/generationSlice';
import { memo } from 'react';
@@ -14,7 +14,7 @@ const ParamCanvasCoherenceSteps = () => {
const { t } = useTranslation();
return (
<IAIInformationalPopover details="compositingCoherenceSteps">
<IAIInformationalPopover feature="compositingCoherenceSteps">
<IAISlider
label={t('parameters.coherenceSteps')}
min={1}

View File

@@ -1,6 +1,6 @@
import type { RootState } from 'app/store/store';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import IAIInformationalPopover from 'common/components/IAIInformationalPopover';
import IAIInformationalPopover from 'common/components/IAIInformationalPopover/IAIInformationalPopover';
import IAISlider from 'common/components/IAISlider';
import { setCanvasCoherenceStrength } from 'features/parameters/store/generationSlice';
import { memo } from 'react';
@@ -14,7 +14,7 @@ const ParamCanvasCoherenceStrength = () => {
const { t } = useTranslation();
return (
<IAIInformationalPopover details="compositingStrength">
<IAIInformationalPopover feature="compositingStrength">
<IAISlider
label={t('parameters.coherenceStrength')}
min={0}

View File

@@ -1,6 +1,6 @@
import type { RootState } from 'app/store/store';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import IAIInformationalPopover from 'common/components/IAIInformationalPopover';
import IAIInformationalPopover from 'common/components/IAIInformationalPopover/IAIInformationalPopover';
import IAISlider from 'common/components/IAISlider';
import { setMaskBlur } from 'features/parameters/store/generationSlice';
import { useTranslation } from 'react-i18next';
@@ -13,7 +13,7 @@ export default function ParamMaskBlur() {
const { t } = useTranslation();
return (
<IAIInformationalPopover details="compositingBlur">
<IAIInformationalPopover feature="compositingBlur">
<IAISlider
label={t('parameters.maskBlur')}
min={0}

View File

@@ -2,7 +2,7 @@ import { SelectItem } from '@mantine/core';
import { RootState } from 'app/store/store';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import IAIInformationalPopover from 'common/components/IAIInformationalPopover';
import IAIInformationalPopover from 'common/components/IAIInformationalPopover/IAIInformationalPopover';
import IAIMantineSelect from 'common/components/IAIMantineSelect';
import { setMaskBlurMethod } from 'features/parameters/store/generationSlice';
import { useTranslation } from 'react-i18next';
@@ -29,7 +29,7 @@ export default function ParamMaskBlurMethod() {
};
return (
<IAIInformationalPopover details="compositingBlurMethod">
<IAIInformationalPopover feature="compositingBlurMethod">
<IAIMantineSelect
value={maskBlurMethod}
onChange={handleMaskBlurMethodChange}

View File

@@ -15,19 +15,13 @@ const ParamCompositingSettingsCollapse = () => {
return (
<IAICollapse label={t('parameters.compositingSettingsHeader')}>
<Flex sx={{ flexDirection: 'column', gap: 2 }}>
<SubParametersWrapper
label={t('parameters.coherencePassHeader')}
headerInfoPopover="compositingCoherencePass"
>
<SubParametersWrapper label={t('parameters.coherencePassHeader')}>
<ParamCanvasCoherenceMode />
<ParamCanvasCoherenceSteps />
<ParamCanvasCoherenceStrength />
</SubParametersWrapper>
<Divider />
<SubParametersWrapper
label={t('parameters.maskAdjustmentsHeader')}
headerInfoPopover="compositingMaskAdjustments"
>
<SubParametersWrapper label={t('parameters.maskAdjustmentsHeader')}>
<ParamMaskBlur />
<ParamMaskBlurMethod />
</SubParametersWrapper>

View File

@@ -2,7 +2,7 @@ import { createSelector } from '@reduxjs/toolkit';
import { stateSelector } from 'app/store/store';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import { defaultSelectorOptions } from 'app/store/util/defaultMemoizeOptions';
import IAIInformationalPopover from 'common/components/IAIInformationalPopover';
import IAIInformationalPopover from 'common/components/IAIInformationalPopover/IAIInformationalPopover';
import IAIMantineSelect from 'common/components/IAIMantineSelect';
import { setInfillMethod } from 'features/parameters/store/generationSlice';
@@ -40,7 +40,7 @@ const ParamInfillMethod = () => {
);
return (
<IAIInformationalPopover details="infillMethod">
<IAIInformationalPopover feature="infillMethod">
<IAIMantineSelect
disabled={infill_methods?.length === 0}
placeholder={isLoading ? 'Loading...' : undefined}

View File

@@ -1,7 +1,7 @@
import { createSelector } from '@reduxjs/toolkit';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import { defaultSelectorOptions } from 'app/store/util/defaultMemoizeOptions';
import IAIInformationalPopover from 'common/components/IAIInformationalPopover';
import IAIInformationalPopover from 'common/components/IAIInformationalPopover/IAIInformationalPopover';
import IAIMantineSearchableSelect from 'common/components/IAIMantineSearchableSelect';
import { canvasSelector } from 'features/canvas/store/canvasSelectors';
import { setBoundingBoxScaleMethod } from 'features/canvas/store/canvasSlice';
@@ -36,7 +36,7 @@ const ParamScaleBeforeProcessing = () => {
};
return (
<IAIInformationalPopover details="scaleBeforeProcessing">
<IAIInformationalPopover feature="scaleBeforeProcessing">
<IAIMantineSearchableSelect
label={t('parameters.scaleBeforeProcessing')}
data={BOUNDING_BOX_SCALES_DICT}

View File

@@ -1,4 +1,4 @@
import { ButtonGroup, Flex } from '@chakra-ui/react';
import { ButtonGroup } from '@chakra-ui/react';
import { RootState } from 'app/store/store';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import IAIButton from 'common/components/IAIButton';
@@ -29,25 +29,23 @@ export default function ParamAspectRatio() {
const activeTabName = useAppSelector(activeTabNameSelector);
return (
<Flex gap={2} flexGrow={1}>
<ButtonGroup isAttached>
{aspectRatios.map((ratio) => (
<IAIButton
key={ratio.name}
size="sm"
isChecked={aspectRatio === ratio.value}
isDisabled={
activeTabName === 'img2img' ? !shouldFitToWidthHeight : false
}
onClick={() => {
dispatch(setAspectRatio(ratio.value));
dispatch(setShouldLockAspectRatio(false));
}}
>
{ratio.name}
</IAIButton>
))}
</ButtonGroup>
</Flex>
<ButtonGroup isAttached>
{aspectRatios.map((ratio) => (
<IAIButton
key={ratio.name}
size="sm"
isChecked={aspectRatio === ratio.value}
isDisabled={
activeTabName === 'img2img' ? !shouldFitToWidthHeight : false
}
onClick={() => {
dispatch(setAspectRatio(ratio.value));
dispatch(setShouldLockAspectRatio(false));
}}
>
{ratio.name}
</IAIButton>
))}
</ButtonGroup>
);
}

View File

@@ -2,7 +2,7 @@ import { createSelector } from '@reduxjs/toolkit';
import { stateSelector } from 'app/store/store';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import { defaultSelectorOptions } from 'app/store/util/defaultMemoizeOptions';
import IAIInformationalPopover from 'common/components/IAIInformationalPopover';
import IAIInformationalPopover from 'common/components/IAIInformationalPopover/IAIInformationalPopover';
import IAINumberInput from 'common/components/IAINumberInput';
import IAISlider from 'common/components/IAISlider';
import { setCfgScale } from 'features/parameters/store/generationSlice';
@@ -54,7 +54,7 @@ const ParamCFGScale = () => {
);
return shouldUseSliders ? (
<IAIInformationalPopover details="paramCFGScale">
<IAIInformationalPopover feature="paramCFGScale">
<IAISlider
label={t('parameters.cfgScale')}
step={shift ? 0.1 : 0.5}
@@ -71,7 +71,7 @@ const ParamCFGScale = () => {
/>
</IAIInformationalPopover>
) : (
<IAIInformationalPopover details="paramCFGScale">
<IAIInformationalPopover feature="paramCFGScale">
<IAINumberInput
label={t('parameters.cfgScale')}
step={0.5}

View File

@@ -2,7 +2,7 @@ import { createSelector } from '@reduxjs/toolkit';
import { stateSelector } from 'app/store/store';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import { defaultSelectorOptions } from 'app/store/util/defaultMemoizeOptions';
import IAIInformationalPopover from 'common/components/IAIInformationalPopover';
import IAIInformationalPopover from 'common/components/IAIInformationalPopover/IAIInformationalPopover';
import IAINumberInput from 'common/components/IAINumberInput';
import IAISlider from 'common/components/IAISlider';
import { setIterations } from 'features/parameters/store/generationSlice';
@@ -61,7 +61,7 @@ const ParamIterations = ({ asSlider }: Props) => {
}, [dispatch, initial]);
return asSlider || shouldUseSliders ? (
<IAIInformationalPopover details="paramIterations">
<IAIInformationalPopover feature="paramIterations">
<IAISlider
label={t('parameters.iterations')}
step={step}
@@ -77,7 +77,7 @@ const ParamIterations = ({ asSlider }: Props) => {
/>
</IAIInformationalPopover>
) : (
<IAIInformationalPopover details="paramIterations">
<IAIInformationalPopover feature="paramIterations">
<IAINumberInput
label={t('parameters.iterations')}
step={step}

View File

@@ -1,7 +1,7 @@
import { Box, FormControl, useDisclosure } from '@chakra-ui/react';
import type { RootState } from 'app/store/store';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import IAIInformationalPopover from 'common/components/IAIInformationalPopover';
import IAIInformationalPopover from 'common/components/IAIInformationalPopover/IAIInformationalPopover';
import IAITextarea from 'common/components/IAITextarea';
import AddEmbeddingButton from 'features/embedding/components/AddEmbeddingButton';
import ParamEmbeddingPopover from 'features/embedding/components/ParamEmbeddingPopover';
@@ -76,15 +76,15 @@ const ParamNegativeConditioning = () => {
const isEmbeddingEnabled = useFeatureStatus('embedding').isFeatureEnabled;
return (
<IAIInformationalPopover
placement="right"
details="paramNegativeConditioning"
>
<FormControl>
<ParamEmbeddingPopover
isOpen={isOpen}
onClose={onClose}
onSelect={handleSelectEmbedding}
<FormControl>
<ParamEmbeddingPopover
isOpen={isOpen}
onClose={onClose}
onSelect={handleSelectEmbedding}
>
<IAIInformationalPopover
feature="paramNegativeConditioning"
placement="right"
>
<IAITextarea
id="negativePrompt"
@@ -98,20 +98,20 @@ const ParamNegativeConditioning = () => {
minH={16}
{...(isEmbeddingEnabled && { onKeyDown: handleKeyDown })}
/>
</ParamEmbeddingPopover>
{!isOpen && isEmbeddingEnabled && (
<Box
sx={{
position: 'absolute',
top: 0,
insetInlineEnd: 0,
}}
>
<AddEmbeddingButton onClick={onOpen} />
</Box>
)}
</FormControl>
</IAIInformationalPopover>
</IAIInformationalPopover>
</ParamEmbeddingPopover>
{!isOpen && isEmbeddingEnabled && (
<Box
sx={{
position: 'absolute',
top: 0,
insetInlineEnd: 0,
}}
>
<AddEmbeddingButton onClick={onOpen} />
</Box>
)}
</FormControl>
);
};

View File

@@ -2,6 +2,7 @@ import { Box, FormControl, useDisclosure } from '@chakra-ui/react';
import { createSelector } from '@reduxjs/toolkit';
import { stateSelector } from 'app/store/store';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import IAIInformationalPopover from 'common/components/IAIInformationalPopover/IAIInformationalPopover';
import IAITextarea from 'common/components/IAITextarea';
import AddEmbeddingButton from 'features/embedding/components/AddEmbeddingButton';
import ParamEmbeddingPopover from 'features/embedding/components/ParamEmbeddingPopover';
@@ -12,7 +13,6 @@ import { flushSync } from 'react-dom';
import { useHotkeys } from 'react-hotkeys-hook';
import { useTranslation } from 'react-i18next';
import { useFeatureStatus } from '../../../../system/hooks/useFeatureStatus';
import IAIInformationalPopover from '../../../../../common/components/IAIInformationalPopover';
const promptInputSelector = createSelector(
[stateSelector],
@@ -104,15 +104,15 @@ const ParamPositiveConditioning = () => {
return (
<Box position="relative">
<IAIInformationalPopover
placement="right"
details="paramPositiveConditioning"
>
<FormControl>
<ParamEmbeddingPopover
isOpen={isOpen}
onClose={onClose}
onSelect={handleSelectEmbedding}
<FormControl>
<ParamEmbeddingPopover
isOpen={isOpen}
onClose={onClose}
onSelect={handleSelectEmbedding}
>
<IAIInformationalPopover
feature="paramPositiveConditioning"
placement="right"
>
<IAITextarea
id="prompt"
@@ -125,9 +125,9 @@ const ParamPositiveConditioning = () => {
resize="vertical"
minH={32}
/>
</ParamEmbeddingPopover>
</FormControl>
</IAIInformationalPopover>
</IAIInformationalPopover>
</ParamEmbeddingPopover>
</FormControl>
{!isOpen && isEmbeddingEnabled && (
<Box
sx={{

View File

@@ -1,7 +1,7 @@
import { createSelector } from '@reduxjs/toolkit';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import { defaultSelectorOptions } from 'app/store/util/defaultMemoizeOptions';
import IAIInformationalPopover from 'common/components/IAIInformationalPopover';
import IAIInformationalPopover from 'common/components/IAIInformationalPopover/IAIInformationalPopover';
import IAIMantineSearchableSelect from 'common/components/IAIMantineSearchableSelect';
import { generationSelector } from 'features/parameters/store/generationSelectors';
import { setScheduler } from 'features/parameters/store/generationSlice';
@@ -52,7 +52,7 @@ const ParamScheduler = () => {
);
return (
<IAIInformationalPopover details="paramScheduler">
<IAIInformationalPopover feature="paramScheduler">
<IAIMantineSearchableSelect
label={t('parameters.scheduler')}
value={scheduler}

View File

@@ -1,7 +1,9 @@
import { Box, Flex, Spacer, Text } from '@chakra-ui/react';
import { Flex, FormControl, FormLabel, Spacer } from '@chakra-ui/react';
import { createSelector } from '@reduxjs/toolkit';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import { defaultSelectorOptions } from 'app/store/util/defaultMemoizeOptions';
import IAIIconButton from 'common/components/IAIIconButton';
import IAIInformationalPopover from 'common/components/IAIInformationalPopover/IAIInformationalPopover';
import { generationSelector } from 'features/parameters/store/generationSelectors';
import {
setAspectRatio,
@@ -16,8 +18,6 @@ import { activeTabNameSelector } from '../../../../ui/store/uiSelectors';
import ParamAspectRatio, { mappedAspectRatios } from './ParamAspectRatio';
import ParamHeight from './ParamHeight';
import ParamWidth from './ParamWidth';
import IAIInformationalPopover from 'common/components/IAIInformationalPopover';
import { defaultSelectorOptions } from 'app/store/util/defaultMemoizeOptions';
const sizeOptsSelector = createSelector(
[generationSelector, activeTabNameSelector],
@@ -83,47 +83,35 @@ export default function ParamSize() {
},
}}
>
<Flex alignItems="center" gap={2}>
<Box width="full">
<IAIInformationalPopover details="paramRatio">
<Text
sx={{
fontSize: 'sm',
color: 'base.700',
_dark: {
color: 'base.300',
},
}}
>
{t('parameters.aspectRatio')}
</Text>
</IAIInformationalPopover>
</Box>
<Spacer />
<ParamAspectRatio />
<IAIIconButton
tooltip={t('ui.swapSizes')}
aria-label={t('ui.swapSizes')}
size="sm"
icon={<MdOutlineSwapVert />}
fontSize={20}
isDisabled={
activeTabName === 'img2img' ? !shouldFitToWidthHeight : false
}
onClick={handleToggleSize}
/>
<IAIIconButton
tooltip={t('ui.lockRatio')}
aria-label={t('ui.lockRatio')}
size="sm"
icon={<FaLock />}
isChecked={shouldLockAspectRatio}
isDisabled={
activeTabName === 'img2img' ? !shouldFitToWidthHeight : false
}
onClick={handleLockRatio}
/>
</Flex>
<IAIInformationalPopover feature="paramRatio">
<FormControl as={Flex} flexDir="row" alignItems="center" gap={2}>
<FormLabel>{t('parameters.aspectRatio')}</FormLabel>
<Spacer />
<ParamAspectRatio />
<IAIIconButton
tooltip={t('ui.swapSizes')}
aria-label={t('ui.swapSizes')}
size="sm"
icon={<MdOutlineSwapVert />}
fontSize={20}
isDisabled={
activeTabName === 'img2img' ? !shouldFitToWidthHeight : false
}
onClick={handleToggleSize}
/>
<IAIIconButton
tooltip={t('ui.lockRatio')}
aria-label={t('ui.lockRatio')}
size="sm"
icon={<FaLock />}
isChecked={shouldLockAspectRatio}
isDisabled={
activeTabName === 'img2img' ? !shouldFitToWidthHeight : false
}
onClick={handleLockRatio}
/>
</FormControl>
</IAIInformationalPopover>
<Flex gap={2} alignItems="center">
<Flex gap={2} flexDirection="column" width="full">
<ParamWidth

View File

@@ -2,7 +2,7 @@ import { createSelector } from '@reduxjs/toolkit';
import { stateSelector } from 'app/store/store';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import { defaultSelectorOptions } from 'app/store/util/defaultMemoizeOptions';
import IAIInformationalPopover from 'common/components/IAIInformationalPopover';
import IAIInformationalPopover from 'common/components/IAIInformationalPopover/IAIInformationalPopover';
import IAINumberInput from 'common/components/IAINumberInput';
import IAISlider from 'common/components/IAISlider';
@@ -57,7 +57,7 @@ const ParamSteps = () => {
}, [dispatch]);
return shouldUseSliders ? (
<IAIInformationalPopover details="paramSteps">
<IAIInformationalPopover feature="paramSteps">
<IAISlider
label={t('parameters.steps')}
min={min}
@@ -73,7 +73,7 @@ const ParamSteps = () => {
/>
</IAIInformationalPopover>
) : (
<IAIInformationalPopover details="paramSteps">
<IAIInformationalPopover feature="paramSteps">
<IAINumberInput
label={t('parameters.steps')}
min={min}

View File

@@ -7,7 +7,7 @@ import { setImg2imgStrength } from 'features/parameters/store/generationSlice';
import { memo, useCallback } from 'react';
import { useTranslation } from 'react-i18next';
import SubParametersWrapper from '../SubParametersWrapper';
import IAIInformationalPopover from 'common/components/IAIInformationalPopover';
import IAIInformationalPopover from 'common/components/IAIInformationalPopover/IAIInformationalPopover';
const selector = createSelector(
[stateSelector],
@@ -46,8 +46,8 @@ const ImageToImageStrength = () => {
}, [dispatch, initial]);
return (
<SubParametersWrapper>
<IAIInformationalPopover details="paramDenoisingStrength">
<IAIInformationalPopover feature="paramDenoisingStrength">
<SubParametersWrapper>
<IAISlider
label={`${t('parameters.denoisingStrength')}`}
step={step}
@@ -62,8 +62,8 @@ const ImageToImageStrength = () => {
withReset
sliderNumberInputProps={{ max: inputMax }}
/>
</IAIInformationalPopover>
</SubParametersWrapper>
</SubParametersWrapper>
</IAIInformationalPopover>
);
};

View File

@@ -21,7 +21,7 @@ import {
useGetOnnxModelsQuery,
} from 'services/api/endpoints/models';
import { useFeatureStatus } from '../../../../system/hooks/useFeatureStatus';
import IAIInformationalPopover from 'common/components/IAIInformationalPopover';
import IAIInformationalPopover from 'common/components/IAIInformationalPopover/IAIInformationalPopover';
const selector = createSelector(
stateSelector,
@@ -120,7 +120,7 @@ const ParamMainModelSelect = () => {
/>
) : (
<Flex w="100%" alignItems="center" gap={3}>
<IAIInformationalPopover details="paramModel" placement="bottom">
<IAIInformationalPopover feature="paramModel">
<IAIMantineSearchableSelect
tooltip={selectedModel?.description}
label={t('modelManager.model')}
@@ -136,7 +136,7 @@ const ParamMainModelSelect = () => {
/>
</IAIInformationalPopover>
{isSyncModelEnabled && (
<Box mt={7}>
<Box mt={6}>
<SyncModelsButton iconMode />
</Box>
)}

Some files were not shown because too many files have changed in this diff Show More