feat(nodes): versioning (#4449)

## What type of PR is this? (check all applicable)

- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [x] Yes
- [ ] No


## Description

This PR is based on #4423 and should not be merged until that PR is merged.

[feat(nodes): add version to node schemas](c179d4ccb7)

The `@invocation` decorator is extended with an optional `version` arg. When
the decorator runs, the version string is validated with the `semver` package
(previously an indirect dependency, now added explicitly to `pyproject.toml`);
an invalid version string raises `InvalidVersionError`.

All built-in nodes are set with `version="1.0.0"`.

The version is added to the OpenAPI Schema for consumption by the
client.
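
A minimal sketch of how a node opts in, mirroring the pattern used for the built-in nodes in this PR (the node below is hypothetical; an invalid version string raises `InvalidVersionError` when the decorator runs):

```python
from invokeai.app.invocations.baseinvocation import (
    BaseInvocation,
    InvocationContext,
    invocation,
)


# Hypothetical example node; only the `version` kwarg is new in this PR.
@invocation(
    "my_resize",
    title="My Resizer",
    tags=["resize", "image"],
    category="image",
    version="1.0.0",  # must be a valid semver string, e.g. "1.0.0" or "3.8.13"
)
class MyResizeInvocation(BaseInvocation):
    """Resizes an image"""

    def invoke(self, context: InvocationContext):
        ...  # real nodes build and return an output here
```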

[feat(ui): handle node versions](03de3e4f78)

- Node versions are now added to node templates
- Node data (including in workflows) includes the version of the node
- On loading a workflow, we check whether the node and template versions match
exactly; if not, a warning is logged to the console (a sketch of this check
follows the list)
- The node info icon (top-right corner of the node, which you may click to
open the notes editor) now shows the version and mentions any issues
- Some workflow validation logic has been shifted around and is now
executed in a redux listener
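
For reference, here is a minimal sketch of the exact-match check, simplified from `useDoNodeVersionsMatch` / `validateWorkflow` (the helper name and status values below are illustrative):

```typescript
import { compareVersions } from 'compare-versions';

type VersionStatus = 'match' | 'update-node' | 'update-app' | 'unknown';

const getVersionStatus = (
  nodeVersion?: string, // version stored in the workflow's node data
  templateVersion?: string // version reported by the installed node template
): VersionStatus => {
  if (!nodeVersion || !templateVersion) {
    return 'unknown'; // workflow predates versioning, or the template is missing
  }
  const cmp = compareVersions(nodeVersion, templateVersion);
  if (cmp < 0) {
    return 'update-node'; // workflow was saved with an older node
  }
  if (cmp > 0) {
    return 'update-app'; // workflow expects a newer node than the app provides
  }
  return 'match';
};
```

The notes tooltip makes the same comparison via `compare(data.version, nodeTemplate.version, '<')` / `'>'` to choose between the "update node" and "update app" messages.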

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Closes #4393

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

Loading an old workflow should display a warning toast, and the node info icon
should indicate that some action is needed.

## [optional] Are there any post deployment tasks we need to perform?

I've updated the default workflows:
- Bump workflow versions from 1.0 to 1.0.1
- Add versions for all nodes in the workflows
- Test workflows

[Default Workflows.zip](https://github.com/invoke-ai/InvokeAI/files/12511911/Default.Workflows.zip)

I'm not sure where these are being stored right now @Millu
blessedcoolant 2023-09-05 09:53:46 +12:00 committed by GitHub
commit 78cc5a7825
38 changed files with 506 additions and 166 deletions

View File

@ -244,8 +244,12 @@ copy-paste the template above.
We can use the `@invocation` decorator to provide some additional info to the
UI, like a custom title, tags and category.
We also encourage providing a version. This must be a
[semver](https://semver.org/) version string ("$MAJOR.$MINOR.$PATCH"). The UI
will let users know if their workflow is using a mismatched version of the node.
```python
@invocation("resize", title="My Resizer", tags=["resize", "image"], category="My Invocations")
@invocation("resize", title="My Resizer", tags=["resize", "image"], category="My Invocations", version="1.0.0")
class ResizeInvocation(BaseInvocation):
"""Resizes an image"""
@ -279,8 +283,6 @@ take a look a at our [contributing nodes overview](contributingNodes).
## Advanced
-->
### Custom Output Types
Like with custom inputs, sometimes you might find yourself needing custom

View File

@ -26,11 +26,16 @@ from typing import (
from pydantic import BaseModel, Field, validator
from pydantic.fields import Undefined, ModelField
from pydantic.typing import NoArgAnyCallable
import semver
if TYPE_CHECKING:
from ..services.invocation_services import InvocationServices
class InvalidVersionError(ValueError):
pass
class FieldDescriptions:
denoising_start = "When to start denoising, expressed a percentage of total steps"
denoising_end = "When to stop denoising, expressed a percentage of total steps"
@ -401,6 +406,9 @@ class UIConfigBase(BaseModel):
tags: Optional[list[str]] = Field(default_factory=None, description="The node's tags")
title: Optional[str] = Field(default=None, description="The node's display name")
category: Optional[str] = Field(default=None, description="The node's category")
version: Optional[str] = Field(
default=None, description='The node\'s version. Should be a valid semver string e.g. "1.0.0" or "3.8.13".'
)
class InvocationContext:
@ -499,6 +507,8 @@ class BaseInvocation(ABC, BaseModel):
schema["tags"] = uiconfig.tags
if uiconfig and hasattr(uiconfig, "category"):
schema["category"] = uiconfig.category
if uiconfig and hasattr(uiconfig, "version"):
schema["version"] = uiconfig.version
if "required" not in schema or not isinstance(schema["required"], list):
schema["required"] = list()
schema["required"].extend(["type", "id"])
@ -567,7 +577,11 @@ GenericBaseInvocation = TypeVar("GenericBaseInvocation", bound=BaseInvocation)
def invocation(
invocation_type: str, title: Optional[str] = None, tags: Optional[list[str]] = None, category: Optional[str] = None
invocation_type: str,
title: Optional[str] = None,
tags: Optional[list[str]] = None,
category: Optional[str] = None,
version: Optional[str] = None,
) -> Callable[[Type[GenericBaseInvocation]], Type[GenericBaseInvocation]]:
"""
Adds metadata to an invocation.
@ -594,6 +608,12 @@ def invocation(
cls.UIConfig.tags = tags
if category is not None:
cls.UIConfig.category = category
if version is not None:
try:
semver.Version.parse(version)
except ValueError as e:
raise InvalidVersionError(f'Invalid version string for node "{invocation_type}": "{version}"') from e
cls.UIConfig.version = version
# Add the invocation type to the pydantic model of the invocation
invocation_type_annotation = Literal[invocation_type] # type: ignore

View File

@ -10,7 +10,9 @@ from invokeai.app.util.misc import SEED_MAX, get_random_seed
from .baseinvocation import BaseInvocation, InputField, InvocationContext, invocation
@invocation("range", title="Integer Range", tags=["collection", "integer", "range"], category="collections")
@invocation(
"range", title="Integer Range", tags=["collection", "integer", "range"], category="collections", version="1.0.0"
)
class RangeInvocation(BaseInvocation):
"""Creates a range of numbers from start to stop with step"""
@ -33,6 +35,7 @@ class RangeInvocation(BaseInvocation):
title="Integer Range of Size",
tags=["collection", "integer", "size", "range"],
category="collections",
version="1.0.0",
)
class RangeOfSizeInvocation(BaseInvocation):
"""Creates a range from start to start + size with step"""
@ -50,6 +53,7 @@ class RangeOfSizeInvocation(BaseInvocation):
title="Random Range",
tags=["range", "integer", "random", "collection"],
category="collections",
version="1.0.0",
)
class RandomRangeInvocation(BaseInvocation):
"""Creates a collection of random numbers"""

View File

@ -44,7 +44,7 @@ class ConditioningFieldData:
# PerpNeg = "perp_neg"
@invocation("compel", title="Prompt", tags=["prompt", "compel"], category="conditioning")
@invocation("compel", title="Prompt", tags=["prompt", "compel"], category="conditioning", version="1.0.0")
class CompelInvocation(BaseInvocation):
"""Parse prompt using compel package to conditioning."""
@ -267,6 +267,7 @@ class SDXLPromptInvocationBase:
title="SDXL Prompt",
tags=["sdxl", "compel", "prompt"],
category="conditioning",
version="1.0.0",
)
class SDXLCompelPromptInvocation(BaseInvocation, SDXLPromptInvocationBase):
"""Parse prompt using compel package to conditioning."""
@ -351,6 +352,7 @@ class SDXLCompelPromptInvocation(BaseInvocation, SDXLPromptInvocationBase):
title="SDXL Refiner Prompt",
tags=["sdxl", "compel", "prompt"],
category="conditioning",
version="1.0.0",
)
class SDXLRefinerCompelPromptInvocation(BaseInvocation, SDXLPromptInvocationBase):
"""Parse prompt using compel package to conditioning."""
@ -403,7 +405,7 @@ class ClipSkipInvocationOutput(BaseInvocationOutput):
clip: ClipField = OutputField(default=None, description=FieldDescriptions.clip, title="CLIP")
@invocation("clip_skip", title="CLIP Skip", tags=["clipskip", "clip", "skip"], category="conditioning")
@invocation("clip_skip", title="CLIP Skip", tags=["clipskip", "clip", "skip"], category="conditioning", version="1.0.0")
class ClipSkipInvocation(BaseInvocation):
"""Skip layers in clip text_encoder model."""

View File

@ -95,7 +95,7 @@ class ControlOutput(BaseInvocationOutput):
control: ControlField = OutputField(description=FieldDescriptions.control)
@invocation("controlnet", title="ControlNet", tags=["controlnet"], category="controlnet")
@invocation("controlnet", title="ControlNet", tags=["controlnet"], category="controlnet", version="1.0.0")
class ControlNetInvocation(BaseInvocation):
"""Collects ControlNet info to pass to other nodes"""
@ -127,7 +127,9 @@ class ControlNetInvocation(BaseInvocation):
)
@invocation("image_processor", title="Base Image Processor", tags=["controlnet"], category="controlnet")
@invocation(
"image_processor", title="Base Image Processor", tags=["controlnet"], category="controlnet", version="1.0.0"
)
class ImageProcessorInvocation(BaseInvocation):
"""Base class for invocations that preprocess images for ControlNet"""
@ -171,6 +173,7 @@ class ImageProcessorInvocation(BaseInvocation):
title="Canny Processor",
tags=["controlnet", "canny"],
category="controlnet",
version="1.0.0",
)
class CannyImageProcessorInvocation(ImageProcessorInvocation):
"""Canny edge detection for ControlNet"""
@ -193,6 +196,7 @@ class CannyImageProcessorInvocation(ImageProcessorInvocation):
title="HED (softedge) Processor",
tags=["controlnet", "hed", "softedge"],
category="controlnet",
version="1.0.0",
)
class HedImageProcessorInvocation(ImageProcessorInvocation):
"""Applies HED edge detection to image"""
@ -221,6 +225,7 @@ class HedImageProcessorInvocation(ImageProcessorInvocation):
title="Lineart Processor",
tags=["controlnet", "lineart"],
category="controlnet",
version="1.0.0",
)
class LineartImageProcessorInvocation(ImageProcessorInvocation):
"""Applies line art processing to image"""
@ -242,6 +247,7 @@ class LineartImageProcessorInvocation(ImageProcessorInvocation):
title="Lineart Anime Processor",
tags=["controlnet", "lineart", "anime"],
category="controlnet",
version="1.0.0",
)
class LineartAnimeImageProcessorInvocation(ImageProcessorInvocation):
"""Applies line art anime processing to image"""
@ -264,6 +270,7 @@ class LineartAnimeImageProcessorInvocation(ImageProcessorInvocation):
title="Openpose Processor",
tags=["controlnet", "openpose", "pose"],
category="controlnet",
version="1.0.0",
)
class OpenposeImageProcessorInvocation(ImageProcessorInvocation):
"""Applies Openpose processing to image"""
@ -288,6 +295,7 @@ class OpenposeImageProcessorInvocation(ImageProcessorInvocation):
title="Midas Depth Processor",
tags=["controlnet", "midas"],
category="controlnet",
version="1.0.0",
)
class MidasDepthImageProcessorInvocation(ImageProcessorInvocation):
"""Applies Midas depth processing to image"""
@ -314,6 +322,7 @@ class MidasDepthImageProcessorInvocation(ImageProcessorInvocation):
title="Normal BAE Processor",
tags=["controlnet"],
category="controlnet",
version="1.0.0",
)
class NormalbaeImageProcessorInvocation(ImageProcessorInvocation):
"""Applies NormalBae processing to image"""
@ -329,7 +338,9 @@ class NormalbaeImageProcessorInvocation(ImageProcessorInvocation):
return processed_image
@invocation("mlsd_image_processor", title="MLSD Processor", tags=["controlnet", "mlsd"], category="controlnet")
@invocation(
"mlsd_image_processor", title="MLSD Processor", tags=["controlnet", "mlsd"], category="controlnet", version="1.0.0"
)
class MlsdImageProcessorInvocation(ImageProcessorInvocation):
"""Applies MLSD processing to image"""
@ -350,7 +361,9 @@ class MlsdImageProcessorInvocation(ImageProcessorInvocation):
return processed_image
@invocation("pidi_image_processor", title="PIDI Processor", tags=["controlnet", "pidi"], category="controlnet")
@invocation(
"pidi_image_processor", title="PIDI Processor", tags=["controlnet", "pidi"], category="controlnet", version="1.0.0"
)
class PidiImageProcessorInvocation(ImageProcessorInvocation):
"""Applies PIDI processing to image"""
@ -376,6 +389,7 @@ class PidiImageProcessorInvocation(ImageProcessorInvocation):
title="Content Shuffle Processor",
tags=["controlnet", "contentshuffle"],
category="controlnet",
version="1.0.0",
)
class ContentShuffleImageProcessorInvocation(ImageProcessorInvocation):
"""Applies content shuffle processing to image"""
@ -405,6 +419,7 @@ class ContentShuffleImageProcessorInvocation(ImageProcessorInvocation):
title="Zoe (Depth) Processor",
tags=["controlnet", "zoe", "depth"],
category="controlnet",
version="1.0.0",
)
class ZoeDepthImageProcessorInvocation(ImageProcessorInvocation):
"""Applies Zoe depth processing to image"""
@ -420,6 +435,7 @@ class ZoeDepthImageProcessorInvocation(ImageProcessorInvocation):
title="Mediapipe Face Processor",
tags=["controlnet", "mediapipe", "face"],
category="controlnet",
version="1.0.0",
)
class MediapipeFaceProcessorInvocation(ImageProcessorInvocation):
"""Applies mediapipe face processing to image"""
@ -442,6 +458,7 @@ class MediapipeFaceProcessorInvocation(ImageProcessorInvocation):
title="Leres (Depth) Processor",
tags=["controlnet", "leres", "depth"],
category="controlnet",
version="1.0.0",
)
class LeresImageProcessorInvocation(ImageProcessorInvocation):
"""Applies leres processing to image"""
@ -470,6 +487,7 @@ class LeresImageProcessorInvocation(ImageProcessorInvocation):
title="Tile Resample Processor",
tags=["controlnet", "tile"],
category="controlnet",
version="1.0.0",
)
class TileResamplerProcessorInvocation(ImageProcessorInvocation):
"""Tile resampler processor"""
@ -509,6 +527,7 @@ class TileResamplerProcessorInvocation(ImageProcessorInvocation):
title="Segment Anything Processor",
tags=["controlnet", "segmentanything"],
category="controlnet",
version="1.0.0",
)
class SegmentAnythingProcessorInvocation(ImageProcessorInvocation):
"""Applies segment anything processing to image"""

View File

@ -10,12 +10,7 @@ from invokeai.app.models.image import ImageCategory, ResourceOrigin
from .baseinvocation import BaseInvocation, InputField, InvocationContext, invocation
@invocation(
"cv_inpaint",
title="OpenCV Inpaint",
tags=["opencv", "inpaint"],
category="inpaint",
)
@invocation("cv_inpaint", title="OpenCV Inpaint", tags=["opencv", "inpaint"], category="inpaint", version="1.0.0")
class CvInpaintInvocation(BaseInvocation):
"""Simple inpaint using opencv."""

View File

@ -16,7 +16,7 @@ from ..models.image import ImageCategory, ResourceOrigin
from .baseinvocation import BaseInvocation, FieldDescriptions, InputField, InvocationContext, invocation
@invocation("show_image", title="Show Image", tags=["image"], category="image")
@invocation("show_image", title="Show Image", tags=["image"], category="image", version="1.0.0")
class ShowImageInvocation(BaseInvocation):
"""Displays a provided image using the OS image viewer, and passes it forward in the pipeline."""
@ -36,7 +36,7 @@ class ShowImageInvocation(BaseInvocation):
)
@invocation("blank_image", title="Blank Image", tags=["image"], category="image")
@invocation("blank_image", title="Blank Image", tags=["image"], category="image", version="1.0.0")
class BlankImageInvocation(BaseInvocation):
"""Creates a blank image and forwards it to the pipeline"""
@ -65,7 +65,7 @@ class BlankImageInvocation(BaseInvocation):
)
@invocation("img_crop", title="Crop Image", tags=["image", "crop"], category="image")
@invocation("img_crop", title="Crop Image", tags=["image", "crop"], category="image", version="1.0.0")
class ImageCropInvocation(BaseInvocation):
"""Crops an image to a specified box. The box can be outside of the image."""
@ -98,7 +98,7 @@ class ImageCropInvocation(BaseInvocation):
)
@invocation("img_paste", title="Paste Image", tags=["image", "paste"], category="image")
@invocation("img_paste", title="Paste Image", tags=["image", "paste"], category="image", version="1.0.0")
class ImagePasteInvocation(BaseInvocation):
"""Pastes an image into another image."""
@ -146,7 +146,7 @@ class ImagePasteInvocation(BaseInvocation):
)
@invocation("tomask", title="Mask from Alpha", tags=["image", "mask"], category="image")
@invocation("tomask", title="Mask from Alpha", tags=["image", "mask"], category="image", version="1.0.0")
class MaskFromAlphaInvocation(BaseInvocation):
"""Extracts the alpha channel of an image as a mask."""
@ -177,7 +177,7 @@ class MaskFromAlphaInvocation(BaseInvocation):
)
@invocation("img_mul", title="Multiply Images", tags=["image", "multiply"], category="image")
@invocation("img_mul", title="Multiply Images", tags=["image", "multiply"], category="image", version="1.0.0")
class ImageMultiplyInvocation(BaseInvocation):
"""Multiplies two images together using `PIL.ImageChops.multiply()`."""
@ -210,7 +210,7 @@ class ImageMultiplyInvocation(BaseInvocation):
IMAGE_CHANNELS = Literal["A", "R", "G", "B"]
@invocation("img_chan", title="Extract Image Channel", tags=["image", "channel"], category="image")
@invocation("img_chan", title="Extract Image Channel", tags=["image", "channel"], category="image", version="1.0.0")
class ImageChannelInvocation(BaseInvocation):
"""Gets a channel from an image."""
@ -242,7 +242,7 @@ class ImageChannelInvocation(BaseInvocation):
IMAGE_MODES = Literal["L", "RGB", "RGBA", "CMYK", "YCbCr", "LAB", "HSV", "I", "F"]
@invocation("img_conv", title="Convert Image Mode", tags=["image", "convert"], category="image")
@invocation("img_conv", title="Convert Image Mode", tags=["image", "convert"], category="image", version="1.0.0")
class ImageConvertInvocation(BaseInvocation):
"""Converts an image to a different mode."""
@ -271,7 +271,7 @@ class ImageConvertInvocation(BaseInvocation):
)
@invocation("img_blur", title="Blur Image", tags=["image", "blur"], category="image")
@invocation("img_blur", title="Blur Image", tags=["image", "blur"], category="image", version="1.0.0")
class ImageBlurInvocation(BaseInvocation):
"""Blurs an image"""
@ -325,7 +325,7 @@ PIL_RESAMPLING_MAP = {
}
@invocation("img_resize", title="Resize Image", tags=["image", "resize"], category="image")
@invocation("img_resize", title="Resize Image", tags=["image", "resize"], category="image", version="1.0.0")
class ImageResizeInvocation(BaseInvocation):
"""Resizes an image to specific dimensions"""
@ -365,7 +365,7 @@ class ImageResizeInvocation(BaseInvocation):
)
@invocation("img_scale", title="Scale Image", tags=["image", "scale"], category="image")
@invocation("img_scale", title="Scale Image", tags=["image", "scale"], category="image", version="1.0.0")
class ImageScaleInvocation(BaseInvocation):
"""Scales an image by a factor"""
@ -406,7 +406,7 @@ class ImageScaleInvocation(BaseInvocation):
)
@invocation("img_lerp", title="Lerp Image", tags=["image", "lerp"], category="image")
@invocation("img_lerp", title="Lerp Image", tags=["image", "lerp"], category="image", version="1.0.0")
class ImageLerpInvocation(BaseInvocation):
"""Linear interpolation of all pixels of an image"""
@ -439,7 +439,7 @@ class ImageLerpInvocation(BaseInvocation):
)
@invocation("img_ilerp", title="Inverse Lerp Image", tags=["image", "ilerp"], category="image")
@invocation("img_ilerp", title="Inverse Lerp Image", tags=["image", "ilerp"], category="image", version="1.0.0")
class ImageInverseLerpInvocation(BaseInvocation):
"""Inverse linear interpolation of all pixels of an image"""
@ -472,7 +472,7 @@ class ImageInverseLerpInvocation(BaseInvocation):
)
@invocation("img_nsfw", title="Blur NSFW Image", tags=["image", "nsfw"], category="image")
@invocation("img_nsfw", title="Blur NSFW Image", tags=["image", "nsfw"], category="image", version="1.0.0")
class ImageNSFWBlurInvocation(BaseInvocation):
"""Add blur to NSFW-flagged images"""
@ -517,7 +517,9 @@ class ImageNSFWBlurInvocation(BaseInvocation):
return caution.resize((caution.width // 2, caution.height // 2))
@invocation("img_watermark", title="Add Invisible Watermark", tags=["image", "watermark"], category="image")
@invocation(
"img_watermark", title="Add Invisible Watermark", tags=["image", "watermark"], category="image", version="1.0.0"
)
class ImageWatermarkInvocation(BaseInvocation):
"""Add an invisible watermark to an image"""
@ -548,7 +550,7 @@ class ImageWatermarkInvocation(BaseInvocation):
)
@invocation("mask_edge", title="Mask Edge", tags=["image", "mask", "inpaint"], category="image")
@invocation("mask_edge", title="Mask Edge", tags=["image", "mask", "inpaint"], category="image", version="1.0.0")
class MaskEdgeInvocation(BaseInvocation):
"""Applies an edge mask to an image"""
@ -593,7 +595,9 @@ class MaskEdgeInvocation(BaseInvocation):
)
@invocation("mask_combine", title="Combine Masks", tags=["image", "mask", "multiply"], category="image")
@invocation(
"mask_combine", title="Combine Masks", tags=["image", "mask", "multiply"], category="image", version="1.0.0"
)
class MaskCombineInvocation(BaseInvocation):
"""Combine two masks together by multiplying them using `PIL.ImageChops.multiply()`."""
@ -623,7 +627,7 @@ class MaskCombineInvocation(BaseInvocation):
)
@invocation("color_correct", title="Color Correct", tags=["image", "color"], category="image")
@invocation("color_correct", title="Color Correct", tags=["image", "color"], category="image", version="1.0.0")
class ColorCorrectInvocation(BaseInvocation):
"""
Shifts the colors of a target image to match the reference image, optionally
@ -728,7 +732,7 @@ class ColorCorrectInvocation(BaseInvocation):
)
@invocation("img_hue_adjust", title="Adjust Image Hue", tags=["image", "hue"], category="image")
@invocation("img_hue_adjust", title="Adjust Image Hue", tags=["image", "hue"], category="image", version="1.0.0")
class ImageHueAdjustmentInvocation(BaseInvocation):
"""Adjusts the Hue of an image."""
@ -774,6 +778,7 @@ class ImageHueAdjustmentInvocation(BaseInvocation):
title="Adjust Image Luminosity",
tags=["image", "luminosity", "hsl"],
category="image",
version="1.0.0",
)
class ImageLuminosityAdjustmentInvocation(BaseInvocation):
"""Adjusts the Luminosity (Value) of an image."""
@ -826,6 +831,7 @@ class ImageLuminosityAdjustmentInvocation(BaseInvocation):
title="Adjust Image Saturation",
tags=["image", "saturation", "hsl"],
category="image",
version="1.0.0",
)
class ImageSaturationAdjustmentInvocation(BaseInvocation):
"""Adjusts the Saturation of an image."""

View File

@ -116,7 +116,7 @@ def tile_fill_missing(im: Image.Image, tile_size: int = 16, seed: Optional[int]
return si
@invocation("infill_rgba", title="Solid Color Infill", tags=["image", "inpaint"], category="inpaint")
@invocation("infill_rgba", title="Solid Color Infill", tags=["image", "inpaint"], category="inpaint", version="1.0.0")
class InfillColorInvocation(BaseInvocation):
"""Infills transparent areas of an image with a solid color"""
@ -151,7 +151,7 @@ class InfillColorInvocation(BaseInvocation):
)
@invocation("infill_tile", title="Tile Infill", tags=["image", "inpaint"], category="inpaint")
@invocation("infill_tile", title="Tile Infill", tags=["image", "inpaint"], category="inpaint", version="1.0.0")
class InfillTileInvocation(BaseInvocation):
"""Infills transparent areas of an image with tiles of the image"""
@ -187,7 +187,9 @@ class InfillTileInvocation(BaseInvocation):
)
@invocation("infill_patchmatch", title="PatchMatch Infill", tags=["image", "inpaint"], category="inpaint")
@invocation(
"infill_patchmatch", title="PatchMatch Infill", tags=["image", "inpaint"], category="inpaint", version="1.0.0"
)
class InfillPatchMatchInvocation(BaseInvocation):
"""Infills transparent areas of an image using the PatchMatch algorithm"""
@ -218,7 +220,7 @@ class InfillPatchMatchInvocation(BaseInvocation):
)
@invocation("infill_lama", title="LaMa Infill", tags=["image", "inpaint"], category="inpaint")
@invocation("infill_lama", title="LaMa Infill", tags=["image", "inpaint"], category="inpaint", version="1.0.0")
class LaMaInfillInvocation(BaseInvocation):
"""Infills transparent areas of an image using the LaMa model"""

View File

@ -74,7 +74,7 @@ class SchedulerOutput(BaseInvocationOutput):
scheduler: SAMPLER_NAME_VALUES = OutputField(description=FieldDescriptions.scheduler, ui_type=UIType.Scheduler)
@invocation("scheduler", title="Scheduler", tags=["scheduler"], category="latents")
@invocation("scheduler", title="Scheduler", tags=["scheduler"], category="latents", version="1.0.0")
class SchedulerInvocation(BaseInvocation):
"""Selects a scheduler."""
@ -86,7 +86,9 @@ class SchedulerInvocation(BaseInvocation):
return SchedulerOutput(scheduler=self.scheduler)
@invocation("create_denoise_mask", title="Create Denoise Mask", tags=["mask", "denoise"], category="latents")
@invocation(
"create_denoise_mask", title="Create Denoise Mask", tags=["mask", "denoise"], category="latents", version="1.0.0"
)
class CreateDenoiseMaskInvocation(BaseInvocation):
"""Creates mask for denoising model run."""
@ -186,6 +188,7 @@ def get_scheduler(
title="Denoise Latents",
tags=["latents", "denoise", "txt2img", "t2i", "t2l", "img2img", "i2i", "l2l"],
category="latents",
version="1.0.0",
)
class DenoiseLatentsInvocation(BaseInvocation):
"""Denoises noisy latents to decodable images"""
@ -544,7 +547,9 @@ class DenoiseLatentsInvocation(BaseInvocation):
return build_latents_output(latents_name=name, latents=result_latents, seed=seed)
@invocation("l2i", title="Latents to Image", tags=["latents", "image", "vae", "l2i"], category="latents")
@invocation(
"l2i", title="Latents to Image", tags=["latents", "image", "vae", "l2i"], category="latents", version="1.0.0"
)
class LatentsToImageInvocation(BaseInvocation):
"""Generates an image from latents."""
@ -641,7 +646,7 @@ class LatentsToImageInvocation(BaseInvocation):
LATENTS_INTERPOLATION_MODE = Literal["nearest", "linear", "bilinear", "bicubic", "trilinear", "area", "nearest-exact"]
@invocation("lresize", title="Resize Latents", tags=["latents", "resize"], category="latents")
@invocation("lresize", title="Resize Latents", tags=["latents", "resize"], category="latents", version="1.0.0")
class ResizeLatentsInvocation(BaseInvocation):
"""Resizes latents to explicit width/height (in pixels). Provided dimensions are floor-divided by 8."""
@ -685,7 +690,7 @@ class ResizeLatentsInvocation(BaseInvocation):
return build_latents_output(latents_name=name, latents=resized_latents, seed=self.latents.seed)
@invocation("lscale", title="Scale Latents", tags=["latents", "resize"], category="latents")
@invocation("lscale", title="Scale Latents", tags=["latents", "resize"], category="latents", version="1.0.0")
class ScaleLatentsInvocation(BaseInvocation):
"""Scales latents by a given factor."""
@ -721,7 +726,9 @@ class ScaleLatentsInvocation(BaseInvocation):
return build_latents_output(latents_name=name, latents=resized_latents, seed=self.latents.seed)
@invocation("i2l", title="Image to Latents", tags=["latents", "image", "vae", "i2l"], category="latents")
@invocation(
"i2l", title="Image to Latents", tags=["latents", "image", "vae", "i2l"], category="latents", version="1.0.0"
)
class ImageToLatentsInvocation(BaseInvocation):
"""Encodes an image into latents."""
@ -801,7 +808,7 @@ class ImageToLatentsInvocation(BaseInvocation):
return build_latents_output(latents_name=name, latents=latents, seed=None)
@invocation("lblend", title="Blend Latents", tags=["latents", "blend"], category="latents")
@invocation("lblend", title="Blend Latents", tags=["latents", "blend"], category="latents", version="1.0.0")
class BlendLatentsInvocation(BaseInvocation):
"""Blend two latents using a given alpha. Latents must have same size."""

View File

@ -7,7 +7,7 @@ from invokeai.app.invocations.primitives import IntegerOutput
from .baseinvocation import BaseInvocation, FieldDescriptions, InputField, InvocationContext, invocation
@invocation("add", title="Add Integers", tags=["math", "add"], category="math")
@invocation("add", title="Add Integers", tags=["math", "add"], category="math", version="1.0.0")
class AddInvocation(BaseInvocation):
"""Adds two numbers"""
@ -18,7 +18,7 @@ class AddInvocation(BaseInvocation):
return IntegerOutput(value=self.a + self.b)
@invocation("sub", title="Subtract Integers", tags=["math", "subtract"], category="math")
@invocation("sub", title="Subtract Integers", tags=["math", "subtract"], category="math", version="1.0.0")
class SubtractInvocation(BaseInvocation):
"""Subtracts two numbers"""
@ -29,7 +29,7 @@ class SubtractInvocation(BaseInvocation):
return IntegerOutput(value=self.a - self.b)
@invocation("mul", title="Multiply Integers", tags=["math", "multiply"], category="math")
@invocation("mul", title="Multiply Integers", tags=["math", "multiply"], category="math", version="1.0.0")
class MultiplyInvocation(BaseInvocation):
"""Multiplies two numbers"""
@ -40,7 +40,7 @@ class MultiplyInvocation(BaseInvocation):
return IntegerOutput(value=self.a * self.b)
@invocation("div", title="Divide Integers", tags=["math", "divide"], category="math")
@invocation("div", title="Divide Integers", tags=["math", "divide"], category="math", version="1.0.0")
class DivideInvocation(BaseInvocation):
"""Divides two numbers"""
@ -51,7 +51,7 @@ class DivideInvocation(BaseInvocation):
return IntegerOutput(value=int(self.a / self.b))
@invocation("rand_int", title="Random Integer", tags=["math", "random"], category="math")
@invocation("rand_int", title="Random Integer", tags=["math", "random"], category="math", version="1.0.0")
class RandomIntInvocation(BaseInvocation):
"""Outputs a single random integer."""

View File

@ -98,7 +98,9 @@ class MetadataAccumulatorOutput(BaseInvocationOutput):
metadata: CoreMetadata = OutputField(description="The core metadata for the image")
@invocation("metadata_accumulator", title="Metadata Accumulator", tags=["metadata"], category="metadata")
@invocation(
"metadata_accumulator", title="Metadata Accumulator", tags=["metadata"], category="metadata", version="1.0.0"
)
class MetadataAccumulatorInvocation(BaseInvocation):
"""Outputs a Core Metadata Object"""

View File

@ -73,7 +73,7 @@ class LoRAModelField(BaseModel):
base_model: BaseModelType = Field(description="Base model")
@invocation("main_model_loader", title="Main Model", tags=["model"], category="model")
@invocation("main_model_loader", title="Main Model", tags=["model"], category="model", version="1.0.0")
class MainModelLoaderInvocation(BaseInvocation):
"""Loads a main model, outputting its submodels."""
@ -173,7 +173,7 @@ class LoraLoaderOutput(BaseInvocationOutput):
clip: Optional[ClipField] = OutputField(default=None, description=FieldDescriptions.clip, title="CLIP")
@invocation("lora_loader", title="LoRA", tags=["model"], category="model")
@invocation("lora_loader", title="LoRA", tags=["model"], category="model", version="1.0.0")
class LoraLoaderInvocation(BaseInvocation):
"""Apply selected lora to unet and text_encoder."""
@ -244,7 +244,7 @@ class SDXLLoraLoaderOutput(BaseInvocationOutput):
clip2: Optional[ClipField] = OutputField(default=None, description=FieldDescriptions.clip, title="CLIP 2")
@invocation("sdxl_lora_loader", title="SDXL LoRA", tags=["lora", "model"], category="model")
@invocation("sdxl_lora_loader", title="SDXL LoRA", tags=["lora", "model"], category="model", version="1.0.0")
class SDXLLoraLoaderInvocation(BaseInvocation):
"""Apply selected lora to unet and text_encoder."""
@ -338,7 +338,7 @@ class VaeLoaderOutput(BaseInvocationOutput):
vae: VaeField = OutputField(description=FieldDescriptions.vae, title="VAE")
@invocation("vae_loader", title="VAE", tags=["vae", "model"], category="model")
@invocation("vae_loader", title="VAE", tags=["vae", "model"], category="model", version="1.0.0")
class VaeLoaderInvocation(BaseInvocation):
"""Loads a VAE model, outputting a VaeLoaderOutput"""
@ -376,7 +376,7 @@ class SeamlessModeOutput(BaseInvocationOutput):
vae: Optional[VaeField] = OutputField(description=FieldDescriptions.vae, title="VAE")
@invocation("seamless", title="Seamless", tags=["seamless", "model"], category="model")
@invocation("seamless", title="Seamless", tags=["seamless", "model"], category="model", version="1.0.0")
class SeamlessModeInvocation(BaseInvocation):
"""Applies the seamless transformation to the Model UNet and VAE."""

View File

@ -78,7 +78,7 @@ def build_noise_output(latents_name: str, latents: torch.Tensor, seed: int):
)
@invocation("noise", title="Noise", tags=["latents", "noise"], category="latents")
@invocation("noise", title="Noise", tags=["latents", "noise"], category="latents", version="1.0.0")
class NoiseInvocation(BaseInvocation):
"""Generates latent noise."""

View File

@ -56,7 +56,7 @@ ORT_TO_NP_TYPE = {
PRECISION_VALUES = Literal[tuple(list(ORT_TO_NP_TYPE.keys()))]
@invocation("prompt_onnx", title="ONNX Prompt (Raw)", tags=["prompt", "onnx"], category="conditioning")
@invocation("prompt_onnx", title="ONNX Prompt (Raw)", tags=["prompt", "onnx"], category="conditioning", version="1.0.0")
class ONNXPromptInvocation(BaseInvocation):
prompt: str = InputField(default="", description=FieldDescriptions.raw_prompt, ui_component=UIComponent.Textarea)
clip: ClipField = InputField(description=FieldDescriptions.clip, input=Input.Connection)
@ -143,6 +143,7 @@ class ONNXPromptInvocation(BaseInvocation):
title="ONNX Text to Latents",
tags=["latents", "inference", "txt2img", "onnx"],
category="latents",
version="1.0.0",
)
class ONNXTextToLatentsInvocation(BaseInvocation):
"""Generates latents from conditionings."""
@ -319,6 +320,7 @@ class ONNXTextToLatentsInvocation(BaseInvocation):
title="ONNX Latents to Image",
tags=["latents", "image", "vae", "onnx"],
category="image",
version="1.0.0",
)
class ONNXLatentsToImageInvocation(BaseInvocation):
"""Generates an image from latents."""
@ -403,7 +405,7 @@ class OnnxModelField(BaseModel):
model_type: ModelType = Field(description="Model Type")
@invocation("onnx_model_loader", title="ONNX Main Model", tags=["onnx", "model"], category="model")
@invocation("onnx_model_loader", title="ONNX Main Model", tags=["onnx", "model"], category="model", version="1.0.0")
class OnnxModelLoaderInvocation(BaseInvocation):
"""Loads a main model, outputting its submodels."""

View File

@ -45,7 +45,7 @@ from invokeai.app.invocations.primitives import FloatCollectionOutput
from .baseinvocation import BaseInvocation, InputField, InvocationContext, invocation
@invocation("float_range", title="Float Range", tags=["math", "range"], category="math")
@invocation("float_range", title="Float Range", tags=["math", "range"], category="math", version="1.0.0")
class FloatLinearRangeInvocation(BaseInvocation):
"""Creates a range"""
@ -96,7 +96,7 @@ EASING_FUNCTION_KEYS = Literal[tuple(list(EASING_FUNCTIONS_MAP.keys()))]
# actually I think for now could just use CollectionOutput (which is list[Any]
@invocation("step_param_easing", title="Step Param Easing", tags=["step", "easing"], category="step")
@invocation("step_param_easing", title="Step Param Easing", tags=["step", "easing"], category="step", version="1.0.0")
class StepParamEasingInvocation(BaseInvocation):
"""Experimental per-step parameter easing for denoising steps"""

View File

@ -44,7 +44,9 @@ class BooleanCollectionOutput(BaseInvocationOutput):
)
@invocation("boolean", title="Boolean Primitive", tags=["primitives", "boolean"], category="primitives")
@invocation(
"boolean", title="Boolean Primitive", tags=["primitives", "boolean"], category="primitives", version="1.0.0"
)
class BooleanInvocation(BaseInvocation):
"""A boolean primitive value"""
@ -59,6 +61,7 @@ class BooleanInvocation(BaseInvocation):
title="Boolean Collection Primitive",
tags=["primitives", "boolean", "collection"],
category="primitives",
version="1.0.0",
)
class BooleanCollectionInvocation(BaseInvocation):
"""A collection of boolean primitive values"""
@ -90,7 +93,9 @@ class IntegerCollectionOutput(BaseInvocationOutput):
)
@invocation("integer", title="Integer Primitive", tags=["primitives", "integer"], category="primitives")
@invocation(
"integer", title="Integer Primitive", tags=["primitives", "integer"], category="primitives", version="1.0.0"
)
class IntegerInvocation(BaseInvocation):
"""An integer primitive value"""
@ -105,6 +110,7 @@ class IntegerInvocation(BaseInvocation):
title="Integer Collection Primitive",
tags=["primitives", "integer", "collection"],
category="primitives",
version="1.0.0",
)
class IntegerCollectionInvocation(BaseInvocation):
"""A collection of integer primitive values"""
@ -136,7 +142,7 @@ class FloatCollectionOutput(BaseInvocationOutput):
)
@invocation("float", title="Float Primitive", tags=["primitives", "float"], category="primitives")
@invocation("float", title="Float Primitive", tags=["primitives", "float"], category="primitives", version="1.0.0")
class FloatInvocation(BaseInvocation):
"""A float primitive value"""
@ -151,6 +157,7 @@ class FloatInvocation(BaseInvocation):
title="Float Collection Primitive",
tags=["primitives", "float", "collection"],
category="primitives",
version="1.0.0",
)
class FloatCollectionInvocation(BaseInvocation):
"""A collection of float primitive values"""
@ -182,7 +189,7 @@ class StringCollectionOutput(BaseInvocationOutput):
)
@invocation("string", title="String Primitive", tags=["primitives", "string"], category="primitives")
@invocation("string", title="String Primitive", tags=["primitives", "string"], category="primitives", version="1.0.0")
class StringInvocation(BaseInvocation):
"""A string primitive value"""
@ -197,6 +204,7 @@ class StringInvocation(BaseInvocation):
title="String Collection Primitive",
tags=["primitives", "string", "collection"],
category="primitives",
version="1.0.0",
)
class StringCollectionInvocation(BaseInvocation):
"""A collection of string primitive values"""
@ -236,7 +244,7 @@ class ImageCollectionOutput(BaseInvocationOutput):
)
@invocation("image", title="Image Primitive", tags=["primitives", "image"], category="primitives")
@invocation("image", title="Image Primitive", tags=["primitives", "image"], category="primitives", version="1.0.0")
class ImageInvocation(BaseInvocation):
"""An image primitive value"""
@ -257,6 +265,7 @@ class ImageInvocation(BaseInvocation):
title="Image Collection Primitive",
tags=["primitives", "image", "collection"],
category="primitives",
version="1.0.0",
)
class ImageCollectionInvocation(BaseInvocation):
"""A collection of image primitive values"""
@ -318,7 +327,9 @@ class LatentsCollectionOutput(BaseInvocationOutput):
)
@invocation("latents", title="Latents Primitive", tags=["primitives", "latents"], category="primitives")
@invocation(
"latents", title="Latents Primitive", tags=["primitives", "latents"], category="primitives", version="1.0.0"
)
class LatentsInvocation(BaseInvocation):
"""A latents tensor primitive value"""
@ -335,6 +346,7 @@ class LatentsInvocation(BaseInvocation):
title="Latents Collection Primitive",
tags=["primitives", "latents", "collection"],
category="primitives",
version="1.0.0",
)
class LatentsCollectionInvocation(BaseInvocation):
"""A collection of latents tensor primitive values"""
@ -388,7 +400,7 @@ class ColorCollectionOutput(BaseInvocationOutput):
)
@invocation("color", title="Color Primitive", tags=["primitives", "color"], category="primitives")
@invocation("color", title="Color Primitive", tags=["primitives", "color"], category="primitives", version="1.0.0")
class ColorInvocation(BaseInvocation):
"""A color primitive value"""
@ -430,6 +442,7 @@ class ConditioningCollectionOutput(BaseInvocationOutput):
title="Conditioning Primitive",
tags=["primitives", "conditioning"],
category="primitives",
version="1.0.0",
)
class ConditioningInvocation(BaseInvocation):
"""A conditioning tensor primitive value"""
@ -445,6 +458,7 @@ class ConditioningInvocation(BaseInvocation):
title="Conditioning Collection Primitive",
tags=["primitives", "conditioning", "collection"],
category="primitives",
version="1.0.0",
)
class ConditioningCollectionInvocation(BaseInvocation):
"""A collection of conditioning tensor primitive values"""

View File

@ -10,7 +10,7 @@ from invokeai.app.invocations.primitives import StringCollectionOutput
from .baseinvocation import BaseInvocation, InputField, InvocationContext, UIComponent, invocation
@invocation("dynamic_prompt", title="Dynamic Prompt", tags=["prompt", "collection"], category="prompt")
@invocation("dynamic_prompt", title="Dynamic Prompt", tags=["prompt", "collection"], category="prompt", version="1.0.0")
class DynamicPromptInvocation(BaseInvocation):
"""Parses a prompt using adieyal/dynamicprompts' random or combinatorial generator"""
@ -29,7 +29,7 @@ class DynamicPromptInvocation(BaseInvocation):
return StringCollectionOutput(collection=prompts)
@invocation("prompt_from_file", title="Prompts from File", tags=["prompt", "file"], category="prompt")
@invocation("prompt_from_file", title="Prompts from File", tags=["prompt", "file"], category="prompt", version="1.0.0")
class PromptsFromFileInvocation(BaseInvocation):
"""Loads prompts from a text file"""

View File

@ -33,7 +33,7 @@ class SDXLRefinerModelLoaderOutput(BaseInvocationOutput):
vae: VaeField = OutputField(description=FieldDescriptions.vae, title="VAE")
@invocation("sdxl_model_loader", title="SDXL Main Model", tags=["model", "sdxl"], category="model")
@invocation("sdxl_model_loader", title="SDXL Main Model", tags=["model", "sdxl"], category="model", version="1.0.0")
class SDXLModelLoaderInvocation(BaseInvocation):
"""Loads an sdxl base model, outputting its submodels."""
@ -119,6 +119,7 @@ class SDXLModelLoaderInvocation(BaseInvocation):
title="SDXL Refiner Model",
tags=["model", "sdxl", "refiner"],
category="model",
version="1.0.0",
)
class SDXLRefinerModelLoaderInvocation(BaseInvocation):
"""Loads an sdxl refiner model, outputting its submodels."""

View File

@ -23,7 +23,7 @@ ESRGAN_MODELS = Literal[
]
@invocation("esrgan", title="Upscale (RealESRGAN)", tags=["esrgan", "upscale"], category="esrgan")
@invocation("esrgan", title="Upscale (RealESRGAN)", tags=["esrgan", "upscale"], category="esrgan", version="1.0.0")
class ESRGANInvocation(BaseInvocation):
"""Upscales an image using RealESRGAN."""

View File

@ -75,6 +75,7 @@
"@reduxjs/toolkit": "^1.9.5",
"@roarr/browser-log-writer": "^1.1.5",
"@stevebel/png": "^1.5.1",
"compare-versions": "^6.1.0",
"dateformat": "^5.0.3",
"formik": "^2.4.3",
"framer-motion": "^10.16.1",

View File

@ -84,6 +84,7 @@ import { addUserInvokedCanvasListener } from './listeners/userInvokedCanvas';
import { addUserInvokedImageToImageListener } from './listeners/userInvokedImageToImage';
import { addUserInvokedNodesListener } from './listeners/userInvokedNodes';
import { addUserInvokedTextToImageListener } from './listeners/userInvokedTextToImage';
import { addWorkflowLoadedListener } from './listeners/workflowLoaded';
export const listenerMiddleware = createListenerMiddleware();
@ -202,6 +203,9 @@ addBoardIdSelectedListener();
// Node schemas
addReceivedOpenAPISchemaListener();
// Workflows
addWorkflowLoadedListener();
// DND
addImageDroppedListener();

View File

@ -0,0 +1,55 @@
import { logger } from 'app/logging/logger';
import { workflowLoadRequested } from 'features/nodes/store/actions';
import { workflowLoaded } from 'features/nodes/store/nodesSlice';
import { $flow } from 'features/nodes/store/reactFlowInstance';
import { validateWorkflow } from 'features/nodes/util/validateWorkflow';
import { addToast } from 'features/system/store/systemSlice';
import { makeToast } from 'features/system/util/makeToast';
import { setActiveTab } from 'features/ui/store/uiSlice';
import { startAppListening } from '..';
export const addWorkflowLoadedListener = () => {
startAppListening({
actionCreator: workflowLoadRequested,
effect: (action, { dispatch, getState }) => {
const log = logger('nodes');
const workflow = action.payload;
const nodeTemplates = getState().nodes.nodeTemplates;
const { workflow: validatedWorkflow, errors } = validateWorkflow(
workflow,
nodeTemplates
);
dispatch(workflowLoaded(validatedWorkflow));
if (!errors.length) {
dispatch(
addToast(
makeToast({
title: 'Workflow Loaded',
status: 'success',
})
)
);
} else {
dispatch(
addToast(
makeToast({
title: 'Workflow Loaded with Warnings',
status: 'warning',
})
)
);
errors.forEach(({ message, ...rest }) => {
log.warn(rest, message);
});
}
dispatch(setActiveTab('nodes'));
requestAnimationFrame(() => {
$flow.get()?.fitView();
});
},
});
};

View File

@ -17,16 +17,13 @@ import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import IAIIconButton from 'common/components/IAIIconButton';
import { DeleteImageButton } from 'features/deleteImageModal/components/DeleteImageButton';
import { imagesToDeleteSelected } from 'features/deleteImageModal/store/slice';
import { workflowLoaded } from 'features/nodes/store/nodesSlice';
import { workflowLoadRequested } from 'features/nodes/store/actions';
import ParamUpscalePopover from 'features/parameters/components/Parameters/Upscale/ParamUpscaleSettings';
import { useRecallParameters } from 'features/parameters/hooks/useRecallParameters';
import { initialImageSelected } from 'features/parameters/store/actions';
import { useFeatureStatus } from 'features/system/hooks/useFeatureStatus';
import { addToast } from 'features/system/store/systemSlice';
import { makeToast } from 'features/system/util/makeToast';
import { activeTabNameSelector } from 'features/ui/store/uiSelectors';
import {
setActiveTab,
setShouldShowImageDetails,
setShouldShowProgressInViewer,
} from 'features/ui/store/uiSlice';
@ -124,16 +121,7 @@ const CurrentImageButtons = (props: CurrentImageButtonsProps) => {
if (!workflow) {
return;
}
dispatch(workflowLoaded(workflow));
dispatch(setActiveTab('nodes'));
dispatch(
addToast(
makeToast({
title: 'Workflow Loaded',
status: 'success',
})
)
);
dispatch(workflowLoadRequested(workflow));
}, [dispatch, workflow]);
const handleClickUseAllParameters = useCallback(() => {

View File

@ -7,12 +7,9 @@ import {
isModalOpenChanged,
} from 'features/changeBoardModal/store/slice';
import { imagesToDeleteSelected } from 'features/deleteImageModal/store/slice';
import { workflowLoaded } from 'features/nodes/store/nodesSlice';
import { useRecallParameters } from 'features/parameters/hooks/useRecallParameters';
import { initialImageSelected } from 'features/parameters/store/actions';
import { useFeatureStatus } from 'features/system/hooks/useFeatureStatus';
import { addToast } from 'features/system/store/systemSlice';
import { makeToast } from 'features/system/util/makeToast';
import { useCopyImageToClipboard } from 'features/ui/hooks/useCopyImageToClipboard';
import { setActiveTab } from 'features/ui/store/uiSlice';
import { memo, useCallback } from 'react';
@ -36,6 +33,7 @@ import {
} from 'services/api/endpoints/images';
import { ImageDTO } from 'services/api/types';
import { sentImageToCanvas, sentImageToImg2Img } from '../../store/actions';
import { workflowLoadRequested } from 'features/nodes/store/actions';
type SingleSelectionMenuItemsProps = {
imageDTO: ImageDTO;
@ -102,16 +100,7 @@ const SingleSelectionMenuItems = (props: SingleSelectionMenuItemsProps) => {
if (!workflow) {
return;
}
dispatch(workflowLoaded(workflow));
dispatch(setActiveTab('nodes'));
dispatch(
addToast(
makeToast({
title: 'Workflow Loaded',
status: 'success',
})
)
);
dispatch(workflowLoadRequested(workflow));
}, [dispatch, workflow]);
const handleSendToImageToImage = useCallback(() => {

View File

@ -3,6 +3,7 @@ import { createSelector } from '@reduxjs/toolkit';
import { stateSelector } from 'app/store/store';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import { defaultSelectorOptions } from 'app/store/util/defaultMemoizeOptions';
import { $flow } from 'features/nodes/store/reactFlowInstance';
import { contextMenusClosed } from 'features/ui/store/uiSlice';
import { useCallback } from 'react';
import { useHotkeys } from 'react-hotkeys-hook';
@ -13,6 +14,7 @@ import {
OnConnectStart,
OnEdgesChange,
OnEdgesDelete,
OnInit,
OnMoveEnd,
OnNodesChange,
OnNodesDelete,
@ -147,6 +149,11 @@ export const Flow = () => {
dispatch(contextMenusClosed());
}, [dispatch]);
const onInit: OnInit = useCallback((flow) => {
$flow.set(flow);
flow.fitView();
}, []);
useHotkeys(['Ctrl+c', 'Meta+c'], (e) => {
e.preventDefault();
dispatch(selectionCopied());
@ -170,6 +177,7 @@ export const Flow = () => {
edgeTypes={edgeTypes}
nodes={nodes}
edges={edges}
onInit={onInit}
onNodesChange={onNodesChange}
onEdgesChange={onEdgesChange}
onEdgesDelete={onEdgesDelete}

View File

@ -12,6 +12,7 @@ import {
Tooltip,
useDisclosure,
} from '@chakra-ui/react';
import { compare } from 'compare-versions';
import { useNodeData } from 'features/nodes/hooks/useNodeData';
import { useNodeLabel } from 'features/nodes/hooks/useNodeLabel';
import { useNodeTemplate } from 'features/nodes/hooks/useNodeTemplate';
@ -20,6 +21,7 @@ import { isInvocationNodeData } from 'features/nodes/types/types';
import { memo, useMemo } from 'react';
import { FaInfoCircle } from 'react-icons/fa';
import NotesTextarea from './NotesTextarea';
import { useDoNodeVersionsMatch } from 'features/nodes/hooks/useDoNodeVersionsMatch';
interface Props {
nodeId: string;
@ -29,6 +31,7 @@ const InvocationNodeNotes = ({ nodeId }: Props) => {
const { isOpen, onOpen, onClose } = useDisclosure();
const label = useNodeLabel(nodeId);
const title = useNodeTemplateTitle(nodeId);
const doVersionsMatch = useDoNodeVersionsMatch(nodeId);
return (
<>
@ -50,7 +53,11 @@ const InvocationNodeNotes = ({ nodeId }: Props) => {
>
<Icon
as={FaInfoCircle}
sx={{ boxSize: 4, w: 8, color: 'base.400' }}
sx={{
boxSize: 4,
w: 8,
color: doVersionsMatch ? 'base.400' : 'error.400',
}}
/>
</Flex>
</Tooltip>
@ -92,16 +99,59 @@ const TooltipContent = memo(({ nodeId }: { nodeId: string }) => {
return 'Unknown Node';
}, [data, nodeTemplate]);
const versionComponent = useMemo(() => {
if (!isInvocationNodeData(data) || !nodeTemplate) {
return null;
}
if (!data.version) {
return (
<Text as="span" sx={{ color: 'error.500' }}>
Version unknown
</Text>
);
}
if (!nodeTemplate.version) {
return (
<Text as="span" sx={{ color: 'error.500' }}>
Version {data.version} (unknown template)
</Text>
);
}
if (compare(data.version, nodeTemplate.version, '<')) {
return (
<Text as="span" sx={{ color: 'error.500' }}>
Version {data.version} (update node)
</Text>
);
}
if (compare(data.version, nodeTemplate.version, '>')) {
return (
<Text as="span" sx={{ color: 'error.500' }}>
Version {data.version} (update app)
</Text>
);
}
return <Text as="span">Version {data.version}</Text>;
}, [data, nodeTemplate]);
if (!isInvocationNodeData(data)) {
return <Text sx={{ fontWeight: 600 }}>Unknown Node</Text>;
}
return (
<Flex sx={{ flexDir: 'column' }}>
<Text sx={{ fontWeight: 600 }}>{title}</Text>
<Text as="span" sx={{ fontWeight: 600 }}>
{title}
</Text>
<Text sx={{ opacity: 0.7, fontStyle: 'oblique 5deg' }}>
{nodeTemplate?.description}
</Text>
{versionComponent}
{data?.notes && <Text>{data.notes}</Text>}
</Flex>
);

View File

@ -138,13 +138,14 @@ export const useBuildNodeData = () => {
data: {
id: nodeId,
type,
inputs,
outputs,
isOpen: true,
version: template.version,
label: '',
notes: '',
isOpen: true,
embedWorkflow: false,
isIntermediate: true,
inputs,
outputs,
},
};

View File

@ -0,0 +1,33 @@
import { createSelector } from '@reduxjs/toolkit';
import { stateSelector } from 'app/store/store';
import { useAppSelector } from 'app/store/storeHooks';
import { defaultSelectorOptions } from 'app/store/util/defaultMemoizeOptions';
import { compareVersions } from 'compare-versions';
import { useMemo } from 'react';
import { isInvocationNode } from '../types/types';
export const useDoNodeVersionsMatch = (nodeId: string) => {
const selector = useMemo(
() =>
createSelector(
stateSelector,
({ nodes }) => {
const node = nodes.nodes.find((node) => node.id === nodeId);
if (!isInvocationNode(node)) {
return false;
}
const nodeTemplate = nodes.nodeTemplates[node?.data.type ?? ''];
if (!nodeTemplate?.version || !node.data?.version) {
return false;
}
return compareVersions(nodeTemplate.version, node.data.version) === 0;
},
defaultSelectorOptions
),
[nodeId]
);
const nodeTemplate = useAppSelector(selector);
return nodeTemplate;
};

View File

@ -2,13 +2,13 @@ import { ListItem, Text, UnorderedList } from '@chakra-ui/react';
import { useLogger } from 'app/logging/useLogger';
import { useAppDispatch } from 'app/store/storeHooks';
import { parseify } from 'common/util/serialize';
import { workflowLoaded } from 'features/nodes/store/nodesSlice';
import { zValidatedWorkflow } from 'features/nodes/types/types';
import { zWorkflow } from 'features/nodes/types/types';
import { addToast } from 'features/system/store/systemSlice';
import { makeToast } from 'features/system/util/makeToast';
import { memo, useCallback } from 'react';
import { ZodError } from 'zod';
import { fromZodError, fromZodIssue } from 'zod-validation-error';
import { workflowLoadRequested } from '../store/actions';
export const useLoadWorkflowFromFile = () => {
const dispatch = useAppDispatch();
@ -24,7 +24,7 @@ export const useLoadWorkflowFromFile = () => {
try {
const parsedJSON = JSON.parse(String(rawJSON));
const result = zValidatedWorkflow.safeParse(parsedJSON);
const result = zWorkflow.safeParse(parsedJSON);
if (!result.success) {
const { message } = fromZodError(result.error, {
@ -45,32 +45,8 @@ export const useLoadWorkflowFromFile = () => {
reader.abort();
return;
}
dispatch(workflowLoaded(result.data.workflow));
if (!result.data.warnings.length) {
dispatch(
addToast(
makeToast({
title: 'Workflow Loaded',
status: 'success',
})
)
);
reader.abort();
return;
}
dispatch(
addToast(
makeToast({
title: 'Workflow Loaded with Warnings',
status: 'warning',
})
)
);
result.data.warnings.forEach(({ message, ...rest }) => {
logger.warn(rest, message);
});
dispatch(workflowLoadRequested(result.data));
reader.abort();
} catch {

View File

@ -1,5 +1,6 @@
import { createAction, isAnyOf } from '@reduxjs/toolkit';
import { Graph } from 'services/api/types';
import { Workflow } from '../types/types';
export const textToImageGraphBuilt = createAction<Graph>(
'nodes/textToImageGraphBuilt'
@ -16,3 +17,7 @@ export const isAnyGraphBuilt = isAnyOf(
canvasGraphBuilt,
nodesGraphBuilt
);
export const workflowLoadRequested = createAction<Workflow>(
'nodes/workflowLoadRequested'
);

View File

@ -0,0 +1,4 @@
import { atom } from 'nanostores';
import { ReactFlowInstance } from 'reactflow';
export const $flow = atom<ReactFlowInstance | null>(null);

View File

@ -52,6 +52,10 @@ export type InvocationTemplate = {
* The type of this node's output
*/
outputType: string; // TODO: generate a union of output types
/**
* The invocation's version.
*/
version?: string;
};
export type FieldUIConfig = {
@ -962,6 +966,7 @@ export type InvocationSchemaExtra = {
title: string;
category?: string;
tags?: string[];
version?: string;
properties: Omit<
NonNullable<OpenAPIV3.SchemaObject['properties']> &
(_InputField | _OutputField),
@ -1095,6 +1100,29 @@ export const zCoreMetadata = z
export type CoreMetadata = z.infer<typeof zCoreMetadata>;
export const zSemVer = z.string().refine((val) => {
const [major, minor, patch] = val.split('.');
return (
major !== undefined &&
Number.isInteger(Number(major)) &&
minor !== undefined &&
Number.isInteger(Number(minor)) &&
patch !== undefined &&
Number.isInteger(Number(patch))
);
});
export const zParsedSemver = zSemVer.transform((val) => {
const [major, minor, patch] = val.split('.');
return {
major: Number(major),
minor: Number(minor),
patch: Number(patch),
};
});
export type SemVer = z.infer<typeof zSemVer>;
export const zInvocationNodeData = z.object({
id: z.string().trim().min(1),
// no easy way to build this dynamically, and we don't want to anyways, because this will be used
@ -1107,6 +1135,7 @@ export const zInvocationNodeData = z.object({
notes: z.string(),
embedWorkflow: z.boolean(),
isIntermediate: z.boolean(),
version: zSemVer.optional(),
});
// Massage this to get better type safety while developing
@ -1195,20 +1224,6 @@ export const zFieldIdentifier = z.object({
export type FieldIdentifier = z.infer<typeof zFieldIdentifier>;
export const zSemVer = z.string().refine((val) => {
const [major, minor, patch] = val.split('.');
return (
major !== undefined &&
minor !== undefined &&
patch !== undefined &&
Number.isInteger(Number(major)) &&
Number.isInteger(Number(minor)) &&
Number.isInteger(Number(patch))
);
});
export type SemVer = z.infer<typeof zSemVer>;
export type WorkflowWarning = {
message: string;
issues: string[];

View File

@ -73,6 +73,7 @@ export const parseSchema = (
const title = schema.title.replace('Invocation', '');
const tags = schema.tags ?? [];
const description = schema.description ?? '';
const version = schema.version ?? '';
const inputs = reduce(
schema.properties,
@ -225,11 +226,12 @@ export const parseSchema = (
const invocation: InvocationTemplate = {
title,
type,
version,
tags,
description,
outputType,
inputs,
outputs,
outputType,
};
Object.assign(invocationsAccumulator, { [type]: invocation });

View File

@ -0,0 +1,96 @@
import { compareVersions } from 'compare-versions';
import { cloneDeep, keyBy } from 'lodash-es';
import {
InvocationTemplate,
Workflow,
WorkflowWarning,
isWorkflowInvocationNode,
} from '../types/types';
import { parseify } from 'common/util/serialize';
export const validateWorkflow = (
workflow: Workflow,
nodeTemplates: Record<string, InvocationTemplate>
) => {
const clone = cloneDeep(workflow);
const { nodes, edges } = clone;
const errors: WorkflowWarning[] = [];
const invocationNodes = nodes.filter(isWorkflowInvocationNode);
const keyedNodes = keyBy(invocationNodes, 'id');
nodes.forEach((node) => {
if (!isWorkflowInvocationNode(node)) {
return;
}
const nodeTemplate = nodeTemplates[node.data.type];
if (!nodeTemplate) {
errors.push({
message: `Node "${node.data.type}" skipped`,
issues: [`Node type "${node.data.type}" does not exist`],
data: node,
});
return;
}
if (
nodeTemplate.version &&
node.data.version &&
compareVersions(nodeTemplate.version, node.data.version) !== 0
) {
errors.push({
message: `Node "${node.data.type}" has mismatched version`,
issues: [
`Node "${node.data.type}" v${node.data.version} may be incompatible with installed v${nodeTemplate.version}`,
],
data: { node, nodeTemplate: parseify(nodeTemplate) },
});
return;
}
});
edges.forEach((edge, i) => {
const sourceNode = keyedNodes[edge.source];
const targetNode = keyedNodes[edge.target];
const issues: string[] = [];
if (!sourceNode) {
issues.push(`Output node ${edge.source} does not exist`);
} else if (
edge.type === 'default' &&
!(edge.sourceHandle in sourceNode.data.outputs)
) {
issues.push(
`Output field "${edge.source}.${edge.sourceHandle}" does not exist`
);
}
if (!targetNode) {
issues.push(`Input node ${edge.target} does not exist`);
} else if (
edge.type === 'default' &&
!(edge.targetHandle in targetNode.data.inputs)
) {
issues.push(
`Input field "${edge.target}.${edge.targetHandle}" does not exist`
);
}
if (!nodeTemplates[sourceNode?.data.type ?? '__UNKNOWN_NODE_TYPE__']) {
issues.push(
`Source node "${edge.source}" missing template "${sourceNode?.data.type}"`
);
}
if (!nodeTemplates[targetNode?.data.type ?? '__UNKNOWN_NODE_TYPE__']) {
issues.push(
`Source node "${edge.target}" missing template "${targetNode?.data.type}"`
);
}
if (issues.length) {
delete edges[i];
const src = edge.type === 'default' ? edge.sourceHandle : edge.source;
const tgt = edge.type === 'default' ? edge.targetHandle : edge.target;
errors.push({
message: `Edge "${src} -> ${tgt}" skipped`,
issues,
data: edge,
});
}
});
return { workflow: clone, errors };
};

View File

@ -6981,6 +6981,11 @@ export type components = {
* @description The node's category
*/
category?: string;
/**
* Version
* @description The node's version. Should be a valid semver string e.g. "1.0.0" or "3.8.13".
*/
version?: string;
};
/**
* Input
@ -7036,24 +7041,12 @@ export type components = {
/** Ui Order */
ui_order?: number;
};
/**
* StableDiffusionOnnxModelFormat
* @description An enumeration.
* @enum {string}
*/
StableDiffusionOnnxModelFormat: "olive" | "onnx";
/**
* StableDiffusion1ModelFormat
* @description An enumeration.
* @enum {string}
*/
StableDiffusion1ModelFormat: "checkpoint" | "diffusers";
/**
* ControlNetModelFormat
* @description An enumeration.
* @enum {string}
*/
ControlNetModelFormat: "checkpoint" | "diffusers";
/**
* StableDiffusionXLModelFormat
* @description An enumeration.
@ -7066,6 +7059,18 @@ export type components = {
* @enum {string}
*/
StableDiffusion2ModelFormat: "checkpoint" | "diffusers";
/**
* ControlNetModelFormat
* @description An enumeration.
* @enum {string}
*/
ControlNetModelFormat: "checkpoint" | "diffusers";
/**
* StableDiffusionOnnxModelFormat
* @description An enumeration.
* @enum {string}
*/
StableDiffusionOnnxModelFormat: "olive" | "onnx";
};
responses: never;
parameters: never;

View File

@ -2970,6 +2970,11 @@ commondir@^1.0.1:
resolved "https://registry.yarnpkg.com/commondir/-/commondir-1.0.1.tgz#ddd800da0c66127393cca5950ea968a3aaf1253b"
integrity sha512-W9pAhw0ja1Edb5GVdIF1mjZw/ASI0AlShXM83UUGe2DVr5TdAPEA1OA8m/g8zWp9x6On7gqufY+FatDbC3MDQg==
compare-versions@^6.1.0:
version "6.1.0"
resolved "https://registry.yarnpkg.com/compare-versions/-/compare-versions-6.1.0.tgz#3f2131e3ae93577df111dba133e6db876ffe127a"
integrity sha512-LNZQXhqUvqUTotpZ00qLSaify3b4VFD588aRr8MKFw4CMUr98ytzCW5wDH5qx/DEY5kCDXcbcRuCqL0szEf2tg==
compute-scroll-into-view@1.0.20:
version "1.0.20"
resolved "https://registry.yarnpkg.com/compute-scroll-into-view/-/compute-scroll-into-view-1.0.20.tgz#1768b5522d1172754f5d0c9b02de3af6be506a43"

View File

@ -74,6 +74,7 @@ dependencies = [
"rich~=13.3",
"safetensors==0.3.1",
"scikit-image~=0.21.0",
"semver~=3.0.1",
"send2trash",
"test-tube~=0.7.5",
"torch~=2.0.1",

View File

@ -1,4 +1,10 @@
from invokeai.app.invocations.baseinvocation import BaseInvocation, BaseInvocationOutput, invocation, invocation_output
from invokeai.app.invocations.baseinvocation import (
BaseInvocation,
BaseInvocationOutput,
InvalidVersionError,
invocation,
invocation_output,
)
from .test_nodes import (
ImageToImageTestInvocation,
TextToImageTestInvocation,
@ -616,18 +622,38 @@ def test_invocation_decorator():
title = "Test Invocation"
tags = ["first", "second", "third"]
category = "category"
version = "1.2.3"
@invocation(invocation_type, title=title, tags=tags, category=category)
class Test(BaseInvocation):
@invocation(invocation_type, title=title, tags=tags, category=category, version=version)
class TestInvocation(BaseInvocation):
def invoke(self):
pass
schema = Test.schema()
schema = TestInvocation.schema()
assert schema.get("title") == title
assert schema.get("tags") == tags
assert schema.get("category") == category
assert Test(id="1").type == invocation_type # type: ignore (type is dynamically added)
assert schema.get("version") == version
assert TestInvocation(id="1").type == invocation_type # type: ignore (type is dynamically added)
def test_invocation_version_must_be_semver():
invocation_type = "test_invocation"
valid_version = "1.0.0"
invalid_version = "not_semver"
@invocation(invocation_type, version=valid_version)
class ValidVersionInvocation(BaseInvocation):
def invoke(self):
pass
with pytest.raises(InvalidVersionError):
@invocation(invocation_type, version=invalid_version)
class InvalidVersionInvocation(BaseInvocation):
def invoke(self):
pass
def test_invocation_output_decorator():