diff --git a/docs/contributing/INVOCATIONS.md b/docs/contributing/INVOCATIONS.md index 3809bd4178..b34a2f25ac 100644 --- a/docs/contributing/INVOCATIONS.md +++ b/docs/contributing/INVOCATIONS.md @@ -29,12 +29,13 @@ The first set of things we need to do when creating a new Invocation are - - Create a new class that derives from a predefined parent class called `BaseInvocation`. -- The name of every Invocation must end with the word `Invocation` in order for - it to be recognized as an Invocation. - Every Invocation must have a `docstring` that describes what this Invocation does. -- Every Invocation must have a unique `type` field defined which becomes its - indentifier. +- While not strictly required, we suggest every invocation class name ends in + "Invocation", eg "CropImageInvocation". +- Every Invocation must use the `@invocation` decorator to provide its unique + invocation type. You may also provide its title, tags and category using the + decorator. - Invocations are strictly typed. We make use of the native [typing](https://docs.python.org/3/library/typing.html) library and the installed [pydantic](https://pydantic-docs.helpmanual.io/) library for @@ -43,12 +44,11 @@ The first set of things we need to do when creating a new Invocation are - So let us do that. ```python -from typing import Literal -from .baseinvocation import BaseInvocation +from .baseinvocation import BaseInvocation, invocation +@invocation('resize') class ResizeInvocation(BaseInvocation): '''Resizes an image''' - type: Literal['resize'] = 'resize' ``` That's great. @@ -62,8 +62,10 @@ our Invocation takes. ### **Inputs** -Every Invocation input is a pydantic `Field` and like everything else should be -strictly typed and defined. +Every Invocation input must be defined using the `InputField` function. This is +a wrapper around the pydantic `Field` function, which handles a few extra things +and provides type hints. Like everything else, this should be strictly typed and +defined. 
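Conceptually, a `Field`-style helper is just a factory that records per-field metadata (description, default, constraints) for the model class and the UI to consume. Here is a rough, stdlib-only sketch of that idea; the names `input_field` and `FieldInfo` are illustrative stand-ins, not InvokeAI's actual implementation:

```python
from dataclasses import dataclass
from typing import Any, Optional


@dataclass
class FieldInfo:
    """Illustrative container for the metadata a Field-style helper records."""

    description: str
    default: Any = None
    ge: Optional[int] = None
    le: Optional[int] = None


def input_field(
    description: str,
    default: Any = None,
    ge: Optional[int] = None,
    le: Optional[int] = None,
) -> FieldInfo:
    # A thin wrapper: capture the description plus any extras (here, numeric
    # constraints) so they can be introspected later.
    return FieldInfo(description=description, default=default, ge=ge, le=le)


width = input_field(description="Width of the new image", default=512, ge=64, le=2048)
```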
So let us create these inputs for our Invocation. First up, the `image` input we need. Generally, we can use standard variable types in Python but InvokeAI @@ -76,55 +78,51 @@ create your own custom field types later in this guide. For now, let's go ahead and use it. ```python -from typing import Literal, Union -from pydantic import Field - -from .baseinvocation import BaseInvocation -from ..models.image import ImageField +from .baseinvocation import BaseInvocation, InputField, invocation +from .primitives import ImageField +@invocation('resize') class ResizeInvocation(BaseInvocation): - '''Resizes an image''' - type: Literal['resize'] = 'resize' # Inputs - image: Union[ImageField, None] = Field(description="The input image", default=None) + image: ImageField = InputField(description="The input image") ``` Let us break down our input code. ```python -image: Union[ImageField, None] = Field(description="The input image", default=None) +image: ImageField = InputField(description="The input image") ``` -| Part | Value | Description | -| --------- | ---------------------------------------------------- | -------------------------------------------------------------------------------------------------- | -| Name | `image` | The variable that will hold our image | -| Type Hint | `Union[ImageField, None]` | The types for our field. Indicates that the image can either be an `ImageField` type or `None` | -| Field | `Field(description="The input image", default=None)` | The image variable is a field which needs a description and a default value that we set to `None`. | +| Part | Value | Description | +| --------- | ------------------------------------------- | ------------------------------------------------------------------------------- | +| Name | `image` | The variable that will hold our image | +| Type Hint | `ImageField` | The types for our field. Indicates that the image must be an `ImageField` type. 
| +| Field     | `InputField(description="The input image")` | The image variable is an `InputField` which needs a description.                | Great. Now let us create our other inputs for `width` and `height` ```python -from typing import Literal, Union -from pydantic import Field - -from .baseinvocation import BaseInvocation -from ..models.image import ImageField +from .baseinvocation import BaseInvocation, InputField, invocation +from .primitives import ImageField +@invocation('resize') class ResizeInvocation(BaseInvocation): '''Resizes an image''' - type: Literal['resize'] = 'resize' # Inputs - image: Union[ImageField, None] = Field(description="The input image", default=None) - width: int = Field(default=512, ge=64, le=2048, description="Width of the new image") - height: int = Field(default=512, ge=64, le=2048, description="Height of the new image") + image: ImageField = InputField(description="The input image") + width: int = InputField(default=512, ge=64, le=2048, description="Width of the new image") + height: int = InputField(default=512, ge=64, le=2048, description="Height of the new image") ``` -As you might have noticed, we added two new parameters to the field type for -`width` and `height` called `gt` and `le`. These basically stand for _greater -than or equal to_ and _less than or equal to_. There are various other param -types for field that you can find on the **pydantic** documentation. +As you might have noticed, we added two new arguments to the `InputField` +definition for `width` and `height`, called `ge` and `le`. They stand for +_greater than or equal to_ and _less than or equal to_. + +These impose constraints on those fields, and will raise an exception if the +values do not meet the constraints. Field constraints are provided by +**pydantic**, so anything you see in the **pydantic docs** will work. 
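Under the hood, a `ge`/`le` pair is just a range check applied when the field value is set. A minimal, stdlib-only sketch of that behavior (not pydantic's actual code; `check_range` is a hypothetical helper):

```python
def check_range(name, value, ge=None, le=None):
    """Raise ValueError if value violates ge/le bounds, mirroring how
    pydantic rejects out-of-range field values."""
    if ge is not None and value < ge:
        raise ValueError(f"{name} must be >= {ge}, got {value}")
    if le is not None and value > le:
        raise ValueError(f"{name} must be <= {le}, got {value}")
    return value


check_range("width", 512, ge=64, le=2048)  # in range: returns 512

try:
    check_range("width", 4096, ge=64, le=2048)
except ValueError as e:
    print(e)  # width must be <= 2048, got 4096
```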
**Note:** _Any time it is possible to define constraints for our field, we should do it so the frontend has more information on how to parse this field._ @@ -141,20 +139,17 @@ that are provided by it by InvokeAI. Let us create this function first. ```python -from typing import Literal, Union -from pydantic import Field - -from .baseinvocation import BaseInvocation, InvocationContext -from ..models.image import ImageField +from .baseinvocation import BaseInvocation, InputField, InvocationContext, invocation +from .primitives import ImageField +@invocation('resize') class ResizeInvocation(BaseInvocation): '''Resizes an image''' - type: Literal['resize'] = 'resize' # Inputs - image: Union[ImageField, None] = Field(description="The input image", default=None) - width: int = Field(default=512, ge=64, le=2048, description="Width of the new image") - height: int = Field(default=512, ge=64, le=2048, description="Height of the new image") + image: ImageField = InputField(description="The input image") + width: int = InputField(default=512, ge=64, le=2048, description="Width of the new image") + height: int = InputField(default=512, ge=64, le=2048, description="Height of the new image") def invoke(self, context: InvocationContext): pass @@ -173,21 +168,18 @@ all the necessary info related to image outputs. So let us use that. We will cover how to create your own output types later in this guide. 
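The shape of this contract can be pictured in plain Python terms: a node holds its inputs as attributes, and its `invoke` method accepts a context and returns an instance of the declared output type. A hedged sketch under those assumptions; `FakeImageOutput` and `FakeResizeNode` are hypothetical stand-ins, not InvokeAI classes:

```python
from dataclasses import dataclass


@dataclass
class FakeImageOutput:
    """Stand-in for an output type carrying the result dimensions."""

    width: int
    height: int


class FakeResizeNode:
    """Stand-in node: inputs are attributes, as on a pydantic model."""

    def __init__(self, width: int, height: int) -> None:
        self.width = width
        self.height = height

    def invoke(self, context: object) -> FakeImageOutput:
        # The engine supplies the context; the node returns its declared
        # output type, which downstream nodes consume.
        return FakeImageOutput(width=self.width, height=self.height)


result = FakeResizeNode(256, 128).invoke(context=None)
```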
```python -from typing import Literal, Union -from pydantic import Field - -from .baseinvocation import BaseInvocation, InvocationContext -from ..models.image import ImageField +from .baseinvocation import BaseInvocation, InputField, InvocationContext, invocation +from .primitives import ImageField from .image import ImageOutput +@invocation('resize') class ResizeInvocation(BaseInvocation): '''Resizes an image''' - type: Literal['resize'] = 'resize' # Inputs - image: Union[ImageField, None] = Field(description="The input image", default=None) - width: int = Field(default=512, ge=64, le=2048, description="Width of the new image") - height: int = Field(default=512, ge=64, le=2048, description="Height of the new image") + image: ImageField = InputField(description="The input image") + width: int = InputField(default=512, ge=64, le=2048, description="Width of the new image") + height: int = InputField(default=512, ge=64, le=2048, description="Height of the new image") def invoke(self, context: InvocationContext) -> ImageOutput: pass @@ -195,39 +187,34 @@ class ResizeInvocation(BaseInvocation): Perfect. Now that we have our Invocation setup, let us do what we want to do. -- We will first load the image. Generally we do this using the `PIL` library but - we can use one of the services provided by InvokeAI to load the image. +- We will first load the image using one of the services provided by InvokeAI. - We will resize the image using `PIL` to our input data. - We will output this image in the format we set above. So let's do that. 
```python -from typing import Literal, Union -from pydantic import Field - -from .baseinvocation import BaseInvocation, InvocationContext -from ..models.image import ImageField, ResourceOrigin, ImageCategory +from .baseinvocation import BaseInvocation, InputField, InvocationContext, invocation +from .primitives import ImageField +from ..models.image import ResourceOrigin, ImageCategory from .image import ImageOutput +@invocation("resize") class ResizeInvocation(BaseInvocation): - '''Resizes an image''' - type: Literal['resize'] = 'resize' + """Resizes an image""" - # Inputs - image: Union[ImageField, None] = Field(description="The input image", default=None) - width: int = Field(default=512, ge=64, le=2048, description="Width of the new image") - height: int = Field(default=512, ge=64, le=2048, description="Height of the new image") + image: ImageField = InputField(description="The input image") + width: int = InputField(default=512, ge=64, le=2048, description="Width of the new image") + height: int = InputField(default=512, ge=64, le=2048, description="Height of the new image") def invoke(self, context: InvocationContext) -> ImageOutput: - # Load the image using InvokeAI's predefined Image Service. - image = context.services.images.get_pil_image(self.image.image_origin, self.image.image_name) + # Load the image using InvokeAI's predefined Image Service. Returns the PIL image. + image = context.services.images.get_pil_image(self.image.image_name) # Resizing the image - # Because we used the above service, we already have a PIL image. So we can simply resize. resized_image = image.resize((self.width, self.height)) - # Preparing the image for output using InvokeAI's predefined Image Service. + # Save the image using InvokeAI's predefined Image Service. 
output_image = context.services.images.create( image=resized_image, image_origin=ResourceOrigin.INTERNAL, @@ -241,7 +228,6 @@ class ResizeInvocation(BaseInvocation): return ImageOutput( image=ImageField( image_name=output_image.image_name, - image_origin=output_image.image_origin, ), width=output_image.width, height=output_image.height, @@ -253,6 +239,20 @@ certain way that the images need to be dispatched in order to be stored and read correctly. In 99% of the cases when dealing with an image output, you can simply copy-paste the template above. +### Customization + +We can use the `@invocation` decorator to provide some additional info to the +UI, like a custom title, tags and category. + +```python +@invocation("resize", title="My Resizer", tags=["resize", "image"], category="My Invocations") +class ResizeInvocation(BaseInvocation): + """Resizes an image""" + + image: ImageField = InputField(description="The input image") + ... +``` + That's it. You made your own **Resize Invocation**. ## Result @@ -271,10 +271,57 @@ new Invocation ready to be used. ![resize node editor](../assets/contributing/resize_node_editor.png) ## Contributing Nodes -Once you've created a Node, the next step is to share it with the community! The best way to do this is to submit a Pull Request to add the Node to the [Community Nodes](nodes/communityNodes) list. If you're not sure how to do that, take a look a at our [contributing nodes overview](contributingNodes). + +Once you've created a Node, the next step is to share it with the community! The +best way to do this is to submit a Pull Request to add the Node to the +[Community Nodes](nodes/communityNodes) list. If you're not sure how to do that, +take a look at our [contributing nodes overview](contributingNodes). ## Advanced +--> + +### Custom Output Types + +Like with custom inputs, sometimes you might find yourself needing custom +outputs that InvokeAI does not provide. We can easily set one up. 
+ +Now that you are familiar with Invocations and Inputs, let us use that knowledge +to create an output that has an `image` field, a `color` field and a `string` +field. + +- An invocation output is a class that derives from the parent class of + `BaseInvocationOutput`. +- All invocation outputs must use the `@invocation_output` decorator to provide + their unique output type. +- Output fields must use the provided `OutputField` function. This is very + similar to the `InputField` function described earlier - it's a wrapper around + `pydantic`'s `Field()`. +- It is not mandatory but we recommend using names ending with `Output` for + output types. +- It is not mandatory but we highly recommend adding a `docstring` to describe + what your output type is for. + +Now that we know the basic rules for creating a new output type, let us go ahead +and make it. + +```python +from .baseinvocation import BaseInvocationOutput, OutputField, invocation_output +from .primitives import ImageField, ColorField + +@invocation_output('image_color_string_output') +class ImageColorStringOutput(BaseInvocationOutput): + '''An output with an image, a color and a string''' + + image: ImageField = OutputField(description="The image") + color: ColorField = OutputField(description="The color") + text: str = OutputField(description="The string") +``` + +That's all there is to it. 
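The core trick of such a decorator - validating the type string and stamping it onto the class - can be sketched in plain Python. This is an illustrative simplification (a plain class attribute and a module-level registry), not InvokeAI's actual implementation, which attaches the type as a pydantic field:

```python
import re

# Hypothetical registry of output types, for illustration only.
OUTPUT_REGISTRY: dict[str, type] = {}


def invocation_output_sketch(output_type: str):
    """Illustrative decorator: validate the type string, stamp it onto the
    class, and record the class in a registry."""

    def wrapper(cls):
        if re.match(r"^\S+$", output_type) is None:
            raise ValueError(f'"{output_type}" must contain no whitespace')
        cls.type = output_type  # stamp the unique type onto the class
        OUTPUT_REGISTRY[output_type] = cls
        return cls

    return wrapper


@invocation_output_sketch("image_color_string_output")
class ImageColorStringOutputSketch:
    """Stand-in for the output class defined above."""
```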
+ + - +--> diff --git a/invokeai/app/invocations/baseinvocation.py b/invokeai/app/invocations/baseinvocation.py index f56e7c7aa5..e6f4f52f25 100644 --- a/invokeai/app/invocations/baseinvocation.py +++ b/invokeai/app/invocations/baseinvocation.py @@ -2,16 +2,18 @@ from __future__ import annotations +import json from abc import ABC, abstractmethod from enum import Enum from inspect import signature -import json +import re from typing import ( TYPE_CHECKING, AbstractSet, Any, Callable, ClassVar, + Literal, Mapping, Optional, Type, @@ -22,7 +24,7 @@ from typing import ( ) from pydantic import BaseModel, Field, validator -from pydantic.fields import Undefined +from pydantic.fields import Undefined, ModelField from pydantic.typing import NoArgAnyCallable if TYPE_CHECKING: @@ -368,8 +370,7 @@ def OutputField( class UIConfigBase(BaseModel): """ Provides additional node configuration to the UI. - This is used internally by the @tags and @title decorator logic. You probably want to use those - decorators, though you may add this class to a node definition to specify the title and tags. + This is used internally by the @invocation decorator logic. Do not use this directly. """ tags: Optional[list[str]] = Field(default_factory=None, description="The node's tags") @@ -387,10 +388,11 @@ class InvocationContext: class BaseInvocationOutput(BaseModel): - """Base class for all invocation outputs""" + """ + Base class for all invocation outputs. - # All outputs must include a type name like this: - # type: Literal['your_output_name'] # noqa f821 + All invocation outputs must use the `@invocation_output` decorator to provide their unique type. + """ @classmethod def get_all_subclasses_tuple(cls): @@ -426,12 +428,12 @@ class MissingInputException(Exception): class BaseInvocation(ABC, BaseModel): - """A node to process inputs and produce outputs. - May use dependency injection in __init__ to receive providers. """ + A node to process inputs and produce outputs. 
+ May use dependency injection in __init__ to receive providers. - # All invocations must include a type name like this: - # type: Literal['your_output_name'] # noqa f821 + All invocations must use the `@invocation` decorator to provide their unique type. + """ @classmethod def get_all_subclasses(cls): @@ -511,9 +513,11 @@ class BaseInvocation(ABC, BaseModel): raise MissingInputException(self.__fields__["type"].default, field_name) return self.invoke(context) - id: str = Field(description="The id of this node. Must be unique among all nodes.") + id: str = Field( + description="The id of this instance of an invocation. Must be unique among all instances of invocations." + ) is_intermediate: bool = InputField( - default=False, description="Whether or not this node is an intermediate node.", ui_type=UIType.IsIntermediate + default=False, description="Whether or not this is an intermediate invocation.", ui_type=UIType.IsIntermediate ) workflow: Optional[str] = InputField( default=None, @@ -534,66 +538,85 @@ class BaseInvocation(ABC, BaseModel): UIConfig: ClassVar[Type[UIConfigBase]] -T = TypeVar("T", bound=BaseInvocation) +GenericBaseInvocation = TypeVar("GenericBaseInvocation", bound=BaseInvocation) -def title(title: str) -> Callable[[Type[T]], Type[T]]: - """Adds a title to the invocation. Use this to override the default title generation, which is based on the class name.""" - - def wrapper(cls: Type[T]) -> Type[T]: - uiconf_name = cls.__qualname__ + ".UIConfig" - if not hasattr(cls, "UIConfig") or cls.UIConfig.__qualname__ != uiconf_name: - cls.UIConfig = type(uiconf_name, (UIConfigBase,), dict()) - cls.UIConfig.title = title - return cls - - return wrapper - - -def tags(*tags: str) -> Callable[[Type[T]], Type[T]]: - """Adds tags to the invocation. 
Use this to improve the streamline finding the invocation in the UI.""" - - def wrapper(cls: Type[T]) -> Type[T]: - uiconf_name = cls.__qualname__ + ".UIConfig" - if not hasattr(cls, "UIConfig") or cls.UIConfig.__qualname__ != uiconf_name: - cls.UIConfig = type(uiconf_name, (UIConfigBase,), dict()) - cls.UIConfig.tags = list(tags) - return cls - - return wrapper - - -def category(category: str) -> Callable[[Type[T]], Type[T]]: - """Adds a category to the invocation. This is used to group invocations in the UI.""" - - def wrapper(cls: Type[T]) -> Type[T]: - uiconf_name = cls.__qualname__ + ".UIConfig" - if not hasattr(cls, "UIConfig") or cls.UIConfig.__qualname__ != uiconf_name: - cls.UIConfig = type(uiconf_name, (UIConfigBase,), dict()) - cls.UIConfig.category = category - return cls - - return wrapper - - -def node( - title: Optional[str] = None, tags: Optional[list[str]] = None, category: Optional[str] = None -) -> Callable[[Type[T]], Type[T]]: +def invocation( + invocation_type: str, title: Optional[str] = None, tags: Optional[list[str]] = None, category: Optional[str] = None +) -> Callable[[Type[GenericBaseInvocation]], Type[GenericBaseInvocation]]: """ - Adds metadata to the invocation as a decorator. + Adds metadata to an invocation. - :param Optional[str] title: Adds a title to the node. Use if the auto-generated title isn't quite right. Defaults to None. - :param Optional[list[str]] tags: Adds tags to the node. Nodes may be searched for by their tags. Defaults to None. - :param Optional[str] category: Adds a category to the node. Used to group the nodes in the UI. Defaults to None. + :param str invocation_type: The type of the invocation. Must be unique among all invocations. + :param Optional[str] title: Adds a title to the invocation. Use if the auto-generated title isn't quite right. Defaults to None. + :param Optional[list[str]] tags: Adds tags to the invocation. Invocations may be searched for by their tags. Defaults to None. 
+ :param Optional[str] category: Adds a category to the invocation. Used to group the invocations in the UI. Defaults to None. """ - def wrapper(cls: Type[T]) -> Type[T]: + def wrapper(cls: Type[GenericBaseInvocation]) -> Type[GenericBaseInvocation]: + # Validate invocation types on creation of invocation classes + # TODO: ensure unique? + if re.compile(r"^\S+$").match(invocation_type) is None: + raise ValueError(f'"invocation_type" must consist of non-whitespace characters, got "{invocation_type}"') + + # Add OpenAPI schema extras uiconf_name = cls.__qualname__ + ".UIConfig" if not hasattr(cls, "UIConfig") or cls.UIConfig.__qualname__ != uiconf_name: cls.UIConfig = type(uiconf_name, (UIConfigBase,), dict()) - cls.UIConfig.title = title - cls.UIConfig.tags = tags - cls.UIConfig.category = category + if title is not None: + cls.UIConfig.title = title + if tags is not None: + cls.UIConfig.tags = tags + if category is not None: + cls.UIConfig.category = category + + # Add the invocation type to the pydantic model of the invocation + invocation_type_annotation = Literal[invocation_type] # type: ignore + invocation_type_field = ModelField.infer( + name="type", + value=invocation_type, + annotation=invocation_type_annotation, + class_validators=None, + config=cls.__config__, + ) + cls.__fields__.update({"type": invocation_type_field}) + cls.__annotations__.update({"type": invocation_type_annotation}) + + return cls + + return wrapper + + +GenericBaseInvocationOutput = TypeVar("GenericBaseInvocationOutput", bound=BaseInvocationOutput) + + +def invocation_output( + output_type: str, +) -> Callable[[Type[GenericBaseInvocationOutput]], Type[GenericBaseInvocationOutput]]: + """ + Adds metadata to an invocation output. + + :param str output_type: The type of the invocation output. Must be unique among all invocation outputs. 
+ """ + + def wrapper(cls: Type[GenericBaseInvocationOutput]) -> Type[GenericBaseInvocationOutput]: + # Validate output types on creation of invocation output classes + # TODO: ensure unique? + if re.compile(r"^\S+$").match(output_type) is None: + raise ValueError(f'"output_type" must consist of non-whitespace characters, got "{output_type}"') + + # Add the output type to the pydantic model of the invocation output + output_type_annotation = Literal[output_type] # type: ignore + output_type_field = ModelField.infer( + name="type", + value=output_type, + annotation=output_type_annotation, + class_validators=None, + config=cls.__config__, + ) + cls.__fields__.update({"type": output_type_field}) + cls.__annotations__.update({"type": output_type_annotation}) + return cls return wrapper diff --git a/invokeai/app/invocations/collections.py b/invokeai/app/invocations/collections.py index 1f8568beed..979f2e43b7 100644 --- a/invokeai/app/invocations/collections.py +++ b/invokeai/app/invocations/collections.py @@ -1,6 +1,5 @@ # Copyright (c) 2023 Kyle Schouviller (https://github.com/kyle0654) and the InvokeAI Team -from typing import Literal import numpy as np from pydantic import validator @@ -8,16 +7,13 @@ from pydantic import validator from invokeai.app.invocations.primitives import IntegerCollectionOutput from invokeai.app.util.misc import SEED_MAX, get_random_seed -from .baseinvocation import BaseInvocation, InputField, InvocationContext, node +from .baseinvocation import BaseInvocation, InputField, InvocationContext, invocation -@node(title="Integer Range", tags=["collection", "integer", "range"], category="collections") +@invocation("range", title="Integer Range", tags=["collection", "integer", "range"], category="collections") class RangeInvocation(BaseInvocation): """Creates a range of numbers from start to stop with step""" - type: Literal["range"] = "range" - - # Inputs start: int = InputField(default=0, description="The start of the range") stop: int = 
InputField(default=10, description="The stop of the range") step: int = InputField(default=1, description="The step of the range") @@ -32,13 +28,15 @@ class RangeInvocation(BaseInvocation): return IntegerCollectionOutput(collection=list(range(self.start, self.stop, self.step))) -@node(title="Integer Range of Size", tags=["collection", "integer", "size", "range"], category="collections") +@invocation( + "range_of_size", + title="Integer Range of Size", + tags=["collection", "integer", "size", "range"], + category="collections", +) class RangeOfSizeInvocation(BaseInvocation): """Creates a range from start to start + size with step""" - type: Literal["range_of_size"] = "range_of_size" - - # Inputs start: int = InputField(default=0, description="The start of the range") size: int = InputField(default=1, description="The number of values") step: int = InputField(default=1, description="The step of the range") @@ -47,13 +45,15 @@ class RangeOfSizeInvocation(BaseInvocation): return IntegerCollectionOutput(collection=list(range(self.start, self.start + self.size, self.step))) -@node(title="Random Range", tags=["range", "integer", "random", "collection"], category="collections") +@invocation( + "random_range", + title="Random Range", + tags=["range", "integer", "random", "collection"], + category="collections", +) class RandomRangeInvocation(BaseInvocation): """Creates a collection of random numbers""" - type: Literal["random_range"] = "random_range" - - # Inputs low: int = InputField(default=0, description="The inclusive low value") high: int = InputField(default=np.iinfo(np.int32).max, description="The exclusive high value") size: int = InputField(default=1, description="The number of values to generate") diff --git a/invokeai/app/invocations/compel.py b/invokeai/app/invocations/compel.py index 9d0f848986..65b1a794b0 100644 --- a/invokeai/app/invocations/compel.py +++ b/invokeai/app/invocations/compel.py @@ -1,6 +1,6 @@ import re from dataclasses import dataclass -from 
typing import List, Literal, Union +from typing import List, Union import torch from compel import Compel, ReturnedEmbeddingsType @@ -26,9 +26,8 @@ from .baseinvocation import ( InvocationContext, OutputField, UIComponent, - category, - tags, - title, + invocation, + invocation_output, ) from .model import ClipField @@ -45,14 +44,10 @@ class ConditioningFieldData: # PerpNeg = "perp_neg" -@title("Prompt") -@tags("prompt", "compel") -@category("conditioning") +@invocation("compel", title="Prompt", tags=["prompt", "compel"], category="conditioning") class CompelInvocation(BaseInvocation): """Parse prompt using compel package to conditioning.""" - type: Literal["compel"] = "compel" - prompt: str = InputField( default="", description=FieldDescriptions.compel_prompt, @@ -267,14 +262,15 @@ class SDXLPromptInvocationBase: return c, c_pooled, ec -@title("SDXL Prompt") -@tags("sdxl", "compel", "prompt") -@category("conditioning") +@invocation( + "sdxl_compel_prompt", + title="SDXL Prompt", + tags=["sdxl", "compel", "prompt"], + category="conditioning", +) class SDXLCompelPromptInvocation(BaseInvocation, SDXLPromptInvocationBase): """Parse prompt using compel package to conditioning.""" - type: Literal["sdxl_compel_prompt"] = "sdxl_compel_prompt" - prompt: str = InputField(default="", description=FieldDescriptions.compel_prompt, ui_component=UIComponent.Textarea) style: str = InputField(default="", description=FieldDescriptions.compel_prompt, ui_component=UIComponent.Textarea) original_width: int = InputField(default=1024, description="") @@ -327,14 +323,15 @@ class SDXLCompelPromptInvocation(BaseInvocation, SDXLPromptInvocationBase): ) -@title("SDXL Refiner Prompt") -@tags("sdxl", "compel", "prompt") -@category("conditioning") +@invocation( + "sdxl_refiner_compel_prompt", + title="SDXL Refiner Prompt", + tags=["sdxl", "compel", "prompt"], + category="conditioning", +) class SDXLRefinerCompelPromptInvocation(BaseInvocation, SDXLPromptInvocationBase): """Parse prompt using 
compel package to conditioning.""" - type: Literal["sdxl_refiner_compel_prompt"] = "sdxl_refiner_compel_prompt" - style: str = InputField( default="", description=FieldDescriptions.compel_prompt, ui_component=UIComponent.Textarea ) # TODO: ? @@ -376,21 +373,17 @@ class SDXLRefinerCompelPromptInvocation(BaseInvocation, SDXLPromptInvocationBase ) +@invocation_output("clip_skip_output") class ClipSkipInvocationOutput(BaseInvocationOutput): """Clip skip node output""" - type: Literal["clip_skip_output"] = "clip_skip_output" clip: ClipField = OutputField(default=None, description=FieldDescriptions.clip, title="CLIP") -@title("CLIP Skip") -@tags("clipskip", "clip", "skip") -@category("conditioning") +@invocation("clip_skip", title="CLIP Skip", tags=["clipskip", "clip", "skip"], category="conditioning") class ClipSkipInvocation(BaseInvocation): """Skip layers in clip text_encoder model.""" - type: Literal["clip_skip"] = "clip_skip" - clip: ClipField = InputField(description=FieldDescriptions.clip, input=Input.Connection, title="CLIP") skipped_layers: int = InputField(default=0, description=FieldDescriptions.skipped_layers) diff --git a/invokeai/app/invocations/controlnet_image_processors.py b/invokeai/app/invocations/controlnet_image_processors.py index cc6455a714..a666c5d6f4 100644 --- a/invokeai/app/invocations/controlnet_image_processors.py +++ b/invokeai/app/invocations/controlnet_image_processors.py @@ -40,10 +40,8 @@ from .baseinvocation import ( InvocationContext, OutputField, UIType, - category, - node, - tags, - title, + invocation, + invocation_output, ) @@ -89,22 +87,18 @@ class ControlField(BaseModel): return v +@invocation_output("control_output") class ControlOutput(BaseInvocationOutput): """node output for ControlNet info""" - type: Literal["control_output"] = "control_output" - # Outputs control: ControlField = OutputField(description=FieldDescriptions.control) -@node(title="ControlNet", tags=["controlnet"], category="controlnet") 
+@invocation("controlnet", title="ControlNet", tags=["controlnet"], category="controlnet")
 class ControlNetInvocation(BaseInvocation):
     """Collects ControlNet info to pass to other nodes"""

-    type: Literal["controlnet"] = "controlnet"
-
-    # Inputs
     image: ImageField = InputField(description="The control image")
     control_model: ControlNetModelField = InputField(
         default="lllyasviel/sd-controlnet-canny", description=FieldDescriptions.controlnet_model, input=Input.Direct
@@ -135,12 +129,10 @@ class ControlNetInvocation(BaseInvocation):
     )


+@invocation("image_processor", title="Base Image Processor", tags=["controlnet"], category="controlnet")
 class ImageProcessorInvocation(BaseInvocation):
     """Base class for invocations that preprocess images for ControlNet"""

-    type: Literal["image_processor"] = "image_processor"
-
-    # Inputs
     image: ImageField = InputField(description="The image to process")

     def run_processor(self, image):
@@ -176,15 +168,15 @@ class ImageProcessorInvocation(ImageProcessorInvocation):
     )


-@title("Canny Processor")
-@tags("controlnet", "canny")
-@category("controlnet")
+@invocation(
+    "canny_image_processor",
+    title="Canny Processor",
+    tags=["controlnet", "canny"],
+    category="controlnet",
+)
 class CannyImageProcessorInvocation(ImageProcessorInvocation):
     """Canny edge detection for ControlNet"""

-    type: Literal["canny_image_processor"] = "canny_image_processor"
-
-    # Input
     low_threshold: int = InputField(
         default=100, ge=0, le=255, description="The low threshold of the Canny pixel gradient (0-255)"
     )
@@ -198,15 +190,15 @@ class CannyImageProcessorInvocation(ImageProcessorInvocation):
         return processed_image


-@title("HED (softedge) Processor")
-@tags("controlnet", "hed", "softedge")
-@category("controlnet")
+@invocation(
+    "hed_image_processor",
+    title="HED (softedge) Processor",
+    tags=["controlnet", "hed", "softedge"],
+    category="controlnet",
+)
 class HedImageProcessorInvocation(ImageProcessorInvocation):
     """Applies HED edge detection to image"""

-    type: Literal["hed_image_processor"] = "hed_image_processor"
-
-    # Inputs
     detect_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.detect_res)
     image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)
     # safe not supported in controlnet_aux v0.0.3
@@ -226,15 +218,15 @@ class HedImageProcessorInvocation(ImageProcessorInvocation):
         return processed_image


-@title("Lineart Processor")
-@tags("controlnet", "lineart")
-@category("controlnet")
+@invocation(
+    "lineart_image_processor",
+    title="Lineart Processor",
+    tags=["controlnet", "lineart"],
+    category="controlnet",
+)
 class LineartImageProcessorInvocation(ImageProcessorInvocation):
     """Applies line art processing to image"""

-    type: Literal["lineart_image_processor"] = "lineart_image_processor"
-
-    # Inputs
     detect_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.detect_res)
     image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)
     coarse: bool = InputField(default=False, description="Whether to use coarse mode")
@@ -247,15 +239,15 @@ class LineartImageProcessorInvocation(ImageProcessorInvocation):
         return processed_image


-@title("Lineart Anime Processor")
-@tags("controlnet", "lineart", "anime")
-@category("controlnet")
+@invocation(
+    "lineart_anime_image_processor",
+    title="Lineart Anime Processor",
+    tags=["controlnet", "lineart", "anime"],
+    category="controlnet",
+)
 class LineartAnimeImageProcessorInvocation(ImageProcessorInvocation):
     """Applies line art anime processing to image"""

-    type: Literal["lineart_anime_image_processor"] = "lineart_anime_image_processor"
-
-    # Inputs
     detect_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.detect_res)
     image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)
@@ -269,15 +261,15 @@ class LineartAnimeImageProcessorInvocation(ImageProcessorInvocation):
         return processed_image


-@title("Openpose Processor")
-@tags("controlnet", "openpose", "pose")
-@category("controlnet")
+@invocation(
+    "openpose_image_processor",
+    title="Openpose Processor",
+    tags=["controlnet", "openpose", "pose"],
+    category="controlnet",
+)
 class OpenposeImageProcessorInvocation(ImageProcessorInvocation):
     """Applies Openpose processing to image"""

-    type: Literal["openpose_image_processor"] = "openpose_image_processor"
-
-    # Inputs
     hand_and_face: bool = InputField(default=False, description="Whether to use hands and face mode")
     detect_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.detect_res)
     image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)
@@ -293,15 +285,15 @@ class OpenposeImageProcessorInvocation(ImageProcessorInvocation):
         return processed_image


-@title("Midas (Depth) Processor")
-@tags("controlnet", "midas", "depth")
-@category("controlnet")
+@invocation(
+    "midas_depth_image_processor",
+    title="Midas Depth Processor",
+    tags=["controlnet", "midas"],
+    category="controlnet",
+)
 class MidasDepthImageProcessorInvocation(ImageProcessorInvocation):
     """Applies Midas depth processing to image"""

-    type: Literal["midas_depth_image_processor"] = "midas_depth_image_processor"
-
-    # Inputs
     a_mult: float = InputField(default=2.0, ge=0, description="Midas parameter `a_mult` (a = a_mult * PI)")
     bg_th: float = InputField(default=0.1, ge=0, description="Midas parameter `bg_th`")
     # depth_and_normal not supported in controlnet_aux v0.0.3
@@ -319,15 +311,15 @@ class MidasDepthImageProcessorInvocation(ImageProcessorInvocation):
         return processed_image


-@title("Normal BAE Processor")
-@tags("controlnet", "normal", "bae")
-@category("controlnet")
+@invocation(
+    "normalbae_image_processor",
+    title="Normal BAE Processor",
+    tags=["controlnet"],
+    category="controlnet",
+)
 class NormalbaeImageProcessorInvocation(ImageProcessorInvocation):
     """Applies NormalBae processing to image"""

-    type: Literal["normalbae_image_processor"] = "normalbae_image_processor"
-
-    # Inputs
     detect_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.detect_res)
     image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)
@@ -339,15 +331,10 @@ class NormalbaeImageProcessorInvocation(ImageProcessorInvocation):
         return processed_image


-@title("MLSD Processor")
-@tags("controlnet", "mlsd")
-@category("controlnet")
+@invocation("mlsd_image_processor", title="MLSD Processor", tags=["controlnet", "mlsd"], category="controlnet")
 class MlsdImageProcessorInvocation(ImageProcessorInvocation):
     """Applies MLSD processing to image"""

-    type: Literal["mlsd_image_processor"] = "mlsd_image_processor"
-
-    # Inputs
     detect_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.detect_res)
     image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)
     thr_v: float = InputField(default=0.1, ge=0, description="MLSD parameter `thr_v`")
@@ -365,15 +352,10 @@ class MlsdImageProcessorInvocation(ImageProcessorInvocation):
         return processed_image


-@title("PIDI Processor")
-@tags("controlnet", "pidi")
-@category("controlnet")
+@invocation("pidi_image_processor", title="PIDI Processor", tags=["controlnet", "pidi"], category="controlnet")
 class PidiImageProcessorInvocation(ImageProcessorInvocation):
     """Applies PIDI processing to image"""

-    type: Literal["pidi_image_processor"] = "pidi_image_processor"
-
-    # Inputs
     detect_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.detect_res)
     image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)
     safe: bool = InputField(default=False, description=FieldDescriptions.safe_mode)
@@ -391,15 +373,15 @@ class PidiImageProcessorInvocation(ImageProcessorInvocation):
         return processed_image


-@title("Content Shuffle Processor")
-@tags("controlnet", "contentshuffle")
-@category("controlnet")
+@invocation(
+    "content_shuffle_image_processor",
+    title="Content Shuffle Processor",
+    tags=["controlnet", "contentshuffle"],
+    category="controlnet",
+)
 class ContentShuffleImageProcessorInvocation(ImageProcessorInvocation):
     """Applies content shuffle processing to image"""

-    type: Literal["content_shuffle_image_processor"] = "content_shuffle_image_processor"
-
-    # Inputs
     detect_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.detect_res)
     image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)
     h: Optional[int] = InputField(default=512, ge=0, description="Content shuffle `h` parameter")
@@ -420,29 +402,30 @@ class ContentShuffleImageProcessorInvocation(ImageProcessorInvocation):


 # should work with controlnet_aux >= 0.0.4 and timm <= 0.6.13
-@title("Zoe (Depth) Processor")
-@tags("controlnet", "zoe", "depth")
-@category("controlnet")
+@invocation(
+    "zoe_depth_image_processor",
+    title="Zoe (Depth) Processor",
+    tags=["controlnet", "zoe", "depth"],
+    category="controlnet",
+)
 class ZoeDepthImageProcessorInvocation(ImageProcessorInvocation):
     """Applies Zoe depth processing to image"""

-    type: Literal["zoe_depth_image_processor"] = "zoe_depth_image_processor"
-
     def run_processor(self, image):
         zoe_depth_processor = ZoeDetector.from_pretrained("lllyasviel/Annotators")
         processed_image = zoe_depth_processor(image)
         return processed_image


-@title("Mediapipe Face Processor")
-@tags("controlnet", "mediapipe", "face")
-@category("controlnet")
+@invocation(
+    "mediapipe_face_processor",
+    title="Mediapipe Face Processor",
+    tags=["controlnet", "mediapipe", "face"],
+    category="controlnet",
+)
 class MediapipeFaceProcessorInvocation(ImageProcessorInvocation):
     """Applies mediapipe face processing to image"""

-    type: Literal["mediapipe_face_processor"] = "mediapipe_face_processor"
-
-    # Inputs
     max_faces: int = InputField(default=1, ge=1, description="Maximum number of faces to detect")
     min_confidence: float = InputField(default=0.5, ge=0, le=1, description="Minimum confidence for face detection")
@@ -456,15 +439,15 @@ class MediapipeFaceProcessorInvocation(ImageProcessorInvocation):
         return processed_image


-@title("Leres (Depth) Processor")
-@tags("controlnet", "leres", "depth")
-@category("controlnet")
+@invocation(
+    "leres_image_processor",
+    title="Leres (Depth) Processor",
+    tags=["controlnet", "leres", "depth"],
+    category="controlnet",
+)
 class LeresImageProcessorInvocation(ImageProcessorInvocation):
     """Applies leres processing to image"""

-    type: Literal["leres_image_processor"] = "leres_image_processor"
-
-    # Inputs
     thr_a: float = InputField(default=0, description="Leres parameter `thr_a`")
     thr_b: float = InputField(default=0, description="Leres parameter `thr_b`")
     boost: bool = InputField(default=False, description="Whether to use boost mode")
@@ -484,15 +467,15 @@ class LeresImageProcessorInvocation(ImageProcessorInvocation):
         return processed_image


-@title("Tile Resample Processor")
-@tags("controlnet", "tile")
-@category("controlnet")
+@invocation(
+    "tile_image_processor",
+    title="Tile Resample Processor",
+    tags=["controlnet", "tile"],
+    category="controlnet",
+)
 class TileResamplerProcessorInvocation(ImageProcessorInvocation):
     """Tile resampler processor"""

-    type: Literal["tile_image_processor"] = "tile_image_processor"
-
-    # Inputs
     # res: int = InputField(default=512, ge=0, le=1024, description="The pixel resolution for each tile")
     down_sampling_rate: float = InputField(default=1.0, ge=1.0, le=8.0, description="Down sampling rate")
@@ -523,14 +506,15 @@ class TileResamplerProcessorInvocation(ImageProcessorInvocation):
         return processed_image


-@title("Segment Anything Processor")
-@tags("controlnet", "segmentanything")
-@category("controlnet")
+@invocation(
+    "segment_anything_processor",
+    title="Segment Anything Processor",
+    tags=["controlnet", "segmentanything"],
+    category="controlnet",
+)
 class SegmentAnythingProcessorInvocation(ImageProcessorInvocation):
     """Applies segment anything processing to image"""

-    type: Literal["segment_anything_processor"] = "segment_anything_processor"
-
     def run_processor(self, image):
         # segment_anything_processor = SamDetector.from_pretrained("ybelkada/segment-anything", subfolder="checkpoints")
         segment_anything_processor = SamDetectorReproducibleColors.from_pretrained(
diff --git a/invokeai/app/invocations/cv.py b/invokeai/app/invocations/cv.py
index c03422d95e..40d8867aa1 100644
--- a/invokeai/app/invocations/cv.py
+++ b/invokeai/app/invocations/cv.py
@@ -1,6 +1,5 @@
 # Copyright (c) 2022 Kyle Schouviller (https://github.com/kyle0654)

-from typing import Literal
 import cv2 as cv
 import numpy
@@ -8,18 +7,18 @@
 from PIL import Image, ImageOps

 from invokeai.app.invocations.primitives import ImageField, ImageOutput
 from invokeai.app.models.image import ImageCategory, ResourceOrigin

-from .baseinvocation import BaseInvocation, InputField, InvocationContext, category, tags, title
+from .baseinvocation import BaseInvocation, InputField, InvocationContext, invocation


-@title("OpenCV Inpaint")
-@tags("opencv", "inpaint")
-@category("inpaint")
+@invocation(
+    "cv_inpaint",
+    title="OpenCV Inpaint",
+    tags=["opencv", "inpaint"],
+    category="inpaint",
+)
 class CvInpaintInvocation(BaseInvocation):
     """Simple inpaint using opencv."""

-    type: Literal["cv_inpaint"] = "cv_inpaint"
-
-    # Inputs
     image: ImageField = InputField(description="The image to inpaint")
     mask: ImageField = InputField(description="The mask to use when inpainting")

diff --git a/invokeai/app/invocations/image.py b/invokeai/app/invocations/image.py
index 1168751161..5eeead7db2 100644
--- a/invokeai/app/invocations/image.py
+++ b/invokeai/app/invocations/image.py
@@ -13,18 +13,13 @@
 from invokeai.backend.image_util.invisible_watermark import InvisibleWatermark
 from invokeai.backend.image_util.safety_checker import SafetyChecker

 from ..models.image import ImageCategory, ResourceOrigin
-from .baseinvocation import BaseInvocation, FieldDescriptions, InputField, InvocationContext, tags, title
+from .baseinvocation import BaseInvocation, FieldDescriptions, InputField, InvocationContext, invocation


-@title("Show Image")
-@tags("image")
+@invocation("show_image", title="Show Image", tags=["image"], category="image")
 class ShowImageInvocation(BaseInvocation):
-    """Displays a provided image, and passes it forward in the pipeline."""
+    """Displays a provided image using the OS image viewer, and passes it forward in the pipeline."""

-    # Metadata
-    type: Literal["show_image"] = "show_image"
-
-    # Inputs
     image: ImageField = InputField(description="The image to show")

     def invoke(self, context: InvocationContext) -> ImageOutput:
@@ -41,15 +36,10 @@ class ShowImageInvocation(BaseInvocation):
     )


-@title("Blank Image")
-@tags("image")
+@invocation("blank_image", title="Blank Image", tags=["image"], category="image")
 class BlankImageInvocation(BaseInvocation):
     """Creates a blank image and forwards it to the pipeline"""

-    # Metadata
-    type: Literal["blank_image"] = "blank_image"
-
-    # Inputs
     width: int = InputField(default=512, description="The width of the image")
     height: int = InputField(default=512, description="The height of the image")
     mode: Literal["RGB", "RGBA"] = InputField(default="RGB", description="The mode of the image")
@@ -75,15 +65,10 @@ class BlankImageInvocation(BaseInvocation):
     )


-@title("Crop Image")
-@tags("image", "crop")
+@invocation("img_crop", title="Crop Image", tags=["image", "crop"], category="image")
 class ImageCropInvocation(BaseInvocation):
     """Crops an image to a specified box. The box can be outside of the image."""

-    # Metadata
-    type: Literal["img_crop"] = "img_crop"
-
-    # Inputs
     image: ImageField = InputField(description="The image to crop")
     x: int = InputField(default=0, description="The left x coordinate of the crop rectangle")
     y: int = InputField(default=0, description="The top y coordinate of the crop rectangle")
@@ -113,15 +98,10 @@ class ImageCropInvocation(BaseInvocation):
     )


-@title("Paste Image")
-@tags("image", "paste")
+@invocation("img_paste", title="Paste Image", tags=["image", "paste"], category="image")
 class ImagePasteInvocation(BaseInvocation):
     """Pastes an image into another image."""

-    # Metadata
-    type: Literal["img_paste"] = "img_paste"
-
-    # Inputs
     base_image: ImageField = InputField(description="The base image")
     image: ImageField = InputField(description="The image to paste")
     mask: Optional[ImageField] = InputField(
@@ -166,15 +146,10 @@ class ImagePasteInvocation(BaseInvocation):
     )


-@title("Mask from Alpha")
-@tags("image", "mask")
+@invocation("tomask", title="Mask from Alpha", tags=["image", "mask"], category="image")
 class MaskFromAlphaInvocation(BaseInvocation):
     """Extracts the alpha channel of an image as a mask."""

-    # Metadata
-    type: Literal["tomask"] = "tomask"
-
-    # Inputs
     image: ImageField = InputField(description="The image to create the mask from")
     invert: bool = InputField(default=False, description="Whether or not to invert the mask")
@@ -202,15 +177,10 @@ class MaskFromAlphaInvocation(BaseInvocation):
     )


-@title("Multiply Images")
-@tags("image", "multiply")
+@invocation("img_mul", title="Multiply Images", tags=["image", "multiply"], category="image")
 class ImageMultiplyInvocation(BaseInvocation):
     """Multiplies two images together using `PIL.ImageChops.multiply()`."""

-    # Metadata
-    type: Literal["img_mul"] = "img_mul"
-
-    # Inputs
     image1: ImageField = InputField(description="The first image to multiply")
     image2: ImageField = InputField(description="The second image to multiply")
@@ -240,15 +210,10 @@ class ImageMultiplyInvocation(BaseInvocation):

 IMAGE_CHANNELS = Literal["A", "R", "G", "B"]


-@title("Extract Image Channel")
-@tags("image", "channel")
+@invocation("img_chan", title="Extract Image Channel", tags=["image", "channel"], category="image")
 class ImageChannelInvocation(BaseInvocation):
     """Gets a channel from an image."""

-    # Metadata
-    type: Literal["img_chan"] = "img_chan"
-
-    # Inputs
     image: ImageField = InputField(description="The image to get the channel from")
     channel: IMAGE_CHANNELS = InputField(default="A", description="The channel to get")
@@ -277,15 +242,10 @@ class ImageChannelInvocation(BaseInvocation):

 IMAGE_MODES = Literal["L", "RGB", "RGBA", "CMYK", "YCbCr", "LAB", "HSV", "I", "F"]


-@title("Convert Image Mode")
-@tags("image", "convert")
+@invocation("img_conv", title="Convert Image Mode", tags=["image", "convert"], category="image")
 class ImageConvertInvocation(BaseInvocation):
     """Converts an image to a different mode."""

-    # Metadata
-    type: Literal["img_conv"] = "img_conv"
-
-    # Inputs
     image: ImageField = InputField(description="The image to convert")
     mode: IMAGE_MODES = InputField(default="L", description="The mode to convert to")
@@ -311,15 +271,10 @@ class ImageConvertInvocation(BaseInvocation):
     )


-@title("Blur Image")
-@tags("image", "blur")
+@invocation("img_blur", title="Blur Image", tags=["image", "blur"], category="image")
 class ImageBlurInvocation(BaseInvocation):
     """Blurs an image"""

-    # Metadata
-    type: Literal["img_blur"] = "img_blur"
-
-    # Inputs
     image: ImageField = InputField(description="The image to blur")
     radius: float = InputField(default=8.0, ge=0, description="The blur radius")
     # Metadata
@@ -370,15 +325,10 @@ PIL_RESAMPLING_MAP = {
 }


-@title("Resize Image")
-@tags("image", "resize")
+@invocation("img_resize", title="Resize Image", tags=["image", "resize"], category="image")
 class ImageResizeInvocation(BaseInvocation):
     """Resizes an image to specific dimensions"""

-    # Metadata
-    type: Literal["img_resize"] = "img_resize"
-
-    # Inputs
     image: ImageField = InputField(description="The image to resize")
     width: int = InputField(default=512, ge=64, multiple_of=8, description="The width to resize to (px)")
     height: int = InputField(default=512, ge=64, multiple_of=8, description="The height to resize to (px)")
@@ -415,15 +365,10 @@ class ImageResizeInvocation(BaseInvocation):
     )


-@title("Scale Image")
-@tags("image", "scale")
+@invocation("img_scale", title="Scale Image", tags=["image", "scale"], category="image")
 class ImageScaleInvocation(BaseInvocation):
     """Scales an image by a factor"""

-    # Metadata
-    type: Literal["img_scale"] = "img_scale"
-
-    # Inputs
     image: ImageField = InputField(description="The image to scale")
     scale_factor: float = InputField(
         default=2.0,
@@ -461,15 +406,10 @@ class ImageScaleInvocation(BaseInvocation):
     )


-@title("Lerp Image")
-@tags("image", "lerp")
+@invocation("img_lerp", title="Lerp Image", tags=["image", "lerp"], category="image")
 class ImageLerpInvocation(BaseInvocation):
     """Linear interpolation of all pixels of an image"""

-    # Metadata
-    type: Literal["img_lerp"] = "img_lerp"
-
-    # Inputs
     image: ImageField = InputField(description="The image to lerp")
     min: int = InputField(default=0, ge=0, le=255, description="The minimum output value")
     max: int = InputField(default=255, ge=0, le=255, description="The maximum output value")
@@ -499,15 +439,10 @@ class ImageLerpInvocation(BaseInvocation):
     )


-@title("Inverse Lerp Image")
-@tags("image", "ilerp")
+@invocation("img_ilerp", title="Inverse Lerp Image", tags=["image", "ilerp"], category="image")
 class ImageInverseLerpInvocation(BaseInvocation):
     """Inverse linear interpolation of all pixels of an image"""

-    # Metadata
-    type: Literal["img_ilerp"] = "img_ilerp"
-
-    # Inputs
     image: ImageField = InputField(description="The image to lerp")
     min: int = InputField(default=0, ge=0, le=255, description="The minimum input value")
     max: int = InputField(default=255, ge=0, le=255, description="The maximum input value")
@@ -537,15 +472,10 @@ class ImageInverseLerpInvocation(BaseInvocation):
     )


-@title("Blur NSFW Image")
-@tags("image", "nsfw")
+@invocation("img_nsfw", title="Blur NSFW Image", tags=["image", "nsfw"], category="image")
 class ImageNSFWBlurInvocation(BaseInvocation):
     """Add blur to NSFW-flagged images"""

-    # Metadata
-    type: Literal["img_nsfw"] = "img_nsfw"
-
-    # Inputs
     image: ImageField = InputField(description="The image to check")
     metadata: Optional[CoreMetadata] = InputField(
         default=None, description=FieldDescriptions.core_metadata, ui_hidden=True
@@ -587,15 +517,10 @@ class ImageNSFWBlurInvocation(BaseInvocation):
         return caution.resize((caution.width // 2, caution.height // 2))


-@title("Add Invisible Watermark")
-@tags("image", "watermark")
+@invocation("img_watermark", title="Add Invisible Watermark", tags=["image", "watermark"], category="image")
 class ImageWatermarkInvocation(BaseInvocation):
     """Add an invisible watermark to an image"""

-    # Metadata
-    type: Literal["img_watermark"] = "img_watermark"
-
-    # Inputs
     image: ImageField = InputField(description="The image to check")
     text: str = InputField(default="InvokeAI", description="Watermark text")
     metadata: Optional[CoreMetadata] = InputField(
@@ -623,14 +548,10 @@ class ImageWatermarkInvocation(BaseInvocation):
     )


-@title("Mask Edge")
-@tags("image", "mask", "inpaint")
+@invocation("mask_edge", title="Mask Edge", tags=["image", "mask", "inpaint"], category="image")
 class MaskEdgeInvocation(BaseInvocation):
     """Applies an edge mask to an image"""

-    type: Literal["mask_edge"] = "mask_edge"
-
-    # Inputs
     image: ImageField = InputField(description="The image to apply the mask to")
     edge_size: int = InputField(description="The size of the edge")
     edge_blur: int = InputField(description="The amount of blur on the edge")
@@ -672,14 +593,10 @@ class MaskEdgeInvocation(BaseInvocation):
     )


-@title("Combine Mask")
-@tags("image", "mask", "multiply")
+@invocation("mask_combine", title="Combine Masks", tags=["image", "mask", "multiply"], category="image")
 class MaskCombineInvocation(BaseInvocation):
     """Combine two masks together by multiplying them using `PIL.ImageChops.multiply()`."""

-    type: Literal["mask_combine"] = "mask_combine"
-
-    # Inputs
     mask1: ImageField = InputField(description="The first mask to combine")
     mask2: ImageField = InputField(description="The second image to combine")
@@ -706,17 +623,13 @@ class MaskCombineInvocation(BaseInvocation):
     )


-@title("Color Correct")
-@tags("image", "color")
+@invocation("color_correct", title="Color Correct", tags=["image", "color"], category="image")
 class ColorCorrectInvocation(BaseInvocation):
     """
     Shifts the colors of a target image to match the reference image, optionally
     using a mask to only color-correct certain regions of the target image.
     """

-    type: Literal["color_correct"] = "color_correct"
-
-    # Inputs
     image: ImageField = InputField(description="The image to color-correct")
     reference: ImageField = InputField(description="Reference image for color-correction")
     mask: Optional[ImageField] = InputField(default=None, description="Mask to use when applying color-correction")
@@ -815,14 +728,10 @@ class ColorCorrectInvocation(BaseInvocation):
     )


-@title("Image Hue Adjustment")
-@tags("image", "hue", "hsl")
+@invocation("img_hue_adjust", title="Adjust Image Hue", tags=["image", "hue"], category="image")
 class ImageHueAdjustmentInvocation(BaseInvocation):
     """Adjusts the Hue of an image."""

-    type: Literal["img_hue_adjust"] = "img_hue_adjust"
-
-    # Inputs
     image: ImageField = InputField(description="The image to adjust")
     hue: int = InputField(default=0, description="The degrees by which to rotate the hue, 0-360")
@@ -860,14 +769,15 @@ class ImageHueAdjustmentInvocation(BaseInvocation):
     )


-@title("Image Luminosity Adjustment")
-@tags("image", "luminosity", "hsl")
+@invocation(
+    "img_luminosity_adjust",
+    title="Adjust Image Luminosity",
+    tags=["image", "luminosity", "hsl"],
+    category="image",
+)
 class ImageLuminosityAdjustmentInvocation(BaseInvocation):
     """Adjusts the Luminosity (Value) of an image."""

-    type: Literal["img_luminosity_adjust"] = "img_luminosity_adjust"
-
-    # Inputs
     image: ImageField = InputField(description="The image to adjust")
     luminosity: float = InputField(
         default=1.0, ge=0, le=1, description="The factor by which to adjust the luminosity (value)"
@@ -911,14 +821,15 @@ class ImageLuminosityAdjustmentInvocation(BaseInvocation):
     )


-@title("Image Saturation Adjustment")
-@tags("image", "saturation", "hsl")
+@invocation(
+    "img_saturation_adjust",
+    title="Adjust Image Saturation",
+    tags=["image", "saturation", "hsl"],
+    category="image",
+)
 class ImageSaturationAdjustmentInvocation(BaseInvocation):
     """Adjusts the Saturation of an image."""

-    type: Literal["img_saturation_adjust"] = "img_saturation_adjust"
-
-    # Inputs
     image: ImageField = InputField(description="The image to adjust")
     saturation: float = InputField(default=1.0, ge=0, le=1, description="The factor by which to adjust the saturation")
diff --git a/invokeai/app/invocations/infill.py b/invokeai/app/invocations/infill.py
index ce6e5457e3..438c56e312 100644
--- a/invokeai/app/invocations/infill.py
+++ b/invokeai/app/invocations/infill.py
@@ -12,7 +12,7 @@ from invokeai.backend.image_util.lama import LaMA
 from invokeai.backend.image_util.patchmatch import PatchMatch

 from ..models.image import ImageCategory, ResourceOrigin
-from .baseinvocation import BaseInvocation, InputField, InvocationContext, tags, title
+from .baseinvocation import BaseInvocation, InputField, InvocationContext, invocation


 def infill_methods() -> list[str]:
@@ -116,14 +116,10 @@ def tile_fill_missing(im: Image.Image, tile_size: int = 16, seed: Optional[int]
     return si


-@title("Solid Color Infill")
-@tags("image", "inpaint")
+@invocation("infill_rgba", title="Solid Color Infill", tags=["image", "inpaint"], category="inpaint")
 class InfillColorInvocation(BaseInvocation):
     """Infills transparent areas of an image with a solid color"""

-    type: Literal["infill_rgba"] = "infill_rgba"
-
-    # Inputs
     image: ImageField = InputField(description="The image to infill")
     color: ColorField = InputField(
         default=ColorField(r=127, g=127, b=127, a=255),
@@ -155,14 +151,10 @@ class InfillColorInvocation(BaseInvocation):
     )


-@title("Tile Infill")
-@tags("image", "inpaint")
+@invocation("infill_tile", title="Tile Infill", tags=["image", "inpaint"], category="inpaint")
 class InfillTileInvocation(BaseInvocation):
     """Infills transparent areas of an image with tiles of the image"""

-    type: Literal["infill_tile"] = "infill_tile"
-
-    # Input
     image: ImageField = InputField(description="The image to infill")
     tile_size: int = InputField(default=32, ge=1, description="The tile size (px)")
     seed: int = InputField(
@@ -195,14 +187,10 @@ class InfillTileInvocation(BaseInvocation):
     )


-@title("PatchMatch Infill")
-@tags("image", "inpaint")
+@invocation("infill_patchmatch", title="PatchMatch Infill", tags=["image", "inpaint"], category="inpaint")
 class InfillPatchMatchInvocation(BaseInvocation):
     """Infills transparent areas of an image using the PatchMatch algorithm"""

-    type: Literal["infill_patchmatch"] = "infill_patchmatch"
-
-    # Inputs
     image: ImageField = InputField(description="The image to infill")

     def invoke(self, context: InvocationContext) -> ImageOutput:
@@ -230,14 +218,10 @@ class InfillPatchMatchInvocation(BaseInvocation):
     )


-@title("LaMa Infill")
-@tags("image", "inpaint")
+@invocation("infill_lama", title="LaMa Infill", tags=["image", "inpaint"], category="inpaint")
 class LaMaInfillInvocation(BaseInvocation):
     """Infills transparent areas of an image using the LaMa model"""

-    type: Literal["infill_lama"] = "infill_lama"
-
-    # Inputs
     image: ImageField = InputField(description="The image to infill")

     def invoke(self, context: InvocationContext) -> ImageOutput:
diff --git a/invokeai/app/invocations/latent.py b/invokeai/app/invocations/latent.py
index 3052904ce0..31dde74a09 100644
--- a/invokeai/app/invocations/latent.py
+++ b/invokeai/app/invocations/latent.py
@@ -47,7 +47,15 @@ from ...backend.stable_diffusion.diffusion.shared_invokeai_diffusion import Post
 from ...backend.stable_diffusion.schedulers import SCHEDULER_MAP
 from ...backend.util.devices import choose_precision, choose_torch_device
 from ..models.image import ImageCategory, ResourceOrigin
-from .baseinvocation import BaseInvocation, FieldDescriptions, Input, InputField, InvocationContext, UIType, tags, title
+from .baseinvocation import (
+    BaseInvocation,
+    FieldDescriptions,
+    Input,
+    InputField,
+    InvocationContext,
+    UIType,
+    invocation,
+)
 from .compel import ConditioningField
 from .controlnet_image_processors import ControlField
 from .model import ModelInfo, UNetField, VaeField
@@ -58,15 +66,10 @@ DEFAULT_PRECISION = choose_precision(choose_torch_device())

 SAMPLER_NAME_VALUES = Literal[tuple(list(SCHEDULER_MAP.keys()))]


-@title("Create Denoise Mask")
-@tags("mask", "denoise")
+@invocation("create_denoise_mask", title="Create Denoise Mask", tags=["mask", "denoise"], category="latents")
 class CreateDenoiseMaskInvocation(BaseInvocation):
     """Creates mask for denoising model run."""

-    # Metadata
-    type: Literal["create_denoise_mask"] = "create_denoise_mask"
-
-    # Inputs
     vae: VaeField = InputField(description=FieldDescriptions.vae, input=Input.Connection, ui_order=0)
     image: Optional[ImageField] = InputField(default=None, description="Image which will be masked", ui_order=1)
     mask: ImageField = InputField(description="The mask to use when pasting", ui_order=2)
@@ -158,14 +161,15 @@ def get_scheduler(
     return scheduler


-@title("Denoise Latents")
-@tags("latents", "denoise", "txt2img", "t2i", "t2l", "img2img", "i2i", "l2l")
+@invocation(
+    "denoise_latents",
+    title="Denoise Latents",
+    tags=["latents", "denoise", "txt2img", "t2i", "t2l", "img2img", "i2i", "l2l"],
+    category="latents",
+)
 class DenoiseLatentsInvocation(BaseInvocation):
     """Denoises noisy latents to decodable images"""

-    type: Literal["denoise_latents"] = "denoise_latents"
-
-    # Inputs
     positive_conditioning: ConditioningField = InputField(
         description=FieldDescriptions.positive_cond, input=Input.Connection, ui_order=0
     )
@@ -512,14 +516,10 @@ class DenoiseLatentsInvocation(BaseInvocation):
         return build_latents_output(latents_name=name, latents=result_latents, seed=seed)


-@title("Latents to Image")
-@tags("latents", "image", "vae", "l2i")
+@invocation("l2i", title="Latents to Image", tags=["latents", "image", "vae", "l2i"], category="latents")
 class LatentsToImageInvocation(BaseInvocation):
     """Generates an image from latents."""

-    type: Literal["l2i"] = "l2i"
-
-    # Inputs
     latents: LatentsField = InputField(
         description=FieldDescriptions.latents,
         input=Input.Connection,
@@ -613,14 +613,10 @@ class LatentsToImageInvocation(BaseInvocation):

 LATENTS_INTERPOLATION_MODE = Literal["nearest", "linear", "bilinear", "bicubic", "trilinear", "area", "nearest-exact"]


-@title("Resize Latents")
-@tags("latents", "resize")
+@invocation("lresize", title="Resize Latents", tags=["latents", "resize"], category="latents")
 class ResizeLatentsInvocation(BaseInvocation):
     """Resizes latents to explicit width/height (in pixels). Provided dimensions are floor-divided by 8."""

-    type: Literal["lresize"] = "lresize"
-
-    # Inputs
     latents: LatentsField = InputField(
         description=FieldDescriptions.latents,
         input=Input.Connection,
@@ -661,14 +657,10 @@ class ResizeLatentsInvocation(BaseInvocation):
         return build_latents_output(latents_name=name, latents=resized_latents, seed=self.latents.seed)


-@title("Scale Latents")
-@tags("latents", "resize")
+@invocation("lscale", title="Scale Latents", tags=["latents", "resize"], category="latents")
 class ScaleLatentsInvocation(BaseInvocation):
     """Scales latents by a given factor."""

-    type: Literal["lscale"] = "lscale"
-
-    # Inputs
     latents: LatentsField = InputField(
         description=FieldDescriptions.latents,
         input=Input.Connection,
@@ -701,14 +693,10 @@ class ScaleLatentsInvocation(BaseInvocation):
         return build_latents_output(latents_name=name, latents=resized_latents, seed=self.latents.seed)


-@title("Image to Latents")
-@tags("latents", "image", "vae", "i2l")
+@invocation("i2l", title="Image to Latents", tags=["latents", "image", "vae", "i2l"], category="latents")
 class ImageToLatentsInvocation(BaseInvocation):
     """Encodes an image into latents."""

-    type: Literal["i2l"] = "i2l"
-
-    # Inputs
     image: ImageField = InputField(
         description="The image to encode",
     )
@@ -785,14 +773,10 @@ class ImageToLatentsInvocation(BaseInvocation):
         return build_latents_output(latents_name=name, latents=latents, seed=None)


-@title("Blend Latents")
-@tags("latents", "blend")
+@invocation("lblend", title="Blend Latents", tags=["latents", "blend"], category="latents")
 class BlendLatentsInvocation(BaseInvocation):
     """Blend two latents using a given alpha. Latents must have same size."""

-    type: Literal["lblend"] = "lblend"
-
-    # Inputs
     latents_a: LatentsField = InputField(
         description=FieldDescriptions.latents,
         input=Input.Connection,
diff --git a/invokeai/app/invocations/math.py b/invokeai/app/invocations/math.py
index 80cdc09221..2a8dc12b28 100644
--- a/invokeai/app/invocations/math.py
+++ b/invokeai/app/invocations/math.py
@@ -1,22 +1,16 @@
 # Copyright (c) 2023 Kyle Schouviller (https://github.com/kyle0654)

-from typing import Literal
-
 import numpy as np

 from invokeai.app.invocations.primitives import IntegerOutput

-from .baseinvocation import BaseInvocation, FieldDescriptions, InputField, InvocationContext, tags, title
+from .baseinvocation import BaseInvocation, FieldDescriptions, InputField, InvocationContext, invocation


-@title("Add Integers")
-@tags("math")
+@invocation("add", title="Add Integers", tags=["math", "add"], category="math")
 class AddInvocation(BaseInvocation):
     """Adds two numbers"""

-    type: Literal["add"] = "add"
-
-    # Inputs
     a: int = InputField(default=0, description=FieldDescriptions.num_1)
     b: int = InputField(default=0, description=FieldDescriptions.num_2)
@@ -24,14 +18,10 @@ class AddInvocation(BaseInvocation):
         return IntegerOutput(value=self.a + self.b)


-@title("Subtract Integers")
-@tags("math")
+@invocation("sub", title="Subtract Integers", tags=["math", "subtract"], category="math")
 class SubtractInvocation(BaseInvocation):
     """Subtracts two numbers"""

-    type: Literal["sub"] = "sub"
-
-    # Inputs
     a: int = InputField(default=0, description=FieldDescriptions.num_1)
     b: int = InputField(default=0, description=FieldDescriptions.num_2)
@@ -39,14 +29,10 @@ class SubtractInvocation(BaseInvocation):
         return IntegerOutput(value=self.a - self.b)


-@title("Multiply Integers")
-@tags("math")
+@invocation("mul", title="Multiply Integers", tags=["math", "multiply"], category="math")
 class MultiplyInvocation(BaseInvocation):
     """Multiplies two numbers"""

-    type: Literal["mul"] = "mul"
-
-    # Inputs
     a: int = InputField(default=0, description=FieldDescriptions.num_1)
     b: int = InputField(default=0, description=FieldDescriptions.num_2)
@@ -54,14 +40,10 @@ class MultiplyInvocation(BaseInvocation):
         return IntegerOutput(value=self.a * self.b)


-@title("Divide Integers")
-@tags("math")
+@invocation("div", title="Divide Integers", tags=["math", "divide"], category="math")
 class DivideInvocation(BaseInvocation):
     """Divides two numbers"""

-    type: Literal["div"] = "div"
-
-    # Inputs
     a: int = InputField(default=0, description=FieldDescriptions.num_1)
     b: int = InputField(default=0, description=FieldDescriptions.num_2)
@@ -69,14 +51,10 @@ class DivideInvocation(BaseInvocation):
         return IntegerOutput(value=int(self.a / self.b))


-@title("Random Integer")
-@tags("math")
+@invocation("rand_int", title="Random Integer", tags=["math", "random"], category="math")
 class RandomIntInvocation(BaseInvocation):
     """Outputs a single random integer."""

-    type: Literal["rand_int"] = "rand_int"
-
-    # Inputs
     low: int = InputField(default=0, description="The inclusive low value")
     high: int = InputField(default=np.iinfo(np.int32).max, description="The exclusive high value")
diff --git a/invokeai/app/invocations/metadata.py b/invokeai/app/invocations/metadata.py
index a2e34109df..4a968ff3ac 100644
--- a/invokeai/app/invocations/metadata.py
+++ b/invokeai/app/invocations/metadata.py
@@ -1,4 +1,4 @@
-from typing import Literal, Optional
+from typing import Optional

 from pydantic import Field

@@ -8,8 +8,8 @@ from invokeai.app.invocations.baseinvocation import (
     InputField,
     InvocationContext,
     OutputField,
-    tags,
-    title,
+    invocation,
+    invocation_output,
 )
 from invokeai.app.invocations.controlnet_image_processors import ControlField
 from invokeai.app.invocations.model import LoRAModelField, MainModelField, VAEModelField
@@ -91,21 +91,17 @@ class ImageMetadata(BaseModelExcludeNull):
     graph: Optional[dict] = Field(default=None, description="The graph that created the image")


+@invocation_output("metadata_accumulator_output")
 class MetadataAccumulatorOutput(BaseInvocationOutput):
     """The output of the MetadataAccumulator node"""

-    type: Literal["metadata_accumulator_output"] = "metadata_accumulator_output"
-
     metadata: CoreMetadata = OutputField(description="The core metadata for the image")


-@title("Metadata Accumulator")
-@tags("metadata")
+@invocation("metadata_accumulator", title="Metadata Accumulator", tags=["metadata"], category="metadata")
 class MetadataAccumulatorInvocation(BaseInvocation):
     """Outputs a Core Metadata Object"""

-    type: Literal["metadata_accumulator"] = "metadata_accumulator"
-
     generation_mode: str = InputField(
         description="The generation mode that output this image",
     )
diff --git a/invokeai/app/invocations/model.py b/invokeai/app/invocations/model.py
index 31d04cfd37..c3dde5e725 100644
--- a/invokeai/app/invocations/model.py
+++ b/invokeai/app/invocations/model.py
@@ -1,5 +1,5 @@
 import copy
-from typing import List, Literal, Optional
+from typing import List, Optional

 from pydantic import BaseModel, Field

@@ -13,8 +13,8 @@ from .baseinvocation import (
     InvocationContext,
     OutputField,
     UIType,
-    tags,
-    title,
+    invocation,
+    invocation_output,
 )


@@ -49,11 +49,10 @@ class VaeField(BaseModel):
     seamless_axes: List[str] = Field(default_factory=list, description='Axes("x" and "y") to which apply seamless')


+@invocation_output("model_loader_output")
 class ModelLoaderOutput(BaseInvocationOutput):
     """Model loader output"""

-    type: Literal["model_loader_output"] = "model_loader_output"
-
     unet: UNetField = OutputField(description=FieldDescriptions.unet, title="UNet")
     clip: ClipField = OutputField(description=FieldDescriptions.clip, title="CLIP")
     vae: VaeField = OutputField(description=FieldDescriptions.vae, title="VAE")
@@ -74,14 +73,10 @@ class LoRAModelField(BaseModel):
     base_model: BaseModelType = Field(description="Base model")


-@title("Main Model")
-@tags("model")
+@invocation("main_model_loader", title="Main Model", 
tags=["model"], category="model") class MainModelLoaderInvocation(BaseInvocation): """Loads a main model, outputting its submodels.""" - type: Literal["main_model_loader"] = "main_model_loader" - - # Inputs model: MainModelField = InputField(description=FieldDescriptions.main_model, input=Input.Direct) # TODO: precision? @@ -170,25 +165,18 @@ class MainModelLoaderInvocation(BaseInvocation): ) +@invocation_output("lora_loader_output") class LoraLoaderOutput(BaseInvocationOutput): """Model loader output""" - # fmt: off - type: Literal["lora_loader_output"] = "lora_loader_output" - unet: Optional[UNetField] = OutputField(default=None, description=FieldDescriptions.unet, title="UNet") clip: Optional[ClipField] = OutputField(default=None, description=FieldDescriptions.clip, title="CLIP") - # fmt: on -@title("LoRA") -@tags("lora", "model") +@invocation("lora_loader", title="LoRA", tags=["model"], category="model") class LoraLoaderInvocation(BaseInvocation): """Apply selected lora to unet and text_encoder.""" - type: Literal["lora_loader"] = "lora_loader" - - # Inputs lora: LoRAModelField = InputField(description=FieldDescriptions.lora_model, input=Input.Direct, title="LoRA") weight: float = InputField(default=0.75, description=FieldDescriptions.lora_weight) unet: Optional[UNetField] = InputField( @@ -247,25 +235,19 @@ class LoraLoaderInvocation(BaseInvocation): return output +@invocation_output("sdxl_lora_loader_output") class SDXLLoraLoaderOutput(BaseInvocationOutput): """SDXL LoRA Loader Output""" - # fmt: off - type: Literal["sdxl_lora_loader_output"] = "sdxl_lora_loader_output" - unet: Optional[UNetField] = OutputField(default=None, description=FieldDescriptions.unet, title="UNet") clip: Optional[ClipField] = OutputField(default=None, description=FieldDescriptions.clip, title="CLIP 1") clip2: Optional[ClipField] = OutputField(default=None, description=FieldDescriptions.clip, title="CLIP 2") - # fmt: on -@title("SDXL LoRA") -@tags("sdxl", "lora", "model") 
+@invocation("sdxl_lora_loader", title="SDXL LoRA", tags=["lora", "model"], category="model") class SDXLLoraLoaderInvocation(BaseInvocation): """Apply selected lora to unet and text_encoder.""" - type: Literal["sdxl_lora_loader"] = "sdxl_lora_loader" - lora: LoRAModelField = InputField(description=FieldDescriptions.lora_model, input=Input.Direct, title="LoRA") weight: float = Field(default=0.75, description=FieldDescriptions.lora_weight) unet: Optional[UNetField] = Field( @@ -349,23 +331,17 @@ class VAEModelField(BaseModel): base_model: BaseModelType = Field(description="Base model") +@invocation_output("vae_loader_output") class VaeLoaderOutput(BaseInvocationOutput): - """Model loader output""" + """VAE output""" - type: Literal["vae_loader_output"] = "vae_loader_output" - - # Outputs vae: VaeField = OutputField(description=FieldDescriptions.vae, title="VAE") -@title("VAE") -@tags("vae", "model") +@invocation("vae_loader", title="VAE", tags=["vae", "model"], category="model") class VaeLoaderInvocation(BaseInvocation): """Loads a VAE model, outputting a VaeLoaderOutput""" - type: Literal["vae_loader"] = "vae_loader" - - # Inputs vae_model: VAEModelField = InputField( description=FieldDescriptions.vae_model, input=Input.Direct, ui_type=UIType.VaeModel, title="VAE" ) @@ -392,24 +368,18 @@ class VaeLoaderInvocation(BaseInvocation): ) +@invocation_output("seamless_output") class SeamlessModeOutput(BaseInvocationOutput): """Modified Seamless Model output""" - type: Literal["seamless_output"] = "seamless_output" - - # Outputs unet: Optional[UNetField] = OutputField(description=FieldDescriptions.unet, title="UNet") vae: Optional[VaeField] = OutputField(description=FieldDescriptions.vae, title="VAE") -@title("Seamless") -@tags("seamless", "model") +@invocation("seamless", title="Seamless", tags=["seamless", "model"], category="model") class SeamlessModeInvocation(BaseInvocation): """Applies the seamless transformation to the Model UNet and VAE.""" - type: 
Literal["seamless"] = "seamless" - - # Inputs unet: Optional[UNetField] = InputField( default=None, description=FieldDescriptions.unet, input=Input.Connection, title="UNet" ) diff --git a/invokeai/app/invocations/noise.py b/invokeai/app/invocations/noise.py index d1dd320ee8..1f1d9fe3ce 100644 --- a/invokeai/app/invocations/noise.py +++ b/invokeai/app/invocations/noise.py @@ -1,6 +1,5 @@ # Copyright (c) 2023 Kyle Schouviller (https://github.com/kyle0654) & the InvokeAI Team -from typing import Literal import torch from pydantic import validator @@ -16,8 +15,8 @@ from .baseinvocation import ( InputField, InvocationContext, OutputField, - tags, - title, + invocation, + invocation_output, ) """ @@ -62,12 +61,10 @@ Nodes """ +@invocation_output("noise_output") class NoiseOutput(BaseInvocationOutput): """Invocation noise output""" - type: Literal["noise_output"] = "noise_output" - - # Inputs noise: LatentsField = OutputField(default=None, description=FieldDescriptions.noise) width: int = OutputField(description=FieldDescriptions.width) height: int = OutputField(description=FieldDescriptions.height) @@ -81,14 +78,10 @@ def build_noise_output(latents_name: str, latents: torch.Tensor, seed: int): ) -@title("Noise") -@tags("latents", "noise") +@invocation("noise", title="Noise", tags=["latents", "noise"], category="latents") class NoiseInvocation(BaseInvocation): """Generates latent noise.""" - type: Literal["noise"] = "noise" - - # Inputs seed: int = InputField( ge=0, le=SEED_MAX, diff --git a/invokeai/app/invocations/onnx.py b/invokeai/app/invocations/onnx.py index 504a36a424..b61ea2da99 100644 --- a/invokeai/app/invocations/onnx.py +++ b/invokeai/app/invocations/onnx.py @@ -31,8 +31,8 @@ from .baseinvocation import ( OutputField, UIComponent, UIType, - tags, - title, + invocation, + invocation_output, ) from .controlnet_image_processors import ControlField from .latent import SAMPLER_NAME_VALUES, LatentsField, LatentsOutput, build_latents_output, get_scheduler @@ -56,11 
+56,8 @@ ORT_TO_NP_TYPE = { PRECISION_VALUES = Literal[tuple(list(ORT_TO_NP_TYPE.keys()))] -@title("ONNX Prompt (Raw)") -@tags("onnx", "prompt") +@invocation("prompt_onnx", title="ONNX Prompt (Raw)", tags=["prompt", "onnx"], category="conditioning") class ONNXPromptInvocation(BaseInvocation): - type: Literal["prompt_onnx"] = "prompt_onnx" - prompt: str = InputField(default="", description=FieldDescriptions.raw_prompt, ui_component=UIComponent.Textarea) clip: ClipField = InputField(description=FieldDescriptions.clip, input=Input.Connection) @@ -141,14 +138,15 @@ class ONNXPromptInvocation(BaseInvocation): # Text to image -@title("ONNX Text to Latents") -@tags("latents", "inference", "txt2img", "onnx") +@invocation( + "t2l_onnx", + title="ONNX Text to Latents", + tags=["latents", "inference", "txt2img", "onnx"], + category="latents", +) class ONNXTextToLatentsInvocation(BaseInvocation): """Generates latents from conditionings.""" - type: Literal["t2l_onnx"] = "t2l_onnx" - - # Inputs positive_conditioning: ConditioningField = InputField( description=FieldDescriptions.positive_cond, input=Input.Connection, @@ -316,14 +314,15 @@ class ONNXTextToLatentsInvocation(BaseInvocation): # Latent to image -@title("ONNX Latents to Image") -@tags("latents", "image", "vae", "onnx") +@invocation( + "l2i_onnx", + title="ONNX Latents to Image", + tags=["latents", "image", "vae", "onnx"], + category="image", +) class ONNXLatentsToImageInvocation(BaseInvocation): """Generates an image from latents.""" - type: Literal["l2i_onnx"] = "l2i_onnx" - - # Inputs latents: LatentsField = InputField( description=FieldDescriptions.denoised_latents, input=Input.Connection, @@ -386,17 +385,14 @@ class ONNXLatentsToImageInvocation(BaseInvocation): ) +@invocation_output("model_loader_output_onnx") class ONNXModelLoaderOutput(BaseInvocationOutput): """Model loader output""" - # fmt: off - type: Literal["model_loader_output_onnx"] = "model_loader_output_onnx" - unet: UNetField = OutputField(default=None, 
description=FieldDescriptions.unet, title="UNet") clip: ClipField = OutputField(default=None, description=FieldDescriptions.clip, title="CLIP") vae_decoder: VaeField = OutputField(default=None, description=FieldDescriptions.vae, title="VAE Decoder") vae_encoder: VaeField = OutputField(default=None, description=FieldDescriptions.vae, title="VAE Encoder") - # fmt: on class OnnxModelField(BaseModel): @@ -407,14 +403,10 @@ class OnnxModelField(BaseModel): model_type: ModelType = Field(description="Model Type") -@title("ONNX Main Model") -@tags("onnx", "model") +@invocation("onnx_model_loader", title="ONNX Main Model", tags=["onnx", "model"], category="model") class OnnxModelLoaderInvocation(BaseInvocation): """Loads a main model, outputting its submodels.""" - type: Literal["onnx_model_loader"] = "onnx_model_loader" - - # Inputs model: OnnxModelField = InputField( description=FieldDescriptions.onnx_main_model, input=Input.Direct, ui_type=UIType.ONNXModel ) diff --git a/invokeai/app/invocations/param_easing.py b/invokeai/app/invocations/param_easing.py index 70baa85817..1b3c0dc09e 100644 --- a/invokeai/app/invocations/param_easing.py +++ b/invokeai/app/invocations/param_easing.py @@ -42,17 +42,13 @@ from matplotlib.ticker import MaxNLocator from invokeai.app.invocations.primitives import FloatCollectionOutput -from .baseinvocation import BaseInvocation, InputField, InvocationContext, tags, title +from .baseinvocation import BaseInvocation, InputField, InvocationContext, invocation -@title("Float Range") -@tags("math", "range") +@invocation("float_range", title="Float Range", tags=["math", "range"], category="math") class FloatLinearRangeInvocation(BaseInvocation): """Creates a range""" - type: Literal["float_range"] = "float_range" - - # Inputs start: float = InputField(default=5, description="The first value of the range") stop: float = InputField(default=10, description="The last value of the range") steps: int = InputField(default=30, description="number of values to 
interpolate over (including start and stop)") @@ -100,14 +96,10 @@ EASING_FUNCTION_KEYS = Literal[tuple(list(EASING_FUNCTIONS_MAP.keys()))] # actually I think for now could just use CollectionOutput (which is list[Any] -@title("Step Param Easing") -@tags("step", "easing") +@invocation("step_param_easing", title="Step Param Easing", tags=["step", "easing"], category="step") class StepParamEasingInvocation(BaseInvocation): """Experimental per-step parameter easing for denoising steps""" - type: Literal["step_param_easing"] = "step_param_easing" - - # Inputs easing: EASING_FUNCTION_KEYS = InputField(default="Linear", description="The easing function to use") num_steps: int = InputField(default=20, description="number of denoising steps") start_value: float = InputField(default=0.0, description="easing starting value") diff --git a/invokeai/app/invocations/primitives.py b/invokeai/app/invocations/primitives.py index 49cd49d189..8089b9d5e2 100644 --- a/invokeai/app/invocations/primitives.py +++ b/invokeai/app/invocations/primitives.py @@ -1,6 +1,6 @@ # Copyright (c) 2023 Kyle Schouviller (https://github.com/kyle0654) -from typing import Literal, Optional, Tuple +from typing import Optional, Tuple import torch from pydantic import BaseModel, Field @@ -15,8 +15,8 @@ from .baseinvocation import ( OutputField, UIComponent, UIType, - tags, - title, + invocation, + invocation_output, ) """ @@ -29,44 +29,39 @@ Primitives: Boolean, Integer, Float, String, Image, Latents, Conditioning, Color # region Boolean +@invocation_output("boolean_output") class BooleanOutput(BaseInvocationOutput): """Base class for nodes that output a single boolean""" - type: Literal["boolean_output"] = "boolean_output" value: bool = OutputField(description="The output boolean") +@invocation_output("boolean_collection_output") class BooleanCollectionOutput(BaseInvocationOutput): """Base class for nodes that output a collection of booleans""" - type: Literal["boolean_collection_output"] = 
"boolean_collection_output" - - # Outputs collection: list[bool] = OutputField(description="The output boolean collection", ui_type=UIType.BooleanCollection) -@title("Boolean Primitive") -@tags("primitives", "boolean") +@invocation("boolean", title="Boolean Primitive", tags=["primitives", "boolean"], category="primitives") class BooleanInvocation(BaseInvocation): """A boolean primitive value""" - type: Literal["boolean"] = "boolean" - - # Inputs value: bool = InputField(default=False, description="The boolean value") def invoke(self, context: InvocationContext) -> BooleanOutput: return BooleanOutput(value=self.value) -@title("Boolean Primitive Collection") -@tags("primitives", "boolean", "collection") +@invocation( + "boolean_collection", + title="Boolean Collection Primitive", + tags=["primitives", "boolean", "collection"], + category="primitives", +) class BooleanCollectionInvocation(BaseInvocation): """A collection of boolean primitive values""" - type: Literal["boolean_collection"] = "boolean_collection" - - # Inputs collection: list[bool] = InputField( default_factory=list, description="The collection of boolean values", ui_type=UIType.BooleanCollection ) @@ -80,44 +75,39 @@ class BooleanCollectionInvocation(BaseInvocation): # region Integer +@invocation_output("integer_output") class IntegerOutput(BaseInvocationOutput): """Base class for nodes that output a single integer""" - type: Literal["integer_output"] = "integer_output" value: int = OutputField(description="The output integer") +@invocation_output("integer_collection_output") class IntegerCollectionOutput(BaseInvocationOutput): """Base class for nodes that output a collection of integers""" - type: Literal["integer_collection_output"] = "integer_collection_output" - - # Outputs collection: list[int] = OutputField(description="The int collection", ui_type=UIType.IntegerCollection) -@title("Integer Primitive") -@tags("primitives", "integer") +@invocation("integer", title="Integer Primitive", 
tags=["primitives", "integer"], category="primitives") class IntegerInvocation(BaseInvocation): """An integer primitive value""" - type: Literal["integer"] = "integer" - - # Inputs value: int = InputField(default=0, description="The integer value") def invoke(self, context: InvocationContext) -> IntegerOutput: return IntegerOutput(value=self.value) -@title("Integer Primitive Collection") -@tags("primitives", "integer", "collection") +@invocation( + "integer_collection", + title="Integer Collection Primitive", + tags=["primitives", "integer", "collection"], + category="primitives", +) class IntegerCollectionInvocation(BaseInvocation): """A collection of integer primitive values""" - type: Literal["integer_collection"] = "integer_collection" - - # Inputs collection: list[int] = InputField( default=0, description="The collection of integer values", ui_type=UIType.IntegerCollection ) @@ -131,44 +121,39 @@ class IntegerCollectionInvocation(BaseInvocation): # region Float +@invocation_output("float_output") class FloatOutput(BaseInvocationOutput): """Base class for nodes that output a single float""" - type: Literal["float_output"] = "float_output" value: float = OutputField(description="The output float") +@invocation_output("float_collection_output") class FloatCollectionOutput(BaseInvocationOutput): """Base class for nodes that output a collection of floats""" - type: Literal["float_collection_output"] = "float_collection_output" - - # Outputs collection: list[float] = OutputField(description="The float collection", ui_type=UIType.FloatCollection) -@title("Float Primitive") -@tags("primitives", "float") +@invocation("float", title="Float Primitive", tags=["primitives", "float"], category="primitives") class FloatInvocation(BaseInvocation): """A float primitive value""" - type: Literal["float"] = "float" - - # Inputs value: float = InputField(default=0.0, description="The float value") def invoke(self, context: InvocationContext) -> FloatOutput: return 
FloatOutput(value=self.value) -@title("Float Primitive Collection") -@tags("primitives", "float", "collection") +@invocation( + "float_collection", + title="Float Collection Primitive", + tags=["primitives", "float", "collection"], + category="primitives", +) class FloatCollectionInvocation(BaseInvocation): """A collection of float primitive values""" - type: Literal["float_collection"] = "float_collection" - - # Inputs collection: list[float] = InputField( default_factory=list, description="The collection of float values", ui_type=UIType.FloatCollection ) @@ -182,44 +167,39 @@ class FloatCollectionInvocation(BaseInvocation): # region String +@invocation_output("string_output") class StringOutput(BaseInvocationOutput): """Base class for nodes that output a single string""" - type: Literal["string_output"] = "string_output" value: str = OutputField(description="The output string") +@invocation_output("string_collection_output") class StringCollectionOutput(BaseInvocationOutput): """Base class for nodes that output a collection of strings""" - type: Literal["string_collection_output"] = "string_collection_output" - - # Outputs collection: list[str] = OutputField(description="The output strings", ui_type=UIType.StringCollection) -@title("String Primitive") -@tags("primitives", "string") +@invocation("string", title="String Primitive", tags=["primitives", "string"], category="primitives") class StringInvocation(BaseInvocation): """A string primitive value""" - type: Literal["string"] = "string" - - # Inputs value: str = InputField(default="", description="The string value", ui_component=UIComponent.Textarea) def invoke(self, context: InvocationContext) -> StringOutput: return StringOutput(value=self.value) -@title("String Primitive Collection") -@tags("primitives", "string", "collection") +@invocation( + "string_collection", + title="String Collection Primitive", + tags=["primitives", "string", "collection"], + category="primitives", +) class 
StringCollectionInvocation(BaseInvocation): """A collection of string primitive values""" - type: Literal["string_collection"] = "string_collection" - - # Inputs collection: list[str] = InputField( default_factory=list, description="The collection of string values", ui_type=UIType.StringCollection ) @@ -239,33 +219,26 @@ class ImageField(BaseModel): image_name: str = Field(description="The name of the image") +@invocation_output("image_output") class ImageOutput(BaseInvocationOutput): """Base class for nodes that output a single image""" - type: Literal["image_output"] = "image_output" image: ImageField = OutputField(description="The output image") width: int = OutputField(description="The width of the image in pixels") height: int = OutputField(description="The height of the image in pixels") +@invocation_output("image_collection_output") class ImageCollectionOutput(BaseInvocationOutput): """Base class for nodes that output a collection of images""" - type: Literal["image_collection_output"] = "image_collection_output" - - # Outputs collection: list[ImageField] = OutputField(description="The output images", ui_type=UIType.ImageCollection) -@title("Image Primitive") -@tags("primitives", "image") +@invocation("image", title="Image Primitive", tags=["primitives", "image"], category="primitives") class ImageInvocation(BaseInvocation): """An image primitive value""" - # Metadata - type: Literal["image"] = "image" - - # Inputs image: ImageField = InputField(description="The image to load") def invoke(self, context: InvocationContext) -> ImageOutput: @@ -278,14 +251,15 @@ class ImageInvocation(BaseInvocation): ) -@title("Image Primitive Collection") -@tags("primitives", "image", "collection") +@invocation( + "image_collection", + title="Image Collection Primitive", + tags=["primitives", "image", "collection"], + category="primitives", +) class ImageCollectionInvocation(BaseInvocation): """A collection of image primitive values""" - type: Literal["image_collection"] = 
"image_collection" - - # Inputs collection: list[ImageField] = InputField( default=0, description="The collection of image values", ui_type=UIType.ImageCollection ) @@ -306,10 +280,10 @@ class DenoiseMaskField(BaseModel): masked_latents_name: Optional[str] = Field(description="The name of the masked image latents") +@invocation_output("denoise_mask_output") class DenoiseMaskOutput(BaseInvocationOutput): """Base class for nodes that output a single image""" - type: Literal["denoise_mask_output"] = "denoise_mask_output" denoise_mask: DenoiseMaskField = OutputField(description="Mask for denoise model run") @@ -325,11 +299,10 @@ class LatentsField(BaseModel): seed: Optional[int] = Field(default=None, description="Seed used to generate this latents") +@invocation_output("latents_output") class LatentsOutput(BaseInvocationOutput): """Base class for nodes that output a single latents tensor""" - type: Literal["latents_output"] = "latents_output" - latents: LatentsField = OutputField( description=FieldDescriptions.latents, ) @@ -337,25 +310,20 @@ class LatentsOutput(BaseInvocationOutput): height: int = OutputField(description=FieldDescriptions.height) +@invocation_output("latents_collection_output") class LatentsCollectionOutput(BaseInvocationOutput): """Base class for nodes that output a collection of latents tensors""" - type: Literal["latents_collection_output"] = "latents_collection_output" - collection: list[LatentsField] = OutputField( description=FieldDescriptions.latents, ui_type=UIType.LatentsCollection, ) -@title("Latents Primitive") -@tags("primitives", "latents") +@invocation("latents", title="Latents Primitive", tags=["primitives", "latents"], category="primitives") class LatentsInvocation(BaseInvocation): """A latents tensor primitive value""" - type: Literal["latents"] = "latents" - - # Inputs latents: LatentsField = InputField(description="The latents tensor", input=Input.Connection) def invoke(self, context: InvocationContext) -> LatentsOutput: @@ -364,14 
+332,15 @@ class LatentsInvocation(BaseInvocation): return build_latents_output(self.latents.latents_name, latents) -@title("Latents Primitive Collection") -@tags("primitives", "latents", "collection") +@invocation( + "latents_collection", + title="Latents Collection Primitive", + tags=["primitives", "latents", "collection"], + category="primitives", +) class LatentsCollectionInvocation(BaseInvocation): """A collection of latents tensor primitive values""" - type: Literal["latents_collection"] = "latents_collection" - - # Inputs collection: list[LatentsField] = InputField( description="The collection of latents tensors", ui_type=UIType.LatentsCollection ) @@ -405,30 +374,24 @@ class ColorField(BaseModel): return (self.r, self.g, self.b, self.a) +@invocation_output("color_output") class ColorOutput(BaseInvocationOutput): """Base class for nodes that output a single color""" - type: Literal["color_output"] = "color_output" color: ColorField = OutputField(description="The output color") +@invocation_output("color_collection_output") class ColorCollectionOutput(BaseInvocationOutput): """Base class for nodes that output a collection of colors""" - type: Literal["color_collection_output"] = "color_collection_output" - - # Outputs collection: list[ColorField] = OutputField(description="The output colors", ui_type=UIType.ColorCollection) -@title("Color Primitive") -@tags("primitives", "color") +@invocation("color", title="Color Primitive", tags=["primitives", "color"], category="primitives") class ColorInvocation(BaseInvocation): """A color primitive value""" - type: Literal["color"] = "color" - - # Inputs color: ColorField = InputField(default=ColorField(r=0, g=0, b=0, a=255), description="The color value") def invoke(self, context: InvocationContext) -> ColorOutput: @@ -446,47 +409,47 @@ class ConditioningField(BaseModel): conditioning_name: str = Field(description="The name of conditioning tensor") +@invocation_output("conditioning_output") class 
ConditioningOutput(BaseInvocationOutput): """Base class for nodes that output a single conditioning tensor""" - type: Literal["conditioning_output"] = "conditioning_output" - conditioning: ConditioningField = OutputField(description=FieldDescriptions.cond) +@invocation_output("conditioning_collection_output") class ConditioningCollectionOutput(BaseInvocationOutput): """Base class for nodes that output a collection of conditioning tensors""" - type: Literal["conditioning_collection_output"] = "conditioning_collection_output" - - # Outputs collection: list[ConditioningField] = OutputField( description="The output conditioning tensors", ui_type=UIType.ConditioningCollection, ) -@title("Conditioning Primitive") -@tags("primitives", "conditioning") +@invocation( + "conditioning", + title="Conditioning Primitive", + tags=["primitives", "conditioning"], + category="primitives", +) class ConditioningInvocation(BaseInvocation): """A conditioning tensor primitive value""" - type: Literal["conditioning"] = "conditioning" - conditioning: ConditioningField = InputField(description=FieldDescriptions.cond, input=Input.Connection) def invoke(self, context: InvocationContext) -> ConditioningOutput: return ConditioningOutput(conditioning=self.conditioning) -@title("Conditioning Primitive Collection") -@tags("primitives", "conditioning", "collection") +@invocation( + "conditioning_collection", + title="Conditioning Collection Primitive", + tags=["primitives", "conditioning", "collection"], + category="primitives", +) class ConditioningCollectionInvocation(BaseInvocation): """A collection of conditioning tensor primitive values""" - type: Literal["conditioning_collection"] = "conditioning_collection" - - # Inputs collection: list[ConditioningField] = InputField( default=0, description="The collection of conditioning tensors", ui_type=UIType.ConditioningCollection ) diff --git a/invokeai/app/invocations/prompt.py b/invokeai/app/invocations/prompt.py index cc49e33f4c..c42deeaa2c 100644 
--- a/invokeai/app/invocations/prompt.py +++ b/invokeai/app/invocations/prompt.py @@ -1,5 +1,5 @@ from os.path import exists -from typing import Literal, Optional, Union +from typing import Optional, Union import numpy as np from dynamicprompts.generators import CombinatorialPromptGenerator, RandomPromptGenerator @@ -7,17 +7,13 @@ from pydantic import validator from invokeai.app.invocations.primitives import StringCollectionOutput -from .baseinvocation import BaseInvocation, InputField, InvocationContext, UIComponent, tags, title +from .baseinvocation import BaseInvocation, InputField, InvocationContext, UIComponent, invocation -@title("Dynamic Prompt") -@tags("prompt", "collection") +@invocation("dynamic_prompt", title="Dynamic Prompt", tags=["prompt", "collection"], category="prompt") class DynamicPromptInvocation(BaseInvocation): """Parses a prompt using adieyal/dynamicprompts' random or combinatorial generator""" - type: Literal["dynamic_prompt"] = "dynamic_prompt" - - # Inputs prompt: str = InputField(description="The prompt to parse with dynamicprompts", ui_component=UIComponent.Textarea) max_prompts: int = InputField(default=1, description="The number of prompts to generate") combinatorial: bool = InputField(default=False, description="Whether to use the combinatorial generator") @@ -33,14 +29,10 @@ class DynamicPromptInvocation(BaseInvocation): return StringCollectionOutput(collection=prompts) -@title("Prompts from File") -@tags("prompt", "file") +@invocation("prompt_from_file", title="Prompts from File", tags=["prompt", "file"], category="prompt") class PromptsFromFileInvocation(BaseInvocation): """Loads prompts from a text file""" - type: Literal["prompt_from_file"] = "prompt_from_file" - - # Inputs file_path: str = InputField(description="Path to prompt text file") pre_prompt: Optional[str] = InputField( default=None, description="String to prepend to each prompt", ui_component=UIComponent.Textarea diff --git a/invokeai/app/invocations/sdxl.py 
b/invokeai/app/invocations/sdxl.py index fc224db14d..288858a173 100644 --- a/invokeai/app/invocations/sdxl.py +++ b/invokeai/app/invocations/sdxl.py @@ -1,5 +1,3 @@ -from typing import Literal - from ...backend.model_management import ModelType, SubModelType from .baseinvocation import ( BaseInvocation, @@ -10,41 +8,35 @@ from .baseinvocation import ( InvocationContext, OutputField, UIType, - tags, - title, + invocation, + invocation_output, ) from .model import ClipField, MainModelField, ModelInfo, UNetField, VaeField +@invocation_output("sdxl_model_loader_output") class SDXLModelLoaderOutput(BaseInvocationOutput): """SDXL base model loader output""" - type: Literal["sdxl_model_loader_output"] = "sdxl_model_loader_output" - unet: UNetField = OutputField(description=FieldDescriptions.unet, title="UNet") clip: ClipField = OutputField(description=FieldDescriptions.clip, title="CLIP 1") clip2: ClipField = OutputField(description=FieldDescriptions.clip, title="CLIP 2") vae: VaeField = OutputField(description=FieldDescriptions.vae, title="VAE") +@invocation_output("sdxl_refiner_model_loader_output") class SDXLRefinerModelLoaderOutput(BaseInvocationOutput): """SDXL refiner model loader output""" - type: Literal["sdxl_refiner_model_loader_output"] = "sdxl_refiner_model_loader_output" - unet: UNetField = OutputField(description=FieldDescriptions.unet, title="UNet") clip2: ClipField = OutputField(description=FieldDescriptions.clip, title="CLIP 2") vae: VaeField = OutputField(description=FieldDescriptions.vae, title="VAE") -@title("SDXL Main Model") -@tags("model", "sdxl") +@invocation("sdxl_model_loader", title="SDXL Main Model", tags=["model", "sdxl"], category="model") class SDXLModelLoaderInvocation(BaseInvocation): """Loads an sdxl base model, outputting its submodels.""" - type: Literal["sdxl_model_loader"] = "sdxl_model_loader" - - # Inputs model: MainModelField = InputField( description=FieldDescriptions.sdxl_main_model, input=Input.Direct, 
ui_type=UIType.SDXLMainModel ) @@ -122,14 +114,15 @@ class SDXLModelLoaderInvocation(BaseInvocation): ) -@title("SDXL Refiner Model") -@tags("model", "sdxl", "refiner") +@invocation( + "sdxl_refiner_model_loader", + title="SDXL Refiner Model", + tags=["model", "sdxl", "refiner"], + category="model", +) class SDXLRefinerModelLoaderInvocation(BaseInvocation): """Loads an sdxl refiner model, outputting its submodels.""" - type: Literal["sdxl_refiner_model_loader"] = "sdxl_refiner_model_loader" - - # Inputs model: MainModelField = InputField( description=FieldDescriptions.sdxl_refiner_model, input=Input.Direct, diff --git a/invokeai/app/invocations/upscale.py b/invokeai/app/invocations/upscale.py index bf68cc6c06..e9fb3f9963 100644 --- a/invokeai/app/invocations/upscale.py +++ b/invokeai/app/invocations/upscale.py @@ -11,7 +11,7 @@ from invokeai.app.invocations.primitives import ImageField, ImageOutput from invokeai.app.models.image import ImageCategory, ResourceOrigin -from .baseinvocation import BaseInvocation, InputField, InvocationContext, title, tags +from .baseinvocation import BaseInvocation, InputField, InvocationContext, invocation # TODO: Populate this from disk? # TODO: Use model manager to load? 
@@ -23,14 +23,10 @@ ESRGAN_MODELS = Literal[ ] -@title("Upscale (RealESRGAN)") -@tags("esrgan", "upscale") +@invocation("esrgan", title="Upscale (RealESRGAN)", tags=["esrgan", "upscale"], category="esrgan") class ESRGANInvocation(BaseInvocation): """Upscales an image using RealESRGAN.""" - type: Literal["esrgan"] = "esrgan" - - # Inputs image: ImageField = InputField(description="The input image") model_name: ESRGAN_MODELS = InputField(default="RealESRGAN_x4plus.pth", description="The Real-ESRGAN model to use") diff --git a/invokeai/app/services/graph.py b/invokeai/app/services/graph.py index 51cc8a30ae..18c99fafc1 100644 --- a/invokeai/app/services/graph.py +++ b/invokeai/app/services/graph.py @@ -3,7 +3,7 @@ import copy import itertools import uuid -from typing import Annotated, Any, Literal, Optional, Union, get_args, get_origin, get_type_hints +from typing import Annotated, Any, Optional, Union, get_args, get_origin, get_type_hints import networkx as nx from pydantic import BaseModel, root_validator, validator @@ -14,11 +14,13 @@ from ..invocations import * # noqa: F401 F403 from ..invocations.baseinvocation import ( BaseInvocation, BaseInvocationOutput, + invocation, Input, InputField, InvocationContext, OutputField, UIType, + invocation_output, ) # in 3.10 this would be "from types import NoneType" @@ -148,24 +150,16 @@ class NodeAlreadyExecutedError(Exception): # TODO: Create and use an Empty output? 
+@invocation_output("graph_output") class GraphInvocationOutput(BaseInvocationOutput): - type: Literal["graph_output"] = "graph_output" - - class Config: - schema_extra = { - "required": [ - "type", - "image", - ] - } + pass # TODO: Fill this out and move to invocations +@invocation("graph") class GraphInvocation(BaseInvocation): """Execute a graph""" - type: Literal["graph"] = "graph" - # TODO: figure out how to create a default here graph: "Graph" = Field(description="The graph to run", default=None) @@ -174,22 +168,20 @@ class GraphInvocation(BaseInvocation): return GraphInvocationOutput() +@invocation_output("iterate_output") class IterateInvocationOutput(BaseInvocationOutput): """Used to connect iteration outputs. Will be expanded to a specific output.""" - type: Literal["iterate_output"] = "iterate_output" - item: Any = OutputField( description="The item being iterated over", title="Collection Item", ui_type=UIType.CollectionItem ) # TODO: Fill this out and move to invocations +@invocation("iterate") class IterateInvocation(BaseInvocation): """Iterates over a list of items""" - type: Literal["iterate"] = "iterate" - collection: list[Any] = InputField( description="The list of items to iterate over", default_factory=list, ui_type=UIType.Collection ) @@ -200,19 +192,17 @@ class IterateInvocation(BaseInvocation): return IterateInvocationOutput(item=self.collection[self.index]) +@invocation_output("collect_output") class CollectInvocationOutput(BaseInvocationOutput): - type: Literal["collect_output"] = "collect_output" - collection: list[Any] = OutputField( description="The collection of input items", title="Collection", ui_type=UIType.Collection ) +@invocation("collect") class CollectInvocation(BaseInvocation): """Collects values into a collection""" - type: Literal["collect"] = "collect" - item: Any = InputField( description="The item to collect (all inputs must be of the same type)", ui_type=UIType.CollectionItem, diff --git 
a/invokeai/frontend/web/src/features/nodes/types/types.ts b/invokeai/frontend/web/src/features/nodes/types/types.ts index 394351c720..a6a36715d4 100644 --- a/invokeai/frontend/web/src/features/nodes/types/types.ts +++ b/invokeai/frontend/web/src/features/nodes/types/types.ts @@ -678,6 +678,7 @@ export type TypeHints = { export type InvocationSchemaExtra = { output: OpenAPIV3.ReferenceObject; // the output of the invocation title: string; + category?: string; tags?: string[]; properties: Omit< NonNullable & diff --git a/invokeai/frontend/web/src/services/api/schema.d.ts b/invokeai/frontend/web/src/services/api/schema.d.ts index a64952efa0..85470183c1 100644 --- a/invokeai/frontend/web/src/services/api/schema.d.ts +++ b/invokeai/frontend/web/src/services/api/schema.d.ts @@ -326,12 +326,12 @@ export type components = { AddInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. * @default false */ is_intermediate?: boolean; @@ -340,12 +340,6 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; - /** - * Type - * @default add - * @enum {string} - */ - type: "add"; /** * A * @description The first number @@ -358,6 +352,12 @@ export type components = { * @default 0 */ b?: number; + /** + * Type + * @default add + * @enum {string} + */ + type: "add"; }; /** * AppConfig @@ -409,12 +409,12 @@ export type components = { BlankImageInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. 
*/ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. * @default false */ is_intermediate?: boolean; @@ -423,12 +423,6 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; - /** - * Type - * @default blank_image - * @enum {string} - */ - type: "blank_image"; /** * Width * @description The width of the image @@ -459,6 +453,12 @@ export type components = { * } */ color?: components["schemas"]["ColorField"]; + /** + * Type + * @default blank_image + * @enum {string} + */ + type: "blank_image"; }; /** * Blend Latents @@ -467,12 +467,12 @@ export type components = { BlendLatentsInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. * @default false */ is_intermediate?: boolean; @@ -481,12 +481,6 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; - /** - * Type - * @default lblend - * @enum {string} - */ - type: "lblend"; /** * Latents A * @description Latents tensor @@ -503,6 +497,12 @@ export type components = { * @default 0.5 */ alpha?: number; + /** + * Type + * @default lblend + * @enum {string} + */ + type: "lblend"; }; /** BoardChanges */ BoardChanges: { @@ -680,18 +680,18 @@ export type components = { file: Blob; }; /** - * Boolean Primitive Collection + * Boolean Collection Primitive * @description A collection of boolean primitive values */ BooleanCollectionInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. 
Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. * @default false */ is_intermediate?: boolean; @@ -700,34 +700,34 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; + /** + * Collection + * @description The collection of boolean values + */ + collection?: boolean[]; /** * Type * @default boolean_collection * @enum {string} */ type: "boolean_collection"; - /** - * Collection - * @description The collection of boolean values - */ - collection?: boolean[]; }; /** * BooleanCollectionOutput * @description Base class for nodes that output a collection of booleans */ BooleanCollectionOutput: { + /** + * Collection + * @description The output boolean collection + */ + collection: boolean[]; /** * Type * @default boolean_collection_output * @enum {string} */ type: "boolean_collection_output"; - /** - * Collection - * @description The output boolean collection - */ - collection: boolean[]; }; /** * Boolean Primitive @@ -736,12 +736,12 @@ export type components = { BooleanInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. 
* @default false */ is_intermediate?: boolean; @@ -750,35 +750,35 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; - /** - * Type - * @default boolean - * @enum {string} - */ - type: "boolean"; /** * Value * @description The boolean value * @default false */ value?: boolean; + /** + * Type + * @default boolean + * @enum {string} + */ + type: "boolean"; }; /** * BooleanOutput * @description Base class for nodes that output a single boolean */ BooleanOutput: { + /** + * Value + * @description The output boolean + */ + value: boolean; /** * Type * @default boolean_output * @enum {string} */ type: "boolean_output"; - /** - * Value - * @description The output boolean - */ - value: boolean; }; /** * Canny Processor @@ -787,12 +787,12 @@ export type components = { CannyImageProcessorInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. * @default false */ is_intermediate?: boolean; @@ -801,17 +801,17 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; + /** + * Image + * @description The image to process + */ + image?: components["schemas"]["ImageField"]; /** * Type * @default canny_image_processor * @enum {string} */ type: "canny_image_processor"; - /** - * Image - * @description The image to process - */ - image?: components["schemas"]["ImageField"]; /** * Low Threshold * @description The low threshold of the Canny pixel gradient (0-255) @@ -855,12 +855,12 @@ export type components = { ClipSkipInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. 
Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. * @default false */ is_intermediate?: boolean; @@ -869,12 +869,6 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; - /** - * Type - * @default clip_skip - * @enum {string} - */ - type: "clip_skip"; /** * CLIP * @description CLIP (tokenizer, text encoder, LoRAs) and skipped layer count @@ -886,23 +880,29 @@ export type components = { * @default 0 */ skipped_layers?: number; + /** + * Type + * @default clip_skip + * @enum {string} + */ + type: "clip_skip"; }; /** * ClipSkipInvocationOutput * @description Clip skip node output */ ClipSkipInvocationOutput: { + /** + * CLIP + * @description CLIP (tokenizer, text encoder, LoRAs) and skipped layer count + */ + clip?: components["schemas"]["ClipField"]; /** * Type * @default clip_skip_output * @enum {string} */ type: "clip_skip_output"; - /** - * CLIP - * @description CLIP (tokenizer, text encoder, LoRAs) and skipped layer count - */ - clip?: components["schemas"]["ClipField"]; }; /** * CollectInvocation @@ -911,12 +911,12 @@ export type components = { CollectInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. 
* @default false */ is_intermediate?: boolean; @@ -925,12 +925,6 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; - /** - * Type - * @default collect - * @enum {string} - */ - type: "collect"; /** * Collection Item * @description The item to collect (all inputs must be of the same type) @@ -941,40 +935,48 @@ export type components = { * @description The collection, will be provided on execution */ collection?: unknown[]; + /** + * Type + * @default collect + * @enum {string} + */ + type: "collect"; }; /** * CollectInvocationOutput - * @description Base class for all invocation outputs + * @description Base class for all invocation outputs. + * + * All invocation outputs must use the `@invocation_output` decorator to provide their unique type. */ CollectInvocationOutput: { + /** + * Collection + * @description The collection of input items + */ + collection: unknown[]; /** * Type * @default collect_output * @enum {string} */ type: "collect_output"; - /** - * Collection - * @description The collection of input items - */ - collection: unknown[]; }; /** * ColorCollectionOutput * @description Base class for nodes that output a collection of colors */ ColorCollectionOutput: { + /** + * Collection + * @description The output colors + */ + collection: components["schemas"]["ColorField"][]; /** * Type * @default color_collection_output * @enum {string} */ type: "color_collection_output"; - /** - * Collection - * @description The output colors - */ - collection: components["schemas"]["ColorField"][]; }; /** * Color Correct @@ -984,12 +986,12 @@ export type components = { ColorCorrectInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. 
+ * @description Whether or not this is an intermediate invocation. * @default false */ is_intermediate?: boolean; @@ -998,12 +1000,6 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; - /** - * Type - * @default color_correct - * @enum {string} - */ - type: "color_correct"; /** * Image * @description The image to color-correct @@ -1025,6 +1021,12 @@ export type components = { * @default 8 */ mask_blur_radius?: number; + /** + * Type + * @default color_correct + * @enum {string} + */ + type: "color_correct"; }; /** * ColorField @@ -1059,12 +1061,12 @@ export type components = { ColorInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. * @default false */ is_intermediate?: boolean; @@ -1073,12 +1075,6 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; - /** - * Type - * @default color - * @enum {string} - */ - type: "color"; /** * Color * @description The color value @@ -1090,37 +1086,43 @@ export type components = { * } */ color?: components["schemas"]["ColorField"]; + /** + * Type + * @default color + * @enum {string} + */ + type: "color"; }; /** * ColorOutput * @description Base class for nodes that output a single color */ ColorOutput: { + /** + * Color + * @description The output color + */ + color: components["schemas"]["ColorField"]; /** * Type * @default color_output * @enum {string} */ type: "color_output"; - /** - * Color - * @description The output color - */ - color: components["schemas"]["ColorField"]; }; /** - * Compel Prompt + * Prompt * @description Parse prompt using compel package to conditioning. 
*/ CompelInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. * @default false */ is_intermediate?: boolean; @@ -1129,18 +1131,18 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; - /** - * Type - * @default compel - * @enum {string} - */ - type: "compel"; /** * Prompt * @description Prompt to be parsed by Compel to create a conditioning tensor * @default */ prompt?: string; + /** + * Type + * @default compel + * @enum {string} + */ + type: "compel"; /** * CLIP * @description CLIP (tokenizer, text encoder, LoRAs) and skipped layer count @@ -1148,18 +1150,18 @@ export type components = { clip?: components["schemas"]["ClipField"]; }; /** - * Conditioning Primitive Collection + * Conditioning Collection Primitive * @description A collection of conditioning tensor primitive values */ ConditioningCollectionInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. 
* @default false */ is_intermediate?: boolean; @@ -1168,35 +1170,35 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; - /** - * Type - * @default conditioning_collection - * @enum {string} - */ - type: "conditioning_collection"; /** * Collection * @description The collection of conditioning tensors * @default 0 */ collection?: components["schemas"]["ConditioningField"][]; + /** + * Type + * @default conditioning_collection + * @enum {string} + */ + type: "conditioning_collection"; }; /** * ConditioningCollectionOutput * @description Base class for nodes that output a collection of conditioning tensors */ ConditioningCollectionOutput: { + /** + * Collection + * @description The output conditioning tensors + */ + collection: components["schemas"]["ConditioningField"][]; /** * Type * @default conditioning_collection_output * @enum {string} */ type: "conditioning_collection_output"; - /** - * Collection - * @description The output conditioning tensors - */ - collection: components["schemas"]["ConditioningField"][]; }; /** * ConditioningField @@ -1216,12 +1218,12 @@ export type components = { ConditioningInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. 
* @default false */ is_intermediate?: boolean; @@ -1230,34 +1232,34 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; + /** + * Conditioning + * @description Conditioning tensor + */ + conditioning?: components["schemas"]["ConditioningField"]; /** * Type * @default conditioning * @enum {string} */ type: "conditioning"; - /** - * Conditioning - * @description Conditioning tensor - */ - conditioning?: components["schemas"]["ConditioningField"]; }; /** * ConditioningOutput * @description Base class for nodes that output a single conditioning tensor */ ConditioningOutput: { + /** + * Conditioning + * @description Conditioning tensor + */ + conditioning: components["schemas"]["ConditioningField"]; /** * Type * @default conditioning_output * @enum {string} */ type: "conditioning_output"; - /** - * Conditioning - * @description Conditioning tensor - */ - conditioning: components["schemas"]["ConditioningField"]; }; /** * Content Shuffle Processor @@ -1266,12 +1268,12 @@ export type components = { ContentShuffleImageProcessorInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. 
* @default false */ is_intermediate?: boolean; @@ -1280,17 +1282,17 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; + /** + * Image + * @description The image to process + */ + image?: components["schemas"]["ImageField"]; /** * Type * @default content_shuffle_image_processor * @enum {string} */ type: "content_shuffle_image_processor"; - /** - * Image - * @description The image to process - */ - image?: components["schemas"]["ImageField"]; /** * Detect Resolution * @description Pixel resolution for detection @@ -1374,12 +1376,12 @@ export type components = { ControlNetInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. 
* @default false */ is_intermediate?: boolean; @@ -1388,12 +1390,6 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; - /** - * Type - * @default controlnet - * @enum {string} - */ - type: "controlnet"; /** * Image * @description The control image @@ -1437,6 +1433,12 @@ export type components = { * @enum {string} */ resize_mode?: "just_resize" | "crop_resize" | "fill_resize" | "just_resize_simple"; + /** + * Type + * @default controlnet + * @enum {string} + */ + type: "controlnet"; }; /** ControlNetModelCheckpointConfig */ ControlNetModelCheckpointConfig: { @@ -1500,17 +1502,17 @@ export type components = { * @description node output for ControlNet info */ ControlOutput: { + /** + * Control + * @description ControlNet(s) to apply + */ + control: components["schemas"]["ControlField"]; /** * Type * @default control_output * @enum {string} */ type: "control_output"; - /** - * Control - * @description ControlNet(s) to apply - */ - control: components["schemas"]["ControlField"]; }; /** * CoreMetadata @@ -1666,12 +1668,12 @@ export type components = { CreateDenoiseMaskInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. 
* @default false */ is_intermediate?: boolean; @@ -1680,12 +1682,6 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; - /** - * Type - * @default create_denoise_mask - * @enum {string} - */ - type: "create_denoise_mask"; /** * Vae * @description VAE @@ -1713,6 +1709,12 @@ export type components = { * @default false */ fp32?: boolean; + /** + * Type + * @default create_denoise_mask + * @enum {string} + */ + type: "create_denoise_mask"; }; /** * OpenCV Inpaint @@ -1721,12 +1723,12 @@ export type components = { CvInpaintInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. * @default false */ is_intermediate?: boolean; @@ -1735,12 +1737,6 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; - /** - * Type - * @default cv_inpaint - * @enum {string} - */ - type: "cv_inpaint"; /** * Image * @description The image to inpaint @@ -1751,6 +1747,12 @@ export type components = { * @description The mask to use when inpainting */ mask?: components["schemas"]["ImageField"]; + /** + * Type + * @default cv_inpaint + * @enum {string} + */ + type: "cv_inpaint"; }; /** DeleteBoardResult */ DeleteBoardResult: { @@ -1782,12 +1784,12 @@ export type components = { DenoiseLatentsInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. 
* @default false */ is_intermediate?: boolean; @@ -1796,12 +1798,6 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; - /** - * Type - * @default denoise_latents - * @enum {string} - */ - type: "denoise_latents"; /** * Noise * @description Noise tensor @@ -1853,6 +1849,12 @@ export type components = { * @description The mask to use for the operation */ denoise_mask?: components["schemas"]["DenoiseMaskField"]; + /** + * Type + * @default denoise_latents + * @enum {string} + */ + type: "denoise_latents"; /** * Positive Conditioning * @description Positive conditioning tensor @@ -1890,17 +1892,17 @@ export type components = { * @description Base class for nodes that output a single image */ DenoiseMaskOutput: { + /** + * Denoise Mask + * @description Mask for denoise model run + */ + denoise_mask: components["schemas"]["DenoiseMaskField"]; /** * Type * @default denoise_mask_output * @enum {string} */ type: "denoise_mask_output"; - /** - * Denoise Mask - * @description Mask for denoise model run - */ - denoise_mask: components["schemas"]["DenoiseMaskField"]; }; /** * Divide Integers @@ -1909,12 +1911,12 @@ export type components = { DivideInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. 
* @default false */ is_intermediate?: boolean; @@ -1923,12 +1925,6 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; - /** - * Type - * @default div - * @enum {string} - */ - type: "div"; /** * A * @description The first number @@ -1941,6 +1937,12 @@ export type components = { * @default 0 */ b?: number; + /** + * Type + * @default div + * @enum {string} + */ + type: "div"; }; /** * Dynamic Prompt @@ -1949,12 +1951,12 @@ export type components = { DynamicPromptInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. * @default false */ is_intermediate?: boolean; @@ -1963,12 +1965,6 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; - /** - * Type - * @default dynamic_prompt - * @enum {string} - */ - type: "dynamic_prompt"; /** * Prompt * @description The prompt to parse with dynamicprompts @@ -1986,6 +1982,12 @@ export type components = { * @default false */ combinatorial?: boolean; + /** + * Type + * @default dynamic_prompt + * @enum {string} + */ + type: "dynamic_prompt"; }; /** * Upscale (RealESRGAN) @@ -1994,12 +1996,12 @@ export type components = { ESRGANInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. 
* @default false */ is_intermediate?: boolean; @@ -2008,12 +2010,6 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; - /** - * Type - * @default esrgan - * @enum {string} - */ - type: "esrgan"; /** * Image * @description The input image @@ -2026,6 +2022,12 @@ export type components = { * @enum {string} */ model_name?: "RealESRGAN_x4plus.pth" | "RealESRGAN_x4plus_anime_6B.pth" | "ESRGAN_SRx4_DF2KOST_official-ff704c30.pth" | "RealESRGAN_x2plus.pth"; + /** + * Type + * @default esrgan + * @enum {string} + */ + type: "esrgan"; }; /** Edge */ Edge: { @@ -2054,18 +2056,18 @@ export type components = { field: string; }; /** - * Float Primitive Collection + * Float Collection Primitive * @description A collection of float primitive values */ FloatCollectionInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. 
* @default false */ is_intermediate?: boolean; @@ -2074,34 +2076,34 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; + /** + * Collection + * @description The collection of float values + */ + collection?: number[]; /** * Type * @default float_collection * @enum {string} */ type: "float_collection"; - /** - * Collection - * @description The collection of float values - */ - collection?: number[]; }; /** * FloatCollectionOutput * @description Base class for nodes that output a collection of floats */ FloatCollectionOutput: { + /** + * Collection + * @description The float collection + */ + collection: number[]; /** * Type * @default float_collection_output * @enum {string} */ type: "float_collection_output"; - /** - * Collection - * @description The float collection - */ - collection: number[]; }; /** * Float Primitive @@ -2110,12 +2112,12 @@ export type components = { FloatInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. * @default false */ is_intermediate?: boolean; @@ -2124,18 +2126,18 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; - /** - * Type - * @default float - * @enum {string} - */ - type: "float"; /** * Value * @description The float value * @default 0 */ value?: number; + /** + * Type + * @default float + * @enum {string} + */ + type: "float"; }; /** * Float Range @@ -2144,12 +2146,12 @@ export type components = { FloatLinearRangeInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. 
Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. * @default false */ is_intermediate?: boolean; @@ -2158,12 +2160,6 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; - /** - * Type - * @default float_range - * @enum {string} - */ - type: "float_range"; /** * Start * @description The first value of the range @@ -2182,23 +2178,29 @@ export type components = { * @default 30 */ steps?: number; + /** + * Type + * @default float_range + * @enum {string} + */ + type: "float_range"; }; /** * FloatOutput * @description Base class for nodes that output a single float */ FloatOutput: { + /** + * Value + * @description The output float + */ + value: number; /** * Type * @default float_output * @enum {string} */ type: "float_output"; - /** - * Value - * @description The output float - */ - value: number; }; /** Graph */ Graph: { @@ -2212,7 +2214,7 @@ export type components = { * @description The nodes in this graph */ nodes?: { - [key: string]: components["schemas"]["BooleanInvocation"] | components["schemas"]["BooleanCollectionInvocation"] | components["schemas"]["IntegerInvocation"] | components["schemas"]["IntegerCollectionInvocation"] | components["schemas"]["FloatInvocation"] | components["schemas"]["FloatCollectionInvocation"] | components["schemas"]["StringInvocation"] | components["schemas"]["StringCollectionInvocation"] | components["schemas"]["ImageInvocation"] | components["schemas"]["ImageCollectionInvocation"] | components["schemas"]["LatentsInvocation"] | components["schemas"]["LatentsCollectionInvocation"] | components["schemas"]["ColorInvocation"] | components["schemas"]["ConditioningInvocation"] | components["schemas"]["ConditioningCollectionInvocation"] | components["schemas"]["ControlNetInvocation"] | 
components["schemas"]["ImageProcessorInvocation"] | components["schemas"]["MainModelLoaderInvocation"] | components["schemas"]["LoraLoaderInvocation"] | components["schemas"]["SDXLLoraLoaderInvocation"] | components["schemas"]["VaeLoaderInvocation"] | components["schemas"]["SeamlessModeInvocation"] | components["schemas"]["MetadataAccumulatorInvocation"] | components["schemas"]["RangeInvocation"] | components["schemas"]["RangeOfSizeInvocation"] | components["schemas"]["RandomRangeInvocation"] | components["schemas"]["CompelInvocation"] | components["schemas"]["SDXLCompelPromptInvocation"] | components["schemas"]["SDXLRefinerCompelPromptInvocation"] | components["schemas"]["ClipSkipInvocation"] | components["schemas"]["CvInpaintInvocation"] | components["schemas"]["ShowImageInvocation"] | components["schemas"]["BlankImageInvocation"] | components["schemas"]["ImageCropInvocation"] | components["schemas"]["ImagePasteInvocation"] | components["schemas"]["MaskFromAlphaInvocation"] | components["schemas"]["ImageMultiplyInvocation"] | components["schemas"]["ImageChannelInvocation"] | components["schemas"]["ImageConvertInvocation"] | components["schemas"]["ImageBlurInvocation"] | components["schemas"]["ImageResizeInvocation"] | components["schemas"]["ImageScaleInvocation"] | components["schemas"]["ImageLerpInvocation"] | components["schemas"]["ImageInverseLerpInvocation"] | components["schemas"]["ImageNSFWBlurInvocation"] | components["schemas"]["ImageWatermarkInvocation"] | components["schemas"]["MaskEdgeInvocation"] | components["schemas"]["MaskCombineInvocation"] | components["schemas"]["ColorCorrectInvocation"] | components["schemas"]["ImageHueAdjustmentInvocation"] | components["schemas"]["ImageLuminosityAdjustmentInvocation"] | components["schemas"]["ImageSaturationAdjustmentInvocation"] | components["schemas"]["InfillColorInvocation"] | components["schemas"]["InfillTileInvocation"] | components["schemas"]["InfillPatchMatchInvocation"] | 
components["schemas"]["LaMaInfillInvocation"] | components["schemas"]["CreateDenoiseMaskInvocation"] | components["schemas"]["DenoiseLatentsInvocation"] | components["schemas"]["LatentsToImageInvocation"] | components["schemas"]["ResizeLatentsInvocation"] | components["schemas"]["ScaleLatentsInvocation"] | components["schemas"]["ImageToLatentsInvocation"] | components["schemas"]["BlendLatentsInvocation"] | components["schemas"]["AddInvocation"] | components["schemas"]["SubtractInvocation"] | components["schemas"]["MultiplyInvocation"] | components["schemas"]["DivideInvocation"] | components["schemas"]["RandomIntInvocation"] | components["schemas"]["NoiseInvocation"] | components["schemas"]["ONNXPromptInvocation"] | components["schemas"]["ONNXTextToLatentsInvocation"] | components["schemas"]["ONNXLatentsToImageInvocation"] | components["schemas"]["OnnxModelLoaderInvocation"] | components["schemas"]["FloatLinearRangeInvocation"] | components["schemas"]["StepParamEasingInvocation"] | components["schemas"]["DynamicPromptInvocation"] | components["schemas"]["PromptsFromFileInvocation"] | components["schemas"]["SDXLModelLoaderInvocation"] | components["schemas"]["SDXLRefinerModelLoaderInvocation"] | components["schemas"]["ESRGANInvocation"] | components["schemas"]["GraphInvocation"] | components["schemas"]["IterateInvocation"] | components["schemas"]["CollectInvocation"] | components["schemas"]["CannyImageProcessorInvocation"] | components["schemas"]["HedImageProcessorInvocation"] | components["schemas"]["LineartImageProcessorInvocation"] | components["schemas"]["LineartAnimeImageProcessorInvocation"] | components["schemas"]["OpenposeImageProcessorInvocation"] | components["schemas"]["MidasDepthImageProcessorInvocation"] | components["schemas"]["NormalbaeImageProcessorInvocation"] | components["schemas"]["MlsdImageProcessorInvocation"] | components["schemas"]["PidiImageProcessorInvocation"] | components["schemas"]["ContentShuffleImageProcessorInvocation"] | 
components["schemas"]["ZoeDepthImageProcessorInvocation"] | components["schemas"]["MediapipeFaceProcessorInvocation"] | components["schemas"]["LeresImageProcessorInvocation"] | components["schemas"]["TileResamplerProcessorInvocation"] | components["schemas"]["SegmentAnythingProcessorInvocation"]; + [key: string]: components["schemas"]["BooleanInvocation"] | components["schemas"]["BooleanCollectionInvocation"] | components["schemas"]["IntegerInvocation"] | components["schemas"]["IntegerCollectionInvocation"] | components["schemas"]["FloatInvocation"] | components["schemas"]["FloatCollectionInvocation"] | components["schemas"]["StringInvocation"] | components["schemas"]["StringCollectionInvocation"] | components["schemas"]["ImageInvocation"] | components["schemas"]["ImageCollectionInvocation"] | components["schemas"]["LatentsInvocation"] | components["schemas"]["LatentsCollectionInvocation"] | components["schemas"]["ColorInvocation"] | components["schemas"]["ConditioningInvocation"] | components["schemas"]["ConditioningCollectionInvocation"] | components["schemas"]["ControlNetInvocation"] | components["schemas"]["ImageProcessorInvocation"] | components["schemas"]["MainModelLoaderInvocation"] | components["schemas"]["LoraLoaderInvocation"] | components["schemas"]["SDXLLoraLoaderInvocation"] | components["schemas"]["VaeLoaderInvocation"] | components["schemas"]["SeamlessModeInvocation"] | components["schemas"]["MetadataAccumulatorInvocation"] | components["schemas"]["SDXLModelLoaderInvocation"] | components["schemas"]["SDXLRefinerModelLoaderInvocation"] | components["schemas"]["CompelInvocation"] | components["schemas"]["SDXLCompelPromptInvocation"] | components["schemas"]["SDXLRefinerCompelPromptInvocation"] | components["schemas"]["ClipSkipInvocation"] | components["schemas"]["CreateDenoiseMaskInvocation"] | components["schemas"]["DenoiseLatentsInvocation"] | components["schemas"]["LatentsToImageInvocation"] | components["schemas"]["ResizeLatentsInvocation"] | 
components["schemas"]["ScaleLatentsInvocation"] | components["schemas"]["ImageToLatentsInvocation"] | components["schemas"]["BlendLatentsInvocation"] | components["schemas"]["ONNXPromptInvocation"] | components["schemas"]["ONNXTextToLatentsInvocation"] | components["schemas"]["ONNXLatentsToImageInvocation"] | components["schemas"]["OnnxModelLoaderInvocation"] | components["schemas"]["ShowImageInvocation"] | components["schemas"]["BlankImageInvocation"] | components["schemas"]["ImageCropInvocation"] | components["schemas"]["ImagePasteInvocation"] | components["schemas"]["MaskFromAlphaInvocation"] | components["schemas"]["ImageMultiplyInvocation"] | components["schemas"]["ImageChannelInvocation"] | components["schemas"]["ImageConvertInvocation"] | components["schemas"]["ImageBlurInvocation"] | components["schemas"]["ImageResizeInvocation"] | components["schemas"]["ImageScaleInvocation"] | components["schemas"]["ImageLerpInvocation"] | components["schemas"]["ImageInverseLerpInvocation"] | components["schemas"]["ImageNSFWBlurInvocation"] | components["schemas"]["ImageWatermarkInvocation"] | components["schemas"]["MaskEdgeInvocation"] | components["schemas"]["MaskCombineInvocation"] | components["schemas"]["ColorCorrectInvocation"] | components["schemas"]["ImageHueAdjustmentInvocation"] | components["schemas"]["ImageLuminosityAdjustmentInvocation"] | components["schemas"]["ImageSaturationAdjustmentInvocation"] | components["schemas"]["DynamicPromptInvocation"] | components["schemas"]["PromptsFromFileInvocation"] | components["schemas"]["CvInpaintInvocation"] | components["schemas"]["FloatLinearRangeInvocation"] | components["schemas"]["StepParamEasingInvocation"] | components["schemas"]["AddInvocation"] | components["schemas"]["SubtractInvocation"] | components["schemas"]["MultiplyInvocation"] | components["schemas"]["DivideInvocation"] | components["schemas"]["RandomIntInvocation"] | components["schemas"]["NoiseInvocation"] | components["schemas"]["RangeInvocation"] | 
components["schemas"]["RangeOfSizeInvocation"] | components["schemas"]["RandomRangeInvocation"] | components["schemas"]["ESRGANInvocation"] | components["schemas"]["InfillColorInvocation"] | components["schemas"]["InfillTileInvocation"] | components["schemas"]["InfillPatchMatchInvocation"] | components["schemas"]["LaMaInfillInvocation"] | components["schemas"]["GraphInvocation"] | components["schemas"]["IterateInvocation"] | components["schemas"]["CollectInvocation"] | components["schemas"]["CannyImageProcessorInvocation"] | components["schemas"]["HedImageProcessorInvocation"] | components["schemas"]["LineartImageProcessorInvocation"] | components["schemas"]["LineartAnimeImageProcessorInvocation"] | components["schemas"]["OpenposeImageProcessorInvocation"] | components["schemas"]["MidasDepthImageProcessorInvocation"] | components["schemas"]["NormalbaeImageProcessorInvocation"] | components["schemas"]["MlsdImageProcessorInvocation"] | components["schemas"]["PidiImageProcessorInvocation"] | components["schemas"]["ContentShuffleImageProcessorInvocation"] | components["schemas"]["ZoeDepthImageProcessorInvocation"] | components["schemas"]["MediapipeFaceProcessorInvocation"] | components["schemas"]["LeresImageProcessorInvocation"] | components["schemas"]["TileResamplerProcessorInvocation"] | components["schemas"]["SegmentAnythingProcessorInvocation"]; }; /** * Edges @@ -2255,7 +2257,7 @@ export type components = { * @description The results of node executions */ results: { - [key: string]: components["schemas"]["BooleanOutput"] | components["schemas"]["BooleanCollectionOutput"] | components["schemas"]["IntegerOutput"] | components["schemas"]["IntegerCollectionOutput"] | components["schemas"]["FloatOutput"] | components["schemas"]["FloatCollectionOutput"] | components["schemas"]["StringOutput"] | components["schemas"]["StringCollectionOutput"] | components["schemas"]["ImageOutput"] | components["schemas"]["ImageCollectionOutput"] | 
components["schemas"]["DenoiseMaskOutput"] | components["schemas"]["LatentsOutput"] | components["schemas"]["LatentsCollectionOutput"] | components["schemas"]["ColorOutput"] | components["schemas"]["ColorCollectionOutput"] | components["schemas"]["ConditioningOutput"] | components["schemas"]["ConditioningCollectionOutput"] | components["schemas"]["ControlOutput"] | components["schemas"]["ModelLoaderOutput"] | components["schemas"]["LoraLoaderOutput"] | components["schemas"]["SDXLLoraLoaderOutput"] | components["schemas"]["VaeLoaderOutput"] | components["schemas"]["SeamlessModeOutput"] | components["schemas"]["MetadataAccumulatorOutput"] | components["schemas"]["ClipSkipInvocationOutput"] | components["schemas"]["NoiseOutput"] | components["schemas"]["ONNXModelLoaderOutput"] | components["schemas"]["SDXLModelLoaderOutput"] | components["schemas"]["SDXLRefinerModelLoaderOutput"] | components["schemas"]["GraphInvocationOutput"] | components["schemas"]["IterateInvocationOutput"] | components["schemas"]["CollectInvocationOutput"]; + [key: string]: components["schemas"]["BooleanOutput"] | components["schemas"]["BooleanCollectionOutput"] | components["schemas"]["IntegerOutput"] | components["schemas"]["IntegerCollectionOutput"] | components["schemas"]["FloatOutput"] | components["schemas"]["FloatCollectionOutput"] | components["schemas"]["StringOutput"] | components["schemas"]["StringCollectionOutput"] | components["schemas"]["ImageOutput"] | components["schemas"]["ImageCollectionOutput"] | components["schemas"]["DenoiseMaskOutput"] | components["schemas"]["LatentsOutput"] | components["schemas"]["LatentsCollectionOutput"] | components["schemas"]["ColorOutput"] | components["schemas"]["ColorCollectionOutput"] | components["schemas"]["ConditioningOutput"] | components["schemas"]["ConditioningCollectionOutput"] | components["schemas"]["ControlOutput"] | components["schemas"]["ModelLoaderOutput"] | components["schemas"]["LoraLoaderOutput"] | 
components["schemas"]["SDXLLoraLoaderOutput"] | components["schemas"]["VaeLoaderOutput"] | components["schemas"]["SeamlessModeOutput"] | components["schemas"]["MetadataAccumulatorOutput"] | components["schemas"]["SDXLModelLoaderOutput"] | components["schemas"]["SDXLRefinerModelLoaderOutput"] | components["schemas"]["ClipSkipInvocationOutput"] | components["schemas"]["ONNXModelLoaderOutput"] | components["schemas"]["NoiseOutput"] | components["schemas"]["GraphInvocationOutput"] | components["schemas"]["IterateInvocationOutput"] | components["schemas"]["CollectInvocationOutput"]; }; /** * Errors @@ -2286,12 +2288,12 @@ export type components = { GraphInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. * @default false */ is_intermediate?: boolean; @@ -2300,21 +2302,23 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; + /** + * Graph + * @description The graph to run + */ + graph?: components["schemas"]["Graph"]; /** * Type * @default graph * @enum {string} */ type: "graph"; - /** - * Graph - * @description The graph to run - */ - graph?: components["schemas"]["Graph"]; }; /** * GraphInvocationOutput - * @description Base class for all invocation outputs + * @description Base class for all invocation outputs. + * + * All invocation outputs must use the `@invocation_output` decorator to provide their unique type. */ GraphInvocationOutput: { /** @@ -2336,12 +2340,12 @@ export type components = { HedImageProcessorInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. 
Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. * @default false */ is_intermediate?: boolean; @@ -2350,17 +2354,17 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; + /** + * Image + * @description The image to process + */ + image?: components["schemas"]["ImageField"]; /** * Type * @default hed_image_processor * @enum {string} */ type: "hed_image_processor"; - /** - * Image - * @description The image to process - */ - image?: components["schemas"]["ImageField"]; /** * Detect Resolution * @description Pixel resolution for detection @@ -2387,12 +2391,12 @@ export type components = { ImageBlurInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. * @default false */ is_intermediate?: boolean; @@ -2401,12 +2405,6 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; - /** - * Type - * @default img_blur - * @enum {string} - */ - type: "img_blur"; /** * Image * @description The image to blur @@ -2425,6 +2423,12 @@ export type components = { * @enum {string} */ blur_type?: "gaussian" | "box"; + /** + * Type + * @default img_blur + * @enum {string} + */ + type: "img_blur"; }; /** * ImageCategory @@ -2445,12 +2449,12 @@ export type components = { ImageChannelInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. 
*/ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. * @default false */ is_intermediate?: boolean; @@ -2459,12 +2463,6 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; - /** - * Type - * @default img_chan - * @enum {string} - */ - type: "img_chan"; /** * Image * @description The image to get the channel from @@ -2477,20 +2475,26 @@ export type components = { * @enum {string} */ channel?: "A" | "R" | "G" | "B"; + /** + * Type + * @default img_chan + * @enum {string} + */ + type: "img_chan"; }; /** - * Image Primitive Collection + * Image Collection Primitive * @description A collection of image primitive values */ ImageCollectionInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. 
* @default false */ is_intermediate?: boolean; @@ -2499,35 +2503,35 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; - /** - * Type - * @default image_collection - * @enum {string} - */ - type: "image_collection"; /** * Collection * @description The collection of image values * @default 0 */ collection?: components["schemas"]["ImageField"][]; + /** + * Type + * @default image_collection + * @enum {string} + */ + type: "image_collection"; }; /** * ImageCollectionOutput * @description Base class for nodes that output a collection of images */ ImageCollectionOutput: { + /** + * Collection + * @description The output images + */ + collection: components["schemas"]["ImageField"][]; /** * Type * @default image_collection_output * @enum {string} */ type: "image_collection_output"; - /** - * Collection - * @description The output images - */ - collection: components["schemas"]["ImageField"][]; }; /** * Convert Image Mode @@ -2536,12 +2540,12 @@ export type components = { ImageConvertInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. 
* @default false */ is_intermediate?: boolean; @@ -2550,12 +2554,6 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; - /** - * Type - * @default img_conv - * @enum {string} - */ - type: "img_conv"; /** * Image * @description The image to convert @@ -2568,6 +2566,12 @@ export type components = { * @enum {string} */ mode?: "L" | "RGB" | "RGBA" | "CMYK" | "YCbCr" | "LAB" | "HSV" | "I" | "F"; + /** + * Type + * @default img_conv + * @enum {string} + */ + type: "img_conv"; }; /** * Crop Image @@ -2576,12 +2580,12 @@ export type components = { ImageCropInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. * @default false */ is_intermediate?: boolean; @@ -2590,12 +2594,6 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; - /** - * Type - * @default img_crop - * @enum {string} - */ - type: "img_crop"; /** * Image * @description The image to crop @@ -2625,6 +2623,12 @@ export type components = { * @default 512 */ height?: number; + /** + * Type + * @default img_crop + * @enum {string} + */ + type: "img_crop"; }; /** * ImageDTO @@ -2713,18 +2717,18 @@ export type components = { image_name: string; }; /** - * Image Hue Adjustment + * Adjust Image Hue * @description Adjusts the Hue of an image. */ ImageHueAdjustmentInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. 
+ * @description Whether or not this is an intermediate invocation. * @default false */ is_intermediate?: boolean; @@ -2733,12 +2737,6 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; - /** - * Type - * @default img_hue_adjust - * @enum {string} - */ - type: "img_hue_adjust"; /** * Image * @description The image to adjust @@ -2750,6 +2748,12 @@ export type components = { * @default 0 */ hue?: number; + /** + * Type + * @default img_hue_adjust + * @enum {string} + */ + type: "img_hue_adjust"; }; /** * Inverse Lerp Image @@ -2758,12 +2762,12 @@ export type components = { ImageInverseLerpInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. * @default false */ is_intermediate?: boolean; @@ -2772,12 +2776,6 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; - /** - * Type - * @default img_ilerp - * @enum {string} - */ - type: "img_ilerp"; /** * Image * @description The image to lerp @@ -2795,6 +2793,12 @@ export type components = { * @default 255 */ max?: number; + /** + * Type + * @default img_ilerp + * @enum {string} + */ + type: "img_ilerp"; }; /** * Image Primitive @@ -2803,12 +2807,12 @@ export type components = { ImageInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. 
* @default false */ is_intermediate?: boolean; @@ -2817,17 +2821,17 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; + /** + * Image + * @description The image to load + */ + image?: components["schemas"]["ImageField"]; /** * Type * @default image * @enum {string} */ type: "image"; - /** - * Image - * @description The image to load - */ - image?: components["schemas"]["ImageField"]; }; /** * Lerp Image @@ -2836,12 +2840,12 @@ export type components = { ImageLerpInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. * @default false */ is_intermediate?: boolean; @@ -2850,12 +2854,6 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; - /** - * Type - * @default img_lerp - * @enum {string} - */ - type: "img_lerp"; /** * Image * @description The image to lerp @@ -2873,20 +2871,26 @@ export type components = { * @default 255 */ max?: number; + /** + * Type + * @default img_lerp + * @enum {string} + */ + type: "img_lerp"; }; /** - * Image Luminosity Adjustment + * Adjust Image Luminosity * @description Adjusts the Luminosity (Value) of an image. */ ImageLuminosityAdjustmentInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. 
* @default false */ is_intermediate?: boolean; @@ -2895,12 +2899,6 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; - /** - * Type - * @default img_luminosity_adjust - * @enum {string} - */ - type: "img_luminosity_adjust"; /** * Image * @description The image to adjust @@ -2912,6 +2910,12 @@ export type components = { * @default 1 */ luminosity?: number; + /** + * Type + * @default img_luminosity_adjust + * @enum {string} + */ + type: "img_luminosity_adjust"; }; /** * ImageMetadata @@ -2936,12 +2940,12 @@ export type components = { ImageMultiplyInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. * @default false */ is_intermediate?: boolean; @@ -2950,12 +2954,6 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; - /** - * Type - * @default img_mul - * @enum {string} - */ - type: "img_mul"; /** * Image1 * @description The first image to multiply @@ -2966,6 +2964,12 @@ export type components = { * @description The second image to multiply */ image2?: components["schemas"]["ImageField"]; + /** + * Type + * @default img_mul + * @enum {string} + */ + type: "img_mul"; }; /** * Blur NSFW Image @@ -2974,12 +2978,12 @@ export type components = { ImageNSFWBlurInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. 
* @default false */ is_intermediate?: boolean; @@ -2988,17 +2992,17 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; + /** + * Metadata + * @description Optional core metadata to be written to image + */ + metadata?: components["schemas"]["CoreMetadata"]; /** * Type * @default img_nsfw * @enum {string} */ type: "img_nsfw"; - /** - * Metadata - * @description Optional core metadata to be written to image - */ - metadata?: components["schemas"]["CoreMetadata"]; /** * Image * @description The image to check @@ -3010,12 +3014,6 @@ export type components = { * @description Base class for nodes that output a single image */ ImageOutput: { - /** - * Type - * @default image_output - * @enum {string} - */ - type: "image_output"; /** * Image * @description The output image @@ -3031,6 +3029,12 @@ export type components = { * @description The height of the image in pixels */ height: number; + /** + * Type + * @default image_output + * @enum {string} + */ + type: "image_output"; }; /** * Paste Image @@ -3039,12 +3043,12 @@ export type components = { ImagePasteInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. 
* @default false */ is_intermediate?: boolean; @@ -3053,12 +3057,6 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; - /** - * Type - * @default img_paste - * @enum {string} - */ - type: "img_paste"; /** * Base Image * @description The base image @@ -3086,20 +3084,26 @@ export type components = { * @default 0 */ y?: number; + /** + * Type + * @default img_paste + * @enum {string} + */ + type: "img_paste"; }; /** - * ImageProcessorInvocation + * Base Image Processor * @description Base class for invocations that preprocess images for ControlNet */ ImageProcessorInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. * @default false */ is_intermediate?: boolean; @@ -3108,17 +3112,17 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; + /** + * Image + * @description The image to process + */ + image?: components["schemas"]["ImageField"]; /** * Type * @default image_processor * @enum {string} */ type: "image_processor"; - /** - * Image - * @description The image to process - */ - image?: components["schemas"]["ImageField"]; }; /** * ImageRecordChanges @@ -3156,12 +3160,12 @@ export type components = { ImageResizeInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. 
* @default false */ is_intermediate?: boolean; @@ -3170,12 +3174,6 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; - /** - * Type - * @default img_resize - * @enum {string} - */ - type: "img_resize"; /** * Image * @description The image to resize @@ -3205,20 +3203,26 @@ export type components = { * @description Optional core metadata to be written to image */ metadata?: components["schemas"]["CoreMetadata"]; + /** + * Type + * @default img_resize + * @enum {string} + */ + type: "img_resize"; }; /** - * Image Saturation Adjustment + * Adjust Image Saturation * @description Adjusts the Saturation of an image. */ ImageSaturationAdjustmentInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. * @default false */ is_intermediate?: boolean; @@ -3227,12 +3231,6 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; - /** - * Type - * @default img_saturation_adjust - * @enum {string} - */ - type: "img_saturation_adjust"; /** * Image * @description The image to adjust @@ -3244,6 +3242,12 @@ export type components = { * @default 1 */ saturation?: number; + /** + * Type + * @default img_saturation_adjust + * @enum {string} + */ + type: "img_saturation_adjust"; }; /** * Scale Image @@ -3252,12 +3256,12 @@ export type components = { ImageScaleInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. 
+ * @description Whether or not this is an intermediate invocation. * @default false */ is_intermediate?: boolean; @@ -3266,12 +3270,6 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; - /** - * Type - * @default img_scale - * @enum {string} - */ - type: "img_scale"; /** * Image * @description The image to scale @@ -3290,6 +3288,12 @@ export type components = { * @enum {string} */ resample_mode?: "nearest" | "box" | "bilinear" | "hamming" | "bicubic" | "lanczos"; + /** + * Type + * @default img_scale + * @enum {string} + */ + type: "img_scale"; }; /** * Image to Latents @@ -3298,12 +3302,12 @@ export type components = { ImageToLatentsInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. * @default false */ is_intermediate?: boolean; @@ -3312,12 +3316,6 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; - /** - * Type - * @default i2l - * @enum {string} - */ - type: "i2l"; /** * Image * @description The image to encode @@ -3340,6 +3338,12 @@ export type components = { * @default false */ fp32?: boolean; + /** + * Type + * @default i2l + * @enum {string} + */ + type: "i2l"; }; /** * ImageUrlsDTO @@ -3369,12 +3373,12 @@ export type components = { ImageWatermarkInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. 
* @default false */ is_intermediate?: boolean; @@ -3383,12 +3387,6 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; - /** - * Type - * @default img_watermark - * @enum {string} - */ - type: "img_watermark"; /** * Image * @description The image to check @@ -3405,6 +3403,12 @@ export type components = { * @description Optional core metadata to be written to image */ metadata?: components["schemas"]["CoreMetadata"]; + /** + * Type + * @default img_watermark + * @enum {string} + */ + type: "img_watermark"; }; /** ImagesUpdatedFromListResult */ ImagesUpdatedFromListResult: { @@ -3421,12 +3425,12 @@ export type components = { InfillColorInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. * @default false */ is_intermediate?: boolean; @@ -3435,12 +3439,6 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; - /** - * Type - * @default infill_rgba - * @enum {string} - */ - type: "infill_rgba"; /** * Image * @description The image to infill @@ -3457,6 +3455,12 @@ export type components = { * } */ color?: components["schemas"]["ColorField"]; + /** + * Type + * @default infill_rgba + * @enum {string} + */ + type: "infill_rgba"; }; /** * PatchMatch Infill @@ -3465,12 +3469,12 @@ export type components = { InfillPatchMatchInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. 
+ * @description Whether or not this is an intermediate invocation. * @default false */ is_intermediate?: boolean; @@ -3479,17 +3483,17 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; + /** + * Image + * @description The image to infill + */ + image?: components["schemas"]["ImageField"]; /** * Type * @default infill_patchmatch * @enum {string} */ type: "infill_patchmatch"; - /** - * Image - * @description The image to infill - */ - image?: components["schemas"]["ImageField"]; }; /** * Tile Infill @@ -3498,12 +3502,12 @@ export type components = { InfillTileInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. * @default false */ is_intermediate?: boolean; @@ -3512,12 +3516,6 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; - /** - * Type - * @default infill_tile - * @enum {string} - */ - type: "infill_tile"; /** * Image * @description The image to infill @@ -3534,20 +3532,26 @@ export type components = { * @description The seed to use for tile generation (omit for random) */ seed?: number; + /** + * Type + * @default infill_tile + * @enum {string} + */ + type: "infill_tile"; }; /** - * Integer Primitive Collection + * Integer Collection Primitive * @description A collection of integer primitive values */ IntegerCollectionInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. 
+ * @description Whether or not this is an intermediate invocation. * @default false */ is_intermediate?: boolean; @@ -3556,35 +3560,35 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; - /** - * Type - * @default integer_collection - * @enum {string} - */ - type: "integer_collection"; /** * Collection * @description The collection of integer values * @default 0 */ collection?: number[]; + /** + * Type + * @default integer_collection + * @enum {string} + */ + type: "integer_collection"; }; /** * IntegerCollectionOutput * @description Base class for nodes that output a collection of integers */ IntegerCollectionOutput: { + /** + * Collection + * @description The int collection + */ + collection: number[]; /** * Type * @default integer_collection_output * @enum {string} */ type: "integer_collection_output"; - /** - * Collection - * @description The int collection - */ - collection: number[]; }; /** * Integer Primitive @@ -3593,12 +3597,12 @@ export type components = { IntegerInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. 
* @default false */ is_intermediate?: boolean; @@ -3607,35 +3611,35 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; - /** - * Type - * @default integer - * @enum {string} - */ - type: "integer"; /** * Value * @description The integer value * @default 0 */ value?: number; + /** + * Type + * @default integer + * @enum {string} + */ + type: "integer"; }; /** * IntegerOutput * @description Base class for nodes that output a single integer */ IntegerOutput: { + /** + * Value + * @description The output integer + */ + value: number; /** * Type * @default integer_output * @enum {string} */ type: "integer_output"; - /** - * Value - * @description The output integer - */ - value: number; }; /** * IterateInvocation @@ -3644,12 +3648,12 @@ export type components = { IterateInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. * @default false */ is_intermediate?: boolean; @@ -3658,12 +3662,6 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; - /** - * Type - * @default iterate - * @enum {string} - */ - type: "iterate"; /** * Collection * @description The list of items to iterate over @@ -3675,23 +3673,29 @@ export type components = { * @default 0 */ index?: number; + /** + * Type + * @default iterate + * @enum {string} + */ + type: "iterate"; }; /** * IterateInvocationOutput * @description Used to connect iteration outputs. Will be expanded to a specific output. 
*/ IterateInvocationOutput: { + /** + * Collection Item + * @description The item being iterated over + */ + item?: unknown; /** * Type * @default iterate_output * @enum {string} */ type: "iterate_output"; - /** - * Collection Item - * @description The item being iterated over - */ - item?: unknown; }; /** * LaMa Infill @@ -3700,12 +3704,12 @@ export type components = { LaMaInfillInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. * @default false */ is_intermediate?: boolean; @@ -3714,31 +3718,31 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; + /** + * Image + * @description The image to infill + */ + image?: components["schemas"]["ImageField"]; /** * Type * @default infill_lama * @enum {string} */ type: "infill_lama"; - /** - * Image - * @description The image to infill - */ - image?: components["schemas"]["ImageField"]; }; /** - * Latents Primitive Collection + * Latents Collection Primitive * @description A collection of latents tensor primitive values */ LatentsCollectionInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. 
* @default false */ is_intermediate?: boolean; @@ -3747,34 +3751,34 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; + /** + * Collection + * @description The collection of latents tensors + */ + collection?: components["schemas"]["LatentsField"][]; /** * Type * @default latents_collection * @enum {string} */ type: "latents_collection"; - /** - * Collection - * @description The collection of latents tensors - */ - collection?: components["schemas"]["LatentsField"][]; }; /** * LatentsCollectionOutput * @description Base class for nodes that output a collection of latents tensors */ LatentsCollectionOutput: { + /** + * Collection + * @description Latents tensor + */ + collection: components["schemas"]["LatentsField"][]; /** * Type * @default latents_collection_output * @enum {string} */ type: "latents_collection_output"; - /** - * Collection - * @description Latents tensor - */ - collection: components["schemas"]["LatentsField"][]; }; /** * LatentsField @@ -3799,12 +3803,12 @@ export type components = { LatentsInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. 
* @default false */ is_intermediate?: boolean; @@ -3813,29 +3817,23 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; + /** + * Latents + * @description The latents tensor + */ + latents?: components["schemas"]["LatentsField"]; /** * Type * @default latents * @enum {string} */ type: "latents"; - /** - * Latents - * @description The latents tensor - */ - latents?: components["schemas"]["LatentsField"]; }; /** * LatentsOutput * @description Base class for nodes that output a single latents tensor */ LatentsOutput: { - /** - * Type - * @default latents_output - * @enum {string} - */ - type: "latents_output"; /** * Latents * @description Latents tensor @@ -3851,6 +3849,12 @@ export type components = { * @description Height of output (px) */ height: number; + /** + * Type + * @default latents_output + * @enum {string} + */ + type: "latents_output"; }; /** * Latents to Image @@ -3859,12 +3863,12 @@ export type components = { LatentsToImageInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. 
* @default false */ is_intermediate?: boolean; @@ -3873,12 +3877,6 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; - /** - * Type - * @default l2i - * @enum {string} - */ - type: "l2i"; /** * Tiled * @description Processing using overlapping tiles (reduce memory consumption) @@ -3896,6 +3894,12 @@ export type components = { * @description Optional core metadata to be written to image */ metadata?: components["schemas"]["CoreMetadata"]; + /** + * Type + * @default l2i + * @enum {string} + */ + type: "l2i"; /** * Latents * @description Latents tensor @@ -3914,12 +3918,12 @@ export type components = { LeresImageProcessorInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. * @default false */ is_intermediate?: boolean; @@ -3928,17 +3932,17 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; + /** + * Image + * @description The image to process + */ + image?: components["schemas"]["ImageField"]; /** * Type * @default leres_image_processor * @enum {string} */ type: "leres_image_processor"; - /** - * Image - * @description The image to process - */ - image?: components["schemas"]["ImageField"]; /** * Thr A * @description Leres parameter `thr_a` @@ -3977,12 +3981,12 @@ export type components = { LineartAnimeImageProcessorInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. 
+ * @description Whether or not this is an intermediate invocation. * @default false */ is_intermediate?: boolean; @@ -3991,17 +3995,17 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; + /** + * Image + * @description The image to process + */ + image?: components["schemas"]["ImageField"]; /** * Type * @default lineart_anime_image_processor * @enum {string} */ type: "lineart_anime_image_processor"; - /** - * Image - * @description The image to process - */ - image?: components["schemas"]["ImageField"]; /** * Detect Resolution * @description Pixel resolution for detection @@ -4022,12 +4026,12 @@ export type components = { LineartImageProcessorInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. * @default false */ is_intermediate?: boolean; @@ -4036,17 +4040,17 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; + /** + * Image + * @description The image to process + */ + image?: components["schemas"]["ImageField"]; /** * Type * @default lineart_image_processor * @enum {string} */ type: "lineart_image_processor"; - /** - * Image - * @description The image to process - */ - image?: components["schemas"]["ImageField"]; /** * Detect Resolution * @description Pixel resolution for detection @@ -4150,12 +4154,12 @@ export type components = { LoraLoaderInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. 
+ * @description Whether or not this is an intermediate invocation. * @default false */ is_intermediate?: boolean; @@ -4164,12 +4168,6 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; - /** - * Type - * @default lora_loader - * @enum {string} - */ - type: "lora_loader"; /** * LoRA * @description LoRA model to load @@ -4191,18 +4189,18 @@ export type components = { * @description CLIP (tokenizer, text encoder, LoRAs) and skipped layer count */ clip?: components["schemas"]["ClipField"]; + /** + * Type + * @default lora_loader + * @enum {string} + */ + type: "lora_loader"; }; /** * LoraLoaderOutput * @description Model loader output */ LoraLoaderOutput: { - /** - * Type - * @default lora_loader_output - * @enum {string} - */ - type: "lora_loader_output"; /** * UNet * @description UNet (scheduler, LoRAs) @@ -4213,6 +4211,12 @@ export type components = { * @description CLIP (tokenizer, text encoder, LoRAs) and skipped layer count */ clip?: components["schemas"]["ClipField"]; + /** + * Type + * @default lora_loader_output + * @enum {string} + */ + type: "lora_loader_output"; }; /** * MainModelField @@ -4236,12 +4240,12 @@ export type components = { MainModelLoaderInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. 
* @default false */ is_intermediate?: boolean; @@ -4250,31 +4254,31 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; + /** + * Model + * @description Main model (UNet, VAE, CLIP) to load + */ + model: components["schemas"]["MainModelField"]; /** * Type * @default main_model_loader * @enum {string} */ type: "main_model_loader"; - /** - * Model - * @description Main model (UNet, VAE, CLIP) to load - */ - model: components["schemas"]["MainModelField"]; }; /** - * Combine Mask + * Combine Masks * @description Combine two masks together by multiplying them using `PIL.ImageChops.multiply()`. */ MaskCombineInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. * @default false */ is_intermediate?: boolean; @@ -4283,12 +4287,6 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; - /** - * Type - * @default mask_combine - * @enum {string} - */ - type: "mask_combine"; /** * Mask1 * @description The first mask to combine @@ -4299,6 +4297,12 @@ export type components = { * @description The second image to combine */ mask2?: components["schemas"]["ImageField"]; + /** + * Type + * @default mask_combine + * @enum {string} + */ + type: "mask_combine"; }; /** * Mask Edge @@ -4307,12 +4311,12 @@ export type components = { MaskEdgeInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. 
+ * @description Whether or not this is an intermediate invocation. * @default false */ is_intermediate?: boolean; @@ -4321,12 +4325,6 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; - /** - * Type - * @default mask_edge - * @enum {string} - */ - type: "mask_edge"; /** * Image * @description The image to apply the mask to @@ -4352,6 +4350,12 @@ export type components = { * @description Second threshold for the hysteresis procedure in Canny edge detection */ high_threshold?: number; + /** + * Type + * @default mask_edge + * @enum {string} + */ + type: "mask_edge"; }; /** * Mask from Alpha @@ -4360,12 +4364,12 @@ export type components = { MaskFromAlphaInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. * @default false */ is_intermediate?: boolean; @@ -4374,12 +4378,6 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; - /** - * Type - * @default tomask - * @enum {string} - */ - type: "tomask"; /** * Image * @description The image to create the mask from @@ -4391,6 +4389,12 @@ export type components = { * @default false */ invert?: boolean; + /** + * Type + * @default tomask + * @enum {string} + */ + type: "tomask"; }; /** * Mediapipe Face Processor @@ -4399,12 +4403,12 @@ export type components = { MediapipeFaceProcessorInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. 
+ * @description Whether or not this is an intermediate invocation. * @default false */ is_intermediate?: boolean; @@ -4413,17 +4417,17 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; + /** + * Image + * @description The image to process + */ + image?: components["schemas"]["ImageField"]; /** * Type * @default mediapipe_face_processor * @enum {string} */ type: "mediapipe_face_processor"; - /** - * Image - * @description The image to process - */ - image?: components["schemas"]["ImageField"]; /** * Max Faces * @description Maximum number of faces to detect @@ -4450,12 +4454,12 @@ export type components = { MetadataAccumulatorInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. 
* @default false */ is_intermediate?: boolean; @@ -4464,12 +4468,6 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; - /** - * Type - * @default metadata_accumulator - * @enum {string} - */ - type: "metadata_accumulator"; /** * Generation Mode * @description The generation mode that output this image @@ -4600,37 +4598,43 @@ export type components = { * @description The start value used for refiner denoising */ refiner_start?: number; + /** + * Type + * @default metadata_accumulator + * @enum {string} + */ + type: "metadata_accumulator"; }; /** * MetadataAccumulatorOutput * @description The output of the MetadataAccumulator node */ MetadataAccumulatorOutput: { + /** + * Metadata + * @description The core metadata for the image + */ + metadata: components["schemas"]["CoreMetadata"]; /** * Type * @default metadata_accumulator_output * @enum {string} */ type: "metadata_accumulator_output"; - /** - * Metadata - * @description The core metadata for the image - */ - metadata: components["schemas"]["CoreMetadata"]; }; /** - * Midas (Depth) Processor + * Midas Depth Processor * @description Applies Midas depth processing to image */ MidasDepthImageProcessorInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. 
* @default false */ is_intermediate?: boolean; @@ -4639,17 +4643,17 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; + /** + * Image + * @description The image to process + */ + image?: components["schemas"]["ImageField"]; /** * Type * @default midas_depth_image_processor * @enum {string} */ type: "midas_depth_image_processor"; - /** - * Image - * @description The image to process - */ - image?: components["schemas"]["ImageField"]; /** * A Mult * @description Midas parameter `a_mult` (a = a_mult * PI) @@ -4670,12 +4674,12 @@ export type components = { MlsdImageProcessorInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. 
* @default false */ is_intermediate?: boolean; @@ -4684,17 +4688,17 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; + /** + * Image + * @description The image to process + */ + image?: components["schemas"]["ImageField"]; /** * Type * @default mlsd_image_processor * @enum {string} */ type: "mlsd_image_processor"; - /** - * Image - * @description The image to process - */ - image?: components["schemas"]["ImageField"]; /** * Detect Resolution * @description Pixel resolution for detection @@ -4745,12 +4749,6 @@ export type components = { * @description Model loader output */ ModelLoaderOutput: { - /** - * Type - * @default model_loader_output - * @enum {string} - */ - type: "model_loader_output"; /** * UNet * @description UNet (scheduler, LoRAs) @@ -4766,6 +4764,12 @@ export type components = { * @description VAE */ vae: components["schemas"]["VaeField"]; + /** + * Type + * @default model_loader_output + * @enum {string} + */ + type: "model_loader_output"; }; /** * ModelType @@ -4791,12 +4795,12 @@ export type components = { MultiplyInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. 
* @default false */ is_intermediate?: boolean; @@ -4805,12 +4809,6 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; - /** - * Type - * @default mul - * @enum {string} - */ - type: "mul"; /** * A * @description The first number @@ -4823,6 +4821,12 @@ export type components = { * @default 0 */ b?: number; + /** + * Type + * @default mul + * @enum {string} + */ + type: "mul"; }; /** * Noise @@ -4831,12 +4835,12 @@ export type components = { NoiseInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. * @default false */ is_intermediate?: boolean; @@ -4845,12 +4849,6 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; - /** - * Type - * @default noise - * @enum {string} - */ - type: "noise"; /** * Seed * @description Seed for random number generation @@ -4874,18 +4872,18 @@ export type components = { * @default true */ use_cpu?: boolean; + /** + * Type + * @default noise + * @enum {string} + */ + type: "noise"; }; /** * NoiseOutput * @description Invocation noise output */ NoiseOutput: { - /** - * Type - * @default noise_output - * @enum {string} - */ - type: "noise_output"; /** * Noise * @description Noise tensor @@ -4901,6 +4899,12 @@ export type components = { * @description Height of output (px) */ height: number; + /** + * Type + * @default noise_output + * @enum {string} + */ + type: "noise_output"; }; /** * Normal BAE Processor @@ -4909,12 +4913,12 @@ export type components = { NormalbaeImageProcessorInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. 
+ * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. * @default false */ is_intermediate?: boolean; @@ -4923,17 +4927,17 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; + /** + * Image + * @description The image to process + */ + image?: components["schemas"]["ImageField"]; /** * Type * @default normalbae_image_processor * @enum {string} */ type: "normalbae_image_processor"; - /** - * Image - * @description The image to process - */ - image?: components["schemas"]["ImageField"]; /** * Detect Resolution * @description Pixel resolution for detection @@ -4954,12 +4958,12 @@ export type components = { ONNXLatentsToImageInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. 
* @default false */ is_intermediate?: boolean; @@ -4968,12 +4972,6 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; - /** - * Type - * @default l2i_onnx - * @enum {string} - */ - type: "l2i_onnx"; /** * Latents * @description Denoised latents tensor @@ -4989,18 +4987,18 @@ export type components = { * @description Optional core metadata to be written to image */ metadata?: components["schemas"]["CoreMetadata"]; + /** + * Type + * @default l2i_onnx + * @enum {string} + */ + type: "l2i_onnx"; }; /** * ONNXModelLoaderOutput * @description Model loader output */ ONNXModelLoaderOutput: { - /** - * Type - * @default model_loader_output_onnx - * @enum {string} - */ - type: "model_loader_output_onnx"; /** * UNet * @description UNet (scheduler, LoRAs) @@ -5021,21 +5019,29 @@ export type components = { * @description VAE */ vae_encoder?: components["schemas"]["VaeField"]; + /** + * Type + * @default model_loader_output_onnx + * @enum {string} + */ + type: "model_loader_output_onnx"; }; /** * ONNX Prompt (Raw) * @description A node to process inputs and produce outputs. * May use dependency injection in __init__ to receive providers. + * + * All invocations must use the `@invocation` decorator to provide their unique type. */ ONNXPromptInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. 
* @default false */ is_intermediate?: boolean; @@ -5044,12 +5050,6 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; - /** - * Type - * @default prompt_onnx - * @enum {string} - */ - type: "prompt_onnx"; /** * Prompt * @description Raw prompt text (no parsing) @@ -5061,6 +5061,12 @@ export type components = { * @description CLIP (tokenizer, text encoder, LoRAs) and skipped layer count */ clip?: components["schemas"]["ClipField"]; + /** + * Type + * @default prompt_onnx + * @enum {string} + */ + type: "prompt_onnx"; }; /** ONNXStableDiffusion1ModelConfig */ ONNXStableDiffusion1ModelConfig: { @@ -5116,12 +5122,12 @@ export type components = { ONNXTextToLatentsInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. * @default false */ is_intermediate?: boolean; @@ -5130,12 +5136,6 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; - /** - * Type - * @default t2l_onnx - * @enum {string} - */ - type: "t2l_onnx"; /** * Positive Conditioning * @description Positive conditioning tensor @@ -5187,6 +5187,12 @@ export type components = { * @description ControlNet(s) to apply */ control?: components["schemas"]["ControlField"] | components["schemas"]["ControlField"][]; + /** + * Type + * @default t2l_onnx + * @enum {string} + */ + type: "t2l_onnx"; }; /** * OffsetPaginatedResults[BoardDTO] @@ -5262,12 +5268,12 @@ export type components = { OnnxModelLoaderInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. 
*/ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. * @default false */ is_intermediate?: boolean; @@ -5276,17 +5282,17 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; + /** + * Model + * @description ONNX Main model (UNet, VAE, CLIP) to load + */ + model: components["schemas"]["OnnxModelField"]; /** * Type * @default onnx_model_loader * @enum {string} */ type: "onnx_model_loader"; - /** - * Model - * @description ONNX Main model (UNet, VAE, CLIP) to load - */ - model: components["schemas"]["OnnxModelField"]; }; /** * Openpose Processor @@ -5295,12 +5301,12 @@ export type components = { OpenposeImageProcessorInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. * @default false */ is_intermediate?: boolean; @@ -5309,17 +5315,17 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; + /** + * Image + * @description The image to process + */ + image?: components["schemas"]["ImageField"]; /** * Type * @default openpose_image_processor * @enum {string} */ type: "openpose_image_processor"; - /** - * Image - * @description The image to process - */ - image?: components["schemas"]["ImageField"]; /** * Hand And Face * @description Whether to use hands and face mode @@ -5377,12 +5383,12 @@ export type components = { PidiImageProcessorInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. 
*/ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. * @default false */ is_intermediate?: boolean; @@ -5391,17 +5397,17 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; + /** + * Image + * @description The image to process + */ + image?: components["schemas"]["ImageField"]; /** * Type * @default pidi_image_processor * @enum {string} */ type: "pidi_image_processor"; - /** - * Image - * @description The image to process - */ - image?: components["schemas"]["ImageField"]; /** * Detect Resolution * @description Pixel resolution for detection @@ -5434,12 +5440,12 @@ export type components = { PromptsFromFileInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. * @default false */ is_intermediate?: boolean; @@ -5448,12 +5454,6 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; - /** - * Type - * @default prompt_from_file - * @enum {string} - */ - type: "prompt_from_file"; /** * File Path * @description Path to prompt text file @@ -5481,6 +5481,12 @@ export type components = { * @default 1 */ max_prompts?: number; + /** + * Type + * @default prompt_from_file + * @enum {string} + */ + type: "prompt_from_file"; }; /** * Random Integer @@ -5489,12 +5495,12 @@ export type components = { RandomIntInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. 
*/ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. * @default false */ is_intermediate?: boolean; @@ -5503,12 +5509,6 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; - /** - * Type - * @default rand_int - * @enum {string} - */ - type: "rand_int"; /** * Low * @description The inclusive low value @@ -5521,6 +5521,12 @@ export type components = { * @default 2147483647 */ high?: number; + /** + * Type + * @default rand_int + * @enum {string} + */ + type: "rand_int"; }; /** * Random Range @@ -5529,12 +5535,12 @@ export type components = { RandomRangeInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. * @default false */ is_intermediate?: boolean; @@ -5543,12 +5549,6 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; - /** - * Type - * @default random_range - * @enum {string} - */ - type: "random_range"; /** * Low * @description The inclusive low value @@ -5572,6 +5572,12 @@ export type components = { * @description The seed for the RNG (omit for random) */ seed?: number; + /** + * Type + * @default random_range + * @enum {string} + */ + type: "random_range"; }; /** * Integer Range @@ -5580,12 +5586,12 @@ export type components = { RangeInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. 
*/ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. * @default false */ is_intermediate?: boolean; @@ -5594,12 +5600,6 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; - /** - * Type - * @default range - * @enum {string} - */ - type: "range"; /** * Start * @description The start of the range @@ -5618,6 +5618,12 @@ export type components = { * @default 1 */ step?: number; + /** + * Type + * @default range + * @enum {string} + */ + type: "range"; }; /** * Integer Range of Size @@ -5626,12 +5632,12 @@ export type components = { RangeOfSizeInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. * @default false */ is_intermediate?: boolean; @@ -5640,12 +5646,6 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; - /** - * Type - * @default range_of_size - * @enum {string} - */ - type: "range_of_size"; /** * Start * @description The start of the range @@ -5664,6 +5664,12 @@ export type components = { * @default 1 */ step?: number; + /** + * Type + * @default range_of_size + * @enum {string} + */ + type: "range_of_size"; }; /** RemoveImagesFromBoardResult */ RemoveImagesFromBoardResult: { @@ -5680,12 +5686,12 @@ export type components = { ResizeLatentsInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. 
*/ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. * @default false */ is_intermediate?: boolean; @@ -5694,12 +5700,6 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; - /** - * Type - * @default lresize - * @enum {string} - */ - type: "lresize"; /** * Latents * @description Latents tensor @@ -5728,6 +5728,12 @@ export type components = { * @default false */ antialias?: boolean; + /** + * Type + * @default lresize + * @enum {string} + */ + type: "lresize"; }; /** * ResourceOrigin @@ -5740,18 +5746,18 @@ export type components = { */ ResourceOrigin: "internal" | "external"; /** - * SDXL Compel Prompt + * SDXL Prompt * @description Parse prompt using compel package to conditioning. */ SDXLCompelPromptInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. 
* @default false */ is_intermediate?: boolean; @@ -5760,12 +5766,6 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; - /** - * Type - * @default sdxl_compel_prompt - * @enum {string} - */ - type: "sdxl_compel_prompt"; /** * Prompt * @description Prompt to be parsed by Compel to create a conditioning tensor @@ -5818,6 +5818,12 @@ export type components = { * @description CLIP (tokenizer, text encoder, LoRAs) and skipped layer count */ clip2?: components["schemas"]["ClipField"]; + /** + * Type + * @default sdxl_compel_prompt + * @enum {string} + */ + type: "sdxl_compel_prompt"; }; /** * SDXL LoRA @@ -5826,12 +5832,12 @@ export type components = { SDXLLoraLoaderInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. 
* @default false */ is_intermediate?: boolean; @@ -5840,12 +5846,6 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; - /** - * Type - * @default sdxl_lora_loader - * @enum {string} - */ - type: "sdxl_lora_loader"; /** * LoRA * @description LoRA model to load @@ -5872,18 +5872,18 @@ export type components = { * @description CLIP (tokenizer, text encoder, LoRAs) and skipped layer count */ clip2?: components["schemas"]["ClipField"]; + /** + * Type + * @default sdxl_lora_loader + * @enum {string} + */ + type: "sdxl_lora_loader"; }; /** * SDXLLoraLoaderOutput * @description SDXL LoRA Loader Output */ SDXLLoraLoaderOutput: { - /** - * Type - * @default sdxl_lora_loader_output - * @enum {string} - */ - type: "sdxl_lora_loader_output"; /** * UNet * @description UNet (scheduler, LoRAs) @@ -5899,6 +5899,12 @@ export type components = { * @description CLIP (tokenizer, text encoder, LoRAs) and skipped layer count */ clip2?: components["schemas"]["ClipField"]; + /** + * Type + * @default sdxl_lora_loader_output + * @enum {string} + */ + type: "sdxl_lora_loader_output"; }; /** * SDXL Main Model @@ -5907,12 +5913,12 @@ export type components = { SDXLModelLoaderInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. 
* @default false */ is_intermediate?: boolean; @@ -5921,29 +5927,23 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; + /** + * Model + * @description SDXL Main model (UNet, VAE, CLIP1, CLIP2) to load + */ + model: components["schemas"]["MainModelField"]; /** * Type * @default sdxl_model_loader * @enum {string} */ type: "sdxl_model_loader"; - /** - * Model - * @description SDXL Main model (UNet, VAE, CLIP1, CLIP2) to load - */ - model: components["schemas"]["MainModelField"]; }; /** * SDXLModelLoaderOutput * @description SDXL base model loader output */ SDXLModelLoaderOutput: { - /** - * Type - * @default sdxl_model_loader_output - * @enum {string} - */ - type: "sdxl_model_loader_output"; /** * UNet * @description UNet (scheduler, LoRAs) @@ -5964,20 +5964,26 @@ export type components = { * @description VAE */ vae: components["schemas"]["VaeField"]; + /** + * Type + * @default sdxl_model_loader_output + * @enum {string} + */ + type: "sdxl_model_loader_output"; }; /** - * SDXL Refiner Compel Prompt + * SDXL Refiner Prompt * @description Parse prompt using compel package to conditioning. */ SDXLRefinerCompelPromptInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. 
* @default false */ is_intermediate?: boolean; @@ -5986,12 +5992,6 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; - /** - * Type - * @default sdxl_refiner_compel_prompt - * @enum {string} - */ - type: "sdxl_refiner_compel_prompt"; /** * Style * @description Prompt to be parsed by Compel to create a conditioning tensor @@ -6029,6 +6029,12 @@ export type components = { * @description CLIP (tokenizer, text encoder, LoRAs) and skipped layer count */ clip2?: components["schemas"]["ClipField"]; + /** + * Type + * @default sdxl_refiner_compel_prompt + * @enum {string} + */ + type: "sdxl_refiner_compel_prompt"; }; /** * SDXL Refiner Model @@ -6037,12 +6043,12 @@ export type components = { SDXLRefinerModelLoaderInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. 
* @default false */ is_intermediate?: boolean; @@ -6051,29 +6057,23 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; + /** + * Model + * @description SDXL Refiner Main Model (UNet, VAE, CLIP2) to load + */ + model: components["schemas"]["MainModelField"]; /** * Type * @default sdxl_refiner_model_loader * @enum {string} */ type: "sdxl_refiner_model_loader"; - /** - * Model - * @description SDXL Refiner Main Modde (UNet, VAE, CLIP2) to load - */ - model: components["schemas"]["MainModelField"]; }; /** * SDXLRefinerModelLoaderOutput * @description SDXL refiner model loader output */ SDXLRefinerModelLoaderOutput: { - /** - * Type - * @default sdxl_refiner_model_loader_output - * @enum {string} - */ - type: "sdxl_refiner_model_loader_output"; /** * UNet * @description UNet (scheduler, LoRAs) @@ -6089,6 +6089,12 @@ export type components = { * @description VAE */ vae: components["schemas"]["VaeField"]; + /** + * Type + * @default sdxl_refiner_model_loader_output + * @enum {string} + */ + type: "sdxl_refiner_model_loader_output"; }; /** * Scale Latents * @description Resizes latents by a given factor. */ ScaleLatentsInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. 
* @default false */ is_intermediate?: boolean; @@ -6111,12 +6117,6 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; - /** - * Type - * @default lscale - * @enum {string} - */ - type: "lscale"; /** * Latents * @description Latents tensor @@ -6140,6 +6140,12 @@ export type components = { * @default false */ antialias?: boolean; + /** + * Type + * @default lscale + * @enum {string} + */ + type: "lscale"; }; /** * SchedulerPredictionType @@ -6154,12 +6160,12 @@ export type components = { SeamlessModeInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. * @default false */ is_intermediate?: boolean; @@ -6168,12 +6174,6 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; - /** - * Type - * @default seamless - * @enum {string} - */ - type: "seamless"; /** * UNet * @description UNet (scheduler, LoRAs) @@ -6196,18 +6196,18 @@ export type components = { * @default true */ seamless_x?: boolean; + /** + * Type + * @default seamless + * @enum {string} + */ + type: "seamless"; }; /** * SeamlessModeOutput * @description Modified Seamless Model output */ SeamlessModeOutput: { - /** - * Type - * @default seamless_output - * @enum {string} - */ - type: "seamless_output"; /** * UNet * @description UNet (scheduler, LoRAs) @@ -6218,6 +6218,12 @@ export type components = { * @description VAE */ vae?: components["schemas"]["VaeField"]; + /** + * Type + * @default seamless_output + * @enum {string} + */ + type: "seamless_output"; }; /** * Segment Anything Processor @@ -6226,12 +6232,12 @@ export type components = { SegmentAnythingProcessorInvocation: { /** * Id - * 
@description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. * @default false */ is_intermediate?: boolean; @@ -6240,31 +6246,31 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; + /** + * Image + * @description The image to process + */ + image?: components["schemas"]["ImageField"]; /** * Type * @default segment_anything_processor * @enum {string} */ type: "segment_anything_processor"; - /** - * Image - * @description The image to process - */ - image?: components["schemas"]["ImageField"]; }; /** * Show Image - * @description Displays a provided image, and passes it forward in the pipeline. + * @description Displays a provided image using the OS image viewer, and passes it forward in the pipeline. */ ShowImageInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. 
* @default false */ is_intermediate?: boolean; @@ -6273,17 +6279,17 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; + /** + * Image + * @description The image to show + */ + image?: components["schemas"]["ImageField"]; /** * Type * @default show_image * @enum {string} */ type: "show_image"; - /** - * Image - * @description The image to show - */ - image?: components["schemas"]["ImageField"]; }; /** StableDiffusion1ModelCheckpointConfig */ StableDiffusion1ModelCheckpointConfig: { @@ -6442,12 +6448,12 @@ export type components = { StepParamEasingInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. * @default false */ is_intermediate?: boolean; @@ -6456,12 +6462,6 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; - /** - * Type - * @default step_param_easing - * @enum {string} - */ - type: "step_param_easing"; /** * Easing * @description The easing function to use @@ -6521,20 +6521,26 @@ export type components = { * @default false */ show_easing_plot?: boolean; + /** + * Type + * @default step_param_easing + * @enum {string} + */ + type: "step_param_easing"; }; /** - * String Primitive Collection + * String Collection Primitive * @description A collection of string primitive values */ StringCollectionInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. 
+ * @description Whether or not this is an intermediate invocation. * @default false */ is_intermediate?: boolean; @@ -6543,34 +6549,34 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; + /** + * Collection + * @description The collection of string values + */ + collection?: string[]; /** * Type * @default string_collection * @enum {string} */ type: "string_collection"; - /** - * Collection - * @description The collection of string values - */ - collection?: string[]; }; /** * StringCollectionOutput * @description Base class for nodes that output a collection of strings */ StringCollectionOutput: { + /** + * Collection + * @description The output strings + */ + collection: string[]; /** * Type * @default string_collection_output * @enum {string} */ type: "string_collection_output"; - /** - * Collection - * @description The output strings - */ - collection: string[]; }; /** * String Primitive @@ -6579,12 +6585,12 @@ export type components = { StringInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. 
* @default false */ is_intermediate?: boolean; @@ -6593,35 +6599,35 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; - /** - * Type - * @default string - * @enum {string} - */ - type: "string"; /** * Value * @description The string value * @default */ value?: string; + /** + * Type + * @default string + * @enum {string} + */ + type: "string"; }; /** * StringOutput * @description Base class for nodes that output a single string */ StringOutput: { + /** + * Value + * @description The output string + */ + value: string; /** * Type * @default string_output * @enum {string} */ type: "string_output"; - /** - * Value - * @description The output string - */ - value: string; }; /** * SubModelType @@ -6636,12 +6642,12 @@ export type components = { SubtractInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. * @default false */ is_intermediate?: boolean; @@ -6650,12 +6656,6 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; - /** - * Type - * @default sub - * @enum {string} - */ - type: "sub"; /** * A * @description The first number @@ -6668,6 +6668,12 @@ export type components = { * @default 0 */ b?: number; + /** + * Type + * @default sub + * @enum {string} + */ + type: "sub"; }; /** TextualInversionModelConfig */ TextualInversionModelConfig: { @@ -6694,12 +6700,12 @@ export type components = { TileResamplerProcessorInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. 
*/ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. * @default false */ is_intermediate?: boolean; @@ -6708,17 +6714,17 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; + /** + * Image + * @description The image to process + */ + image?: components["schemas"]["ImageField"]; /** * Type * @default tile_image_processor * @enum {string} */ type: "tile_image_processor"; - /** - * Image - * @description The image to process - */ - image?: components["schemas"]["ImageField"]; /** * Down Sampling Rate * @description Down sampling rate @@ -6795,12 +6801,12 @@ export type components = { VaeLoaderInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. 
* @default false */ is_intermediate?: boolean; @@ -6809,34 +6815,34 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; + /** + * VAE + * @description VAE model to load + */ + vae_model: components["schemas"]["VAEModelField"]; /** * Type * @default vae_loader * @enum {string} */ type: "vae_loader"; - /** - * VAE - * @description VAE model to load - */ - vae_model: components["schemas"]["VAEModelField"]; }; /** * VaeLoaderOutput - * @description Model loader output + * @description VAE output */ VaeLoaderOutput: { + /** + * VAE + * @description VAE + */ + vae: components["schemas"]["VaeField"]; /** * Type * @default vae_loader_output * @enum {string} */ type: "vae_loader_output"; - /** - * VAE - * @description VAE - */ - vae: components["schemas"]["VaeField"]; }; /** VaeModelConfig */ VaeModelConfig: { @@ -6877,12 +6883,12 @@ export type components = { ZoeDepthImageProcessorInvocation: { /** * Id - * @description The id of this node. Must be unique among all nodes. + * @description The id of this instance of an invocation. Must be unique among all instances of invocations. */ id: string; /** * Is Intermediate - * @description Whether or not this node is an intermediate node. + * @description Whether or not this is an intermediate invocation. * @default false */ is_intermediate?: boolean; @@ -6891,35 +6897,39 @@ export type components = { * @description The workflow to save with the image */ workflow?: string; + /** + * Image + * @description The image to process + */ + image?: components["schemas"]["ImageField"]; /** * Type * @default zoe_depth_image_processor * @enum {string} */ type: "zoe_depth_image_processor"; - /** - * Image - * @description The image to process - */ - image?: components["schemas"]["ImageField"]; }; /** * UIConfigBase * @description Provides additional node configuration to the UI. - * This is used internally by the @tags and @title decorator logic. 
You probably want to use those - * decorators, though you may add this class to a node definition to specify the title and tags. + * This is used internally by the @invocation decorator logic. Do not use this directly. */ UIConfigBase: { /** * Tags - * @description The tags to display in the UI + * @description The node's tags */ tags?: string[]; /** * Title - * @description The display name of the node + * @description The node's display name */ title?: string; + /** + * Category + * @description The node's category + */ + category?: string; }; /** * Input @@ -6974,23 +6984,17 @@ export type components = { ui_order?: number; }; /** - * StableDiffusion2ModelFormat + * StableDiffusion1ModelFormat * @description An enumeration. * @enum {string} */ - StableDiffusion2ModelFormat: "checkpoint" | "diffusers"; + StableDiffusion1ModelFormat: "checkpoint" | "diffusers"; /** * ControlNetModelFormat * @description An enumeration. * @enum {string} */ ControlNetModelFormat: "checkpoint" | "diffusers"; - /** - * StableDiffusion1ModelFormat - * @description An enumeration. - * @enum {string} - */ - StableDiffusion1ModelFormat: "checkpoint" | "diffusers"; /** * StableDiffusionOnnxModelFormat * @description An enumeration. @@ -7003,6 +7007,12 @@ export type components = { * @enum {string} */ StableDiffusionXLModelFormat: "checkpoint" | "diffusers"; + /** + * StableDiffusion2ModelFormat + * @description An enumeration. 
+ * @enum {string} + */ + StableDiffusion2ModelFormat: "checkpoint" | "diffusers"; }; responses: never; parameters: never; @@ -7119,7 +7129,7 @@ export type operations = { }; requestBody: { content: { - "application/json": components["schemas"]["BooleanInvocation"] | components["schemas"]["BooleanCollectionInvocation"] | components["schemas"]["IntegerInvocation"] | components["schemas"]["IntegerCollectionInvocation"] | components["schemas"]["FloatInvocation"] | components["schemas"]["FloatCollectionInvocation"] | components["schemas"]["StringInvocation"] | components["schemas"]["StringCollectionInvocation"] | components["schemas"]["ImageInvocation"] | components["schemas"]["ImageCollectionInvocation"] | components["schemas"]["LatentsInvocation"] | components["schemas"]["LatentsCollectionInvocation"] | components["schemas"]["ColorInvocation"] | components["schemas"]["ConditioningInvocation"] | components["schemas"]["ConditioningCollectionInvocation"] | components["schemas"]["ControlNetInvocation"] | components["schemas"]["ImageProcessorInvocation"] | components["schemas"]["MainModelLoaderInvocation"] | components["schemas"]["LoraLoaderInvocation"] | components["schemas"]["SDXLLoraLoaderInvocation"] | components["schemas"]["VaeLoaderInvocation"] | components["schemas"]["SeamlessModeInvocation"] | components["schemas"]["MetadataAccumulatorInvocation"] | components["schemas"]["RangeInvocation"] | components["schemas"]["RangeOfSizeInvocation"] | components["schemas"]["RandomRangeInvocation"] | components["schemas"]["CompelInvocation"] | components["schemas"]["SDXLCompelPromptInvocation"] | components["schemas"]["SDXLRefinerCompelPromptInvocation"] | components["schemas"]["ClipSkipInvocation"] | components["schemas"]["CvInpaintInvocation"] | components["schemas"]["ShowImageInvocation"] | components["schemas"]["BlankImageInvocation"] | components["schemas"]["ImageCropInvocation"] | components["schemas"]["ImagePasteInvocation"] | 
components["schemas"]["MaskFromAlphaInvocation"] | components["schemas"]["ImageMultiplyInvocation"] | components["schemas"]["ImageChannelInvocation"] | components["schemas"]["ImageConvertInvocation"] | components["schemas"]["ImageBlurInvocation"] | components["schemas"]["ImageResizeInvocation"] | components["schemas"]["ImageScaleInvocation"] | components["schemas"]["ImageLerpInvocation"] | components["schemas"]["ImageInverseLerpInvocation"] | components["schemas"]["ImageNSFWBlurInvocation"] | components["schemas"]["ImageWatermarkInvocation"] | components["schemas"]["MaskEdgeInvocation"] | components["schemas"]["MaskCombineInvocation"] | components["schemas"]["ColorCorrectInvocation"] | components["schemas"]["ImageHueAdjustmentInvocation"] | components["schemas"]["ImageLuminosityAdjustmentInvocation"] | components["schemas"]["ImageSaturationAdjustmentInvocation"] | components["schemas"]["InfillColorInvocation"] | components["schemas"]["InfillTileInvocation"] | components["schemas"]["InfillPatchMatchInvocation"] | components["schemas"]["LaMaInfillInvocation"] | components["schemas"]["CreateDenoiseMaskInvocation"] | components["schemas"]["DenoiseLatentsInvocation"] | components["schemas"]["LatentsToImageInvocation"] | components["schemas"]["ResizeLatentsInvocation"] | components["schemas"]["ScaleLatentsInvocation"] | components["schemas"]["ImageToLatentsInvocation"] | components["schemas"]["BlendLatentsInvocation"] | components["schemas"]["AddInvocation"] | components["schemas"]["SubtractInvocation"] | components["schemas"]["MultiplyInvocation"] | components["schemas"]["DivideInvocation"] | components["schemas"]["RandomIntInvocation"] | components["schemas"]["NoiseInvocation"] | components["schemas"]["ONNXPromptInvocation"] | components["schemas"]["ONNXTextToLatentsInvocation"] | components["schemas"]["ONNXLatentsToImageInvocation"] | components["schemas"]["OnnxModelLoaderInvocation"] | components["schemas"]["FloatLinearRangeInvocation"] | 
components["schemas"]["StepParamEasingInvocation"] | components["schemas"]["DynamicPromptInvocation"] | components["schemas"]["PromptsFromFileInvocation"] | components["schemas"]["SDXLModelLoaderInvocation"] | components["schemas"]["SDXLRefinerModelLoaderInvocation"] | components["schemas"]["ESRGANInvocation"] | components["schemas"]["GraphInvocation"] | components["schemas"]["IterateInvocation"] | components["schemas"]["CollectInvocation"] | components["schemas"]["CannyImageProcessorInvocation"] | components["schemas"]["HedImageProcessorInvocation"] | components["schemas"]["LineartImageProcessorInvocation"] | components["schemas"]["LineartAnimeImageProcessorInvocation"] | components["schemas"]["OpenposeImageProcessorInvocation"] | components["schemas"]["MidasDepthImageProcessorInvocation"] | components["schemas"]["NormalbaeImageProcessorInvocation"] | components["schemas"]["MlsdImageProcessorInvocation"] | components["schemas"]["PidiImageProcessorInvocation"] | components["schemas"]["ContentShuffleImageProcessorInvocation"] | components["schemas"]["ZoeDepthImageProcessorInvocation"] | components["schemas"]["MediapipeFaceProcessorInvocation"] | components["schemas"]["LeresImageProcessorInvocation"] | components["schemas"]["TileResamplerProcessorInvocation"] | components["schemas"]["SegmentAnythingProcessorInvocation"]; + "application/json": components["schemas"]["BooleanInvocation"] | components["schemas"]["BooleanCollectionInvocation"] | components["schemas"]["IntegerInvocation"] | components["schemas"]["IntegerCollectionInvocation"] | components["schemas"]["FloatInvocation"] | components["schemas"]["FloatCollectionInvocation"] | components["schemas"]["StringInvocation"] | components["schemas"]["StringCollectionInvocation"] | components["schemas"]["ImageInvocation"] | components["schemas"]["ImageCollectionInvocation"] | components["schemas"]["LatentsInvocation"] | components["schemas"]["LatentsCollectionInvocation"] | components["schemas"]["ColorInvocation"] | 
components["schemas"]["ConditioningInvocation"] | components["schemas"]["ConditioningCollectionInvocation"] | components["schemas"]["ControlNetInvocation"] | components["schemas"]["ImageProcessorInvocation"] | components["schemas"]["MainModelLoaderInvocation"] | components["schemas"]["LoraLoaderInvocation"] | components["schemas"]["SDXLLoraLoaderInvocation"] | components["schemas"]["VaeLoaderInvocation"] | components["schemas"]["SeamlessModeInvocation"] | components["schemas"]["MetadataAccumulatorInvocation"] | components["schemas"]["SDXLModelLoaderInvocation"] | components["schemas"]["SDXLRefinerModelLoaderInvocation"] | components["schemas"]["CompelInvocation"] | components["schemas"]["SDXLCompelPromptInvocation"] | components["schemas"]["SDXLRefinerCompelPromptInvocation"] | components["schemas"]["ClipSkipInvocation"] | components["schemas"]["CreateDenoiseMaskInvocation"] | components["schemas"]["DenoiseLatentsInvocation"] | components["schemas"]["LatentsToImageInvocation"] | components["schemas"]["ResizeLatentsInvocation"] | components["schemas"]["ScaleLatentsInvocation"] | components["schemas"]["ImageToLatentsInvocation"] | components["schemas"]["BlendLatentsInvocation"] | components["schemas"]["ONNXPromptInvocation"] | components["schemas"]["ONNXTextToLatentsInvocation"] | components["schemas"]["ONNXLatentsToImageInvocation"] | components["schemas"]["OnnxModelLoaderInvocation"] | components["schemas"]["ShowImageInvocation"] | components["schemas"]["BlankImageInvocation"] | components["schemas"]["ImageCropInvocation"] | components["schemas"]["ImagePasteInvocation"] | components["schemas"]["MaskFromAlphaInvocation"] | components["schemas"]["ImageMultiplyInvocation"] | components["schemas"]["ImageChannelInvocation"] | components["schemas"]["ImageConvertInvocation"] | components["schemas"]["ImageBlurInvocation"] | components["schemas"]["ImageResizeInvocation"] | components["schemas"]["ImageScaleInvocation"] | components["schemas"]["ImageLerpInvocation"] | 
components["schemas"]["ImageInverseLerpInvocation"] | components["schemas"]["ImageNSFWBlurInvocation"] | components["schemas"]["ImageWatermarkInvocation"] | components["schemas"]["MaskEdgeInvocation"] | components["schemas"]["MaskCombineInvocation"] | components["schemas"]["ColorCorrectInvocation"] | components["schemas"]["ImageHueAdjustmentInvocation"] | components["schemas"]["ImageLuminosityAdjustmentInvocation"] | components["schemas"]["ImageSaturationAdjustmentInvocation"] | components["schemas"]["DynamicPromptInvocation"] | components["schemas"]["PromptsFromFileInvocation"] | components["schemas"]["CvInpaintInvocation"] | components["schemas"]["FloatLinearRangeInvocation"] | components["schemas"]["StepParamEasingInvocation"] | components["schemas"]["AddInvocation"] | components["schemas"]["SubtractInvocation"] | components["schemas"]["MultiplyInvocation"] | components["schemas"]["DivideInvocation"] | components["schemas"]["RandomIntInvocation"] | components["schemas"]["NoiseInvocation"] | components["schemas"]["RangeInvocation"] | components["schemas"]["RangeOfSizeInvocation"] | components["schemas"]["RandomRangeInvocation"] | components["schemas"]["ESRGANInvocation"] | components["schemas"]["InfillColorInvocation"] | components["schemas"]["InfillTileInvocation"] | components["schemas"]["InfillPatchMatchInvocation"] | components["schemas"]["LaMaInfillInvocation"] | components["schemas"]["GraphInvocation"] | components["schemas"]["IterateInvocation"] | components["schemas"]["CollectInvocation"] | components["schemas"]["CannyImageProcessorInvocation"] | components["schemas"]["HedImageProcessorInvocation"] | components["schemas"]["LineartImageProcessorInvocation"] | components["schemas"]["LineartAnimeImageProcessorInvocation"] | components["schemas"]["OpenposeImageProcessorInvocation"] | components["schemas"]["MidasDepthImageProcessorInvocation"] | components["schemas"]["NormalbaeImageProcessorInvocation"] | components["schemas"]["MlsdImageProcessorInvocation"] | 
components["schemas"]["PidiImageProcessorInvocation"] | components["schemas"]["ContentShuffleImageProcessorInvocation"] | components["schemas"]["ZoeDepthImageProcessorInvocation"] | components["schemas"]["MediapipeFaceProcessorInvocation"] | components["schemas"]["LeresImageProcessorInvocation"] | components["schemas"]["TileResamplerProcessorInvocation"] | components["schemas"]["SegmentAnythingProcessorInvocation"]; }; }; responses: { @@ -7160,7 +7170,7 @@ export type operations = { }; requestBody: { content: { - "application/json": components["schemas"]["BooleanInvocation"] | components["schemas"]["BooleanCollectionInvocation"] | components["schemas"]["IntegerInvocation"] | components["schemas"]["IntegerCollectionInvocation"] | components["schemas"]["FloatInvocation"] | components["schemas"]["FloatCollectionInvocation"] | components["schemas"]["StringInvocation"] | components["schemas"]["StringCollectionInvocation"] | components["schemas"]["ImageInvocation"] | components["schemas"]["ImageCollectionInvocation"] | components["schemas"]["LatentsInvocation"] | components["schemas"]["LatentsCollectionInvocation"] | components["schemas"]["ColorInvocation"] | components["schemas"]["ConditioningInvocation"] | components["schemas"]["ConditioningCollectionInvocation"] | components["schemas"]["ControlNetInvocation"] | components["schemas"]["ImageProcessorInvocation"] | components["schemas"]["MainModelLoaderInvocation"] | components["schemas"]["LoraLoaderInvocation"] | components["schemas"]["SDXLLoraLoaderInvocation"] | components["schemas"]["VaeLoaderInvocation"] | components["schemas"]["SeamlessModeInvocation"] | components["schemas"]["MetadataAccumulatorInvocation"] | components["schemas"]["RangeInvocation"] | components["schemas"]["RangeOfSizeInvocation"] | components["schemas"]["RandomRangeInvocation"] | components["schemas"]["CompelInvocation"] | components["schemas"]["SDXLCompelPromptInvocation"] | components["schemas"]["SDXLRefinerCompelPromptInvocation"] | 
components["schemas"]["ClipSkipInvocation"] | components["schemas"]["CvInpaintInvocation"] | components["schemas"]["ShowImageInvocation"] | components["schemas"]["BlankImageInvocation"] | components["schemas"]["ImageCropInvocation"] | components["schemas"]["ImagePasteInvocation"] | components["schemas"]["MaskFromAlphaInvocation"] | components["schemas"]["ImageMultiplyInvocation"] | components["schemas"]["ImageChannelInvocation"] | components["schemas"]["ImageConvertInvocation"] | components["schemas"]["ImageBlurInvocation"] | components["schemas"]["ImageResizeInvocation"] | components["schemas"]["ImageScaleInvocation"] | components["schemas"]["ImageLerpInvocation"] | components["schemas"]["ImageInverseLerpInvocation"] | components["schemas"]["ImageNSFWBlurInvocation"] | components["schemas"]["ImageWatermarkInvocation"] | components["schemas"]["MaskEdgeInvocation"] | components["schemas"]["MaskCombineInvocation"] | components["schemas"]["ColorCorrectInvocation"] | components["schemas"]["ImageHueAdjustmentInvocation"] | components["schemas"]["ImageLuminosityAdjustmentInvocation"] | components["schemas"]["ImageSaturationAdjustmentInvocation"] | components["schemas"]["InfillColorInvocation"] | components["schemas"]["InfillTileInvocation"] | components["schemas"]["InfillPatchMatchInvocation"] | components["schemas"]["LaMaInfillInvocation"] | components["schemas"]["CreateDenoiseMaskInvocation"] | components["schemas"]["DenoiseLatentsInvocation"] | components["schemas"]["LatentsToImageInvocation"] | components["schemas"]["ResizeLatentsInvocation"] | components["schemas"]["ScaleLatentsInvocation"] | components["schemas"]["ImageToLatentsInvocation"] | components["schemas"]["BlendLatentsInvocation"] | components["schemas"]["AddInvocation"] | components["schemas"]["SubtractInvocation"] | components["schemas"]["MultiplyInvocation"] | components["schemas"]["DivideInvocation"] | components["schemas"]["RandomIntInvocation"] | components["schemas"]["NoiseInvocation"] | 
components["schemas"]["ONNXPromptInvocation"] | components["schemas"]["ONNXTextToLatentsInvocation"] | components["schemas"]["ONNXLatentsToImageInvocation"] | components["schemas"]["OnnxModelLoaderInvocation"] | components["schemas"]["FloatLinearRangeInvocation"] | components["schemas"]["StepParamEasingInvocation"] | components["schemas"]["DynamicPromptInvocation"] | components["schemas"]["PromptsFromFileInvocation"] | components["schemas"]["SDXLModelLoaderInvocation"] | components["schemas"]["SDXLRefinerModelLoaderInvocation"] | components["schemas"]["ESRGANInvocation"] | components["schemas"]["GraphInvocation"] | components["schemas"]["IterateInvocation"] | components["schemas"]["CollectInvocation"] | components["schemas"]["CannyImageProcessorInvocation"] | components["schemas"]["HedImageProcessorInvocation"] | components["schemas"]["LineartImageProcessorInvocation"] | components["schemas"]["LineartAnimeImageProcessorInvocation"] | components["schemas"]["OpenposeImageProcessorInvocation"] | components["schemas"]["MidasDepthImageProcessorInvocation"] | components["schemas"]["NormalbaeImageProcessorInvocation"] | components["schemas"]["MlsdImageProcessorInvocation"] | components["schemas"]["PidiImageProcessorInvocation"] | components["schemas"]["ContentShuffleImageProcessorInvocation"] | components["schemas"]["ZoeDepthImageProcessorInvocation"] | components["schemas"]["MediapipeFaceProcessorInvocation"] | components["schemas"]["LeresImageProcessorInvocation"] | components["schemas"]["TileResamplerProcessorInvocation"] | components["schemas"]["SegmentAnythingProcessorInvocation"]; + "application/json": components["schemas"]["BooleanInvocation"] | components["schemas"]["BooleanCollectionInvocation"] | components["schemas"]["IntegerInvocation"] | components["schemas"]["IntegerCollectionInvocation"] | components["schemas"]["FloatInvocation"] | components["schemas"]["FloatCollectionInvocation"] | components["schemas"]["StringInvocation"] | 
components["schemas"]["StringCollectionInvocation"] | components["schemas"]["ImageInvocation"] | components["schemas"]["ImageCollectionInvocation"] | components["schemas"]["LatentsInvocation"] | components["schemas"]["LatentsCollectionInvocation"] | components["schemas"]["ColorInvocation"] | components["schemas"]["ConditioningInvocation"] | components["schemas"]["ConditioningCollectionInvocation"] | components["schemas"]["ControlNetInvocation"] | components["schemas"]["ImageProcessorInvocation"] | components["schemas"]["MainModelLoaderInvocation"] | components["schemas"]["LoraLoaderInvocation"] | components["schemas"]["SDXLLoraLoaderInvocation"] | components["schemas"]["VaeLoaderInvocation"] | components["schemas"]["SeamlessModeInvocation"] | components["schemas"]["MetadataAccumulatorInvocation"] | components["schemas"]["SDXLModelLoaderInvocation"] | components["schemas"]["SDXLRefinerModelLoaderInvocation"] | components["schemas"]["CompelInvocation"] | components["schemas"]["SDXLCompelPromptInvocation"] | components["schemas"]["SDXLRefinerCompelPromptInvocation"] | components["schemas"]["ClipSkipInvocation"] | components["schemas"]["CreateDenoiseMaskInvocation"] | components["schemas"]["DenoiseLatentsInvocation"] | components["schemas"]["LatentsToImageInvocation"] | components["schemas"]["ResizeLatentsInvocation"] | components["schemas"]["ScaleLatentsInvocation"] | components["schemas"]["ImageToLatentsInvocation"] | components["schemas"]["BlendLatentsInvocation"] | components["schemas"]["ONNXPromptInvocation"] | components["schemas"]["ONNXTextToLatentsInvocation"] | components["schemas"]["ONNXLatentsToImageInvocation"] | components["schemas"]["OnnxModelLoaderInvocation"] | components["schemas"]["ShowImageInvocation"] | components["schemas"]["BlankImageInvocation"] | components["schemas"]["ImageCropInvocation"] | components["schemas"]["ImagePasteInvocation"] | components["schemas"]["MaskFromAlphaInvocation"] | components["schemas"]["ImageMultiplyInvocation"] | 
components["schemas"]["ImageChannelInvocation"] | components["schemas"]["ImageConvertInvocation"] | components["schemas"]["ImageBlurInvocation"] | components["schemas"]["ImageResizeInvocation"] | components["schemas"]["ImageScaleInvocation"] | components["schemas"]["ImageLerpInvocation"] | components["schemas"]["ImageInverseLerpInvocation"] | components["schemas"]["ImageNSFWBlurInvocation"] | components["schemas"]["ImageWatermarkInvocation"] | components["schemas"]["MaskEdgeInvocation"] | components["schemas"]["MaskCombineInvocation"] | components["schemas"]["ColorCorrectInvocation"] | components["schemas"]["ImageHueAdjustmentInvocation"] | components["schemas"]["ImageLuminosityAdjustmentInvocation"] | components["schemas"]["ImageSaturationAdjustmentInvocation"] | components["schemas"]["DynamicPromptInvocation"] | components["schemas"]["PromptsFromFileInvocation"] | components["schemas"]["CvInpaintInvocation"] | components["schemas"]["FloatLinearRangeInvocation"] | components["schemas"]["StepParamEasingInvocation"] | components["schemas"]["AddInvocation"] | components["schemas"]["SubtractInvocation"] | components["schemas"]["MultiplyInvocation"] | components["schemas"]["DivideInvocation"] | components["schemas"]["RandomIntInvocation"] | components["schemas"]["NoiseInvocation"] | components["schemas"]["RangeInvocation"] | components["schemas"]["RangeOfSizeInvocation"] | components["schemas"]["RandomRangeInvocation"] | components["schemas"]["ESRGANInvocation"] | components["schemas"]["InfillColorInvocation"] | components["schemas"]["InfillTileInvocation"] | components["schemas"]["InfillPatchMatchInvocation"] | components["schemas"]["LaMaInfillInvocation"] | components["schemas"]["GraphInvocation"] | components["schemas"]["IterateInvocation"] | components["schemas"]["CollectInvocation"] | components["schemas"]["CannyImageProcessorInvocation"] | components["schemas"]["HedImageProcessorInvocation"] | components["schemas"]["LineartImageProcessorInvocation"] | 
components["schemas"]["LineartAnimeImageProcessorInvocation"] | components["schemas"]["OpenposeImageProcessorInvocation"] | components["schemas"]["MidasDepthImageProcessorInvocation"] | components["schemas"]["NormalbaeImageProcessorInvocation"] | components["schemas"]["MlsdImageProcessorInvocation"] | components["schemas"]["PidiImageProcessorInvocation"] | components["schemas"]["ContentShuffleImageProcessorInvocation"] | components["schemas"]["ZoeDepthImageProcessorInvocation"] | components["schemas"]["MediapipeFaceProcessorInvocation"] | components["schemas"]["LeresImageProcessorInvocation"] | components["schemas"]["TileResamplerProcessorInvocation"] | components["schemas"]["SegmentAnythingProcessorInvocation"]; }; }; responses: { diff --git a/tests/nodes/test_nodes.py b/tests/nodes/test_nodes.py index 92faa6a0ba..26b8b2b565 100644 --- a/tests/nodes/test_nodes.py +++ b/tests/nodes/test_nodes.py @@ -1,65 +1,63 @@ -from typing import Any, Callable, Literal, Union +from typing import Any, Callable, Union from pydantic import Field -from invokeai.app.invocations.baseinvocation import BaseInvocation, BaseInvocationOutput, InvocationContext +from invokeai.app.invocations.baseinvocation import ( + BaseInvocation, + BaseInvocationOutput, + InvocationContext, + invocation, + invocation_output, +) from invokeai.app.invocations.image import ImageField # Define test invocations before importing anything that uses invocations +@invocation_output("test_list_output") class ListPassThroughInvocationOutput(BaseInvocationOutput): - type: Literal["test_list_output"] = "test_list_output" - collection: list[ImageField] = Field(default_factory=list) +@invocation("test_list") class ListPassThroughInvocation(BaseInvocation): - type: Literal["test_list"] = "test_list" - collection: list[ImageField] = Field(default_factory=list) def invoke(self, context: InvocationContext) -> ListPassThroughInvocationOutput: return ListPassThroughInvocationOutput(collection=self.collection) 
+@invocation_output("test_prompt_output") class PromptTestInvocationOutput(BaseInvocationOutput): - type: Literal["test_prompt_output"] = "test_prompt_output" - prompt: str = Field(default="") +@invocation("test_prompt") class PromptTestInvocation(BaseInvocation): - type: Literal["test_prompt"] = "test_prompt" - prompt: str = Field(default="") def invoke(self, context: InvocationContext) -> PromptTestInvocationOutput: return PromptTestInvocationOutput(prompt=self.prompt) +@invocation("test_error") class ErrorInvocation(BaseInvocation): - type: Literal["test_error"] = "test_error" - def invoke(self, context: InvocationContext) -> PromptTestInvocationOutput: raise Exception("This invocation is supposed to fail") +@invocation_output("test_image_output") class ImageTestInvocationOutput(BaseInvocationOutput): - type: Literal["test_image_output"] = "test_image_output" - image: ImageField = Field() +@invocation("test_text_to_image") class TextToImageTestInvocation(BaseInvocation): - type: Literal["test_text_to_image"] = "test_text_to_image" - prompt: str = Field(default="") def invoke(self, context: InvocationContext) -> ImageTestInvocationOutput: return ImageTestInvocationOutput(image=ImageField(image_name=self.id)) +@invocation("test_image_to_image") class ImageToImageTestInvocation(BaseInvocation): - type: Literal["test_image_to_image"] = "test_image_to_image" - prompt: str = Field(default="") image: Union[ImageField, None] = Field(default=None) @@ -67,13 +65,13 @@ class ImageToImageTestInvocation(BaseInvocation): return ImageTestInvocationOutput(image=ImageField(image_name=self.id)) +@invocation_output("test_prompt_collection_output") class PromptCollectionTestInvocationOutput(BaseInvocationOutput): - type: Literal["test_prompt_collection_output"] = "test_prompt_collection_output" collection: list[str] = Field(default_factory=list) +@invocation("test_prompt_collection") class PromptCollectionTestInvocation(BaseInvocation): - type: Literal["test_prompt_collection"] = 
"test_prompt_collection" collection: list[str] = Field() def invoke(self, context: InvocationContext) -> PromptCollectionTestInvocationOutput:
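The test-suite hunk above replaces the old `type: Literal["..."] = "..."` identifier fields with the `@invocation` and `@invocation_output` decorators, which supply the unique invocation type (and optionally title, tags, and category) instead. As a hedged illustration of the registration pattern such a decorator follows — a minimal sketch only, not InvokeAI's actual implementation, which also rewrites pydantic fields and validates uniqueness differently — the core idea can be shown in a few lines:

```python
# Minimal sketch of a class-registering decorator in the style of
# InvokeAI's @invocation. Hypothetical implementation for illustration;
# the real decorator does considerably more (pydantic integration, tags,
# categories, version handling).
from typing import Callable, Dict, Optional, Type


class BaseInvocation:
    """Stand-in base class for this sketch."""


# Registry mapping invocation type strings to their classes.
INVOCATION_REGISTRY: Dict[str, Type[BaseInvocation]] = {}


def invocation(
    invocation_type: str, title: Optional[str] = None
) -> Callable[[Type[BaseInvocation]], Type[BaseInvocation]]:
    """Attach a unique type identifier to a class and register it."""

    def wrapper(cls: Type[BaseInvocation]) -> Type[BaseInvocation]:
        # Enforce uniqueness of the invocation type across the registry.
        if invocation_type in INVOCATION_REGISTRY:
            raise ValueError(f"Duplicate invocation type: {invocation_type}")
        cls.invocation_type = invocation_type
        cls.title = title or cls.__name__
        INVOCATION_REGISTRY[invocation_type] = cls
        return cls

    return wrapper


@invocation("test_prompt", title="Prompt Test")
class PromptTestInvocation(BaseInvocation):
    """Passes a prompt through unchanged."""
```

This mirrors why the test file no longer declares `type: Literal["test_prompt"] = "test_prompt"` on each class: the decorator centralizes the identifier, so the class body only carries its actual fields and `invoke` method.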