feat(nodes): add invocation cache

The invocation cache provides simple node memoization functionality. Nodes that use the cache are memoized and not re-executed if their inputs haven't changed. Instead, the stored output is returned.

## Results

This feature provides anywhere from a significant to a massive performance improvement.

The improvement is most marked on large batches of generations where only a couple of inputs change between iterations (e.g. a different seed or prompt each time), and on low-VRAM systems, where skipping an extraneous model load is a big deal.

## Overview

A new `invocation_cache` service is added to handle the caching. There's not much to it.

All nodes now inherit a boolean `use_cache` field from `BaseInvocation`. This is a node field and not a class attribute, because specific instances of nodes may want to opt in or out of caching.

The recently-added `invoke_internal()` method on `BaseInvocation` is used as an entrypoint for the cache logic.

To create a cache key, the invocation is first serialized using pydantic's `json()` method, skipping the unique `id` field. Then Python's fast builtin `hash()` is used to create an integer key. All implementations of `InvocationCacheBase` must provide a class method `create_key()` which accepts an invocation and outputs a string or integer key.
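The key derivation can be sketched roughly like this. It's a simplified model using a plain dict in place of the real pydantic invocation; the actual implementation just does `hash(invocation.json(exclude={"id"}))`:

```python
import json

def create_key(invocation_fields: dict) -> int:
    """Derive a cache key from an invocation's fields, skipping the unique `id`.

    Simplified sketch: the real implementation serializes the pydantic model
    with `invocation.json(exclude={"id"})` and hashes the resulting string.
    """
    serialized = json.dumps(
        {k: v for k, v in invocation_fields.items() if k != "id"},
        sort_keys=True,
    )
    return hash(serialized)

# Two node instances that differ only by `id` share a key...
a = create_key({"id": "node-1", "type": "add", "a": 1, "b": 2})
b = create_key({"id": "node-2", "type": "add", "a": 1, "b": 2})
assert a == b

# ...while changing any input produces a different key (a cache miss).
c = create_key({"id": "node-3", "type": "add", "a": 1, "b": 3})
assert a != c
```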

## In-Memory Implementation

An in-memory implementation is provided. In this implementation, the node outputs are stored in memory as Python objects. The in-memory cache does not persist across application restarts.

Max node cache size is added as `node_cache_size` under the `Generation` config category.

It defaults to 512. This number is up for discussion, but given that these are relatively lightweight pydantic models, I think it's safe to raise it even higher.

Note that the cache isn't storing the big stuff: tensors and images are stored on disk, and outputs include only references to them.
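Stripped to its essentials, the in-memory cache is just a dict guarded by a FIFO queue of keys for eviction. This is a simplified sketch of the `MemoryInvocationCache` added in this commit, with illustrative names and `Any` in place of the real output types:

```python
from queue import Queue
from typing import Any, Optional, Union

class FifoCache:
    """Simplified sketch of the in-memory invocation cache: outputs live in a
    dict, and a FIFO queue of keys evicts the oldest entry once full."""

    def __init__(self, max_cache_size: int = 512) -> None:
        self._cache: dict = {}
        self._cache_ids: Queue = Queue()
        self._max_cache_size = max_cache_size

    def get(self, key: Union[int, str]) -> Optional[Any]:
        return self._cache.get(key)

    def save(self, key: Union[int, str], value: Any) -> None:
        if key not in self._cache:
            self._cache[key] = value
            self._cache_ids.put(key)
            if self._cache_ids.qsize() > self._max_cache_size:
                # Evict the oldest key; its output is dropped from memory.
                self._cache.pop(self._cache_ids.get())

cache = FifoCache(max_cache_size=2)
cache.save("a", 1)
cache.save("b", 2)
cache.save("c", 3)  # "a" is evicted (FIFO)
assert cache.get("a") is None
assert cache.get("c") == 3
```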

## Node Definition

The default for all nodes is to use the cache. The `@invocation` decorator now accepts an optional `use_cache: bool` argument to override the default of `True`.

Non-deterministic nodes, however, should set this to `False`. Currently, all random-stuff nodes, including `dynamic_prompt`, are set to `False`.

The field name `use_cache` is now effectively a reserved field name and possibly a breaking change if any community nodes use this as a field name. In hindsight, all our reserved field names should have been prefixed with underscores or something.

## One Gotcha

Leaf nodes probably want to opt out of the cache, because if they are cached, their outputs are not saved again.

If you run the same graph multiple times, you only end up with a single image output, because the image storage side-effects are in the `invoke()` method, which is bypassed if we have a cache hit.
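A toy model of the gotcha (purely illustrative names; the real side effect is the image-saving call inside each node's `invoke()`):

```python
# `invoke()` both computes the output and performs a side effect (saving the
# image); on a cache hit we return the stored output and `invoke()` never runs.
saved_images = []
cache: dict = {}

def invoke(key: str) -> str:
    output = f"image-for-{key}"
    saved_images.append(output)  # side effect: persist the image
    return output

def invoke_internal(key: str, use_cache: bool = True) -> str:
    if use_cache and key in cache:
        return cache[key]  # hit: the save side effect above is skipped
    output = invoke(key)
    if use_cache:
        cache[key] = output
    return output

invoke_internal("sunset")
invoke_internal("sunset")  # cache hit: no second image is saved
assert len(saved_images) == 1
```

With `use_cache=False` on the leaf node, the second call would run `invoke()` again and a second image would be saved, which is the behaviour users expect from the final node of a graph.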

## Linear UI

The linear graphs _almost_ just work, but due to the gotcha, we need to be careful about the final image-outputting node. To resolve this, a `SaveImageInvocation` node is added and used in the linear graphs.

This node is similar to `ImagePrimitive`, except it saves a copy of its input image, and has `use_cache` set to `False` by default.

This is now the leaf node in all linear graphs, and is the only node in those graphs with `use_cache == False` _and_ the only node with `is_intermediate == False`.

## Workflow Editor

All nodes now have a footer with a new `Use Cache [ ]` checkbox. It defaults to the value set by the invocation in its python definition, but can be changed by the user.

The workflow/node validation logic has been updated to migrate old workflows to use the new default values for `use_cache`. Users may still want to review the settings that have been chosen. In the event of catastrophic failure when running this migration, the default value of `True` is applied, as this is correct for most nodes.

Users should consider saving their workflows after loading them in and having them updated.

## Future Enhancements - Callback

A future enhancement would be to allow `use_cache` to accept a callback, run as the node is executed, that determines from the node's own internal state whether the cache should be used.

This would be useful for `DynamicPromptInvocation`, whose behaviour is deterministic or not depending on its `combinatorial: bool` field.
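A hypothetical sketch of that enhancement (not implemented in this commit; the function and class names are invented for illustration):

```python
from typing import Callable, Union

def should_use_cache(node, use_cache: Union[bool, Callable[[object], bool]]) -> bool:
    """Resolve a `use_cache` value that may be a plain bool or a callback
    evaluated against the node instance at execution time."""
    if callable(use_cache):
        return use_cache(node)
    return use_cache

class FakeDynamicPrompt:
    combinatorial = True  # combinatorial expansion is deterministic

node = FakeDynamicPrompt()
# Cache only when the node is operating deterministically.
assert should_use_cache(node, lambda n: n.combinatorial) is True
node.combinatorial = False  # random sampling: skip the cache
assert should_use_cache(node, lambda n: n.combinatorial) is False
```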

## Future Enhancements - Persisted Cache

Similar to how the latents storage is backed by disk, the invocation cache could be persisted to the database or disk. We'd need to be very careful about deserializing outputs, but it's perhaps worth exploring in the future.
This commit is contained in:
psychedelicious 2023-09-18 13:41:19 +10:00
parent d94d4ef83f
commit 593604bbba
45 changed files with 1600 additions and 266 deletions


@ -9,6 +9,7 @@ from invokeai.app.services.boards import BoardService, BoardServiceDependencies
from invokeai.app.services.config import InvokeAIAppConfig
from invokeai.app.services.image_record_storage import SqliteImageRecordStorage
from invokeai.app.services.images import ImageService, ImageServiceDependencies
from invokeai.app.services.invocation_cache.invocation_cache_memory import MemoryInvocationCache
from invokeai.app.services.resource_name import SimpleNameService
from invokeai.app.services.urls import LocalUrlService
from invokeai.backend.util.logging import InvokeAILogger
@ -126,6 +127,7 @@ class ApiDependencies:
configuration=config,
performance_statistics=InvocationStatsService(graph_execution_manager),
logger=logger,
invocation_cache=MemoryInvocationCache(max_cache_size=config.node_cache_size),
)
create_system_graphs(services.graph_library)


@ -1,5 +1,7 @@
# Copyright (c) 2022 Kyle Schouviller (https://github.com/kyle0654)
from invokeai.app.services.invocation_cache.invocation_cache_memory import MemoryInvocationCache
from .services.config import InvokeAIAppConfig
# parse_args() must be called before any other imports. if it is not called first, consumers of the config
@ -309,6 +311,7 @@ def invoke_cli():
performance_statistics=InvocationStatsService(graph_execution_manager),
logger=logger,
configuration=config,
invocation_cache=MemoryInvocationCache(max_cache_size=config.node_cache_size),
)
system_graphs = create_system_graphs(services.graph_library)


@ -568,7 +568,24 @@ class BaseInvocation(ABC, BaseModel):
raise RequiredConnectionException(self.__fields__["type"].default, field_name)
elif _input == Input.Any:
raise MissingInputException(self.__fields__["type"].default, field_name)
return self.invoke(context)
output: BaseInvocationOutput
if self.use_cache:
key = context.services.invocation_cache.create_key(self)
cached_value = context.services.invocation_cache.get(key)
if cached_value is None:
context.services.logger.debug(f'Invocation cache miss for type "{self.get_type()}": {self.id}')
output = self.invoke(context)
context.services.invocation_cache.save(key, output)
return output
else:
context.services.logger.debug(f'Invocation cache hit for type "{self.get_type()}": {self.id}')
return cached_value
else:
context.services.logger.debug(f'Skipping invocation cache for "{self.get_type()}": {self.id}')
return self.invoke(context)
def get_type(self) -> str:
return self.__fields__["type"].default
id: str = Field(
description="The id of this instance of an invocation. Must be unique among all instances of invocations."
@ -581,6 +598,7 @@ class BaseInvocation(ABC, BaseModel):
description="The workflow to save with the image",
ui_type=UIType.WorkflowField,
)
use_cache: bool = InputField(default=True, description="Whether or not to use the cache")
@validator("workflow", pre=True)
def validate_workflow_is_json(cls, v):
@ -604,6 +622,7 @@ def invocation(
tags: Optional[list[str]] = None,
category: Optional[str] = None,
version: Optional[str] = None,
use_cache: Optional[bool] = True,
) -> Callable[[Type[GenericBaseInvocation]], Type[GenericBaseInvocation]]:
"""
Adds metadata to an invocation.
@ -636,6 +655,8 @@ def invocation(
except ValueError as e:
raise InvalidVersionError(f'Invalid version string for node "{invocation_type}": "{version}"') from e
cls.UIConfig.version = version
if use_cache is not None:
cls.__fields__["use_cache"].default = use_cache
# Add the invocation type to the pydantic model of the invocation
invocation_type_annotation = Literal[invocation_type] # type: ignore


@ -56,6 +56,7 @@ class RangeOfSizeInvocation(BaseInvocation):
tags=["range", "integer", "random", "collection"],
category="collections",
version="1.0.0",
use_cache=False,
)
class RandomRangeInvocation(BaseInvocation):
"""Creates a collection of random numbers"""


@ -965,3 +965,42 @@ class ImageChannelMultiplyInvocation(BaseInvocation):
width=image_dto.width,
height=image_dto.height,
)
@invocation(
"save_image",
title="Save Image",
tags=["primitives", "image"],
category="primitives",
version="1.0.0",
use_cache=False,
)
class SaveImageInvocation(BaseInvocation):
"""Saves an image. Unlike an image primitive, this invocation stores a copy of the image."""
image: ImageField = InputField(description="The image to load")
metadata: CoreMetadata = InputField(
default=None,
description=FieldDescriptions.core_metadata,
ui_hidden=True,
)
def invoke(self, context: InvocationContext) -> ImageOutput:
image = context.services.images.get_pil_image(self.image.image_name)
image_dto = context.services.images.create(
image=image,
image_origin=ResourceOrigin.INTERNAL,
image_category=ImageCategory.GENERAL,
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
metadata=self.metadata.dict() if self.metadata else None,
workflow=self.workflow,
)
return ImageOutput(
image=ImageField(image_name=image_dto.image_name),
width=image_dto.width,
height=image_dto.height,
)


@ -54,7 +54,14 @@ class DivideInvocation(BaseInvocation):
return IntegerOutput(value=int(self.a / self.b))
@invocation("rand_int", title="Random Integer", tags=["math", "random"], category="math", version="1.0.0")
@invocation(
"rand_int",
title="Random Integer",
tags=["math", "random"],
category="math",
version="1.0.0",
use_cache=False,
)
class RandomIntInvocation(BaseInvocation):
"""Outputs a single random integer."""


@ -10,7 +10,14 @@ from invokeai.app.invocations.primitives import StringCollectionOutput
from .baseinvocation import BaseInvocation, InputField, InvocationContext, UIComponent, invocation
@invocation("dynamic_prompt", title="Dynamic Prompt", tags=["prompt", "collection"], category="prompt", version="1.0.0")
@invocation(
"dynamic_prompt",
title="Dynamic Prompt",
tags=["prompt", "collection"],
category="prompt",
version="1.0.0",
use_cache=False,
)
class DynamicPromptInvocation(BaseInvocation):
"""Parses a prompt using adieyal/dynamicprompts' random or combinatorial generator"""


@ -253,6 +253,7 @@ class InvokeAIAppConfig(InvokeAISettings):
attention_type : Literal[tuple(["auto", "normal", "xformers", "sliced", "torch-sdp"])] = Field(default="auto", description="Attention type", category="Generation", )
attention_slice_size: Literal[tuple(["auto", "balanced", "max", 1, 2, 3, 4, 5, 6, 7, 8])] = Field(default="auto", description='Slice size, valid when attention_type=="sliced"', category="Generation", )
force_tiled_decode: bool = Field(default=False, description="Whether to enable tiled VAE decode (reduces memory consumption with some performance penalty)", category="Generation",)
node_cache_size : int = Field(default=512, description="How many cached nodes to keep in memory", category="Generation", )
# NODES
allow_nodes : Optional[List[str]] = Field(default=None, description="List of nodes to allow. Omit to allow all.", category="Nodes")


@ -0,0 +1,29 @@
from abc import ABC, abstractmethod
from typing import Optional, Union
from invokeai.app.invocations.baseinvocation import BaseInvocation, BaseInvocationOutput
class InvocationCacheBase(ABC):
"""Base class for invocation caches."""
@abstractmethod
def get(self, key: Union[int, str]) -> Optional[BaseInvocationOutput]:
"""Retrieves and invocation output from the cache"""
pass
@abstractmethod
def save(self, key: Union[int, str], value: BaseInvocationOutput) -> None:
"""Stores an invocation output in the cache"""
pass
@abstractmethod
def delete(self, key: Union[int, str]) -> None:
"""Deleted an invocation output from the cache"""
pass
@classmethod
@abstractmethod
def create_key(cls, value: BaseInvocation) -> Union[int, str]:
"""Creates the cache key for an invocation"""
pass


@ -0,0 +1,34 @@
from queue import Queue
from typing import Optional, Union
from invokeai.app.invocations.baseinvocation import BaseInvocation, BaseInvocationOutput
from invokeai.app.services.invocation_cache.invocation_cache_base import InvocationCacheBase
class MemoryInvocationCache(InvocationCacheBase):
__cache: dict[Union[int, str], BaseInvocationOutput]
__max_cache_size: int
__cache_ids: Queue
def __init__(self, max_cache_size: int = 512) -> None:
self.__cache = dict()
self.__max_cache_size = max_cache_size
self.__cache_ids = Queue()
def get(self, key: Union[int, str]) -> Optional[BaseInvocationOutput]:
return self.__cache.get(key, None)
def save(self, key: Union[int, str], value: BaseInvocationOutput) -> None:
if key not in self.__cache:
self.__cache[key] = value
self.__cache_ids.put(key)
if self.__cache_ids.qsize() > self.__max_cache_size:
self.__cache.pop(self.__cache_ids.get())
def delete(self, key: Union[int, str]) -> None:
if key in self.__cache:
del self.__cache[key]
@classmethod
def create_key(cls, value: BaseInvocation) -> Union[int, str]:
return hash(value.json(exclude={"id"}))


@ -12,6 +12,7 @@ if TYPE_CHECKING:
from invokeai.app.services.events import EventServiceBase
from invokeai.app.services.graph import GraphExecutionState, LibraryGraph
from invokeai.app.services.images import ImageServiceABC
from invokeai.app.services.invocation_cache.invocation_cache_base import InvocationCacheBase
from invokeai.app.services.invocation_queue import InvocationQueueABC
from invokeai.app.services.invocation_stats import InvocationStatsServiceBase
from invokeai.app.services.invoker import InvocationProcessorABC
@ -37,6 +38,7 @@ class InvocationServices:
processor: "InvocationProcessorABC"
performance_statistics: "InvocationStatsServiceBase"
queue: "InvocationQueueABC"
invocation_cache: "InvocationCacheBase"
def __init__(
self,
@ -53,6 +55,7 @@ class InvocationServices:
processor: "InvocationProcessorABC",
performance_statistics: "InvocationStatsServiceBase",
queue: "InvocationQueueABC",
invocation_cache: "InvocationCacheBase",
):
self.board_images = board_images
self.boards = boards
@ -68,3 +71,4 @@ class InvocationServices:
self.processor = processor
self.performance_statistics = performance_statistics
self.queue = queue
self.invocation_cache = invocation_cache


@ -0,0 +1,5 @@
import { Store } from '@reduxjs/toolkit';
import { atom } from 'nanostores';
// eslint-disable-next-line @typescript-eslint/no-explicit-any
export const $store = atom<Store<any> | undefined>();


@ -31,6 +31,7 @@ import { actionSanitizer } from './middleware/devtools/actionSanitizer';
import { actionsDenylist } from './middleware/devtools/actionsDenylist';
import { stateSanitizer } from './middleware/devtools/stateSanitizer';
import { listenerMiddleware } from './middleware/listenerMiddleware';
import { $store } from './nanostores/store';
const allReducers = {
canvas: canvasReducer,
@ -121,3 +122,4 @@ export type RootState = ReturnType<typeof store.getState>;
export type AppThunkDispatch = ThunkDispatch<RootState, any, AnyAction>;
export type AppDispatch = typeof store.dispatch;
export const stateSelector = (state: RootState) => state;
$store.set(store);


@ -27,7 +27,7 @@ const EmbedWorkflowCheckbox = ({ nodeId }: { nodeId: string }) => {
return (
<FormControl as={Flex} sx={{ alignItems: 'center', gap: 2, w: 'auto' }}>
<FormLabel sx={{ fontSize: 'xs', mb: '1px' }}>Embed Workflow</FormLabel>
<FormLabel sx={{ fontSize: 'xs', mb: '1px' }}>Workflow</FormLabel>
<Checkbox
className="nopan"
size="sm"


@ -1,14 +1,13 @@
import { Flex, Grid, GridItem } from '@chakra-ui/react';
import { useAnyOrDirectInputFieldNames } from 'features/nodes/hooks/useAnyOrDirectInputFieldNames';
import { useConnectionInputFieldNames } from 'features/nodes/hooks/useConnectionInputFieldNames';
import { useOutputFieldNames } from 'features/nodes/hooks/useOutputFieldNames';
import { memo } from 'react';
import NodeWrapper from '../common/NodeWrapper';
import InvocationNodeFooter from './InvocationNodeFooter';
import InvocationNodeHeader from './InvocationNodeHeader';
import NodeWrapper from '../common/NodeWrapper';
import OutputField from './fields/OutputField';
import InputField from './fields/InputField';
import { useOutputFieldNames } from 'features/nodes/hooks/useOutputFieldNames';
import { useWithFooter } from 'features/nodes/hooks/useWithFooter';
import { useConnectionInputFieldNames } from 'features/nodes/hooks/useConnectionInputFieldNames';
import { useAnyOrDirectInputFieldNames } from 'features/nodes/hooks/useAnyOrDirectInputFieldNames';
import OutputField from './fields/OutputField';
type Props = {
nodeId: string;
@ -22,7 +21,6 @@ const InvocationNode = ({ nodeId, isOpen, label, type, selected }: Props) => {
const inputConnectionFieldNames = useConnectionInputFieldNames(nodeId);
const inputAnyOrDirectFieldNames = useAnyOrDirectInputFieldNames(nodeId);
const outputFieldNames = useOutputFieldNames(nodeId);
const withFooter = useWithFooter(nodeId);
return (
<NodeWrapper nodeId={nodeId} selected={selected}>
@ -43,7 +41,7 @@ const InvocationNode = ({ nodeId, isOpen, label, type, selected }: Props) => {
h: 'full',
py: 2,
gap: 1,
borderBottomRadius: withFooter ? 0 : 'base',
borderBottomRadius: 0,
}}
>
<Flex sx={{ flexDir: 'column', px: 2, w: 'full', h: 'full' }}>
@ -76,7 +74,7 @@ const InvocationNode = ({ nodeId, isOpen, label, type, selected }: Props) => {
))}
</Flex>
</Flex>
{withFooter && <InvocationNodeFooter nodeId={nodeId} />}
<InvocationNodeFooter nodeId={nodeId} />
</>
)}
</NodeWrapper>


@ -3,12 +3,15 @@ import { DRAG_HANDLE_CLASSNAME } from 'features/nodes/types/constants';
import { memo } from 'react';
import EmbedWorkflowCheckbox from './EmbedWorkflowCheckbox';
import SaveToGalleryCheckbox from './SaveToGalleryCheckbox';
import UseCacheCheckbox from './UseCacheCheckbox';
import { useHasImageOutput } from 'features/nodes/hooks/useHasImageOutput';
type Props = {
nodeId: string;
};
const InvocationNodeFooter = ({ nodeId }: Props) => {
const hasImageOutput = useHasImageOutput(nodeId);
return (
<Flex
className={DRAG_HANDLE_CLASSNAME}
@ -22,8 +25,9 @@ const InvocationNodeFooter = ({ nodeId }: Props) => {
justifyContent: 'space-between',
}}
>
<EmbedWorkflowCheckbox nodeId={nodeId} />
<SaveToGalleryCheckbox nodeId={nodeId} />
{hasImageOutput && <EmbedWorkflowCheckbox nodeId={nodeId} />}
<UseCacheCheckbox nodeId={nodeId} />
{hasImageOutput && <SaveToGalleryCheckbox nodeId={nodeId} />}
</Flex>
);
};


@ -0,0 +1,35 @@
import { Checkbox, Flex, FormControl, FormLabel } from '@chakra-ui/react';
import { useAppDispatch } from 'app/store/storeHooks';
import { useUseCache } from 'features/nodes/hooks/useUseCache';
import { nodeUseCacheChanged } from 'features/nodes/store/nodesSlice';
import { ChangeEvent, memo, useCallback } from 'react';
const UseCacheCheckbox = ({ nodeId }: { nodeId: string }) => {
const dispatch = useAppDispatch();
const useCache = useUseCache(nodeId);
const handleChange = useCallback(
(e: ChangeEvent<HTMLInputElement>) => {
dispatch(
nodeUseCacheChanged({
nodeId,
useCache: e.target.checked,
})
);
},
[dispatch, nodeId]
);
return (
<FormControl as={Flex} sx={{ alignItems: 'center', gap: 2, w: 'auto' }}>
<FormLabel sx={{ fontSize: 'xs', mb: '1px' }}>Use Cache</FormLabel>
<Checkbox
className="nopan"
size="sm"
onChange={handleChange}
isChecked={useCache}
/>
</FormControl>
);
};
export default memo(UseCacheCheckbox);


@ -146,6 +146,7 @@ export const useBuildNodeData = () => {
isIntermediate: true,
inputs,
outputs,
useCache: template.useCache,
},
};


@ -0,0 +1,29 @@
import { createSelector } from '@reduxjs/toolkit';
import { stateSelector } from 'app/store/store';
import { useAppSelector } from 'app/store/storeHooks';
import { defaultSelectorOptions } from 'app/store/util/defaultMemoizeOptions';
import { useMemo } from 'react';
import { isInvocationNode } from '../types/types';
export const useUseCache = (nodeId: string) => {
const selector = useMemo(
() =>
createSelector(
stateSelector,
({ nodes }) => {
const node = nodes.nodes.find((node) => node.id === nodeId);
if (!isInvocationNode(node)) {
return false;
}
// cast to boolean to support older workflows that didn't have useCache
// TODO: handle this better somehow
return node.data.useCache;
},
defaultSelectorOptions
),
[nodeId]
);
const useCache = useAppSelector(selector);
return useCache;
};


@ -7,7 +7,7 @@ import { useMemo } from 'react';
import { FOOTER_FIELDS } from '../types/constants';
import { isInvocationNode } from '../types/types';
export const useWithFooter = (nodeId: string) => {
export const useHasImageOutputs = (nodeId: string) => {
const selector = useMemo(
() =>
createSelector(


@ -260,6 +260,20 @@ const nodesSlice = createSlice({
}
node.data.embedWorkflow = embedWorkflow;
},
nodeUseCacheChanged: (
state,
action: PayloadAction<{ nodeId: string; useCache: boolean }>
) => {
const { nodeId, useCache } = action.payload;
const nodeIndex = state.nodes.findIndex((n) => n.id === nodeId);
const node = state.nodes?.[nodeIndex];
if (!isInvocationNode(node)) {
return;
}
node.data.useCache = useCache;
},
nodeIsIntermediateChanged: (
state,
action: PayloadAction<{ nodeId: string; isIntermediate: boolean }>
@ -904,6 +918,7 @@ export const {
nodeIsIntermediateChanged,
mouseOverNodeChanged,
nodeExclusivelySelected,
nodeUseCacheChanged,
} = nodesSlice.actions;
export default nodesSlice.reducer;


@ -1,3 +1,4 @@
import { $store } from 'app/store/nanostores/store';
import {
SchedulerParam,
zBaseModel,
@ -7,7 +8,8 @@ import {
zSDXLRefinerModel,
zScheduler,
} from 'features/parameters/types/parameterSchemas';
import { keyBy } from 'lodash-es';
import i18n from 'i18next';
import { has, keyBy } from 'lodash-es';
import { OpenAPIV3 } from 'openapi-types';
import { RgbaColor } from 'react-colorful';
import { Node } from 'reactflow';
@ -20,7 +22,6 @@ import {
import { O } from 'ts-toolbelt';
import { JsonObject } from 'type-fest';
import { z } from 'zod';
import i18n from 'i18next';
export type NonNullableGraph = O.Required<Graph, 'nodes' | 'edges'>;
@ -57,6 +58,10 @@ export type InvocationTemplate = {
* The invocation's version.
*/
version?: string;
/**
* Whether or not this node should use the cache
*/
useCache: boolean;
};
export type FieldUIConfig = {
@ -976,6 +981,9 @@ export type InvocationSchemaExtra = {
type: Omit<OpenAPIV3.SchemaObject, 'default'> & {
default: AnyInvocationType;
};
use_cache: Omit<OpenAPIV3.SchemaObject, 'default'> & {
default: boolean;
};
};
};
@ -1139,9 +1147,37 @@ export const zInvocationNodeData = z.object({
version: zSemVer.optional(),
});
export const zInvocationNodeDataV2 = z.preprocess(
(arg) => {
try {
const data = zInvocationNodeData.parse(arg);
if (!has(data, 'useCache')) {
const nodeTemplates = $store.get()?.getState().nodes.nodeTemplates as
| Record<string, InvocationTemplate>
| undefined;
const template = nodeTemplates?.[data.type];
let useCache = true;
if (template) {
useCache = template.useCache;
}
Object.assign(data, { useCache });
}
return data;
} catch {
return arg;
}
},
zInvocationNodeData.extend({
useCache: z.boolean(),
})
);
// Massage this to get better type safety while developing
export type InvocationNodeData = Omit<
z.infer<typeof zInvocationNodeData>,
z.infer<typeof zInvocationNodeDataV2>,
'type'
> & {
type: AnyInvocationType;
@ -1169,7 +1205,7 @@ const zDimension = z.number().gt(0).nullish();
export const zWorkflowInvocationNode = z.object({
id: z.string().trim().min(1),
type: z.literal('invocation'),
data: zInvocationNodeData,
data: zInvocationNodeDataV2,
width: zDimension,
height: zDimension,
position: zPosition,
@ -1231,6 +1267,8 @@ export type WorkflowWarning = {
data: JsonObject;
};
const CURRENT_WORKFLOW_VERSION = '1.0.0';
export const zWorkflow = z.object({
name: z.string().default(''),
author: z.string().default(''),
@ -1246,7 +1284,7 @@ export const zWorkflow = z.object({
.object({
version: zSemVer,
})
.default({ version: '1.0.0' }),
.default({ version: CURRENT_WORKFLOW_VERSION }),
});
export const zValidatedWorkflow = zWorkflow.transform((workflow) => {


@ -1,46 +1,32 @@
import { RootState } from 'app/store/store';
import { NonNullableGraph } from 'features/nodes/types/types';
import { activeTabNameSelector } from 'features/ui/store/uiSelectors';
import {
ImageNSFWBlurInvocation,
LatentsToImageInvocation,
MetadataAccumulatorInvocation,
} from 'services/api/types';
import {
LATENTS_TO_IMAGE,
METADATA_ACCUMULATOR,
NSFW_CHECKER,
} from './constants';
import { LATENTS_TO_IMAGE, NSFW_CHECKER } from './constants';
export const addNSFWCheckerToGraph = (
state: RootState,
graph: NonNullableGraph,
nodeIdToAddTo = LATENTS_TO_IMAGE
): void => {
const activeTabName = activeTabNameSelector(state);
const is_intermediate =
activeTabName === 'unifiedCanvas' ? !state.canvas.shouldAutoSave : false;
const nodeToAddTo = graph.nodes[nodeIdToAddTo] as
| LatentsToImageInvocation
| undefined;
const metadataAccumulator = graph.nodes[METADATA_ACCUMULATOR] as
| MetadataAccumulatorInvocation
| undefined;
if (!nodeToAddTo) {
// something has gone terribly awry
return;
}
nodeToAddTo.is_intermediate = true;
nodeToAddTo.use_cache = true;
const nsfwCheckerNode: ImageNSFWBlurInvocation = {
id: NSFW_CHECKER,
type: 'img_nsfw',
is_intermediate,
is_intermediate: true,
};
graph.nodes[NSFW_CHECKER] = nsfwCheckerNode as ImageNSFWBlurInvocation;
@ -54,17 +40,4 @@ export const addNSFWCheckerToGraph = (
field: 'image',
},
});
if (metadataAccumulator) {
graph.edges.push({
source: {
node_id: METADATA_ACCUMULATOR,
field: 'metadata',
},
destination: {
node_id: NSFW_CHECKER,
field: 'metadata',
},
});
}
};


@ -0,0 +1,92 @@
import { NonNullableGraph } from 'features/nodes/types/types';
import {
CANVAS_OUTPUT,
LATENTS_TO_IMAGE,
METADATA_ACCUMULATOR,
NSFW_CHECKER,
SAVE_IMAGE,
WATERMARKER,
} from './constants';
import {
MetadataAccumulatorInvocation,
SaveImageInvocation,
} from 'services/api/types';
import { RootState } from 'app/store/store';
import { activeTabNameSelector } from 'features/ui/store/uiSelectors';
/**
* Set the `use_cache` field on the linear/canvas graph's final image output node to False.
*/
export const addSaveImageNode = (
state: RootState,
graph: NonNullableGraph
): void => {
const activeTabName = activeTabNameSelector(state);
const is_intermediate =
activeTabName === 'unifiedCanvas' ? !state.canvas.shouldAutoSave : false;
const saveImageNode: SaveImageInvocation = {
id: SAVE_IMAGE,
type: 'save_image',
is_intermediate,
use_cache: false,
};
graph.nodes[SAVE_IMAGE] = saveImageNode;
const metadataAccumulator = graph.nodes[METADATA_ACCUMULATOR] as
| MetadataAccumulatorInvocation
| undefined;
if (metadataAccumulator) {
graph.edges.push({
source: {
node_id: METADATA_ACCUMULATOR,
field: 'metadata',
},
destination: {
node_id: SAVE_IMAGE,
field: 'metadata',
},
});
}
const destination = {
node_id: SAVE_IMAGE,
field: 'image',
};
if (WATERMARKER in graph.nodes) {
graph.edges.push({
source: {
node_id: WATERMARKER,
field: 'image',
},
destination,
});
} else if (NSFW_CHECKER in graph.nodes) {
graph.edges.push({
source: {
node_id: NSFW_CHECKER,
field: 'image',
},
destination,
});
} else if (CANVAS_OUTPUT in graph.nodes) {
graph.edges.push({
source: {
node_id: CANVAS_OUTPUT,
field: 'image',
},
destination,
});
} else if (LATENTS_TO_IMAGE in graph.nodes) {
graph.edges.push({
source: {
node_id: LATENTS_TO_IMAGE,
field: 'image',
},
destination,
});
}
};


@ -51,6 +51,7 @@ export const addWatermarkerToGraph = (
// no matter the situation, we want the l2i node to be intermediate
nodeToAddTo.is_intermediate = true;
nodeToAddTo.use_cache = true;
if (nsfwCheckerNode) {
// if we are using NSFW checker, we need to "disable" it output by marking it intermediate,


@ -25,6 +25,7 @@ import {
POSITIVE_CONDITIONING,
SEAMLESS,
} from './constants';
import { addSaveImageNode } from './addSaveImageNode';
/**
* Builds the Canvas tab's Image to Image graph.
@ -53,14 +54,10 @@ export const buildCanvasImageToImageGraph = (
// The bounding box determines width and height, not the width and height params
const { width, height } = state.canvas.boundingBoxDimensions;
const {
scaledBoundingBoxDimensions,
boundingBoxScaleMethod,
shouldAutoSave,
} = state.canvas;
const { scaledBoundingBoxDimensions, boundingBoxScaleMethod } = state.canvas;
const fp32 = vaePrecision === 'fp32';
const is_intermediate = true;
const isUsingScaledDimensions = ['auto', 'manual'].includes(
boundingBoxScaleMethod
);
@ -92,31 +89,31 @@ export const buildCanvasImageToImageGraph = (
[modelLoaderNodeId]: {
type: 'main_model_loader',
id: modelLoaderNodeId,
is_intermediate: true,
is_intermediate,
model,
},
[CLIP_SKIP]: {
type: 'clip_skip',
id: CLIP_SKIP,
is_intermediate: true,
is_intermediate,
skipped_layers: clipSkip,
},
[POSITIVE_CONDITIONING]: {
type: 'compel',
id: POSITIVE_CONDITIONING,
is_intermediate: true,
is_intermediate,
prompt: positivePrompt,
},
[NEGATIVE_CONDITIONING]: {
type: 'compel',
id: NEGATIVE_CONDITIONING,
is_intermediate: true,
is_intermediate,
prompt: negativePrompt,
},
[NOISE]: {
type: 'noise',
id: NOISE,
is_intermediate: true,
is_intermediate,
use_cpu,
width: !isUsingScaledDimensions
? width
@ -128,12 +125,12 @@ export const buildCanvasImageToImageGraph = (
[IMAGE_TO_LATENTS]: {
type: 'i2l',
id: IMAGE_TO_LATENTS,
is_intermediate: true,
is_intermediate,
},
[DENOISE_LATENTS]: {
type: 'denoise_latents',
id: DENOISE_LATENTS,
is_intermediate: true,
is_intermediate,
cfg_scale,
scheduler,
steps,
@ -143,7 +140,7 @@ export const buildCanvasImageToImageGraph = (
[CANVAS_OUTPUT]: {
type: 'l2i',
id: CANVAS_OUTPUT,
is_intermediate: !shouldAutoSave,
is_intermediate,
},
},
edges: [
@ -238,7 +235,7 @@ export const buildCanvasImageToImageGraph = (
graph.nodes[IMG2IMG_RESIZE] = {
id: IMG2IMG_RESIZE,
type: 'img_resize',
is_intermediate: true,
is_intermediate,
image: initialImage,
width: scaledBoundingBoxDimensions.width,
height: scaledBoundingBoxDimensions.height,
@ -246,13 +243,13 @@ export const buildCanvasImageToImageGraph = (
graph.nodes[LATENTS_TO_IMAGE] = {
id: LATENTS_TO_IMAGE,
type: 'l2i',
is_intermediate: true,
is_intermediate,
fp32,
};
graph.nodes[CANVAS_OUTPUT] = {
id: CANVAS_OUTPUT,
type: 'img_resize',
is_intermediate: !shouldAutoSave,
is_intermediate,
width: width,
height: height,
};
@ -293,7 +290,7 @@ export const buildCanvasImageToImageGraph = (
graph.nodes[CANVAS_OUTPUT] = {
type: 'l2i',
id: CANVAS_OUTPUT,
is_intermediate: !shouldAutoSave,
is_intermediate,
fp32,
};
@ -337,17 +334,6 @@ export const buildCanvasImageToImageGraph = (
init_image: initialImage.image_name,
};
graph.edges.push({
source: {
node_id: METADATA_ACCUMULATOR,
field: 'metadata',
},
destination: {
node_id: CANVAS_OUTPUT,
field: 'metadata',
},
});
// Add Seamless To Graph
if (seamlessXAxis || seamlessYAxis) {
addSeamlessToLinearGraph(state, graph, modelLoaderNodeId);
@ -377,5 +363,7 @@ export const buildCanvasImageToImageGraph = (
addWatermarkerToGraph(state, graph, CANVAS_OUTPUT);
}
addSaveImageNode(state, graph);
return graph;
};

View File

@ -44,6 +44,7 @@ import {
RANGE_OF_SIZE,
SEAMLESS,
} from './constants';
import { addSaveImageNode } from './addSaveImageNode';
/**
* Builds the Canvas tab's Inpaint graph.
@ -87,12 +88,8 @@ export const buildCanvasInpaintGraph = (
const { width, height } = state.canvas.boundingBoxDimensions;
// We may need to set the inpaint width and height to scale the image
const {
scaledBoundingBoxDimensions,
boundingBoxScaleMethod,
shouldAutoSave,
} = state.canvas;
const { scaledBoundingBoxDimensions, boundingBoxScaleMethod } = state.canvas;
const is_intermediate = true;
const fp32 = vaePrecision === 'fp32';
const isUsingScaledDimensions = ['auto', 'manual'].includes(
@ -111,56 +108,56 @@ export const buildCanvasInpaintGraph = (
[modelLoaderNodeId]: {
type: 'main_model_loader',
id: modelLoaderNodeId,
is_intermediate: true,
is_intermediate,
model,
},
[CLIP_SKIP]: {
type: 'clip_skip',
id: CLIP_SKIP,
is_intermediate: true,
is_intermediate,
skipped_layers: clipSkip,
},
[POSITIVE_CONDITIONING]: {
type: 'compel',
id: POSITIVE_CONDITIONING,
is_intermediate: true,
is_intermediate,
prompt: positivePrompt,
},
[NEGATIVE_CONDITIONING]: {
type: 'compel',
id: NEGATIVE_CONDITIONING,
is_intermediate: true,
is_intermediate,
prompt: negativePrompt,
},
[MASK_BLUR]: {
type: 'img_blur',
id: MASK_BLUR,
is_intermediate: true,
is_intermediate,
radius: maskBlur,
blur_type: maskBlurMethod,
},
[INPAINT_IMAGE]: {
type: 'i2l',
id: INPAINT_IMAGE,
is_intermediate: true,
is_intermediate,
fp32,
},
[NOISE]: {
type: 'noise',
id: NOISE,
use_cpu,
is_intermediate: true,
is_intermediate,
},
[INPAINT_CREATE_MASK]: {
type: 'create_denoise_mask',
id: INPAINT_CREATE_MASK,
is_intermediate: true,
is_intermediate,
fp32,
},
[DENOISE_LATENTS]: {
type: 'denoise_latents',
id: DENOISE_LATENTS,
is_intermediate: true,
is_intermediate,
steps: steps,
cfg_scale: cfg_scale,
scheduler: scheduler,
@ -171,18 +168,18 @@ export const buildCanvasInpaintGraph = (
type: 'noise',
id: NOISE,
use_cpu,
is_intermediate: true,
is_intermediate,
},
[CANVAS_COHERENCE_NOISE_INCREMENT]: {
type: 'add',
id: CANVAS_COHERENCE_NOISE_INCREMENT,
b: 1,
is_intermediate: true,
is_intermediate,
},
[CANVAS_COHERENCE_DENOISE_LATENTS]: {
type: 'denoise_latents',
id: CANVAS_COHERENCE_DENOISE_LATENTS,
is_intermediate: true,
is_intermediate,
steps: canvasCoherenceSteps,
cfg_scale: cfg_scale,
scheduler: scheduler,
@ -192,19 +189,19 @@ export const buildCanvasInpaintGraph = (
[LATENTS_TO_IMAGE]: {
type: 'l2i',
id: LATENTS_TO_IMAGE,
is_intermediate: true,
is_intermediate,
fp32,
},
[CANVAS_OUTPUT]: {
type: 'color_correct',
id: CANVAS_OUTPUT,
is_intermediate: !shouldAutoSave,
is_intermediate,
reference: canvasInitImage,
},
[RANGE_OF_SIZE]: {
type: 'range_of_size',
id: RANGE_OF_SIZE,
is_intermediate: true,
is_intermediate,
// seed - must be connected manually
// start: 0,
size: iterations,
@ -213,7 +210,7 @@ export const buildCanvasInpaintGraph = (
[ITERATE]: {
type: 'iterate',
id: ITERATE,
is_intermediate: true,
is_intermediate,
},
},
edges: [
@ -436,7 +433,7 @@ export const buildCanvasInpaintGraph = (
graph.nodes[INPAINT_IMAGE_RESIZE_UP] = {
type: 'img_resize',
id: INPAINT_IMAGE_RESIZE_UP,
is_intermediate: true,
is_intermediate,
width: scaledWidth,
height: scaledHeight,
image: canvasInitImage,
@ -444,7 +441,7 @@ export const buildCanvasInpaintGraph = (
graph.nodes[MASK_RESIZE_UP] = {
type: 'img_resize',
id: MASK_RESIZE_UP,
is_intermediate: true,
is_intermediate,
width: scaledWidth,
height: scaledHeight,
image: canvasMaskImage,
@ -452,14 +449,14 @@ export const buildCanvasInpaintGraph = (
graph.nodes[INPAINT_IMAGE_RESIZE_DOWN] = {
type: 'img_resize',
id: INPAINT_IMAGE_RESIZE_DOWN,
is_intermediate: true,
is_intermediate,
width: width,
height: height,
};
graph.nodes[MASK_RESIZE_DOWN] = {
type: 'img_resize',
id: MASK_RESIZE_DOWN,
is_intermediate: true,
is_intermediate,
width: width,
height: height,
};
@ -597,7 +594,7 @@ export const buildCanvasInpaintGraph = (
graph.nodes[CANVAS_COHERENCE_INPAINT_CREATE_MASK] = {
type: 'create_denoise_mask',
id: CANVAS_COHERENCE_INPAINT_CREATE_MASK,
is_intermediate: true,
is_intermediate,
fp32,
};
@ -650,7 +647,7 @@ export const buildCanvasInpaintGraph = (
graph.nodes[CANVAS_COHERENCE_MASK_EDGE] = {
type: 'mask_edge',
id: CANVAS_COHERENCE_MASK_EDGE,
is_intermediate: true,
is_intermediate,
edge_blur: maskBlur,
edge_size: maskBlur * 2,
low_threshold: 100,
@ -747,5 +744,7 @@ export const buildCanvasInpaintGraph = (
addWatermarkerToGraph(state, graph, CANVAS_OUTPUT);
}
addSaveImageNode(state, graph);
return graph;
};


@ -46,6 +46,7 @@ import {
RANGE_OF_SIZE,
SEAMLESS,
} from './constants';
import { addSaveImageNode } from './addSaveImageNode';
/**
* Builds the Canvas tab's Outpaint graph.
@ -91,14 +92,10 @@ export const buildCanvasOutpaintGraph = (
const { width, height } = state.canvas.boundingBoxDimensions;
// We may need to set the inpaint width and height to scale the image
const {
scaledBoundingBoxDimensions,
boundingBoxScaleMethod,
shouldAutoSave,
} = state.canvas;
const { scaledBoundingBoxDimensions, boundingBoxScaleMethod } = state.canvas;
const fp32 = vaePrecision === 'fp32';
const is_intermediate = true;
const isUsingScaledDimensions = ['auto', 'manual'].includes(
boundingBoxScaleMethod
);
@ -115,61 +112,61 @@ export const buildCanvasOutpaintGraph = (
[modelLoaderNodeId]: {
type: 'main_model_loader',
id: modelLoaderNodeId,
is_intermediate: true,
is_intermediate,
model,
},
[CLIP_SKIP]: {
type: 'clip_skip',
id: CLIP_SKIP,
is_intermediate: true,
is_intermediate,
skipped_layers: clipSkip,
},
[POSITIVE_CONDITIONING]: {
type: 'compel',
id: POSITIVE_CONDITIONING,
is_intermediate: true,
is_intermediate,
prompt: positivePrompt,
},
[NEGATIVE_CONDITIONING]: {
type: 'compel',
id: NEGATIVE_CONDITIONING,
is_intermediate: true,
is_intermediate,
prompt: negativePrompt,
},
[MASK_FROM_ALPHA]: {
type: 'tomask',
id: MASK_FROM_ALPHA,
is_intermediate: true,
is_intermediate,
image: canvasInitImage,
},
[MASK_COMBINE]: {
type: 'mask_combine',
id: MASK_COMBINE,
is_intermediate: true,
is_intermediate,
mask2: canvasMaskImage,
},
[INPAINT_IMAGE]: {
type: 'i2l',
id: INPAINT_IMAGE,
is_intermediate: true,
is_intermediate,
fp32,
},
[NOISE]: {
type: 'noise',
id: NOISE,
use_cpu,
is_intermediate: true,
is_intermediate,
},
[INPAINT_CREATE_MASK]: {
type: 'create_denoise_mask',
id: INPAINT_CREATE_MASK,
is_intermediate: true,
is_intermediate,
fp32,
},
[DENOISE_LATENTS]: {
type: 'denoise_latents',
id: DENOISE_LATENTS,
is_intermediate: true,
is_intermediate,
steps: steps,
cfg_scale: cfg_scale,
scheduler: scheduler,
@ -180,18 +177,18 @@ export const buildCanvasOutpaintGraph = (
type: 'noise',
id: NOISE,
use_cpu,
is_intermediate: true,
is_intermediate,
},
[CANVAS_COHERENCE_NOISE_INCREMENT]: {
type: 'add',
id: CANVAS_COHERENCE_NOISE_INCREMENT,
b: 1,
is_intermediate: true,
is_intermediate,
},
[CANVAS_COHERENCE_DENOISE_LATENTS]: {
type: 'denoise_latents',
id: CANVAS_COHERENCE_DENOISE_LATENTS,
is_intermediate: true,
is_intermediate,
steps: canvasCoherenceSteps,
cfg_scale: cfg_scale,
scheduler: scheduler,
@ -201,18 +198,18 @@ export const buildCanvasOutpaintGraph = (
[LATENTS_TO_IMAGE]: {
type: 'l2i',
id: LATENTS_TO_IMAGE,
is_intermediate: true,
is_intermediate,
fp32,
},
[CANVAS_OUTPUT]: {
type: 'color_correct',
id: CANVAS_OUTPUT,
is_intermediate: !shouldAutoSave,
is_intermediate,
},
[RANGE_OF_SIZE]: {
type: 'range_of_size',
id: RANGE_OF_SIZE,
is_intermediate: true,
is_intermediate,
// seed - must be connected manually
// start: 0,
size: iterations,
@ -221,7 +218,7 @@ export const buildCanvasOutpaintGraph = (
[ITERATE]: {
type: 'iterate',
id: ITERATE,
is_intermediate: true,
is_intermediate,
},
},
edges: [
@ -472,7 +469,7 @@ export const buildCanvasOutpaintGraph = (
graph.nodes[INPAINT_INFILL] = {
type: 'infill_patchmatch',
id: INPAINT_INFILL,
is_intermediate: true,
is_intermediate,
downscale: infillPatchmatchDownscaleSize,
};
}
@ -481,7 +478,7 @@ export const buildCanvasOutpaintGraph = (
graph.nodes[INPAINT_INFILL] = {
type: 'infill_lama',
id: INPAINT_INFILL,
is_intermediate: true,
is_intermediate,
};
}
@ -489,7 +486,7 @@ export const buildCanvasOutpaintGraph = (
graph.nodes[INPAINT_INFILL] = {
type: 'infill_cv2',
id: INPAINT_INFILL,
is_intermediate: true,
is_intermediate,
};
}
@ -497,7 +494,7 @@ export const buildCanvasOutpaintGraph = (
graph.nodes[INPAINT_INFILL] = {
type: 'infill_tile',
id: INPAINT_INFILL,
is_intermediate: true,
is_intermediate,
tile_size: infillTileSize,
};
}
@ -511,7 +508,7 @@ export const buildCanvasOutpaintGraph = (
graph.nodes[INPAINT_IMAGE_RESIZE_UP] = {
type: 'img_resize',
id: INPAINT_IMAGE_RESIZE_UP,
is_intermediate: true,
is_intermediate,
width: scaledWidth,
height: scaledHeight,
image: canvasInitImage,
@ -519,28 +516,28 @@ export const buildCanvasOutpaintGraph = (
graph.nodes[MASK_RESIZE_UP] = {
type: 'img_resize',
id: MASK_RESIZE_UP,
is_intermediate: true,
is_intermediate,
width: scaledWidth,
height: scaledHeight,
};
graph.nodes[INPAINT_IMAGE_RESIZE_DOWN] = {
type: 'img_resize',
id: INPAINT_IMAGE_RESIZE_DOWN,
is_intermediate: true,
is_intermediate,
width: width,
height: height,
};
graph.nodes[INPAINT_INFILL_RESIZE_DOWN] = {
type: 'img_resize',
id: INPAINT_INFILL_RESIZE_DOWN,
is_intermediate: true,
is_intermediate,
width: width,
height: height,
};
graph.nodes[MASK_RESIZE_DOWN] = {
type: 'img_resize',
id: MASK_RESIZE_DOWN,
is_intermediate: true,
is_intermediate,
width: width,
height: height,
};
@ -699,7 +696,7 @@ export const buildCanvasOutpaintGraph = (
graph.nodes[CANVAS_COHERENCE_INPAINT_CREATE_MASK] = {
type: 'create_denoise_mask',
id: CANVAS_COHERENCE_INPAINT_CREATE_MASK,
is_intermediate: true,
is_intermediate,
fp32,
};
@ -746,7 +743,7 @@ export const buildCanvasOutpaintGraph = (
graph.nodes[CANVAS_COHERENCE_MASK_EDGE] = {
type: 'mask_edge',
id: CANVAS_COHERENCE_MASK_EDGE,
is_intermediate: true,
is_intermediate,
edge_blur: maskBlur,
edge_size: maskBlur * 2,
low_threshold: 100,
@ -849,5 +846,7 @@ export const buildCanvasOutpaintGraph = (
addWatermarkerToGraph(state, graph, CANVAS_OUTPUT);
}
addSaveImageNode(state, graph);
return graph;
};


@ -27,6 +27,7 @@ import {
SEAMLESS,
} from './constants';
import { craftSDXLStylePrompt } from './helpers/craftSDXLStylePrompt';
import { addSaveImageNode } from './addSaveImageNode';
/**
* Builds the Canvas tab's Image to Image graph.
@ -61,14 +62,10 @@ export const buildCanvasSDXLImageToImageGraph = (
// The bounding box determines width and height, not the width and height params
const { width, height } = state.canvas.boundingBoxDimensions;
const {
scaledBoundingBoxDimensions,
boundingBoxScaleMethod,
shouldAutoSave,
} = state.canvas;
const { scaledBoundingBoxDimensions, boundingBoxScaleMethod } = state.canvas;
const fp32 = vaePrecision === 'fp32';
const is_intermediate = true;
const isUsingScaledDimensions = ['auto', 'manual'].includes(
boundingBoxScaleMethod
);
@ -122,7 +119,7 @@ export const buildCanvasSDXLImageToImageGraph = (
[NOISE]: {
type: 'noise',
id: NOISE,
is_intermediate: true,
is_intermediate,
use_cpu,
width: !isUsingScaledDimensions
? width
@ -134,13 +131,13 @@ export const buildCanvasSDXLImageToImageGraph = (
[IMAGE_TO_LATENTS]: {
type: 'i2l',
id: IMAGE_TO_LATENTS,
is_intermediate: true,
is_intermediate,
fp32,
},
[SDXL_DENOISE_LATENTS]: {
type: 'denoise_latents',
id: SDXL_DENOISE_LATENTS,
is_intermediate: true,
is_intermediate,
cfg_scale,
scheduler,
steps,
@ -251,7 +248,7 @@ export const buildCanvasSDXLImageToImageGraph = (
graph.nodes[IMG2IMG_RESIZE] = {
id: IMG2IMG_RESIZE,
type: 'img_resize',
is_intermediate: true,
is_intermediate,
image: initialImage,
width: scaledBoundingBoxDimensions.width,
height: scaledBoundingBoxDimensions.height,
@ -259,13 +256,13 @@ export const buildCanvasSDXLImageToImageGraph = (
graph.nodes[LATENTS_TO_IMAGE] = {
id: LATENTS_TO_IMAGE,
type: 'l2i',
is_intermediate: true,
is_intermediate,
fp32,
};
graph.nodes[CANVAS_OUTPUT] = {
id: CANVAS_OUTPUT,
type: 'img_resize',
is_intermediate: !shouldAutoSave,
is_intermediate,
width: width,
height: height,
};
@ -306,7 +303,7 @@ export const buildCanvasSDXLImageToImageGraph = (
graph.nodes[CANVAS_OUTPUT] = {
type: 'l2i',
id: CANVAS_OUTPUT,
is_intermediate: !shouldAutoSave,
is_intermediate,
fp32,
};
@ -403,5 +400,7 @@ export const buildCanvasSDXLImageToImageGraph = (
addWatermarkerToGraph(state, graph, CANVAS_OUTPUT);
}
addSaveImageNode(state, graph);
return graph;
};


@ -46,6 +46,7 @@ import {
SEAMLESS,
} from './constants';
import { craftSDXLStylePrompt } from './helpers/craftSDXLStylePrompt';
import { addSaveImageNode } from './addSaveImageNode';
/**
* Builds the Canvas tab's Inpaint graph.
@ -94,14 +95,10 @@ export const buildCanvasSDXLInpaintGraph = (
const { width, height } = state.canvas.boundingBoxDimensions;
// We may need to set the inpaint width and height to scale the image
const {
scaledBoundingBoxDimensions,
boundingBoxScaleMethod,
shouldAutoSave,
} = state.canvas;
const { scaledBoundingBoxDimensions, boundingBoxScaleMethod } = state.canvas;
const fp32 = vaePrecision === 'fp32';
const is_intermediate = true;
const isUsingScaledDimensions = ['auto', 'manual'].includes(
boundingBoxScaleMethod
);
@ -139,32 +136,32 @@ export const buildCanvasSDXLInpaintGraph = (
[MASK_BLUR]: {
type: 'img_blur',
id: MASK_BLUR,
is_intermediate: true,
is_intermediate,
radius: maskBlur,
blur_type: maskBlurMethod,
},
[INPAINT_IMAGE]: {
type: 'i2l',
id: INPAINT_IMAGE,
is_intermediate: true,
is_intermediate,
fp32,
},
[NOISE]: {
type: 'noise',
id: NOISE,
use_cpu,
is_intermediate: true,
is_intermediate,
},
[INPAINT_CREATE_MASK]: {
type: 'create_denoise_mask',
id: INPAINT_CREATE_MASK,
is_intermediate: true,
is_intermediate,
fp32,
},
[SDXL_DENOISE_LATENTS]: {
type: 'denoise_latents',
id: SDXL_DENOISE_LATENTS,
is_intermediate: true,
is_intermediate,
steps: steps,
cfg_scale: cfg_scale,
scheduler: scheduler,
@ -177,18 +174,18 @@ export const buildCanvasSDXLInpaintGraph = (
type: 'noise',
id: NOISE,
use_cpu,
is_intermediate: true,
is_intermediate,
},
[CANVAS_COHERENCE_NOISE_INCREMENT]: {
type: 'add',
id: CANVAS_COHERENCE_NOISE_INCREMENT,
b: 1,
is_intermediate: true,
is_intermediate,
},
[CANVAS_COHERENCE_DENOISE_LATENTS]: {
type: 'denoise_latents',
id: CANVAS_COHERENCE_DENOISE_LATENTS,
is_intermediate: true,
is_intermediate,
steps: canvasCoherenceSteps,
cfg_scale: cfg_scale,
scheduler: scheduler,
@ -198,19 +195,19 @@ export const buildCanvasSDXLInpaintGraph = (
[LATENTS_TO_IMAGE]: {
type: 'l2i',
id: LATENTS_TO_IMAGE,
is_intermediate: true,
is_intermediate,
fp32,
},
[CANVAS_OUTPUT]: {
type: 'color_correct',
id: CANVAS_OUTPUT,
is_intermediate: !shouldAutoSave,
is_intermediate,
reference: canvasInitImage,
},
[RANGE_OF_SIZE]: {
type: 'range_of_size',
id: RANGE_OF_SIZE,
is_intermediate: true,
is_intermediate,
// seed - must be connected manually
// start: 0,
size: iterations,
@ -219,7 +216,7 @@ export const buildCanvasSDXLInpaintGraph = (
[ITERATE]: {
type: 'iterate',
id: ITERATE,
is_intermediate: true,
is_intermediate,
},
},
edges: [
@ -451,7 +448,7 @@ export const buildCanvasSDXLInpaintGraph = (
graph.nodes[INPAINT_IMAGE_RESIZE_UP] = {
type: 'img_resize',
id: INPAINT_IMAGE_RESIZE_UP,
is_intermediate: true,
is_intermediate,
width: scaledWidth,
height: scaledHeight,
image: canvasInitImage,
@ -459,7 +456,7 @@ export const buildCanvasSDXLInpaintGraph = (
graph.nodes[MASK_RESIZE_UP] = {
type: 'img_resize',
id: MASK_RESIZE_UP,
is_intermediate: true,
is_intermediate,
width: scaledWidth,
height: scaledHeight,
image: canvasMaskImage,
@ -467,14 +464,14 @@ export const buildCanvasSDXLInpaintGraph = (
graph.nodes[INPAINT_IMAGE_RESIZE_DOWN] = {
type: 'img_resize',
id: INPAINT_IMAGE_RESIZE_DOWN,
is_intermediate: true,
is_intermediate,
width: width,
height: height,
};
graph.nodes[MASK_RESIZE_DOWN] = {
type: 'img_resize',
id: MASK_RESIZE_DOWN,
is_intermediate: true,
is_intermediate,
width: width,
height: height,
};
@ -612,7 +609,7 @@ export const buildCanvasSDXLInpaintGraph = (
graph.nodes[CANVAS_COHERENCE_INPAINT_CREATE_MASK] = {
type: 'create_denoise_mask',
id: CANVAS_COHERENCE_INPAINT_CREATE_MASK,
is_intermediate: true,
is_intermediate,
fp32,
};
@ -665,7 +662,7 @@ export const buildCanvasSDXLInpaintGraph = (
graph.nodes[CANVAS_COHERENCE_MASK_EDGE] = {
type: 'mask_edge',
id: CANVAS_COHERENCE_MASK_EDGE,
is_intermediate: true,
is_intermediate,
edge_blur: maskBlur,
edge_size: maskBlur * 2,
low_threshold: 100,
@ -776,5 +773,7 @@ export const buildCanvasSDXLInpaintGraph = (
addWatermarkerToGraph(state, graph, CANVAS_OUTPUT);
}
addSaveImageNode(state, graph);
return graph;
};


@ -48,6 +48,7 @@ import {
SEAMLESS,
} from './constants';
import { craftSDXLStylePrompt } from './helpers/craftSDXLStylePrompt';
import { addSaveImageNode } from './addSaveImageNode';
/**
* Builds the Canvas tab's Outpaint graph.
@ -98,14 +99,10 @@ export const buildCanvasSDXLOutpaintGraph = (
const { width, height } = state.canvas.boundingBoxDimensions;
// We may need to set the inpaint width and height to scale the image
const {
scaledBoundingBoxDimensions,
boundingBoxScaleMethod,
shouldAutoSave,
} = state.canvas;
const { scaledBoundingBoxDimensions, boundingBoxScaleMethod } = state.canvas;
const fp32 = vaePrecision === 'fp32';
const is_intermediate = true;
const isUsingScaledDimensions = ['auto', 'manual'].includes(
boundingBoxScaleMethod
);
@ -143,37 +140,37 @@ export const buildCanvasSDXLOutpaintGraph = (
[MASK_FROM_ALPHA]: {
type: 'tomask',
id: MASK_FROM_ALPHA,
is_intermediate: true,
is_intermediate,
image: canvasInitImage,
},
[MASK_COMBINE]: {
type: 'mask_combine',
id: MASK_COMBINE,
is_intermediate: true,
is_intermediate,
mask2: canvasMaskImage,
},
[INPAINT_IMAGE]: {
type: 'i2l',
id: INPAINT_IMAGE,
is_intermediate: true,
is_intermediate,
fp32,
},
[NOISE]: {
type: 'noise',
id: NOISE,
use_cpu,
is_intermediate: true,
is_intermediate,
},
[INPAINT_CREATE_MASK]: {
type: 'create_denoise_mask',
id: INPAINT_CREATE_MASK,
is_intermediate: true,
is_intermediate,
fp32,
},
[SDXL_DENOISE_LATENTS]: {
type: 'denoise_latents',
id: SDXL_DENOISE_LATENTS,
is_intermediate: true,
is_intermediate,
steps: steps,
cfg_scale: cfg_scale,
scheduler: scheduler,
@ -186,18 +183,18 @@ export const buildCanvasSDXLOutpaintGraph = (
type: 'noise',
id: NOISE,
use_cpu,
is_intermediate: true,
is_intermediate,
},
[CANVAS_COHERENCE_NOISE_INCREMENT]: {
type: 'add',
id: CANVAS_COHERENCE_NOISE_INCREMENT,
b: 1,
is_intermediate: true,
is_intermediate,
},
[CANVAS_COHERENCE_DENOISE_LATENTS]: {
type: 'denoise_latents',
id: CANVAS_COHERENCE_DENOISE_LATENTS,
is_intermediate: true,
is_intermediate,
steps: canvasCoherenceSteps,
cfg_scale: cfg_scale,
scheduler: scheduler,
@ -207,18 +204,18 @@ export const buildCanvasSDXLOutpaintGraph = (
[LATENTS_TO_IMAGE]: {
type: 'l2i',
id: LATENTS_TO_IMAGE,
is_intermediate: true,
is_intermediate,
fp32,
},
[CANVAS_OUTPUT]: {
type: 'color_correct',
id: CANVAS_OUTPUT,
is_intermediate: !shouldAutoSave,
is_intermediate,
},
[RANGE_OF_SIZE]: {
type: 'range_of_size',
id: RANGE_OF_SIZE,
is_intermediate: true,
is_intermediate,
// seed - must be connected manually
// start: 0,
size: iterations,
@ -227,7 +224,7 @@ export const buildCanvasSDXLOutpaintGraph = (
[ITERATE]: {
type: 'iterate',
id: ITERATE,
is_intermediate: true,
is_intermediate,
},
},
edges: [
@ -487,7 +484,7 @@ export const buildCanvasSDXLOutpaintGraph = (
graph.nodes[INPAINT_INFILL] = {
type: 'infill_patchmatch',
id: INPAINT_INFILL,
is_intermediate: true,
is_intermediate,
downscale: infillPatchmatchDownscaleSize,
};
}
@ -496,7 +493,7 @@ export const buildCanvasSDXLOutpaintGraph = (
graph.nodes[INPAINT_INFILL] = {
type: 'infill_lama',
id: INPAINT_INFILL,
is_intermediate: true,
is_intermediate,
};
}
@ -504,7 +501,7 @@ export const buildCanvasSDXLOutpaintGraph = (
graph.nodes[INPAINT_INFILL] = {
type: 'infill_cv2',
id: INPAINT_INFILL,
is_intermediate: true,
is_intermediate,
};
}
@ -512,7 +509,7 @@ export const buildCanvasSDXLOutpaintGraph = (
graph.nodes[INPAINT_INFILL] = {
type: 'infill_tile',
id: INPAINT_INFILL,
is_intermediate: true,
is_intermediate,
tile_size: infillTileSize,
};
}
@ -526,7 +523,7 @@ export const buildCanvasSDXLOutpaintGraph = (
graph.nodes[INPAINT_IMAGE_RESIZE_UP] = {
type: 'img_resize',
id: INPAINT_IMAGE_RESIZE_UP,
is_intermediate: true,
is_intermediate,
width: scaledWidth,
height: scaledHeight,
image: canvasInitImage,
@ -534,28 +531,28 @@ export const buildCanvasSDXLOutpaintGraph = (
graph.nodes[MASK_RESIZE_UP] = {
type: 'img_resize',
id: MASK_RESIZE_UP,
is_intermediate: true,
is_intermediate,
width: scaledWidth,
height: scaledHeight,
};
graph.nodes[INPAINT_IMAGE_RESIZE_DOWN] = {
type: 'img_resize',
id: INPAINT_IMAGE_RESIZE_DOWN,
is_intermediate: true,
is_intermediate,
width: width,
height: height,
};
graph.nodes[INPAINT_INFILL_RESIZE_DOWN] = {
type: 'img_resize',
id: INPAINT_INFILL_RESIZE_DOWN,
is_intermediate: true,
is_intermediate,
width: width,
height: height,
};
graph.nodes[MASK_RESIZE_DOWN] = {
type: 'img_resize',
id: MASK_RESIZE_DOWN,
is_intermediate: true,
is_intermediate,
width: width,
height: height,
};
@ -715,7 +712,7 @@ export const buildCanvasSDXLOutpaintGraph = (
graph.nodes[CANVAS_COHERENCE_INPAINT_CREATE_MASK] = {
type: 'create_denoise_mask',
id: CANVAS_COHERENCE_INPAINT_CREATE_MASK,
is_intermediate: true,
is_intermediate,
fp32,
};
@ -762,7 +759,7 @@ export const buildCanvasSDXLOutpaintGraph = (
graph.nodes[CANVAS_COHERENCE_MASK_EDGE] = {
type: 'mask_edge',
id: CANVAS_COHERENCE_MASK_EDGE,
is_intermediate: true,
is_intermediate,
edge_blur: maskBlur,
edge_size: maskBlur * 2,
low_threshold: 100,
@ -879,5 +876,7 @@ export const buildCanvasSDXLOutpaintGraph = (
addWatermarkerToGraph(state, graph, CANVAS_OUTPUT);
}
addSaveImageNode(state, graph);
return graph;
};


@ -29,6 +29,7 @@ import {
SEAMLESS,
} from './constants';
import { craftSDXLStylePrompt } from './helpers/craftSDXLStylePrompt';
import { addSaveImageNode } from './addSaveImageNode';
/**
* Builds the Canvas tab's Text to Image graph.
@ -55,14 +56,10 @@ export const buildCanvasSDXLTextToImageGraph = (
// The bounding box determines width and height, not the width and height params
const { width, height } = state.canvas.boundingBoxDimensions;
const {
scaledBoundingBoxDimensions,
boundingBoxScaleMethod,
shouldAutoSave,
} = state.canvas;
const { scaledBoundingBoxDimensions, boundingBoxScaleMethod } = state.canvas;
const fp32 = vaePrecision === 'fp32';
const is_intermediate = true;
const isUsingScaledDimensions = ['auto', 'manual'].includes(
boundingBoxScaleMethod
);
@ -94,7 +91,7 @@ export const buildCanvasSDXLTextToImageGraph = (
? {
type: 't2l_onnx',
id: SDXL_DENOISE_LATENTS,
is_intermediate: true,
is_intermediate,
cfg_scale,
scheduler,
steps,
@ -102,7 +99,7 @@ export const buildCanvasSDXLTextToImageGraph = (
: {
type: 'denoise_latents',
id: SDXL_DENOISE_LATENTS,
is_intermediate: true,
is_intermediate,
cfg_scale,
scheduler,
steps,
@ -131,27 +128,27 @@ export const buildCanvasSDXLTextToImageGraph = (
[modelLoaderNodeId]: {
type: modelLoaderNodeType,
id: modelLoaderNodeId,
is_intermediate: true,
is_intermediate,
model,
},
[POSITIVE_CONDITIONING]: {
type: isUsingOnnxModel ? 'prompt_onnx' : 'sdxl_compel_prompt',
id: POSITIVE_CONDITIONING,
is_intermediate: true,
is_intermediate,
prompt: positivePrompt,
style: craftedPositiveStylePrompt,
},
[NEGATIVE_CONDITIONING]: {
type: isUsingOnnxModel ? 'prompt_onnx' : 'sdxl_compel_prompt',
id: NEGATIVE_CONDITIONING,
is_intermediate: true,
is_intermediate,
prompt: negativePrompt,
style: craftedNegativeStylePrompt,
},
[NOISE]: {
type: 'noise',
id: NOISE,
is_intermediate: true,
is_intermediate,
width: !isUsingScaledDimensions
? width
: scaledBoundingBoxDimensions.width,
@ -253,14 +250,14 @@ export const buildCanvasSDXLTextToImageGraph = (
graph.nodes[LATENTS_TO_IMAGE] = {
id: LATENTS_TO_IMAGE,
type: isUsingOnnxModel ? 'l2i_onnx' : 'l2i',
is_intermediate: true,
is_intermediate,
fp32,
};
graph.nodes[CANVAS_OUTPUT] = {
id: CANVAS_OUTPUT,
type: 'img_resize',
is_intermediate: !shouldAutoSave,
is_intermediate,
width: width,
height: height,
};
@ -291,7 +288,7 @@ export const buildCanvasSDXLTextToImageGraph = (
graph.nodes[CANVAS_OUTPUT] = {
type: isUsingOnnxModel ? 'l2i_onnx' : 'l2i',
id: CANVAS_OUTPUT,
is_intermediate: !shouldAutoSave,
is_intermediate,
fp32,
};
@ -383,5 +380,7 @@ export const buildCanvasSDXLTextToImageGraph = (
addWatermarkerToGraph(state, graph, CANVAS_OUTPUT);
}
addSaveImageNode(state, graph);
return graph;
};


@ -10,6 +10,7 @@ import { addControlNetToLinearGraph } from './addControlNetToLinearGraph';
import { addDynamicPromptsToGraph } from './addDynamicPromptsToGraph';
import { addLoRAsToGraph } from './addLoRAsToGraph';
import { addNSFWCheckerToGraph } from './addNSFWCheckerToGraph';
import { addSaveImageNode } from './addSaveImageNode';
import { addSeamlessToLinearGraph } from './addSeamlessToLinearGraph';
import { addVAEToGraph } from './addVAEToGraph';
import { addWatermarkerToGraph } from './addWatermarkerToGraph';
@ -53,14 +54,10 @@ export const buildCanvasTextToImageGraph = (
// The bounding box determines width and height, not the width and height params
const { width, height } = state.canvas.boundingBoxDimensions;
const {
scaledBoundingBoxDimensions,
boundingBoxScaleMethod,
shouldAutoSave,
} = state.canvas;
const { scaledBoundingBoxDimensions, boundingBoxScaleMethod } = state.canvas;
const fp32 = vaePrecision === 'fp32';
const is_intermediate = true;
const isUsingScaledDimensions = ['auto', 'manual'].includes(
boundingBoxScaleMethod
);
@ -89,7 +86,7 @@ export const buildCanvasTextToImageGraph = (
? {
type: 't2l_onnx',
id: DENOISE_LATENTS,
is_intermediate: true,
is_intermediate,
cfg_scale,
scheduler,
steps,
@ -97,7 +94,7 @@ export const buildCanvasTextToImageGraph = (
: {
type: 'denoise_latents',
id: DENOISE_LATENTS,
is_intermediate: true,
is_intermediate,
cfg_scale,
scheduler,
steps,
@ -122,31 +119,31 @@ export const buildCanvasTextToImageGraph = (
[modelLoaderNodeId]: {
type: modelLoaderNodeType,
id: modelLoaderNodeId,
is_intermediate: true,
is_intermediate,
model,
},
[CLIP_SKIP]: {
type: 'clip_skip',
id: CLIP_SKIP,
is_intermediate: true,
is_intermediate,
skipped_layers: clipSkip,
},
[POSITIVE_CONDITIONING]: {
type: isUsingOnnxModel ? 'prompt_onnx' : 'compel',
id: POSITIVE_CONDITIONING,
is_intermediate: true,
is_intermediate,
prompt: positivePrompt,
},
[NEGATIVE_CONDITIONING]: {
type: isUsingOnnxModel ? 'prompt_onnx' : 'compel',
id: NEGATIVE_CONDITIONING,
is_intermediate: true,
is_intermediate,
prompt: negativePrompt,
},
[NOISE]: {
type: 'noise',
id: NOISE,
is_intermediate: true,
is_intermediate,
width: !isUsingScaledDimensions
? width
: scaledBoundingBoxDimensions.width,
@ -239,14 +236,14 @@ export const buildCanvasTextToImageGraph = (
graph.nodes[LATENTS_TO_IMAGE] = {
id: LATENTS_TO_IMAGE,
type: isUsingOnnxModel ? 'l2i_onnx' : 'l2i',
is_intermediate: true,
is_intermediate,
fp32,
};
graph.nodes[CANVAS_OUTPUT] = {
id: CANVAS_OUTPUT,
type: 'img_resize',
is_intermediate: !shouldAutoSave,
is_intermediate,
width: width,
height: height,
};
@ -277,7 +274,7 @@ export const buildCanvasTextToImageGraph = (
graph.nodes[CANVAS_OUTPUT] = {
type: isUsingOnnxModel ? 'l2i_onnx' : 'l2i',
id: CANVAS_OUTPUT,
is_intermediate: !shouldAutoSave,
is_intermediate,
fp32,
};
@ -356,5 +353,7 @@ export const buildCanvasTextToImageGraph = (
addWatermarkerToGraph(state, graph, CANVAS_OUTPUT);
}
addSaveImageNode(state, graph);
return graph;
};


@ -27,6 +27,7 @@ import {
RESIZE,
SEAMLESS,
} from './constants';
import { addSaveImageNode } from './addSaveImageNode';
/**
* Builds the Image to Image tab graph.
@ -85,6 +86,7 @@ export const buildLinearImageToImageGraph = (
}
const fp32 = vaePrecision === 'fp32';
const is_intermediate = true;
let modelLoaderNodeId = MAIN_MODEL_LOADER;
@ -100,31 +102,37 @@ export const buildLinearImageToImageGraph = (
type: 'main_model_loader',
id: modelLoaderNodeId,
model,
is_intermediate,
},
[CLIP_SKIP]: {
type: 'clip_skip',
id: CLIP_SKIP,
skipped_layers: clipSkip,
is_intermediate,
},
[POSITIVE_CONDITIONING]: {
type: 'compel',
id: POSITIVE_CONDITIONING,
prompt: positivePrompt,
is_intermediate,
},
[NEGATIVE_CONDITIONING]: {
type: 'compel',
id: NEGATIVE_CONDITIONING,
prompt: negativePrompt,
is_intermediate,
},
[NOISE]: {
type: 'noise',
id: NOISE,
use_cpu,
is_intermediate,
},
[LATENTS_TO_IMAGE]: {
type: 'l2i',
id: LATENTS_TO_IMAGE,
fp32,
is_intermediate,
},
[DENOISE_LATENTS]: {
type: 'denoise_latents',
@ -134,6 +142,7 @@ export const buildLinearImageToImageGraph = (
steps,
denoising_start: 1 - strength,
denoising_end: 1,
is_intermediate,
},
[IMAGE_TO_LATENTS]: {
type: 'i2l',
@ -143,6 +152,7 @@ export const buildLinearImageToImageGraph = (
// image_name: initialImage.image_name,
// },
fp32,
is_intermediate,
},
},
edges: [
@ -375,5 +385,7 @@ export const buildLinearImageToImageGraph = (
addWatermarkerToGraph(state, graph);
}
addSaveImageNode(state, graph);
return graph;
};


@ -29,6 +29,7 @@ import {
SEAMLESS,
} from './constants';
import { craftSDXLStylePrompt } from './helpers/craftSDXLStylePrompt';
import { addSaveImageNode } from './addSaveImageNode';
/**
* Builds the Image to Image tab graph.
@ -85,6 +86,7 @@ export const buildLinearSDXLImageToImageGraph = (
}
const fp32 = vaePrecision === 'fp32';
const is_intermediate = true;
// Model Loader ID
let modelLoaderNodeId = SDXL_MODEL_LOADER;
@ -105,28 +107,33 @@ export const buildLinearSDXLImageToImageGraph = (
type: 'sdxl_model_loader',
id: modelLoaderNodeId,
model,
is_intermediate,
},
[POSITIVE_CONDITIONING]: {
type: 'sdxl_compel_prompt',
id: POSITIVE_CONDITIONING,
prompt: positivePrompt,
style: craftedPositiveStylePrompt,
is_intermediate,
},
[NEGATIVE_CONDITIONING]: {
type: 'sdxl_compel_prompt',
id: NEGATIVE_CONDITIONING,
prompt: negativePrompt,
style: craftedNegativeStylePrompt,
is_intermediate,
},
[NOISE]: {
type: 'noise',
id: NOISE,
use_cpu,
is_intermediate,
},
[LATENTS_TO_IMAGE]: {
type: 'l2i',
id: LATENTS_TO_IMAGE,
fp32,
is_intermediate,
},
[SDXL_DENOISE_LATENTS]: {
type: 'denoise_latents',
@ -138,6 +145,7 @@ export const buildLinearSDXLImageToImageGraph = (
? Math.min(refinerStart, 1 - strength)
: 1 - strength,
denoising_end: shouldUseSDXLRefiner ? refinerStart : 1,
is_intermediate,
},
[IMAGE_TO_LATENTS]: {
type: 'i2l',
@ -147,6 +155,7 @@ export const buildLinearSDXLImageToImageGraph = (
// image_name: initialImage.image_name,
// },
fp32,
is_intermediate,
},
},
edges: [
@ -398,5 +407,7 @@ export const buildLinearSDXLImageToImageGraph = (
addWatermarkerToGraph(state, graph);
}
addSaveImageNode(state, graph);
return graph;
};


@ -23,6 +23,7 @@ import {
SEAMLESS,
} from './constants';
import { craftSDXLStylePrompt } from './helpers/craftSDXLStylePrompt';
import { addSaveImageNode } from './addSaveImageNode';
export const buildLinearSDXLTextToImageGraph = (
state: RootState
@ -56,13 +57,13 @@ export const buildLinearSDXLTextToImageGraph = (
const use_cpu = shouldUseNoiseSettings
? shouldUseCpuNoise
: initialGenerationState.shouldUseCpuNoise;
if (!model) {
log.error('No model found in state');
throw new Error('No model found in state');
}
const fp32 = vaePrecision === 'fp32';
const is_intermediate = true;
// Construct Style Prompt
const { craftedPositiveStylePrompt, craftedNegativeStylePrompt } =
@ -88,18 +89,21 @@ export const buildLinearSDXLTextToImageGraph = (
type: 'sdxl_model_loader',
id: modelLoaderNodeId,
model,
is_intermediate,
},
[POSITIVE_CONDITIONING]: {
type: 'sdxl_compel_prompt',
id: POSITIVE_CONDITIONING,
prompt: positivePrompt,
style: craftedPositiveStylePrompt,
is_intermediate,
},
[NEGATIVE_CONDITIONING]: {
type: 'sdxl_compel_prompt',
id: NEGATIVE_CONDITIONING,
prompt: negativePrompt,
style: craftedNegativeStylePrompt,
is_intermediate,
},
[NOISE]: {
type: 'noise',
@ -107,6 +111,7 @@ export const buildLinearSDXLTextToImageGraph = (
width,
height,
use_cpu,
is_intermediate,
},
[SDXL_DENOISE_LATENTS]: {
type: 'denoise_latents',
@ -116,11 +121,13 @@ export const buildLinearSDXLTextToImageGraph = (
steps,
denoising_start: 0,
denoising_end: shouldUseSDXLRefiner ? refinerStart : 1,
is_intermediate,
},
[LATENTS_TO_IMAGE]: {
type: 'l2i',
id: LATENTS_TO_IMAGE,
fp32,
is_intermediate,
},
},
edges: [
@ -291,5 +298,7 @@ export const buildLinearSDXLTextToImageGraph = (
addWatermarkerToGraph(state, graph);
}
addSaveImageNode(state, graph);
return graph;
};


@ -26,6 +26,7 @@ import {
SEAMLESS,
TEXT_TO_IMAGE_GRAPH,
} from './constants';
import { addSaveImageNode } from './addSaveImageNode';
export const buildLinearTextToImageGraph = (
state: RootState
@@ -58,7 +59,7 @@ export const buildLinearTextToImageGraph = (
}
const fp32 = vaePrecision === 'fp32';
const is_intermediate = true;
const isUsingOnnxModel = model.model_type === 'onnx';
let modelLoaderNodeId = isUsingOnnxModel
@@ -74,7 +75,7 @@ export const buildLinearTextToImageGraph = (
? {
type: 't2l_onnx',
id: DENOISE_LATENTS,
is_intermediate: true,
is_intermediate,
cfg_scale,
scheduler,
steps,
@@ -82,7 +83,7 @@ export const buildLinearTextToImageGraph = (
: {
type: 'denoise_latents',
id: DENOISE_LATENTS,
is_intermediate: true,
is_intermediate,
cfg_scale,
scheduler,
steps,
@@ -108,26 +109,26 @@ export const buildLinearTextToImageGraph = (
[modelLoaderNodeId]: {
type: modelLoaderNodeType,
id: modelLoaderNodeId,
is_intermediate: true,
is_intermediate,
model,
},
[CLIP_SKIP]: {
type: 'clip_skip',
id: CLIP_SKIP,
skipped_layers: clipSkip,
is_intermediate: true,
is_intermediate,
},
[POSITIVE_CONDITIONING]: {
type: isUsingOnnxModel ? 'prompt_onnx' : 'compel',
id: POSITIVE_CONDITIONING,
prompt: positivePrompt,
is_intermediate: true,
is_intermediate,
},
[NEGATIVE_CONDITIONING]: {
type: isUsingOnnxModel ? 'prompt_onnx' : 'compel',
id: NEGATIVE_CONDITIONING,
prompt: negativePrompt,
is_intermediate: true,
is_intermediate,
},
[NOISE]: {
type: 'noise',
@@ -135,7 +136,7 @@ export const buildLinearTextToImageGraph = (
width,
height,
use_cpu,
is_intermediate: true,
is_intermediate,
},
[t2lNode.id]: t2lNode,
[LATENTS_TO_IMAGE]: {
@@ -293,5 +294,7 @@ export const buildLinearTextToImageGraph = (
addWatermarkerToGraph(state, graph);
}
addSaveImageNode(state, graph);
return graph;
};


@@ -55,6 +55,9 @@ export const buildNodesGraph = (nodesState: NodesState): Graph => {
{} as Record<Exclude<string, 'id' | 'type'>, unknown>
);
// add reserved use_cache
transformedInputs['use_cache'] = node.data.useCache;
// Build this specific node
const graphNode = {
type,
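
The hunk above threads the reserved `use_cache` field from the node's editor state into the serialized graph node. A minimal sketch of that injection, with hypothetical names standing in for the app's actual helpers:

```python
# Sketch: merging the reserved use_cache field into a serialized graph node.
# build_graph_node and its signature are illustrative; the real logic lives in
# buildNodesGraph and operates on the editor's node state.
def build_graph_node(node_id: str, node_type: str, inputs: dict, use_cache: bool) -> dict:
    transformed_inputs = dict(inputs)
    transformed_inputs["use_cache"] = use_cache  # reserved field, set per node instance
    return {"id": node_id, "type": node_type, **transformed_inputs}
```

Because `use_cache` is a per-instance field rather than a class attribute, two nodes of the same type can opt in and out of caching independently.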


@@ -3,6 +3,7 @@ export const POSITIVE_CONDITIONING = 'positive_conditioning';
export const NEGATIVE_CONDITIONING = 'negative_conditioning';
export const DENOISE_LATENTS = 'denoise_latents';
export const LATENTS_TO_IMAGE = 'latents_to_image';
export const SAVE_IMAGE = 'save_image';
export const NSFW_CHECKER = 'nsfw_checker';
export const WATERMARKER = 'invisible_watermark';
export const NOISE = 'noise';


@@ -16,7 +16,7 @@ import {
} from '../types/types';
import { buildInputFieldTemplate, getFieldType } from './fieldTemplateBuilders';
const RESERVED_INPUT_FIELD_NAMES = ['id', 'type', 'metadata'];
const RESERVED_INPUT_FIELD_NAMES = ['id', 'type', 'metadata', 'use_cache'];
const RESERVED_OUTPUT_FIELD_NAMES = ['type'];
const RESERVED_FIELD_TYPES = [
'WorkflowField',
@@ -235,6 +235,8 @@ export const parseSchema = (
{} as Record<string, OutputFieldTemplate>
);
const useCache = schema.properties.use_cache.default;
const invocation: InvocationTemplate = {
title,
type,
@@ -244,6 +246,7 @@
outputType,
inputs,
outputs,
useCache,
};
Object.assign(invocationsAccumulator, { [type]: invocation });
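
The template parser reads each node's default caching behaviour straight from the OpenAPI schema. A rough Python equivalent of that lookup, assuming a schema shaped like the one pydantic emits for an invocation:

```python
# Sketch: reading the use_cache default from an OpenAPI-style invocation schema.
# The schema layout mirrors what pydantic generates; the accessor is illustrative.
def get_use_cache_default(schema: dict) -> bool:
    return schema["properties"]["use_cache"]["default"]
```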

File diff suppressed because one or more lines are too long


@@ -132,6 +132,7 @@ export type DivideInvocation = s['DivideInvocation'];
export type ImageNSFWBlurInvocation = s['ImageNSFWBlurInvocation'];
export type ImageWatermarkInvocation = s['ImageWatermarkInvocation'];
export type SeamlessModeInvocation = s['SeamlessModeInvocation'];
export type SaveImageInvocation = s['SaveImageInvocation'];
// ControlNet Nodes
export type ControlNetInvocation = s['ControlNetInvocation'];


@@ -1,5 +1,9 @@
import logging
import pytest
from invokeai.app.services.invocation_cache.invocation_cache_memory import MemoryInvocationCache
# This import must happen before other invoke imports or tests in other files (!!) break
from .test_nodes import ( # isort: split
PromptCollectionTestInvocation,
@@ -42,7 +46,7 @@ def mock_services() -> InvocationServices:
return InvocationServices(
model_manager=None, # type: ignore
events=TestEventService(),
logger=None, # type: ignore
logger=logging, # type: ignore
images=None, # type: ignore
latents=None, # type: ignore
boards=None, # type: ignore
@@ -53,6 +57,7 @@ def mock_services() -> InvocationServices:
performance_statistics=InvocationStatsService(graph_execution_manager),
processor=DefaultInvocationProcessor(),
configuration=None, # type: ignore
invocation_cache=MemoryInvocationCache(), # type: ignore
)
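
As described above, the cache key is built by serializing the invocation without its unique `id` and hashing the result. A minimal stand-in for that logic, using a plain dict in place of the pydantic model (the real service calls the model's `json()` method, skipping `id`, before hashing):

```python
import json

# Sketch: cache-key creation for the invocation cache. The dict stands in for
# a pydantic invocation model; field names below are illustrative.
def create_key(invocation: dict) -> int:
    # Serialize everything except the unique id, then hash the JSON string
    # with Python's fast builtin hash().
    payload = json.dumps(
        {k: v for k, v in invocation.items() if k != "id"}, sort_keys=True
    )
    return hash(payload)
```

Two nodes that differ only in `id` map to the same key, so re-running a graph with unchanged inputs hits the cache instead of re-executing the node.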


@@ -1,6 +1,9 @@
import logging
import pytest
from invokeai.app.services.graph import Graph, GraphExecutionState, LibraryGraph
from invokeai.app.services.invocation_cache.invocation_cache_memory import MemoryInvocationCache
from invokeai.app.services.invocation_queue import MemoryInvocationQueue
from invokeai.app.services.invocation_services import InvocationServices
from invokeai.app.services.invocation_stats import InvocationStatsService
@@ -39,7 +42,7 @@ def mock_services() -> InvocationServices:
return InvocationServices(
model_manager=None, # type: ignore
events=TestEventService(),
logger=None, # type: ignore
logger=logging, # type: ignore
images=None, # type: ignore
latents=None, # type: ignore
boards=None, # type: ignore
@@ -50,6 +53,7 @@ def mock_services() -> InvocationServices:
processor=DefaultInvocationProcessor(),
performance_statistics=InvocationStatsService(graph_execution_manager),
configuration=None, # type: ignore
invocation_cache=MemoryInvocationCache(),
)
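
The `MemoryInvocationCache` wired into the mock services above is the in-memory implementation. A bare-bones sketch of what it provides, assuming simple oldest-first eviction at `max_size` (512 by default via `node_cache_size`); the class and method names here are illustrative, not the actual implementation:

```python
from collections import OrderedDict
from typing import Any, Optional

class SimpleMemoryCache:
    """Illustrative in-memory node-output cache; does not persist across restarts."""

    def __init__(self, max_size: int = 512) -> None:
        self._max_size = max_size
        self._store: "OrderedDict[int, Any]" = OrderedDict()

    def get(self, key: int) -> Optional[Any]:
        # Return the cached output, or None on a cache miss.
        return self._store.get(key)

    def save(self, key: int, output: Any) -> None:
        if key not in self._store and len(self._store) >= self._max_size:
            self._store.popitem(last=False)  # evict the oldest entry
        self._store[key] = output
```

Only the lightweight pydantic output models are cached; tensors and images stay on disk, and the cached outputs hold only references to them, which is why a fairly large default size is affordable.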