Mirror of https://github.com/invoke-ai/InvokeAI (synced 2024-08-30 20:32:17 +00:00)

Merge branch 'main' into releases/v3.2.0rc1
@@ -296,8 +296,18 @@ code for InvokeAI. For this to work, you will need to install the
on your system, please see the [Git Installation
Guide](https://github.com/git-guides/install-git)

You will also need to install the [frontend development toolchain](https://github.com/invoke-ai/InvokeAI/blob/main/docs/contributing/contribution_guides/contributingToFrontend.md).

If you have a "normal" installation, you should create a totally separate virtual environment for the git-based installation; otherwise the two may interfere.

> **Why do I need the frontend toolchain?**
>
> The InvokeAI project uses trunk-based development. That means our `main` branch is the development branch, and releases are tags on that branch. Because development is very active, we don't keep an updated build of the UI in `main` - we only build it for production releases.
>
> That means that between releases, to have a functioning application when running directly from the repo, you will need to run the UI in dev mode or build it regularly (any time the UI code changes).

1. Create a fork of the InvokeAI repository through the GitHub UI or [this link](https://github.com/invoke-ai/InvokeAI/fork)
1. From the command line, run this command:
2. From the command line, run this command:
```bash
git clone https://github.com/<your_github_username>/InvokeAI.git
```
@@ -305,10 +315,10 @@ Guide](https://github.com/git-guides/install-git)
This will create a directory named `InvokeAI` and populate it with the
full source code from your fork of the InvokeAI repository.

2. Activate the InvokeAI virtual environment as per step (4) of the manual
3. Activate the InvokeAI virtual environment as per step (4) of the manual
installation protocol (important!)

3. Enter the InvokeAI repository directory and run one of these
4. Enter the InvokeAI repository directory and run one of these
commands, based on your GPU:

=== "CUDA (NVidia)"
@@ -334,11 +344,15 @@ installation protocol (important!)
Be sure to pass `-e` (for an editable install) and don't forget the
dot ("."). It is part of the command.

You can now run `invokeai` and its related commands. The code will be
5. Install the [frontend toolchain](https://github.com/invoke-ai/InvokeAI/blob/main/docs/contributing/contribution_guides/contributingToFrontend.md) and do a production build of the UI as described.

6. You can now run `invokeai` and its related commands. The code will be
read from the repository, so that you can edit the .py source files
and watch the code's behavior change.

4. If you wish to contribute to the InvokeAI project, you are
When you pull in new changes to the repo, be sure to re-build the UI.

7. If you wish to contribute to the InvokeAI project, you are
encouraged to establish a GitHub account and "fork"
https://github.com/invoke-ai/InvokeAI into your own copy of the
repository. You can then use GitHub functions to create and submit
@@ -121,18 +121,6 @@ To be imported, an .obj must use triangulated meshes, so make sure to enable tha
**Example Usage:**

--------------------------------
### Enhance Image (simple adjustments)

**Description:** Boost or reduce color saturation, contrast, brightness, sharpness, or invert colors of any image at any stage with this simple wrapper for pillow [PIL]'s ImageEnhance module.

Color inversion is toggled with a simple switch, while each of the four enhancer modes is activated by entering a value other than 1 in its corresponding input field. Values less than 1 will reduce the corresponding property, while values greater than 1 will enhance it.
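A hedged sketch of the Pillow calls this node wraps (this is not the node's actual code, and the file names are hypothetical):

```python
# A factor of 1.0 leaves a property unchanged; <1.0 reduces it, >1.0 boosts it.
from PIL import Image, ImageEnhance, ImageOps

img = Image.open("input.png").convert("RGB")     # hypothetical input file
img = ImageEnhance.Color(img).enhance(1.25)      # boost saturation
img = ImageEnhance.Contrast(img).enhance(0.9)    # reduce contrast
img = ImageEnhance.Brightness(img).enhance(1.1)  # boost brightness
img = ImageEnhance.Sharpness(img).enhance(1.5)   # boost sharpness
img = ImageOps.invert(img)                       # the optional color-inversion switch
img.save("enhanced.png")
```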
**Node Link:** https://github.com/dwringer/image-enhance-node

**Example Usage:**

--------------------------------
### Generative Grammar-Based Prompt Nodes

@@ -153,16 +141,26 @@ This includes 3 Nodes:

**Description:** This is a pack of nodes for composing masks and images, including a simple text mask creator and both image and latent offset nodes. The offsets wrap around, so these can be used in conjunction with the Seamless node to progressively generate images centered on different parts of the seamless tiling.

This includes 4 Nodes:
- *Text Mask (simple 2D)* - create and position a white on black (or black on white) line of text using any font locally available to Invoke.
This includes 14 Nodes:
- *Adjust Image Hue Plus* - Rotate the hue of an image in one of several different color spaces.
- *Blend Latents/Noise (Masked)* - Use a mask to blend part of one latents tensor [including Noise outputs] into another. Can be used to "renoise" sections during a multi-stage [masked] denoising process.
- *Enhance Image* - Boost or reduce color saturation, contrast, brightness, sharpness, or invert colors of any image at any stage with this simple wrapper for pillow [PIL]'s ImageEnhance module.
- *Equivalent Achromatic Lightness* - Calculates image lightness accounting for the Helmholtz-Kohlrausch effect, based on a method described by High, Green, and Nussbaum (2023).
- *Text to Mask (Clipseg)* - Input a prompt and an image to generate a mask representing areas of the image matched by the prompt.
- *Text to Mask Advanced (Clipseg)* - Output up to four prompt masks combined with logical "and", logical "or", or as separate channels of an RGBA image.
- *Image Layer Blend* - Perform a layered blend of two images using alpha compositing. Opacity of the top layer is selectable, with an optional mask and several different blend modes/color spaces.
- *Image Compositor* - Take a subject from an image with a flat backdrop and layer it on another image using a chroma key or flood select background removal.
- *Image Dilate or Erode* - Dilate or erode a mask (or any image!). This is equivalent to an expand/contract operation.
- *Image Value Thresholds* - Clip an image to pure black/white beyond specified thresholds.
- *Offset Latents* - Offset a latents tensor in the vertical and/or horizontal dimensions, wrapping it around (see the sketch after this list).
- *Offset Image* - Offset an image in the vertical and/or horizontal dimensions, wrapping it around.
- *Shadows/Highlights/Midtones* - Extract three masks (with adjustable hard or soft thresholds) representing shadows, midtones, and highlights regions of an image.
- *Text Mask (simple 2D)* - Create and position a white on black (or black on white) line of text using any font locally available to Invoke.
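The wrap-around offsets can be sketched with numpy's `roll`; this is an illustration of the behavior the two Offset nodes describe, not the pack's actual code:

```python
import numpy as np

def offset_wrapped(arr: np.ndarray, dx: int = 0, dy: int = 0) -> np.ndarray:
    """Shift an array right by dx and down by dy, wrapping pixels at the edges."""
    return np.roll(arr, shift=(dy, dx), axis=(0, 1))

tile = np.arange(12).reshape(3, 4)
assert (offset_wrapped(tile, dx=1)[:, 0] == tile[:, -1]).all()  # columns wrap around
```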
**Node Link:** https://github.com/dwringer/composition-nodes

**Example Usage:**
**Nodes and Output Examples:**

--------------------------------
### Size Stepper Nodes
@@ -332,6 +332,7 @@ class InvokeAiInstance:
Configure the InvokeAI runtime directory
"""

auto_install = False
# set sys.argv to a consistent state
new_argv = [sys.argv[0]]
for i in range(1, len(sys.argv)):
@@ -340,13 +341,17 @@ class InvokeAiInstance:
new_argv.append(el)
new_argv.append(sys.argv[i + 1])
elif el in ["-y", "--yes", "--yes-to-all"]:
new_argv.append(el)
auto_install = True
sys.argv = new_argv

import messages
import requests # to catch download exceptions
from messages import introduction

introduction()
auto_install = auto_install or messages.user_wants_auto_configuration()
if auto_install:
sys.argv.append("--yes")
else:
messages.introduction()

from invokeai.frontend.install.invokeai_configure import invokeai_configure
@@ -7,7 +7,7 @@ import os
import platform
from pathlib import Path

from prompt_toolkit import prompt
from prompt_toolkit import HTML, prompt
from prompt_toolkit.completion import PathCompleter
from prompt_toolkit.validation import Validator
from rich import box, print
@@ -65,17 +65,50 @@ def confirm_install(dest: Path) -> bool:
if dest.exists():
print(f":exclamation: Directory {dest} already exists :exclamation:")
dest_confirmed = Confirm.ask(
":stop_sign: Are you sure you want to (re)install in this location?",
":stop_sign: (re)install in this location?",
default=False,
)
else:
print(f"InvokeAI will be installed in {dest}")
dest_confirmed = not Confirm.ask("Would you like to pick a different location?", default=False)
dest_confirmed = Confirm.ask("Use this location?", default=True)
console.line()

return dest_confirmed


def user_wants_auto_configuration() -> bool:
"""Prompt the user to choose between manual and auto configuration."""
console.rule("InvokeAI Configuration Section")
console.print(
Panel(
Group(
"\n".join(
[
"Libraries are installed and InvokeAI will now set up its root directory and configuration. Choose between:",
"",
" * AUTOMATIC configuration: install reasonable defaults and a minimal set of starter models.",
" * MANUAL configuration: manually inspect and adjust configuration options and pick from a larger set of starter models.",
"",
"Later you can fine tune your configuration by selecting option [6] 'Change InvokeAI startup options' from the invoke.bat/invoke.sh launcher script.",
]
),
),
box=box.MINIMAL,
padding=(1, 1),
)
)
choice = (
prompt(
HTML("Choose <b><a></b>utomatic or <b><m></b>anual configuration [a/m] (a): "),
validator=Validator.from_callable(
lambda n: n == "" or n.startswith(("a", "A", "m", "M")), error_message="Please select 'a' or 'm'"
),
)
or "a"
)
return choice.lower().startswith("a")


def dest_path(dest=None) -> Path:
"""
Prompt the user for the destination path and create the path
@@ -49,7 +49,7 @@ def check_internet() -> bool:
return False


logger = InvokeAILogger.getLogger()
logger = InvokeAILogger.get_logger()


class ApiDependencies:
@@ -7,6 +7,7 @@ from fastapi.routing import APIRouter
from pydantic import BaseModel, Field

from invokeai.app.invocations.upscale import ESRGAN_MODELS
from invokeai.app.services.invocation_cache.invocation_cache_common import InvocationCacheStatus
from invokeai.backend.image_util.invisible_watermark import InvisibleWatermark
from invokeai.backend.image_util.patchmatch import PatchMatch
from invokeai.backend.image_util.safety_checker import SafetyChecker
@@ -113,3 +114,33 @@ async def set_log_level(
async def clear_invocation_cache() -> None:
"""Clears the invocation cache"""
ApiDependencies.invoker.services.invocation_cache.clear()


@app_router.put(
"/invocation_cache/enable",
operation_id="enable_invocation_cache",
responses={200: {"description": "The operation was successful"}},
)
async def enable_invocation_cache() -> None:
"""Enables the invocation cache"""
ApiDependencies.invoker.services.invocation_cache.enable()


@app_router.put(
"/invocation_cache/disable",
operation_id="disable_invocation_cache",
responses={200: {"description": "The operation was successful"}},
)
async def disable_invocation_cache() -> None:
"""Disables the invocation cache"""
ApiDependencies.invoker.services.invocation_cache.disable()


@app_router.get(
"/invocation_cache/status",
operation_id="get_invocation_cache_status",
responses={200: {"model": InvocationCacheStatus}},
)
async def get_invocation_cache_status() -> InvocationCacheStatus:
"""Gets the status of the invocation cache"""
return ApiDependencies.invoker.services.invocation_cache.get_status()
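A hedged usage sketch for these endpoints from a Python client; the `/api/v1/app` prefix and the local host/port are assumptions, since the router's mount point is not shown in this diff:

```python
import requests

BASE = "http://127.0.0.1:9090/api/v1/app"  # assumed host, port, and route prefix

requests.put(f"{BASE}/invocation_cache/disable").raise_for_status()
requests.put(f"{BASE}/invocation_cache/enable").raise_for_status()
status = requests.get(f"{BASE}/invocation_cache/status").json()
print(status)  # e.g. {"size": 0, "hits": 0, "misses": 0, "enabled": true, "max_size": 0}
```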
@@ -146,7 +146,8 @@ async def update_model(
async def import_model(
location: str = Body(description="A model path, repo_id or URL to import"),
prediction_type: Optional[Literal["v_prediction", "epsilon", "sample"]] = Body(
description="Prediction type for SDv2 checkpoint files", default="v_prediction"
description="Prediction type for SDv2 checkpoints and rare SDv1 checkpoints",
default=None,
),
) -> ImportModelResponse:
"""Add a model using its local path, repo_id, or remote URL. Model characteristics will be probed and configured automatically"""
@@ -8,7 +8,6 @@ app_config.parse_args()

if True: # hack to make flake8 happy with imports coming after setting up the config
import asyncio
import logging
import mimetypes
import socket
from inspect import signature
@@ -41,7 +40,9 @@ if True: # hack to make flake8 happy with imports coming after setting up the c
import invokeai.backend.util.mps_fixes # noqa: F401 (monkeypatching on import)


logger = InvokeAILogger.getLogger(config=app_config)
app_config = InvokeAIAppConfig.get_config()
app_config.parse_args()
logger = InvokeAILogger.get_logger(config=app_config)

# fix for windows mimetypes registry entries being borked
# see https://github.com/invoke-ai/InvokeAI/discussions/3684#discussioncomment-6391352
@@ -223,7 +224,7 @@ def invoke_api():
exc_info=e,
)
else:
jurigged.watch(logger=InvokeAILogger.getLogger(name="jurigged").info)
jurigged.watch(logger=InvokeAILogger.get_logger(name="jurigged").info)

port = find_port(app_config.port)
if port != app_config.port:
@@ -242,7 +243,7 @@ def invoke_api():

# replace uvicorn's loggers with InvokeAI's for consistent appearance
for logname in ["uvicorn.access", "uvicorn"]:
log = logging.getLogger(logname)
log = InvokeAILogger.get_logger(logname)
log.handlers.clear()
for ch in logger.handlers:
log.addHandler(ch)
@@ -7,8 +7,6 @@ from .services.config import InvokeAIAppConfig
# parse_args() must be called before any other imports. if it is not called first, consumers of the config
# which are imported/used before parse_args() is called will get the default config values instead of the
# values from the command line or config file.
config = InvokeAIAppConfig.get_config()
config.parse_args()

if True: # hack to make flake8 happy with imports coming after setting up the config
import argparse
@@ -61,8 +59,9 @@ if True: # hack to make flake8 happy with imports coming after setting up the c
if torch.backends.mps.is_available():
import invokeai.backend.util.mps_fixes # noqa: F401 (monkeypatching on import)


logger = InvokeAILogger().getLogger(config=config)
config = InvokeAIAppConfig.get_config()
config.parse_args()
logger = InvokeAILogger().get_logger(config=config)


class CliCommand(BaseModel):
@@ -88,6 +88,12 @@ class FieldDescriptions:
num_1 = "The first number"
num_2 = "The second number"
mask = "The mask to use for the operation"
board = "The board to save the image to"
image = "The image to process"
tile_size = "Tile size"
inclusive_low = "The inclusive low value"
exclusive_high = "The exclusive high value"
decimal_places = "The number of decimal places to round to"


class Input(str, Enum):
@@ -173,6 +179,7 @@ class UIType(str, Enum):
WorkflowField = "WorkflowField"
IsIntermediate = "IsIntermediate"
MetadataField = "MetadataField"
BoardField = "BoardField"
# endregion


@@ -656,6 +663,8 @@ def invocation(
:param Optional[str] title: Adds a title to the invocation. Use if the auto-generated title isn't quite right. Defaults to None.
:param Optional[list[str]] tags: Adds tags to the invocation. Invocations may be searched for by their tags. Defaults to None.
:param Optional[str] category: Adds a category to the invocation. Used to group the invocations in the UI. Defaults to None.
:param Optional[str] version: Adds a version to the invocation. Must be a valid semver string. Defaults to None.
:param Optional[bool] use_cache: Whether or not to use the invocation cache. Defaults to True. The user may override this in the workflow editor.
"""

def wrapper(cls: Type[GenericBaseInvocation]) -> Type[GenericBaseInvocation]:
@@ -559,3 +559,33 @@ class SamDetectorReproducibleColors(SamDetector):
img[:, :] = ann_color
final_img.paste(Image.fromarray(img, mode="RGB"), (0, 0), Image.fromarray(np.uint8(m * 255)))
return np.array(final_img, dtype=np.uint8)


@invocation(
"color_map_image_processor",
title="Color Map Processor",
tags=["controlnet"],
category="controlnet",
version="1.0.0",
)
class ColorMapImageProcessorInvocation(ImageProcessorInvocation):
"""Generates a color map from the provided image"""

color_map_tile_size: int = InputField(default=64, ge=0, description=FieldDescriptions.tile_size)

def run_processor(self, image: Image.Image):
image = image.convert("RGB")
image = np.array(image, dtype=np.uint8)
height, width = image.shape[:2]

width_tile_size = min(self.color_map_tile_size, width)
height_tile_size = min(self.color_map_tile_size, height)

color_map = cv2.resize(
image,
(width // width_tile_size, height // height_tile_size),
interpolation=cv2.INTER_CUBIC,
)
color_map = cv2.resize(color_map, (width, height), interpolation=cv2.INTER_NEAREST)
color_map = Image.fromarray(color_map)
return color_map
@@ -8,12 +8,12 @@ import numpy
from PIL import Image, ImageChops, ImageFilter, ImageOps

from invokeai.app.invocations.metadata import CoreMetadata
from invokeai.app.invocations.primitives import ColorField, ImageField, ImageOutput
from invokeai.app.invocations.primitives import BoardField, ColorField, ImageField, ImageOutput
from invokeai.backend.image_util.invisible_watermark import InvisibleWatermark
from invokeai.backend.image_util.safety_checker import SafetyChecker

from ..models.image import ImageCategory, ResourceOrigin
from .baseinvocation import BaseInvocation, FieldDescriptions, InputField, InvocationContext, invocation
from .baseinvocation import BaseInvocation, FieldDescriptions, Input, InputField, InvocationContext, invocation


@invocation("show_image", title="Show Image", tags=["image"], category="image", version="1.0.0")
@@ -972,13 +972,14 @@ class ImageChannelMultiplyInvocation(BaseInvocation):
title="Save Image",
tags=["primitives", "image"],
category="primitives",
version="1.0.0",
version="1.0.1",
use_cache=False,
)
class SaveImageInvocation(BaseInvocation):
"""Saves an image. Unlike an image primitive, this invocation stores a copy of the image."""

image: ImageField = InputField(description="The image to load")
image: ImageField = InputField(description=FieldDescriptions.image)
board: Optional[BoardField] = InputField(default=None, description=FieldDescriptions.board, input=Input.Direct)
metadata: CoreMetadata = InputField(
default=None,
description=FieldDescriptions.core_metadata,
@@ -992,6 +993,7 @@ class SaveImageInvocation(BaseInvocation):
image=image,
image_origin=ResourceOrigin.INTERNAL,
image_category=ImageCategory.GENERAL,
board_id=self.board.board_id if self.board else None,
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
@@ -58,9 +58,7 @@ class IPAdapterInvocation(BaseInvocation):
# Inputs
image: ImageField = InputField(description="The IP-Adapter image prompt.")
ip_adapter_model: IPAdapterModelField = InputField(
description="The IP-Adapter model.",
title="IP-Adapter Model",
input=Input.Direct,
description="The IP-Adapter model.", title="IP-Adapter Model", input=Input.Direct, ui_order=-1
)

# weight: float = InputField(default=1.0, description="The weight of the IP-Adapter.", ui_type=UIType.Float)
@@ -65,13 +65,27 @@ class DivideInvocation(BaseInvocation):
class RandomIntInvocation(BaseInvocation):
"""Outputs a single random integer."""

low: int = InputField(default=0, description="The inclusive low value")
high: int = InputField(default=np.iinfo(np.int32).max, description="The exclusive high value")
low: int = InputField(default=0, description=FieldDescriptions.inclusive_low)
high: int = InputField(default=np.iinfo(np.int32).max, description=FieldDescriptions.exclusive_high)

def invoke(self, context: InvocationContext) -> IntegerOutput:
return IntegerOutput(value=np.random.randint(self.low, self.high))


@invocation("rand_float", title="Random Float", tags=["math", "float", "random"], category="math", version="1.0.0")
class RandomFloatInvocation(BaseInvocation):
"""Outputs a single random float"""

low: float = InputField(default=0.0, description=FieldDescriptions.inclusive_low)
high: float = InputField(default=1.0, description=FieldDescriptions.exclusive_high)
decimals: int = InputField(default=2, description=FieldDescriptions.decimal_places)

def invoke(self, context: InvocationContext) -> FloatOutput:
random_float = np.random.uniform(self.low, self.high)
rounded_float = round(random_float, self.decimals)
return FloatOutput(value=rounded_float)


@invocation(
"float_to_int",
title="Float To Integer",
@@ -42,7 +42,8 @@ class CoreMetadata(BaseModelExcludeNull):
cfg_scale: float = Field(description="The classifier-free guidance scale parameter")
steps: int = Field(description="The number of steps used for inference")
scheduler: str = Field(description="The scheduler used for inference")
clip_skip: int = Field(
clip_skip: Optional[int] = Field(
default=None,
description="The number of skipped CLIP layers",
)
model: MainModelField = Field(description="The main model used for inference")
@@ -116,7 +117,8 @@ class MetadataAccumulatorInvocation(BaseInvocation):
cfg_scale: float = InputField(description="The classifier-free guidance scale parameter")
steps: int = InputField(description="The number of steps used for inference")
scheduler: str = InputField(description="The scheduler used for inference")
clip_skip: int = InputField(
clip_skip: Optional[int] = Field(
default=None,
description="The number of skipped CLIP layers",
)
model: MainModelField = InputField(description="The main model used for inference")
@@ -226,6 +226,12 @@ class ImageField(BaseModel):
image_name: str = Field(description="The name of the image")


class BoardField(BaseModel):
"""A board primitive field"""

board_id: str = Field(description="The id of the board")


@invocation_output("image_output")
class ImageOutput(BaseInvocationOutput):
"""Base class for nodes that output a single image"""
@@ -241,7 +241,7 @@ class InvokeAIAppConfig(InvokeAISettings):
version : bool = Field(default=False, description="Show InvokeAI version and exit", category="Other")

# CACHE
ram : Union[float, Literal["auto"]] = Field(default=6.0, gt=0, description="Maximum memory amount used by model cache for rapid switching (floating point number or 'auto')", category="Model Cache", )
ram : Union[float, Literal["auto"]] = Field(default=7.5, gt=0, description="Maximum memory amount used by model cache for rapid switching (floating point number or 'auto')", category="Model Cache", )
vram : Union[float, Literal["auto"]] = Field(default=0.25, ge=0, description="Amount of VRAM reserved for model storage (floating point number or 'auto')", category="Model Cache", )
lazy_offload : bool = Field(default=True, description="Keep models in VRAM until their space is needed", category="Model Cache", )

@@ -277,6 +277,7 @@ class InvokeAIAppConfig(InvokeAISettings):

class Config:
validate_assignment = True
env_prefix = "INVOKEAI"

def parse_args(self, argv: Optional[list[str]] = None, conf: Optional[DictConfig] = None, clobber=False):
"""
@@ -117,6 +117,10 @@ def are_connection_types_compatible(from_type: Any, to_type: Any) -> bool:
if from_type is int and to_type is float:
return True

# allow int|float -> str, pydantic will cast for us
if (from_type is int or from_type is float) and to_type is str:
return True
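An illustration of the cast that comment relies on; this assumes pydantic v1 semantics (the version in use is not shown in this diff), where a plain `str`-typed field coerces incoming numbers to their string form:

```python
from pydantic import BaseModel

class StrField(BaseModel):
    value: str  # a non-strict str field coerces int/float inputs

assert StrField(value=42).value == "42"
assert StrField(value=1.5).value == "1.5"
```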
# if not issubclass(from_type, to_type):
if not is_union_subtype(from_type, to_type):
return False
@@ -2,6 +2,7 @@ from abc import ABC, abstractmethod
from typing import Optional, Union

from invokeai.app.invocations.baseinvocation import BaseInvocation, BaseInvocationOutput
from invokeai.app.services.invocation_cache.invocation_cache_common import InvocationCacheStatus


class InvocationCacheBase(ABC):
@@ -32,7 +33,7 @@ class InvocationCacheBase(ABC):

@abstractmethod
def delete(self, key: Union[int, str]) -> None:
"""Deleteds an invocation output from the cache"""
"""Deletes an invocation output from the cache"""
pass

@abstractmethod
@@ -44,3 +45,18 @@ class InvocationCacheBase(ABC):
def create_key(self, invocation: BaseInvocation) -> int:
"""Gets the key for the invocation's cache item"""
pass

@abstractmethod
def disable(self) -> None:
"""Disables the cache, overriding the max cache size"""
pass

@abstractmethod
def enable(self) -> None:
"""Enables the cache, letting the max cache size take effect"""
pass

@abstractmethod
def get_status(self) -> InvocationCacheStatus:
"""Returns the status of the cache"""
pass
@@ -0,0 +1,9 @@
from pydantic import BaseModel, Field


class InvocationCacheStatus(BaseModel):
size: int = Field(description="The current size of the invocation cache")
hits: int = Field(description="The number of cache hits")
misses: int = Field(description="The number of cache misses")
enabled: bool = Field(description="Whether the invocation cache is enabled")
max_size: int = Field(description="The maximum size of the invocation cache")
@@ -1,81 +1,126 @@
from queue import Queue
from collections import OrderedDict
from dataclasses import dataclass, field
from threading import Lock
from typing import Optional, Union

from invokeai.app.invocations.baseinvocation import BaseInvocation, BaseInvocationOutput
from invokeai.app.services.invocation_cache.invocation_cache_base import InvocationCacheBase
from invokeai.app.services.invocation_cache.invocation_cache_common import InvocationCacheStatus
from invokeai.app.services.invoker import Invoker


@dataclass(order=True)
class CachedItem:
invocation_output: BaseInvocationOutput = field(compare=False)
invocation_output_json: str = field(compare=False)


class MemoryInvocationCache(InvocationCacheBase):
__cache: dict[Union[int, str], tuple[BaseInvocationOutput, str]]
__max_cache_size: int
__cache_ids: Queue
__invoker: Invoker
_cache: OrderedDict[Union[int, str], CachedItem]
_max_cache_size: int
_disabled: bool
_hits: int
_misses: int
_invoker: Invoker
_lock: Lock

def __init__(self, max_cache_size: int = 0) -> None:
self.__cache = dict()
self.__max_cache_size = max_cache_size
self.__cache_ids = Queue()
self._cache = OrderedDict()
self._max_cache_size = max_cache_size
self._disabled = False
self._hits = 0
self._misses = 0
self._lock = Lock()

def start(self, invoker: Invoker) -> None:
self.__invoker = invoker
if self.__max_cache_size == 0:
self._invoker = invoker
if self._max_cache_size == 0:
return
self.__invoker.services.images.on_deleted(self._delete_by_match)
self.__invoker.services.latents.on_deleted(self._delete_by_match)
self._invoker.services.images.on_deleted(self._delete_by_match)
self._invoker.services.latents.on_deleted(self._delete_by_match)

def get(self, key: Union[int, str]) -> Optional[BaseInvocationOutput]:
if self.__max_cache_size == 0:
return

item = self.__cache.get(key, None)
if item is not None:
return item[0]
with self._lock:
if self._max_cache_size == 0 or self._disabled:
return None
item = self._cache.get(key, None)
if item is not None:
self._hits += 1
self._cache.move_to_end(key)
return item.invocation_output
self._misses += 1
return None

def save(self, key: Union[int, str], invocation_output: BaseInvocationOutput) -> None:
if self.__max_cache_size == 0:
return
with self._lock:
if self._max_cache_size == 0 or self._disabled or key in self._cache:
return
# If the cache is full, we need to remove the least used
number_to_delete = len(self._cache) + 1 - self._max_cache_size
self._delete_oldest_access(number_to_delete)
self._cache[key] = CachedItem(invocation_output, invocation_output.json())

if key not in self.__cache:
self.__cache[key] = (invocation_output, invocation_output.json())
self.__cache_ids.put(key)
if self.__cache_ids.qsize() > self.__max_cache_size:
try:
self.__cache.pop(self.__cache_ids.get())
except KeyError:
# this means the cache_ids are somehow out of sync w/ the cache
pass
def _delete_oldest_access(self, number_to_delete: int) -> None:
number_to_delete = min(number_to_delete, len(self._cache))
for _ in range(number_to_delete):
self._cache.popitem(last=False)

def _delete(self, key: Union[int, str]) -> None:
if self._max_cache_size == 0:
return
if key in self._cache:
del self._cache[key]

def delete(self, key: Union[int, str]) -> None:
if self.__max_cache_size == 0:
return

if key in self.__cache:
del self.__cache[key]
with self._lock:
return self._delete(key)

def clear(self, *args, **kwargs) -> None:
if self.__max_cache_size == 0:
return
with self._lock:
if self._max_cache_size == 0:
return
self._cache.clear()
self._misses = 0
self._hits = 0

self.__cache.clear()
self.__cache_ids = Queue()

def create_key(self, invocation: BaseInvocation) -> int:
@staticmethod
def create_key(invocation: BaseInvocation) -> int:
return hash(invocation.json(exclude={"id"}))
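The key construction above means every field except the node's `id` participates in the hash, so two otherwise-identical invocations share a cache entry. A self-contained illustration, with a plain dict standing in for the pydantic model:

```python
import json

def create_key(node: dict) -> int:
    # hash everything except "id", mirroring json(exclude={"id"}) above
    return hash(json.dumps({k: v for k, v in node.items() if k != "id"}, sort_keys=True))

a = {"id": "n1", "type": "rand_int", "low": 0, "high": 10}
b = {"id": "n2", "type": "rand_int", "low": 0, "high": 10}
assert create_key(a) == create_key(b)
```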
def disable(self) -> None:
with self._lock:
if self._max_cache_size == 0:
return
self._disabled = True

def enable(self) -> None:
with self._lock:
if self._max_cache_size == 0:
return
self._disabled = False

def get_status(self) -> InvocationCacheStatus:
with self._lock:
return InvocationCacheStatus(
hits=self._hits,
misses=self._misses,
enabled=not self._disabled and self._max_cache_size > 0,
size=len(self._cache),
max_size=self._max_cache_size,
)

def _delete_by_match(self, to_match: str) -> None:
if self.__max_cache_size == 0:
return

keys_to_delete = set()
for key, value_tuple in self.__cache.items():
if to_match in value_tuple[1]:
keys_to_delete.add(key)

if not keys_to_delete:
return

for key in keys_to_delete:
self.delete(key)

self.__invoker.services.logger.debug(f"Deleted {len(keys_to_delete)} cached invocation outputs for {to_match}")
with self._lock:
if self._max_cache_size == 0:
return
keys_to_delete = set()
for key, cached_item in self._cache.items():
if to_match in cached_item.invocation_output_json:
keys_to_delete.add(key)
if not keys_to_delete:
return
for key in keys_to_delete:
self._delete(key)
self._invoker.services.logger.debug(
f"Deleted {len(keys_to_delete)} cached invocation outputs for {to_match}"
)
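The LRU bookkeeping in the rewritten cache rests on two `OrderedDict` operations; a standalone illustration of the pattern:

```python
# move_to_end marks an entry most recently used; popitem(last=False) evicts the oldest.
from collections import OrderedDict

lru: OrderedDict[str, int] = OrderedDict()
lru["a"], lru["b"], lru["c"] = 1, 2, 3
lru.move_to_end("a")                     # "a" becomes most recently used
oldest_key, _ = lru.popitem(last=False)  # evicts "b", the least recently used
assert oldest_key == "b" and list(lru) == ["c", "a"]
```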
@@ -47,20 +47,27 @@ class DefaultSessionProcessor(SessionProcessorBase):
async def _on_queue_event(self, event: FastAPIEvent) -> None:
event_name = event[1]["event"]

match event_name:
case "graph_execution_state_complete" | "invocation_error" | "session_retrieval_error" | "invocation_retrieval_error":
self.__queue_item = None
self._poll_now()
case "session_canceled" if self.__queue_item is not None and self.__queue_item.session_id == event[1][
"data"
]["graph_execution_state_id"]:
self.__queue_item = None
self._poll_now()
case "batch_enqueued":
self._poll_now()
case "queue_cleared":
self.__queue_item = None
self._poll_now()
# This was a match statement, but match is not supported on python 3.9
if event_name in [
"graph_execution_state_complete",
"invocation_error",
"session_retrieval_error",
"invocation_retrieval_error",
]:
self.__queue_item = None
self._poll_now()
elif (
event_name == "session_canceled"
and self.__queue_item is not None
and self.__queue_item.session_id == event[1]["data"]["graph_execution_state_id"]
):
self.__queue_item = None
self._poll_now()
elif event_name == "batch_enqueued":
self._poll_now()
elif event_name == "queue_cleared":
self.__queue_item = None
self._poll_now()

def resume(self) -> SessionProcessorStatus:
if not self.__resume_event.is_set():
@@ -92,30 +99,34 @@ class DefaultSessionProcessor(SessionProcessorBase):
self.__invoker.services.logger
while not stop_event.is_set():
poll_now_event.clear()
try:
# do not dequeue if there is already a session running
if self.__queue_item is None and resume_event.is_set():
queue_item = self.__invoker.services.session_queue.dequeue()

# do not dequeue if there is already a session running
if self.__queue_item is None and resume_event.is_set():
queue_item = self.__invoker.services.session_queue.dequeue()
if queue_item is not None:
self.__invoker.services.logger.debug(f"Executing queue item {queue_item.item_id}")
self.__queue_item = queue_item
self.__invoker.services.graph_execution_manager.set(queue_item.session)
self.__invoker.invoke(
session_queue_batch_id=queue_item.batch_id,
session_queue_id=queue_item.queue_id,
session_queue_item_id=queue_item.item_id,
graph_execution_state=queue_item.session,
invoke_all=True,
)
queue_item = None

if queue_item is not None:
self.__invoker.services.logger.debug(f"Executing queue item {queue_item.item_id}")
self.__queue_item = queue_item
self.__invoker.services.graph_execution_manager.set(queue_item.session)
self.__invoker.invoke(
session_queue_batch_id=queue_item.batch_id,
session_queue_id=queue_item.queue_id,
session_queue_item_id=queue_item.item_id,
graph_execution_state=queue_item.session,
invoke_all=True,
)
queue_item = None

if queue_item is None:
self.__invoker.services.logger.debug("Waiting for next polling interval or event")
if queue_item is None:
self.__invoker.services.logger.debug("Waiting for next polling interval or event")
poll_now_event.wait(POLLING_INTERVAL)
continue
except Exception as e:
self.__invoker.services.logger.error(f"Error in session processor: {e}")
poll_now_event.wait(POLLING_INTERVAL)
continue
except Exception as e:
self.__invoker.services.logger.error(f"Error in session processor: {e}")
self.__invoker.services.logger.error(f"Fatal Error in session processor: {e}")
pass
finally:
stop_event.clear()
@@ -162,15 +162,15 @@ class SessionQueueItemWithoutGraph(BaseModel):
session_id: str = Field(
description="The ID of the session associated with this queue item. The session doesn't exist in graph_executions until the queue item is executed."
)
field_values: Optional[list[NodeFieldValue]] = Field(
default=None, description="The field values that were used for this queue item"
)
queue_id: str = Field(description="The id of the queue with which this item is associated")
error: Optional[str] = Field(default=None, description="The error message if this queue item errored")
created_at: Union[datetime.datetime, str] = Field(description="When this queue item was created")
updated_at: Union[datetime.datetime, str] = Field(description="When this queue item was updated")
started_at: Optional[Union[datetime.datetime, str]] = Field(description="When this queue item was started")
completed_at: Optional[Union[datetime.datetime, str]] = Field(description="When this queue item was completed")
queue_id: str = Field(description="The id of the queue with which this item is associated")
field_values: Optional[list[NodeFieldValue]] = Field(
default=None, description="The field values that were used for this queue item"
)

@classmethod
def from_dict(cls, queue_item_dict: dict) -> "SessionQueueItemDTO":
@@ -59,13 +59,14 @@ class SqliteSessionQueue(SessionQueueBase):

async def _on_session_event(self, event: FastAPIEvent) -> FastAPIEvent:
event_name = event[1]["event"]
match event_name:
case "graph_execution_state_complete":
await self._handle_complete_event(event)
case "invocation_error" | "session_retrieval_error" | "invocation_retrieval_error":
await self._handle_error_event(event)
case "session_canceled":
await self._handle_cancel_event(event)

# This was a match statement, but match is not supported on python 3.9
if event_name == "graph_execution_state_complete":
await self._handle_complete_event(event)
elif event_name in ["invocation_error", "session_retrieval_error", "invocation_retrieval_error"]:
await self._handle_error_event(event)
elif event_name == "session_canceled":
await self._handle_cancel_event(event)
return event

async def _handle_complete_event(self, event: FastAPIEvent) -> None:
@@ -70,7 +70,6 @@ def get_literal_fields(field) -> list[Any]:
config = InvokeAIAppConfig.get_config()

Model_dir = "models"

Default_config_file = config.model_conf_path
SD_Configs = config.legacy_conf_path

@@ -93,7 +92,7 @@ INIT_FILE_PREAMBLE = """# InvokeAI initialization file
# or renaming it and then running invokeai-configure again.
"""

logger = InvokeAILogger.getLogger()
logger = InvokeAILogger.get_logger()


class DummyWidgetValue(Enum):
@@ -458,7 +457,7 @@ Use cursor arrows to make a checkbox selection, and space to toggle.
)
self.add_widget_intelligent(
npyscreen.TitleFixedText,
name="Model RAM cache size (GB). Make this at least large enough to hold a single full model.",
name="Model RAM cache size (GB). Make this at least large enough to hold a single full model (2GB for SD-1, 6GB for SDXL).",
begin_entry_at=0,
editable=False,
color="CONTROL",
@@ -651,8 +650,19 @@ def edit_opts(program_opts: Namespace, invokeai_opts: Namespace) -> argparse.Nam
return editApp.new_opts()


def default_ramcache() -> float:
"""Run a heuristic for the default RAM cache based on installed RAM."""

# Note that on my 64 GB machine, psutil.virtual_memory().total gives 62 GB,
# so we adjust everything down a bit.
return (
15.0 if MAX_RAM >= 60 else 7.5 if MAX_RAM >= 30 else 4 if MAX_RAM >= 14 else 2.1
) # 2.1 is just large enough for sd 1.5 ;-)
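A hedged illustration of those tiers; `MAX_RAM` is presumably installed RAM in GB derived via psutil (its definition is not shown in this diff):

```python
import psutil

# A "64 GB" machine typically reports ~62 GB, landing in the >= 60 tier (15 GB cache).
MAX_RAM = psutil.virtual_memory().total / (1024**3)
cache_gb = 15.0 if MAX_RAM >= 60 else 7.5 if MAX_RAM >= 30 else 4 if MAX_RAM >= 14 else 2.1
```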
def default_startup_options(init_file: Path) -> Namespace:
opts = InvokeAIAppConfig.get_config()
opts.ram = default_ramcache()
return opts


@@ -894,7 +904,7 @@ def main():
if opt.full_precision:
invoke_args.extend(["--precision", "float32"])
config.parse_args(invoke_args)
logger = InvokeAILogger().getLogger(config=config)
logger = InvokeAILogger().get_logger(config=config)

errors = set()

@@ -30,7 +30,7 @@ warnings.filterwarnings("ignore")

# --------------------------globals-----------------------
config = InvokeAIAppConfig.get_config()
logger = InvokeAILogger.getLogger(name="InvokeAI")
logger = InvokeAILogger.get_logger(name="InvokeAI")

# the initial "configs" dir is now bundled in the `invokeai.configs` package
Dataset_path = Path(configs.__path__[0]) / "INITIAL_MODELS.yaml"
@@ -47,8 +47,14 @@ Config_preamble = """

LEGACY_CONFIGS = {
BaseModelType.StableDiffusion1: {
ModelVariantType.Normal: "v1-inference.yaml",
ModelVariantType.Inpaint: "v1-inpainting-inference.yaml",
ModelVariantType.Normal: {
SchedulerPredictionType.Epsilon: "v1-inference.yaml",
SchedulerPredictionType.VPrediction: "v1-inference-v.yaml",
},
ModelVariantType.Inpaint: {
SchedulerPredictionType.Epsilon: "v1-inpainting-inference.yaml",
SchedulerPredictionType.VPrediction: "v1-inpainting-inference-v.yaml",
},
},
BaseModelType.StableDiffusion2: {
ModelVariantType.Normal: {
@@ -69,14 +75,6 @@ LEGACY_CONFIGS = {
}
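With that nesting, a legacy config is looked up base model, then variant, then prediction type. A self-contained sketch with plain strings standing in for the enum members:

```python
legacy_configs = {
    "sd-1": {
        "normal": {
            "epsilon": "v1-inference.yaml",
            "v_prediction": "v1-inference-v.yaml",
        },
        "inpaint": {
            "epsilon": "v1-inpainting-inference.yaml",
            "v_prediction": "v1-inpainting-inference-v.yaml",
        },
    },
}
assert legacy_configs["sd-1"]["normal"]["v_prediction"] == "v1-inference-v.yaml"
```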
@dataclass
class ModelInstallList:
"""Class for listing models to be installed/removed"""

install_models: List[str] = field(default_factory=list)
remove_models: List[str] = field(default_factory=list)


@dataclass
class InstallSelections:
install_models: List[str] = field(default_factory=list)
@@ -94,6 +92,7 @@ class ModelLoadInfo:
installed: bool = False
recommended: bool = False
default: bool = False
requires: Optional[List[str]] = field(default_factory=list)


class ModelInstall(object):
@@ -131,8 +130,6 @@ class ModelInstall(object):

# supplement with entries in models.yaml
installed_models = [x for x in self.mgr.list_models()]
# suppresses autoloaded models
# installed_models = [x for x in self.mgr.list_models() if not self._is_autoloaded(x)]

for md in installed_models:
base = md["base_model"]
@@ -164,9 +161,12 @@ class ModelInstall(object):

def list_models(self, model_type):
installed = self.mgr.list_models(model_type=model_type)
print()
print(f"Installed models of type `{model_type}`:")
print(f"{'Model Key':50} Model Path")
for i in installed:
print(f"{i['model_name']}\t{i['base_model']}\t{i['path']}")
print(f"{'/'.join([i['base_model'],i['model_type'],i['model_name']]):50} {i['path']}")
print()

# logic here a little reversed to maintain backward compatibility
def starter_models(self, all_models: bool = False) -> Set[str]:
@@ -204,6 +204,8 @@ class ModelInstall(object):
job += 1

# add requested models
self._remove_installed(selections.install_models)
self._add_required_models(selections.install_models)
for path in selections.install_models:
logger.info(f"Installing {path} [{job}/{jobs}]")
try:
@@ -263,6 +265,26 @@ class ModelInstall(object):

return models_installed

def _remove_installed(self, model_list: List[str]):
all_models = self.all_models()
for path in model_list:
key = self.reverse_paths.get(path)
if key and all_models[key].installed:
logger.warning(f"{path} already installed. Skipping.")
model_list.remove(path)

def _add_required_models(self, model_list: List[str]):
additional_models = []
all_models = self.all_models()
for path in model_list:
if not (key := self.reverse_paths.get(path)):
continue
for requirement in all_models[key].requires:
requirement_key = self.reverse_paths.get(requirement)
if not all_models[requirement_key].installed:
additional_models.append(requirement)
model_list.extend(additional_models)

# install a model from a local path. The optional info parameter is there to prevent
# the model from being probed twice in the event that it has already been probed.
def _install_path(self, path: Path, info: ModelProbeInfo = None) -> AddModelResult:
@@ -286,7 +308,7 @@ class ModelInstall(object):
location = download_with_resume(url, Path(staging))
if not location:
logger.error(f"Unable to download {url}. Skipping.")
info = ModelProbe().heuristic_probe(location)
info = ModelProbe().heuristic_probe(location, self.prediction_helper)
dest = self.config.models_path / info.base_type.value / info.model_type.value / location.name
dest.parent.mkdir(parents=True, exist_ok=True)
models_path = shutil.move(location, dest)
@@ -393,7 +415,7 @@ class ModelInstall(object):
possible_conf = path.with_suffix(".yaml")
if possible_conf.exists():
legacy_conf = str(self.relative_to_root(possible_conf))
elif info.base_type == BaseModelType.StableDiffusion2:
elif info.base_type in [BaseModelType.StableDiffusion1, BaseModelType.StableDiffusion2]:
legacy_conf = Path(
self.config.legacy_conf_dir,
LEGACY_CONFIGS[info.base_type][info.variant_type][info.prediction_type],
@@ -492,7 +514,7 @@ def yes_or_no(prompt: str, default_yes=True):

# ---------------------------------------------
def hf_download_from_pretrained(model_class: object, model_name: str, destination: Path, **kwargs):
logger = InvokeAILogger.getLogger("InvokeAI")
logger = InvokeAILogger.get_logger("InvokeAI")
logger.addFilter(lambda x: "fp16 is not a valid" not in x.getMessage())

model = model_class.from_pretrained(
@@ -74,7 +74,7 @@ if is_accelerate_available():
from accelerate import init_empty_weights
from accelerate.utils import set_module_tensor_to_device

logger = InvokeAILogger.getLogger(__name__)
logger = InvokeAILogger.get_logger(__name__)
CONVERT_MODEL_ROOT = InvokeAIAppConfig.get_config().models_path / "core/convert"


@@ -1279,12 +1279,12 @@ def download_from_original_stable_diffusion_ckpt(
extract_ema = original_config["model"]["params"]["use_ema"]

if (
model_version == BaseModelType.StableDiffusion2
model_version in [BaseModelType.StableDiffusion2, BaseModelType.StableDiffusion1]
and original_config["model"]["params"].get("parameterization") == "v"
):
prediction_type = "v_prediction"
upcast_attention = True
image_size = 768
image_size = 768 if model_version == BaseModelType.StableDiffusion2 else 512
else:
prediction_type = "epsilon"
upcast_attention = False
@@ -90,8 +90,7 @@ class ModelProbe(object):
to place it somewhere in the models directory hierarchy. If the model is
already loaded into memory, you may provide it as model in order to avoid
opening it a second time. The prediction_type_helper callable is a function that receives
the path to the model and returns the BaseModelType. It is called to distinguish
between V2-Base and V2-768 SD models.
the path to the model and returns the SchedulerPredictionType.
"""
if model_path:
format_type = "diffusers" if model_path.is_dir() else "checkpoint"
@@ -305,25 +304,36 @@ class PipelineCheckpointProbe(CheckpointProbeBase):
else:
raise InvalidModelException("Cannot determine base type")

def get_scheduler_prediction_type(self) -> SchedulerPredictionType:
def get_scheduler_prediction_type(self) -> Optional[SchedulerPredictionType]:
"""Return model prediction type."""
# if there is a .yaml associated with this checkpoint, then we do not need
# to probe for the prediction type as it will be ignored.
if self.checkpoint_path and self.checkpoint_path.with_suffix(".yaml").exists():
return None

type = self.get_base_type()
if type == BaseModelType.StableDiffusion1:
return SchedulerPredictionType.Epsilon
checkpoint = self.checkpoint
state_dict = self.checkpoint.get("state_dict") or checkpoint
key_name = "model.diffusion_model.input_blocks.2.1.transformer_blocks.0.attn2.to_k.weight"
if key_name in state_dict and state_dict[key_name].shape[-1] == 1024:
if "global_step" in checkpoint:
if checkpoint["global_step"] == 220000:
return SchedulerPredictionType.Epsilon
elif checkpoint["global_step"] == 110000:
return SchedulerPredictionType.VPrediction
if (
self.checkpoint_path and self.helper and not self.checkpoint_path.with_suffix(".yaml").exists()
): # if a .yaml config file exists, then this step not needed
return self.helper(self.checkpoint_path)
else:
return None
if type == BaseModelType.StableDiffusion2:
checkpoint = self.checkpoint
state_dict = self.checkpoint.get("state_dict") or checkpoint
key_name = "model.diffusion_model.input_blocks.2.1.transformer_blocks.0.attn2.to_k.weight"
if key_name in state_dict and state_dict[key_name].shape[-1] == 1024:
if "global_step" in checkpoint:
if checkpoint["global_step"] == 220000:
return SchedulerPredictionType.Epsilon
elif checkpoint["global_step"] == 110000:
return SchedulerPredictionType.VPrediction
if self.helper and self.checkpoint_path:
if helper_guess := self.helper(self.checkpoint_path):
return helper_guess
return SchedulerPredictionType.VPrediction # a guess for sd2 ckpts

elif type == BaseModelType.StableDiffusion1:
if self.helper and self.checkpoint_path:
if helper_guess := self.helper(self.checkpoint_path):
return helper_guess
return SchedulerPredictionType.Epsilon # a reasonable guess for sd1 ckpts
else:
return None


class VaeCheckpointProbe(CheckpointProbeBase):
@@ -71,7 +71,13 @@ class ModelSearch(ABC):
if any(
[
(path / x).exists()
for x in {"config.json", "model_index.json", "learned_embeds.bin", "pytorch_lora_weights.bin"}
for x in {
"config.json",
"model_index.json",
"learned_embeds.bin",
"pytorch_lora_weights.bin",
"image_encoder.txt",
}
]
):
try:

@@ -24,7 +24,7 @@ from invokeai.backend.util.logging import InvokeAILogger
# Modified ControlNetModel with encoder_attention_mask argument added


logger = InvokeAILogger.getLogger(__name__)
logger = InvokeAILogger.get_logger(__name__)


class ControlNetModel(ModelMixin, ConfigMixin, FromOriginalControlnetMixin):
@@ -1,7 +1,6 @@
 # Copyright (c) 2023 Lincoln D. Stein and The InvokeAI Development Team

-"""
-invokeai.backend.util.logging
+"""invokeai.backend.util.logging

 Logging class for InvokeAI that produces console messages

@@ -9,9 +8,9 @@ Usage:

 from invokeai.backend.util.logging import InvokeAILogger

-logger = InvokeAILogger.getLogger(name='InvokeAI') // Initialization
+logger = InvokeAILogger.get_logger(name='InvokeAI') // Initialization
 (or)
-logger = InvokeAILogger.getLogger(__name__) // To use the filename
+logger = InvokeAILogger.get_logger(__name__) // To use the filename
 logger.configure()

 logger.critical('this is critical') // Critical Message
@@ -34,13 +33,13 @@ IAILogger.debug('this is a debugging message')
 ## Configuration

 The default configuration will print to stderr on the console. To add
-additional logging handlers, call getLogger with an initialized InvokeAIAppConfig
+additional logging handlers, call get_logger with an initialized InvokeAIAppConfig
 object:


 config = InvokeAIAppConfig.get_config()
 config.parse_args()
-logger = InvokeAILogger.getLogger(config=config)
+logger = InvokeAILogger.get_logger(config=config)

 ### Three command-line options control logging:

@@ -173,6 +172,7 @@ InvokeAI:
   log_level: info
   log_format: color
 ```

 """

 import logging.handlers
@@ -193,39 +193,35 @@ except ImportError:

 # module level functions
 def debug(msg, *args, **kwargs):
-    InvokeAILogger.getLogger().debug(msg, *args, **kwargs)
+    InvokeAILogger.get_logger().debug(msg, *args, **kwargs)


 def info(msg, *args, **kwargs):
-    InvokeAILogger.getLogger().info(msg, *args, **kwargs)
+    InvokeAILogger.get_logger().info(msg, *args, **kwargs)


 def warning(msg, *args, **kwargs):
-    InvokeAILogger.getLogger().warning(msg, *args, **kwargs)
+    InvokeAILogger.get_logger().warning(msg, *args, **kwargs)


 def error(msg, *args, **kwargs):
-    InvokeAILogger.getLogger().error(msg, *args, **kwargs)
+    InvokeAILogger.get_logger().error(msg, *args, **kwargs)


 def critical(msg, *args, **kwargs):
-    InvokeAILogger.getLogger().critical(msg, *args, **kwargs)
+    InvokeAILogger.get_logger().critical(msg, *args, **kwargs)


 def log(level, msg, *args, **kwargs):
-    InvokeAILogger.getLogger().log(level, msg, *args, **kwargs)
+    InvokeAILogger.get_logger().log(level, msg, *args, **kwargs)


 def disable(level=logging.CRITICAL):
-    InvokeAILogger.getLogger().disable(level)
+    InvokeAILogger.get_logger().disable(level)


 def basicConfig(**kwargs):
-    InvokeAILogger.getLogger().basicConfig(**kwargs)
-
-
-def getLogger(name: str = None) -> logging.Logger:
-    return InvokeAILogger.getLogger(name)
+    InvokeAILogger.get_logger().basicConfig(**kwargs)


 _FACILITY_MAP = (
@@ -351,7 +347,7 @@ class InvokeAILogger(object):
     loggers = dict()

     @classmethod
-    def getLogger(
+    def get_logger(
         cls, name: str = "InvokeAI", config: InvokeAIAppConfig = InvokeAIAppConfig.get_config()
     ) -> logging.Logger:
         if name in cls.loggers:
@@ -360,13 +356,13 @@ class InvokeAILogger(object):
         else:
             logger = logging.getLogger(name)
             logger.setLevel(config.log_level.upper())  # yes, strings work here
-            for ch in cls.getLoggers(config):
+            for ch in cls.get_loggers(config):
                 logger.addHandler(ch)
             cls.loggers[name] = logger
         return cls.loggers[name]

     @classmethod
-    def getLoggers(cls, config: InvokeAIAppConfig) -> list[logging.Handler]:
+    def get_loggers(cls, config: InvokeAIAppConfig) -> list[logging.Handler]:
         handler_strs = config.log_handlers
         handlers = list()
         for handler in handler_strs:
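The rename from `getLogger` to `get_logger` (and `getLoggers` to `get_loggers`) is a straight snake_case refactor; behavior is unchanged, and loggers are still cached per name. A minimal sketch of the updated call pattern, using only the standard `logging` module since the InvokeAI config object is not shown here (`DemoLogger` is a stand-in name, not the real class):

```python
import logging

# Minimal stand-in for InvokeAILogger: cache loggers by name so repeated
# get_logger() calls return the same configured instance.
class DemoLogger:
    loggers: dict[str, logging.Logger] = {}

    @classmethod
    def get_logger(cls, name: str = "InvokeAI") -> logging.Logger:
        if name not in cls.loggers:
            logger = logging.getLogger(name)
            logger.setLevel(logging.INFO)
            # Default handler prints to stderr, like the default config above
            logger.addHandler(logging.StreamHandler())
            cls.loggers[name] = logger
        return cls.loggers[name]

logger = DemoLogger.get_logger(__name__)
logger.info("this is an informational message")
```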
@@ -103,3 +103,35 @@ sd-1/lora/LowRA:
   recommended: True
 sd-1/lora/Ink scenery:
   path: https://civitai.com/api/download/models/83390
+sd-1/ip_adapter/ip_adapter_sd15:
+  repo_id: InvokeAI/ip_adapter_sd15
+  recommended: True
+  requires:
+    - InvokeAI/ip_adapter_sd_image_encoder
+  description: IP-Adapter for SD 1.5 models
+sd-1/ip_adapter/ip_adapter_plus_sd15:
+  repo_id: InvokeAI/ip_adapter_plus_sd15
+  recommended: False
+  requires:
+    - InvokeAI/ip_adapter_sd_image_encoder
+  description: Refined IP-Adapter for SD 1.5 models
+sd-1/ip_adapter/ip_adapter_plus_face_sd15:
+  repo_id: InvokeAI/ip_adapter_plus_face_sd15
+  recommended: False
+  requires:
+    - InvokeAI/ip_adapter_sd_image_encoder
+  description: Refined IP-Adapter for SD 1.5 models, adapted for faces
+sdxl/ip_adapter/ip_adapter_sdxl:
+  repo_id: InvokeAI/ip_adapter_sdxl
+  recommended: False
+  requires:
+    - InvokeAI/ip_adapter_sdxl_image_encoder
+  description: IP-Adapter for SDXL models
+any/clip_vision/ip_adapter_sd_image_encoder:
+  repo_id: InvokeAI/ip_adapter_sd_image_encoder
+  recommended: False
+  description: Required model for using IP-Adapters with SD-1/2 models
+any/clip_vision/ip_adapter_sdxl_image_encoder:
+  repo_id: InvokeAI/ip_adapter_sdxl_image_encoder
+  recommended: False
+  description: Required model for using IP-Adapters with SDXL models
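Each IP-Adapter entry declares its CLIP Vision image encoder via `requires:`, and (per the commented-out TUI code further down) the backend now expands those dependencies at install time rather than the frontend. A rough sketch of that expansion, where `catalog` and `expand_requirements` are hypothetical names standing in for the real installer state:

```python
# Hypothetical catalog mapping model keys to metadata, mirroring the YAML
# above; the real installer reads this from the starter-models config file.
catalog = {
    "sd-1/ip_adapter/ip_adapter_sd15": {
        "repo_id": "InvokeAI/ip_adapter_sd15",
        "requires": ["InvokeAI/ip_adapter_sd_image_encoder"],
    },
    "any/clip_vision/ip_adapter_sd_image_encoder": {
        "repo_id": "InvokeAI/ip_adapter_sd_image_encoder",
    },
}

def expand_requirements(selected: list[str]) -> list[str]:
    """Return repo_ids to install, with declared dependencies first."""
    to_install: list[str] = []
    for key in selected:
        entry = catalog[key]
        for dep in entry.get("requires", []):
            if dep not in to_install:
                to_install.append(dep)
        if entry["repo_id"] not in to_install:
            to_install.append(entry["repo_id"])
    return to_install

print(expand_requirements(["sd-1/ip_adapter/ip_adapter_sd15"]))
# ['InvokeAI/ip_adapter_sd_image_encoder', 'InvokeAI/ip_adapter_sd15']
```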
invokeai/configs/stable-diffusion/v1-inference-v.yaml (new file, 80 lines)
@@ -0,0 +1,80 @@
+model:
+  base_learning_rate: 1.0e-04
+  target: invokeai.backend.models.diffusion.ddpm.LatentDiffusion
+  params:
+    parameterization: "v"
+    linear_start: 0.00085
+    linear_end: 0.0120
+    num_timesteps_cond: 1
+    log_every_t: 200
+    timesteps: 1000
+    first_stage_key: "jpg"
+    cond_stage_key: "txt"
+    image_size: 64
+    channels: 4
+    cond_stage_trainable: false   # Note: different from the one we trained before
+    conditioning_key: crossattn
+    monitor: val/loss_simple_ema
+    scale_factor: 0.18215
+    use_ema: False
+
+    scheduler_config: # 10000 warmup steps
+      target: invokeai.backend.stable_diffusion.lr_scheduler.LambdaLinearScheduler
+      params:
+        warm_up_steps: [ 10000 ]
+        cycle_lengths: [ 10000000000000 ] # incredibly large number to prevent corner cases
+        f_start: [ 1.e-6 ]
+        f_max: [ 1. ]
+        f_min: [ 1. ]
+
+    personalization_config:
+      target: invokeai.backend.stable_diffusion.embedding_manager.EmbeddingManager
+      params:
+        placeholder_strings: ["*"]
+        initializer_words: ['sculpture']
+        per_image_tokens: false
+        num_vectors_per_token: 1
+        progressive_words: False
+
+    unet_config:
+      target: invokeai.backend.stable_diffusion.diffusionmodules.openaimodel.UNetModel
+      params:
+        image_size: 32 # unused
+        in_channels: 4
+        out_channels: 4
+        model_channels: 320
+        attention_resolutions: [ 4, 2, 1 ]
+        num_res_blocks: 2
+        channel_mult: [ 1, 2, 4, 4 ]
+        num_heads: 8
+        use_spatial_transformer: True
+        transformer_depth: 1
+        context_dim: 768
+        use_checkpoint: True
+        legacy: False
+
+    first_stage_config:
+      target: invokeai.backend.stable_diffusion.autoencoder.AutoencoderKL
+      params:
+        embed_dim: 4
+        monitor: val/rec_loss
+        ddconfig:
+          double_z: true
+          z_channels: 4
+          resolution: 256
+          in_channels: 3
+          out_ch: 3
+          ch: 128
+          ch_mult:
+          - 1
+          - 2
+          - 4
+          - 4
+          num_res_blocks: 2
+          attn_resolutions: []
+          dropout: 0.0
+        lossconfig:
+          target: torch.nn.Identity
+
+    cond_stage_config:
+      target: invokeai.backend.stable_diffusion.encoders.modules.WeightedFrozenCLIPEmbedder
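The functional difference from the standard `v1-inference.yaml` is `parameterization: "v"`, which tells the loader that the checkpoint predicts v rather than raw noise, so the scheduler must be configured accordingly. As an illustration only (this helper is not part of the InvokeAI codebase), v-prediction relates the network target to the clean latent and the noise like this:

```python
import torch

def v_prediction_target(x0: torch.Tensor, noise: torch.Tensor,
                        alpha_cumprod_t: torch.Tensor) -> torch.Tensor:
    """v = sqrt(alpha_bar_t) * eps - sqrt(1 - alpha_bar_t) * x0.

    Models trained with parameterization "v" predict this quantity instead
    of the noise eps, which is why misconfigured prediction type produces
    washed-out or garbled samples.
    """
    sqrt_alpha = alpha_cumprod_t.sqrt()
    sqrt_one_minus_alpha = (1.0 - alpha_cumprod_t).sqrt()
    return sqrt_alpha * noise - sqrt_one_minus_alpha * x0
```

This also explains the locale-string change further down: prediction type matters not only for SD 2.x checkpoints but for the occasional SD 1.x checkpoint fine-tuned with v-prediction.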
@@ -45,7 +45,7 @@ from invokeai.frontend.install.widgets import (
 )

 config = InvokeAIAppConfig.get_config()
-logger = InvokeAILogger.getLogger()
+logger = InvokeAILogger.get_logger()

 # build a table mapping all non-printable characters to None
 # for stripping control characters
@@ -101,11 +101,12 @@ class addModelsForm(CyclingForm, npyscreen.FormMultiPage):
                 "STARTER MODELS",
                 "MAIN MODELS",
                 "CONTROLNETS",
+                "IP-ADAPTERS",
                 "LORA/LYCORIS",
                 "TEXTUAL INVERSION",
             ],
             value=[self.current_tab],
-            columns=5,
+            columns=6,
             max_height=2,
             relx=8,
             scroll_exit=True,
@@ -130,6 +131,13 @@ class addModelsForm(CyclingForm, npyscreen.FormMultiPage):
         )
         bottom_of_table = max(bottom_of_table, self.nextrely)

+        self.nextrely = top_of_table
+        self.ipadapter_models = self.add_model_widgets(
+            model_type=ModelType.IPAdapter,
+            window_width=window_width,
+        )
+        bottom_of_table = max(bottom_of_table, self.nextrely)
+
         self.nextrely = top_of_table
         self.lora_models = self.add_model_widgets(
             model_type=ModelType.Lora,
@@ -343,6 +351,7 @@ class addModelsForm(CyclingForm, npyscreen.FormMultiPage):
             self.starter_pipelines,
             self.pipeline_models,
             self.controlnet_models,
+            self.ipadapter_models,
             self.lora_models,
             self.ti_models,
         ]
@@ -532,6 +541,7 @@ class addModelsForm(CyclingForm, npyscreen.FormMultiPage):
             self.starter_pipelines,
             self.pipeline_models,
             self.controlnet_models,
+            self.ipadapter_models,
             self.lora_models,
             self.ti_models,
         ]
@@ -553,6 +563,25 @@ class addModelsForm(CyclingForm, npyscreen.FormMultiPage):
             if downloads := section.get("download_ids"):
                 selections.install_models.extend(downloads.value.split())

+        # NOT NEEDED - DONE IN BACKEND NOW
+        # # special case for the ipadapter_models. If any of the adapters are
+        # # chosen, then we add the corresponding encoder(s) to the install list.
+        # section = self.ipadapter_models
+        # if section.get("models_selected"):
+        #     selected_adapters = [
+        #         self.all_models[section["models"][x]].name for x in section.get("models_selected").value
+        #     ]
+        #     encoders = []
+        #     if any(["sdxl" in x for x in selected_adapters]):
+        #         encoders.append("ip_adapter_sdxl_image_encoder")
+        #     if any(["sd15" in x for x in selected_adapters]):
+        #         encoders.append("ip_adapter_sd_image_encoder")
+        #     for encoder in encoders:
+        #         key = f"any/clip_vision/{encoder}"
+        #         repo_id = f"InvokeAI/{encoder}"
+        #         if key not in self.all_models:
+        #             selections.install_models.append(repo_id)
+

 class AddModelApplication(npyscreen.NPSAppManaged):
     def __init__(self, opt):
@@ -652,7 +681,7 @@ def process_and_execute(
     translator = StderrToMessage(conn_out)
     sys.stderr = translator
     sys.stdout = translator
-    logger = InvokeAILogger.getLogger()
+    logger = InvokeAILogger.get_logger()
     logger.handlers.clear()
     logger.addHandler(logging.StreamHandler(translator))

@@ -765,7 +794,7 @@ def main():
     if opt.full_precision:
         invoke_args.extend(["--precision", "float32"])
     config.parse_args(invoke_args)
-    logger = InvokeAILogger().getLogger(config=config)
+    logger = InvokeAILogger().get_logger(config=config)

     if not config.model_conf_path.exists():
         logger.info("Your InvokeAI root directory is not set up. Calling invokeai-configure.")
invokeai/frontend/web/dist/locales/en.json (vendored, 2 changes)
@@ -635,7 +635,7 @@
     "onnxModels": "Onnx",
     "pathToCustomConfig": "Path To Custom Config",
     "pickModelType": "Pick Model Type",
-    "predictionType": "Prediction Type (for Stable Diffusion 2.x Models only)",
+    "predictionType": "Prediction Type (for Stable Diffusion 2.x Models and occasional Stable Diffusion 1.x Models)",
     "quickAdd": "Quick Add",
     "repo_id": "Repo ID",
     "repoIDValidationMsg": "Online repository of your model",

@@ -58,6 +58,7 @@
     "githubLabel": "Github",
     "hotkeysLabel": "Hotkeys",
+    "imagePrompt": "Image Prompt",
     "imageFailedToLoad": "Unable to Load Image",
     "img2img": "Image To Image",
     "langArabic": "العربية",
     "langBrPortuguese": "Português do Brasil",
@@ -79,8 +80,9 @@
     "lightMode": "Light Mode",
     "linear": "Linear",
     "load": "Load",
-    "loading": "Loading",
+    "loading": "Loading $t({{noun}})...",
     "loadingInvokeAI": "Loading Invoke AI",
+    "learnMore": "Learn More",
     "modelManager": "Model Manager",
     "nodeEditor": "Node Editor",
     "nodes": "Workflow Editor",
@@ -135,6 +137,8 @@
     "bgth": "bg_th",
     "canny": "Canny",
     "cannyDescription": "Canny edge detection",
+    "colorMap": "Color",
+    "colorMapDescription": "Generates a color map from the image",
     "coarse": "Coarse",
     "contentShuffle": "Content Shuffle",
     "contentShuffleDescription": "Shuffles the content in an image",
@@ -158,6 +162,7 @@
     "hideAdvanced": "Hide Advanced",
     "highThreshold": "High Threshold",
     "imageResolution": "Image Resolution",
+    "colorMapTileSize": "Tile Size",
     "importImageFromCanvas": "Import Image From Canvas",
     "importMaskFromCanvas": "Import Mask From Canvas",
     "incompatibleBaseModel": "Incompatible base model:",
@@ -264,6 +269,22 @@
     "graphQueued": "Graph queued",
     "graphFailedToQueue": "Failed to queue graph"
   },
+  "invocationCache": {
+    "invocationCache": "Invocation Cache",
+    "cacheSize": "Cache Size",
+    "maxCacheSize": "Max Cache Size",
+    "hits": "Cache Hits",
+    "misses": "Cache Misses",
+    "clear": "Clear",
+    "clearSucceeded": "Invocation Cache Cleared",
+    "clearFailed": "Problem Clearing Invocation Cache",
+    "enable": "Enable",
+    "enableSucceeded": "Invocation Cache Enabled",
+    "enableFailed": "Problem Enabling Invocation Cache",
+    "disable": "Disable",
+    "disableSucceeded": "Invocation Cache Disabled",
+    "disableFailed": "Problem Disabling Invocation Cache"
+  },
   "gallery": {
     "allImagesLoaded": "All Images Loaded",
     "assets": "Assets",
@@ -635,7 +656,7 @@
     "onnxModels": "Onnx",
     "pathToCustomConfig": "Path To Custom Config",
     "pickModelType": "Pick Model Type",
-    "predictionType": "Prediction Type (for Stable Diffusion 2.x Models only)",
+    "predictionType": "Prediction Type (for Stable Diffusion 2.x Models and occasional Stable Diffusion 1.x Models)",
     "quickAdd": "Quick Add",
     "repo_id": "Repo ID",
     "repoIDValidationMsg": "Online repository of your model",
@@ -685,6 +706,8 @@
     "addNodeToolTip": "Add Node (Shift+A, Space)",
     "animatedEdges": "Animated Edges",
     "animatedEdgesHelp": "Animate selected edges and edges connected to selected nodes",
+    "boardField": "Board",
+    "boardFieldDescription": "A gallery board",
     "boolean": "Booleans",
     "booleanCollection": "Boolean Collection",
     "booleanCollectionDescription": "A collection of booleans.",
@@ -694,6 +717,7 @@
     "cannotConnectInputToInput": "Cannot connect input to input",
     "cannotConnectOutputToOutput": "Cannot connect output to output",
     "cannotConnectToSelf": "Cannot connect to self",
+    "cannotDuplicateConnection": "Cannot create duplicate connections",
     "clipField": "Clip",
     "clipFieldDescription": "Tokenizer and text_encoder submodels.",
     "collection": "Collection",
@@ -872,7 +896,7 @@
     "zoomOutNodes": "Zoom Out"
   },
   "parameters": {
-    "aspectRatio": "Ratio",
+    "aspectRatio": "Aspect Ratio",
     "boundingBoxHeader": "Bounding Box",
     "boundingBoxHeight": "Bounding Box Height",
     "boundingBoxWidth": "Bounding Box Width",
@@ -1004,8 +1028,8 @@
       "label": "Seed Behaviour",
       "perIterationLabel": "Seed per Iteration",
       "perIterationDesc": "Use a different seed for each iteration",
-      "perPromptLabel": "Seed per Prompt",
-      "perPromptDesc": "Use a different seed for each prompt"
+      "perPromptLabel": "Seed per Image",
+      "perPromptDesc": "Use a different seed for each image"
     }
   },
   "sdxl": {
@@ -1157,131 +1181,205 @@
   "popovers": {
     "clipSkip": {
       "heading": "CLIP Skip",
-      "paragraph": "Choose how many layers of the CLIP model to skip. Certain models are better suited to be used with CLIP Skip."
+      "paragraphs": [
+        "Choose how many layers of the CLIP model to skip.",
+        "Some models work better with certain CLIP Skip settings.",
+        "A higher value typically results in a less detailed image."
+      ]
     },
     "paramNegativeConditioning": {
-      "heading": "Negative Prompts",
-      "paragraph": "This is where you enter your negative prompts."
+      "heading": "Negative Prompt",
+      "paragraphs": [
+        "The generation process avoids the concepts in the negative prompt. Use this to exclude qualities or objects from the output.",
+        "Supports Compel syntax and embeddings."
+      ]
     },
     "paramPositiveConditioning": {
-      "heading": "Positive Prompts",
-      "paragraph": "This is where you enter your positive prompts."
+      "heading": "Positive Prompt",
+      "paragraphs": [
+        "Guides the generation process. You may use any words or phrases.",
+        "Compel and Dynamic Prompts syntaxes and embeddings."
+      ]
     },
     "paramScheduler": {
       "heading": "Scheduler",
-      "paragraph": "Scheduler defines how to iteratively add noise to an image or how to update a sample based on a model's output."
+      "paragraphs": [
+        "Scheduler defines how to iteratively add noise to an image or how to update a sample based on a model's output."
+      ]
     },
     "compositingBlur": {
       "heading": "Blur",
-      "paragraph": "The blur radius of the mask."
+      "paragraphs": ["The blur radius of the mask."]
     },
     "compositingBlurMethod": {
       "heading": "Blur Method",
-      "paragraph": "The method of blur applied to the masked area."
+      "paragraphs": ["The method of blur applied to the masked area."]
     },
     "compositingCoherencePass": {
       "heading": "Coherence Pass",
-      "paragraph": "Composite the Inpainted/Outpainted images."
+      "paragraphs": [
+        "A second round of denoising helps to composite the Inpainted/Outpainted image."
+      ]
     },
     "compositingCoherenceMode": {
       "heading": "Mode",
-      "paragraph": "The mode of the Coherence Pass."
+      "paragraphs": ["The mode of the Coherence Pass."]
     },
     "compositingCoherenceSteps": {
       "heading": "Steps",
-      "paragraph": "Number of steps in the Coherence Pass. Similar to Denoising Steps."
+      "paragraphs": [
+        "Number of denoising steps used in the Coherence Pass.",
+        "Same as the main Steps parameter."
+      ]
     },
     "compositingStrength": {
       "heading": "Strength",
-      "paragraph": "Amount of noise added for the Coherence Pass. Similar to Denoising Strength."
+      "paragraphs": [
+        "Denoising strength for the Coherence Pass.",
+        "Same as the Image to Image Denoising Strength parameter."
+      ]
     },
     "compositingMaskAdjustments": {
       "heading": "Mask Adjustments",
-      "paragraph": "Adjust the mask."
+      "paragraphs": ["Adjust the mask."]
    },
     "controlNetBeginEnd": {
       "heading": "Begin / End Step Percentage",
-      "paragraph": "Which parts of the denoising process will have the ControlNet applied. ControlNets applied at the start of the process guide composition, and ControlNets applied at the end guide details."
+      "paragraphs": [
+        "Which steps of the denoising process will have the ControlNet applied.",
+        "ControlNets applied at the beginning of the process guide composition, and ControlNets applied at the end guide details."
+      ]
     },
     "controlNetControlMode": {
       "heading": "Control Mode",
-      "paragraph": "Lends more weight to either the prompt or ControlNet."
+      "paragraphs": [
+        "Lends more weight to either the prompt or ControlNet."
+      ]
     },
     "controlNetResizeMode": {
       "heading": "Resize Mode",
-      "paragraph": "How the ControlNet image will be fit to the image generation Ratio"
+      "paragraphs": [
+        "How the ControlNet image will be fit to the image output size."
+      ]
     },
-    "controlNetToggle": {
-      "heading": "Enable ControlNet",
-      "paragraph": "ControlNets provide guidance to the generation process, helping create images with controlled composition, structure, or style, depending on the model selected."
+    "controlNet": {
+      "heading": "ControlNet",
+      "paragraphs": [
+        "ControlNets provide guidance to the generation process, helping create images with controlled composition, structure, or style, depending on the model selected."
+      ]
     },
     "controlNetWeight": {
       "heading": "Weight",
-      "paragraph": "How strongly the ControlNet will impact the generated image."
+      "paragraphs": [
+        "How strongly the ControlNet will impact the generated image."
+      ]
     },
-    "dynamicPromptsToggle": {
-      "heading": "Enable Dynamic Prompts",
-      "paragraph": "Dynamic prompts allow multiple options within a prompt. Dynamic prompts can be used by: {option1|option2|option3}. Combinations of prompts will be randomly generated until the “Images” number has been reached."
-    },
-    "dynamicPromptsCombinatorial": {
-      "heading": "Combinatorial Generation",
-      "paragraph": "Generate an image for every possible combination of Dynamic Prompt until the Max Prompts is reached."
+    "dynamicPrompts": {
+      "heading": "Dynamic Prompts",
+      "paragraphs": [
+        "Dynamic Prompts parses a single prompt into many.",
+        "The basic syntax is \"a {red|green|blue} ball\". This will produce three prompts: \"a red ball\", \"a green ball\" and \"a blue ball\".",
+        "You can use the syntax as many times as you like in a single prompt, but be sure to keep the number of prompts generated in check with the Max Prompts setting."
+      ]
+    },
+    "dynamicPromptsMaxPrompts": {
+      "heading": "Max Prompts",
+      "paragraphs": [
+        "Limits the number of prompts that can be generated by Dynamic Prompts."
+      ]
+    },
+    "dynamicPromptsSeedBehaviour": {
+      "heading": "Seed Behaviour",
+      "paragraphs": [
+        "Controls how the seed is used when generating prompts.",
+        "Per Iteration will use a unique seed for each iteration. Use this to explore prompt variations on a single seed.",
+        "For example, if you have 5 prompts, each image will use the same seed.",
+        "Per Image will use a unique seed for each image. This provides more variation."
+      ]
     },
     "infillMethod": {
       "heading": "Infill Method",
-      "paragraph": "Method to infill the selected area."
+      "paragraphs": ["Method to infill the selected area."]
     },
     "lora": {
-      "heading": "LoRA",
-      "paragraph": "Weight of the LoRA. Higher weight will lead to larger impacts on the final image."
-    },
-    "noiseEnable": {
-      "heading": "Enable Noise Settings",
-      "paragraph": "Advanced control over noise generation."
+      "heading": "LoRA Weight",
+      "paragraphs": [
+        "Higher LoRA weight will lead to larger impacts on the final image."
+      ]
     },
     "noiseUseCPU": {
       "heading": "Use CPU Noise",
-      "paragraph": "Uses the CPU to generate random noise."
+      "paragraphs": [
+        "Controls whether noise is generated on the CPU or GPU.",
+        "With CPU Noise enabled, a particular seed will produce the same image on any machine.",
+        "There is no performance impact to enabling CPU Noise."
+      ]
     },
     "paramCFGScale": {
       "heading": "CFG Scale",
-      "paragraph": "Controls how much your prompt influences the generation process."
+      "paragraphs": [
+        "Controls how much your prompt influences the generation process."
+      ]
     },
     "paramDenoisingStrength": {
       "heading": "Denoising Strength",
-      "paragraph": "How much noise is added to the input image. 0 will result in an identical image, while 1 will result in a completely new image."
+      "paragraphs": [
+        "How much noise is added to the input image.",
+        "0 will result in an identical image, while 1 will result in a completely new image."
+      ]
     },
-    "paramImages": {
-      "heading": "Images",
-      "paragraph": "Number of images that will be generated."
+    "paramIterations": {
+      "heading": "Iterations",
+      "paragraphs": [
+        "The number of images to generate.",
+        "If Dynamic Prompts is enabled, each of the prompts will be generated this many times."
+      ]
     },
     "paramModel": {
       "heading": "Model",
-      "paragraph": "Model used for the denoising steps. Different models are trained to specialize in producing different aesthetic results and content."
+      "paragraphs": [
+        "Model used for the denoising steps.",
+        "Different models are typically trained to specialize in producing particular aesthetic results and content."
+      ]
     },
     "paramRatio": {
-      "heading": "Ratio",
-      "paragraph": "The ratio of the dimensions of the image generated. An image size (in number of pixels) equivalent to 512x512 is recommended for SD1.5 models and a size equivalent to 1024x1024 is recommended for SDXL models."
+      "heading": "Aspect Ratio",
+      "paragraphs": [
+        "The aspect ratio of the dimensions of the image generated.",
+        "An image size (in number of pixels) equivalent to 512x512 is recommended for SD1.5 models and a size equivalent to 1024x1024 is recommended for SDXL models."
+      ]
     },
     "paramSeed": {
       "heading": "Seed",
-      "paragraph": "Controls the starting noise used for generation. Disable “Random Seed” to produce identical results with the same generation settings."
+      "paragraphs": [
+        "Controls the starting noise used for generation.",
+        "Disable “Random Seed” to produce identical results with the same generation settings."
+      ]
     },
     "paramSteps": {
       "heading": "Steps",
-      "paragraph": "Number of steps that will be performed in each generation. Higher step counts will typically create better images but will require more generation time."
+      "paragraphs": [
+        "Number of steps that will be performed in each generation.",
+        "Higher step counts will typically create better images but will require more generation time."
+      ]
     },
     "paramVAE": {
       "heading": "VAE",
-      "paragraph": "Model used for translating AI output into the final image."
+      "paragraphs": [
+        "Model used for translating AI output into the final image."
+      ]
     },
     "paramVAEPrecision": {
       "heading": "VAE Precision",
-      "paragraph": "The precision used during VAE encoding and decoding. Fp16/Half precision is more efficient, at the expense of minor image variations."
+      "paragraphs": [
+        "The precision used during VAE encoding and decoding. FP16/half precision is more efficient, at the expense of minor image variations."
+      ]
     },
     "scaleBeforeProcessing": {
       "heading": "Scale Before Processing",
-      "paragraph": "Scales the selected area to the size best suited for the model before the image generation process."
+      "paragraphs": [
+        "Scales the selected area to the size best suited for the model before the image generation process."
+      ]
     }
   },
   "ui": {
@@ -1346,6 +1444,8 @@
     "showCanvasDebugInfo": "Show Additional Canvas Info",
     "showGrid": "Show Grid",
     "showHide": "Show/Hide",
+    "showResultsOn": "Show Results (On)",
+    "showResultsOff": "Show Results (Off)",
     "showIntermediates": "Show Intermediates",
     "snapToGrid": "Snap to Grid",
     "undo": "Undo"
@@ -36,7 +36,8 @@ const App = ({ config = DEFAULT_CONFIG, selectedImage }: Props) => {

   const logger = useLogger('system');
   const dispatch = useAppDispatch();
-  const { handlePreselectedImage } = usePreselectedImage();
+  const { handleSendToCanvas, handleSendToImg2Img, handleUseAllMetadata } =
+    usePreselectedImage(selectedImage?.imageName);
   const handleReset = useCallback(() => {
     localStorage.clear();
     location.reload();
@@ -59,8 +60,22 @@ const App = ({ config = DEFAULT_CONFIG, selectedImage }: Props) => {
   }, [dispatch]);

   useEffect(() => {
-    handlePreselectedImage(selectedImage);
-  }, [handlePreselectedImage, selectedImage]);
+    if (selectedImage && selectedImage.action === 'sendToCanvas') {
+      handleSendToCanvas();
+    }
+  }, [selectedImage, handleSendToCanvas]);
+
+  useEffect(() => {
+    if (selectedImage && selectedImage.action === 'sendToImg2Img') {
+      handleSendToImg2Img();
+    }
+  }, [selectedImage, handleSendToImg2Img]);
+
+  useEffect(() => {
+    if (selectedImage && selectedImage.action === 'useAllParameters') {
+      handleUseAllMetadata();
+    }
+  }, [selectedImage, handleUseAllMetadata]);

   const headerComponent = useStore($headerComponent);

@@ -17,7 +17,10 @@ import '../../i18n';
 import AppDndContext from '../../features/dnd/components/AppDndContext';
 import { $customStarUI, CustomStarUi } from 'app/store/nanostores/customStarUI';
 import { $headerComponent } from 'app/store/nanostores/headerComponent';
-import { $queueId, DEFAULT_QUEUE_ID } from 'features/queue/store/nanoStores';
+import {
+  $queueId,
+  DEFAULT_QUEUE_ID,
+} from 'features/queue/store/queueNanoStore';

 const App = lazy(() => import('./App'));
 const ThemeLocaleProvider = lazy(() => import('./ThemeLocaleProvider'));

@@ -5,7 +5,7 @@ import {
 } from '@chakra-ui/react';
 import { ReactNode, memo, useEffect, useMemo } from 'react';
 import { useTranslation } from 'react-i18next';
-import { theme as invokeAITheme } from 'theme/theme';
+import { TOAST_OPTIONS, theme as invokeAITheme } from 'theme/theme';

 import '@fontsource-variable/inter';
 import { MantineProvider } from '@mantine/core';
@@ -39,7 +39,11 @@ function ThemeLocaleProvider({ children }: ThemeLocaleProviderProps) {

   return (
     <MantineProvider theme={mantineTheme}>
-      <ChakraProvider theme={theme} colorModeManager={manager}>
+      <ChakraProvider
+        theme={theme}
+        colorModeManager={manager}
+        toastOptions={TOAST_OPTIONS}
+      >
         {children}
       </ChakraProvider>
     </MantineProvider>
@@ -54,21 +54,6 @@ import { addModelSelectedListener } from './listeners/modelSelected';
 import { addModelsLoadedListener } from './listeners/modelsLoaded';
 import { addDynamicPromptsListener } from './listeners/promptChanged';
 import { addReceivedOpenAPISchemaListener } from './listeners/receivedOpenAPISchema';
-import {
-  addSessionCanceledFulfilledListener,
-  addSessionCanceledPendingListener,
-  addSessionCanceledRejectedListener,
-} from './listeners/sessionCanceled';
-import {
-  addSessionCreatedFulfilledListener,
-  addSessionCreatedPendingListener,
-  addSessionCreatedRejectedListener,
-} from './listeners/sessionCreated';
-import {
-  addSessionInvokedFulfilledListener,
-  addSessionInvokedPendingListener,
-  addSessionInvokedRejectedListener,
-} from './listeners/sessionInvoked';
 import { addSocketConnectedEventListener as addSocketConnectedListener } from './listeners/socketio/socketConnected';
 import { addSocketDisconnectedEventListener as addSocketDisconnectedListener } from './listeners/socketio/socketDisconnected';
 import { addGeneratorProgressEventListener as addGeneratorProgressListener } from './listeners/socketio/socketGeneratorProgress';
@@ -86,6 +71,7 @@ import { addStagingAreaImageSavedListener } from './listeners/stagingAreaImageSa
 import { addTabChangedListener } from './listeners/tabChanged';
 import { addUpscaleRequestedListener } from './listeners/upscaleRequested';
 import { addWorkflowLoadedListener } from './listeners/workflowLoaded';
+import { addBatchEnqueuedListener } from './listeners/batchEnqueued';

 export const listenerMiddleware = createListenerMiddleware();

@@ -136,6 +122,7 @@ addEnqueueRequestedCanvasListener();
 addEnqueueRequestedNodes();
 addEnqueueRequestedLinear();
 addAnyEnqueuedListener();
+addBatchEnqueuedListener();

 // Canvas actions
 addCanvasSavedToGalleryListener();
@@ -175,21 +162,6 @@ addSessionRetrievalErrorEventListener();
 addInvocationRetrievalErrorEventListener();
 addSocketQueueItemStatusChangedEventListener();

-// Session Created
-addSessionCreatedPendingListener();
-addSessionCreatedFulfilledListener();
-addSessionCreatedRejectedListener();
-
-// Session Invoked
-addSessionInvokedPendingListener();
-addSessionInvokedFulfilledListener();
-addSessionInvokedRejectedListener();
-
-// Session Canceled
-addSessionCanceledPendingListener();
-addSessionCanceledFulfilledListener();
-addSessionCanceledRejectedListener();

 // ControlNet
 addControlNetImageProcessedListener();
 addControlNetAutoProcessListener();
@@ -0,0 +1,96 @@
+import { createStandaloneToast } from '@chakra-ui/react';
+import { logger } from 'app/logging/logger';
+import { parseify } from 'common/util/serialize';
+import { zPydanticValidationError } from 'features/system/store/zodSchemas';
+import { t } from 'i18next';
+import { get, truncate, upperFirst } from 'lodash-es';
+import { queueApi } from 'services/api/endpoints/queue';
+import { TOAST_OPTIONS, theme } from 'theme/theme';
+import { startAppListening } from '..';
+
+const { toast } = createStandaloneToast({
+  theme: theme,
+  defaultOptions: TOAST_OPTIONS.defaultOptions,
+});
+
+export const addBatchEnqueuedListener = () => {
+  // success
+  startAppListening({
+    matcher: queueApi.endpoints.enqueueBatch.matchFulfilled,
+    effect: async (action) => {
+      const response = action.payload;
+      const arg = action.meta.arg.originalArgs;
+      logger('queue').debug(
+        { enqueueResult: parseify(response) },
+        'Batch enqueued'
+      );
+
+      if (!toast.isActive('batch-queued')) {
+        toast({
+          id: 'batch-queued',
+          title: t('queue.batchQueued'),
+          description: t('queue.batchQueuedDesc', {
+            item_count: response.enqueued,
+            direction: arg.prepend ? t('queue.front') : t('queue.back'),
+          }),
+          duration: 1000,
+          status: 'success',
+        });
+      }
+    },
+  });
+
+  // error
+  startAppListening({
+    matcher: queueApi.endpoints.enqueueBatch.matchRejected,
+    effect: async (action) => {
+      const response = action.payload;
+      const arg = action.meta.arg.originalArgs;
+
+      if (!response) {
+        toast({
+          title: t('queue.batchFailedToQueue'),
+          status: 'error',
+          description: 'Unknown Error',
+        });
+        logger('queue').error(
+          { batchConfig: parseify(arg), error: parseify(response) },
+          t('queue.batchFailedToQueue')
+        );
+        return;
+      }
+
+      const result = zPydanticValidationError.safeParse(response);
+      if (result.success) {
+        result.data.data.detail.map((e) => {
+          toast({
+            id: 'batch-failed-to-queue',
+            title: truncate(upperFirst(e.msg), { length: 128 }),
+            status: 'error',
+            description: truncate(
+              `Path:
+              ${e.loc.join('.')}`,
+              { length: 128 }
+            ),
+          });
+        });
+      } else {
+        let detail = 'Unknown Error';
+        if (response.status === 403 && 'body' in response) {
+          detail = get(response, 'body.detail', 'Unknown Error');
+        } else if (response.status === 403 && 'error' in response) {
+          detail = get(response, 'error.detail', 'Unknown Error');
+        }
+        toast({
+          title: t('queue.batchFailedToQueue'),
+          status: 'error',
+          description: detail,
+        });
+      }
+      logger('queue').error(
+        { batchConfig: parseify(arg), error: parseify(response) },
+        t('queue.batchFailedToQueue')
+      );
+    },
+  });
+};
@@ -25,7 +25,7 @@ export const addBoardIdSelectedListener = () => {
       const state = getState();

       const board_id = boardIdSelected.match(action)
-        ? action.payload
+        ? action.payload.boardId
         : state.gallery.selectedBoardId;

       const galleryView = galleryViewChanged.match(action)
@@ -55,7 +55,12 @@ export const addBoardIdSelectedListener = () => {

       if (boardImagesData) {
         const firstImage = imagesSelectors.selectAll(boardImagesData)[0];
-        dispatch(imageSelected(firstImage ?? null));
+        const selectedImage = imagesSelectors.selectById(
+          boardImagesData,
+          action.payload.selectedImageName
+        );
+
+        dispatch(imageSelected(selectedImage || firstImage || null));
       } else {
         // board has no images - deselect
         dispatch(imageSelected(null));
@@ -3,7 +3,7 @@ import { startAppListening } from '..';
 import { $logger } from 'app/logging/logger';
 import { getBaseLayerBlob } from 'features/canvas/util/getBaseLayerBlob';
 import { addToast } from 'features/system/store/systemSlice';
-import { copyBlobToClipboard } from 'features/canvas/util/copyBlobToClipboard';
+import { copyBlobToClipboard } from 'features/system/util/copyBlobToClipboard';
 import { t } from 'i18next';

 export const addCanvasCopiedToClipboardListener = () => {
@@ -15,10 +15,12 @@ export const addCanvasCopiedToClipboardListener = () => {
       .child({ namespace: 'canvasCopiedToClipboardListener' });
     const state = getState();

-    const blob = await getBaseLayerBlob(state);
+    try {
+      const blob = getBaseLayerBlob(state);

-    if (!blob) {
-      moduleLog.error('Problem getting base layer blob');
+      copyBlobToClipboard(blob);
+    } catch (err) {
+      moduleLog.error(String(err));
       dispatch(
         addToast({
           title: t('toast.problemCopyingCanvas'),
@@ -29,8 +31,6 @@ export const addCanvasCopiedToClipboardListener = () => {
       return;
     }

-    copyBlobToClipboard(blob);
-
     dispatch(
       addToast({
         title: t('toast.canvasCopiedClipboard'),
@@ -15,10 +15,11 @@ export const addCanvasDownloadedAsImageListener = () => {
       .child({ namespace: 'canvasSavedToGalleryListener' });
     const state = getState();

-    const blob = await getBaseLayerBlob(state);
-
-    if (!blob) {
-      moduleLog.error('Problem getting base layer blob');
+    let blob;
+    try {
+      blob = await getBaseLayerBlob(state);
+    } catch (err) {
+      moduleLog.error(String(err));
       dispatch(
         addToast({
           title: t('toast.problemDownloadingCanvas'),
@@ -3,9 +3,9 @@ import { canvasImageToControlNet } from 'features/canvas/store/actions';
 import { getBaseLayerBlob } from 'features/canvas/util/getBaseLayerBlob';
 import { controlNetImageChanged } from 'features/controlNet/store/controlNetSlice';
 import { addToast } from 'features/system/store/systemSlice';
+import { t } from 'i18next';
 import { imagesApi } from 'services/api/endpoints/images';
 import { startAppListening } from '..';
-import { t } from 'i18next';

 export const addCanvasImageToControlNetListener = () => {
   startAppListening({
@@ -14,10 +14,11 @@ export const addCanvasImageToControlNetListener = () => {
     const log = logger('canvas');
     const state = getState();

-    const blob = await getBaseLayerBlob(state);
-
-    if (!blob) {
-      log.error('Problem getting base layer blob');
+    let blob;
+    try {
+      blob = await getBaseLayerBlob(state, true);
+    } catch (err) {
+      log.error(String(err));
       dispatch(
         addToast({
           title: t('toast.problemSavingCanvas'),
@@ -35,10 +36,10 @@ export const addCanvasImageToControlNetListener = () => {
         file: new File([blob], 'savedCanvas.png', {
           type: 'image/png',
         }),
-        image_category: 'mask',
+        image_category: 'control',
         is_intermediate: false,
         board_id: autoAddBoardId === 'none' ? undefined : autoAddBoardId,
-        crop_visible: true,
+        crop_visible: false,
         postUploadAction: {
           type: 'TOAST',
           toastOptions: { title: t('toast.canvasSentControlnetAssets') },
@@ -3,9 +3,9 @@ import { canvasMaskToControlNet } from 'features/canvas/store/actions';
 import { getCanvasData } from 'features/canvas/util/getCanvasData';
 import { controlNetImageChanged } from 'features/controlNet/store/controlNetSlice';
 import { addToast } from 'features/system/store/systemSlice';
+import { t } from 'i18next';
 import { imagesApi } from 'services/api/endpoints/images';
 import { startAppListening } from '..';
-import { t } from 'i18next';

 export const addCanvasMaskToControlNetListener = () => {
   startAppListening({
@@ -50,7 +50,7 @@ export const addCanvasMaskToControlNetListener = () => {
         image_category: 'mask',
         is_intermediate: false,
         board_id: autoAddBoardId === 'none' ? undefined : autoAddBoardId,
-        crop_visible: true,
+        crop_visible: false,
         postUploadAction: {
           type: 'TOAST',
           toastOptions: { title: t('toast.maskSentControlnetAssets') },
@@ -13,10 +13,11 @@ export const addCanvasSavedToGalleryListener = () => {
     const log = logger('canvas');
     const state = getState();

-    const blob = await getBaseLayerBlob(state);
-
-    if (!blob) {
-      log.error('Problem getting base layer blob');
+    let blob;
+    try {
+      blob = await getBaseLayerBlob(state);
+    } catch (err) {
+      log.error(String(err));
       dispatch(
         addToast({
           title: t('toast.problemSavingCanvas'),
@@ -12,8 +12,6 @@ import { getCanvasGenerationMode } from 'features/canvas/util/getCanvasGeneratio
 import { canvasGraphBuilt } from 'features/nodes/store/actions';
 import { buildCanvasGraph } from 'features/nodes/util/graphBuilders/buildCanvasGraph';
 import { prepareLinearUIBatch } from 'features/nodes/util/graphBuilders/buildLinearBatchConfig';
-import { addToast } from 'features/system/store/systemSlice';
-import { t } from 'i18next';
 import { imagesApi } from 'services/api/endpoints/images';
 import { queueApi } from 'services/api/endpoints/queue';
 import { ImageDTO } from 'services/api/types';
@@ -140,8 +138,6 @@ export const addEnqueueRequestedCanvasListener = () => {
         const enqueueResult = await req.unwrap();
         req.reset();

-        log.debug({ enqueueResult: parseify(enqueueResult) }, 'Batch enqueued');
-
         const batchId = enqueueResult.batch.batch_id as string; // we know the is a string, backend provides it

         // Prep the canvas staging area if it is not yet initialized
@@ -158,28 +154,8 @@ export const addEnqueueRequestedCanvasListener = () => {

         // Associate the session with the canvas session ID
         dispatch(canvasBatchIdAdded(batchId));
-
-        dispatch(
-          addToast({
-            title: t('queue.batchQueued'),
-            description: t('queue.batchQueuedDesc', {
-              item_count: enqueueResult.enqueued,
-              direction: prepend ? t('queue.front') : t('queue.back'),
-            }),
-            status: 'success',
-          })
-        );
       } catch {
-        log.error(
-          { batchConfig: parseify(batchConfig) },
-          t('queue.batchFailedToQueue')
-        );
-        dispatch(
-          addToast({
-            title: t('queue.batchFailedToQueue'),
-            status: 'error',
-          })
-        );
+        // no-op
       }
     },
   });
@@ -1,13 +1,9 @@
-import { logger } from 'app/logging/logger';
 import { enqueueRequested } from 'app/store/actions';
-import { parseify } from 'common/util/serialize';
 import { prepareLinearUIBatch } from 'features/nodes/util/graphBuilders/buildLinearBatchConfig';
 import { buildLinearImageToImageGraph } from 'features/nodes/util/graphBuilders/buildLinearImageToImageGraph';
 import { buildLinearSDXLImageToImageGraph } from 'features/nodes/util/graphBuilders/buildLinearSDXLImageToImageGraph';
 import { buildLinearSDXLTextToImageGraph } from 'features/nodes/util/graphBuilders/buildLinearSDXLTextToImageGraph';
 import { buildLinearTextToImageGraph } from 'features/nodes/util/graphBuilders/buildLinearTextToImageGraph';
-import { addToast } from 'features/system/store/systemSlice';
-import { t } from 'i18next';
 import { queueApi } from 'services/api/endpoints/queue';
 import { startAppListening } from '..';

@@ -18,7 +14,6 @@ export const addEnqueueRequestedLinear = () => {
       (action.payload.tabName === 'txt2img' ||
         action.payload.tabName === 'img2img'),
     effect: async (action, { getState, dispatch }) => {
-      const log = logger('queue');
       const state = getState();
       const model = state.generation.model;
       const { prepend } = action.payload;
@@ -41,38 +36,12 @@ export const addEnqueueRequestedLinear = () => {

       const batchConfig = prepareLinearUIBatch(state, graph, prepend);

-      try {
-        const req = dispatch(
-          queueApi.endpoints.enqueueBatch.initiate(batchConfig, {
-            fixedCacheKey: 'enqueueBatch',
-          })
-        );
-        const enqueueResult = await req.unwrap();
-        req.reset();
-
-        log.debug({ enqueueResult: parseify(enqueueResult) }, 'Batch enqueued');
-        dispatch(
-          addToast({
-            title: t('queue.batchQueued'),
-            description: t('queue.batchQueuedDesc', {
-              item_count: enqueueResult.enqueued,
-              direction: prepend ? t('queue.front') : t('queue.back'),
-            }),
-            status: 'success',
-          })
-        );
-      } catch {
-        log.error(
-          { batchConfig: parseify(batchConfig) },
-          t('queue.batchFailedToQueue')
-        );
-        dispatch(
-          addToast({
-            title: t('queue.batchFailedToQueue'),
-            status: 'error',
-          })
-        );
-      }
+      const req = dispatch(
+        queueApi.endpoints.enqueueBatch.initiate(batchConfig, {
+          fixedCacheKey: 'enqueueBatch',
+        })
+      );
+      req.reset();
     },
   });
 };
@@ -1,9 +1,5 @@
-import { logger } from 'app/logging/logger';
 import { enqueueRequested } from 'app/store/actions';
-import { parseify } from 'common/util/serialize';
 import { buildNodesGraph } from 'features/nodes/util/graphBuilders/buildNodesGraph';
-import { addToast } from 'features/system/store/systemSlice';
-import { t } from 'i18next';
 import { queueApi } from 'services/api/endpoints/queue';
 import { BatchConfig } from 'services/api/types';
 import { startAppListening } from '..';
@@ -13,9 +9,7 @@ export const addEnqueueRequestedNodes = () => {
     predicate: (action): action is ReturnType<typeof enqueueRequested> =>
       enqueueRequested.match(action) && action.payload.tabName === 'nodes',
     effect: async (action, { getState, dispatch }) => {
-      const log = logger('queue');
       const state = getState();
-      const { prepend } = action.payload;
       const graph = buildNodesGraph(state.nodes);
       const batchConfig: BatchConfig = {
         batch: {
@@ -25,38 +19,12 @@ export const addEnqueueRequestedNodes = () => {
         prepend: action.payload.prepend,
       };

-      try {
-        const req = dispatch(
-          queueApi.endpoints.enqueueBatch.initiate(batchConfig, {
-            fixedCacheKey: 'enqueueBatch',
-          })
-        );
-        const enqueueResult = await req.unwrap();
-        req.reset();
-
-        log.debug({ enqueueResult: parseify(enqueueResult) }, 'Batch enqueued');
-        dispatch(
-          addToast({
-            title: t('queue.batchQueued'),
-            description: t('queue.batchQueuedDesc', {
-              item_count: enqueueResult.enqueued,
-              direction: prepend ? t('queue.front') : t('queue.back'),
-            }),
-            status: 'success',
-          })
-        );
-      } catch {
-        log.error(
-          { batchConfig: parseify(batchConfig) },
-          'Failed to enqueue batch'
-        );
-        dispatch(
-          addToast({
-            title: t('queue.batchFailedToQueue'),
-            status: 'error',
-          })
-        );
-      }
+      const req = dispatch(
+        queueApi.endpoints.enqueueBatch.initiate(batchConfig, {
+          fixedCacheKey: 'enqueueBatch',
+        })
+      );
+      req.reset();
     },
   });
 };
@@ -4,7 +4,9 @@ import { parseify } from 'common/util/serialize';
 import { setInitialCanvasImage } from 'features/canvas/store/canvasSlice';
 import {
   controlNetImageChanged,
+  controlNetIsEnabledChanged,
   ipAdapterImageChanged,
+  isIPAdapterEnabledChanged,
 } from 'features/controlNet/store/controlNetSlice';
 import {
   TypesafeDraggableData,
@@ -99,6 +101,12 @@ export const addImageDroppedListener = () => {
           controlNetId,
         })
       );
+      dispatch(
+        controlNetIsEnabledChanged({
+          controlNetId,
+          isEnabled: true,
+        })
+      );
       return;
     }

@@ -111,6 +119,7 @@ export const addImageDroppedListener = () => {
       activeData.payload.imageDTO
     ) {
       dispatch(ipAdapterImageChanged(activeData.payload.imageDTO));
+      dispatch(isIPAdapterEnabledChanged(true));
       return;
     }

@@ -3,7 +3,9 @@ import { logger } from 'app/logging/logger';
 import { setInitialCanvasImage } from 'features/canvas/store/canvasSlice';
 import {
   controlNetImageChanged,
+  controlNetIsEnabledChanged,
   ipAdapterImageChanged,
+  isIPAdapterEnabledChanged,
 } from 'features/controlNet/store/controlNetSlice';
 import { fieldImageValueChanged } from 'features/nodes/store/nodesSlice';
 import { initialImageChanged } from 'features/parameters/store/generationSlice';
@@ -87,6 +89,12 @@ export const addImageUploadedFulfilledListener = () => {

       if (postUploadAction?.type === 'SET_CONTROLNET_IMAGE') {
         const { controlNetId } = postUploadAction;
+        dispatch(
+          controlNetIsEnabledChanged({
+            controlNetId,
+            isEnabled: true,
+          })
+        );
         dispatch(
           controlNetImageChanged({
             controlNetId,
@@ -104,6 +112,7 @@ export const addImageUploadedFulfilledListener = () => {

       if (postUploadAction?.type === 'SET_IP_ADAPTER_IMAGE') {
         dispatch(ipAdapterImageChanged(imageDTO));
+        dispatch(isIPAdapterEnabledChanged(true));
         dispatch(
           addToast({
             ...DEFAULT_UPLOADED_TOAST,
@@ -1,44 +0,0 @@
-import { logger } from 'app/logging/logger';
-import { serializeError } from 'serialize-error';
-import { sessionCanceled } from 'services/api/thunks/session';
-import { startAppListening } from '..';
-
-export const addSessionCanceledPendingListener = () => {
-  startAppListening({
-    actionCreator: sessionCanceled.pending,
-    effect: () => {
-      //
-    },
-  });
-};
-
-export const addSessionCanceledFulfilledListener = () => {
-  startAppListening({
-    actionCreator: sessionCanceled.fulfilled,
-    effect: (action) => {
-      const log = logger('session');
-      const { session_id } = action.meta.arg;
-      log.debug({ session_id }, `Session canceled (${session_id})`);
-    },
-  });
-};
-
-export const addSessionCanceledRejectedListener = () => {
-  startAppListening({
-    actionCreator: sessionCanceled.rejected,
-    effect: (action) => {
-      const log = logger('session');
-      const { session_id } = action.meta.arg;
-      if (action.payload) {
-        const { error } = action.payload;
-        log.error(
-          {
-            session_id,
-            error: serializeError(error),
-          },
-          `Problem canceling session`
-        );
-      }
-    },
-  });
-};
@@ -1,45 +0,0 @@
-import { logger } from 'app/logging/logger';
-import { parseify } from 'common/util/serialize';
-import { serializeError } from 'serialize-error';
-import { sessionCreated } from 'services/api/thunks/session';
-import { startAppListening } from '..';
-
-export const addSessionCreatedPendingListener = () => {
-  startAppListening({
-    actionCreator: sessionCreated.pending,
-    effect: () => {
-      //
-    },
-  });
-};
-
-export const addSessionCreatedFulfilledListener = () => {
-  startAppListening({
-    actionCreator: sessionCreated.fulfilled,
-    effect: (action) => {
-      const log = logger('session');
-      const session = action.payload;
-      log.debug(
-        { session: parseify(session) },
-        `Session created (${session.id})`
-      );
-    },
-  });
-};
-
-export const addSessionCreatedRejectedListener = () => {
-  startAppListening({
-    actionCreator: sessionCreated.rejected,
-    effect: (action) => {
-      const log = logger('session');
-      if (action.payload) {
-        const { error, status } = action.payload;
-        const graph = parseify(action.meta.arg);
-        log.error(
-          { graph, status, error: serializeError(error) },
-          `Problem creating session`
-        );
-      }
-    },
-  });
-};
@@ -1,44 +0,0 @@
-import { logger } from 'app/logging/logger';
-import { serializeError } from 'serialize-error';
-import { sessionInvoked } from 'services/api/thunks/session';
-import { startAppListening } from '..';
-
-export const addSessionInvokedPendingListener = () => {
-  startAppListening({
-    actionCreator: sessionInvoked.pending,
-    effect: () => {
-      //
-    },
-  });
-};
-
-export const addSessionInvokedFulfilledListener = () => {
-  startAppListening({
-    actionCreator: sessionInvoked.fulfilled,
-    effect: (action) => {
-      const log = logger('session');
-      const { session_id } = action.meta.arg;
-      log.debug({ session_id }, `Session invoked (${session_id})`);
-    },
-  });
-};
-
-export const addSessionInvokedRejectedListener = () => {
-  startAppListening({
-    actionCreator: sessionInvoked.rejected,
-    effect: (action) => {
-      const log = logger('session');
-      const { session_id } = action.meta.arg;
-      if (action.payload) {
-        const { error } = action.payload;
-        log.error(
-          {
-            session_id,
-            error: serializeError(error),
-          },
-          `Problem invoking session`
-        );
-      }
-    },
-  });
-};
@ -17,7 +17,8 @@ import {
|
||||
} from 'services/events/actions';
|
||||
import { startAppListening } from '../..';
|
||||
|
||||
const nodeDenylist = ['load_image'];
|
||||
// These nodes output an image, but do not actually *save* an image, so we don't want to handle the gallery logic on them
|
||||
const nodeDenylist = ['load_image', 'image'];
|
||||
|
||||
export const addInvocationCompleteEventListener = () => {
|
||||
startAppListening({
|
||||
@ -37,6 +38,7 @@ export const addInvocationCompleteEventListener = () => {
|
||||
const { image_name } = result.image;
|
||||
const { canvas, gallery } = getState();
|
||||
|
||||
// This populates the `getImageDTO` cache
|
||||
const imageDTO = await dispatch(
|
||||
imagesApi.endpoints.getImageDTO.initiate(image_name)
|
||||
).unwrap();
|
||||
@ -52,54 +54,59 @@ export const addInvocationCompleteEventListener = () => {
|
||||
if (!imageDTO.is_intermediate) {
|
||||
/**
|
||||
* Cache updates for when an image result is received
|
||||
* - *add* to getImageDTO
|
||||
* - IF `autoAddBoardId` is set:
|
||||
* - THEN add it to the board_id/images
|
||||
* - ELSE (`autoAddBoardId` is not set):
|
||||
* - THEN add it to the no_board/images
|
||||
* - add it to the no_board/images
|
||||
*/
|
||||
|
||||
const { autoAddBoardId } = gallery;
|
||||
if (autoAddBoardId && autoAddBoardId !== 'none') {
|
||||
dispatch(
|
||||
imagesApi.endpoints.addImageToBoard.initiate({
|
||||
board_id: autoAddBoardId,
|
||||
imageDTO,
|
||||
})
|
||||
);
|
||||
} else {
|
||||
dispatch(
|
||||
imagesApi.util.updateQueryData(
|
||||
'listImages',
|
||||
{
|
||||
board_id: 'none',
|
||||
categories: IMAGE_CATEGORIES,
|
||||
},
|
||||
(draft) => {
|
||||
imagesAdapter.addOne(draft, imageDTO);
|
||||
}
|
||||
)
|
||||
);
|
||||
}
|
||||
dispatch(
|
||||
imagesApi.util.updateQueryData(
|
||||
'listImages',
|
||||
{
|
||||
board_id: imageDTO.board_id ?? 'none',
|
||||
categories: IMAGE_CATEGORIES,
|
||||
},
|
||||
(draft) => {
|
||||
imagesAdapter.addOne(draft, imageDTO);
|
||||
}
|
||||
)
|
||||
);
|
||||
|
||||
dispatch(
|
||||
imagesApi.util.invalidateTags([
|
||||
{ type: 'BoardImagesTotal', id: autoAddBoardId },
|
||||
{ type: 'BoardAssetsTotal', id: autoAddBoardId },
|
||||
{ type: 'BoardImagesTotal', id: imageDTO.board_id },
|
||||
{ type: 'BoardAssetsTotal', id: imageDTO.board_id },
|
||||
])
|
||||
);
|
||||
|
||||
const { selectedBoardId, shouldAutoSwitch } = gallery;
|
||||
const { shouldAutoSwitch } = gallery;
|
||||
|
||||
// If auto-switch is enabled, select the new image
|
||||
if (shouldAutoSwitch) {
|
||||
// if auto-add is enabled, switch the board as the image comes in
|
||||
if (autoAddBoardId && autoAddBoardId !== selectedBoardId) {
|
||||
dispatch(boardIdSelected(autoAddBoardId));
|
||||
dispatch(galleryViewChanged('images'));
|
||||
} else if (!autoAddBoardId) {
|
||||
// if auto-add is enabled, switch the gallery view and board if needed as the image comes in
|
||||
if (gallery.galleryView !== 'images') {
|
||||
dispatch(galleryViewChanged('images'));
|
||||
}
|
||||
|
||||
if (
|
||||
imageDTO.board_id &&
|
||||
imageDTO.board_id !== gallery.selectedBoardId
|
||||
) {
|
||||
dispatch(
|
||||
boardIdSelected({
|
||||
boardId: imageDTO.board_id,
|
||||
selectedImageName: imageDTO.image_name,
|
||||
})
|
||||
);
|
||||
}
|
||||
|
||||
if (!imageDTO.board_id && gallery.selectedBoardId !== 'none') {
|
||||
dispatch(
|
||||
boardIdSelected({
|
||||
boardId: 'none',
|
||||
selectedImageName: imageDTO.image_name,
|
||||
})
|
||||
);
|
||||
}
|
||||
|
||||
dispatch(imageSelected(imageDTO));
|
||||
}
|
||||
}
|
||||
|
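
The hunk above leans on two RTK Query utilities: `updateQueryData`, which writes a freshly received entity straight into an existing cached `listImages` response, and `invalidateTags`, which forces tag-driven queries (the board totals) to refetch. A minimal, self-contained sketch of that pattern follows; the `Post` type, endpoint name, and tag are illustrative stand-ins, not InvokeAI code.

```ts
import { createEntityAdapter } from '@reduxjs/toolkit';
import type { AnyAction, EntityState, ThunkDispatch } from '@reduxjs/toolkit';
import { createApi, fakeBaseQuery } from '@reduxjs/toolkit/query';

// Hypothetical entity type, for illustration only.
type Post = { id: string; title: string };

const postsAdapter = createEntityAdapter<Post>();

const api = createApi({
  baseQuery: fakeBaseQuery(),
  tagTypes: ['PostTotal'],
  endpoints: (build) => ({
    listPosts: build.query<EntityState<Post>, { board: string }>({
      queryFn: async () => ({ data: postsAdapter.getInitialState() }),
      providesTags: ['PostTotal'],
    }),
  }),
});

type AppDispatch = ThunkDispatch<unknown, unknown, AnyAction>;

// On receiving a new entity (e.g. from a socket event), write it straight
// into the cached list for the matching query args, then invalidate the
// tag-driven queries (totals, etc.) so they refetch lazily.
export const onPostReceived = (
  post: Post,
  board: string,
  dispatch: AppDispatch
) => {
  dispatch(
    api.util.updateQueryData('listPosts', { board }, (draft) => {
      postsAdapter.addOne(draft, post);
    })
  );
  dispatch(api.util.invalidateTags([{ type: 'PostTotal', id: board }]));
};
```
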
@ -35,6 +35,7 @@ export const addSocketQueueItemStatusChangedEventListener = () => {
        queueApi.util.invalidateTags([
          'CurrentSessionQueueItem',
          'NextSessionQueueItem',
          'InvocationCacheStatus',
          { type: 'SessionQueueItem', id: item_id },
          { type: 'SessionQueueItemDTO', id: item_id },
          { type: 'BatchStatus', id: queue_batch_id },

@ -18,11 +18,14 @@ export const addUpscaleRequestedListener = () => {
      const log = logger('session');

      const { image_name } = action.payload;
      const { esrganModelName } = getState().postprocessing;
      const state = getState();
      const { esrganModelName } = state.postprocessing;
      const { autoAddBoardId } = state.gallery;

      const graph = buildAdHocUpscaleGraph({
        image_name,
        esrganModelName,
        autoAddBoardId,
      });

      try {

@ -1,54 +0,0 @@
import { logger } from 'app/logging/logger';
import { AppThunkDispatch } from 'app/store/store';
import { parseify } from 'common/util/serialize';
import { addToast } from 'features/system/store/systemSlice';
import { t } from 'i18next';
import { queueApi } from 'services/api/endpoints/queue';
import { BatchConfig } from 'services/api/types';

export const enqueueBatch = async (
  batchConfig: BatchConfig,
  dispatch: AppThunkDispatch
) => {
  const log = logger('session');
  const { prepend } = batchConfig;

  try {
    const req = dispatch(
      queueApi.endpoints.enqueueBatch.initiate(batchConfig, {
        fixedCacheKey: 'enqueueBatch',
      })
    );
    const enqueueResult = await req.unwrap();
    req.reset();

    dispatch(
      queueApi.endpoints.resumeProcessor.initiate(undefined, {
        fixedCacheKey: 'resumeProcessor',
      })
    );

    log.debug({ enqueueResult: parseify(enqueueResult) }, 'Batch enqueued');
    dispatch(
      addToast({
        title: t('queue.batchQueued'),
        description: t('queue.batchQueuedDesc', {
          item_count: enqueueResult.enqueued,
          direction: prepend ? t('queue.front') : t('queue.back'),
        }),
        status: 'success',
      })
    );
  } catch {
    log.error(
      { batchConfig: parseify(batchConfig) },
      t('queue.batchFailedToQueue')
    );
    dispatch(
      addToast({
        title: t('queue.batchFailedToQueue'),
        status: 'error',
      })
    );
  }
};
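
The deleted `enqueueBatch` helper shows the imperative way to run an RTK Query mutation outside a component: dispatch `endpoint.initiate(...)`, `unwrap()` the result, and `reset()` the request once done when a `fixedCacheKey` was used. A stripped-down sketch under those assumptions, with a made-up `sendJob` mutation standing in for the real endpoints:

```ts
import type { AnyAction, ThunkDispatch } from '@reduxjs/toolkit';
import { createApi, fakeBaseQuery } from '@reduxjs/toolkit/query';

const jobApi = createApi({
  baseQuery: fakeBaseQuery(),
  endpoints: (build) => ({
    sendJob: build.mutation<{ id: string }, { payload: string }>({
      queryFn: async (arg) => ({ data: { id: `job-${arg.payload}` } }),
    }),
  }),
});

type AppDispatch = ThunkDispatch<unknown, unknown, AnyAction>;

export const sendJob = async (payload: string, dispatch: AppDispatch) => {
  // `initiate` returns a trackable request; `fixedCacheKey` lets any UI
  // component observe this mutation's status from anywhere in the tree.
  const req = dispatch(
    jobApi.endpoints.sendJob.initiate({ payload }, { fixedCacheKey: 'sendJob' })
  );
  try {
    // `unwrap` throws on error, so success and failure split cleanly.
    const result = await req.unwrap();
    return result.id;
  } finally {
    // Release the fixed cache entry so it doesn't hold the last result forever.
    req.reset();
  }
};
```
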
@ -21,7 +21,8 @@ export type AppFeature =
  | 'multiselect'
  | 'pauseQueue'
  | 'resumeQueue'
  | 'prependQueue';
  | 'prependQueue'
  | 'invocationCache';

/**
 * A disable-able Stable Diffusion feature

@ -1,18 +1,9 @@
import { chakra, ChakraProps } from '@chakra-ui/react';
import { Box, ChakraProps } from '@chakra-ui/react';
import { memo } from 'react';
import { RgbaColorPicker } from 'react-colorful';
import { ColorPickerBaseProps, RgbaColor } from 'react-colorful/dist/types';

type IAIColorPickerProps = Omit<ColorPickerBaseProps<RgbaColor>, 'color'> &
  ChakraProps & {
    pickerColor: RgbaColor;
    styleClass?: string;
  };

const ChakraRgbaColorPicker = chakra(RgbaColorPicker, {
  baseStyle: { paddingInline: 4 },
  shouldForwardProp: (prop) => !['pickerColor'].includes(prop),
});
type IAIColorPickerProps = ColorPickerBaseProps<RgbaColor>;

const colorPickerStyles: NonNullable<ChakraProps['sx']> = {
  width: 6,

@ -20,19 +11,17 @@ const colorPickerStyles: NonNullable<ChakraProps['sx']> = {
  borderColor: 'base.100',
};

const IAIColorPicker = (props: IAIColorPickerProps) => {
  const { styleClass = '', ...rest } = props;
const sx = {
  '.react-colorful__hue-pointer': colorPickerStyles,
  '.react-colorful__saturation-pointer': colorPickerStyles,
  '.react-colorful__alpha-pointer': colorPickerStyles,
};

const IAIColorPicker = (props: IAIColorPickerProps) => {
  return (
    <ChakraRgbaColorPicker
      sx={{
        '.react-colorful__hue-pointer': colorPickerStyles,
        '.react-colorful__saturation-pointer': colorPickerStyles,
        '.react-colorful__alpha-pointer': colorPickerStyles,
      }}
      className={styleClass}
      {...rest}
    />
    <Box sx={sx}>
      <RgbaColorPicker {...props} />
    </Box>
  );
};
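
The refactor above swaps a `chakra()` factory wrapper for a plain `Box` whose `sx` prop targets react-colorful's internal class names. Styling a third-party component's internals through a parent's `sx` selectors generalizes well; here is a minimal sketch of that technique, with the class names taken from react-colorful's documented DOM and the pointer styles invented for illustration:

```tsx
import { Box } from '@chakra-ui/react';
import { RgbaColorPicker } from 'react-colorful';
import type { ColorPickerBaseProps, RgbaColor } from 'react-colorful/dist/types';

// Shared style object applied to each pointer element inside the picker.
const pointerStyles = { width: 6, height: 6, borderColor: 'gray.100' };

const StyledColorPicker = (props: ColorPickerBaseProps<RgbaColor>) => (
  // The parent Box "reaches into" react-colorful via its stable class names;
  // no need to fork or re-wrap the third-party component itself.
  <Box
    sx={{
      '.react-colorful__hue-pointer': pointerStyles,
      '.react-colorful__saturation-pointer': pointerStyles,
      '.react-colorful__alpha-pointer': pointerStyles,
    }}
  >
    <RgbaColorPicker {...props} />
  </Box>
);

export default StyledColorPicker;
```
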
@ -31,7 +31,7 @@ const IAIDroppable = (props: IAIDroppableProps) => {
      insetInlineStart={0}
      w="full"
      h="full"
      pointerEvents="none"
      pointerEvents={active ? 'auto' : 'none'}
    >
      <AnimatePresence>
        {isValidDrop(data, active) && (

@ -81,3 +81,38 @@ export const IAINoContentFallback = (props: IAINoImageFallbackProps) => {
    </Flex>
  );
};

type IAINoImageFallbackWithSpinnerProps = FlexProps & {
  label?: string;
};

export const IAINoContentFallbackWithSpinner = (
  props: IAINoImageFallbackWithSpinnerProps
) => {
  const { sx, ...rest } = props;

  return (
    <Flex
      sx={{
        w: 'full',
        h: 'full',
        alignItems: 'center',
        justifyContent: 'center',
        borderRadius: 'base',
        flexDir: 'column',
        gap: 2,
        userSelect: 'none',
        opacity: 0.7,
        color: 'base.700',
        _dark: {
          color: 'base.500',
        },
        ...sx,
      }}
      {...rest}
    >
      <Spinner size="xl" />
      {props.label && <Text textAlign="center">{props.label}</Text>}
    </Flex>
  );
};

@ -1,112 +0,0 @@
import {
  Button,
  Popover,
  PopoverTrigger,
  PopoverContent,
  PopoverArrow,
  PopoverCloseButton,
  PopoverHeader,
  PopoverBody,
  PopoverProps,
  Flex,
  Text,
  Image,
} from '@chakra-ui/react';
import { useAppSelector } from '../../app/store/storeHooks';
import { useTranslation } from 'react-i18next';

interface Props extends PopoverProps {
  details: string;
  children: JSX.Element;
  image?: string;
  buttonLabel?: string;
  buttonHref?: string;
  placement?: PopoverProps['placement'];
}

function IAIInformationalPopover({
  details,
  image,
  buttonLabel,
  buttonHref,
  children,
  placement,
}: Props): JSX.Element {
  const shouldEnableInformationalPopovers = useAppSelector(
    (state) => state.system.shouldEnableInformationalPopovers
  );
  const { t } = useTranslation();

  const heading = t(`popovers.${details}.heading`);
  const paragraph = t(`popovers.${details}.paragraph`);

  if (!shouldEnableInformationalPopovers) {
    return children;
  } else {
    return (
      <Popover
        placement={placement || 'top'}
        closeOnBlur={false}
        trigger="hover"
        variant="informational"
      >
        <PopoverTrigger>
          <div>{children}</div>
        </PopoverTrigger>
        <PopoverContent>
          <PopoverArrow />
          <PopoverCloseButton />

          <PopoverBody>
            <Flex
              sx={{
                gap: 3,
                flexDirection: 'column',
                width: '100%',
                alignItems: 'center',
              }}
            >
              {image && (
                <Image
                  sx={{
                    objectFit: 'contain',
                    maxW: '60%',
                    maxH: '60%',
                    backgroundColor: 'white',
                  }}
                  src={image}
                  alt="Optional Image"
                />
              )}
              <Flex
                sx={{
                  gap: 3,
                  flexDirection: 'column',
                  width: '100%',
                  p: 3,
                  pt: heading ? 0 : 3,
                }}
              >
                {heading && <PopoverHeader>{heading}</PopoverHeader>}
                <Text sx={{ px: 3 }}>{paragraph}</Text>
                {buttonLabel && (
                  <Flex sx={{ px: 3 }} justifyContent="flex-end">
                    <Button
                      onClick={() => window.open(buttonHref)}
                      size="sm"
                      variant="invokeAIOutline"
                    >
                      {buttonLabel}
                    </Button>
                  </Flex>
                )}
              </Flex>
            </Flex>
          </PopoverBody>
        </PopoverContent>
      </Popover>
    );
  }
}

export default IAIInformationalPopover;

@ -0,0 +1,155 @@
import {
  Box,
  BoxProps,
  Button,
  Divider,
  Flex,
  Heading,
  Image,
  Popover,
  PopoverBody,
  PopoverCloseButton,
  PopoverContent,
  PopoverProps,
  PopoverTrigger,
  Portal,
  Text,
  forwardRef,
} from '@chakra-ui/react';
import { merge, omit } from 'lodash-es';
import { PropsWithChildren, memo, useCallback, useMemo } from 'react';
import { useTranslation } from 'react-i18next';
import { FaExternalLinkAlt } from 'react-icons/fa';
import { useAppSelector } from '../../../app/store/storeHooks';
import {
  Feature,
  OPEN_DELAY,
  POPOVER_DATA,
  POPPER_MODIFIERS,
} from './constants';

type Props = PropsWithChildren & {
  feature: Feature;
  wrapperProps?: BoxProps;
  popoverProps?: PopoverProps;
};

const IAIInformationalPopover = forwardRef(
  ({ feature, children, wrapperProps, ...rest }: Props, ref) => {
    const { t } = useTranslation();
    const shouldEnableInformationalPopovers = useAppSelector(
      (state) => state.system.shouldEnableInformationalPopovers
    );

    const data = useMemo(() => POPOVER_DATA[feature], [feature]);

    const popoverProps = useMemo(
      () => merge(omit(data, ['image', 'href', 'buttonLabel']), rest),
      [data, rest]
    );

    const heading = useMemo<string | undefined>(
      () => t(`popovers.${feature}.heading`),
      [feature, t]
    );

    const paragraphs = useMemo<string[]>(
      () =>
        t(`popovers.${feature}.paragraphs`, {
          returnObjects: true,
        }) ?? [],
      [feature, t]
    );

    const handleClick = useCallback(() => {
      if (!data?.href) {
        return;
      }
      window.open(data.href);
    }, [data?.href]);

    if (!shouldEnableInformationalPopovers) {
      return (
        <Box ref={ref} w="full" {...wrapperProps}>
          {children}
        </Box>
      );
    }

    return (
      <Popover
        isLazy
        closeOnBlur={false}
        trigger="hover"
        variant="informational"
        openDelay={OPEN_DELAY}
        modifiers={POPPER_MODIFIERS}
        placement="top"
        {...popoverProps}
      >
        <PopoverTrigger>
          <Box ref={ref} w="full" {...wrapperProps}>
            {children}
          </Box>
        </PopoverTrigger>
        <Portal>
          <PopoverContent w={96}>
            <PopoverCloseButton />
            <PopoverBody>
              <Flex
                sx={{
                  gap: 2,
                  flexDirection: 'column',
                  alignItems: 'flex-start',
                }}
              >
                {heading && (
                  <>
                    <Heading size="sm">{heading}</Heading>
                    <Divider />
                  </>
                )}
                {data?.image && (
                  <>
                    <Image
                      sx={{
                        objectFit: 'contain',
                        maxW: '60%',
                        maxH: '60%',
                        backgroundColor: 'white',
                      }}
                      src={data.image}
                      alt="Optional Image"
                    />
                    <Divider />
                  </>
                )}
                {paragraphs.map((p) => (
                  <Text key={p}>{p}</Text>
                ))}
                {data?.href && (
                  <>
                    <Divider />
                    <Button
                      pt={1}
                      onClick={handleClick}
                      leftIcon={<FaExternalLinkAlt />}
                      alignSelf="flex-end"
                      variant="link"
                    >
                      {t('common.learnMore') ?? heading}
                    </Button>
                  </>
                )}
              </Flex>
            </PopoverBody>
          </PopoverContent>
        </Portal>
      </Popover>
    );
  }
);

IAIInformationalPopover.displayName = 'IAIInformationalPopover';

export default memo(IAIInformationalPopover);

@ -0,0 +1,98 @@
import { PopoverProps } from '@chakra-ui/react';

export type Feature =
  | 'clipSkip'
  | 'paramNegativeConditioning'
  | 'paramPositiveConditioning'
  | 'paramScheduler'
  | 'compositingBlur'
  | 'compositingBlurMethod'
  | 'compositingCoherencePass'
  | 'compositingCoherenceMode'
  | 'compositingCoherenceSteps'
  | 'compositingStrength'
  | 'compositingMaskAdjustments'
  | 'controlNetBeginEnd'
  | 'controlNetControlMode'
  | 'controlNetResizeMode'
  | 'controlNet'
  | 'controlNetWeight'
  | 'dynamicPrompts'
  | 'dynamicPromptsMaxPrompts'
  | 'dynamicPromptsSeedBehaviour'
  | 'infillMethod'
  | 'lora'
  | 'noiseUseCPU'
  | 'paramCFGScale'
  | 'paramDenoisingStrength'
  | 'paramIterations'
  | 'paramModel'
  | 'paramRatio'
  | 'paramSeed'
  | 'paramSteps'
  | 'paramVAE'
  | 'paramVAEPrecision'
  | 'scaleBeforeProcessing';

export type PopoverData = PopoverProps & {
  image?: string;
  href?: string;
  buttonLabel?: string;
};

export const POPOVER_DATA: { [key in Feature]?: PopoverData } = {
  paramNegativeConditioning: {
    placement: 'right',
  },
  controlNet: {
    href: 'https://support.invoke.ai/support/solutions/articles/151000105880',
  },
  lora: {
    href: 'https://support.invoke.ai/support/solutions/articles/151000159072',
  },
  compositingCoherenceMode: {
    href: 'https://support.invoke.ai/support/solutions/articles/151000158838',
  },
  infillMethod: {
    href: 'https://support.invoke.ai/support/solutions/articles/151000158841',
  },
  scaleBeforeProcessing: {
    href: 'https://support.invoke.ai/support/solutions/articles/151000158841',
  },
  paramIterations: {
    href: 'https://support.invoke.ai/support/solutions/articles/151000159073',
  },
  paramPositiveConditioning: {
    href: 'https://support.invoke.ai/support/solutions/articles/151000096606-tips-on-crafting-prompts',
    placement: 'right',
  },
  paramScheduler: {
    placement: 'right',
    href: 'https://support.invoke.ai/support/solutions/articles/151000159073',
  },
  paramModel: {
    placement: 'right',
    href: 'https://support.invoke.ai/support/solutions/articles/151000096601-what-is-a-model-which-should-i-use-',
  },
  paramRatio: {
    gutter: 16,
  },
  controlNetControlMode: {
    placement: 'right',
  },
  controlNetResizeMode: {
    placement: 'right',
  },
  paramVAE: {
    placement: 'right',
  },
  paramVAEPrecision: {
    placement: 'right',
  },
} as const;

export const OPEN_DELAY = 1000; // in milliseconds

export const POPPER_MODIFIERS: PopoverProps['modifiers'] = [
  { name: 'preventOverflow', options: { padding: 10 } },
];
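
Two small patterns in the new constants file are worth noting: the mapped type `{ [key in Feature]?: PopoverData }` constrains keys to known feature names while leaving every entry optional, and the popover component reads its paragraph arrays through i18next's `returnObjects: true`. A reduced sketch of the same shape (the feature names and URL here are invented):

```ts
type Feature = 'seed' | 'steps' | 'cfgScale';

type PopoverData = {
  href?: string;
  placement?: 'top' | 'right';
};

// Mapped type: only valid Feature keys compile; features without extra
// config are simply omitted and fall back to the component's defaults.
export const POPOVER_DATA: { [key in Feature]?: PopoverData } = {
  seed: { placement: 'right' },
  steps: { href: 'https://example.com/docs/steps' }, // hypothetical URL
};

export const dataFor = (feature: Feature): PopoverData | undefined =>
  POPOVER_DATA[feature];

// With i18next, a translation key can hold an array of strings;
// `returnObjects: true` returns the array itself instead of a joined string:
//   const paragraphs = t(`popovers.${feature}.paragraphs`, {
//     returnObjects: true,
//   }) as string[];
```
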
@ -1,4 +1,4 @@
import { FormControl, FormLabel, Tooltip } from '@chakra-ui/react';
import { FormControl, FormLabel, Tooltip, forwardRef } from '@chakra-ui/react';
import { MultiSelect, MultiSelectProps } from '@mantine/core';
import { useAppDispatch } from 'app/store/storeHooks';
import { shiftKeyPressed } from 'features/ui/store/hotkeysSlice';

@ -11,7 +11,7 @@ type IAIMultiSelectProps = Omit<MultiSelectProps, 'label'> & {
  label?: string;
};

const IAIMantineMultiSelect = (props: IAIMultiSelectProps) => {
const IAIMantineMultiSelect = forwardRef((props: IAIMultiSelectProps, ref) => {
  const {
    searchable = true,
    tooltip,

@ -44,25 +44,23 @@ const IAIMantineMultiSelect = (props: IAIMultiSelectProps) => {

  return (
    <Tooltip label={tooltip} placement="top" hasArrow isOpen={true}>
      <MultiSelect
        label={
          label ? (
            <FormControl isDisabled={disabled}>
              <FormLabel>{label}</FormLabel>
            </FormControl>
          ) : undefined
        }
        ref={inputRef}
        disabled={disabled}
        onKeyDown={handleKeyDown}
        onKeyUp={handleKeyUp}
        searchable={searchable}
        maxDropdownHeight={300}
        styles={styles}
        {...rest}
      />
      <FormControl ref={ref} isDisabled={disabled} position="static">
        {label && <FormLabel>{label}</FormLabel>}
        <MultiSelect
          ref={inputRef}
          disabled={disabled}
          onKeyDown={handleKeyDown}
          onKeyUp={handleKeyUp}
          searchable={searchable}
          maxDropdownHeight={300}
          styles={styles}
          {...rest}
        />
      </FormControl>
    </Tooltip>
  );
};
});

IAIMantineMultiSelect.displayName = 'IAIMantineMultiSelect';

export default memo(IAIMantineMultiSelect);

@ -1,4 +1,4 @@
import { FormControl, FormLabel, Tooltip } from '@chakra-ui/react';
import { FormControl, FormLabel, Tooltip, forwardRef } from '@chakra-ui/react';
import { Select, SelectProps } from '@mantine/core';
import { useAppDispatch } from 'app/store/storeHooks';
import { shiftKeyPressed } from 'features/ui/store/hotkeysSlice';

@ -17,7 +17,7 @@ type IAISelectProps = Omit<SelectProps, 'label'> & {
  inputRef?: RefObject<HTMLInputElement>;
};

const IAIMantineSearchableSelect = (props: IAISelectProps) => {
const IAIMantineSearchableSelect = forwardRef((props: IAISelectProps, ref) => {
  const {
    searchable = true,
    tooltip,

@ -70,28 +70,26 @@ const IAIMantineSearchableSelect = (props: IAISelectProps) => {

  return (
    <Tooltip label={tooltip} placement="top" hasArrow>
      <Select
        ref={inputRef}
        label={
          label ? (
            <FormControl isDisabled={disabled}>
              <FormLabel>{label}</FormLabel>
            </FormControl>
          ) : undefined
        }
        disabled={disabled}
        searchValue={searchValue}
        onSearchChange={setSearchValue}
        onChange={handleChange}
        onKeyDown={handleKeyDown}
        onKeyUp={handleKeyUp}
        searchable={searchable}
        maxDropdownHeight={300}
        styles={styles}
        {...rest}
      />
      <FormControl ref={ref} isDisabled={disabled} position="static">
        {label && <FormLabel>{label}</FormLabel>}
        <Select
          ref={inputRef}
          disabled={disabled}
          searchValue={searchValue}
          onSearchChange={setSearchValue}
          onChange={handleChange}
          onKeyDown={handleKeyDown}
          onKeyUp={handleKeyUp}
          searchable={searchable}
          maxDropdownHeight={300}
          styles={styles}
          {...rest}
        />
      </FormControl>
    </Tooltip>
  );
};
});

IAIMantineSearchableSelect.displayName = 'IAIMantineSearchableSelect';

export default memo(IAIMantineSearchableSelect);

@ -1,4 +1,4 @@
import { FormControl, FormLabel, Tooltip } from '@chakra-ui/react';
import { FormControl, FormLabel, Tooltip, forwardRef } from '@chakra-ui/react';
import { Select, SelectProps } from '@mantine/core';
import { useMantineSelectStyles } from 'mantine-theme/hooks/useMantineSelectStyles';
import { RefObject, memo } from 'react';

@ -15,28 +15,26 @@ export type IAISelectProps = Omit<SelectProps, 'label'> & {
  label?: string;
};

const IAIMantineSelect = (props: IAISelectProps) => {
const IAIMantineSelect = forwardRef((props: IAISelectProps, ref) => {
  const { tooltip, inputRef, label, disabled, required, ...rest } = props;

  const styles = useMantineSelectStyles();

  return (
    <Tooltip label={tooltip} placement="top" hasArrow>
      <Select
        label={
          label ? (
            <FormControl isRequired={required} isDisabled={disabled}>
              <FormLabel>{label}</FormLabel>
            </FormControl>
          ) : undefined
        }
        disabled={disabled}
        ref={inputRef}
        styles={styles}
        {...rest}
      />
      <FormControl
        ref={ref}
        isRequired={required}
        isDisabled={disabled}
        position="static"
      >
        <FormLabel>{label}</FormLabel>
        <Select disabled={disabled} ref={inputRef} styles={styles} {...rest} />
      </FormControl>
    </Tooltip>
  );
};
});

IAIMantineSelect.displayName = 'IAIMantineSelect';

export default memo(IAIMantineSelect);
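
All three Mantine wrappers get the same treatment in this commit: the component is wrapped in Chakra's `forwardRef`, the label moves out of Mantine's `label` prop into a real `FormControl`/`FormLabel` pair, and the forwarded ref lands on the `FormControl` so parents can position against the whole field. A generic sketch of that wrapper shape, with a plain Chakra `Input` standing in for the Mantine component:

```tsx
import {
  FormControl,
  FormLabel,
  Input,
  Tooltip,
  forwardRef,
} from '@chakra-ui/react';
import { memo } from 'react';

type FieldProps = {
  label?: string;
  tooltip?: string;
  disabled?: boolean;
};

// Chakra's forwardRef keeps the prop types intact while exposing a DOM ref.
// The ref goes to the FormControl so wrappers like Tooltip measure the full
// label-plus-input block, not just the inner input.
const LabeledField = forwardRef(
  ({ label, tooltip, disabled }: FieldProps, ref) => (
    <Tooltip label={tooltip} placement="top" hasArrow>
      <FormControl ref={ref} isDisabled={disabled} position="static">
        {label && <FormLabel>{label}</FormLabel>}
        <Input isDisabled={disabled} />
      </FormControl>
    </Tooltip>
  )
);

LabeledField.displayName = 'LabeledField';

export default memo(LabeledField);
```
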
@ -13,6 +13,7 @@ import {
  NumberInputStepperProps,
  Tooltip,
  TooltipProps,
  forwardRef,
} from '@chakra-ui/react';
import { useAppDispatch } from 'app/store/storeHooks';
import { stopPastePropagation } from 'common/util/stopPastePropagation';

@ -50,7 +51,7 @@ interface Props extends Omit<NumberInputProps, 'onChange'> {
/**
 * Customized Chakra FormControl + NumberInput multi-part component.
 */
const IAINumberInput = (props: Props) => {
const IAINumberInput = forwardRef((props: Props, ref) => {
  const {
    label,
    isDisabled = false,

@ -141,6 +142,7 @@ const IAINumberInput = (props: Props) => {
  return (
    <Tooltip {...tooltipProps}>
      <FormControl
        ref={ref}
        isDisabled={isDisabled}
        isInvalid={isInvalid}
        {...formControlProps}

@ -172,6 +174,8 @@ const IAINumberInput = (props: Props) => {
      </FormControl>
    </Tooltip>
  );
};
});

IAINumberInput.displayName = 'IAINumberInput';

export default memo(IAINumberInput);

@ -22,6 +22,7 @@ import {
  SliderTrackProps,
  Tooltip,
  TooltipProps,
  forwardRef,
} from '@chakra-ui/react';
import { useAppDispatch } from 'app/store/storeHooks';
import { roundDownToMultiple } from 'common/util/roundDownToMultiple';

@ -71,7 +72,7 @@ export type IAIFullSliderProps = {
  sliderIAIIconButtonProps?: IAIIconButtonProps;
};

const IAISlider = (props: IAIFullSliderProps) => {
const IAISlider = forwardRef((props: IAIFullSliderProps, ref) => {
  const [showTooltip, setShowTooltip] = useState(false);
  const {
    label,

@ -187,6 +188,7 @@ const IAISlider = (props: IAIFullSliderProps) => {

  return (
    <FormControl
      ref={ref}
      onClick={forceInputBlur}
      sx={
        isCompact

@ -354,6 +356,8 @@ const IAISlider = (props: IAIFullSliderProps) => {
    </HStack>
  </FormControl>
  );
};
});

IAISlider.displayName = 'IAISlider';

export default memo(IAISlider);

@ -72,4 +72,6 @@ const IAISwitch = (props: IAISwitchProps) => {
  );
};

IAISwitch.displayName = 'IAISwitch';

export default memo(IAISwitch);

@ -139,6 +139,11 @@ const IAICanvas = () => {
  const { handleDragStart, handleDragMove, handleDragEnd } =
    useCanvasDragMove();

  const handleContextMenu = useCallback(
    (e: KonvaEventObject<MouseEvent>) => e.evt.preventDefault(),
    []
  );

  useEffect(() => {
    if (!containerRef.current) {
      return;

@ -205,9 +210,7 @@ const IAICanvas = () => {
        onDragStart={handleDragStart}
        onDragMove={handleDragMove}
        onDragEnd={handleDragEnd}
        onContextMenu={(e: KonvaEventObject<MouseEvent>) =>
          e.evt.preventDefault()
        }
        onContextMenu={handleContextMenu}
        onWheel={handleWheel}
        draggable={(tool === 'move' || isStaging) && !isModifyingBoundingBox}
      >

@ -223,7 +226,11 @@ const IAICanvas = () => {
        >
          <IAICanvasObjectRenderer />
        </Layer>
        <Layer id="mask" visible={isMaskEnabled} listening={false}>
        <Layer
          id="mask"
          visible={isMaskEnabled && !isStaging}
          listening={false}
        >
          <IAICanvasMaskLines visible={true} listening={false} />
          <IAICanvasMaskCompositer listening={false} />
        </Layer>

@ -1,26 +1,27 @@
import { skipToken } from '@reduxjs/toolkit/dist/query';
import { Image, Rect } from 'react-konva';
import { memo } from 'react';
import { Image } from 'react-konva';
import { $authToken } from 'services/api/client';
import { useGetImageDTOQuery } from 'services/api/endpoints/images';
import useImage from 'use-image';
import { CanvasImage } from '../store/canvasTypes';
import { $authToken } from 'services/api/client';
import { memo } from 'react';
import IAICanvasImageErrorFallback from './IAICanvasImageErrorFallback';

type IAICanvasImageProps = {
  canvasImage: CanvasImage;
};
const IAICanvasImage = (props: IAICanvasImageProps) => {
  const { width, height, x, y, imageName } = props.canvasImage;
  const { x, y, imageName } = props.canvasImage;
  const { currentData: imageDTO, isError } = useGetImageDTOQuery(
    imageName ?? skipToken
  );
  const [image] = useImage(
  const [image, status] = useImage(
    imageDTO?.image_url ?? '',
    $authToken.get() ? 'use-credentials' : 'anonymous'
  );

  if (isError) {
    return <Rect x={x} y={y} width={width} height={height} fill="red" />;
  if (isError || status === 'failed') {
    return <IAICanvasImageErrorFallback canvasImage={props.canvasImage} />;
  }

  return <Image x={x} y={y} image={image} listening={false} />;

@ -0,0 +1,44 @@
import { useColorModeValue, useToken } from '@chakra-ui/react';
import { memo } from 'react';
import { useTranslation } from 'react-i18next';
import { Group, Rect, Text } from 'react-konva';
import { CanvasImage } from '../store/canvasTypes';

type IAICanvasImageErrorFallbackProps = {
  canvasImage: CanvasImage;
};
const IAICanvasImageErrorFallback = ({
  canvasImage,
}: IAICanvasImageErrorFallbackProps) => {
  const [errorColorLight, errorColorDark, fontColorLight, fontColorDark] =
    useToken('colors', ['base.400', 'base.500', 'base.700', 'base.900']);
  const errorColor = useColorModeValue(errorColorLight, errorColorDark);
  const fontColor = useColorModeValue(fontColorLight, fontColorDark);
  const { t } = useTranslation();
  return (
    <Group>
      <Rect
        x={canvasImage.x}
        y={canvasImage.y}
        width={canvasImage.width}
        height={canvasImage.height}
        fill={errorColor}
      />
      <Text
        x={canvasImage.x}
        y={canvasImage.y}
        width={canvasImage.width}
        height={canvasImage.height}
        align="center"
        verticalAlign="middle"
        fontFamily='"Inter Variable", sans-serif'
        fontSize={canvasImage.width / 16}
        fontStyle="600"
        text={t('common.imageFailedToLoad')}
        fill={fontColor}
      />
    </Group>
  );
};

export default memo(IAICanvasImageErrorFallback);
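
`useImage` returns both the image element and a loading status (`'loading' | 'loaded' | 'failed'`), which is what lets the canvas swap in the new `IAICanvasImageErrorFallback` instead of a bare red `Rect`. A minimal Konva sketch of status-driven rendering, with placeholder URL, sizes, and label:

```tsx
import { Image as KonvaImage, Rect, Text } from 'react-konva';
import useImage from 'use-image';

type Props = { url: string; x: number; y: number; w: number; h: number };

const CanvasImageWithFallback = ({ url, x, y, w, h }: Props) => {
  // status is 'loading' | 'loaded' | 'failed'
  const [image, status] = useImage(url, 'anonymous');

  if (status === 'failed') {
    // Render a labeled placeholder in place of the broken image.
    return (
      <>
        <Rect x={x} y={y} width={w} height={h} fill="#999" />
        <Text
          x={x}
          y={y}
          width={w}
          height={h}
          align="center"
          verticalAlign="middle"
          text="Image failed to load"
        />
      </>
    );
  }

  return <KonvaImage x={x} y={y} image={image} listening={false} />;
};

export default CanvasImageWithFallback;
```
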
@ -3,10 +3,9 @@ import { useAppSelector } from 'app/store/storeHooks';
import { canvasSelector } from 'features/canvas/store/canvasSelectors';
import { GroupConfig } from 'konva/lib/Group';
import { isEqual } from 'lodash-es';

import { memo } from 'react';
import { Group, Rect } from 'react-konva';
import IAICanvasImage from './IAICanvasImage';
import { memo } from 'react';

const selector = createSelector(
  [canvasSelector],

@ -15,11 +14,11 @@ const selector = createSelector(
      layerState,
      shouldShowStagingImage,
      shouldShowStagingOutline,
      boundingBoxCoordinates: { x, y },
      boundingBoxDimensions: { width, height },
      boundingBoxCoordinates: stageBoundingBoxCoordinates,
      boundingBoxDimensions: stageBoundingBoxDimensions,
    } = canvas;

    const { selectedImageIndex, images } = layerState.stagingArea;
    const { selectedImageIndex, images, boundingBox } = layerState.stagingArea;

    return {
      currentStagingAreaImage:

@ -30,10 +29,10 @@ const selector = createSelector(
      isOnLastImage: selectedImageIndex === images.length - 1,
      shouldShowStagingImage,
      shouldShowStagingOutline,
      x,
      y,
      width,
      height,
      x: boundingBox?.x ?? stageBoundingBoxCoordinates.x,
      y: boundingBox?.y ?? stageBoundingBoxCoordinates.y,
      width: boundingBox?.width ?? stageBoundingBoxDimensions.width,
      height: boundingBox?.height ?? stageBoundingBoxDimensions.height,
    };
  },
  {

@ -14,6 +14,7 @@ import {

import { skipToken } from '@reduxjs/toolkit/dist/query';
import { defaultSelectorOptions } from 'app/store/util/defaultMemoizeOptions';
import IAIButton from 'common/components/IAIButton';
import { memo, useCallback } from 'react';
import { useHotkeys } from 'react-hotkeys-hook';
import { useTranslation } from 'react-i18next';

@ -23,8 +24,8 @@ import {
  FaCheck,
  FaEye,
  FaEyeSlash,
  FaPlus,
  FaSave,
  FaTimes,
} from 'react-icons/fa';
import { useGetImageDTOQuery } from 'services/api/endpoints/images';
import { stagingAreaImageSaved } from '../store/actions';

@ -41,10 +42,10 @@ const selector = createSelector(
    } = canvas;

    return {
      currentIndex: selectedImageIndex,
      total: images.length,
      currentStagingAreaImage:
        images.length > 0 ? images[selectedImageIndex] : undefined,
      isOnFirstImage: selectedImageIndex === 0,
      isOnLastImage: selectedImageIndex === images.length - 1,
      shouldShowStagingImage,
      shouldShowStagingOutline,
    };

@ -55,10 +56,10 @@ const selector = createSelector(
const IAICanvasStagingAreaToolbar = () => {
  const dispatch = useAppDispatch();
  const {
    isOnFirstImage,
    isOnLastImage,
    currentStagingAreaImage,
    shouldShowStagingImage,
    currentIndex,
    total,
  } = useAppSelector(selector);

  const { t } = useTranslation();

@ -71,39 +72,6 @@ const IAICanvasStagingAreaToolbar = () => {
    dispatch(setShouldShowStagingOutline(false));
  }, [dispatch]);

  useHotkeys(
    ['left'],
    () => {
      handlePrevImage();
    },
    {
      enabled: () => true,
      preventDefault: true,
    }
  );

  useHotkeys(
    ['right'],
    () => {
      handleNextImage();
    },
    {
      enabled: () => true,
      preventDefault: true,
    }
  );

  useHotkeys(
    ['enter'],
    () => {
      handleAccept();
    },
    {
      enabled: () => true,
      preventDefault: true,
    }
  );

  const handlePrevImage = useCallback(
    () => dispatch(prevStagingAreaImage()),
    [dispatch]

@ -119,10 +87,45 @@ const IAICanvasStagingAreaToolbar = () => {
    [dispatch]
  );

  useHotkeys(['left'], handlePrevImage, {
    enabled: () => true,
    preventDefault: true,
  });

  useHotkeys(['right'], handleNextImage, {
    enabled: () => true,
    preventDefault: true,
  });

  useHotkeys(['enter'], handleAccept, {
    enabled: () => true,
    preventDefault: true,
  });

  const { data: imageDTO } = useGetImageDTOQuery(
    currentStagingAreaImage?.imageName ?? skipToken
  );

  const handleToggleShouldShowStagingImage = useCallback(() => {
    dispatch(setShouldShowStagingImage(!shouldShowStagingImage));
  }, [dispatch, shouldShowStagingImage]);

  const handleSaveToGallery = useCallback(() => {
    if (!imageDTO) {
      return;
    }

    dispatch(
      stagingAreaImageSaved({
        imageDTO,
      })
    );
  }, [dispatch, imageDTO]);

  const handleDiscardStagingArea = useCallback(() => {
    dispatch(discardStagedImages());
  }, [dispatch]);

  if (!currentStagingAreaImage) {
    return null;
  }

@ -131,11 +134,12 @@ const IAICanvasStagingAreaToolbar = () => {
    <Flex
      pos="absolute"
      bottom={4}
      gap={2}
      w="100%"
      align="center"
      justify="center"
      onMouseOver={handleMouseOver}
      onMouseOut={handleMouseOut}
      onMouseEnter={handleMouseOver}
      onMouseLeave={handleMouseOut}
    >
      <ButtonGroup isAttached borderRadius="base" shadow="dark-lg">
        <IAIIconButton

@ -144,16 +148,29 @@ const IAICanvasStagingAreaToolbar = () => {
          icon={<FaArrowLeft />}
          onClick={handlePrevImage}
          colorScheme="accent"
          isDisabled={isOnFirstImage}
          isDisabled={!shouldShowStagingImage}
        />
        <IAIButton
          colorScheme="accent"
          pointerEvents="none"
          isDisabled={!shouldShowStagingImage}
          sx={{
            background: 'base.600',
            _dark: {
              background: 'base.800',
            },
          }}
        >{`${currentIndex + 1}/${total}`}</IAIButton>
        <IAIIconButton
          tooltip={`${t('unifiedCanvas.next')} (Right)`}
          aria-label={`${t('unifiedCanvas.next')} (Right)`}
          icon={<FaArrowRight />}
          onClick={handleNextImage}
          colorScheme="accent"
          isDisabled={isOnLastImage}
          isDisabled={!shouldShowStagingImage}
        />
      </ButtonGroup>
      <ButtonGroup isAttached borderRadius="base" shadow="dark-lg">
        <IAIIconButton
          tooltip={`${t('unifiedCanvas.accept')} (Enter)`}
          aria-label={`${t('unifiedCanvas.accept')} (Enter)`}

@ -162,13 +179,19 @@ const IAICanvasStagingAreaToolbar = () => {
          colorScheme="accent"
        />
        <IAIIconButton
          tooltip={t('unifiedCanvas.showHide')}
          aria-label={t('unifiedCanvas.showHide')}
          tooltip={
            shouldShowStagingImage
              ? t('unifiedCanvas.showResultsOn')
              : t('unifiedCanvas.showResultsOff')
          }
          aria-label={
            shouldShowStagingImage
              ? t('unifiedCanvas.showResultsOn')
              : t('unifiedCanvas.showResultsOff')
          }
          data-alert={!shouldShowStagingImage}
          icon={shouldShowStagingImage ? <FaEye /> : <FaEyeSlash />}
          onClick={() =>
            dispatch(setShouldShowStagingImage(!shouldShowStagingImage))
          }
          onClick={handleToggleShouldShowStagingImage}
          colorScheme="accent"
        />
        <IAIIconButton

@ -176,24 +199,14 @@ const IAICanvasStagingAreaToolbar = () => {
          aria-label={t('unifiedCanvas.saveToGallery')}
          isDisabled={!imageDTO || !imageDTO.is_intermediate}
          icon={<FaSave />}
          onClick={() => {
            if (!imageDTO) {
              return;
            }

            dispatch(
              stagingAreaImageSaved({
                imageDTO,
              })
            );
          }}
          onClick={handleSaveToGallery}
          colorScheme="accent"
        />
        <IAIIconButton
          tooltip={t('unifiedCanvas.discardAll')}
          aria-label={t('unifiedCanvas.discardAll')}
          icon={<FaPlus style={{ transform: 'rotate(45deg)' }} />}
          onClick={() => dispatch(discardStagedImages())}
          icon={<FaTimes />}
          onClick={handleDiscardStagingArea}
          colorScheme="error"
          fontSize={20}
        />
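
The toolbar refactor moves the hotkey registrations below the `useCallback`-memoized handlers and passes the handlers directly, so `react-hotkeys-hook` re-binds only when a handler actually changes. (The handler itself must be passed, not a closure that merely returns it, which is why the accept binding above reads `handleAccept` rather than `() => handleAccept`.) A condensed sketch of that pattern, with invented handler names:

```tsx
import { useCallback } from 'react';
import { useHotkeys } from 'react-hotkeys-hook';

const useStagingHotkeys = (accept: () => void, next: () => void) => {
  // Memoize so the hotkey hook sees a stable reference between renders.
  const handleAccept = useCallback(() => accept(), [accept]);
  const handleNext = useCallback(() => next(), [next]);

  // Pass the handler itself; `() => handleAccept` would evaluate to the
  // function without ever calling it.
  useHotkeys(['enter'], handleAccept, { preventDefault: true });
  useHotkeys(['right'], handleNext, { preventDefault: true });
};

export default useStagingHotkeys;
```
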
@ -213,45 +213,45 @@ const IAICanvasBoundingBox = (props: IAICanvasBoundingBoxPreviewProps) => {
    [scaledStep]
  );

  const handleStartedTransforming = () => {
  const handleStartedTransforming = useCallback(() => {
    dispatch(setIsTransformingBoundingBox(true));
  };
  }, [dispatch]);

  const handleEndedTransforming = () => {
  const handleEndedTransforming = useCallback(() => {
    dispatch(setIsTransformingBoundingBox(false));
    dispatch(setIsMovingBoundingBox(false));
    dispatch(setIsMouseOverBoundingBox(false));
    setIsMouseOverBoundingBoxOutline(false);
  };
  }, [dispatch]);

  const handleStartedMoving = () => {
  const handleStartedMoving = useCallback(() => {
    dispatch(setIsMovingBoundingBox(true));
  };
  }, [dispatch]);

  const handleEndedModifying = () => {
  const handleEndedModifying = useCallback(() => {
    dispatch(setIsTransformingBoundingBox(false));
    dispatch(setIsMovingBoundingBox(false));
    dispatch(setIsMouseOverBoundingBox(false));
    setIsMouseOverBoundingBoxOutline(false);
  };
  }, [dispatch]);

  const handleMouseOver = () => {
  const handleMouseOver = useCallback(() => {
    setIsMouseOverBoundingBoxOutline(true);
  };
  }, []);

  const handleMouseOut = () => {
  const handleMouseOut = useCallback(() => {
    !isTransformingBoundingBox &&
      !isMovingBoundingBox &&
      setIsMouseOverBoundingBoxOutline(false);
  };
  }, [isMovingBoundingBox, isTransformingBoundingBox]);

  const handleMouseEnterBoundingBox = () => {
  const handleMouseEnterBoundingBox = useCallback(() => {
    dispatch(setIsMouseOverBoundingBox(true));
  };
  }, [dispatch]);

  const handleMouseLeaveBoundingBox = () => {
  const handleMouseLeaveBoundingBox = useCallback(() => {
    dispatch(setIsMouseOverBoundingBox(false));
  };
  }, [dispatch]);

  return (
    <Group {...rest}>

@ -1,4 +1,4 @@
import { ButtonGroup, Flex } from '@chakra-ui/react';
import { Box, ButtonGroup, Flex } from '@chakra-ui/react';
import { createSelector } from '@reduxjs/toolkit';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import IAIButton from 'common/components/IAIButton';

@ -135,11 +135,12 @@ const IAICanvasMaskOptions = () => {
            dispatch(setShouldPreserveMaskedArea(e.target.checked))
          }
        />
        <IAIColorPicker
          sx={{ paddingTop: 2, paddingBottom: 2 }}
          pickerColor={maskColor}
          onChange={(newColor) => dispatch(setMaskColor(newColor))}
        />
        <Box sx={{ paddingTop: 2, paddingBottom: 2 }}>
          <IAIColorPicker
            color={maskColor}
            onChange={(newColor) => dispatch(setMaskColor(newColor))}
          />
        </Box>
        <IAIButton size="sm" leftIcon={<FaSave />} onClick={handleSaveMask}>
          Save Mask
        </IAIButton>

@ -1,4 +1,4 @@
import { ButtonGroup, Flex } from '@chakra-ui/react';
import { ButtonGroup, Flex, Box } from '@chakra-ui/react';
import { createSelector } from '@reduxjs/toolkit';
import { stateSelector } from 'app/store/store';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';

@ -237,15 +237,18 @@ const IAICanvasToolChooserOptions = () => {
              sliderNumberInputProps={{ max: 500 }}
            />
          </Flex>
          <IAIColorPicker
          <Box
            sx={{
              width: '100%',
              paddingTop: 2,
              paddingBottom: 2,
            }}
            pickerColor={brushColor}
            onChange={(newColor) => dispatch(setBrushColor(newColor))}
          />
          >
            <IAIColorPicker
              color={brushColor}
              onChange={(newColor) => dispatch(setBrushColor(newColor))}
            />
          </Box>
        </Flex>
      </IAIPopover>
    </ButtonGroup>

@ -6,7 +6,7 @@ export const canvasSelector = (state: RootState): CanvasState => state.canvas;

export const isStagingSelector = createSelector(
  [stateSelector],
  ({ canvas }) => canvas.layerState.stagingArea.images.length > 0
  ({ canvas }) => canvas.batchIds.length > 0
);

export const initialCanvasImageSelector = (

@ -8,7 +8,6 @@ import { setAspectRatio } from 'features/parameters/store/generationSlice';
import { IRect, Vector2d } from 'konva/lib/types';
import { clamp, cloneDeep } from 'lodash-es';
import { RgbaColor } from 'react-colorful';
import { sessionCanceled } from 'services/api/thunks/session';
import { ImageDTO } from 'services/api/types';
import calculateCoordinates from '../util/calculateCoordinates';
import calculateScale from '../util/calculateScale';

@ -187,7 +186,7 @@ export const canvasSlice = createSlice({
      state.pastLayerStates.push(cloneDeep(state.layerState));

      state.layerState = {
        ...initialLayerState,
        ...cloneDeep(initialLayerState),
        objects: [
          {
            kind: 'image',

@ -201,6 +200,7 @@ export const canvasSlice = createSlice({
        ],
      };
      state.futureLayerStates = [];
      state.batchIds = [];

      const newScale = calculateScale(
        stageDimensions.width,

@ -350,11 +350,14 @@ export const canvasSlice = createSlice({
        state.pastLayerStates.shift();
      }

      state.layerState.stagingArea = { ...initialLayerState.stagingArea };
      state.layerState.stagingArea = cloneDeep(initialLayerState).stagingArea;

      state.futureLayerStates = [];
      state.shouldShowStagingOutline = true;
      state.shouldShowStagingOutline = true;
      state.shouldShowStagingImage = true;
      state.batchIds = [];
    },
    addFillRect: (state) => {
      const { boundingBoxCoordinates, boundingBoxDimensions, brushColor } =

@ -491,8 +494,9 @@ export const canvasSlice = createSlice({
    resetCanvas: (state) => {
      state.pastLayerStates.push(cloneDeep(state.layerState));

      state.layerState = initialLayerState;
      state.layerState = cloneDeep(initialLayerState);
      state.futureLayerStates = [];
      state.batchIds = [];
    },
    canvasResized: (
      state,

@ -617,25 +621,22 @@ export const canvasSlice = createSlice({
        return;
      }

      const currentIndex = state.layerState.stagingArea.selectedImageIndex;
      const length = state.layerState.stagingArea.images.length;
      const nextIndex = state.layerState.stagingArea.selectedImageIndex + 1;
      const lastIndex = state.layerState.stagingArea.images.length - 1;

      state.layerState.stagingArea.selectedImageIndex = Math.min(
        currentIndex + 1,
        length - 1
      );
      state.layerState.stagingArea.selectedImageIndex =
        nextIndex > lastIndex ? 0 : nextIndex;
    },
    prevStagingAreaImage: (state) => {
      if (!state.layerState.stagingArea.images.length) {
        return;
      }

      const currentIndex = state.layerState.stagingArea.selectedImageIndex;
      const prevIndex = state.layerState.stagingArea.selectedImageIndex - 1;
      const lastIndex = state.layerState.stagingArea.images.length - 1;

      state.layerState.stagingArea.selectedImageIndex = Math.max(
        currentIndex - 1,
        0
      );
      state.layerState.stagingArea.selectedImageIndex =
        prevIndex < 0 ? lastIndex : prevIndex;
    },
    commitStagingAreaImage: (state) => {
      if (!state.layerState.stagingArea.images.length) {

@ -657,13 +658,12 @@ export const canvasSlice = createSlice({
          ...imageToCommit,
        });
      }
      state.layerState.stagingArea = {
        ...initialLayerState.stagingArea,
      };
      state.layerState.stagingArea = cloneDeep(initialLayerState).stagingArea;

      state.futureLayerStates = [];
      state.shouldShowStagingOutline = true;
      state.shouldShowStagingImage = true;
      state.batchIds = [];
    },
    fitBoundingBoxToStage: (state) => {
      const {

@ -786,11 +786,6 @@ export const canvasSlice = createSlice({
    },
  },
  extraReducers: (builder) => {
    builder.addCase(sessionCanceled.pending, (state) => {
      if (!state.layerState.stagingArea.images.length) {
        state.layerState.stagingArea = initialLayerState.stagingArea;
      }
    });
    builder.addCase(setAspectRatio, (state, action) => {
      const ratio = action.payload;
      if (ratio) {
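
Several reducers in this hunk switch from assigning `initialLayerState` (or a shallow spread of it) to `cloneDeep(initialLayerState)`. The reason: Immer does not wrap newly-assigned objects in a draft, so mutating a just-assigned shared object later in the same reducer pass writes straight into the module-level constant, and a shallow spread still shares its nested objects with that constant. A minimal sketch of the hazard and the fix, using an invented slice:

```ts
import { createSlice } from '@reduxjs/toolkit';
import type { PayloadAction } from '@reduxjs/toolkit';
import { cloneDeep } from 'lodash-es';

type LayerState = { objects: string[] };
const initialLayerState: LayerState = { objects: [] };

const slice = createSlice({
  name: 'canvasSketch',
  initialState: { layerState: cloneDeep(initialLayerState) },
  reducers: {
    addObject: (state, action: PayloadAction<string>) => {
      state.layerState.objects.push(action.payload);
    },
    reset: (state) => {
      // BAD: `state.layerState = initialLayerState;` (or a shallow spread).
      // Immer does not draft objects you assign into state, so any mutation
      // of `state.layerState` later in the same reducer pass would write
      // into the shared constant; a spread still shares nested objects with
      // it. A deep clone severs every link.
      state.layerState = cloneDeep(initialLayerState);
    },
  },
});

export const { addObject, reset } = slice.actions;
export default slice.reducer;
```
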
@ -1,15 +1,18 @@
import { getCanvasBaseLayer } from './konvaInstanceProvider';
import { RootState } from 'app/store/store';
import { getCanvasBaseLayer } from './konvaInstanceProvider';
import { konvaNodeToBlob } from './konvaNodeToBlob';

/**
 * Get the canvas base layer blob, with or without bounding box according to `shouldCropToBoundingBoxOnSave`
 */
export const getBaseLayerBlob = async (state: RootState) => {
export const getBaseLayerBlob = async (
  state: RootState,
  alwaysUseBoundingBox: boolean = false
) => {
  const canvasBaseLayer = getCanvasBaseLayer();

  if (!canvasBaseLayer) {
    return;
    throw new Error('Problem getting base layer blob');
  }

  const {

@ -24,14 +27,15 @@ export const getBaseLayerBlob = async (state: RootState) => {

  const absPos = clonedBaseLayer.getAbsolutePosition();

  const boundingBox = shouldCropToBoundingBoxOnSave
    ? {
        x: boundingBoxCoordinates.x + absPos.x,
        y: boundingBoxCoordinates.y + absPos.y,
        width: boundingBoxDimensions.width,
        height: boundingBoxDimensions.height,
      }
    : clonedBaseLayer.getClientRect();
  const boundingBox =
    shouldCropToBoundingBoxOnSave || alwaysUseBoundingBox
      ? {
          x: boundingBoxCoordinates.x + absPos.x,
          y: boundingBoxCoordinates.y + absPos.y,
          width: boundingBoxDimensions.width,
          height: boundingBoxDimensions.height,
        }
      : clonedBaseLayer.getClientRect();

  return konvaNodeToBlob(clonedBaseLayer, boundingBox);
};
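
The `getBaseLayerBlob` change swaps a silent `return;` for a thrown `Error`, which keeps the declared return type honest and pushes failure handling up to callers that can surface a toast. A small sketch of that contract, with invented names and a plain console fallback in place of a toast:

```ts
// A helper that throws instead of returning undefined keeps its return
// type honest: callers get `Blob`, not `Blob | undefined`.
const getBlobOrThrow = async (
  canvas: HTMLCanvasElement | null
): Promise<Blob> => {
  if (!canvas) {
    throw new Error('Problem getting base layer blob');
  }
  return new Promise((resolve, reject) =>
    canvas.toBlob((blob) =>
      blob ? resolve(blob) : reject(new Error('toBlob returned null'))
    )
  );
};

// The caller decides how to report the failure.
export const saveCanvas = async (canvas: HTMLCanvasElement | null) => {
  try {
    const blob = await getBlobOrThrow(canvas);
    return URL.createObjectURL(blob);
  } catch (err) {
    console.error('Canvas save failed', err);
    return null;
  }
};
```
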
@ -1,12 +1,12 @@
|
||||
import { Box, Flex } from '@chakra-ui/react';
|
||||
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
|
||||
import { memo, useCallback } from 'react';
|
||||
import { ChangeEvent, memo, useCallback } from 'react';
|
||||
import { FaCopy, FaTrash } from 'react-icons/fa';
|
||||
import {
|
||||
ControlNetConfig,
|
||||
controlNetDuplicated,
|
||||
controlNetRemoved,
|
||||
controlNetToggled,
|
||||
controlNetIsEnabledChanged,
|
||||
} from '../store/controlNetSlice';
|
||||
import ParamControlNetModel from './parameters/ParamControlNetModel';
|
||||
import ParamControlNetWeight from './parameters/ParamControlNetWeight';
|
||||
@ -77,9 +77,17 @@ const ControlNet = (props: ControlNetProps) => {
|
||||
);
|
||||
}, [controlNetId, dispatch]);
|
||||
|
||||
const handleToggleIsEnabled = useCallback(() => {
|
||||
dispatch(controlNetToggled({ controlNetId }));
|
||||
}, [controlNetId, dispatch]);
|
||||
const handleToggleIsEnabled = useCallback(
|
||||
(e: ChangeEvent<HTMLInputElement>) => {
|
||||
dispatch(
|
||||
controlNetIsEnabledChanged({
|
||||
controlNetId,
|
||||
isEnabled: e.target.checked,
|
||||
})
|
||||
);
|
||||
},
|
||||
[controlNetId, dispatch]
|
||||
);
|
||||
|
||||
return (
|
||||
<Flex
|
||||
@ -106,8 +114,8 @@ const ControlNet = (props: ControlNetProps) => {
|
||||
sx={{
|
||||
w: 'full',
|
||||
minW: 0,
|
||||
opacity: isEnabled ? 1 : 0.5,
|
||||
pointerEvents: isEnabled ? 'auto' : 'none',
|
||||
// opacity: isEnabled ? 1 : 0.5,
|
||||
// pointerEvents: isEnabled ? 'auto' : 'none',
|
||||
transitionProperty: 'common',
|
||||
transitionDuration: '0.1s',
|
||||
}}
|
||||
|
@ -13,6 +13,7 @@ import {
|
||||
import { setHeight, setWidth } from 'features/parameters/store/generationSlice';
|
||||
import { activeTabNameSelector } from 'features/ui/store/uiSelectors';
|
||||
import { memo, useCallback, useMemo, useState } from 'react';
|
||||
import { useTranslation } from 'react-i18next';
|
||||
import { FaRulerVertical, FaSave, FaUndo } from 'react-icons/fa';
|
||||
import {
|
||||
useAddImageToBoardMutation,
|
||||
@ -26,7 +27,6 @@ import {
|
||||
ControlNetConfig,
|
||||
controlNetImageChanged,
|
||||
} from '../store/controlNetSlice';
|
||||
import { useTranslation } from 'react-i18next';
|
||||
|
||||
type Props = {
|
||||
controlNet: ControlNetConfig;
|
||||
@ -52,7 +52,6 @@ const ControlNetImagePreview = ({ isSmall, controlNet }: Props) => {
|
||||
controlImage: controlImageName,
|
||||
processedControlImage: processedControlImageName,
|
||||
processorType,
|
||||
isEnabled,
|
||||
controlNetId,
|
||||
} = controlNet;
|
||||
|
||||
@ -172,15 +171,13 @@ const ControlNetImagePreview = ({ isSmall, controlNet }: Props) => {
|
||||
h: isSmall ? 28 : 366, // magic no touch
|
||||
alignItems: 'center',
|
||||
justifyContent: 'center',
|
||||
pointerEvents: isEnabled ? 'auto' : 'none',
|
||||
opacity: isEnabled ? 1 : 0.5,
|
||||
}}
|
||||
>
|
||||
<IAIDndImage
|
||||
draggableData={draggableData}
|
||||
droppableData={droppableData}
|
||||
imageDTO={controlImage}
|
||||
isDropDisabled={shouldShowProcessedImage || !isEnabled}
|
||||
isDropDisabled={shouldShowProcessedImage}
|
||||
postUploadAction={postUploadAction}
|
||||
/>
|
||||
|
||||
@ -202,7 +199,6 @@ const ControlNetImagePreview = ({ isSmall, controlNet }: Props) => {
|
||||
droppableData={droppableData}
|
||||
imageDTO={processedControlImage}
|
||||
isUploadDisabled={true}
|
||||
isDropDisabled={!isEnabled}
|
||||
/>
|
||||
</Box>
|
||||
|
||||
|
@ -1,6 +1,7 @@
|
||||
import { memo } from 'react';
|
||||
import { ControlNetConfig } from '../store/controlNetSlice';
|
||||
import CannyProcessor from './processors/CannyProcessor';
|
||||
import ColorMapProcessor from './processors/ColorMapProcessor';
|
||||
import ContentShuffleProcessor from './processors/ContentShuffleProcessor';
|
||||
import HedProcessor from './processors/HedProcessor';
|
||||
import LineartAnimeProcessor from './processors/LineartAnimeProcessor';
|
||||
@ -30,6 +31,16 @@ const ControlNetProcessorComponent = (props: ControlNetProcessorProps) => {
|
||||
);
|
||||
}
|
||||
|
||||
if (processorNode.type === 'color_map_image_processor') {
|
||||
return (
|
||||
<ColorMapProcessor
|
||||
controlNetId={controlNetId}
|
||||
processorNode={processorNode}
|
||||
isEnabled={isEnabled}
|
||||
/>
|
||||
);
|
||||
}
|
||||
|
||||
if (processorNode.type === 'hed_image_processor') {
|
||||
return (
|
||||
<HedProcessor
|
||||
|
@ -3,8 +3,8 @@ import { stateSelector } from 'app/store/store';
|
||||
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
|
||||
import { defaultSelectorOptions } from 'app/store/util/defaultMemoizeOptions';
|
||||
import IAISwitch from 'common/components/IAISwitch';
|
||||
import { isIPAdapterEnableToggled } from 'features/controlNet/store/controlNetSlice';
|
||||
import { memo, useCallback } from 'react';
|
||||
import { isIPAdapterEnabledChanged } from 'features/controlNet/store/controlNetSlice';
|
||||
import { ChangeEvent, memo, useCallback } from 'react';
|
||||
import { useTranslation } from 'react-i18next';
|
||||
|
||||
const selector = createSelector(
|
||||
@ -22,9 +22,12 @@ const ParamIPAdapterFeatureToggle = () => {
|
||||
const dispatch = useAppDispatch();
|
||||
const { t } = useTranslation();
|
||||
|
||||
const handleChange = useCallback(() => {
|
||||
dispatch(isIPAdapterEnableToggled());
|
||||
}, [dispatch]);
|
||||
const handleChange = useCallback(
|
||||
(e: ChangeEvent<HTMLInputElement>) => {
|
||||
dispatch(isIPAdapterEnabledChanged(e.target.checked));
|
||||
},
|
||||
[dispatch]
|
||||
);
|
||||
|
||||
return (
|
||||
<IAISwitch
|
||||
|
@ -1,7 +1,9 @@
|
||||
import { Flex } from '@chakra-ui/react';
|
||||
import { createSelector } from '@reduxjs/toolkit';
|
||||
import { skipToken } from '@reduxjs/toolkit/dist/query';
|
||||
import { RootState } from 'app/store/store';
|
||||
import { stateSelector } from 'app/store/store';
|
||||
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
|
||||
import { defaultSelectorOptions } from 'app/store/util/defaultMemoizeOptions';
|
||||
import IAIDndImage from 'common/components/IAIDndImage';
|
||||
import IAIDndImageIcon from 'common/components/IAIDndImageIcon';
|
||||
import { IAINoContentFallback } from 'common/components/IAIImageFallback';
|
||||
@ -16,15 +18,17 @@ import { FaUndo } from 'react-icons/fa';
|
||||
import { useGetImageDTOQuery } from 'services/api/endpoints/images';
|
||||
import { PostUploadAction } from 'services/api/types';
|
||||
|
||||
const selector = createSelector(
|
||||
stateSelector,
|
||||
({ controlNet }) => {
|
||||
const { ipAdapterInfo } = controlNet;
|
||||
return { ipAdapterInfo };
|
||||
},
|
||||
defaultSelectorOptions
|
||||
);
|
||||
|
||||
const ParamIPAdapterImage = () => {
|
||||
const ipAdapterInfo = useAppSelector(
|
||||
(state: RootState) => state.controlNet.ipAdapterInfo
|
||||
);
|
||||
|
||||
const isIPAdapterEnabled = useAppSelector(
|
||||
(state: RootState) => state.controlNet.isIPAdapterEnabled
|
||||
);
|
||||
|
||||
const { ipAdapterInfo } = useAppSelector(selector);
|
||||
const dispatch = useAppDispatch();
|
||||
const { t } = useTranslation();
|
||||
|
||||
@ -71,8 +75,6 @@ const ParamIPAdapterImage = () => {
|
||||
droppableData={droppableData}
|
||||
draggableData={draggableData}
|
||||
postUploadAction={postUploadAction}
|
||||
isUploadDisabled={!isIPAdapterEnabled}
|
||||
isDropDisabled={!isIPAdapterEnabled}
|
||||
dropLabel={t('toast.setIPAdapterImage')}
|
||||
noContentFallback={
|
||||
<IAINoContentFallback
|
||||
|
@ -11,6 +11,9 @@ import { useTranslation } from 'react-i18next';
|
||||
import { useGetIPAdapterModelsQuery } from 'services/api/endpoints/models';
|
||||
|
||||
const ParamIPAdapterModelSelect = () => {
|
||||
const isEnabled = useAppSelector(
|
||||
(state: RootState) => state.controlNet.isIPAdapterEnabled
|
||||
);
|
||||
const ipAdapterModel = useAppSelector(
|
||||
(state: RootState) => state.controlNet.ipAdapterInfo.model
|
||||
);
|
||||
@ -90,6 +93,7 @@ const ParamIPAdapterModelSelect = () => {
|
||||
data={data}
|
||||
onChange={handleValueChanged}
|
||||
sx={{ width: '100%' }}
|
||||
disabled={!isEnabled}
|
||||
/>
|
||||
);
|
||||
};
|
||||
|
@ -10,7 +10,7 @@ import {
  Tooltip,
} from '@chakra-ui/react';
import { useAppDispatch } from 'app/store/storeHooks';
import IAIInformationalPopover from 'common/components/IAIInformationalPopover';
import IAIInformationalPopover from 'common/components/IAIInformationalPopover/IAIInformationalPopover';
import {
  ControlNetConfig,
  controlNetBeginStepPctChanged,

@ -50,7 +50,7 @@ const ParamControlNetBeginEnd = (props: Props) => {
  );

  return (
    <IAIInformationalPopover details="controlNetBeginEnd">
    <IAIInformationalPopover feature="controlNetBeginEnd">
      <FormControl isDisabled={!isEnabled}>
        <FormLabel>{t('controlnet.beginEndStepPercent')}</FormLabel>
        <HStack w="100%" gap={2} alignItems="center">
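The two edits in this hunk repeat verbatim across the ControlNet parameter components that follow: the popover import gains a directory level (`IAIInformationalPopover/IAIInformationalPopover`), and the free-form `details` prop becomes `feature`, which reads like a key into a registry of popover copy. A sketch of the assumed prop contract (not the component's actual source):

```ts
import { PropsWithChildren, ReactNode } from 'react';

// Assumed shape: `feature` selects pre-written help content by key,
// e.g. 'controlNetBeginEnd' or 'controlNetControlMode'.
type IAIInformationalPopoverProps = PropsWithChildren<{
  feature: string;
}>;

declare function IAIInformationalPopover(
  props: IAIInformationalPopoverProps
): ReactNode;
```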
@ -1,5 +1,5 @@
import { useAppDispatch } from 'app/store/storeHooks';
import IAIInformationalPopover from 'common/components/IAIInformationalPopover';
import IAIInformationalPopover from 'common/components/IAIInformationalPopover/IAIInformationalPopover';
import IAIMantineSelect from 'common/components/IAIMantineSelect';
import {
  ControlModes,

@ -35,7 +35,7 @@ export default function ParamControlNetControlMode(
  );

  return (
    <IAIInformationalPopover details="controlNetControlMode">
    <IAIInformationalPopover feature="controlNetControlMode">
      <IAIMantineSelect
        disabled={!isEnabled}
        label={t('controlnet.controlMode')}

@ -3,7 +3,7 @@ import { createSelector } from '@reduxjs/toolkit';
import { stateSelector } from 'app/store/store';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import { defaultSelectorOptions } from 'app/store/util/defaultMemoizeOptions';
import IAIInformationalPopover from 'common/components/IAIInformationalPopover';
import IAIInformationalPopover from 'common/components/IAIInformationalPopover/IAIInformationalPopover';
import IAISwitch from 'common/components/IAISwitch';
import { isControlNetEnabledToggled } from 'features/controlNet/store/controlNetSlice';
import { memo, useCallback } from 'react';

@ -28,7 +28,7 @@ const ParamControlNetFeatureToggle = () => {

  return (
    <Box width="100%">
      <IAIInformationalPopover details="controlNetToggle">
      <IAIInformationalPopover feature="controlNet">
        <IAISwitch
          label="Enable ControlNet"
          isChecked={isEnabled}

@ -1,5 +1,5 @@
import { useAppDispatch } from 'app/store/storeHooks';
import IAIInformationalPopover from 'common/components/IAIInformationalPopover';
import IAIInformationalPopover from 'common/components/IAIInformationalPopover/IAIInformationalPopover';
import IAIMantineSelect from 'common/components/IAIMantineSelect';
import {
  ControlNetConfig,

@ -34,7 +34,7 @@ export default function ParamControlNetResizeMode(
  );

  return (
    <IAIInformationalPopover details="controlNetResizeMode">
    <IAIInformationalPopover feature="controlNetResizeMode">
      <IAIMantineSelect
        disabled={!isEnabled}
        label={t('controlnet.resizeMode')}

@ -1,5 +1,5 @@
import { useAppDispatch } from 'app/store/storeHooks';
import IAIInformationalPopover from 'common/components/IAIInformationalPopover';
import IAIInformationalPopover from 'common/components/IAIInformationalPopover/IAIInformationalPopover';
import IAISlider from 'common/components/IAISlider';
import {
  ControlNetConfig,

@ -24,7 +24,7 @@ const ParamControlNetWeight = (props: ParamControlNetWeightProps) => {
  );

  return (
    <IAIInformationalPopover details="controlNetWeight">
    <IAIInformationalPopover feature="controlNetWeight">
      <IAISlider
        isDisabled={!isEnabled}
        label={t('controlnet.weight')}
@ -0,0 +1,59 @@
import IAISlider from 'common/components/IAISlider';
import { CONTROLNET_PROCESSORS } from 'features/controlNet/store/constants';
import { RequiredColorMapImageProcessorInvocation } from 'features/controlNet/store/types';
import { memo, useCallback } from 'react';
import { useTranslation } from 'react-i18next';
import { useProcessorNodeChanged } from '../hooks/useProcessorNodeChanged';
import ProcessorWrapper from './common/ProcessorWrapper';

const DEFAULTS = CONTROLNET_PROCESSORS.color_map_image_processor
  .default as RequiredColorMapImageProcessorInvocation;

type ColorMapProcessorProps = {
  controlNetId: string;
  processorNode: RequiredColorMapImageProcessorInvocation;
  isEnabled: boolean;
};

const ColorMapProcessor = (props: ColorMapProcessorProps) => {
  const { controlNetId, processorNode, isEnabled } = props;
  const { color_map_tile_size } = processorNode;
  const processorChanged = useProcessorNodeChanged();
  const { t } = useTranslation();

  const handleColorMapTileSizeChanged = useCallback(
    (v: number) => {
      processorChanged(controlNetId, { color_map_tile_size: v });
    },
    [controlNetId, processorChanged]
  );

  const handleColorMapTileSizeReset = useCallback(() => {
    processorChanged(controlNetId, {
      color_map_tile_size: DEFAULTS.color_map_tile_size,
    });
  }, [controlNetId, processorChanged]);

  return (
    <ProcessorWrapper>
      <IAISlider
        isDisabled={!isEnabled}
        label={t('controlnet.colorMapTileSize')}
        value={color_map_tile_size}
        onChange={handleColorMapTileSizeChanged}
        handleReset={handleColorMapTileSizeReset}
        withReset
        min={1}
        max={256}
        step={1}
        withInput
        withSliderMarks
        sliderNumberInputProps={{
          max: 4096,
        }}
      />
    </ProcessorWrapper>
  );
};

export default memo(ColorMapProcessor);
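Two small design points in the new component are easy to miss: the slider is bounded to 1–256 in unit steps, while `sliderNumberInputProps={{ max: 4096 }}` lets the paired numeric input accept much larger tile sizes without stretching the slider's usable resolution; and the reset handler pulls its value from `DEFAULTS`, the same `default` object registered under `color_map_image_processor` in `CONTROLNET_PROCESSORS` (see the constants diff below), so the UI reset and the store's initial value cannot drift apart.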
@ -4,5 +4,9 @@ import { PropsWithChildren } from 'react';
type Props = PropsWithChildren;

export default function ProcessorWrapper(props: Props) {
  return <Flex sx={{ flexDirection: 'column', gap: 2 }}>{props.children}</Flex>;
  return (
    <Flex sx={{ flexDirection: 'column', gap: 2, pb: 2 }}>
      {props.children}
    </Flex>
  );
}

@ -1,8 +1,8 @@
import i18n from 'i18next';
import {
  ControlNetProcessorType,
  RequiredControlNetProcessorNode,
} from './types';
import i18n from 'i18next';

type ControlNetProcessorsDict = Record<
  ControlNetProcessorType,

@ -50,6 +50,20 @@ export const CONTROLNET_PROCESSORS: ControlNetProcessorsDict = {
      high_threshold: 200,
    },
  },
  color_map_image_processor: {
    type: 'color_map_image_processor',
    get label() {
      return i18n.t('controlnet.colorMap');
    },
    get description() {
      return i18n.t('controlnet.colorMapDescription');
    },
    default: {
      id: 'color_map_image_processor',
      type: 'color_map_image_processor',
      color_map_tile_size: 64,
    },
  },
  content_shuffle_image_processor: {
    type: 'content_shuffle_image_processor',
    get label() {
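The new `color_map_image_processor` entry keeps the file's getter-based i18n convention: `label` and `description` are property getters, so `i18n.t()` runs each time the property is read, after i18n has initialized and against the active locale, rather than once at module load. A minimal illustration of the difference:

```ts
// Stand-in for i18n.t, for illustration only.
let locale = 'en';
const t = (key: string) => `${locale}:${key}`;

const eager = { label: t('controlnet.colorMap') }; // evaluated at module load
const lazy = {
  get label() {
    return t('controlnet.colorMap'); // evaluated on each access
  },
};

locale = 'de';
console.log(eager.label); // "en:controlnet.colorMap" — frozen at load time
console.log(lazy.label); // "de:controlnet.colorMap" — tracks the current locale
```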
@ -240,4 +254,5 @@ export const CONTROLNET_MODEL_DEFAULT_PROCESSORS: {
  mediapipe: 'mediapipe_face_processor',
  pidi: 'pidi_image_processor',
  zoe: 'zoe_depth_image_processor',
  color: 'color_map_image_processor',
};
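The `color` entry extends `CONTROLNET_MODEL_DEFAULT_PROCESSORS`, which maps model-name fragments to a default processor. The lookup logic is not part of this diff; a hedged sketch of how such a table is plausibly consulted (substring matching is an assumption, not confirmed by the diff):

```ts
const CONTROLNET_MODEL_DEFAULT_PROCESSORS: Record<string, string> = {
  mediapipe: 'mediapipe_face_processor',
  pidi: 'pidi_image_processor',
  zoe: 'zoe_depth_image_processor',
  color: 'color_map_image_processor',
};

// Assumption: a model whose name contains a key gets that key's processor.
const defaultProcessorFor = (modelName: string): string | undefined => {
  const key = Object.keys(CONTROLNET_MODEL_DEFAULT_PROCESSORS).find((k) =>
    modelName.includes(k)
  );
  return key ? CONTROLNET_MODEL_DEFAULT_PROCESSORS[key] : undefined;
};

console.log(defaultProcessorFor('controlnet-color-sdxl')); // 'color_map_image_processor'
```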
@ -6,7 +6,6 @@ import {
import { cloneDeep, forEach } from 'lodash-es';
import { imagesApi } from 'services/api/endpoints/images';
import { components } from 'services/api/schema';
import { isAnySessionRejected } from 'services/api/thunks/session';
import { ImageDTO } from 'services/api/types';
import { appSocketInvocationError } from 'services/events/actions';
import { controlNetImageProcessed } from './actions';

@ -99,6 +98,9 @@ export const controlNetSlice = createSlice({
    isControlNetEnabledToggled: (state) => {
      state.isEnabled = !state.isEnabled;
    },
    controlNetEnabled: (state) => {
      state.isEnabled = true;
    },
    controlNetAdded: (
      state,
      action: PayloadAction<{
@ -112,6 +114,12 @@ export const controlNetSlice = createSlice({
        controlNetId,
      };
    },
    controlNetRecalled: (state, action: PayloadAction<ControlNetConfig>) => {
      const controlNet = action.payload;
      state.controlNets[controlNet.controlNetId] = {
        ...controlNet,
      };
    },
    controlNetDuplicated: (
      state,
      action: PayloadAction<{
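Unlike `controlNetAdded`, which receives a freshly minted id, the new `controlNetRecalled` reducer writes a complete `ControlNetConfig` back under its own `controlNetId`. The name suggests parameter recall (restoring a previously serialized config, e.g. from saved metadata); the diff does not show its call sites, so the usage below is a hedged sketch:

```ts
// Import paths/types assumed from imports visible elsewhere in this diff.
import {
  ControlNetConfig,
  controlNetRecalled,
} from 'features/controlNet/store/controlNetSlice';

declare const dispatch: (action: unknown) => void;
declare const savedConfig: ControlNetConfig; // e.g. parsed from saved metadata

// Reinstates the config under its original controlNetId, overwriting any
// existing entry with that id.
dispatch(controlNetRecalled(savedConfig));
```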
@ -146,16 +154,16 @@ export const controlNetSlice = createSlice({
      const { controlNetId } = action.payload;
      delete state.controlNets[controlNetId];
    },
    controlNetToggled: (
    controlNetIsEnabledChanged: (
      state,
      action: PayloadAction<{ controlNetId: string }>
      action: PayloadAction<{ controlNetId: string; isEnabled: boolean }>
    ) => {
      const { controlNetId } = action.payload;
      const { controlNetId, isEnabled } = action.payload;
      const cn = state.controlNets[controlNetId];
      if (!cn) {
        return;
      }
      cn.isEnabled = !cn.isEnabled;
      cn.isEnabled = isEnabled;
    },
    controlNetImageChanged: (
      state,
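This is the per-ControlNet analogue of the IP Adapter change at the top of the diff: `controlNetToggled` becomes `controlNetIsEnabledChanged`, and the payload now states what the flag should be instead of asking the reducer to flip it. A sketch of the corresponding call-site shape (the component wiring is assumed):

```ts
import { ChangeEvent } from 'react';

declare const dispatch: (action: unknown) => void;
declare const controlNetId: string;
// Stub matching the reducer's new payload; the real action creator is
// exported from controlNetSlice.
declare const controlNetIsEnabledChanged: (payload: {
  controlNetId: string;
  isEnabled: boolean;
}) => unknown;

// Explicit intent: replaying or double-firing this event cannot leave the
// switch in the opposite of the state the user sees.
const onChange = (e: ChangeEvent<HTMLInputElement>) =>
  dispatch(
    controlNetIsEnabledChanged({ controlNetId, isEnabled: e.target.checked })
  );
```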
@ -377,8 +385,8 @@ export const controlNetSlice = createSlice({
    controlNetReset: () => {
      return { ...initialControlNetState };
    },
    isIPAdapterEnableToggled: (state) => {
      state.isIPAdapterEnabled = !state.isIPAdapterEnabled;
    isIPAdapterEnabledChanged: (state, action: PayloadAction<boolean>) => {
      state.isIPAdapterEnabled = action.payload;
    },
    ipAdapterImageChanged: (state, action: PayloadAction<ImageDTO | null>) => {
      state.ipAdapterInfo.adapterImage = action.payload;
@ -418,10 +426,6 @@ export const controlNetSlice = createSlice({
      state.pendingControlImages = [];
    });

    builder.addMatcher(isAnySessionRejected, (state) => {
      state.pendingControlImages = [];
    });

    builder.addMatcher(
      imagesApi.endpoints.deleteImage.matchFulfilled,
      (state, action) => {

@ -444,13 +448,15 @@ export const controlNetSlice = createSlice({

export const {
  isControlNetEnabledToggled,
  controlNetEnabled,
  controlNetAdded,
  controlNetRecalled,
  controlNetDuplicated,
  controlNetAddedFromImage,
  controlNetRemoved,
  controlNetImageChanged,
  controlNetProcessedImageChanged,
  controlNetToggled,
  controlNetIsEnabledChanged,
  controlNetModelChanged,
  controlNetWeightChanged,
  controlNetBeginStepPctChanged,

@ -461,7 +467,7 @@ export const {
  controlNetProcessorTypeChanged,
  controlNetReset,
  controlNetAutoConfigToggled,
  isIPAdapterEnableToggled,
  isIPAdapterEnabledChanged,
  ipAdapterImageChanged,
  ipAdapterWeightChanged,
  ipAdapterModelChanged,
@ -1,6 +1,7 @@
import { isObject } from 'lodash-es';
import {
  CannyImageProcessorInvocation,
  ColorMapImageProcessorInvocation,
  ContentShuffleImageProcessorInvocation,
  HedImageProcessorInvocation,
  LineartAnimeImageProcessorInvocation,

@ -20,6 +21,7 @@ import { O } from 'ts-toolbelt';
 */
export type ControlNetProcessorNode =
  | CannyImageProcessorInvocation
  | ColorMapImageProcessorInvocation
  | ContentShuffleImageProcessorInvocation
  | HedImageProcessorInvocation
  | LineartAnimeImageProcessorInvocation

@ -47,6 +49,14 @@ export type RequiredCannyImageProcessorInvocation = O.Required<
  'type' | 'low_threshold' | 'high_threshold'
>;

/**
 * The Color Map processor node, with parameters flagged as required
 */
export type RequiredColorMapImageProcessorInvocation = O.Required<
  ColorMapImageProcessorInvocation,
  'type' | 'color_map_tile_size'
>;

/**
 * The ContentShuffle processor node, with parameters flagged as required
 */
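`O.Required` from ts-toolbelt upgrades only the listed keys to required, leaving everything else optional, which is how the store-side types guarantee that a configured processor always carries its `type` and its tunable parameters. A self-contained illustration (field list abbreviated):

```ts
import { O } from 'ts-toolbelt';

// Abbreviated stand-in for the generated invocation type, where every
// field is optional by default.
type ColorMapImageProcessorInvocation = {
  id?: string;
  type?: 'color_map_image_processor';
  color_map_tile_size?: number;
};

type RequiredColorMap = O.Required<
  ColorMapImageProcessorInvocation,
  'type' | 'color_map_tile_size'
>;

// Compiles: `id` may still be omitted, but the flagged keys must be set.
const node: RequiredColorMap = {
  type: 'color_map_image_processor',
  color_map_tile_size: 64,
};
console.log(node.color_map_tile_size);
```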
@ -140,6 +150,7 @@ export type RequiredZoeDepthImageProcessorInvocation = O.Required<
 */
export type RequiredControlNetProcessorNode = O.Required<
  | RequiredCannyImageProcessorInvocation
  | RequiredColorMapImageProcessorInvocation
  | RequiredContentShuffleImageProcessorInvocation
  | RequiredHedImageProcessorInvocation
  | RequiredLineartAnimeImageProcessorInvocation

@ -166,6 +177,22 @@ export const isCannyImageProcessorInvocation = (
  return false;
};

/**
 * Type guard for ColorMapImageProcessorInvocation
 */
export const isColorMapImageProcessorInvocation = (
  obj: unknown
): obj is ColorMapImageProcessorInvocation => {
  if (
    isObject(obj) &&
    'type' in obj &&
    obj.type === 'color_map_image_processor'
  ) {
    return true;
  }
  return false;
};

/**
 * Type guard for ContentShuffleImageProcessorInvocation
 */
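The new guard narrows `unknown` to the color-map invocation by checking the `type` discriminant, same as the file's other guards. A usage sketch:

```ts
// Assumes the guard exported above; the narrowed type is abbreviated here.
declare const isColorMapImageProcessorInvocation: (
  obj: unknown
) => obj is { type: 'color_map_image_processor'; color_map_tile_size?: number };

const maybeNode: unknown = JSON.parse(
  '{"type": "color_map_image_processor", "color_map_tile_size": 64}'
);

if (isColorMapImageProcessorInvocation(maybeNode)) {
  // Inside the branch the object is narrowed, so processor-specific
  // fields can be read without casting.
  console.log(maybeNode.color_map_tile_size ?? 'default tile size');
}
```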
@ -2,7 +2,6 @@ import {
  DragOverlay,
  MouseSensor,
  TouchSensor,
  pointerWithin,
  useSensor,
  useSensors,
} from '@dnd-kit/core';

@ -14,6 +13,7 @@ import { AnimatePresence, motion } from 'framer-motion';
import { PropsWithChildren, memo, useCallback, useState } from 'react';
import { useScaledModifer } from '../hooks/useScaledCenteredModifer';
import { DragEndEvent, DragStartEvent, TypesafeDraggableData } from '../types';
import { customPointerWithin } from '../util/customPointerWithin';
import { DndContextTypesafe } from './DndContextTypesafe';
import DragPreview from './DragPreview';

@ -77,7 +77,7 @@ const AppDndContext = (props: PropsWithChildren) => {
  onDragStart={handleDragStart}
  onDragEnd={handleDragEnd}
  sensors={sensors}
  collisionDetection={pointerWithin}
  collisionDetection={customPointerWithin}
  autoScroll={false}
>
  {props.children}
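`AppDndContext` now passes a local `customPointerWithin` instead of dnd-kit's stock `pointerWithin`. The util's source is not in this diff; a plausible shape, assuming the usual dnd-kit recipe of preferring pointer hits and falling back to rectangle intersection so fast drags don't lose their drop target:

```ts
import {
  CollisionDetection,
  pointerWithin,
  rectIntersection,
} from '@dnd-kit/core';

// Assumption: precise pointer collisions first; if the pointer is
// momentarily outside every droppable, fall back to rect intersection
// rather than reporting no collision at all.
export const customPointerWithin: CollisionDetection = (args) => {
  const pointerCollisions = pointerWithin(args);
  if (pointerCollisions.length > 0) {
    return pointerCollisions;
  }
  return rectIntersection(args);
};
```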
@ -87,7 +87,7 @@ const AppDndContext = (props: PropsWithChildren) => {
  style={{
    width: 'min-content',
    height: 'min-content',
    cursor: 'none',
    cursor: 'grabbing',
    userSelect: 'none',
    // expand overlay to prevent cursor from going outside it and displaying
    padding: '10rem',
Some files were not shown because too many files have changed in this diff.