Compare commits


1 Commit

20 changed files with 309 additions and 585 deletions

View File

@ -1,45 +0,0 @@
# Invocation API
Each invocation's `invoke` method is provided a single arg - the Invocation
Context.
This object provides access to various methods used to interact with the
application, such as loading and saving images and logging messages.
!!! warning ""
This API may shift slightly until the release of v4.0.0 as we work through a few final updates to the Model Manager.
```py
class MyInvocation(BaseInvocation):
...
def invoke(self, context: InvocationContext) -> ImageOutput:
image_pil = context.images.get_pil(image_name)  # name from one of the node's input fields
# Do something to the image
image_dto = context.images.save(image_pil)
# Log a message
context.logger.info("Did something cool, image saved!")
...
```
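Tensors and conditioning data get similarly small interfaces. A minimal sketch (the `tensor` and `cond_data` variables are illustrative):
```py
# The service generates and returns the names
tensor_name = context.tensors.save(tensor)
tensor = context.tensors.load(tensor_name)

cond_name = context.conditioning.save(cond_data)
cond_data = context.conditioning.load(cond_name)
```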
<!-- prettier-ignore-start -->
::: invokeai.app.services.shared.invocation_context.InvocationContext
options:
members: false
::: invokeai.app.services.shared.invocation_context.ImagesInterface
::: invokeai.app.services.shared.invocation_context.TensorsInterface
::: invokeai.app.services.shared.invocation_context.ConditioningInterface
::: invokeai.app.services.shared.invocation_context.ModelsInterface
::: invokeai.app.services.shared.invocation_context.LoggerInterface
::: invokeai.app.services.shared.invocation_context.ConfigInterface
::: invokeai.app.services.shared.invocation_context.UtilInterface
::: invokeai.app.services.shared.invocation_context.BoardsInterface
<!-- prettier-ignore-end -->

View File

@ -1,148 +0,0 @@
# Invoke v4.0.0 Nodes API Migration guide
Invoke v4.0.0 is versioned as such due to breaking changes to the API utilized
by nodes, both core and custom.
## Motivation
Prior to v4.0.0, the `invokeai` python package had not been set up to be utilized
as a library. That is to say, it did not have an explicit public API, and node
authors had to work with the unstable internal application API.
v4.0.0 introduces a stable public API for nodes.
## Changes
There are two node-author-facing changes:
1. Import Paths
1. Invocation Context API
### Import Paths
All public objects are now exported from `invokeai.invocation_api`:
```py
# Old
from invokeai.app.invocations.baseinvocation import (
BaseInvocation,
InputField,
InvocationContext,
invocation,
)
from invokeai.app.invocations.primitives import ImageField
# New
from invokeai.invocation_api import (
BaseInvocation,
ImageField,
InputField,
InvocationContext,
invocation,
)
```
It's possible that we've missed some classes you need in your node. Please let
us know if that's the case.
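If a custom node must run on both v3 and v4, one possible approach (a sketch, not an official recommendation) is to guard the new import path:
```py
try:
    # v4.0.0+ public API
    from invokeai.invocation_api import BaseInvocation, InputField, InvocationContext, invocation
except ImportError:
    # Fall back to the pre-v4.0.0 internal import paths
    from invokeai.app.invocations.baseinvocation import (
        BaseInvocation,
        InputField,
        InvocationContext,
        invocation,
    )
```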
### Invocation Context API
Most nodes utilize the Invocation Context, an object that is passed to the
`invoke` method and provides access to the data and services a node may need.
Until now, that object and the services it exposed were internal. Exposing them
to nodes meant that changes to our internal implementation could break nodes.
The methods on the services were also often fairly complicated and made it easy
for node authors to footgun.
In v4.0.0, this object has been refactored to be much simpler.
See [INVOCATION_API](./INVOCATION_API.md) for full details of the API.
!!! warning ""
This API may shift slightly until the release of v4.0.0 as we work through a few final updates to the Model Manager.
#### Improved Service Methods
The biggest offender was the image save method:
```py
# Old
image_dto = context.services.images.create(
image=image,
image_origin=ResourceOrigin.INTERNAL,
image_category=ImageCategory.GENERAL,
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
metadata=self.metadata,
workflow=context.workflow,
)
# New
image_dto = context.images.save(image=image)
```
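The optional parameters are still available when needed. A hedged sketch (the board and metadata are otherwise picked up automatically from the `WithBoard` and `WithMetadata` mixins):
```py
image_dto = context.images.save(
    image=image,
    board_id=board_id,  # only to override the board from `WithBoard`
    metadata=metadata,  # only to override the metadata from `WithMetadata`
)
```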
Other methods are simplified or enhanced with additional functionality:
```py
# Old
image = context.services.images.get_pil_image(image_name)
# New
image = context.images.get_pil(image_name)
image_cmyk = context.images.get_pil(image_name, "CMYK")
```
We also had some typing issues around tensors:
```py
# Old
# `latents` typed as `torch.Tensor`, but could be `ConditioningFieldData`
latents = context.services.latents.get(self.latents.latents_name)
# `data` typed as `torch.Tensor`, but could be `ConditioningFieldData`
context.services.latents.save(latents_name, data)
# New - separate methods for tensors and conditioning data w/ correct typing
# Also, the service generates the names
tensor_name = context.tensors.save(tensor)
tensor = context.tensors.load(tensor_name)
# For conditioning
cond_name = context.conditioning.save(cond_data)
cond_data = context.conditioning.load(cond_name)
```
#### Output Construction
Core Outputs have builder functions right on them - no need to manually
construct these objects or use an extra utility:
```py
# Old
image_output = ImageOutput(
image=ImageField(image_name=image_dto.image_name),
width=image_dto.width,
height=image_dto.height,
)
latents_output = build_latents_output(latents_name=name, latents=latents, seed=None)
noise_output = NoiseOutput(
noise=LatentsField(latents_name=latents_name, seed=seed),
width=latents.size()[3] * 8,
height=latents.size()[2] * 8,
)
cond_output = ConditioningOutput(
conditioning=ConditioningField(
conditioning_name=conditioning_name,
),
)
# New
image_output = ImageOutput.build(image_dto)
latents_output = LatentsOutput.build(latents_name=name, latents=latents, seed=self.seed)
noise_output = NoiseOutput.build(latents_name=name, latents=noise, seed=self.seed)
cond_output = ConditioningOutput.build(conditioning_name)
```
You can still create the objects using constructors if you want, but we suggest
using the builder methods.
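Putting the pieces together, a minimal migrated node might look like this sketch (the node id, title, and field are illustrative):
```py
from invokeai.invocation_api import (
    BaseInvocation,
    ImageField,
    ImageOutput,
    InputField,
    InvocationContext,
    invocation,
)


@invocation("my_invocation", title="My Invocation", version="1.0.0")
class MyInvocation(BaseInvocation):
    """An example node built against the v4 public API."""

    image: ImageField = InputField(description="The image to process")

    def invoke(self, context: InvocationContext) -> ImageOutput:
        image_pil = context.images.get_pil(self.image.image_name)
        # ... do something to the image ...
        image_dto = context.images.save(image=image_pil)
        return ImageOutput.build(image_dto)
```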

View File

@ -0,0 +1,5 @@
mkdocs
mkdocs-material>=8, <9
mkdocs-git-revision-date-localized-plugin
mkdocs-redirects==1.2.0

View File

@ -0,0 +1,5 @@
:root {
--md-primary-fg-color: #35A4DB;
--md-primary-fg-color--light: #35A4DB;
--md-primary-fg-color--dark: #35A4DB;
}

View File

@ -2,7 +2,6 @@
# which are imported/used before parse_args() is called will get the default config values instead of the
# values from the command line or config file.
import sys
from contextlib import asynccontextmanager
from invokeai.app.api.no_cache_staticfiles import NoCacheStaticFiles
from invokeai.version.invokeai_version import __version__
@ -72,25 +71,9 @@ logger = InvokeAILogger.get_logger(config=app_config)
mimetypes.add_type("application/javascript", ".js")
mimetypes.add_type("text/css", ".css")
@asynccontextmanager
async def lifespan(app: FastAPI):
# Add startup event to load dependencies
ApiDependencies.initialize(config=app_config, event_handler_id=event_handler_id, logger=logger)
yield
# Shut down threads
ApiDependencies.shutdown()
# Create the app
# TODO: create this all in a method so configuration/etc. can be passed in?
app = FastAPI(
title="Invoke - Community Edition",
docs_url=None,
redoc_url=None,
separate_input_output_schemas=False,
lifespan=lifespan,
)
app = FastAPI(title="Invoke - Community Edition", docs_url=None, redoc_url=None, separate_input_output_schemas=False)
# Add event handler
event_handler_id: int = id(app)
@ -113,6 +96,18 @@ app.add_middleware(
app.add_middleware(GZipMiddleware, minimum_size=1000)
# Add startup event to load dependencies
@app.on_event("startup")
async def startup_event() -> None:
ApiDependencies.initialize(config=app_config, event_handler_id=event_handler_id, logger=logger)
# Shut down threads
@app.on_event("shutdown")
async def shutdown_event() -> None:
ApiDependencies.shutdown()
# Include all routers
app.include_router(utilities.utilities_router, prefix="/api")
app.include_router(model_manager.model_manager_router, prefix="/api")

View File

@ -1,15 +1,24 @@
from typing import Iterator, List, Optional, Tuple, Union, cast
from typing import Iterator, List, Optional, Tuple, Union
import torch
from compel import Compel, ReturnedEmbeddingsType
from compel.prompt_parser import Blend, Conjunction, CrossAttentionControlSubstitute, FlattenedPrompt, Fragment
from transformers import CLIPTextModel, CLIPTextModelWithProjection, CLIPTokenizer
from transformers import CLIPTokenizer
from invokeai.app.invocations.fields import FieldDescriptions, Input, InputField, OutputField, UIComponent
import invokeai.backend.util.logging as logger
from invokeai.app.invocations.fields import (
FieldDescriptions,
Input,
InputField,
OutputField,
UIComponent,
)
from invokeai.app.invocations.primitives import ConditioningOutput
from invokeai.app.services.model_records import UnknownModelException
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.app.util.ti_utils import generate_ti_list
from invokeai.app.util.ti_utils import extract_ti_triggers_from_prompt
from invokeai.backend.lora import LoRAModelRaw
from invokeai.backend.model_manager import ModelType
from invokeai.backend.model_patcher import ModelPatcher
from invokeai.backend.stable_diffusion.diffusion.conditioning_data import (
BasicConditioningInfo,
@ -17,9 +26,15 @@ from invokeai.backend.stable_diffusion.diffusion.conditioning_data import (
ExtraConditioningInfo,
SDXLConditioningInfo,
)
from invokeai.backend.textual_inversion import TextualInversionModelRaw
from invokeai.backend.util.devices import torch_dtype
from .baseinvocation import BaseInvocation, BaseInvocationOutput, invocation, invocation_output
from .baseinvocation import (
BaseInvocation,
BaseInvocationOutput,
invocation,
invocation_output,
)
from .model import ClipField
# unconditioned: Optional[torch.Tensor]
@ -55,11 +70,7 @@ class CompelInvocation(BaseInvocation):
@torch.no_grad()
def invoke(self, context: InvocationContext) -> ConditioningOutput:
tokenizer_info = context.models.load(**self.clip.tokenizer.model_dump())
tokenizer_model = tokenizer_info.model
assert isinstance(tokenizer_model, CLIPTokenizer)
text_encoder_info = context.models.load(**self.clip.text_encoder.model_dump())
text_encoder_model = text_encoder_info.model
assert isinstance(text_encoder_model, CLIPTextModel)
def _lora_loader() -> Iterator[Tuple[LoRAModelRaw, float]]:
for lora in self.clip.loras:
@ -71,10 +82,23 @@ class CompelInvocation(BaseInvocation):
# loras = [(context.models.get(**lora.dict(exclude={"weight"})).context.model, lora.weight) for lora in self.clip.loras]
ti_list = generate_ti_list(self.prompt, text_encoder_info.config.base, context)
ti_list = []
for trigger in extract_ti_triggers_from_prompt(self.prompt):
name = trigger[1:-1]
try:
loaded_model = context.models.load_by_attrs(
model_name=name, base_model=text_encoder_info.config.base, model_type=ModelType.TextualInversion
).model
assert isinstance(loaded_model, TextualInversionModelRaw)
ti_list.append((name, loaded_model))
except UnknownModelException:
# print(e)
# import traceback
# print(traceback.format_exc())
print(f'Warn: trigger: "{trigger}" not found')
with (
ModelPatcher.apply_ti(tokenizer_model, text_encoder_model, ti_list) as (
ModelPatcher.apply_ti(tokenizer_info.model, text_encoder_info.model, ti_list) as (
tokenizer,
ti_manager,
),
@ -82,9 +106,8 @@ class CompelInvocation(BaseInvocation):
# Apply the LoRA after text_encoder has been moved to its target device for faster patching.
ModelPatcher.apply_lora_text_encoder(text_encoder, _lora_loader()),
# Apply CLIP Skip after LoRA to prevent LoRA application from failing on skipped layers.
ModelPatcher.apply_clip_skip(text_encoder_model, self.clip.skipped_layers),
ModelPatcher.apply_clip_skip(text_encoder_info.model, self.clip.skipped_layers),
):
assert isinstance(text_encoder, CLIPTextModel)
compel = Compel(
tokenizer=tokenizer,
text_encoder=text_encoder,
@ -134,11 +157,7 @@ class SDXLPromptInvocationBase:
zero_on_empty: bool,
) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[ExtraConditioningInfo]]:
tokenizer_info = context.models.load(**clip_field.tokenizer.model_dump())
tokenizer_model = tokenizer_info.model
assert isinstance(tokenizer_model, CLIPTokenizer)
text_encoder_info = context.models.load(**clip_field.text_encoder.model_dump())
text_encoder_model = text_encoder_info.model
assert isinstance(text_encoder_model, (CLIPTextModel, CLIPTextModelWithProjection))
# return zero on empty
if prompt == "" and zero_on_empty:
@ -172,10 +191,25 @@ class SDXLPromptInvocationBase:
# loras = [(context.models.get(**lora.dict(exclude={"weight"})).context.model, lora.weight) for lora in self.clip.loras]
ti_list = generate_ti_list(prompt, text_encoder_info.config.base, context)
ti_list = []
for trigger in extract_ti_triggers_from_prompt(prompt):
name = trigger[1:-1]
try:
ti_model = context.models.load_by_attrs(
model_name=name, base_model=text_encoder_info.config.base, model_type=ModelType.TextualInversion
).model
assert isinstance(ti_model, TextualInversionModelRaw)
ti_list.append((name, ti_model))
except UnknownModelException:
# print(e)
# import traceback
# print(traceback.format_exc())
logger.warning(f'trigger: "{trigger}" not found')
except ValueError:
logger.warning(f'trigger: "{trigger}" matched more than one similarly-named textual inversion model')
with (
ModelPatcher.apply_ti(tokenizer_model, text_encoder_model, ti_list) as (
ModelPatcher.apply_ti(tokenizer_info.model, text_encoder_info.model, ti_list) as (
tokenizer,
ti_manager,
),
@ -183,10 +217,8 @@ class SDXLPromptInvocationBase:
# Apply the LoRA after text_encoder has been moved to its target device for faster patching.
ModelPatcher.apply_lora(text_encoder, _lora_loader(), lora_prefix),
# Apply CLIP Skip after LoRA to prevent LoRA application from failing on skipped layers.
ModelPatcher.apply_clip_skip(text_encoder_model, clip_field.skipped_layers),
ModelPatcher.apply_clip_skip(text_encoder_info.model, clip_field.skipped_layers),
):
assert isinstance(text_encoder, (CLIPTextModel, CLIPTextModelWithProjection))
text_encoder = cast(CLIPTextModel, text_encoder)
compel = Compel(
tokenizer=tokenizer,
text_encoder=text_encoder,

View File

@ -576,7 +576,7 @@ DEPTH_ANYTHING_MODEL_SIZES = Literal["large", "base", "small"]
title="Depth Anything Processor",
tags=["controlnet", "depth", "depth anything"],
category="controlnet",
version="1.0.1",
version="1.0.0",
)
class DepthAnythingImageProcessorInvocation(ImageProcessorInvocation):
"""Generates a depth map based on the Depth Anything algorithm"""
@ -585,12 +585,13 @@ class DepthAnythingImageProcessorInvocation(ImageProcessorInvocation):
default="small", description="The size of the depth model to use"
)
resolution: int = InputField(default=512, ge=64, multiple_of=64, description=FieldDescriptions.image_res)
offload: bool = InputField(default=False)
def run_processor(self, image: Image.Image):
depth_anything_detector = DepthAnythingDetector()
depth_anything_detector.load_model(model_size=self.model_size)
processed_image = depth_anything_detector(image=image, resolution=self.resolution)
processed_image = depth_anything_detector(image=image, resolution=self.resolution, offload=self.offload)
return processed_image

View File

@ -93,7 +93,7 @@ class IPAdapterInvocation(BaseInvocation):
image_encoder_model_id = ip_adapter_info.image_encoder_model_id
image_encoder_model_name = image_encoder_model_id.split("/")[-1].strip()
image_encoder_models = context.models.search_by_attrs(
name=image_encoder_model_name, base=BaseModelType.Any, type=ModelType.CLIPVision
model_name=image_encoder_model_name, base_model=BaseModelType.Any, model_type=ModelType.CLIPVision
)
assert len(image_encoder_models) == 1
image_encoder_model = CLIPVisionModelField(key=image_encoder_models[0].key)

View File

@ -65,86 +65,75 @@ class InvocationContextInterface:
class BoardsInterface(InvocationContextInterface):
def create(self, board_name: str) -> BoardDTO:
"""Creates a board.
"""
Creates a board.
Args:
board_name: The name of the board to create.
Returns:
The created board DTO.
:param board_name: The name of the board to create.
"""
return self._services.boards.create(board_name)
def get_dto(self, board_id: str) -> BoardDTO:
"""Gets a board DTO.
"""
Gets a board DTO.
Args:
board_id: The ID of the board to get.
Returns:
The board DTO.
:param board_id: The ID of the board to get.
"""
return self._services.boards.get_dto(board_id)
def get_all(self) -> list[BoardDTO]:
"""Gets all boards.
Returns:
A list of all boards.
"""
Gets all boards.
"""
return self._services.boards.get_all()
def add_image_to_board(self, board_id: str, image_name: str) -> None:
"""Adds an image to a board.
"""
Adds an image to a board.
Args:
board_id: The ID of the board to add the image to.
image_name: The name of the image to add to the board.
:param board_id: The ID of the board to add the image to.
:param image_name: The name of the image to add to the board.
"""
return self._services.board_images.add_image_to_board(board_id, image_name)
def get_all_image_names_for_board(self, board_id: str) -> list[str]:
"""Gets all image names for a board.
"""
Gets all image names for a board.
Args:
board_id: The ID of the board to get the image names for.
Returns:
A list of all image names for the board.
:param board_id: The ID of the board to get the image names for.
"""
return self._services.board_images.get_all_board_image_names_for_board(board_id)
class LoggerInterface(InvocationContextInterface):
def debug(self, message: str) -> None:
"""Logs a debug message.
"""
Logs a debug message.
Args:
message: The message to log.
:param message: The message to log.
"""
self._services.logger.debug(message)
def info(self, message: str) -> None:
"""Logs an info message.
"""
Logs an info message.
Args:
message: The message to log.
:param message: The message to log.
"""
self._services.logger.info(message)
def warning(self, message: str) -> None:
"""Logs a warning message.
"""
Logs a warning message.
Args:
message: The message to log.
:param message: The message to log.
"""
self._services.logger.warning(message)
def error(self, message: str) -> None:
"""Logs an error message.
"""
Logs an error message.
Args:
message: The message to log.
:param message: The message to log.
"""
self._services.logger.error(message)
@ -157,23 +146,20 @@ class ImagesInterface(InvocationContextInterface):
image_category: ImageCategory = ImageCategory.GENERAL,
metadata: Optional[MetadataField] = None,
) -> ImageDTO:
"""Saves an image, returning its DTO.
"""
Saves an image, returning its DTO.
If the current queue item has a workflow or metadata, it is automatically saved with the image.
Args:
image: The image to save, as a PIL image.
board_id: The board ID to add the image to, if it should be added. If the invocation \
:param image: The image to save, as a PIL image.
:param board_id: The board ID to add the image to, if it should be added. If the invocation \
inherits from `WithBoard`, that board will be used automatically. **Use this only if \
you want to override or provide a board manually!**
image_category: The category of the image. Only the GENERAL category is added \
:param image_category: The category of the image. Only the GENERAL category is added \
to the gallery.
metadata: The metadata to save with the image, if it should have any. If the \
:param metadata: The metadata to save with the image, if it should have any. If the \
invocation inherits from `WithMetadata`, that metadata will be used automatically. \
**Use this only if you want to override or provide metadata manually!**
Returns:
The saved image DTO.
"""
# If `metadata` is provided directly, use that. Else, use the metadata provided by `WithMetadata`, falling back to None.
@ -203,14 +189,11 @@ class ImagesInterface(InvocationContextInterface):
)
def get_pil(self, image_name: str, mode: IMAGE_MODES | None = None) -> Image:
"""Gets an image as a PIL Image object.
"""
Gets an image as a PIL Image object.
Args:
image_name: The name of the image to get.
mode: The color mode to convert the image to. If None, the original mode is used.
Returns:
The image as a PIL Image object.
:param image_name: The name of the image to get.
:param mode: The color mode to convert the image to. If None, the original mode is used.
"""
image = self._services.images.get_pil_image(image_name)
if mode and mode != image.mode:
@ -223,76 +206,58 @@ class ImagesInterface(InvocationContextInterface):
return image
def get_metadata(self, image_name: str) -> Optional[MetadataField]:
"""Gets an image's metadata, if it has any.
"""
Gets an image's metadata, if it has any.
Args:
image_name: The name of the image to get the metadata for.
Returns:
The image's metadata, if it has any.
:param image_name: The name of the image to get the metadata for.
"""
return self._services.images.get_metadata(image_name)
def get_dto(self, image_name: str) -> ImageDTO:
"""Gets an image as an ImageDTO object.
"""
Gets an image as an ImageDTO object.
Args:
image_name: The name of the image to get.
Returns:
The image as an ImageDTO object.
:param image_name: The name of the image to get.
"""
return self._services.images.get_dto(image_name)
class TensorsInterface(InvocationContextInterface):
def save(self, tensor: Tensor) -> str:
"""Saves a tensor, returning its name.
"""
Saves a tensor, returning its name.
Args:
tensor: The tensor to save.
Returns:
The name of the saved tensor.
:param tensor: The tensor to save.
"""
name = self._services.tensors.save(obj=tensor)
return name
def load(self, name: str) -> Tensor:
"""Loads a tensor by name.
"""
Loads a tensor by name.
Args:
name: The name of the tensor to load.
Returns:
The loaded tensor.
:param name: The name of the tensor to load.
"""
return self._services.tensors.load(name)
class ConditioningInterface(InvocationContextInterface):
def save(self, conditioning_data: ConditioningFieldData) -> str:
"""Saves a conditioning data object, returning its name.
"""
Saves a conditioning data object, returning its name.
Args:
conditioning_data: The conditioning data to save.
Returns:
The name of the saved conditioning data.
:param conditioning_data: The conditioning data to save.
"""
name = self._services.conditioning.save(obj=conditioning_data)
return name
def load(self, name: str) -> ConditioningFieldData:
"""Loads conditioning data by name.
"""
Loads conditioning data by name.
Args:
name: The name of the conditioning data to load.
Returns:
The loaded conditioning data.
:param name: The name of the conditioning data to load.
"""
return self._services.conditioning.load(name)
@ -300,25 +265,20 @@ class ConditioningInterface(InvocationContextInterface):
class ModelsInterface(InvocationContextInterface):
def exists(self, key: str) -> bool:
"""Checks if a model exists.
"""
Checks if a model exists.
Args:
key: The key of the model.
Returns:
True if the model exists, False if not.
:param key: The key of the model.
"""
return self._services.model_manager.store.exists(key)
def load(self, key: str, submodel_type: Optional[SubModelType] = None) -> LoadedModel:
"""Loads a model.
"""
Loads a model.
Args:
key: The key of the model.
submodel_type: The submodel of the model to get.
Returns:
An object representing the loaded model.
:param key: The key of the model.
:param submodel_type: The submodel of the model to get.
:returns: An object representing the loaded model.
"""
# The model manager emits events as it loads the model. It needs the context data to build
@ -329,95 +289,75 @@ class ModelsInterface(InvocationContextInterface):
)
def load_by_attrs(
self, name: str, base: BaseModelType, type: ModelType, submodel_type: Optional[SubModelType] = None
self, model_name: str, base_model: BaseModelType, model_type: ModelType, submodel: Optional[SubModelType] = None
) -> LoadedModel:
"""Loads a model by its attributes.
"""
Loads a model by its attributes.
Args:
name: Name of the model.
base: The model's base type, e.g. `BaseModelType.StableDiffusion1`, `BaseModelType.StableDiffusionXL`, etc.
type: Type of the model, e.g. `ModelType.Main`, `ModelType.Vae`, etc.
submodel_type: The type of submodel to load, e.g. `SubModelType.UNet`, `SubModelType.TextEncoder`, etc. Only main
models have submodels.
Returns:
An object representing the loaded model.
:param model_name: Name of the model to be fetched.
:param base_model: Base model
:param model_type: Type of the model
:param submodel: For main (pipeline models), the submodel to fetch
"""
return self._services.model_manager.load_model_by_attr(
model_name=name,
base_model=base,
model_type=type,
submodel=submodel_type,
model_name=model_name,
base_model=base_model,
model_type=model_type,
submodel=submodel,
context_data=self._data,
)
def get_config(self, key: str) -> AnyModelConfig:
"""Gets a model's config.
"""
Gets a model's info, a dict-like object.
Args:
key: The key of the model.
Returns:
The model's config.
:param key: The key of the model.
"""
return self._services.model_manager.store.get_model(key=key)
def get_metadata(self, key: str) -> Optional[AnyModelRepoMetadata]:
"""Gets a model's metadata, if it has any.
"""
Gets a model's metadata, if it has any.
Args:
key: The key of the model.
Returns:
The model's metadata, if it has any.
:param key: The key of the model.
"""
return self._services.model_manager.store.get_metadata(key=key)
def search_by_path(self, path: Path) -> list[AnyModelConfig]:
"""Searches for models by path.
"""
Searches for models by path.
Args:
path: The path to search for.
Returns:
A list of models that match the path.
:param path: The path to search for.
"""
return self._services.model_manager.store.search_by_path(path)
def search_by_attrs(
self,
name: Optional[str] = None,
base: Optional[BaseModelType] = None,
type: Optional[ModelType] = None,
format: Optional[ModelFormat] = None,
model_name: Optional[str] = None,
base_model: Optional[BaseModelType] = None,
model_type: Optional[ModelType] = None,
model_format: Optional[ModelFormat] = None,
) -> list[AnyModelConfig]:
"""Searches for models by attributes.
"""
Searches for models by attributes.
Args:
name: The name to search for (exact match).
base: The base to search for, e.g. `BaseModelType.StableDiffusion1`, `BaseModelType.StableDiffusionXL`, etc.
type: The type of model to search for, e.g. `ModelType.Main`, `ModelType.Vae`, etc.
format: The format of model to search for, e.g. `ModelFormat.Checkpoint`, `ModelFormat.Diffusers`, etc.
Returns:
A list of models that match the attributes.
:param model_name: Name of the model to be fetched.
:param base_model: Base model
:param model_type: Type of the model
:param model_format: The format of the model to search for, e.g. `ModelFormat.Checkpoint`
"""
return self._services.model_manager.store.search_by_attr(
model_name=name,
base_model=base,
model_type=type,
model_format=format,
model_name=model_name,
base_model=base_model,
model_type=model_type,
model_format=model_format,
)
class ConfigInterface(InvocationContextInterface):
def get(self) -> InvokeAIAppConfig:
"""Gets the app's config.
Returns:
The app's config.
"""
"""Gets the app's config."""
return self._services.configuration.get_config()
@ -430,11 +370,7 @@ class UtilInterface(InvocationContextInterface):
self._cancel_event = cancel_event
def is_canceled(self) -> bool:
"""Checks if the current session has been canceled.
Returns:
True if the current session has been canceled, False if not.
"""
"""Checks if the current invocation has been canceled."""
return self._cancel_event.is_set()
def sd_step_callback(self, intermediate_state: PipelineIntermediateState, base_model: BaseModelType) -> None:
@ -444,9 +380,8 @@ class UtilInterface(InvocationContextInterface):
This should be called after each denoising step.
Args:
intermediate_state: The intermediate state of the diffusion pipeline.
base_model: The base model for the current denoising step.
:param intermediate_state: The intermediate state of the diffusion pipeline.
:param base_model: The base model for the current denoising step.
"""
stable_diffusion_step_callback(
@ -459,17 +394,8 @@ class UtilInterface(InvocationContextInterface):
class InvocationContext:
"""Provides access to various services and data for the current invocation.
Attributes:
images (ImagesInterface): Methods to save, get and update images and their metadata.
tensors (TensorsInterface): Methods to save and get tensors, including image, noise, masks, and masked images.
conditioning (ConditioningInterface): Methods to save and get conditioning data.
models (ModelsInterface): Methods to check if a model exists, get a model, and get a model's info.
logger (LoggerInterface): The app logger.
config (ConfigInterface): The app config.
util (UtilInterface): Utility methods, including a method to check if an invocation was canceled and step callbacks.
boards (BoardsInterface): Methods to interact with boards.
"""
The `InvocationContext` provides access to various services and data for the current invocation.
"""
def __init__(
@ -512,14 +438,11 @@ def build_invocation_context(
data: InvocationContextData,
cancel_event: threading.Event,
) -> InvocationContext:
"""Builds the invocation context for a specific invocation execution.
"""
Builds the invocation context for a specific invocation execution.
Args:
services: The invocation services to wrap.
data: The invocation context data.
Returns:
The invocation context.
:param services: The invocation services to wrap.
:param data: The invocation context data.
"""
logger = LoggerInterface(services=services, data=data)

View File

@ -1,11 +1,6 @@
import re
from typing import List, Tuple
import invokeai.backend.util.logging as logger
from invokeai.app.services.model_records import UnknownModelException
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.model_manager.config import BaseModelType, ModelType
from invokeai.backend.textual_inversion import TextualInversionModelRaw
from typing import List
def extract_ti_triggers_from_prompt(prompt: str) -> List[str]:
@ -13,35 +8,3 @@ def extract_ti_triggers_from_prompt(prompt: str) -> List[str]:
for trigger in re.findall(r"<[a-zA-Z0-9., _-]+>", prompt):
ti_triggers.append(str(trigger))
return ti_triggers
def generate_ti_list(
prompt: str, base: BaseModelType, context: InvocationContext
) -> List[Tuple[str, TextualInversionModelRaw]]:
ti_list: List[Tuple[str, TextualInversionModelRaw]] = []
for trigger in extract_ti_triggers_from_prompt(prompt):
name_or_key = trigger[1:-1]
try:
loaded_model = context.models.load(key=name_or_key)
model = loaded_model.model
assert isinstance(model, TextualInversionModelRaw)
assert loaded_model.config.base == base
ti_list.append((name_or_key, model))
except UnknownModelException:
try:
loaded_model = context.models.load_by_attrs(
name=name_or_key, base=base, type=ModelType.TextualInversion
)
model = loaded_model.model
assert isinstance(model, TextualInversionModelRaw)
assert loaded_model.config.base == base
ti_list.append((name_or_key, model))
except UnknownModelException:
pass
except ValueError:
logger.warning(f'trigger: "{trigger}" matched more than one similarly-named textual inversion model')
except AssertionError:
logger.warning(f'trigger: "{trigger}" not a valid textual inversion model for this graph')
except Exception:
logger.warning(f'Failed to load TI model for trigger: "{trigger}"')
return ti_list

View File

@ -17,8 +17,6 @@ from invokeai.backend.util.util import download_with_progress_bar
config = InvokeAIAppConfig.get_config()
DEPTH_ANYTHING_MODEL_SIZES = Literal["large", "base", "small"]
DEPTH_ANYTHING_MODELS = {
"large": {
"url": "https://huggingface.co/spaces/LiheYoung/Depth-Anything/resolve/main/checkpoints/depth_anything_vitl14.pth?download=true",
@ -55,9 +53,9 @@ transform = Compose(
class DepthAnythingDetector:
def __init__(self) -> None:
self.model = None
self.model_size: Union[DEPTH_ANYTHING_MODEL_SIZES, None] = None
self.model_size: Union[Literal["large", "base", "small"], None] = None
def load_model(self, model_size: DEPTH_ANYTHING_MODEL_SIZES = "small"):
def load_model(self, model_size: Literal["large", "base", "small"] = "small"):
DEPTH_ANYTHING_MODEL_PATH = pathlib.Path(config.models_path / DEPTH_ANYTHING_MODELS[model_size]["local"])
if not DEPTH_ANYTHING_MODEL_PATH.exists():
download_with_progress_bar(DEPTH_ANYTHING_MODELS[model_size]["url"], DEPTH_ANYTHING_MODEL_PATH)
@ -86,19 +84,16 @@ class DepthAnythingDetector:
self.model.to(device)
return self
def __call__(self, image: Image.Image, resolution: int = 512):
if self.model is None:
raise Exception("Depth Anything Model not loaded")
def __call__(self, image, resolution=512, offload=False):
image = np.array(image, dtype=np.uint8)
image = image[:, :, ::-1] / 255.0
np_image = np.array(image, dtype=np.uint8)
np_image = np_image[:, :, ::-1] / 255.0
image_height, image_width = np_image.shape[:2]
np_image = transform({"image": image})["image"]
tensor_image = torch.from_numpy(np_image).unsqueeze(0).to(choose_torch_device())
image_height, image_width = image.shape[:2]
image = transform({"image": image})["image"]
image = torch.from_numpy(image).unsqueeze(0).to(choose_torch_device())
with torch.no_grad():
depth = self.model(tensor_image)
depth = self.model(image)
depth = F.interpolate(depth[None], (image_height, image_width), mode="bilinear", align_corners=False)[0, 0]
depth = (depth - depth.min()) / (depth.max() - depth.min()) * 255.0
@ -108,4 +103,7 @@ class DepthAnythingDetector:
new_height = int(image_height * (resolution / image_width))
depth_map = depth_map.resize((resolution, new_height))
if offload:
del self.model
return depth_map

View File

@ -160,7 +160,7 @@ class CivitaiMetadataFetch(ModelMetadataFetchBase):
nsfw=model_json["nsfw"],
restrictions=LicenseRestrictions(
AllowNoCredit=model_json["allowNoCredit"],
AllowCommercialUse={CommercialUsage(x) for x in model_json["allowCommercialUse"]},
AllowCommercialUse=CommercialUsage(model_json["allowCommercialUse"]),
AllowDerivatives=model_json["allowDerivatives"],
AllowDifferentLicense=model_json["allowDifferentLicense"],
),

View File

@ -54,8 +54,8 @@ class LicenseRestrictions(BaseModel):
AllowDifferentLicense: bool = Field(
description="if true, derivatives of this model be redistributed under a different license", default=False
)
AllowCommercialUse: Optional[Set[CommercialUsage] | CommercialUsage] = Field(
description="Type of commercial use allowed if no commercial use is allowed.", default=None
AllowCommercialUse: Optional[CommercialUsage] = Field(
description="Type of commercial use allowed or 'No' if no commercial use is allowed.", default=None
)
@ -142,10 +142,7 @@ class CivitaiMetadata(ModelMetadataWithFiles):
if self.restrictions.AllowCommercialUse is None:
return False
else:
# accommodate schema change
acu = self.restrictions.AllowCommercialUse
commercial_usage = acu if isinstance(acu, set) else {acu}
return CommercialUsage.No not in commercial_usage
return self.restrictions.AllowCommercialUse != CommercialUsage("None")
@property
def allow_derivatives(self) -> bool:

View File

@ -28,7 +28,6 @@ from typing import Callable, Optional, Set, Union
from pydantic import BaseModel, Field
from invokeai.app.services.config import InvokeAIAppConfig
from invokeai.backend.util.logging import InvokeAILogger
default_logger: Logger = InvokeAILogger.get_logger()
@ -118,10 +117,13 @@ class ModelSearch(ModelSearchBase):
"""
models_found: Set[Path] = Field(default_factory=set)
config: InvokeAIAppConfig = InvokeAIAppConfig.get_config()
scanned_dirs: Set[Path] = Field(default_factory=set)
pruned_paths: Set[Path] = Field(default_factory=set)
def search_started(self) -> None:
self.models_found = set()
self.scanned_dirs = set()
self.pruned_paths = set()
if self.on_search_started:
self.on_search_started(self._directory)
@ -137,53 +139,53 @@ class ModelSearch(ModelSearchBase):
def search(self, directory: Union[Path, str]) -> Set[Path]:
self._directory = Path(directory)
if not self._directory.is_absolute():
self._directory = self.config.models_path / self._directory
self.stats = SearchStats() # zero out
self.search_started() # This will initialize _models_found to empty
self._walk_directory(self._directory)
self._walk_directory(directory)
self.search_completed()
return self.models_found
def _walk_directory(self, path: Union[Path, str], max_depth: int = 20) -> None:
absolute_path = Path(path)
if (
len(absolute_path.parts) - len(self._directory.parts) > max_depth
or not absolute_path.exists()
or absolute_path.parent in self.models_found
):
return
entries = os.scandir(absolute_path.as_posix())
entries = [entry for entry in entries if not entry.name.startswith(".")]
dirs = [entry for entry in entries if entry.is_dir()]
file_names = [entry.name for entry in entries if entry.is_file()]
if any(
x in file_names
for x in [
"config.json",
"model_index.json",
"learned_embeds.bin",
"pytorch_lora_weights.bin",
"image_encoder.txt",
]
):
try:
self.model_found(absolute_path)
return
except KeyboardInterrupt:
raise
except Exception as e:
self.logger.warning(str(e))
return
def _walk_directory(self, path: Union[Path, str]) -> None:
for root, dirs, files in os.walk(path, followlinks=True):
# don't descend into directories that start with a "."
# to avoid the Mac .DS_Store issue.
if str(Path(root).name).startswith("."):
self.pruned_paths.add(Path(root))
if any(Path(root).is_relative_to(x) for x in self.pruned_paths):
continue
for n in file_names:
if n.endswith((".ckpt", ".bin", ".pth", ".safetensors", ".pt")):
try:
self.model_found(absolute_path / n)
except KeyboardInterrupt:
raise
except Exception as e:
self.logger.warning(str(e))
self.stats.items_scanned += len(dirs) + len(files)
for d in dirs:
path = Path(root) / d
if path.parent in self.scanned_dirs:
self.scanned_dirs.add(path)
continue
if any(
(path / x).exists()
for x in [
"config.json",
"model_index.json",
"learned_embeds.bin",
"pytorch_lora_weights.bin",
"image_encoder.txt",
]
):
self.scanned_dirs.add(path)
try:
self.model_found(path)
except KeyboardInterrupt:
raise
except Exception as e:
self.logger.warning(str(e))
for d in dirs:
self._walk_directory(absolute_path / d)
for f in files:
path = Path(root) / f
if path.parent in self.scanned_dirs:
continue
if path.suffix in {".ckpt", ".bin", ".pth", ".safetensors", ".pt"}:
try:
self.model_found(path)
except KeyboardInterrupt:
raise
except Exception as e:
self.logger.warning(str(e))

View File

@ -4,12 +4,12 @@ from __future__ import annotations
import pickle
from contextlib import contextmanager
from typing import Any, Dict, Iterator, List, Optional, Tuple, Union
from typing import Any, Dict, Iterator, List, Optional, Tuple
import numpy as np
import torch
from diffusers import OnnxRuntimeModel, UNet2DConditionModel
from transformers import CLIPTextModel, CLIPTextModelWithProjection, CLIPTokenizer
from transformers import CLIPTextModel, CLIPTokenizer
from invokeai.app.shared.models import FreeUConfig
from invokeai.backend.model_manager import AnyModel
@ -168,7 +168,7 @@ class ModelPatcher:
def apply_ti(
cls,
tokenizer: CLIPTokenizer,
text_encoder: Union[CLIPTextModel, CLIPTextModelWithProjection],
text_encoder: CLIPTextModel,
ti_list: List[Tuple[str, TextualInversionModelRaw]],
) -> Iterator[Tuple[CLIPTokenizer, TextualInversionManager]]:
init_tokens_count = None
@ -265,7 +265,7 @@ class ModelPatcher:
@contextmanager
def apply_clip_skip(
cls,
text_encoder: Union[CLIPTextModel, CLIPTextModelWithProjection],
text_encoder: CLIPTextModel,
clip_skip: int,
) -> None:
skipped_layers = []

View File

@ -15,7 +15,7 @@ edit_uri: edit/main/docs/
copyright: Copyright &copy; 2023 InvokeAI Team
extra_css:
- stylesheets/extra.css
- stylesheets/extra.css
# Configuration
theme:
@ -25,7 +25,18 @@ theme:
edit: material/file-document-edit-outline
favicon: img/favicon.ico
palette:
scheme: slate
# - media: '(prefers-color-scheme: light)'
# scheme: default
# primary: custom
# toggle:
# icon: material/lightbulb
# name: Switch to dark mode
- media: '(prefers-color-scheme: dark)'
scheme: slate
primary: custom
# toggle:
# icon: material/lightbulb-outline
# name: Switch to light mode
features:
- navigation.instant
- navigation.tabs
@ -37,6 +48,7 @@ theme:
- search.suggest
- toc.integrate
# Extensions
markdown_extensions:
- abbr
@ -54,8 +66,8 @@ markdown_extensions:
- pymdownx.caret
- pymdownx.details
- pymdownx.emoji:
emoji_index: !!python/name:material.extensions.emoji.twemoji
emoji_generator: !!python/name:material.extensions.emoji.to_svg
emoji_index: !!python/name:materialx.emoji.twemoji
emoji_generator: !!python/name:materialx.emoji.to_svg
- pymdownx.highlight:
anchor_linenums: true
- pymdownx.inlinehilite
@ -85,25 +97,12 @@ plugins:
enable_creation_date: true
- redirects:
redirect_maps:
'installation/INSTALL_AUTOMATED.md': 'installation/010_INSTALL_AUTOMATED.md'
'installation/INSTALL_MANUAL.md': 'installation/020_INSTALL_MANUAL.md'
'installation/INSTALL_SOURCE.md': 'installation/020_INSTALL_MANUAL.md'
'installation/INSTALL_DOCKER.md': 'installation/040_INSTALL_DOCKER.md'
'installation/INSTALLING_MODELS.md': 'installation/050_INSTALLING_MODELS.md'
'installation/INSTALL_PATCHMATCH.md': 'installation/060_INSTALL_PATCHMATCH.md'
- mkdocstrings:
handlers:
python:
options:
separate_signature: true
show_signature_annotations: true
parameter_headings: false
signature_crossrefs: true
show_source: false
summary: true
show_root_heading: true
show_root_full_path: false
show_bases: false
'installation/INSTALL_AUTOMATED.md': 'installation/010_INSTALL_AUTOMATED.md'
'installation/INSTALL_MANUAL.md': 'installation/020_INSTALL_MANUAL.md'
'installation/INSTALL_SOURCE.md': 'installation/020_INSTALL_MANUAL.md'
'installation/INSTALL_DOCKER.md': 'installation/040_INSTALL_DOCKER.md'
'installation/INSTALLING_MODELS.md': 'installation/050_INSTALLING_MODELS.md'
'installation/INSTALL_PATCHMATCH.md': 'installation/060_INSTALL_PATCHMATCH.md'
extra_javascript:
- https://unpkg.com/tablesort@5.3.0/dist/tablesort.min.js
@ -116,8 +115,8 @@ extra:
property: G-2X4JR4S4FB
nav:
- Home: 'index.md'
- Installation:
- Home: 'index.md'
- Installation:
- Overview: 'installation/INSTALLATION.md'
- Installing with the Automated Installer: 'installation/010_INSTALL_AUTOMATED.md'
- Installing Manually: 'installation/020_INSTALL_MANUAL.md'
@ -128,14 +127,14 @@ nav:
- Installing xFormers: 'installation/070_INSTALL_XFORMERS.md'
- Developers Documentation: 'installation/Developers_documentation/BUILDING_BINARY_INSTALLERS.md'
- Deprecated Documentation:
- Binary Installer: 'installation/deprecated_documentation/INSTALL_BINARY.md'
- Running InvokeAI on Google Colab: 'installation/deprecated_documentation/INSTALL_JUPYTER.md'
- Manual Installation on Linux: 'installation/deprecated_documentation/INSTALL_LINUX.md'
- Manual Installation on macOS: 'installation/deprecated_documentation/INSTALL_MAC.md'
- Manual Installation on Windows: 'installation/deprecated_documentation/INSTALL_WINDOWS.md'
- Installing Invoke with pip: 'installation/deprecated_documentation/INSTALL_PCP.md'
- Source Installer: 'installation/deprecated_documentation/INSTALL_SOURCE.md'
- Workflows & Nodes:
- Binary Installer: 'installation/deprecated_documentation/INSTALL_BINARY.md'
- Running InvokeAI on Google Colab: 'installation/deprecated_documentation/INSTALL_JUPYTER.md'
- Manual Installation on Linux: 'installation/deprecated_documentation/INSTALL_LINUX.md'
- Manual Installation on macOS: 'installation/deprecated_documentation/INSTALL_MAC.md'
- Manual Installation on Windows: 'installation/deprecated_documentation/INSTALL_WINDOWS.md'
- Installing Invoke with pip: 'installation/deprecated_documentation/INSTALL_PCP.md'
- Source Installer: 'installation/deprecated_documentation/INSTALL_SOURCE.md'
- Workflows & Nodes:
- Nodes Overview: 'nodes/overview.md'
- Workflow Editor Basics: 'nodes/NODES.md'
- List of Default Nodes: 'nodes/defaultNodes.md'
@ -143,10 +142,8 @@ nav:
- ComfyUI to InvokeAI: 'nodes/comfyToInvoke.md'
- Facetool Node: 'nodes/detailedNodes/faceTools.md'
- Contributing Nodes: 'nodes/contributingNodes.md'
- Migrating from v3 to v4: 'nodes/NODES_MIGRATION_V3_V4.md'
- Invocation API: 'nodes/INVOCATION_API.md'
- Community Nodes: 'nodes/communityNodes.md'
- Features:
- Community Nodes: 'nodes/communityNodes.md'
- Features:
- Overview: 'features/index.md'
- New to InvokeAI?: 'help/gettingStartedWithAI.md'
- Configuration: 'features/CONFIGURATION.md'
@ -155,35 +152,35 @@ nav:
- Controlling Logging: 'features/LOGGING.md'
- LoRAs & LCM-LoRAs: 'features/LORAS.md'
- Model Merging: 'features/MODEL_MERGING.md'
- Workflows & Nodes: 'nodes/overview'
- Workflows & Nodes : 'nodes/overview'
- NSFW Checker: 'features/WATERMARK+NSFW.md'
- Postprocessing: 'features/POSTPROCESS.md'
- Prompting Features: 'features/PROMPTS.md'
- Textual Inversions:
- Textual Inversions: 'features/TEXTUAL_INVERSIONS.md'
- Textual Inversion Training: 'features/TRAINING.md'
- Textual Inversions: 'features/TEXTUAL_INVERSIONS.md'
- Textual Inversion Training: 'features/TRAINING.md'
- Unified Canvas: 'features/UNIFIED_CANVAS.md'
- InvokeAI Web Server: 'features/WEB.md'
- WebUI Hotkeys: 'features/WEBUIHOTKEYS.md'
- Maintenance Utilities: 'features/UTILITIES.md'
- WebUI Hotkeys: "features/WEBUIHOTKEYS.md"
- Maintenance Utilities: "features/UTILITIES.md"
- Other: 'features/OTHER.md'
- Contributing:
- Contributing:
- How to Contribute: 'contributing/CONTRIBUTING.md'
- InvokeAI Code of Conduct: 'CODE_OF_CONDUCT.md'
- Development:
- Overview: 'contributing/contribution_guides/development.md'
- New Contributors: 'contributing/contribution_guides/newContributorChecklist.md'
- Model Manager v2: 'contributing/MODEL_MANAGER.md'
- Frontend Documentation: 'contributing/contribution_guides/contributingToFrontend.md'
- Local Development: 'contributing/LOCAL_DEVELOPMENT.md'
- Testing: 'contributing/TESTS.md'
- Overview: 'contributing/contribution_guides/development.md'
- New Contributors: 'contributing/contribution_guides/newContributorChecklist.md'
- Model Manager v2: 'contributing/MODEL_MANAGER.md'
- Frontend Documentation: 'contributing/contribution_guides/contributingToFrontend.md'
- Local Development: 'contributing/LOCAL_DEVELOPMENT.md'
- Testing: 'contributing/TESTS.md'
- Documentation: 'contributing/contribution_guides/documentation.md'
- Nodes: 'contributing/INVOCATIONS.md'
- Model Manager: 'contributing/MODEL_MANAGER.md'
- Download Queue: 'contributing/DOWNLOAD_QUEUE.md'
- Translation: 'contributing/contribution_guides/translation.md'
- Tutorials: 'contributing/contribution_guides/tutorials.md'
- Deprecated:
- Deprecated:
- InvokeAI 2.3: 'deprecated/2to3.md'
- Command Line Interface: 'deprecated/CLI.md'
- Changelog: 'CHANGELOG.md'
@ -193,11 +190,11 @@ nav:
- Inpainting: 'deprecated/INPAINTING.md'
- Outpainting: 'deprecated/OUTPAINTING.md'
- Troubleshooting: 'help/deprecated/TROUBLESHOOT.md'
- Help:
- Help:
- Getting Started: 'help/gettingStartedWithAI.md'
- FAQs: 'help/FAQ.md'
- Diffusion Overview: 'help/diffusion.md'
- Sampler Convergence: 'help/SAMPLER_CONVERGENCE.md'
- Other:
- Other:
- Contributors: 'other/CONTRIBUTORS.md'
- CompViz-README: 'other/README-CompViz.md'

View File

@ -105,10 +105,9 @@ dependencies = [
"onnx-directml" = ["onnxruntime-directml"]
"dist" = ["pip-tools", "pipdeptree", "twine"]
"docs" = [
"mkdocs-material",
"mkdocs-material<9.0",
"mkdocs-git-revision-date-localized-plugin",
"mkdocs-redirects",
"mkdocstrings[python]",
"mkdocs-redirects==1.2.0",
]
"dev" = ["jurigged", "pudb", "snakeviz", "gprof2dot"]
"test" = [

File diff suppressed because one or more lines are too long

View File

@ -133,7 +133,7 @@ def test_metadata_civitai_fetch(mm2_session: Session) -> None:
assert metadata.id == 215485
assert metadata.author == "test_author" # note that this is not the same as the original from Civitai
assert metadata.allow_commercial_use # changed to make sure we are reading locally not remotely
assert CommercialUsage("RentCivit") in metadata.restrictions.AllowCommercialUse
assert metadata.restrictions.AllowCommercialUse == CommercialUsage("RentCivit")
assert metadata.version_id == 242807
assert metadata.tags == {"tool", "turbo", "sdxl turbo"}