# Copyright (c) 2023 Lincoln Stein (https://github.com/lstein) and the InvokeAI Development Team
"""Invokeai configuration system.

Arguments and fields are taken from the pydantic definition of the
model. Defaults can be set by creating a yaml configuration file that
has a top-level key of "InvokeAI" and subheadings for each of the
categories returned by `invokeai --help`. The file looks like this:

[file: invokeai.yaml]

InvokeAI:
  Web Server:
    host: 127.0.0.1
    port: 9090
    allow_origins: []
    allow_credentials: true
    allow_methods:
    - '*'
    allow_headers:
    - '*'
  Features:
    esrgan: true
    internet_available: true
    log_tokenization: false
    patchmatch: true
    ignore_missing_core_models: false
  Paths:
    autoimport_dir: autoimport
    lora_dir: null
    embedding_dir: null
    controlnet_dir: null
    models_dir: models
    legacy_conf_dir: configs/stable-diffusion
    db_dir: databases
    outdir: /home/lstein/invokeai-main/outputs
    use_memory_db: false
  Logging:
    log_handlers:
    - console
    log_format: plain
    log_level: info
  Model Cache:
    ram: 13.5
    vram: 0.25
    lazy_offload: true
    log_memory_usage: false
  Device:
    device: auto
    precision: auto
  Generation:
    sequential_guidance: false
    attention_type: xformers
    attention_slice_size: auto
    force_tiled_decode: false

The default name of the configuration file is `invokeai.yaml`, located
in INVOKEAI_ROOT. You can supersede this by providing any OmegaConf
dictionary object at initialization time:

  omegaconf = OmegaConf.load('/tmp/init.yaml')
  conf = InvokeAIAppConfig()
  conf.parse_args(conf=omegaconf)

InvokeAIAppConfig.parse_args() will parse the contents of `sys.argv`
at initialization time. You may pass a list of strings in the optional
`argv` argument to use instead of the system argv:

  conf.parse_args(argv=['--log_tokenization'])

It is also possible to set a value at initialization time. However, if
you call parse_args() it may be overwritten.

  conf = InvokeAIAppConfig(log_tokenization=True)
  conf.parse_args(argv=['--no-log_tokenization'])
  conf.log_tokenization
  # False

To avoid this, use `get_config()` to retrieve the application-wide
configuration object. This will retain any properties set at object
creation time:

  conf = InvokeAIAppConfig.get_config(log_tokenization=True)
  conf.parse_args(argv=['--no-log_tokenization'])
  conf.log_tokenization
  # True

Any setting can be overwritten by setting an environment variable of
the form "INVOKEAI_<setting>", as in:

  export INVOKEAI_port=8080

Order of precedence (from highest):
   1) initialization options
   2) command line options
   3) environment variable options
   4) config file options
   5) pydantic defaults
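
For example, suppose `invokeai.yaml` sets `port: 9090` while the
environment and the command line disagree (the exact CLI invocation
below is illustrative, not prescriptive):

  export INVOKEAI_port=8080
  invokeai --port 8000
  # the command-line value (8000) wins over the environment variable
  # (8080), which in turn wins over the config file value (9090)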

Typical usage at the top-level file:

  from invokeai.app.services.config import InvokeAIAppConfig

  # get global configuration and print its cache size
  conf = InvokeAIAppConfig.get_config()
  conf.parse_args()
  print(conf.ram_cache_size)

Typical usage in a backend module:

  from invokeai.app.services.config import InvokeAIAppConfig

  # get global configuration and print its cache size value
  conf = InvokeAIAppConfig.get_config()
  print(conf.ram_cache_size)

Computed properties:

The InvokeAIAppConfig object has a series of properties that
resolve paths relative to the runtime root directory. They each return
a Path object:

  root_path      - path to InvokeAI root
  output_path    - path to default outputs directory
  conf           - alias for the above
  embedding_path - path to the embeddings directory
  lora_path      - path to the LoRA directory
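
A short sketch of reading these properties (the printed values depend
on your runtime root):

  conf = InvokeAIAppConfig.get_config()
  conf.parse_args()
  print(conf.root_path)    # e.g. /home/lstein/invokeai-main
  print(conf.output_path)  # e.g. /home/lstein/invokeai-main/outputs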

In most cases, you will want to create a single InvokeAIAppConfig
object for the entire application. The InvokeAIAppConfig.get_config() function
does this:

  config = InvokeAIAppConfig.get_config()
  config.parse_args()   # read values from the command line/config file
  print(config.root)

# Subclassing

If you wish to create a similar class, please subclass the
`InvokeAISettings` class and define a Literal field named "type",
which is set to the desired top-level name. For example, to create an
"InvokeBatch" configuration, define it like this:

  class InvokeBatch(InvokeAISettings):
      type: Literal["InvokeBatch"] = "InvokeBatch"
      node_count: int = Field(default=1, description="Number of nodes to run on", json_schema_extra=dict(category='Resources'))
      cpu_count: int = Field(default=8, description="Number of CPUs to run on per node", json_schema_extra=dict(category='Resources'))

This will now read and write from the "InvokeBatch" section of the
config file, look for environment variables named INVOKEBATCH_*, and
accept the command-line arguments `--node_count` and `--cpu_count`. The
two configs are kept in separate sections of the config file:

  # invokeai.yaml

  InvokeBatch:
     Resources:
        node_count: 1
        cpu_count: 8

  InvokeAI:
     Paths:
        root: /home/lstein/invokeai-main
        legacy_conf_dir: configs/stable-diffusion
        outdir: outputs
     ...
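
A minimal usage sketch, assuming `parse_args()` is inherited from
`InvokeAISettings` as described above:

  batch_conf = InvokeBatch()
  batch_conf.parse_args(argv=['--node_count', '4'])
  print(batch_conf.node_count)
  # 4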
"""

from __future__ import annotations

import os
import re
from pathlib import Path
from typing import Any, ClassVar, Dict, List, Literal, Optional

from omegaconf import DictConfig, OmegaConf
from pydantic import BaseModel, Field, field_validator
from pydantic.config import JsonDict
from pydantic_settings import SettingsConfigDict
from .config_base import InvokeAISettings

INIT_FILE = Path("invokeai.yaml")
DB_FILE = Path("invokeai.db")
LEGACY_INIT_FILE = Path("invokeai.init")
DEFAULT_RAM_CACHE = 10.0
DEFAULT_VRAM_CACHE = 0.25
DEFAULT_CONVERT_CACHE = 20.0


class Categories(object):
    """Category headers for configuration variable groups."""

    WebServer: JsonDict = {"category": "Web Server"}
    Features: JsonDict = {"category": "Features"}
    Paths: JsonDict = {"category": "Paths"}
    Logging: JsonDict = {"category": "Logging"}
    Development: JsonDict = {"category": "Development"}
    CLIArgs: JsonDict = {"category": "CLIArgs"}
    ModelInstall: JsonDict = {"category": "Model Install"}
    ModelCache: JsonDict = {"category": "Model Cache"}
    Device: JsonDict = {"category": "Device"}
    Generation: JsonDict = {"category": "Generation"}
    Queue: JsonDict = {"category": "Queue"}
    Nodes: JsonDict = {"category": "Nodes"}
    MemoryPerformance: JsonDict = {"category": "Memory/Performance"}
    Deprecated: JsonDict = {"category": "Deprecated"}


class URLRegexToken(BaseModel):
    url_regex: str = Field(description="Regular expression to match against the URL")
    token: str = Field(description="Token to use when the URL matches the regex")

    @field_validator("url_regex")
    @classmethod
    def validate_url_regex(cls, v: str) -> str:
        """Validate that the value is a valid regex."""
        try:
            re.compile(v)
        except re.error as e:
            raise ValueError(f"Invalid regex: {e}")
        return v
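
# Illustrative sketch of how a URLRegexToken entry for the `remote_api_tokens`
# setting might be constructed; the regex and token values below are
# placeholder assumptions, not project defaults:
#
#   entry = URLRegexToken(url_regex=r"models\.example\.com", token="my-api-token")
#
# An invalid pattern fails validation (pydantic raises a ValidationError
# wrapping the ValueError from validate_url_regex):
#
#   URLRegexToken(url_regex="[unclosed", token="x")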


class InvokeAIAppConfig(InvokeAISettings):
""" Invoke App Configuration
Attributes :
host : * * Web Server * * : IP address to bind to . Use ` 0.0 .0 .0 ` to serve to your local network .
port : * * Web Server * * : Port to bind to .
allow_origins : * * Web Server * * : Allowed CORS origins .
allow_credentials : * * Web Server * * : Allow CORS credentials .
allow_methods : * * Web Server * * : Methods allowed for CORS .
allow_headers : * * Web Server * * : Headers allowed for CORS .
ssl_certfile : * * Web Server * * : SSL certificate file for HTTPS .
ssl_keyfile : * * Web Server * * : SSL key file for HTTPS .
esrgan : * * Features * * : Enables or disables the upscaling code .
internet_available : * * Features * * : If true , attempt to download models on the fly ; otherwise only use local models .
log_tokenization : * * Features * * : Enable logging of parsed prompt tokens .
patchmatch : * * Features * * : Enable patchmatch inpaint code .
ignore_missing_core_models : * * Features * * : Ignore missing core models on startup . If ` True ` , the app will attempt to download missing models on startup .
root : * * Paths * * : The InvokeAI runtime root directory .
autoimport_dir : * * Paths * * : Path to a directory of models files to be imported on startup .
models_dir : * * Paths * * : Path to the models directory .
convert_cache_dir : * * Paths * * : Path to the converted models cache directory . When loading a non - diffusers model , it will be converted and store on disk at this location .
legacy_conf_dir : * * Paths * * : Path to directory of legacy checkpoint config files .
db_dir : * * Paths * * : Path to InvokeAI databases directory .
outdir : * * Paths * * : Path to directory for outputs .
custom_nodes_dir : * * Paths * * : Path to directory for custom nodes .
from_file : * * Paths * * : Take command input from the indicated file ( command - line client only ) .
log_handlers : * * Logging * * : Log handler . Valid options are " console " , " file=<path> " , " syslog=path|address:host:port " , " http=<url> " .
log_format : * * Logging * * : Log format . Use " plain " for text - only , " color " for colorized output , " legacy " for 2.3 - style logging and " syslog " for syslog - style .
log_level : * * Logging * * : Emit logging messages at this level or higher .
log_sql : * * Logging * * : Log SQL queries . ` log_level ` must be ` debug ` for this to do anything . Extremely verbose .
2024-03-09 06:23:36 +00:00
use_memory_db : * * Development * * : Use in - memory database . Useful for development .
2024-03-08 00:54:35 +00:00
dev_reload : * * Development * * : Automatically reload when Python sources are changed . Does not reload node definitions .
profile_graphs : * * Development * * : Enable graph profiling using ` cProfile ` .
profile_prefix : * * Development * * : An optional prefix for profile output files .
profiles_dir : * * Development * * : Path to profiles output directory .
2024-03-08 18:28:17 +00:00
version : * * CLIArgs * * : CLI arg - show InvokeAI version and exit .
2024-03-09 06:23:36 +00:00
skip_model_hash : * * Model Install * * : Skip model hashing , instead assigning a UUID to models . Useful when using a memory db to reduce model installation time , or if you don ' t care about storing stable hashes for models.
2024-03-08 18:28:17 +00:00
remote_api_tokens : * * Model Install * * : List of regular expression and token pairs used when downloading models from URLs . The download URL is tested against the regex , and if it matches , the token is provided in as a Bearer token .
2024-03-08 00:54:35 +00:00
ram : * * Model Cache * * : Maximum memory amount used by memory model cache for rapid switching ( GB ) .
vram : * * Model Cache * * : Amount of VRAM reserved for model storage ( GB )
convert_cache : * * Model Cache * * : Maximum size of on - disk converted models cache ( GB )
lazy_offload : * * Model Cache * * : Keep models in VRAM until their space is needed .
log_memory_usage : * * Model Cache * * : If True , a memory snapshot will be captured before and after every model cache operation , and the result will be logged ( at debug level ) . There is a time cost to capturing the memory snapshots , so it is recommended to only enable this feature if you are actively inspecting the model cache ' s behaviour.
device : * * Device * * : Preferred execution device . ` auto ` will choose the device depending on the hardware platform and the installed torch capabilities .
precision : * * Device * * : Floating point precision . ` float16 ` will consume half the memory of ` float32 ` but produce slightly lower - quality images . The ` auto ` setting will guess the proper precision based on your video card and operating system .
sequential_guidance : * * Generation * * : Whether to calculate guidance in serial instead of in parallel , lowering memory requirements .
attention_type : * * Generation * * : Attention type .
attention_slice_size : * * Generation * * : Slice size , valid when attention_type == " sliced " .
force_tiled_decode : * * Generation * * : Whether to enable tiled VAE decode ( reduces memory consumption with some performance penalty ) .
png_compress_level : * * Generation * * : The compress_level setting of PIL . Image . save ( ) , used for PNG encoding . All settings are lossless . 0 = no compression , 1 = fastest with slightly larger filesize , 9 = slowest with smallest filesize . 1 is typically the best setting .
max_queue_size : * * Queue * * : Maximum number of items in the session queue .
allow_nodes : * * Nodes * * : List of nodes to allow . Omit to allow all .
deny_nodes : * * Nodes * * : List of nodes to deny . Omit to deny none .
node_cache_size : * * Nodes * * : How many cached nodes to keep in memory .
"""

    singleton_config: ClassVar[Optional[InvokeAIAppConfig]] = None
    singleton_init: ClassVar[Optional[Dict[str, Any]]] = None

    # fmt: off
    type: Literal["InvokeAI"] = "InvokeAI"

    # WEB
    host: str = Field(default="127.0.0.1", description="IP address to bind to. Use `0.0.0.0` to serve to your local network.", json_schema_extra=Categories.WebServer)
    port: int = Field(default=9090, description="Port to bind to.", json_schema_extra=Categories.WebServer)
    allow_origins: List[str] = Field(default=[], description="Allowed CORS origins.", json_schema_extra=Categories.WebServer)
    allow_credentials: bool = Field(default=True, description="Allow CORS credentials.", json_schema_extra=Categories.WebServer)
    allow_methods: List[str] = Field(default=["*"], description="Methods allowed for CORS.", json_schema_extra=Categories.WebServer)
    allow_headers: List[str] = Field(default=["*"], description="Headers allowed for CORS.", json_schema_extra=Categories.WebServer)
    # SSL options correspond to https://www.uvicorn.org/settings/#https
    ssl_certfile: Optional[Path] = Field(default=None, description="SSL certificate file for HTTPS.", json_schema_extra=Categories.WebServer)
    ssl_keyfile: Optional[Path] = Field(default=None, description="SSL key file for HTTPS.", json_schema_extra=Categories.WebServer)
    # FEATURES
    esrgan: bool = Field(default=True, description="Enables or disables the upscaling code.", json_schema_extra=Categories.Features)
    # TODO(psyche): This is not used anywhere.
    internet_available: bool = Field(default=True, description="If true, attempt to download models on the fly; otherwise only use local models.", json_schema_extra=Categories.Features)
    log_tokenization: bool = Field(default=False, description="Enable logging of parsed prompt tokens.", json_schema_extra=Categories.Features)
    patchmatch: bool = Field(default=True, description="Enable patchmatch inpaint code.", json_schema_extra=Categories.Features)
    ignore_missing_core_models: bool = Field(default=False, description='Ignore missing core models on startup. If `True`, the app will attempt to download missing models on startup.', json_schema_extra=Categories.Features)
    # PATHS
    root: Optional[Path] = Field(default=None, description='The InvokeAI runtime root directory.', json_schema_extra=Categories.Paths)
    autoimport_dir: Path = Field(default=Path('autoimport'), description='Path to a directory of models files to be imported on startup.', json_schema_extra=Categories.Paths)
    models_dir: Path = Field(default=Path('models'), description='Path to the models directory.', json_schema_extra=Categories.Paths)
    convert_cache_dir: Path = Field(default=Path('models/.cache'), description='Path to the converted models cache directory. When loading a non-diffusers model, it will be converted and stored on disk at this location.', json_schema_extra=Categories.Paths)
    legacy_conf_dir: Path = Field(default=Path('configs/stable-diffusion'), description='Path to directory of legacy checkpoint config files.', json_schema_extra=Categories.Paths)
    db_dir: Path = Field(default=Path('databases'), description='Path to InvokeAI databases directory.', json_schema_extra=Categories.Paths)
    outdir: Path = Field(default=Path('outputs'), description='Path to directory for outputs.', json_schema_extra=Categories.Paths)
    custom_nodes_dir: Path = Field(default=Path('nodes'), description='Path to directory for custom nodes.', json_schema_extra=Categories.Paths)
    # TODO(psyche): This is not used anywhere.
    from_file: Optional[Path] = Field(default=None, description='Take command input from the indicated file (command-line client only).', json_schema_extra=Categories.Paths)
    # LOGGING
    log_handlers: List[str] = Field(default=["console"], description='Log handler. Valid options are "console", "file=<path>", "syslog=path|address:host:port", "http=<url>".', json_schema_extra=Categories.Logging)
    # note - would be better to read the log_format values from logging.py, but this creates circular dependency issues
    log_format: Literal['plain', 'color', 'syslog', 'legacy'] = Field(default="color", description='Log format. Use "plain" for text-only, "color" for colorized output, "legacy" for 2.3-style logging and "syslog" for syslog-style.', json_schema_extra=Categories.Logging)
    log_level: Literal["debug", "info", "warning", "error", "critical"] = Field(default="info", description="Emit logging messages at this level or higher.", json_schema_extra=Categories.Logging)
    log_sql: bool = Field(default=False, description="Log SQL queries. `log_level` must be `debug` for this to do anything. Extremely verbose.", json_schema_extra=Categories.Logging)
    # DEVELOPMENT
    use_memory_db: bool = Field(default=False, description='Use in-memory database. Useful for development.', json_schema_extra=Categories.Development)
    dev_reload: bool = Field(default=False, description="Automatically reload when Python sources are changed. Does not reload node definitions.", json_schema_extra=Categories.Development)
    profile_graphs: bool = Field(default=False, description="Enable graph profiling using `cProfile`.", json_schema_extra=Categories.Development)
    profile_prefix: Optional[str] = Field(default=None, description="An optional prefix for profile output files.", json_schema_extra=Categories.Development)
    profiles_dir: Path = Field(default=Path('profiles'), description="Path to profiles output directory.", json_schema_extra=Categories.Development)

    version: bool = Field(default=False, description="CLI arg - show InvokeAI version and exit.", json_schema_extra=Categories.CLIArgs)
    # CACHE
    ram: float = Field(default=DEFAULT_RAM_CACHE, gt=0, description="Maximum memory amount used by memory model cache for rapid switching (GB).", json_schema_extra=Categories.ModelCache)
    vram: float = Field(default=DEFAULT_VRAM_CACHE, ge=0, description="Amount of VRAM reserved for model storage (GB).", json_schema_extra=Categories.ModelCache)
    convert_cache: float = Field(default=DEFAULT_CONVERT_CACHE, ge=0, description="Maximum size of on-disk converted models cache (GB).", json_schema_extra=Categories.ModelCache)
    lazy_offload: bool = Field(default=True, description="Keep models in VRAM until their space is needed.", json_schema_extra=Categories.ModelCache)
    log_memory_usage: bool = Field(default=False, description="If True, a memory snapshot will be captured before and after every model cache operation, and the result will be logged (at debug level). There is a time cost to capturing the memory snapshots, so it is recommended to only enable this feature if you are actively inspecting the model cache's behaviour.", json_schema_extra=Categories.ModelCache)
    # DEVICE
    device: Literal["auto", "cpu", "cuda", "cuda:1", "mps"] = Field(default="auto", description="Preferred execution device. `auto` will choose the device depending on the hardware platform and the installed torch capabilities.", json_schema_extra=Categories.Device)
    precision: Literal["auto", "float16", "bfloat16", "float32", "autocast"] = Field(default="auto", description="Floating point precision. `float16` will consume half the memory of `float32` but produce slightly lower-quality images. The `auto` setting will guess the proper precision based on your video card and operating system.", json_schema_extra=Categories.Device)
2023-08-17 17:47:26 +00:00
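
    # Illustrative sketch (hypothetical values): pinning generation to the second
    # CUDA device at half precision in invokeai.yaml:
    #
    #   Device:
    #     device: cuda:1
    #     precision: float16
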
    # GENERATION
    sequential_guidance: bool = Field(default=False, description="Whether to calculate guidance in serial instead of in parallel, lowering memory requirements.", json_schema_extra=Categories.Generation)
    attention_type: Literal["auto", "normal", "xformers", "sliced", "torch-sdp"] = Field(default="auto", description="Attention type.", json_schema_extra=Categories.Generation)
    attention_slice_size: Literal["auto", "balanced", "max", 1, 2, 3, 4, 5, 6, 7, 8] = Field(default="auto", description='Slice size, valid when attention_type=="sliced".', json_schema_extra=Categories.Generation)
    force_tiled_decode: bool = Field(default=False, description="Whether to enable tiled VAE decode (reduces memory consumption with some performance penalty).", json_schema_extra=Categories.Generation)
    png_compress_level: int = Field(default=1, description="The compress_level setting of PIL.Image.save(), used for PNG encoding. All settings are lossless. 0 = no compression, 1 = fastest with slightly larger filesize, 9 = slowest with smallest filesize. 1 is typically the best setting.", json_schema_extra=Categories.Generation)
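
    # Illustrative sketch: `attention_slice_size` is only consulted when sliced
    # attention is selected, e.g. in invokeai.yaml:
    #
    #   Generation:
    #     attention_type: sliced
    #     attention_slice_size: 4
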
    # QUEUE
    max_queue_size: int = Field(default=10000, gt=0, description="Maximum number of items in the session queue.", json_schema_extra=Categories.Queue)
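
    # Illustrative sketch (flag assumed to be generated from the field name):
    # raising the queue ceiling for very large batch workflows:
    #
    #   invokeai --max_queue_size 50000
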
    # NODES
    allow_nodes: Optional[List[str]] = Field(default=None, description="List of nodes to allow. Omit to allow all.", json_schema_extra=Categories.Nodes)
    deny_nodes: Optional[List[str]] = Field(default=None, description="List of nodes to deny. Omit to deny none.", json_schema_extra=Categories.Nodes)
    node_cache_size: int = Field(default=512, description="How many cached nodes to keep in memory.", json_schema_extra=Categories.Nodes)
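
    # Illustrative sketch (YAML section name assumed to match Categories.Nodes):
    # denying one node type and shrinking the in-memory node output cache:
    #
    #   Nodes:
    #     deny_nodes:
    #       - dynamic_prompt
    #     node_cache_size: 64
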
    # MODEL INSTALL
    skip_model_hash: bool = Field(default=False, description="Skip model hashing, instead assigning a UUID to models. Useful when using a memory db to reduce model installation time, or if you don't care about storing stable hashes for models.", json_schema_extra=Categories.ModelInstall)
    remote_api_tokens: Optional[list[URLRegexToken]] = Field(
        default=None,
        description="List of regular expression and token pairs used when downloading models from URLs. The download URL is tested against the regex, and if it matches, the token is provided as a Bearer token.",
        json_schema_extra=Categories.ModelInstall,
    )
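
    # Illustrative sketch (URLRegexToken field names assumed; the token value is a
    # placeholder): supplying a bearer token for downloads from a matching host,
    # e.g. in invokeai.yaml:
    #
    #   Model Install:
    #     remote_api_tokens:
    #       - url_regex: civitai.com
    #         token: <your-api-token>
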
    # TODO(psyche): Can we just remove these then?
    # DEPRECATED FIELDS - STILL HERE IN ORDER TO OBTAIN VALUES FROM PRE-3.1 CONFIG FILES
    always_use_cpu: bool = Field(default=False, description="If true, use the CPU for rendering even if a GPU is available.", json_schema_extra=Categories.Deprecated)
    max_cache_size: Optional[float] = Field(default=None, gt=0, description="Maximum memory amount used by model cache for rapid switching", json_schema_extra=Categories.Deprecated)
    max_vram_cache_size: Optional[float] = Field(default=None, ge=0, description="Amount of VRAM reserved for model storage", json_schema_extra=Categories.Deprecated)
    xformers_enabled: bool = Field(default=True, description="Enable/disable memory-efficient attention", json_schema_extra=Categories.Deprecated)
    tiled_decode: bool = Field(default=False, description="Whether to enable tiled VAE decode (reduces memory consumption with some performance penalty)", json_schema_extra=Categories.Deprecated)
    lora_dir: Optional[Path] = Field(default=None, description='Path to a directory of LoRA/LyCORIS models to be imported on startup.', json_schema_extra=Categories.Deprecated)
    embedding_dir: Optional[Path] = Field(default=None, description='Path to a directory of Textual Inversion embeddings to be imported on startup.', json_schema_extra=Categories.Deprecated)
    controlnet_dir: Optional[Path] = Field(default=None, description='Path to a directory of ControlNet embeddings to be imported on startup.', json_schema_extra=Categories.Deprecated)
    conf_path: Path = Field(default=Path('configs/models.yaml'), description='Path to models definition file', json_schema_extra=Categories.Deprecated)
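
    # Note (assumed correspondence, inferred from the matching descriptions above):
    # the pre-3.1 `max_cache_size` / `max_vram_cache_size` settings map onto
    # today's `ram` / `vram`, and `tiled_decode` onto `force_tiled_decode`.
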
    # this is not referred to in the source code and can be removed entirely
    # free_gpu_mem: Optional[bool] = Field(default=None, description="If true, purge model from GPU after each generation.", json_schema_extra=Categories.MemoryPerformance)

    # See InvokeAIAppConfig subclass below for CACHE and DEVICE categories
    # fmt: on

    model_config = SettingsConfigDict(validate_assignment=True, env_prefix="INVOKEAI")

    def parse_args(
        self,
        argv: Optional[list[str]] = None,
        conf: Optional[DictConfig] = None,
        clobber: Optional[bool] = False,
    ) -> None:
        """
        Update settings with contents of init file, environment, and command-line settings.

        :param conf: alternate Omegaconf dictionary object
        :param argv: alternate sys.argv list
        :param clobber: overwrite any initialization parameters passed during initialization
        """
        # Set the runtime root directory. We parse command-line switches here
        # in order to pick up the --root_dir option.
        super().parse_args(argv)

        loaded_conf = None
        if conf is None:
            try:
The workflow/node validation logic has been updated to migrate old workflows to use the new default values for `use_cache`. Users may still want to review the settings that have been chosen. In the event of catastrophic failure when running this migration, the default value of `True` is applied, as this is correct for most nodes.
Users should consider saving their workflows after loading them in and having them updated.
## Future Enhancements - Callback
A future enhancement would be to provide a callback to the `use_cache` flag that would be run as the node is executed to determine, based on its own internal state, if the cache should be used or not.
This would be useful for `DynamicPromptInvocation`, where the deterministic behaviour is determined by the `combinatorial: bool` field.
## Future Enhancements - Persisted Cache
Similar to how the latents storage is backed by disk, the invocation cache could be persisted to the database or disk. We'd need to be very careful about deserializing outputs, but it's perhaps worth exploring in the future.
* fix(ui): fix queue list item width
* feat(nodes): do not send the whole node on every generator progress
* feat(ui): strip out old logic related to sessions
Things like `isProcessing` are no longer relevant with queue. Removed them all & updated everything be appropriate for queue. May be a few little quirks I've missed...
* feat(ui): fix up param collapse labels
* feat(ui): click queue count to go to queue tab
* tidy(queue): update comment, query format
* feat(ui): fix progress bar when canceling
* fix(ui): fix circular dependency
* feat(nodes): bail on node caching logic if `node_cache_size == 0`
* feat(nodes): handle KeyError on node cache pop
* feat(nodes): bypass cache codepath if caches is disabled
more better no do thing
* fix(ui): reset api cache on connect/disconnect
* feat(ui): prevent enqueue when no prompts generated
* feat(ui): add queue controls to workflow editor
* feat(ui): update floating buttons & other incidental UI tweaks
* fix(ui): fix missing/incorrect translation keys
* fix(tests): add config service to mock invocation services
invoking needs access to `node_cache_size` to occur
* optionally remove pause/resume buttons from queue UI
* option to disable prepending
* chore(ui): remove unused file
* feat(queue): remove `order_id` entirely, `item_id` is now an autoinc pk
---------
Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2023-09-20 05:09:24 +00:00
                loaded_conf = OmegaConf.load(self.root_dir / INIT_FILE)
2023-08-17 22:45:25 +00:00
            except Exception:
2023-05-04 05:20:30 +00:00
                pass
feat: queued generation (#4502)
* fix(config): fix typing issues in `config/`
`config/invokeai_config.py`:
- use `Optional` for things that are optional
- fix typing of `ram_cache_size()` and `vram_cache_size()`
- remove unused and incorrectly typed method `autoconvert_path`
- fix types and logic for `parse_args()`, in which `InvokeAIAppConfig.initconf` *must* be a `DictConfig`, but the function would allow it to be set as a `ListConfig`, which presumably would cause issues elsewhere
`config/base.py`:
- use `cls` for first arg of class methods
- use `Optional` for things that are optional
- fix minor type issue related to setting of `env_prefix`
- remove unused `add_subparser()` method, which calls `add_parser()` on an `ArgumentParser` (a method only available on the `_SubParsersAction` object returned by `ArgumentParser.add_subparsers()`)
* feat: queued generation and batches
Due to a very messy branch with broad addition of `isort` on `main` alongside it, some git surgery was needed to get an agreeable git history. This commit represents all of the work on queued generation. See PR for notes.
* chore: flake8, isort, black
* fix(nodes): fix incorrect service stop() method
* fix(nodes): improve names of a few variables
* fix(tests): fix up tests after changes to batches/queue
* feat(tests): add unit tests for session queue helper functions
* feat(ui): dynamic prompts is always enabled
* feat(queue): add queue_status_changed event
* feat(ui): wip queue graphs
* feat(nodes): move cleanup until after invoker startup
* feat(nodes): add cancel_by_batch_ids
* feat(ui): wip batch graphs & UI
* fix(nodes): remove `Batch.batch_id` from required
* fix(ui): cleanup and use fixedCacheKey for all mutations
* fix(ui): remove orphaned nodes from canvas graphs
* fix(nodes): fix cancel_by_batch_ids result count
* fix(ui): only show cancel batch tooltip when batches were canceled
* chore: isort
* fix(api): return `[""]` when dynamic prompts generates no prompts
Just a simple fallback so we always have a prompt.
* feat(ui): dynamicPrompts.combinatorial is always on
There seems to be little purpose in using combinatorial generation for dynamic prompts. The toggle is now hidden from the UI and combinatorial defaults to true. If we want to expose it again in the future, it's straightforward to do so.
* feat: add queue_id & support logic
* feat(ui): fix upscale button
It prepends the upscale operation to the queue
* feat(nodes): return queue item when enqueuing a single graph
This facilitates one-off graph async workflows in the client.
* feat(ui): move controlnet autoprocess to queue
* fix(ui): fix non-serializable DOMRect in redux state
* feat(ui): QueueTable performance tweaks
* feat(ui): update queue list
Queue items expand to show the full queue item. Just as JSON for now.
* wip threaded session_processor
* feat(nodes,ui): fully migrate queue to session_processor
* feat(nodes,ui): add processor events
* feat(ui): ui tweaks
* feat(nodes,ui): consolidate events, reduce network requests
* feat(ui): cleanup & abstract queue hooks
* feat(nodes): optimize batch permutation
Use a generator to do only as much work as is needed.
Previously, though we only ended up creating exactly as many queue items as were needed, there was still some intermediate work that calculated *all* permutations. When that number was very high, the system had a very hard time and used a lot of memory.
The logic has been refactored to use a generator. Additionally, the batch validators are optimized to return early and use less memory.
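A minimal sketch of the generator-based approach (illustrative names, not the actual InvokeAI implementation):

```python
import itertools
from typing import Any, Iterable, Iterator

def permute_batch(fields: dict[str, Iterable[Any]]) -> Iterator[dict[str, Any]]:
    """Yield one permutation at a time instead of materializing them all."""
    keys = list(fields.keys())
    # itertools.product materializes each input, but yields the cartesian
    # product lazily, so the full set of permutations is never built at once.
    for values in itertools.product(*fields.values()):
        yield dict(zip(keys, values))

# Only the first 10 permutations are ever computed here:
first_ten = list(itertools.islice(permute_batch({"seed": range(1000), "cfg_scale": [7.5, 10.0]}), 10))
```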
* feat(ui): add seed behaviour parameter
This dynamic prompts parameter allows the seed to be randomized per prompt or per iteration:
- Per iteration: Use the same seed for all prompts in a single dynamic prompt expansion
- Per prompt: Use a different seed for every single prompt
"Per iteration" is appropriate for exploring a the latents space with a stable starting noise, while "Per prompt" provides more variation.
* fix(ui): remove extraneous random seed nodes from linear graphs
* fix(ui): fix controlnet autoprocess not working when queue is running
* feat(queue): add timestamps to queue status updates
Also show execution time in queue list
* feat(queue): change all execution-related events to use the `queue_id` as the room, also include `queue_item_id` in InvocationQueueItem
This allows for much simpler handling of queue items.
* feat(api): deprecate sessions router
* chore(backend): tidy logging in `dependencies.py`
* fix(backend): respect `use_memory_db`
* feat(backend): add `config.log_sql` (enables sql trace logging)
* feat: add invocation cache
Supersedes #4574
The invocation cache provides simple node memoization functionality. Nodes that use the cache are memoized and not re-executed if their inputs haven't changed. Instead, the stored output is returned.
## Results
This feature provides anywhere from a significant to a massive performance improvement.
The improvement is most marked on large batches of generations where you only change a couple of things (e.g. a different seed or prompt for each iteration) and on low-VRAM systems, where skipping an extraneous model load is a big deal.
## Overview
A new `invocation_cache` service is added to handle the caching. There's not much to it.
All nodes now inherit a boolean `use_cache` field from `BaseInvocation`. This is a node field and not a class attribute, because specific instances of nodes may want to opt in or out of caching.
The recently-added `invoke_internal()` method on `BaseInvocation` is used as an entrypoint for the cache logic.
To create a cache key, the invocation is first serialized using pydantic's provided `json()` method, skipping the unique `id` field. Then python's very fast builtin `hash()` is used to create an integer key. All implementations of `InvocationCacheBase` must provide a class method `create_key()` which accepts an invocation and outputs a string or integer key.
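A minimal sketch of that key derivation (assuming a pydantic v1-style model; names are illustrative):

```python
from pydantic import BaseModel

class ExampleInvocation(BaseModel):
    id: str  # unique per node instance, so it must be excluded from the key
    prompt: str = "a cat"

def create_key(invocation: BaseModel) -> int:
    # Serialize everything except `id`, then hash the resulting JSON string.
    return hash(invocation.json(exclude={"id"}))

a = ExampleInvocation(id="1", prompt="a dog")
b = ExampleInvocation(id="2", prompt="a dog")
assert create_key(a) == create_key(b)  # identical inputs -> identical key
```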
## In-Memory Implementation
An in-memory implementation is provided. In this implementation, the node outputs are stored in memory as python classes. The in-memory cache does not persist across application restarts.
Max node cache size is added as `node_cache_size` under the `Generation` config category.
It defaults to 512 - this number is up for discussion, but given that these are relatively lightweight pydantic models, I think it's safe to up this even higher.
Note that the cache isn't storing the big stuff - tensors and images are stored on disk, and outputs include only references to them.
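A toy version of such a bounded in-memory store (a sketch only; the real service implements `InvocationCacheBase`):

```python
from collections import OrderedDict
from typing import Any, Optional

class ToyInvocationCache:
    """Holds up to max_size outputs in memory; nothing is persisted."""

    def __init__(self, max_size: int = 512) -> None:
        self._max_size = max_size
        self._items: OrderedDict[int, Any] = OrderedDict()

    def get(self, key: int) -> Optional[Any]:
        return self._items.get(key)

    def save(self, key: int, output: Any) -> None:
        self._items[key] = output
        if len(self._items) > self._max_size:
            self._items.popitem(last=False)  # evict the oldest entry
```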
## Node Definition
The default for all nodes is to use the cache. The `@invocation` decorator now accepts an optional `use_cache: bool` argument to override the default of `True`.
Non-deterministic nodes, however, should set this to `False`. Currently, all random-stuff nodes, including `dynamic_prompt`, are set to `False`.
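For example, a non-deterministic node might opt out like this (decorator arguments other than `use_cache` are illustrative):

```python
@invocation("my_random_node", use_cache=False)  # never memoize this node
class MyRandomInvocation(BaseInvocation):
    ...
```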
The field name `use_cache` is now effectively a reserved field name and possibly a breaking change if any community nodes use this as a field name. In hindsight, all our reserved field names should have been prefixed with underscores or something.
## One Gotcha
Leaf nodes probably want to opt out of the cache, because if they are cached, their output-saving side effects do not run again on a cache hit.
If you run the same graph multiple times, you only end up with a single image output, because the image storage side-effects are in the `invoke()` method, which is bypassed if we have a cache hit.
## Linear UI
The linear graphs _almost_ just work, but due to the gotcha, we need to be careful about the final image-outputting node. To resolve this, a `SaveImageInvocation` node is added and used in the linear graphs.
This node is similar to `ImagePrimitive`, except it saves a copy of its input image, and has `use_cache` set to `False` by default.
This is now the leaf node in all linear graphs, and is the only node in those graphs with `use_cache == False` _and_ the only node with `is_intermediate == False`.
## Workflow Editor
All nodes now have a footer with a new `Use Cache [ ]` checkbox. It defaults to the value set by the invocation in its python definition, but can be changed by the user.
The workflow/node validation logic has been updated to migrate old workflows to use the new default values for `use_cache`. Users may still want to review the settings that have been chosen. In the event of catastrophic failure when running this migration, the default value of `True` is applied, as this is correct for most nodes.
Users should consider saving their workflows after loading them in and having them updated.
## Future Enhancements - Callback
A future enhancement would be to provide a callback to the `use_cache` flag that would be run as the node is executed to determine, based on its own internal state, if the cache should be used or not.
This would be useful for `DynamicPromptInvocation`, where the deterministic behaviour is determined by the `combinatorial: bool` field.
## Future Enhancements - Persisted Cache
Similar to how the latents storage is backed by disk, the invocation cache could be persisted to the database or disk. We'd need to be very careful about deserializing outputs, but it's perhaps worth exploring in the future.
* fix(ui): fix queue list item width
* feat(nodes): do not send the whole node on every generator progress
* feat(ui): strip out old logic related to sessions
Things like `isProcessing` are no longer relevant with the queue. Removed them all & updated everything to be appropriate for the queue. There may be a few little quirks I've missed...
* feat(ui): fix up param collapse labels
* feat(ui): click queue count to go to queue tab
* tidy(queue): update comment, query format
* feat(ui): fix progress bar when canceling
* fix(ui): fix circular dependency
* feat(nodes): bail on node caching logic if `node_cache_size == 0`
* feat(nodes): handle KeyError on node cache pop
* feat(nodes): bypass the cache code path entirely when the cache is disabled
* fix(ui): reset api cache on connect/disconnect
* feat(ui): prevent enqueue when no prompts generated
* feat(ui): add queue controls to workflow editor
* feat(ui): update floating buttons & other incidental UI tweaks
* fix(ui): fix missing/incorrect translation keys
* fix(tests): add config service to mock invocation services
invocation needs access to `node_cache_size` in order to run
* optionally remove pause/resume buttons from queue UI
* option to disable prepending
* chore(ui): remove unused file
* feat(queue): remove `order_id` entirely, `item_id` is now an autoinc pk
---------
Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2023-09-20 05:09:24 +00:00
        if isinstance(loaded_conf, DictConfig):
            InvokeAISettings.initconf = loaded_conf
        else:
            InvokeAISettings.initconf = conf
2023-07-04 21:05:35 +00:00
2023-05-04 05:20:30 +00:00
        # Parse args again in order to pick up settings in the configuration file.
2023-05-26 00:41:26 +00:00
        super().parse_args(argv)
2023-05-04 05:20:30 +00:00
2023-05-26 00:41:26 +00:00
        if self.singleton_init and not clobber:
2024-02-02 09:55:19 +00:00
            # When setting values in this way, set validate_assignment to true if you want to validate the value.
2024-02-02 09:48:05 +00:00
            for k, v in self.singleton_init.items():
                setattr(self, k, v)
2023-05-04 05:20:30 +00:00
2023-05-26 00:41:26 +00:00
    @classmethod
2024-01-14 19:54:53 +00:00
    def get_config(cls, **kwargs: Any) -> InvokeAIAppConfig:
2023-11-24 04:15:32 +00:00
""" Return a singleton InvokeAIAppConfig configuration object. """
2023-05-26 00:41:26 +00:00
        if (
            cls.singleton_config is None
2023-08-17 22:45:25 +00:00
            or type(cls.singleton_config) is not cls
2023-05-26 00:41:26 +00:00
            or (kwargs and cls.singleton_init != kwargs)
        ):
            cls.singleton_config = cls(**kwargs)
            cls.singleton_init = kwargs
        return cls.singleton_config
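    # Usage sketch (illustrative, not part of the API surface): repeated calls
    # return the same singleton until different kwargs force re-creation.
    #
    #   c1 = InvokeAIAppConfig.get_config()
    #   c2 = InvokeAIAppConfig.get_config()
    #   assert c1 is c2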
2023-07-04 21:05:35 +00:00
2023-05-04 05:20:30 +00:00
    @property
    def root_path(self) -> Path:
2023-11-26 22:00:27 +00:00
""" Path to the runtime root directory. """
2023-08-01 01:15:44 +00:00
        if self.root:
2023-07-30 17:37:18 +00:00
            root = Path(self.root).expanduser().absolute()
2023-05-04 05:20:30 +00:00
        else:
2023-08-01 02:36:11 +00:00
            root = self.find_root().expanduser().absolute()
2023-08-01 02:27:07 +00:00
        self.root = root  # insulate ourselves from relative paths that may change
2023-12-22 12:30:51 +00:00
        return root.resolve()
2023-05-04 05:20:30 +00:00
    @property
    def root_dir(self) -> Path:
2023-11-26 22:00:27 +00:00
""" Alias for above. """
2023-05-04 05:20:30 +00:00
        return self.root_path

    def _resolve(self, partial_path: Path) -> Path:
        return (self.root_path / partial_path).resolve()
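    # Example (paths illustrative): with root /home/user/invokeai,
    # _resolve(Path("models")) -> /home/user/invokeai/models, so relative
    # settings are always anchored at the runtime root.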
2023-05-30 17:49:43 +00:00
    @property
    def init_file_path(self) -> Path:
2023-11-26 22:00:27 +00:00
""" Path to invokeai.yaml. """
2023-11-26 23:35:27 +00:00
        resolved_path = self._resolve(INIT_FILE)
        assert resolved_path is not None
        return resolved_path
2023-05-30 17:49:43 +00:00
2023-05-04 05:20:30 +00:00
    @property
2023-11-26 23:35:27 +00:00
    def output_path(self) -> Optional[Path]:
2023-11-26 22:00:27 +00:00
""" Path to defaults outputs directory. """
2023-05-04 05:20:30 +00:00
        return self._resolve(self.outdir)

    @property
2023-06-04 00:24:41 +00:00
    def db_path(self) -> Path:
2023-11-26 22:00:27 +00:00
""" Path to the invokeai.db file. """
2023-11-26 23:35:27 +00:00
        db_dir = self._resolve(self.db_dir)
        assert db_dir is not None
        return db_dir / DB_FILE
2023-06-04 00:24:41 +00:00
2023-05-04 05:20:30 +00:00
    @property
2023-12-04 22:08:33 +00:00
    def model_conf_path(self) -> Path:
2023-11-26 22:00:27 +00:00
""" Path to models configuration file. """
2023-05-04 05:20:30 +00:00
        return self._resolve(self.conf_path)

    @property
2023-12-04 22:08:33 +00:00
    def legacy_conf_path(self) -> Path:
2023-11-26 22:00:27 +00:00
""" Path to directory of legacy configuration files (e.g. v1-inference.yaml). """
2023-05-04 05:20:30 +00:00
        return self._resolve(self.legacy_conf_dir)

    @property
2023-12-04 22:08:33 +00:00
    def models_path(self) -> Path:
2023-11-26 22:00:27 +00:00
""" Path to the models directory. """
2023-06-09 03:11:53 +00:00
        return self._resolve(self.models_dir)
2023-05-30 04:38:37 +00:00
2024-02-04 03:55:09 +00:00
    @property
    def models_convert_cache_path(self) -> Path:
        """Path to the converted models cache directory."""
        return self._resolve(self.convert_cache_dir)
2023-10-19 06:51:55 +00:00
    @property
    def custom_nodes_path(self) -> Path:
2023-11-26 22:00:27 +00:00
""" Path to the custom nodes directory. """
2023-11-26 23:35:27 +00:00
        custom_nodes_path = self._resolve(self.custom_nodes_dir)
        assert custom_nodes_path is not None
        return custom_nodes_path
2023-10-19 06:51:55 +00:00
2023-05-04 05:20:30 +00:00
    # The following methods support legacy calls left over from the Globals era.
    @property
    def full_precision(self) -> bool:
2023-11-26 22:00:27 +00:00
""" Return true if precision set to float32. """
2023-05-04 05:20:30 +00:00
        return self.precision == "float32"
2023-05-16 05:50:01 +00:00
    @property
    def try_patchmatch(self) -> bool:
2023-11-26 22:00:27 +00:00
""" Return true if patchmatch true. """
2023-05-16 05:50:01 +00:00
        return self.patchmatch
2023-07-26 10:53:35 +00:00
    @property
    def nsfw_checker(self) -> bool:
2023-11-26 22:00:27 +00:00
""" Return value for NSFW checker. The NSFW node is always active and disabled from Web UI. """
2023-07-26 10:53:35 +00:00
        return True

    @property
    def invisible_watermark(self) -> bool:
2023-11-26 22:00:27 +00:00
""" Return value of invisible watermark. It is always active and disabled from Web UI. """
2023-07-26 10:53:35 +00:00
        return True
2023-07-27 14:54:01 +00:00
2023-08-17 03:30:00 +00:00
    @property
2024-02-04 03:55:09 +00:00
    def ram_cache_size(self) -> float:
        """Return the RAM cache size using the legacy or modern setting (GB)."""
2023-08-17 17:47:26 +00:00
        return self.max_cache_size or self.ram
2023-08-17 03:30:00 +00:00
    @property
2024-02-04 03:55:09 +00:00
    def vram_cache_size(self) -> float:
        """Return the VRAM cache size using the legacy or modern setting (GB)."""
2023-08-17 17:47:26 +00:00
        return self.max_vram_cache_size or self.vram
2024-02-04 03:55:09 +00:00
    @property
    def convert_cache_size(self) -> float:
        """Return the convert cache size on disk (GB)."""
        return self.convert_cache
2023-08-17 17:47:26 +00:00
    @property
    def use_cpu(self) -> bool:
2023-11-26 22:00:27 +00:00
""" Return true if the device is set to CPU or the always_use_cpu flag is set. """
2023-08-17 17:47:26 +00:00
        return self.always_use_cpu or self.device == "cpu"

    @property
    def disable_xformers(self) -> bool:
2023-11-26 22:00:27 +00:00
""" Return true if enable_xformers is false (reversed logic) and attention type is not set to xformers. """
2023-08-17 17:47:26 +00:00
        disabled_in_config = not self.xformers_enabled
        return disabled_in_config and self.attention_type != "xformers"
2024-01-31 10:51:57 +00:00
    @property
    def profiles_path(self) -> Path:
        """Path to the graph profiles directory."""
        return self._resolve(self.profiles_dir)
2023-05-04 05:20:30 +00:00
    @staticmethod
    def find_root() -> Path:
2023-11-26 22:00:27 +00:00
""" Choose the runtime root directory when not specified on command line or init file. """
2023-05-04 05:20:30 +00:00
        return _find_root()
2024-03-08 00:52:21 +00:00
    @staticmethod
    def generate_docstrings() -> str:
        """Helper function for mkdocs. Generates a docstring for the InvokeAIAppConfig class.

        You shouldn't run this manually. Instead, run `scripts/update-config-docstring.py` to update the docstring.
        A Makefile target is also available: `make update-config-docstring`.
        See that script for more information about why this is necessary.
        """
        docstring = '"""Invoke App Configuration\n\n'
2024-03-09 06:20:31 +00:00
        docstring += "Attributes:"
2024-03-08 00:52:21 +00:00
        field_descriptions: dict[str, list[str]] = {}
        for k, v in InvokeAIAppConfig.model_fields.items():
            if not isinstance(v.json_schema_extra, dict):
                # Should never happen
                continue
            category = v.json_schema_extra.get("category", None)
            if not isinstance(category, str) or category == "Deprecated":
                continue
            if not field_descriptions.get(category):
                field_descriptions[category] = []
            field_descriptions[category].append(f"{k}: **{category}**: {v.description}")
        for c in [
            "Web Server",
            "Features",
            "Paths",
            "Logging",
            "Development",
2024-03-08 18:28:17 +00:00
" CLIArgs " ,
" Model Install " ,
2024-03-08 00:52:21 +00:00
" Model Cache " ,
" Device " ,
" Generation " ,
" Queue " ,
" Nodes " ,
] :
docstring + = " \n "
docstring + = " \n " . join ( field_descriptions [ c ] )
docstring + = ' \n " " " '
return docstring
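    # Sketch of the generated output (field lines are illustrative):
    #   """Invoke App Configuration
    #
    #   Attributes:
    #   host: **Web Server**: IP address to bind to.
    #   port: **Web Server**: Port to bind to.
    #   """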
2023-05-17 19:22:58 +00:00
2024-01-14 19:54:53 +00:00
def get_invokeai_config(**kwargs: Any) -> InvokeAIAppConfig:
2023-11-26 22:00:27 +00:00
""" Legacy function which returns InvokeAIAppConfig.get_config(). """
2023-05-26 00:41:26 +00:00
    return InvokeAIAppConfig.get_config(**kwargs)
2023-08-17 03:30:00 +00:00
2023-08-17 17:47:26 +00:00
def _find_root() -> Path:
    venv = Path(os.environ.get("VIRTUAL_ENV") or ".")
    if os.environ.get("INVOKEAI_ROOT"):
        root = Path(os.environ["INVOKEAI_ROOT"])
2023-11-10 23:44:43 +00:00
    elif any((venv.parent / x).exists() for x in [INIT_FILE, LEGACY_INIT_FILE]):
2023-08-17 17:47:26 +00:00
        root = venv.parent.resolve()
    else:
        root = Path("~/invokeai").expanduser().resolve()
    return root
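# Root resolution order implemented above (highest priority first):
#   1. $INVOKEAI_ROOT, when set
#   2. the parent of the virtualenv directory, when it contains invokeai.yaml
#      (or the legacy init file)
#   3. ~/invokeai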