diff --git a/docs/features/CONFIGURATION.md b/docs/features/CONFIGURATION.md
index b417ec4980..8ec1856802 100644
--- a/docs/features/CONFIGURATION.md
+++ b/docs/features/CONFIGURATION.md
@@ -6,85 +6,63 @@ title: Configuration
 
 ## Intro
 
-InvokeAI has numerous runtime settings which can be used to adjust
-many aspects of its operations, including the location of files and
-directories, memory usage, and performance. These settings can be
-viewed and customized in several ways:
+Runtime settings, including the location of files and
+directories, memory usage, and performance, are managed via the
+`invokeai.yaml` config file.
 
-1. By editing settings in the `invokeai.yaml` file.
-2. By setting environment variables.
-3. On the command-line, when InvokeAI is launched.
-
-In addition, the most commonly changed settings are accessible
+The most commonly changed settings are also accessible
 graphically via the `invokeai-configure` script.
 
-### How the Configuration System Works
+### InvokeAI Root Directory
 
-When InvokeAI is launched, the very first thing it needs to do is to
-find its "root" directory, which contains its configuration files,
-installed models, its database of images, and the folder(s) of
-generated images themselves. In this document, the root directory will
-be referred to as ROOT.
+On startup, InvokeAI searches for its "root" directory. This is the directory
+that contains models, images, the database, and so on. It also contains
+a configuration file called `invokeai.yaml`.
 
-#### Finding the Root Directory
+InvokeAI searches for the root directory in this order:
 
-To find its root directory, InvokeAI uses the following recipe:
+1. The `--root <path>` CLI arg.
+2. The `INVOKEAI_ROOT` environment variable.
+3. The directory containing the currently active virtual environment.
+4. Fallback: a directory in the current user's home directory named `invokeai`.
 
-1. It first looks for the argument `--root <path>` on the command line
-   it was launched from, and uses the indicated path if present.
+### InvokeAI Configuration File
 
-2. Next it looks for the environment variable INVOKEAI_ROOT, and uses
-   the directory path found there if present.
+Inside the root directory, we read settings from the `invokeai.yaml` file.
 
-3. If neither of these are present, then InvokeAI looks for the
-   folder containing the `.venv` Python virtual environment directory for
-   the currently active environment. This directory is checked for files
-   expected inside the InvokeAI root before it is used.
+It has two sections - one for internal use and one for user settings:
 
-4. Finally, InvokeAI looks for a directory in the current user's home
-   directory named `invokeai`.
+```yaml
+# Internal metadata - do not edit:
+meta:
+  schema_version: 4
 
-#### Reading the InvokeAI Configuration File
-
-Once the root directory has been located, InvokeAI looks for a file
-named `ROOT/invokeai.yaml`, and if present reads configuration values
-from it. The top of this file looks like this:
-
-```
-InvokeAI:
-  Web Server:
-    host: localhost
-    port: 9090
-    allow_origins: []
-    allow_credentials: true
-    allow_methods:
-    - '*'
-    allow_headers:
-    - '*'
-  Features:
-    esrgan: true
-    internet_available: true
-    log_tokenization: false
-    patchmatch: true
-    restore: true
-...
+# Put user settings here:
+host: 0.0.0.0
+models_dir: /external_drive/invokeai/models
+ram: 24
+precision: float16
 ```
 
-This lines in this file are used to establish default values for
-Invoke's settings. In the above fragment, the Web Server's listening
-port is set to 9090 by the `port` setting.
+In this example, we've changed a few settings:
 
-You can edit this file with a text editor such as "Notepad" (do not
-use Word or any other word processor). When editing, be careful to
-maintain the indentation, and do not add extraneous text, as syntax
-errors will prevent InvokeAI from launching. A basic guide to the
-format of YAML files can be found
-[here](https://circleci.com/blog/what-is-yaml-a-beginner-s-guide/).
+- `host: 0.0.0.0`: allow other machines on the network to connect
+- `models_dir: /external_drive/invokeai/models`: store model files here
+- `ram: 24`: set the model RAM cache to a max of 24GB
+- `precision: float16`: use more efficient FP16 precision
+
+The settings in this file will override the defaults. You only need
+to change this file if the default for a particular setting doesn't
+work for you.
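+
+For instance, if the only thing you'd like to change is the server's
+listening port, a minimal `invokeai.yaml` contains just that one key
+(the port number below is purely illustrative):
+
+```yaml
+# Internal metadata - do not edit:
+meta:
+  schema_version: 4
+
+# Put user settings here:
+port: 9191
+```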
+
+Some settings, like [Model Marketplace API Keys], require the YAML
+to be formatted correctly. Here is a [basic guide to YAML files].
 
 You can fix a broken `invokeai.yaml` by deleting it and running the
 configuration script again -- option [6] in the launcher, "Re-run the
 configure script".
+
 
-#### Reading the Command Line
+### CLI Args
 
-Lastly, InvokeAI takes settings from the command line, which override
-everything else. The command-line settings have the same name as the
-corresponding configuration file settings, preceded by a `--`, for
-example `--port 8000`.
+A subset of settings may be specified using CLI args:
 
-If you are using the launcher (`invoke.sh` or `invoke.bat`) to launch
-InvokeAI, then just pass the command-line arguments to the launcher:
+- `--root`: specify the root directory
+- `--ignore_missing_core_models`: if set, do not check for models needed
+  to convert checkpoint/safetensor models to diffusers
 
-```
-invoke.bat --port 8000 --host 0.0.0.0
-```
+### All Settings
 
-The arguments will be applied when you select the web server option
-(and the other options as well).
-
-If, on the other hand, you prefer to launch InvokeAI directly from the
-command line, you would first activate the virtual environment (known
-as the "developer's console" in the launcher), and run `invokeai-web`:
-
-```
-> C:\Users\Fred\invokeai\.venv\scripts\activate
-(.venv) > invokeai-web --port 8000 --host 0.0.0.0
-```
-
-You can get a listing and brief instructions for each of the
-command-line options by giving the `--help` argument:
-
-```
-(.venv) > invokeai-web --help
-usage: InvokeAI [-h] [--host HOST] [--port PORT] [--allow_origins [ALLOW_ORIGINS ...]] [--allow_credentials | --no-allow_credentials] [--allow_methods [ALLOW_METHODS ...]]
-                [--allow_headers [ALLOW_HEADERS ...]] [--esrgan | --no-esrgan] [--internet_available | --no-internet_available] [--log_tokenization | --no-log_tokenization]
-                [--patchmatch | --no-patchmatch] [--restore | --no-restore]
-                [--always_use_cpu | --no-always_use_cpu] [--free_gpu_mem | --no-free_gpu_mem] [--max_loaded_models MAX_LOADED_MODELS] [--max_cache_size MAX_CACHE_SIZE]
-                [--max_vram_cache_size MAX_VRAM_CACHE_SIZE] [--gpu_mem_reserved GPU_MEM_RESERVED] [--precision {auto,float16,float32,autocast}]
-                [--sequential_guidance | --no-sequential_guidance] [--xformers_enabled | --no-xformers_enabled] [--tiled_decode | --no-tiled_decode] [--root ROOT]
-                [--autoimport_dir AUTOIMPORT_DIR] [--lora_dir LORA_DIR] [--embedding_dir EMBEDDING_DIR] [--controlnet_dir CONTROLNET_DIR] [--conf_path CONF_PATH]
-                [--models_dir MODELS_DIR] [--legacy_conf_dir LEGACY_CONF_DIR] [--db_dir DB_DIR] [--outdir OUTDIR] [--from_file FROM_FILE]
-                [--use_memory_db | --no-use_memory_db] [--model MODEL] [--log_handlers [LOG_HANDLERS ...]] [--log_format {plain,color,syslog,legacy}]
-                [--log_level {debug,info,warning,error,critical}] [--version | --no-version]
-```
-
-## The Configuration Settings
-
-The config is managed by the `InvokeAIAppConfig` class, which is a pydantic model. The below docs are autogenerated from the class.
-
-When editing your `invokeai.yaml` file, you'll need to put settings under their appropriate group. The group for each setting is denoted in the table below.
 
+The config is managed by the `InvokeAIAppConfig` class. The docs below are autogenerated from the class. Following the table are additional explanations for certain settings.
 
 ::: invokeai.app.services.config.config_default.InvokeAIAppConfig
     options:
-        heading_level: 3
+        heading_level: 4
+        members: false
+        show_docstring_description: false
+        group_by_category: true
+        show_category_heading: false
 
-### Model Marketplace API Keys
+#### Model Marketplace API Keys
 
 Some model marketplaces require an API key to download models. You can
 provide a URL pattern and appropriate token in your `invokeai.yaml`
 file to provide that API key.
 
@@ -181,7 +126,7 @@ InvokeAI:
 
 The provided token will be added as a `Bearer` token to the network
 requests to download the model files. As far as we know, this works for
 all model marketplaces that require authorization.
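+
+For illustration, a token for `civitai.com` might be supplied via the
+`remote_api_tokens` setting, as a sketch in the new flat format (the
+regex and token below are placeholders; the settings table above is
+the authoritative reference for the field names):
+
+```yaml
+# Attach this token (as a Bearer token) to any model download
+# whose URL matches the regex:
+remote_api_tokens:
+  - url_regex: civitai.com
+    token: my_secret_civitai_token
+```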
 
-### Model Hashing
+#### Model Hashing
 
 Models are hashed during installation, providing a stable identifier
 for models across all platforms. The default algorithm is `blake3`,
 with a multi-threaded implementation.
 
@@ -203,7 +148,7 @@ InvokeAI:
 
 Most common algorithms are supported, like `md5`, `sha256`, and
 `sha512`. These are typically much, much slower than `blake3`.
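+
+For example, switching to `sha256` is a one-line change in
+`invokeai.yaml` (a minimal sketch, using the `hashing_algorithm`
+setting documented in the table above):
+
+```yaml
+# Slower than the default blake3, but widely interoperable:
+hashing_algorithm: sha256
+```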
 
-### Paths
+#### Paths
 
 These options set the paths of various directories and files used by
 InvokeAI. Relative paths are interpreted relative to the root directory, so
@@ -215,7 +160,7 @@ Note that the autoimport directory will be searched recursively,
 allowing you to organize the models into folders and subfolders in any
 way you wish.
 
-### Logging
+#### Logging
 
 Several different log handler destinations are available, and multiple
 destinations are supported by providing a list:
 
@@ -257,3 +202,6 @@ The `log_format` option provides several alternative formats:
 - `plain` - same as above, but monochrome text only
 - `syslog` - the log level and error message only, allowing the syslog system to attach the time and date
 - `legacy` - a format similar to the one used by the legacy 2.3 InvokeAI releases.
+
+[basic guide to yaml files]: https://circleci.com/blog/what-is-yaml-a-beginner-s-guide/
+[Model Marketplace API Keys]: #model-marketplace-api-keys
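+
+As a closing sketch of the logging settings described above, a
+configuration that logs to both the console and a file might look like
+this (the `file=<path>` handler form is assumed from the full logging
+docs; the path, format, and level are illustrative):
+
+```yaml
+log_handlers:
+  - console
+  - file=/var/log/invokeai.log
+log_format: plain
+log_level: info
+```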