From fb1ae55010cd061c21ec71a2ecaf6db65fb9bfdc Mon Sep 17 00:00:00 2001
From: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date: Fri, 8 Mar 2024 11:55:05 +1100
Subject: [PATCH] docs: update CONFIGURATION.md to use autogenerated docs

---
 docs/features/CONFIGURATION.md | 123 +++++++--------------------------
 1 file changed, 26 insertions(+), 97 deletions(-)

diff --git a/docs/features/CONFIGURATION.md b/docs/features/CONFIGURATION.md
index f98037d968..84b2b3b054 100644
--- a/docs/features/CONFIGURATION.md
+++ b/docs/features/CONFIGURATION.md
@@ -31,18 +31,18 @@ be referred to as ROOT. To find its root directory, InvokeAI uses the following
 recipe:
 
 1. It first looks for the argument `--root ` on the command line
-it was launched from, and uses the indicated path if present.
+   it was launched from, and uses the indicated path if present.
 
 2. Next it looks for the environment variable INVOKEAI_ROOT, and uses
-the directory path found there if present.
+   the directory path found there if present.
 
 3. If neither of these are present, then InvokeAI looks for the
-folder containing the `.venv` Python virtual environment directory for
-the currently active environment. This directory is checked for files
-expected inside the InvokeAI root before it is used.
+   folder containing the `.venv` Python virtual environment directory for
+   the currently active environment. This directory is checked for files
+   expected inside the InvokeAI root before it is used.
 
 4. Finally, InvokeAI looks for a directory in the current user's home
-directory named `invokeai`.
+   directory named `invokeai`.
 
 #### Reading the InvokeAI Configuration File
 
@@ -149,104 +149,33 @@ usage: InvokeAI [-h] [--host HOST] [--port PORT] [--allow_origins [ALLOW_ORIGINS
 
 ## The Configuration Settings
 
-The configuration settings are divided into several distinct
-groups in `invokeia.yaml`:
+The config is managed by the `InvokeAIAppConfig` class, which is a pydantic model. The docs below are autogenerated from the class.
 
-### Web Server
+When editing your `invokeai.yaml` file, you'll need to put settings under their appropriate group. The group for each setting is denoted in the table below.
 
-| Setting | Default Value | Description |
-|---------------------|---------------|----------------------------------------------------------------------------------------------------------------------------|
-| `host` | `localhost` | Name or IP address of the network interface that the web server will listen on |
-| `port` | `9090` | Network port number that the web server will listen on |
-| `allow_origins` | `[]` | A list of host names or IP addresses that are allowed to connect to the InvokeAI API in the format `['host1','host2',...]` |
-| `allow_credentials` | `true` | Require credentials for a foreign host to access the InvokeAI API (don't change this) |
-| `allow_methods` | `*` | List of HTTP methods ("GET", "POST") that the web server is allowed to use when accessing the API |
-| `allow_headers` | `*` | List of HTTP headers that the web server will accept when accessing the API |
-| `ssl_certfile` | null | Path to an SSL certificate file, used to enable HTTPS. |
-| `ssl_keyfile` | null | Path to an SSL keyfile, if the key is not included in the certificate file. |
-
-The documentation for InvokeAI's API can be accessed by browsing to the following URL: [http://localhost:9090/docs].
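For readers updating `invokeai.yaml` by hand, a minimal sketch of the grouping the new text describes might look like the following. The group names and values here are illustrative, drawn from the section headings and defaults in the tables being removed; consult the autogenerated table for the authoritative group of each setting.

```yaml
InvokeAI:
  Web Server:
    host: localhost
    port: 9090
  Features:
    esrgan: true
```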
-
-### Features
-
-These configuration settings allow you to enable and disable various InvokeAI features:
-
-| Setting | Default Value | Description |
-|----------|----------------|--------------|
-| `esrgan` | `true` | Activate the ESRGAN upscaling options|
-| `internet_available` | `true` | When a resource is not available locally, try to fetch it via the internet |
-| `log_tokenization` | `false` | Before each text2image generation, print a color-coded representation of the prompt to the console; this can help understand why a prompt is not working as expected |
-| `patchmatch` | `true` | Activate the "patchmatch" algorithm for improved inpainting |
-
-### Generation
-
-These options tune InvokeAI's memory and performance characteristics.
-
-| Setting | Default Value | Description |
-|-----------------------|---------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| `sequential_guidance` | `false` | Calculate guidance in serial rather than in parallel, lowering memory requirements at the cost of some performance loss |
-| `attention_type` | `auto` | Select the type of attention to use. One of `auto`,`normal`,`xformers`,`sliced`, or `torch-sdp` |
-| `attention_slice_size` | `auto` | When "sliced" attention is selected, set the slice size. One of `auto`, `balanced`, `max` or the integers 1-8|
-| `force_tiled_decode` | `false` | Force the VAE step to decode in tiles, reducing memory consumption at the cost of performance |
-
-### Device
-
-These options configure the generation execution device.
-
-| Setting | Default Value | Description |
-|-----------------------|---------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| `device` | `auto` | Preferred execution device. One of `auto`, `cpu`, `cuda`, `cuda:1`, `mps`. `auto` will choose the device depending on the hardware platform and the installed torch capabilities. |
-| `precision` | `auto` | Floating point precision. One of `auto`, `float16` or `float32`. `float16` will consume half the memory of `float32` but produce slightly lower-quality images. The `auto` setting will guess the proper precision based on your video card and operating system |
+Following the table are additional explanations for certain settings.
+
+::: invokeai.app.services.config.config_default.InvokeAIAppConfig
+    options:
+      heading_level: 3
+      members: false
+
 ### Paths
 
 These options set the paths of various directories and files used by
-InvokeAI. Relative paths are interpreted relative to INVOKEAI_ROOT, so
-if INVOKEAI_ROOT is `/home/fred/invokeai` and the path is
+InvokeAI. Relative paths are interpreted relative to the root directory, so
+if root is `/home/fred/invokeai` and the path is
 `autoimport/main`, then the corresponding directory will be located at
 `/home/fred/invokeai/autoimport/main`.
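As a sketch of how the relative-path rule just described plays out in `invokeai.yaml`, reusing the example root from the paragraph above (the group name and the absolute path shown are illustrative, not prescriptive):

```yaml
InvokeAI:
  Paths:
    # relative: resolved against the root, i.e. /home/fred/invokeai/outputs
    outdir: outputs
    # absolute: used as-is
    db_dir: /mnt/fast-disk/databases
```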
-| Setting | Default Value | Description |
-|----------|----------------|--------------|
-| `autoimport_dir` | `autoimport/main` | At startup time, read and import any main model files found in this directory |
-| `lora_dir` | `autoimport/lora` | At startup time, read and import any LoRA/LyCORIS models found in this directory |
-| `embedding_dir` | `autoimport/embedding` | At startup time, read and import any textual inversion (embedding) models found in this directory |
-| `controlnet_dir` | `autoimport/controlnet` | At startup time, read and import any ControlNet models found in this directory |
-| `conf_path` | `configs/models.yaml` | Location of the `models.yaml` model configuration file |
-| `models_dir` | `models` | Location of the directory containing models installed by InvokeAI's model manager |
-| `legacy_conf_dir` | `configs/stable-diffusion` | Location of the directory containing the .yaml configuration files for legacy checkpoint models |
-| `db_dir` | `databases` | Location of the directory containing InvokeAI's image, schema and session database |
-| `outdir` | `outputs` | Location of the directory in which the gallery of generated and uploaded images will be stored |
-| `use_memory_db` | `false` | Keep database information in memory rather than on disk; this will not preserve image gallery information across restarts |
-
-Note that the autoimport directories will be searched recursively,
+Note that the autoimport directory will be searched recursively,
 allowing you to organize the models into folders and subfolders in any
-way you wish. In addition, while we have split up autoimport
-directories by the type of model they contain, this isn't
-necessary. You can combine different model types in the same folder
-and InvokeAI will figure out what they are. So you can easily use just
-one autoimport directory by commenting out the unneeded paths:
-
-```
-Paths:
-  autoimport_dir: autoimport
-# lora_dir: null
-# embedding_dir: null
-# controlnet_dir: null
-```
+way you wish.
 
 ### Logging
 
-These settings control the information, warning, and debugging
-messages printed to the console log while InvokeAI is running:
-
-| Setting | Default Value | Description |
-|----------|----------------|--------------|
-| `log_handlers` | `console` | This controls where log messages are sent, and can be a list of one or more destinations. Values include `console`, `file`, `syslog` and `http`. These are described in more detail below |
-| `log_format` | `color` | This controls the formatting of the log messages. Values are `plain`, `color`, `legacy` and `syslog` |
-| `log_level` | `debug` | This filters messages according to the level of severity and can be one of `debug`, `info`, `warning`, `error` and `critical`. For example, setting to `warning` will display all messages at the warning level or higher, but won't display "debug" or "info" messages |
-
 Several different log handler destinations are available, and multiple destinations are supported by providing a list:
 
 ```
@@ -256,9 +185,9 @@ Several different log handler destinations are available, and multiple destinati
 - console
 - syslog=/dev/log
 - file=/var/log/invokeai.log
 ```
 
-* `console` is the default. It prints log messages to the command-line window from which InvokeAI was launched.
+- `console` is the default. It prints log messages to the command-line window from which InvokeAI was launched.
 
-* `syslog` is only available on Linux and Macintosh systems. It uses
+- `syslog` is only available on Linux and Macintosh systems. It uses
   the operating system's "syslog" facility to write log file entries
   locally or to a remote logging machine. `syslog` offers a variety
   of configuration options:
@@ -271,7 +200,7 @@ Several different log handler destinations are available, and multiple destinati
   - Log to LAN-connected server "fredserver" using the facility LOG_USER and datagram packets.
 ```
 
-* `http` can be used to log to a remote web server. The server must be
+- `http` can be used to log to a remote web server. The server must be
   properly configured to receive and act on log messages. The option
   accepts the URL to the web server, and a `method` argument
   indicating whether the message should be submitted using the GET or
@@ -283,7 +212,7 @@ Several different log handler destinations are available, and multiple destinati
 
 The `log_format` option provides several alternative formats:
 
-* `color` - default format providing time, date and a message, using text colors to distinguish different log severities
-* `plain` - same as above, but monochrome text only
-* `syslog` - the log level and error message only, allowing the syslog system to attach the time and date
-* `legacy` - a format similar to the one used by the legacy 2.3 InvokeAI releases.
+- `color` - default format providing time, date and a message, using text colors to distinguish different log severities
+- `plain` - same as above, but monochrome text only
+- `syslog` - the log level and error message only, allowing the syslog system to attach the time and date
+- `legacy` - a format similar to the one used by the legacy 2.3 InvokeAI releases.
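By analogy with the `syslog` entries shown above, an `http` destination might be configured as follows. This is only a sketch: the URL is a placeholder, and the `key=value,option` syntax is assumed to mirror the syslog examples rather than confirmed by this patch.

```yaml
log_handlers:
  # placeholder URL; the handler accepts the server URL plus a `method` argument
  - http=http://localhost:8080/ingest,method=POST
```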