Mirror of https://github.com/invoke-ai/InvokeAI, synced 2024-08-30 20:32:17 +00:00

documentation fixes

parent 3fa7170566
commit efcb3a9d08
@@ -617,8 +617,6 @@ sections describe what's new for InvokeAI.
 - `dream.py` script renamed `invoke.py`. A `dream.py` script wrapper remains for
   backward compatibility.
 - Completely new WebGUI - launch with `python3 scripts/invoke.py --web`
-- Support for [inpainting](deprecated/INPAINTING.md) and
-  [outpainting](features/OUTPAINTING.md)
 - img2img runs on all k\* samplers
 - Support for
   [negative prompts](features/PROMPTS.md#negative-and-unconditioned-prompts)
@@ -76,10 +76,10 @@ From top to bottom, these are:
    with outpainting, and modify interior portions of the image with
    inpainting, erase portions of a starting image and have the AI fill in
    the erased region from a text prompt.
-4. Workflow Management (not yet implemented) - this panel will allow you to create
+4. Node Editor - this panel allows you to create
    pipelines of common operations and combine them into workflows.
-5. Training (not yet implemented) - this panel will provide an interface to [textual
-   inversion training](TEXTUAL_INVERSION.md) and fine tuning.
+5. Model Manager - this panel allows you to import and configure new
+   models using URLs, local paths, or HuggingFace diffusers repo_ids.

 The inpainting, outpainting and postprocessing tabs are currently in
 development. However, limited versions of their features can already be accessed
@@ -37,7 +37,7 @@ guide also covers optimizing models to load quickly.
 Teach an old model new tricks. Merge 2-3 models together to create a
 new model that combines characteristics of the originals.

-## * [Textual Inversion](TEXTUAL_INVERSION.md)
+## * [Textual Inversion](TRAINING.md)
 Personalize models by adding your own style or subjects.

 # Other Features
@@ -146,7 +146,6 @@ This method is recommended for those familiar with running Docker containers
 - [Installing](installation/050_INSTALLING_MODELS.md)
 - [Model Merging](features/MODEL_MERGING.md)
 - [Style/Subject Concepts and Embeddings](features/CONCEPTS.md)
-- [Textual Inversion](features/TEXTUAL_INVERSION.md)
 - [Not Safe for Work (NSFW) Checker](features/NSFW.md)
 <!-- separator -->
 ### Prompt Engineering
@@ -354,8 +354,8 @@ experimental versions later.

 12. **InvokeAI Options**: You can launch InvokeAI with several different command-line arguments that
     customize its behavior. For example, you can change the location of the
-    image output directory, or select your favorite sampler. See the
-    [Command-Line Interface](../features/CLI.md) for a full list of the options.
+    image output directory or balance memory usage vs performance. See
+    [Configuration](../features/CONFIGURATION.md) for a full list of the options.

     - To set defaults that will take effect every time you launch InvokeAI,
       use a text editor (e.g. Notepad) to edit the file
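As a concrete illustration of the launch-time options described in the hunk above, here is a minimal, hypothetical sketch. The `invoke.sh` launcher and the `--outdir` argument are taken from older examples further down in this commit and may not match the current release exactly; check [Configuration](../features/CONFIGURATION.md) for the options your installed version actually accepts.

```bash
# Hypothetical example only: start InvokeAI with a custom image output
# directory. The launcher name and flag are borrowed from examples elsewhere
# in this diff; verify them against the Configuration documentation.
invoke.sh --outdir=/home/fred/invokeai/outputs
```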
@@ -256,7 +256,7 @@ manager, please follow these steps:

 10. Render away!

-    Browse the [features](../features/CLI.md) section to learn about all the
+    Browse the [features](../features/index.md) section to learn about all the
     things you can do with InvokeAI.


@@ -270,7 +270,7 @@ manager, please follow these steps:

 12. Other scripts

-    The [Textual Inversion](../features/TEXTUAL_INVERSION.md) script can be launched with the command:
+    The [Textual Inversion](../features/TRAINING.md) script can be launched with the command:

     ```bash
     invokeai-ti --gui
@@ -43,24 +43,7 @@ InvokeAI comes with support for a good set of starter models. You'll
 find them listed in the master models file
 `configs/INITIAL_MODELS.yaml` in the InvokeAI root directory. The
 subset that are currently installed are found in
-`configs/models.yaml`. As of v2.3.1, the list of starter models is:
+`configs/models.yaml`.

-|Model Name | HuggingFace Repo ID | Description | URL |
-|---------- | ---------- | ----------- | --- |
-|stable-diffusion-1.5|runwayml/stable-diffusion-v1-5|Stable Diffusion version 1.5 diffusers model (4.27 GB)|https://huggingface.co/runwayml/stable-diffusion-v1-5 |
-|sd-inpainting-1.5|runwayml/stable-diffusion-inpainting|RunwayML SD 1.5 model optimized for inpainting, diffusers version (4.27 GB)|https://huggingface.co/runwayml/stable-diffusion-inpainting |
-|stable-diffusion-2.1|stabilityai/stable-diffusion-2-1|Stable Diffusion version 2.1 diffusers model, trained on 768 pixel images (5.21 GB)|https://huggingface.co/stabilityai/stable-diffusion-2-1 |
-|sd-inpainting-2.0|stabilityai/stable-diffusion-2-inpainting|Stable Diffusion version 2.0 inpainting model (5.21 GB)|https://huggingface.co/stabilityai/stable-diffusion-2-inpainting |
-|analog-diffusion-1.0|wavymulder/Analog-Diffusion|An SD-1.5 model trained on diverse analog photographs (2.13 GB)|https://huggingface.co/wavymulder/Analog-Diffusion |
-|deliberate-1.0|XpucT/Deliberate|Versatile model that produces detailed images up to 768px (4.27 GB)|https://huggingface.co/XpucT/Deliberate |
-|d&d-diffusion-1.0|0xJustin/Dungeons-and-Diffusion|Dungeons & Dragons characters (2.13 GB)|https://huggingface.co/0xJustin/Dungeons-and-Diffusion |
-|dreamlike-photoreal-2.0|dreamlike-art/dreamlike-photoreal-2.0|A photorealistic model trained on 768 pixel images based on SD 1.5 (2.13 GB)|https://huggingface.co/dreamlike-art/dreamlike-photoreal-2.0 |
-|inkpunk-1.0|Envvi/Inkpunk-Diffusion|Stylized illustrations inspired by Gorillaz, FLCL and Shinkawa; prompt with "nvinkpunk" (4.27 GB)|https://huggingface.co/Envvi/Inkpunk-Diffusion |
-|openjourney-4.0|prompthero/openjourney|An SD 1.5 model fine tuned on Midjourney; prompt with "mdjrny-v4 style" (2.13 GB)|https://huggingface.co/prompthero/openjourney |
-|portrait-plus-1.0|wavymulder/portraitplus|An SD-1.5 model trained on close range portraits of people; prompt with "portrait+" (2.13 GB)|https://huggingface.co/wavymulder/portraitplus |
-|seek-art-mega-1.0|coreco/seek.art_MEGA|A general use SD-1.5 "anything" model that supports multiple styles (2.1 GB)|https://huggingface.co/coreco/seek.art_MEGA |
-|trinart-2.0|naclbit/trinart_stable_diffusion_v2|An SD-1.5 model finetuned with ~40K assorted high resolution manga/anime-style images (2.13 GB)|https://huggingface.co/naclbit/trinart_stable_diffusion_v2 |
-|waifu-diffusion-1.4|hakurei/waifu-diffusion|An SD-1.5 model trained on 680k anime/manga-style images (2.13 GB)|https://huggingface.co/hakurei/waifu-diffusion |
-
 Note that these files are covered by an "Ethical AI" license which
 forbids certain uses. When you initially download them, you are asked
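The starter-model table removed above lists each model's HuggingFace repo id. As a hedged sketch, such a repo id can be passed to the `invokeai-model-install` tool introduced later in this commit; this assumes its `--add` flag accepts repo ids in addition to the URLs shown in the later examples.

```bash
# Hedged sketch: register a starter model by HuggingFace repo id, assuming
# `invokeai-model-install --add` accepts repo ids as well as URLs.
invokeai-model-install --add runwayml/stable-diffusion-v1-5
```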
@@ -71,8 +54,7 @@ with the model terms by visiting the URLs in the table above.

 ## Community-Contributed Models

-There are too many to list here and more are being contributed every
-day. [HuggingFace](https://huggingface.co/models?library=diffusers)
+[HuggingFace](https://huggingface.co/models?library=diffusers)
 is a great resource for diffusers models, and is also the home of a
 [fast-growing repository](https://huggingface.co/sd-concepts-library)
 of embedding (".bin") models that add subjects and/or styles to your
@@ -86,310 +68,106 @@ only `.safetensors` and `.ckpt` models, but they can be easily loaded
 into InvokeAI and/or converted into optimized `diffusers` models. Be
 aware that CIVITAI hosts many models that generate NSFW content.

-!!! note
-
-    InvokeAI 2.3.x does not support directly importing and
-    running Stable Diffusion version 2 checkpoint models. You may instead
-    convert them into `diffusers` models using the conversion methods
-    described below.
-
 ## Installation

-There are multiple ways to install and manage models:
+There are two ways to install and manage models:

-1. The `invokeai-configure` script which will download and install them for you.
+1. The `invokeai-model-install` script which will download and install
+   them for you. In addition to supporting main models, you can install
+   ControlNet, LoRA and Textual Inversion models.

-2. The command-line tool (CLI) has commands that allows you to import, configure and modify
-   models files.
+2. The web interface (WebUI) has a GUI for importing and managing

-3. The web interface (WebUI) has a GUI for importing and managing
    models.

-### Installation via `invokeai-configure`
+3. By placing models (or symbolic links to models) inside one of the
+   InvokeAI root directory's `autoimport` folder.

-From the `invoke` launcher, choose option (6) "re-run the configure
-script to download new models." This will launch the same script that
-prompted you to select models at install time. You can use this to add
-models that you skipped the first time around. It is all right to
-specify a model that was previously downloaded; the script will just
-confirm that the files are complete.
+### Installation via `invokeai-model-install`

-### Installation via the CLI
+From the `invoke` launcher, choose option [5] "Download and install
+models." This will launch the same script that prompted you to select
+models at install time. You can use this to add models that you
+skipped the first time around. It is all right to specify a model that
+was previously downloaded; the script will just confirm that the files
+are complete.

-You can install a new model, including any of the community-supported ones, via
-the command-line client's `!import_model` command.
+The installer has different panels for installing main models from
+HuggingFace, models from Civitai and other arbitrary web sites,
+ControlNet models, LoRA/LyCORIS models, and Textual Inversion
+embeddings. Each section has a text box in which you can enter a new
+model to install. You can refer to a model using its:

-#### Installing individual `.ckpt` and `.safetensors` models
+1. Local path to the .ckpt, .safetensors or diffusers folder on your local machine
+2. A directory on your machine that contains multiple models
+3. A URL that points to a downloadable model
+4. A HuggingFace repo id

-If the model is already downloaded to your local disk, use
-`!import_model /path/to/file.ckpt` to load it. For example:
+Previously-installed models are shown with checkboxes. Uncheck a box
+to unregister the model from InvokeAI. Models that are physically
+installed inside the InvokeAI root directory will be deleted and
+purged (after a confirmation warning). Models that are located outside
+the InvokeAI root directory will be unregistered but not deleted.

-```bash
-invoke> !import_model C:/Users/fred/Downloads/martians.safetensors
+Note: The installer script uses a console-based text interface that requires
+significant amounts of horizontal and vertical space. If the display
+looks messed up, just enlarge the terminal window and/or relaunch the
+script.
+
+If you wish you can script model addition and deletion, as well as
+listing installed models. Start the "developer's console" and give the
+command `invokeai-model-install --help`. This will give you a series
+of command-line parameters that will let you control model
+installation. Examples:
+
+```
+# (list all controlnet models)
+invokeai-model-install --list controlnet
+
+# (install the model at the indicated URL)
+invokeai-model-install --add http://civitai.com/2860
+
+# (delete the named model)
+invokeai-model-install --delete sd-1/main/analog-diffusion
 ```

-!!! tip "Forward Slashes"
-    On Windows systems, use forward slashes rather than backslashes
-    in your file paths.
-    If you do use backslashes,
-    you must double them like this:
-    `C:\\Users\\fred\\Downloads\\martians.safetensors`
+### Installation via the Web GUI

-Alternatively you can directly import the file using its URL:
+To install a new model using the Web GUI, do the following:

-```bash
-invoke> !import_model https://example.org/sd_models/martians.safetensors
-```
+1. Open the InvokeAI Model Manager (cube at the bottom of the
+   left-hand panel) and navigate to *Import Models*

-For this to work, the URL must not be password-protected. Otherwise
-you will receive a 404 error.
+2. In the field labeled *Location* type in the path to the model you
+   wish to install. You may use a URL, HuggingFace repo id, or a path on
+   your local disk.

-When you import a legacy model, the CLI will first ask you what type
-of model this is. You can indicate whether it is a model based on
-Stable Diffusion 1.x (1.4 or 1.5), one based on Stable Diffusion 2.x,
-or a 1.x inpainting model. Be careful to indicate the correct model
-type, or it will not load correctly. You can correct the model type
-after the fact using the `!edit_model` command.
+3. Alternatively, the *Scan for Models* button allows you to paste in
+   the path to a folder somewhere on your machine. It will be scanned for
+   importable models and prompt you to add the ones of your choice.

-The system will then ask you a few other questions about the model,
-including what size image it was trained on (usually 512x512), what
-name and description you wish to use for it, and whether you would
-like to install a custom VAE (variable autoencoder) file for the
-model. For recent models, the answer to the VAE question is usually
-"no," but it won't hurt to answer "yes".
+4. Press *Add Model* and wait for confirmation that the model
+   was added.

-After importing, the model will load. If this is successful, you will
-be asked if you want to keep the model loaded in memory to start
-generating immediately. You'll also be asked if you wish to make this
-the default model on startup. You can change this later using
-`!edit_model`.
+To delete a model, select *Model Manager* to list all the currently
+installed models. Press the trash can icons to delete any models you
+wish to get rid of. Models whose weights are located inside the
+InvokeAI `models` directory will be purged from disk, while those
+located outside will be unregistered from InvokeAI, but not deleted.

-#### Importing a batch of `.ckpt` and `.safetensors` models from a directory
+You can see where model weights are located by clicking on the model name.
+This will bring up an editable info panel showing the model's characteristics,
+including the `Model Location` of its files.

-You may also point `!import_model` to a directory containing a set of
-`.ckpt` or `.safetensors` files. They will be imported _en masse_.
+### Installation via the `autoimport` function

-!!! example
+In the InvokeAI root directory you will find a series of folders under
+`autoimport`, one each for main models, controlnets, embeddings and
+Loras. Any models that you add to these directories will be scanned
+at startup time and registered automatically.

-    ```console
-    invoke> !import_model C:/Users/fred/Downloads/civitai_models/
-    ```
+You may create symbolic links from these folders to models located
+elsewhere on disk and they will be autoimported. You can also create
+subfolders and organize them as you wish.

-You will be given the option to import all models found in the
-directory, or select which ones to import. If there are subfolders
-within the directory, they will be searched for models to import.
+The location of the autoimport directories is controlled by settings
+in `invokeai.yaml`. See [Configuration](../features/CONFIGURATION.md).
-
-#### Installing `diffusers` models
-
-You can install a `diffusers` model from the HuggingFace site using
-`!import_model` and the HuggingFace repo_id for the model:
-
-```bash
-invoke> !import_model andite/anything-v4.0
-```
-
-Alternatively, you can download the model to disk and import it from
-there. The model may be distributed as a ZIP file, or as a Git
-repository:
-
-```bash
-invoke> !import_model C:/Users/fred/Downloads/andite--anything-v4.0
-```
-
-!!! tip "The CLI supports file path autocompletion"
-    Type a bit of the path name and hit ++tab++ in order to get a choice of
-    possible completions.
-
-!!! tip "On Windows, you can drag model files onto the command-line"
-    Once you have typed in `!import_model `, you can drag the
-    model file or directory onto the command-line to insert the model path. This way, you don't need to
-    type it or copy/paste. However, you will need to reverse or
-    double backslashes as noted above.
-
-Before installing, the CLI will ask you for a short name and
-description for the model, whether to make this the default model that
-is loaded at InvokeAI startup time, and whether to replace its
-VAE. Generally the answer to the latter question is "no".
-
-### Converting legacy models into `diffusers`
-
-The CLI `!convert_model` will convert a `.safetensors` or `.ckpt`
-models file into `diffusers` and install it.This will enable the model
-to load and run faster without loss of image quality.
-
-The usage is identical to `!import_model`. You may point the command
-to either a downloaded model file on disk, or to a (non-password
-protected) URL:
-
-```bash
-invoke> !convert_model C:/Users/fred/Downloads/martians.safetensors
-```
-
-After a successful conversion, the CLI will offer you the option of
-deleting the original `.ckpt` or `.safetensors` file.
-
-### Optimizing a previously-installed model
-
-Lastly, if you have previously installed a `.ckpt` or `.safetensors`
-file and wish to convert it into a `diffusers` model, you can do this
-without re-downloading and converting the original file using the
-`!optimize_model` command. Simply pass the short name of an existing
-installed model:
-
-```bash
-invoke> !optimize_model martians-v1.0
-```
-
-The model will be converted into `diffusers` format and replace the
-previously installed version. You will again be offered the
-opportunity to delete the original `.ckpt` or `.safetensors` file.
-
-### Related CLI Commands
-
-There are a whole series of additional model management commands in
-the CLI that you can read about in [Command-Line
-Interface](../features/CLI.md). These include:
-
-* `!models` - List all installed models
-* `!switch <model name>` - Switch to the indicated model
-* `!edit_model <model name>` - Edit the indicated model to change its name, description or other properties
-* `!del_model <model name>` - Delete the indicated model
-
-### Manually editing `configs/models.yaml`
-
-
-If you are comfortable with a text editor then you may simply edit `models.yaml`
-directly.
-
-You will need to download the desired `.ckpt/.safetensors` file and
-place it somewhere on your machine's filesystem. Alternatively, for a
-`diffusers` model, record the repo_id or download the whole model
-directory. Then using a **text** editor (e.g. the Windows Notepad
-application), open the file `configs/models.yaml`, and add a new
-stanza that follows this model:
-
-#### A legacy model
-
-A legacy `.ckpt` or `.safetensors` entry will look like this:
-
-```yaml
-arabian-nights-1.0:
-  description: A great fine-tune in Arabian Nights style
-  weights: ./path/to/arabian-nights-1.0.ckpt
-  config: ./configs/stable-diffusion/v1-inference.yaml
-  format: ckpt
-  width: 512
-  height: 512
-  default: false
-```
-
-Note that `format` is `ckpt` for both `.ckpt` and `.safetensors` files.
-
-#### A diffusers model
-
-A stanza for a `diffusers` model will look like this for a HuggingFace
-model with a repository ID:
-
-```yaml
-arabian-nights-1.1:
-  description: An even better fine-tune of the Arabian Nights
-  repo_id: captahab/arabian-nights-1.1
-  format: diffusers
-  default: true
-```
-
-And for a downloaded directory:
-
-```yaml
-arabian-nights-1.1:
-  description: An even better fine-tune of the Arabian Nights
-  path: /path/to/captahab-arabian-nights-1.1
-  format: diffusers
-  default: true
-```
-
-There is additional syntax for indicating an external VAE to use with
-this model. See `INITIAL_MODELS.yaml` and `models.yaml` for examples.
-
-After you save the modified `models.yaml` file relaunch
-`invokeai`. The new model will now be available for your use.
-
-### Installation via the WebUI
-
-To access the WebUI Model Manager, click on the button that looks like
-a cube in the upper right side of the browser screen. This will bring
-up a dialogue that lists the models you have already installed, and
-allows you to load, delete or edit them:
-
-<figure markdown>
-
-![model-manager](../assets/installing-models/webui-models-1.png)
-
-</figure>
-
-To add a new model, click on **+ Add New** and select to either a
-checkpoint/safetensors model, or a diffusers model:
-
-<figure markdown>
-
-![model-manager-add-new](../assets/installing-models/webui-models-2.png)
-
-</figure>
-
-In this example, we chose **Add Diffusers**. As shown in the figure
-below, a new dialogue prompts you to enter the name to use for the
-model, its description, and either the location of the `diffusers`
-model on disk, or its Repo ID on the HuggingFace web site. If you
-choose to enter a path to disk, the system will autocomplete for you
-as you type:
-
-<figure markdown>
-
-![model-manager-add-diffusers](../assets/installing-models/webui-models-3.png)
-
-</figure>
-
-Press **Add Model** at the bottom of the dialogue (scrolled out of
-site in the figure), and the model will be downloaded, imported, and
-registered in `models.yaml`.
-
-The **Add Checkpoint/Safetensor Model** option is similar, except that
-in this case you can choose to scan an entire folder for
-checkpoint/safetensors files to import. Simply type in the path of the
-directory and press the "Search" icon. This will display the
-`.ckpt` and `.safetensors` found inside the directory and its
-subfolders, and allow you to choose which ones to import:
-
-<figure markdown>
-
-![model-manager-add-checkpoint](../assets/installing-models/webui-models-4.png)
-
-</figure>
-
-## Model Management Startup Options
-
-The `invoke` launcher and the `invokeai` script accept a series of
-command-line arguments that modify InvokeAI's behavior when loading
-models. These can be provided on the command line, or added to the
-InvokeAI root directory's `invokeai.init` initialization file.
-
-The arguments are:
-
-* `--model <model name>` -- Start up with the indicated model loaded
-* `--ckpt_convert` -- When a checkpoint/safetensors model is loaded, convert it into a `diffusers` model in memory. This does not permanently save the converted model to disk.
-* `--autoconvert <path/to/directory>` -- Scan the indicated directory path for new checkpoint/safetensors files, convert them into `diffusers` models, and import them into InvokeAI.
-
-Here is an example of providing an argument on the command line using
-the `invoke.sh` launch script:
-
-```bash
-invoke.sh --autoconvert /home/fred/stable-diffusion-checkpoints
-```
-
-And here is what the same argument looks like in `invokeai.init`:
-
-```bash
---outdir="/home/fred/invokeai/outputs
---no-nsfw_checker
---autoconvert /home/fred/stable-diffusion-checkpoints
-```
File diff suppressed because one or more lines are too long
@@ -1,4 +1,4 @@
-import{v as g,gr as Ke,s as x,L as Ba,gs as Ea,ae as ca,ah as c,gt as b,gu as Da,gv as t,gw as Ra,gx as h,gy as ba,gz as ja,gA as Ha,a_ as Wa,gB as La,Z as Va,gC as qa,gD as Oa,gE as Na,gF as Ga,aW as Ua}from"./index-77a6baac.js";import{e as n,o as Sr,h as Ya,i as Ka,j as Xa,k as Za,n as Ja,p as Qa,l as et,s as rt,q as at,t as tt,v as ot,w as nt,M as it}from"./MantineProvider-5ea5d91e.js";var va=String.raw,ua=va`
+import{v as g,gr as Ke,s as x,L as Ba,gs as Ea,ae as ca,ah as c,gt as b,gu as Da,gv as t,gw as Ra,gx as h,gy as ba,gz as ja,gA as Ha,a_ as Wa,gB as La,Z as Va,gC as qa,gD as Oa,gE as Na,gF as Ga,aW as Ua}from"./index-c1428e4e.js";import{e as n,o as Sr,h as Ya,i as Ka,j as Xa,k as Za,n as Ja,p as Qa,l as et,s as rt,q as at,t as tt,v as ot,w as nt,M as it}from"./MantineProvider-f3412ed4.js";var va=String.raw,ua=va`
 :root,
 :host {
   --chakra-vh: 100vh;
File diff suppressed because one or more lines are too long
invokeai/frontend/web/dist/index.html (vendored, 2 lines changed)
@@ -12,7 +12,7 @@
       margin: 0;
     }
   </style>
-  <script type="module" crossorigin src="./assets/index-77a6baac.js"></script>
+  <script type="module" crossorigin src="./assets/index-c1428e4e.js"></script>
 </head>

 <body dir="ltr">
@@ -1219,7 +1219,7 @@ export type components = {
      * @description The nodes in this graph
      */
     nodes?: {
[key: string]: (components["schemas"]["LoadImageInvocation"] | components["schemas"]["ShowImageInvocation"] | components["schemas"]["ImageCropInvocation"] | components["schemas"]["ImagePasteInvocation"] | components["schemas"]["MaskFromAlphaInvocation"] | components["schemas"]["ImageMultiplyInvocation"] | components["schemas"]["ImageChannelInvocation"] | components["schemas"]["ImageConvertInvocation"] | components["schemas"]["ImageBlurInvocation"] | components["schemas"]["ImageResizeInvocation"] | components["schemas"]["ImageScaleInvocation"] | components["schemas"]["ImageLerpInvocation"] | components["schemas"]["ImageInverseLerpInvocation"] | components["schemas"]["ControlNetInvocation"] | components["schemas"]["ImageProcessorInvocation"] | components["schemas"]["MainModelLoaderInvocation"] | components["schemas"]["LoraLoaderInvocation"] | components["schemas"]["VaeLoaderInvocation"] | components["schemas"]["MetadataAccumulatorInvocation"] | components["schemas"]["RangeInvocation"] | components["schemas"]["RangeOfSizeInvocation"] | components["schemas"]["RandomRangeInvocation"] | components["schemas"]["ImageCollectionInvocation"] | components["schemas"]["CompelInvocation"] | components["schemas"]["SDXLCompelPromptInvocation"] | components["schemas"]["SDXLRefinerCompelPromptInvocation"] | components["schemas"]["SDXLRawPromptInvocation"] | components["schemas"]["SDXLRefinerRawPromptInvocation"] | components["schemas"]["ClipSkipInvocation"] | components["schemas"]["CvInpaintInvocation"] | components["schemas"]["TextToLatentsInvocation"] | components["schemas"]["LatentsToImageInvocation"] | components["schemas"]["ResizeLatentsInvocation"] | components["schemas"]["ScaleLatentsInvocation"] | components["schemas"]["ImageToLatentsInvocation"] | components["schemas"]["InpaintInvocation"] | components["schemas"]["InfillColorInvocation"] | components["schemas"]["InfillTileInvocation"] | components["schemas"]["InfillPatchMatchInvocation"] | components["schemas"]["AddInvocation"] | components["schemas"]["SubtractInvocation"] | components["schemas"]["MultiplyInvocation"] | components["schemas"]["DivideInvocation"] | components["schemas"]["RandomIntInvocation"] | components["schemas"]["NoiseInvocation"] | components["schemas"]["ParamIntInvocation"] | components["schemas"]["ParamFloatInvocation"] | components["schemas"]["FloatLinearRangeInvocation"] | components["schemas"]["StepParamEasingInvocation"] | components["schemas"]["DynamicPromptInvocation"] | components["schemas"]["PromptsFromFileInvocation"] | components["schemas"]["SDXLModelLoaderInvocation"] | components["schemas"]["SDXLRefinerModelLoaderInvocation"] | components["schemas"]["SDXLTextToLatentsInvocation"] | components["schemas"]["SDXLLatentsToLatentsInvocation"] | components["schemas"]["RealESRGANInvocation"] | components["schemas"]["GraphInvocation"] | components["schemas"]["IterateInvocation"] | components["schemas"]["CollectInvocation"] | components["schemas"]["CannyImageProcessorInvocation"] | components["schemas"]["HedImageProcessorInvocation"] | components["schemas"]["LineartImageProcessorInvocation"] | components["schemas"]["LineartAnimeImageProcessorInvocation"] | components["schemas"]["OpenposeImageProcessorInvocation"] | components["schemas"]["MidasDepthImageProcessorInvocation"] | components["schemas"]["NormalbaeImageProcessorInvocation"] | components["schemas"]["MlsdImageProcessorInvocation"] | components["schemas"]["PidiImageProcessorInvocation"] | components["schemas"]["ContentShuffleImageProcessorInvocation"] | 
components["schemas"]["ZoeDepthImageProcessorInvocation"] | components["schemas"]["MediapipeFaceProcessorInvocation"] | components["schemas"]["LeresImageProcessorInvocation"] | components["schemas"]["TileResamplerProcessorInvocation"] | components["schemas"]["SegmentAnythingProcessorInvocation"] | components["schemas"]["LatentsToLatentsInvocation"]) | undefined;
[key: string]: (components["schemas"]["LoadImageInvocation"] | components["schemas"]["ShowImageInvocation"] | components["schemas"]["ImageCropInvocation"] | components["schemas"]["ImagePasteInvocation"] | components["schemas"]["MaskFromAlphaInvocation"] | components["schemas"]["ImageMultiplyInvocation"] | components["schemas"]["ImageChannelInvocation"] | components["schemas"]["ImageConvertInvocation"] | components["schemas"]["ImageBlurInvocation"] | components["schemas"]["ImageResizeInvocation"] | components["schemas"]["ImageScaleInvocation"] | components["schemas"]["ImageLerpInvocation"] | components["schemas"]["ImageInverseLerpInvocation"] | components["schemas"]["ControlNetInvocation"] | components["schemas"]["ImageProcessorInvocation"] | components["schemas"]["MainModelLoaderInvocation"] | components["schemas"]["LoraLoaderInvocation"] | components["schemas"]["VaeLoaderInvocation"] | components["schemas"]["MetadataAccumulatorInvocation"] | components["schemas"]["CvInpaintInvocation"] | components["schemas"]["CompelInvocation"] | components["schemas"]["SDXLCompelPromptInvocation"] | components["schemas"]["SDXLRefinerCompelPromptInvocation"] | components["schemas"]["SDXLRawPromptInvocation"] | components["schemas"]["SDXLRefinerRawPromptInvocation"] | components["schemas"]["ClipSkipInvocation"] | components["schemas"]["TextToLatentsInvocation"] | components["schemas"]["LatentsToImageInvocation"] | components["schemas"]["ResizeLatentsInvocation"] | components["schemas"]["ScaleLatentsInvocation"] | components["schemas"]["ImageToLatentsInvocation"] | components["schemas"]["InpaintInvocation"] | components["schemas"]["AddInvocation"] | components["schemas"]["SubtractInvocation"] | components["schemas"]["MultiplyInvocation"] | components["schemas"]["DivideInvocation"] | components["schemas"]["RandomIntInvocation"] | components["schemas"]["ParamIntInvocation"] | components["schemas"]["ParamFloatInvocation"] | components["schemas"]["RealESRGANInvocation"] | components["schemas"]["RangeInvocation"] | components["schemas"]["RangeOfSizeInvocation"] | components["schemas"]["RandomRangeInvocation"] | components["schemas"]["ImageCollectionInvocation"] | components["schemas"]["DynamicPromptInvocation"] | components["schemas"]["PromptsFromFileInvocation"] | components["schemas"]["InfillColorInvocation"] | components["schemas"]["InfillTileInvocation"] | components["schemas"]["InfillPatchMatchInvocation"] | components["schemas"]["NoiseInvocation"] | components["schemas"]["SDXLModelLoaderInvocation"] | components["schemas"]["SDXLRefinerModelLoaderInvocation"] | components["schemas"]["SDXLTextToLatentsInvocation"] | components["schemas"]["SDXLLatentsToLatentsInvocation"] | components["schemas"]["FloatLinearRangeInvocation"] | components["schemas"]["StepParamEasingInvocation"] | components["schemas"]["GraphInvocation"] | components["schemas"]["IterateInvocation"] | components["schemas"]["CollectInvocation"] | components["schemas"]["CannyImageProcessorInvocation"] | components["schemas"]["HedImageProcessorInvocation"] | components["schemas"]["LineartImageProcessorInvocation"] | components["schemas"]["LineartAnimeImageProcessorInvocation"] | components["schemas"]["OpenposeImageProcessorInvocation"] | components["schemas"]["MidasDepthImageProcessorInvocation"] | components["schemas"]["NormalbaeImageProcessorInvocation"] | components["schemas"]["MlsdImageProcessorInvocation"] | components["schemas"]["PidiImageProcessorInvocation"] | components["schemas"]["ContentShuffleImageProcessorInvocation"] | 
components["schemas"]["ZoeDepthImageProcessorInvocation"] | components["schemas"]["MediapipeFaceProcessorInvocation"] | components["schemas"]["LeresImageProcessorInvocation"] | components["schemas"]["TileResamplerProcessorInvocation"] | components["schemas"]["SegmentAnythingProcessorInvocation"] | components["schemas"]["LatentsToLatentsInvocation"]) | undefined;
     };
     /**
      * Edges
@@ -1262,7 +1262,7 @@ export type components = {
      * @description The results of node executions
      */
     results: {
[key: string]: (components["schemas"]["ImageOutput"] | components["schemas"]["MaskOutput"] | components["schemas"]["ControlOutput"] | components["schemas"]["ModelLoaderOutput"] | components["schemas"]["LoraLoaderOutput"] | components["schemas"]["VaeLoaderOutput"] | components["schemas"]["MetadataAccumulatorOutput"] | components["schemas"]["IntCollectionOutput"] | components["schemas"]["FloatCollectionOutput"] | components["schemas"]["ImageCollectionOutput"] | components["schemas"]["CompelOutput"] | components["schemas"]["ClipSkipInvocationOutput"] | components["schemas"]["LatentsOutput"] | components["schemas"]["IntOutput"] | components["schemas"]["FloatOutput"] | components["schemas"]["NoiseOutput"] | components["schemas"]["PromptOutput"] | components["schemas"]["PromptCollectionOutput"] | components["schemas"]["SDXLModelLoaderOutput"] | components["schemas"]["SDXLRefinerModelLoaderOutput"] | components["schemas"]["GraphInvocationOutput"] | components["schemas"]["IterateInvocationOutput"] | components["schemas"]["CollectInvocationOutput"]) | undefined;
[key: string]: (components["schemas"]["ImageOutput"] | components["schemas"]["MaskOutput"] | components["schemas"]["ControlOutput"] | components["schemas"]["ModelLoaderOutput"] | components["schemas"]["LoraLoaderOutput"] | components["schemas"]["VaeLoaderOutput"] | components["schemas"]["MetadataAccumulatorOutput"] | components["schemas"]["CompelOutput"] | components["schemas"]["ClipSkipInvocationOutput"] | components["schemas"]["LatentsOutput"] | components["schemas"]["IntOutput"] | components["schemas"]["FloatOutput"] | components["schemas"]["IntCollectionOutput"] | components["schemas"]["FloatCollectionOutput"] | components["schemas"]["ImageCollectionOutput"] | components["schemas"]["PromptOutput"] | components["schemas"]["PromptCollectionOutput"] | components["schemas"]["NoiseOutput"] | components["schemas"]["SDXLModelLoaderOutput"] | components["schemas"]["SDXLRefinerModelLoaderOutput"] | components["schemas"]["GraphInvocationOutput"] | components["schemas"]["IterateInvocationOutput"] | components["schemas"]["CollectInvocationOutput"]) | undefined;
     };
     /**
      * Errors
@@ -4300,7 +4300,7 @@ export type components = {
     };
     /**
      * SDXLRawPromptInvocation
-     * @description Parse prompt using compel package to conditioning.
+     * @description Pass unmodified prompt to conditioning without compel processing.
      */
     SDXLRawPromptInvocation: {
      /**
@@ -5257,18 +5257,18 @@ export type components = {
      * @enum {string}
      */
     StableDiffusion2ModelFormat: "checkpoint" | "diffusers";
-    /**
-     * StableDiffusionXLModelFormat
-     * @description An enumeration.
-     * @enum {string}
-     */
-    StableDiffusionXLModelFormat: "checkpoint" | "diffusers";
     /**
      * StableDiffusion1ModelFormat
      * @description An enumeration.
      * @enum {string}
      */
     StableDiffusion1ModelFormat: "checkpoint" | "diffusers";
+    /**
+     * StableDiffusionXLModelFormat
+     * @description An enumeration.
+     * @enum {string}
+     */
+    StableDiffusionXLModelFormat: "checkpoint" | "diffusers";
   };
   responses: never;
   parameters: never;
@@ -5379,7 +5379,7 @@ export type operations = {
     };
     requestBody: {
       content: {
"application/json": components["schemas"]["LoadImageInvocation"] | components["schemas"]["ShowImageInvocation"] | components["schemas"]["ImageCropInvocation"] | components["schemas"]["ImagePasteInvocation"] | components["schemas"]["MaskFromAlphaInvocation"] | components["schemas"]["ImageMultiplyInvocation"] | components["schemas"]["ImageChannelInvocation"] | components["schemas"]["ImageConvertInvocation"] | components["schemas"]["ImageBlurInvocation"] | components["schemas"]["ImageResizeInvocation"] | components["schemas"]["ImageScaleInvocation"] | components["schemas"]["ImageLerpInvocation"] | components["schemas"]["ImageInverseLerpInvocation"] | components["schemas"]["ControlNetInvocation"] | components["schemas"]["ImageProcessorInvocation"] | components["schemas"]["MainModelLoaderInvocation"] | components["schemas"]["LoraLoaderInvocation"] | components["schemas"]["VaeLoaderInvocation"] | components["schemas"]["MetadataAccumulatorInvocation"] | components["schemas"]["RangeInvocation"] | components["schemas"]["RangeOfSizeInvocation"] | components["schemas"]["RandomRangeInvocation"] | components["schemas"]["ImageCollectionInvocation"] | components["schemas"]["CompelInvocation"] | components["schemas"]["SDXLCompelPromptInvocation"] | components["schemas"]["SDXLRefinerCompelPromptInvocation"] | components["schemas"]["SDXLRawPromptInvocation"] | components["schemas"]["SDXLRefinerRawPromptInvocation"] | components["schemas"]["ClipSkipInvocation"] | components["schemas"]["CvInpaintInvocation"] | components["schemas"]["TextToLatentsInvocation"] | components["schemas"]["LatentsToImageInvocation"] | components["schemas"]["ResizeLatentsInvocation"] | components["schemas"]["ScaleLatentsInvocation"] | components["schemas"]["ImageToLatentsInvocation"] | components["schemas"]["InpaintInvocation"] | components["schemas"]["InfillColorInvocation"] | components["schemas"]["InfillTileInvocation"] | components["schemas"]["InfillPatchMatchInvocation"] | components["schemas"]["AddInvocation"] | components["schemas"]["SubtractInvocation"] | components["schemas"]["MultiplyInvocation"] | components["schemas"]["DivideInvocation"] | components["schemas"]["RandomIntInvocation"] | components["schemas"]["NoiseInvocation"] | components["schemas"]["ParamIntInvocation"] | components["schemas"]["ParamFloatInvocation"] | components["schemas"]["FloatLinearRangeInvocation"] | components["schemas"]["StepParamEasingInvocation"] | components["schemas"]["DynamicPromptInvocation"] | components["schemas"]["PromptsFromFileInvocation"] | components["schemas"]["SDXLModelLoaderInvocation"] | components["schemas"]["SDXLRefinerModelLoaderInvocation"] | components["schemas"]["SDXLTextToLatentsInvocation"] | components["schemas"]["SDXLLatentsToLatentsInvocation"] | components["schemas"]["RealESRGANInvocation"] | components["schemas"]["GraphInvocation"] | components["schemas"]["IterateInvocation"] | components["schemas"]["CollectInvocation"] | components["schemas"]["CannyImageProcessorInvocation"] | components["schemas"]["HedImageProcessorInvocation"] | components["schemas"]["LineartImageProcessorInvocation"] | components["schemas"]["LineartAnimeImageProcessorInvocation"] | components["schemas"]["OpenposeImageProcessorInvocation"] | components["schemas"]["MidasDepthImageProcessorInvocation"] | components["schemas"]["NormalbaeImageProcessorInvocation"] | components["schemas"]["MlsdImageProcessorInvocation"] | components["schemas"]["PidiImageProcessorInvocation"] | components["schemas"]["ContentShuffleImageProcessorInvocation"] | 
components["schemas"]["ZoeDepthImageProcessorInvocation"] | components["schemas"]["MediapipeFaceProcessorInvocation"] | components["schemas"]["LeresImageProcessorInvocation"] | components["schemas"]["TileResamplerProcessorInvocation"] | components["schemas"]["SegmentAnythingProcessorInvocation"] | components["schemas"]["LatentsToLatentsInvocation"];
"application/json": components["schemas"]["LoadImageInvocation"] | components["schemas"]["ShowImageInvocation"] | components["schemas"]["ImageCropInvocation"] | components["schemas"]["ImagePasteInvocation"] | components["schemas"]["MaskFromAlphaInvocation"] | components["schemas"]["ImageMultiplyInvocation"] | components["schemas"]["ImageChannelInvocation"] | components["schemas"]["ImageConvertInvocation"] | components["schemas"]["ImageBlurInvocation"] | components["schemas"]["ImageResizeInvocation"] | components["schemas"]["ImageScaleInvocation"] | components["schemas"]["ImageLerpInvocation"] | components["schemas"]["ImageInverseLerpInvocation"] | components["schemas"]["ControlNetInvocation"] | components["schemas"]["ImageProcessorInvocation"] | components["schemas"]["MainModelLoaderInvocation"] | components["schemas"]["LoraLoaderInvocation"] | components["schemas"]["VaeLoaderInvocation"] | components["schemas"]["MetadataAccumulatorInvocation"] | components["schemas"]["CvInpaintInvocation"] | components["schemas"]["CompelInvocation"] | components["schemas"]["SDXLCompelPromptInvocation"] | components["schemas"]["SDXLRefinerCompelPromptInvocation"] | components["schemas"]["SDXLRawPromptInvocation"] | components["schemas"]["SDXLRefinerRawPromptInvocation"] | components["schemas"]["ClipSkipInvocation"] | components["schemas"]["TextToLatentsInvocation"] | components["schemas"]["LatentsToImageInvocation"] | components["schemas"]["ResizeLatentsInvocation"] | components["schemas"]["ScaleLatentsInvocation"] | components["schemas"]["ImageToLatentsInvocation"] | components["schemas"]["InpaintInvocation"] | components["schemas"]["AddInvocation"] | components["schemas"]["SubtractInvocation"] | components["schemas"]["MultiplyInvocation"] | components["schemas"]["DivideInvocation"] | components["schemas"]["RandomIntInvocation"] | components["schemas"]["ParamIntInvocation"] | components["schemas"]["ParamFloatInvocation"] | components["schemas"]["RealESRGANInvocation"] | components["schemas"]["RangeInvocation"] | components["schemas"]["RangeOfSizeInvocation"] | components["schemas"]["RandomRangeInvocation"] | components["schemas"]["ImageCollectionInvocation"] | components["schemas"]["DynamicPromptInvocation"] | components["schemas"]["PromptsFromFileInvocation"] | components["schemas"]["InfillColorInvocation"] | components["schemas"]["InfillTileInvocation"] | components["schemas"]["InfillPatchMatchInvocation"] | components["schemas"]["NoiseInvocation"] | components["schemas"]["SDXLModelLoaderInvocation"] | components["schemas"]["SDXLRefinerModelLoaderInvocation"] | components["schemas"]["SDXLTextToLatentsInvocation"] | components["schemas"]["SDXLLatentsToLatentsInvocation"] | components["schemas"]["FloatLinearRangeInvocation"] | components["schemas"]["StepParamEasingInvocation"] | components["schemas"]["GraphInvocation"] | components["schemas"]["IterateInvocation"] | components["schemas"]["CollectInvocation"] | components["schemas"]["CannyImageProcessorInvocation"] | components["schemas"]["HedImageProcessorInvocation"] | components["schemas"]["LineartImageProcessorInvocation"] | components["schemas"]["LineartAnimeImageProcessorInvocation"] | components["schemas"]["OpenposeImageProcessorInvocation"] | components["schemas"]["MidasDepthImageProcessorInvocation"] | components["schemas"]["NormalbaeImageProcessorInvocation"] | components["schemas"]["MlsdImageProcessorInvocation"] | components["schemas"]["PidiImageProcessorInvocation"] | components["schemas"]["ContentShuffleImageProcessorInvocation"] | 
components["schemas"]["ZoeDepthImageProcessorInvocation"] | components["schemas"]["MediapipeFaceProcessorInvocation"] | components["schemas"]["LeresImageProcessorInvocation"] | components["schemas"]["TileResamplerProcessorInvocation"] | components["schemas"]["SegmentAnythingProcessorInvocation"] | components["schemas"]["LatentsToLatentsInvocation"];
       };
     };
     responses: {
@@ -5416,7 +5416,7 @@ export type operations = {
     };
     requestBody: {
       content: {
"application/json": components["schemas"]["LoadImageInvocation"] | components["schemas"]["ShowImageInvocation"] | components["schemas"]["ImageCropInvocation"] | components["schemas"]["ImagePasteInvocation"] | components["schemas"]["MaskFromAlphaInvocation"] | components["schemas"]["ImageMultiplyInvocation"] | components["schemas"]["ImageChannelInvocation"] | components["schemas"]["ImageConvertInvocation"] | components["schemas"]["ImageBlurInvocation"] | components["schemas"]["ImageResizeInvocation"] | components["schemas"]["ImageScaleInvocation"] | components["schemas"]["ImageLerpInvocation"] | components["schemas"]["ImageInverseLerpInvocation"] | components["schemas"]["ControlNetInvocation"] | components["schemas"]["ImageProcessorInvocation"] | components["schemas"]["MainModelLoaderInvocation"] | components["schemas"]["LoraLoaderInvocation"] | components["schemas"]["VaeLoaderInvocation"] | components["schemas"]["MetadataAccumulatorInvocation"] | components["schemas"]["RangeInvocation"] | components["schemas"]["RangeOfSizeInvocation"] | components["schemas"]["RandomRangeInvocation"] | components["schemas"]["ImageCollectionInvocation"] | components["schemas"]["CompelInvocation"] | components["schemas"]["SDXLCompelPromptInvocation"] | components["schemas"]["SDXLRefinerCompelPromptInvocation"] | components["schemas"]["SDXLRawPromptInvocation"] | components["schemas"]["SDXLRefinerRawPromptInvocation"] | components["schemas"]["ClipSkipInvocation"] | components["schemas"]["CvInpaintInvocation"] | components["schemas"]["TextToLatentsInvocation"] | components["schemas"]["LatentsToImageInvocation"] | components["schemas"]["ResizeLatentsInvocation"] | components["schemas"]["ScaleLatentsInvocation"] | components["schemas"]["ImageToLatentsInvocation"] | components["schemas"]["InpaintInvocation"] | components["schemas"]["InfillColorInvocation"] | components["schemas"]["InfillTileInvocation"] | components["schemas"]["InfillPatchMatchInvocation"] | components["schemas"]["AddInvocation"] | components["schemas"]["SubtractInvocation"] | components["schemas"]["MultiplyInvocation"] | components["schemas"]["DivideInvocation"] | components["schemas"]["RandomIntInvocation"] | components["schemas"]["NoiseInvocation"] | components["schemas"]["ParamIntInvocation"] | components["schemas"]["ParamFloatInvocation"] | components["schemas"]["FloatLinearRangeInvocation"] | components["schemas"]["StepParamEasingInvocation"] | components["schemas"]["DynamicPromptInvocation"] | components["schemas"]["PromptsFromFileInvocation"] | components["schemas"]["SDXLModelLoaderInvocation"] | components["schemas"]["SDXLRefinerModelLoaderInvocation"] | components["schemas"]["SDXLTextToLatentsInvocation"] | components["schemas"]["SDXLLatentsToLatentsInvocation"] | components["schemas"]["RealESRGANInvocation"] | components["schemas"]["GraphInvocation"] | components["schemas"]["IterateInvocation"] | components["schemas"]["CollectInvocation"] | components["schemas"]["CannyImageProcessorInvocation"] | components["schemas"]["HedImageProcessorInvocation"] | components["schemas"]["LineartImageProcessorInvocation"] | components["schemas"]["LineartAnimeImageProcessorInvocation"] | components["schemas"]["OpenposeImageProcessorInvocation"] | components["schemas"]["MidasDepthImageProcessorInvocation"] | components["schemas"]["NormalbaeImageProcessorInvocation"] | components["schemas"]["MlsdImageProcessorInvocation"] | components["schemas"]["PidiImageProcessorInvocation"] | components["schemas"]["ContentShuffleImageProcessorInvocation"] | 
components["schemas"]["ZoeDepthImageProcessorInvocation"] | components["schemas"]["MediapipeFaceProcessorInvocation"] | components["schemas"]["LeresImageProcessorInvocation"] | components["schemas"]["TileResamplerProcessorInvocation"] | components["schemas"]["SegmentAnythingProcessorInvocation"] | components["schemas"]["LatentsToLatentsInvocation"];
"application/json": components["schemas"]["LoadImageInvocation"] | components["schemas"]["ShowImageInvocation"] | components["schemas"]["ImageCropInvocation"] | components["schemas"]["ImagePasteInvocation"] | components["schemas"]["MaskFromAlphaInvocation"] | components["schemas"]["ImageMultiplyInvocation"] | components["schemas"]["ImageChannelInvocation"] | components["schemas"]["ImageConvertInvocation"] | components["schemas"]["ImageBlurInvocation"] | components["schemas"]["ImageResizeInvocation"] | components["schemas"]["ImageScaleInvocation"] | components["schemas"]["ImageLerpInvocation"] | components["schemas"]["ImageInverseLerpInvocation"] | components["schemas"]["ControlNetInvocation"] | components["schemas"]["ImageProcessorInvocation"] | components["schemas"]["MainModelLoaderInvocation"] | components["schemas"]["LoraLoaderInvocation"] | components["schemas"]["VaeLoaderInvocation"] | components["schemas"]["MetadataAccumulatorInvocation"] | components["schemas"]["CvInpaintInvocation"] | components["schemas"]["CompelInvocation"] | components["schemas"]["SDXLCompelPromptInvocation"] | components["schemas"]["SDXLRefinerCompelPromptInvocation"] | components["schemas"]["SDXLRawPromptInvocation"] | components["schemas"]["SDXLRefinerRawPromptInvocation"] | components["schemas"]["ClipSkipInvocation"] | components["schemas"]["TextToLatentsInvocation"] | components["schemas"]["LatentsToImageInvocation"] | components["schemas"]["ResizeLatentsInvocation"] | components["schemas"]["ScaleLatentsInvocation"] | components["schemas"]["ImageToLatentsInvocation"] | components["schemas"]["InpaintInvocation"] | components["schemas"]["AddInvocation"] | components["schemas"]["SubtractInvocation"] | components["schemas"]["MultiplyInvocation"] | components["schemas"]["DivideInvocation"] | components["schemas"]["RandomIntInvocation"] | components["schemas"]["ParamIntInvocation"] | components["schemas"]["ParamFloatInvocation"] | components["schemas"]["RealESRGANInvocation"] | components["schemas"]["RangeInvocation"] | components["schemas"]["RangeOfSizeInvocation"] | components["schemas"]["RandomRangeInvocation"] | components["schemas"]["ImageCollectionInvocation"] | components["schemas"]["DynamicPromptInvocation"] | components["schemas"]["PromptsFromFileInvocation"] | components["schemas"]["InfillColorInvocation"] | components["schemas"]["InfillTileInvocation"] | components["schemas"]["InfillPatchMatchInvocation"] | components["schemas"]["NoiseInvocation"] | components["schemas"]["SDXLModelLoaderInvocation"] | components["schemas"]["SDXLRefinerModelLoaderInvocation"] | components["schemas"]["SDXLTextToLatentsInvocation"] | components["schemas"]["SDXLLatentsToLatentsInvocation"] | components["schemas"]["FloatLinearRangeInvocation"] | components["schemas"]["StepParamEasingInvocation"] | components["schemas"]["GraphInvocation"] | components["schemas"]["IterateInvocation"] | components["schemas"]["CollectInvocation"] | components["schemas"]["CannyImageProcessorInvocation"] | components["schemas"]["HedImageProcessorInvocation"] | components["schemas"]["LineartImageProcessorInvocation"] | components["schemas"]["LineartAnimeImageProcessorInvocation"] | components["schemas"]["OpenposeImageProcessorInvocation"] | components["schemas"]["MidasDepthImageProcessorInvocation"] | components["schemas"]["NormalbaeImageProcessorInvocation"] | components["schemas"]["MlsdImageProcessorInvocation"] | components["schemas"]["PidiImageProcessorInvocation"] | components["schemas"]["ContentShuffleImageProcessorInvocation"] | 
components["schemas"]["ZoeDepthImageProcessorInvocation"] | components["schemas"]["MediapipeFaceProcessorInvocation"] | components["schemas"]["LeresImageProcessorInvocation"] | components["schemas"]["TileResamplerProcessorInvocation"] | components["schemas"]["SegmentAnythingProcessorInvocation"] | components["schemas"]["LatentsToLatentsInvocation"];
       };
     };
     responses: {
@@ -5,7 +5,7 @@ build-backend = "setuptools.build_meta"
 [project]
 name = "InvokeAI"
 description = "An implementation of Stable Diffusion which provides various new features and options to aid the image generation process"
-requires-python = ">=3.9, <3.11"
+requires-python = "==3.10"
 readme = { content-type = "text/markdown", file = "README.md" }
 keywords = ["stable-diffusion", "AI"]
 dynamic = ["version"]
@@ -23,8 +23,6 @@ classifiers = [
   'Operating System :: MacOS',
   'Operating System :: Microsoft :: Windows',
   'Programming Language :: Python :: 3 :: Only',
-  'Programming Language :: Python :: 3.8',
-  'Programming Language :: Python :: 3.9',
   'Programming Language :: Python :: 3.10',
   'Topic :: Artistic Software',
   'Topic :: Internet :: WWW/HTTP :: WSGI :: Application',