finished work on INSTALLING MODELS

This commit is contained in:
Lincoln Stein 2023-02-08 00:40:21 -05:00
parent 24792eb5da
commit 8b2ed9b8fd
6 changed files with 288 additions and 184 deletions

Binary file not shown.

After

Width:  |  Height:  |  Size: 302 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 271 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 283 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 253 KiB


@@ -57,6 +57,13 @@ containerization technologies such as Docker.

For downloads and instructions, visit the [NVIDIA CUDA Container
Runtime Site](https://developer.nvidia.com/nvidia-container-runtime)

### Torch Installation

When installing torch and torchvision manually with `pip`, remember to provide
the argument `--extra-index-url https://download.pytorch.org/whl/cu117`
as described in the [Manual Installation Guide](020_INSTALL_MANUAL.md).
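For example, a manual install on a CUDA 11.7 system might look like this (adjust the wheel index URL to match your CUDA version):

```shell
pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/cu117
```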
## :simple-amd: ROCm

### Linux Install


@@ -4,249 +4,346 @@ title: Installing Models

# :octicons-paintbrush-16: Installing Models
## Checkpoint and Diffusers Models

The model checkpoint files ('\*.ckpt') are the Stable Diffusion
"secret sauce". They are the product of training the AI on millions of
captioned images gathered from multiple sources.

Originally there was only a single Stable Diffusion weights file,
which many people named `model.ckpt`. Now there are dozens or more
that have been fine tuned to provide particular styles, genres, or
other features. In addition, there are several new formats that
improve on the original checkpoint format: a `.safetensors` format
which prevents malware from masquerading as a model, and `diffusers`
models, the most recent innovation.

InvokeAI supports all three formats but strongly prefers the
`diffusers` format. These models are distributed as directories
containing multiple subfolders, each of which holds a different aspect
of the model. One advantage of this layout is that the models load
from disk very quickly. Another is that `diffusers` models are
supported by a large and active community of open source developers
working with the HuggingFace organization, and improvements in both
rendering quality and performance are being made at a rapid
pace. Among other features is the ability to download and install a
`diffusers` model just by providing its HuggingFace repository ID.

While InvokeAI will continue to support `.ckpt` and `.safetensors`
models for the near future, these formats are deprecated and support
will likely be withdrawn at some point in the not-too-distant future.

This manual will guide you through installing and configuring model
weight files and converting legacy `.ckpt` and `.safetensors` files
into performant `diffusers` models.
## Base Models

InvokeAI comes with support for a good set of starter models. You'll
find them listed in the master models file
`configs/INITIAL_MODELS.yaml` in the InvokeAI root directory. The
subset that are currently installed are found in
`configs/models.yaml`. The current list is:

| Model | HuggingFace Repo ID | Description | URL |
| ----- | ------------------- | ----------- | --- |
| stable-diffusion-1.5 | runwayml/stable-diffusion-v1-5 | Most recent version of base Stable Diffusion model | https://huggingface.co/runwayml/stable-diffusion-v1-5 |
| stable-diffusion-1.4 | runwayml/stable-diffusion-v1-4 | Previous version of base Stable Diffusion model | https://huggingface.co/runwayml/stable-diffusion-v1-4 |
| inpainting-1.5 | runwayml/stable-diffusion-inpainting | Stable Diffusion 1.5 optimized for inpainting | https://huggingface.co/runwayml/stable-diffusion-inpainting |
| stable-diffusion-2.1-base | stabilityai/stable-diffusion-2-1-base | Stable Diffusion version 2.1 trained on 512 pixel images | https://huggingface.co/stabilityai/stable-diffusion-2-1-base |
| stable-diffusion-2.1-768 | stabilityai/stable-diffusion-2-1 | Stable Diffusion version 2.1 trained on 768 pixel images | https://huggingface.co/stabilityai/stable-diffusion-2-1 |
| dreamlike-diffusion-1.0 | dreamlike-art/dreamlike-diffusion-1.0 | An SD 1.5 model finetuned on high quality art | https://huggingface.co/dreamlike-art/dreamlike-diffusion-1.0 |
| dreamlike-photoreal-2.0 | dreamlike-art/dreamlike-photoreal-2.0 | A photorealistic model trained on 768 pixel images | https://huggingface.co/dreamlike-art/dreamlike-photoreal-2.0 |
| openjourney-4.0 | prompthero/openjourney | An SD 1.5 model finetuned on Midjourney images; prompt with "mdjrny-v4 style" | https://huggingface.co/prompthero/openjourney |
| nitro-diffusion-1.0 | nitrosocke/Nitro-Diffusion | An SD 1.5 model finetuned on three styles; prompt with "archer style", "arcane style" or "modern disney style" | https://huggingface.co/nitrosocke/Nitro-Diffusion |
| trinart-2.0 | naclbit/trinart_stable_diffusion_v2 | An SD 1.5 model finetuned with ~40,000 assorted high resolution manga/anime-style pictures | https://huggingface.co/naclbit/trinart_stable_diffusion_v2 |
| trinart-characters-2_0 | naclbit/trinart_derrida_characters_v2_stable_diffusion | An SD 1.5 model finetuned with 19.2M manga/anime-style pictures | https://huggingface.co/naclbit/trinart_derrida_characters_v2_stable_diffusion |

Note that these files are covered by an "Ethical AI" license which
forbids certain uses. When you initially download them, you are asked
to accept the license terms.
## Community-Contributed Models

There are too many to list here and more are being contributed every
day. [HuggingFace](https://huggingface.co/models?library=diffusers)
is a great resource for diffusers models, and is also the home of a
[fast-growing repository](https://huggingface.co/sd-concepts-library)
of embedding (".bin") models that add subjects and/or styles to your
images. The latter are automatically installed on the fly when you
include the text `<concept-name>` in your prompt. See [Concepts
Library](../features/CONCEPTS.md) for more information.

Another popular site for community-contributed models is
[CIVITAI](https://civitai.com). This extensive site currently supports
only `.safetensors` and `.ckpt` models, but they can be easily loaded
into InvokeAI and/or converted into optimized `diffusers` models. Be
aware that CIVITAI hosts many models that generate NSFW content.
## Installation

There are multiple ways to install and manage models:

1. The `invokeai-configure` script, which will download and install them for you.

2. The command-line tool (CLI), which has commands that allow you to import,
   configure and modify model files.

3. The web interface (WebUI), which has a GUI for importing and managing
   models.

### Installation via `invokeai-configure`

From the `invoke` launcher, choose option (6) "re-run the configure
script to download new models." This will launch the same script that
prompted you to select models at install time. You can use this to add
models that you skipped the first time around. It is all right to
specify a model that was previously downloaded; the script will just
confirm that the files are complete.
### Installation via the CLI

You can install a new model, including any of the community-supported ones, via
the command-line client's `!import_model` command.
#### Installing `.ckpt` and `.safetensors` models

If the model is already downloaded to your local disk, use
`!import_model /path/to/file.ckpt` to load it. For example:

```bash
invoke> !import_model C:/Users/fred/Downloads/martians.safetensors
```

!!! tip "Forward Slashes"
    On Windows systems, use forward slashes rather than backslashes
    in your file paths. If you do use backslashes,
    you must double them like this:
    `C:\\Users\\fred\\Downloads\\martians.safetensors`

Alternatively you can directly import the file using its URL:

```bash
invoke> !import_model https://example.org/sd_models/martians.safetensors
```

For this to work, the URL must not be password-protected. Otherwise
you will receive a 404 error.

When you import a legacy model, the CLI will ask you a few questions
about the model, including what size image it was trained on (usually
512x512), what name and description you wish to use for it, what
configuration file to use for it (usually the default
`v1-inference.yaml`), whether you'd like to make this model the
default at startup time, and whether you would like to install a
custom VAE (variable autoencoder) file for the model. For recent
models, the answer to the VAE question is usually "no," but it won't
hurt to answer "yes".
#### Installing `diffusers` models
You can install a `diffusers` model from the HuggingFace site using
`!import_model` and the HuggingFace repo_id for the model:
```bash
invoke> !import_model andite/anything-v4.0
```
Alternatively, you can download the model to disk and import it from
there. The model may be distributed as a ZIP file, or as a Git
repository:
```bash
invoke> !import_model C:/Users/fred/Downloads/andite--anything-v4.0
```
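As the examples show, `!import_model` accepts a local path, a URL, or a bare HuggingFace repo ID. The sketch below is a hypothetical illustration of how such an argument can be disambiguated; the function name and the repo-ID pattern are my assumptions, not InvokeAI's actual code:

```python
import os
import re

def classify_model_source(source: str) -> str:
    """Illustrative guess at whether `source` is a URL, a local
    file/directory, or a HuggingFace repo ID such as
    'andite/anything-v4.0'. Not InvokeAI's real logic."""
    if source.startswith(("http://", "https://")):
        return "url"
    if os.path.exists(source):
        return "path"
    # Repo IDs look like "owner/model-name" with exactly one slash
    if re.fullmatch(r"[\w.-]+/[\w.-]+", source):
        return "repo_id"
    return "unknown"
```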
!!! tip "The CLI supports file path autocompletion"
    Type a bit of the path name and hit ++tab++ in order to get a choice of
    possible completions.

!!! tip "On Windows, you can drag model files onto the command-line"
    Once you have typed in `!import_model `, you can drag the
    model file or directory onto the command-line to insert the model
    path. This way, you don't need to type it or copy/paste. However,
    you will need to reverse or double backslashes as noted above.

Before installing, the CLI will ask you for a short name and
description for the model, whether to make this the default model that
is loaded at InvokeAI startup time, and whether to replace its
VAE. Generally the answer to the latter question is "no".
### Converting legacy models into `diffusers`

The CLI `!convert_model` command will convert a `.safetensors` or
`.ckpt` model file into `diffusers` format and install it. This
enables the model to load and run faster without loss of image
quality.

The usage is identical to `!import_model`. You may point the command
to either a downloaded model file on disk, or to a (non-password
protected) URL:

```bash
invoke> !convert_model C:/Users/fred/Downloads/martians.safetensors
```
After a successful conversion, the CLI will offer you the option of
deleting the original `.ckpt` or `.safetensors` file.

### Optimizing a previously-installed model

Lastly, if you have previously installed a `.ckpt` or `.safetensors`
file and wish to convert it into a `diffusers` model, you can do this
without re-downloading and converting the original file by using the
`!optimize_model` command. Simply pass the short name of an existing
installed model:

```bash
invoke> !optimize_model martians-v1.0
```

The model will be converted into `diffusers` format and will replace the
previously installed version. You will again be offered the
opportunity to delete the original `.ckpt` or `.safetensors` file.
### Related CLI Commands
There are a whole series of additional model management commands in
the CLI that you can read about in [Command-Line
Interface](../features/CLI.md). These include:
* `!models` - List all installed models
* `!switch <model name>` - Switch to the indicated model
* `!edit_model <model name>` - Edit the indicated model to change its name, description or other properties
* `!del_model <model name>` - Delete the indicated model
### Manually editing `configs/models.yaml`

If you are comfortable with a text editor then you may simply edit `models.yaml`
directly.

You will need to download the desired `.ckpt/.safetensors` file and
place it somewhere on your machine's filesystem. Alternatively, for a
`diffusers` model, record the repo_id or download the whole model
directory. Then using a **text** editor (e.g. the Windows Notepad
application), open the file `configs/models.yaml`, and add a new
stanza that follows this model:

#### A legacy model

A legacy `.ckpt` or `.safetensors` entry will look like this:
```yaml
arabian-nights-1.0:
  description: A great fine-tune in Arabian Nights style
  weights: ./path/to/arabian-nights-1.0.ckpt
  config: ./configs/stable-diffusion/v1-inference.yaml
  format: ckpt
  width: 512
  height: 512
  default: false
```

Note that `format` is `ckpt` for both `.ckpt` and `.safetensors` files.

#### A diffusers model

A stanza for a `diffusers` model will look like this for a HuggingFace
model with a repository ID:

```yaml
arabian-nights-1.1:
  description: An even better fine-tune of the Arabian Nights
  repo_id: captahab/arabian-nights-1.1
  format: diffusers
  default: true
```
And for a downloaded directory:
```yaml
arabian-nights-1.1:
  description: An even better fine-tune of the Arabian Nights
  path: /path/to/captahab-arabian-nights-1.1
  format: diffusers
  default: true
```
There is additional syntax for indicating an external VAE to use with
this model. See `INITIAL_MODELS.yaml` and `models.yaml` for examples.
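Hand-edited YAML is easy to get subtly wrong. As a quick sanity check, you can parse your edited file and confirm that each stanza carries the fields its `format` requires. This is a sketch, assuming PyYAML is installed; the required-field lists are inferred from the example stanzas above, not taken from InvokeAI's own validator:

```python
import yaml

# Fields each format appears to need, based on the example stanzas
REQUIRED = {
    "ckpt": {"description", "weights", "config", "width", "height"},
    "diffusers": {"description"},  # plus either repo_id or path
}

def check_stanzas(models_yaml_text: str) -> list[str]:
    """Return a list of human-readable problems found in a
    models.yaml document (empty list means it looks plausible)."""
    problems = []
    for name, stanza in yaml.safe_load(models_yaml_text).items():
        fmt = stanza.get("format", "ckpt")
        missing = REQUIRED.get(fmt, set()) - stanza.keys()
        if missing:
            problems.append(f"{name}: missing {sorted(missing)}")
        if fmt == "diffusers" and not ({"repo_id", "path"} & stanza.keys()):
            problems.append(f"{name}: needs repo_id or path")
    return problems
```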
After you save the modified `models.yaml` file, relaunch
`invokeai`. The new model will now be available for your use.
### Installation via the WebUI

To access the WebUI Model Manager, click on the button that looks like
a cube in the upper right side of the browser screen. This will bring
up a dialogue that lists the models you have already installed, and
allows you to load, delete or edit them:
<figure markdown>
![model-manager](../assets/installing-models/webui-models-1.png)
</figure>
To add a new model, click on **+ Add New** and select either a
checkpoint/safetensors model or a diffusers model:
<figure markdown>
![model-manager-add-new](../assets/installing-models/webui-models-2.png)
</figure>
In this example, we chose **Add Diffusers**. As shown in the figure
below, a new dialogue prompts you to enter the name to use for the
model, its description, and either the location of the `diffusers`
model on disk, or its Repo ID on the HuggingFace web site. If you
choose to enter a path on disk, the system will autocomplete for you
as you type:
<figure markdown>
![model-manager-add-diffusers](../assets/installing-models/webui-models-3.png)
</figure>
Press **Add Model** at the bottom of the dialogue (scrolled out of
sight in the figure), and the model will be downloaded, imported, and
registered in `models.yaml`.
The **Add Checkpoint/Safetensor Model** option is similar, except that
in this case you can choose to scan an entire folder for
checkpoint/safetensors files to import. Simply type in the path of the
directory and press the "Search" icon. This will display the
`.ckpt` and `.safetensors` files found inside the directory and its
subfolders, and allow you to choose which ones to import:
<figure markdown>
![model-manager-add-checkpoint](../assets/installing-models/webui-models-4.png)
</figure>
## Model Management Startup Options
The `invoke` launcher and the `invokeai` script accept a series of
command-line arguments that modify InvokeAI's behavior when loading
models. These can be provided on the command line, or added to the
InvokeAI root directory's `invokeai.init` initialization file.
The arguments are:
* `--model <model name>` -- Start up with the indicated model loaded
* `--ckpt_convert` -- When a checkpoint/safetensors model is loaded, convert it into a `diffusers` model in memory. This does not permanently save the converted model to disk.
* `--autoconvert <path/to/directory>` -- Scan the indicated directory path for new checkpoint/safetensors files, convert them into `diffusers` models, and import them into InvokeAI.
Here is an example of providing an argument on the command line using
the `invoke.sh` launch script:
```bash
invoke.sh --autoconvert /home/fred/stable-diffusion-checkpoints
```
And here is what the same argument looks like in `invokeai.init`:
```
--outdir=/home/fred/invokeai/outputs
--no-nsfw_checker
--autoconvert /home/fred/stable-diffusion-checkpoints
```
Save the modified `invokeai.init` file and relaunch InvokeAI. The new
startup options will take effect the next time you run it.
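The directory scan performed by `--autoconvert` can be pictured as the following sketch. This is illustrative only (the function name is mine, and InvokeAI's real implementation also tracks which files it has already imported):

```python
from pathlib import Path

def find_checkpoints(directory: str) -> list[Path]:
    """Recursively collect .ckpt and .safetensors files under
    `directory`, the kind of files --autoconvert would pick up."""
    return sorted(
        p for p in Path(directory).rglob("*")
        if p.is_file() and p.suffix in {".ckpt", ".safetensors"}
    )
```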