Bring main back into a consistent state with other branches

- Due to misuse of the rebase command, main was transiently
  in an inconsistent state.

- This repairs the damage, and adds a few post-release
  patches that ensure stable conda installs on Mac and Windows.
Lincoln Stein
2022-11-03 15:44:06 -04:00
370 changed files with 32709 additions and 9854 deletions

@@ -0,0 +1,267 @@
---
title: Installing Models
---
# :octicons-paintbrush-16: Installing Models
## Model Weight Files
The model weight files ('*.ckpt') are the Stable Diffusion "secret
sauce". They are the product of training the AI on millions of
captioned images gathered from multiple sources.
Originally there was only a single Stable Diffusion weights file,
which many people named `model.ckpt`. Now there are dozens or more
that have been "fine tuned" to provide particular styles, genres, or
other features. InvokeAI allows you to install and run multiple model
weight files and switch between them quickly in the command-line and
web interfaces.
This manual will guide you through installing and configuring model
weight files.
## Base Models
InvokeAI comes with support for a good initial set of models listed in
the model configuration file `configs/models.yaml`. They are:
| Model | Weight File | Description | DOWNLOAD FROM |
| ---------------------- | ----------------------------- |--------------------------------- | ----------------|
| stable-diffusion-1.5 | v1-5-pruned-emaonly.ckpt | Most recent version of base Stable Diffusion model| https://huggingface.co/runwayml/stable-diffusion-v1-5 |
| stable-diffusion-1.4 | sd-v1-4.ckpt | Previous version of base Stable Diffusion model | https://huggingface.co/CompVis/stable-diffusion-v-1-4-original |
| inpainting-1.5 | sd-v1-5-inpainting.ckpt | Stable Diffusion 1.5 model specialized for inpainting | https://huggingface.co/runwayml/stable-diffusion-inpainting |
| waifu-diffusion-1.3 | model-epoch09-float32.ckpt | Stable Diffusion 1.4 trained to produce anime images | https://huggingface.co/hakurei/waifu-diffusion-v1-3 |
| `<all models>` | vae-ft-mse-840000-ema-pruned.ckpt | An add-on file that improves face generation | https://huggingface.co/stabilityai/sd-vae-ft-mse-original/ |
Note that these files are covered by an "Ethical AI" license which
forbids certain uses. You will need to create an account on the
Hugging Face website and accept the license terms before you can
access the files.
The predefined configuration file for InvokeAI (located at
`configs/models.yaml`) provides entries for each of these weights
files. `stable-diffusion-1.5` is the default model used, and we
strongly recommend that you install this weights file if nothing else.
## Community-Contributed Models
There are too many to list here and more are being contributed every
day. Hugging Face maintains a [fast-growing
repository](https://huggingface.co/sd-concepts-library) of fine-tune
(".bin") models that can be imported into InvokeAI by passing the
`--embedding_path` option to the `invoke.py` command.
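For example, a downloaded concept file might be loaded at launch time like this (the file name below is illustrative):

```bash
# start the CLI with a community concept embedding loaded
# (replace the path with wherever you saved the downloaded .bin file)
python scripts/invoke.py --embedding_path embeddings/my-favorite-concept.bin
```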
[This page](https://rentry.org/sdmodels) hosts a large list of
official and unofficial Stable Diffusion models and where they can be
obtained.
## Installation
There are three ways to install weights files:
1. During InvokeAI installation, the `preload_models.py` script can
download them for you.
2. You can use the command-line interface (CLI) to import, configure
and modify new model files.
3. You can download the files manually and add the appropriate entries
to `models.yaml`.
### Installation via `preload_models.py`
This is the most automatic way. Run `scripts/preload_models.py` from
the console. It will ask you to select which models to download and
lead you through the steps of setting up a Hugging Face account if you
haven't done so already.
To start, from within the InvokeAI directory run the command `python
scripts/preload_models.py` (Linux/MacOS) or `python
scripts\preload_models.py` (Windows):
```
Loading Python libraries...
** INTRODUCTION **
Welcome to InvokeAI. This script will help download the Stable Diffusion weight files
and other large models that are needed for text to image generation. At any point you may interrupt
this program and resume later.
** WEIGHT SELECTION **
Would you like to download the Stable Diffusion model weights now? [y]
Choose the weight file(s) you wish to download. Before downloading you
will be given the option to view and change your selections.
[1] stable-diffusion-1.5:
The newest Stable Diffusion version 1.5 weight file (4.27 GB) (recommended)
Download? [y]
[2] inpainting-1.5:
RunwayML SD 1.5 model optimized for inpainting (4.27 GB) (recommended)
Download? [y]
[3] stable-diffusion-1.4:
The original Stable Diffusion version 1.4 weight file (4.27 GB)
Download? [n] n
[4] waifu-diffusion-1.3:
Stable Diffusion 1.4 fine tuned on anime-styled images (4.27 GB)
Download? [n] y
[5] ft-mse-improved-autoencoder-840000:
StabilityAI improved autoencoder fine-tuned for human faces (335 MB) (recommended)
Download? [y] y
The following weight files will be downloaded:
[1] stable-diffusion-1.5*
[2] inpainting-1.5
[4] waifu-diffusion-1.3
[5] ft-mse-improved-autoencoder-840000
*default
Ok to download? [y]
** LICENSE AGREEMENT FOR WEIGHT FILES **
1. To download the Stable Diffusion weight files you need to read and accept the
CreativeML Responsible AI license. If you have not already done so, please
create an account using the "Sign Up" button:
https://huggingface.co
You will need to verify your email address as part of the HuggingFace
registration process.
2. After creating the account, login under your account and accept
the license terms located here:
https://huggingface.co/CompVis/stable-diffusion-v-1-4-original
Press <enter> when you are ready to continue:
...
```
When the script is complete, you will find the downloaded weights
files in `models/ldm/stable-diffusion-v1` and a matching configuration
file in `configs/models.yaml`.
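For example, if you selected the default models above, the directory might contain something like the following (exact file names depend on your selections):

```bash
ls models/ldm/stable-diffusion-v1/
# v1-5-pruned-emaonly.ckpt    sd-v1-5-inpainting.ckpt
# model-epoch09-float32.ckpt  vae-ft-mse-840000-ema-pruned.ckpt
```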
You can run the script again to add any models you didn't select the
first time. Note that as a safety measure the script will _never_
remove a previously-installed weights file. You will have to do this
manually.
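A manual removal, assuming you no longer want the model, might look like this (the file and stanza names are illustrative):

```bash
# delete the weights file itself...
rm models/ldm/stable-diffusion-v1/sd-v1-4.ckpt
# ...then open configs/models.yaml in a text editor and delete the
# stanza (e.g. "stable-diffusion-1.4:") that points at that file
```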
### Installation via the CLI
You can install a new model, including any of the community-supported
ones, via the command-line client's `!import_model` command.
1. First download the desired model weights file and place it under `models/ldm/stable-diffusion-v1/`.
You may rename the weights file to something more memorable if you wish. Record the path of the
weights file (e.g. `models/ldm/stable-diffusion-v1/arabian-nights-1.0.ckpt`)
2. Launch the `invoke.py` CLI with `python scripts/invoke.py`.
3. At the `invoke>` command-line, enter the command `!import_model <path to model>`.
For example:
`invoke> !import_model models/ldm/stable-diffusion-v1/arabian-nights-1.0.ckpt`
(Hint - the CLI supports file path autocompletion. Type a bit of the path
name and hit <tab> in order to get a choice of possible completions.)
4. Follow the wizard's instructions to complete installation as shown in the example
here:
```
invoke> <b>!import_model models/ldm/stable-diffusion-v1/arabian-nights-1.0.ckpt</b>
>> Model import in process. Please enter the values needed to configure this model:
Name for this model: <b>arabian-nights</b>
Description of this model: <b>Arabian Nights Fine Tune v1.0</b>
Configuration file for this model: <b>configs/stable-diffusion/v1-inference.yaml</b>
Default image width: <b>512</b>
Default image height: <b>512</b>
>> New configuration:
arabian-nights:
config: configs/stable-diffusion/v1-inference.yaml
description: Arabian Nights Fine Tune v1.0
height: 512
weights: models/ldm/stable-diffusion-v1/arabian-nights-1.0.ckpt
width: 512
OK to import [n]? <b>y</b>
>> Caching model stable-diffusion-1.4 in system RAM
>> Loading arabian-nights from models/ldm/stable-diffusion-v1/arabian-nights-1.0.ckpt
| LatentDiffusion: Running in eps-prediction mode
| DiffusionWrapper has 859.52 M params.
| Making attention of type 'vanilla' with 512 in_channels
| Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
| Making attention of type 'vanilla' with 512 in_channels
| Using faster float16 precision
```
If you've previously installed the fine-tune VAE file `vae-ft-mse-840000-ema-pruned.ckpt`,
the wizard will also ask you if you want to add this VAE to the model.
The appropriate entry for this model will be added to `configs/models.yaml` and it will
be available to use in the CLI immediately.
The CLI has additional commands for switching among, viewing, editing, and
deleting the available models. These are described in [Command Line
Client](../features/CLI.md#model-selection-and-importation), but the two most
frequently-used are `!models` and `!switch <name of model>`. The first
prints a table of models that InvokeAI knows about and their load
status. The second will load the requested model and lets you switch
back and forth quickly among loaded models.
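For example, a session might look like the following (the model names and output layout here are illustrative):

```
invoke> !models
stable-diffusion-1.4     cached       Previous version of base Stable Diffusion model
stable-diffusion-1.5     active       Most recent version of base Stable Diffusion model
waifu-diffusion-1.3      not loaded   Stable Diffusion 1.4 trained to produce anime images
invoke> !switch waifu-diffusion-1.3
>> Caching model stable-diffusion-1.5 in system RAM
>> Loading waifu-diffusion-1.3 from models/ldm/stable-diffusion-v1/model-epoch09-float32.ckpt
```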
### Manual editing of `configs/models.yaml`
If you are comfortable with a text editor then you may simply edit
`models.yaml` directly.
First you need to download the desired .ckpt file and place it in
`models/ldm/stable-diffusion-v1` as described in step #1 in the
previous section. Record the path to the weights file,
e.g. `models/ldm/stable-diffusion-v1/arabian-nights-1.0.ckpt`
Then using a **text** editor (e.g. the Windows Notepad application),
open the file `configs/models.yaml`, and add a new stanza that follows
this example:
```
arabian-nights-1.0:
description: A great fine-tune in Arabian Nights style
weights: ./models/ldm/stable-diffusion-v1/arabian-nights-1.0.ckpt
config: ./configs/stable-diffusion/v1-inference.yaml
width: 512
height: 512
vae: ./models/ldm/stable-diffusion-v1/vae-ft-mse-840000-ema-pruned.ckpt
default: false
```
* arabian-nights-1.0
- This is the name of the model that you will refer to from within the
CLI and the WebGUI when you need to load and use the model.
* description
- Any description that you want to add to the model to remind you what
it is.
* weights
- Relative path to the .ckpt weights file for this model.
* config
- This is the confusingly-named configuration file for the model itself.
Use `./configs/stable-diffusion/v1-inference.yaml` unless the model happens
to need a custom configuration, in which case the place you downloaded it
from will tell you what to use instead. For example, the runwayML custom
inpainting model requires the file `configs/stable-diffusion/v1-inpainting-inference.yaml`.
This is already included in the InvokeAI distribution and is configured automatically
for you by the `preload_models.py` script.
* vae
- If you want to add a VAE file to the model, then enter its path here.
* width, height
- These are the width and height of the images used to train the model.
Currently they are always 512.
Save the `models.yaml` and relaunch InvokeAI. The new model should now be
available for your use.
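After relaunching, you can confirm that the new entry was picked up by loading it directly; for instance, using the model name from the stanza above:

```bash
# load the newly added model at launch (name comes from the models.yaml stanza)
python scripts/invoke.py --model arabian-nights-1.0
```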

@@ -36,20 +36,6 @@ another environment with NVIDIA GPUs on-premises or in the cloud.
### Prerequisites
#### Get the data files
Go to
[Hugging Face](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original),
and click "Access repository" to Download the model file `sd-v1-4.ckpt` (~4 GB)
to `~/Downloads`. You'll need to create an account but it's quick and free.
Also download the face restoration model.
```Shell
cd ~/Downloads
wget https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.4.pth
```
#### Install [Docker](https://github.com/santisbon/guides#docker)
On the Docker Desktop app, go to Preferences, Resources, Advanced. Increase the
@@ -57,86 +43,61 @@
CPUs and Memory to avoid this
[Issue](https://github.com/invoke-ai/InvokeAI/issues/342). You may need to
increase Swap and Disk image size too.
#### Get a Hugging Face Token
Go to [Hugging Face](https://huggingface.co/settings/tokens), create a token and
temporarily place it somewhere like an open text editor window (don't save it!
just keep it open; we'll need it in the next step).
### Setup
Set the fork you want to use and other variables.
```Shell
TAG_STABLE_DIFFUSION="santisbon/stable-diffusion"
PLATFORM="linux/arm64"
GITHUB_STABLE_DIFFUSION="-b orig-gfpgan https://github.com/santisbon/stable-diffusion.git"
REQS_STABLE_DIFFUSION="requirements-linux-arm64.txt"
CONDA_SUBDIR="osx-arm64"

echo $TAG_STABLE_DIFFUSION
echo $PLATFORM
echo $GITHUB_STABLE_DIFFUSION
echo $REQS_STABLE_DIFFUSION
echo $CONDA_SUBDIR
```

!!! tip

    I prefer to save my env vars in a `.env` (or `.envrc`) file in the
    repository root, to automatically re-apply them when I come back.
The build and run scripts contain default values for almost everything,
besides the [Hugging Face Token](https://huggingface.co/settings/tokens) you
created in the last step.
Some suggested variables you may want to change besides the token:
| Environment-Variable | Description |
| ------------------------------------------------------------------- | ------------------------------------------------------------------------ |
| `HUGGINGFACE_TOKEN="hg_aewirhghlawrgkjbarug2"` | This is the only required variable; without it you can't download the checkpoint |
| `ARCH=aarch64` | if you are using an ARM-based CPU |
| `INVOKEAI_TAG=yourname/invokeai:latest` | the container repository / tag which will be used |
| `INVOKEAI_CONDA_ENV_FILE=environment-linux-aarch64.yml` | since `environment.yml` doesn't work with aarch64 |
| `INVOKEAI_GIT="-b branchname https://github.com/username/reponame"` | if you want to use your own fork |
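If you follow the tip above and keep these in a `.env` file at the repository root, it might look like this (the token below is a fake placeholder; substitute your own values):

```bash
# .env -- sourced before running the build/run scripts; values are illustrative
HUGGINGFACE_TOKEN="hf_xxxxxxxxxxxxxxxxxxxx"
ARCH=aarch64
INVOKEAI_TAG=yourname/invokeai:latest
INVOKEAI_CONDA_ENV_FILE=environment-linux-aarch64.yml
```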
#### Build the Image
I provided a build script; it is located at `docker-build/build.sh` but still
needs to be executed from the repository root.
```bash
docker-build/build.sh
```
The build script not only builds the container, but also creates the Docker
volume if it does not exist yet; if the volume is empty it will download the
models. When it is done you can run the container via the run script:

```bash
docker-build/run.sh
```

When used without arguments, the container will start the website and provide
you the link to open it. But if you want to use some other parameters you can
also do so.

!!! warning "Deprecated"

    The following manual steps are no longer needed; the build script now
    handles the volume and model downloads. They are kept here for reference.

    Create a Docker volume for the downloaded model files.

    ```Shell
    docker volume create my-vol
    ```

    Copy the data files to the Docker volume using a lightweight Linux container.
    We'll need the models at run time. You just need to create the container with
    the mountpoint; no need to run this dummy container.

    ```Shell
    cd ~/Downloads # or wherever you saved the files
    docker create --platform $PLATFORM --name dummy --mount source=my-vol,target=/data alpine
    docker cp sd-v1-4.ckpt dummy:/data
    docker cp GFPGANv1.4.pth dummy:/data
    ```
Get the repo and download the Miniconda installer (we'll need it at build time).
Replace the URL with the version matching your container OS and the architecture
it will run on.
```Shell
cd ~
git clone $GITHUB_STABLE_DIFFUSION
cd stable-diffusion/docker-build
chmod +x entrypoint.sh
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-aarch64.sh -O anaconda.sh && chmod +x anaconda.sh
```
Build the Docker image. Give it any tag `-t` that you want.
Choose the Linux container's host platform: x86-64/Intel is `amd64`. Apple
silicon is `arm64`. If deploying the container to the cloud to leverage powerful
GPU instances you'll be on amd64 hardware but if you're just trying this out
locally on Apple silicon choose arm64.
The application uses libraries that need to match the host environment so use
the appropriate requirements file.
Tip: Check that your shell session has the env variables set above.
```Shell
docker build -t $TAG_STABLE_DIFFUSION \
--platform $PLATFORM \
--build-arg gsd=$GITHUB_STABLE_DIFFUSION \
--build-arg rsd=$REQS_STABLE_DIFFUSION \
--build-arg cs=$CONDA_SUBDIR \
.
```
Run a container using your built image.
Tip: Make sure you've created and populated the Docker volume (above).
```Shell
docker run -it \
--rm \
--platform $PLATFORM \
--name stable-diffusion \
--hostname stable-diffusion \
--mount source=my-vol,target=/data \
$TAG_STABLE_DIFFUSION
```
From here on, the rest of the previous Docker docs still apply, and will
provide useful information for some readers.
## Usage (time to have fun)
@@ -240,7 +201,8 @@ server with:
python3 scripts/invoke.py --full_precision --web
```
If it's running on your Mac, point your Mac web browser to
<http://127.0.0.1:9090>
Press Control-C at the command line to stop the web server.

@@ -1,5 +1,5 @@
---
title: Linux
title: Manual Installation, Linux
---
# :fontawesome-brands-linux: Linux
@@ -43,6 +43,7 @@ title: Linux
environment named `invokeai` and activate the environment.
```bash
(base) ~/InvokeAI$ rm -rf src # (this is a precaution in case there is already a src directory)
(base) ~/InvokeAI$ conda env create
(base) ~/InvokeAI$ conda activate invokeai
(invokeai) ~/InvokeAI$
@@ -51,58 +52,54 @@ title: Linux
After these steps, your command prompt will be prefixed by `(invokeai)` as shown
above.
6. Load a couple of small machine-learning models required by stable diffusion:
6. Load the big stable diffusion weights files and a couple of smaller machine-learning models:
```bash
(invokeai) ~/InvokeAI$ python3 scripts/preload_models.py
```
!!! note
This script will lead you through the process of creating an account on Hugging Face,
accepting the terms and conditions of the Stable Diffusion model license, and
obtaining an access token for downloading. It will then download and install the
weights files for you.
This step is necessary because I modified the original just-in-time
model loading scheme to allow the script to work on GPU machines that are not
internet connected. See [Preload Models](../features/OTHER.md#preload-models).
Please see [Installing Models](../features/INSTALLING_MODELS.md) for a manual process for doing the
same thing.
7. Now you need to install the weights for the stable diffusion model.

   - For running with the released weights, you will first need to set up an account
     with [Hugging Face](https://huggingface.co).
   - Use your credentials to log in, and then point your browser [here](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original).
   - You may be asked to sign a license agreement at this point.
   - Click on "Files and versions" near the top of the page, and then click on the
     file named "sd-v1-4.ckpt". You'll be taken to a page that prompts you to click
     the "download" link. Save the file somewhere safe on your local machine.

   Now run the following commands from within the stable-diffusion directory.
   This will create a symbolic link from the stable-diffusion model.ckpt file to
   the true location of the `sd-v1-4.ckpt` file.

   ```bash
   (invokeai) ~/InvokeAI$ mkdir -p models/ldm/stable-diffusion-v1
   (invokeai) ~/InvokeAI$ ln -sf /path/to/sd-v1-4.ckpt models/ldm/stable-diffusion-v1/model.ckpt
   ```

8. Start generating images!

   ```bash
   # for the pre-release weights use the -l or --laion400m switch
   (invokeai) ~/InvokeAI$ python3 scripts/invoke.py -l

   # for the post-release weights do not use the switch
   (invokeai) ~/InvokeAI$ python3 scripts/invoke.py

   # for additional configuration switches and arguments, use -h or --help
   (invokeai) ~/InvokeAI$ python3 scripts/invoke.py -h
   ```

7. Start generating images!

   ```bash
   # Command-line interface
   (invokeai) python scripts/invoke.py

   # or run the web interface on localhost:9090!
   (invokeai) python scripts/invoke.py --web

   # or run the web interface on your machine's network interface!
   (invokeai) python scripts/invoke.py --web --host 0.0.0.0
   ```

   To use an alternative model you may invoke the `!switch` command in
   the CLI, or pass `--model <model_name>` during `invoke.py` launch for
   either the CLI or the Web UI. See [Command Line
   Client](../features/CLI.md#model-selection-and-importation). The
   model names are defined in `configs/models.yaml`.
9. Subsequently, to relaunch the script, be sure to run "conda
activate invokeai" (step 5, second command), enter the `InvokeAI`
directory, and then launch the invoke script (step 8). If you forget
to activate the 'invokeai' environment, the script will fail with
multiple `ModuleNotFound` errors.
## Updating to newer versions of the script
This distribution is changing rapidly. If you used the `git clone`
method (step 5) to download the InvokeAI directory, then to update to
the latest and greatest version, launch the Anaconda window, enter
`InvokeAI` and type:
```bash
(invokeai) ~/InvokeAI$ git pull
(invokeai) ~/InvokeAI$ rm -rf src # prevents conda freezing errors
(invokeai) ~/InvokeAI$ conda env update -f environment.yml
```

@@ -1,5 +1,5 @@
---
title: macOS
title: Manual Installation, macOS
---
# :fontawesome-brands-apple: macOS
@@ -19,18 +19,9 @@ an issue on Github and we will do our best to help.
## Installation
First you need to download a large checkpoint file.
1. Sign up at https://huggingface.co
2. Go to the [Stable Diffusion model page](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original)
3. Accept the terms and click Access Repository
4. Download [sd-v1-4.ckpt (4.27 GB)](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/blob/main/sd-v1-4.ckpt) and note where you have saved it (probably the Downloads folder). You may want to move it somewhere else for longer term storage - SD needs this file to run.
While that is downloading, open Terminal and run the following commands one at a time, reading the comments and taking care to run the appropriate command for your Mac's architecture (Intel or M1).
!!! todo "Homebrew"
If you have no brew installation yet (otherwise skip):
First you will install the "brew" package manager. Skip this if brew is already installed.
```bash title="install brew (and Xcode command line tools)"
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
@@ -94,25 +85,6 @@ While that is downloading, open Terminal and run the following commands one at a
cd InvokeAI
```
!!! todo "Wait until the checkpoint-file download finished, then proceed"
We will leave the big checkpoint wherever you stashed it for long-term storage,
and make a link to it from the repo's folder. This allows you to use it for
other repos, or if you need to delete Invoke AI, you won't have to download it again.
```{.bash .annotate}
# Make the directory in the repo for the symlink
mkdir -p models/ldm/stable-diffusion-v1/
# This is the folder where you put the checkpoint file `sd-v1-4.ckpt`
PATH_TO_CKPT="$HOME/Downloads" # (1)!
# Create a link to the checkpoint
ln -s "$PATH_TO_CKPT/sd-v1-4.ckpt" models/ldm/stable-diffusion-v1/model.ckpt
```
1. Replace `$HOME/Downloads` with the location where you actually stored the checkpoint (`sd-v1-4.ckpt`)
!!! todo "Create the environment & install packages"
=== "M1 Mac"
@@ -131,25 +103,40 @@ While that is downloading, open Terminal and run the following commands one at a
```bash
# Activate the environment (you need to do this every time you want to run SD)
conda activate invokeai

# This will download some bits and pieces and may take a while
(invokeai) python scripts/preload_models.py

# Run SD!
(invokeai) python scripts/dream.py

# or run the web interface!
(invokeai) python scripts/invoke.py --web

# The original scripts should work as well.
(invokeai) python scripts/orig_scripts/txt2img.py \
    --prompt "a photograph of an astronaut riding a horse" \
    --plms
```
!!! info

    `export PIP_EXISTS_ACTION=w` is a precaution to fix `conda env
    create -f environment-mac.yml` never finishing in some situations. So
    it isn't required but won't hurt.
!!! todo "Download the model weight files"
The `preload_models.py` script downloads and installs the model weight
files for you. It will lead you through the process of getting a Hugging Face
account, accepting the Stable Diffusion model weight license agreement, and
creating a download token:
# This will take some time, depending on the speed of your internet connection
# and will consume about 10GB of space
(invokeai) python scripts/preload_models.py
!! todo "Run InvokeAI!"
# Command-line interface
(invokeai) python scripts/invoke.py
# or run the web interface on localhost:9090!
(invokeai) python scripts/invoke.py --web
# or run the web interface on your machine's network interface!
(invokeai) python scripts/invoke.py --web --host 0.0.0.0
To use an alternative model you may invoke the `!switch` command in
the CLI, or pass `--model <model_name>` during `invoke.py` launch for
either the CLI or the Web UI. See [Command Line
Client](../features/CLI.md#model-selection-and-importation). The
model names are defined in `configs/models.yaml`.
---
## Common problems

@@ -1,5 +1,5 @@
---
title: Windows
title: Manual Installation, Windows
---
# :fontawesome-brands-windows: Windows
@@ -69,49 +69,42 @@ in the wiki
environment file isn't specified, conda will default to `environment.yml`. You will need
to provide the `-f` option if you wish to load a different environment file at any point.
7. Run the command:

   ```batch
   python scripts\preload_models.py
   ```

   This installs several machine learning models that stable diffusion requires.

   Note: This step is required. This was done because some users might be
   blocked by firewalls or have limited internet connectivity for the models to
   be downloaded just-in-time.

8. Now you need to install the weights for the big stable diffusion model.

   1. For running with the released weights, you will first need to set up an account with Hugging Face (https://huggingface.co).
   2. Use your credentials to log in, and then point your browser at https://huggingface.co/CompVis/stable-diffusion-v-1-4-original.
   3. You may be asked to sign a license agreement at this point.
   4. Click on "Files and versions" near the top of the page, and then click on the file named `sd-v1-4.ckpt`. You'll be taken to a page that
      prompts you to click the "download" link. Now save the file somewhere safe on your local machine.
   5. The weight file is >4 GB in size, so downloading may take a while.

   Now run the following commands from **within the InvokeAI directory** to copy the weights file to the right place:

   ```batch
   mkdir -p models\ldm\stable-diffusion-v1
   copy C:\path\to\sd-v1-4.ckpt models\ldm\stable-diffusion-v1\model.ckpt
   ```

   Please replace `C:\path\to\sd-v1-4.ckpt` with the correct path to wherever you stashed this file. If you prefer not to copy or move the .ckpt file,
   you may instead create a shortcut to it from within `models\ldm\stable-diffusion-v1\`.

9. Start generating images!

   ```batch title="for the pre-release weights"
   python scripts\invoke.py -l
   ```

   ```batch title="for the post-release weights"
   python scripts\invoke.py
   ```

7. Load the big stable diffusion weights files and a couple of smaller machine-learning models:

   ```bash
   (invokeai) ~/InvokeAI$ python3 scripts/preload_models.py
   ```

   !!! note
       This script will lead you through the process of creating an account on Hugging Face,
       accepting the terms and conditions of the Stable Diffusion model license, and
       obtaining an access token for downloading. It will then download and install the
       weights files for you.

   Please see [Installing Models](../features/INSTALLING_MODELS.md) for a manual process for doing the
   same thing.

8. Start generating images!

   ```bash
   # Command-line interface
   (invokeai) python scripts/invoke.py

   # or run the web interface on localhost:9090!
   (invokeai) python scripts/invoke.py --web

   # or run the web interface on your machine's network interface!
   (invokeai) python scripts/invoke.py --web --host 0.0.0.0
   ```

   To use an alternative model you may invoke the `!switch` command in
   the CLI, or pass `--model <model_name>` during `invoke.py` launch for
   either the CLI or the Web UI. See [Command Line
   Client](../features/CLI.md#model-selection-and-importation). The
   model names are defined in `configs/models.yaml`.
9. Subsequently, to relaunch the script, first activate the Anaconda
command window (step 3), enter the InvokeAI directory (step 5, `cd
\path\to\InvokeAI`), run `conda activate invokeai` (step 6b), and then
launch the invoke script (step 8).
!!! tip "Tildebyte has written an alternative"