updated documentation

This commit is contained in:
Lincoln Stein 2022-11-02 17:28:50 -04:00
parent 7c68eff99f
commit 16a52a607d
3 changed files with 87 additions and 99 deletions


@ -43,6 +43,7 @@ title: Manual Installation, Linux
environment named `invokeai` and activate the environment.
```bash
(base) rm -rf src # (this is a precaution in case there is already a src directory)
(base) ~/InvokeAI$ conda env create
(base) ~/InvokeAI$ conda activate invokeai
(invokeai) ~/InvokeAI$
@ -51,50 +52,54 @@ title: Manual Installation, Linux
After these steps, your command prompt will be prefixed by `(invokeai)` as shown
above.
6. Load the big stable diffusion weights files and a couple of smaller machine-learning models:
```bash
(invokeai) ~/InvokeAI$ python3 scripts/preload_models.py
```
!!! note
This script will lead you through the process of creating an account on Hugging Face,
accepting the terms and conditions of the Stable Diffusion model license, and
obtaining an access token for downloading. It will then download and install the
weights files for you.
Please see [INSTALLING_MODELS.md](../features/INSTALLING_MODELS.md) for a manual process that accomplishes the same thing.
7. Start generating images!

```bash
# Command-line interface
(invokeai) python scripts/invoke.py

# or run the web interface on localhost:9090!
(invokeai) python scripts/invoke.py --web

# or run the web interface on your machine's network interface!
(invokeai) python scripts/invoke.py --web --host 0.0.0.0
```

To use an alternative model you may invoke the `!switch` command in
the CLI, or pass `--model <model_name>` during `invoke.py` launch for
either the CLI or the Web UI. See [Command Line
Client](../features/CLI.md#model-selection-and-importation). The
model names are defined in `configs/models.yaml`.
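For example, to launch the CLI with a model other than the default, pass `--model` at startup (the model name below is illustrative; use one of the names listed in your own `configs/models.yaml`):

```bash
# launch the command-line interface with an alternative model
(invokeai) ~/InvokeAI$ python3 scripts/invoke.py --model stable-diffusion-1.4
```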
8. Subsequently, to relaunch the script, be sure to run `conda activate invokeai` (step 5, second command), enter the `InvokeAI` directory, and then launch the invoke script (step 7), as sketched below. If you forget to activate the `invokeai` environment, the script will fail with multiple `ModuleNotFound` errors.
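A typical relaunch session therefore looks like this (assuming you cloned the repository into your home directory; adjust the path if it lives elsewhere):

```bash
(base) ~$ cd ~/InvokeAI
(base) ~/InvokeAI$ conda activate invokeai
(invokeai) ~/InvokeAI$ python3 scripts/invoke.py
```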
## Updating to newer versions of the script
This distribution is changing rapidly. If you used the `git clone` method (step 5) to download the InvokeAI directory, then to update to the latest and greatest version, launch the Anaconda window, enter `InvokeAI`, and type:
```bash
(invokeai) ~/InvokeAI$ git pull
(invokeai) ~/InvokeAI$ rm -rf src # prevents conda freezing errors
(invokeai) ~/InvokeAI$ conda env update -f environment.yml
```


@ -19,24 +19,9 @@ an issue on Github and we will do our best to help.
## Installation
Open Terminal and run the following commands one at a time, reading the comments and taking care to run the appropriate command for your Mac's architecture (Intel or M1). The large model weights files are downloaded for you later in the process by the `preload_models.py` script, so there is nothing to fetch by hand first.
!!! todo "Homebrew"
First you will install the `brew` package manager. Skip this step if brew is already installed.
```bash title="install brew (and Xcode command line tools)"
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
@ -100,25 +85,6 @@ the appropriate command for your Mac's architecture (Intel or M1).
cd InvokeAI
```
!!! todo "Wait until the checkpoint-file download finished, then proceed"
We will leave the big checkpoint wherever you stashed it for long-term storage,
and make a link to it from the repo's folder. This allows you to use it for
other repos, or if you need to delete Invoke AI, you won't have to download it again.
```{.bash .annotate}
# Make the directory in the repo for the symlink
mkdir -p models/ldm/stable-diffusion-v1/
# This is the folder where you put the checkpoint file `sd-v1-4.ckpt`
PATH_TO_CKPT="$HOME/Downloads" # (1)!
# Create a link to the checkpoint
ln -s "$PATH_TO_CKPT/sd-v1-4.ckpt" models/ldm/stable-diffusion-v1/model.ckpt
```
1. replace `$HOME/Downloads` with the Location where you actually stored the Checkppoint (`sd-v1-4.ckpt`)
!!! todo "Create the environment & install packages"
=== "M1 Mac"
@ -137,25 +103,40 @@ the appropriate command for your Mac's architecture (Intel or M1).
# Activate the environment (you need to do this every time you want to run SD)
conda activate invokeai
# The original scripts should work as well.
(invokeai) python scripts/orig_scripts/txt2img.py \
--prompt "a photograph of an astronaut riding a horse" \
--plms
```
!!! info
`export PIP_EXISTS_ACTION=w` is a precaution to fix `conda env create -f environment-mac.yml` never finishing in some situations. It isn't required, but it won't hurt.
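If you do run into that hang, the workaround looks like this (run it in the same shell, before creating the environment):

```bash
# "w" tells pip to wipe a pre-existing source directory instead of stopping to ask
export PIP_EXISTS_ACTION=w
conda env create -f environment-mac.yml
```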
!!! todo "Download the model weight files"
The `preload_models.py` script downloads and installs the model weight
files for you. It will lead you through the process of getting a Hugging Face
account, accepting the Stable Diffusion model weight license agreement, and
creating a download token:
```bash
# This will take some time, depending on the speed of your internet connection
# and will consume about 10GB of space
(invokeai) python scripts/preload_models.py
```
!! todo "Run InvokeAI!"
# Command-line interface
(invokeai) python scripts/invoke.py
# or run the web interface on localhost:9090!
(invokeai) python scripts/invoke.py --web
# or run the web interface on your machine's network interface!
(invokeai) python scripts/invoke.py --web --host 0.0.0.0
To use an alternative model you may invoke the `!switch` command in
the CLI, or pass `--model <model_name>` during `invoke.py` launch for
either the CLI or the Web UI. See [Command Line
Client](../features/CLI.md#model-selection-and-importation). The
model names are defined in `configs/models.yaml`.
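For instance, switching models from inside the running CLI looks like this (the model name is illustrative; use one defined in your `configs/models.yaml`, and `-s` simply sets the step count):

```bash
# inside the invoke.py command-line interface
invoke> !switch stable-diffusion-1.5
invoke> "a photograph of an astronaut riding a horse" -s 50
```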
---
## Common problems


@ -69,40 +69,42 @@ in the wiki
environment file isn't specified, conda will default to `environment.yml`. You will need
to provide the `-f` option if you wish to load a different environment file at any point.
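If you do need to point conda at an environment file explicitly, the call looks like this (shown with the default file name mentioned above):

```batch
conda env update -f environment.yml
```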
7. Load the big stable diffusion weights files and a couple of smaller machine-learning models:

```batch
python scripts\preload_models.py
```
!!! note
This script will lead you through the process of creating an account on Hugging Face,
accepting the terms and conditions of the Stable Diffusion model license, and
obtaining an access token for downloading. It will then download and install the
weights files for you.
Please see [INSTALLING_MODELS.md](../features/INSTALLING_MODELS.md) for a manual process that accomplishes the same thing.
8. Start generating images!

```batch
REM Command-line interface
python scripts\invoke.py

REM or run the web interface on localhost:9090!
python scripts\invoke.py --web

REM or run the web interface on your machine's network interface!
python scripts\invoke.py --web --host 0.0.0.0
```

To use an alternative model you may invoke the `!switch` command in
the CLI, or pass `--model <model_name>` during `invoke.py` launch for
either the CLI or the Web UI. See [Command Line
Client](../features/CLI.md#model-selection-and-importation). The
model names are defined in `configs/models.yaml`.
9. Subsequently, to relaunch the script, first activate the Anaconda command window (step 3), enter the InvokeAI directory (step 5, `cd \path\to\InvokeAI`), run `conda activate invokeai` (step 6b), and then launch the invoke script (step 8), as sketched below.
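Concretely, a relaunch from a fresh Anaconda Prompt looks like this (substitute the actual path where you placed the InvokeAI folder):

```batch
cd \path\to\InvokeAI
conda activate invokeai
python scripts\invoke.py
```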
!!! tip "Tildebyte has written an alternative"