Documentation updates for 2.2 release

This commit is contained in:
Lincoln Stein 2022-12-01 12:10:47 +00:00
parent bde456f9fa
commit eba31e7caf
12 changed files with 42 additions and 77 deletions

View File

@@ -171,12 +171,12 @@ title: Changelog
- Integrate sd-v1-5 model into test matrix (easily expandable), remove
unnecessary caches by @mauwii in
https://github.com/invoke-ai/InvokeAI/pull/1293
- add --no-interactive to preload_models step by @mauwii in
- add --no-interactive to configure_invokeai step by @mauwii in
https://github.com/invoke-ai/InvokeAI/pull/1302
- 1-click installer and updater. Uses micromamba to install git and conda into a
contained environment (if necessary) before running the normal installation
script by @cmdr2 in https://github.com/invoke-ai/InvokeAI/pull/1253
- preload_models.py script downloads the weight files by @lstein in
- configure_invokeai.py script downloads the weight files by @lstein in
https://github.com/invoke-ai/InvokeAI/pull/1290
## v2.0.1 <small>(13 October 2022)</small>

View File

@@ -120,7 +120,7 @@ A number of caveats:
(`--iterations`) argument.
3. Your results will be _much_ better if you use the `inpaint-1.5` model
released by runwayML and installed by default by `scripts/preload_models.py`.
released by runwayML and installed by default by `scripts/configure_invokeai.py`.
This model was trained specifically to harmoniously fill in image gaps. The
standard model will work as well, but you may notice color discontinuities at
the border.

View File

@@ -28,11 +28,11 @@ should "just work" without further intervention. Simply pass the `--upscale`
the popup in the Web GUI.
**GFPGAN** requires a series of downloadable model files to work. These are
loaded when you run `scripts/preload_models.py`. If GFPGAN is failing with an
loaded when you run `scripts/configure_invokeai.py`. If GFPGAN is failing with an
error, please run the following from the InvokeAI directory:
```bash
python scripts/preload_models.py
python scripts/configure_invokeai.py
```
If you do not run this script in advance, the GFPGAN module will attempt to
@@ -110,7 +110,7 @@ This repo also allows you to perform face restoration using
[CodeFormer](https://github.com/sczhou/CodeFormer).
In order to setup CodeFormer to work, you need to download the models like with
GFPGAN. You can do this either by running `preload_models.py` or by manually
GFPGAN. You can do this either by running `configure_invokeai.py` or by manually
downloading the
[model file](https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/codeformer.pth)
and saving it to `ldm/invoke/restoration/codeformer/weights` folder.
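For reference, a manual download might look like the sketch below; the model URL and target folder are taken from the paragraph above, while the use of `curl` is only an illustration, not part of the official instructions.
```bash
# Illustrative only: place the CodeFormer weights where InvokeAI expects them
mkdir -p ldm/invoke/restoration/codeformer/weights
curl -L -o ldm/invoke/restoration/codeformer/weights/codeformer.pth \
  https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/codeformer.pth
```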

View File

@@ -123,7 +123,8 @@ You wil need one of the following:
- [The InvokeAI Web Interface](features/WEB.md)
- [WebGUI hotkey reference guide](features/WEBUIHOTKEYS.md)
<!-- this link does not exist - [WebGUI Unified Canvas for Img2Img, inpainting and outpainting](features/UNIFIED_CANVAS.md) -->
- [WebGUI Unified Canvas for Img2Img, inpainting and outpainting](features/UNIFIED_CANVAS.md)
<!-- seperator -->
- [The Command Line Interface](features/CLI.md)
- [Image2Image](features/IMG2IMG.md)
- [Inpainting](features/INPAINTING.md)
@@ -136,6 +137,7 @@ You wil need one of the following:
- [Prompt Engineering](features/PROMPTS.md)
<!-- seperator -->
- Miscellaneous
- [NSFW Checker](features/NSFW.md)
- [Embiggen upscaling](features/EMBIGGEN.md)
- [Other](features/OTHER.md)
@@ -160,7 +162,7 @@ You wil need one of the following:
- You can now load
[multiple models and switch among them quickly](https://docs.google.com/presentation/d/1WywGA1rny7bpFh7CLSdTr4nNpVKdlUeT0Bj0jCsILyU/edit?usp=sharing)
without leaving the CLI.
- The installation process (via `scripts/preload_models.py`) now lets you select
- The installation process (via `scripts/configure_invokeai.py`) now lets you select
among several popular
[Stable Diffusion models](https://invoke-ai.github.io/InvokeAI/installation/INSTALLING_MODELS/)
and downloads and installs them on your behalf. Among other models, this

View File

@@ -56,7 +56,7 @@ unofficial Stable Diffusion models and where they can be obtained.
There are three ways to install weights files:
1. During InvokeAI installation, the `preload_models.py` script can download
1. During InvokeAI installation, the `configure_invokeai.py` script can download
them for you.
2. You can use the command-line interface (CLI) to import, configure and modify
@@ -65,13 +65,13 @@ There are three ways to install weights files:
3. You can download the files manually and add the appropriate entries to
`models.yaml`.
### Installation via `preload_models.py`
### Installation via `configure_invokeai.py`
This is the most automatic way. Run `scripts/preload_models.py` from the
This is the most automatic way. Run `scripts/configure_invokeai.py` from the
console. It will ask you to select which models to download and lead you through
the steps of setting up a Hugging Face account if you haven't done so already.
To start, run `python scripts/preload_models.py` from within the InvokeAI
To start, run `python scripts/configure_invokeai.py` from within the InvokeAI
directory:
!!! example ""
@@ -238,7 +238,7 @@ arabian-nights-1.0:
| arabian-nights-1.0 | This is the name of the model that you will refer to from within the CLI and the WebGUI when you need to load and use the model. |
| description | Any description that you want to add to the model to remind you what it is. |
| weights | Relative path to the .ckpt weights file for this model. |
| config | This is the confusingly-named configuration file for the model itself. Use `./configs/stable-diffusion/v1-inference.yaml` unless the model happens to need a custom configuration, in which case the place you downloaded it from will tell you what to use instead. For example, the runwayML custom inpainting model requires the file `configs/stable-diffusion/v1-inpainting-inference.yaml`. This is already included in the InvokeAI distribution and is configured automatically for you by the `preload_models.py` script. |
| config | This is the confusingly-named configuration file for the model itself. Use `./configs/stable-diffusion/v1-inference.yaml` unless the model happens to need a custom configuration, in which case the place you downloaded it from will tell you what to use instead. For example, the runwayML custom inpainting model requires the file `configs/stable-diffusion/v1-inpainting-inference.yaml`. This is already included in the InvokeAI distribution and is configured automatically for you by the `configure_invokeai.py` script. |
| vae | If you want to add a VAE file to the model, then enter its path here. |
| width, height | This is the width and height of the images used to train the model. Currently they are always 512 and 512. |
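Putting the fields above together, a stanza added to `models.yaml` might look like the following sketch; the description and weights path are placeholders rather than real files, and you should adjust the location of `models.yaml` to match your install.
```bash
# Hypothetical sketch only: append a model stanza to models.yaml
# (the description and weights path below are placeholders)
cat >> models.yaml <<'EOF'
arabian-nights-1.0:
  description: A model fine-tuned on Arabian Nights imagery
  weights: models/ldm/stable-diffusion-v1/arabian-nights-1.0.ckpt
  config: ./configs/stable-diffusion/v1-inference.yaml
  width: 512
  height: 512
EOF
```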

View File

@@ -2,7 +2,7 @@
title: Running InvokeAI on Google Colab using a Jupyter Notebook
---
# THIS NEEDS TO BE FLESHED OUT
# THIS DOCUMENTATION IS UNFINISHED - VOLUNTEERS GRATEFULLY ACCEPTED
## Introduction
@@ -22,6 +22,4 @@ start running the cells one-by-one.
### Updating the stable version
### Updating to the development version
## Troubleshooting

View File

@@ -155,10 +155,10 @@ command-line completion.
process for this is described [here](INSTALLING_MODELS.md).
```bash
python scripts/preload_models.py
python scripts/configure_invokeai.py
```
The script `preload_models.py` will interactively guide you through the
The script `configure_invokeai.py` will interactively guide you through the
process of downloading and installing the weights files needed for InvokeAI.
Note that the main Stable Diffusion weights file is protected by a license
agreement that you have to agree to. The script will list the steps you need
@@ -220,7 +220,7 @@ greatest version, launch the Anaconda window, enter `InvokeAI` and type:
```bash
git pull
conda env update
python scripts/preload_models.py --no-interactive #optional
python scripts/configure_invokeai.py --no-interactive #optional
```
This will bring your local copy into sync with the remote one. The last step may
@@ -359,7 +359,7 @@ brew install llvm
If brew config has Clang installed, update to the latest llvm and try creating the environment again.
#### `preload_models.py` or `invoke.py` crashes at an early stage
#### `configure_invokeai.py` or `invoke.py` crashes at an early stage
This is usually due to an incomplete or corrupted Conda install. Make sure you
have linked to the correct environment file and run `conda update` again.
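A possible recovery sequence is sketched below; it assumes the environment is named `invokeai`, as shown in the prompts elsewhere in these docs.
```bash
# Refresh the Conda environment, re-activate it, then retry the script
conda env update
conda activate invokeai
python scripts/configure_invokeai.py
```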

View File

@@ -67,7 +67,7 @@ off the process.
InvokeAI code and install it along with its dependencies.
6. After installation completes, the installer will launch a script called
`preload_models.py`, which will guide you through the first-time process of
`configure_invokeai.py`, which will guide you through the first-time process of
selecting one or more Stable Diffusion model weights files, downloading and
configuring them.
@@ -119,31 +119,15 @@ This section describes how to update InvokeAI to new versions of the software.
This distribution is changing rapidly, and we add new features on a daily basis.
To update to the latest released version (recommended), run the `update.sh`
(Linux/Mac) or `update.bat` (Windows) scripts. This will fetch the latest
release and re-run the `preload_models` script to download any updated models
release and re-run the `configure_invokeai` script to download any updated models
files that may be needed. You can also use this to add additional models that
you did not select at installation time.
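For example, on Linux or Mac the update can be started from the directory you installed InvokeAI into; the Windows equivalent is `update.bat`.
```bash
# Fetch the latest release and re-run the configure_invokeai step to refresh models
./update.sh
```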
### Updating to the development version
There may be times that there is a feature in the `development` branch of
InvokeAI that you'd like to take advantage of. Or perhaps there is a branch that
corrects an annoying bug. To do this, you will use the developer's console.
From within the InvokeAI directory, run the command `invoke.sh` (Linux/Mac) or
`invoke.bat` (Windows) and select option (3) to open the developer's console.
Then run the following commands to check out and update the `development` branch:
```bash
git checkout development
git pull
conda env update
```
You can now close the developer console and run `invoke` as before. If you get
complaints about missing models, then you may need to do the additional step of
running `preload_models.py`. This happens relatively infrequently. To do this,
running `configure_invokeai.py`. This happens relatively infrequently. To do this,
simply open up the developer's console again and type
`python scripts/preload_models.py`.
`python scripts/configure_invokeai.py`.
## Troubleshooting

View File

@@ -5,37 +5,18 @@ title: Overview
We offer several ways to install InvokeAI, each one suited to your
experience and preferences.
1. [InvokeAI installer](INSTALL_INVOKE.md)
1. [Source code installer](INSTALL_SOURCE.md)
This is an installer script that installs InvokeAI and all the
third party libraries it depends on. When a new version of
InvokeAI is released, you will download and reinstall the new
version.
This is a script that will install Python, the Anaconda ("conda")
package manager, all of InvokeAI's essential third party
libraries and InvokeAI itself. It includes access to a "developer
console" which will help us debug problems with you and give you
access to experimental features.
This installer is designed for people who want the system to "just
work", don't have an interest in tinkering with it, and do not
care about upgrading to unreleased experimental features.
This method is recommended for individuals who are installing
InvokeAI for the first time.
**Important Caveats**
- This script does not support AMD GPUs. For Linux AMD support,
please use the manual or source code installer methods.
- This script has difficulty on some Macintosh machines
that have previously been used for Python development due to
conflicting development tools versions. Mac developers may wish
to try the source code installer or one of the manual methods instead.
2. [Source code installer](INSTALL_SOURCE.md)
This is a script that will install InvokeAI and all its essential
third party libraries. In contrast to the previous installer, it
includes access to a "developer console" which will allow you to
access experimental features on the development branch.
This method is recommended for individuals who wish to stay
on the cutting edge of InvokeAI development and are not afraid
of occasional breakage.
3. [Manual Installation](INSTALL_MANUAL.md)
2. [Manual Installation](INSTALL_MANUAL.md)
In this method you will manually run the commands needed to install
InvokeAI and its dependencies. We offer two recipes: one suited to
@@ -47,14 +28,14 @@ experience and preferences.
the cutting edge of future InvokeAI development and is willing to put
up with occasional glitches and breakage.
4. [Docker Installation](INSTALL_DOCKER.md)
3. [Docker Installation](INSTALL_DOCKER.md)
We also offer a method for creating Docker containers containing
InvokeAI and its dependencies. This method is recommended for
individuals who have experience with Docker containers and understand
the pluses and minuses of a container-based install.
5. [Jupyter Notebooks Installation](INSTALL_JUPYTER.md)
4. [Jupyter Notebooks Installation](INSTALL_JUPYTER.md)
This method is suitable for running InvokeAI on a Google Colab
account. It is recommended for individuals who have previously

View File

@@ -69,7 +69,7 @@ title: Manual Installation, Linux
machine-learning models:
```bash
(invokeai) ~/InvokeAI$ python3 scripts/preload_models.py
(invokeai) ~/InvokeAI$ python3 scripts/configure_invokeai.py
```
!!! note

View File

@@ -111,7 +111,7 @@ will do our best to help.
!!! todo "Download the model weight files"
The `preload_models.py` script downloads and installs the model weight
The `configure_invokeai.py` script downloads and installs the model weight
files for you. It will lead you through the process of getting a Hugging Face
account, accepting the Stable Diffusion model weight license agreement, and
creating a download token:
@@ -119,7 +119,7 @@ will do our best to help.
```bash
# This will take some time, depending on the speed of your internet connection
# and will consume about 10GB of space
python scripts/preload_models.py
python scripts/configure_invokeai.py
```
!!! todo "Run InvokeAI!"
@@ -220,8 +220,8 @@ There are several causes of these errors:
with "(invokeai)" then you activated it. If it begins with "(base)" or
something else you haven't.
2. You might've run `./scripts/preload_models.py` or `./scripts/invoke.py`
instead of `python ./scripts/preload_models.py` or
2. You might've run `./scripts/configure_invokeai.py` or `./scripts/invoke.py`
instead of `python ./scripts/configure_invokeai.py` or
`python ./scripts/invoke.py`. The cause of this error is long so it's below.
<!-- I could not find out where the error is, otherwise would have marked it as a footnote -->
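As a quick sanity check, assuming the Conda environment is named `invokeai` as the prompts above suggest:
```bash
# Activate the environment; the prompt should change to "(invokeai)"
conda activate invokeai
# Launch scripts through the Python interpreter rather than executing them directly
python ./scripts/invoke.py
```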
@@ -359,7 +359,7 @@ python ./scripts/txt2img.py \
### OSError: Can't load tokenizer for 'openai/clip-vit-large-patch14'
```bash
python scripts/preload_models.py
python scripts/configure_invokeai.py
```
---

View File

@@ -65,7 +65,7 @@ Note that you will need NVIDIA drivers, Python 3.10, and Git installed beforehand
7. Load the big stable diffusion weights files and a couple of smaller machine-learning models:
```bash
python scripts/preload_models.py
python scripts/configure_invokeai.py
```
!!! note