Merge branch 'main' into patch-1

Lincoln Stein (committed by GitHub)
2023-01-16 08:38:14 -05:00
99 changed files with 9477 additions and 2127 deletions


@@ -4,6 +4,97 @@ title: Changelog
# :octicons-log-16: **Changelog**
## v2.3.0 <small>(15 January 2023)</small>
**Transition to diffusers**
Version 2.3 provides support for both the traditional `.ckpt` weight
checkpoint files and the HuggingFace `diffusers` format. This
introduces several changes you should know about.
1. The `models.yaml` format has been updated. There are now two
different types of configuration stanzas. The traditional ckpt
stanza will look like this, with a `format` of `ckpt` and a
`weights` field that points to the absolute or ROOTDIR-relative
location of the ckpt file.
```
inpainting-1.5:
description: RunwayML SD 1.5 model optimized for inpainting (4.27 GB)
repo_id: runwayml/stable-diffusion-inpainting
format: ckpt
width: 512
height: 512
weights: models/ldm/stable-diffusion-v1/sd-v1-5-inpainting.ckpt
config: configs/stable-diffusion/v1-inpainting-inference.yaml
vae: models/ldm/stable-diffusion-v1/vae-ft-mse-840000-ema-pruned.ckpt
```
A configuration stanza for a diffusers model hosted at HuggingFace will look like this,
with a `format` of `diffusers` and a `repo_id` that points to the
repository ID of the model on HuggingFace:
```
stable-diffusion-2.1:
description: Stable Diffusion version 2.1 diffusers model (5.21 GB)
repo_id: stabilityai/stable-diffusion-2-1
format: diffusers
```
A configuration stanza for a diffusers model stored locally should
look like this, with a `format` of `diffusers`, but a `path` field
that points at the directory that contains `model_index.json`:
```
waifu-diffusion:
description: Latest waifu diffusion 1.4
format: diffusers
path: models/diffusers/hakurei-waifu-diffusion-1.4
```
2. The format of the models directory has changed to mimic the
HuggingFace cache directory. By default, diffusers models are
now automatically downloaded and retrieved from the directory
`ROOTDIR/models/diffusers`, while other models are stored in
the directory `ROOTDIR/models/hub`. This organization is the
same as that used by HuggingFace for its cache management.
This allows you to share diffusers and ckpt model files easily with
other machine learning applications that use the HuggingFace
libraries. To do this, set the environment variable `HF_HOME`
before starting up InvokeAI to tell it what directory to
cache models in. To tell InvokeAI to use the standard HuggingFace
cache directory, you would set `HF_HOME` like this (Linux/Mac):
`export HF_HOME=~/.cache/huggingface`
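For example, a minimal sketch of making this setting persistent (assuming a
`bash` shell on Linux or Mac; adjust the profile file for your setup):
```bash
# Persist HF_HOME so InvokeAI and other HuggingFace-based tools
# share one model cache (bash assumed; use ~/.zshrc for zsh).
echo 'export HF_HOME=~/.cache/huggingface' >> ~/.bashrc
source ~/.bashrc   # reload the profile in the current shell
```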
3. If you upgrade to InvokeAI 2.3.* from an earlier version, there
will be a one-time migration from the old models directory format
to the new one. You will see a message about this the first time
you start `invoke.py`.
4. Both the front end and back end of the model manager have been
rewritten to accommodate diffusers. You can import models using
their local file path, using their URLs, or their HuggingFace
repo_ids. On the command line, all these syntaxes work:
```
!import_model stabilityai/stable-diffusion-2-1-base
!import_model /opt/sd-models/sd-1.4.ckpt
!import_model https://huggingface.co/Fictiverse/Stable_Diffusion_PaperCut_Model/blob/main/PaperCut_v1.ckpt
```
**KNOWN BUGS (15 January 2023)**
1. On CUDA systems, the 768-pixel stable-diffusion-2.0 and
stable-diffusion-2.1 models can only be run as `diffusers` models
when the `xformers` library is installed and configured (see the
sketch below). Without `xformers`, InvokeAI returns black images.
2. Inpainting and outpainting have regressed in quality.
Both these issues are being actively worked on.
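As a workaround for the first issue, here is a minimal sketch of installing
`xformers` into the active virtual environment (this assumes a CUDA system
and that a prebuilt wheel matching your PyTorch/CUDA build is available;
otherwise pip will attempt a source build):
```bash
# Install xformers into the same environment that runs InvokeAI
pip install xformers
# Quick sanity check that the package imports cleanly
python -c "import xformers"
```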
## v2.2.4 <small>(11 December 2022)</small>
**the `invokeai` directory**


@@ -12,17 +12,18 @@ title: Installing Manually
## Introduction
You have two choices for manual installation.
The [first one](#pip-Install) uses basic Python virtual environment (`venv`)
commands and the `pip` package manager.
The [second one](#Conda-method) uses the Anaconda3 package manager (`conda`).
Both methods require you to enter commands on the terminal, also known as the
"console".
Note that the `conda` installation method is currently deprecated and will
stop being supported at some point in the future.
On Windows systems, you are encouraged to install and use
[PowerShell](https://learn.microsoft.com/en-us/powershell/scripting/install/installing-powershell-on-windows?view=powershell-7.3),
which provides compatibility with Linux and Mac shells and nice
features such as command-line completion.
@@ -37,7 +38,7 @@ manager, please follow these steps:
```bash
python -V
```
2. Clone the [InvokeAI](https://github.com/invoke-ai/InvokeAI) source code from
GitHub:
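A minimal sketch of this step, assuming the repository URL linked above and
the default branch:
```bash
# Clone the InvokeAI source code and enter the checkout
git clone https://github.com/invoke-ai/InvokeAI.git
cd InvokeAI
```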
@@ -52,15 +53,15 @@ manager, please follow these steps:
environment named `invokeai`:
```bash
python -m venv invokeai
source invokeai/bin/activate
```
4. Make sure that pip is installed in your virtual environment and up to date:
```bash
python -m ensurepip --upgrade
python -m pip install --upgrade pip
```
5. Pick the correct `requirements*.txt` file for your hardware and operating
@@ -199,20 +200,20 @@ manager, please follow these steps:
You can permanently set the location of the runtime directory by setting the environment variable `INVOKEAI_ROOT` to the path of the directory.
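For example, a minimal sketch of setting it persistently (assuming a `bash`
shell and a hypothetical runtime directory at `~/invokeai`):
```bash
# Persist INVOKEAI_ROOT (the path is an example; use your own runtime directory)
echo 'export INVOKEAI_ROOT=~/invokeai' >> ~/.bashrc
source ~/.bashrc
```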
9. Render away!
Browse the [features](../features/CLI.md) section to learn about all the things you
can do with InvokeAI.
Note that some GPUs are slow to warm up. In particular, when using an AMD
card with the ROCm driver, you may have to wait for over a minute the first
time you try to generate an image. Fortunately, after the warm-up period
rendering will be fast.
10. Subsequently, to relaunch the script, be sure to enter the `InvokeAI` directory,
activate the virtual environment, and then launch the `invoke.py` script.
If you forget to activate the virtual environment, the script will fail
with multiple `ModuleNotFoundError` errors.
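A minimal relaunch sketch, assuming the virtual environment was created inside
the `InvokeAI` checkout as in step 3 and that the launch script lives at
`scripts/invoke.py`:
```bash
cd InvokeAI                    # enter the source checkout
source invokeai/bin/activate   # activate the virtual environment
python scripts/invoke.py       # relaunch the script
```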
!!! tip