mirror of https://github.com/invoke-ai/InvokeAI
synced 2024-08-30 20:32:17 +00:00

Merge branch 'release-candidate-2-1-3' of github.com:/invoke-ai/InvokeAI into release-candidate-2-1-3

commit 061fb4ef00
@@ -19,13 +19,13 @@ tree on a hill with a river, nature photograph, national geographic -I./test-pic

This will take the original image shown here:

<figure markdown>

![original-image](https://user-images.githubusercontent.com/50542132/193946000-c42a96d8-5a74-4f8a-b4c3-5213e6cadcce.png){ width=320 }

</figure>

and generate a new image based on it as shown here:

<figure markdown>

![generated-image](https://user-images.githubusercontent.com/111189/194135515-53d4c060-e994-4016-8121-7c685e281ac9.png){ width=320 }

</figure>

The `--init_img` (`-I`) option gives the path to the seed picture. `--strength`
@@ -45,15 +45,16 @@ Note that the prompt makes a big difference. For example, this slight variation

on the prompt produces a very different image:

<figure markdown>

![](https://user-images.githubusercontent.com/111189/194135220-16b62181-b60c-4248-8989-4834a8fd7fbd.png){ width=320 }

<caption markdown>photograph of a tree on a hill with a river</caption>

</figure>

!!! tip

    When designing prompts, think about how the images scraped from the internet were
    captioned. Very few photographs will be labeled "photograph" or "photorealistic."
    They will, however, be captioned with the publication, photographer, camera model,
    or film settings.

If the initial image contains transparent regions, then Stable Diffusion will
only draw within the transparent regions, a process called
@@ -61,17 +62,17 @@ only draw within the transparent regions, a process called

However, for this to work correctly, the color information underneath the
transparent regions needs to be preserved, not erased.

!!! warning "**IMPORTANT ISSUE**"

    `img2img` does not work properly on initial images smaller
    than 512x512. Please scale your image to at least 512x512 before using it.
    Larger images are not a problem, but may run out of VRAM on your GPU card. To
    fix this, use the `--fit` option, which downscales the initial image to fit
    within the box specified by width x height:

    ```
    tree on a hill with a river, national geographic -I./test-pictures/big-sketch.png -H512 -W512 --fit
    ```

## How does it actually work, though?
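If you are unsure whether an init image clears the 512x512 minimum, its dimensions can be read directly from the PNG header, with no imaging tools required. The sketch below is illustrative only: the `png_size` helper and the synthesized header bytes are our own stand-ins, not part of InvokeAI.

```shell
#!/bin/sh
# png_size FILE prints "WIDTH HEIGHT" by decoding the big-endian 32-bit
# width and height fields of the IHDR chunk (bytes 16-23 of any PNG file).
png_size() {
    od -A n -t u1 -j 16 -N 8 "$1" |
        awk '{ print $1*16777216 + $2*65536 + $3*256 + $4, $5*16777216 + $6*65536 + $7*256 + $8 }'
}

# Stand-in for a real init image: the signature and IHDR prefix of a 384x384 PNG.
printf '\211PNG\r\n\032\n\000\000\000\015IHDR\000\000\001\200\000\000\001\200' > /tmp/init.png

set -- $(png_size /tmp/init.png)
if [ "$1" -lt 512 ] || [ "$2" -lt 512 ]; then
    echo "init image is ${1}x${2}: scale it up to at least 512x512 before img2img"
fi
```

For the 384x384 stand-in this prints the scale-up warning; a 512x512-or-larger image produces no output.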
@@ -87,7 +88,7 @@ from a prompt. If the step count is 10, then the "latent space" (Stable

Diffusion's internal representation of the image) for the prompt "fire" with
seed `1592514025` develops something like this:

```bash
invoke> "fire" -s10 -W384 -H384 -S1592514025
```
@@ -133,9 +134,9 @@ Notice how much more fuzzy the starting image is for strength `0.7` compared to

|                             | strength = 0.7                                                | strength = 0.4                                                |
| --------------------------- | ------------------------------------------------------------- | ------------------------------------------------------------- |
| initial image that SD sees  | ![step-0](../assets/img2img/000032.step-0.png)                 | ![step-0](../assets/img2img/000030.step-0.png)                 |
| steps argument to `invoke>` | `-s10`                                                         | `-s10`                                                         |
| steps actually taken        | `7`                                                            | `4`                                                            |
| latent space at each step   | ![gravity32](../assets/img2img/000032.steps.gravity.png)       | ![gravity30](../assets/img2img/000030.steps.gravity.png)       |
| output                      | ![000032.1592514025](../assets/img2img/000032.1592514025.png)  | ![000030.1592514025](../assets/img2img/000030.1592514025.png)  |
@@ -150,7 +151,7 @@ If you want to try this out yourself, all of these are using a seed of

`1592514025` with a width/height of `384`, step count `10`, the default sampler
(`k_lms`), and the single-word prompt `"fire"`:

```bash
invoke> "fire" -s10 -W384 -H384 -S1592514025 -I /tmp/fire-drawing.png --strength 0.7
```
@@ -170,7 +171,7 @@ give each generation 20 steps.

Here's strength `0.4` (note step count `50`, which is `20 ÷ 0.4` to make sure SD
does `20` steps from my image):

```bash
invoke> "fire" -s50 -W384 -H384 -S1592514025 -I /tmp/fire-drawing.png -f 0.4
```
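The step-count arithmetic above generalizes: to get N effective steps applied to your image at strength f, pass `-s` the value N ÷ f. A small shell sketch of that calculation (the `img2img_steps` helper is our own illustration, not part of the invoke CLI):

```shell
#!/bin/sh
# img2img_steps DESIRED_STEPS STRENGTH prints the -s value to pass to invoke>
# so that roughly DESIRED_STEPS denoising steps are applied to the init image.
img2img_steps() {
    awk -v n="$1" -v f="$2" 'BEGIN { printf "%d\n", (n / f) + 0.5 }'
}

img2img_steps 20 0.4   # -> 50, matching the -s50 example above
img2img_steps 20 0.7   # -> 29
```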
@@ -4,12 +4,17 @@ title: Docker

# :fontawesome-brands-docker: Docker

!!! warning "For end users"

    We highly recommend installing InvokeAI locally using
    [these instructions](index.md).

!!! tip "For developers"

    For container-related development tasks or for enabling easy
    deployment to other environments (on-premises or cloud), follow these
    instructions.

    For general use, install locally to leverage your machine's GPU.

## Why containers?
@@ -94,11 +99,11 @@ After the build process is done, you can run the container via the provided

./docker-build/run.sh
```

When used without arguments, the container will start the webserver and provide
you with a link to open it. If you want to pass other parameters, you can do so
as well.

!!! example ""

    ```bash
    docker-build/run.sh --from_file tests/validate_pr_prompt.txt
    ```
@@ -112,7 +117,7 @@ also do so.

!!! warning "Deprecated"

    From here on you will find the previous Docker docs, which still
    provide some useful information.

## Usage (time to have fun)
@@ -14,8 +14,7 @@ download the notebook from the link above and load it up in VSCode

(with the appropriate extensions installed)/Jupyter/JupyterLab and
start running the cells one-by-one.

!!! note "You will need NVIDIA drivers, Python 3.10, and Git installed beforehand"

## Walkthrough
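Before opening the notebook, a `command -v` loop is a quick way to confirm the Git and Python prerequisites. This sketch assumes the interpreter is on PATH as `python3` and does not attempt to verify the NVIDIA driver:

```shell
#!/bin/sh
# Report whether each prerequisite tool is on PATH.
for tool in git python3; do
    if command -v "$tool" >/dev/null 2>&1; then
        echo "$tool: found"
    else
        echo "$tool: MISSING - install it before running the notebook"
    fi
done
```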
@@ -2,51 +2,54 @@

title: Manual Installation
---

<figure markdown>

# :fontawesome-brands-linux: Linux | :fontawesome-brands-apple: macOS | :fontawesome-brands-windows: Windows

</figure>

!!! warning "This is for advanced users"

    who are already experienced with using conda or pip

## Introduction

You have two choices for manual installation, the [first one](#Conda_method)
based on the Anaconda3 package manager (`conda`), and
[a second one](#PIP_method) which uses basic Python virtual environment (`venv`)
commands and the PIP package manager. Both methods require you to enter commands
on the terminal, also known as the "console".

On Windows systems you are encouraged to install and use
[PowerShell](https://learn.microsoft.com/en-us/powershell/scripting/install/installing-powershell-on-windows?view=powershell-7.3),
which provides compatibility with Linux and Mac shells and nice features such as
command-line completion.

### Conda method

1. Check that your system meets the
   [hardware requirements](index.md#Hardware_Requirements) and has the
   appropriate GPU drivers installed. In particular, if you are a Linux user
   with an AMD GPU installed, you may need to install the
   [ROCm driver](https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html).

    InvokeAI does not yet support Windows machines with AMD GPUs due to the lack
    of ROCm driver support on this platform.

    To confirm that the appropriate drivers are installed, run `nvidia-smi` on
    NVIDIA/CUDA systems, and `rocm-smi` on AMD systems. These should return
    information about the installed video card.

    Macintosh users with MPS acceleration, or anybody with a CPU-only system,
    can skip this step.

2. You will need to install Anaconda3 and Git if they are not already
   available. Use your operating system's preferred package manager, or
   download the installers manually. You can find them here:

    - [Anaconda3](https://www.anaconda.com/)
    - [git](https://git-scm.com/downloads)

3. Clone the [InvokeAI](https://github.com/invoke-ai/InvokeAI) source code from
   GitHub:

    ```bash
    git clone https://github.com/invoke-ai/InvokeAI.git
    ```
@@ -55,122 +58,160 @@ download installers from the following URLs

    This will create an `InvokeAI` folder where you will follow the rest of the
    steps.

4. Enter the newly-created InvokeAI folder:

    ```bash
    cd InvokeAI
    ```

    From this step forward make sure that you are working in the InvokeAI
    directory!

5. Select the appropriate environment file:

    We have created a series of environment files suited for different operating
    systems and GPU hardware. They are located in the
    `environments-and-requirements` directory:

    <figure markdown>

    | filename                 | OS                              |
    | :----------------------: | :-----------------------------: |
    | environment-lin-amd.yml  | Linux with an AMD (ROCm) GPU    |
    | environment-lin-cuda.yml | Linux with an NVIDIA CUDA GPU   |
    | environment-mac.yml      | Macintosh                       |
    | environment-win-cuda.yml | Windows with an NVIDIA CUDA GPU |

    </figure>

    Choose the appropriate environment file for your system and link or copy it
    to `environment.yml` in InvokeAI's top-level directory. To do so, run the
    following command from the repository root:

    !!! example ""

        === "Macintosh and Linux"

            !!! info "Replace `xxx` and `yyy` with the appropriate OS and GPU codes as seen in the table above"

            ```bash
            ln -sf environments-and-requirements/environment-xxx-yyy.yml environment.yml
            ```

            When this is done, confirm that a file `environment.yml` has been linked in
            the InvokeAI root directory and that it points to the correct file in the
            `environments-and-requirements` directory:

            ```bash
            ls -la
            ```
        === "Windows"

            Since it requires admin privileges to create links, we will use the
            copy (`cp`) command to create your `environment.yml`:

            ```cmd
            cp environments-and-requirements\environment-win-cuda.yml environment.yml
            ```

            Afterwards verify that the file `environment.yml` has been created,
            either via Explorer or by using the command `dir` from the terminal:

            ```cmd
            dir
            ```

6. Create the conda environment:

    ```bash
    conda env update
    ```

    This will create a new environment named `invokeai` and install all InvokeAI
    dependencies into it. If something goes wrong you should take a look at
    [troubleshooting](#troubleshooting).

7. Activate the `invokeai` environment:

    In order to use the newly created environment you will first need to
    activate it:

    ```bash
    conda activate invokeai
    ```

    Your command-line prompt should change to indicate that `invokeai` is active
    by prepending `(invokeai)`.
8. Pre-load the model weights files:

    !!! tip

        If you have already downloaded the weights file(s) for another Stable
        Diffusion distribution, you may skip this step (by selecting "skip" when
        prompted) and configure InvokeAI to use the previously-downloaded files. The
        process for this is described [here](INSTALLING_MODELS.md).

    ```bash
    python scripts/preload_models.py
    ```

    The script `preload_models.py` will interactively guide you through the
    process of downloading and installing the weights files needed for InvokeAI.
    Note that the main Stable Diffusion weights file is protected by a license
    agreement that you have to agree to. The script will list the steps you need
    to take to create an account on the site that hosts the weights files,
    accept the agreement, and provide an access token that allows InvokeAI to
    legally download and install the weights files.

    If you get an error message about a module not being installed, check that
    the `invokeai` environment is active and if not, repeat step 7.

9. Run the command-line or the web interface:

    !!! example ""

        !!! warning "Make sure that the conda environment is activated, which should show `(invokeai)` in front of your prompt!"

        === "CLI"

            ```bash
            python scripts/invoke.py
            ```

        === "local Webserver"

            ```bash
            python scripts/invoke.py --web
            ```

        === "Public Webserver"

            ```bash
            python scripts/invoke.py --web --host 0.0.0.0
            ```

    If you choose to run the web interface, point your browser at
    http://localhost:9090 in order to load the GUI.

10. Render away!

    Browse the [features](../features) section to learn about all the things you
    can do with InvokeAI.

    Note that some GPUs are slow to warm up. In particular, when using an AMD
    card with the ROCm driver, you may have to wait for over a minute the first
    time you try to generate an image. Fortunately, after the warm-up period
    rendering will be fast.

11. Subsequently, to relaunch the script, be sure to run `conda activate invokeai`,
    enter the `InvokeAI` directory, and then launch the invoke script. If you
    forget to activate the `invokeai` environment, the script will fail with
    multiple `ModuleNotFound` errors.
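The environment-file link made in step 5 can be sanity-checked with `readlink`. The sketch below rebuilds the layout in a scratch directory so it is safe to run anywhere; the temp path and the choice of `environment-lin-cuda.yml` are arbitrary examples:

```shell
#!/bin/sh
# Recreate the repository layout in a scratch directory, make the link as in
# step 5, then confirm environment.yml points at the intended file.
mkdir -p /tmp/InvokeAI/environments-and-requirements
cd /tmp/InvokeAI
touch environments-and-requirements/environment-lin-cuda.yml
ln -sf environments-and-requirements/environment-lin-cuda.yml environment.yml
readlink environment.yml   # -> environments-and-requirements/environment-lin-cuda.yml
```

In the real repository, run only the `ln -sf` and `readlink` lines from the InvokeAI root.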
## Updating to newer versions of the script
@ -184,185 +225,192 @@ conda env update
|
|||||||
python scripts/preload_models.py --no-interactive #optional
|
python scripts/preload_models.py --no-interactive #optional
|
||||||
```
|
```
|
||||||
|
|
||||||
This will bring your local copy into sync with the remote one. The
|
This will bring your local copy into sync with the remote one. The last step may
|
||||||
last step may be needed to take advantage of new features or released
|
be needed to take advantage of new features or released models. The
|
||||||
models. The `--no-interactive` flag will prevent the script from
|
`--no-interactive` flag will prevent the script from prompting you to download
|
||||||
prompting you to download the big Stable Diffusion weights files.
|
the big Stable Diffusion weights files.
|
||||||
|
|
||||||
## pip Install
|
## pip Install
|
||||||
|
|
||||||
To install InvokeAI with only the PIP package manager, please follow
|
To install InvokeAI with only the PIP package manager, please follow these
|
||||||
these steps:
|
steps:
|
||||||
|
|
||||||
1. Make sure you are using Python 3.9 or higher. The rest of the install
|
1. Make sure you are using Python 3.9 or higher. The rest of the install
|
||||||
procedure depends on this:
|
procedure depends on this:
|
||||||
|
|
||||||
```bash
|
|
||||||
python -V
|
|
||||||
```
|
|
||||||
|
|
||||||
2. Install the `virtualenv` tool if you don't have it already:
|
|
||||||
```bash
|
|
||||||
pip install virtualenv
|
|
||||||
```
|
|
||||||
|
|
||||||
3. From within the InvokeAI top-level directory, create and activate a
|
|
||||||
virtual environment named `invokeai`:
|
|
||||||
|
|
||||||
```bash
|
|
||||||
virtualenv invokeai
|
|
||||||
source invokeai/bin/activate
|
|
||||||
```
|
|
||||||
|
|
||||||
4. Pick the correct `requirements*.txt` file for your hardware and
|
|
||||||
operating system.
|
|
||||||
|
|
||||||
We have created a series of environment files suited for different
|
|
||||||
operating systems and GPU hardware. They are located in the
|
|
||||||
`environments-and-requirements` directory:
|
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
requirements-lin-amd.txt # Linux with an AMD (ROCm) GPU
|
python -V
|
||||||
requirements-lin-arm64.txt # Linux running on arm64 systems
|
|
||||||
requirements-lin-cuda.txt # Linux with an NVIDIA (CUDA) GPU
|
|
||||||
requirements-mac-mps-cpu.txt # Macintoshes with MPS acceleration
|
|
||||||
requirements-lin-win-colab-cuda.txt # Windows with an NVIDA (CUDA) GPU
|
|
||||||
# (supports Google Colab too)
|
|
||||||
```
|
```
|
||||||
|
|
||||||
Select the appropriate requirements file, and make a link to it
|
2. Install the `virtualenv` tool if you don't have it already:
|
||||||
from `environment.txt` in the top-level InvokeAI directory. The
|
|
||||||
command to do this from the top-level directory is:
|
|
||||||
|
|
||||||
!!! todo "Macintosh and Linux"
|
```bash
|
||||||
|
pip install virtualenv
|
||||||
|
```
|
||||||
|
|
||||||
```bash
|
3. From within the InvokeAI top-level directory, create and activate a virtual
|
||||||
ln -sf environments-and-requirements/requirements-xxx-yyy.txt requirements.txt
|
environment named `invokeai`:
|
||||||
```
|
|
||||||
|
|
||||||
Replace `xxx` and `yyy` with the appropriate OS and GPU codes.
|
```bash
|
||||||
|
virtualenv invokeai
|
||||||
|
source invokeai/bin/activate
|
||||||
|
```
|
||||||
|
|
||||||
!!! todo "Windows requires admin privileges to make links, so we use the copy (cp) command instead"
|
4. Pick the correct `requirements*.txt` file for your hardware and operating
|
||||||
|
system.
|
||||||
|
|
||||||
```bash
|
We have created a series of environment files suited for different operating
|
||||||
cp environments-and-requirements\requirements-lin-win-colab-cuda.txt requirements.txt
|
systems and GPU hardware. They are located in the
|
||||||
```
|
`environments-and-requirements` directory:
|
||||||
|
|
||||||
Note that the order of arguments is reversed between the Linux/Mac and Windows
|
<figure markdown>
|
||||||
commands!
|
|
||||||
|
|
||||||
Please do not link directly to the file
|
| filename | OS |
|
||||||
`environments-and-requirements/requirements.txt`. This is a base requirements
|
| :---------------------------------: | :-------------------------------------------------------------: |
|
||||||
file that does not have the platform-specific libraries.
|
| requirements-lin-amd.txt | Linux with an AMD (ROCm) GPU |
|
||||||
|
| requirements-lin-arm64.txt | Linux running on arm64 systems |
|
||||||
|
| requirements-lin-cuda.txt | Linux with an NVIDIA (CUDA) GPU |
|
||||||
|
| requirements-mac-mps-cpu.txt | Macintoshes with MPS acceleration |
|
||||||
|
| requirements-lin-win-colab-cuda.txt | Windows with an NVIDA (CUDA) GPU<br>(supports Google Colab too) |
|
||||||
|
|
||||||
   Select the appropriate requirements file, and make a link to it from
   `requirements.txt` in the top-level InvokeAI directory. The command to do
   this from the top-level directory is:

   !!! example ""

       === "Macintosh and Linux"

           !!! info "Replace `xxx` and `yyy` with the appropriate OS and GPU codes."

           ```bash
           ln -sf environments-and-requirements/requirements-xxx-yyy.txt requirements.txt
           ```

       === "Windows"

           !!! info "Since admin privileges are required to make links, we use the copy (`cp`) command instead."

           ```cmd
           cp environments-and-requirements\requirements-lin-win-colab-cuda.txt requirements.txt
           ```

   !!! warning

       Please do not link or copy `environments-and-requirements/requirements.txt`.
       This is a base requirements file that does not have the platform-specific
       libraries.

   When this is done, confirm that a file named `requirements.txt` has been
   created in the InvokeAI root directory and that it points to the correct
   file in `environments-and-requirements`.
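A quick sanity check of the link is to ask `readlink` where it points and `cat` whether it resolves. The sketch below demonstrates this link-and-verify pattern in a throwaway scratch directory; the file name and its contents are illustrative stand-ins, not the real requirements files:

```shell
# Demonstrate the link-and-verify pattern in a scratch directory.
# Names and file contents below are illustrative only.
tmpdir=$(mktemp -d)
cd "$tmpdir"
mkdir environments-and-requirements
echo "torch" > environments-and-requirements/requirements-lin-cuda.txt

# Make the link, then confirm what it points to and that it resolves.
ln -sf environments-and-requirements/requirements-lin-cuda.txt requirements.txt
readlink requirements.txt   # -> environments-and-requirements/requirements-lin-cuda.txt
cat requirements.txt        # -> torch
```

On Windows, where a copy is made instead of a link, simply open `requirements.txt` and check that its contents match the platform-specific file.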
5. Run PIP

   Be sure that the `invokeai` environment is active before doing this:

   ```bash
   pip install --prefer-binary -r requirements.txt
   ```

---
## Troubleshooting

Here are some common issues and their suggested solutions.

### Conda

#### Conda fails before completing `conda update`
The usual source of these errors is a package incompatibility. While we have
tried to minimize these, over time packages get updated and sometimes introduce
incompatibilities.

We suggest that you search
[Issues](https://github.com/invoke-ai/InvokeAI/issues) or the "bugs-and-support"
channel of the [InvokeAI Discord](https://discord.gg/ZmtBAhwWhy).

You may also try to install the broken packages manually using PIP. To do this,
activate the `invokeai` environment, and run `pip install` with the name and
version of the package that is causing the incompatibility. For example:

```bash
pip install test-tube==0.7.5
```

You can keep doing this until all requirements are satisfied and the `invoke.py`
script runs without errors. Please report to
[Issues](https://github.com/invoke-ai/InvokeAI/issues) what you were able to do
to work around the problem so that others can benefit from your investigation.
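Before pinning a version this way, it can help to see what is currently installed. `pip show` reports the installed version of any package; in the sketch below, `pip` itself stands in for the package named in your error message, since it is usually present in any environment:

```shell
# Inspect the currently installed version of a package before pinning a
# replacement. "pip" here is just a stand-in for the conflicting package.
python3 -m pip show pip | head -n 2
```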
#### `preload_models.py` or `invoke.py` crashes at an early stage

This is usually due to an incomplete or corrupted Conda install. Make sure you
have linked to the correct environment file and run `conda update` again.

If the problem persists, a more extreme measure is to clear Conda's caches and
remove the `invokeai` environment:

```bash
conda deactivate
conda env remove -n invokeai
conda clean -a
conda update
```

This removes all cached library files, including ones that may have been
corrupted somehow. (This is not supposed to happen, but does anyway.)
#### `invoke.py` crashes at a later stage

If the CLI or web site had been working OK, but something unexpected happens
later on during the session, you've encountered a code bug that is probably
unrelated to an install issue. Please search
[Issues](https://github.com/invoke-ai/InvokeAI/issues), file a bug report, or
ask for help on [Discord](https://discord.gg/ZmtBAhwWhy).
#### My renders are running very slowly

You may have installed the wrong torch (machine learning) package, and the
system is running on CPU rather than the GPU. To check, look at the log messages
that appear when `invoke.py` is first starting up. One of the earlier lines
should say `Using device type cuda`. On AMD systems, it will also say "cuda",
and on Macintoshes, it should say "mps". If instead the message says it is
running on "cpu", then you may need to install the correct torch library.

You may be able to fix this by installing a different torch library. Here are
the magic incantations for Conda and PIP.
!!! todo "For CUDA systems"

    - conda

    ```bash
    conda install pytorch torchvision torchaudio pytorch-cuda=11.6 -c pytorch -c nvidia
    ```

    - pip

    ```bash
    pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu116
    ```

!!! todo "For AMD systems"

    - conda

    ```bash
    conda activate invokeai
    pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/rocm5.2/
    ```

    - pip

    ```bash
    pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/rocm5.2/
    ```

More information and troubleshooting tips can be found at https://pytorch.org.
---
title: Source Installer
---

# The InvokeAI Source Installer
## Introduction

The source installer is a shell script that attempts to automate every step
needed to install and run InvokeAI on a stock computer running recent versions
of Linux, MacOS or Windows. It will leave you with a version that runs a stable
version of InvokeAI with the option to upgrade to experimental versions later.
It is not as foolproof as the [InvokeAI installer](INSTALL_INVOKE.md).

Before you begin, make sure that you meet the
[hardware requirements](index.md#Hardware_Requirements) and have the appropriate
GPU drivers installed. In particular, if you are a Linux user with an AMD GPU
installed, you may need to install the
[ROCm driver](https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html).

Installation requires roughly 18G of free disk space to load the libraries and
recommended model weights files.
## Walk through

Though there are multiple steps, there really is only one click involved to kick
off the process.

1. The source installer is distributed in ZIP files. Go to the
   [latest release](https://github.com/invoke-ai/InvokeAI/releases/latest), and
   look for a series of files named:

    - invokeAI-src-installer-mac.zip
    - invokeAI-src-installer-windows.zip
    - invokeAI-src-installer-linux.zip

   Download the one that is appropriate for your operating system.
2. Unpack the zip file into a directory that has at least 18G of free space. Do
   _not_ unpack into a directory that has an earlier version of InvokeAI.

   This will create a new directory named "InvokeAI". This example shows how
   this would look using the `unzip` command-line tool, but you may use any
   graphical or command-line Zip extractor:

    ```cmd
    C:\Documents\Linco> unzip invokeAI-windows.zip
    Archive: C:\Linco\Downloads\invokeAI-linux.zip
      creating: invokeAI\
     inflating: invokeAI\install.bat
     inflating: invokeAI\readme.txt
    ```
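If no `unzip` utility is available, Python's standard-library `zipfile` module has a small command-line interface that can extract the archive instead. The self-contained sketch below creates and then extracts a tiny throwaway archive just to show the two commands; all names are examples, not the real installer files:

```shell
# Create and extract a tiny demo archive with Python's stdlib zipfile CLI.
# Every name below is a throwaway example.
tmp=$(mktemp -d)
cd "$tmp"
mkdir invokeAI
echo "echo hello" > invokeAI/install.sh
python3 -m zipfile -c demo.zip invokeAI      # create the archive
mkdir extracted
python3 -m zipfile -e demo.zip extracted     # extract it
ls extracted/invokeAI                        # -> install.sh
```

For the real download you would only need the extract step, e.g. `python3 -m zipfile -e invokeAI-src-installer-linux.zip .`.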
3. If you are using a desktop GUI, double-click the installer file. It will be
   named `install.bat` on Windows systems and `install.sh` on Linux and
   Macintosh systems.

4. Alternatively, from the command line, run the shell script or .bat file:

    ```cmd
    C:\Documents\Linco> cd invokeAI
    C:\Documents\Linco\invokeAI> install.bat
    ```
5. Sit back and let the install script work. It will install various binary
   requirements including Conda, Git and Python, then download the current
   InvokeAI code and install it along with its dependencies.

6. After installation completes, the installer will launch a script called
   `preload_models.py`, which will guide you through the first-time process of
   selecting one or more Stable Diffusion model weights files, downloading and
   configuring them.

   Note that the main Stable Diffusion weights file is protected by a license
   agreement that you must agree to in order to use. The script will list the
   steps you need to take to create an account on the official site that hosts
   the weights files, accept the agreement, and provide an access token that
   allows InvokeAI to legally download and install the weights files.

   If you have already downloaded the weights file(s) for another Stable
   Diffusion distribution, you may skip this step (by selecting "skip" when
   prompted) and configure InvokeAI to use the previously-downloaded files. The
   process for this is described in [Installing Models](INSTALLING_MODELS.md).
7. The script will now exit and you'll be ready to generate some images. The
   invokeAI directory will contain numerous files. Look for a shell script
   named `invoke.sh` (Linux/Mac) or `invoke.bat` (Windows). Launch the script
   by double-clicking it or typing its name at the command-line:

    ```cmd
    C:\Documents\Linco> cd invokeAI
    C:\Documents\Linco\invokeAI> invoke.bat
    ```
The `invoke.bat` (`invoke.sh`) script will give you the choice of starting (1)
the command-line interface, or (2) the web GUI. If you start the latter, you can
load the user interface by pointing your browser at http://localhost:9090.

The `invoke` script also offers you a third option labeled "open the developer
console". If you choose this option, you will be dropped into a command-line
interface in which you can run python commands directly, access developer tools,
and launch InvokeAI with customized options. To do the latter, you would launch
the script `scripts/invoke.py` as shown in this example:

```cmd
python scripts/invoke.py --web --max_load_models=3 \
    --model=waifu-1.3 --steps=30 --outdir=C:/Documents/AIPhotos
```

These options are described in detail in the
[Command-Line Interface](../features/CLI.md) documentation.
## Updating to newer versions

This section describes how to update InvokeAI to new versions of the software.

### Updating the stable version

This distribution is changing rapidly, and we add new features on a daily basis.
To update to the latest released version (recommended), run the `update.sh`
(Linux/Mac) or `update.bat` (Windows) scripts. This will fetch the latest
release and re-run the `preload_models` script to download any updated models
files that may be needed. You can also use this to add additional models that
you did not select at installation time.
### Updating to the development version

There may be times that there is a feature in the `development` branch of
InvokeAI that you'd like to take advantage of. Or perhaps there is a branch that
corrects an annoying bug. To do this, you will use the developer's console.

From within the invokeAI directory, run the command `invoke.sh` (Linux/Mac) or
`invoke.bat` (Windows) and select option (3) to open the developer's console.
Then run the following commands to get the `development` branch:

```bash
git checkout development
git pull
conda env update
```
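To confirm that the checkout actually left you on the branch you expect, `git rev-parse --abbrev-ref HEAD` prints the current branch name. The sketch below demonstrates this in a throwaway repository rather than the real InvokeAI checkout:

```shell
# Demonstrate checking the current branch name in a throwaway repository.
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo
cd demo
git -c user.email=demo@example.com -c user.name="Demo" \
    commit -q --allow-empty -m "initial commit"
git checkout -q -b development
git rev-parse --abbrev-ref HEAD   # -> development
```

Run the same `git rev-parse` command inside the invokeAI directory from the developer's console to verify your branch.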
You can now close the developer's console and run `invoke` as before. If you get
complaints about missing models, then you may need to do the additional step of
running `preload_models.py`. This happens relatively infrequently. To do this,
simply open up the developer's console again and type
`python scripts/preload_models.py`.
## Troubleshooting

If you run into problems during or after installation, the InvokeAI team is
available to help you. Either create an
[Issue](https://github.com/invoke-ai/InvokeAI/issues) at our GitHub site, or
make a request for help on the "bugs-and-support" channel of our
[Discord server](https://discord.gg/ZmtBAhwWhy). We are a 100% volunteer
organization, but typically somebody will be available to help you within 24
hours, and often much sooner.
---
title: Overview
---

We offer several ways to install InvokeAI, each one suited to your
experience and preferences.