fix manual install documentation
parent 64339af2dc
commit d94f955d9d

@@ -12,16 +12,203 @@ title: Manual Installation

## Introduction

You have two choices for manual installation: the [first one](#PIP_method) uses
basic Python virtual environment (`venv`) commands and the PIP package manager,
and the [second one](#Conda_method) is based on the Anaconda3 package manager
(`conda`). Both methods require you to enter commands on the terminal, also
known as the "console".

Note that the conda install method is currently deprecated and will no longer
be supported at some point in the future.

On Windows systems you are encouraged to install and use
[Powershell](https://learn.microsoft.com/en-us/powershell/scripting/install/installing-powershell-on-windows?view=powershell-7.3),
which provides compatibility with Linux and Mac shells and nice features such
as command-line completion.

## pip Install

To install InvokeAI with virtual environments and the PIP package manager,
please follow these steps:

1. Make sure you are using Python 3.9 or 3.10. The rest of the install
    procedure depends on this:

    ```bash
    python -V
    ```

2. From within the InvokeAI top-level directory, create and activate a virtual
    environment named `invokeai`:

    ```bash
    python -m venv invokeai
    source invokeai/bin/activate
    ```

3. Make sure that pip is installed in your virtual environment and is up to
    date:

    ```bash
    python -m ensurepip --upgrade
    python -m pip install --upgrade pip
    ```

4. Pick the correct `requirements*.txt` file for your hardware and operating
    system.

    We have created a series of environment files suited for different operating
    systems and GPU hardware. They are located in the
    `environments-and-requirements` directory:

    <figure markdown>

    | filename                             | OS                                                                |
    | :----------------------------------: | :---------------------------------------------------------------: |
    | requirements-lin-amd.txt             | Linux with an AMD (ROCm) GPU                                      |
    | requirements-lin-arm64.txt           | Linux running on arm64 systems                                    |
    | requirements-lin-cuda.txt            | Linux with an NVIDIA (CUDA) GPU                                   |
    | requirements-mac-mps-cpu.txt         | Macintoshes with MPS acceleration                                 |
    | requirements-lin-win-colab-cuda.txt  | Windows with an NVIDIA (CUDA) GPU<br>(supports Google Colab too)  |

    </figure>

    Select the appropriate requirements file, and make a link to it from
    `requirements.txt` in the top-level InvokeAI directory. The command to do
    this from the top-level directory is:

    !!! example ""

        === "Macintosh and Linux"

            !!! info "Replace `xxx` and `yyy` with the appropriate OS and GPU codes."

            ```bash
            ln -sf environments-and-requirements/requirements-xxx-yyy.txt requirements.txt
            ```

        === "Windows"

            !!! info "On Windows, admin privileges are required to make links, so we use the copy command instead."

            ```cmd
            copy environments-and-requirements\requirements-lin-win-colab-cuda.txt requirements.txt
            ```

    !!! warning

        Please do not link or copy `environments-and-requirements/requirements-base.txt`.
        This is a base requirements file that does not have the platform-specific
        libraries. Also, be sure to link or copy the platform-specific file to
        a top-level file named `requirements.txt` as shown here. Running pip on
        a requirements file in a subdirectory will not work as expected.

    When this is done, confirm that a file named `requirements.txt` has been
    created in the InvokeAI root directory and that it points to the correct
    file in `environments-and-requirements` (see the optional check below).
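
    On Macintosh and Linux, a quick optional sanity check (not part of the
    official procedure) is to look at the link itself:

    ```bash
    # The link should point into environments-and-requirements/
    ls -l requirements.txt
    ```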

5. Run PIP

    Be sure that the `invokeai` environment is active before doing this:

    ```bash
    pip install --prefer-binary -r requirements.txt
    ```

6. Set up the runtime directory

    In this step you will initialize a runtime directory that will
    contain the models, model config files, directory for textual
    inversion embeddings, and your outputs. This keeps the runtime
    directory separate from the source code and aids in updating.

    You may pick any location for this directory using the `--root_dir`
    option (abbreviated `--root`). If you don't pass this option, it will
    default to `invokeai` in your home directory.

    ```bash
    configure_invokeai.py --root_dir ~/Programs/invokeai
    ```

    The script `configure_invokeai.py` will interactively guide you through the
    process of downloading and installing the weights files needed for InvokeAI.
    Note that the main Stable Diffusion weights file is protected by a license
    agreement that you have to agree to. The script will list the steps you need
    to take to create an account on the site that hosts the weights files,
    accept the agreement, and provide an access token that allows InvokeAI to
    legally download and install the weights files.

    If you get an error message about a module not being installed, check that
    the `invokeai` environment is active and if not, repeat step 5.

    Note that `configure_invokeai.py` and `invoke.py` should be installed
    under your virtual environment directory and the system should find them
    on the PATH. If this isn't working on your system, you can run the scripts
    directly with `python scripts/configure_invokeai.py` and
    `python scripts/invoke.py` (see the sketch below).
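
    A quick way to check, as a sketch only:

    ```bash
    # Prints the paths of the console scripts if they are visible on your PATH;
    # if nothing is printed, fall back to the scripts/ directory form above.
    which configure_invokeai.py invoke.py
    ```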

    !!! tip

        If you have already downloaded the weights file(s) for another Stable
        Diffusion distribution, you may skip this step (by selecting "skip" when
        prompted) and configure InvokeAI to use the previously-downloaded files.
        The process for this is described [here](INSTALLING_MODELS.md).

7. Run the command-line or the web interface:

    Activate the environment (with `source invokeai/bin/activate`), and then
    run the script `invoke.py`. If you selected a non-default location
    for the runtime directory, please specify the path with the `--root_dir`
    option (abbreviated below as `--root`):

    !!! example ""

        !!! warning "Make sure that the virtual environment is activated, which should create `(invokeai)` in front of your prompt!"

        === "CLI"

            ```bash
            invoke.py --root ~/Programs/invokeai
            ```

        === "local Webserver"

            ```bash
            invoke.py --web --root ~/Programs/invokeai
            ```

        === "Public Webserver"

            ```bash
            invoke.py --web --host 0.0.0.0 --root ~/Programs/invokeai
            ```

    If you choose to run the web interface, point your browser at
    http://localhost:9090 in order to load the GUI.

    !!! tip

        You can permanently set the location of the runtime directory by setting
        the environment variable `INVOKEAI_ROOT` to the path of the directory.
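
        A minimal sketch, assuming a POSIX shell and the runtime directory used
        in the examples above:

        ```bash
        # Add to ~/.bashrc, ~/.zshrc, or your shell profile of choice
        export INVOKEAI_ROOT=~/Programs/invokeai
        ```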

8. Render away!

    Browse the [features](../features/CLI.md) section to learn about all the things you
    can do with InvokeAI.

    Note that some GPUs are slow to warm up. In particular, when using an AMD
    card with the ROCm driver, you may have to wait for over a minute the first
    time you try to generate an image. Fortunately, after the warm up period
    rendering will be fast.

9. Subsequently, to relaunch the script, be sure to activate the `invokeai`
    virtual environment (with `source invokeai/bin/activate`), enter the
    `InvokeAI` directory, and then launch the invoke script (see the sketch
    below). If you forget to activate the `invokeai` environment, the script
    will fail with multiple `ModuleNotFound` errors.
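
    A minimal relaunch sketch, assuming the clone is named `InvokeAI` and the
    runtime directory from the earlier example:

    ```bash
    cd InvokeAI
    source invokeai/bin/activate
    invoke.py --root ~/Programs/invokeai
    ```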

    !!! tip

        Do not move the source code repository after installation. The virtual
        environment directory has absolute paths in it that get confused if the
        directory is moved.

---

### Conda method

@@ -145,17 +332,19 @@ command-line completion.

    Your command-line prompt should change to indicate that `invokeai` is active
    by prepending `(invokeai)`.

8. Set up the runtime directory

    In this step you will initialize a runtime directory that will
    contain the models, model config files, directory for textual
    inversion embeddings, and your outputs. This keeps the runtime
    directory separate from the source code and aids in updating.

    You may pick any location for this directory using the `--root_dir`
    option (abbreviated `--root`). If you don't pass this option, it will
    default to `invokeai` in your home directory.

    ```bash
    python scripts/configure_invokeai.py --root_dir ~/Programs/invokeai
    ```

    The script `configure_invokeai.py` will interactively guide you through the

@@ -169,8 +358,26 @@ command-line completion.

    If you get an error message about a module not being installed, check that
    the `invokeai` environment is active and if not, repeat step 5.

    Note that `configure_invokeai.py` and `invoke.py` should be installed
    under your conda directory and the system should find them automatically
    on the PATH. If this isn't working on your system, you can run the scripts
    directly with `python scripts/configure_invokeai.py` and
    `python scripts/invoke.py`.

    !!! tip

        If you have already downloaded the weights file(s) for another Stable
        Diffusion distribution, you may skip this step (by selecting "skip" when
        prompted) and configure InvokeAI to use the previously-downloaded files.
        The process for this is described [here](INSTALLING_MODELS.md).

9. Run the command-line or the web interface:

    Activate the environment (with `conda activate invokeai`), and then
    run the script `invoke.py`. If you selected a non-default location
    for the runtime directory, please specify the path with the `--root_dir`
    option (abbreviated below as `--root`):

    !!! example ""

        !!! warning "Make sure that the conda environment is activated, which should create `(invokeai)` in front of your prompt!"

@@ -178,24 +385,28 @@ command-line completion.

        === "CLI"

            ```bash
            invoke.py --root ~/Programs/invokeai
            ```

        === "local Webserver"

            ```bash
            invoke.py --web --root ~/Programs/invokeai
            ```

        === "Public Webserver"

            ```bash
            invoke.py --web --host 0.0.0.0 --root ~/Programs/invokeai
            ```

    If you choose to run the web interface, point your browser at
    http://localhost:9090 in order to load the GUI.

    !!! tip

        You can permanently set the location of the runtime directory by setting
        the environment variable `INVOKEAI_ROOT` to the path of your choice.

10. Render away!

    Browse the [features](../features/CLI.md) section to learn about all the things you

@@ -211,6 +422,34 @@ command-line completion.

    script. If you forget to activate the 'invokeai' environment, the script
    will fail with multiple `ModuleNotFound` errors.

## Creating an "install" version of InvokeAI

If you wish, you can install InvokeAI and all its dependencies in the
runtime directory. This allows you to delete the source code
repository and eliminates the need to provide `--root_dir` at startup
time. Note that this only works with the PIP install method.

1. Follow the instructions for the PIP install, but in step #2 put the
    virtual environment into the runtime directory. For example, assuming the
    runtime directory lives in `~/Programs/invokeai`, you'd run:

    ```bash
    python -m venv ~/Programs/invokeai
    ```

2. Now follow steps 3 to 5 in the PIP recipe, ending with the `pip install`
    step.

3. Run one additional step while you are in the source code repository
    directory: `pip install .` (note the dot at the end), as in the sketch
    below.
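
    A minimal sketch, assuming the source checkout is named `InvokeAI` and the
    virtual environment from step 1 is active:

    ```bash
    cd InvokeAI
    pip install .
    ```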

4. That's all! Now, whenever you activate the virtual environment,
    `invoke.py` will know where to look for the runtime directory without
    needing a `--root_dir` argument. In addition, you can now move or
    delete the source code repository entirely. A sketch of the resulting
    workflow appears below.

    (Don't move the runtime directory!)
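
Putting it together, a rough sketch of day-to-day use after an "install"-style
setup, assuming the runtime directory and virtual environment live in
`~/Programs/invokeai`:

```bash
source ~/Programs/invokeai/bin/activate
invoke.py --web    # no --root_dir needed; the runtime directory is found automatically
```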

## Updating to newer versions of the script

This distribution is changing rapidly. If you used the `git clone` method

@@ -228,95 +467,6 @@ be needed to take advantage of new features or released models. The

`--no-interactive` flag will prevent the script from prompting you to download
the big Stable Diffusion weights files.
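
As a rough sketch only (the full update procedure appears in the unchanged part
of this page; the repository location and requirements link below are
assumptions carried over from the steps above):

```bash
cd InvokeAI
git pull
pip install --prefer-binary -r requirements.txt
python scripts/configure_invokeai.py --no-interactive
```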

## Troubleshooting

Here are some common issues and their suggested solutions.