improve installation documentation

1. Added a big fat warning to the Windows installer telling users to install the Visual C++ redistributable.
2. Added the same big fat warning to the automated-installer documentation.
3. Reordered entries on the installation table-of-contents sidebar to prioritize the most important docs.
4. Moved older installation documentation into the deprecated folder.
5. Moved developer-specific installation documentation into the developers folder.

Parent: bfa8fed568
Commit: f7170e4156

docs/installation/010_INSTALL_AUTOMATED.md (new file, 308 lines)

---
title: Installing with the Automated Installer
---

# InvokeAI Automated Installation

## Introduction

The automated installer is a shell script that attempts to automate every step
needed to install and run InvokeAI on a stock computer running recent versions
of Linux, MacOS or Windows. It will leave you with a version that runs a stable
version of InvokeAI with the option to upgrade to experimental versions later.

## Walk through

1.  Make sure that your system meets the
    [hardware requirements](../index.md#hardware-requirements) and has the
    appropriate GPU drivers installed. In particular, if you are a Linux user
    with an AMD GPU installed, you may need to install the
    [ROCm driver](https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html).

    !!! info "Required Space"

        Installation requires roughly 18G of free disk space to load the libraries and
        recommended model weights files.

2.  Check that your system has an up-to-date Python installed. To do this, open
    up a command-line window ("Terminal" on Linux and Macintosh, "Command" or
    "Powershell" on Windows) and type `python --version`. If Python is
    installed, it will print out the version number. If it is version `3.9.1` or
    higher, you meet requirements.

    !!! warning "If you see an older version, or get a command not found error"

        Go to [Python Downloads](https://www.python.org/downloads/) and
        download the appropriate installer package for your platform. We recommend
        [Version 3.10.9](https://www.python.org/downloads/release/python-3109/),
        which has been extensively tested with InvokeAI.

    !!! warning "At this time we do not recommend Python 3.11"

    _Please select your platform in the section below for platform-specific
    setup requirements._

    === "Windows users"

        - During the Python configuration process, look out for a checkbox to
          add Python to your PATH and select it. If the install script complains
          that it can't find python, then open the Python installer again and
          choose "Modify" existing installation.

        - Installation requires an up-to-date version of the Microsoft Visual C
          libraries. Please install the 2015-2022 libraries available here:
          https://learn.microsoft.com/en-us/cpp/windows/deploying-native-desktop-applications-visual-cpp?view=msvc-170

    === "Mac users"

        - After installing Python, you may need to run the
          following command from the Terminal in order to install the Web
          certificates needed to download model data from https sites. If
          you see lots of CERTIFICATE ERRORS during the last part of the
          install, this is the problem, and you can fix it with this command:

          `/Applications/Python\ 3.10/Install\ Certificates.command`

        - You may need to install the Xcode command line tools. These
          are a set of tools that are needed to run certain applications in a
          Terminal, including InvokeAI. This package is provided directly by Apple.

        - To install, open a terminal window and run `xcode-select --install`.
          You will get a macOS system popup guiding you through the
          install. If you already have them installed, you will instead see some
          output in the Terminal advising you that the tools are already installed.

        - More information can be found here:
          https://www.freecodecamp.org/news/install-xcode-command-line-tools/

    === "Linux users"

        For reasons that are not entirely clear, installing the correct version of
        Python can be a bit of a challenge on Ubuntu, Linux Mint, and other
        Ubuntu-derived distributions.

        In particular, Ubuntu version 20.04 LTS comes with an old version of Python,
        does not come with the PIP package manager installed, and, to make matters
        worse, the `python` command points to Python2, not Python3.

        Here is the quick recipe for bringing your system up to date:

        ```bash
        sudo apt update
        sudo apt install python3.9
        sudo apt install python3-pip
        cd /usr/bin
        sudo ln -sf python3.9 python3
        sudo ln -sf python3 python
        ```

        You can still access older versions of Python by calling `python2`,
        `python3.8`, etc.

3.  The source installer is distributed in ZIP files. Go to the
    [latest release](https://github.com/invoke-ai/InvokeAI/releases/latest), and
    look for a series of files named:

    - [InvokeAI-installer-2.2.4-mac.zip](https://github.com/invoke-ai/InvokeAI/releases/latest/download/InvokeAI-installer-2.2.4-mac.zip)
    - [InvokeAI-installer-2.2.4-windows.zip](https://github.com/invoke-ai/InvokeAI/releases/latest/download/InvokeAI-installer-2.2.4-windows.zip)
    - [InvokeAI-installer-2.2.4-linux.zip](https://github.com/invoke-ai/InvokeAI/releases/latest/download/InvokeAI-installer-2.2.4-linux.zip)

    Download the one that is appropriate for your operating system.

4.  Unpack the zip file into a convenient directory. This will create a new
    directory named "InvokeAI-Installer". This example shows how this would look
    using the `unzip` command-line tool, but you may use any graphical or
    command-line Zip extractor:

    ```cmd
    C:\Documents\Linco> unzip InvokeAI-installer-2.2.4-windows.zip
    Archive: C:\Linco\Downloads\InvokeAI-installer-2.2.4-windows.zip
      creating: InvokeAI-Installer\
     inflating: InvokeAI-Installer\install.bat
     inflating: InvokeAI-Installer\readme.txt
    ...
    ```

    After successful installation, you can delete the `InvokeAI-Installer`
    directory.

5.  **Windows only** Please double-click on the file WinLongPathsEnabled.reg and
    accept the dialog box that asks you if you wish to modify your registry.
    This activates long filename support on your system and will prevent
    mysterious errors during installation.

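    If you prefer not to import the .reg file, the same long-path setting can
    normally be applied by hand. The sketch below assumes the standard Windows
    `LongPathsEnabled` registry value and must be run from an elevated
    ("Run as administrator") Command Prompt:

    ```cmd
    REM Enable Win32 long path support (what WinLongPathsEnabled.reg is expected to set)
    reg add "HKLM\SYSTEM\CurrentControlSet\Control\FileSystem" /v LongPathsEnabled /t REG_DWORD /d 1 /f
    ```

    A reboot may be required before the setting takes effect.
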
6.  If you are using a desktop GUI, double-click the installer file. It will be
    named `install.bat` on Windows systems and `install.sh` on Linux and
    Macintosh systems.

    On Windows systems you will probably get an "Untrusted Publisher" warning.
    Click on "More Info" and select "Run Anyway." You trust us, right?

7.  Alternatively, from the command line, run the shell script or .bat file:

    ```cmd
    C:\Documents\Linco> cd InvokeAI-Installer
    C:\Documents\Linco\InvokeAI-Installer> install.bat
    ```

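    On Linux and Macintosh systems the equivalent terminal session looks roughly
    like this (the script name comes from step 6; the `chmod` line is only needed
    if your unzip tool did not preserve the executable permission):

    ```bash
    cd InvokeAI-Installer
    chmod +x install.sh   # only if the file is not already executable
    ./install.sh
    ```
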
8.  The script will ask you to choose where to install InvokeAI. Select a
    directory with at least 18G of free space for a full install. InvokeAI and
    all its support files will be installed into a new directory named
    `invokeai` located at the location you specify.

    - The default is to install the `invokeai` directory in your home directory,
      usually `C:\Users\YourName\invokeai` on Windows systems,
      `/home/YourName/invokeai` on Linux systems, and `/Users/YourName/invokeai`
      on Macintoshes, where "YourName" is your login name.

    - The script uses tab autocompletion to suggest directory path completions.
      Type part of the path (e.g. "C:\Users") and press ++tab++ repeatedly
      to suggest completions.

9.  Sit back and let the install script work. It will install the third-party
    libraries needed by InvokeAI, then download the current InvokeAI release and
    install it.

    Be aware that some of the library download and install steps take a long
    time. In particular, the `pytorch` package is quite large and often appears
    to get "stuck" at 99.9%. Have patience and the installation step will
    eventually resume. However, there are occasions when the library install
    does legitimately get stuck. If you have been waiting for more than ten
    minutes and nothing is happening, you can interrupt the script with ^C. You
    may restart it and it will pick up where it left off.

10. After installation completes, the installer will launch a script called
    `configure_invokeai.py`, which will guide you through the first-time process
    of selecting one or more Stable Diffusion model weights files, downloading
    and configuring them. We provide a list of popular models that InvokeAI
    performs well with. However, you can add more weight files later on using
    the command-line client or the Web UI. See
    [Installing Models](INSTALLING_MODELS.md) for details.

    Note that the main Stable Diffusion weights file is protected by a license
    agreement that you must agree to in order to use it. The script will list the
    steps you need to take to create an account on the official site that hosts
    the weights files, accept the agreement, and provide an access token that
    allows InvokeAI to legally download and install the weights files.

    If you have already downloaded the weights file(s) for another Stable
    Diffusion distribution, you may skip this step (by selecting "skip" when
    prompted) and configure InvokeAI to use the previously-downloaded files. The
    process for this is described in [Installing Models](INSTALLING_MODELS.md).

11. The script will now exit and you'll be ready to generate some images. Look
    for the directory `invokeai` installed in the location you chose at the
    beginning of the install session. Look for a shell script named `invoke.sh`
    (Linux/Mac) or `invoke.bat` (Windows). Launch the script by double-clicking
    it or typing its name at the command-line:

    ```cmd
    C:\Documents\Linco> cd invokeai
    C:\Documents\Linco\invokeai> invoke.bat
    ```

    - The `invoke.bat` (`invoke.sh`) script will give you the choice of starting
      (1) the command-line interface, or (2) the web GUI. If you start the
      latter, you can load the user interface by pointing your browser at
      http://localhost:9090.

    - The script also offers you a third option labeled "open the developer
      console". If you choose this option, you will be dropped into a
      command-line interface in which you can run python commands directly,
      access developer tools, and launch InvokeAI with customized options.

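    On Linux and Macintosh systems, a comparable terminal session (assuming the
    default install location in your home directory) would look something like:

    ```bash
    cd ~/invokeai
    ./invoke.sh
    ```
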
12. You can launch InvokeAI with several different command-line arguments that
    customize its behavior. For example, you can change the location of the
    image output directory, or select your favorite sampler. See the
    [Command-Line Interface](../features/CLI.md) for a full list of the options.

    - To set defaults that will take effect every time you launch InvokeAI,
      use a text editor (e.g. Notepad) to edit the file
      `invokeai\invokeai.init`. It contains a variety of examples that you can
      follow to add and modify launch options.

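    For illustration, a couple of typical lines in `invokeai.init` might look
    like the sketch below. The flags shown (`--outdir`, `--web`) and the output
    path are examples only; follow the comments in the generated file and the
    CLI documentation above for the options supported by your version:

    ```
    # invokeai.init - options added here are applied at every launch
    --outdir="D:\ai-images"
    --web
    ```
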
!!! warning

    The `invokeai` directory contains the `invoke` application, its
    configuration files, the model weight files, and outputs of image generation.
    Once InvokeAI is installed, do not move or remove this directory.

## Troubleshooting

### _Package dependency conflicts_

If you have previously installed InvokeAI or another Stable Diffusion package,
the installer may occasionally pick up outdated libraries and either the
installer or `invoke` will fail with complaints about library conflicts. You can
address this by entering the `invokeai` directory and running `update.sh`, which
will bring InvokeAI up to date with the latest libraries.

### ldm from pypi

!!! warning

    Some users have tried to correct dependency problems by installing
    the `ldm` package from PyPi.org. Unfortunately this is an unrelated package that
    has nothing to do with the 'latent diffusion model' used by InvokeAI. Installing
    ldm will make matters worse. If you've installed ldm, uninstall it with
    `pip uninstall ldm`.

### Corrupted configuration file

Everything seems to install ok, but `invoke` complains of a corrupted
configuration file and goes back into the configuration process (asking you to
download models, etc), but this doesn't fix the problem.

This issue is often caused by a misconfigured configuration directive in the
`invokeai\invokeai.init` initialization file that contains startup settings. The
easiest way to fix the problem is to move the file out of the way and re-run
`configure_invokeai.py`. Enter the developer's console (option 3 of the launcher
script) and run this command:

```cmd
configure_invokeai.py --root=.
```

Note the dot (.) after `--root`. It is part of the command.

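To "move the file out of the way", rename it before re-running the
configuration script; `invokeai.init.old` below is just an arbitrary backup
name:

```bash
# Linux/Mac, from inside the invokeai runtime directory
mv invokeai.init invokeai.init.old
```

```cmd
REM Windows, from inside the invokeai runtime directory
ren invokeai.init invokeai.init.old
```
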
_If none of these maneuvers fixes the problem_ then please report the problem to
the [InvokeAI Issues](https://github.com/invoke-ai/InvokeAI/issues) section, or
visit our [Discord Server](https://discord.gg/ZmtBAhwWhy) for interactive
assistance.

### Other problems

If you run into problems during or after installation, the InvokeAI team is
available to help you. Either create an
[Issue](https://github.com/invoke-ai/InvokeAI/issues) at our GitHub site, or
make a request for help on the "bugs-and-support" channel of our
[Discord server](https://discord.gg/ZmtBAhwWhy). We are a 100% volunteer
organization, but typically somebody will be available to help you within 24
hours, and often much sooner.

## Updating to newer versions

This distribution is changing rapidly, and we add new features on a daily basis.
To update to the latest released version (recommended), run the `update.sh`
(Linux/Mac) or `update.bat` (Windows) script. This will fetch the latest
release and re-run the `configure_invokeai` script to download any updated
model files that may be needed. You can also use this to add additional models
that you did not select at installation time.

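For example, from the runtime directory created by the installer (a sketch
assuming the default `~/invokeai` location; on Windows run `update.bat`
instead):

```bash
cd ~/invokeai
./update.sh
```
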
You can now close the developer console and run `invoke` as before. If you get
complaints about missing models, then you may need to do the additional step of
running `configure_invokeai.py`. This happens relatively infrequently. To do
this, simply open up the developer's console again and type
`python scripts/configure_invokeai.py`.

You may also use the `update` script to install any selected version of
InvokeAI. From https://github.com/invoke-ai/InvokeAI, navigate to the zip file
link of the version you wish to install. You can find the zip links by going to
one of the release pages and looking for the **Assets** section at the
bottom. Alternatively, you can browse "branches" and "tags" at the top of the
big code directory on the InvokeAI welcome page. When you find the version you
want to install, go to the green "<> Code" button at the top, and copy the
"Download ZIP" link.

Now run `update.sh` (or `update.bat`) with the URL of the desired InvokeAI
version as its argument. For example, this will install the old 2.2.0 release:

```cmd
update.sh https://github.com/invoke-ai/InvokeAI/archive/refs/tags/v2.2.0.zip
```

docs/installation/020_INSTALL_MANUAL.md (new file, 579 lines)

---
title: Installing Manually
---

<figure markdown>

# :fontawesome-brands-linux: Linux | :fontawesome-brands-apple: macOS | :fontawesome-brands-windows: Windows

</figure>

!!! warning "This is for advanced users"

    who are already experienced with using conda or pip

## Introduction

You have two choices for manual installation. The [first
one](#pip-install) uses basic Python virtual environment (`venv`)
commands and the PIP package manager. The [second one](#conda-method)
is based on the Anaconda3 package manager (`conda`). Both methods require
you to enter commands on the terminal, also known as the "console".

Note that the conda install method is currently deprecated and will no
longer be supported at some point in the future.

On Windows systems you are encouraged to install and use
[Powershell](https://learn.microsoft.com/en-us/powershell/scripting/install/installing-powershell-on-windows?view=powershell-7.3),
which provides compatibility with Linux and Mac shells and nice
features such as command-line completion.

## pip Install

To install InvokeAI with virtual environments and the PIP package
manager, please follow these steps:

1.  Make sure you are using Python 3.9 or 3.10. The rest of the install
    procedure depends on this:

    ```bash
    python -V
    ```

2.  From within the InvokeAI top-level directory, create and activate a virtual
    environment named `invokeai`:

    ```bash
    python -mvenv invokeai
    source invokeai/bin/activate
    ```

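    On Windows (the `source` command above is for Linux/Mac shells), the
    equivalent activation step in a Command Prompt is roughly:

    ```cmd
    python -mvenv invokeai
    invokeai\Scripts\activate
    ```

    (In PowerShell, use `.\invokeai\Scripts\Activate.ps1` instead.)
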
3.  Make sure that pip is installed in your virtual environment and up to date:

    ```bash
    python -mensurepip --upgrade
    python -mpip install --upgrade pip
    ```

4.  Pick the correct `requirements*.txt` file for your hardware and operating
    system.

    We have created a series of environment files suited for different operating
    systems and GPU hardware. They are located in the
    `environments-and-requirements` directory:

    <figure markdown>

    | filename                            | OS                                                               |
    | :---------------------------------: | :--------------------------------------------------------------: |
    | requirements-lin-amd.txt            | Linux with an AMD (ROCm) GPU                                     |
    | requirements-lin-arm64.txt          | Linux running on arm64 systems                                   |
    | requirements-lin-cuda.txt           | Linux with an NVIDIA (CUDA) GPU                                  |
    | requirements-mac-mps-cpu.txt        | Macintoshes with MPS acceleration                                |
    | requirements-lin-win-colab-cuda.txt | Windows with an NVIDIA (CUDA) GPU<br>(supports Google Colab too) |

    </figure>

    Select the appropriate requirements file, and make a link to it from
    `requirements.txt` in the top-level InvokeAI directory. The command to do
    this from the top-level directory is:

    !!! example ""

        === "Macintosh and Linux"

            !!! info "Replace `xxx` and `yyy` with the appropriate OS and GPU codes."

            ```bash
            ln -sf environments-and-requirements/requirements-xxx-yyy.txt requirements.txt
            ```

        === "Windows"

            !!! info "On Windows, admin privileges are required to make links, so we use the copy command instead."

            ```cmd
            copy environments-and-requirements\requirements-lin-win-colab-cuda.txt requirements.txt
            ```

        !!! warning

            Please do not link or copy `environments-and-requirements/requirements-base.txt`.
            This is a base requirements file that does not have the platform-specific
            libraries. Also, be sure to link or copy the platform-specific file to
            a top-level file named `requirements.txt` as shown here. Running pip on
            a requirements file in a subdirectory will not work as expected.

    When this is done, confirm that a file named `requirements.txt` has been
    created in the InvokeAI root directory and that it points to the correct
    file in `environments-and-requirements`.

5.  Run PIP

    Be sure that the `invokeai` environment is active before doing this:

    ```bash
    pip install --prefer-binary -r requirements.txt
    ```

6.  Set up the runtime directory

    In this step you will initialize a runtime directory that will
    contain the models, model config files, directory for textual
    inversion embeddings, and your outputs. This keeps the runtime
    directory separate from the source code and aids in updating.

    You may pick any location for this directory using the `--root_dir`
    option (abbreviated `--root`). If you don't pass this option, it will
    default to `invokeai` in your home directory.

    ```bash
    configure_invokeai.py --root_dir ~/Programs/invokeai
    ```

    The script `configure_invokeai.py` will interactively guide you through the
    process of downloading and installing the weights files needed for InvokeAI.
    Note that the main Stable Diffusion weights file is protected by a license
    agreement that you have to agree to. The script will list the steps you need
    to take to create an account on the site that hosts the weights files,
    accept the agreement, and provide an access token that allows InvokeAI to
    legally download and install the weights files.

    If you get an error message about a module not being installed, check that
    the `invokeai` environment is active and, if not, repeat step 5.

    Note that `configure_invokeai.py` and `invoke.py` should be installed
    under your virtual environment directory and the system should find them
    on the PATH. If this isn't working on your system, you can call the
    scripts directly using `python scripts/configure_invokeai.py` and
    `python scripts/invoke.py`.

    !!! tip

        If you have already downloaded the weights file(s) for another Stable
        Diffusion distribution, you may skip this step (by selecting "skip" when
        prompted) and configure InvokeAI to use the previously-downloaded files. The
        process for this is described [here](INSTALLING_MODELS.md).

7.  Run the command-line or the web interface:

    Activate the environment (with `source invokeai/bin/activate`), and then
    run the script `invoke.py`. If you selected a non-default location
    for the runtime directory, please specify the path with the `--root_dir`
    option (abbreviated below as `--root`):

    !!! example ""

        !!! warning "Make sure that the virtual environment is activated, which should put `(invokeai)` in front of your prompt!"

        === "CLI"

            ```bash
            invoke.py --root ~/Programs/invokeai
            ```

        === "local Webserver"

            ```bash
            invoke.py --web --root ~/Programs/invokeai
            ```

        === "Public Webserver"

            ```bash
            invoke.py --web --host 0.0.0.0 --root ~/Programs/invokeai
            ```

        If you choose to run the web interface, point your browser at
        http://localhost:9090 in order to load the GUI.

    !!! tip

        You can permanently set the location of the runtime directory by setting the environment variable INVOKEAI_ROOT to the path of the directory.

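        For example (a sketch; adjust the path to your own runtime directory):

        ```bash
        # Linux/Mac: add to ~/.bashrc or ~/.zshrc to make it permanent
        export INVOKEAI_ROOT=~/Programs/invokeai
        ```

        On Windows, `setx INVOKEAI_ROOT "C:\Users\YourName\invokeai"` sets the
        variable persistently for future sessions.
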
8.  Render away!

    Browse the [features](../features/CLI.md) section to learn about all the things you
    can do with InvokeAI.

    Note that some GPUs are slow to warm up. In particular, when using an AMD
    card with the ROCm driver, you may have to wait for over a minute the first
    time you try to generate an image. Fortunately, after the warm-up period
    rendering will be fast.

9.  Subsequently, to relaunch the script, be sure to activate the `invokeai`
    virtual environment (with `source invokeai/bin/activate`), enter the
    `InvokeAI` directory, and then launch the invoke script. If you forget to
    activate the environment, the script will fail with multiple
    `ModuleNotFound` errors.

    !!! tip

        Do not move the source code repository after installation. The virtual environment directory has absolute paths in it that get confused if the directory is moved.

---

### Conda method

1.  Check that your system meets the
    [hardware requirements](../index.md#hardware-requirements) and has the
    appropriate GPU drivers installed. In particular, if you are a Linux user
    with an AMD GPU installed, you may need to install the
    [ROCm driver](https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html).

    InvokeAI does not yet support Windows machines with AMD GPUs due to the lack
    of ROCm driver support on this platform.

    To confirm that the appropriate drivers are installed, run `nvidia-smi` on
    NVIDIA/CUDA systems, and `rocm-smi` on AMD systems. These should return
    information about the installed video card.

    Macintosh users with MPS acceleration, or anybody with a CPU-only system,
    can skip this step.

2.  You will need to install Anaconda3 and Git if they are not already
    available. Use your operating system's preferred package manager, or
    download the installers manually. You can find them here:

    - [Anaconda3](https://www.anaconda.com/)
    - [git](https://git-scm.com/downloads)

3.  Clone the [InvokeAI](https://github.com/invoke-ai/InvokeAI) source code from
    GitHub:

    ```bash
    git clone https://github.com/invoke-ai/InvokeAI.git
    ```

    This will create an InvokeAI folder where you will follow the rest of the
    steps.

4.  Enter the newly-created InvokeAI folder:

    ```bash
    cd InvokeAI
    ```

    From this step forward make sure that you are working in the InvokeAI
    directory!

5.  Select the appropriate environment file:

    We have created a series of environment files suited for different operating
    systems and GPU hardware. They are located in the
    `environments-and-requirements` directory:

    <figure markdown>

    | filename                 | OS                              |
    | :----------------------: | :-----------------------------: |
    | environment-lin-amd.yml  | Linux with an AMD (ROCm) GPU    |
    | environment-lin-cuda.yml | Linux with an NVIDIA CUDA GPU   |
    | environment-mac.yml      | Macintosh                       |
    | environment-win-cuda.yml | Windows with an NVIDIA CUDA GPU |

    </figure>

    Choose the appropriate environment file for your system and link or copy it
    to `environment.yml` in InvokeAI's top-level directory. To do so, run the
    following command from the repository root:

    !!! Example ""

        === "Macintosh and Linux"

            !!! todo "Replace `xxx` and `yyy` with the appropriate OS and GPU codes as seen in the table above"

            ```bash
            ln -sf environments-and-requirements/environment-xxx-yyy.yml environment.yml
            ```

            When this is done, confirm that a file `environment.yml` has been linked in
            the InvokeAI root directory and that it points to the correct file in
            `environments-and-requirements`:

            ```bash
            ls -la
            ```

        === "Windows"

            !!! todo "Since it requires admin privileges to create links, we will use the copy command to create your `environment.yml`"

            ```cmd
            copy environments-and-requirements\environment-win-cuda.yml environment.yml
            ```

            Afterwards verify that the file `environment.yml` has been created, either via
            Explorer or by using the command `dir` from the terminal:

            ```cmd
            dir
            ```

        !!! warning "Do not try to run conda directly on the file in the environments subdirectory. This won't work. Instead, copy or link it to the top-level directory as shown."

6.  Create the conda environment:

    ```bash
    conda env update
    ```

    This will create a new environment named `invokeai` and install all InvokeAI
    dependencies into it. If something goes wrong you should take a look at
    [troubleshooting](#troubleshooting).

7.  Activate the `invokeai` environment:

    In order to use the newly created environment you will first need to
    activate it:

    ```bash
    conda activate invokeai
    ```

    Your command-line prompt should change to indicate that `invokeai` is active
    by prepending `(invokeai)`.

8.  Set up the runtime directory

    In this step you will initialize a runtime directory that will
    contain the models, model config files, directory for textual
    inversion embeddings, and your outputs. This keeps the runtime
    directory separate from the source code and aids in updating.

    You may pick any location for this directory using the `--root_dir`
    option (abbreviated `--root`). If you don't pass this option, it will
    default to `invokeai` in your home directory.

    ```bash
    python scripts/configure_invokeai.py --root_dir ~/Programs/invokeai
    ```

    The script `configure_invokeai.py` will interactively guide you through the
    process of downloading and installing the weights files needed for InvokeAI.
    Note that the main Stable Diffusion weights file is protected by a license
    agreement that you have to agree to. The script will list the steps you need
    to take to create an account on the site that hosts the weights files,
    accept the agreement, and provide an access token that allows InvokeAI to
    legally download and install the weights files.

    If you get an error message about a module not being installed, check that
    the `invokeai` environment is active and, if not, repeat step 7.

    Note that `configure_invokeai.py` and `invoke.py` should be
    installed under your conda directory and the system should find
    them automatically on the PATH. If this isn't working on your
    system, you can call the scripts directly using
    `python scripts/configure_invokeai.py` and `python scripts/invoke.py`.

    !!! tip

        If you have already downloaded the weights file(s) for another Stable
        Diffusion distribution, you may skip this step (by selecting "skip" when
        prompted) and configure InvokeAI to use the previously-downloaded files. The
        process for this is described [here](INSTALLING_MODELS.md).

9.  Run the command-line or the web interface:

    Activate the environment (with `conda activate invokeai`), and then
    run the script `invoke.py`. If you selected a non-default location
    for the runtime directory, please specify the path with the `--root_dir`
    option (abbreviated below as `--root`):

    !!! example ""

        !!! warning "Make sure that the conda environment is activated, which should put `(invokeai)` in front of your prompt!"

        === "CLI"

            ```bash
            invoke.py --root ~/Programs/invokeai
            ```

        === "local Webserver"

            ```bash
            invoke.py --web --root ~/Programs/invokeai
            ```

        === "Public Webserver"

            ```bash
            invoke.py --web --host 0.0.0.0 --root ~/Programs/invokeai
            ```

        If you choose to run the web interface, point your browser at
        http://localhost:9090 in order to load the GUI.

    !!! tip

        You can permanently set the location of the runtime directory by setting the environment variable INVOKEAI_ROOT to the path of your choice.

10. Render away!

    Browse the [features](../features/CLI.md) section to learn about all the things you
    can do with InvokeAI.

    Note that some GPUs are slow to warm up. In particular, when using an AMD
    card with the ROCm driver, you may have to wait for over a minute the first
    time you try to generate an image. Fortunately, after the warm-up period
    rendering will be fast.

11. Subsequently, to relaunch the script, be sure to run `conda activate
    invokeai`, enter the `InvokeAI` directory, and then launch the invoke
    script. If you forget to activate the 'invokeai' environment, the script
    will fail with multiple `ModuleNotFound` errors.

## Creating an "install" version of InvokeAI

If you wish, you can install InvokeAI and all its dependencies in the
runtime directory. This allows you to delete the source code
repository and eliminates the need to provide `--root_dir` at startup
time. Note that this only works with the PIP method.

1.  Follow the instructions for the PIP install, but in step #2 put the
    virtual environment into the runtime directory. For example, assuming the
    runtime directory lives in `~/Programs/invokeai`, you'd run:

    ```bash
    python -mvenv ~/Programs/invokeai
    ```

2.  Now follow steps 3 to 5 in the PIP recipe, ending with the `pip install`
    step.

3.  Run one additional step while you are in the source code repository
    directory: `pip install .` (note the dot at the end).

4.  That's all! Now, whenever you activate the virtual environment,
    `invoke.py` will know where to look for the runtime directory without
    needing a `--root_dir` argument. In addition, you can now move or
    delete the source code repository entirely.

    (Don't move the runtime directory!)

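Putting the whole "install" recipe together, the sequence looks roughly like
this (a sketch assuming the `~/Programs/invokeai` runtime location used above,
a Linux/Mac shell, and the requirements file appropriate for your platform):

```bash
python -mvenv ~/Programs/invokeai          # step 1: venv inside the runtime dir
source ~/Programs/invokeai/bin/activate
python -mensurepip --upgrade && python -mpip install --upgrade pip
ln -sf environments-and-requirements/requirements-xxx-yyy.txt requirements.txt
pip install --prefer-binary -r requirements.txt
pip install .                              # step 3: install InvokeAI itself
```
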
## Updating to newer versions of the script

This distribution is changing rapidly. If you used the `git clone` method
(step 3) to download the InvokeAI directory, then to update to the latest and
greatest version, launch the Anaconda window, enter `InvokeAI` and type:

```bash
git pull
conda env update
python scripts/configure_invokeai.py --no-interactive #optional
```

This will bring your local copy into sync with the remote one. The last step may
be needed to take advantage of new features or released models. The
`--no-interactive` flag will prevent the script from prompting you to download
the big Stable Diffusion weights files.

## Troubleshooting

Here are some common issues and their suggested solutions.

### Conda

#### Conda fails before completing `conda update`

The usual source of these errors is a package incompatibility. While we have
tried to minimize these, over time packages get updated and sometimes introduce
incompatibilities.

We suggest that you search
[Issues](https://github.com/invoke-ai/InvokeAI/issues) or the "bugs-and-support"
channel of the [InvokeAI Discord](https://discord.gg/ZmtBAhwWhy).

You may also try to install the broken packages manually using PIP. To do this,
activate the `invokeai` environment, and run `pip install` with the name and
version of the package that is causing the incompatibility. For example:

```bash
pip install test-tube==0.7.5
```

You can keep doing this until all requirements are satisfied and the `invoke.py`
script runs without errors. Please report to
[Issues](https://github.com/invoke-ai/InvokeAI/issues) what you were able to do
to work around the problem so that others can benefit from your investigation.

#### Creating the Conda environment fails on MacOS

If creating the conda environment fails with an lmdb error, this is most likely
caused by Clang. Run `brew config` to see which Clang is installed on your Mac.
If Clang isn't installed, that's causing the error. Start by installing
additional Xcode command line tools, followed by `brew install llvm`:

```bash
xcode-select --install
brew install llvm
```

If `brew config` shows Clang installed, update to the latest llvm and try
creating the environment again.

#### `configure_invokeai.py` or `invoke.py` crashes at an early stage

This is usually due to an incomplete or corrupted Conda install. Make sure you
have linked to the correct environment file and run `conda update` again.

If the problem persists, a more extreme measure is to clear Conda's caches and
remove the `invokeai` environment:

```bash
conda deactivate
conda env remove -n invokeai
conda clean -a
conda update
```

This removes all cached library files, including ones that may have been
corrupted somehow. (This is not supposed to happen, but does anyway.)

#### `invoke.py` crashes at a later stage

If the CLI or web site had been working ok, but something unexpected happens
later on during the session, you've encountered a code bug that is probably
unrelated to an install issue. Please search
[Issues](https://github.com/invoke-ai/InvokeAI/issues), file a bug report, or
ask for help on [Discord](https://discord.gg/ZmtBAhwWhy).

#### My renders are running very slowly

You may have installed the wrong torch (machine learning) package, and the
system is running on CPU rather than the GPU. To check, look at the log messages
that appear when `invoke.py` is first starting up. One of the earlier lines
should say `Using device type cuda`. On AMD systems, it will also say "cuda",
and on Macintoshes, it should say "mps". If instead the message says it is
running on "cpu", then you may need to install the correct torch library.

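You can also query torch directly from the activated `invokeai` environment; a
quick sketch (each command prints `True` or `False`):

```bash
# NVIDIA (CUDA) and AMD (ROCm) builds both report through the CUDA interface
python -c "import torch; print(torch.cuda.is_available())"
# Apple Silicon / MPS builds (requires a recent torch release)
python -c "import torch; print(torch.backends.mps.is_available())"
```
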
You may be able to fix this by installing a different torch library. Here are
the magic incantations for Conda and PIP.

!!! todo "For CUDA systems"

    - conda

    ```bash
    conda install pytorch torchvision torchaudio pytorch-cuda=11.6 -c pytorch -c nvidia
    ```

    - pip

    ```bash
    pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu116
    ```

!!! todo "For AMD systems"

    - conda

    ```bash
    conda activate invokeai
    pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/rocm5.2/
    ```

    - pip

    ```bash
    pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/rocm5.2/
    ```

More information and troubleshooting tips can be found at https://pytorch.org.

Docker installation guide: title updated

@@ -1,5 +1,5 @@
 ---
-title: Docker
+title: Installing with Docker
 ---

 # :fontawesome-brands-docker: Docker

@ -1,315 +0,0 @@
|
|||||||
---
|
|
||||||
title: InvokeAI Automated Installation
|
|
||||||
---
|
|
||||||
|
|
||||||
# InvokeAI Automated Installation
|
|
||||||
|
|
||||||
## Introduction
|
|
||||||
|
|
||||||
The automated installer is a shell script that attempts to automate every step
|
|
||||||
needed to install and run InvokeAI on a stock computer running recent versions
|
|
||||||
of Linux, MacOS or Windows. It will leave you with a version that runs a stable
|
|
||||||
version of InvokeAI with the option to upgrade to experimental versions later.
|
|
||||||
|
|
||||||
## Walk through
|
|
||||||
|
|
||||||
1. Make sure that your system meets the
|
|
||||||
[hardware requirements](../index.md#hardware-requirements) and has the
|
|
||||||
appropriate GPU drivers installed. In particular, if you are a Linux user
|
|
||||||
with an AMD GPU installed, you may need to install the
|
|
||||||
[ROCm driver](https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html).
|
|
||||||
|
|
||||||
!!! info "Required Space"
|
|
||||||
|
|
||||||
Installation requires roughly 18G of free disk space to load the libraries and
|
|
||||||
recommended model weights files.
|
|
||||||
|
|
||||||
2. Check that your system has an up-to-date Python installed. To do this, open
|
|
||||||
up a command-line window ("Terminal" on Linux and Macintosh, "Command" or
|
|
||||||
"Powershell" on Windows) and type `python --version`. If Python is
|
|
||||||
installed, it will print out the version number. If it is version `3.9.1` or
|
|
||||||
higher, you meet requirements.
|
|
||||||
|
|
||||||
!!! warning "If you see an older version, or get a command not found error"
|
|
||||||
|
|
||||||
Go to [Python Downloads](https://www.python.org/downloads/) and
|
|
||||||
download the appropriate installer package for your platform. We recommend
|
|
||||||
[Version 3.10.9](https://www.python.org/downloads/release/python-3109/),
|
|
||||||
which has been extensively tested with InvokeAI.
|
|
||||||
|
|
||||||
!!! warning "At this time we do not recommend Python 3.11"
|
|
||||||
|
|
||||||
=== "Windows users"
|
|
||||||
|
|
||||||
- During the Python configuration process,
|
|
||||||
Please look out for a checkbox to add Python to your PATH
|
|
||||||
and select it. If the install script complains that it can't
|
|
||||||
find python, then open the Python installer again and choose
|
|
||||||
"Modify" existing installation.
|
|
||||||
|
|
||||||
- There is a slight possibility that you will encountered
|
|
||||||
DLL load errors at the very end of the installation process. This is caused
|
|
||||||
by not having up to date Visual C++ redistributable libraries. If this
|
|
||||||
happens to you, you can install the C++ libraries from this site:
|
|
||||||
https://learn.microsoft.com/en-us/cpp/windows/deploying-native-desktop-applications-visual-cpp?view=msvc-170
|
|
||||||
|
|
||||||
=== "Mac users"
|
|
||||||
|
|
||||||
- After installing Python, you may need to run the
|
|
||||||
following command from the Terminal in order to install the Web
|
|
||||||
certificates needed to download model data from https sites. If
|
|
||||||
you see lots of CERTIFICATE ERRORS during the last part of the
|
|
||||||
install, this is the problem, and you can fix it with this command:
|
|
||||||
|
|
||||||
`/Applications/Python\ 3.10/Install\ Certificates.command`
|
|
||||||
|
|
||||||
- You may need to install the Xcode command line tools. These
|
|
||||||
are a set of tools that are needed to run certain applications in a
|
|
||||||
Terminal, including InvokeAI. This package is provided directly by Apple.
|
|
||||||
|
|
||||||
- To install, open a terminal window and run `xcode-select
|
|
||||||
--install`. You will get a macOS system popup guiding you through the
|
|
||||||
install. If you already have them installed, you will instead see some
|
|
||||||
output in the Terminal advising you that the tools are already installed.
|
|
||||||
|
|
||||||
- More information can be found here:
|
|
||||||
https://www.freecodecamp.org/news/install-xcode-command-line-tools/
|
|
||||||
|
|
||||||
=== "Linux users"
|
|
||||||
|
|
||||||
- See [Installing Python in Ubuntu](#installing-python-in-ubuntu) for some
|
|
||||||
platform-specific tips.
|
|
||||||
|
|
||||||
3. The source installer is distributed in ZIP files. Go to the
|
|
||||||
[latest release](https://github.com/invoke-ai/InvokeAI/releases/latest), and
|
|
||||||
look for a series of files named:
|
|
||||||
|
|
||||||
- [InvokeAI-installer-2.2.4-mac.zip](https://github.com/invoke-ai/InvokeAI/releases/latest/download/InvokeAI-installer-2.2.4-mac.zip)
|
|
||||||
- [InvokeAI-installer-2.2.4-windows.zip](https://github.com/invoke-ai/InvokeAI/releases/latest/download/InvokeAI-installer-2.2.4-windows.zip)
|
|
||||||
- [InvokeAI-installer-2.2.4-linux.zip](https://github.com/invoke-ai/InvokeAI/releases/latest/download/InvokeAI-installer-2.2.4-linux.zip)
|
|
||||||
|
|
||||||
Download the one that is appropriate for your operating system.
|
|
||||||
|
|
||||||
4. Unpack the zip file into a convenient directory. This will create a new
|
|
||||||
directory named "InvokeAI-Installer". This example shows how this would look
|
|
||||||
using the `unzip` command-line tool, but you may use any graphical or
|
|
||||||
command-line Zip extractor:
|
|
||||||
|
|
||||||
```cmd
|
|
||||||
C:\Documents\Linco> unzip InvokeAI-installer-2.2.4-windows.zip
|
|
||||||
Archive: C: \Linco\Downloads\InvokeAI-installer-2.2.4-windows.zip
|
|
||||||
creating: InvokeAI-Installer\
|
|
||||||
inflating: InvokeAI-Installer\install.bat
|
|
||||||
inflating: InvokeAI-Installer\readme.txt
|
|
||||||
...
|
|
||||||
```
|
|
||||||
|
|
||||||
After successful installation, you can delete the `InvokeAI-Installer`
|
|
||||||
directory.
|
|
||||||
|
|
||||||
5. **Windows only** Please double-click on the file WinLongPathsEnabled.reg and
|
|
||||||
accept the dialog box that asks you if you wish to modify your registry.
|
|
||||||
This activates long filename support on your system and will prevent
|
|
||||||
mysterious errors during installation.
|
|
||||||
|
|
||||||
6. If you are using a desktop GUI, double-click the installer file. It will be
|
|
||||||
named `install.bat` on Windows systems and `install.sh` on Linux and
|
|
||||||
Macintosh systems.
|
|
||||||
|
|
||||||
On Windows systems you will probably get an "Untrusted Publisher" warning.
|
|
||||||
Click on "More Info" and select "Run Anyway." You trust us, right?
|
|
||||||
|
|
||||||
7. Alternatively, from the command line, run the shell script or .bat file:
|
|
||||||
|
|
||||||
```cmd
|
|
||||||
C:\Documents\Linco> cd InvokeAI-Installer
|
|
||||||
C:\Documents\Linco\invokeAI> install.bat
|
|
||||||
```
|
|
||||||
|
|
||||||
8. The script will ask you to choose where to install InvokeAI. Select a
|
|
||||||
directory with at least 18G of free space for a full install. InvokeAI and
|
|
||||||
all its support files will be installed into a new directory named
|
|
||||||
`invokeai` located at the location you specify.
|
|
||||||
|
|
||||||
- The default is to install the `invokeai` directory in your home directory,
|
|
||||||
usually `C:\Users\YourName\invokeai` on Windows systems,
|
|
||||||
`/home/YourName/invokeai` on Linux systems, and `/Users/YourName/invokeai`
|
|
||||||
on Macintoshes, where "YourName" is your login name.
|
|
||||||
|
|
||||||
- The script uses tab autocompletion to suggest directory path completions.
|
|
||||||
Type part of the path (e.g. "C:\Users") and press ++tab++ repeatedly
|
|
||||||
to suggest completions.
|
|
||||||
|
|
||||||
9. Sit back and let the install script work. It will install the third-party
|
|
||||||
libraries needed by InvokeAI, then download the current InvokeAI release and
|
|
||||||
install it.
|
|
||||||
|
|
||||||
Be aware that some of the library download and install steps take a long
|
|
||||||
time. In particular, the `pytorch` package is quite large and often appears
|
|
||||||
to get "stuck" at 99.9%. Have patience and the installation step will
|
|
||||||
eventually resume. However, there are occasions when the library install
|
|
||||||
does legitimately get stuck. If you have been waiting for more than ten
|
|
||||||
minutes and nothing is happening, you can interrupt the script with ^C. You
|
|
||||||
may restart it and it will pick up where it left off.
|
|
||||||
|
|
||||||
10. After installation completes, the installer will launch a script called
|
|
||||||
`configure_invokeai.py`, which will guide you through the first-time process
|
|
||||||
of selecting one or more Stable Diffusion model weights files, downloading
|
|
||||||
and configuring them. We provide a list of popular models that InvokeAI
|
|
||||||
performs well with. However, you can add more weight files later on using
|
|
||||||
the command-line client or the Web UI. See
|
|
||||||
[Installing Models](INSTALLING_MODELS.md) for details.
|
|
||||||
|
|
||||||
Note that the main Stable Diffusion weights file is protected by a license
|
|
||||||
agreement that you must agree to in order to use. The script will list the
|
|
||||||
steps you need to take to create an account on the official site that hosts
|
|
||||||
the weights files, accept the agreement, and provide an access token that
|
|
||||||
allows InvokeAI to legally download and install the weights files.
|
|
||||||
|
|
||||||
If you have already downloaded the weights file(s) for another Stable
|
|
||||||
Diffusion distribution, you may skip this step (by selecting "skip" when
|
|
||||||
prompted) and configure InvokeAI to use the previously-downloaded files. The
|
|
||||||
process for this is described in [Installing Models](INSTALLING_MODELS.md).
|
|
||||||
|
|
||||||
11. The script will now exit and you'll be ready to generate some images. Look
|
|
||||||
for the directory `invokeai` installed in the location you chose at the
|
|
||||||
beginning of the install session. Look for a shell script named `invoke.sh`
|
|
||||||
(Linux/Mac) or `invoke.bat` (Windows). Launch the script by double-clicking
|
|
||||||
it or typing its name at the command-line:
|
|
||||||
|
|
||||||
```cmd
|
|
||||||
C:\Documents\Linco> cd invokeai
|
|
||||||
C:\Documents\Linco\invokeAI> invoke.bat
|
|
||||||
```
|
|
||||||
|
|
||||||
- The `invoke.bat` (`invoke.sh`) script will give you the choice of starting
|
|
||||||
(1) the command-line interface, or (2) the web GUI. If you start the
|
|
||||||
latter, you can load the user interface by pointing your browser at
|
|
||||||
http://localhost:9090.
|
|
||||||
|
|
||||||
- The script also offers you a third option labeled "open the developer
|
|
||||||
console". If you choose this option, you will be dropped into a
|
|
||||||
command-line interface in which you can run python commands directly,
|
|
||||||
access developer tools, and launch InvokeAI with customized options.
|
|
||||||
|
|
||||||
12. You can launch InvokeAI with several different command-line arguments that
|
|
||||||
customize its behavior. For example, you can change the location of the
|
|
||||||
image output directory, or select your favorite sampler. See the
|
|
||||||
[Command-Line Interface](../features/CLI.md) for a full list of the options.
|
|
||||||
|
|
||||||
- To set defaults that will take effect every time you launch InvokeAI,
|
|
||||||
use a text editor (e.g. Notepad) to exit the file
|
|
||||||
`invokeai\invokeai.init`. It contains a variety of examples that you can
|
|
||||||
follow to add and modify launch options.
|
|
||||||
|
|
||||||
!!! warning "The `invokeai` directory contains the `invoke` application, its
|
|
||||||
configuration files, the model weight files, and outputs of image generation.
|
|
||||||
Once InvokeAI is installed, do not move or remove this directory."
|
|
||||||
|
|
||||||
## Troubleshooting
|
|
||||||
|
|
||||||
### _Package dependency conflicts_
|
|
||||||
|
|
||||||
If you have previously installed InvokeAI or another Stable Diffusion package,
|
|
||||||
the installer may occasionally pick up outdated libraries and either the
|
|
||||||
installer or `invoke` will fail with complaints about library conflicts. You can
|
|
||||||
address this by entering the `invokeai` directory and running `update.sh`, which
|
|
||||||
will bring InvokeAI up to date with the latest libraries.

### ldm from pypi

!!! warning

    Some users have tried to correct dependency problems by installing
    the `ldm` package from PyPI. Unfortunately this is an unrelated package that
    has nothing to do with the 'latent diffusion model' used by InvokeAI. Installing
    ldm will make matters worse. If you've installed ldm, uninstall it with
    `pip uninstall ldm`.

### Corrupted configuration file

Everything seems to install ok, but `invoke` complains of a corrupted
configuration file and goes back into the configuration process (asking you to
download models, etc), but this doesn't fix the problem.

This issue is often caused by a misconfigured configuration directive in the
`invokeai\invokeai.init` initialization file that contains startup settings. The
easiest way to fix the problem is to move the file out of the way and re-run
`configure_invokeai.py`. Enter the developer's console (option 3 of the launcher
script) and run this command:

```cmd
configure_invokeai.py --root=.
```

Note the dot (.) after `--root`. It is part of the command.
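
If you would like to keep the old file for reference, move it aside rather than
deleting it before re-running the script. A sketch for the Windows developer
console (on Linux/Mac use `mv` instead):

```cmd
move invokeai.init invokeai.init.orig
```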

_If none of these maneuvers fixes the problem_ then please report the problem to
the [InvokeAI Issues](https://github.com/invoke-ai/InvokeAI/issues) section, or
visit our [Discord Server](https://discord.gg/ZmtBAhwWhy) for interactive
assistance.

### other problems

If you run into problems during or after installation, the InvokeAI team is
available to help you. Either create an
[Issue](https://github.com/invoke-ai/InvokeAI/issues) at our GitHub site, or
make a request for help on the "bugs-and-support" channel of our
[Discord server](https://discord.gg/ZmtBAhwWhy). We are a 100% volunteer
organization, but typically somebody will be available to help you within 24
hours, and often much sooner.

## Updating to newer versions

This distribution is changing rapidly, and we add new features on a daily basis.
To update to the latest released version (recommended), run the `update.sh`
(Linux/Mac) or `update.bat` (Windows) scripts. This will fetch the latest
release and re-run the `configure_invokeai` script to download any updated
model files that may be needed. You can also use this to add additional models
that you did not select at installation time.

You can now close the developer console and run `invoke` as before. If you get
complaints about missing models, then you may need to do the additional step of
running `configure_invokeai.py`. This happens relatively infrequently. To do
this, simply open up the developer's console again and type
`python scripts/configure_invokeai.py`.

You may also use the `update` script to install any selected version of
InvokeAI. From https://github.com/invoke-ai/InvokeAI, navigate to the zip file
link of the version you wish to install. You can find the zip links by going to
one of the release pages and looking for the **Assets** section at the
bottom. Alternatively, you can browse "branches" and "tags" at the top of the
big code directory on the InvokeAI welcome page. When you find the version you
want to install, go to the green "<> Code" button at the top, and copy the
"Download ZIP" link.

Now run `update.sh` (or `update.bat`) with the URL of the desired InvokeAI
version as its argument. For example, this will install the old 2.2.0 release.

```cmd
update.sh https://github.com/invoke-ai/InvokeAI/archive/refs/tags/v2.2.0.zip
```
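
On Windows, the equivalent would be to run the same command through
`update.bat` (assuming it accepts the same URL argument as `update.sh`):

```cmd
update.bat https://github.com/invoke-ai/InvokeAI/archive/refs/tags/v2.2.0.zip
```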

## Installing Python in Ubuntu

For reasons that are not entirely clear, installing the correct version of
Python can be a bit of a challenge on Ubuntu, Linux Mint, and other
Ubuntu-derived distributions.

In particular, Ubuntu version 20.04 LTS comes with an old version of Python,
does not come with the PIP package manager installed, and to make matters worse,
the `python` command points to Python2, not Python3.

Here is the quick recipe for bringing your system up to date:

```
sudo apt update
sudo apt install python3.9
sudo apt install python3-pip
cd /usr/bin
sudo ln -sf python3.9 python3
sudo ln -sf python3 python
```
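
Afterwards you can confirm that the right interpreter and pip are being picked
up (the versions shown are what you would hope to see, not guaranteed):

```
python --version      # should report Python 3.9.x
python -m pip --version
```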

You can still access older versions of Python by calling `python2`, `python3.8`,
etc.

1 docs/installation/INSTALL_AUTOMATED.md (Symbolic link)

@@ -0,0 +1 @@
010_INSTALL_AUTOMATED.md

@@ -1,579 +0,0 @@
---
title: Manual Installation
---

<figure markdown>

# :fontawesome-brands-linux: Linux | :fontawesome-brands-apple: macOS | :fontawesome-brands-windows: Windows

</figure>

!!! warning "This is for advanced Users"

    who are already experienced with using conda or pip

## Introduction

You have two choices for manual installation: the [first
one](#PIP_method) uses basic Python virtual environment (`venv`)
commands and the PIP package manager. The [second one](#Conda_method)
is based on the Anaconda3 package manager (`conda`). Both methods require
you to enter commands on the terminal, also known as the "console".

Note that the conda install method is currently deprecated and will
cease to be supported at some point in the future.

On Windows systems you are encouraged to install and use
[PowerShell](https://learn.microsoft.com/en-us/powershell/scripting/install/installing-powershell-on-windows?view=powershell-7.3),
which provides compatibility with Linux and Mac shells and nice
features such as command-line completion.

## pip Install

To install InvokeAI with virtual environments and the PIP package
manager, please follow these steps:

1. Make sure you are using Python 3.9 or 3.10. The rest of the install
    procedure depends on this:

    ```bash
    python -V
    ```

2. From within the InvokeAI top-level directory, create and activate a virtual
    environment named `invokeai`:

    ```bash
    python -mvenv invokeai
    source invokeai/bin/activate
    ```

3. Make sure that pip is installed in your virtual environment and up to date:

    ```bash
    python -mensurepip --upgrade
    python -mpip install --upgrade pip
    ```

4. Pick the correct `requirements*.txt` file for your hardware and operating
    system.

    We have created a series of environment files suited for different operating
    systems and GPU hardware. They are located in the
    `environments-and-requirements` directory:

    <figure markdown>

    | filename                            | OS                                                                |
    | :---------------------------------: | :---------------------------------------------------------------: |
    | requirements-lin-amd.txt            | Linux with an AMD (ROCm) GPU                                      |
    | requirements-lin-arm64.txt          | Linux running on arm64 systems                                    |
    | requirements-lin-cuda.txt           | Linux with an NVIDIA (CUDA) GPU                                   |
    | requirements-mac-mps-cpu.txt        | Macintoshes with MPS acceleration                                 |
    | requirements-lin-win-colab-cuda.txt | Windows with an NVIDIA (CUDA) GPU<br>(supports Google Colab too)  |

    </figure>

    Select the appropriate requirements file, and make a link to it from
    `requirements.txt` in the top-level InvokeAI directory. The command to do
    this from the top-level directory is:

    !!! example ""

        === "Macintosh and Linux"

            !!! info "Replace `xxx` and `yyy` with the appropriate OS and GPU codes."

            ```bash
            ln -sf environments-and-requirements/requirements-xxx-yyy.txt requirements.txt
            ```

        === "Windows"

            !!! info "on Windows, admin privileges are required to make links, so we use the copy command instead"

            ```cmd
            copy environments-and-requirements\requirements-lin-win-colab-cuda.txt requirements.txt
            ```

    !!! warning

        Please do not link or copy `environments-and-requirements/requirements-base.txt`.
        This is a base requirements file that does not have the platform-specific
        libraries. Also, be sure to link or copy the platform-specific file to
        a top-level file named `requirements.txt` as shown here. Running pip on
        a requirements file in a subdirectory will not work as expected.

    When this is done, confirm that a file named `requirements.txt` has been
    created in the InvokeAI root directory and that it points to the correct
    file in `environments-and-requirements`.
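
    For example, on Linux or Mac you can verify the link with a quick listing
    (a sketch; on Windows use `dir requirements.txt` instead):

    ```bash
    ls -la requirements.txt
    ```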

5. Run PIP

    Be sure that the `invokeai` environment is active before doing this:

    ```bash
    pip install --prefer-binary -r requirements.txt
    ```

6. Set up the runtime directory

    In this step you will initialize a runtime directory that will
    contain the models, model config files, directory for textual
    inversion embeddings, and your outputs. This keeps the runtime
    directory separate from the source code and aids in updating.

    You may pick any location for this directory using the `--root_dir`
    option (abbreviated `--root`). If you don't pass this option, it will
    default to `invokeai` in your home directory.

    ```bash
    configure_invokeai.py --root_dir ~/Programs/invokeai
    ```

    The script `configure_invokeai.py` will interactively guide you through the
    process of downloading and installing the weights files needed for InvokeAI.
    Note that the main Stable Diffusion weights file is protected by a license
    agreement that you have to agree to. The script will list the steps you need
    to take to create an account on the site that hosts the weights files,
    accept the agreement, and provide an access token that allows InvokeAI to
    legally download and install the weights files.

    If you get an error message about a module not being installed, check that
    the `invokeai` environment is active and if not, repeat step 5.

    Note that `configure_invokeai.py` and `invoke.py` should be installed
    under your virtual environment directory and the system should find them
    on the PATH. If this isn't working on your system, you can run the
    scripts directly with `python scripts/configure_invokeai.py` and
    `python scripts/invoke.py`.

    !!! tip

        If you have already downloaded the weights file(s) for another Stable
        Diffusion distribution, you may skip this step (by selecting "skip" when
        prompted) and configure InvokeAI to use the previously-downloaded files. The
        process for this is described [here](INSTALLING_MODELS.md).

7. Run the command-line or the web interface:

    Activate the environment (with `source invokeai/bin/activate`), and then
    run the script `invoke.py`. If you selected a non-default location
    for the runtime directory, please specify the path with the `--root_dir`
    option (abbreviated below as `--root`):

    !!! example ""

        !!! warning "Make sure that the virtual environment is activated, which should create `(invokeai)` in front of your prompt!"

        === "CLI"

            ```bash
            invoke.py --root ~/Programs/invokeai
            ```

        === "local Webserver"

            ```bash
            invoke.py --web --root ~/Programs/invokeai
            ```

        === "Public Webserver"

            ```bash
            invoke.py --web --host 0.0.0.0 --root ~/Programs/invokeai
            ```

    If you choose to run the web interface, point your browser at
    http://localhost:9090 in order to load the GUI.

    !!! tip

        You can permanently set the location of the runtime directory by setting the environment variable INVOKEAI_ROOT to the path of the directory.
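
        A sketch of how you might set it; the path below is just an example:

        ```bash
        # Linux/macOS: add to ~/.bashrc or ~/.zshrc to make it permanent
        export INVOKEAI_ROOT=~/Programs/invokeai
        ```

        On Windows you could use `setx INVOKEAI_ROOT "C:\invokeai"` from a command window.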

8. Render away!

    Browse the [features](../features/CLI.md) section to learn about all the things you
    can do with InvokeAI.

    Note that some GPUs are slow to warm up. In particular, when using an AMD
    card with the ROCm driver, you may have to wait for over a minute the first
    time you try to generate an image. Fortunately, after the warm-up period
    rendering will be fast.

9. Subsequently, to relaunch the script, be sure to activate the `invokeai`
    virtual environment (with `source invokeai/bin/activate`), enter the
    `InvokeAI` directory, and then launch the invoke script. If you forget to
    activate the 'invokeai' environment, the script will fail with multiple
    `ModuleNotFound` errors.

    !!! tip

        Do not move the source code repository after installation. The virtual environment directory has absolute paths in it that get confused if the directory is moved.

---

### Conda method

1. Check that your system meets the
    [hardware requirements](index.md#Hardware_Requirements) and has the
    appropriate GPU drivers installed. In particular, if you are a Linux user
    with an AMD GPU installed, you may need to install the
    [ROCm driver](https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html).

    InvokeAI does not yet support Windows machines with AMD GPUs due to the lack
    of ROCm driver support on this platform.

    To confirm that the appropriate drivers are installed, run `nvidia-smi` on
    NVIDIA/CUDA systems, and `rocm-smi` on AMD systems. These should return
    information about the installed video card.

    Macintosh users with MPS acceleration, or anybody with a CPU-only system,
    can skip this step.

2. You will need to install Anaconda3 and Git if they are not already
    available. Use your operating system's preferred package manager, or
    download the installers manually. You can find them here:

    - [Anaconda3](https://www.anaconda.com/)
    - [git](https://git-scm.com/downloads)

3. Clone the [InvokeAI](https://github.com/invoke-ai/InvokeAI) source code from
    GitHub:

    ```bash
    git clone https://github.com/invoke-ai/InvokeAI.git
    ```

    This will create an InvokeAI folder where you will follow the rest of the
    steps.

4. Enter the newly-created InvokeAI folder:

    ```bash
    cd InvokeAI
    ```

    From this step forward make sure that you are working in the InvokeAI
    directory!

5. Select the appropriate environment file:

    We have created a series of environment files suited for different operating
    systems and GPU hardware. They are located in the
    `environments-and-requirements` directory:

    <figure markdown>

    | filename                 | OS                              |
    | :----------------------: | :-----------------------------: |
    | environment-lin-amd.yml  | Linux with an AMD (ROCm) GPU    |
    | environment-lin-cuda.yml | Linux with an NVIDIA CUDA GPU   |
    | environment-mac.yml      | Macintosh                       |
    | environment-win-cuda.yml | Windows with an NVIDIA CUDA GPU |

    </figure>

    Choose the appropriate environment file for your system and link or copy it
    to `environment.yml` in InvokeAI's top-level directory. To do so, run the
    following command from the repository root:

    !!! Example ""

        === "Macintosh and Linux"

            !!! todo "Replace `xxx` and `yyy` with the appropriate OS and GPU codes as seen in the table above"

            ```bash
            ln -sf environments-and-requirements/environment-xxx-yyy.yml environment.yml
            ```

            When this is done, confirm that a file `environment.yml` has been linked in
            the InvokeAI root directory and that it points to the correct file in
            `environments-and-requirements`.

            ```bash
            ls -la
            ```

        === "Windows"

            !!! todo "Since it requires admin privileges to create links, we will use the copy command to create your `environment.yml`"

            ```cmd
            copy environments-and-requirements\environment-win-cuda.yml environment.yml
            ```

            Afterwards verify that the file `environment.yml` has been created, either via the
            explorer or by using the command `dir` from the terminal:

            ```cmd
            dir
            ```

    !!! warning "Do not try to run conda directly on the environments file in the subdirectory. This won't work. Instead, copy or link it to the top-level directory as shown."

6. Create the conda environment:

    ```bash
    conda env update
    ```

    This will create a new environment named `invokeai` and install all InvokeAI
    dependencies into it. If something goes wrong you should take a look at
    [troubleshooting](#troubleshooting).

7. Activate the `invokeai` environment:

    In order to use the newly created environment you will first need to
    activate it:

    ```bash
    conda activate invokeai
    ```

    Your command-line prompt should change to indicate that `invokeai` is active
    by prepending `(invokeai)`.

8. Set up the runtime directory

    In this step you will initialize a runtime directory that will
    contain the models, model config files, directory for textual
    inversion embeddings, and your outputs. This keeps the runtime
    directory separate from the source code and aids in updating.

    You may pick any location for this directory using the `--root_dir`
    option (abbreviated `--root`). If you don't pass this option, it will
    default to `invokeai` in your home directory.

    ```bash
    python scripts/configure_invokeai.py --root_dir ~/Programs/invokeai
    ```

    The script `configure_invokeai.py` will interactively guide you through the
    process of downloading and installing the weights files needed for InvokeAI.
    Note that the main Stable Diffusion weights file is protected by a license
    agreement that you have to agree to. The script will list the steps you need
    to take to create an account on the site that hosts the weights files,
    accept the agreement, and provide an access token that allows InvokeAI to
    legally download and install the weights files.

    If you get an error message about a module not being installed, check that
    the `invokeai` environment is active and if not, repeat step 7.

    Note that `configure_invokeai.py` and `invoke.py` should be
    installed under your conda directory and the system should find
    them automatically on the PATH. If this isn't working on your
    system, you can run the scripts directly with
    `python scripts/configure_invokeai.py` and `python scripts/invoke.py`.

    !!! tip

        If you have already downloaded the weights file(s) for another Stable
        Diffusion distribution, you may skip this step (by selecting "skip" when
        prompted) and configure InvokeAI to use the previously-downloaded files. The
        process for this is described [here](INSTALLING_MODELS.md).

9. Run the command-line or the web interface:

    Activate the environment (with `conda activate invokeai`), and then
    run the script `invoke.py`. If you selected a non-default location
    for the runtime directory, please specify the path with the `--root_dir`
    option (abbreviated below as `--root`):

    !!! example ""

        !!! warning "Make sure that the conda environment is activated, which should create `(invokeai)` in front of your prompt!"

        === "CLI"

            ```bash
            invoke.py --root ~/Programs/invokeai
            ```

        === "local Webserver"

            ```bash
            invoke.py --web --root ~/Programs/invokeai
            ```

        === "Public Webserver"

            ```bash
            invoke.py --web --host 0.0.0.0 --root ~/Programs/invokeai
            ```

    If you choose to run the web interface, point your browser at
    http://localhost:9090 in order to load the GUI.

    !!! tip

        You can permanently set the location of the runtime directory by setting the environment variable INVOKEAI_ROOT to the path of your choice.

10. Render away!

    Browse the [features](../features/CLI.md) section to learn about all the things you
    can do with InvokeAI.

    Note that some GPUs are slow to warm up. In particular, when using an AMD
    card with the ROCm driver, you may have to wait for over a minute the first
    time you try to generate an image. Fortunately, after the warm-up period
    rendering will be fast.

11. Subsequently, to relaunch the script, be sure to run "conda activate
    invokeai", enter the `InvokeAI` directory, and then launch the invoke
    script. If you forget to activate the 'invokeai' environment, the script
    will fail with multiple `ModuleNotFound` errors.

## Creating an "install" version of InvokeAI

If you wish you can install InvokeAI and all its dependencies in the
runtime directory. This allows you to delete the source code
repository and eliminates the need to provide `--root_dir` at startup
time. Note that this method only works with the PIP method.

1. Follow the instructions for the PIP install, but in step #2 put the
    virtual environment into the runtime directory. For example, assuming the
    runtime directory lives in `~/Programs/invokeai`, you'd run:

    ```bash
    python -mvenv ~/Programs/invokeai
    ```

2. Now follow steps 3 to 5 in the PIP recipe, ending with the `pip install`
    step.

3. Run one additional step while you are in the source code repository
    directory: `pip install .` (note the dot at the end).

4. That's all! Now, whenever you activate the virtual environment,
    `invoke.py` will know where to look for the runtime directory without
    needing a `--root_dir` argument. In addition, you can now move or
    delete the source code repository entirely. A condensed sketch of the
    whole recipe is shown after this list.

    (Don't move the runtime directory!)
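
A condensed sketch of the recipe above, with example paths (adjust them to your
own locations and pick the requirements file that matches your platform):

```bash
# 1. create the virtual environment inside the runtime directory
python -mvenv ~/Programs/invokeai
source ~/Programs/invokeai/bin/activate

# 2. link the platform-specific requirements file and install dependencies
ln -sf environments-and-requirements/requirements-lin-cuda.txt requirements.txt
pip install --prefer-binary -r requirements.txt

# 3. install InvokeAI itself, run from the source code repository root
pip install .
```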

## Updating to newer versions of the script

This distribution is changing rapidly. If you used the `git clone` method
(step 3) to download the InvokeAI directory, then to update to the latest and
greatest version, launch the Anaconda window, enter `InvokeAI` and type:

```bash
git pull
conda env update
python scripts/configure_invokeai.py --no-interactive #optional
```

This will bring your local copy into sync with the remote one. The last step may
be needed to take advantage of new features or released models. The
`--no-interactive` flag will prevent the script from prompting you to download
the big Stable Diffusion weights files.

## Troubleshooting

Here are some common issues and their suggested solutions.

### Conda

#### Conda fails before completing `conda update`

The usual source of these errors is a package incompatibility. While we have
tried to minimize these, over time packages get updated and sometimes introduce
incompatibilities.

We suggest that you search
[Issues](https://github.com/invoke-ai/InvokeAI/issues) or the "bugs-and-support"
channel of the [InvokeAI Discord](https://discord.gg/ZmtBAhwWhy).

You may also try to install the broken packages manually using PIP. To do this,
activate the `invokeai` environment, and run `pip install` with the name and
version of the package that is causing the incompatibility. For example:

```bash
pip install test-tube==0.7.5
```

You can keep doing this until all requirements are satisfied and the `invoke.py`
script runs without errors. Please report to
[Issues](https://github.com/invoke-ai/InvokeAI/issues) what you were able to do
to work around the problem so that others can benefit from your investigation.

#### Create Conda Environment fails on MacOS

If creating the conda environment fails with an lmdb error, this is most likely caused by Clang.
Run `brew config` to see which Clang is installed on your Mac. If Clang isn't installed, that's causing the error.
Start by installing additional XCode command line tools, followed by `brew install llvm`.

```bash
xcode-select --install
brew install llvm
```

If `brew config` shows that Clang is installed, update to the latest llvm and try creating the environment again.

#### `configure_invokeai.py` or `invoke.py` crashes at an early stage

This is usually due to an incomplete or corrupted Conda install. Make sure you
have linked to the correct environment file and run `conda update` again.

If the problem persists, a more extreme measure is to clear Conda's caches and
remove the `invokeai` environment:

```bash
conda deactivate
conda env remove -n invokeai
conda clean -a
conda update
```

This removes all cached library files, including ones that may have been
corrupted somehow. (This is not supposed to happen, but does anyway.)

#### `invoke.py` crashes at a later stage

If the CLI or web site had been working ok, but something unexpected happens
later on during the session, you've encountered a code bug that is probably
unrelated to an install issue. Please search
[Issues](https://github.com/invoke-ai/InvokeAI/issues), file a bug report, or
ask for help on [Discord](https://discord.gg/ZmtBAhwWhy).

#### My renders are running very slowly

You may have installed the wrong torch (machine learning) package, and the
system is running on CPU rather than the GPU. To check, look at the log messages
that appear when `invoke.py` is first starting up. One of the earlier lines
should say `Using device type cuda`. On AMD systems, it will also say "cuda",
and on Macintoshes, it should say "mps". If instead the message says it is
running on "cpu", then you may need to install the correct torch library.
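
One quick way to double-check which device torch sees, from an activated
`invokeai` environment (a sketch; the exact output varies by platform):

```bash
python -c "import torch; print(torch.cuda.is_available())"
```

If this prints `False` on a machine with an NVIDIA card, the CPU-only torch
package is probably installed.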

You may be able to fix this by installing a different torch library. Here are
the magic incantations for Conda and PIP.

!!! todo "For CUDA systems"

    - conda

    ```bash
    conda install pytorch torchvision torchaudio pytorch-cuda=11.6 -c pytorch -c nvidia
    ```

    - pip

    ```bash
    pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu116
    ```

!!! todo "For AMD systems"

    - conda

    ```bash
    conda activate invokeai
    pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/rocm5.2/
    ```

    - pip

    ```bash
    pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/rocm5.2/
    ```

More information and troubleshooting tips can be found at https://pytorch.org.

1 docs/installation/INSTALL_MANUAL.md (Symbolic link)

@@ -0,0 +1 @@
020_INSTALL_MANUAL.md

@@ -5,14 +5,14 @@ title: Overview
 We offer several ways to install InvokeAI, each one suited to your
 experience and preferences.

-1. [Automated Installer](INSTALL_AUTOMATED.md)
+1. [Automated Installer](010_INSTALL_AUTOMATED.md)

    This is a script that will install all of InvokeAI's essential
    third party libraries and InvokeAI itself. It includes access to a
    "developer console" which will help us debug problems with you and
    give you access to experimental features.

-2. [Manual Installation](INSTALL_MANUAL.md)
+2. [Manual Installation](020_INSTALL_MANUAL.md)

    In this method you will manually run the commands needed to install
    InvokeAI and its dependencies. We offer two recipes: one suited to
@@ -25,7 +25,7 @@ experience and preferences.
    the cutting edge of future InvokeAI development and is willing to put
    up with occasional glitches and breakage.

-3. [Docker Installation](INSTALL_DOCKER.md)
+3. [Docker Installation](040_INSTALL_DOCKER.md)

    We also offer a method for creating Docker containers containing
    InvokeAI and its dependencies. This method is recommended for

@@ -25,15 +25,14 @@ set PYTHON_URL=https://www.python.org/downloads/release/python-3109/
 set err_msg=An error has occurred and the script could not continue.

 @rem --------------------------- Intro -------------------------------
-echo This script will install InvokeAI and its dependencies. Before you start,
-echo please make sure to do the following:
+echo This script will install InvokeAI and its dependencies.
+echo.
+echo BEFORE YOU START PLEASE MAKE SURE TO DO THE FOLLOWING
 echo 1. Install python 3.9 or higher.
 echo 2. Double-click on the file WinLongPathsEnabled.reg in order to
 echo enable long path support on your system.
-echo 3. Some users have found they need to install the Visual C++ core
-echo libraries or else they experience DLL loading problems at the end of the install.
-echo Visual C++ is very likely already installed on your system, but if you get DLL
-echo issues, please download and install the libraries by going to:
+echo 3. Install the Visual C++ core libraries.
+echo Please download and install the libraries from:
 echo https://learn.microsoft.com/en-US/cpp/windows/latest-supported-vc-redist?view=msvc-170
 echo.
 echo See %INSTRUCTIONS% for more details.