mirror of
https://github.com/invoke-ai/InvokeAI
synced 2024-08-30 20:32:17 +00:00
docs: remove deprecated installation docs
This commit is contained in:
parent
fb317d0967
commit
3493b26c55
@ -1,89 +0,0 @@
---
title: build binary installers
---

# :simple-buildkite: How to build "binary" installers (InvokeAI-mac/windows/linux_on_*.zip)

## 1. Ensure `installers/requirements.in` is correct

Verify that `installers/requirements.in` is correct and up to date on the
branch to be installed.

## <a name="step-2"></a> 2. Run `pip-compile` on each platform.

On each target platform, in the branch that is to be installed, and
inside the InvokeAI git root folder, run the following commands:

```commandline
conda activate invokeai # or however you activate python
pip install pip-tools
pip-compile --allow-unsafe --generate-hashes --output-file=binary_installer/<reqsfile>.txt binary_installer/requirements.in
```

where `<reqsfile>.txt` is whichever of

```commandline
py3.10-darwin-arm64-mps-reqs.txt
py3.10-darwin-x86_64-reqs.txt
py3.10-linux-x86_64-cuda-reqs.txt
py3.10-windows-x86_64-cuda-reqs.txt
```

matches the current OS and architecture.

> There is no way to cross-compile these. They must be generated on a system matching the target OS and architecture.

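Since the reqs filename must be chosen by hand, a small helper can derive it from the running system. This is only a sketch of the naming pattern above; it is hypothetical and not a script shipped with the repo (Windows is omitted because `uname` is unavailable there):

```shell
# Hypothetical helper: derive the <reqsfile> name for the current machine,
# following the py3.10-<os>-<arch>[-accelerator]-reqs.txt pattern above.
os=$(uname -s | tr '[:upper:]' '[:lower:]')   # "linux" or "darwin"
arch=$(uname -m)                              # "x86_64" or "arm64"
case "$os-$arch" in
  darwin-arm64)  reqsfile="py3.10-darwin-arm64-mps-reqs.txt" ;;
  darwin-x86_64) reqsfile="py3.10-darwin-x86_64-reqs.txt" ;;
  linux-x86_64)  reqsfile="py3.10-linux-x86_64-cuda-reqs.txt" ;;
  *)             reqsfile="" ; echo "unsupported platform: $os-$arch" >&2 ;;
esac
echo "reqsfile: $reqsfile"
```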
## <a name="step-3"></a> 3. Set github repository and branch

Once all reqs files have been collected and committed **to the branch
to be installed**, edit `binary_installer/install.sh.in` and `binary_installer/install.bat.in` so that `RELEASE_URL`
and `RELEASE_SOURCEBALL` point to the github repo and branch that is
to be installed.

For example, to install the `main` branch of `InvokeAI`, they should be
set as follows:

`install.sh.in`:

```commandline
RELEASE_URL=https://github.com/invoke-ai/InvokeAI
RELEASE_SOURCEBALL=/archive/refs/heads/main.tar.gz
```

`install.bat.in`:

```commandline
set RELEASE_URL=https://github.com/invoke-ai/InvokeAI
set RELEASE_SOURCEBALL=/archive/refs/heads/main.tar.gz
```

Or, to install the `damians-cool-feature` branch of `damian0815`, set them
as follows:

`install.sh.in`:

```commandline
RELEASE_URL=https://github.com/damian0815/InvokeAI
RELEASE_SOURCEBALL=/archive/refs/heads/damians-cool-feature.tar.gz
```

`install.bat.in`:

```commandline
set RELEASE_URL=https://github.com/damian0815/InvokeAI
set RELEASE_SOURCEBALL=/archive/refs/heads/damians-cool-feature.tar.gz
```

The branch and repo specified here **must** contain the correct reqs
files. The installer zip files **do not** contain requirements files;
they are pulled from the specified branch during the installation
process.

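As a sanity check (a sketch, not part of the official process): the installer downloads the concatenation of these two values, so you can probe the resulting tarball URL before shipping the zips:

```shell
# The installer fetches ${RELEASE_URL}${RELEASE_SOURCEBALL}; verify the
# combination resolves before distributing.
RELEASE_URL=https://github.com/invoke-ai/InvokeAI
RELEASE_SOURCEBALL=/archive/refs/heads/main.tar.gz
tarball="${RELEASE_URL}${RELEASE_SOURCEBALL}"
echo "checking $tarball"
# Probe with a HEAD request (needs network access):
curl -fsSLI -o /dev/null "$tarball" && echo "reachable" || echo "not reachable"
```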
## 4. Create zip files.

cd into the `installers/` folder and run
`./create_installers.sh`. This will create
`InvokeAI-mac_on_<branch>.zip`,
`InvokeAI-windows_on_<branch>.zip` and
`InvokeAI-linux_on_<branch>.zip`. These files can be distributed to end users.

These zips will continue to function as installers for all future
pushes to those branches, as long as necessary changes to
`requirements.in` are propagated in a timely manner to the
`py3.10-*-reqs.txt` files using pip-compile as outlined in [step
2](#step-2).

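A quick post-build check can confirm all three zips were produced. This is a sketch; `main` is a placeholder for whatever branch you are shipping:

```shell
# Confirm create_installers.sh produced all three zips for the branch.
branch=main
for os in mac windows linux; do
  zipfile="InvokeAI-${os}_on_${branch}.zip"
  if [ -e "$zipfile" ]; then
    echo "found   $zipfile"
  else
    echo "missing $zipfile"
  fi
done
```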
To actually install, users should unzip the appropriate zip file into an empty
folder and run `install.sh` on macOS/Linux or `install.bat` on
Windows.

@ -1,64 +0,0 @@
---
title: InvokeAI Binary Installer
---

The InvokeAI binary installer is a shell script that will install InvokeAI onto a stock
computer running recent versions of Linux, macOS or Windows. It will leave you
with a stable release of InvokeAI. When a new version of
InvokeAI is released, you will download and reinstall the new version.

If you wish to tinker with unreleased versions of InvokeAI that introduce
potentially unstable new features, you should consider using the
[source installer](INSTALL_SOURCE.md) or one of the
[manual install](../020_INSTALL_MANUAL.md) methods.

**Important Caveats**

- This script does not support AMD GPUs. For Linux AMD support,
  please use the manual or source code installer methods.

- This script has difficulty on some Macintosh machines
  that have previously been used for Python development, due to
  conflicting development tool versions. Mac developers may wish
  to try the source code installer or one of the manual methods instead.

!!! todo

    Before you begin, make sure that you meet
    the [hardware requirements](/#hardware-requirements) and have the
    appropriate GPU drivers installed. In particular, if you are a Linux user with
    an AMD GPU installed, you may need to install the
    [ROCm driver](https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html).

Installation requires roughly 18G of free disk space to load the libraries and
recommended model weights files.

## Steps to Install

1. Download the
   [latest release](https://github.com/invoke-ai/InvokeAI/releases/latest) of
   InvokeAI's installer for your platform. Look for a file named
   `InvokeAI-binary-<your platform>.zip`.

2. Place the downloaded package someplace where you have plenty of HDD space,
   and have full permissions (i.e. `~/` on Lin/Mac; your home folder on Windows).

3. Extract the 'InvokeAI' folder from the downloaded package.

4. Open the extracted 'InvokeAI' folder.

5. Double-click 'install.bat' (Windows), or 'install.sh' (Lin/Mac) (or run from
   a terminal).

6. Follow the prompts.

7. After installation, please run the 'invoke.bat' file (on Windows) or
   'invoke.sh' file (on Linux/Mac) to start InvokeAI.

## Troubleshooting

If you run into problems during or after installation, the InvokeAI team is
available to help you. Either create an
[Issue](https://github.com/invoke-ai/InvokeAI/issues) at our GitHub site, or
make a request for help on the "bugs-and-support" channel of our
[Discord server](https://discord.gg/ZmtBAhwWhy). We are a 100% volunteer
organization, but typically somebody will be available to help you within 24
hours, and often much sooner.

@ -1,32 +0,0 @@
---
title: Running InvokeAI on Google Colab using a Jupyter Notebook
---

## Introduction

We have a [Jupyter
notebook](https://github.com/invoke-ai/InvokeAI/blob/main/notebooks/Stable_Diffusion_AI_Notebook.ipynb)
with cell-by-cell installation steps. It will download the code in
this repo as one of the steps, so instead of cloning this repo, simply
download the notebook from the link above, load it up in VSCode
(with the appropriate extensions installed), Jupyter, or JupyterLab, and
start running the cells one-by-one.

!!! Note "you will need NVIDIA drivers, Python 3.10, and Git installed beforehand"

## Running Online on Google Colaboratory

[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/invoke-ai/InvokeAI/blob/main/notebooks/Stable_Diffusion_AI_Notebook.ipynb)

## Running Locally (Cloning)

1. Install the Jupyter Notebook python library (one-time):

        pip install jupyter

2. Clone the InvokeAI repository:

        git clone https://github.com/invoke-ai/InvokeAI.git
        cd InvokeAI

3. Create a virtual environment using conda:

        conda create -n invoke jupyter

4. Activate the environment and start the Jupyter notebook:

        conda activate invoke
        jupyter notebook

@ -1,135 +0,0 @@
---
title: Manual Installation, Linux
---

# :fontawesome-brands-linux: Linux

## Installation

1. You will need to install the following prerequisites if they are not already
   available. Use your operating system's preferred installer.

    - Python (version 3.8.5 recommended; higher may work)
    - git

2. Install the Python Anaconda environment manager.

    ```bash
    ~$ wget https://repo.anaconda.com/archive/Anaconda3-2022.05-Linux-x86_64.sh
    ~$ chmod +x Anaconda3-2022.05-Linux-x86_64.sh
    ~$ ./Anaconda3-2022.05-Linux-x86_64.sh
    ```

    After installing anaconda, you should log out of your system and log back
    in. If the installation worked, your command prompt will be prefixed by the
    name of the current anaconda environment - `(base)`.

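Before moving on, it may help to confirm the step-1 prerequisites are actually on your `PATH`. A quick sketch, not part of the official instructions (the same pattern works for `conda` after step 2):

```shell
# Check that the step-1 prerequisites are installed and reachable.
for tool in python3 git; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: $(command -v "$tool")"
  else
    echo "$tool: MISSING"
  fi
done
```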
3. Copy the InvokeAI source code from GitHub:

    ```bash
    (base) ~$ git clone https://github.com/invoke-ai/InvokeAI.git
    ```

    This will create an InvokeAI folder where you will follow the rest of the
    steps.

4. Enter the newly-created InvokeAI folder. From this step forward make sure
   that you are working in the InvokeAI directory!

    ```bash
    (base) ~$ cd InvokeAI
    (base) ~/InvokeAI$
    ```

5. Use anaconda to copy necessary python packages, create a new python
   environment named `invokeai`, and then activate the environment.

    !!! todo "For systems with a CUDA (Nvidia) card:"

        ```bash
        (base) ~/InvokeAI$ rm -rf src # (a precaution in case there is already a src directory)
        (base) ~/InvokeAI$ conda env create -f environment-cuda.yml
        (base) ~/InvokeAI$ conda activate invokeai
        (invokeai) ~/InvokeAI$
        ```

    !!! todo "For systems with an AMD card (using ROCm driver):"

        ```bash
        (base) ~/InvokeAI$ rm -rf src # (a precaution in case there is already a src directory)
        (base) ~/InvokeAI$ conda env create -f environment-AMD.yml
        (base) ~/InvokeAI$ conda activate invokeai
        (invokeai) ~/InvokeAI$
        ```

    After these steps, your command prompt will be prefixed by `(invokeai)` as
    shown above.

6. Load the big stable diffusion weights files and a couple of smaller
   machine-learning models:

    ```bash
    (invokeai) ~/InvokeAI$ python3 scripts/configure_invokeai.py
    ```

    !!! note

        This script will lead you through the process of creating an account on Hugging Face,
        accepting the terms and conditions of the Stable Diffusion model license,
        and obtaining an access token for downloading. It will then download and
        install the weights files for you.

        Please look [here](../020_INSTALL_MANUAL.md) for a manual process for doing
        the same thing.

7. Start generating images!

    !!! todo "Run InvokeAI!"

        !!! warning "IMPORTANT"

            Make sure that the conda environment is activated, which should create
            `(invokeai)` in front of your prompt!

        === "CLI"

            ```bash
            python scripts/invoke.py
            ```

        === "local Webserver"

            ```bash
            python scripts/invoke.py --web
            ```

        === "Public Webserver"

            ```bash
            python scripts/invoke.py --web --host 0.0.0.0
            ```

        To use an alternative model you may invoke the `!switch` command in
        the CLI, or pass `--model <model_name>` during `invoke.py` launch for
        either the CLI or the Web UI. See [Command Line
        Client](../../deprecated/CLI.md#model-selection-and-importation). The
        model names are defined in `configs/models.yaml`.

8. Subsequently, to relaunch the script, be sure to run `conda activate
   invokeai` (step 5), enter the `InvokeAI` directory, and then
   launch the invoke script (step 7). If you forget to activate the 'invokeai'
   environment, the script will fail with multiple `ModuleNotFound` errors.

## Updating to newer versions of the script

This distribution is changing rapidly. If you used the `git clone` method
(step 3) to download the InvokeAI directory, then to update to the latest and
greatest version, launch the Anaconda window, enter `InvokeAI` and type:

```bash
(invokeai) ~/InvokeAI$ git pull
(invokeai) ~/InvokeAI$ rm -rf src # prevents conda freezing errors
(invokeai) ~/InvokeAI$ conda env update -f environment.yml
```

This will bring your local copy into sync with the remote one.

@ -1,525 +0,0 @@
---
title: Manual Installation, macOS
---

# :fontawesome-brands-apple: macOS

Invoke AI runs quite well on M1 Macs and we have a number of M1 users in the
community.

While the repo does run on Intel Macs, we only have a couple of reports. If you
have an Intel Mac and run into issues, please create an issue on Github and we
will do our best to help.

## Requirements

- macOS 12.3 Monterey or later
- About 10GB of storage (and 10GB of data if your internet connection has data
  caps)
- Any M1 Mac, or an Intel Mac with 4GB+ of VRAM (ideally more)

## Installation

!!! todo "Homebrew"

    First you will install the "brew" package manager. Skip this if brew is already installed.

    ```bash title="install brew (and Xcode command line tools)"
    /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
    ```

!!! todo "Conda Installation"

    Now there are two different ways to set up the Python (miniconda) environment:

    1. Standalone
    2. with pyenv

    If you don't know what we are talking about, choose Standalone. If you are familiar with python environments, choose "with pyenv".

    === "Standalone"

        ```bash title="Install cmake, protobuf, and rust"
        brew install cmake protobuf rust
        ```

        ```bash title="Clone the InvokeAI repository"
        # Clone the Invoke AI repo
        git clone https://github.com/invoke-ai/InvokeAI.git
        cd InvokeAI
        ```

        Choose the appropriate architecture for your system and install miniconda:

        === "M1 arm64"

            ```bash title="Install miniconda for M1 arm64"
            curl https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-arm64.sh \
                -o Miniconda3-latest-MacOSX-arm64.sh
            /bin/bash Miniconda3-latest-MacOSX-arm64.sh
            ```

        === "Intel x86_64"

            ```bash title="Install miniconda for Intel"
            curl https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-x86_64.sh \
                -o Miniconda3-latest-MacOSX-x86_64.sh
            /bin/bash Miniconda3-latest-MacOSX-x86_64.sh
            ```

    === "with pyenv"

        ```bash
        brew install pyenv-virtualenv
        pyenv install anaconda3-2022.05
        pyenv virtualenv anaconda3-2022.05
        eval "$(pyenv init -)"
        pyenv activate anaconda3-2022.05
        ```

!!! todo "Clone the Invoke AI repo"

    ```bash
    git clone https://github.com/invoke-ai/InvokeAI.git
    cd InvokeAI
    ```

!!! todo "Create the environment & install packages"

    === "M1 Mac"

        ```bash
        PIP_EXISTS_ACTION=w CONDA_SUBDIR=osx-arm64 conda env create -f environment-mac.yml
        ```

    === "Intel x86_64 Mac"

        ```bash
        PIP_EXISTS_ACTION=w CONDA_SUBDIR=osx-64 conda env create -f environment-mac.yml
        ```

    ```bash
    # Activate the environment (you need to do this every time you want to run SD)
    conda activate invokeai
    ```

    !!! info

        `export PIP_EXISTS_ACTION=w` is a precaution to fix `conda env
        create -f environment-mac.yml` never finishing in some situations. It
        isn't required, but it won't hurt.

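Instead of picking the tab by hand, the right `CONDA_SUBDIR` value can be derived from the machine architecture. A sketch only; it just prints the command used above rather than running it:

```shell
# Choose the CONDA_SUBDIR value automatically from the machine architecture.
if [ "$(uname -m)" = "arm64" ]; then
  subdir=osx-arm64    # M1 Mac
else
  subdir=osx-64       # Intel Mac
fi
echo "PIP_EXISTS_ACTION=w CONDA_SUBDIR=$subdir conda env create -f environment-mac.yml"
```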
!!! todo "Download the model weight files"

    The `configure_invokeai.py` script downloads and installs the model weight
    files for you. It will lead you through the process of getting a Hugging Face
    account, accepting the Stable Diffusion model weight license agreement, and
    creating a download token:

    ```bash
    # This will take some time, depending on the speed of your internet connection
    # and will consume about 10GB of space
    python scripts/configure_invokeai.py
    ```

!!! todo "Run InvokeAI!"

    !!! warning "IMPORTANT"

        Make sure that the conda environment is activated, which should create
        `(invokeai)` in front of your prompt!

    === "CLI"

        ```bash
        python scripts/invoke.py
        ```

    === "local Webserver"

        ```bash
        python scripts/invoke.py --web
        ```

    === "Public Webserver"

        ```bash
        python scripts/invoke.py --web --host 0.0.0.0
        ```

    To use an alternative model you may invoke the `!switch` command in
    the CLI, or pass `--model <model_name>` during `invoke.py` launch for
    either the CLI or the Web UI. See [Command Line
    Client](../../deprecated/CLI.md#model-selection-and-importation). The
    model names are defined in `configs/models.yaml`.

---

## Common problems

After you have followed all the instructions and tried to run invoke.py, you might
get several errors. Here are the errors I've seen and found solutions for.

### Is it slow?

```bash title="Be sure to specify 1 sample and 1 iteration."
python ./scripts/orig_scripts/txt2img.py \
    --prompt "ocean" \
    --ddim_steps 5 \
    --n_samples 1 \
    --n_iter 1
```

---

### Doesn't work anymore?

PyTorch nightly includes support for MPS. Because of this, this setup is
inherently unstable. One morning I woke up and it no longer worked no matter
what I did until I switched to miniforge. However, I have another Mac that works
just fine with Anaconda. If you can't get it to work, please search a little
first, because many of the errors get posted and solved. If you can't find a
solution please [create an issue](https://github.com/invoke-ai/InvokeAI/issues).

One debugging step is to update to the latest version of PyTorch nightly.

```bash
conda install \
    pytorch \
    torchvision \
    -c pytorch-nightly \
    -n invokeai
```

If it takes forever to run `conda env create -f environment-mac.yml`, try this:

```bash
git clean -f
conda clean \
    --yes \
    --all
```

Or you could try to completely reset Anaconda:

```bash
conda update \
    --force-reinstall \
    -y \
    -n base \
    -c defaults conda
```

---

### "No module named cv2", torch, 'invokeai', 'transformers', 'taming', etc

There are several causes of these errors:

1. Did you remember to `conda activate invokeai`? If your terminal prompt begins
   with "(invokeai)" then you activated it. If it begins with "(base)" or
   something else, you haven't.

2. You might've run `./scripts/configure_invokeai.py` or `./scripts/invoke.py`
   instead of `python ./scripts/configure_invokeai.py` or
   `python ./scripts/invoke.py`. The cause of this error is long, so it's below.

    <!-- I could not find out where the error is, otherwise would have marked it as a footnote -->

3. If it says you're missing taming you need to rebuild your virtual
   environment.

    ```bash
    conda deactivate
    conda env remove -n invokeai
    conda env create -f environment-mac.yml
    ```

4. If you have activated the invokeai virtual environment and tried rebuilding
   it, maybe the problem could be that I have something installed that you don't
   and you'll just need to manually install it. Make sure you activate the
   virtual environment so it installs there instead of globally.

    ```bash
    conda activate invokeai
    pip install <package name>
    ```

    You might also need to install Rust (I mention this again below).

---

### How many snakes are living in your computer?

You might have multiple Python installations on your system, in which case it's
important to be explicit and consistent about which one to use for a given
project. This is because virtual environments are coupled to the Python that
created them (and all the associated 'system-level' modules).

When you run `python` or `python3`, your shell searches the colon-delimited
locations in the `PATH` environment variable (`echo $PATH` to see that list) in
that order - first match wins. You can ask for the location of the first
`python3` found in your `PATH` with the `which` command like this:

```bash
% which python3
/usr/bin/python3
```

Anything in `/usr/bin` is
[part of the OS](https://developer.apple.com/library/archive/documentation/FileManagement/Conceptual/FileSystemProgrammingGuide/FileSystemOverview/FileSystemOverview.html#//apple_ref/doc/uid/TP40010672-CH2-SW6).
However, `/usr/bin/python3` is not actually python3, but rather a stub that
offers to install Xcode (which includes python 3). If you have Xcode installed
already, `/usr/bin/python3` will execute
`/Library/Developer/CommandLineTools/usr/bin/python3` or
`/Applications/Xcode.app/Contents/Developer/usr/bin/python3` (depending on which
Xcode you've selected with `xcode-select`).

Note that `/usr/bin/python` is an entirely different python - specifically,
python 2. Note: starting in macOS 12.3, `/usr/bin/python` no longer exists.

```bash
% which python3
/opt/homebrew/bin/python3
```

If you installed python3 with Homebrew and you've modified your path to search
for Homebrew binaries before system ones, you'll see the above path.

```bash
% which python
/opt/anaconda3/bin/python
```

If you have Anaconda installed, you will see the above path. There is a
`/opt/anaconda3/bin/python3` also.

We expect that `/opt/anaconda3/bin/python` and `/opt/anaconda3/bin/python3`
should actually be the _same python_, which you can verify by comparing the
output of `python3 -V` and `python -V`.

```bash
(invokeai) % which python
/Users/name/miniforge3/envs/invokeai/bin/python
```

The above is what you'll see if you have miniforge and correctly activated the
invokeai environment, while using the standalone setup instructions above.

If you instead installed via pyenv, you will get this result:

```bash
(anaconda3-2022.05) % which python
/Users/name/.pyenv/shims/python
```

It's all a mess and you should know
[how to modify the path environment variable](https://support.apple.com/guide/terminal/use-environment-variables-apd382cc5fa-4f58-4449-b20a-41c53c006f8f/mac)
if you want to fix it. Here's a brief hint of the most common places you can
modify it (don't really have the time to explain it all here):

- ~/.zshrc
- ~/.bash_profile
- ~/.bashrc
- /etc/paths.d
- /etc/path

Which one you use will depend on what you have installed, except for putting a
file in /etc/paths.d - which is also the way I prefer to do it.

Finally, to answer the question posed by this section's title, it may help to
list all of the `python` / `python3` things found in `$PATH` instead of just the
first hit. To do so, add the `-a` switch to `which`:

```bash
% which -a python3
...
```

This will show a list of all binaries which are actually available in your PATH.

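Going one step further than `which -a` (a sketch using only standard tools): print the version each interpreter reports, to spot mismatched installs at a glance:

```shell
# For every python3 found on PATH, show its location and reported version.
for p in $(which -a python3); do
  printf '%s -> %s\n' "$p" "$("$p" -V 2>&1)"
done
```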
---

### Debugging?

Tired of waiting for your renders to finish before you can see if it works?
Reduce the steps! The image quality will be horrible but at least you'll get
quick feedback.

```bash
python ./scripts/txt2img.py \
    --prompt "ocean" \
    --ddim_steps 5 \
    --n_samples 1 \
    --n_iter 1
```

---

### OSError: Can't load tokenizer for 'openai/clip-vit-large-patch14'

This usually means the tokenizer files were never downloaded. Re-run the
configuration script, which fetches them along with the model weights:

```bash
python scripts/configure_invokeai.py
```

---

### "The operator [name] is not current implemented for the MPS device." (sic)

!!! example "example error"

    ```bash
    ... NotImplementedError: The operator 'aten::_index_put_impl_' is not current
    implemented for the MPS device. If you want this op to be added in priority
    during the prototype phase of this feature, please comment on
    https://github.com/pytorch/pytorch/issues/77764.
    As a temporary fix, you can set the environment variable
    `PYTORCH_ENABLE_MPS_FALLBACK=1` to use the CPU as a fallback for this op.
    WARNING: this will be slower than running natively on MPS.
    ```

The InvokeAI version includes this fix in
[environment-mac.yml](https://github.com/invoke-ai/InvokeAI/blob/main/environment-mac.yml).

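If you hit this outside the InvokeAI environment, the error message's own suggested workaround can be applied before launching. A sketch; per the warning above, fallback ops run slower on CPU:

```shell
# Fall back to CPU for ops not yet implemented on MPS, as the PyTorch error
# message itself suggests.
export PYTORCH_ENABLE_MPS_FALLBACK=1
# then launch as usual, e.g.:
# python scripts/invoke.py
```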
### "Could not build wheels for tokenizers"

I have not seen this error because I had Rust installed on my computer before I
started playing with Stable Diffusion. The fix is to install Rust.

```bash
curl \
    --proto '=https' \
    --tlsv1.2 \
    -sSf https://sh.rustup.rs | sh
```

---

### How come `--seed` doesn't work?

!!! Information

    Completely reproducible results are not guaranteed across PyTorch releases,
    individual commits, or different platforms. Furthermore, results may not be
    reproducible between CPU and GPU executions, even when using identical seeds.

    [PyTorch docs](https://pytorch.org/docs/stable/notes/randomness.html)

That said, we might have a fix that at least gets a consistent seed, sort of.
We're still working on it.

### libiomp5.dylib error?

```bash
OMP: Error #15: Initializing libiomp5.dylib, but found libomp.dylib already initialized.
```

You are likely using an Intel package by mistake. Be sure to run conda with the
environment variable `CONDA_SUBDIR=osx-arm64`, like so:

`CONDA_SUBDIR=osx-arm64 conda install ...`

This error happens with Anaconda on Macs when the Intel-only `mkl` is pulled in
by a dependency.
[nomkl](https://stackoverflow.com/questions/66224879/what-is-the-nomkl-python-package-used-for)
is a metapackage designed to prevent this by making it impossible to install
`mkl`, but if your environment is already broken it may not work.

Do _not_ use `os.environ['KMP_DUPLICATE_LIB_OK']='True'` or equivalents, as this
masks the underlying issue of using Intel packages.

---

### Not enough memory

This seems to be a common problem, and is probably the underlying cause of many
of the symptoms listed below. The fix is to lower your image size or to add
`model.half()` right after the model is loaded. This converts the model's
weights from 32-bit to 16-bit floats, which halves the memory they occupy and
leaves more RAM for other things. The effect on image quality, if any, appears
to be minor, but has not been carefully tested.

See [this issue](https://github.com/CompVis/stable-diffusion/issues/71).
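
A back-of-envelope sketch of why halving precision helps; the parameter count below is a hypothetical round number, not a measured figure for any particular model:

```python
def weights_gb(n_params, bytes_per_param):
    """Approximate memory (GiB) needed just to hold the model weights."""
    return n_params * bytes_per_param / 1024**3

n_params = 1_000_000_000        # hypothetical ~1B-parameter model
fp32 = weights_gb(n_params, 4)  # float32: 4 bytes per parameter
fp16 = weights_gb(n_params, 2)  # float16 after model.half(): 2 bytes
assert fp16 == fp32 / 2         # halving precision halves weight memory
```
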
---

### "Error: product of dimension sizes > 2\*\*31"

This error happens with img2img when your source image is too big or its
resolution isn't a multiple of 32 in each dimension. Because the
stable-diffusion model was trained on 512 x 512 images, it's always best to use
that output size (which is the default). However, if you're using that size and
still get the above error, try 256 x 256 or 512 x 256 as the source image size.

BTW, 2\*\*31-1 =
[2,147,483,647](https://en.wikipedia.org/wiki/2,147,483,647#In_computing), which
is also the 32-bit signed [LONG_MAX](https://en.wikipedia.org/wiki/C_data_types)
in C.
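
The limit in the error message is the largest signed 32-bit integer; here is a hypothetical sketch of the kind of size check involved, plus a helper for the multiple-of-32 advice (the shapes and helper are made up for illustration):

```python
INT32_MAX = 2**31 - 1  # 2,147,483,647

def elements(shape):
    """Total number of elements in a tensor of the given shape."""
    n = 1
    for dim in shape:
        n *= dim
    return n

# A batch of four 3-channel 512x512 tensors is far below the limit...
assert elements((4, 3, 512, 512)) <= INT32_MAX
# ...but an absurdly large source image would overflow it.
assert elements((1, 3, 40_000, 40_000)) > INT32_MAX

def snap_to_multiple(n, m=32):
    """Round n down to the nearest multiple of m (e.g. to fix a source size)."""
    return n - n % m

assert snap_to_multiple(517) == 512
```
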
---

### I just got Rickrolled! Do I have a virus?

You don't have a virus. It's part of the project. Here's
[Rick](https://github.com/invoke-ai/InvokeAI/blob/main/assets/rick.jpeg) and
here's
[the code](https://github.com/invoke-ai/InvokeAI/blob/69ae4b35e0a0f6ee1af8bb9a5d0016ccb27e36dc/scripts/txt2img.py#L79)
that swaps him in. It's a NSFW filter which, in our opinion, doesn't work very
well (and we call this "computer vision", sheesh).

---

### My images come out black

We might have this fixed; we are still testing.

There's a [similar issue](https://github.com/CompVis/stable-diffusion/issues/69)
on CUDA GPUs where the images come out green. Maybe it's the same issue?
Someone in that issue suggests using `--precision full`, but this fork actually
disables that flag. Maybe the `model.half()` suggestion above would fix this
issue too; it hasn't been tested yet.
### "view size is not compatible with input tensor's size and stride"

```bash
  File "/opt/anaconda3/envs/invokeai/lib/python3.10/site-packages/torch/nn/functional.py", line 2511, in layer_norm
    return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead.
```

Update to the latest version of invoke-ai/InvokeAI. We were patching pytorch,
but we found a file in stable-diffusion that we could change instead. This is a
32-bit vs 16-bit problem.
### The processor must support the Intel bla bla bla

What? Intel? On Apple Silicon?

```bash
Intel MKL FATAL ERROR: This system does not meet the minimum requirements for use of the Intel(R) Math Kernel Library. The processor must support the Intel(R) Supplemental Streaming SIMD Extensions 3 (Intel(R) SSSE3) instructions. The processor must support the Intel(R) Streaming SIMD Extensions 4.2 (Intel(R) SSE4.2) instructions. The processor must support the Intel(R) Advanced Vector Extensions (Intel(R) AVX) instructions.
```

This is due to the Intel `mkl` package getting picked up when you try to install
something that depends on it; Rosetta can translate some Intel instructions but
not the specialized ones here. To avoid this, make sure to use the environment
variable `CONDA_SUBDIR=osx-arm64`, which restricts the Conda environment to only
use ARM packages, and use `nomkl` as described above.

---
### input types 'tensor<2x1280xf32>' and 'tensor<\*xf16>' are not broadcast compatible

May appear when just starting to generate, e.g.:

```bash
invoke> clouds
Generating:   0%|          | 0/1 [00:00<?, ?it/s]/Users/[...]/dev/stable-diffusion/ldm/modules/embedding_manager.py:152: UserWarning: The operator 'aten::nonzero' is not currently supported on the MPS backend and will fall back to run on the CPU. This may have performance implications. (Triggered internally at /Users/runner/work/_temp/anaconda/conda-bld/pytorch_1662016319283/work/aten/src/ATen/mps/MPSFallback.mm:11.)
  placeholder_idx = torch.where(
loc("mps_add"("(mpsFileLoc): /AppleInternal/Library/BuildRoots/20d6c351-ee94-11ec-bcaf-7247572f23b4/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShadersGraph/mpsgraph/MetalPerformanceShadersGraph/Core/Files/MPSGraphUtilities.mm":219:0)): error: input types 'tensor<2x1280xf32>' and 'tensor<*xf16>' are not broadcast compatible
LLVM ERROR: Failed to infer result type(s).
Abort trap: 6
/Users/[...]/opt/anaconda3/envs/invokeai/lib/python3.9/multiprocessing/resource_tracker.py:216: UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown
  warnings.warn('resource_tracker: There appear to be %d '
```
@@ -1,17 +0,0 @@
---
title: Installing InvokeAI with the Pre-Compiled PIP Installer
---

# THIS NEEDS TO BE FLESHED OUT

## Introduction

## Walkthrough

## Updating to newer versions

### Updating the stable version

### Updating to the development version

## Troubleshooting
@@ -1,225 +0,0 @@
---
title: Source Installer
---

# The InvokeAI Source Installer

## Introduction

The source installer is a shell script that attempts to automate every step
needed to install and run InvokeAI on a stock computer running recent versions
of Linux, macOS or Windows. It will leave you with a version that runs a stable
version of InvokeAI, with the option to upgrade to experimental versions later.

Before you begin, make sure that you meet the
[hardware requirements](../../index.md#hardware-requirements) and have the
appropriate GPU drivers installed. In particular, if you are a Linux user with
an AMD GPU installed, you may need to install the
[ROCm driver](https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html).

Installation requires roughly 18G of free disk space to load the libraries and
recommended model weights files.
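
To check in advance that the target volume has the space the installer needs, here is a small sketch using Python's standard library (the 18 GiB threshold simply mirrors the requirement above):

```python
import shutil

def free_gib(path="."):
    """Free disk space at `path`, in GiB."""
    return shutil.disk_usage(path).free / 1024**3

if free_gib() < 18:
    print("Warning: less than 18 GiB free; the install may run out of space.")
```
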
## Walk through

Though there are multiple steps, there really is only one click involved to kick
off the process.

1. The source installer is distributed in ZIP files. Go to the
   [latest release](https://github.com/invoke-ai/InvokeAI/releases/latest), and
   look for a series of files named:

    - [invokeAI-src-installer-2.2.3-mac.zip](https://github.com/invoke-ai/InvokeAI/releases/latest/download/invokeAI-src-installer-2.2.3-mac.zip)
    - [invokeAI-src-installer-2.2.3-windows.zip](https://github.com/invoke-ai/InvokeAI/releases/latest/download/invokeAI-src-installer-2.2.3-windows.zip)
    - [invokeAI-src-installer-2.2.3-linux.zip](https://github.com/invoke-ai/InvokeAI/releases/latest/download/invokeAI-src-installer-2.2.3-linux.zip)

    Download the one that is appropriate for your operating system.

2. Unpack the zip file into a directory that has at least 18G of free space. Do
   _not_ unpack into a directory that has an earlier version of InvokeAI.

    This will create a new directory named "invokeAI". This example shows how
    this would look using the `unzip` command-line tool, but you may use any
    graphical or command-line Zip extractor:

    ```cmd
    C:\Documents\Linco> unzip invokeAI-windows.zip
    Archive:  C:\Linco\Downloads\invokeAI-windows.zip
       creating: invokeAI\
      inflating: invokeAI\install.bat
      inflating: invokeAI\readme.txt
    ```

3. If you are a macOS user, you may need to install the Xcode command line
   tools. These are a set of tools that are needed to run certain applications
   in a Terminal, including InvokeAI. This package is provided directly by
   Apple.

    To install, open a terminal window and run `xcode-select --install`. You
    will get a macOS system popup guiding you through the install. If you
    already have the tools installed, you will instead see some output in the
    Terminal advising you that they are already installed.

    More information can be found here:
    <https://www.freecodecamp.org/news/install-xcode-command-line-tools/>
4. If you are using a desktop GUI, double-click the installer file. It will be
   named `install.bat` on Windows systems and `install.sh` on Linux and
   Macintosh systems.

5. Alternatively, from the command line, run the shell script or .bat file:

    ```cmd
    C:\Documents\Linco> cd invokeAI
    C:\Documents\Linco\invokeAI> install.bat
    ```

6. Sit back and let the install script work. It will install various binary
   requirements including Conda, Git and Python, then download the current
   InvokeAI code and install it along with its dependencies.

    Be aware that some of the library download and install steps take a long
    time. In particular, the `pytorch` package is quite large and often appears
    to get "stuck" at 99.9%. Similarly, the `pip installing requirements` step
    may appear to hang. Have patience and the installation step will eventually
    resume. However, there are occasions when the library install does
    legitimately get stuck. If you have been waiting for more than ten minutes
    and nothing is happening, you can interrupt the script with ^C. You may
    restart it and it will pick up where it left off.
7. After installation completes, the installer will launch a script called
   `configure_invokeai.py`, which will guide you through the first-time process
   of selecting one or more Stable Diffusion model weights files, then
   downloading and configuring them.

    Note that the main Stable Diffusion weights file is protected by a license
    agreement that you must accept in order to use it. The script will list the
    steps you need to take to create an account on the official site that hosts
    the weights files, accept the agreement, and provide an access token that
    allows InvokeAI to legally download and install the weights files.

    If you have already downloaded the weights file(s) for another Stable
    Diffusion distribution, you may skip this step (by selecting "skip" when
    prompted) and configure InvokeAI to use the previously-downloaded files. The
    process for this is described in
    [Installing Models](../050_INSTALLING_MODELS.md).
8. The script will now exit and you'll be ready to generate some images. The
   invokeAI directory will contain numerous files. Look for a shell script
   named `invoke.sh` (Linux/Mac) or `invoke.bat` (Windows). Launch the script
   by double-clicking it or typing its name at the command line:

    ```cmd
    C:\Documents\Linco> cd invokeAI
    C:\Documents\Linco\invokeAI> invoke.bat
    ```

    The `invoke.bat` (`invoke.sh`) script will give you the choice of starting
    (1) the command-line interface, or (2) the web GUI. If you start the
    latter, you can load the user interface by pointing your browser at
    http://localhost:9090.

    The `invoke` script also offers you a third option labeled "open the
    developer console". If you choose this option, you will be dropped into a
    command-line interface in which you can run python commands directly,
    access developer tools, and launch InvokeAI with customized options. To do
    the latter, you would launch the script `scripts/invoke.py` as shown in
    this example:

    ```cmd
    python scripts/invoke.py --web --max_load_models=3 \
        --model=waifu-1.3 --steps=30 --outdir=C:/Documents/AIPhotos
    ```

    These options are described in detail in the
    [Command-Line Interface](../../deprecated/CLI.md) documentation.
## Troubleshooting

_Package dependency conflicts._ If you have previously installed InvokeAI or
another Stable Diffusion package, the installer may occasionally pick up
outdated libraries, and either the installer or `invoke` will fail with
complaints about library conflicts. There are two steps you can take to clear
this problem. Both of these are done from within the "developer's console",
which you can get to by launching `invoke.sh` (or `invoke.bat`) and selecting
launch option #3:

1. Remove the previous `invokeai` environment completely. From within the
   developer's console, give the command `conda env remove -n invokeai`. This
   will delete previous files installed by `invoke`.

    Then exit from the developer's console and launch the script `update.sh`
    (or `update.bat`). This will download the most recent InvokeAI (including
    bug fixes) and reinstall the environment. You should then be able to run
    `invoke.sh`/`invoke.bat`.

2. If this doesn't work, you can try cleaning your system's conda cache. This
   is slightly more extreme, but won't interfere with any other python-based
   programs installed on your computer. From the developer's console, run the
   command `conda clean -a` and answer "yes" to all prompts.

    After this is done, run `update.sh` and try again as before.
_"Corrupted configuration file."_ Everything seems to install ok, but `invoke`
complains of a corrupted configuration file and calls `configure_invokeai.py`
to fix it, but this doesn't fix the problem.

This issue is often caused by a misconfigured configuration directive in the
`.invokeai` initialization file that contains startup settings. This can be
corrected by fixing the offending line.

First find `.invokeai`. It is a small text file located in your home
directory, `~/.invokeai` on Mac and Linux systems, and `C:\Users\*your
name*\.invokeai` on Windows systems. Open it with a text editor
(e.g. Notepad on Windows, TextEdit on Macs, or `nano` on Linux)
and look for the lines starting with `--root` and `--outdir`.

An example is here:

```cmd
--root="/home/lstein/invokeai"
--outdir="/home/lstein/invokeai/outputs"
```

There should not be whitespace before or after the directory paths,
and the paths should not end with slashes:

```cmd
--root="/home/lstein/invokeai "    # wrong! trailing whitespace
--root="/home/lstein/invokeai/"    # wrong! shouldn't end in a slash
```

Fix the problem with your text editor and save as a **plain text**
file. This should clear the issue.
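
As a sketch of the check described above, this hypothetical helper flags `--root`/`--outdir` lines whose quoted path has trailing whitespace or ends in a slash (the helper is illustrative, not part of InvokeAI):

```python
def bad_config_lines(text):
    """Return the --root/--outdir lines whose quoted path has trailing
    whitespace or ends in a slash or backslash."""
    bad = []
    for line in text.splitlines():
        stripped = line.strip()
        if stripped.startswith(("--root", "--outdir")) and "=" in stripped:
            # take the value, drop surrounding whitespace, then the quotes
            path = stripped.split("=", 1)[1].strip().strip('"')
            if path != path.rstrip() or path.endswith(("/", "\\")):
                bad.append(line)
    return bad

sample = '--root="/home/lstein/invokeai "\n--outdir="/home/lstein/invokeai/outputs"'
assert bad_config_lines(sample) == ['--root="/home/lstein/invokeai "']
```
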
_If none of these maneuvers fixes the problem_, then please report it to the
[InvokeAI Issues](https://github.com/invoke-ai/InvokeAI/issues) section, or
visit our [Discord Server](https://discord.gg/ZmtBAhwWhy) for interactive
assistance.

## Updating to newer versions

This section describes how to update InvokeAI to new versions of the software.

### Updating the stable version

This distribution is changing rapidly, and we add new features on a daily basis.
To update to the latest released version (recommended), run the `update.sh`
(Linux/Mac) or `update.bat` (Windows) script. This will fetch the latest
release and re-run the `configure_invokeai` script to download any updated model
files that may be needed. You can also use this to add additional models that
you did not select at installation time.

You can now close the developer console and run `invoke` as before. If you get
complaints about missing models, then you may need to do the additional step of
running `configure_invokeai.py`. This happens relatively infrequently. To do
this, simply open up the developer's console again and type
`python scripts/configure_invokeai.py`.

## Troubleshooting

If you run into problems during or after installation, the InvokeAI team is
available to help you. Either create an
[Issue](https://github.com/invoke-ai/InvokeAI/issues) at our GitHub site, or
make a request for help on the "bugs-and-support" channel of our
[Discord server](https://discord.gg/ZmtBAhwWhy). We are a 100% volunteer
organization, but typically somebody will be available to help you within 24
hours, and often much sooner.
@@ -1,137 +0,0 @@
---
title: Manual Installation, Windows
---

# :fontawesome-brands-windows: Windows

## **Notebook install (semi-automated)**

We have a
[Jupyter notebook](https://github.com/invoke-ai/InvokeAI/blob/main/notebooks/Stable_Diffusion_AI_Notebook.ipynb)
with cell-by-cell installation steps. It will download the code in this repo as
one of the steps, so instead of cloning this repo, simply download the notebook
from the link above and load it up in VSCode (with the appropriate extensions
installed), Jupyter, or JupyterLab, and start running the cells one-by-one.

Note that you will need NVIDIA drivers, Python 3.10, and Git installed
beforehand.
## **Manual Install with Conda**

1. Install Anaconda3 (miniconda3 version) from [here](https://docs.anaconda.com/anaconda/install/windows/)

2. Install Git from [here](https://git-scm.com/download/win)

3. Launch Anaconda from the Windows Start menu. This will bring up a command
   window. Type all the remaining commands in this window.

4. Run the command:

    ```batch
    git clone https://github.com/invoke-ai/InvokeAI.git
    ```

    This will create an `InvokeAI` folder where you will follow the rest of
    the steps.

5. Enter the newly-created InvokeAI folder. From this step forward make sure
   that you are working in the InvokeAI directory!

    ```batch
    cd InvokeAI
    ```

6. Run the following commands:

    !!! todo "For systems with a CUDA (Nvidia) card:"

        ```bash
        rmdir src # (this is a precaution in case there is already a src directory)
        conda env create -f environment-cuda.yml
        conda activate invokeai
        (invokeai)>
        ```

    !!! todo "For systems with an AMD card (using ROCm driver):"

        ```bash
        rmdir src # (this is a precaution in case there is already a src directory)
        conda env create -f environment-AMD.yml
        conda activate invokeai
        (invokeai)>
        ```

    This will install all python requirements and activate the "invokeai"
    environment, which sets PATH and other environment variables properly.
7. Load the big stable diffusion weights files and a couple of smaller
   machine-learning models:

    ```bash
    python scripts/configure_invokeai.py
    ```

    !!! note

        This script will lead you through the process of creating an account on
        Hugging Face, accepting the terms and conditions of the Stable Diffusion
        model license, and obtaining an access token for downloading. It will
        then download and install the weights files for you.

        Please look [here](../020_INSTALL_MANUAL.md) for a manual process for
        doing the same thing.
8. Start generating images!

    !!! example ""

        !!! warning "IMPORTANT"

            Make sure that the conda environment is activated, which should create
            `(invokeai)` in front of your prompt!

        === "CLI"

            ```bash
            python scripts/invoke.py
            ```

        === "local Webserver"

            ```bash
            python scripts/invoke.py --web
            ```

        === "Public Webserver"

            ```bash
            python scripts/invoke.py --web --host 0.0.0.0
            ```

    To use an alternative model you may invoke the `!switch` command in
    the CLI, or pass `--model <model_name>` during `invoke.py` launch for
    either the CLI or the Web UI. See [Command Line
    Client](../../deprecated/CLI.md#model-selection-and-importation). The
    model names are defined in `configs/models.yaml`.
9. Subsequently, to relaunch the script, first activate the Anaconda
   command window (step 3), enter the InvokeAI directory (step 5, `cd
   \path\to\InvokeAI`), run `conda activate invokeai` (step 6), and then
   launch the invoke script (step 8).

!!! tip "Tildebyte has written an alternative"

    ["Easy peasy Windows install"](https://github.com/invoke-ai/InvokeAI/wiki/Easy-peasy-Windows-install)
    which uses the Windows Powershell and pew. If you are having trouble with
    Anaconda on Windows, give this a try (or try it first!)
---

This distribution is changing rapidly. If you used the `git clone` method
(step 4) to download the InvokeAI directory, then to update to the
latest and greatest version, launch the Anaconda window, enter
`InvokeAI`, and type:

```bash
git pull
conda env update
```

This will bring your local copy into sync with the remote one.
@@ -126,15 +126,6 @@ nav:
         - Installing Models: 'installation/050_INSTALLING_MODELS.md'
         - Installing PyPatchMatch: 'installation/060_INSTALL_PATCHMATCH.md'
         - Installing xFormers: 'installation/070_INSTALL_XFORMERS.md'
-        - Developers Documentation: 'installation/Developers_documentation/BUILDING_BINARY_INSTALLERS.md'
-        - Deprecated Documentation:
-            - Binary Installer: 'installation/deprecated_documentation/INSTALL_BINARY.md'
-            - Running InvokeAI on Google Colab: 'installation/deprecated_documentation/INSTALL_JUPYTER.md'
-            - Manual Installation on Linux: 'installation/deprecated_documentation/INSTALL_LINUX.md'
-            - Manual Installation on macOS: 'installation/deprecated_documentation/INSTALL_MAC.md'
-            - Manual Installation on Windows: 'installation/deprecated_documentation/INSTALL_WINDOWS.md'
-            - Installing Invoke with pip: 'installation/deprecated_documentation/INSTALL_PCP.md'
-            - Source Installer: 'installation/deprecated_documentation/INSTALL_SOURCE.md'
     - Workflows & Nodes:
         - Nodes Overview: 'nodes/overview.md'
         - Workflow Editor Basics: 'nodes/NODES.md'