---
title: Installing xFormers
---
# :material-image-size-select-large: Installing xFormers
xFormers is a toolbox that integrates with the PyTorch and CUDA
libraries to provide accelerated performance and reduced memory
consumption for applications using the Transformer machine learning
architecture. After installing xFormers, InvokeAI users with
CUDA GPUs will see a noticeable decrease in GPU memory consumption and
an increase in speed.
xFormers can be installed into a working InvokeAI installation without
any code changes or other updates. This document explains how to
install xFormers.
## Linux
Note that xFormers only works with true NVIDIA GPUs and will not work
properly with the ROCm driver for AMD acceleration.
xFormers is not currently available as a pip binary wheel and must be
installed from source. These instructions were written for a system
running Ubuntu 22.04, but other Linux distributions should be able to
adapt this recipe.
### 1. Install CUDA Toolkit 11.7
You will need the CUDA developer's toolkit in order to compile and
install xFormers. **Do not try to install Ubuntu's nvidia-cuda-toolkit
package.** It is out of date and will cause conflicts with the NVIDIA
driver and binaries. Instead, install the CUDA Toolkit package provided
by NVIDIA itself. Go to [CUDA Toolkit 11.7
Downloads](https://developer.nvidia.com/cuda-11-7-0-download-archive)
and use the target selection wizard to choose your platform and Linux
distribution. Select an installer type of "runfile (local)" at the
last step.
This will provide you with a recipe for downloading and running an
install shell script that will install the toolkit and drivers. For
example, the install script recipe for Ubuntu 22.04 running on an
x86_64 system is:
```sh
wget https://developer.download.nvidia.com/compute/cuda/11.7.0/local_installers/cuda_11.7.0_515.43.04_linux.run
sudo sh cuda_11.7.0_515.43.04_linux.run
```
Rather than cut-and-paste this example, we recommend that you walk
through the toolkit wizard in order to get the most up-to-date
installer for your system.
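
After the installer finishes, `nvcc --version` should report the
toolkit release. As a sketch of how to confirm you got the right
version (the helper function and sample line below are illustrative,
not part of the toolkit), the release number can be picked out of that
output like this:

```python
import re

# The last line of `nvcc --version` looks like:
#   Cuda compilation tools, release 11.7, V11.7.64
def toolkit_release(nvcc_output: str) -> str:
    """Extract the 'release X.Y' number from nvcc --version output."""
    match = re.search(r"release (\d+\.\d+)", nvcc_output)
    if match is None:
        raise ValueError("no release number found in nvcc output")
    return match.group(1)

sample = "Cuda compilation tools, release 11.7, V11.7.64"
print(toolkit_release(sample))  # -> 11.7
```

If this reports something other than 11.7, check that the toolkit's
`bin` directory (by default `/usr/local/cuda-11.7/bin` for the runfile
installer) comes first on your `PATH`.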
### 2. Confirm/Install PyTorch 1.13 with CUDA 11.7 support
If you are using InvokeAI 2.3 or higher, these will already be
installed. If not, you can check whether you have the needed libraries
using a quick command. Activate the invokeai virtual environment,
either by entering the "developer's console", or manually with a
command similar to `source ~/invokeai/.venv/bin/activate` (depending
on where your `invokeai` directory is).
Then run the command:
```sh
python -c 'import torch; print(torch.__version__)'
```
If it prints __1.13.1+cu117__ you're good. If not, you can install the
most up-to-date libraries with this command:
```sh
pip install --upgrade --force-reinstall torch torchvision
```
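
For reference, the `+cu117` suffix is PyTorch's "local version" tag
naming the CUDA build the wheel was compiled against. A small
illustrative helper (the function is ours, not part of torch) shows how
to read it:

```python
def cuda_build(torch_version: str):
    """Return the CUDA tag from a torch version string, or None otherwise.

    "1.13.1+cu117" -> "cu117" (built against CUDA 11.7);
    "1.13.1" or "1.13.1+cpu" -> None (no CUDA support).
    """
    _, _, local = torch_version.partition("+")
    return local if local.startswith("cu") else None

print(cuda_build("1.13.1+cu117"))  # -> cu117
print(cuda_build("1.13.1"))        # -> None
```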
### 3. Install the triton module
This module isn't necessary for xFormers image inference optimization,
but avoids a startup warning.
```sh
pip install triton
```
### 4. Install source code build prerequisites
To build xFormers from source, you will need the `build-essential`
package. If you don't have it installed already, run:
```sh
sudo apt install build-essential
```
### 5. Build xFormers
There is no pip wheel package for xFormers at this time (January
2023). Although there is a conda package, InvokeAI no longer
officially supports conda installations and you're on your own if you
wish to try this route.
Following the recipe provided at the [xFormers GitHub
page](https://github.com/facebookresearch/xformers), and with the
InvokeAI virtual environment active (see step 2), run the following
commands:
```sh
pip install ninja
export TORCH_CUDA_ARCH_LIST="6.0;6.1;6.2;7.0;7.2;7.5;8.0;8.6"
pip install -v -U git+https://github.com/facebookresearch/xformers.git@main#egg=xformers
```
The TORCH_CUDA_ARCH_LIST is a list of GPU architectures to compile
xFormers support for. You can speed up compilation by selecting only
the architecture specific to your system. You'll find the list of
GPUs and their architectures in NVIDIA's [GPU Compute
Capability](https://developer.nvidia.com/cuda-gpus) table.
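
As an illustration (the lookup table below is a small excerpt from
NVIDIA's page, and `arch_list` is our own helper, not part of any
tool), a trimmed `TORCH_CUDA_ARCH_LIST` value could be built like this:

```python
# A few entries from NVIDIA's compute-capability table; look up
# your own card at https://developer.nvidia.com/cuda-gpus.
COMPUTE_CAPABILITY = {
    "GeForce GTX 1080": "6.1",
    "Tesla V100": "7.0",
    "GeForce RTX 2080": "7.5",
    "A100": "8.0",
    "GeForce RTX 3090": "8.6",
}

def arch_list(gpus):
    """Build a TORCH_CUDA_ARCH_LIST value covering just the given GPUs."""
    return ";".join(sorted({COMPUTE_CAPABILITY[g] for g in gpus}))

# Compiling only for an RTX 3090 instead of all eight architectures:
print(arch_list(["GeForce RTX 3090"]))  # -> 8.6
```

For example, on a machine with only an RTX 3090 you could run
`export TORCH_CUDA_ARCH_LIST="8.6"` before the pip install above.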
If the compile and install completes successfully, you can check that
xFormers is installed with this command:
```sh
python -m xformers.info
```
If successful, the top of the listing should indicate "available" for
each of the `memory_efficient_attention` modules, as shown here:
```sh
memory_efficient_attention.cutlassF: available
memory_efficient_attention.cutlassB: available
memory_efficient_attention.flshattF: available
memory_efficient_attention.flshattB: available
memory_efficient_attention.smallkF: available
memory_efficient_attention.smallkB: available
memory_efficient_attention.tritonflashattF: available
memory_efficient_attention.tritonflashattB: available
[...]
```
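
If you'd rather check this programmatically, here is a sketch (the
function and sample text are ours, illustrating the output format
above) that flags any backend whose status is not "available":

```python
def unavailable_backends(info_output: str):
    """List memory_efficient_attention backends not reported "available"."""
    bad = []
    for line in info_output.splitlines():
        if not line.startswith("memory_efficient_attention."):
            continue
        name, _, status = line.partition(":")
        if status.strip() != "available":
            bad.append(name)
    return bad

sample = (
    "memory_efficient_attention.cutlassF: available\n"
    "memory_efficient_attention.smallkF: unavailable\n"
)
print(unavailable_backends(sample))  # -> ['memory_efficient_attention.smallkF']
```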
You can now launch InvokeAI and enjoy the benefits of xFormers.
## Windows
To come
## Macintosh
Since CUDA is unavailable on Macintosh systems, you will not benefit
from xFormers.