diff --git a/README.md b/README.md
index 7b3f95ab10..f10c372545 100644
--- a/README.md
+++ b/README.md
@@ -161,7 +161,7 @@ the command `npm install -g yarn` if needed)
     _For Windows/Linux with an NVIDIA GPU:_
 
     ```terminal
-    pip install "InvokeAI[xformers]" --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu117
+    pip install "InvokeAI[xformers]" --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu118
     ```
 
     _For Linux with an AMD GPU:_
diff --git a/docs/installation/010_INSTALL_AUTOMATED.md b/docs/installation/010_INSTALL_AUTOMATED.md
index 07a88d5d1b..0937c07bca 100644
--- a/docs/installation/010_INSTALL_AUTOMATED.md
+++ b/docs/installation/010_INSTALL_AUTOMATED.md
@@ -471,7 +471,7 @@ Then type the following commands:
 === "NVIDIA System"
 
     ```bash
-    pip install torch torchvision --force-reinstall --extra-index-url https://download.pytorch.org/whl/cu117
+    pip install torch torchvision --force-reinstall --extra-index-url https://download.pytorch.org/whl/cu118
     pip install xformers
     ```
 
diff --git a/docs/installation/020_INSTALL_MANUAL.md b/docs/installation/020_INSTALL_MANUAL.md
index 84f19da9bf..a68e5b1414 100644
--- a/docs/installation/020_INSTALL_MANUAL.md
+++ b/docs/installation/020_INSTALL_MANUAL.md
@@ -148,7 +148,7 @@ manager, please follow these steps:
 === "CUDA (NVidia)"
 
     ```bash
-    pip install "InvokeAI[xformers]" --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu117
+    pip install "InvokeAI[xformers]" --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu118
     ```
 
 === "ROCm (AMD)"
@@ -312,7 +312,7 @@ installation protocol (important!)
 === "CUDA (NVidia)"
 
    ```bash
-    pip install -e .[xformers] --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu117
+    pip install -e .[xformers] --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu118
    ```
 
 === "ROCm (AMD)"
@@ -356,7 +356,7 @@ you can do so using this unsupported recipe:
 mkdir ~/invokeai
 conda create -n invokeai python=3.10
 conda activate invokeai
-pip install InvokeAI[xformers] --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu117
+pip install InvokeAI[xformers] --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu118
 invokeai-configure --root ~/invokeai
 invokeai --root ~/invokeai --web
 ```
diff --git a/docs/installation/030_INSTALL_CUDA_AND_ROCM.md b/docs/installation/030_INSTALL_CUDA_AND_ROCM.md
index 3d3445e3a0..314043f538 100644
--- a/docs/installation/030_INSTALL_CUDA_AND_ROCM.md
+++ b/docs/installation/030_INSTALL_CUDA_AND_ROCM.md
@@ -34,11 +34,11 @@ directly from NVIDIA.
 **Do not try to install Ubuntu's nvidia-cuda-toolkit package. It is out
 of date and will cause conflicts among the NVIDIA driver and binaries.**
 
-Go to [CUDA Toolkit 11.7
-Downloads](https://developer.nvidia.com/cuda-11-7-0-download-archive),
-and use the target selection wizard to choose your operating system,
-hardware platform, and preferred installation method (e.g. "local"
-versus "network").
+Go to [CUDA Toolkit
+Downloads](https://developer.nvidia.com/cuda-downloads), and use the
+target selection wizard to choose your operating system, hardware
+platform, and preferred installation method (e.g. "local" versus
+"network").
 
 This will provide you with a downloadable install file or, depending
 on your choices, a recipe for downloading and running a install shell
@@ -61,7 +61,7 @@ Runtime Site](https://developer.nvidia.com/nvidia-container-runtime)
 
 When installing torch and torchvision manually with `pip`, remember to
 provide the argument `--extra-index-url
-https://download.pytorch.org/whl/cu117` as described in the [Manual
+https://download.pytorch.org/whl/cu118` as described in the [Manual
 Installation Guide](020_INSTALL_MANUAL.md).
 
 ## :simple-amd: ROCm
diff --git a/docs/installation/070_INSTALL_XFORMERS.md b/docs/installation/070_INSTALL_XFORMERS.md
index be54a3ee86..849f9d1ddc 100644
--- a/docs/installation/070_INSTALL_XFORMERS.md
+++ b/docs/installation/070_INSTALL_XFORMERS.md
@@ -28,18 +28,21 @@ command line, then just be sure to activate it's virtual environment.
 Then run the following three commands:
 
 ```sh
-pip install xformers==0.0.16rc425
-pip install triton
+pip install xformers~=0.0.19
+pip install triton # WON'T WORK ON WINDOWS
 python -m xformers.info output
 ```
 
 The first command installs `xformers`, the second installs the
 `triton` training accelerator, and the third prints out the `xformers`
-installation status. If all goes well, you'll see a report like the
+installation status. On Windows, please omit the `triton` package,
+which is not available on that platform.
+
+If all goes well, you'll see a report like the
 following:
 
 ```sh
-xFormers 0.0.16rc425
+xFormers 0.0.20
 memory_efficient_attention.cutlassF: available
 memory_efficient_attention.cutlassB: available
 memory_efficient_attention.flshattF: available
@@ -48,22 +51,28 @@ memory_efficient_attention.smallkF: available
 memory_efficient_attention.smallkB: available
 memory_efficient_attention.tritonflashattF: available
 memory_efficient_attention.tritonflashattB: available
+indexing.scaled_index_addF: available
+indexing.scaled_index_addB: available
+indexing.index_select: available
+swiglu.dual_gemm_silu: available
+swiglu.gemm_fused_operand_sum: available
 swiglu.fused.p.cpp: available
 is_triton_available: True
 is_functorch_available: False
-pytorch.version: 1.13.1+cu117
+pytorch.version: 2.0.1+cu118
 pytorch.cuda: available
-gpu.compute_capability: 8.6
-gpu.name: NVIDIA RTX A2000 12GB
+gpu.compute_capability: 8.9
+gpu.name: NVIDIA GeForce RTX 4070
 build.info: available
-build.cuda_version: 1107
-build.python_version: 3.10.9
-build.torch_version: 1.13.1+cu117
+build.cuda_version: 1108
+build.python_version: 3.10.11
+build.torch_version: 2.0.1+cu118
 build.env.TORCH_CUDA_ARCH_LIST: 5.0+PTX 6.0 6.1 7.0 7.5 8.0 8.6
 build.env.XFORMERS_BUILD_TYPE: Release
 build.env.XFORMERS_ENABLE_DEBUG_ASSERTIONS: None
 build.env.NVCC_FLAGS: None
-build.env.XFORMERS_PACKAGE_FROM: wheel-v0.0.16rc425
+build.env.XFORMERS_PACKAGE_FROM: wheel-v0.0.20
+build.nvcc_version: 11.8.89
 source.privacy: open source
 ```
 
@@ -83,14 +92,14 @@ installed from source.
 These instructions were written for a system running Ubuntu 22.04,
 but other Linux distributions should be able to adapt this recipe.
 
-#### 1. Install CUDA Toolkit 11.7
+#### 1. Install CUDA Toolkit 11.8
 
 You will need the CUDA developer's toolkit in order to compile and
 install xFormers. **Do not try to install Ubuntu's nvidia-cuda-toolkit
 package.** It is out of date and will cause conflicts among the NVIDIA
 driver and binaries. Instead install the CUDA Toolkit package provided
-by NVIDIA itself. Go to [CUDA Toolkit 11.7
-Downloads](https://developer.nvidia.com/cuda-11-7-0-download-archive)
+by NVIDIA itself. Go to [CUDA Toolkit 11.8
+Downloads](https://developer.nvidia.com/cuda-11-8-0-download-archive)
 and use the target selection wizard to choose your platform and Linux
 distribution. Select an installer type of "runfile (local)" at the
 last step.
@@ -101,17 +110,17 @@ example, the install script recipe for Ubuntu 22.04 running on a
 x86_64 system is:
 
 ```
-wget https://developer.download.nvidia.com/compute/cuda/11.7.0/local_installers/cuda_11.7.0_515.43.04_linux.run
-sudo sh cuda_11.7.0_515.43.04_linux.run
+wget https://developer.download.nvidia.com/compute/cuda/11.8.0/local_installers/cuda_11.8.0_520.61.05_linux.run
+sudo sh cuda_11.8.0_520.61.05_linux.run
 ```
 
 Rather than cut-and-paste this example, We recommend that you walk
 through the toolkit wizard in order to get the most up to date
 installer for your system.
 
-#### 2. Confirm/Install pyTorch 1.13 with CUDA 11.7 support
+#### 2. Confirm/Install pyTorch 2.0.1 with CUDA 11.8 support
 
-If you are using InvokeAI 2.3 or higher, these will already be
+If you are using InvokeAI 3.0.2 or higher, these will already be
 installed. If not, you can check whether you have the needed
 libraries using a quick command. Activate the invokeai virtual
 environment, either by entering the "developer's console", or manually with a
@@ -124,7 +133,7 @@ Then run the command:
 ```
 python -c 'exec("import torch\nprint(torch.__version__)")'
 ```
 
-If it prints __1.13.1+cu117__ you're good. If not, you can install the
+If it prints __2.0.1+cu118__ you're good. If not, you can install the
 most up to date libraries with this command:
 
 ```sh
diff --git a/installer/lib/installer.py b/installer/lib/installer.py
index 545d99ab0b..f8bd72d495 100644
--- a/installer/lib/installer.py
+++ b/installer/lib/installer.py
@@ -463,10 +463,10 @@ def get_torch_source() -> (Union[str, None], str):
     url = "https://download.pytorch.org/whl/cpu"
 
     if device == "cuda":
-        url = "https://download.pytorch.org/whl/cu117"
+        url = "https://download.pytorch.org/whl/cu118"
         optional_modules = "[xformers,onnx-cuda]"
     if device == "cuda_and_dml":
-        url = "https://download.pytorch.org/whl/cu117"
+        url = "https://download.pytorch.org/whl/cu118"
         optional_modules = "[xformers,onnx-directml]"
 
     # in all other cases, Torch wheels should be coming from PyPi as of Torch 1.13
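The `installer.py` hunk above only changes the index URL that `get_torch_source()` returns as part of its `(url, optional_modules)` pair. As an illustration of how such a pair is typically consumed, the URL ends up as pip's `--extra-index-url`, matching the documented commands. The `install_invokeai` helper below is a hypothetical sketch, not the installer's actual pip invocation:

```python
# Hypothetical consumer of get_torch_source()'s (url, optional_modules)
# pair; the real installer's pip invocation may differ.
import subprocess
import sys
from typing import Optional


def install_invokeai(url: Optional[str], optional_modules: str) -> None:
    """Install InvokeAI, pulling Torch wheels from a device-specific index."""
    cmd = [sys.executable, "-m", "pip", "install",
           f"InvokeAI{optional_modules}", "--use-pep517"]
    if url is not None:  # e.g. https://download.pytorch.org/whl/cu118 for CUDA
        cmd += ["--extra-index-url", url]
    subprocess.check_call(cmd)
```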
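As a post-upgrade check (a minimal sketch, not part of the patch), the following uses only standard `torch` attributes to confirm that the cu118 wheels described above were actually installed:

```python
# Post-upgrade sanity check (illustrative; not part of the patch above).
import torch

print(torch.__version__)          # expect e.g. "2.0.1+cu118"
print(torch.version.cuda)         # expect "11.8" on a CUDA build
print(torch.cuda.is_available())  # expect True on a working NVIDIA setup
```

If the printed version still ends in `+cu117`, re-run the `pip install torch torchvision --force-reinstall --extra-index-url https://download.pytorch.org/whl/cu118` command from 010_INSTALL_AUTOMATED.md.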