From 0e5c3a641a1e3842ca0f7b5868adbc4651666e14 Mon Sep 17 00:00:00 2001 From: Millun Atluri Date: Tue, 14 Nov 2023 11:05:07 +1100 Subject: [PATCH 1/6] Revert torch to use cu121 --- installer/lib/installer.py | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/installer/lib/installer.py b/installer/lib/installer.py index 8938a39b3a..0d30756fe8 100644 --- a/installer/lib/installer.py +++ b/installer/lib/installer.py @@ -460,10 +460,10 @@ def get_torch_source() -> (Union[str, None], str): url = "https://download.pytorch.org/whl/cpu" if device == "cuda": - url = "https://download.pytorch.org/whl/cu118" + url = "https://download.pytorch.org/whl/cu121" optional_modules = "[xformers,onnx-cuda]" if device == "cuda_and_dml": - url = "https://download.pytorch.org/whl/cu118" + url = "https://download.pytorch.org/whl/cu121" optional_modules = "[xformers,onnx-directml]" # in all other cases, Torch wheels should be coming from PyPi as of Torch 1.13 From c5672adb6b35afb481f7640ca2b0b680b63a72b6 Mon Sep 17 00:00:00 2001 From: Millun Atluri Date: Tue, 14 Nov 2023 11:10:57 +1100 Subject: [PATCH 2/6] Update 070_INSTALL_XFORMERS.md --- docs/installation/070_INSTALL_XFORMERS.md | 31 ++++++++--------------- 1 file changed, 10 insertions(+), 21 deletions(-) diff --git a/docs/installation/070_INSTALL_XFORMERS.md b/docs/installation/070_INSTALL_XFORMERS.md index 849f9d1ddc..94300a7c90 100644 --- a/docs/installation/070_INSTALL_XFORMERS.md +++ b/docs/installation/070_INSTALL_XFORMERS.md @@ -28,7 +28,7 @@ command line, then just be sure to activate it's virtual environment. Then run the following three commands: ```sh -pip install xformers~=0.0.19 +pip install xformers~=0.0.22 pip install triton # WON'T WORK ON WINDOWS python -m xformers.info output ``` @@ -42,7 +42,7 @@ If all goes well, you'll see a report like the following: ```sh -xFormers 0.0.20 +xFormers 0.0.22 memory_efficient_attention.cutlassF: available memory_efficient_attention.cutlassB: available memory_efficient_attention.flshattF: available @@ -59,14 +59,14 @@ swiglu.gemm_fused_operand_sum: available swiglu.fused.p.cpp: available is_triton_available: True is_functorch_available: False -pytorch.version: 2.0.1+cu118 +pytorch.version: 2.1.0+cu121 pytorch.cuda: available gpu.compute_capability: 8.9 gpu.name: NVIDIA GeForce RTX 4070 build.info: available build.cuda_version: 1108 build.python_version: 3.10.11 -build.torch_version: 2.0.1+cu118 +build.torch_version: 2.1.0+cu121 build.env.TORCH_CUDA_ARCH_LIST: 5.0+PTX 6.0 6.1 7.0 7.5 8.0 8.6 build.env.XFORMERS_BUILD_TYPE: Release build.env.XFORMERS_ENABLE_DEBUG_ASSERTIONS: None @@ -92,33 +92,22 @@ installed from source. These instructions were written for a system running Ubuntu 22.04, but other Linux distributions should be able to adapt this recipe. -#### 1. Install CUDA Toolkit 11.8 +#### 1. Install CUDA Toolkit 12.1 You will need the CUDA developer's toolkit in order to compile and install xFormers. **Do not try to install Ubuntu's nvidia-cuda-toolkit package.** It is out of date and will cause conflicts among the NVIDIA driver and binaries. Instead install the CUDA Toolkit package provided -by NVIDIA itself. Go to [CUDA Toolkit 11.8 -Downloads](https://developer.nvidia.com/cuda-11-8-0-download-archive) +by NVIDIA itself. Go to [CUDA Toolkit 12.1 +Downloads](https://developer.nvidia.com/cuda-12-1-0-download-archive) and use the target selection wizard to choose your platform and Linux distribution. Select an installer type of "runfile (local)" at the last step. 
This will provide you with a recipe for downloading and running a -install shell script that will install the toolkit and drivers. For -example, the install script recipe for Ubuntu 22.04 running on a -x86_64 system is: +install shell script that will install the toolkit and drivers. -``` -wget https://developer.download.nvidia.com/compute/cuda/11.8.0/local_installers/cuda_11.8.0_520.61.05_linux.run -sudo sh cuda_11.8.0_520.61.05_linux.run -``` - -Rather than cut-and-paste this example, We recommend that you walk -through the toolkit wizard in order to get the most up to date -installer for your system. - -#### 2. Confirm/Install pyTorch 2.01 with CUDA 11.8 support +#### 2. Confirm/Install pyTorch 2.1.0 with CUDA 12.1 support If you are using InvokeAI 3.0.2 or higher, these will already be installed. If not, you can check whether you have the needed libraries @@ -133,7 +122,7 @@ Then run the command: python -c 'exec("import torch\nprint(torch.__version__)")' ``` -If it prints __1.13.1+cu118__ you're good. If not, you can install the +If it prints __2.1.0+cu121__ you're good. If not, you can install the most up to date libraries with this command: ```sh From f412582d60def3c16142973ccc630383182a7537 Mon Sep 17 00:00:00 2001 From: Millun Atluri Date: Tue, 14 Nov 2023 11:12:01 +1100 Subject: [PATCH 3/6] Update README.md to cu121 --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index d471a57dce..bb166bcf8a 100644 --- a/README.md +++ b/README.md @@ -161,7 +161,7 @@ the command `npm install -g yarn` if needed) _For Windows/Linux with an NVIDIA GPU:_ ```terminal - pip install "InvokeAI[xformers]" --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu118 + pip install "InvokeAI[xformers]" --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu121 ``` _For Linux with an AMD GPU:_ From ea0f8b8791b013746bbbe508038408140993ad32 Mon Sep 17 00:00:00 2001 From: Millun Atluri Date: Tue, 14 Nov 2023 11:13:34 +1100 Subject: [PATCH 4/6] Update 020_INSTALL_MANUAL.md --- docs/installation/020_INSTALL_MANUAL.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/docs/installation/020_INSTALL_MANUAL.md b/docs/installation/020_INSTALL_MANUAL.md index 27484c0ffd..0ddf1bca68 100644 --- a/docs/installation/020_INSTALL_MANUAL.md +++ b/docs/installation/020_INSTALL_MANUAL.md @@ -148,7 +148,7 @@ manager, please follow these steps: === "CUDA (NVidia)" ```bash - pip install "InvokeAI[xformers]" --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu118 + pip install "InvokeAI[xformers]" --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu121 ``` === "ROCm (AMD)" @@ -327,7 +327,7 @@ installation protocol (important!) 
=== "CUDA (NVidia)" ```bash - pip install -e .[xformers] --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu118 + pip install -e .[xformers] --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu121 ``` === "ROCm (AMD)" @@ -375,7 +375,7 @@ you can do so using this unsupported recipe: mkdir ~/invokeai conda create -n invokeai python=3.10 conda activate invokeai -pip install InvokeAI[xformers] --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu118 +pip install InvokeAI[xformers] --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu121 invokeai-configure --root ~/invokeai invokeai --root ~/invokeai --web ``` From 4039dd148da6382411f91ab067dc1896ec4c883c Mon Sep 17 00:00:00 2001 From: Millun Atluri Date: Tue, 14 Nov 2023 11:14:17 +1100 Subject: [PATCH 5/6] Update 030_INSTALL_CUDA_AND_ROCM.md --- docs/installation/030_INSTALL_CUDA_AND_ROCM.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/installation/030_INSTALL_CUDA_AND_ROCM.md b/docs/installation/030_INSTALL_CUDA_AND_ROCM.md index 7f8af06b58..ca33eb5fed 100644 --- a/docs/installation/030_INSTALL_CUDA_AND_ROCM.md +++ b/docs/installation/030_INSTALL_CUDA_AND_ROCM.md @@ -85,7 +85,7 @@ You can find which version you should download from [this link](https://docs.nvi When installing torch and torchvision manually with `pip`, remember to provide the argument `--extra-index-url -https://download.pytorch.org/whl/cu118` as described in the [Manual +https://download.pytorch.org/whl/cu121` as described in the [Manual Installation Guide](020_INSTALL_MANUAL.md). ## :simple-amd: ROCm From 909afc266e867229cf72734b9e88ceb35ef7f98d Mon Sep 17 00:00:00 2001 From: Millun Atluri Date: Tue, 14 Nov 2023 11:14:38 +1100 Subject: [PATCH 6/6] Update 010_INSTALL_AUTOMATED.md --- docs/installation/010_INSTALL_AUTOMATED.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/installation/010_INSTALL_AUTOMATED.md b/docs/installation/010_INSTALL_AUTOMATED.md index 83a182eea8..f6e799aac0 100644 --- a/docs/installation/010_INSTALL_AUTOMATED.md +++ b/docs/installation/010_INSTALL_AUTOMATED.md @@ -471,7 +471,7 @@ Then type the following commands: === "NVIDIA System" ```bash - pip install torch torchvision --force-reinstall --extra-index-url https://download.pytorch.org/whl/cu118 + pip install torch torchvision --force-reinstall --extra-index-url https://download.pytorch.org/whl/cu121 pip install xformers ```