diff --git a/README.md b/README.md
index 932bd79ae9..d471a57dce 100644
--- a/README.md
+++ b/README.md
@@ -175,7 +175,7 @@ the command `npm install -g yarn` if needed)
     pip install InvokeAI --use-pep517 --extra-index-url https://download.pytorch.org/whl/cpu
     ```
 
-    _For Macintoshes, either Intel or M1/M2:_
+    _For Macintoshes, either Intel or M1/M2/M3:_
 
     ```sh
     pip install InvokeAI --use-pep517
diff --git a/docs/installation/010_INSTALL_AUTOMATED.md b/docs/installation/010_INSTALL_AUTOMATED.md
index 52192f33c0..83a182eea8 100644
--- a/docs/installation/010_INSTALL_AUTOMATED.md
+++ b/docs/installation/010_INSTALL_AUTOMATED.md
@@ -179,7 +179,7 @@ experimental versions later.
    you will have the choice of CUDA (NVidia cards), ROCm (AMD cards),
    or CPU (no graphics acceleration). On Windows, you'll have the
    choice of CUDA vs CPU, and on Macs you'll be offered CPU only. When
-   you select CPU on M1 or M2 Macintoshes, you will get MPS-based
+   you select CPU on M1/M2/M3 Macintoshes, you will get MPS-based
    graphics acceleration without installing additional drivers. If you
    are unsure what GPU you are using, you can ask the installer to
    guess.
diff --git a/docs/installation/040_INSTALL_DOCKER.md b/docs/installation/040_INSTALL_DOCKER.md
index a550056ce1..24f32442de 100644
--- a/docs/installation/040_INSTALL_DOCKER.md
+++ b/docs/installation/040_INSTALL_DOCKER.md
@@ -30,7 +30,7 @@ methodology for details on why running applications in such a stateless fashion
 The container is configured for CUDA by default, but can be built to support AMD GPUs by setting the
 `GPU_DRIVER=rocm` environment variable at Docker image build time.
 
-Developers on Apple silicon (M1/M2): You
+Developers on Apple silicon (M1/M2/M3): You
 [can't access your GPU cores from Docker containers](https://github.com/pytorch/pytorch/issues/81224)
 and performance is reduced compared with running it directly on macOS but for development purposes
 it's fine. Once you're done with development tasks on your
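
The automated-install hunk above says that choosing CPU on an Apple-silicon Mac still yields MPS-based graphics acceleration. A quick way to sanity-check that claim after installing, sketched under the assumption that PyTorch is importable from the active environment (`torch.backends.mps.is_available()` is PyTorch's standard probe for the Metal backend):

```sh
# Run inside the environment InvokeAI was installed into;
# prints True when PyTorch can use the Metal Performance Shaders backend.
python -c "import torch; print(torch.backends.mps.is_available())"
```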
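
The Docker hunk notes that AMD support is selected by setting `GPU_DRIVER=rocm` at image build time. A minimal sketch of one way to pass it, assuming the project's Dockerfile declares a `GPU_DRIVER` build argument as the docs imply (the `invokeai` image tag is illustrative):

```sh
# Build the image with the ROCm variant of the dependencies selected.
docker build --build-arg GPU_DRIVER=rocm -t invokeai .
```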