---
title: macOS
---

# :fontawesome-brands-apple: macOS

## Requirements

- macOS 12.3 Monterey or later
- Python
- Patience
- Apple Silicon or Intel Mac

Things move fast in this space, so these instructions change often and go out of
date quickly. One of the problems is that there are so many different ways to
run this.

We are trying to build a testing setup so that when we make changes, things
don't always break.
## How to

(this hasn't been 100% tested yet)

First get the weights checkpoint download started since it's big and will take
some time:

1. Sign up at [huggingface.co](https://huggingface.co)
2. Go to the
   [Stable Diffusion model page](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original)
3. Accept the terms and click Access Repository
4. Download
   [sd-v1-4.ckpt (4.27 GB)](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/blob/main/sd-v1-4.ckpt)
   and note where you have saved it (probably the Downloads folder)

While that is downloading, open a Terminal and run the following commands:

!!! todo "Homebrew"
=== "no brew installation yet"
```bash title="install brew (and Xcode command line tools)"
/bin/bash -c \
"$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
```
=== "brew is already installed"
If you installed protobuf while following a previous version of this tutorial, remove it now; otherwise skip this step:
`#!bash brew uninstall protobuf`
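Either way, a quick sanity check that brew ended up on your `PATH` (my suggestion, not part of the original steps):

```bash
# prints the Homebrew version; "command not found" means the install
# script above still needs to run, or you need to open a new terminal
brew --version
```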
!!! todo "Conda Installation"
Now there are two different ways to set up the Python (miniconda) environment:

1. Standalone
2. with pyenv

If you don't know what we are talking about, choose Standalone.

=== "Standalone"
```bash
# install cmake and rust:
brew install cmake rust
```
=== "M1 arm64"
```bash title="Install miniconda for M1 arm64"
curl https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-arm64.sh \
-o Miniconda3-latest-MacOSX-arm64.sh
/bin/bash Miniconda3-latest-MacOSX-arm64.sh
```
=== "Intel x86_64"
```bash title="Install miniconda for Intel"
curl https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-x86_64.sh \
-o Miniconda3-latest-MacOSX-x86_64.sh
/bin/bash Miniconda3-latest-MacOSX-x86_64.sh
```
=== "with pyenv"
```{.bash .annotate}
brew install rust pyenv-virtualenv # (1)!
pyenv install anaconda3-2022.05
pyenv virtualenv anaconda3-2022.05
eval "$(pyenv init -)"
pyenv activate anaconda3-2022.05
```
1. You might already have this installed; if so, just continue.
```{.bash .annotate title="local repo setup"}
# clone the repo
git clone https://github.com/invoke-ai/InvokeAI.git
cd InvokeAI
# wait until the checkpoint file has downloaded, then proceed
# create symlink to checkpoint
mkdir -p models/ldm/stable-diffusion-v1/
PATH_TO_CKPT="$HOME/Downloads" # (1)!
ln -s "$PATH_TO_CKPT/sd-v1-4.ckpt" \
models/ldm/stable-diffusion-v1/model.ckpt
```
1. or wherever you saved sd-v1-4.ckpt
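If you want to double-check the symlink before going on (a quick sanity check, not part of the original steps):

```bash
# should show a ~4 GB file; "No such file or directory" means the
# path or filename above doesn't match where you saved the checkpoint
ls -lh models/ldm/stable-diffusion-v1/model.ckpt
```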
!!! todo "create Conda Environment"
=== "M1 arm64"
```bash
PIP_EXISTS_ACTION=w CONDA_SUBDIR=osx-arm64 \
conda env create \
-f environment-mac.yaml \
&& conda activate ldm
```
=== "Intel x86_64"
```bash
PIP_EXISTS_ACTION=w CONDA_SUBDIR=osx-64 \
conda env create \
-f environment-mac.yaml \
&& conda activate ldm
```
```{.bash .annotate title="preload models and run script"}
# only need to do this once
2022-08-31 15:27:13 +00:00
python scripts/preload_models.py
# now you can run SD in CLI mode
python scripts/dream.py --full_precision # (1)!
# or run the web interface!
python scripts/dream.py --web
# The original scripts should work as well.
python scripts/orig_scripts/txt2img.py \
--prompt "a photograph of an astronaut riding a horse" \
--plms
```
Note: `export PIP_EXISTS_ACTION=w` is a precaution against `conda env create -f
environment-mac.yml` never finishing in some situations, so it isn't required,
but it won't hurt.
---
## Common problems

After following all the instructions and trying to run dream.py, you might get
several errors. Here are the errors I've seen and found solutions for.
### Is it slow?
```bash title="Be sure to specify 1 sample and 1 iteration."
python ./scripts/orig_scripts/txt2img.py \
--prompt "ocean" \
--ddim_steps 5 \
--n_samples 1 \
--n_iter 1
```

---
### Doesn't work anymore?
PyTorch nightly includes support for MPS, so this setup is inherently unstable.
One morning I woke up and it no longer worked, no matter what I did, until I
switched to miniforge. However, I have another Mac that works just fine with
Anaconda. If you can't get it to work, please search a little first, because
many of the errors will get posted and solved. If you can't find a solution,
please [create an issue](https://github.com/invoke-ai/InvokeAI/issues).

One debugging step is to update to the latest version of PyTorch nightly.

```bash
conda install \
pytorch \
torchvision \
-c pytorch-nightly \
-n ldm
```
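Afterwards, a quick way to confirm which build is active and whether the MPS backend is available (assuming the `ldm` environment is activated):

```bash
# prints the installed torch version and True/False for MPS availability
python -c "import torch; print(torch.__version__, torch.backends.mps.is_available())"
```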
If it takes forever to run `conda env create -f environment-mac.yml`, try this:
```bash
git clean -f
conda clean \
--yes \
--all
```
Or you could try to completely reset Anaconda:
```bash
conda update \
--force-reinstall \
-y \
-n base \
-c defaults conda
```

---
### "No module named cv2", torch, 'ldm', 'transformers', 'taming', etc
There are several causes of these errors:

1. Did you remember to `conda activate ldm`? If your terminal prompt begins with
   "(ldm)" then you activated it. If it begins with "(base)" or something else
   you haven't.

2. You might've run `./scripts/preload_models.py` or `./scripts/dream.py`
   instead of `python ./scripts/preload_models.py` or
   `python ./scripts/dream.py`. The cause of this error is long, so it's below.
<!-- I could not find out where the error is, otherwise would have marked it as a footnote -->
3. If it says you're missing taming, you need to rebuild your virtual
   environment.
```bash
conda deactivate
conda env remove -n ldm
conda env create -f environment-mac.yml
```
4. If you have activated the ldm virtual environment and tried rebuilding it,
   maybe the problem is that I have something installed that you don't and
   you'll just need to install it manually. Make sure you activate the virtual
   environment so it installs there instead of globally.
```bash
conda activate ldm
pip install <package_name>
```
You might also need to install Rust (I mention this again below).

---

### How many snakes are living in your computer?
You might have multiple Python installations on your system, in which case it's
important to be explicit and consistent about which one to use for a given
project. This is because virtual environments are coupled to the Python that
created them (and all the associated 'system-level' modules).

When you run `python` or `python3`, your shell searches the colon-delimited
locations in the `PATH` environment variable (`echo $PATH` to see that list) in
that order - first match wins. You can ask for the location of the first
`python3` found in your `PATH` with the `which` command like this:
```bash
% which python3
/usr/bin/python3
```
Anything in `/usr/bin` is
[part of the OS](https://developer.apple.com/library/archive/documentation/FileManagement/Conceptual/FileSystemProgrammingGuide/FileSystemOverview/FileSystemOverview.html#//apple_ref/doc/uid/TP40010672-CH2-SW6).
However, `/usr/bin/python3` is not actually python3, but rather a stub that
offers to install Xcode (which includes python 3). If you have Xcode installed
already, `/usr/bin/python3` will execute
`/Library/Developer/CommandLineTools/usr/bin/python3` or
`/Applications/Xcode.app/Contents/Developer/usr/bin/python3` (depending on which
Xcode you've selected with `xcode-select`).

Note that `/usr/bin/python` is an entirely different python - specifically,
python 2. However, starting in macOS 12.3, `/usr/bin/python` no longer exists.
```bash
% which python3
/opt/homebrew/bin/python3
```
If you installed python3 with Homebrew and you've modified your path to search
for Homebrew binaries before system ones, you'll see the above path.
```bash
% which python
/opt/anaconda3/bin/python
```
If you have Anaconda installed, you will see the above path. There is a
`/opt/anaconda3/bin/python3` also. We expect that `/opt/anaconda3/bin/python`
and `/opt/anaconda3/bin/python3` should actually be the _same python_, which
you can verify by comparing the output of `python3 -V` and `python -V`.
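For example (the exact version reported will depend on your Anaconda installation):

```bash
# both should report the same version if they are really the same interpreter
/opt/anaconda3/bin/python -V
/opt/anaconda3/bin/python3 -V
```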
```bash
(ldm) % which python
/Users/name/miniforge3/envs/ldm/bin/python
```
The above is what you'll see if you have miniforge and correctly activated the
ldm environment, while using the standalone setup instructions above.
If you installed via pyenv instead, you will get this result:
```bash
(anaconda3-2022.05) % which python
/Users/name/.pyenv/shims/python
```
It's all a mess, and you should know
[how to modify the path environment variable](https://support.apple.com/guide/terminal/use-environment-variables-apd382cc5fa-4f58-4449-b20a-41c53c006f8f/mac)
if you want to fix it. Here's a brief hint of the most common places you can
modify it (I don't really have the time to explain it all here):

- ~/.zshrc
- ~/.bash_profile
- ~/.bashrc
- /etc/paths.d
- /etc/paths

Which one you use will depend on what you have installed; the exception is
putting a file in /etc/paths.d, which works regardless and is also the way I
prefer to do it (see the sketch below).
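A minimal sketch of that approach (the directory and file names here are just an example; use whatever directory you actually want on `PATH`):

```bash
# path_helper reads one directory per line from each file in /etc/paths.d
# and adds them to PATH for every new shell
sudo sh -c 'echo "/opt/homebrew/bin" > /etc/paths.d/homebrew'
```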
Finally, to answer the question posed by this section's title, it may help to
list all of the `python` / `python3` things found in `$PATH` instead of just the
first hit. To do so, add the `-a` switch to `which`:

```bash
% which -a python3
...
```
This will show a list of all the `python3` binaries that are available in your
`PATH`.

---

### Debugging?
Tired of waiting for your renders to finish before you can see if it works?
Reduce the steps! The image quality will be horrible but at least you'll get
quick feedback.

```bash
python ./scripts/orig_scripts/txt2img.py \
--prompt "ocean" \
--ddim_steps 5 \
--n_samples 1 \
--n_iter 1
```

---
### OSError: Can't load tokenizer for 'openai/clip-vit-large-patch14'
```bash
python scripts/preload_models.py
```

---
### "The operator [name] is not current implemented for the MPS device." (sic)
!!! example "example error"
```bash
... NotImplementedError: The operator 'aten::_index_put_impl_' is not current
implemented for the MPS device. If you want this op to be added in priority
during the prototype phase of this feature, please comment on
https://github.com/pytorch/pytorch/issues/77764.
As a temporary fix, you can set the environment variable
`PYTORCH_ENABLE_MPS_FALLBACK=1` to use the CPU as a fallback for this op.
WARNING: this will be slower than running natively on MPS.
```
This fork already includes a fix for this in
[environment-mac.yml](https://github.com/invoke-ai/InvokeAI/blob/main/environment-mac.yml).

### "Could not build wheels for tokenizers"
I have not seen this error because I had Rust installed on my computer before I
started playing with Stable Diffusion. The fix is to install Rust.
```bash
curl \
--proto '=https' \
--tlsv1.2 \
-sSf https://sh.rustup.rs | sh
```

---
### How come `--seed` doesn't work?
First this:
> Completely reproducible results are not guaranteed across PyTorch releases,
> individual commits, or different platforms. Furthermore, results may not be
> reproducible between CPU and GPU executions, even when using identical seeds.

[PyTorch docs](https://pytorch.org/docs/stable/notes/randomness.html)

Second, we might have a fix that at least gets a consistent seed, sort of. We're
still working on it.
### libiomp5.dylib error?
```bash
OMP: Error #15: Initializing libiomp5.dylib, but found libomp.dylib already initialized.
```
You are likely using an Intel package by mistake. Be sure to run conda with the
environment variable `CONDA_SUBDIR=osx-arm64`, like so:

`CONDA_SUBDIR=osx-arm64 conda install ...`

This error happens with Anaconda on Macs when the Intel-only `mkl` is pulled in
by a dependency.
[nomkl](https://stackoverflow.com/questions/66224879/what-is-the-nomkl-python-package-used-for)
is a metapackage designed to prevent this, by making it impossible to install
`mkl`, but if your environment is already broken it may not work.
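A hedged sketch of installing it into the existing environment (it may be simpler to just recreate the environment as described above):

```bash
# nomkl steers conda away from the Intel-only mkl builds
conda install -n ldm nomkl
```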
Do _not_ use `os.environ['KMP_DUPLICATE_LIB_OK']='True'` or equivalents as this
masks the underlying issue of using Intel packages.

---

### Not enough memory
This seems to be a common problem and is probably the underlying problem for a
lot of symptoms (listed below). The fix is to lower your image size or to add
`model.half()` right after the model is loaded. I should probably test it out.
I've read that the reason this fixes things is that it converts the model from
32-bit to 16-bit, which leaves more RAM for other things. I have no idea how
that would affect the quality of the images, though.
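If you want to try the "lower your image size" route, you can do that per prompt; a hedged example using dream.py's width and height switches (check `--help` if your version's flags differ):

```bash
# inside the interactive prompt, ask for 256x256 instead of the 512x512 default
dream> "ocean" -W 256 -H 256
```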
See [this issue](https://github.com/CompVis/stable-diffusion/issues/71).

---

### "Error: product of dimension sizes > 2\*\*31'"
This error happens with img2img, which I haven't played with too much yet. But I
know it's because your image is too big or the resolution isn't a multiple of
32x32. Because the stable-diffusion model was trained on images that were 512 x
512, it's always best to use that output size (which is the default). However,
if you're using that size and you get the above error, try 256 x 256 or 512 x
256 or something as the source image.
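If you need to shrink a source image, macOS ships with `sips`; a hedged example (the file names are placeholders):

```bash
# resample the image so its longest side is 512 pixels, writing a new file
sips -Z 512 input.png --out input-small.png
```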
BTW, 2\*\*31-1 =
[2,147,483,647](https://en.wikipedia.org/wiki/2,147,483,647#In_computing), which
is also 32-bit signed [LONG_MAX](https://en.wikipedia.org/wiki/C_data_types) in
C.

---

### I just got Rickrolled! Do I have a virus?
You don't have a virus. It's part of the project. Here's
[Rick](https://github.com/invoke-ai/InvokeAI/blob/main/assets/rick.jpeg)
and here's
[the code](https://github.com/invoke-ai/InvokeAI/blob/69ae4b35e0a0f6ee1af8bb9a5d0016ccb27e36dc/scripts/txt2img.py#L79)
that swaps him in. It's an NSFW filter which, IMO, doesn't work very well (and
we call this "computer vision", sheesh).

Actually, this could be happening because there's not enough RAM. You could try
the `model.half()` suggestion or specify smaller output images.

---

### My images come out black
We might have this fixed; we are still testing.

There's a [similar issue](https://github.com/CompVis/stable-diffusion/issues/69)
on CUDA GPUs where the images come out green. Maybe it's the same issue?
Someone in that issue says to use "--precision full", but this fork actually
disables that flag. I don't know why; someone else provided that code and I
don't know what it does. Maybe the `model.half()` suggestion above would fix
this issue too. I should probably test it.

### "view size is not compatible with input tensor's size and stride"
```bash
File "/opt/anaconda3/envs/ldm/lib/python3.10/site-packages/torch/nn/functional.py", line 2511, in layer_norm
return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead.
```
Update to the latest version of invoke-ai/InvokeAI. We were patching
pytorch but we found a file in stable-diffusion that we could change instead.
This is a 32-bit vs 16-bit problem.
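A hedged sketch of what "update to the latest version" means in this setup (assuming you cloned the repo and created the conda environment as described above):

```bash
# pull the latest code, then bring the conda environment back in sync with it
git pull
conda env update -f environment-mac.yml
```

If the environment has drifted too far for that to work, removing and recreating
it (as shown in the "No module named..." section above) is the more reliable fix.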
---
### The processor must support the Intel bla bla bla
What? Intel? On Apple Silicon?

```bash
Intel MKL FATAL ERROR: This system does not meet the minimum requirements for use of the Intel(R) Math Kernel Library. The processor must support the Intel(R) Supplemental Streaming SIMD Extensions 3 (Intel(R) SSSE3) instructions. The processor must support the Intel(R) Streaming SIMD Extensions 4.2 (Intel(R) SSE4.2) instructions. The processor must support the Intel(R) Advanced Vector Extensions (Intel(R) AVX) instructions.
```
This is due to the Intel `mkl` package getting picked up when you try to install
something that depends on it; Rosetta can translate some Intel instructions but
not the specialized ones here. To avoid this, make sure to use the environment
variable `CONDA_SUBDIR=osx-arm64`, which restricts the Conda environment to only
use ARM packages, and use `nomkl` as described above.

---

### input types 'tensor<2x1280xf32>' and 'tensor<\*xf16>' are not broadcast compatible
May appear when just starting to generate, e.g.:

```bash
dream> clouds
Generating: 0%| | 0/1 [00:00< ?, ?it/s]/Users/[...]/dev/stable-diffusion/ldm/modules/embedding_manager.py:152: UserWarning: The operator 'aten::nonzero' is not currently supported on the MPS backend and will fall back to run on the CPU. This may have performance implications. (Triggered internally at /Users/runner/work/_temp/anaconda/conda-bld/pytorch_1662016319283/work/aten/src/ATen/mps/MPSFallback.mm:11.)
placeholder_idx = torch.where(
loc("mps_add"("(mpsFileLoc): /AppleInternal/Library/BuildRoots/20d6c351-ee94-11ec-bcaf-7247572f23b4/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShadersGraph/mpsgraph/MetalPerformanceShadersGraph/Core/Files/MPSGraphUtilities.mm":219:0)): error: input types 'tensor< 2x1280xf32 > ' and 'tensor< *xf16>' are not broadcast compatible
LLVM ERROR: Failed to infer result type(s).
Abort trap: 6
/Users/[...]/opt/anaconda3/envs/ldm/lib/python3.9/multiprocessing/resource_tracker.py:216: UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown
warnings.warn('resource_tracker: There appear to be %d '
```