Updates INSTALL_MAC.md

psychedelicious 2022-10-04 13:32:06 +11:00 committed by Lincoln Stein
parent 8e76bc2b5d
commit 9318719b9e

# **macOS Instructions**
Invoke AI runs quite well on M1 Macs and we have a number of M1 users in the community.
While the repo does run on Intel Macs, we only have a couple of reports. If you have an Intel Mac and run into issues, please create an issue on GitHub and we will do our best to help.
## Requirements
- macOS 12.3 Monterey or later
- About 10GB of storage (and 10GB of data if your internet connection has data caps)
- Any M1 Mac, or an Intel Mac with 4GB+ of VRAM (ideally more)
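
If you want to double-check the macOS version and free-storage requirements before starting, here is a quick, optional check from Terminal (these are standard macOS commands, not part of the install itself):

```bash
# Print the macOS version (should be 12.3 Monterey or later)
sw_vers -productVersion
# Show free space on your home volume (you'll want roughly 10GB free)
df -h ~
```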
## Installation
Things have moved really fast, so these instructions change frequently and can be out of date. One of the problems is that there are so many different ways to run this. We are trying to build a testing setup so that when we make changes it doesn't always break.
First you need to download a large checkpoint file.
1. Sign up at https://huggingface.co
2. Go to the [Stable Diffusion model page](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original)
3. Accept the terms and click Access Repository
4. Download [sd-v1-4.ckpt (4.27 GB)](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/blob/main/sd-v1-4.ckpt) and note where you have saved it (probably the Downloads folder). You may want to move it somewhere else for longer-term storage (see the sketch after this list) - SD needs this file to run.
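
If you do decide to move the checkpoint somewhere more permanent, here is a minimal sketch (the `~/sd-models` folder is just an example location; use whatever you like, and point `PATH_TO_CKPT` at that folder in the commands further down):

```bash
# Move the checkpoint out of Downloads into a dedicated folder
mkdir -p ~/sd-models
mv ~/Downloads/sd-v1-4.ckpt ~/sd-models/
# Should list a file of roughly 4.27 GB
ls -lh ~/sd-models/sd-v1-4.ckpt
```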
While that is downloading, open Terminal and run the following commands one at a time, reading the comments and taking care to run the appropriate command for your Mac's architecture (Intel or M1).
Do not just copy and paste the whole thing into your terminal!
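
If you are not sure which architecture your Mac has, you can check before you start (a quick check, not part of the install):

```bash
# Prints "arm64" on Apple Silicon (M1) and "x86_64" on Intel Macs
uname -m
```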
```bash
# Install brew (and Xcode command line tools):
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
#
# Now there are two options to get the Python (miniconda) environment up and running:
# 1. Alongside pyenv
# 2. Standalone
#
# If you don't know what we are talking about, choose 2.
#
# If you are familiar with python environments, you'll know there are other options
# for setting up the environment - you are on your own if you go one of those routes.
##### BEGIN TWO DIFFERENT OPTIONS #####
### BEGIN OPTION 1: Installing alongside pyenv ###
brew install pyenv-virtualenv # you might have this from before, no problem
pyenv install anaconda3-2022.05
pyenv virtualenv anaconda3-2022.05
eval "$(pyenv init -)"
pyenv activate anaconda3-2022.05
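# (Optional check, not part of the original steps: confirm the anaconda
# python is now the active one before continuing.)
python --version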
### END OPTION 1 ###
### BEGIN OPTION 2: Installing standalone ###
# Install cmake, protobuf, and rust:
brew install cmake protobuf rust
# BEGIN ARCHITECTURE-DEPENDENT STEP #
# For M1: install miniconda (M1 arm64 version):
curl https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-arm64.sh -o Miniconda3-latest-MacOSX-arm64.sh
/bin/bash Miniconda3-latest-MacOSX-arm64.sh
# For Intel: install miniconda (Intel x86-64 version):
curl https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-x86_64.sh -o Miniconda3-latest-MacOSX-x86_64.sh
/bin/bash Miniconda3-latest-MacOSX-x86_64.sh
# END ARCHITECTURE-DEPENDENT STEP #
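# (Note, added for clarity: after the Miniconda installer finishes you may need to
# close and reopen Terminal, or start a new shell, before the `conda` command is
# available. You can confirm it works with:)
conda --version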
### END OPTION 2 ###
##### END TWO DIFFERENT OPTIONS #####
# Clone the Invoke AI repo
git clone https://github.com/invoke-ai/InvokeAI.git
cd InvokeAI
#
### WAIT FOR THE CHECKPOINT FILE TO DOWNLOAD, THEN PROCEED ###
#
# We will leave the big checkpoint wherever you stashed it for long-term storage,
# and make a link to it from the repo's folder. This allows you to use it for
# other repos, and if you need to delete Invoke AI, you won't have to download it again.
# Make the directory in the repo for the symlink
mkdir -p models/ldm/stable-diffusion-v1/
PATH_TO_CKPT="$HOME/Downloads" # or wherever you saved sd-v1-4.ckpt
# This is the folder where you put the checkpoint file `sd-v1-4.ckpt`
PATH_TO_CKPT="$HOME/Downloads"
# Create a link to the checkpoint
ln -s "$PATH_TO_CKPT/sd-v1-4.ckpt" models/ldm/stable-diffusion-v1/model.ckpt
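# (Optional check, added here: `ls -l` should show model.ckpt pointing at your sd-v1-4.ckpt)
ls -l models/ldm/stable-diffusion-v1/model.ckpt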
# BEGIN ARCHITECTURE-DEPENDENT STEP #
# For M1: Create the environment & install packages
PIP_EXISTS_ACTION=w CONDA_SUBDIR=osx-arm64 conda env create -f environment-mac.yaml
# For Intel: Create the environment & install packages
PIP_EXISTS_ACTION=w CONDA_SUBDIR=osx-64 conda env create -f environment-mac.yaml
# END ARCHITECTURE-DEPENDENT STEP #
# Activate the environment (you need to do this every time you want to run SD)
conda activate ldm
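# (Optional check, added for clarity: the active environment is marked with an asterisk.)
conda env list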
# This only needs to be done once; it will download some bits and pieces and may take a while
python scripts/preload_models.py
# Run SD!
python scripts/dream.py
```
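
Once `dream.py` starts up, it drops you into an interactive prompt where you type a text prompt and press Enter to generate images. The exact prompt text and available flags can change between versions, so treat the following as an illustration and check `--help` for your copy:

```bash
# Example session (illustrative only)
dream> a photograph of an astronaut riding a horse
```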
The original scripts should work as well.