Mirror of https://github.com/invoke-ai/InvokeAI, synced 2024-08-30 20:32:17 +00:00

Merge branch 'development' into fix-empty-prompt-exception

docs/installation/INSTALL_DOCKER.md (new file, 183 lines)
@ -0,0 +1,183 @@
# Before you begin

- For end users: install Stable Diffusion locally using the instructions for your OS.
- For developers: follow these instructions for container-related development tasks or to enable easy deployment to other environments (on-premises or cloud). For general use, install locally to leverage your machine's GPU.

# Why containers?

They provide a flexible, reliable way to build and deploy Stable Diffusion. You'll also use a Docker volume to store the largest model files and image outputs as a first step in decoupling storage and compute. Future enhancements can do this for other assets. See [Processes](https://12factor.net/processes) under the Twelve-Factor App methodology for details on why running applications in such a stateless fashion is important.

You can specify the target platform when building the image and running the container. You'll also need to specify the Stable Diffusion requirements file that matches the container's OS and the architecture it will run on.

Developers on Apple silicon (M1/M2): you [can't access your GPU cores from Docker containers](https://github.com/pytorch/pytorch/issues/81224), so performance is reduced compared with running directly on macOS, but it's fine for development purposes. Once you're done with development tasks on your laptop, you can build for the target platform and architecture and deploy to another environment with NVIDIA GPUs, on-premises or in the cloud.
# Installation on a Linux container

## Prerequisites

### Get the data files

Go to [Hugging Face](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original) and click "Access repository" to download the model file ```sd-v1-4.ckpt``` (~4 GB) to ```~/Downloads```. You'll need to create an account, but it's quick and free.

Also download the face restoration model:

```Shell
cd ~/Downloads
wget https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.3.pth
```
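Optionally, verify the downloads completed intact before copying them around (a quick check, not part of the original steps; compare the output against the checksums published on each model's download page). On macOS, use ```shasum -a 256``` instead:

```Shell
cd ~/Downloads
sha256sum sd-v1-4.ckpt GFPGANv1.3.pth
```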
### Install [Docker](https://github.com/santisbon/guides#docker)

On the Docker Desktop app, go to Preferences, Resources, Advanced. Increase the CPUs and Memory to avoid this [Issue](https://github.com/invoke-ai/InvokeAI/issues/342). You may need to increase Swap and Disk image size too.
## Setup

Set the fork you want to use and other variables.

```Shell
TAG_STABLE_DIFFUSION="santisbon/stable-diffusion"
PLATFORM="linux/arm64"
GITHUB_STABLE_DIFFUSION="-b orig-gfpgan https://github.com/santisbon/stable-diffusion.git"
REQS_STABLE_DIFFUSION="requirements-linux-arm64.txt"
CONDA_SUBDIR="osx-arm64"

echo $TAG_STABLE_DIFFUSION
echo $PLATFORM
echo $GITHUB_STABLE_DIFFUSION
echo $REQS_STABLE_DIFFUSION
echo $CONDA_SUBDIR
```
Create a Docker volume for the downloaded model files.

```Shell
docker volume create my-vol
```

Copy the data files to the Docker volume using a lightweight Linux container. We'll need the models at run time. You just need to create the container with the mountpoint; there's no need to run this dummy container.

```Shell
cd ~/Downloads # or wherever you saved the files

docker create --platform $PLATFORM --name dummy --mount source=my-vol,target=/data alpine

docker cp sd-v1-4.ckpt dummy:/data
docker cp GFPGANv1.3.pth dummy:/data
```
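To confirm the files landed in the volume, you can list its contents from a throwaway container (a quick sanity check reusing the same volume):

```Shell
docker run --rm --mount source=my-vol,target=/data alpine ls -lh /data
```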
Get the repo and download the Miniconda installer (we'll need it at build time). Replace the URL with the version matching your container OS and the architecture it will run on.

```Shell
cd ~
git clone $GITHUB_STABLE_DIFFUSION

cd stable-diffusion/docker-build
chmod +x entrypoint.sh
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-aarch64.sh -O anaconda.sh && chmod +x anaconda.sh
```
Build the Docker image. Give it any tag ```-t``` you want.
Choose the Linux container's host platform: x86-64/Intel is ```amd64```; Apple silicon is ```arm64```. If you're deploying the container to the cloud to leverage powerful GPU instances, you'll be on amd64 hardware, but if you're just trying this out locally on Apple silicon, choose arm64.
The application uses libraries that need to match the host environment, so use the appropriate requirements file.
Tip: Check that your shell session has the env variables set above.

```Shell
docker build -t $TAG_STABLE_DIFFUSION \
--platform $PLATFORM \
--build-arg gsd=$GITHUB_STABLE_DIFFUSION \
--build-arg rsd=$REQS_STABLE_DIFFUSION \
--build-arg cs=$CONDA_SUBDIR \
.
```
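Once the build finishes, you can confirm the image exists and check its size (a quick sanity check):

```Shell
docker image ls $TAG_STABLE_DIFFUSION
```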
Run a container using your built image.
Tip: Make sure you've created and populated the Docker volume (above).

```Shell
docker run -it \
--rm \
--platform $PLATFORM \
--name stable-diffusion \
--hostname stable-diffusion \
--mount source=my-vol,target=/data \
$TAG_STABLE_DIFFUSION
```
# Usage (time to have fun)

## Startup

If you're on a **Linux container**, the ```dream``` script is **automatically started** and the output dir is set to the Docker volume you created earlier.

If you're **directly on macOS, follow these startup instructions**.
With the Conda environment activated (```conda activate ldm```), run the interactive interface that combines the functionality of the original scripts ```txt2img``` and ```img2img```:
Use the more accurate but VRAM-intensive full precision math, because half-precision requires autocast and won't work.
By default the images are saved in ```outputs/img-samples/```.

```Shell
python3 scripts/dream.py --full_precision
```

You'll get the script's prompt. You can see the available options or quit.

```Shell
dream> -h
dream> q
```
## Text to Image

For quick (but bad) image results, test with 5 steps (default 50) and 1 sample image. This will let you know that everything is set up correctly.
Then increase the steps to 100 or more for good (but slower) results.
The prompt can be in quotes or not.

```Shell
dream> The hulk fighting with sheldon cooper -s5 -n1
dream> "woman closeup highly detailed" -s 150

# Reuse previous seed and apply face restoration
dream> "woman closeup highly detailed" --steps 150 --seed -1 -G 0.75
```
You'll need to experiment to see whether face restoration is making the result better or worse for your specific prompt.

If you're on a container, the output is written to the Docker volume. You can copy it wherever you want.
You can download it from the Docker Desktop app: Volumes, my-vol, data.
Or you can copy it from your Mac terminal. Keep in mind that ```docker cp``` can't expand ```*.png```, so you'll need to specify the image file name.

On your host Mac (you can use the name of any container that mounted the volume):

```Shell
docker cp dummy:/data/000001.928403745.png /Users/<your-user>/Pictures
```
## Image to Image

You can also do text-guided image-to-image translation; for example, turning a sketch into a detailed drawing.

```strength``` is a value between 0.0 and 1.0 that controls the amount of noise that is added to the input image. Values that approach 1.0 allow for lots of variation but will also produce images that are not semantically consistent with the input; 0.0 preserves the image exactly, while 1.0 replaces it completely.

Make sure your input image dimensions are multiples of 64, e.g. 512x512. Otherwise you'll get ```Error: product of dimension sizes > 2**31```. If you still get the error, [try a different size](https://support.apple.com/guide/preview/resize-rotate-or-flip-an-image-prvw2015/mac#:~:text=image's%20file%20size-,In%20the%20Preview%20app%20on%20your%20Mac%2C%20open%20the%20file,is%20shown%20at%20the%20bottom.) like 512x256.

If you're on a Docker container, copy your input image into the Docker volume:

```Shell
docker cp /Users/<your-user>/Pictures/sketch-mountains-input.jpg dummy:/data/
```

Try it out by generating an image (or more). The ```dream``` script needs absolute paths to find the image, so don't use ```~```.

If you're on your Mac:

```Shell
dream> "A fantasy landscape, trending on artstation" -I /Users/<your-user>/Pictures/sketch-mountains-input.jpg --strength 0.75 --steps 100 -n4
```

If you're on a Linux container on your Mac:

```Shell
dream> "A fantasy landscape, trending on artstation" -I /data/sketch-mountains-input.jpg --strength 0.75 --steps 50 -n1
```
## Web Interface

You can use the ```dream``` script with a graphical web interface. Start the web server with:

```Shell
python3 scripts/dream.py --full_precision --web
```

If it's running on your Mac, point your Mac web browser to http://127.0.0.1:9090

Press Control-C at the command line to stop the web server.
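Note that if the web server is running inside the Linux container, its port has to be published for your host browser to reach it (and the server has to listen on an address reachable from outside the container). A sketch, assuming the default port 9090 and the run command from above:

```Shell
docker run -it --rm \
--platform $PLATFORM \
--name stable-diffusion \
--hostname stable-diffusion \
--mount source=my-vol,target=/data \
-p 9090:9090 \
$TAG_STABLE_DIFFUSION
```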
## Notes

Some text you can add at the end of the prompt to make it very pretty:

```Shell
cinematic photo, highly detailed, cinematic lighting, ultra-detailed, ultrarealistic, photorealism, Octane Rendering, cyberpunk lights, Hyper Detail, 8K, HD, Unreal Engine, V-Ray, full hd, cyberpunk, abstract, 3d octane render + 4k UHD + immense detail + dramatic lighting + well lit + black, purple, blue, pink, cerulean, teal, metallic colours, + fine details, ultra photoreal, photographic, concept art, cinematic composition, rule of thirds, mysterious, eerie, photorealism, breathtaking detailed, painting art deco pattern, by hsiao, ron cheng, john james audubon, bizarre compositions, exquisite detail, extremely moody lighting, painted by greg rutkowski makoto shinkai takashi takeuchi studio ghibli, akihiko yoshida
```

The original scripts should work as well.

```Shell
python3 scripts/orig_scripts/txt2img.py --help
python3 scripts/orig_scripts/txt2img.py --ddim_steps 100 --n_iter 1 --n_samples 1 --plms --prompt "new born baby kitten. Hyper Detail, Octane Rendering, Unreal Engine, V-Ray"
python3 scripts/orig_scripts/txt2img.py --ddim_steps 5 --n_iter 1 --n_samples 1 --plms --prompt "ocean" # or --klms
```
@ -1,20 +1,28 @@
---
title: Linux
---

# :fontawesome-brands-linux: Linux

## Installation

1. You will need to install the following prerequisites if they are not already
   available. Use your operating system's preferred installer.

   - Python (version 3.8.5 recommended; higher may work)
   - git
2. Install the Python Anaconda environment manager.

   ```bash
   ~$ wget https://repo.anaconda.com/archive/Anaconda3-2022.05-Linux-x86_64.sh
   ~$ chmod +x Anaconda3-2022.05-Linux-x86_64.sh
   ~$ ./Anaconda3-2022.05-Linux-x86_64.sh
   ```

   After installing anaconda, you should log out of your system and log back in. If
   the installation worked, your command prompt will be prefixed by the name of the
   current anaconda environment - `(base)`.
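   As an extra sanity check after logging back in, you can confirm that conda resolves (a quick check, not part of the original steps):

   ```bash
   conda -V
   ```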
3. Copy the InvokeAI source code from GitHub:
@ -31,7 +39,9 @@ This will create InvokeAI folder where you will follow the rest of the steps.
   (base) ~/InvokeAI$
   ```

5. Use anaconda to copy necessary python packages, create a new python
   environment named `ldm` and activate the environment.

   ```
   (base) ~/InvokeAI$ conda env create -f environment.yaml
@ -39,24 +49,36 @@ This will create InvokeAI folder where you will follow the rest of the steps.
   (ldm) ~/InvokeAI$
   ```

   After these steps, your command prompt will be prefixed by `(ldm)` as shown
   above.

6. Load a couple of small machine-learning models required by stable diffusion:

   ```
   (ldm) ~/InvokeAI$ python3 scripts/preload_models.py
   ```

   !!! note

       This step is necessary because I modified the original just-in-time
       model loading scheme to allow the script to work on GPU machines that are not
       internet connected. See [Preload Models](../features/OTHER.md#preload-models)

7. Now you need to install the weights for the stable diffusion model.

   - For running with the released weights, you will first need to set up an
     account with [Hugging Face](https://huggingface.co).
   - Use your credentials to log in, and then point your browser
     [here](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original).
   - You may be asked to sign a license agreement at this point.
   - Click on "Files and versions" near the top of the page, and then click on the
     file named "sd-v1-4.ckpt". You'll be taken to a page that prompts you to click
     the "download" link. Save the file somewhere safe on your local machine.

   Now run the following commands from within the InvokeAI directory.
   This will create a symbolic link from the stable-diffusion model.ckpt file to
   the true location of the `sd-v1-4.ckpt` file.

   ```
   (ldm) ~/InvokeAI$ mkdir -p models/ldm/stable-diffusion-v1
@ -78,7 +100,8 @@ Now run the following commands from within the stable-diffusion directory. This
9. Subsequently, to relaunch the script, be sure to run "conda activate ldm" (step 5, second command), enter the `InvokeAI` directory, and then launch the dream script (step 8). If you forget to activate the ldm environment, the script will fail with multiple `ModuleNotFound` errors.

## Updating to newer versions of the script

This distribution is changing rapidly. If you used the `git clone` method (step 5) to download the InvokeAI directory, then to update to the latest and greatest version, launch the Anaconda window, enter `InvokeAI` and type:
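The exact update commands fall outside this diff hunk; for reference, the Windows instructions at the end of this page use the following sequence, which applies here as well (assuming the same `environment.yaml`):

```bash
git pull
conda env update -f environment.yaml
```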
@ -1,206 +1,338 @@
---
title: macOS
---

# :fontawesome-brands-apple: macOS

## Requirements

- macOS 12.3 Monterey or later
- Python
- Patience
- Apple Silicon or Intel Mac

Things have moved really fast and so these instructions change often, which makes
them outdated pretty fast. One of the problems is that there are so many
different ways to run this.

We are trying to build a testing setup so that when we make changes it doesn't
always break.

## How to

(this hasn't been 100% tested yet)

First get the weights checkpoint download started since it's big and will take
some time:

1. Sign up at [huggingface.co](https://huggingface.co)
2. Go to the
   [Stable Diffusion model page](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original)
3. Accept the terms and click Access Repository.
4. Download
   [sd-v1-4.ckpt (4.27 GB)](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/blob/main/sd-v1-4.ckpt)
   and note where you have saved it (probably the Downloads folder)

While that is downloading, open a Terminal and run the following commands:
!!! todo "Homebrew"

    === "no brew installation yet"

        ```bash title="install brew (and Xcode command line tools)"
        /bin/bash -c \
          "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
        ```

    === "brew is already installed"

        Only if you installed protobuf in a previous version of this tutorial, otherwise skip:

        `#!bash brew uninstall protobuf`

!!! todo "Conda Installation"

    Now there are two different ways to set up the Python (miniconda) environment:

    1. Standalone
    2. with pyenv

    If you don't know what we are talking about, choose Standalone.

    === "Standalone"

        ```bash
        # install cmake and rust:
        brew install cmake rust
        ```

        === "M1 arm64"

            ```bash title="Install miniconda for M1 arm64"
            curl https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-arm64.sh \
              -o Miniconda3-latest-MacOSX-arm64.sh
            /bin/bash Miniconda3-latest-MacOSX-arm64.sh
            ```

        === "Intel x86_64"

            ```bash title="Install miniconda for Intel"
            curl https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-x86_64.sh \
              -o Miniconda3-latest-MacOSX-x86_64.sh
            /bin/bash Miniconda3-latest-MacOSX-x86_64.sh
            ```

    === "with pyenv"

        ```{.bash .annotate}
        brew install rust pyenv-virtualenv # (1)!
        pyenv install anaconda3-2022.05
        pyenv virtualenv anaconda3-2022.05
        eval "$(pyenv init -)"
        pyenv activate anaconda3-2022.05
        ```

        1. You might already have this installed, if that is the case just continue.
```{.bash .annotate title="local repo setup"}
# clone the repo
git clone https://github.com/invoke-ai/InvokeAI.git
cd InvokeAI

#
# wait until the checkpoint file has downloaded, then proceed
#

# create symlink to checkpoint
mkdir -p models/ldm/stable-diffusion-v1/

PATH_TO_CKPT="$HOME/Downloads" # (1)!

ln -s "$PATH_TO_CKPT/sd-v1-4.ckpt" \
  models/ldm/stable-diffusion-v1/model.ckpt
```

1. or wherever you saved sd-v1-4.ckpt
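If you want to double-check the symlink before continuing, this should list the checkpoint (a quick sanity check, not part of the original steps):

```bash
ls -lh models/ldm/stable-diffusion-v1/model.ckpt
```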
!!! todo "create Conda Environment"

    === "M1 arm64"

        ```bash
        PIP_EXISTS_ACTION=w CONDA_SUBDIR=osx-arm64 \
          conda env create \
            -f environment-mac.yaml \
            && conda activate ldm
        ```

    === "Intel x86_64"

        ```bash
        PIP_EXISTS_ACTION=w CONDA_SUBDIR=osx-x86_64 \
          conda env create \
            -f environment-mac.yaml \
            && conda activate ldm
        ```
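Before preloading, you can optionally confirm that your PyTorch build sees the Apple GPU (a minimal check, assuming the `ldm` environment is active):

```bash
python -c "import torch; print(torch.backends.mps.is_available())"
```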
```{.bash .annotate title="preload models and run script"}
# only need to do this once
python scripts/preload_models.py

# now you can run SD in CLI mode
python scripts/dream.py --full_precision # (1)!

# or run the web interface!
python scripts/dream.py --web

# The original scripts should work as well.
python scripts/orig_scripts/txt2img.py \
  --prompt "a photograph of an astronaut riding a horse" \
  --plms
```

1. half-precision requires autocast, which is unfortunately incompatible

!!! note

    `#!bash export PIP_EXISTS_ACTION=w` is a precaution to fix a problem where

    ```bash
    conda env create \
      -f environment-mac.yaml
    ```

    never finished in some situations, so it isn't required but won't hurt.
---

## Common problems

After you've followed all the instructions and try to run dream.py, you might
get several errors. Here are the errors I've seen and found solutions for.

### Is it slow?

```bash title="Be sure to specify 1 sample and 1 iteration."
python ./scripts/orig_scripts/txt2img.py \
  --prompt "ocean" \
  --ddim_steps 5 \
  --n_samples 1 \
  --n_iter 1
```
---

### Doesn't work anymore?

PyTorch nightly includes support for MPS. Because of this, this setup is
inherently unstable. One morning I woke up and it no longer worked no matter
what I did until I switched to miniforge. However, I have another Mac that works
just fine with Anaconda. If you can't get it to work, please search a little
first because many of the errors will get posted and solved. If you can't find a
solution please
[create an issue](https://github.com/invoke-ai/InvokeAI/issues).

One debugging step is to update to the latest version of PyTorch nightly.

```bash
conda install \
  pytorch \
  torchvision \
  -c pytorch-nightly \
  -n ldm
```
If it takes forever to run

```bash
conda env create \
  -f environment-mac.yaml
```

you could try to run:

```bash
git clean -f
conda clean \
  --yes \
  --all
```

Or you could try to completely reset Anaconda:

```bash
conda update \
  --force-reinstall \
  -y \
  -n base \
  -c defaults conda
```
### "No module named cv2", torch, 'ldm', 'transformers', 'taming', etc.
|
||||
---
|
||||
|
||||
There are several causes of these errors.
|
||||
### "No module named cv2", torch, 'ldm', 'transformers', 'taming', etc
|
||||
|
||||
First, did you remember to `conda activate ldm`? If your terminal prompt
|
||||
begins with "(ldm)" then you activated it. If it begins with "(base)"
|
||||
or something else you haven't.
|
||||
There are several causes of these errors:
|
||||
|
||||
Second, you might've run `./scripts/preload_models.py` or `./scripts/dream.py`
|
||||
instead of `python ./scripts/preload_models.py` or `python ./scripts/dream.py`.
|
||||
The cause of this error is long so it's below.
|
||||
1. Did you remember to `conda activate ldm`? If your terminal prompt begins with
|
||||
"(ldm)" then you activated it. If it begins with "(base)" or something else
|
||||
you haven't.
|
||||
|
||||
Third, if it says you're missing taming you need to rebuild your virtual
|
||||
environment.
|
||||
2. You might've run `./scripts/preload_models.py` or `./scripts/dream.py`
|
||||
instead of `python ./scripts/preload_models.py` or
|
||||
`python ./scripts/dream.py`. The cause of this error is long so it's below.
|
||||
|
||||
<!-- I could not find out where the error is, otherwise would have marked it as a footnote -->
|
||||
|
||||
3. if it says you're missing taming you need to rebuild your virtual
|
||||
environment.
|
||||
|
||||
```bash
|
||||
conda deactivate
|
||||
conda env remove -n ldm
|
||||
conda env create -f environment-mac.yaml
|
||||
PIP_EXISTS_ACTION=w CONDA_SUBDIR=osx-arm64 \
|
||||
conda env create \
|
||||
-f environment-mac.yaml
|
||||
```
|
||||
|
||||
Fourth, If you have activated the ldm virtual environment and tried rebuilding it, maybe the problem could be that I have something installed that you don't and you'll just need to manually install it. Make sure you activate the virtual environment so it installs there instead of
|
||||
globally.
|
||||
4. If you have activated the ldm virtual environment and tried rebuilding it,
|
||||
maybe the problem could be that I have something installed that you don't and
|
||||
you'll just need to manually install it. Make sure you activate the virtual
|
||||
environment so it installs there instead of globally.
|
||||
|
||||
```bash
|
||||
conda activate ldm
|
||||
pip install *name*
|
||||
pip install <package name>
|
||||
```
|
||||
|
||||
You might also need to install Rust (I mention this again below).

---

### How many snakes are living in your computer?

You might have multiple Python installations on your system, in which case it's
important to be explicit and consistent about which one to use for a given
project. This is because virtual environments are coupled to the Python that
created it (and all the associated 'system-level' modules).

When you run `python` or `python3`, your shell searches the colon-delimited
locations in the `PATH` environment variable (`echo $PATH` to see that list) in
that order - first match wins. You can ask for the location of the first
`python3` found in your `PATH` with the `which` command like this:

```bash
% which python3
/usr/bin/python3
```
Anything in `/usr/bin` is
[part of the OS](https://developer.apple.com/library/archive/documentation/FileManagement/Conceptual/FileSystemProgrammingGuide/FileSystemOverview/FileSystemOverview.html#//apple_ref/doc/uid/TP40010672-CH2-SW6).
However, `/usr/bin/python3` is not actually python3, but rather a stub that
offers to install Xcode (which includes python 3). If you have Xcode installed
already, `/usr/bin/python3` will execute
`/Library/Developer/CommandLineTools/usr/bin/python3` or
`/Applications/Xcode.app/Contents/Developer/usr/bin/python3` (depending on which
Xcode you've selected with `xcode-select`).

Note that `/usr/bin/python` is an entirely different python - specifically,
python 2. Note: starting in macOS 12.3, `/usr/bin/python` no longer exists.

```bash
% which python3
/opt/homebrew/bin/python3
```
If you installed python3 with Homebrew and you've modified your path to search
for Homebrew binaries before system ones, you'll see the above path.

```bash
% which python
/opt/anaconda3/bin/python
```

If you have Anaconda installed, you will see the above path. There is a
`/opt/anaconda3/bin/python3` also.

We expect that `/opt/anaconda3/bin/python` and `/opt/anaconda3/bin/python3`
should actually be the _same python_, which you can verify by comparing the
output of `python3 -V` and `python -V`.

```bash
(ldm) % which python
/Users/name/miniforge3/envs/ldm/bin/python
```
The above is what you'll see if you have miniforge and correctly activated the
ldm environment, while using the standalone setup instructions above.

If you otherwise installed via pyenv, you will get this result:

```bash
(anaconda3-2022.05) % which python
/Users/name/.pyenv/shims/python
```

It's all a mess and you should know
[how to modify the path environment variable](https://support.apple.com/guide/terminal/use-environment-variables-apd382cc5fa-4f58-4449-b20a-41c53c006f8f/mac)
if you want to fix it. Here's a brief hint of the most common ways you can
modify it (don't really have the time to explain it all here).

- ~/.zshrc
- ~/.bash_profile
@ -208,53 +340,86 @@ if you want to fix it. Here's a brief hint of all the ways you can modify it
- /etc/paths.d
- /etc/path

Which one you use will depend on what you have installed; putting a file in
/etc/paths.d is the way I prefer to do it.

Finally, to answer the question posed by this section's title, it may help to
list all of the `python` / `python3` things found in `$PATH` instead of just the
first hit. To do so, add the `-a` switch to `which`:

```bash
% which -a python3
...
```

This will show a list of all binaries which are actually available in your PATH.

---
### Debugging?

Tired of waiting for your renders to finish before you can see if it works?
Reduce the steps! The image quality will be horrible but at least you'll get
quick feedback.

```bash
python ./scripts/txt2img.py \
  --prompt "ocean" \
  --ddim_steps 5 \
  --n_samples 1 \
  --n_iter 1
```

---
### OSError: Can't load tokenizer for 'openai/clip-vit-large-patch14'

```bash
python scripts/preload_models.py
```
---

### "The operator [name] is not current implemented for the MPS device." (sic)

!!! example "example error"

    ```bash
    ... NotImplementedError: The operator 'aten::_index_put_impl_' is not current
    implemented for the MPS device. If you want this op to be added in priority
    during the prototype phase of this feature, please comment on
    https://github.com/pytorch/pytorch/issues/77764.
    As a temporary fix, you can set the environment variable
    `PYTORCH_ENABLE_MPS_FALLBACK=1` to use the CPU as a fallback for this op.
    WARNING: this will be slower than running natively on MPS.
    ```

This fork already includes a fix for this in
[environment-mac.yaml](https://github.com/invoke-ai/InvokeAI/blob/main/environment-mac.yaml).
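If you hit this error with an op that isn't already handled, the temporary workaround named in the message itself is to enable the CPU fallback before launching the script (slower, as the warning says):

```bash
export PYTORCH_ENABLE_MPS_FALLBACK=1
python scripts/dream.py --full_precision
```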
---

### "Could not build wheels for tokenizers"

I have not seen this error because I had Rust installed on my computer before I
started playing with Stable Diffusion. The fix is to install Rust.

```bash
curl \
  --proto '=https' \
  --tlsv1.2 \
  -sSf https://sh.rustup.rs | sh
```

---
### How come `--seed` doesn't work?

First this:

> Completely reproducible results are not guaranteed across PyTorch releases,
> individual commits, or different platforms. Furthermore, results may not be
> reproducible between CPU and GPU executions, even when using identical seeds.

[PyTorch docs](https://pytorch.org/docs/stable/notes/randomness.html)
@ -263,55 +428,68 @@ still working on it.
### libiomp5.dylib error?

```bash
OMP: Error #15: Initializing libiomp5.dylib, but found libomp.dylib already initialized.
```

You are likely using an Intel package by mistake. Be sure to run conda with the
environment variable `CONDA_SUBDIR=osx-arm64`, like so:

`CONDA_SUBDIR=osx-arm64 conda install ...`

This error happens with Anaconda on Macs when the Intel-only `mkl` is pulled in
by a dependency.
[nomkl](https://stackoverflow.com/questions/66224879/what-is-the-nomkl-python-package-used-for)
is a metapackage designed to prevent this, by making it impossible to install
`mkl`, but if your environment is already broken it may not work.

Do _not_ use `os.environ['KMP_DUPLICATE_LIB_OK']='True'` or equivalents as this
masks the underlying issue of using Intel packages.

---
### Not enough memory

This seems to be a common problem and is probably the underlying problem for a
lot of symptoms (listed below). The fix is to lower your image size or to add
`model.half()` right after the model is loaded. I should probably test it out.
I've read that the reason this fixes problems is because it converts the model
from 32-bit to 16-bit and that leaves more RAM for other things. I have no idea
how that would affect the quality of the images though.

See [this issue](https://github.com/CompVis/stable-diffusion/issues/71).
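As a rough illustration of why this helps: converting a model to half precision halves the bytes per weight. A toy check you can run in the `ldm` environment (not part of the original scripts):

```bash
python -c "import torch; m = torch.nn.Linear(1024, 1024); print(m.weight.element_size()); m = m.half(); print(m.weight.element_size())"
# prints 4 (bytes per float32 weight), then 2 (bytes per float16 weight)
```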
---
### "Error: product of dimension sizes > 2\*\*31'"
|
||||
|
||||
This error happens with img2img, which I haven't played with too much
|
||||
yet. But I know it's because your image is too big or the resolution
|
||||
isn't a multiple of 32x32. Because the stable-diffusion model was
|
||||
trained on images that were 512 x 512, it's always best to use that
|
||||
output size (which is the default). However, if you're using that size
|
||||
and you get the above error, try 256 x 256 or 512 x 256 or something
|
||||
as the source image.
|
||||
This error happens with img2img, which I haven't played with too much yet. But I
|
||||
know it's because your image is too big or the resolution isn't a multiple of
|
||||
32x32. Because the stable-diffusion model was trained on images that were 512 x
|
||||
512, it's always best to use that output size (which is the default). However,
|
||||
if you're using that size and you get the above error, try 256 x 256 or 512 x
|
||||
256 or something as the source image.
|
||||
|
||||
BTW, 2\*\*31-1 = [2,147,483,647](https://en.wikipedia.org/wiki/2,147,483,647#In_computing), which is also 32-bit signed [LONG_MAX](https://en.wikipedia.org/wiki/C_data_types) in C.
|
||||
BTW, 2\*\*31-1 =
|
||||
[2,147,483,647](https://en.wikipedia.org/wiki/2,147,483,647#In_computing), which
|
||||
is also 32-bit signed [LONG_MAX](https://en.wikipedia.org/wiki/C_data_types) in
|
||||
C.
|
||||
|
||||
---

### I just got Rickrolled! Do I have a virus?

You don't have a virus. It's part of the project. Here's
[Rick](https://github.com/invoke-ai/InvokeAI/blob/main/assets/rick.jpeg)
and here's
[the code](https://github.com/invoke-ai/InvokeAI/blob/69ae4b35e0a0f6ee1af8bb9a5d0016ccb27e36dc/scripts/txt2img.py#L79)
that swaps him in. It's an NSFW filter which, IMO, doesn't work very well (and
we call this "computer vision", sheesh).

Actually, this could be happening because there's not enough RAM. You could try
the `model.half()` suggestion or specify smaller output images.

---
### My images come out black
@ -319,31 +497,32 @@ We might have this fixed, we are still testing.
There's a [similar issue](https://github.com/CompVis/stable-diffusion/issues/69)
on CUDA GPU's where the images come out green. Maybe it's the same issue?
Someone in that issue says to use "--precision full", but this fork actually
disables that flag. I don't know why, someone else provided that code and I
don't know what it does. Maybe the `model.half()` suggestion above would fix
this issue too. I should probably test it.

### "view size is not compatible with input tensor's size and stride"

```bash
File "/opt/anaconda3/envs/ldm/lib/python3.10/site-packages/torch/nn/functional.py", line 2511, in layer_norm
return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead.
```

Update to the latest version of invoke-ai/InvokeAI. We were patching
pytorch but we found a file in stable-diffusion that we could change instead.
This is a 32-bit vs 16-bit problem.
---

### The processor must support the Intel bla bla bla

What? Intel? On an Apple Silicon?

```bash
Intel MKL FATAL ERROR: This system does not meet the minimum requirements for use of the Intel(R) Math Kernel Library. The processor must support the Intel(R) Supplemental Streaming SIMD Extensions 3 (Intel(R) SSSE3) instructions. The processor must support the Intel(R) Streaming SIMD Extensions 4.2 (Intel(R) SSE4.2) instructions. The processor must support the Intel(R) Advanced Vector Extensions (Intel(R) AVX) instructions.
```

This is due to the Intel `mkl` package getting picked up when you try to install
something that depends on it -- Rosetta can translate some Intel instructions but
@ -351,11 +530,13 @@ not the specialized ones here. To avoid this, make sure to use the environment
variable `CONDA_SUBDIR=osx-arm64`, which restricts the Conda environment to only
use ARM packages, and use `nomkl` as described above.

---

### input types 'tensor<2x1280xf32>' and 'tensor<\*xf16>' are not broadcast compatible

May appear when just starting to generate, e.g.:

```bash
dream> clouds
Generating: 0%| | 0/1 [00:00<?, ?it/s]/Users/[...]/dev/stable-diffusion/ldm/modules/embedding_manager.py:152: UserWarning: The operator 'aten::nonzero' is not currently supported on the MPS backend and will fall back to run on the CPU. This may have performance implications. (Triggered internally at /Users/runner/work/_temp/anaconda/conda-bld/pytorch_1662016319283/work/aten/src/ATen/mps/MPSFallback.mm:11.)
placeholder_idx = torch.where(
@ -366,4 +547,3 @@ Abort trap: 6
warnings.warn('resource_tracker: There appear to be %d '
```

Macs do not support autocast/mixed-precision. Supply `--full_precision` to use float32 everywhere.
@ -1,110 +1,136 @@
---
title: Windows
---

# :fontawesome-brands-windows: Windows

## **Notebook install (semi-automated)**

We have a
[Jupyter notebook](https://github.com/invoke-ai/InvokeAI/blob/main/notebooks/Stable-Diffusion-local-Windows.ipynb)
with cell-by-cell installation steps. It will download the code in this repo as
one of the steps, so instead of cloning this repo, simply download the notebook
from the link above and load it up in VSCode (with the appropriate extensions
installed)/Jupyter/JupyterLab and start running the cells one-by-one.

Note that you will need NVIDIA drivers, Python 3.10, and Git installed
beforehand - simplified
[step-by-step instructions](https://github.com/invoke-ai/InvokeAI/wiki/Easy-peasy-Windows-install)
are available in the wiki (you'll only need steps 1, 2, & 3).

## **Manual Install**

### **pip**

See
[Easy-peasy Windows install](https://github.com/invoke-ai/InvokeAI/wiki/Easy-peasy-Windows-install)
in the wiki.

---
### **Conda**

1. Install Anaconda3 (miniconda3 version) from [here](https://docs.anaconda.com/anaconda/install/windows/)

2. Install Git from [here](https://git-scm.com/download/win)

3. Launch Anaconda from the Windows Start menu. This will bring up a command
   window. Type all the remaining commands in this window.

4. Run the command:

   ```batch
   git clone https://github.com/invoke-ai/InvokeAI.git
   ```

   This will create the InvokeAI folder where you will follow the rest of the
   steps.

5. Enter the newly-created InvokeAI folder. From this step forward make sure
   that you are working in the InvokeAI directory!

   ```batch
   cd InvokeAI
   ```
6. Run the following two commands:

   ```batch
   conda env create -f environment.yaml
   conda activate ldm
   ```

   This will install all python requirements and activate the "ldm" environment
   which sets PATH and other environment variables properly.

7. Run the command:

   ```batch
   python scripts\preload_models.py
   ```

   This installs several machine learning models that stable diffusion requires.

   Note: This step is required. This was done because some users might be
   blocked by firewalls or have limited internet connectivity for the models to
   be downloaded just-in-time.
8. Now you need to install the weights for the big stable diffusion model.

   - For running with the released weights, you will first need to set up an
     account with [Hugging Face](https://huggingface.co).
   - Use your credentials to log in, and then point your browser at
     [https://huggingface.co/CompVis/stable-diffusion-v-1-4-original](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original).
   - You may be asked to sign a license agreement at this point.
   - Click on "Files and versions" near the top of the page, and then click on
     the file named `sd-v1-4.ckpt`. You'll be taken to a page that prompts you
     to click the "download" link. Now save the file somewhere safe on your
     local machine.
   - The weight file is >4 GB in size, so downloading may take a while.

   Now run the following commands from **within the InvokeAI directory** to
   copy the weights file to the right place:

   ```batch
   mkdir models\ldm\stable-diffusion-v1
   copy C:\path\to\sd-v1-4.ckpt models\ldm\stable-diffusion-v1\model.ckpt
   ```

   Please replace `C:\path\to\sd-v1-4.ckpt` with the correct path to wherever
   you stashed this file. If you prefer not to copy or move the .ckpt file, you
   may instead create a shortcut to it from within
   `models\ldm\stable-diffusion-v1\`.
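   If you go the shortcut route, a symbolic link also works; a sketch using the built-in `mklink` command (run from an elevated command prompt, or with Developer Mode enabled):

   ```batch
   mklink models\ldm\stable-diffusion-v1\model.ckpt C:\path\to\sd-v1-4.ckpt
   ```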
9. Start generating images!

   ```batch
   # for the pre-release weights
   python scripts\dream.py -l

   # for the post-release weights
   python scripts\dream.py
   ```
10. Subsequently, to relaunch the script, first activate the Anaconda command
    window (step 3), enter the InvokeAI directory (step 5,
    `cd \path\to\InvokeAI`), run `conda activate ldm` (step 6), and then launch
    the dream script (step 9).

**Note:** Tildebyte has written an alternative
["Easy peasy Windows install"](https://github.com/invoke-ai/InvokeAI/wiki/Easy-peasy-Windows-install)
which uses the Windows Powershell and pew. If you are having trouble with
Anaconda on Windows, give this a try (or try it first!)
---

## Updating to newer versions of the script

This distribution is changing rapidly. If you used the `git clone` method
(step 4) to download the InvokeAI directory, then to update to the latest and
greatest version, launch the Anaconda window, enter `InvokeAI`, and type:

```batch
git pull
conda env update -f environment.yaml
```