Add entrypoint and update documentation

Commit b656d333de (parent 7136603604)
@@ -2,29 +2,84 @@
Table of Contents
=================

* [Step 1 - Get the Model](#step-1---get-the-model)
* [Step 2 - Installation](#step-2---installation)
  * [Option A - On a Linux container with Docker for Apple silicon](#option-a---on-a-linux-container-with-docker-for-apple-silicon)
    * [Prerequisites](#prerequisites)
    * [Setup](#setup)
  * [Option B - Directly on Apple silicon](#option-b---directly-on-apple-silicon)
    * [Prerequisites](#prerequisites-1)
    * [Setup](#setup-1)
* [Step 3 - Usage (time to have fun)](#step-3---usage-time-to-have-fun)
  * [Startup](#startup)
  * [Text to Image](#text-to-image)
  * [Image to Image](#image-to-image)
  * [Web Interface](#web-interface)
  * [Notes](#notes)

# Step 1 - Get the Model
Go to [Hugging Face](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original), and click "Access repository" to download ```sd-v1-4.ckpt``` (~4 GB) to ```~/Downloads```.
You'll need to create an account, but it's quick and free.
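
Before moving on you can sanity-check the download from a terminal. This is just an optional sketch; adjust the path if you saved the file somewhere other than ```~/Downloads```.
```Shell
# The checkpoint should be roughly 4 GB.
ls -lh ~/Downloads/sd-v1-4.ckpt
```
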
# Step 2 - Installation

## Option A - On a Linux container with Docker for Apple silicon
You [can't access the MacBook M1/M2 GPU cores from Docker containers](https://github.com/pytorch/pytorch/issues/81224), so performance is reduced, but for development purposes it's fine.

### Prerequisites
[Install Docker](https://gist.github.com/santisbon/2165fd1c9aaa1f7974f424535d3756f7#install-2)
On the Docker Desktop app, go to Preferences, Resources, Advanced. Increase the CPUs and Memory to avoid this [Issue](https://github.com/lstein/stable-diffusion/issues/342). You may need to increase Swap and Disk image size too.

Create a Docker volume for the downloaded model file.
```Shell
docker volume create my-vol
```

Populate the volume using a lightweight Linux container. You just need to create the container with the mountpoint; no need to run it.
```Shell
docker create --platform linux/arm64 --name dummy --mount source=my-vol,target=/data alpine # or arm64v8/alpine

# Copy the model file to the Docker volume. We'll need it at run time.
cd ~/Downloads # or wherever you saved sd-v1-4.ckpt
docker cp sd-v1-4.ckpt dummy:/data
```

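To confirm the checkpoint landed in the volume, you can list its contents with a throwaway container. A minimal sketch, assuming the volume name used above:
```Shell
# Should show sd-v1-4.ckpt (~4 GB) inside the volume
docker run --rm --platform linux/arm64 --mount source=my-vol,target=/data alpine ls -lh /data
```
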
### Setup
```Shell
# Set the fork you want to use.
GITHUB_STABLE_DIFFUSION="https://github.com/santisbon/stable-diffusion.git"

git clone $GITHUB_STABLE_DIFFUSION
cd stable-diffusion
chmod +x entrypoint.sh
# Download the Miniconda installer. We'll need it at build time.
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-aarch64.sh -O anaconda.sh
```

Build the Docker image. Give it any tag ```-t``` that you want.
Tip: Make sure your shell session has the env variable set (above) with ```echo $GITHUB_STABLE_DIFFUSION```.
```Shell
docker build -t santisbon/stable-diffusion \
--build-arg gsd=$GITHUB_STABLE_DIFFUSION \
--build-arg sdreq="requirements-linux-arm64.txt" \
.
```

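To confirm the image was built and tagged, this optional one-liner (assuming the tag used above) lists it:
```Shell
docker images santisbon/stable-diffusion
```
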
Run a container using your built image, e.g.:
```Shell
docker run -it \
--rm \
--platform linux/arm64 \
--name stable-diffusion \
--hostname stable-diffusion \
--mount source=my-vol,target=/data \
--expose 9090 \
--publish 9090:9090 \
santisbon/stable-diffusion
```
Tip: Make sure you've created the Docker volume (above).

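A quick way to check that the volume exists before starting the container (an optional sketch, assuming the volume name used above):
```Shell
docker volume inspect my-vol --format '{{ .Mountpoint }}'
```
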
## Option B - Directly on Apple silicon
For Mac M1/M2. Read more about [Metal Performance Shaders (MPS) framework](https://developer.apple.com/documentation/metalperformanceshaders).

### Prerequisites
@@ -40,7 +95,7 @@ conda init zsh && source ~/.zshrc # or bash and .bashrc

```Shell
# Set the fork you want to use.
GITHUB_STABLE_DIFFUSION="https://github.com/santisbon/stable-diffusion.git"

git clone $GITHUB_STABLE_DIFFUSION
cd stable-diffusion
@@ -53,8 +108,8 @@ ln -s "$PATH_TO_CKPT/sd-v1-4.ckpt" models/ldm/stable-diffusion-v1/model.ckpt

# When path exists, pip3 will (w)ipe.
# restrict the Conda environment to only use ARM packages. M1/M2 is ARM-based. You could also conda install nomkl.
PIP_EXISTS_ACTION="w"
CONDA_SUBDIR="osx-arm64"
conda env create -f environment-mac.yaml && conda activate ldm
```

@@ -63,92 +118,15 @@ You can verify you're in the virtual environment by looking at which executable
type python3
```

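You can also check that the interpreter in the environment is an arm64 build. A small optional sketch; it only assumes ```python3``` resolves to the Conda environment's Python:
```Shell
python3 -c "import platform; print(platform.machine())"  # expect arm64
```
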
Face Restoration and Upscaling
```Shell
# by default expected in a sibling directory to stable-diffusion
cd .. && git clone https://github.com/TencentARC/GFPGAN.git && cd GFPGAN

# basicsr: used for training and inference. facexlib: face detection / face restoration helper.
pip3 install basicsr facexlib \
&& pip3 install -r requirements.txt

python3 setup.py develop
pip3 install realesrgan # to enhance the background (non-face) regions and do upscaling
@@ -156,24 +134,26 @@ pip3 install realesrgan # to enhance the background (non-face) regions and do upscaling
wget https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.3.pth -P experiments/pretrained_models

cd ../stable-diffusion
```

You only need to do this once. If we don't preload models, model files will be downloaded from the Internet when you run ```dream.py```.
```Shell
python3 scripts/preload_models.py
```

# Step 3 - Usage (time to have fun)

## Startup
If you're on a Linux container, the ```dream``` script is started automatically and the output directory is set to the Docker volume you created earlier.

If you're directly on macOS, follow these startup instructions.
With the Conda environment activated (```conda activate ldm```), run the interactive interface that combines the functionality of the original txt2img and img2img scripts.
Use the more accurate but VRAM-intensive full-precision math, because half precision requires autocast and won't work.
```Shell
# If on a MacBook
python3 scripts/dream.py --full_precision
# By default the images are saved in outputs/img-samples/.
```

You'll get the script's prompt. You can see the available options or quit.
@@ -183,7 +163,7 @@ dream> q
```

## Text to Image
For quick (but rough) results, test with 5 steps (default 50) and 1 sample image. This will let you know that everything is set up correctly.
Then increase steps to 100 or more for good (but slower) results.
The prompt can be in quotes or not.
```Shell
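# Hypothetical quick test at the dream> prompt (illustrative prompt; 5 steps, flag style as in the example below):
dream> "banana sushi" --steps 5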
@@ -194,11 +174,11 @@ dream> "woman closeup highly detailed" --steps 150 --seed -1 -G 0.75
# TODO: example for upscaling.
```
You'll need to experiment to see if face restoration is making it better or worse for your specific prompt.
The -U option for upscaling has an [Issue](https://github.com/lstein/stable-diffusion/issues/297).

If you're on a container, the output is set to the Docker volume. You can copy it wherever you want.
You can download it from the Docker Desktop app: Volumes, my-vol, data.
Or you can copy it from your Mac terminal. Keep in mind ```docker cp``` can't expand ```*.png```, so you'll need to specify the image file name:
```Shell
# On your host MacBook (you can use the name of any container that mounted the volume)
docker cp dummy:/data/000001.928403745.png /Users/<your-user>/Pictures
```
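
If you want everything in the volume rather than a single file, ```docker cp``` can copy a whole directory even though it can't expand globs. A small sketch, assuming the same ```dummy``` container and volume as above:
```Shell
# Copies the contents of /data (model checkpoint included) into ./sd-outputs on the host
docker cp dummy:/data/. ./sd-outputs
```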

entrypoint.sh
@@ -1,8 +1,10 @@
```Shell
#!/bin/bash
# Link the model file from the Docker volume into the location the scripts expect.
ln -sf /data/sd-v1-4.ckpt /stable-diffusion/models/ldm/stable-diffusion-v1/model.ckpt
cd /stable-diffusion
conda activate ldm

# Start the dream script, writing output to the Docker volume.
# Any arguments passed after the image name on `docker run` are forwarded to dream.py.
if [ $# -eq 0 ]; then
    python3 scripts/dream.py --full_precision -o /data
else
    python3 scripts/dream.py --full_precision -o /data "$@"
fi
```
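
Because the entrypoint forwards its arguments to ```dream.py```, you can append flags to ```docker run```. A hypothetical example, assuming your fork's ```dream.py``` accepts a ```--web``` flag to serve the web interface on the 9090 port published earlier:
```Shell
docker run -it --rm --platform linux/arm64 \
--mount source=my-vol,target=/data \
--publish 9090:9090 \
santisbon/stable-diffusion --web
```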