Table of Contents
* [Web Interface](#web-interface)
* [Notes](#notes)

Go to [Hugging Face](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original), and click "Access repository" to download ```sd-v1-4.ckpt``` (~4 GB) to ```~/Downloads```.
You'll need to create an account but it's quick and free.
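
Once the download finishes, you can sanity-check it from a terminal (optional; this just confirms the file is there and roughly the expected size):

```Shell
# Should show a single file of roughly 4 GB
ls -lh ~/Downloads/sd-v1-4.ckpt
```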
# Installation
## Option 1 - Directly on Apple silicon
```Shell
conda init zsh && source ~/.zshrc # or bash and .bashrc
```
### Setup
```Shell
# Set the fork you want to use.
GITHUB_STABLE_DIFFUSION=https://github.com/santisbon/stable-diffusion.git
git clone $GITHUB_STABLE_DIFFUSION
cd stable-diffusion
mkdir -p models/ldm/stable-diffusion-v1/
```
Then set up the environment:
```Shell
PATH_TO_CKPT="$HOME/Downloads" # or wherever you saved sd-v1-4.ckpt
ln -s "$PATH_TO_CKPT/sd-v1-4.ckpt" models/ldm/stable-diffusion-v1/model.ckpt
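
# Optional check (not required): the link should list as a symlink pointing at the ~4 GB checkpoint
ls -lh models/ldm/stable-diffusion-v1/model.ckpt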
```

## Option 2 - On a Linux container

You can't access the Macbook M1/M2 GPU cores from the Docker containers.
### Prerequisites
[Install Docker](https://gist.github.com/santisbon/2165fd1c9aaa1f7974f424535d3756f7#install-2)

On the Docker Desktop app, go to Preferences, Resources, Advanced. Increase the CPUs and Memory to avoid this [Issue](https://github.com/lstein/stable-diffusion/issues/342). You may need to increase Swap and Disk image size too.
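
You can check what the Docker VM is currently allocated from a terminal (optional; ```docker info``` is standard Docker, not specific to this fork):

```Shell
# Shows the CPUs and memory allocated to the Docker VM
docker info --format 'CPUs: {{.NCPU}}, Memory: {{.MemTotal}} bytes'
```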
Create a Docker volume for the downloaded model file:
```Shell
docker volume create my-vol
```
Populate the volume using a lightweight Linux container. You just need to create the container with the mountpoint; no need to run it.
```Shell
docker create --platform linux/arm64 --name dummy --mount source=my-vol,target=/data alpine # or arm64v8/alpine
cd ~/Downloads # or wherever you saved sd-v1-4.ckpt
docker cp sd-v1-4.ckpt dummy:/data
```
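
If you want to confirm the model file landed in the volume, you can list it from a throwaway container (an optional check, reusing the same alpine image):

```Shell
docker run --rm --mount source=my-vol,target=/data alpine ls -lh /data
```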
### Setup
Start a container for Stable Diffusion. The container's port 9090 is mapped to the host's port 80 so you'll be able to use the web interface from your Mac.
```Shell
docker run -it \
--platform linux/arm64 \
--name stable-diffusion \
--hostname stable-diffusion \
--mount source=my-vol,target=/data \
--expose 9090 \
--publish 80:9090 \
debian
# or arm64v8/debian
```
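
If you exit this container later, reattach to the same one instead of creating a new container (plain Docker, nothing specific to this fork):

```Shell
docker start -ai stable-diffusion
```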
You're now on the container.
```Shell
# Set the fork you want to use
GITHUB_STABLE_DIFFUSION="-b docker-apple-silicon https://github.com/santisbon/stable-diffusion.git" \
&& apt update && apt upgrade -y \
&& apt install -y \
git \
python3-pip \
python3 \
wget

# you won't need to close and reopen your terminal after this because we'll source our .<shell>rc file
cd /data && wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-aarch64.sh -O anaconda.sh \
&& chmod +x anaconda.sh && bash anaconda.sh -b -u -p /anaconda && /anaconda/bin/conda init bash && source ~/.bashrc
# update conda
conda update -y -n base -c defaults conda

cd / && git clone $GITHUB_STABLE_DIFFUSION && cd stable-diffusion
PIP_EXISTS_ACTION=w
CONDA_SUBDIR=osx-arm64

# Create the environment, activate it, install requirements.
conda create -y --name ldm && conda activate ldm \
&& pip3 install -r requirements-linux-arm64.txt
# Only need to do this once (ok twice if you decide to add face restoration and upscaling):
python3 scripts/preload_models.py
mkdir -p models/ldm/stable-diffusion-v1 \
```

## Face Restoration and Upscaling

These steps apply whether you're directly on macOS or in a Linux container.
### Setup
```Shell
# If you're on a Linux container
apt install -y libgl1-mesa-glx libglib2.0-0

# by default expected in a sibling directory to stable-diffusion
cd .. && git clone https://github.com/TencentARC/GFPGAN.git && cd GFPGAN
pip3 install realesrgan # to enhance the background (non-face) regions and do upscaling
wget https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.3.pth -P experiments/pretrained_models
cd ../stable-diffusion
# if we don't preload models it will download model files from the Internet the first time you run dream.py with GFPGAN and Real-ESRGAN turned on.
python3 scripts/preload_models.py
```
# Usage
## Startup
With the Conda environment activated (```conda activate ldm```), run the interactive script.
Use the more accurate but VRAM-intensive full-precision math, because half-precision requires autocast and won't work.
By default the images are saved in ```outputs/img-samples/```.
If you're on a Docker container, set the output dir to the Docker volume you created.
```Shell
# If on Macbook
python3 scripts/dream.py --full_precision
# If on Linux container
python3 scripts/dream.py --full_precision -o /data
```
You'll get the script's prompt. You can see available options or quit.
```Shell
dream> q
```
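
To see the full list of launch options up front, the script should also accept the usual ```--help``` flag (an assumption: it uses a standard argparse-style command-line interface):

```Shell
python3 scripts/dream.py --help
```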
## Text to Image
For quick (but very rough) results test with 5 steps (default 50) and 1 sample image. This will let you know that everything is set up correctly.
Then increase steps to 100 or more for good (but slower) results.
The prompt can be in quotes or not.
```Shell
dream> The hulk fighting with sheldon cooper -s5 -n1
dream> "woman closeup highly detailed" -s 150
# Reuse previous seed and apply face restoration (if you installed GFPGAN)
dream> "woman closeup highly detailed" --steps 150 --seed -1 -G 0.75
```
You'll need to experiment to see if face restoration is making it better or worse for your specific prompt.
The -U option for upscaling has an [Issue](https://github.com/lstein/stable-diffusion/issues/297) on Mac.

If you're on a container and set the output to the Docker volume (or moved it there with ```mv outputs/img-samples/ /data/```) you can copy it wherever you want.
You can download it from the Docker Desktop app, Volumes, my-vol, data.
Or you can copy it from your terminal. Keep in mind ```docker cp``` can't expand ```*.png``` so you'll need to specify the image file name:
```Shell
# On your host Macbook (you can use the name of any container that mounted the volume)
docker cp dummy:/data/000001.928403745.png /Users/<your-user>/Pictures
```
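
If you want everything at once, ```docker cp``` can still copy a directory's contents even though it can't expand wildcards (standard Docker syntax; the trailing ```/.``` copies the contents rather than the folder itself):

```Shell
docker cp dummy:/data/. /Users/<your-user>/Pictures/
```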
## Image to Image
Make sure your input image size dimensions are multiples of 64 e.g. 512x512. Otherwise you'll get an error.
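
On macOS you can check and fix the dimensions with the built-in ```sips``` tool (an optional helper; the file name here is just an example):

```Shell
# Print the current dimensions
sips -g pixelWidth -g pixelHeight ~/Pictures/sketch-mountains-input.jpg
# Resample to exactly 512x512 (sips takes height then width)
sips -z 512 512 ~/Pictures/sketch-mountains-input.jpg --out ~/Pictures/sketch-mountains-512.jpg
```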
If you're on a Docker container, copy your input image into the Docker volume:
```Shell
docker cp /Users/<your-user>/Pictures/sketch-mountains-input.jpg dummy:/data/
```
Try it out generating an image (or 4).
The ```dream``` script needs absolute paths to find the image so don't use ```~```.
```Shell
# If you're on your Macbook
dream> "A fantasy landscape, trending on artstation" -I /Users/<your-user>/Pictures/sketch-mountains-input.jpg --strength 0.8 --steps 100 -n4
# If you're on a Linux container on your Macbook
dream> "A fantasy landscape, trending on artstation" -I /data/sketch-mountains-input.jpg --strength 0.75 --steps 100 -n1
```
## Web Interface
You can use the ```dream``` script with a graphical web interface. Start the web server with:
```Shell
python3 scripts/dream.py --full_precision --web
```
If it's running on your Mac, point your Mac web browser to http://127.0.0.1:9090. If it's running in the Linux container set up above, use http://127.0.0.1 instead, since the container's port 9090 is published on the host's port 80.

Press Control-C at the command line to stop the web server.
## Notes