Optimized Docker build with support for external working directory (#1544)
* add docker build optimized for size; do not copy models to image. Useful for cloud deployments; attempts to utilize Docker layer caching as effectively as possible. Also some quick tools to help with building
* add workflow to build cloud img in ci
* push cloud image in addition to building
* (ci) also tag docker images with git SHA
* (docker) rework Makefile for easy cache population and local use
* support the new conda-less install; further optimize docker build
* (ci) clean up the build-cloud-img action
* improve the Makefile for local use
* move execution of invoke script from entrypoint to cmd, allowing the cmd to be overridden if needed (e.g. in Runpod)
* remove unnecessary copyright statements
* (docs) add a section on running InvokeAI in the cloud using Docker
* (docker) add patchmatch to the cloud image; improve build caching; simplify Makefile
* (docker) fix pip requirements path to use binary_installer directory
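The entrypoint-to-cmd change above is what makes the container's command overridable at `docker run` time. A minimal sketch, assuming the default command is the invoke script (the image tag mirrors the examples in the docs below):

```Shell
# Default command starts InvokeAI (assumed; set via CMD, not ENTRYPOINT)
docker run --rm -it local/invokeai:latest

# Because CMD is an override point, another command can be substituted,
# e.g. on Runpod or to run the configuration script:
docker run --rm -it local/invokeai:latest -c "python scripts/configure_invokeai.py"
```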
@@ -6,7 +6,7 @@ title: Docker

!!! warning "For end users"

    We highly recommend installing InvokeAI locally using [these instructions](index.md)

!!! tip "For developers"
@@ -16,6 +16,10 @@ title: Docker

    For general use, install locally to leverage your machine's GPU.

!!! tip "For running on a cloud instance/service"

    Check out the [Running InvokeAI in the cloud with Docker](#running-invokeai-in-the-cloud-with-docker) section below.

## Why containers?

They provide a flexible, reliable way to build and deploy InvokeAI. You'll also
@@ -36,7 +40,7 @@ development purposes it's fine. Once you're done with development tasks on your
laptop you can build for the target platform and architecture and deploy to
another environment with NVIDIA GPUs on-premises or in the cloud.

## Installation in a Linux container (desktop)

### Prerequisites
@@ -117,12 +121,91 @@ also do so.

```Shell
./docker-build/run.sh "banana sushi" -Ak_lms -S42 -s10
```

This would generate the legendary "banana sushi" with seed 42, the k_lms sampler, and 10 steps.

Find out more about the available CLI parameters at [features/CLI.md](../../features/CLI/#arguments).
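For instance, image dimensions and batch size can be passed the same way (a sketch; the exact flags may vary by version, so check the CLI docs linked above):

```Shell
# 512x512, 3 images for the same prompt, fixed seed, 50 steps
./docker-build/run.sh "banana sushi" -W512 -H512 -n3 -S42 -s50
```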

---

## Running InvokeAI in the cloud with Docker

We offer an optimized Ubuntu-based image that has been well-tested in cloud deployments. It also works well locally on Linux x86_64 systems with an Nvidia GPU, and it *may* work on Windows under WSL2 and on Intel Macs (not tested).

An advantage of this method is that it does not need any local setup or additional dependencies.

See the `docker-build/Dockerfile.cloud` file to familiarize yourself with the image's content.

### Prerequisites

- a `docker` runtime
- `make` (optional, but convenient)
- a Huggingface token to download models, or an existing InvokeAI runtime directory from a previous installation

Neither local Python nor any other dependencies are required. If you don't have `make` (part of `build-essential` on Ubuntu), or do not wish to install it, the commands from the `docker-build/Makefile` are readily adaptable to be executed directly.
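Before building, it can help to verify the prerequisites are in place (an optional sketch, not part of the original instructions; the CUDA image tag is only an example):

```Shell
docker --version
make --version    # optional

# For GPU use, confirm the NVIDIA container runtime works:
docker run --rm --gpus=all nvidia/cuda:11.8.0-base-ubuntu22.04 nvidia-smi
```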

### Building and running the image locally

1. Clone this repo and `cd docker-build`.
1. `make build` - this will build the image. (This does *not* require a GPU-capable system.)
1. _(skip this step if you already have a complete InvokeAI runtime directory)_
    - `make configure` (this does *not* require a GPU-capable system)
    - this will create a local cache of models and configs (a.k.a. the _runtime dir_)
    - enter your Huggingface token when prompted
1. `make web`
1. Open the `http://localhost:9090` URL in your browser, and enjoy the banana sushi!

To use InvokeAI on the CLI, run `make cli`. To open a Bash shell in the container for arbitrary advanced use, run `make shell`.

#### Building and running without `make`

(Feel free to adapt paths such as `${HOME}/invokeai` to your liking, and modify the CLI arguments as necessary.)

!!! example "Build the image and configure the runtime directory"

    ```Shell
    cd docker-build

    DOCKER_BUILDKIT=1 docker build -t local/invokeai:latest -f Dockerfile.cloud ..

    docker run --rm -it -v ${HOME}/invokeai:/mnt/invokeai local/invokeai:latest -c "python scripts/configure_invokeai.py"
    ```

!!! example "Run the web server"

    ```Shell
    docker run --runtime=nvidia --gpus=all --rm -it -v ${HOME}/invokeai:/mnt/invokeai -p9090:9090 local/invokeai:latest
    ```

Access the Web UI at http://localhost:9090.
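To confirm from a terminal that the server is listening (a convenience check, not from the original docs):

```Shell
curl -I http://localhost:9090
```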

!!! example "Run the InvokeAI interactive CLI"

    ```Shell
    docker run --runtime=nvidia --gpus=all --rm -it -v ${HOME}/invokeai:/mnt/invokeai local/invokeai:latest -c "python scripts/invoke.py"
    ```

### Running the image in the cloud

This image works anywhere you can run a container with a mounted Docker volume. You may either build this image on a cloud instance, or build it locally and push it to your Docker registry. To run this manually on a cloud instance (such as an AWS EC2, GCP, or Azure VM):

1. Build this image either in the cloud (you'll need to pull the repo) or locally.
1. `docker tag` it as `your-registry/invokeai` and push it to your registry (e.g. Dockerhub), as shown below.
1. `docker pull` it on your cloud instance.
1. Configure the runtime directory as in the example above, using the `docker run ... configure_invokeai.py` command.
1. Use either of the `docker run` commands above, substituting your own image name.
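For example (a sketch; `your-registry/invokeai` is a placeholder to adapt):

```Shell
# On the build machine:
docker tag local/invokeai:latest your-registry/invokeai:latest
docker push your-registry/invokeai:latest

# On the cloud instance:
docker pull your-registry/invokeai:latest
```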

To run this on Runpod, please refer to the following Runpod template: https://www.runpod.io/console/gpu-secure-cloud?template=vm19ukkycf (you need a Runpod subscription). When launching the template, feel free to set the image to pull your own build.

The template's `README` provides ample detail, but at a high level, the process is as follows:

1. Create a pod using this Docker image.
1. Ensure the pod has an `INVOKEAI_ROOT=<path_to_your_persistent_volume>` environment variable, and that it corresponds to your pod's persistent volume mount path (see the sketch after this list).
1. Run the pod with `sleep infinity` as the Docker command.
1. Use Runpod's basic SSH to connect to the pod, and run the `python scripts/configure_invokeai.py` script.
1. Stop the pod, and change the Docker command to `python scripts/invoke.py --web --host 0.0.0.0`.
1. Run the pod again, connect to your pod on HTTP port 9090, and enjoy the banana sushi!
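Outside of Runpod's UI, the equivalent invocation would look roughly like this (a sketch, assuming the image honors `INVOKEAI_ROOT` as the steps above imply; the image name and paths are placeholders):

```Shell
docker run --runtime=nvidia --gpus=all --rm -it \
    -e INVOKEAI_ROOT=/mnt/invokeai \
    -v ${HOME}/invokeai:/mnt/invokeai \
    -p 9090:9090 \
    your-registry/invokeai -c "python scripts/invoke.py --web --host 0.0.0.0"
```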

Running on other cloud providers such as Vast.ai will likely work in a similar fashion.

---

!!! warning "Deprecated"

    From here on you will find the previous Docker docs, which will still

@@ -135,12 +218,12 @@ also do so.

If you're on a **Linux container** the `invoke` script is **automatically
started** and the output dir set to the Docker volume you created earlier.

If you're **directly on macOS, follow these startup instructions**.
With the Conda environment activated (`conda activate ldm`), run the interactive
interface that combines the functionality of the original scripts `txt2img` and
`img2img`:
Use the more accurate but VRAM-intensive full-precision math, because
half-precision requires autocast and won't work.
By default the images are saved in `outputs/img-samples/`.

```Shell
@@ -157,8 +240,8 @@ invoke> q

### Text to Image

For quick (but bad) image results, test with 5 steps (default 50) and 1 sample
image. This will let you know that everything is set up correctly.
Then increase steps to 100 or more for good (but slower) results.
The prompt can be in quotes or not.

```Shell
@@ -172,8 +255,8 @@ You'll need to experiment to see if face restoration is making it better or
worse for your specific prompt.

If you're on a container the output is set to the Docker volume. You can copy it
wherever you want.
You can download it from the Docker Desktop app (Volumes → my-vol → data).
Or you can copy it from your Mac terminal. Keep in mind `docker cp` can't expand
`*.png`, so you'll need to specify the image file name.
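For example (the container name and file path are placeholders):

```Shell
# docker cp does not expand globs, so name the file explicitly:
docker cp my-invokeai-container:/data/img-samples/000001.png ./
```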