Merge branch 'main' into dev/diffusers

This commit is contained in:
Kevin Turner
2022-12-14 08:25:03 -08:00
committed by GitHub
4 changed files with 25 additions and 0 deletions


@@ -5,9 +5,11 @@ volumename=${VOLUMENAME:-${project_name}_data}
arch=${ARCH:-x86_64}
platform=${PLATFORM:-Linux/${arch}}
invokeai_tag=${INVOKEAI_TAG:-${project_name}:${arch}}
gpus=${GPU_FLAGS:+--gpus=${GPU_FLAGS}}
export project_name
export volumename
export arch
export platform
export invokeai_tag
export gpus
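The `gpus=${GPU_FLAGS:+--gpus=${GPU_FLAGS}}` line relies on the shell's `:+` alternate-value expansion: the variable expands to a `--gpus=` flag only when `GPU_FLAGS` is set and non-empty, and to nothing otherwise. A minimal sketch of the behavior:

```shell
# ${VAR:+alt}: expands to "alt" iff VAR is set and non-empty.
GPU_FLAGS=all
gpus=${GPU_FLAGS:+--gpus=${GPU_FLAGS}}
echo "set:   '$gpus'"     # set:   '--gpus=all'

unset GPU_FLAGS
gpus=${GPU_FLAGS:+--gpus=${GPU_FLAGS}}
echo "unset: '$gpus'"     # unset: ''
```

Because an unset `GPU_FLAGS` yields an empty string, the unquoted `$gpus` later simply disappears from the `docker run` command line instead of passing an empty argument.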


@@ -17,4 +17,5 @@ docker run \
--mount="source=$volumename,target=/data" \
--publish=9090:9090 \
--cap-add=sys_nice \
$gpus \
"$invokeai_tag" ${1:+$@}
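The trailing `${1:+$@}` is the same `:+` idiom applied to the script's positional parameters: all arguments are forwarded to the container only when at least one was given, so `docker run` sees no stray empty word otherwise. A sketch, with a plain `echo` standing in for `docker run`:

```shell
# ${1:+$@}: expands to all positional args iff $1 is set.
set -- invoke.py --web        # simulate calling run.sh with arguments
echo docker run img ${1:+$@}  # prints: docker run img invoke.py --web

set --                        # simulate calling run.sh with no arguments
echo docker run img ${1:+$@}  # prints: docker run img
```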


@@ -127,6 +127,27 @@ also do so.
---
## Running the container on your GPU
If you have an Nvidia GPU, you can run the container on the GPU by passing an extra
environment variable, which makes generation much faster:
```bash
GPU_FLAGS=all ./docker-build/run.sh
```
This passes `--gpus all` to Docker so the container can use the GPU.
If you don't have a GPU (or your host is not yet set up to use it) you will see a message like this:
`docker: Error response from daemon: could not select device driver "" with capabilities: [[gpu]].`
The full range of supported `--gpus` values is documented here:
https://docs.docker.com/config/containers/resource_constraints/#gpu
For example, use `GPU_FLAGS=device=GPU-3a23c669-1f69-c64e-cf85-44e9b07e7a2a` to choose a specific device identified by its UUID.
## Running InvokeAI in the cloud with Docker
We offer an optimized Ubuntu-based image that has been well-tested in cloud deployments. Note: it also works well locally on Linux x86_64 systems with an Nvidia GPU. It *may* also work on Windows under WSL2 and on Intel Macs (untested).


@@ -315,6 +315,7 @@ class Generator:
        return blurry

    def get_caution_img(self):
        path = None
        if self.caution_img:
            return self.caution_img
        # Find the caution image. If we are installed in the package directory it will