(docker) add a README for the docker setup
Commit e9bc8254dd (parent 2a5737c146)

All commands are to be run from the `docker` directory: `cd docker`

#### Linux

1. Ensure buildkit is enabled in the Docker daemon settings (`/etc/docker/daemon.json`); a minimal example is shown after this list.
2. Install the `docker compose` plugin using your package manager, or follow a [tutorial](https://www.digitalocean.com/community/tutorials/how-to-install-and-use-docker-compose-on-ubuntu-22-04).
    - The deprecated `docker-compose` (hyphenated) CLI continues to work for now.
3. Ensure the Docker daemon is able to access the GPU; a quick check is shown after this list.
    - You may need to install [nvidia-container-toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html)
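
For step 1, buildkit is typically enabled with a small addition to `/etc/docker/daemon.json` (a minimal sketch; merge it into your existing settings rather than replacing the file):

```
{
  "features": {
    "buildkit": true
  }
}
```

For step 3, a common smoke test for GPU access (the CUDA image tag here is only an example) is:

```
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```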

#### macOS

1. Ensure Docker has at least 16GB RAM
2. Enable VirtioFS for file sharing
3. Enable `docker compose` V2 support

This is done via Docker Desktop preferences.
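
If you want to double-check the memory allocation from a terminal, `docker info` reports it (a quick sanity check, not a required step):

```
docker info | grep -i "total memory"
```
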
## Quickstart

1. Make a copy of `env.sample` and name it `.env` (`cp env.sample .env` (Mac/Linux) or `copy env.sample .env` (Windows)). Make changes as necessary. Set `INVOKEAI_ROOT` to an absolute path to:
    a. the desired location of the InvokeAI runtime directory, or
    b. an existing, v3.0.0-compatible runtime directory.
2. `docker compose up`
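
On Linux or macOS, the whole sequence might look like this (the `INVOKEAI_ROOT` path below is only an example):

```
cd docker
cp env.sample .env
# edit .env and set, for example, INVOKEAI_ROOT=/home/me/invokeai
docker compose up
```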

The image will be built automatically if needed.
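
If you want to force a rebuild (for example after pulling new code), the standard compose flag does it:

```
docker compose up --build
```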

The runtime directory (holding models and outputs) will be created in the location specified by `INVOKEAI_ROOT`. The default location is `~/invokeai`. The runtime directory will be populated with the base configs and models necessary to start generating.

### Use a GPU

The Docker daemon on the system must already be set up to use the GPU. In case o

## Customize

Check the `env.sample` file. It contains some environment variables for running in Docker. Copy it, name it `.env`, and fill it in with your own values. The next time you run `docker compose up`, your custom values will be used.

You can also set these values in `docker-compose.yml` directly, but `.env` will help avoid conflicts when code is updated.

Example (most values are optional):

```
INVOKEAI_ROOT=/Volumes/WorkDrive/invokeai
HUGGINGFACE_TOKEN=the_actual_token
CONTAINER_UID=1000
GPU_DRIVER=cuda
```
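
These variables are consumed by the compose file. As a rough illustration only (the service name and wiring below are assumptions, not the project's actual `docker-compose.yml`), a value such as `HUGGINGFACE_TOKEN` would be passed through along these lines:

```
services:
  invokeai:
    environment:
      # interpolated from .env in the same directory
      - HUGGINGFACE_TOKEN=${HUGGINGFACE_TOKEN}
```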

## Even Moar Customizing!

See the `docker-compose.yml` file. The `command` instruction can be uncommented and used to run arbitrary startup commands. Some examples below.

#### Turn off the NSFW checker

```
command:
  - invokeai
  - --no-nsfw_check
  - --web
  - --host 0.0.0.0
```

### Reconfigure the runtime directory

This can be used to download additional models from the supported model list.

In conjunction with `INVOKEAI_ROOT`, it can also be used to initialize a runtime directory:

```
command:
  # ...
  - --yes
```

Or install models:

```
command:
  - invokeai-model-install
```

#### Run in CLI mode

This container starts InvokeAI in web mode by default.

Override the `command` and run `docker compose up`:

```
command:
  - invoke
```

Then attach to the container from another terminal:

```
$ docker attach $(docker compose ps invokeai -q)

invoke>
```

Enjoy using the `invoke>` prompt. To detach from the container, type `Ctrl+P` followed by `Ctrl+Q` (this is the escape sequence).