Docker image update: ubuntu23.04+python3.11 (#4953)

## What type of PR is this? (check all applicable)

- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [x] Optimization
- [x] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [x] Yes
- [ ] No


## Description

- update the Docker image to an `ubuntu:23.04` base with `python3.11`
- use the newer PyTorch wheel with CUDA 12.1 support
- correct the `docker compose` CLI invocation in the shell script wrappers and docs (illustrated below)
- update / overhaul the Docker docs
- clean up obsolete lines in `.gitignore`
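
For reference, the wrapper scripts now call Compose as a Docker CLI plugin rather than through the deprecated standalone binary, e.g.:

```bash
# before (deprecated standalone CLI)
docker-compose build $build_args
# after (Compose v2 plugin, as used by the updated wrappers)
docker compose build $build_args
```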

## QA Instructions, Screenshots, Recordings

Follow the documentation changes, or simply:

```bash
cd docker
cp .env.sample .env
# Set your INVOKEAI_ROOT in .env
docker compose up
```
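
Once the container is up, the UI should be reachable at [http://localhost:9090](http://localhost:9090) per the updated docs. To sanity-check that the CUDA 12.1 wheel sees the GPU, a quick probe along these lines should work; the `invokeai` service name here is an assumption, so substitute whatever `docker-compose.yml` actually defines:

```bash
# service name "invokeai" is assumed -- adjust to match docker-compose.yml
docker compose exec invokeai python3 -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```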

## Added/updated tests?

- [ ] Yes
- [x] No : N/A

Commit `7b066681f0` by Millun Atluri on 2023-10-20 14:30:53 +11:00, committed by GitHub.

## Files changed

8 changed files with 70 additions and 257 deletions.

### `.gitignore` (vendored)

````diff
@@ -1,8 +1,5 @@
 .idea/
-# ignore the Anaconda/Miniconda installer used while building Docker image
-anaconda.sh
 # Byte-compiled / optimized / DLL files
 __pycache__/
 *.py[cod]
@@ -136,12 +133,10 @@ celerybeat.pid
 # Environments
 .env
-.venv
+.venv*
 env/
 venv/
 ENV/
-env.bak/
-venv.bak/
 # Spyder project settings
 .spyderproject
@@ -186,11 +181,6 @@ cython_debug/
 .scratch/
 .vscode/
-# ignore environment.yml and requirements.txt
-# these are links to the real files in environments-and-requirements
-environment.yml
-requirements.txt
 # source installer files
 installer/*zip
 installer/install.bat
````

### `.env.sample`

````diff
@@ -1,13 +1,15 @@
 ## Make a copy of this file named `.env` and fill in the values below.
-## Any environment variables supported by InvokeAI can be specified here.
+## Any environment variables supported by InvokeAI can be specified here,
+## in addition to the examples below.
 # INVOKEAI_ROOT is the path to a path on the local filesystem where InvokeAI will store data.
 # Outputs will also be stored here by default.
 # This **must** be an absolute path.
 INVOKEAI_ROOT=
-HUGGINGFACE_TOKEN=
+# Get this value from your HuggingFace account settings page.
+# HUGGING_FACE_HUB_TOKEN=
-## optional variables specific to the docker setup
+## optional variables specific to the docker setup.
 # GPU_DRIVER=cuda
 # CONTAINER_UID=1000
````
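
For reference, a filled-in `.env` based on the new sample might look like the following; the path and token are placeholders, and only `INVOKEAI_ROOT` is strictly required:

```bash
# example values only -- INVOKEAI_ROOT must be an absolute path
INVOKEAI_ROOT=/home/me/invokeai
# HUGGING_FACE_HUB_TOKEN=hf_xxxxxxxxxxxxxxxx
GPU_DRIVER=cuda
CONTAINER_UID=1000
```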

### `Dockerfile`

````diff
@@ -2,7 +2,7 @@
 ## Builder stage
-FROM library/ubuntu:22.04 AS builder
+FROM library/ubuntu:23.04 AS builder
 ARG DEBIAN_FRONTEND=noninteractive
 RUN rm -f /etc/apt/apt.conf.d/docker-clean; echo 'Binary::apt::APT::Keep-Downloaded-Packages "true";' > /etc/apt/apt.conf.d/keep-cache
@@ -10,7 +10,7 @@ RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
     --mount=type=cache,target=/var/lib/apt,sharing=locked \
     apt update && apt-get install -y \
     git \
-    python3.10-venv \
+    python3-venv \
     python3-pip \
     build-essential
@@ -37,7 +37,7 @@ RUN --mount=type=cache,target=/root/.cache/pip \
     elif [ "$GPU_DRIVER" = "rocm" ]; then \
         extra_index_url_arg="--extra-index-url https://download.pytorch.org/whl/rocm5.4.2"; \
     else \
-        extra_index_url_arg="--extra-index-url https://download.pytorch.org/whl/cu118"; \
+        extra_index_url_arg="--extra-index-url https://download.pytorch.org/whl/cu121"; \
     fi &&\
     pip install $extra_index_url_arg \
         torch==$TORCH_VERSION \
@@ -70,7 +70,7 @@ RUN --mount=type=cache,target=/usr/lib/node_modules \
 #### Runtime stage ---------------------------------------
-FROM library/ubuntu:22.04 AS runtime
+FROM library/ubuntu:23.04 AS runtime
 ARG DEBIAN_FRONTEND=noninteractive
 ENV PYTHONUNBUFFERED=1
@@ -85,6 +85,7 @@ RUN apt update && apt install -y --no-install-recommends \
     iotop \
     bzip2 \
     gosu \
+    magic-wormhole \
     libglib2.0-0 \
     libgl1-mesa-glx \
     python3-venv \
@@ -94,10 +95,6 @@ RUN apt update && apt install -y --no-install-recommends \
     libstdc++-10-dev &&\
     apt-get clean && apt-get autoclean
-# globally add magic-wormhole
-# for ease of transferring data to and from the container
-# when running in sandboxed cloud environments; e.g. Runpod etc.
-RUN pip install magic-wormhole
 ENV INVOKEAI_SRC=/opt/invokeai
 ENV VIRTUAL_ENV=/opt/venv/invokeai
@@ -120,9 +117,7 @@ WORKDIR ${INVOKEAI_SRC}
 RUN cd /usr/lib/$(uname -p)-linux-gnu/pkgconfig/ && ln -sf opencv4.pc opencv.pc
 RUN python3 -c "from patchmatch import patch_match"
-# Create unprivileged user and make the local dir
-RUN useradd --create-home --shell /bin/bash -u 1000 --comment "container local user" invoke
-RUN mkdir -p ${INVOKEAI_ROOT} && chown -R invoke:invoke ${INVOKEAI_ROOT}
+RUN mkdir -p ${INVOKEAI_ROOT} && chown -R 1000:1000 ${INVOKEAI_ROOT}
 COPY docker/docker-entrypoint.sh ./
 ENTRYPOINT ["/opt/invokeai/docker-entrypoint.sh"]
````
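
The builder stage picks the PyTorch wheel index from the `GPU_DRIVER` value, now defaulting to the `cu121` index for CUDA. As a sketch of a direct AMD build, assuming the repository root is the build context (as the `COPY docker/docker-entrypoint.sh` instruction suggests) and that `GPU_DRIVER` is exposed as a build arg:

```bash
# sketch only -- the documented route is setting GPU_DRIVER=rocm in .env and running build.sh
docker build -f docker/Dockerfile --build-arg GPU_DRIVER=rocm -t invokeai:rocm .
```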

### Docker setup README

````diff
@@ -5,7 +5,7 @@ All commands are to be run from the `docker` directory: `cd docker`
 #### Linux
 1. Ensure builkit is enabled in the Docker daemon settings (`/etc/docker/daemon.json`)
-2. Install the `docker compose` plugin using your package manager, or follow a [tutorial](https://www.digitalocean.com/community/tutorials/how-to-install-and-use-docker-compose-on-ubuntu-22-04).
+2. Install the `docker compose` plugin using your package manager, or follow a [tutorial](https://docs.docker.com/compose/install/linux/#install-using-the-repository).
     - The deprecated `docker-compose` (hyphenated) CLI continues to work for now.
 3. Ensure docker daemon is able to access the GPU.
     - You may need to install [nvidia-container-toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html)
@@ -20,7 +20,6 @@ This is done via Docker Desktop preferences
 ## Quickstart
 1. Make a copy of `env.sample` and name it `.env` (`cp env.sample .env` (Mac/Linux) or `copy example.env .env` (Windows)). Make changes as necessary. Set `INVOKEAI_ROOT` to an absolute path to:
     a. the desired location of the InvokeAI runtime directory, or
     b. an existing, v3.0.0 compatible runtime directory.
@@ -42,20 +41,22 @@ The Docker daemon on the system must be already set up to use the GPU. In case o
 Check the `.env.sample` file. It contains some environment variables for running in Docker. Copy it, name it `.env`, and fill it in with your own values. Next time you run `docker compose up`, your custom values will be used.
-You can also set these values in `docker compose.yml` directly, but `.env` will help avoid conflicts when code is updated.
+You can also set these values in `docker-compose.yml` directly, but `.env` will help avoid conflicts when code is updated.
-Example (most values are optional):
+Example (values are optional, but setting `INVOKEAI_ROOT` is highly recommended):
-```
+```bash
 INVOKEAI_ROOT=/Volumes/WorkDrive/invokeai
 HUGGINGFACE_TOKEN=the_actual_token
 CONTAINER_UID=1000
 GPU_DRIVER=cuda
 ```
+Any environment variables supported by InvokeAI can be set here - please see the [Configuration docs](https://invoke-ai.github.io/InvokeAI/features/CONFIGURATION/) for further detail.
 ## Even Moar Customizing!
-See the `docker compose.yaml` file. The `command` instruction can be uncommented and used to run arbitrary startup commands. Some examples below.
+See the `docker-compose.yml` file. The `command` instruction can be uncommented and used to run arbitrary startup commands. Some examples below.
 ### Reconfigure the runtime directory
@@ -63,7 +64,7 @@ Can be used to download additional models from the supported model list
 In conjunction with `INVOKEAI_ROOT` can be also used to initialize a runtime directory
-```
+```yaml
 command:
   - invokeai-configure
   - --yes
@@ -71,7 +72,7 @@ command:
 Or install models:
-```
+```yaml
 command:
   - invokeai-model-install
 ```
````

### `build.sh`

````diff
@@ -5,7 +5,7 @@ build_args=""
 [[ -f ".env" ]] && build_args=$(awk '$1 ~ /\=[^$]/ {print "--build-arg " $0 " "}' .env)
-echo "docker-compose build args:"
+echo "docker compose build args:"
 echo $build_args
-docker-compose build $build_args
+docker compose build $build_args
````
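
The `awk` expression turns each non-empty `KEY=value` assignment in `.env` into a `--build-arg` flag (a bare assignment such as `INVOKEAI_ROOT=` is skipped, since the pattern requires a character after the `=`). With a hypothetical `.env` like the one below, the script effectively runs `docker compose build --build-arg GPU_DRIVER=rocm --build-arg CONTAINER_UID=1000`:

```bash
# hypothetical .env consumed by build.sh
GPU_DRIVER=rocm
CONTAINER_UID=1000
```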

### `docker-entrypoint.sh`

````diff
@@ -19,7 +19,7 @@ set -e -o pipefail
 # Default UID: 1000 chosen due to popularity on Linux systems. Possibly 501 on MacOS.
 USER_ID=${CONTAINER_UID:-1000}
-USER=invoke
+USER=ubuntu
 usermod -u ${USER_ID} ${USER} 1>/dev/null
 configure() {
````
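
With the dedicated `invoke` user gone, the entrypoint now remaps the `ubuntu` account that ships in the `ubuntu:23.04` base image (UID 1000) to `CONTAINER_UID`. To keep files under `INVOKEAI_ROOT` owned by your host user, something like the following should work, assuming `docker-compose.yml` passes `CONTAINER_UID` through as the `.env.sample` suggests:

```bash
# run from the docker directory; CONTAINER_UID may also be set in .env
CONTAINER_UID=$(id -u) docker compose up
```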

### `run.sh`

````diff
@@ -1,8 +1,11 @@
 #!/usr/bin/env bash
 set -e
+# This script is provided for backwards compatibility with the old docker setup.
+# it doesn't do much aside from wrapping the usual docker compose CLI.
 SCRIPTDIR=$(dirname "${BASH_SOURCE[0]}")
 cd "$SCRIPTDIR" || exit 1
-docker-compose up --build -d
-docker-compose logs -f
+docker compose up --build -d
+docker compose logs -f
````

### Docs: Installing with Docker

````diff
@@ -4,30 +4,31 @@ title: Installing with Docker
 # :fontawesome-brands-docker: Docker

-!!! warning "For most users"
-
-    We highly recommend to Install InvokeAI locally using [these instructions](INSTALLATION.md)
-
-!!! tip "For developers"
-
-    For container-related development tasks or for enabling easy
-    deployment to other environments (on-premises or cloud), follow these
-    instructions.
-
-    For general use, install locally to leverage your machine's GPU.
+!!! warning "macOS and AMD GPU Users"
+
+    We highly recommend to Install InvokeAI locally using [these instructions](INSTALLATION.md),
+    because Docker containers can not access the GPU on macOS.
+
+!!! warning "AMD GPU Users"
+
+    Container support for AMD GPUs has been reported to work by the community, but has not received
+    extensive testing. Please make sure to set the `GPU_DRIVER=rocm` environment variable (see below), and
+    use the `build.sh` script to build the image for this to take effect at build time.
+
+!!! tip "Linux and Windows Users"
+
+    For optimal performance, configure your Docker daemon to access your machine's GPU.
+    Docker Desktop on Windows [includes GPU support](https://www.docker.com/blog/wsl-2-gpu-support-for-docker-desktop-on-nvidia-gpus/).
+    Linux users should install and configure the [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html)

 ## Why containers?

-They provide a flexible, reliable way to build and deploy InvokeAI. You'll also
-use a Docker volume to store the largest model files and image outputs as a
-first step in decoupling storage and compute. Future enhancements can do this
-for other assets. See [Processes](https://12factor.net/processes) under the
-Twelve-Factor App methodology for details on why running applications in such a
-stateless fashion is important.
-
-You can specify the target platform when building the image and running the
-container. You'll also need to specify the InvokeAI requirements file that
-matches the container's OS and the architecture it will run on.
+They provide a flexible, reliable way to build and deploy InvokeAI.
+See [Processes](https://12factor.net/processes) under the Twelve-Factor App
+methodology for details on why running applications in such a stateless fashion is important.
+
+The container is configured for CUDA by default, but can be built to support AMD GPUs
+by setting the `GPU_DRIVER=rocm` environment variable at Docker image build time.

 Developers on Apple silicon (M1/M2): You
 [can't access your GPU cores from Docker containers](https://github.com/pytorch/pytorch/issues/81224)
@@ -36,6 +37,16 @@ development purposes it's fine. Once you're done with development tasks on your
 laptop you can build for the target platform and architecture and deploy to
 another environment with NVIDIA GPUs on-premises or in the cloud.

+## TL;DR
+
+This assumes properly configured Docker on Linux or Windows/WSL2. Read on for detailed customization options.
+
+```bash
+# docker compose commands should be run from the `docker` directory
+cd docker
+docker compose up
+```
+
 ## Installation in a Linux container (desktop)

 ### Prerequisites
@@ -58,222 +69,33 @@ a token and copy it, since you will need in for the next step.
 ### Setup
-Set the fork you want to use and other variables.
-!!! tip
-    I preffer to save my env vars
-    in the repository root in a `.env` (or `.envrc`) file to automatically re-apply
-    them when I come back.
-The build- and run- scripts contain default values for almost everything,
-besides the [Hugging Face Token](https://huggingface.co/settings/tokens) you
-created in the last step.
-Some Suggestions of variables you may want to change besides the Token:
+Set up your environmnent variables. In the `docker` directory, make a copy of `env.sample` and name it `.env`. Make changes as necessary.
+Any environment variables supported by InvokeAI can be set here - please see the [CONFIGURATION](../features/CONFIGURATION.md) for further detail.
+At a minimum, you might want to set the `INVOKEAI_ROOT` environment variable
+to point to the location where you wish to store your InvokeAI models, configuration, and outputs.
 <figure markdown>
 | Environment-Variable <img width="220" align="right"/> | Default value <img width="360" align="right"/> | Description |
 | ----------------------------------------------------- | ---------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| `HUGGING_FACE_HUB_TOKEN` | No default, but **required**! | This is the only **required** variable, without it you can't download the huggingface models |
-| `REPOSITORY_NAME` | The Basename of the Repo folder | This name will used as the container repository/image name |
-| `VOLUMENAME` | `${REPOSITORY_NAME,,}_data` | Name of the Docker Volume where model files will be stored |
-| `ARCH` | arch of the build machine | Can be changed if you want to build the image for another arch |
-| `CONTAINER_REGISTRY` | ghcr.io | Name of the Container Registry to use for the full tag |
-| `CONTAINER_REPOSITORY` | `$(whoami)/${REPOSITORY_NAME}` | Name of the Container Repository |
-| `CONTAINER_FLAVOR` | `cuda` | The flavor of the image to built, available options are `cuda`, `rocm` and `cpu`. If you choose `rocm` or `cpu`, the extra-index-url will be selected automatically, unless you set one yourself. |
-| `CONTAINER_TAG` | `${INVOKEAI_BRANCH##*/}-${CONTAINER_FLAVOR}` | The Container Repository / Tag which will be used |
-| `INVOKE_DOCKERFILE` | `Dockerfile` | The Dockerfile which should be built, handy for development |
-| `PIP_EXTRA_INDEX_URL` | | If you want to use a custom pip-extra-index-url |
+| `INVOKEAI_ROOT` | `~/invokeai` | **Required** - the location of your InvokeAI root directory. It will be created if it does not exist.
+| `HUGGING_FACE_HUB_TOKEN` | | InvokeAI will work without it, but some of the integrations with HuggingFace (like downloading from models from private repositories) may not work|
+| `GPU_DRIVER` | `cuda` | Optionally change this to `rocm` to build the image for AMD GPUs. NOTE: Use the `build.sh` script to build the image for this to take effect.
 </figure>
 #### Build the Image
-I provided a build script, which is located next to the Dockerfile in
-`docker/build.sh`. It can be executed from repository root like this:
+Use the standard `docker compose build` command from within the `docker` directory.
-```bash
-./docker/build.sh
-```
+If using an AMD GPU:
+a: set the `GPU_DRIVER=rocm` environment variable in `docker-compose.yml` and continue using `docker compose build` as usual, or
+b: set `GPU_DRIVER=rocm` in the `.env` file and use the `build.sh` script, provided for convenience
-The build Script not only builds the container, but also creates the docker
-volume if not existing yet.
 #### Run the Container
-After the build process is done, you can run the container via the provided
-`docker/run.sh` script
+Use the standard `docker compose up` command, and generally the `docker compose` [CLI](https://docs.docker.com/compose/reference/) as usual.
+Once the container starts up (and configures the InvokeAI root directory if this is a new installation), you can access InvokeAI at [http://localhost:9090](http://localhost:9090)
-```bash
-./docker/run.sh
-```
-When used without arguments, the container will start the webserver and provide
-you the link to open it. But if you want to use some other parameters you can
-also do so.
-!!! example "run script example"
-    ```bash
-    ./docker/run.sh "banana sushi" -Ak_lms -S42 -s10
-    ```
-    This would generate the legendary "banana sushi" with Seed 42, k_lms Sampler and 10 steps.
-    Find out more about available CLI-Parameters at [features/CLI.md](../../features/CLI/#arguments)
----
-## Running the container on your GPU
-If you have an Nvidia GPU, you can enable InvokeAI to run on the GPU by running
-the container with an extra environment variable to enable GPU usage and have
-the process run much faster:
-```bash
-GPU_FLAGS=all ./docker/run.sh
-```
-This passes the `--gpus all` to docker and uses the GPU.
-If you don't have a GPU (or your host is not yet setup to use it) you will see a
-message like this:
-`docker: Error response from daemon: could not select device driver "" with capabilities: [[gpu]].`
-You can use the full set of GPU combinations documented here:
-https://docs.docker.com/config/containers/resource_constraints/#gpu
-For example, use `GPU_FLAGS=device=GPU-3a23c669-1f69-c64e-cf85-44e9b07e7a2a` to
-choose a specific device identified by a UUID.
----
-!!! warning "Deprecated"
-    From here on you will find the the previous Docker-Docs, which will still
-    provide some usefull informations.
-## Usage (time to have fun)
-### Startup
-If you're on a **Linux container** the `invoke` script is **automatically
-started** and the output dir set to the Docker volume you created earlier.
-If you're **directly on macOS follow these startup instructions**. With the
-Conda environment activated (`conda activate ldm`), run the interactive
-interface that combines the functionality of the original scripts `txt2img` and
-`img2img`: Use the more accurate but VRAM-intensive full precision math because
-half-precision requires autocast and won't work. By default the images are saved
-in `outputs/img-samples/`.
-```Shell
-python3 scripts/invoke.py --full_precision
-```
-You'll get the script's prompt. You can see available options or quit.
-```Shell
-invoke> -h
-invoke> q
-```
-### Text to Image
-For quick (but bad) image results test with 5 steps (default 50) and 1 sample
-image. This will let you know that everything is set up correctly. Then increase
-steps to 100 or more for good (but slower) results. The prompt can be in quotes
-or not.
-```Shell
-invoke> The hulk fighting with sheldon cooper -s5 -n1
-invoke> "woman closeup highly detailed" -s 150
-# Reuse previous seed and apply face restoration
-invoke> "woman closeup highly detailed" --steps 150 --seed -1 -G 0.75
-```
-You'll need to experiment to see if face restoration is making it better or
-worse for your specific prompt.
-If you're on a container the output is set to the Docker volume. You can copy it
-wherever you want. You can download it from the Docker Desktop app, Volumes,
-my-vol, data. Or you can copy it from your Mac terminal. Keep in mind
-`docker cp` can't expand `*.png` so you'll need to specify the image file name.
-On your host Mac (you can use the name of any container that mounted the
-volume):
-```Shell
-docker cp dummy:/data/000001.928403745.png /Users/<your-user>/Pictures
-```
-### Image to Image
-You can also do text-guided image-to-image translation. For example, turning a
-sketch into a detailed drawing.
-`strength` is a value between 0.0 and 1.0 that controls the amount of noise that
-is added to the input image. Values that approach 1.0 allow for lots of
-variations but will also produce images that are not semantically consistent
-with the input. 0.0 preserves image exactly, 1.0 replaces it completely.
-Make sure your input image size dimensions are multiples of 64 e.g. 512x512.
-Otherwise you'll get `Error: product of dimension sizes > 2**31'`. If you still
-get the error
-[try a different size](https://support.apple.com/guide/preview/resize-rotate-or-flip-an-image-prvw2015/mac#:~:text=image's%20file%20size-,In%20the%20Preview%20app%20on%20your%20Mac%2C%20open%20the%20file,is%20shown%20at%20the%20bottom.)
-like 512x256.
-If you're on a Docker container, copy your input image into the Docker volume
-```Shell
-docker cp /Users/<your-user>/Pictures/sketch-mountains-input.jpg dummy:/data/
-```
-Try it out generating an image (or more). The `invoke` script needs absolute
-paths to find the image so don't use `~`.
-If you're on your Mac
-```Shell
-invoke> "A fantasy landscape, trending on artstation" -I /Users/<your-user>/Pictures/sketch-mountains-input.jpg --strength 0.75 --steps 100 -n4
-```
-If you're on a Linux container on your Mac
-```Shell
-invoke> "A fantasy landscape, trending on artstation" -I /data/sketch-mountains-input.jpg --strength 0.75 --steps 50 -n1
-```
-### Web Interface
-You can use the `invoke` script with a graphical web interface. Start the web
-server with:
-```Shell
-python3 scripts/invoke.py --full_precision --web
-```
-If it's running on your Mac point your Mac web browser to
-<http://127.0.0.1:9090>
-Press Control-C at the command line to stop the web server.
-### Notes
-Some text you can add at the end of the prompt to make it very pretty:
-```Shell
-cinematic photo, highly detailed, cinematic lighting, ultra-detailed, ultrarealistic, photorealism, Octane Rendering, cyberpunk lights, Hyper Detail, 8K, HD, Unreal Engine, V-Ray, full hd, cyberpunk, abstract, 3d octane render + 4k UHD + immense detail + dramatic lighting + well lit + black, purple, blue, pink, cerulean, teal, metallic colours, + fine details, ultra photoreal, photographic, concept art, cinematic composition, rule of thirds, mysterious, eerie, photorealism, breathtaking detailed, painting art deco pattern, by hsiao, ron cheng, john james audubon, bizarre compositions, exquisite detail, extremely moody lighting, painted by greg rutkowski makoto shinkai takashi takeuchi studio ghibli, akihiko yoshida
-```
-The original scripts should work as well.
-```Shell
-python3 scripts/orig_scripts/txt2img.py --help
-python3 scripts/orig_scripts/txt2img.py --ddim_steps 100 --n_iter 1 --n_samples 1 --plms --prompt "new born baby kitten. Hyper Detail, Octane Rendering, Unreal Engine, V-Ray"
-python3 scripts/orig_scripts/txt2img.py --ddim_steps 5 --n_iter 1 --n_samples 1 --plms --prompt "ocean" # or --klms
-```
````