## Justification

Closes issue #352. This update makes the Dockerfiles OCI compliant, making it easier to use Buildah or other image-building tools that require it.

## Implementation

This changes a few things, listed below:

* auto: The download container is switched to Alpine. The `git` container declared the `/git` directory as a volume, so all the files under `/git` were lost after each script invocation. Alpine is used later in the build process anyway, so switching to it should add no extra cost.
* auto: A "new" clone.sh script is copied into the container; it is basically just the previous clone script that was embedded in the Dockerfile.
* all: `<<EOF` heredoc blocks have been switched to `&& \` chains (see the before/after sketch below).
* all: I added NVIDIA_DRIVER_CAPABILITIES and NVIDIA_VISIBLE_DEVICES to expose my NVIDIA card. This is most likely a selinux/podman problem, but adding them shouldn't change anything with Docker.
* docker-compose: I added selinux labeling (sketched below). I tested this with real Docker (not just podman!) and it seems to work fine, though I suggest you try it too.

## Testing

Builds locally with Buildah.

Note: for caching to work properly, you still need to replace `/root/.cache/pip` with `/root/.cache/pip,Z` on selinux systems (see the snippet below).

Note: I was having some trouble running invoke. I thought it was this PR, but it's a known issue; see https://github.com/invoke-ai/InvokeAI/issues/3182

---------

Co-authored-by: AbdBarho <ka70911@gmail.com>
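Below is a minimal before/after sketch of the heredoc change mentioned in the Implementation list. The `RUN` contents are illustrative placeholders rather than lines from the actual diff; the point is that `<<EOF` heredocs are a BuildKit frontend feature, while `&& \` chains work with the classic Dockerfile parser that Buildah and similar OCI builders use.

```Dockerfile
# Before: BuildKit heredoc syntax (only BuildKit-aware builders accept this)
RUN <<EOF
apt-get update
apt-get install -y git
apt-get clean
EOF

# After: plain shell chaining, understood by any Dockerfile/OCI builder
RUN apt-get update && \
    apt-get install -y git && \
    apt-get clean
```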
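The docker-compose selinux labeling mentioned above comes down to adding the `z`/`Z` options to the bind-mounted volumes so the engine relabels them for the container. A hypothetical sketch with placeholder service, image, and path names, not the repo's actual compose file:

```yaml
services:
  invoke:
    image: sd-invoke        # placeholder name, for illustration only
    volumes:
      # ":z" relabels the bind mount with a shared selinux label;
      # ":Z" would give this container a private label instead
      - ./data:/data:z
      - ./output:/output:z
```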
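As the Testing note says, the pip cache mount needs a selinux relabel flag when building with Buildah on selinux hosts. Applied to the torch install line of the Dockerfile below, that replacement would look like this (shown for illustration only; the committed file keeps the plain cache target):

```Dockerfile
RUN --mount=type=cache,target=/root/.cache/pip,Z \
    pip install torch==1.13.1+cu117 torchvision --extra-index-url https://download.pytorch.org/whl/cu117
```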
# helper stage: download a prebuilt xformers wheel
FROM alpine:3.17 as xformers
RUN apk add --no-cache aria2
RUN aria2c -x 5 --dir / --out wheel.whl 'https://github.com/AbdBarho/stable-diffusion-webui-docker/releases/download/5.0.0/xformers-0.0.17.dev449-cp310-cp310-manylinux2014_x86_64.whl'


FROM python:3.10-slim

ENV DEBIAN_FRONTEND=noninteractive PIP_EXISTS_ACTION=w PIP_PREFER_BINARY=1

RUN --mount=type=cache,target=/root/.cache/pip pip install torch==1.13.1+cu117 torchvision --extra-index-url https://download.pytorch.org/whl/cu117

RUN apt-get update && apt-get install git -y && apt-get clean

RUN git clone https://github.com/invoke-ai/InvokeAI.git /stable-diffusion

WORKDIR /stable-diffusion

# pin InvokeAI to a known commit and install its CUDA requirements
RUN --mount=type=cache,target=/root/.cache/pip \
  git reset --hard f232068ab89bd80e4f5f3133dcdb62ea78f1d0f7 && \
  git config --global http.postBuffer 1048576000 && \
  egrep -v '^-e .' environments-and-requirements/requirements-lin-cuda.txt > req.txt && \
  pip install -r req.txt && \
  rm req.txt

# patch match:
# https://github.com/invoke-ai/InvokeAI/blob/main/docs/installation/INSTALL_PATCHMATCH.md
RUN \
  apt-get update && \
  # apt-get install build-essential python3-opencv libopencv-dev -y && \
  apt-get install make g++ libopencv-dev -y && \
  apt-get clean && \
  cd /usr/lib/x86_64-linux-gnu/pkgconfig/ && \
  ln -sf opencv4.pc opencv.pc

# check out the target branch/commit and install InvokeAI itself
ARG BRANCH=main SHA=6e0c6d9cc9f6bdbdefc4b9e94bc1ccde1b04aa42
RUN --mount=type=cache,target=/root/.cache/pip \
  git fetch && \
  git reset --hard && \
  git checkout ${BRANCH} && \
  git reset --hard ${SHA} && \
  pip install .

# install the prebuilt xformers wheel and verify that patchmatch imports
RUN --mount=type=cache,target=/root/.cache/pip \
  --mount=type=bind,from=xformers,source=/wheel.whl,target=/xformers-0.0.15-cp310-cp310-linux_x86_64.whl \
  pip install -U opencv-python-headless huggingface_hub triton /xformers-0.0.15-cp310-cp310-linux_x86_64.whl && \
  python3 -c "from patchmatch import patch_match"

RUN touch invokeai.init

COPY . /docker/

ENV NVIDIA_DRIVER_CAPABILITIES=compute,utility
ENV NVIDIA_VISIBLE_DEVICES=all
ENV PYTHONUNBUFFERED=1 ROOT=/stable-diffusion PYTHONPATH="${PYTHONPATH}:${ROOT}" PRELOAD=false CLI_ARGS="" HF_HOME=/root/.cache/huggingface

EXPOSE 7860

ENTRYPOINT ["/docker/entrypoint.sh"]
CMD invokeai --web --host 0.0.0.0 --port 7860 --config /docker/models.yaml --root_dir ${ROOT} --outdir /output/invoke ${CLI_ARGS}