mirror of https://github.com/invoke-ai/InvokeAI
c5aeb36230
With these changes, the Docker image can be built and executed successfully on hosts with AMD devices using ROCm acceleration. Previously, a ROCm-enabled version of torch would be installed, but later removed during installation of InvokeAI itself, because InvokeAI required a newer torch version than the one previously installed. The fix consists of multiple components:

* Update the hardcoded versions of torch and torchvision to the versions currently used in pyproject.toml, so that a new version need not be installed during installation of InvokeAI.
* Specify --extra-index-url on installation of InvokeAI so that even if a version mismatch occurs, the correct torch version is still installed. This also necessitates changing --index-url to --extra-index-url for the Torch repo; otherwise, non-torch dependencies would not be found (illustrated in the sketch below).
* In run.sh, build the image for the selected service.
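To make the index-url distinction concrete, here is a hedged sketch of the kind of pip invocations the second bullet describes. The version pins and the ROCm index URL are assumptions for illustration; the actual values live in the Dockerfile and pyproject.toml.

# Hypothetical sketch; the exact versions and index URL may differ.
#
# --index-url REPLACES PyPI as the package index, so InvokeAI's
# non-torch dependencies cannot be resolved from the torch repo:
#
#   pip install torch torchvision --index-url https://download.pytorch.org/whl/rocm5.4.2
#
# --extra-index-url keeps PyPI as the primary index and adds the torch
# repo as an additional one, so the ROCm torch wheels and ordinary PyPI
# dependencies both resolve:
pip install "torch==2.0.1" "torchvision==0.15.2" \
    --extra-index-url https://download.pytorch.org/whl/rocm5.4.2

# Passing the same flag while installing InvokeAI means that, even if the
# pinned torch version mismatches, pip can still pick the ROCm build of
# torch instead of falling back to the default CUDA wheels:
pip install InvokeAI --extra-index-url https://download.pytorch.org/whl/rocm5.4.2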
33 lines
742 B
Bash
Executable File
#!/usr/bin/env bash
set -e -o pipefail

run() {
  local scriptdir=$(dirname "${BASH_SOURCE[0]}")
  cd "$scriptdir" || exit 1

  local build_args=""
  local profile=""

  # Ensure .env exists, then derive docker build args and the GPU profile from it.
  touch .env
  build_args=$(awk '$1 ~ /=[^$]/ && $0 !~ /^#/ {print "--build-arg " $0 " "}' .env) &&
    profile="$(awk -F '=' '/GPU_DRIVER/ {print $2}' .env)"

  # Default to the nvidia profile when GPU_DRIVER is not set in .env.
  [[ -z "$profile" ]] && profile="nvidia"

  local service_name="invokeai-$profile"

  if [[ -n "$build_args" ]]; then
    printf "%s\n" "docker compose build args:"
    printf "%s\n" "$build_args"
  fi

  # Build the image for the selected service before starting it.
  docker compose build $build_args $service_name
  unset build_args

  printf "%s\n" "starting service $service_name"
  docker compose --profile "$profile" up -d "$service_name"
  docker compose logs -f
}

run
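For context on the awk-derived build args, a hypothetical .env (variable names other than GPU_DRIVER are illustrative) and what the script would do with it, assuming a matching rocm profile exists in docker-compose.yml:

# Hypothetical .env contents:
#
#   GPU_DRIVER=rocm
#   CONTAINER_UID=1000
#
# The awk filter above keeps non-comment assignment lines whose value is
# non-empty and does not begin with "$", emitting:
#
#   --build-arg GPU_DRIVER=rocm --build-arg CONTAINER_UID=1000
#
# profile resolves to "rocm", so the script would run:
#
#   docker compose build --build-arg GPU_DRIVER=rocm --build-arg CONTAINER_UID=1000 invokeai-rocm
#   docker compose --profile rocm up -d invokeai-rocm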