For the torch and torchvision libraries **only**, the installer will now
pass the pip `--force-reinstall` option. This is intended to fix cases
where a user ends up with a CPU-only version of torch and is then unable
to replace it.
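As a hedged sketch, the installer-side logic amounts to something like the following; the `pip_install` helper and the package lists are illustrative, not the project's actual installer code:

```python
import subprocess
import sys


def pip_install(packages: list[str], force_reinstall: bool = False) -> None:
    """Install packages with this interpreter's pip, optionally forcing a reinstall."""
    cmd = [sys.executable, "-m", "pip", "install"]
    if force_reinstall:
        cmd.append("--force-reinstall")
    cmd.extend(packages)
    subprocess.run(cmd, check=True)


# torch/torchvision are force-reinstalled so a previously installed
# CPU-only wheel gets replaced; everything else installs normally.
pip_install(["torch", "torchvision"], force_reinstall=True)
pip_install(["invokeai"])
```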
test-invoke-pip.yml:
- enable caching of pip dependencies in `actions/setup-python@v4`
- add workflow_dispatch trigger
- fix indentation in concurrency
- set env `PIP_USE_PEP517: '1'`
- cache python dependencies
- remove models cache (we are currently using 190.96 GB of the 10 GB limit
as of this writing)
- add step to set `INVOKEAI_OUTDIR`
- add outdir arg to invokeai
- fix path in archive results
model_manager.py:
- read files in chunks when calculating SHA (the Windows runner crashes
otherwise); see the sketch below
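A minimal sketch of chunked hashing, assuming SHA-256 via `hashlib`; the chunk size and function name here are illustrative:

```python
import hashlib
from pathlib import Path


def sha256_of_file(path: Path, chunk_size: int = 2**20) -> str:
    """Hash a file in 1 MiB chunks so the whole file never has to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()
```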
- Issue: if insufficient diffusers models are defined in models.yaml, the
frontend would crash ungracefully.
- It now emits appropriate error messages telling the user what the problem
is (see the sketch below).
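For illustration only, the shape of such a check might look like the sketch below; the function name, the assumed `format: diffusers` key in models.yaml, and the error wording are assumptions rather than the project's actual code:

```python
import sys
from pathlib import Path

import yaml  # PyYAML


def check_models_config(config_path: Path) -> None:
    """Exit with a readable message instead of crashing when models.yaml is unusable."""
    models = yaml.safe_load(config_path.read_text()) or {}
    diffusers = {
        name: cfg
        for name, cfg in models.items()
        if isinstance(cfg, dict) and cfg.get("format") == "diffusers"
    }
    if not diffusers:
        sys.exit(
            f"** No diffusers models are defined in {config_path}.\n"
            "** Please add at least one model (or run the model installer) and restart."
        )
```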
- don't build the frontend, due to complications with QEMU
- set pip cache dir
- add pip cache to all pip-related build steps
- don't lock pip cache
- update .dockerignore to exclude unneeded files
env.sh:
- move check for torch to CONTAINER_FLAVOR detection
Dockerfile:
- only mount `/var/cache/apt` for apt related steps
- remove `docker-clean` from `/etc/apt/apt.conf.d` for BuildKit cache
- remove apt-get clean for BuildKit cache
- only copy frontend to frontend-builder
- mount `/usr/local/share/.cache/yarn` in frontend-builder
- separate steps for yarn install and yarn build
- build pytorch in pyproject-builder
build.sh:
- prepare for installation with extras
This change allows a directory containing multiple models to be passed in
and imported (see the sketch below).
It ensures that diffusers directories will still work.
Some minor type issues were also fixed up.
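A rough sketch of how a directory import can distinguish single-file checkpoints from diffusers folders; the helper name and the set of checkpoint extensions are assumptions for illustration (a `model_index.json` file is what marks a diffusers pipeline directory):

```python
from pathlib import Path

CHECKPOINT_SUFFIXES = {".ckpt", ".safetensors"}


def find_models(root: Path) -> list[Path]:
    """Collect importable models directly under a directory.

    A subdirectory containing model_index.json is treated as a diffusers
    model; loose .ckpt/.safetensors files are treated as checkpoints.
    """
    found: list[Path] = []
    for entry in sorted(root.iterdir()):
        if entry.is_dir() and (entry / "model_index.json").exists():
            found.append(entry)  # diffusers directory
        elif entry.suffix.lower() in CHECKPOINT_SUFFIXES:
            found.append(entry)  # single-file checkpoint
    return found
```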
This allows the `--log_tokenization` option to be used as a command line
argument (or from `invokeai.init`), making it possible to view
tokenization information in the terminal when using the web interface.
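A minimal argparse sketch of exposing such a flag on the command line; this is illustrative, not the actual invokeai argument parser:

```python
import argparse

parser = argparse.ArgumentParser(description="illustrative InvokeAI-style options")
parser.add_argument(
    "--log_tokenization",
    action="store_true",
    help="print how each prompt is broken into tokens",
)

args = parser.parse_args(["--log_tokenization"])
if args.log_tokenization:
    print("tokenization logging enabled")
```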