consolidate environment files

- starting to remove unneeded entries and pins
- no longer require -e in front of github dependencies
- update setup.py with release number
- update manual installation instructions
Lincoln Stein 2022-11-08 03:50:07 +00:00
parent 2ad6ef355a
commit 734dacfbe9
17 changed files with 134 additions and 71 deletions
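The `-e` bullet concerns pip requirement syntax: `-e git+URL#egg=name` checks the source out into `src/` as an editable install (hence the `rm -rf src` precautions in the install docs), while plain `git+URL#egg=name` performs a regular install. A minimal sketch of the rewrite this commit applies by hand across the files below (`drop_editable` is a hypothetical helper, not part of the commit):

```python
# Illustration only: turn editable VCS requirements ("-e git+...")
# into regular ones ("git+..."), mirroring this commit's manual edits.
def drop_editable(line: str) -> str:
    """Strip a leading '-e ' from a VCS requirement line, if present."""
    stripped = line.strip()
    if stripped.startswith("-e git+"):
        return stripped[len("-e "):]
    return stripped

reqs = [
    "-e git+https://github.com/openai/CLIP.git@main#egg=clip",
    "transformers==4.21.3",
]
print([drop_editable(r) for r in reqs])
# → ['git+https://github.com/openai/CLIP.git@main#egg=clip', 'transformers==4.21.3']
```

Note the commit keeps `-e .` for the project itself; only the GitHub dependencies lose the flag.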


@@ -111,7 +111,7 @@ A number of caveats:
 the border.
 4. When using the `inpaint-1.5` model, you may notice subtle changes to the area
-within the original image. This is because the model performs an
+outside the masked region. This is because the model performs an
 encoding/decoding on the image as a whole. This does not occur with the
 standard model.


@@ -70,7 +70,7 @@ Some Suggestions of variables you may want to change besides the Token:
 | `HUGGINGFACE_TOKEN="hg_aewirhghlawrgkjbarug2"` | This is the only required variable; without it you can't get the checkpoint |
 | `ARCH=aarch64` | if you are using an ARM-based CPU |
 | `INVOKEAI_TAG=yourname/invokeai:latest` | the Container Repository / Tag which will be used |
-| `INVOKEAI_CONDA_ENV_FILE=environment-linux-aarch64.yml` | since environment.yml wouldn't work with aarch |
+| `INVOKEAI_CONDA_ENV_FILE=environments/environment-linux-aarch64.yml` | since environment.yml wouldn't work with aarch |
 | `INVOKEAI_GIT="-b branchname https://github.com/username/reponame"` | if you want to use your own fork |
 #### Build the Image
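A sketch of how the table's variables might feed a build script (the variable names come from the table; the defaulting logic and the echoed summary are assumptions, not the project's actual script):

```shell
#!/bin/sh
# Sketch only: resolve the build variables from the table above.
# HUGGINGFACE_TOKEN is the one variable without a sensible default.
resolve_build_env() {
    ARCH="${ARCH:-x86_64}"
    INVOKEAI_TAG="${INVOKEAI_TAG:-yourname/invokeai:latest}"
    INVOKEAI_CONDA_ENV_FILE="${INVOKEAI_CONDA_ENV_FILE:-environments/environment.yml}"
    if [ -z "$HUGGINGFACE_TOKEN" ]; then
        echo "HUGGINGFACE_TOKEN is required" >&2
        return 1
    fi
    echo "building $INVOKEAI_TAG for $ARCH using $INVOKEAI_CONDA_ENV_FILE"
}

HUGGINGFACE_TOKEN="${HUGGINGFACE_TOKEN:-hf_example_token}"  # placeholder token
resolve_build_env
```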


@@ -42,11 +42,22 @@ title: Manual Installation, Linux
 ```
 5. Use anaconda to copy necessary python packages, create a new python
-environment named `invokeai` and activate the environment.
+environment named `invokeai` and then activate the environment.
+!!! todo "For systems with a CUDA (Nvidia) card:"
 ```bash
 (base) rm -rf src # (this is a precaution in case there is already a src directory)
-(base) ~/InvokeAI$ conda env create
+(base) ~/InvokeAI$ conda env create -f environments/environment-cuda.yml
+(base) ~/InvokeAI$ conda activate invokeai
+(invokeai) ~/InvokeAI$
+```
+!!! todo "For systems with an AMD card (using ROCm driver):"
+```bash
+(base) rm -rf src # (this is a precaution in case there is already a src directory)
+(base) ~/InvokeAI$ conda env create -f environments/environment-rocm.yml
 (base) ~/InvokeAI$ conda activate invokeai
 (invokeai) ~/InvokeAI$
 ```
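With the environment files consolidated under `environments/`, the `-f` argument in the steps above varies only by GPU backend. A hypothetical helper that builds the command (the file names come from this commit; the helper itself is illustrative and not part of the repo):

```python
# Illustrative only: map a GPU backend to the conda environment file
# introduced by this commit, and build the matching create command.
ENV_FILES = {
    "cuda": "environments/environment-cuda.yml",
    "rocm": "environments/environment-rocm.yml",
}

def conda_create_command(backend: str) -> str:
    """Return the `conda env create` invocation for a backend."""
    if backend not in ENV_FILES:
        raise ValueError(f"unknown backend: {backend!r}")
    return f"conda env create -f {ENV_FILES[backend]}"

print(conda_create_command("rocm"))
# → conda env create -f environments/environment-rocm.yml
```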


@@ -52,23 +52,29 @@ in the wiki
 cd InvokeAI
 ```
-6. Run the following two commands:
-```batch title="step 6a"
-conda env create
-```
-```batch title="step 6b"
-conda activate invokeai
-```
+6. Run the following commands:
+!!! todo "For systems with a CUDA (Nvidia) card:"
+```bash
+rmdir src # (this is a precaution in case there is already a src directory)
+conda env create -f environments/environment-cuda.yml
+conda activate invokeai
+(invokeai)>
+```
+!!! todo "For systems with an AMD card (using ROCm driver):"
+```bash
+rmdir src # (this is a precaution in case there is already a src directory)
+conda env create -f environments/environment-rocm.yml
+conda activate invokeai
+(invokeai)>
+```
 This will install all python requirements and activate the "invokeai" environment
 which sets PATH and other environment variables properly.
-Note that the long form of the first command is `conda env create -f environment.yml`. If the
-environment file isn't specified, conda will default to `environment.yml`. You will need
-to provide the `-f` option if you wish to load a different environment file at any point.
 7. Load the big stable diffusion weights files and a couple of smaller machine-learning models:
 ```bash


@@ -13,33 +13,33 @@ dependencies:
 - cudatoolkit=11.6
 - pip:
   - albumentations==0.4.3
-  - opencv-python==4.5.5.64
-  - pudb==2019.2
-  - imageio==2.9.0
-  - imageio-ffmpeg==0.4.2
-  - pytorch-lightning==1.7.7
-  - omegaconf==2.2.3
-  - test-tube>=0.7.5
-  - streamlit==1.12.0
-  - send2trash==1.8.0
-  - pillow==9.2.0
-  - einops==0.3.0
-  - pyreadline3
-  - torch-fidelity==0.3.0
-  - transformers==4.21.3
-  - diffusers==0.6.0
-  - torchmetrics==0.7.0
-  - flask==2.1.3
-  - flask_socketio==5.3.0
-  - flask_cors==3.0.10
   - dependency_injector==4.40.0
+  - diffusers==0.6.0
+  - einops==0.3.0
   - eventlet
+  - flask==2.1.3
+  - flask_cors==3.0.10
+  - flask_socketio==5.3.0
   - getpass_asterisk
+  - imageio-ffmpeg==0.4.2
+  - imageio==2.9.0
   - kornia==0.6.0
-  - -e git+https://github.com/openai/CLIP.git@main#egg=clip
-  - -e git+https://github.com/CompVis/taming-transformers.git@master#egg=taming-transformers
-  - -e git+https://github.com/Birch-san/k-diffusion.git@mps#egg=k_diffusion
-  - -e git+https://github.com/invoke-ai/Real-ESRGAN.git#egg=realesrgan
-  - -e git+https://github.com/invoke-ai/GFPGAN.git#egg=gfpgan
-  - -e git+https://github.com/invoke-ai/clipseg.git@models-rename#egg=clipseg
+  - omegaconf==2.2.3
+  - opencv-python==4.5.5.64
+  - pillow==9.2.0
+  - pudb==2019.2
+  - pyreadline3
+  - pytorch-lightning==1.7.7
+  - send2trash==1.8.0
+  - streamlit==1.12.0
+  - taming-transformers-rom1504
+  - test-tube>=0.7.5
+  - torch-fidelity==0.3.0
+  - torchmetrics==0.7.0
+  - transformers==4.21.3
+  - git+https://github.com/openai/CLIP.git@main#egg=clip
+  - git+https://github.com/Birch-san/k-diffusion.git@mps#egg=k_diffusion
+  - git+https://github.com/invoke-ai/Real-ESRGAN.git#egg=realesrgan
+  - git+https://github.com/invoke-ai/GFPGAN.git#egg=gfpgan
+  - git+https://github.com/invoke-ai/clipseg.git@models-rename#egg=clipseg
   - -e .


@@ -0,0 +1,45 @@
name: invokeai
channels:
- pytorch
- conda-forge
- defaults
dependencies:
- python>=3.9
- pip=22.2.2
- numpy=1.23.3
- pip:
- --extra-index-url https://download.pytorch.org/whl/rocm5.2/
- albumentations==0.4.3
- dependency_injector==4.40.0
- diffusers==0.6.0
- einops==0.3.0
- eventlet
- flask==2.1.3
- flask_cors==3.0.10
- flask_socketio==5.3.0
- getpass_asterisk
- imageio-ffmpeg==0.4.2
- imageio==2.9.0
- kornia==0.6.0
- omegaconf==2.2.3
- opencv-python==4.5.5.64
- pillow==9.2.0
- pudb==2019.2
- pyreadline3
- pytorch-lightning==1.7.7
- send2trash==1.8.0
- streamlit==1.12.0
- taming-transformers-rom1504
- test-tube>=0.7.5
- torch
- torch-fidelity==0.3.0
- torchaudio
- torchmetrics==0.7.0
- torchvision
- transformers==4.21.3
- git+https://github.com/openai/CLIP.git@main#egg=clip
- git+https://github.com/Birch-san/k-diffusion.git@mps#egg=k_diffusion
- git+https://github.com/invoke-ai/Real-ESRGAN.git#egg=realesrgan
- git+https://github.com/invoke-ai/GFPGAN.git#egg=gfpgan
- git+https://github.com/invoke-ai/clipseg.git@relaxed-python-requirement#egg=clipseg
- -e .


@@ -15,13 +15,13 @@ torch==1.12.1
 torchvision==0.13.0
 pytorch-lightning==1.7.7
 streamlit==1.12.0
+taming-transformers-rom1504
 test-tube>=0.7.5
 torch-fidelity==0.3.0
 torchmetrics==0.6.0
 transformers==4.21.3
--e git+https://github.com/openai/CLIP.git@main#egg=clip
--e git+https://github.com/CompVis/taming-transformers.git@master#egg=taming-transformers
--e git+https://github.com/lstein/k-diffusion.git@master#egg=k-diffusion
--e git+https://github.com/TencentARC/GFPGAN.git#egg=gfpgan
--e git+https://github.com/invoke-ai/clipseg.git@models-rename#egg=clipseg
+git+https://github.com/openai/CLIP.git@main#egg=clip
+git+https://github.com/lstein/k-diffusion.git@master#egg=k-diffusion
+git+https://github.com/TencentARC/GFPGAN.git#egg=gfpgan
+git+https://github.com/invoke-ai/clipseg.git@models-rename#egg=clipseg
 -e .
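Each git requirement above keeps its `#egg=` fragment, which names the project for pip before the source is fetched. A small stdlib-only sketch of reading that fragment (`egg_name` is a hypothetical helper):

```python
# Sketch: extract the project name from a pip VCS requirement's
# "#egg=" fragment, as used by the git+ lines in these files.
from urllib.parse import parse_qs, urlparse

def egg_name(requirement: str) -> str:
    fragment = urlparse(requirement).fragment  # e.g. "egg=clipseg"
    return parse_qs(fragment)["egg"][0]

print(egg_name("git+https://github.com/invoke-ai/clipseg.git@models-rename#egg=clipseg"))
# → clipseg
```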


@@ -0,0 +1,6 @@
-r requirements.txt
protobuf==3.19.6
torch<1.13.0
torchvision<0.14.0
-e .
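This file layers platform pins on top of the shared list pulled in by `-r requirements.txt`; pip must satisfy the constraints from both files at once. A toy illustration of that intersection, assuming the shared list pins `torch==1.12.1` as in the hunk shown earlier (real resolution is far more involved and is pip's job):

```python
# Toy model: a candidate version must pass every constraint gathered
# from both requirement files (the shared list and this overlay).
def satisfies(version: tuple, op: str, bound: tuple) -> bool:
    return {"==": version == bound, "<": version < bound}[op]

candidate = (1, 12, 1)              # torch 1.12.1
constraints = [
    ("==", (1, 12, 1)),             # from the shared requirements list
    ("<", (1, 13, 0)),              # from this file's torch<1.13.0 pin
]
print(all(satisfies(candidate, op, b) for op, b in constraints))
# → True
```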


@@ -1,41 +1,39 @@
 --prefer-binary
+# pip will resolve the version which matches torch
 albumentations
+dependency_injector==4.40.0
+diffusers
 einops
+eventlet
+flask==2.1.3
+flask_cors==3.0.10
+flask_socketio==5.3.0
+flaskwebgui==0.3.7
 getpass_asterisk
 huggingface-hub
-imageio-ffmpeg
 imageio
+imageio-ffmpeg
 kornia
-# pip will resolve the version which matches torch
 numpy
 omegaconf
 opencv-python
 pillow
 pip>=22
 pudb
-pytorch-lightning==1.7.7
-scikit-image>=0.19
-streamlit
 pyreadline3
-# "CompVis/taming-transformers" IS NOT INSTALLABLE
-# This is a drop-in replacement
+pytorch-lightning==1.7.7
+realesrgan
+scikit-image>=0.19
+send2trash
+streamlit
 taming-transformers-rom1504
 test-tube
 torch-fidelity
 torchmetrics
 transformers==4.21.*
-flask==2.1.3
-flask_socketio==5.3.0
-flask_cors==3.0.10
-flaskwebgui==0.3.7
-send2trash
-dependency_injector==4.40.0
-eventlet
-realesrgan
-diffusers
 git+https://github.com/openai/CLIP.git@main#egg=clip
 git+https://github.com/Birch-san/k-diffusion.git@mps#egg=k-diffusion
 git+https://github.com/invoke-ai/Real-ESRGAN.git#egg=realesrgan
 git+https://github.com/invoke-ai/GFPGAN.git#egg=gfpgan
--e git+https://github.com/invoke-ai/clipseg.git@models-rename#egg=clipseg
+git+https://github.com/invoke-ai/clipseg.git@models-rename#egg=clipseg


@@ -1,3 +0,0 @@
[tool.blue]
line-length = 90
target-version = ['py310']


@@ -2,7 +2,7 @@ from setuptools import setup, find_packages
 setup(
     name='invoke-ai',
-    version='2.0.2',
+    version='2.1.3',
     description='',
     packages=find_packages(),
     install_requires=[