# InvokeAI/environment-mac.yaml

name: ldm
channels:
- pytorch-nightly
- conda-forge
dependencies:
- python==3.9.13
- pip==22.2.2
# pytorch-nightly, left unpinned
- pytorch
- torchmetrics
- torchvision
# I suggest keeping the other deps sorted for convenience.
# If you wish to upgrade to 3.10, try running this:
#
# ```shell
# CONDA_CMD=conda
# sed -E 's/python==3.9.13/python==3.10.5/;s/ldm/ldm-3.10/;21,99s/- ([^=]+)==.+/- \1/' environment-mac.yaml > /tmp/environment-mac-updated.yml
# CONDA_SUBDIR=osx-arm64 $CONDA_CMD env create -f /tmp/environment-mac-updated.yml && $CONDA_CMD list -n ldm-3.10 | awk ' {print " - " $1 "==" $2;} '
# ```
#
# Unfortunately, as of 2022-08-31, this fails at the pip stage.
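#
# For reference, a minimal sketch of creating this (3.9) environment on an
# Apple Silicon machine. The CONDA_SUBDIR=osx-arm64 prefix keeps conda from
# resolving Intel-only packages such as mkl; the env name `ldm` comes from
# `name:` above, and the rest of the invocation is an assumed example.
#
# ```shell
# CONDA_SUBDIR=osx-arm64 conda env create -f environment-mac.yaml
# conda activate ldm
# ```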
- albumentations==1.2.1
- coloredlogs==15.0.1
- einops==0.4.1
- grpcio==1.46.4
- humanfriendly
- imageio-ffmpeg==0.4.7
- imageio==2.21.2
- imgaug==0.4.0
- kornia==0.6.7
- mpmath==1.2.1
- nomkl # keep Intel-only MKL builds out of the env; they break on arm64 (Apple Silicon)
- numpy==1.23.2
- omegaconf==2.1.1
- onnx==1.12.0
- onnxruntime==1.12.1
- opencv==4.6.0
- pudb==2022.1
- pytorch-lightning==1.6.5
- scipy==1.9.1
- streamlit==1.12.2
- sympy==1.10.1
- tensorboard==2.9.0
- transformers==4.21.2
- pip:
    - invisible-watermark
    - test-tube
    - tokenizers
    - torch-fidelity
    - -e git+https://github.com/huggingface/diffusers.git@v0.2.4#egg=diffusers
    - -e git+https://github.com/CompVis/taming-transformers.git@master#egg=taming-transformers
    - -e git+https://github.com/openai/CLIP.git@main#egg=clip
    - -e git+https://github.com/Birch-san/k-diffusion.git@mps#egg=k_diffusion
    - -e .
variables:
  # Fall back to CPU for torch ops that the MPS backend does not support yet
  # (e.g. aten::nonzero); conda exports this variable whenever the env is active.
  PYTORCH_ENABLE_MPS_FALLBACK: 1
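#
# A rough usage sketch once the environment is active, assuming the
# repository's scripts/dream.py entry point (illustrative only):
#
# ```shell
# conda activate ldm
# python scripts/dream.py --full_precision
# # then enter prompts at the dream> REPL, e.g. "an astronaut riding a horse"
# ```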