Squashed commit of the following:

commit 2c1e0168bb03a2cd625f2d4aca40eee0fdf7e4af (Merge: 2325c6c 31f2733) | Lincoln Stein <lincoln.stein@gmail.com> | Tue Oct 11 08:33:18 2022 -0400 | Merge branch 'mkdocs-fixes' of https://github.com/mauwii/stable-diffusion into mauwii-mkdocs-fixes
commit 31f2733e89 (Merge: d9d6d3a a61a690) | Lincoln Stein <lincoln.stein@gmail.com> | Tue Oct 11 08:05:52 2022 -0400 | Merge branch 'main' into mkdocs-fixes
commit d9d6d3af3f | mauwii <Mauwii@outlook.de> | Tue Oct 11 08:13:04 2022 +0200 | some more minor, overseen fixes to IMG2IMG
commit 4ab5a2aeba | mauwii <Mauwii@outlook.de> | Tue Oct 11 07:49:11 2022 +0200 | add 4gotten alt-text to images
commit f778bd9c0f | mauwii <Mauwii@outlook.de> | Tue Oct 11 07:18:11 2022 +0200 | update OTHER.md - fix codeblocks, add admonitions, embed graphic
commit a19f148a8e | mauwii <Mauwii@outlook.de> | Tue Oct 11 06:51:29 2022 +0200 | update IMG2IMG.md
commit c1f1dfa714 | mauwii <Mauwii@outlook.de> | Tue Oct 11 06:10:25 2022 +0200 | update EMBIGGEN.md - fix codeblocks - fix toc - use admonitions
commit 791e6c63ef | mauwii <Mauwii@outlook.de> | Tue Oct 11 05:58:53 2022 +0200 | better admonitions for CLI.md
commit e078025f00 | mauwii <Mauwii@outlook.de> | Tue Oct 11 05:50:32 2022 +0200 | huge update to CLI.md - way too many updates to list them all, including: render keys for keyboard-shortcuts, quote commands and "unhide" parameter-values (like `<int>`, `<string>`), fix codeblocks, quote commands, quote filenames, use admonitions, ...
commit bd98dd2307 | mauwii <Mauwii@outlook.de> | Tue Oct 11 04:49:57 2022 +0200 | fix INPAINTING.md - fix numbered List - replace text key combos with actual rendered keyboard keys
commit 5392000335 | mauwii <Mauwii@outlook.de> | Tue Oct 11 04:30:11 2022 +0200 | fix nubered list and codeblocks in INSTALL_WINDOWS
commit ffe9276f1e | mauwii <Mauwii@outlook.de> | Tue Oct 11 04:12:56 2022 +0200 | fix numbered list in INSTALL_LINUX.md, also fix blank lines, codeblocks and admonition
commit 2c6a6a567f | mauwii <Mauwii@outlook.de> | Tue Oct 11 03:51:03 2022 +0200 | upgrade INSTALL_MAC.md - use annotations and content-tabs; yes, this looks ugly in repo afterwards, but plz also look at mkdocs: https://mauwii.github.io/stable-diffusion/installation/INSTALL_MAC/
commit 8f6c544480 | mauwii <Mauwii@outlook.de> | Tue Oct 11 01:43:11 2022 +0200 | comment out PR part in mkdocs-flow.yml
commit b52c14a67f (Merge: 97ebe58 a1b0b91) | mauwii <Mauwii@outlook.de> | Tue Oct 11 01:17:28 2022 +0200 | Merge branch 'mkdocs-fixes' of github.com:mauwii/stable-diffusion into mkdocs-fixes
commit a1b0b91bb3 | mauwii <Mauwii@outlook.de> | Tue Oct 11 00:59:44 2022 +0200 | fix conda env in codeblock
commit 5f9f9a266e | mauwii <Mauwii@outlook.de> | Tue Oct 11 00:43:46 2022 +0200 | fix 4gotten title in TEXTUAL_INVERSION
commit 8f025b034e | mauwii <Mauwii@outlook.de> | Tue Oct 11 00:41:52 2022 +0200 | quote repo_url and repo_name, otherwise the version/stars/forks did not appear
commit 3a52b7deb3 | mauwii <Mauwii@outlook.de> | Tue Oct 11 00:39:54 2022 +0200 | fix TEXTUAL_INVERSION headline to fit the others
commit 389b21f966 | mauwii <Mauwii@outlook.de> | Tue Oct 11 00:35:48 2022 +0200 | fix SAMPLER_CONVERGENCE and add emoji
commit f26fc79a18 | mauwii <Mauwii@outlook.de> | Tue Oct 11 00:32:04 2022 +0200 | fix INSTALL_DOCKER.md - fix title (Docker instead of "Before you begin") - add headline with Emoji - fix headlines to render toc correct
commit cbc3520489 | mauwii <Mauwii@outlook.de> | Tue Oct 11 00:24:58 2022 +0200 | add headline with emoji to INSTALL_MAC.md
commit 25f0614d66 | mauwii <Mauwii@outlook.de> | Tue Oct 11 00:21:01 2022 +0200 | add log emoji to docs/CHANGELOG.md
commit 42005688fa | mauwii <Mauwii@outlook.de> | Tue Oct 11 00:20:47 2022 +0200 | use better fitting Icon for new Name
commit 0c65bad7f5 | mauwii <Mauwii@outlook.de> | Tue Oct 11 00:09:07 2022 +0200 | add Headline with Emoji to WEB and POSTPROCESS
commit 1c1cf2692e | mauwii <Mauwii@outlook.de> | Mon Oct 10 23:56:16 2022 +0200 | update index.md - remove unused template reference - make headline rendered bold and underlined, add (kind of) subtitle - update discord badge and link - update Quick links to look like in GH-Readme - also remove self reference to docs - add screenshot as in GH-Readme - add note pointing to issues tab - update path in command line to reflect new Repo Name
commit 0e29b0737e | mauwii <Mauwii@outlook.de> | Mon Oct 10 23:23:10 2022 +0200 | chng site_name to `Stable Diffusion Toolkit Docs`
commit ad8a60d992 | mauwii <Mauwii@outlook.de> | Mon Oct 10 23:00:02 2022 +0200 | fix repo_url in mkdocs.yml
commit 234569d6b6 | mauwii <Mauwii@outlook.de> | Mon Oct 10 22:54:39 2022 +0200 | fix link to upscaling in WEB.md and TOC - TOC fixed by adding `#` to every headline after `## Parting remarks` - add missing blank lines
commit 97c84ad824 | mauwii <Mauwii@outlook.de> | Mon Oct 10 22:25:32 2022 +0200 | fix broken links in docs/CHANGELOG.md
commit bce62b3a32 | mauwii <Mauwii@outlook.de> | Mon Oct 10 22:15:37 2022 +0200 | add title to CHANGELOG.md to render TOC wo. `**`, alternatively remove `**` around headline
commit 97ebe58b5b | mauwii <Mauwii@outlook.de> | Tue Oct 11 00:59:44 2022 +0200 | fix conda env in codeblock
commit 87ac217e43 | mauwii <Mauwii@outlook.de> | Tue Oct 11 00:43:46 2022 +0200 | fix 4gotten title in TEXTUAL_INVERSION
commit 91439e8a52 | mauwii <Mauwii@outlook.de> | Tue Oct 11 00:41:52 2022 +0200 | quote repo_url and repo_name, otherwise the version/stars/forks did not appear
commit 8a632a9e8f | mauwii <Mauwii@outlook.de> | Tue Oct 11 00:39:54 2022 +0200 | fix TEXTUAL_INVERSION headline to fit the others
commit 7c8ffe2feb | mauwii <Mauwii@outlook.de> | Tue Oct 11 00:35:48 2022 +0200 | fix SAMPLER_CONVERGENCE and add emoji
commit e2e86d2d11 | mauwii <Mauwii@outlook.de> | Tue Oct 11 00:32:04 2022 +0200 | fix INSTALL_DOCKER.md - fix title (Docker instead of "Before you begin") - add headline with Emoji - fix headlines to render toc correct
commit 8b54c083fe | mauwii <Mauwii@outlook.de> | Tue Oct 11 00:24:58 2022 +0200 | add headline with emoji to INSTALL_MAC.md
commit 8d8a032434 | mauwii <Mauwii@outlook.de> | Tue Oct 11 00:21:01 2022 +0200 | add log emoji to docs/CHANGELOG.md
commit 76519f6fa4 | mauwii <Mauwii@outlook.de> | Tue Oct 11 00:20:47 2022 +0200 | use better fitting Icon for new Name
commit aff0725533 | mauwii <Mauwii@outlook.de> | Tue Oct 11 00:09:07 2022 +0200 | add Headline with Emoji to WEB and POSTPROCESS
commit 0f7898cbdd | mauwii <Mauwii@outlook.de> | Mon Oct 10 23:56:16 2022 +0200 | update index.md - remove unused template reference - make headline rendered bold and underlined, add (kind of) subtitle - update discord badge and link - update Quick links to look like in GH-Readme - also remove self reference to docs - add screenshot as in GH-Readme - add note pointing to issues tab - update path in command line to reflect new Repo Name
commit f4c04eadf8 | mauwii <Mauwii@outlook.de> | Mon Oct 10 23:23:10 2022 +0200 | chng site_name to `Stable Diffusion Toolkit Docs`
commit 6e624827c0 | mauwii <Mauwii@outlook.de> | Mon Oct 10 23:00:02 2022 +0200 | fix repo_url in mkdocs.yml
commit 158848dd7e | mauwii <Mauwii@outlook.de> | Mon Oct 10 22:54:39 2022 +0200 | fix link to upscaling in WEB.md and TOC - TOC fixed by adding `#` to every headline after `## Parting remarks` - add missing blank lines
commit 533736e135 | mauwii <Mauwii@outlook.de> | Mon Oct 10 22:29:46 2022 +0200 | fix link to truncation_comparison.jpg in OTHER.md
commit dd335142df | mauwii <Mauwii@outlook.de> | Mon Oct 10 22:25:32 2022 +0200 | fix broken links in docs/CHANGELOG.md
commit 374dd54f30 | mauwii <Mauwii@outlook.de> | Mon Oct 10 22:15:37 2022 +0200 | add title to CHANGELOG.md to render TOC wo. `**`, alternatively remove `**` around headline
---
title: macOS
---

# :fontawesome-brands-apple: macOS
Invoke AI runs quite well on M1 Macs and we have a number of M1 users in the community.
While the repo does run on Intel Macs, we only have a couple of reports. If you have an Intel Mac and run into issues, please create an issue on GitHub and we will do our best to help.
## Requirements
- macOS 12.3 Monterey or later
- About 10GB of storage (and 10GB of data if your internet connection has data caps)
- An M1 Mac, or an Intel Mac with 4GB+ of VRAM (ideally more)
## Installation
First you need to download a large checkpoint file.
- Sign up at https://huggingface.co
- Go to the stable-diffusion model page
- Accept the terms and click Access Repository
- Download sd-v1-4.ckpt (4.27 GB) and note where you have saved it (probably the Downloads folder). You may want to move it somewhere else for longer term storage - SD needs this file to run.
While that is downloading, open Terminal and run the following commands one at a time, reading the comments and taking care to run the appropriate command for your Mac's architecture (Intel or M1).
!!! todo "Homebrew"
If you don't have Homebrew installed yet (otherwise skip this step):
```bash title="install brew (and Xcode command line tools)"
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
```
!!! todo "Conda Installation"
Now there are two different ways to set up the Python (miniconda) environment:
1. Standalone
2. with pyenv
If you don't know what we are talking about, choose "Standalone". If you are familiar with Python environments, choose "with pyenv".
=== "Standalone"
```bash title="Install cmake, protobuf, and rust"
brew install cmake protobuf rust
```
Then choose your Mac's architecture and install miniconda:
=== "M1 arm64"
```bash title="Install miniconda for M1 arm64"
curl https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-arm64.sh \
-o Miniconda3-latest-MacOSX-arm64.sh
/bin/bash Miniconda3-latest-MacOSX-arm64.sh
```
=== "Intel x86_64"
```bash title="Install miniconda for Intel"
curl https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-x86_64.sh \
-o Miniconda3-latest-MacOSX-x86_64.sh
/bin/bash Miniconda3-latest-MacOSX-x86_64.sh
```
=== "with pyenv"
```bash
brew install pyenv-virtualenv
pyenv install anaconda3-2022.05
pyenv virtualenv anaconda3-2022.05
eval "$(pyenv init -)"
pyenv activate anaconda3-2022.05
```
!!! todo "Clone the Invoke AI repo"
```bash
git clone https://github.com/invoke-ai/InvokeAI.git
cd InvokeAI
```
!!! todo "Wait until the checkpoint-file download finished, then proceed"
We will leave the big checkpoint wherever you stashed it for long-term storage,
and make a link to it from the repo's folder. This allows you to use it for
other repos, or if you need to delete Invoke AI, you won't have to download it again.
```{.bash .annotate}
# Make the directory in the repo for the symlink
mkdir -p models/ldm/stable-diffusion-v1/
# This is the folder where you put the checkpoint file `sd-v1-4.ckpt`
PATH_TO_CKPT="$HOME/Downloads" # (1)!
# Create a link to the checkpoint
ln -s "$PATH_TO_CKPT/sd-v1-4.ckpt" models/ldm/stable-diffusion-v1/model.ckpt
```
1. replace `$HOME/Downloads` with the location where you actually stored the checkpoint (`sd-v1-4.ckpt`)
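If you want to double-check the link before moving on, this quick optional check (a sketch, assuming the default layout used above) prints the checkpoint's size and the path the symlink resolves to:

```bash
# confirm the symlink exists and points at the real checkpoint file
ls -lh models/ldm/stable-diffusion-v1/model.ckpt
readlink models/ldm/stable-diffusion-v1/model.ckpt
```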
!!! todo "Create the environment & install packages"
=== "M1 Mac"
```bash
PIP_EXISTS_ACTION=w CONDA_SUBDIR=osx-arm64 conda env create -f environment-mac.yml
```
=== "Intel x86_64 Mac"
```bash
PIP_EXISTS_ACTION=w CONDA_SUBDIR=osx-64 conda env create -f environment-mac.yml
```
```bash
# Activate the environment (you need to do this every time you want to run SD)
conda activate invokeai
# This will download some bits and pieces and may take a while
python scripts/preload_models.py
# Run SD!
python scripts/dream.py
# or run the web interface!
python scripts/invoke.py --web
# The original scripts should work as well.
python scripts/orig_scripts/txt2img.py \
--prompt "a photograph of an astronaut riding a horse" \
--plms
```
!!! info
`export PIP_EXISTS_ACTION=w` is a precaution to fix `conda env create -f environment-mac.yml` never finishing in some situations. It isn't required, but it won't hurt.
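If you do hit that hang, one way to apply the precaution (a sketch; the inline form is what the tabs above already use, shown here for the M1 case):

```bash
# set the variable only for this one command (use CONDA_SUBDIR=osx-64 on Intel) ...
PIP_EXISTS_ACTION=w CONDA_SUBDIR=osx-arm64 conda env create -f environment-mac.yml

# ... or export it for the rest of the shell session
export PIP_EXISTS_ACTION=w
```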
## Common problems
After following all the instructions and trying to run `invoke.py`, you might get several errors. Here are the errors I've seen and found solutions for.
### Is it slow?
Be sure to specify 1 sample and 1 iteration:

```bash
python ./scripts/orig_scripts/txt2img.py \
  --prompt "ocean" \
  --ddim_steps 5 \
  --n_samples 1 \
  --n_iter 1
```
### Doesn't work anymore?
PyTorch nightly includes support for MPS. Because of this, this setup is inherently unstable. One morning I woke up and it no longer worked no matter what I did until I switched to miniforge. However, I have another Mac that works just fine with Anaconda. If you can't get it to work, please search a little first because many of the errors will get posted and solved. If you can't find a solution please create an issue.
One debugging step is to update to the latest version of PyTorch nightly.
```bash
conda install \
  pytorch \
  torchvision \
  -c pytorch-nightly \
  -n ldm
```
### If it takes forever to run `conda env create -f environment-mac.yml`

Try this:
```bash
git clean -f
conda clean \
  --yes \
  --all
```
Or you could try to completely reset Anaconda:
```bash
conda update \
  --force-reinstall \
  -y \
  -n base \
  -c defaults conda
```
"No module named cv2", torch, 'ldm', 'transformers', 'taming', etc
There are several causes of these errors:

- Did you remember to `conda activate ldm`? If your terminal prompt begins with "(ldm)" then you activated it. If it begins with "(base)" or something else, you haven't.

- You might've run `./scripts/preload_models.py` or `./scripts/invoke.py` instead of `python ./scripts/preload_models.py` or `python ./scripts/invoke.py`. The cause of this error is long, so it's below.

- If it says you're missing taming, you need to rebuild your virtual environment:

  ```bash
  conda deactivate
  conda env remove -n ldm
  conda env create -f environment-mac.yml
  ```

- If you have activated the ldm virtual environment and tried rebuilding it, maybe the problem could be that I have something installed that you don't and you'll just need to manually install it. Make sure you activate the virtual environment so it installs there instead of globally:

  ```bash
  conda activate ldm
  pip install <package name>
  ```

You might also need to install Rust (I mention this again below).
### How many snakes are living in your computer?
You might have multiple Python installations on your system, in which case it's important to be explicit and consistent about which one to use for a given project. This is because virtual environments are coupled to the Python that created them (and all the associated 'system-level' modules).
When you run `python` or `python3`, your shell searches the colon-delimited locations in the `PATH` environment variable (`echo $PATH` to see that list) in that order - first match wins. You can ask for the location of the first `python3` found in your `PATH` with the `which` command like this:

```bash
% which python3
/usr/bin/python3
```
Anything in `/usr/bin` is part of the OS. However, `/usr/bin/python3` is not actually python3, but rather a stub that offers to install Xcode (which includes python 3). If you have Xcode installed already, `/usr/bin/python3` will execute `/Library/Developer/CommandLineTools/usr/bin/python3` or `/Applications/Xcode.app/Contents/Developer/usr/bin/python3` (depending on which Xcode you've selected with `xcode-select`).
Note that `/usr/bin/python` is an entirely different python - specifically, python 2. Note: starting in macOS 12.3, `/usr/bin/python` no longer exists.
```bash
% which python3
/opt/homebrew/bin/python3
```

If you installed python3 with Homebrew and you've modified your path to search for Homebrew binaries before system ones, you'll see the above path.
```bash
% which python
/opt/anaconda3/bin/python
```

If you have Anaconda installed, you will see the above path. There is a `/opt/anaconda3/bin/python3` also.
We expect that `/opt/anaconda3/bin/python` and `/opt/anaconda3/bin/python3` should actually be the same python, which you can verify by comparing the output of `python3 -V` and `python -V`.
```bash
(ldm) % which python
/Users/name/miniforge3/envs/ldm/bin/python
```

The above is what you'll see if you have miniforge and have correctly activated the ldm environment, while using the standalone setup instructions above.
If you installed via pyenv instead, you will get this result:

```bash
(anaconda3-2022.05) % which python
/Users/name/.pyenv/shims/python
```
It's all a mess, and you should know how to modify the `PATH` environment variable if you want to fix it. Here's a brief hint of the most common places you can modify it (I don't really have the time to explain it all here):

- `~/.zshrc`
- `~/.bash_profile`
- `~/.bashrc`
- `/etc/paths.d`
- `/etc/paths`

Which one you use will depend on what you have installed, except that putting a file in `/etc/paths.d` is the way I prefer to do it.
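As a concrete illustration (a sketch only; `/opt/homebrew/bin` is just the Homebrew example from above, substitute whatever directory you want to win):

```bash
# ~/.zshrc: put Homebrew's binaries ahead of the system ones for your user
export PATH="/opt/homebrew/bin:$PATH"

# or, system-wide, drop a one-line file into /etc/paths.d
echo "/opt/homebrew/bin" | sudo tee /etc/paths.d/homebrew
```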
Finally, to answer the question posed by this section's title, it may help to list all of the `python` / `python3` things found in `$PATH` instead of just the first hit. To do so, add the `-a` switch to `which`:

```bash
% which -a python3
...
```

This will show a list of all binaries which are actually available in your PATH.
### Debugging?
Tired of waiting for your renders to finish before you can see if it works? Reduce the steps! The image quality will be horrible but at least you'll get quick feedback.
```bash
python ./scripts/txt2img.py \
  --prompt "ocean" \
  --ddim_steps 5 \
  --n_samples 1 \
  --n_iter 1
```
### OSError: Can't load tokenizer for 'openai/clip-vit-large-patch14'

The fix is to run the preload script again:

```bash
python scripts/preload_models.py
```
"The operator [name] is not current implemented for the MPS device." (sic)
!!! example "example error"
```bash
... NotImplementedError: The operator 'aten::_index_put_impl_' is not current
implemented for the MPS device. If you want this op to be added in priority
during the prototype phase of this feature, please comment on
https://github.com/pytorch/pytorch/issues/77764.
As a temporary fix, you can set the environment variable
`PYTORCH_ENABLE_MPS_FALLBACK=1` to use the CPU as a fallback for this op.
WARNING: this will be slower than running natively on MPS.
```
The InvokeAI version includes this fix in `environment-mac.yml`.
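If you still see the error anyway (for example in an older environment), the temporary workaround the message itself suggests is to set the fallback variable before launching; a sketch:

```bash
# let ops that MPS doesn't implement yet fall back to the CPU (slower, but it runs)
export PYTORCH_ENABLE_MPS_FALLBACK=1
python scripts/invoke.py
```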
"Could not build wheels for tokenizers"
I have not seen this error because I had Rust installed on my computer before I started playing with Stable Diffusion. The fix is to install Rust.
```bash
curl \
  --proto '=https' \
  --tlsv1.2 \
  -sSf https://sh.rustup.rs | sh
```
### How come `--seed` doesn't work?
First this:

> Completely reproducible results are not guaranteed across PyTorch releases, individual commits, or different platforms. Furthermore, results may not be reproducible between CPU and GPU executions, even when using identical seeds.
Second, we might have a fix that at least gets a consistent seed sort of. We're still working on it.
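If you want to see how close you get on your own machine, rendering the same prompt twice with a fixed seed is a quick test (a sketch using the original script's `--seed` flag; on MPS the two images may still differ slightly):

```bash
# run the identical command twice, then compare the two output images
python ./scripts/orig_scripts/txt2img.py --prompt "ocean" --seed 42 --n_samples 1 --n_iter 1
python ./scripts/orig_scripts/txt2img.py --prompt "ocean" --seed 42 --n_samples 1 --n_iter 1
```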
### libiomp5.dylib error?
```
OMP: Error #15: Initializing libiomp5.dylib, but found libomp.dylib already initialized.
```
You are likely using an Intel package by mistake. Be sure to run conda with the environment variable `CONDA_SUBDIR=osx-arm64`, like so:

```bash
CONDA_SUBDIR=osx-arm64 conda install ...
```
This error happens with Anaconda on Macs when the Intel-only `mkl` is pulled in by a dependency. `nomkl` is a metapackage designed to prevent this, by making it impossible to install `mkl`, but if your environment is already broken it may not work.
Do not use `os.environ['KMP_DUPLICATE_LIB_OK']='True'` or equivalents, as this masks the underlying issue of using Intel packages.
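If you want to try adding `nomkl` to an existing environment anyway, a hedged sketch (whether it helps depends on how broken the environment already is, as noted above):

```bash
# prefer ARM builds and pull the nomkl metapackage into the environment
# (use the environment name you created earlier -- ldm or invokeai)
CONDA_SUBDIR=osx-arm64 conda install -n ldm nomkl
```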
### Not enough memory
This seems to be a common problem and is probably the underlying problem for a lot of symptoms (listed below). The fix is to lower your image size or to add `model.half()` right after the model is loaded. I should probably test it out.
I've read that the reason this fixes problems is because it converts the model
from 32-bit to 16-bit and that leaves more RAM for other things. I have no idea
how that would affect the quality of the images though.
See this issue.
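If you'd rather try the smaller-image route before touching the code, the original script's size flags make for a quick experiment (a sketch; 512x512 is what the model was trained on, so smaller sizes trade quality for memory):

```bash
# render at a reduced size to ease memory pressure
python ./scripts/orig_scripts/txt2img.py \
  --prompt "ocean" \
  --H 384 --W 384 \
  --n_samples 1 --n_iter 1
```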
"Error: product of dimension sizes > 2**31'"
This error happens with img2img, which I haven't played with too much yet. But I know it's because your image is too big or the resolution isn't a multiple of 32x32. Because the stable-diffusion model was trained on images that were 512 x 512, it's always best to use that output size (which is the default). However, if you're using that size and you get the above error, try 256 x 256 or 512 x 256 or something as the source image.
BTW, 2**31-1 = 2,147,483,647, which is also 32-bit signed LONG_MAX in C.
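If your source image is the culprit, macOS's built-in `sips` can shrink it from the shell before you feed it to img2img (a minimal sketch; the filenames are placeholders):

```bash
# resample so the longest side is at most 512 px, writing a new file
sips -Z 512 my-source-image.png --out my-source-image-512.png
```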
### I just got Rickrolled! Do I have a virus?
You don't have a virus. It's part of the project. Here's Rick and here's the code that swaps him in. It's an NSFW filter which, IMO, doesn't work very well (and we call this "computer vision", sheesh).
### My images come out black
We might have this fixed; we are still testing.

There's a similar issue on CUDA GPUs where the images come out green. Maybe it's the same issue? Someone in that issue says to use "--precision full", but this fork actually disables that flag. I don't know why, someone else provided that code and I don't know what it does. Maybe the `model.half()` suggestion above would fix this issue too. I should probably test it.
"view size is not compatible with input tensor's size and stride"
File "/opt/anaconda3/envs/ldm/lib/python3.10/site-packages/torch/nn/functional.py", line 2511, in layer_norm
return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead.
Update to the latest version of invoke-ai/InvokeAI. We were patching pytorch but we found a file in stable-diffusion that we could change instead. This is a 32-bit vs 16-bit problem.
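A sketch of one way to pull the latest version and refresh the environment (assuming you cloned the repo as in the install steps above):

```bash
cd InvokeAI
git pull
conda env update -f environment-mac.yml
```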
### The processor must support the Intel bla bla bla
What? Intel? On Apple Silicon?
```
Intel MKL FATAL ERROR: This system does not meet the minimum requirements for use of the Intel(R) Math Kernel Library. The processor must support the Intel(R) Supplemental Streaming SIMD Extensions 3 (Intel(R) SSSE3) instructions. The processor must support the Intel(R) Streaming SIMD Extensions 4.2 (Intel(R) SSE4.2) instructions. The processor must support the Intel(R) Advanced Vector Extensions (Intel(R) AVX) instructions.
```
This is due to the Intel `mkl` package getting picked up when you try to install something that depends on it; Rosetta can translate some Intel instructions but not the specialized ones here. To avoid this, make sure to use the environment variable `CONDA_SUBDIR=osx-arm64`, which restricts the Conda environment to only use ARM packages, and use `nomkl` as described above.
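If the environment is already polluted with Intel builds, recreating it with the subdir pinned is the most reliable route I know of (a sketch reusing the commands from the sections above):

```bash
conda deactivate
conda env remove -n ldm   # or whatever your environment is named
CONDA_SUBDIR=osx-arm64 conda env create -f environment-mac.yml
```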
### input types 'tensor<2x1280xf32>' and 'tensor<*xf16>' are not broadcast compatible
May appear when just starting to generate, e.g.:
```
invoke> clouds
Generating:   0%|          | 0/1 [00:00<?, ?it/s]/Users/[...]/dev/stable-diffusion/ldm/modules/embedding_manager.py:152: UserWarning: The operator 'aten::nonzero' is not currently supported on the MPS backend and will fall back to run on the CPU. This may have performance implications. (Triggered internally at /Users/runner/work/_temp/anaconda/conda-bld/pytorch_1662016319283/work/aten/src/ATen/mps/MPSFallback.mm:11.)
  placeholder_idx = torch.where(
loc("mps_add"("(mpsFileLoc): /AppleInternal/Library/BuildRoots/20d6c351-ee94-11ec-bcaf-7247572f23b4/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShadersGraph/mpsgraph/MetalPerformanceShadersGraph/Core/Files/MPSGraphUtilities.mm":219:0)): error: input types 'tensor<2x1280xf32>' and 'tensor<*xf16>' are not broadcast compatible
LLVM ERROR: Failed to infer result type(s).
Abort trap: 6
/Users/[...]/opt/anaconda3/envs/ldm/lib/python3.9/multiprocessing/resource_tracker.py:216: UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown
  warnings.warn('resource_tracker: There appear to be %d '
```