Update references to lstein/stable-diffusion to invoke-ai/InvokeAI

Jim Hays 2022-09-30 21:50:41 -04:00 committed by Lincoln Stein
parent bc9471987b
commit 58d0f14d03


@@ -95,10 +95,9 @@ While that is downloading, open a Terminal and run the following commands:

```{.bash .annotate title="local repo setup"}
# clone the repo
git clone https://github.com/invoke-ai/InvokeAI.git
cd InvokeAI
-# wait until the checkpoint file has downloaded, then proceed
+# Download the checkpoint file, and then proceed
# create symlink to checkpoint
mkdir -p models/ldm/stable-diffusion-v1/
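The hunk is truncated before the symlink command itself. As a rough sketch only (the checkpoint filename and download location are assumptions, not part of this commit), the step the comments above refer to looks something like:

```bash
# link the downloaded Stable Diffusion checkpoint into the location InvokeAI expects
# (adjust the source path to wherever your browser saved the file)
ln -s ~/Downloads/sd-v1-4.ckpt models/ldm/stable-diffusion-v1/model.ckpt
```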
@@ -172,13 +171,13 @@ python ./scripts/orig_scripts/txt2img.py \

### Doesn't work anymore?

PyTorch nightly includes support for MPS. Because of this, this setup is inherently unstable. One morning I woke up and it no longer worked no matter what I did until I switched to miniforge. However, I have another Mac that works just fine with Anaconda. If you can't get it to work, please search a little first because many of the errors will get posted and solved. If you can't find a solution please [create an issue](https://github.com/invoke-ai/InvokeAI/issues).

One debugging step is to update to the latest version of PyTorch nightly.
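As a concrete illustration of that debugging step (a sketch, not taken from this commit; the environment name `ldm` is assumed to be the one environment-mac.yml created at the time, so adjust it if yours differs):

```bash
# activate the conda environment used for InvokeAI, then pull the current nightly build
conda activate ldm
conda update pytorch torchvision -c pytorch-nightly
```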
@@ -378,8 +377,8 @@ python scripts/preload_models.py

WARNING: this will be slower than running natively on MPS.
```

-This fork already includes a fix for this in
-[environment-mac.yml](https://github.com/invoke-ai/InvokeAI/blob/main/environment-mac.yml).
+The InvokeAI version includes this fix in
+[environment-mac.yaml](https://github.com/invoke-ai/InvokeAI/blob/main/environment-mac.yaml).
### "Could not build wheels for tokenizers" ### "Could not build wheels for tokenizers"
@@ -463,13 +462,10 @@ C.

You don't have a virus. It's part of the project. Here's [Rick](https://github.com/invoke-ai/InvokeAI/blob/main/assets/rick.jpeg) and here's [the code](https://github.com/invoke-ai/InvokeAI/blob/69ae4b35e0a0f6ee1af8bb9a5d0016ccb27e36dc/scripts/txt2img.py#L79) that swaps him in. It's a NSFW filter, which IMO, doesn't work very good (and we call this "computer vision", sheesh).

-Actually, this could be happening because there's not enough RAM. You could try
-the `model.half()` suggestion or specify smaller output images.
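For context on the suggestion in the removed lines above, "specify smaller output images" just means passing smaller dimensions to the generation script. A hedged example against the original-scripts invocation shown earlier (flag names are those of the upstream txt2img script; the prompt and dimensions are arbitrary):

```bash
# render at 384x384 instead of the 512x512 default to reduce memory pressure
python ./scripts/orig_scripts/txt2img.py \
    --prompt "a photograph of an astronaut riding a horse" \
    --W 384 --H 384
```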
---
@@ -492,11 +488,9 @@ return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backen

RuntimeError: view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead.
```

Update to the latest version of invoke-ai/InvokeAI. We were patching pytorch but we found a file in stable-diffusion that we could change instead. This is a 32-bit vs 16-bit problem.
---
### The processor must support the Intel bla bla bla