Commit Graph

5 Commits

Author SHA1 Message Date
Lincoln Stein
93cba3fba5
Kyle0654 inpaint improvement - with refactoring from PR #1221 (#1)
* Removed duplicate fix_func for MPS

* add support for loading VAE autoencoders

To add a VAE autoencoder to an existing model (a rough sketch of the
loading step itself follows these instructions):

1. Download the appropriate autoencoder and put it into
   models/ldm/stable-diffusion-v1

   Note that you MUST use a VAE that was written for the
   original CompVis Stable Diffusion codebase. For v1.4,
   that is the file named vae-ft-mse-840000-ema-pruned.ckpt,
   which you can download from https://huggingface.co/stabilityai/sd-vae-ft-mse-original

2. Edit configs/models.yaml to contain the following stanza, modifying `weights`
   and `vae` as required to match the weights and VAE model file names. There is
   no need to rename the VAE file.

~~~
stable-diffusion-1.4:
  weights: models/ldm/stable-diffusion-v1/sd-v1-4.ckpt
  description: Stable Diffusion v1.4
  config: configs/stable-diffusion/v1-inference.yaml
  vae: models/ldm/stable-diffusion-v1/vae-ft-mse-840000-ema-pruned.ckpt
  width: 512
  height: 512
~~~

3. Alternatively, from within the `invoke.py` CLI, you may use the command
   `!editmodel stable-diffusion-1.4` to bring up a simple editor that will
   allow you to add the path to the VAE.

4. If you are just installing InvokeAI for the first time, you can also
   use `!import_model models/ldm/stable-diffusion-v1/sd-v1-4.ckpt` instead
   to create the configuration from scratch.

5. That's it!
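
For context, here is a minimal sketch of what the loading step behind these
instructions could look like. The `first_stage_model` attribute name comes
from the CompVis codebase; the helper itself is an assumption, not
necessarily the exact code added in this commit.

~~~python
import torch

def load_vae(model, vae_path: str) -> None:
    # Rough sketch: splice a standalone VAE checkpoint into an already-loaded
    # CompVis-style Stable Diffusion model, whose autoencoder is
    # model.first_stage_model.
    ckpt = torch.load(vae_path, map_location="cpu")
    state_dict = ckpt.get("state_dict", ckpt)
    model.first_stage_model.load_state_dict(state_dict, strict=False)
~~~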

* ported code refactor changes from PR #1221

- pass a PIL.Image to img2img and inpaint rather than a tensor
- To support clipseg, inpaint needs to accept an "L" or "1" format
  mask. Made the appropriate change; a sketch of the mask handling
  follows this list.
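
As a hypothetical caller-side illustration of the second point, any mask
image can be normalized before being handed to inpaint:

~~~python
from PIL import Image

# Hypothetical illustration: inpaint now accepts single-channel ("L" or "1")
# masks, so convert anything else first.
mask = Image.open("mask.png")
if mask.mode not in ("L", "1"):
    mask = mask.convert("L")
~~~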

* minor fixes to inpaint code

1. If tensors are passed to inpaint as init_image and/or init_mask, then
   the post-generation image fixup code will be skipped.

2. Post-generation image fixup will work with either a black-and-white "L"
   or "RGB" mask, or an "RGBA" mask. A sketch of these checks follows.

Co-authored-by: wfng92 <43742196+wfng92@users.noreply.github.com>
2022-10-22 20:09:38 -07:00
Kyle Schouviller
3b01e6e423 Improve inpainting by color-correcting result and pasting init image over result using mask 2022-10-22 14:56:33 -07:00
Lincoln Stein
20551857da add clipseg support for creating inpaint masks from text
On the command line, the new option is --text_mask or -tm.
Example:

```
invoke> a baseball -I /path/to/still_life.png -tm orange
```

This will find the orange fruit in the still life painting and replace
it with an image of a baseball.
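
For illustration, a rough sketch of turning a text prompt into a binary mask
with CLIPSeg is shown below. It uses the Hugging Face `transformers` port and
the `CIDAS/clipseg-rd64-refined` checkpoint rather than whatever integration
this commit wires in, so treat the model name and the 0.5 threshold as
assumptions:

```python
import torch
from PIL import Image
from transformers import CLIPSegProcessor, CLIPSegForImageSegmentation

# Rough sketch, not this commit's code: segment "orange" in an image and
# threshold the heatmap into a single-channel PIL mask suitable for inpainting.
processor = CLIPSegProcessor.from_pretrained("CIDAS/clipseg-rd64-refined")
model = CLIPSegForImageSegmentation.from_pretrained("CIDAS/clipseg-rd64-refined")

image = Image.open("still_life.png").convert("RGB")
inputs = processor(text=["orange"], images=[image], return_tensors="pt")
with torch.no_grad():
    heatmap = torch.sigmoid(model(**inputs).logits.squeeze())
mask = Image.fromarray(((heatmap > 0.5).numpy() * 255).astype("uint8")).resize(image.size)
```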
2022-10-18 08:27:48 -04:00
Lincoln Stein
6f93dc7712 cleanup inpainting and img2img
- add a `--inpaint_replace` option that fills masked regions with
  latent noise. This allows radical changes to inpainted regions
  at the cost of losing context (a sketch of the idea follows this list).
- fix up readline, arg processing and metadata writing to accommodate
  this change
- fixed bug in storage and retrieval of variations, discovered incidentally
  during testing
- update documentation
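
A minimal sketch of the idea behind `--inpaint_replace`; the mask convention
(1 marks the region to repaint) and the function name are assumptions, not
the commit's actual code:

```python
import torch

def replace_masked_latents(init_latent: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    # Where the mask selects the region to repaint, drop the original latents
    # and start from fresh noise; elsewhere keep the encoded init image.
    noise = torch.randn_like(init_latent)
    return mask * noise + (1.0 - mask) * init_latent
```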
2022-10-16 08:50:55 -04:00
Lincoln Stein
2b1aaf4ee7 rename all modules from ldm.dream to ldm.invoke
- scripts and documentation updated to match
- ran preflight checks on both web and CLI; both seem to be working
  (an example of the import change is sketched below)
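
As an example of what the rename means for imports (the specific module and
class are illustrative, not a guarantee of the tree's contents):

```python
# before the rename:
#   from ldm.dream.pngwriter import PngWriter
# after the rename:
from ldm.invoke.pngwriter import PngWriter
```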
2022-10-08 11:37:23 -04:00