InvokeAI/ldm/models/diffusion
Lincoln Stein b101be041b add support for runwayML custom inpainting model
This is still a work in progress but seems functional. It supports
inpainting, txt2img and img2img on the ddim and k* samplers (plms
still needs work, but I know what to do).

To test this, get the file `sd-v1-5-inpainting.ckpt` from
https://huggingface.co/runwayml/stable-diffusion-inpainting and place it
at `models/ldm/stable-diffusion-v1/sd-v1-5-inpainting.ckpt`.
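
Something like this should work from the command line (the exact
`resolve/main` download URL may vary, and you may need to accept the
model license on Hugging Face first):

    # run from the top of the InvokeAI tree so the relative path resolves
    curl -L -o models/ldm/stable-diffusion-v1/sd-v1-5-inpainting.ckpt \
         https://huggingface.co/runwayml/stable-diffusion-inpainting/resolve/main/sd-v1-5-inpainting.ckpt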

Launch `invoke.py` with `--model inpainting-1.5` and proceed as usual.
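
For example, if `invoke.py` lives under `scripts/` in your checkout:

    python scripts/invoke.py --model inpainting-1.5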

Caveats:

1. The inpainting model takes about 800 MB more memory than the standard
   1.5 model. This model will not work on 4 GB cards.

2. The inpainting model is temperamental. It wants you to describe the
   entire scene, not just the masked area you want to replace. So if you
   want to replace the parrot on a man's shoulder with a crow, the prompt
   "crow" may fail; try "man with a crow on shoulder" instead. The
   symptom of a failed inpainting is that the masked area will be erased
   and replaced with background.

3. This has not been thoroughly tested yet. Please report bugs.
2022-10-25 10:45:15 -04:00
__init__.py add code 2021-12-21 03:23:41 +01:00
classifier.py prettified all the code using "blue" at the urging of @tildebyte 2022-08-26 03:15:42 -04:00
ddim.py start support for 1.5 inpainting model, not complete 2022-10-25 00:30:48 -04:00
ddpm.py inpaint and txt2img working with ddim sampler 2022-10-25 10:00:28 -04:00
ksampler.py add support for runwayML custom inpainting model 2022-10-25 10:45:15 -04:00
plms.py rename all modules from ldm.dream to ldm.invoke 2022-10-08 11:37:23 -04:00
sampler.py start support for 1.5 inpainting model, not complete 2022-10-25 00:30:48 -04:00