- change default model back to 1.4
- remove --fnformat from canonicalized dream prompt arguments
  (not needed for image reproducibility)
- add -tm to canonicalized dream prompt arguments
  (definitely needed for image reproducibility)
- The plms sampler now works with custom inpainting model
- Quashed bug that was causing generation on normal models to fail (oops!)
- Can now generate non-square images with custom inpainting model
Credits for advice and assistance during porting:
@any-winter-4079 (http://github.com/any-winter-4079)
@db3000 (Danny Beer http://github.com/db3000)
This is still a work in progress but seems functional. It supports
inpainting, txt2img and img2img on the ddim and k* samplers (plms
still needs work, but I know what to do).
To test this, get the file `sd-v1-5-inpainting.ckpt` from
https://huggingface.co/runwayml/stable-diffusion-inpainting and place it
at `models/ldm/stable-diffusion-v1/sd-v1-5-inpainting.ckpt`.
Launch invoke.py with `--model inpainting-1.5` and proceed as usual (see
the example below).
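A minimal sketch of those steps, assuming the usual repository layout
(the direct download URL and the `scripts/invoke.py` launch path are
assumptions; adjust them to your checkout, and accept the license on
the Hugging Face page first):

    # download the checkpoint into the expected location
    # (direct URL assumed from the repository page linked above)
    wget -O models/ldm/stable-diffusion-v1/sd-v1-5-inpainting.ckpt \
        https://huggingface.co/runwayml/stable-diffusion-inpainting/resolve/main/sd-v1-5-inpainting.ckpt

    # launch with the inpainting model selected
    python scripts/invoke.py --model inpainting-1.5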
Caveats:
1. The inpainting model takes about 800 MB more memory than the standard
1.5 model. This model will not work on 4 GB cards.
2. The inpainting model is temperamental. It wants you to describe the
entire scene, not just the masked area to replace. So if you want to
replace the parrot on a man's shoulder with a crow, the prompt "crow"
may fail; try "man with a crow on shoulder" instead (see the example
after this list). The symptom of a failed inpainting is that the masked
area is erased and replaced with background.
3. This has not been tested well. Please report bugs.
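For instance, a hypothetical interactive session applying the advice in
caveat 2 (the image and mask filenames here are made up, and the use of
the -I init-image and -M mask switches is an assumption about the CLI):

    invoke> man with a crow on shoulder -I ./photos/man-with-parrot.png -M ./photos/parrot-mask.png

Describing the whole scene this way, rather than prompting with just
"crow", is what keeps the model from erasing the masked area and
filling it with background.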