* Removed duplicate fix_func for MPS
* add support for loading VAE autoencoders
To add a VAE autoencoder to an existing model:
1. Download the appropriate autoencoder and put it into
   models/ldm/stable-diffusion-v1
   Note that you MUST use a VAE that was written for the
   original CompVis Stable Diffusion codebase. For v1.4,
   that would be the file named vae-ft-mse-840000-ema-pruned.ckpt,
   which you can download from https://huggingface.co/stabilityai/sd-vae-ft-mse-original
2. Edit configs/models.yaml to contain the following stanza, modifying `weights`
   and `vae` as required to match the weights and VAE model file names. There is
   no requirement to rename the VAE file.
~~~
stable-diffusion-1.4:
  weights: models/ldm/stable-diffusion-v1/sd-v1-4.ckpt
  description: Stable Diffusion v1.4
  config: configs/stable-diffusion/v1-inference.yaml
  vae: models/ldm/stable-diffusion-v1/vae-ft-mse-840000-ema-pruned.ckpt
  width: 512
  height: 512
~~~
3. Alternatively, from within the `invoke.py` CLI, you may use the command
   `!editmodel stable-diffusion-1.4` to bring up a simple editor that will
   allow you to add the path to the VAE.
4. If you are just installing InvokeAI for the first time, you can also
   use `!import_model models/ldm/stable-diffusion-v1/sd-v1-4.ckpt` instead
   to create the configuration from scratch.
5. That's it!
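Under the hood, selecting a `vae` in models.yaml amounts to overwriting the
model's first-stage autoencoder weights with the downloaded checkpoint. A
minimal sketch of that step, assuming a CompVis-style model object with a
`first_stage_model` attribute (the `apply_vae_weights` name is illustrative,
not InvokeAI's actual API):
~~~
import torch

def apply_vae_weights(model, vae_ckpt_path):
    # Load the standalone VAE checkpoint; files like
    # vae-ft-mse-840000-ema-pruned.ckpt store weights under 'state_dict'.
    vae_sd = torch.load(vae_ckpt_path, map_location="cpu")
    vae_sd = vae_sd.get("state_dict", vae_sd)
    # Keep only tensors the autoencoder actually has, ignoring any
    # loss/discriminator weights that ship in some VAE checkpoints.
    wanted = set(model.first_stage_model.state_dict().keys())
    filtered = {k: v for k, v in vae_sd.items() if k in wanted}
    # Overwrite the first-stage autoencoder in place.
    model.first_stage_model.load_state_dict(filtered, strict=False)
    return model
~~~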
* ported code refactor changes from PR #1221
- pass a PIL.Image to img2img and inpaint rather than a tensor
- To support clipseg, inpaint needs to accept an "L" or "1" format
  mask. Made the appropriate change.
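A minimal sketch of the mask handling this implies, assuming PIL inputs
(the `normalize_mask` helper is illustrative):
~~~
from PIL import Image

def normalize_mask(mask: Image.Image) -> Image.Image:
    # clipseg produces single-channel "L" (or binary "1") masks, while the
    # old code assumed "RGB"/"RGBA"; coerce everything to "L" up front.
    if mask.mode == "RGBA":
        return mask.getchannel("A")   # alpha channel carries the mask
    if mask.mode in ("RGB", "1"):
        return mask.convert("L")
    return mask                       # already "L"
~~~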
* minor fixes to inpaint code
1. If tensors are passed to inpaint as init_image and/or init_mask, then
   the post-generation image fixup code will be skipped.
2. Post-generation image fixup will work with a black-and-white "L" mask,
   an "RGB" mask, or an "RGBA" mask.
Co-authored-by: wfng92 <43742196+wfng92@users.noreply.github.com>
The k_samplers come with a "karras" noise schedule which performs
very well at low step counts but becomes noisy at higher ones.
This commit introduces a threshold (currently 30 steps) at which the
k_samplers switch from the karras schedule back to the model's
original noise schedule.
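A sketch of how that switch can be wired, assuming a k-diffusion
`CompVisDenoiser` wrapper around the model (the function name and the
threshold's placement are illustrative, not the exact commit):
~~~
import k_diffusion as K

KARRAS_STEP_THRESHOLD = 30  # step count at which karras starts degrading

def choose_sigmas(model_wrap, steps, device="cuda"):
    # model_wrap is assumed to be K.external.CompVisDenoiser(model);
    # its .sigmas buffer holds the model's native discrete schedule.
    if steps < KARRAS_STEP_THRESHOLD:
        # karras schedule: excellent sample quality at low step counts
        return K.sampling.get_sigmas_karras(
            n=steps,
            sigma_min=model_wrap.sigmas[0].item(),
            sigma_max=model_wrap.sigmas[-1].item(),
            device=device,
        )
    # higher step counts: fall back to the model's original schedule
    return model_wrap.get_sigmas(steps)
~~~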
Ironically, the black-and-white mask file generated by the
`invoke> !mask` command could not be passed as the mask to
`img2img`. This is now fixed and the documentation updated.
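For example, a mask generated with `!mask` can now be fed straight back in
(paths are illustrative, and the exact `-tm`, `-I`, and `-M` flag spellings
are assumptions about the CLI syntax of the time):
~~~
invoke> !mask ./portrait.png -tm hair
invoke> "portrait with silver hair" -I ./portrait.png -M ./portrait-mask.png
~~~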
- remove unsupported TestTubeLogger; use CSVLogger instead
- fix logic for parsing the --gpus option so that it won't crash if the
  trailing comma is absent
- change the trainer accelerator from the unsupported 'ddp' to 'auto'
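Sketched against the pytorch-lightning API, the combined result of these
fixes might look like the following (the `parse_gpus` helper and the logger
arguments are illustrative):
~~~
from pytorch_lightning import Trainer
from pytorch_lightning.loggers import CSVLogger

def parse_gpus(gpus: str):
    # Split on commas and drop empty fields, so "0,1" and "0,1," (with a
    # trailing comma) both parse to [0, 1] instead of crashing.
    return [int(g) for g in gpus.split(",") if g.strip()]

logger = CSVLogger(save_dir="logs", name="textual_inversion")  # replaces TestTubeLogger
trainer = Trainer(
    accelerator="auto",               # 'ddp' is a strategy, not an accelerator
    devices=parse_gpus("0,") or "auto",
    logger=logger,
)
~~~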