InvokeAI/configs/models.yaml


# This file describes the alternative machine learning models
# available to the dream script.
#
# To add a new model, follow the examples below. Each
# model requires a model config file, a weights file,
# and the width and height of the images it
# was trained on.
stable-diffusion-1.4:
  config: configs/stable-diffusion/v1-inference.yaml
  weights: models/ldm/stable-diffusion-v1/model.ckpt
  vae: models/ldm/stable-diffusion-v1/vae-ft-mse-840000-ema-pruned.ckpt
  width: 512
  height: 512
  default: true
stable-diffusion-1.5:
  config: configs/stable-diffusion/v1-inference.yaml
  weights: models/ldm/stable-diffusion-v1/v1-5-pruned-emaonly.ckpt
  description: Stable Diffusion inference model version 1.5
  width: 512
  height: 512
  vae: models/ldm/stable-diffusion-v1/vae-ft-mse-840000-ema-pruned.ckpt
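
# Hypothetical template for adding another model entry. The model name and
# file paths below are placeholders, not files shipped with InvokeAI: copy
# the stanza, remove the leading '#' characters, and point `weights` (and,
# optionally, `vae`) at your own checkpoint files. Any VAE used here should
# be one written for the original CompVis Stable Diffusion codebase.
# my-custom-model:
#   config: configs/stable-diffusion/v1-inference.yaml
#   weights: models/ldm/stable-diffusion-v1/my-custom-model.ckpt
#   description: A custom fine-tuned Stable Diffusion checkpoint
#   vae: models/ldm/stable-diffusion-v1/vae-ft-mse-840000-ema-pruned.ckpt
#   width: 512
#   height: 512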