InvokeAI/invokeai/backend
Lincoln Stein fe5d9ad171 improve importation and conversion of legacy checkpoint files
A long-standing issue with importing legacy checkpoints (both ckpt and
safetensors) is that the user has to identify the correct config file,
either by providing its path or by selecting which type of model the
checkpoint is (e.g. "v1 inpainting"). In addition, some users wish to
provide custom VAEs for use with the model. Currently this is done in
the WebUI by importing the model, editing it, and then typing in the
path to the VAE.

To improve the user experience, the model manager's
`heuristic_import()` method has been enhanced as follows:

1. When initially called, the caller can pass a config file path, in
which case it will be used.

2. If no config file is provided, the method looks for a .yaml file in the
same directory as the model that bears the same basename, e.g.
```
   my-new-model.safetensors
   my-new-model.yaml
```
   The yaml file is then used as the configuration file for
   importation and conversion.

3. If no such file is found, the method opens the checkpoint and probes it
   to determine whether it is V1, V1-inpaint, or V2. If it is a V1 format,
   the appropriate v1-inference.yaml config file is used. Unfortunately,
   the two V2 variants cannot be distinguished by introspection.

4. If the probe algorithm is unable to determine the model type, its
   last-ditch effort is to execute an optional callback function provided
   by the caller. This callback, named `config_file_callback`, receives the
   path to the legacy checkpoint and returns the path to the config file to
   use. The CLI uses this to put up a multiple-choice prompt to the user.
   The WebUI **could** use it to present a radio-button selection.

5. If the config file cannot be determined, then the import is abandoned.
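
Taken together, steps 1-5 describe a simple resolution order. The sketch
below is illustrative only: the helper names `resolve_config_file` and
`probe_model_type`, and the exact config-file paths, are assumptions rather
than the model manager's actual internals.

```python
from pathlib import Path
from typing import Callable, Optional

def probe_model_type(checkpoint: Path) -> Optional[str]:
    """Stand-in for the real probe; assume it returns 'v1', 'v1-inpaint',
    'v2', or None when the variant cannot be determined by introspection."""
    return None

def resolve_config_file(
    checkpoint: Path,
    config_file: Optional[Path] = None,
    config_file_callback: Optional[Callable[[Path], Optional[Path]]] = None,
) -> Optional[Path]:
    # 1. A config file passed by the caller is always used.
    if config_file is not None:
        return config_file

    # 2. Look for a .yaml sidecar with the same basename as the model.
    sidecar = checkpoint.with_suffix(".yaml")
    if sidecar.exists():
        return sidecar

    # 3. Probe the checkpoint; V1 variants can be identified, but the two
    #    V2 variants cannot be told apart by introspection.
    detected = probe_model_type(checkpoint)
    if detected == "v1":
        return Path("configs/stable-diffusion/v1-inference.yaml")  # assumed path
    if detected == "v1-inpaint":
        return Path("configs/stable-diffusion/v1-inpainting-inference.yaml")  # assumed path

    # 4. Last resort: ask the caller, e.g. via the CLI's multiple-choice prompt.
    if config_file_callback is not None:
        return config_file_callback(checkpoint)

    # 5. Nothing worked; the import is abandoned.
    return None
```

A caller such as the CLI can pass a `config_file_callback` that shows a
multiple-choice list and returns the selected path (or None to abandon the
import).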

The user can attach a custom VAE to the imported and converted model
by copying the desired VAE into the same directory as the file to be
imported, and giving it the same basename. E.g.:

```
    my-new-model.safetensors
    my-new-model.vae.pt
```

For this to work, the VAE filename must end with ".vae.pt", ".vae.ckpt", or
".vae.safetensors". The indicated VAE will be converted into diffusers
format and stored alongside the converted model files, so the original VAE
file can be deleted after conversion.
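
A minimal sketch of how such a sidecar VAE could be located, assuming a
hypothetical helper `find_sidecar_vae` (this is not the model manager's
actual code):

```python
from pathlib import Path
from typing import Optional

VAE_SUFFIXES = (".vae.pt", ".vae.ckpt", ".vae.safetensors")

def find_sidecar_vae(checkpoint: Path) -> Optional[Path]:
    """Return a custom VAE sharing the checkpoint's basename, if one exists."""
    stem = checkpoint.stem  # my-new-model.safetensors -> my-new-model
    for suffix in VAE_SUFFIXES:
        candidate = checkpoint.parent / (stem + suffix)
        if candidate.exists():
            return candidate
    return None
```

For example, `find_sidecar_vae(Path("my-new-model.safetensors"))` would
return `my-new-model.vae.pt` if that file sits next to the checkpoint.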

No facility is currently provided to swap a diffusers VAE at import
time, but this can be done after the fact using the WebUI's and CLI's
model-editing functions.
2023-03-27 11:27:45 -04:00

| Name | Last commit | Date |
| --- | --- | --- |
| config | during migration do not overwrite symlinks | 2023-03-05 08:40:12 -05:00 |
| generator | fix(backend): simple typing fixes | 2023-03-26 17:07:03 +11:00 |
| image_util | all vestiges of ldm.invoke removed | 2023-03-03 01:02:00 -05:00 |
| model_management | improve importation and conversion of legacy checkpoint files | 2023-03-27 11:27:45 -04:00 |
| prompting | backend..conditioning: remove code for legacy model | 2023-03-09 18:15:12 -08:00 |
| restoration | remove legacy ldm code | 2023-03-04 18:16:59 -08:00 |
| stable_diffusion | load embeddings after a ckpt legacy model is converted to diffusers | 2023-03-23 15:21:58 -04:00 |
| training | migrate to new HF diffusers cache location | 2023-03-05 08:20:24 -05:00 |
| util | feat(backend): add image_to_dataURL util | 2023-03-16 20:03:08 +11:00 |
| web | feat: use the predicted denoised image for previews | 2023-03-09 20:28:06 -08:00 |
| __init__.py | do not reexport PipelineIntermediateState | 2023-03-25 13:57:09 -04:00 |
| args.py | fix --png_compression command line argument | 2023-03-14 00:24:05 -04:00 |
| generate.py | add restoration services to nodes | 2023-03-11 17:00:00 -05:00 |
| globals.py | Unified spelling of Hugging Face | 2023-03-05 07:30:35 -06:00 |
| safety_checker.py | restore NSFW checker | 2023-03-11 16:16:44 -05:00 |