BREAKING CHANGES: invocations now require model key, not base/type/name

- Implement new model loader and modify invocations and embeddings

- Finish implementing loaders for all models currently supported by
  InvokeAI.

- Move lora, textual_inversion, and model patching support into
  backend/embeddings.

- Restore support for model cache statistics collection (a little ugly,
  needs work).

- Fix up invocations that load and patch models.

- Move seamless and silencewarnings utils into a better location.
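
To make the breaking change above concrete, here is a minimal, self-contained sketch of the shift from locating a model by a base/type/name triple to locating it by a single record key. `ModelManager`, `LoadedModel`, `get_model`, and `load_model_by_key` are illustrative stand-ins, not the actual InvokeAI classes or signatures.

```python
"""Illustrative sketch of the breaking change. All names are assumptions,
not the exact InvokeAI API introduced by this commit."""

from dataclasses import dataclass


@dataclass
class LoadedModel:
    key: str    # opaque identifier assigned by the model record store
    base: str   # e.g. "sdxl" -- now looked up from the record, not passed in
    type: str   # e.g. "main"
    name: str


class ModelManager:
    """Stand-in for the model-manager service an invocation receives."""

    def __init__(self, records: dict[str, LoadedModel]):
        self._records = records

    # Old style (removed): callers supplied base/type/name to locate a model.
    def get_model(self, model_name: str, base_model: str, model_type: str) -> LoadedModel:
        for model in self._records.values():
            if (model.name, model.base, model.type) == (model_name, base_model, model_type):
                return model
        raise KeyError("model not found")

    # New style: callers supply the single key; everything else comes from the record.
    def load_model_by_key(self, key: str) -> LoadedModel:
        return self._records[key]


records = {"3f6a9d2c": LoadedModel("3f6a9d2c", "sdxl", "main", "stable-diffusion-xl-base-1-0")}
mm = ModelManager(records)

# Before this commit an invocation would do something like:
old = mm.get_model(model_name="stable-diffusion-xl-base-1-0", base_model="sdxl", model_type="main")

# After this commit an invocation passes only the model key:
new = mm.load_model_by_key("3f6a9d2c")
assert old is new
```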
Author: Lincoln Stein
Date: 2024-02-05 22:56:32 -05:00
Committed by: psychedelicious
Parent: fbded1c0f2
Commit: dfcf38be91
31 changed files with 727 additions and 496 deletions


@@ -192,6 +192,7 @@ def sdxl_base_files() -> List[Path]:
 "text_encoder/model.onnx",
 "text_encoder_2/config.json",
 "text_encoder_2/model.onnx",
+"text_encoder_2/model.onnx_data",
 "tokenizer/merges.txt",
 "tokenizer/special_tokens_map.json",
 "tokenizer/tokenizer_config.json",
@@ -202,6 +203,7 @@ def sdxl_base_files() -> List[Path]:
 "tokenizer_2/vocab.json",
 "unet/config.json",
 "unet/model.onnx",
+"unet/model.onnx_data",
 "vae_decoder/config.json",
 "vae_decoder/model.onnx",
 "vae_encoder/config.json",