InvokeAI/invokeai/backend/model_manager/load/model_loaders
Lincoln Stein dfcf38be91 BREAKING CHANGES: invocations now require model key, not base/type/name
- Implement new model loader and modify invocations and embeddings

- Finish implementing loaders for all models currently supported by
  InvokeAI.

- Move lora, textual_inversion, and model patching support into
  backend/embeddings.

- Restore support for model cache statistics collection (a little ugly,
  needs work).

- Fix up invocations that load and patch models.

- Move the seamless and silencewarnings utils into a better location.
2024-02-15 17:56:01 +11:00
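The breaking change above replaces three-field model lookup (base/type/name) with a single opaque key. A minimal sketch of the idea, with hypothetical names that are illustrative only and not InvokeAI's actual API:

```python
from dataclasses import dataclass
from typing import Dict

# Hypothetical registry sketch: before the change, an invocation named a
# model by (base, type, name); after it, the invocation carries one key.

@dataclass(frozen=True)
class ModelRecord:
    key: str    # opaque unique identifier, e.g. a hash or UUID
    base: str   # e.g. "sd-1", "sdxl"
    type: str   # e.g. "main", "vae", "lora"
    name: str   # human-readable model name

class ModelRegistry:
    def __init__(self) -> None:
        self._by_key: Dict[str, ModelRecord] = {}

    def register(self, record: ModelRecord) -> None:
        self._by_key[record.key] = record

    # New style: a single key resolves the model directly.
    def get(self, key: str) -> ModelRecord:
        return self._by_key[key]

    # Old style: scan for (base, type, name) -- ambiguous if two installed
    # models share a name, which is one motivation for an opaque key.
    def find(self, base: str, type: str, name: str) -> ModelRecord:
        for rec in self._by_key.values():
            if (rec.base, rec.type, rec.name) == (base, type, name):
                return rec
        raise KeyError((base, type, name))
```

With a key, lookup is a direct dictionary access instead of a scan, and the identifier stays stable even if a model is renamed:

```python
registry = ModelRegistry()
rec = ModelRecord(key="abc123", base="sd-1", type="vae", name="sd-vae-ft-mse")
registry.register(rec)
assert registry.get("abc123") is rec
```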
__init__.py            model loading and conversion implemented for vaes                          2024-02-15 17:50:51 +11:00
controlnet.py          added textual inversion and lora loaders                                   2024-02-15 17:51:07 +11:00
generic_diffusers.py   added textual inversion and lora loaders                                   2024-02-15 17:51:07 +11:00
ip_adapter.py          added textual inversion and lora loaders                                   2024-02-15 17:51:07 +11:00
lora.py                added textual inversion and lora loaders                                   2024-02-15 17:51:07 +11:00
onnx.py                Multiple refinements on loaders:                                           2024-02-15 17:51:07 +11:00
stable_diffusion.py    loaders for main, controlnet, ip-adapter, clipvision and t2i               2024-02-15 17:51:07 +11:00
textual_inversion.py   BREAKING CHANGES: invocations now require model key, not base/type/name    2024-02-15 17:56:01 +11:00
vae.py                 added textual inversion and lora loaders                                   2024-02-15 17:51:07 +11:00