InvokeAI/invokeai/backend/model_manager/load
Commit 49df4fa120 by Lincoln Stein: BREAKING CHANGES: invocations now require model key, not base/type/name
- Implement a new model loader and modify invocations and embeddings
  accordingly (the key-based lookup is sketched after this commit entry).

- Finish implementing loaders for all models currently supported by
  InvokeAI.

- Move lora, textual_inversion, and model patching support into
  backend/embeddings.

- Restore support for model cache statistics collection (a little ugly,
  needs work).

- Fix up invocations that load and patch models.

- Move the seamless and silencewarnings utils into a better location.
Committed 2024-02-29 13:16:37 -05:00
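The headline change replaces the old base/type/name triple with a single opaque model key assigned by the model record store. A minimal sketch of what that shift looks like at a call site; `ModelRecord`, `get_model`, and the key value here are hypothetical stand-ins, not InvokeAI's actual classes or API:

```python
from dataclasses import dataclass

# Hypothetical sketch of the key-based lookup; names are illustrative,
# not InvokeAI's real model-manager API.

@dataclass
class ModelRecord:
    key: str   # opaque identifier assigned by the model record store
    base: str  # e.g. "sd-1", "sdxl"
    type: str  # e.g. "main", "lora", "controlnet"
    name: str  # human-readable; no longer used for lookup

_RECORDS = {
    "8ac3b7d2": ModelRecord("8ac3b7d2", "sd-1", "main", "stable-diffusion-v1-5"),
}

def get_model(key: str) -> ModelRecord:
    """New style: invocations store and pass a single opaque key."""
    return _RECORDS[key]

# Old style (removed):
#   get_model(base="sd-1", type="main", name="stable-diffusion-v1-5")
print(get_model("8ac3b7d2").name)  # -> stable-diffusion-v1-5
```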
Name                 Last commit                                                               Date
convert_cache/       loaders for main, controlnet, ip-adapter, clipvision and t2i             2024-02-29 13:16:36 -05:00
model_cache/         BREAKING CHANGES: invocations now require model key, not base/type/name  2024-02-29 13:16:37 -05:00
model_loaders/       BREAKING CHANGES: invocations now require model key, not base/type/name  2024-02-29 13:16:37 -05:00
__init__.py          Multiple refinements on loaders:                                          2024-02-29 13:16:37 -05:00
load_base.py         BREAKING CHANGES: invocations now require model key, not base/type/name  2024-02-29 13:16:37 -05:00
load_default.py      BREAKING CHANGES: invocations now require model key, not base/type/name  2024-02-29 13:16:37 -05:00
memory_snapshot.py   added textual inversion and lora loaders                                  2024-02-29 13:16:36 -05:00
model_util.py        added textual inversion and lora loaders                                  2024-02-29 13:16:36 -05:00
optimizations.py     add ram cache module and support files                                    2024-02-29 13:16:36 -05:00
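The layout above suggests the shape of the loading stack: load_base.py defines an abstract loader interface, load_default.py a generic implementation, and model_loaders/ holds per-type specializations registered against it. A sketch of that registry pattern under those assumptions; every name below is illustrative rather than InvokeAI's real code:

```python
from abc import ABC, abstractmethod
from pathlib import Path
from typing import Callable, Dict, Type

# Hypothetical sketch of a per-type loader registry; all names invented.

class ModelLoaderBase(ABC):
    """Interface a load_base.py-style module might define."""

    @abstractmethod
    def load(self, path: Path) -> object: ...

class DefaultLoader(ModelLoaderBase):
    """Generic fallback, as load_default.py might provide."""

    def load(self, path: Path) -> object:
        return f"generic checkpoint loaded from {path}"

_REGISTRY: Dict[str, Type[ModelLoaderBase]] = {}

def register(model_type: str) -> Callable[[Type[ModelLoaderBase]], Type[ModelLoaderBase]]:
    """Class decorator mapping a model-type string to its loader class."""
    def decorator(cls: Type[ModelLoaderBase]) -> Type[ModelLoaderBase]:
        _REGISTRY[model_type] = cls
        return cls
    return decorator

@register("lora")
class LoRALoader(ModelLoaderBase):
    """A model_loaders/-style specialization for one model type."""

    def load(self, path: Path) -> object:
        return f"lora weights loaded from {path}"

def loader_for(model_type: str) -> ModelLoaderBase:
    # Fall back to the generic loader when no specialization is registered.
    return _REGISTRY.get(model_type, DefaultLoader)()

print(loader_for("lora").load(Path("models/foo.safetensors")))   # specialized
print(loader_for("clip_vision").load(Path("models/clip.bin")))   # fallback
```

The decorator keeps registration next to each loader's definition, so adding support for a new model type means adding one module under model_loaders/ without touching the dispatch code.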