InvokeAI/invokeai/backend/embeddings
Lincoln Stein (dfcf38be91): BREAKING CHANGES: invocations now require model key, not base/type/name
- Implement new model loader and modify invocations and embeddings

- Finish implementing loaders for all models currently supported by
  InvokeAI.

- Move lora, textual_inversion, and model patching support into
  backend/embeddings.

- Restore support for model cache statistics collection (a little ugly,
  needs work).

- Fix up invocations that load and patch models.

- Move seamless and silencewarnings utils into a better location.
2024-02-15 17:56:01 +11:00
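
To illustrate the breaking change named in the commit title, here is a minimal sketch of the before/after shape of an invocation's model reference. The names used (OldModelField, NewModelField, load_model) are assumptions for illustration, not InvokeAI's actual invocation API:

    # Illustrative sketch only; the names here are assumptions, not InvokeAI's real API.
    from dataclasses import dataclass

    @dataclass
    class OldModelField:
        # Pre-change: an invocation addressed a model by base/type/name.
        base_model: str   # e.g. "sd-1"
        model_type: str   # e.g. "main"
        model_name: str   # e.g. "stable-diffusion-v1-5"

    @dataclass
    class NewModelField:
        # Post-change: an invocation addresses a model by a single opaque key
        # issued by the model record store; the loader resolves everything else.
        key: str

    def load_model(field: NewModelField):
        # Hypothetical loader entry point: the key alone identifies the model.
        raise NotImplementedError("sketch only")

Callers that previously passed three identifying strings now pass only the key, which is why the change is breaking for existing invocations and graphs.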
__init__.py
embedding_base.py
lora.py
model_patcher.py
textual_inversion.py
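
The files listed above hold the relocated lora, textual inversion, and model patching support. A rough sketch of the reversible patching pattern such modules typically implement follows; the name apply_lora_weights and its signature are invented for illustration and make no claim to match the real ModelPatcher interface:

    # Rough sketch of context-managed weight patching; names and signatures are assumptions.
    from contextlib import contextmanager

    @contextmanager
    def apply_lora_weights(model, lora_deltas, scale=1.0):
        # lora_deltas maps parameter names to weight deltas of matching shape.
        params = dict(model.named_parameters())
        originals = {}
        try:
            for name, delta in lora_deltas.items():
                param = params[name]
                originals[name] = param.detach().clone()
                param.data += scale * delta
            yield model
        finally:
            # Restore the unpatched weights even if the caller raises.
            for name, original in originals.items():
                params[name].data.copy_(original)

Keeping the patch reversible inside a context manager is what lets an invocation apply LoRA or textual inversion weights for a single generation without reloading the base model afterwards.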