InvokeAI/tests/backend
Lincoln Stein 49df4fa120 BREAKING CHANGES: invocations now require model key, not base/type/name
- Implement new model loader and modify invocations and embeddings

- Finish implementing loaders for all models currently supported by
  InvokeAI.

- Move lora, textual_inversion, and model patching support into
  backend/embeddings.

- Restore support for model cache statistics collection (a little ugly,
  needs work).

- Fix up invocations that load and patch models.

- Move the seamless and silencewarnings utils into a better location.
2024-02-29 13:16:37 -05:00
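
The commit above describes invocations switching from locating a model by a base/type/name triple to passing a single model key. The sketch below is illustrative only: the class names, field names, and loader function are hypothetical and do not reflect InvokeAI's actual invocation or model-manager API; it merely shows the shape of the change.

```python
# Hypothetical sketch -- names are illustrative, not InvokeAI's real API.
from dataclasses import dataclass


@dataclass
class OldModelReference:
    """Old-style reference: a model was located by base, type, and name."""
    base: str        # e.g. "sd-1"
    model_type: str  # e.g. "main"
    name: str        # e.g. "stable-diffusion-v1-5"


@dataclass
class NewModelReference:
    """New-style reference: a single opaque key identifies the model."""
    key: str         # e.g. an ID assigned when the model was registered


def load_model(ref: NewModelReference) -> None:
    # A real loader would resolve the key through the model manager;
    # here we only illustrate the call shape an invocation would use.
    print(f"Loading model for key: {ref.key}")


load_model(NewModelReference(key="example-model-key"))
```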
ip_adapter Add support for IPAdapterFull models. The changes are based on this upstream PR: https://github.com/tencent-ailab/IP-Adapter/pull/139. 2023-11-29 15:07:21 -08:00
model_management Skip torch.nn.Embedding.reset_parameters(...) when loading a text encoder model. 2023-11-02 19:41:33 -07:00
model_manager_2 BREAKING CHANGES: invocations now require model key, not base/type/name 2024-02-29 13:16:37 -05:00
tiles In CalculateImageTilesEvenSplitInvocation, overlap_fraction becomes just overlap, given in pixels rather than as a fraction of the tile size (see the sketch after this listing). 2023-12-17 15:10:50 +00:00
util Fix import order 2023-12-02 11:56:41 -05:00
__init__.py POC of a test that depends on models. 2023-10-05 15:35:58 -04:00
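
The tiles entry above notes that overlap moved from a fraction of the tile size to an absolute pixel count. A minimal sketch of the equivalent conversion follows; the function and parameter names are illustrative, not the invocation's real fields.

```python
# Illustrative only: relationship between the old fractional overlap and
# the new pixel-based overlap described in the tiles entry above.
def overlap_fraction_to_pixels(overlap_fraction: float, tile_size: int) -> int:
    """Convert an overlap given as a fraction of the tile size into pixels."""
    return int(overlap_fraction * tile_size)


# Example: a 0.25 overlap on 512-pixel tiles corresponds to 128 pixels of overlap.
assert overlap_fraction_to_pixels(0.25, 512) == 128
```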