InvokeAI/invokeai/backend/model_manager
Lincoln Stein 79d028ecbd BREAKING CHANGES: invocations now require model key, not base/type/name
- Implement new model loader and modify invocations and embeddings

- Finish implementing loaders for all models currently supported by
  InvokeAI.

- Move lora, textual_inversion, and model patching support into
  backend/embeddings.

- Restore support for model cache statistics collection (a little ugly,
  needs work).

- Fix up invocations that load and patch models.

- Move the seamless and silencewarnings utils into a better location
2024-02-08 23:26:41 -05:00
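The headline breaking change above (invocations identify models by a single key rather than a base/type/name triple) can be sketched roughly as follows. The class and function names here are hypothetical stand-ins for illustration only, not the actual InvokeAI API:

```python
from dataclasses import dataclass

# Before: an invocation located a model by a (base, type, name) triple.
# Hypothetical illustration, not the real InvokeAI types.
@dataclass(frozen=True)
class OldModelRef:
    base: str  # e.g. "sd-1"
    type: str  # e.g. "main"
    name: str  # e.g. "stable-diffusion-v1-5"

# After: an invocation passes a single opaque key assigned by the
# model manager, which owns the key -> model record mapping.
@dataclass(frozen=True)
class NewModelRef:
    key: str  # opaque unique key, e.g. a record ID

def load_model(key: str) -> str:
    # Stand-in for a model-manager lookup keyed by the opaque key.
    registry = {"abc123": "stable-diffusion-v1-5"}
    return registry[key]
```

The design consequence is that callers no longer need to know (or keep in sync) a model's base architecture, type, and display name; renaming a model does not break invocations that hold its key.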
load/                         BREAKING CHANGES: invocations now require model key, not base/type/name  2024-02-08 23:26:41 -05:00
metadata/                     Multiple refinements on loaders:                                          2024-02-05 21:55:11 -05:00
util/                         Multiple refinements on loaders:                                          2024-02-05 21:55:11 -05:00
__init__.py                   loaders for main, controlnet, ip-adapter, clipvision and t2i              2024-02-04 17:23:10 -05:00
config.py                     BREAKING CHANGES: invocations now require model key, not base/type/name  2024-02-08 23:26:41 -05:00
convert_ckpt_to_diffusers.py  loaders for main, controlnet, ip-adapter, clipvision and t2i              2024-02-04 17:23:10 -05:00
hash.py                       chore: ruff lint                                                          2023-11-14 07:57:07 +11:00
merge.py                      Port the command-line tools to use model_manager2 (#5546)                 2024-02-02 17:18:47 +00:00
probe.py                      Multiple refinements on loaders:                                          2024-02-05 21:55:11 -05:00
search.py                     Update invokeai/backend/model_manager/search.py                           2023-12-10 12:24:50 -05:00