InvokeAI/invokeai/backend/util

Latest commit: dfcf38be91 by Lincoln Stein, 2024-02-15 17:56:01 +11:00

BREAKING CHANGES: invocations now require model key, not base/type/name

- Implement the new model loader and modify invocations and embeddings.

- Finish implementing loaders for all models currently supported by
  InvokeAI.

- Move lora, textual_inversion, and model patching support into
  backend/embeddings.

- Restore support for model cache statistics collection (a little ugly,
  needs work).

- Fix up invocations that load and patch models.

- Move the seamless and silence_warnings utils into a better location.
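The headline change above, invocations identifying a model by a single opaque key instead of a (base, type, name) triple, can be sketched roughly as follows. This is a minimal illustration only: `ModelRecord` and `ModelStore` are hypothetical stand-ins, not InvokeAI's actual classes.

```python
from dataclasses import dataclass
from typing import Dict

@dataclass
class ModelRecord:
    key: str   # opaque unique identifier, e.g. a UUID or hash
    base: str  # e.g. "sd-1"
    type: str  # e.g. "main"
    name: str  # e.g. "stable-diffusion-v1-5"

class ModelStore:
    """Toy registry contrasting key-based vs attribute-based lookup."""

    def __init__(self) -> None:
        self._by_key: Dict[str, ModelRecord] = {}

    def add(self, record: ModelRecord) -> None:
        self._by_key[record.key] = record

    # New style: an invocation carries just the key; lookup is exact.
    def get_by_key(self, key: str) -> ModelRecord:
        return self._by_key[key]

    # Old style: look up by (base, type, name); requires a scan and is
    # ambiguous if two installed models share the same triple.
    def get_by_attrs(self, base: str, type_: str, name: str) -> ModelRecord:
        for rec in self._by_key.values():
            if (rec.base, rec.type, rec.name) == (base, type_, name):
                return rec
        raise KeyError((base, type_, name))

store = ModelStore()
store.add(ModelRecord(key="abc123", base="sd-1", type="main",
                      name="stable-diffusion-v1-5"))
assert store.get_by_key("abc123").name == "stable-diffusion-v1-5"
```

The key-based form survives renames and duplicate names, which is one common motivation for this kind of breaking change; the triple-based form is kept here only to show what it replaces.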
File                  Last commit message                                                       Date
__init__.py           loaders for main, controlnet, ip-adapter, clipvision and t2i              2024-02-15 17:51:07 +11:00
attention.py          Apply black, isort, flake8                                                2023-09-12 13:01:58 -04:00
db_maintenance.py     add techjedi's database maintenance script                                2023-09-20 17:46:49 -04:00
devices.py            Multiple refinements on loaders:                                          2024-02-15 17:51:07 +11:00
hotfixes.py           Fix ruff?                                                                 2024-02-01 20:40:28 -05:00
log.py                Resolving merge conflicts for flake8                                      2023-08-18 15:52:04 +10:00
logging.py            clean up get_logger() call                                                2023-12-04 22:41:59 -05:00
mps_fixes.py          add note about discriminated union and Body() issue; blackified           2023-11-12 16:50:05 -05:00
silence_warnings.py   BREAKING CHANGES: invocations now require model key, not base/type/name   2024-02-15 17:56:01 +11:00
test_utils.py         chore(backend): rename ModelInfo -> LoadedModelInfo                       2024-02-15 17:30:03 +11:00
util.py               loaders for main, controlnet, ip-adapter, clipvision and t2i              2024-02-15 17:51:07 +11:00