InvokeAI/invokeai/backend/model_manager/load
Latest commit ba1f8878dd by Lincoln Stein: Fix issues identified during PR review by RyanjDick and brandonrising
- ModelMetadataStoreService is now injected into ModelRecordStoreService
  (these two services are really joined at the hip, and should someday be merged)
- ModelRecordStoreService is now injected into ModelManagerService
- Reduced timeout value for the various installer and download wait*() methods
- Introduced a Mock modelmanager for testing
- Replaced a bare print() statement with _logger in the install helper backend.
- Removed unused code from model loader init file
- Made `locker` a private variable in the `LoadedModel` object.
- Fixed up model merge frontend (will be deprecated anyway!)
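The first two bullets describe constructor-based dependency injection, which also enables the mock model manager mentioned for testing. A minimal sketch of the pattern follows; the class names echo the services above, but all method signatures and internals here are illustrative assumptions, not InvokeAI's actual API.

```python
# Hypothetical sketch of constructor injection: the metadata store is passed
# into the record store rather than constructed inside it, so tests can
# substitute a mock. Names mirror the services above; bodies are invented.

class ModelMetadataStoreService:
    def get_metadata(self, key: str) -> dict:
        return {"key": key}


class ModelRecordStoreService:
    def __init__(self, metadata_store: ModelMetadataStoreService):
        # Injected dependency, kept private to the service.
        self._metadata_store = metadata_store

    def get_model(self, key: str) -> dict:
        return self._metadata_store.get_metadata(key)


class MockMetadataStore(ModelMetadataStoreService):
    # Test double: same interface, canned behavior.
    def get_metadata(self, key: str) -> dict:
        return {"key": key, "mock": True}


# Production wiring injects the real store; tests inject the mock,
# without changing ModelRecordStoreService itself.
store = ModelRecordStoreService(ModelMetadataStoreService())
test_store = ModelRecordStoreService(MockMetadataStore())
```

The same injection chain extends one level up (the record store injected into the model manager), keeping each service replaceable in isolation.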
2024-02-19 08:16:56 +11:00
convert_cache — add a JIT download_and_cache() call to the model installer (2024-02-15 18:00:08 +11:00)
model_cache — References to context.services.model_manager.store.get_model can only accept keys, remove invalid assertion (2024-02-15 18:00:16 +11:00)
model_loaders — make model manager v2 ready for PR review (2024-02-15 18:00:08 +11:00)
__init__.py — Fix issues identified during PR review by RyanjDick and brandonrising (2024-02-19 08:16:56 +11:00)
load_base.py — Fix issues identified during PR review by RyanjDick and brandonrising (2024-02-19 08:16:56 +11:00)
load_default.py — Fix issues identified during PR review by RyanjDick and brandonrising (2024-02-19 08:16:56 +11:00)
memory_snapshot.py — added textual inversion and lora loaders (2024-02-15 17:51:07 +11:00)
model_util.py — make model manager v2 ready for PR review (2024-02-15 18:00:08 +11:00)
optimizations.py — add ram cache module and support files (2024-02-15 17:50:31 +11:00)