InvokeAI/invokeai/backend/model_manager/load
Lincoln Stein (71a1740740): Remove core safetensors->diffusers conversion models
- No longer install core conversion models. Use the HuggingFace cache to load
  them if and when needed.

- Call directly into the diffusers library to perform conversions, adding only shallow
  wrappers to massage arguments, etc.

- At root configuration time, do not create all the possible model subdirectories,
  but let them be created and populated at model install time.

- Remove checks for missing core conversion files, since they are no
  longer installed.
2024-03-17 19:13:18 -04:00
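The shallow-wrapper approach described in the commit message can be sketched roughly as follows. This is a minimal illustration, not the actual InvokeAI code: the function name and signature are hypothetical, and it assumes diffusers is installed and that `StableDiffusionPipeline.from_single_file` is the conversion entry point used.

```python
from pathlib import Path


def convert_checkpoint_to_diffusers(checkpoint_path: Path, output_dir: Path) -> Path:
    """Hypothetical shallow wrapper: massage arguments, then call diffusers directly.

    No bundled core conversion models are needed; diffusers resolves any
    auxiliary weights through the HuggingFace cache on first use.
    """
    # Deferred import so the heavy dependency is only touched when a conversion runs.
    from diffusers import StableDiffusionPipeline

    # Create the target model subdirectory lazily, at install/conversion time,
    # rather than pre-creating every possible subdirectory at configuration time.
    output_dir.mkdir(parents=True, exist_ok=True)

    pipe = StableDiffusionPipeline.from_single_file(str(checkpoint_path))
    pipe.save_pretrained(str(output_dir))
    return output_dir
```

Deferring the import and the directory creation keeps configuration fast and means nothing conversion-related exists on disk until a model is actually installed.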
convert_cache             chore: ruff                                            2024-03-01 10:42:33 +11:00
model_cache               Do not override log_memory_usage when debug logs       2024-03-12 09:48:50 +11:00
                          are enabled. The speed cost of log_memory_usage=True
                          is large. It is common to want debug logs without
                          enabling log_memory_usage.
model_loaders             Remove core safetensors->diffusers conversion models   2024-03-17 19:13:18 -04:00
__init__.py               chore: ruff                                            2024-03-01 10:42:33 +11:00
load_base.py              final tidying before marking PR as ready for review    2024-03-01 10:42:33 +11:00
load_default.py           fix(mm): misc typing fixes for model loaders           2024-03-05 23:50:19 +11:00
memory_snapshot.py        final tidying before marking PR as ready for review    2024-03-01 10:42:33 +11:00
model_loader_registry.py  Experiment with using absolute paths within model      2024-03-08 15:36:14 -05:00
                          management
model_util.py             make model manager v2 ready for PR review              2024-03-01 10:42:33 +11:00
optimizations.py          final tidying before marking PR as ready for review    2024-03-01 10:42:33 +11:00