InvokeAI/invokeai/backend/model_manager
psychedelicious 7726d312e1 feat(mm): default hashing algo to blake3_single
For SSDs, `blake3` is about 10x faster than `blake3_single` - 30 files/second vs 3 files/second.

For spinning HDDs, `blake3` is about 100x slower than `blake3_single` - 300 seconds/file vs 3 seconds/file.

For external drives, `blake3` is always slower, but the gap varies widely; for external spinning drives it is likely even worse than for internal ones.

The least offensive algorithm is `blake3_single`, and it's still _much_ faster than any other algorithm.
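The tradeoff above comes down to how the file is read and hashed. As a minimal sketch of the single-threaded streaming pattern that makes a `blake3_single`-style default predictable on any drive: blake3 is not in the Python standard library, so hashlib's blake2b stands in here, and `hash_file` is an illustrative helper, not InvokeAI's actual API.

```python
import hashlib

def hash_file(path: str, algo: str = "blake2b", chunk_size: int = 2**20) -> str:
    """Hash a file by streaming it in fixed-size chunks on a single thread.

    Illustrative only: the real model manager would use the `blake3` package
    for its `blake3`/`blake3_single` algorithms; `blake2b` is a stdlib stand-in.
    """
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        # Sequential 1 MiB reads keep the access pattern friendly to
        # spinning disks, unlike multithreaded hashing that seeks in parallel.
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()
```

A multithreaded `blake3` hasher wins on SSDs because parallel reads are cheap there, but the same parallelism forces constant seeking on an HDD, which is where the 100x slowdown comes from.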
2024-03-22 08:26:36 +11:00
| Name | Last commit | Date |
| --- | --- | --- |
| load | fix(mm): regression from change to legacy conf dir change | 2024-03-20 15:05:25 +11:00 |
| metadata | fix(mm): config.json to indicates diffusers model | 2024-03-13 21:02:29 +11:00 |
| util | install model if diffusers or single file, cleaned up backend logic to not mess with existing model install | 2024-03-13 21:02:29 +11:00 |
| __init__.py | chore: ruff | 2024-03-01 10:42:33 +11:00 |
| config.py | record model_variant in t2i and clip_vision configs (#5989) | 2024-03-19 20:14:12 +00:00 |
| convert_ckpt_to_diffusers.py | Remove core safetensors->diffusers conversion models | 2024-03-17 19:13:18 -04:00 |
| libc_util.py | Tidy names and locations of modules | 2024-03-01 10:42:33 +11:00 |
| merge.py | fix(config): use new get_config across the app, use correct settings | 2024-03-19 09:24:28 +11:00 |
| probe.py | feat(mm): default hashing algo to blake3_single | 2024-03-22 08:26:36 +11:00 |
| search.py | docs(mm): format docstrings for ModelSearch | 2024-03-10 12:09:47 +11:00 |
| starter_models.py | feat(mm): add starter models route | 2024-03-20 15:05:25 +11:00 |