InvokeAI/invokeai/backend/model_manager
psychedelicious eb6e6548ed feat(mm): faster hashing for spinning disk HDDs
BLAKE3 has poor performance on spinning disks when parallelized. See https://github.com/BLAKE3-team/BLAKE3/issues/31

- Replace `skip_model_hash` setting with `hashing_algorithm`. Any algorithm we support is accepted.
- Add `random` algorithm: hashes a UUID with BLAKE3 to create a random "hash". Equivalent to the previous skip functionality.
- Add `blake3_single` algorithm: hashes on a single thread using BLAKE3, which fixes the aforementioned performance issue.
- Update model probe to accept the algorithm to hash with as an optional arg, defaulting to `blake3`.
- Update all calls of the probe to use the app's configured hashing algorithm.
- Update an external script that probes models.
- Update tests.
- Move ModelHash into its own module to avoid circular import issues.
2024-03-14 15:54:42 +11:00
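The two new algorithms described above can be sketched as follows. This is an illustrative sketch only, not InvokeAI's actual ModelHash API: it uses `hashlib.blake2b` from the standard library as a stand-in for BLAKE3 (which ships as the third-party `blake3` package), and the `hash_model` function name and chunk size are assumptions.

```python
import hashlib
import uuid
from pathlib import Path


def hash_model(path: Path, algorithm: str = "blake3_single") -> str:
    """Hypothetical sketch of the hashing strategies from the commit above.

    Uses blake2b as a stdlib stand-in for BLAKE3.
    """
    if algorithm == "random":
        # Equivalent to the old `skip_model_hash` behaviour: hash a UUID
        # instead of the file, yielding a unique but meaningless "hash".
        return hashlib.blake2b(uuid.uuid4().bytes).hexdigest()
    if algorithm == "blake3_single":
        # Single-threaded sequential read. On spinning disks this avoids
        # the seek thrashing that parallel chunked hashing causes
        # (see BLAKE3 issue #31 linked above).
        h = hashlib.blake2b()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(2**20), b""):
                h.update(chunk)
        return h.hexdigest()
    raise ValueError(f"unsupported algorithm: {algorithm}")
```

Note the design trade-off: `random` keeps installs fast when integrity checking is not needed, while `blake3_single` trades multi-core throughput for sequential disk access.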
load Do not override log_memory_usage when debug logs are enabled. The speed cost of log_memory_usage=True is large, and it is common to want debug logs without enabling log_memory_usage. 2024-03-12 09:48:50 +11:00
metadata fix(mm): config.json to indicate diffusers model 2024-03-13 21:02:29 +11:00
util install model if diffusers or single file, cleaned up backend logic to not mess with existing model install 2024-03-13 21:02:29 +11:00
__init__.py chore: ruff 2024-03-01 10:42:33 +11:00
config.py fix(mm): remove default settings from IP adapter config 2024-03-08 12:44:58 -05:00
convert_ckpt_to_diffusers.py chore: ruff 2024-03-01 10:42:33 +11:00
libc_util.py Tidy names and locations of modules 2024-03-01 10:42:33 +11:00
merge.py fix(mm): fix incorrect calls to update_model 2024-03-05 23:50:19 +11:00
probe.py feat(mm): faster hashing for spinning disk HDDs 2024-03-14 15:54:42 +11:00
search.py docs(mm): format docstrings for ModelSearch 2024-03-10 12:09:47 +11:00