Mirror of https://github.com/invoke-ai/InvokeAI, synced 2024-08-30 20:32:17 +00:00
eb6e6548ed
BLAKE3 has poor performance on spinning disks when parallelized (see https://github.com/BLAKE3-team/BLAKE3/issues/31).

- Replace the `skip_model_hash` setting with `hashing_algorithm`; any supported algorithm is accepted.
- Add a `random` algorithm: hashes a UUID with BLAKE3 to create a random "hash", equivalent to the previous skip functionality.
- Add a `blake3_single` algorithm: hashes on a single thread using BLAKE3, fixing the aforementioned performance issue.
- Update the model probe to accept the hashing algorithm as an optional arg, defaulting to `blake3`.
- Update all calls of the probe to use the app's configured hashing algorithm.
- Update an external script that probes models.
- Update tests.
- Move `ModelHash` into its own module to avoid circular import issues.
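The scheme above can be sketched with a small helper. This is an illustrative sketch, not InvokeAI's actual `ModelHash` implementation: the function name `hash_file` is hypothetical, and `hashlib.sha256` stands in for BLAKE3 because the `blake3` package is a third-party dependency. The key idea it demonstrates is the `random` algorithm, which hashes a fresh UUID instead of reading the file at all.

```python
import hashlib
import uuid
from pathlib import Path

def hash_file(path: Path, algorithm: str = "sha256") -> str:
    """Hash a model file, or fabricate a random 'hash' when algorithm == 'random'.

    Hypothetical helper; sha256 substitutes for BLAKE3 here since blake3
    is not in the standard library.
    """
    if algorithm == "random":
        # Equivalent to the old skip_model_hash behaviour: hash a fresh
        # UUID so each call yields a different value and the file is
        # never read from disk.
        return hashlib.sha256(uuid.uuid4().bytes).hexdigest()
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        # Read in 1 MiB chunks so large model files are not loaded whole.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()
```

A real `blake3_single` variant would do the same streaming read with a single-threaded BLAKE3 hasher, avoiding the parallel mmap path that thrashes spinning disks.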
Files in this directory:

- configure_invokeai.py
- generate_openapi_schema.py
- generate_profile_graphs.sh
- images2prompt.py
- invokeai-cli.py
- invokeai-model-install.py
- invokeai-web.py
- make_models_markdown_table.py
- probe-model.py
- pypi_helper.py
- scan_models_directory.py
- sd-metadata.py
- update_config_docstring.py