Mirror of https://github.com/invoke-ai/InvokeAI
feat(mm): default hashing algo to blake3_single
For SSDs, `blake3` is about 10x faster than `blake3_single`: 30 files/second vs 3 files/second.

For spinning HDDs, `blake3` is about 100x slower than `blake3_single`: 300 seconds/file vs 3 seconds/file.

For external drives, `blake3` is always worse, but the difference is highly variable. For external spinning drives, it's probably far worse than for internal ones.

The least offensive algorithm is `blake3_single`, and it's still _much_ faster than any other algorithm.
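A rough sketch of the two modes being compared, using the `blake3` Python bindings: the multi-threaded, memory-mapped variant versus a plain single-threaded streaming hash. The model path and chunk size are illustrative, and this is not necessarily how InvokeAI's own hashing code is structured.

```python
from pathlib import Path

from blake3 import blake3

model_path = Path("path/to/model.safetensors")  # illustrative path

# `blake3`: multi-threaded, memory-mapped hashing. Fast on SSDs, but the
# parallel access pattern thrashes spinning disks.
hasher = blake3(max_threads=blake3.AUTO)
hasher.update_mmap(model_path)
print(hasher.hexdigest())

# `blake3_single`: one thread, sequential streaming reads. Slower on SSDs,
# but drastically faster on spinning HDDs.
hasher = blake3()
with open(model_path, "rb") as f:
    while chunk := f.read(2**20):
        hasher.update(chunk)
print(hasher.hexdigest())
```

Both variants produce the same digest; only the I/O pattern differs, which is why the choice matters for speed rather than for the resulting hash.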
@@ -119,19 +119,19 @@ The provided token will be added as a `Bearer` token to the network requests to
 #### Model Hashing
 
-Models are hashed during installation, providing a stable identifier for models across all platforms. The default algorithm is `blake3`, with a multi-threaded implementation.
-
-If your models are stored on a spinning hard drive, we suggest using `blake3_single`, the single-threaded implementation. The hashes are the same, but it's much faster on spinning disks.
+Models are hashed during installation, providing a stable identifier for models across all platforms. Hashing is a one-time operation.
 
 ```yaml
 hashing_algorithm: blake3_single
 ```
 
-Model hashing is a one-time operation, but it may take a couple minutes to hash a large model collection. You may opt out of model hashing entirely by setting the algorithm to `random`.
+You might want to change this setting, depending on your system:
 
-```yaml
-hashing_algorithm: random
-```
+- `blake3_single` (default): Single-threaded - best for spinning HDDs, still OK for SSDs
+- `blake3`: Parallelized, memory-mapped implementation - best for SSDs, terrible for spinning disks
+- `random`: Skip hashing entirely - fastest but of course no hash
+
+During the first startup after upgrading to v4, all of your models will be hashed. This can take a few minutes.
 
 Most common algorithms are supported, like `md5`, `sha256`, and `sha512`. These are typically much, much slower than `blake3`.
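The option list in the diff above suggests a simple dispatch over the configured `hashing_algorithm` value. A minimal sketch of what that could look like, assuming the `blake3` bindings and the standard-library `hashlib`; the function name and structure are hypothetical and not taken from InvokeAI's actual model-hashing code:

```python
import hashlib
import uuid
from pathlib import Path

from blake3 import blake3


def hash_model_file(path: Path, algorithm: str = "blake3_single") -> str:
    """Hypothetical dispatch over the documented `hashing_algorithm` values."""
    if algorithm == "random":
        # Skip hashing entirely: just a random identifier, so no stable hash.
        return uuid.uuid4().hex
    if algorithm == "blake3":
        # Parallelized, memory-mapped implementation - best for SSDs.
        hasher = blake3(max_threads=blake3.AUTO)
        hasher.update_mmap(path)
        return hasher.hexdigest()
    if algorithm == "blake3_single":
        hasher = blake3()  # single-threaded
    else:
        # md5, sha256, sha512, ... via hashlib; supported but much slower.
        hasher = hashlib.new(algorithm)
    with open(path, "rb") as f:
        while chunk := f.read(2**20):
            hasher.update(chunk)
    return hasher.hexdigest()
```

The `random` branch illustrates why opting out yields no stable identifier: the value changes every time it is generated.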