mirror of
https://github.com/invoke-ai/InvokeAI
synced 2024-08-30 20:32:17 +00:00
Increase chunk size when computing diffusers SHAs (#3159)
When running this app for the first time in a WSL2 environment, which is notoriously slow when it comes to IO, computing the SHAs of the models takes an eternity.

Computing the SHA for sd2.1:

```
| Calculating sha256 hash of model files
| sha256 = 1e4ce085102fe6590d41ec1ab6623a18c07127e2eca3e94a34736b36b57b9c5e (49 files hashed in 510.87s)
```

I increased the chunk size to 16 MB to reduce the number of round trips when loading the data. New results:

```
| Calculating sha256 hash of model files
| sha256 = 1e4ce085102fe6590d41ec1ab6623a18c07127e2eca3e94a34736b36b57b9c5e (49 files hashed in 59.89s)
```

Higher values don't seem to make an impact.
This commit is contained in:
commit
f05095770c
```diff
@@ -1204,7 +1204,7 @@ class ModelManager(object):
         return self.device.type == "cuda"

     def _diffuser_sha256(
-        self, name_or_path: Union[str, Path], chunksize=4096
+        self, name_or_path: Union[str, Path], chunksize=16777216
     ) -> Union[str, bytes]:
         path = None
         if isinstance(name_or_path, Path):
```
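The effect of the changed parameter can be seen in a minimal sketch of chunked directory hashing. This is an illustration of the technique, not InvokeAI's actual `_diffuser_sha256`; the function name `sha256_of_dir` and the sorted-traversal order are assumptions for the example:

```python
import hashlib
from pathlib import Path


def sha256_of_dir(path: Path, chunksize: int = 16_777_216) -> str:
    """Hash every file under `path` in sorted order.

    Reading in large chunks (16 MB here, matching the PR) means far
    fewer read() round trips, which matters on slow filesystems such
    as WSL2's mounted drives.
    """
    h = hashlib.sha256()
    for f in sorted(p for p in path.rglob("*") if p.is_file()):
        with open(f, "rb") as fh:
            # Feed the file to the hasher one chunk at a time so
            # memory use stays bounded regardless of file size.
            while chunk := fh.read(chunksize):
                h.update(chunk)
    return h.hexdigest()
```

The digest is independent of `chunksize`; only the number of IO round trips changes, which is why the PR could raise the value without invalidating previously computed hashes.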
Loading…
Reference in New Issue
Block a user