InvokeAI/invokeai/backend/model_manager/load/model_cache
psychedelicious c7562dd6c0
fix(backend): mps should not use non_blocking
With `non_blocking=True`, moving tensors from CPU to MPS can produce black outputs; MPS-to-CPU transfers appear to be unaffected. See:
- https://github.com/pytorch/pytorch/issues/107455
- https://discuss.pytorch.org/t/should-we-set-non-blocking-to-true/38234/28

Changes:
- Add properties for each device on `TorchDevice` as a convenience.
- Add a `get_non_blocking` static method on `TorchDevice`. This utility takes a torch device and returns the `non_blocking` flag to use when moving a tensor to that device (a minimal sketch follows this list).
- Update model patching and caching APIs to use this new utility.
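For illustration, here is a minimal sketch of the `get_non_blocking` utility based on the description above. The class body and usage lines are simplified and hypothetical, not the exact InvokeAI implementation; only the method name and its device-in, flag-out contract come from the commit description.

```python
import torch


class TorchDevice:
    """Sketch of the convenience utility described above (simplified;
    the real InvokeAI class has additional members)."""

    # Convenience properties for each device, per the commit description
    # (names here are assumptions).
    CPU_DEVICE = torch.device("cpu")
    MPS_DEVICE = torch.device("mps")

    @staticmethod
    def get_non_blocking(to_device: torch.device) -> bool:
        """Return the non_blocking flag to use when moving a tensor to
        `to_device`.

        Non-blocking CPU-to-MPS copies can yield black outputs
        (pytorch/pytorch#107455), so non_blocking is disabled for MPS.
        """
        return False if to_device.type == "mps" else True


# Usage sketch: pass the flag through wherever a tensor is moved.
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")
tensor = torch.randn(3, 4)
tensor = tensor.to(device, non_blocking=TorchDevice.get_non_blocking(device))
```

Centralizing the flag in one utility means callers in the patching and caching code never have to special-case MPS themselves.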

Fixes: #6545
2024-06-27 19:15:23 +10:00
__init__.py BREAKING CHANGES: invocations now require model key, not base/type/name 2024-03-01 10:42:33 +11:00
model_cache_base.py Merge remote-tracking branch 'origin/main' into lstein/feat/simple-mm2-api 2024-06-07 14:23:41 +10:00
model_cache_default.py fix(backend): mps should not use non_blocking 2024-06-27 19:15:23 +10:00
model_locker.py Merge remote-tracking branch 'origin/main' into lstein/feat/simple-mm2-api 2024-06-07 14:23:41 +10:00