InvokeAI/invokeai/backend/model_manager/load
Latest commit: psychedelicious c7562dd6c0
fix(backend): mps should not use non_blocking
We can get black outputs when moving tensors from CPU to MPS with `non_blocking=True`; moving from MPS to CPU appears to be fine. See:
- https://github.com/pytorch/pytorch/issues/107455
- https://discuss.pytorch.org/t/should-we-set-non-blocking-to-true/38234/28
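
For illustration, a minimal sketch of the failure mode and the safe pattern; the tensor and variable names here are hypothetical, and the behavior is as described in the linked issues:

```python
import torch

if torch.backends.mps.is_available():
    x = torch.randn(16, 16)  # hypothetical tensor on the CPU

    # CPU -> MPS: a non-blocking copy can produce corrupted ("black") outputs,
    # per the issues linked above, so the copy must be synchronous.
    y = x.to("mps", non_blocking=False)

    # MPS -> CPU: non-blocking appears to be fine in this direction.
    z = y.to("cpu", non_blocking=True)
```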

Changes:
- Add properties for each device on `TorchDevice` as a convenience.
- Add a `get_non_blocking` static method on `TorchDevice`. This utility takes a torch device and returns the `non_blocking` flag to use when moving a tensor to that device (see the sketch after this list).
- Update model patching and caching APIs to use this new utility.
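
A minimal sketch of what the utility might look like, based solely on the description above; the device-property names and the exact return logic are assumptions, not the repository's actual code:

```python
import torch

class TorchDevice:
    # Convenience properties for each device (assumed names).
    CPU_DEVICE = torch.device("cpu")
    CUDA_DEVICE = torch.device("cuda")
    MPS_DEVICE = torch.device("mps")

    @staticmethod
    def get_non_blocking(to_device: torch.device) -> bool:
        """Return the non_blocking flag to use when moving a tensor to `to_device`.

        Copies *to* MPS must be synchronous (see the issues linked above), so
        non-blocking transfers are only enabled for non-MPS targets.
        """
        return to_device.type != "mps"
```

Call sites in the patching and caching code would then read something like `t.to(device, non_blocking=TorchDevice.get_non_blocking(device))`.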

Fixes: #6545
2024-06-27 19:15:23 +10:00
| Name | Last commit | Date |
| --- | --- | --- |
| `convert_cache` | make download and convert cache keys safe for filename length | 2024-04-28 12:24:36 -04:00 |
| `model_cache` | fix(backend): mps should not use non_blocking | 2024-06-27 19:15:23 +10:00 |
| `model_loaders` | [MM] Add support for probing and loading SDXL VAE checkpoint files (#6524) | 2024-06-20 02:57:27 +00:00 |
| `__init__.py` | add support for generic loading of diffusers directories | 2024-06-07 13:54:30 +10:00 |
| `load_base.py` | Merge remote-tracking branch 'origin/main' into lstein/feat/simple-mm2-api | 2024-06-07 14:23:41 +10:00 |
| `load_default.py` | make download and convert cache keys safe for filename length | 2024-04-28 12:24:36 -04:00 |
| `memory_snapshot.py` | final tidying before marking PR as ready for review | 2024-03-01 10:42:33 +11:00 |
| `model_loader_registry.py` | Experiment with using absolute paths within model management | 2024-03-08 15:36:14 -05:00 |
| `model_util.py` | make model manager v2 ready for PR review | 2024-03-01 10:42:33 +11:00 |
| `optimizations.py` | final tidying before marking PR as ready for review | 2024-03-01 10:42:33 +11:00 |