InvokeAI/invokeai/backend/model_manager
psychedelicious c7562dd6c0
fix(backend): mps should not use non_blocking
We can get black outputs when moving tensors from CPU to MPS with `non_blocking=True`; moving from MPS to CPU appears to be fine. See:
- https://github.com/pytorch/pytorch/issues/107455
- https://discuss.pytorch.org/t/should-we-set-non-blocking-to-true/38234/28

Changes:
- Add properties for each device on `TorchDevice` as a convenience.
- Add `get_non_blocking` static method on `TorchDevice`. This utility takes a torch device and returns the `non_blocking` flag to use when moving a tensor to that device.
- Update model patching and caching APIs to use this new utility.
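The utility described above can be sketched as follows. This is a minimal illustration, not the actual InvokeAI implementation: the class and method names follow the commit description, and the string fallback (accepting either a `torch.device` or a plain device-type string) is added here only so the sketch runs without a GPU.

```python
class TorchDevice:
    """Sketch of the TorchDevice utility described above (details assumed)."""

    @staticmethod
    def get_non_blocking(to_device) -> bool:
        """Return the non_blocking flag to use when moving a tensor to `to_device`.

        Copying CPU -> MPS with non_blocking=True can produce black outputs
        (see pytorch/pytorch#107455), so non-blocking transfers are disabled
        whenever the target device is MPS. `to_device` would normally be a
        torch.device; only its `.type` string is inspected, so a bare string
        also works in this sketch.
        """
        device_type = getattr(to_device, "type", to_device)
        return device_type != "mps"
```

A caller would then write `t.to(device, non_blocking=TorchDevice.get_non_blocking(device))`, so transfers to MPS fall back to a synchronous copy while CUDA and CPU targets keep the non-blocking fast path.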

Fixes: #6545
2024-06-27 19:15:23 +10:00
Name | Last commit message | Last commit date
load | fix(backend): mps should not use non_blocking | 2024-06-27 19:15:23 +10:00
metadata | refactor model_install to work with refactored download queue | 2024-05-13 22:49:15 -04:00
util | install model if diffusers or single file, cleaned up backend logic to not mess with existing model install | 2024-03-13 21:02:29 +11:00
__init__.py | [mm] Do not write diffuser model to disk when convert_cache set to zero (#6072) | 2024-03-29 16:11:08 -04:00
config.py | Initial functionality | 2024-06-18 10:38:29 -04:00
convert_ckpt_to_diffusers.py | feat(mm): use same pattern for vae converter as others | 2024-04-01 12:34:49 +11:00
libc_util.py | Tidy names and locations of modules | 2024-03-01 10:42:33 +11:00
merge.py | [util] Add generic torch device class (#6174) | 2024-04-15 13:12:49 +00:00
probe.py | [MM] Add support for probing and loading SDXL VAE checkpoint files (#6524) | 2024-06-20 02:57:27 +00:00
search.py | docs(mm): update ModelSearch | 2024-03-28 12:35:41 +11:00
starter_models.py | fix: update SDXL IP Adapter starter model to be ViT-H | 2024-04-24 00:08:21 -04:00