InvokeAI/invokeai
psychedelicious cdc174d5d2 fix(backend): mps should not use non_blocking
Moving tensors from CPU to MPS with `non_blocking=True` can produce black outputs; moving from MPS to CPU appears to be fine. See:
- https://github.com/pytorch/pytorch/issues/107455
- https://discuss.pytorch.org/t/should-we-set-non-blocking-to-true/38234/28
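For illustration, a minimal sketch of the copy pattern involved (the tensor and variable names here are hypothetical, not the actual call sites):

```python
import torch

# Hypothetical illustration of the CPU -> MPS copy described above.
# With non_blocking=True the copy may not be synchronized with later reads,
# which can surface as black (zeroed) outputs. A synchronous copy
# (non_blocking=False) avoids this.
cpu_tensor = torch.randn(4, 4)

if torch.backends.mps.is_available():
    mps_device = torch.device("mps")
    # Safe: synchronous copy to MPS.
    on_mps = cpu_tensor.to(mps_device, non_blocking=False)
    # The reverse direction (MPS -> CPU) appears to be unaffected.
    back_on_cpu = on_mps.to(torch.device("cpu"), non_blocking=True)
```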

Changes:
- Add properties for each device on `TorchDevice` as a convenience.
- Add a `get_non_blocking` static method on `TorchDevice`. This utility takes a torch device and returns the `non_blocking` flag to use when moving a tensor to that device (see the sketch after this list).
- Update model patching and caching APIs to use this new utility.
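A minimal sketch of what such a utility could look like, assuming the flag is keyed off the destination device type (the exact class layout in the repo may differ):

```python
import torch

class TorchDevice:
    """Sketch only: device helper with the non_blocking utility described above."""

    @staticmethod
    def get_non_blocking(to_device: torch.device) -> bool:
        """Return the non_blocking flag to use when moving a tensor to `to_device`.

        Copies *to* MPS must be synchronous to avoid black outputs, so MPS
        destinations get False; everything else can use True.
        """
        return to_device.type != "mps"

# Example call site, e.g. in model patching/caching code:
# tensor = tensor.to(device, non_blocking=TorchDevice.get_non_blocking(device))
```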

Fixes: #6545
2024-06-28 07:55:34 +10:00
app Revert "Remove the redundant init_timestep parameter that was being passed around. It is simply the first element of the timesteps array." 2024-06-26 12:51:51 -04:00
assets feat(api): chore: pydantic & fastapi upgrade 2023-10-17 14:59:25 +11:00
backend fix(backend): mps should not use non_blocking 2024-06-28 07:55:34 +10:00
configs feat(mm): support sdxl ckpt inpainting models 2024-04-28 12:57:27 +10:00
frontend feat(ui): handle new model_install_download_started event 2024-06-17 10:07:10 +10:00
invocation_api fix: SchedulerOutput not being imported correctly 2024-06-10 04:12:20 -07:00
version Update invokeai_version.py 2024-06-27 10:41:01 +10:00
__init__.py