InvokeAI/ldm
Mihail Dumitrescu e0951f28cf Refactor attention.CrossAttention to remove duplicate code and apply optimizations
Apply a ~6% speedup by moving the * self.scale multiplication earlier, onto a smaller tensor.
When there is enough VRAM, don't allocate a useless zeros tensor.
Switch between cuda/mps/cpu based on q.device.type to allow cleaner per-architecture optimizations in the future.
For cuda and cpu, keep VRAM usage and the faster slicing consistent.
For cpu, use smaller slices. Tested ~20% faster on an i7: 9.8 down to 7.7 s/it.
Fix an = typo so the check reads self.mem_total >= 8 in einsum_op_mps_v2, as per the #582 discussion.
2022-09-17 20:19:21 +03:00
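The commit message above outlines the refactor; below is a minimal, self-contained sketch of the shape of those changes. It is not the InvokeAI code itself: apart from einsum_op_mps_v2, self.scale, self.mem_total, and the q.device.type dispatch named in the message, the helper names, slice sizes, and thresholds are illustrative assumptions.

    # Minimal sketch of the refactored CrossAttention inner loop described above.
    # Illustrative only: helper names other than einsum_op_mps_v2, plus the slice
    # sizes and thresholds, are assumptions based on the commit description.
    import torch
    from torch import einsum


    class CrossAttentionSketch:
        def __init__(self, scale, mem_total_gb):
            self.scale = scale            # 1/sqrt(head_dim), standard attention scaling
            self.mem_total = mem_total_gb

        def einsum_op_compvis(self, q, k, v):
            # Unsliced path: softmax(q @ k^T) @ v. Note q was pre-scaled in
            # forward_core(), so there is no "* self.scale" applied here to the
            # much larger b x i x j attention matrix.
            s = einsum('b i d, b j d -> b i j', q, k)
            s = s.softmax(dim=-1)
            return einsum('b i j, b j d -> b i d', s, v)

        def einsum_op_slice_1(self, q, k, v, slice_size):
            # Sliced path: the zeros accumulator is only allocated when we must
            # slice, so the unsliced path never creates a useless zeros tensor.
            r = torch.zeros(q.shape[0], q.shape[1], v.shape[2],
                            device=q.device, dtype=q.dtype)
            for i in range(0, q.shape[1], slice_size):
                end = i + slice_size
                r[:, i:end] = self.einsum_op_compvis(q[:, i:end], k, v)
            return r

        def einsum_op_mps_v2(self, q, k, v):
            # The fixed comparison from the commit: ">=" rather than the "=" typo.
            # The 4096 cutoff and slice width of 1 are assumptions.
            if self.mem_total >= 8 and q.shape[1] <= 4096:
                return self.einsum_op_compvis(q, k, v)
            return self.einsum_op_slice_1(q, k, v, 1)

        def einsum_op(self, q, k, v):
            # Dispatch on q.device.type so each architecture can be tuned
            # separately, as the commit message describes.
            if q.device.type == 'cuda':
                # Stand-in for a VRAM-aware path that picks a slice size from
                # free memory; shown unsliced here for brevity.
                return self.einsum_op_compvis(q, k, v)
            if q.device.type == 'mps':
                return self.einsum_op_mps_v2(q, k, v)
            # cpu: smaller slices were measured ~20% faster per the commit
            # message; this particular slice-size heuristic is an assumption.
            return self.einsum_op_slice_1(q, k, v, max(1, q.shape[1] // 8))

        def forward_core(self, q, k, v):
            # The ~6% win: scale q (a small b x i x d tensor) once, instead of
            # scaling the larger b x i x j product of the first einsum.
            q = q * self.scale
            return self.einsum_op(q, k, v)

The design point in both halves is to avoid touching large tensors unnecessarily: scaling q costs one pass over a b x i x d tensor, whereas scaling after the first einsum would pass over the larger b x i x j attention matrix, and the zeros accumulator is only paid for on the sliced paths.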
data prettified all the code using "blue" at the urging of @tildebyte 2022-08-26 03:15:42 -04:00
dream tidy up generation of prompt when variations in use 2022-09-17 11:59:47 -04:00
gfpgan implementation of RFC #266 (#587) 2022-09-16 13:09:04 -04:00
models Refactor generate.py and dream.py (#534) 2022-09-14 07:02:31 -04:00
modules Refactor attention.CrossAttention to remove duplicate code and apply optimizations 2022-09-17 20:19:21 +03:00
generate.py Merge with PR #602 2022-09-16 16:35:34 -04:00
lr_scheduler.py prettified all the code using "blue" at the urging of @tildebyte 2022-08-26 03:15:42 -04:00
simplet2i.py Disabled debug output (#436) 2022-09-11 22:34:35 -04:00
util.py prettified all the code using "blue" at the urging of @tildebyte 2022-08-26 03:15:42 -04:00