InvokeAI/ldm
Latest commit 786b8878d6 by Damian Stewart: Save and display per-token attention maps (#1866)
* attention maps are saved to /tmp

* tidy up the backport of the cross-attention refactoring from the diffusers branch

* base64-encoding the attention maps image for generationResult

* cleanup/refactor conditioning.py

* attention maps and tokens are now sent to the web UI

* attention maps: restrict count to actual token count and improve robustness

* add argument type hint to the image_to_dataURL function (a sketch of such a helper follows the commit summary below)

Co-authored-by: damian <git@damianstewart.com>
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Committed: 2022-12-10 15:57:41 +01:00
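
The bullets above mention base64-encoding the attention-map image for generationResult and adding an argument type hint to image_to_dataURL. As a rough, hedged sketch only (the module location, docstring, and exact behaviour are assumptions here, not the repository's actual implementation), such a helper could look like this:

```python
import base64
from io import BytesIO

from PIL import Image


def image_to_dataURL(image: Image.Image) -> str:
    """Encode a PIL image as a base64 PNG data URL.

    Sketch only: the real InvokeAI helper may differ in name, location,
    and details; this just illustrates the base64 data-URL idea.
    """
    buffer = BytesIO()
    image.save(buffer, format="PNG")
    encoded = base64.b64encode(buffer.getvalue()).decode("ascii")
    return f"data:image/png;base64,{encoded}"
```

A string of this form can be embedded directly in a JSON payload and used as an image source by the web UI, which is presumably what makes it a convenient way to ship per-token attention maps alongside the prompt tokens.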
| Name | Last commit | Last updated |
| --- | --- | --- |
| `data` | Textual Inversion for M1 | 2022-09-27 01:39:17 +02:00 |
| `invoke` | Save and display per-token attention maps (#1866) | 2022-12-10 15:57:41 +01:00 |
| `models` | Save and display per-token attention maps (#1866) | 2022-12-10 15:57:41 +01:00 |
| `modules` | Save and display per-token attention maps (#1866) | 2022-12-10 15:57:41 +01:00 |
| `__init__.py` | Merge dev into main for 2.2.0 (#1642) | 2022-11-30 16:12:23 -05:00 |
| `generate.py` | Save and display per-token attention maps (#1866) | 2022-12-10 15:57:41 +01:00 |
| `lr_scheduler.py` | prettified all the code using "blue" at the urging of @tildebyte | 2022-08-26 03:15:42 -04:00 |
| `simplet2i.py` | Squashed commit of the following: | 2022-09-12 14:31:48 -04:00 |
| `util.py` | Merge dev into main for 2.2.0 (#1642) | 2022-11-30 16:12:23 -05:00 |