InvokeAI/ldm
xra fef632e0e1 tokenization logging (take 2)
This adds an optional -t argument that prints a color-coded tokenization of the prompt. SD has a maximum of 77 tokens; if your prompt is too long, tokens over the limit are silently discarded.
By using -t you can see how your prompt is being tokenized, which helps with prompt crafting.
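The truncation behavior described above can be sketched in a few lines. This is an illustrative toy, not the repository's code: whitespace splitting stands in for SD's actual CLIP BPE tokenizer, and the function name is invented for the example.

```python
# Illustrative sketch only: SD really uses the CLIP BPE tokenizer, not
# whitespace splitting, and the real 77-token budget also covers special
# begin/end tokens (ignored here for simplicity).
MAX_TOKENS = 77

def tokenize_with_report(prompt: str, limit: int = MAX_TOKENS):
    tokens = prompt.split()  # stand-in for real tokenization
    kept, discarded = tokens[:limit], tokens[limit:]
    # A -t style report could color kept tokens and flag discarded ones;
    # here we simply return both lists so the caller can inspect them.
    return kept, discarded

# A prompt of 80 words: the last 3 are silently dropped.
kept, discarded = tokenize_with_report("word " * 80)
```

The point is that nothing past the limit reaches the model, which is why seeing the tokenization (as -t provides) matters for long prompts.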
2022-08-29 12:28:49 +09:00
data prettified all the code using "blue" at the urging of @tildebyte 2022-08-26 03:15:42 -04:00
dream Merge pull request #168 from blessedcoolant/bug-fixes 2022-08-29 13:05:53 +12:00
gfpgan Optimize and Improve GFPGAN and Real-ESRGAN Pipeline 2022-08-29 08:14:29 +12:00
models prettified all the code using "blue" at the urging of @tildebyte 2022-08-26 03:15:42 -04:00
modules Merge branch 'main' into half-precision-embeddings 2022-08-26 08:33:46 -07:00
lr_scheduler.py prettified all the code using "blue" at the urging of @tildebyte 2022-08-26 03:15:42 -04:00
simplet2i.py tokenization logging (take 2) 2022-08-29 12:28:49 +09:00
util.py prettified all the code using "blue" at the urging of @tildebyte 2022-08-26 03:15:42 -04:00