Mirror of https://github.com/invoke-ai/InvokeAI (synced 2024-08-30 20:32:17 +00:00)
fef632e0e1
This adds an optional -t argument that prints out a color-coded tokenization of the prompt. SD has a maximum of 77 tokens; if your prompt is too long, tokens over the limit are silently discarded. Using -t lets you see how your prompt is being tokenized, which helps with prompt crafting.
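For context, below is a minimal sketch of how a prompt's tokenization can be checked against the 77-token limit. It is not the repository's own -t implementation; it assumes the Hugging Face transformers package, its CLIPTokenizer class, and the openai/clip-vit-large-patch14 checkpoint purely for illustration.

```python
# Minimal sketch: inspect how a prompt tokenizes against SD's 77-token context.
# Not the repo's -t code; assumes Hugging Face `transformers` is installed.
from transformers import CLIPTokenizer

MAX_TOKENS = 77  # CLIP text-encoder context length used by Stable Diffusion
GREEN, RED, RESET = "\033[32m", "\033[31m", "\033[0m"

def show_tokenization(prompt: str) -> None:
    tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
    tokens = tokenizer.tokenize(prompt)       # subword tokens, no special tokens
    budget = MAX_TOKENS - 2                   # reserve the start/end special tokens
    kept, discarded = tokens[:budget], tokens[budget:]
    print(f"{len(tokens)} tokens: {len(kept)} kept, {len(discarded)} discarded")
    print(GREEN + " ".join(kept) + RESET)     # tokens that reach the model
    if discarded:
        print(RED + " ".join(discarded) + RESET)  # tokens silently dropped

if __name__ == "__main__":
    show_tokenization("a watercolor painting of a red fox in a misty forest, " * 10)
```

Kept tokens print in green and discarded ones in red, mirroring the color-coded idea described above.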
Directory listing at this commit:

data
dream
gfpgan
models
modules
lr_scheduler.py
simplet2i.py
util.py