Complete rewrite of the prompt parsing logic to be more readable and
logical, and therefore hopefully easier to debug, maintain, and
augment.
In the process it has also become more robust to badly-formed prompts.
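A minimal sketch of the weighted-fragment idea behind the new parser
(the regex grammar, default weight, and function name are illustrative
simplifications, not the actual implementation):

    import re
    from typing import List, Tuple

    # Split a prompt into (text, weight) fragments. "(red)1.3" becomes
    # ("red", 1.3); malformed input degrades to weight-1.0 plain text
    # instead of raising an error.
    FRAGMENT_RE = re.compile(r"\(([^()]*)\)(\d+(?:\.\d+)?)?")

    def parse_prompt(prompt: str) -> List[Tuple[str, float]]:
        fragments: List[Tuple[str, float]] = []
        pos = 0
        for m in FRAGMENT_RE.finditer(prompt):
            plain = prompt[pos:m.start()].strip()
            if plain:
                fragments.append((plain, 1.0))
            weight = float(m.group(2)) if m.group(2) else 1.1
            fragments.append((m.group(1).strip(), weight))
            pos = m.end()
        tail = prompt[pos:].strip()  # stray text/parens kept verbatim
        if tail:
            fragments.append((tail, 1.0))
        return fragments

    # parse_prompt("a (red)1.3 boat")
    #   -> [("a", 1.0), ("red", 1.3), ("boat", 1.0)]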
Squashed commit of the following:
commit 8fcfa88a16e1390d41717e940d72aed64712171c
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date: Sun Oct 30 17:05:57 2022 +0100
further cleanup
commit 1a1fd78bcfeb49d072e3e6d5808aa8df15441629
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date: Sun Oct 30 16:07:57 2022 +0100
cleanup and document
commit 099c9659fa8b8135876f9a5a50fe80b20bc0635c
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date: Sun Oct 30 15:54:58 2022 +0100
works fully
commit 5e6887ea8c25a1e21438ff6defb381fd027d25fd
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date: Sun Oct 30 15:24:31 2022 +0100
further...
commit 492fda120844d9bc1ad4ec7dd408a3374762d0ff
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date: Sun Oct 30 14:08:57 2022 +0100
getting there...
commit c6aab05a8450cc3c95c8691daf38fdc64c74f52d
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date: Fri Oct 28 14:29:03 2022 +0200
wip doesn't compile
commit 5e533f731cfd20cd435330eeb0012e5689e87e81
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date: Fri Oct 28 13:21:43 2022 +0200
working with CrossAttentionControl but no Attention support yet
commit 9678348773431e500e110e8aede99086bb7b5955
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date: Fri Oct 28 13:04:52 2022 +0200
wip rebuilding prompt parser
- ldm.generate.Generator() now takes an argument named `max_load_models`,
  an integer that limits the model cache size. When the cache reaches
  the limit, it starts purging older models from the cache (see the
  sketch below).
- The CLI takes an argument --max_load_models, which defaults to 2. This
  keeps one model on the GPU and the other in CPU RAM, so switching back
  and forth between them is quick.
- To disable model caching entirely, pass --max_load_models=1.
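A minimal sketch of the eviction behaviour (class and method names are
illustrative; the real cache lives elsewhere in the codebase):

    from collections import OrderedDict

    class ModelCache:
        """Holds at most max_load_models models, purging the least
        recently used one when the limit is reached."""

        def __init__(self, max_load_models: int = 2):
            self.max_load_models = max_load_models
            self._models = OrderedDict()

        def get(self, name: str, loader):
            if name in self._models:
                self._models.move_to_end(name)    # most recently used
                return self._models[name]
            while len(self._models) >= self.max_load_models:
                self._models.popitem(last=False)  # purge oldest model
            self._models[name] = loader(name)
            return self._models[name]

With --max_load_models=1 the cache holds a single model, so every
switch reloads from disk; the default of 2 keeps the previous model
resident for a quick switch back.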
- user can select which weight files to download; files are stored in
  the huggingface cache
- user must log in to huggingface, generate an access token, and accept
  the license terms the very first time this is run. After that,
  everything works automatically (see the sketch below).
- added a placeholder for documentation on installing models
- also removed unused config files; I don't believe they were needed
  for textual inversion
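The first-run flow is the standard huggingface_hub token setup; a
sketch (the repo id and filename are placeholders for whichever weight
file the user selects):

    from huggingface_hub import hf_hub_download, login

    # One-time setup: generate an access token on the huggingface
    # website and accept the model's license terms; the cached token
    # is reused automatically on later runs.
    login()  # prompts for the token on first use

    path = hf_hub_download(
        repo_id="CompVis/stable-diffusion-v-1-4-original",  # placeholder
        filename="sd-v1-4.ckpt",                            # placeholder
    )
    print(f"weights cached at {path}")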
The Args object would fail when trying to retrieve metadata from an
image file that did not contain InvokeAI-generated metadata, such as
a JPG. This corrects that by returning dummy values (a seed of zero, a
prompt of '') to avoid downstream breakage.
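A sketch of the fallback (the PNG metadata key and helper name here
are illustrative, not the exact Args internals):

    import json
    from PIL import Image

    DUMMY_METADATA = {"seed": 0, "prompt": ""}

    def metadata_from_image(path: str) -> dict:
        """Return embedded generation metadata, or dummy values for
        files (e.g. JPGs) that carry none."""
        try:
            with Image.open(path) as img:
                raw = img.info.get("sd-metadata")  # assumed text-chunk key
                if raw:
                    return json.loads(raw)
        except (OSError, json.JSONDecodeError):
            pass
        return dict(DUMMY_METADATA)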
This was a difficult merge because both PR #1108 and #1243 made
changes to obscure parts of the diffusion code.
- prompt weighting, merging, and cross-attention are working
- cross-attention does not work with the runwayML inpainting
  model, but weighting and merging are tested and working
- CLI command parsing code rewritten to get embedded quotes
  right (see the sketch after this list)
- --hires now works with runwayML inpainting
- --embiggen does not work with runwayML and will give an error
- Added an --invert option to invert masks applied to inpainting
- Updated documentation
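Getting embedded quotes right is essentially the shell-tokenisation
problem; a sketch of the idea using the standard library (the actual
CLI rewrite is its own parser):

    import shlex

    def parse_command(line: str) -> list:
        """Split a CLI line into tokens, keeping quoted prompts intact."""
        return shlex.split(line)

    # parse_command('t2i "a bird on a wire" -s 50')
    #   -> ['t2i', 'a bird on a wire', '-s', '50']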
- change default model back to 1.4
- remove --fnformat from canonicalized dream prompt arguments
(not needed for image reproducibility)
- add -tm to canonicalized dream prompt arguments
  (definitely needed for image reproducibility; see the sketch below)
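To illustrate the reproducibility rule, a hypothetical sketch of
canonicalization dropping flags that cannot affect the generated
pixels (the flag table and function are illustrative only):

    # Flags that only affect filenames/bookkeeping are stripped from
    # the canonical dream prompt; flags that change the image (such
    # as -tm) must be kept so the prompt reproduces the image.
    NON_REPRODUCIBLE_FLAGS = {"--fnformat"}

    def canonicalize(tokens: list) -> list:
        out, skip = [], False
        for tok in tokens:
            if skip:          # swallow the value of a dropped flag
                skip = False
                continue
            if tok in NON_REPRODUCIBLE_FLAGS:
                skip = True
                continue
            out.append(tok)
        return out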