- Works best with the runwayML inpainting model.
- Numerous code changes were required to propagate the seed to the final
  metadata; the original code assumed the image had been generated within
  InvokeAI.
- When outcropping an image, you can now add a `--new_prompt` option to
  specify a prompt to be used in place of the one originally used to
  generate the image (see the example below).
- Similarly, you can provide a new seed using `--seed` (or `-S`); a seed
  of zero picks one at random.
- This PR also fixes the crash that occurred when trying to outcrop an
  image that does not contain InvokeAI metadata.
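A hedged example, assuming outcropping is invoked through the CLI's `!fix`
command (the file name, outcrop extent, and prompt are illustrative):
```
invoke> !fix images/curly.png --outcrop top 64 --new_prompt "a portrait against a stormy sky" -S 0
```
Here `-S 0` requests a randomly chosen seed, as described above.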
- This required an update to the invoke-ai fork of gfpgan.
- Simultaneously reverted the consolidation of the environment and
  requirements files, as their presence in a directory caused setup.py
  to attempt to install a sub-package.
- Place preferred startup command switches in a file named
  "invokeai.init". The file can consist of a single line of switches
  such as "--web --steps=28", a series of switches with one or more
  per line, or any combination of the two.
Example:
```
--web
--host=0.0.0.0
--steps=28
--grid
-f 0.6 -C 11.0 -A k_euler_a
```
- The following options, which were previously available only within the
  interactive CLI, are now accepted on the command line as well (a usage
  example follows the list):
  - `--steps`
  - `--strength`
  - `--cfg_scale`
  - `--width`
  - `--height`
  - `--fit`
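For instance, a launch line like the following should now work (the script
path `scripts/invoke.py` is an assumption; adjust for your installation):
```
python scripts/invoke.py --steps=28 --cfg_scale=7.5 --width=512 --height=768
```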
Complete re-write of the prompt parsing logic to be more readable and
logical, and therefore hopefully easier to debug, maintain, and augment.
In the process it has also become more robust to badly-formed prompts.
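To illustrate, these are the kinds of prompts the rewritten parser is meant
to handle; the attention-weighting and `.swap()` forms shown follow
InvokeAI's prompt documentation, but treat the exact syntax as assumptions:
```
a cat playing with a ball in the forest
a cat playing with a ball (in the forest)1.3
a cat.swap(dog) playing with a ball
a cat playing with a ball (in the forest
```
The last line has an unbalanced parenthesis; a badly-formed prompt like
this should now degrade gracefully rather than crash the parser.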
Squashed commit of the following:
commit 8fcfa88a16e1390d41717e940d72aed64712171c
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date: Sun Oct 30 17:05:57 2022 +0100
further cleanup
commit 1a1fd78bcfeb49d072e3e6d5808aa8df15441629
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date: Sun Oct 30 16:07:57 2022 +0100
cleanup and document
commit 099c9659fa8b8135876f9a5a50fe80b20bc0635c
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date: Sun Oct 30 15:54:58 2022 +0100
works fully
commit 5e6887ea8c25a1e21438ff6defb381fd027d25fd
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date: Sun Oct 30 15:24:31 2022 +0100
further...
commit 492fda120844d9bc1ad4ec7dd408a3374762d0ff
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date: Sun Oct 30 14:08:57 2022 +0100
getting there...
commit c6aab05a8450cc3c95c8691daf38fdc64c74f52d
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date: Fri Oct 28 14:29:03 2022 +0200
wip doesn't compile
commit 5e533f731cfd20cd435330eeb0012e5689e87e81
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date: Fri Oct 28 13:21:43 2022 +0200
working with CrossAttentionControl but no Attention support yet
commit 9678348773431e500e110e8aede99086bb7b5955
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date: Fri Oct 28 13:04:52 2022 +0200
wip rebuilding prompt parser
- ldm.generate.Generator() now takes an argument named `max_load_models`.
  This is an integer that limits the size of the model cache; when the
  cache reaches the limit, it begins purging the oldest models.
- The CLI takes an argument `--max_load_models`, which defaults to 2. This
  keeps one model in GPU memory and the other in CPU memory, allowing
  quick switching between the two.
- To disable model caching entirely, pass `--max_load_models=1`. A sketch
  of the Python API follows.
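A minimal sketch of the API described above; the constructor name and
module path are taken from this note, while the surrounding usage is an
assumption:
```python
from ldm.generate import Generator  # module path and class name as given in the note above

# Cache up to two models: the active one on the GPU, the previous one in CPU RAM.
gen = Generator(max_load_models=2)

# A cache size of 1 disables caching: every model switch reloads from disk.
gen_uncached = Generator(max_load_models=1)
```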