Commit Graph

724 Commits

Author SHA1 Message Date
Lincoln Stein
636620b1d5 change initfile to ~/.invokeai
- adjust documentation
- also fix 'clipseg_models' to 'clipseg', which seems to be working now
2022-11-08 03:26:16 +00:00
Lincoln Stein
1fe41146f0 add support for an initialization file, invokeai.init
- Place preferred startup command switches in a file named
  "invokeai.init". The file can consist of a single line of switches
  such as "--web --steps=28", one switch per line, or any
  combination of the two.

Example:
```
--web
--host=0.0.0.0
--steps=28
--grid
-f 0.6 -C 11.0 -A k_euler_a
```

- The following options, which were previously available only within
  the interactive CLI, are now accepted on the command line as well:

  --steps
  --strength
  --cfg_scale
  --width
  --height
  --fit
2022-11-06 22:02:45 -05:00
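For context on the init-file format above, a minimal sketch of how such a file could be merged with shell arguments; the function name and the argparse wiring are illustrative assumptions, not the committed implementation:

```python
import shlex
from pathlib import Path

def read_init_switches(path: Path) -> list:
    """Collect switches from an init file: any mix of one switch per
    line and several per line, ignoring blank lines (illustrative)."""
    if not path.exists():
        return []
    switches = []
    for line in path.read_text().splitlines():
        line = line.strip()
        if line:
            switches.extend(shlex.split(line))
    return switches

# Init-file switches go first so explicit shell switches override them:
# argv = read_init_switches(Path.home() / ".invokeai") + sys.argv[1:]
# opts = parser.parse_args(argv)
```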
Kyle Schouviller
f91fd27624 Bug fix for inpaint size 2022-11-06 09:25:50 -08:00
Kyle Schouviller
09e41e8f76 Add inpaint size options to inpaint at a larger size than the actual inpaint image, then scale back down for recombination 2022-11-06 09:25:50 -08:00
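A rough sketch of the idea in that commit, assuming PIL-style images and a stand-in `inpaint_fn` for the actual inpainting call (all names here are illustrative):

```python
from PIL import Image

def inpaint_at_size(image, mask, inpaint_fn, work_size=(1024, 1024)):
    """Upscale to a larger working size, inpaint there, then scale the
    result back down and recombine with the original (sketch only)."""
    orig_size = image.size
    big_result = inpaint_fn(
        image.resize(work_size, Image.LANCZOS),
        mask.resize(work_size, Image.LANCZOS),
    )
    result = big_result.resize(orig_size, Image.LANCZOS)
    # White mask pixels take the inpainted result; black keep the original.
    return Image.composite(result, image, mask.convert("L"))
```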
Lincoln Stein
17053ad8b7 fix duplicated argument introduced by conflict resolution 2022-11-05 16:01:55 -04:00
Lincoln Stein
fefb4dc1f8 Merge branch 'development' into fix_generate.py 2022-11-05 12:47:35 -07:00
Craig
d05b1b3544 Resize hires as an image 2022-11-05 11:54:23 -07:00
Craig
82d4904c07 Log strength with hires 2022-11-05 11:54:23 -07:00
Matthias Wild
36870a8f53 Merge branch 'development' into merge-main-into-development 2022-11-04 16:25:00 +01:00
damian0815
b70420951d fix parsing error with e.g. forest ().swap(in winter) 2022-11-03 20:15:23 -04:00
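To illustrate the .swap() syntax that commit fixes, a toy expansion of a swap prompt into the original/edited pair used for cross-attention control (a regex sketch, not the committed grammar):

```python
import re

SWAP_RE = re.compile(r"\((?P<orig>[^)]*)\)\.swap\((?P<new>[^)]*)\)")

def split_swap(prompt):
    """Expand `a (b).swap(c)` into an (original, edited) prompt pair.
    An empty original part adds text only on the edited side."""
    original = SWAP_RE.sub(lambda m: m.group("orig"), prompt)
    edited = SWAP_RE.sub(lambda m: m.group("new"), prompt)
    return original, edited

# split_swap("forest ().swap(in winter)") -> ("forest ", "forest in winter")
```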
wfng92
1f0c5b4cf1 Use array slicing to calculate DDIM timesteps 2022-11-03 20:11:04 -04:00
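The slicing approach looks roughly like this (a sketch of the technique, not the exact committed code):

```python
import numpy as np

def make_uniform_ddim_timesteps(num_ddim_steps, num_ddpm_steps=1000):
    """Pick an evenly strided subset of the DDPM timesteps by slicing
    an array instead of building a Python list."""
    c = num_ddpm_steps // num_ddim_steps
    return np.arange(0, num_ddpm_steps)[::c]
```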
Lincoln Stein
174a9b78b0 Bring main back into a consistent state with other branches
- Due to misuse of the rebase command, main was transiently
  in an inconsistent state.

- This repairs the damage, and adds a few post-release
  patches that ensure stable conda installs on Mac and Windows.
2022-11-03 15:44:06 -04:00
Lincoln Stein
240e5486c8 Merge branch 'spezialspezial-patch-9' into development 2022-11-02 18:35:00 -04:00
Lincoln Stein
aa247e68be use refined model by default 2022-11-02 18:29:34 -04:00
Lincoln Stein
895c47fd11 Merge branch 'patch-9' of https://github.com/spezialspezial/stable-diffusion into spezialspezial-patch-9 2022-11-02 18:24:55 -04:00
damian
d85cd99f17 add option to show intermediate latent space 2022-11-02 17:53:11 -04:00
spezialspezial
825fa6977d Update outcrop.py 2022-11-02 16:33:35 -04:00
spezialspezial
e332529fbd Prevent outcrop error when no callback is supplied 2022-11-02 16:33:35 -04:00
damian0815
2468a28e66 save VRAM by not recombining tensors that were sliced apart to save it 2022-11-01 22:39:48 -04:00
damian0815
e3ed748191 fix a bug that broke cross attention control index mapping 2022-11-01 22:39:39 -04:00
damian0815
3f5bf7ac44 report full size for fast latents and update conversion matrix for v1.5 2022-11-01 22:39:27 -04:00
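The "fast latents" preview approximates the VAE decode with a single per-pixel linear map while reporting the final image dimensions, which is why a model change (v1.5) requires new matrix values. A minimal sketch, with the model-specific 4x3 matrix left as a parameter rather than guessed at:

```python
import torch

def fast_latents_to_rgb(latents, factors):
    """Cheap preview: map each 4-channel latent pixel to RGB with one
    matrix multiply (factors is the model-specific 4x3 matrix)."""
    # latents: (B, 4, H/8, W/8) -> rgb: (B, 3, H/8, W/8)
    return torch.einsum("bchw,cd->bdhw", latents, factors).clamp(-1, 1)
```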
spezialspezial
5e87062cf8 Option to directly invert the grayscale heatmap - fix 2022-11-01 22:24:31 -04:00
spezialspezial
3e7a459990 Update txt2mask.py 2022-11-01 22:24:31 -04:00
spezialspezial
bbf4c03e50 Option to directly invert the grayscale heatmap
Theoretically it's less work to invert the image while it's small, but I can't measure a significant difference. Still, it's a handy option to have in some cases.
2022-11-01 22:24:31 -04:00
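The point about inverting while small, sketched with PIL (function name and signature are illustrative, not txt2mask.py's actual API):

```python
from PIL import Image, ImageOps

def heatmap_to_mask(heatmap, size, invert=False):
    """Optionally invert the grayscale heatmap while it is still small,
    then upscale it to the target mask size."""
    gray = heatmap.convert("L")
    if invert:
        gray = ImageOps.invert(gray)  # cheaper on the small heatmap
    return gray.resize(size, Image.LANCZOS)
```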
spezialspezial
b45e632f23 Option to directly invert the grayscale heatmap - fix 2022-11-01 22:18:00 -04:00
Lincoln Stein
2d84e28d32 Merge branch 'development' of github.com:invoke-ai/InvokeAI into development 2022-11-01 22:11:04 -04:00
damian0815
0cc39f01a3 report full size for fast latents and update conversion matrix for v1.5 2022-11-02 13:55:29 +13:00
damian0815
688d7258f1 fix a bug that broke cross attention control index mapping 2022-11-02 13:54:54 +13:00
damian0815
4513320bf1 save VRAM by not recombining tensors that were sliced apart to save it 2022-11-02 13:54:54 +13:00
spezialspezial
6c9a2761f5 Optional refined model for Txt2Mask
Don't merge right now; just wanted to show the necessary changes
2022-11-02 00:33:46 +01:00
Lincoln Stein
533fd04ef0 Merge branch 'development' of github.com:invoke-ai/InvokeAI into development 2022-11-01 17:40:36 -04:00
spezialspezial
2bdd738f03 Update txt2mask.py 2022-11-01 17:39:56 -04:00
spezialspezial
7782760541 Option to directly invert the grayscale heatmap
Theoretically it's less work to invert the image while it's small, but I can't measure a significant difference. Still, it's a handy option to have in some cases.
2022-11-01 17:39:56 -04:00
damian
cdb107dcda add option to show intermediate latent space 2022-11-01 17:39:08 -04:00
damian
be1393a41c ensure existing exception handling code also handles new exception class 2022-11-01 17:37:26 -04:00
Damian at mba
e554c2607f Rebuilt prompt parsing logic
Complete rewrite of the prompt parsing logic to be more readable and
logical, and therefore also hopefully easier to debug, maintain, and
augment.

In the process it has also become more robust to badly-formed prompts.

Squashed commit of the following:

commit 8fcfa88a16e1390d41717e940d72aed64712171c
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Sun Oct 30 17:05:57 2022 +0100

    further cleanup

commit 1a1fd78bcfeb49d072e3e6d5808aa8df15441629
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Sun Oct 30 16:07:57 2022 +0100

    cleanup and document

commit 099c9659fa8b8135876f9a5a50fe80b20bc0635c
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Sun Oct 30 15:54:58 2022 +0100

    works fully

commit 5e6887ea8c25a1e21438ff6defb381fd027d25fd
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Sun Oct 30 15:24:31 2022 +0100

    further...

commit 492fda120844d9bc1ad4ec7dd408a3374762d0ff
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Sun Oct 30 14:08:57 2022 +0100

    getting there...

commit c6aab05a8450cc3c95c8691daf38fdc64c74f52d
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Fri Oct 28 14:29:03 2022 +0200

    wip doesn't compile

commit 5e533f731cfd20cd435330eeb0012e5689e87e81
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Fri Oct 28 13:21:43 2022 +0200

    working with CrossAttentionControl but no Attention support yet

commit 9678348773431e500e110e8aede99086bb7b5955
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Fri Oct 28 13:04:52 2022 +0200

    wip rebuilding prompt parser
2022-11-01 17:37:26 -04:00
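One way to read "robust to badly-formed prompts": the parser degrades to plain text instead of raising. A toy fragment-weight parser in that spirit (the grammar and default weight here are illustrative assumptions; the committed parser is far more capable):

```python
import re

FRAGMENT_RE = re.compile(r"\((?P<text>[^()]*)\)(?P<weight>[\d.]+)?")

def parse_weighted(prompt):
    """Split a prompt into (text, weight) fragments, never raising on
    malformed input (toy grammar, not the committed parser)."""
    fragments, pos = [], 0
    for m in FRAGMENT_RE.finditer(prompt):
        if m.start() > pos:                       # plain text before the group
            fragments.append((prompt[pos:m.start()], 1.0))
        try:
            weight = float(m.group("weight")) if m.group("weight") else 1.1
        except ValueError:                        # e.g. "(x)1.2.3"
            weight = 1.0
        fragments.append((m.group("text"), weight))
        pos = m.end()
    if pos < len(prompt):                         # trailing plain text
        fragments.append((prompt[pos:], 1.0))
    return fragments

# parse_weighted("a (huge)1.4 dog") -> [("a ", 1.0), ("huge", 1.4), (" dog", 1.0)]
```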
damian0815
de2686d323 fix crash (be a little less aggressive clearing out the attention slice) 2022-11-01 17:35:43 -04:00
damian0815
0b72a4a35e be more aggressive at clearing out saved_attn_slice 2022-11-01 17:35:34 -04:00
Lincoln Stein
6215592b12 Merge branch 'development' of github.com:invoke-ai/InvokeAI into development 2022-11-01 17:34:55 -04:00
damian0815
349cc25433 fix crash (be a little less aggressive clearing out the attention slice) 2022-11-01 17:34:28 -04:00
damian0815
214d276379 be more aggressive at clearing out saved_attn_slice 2022-11-01 17:34:28 -04:00
Lincoln Stein
ab2b5a691d fix model_cache memory management issues 2022-11-01 17:23:20 -04:00
Lincoln Stein
942a202945 fix model_cache memory management issues 2022-11-01 17:22:48 -04:00
Lincoln Stein
89da42ad79 Merge branch 'pin-options-panel' of https://github.com/psychedelicious/stable-diffusion into psychedelicious-pin-options-panel
- from PR #1301
2022-10-31 09:37:13 -04:00
Damian at mba
ced9c83e96 various prompting fixes 2022-10-31 09:34:56 -04:00
Lincoln Stein
80f2cfe3e3 set default max_models to 2 internally as well as in the CLI arg 2022-10-31 09:05:38 -04:00
Lincoln Stein
dc556cb1a7 add max_load_models parameter for model cache control
- ldm.generate.Generator() now takes an argument named `max_load_models`.
  This is an integer that limits the model cache size. When the cache
  reaches the limit, it will start purging older models from the cache.

- The CLI takes an argument --max_load_models, defaulting to 2. This will
  keep one model in GPU and the other in CPU memory, switching back and
  forth quickly.

- To disable model caching entirely, pass --max_load_models=1
2022-10-31 08:55:53 -04:00
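The cache policy described above amounts to least-recently-used eviction capped at max_load_models. A minimal sketch, assuming an LRU dict (illustrative, not the ldm.generate internals):

```python
from collections import OrderedDict

class ModelCache:
    """Keep at most max_load_models loaded models, purging the least
    recently used one when the limit is reached (sketch only)."""

    def __init__(self, max_load_models=2):
        self.max_load_models = max_load_models
        self._models = OrderedDict()

    def get(self, name, loader):
        if name in self._models:
            self._models.move_to_end(name)        # most recently used
        else:
            if len(self._models) >= self.max_load_models:
                self._models.popitem(last=False)  # purge the oldest model
            self._models[name] = loader(name)
        return self._models[name]
```

With max_load_models=1 every load evicts the previous model, which is why that setting disables caching.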
Lincoln Stein
0c8f0e3386 add max_load_models parameter for model cache control
- ldm.generate.Generator() now takes an argument named `max_load_models`.
  This is an integer that limits the model cache size. When the cache
  reaches the limit, it will start purging older models from the cache.

- The CLI takes an argument --max_load_models, defaulting to 2. This will
  keep one model in GPU and the other in CPU memory, switching back and
  forth quickly.

- To disable model caching entirely, pass --max_load_models=1
2022-10-31 08:53:16 -04:00
Lincoln Stein
98f03053ba hard-code strength to 0.9 during outcropping 2022-10-31 07:52:34 -04:00
Lincoln Stein
59ef2471e1 improve outcropping performance
- applied inpainting parameters recommended by @kyle0654
- results are aesthetically pleasing
- Closes #1319
2022-10-31 07:52:26 -04:00
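Outcropping, as these commits use the term, pastes the image onto a larger canvas and inpaints the newly exposed border. A hedged sketch, with `inpaint_fn` standing in for the real inpainting call and the 0.9 strength taken from the commit above:

```python
from PIL import Image

def outcrop(image, inpaint_fn, top=0, right=0, bottom=0, left=0):
    """Extend the canvas and inpaint the new border (sketch only)."""
    w, h = image.size
    canvas = Image.new("RGB", (w + left + right, h + top + bottom))
    canvas.paste(image, (left, top))
    mask = Image.new("L", canvas.size, 255)             # white: inpaint here
    mask.paste(Image.new("L", (w, h), 0), (left, top))  # black: keep original
    return inpaint_fn(canvas, mask, strength=0.9)       # strength per commit
```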