Commit Graph

783 Commits

Author SHA1 Message Date
Lincoln Stein
48aa6416dc enable outcropping of random JPG/PNG images
- Works best with runwayML inpainting model
- Numerous code changes required to propagate seed to final metadata.
  Original code predicated on the image being generated within InvokeAI.
2022-11-10 21:53:52 -05:00
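The commit above hinges on carrying the seed through to the final image metadata even when the source is an arbitrary JPG/PNG. Below is a minimal sketch of that idea using Pillow's PNG text chunks; the key name and JSON shape are assumptions for illustration, not the actual InvokeAI code.
```python
import json
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_with_seed(image: Image.Image, path: str, seed: int, prompt: str = "") -> None:
    """Write generation parameters into a PNG text chunk so that
    outcropped or otherwise post-processed images keep their seed."""
    info = PngInfo()
    info.add_text("sd-metadata", json.dumps({"seed": seed, "prompt": prompt}))
    image.save(path, pnginfo=info)
```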
Lincoln Stein
47ddda1f64 Revert "Log strength with hires"
This reverts commit 82d4904c07.
2022-11-10 16:50:00 -05:00
Lincoln Stein
c248ae44d4 Revert "Resize hires as an image"
This reverts commit d05b1b3544.
2022-11-10 16:50:00 -05:00
Lincoln Stein
9342ad8d97 prevent crash when switching to an invalid model 2022-11-09 10:07:15 -05:00
damian0815
5214742d02 don't suppress exceptions when doing cross-attention control 2022-11-09 07:21:21 -05:00
damian0815
178f0c78d8 Fix #1362 by improving VRAM usage patterns when doing .swap()
commit ef3f7a26e242b73c2beb0195c7fd8f654ef47f55
Author: damian0815 <null@damianstewart.com>
Date:   Tue Nov 8 12:18:37 2022 +0100

    remove log spam

commit 7189d649622d4668b120b0dd278388ad672142c4
Author: damian0815 <null@damianstewart.com>
Date:   Tue Nov 8 12:10:28 2022 +0100

    change the way saved slicing strategy is applied

commit 01c40f751ab72955140165c16f95ae411732265b
Author: damian0815 <null@damianstewart.com>
Date:   Tue Nov 8 12:04:43 2022 +0100

    fix slicing_strategy_getter callsite

commit f8cfe25150a346958903316bc710737d99839923
Author: damian0815 <null@damianstewart.com>
Date:   Tue Nov 8 11:56:22 2022 +0100

    cleanup, consistent dim=0 also tested

commit 5bf9b1e890d48e962afd4a668a219b68271e5dc1
Author: damian0815 <null@damianstewart.com>
Date:   Tue Nov 8 11:34:09 2022 +0100

    refactored context, tested with non-sliced cross attention control

commit d58a46e39bf562e7459290d2444256e8c08ad0b6
Author: damian0815 <null@damianstewart.com>
Date:   Sun Nov 6 00:41:52 2022 +0100

    cleanup

commit 7e2c658b4c06fe239311b65b9bb16fa3adec7fd7
Author: damian0815 <null@damianstewart.com>
Date:   Sat Nov 5 22:57:31 2022 +0100

    disable logs

commit 20ee89d93841b070738b3d8a4385c93b097d92eb
Author: damian0815 <null@damianstewart.com>
Date:   Sat Nov 5 22:36:58 2022 +0100

    slice saved attention if necessary

commit 0a7684a22c880ec0f48cc22bfed4526358f71546
Author: damian0815 <null@damianstewart.com>
Date:   Sat Nov 5 22:32:38 2022 +0100

    raise instead of asserting

commit 7083104c7f3a0d8fd96e94a2f391de50a3c942e4
Author: damian0815 <null@damianstewart.com>
Date:   Sat Nov 5 22:31:00 2022 +0100

    store dim when saving slices

commit f7c0808ed383ec1dc70645288a798ed2aa4fa85c
Author: damian0815 <null@damianstewart.com>
Date:   Sat Nov 5 22:27:16 2022 +0100

    don't retry on exception

commit 749a721e939b3fe7c1741e7998dab6bd2c85a0cb
Author: damian0815 <null@damianstewart.com>
Date:   Sat Nov 5 22:24:50 2022 +0100

    stuff

commit 032ab90e9533be8726301ec91b97137e2aadef9a
Author: damian0815 <null@damianstewart.com>
Date:   Sat Nov 5 22:20:17 2022 +0100

    more logging

commit 3dc34b387f033482305360e605809d95a40bf6f8
Author: damian0815 <null@damianstewart.com>
Date:   Sat Nov 5 22:16:47 2022 +0100

    logs

commit 901c4c1aa4b9bcef695a6551867ec8149e6e6a93
Author: damian0815 <null@damianstewart.com>
Date:   Sat Nov 5 22:12:39 2022 +0100

    actually set save_slicing_strategy to True

commit f780e0a0a7c6b6a3db320891064da82589358c8a
Author: damian0815 <null@damianstewart.com>
Date:   Sat Nov 5 22:10:35 2022 +0100

    store slicing strategy

commit 93bb6d566fd18c5c69ef7dacc8f74ba2cf671cb7
Author: damian <git@damianstewart.com>
Date:   Sat Nov 5 20:43:48 2022 +0100

    still not it

commit 5e3a9541f8ae00bde524046963910323e20c40b7
Author: damian <git@damianstewart.com>
Date:   Sat Nov 5 17:20:02 2022 +0100

    wip offloading attention slices on-demand

commit 4c2966aa856b6f3b446216da3619ae931552ef08
Author: damian0815 <null@damianstewart.com>
Date:   Sat Nov 5 15:47:40 2022 +0100

    pre-emptive offloading, idk if it works

commit 572576755e9f0a878d38e8173e485126c0efbefb
Author: root <you@example.com>
Date:   Sat Nov 5 11:25:32 2022 +0000

    push attention slices to cpu. slow but saves memory.

commit b57c83a68f2ac03976ebc89ce2ff03812d6d185f
Author: damian0815 <null@damianstewart.com>
Date:   Sat Nov 5 12:04:22 2022 +0100

    verbose logging

commit 3a5dae116f110a96585d9eb71d713b5ed2bc3d2b
Author: damian0815 <null@damianstewart.com>
Date:   Sat Nov 5 11:50:48 2022 +0100

    wip fixing mem strategy crash (4 test on runpod)

commit 3cf237db5fae0c7b0b4cc3c47c81830bdb2ae7de
Author: damian0815 <null@damianstewart.com>
Date:   Fri Nov 4 09:02:40 2022 +0100

    wip, only works on cuda
2022-11-09 07:21:21 -05:00
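The squashed commits above revolve around one pattern: save attention slices on the CPU (remembering the slicing strategy and dimension) and bring them back to the GPU only when needed. A rough, self-contained sketch of that pattern follows; the class and method names are illustrative, not the actual ldm code.
```python
import torch

class AttentionSliceStore:
    """Hold saved attention slices on the CPU to free VRAM, restoring each
    slice to its original device on demand (a memory-for-speed trade-off)."""

    def __init__(self) -> None:
        self.slices = {}   # offset -> tensor parked on the CPU
        self.dim = None    # dimension along which slices were taken

    def save(self, offset: int, attn_slice: torch.Tensor, dim) -> None:
        self.dim = dim                              # remember the slicing strategy
        self.slices[offset] = attn_slice.to("cpu")  # push to CPU immediately

    def load(self, offset: int, device: torch.device) -> torch.Tensor:
        return self.slices[offset].to(device)       # restore on demand

    def clear(self) -> None:
        self.slices.clear()
```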
Lincoln Stein
5606af5083 enable outcropping of random JPG/PNG images
- Works best with runwayML inpainting model
- Numerous code changes required to propagate seed to final metadata.
  Original code predicated on the image being generated within InvokeAI.
2022-11-08 15:22:32 +00:00
Lincoln Stein
ededeaed86 Merge branch 'add-invokeai-initfile' into development 2022-11-08 13:41:11 +00:00
Lincoln Stein
636620b1d5 change initfile to ~/.invokeai
- adjust documentation
- also fix 'clipseg_models' to 'clipseg', which seems to be working now
2022-11-08 03:26:16 +00:00
Lincoln Stein
21961f0c32 Revert "Use array slicing to calc ddim timesteps"
This reverts commit 1f0c5b4cf1.
2022-11-07 15:37:53 -05:00
Lincoln Stein
1fe41146f0 add support for an initialization file, invokeai.init
- Place preferred startup command switches in a file named
  "invokeai.init". The file can consist of a single line of switches
  such as "--web --steps=28", a series of switches on each
  line, or any combination of the two.

 Example:
 ```
   --web
   --host=0.0.0.0
   --steps=28
   --grid
   -f 0.6 -C 11.0 -A k_euler_a
```

- The following options, which were previously only available within
  the CLI, are now available on the command line as well:

  --steps
  --strength
  --cfg_scale
  --width
  --height
  --fit
2022-11-06 22:02:45 -05:00
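A minimal sketch of how such an init file can be folded into normal argument parsing: read the file, split each line into switches, and put them ahead of the real command line so explicit arguments win. The path handling and function name here are assumptions, not the commit's actual code.
```python
import os
import shlex
import sys

def load_init_switches(path: str = "invokeai.init") -> list:
    """Collect switches from the init file: one or many per line,
    blank lines and comment lines ignored."""
    if not os.path.exists(path):
        return []
    switches = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#"):
                switches.extend(shlex.split(line))
    return switches

# File switches come first so explicit command-line arguments override them.
argv = load_init_switches() + sys.argv[1:]
```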
Kyle Schouviller
f91fd27624 Bug fix for inpaint size 2022-11-06 09:25:50 -08:00
Kyle Schouviller
09e41e8f76 Add inpaint size options to inpaint at a larger size than the actual inpaint image, then scale back down for recombination 2022-11-06 09:25:50 -08:00
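A condensed sketch of the idea in the commit above: upscale the image and mask to a larger working size, inpaint there, then scale the result back down and recombine it with the original. `inpaint_fn` is a placeholder for whatever generator actually fills the masked region.
```python
from PIL import Image

def inpaint_scaled(image: Image.Image, mask: Image.Image, inpaint_fn,
                   inpaint_size: tuple) -> Image.Image:
    """Inpaint at a larger working size, then scale the result back down
    and recombine it with the original image."""
    original_size = image.size
    big_image = image.resize(inpaint_size, Image.LANCZOS)
    big_mask = mask.resize(inpaint_size, Image.LANCZOS)
    result = inpaint_fn(big_image, big_mask)              # fills the masked region
    result = result.resize(original_size, Image.LANCZOS)  # back to original size
    # Keep original pixels outside the mask, inpainted pixels inside it.
    return Image.composite(result, image, mask.convert("L"))
```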
Lincoln Stein
17053ad8b7 fix duplicated argument introduced by conflict resolution 2022-11-05 16:01:55 -04:00
Lincoln Stein
fefb4dc1f8
Merge branch 'development' into fix_generate.py 2022-11-05 12:47:35 -07:00
Craig
d05b1b3544 Resize hires as an image 2022-11-05 11:54:23 -07:00
Craig
82d4904c07 Log strength with hires 2022-11-05 11:54:23 -07:00
Matthias Wild
36870a8f53
Merge branch 'development' into merge-main-into-development 2022-11-04 16:25:00 +01:00
damian0815
b70420951d fix parsing error doing eg forest ().swap(in winter) 2022-11-03 20:15:23 -04:00
wfng92
1f0c5b4cf1 Use array slicing to calc ddim timesteps 2022-11-03 20:11:04 -04:00
Lincoln Stein
174a9b78b0 Bring main back into a consistent state with other branches
- Due to misuse of rebase command, main was transiently
  in an inconsistent state.

- This repairs the damage, and adds a few post-release
  patches that ensure stable conda installs on Mac and Windows.
2022-11-03 15:44:06 -04:00
Lincoln Stein
240e5486c8 Merge branch 'spezialspezial-patch-9' into development 2022-11-02 18:35:00 -04:00
Lincoln Stein
aa247e68be use refined model by default 2022-11-02 18:29:34 -04:00
Lincoln Stein
895c47fd11 Merge branch 'patch-9' of https://github.com/spezialspezial/stable-diffusion into spezialspezial-patch-9 2022-11-02 18:24:55 -04:00
damian
d85cd99f17 add option to show intermediate latent space 2022-11-02 17:53:11 -04:00
spezialspezial
825fa6977d Update outcrop.py 2022-11-02 16:33:35 -04:00
spezialspezial
e332529fbd Prevent outcrop error when no callback is supplied 2022-11-02 16:33:35 -04:00
damian0815
2468a28e66 save VRAM by not recombining tensors that have been sliced to save VRAM 2022-11-01 22:39:48 -04:00
damian0815
e3ed748191 fix a bug that broke cross attention control index mapping 2022-11-01 22:39:39 -04:00
damian0815
3f5bf7ac44 report full size for fast latents and update conversion matrix for v1.5 2022-11-01 22:39:27 -04:00
spezialspezial
5e87062cf8 Option to directly invert the grayscale heatmap - fix 2022-11-01 22:24:31 -04:00
spezialspezial
3e7a459990 Update txt2mask.py 2022-11-01 22:24:31 -04:00
spezialspezial
bbf4c03e50 Option to directly invert the grayscale heatmap
Theoretically it's less work to invert the image while it's small, but I can't measure a significant difference. Still, it's a handy option to have in some cases.
2022-11-01 22:24:31 -04:00
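A small sketch of the option described above, assuming a Pillow workflow: invert the clipseg heatmap while it is still at its small native resolution, then upscale it to the final mask size. Function and argument names are illustrative.
```python
from PIL import Image, ImageOps

def heatmap_to_mask(heatmap: Image.Image, size: tuple, invert: bool = False) -> Image.Image:
    """Optionally invert the grayscale heatmap while it is still small,
    then upscale it to the target mask size."""
    gray = heatmap.convert("L")
    if invert:
        gray = ImageOps.invert(gray)  # cheap while the image is small
    return gray.resize(size, Image.BICUBIC)
```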
spezialspezial
b45e632f23 Option to directly invert the grayscale heatmap - fix 2022-11-01 22:18:00 -04:00
Lincoln Stein
2d84e28d32 Merge branch 'development' of github.com:invoke-ai/InvokeAI into development 2022-11-01 22:11:04 -04:00
damian0815
0cc39f01a3 report full size for fast latents and update conversion matrix for v1.5 2022-11-02 13:55:29 +13:00
damian0815
688d7258f1 fix a bug that broke cross attention control index mapping 2022-11-02 13:54:54 +13:00
damian0815
4513320bf1 save VRAM by not recombining tensors that have been sliced to save VRAM 2022-11-02 13:54:54 +13:00
spezialspezial
6c9a2761f5
Optional refined model for Txt2Mask
Don't merge right now, just wanted to show the necessary changes
2022-11-02 00:33:46 +01:00
Lincoln Stein
533fd04ef0 Merge branch 'development' of github.com:invoke-ai/InvokeAI into development 2022-11-01 17:40:36 -04:00
spezialspezial
2bdd738f03 Update txt2mask.py 2022-11-01 17:39:56 -04:00
spezialspezial
7782760541 Option to directly invert the grayscale heatmap
Theoretically it's less work to invert the image while it's small, but I can't measure a significant difference. Still, it's a handy option to have in some cases.
2022-11-01 17:39:56 -04:00
damian
cdb107dcda add option to show intermediate latent space 2022-11-01 17:39:08 -04:00
damian
be1393a41c ensure existing exception handling code also handles new exception class 2022-11-01 17:37:26 -04:00
Damian at mba
e554c2607f Rebuilt prompt parsing logic
Complete re-write of the prompt parsing logic to be more readable and
logical, and therefore also hopefully easier to debug, maintain, and
augment.

In the process it has also become more robust to badly-formed prompts.

Squashed commit of the following:

commit 8fcfa88a16e1390d41717e940d72aed64712171c
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Sun Oct 30 17:05:57 2022 +0100

    further cleanup

commit 1a1fd78bcfeb49d072e3e6d5808aa8df15441629
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Sun Oct 30 16:07:57 2022 +0100

    cleanup and document

commit 099c9659fa8b8135876f9a5a50fe80b20bc0635c
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Sun Oct 30 15:54:58 2022 +0100

    works fully

commit 5e6887ea8c25a1e21438ff6defb381fd027d25fd
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Sun Oct 30 15:24:31 2022 +0100

    further...

commit 492fda120844d9bc1ad4ec7dd408a3374762d0ff
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Sun Oct 30 14:08:57 2022 +0100

    getting there...

commit c6aab05a8450cc3c95c8691daf38fdc64c74f52d
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Fri Oct 28 14:29:03 2022 +0200

    wip doesn't compile

commit 5e533f731cfd20cd435330eeb0012e5689e87e81
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Fri Oct 28 13:21:43 2022 +0200

    working with CrossAttentionControl but no Attention support yet

commit 9678348773431e500e110e8aede99086bb7b5955
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Fri Oct 28 13:04:52 2022 +0200

    wip rebuilding prompt parser
2022-11-01 17:37:26 -04:00
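The rewritten parser itself is a full grammar; purely as a toy illustration, here is a regex that pulls out the `original.swap(replacement)` pairs used by cross-attention control (the syntax referenced in nearby commits such as `forest ().swap(in winter)`). This is not the real parsing logic.
```python
import re

# Match either "(some words).swap(replacement)" or "word.swap(replacement)".
SWAP_RE = re.compile(r'(\([^)]*\)|\S+)\.swap\(([^)]*)\)')

def find_swaps(prompt: str) -> list:
    """Toy extraction of cross-attention-control edits from a prompt string."""
    return [(orig.strip("()").strip(), repl.strip())
            for orig, repl in SWAP_RE.findall(prompt)]

print(find_swaps("a forest ().swap(in winter)"))  # [('', 'in winter')]
print(find_swaps("a cat.swap(dog) on a mat"))     # [('cat', 'dog')]
```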
damian0815
de2686d323 fix crash (be a little less aggressive clearing out the attention slice) 2022-11-01 17:35:43 -04:00
damian0815
0b72a4a35e be more aggressive at clearing out saved_attn_slice 2022-11-01 17:35:34 -04:00
Lincoln Stein
6215592b12 Merge branch 'development' of github.com:invoke-ai/InvokeAI into development 2022-11-01 17:34:55 -04:00
damian0815
349cc25433 fix crash (be a little less aggressive clearing out the attention slice) 2022-11-01 17:34:28 -04:00
damian0815
214d276379 be more aggressive at clearing out saved_attn_slice 2022-11-01 17:34:28 -04:00
Lincoln Stein
ab2b5a691d fix model_cache memory management issues 2022-11-01 17:23:20 -04:00
Lincoln Stein
942a202945 fix model_cache memory management issues 2022-11-01 17:22:48 -04:00
Lincoln Stein
89da42ad79 Merge branch 'pin-options-panel' of https://github.com/psychedelicious/stable-diffusion into psychedelicious-pin-options-panel
- from PR #1301
2022-10-31 09:37:13 -04:00
Damian at mba
ced9c83e96 various prompting fixes 2022-10-31 09:34:56 -04:00
Lincoln Stein
80f2cfe3e3 set default max_models to 2 internally as well as in the arg 2022-10-31 09:05:38 -04:00
Lincoln Stein
dc556cb1a7 add max_load_models parameter for model cache control
- ldm.generate.Generator() now takes an argument named `max_load_models`.
  This is an integer that limits the model cache size. When the cache
  reaches the limit, it will start purging older models from the cache.

- The CLI takes an argument --max_load_models, defaulting to 2. This keeps
  one model in GPU memory and the other in CPU memory and switches back and
  forth quickly.

- To not cache models at all, pass --max_load_models=1
2022-10-31 08:55:53 -04:00
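A minimal sketch of the cache policy described above, assuming a plain LRU eviction; the names here are illustrative, not the actual ldm.generate implementation.
```python
from collections import OrderedDict

class ModelCache:
    """Keep at most `max_load_models` models loaded; evict the least
    recently used one when the limit is exceeded."""

    def __init__(self, max_load_models: int = 2):
        self.max_load_models = max_load_models
        self._models = OrderedDict()

    def get(self, name: str, loader):
        if name in self._models:
            self._models.move_to_end(name)       # mark as most recently used
        else:
            self._models[name] = loader(name)    # load on first request
            while len(self._models) > self.max_load_models:
                self._models.popitem(last=False) # purge the oldest model
        return self._models[name]
```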
Lincoln Stein
0c8f0e3386 add max_load_models parameter for model cache control
- ldm.generate.Generator() now takes an argument named `max_load_models`.
  This is an integer that limits the model cache size. When the cache
  reaches the limit, it will start purging older models from the cache.

- The CLI takes an argument --max_load_models, defaulting to 2. This keeps
  one model in GPU memory and the other in CPU memory and switches back and
  forth quickly.

- To not cache models at all, pass --max_load_models=1
2022-10-31 08:53:16 -04:00
Lincoln Stein
98f03053ba hard-code strength to 0.9 during outcropping 2022-10-31 07:52:34 -04:00
Lincoln Stein
59ef2471e1 improve outcropping performance
- applied inpainting parameters recommended by @kyle0654
- results are aesthetically pleasing
- Closes #1319
2022-10-31 07:52:26 -04:00
Lincoln Stein
ce7651944d adapt outcrop.py to use new outpainting code
- unfortunately it does not look as good as the old code
  which just used plain inpainting.
2022-10-31 07:52:13 -04:00
Lincoln Stein
a3e0b285d8 fix embiggen crash 2022-10-31 07:52:06 -04:00
Lincoln Stein
3cdfedc649 hard-code strength to 0.9 during outcropping 2022-10-31 01:54:32 -04:00
Lincoln Stein
531f596bd1 improve outcropping performance
- applied inpainting parameters recommended by @kyle0654
- results are aesthetically pleasing
- Closes #1319
2022-10-31 01:37:12 -04:00
Lincoln Stein
8683426041 add seamless to metadata 2022-10-31 00:40:30 -04:00
Lincoln Stein
582fee6c3a adapt outcrop.py to use new outpainting code
- unfortunately it does not look as good as the old code
  which just used plain inpainting.
2022-10-31 00:20:53 -04:00
Lincoln Stein
2b39d1677c fix embiggen crash 2022-10-30 22:57:15 -04:00
Lincoln Stein
23d54ee69e fix mps crash with safety checker 2022-10-30 16:54:06 -04:00
Lincoln Stein
f70af7afb9 remove debug image gen from outcrop 2022-10-30 12:19:43 -04:00
Lincoln Stein
e7368d7231 preload_models interactively downloads sd model files 2022-10-30 12:19:05 -04:00
Lincoln Stein
231dfe01f4 fix incorrect thresholding reporting for karras noise; close #1300 2022-10-30 10:35:55 -04:00
spezialspezial
c2fab45a6e Prevent indexing error for mode RGB
I have not explicitly tested mode P
2022-10-29 18:20:53 -04:00
Lincoln Stein
0f4413da7d Merge branch 'inpainting-rebase' of https://github.com/psychedelicious/stable-diffusion into psychedelicious-inpainting-rebase 2022-10-29 13:42:00 -04:00
Lincoln Stein
fdf9b1c40c fix CLI inpainting crash 2022-10-29 11:47:06 -04:00
Lincoln Stein
13f26a99b8 documentation and usability fixes 2022-10-29 10:37:38 -04:00
Lincoln Stein
ef68a419f1 preload_models.py script downloads the weight files
- user can select which weight files to download using huggingface cache
- user must log in to huggingface, generate an access token, and accept
  license terms the very first time this is run. After that, everything
  works automatically.
- added placeholder for docs for installing models
- also got rid of unused config files. Hopefully they weren't needed
  for textual inversion; I don't think they were.
2022-10-29 01:02:45 -04:00
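A sketch of the interactive download flow described above, assuming the huggingface_hub client; the repo and filename shown are examples, and the real script adds model selection and license-acceptance checks.
```python
from huggingface_hub import hf_hub_download, login

# One-time setup: the user logs in with an access token and must have
# accepted the model license on the Hugging Face site beforehand.
login()  # prompts for the token interactively

# The checkpoint lands in the local Hugging Face cache and is reused later.
weights_path = hf_hub_download(
    repo_id="runwayml/stable-diffusion-v1-5",   # example repo
    filename="v1-5-pruned-emaonly.ckpt",        # example weights file
)
print(f"weights cached at {weights_path}")
```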
Kyle Schouviller
43de16cae4 Don't try to tile fill if image doesn't have an alpha layer 2022-10-29 04:25:27 +11:00
Damian at mba
864d080502 handle all unicode characters 2022-10-28 10:39:12 -04:00
Lincoln Stein
05b8de5300 fix --hires to support inpainting model 2022-10-27 23:12:21 -04:00
Lincoln Stein
387f796ebe
Merge branch 'development' into development 2022-10-27 23:04:04 -04:00
Lincoln Stein
3033331f65 remove unneeded warnings from attention.py 2022-10-27 22:50:06 -04:00
Lincoln Stein
362b234cd1 fix long-standing issue with metadata retrieval
The Args object would crash when trying to retrieve metadata from
an image file that did not contain InvokeAI-generated metadata, such
as a JPG. This corrects that by returning dummy values (seed of zero,
prompt of '') to avoid downstream breakage.
2022-10-27 22:43:34 -04:00
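A compact sketch of the fallback behaviour described above; the metadata key and return shape are assumptions for illustration.
```python
import json
from PIL import Image

def retrieve_metadata(path: str) -> dict:
    """Return generation metadata if the image carries it, otherwise dummy
    values (seed of zero, empty prompt) so downstream code keeps working."""
    img = Image.open(path)
    raw = getattr(img, "text", {}).get("sd-metadata")  # absent for plain JPG/PNG
    if raw:
        return json.loads(raw)
    return {"seed": 0, "prompt": ""}
```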
Lincoln Stein
2115874587 resolve conflicts with outpainting implementation 2022-10-27 18:06:38 -04:00
Lincoln Stein
cd5141f3d1 fix issues with outpaint merge 2022-10-27 18:02:08 -04:00
Lincoln Stein
b815aa2130
Merge branch 'development' into outpaint 2022-10-27 17:17:34 -04:00
Lincoln Stein
19a6e904ec resolved whitespace difference 2022-10-27 17:12:22 -04:00
Lincoln Stein
1200fbd3bd add threshold for switchover from Karras to LDM noise schedule 2022-10-27 17:07:50 -04:00
Lincoln Stein
d25bf7a55a cut over from karras to model noise schedule for higher steps
The k_samplers come with a "karras" noise schedule which performs
very well at low step counts but becomes noisy at higher ones.

This commit introduces a threshold (currently 30 steps) at which the
k samplers will switch over from using karras to the older model
noise schedule.
2022-10-27 17:06:49 -04:00
Damian at mba
245cf606a3 be more forgiving about prompts with ((words)) 2022-10-27 22:36:33 +02:00
Lincoln Stein
943616044a Merge branch 'switch-ksampler-noise-scheduler-adaptively' into development
- This sets a step switchover point at which the k-samplers stop using the
  Karras noise schedule and start using the LatentDiffusion noise schedule.
  The advantage of this is that the Karras schedule produces excellent
  results at low step counts but starts to become unstable at high
  steps.

- A new command argument, --karras_max, lets the user set where the
  switchover occurs. The default is 29 (steps 1-29 use Karras;
  30 or greater use LDM).

- Tildebyte, sorry to do a fast forward three-way merge for this
  but rebasing was just too painful due to extensive recent
  changes to the diffuser code.
2022-10-27 16:11:26 -04:00
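A sketch of the switchover logic described above, assuming the k_diffusion helpers; the sigma bounds and wrapper API here are placeholders rather than the exact values used by the samplers.
```python
import torch
from k_diffusion.sampling import get_sigmas_karras

KARRAS_MAX = 29  # steps at or below this use the Karras schedule

def choose_sigmas(model_wrap, steps: int, device: str = "cuda") -> torch.Tensor:
    """Karras sigmas work very well at low step counts but can become unstable
    at high ones, so fall back to the model's own (LDM) schedule beyond the
    threshold. `model_wrap` stands in for a k_diffusion CompVisDenoiser."""
    if steps <= KARRAS_MAX:
        return get_sigmas_karras(n=steps, sigma_min=0.1, sigma_max=10.0, device=device)
    return model_wrap.get_sigmas(steps)
```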
Lincoln Stein
943808b925 add threshold for switchover from Karras to LDM noise schedule 2022-10-27 15:50:32 -04:00
Damian at mba
e20108878c fix attention weight inside .swap() 2022-10-27 21:17:23 +02:00
Damian at mba
f73d349dfe refactor hybrid and cross attention control codepaths for readability 2022-10-27 19:40:37 +02:00
Damian at mba
dc86fc92ce fix crash parsing empty prompt "" 2022-10-27 19:01:54 +02:00
Lincoln Stein
aa785c3ef1 ready for merge after documentation added 2022-10-27 11:55:00 -04:00
Lincoln Stein
0eb07b7488 Merge branch 'outpaint' of https://github.com/Kyle0654/InvokeAI into Kyle0654-outpaint 2022-10-27 09:16:40 -04:00
Lincoln Stein
16e7cbdb38 tweaks to documentation and call signature for advanced prompting 2022-10-27 08:30:09 -04:00
Damian at mba
135c62f1a4 fix issue with hot-dog, improve () suppression 2022-10-27 07:37:48 -04:00
Lincoln Stein
799dc6d0df acceptable integration of new prompting system and inpainting
This was a difficult merge because both PR #1108 and #1243 made
changes to obscure parts of the diffusion code.

- prompt weighting, merging and cross-attention working
  - cross-attention does not work with runwayML inpainting
    model, but weighting and merging are tested and working
- CLI command parsing code rewritten in order to get embedded
  quotes right
- --hires now works with runwayML inpainting
- --embiggen does not work with runwayML and will give an error
- Added an --invert option to invert masks applied to inpainting
- Updated documentation
2022-10-27 01:51:35 -04:00
Damian at mba
79689e87ce fix crash making embeddings from too-long prompts with attention weights 2022-10-26 22:42:17 -04:00
Lincoln Stein
0d0481ce75 inpaint model progress
- working with plain prompts, weighted prompts and merge prompts
- not tested with prompt2prompt
2022-10-26 22:40:01 -04:00