damian0815
178f0c78d8
Fix #1362 by improving VRAM usage patterns when doing .swap()
...
commit ef3f7a26e242b73c2beb0195c7fd8f654ef47f55
Author: damian0815 <null@damianstewart.com>
Date: Tue Nov 8 12:18:37 2022 +0100
remove log spam
commit 7189d649622d4668b120b0dd278388ad672142c4
Author: damian0815 <null@damianstewart.com>
Date: Tue Nov 8 12:10:28 2022 +0100
change the way saved slicing strategy is applied
commit 01c40f751ab72955140165c16f95ae411732265b
Author: damian0815 <null@damianstewart.com>
Date: Tue Nov 8 12:04:43 2022 +0100
fix slicing_strategy_getter callsite
commit f8cfe25150a346958903316bc710737d99839923
Author: damian0815 <null@damianstewart.com>
Date: Tue Nov 8 11:56:22 2022 +0100
cleanup, consistent dim=0 also tested
commit 5bf9b1e890d48e962afd4a668a219b68271e5dc1
Author: damian0815 <null@damianstewart.com>
Date: Tue Nov 8 11:34:09 2022 +0100
refactored context, tested with non-sliced cross attention control
commit d58a46e39bf562e7459290d2444256e8c08ad0b6
Author: damian0815 <null@damianstewart.com>
Date: Sun Nov 6 00:41:52 2022 +0100
cleanup
commit 7e2c658b4c06fe239311b65b9bb16fa3adec7fd7
Author: damian0815 <null@damianstewart.com>
Date: Sat Nov 5 22:57:31 2022 +0100
disable logs
commit 20ee89d93841b070738b3d8a4385c93b097d92eb
Author: damian0815 <null@damianstewart.com>
Date: Sat Nov 5 22:36:58 2022 +0100
slice saved attention if necessary
commit 0a7684a22c880ec0f48cc22bfed4526358f71546
Author: damian0815 <null@damianstewart.com>
Date: Sat Nov 5 22:32:38 2022 +0100
raise instead of asserting
commit 7083104c7f3a0d8fd96e94a2f391de50a3c942e4
Author: damian0815 <null@damianstewart.com>
Date: Sat Nov 5 22:31:00 2022 +0100
store dim when saving slices
commit f7c0808ed383ec1dc70645288a798ed2aa4fa85c
Author: damian0815 <null@damianstewart.com>
Date: Sat Nov 5 22:27:16 2022 +0100
don't retry on exception
commit 749a721e939b3fe7c1741e7998dab6bd2c85a0cb
Author: damian0815 <null@damianstewart.com>
Date: Sat Nov 5 22:24:50 2022 +0100
stuff
commit 032ab90e9533be8726301ec91b97137e2aadef9a
Author: damian0815 <null@damianstewart.com>
Date: Sat Nov 5 22:20:17 2022 +0100
more logging
commit 3dc34b387f033482305360e605809d95a40bf6f8
Author: damian0815 <null@damianstewart.com>
Date: Sat Nov 5 22:16:47 2022 +0100
logs
commit 901c4c1aa4b9bcef695a6551867ec8149e6e6a93
Author: damian0815 <null@damianstewart.com>
Date: Sat Nov 5 22:12:39 2022 +0100
actually set save_slicing_strategy to True
commit f780e0a0a7c6b6a3db320891064da82589358c8a
Author: damian0815 <null@damianstewart.com>
Date: Sat Nov 5 22:10:35 2022 +0100
store slicing strategy
commit 93bb6d566fd18c5c69ef7dacc8f74ba2cf671cb7
Author: damian <git@damianstewart.com>
Date: Sat Nov 5 20:43:48 2022 +0100
still not it
commit 5e3a9541f8ae00bde524046963910323e20c40b7
Author: damian <git@damianstewart.com>
Date: Sat Nov 5 17:20:02 2022 +0100
wip offloading attention slices on-demand
commit 4c2966aa856b6f3b446216da3619ae931552ef08
Author: damian0815 <null@damianstewart.com>
Date: Sat Nov 5 15:47:40 2022 +0100
pre-emptive offloading, idk if it works
commit 572576755e9f0a878d38e8173e485126c0efbefb
Author: root <you@example.com>
Date: Sat Nov 5 11:25:32 2022 +0000
push attention slices to cpu. slow but saves memory.
commit b57c83a68f2ac03976ebc89ce2ff03812d6d185f
Author: damian0815 <null@damianstewart.com>
Date: Sat Nov 5 12:04:22 2022 +0100
verbose logging
commit 3a5dae116f110a96585d9eb71d713b5ed2bc3d2b
Author: damian0815 <null@damianstewart.com>
Date: Sat Nov 5 11:50:48 2022 +0100
wip fixing mem strategy crash (4 test on runpod)
commit 3cf237db5fae0c7b0b4cc3c47c81830bdb2ae7de
Author: damian0815 <null@damianstewart.com>
Date: Fri Nov 4 09:02:40 2022 +0100
wip, only works on cuda
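The commits above circle around one technique: saving attention slices to CPU instead of keeping them resident in VRAM, then bringing them back onto the GPU on demand. Below is a minimal sketch of that offload/reload pattern; the class and method names are illustrative assumptions only, not the code introduced by this commit.
```python
import torch

class AttentionSliceStore:
    """Illustrative offload store: slices live on CPU between uses."""

    def __init__(self):
        self._slices = {}  # key -> (cpu_tensor, original_device)

    def save(self, key, attn_slice: torch.Tensor):
        # Move the slice to CPU so it does not occupy VRAM between steps.
        self._slices[key] = (attn_slice.to("cpu"), attn_slice.device)

    def load(self, key) -> torch.Tensor:
        # Copy the slice back to the device it came from, only when needed.
        cpu_slice, device = self._slices[key]
        return cpu_slice.to(device)

    def clear(self):
        self._slices.clear()
```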
2022-11-09 07:21:21 -05:00
Lincoln Stein
ededeaed86
Merge branch 'add-invokeai-initfile' into development
2022-11-08 13:41:11 +00:00
Lincoln Stein
636620b1d5
change initfile to ~/.invokeai
...
- adjust documentation
- also fix 'clipseg_models' to 'clipseg', which seems to be working now
2022-11-08 03:26:16 +00:00
Lincoln Stein
21961f0c32
Revert "Use array slicing to calc ddim timesteps"
...
This reverts commit 1f0c5b4cf1.
2022-11-07 15:37:53 -05:00
Lincoln Stein
1fe41146f0
add support for an initialization file, invokeai.init
...
- Place preferred startup command switches in a file named
"invokeai.init". The file can consist of a single line of switches
such as "--web --steps=28", a series of switches spread over several
lines, or any combination of the two; a parsing sketch follows the
option list below.
Example:
```
--web
--host=0.0.0.0
--steps=28
--grid
-f 0.6 -C 11.0 -A k_euler_a
```
- The following options, which were previously only available within
the CLI, are now available on the command line as well:
--steps
--strength
--cfg_scale
--width
--height
--fit
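A hedged sketch of how an init file like this could be folded into normal argument parsing; `read_init_file`, the file path, and the merge order are assumptions for illustration, not the commit's actual implementation.
```python
import shlex
from pathlib import Path

def read_init_file(path: Path) -> list:
    """Return the switches found in the init file, skipping blanks and comments."""
    if not path.exists():
        return []
    tokens = []
    for line in path.read_text().splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            tokens.extend(shlex.split(line))  # split a line into individual switches
    return tokens

# Init-file switches would go first so that switches typed on the command
# line override them, e.g.:
#   argv = read_init_file(Path("invokeai.init")) + sys.argv[1:]
#   opt = arg_parser.parse_args(argv)
```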
2022-11-06 22:02:45 -05:00
Kyle Schouviller
f91fd27624
Bug fix for inpaint size
2022-11-06 09:25:50 -08:00
Kyle Schouviller
09e41e8f76
Add inpaint size options to inpaint at a larger size than the actual inpaint image, then scale back down for recombination
2022-11-06 09:25:50 -08:00
Lincoln Stein
17053ad8b7
fix duplicated argument introduced by conflict resolution
2022-11-05 16:01:55 -04:00
Lincoln Stein
fefb4dc1f8
Merge branch 'development' into fix_generate.py
2022-11-05 12:47:35 -07:00
Craig
d05b1b3544
Resize hires as an image
2022-11-05 11:54:23 -07:00
Craig
82d4904c07
Log strength with hires
2022-11-05 11:54:23 -07:00
Matthias Wild
36870a8f53
Merge branch 'development' into merge-main-into-development
2022-11-04 16:25:00 +01:00
damian0815
b70420951d
fix parsing error doing eg forest ().swap(in winter)
2022-11-03 20:15:23 -04:00
wfng92
1f0c5b4cf1
Use array slicing to calc ddim timesteps
2022-11-03 20:11:04 -04:00
Lincoln Stein
174a9b78b0
Bring main back into a consistent state with other branches
...
- Due to misuse of rebase command, main was transiently
in an inconsistent state.
- This repairs the damage, and adds a few post-release
patches that ensure stable conda installs on Mac and Windows.
2022-11-03 15:44:06 -04:00
Lincoln Stein
240e5486c8
Merge branch 'spezialspezial-patch-9' into development
2022-11-02 18:35:00 -04:00
Lincoln Stein
aa247e68be
use refined model by default
2022-11-02 18:29:34 -04:00
Lincoln Stein
895c47fd11
Merge branch 'patch-9' of https://github.com/spezialspezial/stable-diffusion into spezialspezial-patch-9
2022-11-02 18:24:55 -04:00
damian
d85cd99f17
add option to show intermediate latent space
2022-11-02 17:53:11 -04:00
spezialspezial
825fa6977d
Update outcrop.py
2022-11-02 16:33:35 -04:00
spezialspezial
e332529fbd
Prevent outcrop error when no callback is supplied
2022-11-02 16:33:35 -04:00
damian0815
2468a28e66
save VRAM by not recombining tensors that have been sliced to save VRAM
2022-11-01 22:39:48 -04:00
damian0815
e3ed748191
fix a bug that broke cross attention control index mapping
2022-11-01 22:39:39 -04:00
damian0815
3f5bf7ac44
report full size for fast latents and update conversion matrix for v1.5
2022-11-01 22:39:27 -04:00
spezialspezial
5e87062cf8
Option to directly invert the grayscale heatmap - fix
2022-11-01 22:24:31 -04:00
spezialspezial
3e7a459990
Update txt2mask.py
2022-11-01 22:24:31 -04:00
spezialspezial
bbf4c03e50
Option to directly invert the grayscale heatmap
...
Theoretically it's less work to invert the image while it's small, but I can't measure a significant difference. Still, it's a handy option to have in some cases.
2022-11-01 22:24:31 -04:00
spezialspezial
b45e632f23
Option to directly invert the grayscale heatmap - fix
2022-11-01 22:18:00 -04:00
Lincoln Stein
2d84e28d32
Merge branch 'development' of github.com:invoke-ai/InvokeAI into development
2022-11-01 22:11:04 -04:00
damian0815
0cc39f01a3
report full size for fast latents and update conversion matrix for v1.5
2022-11-02 13:55:29 +13:00
damian0815
688d7258f1
fix a bug that broke cross attention control index mapping
2022-11-02 13:54:54 +13:00
damian0815
4513320bf1
save VRAM by not recombining tensors that have been sliced to save VRAM
2022-11-02 13:54:54 +13:00
spezialspezial
6c9a2761f5
Optional refined model for Txt2Mask
...
Don't merge right now, just wanted to show the necessary changes
2022-11-02 00:33:46 +01:00
Lincoln Stein
533fd04ef0
Merge branch 'development' of github.com:invoke-ai/InvokeAI into development
2022-11-01 17:40:36 -04:00
spezialspezial
2bdd738f03
Update txt2mask.py
2022-11-01 17:39:56 -04:00
spezialspezial
7782760541
Option to directly invert the grayscale heatmap
...
Theoretically it's less work to invert the image while it's small, but I can't measure a significant difference. Still, it's a handy option to have in some cases.
2022-11-01 17:39:56 -04:00
damian
cdb107dcda
add option to show intermediate latent space
2022-11-01 17:39:08 -04:00
damian
be1393a41c
ensure existing exception handling code also handles new exception class
2022-11-01 17:37:26 -04:00
Damian at mba
e554c2607f
Rebuilt prompt parsing logic
...
Complete re-write of the prompt parsing logic to be more readable and
logical, and therefore also hopefully easier to debug, maintain, and
augment.
In the process it has also become more robust to badly-formed prompts.
Squashed commit of the following:
commit 8fcfa88a16e1390d41717e940d72aed64712171c
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date: Sun Oct 30 17:05:57 2022 +0100
further cleanup
commit 1a1fd78bcfeb49d072e3e6d5808aa8df15441629
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date: Sun Oct 30 16:07:57 2022 +0100
cleanup and document
commit 099c9659fa8b8135876f9a5a50fe80b20bc0635c
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date: Sun Oct 30 15:54:58 2022 +0100
works fully
commit 5e6887ea8c25a1e21438ff6defb381fd027d25fd
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date: Sun Oct 30 15:24:31 2022 +0100
further...
commit 492fda120844d9bc1ad4ec7dd408a3374762d0ff
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date: Sun Oct 30 14:08:57 2022 +0100
getting there...
commit c6aab05a8450cc3c95c8691daf38fdc64c74f52d
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date: Fri Oct 28 14:29:03 2022 +0200
wip doesn't compile
commit 5e533f731cfd20cd435330eeb0012e5689e87e81
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date: Fri Oct 28 13:21:43 2022 +0200
working with CrossAttentionControl but no Attention support yet
commit 9678348773431e500e110e8aede99086bb7b5955
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date: Fri Oct 28 13:04:52 2022 +0200
wip rebuilding prompt parser
2022-11-01 17:37:26 -04:00
damian0815
de2686d323
fix crash (be a little less aggressive clearing out the attention slice)
2022-11-01 17:35:43 -04:00
damian0815
0b72a4a35e
be more aggressive at clearing out saved_attn_slice
2022-11-01 17:35:34 -04:00
Lincoln Stein
6215592b12
Merge branch 'development' of github.com:invoke-ai/InvokeAI into development
2022-11-01 17:34:55 -04:00
damian0815
349cc25433
fix crash (be a little less aggressive clearing out the attention slice)
2022-11-01 17:34:28 -04:00
damian0815
214d276379
be more aggressive at clearing out saved_attn_slice
2022-11-01 17:34:28 -04:00
Lincoln Stein
ab2b5a691d
fix model_cache memory management issues
2022-11-01 17:23:20 -04:00
Lincoln Stein
942a202945
fix model_cache memory management issues
2022-11-01 17:22:48 -04:00
Lincoln Stein
89da42ad79
Merge branch 'pin-options-panel' of https://github.com/psychedelicious/stable-diffusion into psychedelicious-pin-options-panel
...
- from PR #1301
2022-10-31 09:37:13 -04:00
Damian at mba
ced9c83e96
various prompting fixes
2022-10-31 09:34:56 -04:00
Lincoln Stein
80f2cfe3e3
set default max_models to 2 internally as well as via the CLI arg
2022-10-31 09:05:38 -04:00
Lincoln Stein
dc556cb1a7
add max_load_models parameter for model cache control
...
- ldm.generate.Generator() now takes an argument named `max_load_models`.
This is an integer that limits the model cache size. When the cache
reaches the limit, it starts purging older models from the cache; a
minimal sketch of this behaviour follows below.
- The CLI takes an argument --max_load_models, defaulting to 2. This keeps
one model in GPU and the other in CPU and switches back and forth
between them quickly.
- To disable model caching entirely, pass --max_load_models=1
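An illustrative sketch of the cache-limit behaviour described above: an LRU-style cache that purges the least recently used model once more than `max_load_models` are loaded. This is an assumption-level example, not InvokeAI's actual model_cache code.
```python
from collections import OrderedDict

class ModelCache:
    """Keep at most max_load_models loaded models, evicting the least recently used."""

    def __init__(self, max_load_models: int = 2):
        self.max_load_models = max_load_models
        self._models = OrderedDict()  # model name -> loaded model

    def get(self, name, loader):
        if name in self._models:
            self._models.move_to_end(name)        # mark as most recently used
        else:
            self._models[name] = loader(name)     # cache miss: load the model
            while len(self._models) > self.max_load_models:
                self._models.popitem(last=False)  # purge the oldest entry
        return self._models[name]
```
In these terms, --max_load_models=1 keeps only the active model resident, which matches the "no caching" behaviour described above.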
2022-10-31 08:55:53 -04:00