David Burnett
ffa91be3f1
Install older version of torch and matching torchvision, fix pytorch-lightning=1.7.7
2022-11-02 14:49:36 -04:00
Lincoln Stein
2d5294bca1
speculative change for .bat installer
2022-11-02 13:56:17 -04:00
blessedcoolant
0453f21127
Fresh Build For WebUI
2022-11-02 23:26:49 +13:00
damian0815
9fc09aa4bd
don't log base64 progress images
2022-11-02 22:32:31 +13:00
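The change above drops raw base64 payloads from the log output. A minimal sketch of the idea, with the event shape and function name assumed rather than taken from the repo:

```python
import logging

logger = logging.getLogger(__name__)

def log_progress_event(event: dict) -> None:
    """Log a progress event without dumping the base64 image payload."""
    safe = dict(event)
    image = safe.get("image")
    if isinstance(image, str) and image.startswith("data:image"):
        # Replace the potentially megabyte-sized base64 string with a summary.
        safe["image"] = f"<base64 image, {len(image)} chars omitted>"
    logger.info("progress: %s", safe)
```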
damian0815
2468a28e66
save VRAM by not recombining tensors that were sliced to save VRAM in the first place
2022-11-01 22:39:48 -04:00
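For context on the VRAM commit above: the usual way to avoid recombining sliced tensors is to write each attention slice straight into a preallocated output buffer instead of torch.cat-ing the pieces back together, which would briefly hold two full-size copies. A hedged sketch of that pattern (not the repository's actual cross-attention code):

```python
import torch

def sliced_attention(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor, slice_size: int) -> torch.Tensor:
    """Attention computed one query slice at a time; each result is written
    into a preallocated buffer so no concatenation pass is needed."""
    out = torch.empty_like(q)  # assumes q and v share the same feature dim
    scale = q.shape[-1] ** -0.5
    for start in range(0, q.shape[1], slice_size):
        end = start + slice_size
        attn = torch.softmax(q[:, start:end] @ k.transpose(-1, -2) * scale, dim=-1)
        out[:, start:end] = attn @ v
    return out
```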
damian0815
e3ed748191
fix a bug that broke cross attention control index mapping
2022-11-01 22:39:39 -04:00
damian0815
3f5bf7ac44
report full size for fast latents and update conversion matrix for v1.5
2022-11-01 22:39:27 -04:00
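The "fast latents" preview approximates the decoded image directly from the 4-channel latent with a small fixed matrix instead of running the VAE, then upsamples so the reported size matches the final output. A rough sketch of that idea; the matrix values below are illustrative placeholders, not the v1.5 coefficients this commit updates:

```python
import torch
import torch.nn.functional as F

# Placeholder coefficients for a 4-channel latent -> RGB projection.
LATENT_TO_RGB = torch.tensor([
    [ 0.30,  0.20,  0.20],
    [ 0.19,  0.29,  0.17],
    [-0.16,  0.19,  0.26],
    [-0.18, -0.27, -0.47],
])

def fast_latent_preview(latents: torch.Tensor) -> torch.Tensor:
    """Cheap preview: project latents (batch, 4, H/8, W/8) to RGB, then
    upsample 8x so the preview reports the full output size."""
    rgb = torch.einsum("bchw,cd->bdhw", latents, LATENT_TO_RGB)
    rgb = F.interpolate(rgb, scale_factor=8, mode="nearest")
    return ((rgb + 1) / 2).clamp(0, 1)  # rough [-1, 1] -> [0, 1]
```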
damian0815
00378e1ea6
add damian0815 to contributors list
2022-11-01 22:38:16 -04:00
spezialspezial
5e87062cf8
Option to directly invert the grayscale heatmap - fix
2022-11-01 22:24:31 -04:00
spezialspezial
3e7a459990
Update txt2mask.py
2022-11-01 22:24:31 -04:00
spezialspezial
bbf4c03e50
Option to directly invert the grayscale heatmap
...
Theoretically it's less work to invert the image while it's small, but I can't measure a significant difference. Still, it's a handy option to have in some cases.
2022-11-01 22:24:31 -04:00
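A hypothetical illustration of "invert while it's small" (the function name and signature are assumptions, not txt2mask.py's actual API): the inversion is applied to the low-resolution heatmap before it is upscaled to mask size.

```python
from PIL import Image, ImageOps

def heatmap_to_mask(heatmap: Image.Image, size: tuple[int, int], invert: bool = False) -> Image.Image:
    """Optionally invert the grayscale heatmap before upscaling it; inverting
    the small image is theoretically cheaper than inverting the full-size mask."""
    gray = heatmap.convert("L")
    if invert:
        gray = ImageOps.invert(gray)
    return gray.resize(size, Image.LANCZOS)
```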
spezialspezial
b45e632f23
Option to directly invert the grayscale heatmap - fix
2022-11-01 22:18:00 -04:00
mauwii
611a3a9753
fix name of caching step
2022-11-01 22:17:23 -04:00
mauwii
1611f0d181
readd caching of sd-models
...
- this would remove the need for the secret to be available in PRs
2022-11-01 22:17:23 -04:00
Lincoln Stein
08835115e4
pin pytorch_lightning to 1.7.7, issue #1331
2022-11-01 22:11:44 -04:00
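The pin itself is a one-line dependency constraint in the environment files; an assumed runtime guard (not part of the repo) expressing the same requirement looks like this:

```python
import pkg_resources

# Raises VersionConflict if a different pytorch-lightning version is installed
# (the constraint pinned for issue #1331).
pkg_resources.require("pytorch-lightning==1.7.7")
```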
Lincoln Stein
2d84e28d32
Merge branch 'development' of github.com:invoke-ai/InvokeAI into development
2022-11-01 22:11:04 -04:00
Lincoln Stein
57be9ae6c3
pin pytorch_lightning to 1.7.7, issue #1331
2022-11-01 22:10:12 -04:00
damian0815
ef17aae8ab
add damian0815 to contributors list
2022-11-02 13:55:52 +13:00
damian0815
0cc39f01a3
report full size for fast latents and update conversion matrix for v1.5
2022-11-02 13:55:29 +13:00
damian0815
688d7258f1
fix a bug that broke cross attention control index mapping
2022-11-02 13:54:54 +13:00
damian0815
4513320bf1
save VRAM by not recombining tensors that were sliced to save VRAM in the first place
2022-11-02 13:54:54 +13:00
spezialspezial
6c9a2761f5
Optional refined model for Txt2Mask
...
Don't merge right now; I just wanted to show the necessary changes
2022-11-02 00:33:46 +01:00
Lincoln Stein
533fd04ef0
Merge branch 'development' of github.com:invoke-ai/InvokeAI into development
2022-11-01 17:40:36 -04:00
spezialspezial
2bdd738f03
Update txt2mask.py
2022-11-01 17:39:56 -04:00
spezialspezial
7782760541
Option to directly invert the grayscale heatmap
...
Theoretically it's less work to invert the image while it's small, but I can't measure a significant difference. Still, it's a handy option to have in some cases.
2022-11-01 17:39:56 -04:00
damian0815
dff5681cf0
shorter strings
2022-11-01 17:39:08 -04:00
damian0815
5a2790a69b
convert progress display to a drop-down
2022-11-01 17:39:08 -04:00
damian0815
7c5305ccba
do not try to save base64 intermediates in gallery on cancellation
2022-11-01 17:39:08 -04:00
psychedelicious
4013e8ad6f
Fixes b64 image sending and displaying
2022-11-01 17:39:08 -04:00
damian
d1dfd257f9
wip base64
2022-11-01 17:39:08 -04:00
damian
5322d735ee
update frontend
2022-11-01 17:39:08 -04:00
damian
cdb107dcda
add option to show intermediate latent space
2022-11-01 17:39:08 -04:00
damian
be1393a41c
ensure existing exception handling code also handles new exception class
2022-11-01 17:37:26 -04:00
Damian at mba
e554c2607f
Rebuilt prompt parsing logic
...
Complete re-write of the prompt parsing logic to be more readable and
logical, and therefore also hopefully easier to debug, maintain, and
augment.
In the process it has also become more robust to badly-formed prompts.
Squashed commit of the following:
commit 8fcfa88a16e1390d41717e940d72aed64712171c
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date: Sun Oct 30 17:05:57 2022 +0100
further cleanup
commit 1a1fd78bcfeb49d072e3e6d5808aa8df15441629
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date: Sun Oct 30 16:07:57 2022 +0100
cleanup and document
commit 099c9659fa8b8135876f9a5a50fe80b20bc0635c
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date: Sun Oct 30 15:54:58 2022 +0100
works fully
commit 5e6887ea8c25a1e21438ff6defb381fd027d25fd
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date: Sun Oct 30 15:24:31 2022 +0100
further...
commit 492fda120844d9bc1ad4ec7dd408a3374762d0ff
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date: Sun Oct 30 14:08:57 2022 +0100
getting there...
commit c6aab05a8450cc3c95c8691daf38fdc64c74f52d
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date: Fri Oct 28 14:29:03 2022 +0200
wip doesn't compile
commit 5e533f731cfd20cd435330eeb0012e5689e87e81
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date: Fri Oct 28 13:21:43 2022 +0200
working with CrossAttentionControl but no Attention support yet
commit 9678348773431e500e110e8aede99086bb7b5955
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date: Fri Oct 28 13:04:52 2022 +0200
wip rebuilding prompt parser
2022-11-01 17:37:26 -04:00
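The rewritten parser itself is not reproduced in this log. As a purely illustrative sketch of grammar-driven prompt parsing (the real InvokeAI syntax and parser are considerably richer), a tiny pyparsing grammar for bare words plus "(words)weight" fragments might look like:

```python
import pyparsing as pp

# Hypothetical mini-grammar, not the project's actual prompt syntax.
word = pp.Word(pp.printables, excludeChars="()")
number = pp.pyparsing_common.real | pp.pyparsing_common.integer
weighted = pp.Group(pp.Suppress("(") + pp.OneOrMore(word) + pp.Suppress(")") + number("weight"))
prompt = pp.OneOrMore(weighted | word)

print(prompt.parseString("a (red hat)1.1 on a tabby cat").asList())
# -> ['a', ['red', 'hat', 1.1], 'on', 'a', 'tabby', 'cat']
```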
damian0815
de2686d323
fix crash (be a little less aggressive clearing out the attention slice)
2022-11-01 17:35:43 -04:00
damian0815
0b72a4a35e
be more aggressive at clearing out saved_attn_slice
2022-11-01 17:35:34 -04:00
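A hypothetical illustration of what "clearing out saved_attn_slice" amounts to (attribute and function names are assumptions): walk the model's modules and drop any cached attention slices so they do not linger in VRAM between steps.

```python
import torch.nn as nn

def clear_saved_attn_slices(model: nn.Module) -> None:
    """Drop cached attention slices from every submodule that holds one."""
    for module in model.modules():
        if hasattr(module, "saved_attn_slice"):
            del module.saved_attn_slice
```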
Lincoln Stein
6215592b12
Merge branch 'development' of github.com:invoke-ai/InvokeAI into development
2022-11-01 17:34:55 -04:00
damian0815
349cc25433
fix crash (be a little less aggressive clearing out the attention slice)
2022-11-01 17:34:28 -04:00
damian0815
214d276379
be more aggressive at clearing out saved_attn_slice
2022-11-01 17:34:28 -04:00
Lincoln Stein
ef24d76adc
fix library problems in preload_modules
2022-11-01 17:23:27 -04:00
Lincoln Stein
ab2b5a691d
fix model_cache memory management issues
2022-11-01 17:23:20 -04:00
Lincoln Stein
942a202945
fix model_cache memory management issues
2022-11-01 17:22:48 -04:00
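The fix itself is not shown in this log; as a hedged sketch, the usual remedy for model-cache memory issues is to offload the outgoing model and explicitly release CUDA memory before activating the next one:

```python
import gc
import torch

def switch_model(cache: dict, current_name: str, new_name: str):
    """Move the outgoing model to CPU and release GPU memory before switching
    (a common pattern, not the repository's exact model_cache code)."""
    if current_name in cache:
        cache[current_name].to("cpu")
    gc.collect()
    if torch.cuda.is_available():
        torch.cuda.empty_cache()
    device = "cuda" if torch.cuda.is_available() else "cpu"
    return cache[new_name].to(device)
```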
Lincoln Stein
1379642fc6
fix library problems in preload_modules
2022-11-01 14:34:23 -04:00
Lincoln Stein
408cf5e092
candidate install scripts for testing
2022-11-01 13:54:42 -04:00
Lincoln Stein
ce298d32b5
attempt to make batch install more reliable
...
1. added nvidia channel to environment.yml
2. updated pytorch-cuda requirement
3. let conda figure out what version of pytorch to install
4. add conda install status checking to .bat and .sh install files
5. in preload_models.py catch and handle download/access token errors (see the sketch after this entry)
2022-11-01 12:02:22 -04:00
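For item 5 above, a hedged sketch of catching download and access-token failures in preload_models.py instead of letting them abort the installer (function names and exception granularity are assumptions, not a copy of the script):

```python
import sys

def safe_preload(download_fn, model_name: str) -> bool:
    """Run a model download, turning failures into readable messages."""
    try:
        download_fn(model_name)
        return True
    except OSError as err:      # network or filesystem errors
        print(f"** {model_name}: download failed: {err}", file=sys.stderr)
    except Exception as err:    # e.g. missing or invalid access token
        print(f"** {model_name}: could not authenticate or download: {err}", file=sys.stderr)
    return False
```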
mauwii
d7107d931a
disable checks with sd-V1.4 model...
...
...to save some resources, since V1.5 is the default now
2022-10-31 21:35:33 -04:00
mauwii
147dcc2961
update test-invoke-conda.yml
...
- fix model dl path for sd-v1-4.ckpt
- copy configs/models.yaml.example to configs/models.yaml
2022-10-31 21:35:20 -04:00
mauwii
efd7f42414
fix models example weights for sd-v1.4
2022-10-31 21:35:09 -04:00
blessedcoolant
4e1b619ad7
[WebUI] Loopback Default False
2022-10-31 21:35:01 -04:00
mauwii
c7de2b2801
disable checks with sd-V1.4 model...
...
...to save some resources, since V1.5 is the default now
2022-10-31 21:19:53 -04:00