Compare commits
741 commits
development ... v2.3.0-rc6
[741 commit rows omitted: only abbreviated SHA-1 hashes survived in this capture; the Author, Date, and commit-message columns are empty.]
.dockerignore
@@ -1,3 +1,23 @@
+# use this file as a whitelist
 *
-!environment*.yml
-!docker-build
+!invokeai
+!ldm
+!pyproject.toml
+!README.md
+
+# Guard against pulling in any models that might exist in the directory tree
+**/*.pt*
+**/*.ckpt
+
+# ignore frontend but whitelist dist
+invokeai/frontend/**
+!invokeai/frontend/dist
+
+# ignore invokeai/assets but whitelist invokeai/assets/web
+invokeai/assets
+!invokeai/assets/web
+
+# ignore python cache
+**/__pycache__
+**/*.py[cod]
+**/*.egg-info
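Because the new `.dockerignore` is written as a whitelist (exclude everything, then re-include specific paths), it is easy to over-exclude. One way to spot-check what actually reaches the build context is to build a throwaway image that simply lists what it received; a sketch assuming BuildKit is enabled, with an illustrative temp Dockerfile path:

```bash
# Build a disposable image whose only job is to print the files Docker received.
cat > /tmp/Dockerfile.ctx <<'EOF'
FROM busybox
COPY . /ctx
RUN find /ctx -type f | sort
EOF
docker build --no-cache --progress=plain -f /tmp/Dockerfile.ctx .
```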
.editorconfig (new file, 12 lines)

# All files
[*]
charset = utf-8
end_of_line = lf
indent_size = 2
indent_style = space
insert_final_newline = true
trim_trailing_whitespace = true

# Python
[*.py]
indent_size = 4
.gitattributes (vendored, 2 changes)
@@ -1,4 +1,4 @@
 # Auto normalizes line endings on commit so devs don't need to change local settings.
 # Only affects text files and ignores other file types.
 # For more info see: https://www.aleksandrhovhannisyan.com/blog/crlf-vs-lf-normalizing-line-endings-in-git/
 * text=auto
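With `* text=auto`, Git normalizes line endings at commit time. For working trees that already contain CRLF files, the standard companion step (plain Git, nothing project-specific) is a one-time renormalization:

```bash
# Re-apply the .gitattributes policy to already-tracked files.
git add --renormalize .
git status   # lists the files Git rewrote
git commit -m "normalize line endings"
```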
.github/CODEOWNERS (vendored, 4 changes)
@@ -2,4 +2,6 @@ ldm/invoke/pngwriter.py @CapableWeb
 ldm/invoke/server_legacy.py @CapableWeb
 scripts/legacy_api.py @CapableWeb
 tests/legacy_tests.sh @CapableWeb
-installer/ @tildebyte
+installer/ @ebr
+.github/workflows/ @mauwii
+docker/ @mauwii
.github/workflows/build-container.yml (vendored, 94 changes)
@@ -1,48 +1,92 @@
-# Building the Image without pushing to confirm it is still buildable
-# confirum functionality would unfortunately need way more resources
 name: build container image
 on:
   push:
     branches:
       - 'main'
-      - 'development'
+      - 'update/ci/*'
+    tags:
+      - 'v*.*.*'
+
 jobs:
   docker:
+    if: github.event.pull_request.draft == false
     strategy:
       fail-fast: false
       matrix:
-        arch:
-          - x86_64
-          - aarch64
+        flavor:
+          - amd
+          - cuda
+          - cpu
         include:
-          - arch: x86_64
-            conda-env-file: environment-lin-cuda.yml
-          - arch: aarch64
-            conda-env-file: environment-lin-aarch64.yml
+          - flavor: amd
+            pip-extra-index-url: 'https://download.pytorch.org/whl/rocm5.2'
+            dockerfile: docker/Dockerfile
+            platforms: linux/amd64,linux/arm64
+          - flavor: cuda
+            pip-extra-index-url: ''
+            dockerfile: docker/Dockerfile
+            platforms: linux/amd64,linux/arm64
+          - flavor: cpu
+            pip-extra-index-url: 'https://download.pytorch.org/whl/cpu'
+            dockerfile: docker/Dockerfile
+            platforms: linux/amd64,linux/arm64
     runs-on: ubuntu-latest
-    name: ${{ matrix.arch }}
+    name: ${{ matrix.flavor }}
     steps:
-      - name: prepare docker-tag
-        env:
-          repository: ${{ github.repository }}
-        run: echo "dockertag=${repository,,}" >> $GITHUB_ENV
       - name: Checkout
        uses: actions/checkout@v3
+
+      - name: Docker meta
+        id: meta
+        uses: docker/metadata-action@v4
+        with:
+          github-token: ${{ secrets.GITHUB_TOKEN }}
+          images: ghcr.io/${{ github.repository }}
+          tags: |
+            type=ref,event=branch
+            type=ref,event=tag
+            type=semver,pattern={{version}}
+            type=semver,pattern={{major}}.{{minor}}
+            type=semver,pattern={{major}}
+            type=sha,enable=true,prefix=sha-,format=short
+          flavor: |
+            latest=${{ matrix.flavor == 'cuda' && github.ref == 'refs/heads/main' }}
+            suffix=-${{ matrix.flavor }},onlatest=false
       - name: Set up QEMU
         uses: docker/setup-qemu-action@v2

       - name: Set up Docker Buildx
         uses: docker/setup-buildx-action@v2
+        with:
+          platforms: ${{ matrix.platforms }}
+
+      - name: Login to GitHub Container Registry
+        if: github.event_name != 'pull_request'
+        uses: docker/login-action@v2
+        with:
+          registry: ghcr.io
+          username: ${{ github.repository_owner }}
+          password: ${{ secrets.GITHUB_TOKEN }}

       - name: Build container
-        uses: docker/build-push-action@v3
+        uses: docker/build-push-action@v4
         with:
           context: .
-          file: docker-build/Dockerfile
-          platforms: Linux/${{ matrix.arch }}
-          push: false
-          tags: ${{ env.dockertag }}:${{ matrix.arch }}
-          build-args: |
-            conda_env_file=${{ matrix.conda-env-file }}
-            conda_version=py39_4.12.0-Linux-${{ matrix.arch }}
-            invokeai_git=${{ github.repository }}
-            invokeai_branch=${{ github.ref_name }}
+          file: ${{ matrix.dockerfile }}
+          platforms: ${{ matrix.platforms }}
+          push: ${{ github.event_name != 'pull_request' }}
+          tags: ${{ steps.meta.outputs.tags }}
+          labels: ${{ steps.meta.outputs.labels }}
+          build-args: PIP_EXTRA_INDEX_URL=${{ matrix.pip-extra-index-url }}
+          cache-from: type=gha
+          cache-to: type=gha,mode=max
+
+      - name: Output image, digest and metadata to summary
+        run: |
+          {
+            echo imageid: "${{ steps.docker_build.outputs.imageid }}"
+            echo digest: "${{ steps.docker_build.outputs.digest }}"
+            echo labels: "${{ steps.meta.outputs.labels }}"
+            echo tags: "${{ steps.meta.outputs.tags }}"
+            echo version: "${{ steps.meta.outputs.version }}"
+          } >> "$GITHUB_STEP_SUMMARY"
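For debugging outside CI, the Build container step corresponds roughly to a Buildx invocation like the one below. This is a sketch of the `cuda` matrix entry only; the tag is illustrative, since the real tags and labels are computed by `docker/metadata-action`:

```bash
# Approximate local equivalent of one matrix entry (flavor: cuda).
docker buildx create --use
docker buildx build \
  --file docker/Dockerfile \
  --platform linux/amd64,linux/arm64 \
  --build-arg PIP_EXTRA_INDEX_URL="" \
  --tag ghcr.io/invoke-ai/invokeai:main-cuda \
  .
```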
.github/workflows/clean-caches.yml (vendored, new file, 34 lines)

name: cleanup caches by a branch
on:
  pull_request:
    types:
      - closed
  workflow_dispatch:

jobs:
  cleanup:
    runs-on: ubuntu-latest
    steps:
      - name: Check out code
        uses: actions/checkout@v3

      - name: Cleanup
        run: |
          gh extension install actions/gh-actions-cache

          REPO=${{ github.repository }}
          BRANCH=${{ github.ref }}

          echo "Fetching list of cache key"
          cacheKeysForPR=$(gh actions-cache list -R $REPO -B $BRANCH | cut -f 1 )

          ## Setting this to not fail the workflow while deleting cache keys.
          set +e
          echo "Deleting caches..."
          for cacheKey in $cacheKeysForPR
          do
            gh actions-cache delete $cacheKey -R $REPO -B $BRANCH --confirm
          done
          echo "Done"
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
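The same cache cleanup can be driven by hand with the GitHub CLI, which is all the workflow does under the hood; a sketch assuming an authenticated `gh`, with illustrative branch and key values:

```bash
gh extension install actions/gh-actions-cache
gh actions-cache list -R invoke-ai/InvokeAI -B my-branch            # branch name illustrative
gh actions-cache delete "<cache-key>" -R invoke-ai/InvokeAI -B my-branch --confirm
```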
.github/workflows/lint-frontend.yml (vendored, new file, 29 lines)

name: Lint frontend

on:
  pull_request:
    paths:
      - 'invokeai/frontend/**'
  push:
    paths:
      - 'invokeai/frontend/**'

defaults:
  run:
    working-directory: invokeai/frontend

jobs:
  lint-frontend:
    if: github.event.pull_request.draft == false
    runs-on: ubuntu-22.04
    steps:
      - name: Setup Node 18
        uses: actions/setup-node@v3
        with:
          node-version: '18'
      - uses: actions/checkout@v3
      - run: 'yarn install --frozen-lockfile'
      - run: 'yarn tsc'
      - run: 'yarn run madge'
      - run: 'yarn run lint --max-warnings=0'
      - run: 'yarn run prettier --check'
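The same checks can be reproduced locally; the script names below are taken from the workflow steps and assume matching entries in the frontend's `package.json`:

```bash
cd invokeai/frontend
yarn install --frozen-lockfile
yarn tsc                         # type-check only
yarn run madge                   # circular-import detection
yarn run lint --max-warnings=0   # treat any warning as failure
yarn run prettier --check        # formatting check, no rewrite
```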
.github/workflows/mkdocs-material.yml (vendored, 3 changes)
@@ -7,6 +7,7 @@ on:
 jobs:
   mkdocs-material:
+    if: github.event.pull_request.draft == false
     runs-on: ubuntu-latest
     steps:
       - name: checkout sources
@@ -22,7 +23,7 @@ jobs:
       - name: install requirements
         run: |
           python -m \
-            pip install -r requirements-mkdocs.txt
+            pip install -r docs/requirements-mkdocs.txt

       - name: confirm buildability
         run: |
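The body of the "confirm buildability" step is cut off in this capture. A local check along the same lines would presumably be a strict MkDocs build; the exact flags are an assumption:

```bash
python -m pip install -r docs/requirements-mkdocs.txt
mkdocs build --strict   # fail on warnings such as broken internal links (flags assumed)
```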
.github/workflows/pyflakes.yml (vendored, new file, 20 lines)

on:
  pull_request:
  push:
    branches:
      - main
      - development
      - 'release-candidate-*'

jobs:
  pyflakes:
    name: runner / pyflakes
    if: github.event.pull_request.draft == false
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: pyflakes
        uses: reviewdog/action-pyflakes@v1
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          reporter: github-pr-review
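`reviewdog/action-pyflakes` is a thin wrapper around pyflakes itself, so the same findings can be produced locally (target paths are illustrative):

```bash
pip install pyflakes
pyflakes ldm scripts   # pyflakes accepts files or package directories
```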
.github/workflows/pypi-release.yml (vendored, new file, 41 lines)

name: PyPI Release

on:
  push:
    paths:
      - 'ldm/invoke/_version.py'
  workflow_dispatch:

jobs:
  release:
    if: github.repository == 'invoke-ai/InvokeAI'
    runs-on: ubuntu-22.04
    env:
      TWINE_USERNAME: __token__
      TWINE_PASSWORD: ${{ secrets.PYPI_API_TOKEN }}
      TWINE_NON_INTERACTIVE: 1
    steps:
      - name: checkout sources
        uses: actions/checkout@v3

      - name: install deps
        run: pip install --upgrade build twine

      - name: build package
        run: python3 -m build

      - name: check distribution
        run: twine check dist/*

      - name: check PyPI versions
        if: github.ref == 'refs/heads/main'
        run: |
          pip install --upgrade requests
          python -c "\
          import scripts.pypi_helper; \
          EXISTS=scripts.pypi_helper.local_on_pypi(); \
          print(f'PACKAGE_EXISTS={EXISTS}')" >> $GITHUB_ENV

      - name: upload package
        if: env.PACKAGE_EXISTS == 'False' && env.TWINE_PASSWORD != ''
        run: twine upload dist/*
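The contents of `scripts/pypi_helper.py` are not part of this diff, so `local_on_pypi()` is opaque here. A hedged sketch of the kind of check it implies, using PyPI's public JSON API; the package name and the version attribute are assumptions:

```bash
# Compare the local version against the latest release on PyPI.
LOCAL=$(python -c "from ldm.invoke import _version; print(_version.__version__)")  # attribute assumed
REMOTE=$(curl -s https://pypi.org/pypi/InvokeAI/json |
  python -c "import json, sys; print(json.load(sys.stdin)['info']['version'])")
if [ "$LOCAL" = "$REMOTE" ]; then echo "PACKAGE_EXISTS=True"; else echo "PACKAGE_EXISTS=False"; fi
```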
.github/workflows/test-invoke-conda.yml (vendored, deleted, 126 lines)

name: Test invoke.py
on:
  push:
    branches:
      - 'main'
      - 'development'
      - 'fix-gh-actions-fork'
  pull_request:
    branches:
      - 'main'
      - 'development'

jobs:
  matrix:
    strategy:
      fail-fast: false
      matrix:
        stable-diffusion-model:
          # - 'https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/resolve/main/sd-v1-4.ckpt'
          - 'https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt'
        os:
          - ubuntu-latest
          - macOS-12
        include:
          - os: ubuntu-latest
            environment-file: environment-lin-cuda.yml
            default-shell: bash -l {0}
          - os: macOS-12
            environment-file: environment-mac.yml
            default-shell: bash -l {0}
          # - stable-diffusion-model: https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/resolve/main/sd-v1-4.ckpt
          #   stable-diffusion-model-dl-path: models/ldm/stable-diffusion-v1/sd-v1-4.ckpt
          #   stable-diffusion-model-switch: stable-diffusion-1.4
          - stable-diffusion-model: https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt
            stable-diffusion-model-dl-path: models/ldm/stable-diffusion-v1/v1-5-pruned-emaonly.ckpt
            stable-diffusion-model-switch: stable-diffusion-1.5
    name: ${{ matrix.os }} with ${{ matrix.stable-diffusion-model-switch }}
    runs-on: ${{ matrix.os }}
    env:
      CONDA_ENV_NAME: invokeai
    defaults:
      run:
        shell: ${{ matrix.default-shell }}
    steps:
      - name: Checkout sources
        id: checkout-sources
        uses: actions/checkout@v3

      - name: create models.yaml from example
        run: cp configs/models.yaml.example configs/models.yaml

      - name: create environment.yml
        run: cp environments-and-requirements/${{ matrix.environment-file }} environment.yml

      - name: Use cached conda packages
        id: use-cached-conda-packages
        uses: actions/cache@v3
        with:
          path: ~/conda_pkgs_dir
          key: conda-pkgs-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles(matrix.environment-file) }}

      - name: Activate Conda Env
        id: activate-conda-env
        uses: conda-incubator/setup-miniconda@v2
        with:
          activate-environment: ${{ env.CONDA_ENV_NAME }}
          environment-file: environment.yml
          miniconda-version: latest

      - name: set test prompt to main branch validation
        if: ${{ github.ref == 'refs/heads/main' }}
        run: echo "TEST_PROMPTS=tests/preflight_prompts.txt" >> $GITHUB_ENV

      - name: set test prompt to development branch validation
        if: ${{ github.ref == 'refs/heads/development' }}
        run: echo "TEST_PROMPTS=tests/dev_prompts.txt" >> $GITHUB_ENV

      - name: set test prompt to Pull Request validation
        if: ${{ github.ref != 'refs/heads/main' && github.ref != 'refs/heads/development' }}
        run: echo "TEST_PROMPTS=tests/validate_pr_prompt.txt" >> $GITHUB_ENV

      - name: Use Cached Stable Diffusion Model
        id: cache-sd-model
        uses: actions/cache@v3
        env:
          cache-name: cache-${{ matrix.stable-diffusion-model-switch }}
        with:
          path: ${{ matrix.stable-diffusion-model-dl-path }}
          key: ${{ env.cache-name }}

      - name: Download ${{ matrix.stable-diffusion-model-switch }}
        id: download-stable-diffusion-model
        if: ${{ steps.cache-sd-model.outputs.cache-hit != 'true' }}
        run: |
          [[ -d models/ldm/stable-diffusion-v1 ]] \
            || mkdir -p models/ldm/stable-diffusion-v1
          curl \
            -H "Authorization: Bearer ${{ secrets.HUGGINGFACE_TOKEN }}" \
            -o ${{ matrix.stable-diffusion-model-dl-path }} \
            -L ${{ matrix.stable-diffusion-model }}

      - name: run preload_models.py
        id: run-preload-models
        run: |
          python scripts/preload_models.py \
            --no-interactive

      - name: Run the tests
        id: run-tests
        run: |
          time python scripts/invoke.py \
            --model ${{ matrix.stable-diffusion-model-switch }} \
            --from_file ${{ env.TEST_PROMPTS }}

      - name: export conda env
        id: export-conda-env
        run: |
          mkdir -p outputs/img-samples
          conda env export --name ${{ env.CONDA_ENV_NAME }} > outputs/img-samples/environment-${{ runner.os }}-${{ runner.arch }}.yml

      - name: Archive results
        id: archive-results
        uses: actions/upload-artifact@v3
        with:
          name: results_${{ matrix.os }}_${{ matrix.stable-diffusion-model-switch }}
          path: outputs/img-samples
.github/workflows/test-invoke-pip.yml (vendored, new file, 135 lines)

name: Test invoke.py pip
on:
  push:
    branches:
      - 'main'
  pull_request:
    types:
      - 'ready_for_review'
      - 'opened'
      - 'synchronize'
  workflow_dispatch:

concurrency:
  group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  matrix:
    if: github.event.pull_request.draft == false
    strategy:
      matrix:
        python-version:
          # - '3.9'
          - '3.10'
        pytorch:
          # - linux-cuda-11_6
          - linux-cuda-11_7
          - linux-rocm-5_2
          - linux-cpu
          - macos-default
          - windows-cpu
          # - windows-cuda-11_6
          # - windows-cuda-11_7
        include:
          # - pytorch: linux-cuda-11_6
          #   os: ubuntu-22.04
          #   extra-index-url: 'https://download.pytorch.org/whl/cu116'
          #   github-env: $GITHUB_ENV
          - pytorch: linux-cuda-11_7
            os: ubuntu-22.04
            github-env: $GITHUB_ENV
          - pytorch: linux-rocm-5_2
            os: ubuntu-22.04
            extra-index-url: 'https://download.pytorch.org/whl/rocm5.2'
            github-env: $GITHUB_ENV
          - pytorch: linux-cpu
            os: ubuntu-22.04
            extra-index-url: 'https://download.pytorch.org/whl/cpu'
            github-env: $GITHUB_ENV
          - pytorch: macos-default
            os: macOS-12
            github-env: $GITHUB_ENV
          - pytorch: windows-cpu
            os: windows-2022
            github-env: $env:GITHUB_ENV
          # - pytorch: windows-cuda-11_6
          #   os: windows-2022
          #   extra-index-url: 'https://download.pytorch.org/whl/cu116'
          #   github-env: $env:GITHUB_ENV
          # - pytorch: windows-cuda-11_7
          #   os: windows-2022
          #   extra-index-url: 'https://download.pytorch.org/whl/cu117'
          #   github-env: $env:GITHUB_ENV
    name: ${{ matrix.pytorch }} on ${{ matrix.python-version }}
    runs-on: ${{ matrix.os }}
    env:
      PIP_USE_PEP517: '1'
    steps:
      - name: Checkout sources
        id: checkout-sources
        uses: actions/checkout@v3

      - name: set test prompt to main branch validation
        if: ${{ github.ref == 'refs/heads/main' }}
        run: echo "TEST_PROMPTS=tests/preflight_prompts.txt" >> ${{ matrix.github-env }}

      - name: set test prompt to Pull Request validation
        if: ${{ github.ref != 'refs/heads/main' }}
        run: echo "TEST_PROMPTS=tests/validate_pr_prompt.txt" >> ${{ matrix.github-env }}

      - name: setup python
        uses: actions/setup-python@v4
        with:
          python-version: ${{ matrix.python-version }}
          cache: pip
          cache-dependency-path: pyproject.toml

      - name: install invokeai
        env:
          PIP_EXTRA_INDEX_URL: ${{ matrix.extra-index-url }}
        run: >
          pip3 install
          --editable=".[test]"

      - name: run pytest
        id: run-pytest
        run: pytest

      - name: set INVOKEAI_OUTDIR
        run: >
          python -c
          "import os;from ldm.invoke.globals import Globals;OUTDIR=os.path.join(Globals.root,str('outputs'));print(f'INVOKEAI_OUTDIR={OUTDIR}')"
          >> ${{ matrix.github-env }}

      - name: run invokeai-configure
        id: run-preload-models
        env:
          HUGGING_FACE_HUB_TOKEN: ${{ secrets.HUGGINGFACE_TOKEN }}
        run: >
          invokeai-configure
          --yes
          --default_only
          --full-precision
          # can't use fp16 weights without a GPU

      - name: run invokeai
        id: run-invokeai
        env:
          # Set offline mode to make sure configure preloaded successfully.
          HF_HUB_OFFLINE: 1
          HF_DATASETS_OFFLINE: 1
          TRANSFORMERS_OFFLINE: 1
        run: >
          invokeai
          --no-patchmatch
          --no-nsfw_checker
          --from_file ${{ env.TEST_PROMPTS }}
          --outdir ${{ env.INVOKEAI_OUTDIR }}/${{ matrix.python-version }}/${{ matrix.pytorch }}

      - name: Archive results
        id: archive-results
        uses: actions/upload-artifact@v3
        with:
          name: results
          path: ${{ env.INVOKEAI_OUTDIR }}
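A rough local equivalent of a single matrix entry (`linux-cpu`), with the commands lifted from the steps above; a Hugging Face token is only needed when models have to be downloaded:

```bash
export PIP_EXTRA_INDEX_URL=https://download.pytorch.org/whl/cpu
pip3 install --editable=".[test]"
pytest
invokeai-configure --yes --default_only --full-precision   # fp16 needs a GPU
invokeai --no-patchmatch --no-nsfw_checker --from_file tests/validate_pr_prompt.txt
```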
.gitignore (vendored, 26 changes)
@@ -1,4 +1,5 @@
 # ignore default image save location and model symbolic link
+embeddings/
 outputs/
 models/ldm/stable-diffusion-v1/model.ckpt
 **/restoration/codeformer/weights
@@ -6,6 +7,7 @@ models/ldm/stable-diffusion-v1/model.ckpt
 # ignore user models config
 configs/models.user.yaml
 config/models.user.yml
+invokeai.init

 # ignore the Anaconda/Miniconda installer used while building Docker image
 anaconda.sh
@@ -70,6 +72,7 @@ coverage.xml
 .hypothesis/
 .pytest_cache/
 cover/
+junit/

 # Translations
 *.mo
@@ -193,11 +196,7 @@ checkpoints
 .DS_Store

 # Let the frontend manage its own gitignore
-!frontend/*
-frontend/apt-get
-frontend/dist
-frontend/sudo
-frontend/update
+!invokeai/frontend/*

 # Scratch folder
 .scratch/
@@ -218,7 +217,7 @@ models/clipseg
 models/gfpgan

 # ignore initfile
-invokeai.init
+.invokeai

 # ignore environment.yml and requirements.txt
 # these are links to the real files in environments-and-requirements
@@ -226,12 +225,11 @@ environment.yml
 requirements.txt

 # source installer files
-source_installer/*zip
-source_installer/invokeAI
-install.bat
-install.sh
-update.bat
-update.sh
+installer/*zip
+installer/install.bat
+installer/install.sh
+installer/update.bat
+installer/update.sh

-# this may be present if the user created a venv
-invokeai
+# no longer stored in source directory
+models
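Individual ignore rules can be verified with `git check-ignore -v`, which prints the exact `.gitignore` line responsible for a match (paths illustrative):

```bash
git check-ignore -v embeddings/foo.pt models/ldm/stable-diffusion-v1/model.ckpt
```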
CODE_OF_CONDUCT.md (new file, 128 lines)

# Contributor Covenant Code of Conduct

## Our Pledge

We as members, contributors, and leaders pledge to make participation in our
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socio-economic status,
nationality, personal appearance, race, religion, or sexual identity
and orientation.

We pledge to act and interact in ways that contribute to an open, welcoming,
diverse, inclusive, and healthy community.

## Our Standards

Examples of behavior that contributes to a positive environment for our
community include:

* Demonstrating empathy and kindness toward other people
* Being respectful of differing opinions, viewpoints, and experiences
* Giving and gracefully accepting constructive feedback
* Accepting responsibility and apologizing to those affected by our mistakes,
  and learning from the experience
* Focusing on what is best not just for us as individuals, but for the
  overall community

Examples of unacceptable behavior include:

* The use of sexualized language or imagery, and sexual attention or
  advances of any kind
* Trolling, insulting or derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or email
  address, without their explicit permission
* Other conduct which could reasonably be considered inappropriate in a
  professional setting

## Enforcement Responsibilities

Community leaders are responsible for clarifying and enforcing our standards of
acceptable behavior and will take appropriate and fair corrective action in
response to any behavior that they deem inappropriate, threatening, offensive,
or harmful.

Community leaders have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions that are
not aligned to this Code of Conduct, and will communicate reasons for moderation
decisions when appropriate.

## Scope

This Code of Conduct applies within all community spaces, and also applies when
an individual is officially representing the community in public spaces.
Examples of representing our community include using an official e-mail address,
posting via an official social media account, or acting as an appointed
representative at an online or offline event.

## Enforcement

Instances of abusive, harassing, or otherwise unacceptable behavior
may be reported to the community leaders responsible for enforcement
at https://github.com/invoke-ai/InvokeAI/issues. All complaints will
be reviewed and investigated promptly and fairly.

All community leaders are obligated to respect the privacy and security of the
reporter of any incident.

## Enforcement Guidelines

Community leaders will follow these Community Impact Guidelines in determining
the consequences for any action they deem in violation of this Code of Conduct:

### 1. Correction

**Community Impact**: Use of inappropriate language or other behavior deemed
unprofessional or unwelcome in the community.

**Consequence**: A private, written warning from community leaders, providing
clarity around the nature of the violation and an explanation of why the
behavior was inappropriate. A public apology may be requested.

### 2. Warning

**Community Impact**: A violation through a single incident or series
of actions.

**Consequence**: A warning with consequences for continued behavior. No
interaction with the people involved, including unsolicited interaction with
those enforcing the Code of Conduct, for a specified period of time. This
includes avoiding interactions in community spaces as well as external channels
like social media. Violating these terms may lead to a temporary or
permanent ban.

### 3. Temporary Ban

**Community Impact**: A serious violation of community standards, including
sustained inappropriate behavior.

**Consequence**: A temporary ban from any sort of interaction or public
communication with the community for a specified period of time. No public or
private interaction with the people involved, including unsolicited interaction
with those enforcing the Code of Conduct, is allowed during this period.
Violating these terms may lead to a permanent ban.

### 4. Permanent Ban

**Community Impact**: Demonstrating a pattern of violation of community
standards, including sustained inappropriate behavior, harassment of an
individual, or aggression toward or disparagement of classes of individuals.

**Consequence**: A permanent ban from any sort of public interaction within
the community.

## Attribution

This Code of Conduct is adapted from the [Contributor Covenant][homepage],
version 2.0, available at
https://www.contributor-covenant.org/version/2/0/code_of_conduct.html.

Community Impact Guidelines were inspired by [Mozilla's code of conduct
enforcement ladder](https://github.com/mozilla/diversity).

[homepage]: https://www.contributor-covenant.org

For answers to common questions about this code of conduct, see the FAQ at
https://www.contributor-covenant.org/faq. Translations are available at
https://www.contributor-covenant.org/translations.
InvokeAI_Statement_of_Values.md (new file, 84 lines)

<img src="docs/assets/invoke_ai_banner.png" align="center">

Invoke-AI is a community of software developers, researchers, and user
interface experts who have come together on a voluntary basis to build
software tools which support cutting edge AI text-to-image
applications. This community is open to anyone who wishes to
contribute to the effort and has the skill and time to do so.

# Our Values

The InvokeAI team is a diverse community which includes individuals
from various parts of the world and many walks of life. Despite our
differences, we share a number of core values which we ask prospective
contributors to understand and respect. We believe:

1. That Open Source Software is a positive force in the world. We
create software that can be used, reused, and redistributed, without
restrictions, under a straightforward Open Source license (MIT). We
believe that Open Source benefits society as a whole by increasing the
availability of high quality software to all.

2. That those who create software should receive proper attribution
for their creative work. While we support the exchange and reuse of
Open Source Software, we feel strongly that the original authors of a
piece of code should receive credit for their contribution, and we
endeavor to do so whenever possible.

3. That there is moral ambiguity surrounding AI-assisted art. We are
aware of the moral and ethical issues surrounding the release of the
Stable Diffusion model and similar products. We are aware that, due to
the composition of their training sets, current AI-generated image
models are biased against certain ethnic groups, cultural concepts of
beauty, ethnic stereotypes, and gender roles.

   1. We recognize the potential for harm to these groups that these biases
      represent and trust that future AI models will take steps towards
      reducing or eliminating the biases noted above, respect and give due
      credit to the artists whose work is sourced, and call on developers
      and users to favor these models over the older ones as they become
      available.

4. We are deeply committed to ensuring that this technology benefits
everyone, including artists. We see AI art not as a replacement for
the artist, but rather as a tool to empower them. With that
in mind, we are constantly debating how to build systems that put
artists’ needs first: tools which can be readily integrated into an
artist’s existing workflows and practices, enhancing their work and
helping them to push it further. Every decision we take as a team,
which includes several artists, aims to build towards that goal.

5. That artificial intelligence can be a force for good in the world,
but must be used responsibly. Artificial intelligence technologies
have the potential to improve society, in everything from cancer care,
to customer service, to creative writing.

   1. While we do not believe that software should arbitrarily limit what
      users can do with it, we recognize that when used irresponsibly, AI
      has the potential to do much harm. Our Discord server is actively
      moderated in order to minimize the potential of harm from
      user-contributed images. In addition, we ask users of our software to
      refrain from using it in any way that would cause mental, emotional or
      physical harm to individuals and vulnerable populations including (but
      not limited to) women; minors; ethnic minorities; religious groups;
      members of LGBTQIA communities; and people with disabilities or
      impairments.

   2. Note that some of the image generation AI models which the Invoke-AI
      toolkit supports carry licensing agreements which impose restrictions
      on how the model is used. We ask that our users read and agree to
      these terms if they wish to make use of these models. These agreements
      are distinct from the MIT license which applies to the InvokeAI
      software and source code.

6. That mutual respect is key to a healthy software development
community. Members of the InvokeAI community are expected to treat
each other with respect, beneficence, and empathy. Each of us has a
different background and a unique set of skills. We strive to help
each other grow and gain new skills, and we apportion expectations in
a way that balances the members' time, skillset, and interest
area. Disputes are resolved by open and honest communication.

## Signature

This document has been collectively crafted and approved by the current InvokeAI team members, as of 28 Nov 2022: **lstein** (Lincoln Stein), **blessedcoolant**, **hipsterusername** (Kent Keirsey), **Kyle0654** (Kyle Schouviller), **damian0815**, **mauwii** (Matthias Wild), **Netsvetaev** (Artur Netsvetaev), **psychedelicious**, **tildebyte**, **keturn**, and **ebr** (Eugene Brodsky). Although individuals within the group may hold differing views on particular details and/or their implications, we are all in agreement about its fundamental statements, as well as their significance and importance to this project moving forward.
189
README.md
@ -1,21 +1,17 @@
|
|||||||
<div align="center">
|
<div align="center">
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
# InvokeAI: A Stable Diffusion Toolkit
|
# InvokeAI: A Stable Diffusion Toolkit
|
||||||
|
|
||||||
_Formerly known as lstein/stable-diffusion_
|
|
||||||
|
|
||||||

|
|
||||||
|
|
||||||
[![discord badge]][discord link]
|
[![discord badge]][discord link]
|
||||||
|
|
||||||
[![latest release badge]][latest release link] [![github stars badge]][github stars link] [![github forks badge]][github forks link]
|
[![latest release badge]][latest release link] [![github stars badge]][github stars link] [![github forks badge]][github forks link]
|
||||||
|
|
||||||
[![CI checks on main badge]][CI checks on main link] [![CI checks on dev badge]][CI checks on dev link] [![latest commit to dev badge]][latest commit to dev link]
|
[![CI checks on main badge]][CI checks on main link] [![latest commit to main badge]][latest commit to main link]
|
||||||
|
|
||||||
[![github open issues badge]][github open issues link] [![github open prs badge]][github open prs link]
|
[![github open issues badge]][github open issues link] [![github open prs badge]][github open prs link]
|
||||||
|
|
||||||
[CI checks on dev badge]: https://flat.badgen.net/github/checks/invoke-ai/InvokeAI/development?label=CI%20status%20on%20dev&cache=900&icon=github
|
|
||||||
[CI checks on dev link]: https://github.com/invoke-ai/InvokeAI/actions?query=branch%3Adevelopment
|
|
||||||
[CI checks on main badge]: https://flat.badgen.net/github/checks/invoke-ai/InvokeAI/main?label=CI%20status%20on%20main&cache=900&icon=github
|
[CI checks on main badge]: https://flat.badgen.net/github/checks/invoke-ai/InvokeAI/main?label=CI%20status%20on%20main&cache=900&icon=github
|
||||||
[CI checks on main link]: https://github.com/invoke-ai/InvokeAI/actions/workflows/test-invoke-conda.yml
|
[CI checks on main link]: https://github.com/invoke-ai/InvokeAI/actions/workflows/test-invoke-conda.yml
|
||||||
[discord badge]: https://flat.badgen.net/discord/members/ZmtBAhwWhy?icon=discord
|
[discord badge]: https://flat.badgen.net/discord/members/ZmtBAhwWhy?icon=discord
|
||||||
@ -28,28 +24,41 @@ _Formerly known as lstein/stable-diffusion_
|
|||||||
[github open prs link]: https://github.com/invoke-ai/InvokeAI/pulls?q=is%3Apr+is%3Aopen
|
[github open prs link]: https://github.com/invoke-ai/InvokeAI/pulls?q=is%3Apr+is%3Aopen
|
||||||
[github stars badge]: https://flat.badgen.net/github/stars/invoke-ai/InvokeAI?icon=github
|
[github stars badge]: https://flat.badgen.net/github/stars/invoke-ai/InvokeAI?icon=github
|
||||||
[github stars link]: https://github.com/invoke-ai/InvokeAI/stargazers
|
[github stars link]: https://github.com/invoke-ai/InvokeAI/stargazers
|
||||||
[latest commit to dev badge]: https://flat.badgen.net/github/last-commit/invoke-ai/InvokeAI/development?icon=github&color=yellow&label=last%20dev%20commit&cache=900
|
[latest commit to main badge]: https://flat.badgen.net/github/last-commit/invoke-ai/InvokeAI/main?icon=github&color=yellow&label=last%20dev%20commit&cache=900
|
||||||
[latest commit to dev link]: https://github.com/invoke-ai/InvokeAI/commits/development
|
[latest commit to main link]: https://github.com/invoke-ai/InvokeAI/commits/main
|
||||||
[latest release badge]: https://flat.badgen.net/github/release/invoke-ai/InvokeAI/development?icon=github
|
[latest release badge]: https://flat.badgen.net/github/release/invoke-ai/InvokeAI/development?icon=github
|
||||||
[latest release link]: https://github.com/invoke-ai/InvokeAI/releases
|
[latest release link]: https://github.com/invoke-ai/InvokeAI/releases
|
||||||
|
|
||||||
</div>
|
</div>
|
||||||
|
|
||||||
This is a fork of
|
InvokeAI is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. InvokeAI offers an industry leading Web Interface, interactive Command Line Interface, and also serves as the foundation for multiple commercial products.
|
||||||
[CompVis/stable-diffusion](https://github.com/CompVis/stable-diffusion),
|
|
||||||
the open source text-to-image generator. It provides a streamlined
|
|
||||||
process with various new features and options to aid the image
|
|
||||||
generation process. It runs on Windows, Mac and Linux machines, with
|
|
||||||
GPU cards with as little as 4 GB of RAM. It provides both a polished
|
|
||||||
Web interface (see below), and an easy-to-use command-line interface.
|
|
||||||
|
|
||||||
**Quick links**: [<a href="https://discord.gg/ZmtBAhwWhy">Discord Server</a>] [<a href="https://invoke-ai.github.io/InvokeAI/">Documentation and Tutorials</a>] [<a href="https://github.com/invoke-ai/InvokeAI/">Code and Downloads</a>] [<a href="https://github.com/invoke-ai/InvokeAI/issues">Bug Reports</a>] [<a href="https://github.com/invoke-ai/InvokeAI/discussions">Discussion, Ideas & Q&A</a>]
|
**Quick links**: [[How to Install](#installation)] [<a href="https://discord.gg/ZmtBAhwWhy">Discord Server</a>] [<a href="https://invoke-ai.github.io/InvokeAI/">Documentation and Tutorials</a>] [<a href="https://github.com/invoke-ai/InvokeAI/">Code and Downloads</a>] [<a href="https://github.com/invoke-ai/InvokeAI/issues">Bug Reports</a>] [<a href="https://github.com/invoke-ai/InvokeAI/discussions">Discussion, Ideas & Q&A</a>]
|
||||||
|
|
||||||
<div align="center"><img src="docs/assets/invoke-web-server-1.png" width=640></div>
|
_Note: InvokeAI is rapidly evolving. Please use the
|
||||||
|
|
||||||
|
|
||||||
_Note: This fork is rapidly evolving. Please use the
|
|
||||||
[Issues](https://github.com/invoke-ai/InvokeAI/issues) tab to report bugs and make feature
|
[Issues](https://github.com/invoke-ai/InvokeAI/issues) tab to report bugs and make feature
|
||||||
requests. Be sure to use the provided templates. They will help aid diagnose issues faster._
|
requests. Be sure to use the provided templates. They will help us diagnose issues faster._
|
||||||
|
|
||||||
|
<div align="center">
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
</div>
|
||||||
|
|
||||||
|
# Getting Started with InvokeAI
|
||||||
|
|
||||||
|
For full installation and upgrade instructions, please see:
|
||||||
|
[InvokeAI Installation Overview](https://invoke-ai.github.io/InvokeAI/installation/)
|
||||||
|
|
||||||
|
1. Go to the bottom of the [Latest Release Page](https://github.com/invoke-ai/InvokeAI/releases/latest)
|
||||||
|
2. Download the .zip file for your OS (Windows/macOS/Linux).
|
||||||
|
3. Unzip the file.
|
||||||
|
4. If you are on Windows, double-click on the `install.bat` script. On macOS, open a Terminal window, drag the file `install.sh` from Finder into the Terminal, and press return. On Linux, run `install.sh`.
|
||||||
|
5. Wait a while, until it is done.
|
||||||
|
6. The folder where you ran the installer from will now be filled with lots of files. If you are on Windows, double-click on the `invoke.bat` file. On macOS, open a Terminal window, drag `invoke.sh` from the folder into the Terminal, and press return. On Linux, run `invoke.sh`
|
||||||
|
7. Press 2 to open the "browser-based UI", press enter/return, wait a minute or two for Stable Diffusion to start up, then open your browser and go to http://localhost:9090.
|
||||||
|
8. Type `banana sushi` in the box on the top left and click `Invoke`
|
||||||
|
|
||||||
|
|
||||||
## Table of Contents
|
## Table of Contents

@@ -63,23 +72,31 @@ requests. Be sure to use the provided templates. They will help aid diagnose issues faster._

8. [Support](#support)
9. [Further Reading](#further-reading)

## Installation

This fork is supported across Linux, Windows and Macintosh. Linux
users can use either an Nvidia-based card (with CUDA support) or an
AMD card (using the ROCm driver). For full installation and upgrade
instructions, please see:
[InvokeAI Installation Overview](https://invoke-ai.github.io/InvokeAI/installation/INSTALL_SOURCE/)

### Hardware Requirements

InvokeAI is supported across Linux, Windows and macOS. Linux
users can use either an Nvidia-based card (with CUDA support) or an
AMD card (using the ROCm driver).

#### System

You will need one of the following:

- An NVIDIA-based graphics card with 4 GB or more of VRAM.
- An Apple computer with an M1 chip.

We do not recommend the GTX 1650 or 1660 series video cards. They are
unable to run in half-precision mode and do not have sufficient VRAM
to render 512x512 images.

#### Memory

- At least 12 GB of main memory (RAM).

@@ -88,83 +105,48 @@ You wil need one of the following:

- At least 12 GB of free disk space for the machine learning model, Python, and all its dependencies.

## Features

Feature documentation can be reviewed by navigating to [the InvokeAI Documentation page](https://invoke-ai.github.io/InvokeAI/features/).

### *Web Server & UI*

InvokeAI offers a locally hosted Web Server & React Frontend, with an industry-leading user experience. The Web-based UI allows for simple and intuitive workflows, and is responsive for use on mobile devices and tablets accessing the web server.

### *Unified Canvas*

The Unified Canvas is a fully integrated canvas implementation with support for all core generation capabilities, in/outpainting, brush tools, and more. This creative tool unlocks the capability for artists to create with AI as a creative collaborator, and can be used to augment AI-generated imagery, sketches, photography, renders, and more.

### *Advanced Prompt Syntax*

InvokeAI's advanced prompt syntax allows for token weighting, cross-attention control, and prompt blending, enabling fine-tuned tweaking of your invocations and exploration of the latent space.

### *Command Line Interface*

For users utilizing a terminal-based environment, or who want to take advantage of CLI features, InvokeAI offers an extensive and actively supported command-line interface that provides the full suite of generation functionality available in the tool.

### Other features

- *Support for both ckpt and diffusers models*
- *SD 2.0, 2.1 support*
- *Noise Control & Thresholding*
- *Popular Sampler Support*
- *Upscaling & Face Restoration Tools*
- *Embedding Manager & Support*
- *Model Manager & Support*

### Coming Soon

- *Node-Based Architecture & UI*
- And more...

### Latest Changes

For our latest changes, view our [Release Notes](https://github.com/invoke-ai/InvokeAI/releases) and the [CHANGELOG](docs/CHANGELOG.md).

## Troubleshooting

Please check out our **[Q&A](https://invoke-ai.github.io/InvokeAI/help/TROUBLESHOOT/#faq)** to get solutions for common installation
problems and other issues.

@@ -172,14 +154,19 @@ problems and other issues.

# Contributing

Anyone who wishes to contribute to this project, whether documentation, features, bug fixes, code
cleanup, testing, or code reviews, is very much encouraged to do so.

To join, just raise your hand on the InvokeAI Discord server (#dev-chat) or the GitHub discussion board.

If you are unfamiliar with how
to contribute to GitHub projects, here is a
[Getting Started Guide](https://opensource.com/article/19/7/create-pull-request-github). A full set of contribution guidelines, along with templates, is in progress. You can **make your pull request against the "main" branch**.

We hope you enjoy using our software as much as we enjoy creating it,
and we hope that some of those of you who are reading this will elect
to become part of our community.

Welcome to InvokeAI!

### Contributors

@@ -189,13 +176,7 @@ their time, hard work and effort.

### Support

For support, please use this repository's GitHub Issues tracking service, or join the Discord.

Original portions of the software are Copyright (c) 2023 by respective contributors.

@@ -21,7 +21,7 @@ This model card focuses on the model associated with the Stable Diffusion model,

# Uses

## Direct Use
The model is intended for research purposes only. Possible research areas and
tasks include

@@ -68,11 +68,11 @@ Using the model to generate content that is cruel to individuals is a misuse of
considerations.

### Bias
While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.
Stable Diffusion v1 was trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/),
which consists of images that are primarily limited to English descriptions.
Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for.
This affects the overall output of the model, as white and western cultures are often set as the default. Further, the
ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts.

@@ -84,7 +84,7 @@ The model developers used the following dataset for training the model:

- LAION-2B (en) and subsets thereof (see next section)

**Training Procedure**
Stable Diffusion v1 is a latent diffusion model which combines an autoencoder with a diffusion model that is trained in the latent space of the autoencoder. During training,

- Images are encoded through an encoder, which turns images into latent representations. The autoencoder uses a relative downsampling factor of 8 and maps images of shape H x W x 3 to latents of shape H/f x W/f x 4
- Text prompts are encoded through a ViT-L/14 text-encoder.
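
As a quick sanity check of the shapes in the bullet above, here is the f=8 arithmetic in Python; the helper name is ours, not part of the training code.

```python
# Latent-space shape for the f=8 autoencoder described above.
def latent_shape(height: int, width: int, f: int = 8, channels: int = 4):
    """Map an H x W x 3 image to its H/f x W/f x 4 latent shape."""
    return (height // f, width // f, channels)

# A 512x512 RGB image becomes a 64x64x4 latent tensor.
print(latent_shape(512, 512))  # -> (64, 64, 4)
```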

@@ -108,12 +108,12 @@ filtered to images with an original size `>= 512x512`, estimated aesthetics score

- **Batch:** 32 x 8 x 2 x 4 = 2048
- **Learning rate:** warmup to 0.0001 for 10,000 steps and then kept constant

## Evaluation Results
Evaluations with different classifier-free guidance scales (1.5, 2.0, 3.0, 4.0,
5.0, 6.0, 7.0, 8.0) and 50 PLMS sampling
steps show the relative improvements of the checkpoints:



Evaluated using 50 PLMS steps and 10000 random prompts from the COCO2017 validation set, evaluated at 512x512 resolution. Not optimized for FID scores.
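
For readers who want to run a sweep like the one above themselves, a minimal sketch follows. It assumes the Hugging Face `diffusers` package (pinned elsewhere in this compare as `diffusers[torch]~=0.11`), a CUDA device, and a locally cached Stable Diffusion checkpoint; the model ID and output handling are illustrative, not the evaluation harness used for the plot.

```python
# Illustrative classifier-free-guidance sweep using the diffusers
# StableDiffusionPipeline API; not the original evaluation code.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed model ID; substitute your own
    torch_dtype=torch.float16,
).to("cuda")

prompt = "a photograph of an astronaut riding a horse"
for scale in (1.5, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0):
    image = pipe(prompt, guidance_scale=scale, num_inference_steps=50).images[0]
    image.save(f"astronaut_cfg_{scale}.png")
```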

## Environmental Impact

@@ -1,69 +0,0 @@

from backend.modules.parse_seed_weights import parse_seed_weights
import argparse

SAMPLER_CHOICES = [
    "ddim",
    "k_dpm_2_a",
    "k_dpm_2",
    "k_euler_a",
    "k_euler",
    "k_heun",
    "k_lms",
    "plms",
]


def parameters_to_command(params):
    """
    Converts a dict of parameters into an `invoke.py` REPL command.
    """

    switches = list()

    if "prompt" in params:
        switches.append(f'"{params["prompt"]}"')
    if "steps" in params:
        switches.append(f'-s {params["steps"]}')
    if "seed" in params:
        switches.append(f'-S {params["seed"]}')
    if "width" in params:
        switches.append(f'-W {params["width"]}')
    if "height" in params:
        switches.append(f'-H {params["height"]}')
    if "cfg_scale" in params:
        switches.append(f'-C {params["cfg_scale"]}')
    if "sampler_name" in params:
        switches.append(f'-A {params["sampler_name"]}')
    if "seamless" in params and params["seamless"]:
        switches.append("--seamless")
    if "hires_fix" in params and params["hires_fix"]:
        switches.append("--hires")
    if "init_img" in params and len(params["init_img"]) > 0:
        switches.append(f'-I {params["init_img"]}')
    if "init_mask" in params and len(params["init_mask"]) > 0:
        switches.append(f'-M {params["init_mask"]}')
    if "init_color" in params and len(params["init_color"]) > 0:
        switches.append(f'--init_color {params["init_color"]}')
    if "strength" in params and "init_img" in params:
        switches.append(f'-f {params["strength"]}')
    if "fit" in params and params["fit"]:
        switches.append("--fit")
    if "facetool" in params:
        switches.append(f'-ft {params["facetool"]}')
    if "facetool_strength" in params and params["facetool_strength"]:
        switches.append(f'-G {params["facetool_strength"]}')
    elif "gfpgan_strength" in params and params["gfpgan_strength"]:
        switches.append(f'-G {params["gfpgan_strength"]}')
    if "codeformer_fidelity" in params:
        switches.append(f'-cf {params["codeformer_fidelity"]}')
    if "upscale" in params and params["upscale"]:
        switches.append(f'-U {params["upscale"][0]} {params["upscale"][1]}')
    if "variation_amount" in params and params["variation_amount"] > 0:
        switches.append(f'-v {params["variation_amount"]}')
    if "with_variations" in params:
        seed_weight_pairs = ",".join(
            f"{seed}:{weight}" for seed, weight in params["with_variations"]
        )
        switches.append(f"-V {seed_weight_pairs}")

    return " ".join(switches)
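
For context, here is a hedged usage sketch for the removed helper above. The params dict uses only keys that `parameters_to_command` actually handles, and the sketch assumes the function is importable from a local copy of the module (or pasted alongside).

```python
# Usage sketch for the (now removed) parameters_to_command helper.
params = {
    "prompt": "banana sushi",
    "steps": 50,
    "seed": 42,
    "width": 512,
    "height": 512,
    "cfg_scale": 7.5,
    "sampler_name": "k_lms",      # one of SAMPLER_CHOICES above
    "upscale": (2, 0.75),          # (scale factor, strength)
    "with_variations": [(1001, 0.2), (1002, 0.3)],
}
print(parameters_to_command(params))
# -> "banana sushi" -s 50 -S 42 -W 512 -H 512 -C 7.5 -A k_lms -U 2 0.75 -V 1001:0.2,1002:0.3
```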

binary_installer/WinLongPathsEnabled.reg (new binary file)

binary_installer/install.bat.in (new file, 164 lines)
@@ -0,0 +1,164 @@

@echo off

@rem This script will install git (if not found on the PATH variable)
@rem using micromamba (an 8mb static-linked single-file binary, conda replacement).
@rem For users who already have git, this step will be skipped.

@rem Next, it'll download the project's source code.
@rem Then it will download a self-contained, standalone Python and unpack it.
@rem Finally, it'll create the Python virtual environment and preload the models.

@rem This enables a user to install this project without manually installing git or Python

@rem change to the script's directory
PUSHD "%~dp0"

set "no_cache_dir=--no-cache-dir"
if "%1" == "use-cache" (
    set "no_cache_dir="
)

echo ***** Installing InvokeAI.. *****

@rem Config
set INSTALL_ENV_DIR=%cd%\installer_files\env
@rem https://mamba.readthedocs.io/en/latest/installation.html
set MICROMAMBA_DOWNLOAD_URL=https://github.com/cmdr2/stable-diffusion-ui/releases/download/v1.1/micromamba.exe
set RELEASE_URL=https://github.com/invoke-ai/InvokeAI
set RELEASE_SOURCEBALL=/archive/refs/heads/main.tar.gz
set PYTHON_BUILD_STANDALONE_URL=https://github.com/indygreg/python-build-standalone/releases/download
set PYTHON_BUILD_STANDALONE=20221002/cpython-3.10.7+20221002-x86_64-pc-windows-msvc-shared-install_only.tar.gz

set PACKAGES_TO_INSTALL=

call git --version >.tmp1 2>.tmp2
if "%ERRORLEVEL%" NEQ "0" set PACKAGES_TO_INSTALL=%PACKAGES_TO_INSTALL% git

@rem Cleanup
del /q .tmp1 .tmp2

@rem (if necessary) install git into a contained environment
if "%PACKAGES_TO_INSTALL%" NEQ "" (
    @rem download micromamba
    echo ***** Downloading micromamba from %MICROMAMBA_DOWNLOAD_URL% to micromamba.exe *****

    call curl -L "%MICROMAMBA_DOWNLOAD_URL%" > micromamba.exe

    @rem test the mamba binary
    echo ***** Micromamba version: *****
    call micromamba.exe --version

    @rem create the installer env
    if not exist "%INSTALL_ENV_DIR%" (
        call micromamba.exe create -y --prefix "%INSTALL_ENV_DIR%"
    )

    echo ***** Packages to install:%PACKAGES_TO_INSTALL% *****

    call micromamba.exe install -y --prefix "%INSTALL_ENV_DIR%" -c conda-forge %PACKAGES_TO_INSTALL%

    if not exist "%INSTALL_ENV_DIR%" (
        echo ----- There was a problem while installing "%PACKAGES_TO_INSTALL%" using micromamba. Cannot continue. -----
        pause
        exit /b
    )
)

del /q micromamba.exe

@rem For 'git' only
set PATH=%INSTALL_ENV_DIR%\Library\bin;%PATH%

@rem Download/unpack/clean up InvokeAI release sourceball
set err_msg=----- InvokeAI source download failed -----
echo Trying to download "%RELEASE_URL%%RELEASE_SOURCEBALL%"
curl -L %RELEASE_URL%%RELEASE_SOURCEBALL% --output InvokeAI.tgz
if %errorlevel% neq 0 goto err_exit

set err_msg=----- InvokeAI source unpack failed -----
tar -zxf InvokeAI.tgz
if %errorlevel% neq 0 goto err_exit

del /q InvokeAI.tgz

set err_msg=----- InvokeAI source copy failed -----
cd InvokeAI-*
xcopy . .. /e /h
if %errorlevel% neq 0 goto err_exit
cd ..

@rem cleanup
for /f %%i in ('dir /b InvokeAI-*') do rd /s /q %%i
rd /s /q .dev_scripts .github docker-build tests
del /q requirements.in requirements-mkdocs.txt shell.nix

echo ***** Unpacked InvokeAI source *****

@rem Download/unpack/clean up python-build-standalone
set err_msg=----- Python download failed -----
curl -L %PYTHON_BUILD_STANDALONE_URL%/%PYTHON_BUILD_STANDALONE% --output python.tgz
if %errorlevel% neq 0 goto err_exit

set err_msg=----- Python unpack failed -----
tar -zxf python.tgz
if %errorlevel% neq 0 goto err_exit

del /q python.tgz

echo ***** Unpacked python-build-standalone *****

@rem create venv
set err_msg=----- problem creating venv -----
.\python\python -E -s -m venv .venv
if %errorlevel% neq 0 goto err_exit
call .venv\Scripts\activate.bat

echo ***** Created Python virtual environment *****

@rem Print venv's Python version
set err_msg=----- problem calling venv's python -----
echo We're running under
.venv\Scripts\python --version
if %errorlevel% neq 0 goto err_exit

set err_msg=----- pip update failed -----
.venv\Scripts\python -m pip install %no_cache_dir% --no-warn-script-location --upgrade pip wheel
if %errorlevel% neq 0 goto err_exit

echo ***** Updated pip and wheel *****

set err_msg=----- requirements file copy failed -----
copy binary_installer\py3.10-windows-x86_64-cuda-reqs.txt requirements.txt
if %errorlevel% neq 0 goto err_exit

set err_msg=----- main pip install failed -----
.venv\Scripts\python -m pip install %no_cache_dir% --no-warn-script-location -r requirements.txt
if %errorlevel% neq 0 goto err_exit

echo ***** Installed Python dependencies *****

set err_msg=----- InvokeAI setup failed -----
.venv\Scripts\python -m pip install %no_cache_dir% --no-warn-script-location -e .
if %errorlevel% neq 0 goto err_exit

copy binary_installer\invoke.bat.in .\invoke.bat
echo ***** Installed invoke launcher script ******

@rem more cleanup
rd /s /q binary_installer installer_files

@rem preload the models
@rem (set the error message before the call so the errorlevel check below reports correctly)
set err_msg=----- model download clone failed -----
call .venv\Scripts\python scripts\configure_invokeai.py
if %errorlevel% neq 0 goto err_exit
deactivate

echo ***** Finished downloading models *****

echo All done! Execute the file invoke.bat in this directory to start InvokeAI
pause
exit

:err_exit
echo %err_msg%
pause
exit

binary_installer/install.sh.in (new file, 235 lines)
@@ -0,0 +1,235 @@

#!/usr/bin/env bash

# ensure we're in the correct folder in case user's CWD is somewhere else
scriptdir=$(dirname "$0")
cd "$scriptdir"

set -euo pipefail
IFS=$'\n\t'

function _err_exit {
    if test "$1" -ne 0
    then
        echo -e "Error code $1; Error caught was '$2'"
        read -p "Press any key to exit..."
        exit
    fi
}

# This script will install git (if not found on the PATH variable)
# using micromamba (an 8mb static-linked single-file binary, conda replacement).
# For users who already have git, this step will be skipped.

# Next, it'll download the project's source code.
# Then it will download a self-contained, standalone Python and unpack it.
# Finally, it'll create the Python virtual environment and preload the models.

# This enables a user to install this project without manually installing git or Python

echo -e "\n***** Installing InvokeAI into $(pwd)... *****\n"

export no_cache_dir="--no-cache-dir"
if [ $# -ge 1 ]; then
    if [ "$1" = "use-cache" ]; then
        export no_cache_dir=""
    fi
fi

OS_NAME=$(uname -s)
case "${OS_NAME}" in
    Linux*) OS_NAME="linux";;
    Darwin*) OS_NAME="darwin";;
    *) echo -e "\n----- Unknown OS: $OS_NAME! This script runs only on Linux or macOS -----\n" && exit
esac

OS_ARCH=$(uname -m)
case "${OS_ARCH}" in
    x86_64*) ;;
    arm64*) ;;
    *) echo -e "\n----- Unknown system architecture: $OS_ARCH! This script runs only on x86_64 or arm64 -----\n" && exit
esac

# https://mamba.readthedocs.io/en/latest/installation.html
MAMBA_OS_NAME=$OS_NAME
MAMBA_ARCH=$OS_ARCH
if [ "$OS_NAME" == "darwin" ]; then
    MAMBA_OS_NAME="osx"
fi

# note: OS_ARCH is always x86_64 or arm64 here, so this branch never fires;
# on macOS arm64 the micromamba API expects "arm64" anyway, so falling through is correct.
if [ "$OS_ARCH" == "linux" ]; then
    MAMBA_ARCH="aarch64"
fi

if [ "$OS_ARCH" == "x86_64" ]; then
    MAMBA_ARCH="64"
fi

PY_ARCH=$OS_ARCH
if [ "$OS_ARCH" == "arm64" ]; then
    PY_ARCH="aarch64"
fi

# Compute device ('cd' segment of reqs files) detect goes here
# This needs a ton of work
# Suggestions:
# - lspci
# - check $PATH for nvidia-smi, get CUDA/GPU version from output
# - Surely there's a similar utility for AMD?
CD="cuda"
if [ "$OS_NAME" == "darwin" ] && [ "$OS_ARCH" == "arm64" ]; then
    CD="mps"
fi

# config
INSTALL_ENV_DIR="$(pwd)/installer_files/env"
MICROMAMBA_DOWNLOAD_URL="https://micro.mamba.pm/api/micromamba/${MAMBA_OS_NAME}-${MAMBA_ARCH}/latest"
RELEASE_URL=https://github.com/invoke-ai/InvokeAI
RELEASE_SOURCEBALL=/archive/refs/heads/main.tar.gz
PYTHON_BUILD_STANDALONE_URL=https://github.com/indygreg/python-build-standalone/releases/download
if [ "$OS_NAME" == "darwin" ]; then
    PYTHON_BUILD_STANDALONE=20221002/cpython-3.10.7+20221002-${PY_ARCH}-apple-darwin-install_only.tar.gz
elif [ "$OS_NAME" == "linux" ]; then
    PYTHON_BUILD_STANDALONE=20221002/cpython-3.10.7+20221002-${PY_ARCH}-unknown-linux-gnu-install_only.tar.gz
fi
echo "INSTALLING $RELEASE_SOURCEBALL FROM $RELEASE_URL"

PACKAGES_TO_INSTALL=""

if ! hash "git" &>/dev/null; then PACKAGES_TO_INSTALL="$PACKAGES_TO_INSTALL git"; fi

# (if necessary) install git and conda into a contained environment
if [ "$PACKAGES_TO_INSTALL" != "" ]; then
    # download micromamba
    echo -e "\n***** Downloading micromamba from $MICROMAMBA_DOWNLOAD_URL to micromamba *****\n"

    curl -L "$MICROMAMBA_DOWNLOAD_URL" | tar -xvjO bin/micromamba > micromamba

    chmod u+x ./micromamba

    # test the mamba binary
    echo -e "\n***** Micromamba version: *****\n"
    ./micromamba --version

    # create the installer env
    if [ ! -e "$INSTALL_ENV_DIR" ]; then
        ./micromamba create -y --prefix "$INSTALL_ENV_DIR"
    fi

    echo -e "\n***** Packages to install:$PACKAGES_TO_INSTALL *****\n"

    ./micromamba install -y --prefix "$INSTALL_ENV_DIR" -c conda-forge "$PACKAGES_TO_INSTALL"

    if [ ! -e "$INSTALL_ENV_DIR" ]; then
        echo -e "\n----- There was a problem while initializing micromamba. Cannot continue. -----\n"
        exit
    fi
fi

rm -f micromamba.exe micromamba  # the binary is named 'micromamba' on Linux/macOS

export PATH="$INSTALL_ENV_DIR/bin:$PATH"

# Download/unpack/clean up InvokeAI release sourceball
_err_msg="\n----- InvokeAI source download failed -----\n"
curl -L $RELEASE_URL/$RELEASE_SOURCEBALL --output InvokeAI.tgz
_err_exit $? "$_err_msg"  # quote the variable so its value (not the literal name) is passed
_err_msg="\n----- InvokeAI source unpack failed -----\n"
tar -zxf InvokeAI.tgz
_err_exit $? "$_err_msg"

rm -f InvokeAI.tgz

_err_msg="\n----- InvokeAI source copy failed -----\n"
cd InvokeAI-*
cp -r . ..
_err_exit $? "$_err_msg"
cd ..

# cleanup
rm -rf InvokeAI-*/
rm -rf .dev_scripts/ .github/ docker-build/ tests/ requirements.in requirements-mkdocs.txt shell.nix

echo -e "\n***** Unpacked InvokeAI source *****\n"

# Download/unpack/clean up python-build-standalone
_err_msg="\n----- Python download failed -----\n"
curl -L $PYTHON_BUILD_STANDALONE_URL/$PYTHON_BUILD_STANDALONE --output python.tgz
_err_exit $? "$_err_msg"
_err_msg="\n----- Python unpack failed -----\n"
tar -zxf python.tgz
_err_exit $? "$_err_msg"

rm -f python.tgz

echo -e "\n***** Unpacked python-build-standalone *****\n"

# create venv
_err_msg="\n----- problem creating venv -----\n"

if [ "$OS_NAME" == "darwin" ]; then
    # patch sysconfig so that extensions can build properly
    # adapted from https://github.com/cashapp/hermit-packages/commit/fcba384663892f4d9cfb35e8639ff7a28166ee43
    PYTHON_INSTALL_DIR="$(pwd)/python"
    SYSCONFIG="$(echo python/lib/python*/_sysconfigdata_*.py)"
    TMPFILE="$(mktemp)"
    chmod +w "${SYSCONFIG}"
    cp "${SYSCONFIG}" "${TMPFILE}"
    sed "s,'/install,'${PYTHON_INSTALL_DIR},g" "${TMPFILE}" > "${SYSCONFIG}"
    rm -f "${TMPFILE}"
fi

./python/bin/python3 -E -s -m venv .venv
_err_exit $? "$_err_msg"
source .venv/bin/activate

echo -e "\n***** Created Python virtual environment *****\n"

# Print venv's Python version
_err_msg="\n----- problem calling venv's python -----\n"
echo -e "We're running under"
.venv/bin/python3 --version
_err_exit $? "$_err_msg"

_err_msg="\n----- pip update failed -----\n"
.venv/bin/python3 -m pip install $no_cache_dir --no-warn-script-location --upgrade pip
_err_exit $? "$_err_msg"

echo -e "\n***** Updated pip *****\n"

_err_msg="\n----- requirements file copy failed -----\n"
cp binary_installer/py3.10-${OS_NAME}-"${OS_ARCH}"-${CD}-reqs.txt requirements.txt
_err_exit $? "$_err_msg"

_err_msg="\n----- main pip install failed -----\n"
.venv/bin/python3 -m pip install $no_cache_dir --no-warn-script-location -r requirements.txt
_err_exit $? "$_err_msg"

echo -e "\n***** Installed Python dependencies *****\n"

_err_msg="\n----- InvokeAI setup failed -----\n"
.venv/bin/python3 -m pip install $no_cache_dir --no-warn-script-location -e .
_err_exit $? "$_err_msg"

echo -e "\n***** Installed InvokeAI *****\n"

cp binary_installer/invoke.sh.in ./invoke.sh
chmod a+rx ./invoke.sh
echo -e "\n***** Installed invoke launcher script ******\n"

# more cleanup
rm -rf binary_installer/ installer_files/

# preload the models
# (set the error message before the command so the exit-status check reads the right status)
_err_msg="\n----- model download clone failed -----\n"
.venv/bin/python3 scripts/configure_invokeai.py
_err_exit $? "$_err_msg"
deactivate

echo -e "\n***** Finished downloading models *****\n"

echo "All done! Run the command"
echo "  $scriptdir/invoke.sh"
echo "to start InvokeAI."
read -p "Press any key to exit..."
exit

binary_installer/invoke.bat.in (new file, 36 lines)
@@ -0,0 +1,36 @@

@echo off

PUSHD "%~dp0"
call .venv\Scripts\activate.bat

echo Do you want to generate images using the
echo 1. command-line
echo 2. browser-based UI
echo OR
echo 3. open the developer console
set /p choice="Please enter 1, 2 or 3: "
if /i "%choice%" == "1" (
    echo Starting the InvokeAI command-line.
    .venv\Scripts\python scripts\invoke.py %*
) else if /i "%choice%" == "2" (
    echo Starting the InvokeAI browser-based UI.
    .venv\Scripts\python scripts\invoke.py --web %*
) else if /i "%choice%" == "3" (
    echo Developer Console
    echo Python command is:
    where python
    echo Python version is:
    python --version
    echo *************************
    echo You are now in the system shell, with the local InvokeAI Python virtual environment activated,
    echo so that you can troubleshoot this InvokeAI installation as necessary.
    echo *************************
    echo *** Type `exit` to quit this shell and deactivate the Python virtual environment ***
    call cmd /k
) else (
    echo Invalid selection
    pause
    exit /b
)

deactivate

binary_installer/invoke.sh.in (new file, 46 lines)
@@ -0,0 +1,46 @@

#!/usr/bin/env sh

set -eu

. .venv/bin/activate

# set required env var for torch on mac MPS
# (use POSIX '=' here; '==' is a bash-ism and this script runs under sh)
if [ "$(uname -s)" = "Darwin" ]; then
    export PYTORCH_ENABLE_MPS_FALLBACK=1
fi

echo "Do you want to generate images using the"
echo "1. command-line"
echo "2. browser-based UI"
echo "OR"
echo "3. open the developer console"
echo "Please enter 1, 2, or 3:"
read choice

case $choice in
    1)
        printf "\nStarting the InvokeAI command-line..\n";
        .venv/bin/python scripts/invoke.py $*;
        ;;
    2)
        printf "\nStarting the InvokeAI browser-based UI..\n";
        .venv/bin/python scripts/invoke.py --web $*;
        ;;
    3)
        printf "\nDeveloper Console:\n";
        printf "Python command is:\n\t";
        which python;
        printf "Python version is:\n\t";
        python --version;
        echo "*************************"
        echo "You are now in your user shell ($SHELL) with the local InvokeAI Python virtual environment activated,";
        echo "so that you can troubleshoot this InvokeAI installation as necessary.";
        printf "*************************\n"
        echo "*** Type \`exit\` to quit this shell and deactivate the Python virtual environment *** ";
        /usr/bin/env "$SHELL";
        ;;
    *)
        echo "Invalid selection";
        exit
        ;;
esac

binary_installer/py3.10-darwin-arm64-mps-reqs.txt (new file, 2097 lines)
binary_installer/py3.10-darwin-x86_64-cpu-reqs.txt (new file, 2077 lines)
binary_installer/py3.10-linux-x86_64-cuda-reqs.txt (new file, 2103 lines)
binary_installer/py3.10-windows-x86_64-cuda-reqs.txt (new file, 2109 lines)

binary_installer/readme.txt (new file, 17 lines)
@@ -0,0 +1,17 @@

InvokeAI

Project homepage: https://github.com/invoke-ai/InvokeAI

Installation on Windows:
NOTE: You might need to enable Windows Long Paths. If you're not sure,
then you almost certainly need to. Simply double-click the 'WinLongPathsEnabled.reg'
file. Note that you will need to have admin privileges in order to
do this.

Please double-click the 'install.bat' file (while keeping it inside the invokeAI folder).

Installation on Linux and Mac:
Please open the terminal, and run './install.sh' (while keeping it inside the invokeAI folder).

After installation, please run the 'invoke.bat' file (on Windows) or 'invoke.sh'
file (on Linux/Mac) to start InvokeAI.

binary_installer/requirements.in (new file, 33 lines)
@@ -0,0 +1,33 @@

--prefer-binary
--extra-index-url https://download.pytorch.org/whl/torch_stable.html
--extra-index-url https://download.pytorch.org/whl/cu116
--trusted-host https://download.pytorch.org
accelerate~=0.15
albumentations
diffusers[torch]~=0.11
einops
eventlet
flask_cors
flask_socketio
flaskwebgui==1.0.3
getpass_asterisk
imageio-ffmpeg
pyreadline3
realesrgan
send2trash
streamlit
taming-transformers-rom1504
test-tube
torch-fidelity
torch==1.12.1 ; platform_system == 'Darwin'
torch==1.12.0+cu116 ; platform_system == 'Linux' or platform_system == 'Windows'
torchvision==0.13.1 ; platform_system == 'Darwin'
torchvision==0.13.0+cu116 ; platform_system == 'Linux' or platform_system == 'Windows'
transformers
picklescan
https://github.com/openai/CLIP/archive/d50d76daa670286dd6cacf3bcd80b5e4823fc8e1.zip
https://github.com/invoke-ai/clipseg/archive/1f754751c85d7d4255fa681f4491ff5711c1c288.zip
https://github.com/invoke-ai/GFPGAN/archive/3f5d2397361199bc4a91c08bb7d80f04d7805615.zip ; platform_system=='Windows'
https://github.com/invoke-ai/GFPGAN/archive/c796277a1cf77954e5fc0b288d7062d162894248.zip ; platform_system=='Linux' or platform_system=='Darwin'
https://github.com/Birch-san/k-diffusion/archive/363386981fee88620709cf8f6f2eea167bd6cd74.zip
https://github.com/invoke-ai/PyPatchMatch/archive/129863937a8ab37f6bbcec327c994c0f932abdbc.zip
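
A brief aside on the `; platform_system == '...'` suffixes above: they are standard PEP 508 environment markers, which pip evaluates against the installing machine. The sketch below shows that evaluation using the `packaging` library (a dependency of pip itself; treat its availability as an assumption about your environment).

```python
# Evaluate a PEP 508 environment marker the way pip does for the
# platform-specific pins above. Requires the 'packaging' package.
from packaging.markers import Marker

marker = Marker("platform_system == 'Darwin'")
print(marker.evaluate())  # True on macOS, False on Linux/Windows

# Markers can also be checked against an explicit environment:
print(marker.evaluate({"platform_system": "Linux"}))  # False
```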

@@ -1,27 +0,0 @@

# This file describes the alternative machine learning models
# available to the InvokeAI script.
#
# To add a new model, follow the examples below. Each
# model requires a model config file, a weights file,
# and the width and height of the images it
# was trained on.
stable-diffusion-1.5:
  description: The newest Stable Diffusion version 1.5 weight file (4.27 GB)
  weights: ./models/ldm/stable-diffusion-v1/v1-5-pruned-emaonly.ckpt
  config: ./configs/stable-diffusion/v1-inference.yaml
  width: 512
  height: 512
  vae: ./models/ldm/stable-diffusion-v1/vae-ft-mse-840000-ema-pruned.ckpt
  default: true
stable-diffusion-1.4:
  description: Stable Diffusion inference model version 1.4
  config: configs/stable-diffusion/v1-inference.yaml
  weights: models/ldm/stable-diffusion-v1/sd-v1-4.ckpt
  vae: models/ldm/stable-diffusion-v1/vae-ft-mse-840000-ema-pruned.ckpt
  width: 512
  height: 512
inpainting-1.5:
  weights: models/ldm/stable-diffusion-v1/sd-v1-5-inpainting.ckpt
  config: configs/stable-diffusion/v1-inpainting-inference.yaml
  vae: models/ldm/stable-diffusion-v1/vae-ft-mse-840000-ema-pruned.ckpt
  description: RunwayML SD 1.5 model optimized for inpainting
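
To show how a registry like this is typically consumed, here is a small hedged sketch that loads the file and picks the entry flagged `default: true`. It assumes PyYAML is installed and a local copy of the (now removed) `models.yaml`; it is illustrative, not InvokeAI's actual model-manager code.

```python
# Illustrative loader for the model registry above; not InvokeAI's own code.
# Assumes PyYAML ('pip install pyyaml') and a local models.yaml file.
import yaml

with open("models.yaml") as f:
    models = yaml.safe_load(f)

# Pick the entry flagged `default: true`, falling back to the first one.
default_name = next(
    (name for name, cfg in models.items() if cfg.get("default")),
    next(iter(models)),
)
print(f"default model: {default_name}")
print(f"weights: {models[default_name]['weights']}")
```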

@@ -1,110 +0,0 @@

model:
  base_learning_rate: 5.0e-03
  target: ldm.models.diffusion.ddpm.LatentDiffusion
  params:
    linear_start: 0.00085
    linear_end: 0.0120
    num_timesteps_cond: 1
    log_every_t: 200
    timesteps: 1000
    first_stage_key: image
    cond_stage_key: caption
    image_size: 64
    channels: 4
    cond_stage_trainable: true # Note: different from the one we trained before
    conditioning_key: crossattn
    monitor: val/loss_simple_ema
    scale_factor: 0.18215
    use_ema: False
    embedding_reg_weight: 0.0

    personalization_config:
      target: ldm.modules.embedding_manager.EmbeddingManager
      params:
        placeholder_strings: ["*"]
        initializer_words: ["sculpture"]
        per_image_tokens: false
        num_vectors_per_token: 1
        progressive_words: False

    unet_config:
      target: ldm.modules.diffusionmodules.openaimodel.UNetModel
      params:
        image_size: 32 # unused
        in_channels: 4
        out_channels: 4
        model_channels: 320
        attention_resolutions: [ 4, 2, 1 ]
        num_res_blocks: 2
        channel_mult: [ 1, 2, 4, 4 ]
        num_heads: 8
        use_spatial_transformer: True
        transformer_depth: 1
        context_dim: 768
        use_checkpoint: True
        legacy: False

    first_stage_config:
      target: ldm.models.autoencoder.AutoencoderKL
      params:
        embed_dim: 4
        monitor: val/rec_loss
        ddconfig:
          double_z: true
          z_channels: 4
          resolution: 256
          in_channels: 3
          out_ch: 3
          ch: 128
          ch_mult:
            - 1
            - 2
            - 4
            - 4
          num_res_blocks: 2
          attn_resolutions: []
          dropout: 0.0
        lossconfig:
          target: torch.nn.Identity

    cond_stage_config:
      target: ldm.modules.encoders.modules.FrozenCLIPEmbedder

data:
  target: main.DataModuleFromConfig
  params:
    batch_size: 1
    num_workers: 2
    wrap: false
    train:
      target: ldm.data.personalized.PersonalizedBase
      params:
        size: 512
        set: train
        per_image_tokens: false
        repeats: 100
    validation:
      target: ldm.data.personalized.PersonalizedBase
      params:
        size: 512
        set: val
        per_image_tokens: false
        repeats: 10

lightning:
  modelcheckpoint:
    params:
      every_n_train_steps: 500
  callbacks:
    image_logger:
      target: main.ImageLogger
      params:
        batch_frequency: 500
        max_images: 8
        increase_log_steps: False

  trainer:
    benchmark: True
    max_steps: 4000000
    # max_steps: 4000
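
The `target:`/`params:` pairs in these configs follow the CompVis latent-diffusion convention of instantiating a class from its dotted path. A minimal sketch of that mechanism is below; it mirrors the idea of the well-known `instantiate_from_config` helper from the CompVis codebase rather than quoting it verbatim, and the example target is a stdlib class so the sketch runs anywhere.

```python
# Minimal re-implementation of the target/params instantiation convention
# used by the YAML configs above (after the CompVis helper of the same idea).
import importlib

def instantiate_from_config(config: dict):
    """Import config['target'] as a dotted path and call it with config['params']."""
    module_path, cls_name = config["target"].rsplit(".", 1)
    cls = getattr(importlib.import_module(module_path), cls_name)
    return cls(**config.get("params", {}))

# Example with a stdlib target, standing in for e.g. ldm.models.autoencoder.AutoencoderKL:
delta = instantiate_from_config(
    {"target": "datetime.timedelta", "params": {"days": 2, "hours": 3}}
)
print(delta)  # 2 days, 3:00:00
```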

@@ -1,79 +0,0 @@

model:
  base_learning_rate: 1.0e-04
  target: ldm.models.diffusion.ddpm.LatentDiffusion
  params:
    linear_start: 0.00085
    linear_end: 0.0120
    num_timesteps_cond: 1
    log_every_t: 200
    timesteps: 1000
    first_stage_key: "jpg"
    cond_stage_key: "txt"
    image_size: 64
    channels: 4
    cond_stage_trainable: false # Note: different from the one we trained before
    conditioning_key: crossattn
    monitor: val/loss_simple_ema
    scale_factor: 0.18215
    use_ema: False

    scheduler_config: # 10000 warmup steps
      target: ldm.lr_scheduler.LambdaLinearScheduler
      params:
        warm_up_steps: [ 10000 ]
        cycle_lengths: [ 10000000000000 ] # incredibly large number to prevent corner cases
        f_start: [ 1.e-6 ]
        f_max: [ 1. ]
        f_min: [ 1. ]

    personalization_config:
      target: ldm.modules.embedding_manager.EmbeddingManager
      params:
        placeholder_strings: ["*"]
        initializer_words: ['face', 'man', 'photo', 'africanmale']
        per_image_tokens: false
        num_vectors_per_token: 1
        progressive_words: False

    unet_config:
      target: ldm.modules.diffusionmodules.openaimodel.UNetModel
      params:
        image_size: 32 # unused
        in_channels: 4
        out_channels: 4
        model_channels: 320
        attention_resolutions: [ 4, 2, 1 ]
        num_res_blocks: 2
        channel_mult: [ 1, 2, 4, 4 ]
        num_heads: 8
        use_spatial_transformer: True
        transformer_depth: 1
        context_dim: 768
        use_checkpoint: True
        legacy: False

    first_stage_config:
      target: ldm.models.autoencoder.AutoencoderKL
      params:
        embed_dim: 4
        monitor: val/rec_loss
        ddconfig:
          double_z: true
          z_channels: 4
          resolution: 256
          in_channels: 3
          out_ch: 3
          ch: 128
          ch_mult:
            - 1
            - 2
            - 4
            - 4
          num_res_blocks: 2
          attn_resolutions: []
          dropout: 0.0
        lossconfig:
          target: torch.nn.Identity

    cond_stage_config:
      target: ldm.modules.encoders.modules.WeightedFrozenCLIPEmbedder

@@ -1,79 +0,0 @@

model:
  base_learning_rate: 7.5e-05
  target: ldm.models.diffusion.ddpm.LatentInpaintDiffusion
  params:
    linear_start: 0.00085
    linear_end: 0.0120
    num_timesteps_cond: 1
    log_every_t: 200
    timesteps: 1000
    first_stage_key: "jpg"
    cond_stage_key: "txt"
    image_size: 64
    channels: 4
    cond_stage_trainable: false # Note: different from the one we trained before
    conditioning_key: hybrid # important
    monitor: val/loss_simple_ema
    scale_factor: 0.18215
    finetune_keys: null

    scheduler_config: # 10000 warmup steps
      target: ldm.lr_scheduler.LambdaLinearScheduler
      params:
        warm_up_steps: [ 2500 ] # NOTE for resuming. use 10000 if starting from scratch
        cycle_lengths: [ 10000000000000 ] # incredibly large number to prevent corner cases
        f_start: [ 1.e-6 ]
        f_max: [ 1. ]
        f_min: [ 1. ]

    personalization_config:
      target: ldm.modules.embedding_manager.EmbeddingManager
      params:
        placeholder_strings: ["*"]
        initializer_words: ['face', 'man', 'photo', 'africanmale']
        per_image_tokens: false
        num_vectors_per_token: 1
        progressive_words: False

    unet_config:
      target: ldm.modules.diffusionmodules.openaimodel.UNetModel
      params:
        image_size: 32 # unused
        in_channels: 9 # 4 data + 4 downscaled image + 1 mask
        out_channels: 4
        model_channels: 320
        attention_resolutions: [ 4, 2, 1 ]
        num_res_blocks: 2
        channel_mult: [ 1, 2, 4, 4 ]
        num_heads: 8
        use_spatial_transformer: True
        transformer_depth: 1
        context_dim: 768
        use_checkpoint: True
        legacy: False

    first_stage_config:
      target: ldm.models.autoencoder.AutoencoderKL
      params:
        embed_dim: 4
        monitor: val/rec_loss
        ddconfig:
          double_z: true
          z_channels: 4
          resolution: 256
          in_channels: 3
          out_ch: 3
          ch: 128
          ch_mult:
            - 1
            - 2
            - 4
            - 4
          num_res_blocks: 2
          attn_resolutions: []
          dropout: 0.0
        lossconfig:
          target: torch.nn.Identity

    cond_stage_config:
      target: ldm.modules.encoders.modules.WeightedFrozenCLIPEmbedder

@@ -1,110 +0,0 @@

model:
  base_learning_rate: 5.0e-03
  target: ldm.models.diffusion.ddpm.LatentDiffusion
  params:
    linear_start: 0.00085
    linear_end: 0.0120
    num_timesteps_cond: 1
    log_every_t: 200
    timesteps: 1000
    first_stage_key: image
    cond_stage_key: caption
    image_size: 64
    channels: 4
    cond_stage_trainable: true # Note: different from the one we trained before
    conditioning_key: crossattn
    monitor: val/loss_simple_ema
    scale_factor: 0.18215
    use_ema: False
    embedding_reg_weight: 0.0

    personalization_config:
      target: ldm.modules.embedding_manager.EmbeddingManager
      params:
        placeholder_strings: ["*"]
        initializer_words: ['face', 'man', 'photo', 'africanmale']
        per_image_tokens: false
        num_vectors_per_token: 6
        progressive_words: False

    unet_config:
      target: ldm.modules.diffusionmodules.openaimodel.UNetModel
      params:
        image_size: 32 # unused
        in_channels: 4
        out_channels: 4
        model_channels: 320
        attention_resolutions: [ 4, 2, 1 ]
        num_res_blocks: 2
        channel_mult: [ 1, 2, 4, 4 ]
        num_heads: 8
        use_spatial_transformer: True
        transformer_depth: 1
        context_dim: 768
        use_checkpoint: True
        legacy: False

    first_stage_config:
      target: ldm.models.autoencoder.AutoencoderKL
      params:
        embed_dim: 4
        monitor: val/rec_loss
        ddconfig:
          double_z: true
          z_channels: 4
          resolution: 256
          in_channels: 3
          out_ch: 3
          ch: 128
          ch_mult:
            - 1
            - 2
            - 4
            - 4
          num_res_blocks: 2
          attn_resolutions: []
          dropout: 0.0
        lossconfig:
          target: torch.nn.Identity

    cond_stage_config:
      target: ldm.modules.encoders.modules.FrozenCLIPEmbedder

data:
  target: main.DataModuleFromConfig
  params:
    batch_size: 1
    num_workers: 2
    wrap: false
    train:
      target: ldm.data.personalized.PersonalizedBase
      params:
        size: 512
        set: train
        per_image_tokens: false
        repeats: 100
    validation:
      target: ldm.data.personalized.PersonalizedBase
      params:
        size: 512
        set: val
        per_image_tokens: false
        repeats: 10

lightning:
  modelcheckpoint:
    params:
      every_n_train_steps: 500
  callbacks:
    image_logger:
      target: main.ImageLogger
      params:
        batch_frequency: 500
        max_images: 5
        increase_log_steps: False

  trainer:
    benchmark: False
    max_steps: 6200
    # max_steps: 4000

@@ -1,84 +0,0 @@

FROM ubuntu AS get_miniconda

SHELL ["/bin/bash", "-c"]

# install wget
RUN apt-get update \
    && apt-get install -y \
        wget \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*

# download and install miniconda
ARG conda_version=py39_4.12.0-Linux-x86_64
ARG conda_prefix=/opt/conda
RUN wget --progress=dot:giga -O /miniconda.sh \
    https://repo.anaconda.com/miniconda/Miniconda3-${conda_version}.sh \
    && bash /miniconda.sh -b -p ${conda_prefix} \
    && rm -f /miniconda.sh

FROM ubuntu AS invokeai

# use bash
SHELL [ "/bin/bash", "-c" ]

# clean bashrc
RUN echo "" > ~/.bashrc

# Install necessary packages
RUN apt-get update \
    && apt-get install -y \
        --no-install-recommends \
        gcc \
        git \
        libgl1-mesa-glx \
        libglib2.0-0 \
        pip \
        python3 \
        python3-dev \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*

# clone repository, create models.yaml and create symlinks
ARG invokeai_git=invoke-ai/InvokeAI
ARG invokeai_branch=main
ARG project_name=invokeai
ARG conda_env_file=environment-lin-cuda.yml
RUN git clone -b ${invokeai_branch} https://github.com/${invokeai_git}.git "/${project_name}" \
    && cp \
        "/${project_name}/configs/models.yaml.example" \
        "/${project_name}/configs/models.yaml" \
    && ln -sf \
        "/${project_name}/environments-and-requirements/${conda_env_file}" \
        "/${project_name}/environment.yml" \
    && ln -sf \
        /data/models/v1-5-pruned-emaonly.ckpt \
        "/${project_name}/models/ldm/stable-diffusion-v1/v1-5-pruned-emaonly.ckpt" \
    && ln -sf \
        /data/outputs/ \
        "/${project_name}/outputs"

# set workdir
WORKDIR "/${project_name}"

# install conda env and preload models
ARG conda_prefix=/opt/conda
COPY --from=get_miniconda "${conda_prefix}" "${conda_prefix}"
RUN source "${conda_prefix}/etc/profile.d/conda.sh" \
    && conda init bash \
    && source ~/.bashrc \
    && conda env create \
        --name "${project_name}" \
    && rm -Rf ~/.cache \
    && conda clean -afy \
    && echo "conda activate ${project_name}" >> ~/.bashrc

RUN source ~/.bashrc \
    && python scripts/preload_models.py \
        --no-interactive

# Copy entrypoint and set env
ENV CONDA_PREFIX="${conda_prefix}"
ENV PROJECT_NAME="${project_name}"
COPY docker-build/entrypoint.sh /
ENTRYPOINT [ "/entrypoint.sh" ]
@ -1,84 +0,0 @@ (deleted file: docker-build/build.sh)
#!/usr/bin/env bash
set -e

# IMPORTANT: You need to have a token on huggingface.co to be able to download the checkpoint!!!
# configure values by using env when executing build.sh
# e.g. env ARCH=aarch64 INVOKEAI_GIT=yourname/yourfork ./build.sh

source ./docker-build/env.sh || { echo "please run from repository root"; exit 1; }

invokeai_conda_version=${INVOKEAI_CONDA_VERSION:-py39_4.12.0-${platform/\//-}}
invokeai_conda_prefix=${INVOKEAI_CONDA_PREFIX:-\/opt\/conda}
invokeai_conda_env_file=${INVOKEAI_CONDA_ENV_FILE:-environment-lin-cuda.yml}
invokeai_git=${INVOKEAI_GIT:-invoke-ai/InvokeAI}
invokeai_branch=${INVOKEAI_BRANCH:-main}
huggingface_token=${HUGGINGFACE_TOKEN?}

# print the settings
echo "You are using these values:"
echo -e "project_name:\t\t ${project_name}"
echo -e "volumename:\t\t ${volumename}"
echo -e "arch:\t\t\t ${arch}"
echo -e "platform:\t\t ${platform}"
echo -e "invokeai_conda_version:\t ${invokeai_conda_version}"
echo -e "invokeai_conda_prefix:\t ${invokeai_conda_prefix}"
echo -e "invokeai_conda_env_file: ${invokeai_conda_env_file}"
echo -e "invokeai_git:\t\t ${invokeai_git}"
echo -e "invokeai_tag:\t\t ${invokeai_tag}\n"

_runAlpine() {
    docker run \
        --rm \
        --interactive \
        --tty \
        --mount source="$volumename",target=/data \
        --workdir /data \
        alpine "$@"
}

_copyCheckpoints() {
    echo "creating subfolders for models and outputs"
    _runAlpine mkdir models
    _runAlpine mkdir outputs
    echo "downloading v1-5-pruned-emaonly.ckpt"
    _runAlpine wget \
        --header="Authorization: Bearer ${huggingface_token}" \
        -O models/v1-5-pruned-emaonly.ckpt \
        https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt
    echo "done"
}

_checkVolumeContent() {
    _runAlpine ls -lhA /data/models
}

_getModelMd5s() {
    # _runAlpine already supplies the alpine image, so pass only the command
    _runAlpine \
        sh -c "md5sum /data/models/*.ckpt"
}

if [[ -n "$(docker volume ls -f name="${volumename}" -q)" ]]; then
    echo "Volume already exists"
    if [[ -z "$(_checkVolumeContent)" ]]; then
        echo "looks empty, copying checkpoint"
        _copyCheckpoints
    fi
    echo "Models in ${volumename}:"
    _checkVolumeContent
else
    echo -n "creating docker volume "
    docker volume create "${volumename}"
    _copyCheckpoints
fi

# Build Container
docker build \
    --platform="${platform}" \
    --tag "${invokeai_tag}" \
    --build-arg project_name="${project_name}" \
    --build-arg conda_version="${invokeai_conda_version}" \
    --build-arg conda_prefix="${invokeai_conda_prefix}" \
    --build-arg conda_env_file="${invokeai_conda_env_file}" \
    --build-arg invokeai_git="${invokeai_git}" \
    --build-arg invokeai_branch="${invokeai_branch}" \
    --file ./docker-build/Dockerfile \
    .
@ -1,8 +0,0 @@ (deleted file: docker-build/entrypoint.sh)
#!/bin/bash
set -e

source "${CONDA_PREFIX}/etc/profile.d/conda.sh"
conda activate "${PROJECT_NAME}"

# run the CLI; with no arguments, default to the web server on all interfaces
python scripts/invoke.py \
    ${@:---web --host=0.0.0.0}
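The `${@:---web --host=0.0.0.0}` expansion is what supplies the default arguments: if the container is started with no arguments, bash substitutes `--web --host=0.0.0.0`; any arguments you do pass replace the default entirely. A minimal shell sketch of the same pattern:

```bash
#!/usr/bin/env bash
# ${@:-fallback} expands to the positional parameters if any were given,
# otherwise to the fallback text after ":-".
demo() { echo "args: ${@:---web --host=0.0.0.0}"; }

demo              # prints: args: --web --host=0.0.0.0
demo --help       # prints: args: --help
```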
@ -1,13 +0,0 @@ (deleted file: docker-build/env.sh)
#!/usr/bin/env bash

project_name=${PROJECT_NAME:-invokeai}
volumename=${VOLUMENAME:-${project_name}_data}
arch=${ARCH:-x86_64}
platform=${PLATFORM:-Linux/${arch}}
invokeai_tag=${INVOKEAI_TAG:-${project_name}-${arch}}

export project_name
export volumename
export arch
export platform
export invokeai_tag
@ -1,15 +0,0 @@ (deleted file: docker-build/run.sh)
#!/usr/bin/env bash
set -e

source ./docker-build/env.sh || { echo "please run from repository root"; exit 1; }

docker run \
    --interactive \
    --tty \
    --rm \
    --platform "$platform" \
    --name "$project_name" \
    --hostname "$project_name" \
    --mount source="$volumename",target=/data \
    --publish 9090:9090 \
    "$invokeai_tag" ${1:+$@}
docker/Dockerfile (new file, 86 lines)
@ -0,0 +1,86 @@
# syntax=docker/dockerfile:1
ARG PYTHON_VERSION=3.9
##################
## base image ##
##################
FROM python:${PYTHON_VERSION}-slim AS python-base

# prepare for buildkit cache
RUN rm -f /etc/apt/apt.conf.d/docker-clean

# Install necessary packages
RUN \
    --mount=type=cache,target=/var/cache/apt,sharing=locked \
    apt-get update \
    && apt-get install \
        -yqq \
        --no-install-recommends \
        libgl1-mesa-glx=20.3.* \
        libglib2.0-0=2.66.* \
        libopencv-dev=4.5.* \
    && rm -rf /var/lib/apt/lists/*

# set working directory and path
ARG APPDIR=/usr/src
ARG APPNAME=InvokeAI
WORKDIR ${APPDIR}
ENV PATH=${APPDIR}/${APPNAME}/bin:$PATH

#######################
## build pyproject ##
#######################
FROM python-base AS pyproject-builder
ENV PIP_USE_PEP517=1

# prepare for buildkit cache
ARG PIP_CACHE_DIR=/var/cache/buildkit/pip
ENV PIP_CACHE_DIR ${PIP_CACHE_DIR}
RUN mkdir -p ${PIP_CACHE_DIR}

# Install dependencies
RUN \
    --mount=type=cache,target=${PIP_CACHE_DIR} \
    --mount=type=cache,target=/var/cache/apt,sharing=locked \
    apt-get update \
    && apt-get install \
        -yqq \
        --no-install-recommends \
        build-essential=12.9 \
        gcc=4:10.2.* \
        python3-dev=3.9.* \
    && rm -rf /var/lib/apt/lists/*

# create virtual environment
RUN --mount=type=cache,target=${PIP_CACHE_DIR} \
    python3 -m venv "${APPNAME}" \
        --upgrade-deps

# copy sources
COPY --link . .

# install pyproject.toml
ARG PIP_EXTRA_INDEX_URL
ENV PIP_EXTRA_INDEX_URL ${PIP_EXTRA_INDEX_URL}
ARG PIP_PACKAGE=.
RUN --mount=type=cache,target=${PIP_CACHE_DIR} \
    "${APPDIR}/${APPNAME}/bin/pip" install ${PIP_PACKAGE}

# build patchmatch
RUN python3 -c "from patchmatch import patch_match"

#####################
## runtime image ##
#####################
FROM python-base AS runtime

# setup environment
COPY --from=pyproject-builder --link ${APPDIR}/${APPNAME} ${APPDIR}/${APPNAME}
ENV INVOKEAI_ROOT=/data
ENV INVOKE_MODEL_RECONFIGURE="--yes --default_only"

# set Entrypoint and default CMD
ENTRYPOINT [ "invokeai" ]
CMD [ "--web", "--host=0.0.0.0" ]
VOLUME [ "/data" ]

LABEL org.opencontainers.image.authors="mauwii@outlook.de"
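This multi-stage Dockerfile is normally driven by `docker/build.sh` (below), but it can also be built directly. A minimal sketch, assuming the repository root as build context and an illustrative tag; BuildKit is required for the cache mounts and `COPY --link` used above:

```bash
# Build from the repository root; "invokeai:local" is a placeholder tag.
DOCKER_BUILDKIT=1 docker build \
    --file docker/Dockerfile \
    --tag invokeai:local \
    .
```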
docker/build.sh (new executable file, 44 lines)
@ -0,0 +1,44 @@
#!/usr/bin/env bash
set -e

# How to use: https://invoke-ai.github.io/InvokeAI/installation/INSTALL_DOCKER/#setup
# Some possible pip extra-index urls (cuda 11.7 is available without extra url):
#   CUDA 11.6: https://download.pytorch.org/whl/cu116
#   ROCm 5.2: https://download.pytorch.org/whl/rocm5.2
#   CPU: https://download.pytorch.org/whl/cpu
# as found on https://pytorch.org/get-started/locally/

SCRIPTDIR=$(dirname "$0")
cd "$SCRIPTDIR" || exit 1

source ./env.sh

DOCKERFILE=${INVOKE_DOCKERFILE:-Dockerfile}

# print the settings
echo -e "You are using these values:\n"
echo -e "Dockerfile:\t\t${DOCKERFILE}"
echo -e "index-url:\t\t${PIP_EXTRA_INDEX_URL:-none}"
echo -e "Volumename:\t\t${VOLUMENAME}"
echo -e "Platform:\t\t${PLATFORM}"
echo -e "Registry:\t\t${CONTAINER_REGISTRY}"
echo -e "Repository:\t\t${CONTAINER_REPOSITORY}"
echo -e "Container Tag:\t\t${CONTAINER_TAG}"
echo -e "Container Image:\t${CONTAINER_IMAGE}\n"

# Create docker volume
if [[ -n "$(docker volume ls -f name="${VOLUMENAME}" -q)" ]]; then
    echo -e "Volume already exists\n"
else
    echo -n "creating docker volume "
    docker volume create "${VOLUMENAME}"
fi

# Build Container
DOCKER_BUILDKIT=1 docker build \
    --platform="${PLATFORM}" \
    --tag="${CONTAINER_IMAGE}" \
    ${PIP_EXTRA_INDEX_URL:+--build-arg="PIP_EXTRA_INDEX_URL=${PIP_EXTRA_INDEX_URL}"} \
    ${PIP_PACKAGE:+--build-arg="PIP_PACKAGE=${PIP_PACKAGE}"} \
    --file="${DOCKERFILE}" \
    ..
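Because `env.sh` only fills in values that are not already set, the build can be steered entirely from the environment. Two sketches (the cu116 URL comes from the comment block above):

```bash
# Pick the ROCm flavor explicitly instead of letting env.sh auto-detect it
CONTAINER_FLAVOR=rocm ./docker/build.sh

# Or point pip at a specific torch wheel index directly
PIP_EXTRA_INDEX_URL="https://download.pytorch.org/whl/cu116" ./docker/build.sh
```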
docker/env.sh (new file, 38 lines)
@ -0,0 +1,38 @@
#!/usr/bin/env bash

if [[ -z "$PIP_EXTRA_INDEX_URL" ]]; then
    # Decide which container flavor to build if not specified
    if [[ -z "$CONTAINER_FLAVOR" ]] && python -c "import torch" &>/dev/null; then
        # Check for CUDA and ROCm
        CUDA_AVAILABLE=$(python -c "import torch;print(torch.cuda.is_available())")
        ROCM_AVAILABLE=$(python -c "import torch;print(torch.version.hip is not None)")
        if [[ "$(uname -s)" != "Darwin" && "${CUDA_AVAILABLE}" == "True" ]]; then
            CONTAINER_FLAVOR="cuda"
        elif [[ "$(uname -s)" != "Darwin" && "${ROCM_AVAILABLE}" == "True" ]]; then
            CONTAINER_FLAVOR="rocm"
        else
            CONTAINER_FLAVOR="cpu"
        fi
    fi
    # Set PIP_EXTRA_INDEX_URL based on container flavor
    if [[ "$CONTAINER_FLAVOR" == "rocm" ]]; then
        PIP_EXTRA_INDEX_URL="https://download.pytorch.org/whl/rocm"
    elif [[ "$CONTAINER_FLAVOR" == "cpu" ]]; then
        PIP_EXTRA_INDEX_URL="https://download.pytorch.org/whl/cpu"
    # elif [[ -z "$CONTAINER_FLAVOR" || "$CONTAINER_FLAVOR" == "cuda" ]]; then
    #     PIP_PACKAGE=${PIP_PACKAGE-".[xformers]"}
    fi
fi

# Variables shared by build.sh and run.sh
REPOSITORY_NAME="${REPOSITORY_NAME-$(basename "$(git rev-parse --show-toplevel)")}"
VOLUMENAME="${VOLUMENAME-"${REPOSITORY_NAME,,}_data"}"
ARCH="${ARCH-$(uname -m)}"
PLATFORM="${PLATFORM-Linux/${ARCH}}"
INVOKEAI_BRANCH="${INVOKEAI_BRANCH-$(git branch --show)}"
CONTAINER_REGISTRY="${CONTAINER_REGISTRY-"ghcr.io"}"
CONTAINER_REPOSITORY="${CONTAINER_REPOSITORY-"$(whoami)/${REPOSITORY_NAME}"}"
CONTAINER_FLAVOR="${CONTAINER_FLAVOR-cuda}"
CONTAINER_TAG="${CONTAINER_TAG-"${INVOKEAI_BRANCH##*/}-${CONTAINER_FLAVOR}"}"
CONTAINER_IMAGE="${CONTAINER_REGISTRY}/${CONTAINER_REPOSITORY}:${CONTAINER_TAG}"
CONTAINER_IMAGE="${CONTAINER_IMAGE,,}"
docker/run.sh (new executable file, 31 lines)
@ -0,0 +1,31 @@
#!/usr/bin/env bash
set -e

# How to use: https://invoke-ai.github.io/InvokeAI/installation/INSTALL_DOCKER/#run-the-container
# IMPORTANT: You need to have a token on huggingface.co to be able to download the checkpoints!!!

SCRIPTDIR=$(dirname "$0")
cd "$SCRIPTDIR" || exit 1

source ./env.sh

echo -e "You are using these values:\n"
echo -e "Volumename:\t${VOLUMENAME}"
echo -e "Invokeai_tag:\t${CONTAINER_IMAGE}"
echo -e "local Models:\t${MODELSPATH:-unset}\n"

docker run \
    --interactive \
    --tty \
    --rm \
    --platform="${PLATFORM}" \
    --name="${REPOSITORY_NAME,,}" \
    --hostname="${REPOSITORY_NAME,,}" \
    --mount=source="${VOLUMENAME}",target=/data \
    ${MODELSPATH:+-u "$(id -u):$(id -g)"} \
    ${MODELSPATH:+--mount="type=bind,source=${MODELSPATH},target=/data/models"} \
    ${HUGGING_FACE_HUB_TOKEN:+--env="HUGGING_FACE_HUB_TOKEN=${HUGGING_FACE_HUB_TOKEN}"} \
    --publish=9090:9090 \
    --cap-add=sys_nice \
    ${GPU_FLAGS:+--gpus="${GPU_FLAGS}"} \
    "${CONTAINER_IMAGE}" ${1:+$@}
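A typical invocation, assuming a CUDA host: the token value is a placeholder, `GPU_FLAGS=all` becomes `--gpus=all` through the expansion above, and `HUGGING_FACE_HUB_TOKEN` is forwarded into the container for the checkpoint download:

```bash
HUGGING_FACE_HUB_TOKEN="hf_..." GPU_FLAGS=all ./docker/run.sh
```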
@ -4,180 +4,377 @@ title: Changelog

# :octicons-log-16: **Changelog**

## v2.3.0 <small>(15 January 2023)</small>

**Transition to diffusers**

Version 2.3 provides support for both the traditional `.ckpt` weight
checkpoint files as well as the HuggingFace `diffusers` format. This
introduces several changes you should know about.

1. The models.yaml format has been updated. There are now two
   different types of configuration stanza. The traditional ckpt
   one will look like this, with a `format` of `ckpt` and a
   `weights` field that points to the absolute or ROOTDIR-relative
   location of the ckpt file.

    ```yaml
    inpainting-1.5:
      description: RunwayML SD 1.5 model optimized for inpainting (4.27 GB)
      repo_id: runwayml/stable-diffusion-inpainting
      format: ckpt
      width: 512
      height: 512
      weights: models/ldm/stable-diffusion-v1/sd-v1-5-inpainting.ckpt
      config: configs/stable-diffusion/v1-inpainting-inference.yaml
      vae: models/ldm/stable-diffusion-v1/vae-ft-mse-840000-ema-pruned.ckpt
    ```

    A configuration stanza for a diffusers model hosted at HuggingFace will look like this,
    with a `format` of `diffusers` and a `repo_id` that points to the
    repository ID of the model on HuggingFace:

    ```yaml
    stable-diffusion-2.1:
      description: Stable Diffusion version 2.1 diffusers model (5.21 GB)
      repo_id: stabilityai/stable-diffusion-2-1
      format: diffusers
    ```

    A configuration stanza for a diffusers model stored locally should
    look like this, with a `format` of `diffusers`, but a `path` field
    that points at the directory that contains `model_index.json`:

    ```yaml
    waifu-diffusion:
      description: Latest waifu diffusion 1.4
      format: diffusers
      path: models/diffusers/hakurei-waifu-diffusion-1.4
    ```

2. In order of precedence, InvokeAI will now use HF_HOME, then
   XDG_CACHE_HOME, then finally default to `ROOTDIR/models` to
   store HuggingFace diffusers models (see the sketch after this list).

    Consequently, the format of the models directory has changed to
    mimic the HuggingFace cache directory. When HF_HOME and XDG_CACHE_HOME
    are not set, diffusers models are now automatically downloaded
    and retrieved from the directory `ROOTDIR/models/diffusers`,
    while other models are stored in the directory
    `ROOTDIR/models/hub`. This organization is the same as that used
    by HuggingFace for its cache management.

    This allows you to share diffusers and ckpt model files easily with
    other machine learning applications that use the HuggingFace
    libraries. To do this, set the environment variable HF_HOME
    before starting up InvokeAI to tell it what directory to
    cache models in. To tell InvokeAI to use the standard HuggingFace
    cache directory, you would set HF_HOME like this (Linux/Mac):

    `export HF_HOME=~/.cache/huggingface`

    Both HuggingFace and InvokeAI will fall back to the XDG_CACHE_HOME
    environment variable if HF_HOME is not set; this path
    takes precedence over `ROOTDIR/models` to allow for the same sharing
    with other machine learning applications that use HuggingFace
    libraries.

3. If you upgrade to InvokeAI 2.3.* from an earlier version, there
   will be a one-time migration from the old models directory format
   to the new one. You will see a message about this the first time
   you start `invoke.py`.

4. Both the front and back ends of the model manager have been
   rewritten to accommodate diffusers. You can import models using
   their local file path, using their URLs, or their HuggingFace
   repo_ids. On the command line, all these syntaxes work:

    ```
    !import_model stabilityai/stable-diffusion-2-1-base
    !import_model /opt/sd-models/sd-1.4.ckpt
    !import_model https://huggingface.co/Fictiverse/Stable_Diffusion_PaperCut_Model/blob/main/PaperCut_v1.ckpt
    ```
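Expressed as a shell sketch, the precedence in item 2 looks roughly like this; the names are illustrative, HuggingFace adds its own subdirectories beneath whichever root wins, and the real resolution happens inside InvokeAI:

```bash
# First HF_HOME, then XDG_CACHE_HOME, then ROOTDIR/models as the final default.
ROOTDIR=~/invokeai   # placeholder for your InvokeAI runtime directory
models_root="${HF_HOME:-${XDG_CACHE_HOME:-${ROOTDIR}/models}}"
echo "diffusers models will be cached under: ${models_root}"
```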
**KNOWN BUGS (15 January 2023)**

1. On CUDA systems, the 768 pixel stable-diffusion-2.0 and
   stable-diffusion-2.1 models can only be run as `diffusers` models
   when the `xformers` library is installed and configured. Without
   `xformers`, InvokeAI returns black images.

2. Inpainting and outpainting have regressed in quality.

Both these issues are being actively worked on.

## v2.2.4 <small>(11 December 2022)</small>

**the `invokeai` directory**

Previously there were two directories to worry about, the directory that
contained the InvokeAI source code and the launcher scripts, and the `invokeai`
directory that contained the models files, embeddings, configuration and
outputs. With the 2.2.4 release, this dual system is done away with, and
everything, including the `invoke.bat` and `invoke.sh` launcher scripts, now
lives in a directory named `invokeai`. By default this directory is located in
your home directory (e.g. `\Users\yourname` on Windows), but you can select
where it goes at install time.

After installation, you can delete the install directory (the one that the zip
file creates when it unpacks). Do **not** delete or move the `invokeai`
directory!

**Initialization file `invokeai/invokeai.init`**

You can place frequently-used startup options in this file, such as the default
number of steps or your preferred sampler. To keep everything in one place, this
file has now been moved into the `invokeai` directory and is named
`invokeai.init`.

**To update from Version 2.2.3**

The easiest route is to download and unpack one of the 2.2.4 installer files.
When it asks you for the location of the `invokeai` runtime directory, respond
with the path to the directory that contains your 2.2.3 `invokeai`. That is, if
`invokeai` lives at `C:\Users\fred\invokeai`, then answer with `C:\Users\fred`
and answer "Y" when asked if you want to reuse the directory.

The `update.sh` (`update.bat`) script that came with the 2.2.3 source installer
does not know about the new directory layout and won't be fully functional.

**To update to 2.2.5 (and beyond) there's now an update path**

As they become available, you can update to more recent versions of InvokeAI
using an `update.sh` (`update.bat`) script located in the `invokeai` directory.
Running it without any arguments will install the most recent version of
InvokeAI. Alternatively, you can get specific releases by running the `update.sh`
script with an argument in the command shell. This syntax accepts the path to
the desired release's zip file, which you can find by clicking on the green
"Code" button on this repository's home page.
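For example (the zip file name below is a placeholder; use whichever release archive you downloaded):

```bash
# update to the most recent version
./update.sh

# update to a specific release from its zip file
./update.sh ~/Downloads/InvokeAI-2.2.5.zip
```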
**Other 2.2.4 Improvements**

- Fix InvokeAI GUI initialization by @addianto in #1687
- fix link in documentation by @lstein in #1728
- Fix broken link by @ShawnZhong in #1736
- Remove reference to binary installer by @lstein in #1731
- documentation fixes for 2.2.3 by @lstein in #1740
- Modify installer links to point closer to the source installer by @ebr in #1745
- add documentation warning about 1650/60 cards by @lstein in #1753
- Fix Linux source URL in installation docs by @andybearman in #1756
- Make install instructions discoverable in readme by @damian0815 in #1752
- typo fix by @ofirkris in #1755
- Non-interactive model download (support HUGGINGFACE_TOKEN) by @ebr in #1578
- fix(srcinstall): shell installer - cp scripts instead of linking by @tildebyte in #1765
- stability and usage improvements to binary & source installers by @lstein in #1760
- fix off-by-one bug in cross-attention-control by @damian0815 in #1774
- Eventually update APP_VERSION to 2.2.3 by @spezialspezial in #1768
- invoke script cds to its location before running by @lstein in #1805
- Make PaperCut and VoxelArt models load again by @lstein in #1730
- Fix --embedding_directory / --embedding_path not working by @blessedcoolant in #1817
- Clean up readme by @hipsterusername in #1820
- Optimized Docker build with support for external working directory by @ebr in #1544
- disable pushing the cloud container by @mauwii in #1831
- Fix docker push github action and expand with additional metadata by @ebr in #1837
- Fix Broken Link To Notebook by @VedantMadane in #1821
- Account for flat models by @spezialspezial in #1766
- Update invoke.bat.in isolate environment variables by @lynnewu in #1833
- Arch Linux Specific PatchMatch Instructions & fixing conda install on linux by @SammCheese in #1848
- Make force free GPU memory work in img2img by @addianto in #1844
- New installer by @lstein

## v2.2.3 <small>(2 December 2022)</small>

!!! Note

    This point release removes references to the binary installer from the
    installation guide. The binary installer is not stable at the current
    time. First time users are encouraged to use the "source" installer as
    described in [Installing InvokeAI with the Source Installer](installation/deprecated_documentation/INSTALL_SOURCE.md)

With InvokeAI 2.2, this project now provides enthusiasts and professionals a
robust workflow solution for creating AI-generated and human facilitated
compositions. Additional enhancements have been made as well, improving safety,
ease of use, and installation.

Optimized for efficiency, InvokeAI needs only ~3.5GB of VRAM to generate a
512x768 image (and less for smaller images), and is compatible with
Windows/Linux/Mac (M1 & M2).

You can see the [release video](https://youtu.be/hIYBfDtKaus) here, which
introduces the main WebUI enhancement for version 2.2 -
[The Unified Canvas](features/UNIFIED_CANVAS.md). This new workflow is the
biggest enhancement added to the WebUI to date, and unlocks a stunning amount of
potential for users to create and iterate on their creations. The following
sections describe what's new for InvokeAI.

## v2.2.2 <small>(30 November 2022)</small>

!!! note

    The binary installer is not ready for prime time. First time users are recommended to install via the "source" installer accessible through the links at the bottom of this page.

With InvokeAI 2.2, this project now provides enthusiasts and professionals a
robust workflow solution for creating AI-generated and human facilitated
compositions. Additional enhancements have been made as well, improving safety,
ease of use, and installation.

Optimized for efficiency, InvokeAI needs only ~3.5GB of VRAM to generate a
512x768 image (and less for smaller images), and is compatible with
Windows/Linux/Mac (M1 & M2).

You can see the [release video](https://youtu.be/hIYBfDtKaus) here, which
introduces the main WebUI enhancement for version 2.2 -
[The Unified Canvas](https://invoke-ai.github.io/InvokeAI/features/UNIFIED_CANVAS/).
This new workflow is the biggest enhancement added to the WebUI to date, and
unlocks a stunning amount of potential for users to create and iterate on their
creations. The following sections describe what's new for InvokeAI.

## v2.2.0 <small>(2 December 2022)</small>

With InvokeAI 2.2, this project now provides enthusiasts and professionals a
robust workflow solution for creating AI-generated and human facilitated
compositions. Additional enhancements have been made as well, improving safety,
ease of use, and installation.

Optimized for efficiency, InvokeAI needs only ~3.5GB of VRAM to generate a
512x768 image (and less for smaller images), and is compatible with
Windows/Linux/Mac (M1 & M2).

You can see the [release video](https://youtu.be/hIYBfDtKaus) here, which
introduces the main WebUI enhancement for version 2.2 -
[The Unified Canvas](features/UNIFIED_CANVAS.md). This new workflow is the
biggest enhancement added to the WebUI to date, and unlocks a stunning amount of
potential for users to create and iterate on their creations. The following
sections describe what's new for InvokeAI.

## v2.1.3 <small>(13 November 2022)</small>

- A choice of installer scripts that automate installation and configuration.
  See [Installation](installation/index.md).
- A streamlined manual installation process that works for both Conda and
  PIP-only installs. See [Manual Installation](installation/INSTALL_MANUAL.md).
- The ability to save frequently-used startup options (model to load, steps,
  sampler, etc) in a `.invokeai` file. See [Client](features/CLI.md)
- Support for AMD GPU cards (non-CUDA) on Linux machines.
- Multiple bugs and edge cases squashed.

## v2.1.0 <small>(2 November 2022)</small>
- update mac instructions to use invokeai for env name by @willwillems in #1030
- Update .gitignore by @blessedcoolant in #1040
- reintroduce fix for m1 from #579 missing after merge by @skurovec in #1056
- Update Stable_Diffusion_AI_Notebook.ipynb (Take 2) by @ChloeL19 in #1060
- Print out the device type which is used by @manzke in #1073
- Hires Addition by @hipsterusername in #1063
- fix for "1 leaked semaphore objects to clean up at shutdown" on M1 by @skurovec in #1081
- Forward dream.py to invoke.py using the same interpreter, add deprecation warning by @db3000 in #1077
- fix noisy images at high step counts by @lstein in #1086
- Generalize facetool strength argument by @db3000 in #1078
- Enable fast switching among models at the invoke> command line by @lstein in #1066
- Fix Typo, committed changing ldm environment to invokeai by @jdries3 in #1095
- Update generate.py by @unreleased in #1109
- Update 'ldm' env to 'invokeai' in troubleshooting steps by @19wolf in #1125
- Fixed documentation typos and resolved merge conflicts by @rupeshs in #1123
- Fix broken doc links, fix malaprop in the project subtitle by @majick in #1131
- Only output facetool parameters if enhancing faces by @db3000 in #1119
- Update gitignore to ignore codeformer weights at new location by @spezialspezial in #1136
- fix links to point to invoke-ai.github.io #1117 by @mauwii in #1143
- Rework-mkdocs by @mauwii in #1144
- add option to CLI and pngwriter that allows user to set PNG compression level by @lstein in #1127
- Fix img2img DDIM index out of bound by @wfng92 in #1137
- Fix gh actions by @mauwii in #1128
- Add text prompt to inpaint mask support by @lstein in #1133
- Respect http[s] protocol when making socket.io middleware by @damian0815 in #976
- WebUI: Adds Codeformer support by @psychedelicious in #1151
- Skips normalizing prompts for web UI metadata by @psychedelicious in #1165
- Add Asymmetric Tiling by @carson-katri in #1132
- Web UI: Increases max CFG Scale to 200 by @psychedelicious in #1172
- Corrects color channels in face restoration; Fixes #1167 by @psychedelicious in #1175
- Flips channels using array slicing instead of using OpenCV by @psychedelicious in #1178
- Fix typo in docs: s/Formally/Formerly by @noodlebox in #1176
- fix clipseg loading problems by @lstein in #1177
- Correct color channels in upscale using array slicing by @wfng92 in #1181
- Web UI: Filters existing images when adding new images; Fixes #1085 by @psychedelicious in #1171
- fix a number of bugs in textual inversion by @lstein in #1190
- Improve !fetch, add !replay command by @ArDiouscuros in #882
- Fix generation of image with s>1000 by @holstvoogd in #951
- Web UI: Gallery improvements by @psychedelicious in #1198
- Update CLI.md by @krummrey in #1211
- outcropping improvements by @lstein in #1207
- add support for loading VAE autoencoders by @lstein in #1216
- remove duplicate fix_func for MPS by @wfng92 in #1210
- Metadata storage and retrieval fixes by @lstein in #1204
- nix: add shell.nix file by @Cloudef in #1170
- Web UI: Changes vite dist asset paths to relative by @psychedelicious in #1185
- Web UI: Removes isDisabled from PromptInput by @psychedelicious in #1187
- Allow user to generate images with initial noise as on M1 / mps system by @ArDiouscuros in #981
- feat: adding filename format template by @plucked in #968
- Web UI: Fixes broken bundle by @psychedelicious in #1242
- Support runwayML custom inpainting model by @lstein in #1243
- Update IMG2IMG.md by @talitore in #1262
- New dockerfile - including a build- and a run- script as well as a GH-Action by @mauwii in #1233
- cut over from karras to model noise schedule for higher steps by @lstein in #1222
- Prompt tweaks by @lstein in #1268
- Outpainting implementation by @Kyle0654 in #1251
- fixing aspect ratio on hires by @tjennings in #1249
- Fix-build-container-action by @mauwii in #1274
- handle all unicode characters by @damian0815 in #1276
- adds models.user.yml to .gitignore by @JakeHL in #1281
- remove debug branch, set fail-fast to false by @mauwii in #1284
- Protect-secrets-on-pr by @mauwii in #1285
- Web UI: Adds initial inpainting implementation by @psychedelicious in #1225
- fix environment-mac.yml - tested on x64 and arm64 by @mauwii in #1289
- Use proper authentication to download model by @mauwii in #1287
- Prevent indexing error for mode RGB by @spezialspezial in #1294
- Integrate sd-v1-5 model into test matrix (easily expandable), remove unecesarry caches by @mauwii in #1293
- add --no-interactive to configure_invokeai step by @mauwii in #1302
- 1-click installer and updater. Uses micromamba to install git and conda into a contained environment (if necessary) before running the normal installation script by @cmdr2 in #1253
- configure_invokeai.py script downloads the weight files by @lstein in #1290

## v2.0.1 <small>(13 October 2022)</small>
Binary files added (new images under docs/assets/):

- docs/assets/canvas/biker_granny.png (359 KiB)
- docs/assets/canvas/biker_jacket_granny.png (528 KiB)
- docs/assets/canvas/mask_granny.png (601 KiB)
- docs/assets/canvas/staging_area.png (59 KiB)
- docs/assets/canvas_preview.png (142 KiB)
- docs/assets/concepts/image1.png (122 KiB)
- docs/assets/concepts/image2.png (128 KiB)
- docs/assets/concepts/image3.png (99 KiB)
- docs/assets/concepts/image4.png (112 KiB)
- docs/assets/concepts/image5.png (107 KiB)
- docs/assets/invoke_ai_banner.png (169 KiB)
- docs/assets/textual-inversion/ti-frontend.png (124 KiB)
@ -1,5 +1,5 @@
---
title: Command-Line Interface
---

# :material-bash: CLI
@ -130,20 +130,34 @@ file should contain the startup options as you would type them on the
command line (`--steps=10 --grid`), one argument per line, or a
mixture of both using any of the accepted command switch formats:

!!! example "my unmodified initialization file"

    ```bash title="~/.invokeai" linenums="1"
    # InvokeAI initialization file
    # This is the InvokeAI initialization file, which contains command-line default values.
    # Feel free to edit. If anything goes wrong, you can re-initialize this file by deleting
    # or renaming it and then running invokeai-configure again.

    # The --root option below points to the folder in which InvokeAI stores its models, configs and outputs.
    --root="/Users/mauwii/invokeai"

    # the --outdir option controls the default location of image files.
    --outdir="/Users/mauwii/invokeai/outputs"

    # You may place other frequently-used startup commands here, one or more per line.
    # Examples:
    # --web --host=0.0.0.0
    # --steps=20
    # -Ak_euler_a -C10.0
    ```

!!! note

    The initialization file only accepts the command line arguments.
    There are additional arguments that you can provide on the `invoke>` command
    line (such as `-n` or `--iterations`) that cannot be entered into this file.
    Also be alert for empty blank lines at the end of the file, which will cause
    an arguments error at startup time.

## List of prompt arguments
@ -195,15 +209,17 @@ Here are the invoke> commands that apply to txt2img:
| `--with_variations <pattern>` | | `None` | Combine two or more variations. See [Variations](./VARIATIONS.md) for how to use this. |
| `--save_intermediates <n>` | | `None` | Save the image from every nth step into an "intermediates" folder inside the output directory |

!!! note

    the width and height of the image must be multiples of 64. You can
    provide different values, but they will be rounded down to the nearest multiple
    of 64.

!!! example "This is an example of img2img"

    ```bash
    invoke> waterfall and rainbow -I./vacation-photo.png -W640 -H480 --fit
    ```

This will modify the indicated vacation photograph by making it more like the
prompt. Results will vary greatly depending on what is in the image. We also ask
@ -253,7 +269,7 @@ description of the part of the image to replace. For example, if you have an
image of a breakfast plate with a bagel, toast and scrambled eggs, you can
selectively mask the bagel and replace it with a piece of cake this way:

```bash
invoke> a piece of cake -I /path/to/breakfast.png -tm bagel
```
@ -265,20 +281,26 @@ are getting too much or too little masking you can adjust the threshold down (to
get more mask), or up (to get less). In this example, by passing `-tm` a higher
value, we are insisting on a more stringent classification.

```bash
invoke> a piece of cake -I /path/to/breakfast.png -tm bagel 0.6
```

### Custom Styles and Subjects

You can load and use hundreds of community-contributed Textual
Inversion models just by typing the appropriate trigger phrase. Please
see [Concepts Library](CONCEPTS.md) for more details.

## Other Commands

The CLI offers a number of commands that begin with "!".

### Postprocessing images

To postprocess a file using face restoration or upscaling, use the `!fix`
command.

#### `!fix`

This command runs a post-processor on a previously-generated image. It takes a
PNG filename or path and applies your choice of the `-U`, `-G`, or `--embiggen`
@ -305,19 +327,19 @@ Some examples:
[1] outputs/img-samples/000017.4829112.gfpgan-00.png: !fix "outputs/img-samples/0000045.4829112.png" -s 50 -S -W 512 -H 512 -C 7.5 -A k_lms -G 0.8
```

#### `!mask`

This command takes an image, a text prompt, and uses the `clipseg` algorithm to
automatically generate a mask of the area that matches the text prompt. It is
useful for debugging the text masking process prior to inpainting with the
`--text_mask` argument. See [INPAINTING.md] for details.
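A plausible invocation, reusing the `-tm` syntax demonstrated earlier (the image path and threshold are illustrative):

```bash
invoke> !mask /path/to/breakfast.png -tm bagel 0.6
```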
## Model selection and importation
|
### Model selection and importation
|
||||||
|
|
||||||
The CLI allows you to add new models on the fly, as well as to switch among them
|
The CLI allows you to add new models on the fly, as well as to switch among them
|
||||||
rapidly without leaving the script.
|
rapidly without leaving the script.
|
||||||
|
|
||||||
### !models
|
#### `!models`
|
||||||
|
|
||||||
This prints out a list of the models defined in `config/models.yaml'. The active
|
This prints out a list of the models defined in `config/models.yaml'. The active
|
||||||
model is bold-faced
|
model is bold-faced
|
||||||

@@ -330,7 +352,7 @@ laion400m not loaded <no description>
waifu-diffusion not loaded Waifu Diffusion v1.3
</pre>

#### `!switch <model>`

This quickly switches from one model to another without leaving the CLI script.
`invoke.py` uses a memory caching system; once a model has been loaded,

@@ -355,7 +377,7 @@ invoke> !switch waifu-diffusion
| Making attention of type 'vanilla' with 512 in_channels
| Using faster float16 precision
>> Model loaded in 18.24s
>> Max VRAM used to load the model: 2.17G
>> Current VRAM usage: 2.17G
>> Setting Sampler to k_lms

@@ -375,7 +397,7 @@ laion400m not loaded <no description>
waifu-diffusion cached Waifu Diffusion v1.3
</pre>

#### `!import_model <path/to/model/weights>`

This command imports a new model weights file into InvokeAI, makes it available
for image generation within the script, and writes out the configuration for the

@@ -422,10 +444,10 @@ OK to import [n]? <b>y</b>
| Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
| Making attention of type 'vanilla' with 512 in_channels
| Using faster float16 precision
invoke>
</pre>

#### `!edit_model <name_of_model>`

The `!edit_model` command can be used to modify a model that is already defined
in `config/models.yaml`. Call it with the short name of the model you wish to

@@ -462,12 +484,12 @@ text... Outputs: [2] outputs/img-samples/000018.2273800735.embiggen-00.png: !fix
"outputs/img-samples/000017.243781548.gfpgan-00.png" -s 50 -S 2273800735 -W 512
-H 512 -C 7.5 -A k_lms --embiggen 3.0 0.75 0.25 ```

### History processing

The CLI provides a series of convenient commands for reviewing previous actions,
retrieving them, modifying them, and re-running them.

#### `!history`

The invoke script keeps track of all the commands you issue during a session,
allowing you to re-run them. On Mac and Linux systems, it also writes the

@@ -479,20 +501,22 @@ during the session (Windows), or the most recent 1000 commands (Mac|Linux). You
can then repeat a command by using the command `!NNN`, where "NNN" is the
history line number. For example:

!!! example ""

    ```bash
    invoke> !history
    ...
    [14] happy woman sitting under tree wearing broad hat and flowing garment
    [15] beautiful woman sitting under tree wearing broad hat and flowing garment
    [18] beautiful woman sitting under tree wearing broad hat and flowing garment -v0.2 -n6
    [20] watercolor of beautiful woman sitting under tree wearing broad hat and flowing garment -v0.2 -n6 -S2878767194
    [21] surrealist painting of beautiful woman sitting under tree wearing broad hat and flowing garment -v0.2 -n6 -S2878767194
    ...
    invoke> !20
    invoke> watercolor of beautiful woman sitting under tree wearing broad hat and flowing garment -v0.2 -n6 -S2878767194
    ```

#### `!fetch`

This command retrieves the generation parameters from a previously generated
image and either loads them into the command line (Linux|Mac), or prints them

@@ -502,33 +526,36 @@ a folder with image png files, and wildcard \*.png to retrieve the dream command
used to generate the images, and save them to a file commands.txt for further
processing.

!!! example "load the generation command for a single png file"

    ```bash
    invoke> !fetch 0000015.8929913.png
    # the script returns the next line, ready for editing and running:
    invoke> a fantastic alien landscape -W 576 -H 512 -s 60 -A plms -C 7.5
    ```

!!! example "fetch the generation commands from a batch of files and store them into `selected.txt`"

    ```bash
    invoke> !fetch outputs\selected-imgs\*.png selected.txt
    ```

#### `!replay`

This command replays a text file generated by !fetch or created manually.

!!! example

    ```bash
    invoke> !replay outputs\selected-imgs\selected.txt
    ```

!!! note

    These commands may behave unexpectedly if given a PNG file that was
    not generated by InvokeAI.

#### `!search <search string>`

This is similar to !history but it only returns lines that contain
`search string`. For example:

@@ -538,7 +565,7 @@ invoke> !search surreal
[21] surrealist painting of beautiful woman sitting under tree wearing broad hat and flowing garment -v0.2 -n6 -S2878767194
```

#### `!clear`

This clears the search history from memory and disk. Be advised that this
operation is irreversible and does not issue any warnings!

docs/features/CONCEPTS.md (new file, 131 lines)
@@ -0,0 +1,131 @@
---
title: Concepts Library
---

# :material-library-shelves: The Hugging Face Concepts Library and Importing Textual Inversion files

## Using Textual Inversion Files

Textual inversion (TI) files are small models that customize the output of
Stable Diffusion image generation. They can augment SD with specialized subjects
and artistic styles. They are also known as "embeds" in the machine learning
world.

Each TI file introduces one or more vocabulary terms to the SD model. These are
known in InvokeAI as "triggers." Triggers are often, but not always, denoted
using angle brackets as in "<trigger-phrase>". The two most common types of
TI files that you'll encounter are `.pt` and `.bin` files, which are produced by
different TI training packages. InvokeAI supports both formats, but its
[built-in TI training system](TEXTUAL_INVERSION.md) produces `.pt`.

The [Hugging Face company](https://huggingface.co/sd-concepts-library) has
amassed a large library of >800 community-contributed TI files covering a
broad range of subjects and styles. InvokeAI has built-in support for this
library, which downloads and merges TI files automatically upon request. You can
also install your own or others' TI files by placing them in a designated
directory.

### An Example

Here are a few examples to illustrate how it works. All these images were
generated using the command-line client and the Stable Diffusion 1.5 model:

| Japanese gardener | Japanese gardener <ghibli-face> | Japanese gardener <hoi4-leaders> | Japanese gardener <cartoona-animals> |
| :--------------------------------: | :-----------------------------------: | :------------------------------------: | :----------------------------------------: |
|  |  |  |  |

You can also combine styles and concepts:

<figure markdown>

| A portrait of <alf> in <cartoona-animal> style |
| :--------------------------------------------------------: |
|  |

</figure>

## Using a Hugging Face Concept

!!! warning "Authenticating to HuggingFace"

    Some concepts require valid authentication to HuggingFace. Without it, they will not be downloaded
    and will be silently ignored.

    If you used an installer to install InvokeAI, you may have already set a HuggingFace token.
    If you skipped this step, you can:

    - run the InvokeAI configuration script again (if you used a manual installer): `invokeai-configure`
    - set one of the `HUGGINGFACE_TOKEN` or `HUGGING_FACE_HUB_TOKEN` environment variables to contain your token

    Finally, if you already used any HuggingFace library on your computer, you might already have a token
    in your local cache. Check for a hidden `.huggingface` directory in your home folder. If it
    contains a `token` file, then you are all set.

Hugging Face TI concepts are downloaded and installed automatically as you
require them. This requires your machine to be connected to the Internet. To
find out what each concept is for, you can browse the
[Hugging Face concepts library](https://huggingface.co/sd-concepts-library) and
look at examples of what each concept produces.

When you have an idea of a concept you wish to try, go to the command-line
client (CLI) and type a `<` character and the beginning of the Hugging Face
concept name you wish to load. Press ++tab++, and the CLI will show you all
matching concepts. You can also type `<` and hit ++tab++ to get a listing of all
~800 concepts, but be prepared to scroll up to see them all! If there is more
than one match, you can continue to type and ++tab++ until the concept is
completed.

!!! example

    If you type in `<x` and hit ++tab++, you'll be prompted with the completions:

    ```py
    <xatu2> <xatu> <xbh> <xi> <xidiversity> <xioboma> <xuna> <xyz>
    ```

    Now type `id` and press ++tab++. It will be autocompleted to `<xidiversity>`
    because this is a unique match.

Finish your prompt and generate as usual. You may include multiple concept terms
in the prompt.

If you have never used this concept before, you will see a message that the TI
model is being downloaded and installed. After this, the concept will be saved
locally (in the `models/sd-concepts-library` directory) for future use.

Several steps happen during downloading and installation, including a scan of
the file for malicious code. Should any errors occur, you will be warned and the
concept will fail to load. Generation will then continue, treating the trigger
term as a normal string of characters (e.g. as literal `<ghibli-face>`).

You can also use `<concept-names>` in the WebGUI's prompt textbox. There is no
autocompletion at this time.

## Installing your Own TI Files

You may install any number of `.pt` and `.bin` files simply by copying them into
the `embeddings` directory of the InvokeAI runtime directory (usually `invokeai`
in your home directory). You may create subdirectories in order to organize the
files in any way you wish. Be careful not to overwrite one file with another.
For example, TI files generated by the Hugging Face toolkit share the name
`learned_embedding.bin`. You can use subdirectories to keep them distinct.
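
A layout like the following (directory names illustrative) keeps two
`learned_embedding.bin` files from colliding while still letting both load at
startup:

```bash
embeddings/
├── ghibli-face/learned_embedding.bin
├── hoi4-leaders/learned_embedding.bin
└── my-style.pt
```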

At startup time, InvokeAI will scan the `embeddings` directory and load any TI
files it finds there. You will see a message similar to this one:

```bash
>> Current embedding manager terms: *, <HOI4-Leader>, <princess-knight>
```

Note the `*` trigger term. This is a placeholder term that many early TI
tutorials taught people to use rather than a more descriptive term.
Unfortunately, if you have multiple TI files that all use this term, only the
first one loaded will be triggered by use of the term.

To avoid this problem, you can use the `merge_embeddings.py` script to merge two
or more TI files together. If it encounters a collision of terms, the script
will prompt you to select new terms that do not collide. See
[Textual Inversion](TEXTUAL_INVERSION.md) for details.

## Further Reading

Please see [the repository](https://github.com/rinongal/textual_inversion) and
associated paper for details and limitations.

@@ -85,7 +85,7 @@ increasing size, every tile after the first in a row or column
effectively only covers an extra `1 - overlap_ratio` on each axis. If
the input/`--init_img` is the same size as a tile, the ideal (for time)
scaling factors with the default overlap (0.25) are 1.75, 2.5, 3.25,
4.0, etc.

`-embiggen_tiles <spaced list of tiles>`

@@ -100,6 +100,15 @@ Tiles are numbered starting with one, and left-to-right,
top-to-bottom. So, if you are generating a 3x3 tiled image, the
middle row would be `4 5 6`.

`-embiggen_strength <strength>`

This is another advanced option, for experimenting with the strength parameter
that Embiggen uses when it calls Img2Img. Values range from 0.0 to 1.0,
and lower values preserve more of the character of the initial image.
Values that are too high will result in a completely different end image,
while values that are too low will result in an image not dissimilar to one
you would get with ESRGAN upscaling alone. The default value is 0.4.
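
An illustrative combination with `!fix` (the filename and values are placeholders,
and the exact flag spelling should be checked against `--help` on your install):

```bash
invoke> !fix my-image.png --embiggen 2 -embiggen_strength 0.2
```

A low strength such as `0.2` stays close to a plain ESRGAN-style upscale, while
values near the `0.4` default let Img2Img reinterpret more of each tile.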

### Examples

!!! example ""

@@ -12,21 +12,19 @@ stable diffusion to build the prompt on top of the image you provide, preserving
the original's basic shape and layout. To use it, provide the `--init_img`
option as shown here:

!!! example ""

    ```commandline
    tree on a hill with a river, nature photograph, national geographic -I./test-pictures/tree-and-river-sketch.png -f 0.85
    ```

<figure markdown>

| original image | generated image |
| :------------: | :-------------: |
| { width=320 } | { width=320 } |

</figure>

The `--init_img` (`-I`) option gives the path to the seed picture. `--strength`
(`-f`) controls how much the original will be modified, ranging from `0.0` (keep

@@ -88,13 +86,15 @@ from a prompt. If the step count is 10, then the "latent space" (Stable
Diffusion's internal representation of the image) for the prompt "fire" with
seed `1592514025` develops something like this:

!!! example ""

    ```bash
    invoke> "fire" -s10 -W384 -H384 -S1592514025
    ```

    <figure markdown>
    { width=720 }
    </figure>

Put simply: starting from a frame of fuzz/static, SD finds details in each frame
that it thinks look like "fire" and brings them a little bit more into focus,

@@ -109,25 +109,23 @@ into the sequence at the appropriate point, with just the right amount of noise.

### A concrete example

!!! example "I want SD to draw a fire based on this hand-drawn image"

    { align=left }

Let's only do 10 steps, to make it easier to see what's happening. If strength
is `0.7`, this is what the internal steps the algorithm has to take will look
like:

<figure markdown>

</figure>

With strength `0.4`, the steps look more like this:

<figure markdown>

</figure>

Notice how much more fuzzy the starting image is for strength `0.7` compared to
`0.4`, and notice also how much longer the sequence is with `0.7`:

@@ -158,7 +158,7 @@ when filling in missing regions. It has an almost uncanny ability to blend the
new regions with existing ones in a semantically coherent way.

To install the inpainting model, follow the
[instructions](../installation/050_INSTALLING_MODELS.md) for installing a new model.
You may use either the CLI (`invoke.py` script) or directly edit the
`configs/models.yaml` configuration file to do this. The main thing to watch out
for is that the model `config` option must be set up to use

docs/features/MODEL_MERGING.md (new file, 76 lines)
@@ -0,0 +1,76 @@
---
title: Model Merging
---

# :material-image-off: Model Merging

## How to Merge Models

As of version 2.3, InvokeAI comes with a script that allows you to
merge two or three diffusers-type models into a new merged model. The
resulting model will combine characteristics of the originals, and can
be used to teach an old model new tricks.

You may run the merge script by starting the invoke launcher
(`invoke.sh` or `invoke.bat`) and choosing the option for _merge
models_. This will launch a text-based interactive user interface that
prompts you to select the models to merge, how to merge them, and the
merged model name.

Alternatively you may activate InvokeAI's virtual environment from the
command line, and call the script via `merge_models --gui` to open up
a version that has a nice graphical front end. To get the
command-line-only version, omit `--gui`.
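
For example, once the virtual environment is active (the activation command
varies by install method), either form can be launched directly:

```bash
merge_models --gui   # text-based front end with graphical widgets
merge_models         # plain command-line version
```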

The user interface for the text-based interactive script is
straightforward. It shows you a series of setting fields. Use control-N (^N)
to move to the next field, and control-P (^P) to move to the previous
one. You can also use TAB and shift-TAB to move forward and
backward. Once you are in a multiple-choice field, use the up and down
cursor arrows to move to your desired selection, and press <SPACE> or
<ENTER> to select it. Change text fields by typing in them, and adjust
scrollbars using the left and right arrow keys.

Once you are happy with your settings, press the OK button. Note that
there may be two pages of settings, depending on the height of your
screen, and the OK button may be on the second page. Advance past the
last field of the first page to get to the second page, and reverse
this to get back.

If the merge runs successfully, it will create a new diffusers model
under the selected name and register it with InvokeAI.

## The Settings

* Model Selection -- there are three multiple-choice fields that
  display all the diffusers-style models that InvokeAI knows about.
  If you do not see the model you are looking for, then it is probably
  a legacy checkpoint model and needs to be converted using the
  `invoke` command-line client and its `!optimize` command. You
  must select at least two models to merge. The third can be left at
  "None" if you desire.

* Alpha -- This is the ratio to use when combining models. It ranges
  from 0 to 1. The higher the value, the more weight is given to the
  2nd and (optionally) 3rd models. So if you have two models named "A"
  and "B", an alpha value of 0.25 will give you a merged model that is
  75% A and 25% B (see the sketch after this list).

* Interpolation Method -- This is the method used to combine
  weights. The options are "weighted_sum" (the default), "sigmoid",
  "inv_sigmoid" and "add_difference". Each produces slightly different
  results. When three models are in use, only "add_difference" is
  available. (TODO: cite a reference that describes what these
  interpolation methods actually do and how to decide among them.)

* Force -- Not all models are compatible with each other. The merge
  script will check for compatibility and refuse to merge ones that
  are incompatible. Set this checkbox to try merging anyway.

* Name for merged model - This is the name for the new model. Please
  use InvokeAI conventions - only alphanumeric letters and the
  characters ".+-".
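
The arithmetic behind the Alpha setting, shown as a minimal sketch of
conventional weighted-sum merging (this mirrors the usual diffusers convention,
not InvokeAI's exact code):

```python
import torch

def weighted_sum(theta_a: torch.Tensor, theta_b: torch.Tensor, alpha: float) -> torch.Tensor:
    """Blend two weight tensors; alpha is the share taken from the second model."""
    return (1.0 - alpha) * theta_a + alpha * theta_b

# With alpha = 0.25, 75% of each weight comes from model A and 25% from model B.
merged = weighted_sum(torch.ones(3), torch.zeros(3), alpha=0.25)
print(merged)  # tensor([0.7500, 0.7500, 0.7500])
```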

## Caveats

This is a new script and may contain bugs.

docs/features/NSFW.md (new file, 89 lines)
@@ -0,0 +1,89 @@
---
title: The NSFW Checker
---

# :material-image-off: NSFW Checker

## The NSFW ("Safety") Checker

The Stable Diffusion image generation models will produce sexual
imagery if deliberately prompted, and will occasionally produce such
images when this is not intended. Such images are colloquially known
as "Not Safe for Work" (NSFW). This behavior is due to the nature of
the training set that Stable Diffusion was trained on, which culled
millions of "aesthetic" images from the Internet.

You may not wish to be exposed to these images, and in some
jurisdictions it may be illegal to publicly distribute such imagery,
including mounting a publicly-available server that provides
unfiltered images to the public. Furthermore, the
[Stable Diffusion weights License](https://github.com/invoke-ai/InvokeAI/blob/main/LICENSE-ModelWeights.txt)
forbids the model from being used to "exploit any of the
vulnerabilities of a specific group of persons."

For these reasons Stable Diffusion offers a "safety checker," a
machine learning model trained to recognize potentially disturbing
imagery. When a potentially NSFW image is detected, the checker will
blur the image and paste a warning icon on top. The checker can be
turned on and off on the command line using `--nsfw_checker` and
`--no-nsfw_checker`.

At installation time, InvokeAI will ask whether the checker should be
activated by default (neither argument given on the command line). The
response is stored in the InvokeAI initialization file (usually
`.invokeai` in your home directory). You can change the default at any
time by opening this file in a text editor and commenting or
uncommenting the line `--nsfw_checker`.
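
For instance, the relevant portion of the initialization file might look like
this (an illustrative excerpt; the rest of the file and its comment markers
depend on your install):

```bash
# uncomment the next line to enable the NSFW checker by default
--nsfw_checker
```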

## Caveats

There are a number of caveats that you need to be aware of.

### Accuracy

The checker is [not perfect](https://arxiv.org/abs/2210.04610). It will
occasionally flag innocuous images (false positives), and will
frequently miss violent and gory imagery (false negatives). It rarely
fails to flag sexual imagery, but this has been known to happen. For
these reasons, the InvokeAI team prefers to refer to the software as an
"NSFW Checker" rather than a "safety checker."

### Memory Usage and Performance

The NSFW checker consumes an additional 1.2G of GPU VRAM on top of the
3.4G of VRAM used by Stable Diffusion v1.5 (this is with
half-precision arithmetic). This means that the checker will not run
successfully on GPU cards with less than 6GB VRAM, and will reduce the
size of the images that you can produce.

The checker also introduces a slight performance penalty. Images will
take ~1 second longer to generate when the checker is
activated. Generally this is not noticeable.

### Intermediate Images in the Web UI

The checker only operates on the final image produced by the Stable
Diffusion algorithm. If you are using the Web UI and have enabled the
display of intermediate images, you will briefly be exposed to a
low-resolution (mosaicized) version of the final image before it is
flagged by the checker and replaced by a fully blurred version. You
are encouraged to turn **off** intermediate image rendering when you
are using the checker. Future versions of InvokeAI will apply
additional blurring to intermediate images when the checker is active.

### Watermarking

InvokeAI does not apply any sort of watermark to images it
generates. However, it does write metadata into the PNG data area,
including the prompt used to generate the image and relevant parameter
settings. These fields can be examined using the `sd-metadata.py`
script that comes with the InvokeAI package.
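
An illustrative invocation (the output filename is a placeholder; the metadata
fields returned depend on the options used at generation time):

```bash
python scripts/sd-metadata.py outputs/img-samples/000001.1234567890.png
```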

Note that several other Stable Diffusion distributions offer
wavelet-based "invisible" watermarking. We have experimented with the
library used to generate these watermarks and have reached the
conclusion that while the watermarking library may be adding
watermarks to PNG images, the currently available version is unable to
retrieve them successfully. If and when a functioning version of the
library becomes available, we will offer this feature as well.

@@ -133,29 +133,6 @@ outputs = g.txt2img("a unicorn in manhattan")
Outputs is a list of lists in the format [[filename1,seed1],[filename2,seed2],...].

Please see the documentation in ldm/generate.py for more information.

---

@@ -120,7 +120,7 @@ A number of caveats:
(`--iterations`) argument.

3. Your results will be _much_ better if you use the `inpaint-1.5` model
   released by runwayML and installed by default by `invokeai-configure`.
   This model was trained specifically to harmoniously fill in image gaps. The
   standard model will work as well, but you may notice color discontinuities at
   the border.

@@ -28,21 +28,17 @@ should "just work" without further intervention. Simply pass the `--upscale`
the popup in the Web GUI.

**GFPGAN** requires a series of downloadable model files to work. These are
loaded when you run `invokeai-configure`. If GFPGAN is failing with an
error, please run the following from the InvokeAI directory:

```bash
invokeai-configure
```

If you do not run this script in advance, the GFPGAN module will attempt to
download the model files the first time you try to perform facial
reconstruction.

### Upscaling

`-U : <upscaling_factor> <upscaling_strength>`
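
For instance, to upscale a generated image 2x at 75% strength (the prompt and
values here are illustrative):

```bash
invoke> sunset over a mountain lake -U 2 0.75
```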

@@ -110,7 +106,7 @@ This repo also allows you to perform face restoration using
[CodeFormer](https://github.com/sczhou/CodeFormer).

In order to set up CodeFormer, you need to download the models like with
GFPGAN. You can do this either by running `invokeai-configure` or by manually
downloading the
[model file](https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/codeformer.pth)
and saving it to the `ldm/invoke/restoration/codeformer/weights` folder.

@@ -119,7 +115,7 @@ You can use `-ft` prompt argument to swap between CodeFormer and the default
GFPGAN. The above-mentioned `-G` prompt argument will allow you to control the
strength of the restoration effect.

### CodeFormer Usage

The following command will perform face restoration with CodeFormer instead of
the default gfpgan.

@@ -160,7 +156,7 @@ A new file named `000044.2945021133.fixed.png` will be created in the output
directory. Note that the `!fix` command does not replace the original file,
unlike the behavior at generate time.

## How to disable

If, for some reason, you do not wish to load the GFPGAN and/or ESRGAN libraries,
you can disable them on the invoke.py command line with the `--no_restore` and

@@ -20,16 +20,55 @@ would type at the invoke> prompt:
Then pass this file's name to `invoke.py` when you invoke it:

```bash
python scripts/invoke.py --from_file "/path/to/prompts.txt"
```

You may also read a series of prompts from standard input by providing
a filename of `-`. For example, here is a Python script that creates a
matrix of prompts, each one varying slightly:

```bash
#!/usr/bin/env python

adjectives = ['sunny','rainy','overcast']
samplers = ['k_lms','k_euler_a','k_heun']
cfg = [7.5, 9, 11]

for adj in adjectives:
    for samp in samplers:
        for cg in cfg:
            print(f'a {adj} day -A{samp} -C{cg}')
```

Its output looks like this (abbreviated):

```bash
a sunny day -Ak_lms -C7.5
a sunny day -Ak_lms -C9
a sunny day -Ak_lms -C11
a sunny day -Ak_euler_a -C7.5
a sunny day -Ak_euler_a -C9
...
a overcast day -Ak_heun -C9
a overcast day -Ak_heun -C11
```

To feed it to invoke.py, pass the filename of "-":

```bash
python matrix.py | python scripts/invoke.py --from_file -
```

When the script is finished, each of the 27 combinations
of adjective, sampler and CFG will be executed.

The command-line interface provides `!fetch` and `!replay` commands
which allow you to read the prompts from a single previously-generated
image or a whole directory of them, write the prompts to a file, and
then replay them. Or you can create your own file of prompts and feed
them to the command-line client from within an interactive session.
See [Command-Line Interface](CLI.md) for details.

---

## **Negative and Unconditioned Prompts**

@@ -51,7 +90,9 @@ original prompt:
`#!bash "A fantastical translucent pony made of water and foam, ethereal, radiant, hyperalism, scottish folklore, digital painting, artstation, concept art, smooth, 8 k frostbite 3 engine, ultra detailed, art by artgerm and greg rutkowski and magali villeneuve" -s 20 -W 512 -H 768 -C 7.5 -A k_euler_a -S 1654590180`

<figure markdown>



</figure>

That image has a woman, so if we want the horse without a rider, we can

@@ -61,7 +102,9 @@ this:
`#!bash "A fantastical translucent poney made of water and foam, ethereal, radiant, hyperalism, scottish folklore, digital painting, artstation, concept art, smooth, 8 k frostbite 3 engine, ultra detailed, art by artgerm and greg rutkowski and magali villeneuve [woman]" -s 20 -W 512 -H 768 -C 7.5 -A k_euler_a -S 1654590180`

<figure markdown>



</figure>

That's nice - but say we also don't want the image to be quite so blue. We can

@@ -70,7 +113,9 @@ add "blue" to the list of negative prompts, so it's now [woman blue]:
`#!bash "A fantastical translucent poney made of water and foam, ethereal, radiant, hyperalism, scottish folklore, digital painting, artstation, concept art, smooth, 8 k frostbite 3 engine, ultra detailed, art by artgerm and greg rutkowski and magali villeneuve [woman blue]" -s 20 -W 512 -H 768 -C 7.5 -A k_euler_a -S 1654590180`

<figure markdown>



</figure>

Getting close - but there's no sense in having a saddle when our horse doesn't

@@ -79,7 +124,9 @@ have a rider, so we'll add one more negative prompt: [woman blue saddle].
`#!bash "A fantastical translucent poney made of water and foam, ethereal, radiant, hyperalism, scottish folklore, digital painting, artstation, concept art, smooth, 8 k frostbite 3 engine, ultra detailed, art by artgerm and greg rutkowski and magali villeneuve [woman blue saddle]" -s 20 -W 512 -H 768 -C 7.5 -A k_euler_a -S 1654590180`

<figure markdown>



</figure>

!!! notes "Notes about this feature:"

@@ -124,8 +171,12 @@ this prompt of `a man picking apricots from a tree`, let's see what happens if
we increase and decrease how much attention we want Stable Diffusion to pay to
the word `apricots`:

<figure markdown>



</figure>

Using `-` to reduce apricot-ness:

| `a man picking apricots- from a tree` | `a man picking apricots-- from a tree` | `a man picking apricots--- from a tree` |

@@ -141,8 +192,12 @@ Using `+` to increase apricot-ness:
You can also change the balance between different parts of a prompt. For
example, below is a `mountain man`:

<figure markdown>



</figure>

And here he is with more mountain:

| `mountain+ man` | `mountain++ man` | `mountain+++ man` |

@@ -184,28 +239,24 @@ Generate an image with a given prompt, record the seed of the image, and then
use the `prompt2prompt` syntax to substitute words in the original prompt for
words in a new prompt. This works for `img2img` as well.
- `a ("fluffy cat").swap("smiling dog") eating a hotdog`.
|
For example, consider the prompt `a cat.swap(dog) playing with a ball in the forest`. Normally, because of the word words interact with each other when doing a stable diffusion image generation, these two prompts would generate different compositions:
|
||||||
- quotes optional: `a (fluffy cat).swap(smiling dog) eating a hotdog`.
|
- `a cat playing with a ball in the forest`
|
||||||
- for single word substitutions parentheses are also optional:
|
- `a dog playing with a ball in the forest`
|
||||||
`a cat.swap(dog) eating a hotdog`.
|
|
||||||
- Supports options `s_start`, `s_end`, `t_start`, `t_end` (each 0-1) loosely
|
| `a cat playing with a ball in the forest` | `a dog playing with a ball in the forest` |
|
||||||
corresponding to bloc97's `prompt_edit_spatial_start/_end` and
|
| --- | --- |
|
||||||
`prompt_edit_tokens_start/_end` but with the math swapped to make it easier to
|
| img | img |
|
||||||
intuitively understand.
|
|
||||||
- Example usage:`a (cat).swap(dog, s_end=0.3) eating a hotdog` - the `s_end`
|
|
||||||
argument means that the "spatial" (self-attention) edit will stop having any
|
- For multiple word swaps, use parentheses: `a (fluffy cat).swap(barking dog) playing with a ball in the forest`.
|
||||||
effect after 30% (=0.3) of the steps have been done, leaving Stable
|
- To swap a comma, use quotes: `a ("fluffy, grey cat").swap("big, barking dog") playing with a ball in the forest`.
|
||||||
Diffusion with 70% of the steps where it is free to decide for itself how to
|
- Supports options `t_start` and `t_end` (each 0-1) loosely corresponding to bloc97's `prompt_edit_tokens_start/_end` but with the math swapped to make it easier to
|
||||||
reshape the cat-form into a dog form.
|
intuitively understand. `t_start` and `t_end` are used to control on which steps cross-attention control should run. With the default values `t_start=0` and `t_end=1`, cross-attention control is active on every step of image generation. Other values can be used to turn cross-attention control off for part of the image generation process.
|
||||||
- The numbers represent a percentage through the step sequence where the edits
|
- For example, if doing a diffusion with 10 steps for the prompt is `a cat.swap(dog, t_start=0.3, t_end=1.0) playing with a ball in the forest`, the first 3 steps will be run as `a cat playing with a ball in the forest`, while the last 7 steps will run as `a dog playing with a ball in the forest`, but the pixels that represent `dog` will be locked to the pixels that would have represented `cat` if the `cat` prompt had been used instead.
|
||||||
should happen. 0 means the start (noisy starting image), 1 is the end (final
|
- Conversely, for `a cat.swap(dog, t_start=0, t_end=0.7) playing with a ball in the forest`, the first 7 steps will run as `a dog playing with a ball in the forest` with the pixels that represent `dog` locked to the same pixels that would have represented `cat` if the `cat` prompt was being used instead. The final 3 steps will just run `a cat playing with a ball in the forest`.
|
||||||
image).
|
> For img2img, the step sequence does not start at 0 but instead at `(1.0-strength)` - so if the img2img `strength` is `0.7`, `t_start` and `t_end` must both be greater than `0.3` (`1.0-0.7`) to have any effect.
|
||||||
- For img2img, the step sequence does not start at 0 but instead at
|
|
||||||
(1-strength) - so if strength is 0.7, s_start and s_end must both be
|
Prompt2prompt `.swap()` is not compatible with xformers, which will be temporarily disabled when doing a `.swap()` - so you should expect to use more VRAM and run slower that with xformers enabled.
|
||||||
greater than 0.3 (1-0.7) to have any effect.
|
|
||||||
- Convenience option `shape_freedom` (0-1) to specify how much "freedom" Stable
|
|
||||||
Diffusion should have to change the shape of the subject being swapped.
|
|
||||||
- `a (cat).swap(dog, shape_freedom=0.5) eating a hotdog`.
|
|
||||||
|
|
||||||
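
Putting the workflow together, a minimal worked example (the seed is
illustrative; reuse the one recorded from your own first render):

```bash
invoke> a cat playing with a ball in the forest -s 20 -S 1654590180
# re-run with the same seed, swapping only the subject:
invoke> a cat.swap(dog) playing with a ball in the forest -s 20 -S 1654590180
```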

The `prompt2prompt` code is based on
[bloc97's colab](https://github.com/bloc97/CrossAttentionControl).

@@ -259,14 +310,18 @@ usual, unless you fix the seed, the prompts will give you different results each
time you run them.

<figure markdown>

### "blue sphere, red cube, hybrid"

</figure>

This example doesn't use melding at all and represents the default way of mixing
concepts.

<figure markdown>



</figure>

It's interesting to see how the AI expressed the concept of "cube" as the four

@@ -274,6 +329,7 @@ quadrants of the enclosing frame. If you look closely, there is depth there, so
the enclosing frame is actually a cube.

<figure markdown>

### "blue sphere:0.25 red cube:0.75 hybrid"



@@ -286,6 +342,7 @@ the AI's "latent space" of semantic representations. Where is Ludwig
Wittgenstein when you need him?

<figure markdown>

### "blue sphere:0.75 red cube:0.25 hybrid"



@@ -296,6 +353,7 @@ Definitely more blue-spherey. The cube is gone entirely, but it's really cool
abstract art.

<figure markdown>

### "blue sphere:0.5 red cube:0.5 hybrid"



@@ -306,6 +364,7 @@ Whoa...! I see blue and red, but no spheres or cubes. Is the word "hybrid"
summoning up the concept of some sort of scifi creature? Let's find out.

<figure markdown>

### "blue sphere:0.5 red cube:0.5"



@@ -10,83 +10,261 @@ You may personalize the generated images to provide your own styles or objects
by training a new LDM checkpoint and introducing a new vocabulary to the fixed
model as a (.pt) embeddings file. Alternatively, you may use or train
HuggingFace Concepts embeddings files (.bin) from
<https://huggingface.co/sd-concepts-library> and its associated
notebooks.

## **Hardware and Software Requirements**

You will need a GPU to perform training in a reasonable length of
time, and at least 12 GB of VRAM. We recommend using the
[`xformers` library](../installation/070_INSTALL_XFORMERS) to accelerate the
training process further. During training, ~8 GB is temporarily
needed in order to store intermediate models, checkpoints and logs.

## **Preparing for Training**

To train, prepare a folder that contains 3-5 images that illustrate
the object or concept. It is good to provide a variety of examples or
poses to avoid overtraining the system. Format these images as PNG
(preferred) or JPG. You do not need to resize or crop the images in
advance, but for more control you may wish to do so.

Place the training images in a directory on the machine InvokeAI runs
on. We recommend placing them in a subdirectory of the
`text-inversion-training-data` folder located in the InvokeAI root
directory, ordinarily `~/invokeai` (Linux/Mac), or
`C:\Users\your_name\invokeai` (Windows). For example, to create an
embedding for the "psychedelic" style, you'd place the training images
into the directory
`~/invokeai/text-inversion-training-data/psychedelic`.
|
|
||||||
|
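On Linux or macOS you might stage the images like this (a minimal sketch; `/path/to/my-images` is a placeholder for wherever your source images actually live):

```sh
# create the recommended subdirectory and copy the training images into it
mkdir -p ~/invokeai/text-inversion-training-data/psychedelic
cp /path/to/my-images/*.png ~/invokeai/text-inversion-training-data/psychedelic/
```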
## **Launching Training Using the Console Front End**

InvokeAI 2.3 and higher comes with a text console-based training front
end. From within the `invoke.sh`/`invoke.bat` Invoke launcher script,
start the front end by selecting choice (3):

```sh
Do you want to generate images using the
1. command-line
2. browser-based UI
3. textual inversion training
4. open the developer console
Please enter 1, 2, 3, or 4: [1] 3
```

From the command line, with the InvokeAI virtual environment active,
you can launch the front end with the command `textual_inversion
--gui`.
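The full command-line sequence might look like this (a sketch; the virtual environment path is an assumption based on the default 2.3 runtime layout and may differ on your install):

```sh
source ~/invokeai/.venv/bin/activate   # assumed venv location; adjust to yours
textual_inversion --gui
```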
This will launch a text-based front end that will look like this:

<figure markdown>
![ti-frontend](../assets/textual-inversion/ti-frontend.png)
</figure>

The interface is keyboard-based. Move from field to field using
control-N (^N) to move to the next field and control-P (^P) to the
previous one. <Tab> and <shift-TAB> work as well. Once a field is
active, use the cursor keys. In a checkbox group, use the up and down
cursor keys to move from choice to choice, and <space> to select a
choice. In a scrollbar, use the left and right cursor keys to increase
and decrease the value of the scroll. In textfields, type the desired
values.
The number of parameters may look intimidating, but in most cases the
predefined defaults work fine. The red circled fields in the above
illustration are the ones you will adjust most frequently.

### Model Name

This will list all the diffusers models that are currently
installed. Select the one you wish to use as the basis for your
embedding. Be aware that if you use an SD-1.X-based model for your
training, you will only be able to use this embedding with other
SD-1.X-based models. Similarly, if you train on SD-2.X, you will only
be able to use the embeddings with models based on SD-2.X.

### Trigger Term

This is the prompt term you will use to trigger the embedding. Type a
single word or phrase you wish to use as the trigger, for example
"psychedelic" (without angle brackets). Within InvokeAI, you will then
be able to activate the trigger using the syntax `<psychedelic>`.

### Initializer

This is a single character that is used internally during the training
process as a placeholder for the trigger term. It defaults to "*" and
can usually be left alone.
### Resume from last saved checkpoint

As training proceeds, textual inversion will write a series of
intermediate files that can be used to resume training from where it
was left off in the case of an interruption. This checkbox will be
automatically selected if you provide a previously used trigger term
and at least one checkpoint file is found on disk.

Note that as of 20 January 2023, resume does not seem to be working
properly due to an issue with the upstream code.

### Data Training Directory

This is the location of the images to be used for training. When you
select a trigger term like "my-trigger", the frontend will prepopulate
this field with `~/invokeai/text-inversion-training-data/my-trigger`,
but you can change the path to wherever you want.

### Output Destination Directory

This is the location of the logs, checkpoint files, and embedding
files created during training. When you select a trigger term like
"my-trigger", the frontend will prepopulate this field with
`~/invokeai/text-inversion-output/my-trigger`, but you can change the
path to wherever you want.
### Image resolution

The images in the training directory will be automatically scaled to
the value you use here. For best results, you will want to match the
default resolution of the underlying model (512 pixels for SD-1.5,
768 for the larger version of SD-2.1).

### Center crop images

If this is selected, your images will be center cropped to make them
square before resizing them to the desired resolution. Center cropping
can indiscriminately cut off the top of subjects' heads for portrait
aspect images, so if you have images like this, you may wish to use a
photoeditor to manually crop them to a square aspect ratio.

### Mixed precision

Select the floating point precision for the embedding. "no" will
result in full 32-bit precision, "fp16" will provide 16-bit
precision, and "bf16" will provide mixed precision (only available
when XFormers is used).
### Max training steps

How many steps the training will take before the model converges. Most
training sets will converge with 2000-3000 steps.

### Batch size

This adjusts how many training images are processed simultaneously in
each step. Higher values will cause the training process to run more
quickly, but use more memory. The default size will run on GPUs with
as little as 12 GB.

### Learning rate

The rate at which the system adjusts its internal weights during
training. Higher values risk overtraining (getting the same image each
time), and lower values will take more steps to train a good
model. The default of 0.0005 is conservative; you may wish to increase
it to 0.005 to speed up training.

### Scale learning rate by number of GPUs, steps and batch size

If this is selected (the default) the system will adjust the provided
learning rate to improve performance.
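As far as we understand it (an assumption based on the Hugging Face `diffusers` script that underlies this front end, not something the interface itself documents), the scaled rate is the product of the base learning rate, the batch size, the gradient accumulation steps, and the number of GPUs:

```sh
# hypothetical illustration of the scaling rule; verify against the
# textual_inversion script before relying on it:
#   effective_lr = learning_rate * train_batch_size * gradient_accumulation_steps * num_gpus
echo '0.0005 * 8 * 4 * 1' | bc -l   # prints .0160
```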
### Use xformers acceleration

This will activate XFormers memory-efficient attention. You need to
have XFormers installed for this to have an effect.

### Learning rate scheduler

This adjusts how the learning rate changes over the course of
training. The default "constant" means to use a constant learning rate
for the entire training session. The other values scale the learning
rate according to various formulas.

Only "constant" is supported by the XFormers library.

### Gradient accumulation steps

This parameter lets you simulate a larger effective batch size than
your GPU's VRAM would ordinarily accommodate, at the cost of some
speed.

### Warmup steps

If "constant_with_warmup" is selected in the learning rate scheduler,
then this provides the number of warmup steps. Warmup steps have a
very low learning rate, and are one way of preventing early
overtraining.
## The training run

Start the training run by advancing to the OK button (bottom right)
and pressing <enter>. A series of progress messages will be displayed
as the training process proceeds. This may take an hour or two,
depending on settings and the speed of your system. Various log and
checkpoint files will be written into the output directory (ordinarily
`~/invokeai/text-inversion-output/my-model/`).

At the end of successful training, the system will copy the file
`learned_embeds.bin` into the InvokeAI root directory's `embeddings`
directory, using a subdirectory named after the trigger token. For
example, if the trigger token was `psychedelic`, then look for the
embeddings file in
`~/invokeai/embeddings/psychedelic/learned_embeds.bin`.
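A quick directory listing confirms that the embedding landed where InvokeAI expects it (using the `psychedelic` example from above):

```sh
ls ~/invokeai/embeddings/psychedelic/
# expected output: learned_embeds.bin
```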
You may now launch InvokeAI and try out a prompt that uses the trigger
term. For example `a plate of banana sushi in <psychedelic> style`.

## **Training with the Command-Line Script**

Training can also be done using a traditional command-line script. It
can be launched from within the "developer's console", or from the
command line after activating InvokeAI's virtual environment.

It accepts a large number of arguments, which can be summarized by
passing the `--help` argument:

```sh
textual_inversion --help
```
Typical usage is shown here:

```sh
textual_inversion \
    --model=stable-diffusion-1.5 \
    --resolution=512 \
    --learnable_property=style \
    --initializer_token='*' \
    --placeholder_token='<psychedelic>' \
    --train_data_dir=/home/lstein/invokeai/training-data/psychedelic \
    --output_dir=/home/lstein/invokeai/text-inversion-training/psychedelic \
    --scale_lr \
    --train_batch_size=8 \
    --gradient_accumulation_steps=4 \
    --max_train_steps=3000 \
    --learning_rate=0.0005 \
    --resume_from_checkpoint=latest \
    --lr_scheduler=constant \
    --mixed_precision=fp16 \
    --only_save_embeds
```
## Reading

For more information on textual inversion, please see the following
resources:

* The [textual inversion repository](https://github.com/rinongal/textual_inversion) and
  associated paper for details and limitations.
* [HuggingFace's textual inversion training
  page](https://huggingface.co/docs/diffusers/training/text_inversion)
* [HuggingFace example script
  documentation](https://github.com/huggingface/diffusers/tree/main/examples/textual_inversion)
  (Note that this script is similar to, but not identical to,
  `textual_inversion`; it produces embedding files that are completely compatible.)

---

copyright (c) 2023, Lincoln Stein and the InvokeAI Development Team
docs/features/UNIFIED_CANVAS.md (new file, 284 lines)
@ -0,0 +1,284 @@

---
title: Unified Canvas
---

The Unified Canvas is a tool designed to streamline and simplify the process of
composing an image using Stable Diffusion. It offers artists all of the
available Stable Diffusion generation modes (Text To Image, Image To Image,
Inpainting, and Outpainting) as a single unified workflow. The flexibility of
the tool allows you to tweak and edit image generations, extend images beyond
their initial size, and to create new content in a freeform way both inside and
outside of existing images.

This document explains the basics of using the Unified Canvas, introducing you
to its features and tools one by one. It also describes some of the more
advanced tools available to power users of the Canvas.
## Basics

The Unified Canvas consists of two layers: the **Base Layer** and the **Mask
Layer**. You can swap from one layer to the other by selecting the layer you
want in the drop-down menu on the top left corner of the Unified Canvas, or by
pressing the (Q) hotkey.

### Base Layer

The **Base Layer** is the image content currently managed by the Canvas, and can
be exported at any time to the gallery by using the **Save to Gallery** option.
When the Base Layer is selected, the Brush (B) and Eraser (E) tools will
directly manipulate the base layer. Any images uploaded to the Canvas, or sent
to the Unified Canvas from the gallery, will clear out all existing content and
set the Base layer to the new image.

### Staging Area

When you generate images, they will display in the Canvas's **Staging Area**,
alongside the Staging Area toolbar buttons. While the Staging Area is active,
you cannot interact with the Canvas itself.
<figure markdown>
![staging area](../assets/canvas/staging_area.png)
</figure>

Accepting generations will commit the new generation to the **Base Layer**. You
can review all generated images using the Prev/Next arrows, save any individual
generations to your gallery (without committing to the Base layer) or discard
generations. While you can Undo a discard in an individual Canvas session, any
generations that are not saved will be lost when the Canvas resets.

### Mask Layer

The **Mask Layer** consists of any masked sections that have been created to
inform Inpainting generations. You can paint a new mask, or edit an existing
mask, using the Brush tool and the Eraser with the Mask layer set as your Active
layer. Any masked areas will only affect generation inside of the current
bounding box.
### Bounding Box

When generating a new image, Invoke will process and apply new images within the
area denoted by the **Bounding Box**. The Width & Height settings of the
Bounding Box, as well as its location within the Unified Canvas and the pixels or
empty space that it encloses, determine how new invocations are generated - see
[Inpainting & Outpainting](#inpainting-and-outpainting) below. The Bounding Box
can be moved and resized using the Move (V) tool. It can also be resized using
the Bounding Box options in the Options Panel. By using these controls you can
generate larger or smaller images, control which sections of the image are being
processed, as well as control Bounding Box tools like the Bounding Box
fill/erase.
### <a name="inpainting-and-outpainting"></a> Inpainting & Outpainting

"Inpainting" means asking the AI to refine part of an image while leaving the
rest alone. For example, updating a portrait of your grandmother to have her
wear a biker's jacket.

| masked original | inpaint result |
| :-------------: | :------------: |
| ![masked_input](../assets/canvas/mask_original.png) | ![inpaint_result](../assets/canvas/mask_inpainted.png) |

"Outpainting" means asking the AI to expand the original image beyond its
original borders, making a bigger image that's still based on the original. For
example, extending the above image of your Grandmother in a biker's jacket to
include her wearing jeans (and while we're at it, a motorcycle!)

<figure markdown>
![outpainting_example](../assets/outpainting/elven_princess.outpainted.png)
</figure>
When you are using the Unified Canvas, Invoke decides automatically whether to
do Inpainting, Outpainting, ImageToImage, or TextToImage by looking inside the
area enclosed by the Bounding Box. It chooses the appropriate type of generation
based on whether the Bounding Box contains empty (transparent) areas on the Base
layer, or whether it contains colored areas from previous generations (or from
painted brushstrokes) on the Base layer, and/or whether the Mask layer contains
any brushstrokes. See [Generation Methods](#generation-methods) below for more
information.
## Getting Started

To get started with the Unified Canvas, you will want to generate a new base
layer using Txt2Img or by importing an initial image. We'll refer to either of
these methods as the "initial image" in the guide below.

From there, you can consider the following techniques to augment your image:

- **New Images**: Move the bounding box to an empty area of the Canvas, type in
  your prompt, and Invoke, to generate a new image using the Text to Image
  function.
- **Image Correction**: Use the color picker and brush tool to paint corrections
  on the image, switch to the Mask layer, and brush a mask over your painted
  area to use **Inpainting**. You can also use the **ImageToImage** generation
  method to invoke new interpretations of the image.
- **Image Expansion**: Move the bounding box to include a portion of your
  initial image, and a portion of transparent/empty pixels, then Invoke using a
  prompt that describes what you'd like to see in that area. This will Outpaint
  the image. You'll typically find more coherent results if you keep about
  50-60% of the original image in the bounding box. Make sure that the Image To
  Image Strength slider is set to a high value - you may need to set it higher
  than you are used to.
- **New Content on Existing Images**: If you want to add new details or objects
  into your image, use the brush tool to paint a sketch of what you'd like to
  see on the image, switch to the Mask layer, and brush a mask over your painted
  area to use **Inpainting**. If the masked area is small, consider using a
  smaller bounding box to take advantage of Invoke's automatic Scaling features,
  which can help to produce better details.
- **And more**: There are a number of creative ways to use the Canvas, and the
  above are just starting points. We're excited to see what you come up with!
## <a name="generation-methods"></a> Generation Methods

The Canvas can use all generation methods available (Txt2Img, Img2Img,
Inpainting, and Outpainting), and these will be automatically selected and used
based on the current selection area within the Bounding Box.

### Text to Image

If the Bounding Box is placed over an area of Canvas with an **empty Base
Layer**, invoking a new image will use **TextToImage**. This generates an
entirely new image based on your prompt.

### Image to Image

If the Bounding Box is placed over an area of Canvas with an **existing Base
Layer area with no transparent pixels or masks**, invoking a new image will use
**ImageToImage**. This uses the image within the bounding box and your prompt to
interpret a new image. The image will be closer to your original image at lower
Image to Image strengths.
### Inpainting

If the Bounding Box is placed over an area of Canvas with an **existing Base
Layer and any pixels selected using the Mask layer**, invoking a new image will
use **Inpainting**. Inpainting uses the existing colors/forms in the masked area
in order to generate a new image for the masked area only. The unmasked portion
of the image will remain the same. Image to Image strength applies to the
inpainted area.

If you desire something completely different from the original image in your new
generation (i.e., if you want Invoke to ignore existing colors/forms), consider
toggling the Inpaint Replace setting on, and use high values for both Inpaint
Replace and Image To Image Strength.

!!! note

    By default, the **Scale Before Processing** option — which
    inpaints more coherent details by generating at a larger resolution and then
    scaling — is only activated when the Bounding Box is relatively small.
    To get the best inpainting results you should therefore resize your Bounding
    Box to the smallest area that contains your mask and enough surrounding detail
    to help Stable Diffusion understand the context of what you want it to draw.
    You should also update your prompt so that it describes _just_ the area within
    the Bounding Box.
### Outpainting

If the Bounding Box is placed over an area of Canvas partially filled by an
existing Base Layer area and partially by transparent pixels or masks, invoking
a new image will use **Outpainting**, as well as **Inpainting** any masked
areas.

---
## Advanced Features

Features with non-obvious behavior are detailed below, in order to provide
clarity on the intent and common use cases we expect for utilizing them.

### Toolbar

#### Mask Options

- **Enable Mask** - This flag can be used to Enable or Disable the currently
  painted mask. If you have painted a mask but don't want it to affect the
  next invocation, and you _also_ don't want to delete it, then you can set this
  option to Disable. When you want the mask back, set this back to Enable.
- **Preserve Masked Area** - When enabled, Preserve Masked Area inverts the
  effect of the Mask on the Inpainting process. Pixels in masked areas will be
  kept unchanged, and unmasked areas will be regenerated.
#### Creative Tools

- **Brush - Base/Mask Modes** - The Brush tool switches automatically between
  different modes of operation for the Base and Mask layers respectively.
  - On the Base layer, the brush will directly paint on the Canvas using the
    color selected on the Brush Options menu.
  - On the Mask layer, the brush will create a new mask. If you're finding the
    mask difficult to see over the existing content of the Unified Canvas, you
    can change the color it is drawn with using the color selector on the Mask
    Options dropdown.
- **Erase Bounding Box** - On the Base layer, erases all pixels within the
  Bounding Box.
- **Fill Bounding Box** - On the Base layer, fills all pixels within the
  Bounding Box with the currently selected color.
#### Canvas Tools

- **Move Tool** - Allows for manipulation of the Canvas view (by dragging on the
  Canvas, outside the bounding box), the Bounding Box (by dragging the edges of
  the box), or the Width/Height of the Bounding Box (by dragging one of the 9
  directional handles).
- **Reset View** - Click to re-orient the view to the center of the Bounding
  Box.
- **Merge Visible** - If your browser is having performance problems drawing the
  image in the Unified Canvas, click this to consolidate all of the information
  currently being rendered by your browser into a merged copy of the image. This
  lowers the resource requirements and should improve performance.
### Seam Correction

When doing Inpainting or Outpainting, Invoke needs to merge the pixels generated
by Stable Diffusion into your existing image. To do this, the area around the
`seam` at the boundary between your image and the new generation is
automatically blended to produce a seamless output. In a fully automatic
process, a mask is generated to cover the seam, and then the area of the seam is
Inpainted.

Although the default options should work well most of the time, sometimes it can
help to alter the parameters that control the seam Inpainting. A wider seam and
a blur setting of about 1/3 of the seam size have been noted as producing
consistently strong results (e.g. 96 wide and 16 blur, since the blur is applied
to both sides and so adds up to 32). A seam strength of 0.7 is best for reducing
hard seams.

- **Seam Size** - The size of the seam masked area. Set higher to make a larger
  mask around the seam.
- **Seam Blur** - The size of the blur that is applied on _each_ side of the
  masked area.
- **Seam Strength** - The Image To Image Strength parameter used for the
  Inpainting generation that is applied to the seam area.
- **Seam Steps** - The number of generation steps that should be used to Inpaint
  the seam.
### Infill & Scaling

- **Scale Before Processing & W/H**: When generating images with a bounding box
  smaller than the optimized W/H of the model (e.g., 512x512 for SD1.5), this
  feature first generates at a larger size with the same aspect ratio, and then
  scales that image down to fill the selected area. This is particularly useful
  when inpainting very small details. Scaling is optional but is enabled by
  default.
- **Inpaint Replace**: When Inpainting, the default method is to utilize the
  existing RGB values of the Base layer to inform the generation process. If
  Inpaint Replace is enabled, noise is generated and blended with the existing
  pixels (completely replacing the original RGB values at an Inpaint Replace
  value of 1). This can help generate more variation from the pixels on the Base
  layers.
  - When using Inpaint Replace you should use a higher Image To Image Strength
    value, especially at higher Inpaint Replace values.
- **Infill Method**: Invoke currently supports two methods for producing RGB
  values for use in the Outpainting process: Patchmatch and Tile. We believe
  that Patchmatch is the superior method; however, we provide support for Tile in
  case Patchmatch cannot be installed or is unavailable on your computer.
- **Tile Size**: The Tile method for Outpainting sources small portions of the
  original image and randomly places these into the areas being Outpainted. This
  value sets the size of those tiles.
## Hot Keys

The Unified Canvas is a tool that excels when you use hotkeys. You can view the
full list of keyboard shortcuts, updated with all new features, by clicking the
Keyboard Shortcuts icon at the top right of the InvokeAI WebUI.
@ -303,6 +303,8 @@ The WebGUI is under rapid development. Check back regularly for updates!

| `--cors [CORS ...]` | Additional allowed origins, comma-separated |
| `--host HOST` | Web server: Host or IP to listen on. Set to 0.0.0.0 to accept traffic from other devices on your network. |
| `--port PORT` | Web server: Port to listen on |
| `--certfile CERTFILE` | Web server: Path to certificate file to use for SSL. Use together with --keyfile |
| `--keyfile KEYFILE` | Web server: Path to private key file to use for SSL. Use together with --certfile |
| `--gui` | Start InvokeAI GUI - This is the "desktop mode" version of the web app. It uses Flask to create a desktop app experience of the webserver. |
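For example, to serve the WebGUI over HTTPS on your local network, the flags above might be combined like this (a hypothetical invocation; the certificate and key paths are placeholders, and we assume the server is started via `invoke.py --web`):

```sh
# cert/key paths are placeholders; substitute your own files
python scripts/invoke.py --web --host 0.0.0.0 --port 9090 \
    --certfile /path/to/fullchain.pem --keyfile /path/to/privkey.pem
```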
### Web Specific Features
@ -4,59 +4,72 @@ title: WebUI Hotkey List

# :material-keyboard: **WebUI Hotkey List**

## App Hotkeys

| Hotkey         | Action             |
| -------------- | ------------------ |
| ++ctrl+enter++ | Invoke             |
| ++shift+x++    | Cancel             |
| ++alt+a++      | Focus Prompt       |
| ++o++          | Toggle Options     |
| ++shift+o++    | Pin Options        |
| ++z++          | Toggle Viewer      |
| ++g++          | Toggle Gallery     |
| ++f++          | Maximize Workspace |
| ++1++ - ++5++  | Change Tabs        |
| ++"`"++        | Toggle Console     |

## General Hotkeys

| Hotkey      | Action                 |
| ----------- | ---------------------- |
| ++p++       | Set Prompt             |
| ++s++       | Set Seed               |
| ++a++       | Set Parameters         |
| ++shift+r++ | Restore Faces          |
| ++shift+u++ | Upscale                |
| ++i++       | Show Info              |
| ++shift+i++ | Send To Image To Image |
| ++del++     | Delete Image           |
| ++esc++     | Close Panels           |

## Gallery Hotkeys

| Hotkey               | Action                      |
| -------------------- | --------------------------- |
| ++arrow-left++       | Previous Image              |
| ++arrow-right++      | Next Image                  |
| ++shift+g++          | Toggle Gallery Pin          |
| ++shift+arrow-up++   | Increase Gallery Image Size |
| ++shift+arrow-down++ | Decrease Gallery Image Size |

## Unified Canvas Hotkeys

| Hotkey                        | Action                 |
| ----------------------------- | ---------------------- |
| ++b++                         | Select Brush           |
| ++e++                         | Select Eraser          |
| ++bracket-left++              | Decrease Brush Size    |
| ++bracket-right++             | Increase Brush Size    |
| ++shift+bracket-left++        | Decrease Brush Opacity |
| ++shift+bracket-right++       | Increase Brush Opacity |
| ++v++                         | Move Tool              |
| ++shift+f++                   | Fill Bounding Box      |
| ++del++ / ++backspace++       | Erase Bounding Box     |
| ++c++                         | Select Color Picker    |
| ++n++                         | Toggle Snap            |
| ++"Hold Space"++              | Quick Toggle Move      |
| ++q++                         | Toggle Layer           |
| ++shift+c++                   | Clear Mask             |
| ++h++                         | Hide Mask              |
| ++shift+h++                   | Show/Hide Bounding Box |
| ++shift+m++                   | Merge Visible          |
| ++shift+s++                   | Save To Gallery        |
| ++ctrl+c++                    | Copy To Clipboard      |
| ++shift+d++                   | Download Image         |
| ++ctrl+z++                    | Undo                   |
| ++ctrl+y++ / ++ctrl+shift+z++ | Redo                   |
| ++r++                         | Reset View             |
| ++arrow-left++                | Previous Staging Image |
| ++arrow-right++               | Next Staging Image     |
| ++enter++                     | Accept Staging Image   |
docs/features/index.md (new file, 5 lines)
@ -0,0 +1,5 @@

---
title: Overview
---

Here you can find the documentation for different features.
@ -39,7 +39,7 @@ Looking for a short version? Here's a TL;DR in 3 tables.

!!! tip "suggestions"

    For most use cases, `K_LMS`, `K_HEUN` and `K_DPM_2` are the best choices (the latter 2 run 0.5x as quick, but tend to converge 2x as quick as `K_LMS`). At very low steps (≤ `-s8`), `K_HEUN` and `K_DPM_2` are not recommended. Use `K_LMS` instead.

    For variability, use `K_EULER_A` (runs 2x as quick as `K_DPM_2_A`).

---
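As a concrete illustration of the advice above, a quick low-step draft might look like this at the CLI prompt (a hypothetical prompt; we are assuming `-A` is the sampler-selection flag in this era of the CLI, so verify against your `invoke.py --help` output):

```sh
# -s8 keeps steps very low, so K_LMS is the recommended sampler here
invoke> "a cozy cabin in the woods" -s8 -A k_lms
```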
docs/index.md (238 lines changed)
@ -6,15 +6,14 @@ title: Home

The Docs you find here (/docs/*) are built and deployed via mkdocs. If you want to run a local version to verify your changes, it's as simple as:

```bash
pip install -r docs/requirements-mkdocs.txt
mkdocs serve
```
-->

<div align="center" markdown>

[![project logo](assets/logo.png)](https://github.com/invoke-ai/InvokeAI)

[![discord badge]][discord link]
@ -70,7 +69,11 @@ image-to-image generator. It provides a streamlined process with various new
features and options to aid the image generation process. It runs on Windows,
Mac and Linux machines, and runs on GPU cards with as little as 4 GB of RAM.

**Quick links**: [<a href="https://discord.gg/ZmtBAhwWhy">Discord Server</a>]
[<a href="https://github.com/invoke-ai/InvokeAI/">Code and Downloads</a>] [<a
href="https://github.com/invoke-ai/InvokeAI/issues">Bug Reports</a>] [<a
href="https://github.com/invoke-ai/InvokeAI/discussions">Discussion, Ideas &
Q&A</a>]

<div align="center"><img src="assets/invoke-web-server-1.png" width=640></div>
@ -80,11 +83,25 @@ Mac and Linux machines, and runs on GPU cards with as little as 4 GB of RAM.

## :octicons-package-dependencies-24: Installation

This fork is supported across Linux, Windows and Macintosh. Linux users can use
either an Nvidia-based card (with CUDA support) or an AMD card (using the ROCm
driver).

First time users, please see
[Automated Installer](installation/INSTALL_AUTOMATED.md) for a walkthrough of
getting InvokeAI up and running on your system. For alternative installation and
upgrade instructions, please see:
[InvokeAI Installation Overview](installation/)

Users who wish to make use of the **PyPatchMatch** inpainting functions
will need to perform a bit of extra work to enable this
module. Instructions can be found at [Installing
PyPatchMatch](installation/060_INSTALL_PATCHMATCH.md).

If you have an NVIDIA card, you can benefit from the significant
memory savings and performance benefits provided by Facebook Lab's
**xFormers** module. Instructions for Linux and Windows users can be found
at [Installing xFormers](installation/070_INSTALL_XFORMERS.md).

## :fontawesome-solid-computer: Hardware Requirements
@ -93,25 +110,29 @@ instructions, please see:

You will need one of the following:

- :simple-nvidia: An NVIDIA-based graphics card with 4 GB or more VRAM memory.
- :simple-amd: An AMD-based graphics card with 4 GB or more VRAM memory (Linux
  only)
- :fontawesome-brands-apple: An Apple computer with an M1 chip.

We do **not recommend** the following video cards due to issues with their
running in half-precision mode and having insufficient VRAM to render 512x512
images in full-precision mode:

- NVIDIA 10xx series cards such as the 1080ti
- GTX 1650 series cards
- GTX 1660 series cards

### :fontawesome-solid-memory: Memory

- At least 12 GB of main memory RAM.

### :fontawesome-regular-hard-drive: Disk

- At least 18 GB of free disk space for the machine learning model, Python, and
  all its dependencies.

!!! info

    Precision is auto configured based on the device. If however you encounter errors like
    `expected type Float but found Half` or `not implemented for Half` you can try starting
    `invoke.py` with the `--precision=float32` flag:
@ -120,101 +141,116 @@ You will need one of the following:

    (invokeai) ~/InvokeAI$ python scripts/invoke.py --full_precision
    ```

## :octicons-gift-24: InvokeAI Features

- [The InvokeAI Web Interface](features/WEB.md) -
  [WebGUI hotkey reference guide](features/WEBUIHOTKEYS.md) -
  [WebGUI Unified Canvas for Img2Img, inpainting and outpainting](features/UNIFIED_CANVAS.md)
  <!-- separator -->
- [The Command Line Interface](features/CLI.md) -
  [Image2Image](features/IMG2IMG.md) - [Inpainting](features/INPAINTING.md) -
  [Outpainting](features/OUTPAINTING.md) -
  [Adding custom styles and subjects](features/CONCEPTS.md) -
  [Upscaling and Face Reconstruction](features/POSTPROCESS.md)
  <!-- separator -->
- [Generating Variations](features/VARIATIONS.md)
  <!-- separator -->
- [Prompt Engineering](features/PROMPTS.md)
  <!-- separator -->
- [Model Merging](features/MODEL_MERGING.md)
  <!-- separator -->
- Miscellaneous
  - [NSFW Checker](features/NSFW.md)
  - [Embiggen upscaling](features/EMBIGGEN.md)
  - [Other](features/OTHER.md)

## :octicons-log-16: Latest Changes

### v2.2.4 <small>(11 December 2022)</small>

#### the `invokeai` directory

Previously there were two directories to worry about: the directory that
contained the InvokeAI source code and the launcher scripts, and the `invokeai`
directory that contained the model files, embeddings, configuration and
outputs. With the 2.2.4 release, this dual system is done away with, and
everything, including the `invoke.bat` and `invoke.sh` launcher scripts, now
lives in a directory named `invokeai`. By default this directory is located in
your home directory (e.g. `\Users\yourname` on Windows), but you can select
where it goes at install time.

After installation, you can delete the install directory (the one that the zip
file creates when it unpacks). Do **not** delete or move the `invokeai`
directory!

##### Initialization file `invokeai/invokeai.init`

You can place frequently-used startup options in this file, such as the default
number of steps or your preferred sampler. To keep everything in one place, this
file has now been moved into the `invokeai` directory and is named
`invokeai.init`.

#### To update from Version 2.2.3

The easiest route is to download and unpack one of the 2.2.4 installer files.
When it asks you for the location of the `invokeai` runtime directory, respond
with the path to the directory that contains your 2.2.3 `invokeai`. That is, if
`invokeai` lives at `C:\Users\fred\invokeai`, then answer with `C:\Users\fred`
and answer "Y" when asked if you want to reuse the directory.

The `update.sh` (`update.bat`) script that came with the 2.2.3 source installer
does not know about the new directory layout and won't be fully functional.

#### To update to 2.2.5 (and beyond) there's now an update path

As they become available, you can update to more recent versions of InvokeAI
using an `update.sh` (`update.bat`) script located in the `invokeai` directory.
Running it without any arguments will install the most recent version of
InvokeAI. Alternatively, you can install a specific release by running the
`update.sh` script with the path to the desired release's zip file as an
argument; you can find that file by clicking on the green "Code" button on this
repository's home page.

#### Other 2.2.4 Improvements

- Fix InvokeAI GUI initialization by @addianto in #1687
- fix link in documentation by @lstein in #1728
- Fix broken link by @ShawnZhong in #1736
- Remove reference to binary installer by @lstein in #1731
- documentation fixes for 2.2.3 by @lstein in #1740
- Modify installer links to point closer to the source installer by @ebr in #1745
- add documentation warning about 1650/60 cards by @lstein in #1753
- Fix Linux source URL in installation docs by @andybearman in #1756
- Make install instructions discoverable in readme by @damian0815 in #1752
- typo fix by @ofirkris in #1755
- Non-interactive model download (support HUGGINGFACE_TOKEN) by @ebr in #1578
- fix(srcinstall): shell installer - cp scripts instead of linking by @tildebyte in #1765
- stability and usage improvements to binary & source installers by @lstein in #1760
- fix off-by-one bug in cross-attention-control by @damian0815 in #1774
- Eventually update APP_VERSION to 2.2.3 by @spezialspezial in #1768
- invoke script cds to its location before running by @lstein in #1805
- Make PaperCut and VoxelArt models load again by @lstein in #1730
- Fix --embedding_directory / --embedding_path not working by @blessedcoolant in #1817
- Clean up readme by @hipsterusername in #1820
- Optimized Docker build with support for external working directory by @ebr in #1544
- disable pushing the cloud container by @mauwii in #1831
- Fix docker push github action and expand with additional metadata by @ebr in #1837
- Fix Broken Link To Notebook by @VedantMadane in #1821
- Account for flat models by @spezialspezial in #1766
- Update invoke.bat.in isolate environment variables by @lynnewu in #1833
- Arch Linux Specific PatchMatch Instructions & fixing conda install on linux by @SammCheese in #1848
- Make force free GPU memory work in img2img by @addianto in #1844
- New installer by @lstein

For older changelogs, please visit the
**[CHANGELOG](CHANGELOG/#v223-2-december-2022)**.

## :material-target: Troubleshooting
|
319
docs/installation/010_INSTALL_AUTOMATED.md
Normal file
@ -0,0 +1,319 @@
|
|||||||
---
title: Installing with the Automated Installer
---

# InvokeAI Automated Installation

## Introduction

The automated installer is a shell script that attempts to automate every step
needed to install and run InvokeAI on a stock computer running recent versions
of Linux, MacOS or Windows. It will leave you with a stable version of
InvokeAI, with the option to upgrade to experimental versions later.

## Walk through

1. Make sure that your system meets the
   [hardware requirements](../index.md#hardware-requirements) and has the
   appropriate GPU drivers installed. In particular, if you are a Linux user
   with an AMD GPU installed, you may need to install the
   [ROCm driver](https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html).

    !!! info "Required Space"

        Installation requires roughly 18G of free disk space to load the
        libraries and recommended model weights files.

        Regardless of your destination disk, your *system drive* (`C:\` on
        Windows, `/` on macOS/Linux) requires at least 6GB of free disk space
        to download and cache python dependencies. NOTE for Linux users: if
        your temporary directory is mounted as a `tmpfs`, ensure it has
        sufficient space.

2. Check that your system has an up-to-date Python installed. To do this, open
   up a command-line window ("Terminal" on Linux and Macintosh, "Command" or
   "Powershell" on Windows) and type `python --version`. If Python is
   installed, it will print out the version number. If it is version `3.9.1` or
   `3.10.x`, you meet the requirements.

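    For example, on a correctly set-up system you might see something like
    this (the version shown here is only illustrative):

    ```bash
    python --version
    # Python 3.10.9   <- any 3.9/3.10 release meets the requirement
    ```
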
!!! warning "At this time we do not recommend Python 3.11"
|
||||||
|
|
||||||
|
!!! warning "If you see an older version, or get a command not found error"
|
||||||
|
|
||||||
|
Go to [Python Downloads](https://www.python.org/downloads/) and
|
||||||
|
download the appropriate installer package for your platform. We recommend
|
||||||
|
[Version 3.10.9](https://www.python.org/downloads/release/python-3109/),
|
||||||
|
which has been extensively tested with InvokeAI.
|
||||||
|
|
||||||
|
|
||||||
|
_Please select your platform in the section below for platform-specific
|
||||||
|
setup requirements._
|
||||||
|
|
||||||
|
=== "Windows users"
|
||||||
|
|
||||||
|
- During the Python configuration process,
|
||||||
|
look out for a checkbox to add Python to your PATH
|
||||||
|
and select it. If the install script complains that it can't
|
||||||
|
find python, then open the Python installer again and choose
|
||||||
|
"Modify" existing installation.
|
||||||
|
|
||||||
|
- Installation requires an up to date version of the Microsoft Visual C libraries. Please install the 2015-2022 libraries available here: https://learn.microsoft.com/en-US/cpp/windows/latest-supported-vc-redist?view=msvc-170
|
||||||
|
|
||||||
|
=== "Mac users"
|
||||||
|
|
||||||
|
- After installing Python, you may need to run the
|
||||||
|
following command from the Terminal in order to install the Web
|
||||||
|
certificates needed to download model data from https sites. If
|
||||||
|
you see lots of CERTIFICATE ERRORS during the last part of the
|
||||||
|
install, this is the problem, and you can fix it with this command:
|
||||||
|
|
||||||
|
`/Applications/Python\ 3.10/Install\ Certificates.command`
|
||||||
|
|
||||||
|
- You may need to install the Xcode command line tools. These
|
||||||
|
are a set of tools that are needed to run certain applications in a
|
||||||
|
Terminal, including InvokeAI. This package is provided directly by Apple.
|
||||||
|
|
||||||
|
- To install, open a terminal window and run `xcode-select
|
||||||
|
--install`. You will get a macOS system popup guiding you through the
|
||||||
|
install. If you already have them installed, you will instead see some
|
||||||
|
output in the Terminal advising you that the tools are already installed.
|
||||||
|
|
||||||
|
- More information can be found here:
|
||||||
|
https://www.freecodecamp.org/news/install-xcode-command-line-tools/
|
||||||
|
|
||||||
|
=== "Linux users"
|
||||||
|
|
||||||
|
For reasons that are not entirely clear, installing the correct version of Python can be a bit of a challenge on Ubuntu, Linux Mint, Pop!_OS, and other Debian-derived distributions.
|
||||||
|
|
||||||
|
On Ubuntu 22.04 and higher, run the following:
|
||||||
|
|
||||||
|
```
|
||||||
|
sudo apt update
|
||||||
|
sudo apt install -y python3 python3-pip python3-venv
|
||||||
|
sudo update-alternatives --install /usr/local/bin/python python /usr/bin/python3.10 3
|
||||||
|
```
|
||||||
|
|
||||||
|
On Ubuntu 20.04, the process is slightly different:
|
||||||
|
|
||||||
|
```
|
||||||
|
sudo apt update
|
||||||
|
sudo apt install -y software-properties-common
|
||||||
|
sudo add-apt-repository -y ppa:deadsnakes/ppa
|
||||||
|
sudo apt install python3.10 python3-pip python3.10-venv
|
||||||
|
sudo update-alternatives --install /usr/local/bin/python python /usr/bin/python3.10 3
|
||||||
|
```
|
||||||
|
|
||||||
|
Both `python` and `python3` commands are now pointing at Python3.10. You can still access older versions of Python by calling `python2`, `python3.8`, etc.
|
||||||
|
|
||||||
|
Linux systems require a couple of additional graphics libraries to be installed for proper functioning of `python3-opencv`. Please run the following:
|
||||||
|
|
||||||
|
`sudo apt update && sudo apt install -y libglib2.0-0 libgl1-mesa-glx`
|
||||||
|
|
||||||
|
3. The source installer is distributed in ZIP files. Go to the
   [latest release](https://github.com/invoke-ai/InvokeAI/releases/latest), and
   look for a series of files named:

    - InvokeAI-installer-2.X.X.zip

    (Where 2.X.X is the current release number).

    Download the latest release.

4. Unpack the zip file into a convenient directory. This will create a new
   directory named "InvokeAI-Installer". This example shows how this would look
   using the `unzip` command-line tool, but you may use any graphical or
   command-line Zip extractor:

    ```cmd
    C:\Documents\Linco> unzip InvokeAI-installer-2.X.X-windows.zip
    Archive: C:\Linco\Downloads\InvokeAI-installer-2.X.X-windows.zip
      creating: InvokeAI-Installer\
     inflating: InvokeAI-Installer\install.bat
     inflating: InvokeAI-Installer\readme.txt
    ...
    ```

    After successful installation, you can delete the `InvokeAI-Installer`
    directory.

5. **Windows only** Please double-click on the file WinLongPathsEnabled.reg and
   accept the dialog box that asks you if you wish to modify your registry.
   This activates long filename support on your system and will prevent
   mysterious errors during installation.

6. If you are using a desktop GUI, double-click the installer file. It will be
   named `install.bat` on Windows systems and `install.sh` on Linux and
   Macintosh systems.

    On Windows systems you will probably get an "Untrusted Publisher" warning.
    Click on "More Info" and select "Run Anyway." You trust us, right?

7. Alternatively, from the command line, run the shell script or .bat file:

    ```cmd
    C:\Documents\Linco> cd InvokeAI-Installer
    C:\Documents\Linco\InvokeAI-Installer> install.bat
    ```

8. The script will ask you to choose where to install InvokeAI. Select a
   directory with at least 18G of free space for a full install. InvokeAI and
   all its support files will be installed into a new directory named
   `invokeai` in the location you specify.

    - The default is to install the `invokeai` directory in your home
      directory, usually `C:\Users\YourName\invokeai` on Windows systems,
      `/home/YourName/invokeai` on Linux systems, and `/Users/YourName/invokeai`
      on Macintoshes, where "YourName" is your login name.

    - The script uses tab autocompletion to suggest directory path completions.
      Type part of the path (e.g. "C:\Users") and press ++tab++ repeatedly
      to suggest completions.

9. Sit back and let the install script work. It will install the third-party
   libraries needed by InvokeAI, then download the current InvokeAI release and
   install it.

    Be aware that some of the library download and install steps take a long
    time. In particular, the `pytorch` package is quite large and often appears
    to get "stuck" at 99.9%. Have patience and the installation step will
    eventually resume. However, there are occasions when the library install
    does legitimately get stuck. If you have been waiting for more than ten
    minutes and nothing is happening, you can interrupt the script with ^C. You
    may restart it and it will pick up where it left off.

10. After installation completes, the installer will launch the configuration
    script, which will guide you through the first-time process of selecting
    one or more Stable Diffusion model weights files, downloading and
    configuring them. We provide a list of popular models that InvokeAI
    performs well with. However, you can add more weight files later on using
    the command-line client or the Web UI. See
    [Installing Models](050_INSTALLING_MODELS.md) for details.

    Note that the main Stable Diffusion weights file is protected by a license
    agreement that you must accept in order to use it. The script will list the
    steps you need to take to create an account on the official site that hosts
    the weights files, accept the agreement, and provide an access token that
    allows InvokeAI to legally download and install the weights files.

    If you have already downloaded the weights file(s) for another Stable
    Diffusion distribution, you may skip this step (by selecting "skip" when
    prompted) and configure InvokeAI to use the previously-downloaded files.
    The process for this is described in
    [Installing Models](050_INSTALLING_MODELS.md).

11. The script will now exit and you'll be ready to generate some images. Look
    for the directory `invokeai` installed in the location you chose at the
    beginning of the install session. Look for a shell script named `invoke.sh`
    (Linux/Mac) or `invoke.bat` (Windows). Launch the script by double-clicking
    it or typing its name at the command-line:

    ```cmd
    C:\Documents\Linco> cd invokeai
    C:\Documents\Linco\invokeai> invoke.bat
    ```

    - The `invoke.bat` (`invoke.sh`) script will give you the choice of
      starting (1) the command-line interface, or (2) the web GUI. If you start
      the latter, you can load the user interface by pointing your browser at
      http://localhost:9090.

    - The script also offers you a third option labeled "open the developer
      console". If you choose this option, you will be dropped into a
      command-line interface in which you can run python commands directly,
      access developer tools, and launch InvokeAI with customized options.

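    On Linux and Mac the equivalent is (a sketch, assuming you accepted the
    default install location in your home directory):

    ```bash
    cd ~/invokeai
    ./invoke.sh
    ```
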
12. You can launch InvokeAI with several different command-line arguments that
    customize its behavior. For example, you can change the location of the
    image output directory, or select your favorite sampler. See the
    [Command-Line Interface](../features/CLI.md) for a full list of the
    options.

    - To set defaults that will take effect every time you launch InvokeAI,
      use a text editor (e.g. Notepad) to edit the file
      `invokeai\invokeai.init`. It contains a variety of examples that you can
      follow to add and modify launch options; a minimal sketch is shown below.

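    For illustration, a minimal `invokeai.init` might look like this. It holds
    launch options, one per line; the specific switches shown are only
    examples borrowed from elsewhere in these docs, not a complete or
    authoritative list:

    ```bash
    # invokeai.init -- launch options applied every time InvokeAI starts
    # (illustrative sketch; see the Command-Line Interface docs for real options)
    --web
    --host 0.0.0.0
    --precision=float32
    ```
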
!!! warning "The `invokeai` directory contains the `invokeai` application, its
|
||||||
|
configuration files, the model weight files, and outputs of image generation.
|
||||||
|
Once InvokeAI is installed, do not move or remove this directory."
|
||||||
|
|
||||||
|
## Troubleshooting

### _Package dependency conflicts_

If you have previously installed InvokeAI or another Stable Diffusion package,
the installer may occasionally pick up outdated libraries and either the
installer or `invoke` will fail with complaints about library conflicts. You
can address this by entering the `invokeai` directory and running `update.sh`,
which will bring InvokeAI up to date with the latest libraries.

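For example (assuming the default install location in your home directory; on
Windows use `update.bat`):

```bash
cd ~/invokeai
./update.sh
```
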
### ldm from pypi

!!! warning

    Some users have tried to correct dependency problems by installing the
    `ldm` package from PyPi.org. Unfortunately this is an unrelated package
    that has nothing to do with the 'latent diffusion model' used by InvokeAI.
    Installing ldm will make matters worse. If you've installed ldm, uninstall
    it with `pip uninstall ldm`.

### Corrupted configuration file

Everything seems to install ok, but `invokeai` complains of a corrupted
configuration file and goes back into the configuration process (asking you to
download models, etc), but this doesn't fix the problem.

This issue is often caused by a misconfigured configuration directive in the
`invokeai\invokeai.init` initialization file that contains startup settings.
The easiest way to fix the problem is to move the file out of the way and
re-run `invokeai-configure`. Enter the developer's console (option 3 of the
launcher script) and run this command:

```cmd
invokeai-configure --root=.
```

Note the dot (.) after `--root`. It is part of the command.

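If you would rather keep the old file for reference than delete it, move it
aside first (a sketch, run from within the `invokeai` runtime directory; the
backup name is arbitrary):

```bash
mv invokeai.init invokeai.init.orig   # set the broken file aside
invokeai-configure --root=.           # then regenerate it
```
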
_If none of these maneuvers fixes the problem_ then please report the problem
to the [InvokeAI Issues](https://github.com/invoke-ai/InvokeAI/issues) section,
or visit our [Discord Server](https://discord.gg/ZmtBAhwWhy) for interactive
assistance.

### Other problems

If you run into problems during or after installation, the InvokeAI team is
available to help you. Either create an
[Issue](https://github.com/invoke-ai/InvokeAI/issues) at our GitHub site, or
make a request for help on the "bugs-and-support" channel of our
[Discord server](https://discord.gg/ZmtBAhwWhy). We are a 100% volunteer
organization, but typically somebody will be available to help you within 24
hours, and often much sooner.

## Updating to newer versions

This distribution is changing rapidly, and we add new features on a daily
basis. To update to the latest released version (recommended), run the
`update.sh` (Linux/Mac) or `update.bat` (Windows) scripts. This will fetch the
latest release and re-run the `invokeai-configure` script to download any
updated model files that may be needed. You can also use this to add
additional models that you did not select at installation time.

You can now close the developer console and run `invoke` as before. If you get
complaints about missing models, then you may need to do the additional step of
running `invokeai-configure`. This happens relatively infrequently. To do
this, simply open up the developer's console again and type
`invokeai-configure`.

You may also use the `update` script to install any selected version of
InvokeAI. From https://github.com/invoke-ai/InvokeAI, navigate to the zip file
link of the version you wish to install. You can find the zip links by going to
one of the release pages and looking for the **Assets** section at the
bottom. Alternatively, you can browse "branches" and "tags" at the top of the
big code directory on the InvokeAI welcome page. When you find the version you
want to install, go to the green "<> Code" button at the top, and copy the
"Download ZIP" link.

Now run `update.sh` (or `update.bat`) with the version number of the desired
InvokeAI version as its argument. For example, this will install the old 2.2.0
release:

```cmd
update.sh v2.2.0
```

You can get the list of version numbers by going to the
[releases page](https://github.com/invoke-ai/InvokeAI/releases) or by browsing
the [Tags](https://github.com/invoke-ai/InvokeAI/tags) list from the
Code section of the main GitHub page.


docs/installation/020_INSTALL_MANUAL.md

---
title: Installing Manually
---

<figure markdown>

# :fontawesome-brands-linux: Linux | :fontawesome-brands-apple: macOS | :fontawesome-brands-windows: Windows

</figure>

!!! warning "This is for advanced Users"

    **Python experience is mandatory.**

## Introduction

!!! tip

    As of InvokeAI v2.3.0, installation using the `conda` package manager is
    no longer supported. It will likely still work, but we are not testing
    this installation method.

On Windows systems, you are encouraged to install and use
[PowerShell](https://learn.microsoft.com/en-us/powershell/scripting/install/installing-powershell-on-windows?view=powershell-7.3),
which provides compatibility with Linux and Mac shells and nice features such
as command-line completion.

To install InvokeAI with virtual environments and the PIP package manager,
please follow these steps:

1. Please make sure you are using Python 3.9 or 3.10. The rest of the install
   procedure depends on this and will not work with other versions:

    ```bash
    python -V
    ```

2. Clone the [InvokeAI](https://github.com/invoke-ai/InvokeAI) source code from
   GitHub:

    ```bash
    git clone https://github.com/invoke-ai/InvokeAI.git
    ```

    This will create an `InvokeAI` folder where you will follow the rest of
    the steps.

3. Create a directory to contain your InvokeAI installation (known as the
   "runtime" or "root" directory). This is where your models, configs, and
   outputs will live by default. Please keep in mind the disk space
   requirements - you will need at least 18GB (as of this writing) for the
   models and the virtual environment. From now on we will refer to this
   directory as `INVOKEAI_ROOT`. This keeps the runtime directory separate
   from the source code and aids in updating.

    ```bash
    export INVOKEAI_ROOT=~/invokeai
    mkdir ${INVOKEAI_ROOT}
    ```

4. From within the InvokeAI top-level directory, create and activate a virtual
   environment named `.venv`, with its prompt displaying `invokeai`:

    ```bash
    python -m venv ${INVOKEAI_ROOT}/.venv \
        --prompt invokeai \
        --upgrade-deps \
        --copies
    source ${INVOKEAI_ROOT}/.venv/bin/activate
    ```

    !!! warning

        You **may** create your virtual environment anywhere on the
        filesystem. But if you choose a location that is *not* inside the
        `$INVOKEAI_ROOT` directory, then you must set the `INVOKEAI_ROOT`
        environment variable in your shell environment, for example by editing
        `~/.bashrc` or `~/.zshrc`, or by setting the Windows environment
        variable. Refer to your operating system / shell documentation for the
        correct way of doing so.

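    For example, to make the setting permanent in `bash` (a sketch; adjust the
    path to wherever your runtime directory actually lives):

    ```bash
    echo 'export INVOKEAI_ROOT=~/invokeai' >> ~/.bashrc
    source ~/.bashrc
    ```
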
5. Make sure that pip is installed in your virtual environment and up to date:

    ```bash
    python -m pip install --upgrade pip
    ```

6. Install the InvokeAI package:

    ```bash
    pip install --use-pep517 .
    ```

    Deactivate and reactivate your virtual environment so that the
    invokeai-specific commands become available in the environment:

    ```
    deactivate && source ${INVOKEAI_ROOT}/.venv/bin/activate
    ```

7. Set up the runtime directory.

    In this step you will initialize your runtime directory with the
    downloaded models, model config files, directory for textual inversion
    embeddings, and your outputs.

    ```bash
    invokeai-configure --root ${INVOKEAI_ROOT}
    ```

    The script `invokeai-configure` will interactively guide you through the
    process of downloading and installing the weights files needed for
    InvokeAI. Note that the main Stable Diffusion weights file is protected by
    a license agreement that you have to accept. The script will list the
    steps you need to take to create an account on the site that hosts the
    weights files, accept the agreement, and provide an access token that
    allows InvokeAI to legally download and install the weights files.

    If you get an error message about a module not being installed, check that
    the `invokeai` environment is active and, if not, repeat step 5.

    !!! tip

        If you have already downloaded the weights file(s) for another Stable
        Diffusion distribution, you may skip this step (by selecting "skip"
        when prompted) and configure InvokeAI to use the previously-downloaded
        files. The process for this is described
        [here](050_INSTALLING_MODELS.md).

8. Run the command-line or the web interface:

    Activate the environment (with `source .venv/bin/activate`), and then run
    the script `invokeai`. If you selected a non-default location for the
    runtime directory, please specify the path with the `--root_dir` option
    (abbreviated below as `--root`):

    !!! example ""

        !!! warning "Make sure that the virtual environment is activated, which should put `(invokeai)` in front of your prompt!"

        === "CLI"

            ```bash
            invokeai --root ~/invokeai
            ```

        === "local Webserver"

            ```bash
            invokeai --web --root ~/invokeai
            ```

        === "Public Webserver"

            ```bash
            invokeai --web --host 0.0.0.0 --root ~/invokeai
            ```

        If you choose to run the web interface, point your browser at
        http://localhost:9090 in order to load the GUI.

    !!! tip

        You can permanently set the location of the runtime directory by
        setting the environment variable `INVOKEAI_ROOT` to the path of the
        directory. As mentioned previously, this is **required** if your
        virtual environment is located outside of your runtime directory.

9. Render away!

    Browse the [features](../features/CLI.md) section to learn about all the
    things you can do with InvokeAI.

    Note that some GPUs are slow to warm up. In particular, when using an AMD
    card with the ROCm driver, you may have to wait for over a minute the
    first time you try to generate an image. Fortunately, after the warm-up
    period rendering will be fast.

10. Subsequently, to relaunch the script, activate the virtual environment and
    then launch the `invokeai` command. If you forget to activate the virtual
    environment, you will most likely receive a `command not found` error.

    !!! warning

        Do not move the runtime directory after installation. The virtual
        environment has absolute paths in it that get confused if the
        directory is moved.


docs/installation/040_INSTALL_DOCKER.md

---
title: Installing with Docker
---

# :fontawesome-brands-docker: Docker

!!! warning "For end users"

    We highly recommend installing InvokeAI locally using
    [these instructions](index.md).

!!! tip "For developers"

    For container-related development tasks or for enabling easy
    deployment to other environments (on-premises or cloud), follow these
    instructions.

    For general use, install locally to leverage your machine's GPU.

## Why containers?

They provide a flexible, reliable way to build and deploy InvokeAI. You'll also
use a Docker volume to store the largest model files and image outputs as a
first step in decoupling storage and compute. Future enhancements can do this
for other assets. See [Processes](https://12factor.net/processes) under the
Twelve-Factor App methodology for details on why running applications in such a
stateless fashion is important.

You can specify the target platform when building the image and running the
container. You'll also need to specify the InvokeAI requirements file that
matches the container's OS and the architecture it will run on.

Developers on Apple silicon (M1/M2): you
[can't access your GPU cores from Docker containers](https://github.com/pytorch/pytorch/issues/81224),
and performance is reduced compared with running it directly on macOS, but for
development purposes it's fine. Once you're done with development tasks on your
laptop you can build for the target platform and architecture and deploy to
another environment with NVIDIA GPUs on-premises or in the cloud.

## Installation in a Linux container (desktop)

### Prerequisites

#### Install [Docker](https://github.com/santisbon/guides#docker)

On the [Docker Desktop app](https://docs.docker.com/get-docker/), go to
Preferences, Resources, Advanced. Increase the CPUs and Memory to avoid this
[Issue](https://github.com/invoke-ai/InvokeAI/issues/342). You may need to
increase Swap and Disk image size too.

#### Get a Hugging Face token

Besides the Docker agent you will need an account on
[huggingface.co](https://huggingface.co/join).

After you have successfully registered your account, go to
[huggingface.co/settings/tokens](https://huggingface.co/settings/tokens),
create a token and copy it, since you will need it for the next step.

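If you want the token available to the scripts right away, you can export it
in your current shell (the value below is a placeholder, not a real token):

```bash
export HUGGING_FACE_HUB_TOKEN=hf_xxxxxxxxxxxxxxxx  # paste your real token here
```
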
### Setup

Set the fork you want to use and other variables.

!!! tip

    I prefer to save my env vars in the repository root in a `.env` (or
    `.envrc`) file so they are automatically re-applied when I come back.

The build and run scripts contain default values for almost everything,
besides the [Hugging Face Token](https://huggingface.co/settings/tokens) you
created in the last step.

Some suggestions of variables you may want to change besides the Token:

<figure markdown>

| Environment-Variable <img width="220" align="right"/> | Default value <img width="360" align="right"/> | Description |
| ----------------------------------------------------- | ---------------------------------------------- | ----------- |
| `HUGGING_FACE_HUB_TOKEN` | No default, but **required**! | This is the only **required** variable; without it you can't download the huggingface models |
| `REPOSITORY_NAME` | The basename of the repo folder | This name will be used as the container repository/image name |
| `VOLUMENAME` | `${REPOSITORY_NAME,,}_data` | Name of the Docker volume where model files will be stored |
| `ARCH` | arch of the build machine | Can be changed if you want to build the image for another arch |
| `CONTAINER_REGISTRY` | ghcr.io | Name of the container registry to use for the full tag |
| `CONTAINER_REPOSITORY` | `$(whoami)/${REPOSITORY_NAME}` | Name of the container repository |
| `CONTAINER_FLAVOR` | `cuda` | The flavor of the image to build; available options are `cuda`, `rocm` and `cpu`. If you choose `rocm` or `cpu`, the extra-index-url will be selected automatically, unless you set one yourself. |
| `CONTAINER_TAG` | `${INVOKEAI_BRANCH##*/}-${CONTAINER_FLAVOR}` | The container repository / tag which will be used |
| `INVOKE_DOCKERFILE` | `Dockerfile` | The Dockerfile which should be built; handy for development |
| `PIP_EXTRA_INDEX_URL` | | If you want to use a custom pip-extra-index-url |

</figure>

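For instance, a minimal `.env` file in the repository root might look like
this (a sketch; only the token is required, and its value is a placeholder):

```bash
# .env -- picked up from the repository root; only the token is required
HUGGING_FACE_HUB_TOKEN=hf_xxxxxxxxxxxxxxxx   # placeholder: your real token
CONTAINER_FLAVOR=cuda                        # or: rocm, cpu
```
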
#### Build the Image

I provided a build script, which is located next to the Dockerfile in
`docker/build.sh`. It can be executed from the repository root like this:

```bash
./docker/build.sh
```

The build script not only builds the container, but also creates the Docker
volume if it does not exist yet.

#### Run the Container

After the build process is done, you can run the container via the provided
`docker/run.sh` script:

```bash
./docker/run.sh
```

When used without arguments, the container will start the webserver and provide
you the link to open it. But if you want to use some other parameters you can
also do so.

!!! example "run script example"

    ```bash
    ./docker/run.sh "banana sushi" -Ak_lms -S42 -s10
    ```

    This would generate the legendary "banana sushi" with seed 42, the k_lms
    sampler and 10 steps.

Find out more about available CLI parameters at
[features/CLI.md](../../features/CLI/#arguments)

---

## Running the container on your GPU

If you have an Nvidia GPU, you can enable InvokeAI to run on the GPU by running
the container with an extra environment variable to enable GPU usage and have
the process run much faster:

```bash
GPU_FLAGS=all ./docker/run.sh
```

This passes `--gpus all` to docker and uses the GPU.

If you don't have a GPU (or your host is not yet set up to use it) you will see
a message like this:

`docker: Error response from daemon: could not select device driver "" with capabilities: [[gpu]].`

You can use the full set of GPU combinations documented here:

https://docs.docker.com/config/containers/resource_constraints/#gpu

For example, use `GPU_FLAGS=device=GPU-3a23c669-1f69-c64e-cf85-44e9b07e7a2a` to
choose a specific device identified by a UUID.

---

!!! warning "Deprecated"
|
||||||
|
|
||||||
|
From here on you will find the the previous Docker-Docs, which will still
|
||||||
|
provide some usefull informations.
|
||||||
|
|
||||||
|
## Usage (time to have fun)
|
||||||
|
|
||||||
|
### Startup
|
||||||
|
|
||||||
|
If you're on a **Linux container** the `invoke` script is **automatically
|
||||||
|
started** and the output dir set to the Docker volume you created earlier.
|
||||||
|
|
||||||
|
If you're **directly on macOS follow these startup instructions**. With the
|
||||||
|
Conda environment activated (`conda activate ldm`), run the interactive
|
||||||
|
interface that combines the functionality of the original scripts `txt2img` and
|
||||||
|
`img2img`: Use the more accurate but VRAM-intensive full precision math because
|
||||||
|
half-precision requires autocast and won't work. By default the images are saved
|
||||||
|
in `outputs/img-samples/`.
|
||||||
|
|
||||||
|
```Shell
|
||||||
|
python3 scripts/invoke.py --full_precision
|
||||||
|
```
|
||||||
|
|
||||||
|
You'll get the script's prompt. You can see available options or quit.
|
||||||
|
|
||||||
|
```Shell
|
||||||
|
invoke> -h
|
||||||
|
invoke> q
|
||||||
|
```
|
||||||
|
|
||||||
|
### Text to Image

For quick (but bad) image results test with 5 steps (default 50) and 1 sample
image. This will let you know that everything is set up correctly. Then
increase steps to 100 or more for good (but slower) results. The prompt can be
in quotes or not.

```Shell
invoke> The hulk fighting with sheldon cooper -s5 -n1
invoke> "woman closeup highly detailed" -s 150
# Reuse previous seed and apply face restoration
invoke> "woman closeup highly detailed" --steps 150 --seed -1 -G 0.75
```

You'll need to experiment to see if face restoration is making it better or
worse for your specific prompt.

If you're on a container the output is set to the Docker volume. You can copy
it wherever you want. You can download it from the Docker Desktop app, Volumes,
my-vol, data. Or you can copy it from your Mac terminal. Keep in mind
`docker cp` can't expand `*.png` so you'll need to specify the image file name.

On your host Mac (you can use the name of any container that mounted the
volume):

```Shell
docker cp dummy:/data/000001.928403745.png /Users/<your-user>/Pictures
```

### Image to Image

You can also do text-guided image-to-image translation. For example, turning a
sketch into a detailed drawing.

`strength` is a value between 0.0 and 1.0 that controls the amount of noise
that is added to the input image. Values that approach 1.0 allow for lots of
variations but will also produce images that are not semantically consistent
with the input. 0.0 preserves the image exactly, 1.0 replaces it completely.

Make sure your input image size dimensions are multiples of 64, e.g. 512x512.
Otherwise you'll get `Error: product of dimension sizes > 2**31'`. If you still
get the error,
[try a different size](https://support.apple.com/guide/preview/resize-rotate-or-flip-an-image-prvw2015/mac#:~:text=image's%20file%20size-,In%20the%20Preview%20app%20on%20your%20Mac%2C%20open%20the%20file,is%20shown%20at%20the%20bottom.)
like 512x256.

If you're on a Docker container, copy your input image into the Docker volume:

```Shell
docker cp /Users/<your-user>/Pictures/sketch-mountains-input.jpg dummy:/data/
```

Try it out generating an image (or more). The `invoke` script needs absolute
paths to find the image, so don't use `~`.

If you're on your Mac:

```Shell
invoke> "A fantasy landscape, trending on artstation" -I /Users/<your-user>/Pictures/sketch-mountains-input.jpg --strength 0.75 --steps 100 -n4
```

If you're on a Linux container on your Mac:

```Shell
invoke> "A fantasy landscape, trending on artstation" -I /data/sketch-mountains-input.jpg --strength 0.75 --steps 50 -n1
```

### Web Interface

You can use the `invoke` script with a graphical web interface. Start the web
server with:

```Shell
python3 scripts/invoke.py --full_precision --web
```

If it's running on your Mac, point your Mac web browser to
<http://127.0.0.1:9090>

Press Control-C at the command line to stop the web server.

### Notes

Some text you can add at the end of the prompt to make it very pretty:

```Shell
cinematic photo, highly detailed, cinematic lighting, ultra-detailed, ultrarealistic, photorealism, Octane Rendering, cyberpunk lights, Hyper Detail, 8K, HD, Unreal Engine, V-Ray, full hd, cyberpunk, abstract, 3d octane render + 4k UHD + immense detail + dramatic lighting + well lit + black, purple, blue, pink, cerulean, teal, metallic colours, + fine details, ultra photoreal, photographic, concept art, cinematic composition, rule of thirds, mysterious, eerie, photorealism, breathtaking detailed, painting art deco pattern, by hsiao, ron cheng, john james audubon, bizarre compositions, exquisite detail, extremely moody lighting, painted by greg rutkowski makoto shinkai takashi takeuchi studio ghibli, akihiko yoshida
```

The original scripts should work as well.

```Shell
python3 scripts/orig_scripts/txt2img.py --help
python3 scripts/orig_scripts/txt2img.py --ddim_steps 100 --n_iter 1 --n_samples 1 --plms --prompt "new born baby kitten. Hyper Detail, Octane Rendering, Unreal Engine, V-Ray"
python3 scripts/orig_scripts/txt2img.py --ddim_steps 5 --n_iter 1 --n_samples 1 --plms --prompt "ocean" # or --klms
```


docs/installation/050_INSTALLING_MODELS.md

---
title: Installing Models
---

# :octicons-paintbrush-16: Installing Models

## Model Weight Files

The model weight files ('\*.ckpt') are the Stable Diffusion "secret sauce".
They are the product of training the AI on millions of captioned images
gathered from multiple sources.

Originally there was only a single Stable Diffusion weights file, which many
people named `model.ckpt`. Now there are dozens or more that have been
"fine tuned" to provide particular styles, genres, or other features. InvokeAI
allows you to install and run multiple model weight files and switch between
them quickly in the command-line and web interfaces.

This manual will guide you through installing and configuring model weight
files.

## Base Models

InvokeAI comes with support for a good initial set of models listed in the
model configuration file `configs/models.yaml`. They are:

| Model                | Weight File                       | Description                                                | DOWNLOAD FROM                                                   |
| -------------------- | --------------------------------- | ---------------------------------------------------------- | --------------------------------------------------------------- |
| stable-diffusion-1.5 | v1-5-pruned-emaonly.ckpt          | Most recent version of the base Stable Diffusion model     | https://huggingface.co/runwayml/stable-diffusion-v1-5           |
| stable-diffusion-1.4 | sd-v1-4.ckpt                      | Previous version of the base Stable Diffusion model        | https://huggingface.co/CompVis/stable-diffusion-v-1-4-original  |
| inpainting-1.5       | sd-v1-5-inpainting.ckpt           | Stable Diffusion 1.5 model specialized for inpainting      | https://huggingface.co/runwayml/stable-diffusion-inpainting     |
| waifu-diffusion-1.3  | model-epoch09-float32.ckpt        | Stable Diffusion 1.4 trained to produce anime images       | https://huggingface.co/hakurei/waifu-diffusion-v1-3             |
| `<all models>`       | vae-ft-mse-840000-ema-pruned.ckpt | A fine-tune add-on file that improves face generation      | https://huggingface.co/stabilityai/sd-vae-ft-mse-original/       |

Note that these files are covered by an "Ethical AI" license which forbids
certain uses. You will need to create an account on the Hugging Face website
and accept the license terms before you can access the files.

The predefined configuration file for InvokeAI (located at
`configs/models.yaml`) provides entries for each of these weights files.
`stable-diffusion-1.5` is the default model used, and we strongly recommend
that you install this weights file if nothing else.

## Community-Contributed Models

There are too many to list here and more are being contributed every day.
Hugging Face maintains a
[fast-growing repository](https://huggingface.co/sd-concepts-library) of
fine-tune (".bin") models that can be imported into InvokeAI by passing the
`--embedding_path` option to the `invoke.py` command.

[This page](https://rentry.org/sdmodels) hosts a large list of official and
unofficial Stable Diffusion models and where they can be obtained.

## Installation

There are three ways to install weights files:

1. During InvokeAI installation, the `invokeai-configure` script can download
   them for you.

2. You can use the command-line interface (CLI) to import, configure and
   modify new model files.

3. You can download the files manually and add the appropriate entries to
   `models.yaml`.

### Installation via `invokeai-configure`

This is the most automatic way. Run `invokeai-configure` from the console. It
will ask you to select which models to download and lead you through the steps
of setting up a Hugging Face account if you haven't done so already.

To start, run `invokeai-configure` from within the InvokeAI directory:

!!! example ""

    ```text
    Loading Python libraries...

    ** INTRODUCTION **
    Welcome to InvokeAI. This script will help download the Stable Diffusion weight files
    and other large models that are needed for text to image generation. At any point you may interrupt
    this program and resume later.

    ** WEIGHT SELECTION **
    Would you like to download the Stable Diffusion model weights now? [y]

    Choose the weight file(s) you wish to download. Before downloading you
    will be given the option to view and change your selections.

    [1] stable-diffusion-1.5:
        The newest Stable Diffusion version 1.5 weight file (4.27 GB) (recommended)
        Download? [y]
    [2] inpainting-1.5:
        RunwayML SD 1.5 model optimized for inpainting (4.27 GB) (recommended)
        Download? [y]
    [3] stable-diffusion-1.4:
        The original Stable Diffusion version 1.4 weight file (4.27 GB)
        Download? [n] n
    [4] waifu-diffusion-1.3:
        Stable Diffusion 1.4 fine tuned on anime-styled images (4.27 GB)
        Download? [n] y
    [5] ft-mse-improved-autoencoder-840000:
        StabilityAI improved autoencoder fine-tuned for human faces (335 MB) (recommended)
        Download? [y] y
    The following weight files will be downloaded:
       [1] stable-diffusion-1.5*
       [2] inpainting-1.5
       [4] waifu-diffusion-1.3
       [5] ft-mse-improved-autoencoder-840000
    *default
    Ok to download? [y]
    ** LICENSE AGREEMENT FOR WEIGHT FILES **

    1. To download the Stable Diffusion weight files you need to read and accept the
       CreativeML Responsible AI license. If you have not already done so, please
       create an account using the "Sign Up" button:

       https://huggingface.co

       You will need to verify your email address as part of the HuggingFace
       registration process.

    2. After creating the account, login under your account and accept
       the license terms located here:

       https://huggingface.co/CompVis/stable-diffusion-v-1-4-original

    Press <enter> when you are ready to continue:
    ...
    ```

When the script is complete, you will find the downloaded weights files in
`models/ldm/stable-diffusion-v1` and a matching configuration file in
`configs/models.yaml`.

You can run the script again to add any models you didn't select the first
time. Note that as a safety measure the script will _never_ remove a
previously-installed weights file. You will have to do this manually.

### Installation via the CLI

You can install a new model, including any of the community-supported ones, via
the command-line client's `!import_model` command.

1. First download the desired model weights file and place it under
   `models/ldm/stable-diffusion-v1/`. You may rename the weights file to
   something more memorable if you wish. Record the path of the weights file
   (e.g. `models/ldm/stable-diffusion-v1/arabian-nights-1.0.ckpt`)

2. Launch the `invoke.py` CLI with `python scripts/invoke.py`.

3. At the `invoke>` command-line, enter the command
   `!import_model <path to model>`. For example:

    `invoke> !import_model models/ldm/stable-diffusion-v1/arabian-nights-1.0.ckpt`

    !!! tip "the CLI supports file path autocompletion"

        Type a bit of the path name and hit ++tab++ in order to get a choice
        of possible completions.

    !!! tip "on Windows, you can drag model files onto the command-line"

        Once you have typed in `!import_model `, you can drag the model `.ckpt`
        file onto the command-line to insert the model path. This way, you
        don't need to type it or copy/paste.

4. Follow the wizard's instructions to complete installation as shown in the
   example here:

    !!! example ""

        ```text
        invoke> !import_model models/ldm/stable-diffusion-v1/arabian-nights-1.0.ckpt
        >> Model import in process. Please enter the values needed to configure this model:

        Name for this model: arabian-nights
        Description of this model: Arabian Nights Fine Tune v1.0
        Configuration file for this model: configs/stable-diffusion/v1-inference.yaml
        Default image width: 512
        Default image height: 512
        >> New configuration:
        arabian-nights:
            config: configs/stable-diffusion/v1-inference.yaml
            description: Arabian Nights Fine Tune v1.0
            height: 512
            weights: models/ldm/stable-diffusion-v1/arabian-nights-1.0.ckpt
            width: 512
        OK to import [n]? y
        >> Caching model stable-diffusion-1.4 in system RAM
        >> Loading waifu-diffusion from models/ldm/stable-diffusion-v1/arabian-nights-1.0.ckpt
           | LatentDiffusion: Running in eps-prediction mode
           | DiffusionWrapper has 859.52 M params.
           | Making attention of type 'vanilla' with 512 in_channels
           | Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
           | Making attention of type 'vanilla' with 512 in_channels
           | Using faster float16 precision
        ```

If you've previously installed the fine-tune VAE file
`vae-ft-mse-840000-ema-pruned.ckpt`, the wizard will also ask you if you want
to add this VAE to the model.

The appropriate entry for this model will be added to `configs/models.yaml`
and it will be available to use in the CLI immediately.

The CLI has additional commands for switching among, viewing, editing, and
deleting the available models. These are described in
[Command Line Client](../features/CLI.md#model-selection-and-importation), but
the two most frequently-used are `!models` and `!switch <name of model>`. The
first prints a table of models that InvokeAI knows about and their load
status. The second will load the requested model and lets you switch back and
forth quickly among loaded models.

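For example, to see what is installed and then switch models (an illustrative
session; the model names are whatever `!models` reports on your system):

```text
invoke> !models
invoke> !switch waifu-diffusion-1.3
```
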
### Manually editing `configs/models.yaml`

If you are comfortable with a text editor then you may simply edit
`models.yaml` directly.

First you need to download the desired .ckpt file and place it in
`models/ldm/stable-diffusion-v1` as described in step #1 in the previous
section. Record the path to the weights file, e.g.
`models/ldm/stable-diffusion-v1/arabian-nights-1.0.ckpt`

Then using a **text** editor (e.g. the Windows Notepad application), open the
file `configs/models.yaml`, and add a new stanza that follows this model:

```yaml
arabian-nights-1.0:
  description: A great fine-tune in Arabian Nights style
  weights: ./models/ldm/stable-diffusion-v1/arabian-nights-1.0.ckpt
  config: ./configs/stable-diffusion/v1-inference.yaml
  width: 512
  height: 512
  vae: ./models/ldm/stable-diffusion-v1/vae-ft-mse-840000-ema-pruned.ckpt
  default: false
```

| name               | description |
| :----------------- | :---------- |
| arabian-nights-1.0 | This is the name of the model that you will refer to from within the CLI and the WebGUI when you need to load and use the model. |
| description        | Any description that you want to add to the model to remind you what it is. |
| weights            | Relative path to the .ckpt weights file for this model. |
| config             | This is the confusingly-named configuration file for the model itself. Use `./configs/stable-diffusion/v1-inference.yaml` unless the model happens to need a custom configuration, in which case the place you downloaded it from will tell you what to use instead. For example, the runwayML custom inpainting model requires the file `configs/stable-diffusion/v1-inpainting-inference.yaml`. This is already included in the InvokeAI distribution and is configured automatically for you by the `invokeai-configure` script. |
| vae                | If you want to add a VAE file to the model, then enter its path here. |
| width, height      | This is the width and height of the images used to train the model. Currently they are always 512 and 512. |

Save the `models.yaml` and relaunch InvokeAI. The new model should now be
available for your use.


docs/installation/060_INSTALL_PATCHMATCH.md

---
title: Installing PyPatchMatch
---

# :material-image-size-select-large: Installing PyPatchMatch

pypatchmatch is a Python module for inpainting images. It is not needed to run
InvokeAI, but it greatly improves the quality of inpainting and outpainting
and is recommended.

Unfortunately, it is a C++ optimized module and installation can be somewhat
challenging. This guide leads you through the steps.

## Windows

You're in luck! On Windows platforms PyPatchMatch will install automatically,
with no extra intervention.

## Macintosh

You need to have opencv installed so that pypatchmatch can be built:

```bash
brew install opencv
```

The next time you start `invoke` after successfully installing opencv,
pypatchmatch will be built.

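To confirm that the build succeeded, you can run the same import check
described for Linux below, as a one-liner (a sketch):

```bash
python -c "from patchmatch import patch_match"
```
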
## Linux

Prior to installing PyPatchMatch, you need to take the following steps:

### Debian Based Distros

1. Install the `build-essential` tools:

    ```sh
    sudo apt update
    sudo apt install build-essential
    ```

2. Install `opencv`:

    ```sh
    sudo apt install python3-opencv libopencv-dev
    ```

3. Activate the environment you use for invokeai, either with `conda` or with
   a virtual environment.

4. Install pypatchmatch:

    ```sh
    pip install pypatchmatch
    ```

5. Confirm that pypatchmatch is installed. At the command-line prompt enter
   `python`, and then at the `>>>` line type
   `from patchmatch import patch_match`: It should look like the following:

    ```py
    Python 3.9.5 (default, Nov 23 2021, 15:27:38)
    [GCC 9.3.0] on linux
    Type "help", "copyright", "credits" or "license" for more information.
    >>> from patchmatch import patch_match
    Compiling and loading c extensions from "/home/lstein/Projects/InvokeAI/.invokeai-env/src/pypatchmatch/patchmatch".
    rm -rf build/obj libpatchmatch.so
    mkdir: created directory 'build/obj'
    mkdir: created directory 'build/obj/csrc/'
    [dep] csrc/masked_image.cpp ...
    [dep] csrc/nnf.cpp ...
    [dep] csrc/inpaint.cpp ...
    [dep] csrc/pyinterface.cpp ...
    [CC] csrc/pyinterface.cpp ...
    [CC] csrc/inpaint.cpp ...
    [CC] csrc/nnf.cpp ...
    [CC] csrc/masked_image.cpp ...
    [link] libpatchmatch.so ...
    ```

### Arch Based Distros
|
||||||
|
|
||||||
|
1. Install the `base-devel` package:
|
||||||
|
|
||||||
|
```sh
|
||||||
|
sudo pacman -Syu
|
||||||
|
sudo pacman -S --needed base-devel
|
||||||
|
```
|
||||||
|
|
||||||
|
2. Install `opencv`:
|
||||||
|
|
||||||
|
```sh
|
||||||
|
sudo pacman -S opencv
|
||||||
|
```
|
||||||
|
|
||||||
|
or for CUDA support
|
||||||
|
|
||||||
|
```sh
|
||||||
|
sudo pacman -S opencv-cuda
|
||||||
|
```
|
||||||
|
|
||||||
|
3. Fix the naming of the `opencv` package configuration file:
|
||||||
|
|
||||||
|
```sh
|
||||||
|
cd /usr/lib/pkgconfig/
|
||||||
|
ln -sf opencv4.pc opencv.pc
|
||||||
|
```

[**Next, Follow Steps 3-5 from the Debian Section above**](#linux)

If you see no errors, then you're ready to go!

---
title: Installing xFormers
---

# :material-image-size-select-large: Installing xFormers

xFormers is a toolbox that integrates with the PyTorch and CUDA
libraries to provide accelerated performance and reduced memory
consumption for applications using the transformers machine learning
architecture. After installing xFormers, InvokeAI users who have
CUDA GPUs will see a noticeable decrease in GPU memory consumption and
an increase in speed.

xFormers can be installed into a working InvokeAI installation without
any code changes or other updates. This document explains how to
install xFormers.

## Pip Install

For both Windows and Linux, you can install `xformers` in just a
couple of steps from the command line.

If you are used to launching `invoke.sh` or `invoke.bat` to start
InvokeAI, then run the launcher and select the "developer's console"
to get to the command line. If you run `invoke.py` directly from the
command line, then just be sure to activate its virtual environment.
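
For example (assuming the default install location; adjust the path if your
`invokeai` directory lives elsewhere):

```sh
source ~/invokeai/.venv/bin/activate
```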

Then run the following three commands:

```sh
pip install xformers==0.0.16rc425
pip install triton
python -m xformers.info
```

The first command installs `xformers`, the second installs the
`triton` training accelerator, and the third prints out the `xformers`
installation status. If all goes well, you'll see a report like the
following:

```sh
xFormers 0.0.16rc425
memory_efficient_attention.cutlassF: available
memory_efficient_attention.cutlassB: available
memory_efficient_attention.flshattF: available
memory_efficient_attention.flshattB: available
memory_efficient_attention.smallkF: available
memory_efficient_attention.smallkB: available
memory_efficient_attention.tritonflashattF: available
memory_efficient_attention.tritonflashattB: available
swiglu.fused.p.cpp: available
is_triton_available: True
is_functorch_available: False
pytorch.version: 1.13.1+cu117
pytorch.cuda: available
gpu.compute_capability: 8.6
gpu.name: NVIDIA RTX A2000 12GB
build.info: available
build.cuda_version: 1107
build.python_version: 3.10.9
build.torch_version: 1.13.1+cu117
build.env.TORCH_CUDA_ARCH_LIST: 5.0+PTX 6.0 6.1 7.0 7.5 8.0 8.6
build.env.XFORMERS_BUILD_TYPE: Release
build.env.XFORMERS_ENABLE_DEBUG_ASSERTIONS: None
build.env.NVCC_FLAGS: None
build.env.XFORMERS_PACKAGE_FROM: wheel-v0.0.16rc425
source.privacy: open source
```

## Source Builds

`xformers` is currently under active development and at some point you
may wish to build it from source to get the latest features and
bugfixes.

### Source Build on Linux

Note that xFormers only works with true NVIDIA GPUs and will not work
properly with the ROCm driver for AMD acceleration.

xFormers is not currently available as a pip binary wheel and must be
installed from source. These instructions were written for a system
running Ubuntu 22.04, but other Linux distributions should be able to
adapt this recipe.

#### 1. Install CUDA Toolkit 11.7

You will need the CUDA developer's toolkit in order to compile and
install xFormers. **Do not try to install Ubuntu's nvidia-cuda-toolkit
package.** It is out of date and will cause conflicts between the NVIDIA
driver and binaries. Instead install the CUDA Toolkit package provided
by NVIDIA itself. Go to [CUDA Toolkit 11.7
Downloads](https://developer.nvidia.com/cuda-11-7-0-download-archive)
and use the target selection wizard to choose your platform and Linux
distribution. Select an installer type of "runfile (local)" at the
last step.

This will provide you with a recipe for downloading and running an
install shell script that will install the toolkit and drivers. For
example, the install script recipe for Ubuntu 22.04 running on an
x86_64 system is:

```sh
wget https://developer.download.nvidia.com/compute/cuda/11.7.0/local_installers/cuda_11.7.0_515.43.04_linux.run
sudo sh cuda_11.7.0_515.43.04_linux.run
```

Rather than cut-and-paste this example, we recommend that you walk
through the toolkit wizard in order to get the most up-to-date
installer for your system.

#### 2. Confirm/Install PyTorch 1.13 with CUDA 11.7 support

If you are using InvokeAI 2.3 or higher, these will already be
installed. If not, you can check whether you have the needed libraries
using a quick command. Activate the invokeai virtual environment,
either by entering the "developer's console", or manually with a
command similar to `source ~/invokeai/.venv/bin/activate` (depending
on where your `invokeai` directory is).

Then run the command:

```sh
python -c 'import torch; print(torch.__version__)'
```
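
You can also confirm that torch can see your GPU (an extra sanity check, not
part of the original recipe):

```sh
python -c 'import torch; print(torch.cuda.is_available())'
```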

If it prints __1.13.1+cu117__ you're good. If not, you can install the
most up-to-date libraries with this command:

```sh
pip install --upgrade --force-reinstall torch torchvision
```

#### 3. Install the triton module

This module isn't necessary for xFormers image inference optimization,
but avoids a startup warning.

```sh
pip install triton
```

#### 4. Install source code build prerequisites

To build xFormers from source, you will need the `build-essential`
package. If you don't have it installed already, run:

```sh
sudo apt install build-essential
```

#### 5. Build xFormers

There is no pip wheel package for xFormers at this time (January
2023). Although there is a conda package, InvokeAI no longer
officially supports conda installations and you're on your own if you
wish to try this route.

Following the recipe provided at the [xFormers GitHub
page](https://github.com/facebookresearch/xformers), and with the
InvokeAI virtual environment active (see step 2), run the following
commands:

```sh
pip install ninja
export TORCH_CUDA_ARCH_LIST="6.0;6.1;6.2;7.0;7.2;7.5;8.0;8.6"
pip install -v -U git+https://github.com/facebookresearch/xformers.git@main#egg=xformers
```

The TORCH_CUDA_ARCH_LIST is a list of GPU architectures to compile
xFormers support for. You can speed up compilation by selecting only
the architecture specific to your system. You'll find the list of
GPUs and their architectures at NVIDIA's [GPU Compute
Capability](https://developer.nvidia.com/cuda-gpus) table.
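
For example, for the NVIDIA RTX A2000 shown in the report above (compute
capability 8.6), you could limit the build to just that architecture:

```sh
export TORCH_CUDA_ARCH_LIST="8.6"
```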

If the compile and install completes successfully, you can check that
xFormers is installed with this command:

```sh
python -m xformers.info
```

If successful, the top of the listing should indicate "available" for
each of the `memory_efficient_attention` modules, as shown here:

```sh
memory_efficient_attention.cutlassF: available
memory_efficient_attention.cutlassB: available
memory_efficient_attention.flshattF: available
memory_efficient_attention.flshattB: available
memory_efficient_attention.smallkF: available
memory_efficient_attention.smallkB: available
memory_efficient_attention.tritonflashattF: available
memory_efficient_attention.tritonflashattB: available
[...]
```

You can now launch InvokeAI and enjoy the benefits of xFormers.

### Windows

To come

---
(c) Copyright 2023 Lincoln Stein and the InvokeAI Development Team

---
title: build binary installers
---

# :simple-buildkite: How to build "binary" installers (InvokeAI-mac/windows/linux_on_*.zip)

## 1. Ensure `installers/requirements.in` is correct

Make sure it is correct and up to date on the branch to be installed.

## <a name="step-2"></a> 2. Run `pip-compile` on each platform.

On each target platform, in the branch that is to be installed, and
inside the InvokeAI git root folder, run the following commands:

```commandline
conda activate invokeai # or however you activate python
pip install pip-tools
pip-compile --allow-unsafe --generate-hashes --output-file=binary_installer/<reqsfile>.txt binary_installer/requirements.in
```

where `<reqsfile>.txt` is whichever of

```commandline
py3.10-darwin-arm64-mps-reqs.txt
py3.10-darwin-x86_64-reqs.txt
py3.10-linux-x86_64-cuda-reqs.txt
py3.10-windows-x86_64-cuda-reqs.txt
```

matches the current OS and architecture.

> There is no way to cross-compile these. They must be done on a system matching the target OS and arch.
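
To confirm which requirements file matches the machine you are on, a quick
check using only the Python standard library:

```commandline
python -c "import platform; print(platform.system(), platform.machine())"
```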

## <a name="step-3"></a> 3. Set github repository and branch

Once all reqs files have been collected and committed **to the branch
to be installed**, edit `binary_installer/install.sh.in` and `binary_installer/install.bat.in` so that `RELEASE_URL`
and `RELEASE_SOURCEBALL` point to the github repo and branch that is
to be installed.

For example, to install the `main` branch of `InvokeAI`, they should be
set as follows:

`install.sh.in`:

```commandline
RELEASE_URL=https://github.com/invoke-ai/InvokeAI
RELEASE_SOURCEBALL=/archive/refs/heads/main.tar.gz
```

`install.bat.in`:

```commandline
set RELEASE_URL=https://github.com/invoke-ai/InvokeAI
set RELEASE_SOURCEBALL=/archive/refs/heads/main.tar.gz
```

Or, to install the `damians-cool-feature` branch of `damian0815`, set them
as follows:

`install.sh.in`:

```commandline
RELEASE_URL=https://github.com/damian0815/InvokeAI
RELEASE_SOURCEBALL=/archive/refs/heads/damians-cool-feature.tar.gz
```

`install.bat.in`:

```commandline
set RELEASE_URL=https://github.com/damian0815/InvokeAI
set RELEASE_SOURCEBALL=/archive/refs/heads/damians-cool-feature.tar.gz
```

The branch and repo specified here **must** contain the correct reqs
files. The installer zip files **do not** contain requirements files;
they are pulled from the specified branch during the installation
process.

## 4. Create zip files.

cd into the `installers/` folder and run
`./create_installers.sh`. This will create
`InvokeAI-mac_on_<branch>.zip`,
`InvokeAI-windows_on_<branch>.zip` and
`InvokeAI-linux_on_<branch>.zip`. These files can be distributed to end users.
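
In other words (assuming you are starting from the repository root):

```commandline
cd installers
./create_installers.sh
```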

These zips will continue to function as installers for all future
pushes to those branches, as long as necessary changes to
`requirements.in` are propagated in a timely manner to the
`py3.10-*-reqs.txt` files using pip-compile as outlined in [step
2](#step-2).

To actually install, users should unzip the appropriate zip file into an empty
folder and run `install.sh` on macOS/Linux or `install.bat` on
Windows.

---
title: Installing Models
---

# :octicons-paintbrush-16: Installing Models

## Model Weight Files

The model weight files ('\*.ckpt') are the Stable Diffusion "secret sauce". They
are the product of training the AI on millions of captioned images gathered from
multiple sources.

Originally there was only a single Stable Diffusion weights file, which many
people named `model.ckpt`. Now there are dozens or more that have been "fine
tuned" to provide particular styles, genres, or other features. InvokeAI allows
you to install and run multiple model weight files and switch between them
quickly in the command-line and web interfaces.

This manual will guide you through installing and configuring model weight
files.

## Base Models

InvokeAI comes with support for a good initial set of models listed in the model
configuration file `configs/models.yaml`. They are:

| Model                | Weight File                       | Description                                           | DOWNLOAD FROM                                                  |
| -------------------- | --------------------------------- | ----------------------------------------------------- | -------------------------------------------------------------- |
| stable-diffusion-1.5 | v1-5-pruned-emaonly.ckpt          | Most recent version of base Stable Diffusion model    | https://huggingface.co/runwayml/stable-diffusion-v1-5          |
| stable-diffusion-1.4 | sd-v1-4.ckpt                      | Previous version of base Stable Diffusion model       | https://huggingface.co/CompVis/stable-diffusion-v-1-4-original |
| inpainting-1.5       | sd-v1-5-inpainting.ckpt           | Stable Diffusion 1.5 model specialized for inpainting | https://huggingface.co/runwayml/stable-diffusion-inpainting    |
| waifu-diffusion-1.3  | model-epoch09-float32.ckpt        | Stable Diffusion 1.4 trained to produce anime images  | https://huggingface.co/hakurei/waifu-diffusion-v1-3            |
| `<all models>`       | vae-ft-mse-840000-ema-pruned.ckpt | A fine-tune add-on file that improves face generation | https://huggingface.co/stabilityai/sd-vae-ft-mse-original/     |

Note that these files are covered by an "Ethical AI" license which forbids
certain uses. You will need to create an account on the Hugging Face website and
accept the license terms before you can access the files.

The predefined configuration file for InvokeAI (located at
`configs/models.yaml`) provides entries for each of these weights files.
`stable-diffusion-1.5` is the default model used, and we strongly recommend that
you install this weights file if nothing else.

## Community-Contributed Models

There are too many to list here and more are being contributed every day.
Hugging Face maintains a
[fast-growing repository](https://huggingface.co/sd-concepts-library) of
fine-tune (".bin") models that can be imported into InvokeAI by passing the
`--embedding_path` option to the `invoke.py` command.

[This page](https://rentry.org/sdmodels) hosts a large list of official and
unofficial Stable Diffusion models and where they can be obtained.

## Installation

There are three ways to install weights files:

1. During InvokeAI installation, the `preload_models.py` script can download
   them for you.

2. You can use the command-line interface (CLI) to import, configure and modify
   new model files.

3. You can download the files manually and add the appropriate entries to
   `models.yaml`.

### Installation via `preload_models.py`

This is the most automatic way. Run `scripts/preload_models.py` from the
console. It will ask you to select which models to download and lead you through
the steps of setting up a Hugging Face account if you haven't done so already.

To start, run `python scripts/preload_models.py` from within the InvokeAI
directory:

!!! example ""

    ```text
    Loading Python libraries...

    ** INTRODUCTION **
    Welcome to InvokeAI. This script will help download the Stable Diffusion weight files
    and other large models that are needed for text to image generation. At any point you may interrupt
    this program and resume later.

    ** WEIGHT SELECTION **
    Would you like to download the Stable Diffusion model weights now? [y]

    Choose the weight file(s) you wish to download. Before downloading you
    will be given the option to view and change your selections.

    [1] stable-diffusion-1.5:
        The newest Stable Diffusion version 1.5 weight file (4.27 GB) (recommended)
        Download? [y]
    [2] inpainting-1.5:
        RunwayML SD 1.5 model optimized for inpainting (4.27 GB) (recommended)
        Download? [y]
    [3] stable-diffusion-1.4:
        The original Stable Diffusion version 1.4 weight file (4.27 GB)
        Download? [n] n
    [4] waifu-diffusion-1.3:
        Stable Diffusion 1.4 fine tuned on anime-styled images (4.27 GB)
        Download? [n] y
    [5] ft-mse-improved-autoencoder-840000:
        StabilityAI improved autoencoder fine-tuned for human faces (335 MB) (recommended)
        Download? [y] y
    The following weight files will be downloaded:
       [1] stable-diffusion-1.5*
       [2] inpainting-1.5
       [4] waifu-diffusion-1.3
       [5] ft-mse-improved-autoencoder-840000
    *default
    Ok to download? [y]
    ** LICENSE AGREEMENT FOR WEIGHT FILES **

    1. To download the Stable Diffusion weight files you need to read and accept the
       CreativeML Responsible AI license. If you have not already done so, please
       create an account using the "Sign Up" button:

       https://huggingface.co

       You will need to verify your email address as part of the HuggingFace
       registration process.

    2. After creating the account, login under your account and accept
       the license terms located here:

       https://huggingface.co/CompVis/stable-diffusion-v-1-4-original

    Press <enter> when you are ready to continue:
    ...
    ```

When the script is complete, you will find the downloaded weights files in
`models/ldm/stable-diffusion-v1` and a matching configuration file in
`configs/models.yaml`.

You can run the script again to add any models you didn't select the first time.
Note that as a safety measure the script will _never_ remove a
previously-installed weights file. You will have to do this manually.
### Installation via the CLI

You can install a new model, including any of the community-supported ones, via
the command-line client's `!import_model` command.

1. First download the desired model weights file and place it under
   `models/ldm/stable-diffusion-v1/`. You may rename the weights file to
   something more memorable if you wish. Record the path of the weights file
   (e.g. `models/ldm/stable-diffusion-v1/arabian-nights-1.0.ckpt`)

2. Launch the `invoke.py` CLI with `python scripts/invoke.py`.

3. At the `invoke>` command-line, enter the command
   `!import_model <path to model>`. For example:

   `invoke> !import_model models/ldm/stable-diffusion-v1/arabian-nights-1.0.ckpt`

   !!! tip "the CLI supports file path autocompletion"

       Type a bit of the path name and hit ++tab++ in order to get a choice of
       possible completions.

4. Follow the wizard's instructions to complete installation as shown in the
   example here:

   !!! example ""

       ```text
       invoke> !import_model models/ldm/stable-diffusion-v1/arabian-nights-1.0.ckpt
       >> Model import in process. Please enter the values needed to configure this model:

       Name for this model: arabian-nights
       Description of this model: Arabian Nights Fine Tune v1.0
       Configuration file for this model: configs/stable-diffusion/v1-inference.yaml
       Default image width: 512
       Default image height: 512
       >> New configuration:
       arabian-nights:
         config: configs/stable-diffusion/v1-inference.yaml
         description: Arabian Nights Fine Tune v1.0
         height: 512
         weights: models/ldm/stable-diffusion-v1/arabian-nights-1.0.ckpt
         width: 512
       OK to import [n]? y
       >> Caching model stable-diffusion-1.4 in system RAM
       >> Loading waifu-diffusion from models/ldm/stable-diffusion-v1/arabian-nights-1.0.ckpt
          | LatentDiffusion: Running in eps-prediction mode
          | DiffusionWrapper has 859.52 M params.
          | Making attention of type 'vanilla' with 512 in_channels
          | Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
          | Making attention of type 'vanilla' with 512 in_channels
          | Using faster float16 precision
       ```

If you've previously installed the fine-tune VAE file
`vae-ft-mse-840000-ema-pruned.ckpt`, the wizard will also ask you if you want to
add this VAE to the model.

The appropriate entry for this model will be added to `configs/models.yaml` and
it will be available to use in the CLI immediately.

The CLI has additional commands for switching among, viewing, editing, and
deleting the available models. These are described in
[Command Line Client](../features/CLI.md#model-selection-and-importation), but
the two most frequently-used are `!models` and `!switch <name of model>`. The
first prints a table of models that InvokeAI knows about and their load status.
The second will load the requested model and lets you switch back and forth
quickly among loaded models.
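
For example (using the model imported above):

```text
invoke> !models
invoke> !switch arabian-nights
```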

### Manual editing of `configs/models.yaml`

If you are comfortable with a text editor then you may simply edit `models.yaml`
directly.

First you need to download the desired .ckpt file and place it in
`models/ldm/stable-diffusion-v1` as described in step #1 in the previous
section. Record the path to the weights file, e.g.
`models/ldm/stable-diffusion-v1/arabian-nights-1.0.ckpt`

Then using a **text** editor (e.g. the Windows Notepad application), open the
file `configs/models.yaml`, and add a new stanza that follows this example:

```yaml
arabian-nights-1.0:
  description: A great fine-tune in Arabian Nights style
  weights: ./models/ldm/stable-diffusion-v1/arabian-nights-1.0.ckpt
  config: ./configs/stable-diffusion/v1-inference.yaml
  width: 512
  height: 512
  vae: ./models/ldm/stable-diffusion-v1/vae-ft-mse-840000-ema-pruned.ckpt
  default: false
```

| name               | description |
| :----------------- | :---------- |
| arabian-nights-1.0 | This is the name of the model that you will refer to from within the CLI and the WebGUI when you need to load and use the model. |
| description        | Any description that you want to add to the model to remind you what it is. |
| weights            | Relative path to the .ckpt weights file for this model. |
| config             | This is the confusingly-named configuration file for the model itself. Use `./configs/stable-diffusion/v1-inference.yaml` unless the model happens to need a custom configuration, in which case the place you downloaded it from will tell you what to use instead. For example, the runwayML custom inpainting model requires the file `configs/stable-diffusion/v1-inpainting-inference.yaml`. This is already included in the InvokeAI distribution and is configured automatically for you by the `preload_models.py` script. |
| vae                | If you want to add a VAE file to the model, then enter its path here. |
| width, height      | These are the width and height of the images used to train the model. Currently they are always 512 and 512. |

Save the `models.yaml` and relaunch InvokeAI. The new model should now be
available for your use.

---
title: Docker
---

# :fontawesome-brands-docker: Docker

!!! warning "For end users"

    We highly recommend installing InvokeAI locally using
    [these instructions](index.md).

!!! tip "For developers"

    For container-related development tasks or for enabling easy
    deployment to other environments (on-premises or cloud), follow these
    instructions.

    For general use, install locally to leverage your machine's GPU.

## Why containers?

They provide a flexible, reliable way to build and deploy InvokeAI. You'll also
use a Docker volume to store the largest model files and image outputs as a
first step in decoupling storage and compute. Future enhancements can do this
for other assets. See [Processes](https://12factor.net/processes) under the
Twelve-Factor App methodology for details on why running applications in such a
stateless fashion is important.

You can specify the target platform when building the image and running the
container. You'll also need to specify the InvokeAI requirements file that
matches the container's OS and the architecture it will run on.

Developers on Apple silicon (M1/M2): you
[can't access your GPU cores from Docker containers](https://github.com/pytorch/pytorch/issues/81224)
and performance is reduced compared with running it directly on macOS, but for
development purposes it's fine. Once you're done with development tasks on your
laptop you can build for the target platform and architecture and deploy to
another environment with NVIDIA GPUs on-premises or in the cloud.

## Installation on a Linux container

### Prerequisites

#### Install [Docker](https://github.com/santisbon/guides#docker)

On the [Docker Desktop app](https://docs.docker.com/get-docker/), go to
Preferences, Resources, Advanced. Increase the CPUs and Memory to avoid this
[Issue](https://github.com/invoke-ai/InvokeAI/issues/342). You may need to
increase Swap and Disk image size too.

#### Get a Huggingface-Token

Besides the Docker Agent you will need an account on
[huggingface.co](https://huggingface.co/join).

After you have successfully registered your account, go to
[huggingface.co/settings/tokens](https://huggingface.co/settings/tokens), create
a token and copy it, since you will need it for the next step.
### Setup

Set the fork you want to use and other variables.

!!! tip

    I prefer to save my env vars
    in the repository root in a `.env` (or `.envrc`) file to automatically re-apply
    them when I come back.

The build and run scripts contain default values for almost everything,
besides the [Hugging Face Token](https://huggingface.co/settings/tokens) you
created in the last step.

Some suggestions of variables you may want to change besides the Token:

| Environment-Variable      | Default value                 | Description                                                                      |
| ------------------------- | ----------------------------- | -------------------------------------------------------------------------------- |
| `HUGGINGFACE_TOKEN`       | No default, but **required**! | This is the only **required** variable; without it you can't get the checkpoint |
| `ARCH`                    | x86_64                        | Change this if you are using an ARM-based CPU                                    |
| `INVOKEAI_TAG`            | invokeai-x86_64               | The container repository / tag which will be used                                |
| `INVOKEAI_CONDA_ENV_FILE` | environment-lin-cuda.yml      | Needed since environment.yml wouldn't work with aarch64                          |
| `INVOKEAI_GIT`            | invoke-ai/InvokeAI            | The repository to use                                                            |
| `INVOKEAI_BRANCH`         | main                          | The branch to checkout                                                           |
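
Following the tip above, a minimal `.env` sketch (the values shown are
placeholders; only `HUGGINGFACE_TOKEN` is required, and you will still need to
export or source these variables so the scripts can see them):

```bash
# .env -- example only; replace the token with your own
HUGGINGFACE_TOKEN=hf_xxxxxxxxxxxxxxxx
INVOKEAI_GIT=invoke-ai/InvokeAI
INVOKEAI_BRANCH=main
```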

#### Build the Image

I provided a build script, which is located in `docker-build/build.sh` but still
needs to be executed from the repository root.

```bash
./docker-build/build.sh
```

The build script not only builds the container, but also creates the Docker
volume if it does not exist yet; if the volume is empty, it will also download
the models.

#### Run the Container

After the build process is done, you can run the container via the provided
`docker-build/run.sh` script

```bash
./docker-build/run.sh
```

When used without arguments, the container will start the webserver and provide
you with the link to open it. But if you want to use some other parameters you
can also do so.

!!! example ""

    ```bash
    ./docker-build/run.sh --from_file tests/validate_pr_prompt.txt
    ```

The output folder is located on the volume which is also used to store the model.

Find out more about available CLI-Parameters at [features/CLI.md](../features/CLI.md/#arguments)

---

!!! warning "Deprecated"

    From here on you will find the previous Docker docs, which may still
    provide some useful information.
## Usage (time to have fun)

### Startup

If you're on a **Linux container** the `invoke` script is **automatically
started** and the output dir set to the Docker volume you created earlier.

If you're **directly on macOS follow these startup instructions**.
With the Conda environment activated (`conda activate ldm`), run the interactive
interface that combines the functionality of the original scripts `txt2img` and
`img2img`. Use the more accurate but VRAM-intensive full precision math, because
half-precision requires autocast and won't work.
By default the images are saved in `outputs/img-samples/`.

```Shell
python3 scripts/invoke.py --full_precision
```

You'll get the script's prompt. You can see available options or quit.

```Shell
invoke> -h
invoke> q
```

### Text to Image

For quick (but bad) image results test with 5 steps (default 50) and 1 sample
image. This will let you know that everything is set up correctly.
Then increase steps to 100 or more for good (but slower) results.
The prompt can be in quotes or not.

```Shell
invoke> The hulk fighting with sheldon cooper -s5 -n1
invoke> "woman closeup highly detailed" -s 150
# Reuse previous seed and apply face restoration
invoke> "woman closeup highly detailed" --steps 150 --seed -1 -G 0.75
```

You'll need to experiment to see if face restoration is making it better or
worse for your specific prompt.

If you're on a container the output is set to the Docker volume. You can copy it
wherever you want.
You can download it from the Docker Desktop app, Volumes, my-vol, data.
Or you can copy it from your Mac terminal. Keep in mind `docker cp` can't expand
`*.png` so you'll need to specify the image file name.

On your host Mac (you can use the name of any container that mounted the
volume):

```Shell
docker cp dummy:/data/000001.928403745.png /Users/<your-user>/Pictures
```

### Image to Image

You can also do text-guided image-to-image translation. For example, turning a
sketch into a detailed drawing.

`strength` is a value between 0.0 and 1.0 that controls the amount of noise that
is added to the input image. Values that approach 1.0 allow for lots of
variations but will also produce images that are not semantically consistent
with the input. 0.0 preserves the input image exactly, 1.0 replaces it completely.

Make sure your input image size dimensions are multiples of 64 e.g. 512x512.
Otherwise you'll get `Error: product of dimension sizes > 2**31`. If you still
get the error
[try a different size](https://support.apple.com/guide/preview/resize-rotate-or-flip-an-image-prvw2015/mac#:~:text=image's%20file%20size-,In%20the%20Preview%20app%20on%20your%20Mac%2C%20open%20the%20file,is%20shown%20at%20the%20bottom.)
like 512x256.

If you're on a Docker container, copy your input image into the Docker volume

```Shell
docker cp /Users/<your-user>/Pictures/sketch-mountains-input.jpg dummy:/data/
```

Try it out generating an image (or more). The `invoke` script needs absolute
paths to find the image so don't use `~`.

If you're on your Mac

```Shell
invoke> "A fantasy landscape, trending on artstation" -I /Users/<your-user>/Pictures/sketch-mountains-input.jpg --strength 0.75 --steps 100 -n4
```

If you're on a Linux container on your Mac

```Shell
invoke> "A fantasy landscape, trending on artstation" -I /data/sketch-mountains-input.jpg --strength 0.75 --steps 50 -n1
```

### Web Interface

You can use the `invoke` script with a graphical web interface. Start the web
server with:

```Shell
python3 scripts/invoke.py --full_precision --web
```

If it's running on your Mac point your Mac web browser to
<http://127.0.0.1:9090>

Press Control-C at the command line to stop the web server.

### Notes

Some text you can add at the end of the prompt to make it very pretty:

```Shell
cinematic photo, highly detailed, cinematic lighting, ultra-detailed, ultrarealistic, photorealism, Octane Rendering, cyberpunk lights, Hyper Detail, 8K, HD, Unreal Engine, V-Ray, full hd, cyberpunk, abstract, 3d octane render + 4k UHD + immense detail + dramatic lighting + well lit + black, purple, blue, pink, cerulean, teal, metallic colours, + fine details, ultra photoreal, photographic, concept art, cinematic composition, rule of thirds, mysterious, eerie, photorealism, breathtaking detailed, painting art deco pattern, by hsiao, ron cheng, john james audubon, bizarre compositions, exquisite detail, extremely moody lighting, painted by greg rutkowski makoto shinkai takashi takeuchi studio ghibli, akihiko yoshida
```

The original scripts should work as well.

```Shell
python3 scripts/orig_scripts/txt2img.py --help
python3 scripts/orig_scripts/txt2img.py --ddim_steps 100 --n_iter 1 --n_samples 1 --plms --prompt "new born baby kitten. Hyper Detail, Octane Rendering, Unreal Engine, V-Ray"
python3 scripts/orig_scripts/txt2img.py --ddim_steps 5 --n_iter 1 --n_samples 1 --plms --prompt "ocean" # or --klms
```

---
title: InvokeAI Installer
---

The InvokeAI installer is a shell script that will install InvokeAI onto a stock
computer running recent versions of Linux, MacOSX or Windows. It will leave you
with a stable version of InvokeAI. When a new version of
InvokeAI is released, you will download and reinstall the new version.

If you wish to tinker with unreleased versions of InvokeAI that introduce
potentially unstable new features, you should consider using the
[source installer](INSTALL_SOURCE.md) or one of the
[manual install](INSTALL_MANUAL.md) methods.

**Important Caveats**

- This script does not support AMD GPUs. For Linux AMD support,
  please use the manual or source code installer methods.

- This script has difficulty on some Macintosh machines
  that have previously been used for Python development due to
  conflicting development tools versions. Mac developers may wish
  to try the source code installer or one of the manual methods instead.

!!! todo

    Before you begin, make sure that you meet
    the [hardware requirements](/#hardware-requirements) and have the
    appropriate GPU drivers installed. In particular, if you are a Linux user with
    an AMD GPU installed, you may need to install the
    [ROCm driver](https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html).

Installation requires roughly 18G of free disk space to load the libraries and
recommended model weights files.

## Steps to Install

1. Download the
   [latest release](https://github.com/invoke-ai/InvokeAI/releases/latest) of
   InvokeAI's installer for your platform

2. Place the downloaded package someplace where you have plenty of HDD space,
   and have full permissions (i.e. `~/` on Lin/Mac; your home folder on Windows)

3. Extract the 'InvokeAI' folder from the downloaded package

4. Open the extracted 'InvokeAI' folder

5. Double-click 'install.bat' (Windows), or 'install.sh' (Lin/Mac) (or run from
   a terminal)

6. Follow the prompts

7. After installation, please run the 'invoke.bat' file (on Windows) or
   'invoke.sh' file (on Linux/Mac) to start InvokeAI.

## Troubleshooting

If you run into problems during or after installation, the InvokeAI team is
available to help you. Either create an
[Issue](https://github.com/invoke-ai/InvokeAI/issues) at our GitHub site, or
make a request for help on the "bugs-and-support" channel of our
[Discord server](https://discord.gg/ZmtBAhwWhy). We are a 100% volunteer
organization, but typically somebody will be available to help you within 24
hours, and often much sooner.

---
title: Running InvokeAI on Google Colab using a Jupyter Notebook
---

# THIS NEEDS TO BE FLESHED OUT

## Introduction

We have a [Jupyter
notebook](https://github.com/invoke-ai/InvokeAI/blob/main/notebooks/Stable-Diffusion-local-Windows.ipynb)
with cell-by-cell installation steps. It will download the code in
this repo as one of the steps, so instead of cloning this repo, simply
download the notebook from the link above, load it up in VSCode
(with the appropriate extensions installed), Jupyter, or JupyterLab, and
start running the cells one-by-one.

!!! Note "you will need NVIDIA drivers, Python 3.10, and Git installed beforehand"

## Walkthrough

## Updating to newer versions

### Updating the stable version

### Updating to the development version

## Troubleshooting

---
title: Manual Installation
---

<figure markdown>
# :fontawesome-brands-linux: Linux | :fontawesome-brands-apple: macOS | :fontawesome-brands-windows: Windows
</figure>

!!! warning "This is for advanced Users"

    who are already experienced with using conda or pip

## Introduction

You have two choices for manual installation, the [first one](#Conda_method)
based on the Anaconda3 package manager (`conda`), and
[a second one](#PIP_method) which uses basic Python virtual environment (`venv`)
commands and the PIP package manager. Both methods require you to enter commands
on the terminal, also known as the "console".

On Windows systems you are encouraged to install and use
[PowerShell](https://learn.microsoft.com/en-us/powershell/scripting/install/installing-powershell-on-windows?view=powershell-7.3),
which provides compatibility with Linux and Mac shells and nice features such as
command-line completion.

### Conda method

1. Check that your system meets the
   [hardware requirements](index.md#Hardware_Requirements) and has the
   appropriate GPU drivers installed. In particular, if you are a Linux user
   with an AMD GPU installed, you may need to install the
   [ROCm driver](https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html).

   InvokeAI does not yet support Windows machines with AMD GPUs due to the lack
   of ROCm driver support on this platform.

   To confirm that the appropriate drivers are installed, run `nvidia-smi` on
   NVIDIA/CUDA systems, and `rocm-smi` on AMD systems. These should return
   information about the installed video card.

   Macintosh users with MPS acceleration, or anybody with a CPU-only system,
   can skip this step.

2. You will need to install Anaconda3 and Git if they are not already
   available. Use your operating system's preferred package manager, or
   download the installers manually. You can find them here:

    - [Anaconda3](https://www.anaconda.com/)
    - [git](https://git-scm.com/downloads)

3. Clone the [InvokeAI](https://github.com/invoke-ai/InvokeAI) source code from
   GitHub:

    ```bash
    git clone https://github.com/invoke-ai/InvokeAI.git
    ```

    This will create an InvokeAI folder where you will follow the rest of the
    steps.

4. Enter the newly-created InvokeAI folder:

    ```bash
    cd InvokeAI
    ```

    From this step forward make sure that you are working in the InvokeAI
    directory!
5. Select the appropriate environment file:

    We have created a series of environment files suited for different operating
    systems and GPU hardware. They are located in the
    `environments-and-requirements` directory:

    <figure markdown>

    | filename                 | OS                              |
    | :----------------------: | :-----------------------------: |
    | environment-lin-amd.yml  | Linux with an AMD (ROCm) GPU    |
    | environment-lin-cuda.yml | Linux with an NVIDIA CUDA GPU   |
    | environment-mac.yml      | Macintosh                       |
    | environment-win-cuda.yml | Windows with an NVIDIA CUDA GPU |

    </figure>

    Choose the appropriate environment file for your system and link or copy it
    to `environment.yml` in InvokeAI's top-level directory. To do so, run the
    following command from the repository root:

    !!! Example ""

        === "Macintosh and Linux"

            !!! todo "Replace `xxx` and `yyy` with the appropriate OS and GPU codes as seen in the table above"

            ```bash
            ln -sf environments-and-requirements/environment-xxx-yyy.yml environment.yml
            ```

            When this is done, confirm that a file `environment.yml` has been linked in
            the InvokeAI root directory and that it points to the correct file in
            `environments-and-requirements`.

            ```bash
            ls -la
            ```

        === "Windows"

            !!! todo "Since it requires admin privileges to create links, we will use the copy command to create your `environment.yml`"

            ```cmd
            copy environments-and-requirements\environment-win-cuda.yml environment.yml
            ```

            Afterwards verify that the file `environment.yml` has been created, either via the
            explorer or by using the command `dir` from the terminal

            ```cmd
            dir
            ```

    !!! warning "Do not try to run conda directly on the environment file in the subdirectory. This won't work. Instead, copy or link it to the top-level directory as shown."
6. Create the conda environment:

    ```bash
    conda env update
    ```

    This will create a new environment named `invokeai` and install all InvokeAI
    dependencies into it. If something goes wrong you should take a look at
    [troubleshooting](#troubleshooting).

7. Activate the `invokeai` environment:

    In order to use the newly created environment you will first need to
    activate it:

    ```bash
    conda activate invokeai
    ```

    Your command-line prompt should change to indicate that `invokeai` is active
    by prepending `(invokeai)`.

8. Pre-Load the model weights files:

    !!! tip

        If you have already downloaded the weights file(s) for another Stable
        Diffusion distribution, you may skip this step (by selecting "skip" when
        prompted) and configure InvokeAI to use the previously-downloaded files. The
        process for this is described [here](INSTALLING_MODELS.md).

    ```bash
    python scripts/preload_models.py
    ```

    The script `preload_models.py` will interactively guide you through the
    process of downloading and installing the weights files needed for InvokeAI.
    Note that the main Stable Diffusion weights file is protected by a license
    agreement that you have to agree to. The script will list the steps you need
    to take to create an account on the site that hosts the weights files,
    accept the agreement, and provide an access token that allows InvokeAI to
    legally download and install the weights files.

    If you get an error message about a module not being installed, check that
    the `invokeai` environment is active and if not, repeat step 7.
9. Run the command-line or the web interface:

    !!! example ""

        !!! warning "Make sure that the conda environment is activated, which should create `(invokeai)` in front of your prompt!"

        === "CLI"

            ```bash
            python scripts/invoke.py
            ```

        === "local Webserver"

            ```bash
            python scripts/invoke.py --web
            ```

        === "Public Webserver"

            ```bash
            python scripts/invoke.py --web --host 0.0.0.0
            ```

    If you choose to run the web interface, point your browser at
    http://localhost:9090 in order to load the GUI.

10. Render away!

    Browse the [features](../features/CLI.md) section to learn about all the things you
    can do with InvokeAI.

    Note that some GPUs are slow to warm up. In particular, when using an AMD
    card with the ROCm driver, you may have to wait for over a minute the first
    time you try to generate an image. Fortunately, after the warm-up period
    rendering will be fast.

11. Subsequently, to relaunch the script, be sure to run "conda activate
    invokeai", enter the `InvokeAI` directory, and then launch the invoke
    script. If you forget to activate the 'invokeai' environment, the script
    will fail with multiple `ModuleNotFound` errors.

## Updating to newer versions of the script

This distribution is changing rapidly. If you used the `git clone` method
(step 3) to download the InvokeAI directory, then to update to the latest and
greatest version, launch the Anaconda window, enter `InvokeAI` and type:

```bash
git pull
conda env update
python scripts/preload_models.py --no-interactive #optional
```

This will bring your local copy into sync with the remote one. The last step may
be needed to take advantage of new features or released models. The
`--no-interactive` flag will prevent the script from prompting you to download
the big Stable Diffusion weights files.
## pip Install
|
|
||||||
|
|
||||||
To install InvokeAI with only the PIP package manager, please follow these
|
|
||||||
steps:
|
|
||||||
|
|
||||||
1. Make sure you are using Python 3.9 or higher. The rest of the install
|
|
||||||
procedure depends on this:
|
|
||||||
|
|
||||||
```bash
|
|
||||||
python -V
|
|
||||||
```
|
|
||||||
|
|
||||||
2. Install the `virtualenv` tool if you don't have it already:
|
|
||||||
|
|
||||||
```bash
|
|
||||||
pip install virtualenv
|
|
||||||
```
|
|
||||||
|
|
||||||
3. From within the InvokeAI top-level directory, create and activate a virtual
|
|
||||||
environment named `invokeai`:
|
|
||||||
|
|
||||||
```bash
|
|
||||||
virtualenv invokeai
|
|
||||||
source invokeai/bin/activate
|
|
||||||
```
|
|
||||||
|
|
||||||
4. Pick the correct `requirements*.txt` file for your hardware and operating
|
|
||||||
system.
|
|
||||||
|
|
||||||
We have created a series of environment files suited for different operating
|
|
||||||
systems and GPU hardware. They are located in the
|
|
||||||
`environments-and-requirements` directory:
|
|
||||||
|
|
||||||
<figure markdown>
|
|
||||||
|
|
||||||
| filename | OS |
|
|
||||||
| :---------------------------------: | :-------------------------------------------------------------: |
|
|
||||||
| requirements-lin-amd.txt | Linux with an AMD (ROCm) GPU |
|
|
||||||
| requirements-lin-arm64.txt | Linux running on arm64 systems |
|
|
||||||
| requirements-lin-cuda.txt | Linux with an NVIDIA (CUDA) GPU |
|
|
||||||
| requirements-mac-mps-cpu.txt | Macintoshes with MPS acceleration |
|
|
||||||
| requirements-lin-win-colab-cuda.txt | Windows with an NVIDA (CUDA) GPU<br>(supports Google Colab too) |
|
|
||||||
|
|
||||||
</figure>
|
|
||||||
|
|
||||||
Select the appropriate requirements file, and make a link to it from
|
|
||||||
`requirements.txt` in the top-level InvokeAI directory. The command to do
|
|
||||||
this from the top-level directory is:
|
|
||||||
|
|
||||||
!!! example ""
|
|
||||||
|
|
||||||
=== "Macintosh and Linux"
|
|
||||||
|
|
||||||
!!! info "Replace `xxx` and `yyy` with the appropriate OS and GPU codes."
|
|
||||||
|
|
||||||
```bash
|
|
||||||
ln -sf environments-and-requirements/requirements-xxx-yyy.txt requirements.txt
|
|
||||||
```
|
|
||||||
|
|
||||||
=== "Windows"
|
|
||||||
|
|
||||||
!!! info "on Windows, admin privileges are required to make links, so we use the copy command instead"
|
|
||||||
|
|
||||||
```cmd
|
|
||||||
copy environments-and-requirements\requirements-lin-win-colab-cuda.txt requirements.txt
|
|
||||||
```
|
|
||||||
|
|
||||||
!!! warning
|
|
||||||
|
|
||||||
Please do not link or copy `environments-and-requirements/requirements-base.txt`.
|
|
||||||
This is a base requirements file that does not have the platform-specific
|
|
||||||
libraries. Also, be sure to link or copy the platform-specific file to
|
|
||||||
a top-level file named `requirements.txt` as shown here. Running pip on
|
|
||||||
a requirements file in a subdirectory will not work as expected.
|
|
||||||
|
|
||||||
When this is done, confirm that a file named `requirements.txt` has been
|
|
||||||
created in the InvokeAI root directory and that it points to the correct
|
|
||||||
file in `environments-and-requirements`.

5. Run PIP

    Be sure that the `invokeai` environment is active before doing this:

    ```bash
    pip install --prefer-binary -r requirements.txt
    ```
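    A quick spot check that the key packages landed in the virtual environment
    (a sanity check only, not part of the install; versions will vary):

    ```bash
    pip list | grep -Ei 'torch|transformers'
    ```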

---

## Troubleshooting

Here are some common issues and their suggested solutions.

### Conda

#### Conda fails before completing `conda update`

The usual source of these errors is a package incompatibility. While we have
tried to minimize these, over time packages get updated and sometimes introduce
incompatibilities.

We suggest that you search
[Issues](https://github.com/invoke-ai/InvokeAI/issues) or the "bugs-and-support"
channel of the [InvokeAI Discord](https://discord.gg/ZmtBAhwWhy).

You may also try to install the broken packages manually using PIP. To do this,
activate the `invokeai` environment, and run `pip install` with the name and
version of the package that is causing the incompatibility. For example:

```bash
pip install test-tube==0.7.5
```

You can keep doing this until all requirements are satisfied and the `invoke.py`
script runs without errors. Please report to
[Issues](https://github.com/invoke-ai/InvokeAI/issues) what you were able to do
to work around the problem so that others can benefit from your investigation.
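
To see which requirements are still unsatisfied, `pip check` reports any broken
or missing dependencies in the active environment, which is a convenient way to
drive the loop above:

```bash
pip check
# "No broken requirements found." means you are done
```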

#### Create Conda Environment fails on macOS

If `conda env create` fails with an lmdb error, this is most likely caused by
Clang. Run `brew config` to see which Clang is installed on your Mac. If Clang
isn't installed, that is what is causing the error. Start by installing the
additional Xcode command line tools, followed by `brew install llvm`:

```bash
xcode-select --install
brew install llvm
```

If `brew config` shows that Clang is installed, update to the latest llvm and
try creating the environment again.

#### `preload_models.py` or `invoke.py` crashes at an early stage

This is usually due to an incomplete or corrupted Conda install. Make sure you
have linked to the correct environment file and run `conda update` again.

If the problem persists, a more extreme measure is to clear Conda's caches and
remove the `invokeai` environment:

```bash
conda deactivate
conda env remove -n invokeai
conda clean -a
conda update
```

This removes all cached library files, including ones that may have been
corrupted somehow. (This is not supposed to happen, but occasionally does
anyway.)

#### `invoke.py` crashes at a later stage

If the CLI or the web interface had been working OK, but something unexpected
happens later on during the session, you've encountered a code bug that is
probably unrelated to an install issue. Please search
[Issues](https://github.com/invoke-ai/InvokeAI/issues), file a bug report, or
ask for help on [Discord](https://discord.gg/ZmtBAhwWhy).

#### My renders are running very slowly

You may have installed the wrong torch (machine learning) package, and the
system is running on CPU rather than the GPU. To check, look at the log messages
that appear when `invoke.py` is first starting up. One of the earlier lines
should say `Using device type cuda`. On AMD systems, it will also say "cuda",
and on Macintoshes, it should say "mps". If instead the message says it is
running on "cpu", then you may need to install the correct torch library.
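
You can also ask torch directly which accelerators it can see, independent of
InvokeAI (a one-liner check; on Macs substitute `torch.backends.mps.is_available()`):

```bash
python -c "import torch; print(torch.cuda.is_available())"
# True means torch can reach the GPU; False means it will fall back to CPU
```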

You may be able to fix this by installing a different torch library. Here are
the magic incantations for Conda and PIP.

!!! todo "For CUDA systems"

    - conda

    ```bash
    conda install pytorch torchvision torchaudio pytorch-cuda=11.6 -c pytorch -c nvidia
    ```

    - pip

    ```bash
    pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu116
    ```

!!! todo "For AMD systems"

    - conda (ROCm builds of torch are distributed as pip wheels rather than
      Conda packages, so this recipe activates the Conda environment and then
      installs with `pip3`)

    ```bash
    conda activate invokeai
    pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/rocm5.2/
    ```

    - pip

    ```bash
    pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/rocm5.2/
    ```

More information and troubleshooting tips can be found at https://pytorch.org.
---
title: Source Installer
---

# The InvokeAI Source Installer

## Introduction

The source installer is a shell script that attempts to automate every step
needed to install and run InvokeAI on a stock computer running recent versions
of Linux, MacOS or Windows. It will leave you with a version that runs a stable
version of InvokeAI with the option to upgrade to experimental versions later.
It is not as foolproof as the [InvokeAI installer](INSTALL_INVOKE.md).

Before you begin, make sure that you meet the
[hardware requirements](index.md#Hardware_Requirements) and have the appropriate
GPU drivers installed. In particular, if you are a Linux user with an AMD GPU
installed, you may need to install the
[ROCm driver](https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html).

Installation requires roughly 18G of free disk space to load the libraries and
recommended model weights files.

## Walk through

Though there are multiple steps, there really is only one click involved to kick
off the process.

1. The source installer is distributed in ZIP files. Go to the
   [latest release](https://github.com/invoke-ai/InvokeAI/releases/latest), and
   look for a series of files named:

    - invokeAI-src-installer-mac.zip
    - invokeAI-src-installer-windows.zip
    - invokeAI-src-installer-linux.zip

    Download the one that is appropriate for your operating system.

2. Unpack the zip file into a directory that has at least 18G of free space. Do
   _not_ unpack into a directory that has an earlier version of InvokeAI.

    This will create a new directory named "InvokeAI". This example shows how
    this would look using the `unzip` command-line tool, but you may use any
    graphical or command-line Zip extractor:

    ```cmd
    C:\Documents\Linco> unzip invokeAI-windows.zip
    Archive: C:\Linco\Downloads\invokeAI-windows.zip
       creating: invokeAI\
      inflating: invokeAI\install.bat
      inflating: invokeAI\readme.txt
    ```

3. If you are using a desktop GUI, double-click the installer file. It will be
   named `install.bat` on Windows systems and `install.sh` on Linux and
   Macintosh systems.

4. Alternatively, from the command line, run the shell script or .bat file:

    ```cmd
    C:\Documents\Linco> cd invokeAI
    C:\Documents\Linco\invokeAI> install.bat
    ```

5. Sit back and let the install script work. It will install various binary
   requirements including Conda, Git and Python, then download the current
   InvokeAI code and install it along with its dependencies.

6. After installation completes, the installer will launch a script called
   `preload_models.py`, which will guide you through the first-time process of
   selecting one or more Stable Diffusion model weights files, downloading and
   configuring them.

    Note that the main Stable Diffusion weights file is protected by a license
    agreement that you must accept in order to use it. The script will list the
    steps you need to take to create an account on the official site that hosts
    the weights files, accept the agreement, and provide an access token that
    allows InvokeAI to legally download and install the weights files.

    If you have already downloaded the weights file(s) for another Stable
    Diffusion distribution, you may skip this step (by selecting "skip" when
    prompted) and configure InvokeAI to use the previously-downloaded files. The
    process for this is described in [Installing Models](INSTALLING_MODELS.md).

7. The script will now exit and you'll be ready to generate some images. The
   invokeAI directory will contain numerous files. Look for a shell script
   named `invoke.sh` (Linux/Mac) or `invoke.bat` (Windows). Launch the script
   by double-clicking it or typing its name at the command-line:

    ```cmd
    C:\Documents\Linco> cd invokeAI
    C:\Documents\Linco\invokeAI> invoke.bat
    ```

The `invoke.bat` (`invoke.sh`) script will give you the choice of starting (1)
the command-line interface, or (2) the web GUI. If you start the latter, you can
load the user interface by pointing your browser at http://localhost:9090.

The `invoke` script also offers you a third option labeled "open the developer
console". If you choose this option, you will be dropped into a command-line
interface in which you can run python commands directly, access developer tools,
and launch InvokeAI with customized options. To do the latter, you would launch
the script `scripts/invoke.py` as shown in this example:

```cmd
python scripts/invoke.py --web --max_load_models=3 \
    --model=waifu-1.3 --steps=30 --outdir=C:/Documents/AIPhotos
```

These options are described in detail in the
[Command-Line Interface](../features/CLI.md) documentation.

## Updating to newer versions

This section describes how to update InvokeAI to new versions of the software.

### Updating the stable version

This distribution is changing rapidly, and we add new features on a daily basis.
To update to the latest released version (recommended), run the `update.sh`
(Linux/Mac) or `update.bat` (Windows) script. This will fetch the latest
release and re-run the `preload_models` script to download any updated model
files that may be needed. You can also use this to add additional models that
you did not select at installation time.

### Updating to the development version

There may be times that there is a feature in the `development` branch of
InvokeAI that you'd like to take advantage of. Or perhaps there is a branch that
corrects an annoying bug. To do this, you will use the developer's console.

From within the invokeAI directory, run the command `invoke.sh` (Linux/Mac) or
`invoke.bat` (Windows) and select option (3) to open the developer's console.
Then run the following commands to get the `development` branch:

```bash
git checkout development
git pull
conda env update
```

You can now close the developer console and run `invoke` as before. If you get
complaints about missing models, then you may need to do the additional step of
running `preload_models.py`. This happens relatively infrequently. To do this,
simply open up the developer's console again and type
`python scripts/preload_models.py`.

## Troubleshooting

If you run into problems during or after installation, the InvokeAI team is
available to help you. Either create an
[Issue](https://github.com/invoke-ai/InvokeAI/issues) at our GitHub site, or
make a request for help on the "bugs-and-support" channel of our
[Discord server](https://discord.gg/ZmtBAhwWhy). We are a 100% volunteer
organization, but typically somebody will be available to help you within 24
hours, and often much sooner.

docs/installation/deprecated_documentation/INSTALL_BINARY.md

---
title: InvokeAI Binary Installer
---

The InvokeAI binary installer is a shell script that will install InvokeAI onto a stock
computer running recent versions of Linux, macOS or Windows. It will leave you
with a version that runs a stable version of InvokeAI. When a new version of
InvokeAI is released, you will download and reinstall the new version.

If you wish to tinker with unreleased versions of InvokeAI that introduce
potentially unstable new features, you should consider using the
[source installer](INSTALL_SOURCE.md) or one of the
[manual install](../020_INSTALL_MANUAL.md) methods.

**Important Caveats**

- This script does not support AMD GPUs. For Linux AMD support,
  please use the manual or source code installer methods.

- This script has difficulty on some Macintosh machines
  that have previously been used for Python development due to
  conflicting development tools versions. Mac developers may wish
  to try the source code installer or one of the manual methods instead.

!!! todo

    Before you begin, make sure that you meet
    the [hardware requirements](/#hardware-requirements) and have the
    appropriate GPU drivers installed. In particular, if you are a Linux user with
    an AMD GPU installed, you may need to install the
    [ROCm driver](https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html).

Installation requires roughly 18G of free disk space to load the libraries and
recommended model weights files.

## Steps to Install

1. Download the
   [latest release](https://github.com/invoke-ai/InvokeAI/releases/latest) of
   InvokeAI's installer for your platform. Look for a file named `InvokeAI-binary-<your platform>.zip`.

2. Place the downloaded package someplace where you have plenty of HDD space,
   and have full permissions (i.e. `~/` on Lin/Mac; your home folder on Windows).

3. Extract the 'InvokeAI' folder from the downloaded package.

4. Open the extracted 'InvokeAI' folder.

5. Double-click 'install.bat' (Windows), or 'install.sh' (Lin/Mac) (or run from
   a terminal, as sketched below).

6. Follow the prompts.

7. After installation, please run the 'invoke.bat' file (on Windows) or
   'invoke.sh' file (on Linux/Mac) to start InvokeAI.
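
If you prefer a terminal on Linux or macOS, steps 5 and 7 look roughly like
this (a sketch; the folder name comes from the zip you extracted in step 3):

```bash
cd InvokeAI      # the extracted folder
./install.sh     # step 5: run the installer and follow the prompts
./invoke.sh      # step 7: start InvokeAI once installation completes
```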

## Troubleshooting

If you run into problems during or after installation, the InvokeAI team is
available to help you. Either create an
[Issue](https://github.com/invoke-ai/InvokeAI/issues) at our GitHub site, or
make a request for help on the "bugs-and-support" channel of our
[Discord server](https://discord.gg/ZmtBAhwWhy). We are a 100% volunteer
organization, but typically somebody will be available to help you within 24
hours, and often much sooner.

---
title: Running InvokeAI on Google Colab using a Jupyter Notebook
---

## Introduction

We have a [Jupyter
notebook](https://github.com/invoke-ai/InvokeAI/blob/main/notebooks/Stable_Diffusion_AI_Notebook.ipynb)
with cell-by-cell installation steps. It will download the code in
this repo as one of the steps, so instead of cloning this repo, simply
download the notebook from the link above and load it up in VSCode
(with the appropriate extensions installed)/Jupyter/JupyterLab and
start running the cells one-by-one.

!!! Note "you will need NVIDIA drivers, Python 3.10, and Git installed beforehand"

## Running Online On Google Colaboratory

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/invoke-ai/InvokeAI/blob/main/notebooks/Stable_Diffusion_AI_Notebook.ipynb)

## Running Locally (Cloning)

1. Install the Jupyter Notebook python library (one-time):

    ```bash
    pip install jupyter
    ```

2. Clone the InvokeAI repository:

    ```bash
    git clone https://github.com/invoke-ai/InvokeAI.git
    cd InvokeAI
    ```

3. Create a virtual environment using conda:

    ```bash
    conda create -n invoke jupyter
    ```

4. Activate the environment and start the Jupyter notebook:

    ```bash
    conda activate invoke
    jupyter notebook
    ```

docs/installation/deprecated_documentation/INSTALL_LINUX.md

---
title: Manual Installation, Linux
---

# :fontawesome-brands-linux: Linux

## Installation

1. You will need to install the following prerequisites if they are not already
   available. Use your operating system's preferred installer.

    - Python (version 3.8.5 recommended; higher may work)
    - git

2. Install the Python Anaconda environment manager.

    ```bash
    ~$ wget https://repo.anaconda.com/archive/Anaconda3-2022.05-Linux-x86_64.sh
    ~$ chmod +x Anaconda3-2022.05-Linux-x86_64.sh
    ~$ ./Anaconda3-2022.05-Linux-x86_64.sh
    ```

    After installing anaconda, you should log out of your system and log back
    in. If the installation worked, your command prompt will be prefixed by the
    name of the current anaconda environment - `(base)`.

3. Copy the InvokeAI source code from GitHub:

    ```bash
    (base) ~$ git clone https://github.com/invoke-ai/InvokeAI.git
    ```

    This will create an InvokeAI folder where you will follow the rest of the
    steps.

4. Enter the newly-created InvokeAI folder. From this step forward make sure
   that you are working in the InvokeAI directory!

    ```bash
    (base) ~$ cd InvokeAI
    (base) ~/InvokeAI$
    ```

5. Use anaconda to copy necessary python packages, create a new python
   environment named `invokeai` and then activate the environment.

    !!! todo "For systems with a CUDA (Nvidia) card:"

        ```bash
        (base) rm -rf src      # (this is a precaution in case there is already a src directory)
        (base) ~/InvokeAI$ conda env create -f environment-cuda.yml
        (base) ~/InvokeAI$ conda activate invokeai
        (invokeai) ~/InvokeAI$
        ```

    !!! todo "For systems with an AMD card (using ROCm driver):"

        ```bash
        (base) rm -rf src      # (this is a precaution in case there is already a src directory)
        (base) ~/InvokeAI$ conda env create -f environment-AMD.yml
        (base) ~/InvokeAI$ conda activate invokeai
        (invokeai) ~/InvokeAI$
        ```

    After these steps, your command prompt will be prefixed by `(invokeai)` as
    shown above.

6. Load the big stable diffusion weights files and a couple of smaller
   machine-learning models:

    ```bash
    (invokeai) ~/InvokeAI$ python3 scripts/configure_invokeai.py
    ```

    !!! note

        This script will lead you through the process of creating an account on Hugging Face,
        accepting the terms and conditions of the Stable Diffusion model license,
        and obtaining an access token for downloading. It will then download and
        install the weights files for you.

        Please look [here](../INSTALL_MANUAL.md) for a manual process for doing
        the same thing.

7. Start generating images!

    !!! todo "Run InvokeAI!"

        !!! warning "IMPORTANT"

            Make sure that the conda environment is activated, which should create
            `(invokeai)` in front of your prompt!

        === "CLI"

            ```bash
            python scripts/invoke.py
            ```

        === "local Webserver"

            ```bash
            python scripts/invoke.py --web
            ```

        === "Public Webserver"

            ```bash
            python scripts/invoke.py --web --host 0.0.0.0
            ```

        To use an alternative model you may invoke the `!switch` command in
        the CLI, or pass `--model <model_name>` during `invoke.py` launch for
        either the CLI or the Web UI. See [Command Line
        Client](../../features/CLI.md#model-selection-and-importation). The
        model names are defined in `configs/models.yaml`.
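
        For example, a combined launch line might look like this (the model
        name here is only an illustration; use one actually listed in your
        `configs/models.yaml`):

        ```bash
        python scripts/invoke.py --web --model stable-diffusion-1.4
        ```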

8. Subsequently, to relaunch the script, be sure to run "conda activate
   invokeai" (step 5, second command), enter the `InvokeAI` directory, and then
   launch the invoke script (step 7). If you forget to activate the 'invokeai'
   environment, the script will fail with multiple `ModuleNotFound` errors.

## Updating to newer versions of the script

This distribution is changing rapidly. If you used the `git clone` method
(step 3) to download the InvokeAI directory, then to update to the latest and
greatest version, launch the Anaconda window, enter `InvokeAI` and type:

```bash
(invokeai) ~/InvokeAI$ git pull
(invokeai) ~/InvokeAI$ rm -rf src      # prevents conda freezing errors
(invokeai) ~/InvokeAI$ conda env update -f environment.yml
```

This will bring your local copy into sync with the remote one.

docs/installation/deprecated_documentation/INSTALL_MAC.md

---
title: Manual Installation, macOS
---

# :fontawesome-brands-apple: macOS

Invoke AI runs quite well on M1 Macs and we have a number of M1 users in the
community.

While the repo does run on Intel Macs, we only have a couple reports. If you
have an Intel Mac and run into issues, please create an issue on Github and we
will do our best to help.

## Requirements

- macOS 12.3 Monterey or later
- About 10GB of storage (and 10GB of data if your internet connection has data
  caps)
- Any M1 Mac, or an Intel Mac with 4GB+ of VRAM (ideally more)

## Installation

!!! todo "Homebrew"

    First you will install the "brew" package manager. Skip this if brew is already installed.

    ```bash title="install brew (and Xcode command line tools)"
    /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
    ```

!!! todo "Conda Installation"

    Now there are two different ways to set up the Python (miniconda) environment:

    1. Standalone
    2. with pyenv

    If you don't know what we are talking about, choose Standalone. If you are familiar with python environments, choose "with pyenv".

    === "Standalone"

        ```bash title="Install cmake, protobuf, and rust"
        brew install cmake protobuf rust
        ```

        ```bash title="Clone the InvokeAI repository"
        # Clone the Invoke AI repo
        git clone https://github.com/invoke-ai/InvokeAI.git
        cd InvokeAI
        ```

        Choose the appropriate architecture for your system and install miniconda:

        === "M1 arm64"

            ```bash title="Install miniconda for M1 arm64"
            curl https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-arm64.sh \
                -o Miniconda3-latest-MacOSX-arm64.sh
            /bin/bash Miniconda3-latest-MacOSX-arm64.sh
            ```

        === "Intel x86_64"

            ```bash title="Install miniconda for Intel"
            curl https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-x86_64.sh \
                -o Miniconda3-latest-MacOSX-x86_64.sh
            /bin/bash Miniconda3-latest-MacOSX-x86_64.sh
            ```

    === "with pyenv"

        ```bash
        brew install pyenv-virtualenv
        pyenv install anaconda3-2022.05
        pyenv virtualenv anaconda3-2022.05
        eval "$(pyenv init -)"
        pyenv activate anaconda3-2022.05
        ```

!!! todo "Clone the Invoke AI repo"

    ```bash
    git clone https://github.com/invoke-ai/InvokeAI.git
    cd InvokeAI
    ```

!!! todo "Create the environment & install packages"

    === "M1 Mac"

        ```bash
        PIP_EXISTS_ACTION=w CONDA_SUBDIR=osx-arm64 conda env create -f environment-mac.yml
        ```

    === "Intel x86_64 Mac"

        ```bash
        PIP_EXISTS_ACTION=w CONDA_SUBDIR=osx-64 conda env create -f environment-mac.yml
        ```

    ```bash
    # Activate the environment (you need to do this every time you want to run SD)
    conda activate invokeai
    ```

    !!! info

        `export PIP_EXISTS_ACTION=w` is a precaution to fix `conda env
        create -f environment-mac.yml` never finishing in some situations. So
        it isn't required but won't hurt.

!!! todo "Download the model weight files"

    The `configure_invokeai.py` script downloads and installs the model weight
    files for you. It will lead you through the process of getting a Hugging Face
    account, accepting the Stable Diffusion model weight license agreement, and
    creating a download token:

    ```bash
    # This will take some time, depending on the speed of your internet connection
    # and will consume about 10GB of space
    python scripts/configure_invokeai.py
    ```

!!! todo "Run InvokeAI!"

    !!! warning "IMPORTANT"

        Make sure that the conda environment is activated, which should create
        `(invokeai)` in front of your prompt!

    === "CLI"

        ```bash
        python scripts/invoke.py
        ```

    === "local Webserver"

        ```bash
        python scripts/invoke.py --web
        ```

    === "Public Webserver"

        ```bash
        python scripts/invoke.py --web --host 0.0.0.0
        ```

    To use an alternative model you may invoke the `!switch` command in
    the CLI, or pass `--model <model_name>` during `invoke.py` launch for
    either the CLI or the Web UI. See [Command Line
    Client](../../features/CLI.md#model-selection-and-importation). The
    model names are defined in `configs/models.yaml`.

---

## Common problems

After you have followed all the instructions and tried to run invoke.py, you
might get several errors. Here are the errors I've seen and found solutions
for.

### Is it slow?

```bash title="Be sure to specify 1 sample and 1 iteration."
python ./scripts/orig_scripts/txt2img.py \
    --prompt "ocean" \
    --ddim_steps 5 \
    --n_samples 1 \
    --n_iter 1
```

---

### Doesn't work anymore?

PyTorch nightly includes support for MPS. Because of this, this setup is
inherently unstable. One morning I woke up and it no longer worked no matter
what I did until I switched to miniforge. However, I have another Mac that works
just fine with Anaconda. If you can't get it to work, please search a little
first because many of the errors will get posted and solved. If you can't find a
solution please [create an issue](https://github.com/invoke-ai/InvokeAI/issues).

One debugging step is to update to the latest version of PyTorch nightly.

```bash
conda install \
    pytorch \
    torchvision \
    -c pytorch-nightly \
    -n invokeai
```

If it takes forever to run `conda env create -f environment-mac.yml`, try this:

```bash
git clean -f
conda clean \
    --yes \
    --all
```

Or you could try to completely reset Anaconda:

```bash
conda update \
    --force-reinstall \
    -y \
    -n base \
    -c defaults conda
```

---

### "No module named cv2", torch, 'invokeai', 'transformers', 'taming', etc

There are several causes of these errors:

1. Did you remember to `conda activate invokeai`? If your terminal prompt begins
   with "(invokeai)" then you activated it. If it begins with "(base)" or
   something else you haven't.

2. You might've run `./scripts/configure_invokeai.py` or `./scripts/invoke.py`
   instead of `python ./scripts/configure_invokeai.py` or
   `python ./scripts/invoke.py`. The cause of this error is long so it's below.

    <!-- I could not find out where the error is, otherwise would have marked it as a footnote -->

3. If it says you're missing taming you need to rebuild your virtual
   environment.

    ```bash
    conda deactivate
    conda env remove -n invokeai
    conda env create -f environment-mac.yml
    ```

4. If you have activated the invokeai virtual environment and tried rebuilding
   it, maybe the problem could be that I have something installed that you don't
   and you'll just need to manually install it. Make sure you activate the
   virtual environment so it installs there instead of globally.

    ```bash
    conda activate invokeai
    pip install <package name>
    ```

You might also need to install Rust (I mention this again below).

---

### How many snakes are living in your computer?

You might have multiple Python installations on your system, in which case it's
important to be explicit and consistent about which one to use for a given
project. This is because virtual environments are coupled to the Python that
created them (and all the associated 'system-level' modules).

When you run `python` or `python3`, your shell searches the colon-delimited
locations in the `PATH` environment variable (`echo $PATH` to see that list) in
that order - first match wins. You can ask for the location of the first
`python3` found in your `PATH` with the `which` command like this:

```bash
% which python3
/usr/bin/python3
```

Anything in `/usr/bin` is
[part of the OS](https://developer.apple.com/library/archive/documentation/FileManagement/Conceptual/FileSystemProgrammingGuide/FileSystemOverview/FileSystemOverview.html#//apple_ref/doc/uid/TP40010672-CH2-SW6).
However, `/usr/bin/python3` is not actually python3, but rather a stub that
offers to install Xcode (which includes python 3). If you have Xcode installed
already, `/usr/bin/python3` will execute
`/Library/Developer/CommandLineTools/usr/bin/python3` or
`/Applications/Xcode.app/Contents/Developer/usr/bin/python3` (depending on which
Xcode you've selected with `xcode-select`).

Note that `/usr/bin/python` is an entirely different python - specifically,
python 2. Note: starting in macOS 12.3, `/usr/bin/python` no longer exists.

```bash
% which python3
/opt/homebrew/bin/python3
```

If you installed python3 with Homebrew and you've modified your path to search
for Homebrew binaries before system ones, you'll see the above path.

```bash
% which python
/opt/anaconda3/bin/python
```

If you have Anaconda installed, you will see the above path. There is a
`/opt/anaconda3/bin/python3` also.

We expect that `/opt/anaconda3/bin/python` and `/opt/anaconda3/bin/python3`
should actually be the _same python_, which you can verify by comparing the
output of `python3 -V` and `python -V`.

```bash
(invokeai) % which python
/Users/name/miniforge3/envs/invokeai/bin/python
```

The above is what you'll see if you have miniforge and correctly activated the
invokeai environment, while using the standalone setup instructions above.

If you otherwise installed via pyenv, you will get this result:

```bash
(anaconda3-2022.05) % which python
/Users/name/.pyenv/shims/python
```

It's all a mess and you should know
[how to modify the path environment variable](https://support.apple.com/guide/terminal/use-environment-variables-apd382cc5fa-4f58-4449-b20a-41c53c006f8f/mac)
if you want to fix it. Here's a brief hint of the most common ways you can
modify it (don't really have the time to explain it all here):

- ~/.zshrc
- ~/.bash_profile
- ~/.bashrc
- /etc/paths.d
- /etc/path

Which one you use will depend on what you have installed; the exception is
putting a file in `/etc/paths.d`, which is also the way I prefer to do it.

Finally, to answer the question posed by this section's title, it may help to
list all of the `python` / `python3` things found in `$PATH` instead of just the
first hit. To do so, add the `-a` switch to `which`:

```bash
% which -a python3
...
```

This will show a list of all binaries which are actually available in your PATH.

---

### Debugging?

Tired of waiting for your renders to finish before you can see if it works?
Reduce the steps! The image quality will be horrible but at least you'll get
quick feedback.

```bash
python ./scripts/txt2img.py \
    --prompt "ocean" \
    --ddim_steps 5 \
    --n_samples 1 \
    --n_iter 1
```

---

### OSError: Can't load tokenizer for 'openai/clip-vit-large-patch14'

This usually means the tokenizer files were never downloaded or got corrupted;
re-running the configuration script fetches them again:

```bash
python scripts/configure_invokeai.py
```

---

### "The operator [name] is not current implemented for the MPS device." (sic)

!!! example "example error"

    ```bash
    ... NotImplementedError: The operator 'aten::_index_put_impl_' is not current
    implemented for the MPS device. If you want this op to be added in priority
    during the prototype phase of this feature, please comment on
    https://github.com/pytorch/pytorch/issues/77764.
    As a temporary fix, you can set the environment variable
    `PYTORCH_ENABLE_MPS_FALLBACK=1` to use the CPU as a fallback for this op.
    WARNING: this will be slower than running natively on MPS.
    ```

The InvokeAI version includes this fix in
[environment-mac.yml](https://github.com/invoke-ai/InvokeAI/blob/main/environment-mac.yml).
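
If you hit this with a different operator anyway, the workaround named in the
error message can be applied by hand before launching (with the CPU-fallback
slowdown the warning describes):

```bash
export PYTORCH_ENABLE_MPS_FALLBACK=1
python scripts/invoke.py
```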

### "Could not build wheels for tokenizers"

I have not seen this error because I had Rust installed on my computer before I
started playing with Stable Diffusion. The fix is to install Rust.

```bash
curl \
    --proto '=https' \
    --tlsv1.2 \
    -sSf https://sh.rustup.rs | sh
```

---

### How come `--seed` doesn't work?

!!! Information

    Completely reproducible results are not guaranteed across PyTorch releases,
    individual commits, or different platforms. Furthermore, results may not be
    reproducible between CPU and GPU executions, even when using identical seeds.

    [PyTorch docs](https://pytorch.org/docs/stable/notes/randomness.html)

That said, we might have a fix that at least gets a more consistent seed. We're
still working on it.

### libiomp5.dylib error?

```bash
OMP: Error #15: Initializing libiomp5.dylib, but found libomp.dylib already initialized.
```

You are likely using an Intel package by mistake. Be sure to run conda with the
environment variable `CONDA_SUBDIR=osx-arm64`, like so:

`CONDA_SUBDIR=osx-arm64 conda install ...`

This error happens with Anaconda on Macs when the Intel-only `mkl` is pulled in
by a dependency.
[nomkl](https://stackoverflow.com/questions/66224879/what-is-the-nomkl-python-package-used-for)
is a metapackage designed to prevent this, by making it impossible to install
`mkl`, but if your environment is already broken it may not work.
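
A minimal attempt, assuming the `invokeai` environment already exists
(rebuilding it from scratch with `CONDA_SUBDIR=osx-arm64` is the more reliable
route if this fails):

```bash
# try to swap the MKL-based BLAS stack for the no-MKL one in place
CONDA_SUBDIR=osx-arm64 conda install -n invokeai nomkl
```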

Do _not_ use `os.environ['KMP_DUPLICATE_LIB_OK']='True'` or equivalents as this
masks the underlying issue of using Intel packages.

---

### Not enough memory

This seems to be a common problem and is probably the underlying problem for a
lot of symptoms (listed below). The fix is to lower your image size or to add
`model.half()` right after the model is loaded. I should probably test it out.
I've read that the reason this fixes problems is because it converts the model
from 32-bit to 16-bit and that leaves more RAM for other things. I have no idea
how that would affect the quality of the images though.

See [this issue](https://github.com/CompVis/stable-diffusion/issues/71).

---

### "Error: product of dimension sizes > 2\*\*31'"

This error happens with img2img, which I haven't played with too much yet. But I
know it's because your image is too big or the resolution isn't a multiple of
32x32. Because the stable-diffusion model was trained on images that were 512 x
512, it's always best to use that output size (which is the default). However,
if you're using that size and you get the above error, try 256 x 256 or 512 x
256 or something as the source image.

BTW, 2\*\*31-1 =
[2,147,483,647](https://en.wikipedia.org/wiki/2,147,483,647#In_computing), which
is also 32-bit signed [LONG_MAX](https://en.wikipedia.org/wiki/C_data_types) in
C.

---

### I just got Rickrolled! Do I have a virus?

You don't have a virus. It's part of the project. Here's
[Rick](https://github.com/invoke-ai/InvokeAI/blob/main/assets/rick.jpeg) and
here's
[the code](https://github.com/invoke-ai/InvokeAI/blob/69ae4b35e0a0f6ee1af8bb9a5d0016ccb27e36dc/scripts/txt2img.py#L79)
that swaps him in. It's an NSFW filter, which IMO, doesn't work very well (and
we call this "computer vision", sheesh).

---

### My images come out black

We might have this fixed; we are still testing.

There's a [similar issue](https://github.com/CompVis/stable-diffusion/issues/69)
on CUDA GPUs where the images come out green. Maybe it's the same issue?
Someone in that issue says to use "--precision full", but this fork actually
disables that flag. I don't know why, someone else provided that code and I
don't know what it does. Maybe the `model.half()` suggestion above would fix
this issue too. I should probably test it.

### "view size is not compatible with input tensor's size and stride"

```bash
File "/opt/anaconda3/envs/invokeai/lib/python3.10/site-packages/torch/nn/functional.py", line 2511, in layer_norm
    return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead.
```

Update to the latest version of invoke-ai/InvokeAI. We were patching pytorch but
we found a file in stable-diffusion that we could change instead. This is a
32-bit vs 16-bit problem.
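
One way to pick up that fix, assuming you installed from a git clone with the
Mac environment file:

```bash
git pull
conda env update -f environment-mac.yml
```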

### The processor must support the Intel bla bla bla

What? Intel? On an Apple Silicon?

```bash
Intel MKL FATAL ERROR: This system does not meet the minimum requirements for use of the Intel(R) Math Kernel Library. The processor must support the Intel(R) Supplemental Streaming SIMD Extensions 3 (Intel(R) SSSE3) instructions. The processor must support the Intel(R) Streaming SIMD Extensions 4.2 (Intel(R) SSE4.2) instructions. The processor must support the Intel(R) Advanced Vector Extensions (Intel(R) AVX) instructions.
```

This is due to the Intel `mkl` package getting picked up when you try to install
something that depends on it; Rosetta can translate some Intel instructions but
not the specialized ones here. To avoid this, make sure to use the environment
variable `CONDA_SUBDIR=osx-arm64`, which restricts the Conda environment to only
use ARM packages, and use `nomkl` as described above.

---

### input types 'tensor<2x1280xf32>' and 'tensor<\*xf16>' are not broadcast compatible

May appear when just starting to generate, e.g.:

```bash
invoke> clouds
Generating: 0%| | 0/1 [00:00<?, ?it/s]/Users/[...]/dev/stable-diffusion/ldm/modules/embedding_manager.py:152: UserWarning: The operator 'aten::nonzero' is not currently supported on the MPS backend and will fall back to run on the CPU. This may have performance implications. (Triggered internally at /Users/runner/work/_temp/anaconda/conda-bld/pytorch_1662016319283/work/aten/src/ATen/mps/MPSFallback.mm:11.)
placeholder_idx = torch.where(
loc("mps_add"("(mpsFileLoc): /AppleInternal/Library/BuildRoots/20d6c351-ee94-11ec-bcaf-7247572f23b4/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShadersGraph/mpsgraph/MetalPerformanceShadersGraph/Core/Files/MPSGraphUtilities.mm":219:0)): error: input types 'tensor<2x1280xf32>' and 'tensor<*xf16>' are not broadcast compatible
LLVM ERROR: Failed to infer result type(s).
Abort trap: 6
/Users/[...]/opt/anaconda3/envs/invokeai/lib/python3.9/multiprocessing/resource_tracker.py:216: UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown
warnings.warn('resource_tracker: There appear to be %d '
```

docs/installation/deprecated_documentation/INSTALL_SOURCE.md

---
title: Source Installer
---

# The InvokeAI Source Installer

## Introduction

The source installer is a shell script that attempts to automate every step
needed to install and run InvokeAI on a stock computer running recent versions
of Linux, MacOS or Windows. It will leave you with a version that runs a stable
version of InvokeAI with the option to upgrade to experimental versions later.

Before you begin, make sure that you meet the
[hardware requirements](../../index.md#hardware-requirements) and have the appropriate
GPU drivers installed. In particular, if you are a Linux user with an AMD GPU
installed, you may need to install the
[ROCm driver](https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html).

Installation requires roughly 18G of free disk space to load the libraries and
recommended model weights files.

## Walk through

Though there are multiple steps, there really is only one click involved to kick
off the process.

1. The source installer is distributed in ZIP files. Go to the
   [latest release](https://github.com/invoke-ai/InvokeAI/releases/latest), and
   look for a series of files named:

    - [invokeAI-src-installer-2.2.3-mac.zip](https://github.com/invoke-ai/InvokeAI/releases/latest/download/invokeAI-src-installer-2.2.3-mac.zip)
    - [invokeAI-src-installer-2.2.3-windows.zip](https://github.com/invoke-ai/InvokeAI/releases/latest/download/invokeAI-src-installer-2.2.3-windows.zip)
    - [invokeAI-src-installer-2.2.3-linux.zip](https://github.com/invoke-ai/InvokeAI/releases/latest/download/invokeAI-src-installer-2.2.3-linux.zip)

    Download the one that is appropriate for your operating system.

2. Unpack the zip file into a directory that has at least 18G of free space. Do
   _not_ unpack into a directory that has an earlier version of InvokeAI.

    This will create a new directory named "InvokeAI". This example shows how
    this would look using the `unzip` command-line tool, but you may use any
    graphical or command-line Zip extractor:

    ```cmd
    C:\Documents\Linco> unzip invokeAI-windows.zip
    Archive: C:\Linco\Downloads\invokeAI-windows.zip
       creating: invokeAI\
      inflating: invokeAI\install.bat
      inflating: invokeAI\readme.txt
    ```

3. If you are a macOS user, you may need to install the Xcode command line tools.
   These are a set of tools that are needed to run certain applications in a Terminal,
   including InvokeAI. This package is provided directly by Apple.

    To install, open a terminal window and run `xcode-select --install`. You will get
    a macOS system popup guiding you through the install. If you already have them
    installed, you will instead see some output in the Terminal advising you that the
    tools are already installed.

    More information can be found here:
    https://www.freecodecamp.org/news/install-xcode-command-line-tools/

4. If you are using a desktop GUI, double-click the installer file. It will be
   named `install.bat` on Windows systems and `install.sh` on Linux and
   Macintosh systems.

5. Alternatively, from the command line, run the shell script or .bat file:

    ```cmd
    C:\Documents\Linco> cd invokeAI
    C:\Documents\Linco\invokeAI> install.bat
    ```

6. Sit back and let the install script work. It will install various binary
   requirements including Conda, Git and Python, then download the current
   InvokeAI code and install it along with its dependencies.

    Be aware that some of the library download and install steps take a long time.
    In particular, the `pytorch` package is quite large and often appears to get
    "stuck" at 99.9%. Similarly, the `pip installing requirements` step may
    appear to hang. Have patience and the installation step will eventually
    resume. However, there are occasions when the library install does
    legitimately get stuck. If you have been waiting for more than ten minutes
    and nothing is happening, you can interrupt the script with ^C. You may restart
    it and it will pick up where it left off.

7. After installation completes, the installer will launch a script called
   `configure_invokeai.py`, which will guide you through the first-time process of
   selecting one or more Stable Diffusion model weights files, downloading and
   configuring them.

    Note that the main Stable Diffusion weights file is protected by a license
    agreement that you must accept in order to use it. The script will list the
    steps you need to take to create an account on the official site that hosts
    the weights files, accept the agreement, and provide an access token that
    allows InvokeAI to legally download and install the weights files.

    If you have already downloaded the weights file(s) for another Stable
    Diffusion distribution, you may skip this step (by selecting "skip" when
    prompted) and configure InvokeAI to use the previously-downloaded files. The
    process for this is described in [Installing Models](../050_INSTALLING_MODELS.md).

8. The script will now exit and you'll be ready to generate some images. The
   invokeAI directory will contain numerous files. Look for a shell script
   named `invoke.sh` (Linux/Mac) or `invoke.bat` (Windows). Launch the script
   by double-clicking it or typing its name at the command-line:

    ```cmd
    C:\Documents\Linco> cd invokeAI
    C:\Documents\Linco\invokeAI> invoke.bat
    ```

The `invoke.bat` (`invoke.sh`) script will give you the choice of starting (1)
the command-line interface, or (2) the web GUI. If you start the latter, you can
load the user interface by pointing your browser at http://localhost:9090.

The `invoke` script also offers you a third option labeled "open the developer
console". If you choose this option, you will be dropped into a command-line
interface in which you can run python commands directly, access developer tools,
and launch InvokeAI with customized options. To do the latter, you would launch
the script `scripts/invoke.py` as shown in this example:

```cmd
python scripts/invoke.py --web --max_load_models=3 \
    --model=waifu-1.3 --steps=30 --outdir=C:/Documents/AIPhotos
```

These options are described in detail in the
[Command-Line Interface](../../features/CLI.md) documentation.

## Troubleshooting

_Package dependency conflicts_ If you have previously installed
InvokeAI or another Stable Diffusion package, the installer may
occasionally pick up outdated libraries and either the installer or
`invoke` will fail with complaints about library conflicts. There are
two steps you can take to clear this problem. Both of these are done
from within the "developer's console", which you can get to by
launching `invoke.sh` (or `invoke.bat`) and selecting launch option
#3:

1. Remove the previous `invokeai` environment completely. From within
   the developer's console, give the command `conda env remove -n
   invokeai`. This will delete previous files installed by `invoke`.

    Then exit from the developer's console and launch the script
    `update.sh` (or `update.bat`). This will download the most recent
    InvokeAI (including bug fixes) and reinstall the environment.
    You should then be able to run `invoke.sh`/`invoke.bat`.

2. If this doesn't work, you can try cleaning your system's conda
   cache. This is slightly more extreme, but won't interfere with
   any other python-based programs installed on your computer.
   From the developer's console, run the command `conda clean -a`
   and answer "yes" to all prompts.

    After this is done, run `update.sh` and try again as before.
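
Condensed into one pass, the recovery sequence above looks like this (a sketch;
run the `conda` lines from the developer's console, then `update.sh` or
`update.bat` from a regular shell in the invokeAI directory):

```bash
conda env remove -n invokeai   # step 1: drop the broken environment
conda clean -a                 # step 2: clear conda's package caches
exit                           # leave the developer's console
./update.sh                    # re-download InvokeAI and rebuild the environment
```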
|
||||||
|
|
||||||
|
_"Corrupted configuration file."__ Everything seems to install ok, but
|
||||||
|
`invoke` complains of a corrupted configuration file and goes calls
|
||||||
|
`configure_invokeai.py` to fix, but this doesn't fix the problem.
|
||||||
|
|
||||||
|
This issue is often caused by a misconfigured configuration directive
|
||||||
|
in the `.invokeai` initialization file that contains startup settings.
|
||||||
|
This can be corrected by fixing the offending line.
|
||||||
|
|
||||||
|
First find `.invokeai`. It is a small text file located in your home
|
||||||
|
directory, `~/.invokeai` on Mac and Linux systems, and `C:\Users\*your
|
||||||
|
name*\.invokeai` on Windows systems. Open it with a text editor
|
||||||
|
(e.g. Notepad on Windows, TextEdit on Macs, or `nano` on Linux)
|
||||||
|
and look for the lines starting with `--root` and `--outdir`.
|
||||||
|
|
||||||
|
An example is here:

```cmd
--root="/home/lstein/invokeai"
--outdir="/home/lstein/invokeai/outputs"
```

There should not be whitespace before or after the directory paths,
and the paths should not end with slashes:

```cmd
--root="/home/lstein/invokeai "   # wrong! no whitespace here
--root="/home/lstein/invokeai/"   # wrong! shouldn't end in a slash
```

Fix the problem with your text editor and save as a **plain text**
file. This should clear the issue.

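If you prefer to spot the offending line from the command line first, a quick
check like this can help (a sketch, assuming a POSIX shell and `grep`; Windows
users can simply inspect the file in Notepad):

```bash
# Flag --root/--outdir lines whose quoted path ends in a space or a slash
grep -nE -- '--(root|outdir)=".*[ /\\]"$' ~/.invokeai
```
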
_If none of these maneuvers fixes the problem_ then please report the
problem to the [InvokeAI
Issues](https://github.com/invoke-ai/InvokeAI/issues) section, or
visit our [Discord Server](https://discord.gg/ZmtBAhwWhy) for interactive assistance.

## Updating to newer versions

This section describes how to update InvokeAI to new versions of the software.

### Updating the stable version

This distribution is changing rapidly, and we add new features on a daily basis.
To update to the latest released version (recommended), run the `update.sh`
(Linux/Mac) or `update.bat` (Windows) script. This will fetch the latest
release and re-run the `configure_invokeai` script to download any updated model
files that may be needed. You can also use this to add additional models that
you did not select at installation time.

You can now close the developer console and run `invoke` as before. If you get
complaints about missing models, then you may need to do the additional step of
running `configure_invokeai.py`. This happens relatively infrequently. To do this,
simply open up the developer's console again and type
`python scripts/configure_invokeai.py`.

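In practice the whole update rarely needs more than two commands (a sketch;
use the `.bat` equivalents on Windows):

```bash
./update.sh                            # fetch the latest release and re-run configuration
python scripts/configure_invokeai.py   # only if invoke later complains about missing models
```
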
## Getting help

If you run into problems during or after installation, the InvokeAI team is
available to help you. Either create an
[Issue](https://github.com/invoke-ai/InvokeAI/issues) at our GitHub site, or
make a request for help on the "bugs-and-support" channel of our
[Discord server](https://discord.gg/ZmtBAhwWhy). We are a 100% volunteer
organization, but typically somebody will be available to help you within 24
hours, and often much sooner.

docs/installation/deprecated_documentation/INSTALL_WINDOWS.md (new file, 137 lines)
@@ -0,0 +1,137 @@

---
title: Manual Installation, Windows
---

# :fontawesome-brands-windows: Windows

## **Notebook install (semi-automated)**

We have a
[Jupyter notebook](https://github.com/invoke-ai/InvokeAI/blob/main/notebooks/Stable_Diffusion_AI_Notebook.ipynb)
with cell-by-cell installation steps. It will download the code in this repo as
one of the steps, so instead of cloning this repo, simply download the notebook
from the link above and load it up in VSCode (with the appropriate extensions
installed)/Jupyter/JupyterLab and start running the cells one-by-one.

Note that you will need NVIDIA drivers, Python 3.10, and Git installed beforehand.

## **Manual Install with Conda**

1. Install Anaconda3 (miniconda3 version) from [here](https://docs.anaconda.com/anaconda/install/windows/)

2. Install Git from [here](https://git-scm.com/download/win)

3. Launch Anaconda from the Windows Start menu. This will bring up a command
   window. Type all the remaining commands in this window.

4. Run the command:

    ```batch
    git clone https://github.com/invoke-ai/InvokeAI.git
    ```

    This will create an `InvokeAI` folder where you will follow the rest of
    the steps.

5. Enter the newly-created InvokeAI folder. From this step forward make sure that you are working in the InvokeAI directory!

    ```batch
    cd InvokeAI
    ```

6. Run the following commands:

    !!! todo "For systems with a CUDA (Nvidia) card:"

        ```bash
        rmdir src # (this is a precaution in case there is already a src directory)
        conda env create -f environment-cuda.yml
        conda activate invokeai
        (invokeai)>
        ```

    !!! todo "For systems with an AMD card (using ROCm driver):"

        ```bash
        rmdir src # (this is a precaution in case there is already a src directory)
        conda env create -f environment-AMD.yml
        conda activate invokeai
        (invokeai)>
        ```

    This will install all python requirements and activate the "invokeai" environment
    which sets PATH and other environment variables properly.

7. Load the big stable diffusion weights files and a couple of smaller machine-learning models:

    ```bash
    python scripts/configure_invokeai.py
    ```

    !!! note

        This script will lead you through the process of creating an account on Hugging Face,
        accepting the terms and conditions of the Stable Diffusion model license, and
        obtaining an access token for downloading. It will then download and install the
        weights files for you.

        Please look [here](../INSTALL_MANUAL.md) for a manual process for doing the
        same thing.

8. Start generating images!

    !!! example ""

        !!! warning "IMPORTANT"

            Make sure that the conda environment is activated, which should create
            `(invokeai)` in front of your prompt!

        === "CLI"

            ```bash
            python scripts/invoke.py
            ```

        === "local Webserver"

            ```bash
            python scripts/invoke.py --web
            ```

        === "Public Webserver"

            ```bash
            python scripts/invoke.py --web --host 0.0.0.0
            ```

        To use an alternative model you may invoke the `!switch` command in
        the CLI, or pass `--model <model_name>` during `invoke.py` launch for
        either the CLI or the Web UI. See [Command Line
        Client](../../features/CLI.md#model-selection-and-importation). The
        model names are defined in `configs/models.yaml`.

9. Subsequently, to relaunch the script, first activate the Anaconda
   command window (step 3), enter the InvokeAI directory (step 5, `cd
   \path\to\InvokeAI`), run `conda activate invokeai` (step 6), and then
   launch the invoke script (step 8). A consolidated sketch follows this
   list.

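Putting step 9 together, a typical relaunch session looks like this (a sketch;
substitute the path where you actually cloned InvokeAI):

```bash
# In a fresh Anaconda command window:
cd \path\to\InvokeAI        # step 5
conda activate invokeai     # step 6
python scripts/invoke.py    # step 8 (add --web for the web UI)
```
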
!!! tip "Tildebyte has written an alternative"

    ["Easy peasy Windows install"](https://github.com/invoke-ai/InvokeAI/wiki/Easy-peasy-Windows-install)
    which uses the Windows Powershell and pew. If you are having trouble with
    Anaconda on Windows, give this a try (or try it first!)

---

This distribution is changing rapidly. If you used the `git clone` method
(step 4) to download the InvokeAI directory, then to update to the
latest and greatest version, launch the Anaconda window, enter
`InvokeAI`, and type:

```bash
git pull
conda env update
```

This will bring your local copy into sync with the remote one.

```diff
@@ -5,58 +5,31 @@ title: Overview
 We offer several ways to install InvokeAI, each one suited to your
 experience and preferences.

-1. [InvokeAI installer](INSTALL_INVOKE.md)
+1. [Automated Installer](010_INSTALL_AUTOMATED.md)

-   This is an installer script that installs InvokeAI and all the
-   third party libraries it depends on. When a new version of
-   InvokeAI is released, you will download and reinstall the new
-   version.
+   This is a script that will install all of InvokeAI's essential
+   third party libraries and InvokeAI itself. It includes access to a
+   "developer console" which will help us debug problems with you and
+   give you access to experimental features.

-   This installer is designed for people who want the system to "just
-   work", don't have an interest in tinkering with it, and do not
-   care about upgrading to unreleased experimental features.
-
-   **Important Caveats**
-
-   - This script does not support AMD GPUs. For Linux AMD support,
-     please use the manual or source code installer methods.
-   - This script has difficulty on some Macintosh machines
-     that have previously been used for Python development due to
-     conflicting development tools versions. Mac developers may wish
-     to try the source code installer or one of the manual methods instead.
-
-2. [Source code installer](INSTALL_SOURCE.md)
-
-   This is a script that will install InvokeAI and all its essential
-   third party libraries. In contrast to the previous installer, it
-   includes access to a "developer console" which will allow you to
-   access experimental features on the development branch.
-
-   This method is recommended for individuals who wish to stay
-   on the cutting edge of InvokeAI development and are not afraid
-   of occasional breakage.
-
-3. [Manual Installation](INSTALL_MANUAL.md)
+2. [Manual Installation](020_INSTALL_MANUAL.md)

    In this method you will manually run the commands needed to install
    InvokeAI and its dependencies. We offer two recipes: one suited to
    those who prefer the `conda` tool, and one suited to those who prefer
-   `pip` and Python virtual environments.
+   `pip` and Python virtual environments. In our hands the pip install
+   is faster and more reliable, but your mileage may vary.
+
+   Note that the conda installation method is currently deprecated and
+   will not be supported at some point in the future.

    This method is recommended for users who have previously used `conda`
    or `pip` in the past, developers, and anyone who wishes to remain on
    the cutting edge of future InvokeAI development and is willing to put
    up with occasional glitches and breakage.

-4. [Docker Installation](INSTALL_DOCKER.md)
+3. [Docker Installation](040_INSTALL_DOCKER.md)

    We also offer a method for creating Docker containers containing
    InvokeAI and its dependencies. This method is recommended for
    individuals who have experience with Docker containers and understand
    the pluses and minuses of a container-based install.
-
-5. [Jupyter Notebooks Installation](INSTALL_JUPYTER.md)
-
-   This method is suitable for running InvokeAI on a Google Colab
-   account. It is recommended for individuals who have previously
-   worked on the Colab and are comfortable with the Jupyter notebook
-   environment.
```
@@ -1,135 +0,0 @@

---
title: Manual Installation, Linux
---

# :fontawesome-brands-linux: Linux

## Installation

1. You will need to install the following prerequisites if they are not already
   available. Use your operating system's preferred installer.

    - Python (version 3.8.5 recommended; higher may work)
    - git

2. Install the Python Anaconda environment manager.

    ```bash
    ~$ wget https://repo.anaconda.com/archive/Anaconda3-2022.05-Linux-x86_64.sh
    ~$ chmod +x Anaconda3-2022.05-Linux-x86_64.sh
    ~$ ./Anaconda3-2022.05-Linux-x86_64.sh
    ```

    After installing anaconda, you should log out of your system and log back
    in. If the installation worked, your command prompt will be prefixed by the
    name of the current anaconda environment - `(base)`.

3. Copy the InvokeAI source code from GitHub:

    ```bash
    (base) ~$ git clone https://github.com/invoke-ai/InvokeAI.git
    ```

    This will create an `InvokeAI` folder where you will follow the rest of the
    steps.

4. Enter the newly-created InvokeAI folder. From this step forward make sure
   that you are working in the InvokeAI directory!

    ```bash
    (base) ~$ cd InvokeAI
    (base) ~/InvokeAI$
    ```

5. Use anaconda to copy necessary python packages, create a new python
   environment named `invokeai` and then activate the environment.

    !!! todo "For systems with a CUDA (Nvidia) card:"

        ```bash
        (base) ~/InvokeAI$ rm -rf src # (this is a precaution in case there is already a src directory)
        (base) ~/InvokeAI$ conda env create -f environment-cuda.yml
        (base) ~/InvokeAI$ conda activate invokeai
        (invokeai) ~/InvokeAI$
        ```

    !!! todo "For systems with an AMD card (using ROCm driver):"

        ```bash
        (base) ~/InvokeAI$ rm -rf src # (this is a precaution in case there is already a src directory)
        (base) ~/InvokeAI$ conda env create -f environment-AMD.yml
        (base) ~/InvokeAI$ conda activate invokeai
        (invokeai) ~/InvokeAI$
        ```

    After these steps, your command prompt will be prefixed by `(invokeai)` as
    shown above.

6. Load the big stable diffusion weights files and a couple of smaller
   machine-learning models:

    ```bash
    (invokeai) ~/InvokeAI$ python3 scripts/preload_models.py
    ```

    !!! note

        This script will lead you through the process of creating an account on Hugging Face,
        accepting the terms and conditions of the Stable Diffusion model license,
        and obtaining an access token for downloading. It will then download and
        install the weights files for you.

        Please look [here](INSTALLING_MODELS.md) for a manual process for doing
        the same thing.

7. Start generating images!

    !!! todo "Run InvokeAI!"

        !!! warning "IMPORTANT"

            Make sure that the conda environment is activated, which should create
            `(invokeai)` in front of your prompt!

        === "CLI"

            ```bash
            python scripts/invoke.py
            ```

        === "local Webserver"

            ```bash
            python scripts/invoke.py --web
            ```

        === "Public Webserver"

            ```bash
            python scripts/invoke.py --web --host 0.0.0.0
            ```

        To use an alternative model you may invoke the `!switch` command in
        the CLI, or pass `--model <model_name>` during `invoke.py` launch for
        either the CLI or the Web UI. See [Command Line
        Client](../features/CLI.md#model-selection-and-importation). The
        model names are defined in `configs/models.yaml`.

8. Subsequently, to relaunch the script, be sure to run `conda activate
   invokeai` (step 5, second command), enter the `InvokeAI` directory, and then
   launch the invoke script (step 7). If you forget to activate the 'invokeai'
   environment, the script will fail with multiple `ModuleNotFound` errors.

## Updating to newer versions of the script

This distribution is changing rapidly. If you used the `git clone` method
(step 3) to download the InvokeAI directory, then to update to the latest and
greatest version, launch the Anaconda window, enter `InvokeAI` and type:

```bash
(invokeai) ~/InvokeAI$ git pull
(invokeai) ~/InvokeAI$ rm -rf src # prevents conda freezing errors
(invokeai) ~/InvokeAI$ conda env update -f environment.yml
```

This will bring your local copy into sync with the remote one.
@@ -1,525 +0,0 @@

---
title: Manual Installation, macOS
---

# :fontawesome-brands-apple: macOS

Invoke AI runs quite well on M1 Macs and we have a number of M1 users in the
community.

While the repo does run on Intel Macs, we only have a couple of reports. If you
have an Intel Mac and run into issues, please create an issue on Github and we
will do our best to help.

## Requirements

- macOS 12.3 Monterey or later
- About 10GB of storage (and 10GB of data if your internet connection has data
  caps)
- Any M1 Mac or an Intel Mac with 4GB+ of VRAM (ideally more)

## Installation

!!! todo "Homebrew"

    First you will install the "brew" package manager. Skip this if brew is already installed.

    ```bash title="install brew (and Xcode command line tools)"
    /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
    ```

!!! todo "Conda Installation"

    Now there are two different ways to set up the Python (miniconda) environment:

    1. Standalone
    2. with pyenv

    If you don't know what we are talking about, choose Standalone. If you are familiar with python environments, choose "with pyenv"

=== "Standalone"
|
|
||||||
|
|
||||||
```bash title="Install cmake, protobuf, and rust"
|
|
||||||
brew install cmake protobuf rust
|
|
||||||
```
|
|
||||||
|
|
||||||
```bash title="Clone the InvokeAI repository"
|
|
||||||
# Clone the Invoke AI repo
|
|
||||||
git clone https://github.com/invoke-ai/InvokeAI.git
|
|
||||||
cd InvokeAI
|
|
||||||
```
|
|
||||||
|
|
||||||
Choose the appropriate architecture for your system and install miniconda:
|
|
||||||
|
|
||||||
=== "M1 arm64"
|
|
||||||
|
|
||||||
```bash title="Install miniconda for M1 arm64"
|
|
||||||
curl https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-arm64.sh \
|
|
||||||
-o Miniconda3-latest-MacOSX-arm64.sh
|
|
||||||
/bin/bash Miniconda3-latest-MacOSX-arm64.sh
|
|
||||||
```
|
|
||||||
|
|
||||||
=== "Intel x86_64"
|
|
||||||
|
|
||||||
```bash title="Install miniconda for Intel"
|
|
||||||
curl https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-x86_64.sh \
|
|
||||||
-o Miniconda3-latest-MacOSX-x86_64.sh
|
|
||||||
/bin/bash Miniconda3-latest-MacOSX-x86_64.sh
|
|
||||||
```
|
|
||||||
|
|
||||||
=== "with pyenv"
|
|
||||||
|
|
||||||
```bash
|
|
||||||
brew install pyenv-virtualenv
|
|
||||||
pyenv install anaconda3-2022.05
|
|
||||||
pyenv virtualenv anaconda3-2022.05
|
|
||||||
eval "$(pyenv init -)"
|
|
||||||
pyenv activate anaconda3-2022.05
|
|
||||||
```
|
|
||||||
|
|
||||||
!!! todo "Clone the Invoke AI repo"
|
|
||||||
|
|
||||||
```bash
|
|
||||||
git clone https://github.com/invoke-ai/InvokeAI.git
|
|
||||||
cd InvokeAI
|
|
||||||
```
|
|
||||||
|
|
||||||
!!! todo "Create the environment & install packages"
|
|
||||||
|
|
||||||
=== "M1 Mac"
|
|
||||||
|
|
||||||
```bash
|
|
||||||
PIP_EXISTS_ACTION=w CONDA_SUBDIR=osx-arm64 conda env create -f environment-mac.yml
|
|
||||||
```
|
|
||||||
|
|
||||||
=== "Intel x86_64 Mac"
|
|
||||||
|
|
||||||
```bash
|
|
||||||
PIP_EXISTS_ACTION=w CONDA_SUBDIR=osx-64 conda env create -f environment-mac.yml
|
|
||||||
```
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# Activate the environment (you need to do this every time you want to run SD)
|
|
||||||
conda activate invokeai
|
|
||||||
```
|
|
||||||
|
|
||||||
!!! info
|
|
||||||
|
|
||||||
`export PIP_EXISTS_ACTION=w` is a precaution to fix `conda env
|
|
||||||
create -f environment-mac.yml` never finishing in some situations. So
|
|
||||||
it isn't required but won't hurt.
|
|
||||||
|
|
||||||
!!! todo "Download the model weight files"
|
|
||||||
|
|
||||||
The `preload_models.py` script downloads and installs the model weight
|
|
||||||
files for you. It will lead you through the process of getting a Hugging Face
|
|
||||||
account, accepting the Stable Diffusion model weight license agreement, and
|
|
||||||
creating a download token:
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# This will take some time, depending on the speed of your internet connection
|
|
||||||
# and will consume about 10GB of space
|
|
||||||
python scripts/preload_models.py
|
|
||||||
```
|
|
||||||
|
|
||||||
!!! todo "Run InvokeAI!"
|
|
||||||
|
|
||||||
!!! warning "IMPORTANT"
|
|
||||||
|
|
||||||
Make sure that the conda environment is activated, which should create
|
|
||||||
`(invokeai)` in front of your prompt!
|
|
||||||
|
|
||||||
=== "CLI"
|
|
||||||
|
|
||||||
```bash
|
|
||||||
python scripts/invoke.py
|
|
||||||
```
|
|
||||||
|
|
||||||
=== "local Webserver"
|
|
||||||
|
|
||||||
```bash
|
|
||||||
python scripts/invoke.py --web
|
|
||||||
```
|
|
||||||
|
|
||||||
=== "Public Webserver"
|
|
||||||
|
|
||||||
```bash
|
|
||||||
python scripts/invoke.py --web --host 0.0.0.0
|
|
||||||
```
|
|
||||||
|
|
||||||
To use an alternative model you may invoke the `!switch` command in
|
|
||||||
the CLI, or pass `--model <model_name>` during `invoke.py` launch for
|
|
||||||
either the CLI or the Web UI. See [Command Line
|
|
||||||
Client](../features/CLI.md#model-selection-and-importation). The
|
|
||||||
model names are defined in `configs/models.yaml`.
|
|
||||||
|
|
||||||
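For example, switching to another model might look like this (a sketch;
`stable-diffusion-1.4` stands in for whatever entries your
`configs/models.yaml` actually defines):

```bash
# At launch, for either the CLI or the Web UI:
python scripts/invoke.py --model stable-diffusion-1.4

# Or from inside a running CLI session:
invoke> !switch stable-diffusion-1.4
```
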
---

## Common problems

After you followed all the instructions and try to run invoke.py, you might get
several errors. Here are the errors I've seen and found solutions for.

### Is it slow?

```bash title="Be sure to specify 1 sample and 1 iteration."
python ./scripts/orig_scripts/txt2img.py \
    --prompt "ocean" \
    --ddim_steps 5 \
    --n_samples 1 \
    --n_iter 1
```

---

### Doesn't work anymore?

PyTorch nightly includes support for MPS. Because of this, this setup is
inherently unstable. One morning I woke up and it no longer worked no matter
what I did until I switched to miniforge. However, I have another Mac that works
just fine with Anaconda. If you can't get it to work, please search a little
first because many of the errors will get posted and solved. If you can't find a
solution please [create an issue](https://github.com/invoke-ai/InvokeAI/issues).

One debugging step is to update to the latest version of PyTorch nightly.

```bash
conda install \
    pytorch \
    torchvision \
    -c pytorch-nightly \
    -n invokeai
```

If it takes forever to run `conda env create -f environment-mac.yml`, try this:

```bash
git clean -f
conda clean \
    --yes \
    --all
```

Or you could try to completely reset Anaconda:

```bash
conda update \
    --force-reinstall \
    -y \
    -n base \
    -c defaults conda
```

---

### "No module named cv2", torch, 'invokeai', 'transformers', 'taming', etc

There are several causes of these errors:

1. Did you remember to `conda activate invokeai`? If your terminal prompt begins
   with "(invokeai)" then you activated it. If it begins with "(base)" or
   something else you haven't.

2. You might've run `./scripts/preload_models.py` or `./scripts/invoke.py`
   instead of `python ./scripts/preload_models.py` or
   `python ./scripts/invoke.py`. The cause of this error is long so it's below.

   <!-- I could not find out where the error is, otherwise would have marked it as a footnote -->

3. If it says you're missing taming you need to rebuild your virtual
   environment.

    ```bash
    conda deactivate
    conda env remove -n invokeai
    conda env create -f environment-mac.yml
    ```

4. If you have activated the invokeai virtual environment and tried rebuilding
   it, maybe the problem could be that I have something installed that you don't
   and you'll just need to manually install it. Make sure you activate the
   virtual environment so it installs there instead of globally.

    ```bash
    conda activate invokeai
    pip install <package name>
    ```

You might also need to install Rust (I mention this again below).

---

### How many snakes are living in your computer?

You might have multiple Python installations on your system, in which case it's
important to be explicit and consistent about which one to use for a given
project. This is because virtual environments are coupled to the Python that
created them (and all the associated 'system-level' modules).

When you run `python` or `python3`, your shell searches the colon-delimited
locations in the `PATH` environment variable (`echo $PATH` to see that list) in
that order - first match wins. You can ask for the location of the first
`python3` found in your `PATH` with the `which` command like this:

```bash
% which python3
/usr/bin/python3
```

Anything in `/usr/bin` is
[part of the OS](https://developer.apple.com/library/archive/documentation/FileManagement/Conceptual/FileSystemProgrammingGuide/FileSystemOverview/FileSystemOverview.html#//apple_ref/doc/uid/TP40010672-CH2-SW6).
However, `/usr/bin/python3` is not actually python3, but rather a stub that
offers to install Xcode (which includes python 3). If you have Xcode installed
already, `/usr/bin/python3` will execute
`/Library/Developer/CommandLineTools/usr/bin/python3` or
`/Applications/Xcode.app/Contents/Developer/usr/bin/python3` (depending on which
Xcode you've selected with `xcode-select`).

Note that `/usr/bin/python` is an entirely different python - specifically,
python 2. Note: starting in macOS 12.3, `/usr/bin/python` no longer exists.

```bash
% which python3
/opt/homebrew/bin/python3
```

If you installed python3 with Homebrew and you've modified your path to search
for Homebrew binaries before system ones, you'll see the above path.

```bash
% which python
/opt/anaconda3/bin/python
```

If you have Anaconda installed, you will see the above path. There is a
`/opt/anaconda3/bin/python3` also.

We expect that `/opt/anaconda3/bin/python` and `/opt/anaconda3/bin/python3`
should actually be the _same python_, which you can verify by comparing the
output of `python3 -V` and `python -V`.

```bash
(invokeai) % which python
/Users/name/miniforge3/envs/invokeai/bin/python
```

The above is what you'll see if you have miniforge and correctly activated the
invokeai environment, while using the standalone setup instructions above.

If you otherwise installed via pyenv, you will get this result:

```bash
(anaconda3-2022.05) % which python
/Users/name/.pyenv/shims/python
```

It's all a mess and you should know
[how to modify the path environment variable](https://support.apple.com/guide/terminal/use-environment-variables-apd382cc5fa-4f58-4449-b20a-41c53c006f8f/mac)
if you want to fix it. Here's a brief hint of the most common ways you can
modify it (don't really have the time to explain it all here):

- ~/.zshrc
- ~/.bash_profile
- ~/.bashrc
- /etc/paths.d
- /etc/path

Which one you use will depend on what you have installed, except that putting a
file in /etc/paths.d is the way I prefer to do it.

Finally, to answer the question posed by this section's title, it may help to
list all of the `python` / `python3` things found in `$PATH` instead of just the
first hit. To do so, add the `-a` switch to `which`:

```bash
% which -a python3
...
```

This will show a list of all binaries which are actually available in your PATH.
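Purely as an illustration (the actual list depends entirely on what you have
installed, and your machine's output will differ):

```bash
% which -a python3
/opt/homebrew/bin/python3
/usr/bin/python3
```
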
---

### Debugging?

Tired of waiting for your renders to finish before you can see if it works?
Reduce the steps! The image quality will be horrible but at least you'll get
quick feedback.

```bash
python ./scripts/txt2img.py \
    --prompt "ocean" \
    --ddim_steps 5 \
    --n_samples 1 \
    --n_iter 1
```

---

### OSError: Can't load tokenizer for 'openai/clip-vit-large-patch14'

```bash
python scripts/preload_models.py
```

---

### "The operator [name] is not current implemented for the MPS device." (sic)

!!! example "example error"

    ```bash
    ... NotImplementedError: The operator 'aten::_index_put_impl_' is not current
    implemented for the MPS device. If you want this op to be added in priority
    during the prototype phase of this feature, please comment on
    https://github.com/pytorch/pytorch/issues/77764.
    As a temporary fix, you can set the environment variable
    `PYTORCH_ENABLE_MPS_FALLBACK=1` to use the CPU as a fallback for this op.
    WARNING: this will be slower than running natively on MPS.
    ```

The InvokeAI version includes this fix in
[environment-mac.yml](https://github.com/invoke-ai/InvokeAI/blob/main/environment-mac.yml).

### "Could not build wheels for tokenizers"
|
|
||||||
|
|
||||||
I have not seen this error because I had Rust installed on my computer before I
|
|
||||||
started playing with Stable Diffusion. The fix is to install Rust.
|
|
||||||
|
|
||||||
```bash
|
|
||||||
curl \
|
|
||||||
--proto '=https' \
|
|
||||||
--tlsv1.2 \
|
|
||||||
-sSf https://sh.rustup.rs | sh
|
|
||||||
```
|
|
||||||
|
|
||||||
---

### How come `--seed` doesn't work?

!!! Information

    Completely reproducible results are not guaranteed across PyTorch releases,
    individual commits, or different platforms. Furthermore, results may not be
    reproducible between CPU and GPU executions, even when using identical seeds.

    [PyTorch docs](https://pytorch.org/docs/stable/notes/randomness.html)

Second, we might have a fix that at least gets a consistent seed, sort of. We're
still working on it.

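In the meantime you can at least pin the seed for a single render and reuse it
(a sketch, assuming the `-S`/`--seed` switch described in the CLI
documentation):

```bash
invoke> "ocean" -S 42   # -S fixes the seed; repeat the same line to reuse it
```
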
### libiomp5.dylib error?

```bash
OMP: Error #15: Initializing libiomp5.dylib, but found libomp.dylib already initialized.
```

You are likely using an Intel package by mistake. Be sure to run conda with the
environment variable `CONDA_SUBDIR=osx-arm64`, like so:

`CONDA_SUBDIR=osx-arm64 conda install ...`

This error happens with Anaconda on Macs when the Intel-only `mkl` is pulled in
by a dependency.
[nomkl](https://stackoverflow.com/questions/66224879/what-is-the-nomkl-python-package-used-for)
is a metapackage designed to prevent this, by making it impossible to install
`mkl`, but if your environment is already broken it may not work.

Do _not_ use `os.environ['KMP_DUPLICATE_LIB_OK']='True'` or equivalents as this
masks the underlying issue of using Intel packages.

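Putting those two remedies together, one way to rebuild a clean ARM-only
environment might be (a sketch, assuming you are recreating the `invokeai`
environment from this repo's `environment-mac.yml`):

```bash
conda env remove -n invokeai                                    # start fresh
CONDA_SUBDIR=osx-arm64 conda env create -f environment-mac.yml  # ARM-only packages
conda install -n invokeai nomkl                                 # keep mkl from sneaking back in
```
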
---

### Not enough memory

This seems to be a common problem and is probably the underlying problem for a
lot of symptoms (listed below). The fix is to lower your image size or to add
`model.half()` right after the model is loaded. I should probably test it out.
I've read that the reason this fixes problems is because it converts the model
from 32-bit to 16-bit and that leaves more RAM for other things. I have no idea
how that would affect the quality of the images though.

See [this issue](https://github.com/CompVis/stable-diffusion/issues/71).

---

### "Error: product of dimension sizes > 2\*\*31'"

This error happens with img2img, which I haven't played with too much yet. But I
know it's because your image is too big or the resolution isn't a multiple of
32x32. Because the stable-diffusion model was trained on images that were 512 x
512, it's always best to use that output size (which is the default). However,
if you're using that size and you get the above error, try 256 x 256 or 512 x
256 or something as the source image.

BTW, 2\*\*31-1 =
[2,147,483,647](https://en.wikipedia.org/wiki/2,147,483,647#In_computing), which
is also 32-bit signed [LONG_MAX](https://en.wikipedia.org/wiki/C_data_types) in
C.

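As a concrete example, an img2img render at one of those safer sizes might look
like this (a sketch; `photo.png` is a hypothetical init image, and `-I`, `-W`,
`-H` are the CLI's init-image, width, and height switches):

```bash
invoke> "ocean" -I photo.png -W 512 -H 256
```
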
---

### I just got Rickrolled! Do I have a virus?

You don't have a virus. It's part of the project. Here's
[Rick](https://github.com/invoke-ai/InvokeAI/blob/main/assets/rick.jpeg) and
here's
[the code](https://github.com/invoke-ai/InvokeAI/blob/69ae4b35e0a0f6ee1af8bb9a5d0016ccb27e36dc/scripts/txt2img.py#L79)
that swaps him in. It's a NSFW filter, which IMO doesn't work very well (and we
call this "computer vision", sheesh).

---

### My images come out black

We might have this fixed; we are still testing.

There's a [similar issue](https://github.com/CompVis/stable-diffusion/issues/69)
on CUDA GPU's where the images come out green. Maybe it's the same issue?
Someone in that issue says to use "--precision full", but this fork actually
disables that flag. I don't know why, someone else provided that code and I
don't know what it does. Maybe the `model.half()` suggestion above would fix
this issue too. I should probably test it.

### "view size is not compatible with input tensor's size and stride"
|
|
||||||
|
|
||||||
```bash
|
|
||||||
File "/opt/anaconda3/envs/invokeai/lib/python3.10/site-packages/torch/nn/functional.py", line 2511, in layer_norm
|
|
||||||
return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)
|
|
||||||
RuntimeError: view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead.
|
|
||||||
```
|
|
||||||
|
|
||||||
Update to the latest version of invoke-ai/InvokeAI. We were patching pytorch but
|
|
||||||
we found a file in stable-diffusion that we could change instead. This is a
|
|
||||||
32-bit vs 16-bit problem.
|
|
||||||
|
|
||||||
### The processor must support the Intel bla bla bla

What? Intel? On an Apple Silicon?

```bash
Intel MKL FATAL ERROR: This system does not meet the minimum requirements for use of the Intel(R) Math Kernel Library. The processor must support the Intel(R) Supplemental Streaming SIMD Extensions 3 (Intel(R) SSSE3) instructions. The processor must support the Intel(R) Streaming SIMD Extensions 4.2 (Intel(R) SSE4.2) instructions. The processor must support the Intel(R) Advanced Vector Extensions (Intel(R) AVX) instructions.
```

This is due to the Intel `mkl` package getting picked up when you try to install
something that depends on it; Rosetta can translate some Intel instructions, but
not the specialized ones here. To avoid this, make sure to use the environment
variable `CONDA_SUBDIR=osx-arm64`, which restricts the Conda environment to only
use ARM packages, and use `nomkl` as described above.

---

### input types 'tensor<2x1280xf32>' and 'tensor<\*xf16>' are not broadcast compatible

May appear when just starting to generate, e.g.:

```bash
invoke> clouds
Generating: 0%| | 0/1 [00:00<?, ?it/s]/Users/[...]/dev/stable-diffusion/ldm/modules/embedding_manager.py:152: UserWarning: The operator 'aten::nonzero' is not currently supported on the MPS backend and will fall back to run on the CPU. This may have performance implications. (Triggered internally at /Users/runner/work/_temp/anaconda/conda-bld/pytorch_1662016319283/work/aten/src/ATen/mps/MPSFallback.mm:11.)
placeholder_idx = torch.where(
loc("mps_add"("(mpsFileLoc): /AppleInternal/Library/BuildRoots/20d6c351-ee94-11ec-bcaf-7247572f23b4/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShadersGraph/mpsgraph/MetalPerformanceShadersGraph/Core/Files/MPSGraphUtilities.mm":219:0)): error: input types 'tensor<2x1280xf32>' and 'tensor<*xf16>' are not broadcast compatible
LLVM ERROR: Failed to infer result type(s).
Abort trap: 6
/Users/[...]/opt/anaconda3/envs/invokeai/lib/python3.9/multiprocessing/resource_tracker.py:216: UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown
warnings.warn('resource_tracker: There appear to be %d '
```
@@ -1,137 +0,0 @@

---
title: Manual Installation, Windows
---

# :fontawesome-brands-windows: Windows

## **Notebook install (semi-automated)**

We have a
[Jupyter notebook](https://github.com/invoke-ai/InvokeAI/blob/main/notebooks/Stable-Diffusion-local-Windows.ipynb)
with cell-by-cell installation steps. It will download the code in this repo as
one of the steps, so instead of cloning this repo, simply download the notebook
from the link above and load it up in VSCode (with the appropriate extensions
installed)/Jupyter/JupyterLab and start running the cells one-by-one.

Note that you will need NVIDIA drivers, Python 3.10, and Git installed beforehand.

## **Manual Install with Conda**

1. Install Anaconda3 (miniconda3 version) from [here](https://docs.anaconda.com/anaconda/install/windows/)

2. Install Git from [here](https://git-scm.com/download/win)

3. Launch Anaconda from the Windows Start menu. This will bring up a command
   window. Type all the remaining commands in this window.

4. Run the command:

    ```batch
    git clone https://github.com/invoke-ai/InvokeAI.git
    ```

    This will create an `InvokeAI` folder where you will follow the rest of
    the steps.

5. Enter the newly-created InvokeAI folder. From this step forward make sure that you are working in the InvokeAI directory!

    ```batch
    cd InvokeAI
    ```

6. Run the following commands:

    !!! todo "For systems with a CUDA (Nvidia) card:"

        ```bash
        rmdir src # (this is a precaution in case there is already a src directory)
        conda env create -f environment-cuda.yml
        conda activate invokeai
        (invokeai)>
        ```

    !!! todo "For systems with an AMD card (using ROCm driver):"

        ```bash
        rmdir src # (this is a precaution in case there is already a src directory)
        conda env create -f environment-AMD.yml
        conda activate invokeai
        (invokeai)>
        ```

    This will install all python requirements and activate the "invokeai" environment
    which sets PATH and other environment variables properly.

7. Load the big stable diffusion weights files and a couple of smaller machine-learning models:

    ```bash
    python scripts/preload_models.py
    ```

    !!! note

        This script will lead you through the process of creating an account on Hugging Face,
        accepting the terms and conditions of the Stable Diffusion model license, and
        obtaining an access token for downloading. It will then download and install the
        weights files for you.

        Please look [here](INSTALLING_MODELS.md) for a manual process for doing the
        same thing.

8. Start generating images!

    !!! example ""

        !!! warning "IMPORTANT"

            Make sure that the conda environment is activated, which should create
            `(invokeai)` in front of your prompt!

        === "CLI"

            ```bash
            python scripts/invoke.py
            ```

        === "local Webserver"

            ```bash
            python scripts/invoke.py --web
            ```

        === "Public Webserver"

            ```bash
            python scripts/invoke.py --web --host 0.0.0.0
            ```

        To use an alternative model you may invoke the `!switch` command in
        the CLI, or pass `--model <model_name>` during `invoke.py` launch for
        either the CLI or the Web UI. See [Command Line
        Client](../features/CLI.md#model-selection-and-importation). The
        model names are defined in `configs/models.yaml`.

9. Subsequently, to relaunch the script, first activate the Anaconda
   command window (step 3), enter the InvokeAI directory (step 5, `cd
   \path\to\InvokeAI`), run `conda activate invokeai` (step 6), and then
   launch the invoke script (step 8).

!!! tip "Tildebyte has written an alternative"

    ["Easy peasy Windows install"](https://github.com/invoke-ai/InvokeAI/wiki/Easy-peasy-Windows-install)
    which uses the Windows Powershell and pew. If you are having trouble with
    Anaconda on Windows, give this a try (or try it first!)

---

This distribution is changing rapidly. If you used the `git clone` method
(step 4) to download the InvokeAI directory, then to update to the
latest and greatest version, launch the Anaconda window, enter
`InvokeAI`, and type:

```bash
git pull
conda env update
```

This will bring your local copy into sync with the remote one.

```diff
@@ -3,10 +3,10 @@ info:
   title: Stable Diffusion
   description: |-
     TODO: Description Here

     Some useful links:
     - [Stable Diffusion Dream Server](https://github.com/lstein/stable-diffusion)

   license:
     name: MIT License
     url: https://github.com/lstein/stable-diffusion/blob/main/LICENSE
@@ -36,7 +36,7 @@ paths:
           description: successful operation
           content:
             image/png:
               schema:
                 type: string
                 format: binary
         '404':
@@ -66,7 +66,7 @@ paths:
           description: successful operation
           content:
             image/png:
               schema:
                 type: string
                 format: binary
         '404':
```
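
Per this spec, a successful response returns the generated image as binary
`image/png`. Fetching one might look like the sketch below; the endpoint path
is hypothetical, since the hunks above elide the actual paths.

```bash
# Hypothetical path; substitute a real one from the full spec.
curl -o result.png "http://localhost:9090/api/example-image-endpoint"
```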