Commit Graph

3222 Commits

Author SHA1 Message Date
Lincoln Stein
eb8d87e185
Merge branch 'main' into refactor_use_compel 2023-02-22 12:34:16 -05:00
Lincoln Stein
a4afb69615
fix crash in textual inversion with "num_samples=0" error (#2762)
- At some point pathlib was added to the list of imported modules, and
this broke the os.path code that assembled the sample data set.

- Now fixed by replacing os.path calls with Path methods
2023-02-22 12:31:28 -05:00
Lincoln Stein
8b7925edf3 fix crash in textual inversion with "num_samples=0" error
- At some point pathlib was added to the list of imported modules, and this
broke the os.path code that assembled the sample data set.

- Now fixed by replacing os.path calls with Path methods
2023-02-22 11:29:30 -05:00
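The fix above can be sketched as follows. This is an illustrative reconstruction, not the actual training script: `gather_sample_files` and its directory-listing logic are assumptions, showing only the os.path-to-Path substitution the commit describes.

```python
from pathlib import Path

def gather_sample_files(data_root: str) -> list[Path]:
    """Collect sample files using Path methods instead of os.path,
    so importing pathlib elsewhere cannot shadow or break the code."""
    root = Path(data_root)
    # Path.iterdir() replaces the os.listdir() + os.path.join() pattern
    return sorted(p for p in root.iterdir() if p.is_file())
```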
Damian Stewart
3f5d8c3e44 remove inaccurate docstring 2023-02-22 13:18:39 +01:00
Damian Stewart
97eac58a50 fix blend tokenization reporting; fix LDM checkpoint support 2023-02-22 10:29:42 +01:00
Damian Stewart
cedbe8fcd7 fix .blend 2023-02-22 09:04:23 +01:00
Jonathan
a461875abd
Merge branch 'main' into refactor_use_compel 2023-02-21 21:14:28 -06:00
Lincoln Stein
ab018ccdfe
Fallback to using filename to trigger embeddings (#2752)
Lots of earlier embeds use a common trigger token such as * or the
Hebrew letter shin. Previously, the textual inversion manager would
refuse to load the second and subsequent embeddings that used a
previously-claimed trigger. Now, when this case is encountered, the
trigger token is replaced by <filename> and the user is informed of the
fact.
2023-02-21 21:58:11 -05:00
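The fallback behavior described above can be sketched roughly like this. The registry shape, function name, and angle-bracket trigger format are illustrative assumptions, not the textual inversion manager's actual API:

```python
from pathlib import Path

def register_trigger(registry: dict, trigger: str, embed_path: str) -> str:
    """Register an embedding's trigger token; if an earlier embedding
    already claimed it, fall back to the file's name as the trigger
    and tell the user, rather than refusing to load the embedding."""
    if trigger in registry:
        fallback = f"<{Path(embed_path).stem}>"  # e.g. "<dog>" for dog.pt
        print(f"trigger {trigger!r} already claimed; using {fallback!r} instead")
        trigger = fallback
    registry[trigger] = embed_path
    return trigger
```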
Lincoln Stein
d41dcdfc46 move trigger_str registration into try block 2023-02-21 21:38:42 -05:00
Lincoln Stein
7f99efc5df require diffusers 0.13 2023-02-21 17:28:07 -05:00
Lincoln Stein
0a6d8b4855
Merge branch 'main' into refactor_use_compel 2023-02-21 17:19:48 -05:00
Lincoln Stein
5e41811fb5 move trigger text munging to upper level per review 2023-02-21 17:04:42 -05:00
Jonathan
1d0ba4a1a7
Merge branch 'main' into bugfix/filename-embedding-fallback 2023-02-21 13:12:34 -06:00
blessedcoolant
9e5aa645a7
Fix crashing when using 2.1 model (#2757)
We now require more free memory to avoid attention slicing. 17.5% free
was not sufficient headroom in all cases, so now we require 25%.
2023-02-22 08:03:51 +13:00
Jonathan
71bbd78574
Fix crashing when using 2.1 model
We now require more free memory to avoid attention slicing. 17.5% free was not sufficient headroom, so now we require 25%.
2023-02-21 12:35:03 -06:00
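The headroom check behind this fix can be sketched as a simple fraction test; the function name is hypothetical, and in practice the free/total byte counts would come from something like `torch.cuda.mem_get_info()`:

```python
def needs_attention_slicing(free_bytes: int, total_bytes: int,
                            required_free_fraction: float = 0.25) -> bool:
    """Enable attention slicing unless at least 25% of VRAM is free.
    The earlier 17.5% threshold was not enough headroom in all cases."""
    return free_bytes / total_bytes < required_free_fraction
```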
blessedcoolant
d5f524a156
Merge branch 'main' into bugfix/filename-embedding-fallback 2023-02-22 06:13:41 +13:00
Jonathan
3ab9d02883
Fixed embiggening crash due to clear_cuda_cache not being passed on and bad cuda stats initialization. (#2756) 2023-02-22 06:12:24 +13:00
Jonathan
da04b11a31
Merge branch 'main' into bugfix/filename-embedding-fallback 2023-02-21 10:52:13 -06:00
Lincoln Stein
9436f2e3d1 alphabetize trigger strings 2023-02-21 06:23:34 -05:00
Lincoln Stein
7fadd5e5c4
performance: low-memory option for calculating guidance sequentially (#2732)
In theory, this reduces peak memory consumption by doing the conditioned
and un-conditioned predictions one after the other instead of in a
single mini-batch.

In practice, it doesn't reduce the reported "Max VRAM used for this
generation" for me, even without xformers. (But it does slow things down
by a good 18%.)

That suggests to me that the peak memory usage is during VAE decoding,
not the diffusion unet, but ymmv. It does [improve things for gogurt's
16 GB
M1](https://github.com/invoke-ai/InvokeAI/pull/2732#issuecomment-1436187407),
so it seems worthwhile.

To try it out, use the `--sequential_guidance` option (see
`ldm/invoke/args.py`, lines 487-492 at commit 2dded68267):
2023-02-20 23:00:54 -05:00
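The batched-versus-sequential trade-off above can be sketched with a toy model. This is a minimal illustration of classifier-free guidance, not InvokeAI's actual generator code; `unet` here is any callable taking a batch of latents and conditionings:

```python
def guided_prediction(unet, latent, cond, uncond, scale, sequential=False):
    """Classifier-free guidance over one latent. Batched mode runs the
    unconditioned and conditioned predictions as a single mini-batch of
    two; sequential mode runs them one after the other, which roughly
    halves peak activation memory at some cost in speed."""
    if sequential:
        # two forward passes, one input each: lower peak memory
        e_uncond = unet([latent], [uncond])[0]
        e_cond = unet([latent], [cond])[0]
    else:
        # one forward pass with a batch of two
        e_uncond, e_cond = unet([latent, latent], [uncond, cond])
    # standard guidance formula: push the prediction toward the conditioned one
    return e_uncond + scale * (e_cond - e_uncond)
```

Both paths compute the same result; only the peak memory profile differs.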
Lincoln Stein
4c2a588e1f
Merge branch 'main' into perf/lowmem_sequential_guidance 2023-02-20 22:40:31 -05:00
Lincoln Stein
5f9de762ff
update installation docs for 2.3.1 installer screens (#2749)
This PR updates the manual page for automatic installation, and contains
screenshots of the new installer screens.
2023-02-20 22:40:02 -05:00
Lincoln Stein
91f7abb398 replace repeated triggers with <filename> 2023-02-20 22:33:13 -05:00
Damian Stewart
6420b81a5d Merge remote-tracking branch 'upstream/main' into refactor_use_compel 2023-02-20 23:34:38 +01:00
Lincoln Stein
b6ed5eafd6 update installation docs for 2.3.1 installer screens 2023-02-20 17:24:52 -05:00
blessedcoolant
694d5aa2e8
Add 'update' action to launcher script (#2636)
- Adds an update action to launcher script
- This action calls the new python script `invokeai-update`, which
prompts the user to update to the latest release version, the main
development version, or an arbitrary git tag or branch name.
- It then uses `pip` to update to whatever tag was specified.

The user interface (such as it is) looks like this:

![updater-screenshot](https://user-images.githubusercontent.com/111189/218291539-e5542662-6bfd-46ef-8ea9-655ca77392b7.png)
2023-02-21 11:17:22 +13:00
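A minimal sketch of the pip step the updater performs. The archive URL pattern and function name are assumptions for illustration; the real `invokeai-update` script adds an interactive prompt around this:

```python
import subprocess
import sys

def build_update_command(version: str) -> list[str]:
    """Build the pip command that would install a given release tag,
    branch name, or 'main' from the project's GitHub archive."""
    url = f"https://github.com/invoke-ai/InvokeAI/archive/{version}.zip"
    cmd = [sys.executable, "-m", "pip", "install", "--upgrade", url]
    # subprocess.run(cmd, check=True)  # uncomment to actually update
    return cmd
```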
Lincoln Stein
833079140b
Merge branch 'main' into enhance/update-menu 2023-02-20 17:16:20 -05:00
Lincoln Stein
fd27948c36
Merge branch 'main' into perf/lowmem_sequential_guidance 2023-02-20 17:15:33 -05:00
Damian Stewart
1dfaaa2a57 fix web ui issues 2023-02-20 22:58:07 +01:00
Lincoln Stein
bac6b50dd1
During textual inversion training, skip over non-image files (#2747)
- The TI script was looping over all files in the training image
directory, regardless of whether they were image files or not. This PR
adds a check for image file extensions.
- Closes #2715
2023-02-20 16:17:32 -05:00
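The extension check can be sketched like this; the exact extension set and function name are illustrative assumptions, not copied from the TI script:

```python
from pathlib import Path

# assumed set of extensions; the actual script's list may differ
IMAGE_EXTENSIONS = {".png", ".jpg", ".jpeg", ".gif", ".bmp", ".webp"}

def image_files(training_dir: str) -> list[Path]:
    """Return only image files from the training directory, so stray
    text or metadata files no longer break the training loop."""
    return sorted(
        p for p in Path(training_dir).iterdir()
        if p.is_file() and p.suffix.lower() in IMAGE_EXTENSIONS
    )
```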
blessedcoolant
a30c91f398
Merge branch 'main' into bugfix/textual-inversion-training 2023-02-21 09:58:19 +13:00
Lincoln Stein
17294bfa55
restore ability of textual inversion manager to read .pt files (#2746)
- Fixes longstanding bug in the token vector size code which caused .pt
files to be assigned the wrong token vector length. These were then
tossed out during directory scanning.
2023-02-20 15:34:56 -05:00
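The vector-length logic being fixed can be sketched as below. The dict layout (`string_to_param` holding a tensor, as legacy .pt embeddings typically do) is an assumption for illustration; the key point is that the length is the tensor's last dimension, not its first:

```python
def embedding_vector_length(embed_data: dict) -> int:
    """Given a legacy .pt embedding already loaded via torch.load,
    report its token vector length. Legacy files typically look like
    {'string_to_param': {'*': tensor}} where tensor has shape
    (num_vectors, vector_length)."""
    params = embed_data["string_to_param"]
    tensor = next(iter(params.values()))
    # last dimension is the vector length (e.g. 768 for SD 1.x models);
    # using the wrong axis caused valid .pt files to be tossed out
    return tensor.shape[-1]
```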
Lincoln Stein
3fa1771cc9
Merge branch 'main' into perf/lowmem_sequential_guidance 2023-02-20 15:20:15 -05:00
Lincoln Stein
f3bd386ff0
Merge branch 'main' into bugfix/textual-inversion-training 2023-02-20 15:19:53 -05:00
Lincoln Stein
8486ce31de
Merge branch 'main' into bugfix/embedding-vector-length 2023-02-20 15:19:36 -05:00
Lincoln Stein
1d9845557f reduced verbosity of embed loading messages 2023-02-20 15:18:55 -05:00
blessedcoolant
dc9268f772
[WebUI] Symmetry Fix (#2745)
Symmetry now has an on/off toggle and is not passed when disabled.
Symmetry settings have been moved to their own accordion.
2023-02-21 08:47:23 +13:00
Lincoln Stein
47ddc00c6a in textual inversion training, skip over non-image files
- Closes #2715
2023-02-20 14:44:10 -05:00
Lincoln Stein
0d22fd59ed restore ability of textual inversion manager to read .pt files
- Fixes longstanding bug in the token vector size code which caused
  .pt files to be assigned the wrong token vector length. These
  were then tossed out during directory scanning.
2023-02-20 14:34:14 -05:00
blessedcoolant
d5efd57c28 Merge branch 'symmetry-fix' of https://github.com/blessedcoolant/InvokeAI into symmetry-fix 2023-02-21 07:44:34 +13:00
blessedcoolant
b52a92da7e build: symmetry-fix-2 2023-02-21 07:43:56 +13:00
blessedcoolant
b949162e7e Revert Symmetry Big Size Input 2023-02-21 07:42:20 +13:00
blessedcoolant
5409991256
Merge branch 'main' into symmetry-fix 2023-02-21 07:29:53 +13:00
blessedcoolant
be1bcbc173 build: symmetry-fix 2023-02-21 07:28:25 +13:00
blessedcoolant
d6196e863d Move symmetry settings to their own accordion 2023-02-21 07:25:24 +13:00
blessedcoolant
63e790b79b
fix crash in CLI when --save_intermediates called (#2744)
Fixes #2733
2023-02-21 07:16:45 +13:00
Lincoln Stein
cf53bba99e
Merge branch 'main' into bugfix/save-intermediates 2023-02-20 12:51:53 -05:00
Lincoln Stein
ed4c8f6a8a fix crash in CLI when --save_intermediates called
Fixes #2733
2023-02-20 12:50:32 -05:00
Lincoln Stein
aab8263c31
Fix crash on calling diffusers' prepare_attention_mask (#2743)
Diffusers' `prepare_attention_mask` was crashing when we didn't pass in
a batch size.
2023-02-20 12:35:33 -05:00
Jonathan
b21bd6f428
Fix crash on calling diffusers' prepare_attention_mask
Diffusers' `prepare_attention_mask` was crashing when we didn't pass in a batch size.
2023-02-20 11:12:47 -06:00