Commit Graph

2437 Commits

Author SHA1 Message Date
damian
fb312f9ed3 use the correct value - whoops 2023-01-26 17:30:29 +01:00
damian
729752620b trying out JPPhoto's patch on vast.ai 2023-01-26 17:27:33 +01:00
damian
8ed8bf52d0 use 'auto' slice size 2023-01-26 17:04:22 +01:00
Damian Stewart
5e7ed964d2 wip updating docs 2023-01-25 23:49:38 +01:00
Damian Stewart
93a24445dc Merge remote-tracking branch 'upstream/main' into diffusers_cross_attention_control_reimplementation 2023-01-25 23:05:39 +01:00
Damian Stewart
95d147c5df MPS support: negatory 2023-01-25 23:03:30 +01:00
Damian Stewart
41aed57449 wip tracking down MPS slicing support 2023-01-25 22:27:23 +01:00
Damian Stewart
34a3f4a820 cleanup 2023-01-25 21:47:17 +01:00
Damian Stewart
1f5ad1b05e sliced swap working 2023-01-25 21:38:27 +01:00
blessedcoolant
eb2ca4970b Add Dutch Localization Build 2023-01-26 08:56:38 +13:00
blessedcoolant
c2b10e6461 Add Dutch Localization 2023-01-26 08:56:38 +13:00
Dennis
190d266060 Dutch localization 2023-01-26 08:56:38 +13:00
Damian Stewart
c52dd7e3f4 Merge branch 'diffusers_cross_attention_control_reimplementation' of github.com:damian0815/InvokeAI into diffusers_cross_attention_control_reimplementation 2023-01-25 14:51:15 +01:00
Damian Stewart
a4aea1540b more wip sliced attention (.swap doesn't work yet) 2023-01-25 14:51:08 +01:00
Kevin Turner
3c53b46a35 Merge branch 'main' into diffusers_cross_attention_control_reimplementation 2023-01-24 19:32:34 -08:00
Lincoln Stein
61403fe306 fix second conflict in CLI.py 2023-01-24 14:21:21 -05:00
Lincoln Stein
b2f288d6ec fix conflict in CLI.py 2023-01-24 14:20:40 -05:00
Lincoln Stein
eaf7934d74 [Enhancements] Allow user to specify VAE with !import_model and delete underlying model with !del_model (#2369)
Fix two deficiencies in the CLI's support for model management:

1. `!import_model` did not allow the user to specify a VAE file. This is now fixed.
2. `!del_model` did not offer the user the opportunity to delete the underlying
   weights file or diffusers directory. This is now fixed.
2023-01-24 13:43:16 -05:00
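The second fix boils down to removing whatever is on disk behind the model entry: either a single checkpoint file or a diffusers directory. A minimal, hypothetical sketch (the `weights`/`path` field names are assumptions, not InvokeAI's actual `!del_model` code):

```python
# Hypothetical sketch only; entry field names are assumed, not InvokeAI's API.
import shutil
from pathlib import Path

def delete_model_files(entry: dict) -> None:
    """Remove the weights file or diffusers directory backing a model entry."""
    target = Path(entry.get("weights") or entry.get("path", ""))
    if target.is_file():
        target.unlink()        # single .ckpt / .safetensors weights file
    elif target.is_dir():
        shutil.rmtree(target)  # diffusers-format model directory
```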
Lincoln Stein
079ec4cb5c Merge branch 'main' into feat/import-with-vae 2023-01-24 13:16:00 -05:00
Lincoln Stein
f521f5feba improve UI of textual inversion frontend (#2333)
- The file selection box now accepts directories that don't exist yet.
- Fixed a crash when resume is selected and no files are available to resume from.
2023-01-24 12:22:17 -05:00
Lincoln Stein
00839d02ab Merge branch 'main' into lstein-improve-ti-frontend 2023-01-24 11:53:03 -05:00
Lincoln Stein
ce52d0c42b Merge branch 'main' into feat/import-with-vae 2023-01-24 11:52:40 -05:00
Lincoln Stein
f687d90bca [feat] Better status reporting when loading embeds and concepts (#2372)
This PR improves the console reporting around recognizing trigger tokens and
loading their embeddings.

1. Do not report "concept is not known to HuggingFace" if the trigger term is
   in fact a local embedding trigger.
2. When a trigger term is first recognized during a session, report the fact.

This should help debug embedding issues in the future.

Note that the local embeddings produced by the new InvokeAI TI training
script default to the format <trigger> with literal angle brackets. This
sets them off from the rest of the text well and will enable
autocomplete at some point in the future. However, this means that they
supersede like-named HuggingFace concepts, and may cause problems for
people uploading them to the HuggingFace repository (although that
problem already exists).
2023-01-24 09:35:53 -05:00
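A minimal sketch of the lookup order this describes: consult local embedding triggers before HuggingFace concepts, and report a trigger term only the first time it is recognized in a session. Function names and console messages are illustrative assumptions, not InvokeAI's actual loader:

```python
# Illustrative only; names and messages are assumptions, not InvokeAI code.
_seen_triggers: set = set()

def resolve_trigger(term: str, local_triggers: set, hf_concepts: set):
    """Return a trigger term's source, reporting it once per session."""
    if term in local_triggers:
        source = "local embedding"      # takes precedence over HF concepts
    elif term in hf_concepts:
        source = "HuggingFace concept"
    else:
        print(f">> {term}: concept is not known to HuggingFace")
        return None
    if term not in _seen_triggers:      # report only on first recognition
        _seen_triggers.add(term)
        print(f">> Recognized trigger term {term} ({source})")
    return source
```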
Lincoln Stein
7473d814f5 remove original setup.py 2023-01-24 09:11:05 -05:00
Lincoln Stein
b2c30c2093 Merge branch 'main' into bugfix/embed-loading-messages 2023-01-24 09:08:13 -05:00
Lincoln Stein
a7048eea5f Merge branch 'main' into feat/import-with-vae 2023-01-24 09:07:41 -05:00
Lincoln Stein
87c9398266 [enhancement] import .safetensors ckpt files directly (#2353)
This small fix makes it possible to import and run safetensors ckpt
files directly without doing a conversion step first.
2023-01-24 09:06:49 -05:00
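For context, the difference comes down to the loading call: `.safetensors` files are read with the `safetensors` library instead of `torch.load`. A hedged sketch (the helper name is an assumption, not InvokeAI's import code):

```python
# Sketch only; not InvokeAI's actual import path.
from pathlib import Path

import torch
from safetensors.torch import load_file

def load_checkpoint(checkpoint_path: str) -> dict:
    """Load a checkpoint from either a pickled .ckpt or a .safetensors file."""
    path = Path(checkpoint_path)
    if path.suffix == ".safetensors":
        return load_file(path, device="cpu")     # tensor-only format, no pickle
    return torch.load(path, map_location="cpu")  # legacy pickled checkpoint
```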
Damian Stewart
63c6019f92 sliced attention processor wip (untested) 2023-01-24 14:46:32 +01:00
Lincoln Stein
9f32daab2d Merge branch 'main' into lstein-import-safetensors 2023-01-23 21:58:07 -05:00
Lincoln Stein
884768c39d Make sure --free_gpu_mem still works when using CKPT-based diffuser model (#2367)
This PR attempts to fix the `--free_gpu_mem` option, which stopped working for
the CKPT-based diffusers model after #1583.

I noticed that, after #1583, memory usage did not decrease after generating an
image even when the `--free_gpu_mem` option was enabled. It turns out that the
option was not propagated into the `Generator` instance, so generation always
ran without the memory-saving procedure.

This PR is also related to #2326. Initially, I was trying to make
`--free_gpu_mem` work on the 🤗 diffusers model as well. In the process, I
noticed that InvokeAI raises an exception when `--free_gpu_mem` is enabled. I
fixed that quickly by simply ignoring the exception and producing a warning
message in the user's console.
2023-01-23 21:48:23 -05:00
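A hedged sketch of the propagation issue: the flag has to reach the object that actually runs generation, which can then release cached GPU memory afterwards. Class and attribute names below are placeholders, not InvokeAI's real `Generator`:

```python
# Placeholder names throughout; illustrates flag propagation plus a typical
# CUDA cache-release step, not InvokeAI's actual implementation.
import gc

import torch

class Generator:
    def __init__(self, model, free_gpu_mem: bool = False):
        self.model = model
        self.free_gpu_mem = free_gpu_mem       # must be passed in explicitly

    def generate(self, *args, **kwargs):
        result = self.model(*args, **kwargs)
        if self.free_gpu_mem:
            gc.collect()
            if torch.cuda.is_available():
                torch.cuda.empty_cache()       # release cached GPU allocations
        return result
```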
Lincoln Stein
bc2194228e stability improvements
- provide full traceback when a model fails to load
- fix VAE record for VoxelArt; otherwise load fails
2023-01-23 21:40:27 -05:00
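The first bullet amounts to printing the full traceback instead of only the exception message; a small illustrative sketch with placeholder names:

```python
# Illustrative only; `loader` and the message format are placeholders.
import traceback

def load_model_or_report(loader, model_name: str):
    try:
        return loader(model_name)
    except Exception:
        print(f"** Model {model_name} failed to load:")
        print(traceback.format_exc())  # full traceback, not just the message
        return None
```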
Lincoln Stein
10c3afef17 Merge branch 'main' into bugfix/free-gpu-mem-diffuser 2023-01-23 21:15:12 -05:00
Lincoln Stein
98e9721101 correct fail-to-resume error
- applied https://github.com/huggingface/diffusers/pull/2072 to fix an error
  in the epoch calculation that caused the script not to resume from the
  latest checkpoint when asked to.
2023-01-23 21:04:07 -05:00
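The bug class here is recovering the starting epoch and step from a saved `global_step`. A hedged sketch of the usual diffusers-style resume arithmetic, shown only to illustrate the kind of calculation the linked PR corrects (it is not the patch itself):

```python
# Illustration of diffusers-style resume arithmetic; not the actual patch.
def resume_position(global_step: int,
                    num_update_steps_per_epoch: int,
                    gradient_accumulation_steps: int):
    """Map a saved optimizer step count back to (first_epoch, resume_step)."""
    first_epoch = global_step // num_update_steps_per_epoch
    resume_step = (global_step % num_update_steps_per_epoch) * gradient_accumulation_steps
    return first_epoch, resume_step
```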
blessedcoolant
66babb2e81 Japanese Localization Build 2023-01-24 09:07:29 +13:00
blessedcoolant
31a967965b Add Japanese Localization 2023-01-24 09:07:29 +13:00
Katsuyuki-Karasawa
b9c9b947cd update japanese translation 2023-01-24 09:07:29 +13:00
唐澤 克幸
1eee08a070 add Japanese Translation 2023-01-24 09:07:29 +13:00
Lincoln Stein
aca1b61413 [Feature] Add interactive diffusers model merger (#2388)
This PR adds `scripts/merge_fe.py`, which merges any two or three diffusers
models registered in InvokeAI's `models.yaml` and registers the resulting
merged model as well.

Currently the script only works if all of the models to be merged are known by
their repo_ids. Local models, including those converted from ckpt files, will
cause a crash due to a bug in the diffusers `checkpoint_merger.py` code. I
have made a PR against huggingface/diffusers which fixes this:
https://github.com/huggingface/diffusers/pull/2060
2023-01-23 09:27:05 -05:00
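For reference, the merge itself is exposed in diffusers as the `checkpoint_merger` community pipeline; a hedged usage sketch (the repo_ids and interpolation settings are example values, and `scripts/merge_fe.py` wraps this kind of call behind an interactive front end):

```python
# Example values only; this is not the InvokeAI script itself.
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    custom_pipeline="checkpoint_merger",   # community pipeline providing .merge()
)
merged = pipe.merge(
    ["CompVis/stable-diffusion-v1-4", "runwayml/stable-diffusion-v1-5"],
    interp="sigmoid",   # or "inv_sigmoid", "add_diff", or None for a weighted sum
    alpha=0.4,          # weight given to the second model
)
merged.save_pretrained("merged-model")
```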
Lincoln Stein
e18beaff9c Merge branch 'main' into feat/merge-script 2023-01-23 09:05:38 -05:00
Kevin Turner
d7554b01fd fix typo in prompt 2023-01-23 00:24:06 -08:00
Kevin Turner
70f8793700 Merge branch 'main' into feat/import-with-vae 2023-01-23 00:17:46 -08:00
Kevin Turner
0d4e6cbff5 Merge branch 'main' into bugfix/embed-loading-messages 2023-01-23 00:12:33 -08:00
Kevin Turner
ea61bf2c94 [bugfix] ckpt conversion script respects cache in ~/invokeai/models (#2395) 2023-01-23 00:07:23 -08:00
Lincoln Stein
7dead7696c fixed setup.py to install the new scripts 2023-01-23 00:43:15 -05:00
Lincoln Stein
ffcc5ad795 conversion script uses invokeai models cache by default 2023-01-23 00:35:16 -05:00
Lincoln Stein
48deb3e49d add model merging documentation and launcher script menu entries 2023-01-23 00:20:28 -05:00
Lincoln Stein
6c31225d19 create small module for merge importation logic 2023-01-22 18:07:53 -05:00
Damian Stewart
c0610f7cb9 pass missing value 2023-01-22 18:19:06 +01:00
Damian Stewart
313b206ff8 squash float16/float32 mismatch on linux 2023-01-22 18:13:12 +01:00
Lincoln Stein
f0fe483915 Merge branch 'main' into feat/merge-script 2023-01-21 18:42:40 -05:00