Commit Graph

566 Commits

Author SHA1 Message Date
Lincoln Stein
29b348ece1 fix corrupted outputs/.next_prefix file
- Since 2.3.2, InvokeAI stores the next PNG file's numeric prefix in a
  file named `.next_prefix` in the outputs directory. This avoids the
  overhead of doing a directory listing to find out which file number
  comes next.

- The code uses advisory locking to prevent corruption of this file when
  multiple InvokeAI processes try to access it simultaneously, but some
  users have nevertheless experienced corruption of the file.

- This PR addresses the problem by detecting a potentially corrupted
  `.next_prefix` file and falling back to the directory listing method.
  A fixed version of the file is then written out.

- Closes #3001
2023-03-23 22:07:05 -04:00
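
A minimal sketch of the scheme described in this commit, not InvokeAI's actual code: the function name, the POSIX-only fcntl locking, and the filename pattern are assumptions. It reads the cached prefix under an advisory lock and, if the file is missing or unparseable, falls back to scanning the outputs directory and writes out a repaired file.

```python
import fcntl
import re
from pathlib import Path

def next_prefix(outputs_dir: Path) -> int:
    """Return the numeric prefix to use for the next PNG output file."""
    prefix_file = outputs_dir / ".next_prefix"
    try:
        with open(prefix_file, "r+") as f:
            fcntl.flock(f, fcntl.LOCK_EX)      # advisory lock (POSIX only)
            value = int(f.read().strip())      # raises ValueError if corrupted
            f.seek(0)
            f.truncate()
            f.write(str(value + 1))
            return value
    except (FileNotFoundError, ValueError):
        # Fallback: derive the next prefix from existing PNG filenames,
        # then write out a repaired .next_prefix file.
        numbers = [
            int(m.group(1))
            for p in outputs_dir.glob("*.png")
            if (m := re.match(r"(\d+)", p.name))
        ]
        value = max(numbers, default=0) + 1
        prefix_file.write_text(str(value + 1))
        return value
```
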
Lincoln Stein
16dea46b79 remove outdated comment 2023-03-13 12:51:27 -04:00
Lincoln Stein
1f80b5335b reenable run_patches() 2023-03-13 10:38:08 -04:00
Lincoln Stein
6db509a4ff add --upgrade to update script 2023-03-13 10:15:33 -04:00
Lincoln Stein
c3d292e8f9 bump version to post1 2023-03-13 09:35:25 -04:00
Lincoln Stein
206593ec99 update version number 2023-03-13 09:34:00 -04:00
Lincoln Stein
1b62c781d7 temporarily disable run-patches 2023-03-13 09:33:32 -04:00
Lincoln Stein
694925f427 improve support for V2 variant legacy checkpoints
This commit enhances support for importing V2 variant (epsilon and
v-prediction) legacy checkpoints and converting them to diffusers, by
prompting the user to select the proper config file both during
startup-time autoimport and in the invokeai installer script.
2023-03-11 19:34:10 -05:00
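
A minimal sketch of the kind of prompt described above, assuming the upstream Stability AI config file names (`v2-inference.yaml` for epsilon, `v2-inference-v.yaml` for v-prediction); the paths, wording, and function name are illustrative rather than the actual installer code.

```python
def choose_v2_config() -> str:
    """Ask which parameterization a legacy v2 checkpoint uses."""
    print("A Stable Diffusion v2 checkpoint was detected.")
    print("  [1] epsilon prediction (typical of v2 'base' 512px models)")
    print("  [2] v prediction       (typical of v2 768px models)")
    choice = input("Select the parameterization [1]: ").strip() or "1"
    return (
        "configs/stable-diffusion/v2-inference.yaml"
        if choice == "1"
        else "configs/stable-diffusion/v2-inference-v.yaml"
    )
```
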
Lincoln Stein
e6e93bbb80
Merge branch 'v2.3' into bugfix/restore-update-command 2023-03-11 17:52:09 -05:00
blessedcoolant
827ac82d54
Merge branch 'v2.3' into bugfix/support-both-v2-variants 2023-03-12 04:18:11 +13:00
Lincoln Stein
6abe2bfe42
Merge branch 'v2.3' into bugfix/support-both-v2-variants 2023-03-11 10:01:32 -05:00
Lincoln Stein
acf955fc7b upgrade transformers, accelerate, safetensors 2023-03-10 06:58:46 -05:00
Lincoln Stein
023db8ac41 use diffusers 0.14 cache layout
This PR ports the `main` PR #2871 to the v2.3 branch. It adjusts the
global diffusers model cache to work with the diffusers 0.14 layout,
which places models in HF_HOME/hub rather than HF_HOME/diffusers.
2023-03-09 22:35:43 -05:00
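
A minimal sketch of the layout difference, assuming the HF_HOME convention described above; the helper name is invented and this is not the code the PR adds.

```python
import os
from pathlib import Path

import diffusers
from packaging.version import Version

def diffusers_cache_dir() -> Path:
    """Locate the diffusers model cache for the installed diffusers version."""
    hf_home = Path(os.environ.get("HF_HOME", str(Path.home() / ".cache" / "huggingface")))
    if Version(diffusers.__version__) >= Version("0.14.0"):
        return hf_home / "hub"        # 0.14+: shared Hugging Face hub cache
    return hf_home / "diffusers"      # pre-0.14: diffusers-specific cache
```
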
Lincoln Stein
65cf733a0c
Merge branch 'v2.3' into bugfix/restore-update-command 2023-03-09 21:45:17 -05:00
Kevin Turner
bf5cd1bd3b
Merge branch 'v2.3' into enhance/simple-param-scanner-script 2023-03-09 16:08:27 -08:00
Lincoln Stein
6d5e9161fb make version pep 440 compliant 2023-03-09 18:00:31 -05:00
Lincoln Stein
0636348585 bump version number to +a0 2023-03-09 17:57:19 -05:00
Lincoln Stein
5372800e60 Disable built-in NSFW checker on models converted with --ckpt_convert
When a legacy ckpt model was converted into diffusers in RAM, the
built-in NSFW checker was not being disabled, in contrast to models
converted and saved to disk. Because InvokeAI does its NSFW checking
as a separate post-processing step (in order to generate blurred
images rather than black ones), this defeated the
--nsfw and --no-nsfw switches.

This closes #2836 and #2580.
2023-03-09 17:38:58 -05:00
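
A minimal sketch of the effect of the fix, assuming a standard diffusers `StableDiffusionPipeline`; the actual change happens inside InvokeAI's in-RAM conversion path.

```python
from diffusers import StableDiffusionPipeline

def disable_builtin_checker(pipe: StableDiffusionPipeline) -> StableDiffusionPipeline:
    # Remove the diffusers-side safety checker so that InvokeAI's own
    # post-processing NSFW step (and the --nsfw/--no-nsfw switches)
    # remains in control of blurring.
    pipe.safety_checker = None
    return pipe
```
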
Lincoln Stein
2ae396640b Support both v2-v and v2-e legacy ckpt models 2023-03-09 15:35:17 -05:00
Lincoln Stein
252f222068
Merge branch 'v2.3' into enhance/simple-param-scanner-script 2023-03-09 12:02:40 -05:00
Lincoln Stein
142ba8c8ea add logging, support for prompts with shell metachars 2023-03-09 11:57:44 -05:00
Lincoln Stein
84dfd2003e fix documentation of range syntax 2023-03-09 02:29:07 -05:00
Lincoln Stein
f207647f0f CLI now writes hires_fix to metadata 2023-03-07 17:22:16 -08:00
Jonathan
d669e69755
Merge branch 'v2.3' into enhance/simple-param-scanner-script 2023-03-07 11:45:45 -06:00
Lincoln Stein
d912bab4c2 install the script as "invokeai-batch" 2023-03-07 10:10:18 -05:00
Jonathan
426fea9681
Merge branch 'v2.3' into bugfix/crash-on-unlink-after-convert 2023-03-06 20:51:58 -06:00
Lincoln Stein
45aa770cd1 implemented multiprocessing across multiple GPUs 2023-03-05 01:52:28 -05:00
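
A minimal sketch of one way to fan prompt jobs out across GPUs with one worker process per device; the queue-plus-sentinel pattern and the `render` stub are assumptions, not the actual invokeai-batch implementation.

```python
import multiprocessing as mp
import os

def render(prompt: str) -> None:
    # Stand-in for the real single-image generation call.
    print(f"[GPU {os.environ.get('CUDA_VISIBLE_DEVICES')}] {prompt}")

def worker(device_id: int, jobs) -> None:
    # Pin this worker to a single GPU before any CUDA context is created.
    os.environ["CUDA_VISIBLE_DEVICES"] = str(device_id)
    while (prompt := jobs.get()) is not None:   # None is the shutdown sentinel
        render(prompt)

def run_on_all_gpus(prompts: list[str], gpu_count: int) -> None:
    jobs = mp.Queue()
    for p in prompts:
        jobs.put(p)
    for _ in range(gpu_count):
        jobs.put(None)                          # one sentinel per worker
    procs = [mp.Process(target=worker, args=(i, jobs)) for i in range(gpu_count)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```
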
Lincoln Stein
117f70e1ec implement locking when acquiring next output file prefix 2023-03-04 09:13:17 -05:00
Lincoln Stein
c840bd8c12 prevent a crash when converting models from the CLI
- The crash would occur at the end of this sequence:
  - launch CLI
  - !convert <URL pointing to a legacy ckpt file>
  - Answer "Y" when asked to delete original .ckpt file

- This commit modifies model_manager.heuristic_import()
  to silently delete the downloaded legacy file after
  it has been converted into a diffusers model. The user
  is no longer asked to approve deletion.

NB: This should be cherry-picked into main once the refactor
is done.
2023-03-02 10:49:53 -05:00
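
A minimal sketch of the new behaviour, using hypothetical names (`import_and_cleanup` and the injected `convert` callable); the real change lives in `model_manager.heuristic_import()`.

```python
from pathlib import Path
from typing import Callable, Optional

def import_and_cleanup(ckpt_path: Path,
                       convert: Callable[[Path], Optional[Path]]) -> Optional[Path]:
    # Convert the downloaded legacy checkpoint into a diffusers model, then
    # silently delete the original .ckpt instead of asking for confirmation.
    model_dir = convert(ckpt_path)
    if model_dir is not None:
        ckpt_path.unlink(missing_ok=True)
    return model_dir
```
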
Lincoln Stein
a5b11e1071 fix newlines causing negative prompt to be parsed incorrectly
This is the same fix that was applied to main in PR 2837.
2023-02-28 17:32:17 -05:00
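
A minimal sketch of the kind of normalization involved, assuming InvokeAI 2.3's bracketed negative-prompt syntax; this is an illustration, not the project's actual prompt parser.

```python
import re

def split_prompt(raw: str) -> tuple[str, str]:
    # Collapse newlines and runs of whitespace so a prompt pasted with line
    # breaks parses the same as a single-line prompt.
    flattened = " ".join(raw.split())
    negatives = " ".join(re.findall(r"\[([^\]]*)\]", flattened))
    positives = re.sub(r"\[[^\]]*\]", "", flattened).strip()
    return positives, negatives
```
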
Lincoln Stein
7b92b27ceb
Merge branch 'v2.3' into bugfix/quote-initfile-paths 2023-02-26 23:54:20 -05:00
Lincoln Stein
e3a19d4f3e quote output, embedding and autoscan directories in invokeai.init
- this should prevent the errors that users are seeing with
  spaces in the file paths
2023-02-26 23:02:18 -05:00
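
A minimal sketch of the quoting, with illustrative option names (the exact flags written to invokeai.init may differ).

```python
def init_file_lines(outdir: str, embedding_dir: str, autoconvert_dir: str) -> list[str]:
    # Double-quote each directory so paths containing spaces survive the
    # shell-style parsing of invokeai.init.
    return [
        f'--outdir="{outdir}"',
        f'--embedding_path="{embedding_dir}"',
        f'--autoconvert="{autoconvert_dir}"',
    ]
```
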
Lincoln Stein
ecbb385447 bump version number 2023-02-26 16:11:07 -05:00
Lincoln Stein
210998081a use right pep-440 standard version number 2023-02-24 15:14:39 -05:00
Lincoln Stein
604acb9d91 use pep-440 standard version number 2023-02-24 15:07:54 -05:00
Lincoln Stein
5beeb1a897 hotfix for broken merge function 2023-02-24 15:00:22 -05:00
Lincoln Stein
d0be79c33d fixes crashes on merge in both WebUI and console
- an inadvertent change to the model manager broke the merging functions
- corrected here; this will be a hotfix
2023-02-24 14:54:23 -05:00
Lincoln Stein
b4ed8bc47a
Merge branch 'main' into v2.3 2023-02-24 10:52:03 -05:00
Lincoln Stein
4e446130d8
Merge branch 'v2.3' into enhance/curated-2.3.1-models 2023-02-24 10:30:42 -05:00
Lincoln Stein
4c93b514bb bump version to final 2.3.1 2023-02-24 10:04:41 -05:00
Lincoln Stein
a540cc537f add curated set of HuggingFace diffusers models for 2.3.1 release
- Final list can be found in invokeai/configs/INITIAL_MODELS.yaml

- After installing all the models, I discovered a bug in the file
  selection form that caused a crash when no uninstalled models
  remained, so that is fixed here as well.
2023-02-24 00:53:48 -05:00
Lincoln Stein
2d990c1f54
Merge branch 'v2.3' into bugfix/webui-accurate-intermediates 2023-02-23 22:07:18 -05:00
Lincoln Stein
7fb2da8741 fix generate backend to generate "accurate" intermediate images
- Closes #2784
- Closes #2775
2023-02-23 22:03:28 -05:00
Lincoln Stein
0982548e1f
Merge branch 'v2.3' into bugfix/v2-model-conversion 2023-02-23 21:27:49 -05:00
Matthias Wild
a7c2333312
Merge branch 'main' into fix/py39-compatibility 2023-02-23 23:53:38 +01:00
Lincoln Stein
b5b541c747 bump version; use correct format for PyPi 2023-02-23 17:47:36 -05:00
mauwii
1a6ed85d99
fix typing to be compatible with Python 3.9
without this, the project can be installed on 3.9 but not used;
this also fixes the container images
2023-02-23 23:27:16 +01:00
Lincoln Stein
8eaf1c4033 Revert "(updater) style 'pip' progress to use dark background"
This reverts commit 89239d1c54.

- This was making a subprocess call to 'bash', and hence crashing
  on Windows systems!
2023-02-23 16:33:57 -05:00
Lincoln Stein
4f44b64052 fix ckpt_convert module to work with dreambooth v2 models
- Discord member @marcus.llewellyn reported that some civitai 2.1-derived checkpoints were
  not converting properly (probably dreambooth-generated):
  https://discord.com/channels/1020123559063990373/1078386197589655582/1078387806122025070

- @blessedcoolant tracked this down to a missing key that was used to
  derive the vector length of the CLIP model by fetching the second
  dimension of the tensor at "cond_stage_model.model.text_projection".
  His proposed solution was to hardcode a value of 1024.

- On inspection, I found that the same second dimension can be
  recovered from the key 'cond_stage_model.model.ln_final.bias', and
  used that instead. I hope this is correct; tested on multiple v1, v2
  and inpainting models, and they all converted correctly.

- While debugging this, I found and fixed several other issues:

  - model download script was not pre-downloading the OpenCLIP
    text_encoder or text_tokenizer. This is fixed.
  - got rid of legacy code in `ckpt_to_diffuser.py` and replaced
    with calls into `model_manager`
  - more consistent status reporting in the CLI.
2023-02-23 15:43:58 -05:00
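
A minimal sketch of the key lookup described above; the surrounding conversion logic is omitted and the fallback ordering is an assumption.

```python
def clip_embedding_dim(state_dict: dict) -> int:
    # Prefer ln_final.bias, which dreambooth-style v2 checkpoints retain even
    # when text_projection is missing; its length is the CLIP embedding width
    # (1024 for SD 2.x OpenCLIP models).
    key = "cond_stage_model.model.ln_final.bias"
    if key in state_dict:
        return state_dict[key].shape[0]
    # Older path: second dimension of the text_projection tensor.
    return state_dict["cond_stage_model.model.text_projection"].shape[1]
```
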
Lincoln Stein
a485515bc6
Merge branch 'v2.3' into bugfix/sanity-check-rootdir 2023-02-23 11:14:52 -05:00