Commit Graph

401 Commits

Author SHA1 Message Date
blessedcoolant
11e422cf29 Ignore two file names instead of the entire folder
Rather than bypassing any path with "diffusers" in it, I'm specifically bypassing model.safetensors and diffusion_pytorch_model.safetensors, both of which should be diffusers files in most cases.
2023-02-12 00:13:22 +13:00
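The filtering described in the commit above can be sketched as follows. This is a minimal illustration, not the project's actual code: `find_standalone_checkpoints` and the extension set are hypothetical names, assuming the goal is to collect loose checkpoint files while skipping the two weight-file names that live inside diffusers model folders.

```python
from pathlib import Path

# File names that belong inside diffusers model folders and should not be
# treated as standalone checkpoints (per the commit message above).
DIFFUSERS_WEIGHT_FILES = {"model.safetensors", "diffusion_pytorch_model.safetensors"}

def find_standalone_checkpoints(root: str) -> list[Path]:
    """Collect .ckpt/.safetensors files, skipping known diffusers weight files."""
    found = []
    for path in Path(root).rglob("*"):
        if path.suffix not in {".ckpt", ".safetensors"}:
            continue
        if path.name in DIFFUSERS_WEIGHT_FILES:
            continue  # part of a diffusers folder, not a standalone model
        found.append(path)
    return found
```

Matching on exact file names, as here, is more precise than the earlier approach of skipping any path containing the word "diffusers".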
blessedcoolant
7f695fed39 Ignore safetensor or ckpt files inside diffusers model folders.
Basically, it skips the path if the path contains the word "diffusers" anywhere inside it.
2023-02-12 00:03:42 +13:00
blessedcoolant
950f450665
Merge branch 'main' into main 2023-02-10 11:52:26 +13:00
Lincoln Stein
158d1ef384 bump version number; update contributors 2023-02-09 13:01:08 -05:00
blessedcoolant
4a1b4d63ef Change denoise_str default to 0.75 2023-02-09 20:21:09 +13:00
blessedcoolant
0503680efa Change denoise_str to an arg instead of a class variable 2023-02-09 20:16:23 +13:00
coreco
1f76b30e54 adding support for ESRGAN denoising strength, which allows for improved detail retention when upscaling photorealistic faces 2023-02-08 22:36:35 -06:00
Lincoln Stein
12d15a1a3f Up version to 2.3.0-rc7 2023-02-08 15:55:35 -05:00
Lincoln Stein
4ecf016ace Merge branch 'main' into 2.3-documentation-fixes 2023-02-08 12:47:27 -05:00
Lincoln Stein
67f892455f fix crash when --prompt="prompt" is used in CLI
- The following were supposed to be equivalent, but the latter crashes:
```
invoke> banana sushi
invoke> --prompt="banana sushi"
```
This PR fixes the problem.

- Fixes #2548
2023-02-07 22:09:34 -05:00
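The equivalence the commit above restores can be sketched as a small dispatcher: a REPL line beginning with `--` is tokenized as switches (so `--prompt="banana sushi"` yields the prompt), while any other line is the prompt itself. The function name and option handling here are hypothetical, not InvokeAI's actual CLI code.

```python
import shlex

def parse_invoke_line(line: str) -> dict:
    """Treat a line starting with '--' as switch input; otherwise as a bare prompt.
    Both forms from the commit message should yield the same prompt text."""
    line = line.strip()
    opts = {"prompt": "", "switches": []}
    if line.startswith("--"):
        # shlex honors the shell-style quoting used in --prompt="banana sushi"
        for tok in shlex.split(line):
            if tok.startswith("--prompt="):
                opts["prompt"] = tok[len("--prompt="):]
            else:
                opts["switches"].append(tok)
    else:
        opts["prompt"] = line
    return opts
```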
Lincoln Stein
39b0288595
Merge branch 'main' into 2.3.0rc6 2023-02-07 17:43:38 -05:00
Lincoln Stein
13d12a0ceb Merge branch 'main' into 2.3-documentation-fixes 2023-02-07 17:08:10 -05:00
Jonathan
f0f3520bca
Switch to using max for attention slicing in all cases for the time being. (#2569) 2023-02-07 19:28:57 +01:00
Lincoln Stein
bdf683ec41
Merge branch 'main' into enhance/convert-inpaint-models 2023-02-07 06:59:35 -05:00
Lincoln Stein
8a21fc1c50 bump version to 2.3.0-rc6 2023-02-06 23:36:49 -05:00
Lincoln Stein
275d5040f4 Merge branch 'bugfix/configure-script' into 2.3.0rc6 2023-02-06 23:35:32 -05:00
Lincoln Stein
1b5930dcad do not merge diffusers and ckpt stanzas 2023-02-06 23:23:07 -05:00
Lincoln Stein
bde6e96800
Merge branch 'main' into 2.3.0rc5 2023-02-06 12:55:47 -05:00
Jonathan
28b40bebbe
Refactor CUDA cache clearing to add statistical reporting. (#2553) 2023-02-06 12:53:30 -05:00
Lincoln Stein
1c9fd00f98 this is likely the penultimate rc 2023-02-06 12:03:08 -05:00
Jonathan
2432adb38f
In exception handlers, clear the torch CUDA cache (if we're using CUDA) to free up memory for other programs using the GPU and to reduce fragmentation. (#2549) 2023-02-06 10:33:24 -05:00
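The exception-handler pattern described in the commit above can be illustrated without a CUDA dependency: wrap the generation step so that a release callable (in the real code, `torch.cuda.empty_cache()` when CUDA is active) runs on failure before the error propagates. This context manager is a hypothetical sketch of the pattern, not the project's implementation.

```python
from contextlib import contextmanager
from typing import Callable

@contextmanager
def free_cache_on_error(release: Callable[[], None]):
    """On any exception, invoke `release` (e.g. torch.cuda.empty_cache when
    using CUDA) to return memory to other GPU programs, then re-raise."""
    try:
        yield
    except Exception:
        release()
        raise
```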
Lincoln Stein
ca749b7de1 remove debugging statement 2023-02-06 09:45:21 -05:00
Lincoln Stein
7486aa8608 enhance model_manager support for converting inpainting ckpt files
Previously conversions of .ckpt and .safetensors files to diffusers
models were failing with channel mismatch errors. This is corrected
with this PR.

- The model_manager convert_and_import() method now accepts the path
  to the checkpoint file's configuration file, using the parameter
  `original_config_file`. For inpainting files this should be set to
  the full path to `v1-inpainting-inference.yaml`.

- If no configuration file is provided in the call, then the presence
  of an inpainting file will be inferred at the
  `ldm.ckpt_to_diffuser.convert_ckpt_to_diffuser()` level by looking
  for the string "inpaint" in the path. AUTO1111 does something
  similar to this, but it is brittle and not recommended.

- This PR also changes the model manager model_names() method to return
  the model names in case-folded sort order.
2023-02-06 09:35:23 -05:00
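The config-selection logic the commit above describes can be sketched as a small helper. This is an illustrative, hypothetical function (the name, `config_dir` default, and the non-inpainting fallback file are assumptions): if no `original_config_file` is supplied, infer an inpainting checkpoint from the word "inpaint" in the path, which is the brittle heuristic the commit warns about.

```python
from pathlib import Path

def pick_original_config_file(ckpt_path: str,
                              config_dir: str = "configs/stable-diffusion") -> Path:
    """If the caller gave no config, guess from the checkpoint path.
    The commit notes this heuristic is brittle; passing
    original_config_file explicitly is the recommended route."""
    name = Path(ckpt_path).name.lower()
    if "inpaint" in name:
        return Path(config_dir) / "v1-inpainting-inference.yaml"
    return Path(config_dir) / "v1-inference.yaml"
```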
mauwii
a40bdef29f update model_manager.py
- read files in chunks when calculating sha
  - the Windows runner crashes without this
2023-02-06 12:30:10 +01:00
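Chunked hashing, as the commit above describes, keeps memory flat even for multi-gigabyte checkpoint files. A minimal stdlib sketch (the function name and chunk size are illustrative choices, not the project's code):

```python
import hashlib

def sha256_of_file(path: str, chunk_size: int = 2 ** 20) -> str:
    """Hash a file in fixed-size chunks instead of reading it whole,
    so memory use stays bounded regardless of file size."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()
```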
Jonathan
b7ab025f40
Update base.py (#2543)
Free up CUDA cache right after each image is generated. VRAM usage drops down to pre-generation levels.
2023-02-06 05:14:35 +00:00
Lincoln Stein
3969637488 remove misleading completion message from merge_diffusers 2023-02-05 23:39:43 -05:00
Lincoln Stein
0240656361 fix crash in txt2img and img2img w/ inpainting models and perlin > 0
- get_perlin_noise() was returning 9 channels; fixed code to return
  noise for just the 4 image channels and not the mask ones.

- Closes Issue #2541
2023-02-05 22:55:08 -05:00
Lincoln Stein
142016827f fix formatting bugs in both textual_inversion and merge front ends
- The issue is that if insufficient diffusers models are defined in
  models.yaml, the frontend would crash ungracefully.

- Now it emits appropriate error messages telling user what the problem
  is.
2023-02-05 18:35:01 -05:00
Lincoln Stein
c44b060a2e
Merge branch 'main' into 2.3.0rc4 2023-02-05 13:40:56 -05:00
Lincoln Stein
563867fa99
Merge branch 'main' into main 2023-02-05 12:51:03 -05:00
Lincoln Stein
2e230774c2
Merge branch 'main' into 2.3.0rc4 2023-02-05 12:44:44 -05:00
blessedcoolant
9a6966924c
Merge branch 'main' into main 2023-02-06 05:33:48 +13:00
Lincoln Stein
0d62525f3d reword help message slightly 2023-02-05 08:11:02 -08:00
Dan Sully
2ec864e37e Allow multiple models to be imported by passing a directory. 2023-02-05 08:11:02 -08:00
Lincoln Stein
9307ce3dc3
this fixes a crash in the TI frontend (#2527)
- This fixes an edge-case crash when the textual inversion frontend
  tried to display the list of models and no default model was defined
  in models.yaml

Co-authored-by: Jonathan <34005131+JPPhoto@users.noreply.github.com>
2023-02-05 16:05:33 +00:00
Lincoln Stein
15996446e0
Merge branch 'main' into 2.3.0rc4 2023-02-05 10:54:53 -05:00
blessedcoolant
7a06c8fd89
Merge branch 'main' into main 2023-02-06 04:43:49 +13:00
Lincoln Stein
4895fe8395
fix crash when text mask applied to img2img (#2526)
This PR fixes the crash reported at https://discord.com/channels/1020123559063990373/1031668022294884392/1071782238137630800

It also quiets down the "NSFW is disabled" nag during img2img generation.
2023-02-05 15:26:40 +00:00
Lincoln Stein
1e793a2dfe
Merge branch 'main' into 2.3.0rc4 2023-02-05 10:24:09 -05:00
blessedcoolant
f7532cdfd4 Beautify Token Log Outputs 2023-02-05 22:55:29 +13:00
blessedcoolant
f1dd76c20b Remove Deprecation Warning from Diffusers Pipeline 2023-02-05 22:55:10 +13:00
whosawhatsis
3016eeb6fb
Merge branch 'invoke-ai:main' into main 2023-02-04 22:56:59 -05:00
whosawhatsis
75b62d6ca8 Add --log_tokenization to sysargs
This allows the --log_tokenization option to be used as a command line argument (or from invokeai.init), making it possible to view tokenization information in the terminal when using the web interface.
2023-02-04 19:56:20 -08:00
Lincoln Stein
00d79c1fe3 bump version number to rc4 2023-02-04 14:00:58 -05:00
Lincoln Stein
60213893ab configuration script tidying up
- Rename configure_invokeai.py to invokeai_configure.py to be
  consistent with installed script name
- Remove warning message about half-precision models not being
  available during the model download process.

- Adjust estimated file size reported by configure

- Guesstimate disk space needed for "all" models

- Fix up the "latest" tag to be named 'v2.3-latest'
2023-02-04 13:55:36 -05:00
Lincoln Stein
1139884493
Merge branch 'main' into fix/mps-fallback 2023-02-04 11:11:59 -05:00
Lincoln Stein
a42b25339f
Merge branch 'main' into bugfix/txt2mask 2023-02-04 10:25:30 -05:00
Lincoln Stein
61c3886843
Merge branch 'main' into bugfix/use-cu117-wheel 2023-02-04 09:43:52 -05:00
Lincoln Stein
6bf73a0cf9
Merge branch 'main' into bugfix/use-cu117-wheel 2023-02-04 09:17:45 -05:00
Lincoln Stein
5145df21d9
Merge branch 'main' into bugfix/merge-fixes 2023-02-04 09:17:01 -05:00