Commit Graph

123 Commits

SHA1 Message Date
28b40bebbe Refactor CUDA cache clearing to add statistical reporting. (#2553) 2023-02-06 12:53:30 -05:00
2432adb38f In exception handlers, clear the torch CUDA cache (if we're using CUDA) to free up memory for other programs using the GPU and to reduce fragmentation. (#2549) 2023-02-06 10:33:24 -05:00
b7ab025f40 Update base.py (#2543)
Free up CUDA cache right after each image is generated. VRAM usage drops down to pre-generation levels.
2023-02-06 05:14:35 +00:00
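
The three cache-clearing commits above follow one pattern: call torch.cuda.empty_cache() after each generation and inside exception handlers, but only when CUDA is actually in use. A minimal sketch of that pattern, using an illustrative wrapper function rather than the project's real generator API:

    import torch

    def clear_cuda_cache():
        # Only touch the CUDA allocator when a CUDA device is in use.
        if torch.cuda.is_available():
            torch.cuda.empty_cache()

    def generate_image(pipeline, prompt):
        try:
            return pipeline(prompt)
        except RuntimeError:
            # Free cached VRAM for other programs sharing the GPU and to
            # reduce fragmentation, then let the caller see the error.
            clear_cuda_cache()
            raise
        finally:
            # Drop VRAM back to pre-generation levels after every image.
            clear_cuda_cache()
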
0240656361 fix crash in txt2img and img2img w/ inpainting models and perlin > 0
- get_perlin_noise() was returning 9 channels; fixed code to return
  noise for just the 4 image channels and not the mask ones.

- Closes Issue #2541
2023-02-05 22:55:08 -05:00
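
The perlin fix above means the noise tensor must match only the four image-latent channels, not the nine input channels the inpainting UNet sees (four image latents, four masked-image latents, one mask). A hedged sketch of that shape logic; torch.randn stands in for the project's real perlin generator:

    import torch

    IMAGE_LATENT_CHANNELS = 4  # the mask and masked-image channels get no noise

    def get_perlin_noise_like(latents: torch.Tensor) -> torch.Tensor:
        # latents: (batch, channels, height, width). Even when the inpainting
        # UNet expects 9 input channels, only the 4 image channels are noised.
        b, _, h, w = latents.shape
        return torch.randn(b, IMAGE_LATENT_CHANNELS, h, w, device=latents.device)
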
7a06c8fd89 Merge branch 'main' into main 2023-02-06 04:43:49 +13:00
4895fe8395 fix crash when text mask applied to img2img (#2526)
This PR fixes the crash reported at https://discord.com/channels/1020123559063990373/1031668022294884392/1071782238137630800

It also quiets down the "NSFW is disabled" nag during img2img generation.
2023-02-05 15:26:40 +00:00
f1dd76c20b Remove Deprecation Warning from Diffusers Pipeline 2023-02-05 22:55:10 +13:00
d58574ca46 Merge branch 'main' into dev/installer 2023-02-02 13:53:11 -05:00
44d36a0e0b fix(img2img): do not attempt to do a zero-step img2img when strength is low 2023-02-01 18:42:54 -08:00
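
One reading of the fix above: img2img runs only the final strength fraction of the denoising schedule, so a very low strength can round down to zero steps; it is better to detect that and skip (or return the init image) than to attempt a zero-step run. A rough sketch under that assumption:

    def effective_img2img_steps(num_inference_steps: int, strength: float) -> int:
        # img2img runs only the last `strength` fraction of the schedule,
        # so low strength values can truncate to zero steps.
        return int(num_inference_steps * strength)

    def should_skip_img2img(num_inference_steps: int, strength: float) -> bool:
        # Fall back to returning the init image instead of attempting
        # a zero-step diffusion run.
        return effective_img2img_steps(num_inference_steps, strength) < 1
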
8ce9f07223 Merge branch 'main' into dev/installer 2023-02-01 17:50:22 -05:00
8cdc65effc Merge branch 'main' into fix_2418_simplified 2023-01-31 17:45:54 -08:00
e6d52d7ce6 Merge branch 'main' into fix_2418_simplified 2023-01-31 18:11:56 -05:00
56e2d22b6e Merge branch 'main' into feat/solid-infill 2023-01-31 18:02:17 -05:00
0066187651 Merge branch 'main' into feat/solid-infill 2023-01-31 17:53:09 -05:00
d3d24fa816 fill color is parameterized 2023-01-31 17:52:33 -05:00
4d58fed6b0 Merge branch 'main' into fix/inpainting-blank-slate 2023-01-31 11:04:56 -08:00
eed802f5d9 Merge branch 'main' into fix/hires_inpaint 2023-01-31 09:34:29 -08:00
1c377b7995 further improvements to ability to find location of data files
- implement the following pattern for finding data files under both
  regular and editable install conditions:

  import invokeai.foo.bar as bar
  path = bar.__path__[0]

- this *seems* to work reliably with Python 3.9. Testing on 3.10 needs
  to be performed.
2023-01-31 12:24:55 -05:00
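
A runnable illustration of the __path__ pattern quoted in the commit message above; the subpackage name is an assumption for illustration only, so substitute whichever package sits next to the data files you need:

    from pathlib import Path

    import invokeai.backend as backend  # assumed importable subpackage

    # __path__[0] resolves to the installed package directory under a normal
    # install and to the source tree under an editable (`pip install -e`)
    # install, so data files shipped alongside the package are found either way.
    data_dir = Path(backend.__path__[0])
    print(data_dir)
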
9ad4c03277 Various fixes
1) Downgrade numpy to avoid a dependency conflict with numba
2) Move all non-ldm/invoke files into `invokeai`. This includes assets, backend, frontend, and configs.
3) Fix up the way that the backend finds the frontend and the generator finds the NSFW caution.png icon.
2023-01-30 18:42:17 -05:00
5299324321 workaround for pytorch bug, fixes #2418 2023-01-30 18:45:53 +01:00
5ce62e00c9 Merge branch 'main' into diffusers_cross_attention_control_reimplementation 2023-01-29 13:52:01 -06:00
5a8c28de97 Merge remote-tracking branch 'origin/main' into fix/hires_inpaint 2023-01-29 10:51:59 -08:00
07e03b31b7 Update --hires_fix (#2414)
* Update --hires_fix

Change `--hires_fix` to calculate initial width and height based on the model's resolution (if available) and with a minimum size.
2023-01-29 12:27:01 -06:00
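
A hedged sketch of the kind of calculation described in the --hires_fix change above: derive the first-pass size from the model's native resolution when it is known, keep the aspect ratio, and never go below a minimum. The scaling rule and constants here are illustrative, not the project's exact formula:

    from typing import Optional, Tuple

    def initial_hires_size(width: int, height: int,
                           model_resolution: Optional[int] = None,
                           minimum: int = 64) -> Tuple[int, int]:
        base = model_resolution or 512  # fall back to a conventional 512
        # Scale so the longer side matches the base resolution, snap to
        # multiples of 64, and never drop below the minimum size.
        scale = base / max(width, height)
        init_w = max(minimum, int(width * scale) // 64 * 64)
        init_h = max(minimum, int(height * scale) // 64 * 64)
        return init_w, init_h
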
47dbe7bc0d (assets) move 'caution.png' to avoid including entire 'assets' dir in the wheel
reduces the wheel size from 27MB to 3MB
2023-01-28 17:39:33 -05:00
b6d37a70ca fix(inpainting model): threshold mask to avoid gray blurry seam 2023-01-28 13:34:22 -08:00
71b6ddf5fb fix(inpainting model): blank areas to be repainted in the masked image
Otherwise the model seems too reluctant to change these areas, even though the mask channel should allow it to.
2023-01-28 11:10:32 -08:00
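
Taken together, the two inpainting-model fixes above amount to (a) hard-thresholding the mask so partially covered pixels do not produce a gray, blurry seam, and (b) zeroing the to-be-repainted pixels in the masked image so the model is not anchored to them. A minimal sketch, assuming a mask in [0, 1] where 1 marks the region to repaint:

    import torch

    def prepare_inpaint_inputs(image: torch.Tensor,
                               mask: torch.Tensor,
                               threshold: float = 0.5):
        # image: (channels, height, width) in [0, 1]
        # mask:  (1, height, width) in [0, 1], 1 = repaint this pixel
        binary_mask = (mask >= threshold).float()     # avoid a gray, blurry seam
        masked_image = image * (1.0 - binary_mask)    # blank the areas to repaint
        return binary_mask, masked_image
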
6556b200b5 remove experimental "blur" infill
It seems counterproductive for use with the inpainting model, and not especially useful otherwise.
2023-01-27 15:25:50 -08:00
d627cd1865 feat(inpaint): add simpler infill methods for use with inpainting model 2023-01-27 14:28:16 -08:00
09b6104bfd refactor(txt2img2img): factor out tensor shape 2023-01-27 12:04:12 -08:00
1bb5b4ab32 fix dimension errors when inpainting model is used with hires-fix 2023-01-27 11:52:05 -08:00
e090c0dc10 try without setting every time 2023-01-26 17:46:51 +01:00
729752620b trying out JPPhoto's patch on vast.ai 2023-01-26 17:27:33 +01:00
8ed8bf52d0 use 'auto' slice size 2023-01-26 17:04:22 +01:00
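
The slicing commits here concern the project's own cross-attention processor, but for reference, stock diffusers exposes the same knob through enable_attention_slicing, where "auto" lets the library choose the slice size. A hedged example (the model id is only a placeholder):

    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
    # "auto" trades a little speed for a large drop in peak attention memory.
    pipe.enable_attention_slicing("auto")
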
41aed57449 wip tracking down MPS slicing support 2023-01-25 22:27:23 +01:00
34a3f4a820 cleanup 2023-01-25 21:47:17 +01:00
c52dd7e3f4 Merge branch 'diffusers_cross_attention_control_reimplementation' of github.com:damian0815/InvokeAI into diffusers_cross_attention_control_reimplementation 2023-01-25 14:51:15 +01:00
a4aea1540b more wip sliced attention (.swap doesn't work yet) 2023-01-25 14:51:08 +01:00
3c53b46a35 Merge branch 'main' into diffusers_cross_attention_control_reimplementation 2023-01-24 19:32:34 -08:00
63c6019f92 sliced attention processor wip (untested) 2023-01-24 14:46:32 +01:00
10c3afef17 Merge branch 'main' into bugfix/free-gpu-mem-diffuser 2023-01-23 21:15:12 -05:00
bffe199ad7 SwapCrossAttnProcessor working - tested on mac CPU (MPS doesn't work) 2023-01-21 20:54:18 +01:00
0c2a511671 wip SwapCrossAttnProcessor 2023-01-21 18:07:36 +01:00
02ce602a38 Merge branch 'main' into feat/disable-xformers 2023-01-19 18:45:59 -05:00
f0010919f2 Merge branch 'main' into bugfix/free-gpu-mem-diffuser 2023-01-19 18:03:36 -05:00
895505976e [bugfix] suppress extraneous warning messages generated by diffusers
This commit suppresses a few irrelevant warning messages that the
diffusers module produces:

1. The warning that turning off the NSFW detector makes you an
irresponsible person.
2. Warnings about running fp16 models stored on the CPU (we are not running
   them on the CPU, just caching them in CPU RAM)
2023-01-19 16:49:40 -05:00
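
The commit above filters specific messages; a minimal, hedged alternative shown here is to lower the diffusers logger verbosity, which suppresses all informational warnings from the library (broader than the targeted suppression described above):

    from diffusers.utils import logging as diffusers_logging

    # Show only errors from diffusers; informational notices such as the
    # NSFW-checker warning are silenced.
    diffusers_logging.set_verbosity_error()
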
171f4aa71b [feat] Provide option to disable xformers from command line
Starting `invoke.py` with --no-xformers will disable
memory-efficient-attention support if xformers is installed.

--xformers will enable support, but this is already the
default.
2023-01-19 16:16:35 -05:00
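
A hedged sketch of how a paired --xformers / --no-xformers switch like the one described above can be declared with argparse; the enabled-by-default behaviour matches the commit message, everything else is illustrative:

    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument("--xformers", dest="xformers", action="store_true",
                        default=True,
                        help="enable memory-efficient attention (default)")
    parser.add_argument("--no-xformers", dest="xformers", action="store_false",
                        help="disable memory-efficient attention even if "
                             "xformers is installed")

    args = parser.parse_args(["--no-xformers"])
    print(args.xformers)  # False
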
5a40aadbee Ensure free_gpu_mem option is passed into the generator (#2326) 2023-01-19 09:57:03 +07:00
5aec48735e lint(generator): 🚮 remove unused imports 2023-01-17 11:44:45 -08:00
ce00c9856f fix perlin noise and txt2img2img 2023-01-16 22:50:13 -05:00
7e8f364d8d do not use autocast for diffusers
- All tensors in diffusers code path are now set explicitly to
  float32 or float16, depending on the --precision flag.
- autocast is still used in the ckpt path, since it is being
  deprecated.
2023-01-16 19:32:06 -05:00
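
A minimal sketch of the explicit-precision rule described above for the diffusers path: pick float16 or float32 once from the --precision flag and create every tensor with that dtype instead of relying on autocast. The flag values and helper name are assumptions, not the project's actual code:

    import torch

    def torch_dtype(precision: str, device: torch.device) -> torch.dtype:
        # float16 only makes sense on CUDA; everything else stays float32.
        if precision == "float16" and device.type == "cuda":
            return torch.float16
        return torch.float32

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    dtype = torch_dtype("float16", device)
    latents = torch.zeros(1, 4, 64, 64, device=device, dtype=dtype)
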