Jonathan
9eed1919c2
Strategize slicing based on free [V]RAM ( #2572 )
...
Strategize slicing based on free [V]RAM when not using xformers. Free [V]RAM is evaluated at every generation. When there's enough memory, the entire generation occurs without slicing. If there is not enough free memory, we use diffusers' sliced attention.
2023-02-12 18:24:15 +00:00
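The strategy in this commit can be sketched as a small decision function. This is a minimal illustration of the described behavior, not InvokeAI's actual code: the byte estimates and function name are hypothetical, and on CUDA the free figure would come from `torch.cuda.mem_get_info()`.

```python
def choose_slicing(free_bytes: int, needed_bytes: int) -> str:
    """Pick an attention strategy from the VRAM free at generation time.

    Returns "none" (no slicing) when the whole generation fits in free
    memory, otherwise "sliced" to fall back to diffusers' sliced
    attention. The needed-memory estimate is a hypothetical input; the
    real heuristic lives in the InvokeAI generator code.
    """
    return "none" if free_bytes >= needed_bytes else "sliced"


# Evaluated anew at every generation, per the commit message.
print(choose_slicing(8 * 2**30, 6 * 2**30))  # enough room -> "none"
print(choose_slicing(2 * 2**30, 6 * 2**30))  # too tight   -> "sliced"
```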
Jonathan
f0f3520bca
Switch to using max for attention slicing in all cases for the time being. ( #2569 )
2023-02-07 19:28:57 +01:00
Jonathan
28b40bebbe
Refactor CUDA cache clearing to add statistical reporting. ( #2553 )
2023-02-06 12:53:30 -05:00
Jonathan
2432adb38f
In exception handlers, clear the torch CUDA cache (if we're using CUDA) to free up memory for other programs using the GPU and to reduce fragmentation. ( #2549 )
2023-02-06 10:33:24 -05:00
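The pattern this commit describes looks roughly like the sketch below: on error paths, return cached CUDA allocations to the driver so other GPU programs can use the memory and fragmentation is reduced. The helper name is made up for illustration; `torch.cuda.empty_cache()` is the real API involved.

```python
def clear_cuda_cache() -> bool:
    """Release cached CUDA allocations back to the driver, if torch and
    a CUDA device are present. Returns True when a clear was issued."""
    try:
        import torch
    except ImportError:
        return False
    if torch.cuda.is_available():
        torch.cuda.empty_cache()
        return True
    return False


# Typical use in an exception handler around generation:
try:
    raise RuntimeError("simulated mid-generation failure")
except RuntimeError:
    clear_cuda_cache()  # free VRAM for other programs using the GPU
```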
Jonathan
b7ab025f40
Update base.py ( #2543 )
...
Free up CUDA cache right after each image is generated. VRAM usage drops down to pre-generation levels.
2023-02-06 05:14:35 +00:00
Lincoln Stein
0240656361
fix crash in txt2img and img2img w/ inpainting models and perlin > 0
...
- get_perlin_noise() was returning 9 channels; fixed code to return
noise for just the 4 image channels and not the mask ones.
- Closes Issue #2541
2023-02-05 22:55:08 -05:00
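The shape of the fix: inpainting-model latents carry extra mask channels beyond the 4 image channels, and perlin noise must cover only the image channels. The channel counts follow the commit message; the function and shapes below are illustrative, not the actual `get_perlin_noise()` implementation.

```python
LATENT_CHANNELS = 4  # image channels; inpainting models add mask channels


def perlin_noise_channels(latent_shape):
    """Return the shape perlin noise should have: only the 4 image
    channels, even when the model's latent input has 9 (image +
    masked-image + mask) channels."""
    channels, height, width = latent_shape
    return (min(channels, LATENT_CHANNELS), height, width)


print(perlin_noise_channels((9, 64, 64)))  # -> (4, 64, 64)
print(perlin_noise_channels((4, 64, 64)))  # -> (4, 64, 64)
```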
blessedcoolant
7a06c8fd89
Merge branch 'main' into main
2023-02-06 04:43:49 +13:00
Lincoln Stein
4895fe8395
fix crash when text mask applied to img2img ( #2526 )
...
This PR fixes the crash reported at https://discord.com/channels/1020123559063990373/1031668022294884392/1071782238137630800
It also quiets down the "NSFW is disabled" nag during img2img generation.
2023-02-05 15:26:40 +00:00
blessedcoolant
f1dd76c20b
Remove Deprecation Warning from Diffusers Pipeline
2023-02-05 22:55:10 +13:00
Lincoln Stein
d58574ca46
Merge branch 'main' into dev/installer
2023-02-02 13:53:11 -05:00
Kevin Turner
44d36a0e0b
fix(img2img): do not attempt to do a zero-step img2img when strength is low
2023-02-01 18:42:54 -08:00
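The bug class here: diffusers-style img2img runs roughly `int(total_steps * strength)` denoising steps, which truncates to zero at very low strength. A sketch of the guard, with the clamp-to-one behavior as an assumption about the fix rather than its exact code:

```python
def img2img_steps(total_steps: int, strength: float) -> int:
    """Number of denoising steps actually run for img2img.

    At very low strength int(total_steps * strength) truncates to zero;
    clamping avoids attempting a zero-step generation. The formula is an
    approximation of the pipeline's scheduling, not its exact code.
    """
    return max(1, int(total_steps * strength))


print(img2img_steps(50, 0.75))  # -> 37
print(img2img_steps(50, 0.01))  # would truncate to 0; clamped -> 1
```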
Lincoln Stein
8ce9f07223
Merge branch 'main' into dev/installer
2023-02-01 17:50:22 -05:00
Kevin Turner
8cdc65effc
Merge branch 'main' into fix_2418_simplified
2023-01-31 17:45:54 -08:00
Lincoln Stein
e6d52d7ce6
Merge branch 'main' into fix_2418_simplified
2023-01-31 18:11:56 -05:00
Lincoln Stein
56e2d22b6e
Merge branch 'main' into feat/solid-infill
2023-01-31 18:02:17 -05:00
Lincoln Stein
0066187651
Merge branch 'main' into feat/solid-infill
2023-01-31 17:53:09 -05:00
Lincoln Stein
d3d24fa816
fill color is parameterized
2023-01-31 17:52:33 -05:00
Kevin Turner
4d58fed6b0
Merge branch 'main' into fix/inpainting-blank-slate
2023-01-31 11:04:56 -08:00
Kevin Turner
eed802f5d9
Merge branch 'main' into fix/hires_inpaint
2023-01-31 09:34:29 -08:00
Lincoln Stein
1c377b7995
further improvements to ability to find location of data files
...
- implement the following pattern for finding data files under both
regular and editable install conditions:
import invokeai.foo.bar as bar
path = bar.__path__[0]
- this *seems* to work reliably with Python 3.9. Testing on 3.10 needs
to be performed.
2023-01-31 12:24:55 -05:00
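The pattern from the commit body, generalized into a helper. Since `invokeai` may not be installed where this runs, the demo resolves a stdlib package instead; the helper name is invented for illustration.

```python
import importlib


def package_data_dir(dotted_name: str) -> str:
    """Return the filesystem directory backing a package: the equivalent
    of `import invokeai.foo.bar as bar; bar.__path__[0]`, which works
    under both regular and editable installs."""
    module = importlib.import_module(dotted_name)
    return module.__path__[0]


# Demonstrated with a stdlib package rather than invokeai:
print(package_data_dir("email.mime"))  # .../email/mime
```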
Lincoln Stein
9ad4c03277
Various fixes
...
1) Downgrade numpy to avoid dependency conflict with numba
2) Move all non ldm/invoke files into `invokeai`. This includes assets, backend, frontend, and configs.
3) Fix up way that the backend finds the frontend and the generator finds the NSFW caution.png icon.
2023-01-30 18:42:17 -05:00
Damian Stewart
5299324321
workaround for pytorch bug, fixes #2418
2023-01-30 18:45:53 +01:00
Jonathan
5ce62e00c9
Merge branch 'main' into diffusers_cross_attention_control_reimplementation
2023-01-29 13:52:01 -06:00
Kevin Turner
5a8c28de97
Merge remote-tracking branch 'origin/main' into fix/hires_inpaint
2023-01-29 10:51:59 -08:00
Jonathan
07e03b31b7
Update --hires_fix ( #2414 )
...
* Update --hires_fix
Change `--hires_fix` to calculate initial width and height based on the model's resolution (if available) and with a minimum size.
2023-01-29 12:27:01 -06:00
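A sketch of the size calculation the PR describes: scale the requested dimensions down so the first pass matches the model's native resolution, respecting a minimum size. The defaults, the multiple-of-8 snapping, and the function name are assumptions for illustration, not the PR's exact values.

```python
def initial_hires_size(target_w, target_h, model_res=512, min_size=64):
    """First-pass size for --hires_fix: scale the requested dimensions
    so the longer side matches the model's native resolution, snapped to
    multiples of 8 and clamped to a minimum size. Defaults and rounding
    rule are assumptions, not the exact values in the PR."""
    scale = model_res / max(target_w, target_h)
    w = max(min_size, (int(target_w * scale) // 8) * 8)
    h = max(min_size, (int(target_h * scale) // 8) * 8)
    return w, h


print(initial_hires_size(1024, 768))  # -> (512, 384)
```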
Eugene Brodsky
47dbe7bc0d
(assets) move 'caution.png' to avoid including entire 'assets' dir in the wheel
...
reduces wheel size to 3MB from 27MB
2023-01-28 17:39:33 -05:00
Kevin Turner
b6d37a70ca
fix(inpainting model): threshold mask to avoid gray blurry seam
2023-01-28 13:34:22 -08:00
Kevin Turner
71b6ddf5fb
fix(inpainting model): blank areas to be repainted in the masked image
...
Otherwise the model seems too reluctant to change these areas, even though the mask channel should allow it to.
2023-01-28 11:10:32 -08:00
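The idea of the fix in miniature: zero out ("blank") the regions to be repainted in the masked image, so the model is not anchored to the old pixels there even though the mask channel already permits changes. Plain 2D lists stand in for image tensors; this is a sketch, not the repository's code.

```python
def blank_masked(image, mask):
    """Return image with pixels zeroed wherever mask is 1 (repaint)."""
    return [[0 if m else px for px, m in zip(row, mrow)]
            for row, mrow in zip(image, mask)]


img = [[9, 9], [9, 9]]
mask = [[0, 1], [1, 0]]
print(blank_masked(img, mask))  # -> [[9, 0], [0, 9]]
```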
Kevin Turner
6556b200b5
remove experimental "blur" infill
...
It seems counterproductive for use with the inpainting model, and not especially useful otherwise.
2023-01-27 15:25:50 -08:00
Kevin Turner
d627cd1865
feat(inpaint): add simpler infill methods for use with inpainting model
2023-01-27 14:28:16 -08:00
Kevin Turner
09b6104bfd
refactor(txt2img2img): factor out tensor shape
2023-01-27 12:04:12 -08:00
Kevin Turner
1bb5b4ab32
fix dimension errors when inpainting model is used with hires-fix
2023-01-27 11:52:05 -08:00
damian
e090c0dc10
try without setting every time
2023-01-26 17:46:51 +01:00
damian
729752620b
trying out JPPhoto's patch on vast.ai
2023-01-26 17:27:33 +01:00
damian
8ed8bf52d0
use 'auto' slice size
2023-01-26 17:04:22 +01:00
Damian Stewart
41aed57449
wip tracking down MPS slicing support
2023-01-25 22:27:23 +01:00
Damian Stewart
34a3f4a820
cleanup
2023-01-25 21:47:17 +01:00
Damian Stewart
c52dd7e3f4
Merge branch 'diffusers_cross_attention_control_reimplementation' of github.com:damian0815/InvokeAI into diffusers_cross_attention_control_reimplementation
2023-01-25 14:51:15 +01:00
Damian Stewart
a4aea1540b
more wip sliced attention (.swap doesn't work yet)
2023-01-25 14:51:08 +01:00
Kevin Turner
3c53b46a35
Merge branch 'main' into diffusers_cross_attention_control_reimplementation
2023-01-24 19:32:34 -08:00
Damian Stewart
63c6019f92
sliced attention processor wip (untested)
2023-01-24 14:46:32 +01:00
Lincoln Stein
10c3afef17
Merge branch 'main' into bugfix/free-gpu-mem-diffuser
2023-01-23 21:15:12 -05:00
Damian Stewart
bffe199ad7
SwapCrossAttnProcessor working - tested on mac CPU (MPS doesn't work)
2023-01-21 20:54:18 +01:00
Damian Stewart
0c2a511671
wip SwapCrossAttnProcessor
2023-01-21 18:07:36 +01:00
Lincoln Stein
02ce602a38
Merge branch 'main' into feat/disable-xformers
2023-01-19 18:45:59 -05:00
Lincoln Stein
f0010919f2
Merge branch 'main' into bugfix/free-gpu-mem-diffuser
2023-01-19 18:03:36 -05:00
Lincoln Stein
895505976e
[bugfix] suppress extraneous warning messages generated by diffusers
...
This commit suppresses a few irrelevant warning messages that the
diffusers module produces:
1. The warning that turning off the NSFW detector makes you an
irresponsible person.
2. Warnings about running fp16 models stored in CPU (we are not running
them in CPU, just caching them in CPU RAM)
2023-01-19 16:49:40 -05:00
Lincoln Stein
171f4aa71b
[feat] Provide option to disable xformers from command line
...
Starting `invoke.py` with --no-xformers will disable
memory-efficient-attention support if xformers is installed.
--xformers will enable support, but this is already the
default.
2023-01-19 16:16:35 -05:00
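The flag pair described above can be wired as a complementary argparse pair sharing one destination, with enabled as the default. This is an illustrative sketch; the real parser lives in InvokeAI's CLI argument handling.

```python
import argparse

# --xformers is the default; --no-xformers disables
# memory-efficient-attention support even if xformers is installed.
parser = argparse.ArgumentParser()
parser.add_argument("--xformers", action="store_true",
                    dest="xformers", default=True)
parser.add_argument("--no-xformers", action="store_false",
                    dest="xformers")

print(parser.parse_args([]).xformers)                 # True (default)
print(parser.parse_args(["--no-xformers"]).xformers)  # False
```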
Daya Adianto
5a40aadbee
Ensure free_gpu_mem option is passed into the generator ( #2326 )
2023-01-19 09:57:03 +07:00
Kevin Turner
5aec48735e
lint(generator): 🚮 remove unused imports
2023-01-17 11:44:45 -08:00