Commit Graph

1486 Commits

Author SHA1 Message Date
Ryan Dick
7ca677578e Create a UNetAttentionPatcher for patching UNet models with CustomAttnProcessor2_0 modules. 2024-04-09 08:12:12 -04:00
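As context for the patcher commit above: a minimal sketch of what patching a UNet's attention processors can look like, assuming diffusers' `UNet2DConditionModel` API (`attn_processors` / `set_attn_processor`). The class and function names below are illustrative, not InvokeAI's actual `UNetAttentionPatcher`.

```python
# Hypothetical sketch of a UNet attention patcher: swap every attention
# processor for a custom one, and restore the originals afterwards.
# Assumes the diffusers UNet2DConditionModel API (attn_processors /
# set_attn_processor); names here are illustrative, not InvokeAI's code.
from contextlib import contextmanager

from diffusers import UNet2DConditionModel
from diffusers.models.attention_processor import AttnProcessor2_0


class CustomAttnProcessor(AttnProcessor2_0):
    """Stand-in for a custom processor (e.g. IP-Adapter / regional prompting)."""


@contextmanager
def apply_custom_attention(unet: UNet2DConditionModel):
    original_procs = unet.attn_processors  # dict: module name -> processor
    try:
        # Install a custom processor on every attention module in the UNet.
        unet.set_attn_processor({name: CustomAttnProcessor() for name in original_procs})
        yield unet
    finally:
        # Restore the original processors so the patch is not persistent.
        unet.set_attn_processor(original_procs)
```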
Ryan Dick
31c456c1e6 Update CustomAttention to support both IP-Adapters and regional prompting. 2024-04-09 08:12:12 -04:00
Ryan Dick
2ce79b61f5 Initialize a RegionalPromptAttnProcessor2_0 class by copying AttnProcessor2_0 from diffusers. 2024-04-09 08:12:12 -04:00
Ryan Dick
109e3f0e7f Add RegionalPromptData class for managing prompt region masks. 2024-04-09 08:12:12 -04:00
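A hedged sketch of what a per-region prompt-mask container might hold: binary masks over the latent image paired with the text embeddings they scope. The field and method names are assumptions for illustration, not the actual `RegionalPromptData` class.

```python
# Hedged sketch of a per-region prompt-mask container; field and method
# names are assumptions for illustration, not InvokeAI's actual class.
from dataclasses import dataclass

import torch
import torch.nn.functional as F


@dataclass
class RegionalPromptDataSketch:
    masks: torch.Tensor   # (num_prompts, latent_h, latent_w), 1.0 inside the region
    embeds: torch.Tensor  # (num_prompts, seq_len, embed_dim) text embeddings

    def masks_at_scale(self, downscale: int) -> torch.Tensor:
        """Downsample the masks to match the attention resolution of a UNet level."""
        if downscale == 1:
            return self.masks
        scaled = F.interpolate(self.masks.unsqueeze(1), scale_factor=1 / downscale, mode="nearest")
        return scaled.squeeze(1)
```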
Ryan Dick
d1e45585d0 Add TextConditioningRegions to the TextConditioningData data structure. 2024-04-09 08:12:12 -04:00
Ryan Dick
aba023e0c5 Improve documentation of conditioning_data.py. 2024-04-09 08:12:12 -04:00
Ryan Dick
e354c29b52 Rename ConditioningData -> TextConditioningData. 2024-04-09 08:12:12 -04:00
Ryan Dick
a7f363e654 Split ip_adapter_conditioning out from ConditioningData. 2024-04-09 08:12:12 -04:00
Ryan Dick
9b2162e564 Remove scheduler_args from ConditioningData structure. 2024-04-09 08:12:12 -04:00
blessedcoolant
fd1f240853 fix: SDXL Refiner not working properly with Inpainting 2024-04-09 14:13:10 +10:00
psychedelicious
9ab6655491 feat(backend): clean up choose_precision
- Allow user-defined precision on MPS.
- Use more explicit logic to handle all possible cases.
- Add comments.
- Remove the app_config args (they were effectively unused; just get the config using the singleton getter util)
2024-04-07 09:41:05 -04:00
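A sketch of the more explicit, per-case precision logic described above, including honoring a user-defined precision on MPS. The function signature and config handling are assumptions, not the actual `choose_precision` implementation.

```python
# Illustrative sketch of explicit precision selection with a user override
# that is also honored on MPS. Signature and defaults are assumptions, not
# the actual InvokeAI choose_precision.
import torch


def choose_precision(device: torch.device, configured: str = "auto") -> torch.dtype:
    # An explicit user setting always wins, on MPS as well as CUDA.
    if configured == "float32":
        return torch.float32
    if configured in ("float16", "fp16"):
        return torch.float16

    # "auto": decide per device type.
    if device.type == "cuda":
        return torch.float16
    if device.type == "mps":
        return torch.float16
    # CPU and anything unrecognized: full precision.
    return torch.float32
```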
psychedelicious
4068e817d6 fix(mm): typing issues in model cache 2024-04-06 14:35:36 +11:00
psychedelicious
a09d705e4c fix(mm): remove vram check
This check prematurely reports insufficient VRAM on Windows. See #6106 for details.
2024-04-06 14:35:36 +11:00
Lincoln Stein
4571986c63 fix misplaced lock call 2024-04-05 14:32:18 +11:00
Lincoln Stein
812f10730f
adjust free vram calculation for models that will be removed by lazy offloading (#6150)
Co-authored-by: Lincoln Stein <lstein@gmail.com>
2024-04-04 22:51:12 -04:00
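The adjustment described in #6150 amounts to treating VRAM held by lazily-offloadable models as reclaimable when computing free space. A hedged sketch; the `CacheEntry` shape and attribute names are assumptions.

```python
# Hypothetical sketch: when deciding whether a model fits, count VRAM held
# by models that lazy offloading would evict as "free". The CacheEntry
# shape and attribute names are assumptions for illustration.
from dataclasses import dataclass

import torch


@dataclass
class CacheEntry:
    size_bytes: int
    on_gpu: bool
    locked: bool  # locked entries are in active use and cannot be offloaded


def effective_free_vram(entries: list[CacheEntry], device: torch.device) -> int:
    free, _total = torch.cuda.mem_get_info(device)
    # VRAM occupied by models that lazy offloading is allowed to reclaim.
    reclaimable = sum(e.size_bytes for e in entries if e.on_gpu and not e.locked)
    return free + reclaimable
```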
psychedelicious
f0b1bb0327 feat(nodes): redo tile infill
The previous algorithm errored if the image wasn't divisible by the tile size. I've reimplemented it from scratch to mitigate this issue.

The new algorithm is simpler. We create a pool of tiles, then use them to create an image composed completely of tiles. If there is any awkwardly sized space on the edge of the image, the tiles are cropped to fit.

Finally, the original image is pasted over the tile image.

I've added a Jupyter notebook to do a smoke test of infilling methods, along with 10 test images.

The other infill algorithms can be easily tested with the notebook on the same images, though I didn't set that up yet.

Tested and confirmed this gives results just as good as the earlier infill, though of course they aren't the same due to the change in the algorithm.
2024-04-05 08:49:13 +11:00
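The description above maps roughly onto the following PIL-based sketch: build a pool of tiles, cover the canvas with them (cropping at the edges), then paste the original image back on top. It is an illustration of the approach, not the actual implementation.

```python
# Rough sketch of the tiling procedure described above: build a pool of
# tiles, cover the whole canvas with them (cropping the last row/column to
# fit), then paste the original image back on top so only the transparent
# areas keep the tiled fill. PIL-based illustration, not the actual code.
import random

from PIL import Image


def tile_infill(image: Image.Image, tile_size: int = 32) -> Image.Image:
    width, height = image.size
    rgb = image.convert("RGB")

    # 1. Build a pool of tiles sampled from the source image.
    pool = []
    for top in range(0, height - tile_size + 1, tile_size):
        for left in range(0, width - tile_size + 1, tile_size):
            pool.append(rgb.crop((left, top, left + tile_size, top + tile_size)))
    if not pool:  # image smaller than one tile
        return image

    # 2. Compose an image made entirely of tiles, cropping tiles at the edges.
    tiled = Image.new("RGB", (width, height))
    for top in range(0, height, tile_size):
        for left in range(0, width, tile_size):
            tile = random.choice(pool)
            w = min(tile_size, width - left)
            h = min(tile_size, height - top)
            tiled.paste(tile.crop((0, 0, w, h)), (left, top))

    # 3. Paste the original image over the tiled background, using its alpha
    #    channel so only transparent regions show the infill.
    result = tiled.convert("RGBA")
    result.paste(image, (0, 0), image.convert("RGBA"))
    return result
```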
blessedcoolant
3c195d74a5 fix: bypass edge pixels which cannot transform to tile size
Still need to fix this somehow
2024-04-05 08:49:13 +11:00
blessedcoolant
32a6b758cd wip: Initial Infill Methods Refactor 2024-04-05 08:49:13 +11:00
brandonrising
51ca59c088 Update probe to always use cpu for loading models 2024-04-04 07:34:43 +11:00
psychedelicious
85f53f94f8 feat(mm): include needed vs free in OOM
Gives us a bit more visibility into these errors, which seem to be popping up more frequently with the new MM.
2024-04-04 06:26:15 +11:00
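A minimal illustration of what including needed vs. free in the OOM error can look like; the function name and message wording are assumptions.

```python
# Minimal illustration of surfacing "needed vs. free" when the model cache
# cannot fit a model; names and wording are assumptions, not the exact code.
def assert_fits_in_vram(needed_bytes: int, free_bytes: int) -> None:
    if needed_bytes > free_bytes:
        raise RuntimeError(
            f"Insufficient VRAM to load model: needed {needed_bytes / 2**30:.2f} GB, "
            f"free {free_bytes / 2**30:.2f} GB."
        )
```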
blessedcoolant
be574cb764 fix: incorrect suffix check in ip adapter checkpoint file 2024-04-03 22:38:28 +05:30
blessedcoolant
5f01de1993 chore: ruff and lint fixes 2024-04-03 20:41:51 +05:30
blessedcoolant
e574815413 chore: clean up merge conflicts 2024-04-03 20:28:00 +05:30
blessedcoolant
fb293dcd84 Merge branch 'checkpoint-ip-adapter' of https://github.com/blessedcoolant/InvokeAI into checkpoint-ip-adapter 2024-04-03 20:23:07 +05:30
blessedcoolant
414851f2f0 fix: raise and present the runtime error from the exception 2024-04-03 20:21:50 +05:30
blessedcoolant
2dcbb7223b fix: use Path for ip_adapter_ckpt_path instead of str 2024-04-03 20:21:03 +05:30
blessedcoolant
14a9f74b17 cleanup: use load_file of safetensors directly for loading ip adapters 2024-04-03 12:40:13 +05:30
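For reference, `safetensors.torch.load_file` reads a checkpoint directly into a dict of tensors; a minimal usage sketch (the filename is a placeholder):

```python
# Minimal use of safetensors' load_file to read an IP-Adapter checkpoint as
# a plain state dict; the filename below is a placeholder.
from safetensors.torch import load_file

state_dict = load_file("ip_adapter.safetensors", device="cpu")  # name -> torch.Tensor
```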
blessedcoolant
1372ef15b3 fix: Fail when unexpected keys are found in IP Adapter models 2024-04-03 12:40:11 +05:30
blessedcoolant
be1212de9a fix: Raise a better error when incorrect CLIP Vision model is used 2024-04-03 12:40:10 +05:30
blessedcoolant
a14ce0edab chore: rename IPAdapterDiffusersConfig to IPAdapterInvokeAIConfig 2024-04-03 12:40:10 +05:30
blessedcoolant
4a0dfc3b2d ui: improve the clip vision model picker layout 2024-04-03 12:40:08 +05:30
blessedcoolant
936b99bd3c chore: improve types in ip_adapter backend file 2024-04-03 12:40:02 +05:30
blessedcoolant
79f7b61dfe fix: cleanup across various ip adapter files 2024-04-03 12:39:52 +05:30
blessedcoolant
b1c8266e22 feat: add base model recognition for ip adapter safetensor files 2024-04-03 12:39:52 +05:30
blessedcoolant
67afb1763e wip: Initial implementation of safetensor support for IP Adapter 2024-04-03 12:39:52 +05:30
psychedelicious
59b4a23479 feat(mm): use same pattern for vae converter as others
Add `dump_path` arg to the converter function & save the model to disk inside the conversion function. This is the same pattern as in the other conversion functions.
2024-04-01 12:34:49 +11:00
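The pattern described above (the converter accepts a `dump_path` and saves the model itself) might look roughly like this; the function name and the use of `from_single_file` are assumptions for illustration.

```python
# Sketch of the converter pattern described above: the conversion function
# accepts a dump_path and saves the converted model itself, rather than
# leaving serialization to the caller. Names are illustrative.
from pathlib import Path

from diffusers import AutoencoderKL


def convert_vae_checkpoint(checkpoint_path: Path, dump_path: Path) -> AutoencoderKL:
    vae = AutoencoderKL.from_single_file(checkpoint_path)
    # Save to disk inside the conversion function, like the other converters.
    vae.save_pretrained(dump_path)
    return vae
```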
psychedelicious
13f410478a fix(mm): typing issues in vae loader 2024-04-01 12:34:49 +11:00
psychedelicious
25ff0bf80f fix(mm): return converted vae model instead of path
This was missed in #6072.
2024-04-01 12:34:49 +11:00
blessedcoolant
23390f1516 cleanup: use load_file of safetensors directly for loading ip adapters 2024-04-01 06:37:38 +05:30
Lincoln Stein
3d6d89feb4
[mm] Do not write diffuser model to disk when convert_cache set to zero (#6072)
* pass model config to _load_model

* make conversion work again

* do not write diffusers to disk when convert_cache is set to 0

* adding the same model to the cache twice is a no-op, not an assertion error

* fix issues identified by psychedelicious during pr review

* following conversion, avoid redundant read of cached submodels

* fix error introduced while merging

---------

Co-authored-by: Lincoln Stein <lstein@gmail.com>
2024-03-29 16:11:08 -04:00
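Two of the behaviors in the bullets above, sketched under assumed names: skip the on-disk convert cache when its size is 0, and make re-adding a cached model a no-op.

```python
# Sketch of two behaviors from the bullets above: skip the on-disk convert
# cache when its size is set to 0, and treat re-adding a cached model as a
# no-op instead of an assertion failure. All names are illustrative.
def maybe_cache_converted_model(converted_model, convert_cache, key: str):
    if convert_cache.max_size == 0:
        # convert_cache=0: keep the converted model in memory only.
        return converted_model
    convert_cache.save(key, converted_model)
    return converted_model


def add_to_ram_cache(cache: dict, key: str, model) -> None:
    if key in cache:
        return  # already cached: no-op rather than an assertion error
    cache[key] = model
```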
psychedelicious
6d261a5a13 fix(mm): handle relative conversion config paths
I have tested main, controlnet and vae checkpoint conversions.
2024-03-29 10:56:06 -04:00
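Handling a relative conversion-config path typically means resolving it against a known base directory before use; a small sketch with assumed names:

```python
# Sketch of resolving a relative conversion-config path against a known
# base directory before use; the base_dir parameter is an assumption.
from pathlib import Path


def resolve_config_path(config_path: str | Path, base_dir: Path) -> Path:
    path = Path(config_path)
    return path if path.is_absolute() else (base_dir / path).resolve()
```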
blessedcoolant
cd52e99bb9 Merge branch 'main' into checkpoint-ip-adapter 2024-03-29 12:39:53 +05:30
blessedcoolant
6e4c2d3685 fix: Fail when unexpected keys are found in IP Adapter models 2024-03-29 12:34:56 +05:30
blessedcoolant
cd078b1865 fix: Raise a better error when incorrect CLIP Vision model is used 2024-03-29 11:58:10 +05:30
blessedcoolant
0d8b535131 chore: rename IPAdapterDiffusersConfig to IPAdapterInvokeAIConfig 2024-03-29 11:50:18 +05:30
brandonrising
3681e34d5a Use defaults for db_dir and outdir since config no longer writes defaults to invokeai.yaml 2024-03-28 22:39:48 -04:00
psychedelicious
2526ef52c5 fix(nodes): workaround seamless multi gpu error #6010
The seamless logic errors when a second GPU is selected. I don't understand why, but a workaround is to skip the model patching when there are no seamless axes specified.

This is also just a good practice regardless - don't patch the model unless we need to. Probably a negligible perf impact.

Closes #6010
2024-03-29 08:56:38 +11:00
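The workaround described above, sketched: only enter the seamless patching context when at least one axis is requested, and leave the model untouched otherwise. The `set_seamless` helper here is a simplified stand-in, not the actual implementation.

```python
# Sketch of the workaround described above: only enter the seamless
# patching context when at least one seamless axis is requested.
# set_seamless below is a simplified stand-in for the real patcher.
from contextlib import contextmanager, nullcontext

import torch


@contextmanager
def set_seamless(model: torch.nn.Module, seamless_axes: list[str]):
    # Patch Conv2d padding to circular for the requested axes, restoring the
    # original modes on exit. Simplified: real code handles x/y separately.
    originals = []
    try:
        for module in model.modules():
            if isinstance(module, torch.nn.Conv2d):
                originals.append((module, module.padding_mode))
                module.padding_mode = "circular"
        yield model
    finally:
        for module, mode in originals:
            module.padding_mode = mode


def seamless_context(model: torch.nn.Module, seamless_axes: list[str]):
    # The workaround: don't patch at all when no seamless axes are specified.
    if not seamless_axes:
        return nullcontext(model)
    return set_seamless(model, seamless_axes)
```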
psychedelicious
2f6cce48af docs(mm): update ModelSearch 2024-03-28 12:35:41 +11:00
blessedcoolant
1a93f56d06 ui: improve the clip vision model picker layout 2024-03-27 22:11:07 +05:30
blessedcoolant
688a0f30bb chore: improve types in ip_adapter backend file 2024-03-27 22:08:23 +05:30