Commit Graph

1530 Commits

Author SHA1 Message Date
psychedelicious
a09d705e4c fix(mm): remove vram check
This check prematurely reports insufficient VRAM on Windows. See #6106 for details.
2024-04-06 14:35:36 +11:00
Lincoln Stein
4571986c63 fix misplaced lock call 2024-04-05 14:32:18 +11:00
Lincoln Stein
812f10730f
adjust free vram calculation for models that will be removed by lazy offloading (#6150)
Co-authored-by: Lincoln Stein <lstein@gmail.com>
2024-04-04 22:51:12 -04:00
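A minimal sketch of the adjusted calculation described above, assuming hypothetical bookkeeping of models eligible for lazy offload (not the actual MM internals): VRAM held by models that will be evicted before the next load counts as reclaimable.

```python
import torch

def effective_free_vram(device: torch.device, offloadable_sizes: list[int]) -> int:
    """Free VRAM in bytes, counting models that lazy offloading will evict.

    `offloadable_sizes` is a hypothetical list of byte sizes of models
    currently in VRAM that are eligible for lazy offload.
    """
    free, _total = torch.cuda.mem_get_info(device)
    # Models pending lazy offload are moved off the GPU before the next
    # model loads, so their VRAM is effectively free.
    return free + sum(offloadable_sizes)
```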
psychedelicious
f0b1bb0327 feat(nodes): redo tile infill
The previous algorithm errored if the image wasn't evenly divisible by the tile size. I've reimplemented it from scratch to resolve this issue.

The new algorithm is simpler. We create a pool of tiles, then use them to create an image composed completely of tiles. If there is any awkwardly sized space on the edge of the image, the tiles are cropped to fit.

Finally, we paste the original image over the tile image.

I've added a jupyter notebook to do a smoke test of infilling methods, and 10 test images.

The other infill algorithms can be easily tested with the notebook on the same images, though I didn't set that up yet.

Tested and confirmed this gives results just as good as the earlier infill, though of course they aren't the same due to the change in the algorithm.
2024-04-05 08:49:13 +11:00
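A minimal sketch of the tile-pool approach described above, using PIL and NumPy; sampling the pool from the fully opaque regions of the source image is an assumption, not necessarily how the node builds its pool.

```python
import numpy as np
from PIL import Image

def tile_infill(image: Image.Image, tile_size: int = 32, seed: int = 0) -> Image.Image:
    """Build a pool of tiles, cover the whole canvas with them (cropping at
    the edges), then paste the original image on top so only transparent
    regions keep the tiled background."""
    rgba = np.array(image.convert("RGBA"))
    h, w = rgba.shape[:2]
    rng = np.random.default_rng(seed)

    # Pool of fully opaque tiles sampled from the source image (assumption:
    # any fully opaque tile is usable for infill).
    pool = [
        rgba[y : y + tile_size, x : x + tile_size]
        for y in range(0, h - tile_size + 1, tile_size)
        for x in range(0, w - tile_size + 1, tile_size)
        if (rgba[y : y + tile_size, x : x + tile_size][..., 3] == 255).all()
    ]
    if not pool:
        return image

    # Cover the canvas completely with random tiles; tiles that overhang an
    # edge are cropped to fit.
    canvas = np.zeros_like(rgba)
    for y in range(0, h, tile_size):
        for x in range(0, w, tile_size):
            tile = pool[rng.integers(len(pool))]
            th, tw = min(tile_size, h - y), min(tile_size, w - x)
            canvas[y : y + th, x : x + tw] = tile[:th, :tw]

    # Finally, paste the original over the tiled image, masked by its alpha.
    result = Image.fromarray(canvas)
    result.paste(image, (0, 0), image.convert("RGBA"))
    return result
```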
blessedcoolant
3c195d74a5 fix: bypass edge pixels which cannot be transformed to the tile size
Still need to fix this somehow
2024-04-05 08:49:13 +11:00
blessedcoolant
32a6b758cd wip: Initial Infill Methods Refactor 2024-04-05 08:49:13 +11:00
brandonrising
51ca59c088 Update probe to always use cpu for loading models 2024-04-04 07:34:43 +11:00
psychedelicious
85f53f94f8 feat(mm): include needed vs free in OOM
Gives us a bit more visibility into these errors, which seem to be popping up more frequently with the new MM.
2024-04-04 06:26:15 +11:00
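A hedged sketch of needed-vs-free reporting; the check and message here are illustrative, not the actual model cache code.

```python
import torch

def assert_fits_in_vram(needed_bytes: int, device: torch.device) -> None:
    """Raise a descriptive error when a model won't fit in free VRAM."""
    free, total = torch.cuda.mem_get_info(device)
    if needed_bytes > free:
        raise torch.cuda.OutOfMemoryError(
            f"Insufficient VRAM: needed {needed_bytes / 2**30:.2f} GB, "
            f"free {free / 2**30:.2f} GB (of {total / 2**30:.2f} GB total)."
        )
```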
blessedcoolant
be574cb764 fix: incorrect suffix check in ip adapter checkpoint file 2024-04-03 22:38:28 +05:30
blessedcoolant
5f01de1993 chore: ruff and lint fixes 2024-04-03 20:41:51 +05:30
blessedcoolant
e574815413 chore: clean up merge conflicts 2024-04-03 20:28:00 +05:30
blessedcoolant
fb293dcd84 Merge branch 'checkpoint-ip-adapter' of https://github.com/blessedcoolant/InvokeAI into checkpoint-ip-adapter 2024-04-03 20:23:07 +05:30
blessedcoolant
414851f2f0 fix: raise and present the runtime error from the exception 2024-04-03 20:21:50 +05:30
blessedcoolant
2dcbb7223b fix: use Path for ip_adapter_ckpt_path instead of str 2024-04-03 20:21:03 +05:30
blessedcoolant
14a9f74b17 cleanup: use load_file of safetensors directly for loading ip adapters 2024-04-03 12:40:13 +05:30
blessedcoolant
1372ef15b3 fix: Fail when unexpected keys are found in IP Adapter models 2024-04-03 12:40:11 +05:30
blessedcoolant
be1212de9a fix: Raise a better error when incorrect CLIP Vision model is used 2024-04-03 12:40:10 +05:30
blessedcoolant
a14ce0edab chore: rename IPAdapterDiffusersConfig to IPAdapterInvokeAIConfig 2024-04-03 12:40:10 +05:30
blessedcoolant
4a0dfc3b2d ui: improve the clip vision model picker layout 2024-04-03 12:40:08 +05:30
blessedcoolant
936b99bd3c chore: improve types in ip_adapter backend file 2024-04-03 12:40:02 +05:30
blessedcoolant
79f7b61dfe fix: cleanup across various ip adapter files 2024-04-03 12:39:52 +05:30
blessedcoolant
b1c8266e22 feat: add base model recognition for ip adapter safetensor files 2024-04-03 12:39:52 +05:30
blessedcoolant
67afb1763e wip: Initial implementation of safetensor support for IP Adapter 2024-04-03 12:39:52 +05:30
psychedelicious
0259114d9c
Merge branch 'main' into main 2024-04-03 17:03:19 +11:00
psychedelicious
51e515b925 tidy: use lowercase for tcd scheduler identifier 2024-04-03 17:03:02 +11:00
psychedelicious
8c509295f9 chore: ruff 2024-04-03 17:02:45 +11:00
psychedelicious
59b4a23479 feat(mm): use same pattern for vae converter as others
Add `dump_path` arg to the converter function & save the model to disk inside the conversion function. This is the same pattern as in the other conversion functions.
2024-04-01 12:34:49 +11:00
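A sketch of the pattern, with an illustrative converter body; the essential part is that the function takes a `dump_path` and saves to disk itself, like the other converters.

```python
from pathlib import Path

from diffusers import AutoencoderKL

def convert_vae_checkpoint(checkpoint_path: Path, dump_path: Path) -> AutoencoderKL:
    """Convert a VAE checkpoint to diffusers format, saving to disk inside
    the conversion function -- the same pattern as the other converters."""
    vae = AutoencoderKL.from_single_file(checkpoint_path)  # illustrative load
    vae.save_pretrained(dump_path)
    return vae
```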
psychedelicious
13f410478a fix(mm): typing issues in vae loader 2024-04-01 12:34:49 +11:00
psychedelicious
25ff0bf80f fix(mm): return converted vae model instead of path
This was missed in #6072.
2024-04-01 12:34:49 +11:00
blessedcoolant
23390f1516 cleanup: use load_file of safetensors directly for loading ip adapters 2024-04-01 06:37:38 +05:30
Lincoln Stein
3d6d89feb4
[mm] Do not write diffusers model to disk when convert_cache set to zero (#6072)
* pass model config to _load_model

* make conversion work again

* do not write diffusers to disk when convert_cache set to 0

* adding the same model to the cache twice is a no-op, not an assertion error

* fix issues identified by psychedelicious during pr review

* following conversion, avoid redundant read of cached submodels

* fix error introduced while merging

---------

Co-authored-by: Lincoln Stein <lstein@gmail.com>
2024-03-29 16:11:08 -04:00
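A hedged sketch of the behaviour described in the PR above (`convert_checkpoint_to_diffusers` is a hypothetical helper): a zero-sized convert cache means the converted model is returned in memory and never written to disk.

```python
from pathlib import Path

def load_converted_model(checkpoint_path: Path, convert_cache_dir: Path, convert_cache_gb: float):
    """Convert a checkpoint to diffusers format; only persist the result
    when the convert cache is allowed to hold anything."""
    model = convert_checkpoint_to_diffusers(checkpoint_path)  # hypothetical helper
    if convert_cache_gb == 0:
        # convert_cache set to zero: return the in-memory model, write nothing.
        return model
    dump_path = convert_cache_dir / checkpoint_path.stem
    model.save_pretrained(dump_path)
    return model
```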
psychedelicious
6d261a5a13 fix(mm): handle relative conversion config paths
I have tested main, controlnet and vae checkpoint conversions.
2024-03-29 10:56:06 -04:00
l0stl0rd
1a83936cdd
Merge branch 'invoke-ai:main' into main 2024-03-29 11:14:28 +01:00
blessedcoolant
cd52e99bb9 Merge branch 'main' into checkpoint-ip-adapter 2024-03-29 12:39:53 +05:30
blessedcoolant
6e4c2d3685 fix: Fail when unexpected keys are found in IP Adapter models 2024-03-29 12:34:56 +05:30
blessedcoolant
cd078b1865 fix: Raise a better error when incorrect CLIP Vision model is used 2024-03-29 11:58:10 +05:30
blessedcoolant
0d8b535131 chore: rename IPAdapterDiffusersConfig to IPAdapterInvokeAIConfig 2024-03-29 11:50:18 +05:30
brandonrising
3681e34d5a Use defaults for db_dir and outdir since config no longer writes defaults to invokeai.yaml 2024-03-28 22:39:48 -04:00
psychedelicious
2526ef52c5 fix(nodes): workaround seamless multi gpu error #6010
The seamless logic errors when a second GPU is selected. I don't understand why, but a workaround is to skip the model patching when there are no seamless axes specified.

This is also just a good practice regardless - don't patch the model unless we need to. Probably a negligible perf impact.

Closes #6010
2024-03-29 08:56:38 +11:00
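A sketch of the workaround, assuming a hypothetical context-manager `patch_seamless` rather than the actual patching API:

```python
from contextlib import nullcontext

def denoise(unet, seamless_axes: list[str]):
    # Workaround for #6010: only patch the model when seamless axes are
    # actually requested; otherwise leave it untouched.
    # (patch_seamless is a hypothetical context manager.)
    patch = patch_seamless(unet, seamless_axes) if seamless_axes else nullcontext()
    with patch:
        ...  # run the denoising loop as usual
```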
l0stl0rd
80e311a069
Update schedulers.py 2024-03-28 22:52:15 +01:00
l0stl0rd
b6e6bdc195
Update schedulers.py 2024-03-28 22:51:59 +01:00
psychedelicious
2f6cce48af docs(mm): update ModelSearch 2024-03-28 12:35:41 +11:00
blessedcoolant
1a93f56d06 ui: improve the clip vision model picker layout 2024-03-27 22:11:07 +05:30
blessedcoolant
688a0f30bb chore: improve types in ip_adapter backend file 2024-03-27 22:08:23 +05:30
blessedcoolant
4ed2bf53ca fix: cleanup across various ip adapter files 2024-03-27 22:08:14 +05:30
blessedcoolant
60bf0caca3 feat: add base model recognition for ip adapter safetensor files 2024-03-27 22:08:14 +05:30
blessedcoolant
b013d0e064 wip: Initial implementation of safetensor support for IP Adapter 2024-03-27 22:08:14 +05:30
psychedelicious
a397fdbd25 chore: ruff 2024-03-27 08:16:27 -04:00
psychedelicious
a291a42abc feat: display torch device on startup
This functionality disappeared at some point.
2024-03-27 08:16:27 -04:00
psychedelicious
21758e7b49 fix(mm): move depth variant config to sd2
Looks like a copy/paste got mixed up.
2024-03-27 07:48:54 -04:00
psychedelicious
eb33303e79 fix(mm): handle depth and inpainting models when converting to diffusers
"Normal" models have 4 in-channels, while "Depth" models have 5 and "Inpaint" models have 9.

We need to explicitly tell diffusers the channel count when converting models.

Closes #6058
2024-03-27 07:48:54 -04:00
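The channel count can be read off the first UNet conv weight in the checkpoint; a sketch, assuming the checkpoint follows the standard Stable Diffusion state-dict layout:

```python
import torch

def get_unet_in_channels(state_dict: dict[str, torch.Tensor]) -> int:
    """4 for "normal" models, 5 for depth models, 9 for inpainting models."""
    # The first UNet conv weight has shape (out_ch, in_ch, kH, kW) in the
    # standard Stable Diffusion checkpoint layout.
    return state_dict["model.diffusion_model.input_blocks.0.0.weight"].shape[1]
```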
psychedelicious
243de683d7 fix(mm): handle integer state dict keys in probe
It's possible for a model's state dict to have integer keys, though we do not actually support such models.

As part of probing, we call `key.startswith(...)` on the state dict keys. This raises an `AttributeError` for integer keys.

This logic is in `invokeai/backend/model_manager/probe.py:get_model_type_from_checkpoint`

To fix this, we can cast the keys to strings first. The models with integer keys will still fail to be probed, but we'll get an `InvalidModelConfigException` instead of an `AttributeError`.

Closes #6044
2024-03-27 09:30:25 +11:00
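A minimal sketch of the fix, with an illustrative helper name:

```python
def has_key_with_prefix(state_dict: dict, prefix: str) -> bool:
    # Integer keys would raise AttributeError on .startswith(); casting to
    # str lets probing continue, so unsupported models fail later with a
    # proper InvalidModelConfigException instead.
    return any(str(key).startswith(prefix) for key in state_dict)
```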
psychedelicious
2ec03ae95c fix(mm): default settings pydantic error
Add `extra="forbid"` to the default settings models.

Closes #6035.

Pydantic has some quirks related to unions. This affected how the union of default settings was evaluated. See https://github.com/pydantic/pydantic/issues/9095 for a detailed description of the behaviour that this change addresses.
2024-03-25 07:40:52 -04:00
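A minimal sketch with assumed field names: `extra="forbid"` lets Pydantic reject a union member when the input carries fields that member doesn't declare, so the correct member is selected.

```python
from typing import Union

from pydantic import BaseModel, ConfigDict

class MainModelDefaultSettings(BaseModel):
    model_config = ConfigDict(extra="forbid")
    vae: str | None = None
    scheduler: str | None = None

class ControlAdapterDefaultSettings(BaseModel):
    model_config = ConfigDict(extra="forbid")
    preprocessor: str | None = None

# With extra="forbid", {"preprocessor": "canny"} can only validate as
# ControlAdapterDefaultSettings; without it, the first union member could
# silently accept the input and drop the unknown field.
DefaultSettings = Union[MainModelDefaultSettings, ControlAdapterDefaultSettings]
```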
skunkworxdark
37fd57d4d9 Update probe.py
Minor capitalization typo: `ModelType.Lora` should be `ModelType.LoRA`
2024-03-22 09:09:56 -07:00
psychedelicious
e7a096dec1 fix(mm): remove proteus model
This model is SDXL and relies on CLIP Skip. We don't support that yet.
2024-03-22 02:22:03 -07:00
psychedelicious
05d6661877 feat(mm): revised list of starter models
- Enriched dependencies so they are not just strings - this allows a model to be reused as a starter model _and_ as a dependency of another model. For example, all the SDXL models have the fp16 VAE as a dependency, but you can also download it on its own.
- Looked at popular models on the major model sites to select the list. No SD2 models. All hosted on HF.
2024-03-22 14:59:33 +11:00
Lincoln Stein
27622dfd5e allow checkpoint config files to use root-relative paths 2024-03-22 08:57:45 +11:00
psychedelicious
72b44f7ebc feat(mm): rename "blake3" to "blake3_multi"
Just make it clearer which is which.
2024-03-22 08:26:36 +11:00
psychedelicious
7726d312e1 feat(mm): default hashing algo to blake3_single
For SSDs, `blake3` is about 10x faster than `blake3_single` - 30 files/second vs 3 files/second.

For spinning HDDs, `blake3` is about 100x slower than `blake3_single` - 300 seconds/file vs 3 seconds/file.

For external drives, `blake3` is always worse, but the difference is highly variable. For external spinning drives, it's probably way worse than internal.

The least offensive algorithm is `blake3_single`, and it's still _much_ faster than the non-BLAKE3 algorithms.
2024-03-22 08:26:36 +11:00
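The two modes, sketched with the `blake3` Python package (the multi-threaded, memory-mapped reads that help on SSDs are exactly what thrash spinning disks):

```python
from pathlib import Path

from blake3 import blake3

def hash_blake3_multi(path: Path) -> str:
    # Memory-mapped, multi-threaded hashing: fast on SSDs, pathological on
    # spinning/external drives due to non-sequential reads.
    hasher = blake3(max_threads=blake3.AUTO)
    hasher.update_mmap(path)
    return hasher.hexdigest()

def hash_blake3_single(path: Path) -> str:
    # Single-threaded, sequential reads: the safe default.
    hasher = blake3()
    with open(path, "rb") as f:
        while chunk := f.read(2**20):
            hasher.update(chunk)
    return hasher.hexdigest()
```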
Lincoln Stein
d4d0fea078
[feature] Add probe for SDXL controlnet models (#5382)
* add probe for SDXL controlnet models

* Update invokeai/backend/model_management/model_probe.py

Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>

* Update invokeai/backend/model_manager/probe.py

Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>

---------

Co-authored-by: Lincoln Stein <lstein@gmail.com>
Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>
2024-03-21 14:49:45 +00:00
psychedelicious
a6283b9fb6 tidy: "fit_image_to_resolution" -> "resize_image_to_resolution" 2024-03-21 07:02:57 -07:00
psychedelicious
64fb15e117 chore: ruff 2024-03-21 07:02:57 -07:00
psychedelicious
e2d7b514e0 tidy: correct attributions for controlnet processors 2024-03-21 07:02:57 -07:00
psychedelicious
c36d12a50f feat: adaptation of Lineart Anime processor
Adapted from https://github.com/huggingface/controlnet_aux
2024-03-21 07:02:57 -07:00
psychedelicious
c7f8fe4d5e feat: adaptation of Lineart processor
Adapted from https://github.com/huggingface/controlnet_aux
2024-03-21 07:02:57 -07:00
psychedelicious
ffb41c3616 feat: adaptation of HED processor
Adapted from controlnet repo
2024-03-21 07:02:57 -07:00
psychedelicious
611006b692 feat: adaptation of Canny processor
Adapted from controlnet processors package

fix: do final resize in canny processor

canny
2024-03-21 07:02:57 -07:00
psychedelicious
ca496f0380 feat: add image utils
These all support controlnet processors.

- `pil_to_cv2`
- `cv2_to_pil`
- `pil_to_np`
- `np_to_pil`
- `normalize_image_channel_count` (a readable version of `HWC3` from the controlnet repo)
- `fit_image_to_resolution` (a readable version of `resize_image` from the controlnet repo)
- `non_maximum_suppression` (a readable version of `nms` from the controlnet repo)
- `safe_step` (a readable version of `safe_step` from the controlnet repo)
2024-03-21 07:02:57 -07:00
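A sketch of the first two helpers (the rest follow the same pattern); the key detail is the RGB/BGR channel swap between PIL and OpenCV:

```python
import cv2
import numpy as np
from PIL import Image

def pil_to_cv2(image: Image.Image) -> np.ndarray:
    """PIL (RGB) -> OpenCV (BGR) uint8 array."""
    return cv2.cvtColor(np.array(image.convert("RGB")), cv2.COLOR_RGB2BGR)

def cv2_to_pil(image: np.ndarray) -> Image.Image:
    """OpenCV (BGR) uint8 array -> PIL (RGB)."""
    return Image.fromarray(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
```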
psychedelicious
7a4122235f feat(mm): display progress when hashing files 2024-03-21 17:24:48 +11:00
psychedelicious
6c13fa13ea fix(mm): regression from change to legacy conf dir change 2024-03-20 15:05:25 +11:00
psychedelicious
040ea8f41b tidy: do not show msg when loading NSFW checker 2024-03-20 15:05:25 +11:00
psychedelicious
2eacbb4d9d fix(nodes): do not load NSFW checker model on startup
Just check if the path exists to determine if it is "available". When needed, load it.
2024-03-20 15:05:25 +11:00
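A sketch of the lazy-loading pattern described above; the class shape and `load_model` helper are illustrative:

```python
from pathlib import Path

class LazyNSFWChecker:
    """Report availability from the filesystem; load the model on first use."""

    def __init__(self, model_path: Path):
        self._model_path = model_path
        self._model = None  # nothing is loaded at startup

    @property
    def available(self) -> bool:
        # "Available" just means the weights exist on disk.
        return self._model_path.exists()

    def get_model(self):
        if self._model is None:
            self._model = load_model(self._model_path)  # hypothetical loader
        return self._model
```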
psychedelicious
b378cfcb46 cleanup: remove unused scripts, cruft
App runs & tests pass.
2024-03-20 15:05:25 +11:00
psychedelicious
5ceaeb234d feat(mm): add starter models route
The models from INITIAL_MODELS.yaml have been recreated as a structured Python object. This data is served on a new route. The model sources are compared against currently installed models to determine whether they are already installed.
2024-03-20 15:05:25 +11:00
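A hedged sketch of such a route (InvokeAI's API is FastAPI-based, but the path, schema, and lookup here are assumptions):

```python
from fastapi import APIRouter
from pydantic import BaseModel

router = APIRouter(prefix="/api/v2/models")

class StarterModel(BaseModel):
    name: str
    source: str                    # install source, e.g. an HF repo id
    dependencies: list[str] = []   # other sources this model needs
    is_installed: bool = False

STARTER_MODELS: list[StarterModel] = []  # recreated from INITIAL_MODELS.yaml

@router.get("/starter_models")
async def get_starter_models() -> list[StarterModel]:
    installed = get_installed_model_sources()  # hypothetical lookup
    for model in STARTER_MODELS:
        model.is_installed = model.source in installed
    return STARTER_MODELS
```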
dunkeroni
609c2c0abf Fix: progress image preview for inpainting 2024-03-20 13:36:05 +11:00
dunkeroni
fe5fa7f8cc chore: make ruff 2024-03-20 13:36:05 +11:00
dunkeroni
8b30cbe81e chore: clean up old code comments 2024-03-20 13:36:05 +11:00
dunkeroni
2af9286345 fix: denoise mask incorrectly applied after step 2024-03-20 13:36:05 +11:00
Lincoln Stein
c87497fd54
record model_variant in t2i and clip_vision configs (#5989)
- Move base of t2i and clip_vision config models to DiffusersBase, which contains
  a field to record the model variant (e.g. "fp16")
- This restores the ability to load fp16 t2i and clip_vision models
- Also add defensive coding to load the vanilla model when the fp16 model
  has been replaced (or, more likely, the user's preferences changed since installation)

Co-authored-by: Lincoln Stein <lstein@gmail.com>
2024-03-19 20:14:12 +00:00
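The defensive-loading part, sketched with diffusers; the fallback-on-missing-variant is the behaviour described above, while the surrounding code is illustrative:

```python
from diffusers import T2IAdapter

def load_t2i_with_variant_fallback(model_path: str, variant: str | None = "fp16"):
    try:
        return T2IAdapter.from_pretrained(model_path, variant=variant)
    except (OSError, ValueError):
        # The recorded variant's weight files are gone (replaced, or the
        # user's preferences changed since install); fall back to vanilla.
        return T2IAdapter.from_pretrained(model_path)
```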
Brandon Rising
9d5b96c119 Pull format type setting out of model_type if statement 2024-03-19 01:16:11 -04:00
Brandon Rising
5daefccf77 Simplify logic for determining model type in probe 2024-03-19 01:16:11 -04:00
Brandon Rising
1f3c35ee90 Run Ruff 2024-03-19 01:16:11 -04:00
Brandon Rising
f78ed3a952 Add unit test 2024-03-19 01:16:11 -04:00
Brandon Rising
d38262a7ea Allow type field to be a string 2024-03-19 01:16:11 -04:00
Brandon Rising
5feb62d440 Allow users to specify model type and skip detection step of probe 2024-03-19 01:16:11 -04:00
Lincoln Stein
f8df293d2c Revert "fix(mm): provide ckpt config as stream to diffusers"
This reverts commit 9d045964d6.
2024-03-19 14:24:54 +11:00
psychedelicious
9d045964d6 fix(mm): provide ckpt config as stream to diffusers
Fixes converting ckpt main models
2024-03-19 09:24:28 +11:00
psychedelicious
1cb1b60b4c tidy: "check_root.py" -> "check_directories.py" 2024-03-19 09:24:28 +11:00
psychedelicious
1d4517d00d tidy: "validate_root" -> "validate_directories" 2024-03-19 09:24:28 +11:00
psychedelicious
982b513af3 tidy(config): move a few get_config calls to inside the functions where they are needed 2024-03-19 09:24:28 +11:00
Lincoln Stein
5d16a40b95 fix invokeai-configure to use isolated argument-parsing pattern 2024-03-19 09:24:28 +11:00
Lincoln Stein
d871fca643 partially address --root CLI argument handling
- fix places where `get_config()` is being called at import time rather
  than at run time.

- add a regression test for import-time `get_config()` calls.
2024-03-19 09:24:28 +11:00
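The anti-pattern and its fix, sketched (the import path for `get_config` is an assumption):

```python
from invokeai.app.services.config import get_config  # assumed import path

# Anti-pattern: module level, runs at import time -- before --root (and the
# rest of the CLI) has been parsed, so the wrong root may be captured.
# config = get_config()

def do_something() -> None:
    # Fix: resolve the config at run time, inside the function body.
    config = get_config()
    print(config.root_path)
```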
psychedelicious
e76cc71e81 fix(config): edge cases in models.yaml migration
When running the configurator, the `legacy_models_conf_path` was stripped when saving the config file. Then the migration logic didn't fire correctly, and the custom models.yaml paths weren't migrated into the db.

- Rework the logic to migrate this path by adding it to the config object as a normal field that is not excluded from serialization.
- Rearrange the models.yaml migration logic to remove the legacy path after migrating, then write the config file. This way, the legacy path doesn't stick around.
- Move the schema version into the config object.
- Back up the config file before attempting migration.
- Add tests to cover this edge case
2024-03-19 09:24:28 +11:00
psychedelicious
5179587b5a feat(config): restore ignore_missing_core_models arg 2024-03-19 09:24:28 +11:00
psychedelicious
cb180909f7 fix(install): resolve config-related issues with configurator
- Do not use the singleton app config; this causes a lot of weirdness
- Update logic to use new config object
- Remove unused code
2024-03-19 09:24:28 +11:00
psychedelicious
60492500db chore: ruff 2024-03-19 09:24:28 +11:00
psychedelicious
f69938c6a8 fix(config): revised config methods
- `write_file` requires a destination file path
- `read_config` -> `merge_from_file`; if no path is provided, it reads from `self.init_file_path`
- update app and tests to use the new methods
- fix configurator, which was overwriting config file data unexpectedly
2024-03-19 09:24:28 +11:00
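Usage under the revised methods described above, sketched (the `InvokeAIAppConfig` import path is an assumption):

```python
from pathlib import Path

from invokeai.app.services.config import InvokeAIAppConfig  # assumed import path

config = InvokeAIAppConfig()
config.merge_from_file()                     # no path: reads self.init_file_path
config.merge_from_file(Path("custom.yaml"))  # or merge from an explicit file
config.write_file(Path("invokeai.yaml"))     # destination is now required
```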
psychedelicious
77b86e9ad5 fix(install): remove broken v2.3 -> v3 migration logic from configurator 2024-03-19 09:24:28 +11:00
psychedelicious
a6181b5759 fix(install): update configurator to use new config system 2024-03-19 09:24:28 +11:00
psychedelicious
b4b0af7c60 fix(install): do not use deprecated pydantic methods 2024-03-19 09:24:28 +11:00