brandonrising
51ca59c088
Update probe to always use cpu for loading models
2024-04-04 07:34:43 +11:00
psychedelicious
85f53f94f8
feat(mm): include needed vs free in OOM
Gives us a bit more visibility into these errors, which seem to be popping up more frequently with the new MM.
2024-04-04 06:26:15 +11:00
blessedcoolant
be574cb764
fix: incorrect suffix check in ip adapter checkpoint file
2024-04-03 22:38:28 +05:30
blessedcoolant
5f01de1993
chore: ruff and lint fixes
2024-04-03 20:41:51 +05:30
blessedcoolant
e574815413
chore: clean up merge conflicts
2024-04-03 20:28:00 +05:30
blessedcoolant
fb293dcd84
Merge branch 'checkpoint-ip-adapter' of https://github.com/blessedcoolant/InvokeAI into checkpoint-ip-adapter
2024-04-03 20:23:07 +05:30
blessedcoolant
414851f2f0
fix: raise and present the runtime error from the exception
2024-04-03 20:21:50 +05:30
blessedcoolant
2dcbb7223b
fix: use Path for ip_adapter_ckpt_path instead of str
2024-04-03 20:21:03 +05:30
blessedcoolant
14a9f74b17
cleanup: use load_file of safetensors directly for loading ip adapters
2024-04-03 12:40:13 +05:30
blessedcoolant
1372ef15b3
fix: Fail when unexpected keys are found in IP Adapter models
2024-04-03 12:40:11 +05:30
blessedcoolant
be1212de9a
fix: Raise a better error when incorrect CLIP Vision model is used
2024-04-03 12:40:10 +05:30
blessedcoolant
a14ce0edab
chore: rename IPAdapterDiffusersConfig to IPAdapterInvokeAIConfig
2024-04-03 12:40:10 +05:30
blessedcoolant
4a0dfc3b2d
ui: improve the clip vision model picker layout
2024-04-03 12:40:08 +05:30
blessedcoolant
936b99bd3c
chore: improve types in ip_adapter backend file
2024-04-03 12:40:02 +05:30
blessedcoolant
79f7b61dfe
fix: cleanup across various ip adapter files
2024-04-03 12:39:52 +05:30
blessedcoolant
b1c8266e22
feat: add base model recognition for ip adapter safetensor files
2024-04-03 12:39:52 +05:30
blessedcoolant
67afb1763e
wip: Initial implementation of safetensor support for IP Adapter
2024-04-03 12:39:52 +05:30
Lincoln Stein
9adb15f86c
working but filled with debug statements
2024-04-01 18:44:24 -04:00
Lincoln Stein
3d69372785
implement session-level reservation of gpus
2024-04-01 16:01:43 -04:00
Lincoln Stein
eca29c41d0
added notes
2024-04-01 13:30:02 -04:00
Lincoln Stein
9df0980c46
parallel processing working on single-GPU, not tested on multi
2024-04-01 00:07:47 -04:00
psychedelicious
59b4a23479
feat(mm): use same pattern for vae converter as others
Add `dump_path` arg to the converter function & save the model to disk inside the conversion function. This is the same pattern as in the other conversion functions.
2024-04-01 12:34:49 +11:00
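A minimal sketch of the pattern this commit describes, with made-up function and file names (not the actual InvokeAI converter): the conversion function receives a `dump_path` argument and persists the converted model itself, rather than returning data for the caller to save.

```python
from pathlib import Path

def convert_checkpoint(state_dict: dict, dump_path: Path) -> dict:
    """Convert a checkpoint and save the result inside the converter (sketch)."""
    # Stand-in for the real conversion logic.
    converted = {f"converted.{k}": v for k, v in state_dict.items()}
    # Saving happens here, inside the conversion function, as in the
    # other converters this commit aligns with.
    dump_path.mkdir(parents=True, exist_ok=True)
    (dump_path / "keys.txt").write_text("\n".join(sorted(converted)))
    return converted
```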
psychedelicious
13f410478a
fix(mm): typing issues in vae loader
2024-04-01 12:34:49 +11:00
psychedelicious
25ff0bf80f
fix(mm): return converted vae model instead of path
This was missed in #6072.
2024-04-01 12:34:49 +11:00
blessedcoolant
23390f1516
cleanup: use load_file of safetensors directly for loading ip adapters
2024-04-01 06:37:38 +05:30
Lincoln Stein
83356ec74c
fix merge conflicts
2024-03-31 17:04:57 -04:00
Lincoln Stein
9336a076de
add locking around thread-critical sections
2024-03-31 16:58:56 -04:00
Lincoln Stein
eaa2c68693
remove vram_cache and don't move VRAM models back into CPU
2024-03-31 16:37:13 -04:00
Lincoln Stein
3d6d89feb4
[mm] Do not write diffuser model to disk when convert_cache set to zero (#6072)
* pass model config to _load_model
* make conversion work again
* do not write diffusers to disk when convert_cache set to 0
* adding the same model to cache twice is a no-op, not an assertion error
* fix issues identified by psychedelicious during pr review
* following conversion, avoid redundant read of cached submodels
* fix error introduced while merging
---------
Co-authored-by: Lincoln Stein <lstein@gmail.com>
2024-03-29 16:11:08 -04:00
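A hedged sketch (assumed names, not the actual loader) of the behavior in the bullets above: convert the model in memory, and only write the diffusers files to disk when the convert cache is enabled (size greater than zero).

```python
def load_converted_model(checkpoint: dict, convert_cache_gb: float, save_to_disk) -> dict:
    """Convert in memory; persist only when the convert cache is enabled (sketch)."""
    model = {k: v for k, v in checkpoint.items()}  # stand-in for conversion
    if convert_cache_gb > 0:
        save_to_disk(model)  # cache the converted model for later runs
    return model
```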
psychedelicious
6d261a5a13
fix(mm): handle relative conversion config paths
I have tested main, controlnet and vae checkpoint conversions.
2024-03-29 10:56:06 -04:00
blessedcoolant
cd52e99bb9
Merge branch 'main' into checkpoint-ip-adapter
2024-03-29 12:39:53 +05:30
blessedcoolant
6e4c2d3685
fix: Fail when unexpected keys are found in IP Adapter models
2024-03-29 12:34:56 +05:30
blessedcoolant
cd078b1865
fix: Raise a better error when incorrect CLIP Vision model is used
2024-03-29 11:58:10 +05:30
blessedcoolant
0d8b535131
chore: rename IPAdapterDiffusersConfig to IPAdapterInvokeAIConfig
2024-03-29 11:50:18 +05:30
brandonrising
3681e34d5a
Use defaults for db_dir and outdir since config no longer writes defaults to invokeai.yaml
2024-03-28 22:39:48 -04:00
psychedelicious
2526ef52c5
fix(nodes): workaround seamless multi gpu error #6010
The seamless logic errors when a second GPU is selected. I don't understand why, but a workaround is to skip the model patching when there are no seamless axes specified.
This is also just good practice regardless: don't patch the model unless we need to. Probably a negligible perf impact.
Closes #6010
2024-03-29 08:56:38 +11:00
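An illustrative sketch of the workaround (function names are hypothetical): only enter the seamless patching context when axes are actually requested; with no axes, the model is yielded unmodified and the patching code path is never touched.

```python
from contextlib import contextmanager, nullcontext

@contextmanager
def set_seamless(model, axes):
    """Stand-in for the real patcher, which rewrites conv layers."""
    model.seamless_axes = axes
    try:
        yield model
    finally:
        model.seamless_axes = []

def seamless_context(model, axes):
    # Skip patching entirely when no seamless axes are specified.
    return set_seamless(model, axes) if axes else nullcontext(model)
```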
psychedelicious
2f6cce48af
docs(mm): update ModelSearch
2024-03-28 12:35:41 +11:00
blessedcoolant
1a93f56d06
ui: improve the clip vision model picker layout
2024-03-27 22:11:07 +05:30
blessedcoolant
688a0f30bb
chore: improve types in ip_adapter backend file
2024-03-27 22:08:23 +05:30
blessedcoolant
4ed2bf53ca
fix: cleanup across various ip adapter files
2024-03-27 22:08:14 +05:30
blessedcoolant
60bf0caca3
feat: add base model recognition for ip adapter safetensor files
2024-03-27 22:08:14 +05:30
blessedcoolant
b013d0e064
wip: Initial implementation of safetensor support for IP Adapter
2024-03-27 22:08:14 +05:30
psychedelicious
a397fdbd25
chore: ruff
2024-03-27 08:16:27 -04:00
psychedelicious
a291a42abc
feat: display torch device on startup
This functionality disappeared at some point.
2024-03-27 08:16:27 -04:00
psychedelicious
21758e7b49
fix(mm): move depth variant config to sd2
Looks like a copy/paste got mixed up.
2024-03-27 07:48:54 -04:00
psychedelicious
eb33303e79
fix(mm): handle depth and inpainting models when converting to diffusers
"Normal" models have 4 in-channels, while "Depth" models have 5 and "Inpaint" models have 9.
We need to explicitly tell diffusers the channel count when converting models.
Closes #6058
2024-03-27 07:48:54 -04:00
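The channel counts from the commit message, as the kind of explicit lookup a conversion step might use (variant names are illustrative): normal UNets take 4 latent channels, depth models add a depth-map channel (5), and inpainting models add a mask channel plus 4 masked-image latent channels (9).

```python
UNET_IN_CHANNELS = {"normal": 4, "depth": 5, "inpaint": 9}

def unet_in_channels(variant: str) -> int:
    """Return the UNet in-channel count to pass to the converter (sketch)."""
    if variant not in UNET_IN_CHANNELS:
        raise ValueError(f"unknown variant: {variant}")
    return UNET_IN_CHANNELS[variant]
```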
psychedelicious
243de683d7
fix(mm): handle integer state dict keys in probe
It's possible for a model's state dict to have integer keys, though we do not actually support such models.
As part of probing, we call `key.startswith(...)` on the state dict keys. This raises an `AttributeError` for integer keys.
This logic is in `invokeai/backend/model_manager/probe.py:get_model_type_from_checkpoint`
To fix this, we can cast the keys to strings first. The models w/ integer keys will still fail to be probed, but we'll get an `InvalidModelConfigException` instead of an `AttributeError`.
Closes #6044
2024-03-27 09:30:25 +11:00
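A minimal sketch of the fix described above (simplified names, not the actual `probe.py` code): casting keys to `str` before calling `startswith` means integer keys no longer raise `AttributeError`; unsupported models instead fail with the intended exception.

```python
class InvalidModelConfigException(Exception):
    pass

def model_type_from_state_dict(state_dict: dict) -> str:
    # Cast every key to str so integer keys survive the startswith checks.
    keys = {str(k) for k in state_dict.keys()}
    if any(k.startswith("model.diffusion_model.") for k in keys):
        return "main"
    # Unsupported models fall through to a clean, catchable error.
    raise InvalidModelConfigException("Unable to determine model type")
```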
psychedelicious
2ec03ae95c
fix(mm): default settings pydantic error
Add `extra="forbid"` to the default settings models.
Closes #6035.
Pydantic has some quirks related to unions. This affected how the union of default settings was evaluated. See https://github.com/pydantic/pydantic/issues/9095 for a detailed description of the behaviour that this change addresses.
2024-03-25 07:40:52 -04:00
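Illustrative models (not the actual InvokeAI classes) showing the effect of the change: with `extra="forbid"`, a payload carrying fields unknown to one union member fails validation for that member instead of loosely matching it, so pydantic selects the correct default-settings model.

```python
from typing import Optional
from pydantic import BaseModel, ConfigDict

class MainModelDefaultSettings(BaseModel):
    model_config = ConfigDict(extra="forbid")  # unknown fields now fail validation
    vae: Optional[str] = None

class ControlAdapterDefaultSettings(BaseModel):
    model_config = ConfigDict(extra="forbid")
    preprocessor: Optional[str] = None
```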
skunkworxdark
37fd57d4d9
Update probe.py
Minor capitalization typo: `ModelType.Lora` should be `ModelType.LoRA`.
2024-03-22 09:09:56 -07:00
psychedelicious
e7a096dec1
fix(mm): remove proteus model
This model is SDXL and relies on CLIP Skip. We don't support that yet.
2024-03-22 02:22:03 -07:00