Commit Graph

76 Commits

Author SHA1 Message Date
maryhipp
5f567f41f4 add case for clip embed models in probe 2024-08-26 20:17:50 -04:00
Brandon Rising
72398350b4 More flux loader cleanup 2024-08-26 20:17:50 -04:00
Brandon Rising
57168d719b Fix styling/lint 2024-08-26 20:17:50 -04:00
Brandon Rising
a63f842a13 Select dev/schnell based on state dict, use the correct max seq len and shift for dev/schnell in inference, and separate flux VAE params into a separate config 2024-08-26 20:17:50 -04:00
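A hedged sketch of the selection logic this commit describes: FLUX dev checkpoints carry a guidance-embedder module that schnell lacks, and the two variants use different T5 max sequence lengths. The key prefix and helper name below are illustrative, not the repository's exact code.

```python
# Illustrative sketch only: pick the FLUX variant from checkpoint keys.
from typing import Any, Dict


def flux_variant_from_state_dict(state_dict: Dict[str, Any]) -> str:
    # dev is guidance-distilled and ships a guidance embedder; schnell does not.
    if any(str(key).startswith("guidance_in.") for key in state_dict):
        return "dev"
    return "schnell"


# Commonly used T5 max sequence lengths for the two variants.
MAX_SEQ_LEN = {"dev": 512, "schnell": 256}
```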
Brandon Rising
723f3ab0a9 Add nf4 bnb quantized format 2024-08-26 20:17:50 -04:00
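For reference, NF4 is bitsandbytes' 4-bit NormalFloat quantization format; in the transformers ecosystem it is typically requested as shown below. This is illustrative only and not how the flux loader wires bitsandbytes up.

```python
# Illustrative only: requesting NF4 (4-bit NormalFloat) quantization via
# transformers' BitsAndBytesConfig; not InvokeAI's loader code.
import torch
from transformers import BitsAndBytesConfig

nf4_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```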
Brandon Rising
1bd90e0fd4 Run ruff, setup initial text to image node 2024-08-26 20:17:50 -04:00
Brandon Rising
436f18ff55 Add backend functions and classes for the Flux implementation, update the way flux encoders/tokenizers are loaded for prompt encoding, and update the way the flux VAE is loaded 2024-08-26 20:17:50 -04:00
Brandon Rising
5f59a828f9 Setup flux model loading in the UI 2024-08-26 20:17:50 -04:00
Ryan Dick
650902dc29 Fix broken unit test caused by non-existent model path. 2024-07-10 13:59:17 -04:00
Ryan Dick
7b5d4935b4 Merge branch 'main' into ryan/spandrel-upscale 2024-07-09 13:47:11 -04:00
Ryan Dick
0ce6ec634d Do not assign the result of SpandrelImageToImageModel.load_from_file(...) during probe to ensure that the model is immediately gc'd. 2024-07-05 14:05:12 -04:00
Ryan Dick
1d449097cc Apply ruff rule to disallow all relative imports. 2024-07-04 09:35:37 -04:00
Ryan Dick
1ab20f43c8 Tidy spandrel model probe logic, and document the reasons behind the current implementation. 2024-07-03 16:28:21 -04:00
Ryan Dick
29c8ddfb88 WIP - A bunch of boilerplate to support Spandrel Image-to-Image models throughout the model manager and the frontend. 2024-07-03 16:28:21 -04:00
Ryan Dick
59ce9cf41c WIP - Begin to integrate SpandrelImageToImageModel type into the model manager. 2024-07-03 16:28:21 -04:00
Ryan Dick
e6abea7bc5 (minor) Remove redundant else clause on a for-loop with no break statement. 2024-07-03 16:28:21 -04:00
Ryan Dick
c335f92345 (minor) simplify startswith(...) syntax. 2024-07-03 16:28:21 -04:00
Lincoln Stein
3e0fb45dd7 Load single-file checkpoints directly without conversion (#6510)
* use model_class.load_singlefile() instead of converting; works, but performance is poor (an illustrative loading sketch follows this entry)

* adjust the convert api - not right just yet

* working, needs sql migrator update

* rename migration_11 before conflict merge with main

* Update invokeai/backend/model_manager/load/model_loaders/stable_diffusion.py

Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>

* Update invokeai/backend/model_manager/load/model_loaders/stable_diffusion.py

Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>

* implement lightweight version-by-version config migration

* simplified config schema migration code

* associate sdxl config with sdxl VAEs

* remove use of original_config_file in load_single_file()

---------

Co-authored-by: Lincoln Stein <lstein@gmail.com>
Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>
2024-06-27 17:31:28 -04:00
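For context, upstream diffusers exposes this kind of single-file loading via `from_single_file()`. A minimal, illustrative use is shown below; the path is a placeholder and this is not InvokeAI's `model_class.load_singlefile()` wrapper.

```python
# Illustrative use of diffusers' single-file loading; placeholder path,
# not InvokeAI's loader code.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_single_file(
    "/path/to/v1-5-pruned-emaonly.safetensors",  # placeholder checkpoint path
    torch_dtype=torch.float16,
)
pipe.to("cuda")  # assumes a CUDA device is available
```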
Lincoln Stein
b03073d888 [MM] Add support for probing and loading SDXL VAE checkpoint files (#6524)
* add support for probing and loading SDXL VAE checkpoint files

* broaden regexp probe for SDXL VAEs

---------

Co-authored-by: Lincoln Stein <lstein@gmail.com>
2024-06-20 02:57:27 +00:00
Ryan Dick
8e47e005a7 Tidy SilenceWarnings context manager:
- Fix type errors
- Enable SilenceWarnings to be used as both a context manager and a decorator
- Remove duplicate implementation
- Check the initial verbosity on __enter__() rather than __init__()
2024-06-18 15:06:22 -04:00
psychedelicious
241a1fdb57 feat(mm): support sdxl ckpt inpainting models
There are only a couple of SDXL inpainting models, and my tests indicate they are not as good as SD1.5 inpainting, but at least we support them now.

- Add the config file. This matches what is used in A1111. The only difference from the non-inpainting SDXL config is the number of in-channels.
- Update the legacy config maps to use this config file.
2024-04-28 12:57:27 +10:00
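A hedged sketch of the probing idea behind the in-channels difference: inpainting UNets take 9 input channels (4 latent + 4 masked-image latent + 1 mask) instead of 4, and that shows up in the first conv layer of a legacy-format checkpoint. The key name and helper are illustrative, not the probe's exact code.

```python
# Illustrative sketch: detect an inpainting checkpoint from conv_in's shape.
from typing import Dict

import torch

CONV_IN_KEY = "model.diffusion_model.input_blocks.0.0.weight"


def looks_like_inpainting_checkpoint(state_dict: Dict[str, torch.Tensor]) -> bool:
    weight = state_dict.get(CONV_IN_KEY)
    # conv_in weight shape is (out_channels, in_channels, 3, 3).
    return weight is not None and weight.shape[1] == 9
```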
brandonrising
51ca59c088 Update probe to always use cpu for loading models 2024-04-04 07:34:43 +11:00
blessedcoolant
5f01de1993 chore: ruff and lint fixes 2024-04-03 20:41:51 +05:30
blessedcoolant
4a0dfc3b2d ui: improve the clip vision model picker layout 2024-04-03 12:40:08 +05:30
blessedcoolant
79f7b61dfe fix: cleanup across various ip adapter files 2024-04-03 12:39:52 +05:30
blessedcoolant
b1c8266e22 feat: add base model recognition for ip adapter safetensor files 2024-04-03 12:39:52 +05:30
blessedcoolant
67afb1763e wip: Initial implementation of safetensor support for IP Adapter 2024-04-03 12:39:52 +05:30
psychedelicious
21758e7b49 fix(mm): move depth variant config to sd2
Looks like a copy/paste got mixed up.
2024-03-27 07:48:54 -04:00
psychedelicious
243de683d7 fix(mm): handle integer state dict keys in probe
It's possible for a model's state dict to have integer keys, though we do not actually support such models.

As part of probing, we call `key.startswith(...)` on the state dict keys. This raises an `AttributeError` for integer keys.

This logic is in `invokeai/backend/model_manager/probe.py:get_model_type_from_checkpoint`

To fix this, we can cast the keys to strings first. The models w/ integer keys will still fail to be probed, but we'll get an `InvalidModelConfigException` instead of an `AttributeError`.

Closes #6044
2024-03-27 09:30:25 +11:00
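The fix described above amounts to stringifying keys before the prefix check; a tiny illustrative helper (the name is hypothetical):

```python
# Illustrative helper (hypothetical name): stringify keys so startswith() never
# raises on integer keys; unsupported models then fail with a config error later.
def has_key_with_prefix(state_dict, prefix: str) -> bool:
    return any(str(key).startswith(prefix) for key in state_dict)
```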
skunkworxdark
37fd57d4d9 Update probe.py
Minor capitalization typo: `ModelType.Lora` should be `ModelType.LoRA`
2024-03-22 09:09:56 -07:00
psychedelicious
7726d312e1 feat(mm): default hashing algo to blake3_single
For SSDs, `blake3` is about 10x faster than `blake3_single` - 30 files/second vs 3 files/second.

For spinning HDDs, `blake3` is about 100x slower than `blake3_single` - 300 seconds/file vs 3 seconds/file.

For external drives, `blake3` is always worse, but the difference is highly variable. For external spinning drives, it's probably way worse than internal.

The least offensive algorithm is `blake3_single`, and it's still _much_ faster than any other algorithm.
2024-03-22 08:26:36 +11:00
Lincoln Stein
d4d0fea078 [feature] Add probe for SDXL controlnet models (#5382)
* add probe for SDXL controlnet models

* Update invokeai/backend/model_management/model_probe.py

Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>

* Update invokeai/backend/model_manager/probe.py

Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>

---------

Co-authored-by: Lincoln Stein <lstein@gmail.com>
Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>
2024-03-21 14:49:45 +00:00
psychedelicious
6c13fa13ea fix(mm): regression from legacy conf dir change 2024-03-20 15:05:25 +11:00
Brandon Rising
9d5b96c119 Pull format type setting out of model_type if statement 2024-03-19 01:16:11 -04:00
Brandon Rising
5daefccf77 Simplify logic for determining model type in probe 2024-03-19 01:16:11 -04:00
Brandon Rising
1f3c35ee90 Run Ruff 2024-03-19 01:16:11 -04:00
Brandon Rising
f78ed3a952 Add unit test 2024-03-19 01:16:11 -04:00
Brandon Rising
d38262a7ea Allow type field to be a string 2024-03-19 01:16:11 -04:00
Brandon Rising
5feb62d440 Allow users to specify model type and skip detection step of probe 2024-03-19 01:16:11 -04:00
Lincoln Stein
71a1740740 Remove core safetensors->diffusers conversion models
- No longer install core conversion models. Use the HuggingFace cache to load
  them if and when needed.

- Call directly into the diffusers library to perform conversions with only shallow
  wrappers around them to massage arguments, etc.

- At root configuration time, do not create all the possible model subdirectories,
  but let them be created and populated at model install time.

- Remove checks for missing core conversion files, since they are no
  longer installed.
2024-03-17 19:13:18 -04:00
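A hedged illustration of the "load from the HuggingFace cache on demand" point above: `from_pretrained()` downloads into the per-user HF cache on first use, so nothing needs to be pre-installed at configuration time. The repo id is an example, not necessarily the exact model the converter uses.

```python
# Illustrative only: pull a component into the HuggingFace cache on first use
# instead of installing a "core" conversion model at configure time.
from transformers import CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
```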
psychedelicious
3fd824306c feat(mm): probe for main model default settings
Currently, this is just the width and height, derived from the model base.
2024-03-14 16:03:37 +11:00
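A rough sketch of what "default settings derived from the model base" can look like; the sizes reflect common training resolutions and are illustrative, not the app's exact defaults.

```python
# Illustrative mapping only: common training resolutions per base.
DEFAULT_SIZE_BY_BASE = {
    "sd-1": (512, 512),
    "sdxl": (1024, 1024),
}


def default_settings_for(base: str) -> dict:
    width, height = DEFAULT_SIZE_BY_BASE.get(base, (512, 512))
    return {"width": width, "height": height}
```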
psychedelicious
eb6e6548ed feat(mm): faster hashing for spinning disk HDDs
BLAKE3 has poor performance on spinning disks when parallelized. See https://github.com/BLAKE3-team/BLAKE3/issues/31

- Replace `skip_model_hash` setting with `hashing_algorithm`. Any algorithm we support is accepted.
- Add `random` algorithm: hashes a UUID with BLAKE3 to create a random "hash". Equivalent to the previous skip functionality.
- Add `blake3_single` algorithm: hashes on a single thread using BLAKE3, fixing the aforementioned performance issue (both are sketched after this entry)
- Update model probe to accept the algorithm to hash with as an optional arg, defaulting to `blake3`
- Update all calls of the probe to use the app's configured hashing algorithm
- Update an external script that probes models
- Update tests
- Move ModelHash into its own module to avoid circular import issues
2024-03-14 15:54:42 +11:00
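A hedged sketch of the two new algorithms described above, using the `blake3` package; the function names are illustrative rather than the model manager's API.

```python
# Illustrative sketches of the "random" and "blake3_single" algorithms.
from pathlib import Path
from uuid import uuid4

from blake3 import blake3


def random_hash() -> str:
    # "random" algorithm: hash a fresh UUID, equivalent to skipping real hashing.
    return blake3(uuid4().bytes).hexdigest()


def blake3_single_file_hash(path: Path) -> str:
    # Single-threaded BLAKE3 over the file contents, friendlier to spinning disks.
    hasher = blake3()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(2**20), b""):
            hasher.update(chunk)
    return hasher.hexdigest()
```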
Brandon Rising
e52274ecac Experiment with using absolute paths within model management 2024-03-08 15:36:14 -05:00
psychedelicious
a10dccdd43 fix(mm): fix bug in control adapter probe default settings
Wasn't checking for matches correctly.
2024-03-08 12:44:58 -05:00
psychedelicious
13bb3c5e15 feat(mm): add control adapter default settings while probing 2024-03-08 12:44:58 -05:00
psychedelicious
e866d90ab2 tidy(mm): remove unused method on probe 2024-03-05 23:50:19 +11:00
psychedelicious
7c9128b253 tidy(mm): use canonical capitalization for all model-related enums, classes
For example, "Lora" -> "LoRA", "Vae" -> "VAE".
2024-03-05 23:50:19 +11:00
psychedelicious
44c40d7d1a refactor(mm): remove unused metadata logic, fix tests
- Metadata is merged with the config. We can simplify the MM substantially and remove the handling for metadata.
- Per discussion, we don't have an ETA for frontend implementation of tags, and with the realization that the tags from CivitAI are largely useless, there's no reason to keep tags in the MM right now. When we are ready to implement tags on the frontend, we can refer back to the implementation here and use it if it supports the design.
- Fix all tests.
2024-03-05 23:50:19 +11:00
psychedelicious
7cb0da1f66 refactor(mm): wip schema changes 2024-03-05 23:50:19 +11:00
psychedelicious
9378e47a06 feat(mm): add source_type to model configs 2024-03-05 23:50:19 +11:00