Commit Graph

1171 Commits

Author SHA1 Message Date
Ryan Dick
fb3d0c4b12 Fix bug in model cache reference count checking. 2023-11-03 13:50:40 -07:00
Ryan Dick
8488ab0134 Reduce frequency that we call gc.collect() in the model cache. 2023-11-03 13:50:40 -07:00
Ryan Dick
875231ed3d Add reminder to clean up our model cache clearing logic. 2023-11-03 13:50:40 -07:00
Ryan Dick
43b300498f Remove explicit gc.collect() after transferring models from device to CPU. I'm not sure why this was there in the first place, but it was taking a significant amount of time (up to ~1sec in my tests). 2023-11-03 13:50:40 -07:00
Ryan Dick
e391f3c9a8 Skip torch.nn.Embedding.reset_parameters(...) when loading a text encoder model. 2023-11-02 19:41:33 -07:00
Ryan Dick
6e7a3f0546 (minor) Fix static checks and typo. 2023-11-02 19:20:37 -07:00
Ryan Dick
4a683cc669 Add an app config parameter to control the ModelCache logging behavior. 2023-11-02 19:20:37 -07:00
Ryan Dick
3781e56e57 Add log_memory_usage param to ModelCache. 2023-11-02 19:20:37 -07:00
Ryan Dick
8ff49109a8 Update get_pretty_snapshot_diff(...) to handle None-snapshots. 2023-11-02 19:20:37 -07:00
Ryan Dick
bac2a757e8 Replace deepcopy with a pickle roundtrip in apply_ti(...) to improve speed. 2023-11-02 19:05:24 -07:00
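(A hedged illustration of the technique named in the commit above: a pickle round-trip can be faster than `copy.deepcopy` for large, fully picklable objects. The payload below is made up; this is not the actual `apply_ti(...)` code.)

```py
import copy
import pickle

# Example payload standing in for a large, picklable chunk of model state.
state = {"token_embeddings": [float(i) for i in range(100_000)]}

# Conventional deep copy.
copied_slow = copy.deepcopy(state)

# Pickle round-trip: often faster than deepcopy's generic per-object
# recursion, but it only works for objects that are picklable.
copied_fast = pickle.loads(pickle.dumps(state))
```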
Ryan Dick
fa7f6a6a10 Further tidying of LoRA patching. Revert some changes that didn't end up being important under the constraint that calculations are done on the same device as the model. 2023-11-02 10:03:17 -07:00
Ryan Dick
61b17c475a Add TODO note about improving _resolve_lora_key(...). 2023-11-02 10:03:17 -07:00
Ryan Dick
379d68f595 Patch LoRA on device when model is already on device. 2023-11-02 10:03:17 -07:00
Ryan Dick
545c811bf1 Remove device and dtype members from LoRAModelRaw, they can too easily get out-of-sync with the underlying layer states. 2023-11-02 10:03:17 -07:00
Ryan Dick
2ba5b44ec4 Remove unused _lora_forward_hook(...). 2023-11-02 10:03:17 -07:00
Ryan Dick
7f4ce518b7 auto-format lora.py 2023-11-02 10:03:17 -07:00
Lincoln Stein
5c9046580f
Merge branch 'main' into bugfix/textual-inversion 2023-10-24 14:17:43 -04:00
Lincoln Stein
6cbc69f3b7 support conversion of controlnets from safetensors to diffusers 2023-10-23 22:06:10 -04:00
Lincoln Stein
3546c41f4a close #4975 2023-10-23 18:48:14 -04:00
d8ahazard
fdf02c33d0 Catch generic model errors
Prevent the app from dying on invalid models.
2023-10-19 07:28:33 +11:00
Ryan Dick
a078efc0f2 Merge branch 'main' into ryan/multi-image-ip 2023-10-18 08:59:12 -04:00
psychedelicious
252c9a5f5a fix(backend): fix nsfw/watermarker util types 2023-10-18 09:08:13 +11:00
Kent Keirsey
b7555ddae8 Revert "Revert "chore: lint""
This reverts commit 38e7eb8878.
2023-10-17 11:59:19 -04:00
Kent Keirsey
8afc47018b Revert "Revert "Cleaning up (removing diagnostic prints)""
This reverts commit 6e697b7b6f.
2023-10-17 11:59:19 -04:00
Kent Keirsey
a97ec88e06 Revert "Revert "Changes to _apply_standard_conditioning_sequentially() and _apply_cross_attention_controlled_conditioning() to reflect changes to T2I-Adapter implementation to allow usage of T2I-Adapter and ControlNet at the same time.""
This reverts commit c04fb451ee.
2023-10-17 11:59:19 -04:00
Kent Keirsey
282d36b640 Revert "Revert "Fixing some var and arg names.""
This reverts commit 58a0709c1e.
2023-10-17 11:59:19 -04:00
psychedelicious
58a0709c1e Revert "Fixing some var and arg names."
This reverts commit f11ba81a8d.
2023-10-17 11:59:11 -04:00
psychedelicious
c04fb451ee Revert "Changes to _apply_standard_conditioning_sequentially() and _apply_cross_attention_controlled_conditioning() to reflect changes to T2I-Adapter implementation to allow usage of T2I-Adapter and ControlNet at the same time."
This reverts commit 378689a519.
2023-10-17 11:59:11 -04:00
psychedelicious
6e697b7b6f Revert "Cleaning up (removing diagnostic prints)"
This reverts commit 06f8a3276d.
2023-10-17 11:59:11 -04:00
psychedelicious
38e7eb8878 Revert "chore: lint"
This reverts commit fff29d663d.
2023-10-17 11:59:11 -04:00
psychedelicious
fff29d663d chore: lint 2023-10-17 19:42:06 +11:00
user1
06f8a3276d Cleaning up (removing diagnostic prints) 2023-10-17 19:42:06 +11:00
user1
378689a519 Changes to _apply_standard_conditioning_sequentially() and _apply_cross_attention_controlled_conditioning() to reflect changes to T2I-Adapter implementation to allow usage of T2I-Adapter and ControlNet at the same time.
Also, the PREVIOUS commit (@8d3885d, which was already pushed to the github repo) was given the wrong commit message, but it's too late to fix that without a force push or other mucking about that I'm reluctant to do. That commit is actually the one that has all the changes to diffusers_pipeline.py to use the additional arg down_intrablock_additional_residuals (introduced in diffusers PR https://github.com/huggingface/diffusers/pull/5362) to disentangle T2I-Adapter inputs from ControlNet inputs to the main UNet.
2023-10-17 19:42:06 +11:00
user1
f11ba81a8d Fixing some var and arg names. 2023-10-17 19:42:06 +11:00
Millun Atluri
001bba1719
Merge branch 'main' into feat/nodes/freeu 2023-10-17 15:58:00 +11:00
psychedelicious
c238a7f18b feat(api): chore: pydantic & fastapi upgrade
Upgrade pydantic and fastapi to latest.

- pydantic~=2.4.2
- fastapi~=0.103.2
- fastapi-events~=0.9.1

**Big Changes**

There are a number of logic changes needed to support pydantic v2. Most changes are very simple, like using the new methods to serialize and deserialize models, but there are a few more complex changes.

**Invocations**

The biggest change relates to invocation creation, instantiation and validation.

Because pydantic v2 moves all validation logic into the rust pydantic-core, we may no longer directly stick our fingers into the validation pie.

Previously, we (ab)used models and fields to allow invocation fields to be optional at instantiation, but required when `invoke()` is called. We directly manipulated the fields and invocation models when calling `invoke()`.

With pydantic v2, this is much more involved. Changes to the python wrapper do not propagate down to the rust validation logic - you have to rebuild the model. This causes problems with concurrent access to the invocation classes and is not a free operation.

This logic has been totally refactored and we do not need to change the model any more. The details are in `baseinvocation.py`, in the `InputField` function and `BaseInvocation.invoke_internal()` method.

In the end, this implementation is cleaner.

**Invocation Fields**

In pydantic v2, you can no longer directly add or remove fields from a model.

Previously, we did this to add the `type` field to invocations.

**Invocation Decorators**

With pydantic v2, we instead use the imperative `create_model()` API to create a new model with the additional field. This is done in `baseinvocation.py` in the `invocation()` wrapper.

A similar technique is used for `invocation_output()`.
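For illustration, a minimal sketch of this `create_model()` pattern (the decorator below is a simplified stand-in, not the actual `invocation()` implementation in `baseinvocation.py`):

```py
from typing import Literal

from pydantic import BaseModel, create_model


class BaseInvocation(BaseModel):
    id: str


def invocation(invocation_type: str):
    """Hypothetical decorator: rebuild the class with an added `type` field,
    since fields can no longer be injected into an existing pydantic v2 model."""

    def wrapper(cls):
        return create_model(
            cls.__name__,
            __base__=cls,
            # Literal[...] with a runtime string works here and pins the value.
            type=(Literal[invocation_type], invocation_type),
        )

    return wrapper


@invocation("add")
class AddInvocation(BaseInvocation):
    a: int = 0
    b: int = 0
```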

**Minor Changes**

There are a number of minor changes around the pydantic v2 models API.

**Protected `model_` Namespace**

All models' pydantic-provided methods and attributes are prefixed with `model_` and this is considered a protected namespace. This causes some conflict, because "model" means something to us, and we have a ton of pydantic models with attributes starting with "model_".

Fortunately, there are no direct conflicts. However, in any pydantic model where we define an attribute or method that starts with "model_", we must set the protected namespaces to an empty tuple.

```py
class IPAdapterModelField(BaseModel):
    model_name: str = Field(description="Name of the IP-Adapter model")
    base_model: BaseModelType = Field(description="Base model")

    model_config = ConfigDict(protected_namespaces=())
```

**Model Serialization**

Pydantic models no longer have `Model.dict()` or `Model.json()`.

Instead, we use `Model.model_dump()` or `Model.model_dump_json()`.
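For example (the `ImageField` model here is just an illustration):

```py
from pydantic import BaseModel, Field


class ImageField(BaseModel):
    image_name: str = Field(description="Name of the image")


image = ImageField(image_name="cat.png")
as_dict = image.model_dump()       # replaces .dict()
as_json = image.model_dump_json()  # replaces .json()
```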

**Model Deserialization**

Pydantic models no longer have `Model.parse_obj()` or `Model.parse_raw()`, and there are no `parse_raw_as()` or `parse_obj_as()` functions.

Instead, you need to create a `TypeAdapter` object to parse python objects or JSON into a model.

```py
adapter_graph = TypeAdapter(Graph)
deserialized_graph_from_json = adapter_graph.validate_json(graph_json)
deserialized_graph_from_dict = adapter_graph.validate_python(graph_dict)
```

**Field Customisation**

Pydantic `Field`s no longer accept arbitrary args.

Now, you must put all additional arbitrary args in a `json_schema_extra` arg on the field.
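For example, an extra UI hint that previously could be passed straight to `Field` now goes through `json_schema_extra` (the `ui_hidden` key is just an illustration):

```py
from pydantic import BaseModel, Field


class WidthField(BaseModel):
    # pydantic v1 allowed: Field(512, description="Image width", ui_hidden=True)
    width: int = Field(512, description="Image width", json_schema_extra={"ui_hidden": True})
```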

**Schema Customisation**

FastAPI and pydantic schema generation now follows the OpenAPI version 3.1 spec.

This necessitates two changes:
- Our schema customization logic has been revised
- Schema parsing to build node templates has been revised

The specifics aren't important, but this does present additional surface area for bugs.

**Performance Improvements**

Pydantic v2 is a full rewrite with a rust backend. This offers a substantial performance improvement (pydantic claims 5x to 50x depending on the task). We'll notice this the most during serialization and deserialization of sessions/graphs, which happens very very often - a couple times per node.

I haven't done any benchmarks, but anecdotally, graph execution is much faster. Also, very large graphs - like those with massive iterators - are much, much faster.
2023-10-17 14:59:25 +11:00
Ryan Dick
3079c75a60 (minor) Make it more clear that shape annotations are just comments and not commented lines of code. 2023-10-16 08:35:32 -04:00
psychedelicious
53b6f0dc73
Merge branch 'main' into ryan/multi-image-ip 2023-10-16 17:16:10 +11:00
blessedcoolant
2c6772f92f
Merge branch 'main' into bugfix/trim-whitespace-from-urls 2023-10-15 17:41:41 +05:30
blessedcoolant
c9d95e5758
Merge branch 'main' into bugfix/ignore-dot-directories-on-model-scan 2023-10-15 17:23:02 +05:30
psychedelicious
48626c40fd fix(backend): handle systems with glibc < 2.33
`mallinfo2` is not available on `glibc` < 2.33.

On these systems, we successfully load the library but get an `AttributeError` on attempting to access `mallinfo2`.

I'm not sure if the old `mallinfo` will work, and not sure how to install it safely to test, so for now we just handle the `AttributeError`.

This means the enhanced memory snapshot logic will be skipped for these systems, which isn't a big deal.
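A minimal sketch of the kind of guard this implies (the struct layout mirrors glibc's `struct mallinfo2` man page; this is a hedged sketch, not the project's actual `LibcUtil`/`Struct_mallinfo2` code):

```py
import ctypes


class Struct_mallinfo2(ctypes.Structure):
    """Mirror of glibc's `struct mallinfo2` (all fields are size_t)."""

    _fields_ = [
        ("arena", ctypes.c_size_t),
        ("ordblks", ctypes.c_size_t),
        ("smblks", ctypes.c_size_t),
        ("hblks", ctypes.c_size_t),
        ("hblkhd", ctypes.c_size_t),
        ("usmblks", ctypes.c_size_t),
        ("fsmblks", ctypes.c_size_t),
        ("uordblks", ctypes.c_size_t),
        ("fordblks", ctypes.c_size_t),
        ("keepcost", ctypes.c_size_t),
    ]


def try_mallinfo2():
    """Return malloc stats, or None when mallinfo2 is unavailable (glibc < 2.33)."""
    libc = ctypes.CDLL("libc.so.6")  # Linux-only; loading can also fail elsewhere
    try:
        fn = libc.mallinfo2  # attribute lookup raises AttributeError if the symbol is missing
    except AttributeError:
        return None  # skip the enhanced memory snapshot on these systems
    fn.restype = Struct_mallinfo2
    return fn()
```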
2023-10-15 07:56:55 +11:00
Ryan Dick
8464450a53 Add support for multi-image IP-Adapter. 2023-10-14 12:50:33 -04:00
Lincoln Stein
a1001b6d10
Merge branch 'main' into bugfix/ignore-dot-directories-on-model-scan 2023-10-14 10:37:55 -04:00
Lincoln Stein
15cabc4968 Possibly closes #4815 2023-10-12 23:37:05 -04:00
Lincoln Stein
29c3f49182 enable the ram cache slider in invokeai-configure 2023-10-12 23:04:16 -04:00
Lincoln Stein
21d5969942 strip leading and trailing quotes as well as whitespace 2023-10-12 22:35:02 -04:00
Lincoln Stein
52274087f3 close #4536 2023-10-12 21:24:07 -04:00
psychedelicious
9646157ad5 fix: fix test imports 2023-10-12 12:15:06 -04:00
Ryan Dick
40f9e49b5e Demote model cache logs from warning to debug based on the conversation here: https://discord.com/channels/1020123559063990373/1049495067846524939/1161647290189090816 2023-10-11 12:02:46 -04:00
psychedelicious
15b33ad501 feat(nodes): add freeu support
Add support for FreeU. See:
- https://huggingface.co/docs/diffusers/main/en/using-diffusers/freeu
- https://github.com/ChenyangSi/FreeU

Implementation:
- `ModelPatcher.apply_freeu()` handles enabling FreeU (which is very simple with diffusers); see the sketch after this list.
- `FreeUConfig` model added to hold the hyperparameters.
- `freeu_config` added as optional sub-field on `UNetField`.
- `FreeUInvocation` added, works like LoRA - chain it to add the FreeU config to the UNet
- No support for model-dependent presets, this will be a future workflow editor enhancement
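
A minimal sketch of the underlying diffusers call (assuming a recent diffusers release; the hyperparameter values are the FreeU authors' suggested SD 1.5 defaults, not InvokeAI's):

```py
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# FreeU: b1/b2 amplify backbone features, s1/s2 attenuate skip-connection features.
pipe.enable_freeu(s1=0.9, s2=0.2, b1=1.5, b2=1.6)
image = pipe("a cozy cabin in the woods").images[0]
pipe.disable_freeu()  # turn it back off
```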

Closes #4845
2023-10-11 13:49:28 +11:00
Ryan Dick
61242bf86a Fix bug in skip_torch_weight_init() where the original behavior of torch.nn.Conv*d modules wasn't being restored correctly. 2023-10-10 10:05:50 -04:00
Ryan Dick
58b56e9b1e Add a skip_torch_weight_init() context manager to improve model load times (from disk). 2023-10-09 14:12:56 -04:00
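A minimal sketch of how such a context manager might look (assumption: it works by temporarily no-op'ing `reset_parameters()` on the relevant module classes; this is not the project's exact implementation):

```py
import contextlib

import torch


@contextlib.contextmanager
def skip_torch_weight_init():
    """Temporarily make weight initialization a no-op.

    Useful when a model's weights will be overwritten by a load from disk,
    so the random init done in each module's __init__ is wasted work.
    """
    classes = (torch.nn.Linear, torch.nn.Conv1d, torch.nn.Conv2d, torch.nn.Conv3d, torch.nn.Embedding)
    saved = {cls: cls.reset_parameters for cls in classes}
    try:
        for cls in classes:
            cls.reset_parameters = lambda self, *args, **kwargs: None
        yield
    finally:
        # Restore each saved method; commit 61242bf86a above fixes a bug where
        # the original torch.nn.Conv*d behavior wasn't restored correctly.
        for cls, fn in saved.items():
            cls.reset_parameters = fn
```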
Lincoln Stein
fe0cf2c160 remove hardcoded subfolder name from model downloader 2023-10-08 17:45:39 -04:00
Ryan Dick
971ccfb081 Refactor multi-IP-Adapter to clean up the interface around changing scales. 2023-10-06 20:43:43 -04:00
Ryan Dick
43a3c3c7ea Fix typo in setting IP-Adapter scales. 2023-10-06 20:43:43 -04:00
Ryan Dick
4df1cdb34d Tidy _prepare_attention_processors(...) logic. 2023-10-06 20:43:43 -04:00
Ryan Dick
d8d0c9af09 Fix handling of scales with multiple IP-Adapters. 2023-10-06 20:43:43 -04:00
Ryan Dick
26b91a538a Fixes to get IP-Adapter tests working with new multi-IP-Adapter support. 2023-10-06 20:43:43 -04:00
Ryan Dick
7ca456d674 Update IP-Adapter model to enable running multiple IP-Adapters at once. (Not tested yet.) 2023-10-06 20:43:43 -04:00
Ryan Dick
78828b6b9c WIP - Accept a list of IPAdapterFields in DenoiseLatents. 2023-10-06 20:43:43 -04:00
Ryan Dick
4f97bd4418
Merge branch 'main' into ryan/model-tests 2023-10-06 19:47:28 -04:00
Ryan Dick
e0e001758a Remove @slow decorator in favor of @pytest.mark.slow. 2023-10-06 18:26:06 -04:00
Ryan Dick
096d195d6e
Merge branch 'main' into ryan/model-cache-logging-only 2023-10-06 09:52:45 -04:00
Ryan Dick
1c8b1fbc53 POC of a test that depends on models. 2023-10-05 15:35:58 -04:00
Ryan Dick
78377469db
Add support for T2I-Adapter in node workflows (#4612)
* Bump diffusers to 0.21.2.

* Add T2IAdapterInvocation boilerplate.

* Add T2I-Adapter model to model-management.

* (minor) Tidy prepare_control_image(...).

* Add logic to run the T2I-Adapter models at the start of the DenoiseLatentsInvocation.

* Add logic for applying T2I-Adapter weights and accumulating.

* Add T2IAdapter to MODEL_CLASSES map.

* yarn typegen

* Add model probes for T2I-Adapter models.

* Add all of the frontend boilerplate required to use T2I-Adapter in the nodes editor.

* Add T2IAdapterModel.convert_if_required(...).

* Fix errors in T2I-Adapter input image sizing logic.

* Fix bug with handling of multiple T2I-Adapters.

* black / flake8

* Fix typo

* yarn build

* Add num_channels param to prepare_control_image(...).

* Link to upstream diffusers bugfix PR that currently requires a workaround.

* feat: Add Color Map Preprocessor

Needed for the color T2I Adapter

* feat: Add Color Map Preprocessor to Linear UI

* Revert "feat: Add Color Map Preprocessor"

This reverts commit a1119a00bf.

* Revert "feat: Add Color Map Preprocessor to Linear UI"

This reverts commit bd8a9b82d8.

* Fix T2I-Adapter field rendering in workflow editor.

* yarn build, yarn typegen

---------

Co-authored-by: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2023-10-05 16:29:16 +11:00
Ryan Dick
fbe6452c45 Add support for IPAdapterPlusXL based on 6219530507. 2023-10-04 22:35:17 -04:00
Ryan Dick
2479a59e5e Re-enable garbage collection in model cache MemorySnapshots. 2023-10-03 15:18:47 -04:00
Ryan Dick
7d0ac2c36d (minor) clean up typos. 2023-10-03 15:00:03 -04:00
Ryan Dick
519b892f0c Add unit test for Struct_mallinfo2.__str__() 2023-10-03 14:25:34 -04:00
Ryan Dick
763dcacfd3 Add unit test for get_pretty_snapshot_diff(...). 2023-10-03 14:25:34 -04:00
Ryan Dick
22a84930f6 Disable garbage collection in ModelCache calls to MemorySnapshot in order to minimize snapshot overhead. 2023-10-03 14:25:34 -04:00
Ryan Dick
d64e17e043 Add README with info about glibc memory fragmentation caused by the model cache. 2023-10-03 14:25:34 -04:00
Ryan Dick
ba54277011 Catch a more specific exception in environments that do not have a libc shared library. 2023-10-03 14:25:34 -04:00
Ryan Dick
5915a4a51c Minor fixes. 2023-10-03 14:25:34 -04:00
Ryan Dick
4580ba0d87 Remove logic to update model cache size estimates dynamically. 2023-10-03 14:25:34 -04:00
Ryan Dick
b9fd2e9e76 Improve get_pretty_snapshot_diff(...) message formatting. 2023-10-03 14:25:34 -04:00
Ryan Dick
75b65597af Add malloc info to MemorySnapshot. 2023-10-03 14:25:34 -04:00
Ryan Dick
2a3c0ab5d2 Move MemorySnapshot to its own file. 2023-10-03 14:25:34 -04:00
Ryan Dick
7d61373b82 Add LibcUtil class. 2023-10-03 14:25:34 -04:00
Ryan Dick
7d65555a5a Fix type error in torch device comparison. 2023-10-03 14:25:34 -04:00
Ryan Dick
123f2b2dbc Update cache model size estimates based on changes in VRAM when moving models to/from CUDA. 2023-10-03 14:25:34 -04:00
Ryan Dick
1e4e42556e Update model cache device comparison to treat 'cuda' and 'cuda:0' as the same device type. 2023-10-03 14:25:34 -04:00
Ryan Dick
1f6699ac43 Consolidate all model.to(...) calls in the model cache to use a utility function with better logging. 2023-10-03 14:25:34 -04:00
Ryan Dick
ace8665411 Add warning log if moving a model from cuda to cpu causes unexpected change in VRAM usage. 2023-10-03 14:25:34 -04:00
Ryan Dick
7fa5bae8fd Add warning log if moving model from RAM to VRAM causes an unexpected change in VRAM usage. 2023-10-03 14:25:34 -04:00
Ryan Dick
f9faca7c91 Add warning log if model mis-reports its required cache memory before load from disk. 2023-10-03 14:25:34 -04:00
Ryan Dick
594fd3ba6d Add debug logging of changes in RAM and VRAM for all model cache operations. 2023-10-03 14:25:34 -04:00
Ryan Dick
44d68f5ed5 Auto-format model_cache.py. 2023-10-03 14:25:34 -04:00
psychedelicious
045d3f6139 chore: flake8 2023-09-28 13:49:31 +10:00
Lincoln Stein
f6bbcd0589 remove dangling debug statement 2023-09-27 22:26:26 -04:00
Lincoln Stein
ada22a799e remove dangling debug statement 2023-09-27 22:26:06 -04:00
Lincoln Stein
034af2d9f8 enable downloading from subfolders for repo_ids 2023-09-27 22:11:56 -04:00
Ryan Dick
0fc14afcf0
Merge branch 'main' into bug/ip-adapter-calc-size 2023-09-27 09:42:51 -04:00
Lincoln Stein
3fd27b1aa9 run correct version of black 2023-09-26 08:03:34 -04:00
Lincoln Stein
d59e534cad use heuristic to select RAM cache size during headless install; blackified 2023-09-26 08:03:34 -04:00
Ryan Dick
399ebe443e Fix IP-Adapter calculation of memory footprint. 2023-09-25 18:28:10 -04:00
Lincoln Stein
43ebd68313
Merge branch 'main' into install/install-ip-adapters 2023-09-24 23:19:25 -04:00
Lincoln Stein
912087e4dc blackify 2023-09-24 19:00:38 -04:00
Lincoln Stein
593fb95213 ip_adapter_sd15 & its encoder will now be installed by default during headless install 2023-09-24 19:00:21 -04:00
Lincoln Stein
a1d9e6b871
Merge branch 'main' into bugfix/probe_ip_adapter 2023-09-24 15:39:43 -04:00
Lincoln Stein
e34e6d6e80 enable v_prediction for sd-1 models 2023-09-24 12:22:29 -04:00
Lincoln Stein
86cb53342a fix probing for ip_adapter folders 2023-09-23 22:32:03 -04:00
Lincoln Stein
25a71a1791
Merge branch 'main' into refactor/rename-get-logger 2023-09-23 14:49:07 -07:00
Ryan Dick
bf62553150 (minor) Update documentation to reflect that a bug was fixed in InvokeAI/ip_adapter_sdxl_vit_h by e178288fb6 2023-09-20 20:18:33 -04:00
Lincoln Stein
0960518088 add techjedi's database maintenance script 2023-09-20 17:46:49 -04:00
Kevin Turner
b1b5f70ea6
Merge branch 'main' into feat/taesd 2023-09-20 12:54:17 -07:00
Kevin Turner
e487bcd0f7 feat(model management): guess whether a VAE is for SDXL based on its name 2023-09-20 12:07:12 -07:00
Kevin Turner
e0f8274f49 feat(model management): guess whether a VAE is for SDXL based on its name 2023-09-20 12:06:55 -07:00
Lincoln Stein
0420874f56 reimplement the old invokeai-metadata command 2023-09-20 13:49:29 -04:00
Kevin Turner
f222b871e9 Merge remote-tracking branch 'origin/main' into feat/taesd
# Conflicts:
#	invokeai/backend/model_management/model_probe.py
2023-09-20 10:46:55 -07:00
Ryan Dick
92b49e45bb Address flake8 error. 2023-09-18 16:33:16 -04:00
Ryan Dick
b05b8ef677 Switch to using torch 2.0 attention for IP-Adapter (more memory-efficient). 2023-09-18 16:30:53 -04:00
blessedcoolant
e0dddbd38e chore: fix isort issues 2023-09-17 12:13:03 +12:00
blessedcoolant
b7773c9962 chore: black & lint fixes 2023-09-17 12:00:21 +12:00
user1
c48e648cbb Added per-step setting of IP-Adapter weights (for param easing, etc.) 2023-09-16 12:36:16 -07:00
user1
7ee13879e3 Added check in IP-Adapter to avoid begin/end step percent handling if use of IP-Adapter is already turned off due to potential clash with other cross attention control. 2023-09-16 09:29:50 -07:00
user1
ced297ed21 Initial implementation of IP-Adapter "begin_step_percent" and "end_step_percent" for controlling on which steps IP-Adapter is applied in the denoising loop. 2023-09-16 08:24:12 -07:00
Ryan Dick
343df03a92 isort 2023-09-15 13:18:00 -04:00
Ryan Dick
b57acb7353 Merge branch 'main' into feat/ip-adapter 2023-09-15 13:15:25 -04:00
Kent Keirsey
afe9756667
Merge branch 'main' into feat/taesd 2023-09-15 12:19:19 -04:00
Sergey Borisov
ff3150a818 Update lora hotfix to new diffusers version(scale argument added) 2023-09-15 12:19:01 -04:00
Ryan Dick
16664da5b6 black 2023-09-14 23:49:02 -04:00
Ryan Dick
c104807201 Update list of supported IP-Adapters. 2023-09-14 23:43:19 -04:00
Ryan Dick
990ce9a1da Lookup IP-Adapter linked image encoder from disk instead of storing in model config metadata. 2023-09-14 23:06:57 -04:00
Ryan Dick
c2f074dc2f Fix python static checks. 2023-09-14 16:48:47 -04:00
Ryan Dick
fca60862e2 Add README.md describing IP-Adapter model formats. 2023-09-14 16:02:07 -04:00
Ryan Dick
94c186bb4c Fix bug in IPAdapter.to(...). 2023-09-14 15:45:25 -04:00
Ryan Dick
a22c8cb3a1 Improve robustness of check for IPAdapter vs IPAdapterPlus. 2023-09-14 15:25:41 -04:00
Ryan Dick
781e8521d5 Eliminate the need for IPAdapter.initialize(). 2023-09-14 15:02:59 -04:00
Ryan Dick
d114d0ba95 Remove need for the image_encoder param in IPAdapter.initialize(). 2023-09-14 14:14:35 -04:00
Ryan Dick
388554448a Add CLIP Vision model to IP-Adapter info and use this to infer which model to use. 2023-09-14 11:57:53 -04:00
Ryan Dick
d5160648d0 Add support for downloading IP-Adapter models from HF. 2023-09-14 11:18:43 -04:00
Ryan Dick
6d0ea42a94 Get CLIPVision model download from HF working. 2023-09-14 09:54:10 -04:00
Ryan Dick
2c1100509f Add BaseModelType.Any to be used by CLIPVisionModel. 2023-09-14 08:19:55 -04:00
Ryan Dick
c34b359c36 (minor) Remove duplicate TODO. 2023-09-13 21:25:20 -04:00
Ryan Dick
77d135967f Update IPAdapterModel to respect requested torch_dtype. 2023-09-13 21:06:42 -04:00
Ryan Dick
ebf26687cb (minor) Remove unnecessary TODO. 2023-09-13 21:03:42 -04:00
Ryan
2f5e923008 Removed duplicate import in model_cache.py 2023-09-13 19:33:43 -04:00
Ryan
b7296000e4 made MPS calls conditional on MPS actually being the chosen device and the backend being available 2023-09-13 19:33:43 -04:00
Ryan
fab055995e Add empty_cache() for MPS hardware. 2023-09-13 19:33:43 -04:00
Ryan Dick
1c8991a3df Use CLIPVisionModel under model management for IP-Adapter. 2023-09-13 19:10:02 -04:00
Ryan Dick
3d52656176 Add CLIPVisionModel to model management. 2023-09-13 17:14:20 -04:00
Kevin Turner
090db1ab3a Merge remote-tracking branch 'origin/main' into feat/taesd 2023-09-13 09:17:53 -07:00
Ryan Dick
3ee9a21647 Initial (barely) working version of IP-Adapter model management. 2023-09-13 08:27:24 -04:00
Ryan Dick
0d823901ef Add IPAdapter to model_management __init__.py 2023-09-13 08:27:24 -04:00
Ryan Dick
7ee55489bb Improve model search warning messages. 2023-09-13 08:27:24 -04:00
Ryan Dick
163ece9aee Initial skeleton for IPAdapter model management. 2023-09-13 08:27:24 -04:00
Martin Kristiansen
e467ca7f1b Apply black, isort, flake8 2023-09-12 13:01:58 -04:00
Martin Kristiansen
e88d7c242f isort wip 3 2023-09-12 13:01:58 -04:00
Martin Kristiansen
caea6d11c6 isort wip 2 2023-09-12 13:01:58 -04:00