Commit Graph

559 Commits

Author SHA1 Message Date
6001d3d71d Change pad_to_multiple_of to be 8 for all cases. Add comment about its temporary status 2023-11-10 17:51:59 -05:00
8831d1ee41 Update Documentation 2023-11-10 17:51:59 -05:00
a0be83e370 Update Transformers to 4.34 and fix pad_to_multiple_of 2023-11-10 17:51:59 -05:00
8702a63197 add support for downloading and installing LCM lora diffusers models 2023-11-10 17:51:30 -05:00
6aa87f973e fix(nodes): create app/shared/ module to prevent circular imports
We have a number of shared classes, objects, and functions that are used in multiple places. This causes circular import issues.

This commit creates a new `app/shared/` module to hold these shared classes, objects, and functions.

Initially, only `FreeUConfig` and `FieldDescriptions` are moved here. This resolves a circular import issue with custom nodes.

Other shared classes, objects, and functions will be moved here in future commits.
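
As a rough illustration of the layout (the module path and field definitions here are assumptions, not the exact code):

```py
# app/shared/fields.py (hypothetical sketch)
from pydantic import BaseModel, Field

class FreeUConfig(BaseModel):
    s1: float = Field(default=0.9, description="Scaling for skip features, stage 1")
    s2: float = Field(default=0.2, description="Scaling for skip features, stage 2")
    b1: float = Field(default=1.2, description="Scaling for backbone features, stage 1")
    b2: float = Field(default=1.4, description="Scaling for backbone features, stage 2")

# consumers now import from the shared module rather than from each other, e.g.:
# from invokeai.app.shared.fields import FreeUConfig
```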
2023-11-09 16:41:55 +11:00
e66d0f7372 Merge branch 'main' into feat/nodes/freeu 2023-11-06 05:39:58 -08:00
aa02ebf8f5 Fix model cache gc.collect() condition. 2023-11-04 08:52:10 -04:00
fb3d0c4b12 Fix bug in model cache reference count checking. 2023-11-03 13:50:40 -07:00
8488ab0134 Reduce frequency that we call gc.collect() in the model cache. 2023-11-03 13:50:40 -07:00
875231ed3d Add reminder to clean up our model cache clearing logic. 2023-11-03 13:50:40 -07:00
43b300498f Remove explicit gc.collect() after transferring models from device to CPU. I'm not sure why this was there in the first place, but it was taking a significant amount of time (up to ~1sec in my tests). 2023-11-03 13:50:40 -07:00
e391f3c9a8 Skip torch.nn.Embedding.reset_parameters(...) when loading a text encoder model. 2023-11-02 19:41:33 -07:00
6e7a3f0546 (minor) Fix static checks and typo. 2023-11-02 19:20:37 -07:00
4a683cc669 Add an app config parameter to control the ModelCache logging behavior. 2023-11-02 19:20:37 -07:00
3781e56e57 Add log_memory_usage param to ModelCache. 2023-11-02 19:20:37 -07:00
8ff49109a8 Update get_pretty_snapshot_diff(...) to handle None-snapshots. 2023-11-02 19:20:37 -07:00
bac2a757e8 Replace deepcopy with a pickle roundtrip in apply_ti(...) to improve speed. 2023-11-02 19:05:24 -07:00
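For reference, the pattern is roughly this (a sketch, not the actual `apply_ti(...)` code):

```py
import pickle

def fast_copy(obj):
    # a pickle round-trip is typically faster than copy.deepcopy() for large,
    # pickle-friendly objects such as model weights
    return pickle.loads(pickle.dumps(obj))
```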
fa7f6a6a10 Further tidying of LoRA patching. Revert some changes that didn't end up being important under the constraint that calculations are done on the same device as the model. 2023-11-02 10:03:17 -07:00
61b17c475a Add TODO note about improving _resolve_lora_key(...). 2023-11-02 10:03:17 -07:00
379d68f595 Patch LoRA on device when model is already on device. 2023-11-02 10:03:17 -07:00
545c811bf1 Remove device and dtype members from LoRAModelRaw, they can too easily get out-of-sync with the underlying layer states. 2023-11-02 10:03:17 -07:00
2ba5b44ec4 Remove unused _lora_forward_hook(...). 2023-11-02 10:03:17 -07:00
7f4ce518b7 auto-format lora.py 2023-11-02 10:03:17 -07:00
6cbc69f3b7 support conversion of controlnets from safetensors to diffusers 2023-10-23 22:06:10 -04:00
fdf02c33d0 Catch generic model errors
Prevent the app from dying on invalid models.
2023-10-19 07:28:33 +11:00
001bba1719 Merge branch 'main' into feat/nodes/freeu 2023-10-17 15:58:00 +11:00
c238a7f18b feat(api): chore: pydantic & fastapi upgrade
Upgrade pydantic and fastapi to latest.

- pydantic~=2.4.2
- fastapi~=0.103.2
- fastapi-events~=0.9.1

**Big Changes**

There are a number of logic changes needed to support pydantic v2. Most changes are very simple, like using the new methods to serialize and deserialize models, but there are a few more complex changes.

**Invocations**

The biggest change relates to invocation creation, instantiation and validation.

Because pydantic v2 moves all validation logic into the rust pydantic-core, we may no longer directly stick our fingers into the validation pie.

Previously, we (ab)used models and fields to allow invocation fields to be optional at instantiation, but required when `invoke()` is called. We directly manipulated the fields and invocation models when calling `invoke()`.

With pydantic v2, this is much more involved. Changes to the python wrapper do not propagate down to the rust validation logic - you have to rebuild the model. This causes problems with concurrent access to the invocation classes and is not a free operation.

This logic has been totally refactored and we do not need to change the model any more. The details are in `baseinvocation.py`, in the `InputField` function and `BaseInvocation.invoke_internal()` method.

In the end, this implementation is cleaner.
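
A heavily simplified sketch of the new approach (names like `orig_required` are illustrative, not the exact code):

```py
# fields stay Optional for pydantic; "required at invoke time" is enforced
# just before invoke() runs, with no model rebuild needed
def invoke_internal(self, context):
    for name, field in type(self).model_fields.items():
        extra = field.json_schema_extra or {}
        if isinstance(extra, dict) and extra.get("orig_required") and getattr(self, name) is None:
            raise ValueError(f"Missing required input: {name}")
    return self.invoke(context)
```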

**Invocation Fields**

In pydantic v2, you can no longer directly add or remove fields from a model.

Previously, we did this to add the `type` field to invocations.

**Invocation Decorators**

With pydantic v2, we instead use the imperative `create_model()` API to create a new model with the additional field. This is done in `baseinvocation.py` in the `invocation()` wrapper.

A similar technique is used for `invocation_output()`.
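
A minimal sketch of the pattern (simplified; the real `invocation()` wrapper does more than this):

```py
from typing import Literal

from pydantic import create_model

def invocation(invocation_type: str):
    def wrapper(cls):
        # pydantic v2 forbids adding fields to an existing model, so build a new
        # model that subclasses the original and adds the literal `type` field
        return create_model(
            cls.__name__,
            __base__=cls,
            type=(Literal[invocation_type], invocation_type),
        )
    return wrapper
```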

**Minor Changes**

There are a number of minor changes around the pydantic v2 models API.

**Protected `model_` Namespace**

All models' pydantic-provided methods and attributes are prefixed with `model_` and this is considered a protected namespace. This causes some conflict, because "model" means something to us, and we have a ton of pydantic models with attributes starting with "model_".

Fortunately, there are no direct conflicts. However, in any pydantic model where we define an attribute or method that starts with "model_", we must set the protected namespaces to an empty tuple.

```py
class IPAdapterModelField(BaseModel):
    model_name: str = Field(description="Name of the IP-Adapter model")
    base_model: BaseModelType = Field(description="Base model")

    model_config = ConfigDict(protected_namespaces=())
```

**Model Serialization**

Pydantic models no longer have `Model.dict()` or `Model.json()`.

Instead, we use `Model.model_dump()` or `Model.model_dump_json()`.
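
For example, given a hypothetical `session` model instance:

```py
session_dict = session.model_dump()       # replaces session.dict()
session_json = session.model_dump_json()  # replaces session.json()
```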

**Model Deserialization**

Pydantic models no longer have `Model.parse_obj()` or `Model.parse_raw()`, and there are no `parse_raw_as()` or `parse_obj_as()` functions.

Instead, you need to create a `TypeAdapter` object to parse python objects or JSON into a model.

```py
adapter_graph = TypeAdapter(Graph)
deserialized_graph_from_json = adapter_graph.validate_json(graph_json)
deserialized_graph_from_dict = adapter_graph.validate_python(graph_dict)
```

**Field Customisation**

Pydantic `Field`s no longer accept arbitrary args.

Now, you must put all additional arbitrary args in a `json_schema_extra` arg on the field.
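
For example (the extra keys here are illustrative):

```py
from pydantic import BaseModel, Field

class ExampleField(BaseModel):
    # arbitrary metadata must now live in json_schema_extra rather than being
    # passed as extra kwargs to Field()
    value: int = Field(default=0, json_schema_extra={"ui_hidden": True})
```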

**Schema Customisation**

FastAPI and pydantic schema generation now follows the OpenAPI version 3.1 spec.

This necessitates two changes:
- Our schema customization logic has been revised
- Schema parsing to build node templates has been revised

The specifics aren't important, but this does present additional surface area for bugs.

**Performance Improvements**

Pydantic v2 is a full rewrite with a rust backend. This offers a substantial performance improvement (pydantic claims 5x to 50x depending on the task). We'll notice this the most during serialization and deserialization of sessions/graphs, which happens very, very often - a couple of times per node.

I haven't done any benchmarks, but anecdotally, graph execution is much faster. Also, very large graphs - like those with massive iterators - are much, much faster.
2023-10-17 14:59:25 +11:00
c9d95e5758 Merge branch 'main' into bugfix/ignore-dot-directories-on-model-scan 2023-10-15 17:23:02 +05:30
48626c40fd fix(backend): handle systems with glibc < 2.33
`mallinfo2` is not available on `glibc` < 2.33.

On these systems, we successfully load the library but get an `AttributeError` on attempting to access `mallinfo2`.

I'm not sure if the old `mallinfo` will work, and not sure how to install it safely to test, so for now we just handle the `AttributeError`.

This means the enhanced memory snapshot logic will be skipped for these systems, which isn't a big deal.
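
The handling looks roughly like this (a sketch, not the exact code):

```py
import ctypes

libc = ctypes.CDLL("libc.so.6")
try:
    # raises AttributeError on glibc < 2.33, where the symbol does not exist
    mallinfo2 = libc.mallinfo2
except AttributeError:
    mallinfo2 = None  # enhanced memory snapshots are skipped on these systems
```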
2023-10-15 07:56:55 +11:00
15cabc4968 Possibly closes #4815 2023-10-12 23:37:05 -04:00
40f9e49b5e Demote model cache logs from warning to debug based on the conversation here: https://discord.com/channels/1020123559063990373/1049495067846524939/1161647290189090816 2023-10-11 12:02:46 -04:00
15b33ad501 feat(nodes): add freeu support
Add support for FreeU. See:
- https://huggingface.co/docs/diffusers/main/en/using-diffusers/freeu
- https://github.com/ChenyangSi/FreeU

Implementation:
- `ModelPatcher.apply_freeu()` handles enabling FreeU (which is very simple with diffusers); see the sketch after this list.
- `FreeUConfig` model added to hold the hyperparameters.
- `freeu_config` added as an optional sub-field on `UNetField`.
- `FreeUInvocation` added; it works like LoRA - chain it to add the FreeU config to the UNet.
- No support for model-dependent presets; this will be a future workflow editor enhancement.
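
A minimal sketch of the patcher, assuming the diffusers `enable_freeu()`/`disable_freeu()` UNet API and these `FreeUConfig` field names:

```py
from contextlib import contextmanager

@contextmanager
def apply_freeu(unet, freeu_config=None):
    if freeu_config is None:
        yield
        return
    # diffusers handles the actual Fourier-space feature scaling internally
    unet.enable_freeu(s1=freeu_config.s1, s2=freeu_config.s2,
                      b1=freeu_config.b1, b2=freeu_config.b2)
    try:
        yield
    finally:
        unet.disable_freeu()
```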

Closes #4845
2023-10-11 13:49:28 +11:00
61242bf86a Fix bug in skip_torch_weight_init() where the original behavior of torch.nn.Conv*d modules wasn't being restored correctly. 2023-10-10 10:05:50 -04:00
58b56e9b1e Add a skip_torch_weight_init() context manager to improve model load times (from disk). 2023-10-09 14:12:56 -04:00
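The idea, roughly (a sketch; the exact set of patched layers is an assumption):

```py
from contextlib import contextmanager

import torch

@contextmanager
def skip_torch_weight_init():
    # replace reset_parameters() with a no-op while a checkpoint is loaded;
    # the freshly-initialized weights would be overwritten immediately anyway
    targets = [torch.nn.Linear, torch.nn.Conv1d, torch.nn.Conv2d, torch.nn.Conv3d, torch.nn.Embedding]
    originals = [t.reset_parameters for t in targets]
    try:
        for t in targets:
            t.reset_parameters = lambda self, *args, **kwargs: None
        yield
    finally:
        for t, original in zip(targets, originals):
            t.reset_parameters = original
```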
096d195d6e Merge branch 'main' into ryan/model-cache-logging-only 2023-10-06 09:52:45 -04:00
78377469db Add support for T2I-Adapter in node workflows (#4612)
* Bump diffusers to 0.21.2.

* Add T2IAdapterInvocation boilerplate.

* Add T2I-Adapter model to model-management.

* (minor) Tidy prepare_control_image(...).

* Add logic to run the T2I-Adapter models at the start of the DenoiseLatentsInvocation (see the sketch after this list).

* Add logic for applying T2I-Adapter weights and accumulating.

* Add T2IAdapter to MODEL_CLASSES map.

* yarn typegen

* Add model probes for T2I-Adapter models.

* Add all of the frontend boilerplate required to use T2I-Adapter in the nodes editor.

* Add T2IAdapterModel.convert_if_required(...).

* Fix errors in T2I-Adapter input image sizing logic.

* Fix bug with handling of multiple T2I-Adapters.

* black / flake8

* Fix typo

* yarn build

* Add num_channels param to prepare_control_image(...).

* Link to upstream diffusers bugfix PR that currently requires a workaround.

* feat: Add Color Map Preprocessor

Needed for the color T2I Adapter

* feat: Add Color Map Preprocessor to Linear UI

* Revert "feat: Add Color Map Preprocessor"

This reverts commit a1119a00bf.

* Revert "feat: Add Color Map Preprocessor to Linear UI"

This reverts commit bd8a9b82d8.

* Fix T2I-Adapter field rendering in workflow editor.

* yarn build, yarn typegen
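
For reference, a rough sketch of the up-front adapter pass (the model name and variable names are illustrative):

```py
import torch
from diffusers import T2IAdapter

adapter = T2IAdapter.from_pretrained(
    "TencentARC/t2iadapter_canny_sd15v2", torch_dtype=torch.float16
).to("cuda")

with torch.no_grad():
    # one feature tensor per UNet down-block level; during denoising these
    # residuals are added to the down-block activations, scaled by the adapter weight
    adapter_state = adapter(control_image)  # control_image: preprocessed [B, C, H, W] tensor
```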

---------

Co-authored-by: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2023-10-05 16:29:16 +11:00
2479a59e5e Re-enable garbage collection in model cache MemorySnapshots. 2023-10-03 15:18:47 -04:00
7d0ac2c36d (minor) clean up typos. 2023-10-03 15:00:03 -04:00
519b892f0c Add unit test for Struct_mallinfo2.__str__() 2023-10-03 14:25:34 -04:00
763dcacfd3 Add unit test for get_pretty_snapshot_diff(...). 2023-10-03 14:25:34 -04:00
22a84930f6 Disable garbage collection in ModelCache calls to MemorySnapshot in order to minimize snapshot overhead. 2023-10-03 14:25:34 -04:00
d64e17e043 Add README with info about glibc memory fragmentation caused by the model cache. 2023-10-03 14:25:34 -04:00
ba54277011 Catch a more specific exception in environments that do not have a libc shared library. 2023-10-03 14:25:34 -04:00
5915a4a51c Minor fixes. 2023-10-03 14:25:34 -04:00
4580ba0d87 Remove logic to update model cache size estimates dynamically. 2023-10-03 14:25:34 -04:00
b9fd2e9e76 Improve get_pretty_snapshot_diff(...) message formatting. 2023-10-03 14:25:34 -04:00
75b65597af Add malloc info to MemorySnapshot. 2023-10-03 14:25:34 -04:00
2a3c0ab5d2 Move MemorySnapshot to its own file. 2023-10-03 14:25:34 -04:00
7d61373b82 Add LibcUtil class. 2023-10-03 14:25:34 -04:00
7d65555a5a Fix type error in torch device comparison. 2023-10-03 14:25:34 -04:00