Commit Graph

91 Commits

Author SHA1 Message Date
psychedelicious
c238a7f18b feat(api): chore: pydantic & fastapi upgrade
Upgrade pydantic and fastapi to latest.

- pydantic~=2.4.2
- fastapi~=0.103.2
- fastapi-events~=0.9.1

**Big Changes**

There are a number of logic changes needed to support pydantic v2. Most changes are very simple, like using the new methods to serialize and deserialize models, but there are a few more complex changes.

**Invocations**

The biggest change relates to invocation creation, instantiation and validation.

Because pydantic v2 moves all validation logic into the rust pydantic-core, we may no longer directly stick our fingers into the validation pie.

Previously, we (ab)used models and fields to allow invocation fields to be optional at instantiation, but required when `invoke()` is called. We directly manipulated the fields and invocation models when calling `invoke()`.

With pydantic v2, this is much more involved. Changes to the python wrapper do not propagate down to the rust validation logic - you have to rebuild the model. This causes problems with concurrent access to the invocation classes and is not a free operation.

This logic has been totally refactored and we do not need to change the model any more. The details are in `baseinvocation.py`, in the `InputField` function and `BaseInvocation.invoke_internal()` method.

In the end, this implementation is cleaner.
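
To illustrate the approach (a minimal sketch only, not the actual `baseinvocation.py` code; `MyInvocation`, `image_name` and the bare `ValueError` are hypothetical stand-ins), required-but-unset fields can be checked at invoke time without rebuilding the pydantic model:

```py
from typing import Optional

from pydantic import BaseModel, Field


class MyInvocation(BaseModel):
    """Hypothetical invocation whose field is optional at instantiation."""

    image_name: Optional[str] = Field(default=None, description="Required only at invoke time")

    def invoke(self) -> str:
        return f"processing {self.image_name}"

    def invoke_internal(self) -> str:
        # Validate at call time instead of mutating/rebuilding the pydantic model.
        if self.image_name is None:
            raise ValueError("Missing required field: image_name")
        return self.invoke()


node = MyInvocation()          # OK: the field is optional at instantiation
node.image_name = "example.png"
print(node.invoke_internal())  # validated only when invoked
```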

**Invocation Fields**

In pydantic v2, you can no longer directly add or remove fields from a model.

Previously, we did this to add the `type` field to invocations.

**Invocation Decorators**

With pydantic v2, we instead use the imperative `create_model()` API to create a new model with the additional field. This is done in `baseinvocation.py` in the `invocation()` wrapper.

A similar technique is used for `invocation_output()`.
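
A minimal sketch of the `create_model()` technique (this is not the real `invocation()` decorator, which also handles title, tags, category and registration; the node type `"resize_image"` and class names here are hypothetical):

```py
from typing import Literal, Type

from pydantic import BaseModel, create_model


def invocation(invocation_type: str):
    """Hypothetical decorator that adds a literal `type` field via create_model()."""

    def wrapper(cls: Type[BaseModel]) -> Type[BaseModel]:
        # Rebuild the class with the extra field instead of mutating cls directly.
        return create_model(
            cls.__name__,
            __base__=cls,
            type=(Literal[invocation_type], invocation_type),
        )

    return wrapper


@invocation("resize_image")
class ResizeImageInvocation(BaseModel):
    width: int = 512


print(ResizeImageInvocation().type)  # "resize_image"
```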

**Minor Changes**

There are a number of minor changes around the pydantic v2 models API.

**Protected `model_` Namespace**

All models' pydantic-provided methods and attributes are prefixed with `model_` and this is considered a protected namespace. This causes some conflict, because "model" means something to us, and we have a ton of pydantic models with attributes starting with "model_".

Fortunately, there are no direct conflicts. However, in any pydantic model where we define an attribute or method that starts with "model_", we must set the protected namespaces to an empty tuple.

```py
class IPAdapterModelField(BaseModel):
    model_name: str = Field(description="Name of the IP-Adapter model")
    base_model: BaseModelType = Field(description="Base model")

    model_config = ConfigDict(protected_namespaces=())
```

**Model Serialization**

Pydantic models no longer have `Model.dict()` or `Model.json()`.

Instead, we use `Model.model_dump()` or `Model.model_dump_json()`.
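
For example (a small sketch; `ImageField` here is simplified to a single attribute):

```py
from pydantic import BaseModel


class ImageField(BaseModel):
    image_name: str


field = ImageField(image_name="example.png")

as_dict = field.model_dump()       # replaces field.dict()
as_json = field.model_dump_json()  # replaces field.json()
```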

**Model Deserialization**

Pydantic models no longer have `Model.parse_obj()` or `Model.parse_raw()`, and there are no `parse_raw_as()` or `parse_obj_as()` functions.

Instead, you need to create a `TypeAdapter` object to parse python objects or JSON into a model.

```py
adapter_graph = TypeAdapter(Graph)
deserialized_graph_from_json = adapter_graph.validate_json(graph_json)
deserialized_graph_from_dict = adapter_graph.validate_python(graph_dict)
```

**Field Customisation**

Pydantic `Field`s no longer accept arbitrary args.

Now, you must put all additional arbitrary args in a `json_schema_extra` arg on the field.
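
For example (a sketch; `ui_hidden` is a hypothetical extra arg, not necessarily one of our actual field args):

```py
from pydantic import BaseModel, Field


class ExampleInvocation(BaseModel):
    strength: float = Field(
        default=1.0,
        description="Example field",
        # Previously: Field(default=1.0, description="...", ui_hidden=False)
        json_schema_extra={"ui_hidden": False},
    )
```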

**Schema Customisation**

FastAPI and pydantic schema generation now follows the OpenAPI version 3.1 spec.

This necessitates two changes:
- Our schema customization logic has been revised
- Schema parsing to build node templates has been revised

The specifics aren't important, but this does present additional surface area for bugs.

**Performance Improvements**

Pydantic v2 is a full rewrite with a rust backend. This offers a substantial performance improvement (pydantic claims 5x to 50x depending on the task). We'll notice this the most during serialization and deserialization of sessions/graphs, which happens very very often - a couple times per node.

I haven't done any benchmarks, but anecdotally, graph execution is much faster. Also, very large graphs - like those with massive iterators - are much, much faster.
2023-10-17 14:59:25 +11:00
Ryan Dick
b57acb7353 Merge branch 'main' into feat/ip-adapter 2023-09-15 13:15:25 -04:00
Martin Kristiansen
5615c31799 isort wip 2023-09-12 13:01:58 -04:00
Ryan Dick
ddc148b70b Move ConditioningData and its field classes to their own file. This will allow new conditioning types to be added more cleanly without introducing circular dependencies. 2023-09-08 11:00:11 -04:00
psychedelicious
d9148fb619 feat(nodes): add version to node schemas
The `@invocation` decorator is extended with an optional `version` arg. On execution of the decorator, the version string is parsed using the `semver` package (this was an indirect dependency and has been added to `pyproject.toml`).

All built-in nodes are set with `version="1.0.0"`.

The version is added to the OpenAPI Schema for consumption by the client.
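
A small sketch of the idea (the node type string is hypothetical, and the snippet assumes the semver 3.x `Version.parse` API):

```py
import semver

# A node author declares the version on the decorator, e.g.:
#   @invocation("example_node", title="Example Node", version="1.0.0")
# The decorator then validates the string with the semver package, roughly:
parsed = semver.Version.parse("1.0.0")
print(parsed.major, parsed.minor, parsed.patch)  # 1 0 0
```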
2023-09-04 19:08:18 +10:00
Sergey Borisov
9c3405e0c0 Fix sdxl lora loader input definitions, fix namings 2023-09-02 04:34:17 +03:00
Sergey Borisov
d5267357b1 Pad conditioning tensors from clip and clip2 in sdxl 2023-08-30 21:28:40 -04:00
psychedelicious
044d4c107a feat(nodes): move all invocation metadata (type, title, tags, category) to decorator
All invocation metadata (type, title, tags and category) are now defined in decorators.

The decorators add the `type: Literal["invocation_type"] = "invocation_type"` field to the invocation.

Category is a new piece of invocation metadata, but it is not used by the frontend just yet.

- `@invocation()` decorator for invocations

```py
@invocation(
    "sdxl_compel_prompt",
    title="SDXL Prompt",
    tags=["sdxl", "compel", "prompt"],
    category="conditioning",
)
class SDXLCompelPromptInvocation(BaseInvocation, SDXLPromptInvocationBase):
    ...
```

- `@invocation_output()` decorator for invocation outputs

```py
@invocation_output("clip_skip_output")
class ClipSkipInvocationOutput(BaseInvocationOutput):
    ...
```

- update invocation docs
- add category to decorator
- regen frontend types
2023-08-30 18:35:12 +10:00
psychedelicious
ae05d34584 fix(nodes): fix uploading image metadata retention
was causing failure to save images
2023-08-30 14:52:50 +10:00
Sergey Borisov
121396f844 Fix tokenization log for sd models 2023-08-29 17:07:33 +03:00
Millun Atluri
65feb92286 Merge branch 'main' into feat_compel_and 2023-08-24 17:38:35 +10:00
Sergey Borisov
beb3e5aeb7 Report correctly to compel if we want get pooled in future(affects blend computation) 2023-08-21 19:05:40 +10:00
Millun Atluri
3798c8bdb0 Merge branch 'main' into feat_compel_and 2023-08-18 17:04:03 +10:00
Martin Kristiansen
537ae2f901 Resolving merge conflicts for flake8 2023-08-18 15:52:04 +10:00
psychedelicious
c48fd9c083 feat(nodes): refactor parameter/primitive nodes
Refine concept of "parameter" nodes to "primitives":
- integer
- float
- string
- boolean
- image
- latents
- conditioning
- color

Each primitive has (sketched below for the image primitive):
- A field definition, if the value is not already a python primitive. The field is how the primitive value is passed between nodes. Collections are lists of the field in node definitions. ex: `ImageField` & `list[ImageField]`
- A single output class. ex: `ImageOutput`
- A collection output class. ex: `ImageCollectionOutput`
- A node, which functions to load or pass on the primitive value. ex: `ImageInvocation` (in this case, `ImageInvocation` replaces `LoadImage`)
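
A minimal sketch of this pattern for the image primitive (simplified; the real classes in `primitives.py` derive from our invocation base classes and carry more metadata):

```py
from pydantic import BaseModel, Field


class ImageField(BaseModel):
    """How an image is passed between nodes (sketch)."""

    image_name: str = Field(description="The name of the image")


class ImageOutput(BaseModel):
    """Single-image output (sketch)."""

    image: ImageField
    width: int
    height: int


class ImageCollectionOutput(BaseModel):
    """Collection output: a list of the field type (sketch)."""

    collection: list[ImageField]
```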

Plus a number of related changes:
- Reorganize these into `primitives.py`
- Update all nodes and logic to use primitives
- Consolidate "prompt" outputs into "string" & "mask" into "image" (there's no reason for these to be different, they function identically)
- Update default graphs & tests
- Regen frontend types & minor frontend tidy related to changes
2023-08-16 09:54:38 +10:00
psychedelicious
f49fc7fb55 feat: node editor
squashed rebase on main after backend refactor
2023-08-16 09:54:38 +10:00
Sergey Borisov
94636ddb03 Fix empty prompt handling 2023-08-13 19:31:14 +03:00
blessedcoolant
561951ad98 chore: Black linting 2023-08-13 21:28:39 +12:00
Sergey Borisov
e9ec5ab85c Apply requested changes
Co-Authored-By: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2023-08-10 06:19:22 +03:00
Sergey Borisov
f7aec3b934 Move conditioning class to backend 2023-08-08 23:33:52 +03:00
Sergey Borisov
2539e26c18 Apply denoising_start/end, add torch-sdp to memory efficient attention func 2023-08-07 19:57:11 +03:00
Sergey Borisov
b0738b7f70 Fixes, zero tensor for empty negative prompt, remove raw prompt node 2023-08-07 18:37:06 +03:00
Sergey Borisov
9aaf67c5b4 wip 2023-08-06 05:05:25 +03:00
Lincoln Stein
6ad565d84c folded in changes from 4099 2023-08-04 18:24:47 -04:00
Sergey Borisov
04229082d6 Provide ti name from model manager, not from ti itself 2023-08-04 18:24:47 -04:00
Sergey Borisov
1ac14a1e43 add sdxl lora support 2023-08-04 11:44:56 -04:00
Lincoln Stein
e7d9e552a7 Merge branch 'main' into feat_compel_and 2023-08-01 07:20:25 -04:00
Damian Stewart
d2c55dc011 enable .and() syntax and long prompts 2023-07-30 14:20:59 +02:00
Brandon Rising
2b7b3dd4ba Run python black 2023-07-28 09:46:44 -04:00
Martin Kristiansen
218b6d0546 Apply black 2023-07-27 10:54:01 -04:00
Brandon Rising
c16da75ac7 Merge branch 'main' into feat/onnx 2023-07-26 10:42:31 -04:00
psychedelicious
840205496a feat(nodes): fix model load events on sdxl nodes
they need the `context` to be provided to emit socket events
2023-07-26 14:46:38 +10:00
Brandon Rising
ee7b36cea5 Merge branch 'main' into onnx-testing 2023-07-18 22:56:41 -04:00
Sergey Borisov
3240f98f4e Rename clip1 to clip 2023-07-18 18:58:17 +03:00
blessedcoolant
3f1d5000c0 Merge branch 'main' into nodes-stuff 2023-07-19 02:37:50 +12:00
blessedcoolant
0c18c5d603 feat: Add titles and tags to all Nodes 2023-07-19 02:26:45 +12:00
StAlKeR7779
889b77d3d6 Merge branch 'main' into save_vram 2023-07-18 16:55:48 +03:00
Sergey Borisov
bc11296a5e Disable lazy offloading when the vram cache is disabled, move resulting tensors to cpu (to not stack vram tensors in cache), fix text encoder not freed (detach) 2023-07-18 16:20:25 +03:00
Lincoln Stein
9c3c393b84 merge with main 2023-07-18 07:00:55 -04:00
Lincoln Stein
ef31837167 fix caption on sdxl raw prompt 2023-07-17 23:49:23 -04:00
blessedcoolant
13da881953 Merge branch 'main' into sdxl-support 2023-07-18 13:34:07 +12:00
Sergey Borisov
ada9b06e48 Implement compel prompt nodes for sdxl 2023-07-18 01:49:45 +03:00
psychedelicious
ba12849685 fix(nodes): fix some model load events not emitting
Missed adding the `context` arg to them initially
2023-07-17 17:16:55 +10:00
Lincoln Stein
5206ddf9b2 truncate long prompts to avoid a crash with controlnet 2023-07-15 23:49:25 -04:00
Sergey Borisov
fe78a08e37 Fix sd1/2 models conditionings 2023-07-16 06:24:24 +03:00
Sergey Borisov
c9c2229917 Separate prompt to sdxl and sdxl-refiner, add denoising start-end fields, add l2l node(supports both sdxl and sdxl-refiner), add fp32 to vae encode 2023-07-16 06:00:37 +03:00
Lincoln Stein
ccbfa5d862 resolve conflicts 2023-07-15 19:47:50 -04:00
Sergey Borisov
7093e5d033 Pad conditionings using zeros and encoder_attention_mask 2023-07-15 00:52:54 +03:00
Brandon Rising
524888bf3b Merge branch 'main' into feat/onnx 2023-07-13 14:23:57 -04:00
Sergey Borisov
358ced6bab SDXL Prompt and t2l nodes draft, add fp32 to vae decode 2023-07-11 18:19:36 +03:00