Commit Graph

61 Commits

Author SHA1 Message Date
psychedelicious
e2d7b514e0 tidy: correct attributions for controlnet processors 2024-03-21 07:02:57 -07:00
psychedelicious
c36d12a50f feat: adaptation of Lineart Anime processor
Adapted from https://github.com/huggingface/controlnet_aux
2024-03-21 07:02:57 -07:00
psychedelicious
c7f8fe4d5e feat: adaptation of Lineart processor
Adapted from https://github.com/huggingface/controlnet_aux
2024-03-21 07:02:57 -07:00
psychedelicious
ffb41c3616 feat: adaptation of HED processor
Adapted from controlnet repo
2024-03-21 07:02:57 -07:00
psychedelicious
611006b692 feat: adaptation of Canny processor
Adapted from controlnet processors package

fix: do final resize in canny processor

canny
2024-03-21 07:02:57 -07:00
psychedelicious
ca496f0380 feat: add image utils
These all support controlnet processors.

- `pil_to_cv2`
- `cv2_to_pil`
- `pil_to_np`
- `np_to_pil`
- `normalize_image_channel_count` (a readable version of `HWC3` from the controlnet repo)
- `fit_image_to_resolution` (a readable version of `resize_image` from the controlnet repo)
- `non_maximum_suppression` (a readable version of `nms` from the controlnet repo)
- `safe_step` (a readable version of `safe_step` from the controlnet repo)
2024-03-21 07:02:57 -07:00
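
Below is a rough sketch, not the actual InvokeAI code, of what conversion helpers like `pil_to_cv2`, `cv2_to_pil`, `pil_to_np`, `np_to_pil`, and `normalize_image_channel_count` from the commit above might look like; the real implementations may differ in details such as alpha handling.

```py
# Hedged sketch of the PIL <-> OpenCV <-> numpy helpers listed above.
import cv2
import numpy as np
from PIL import Image


def pil_to_np(image: Image.Image) -> np.ndarray:
    # PIL images convert to HWC uint8 arrays in RGB order.
    return np.asarray(image.convert("RGB"), dtype=np.uint8)


def np_to_pil(image: np.ndarray) -> Image.Image:
    return Image.fromarray(image)


def pil_to_cv2(image: Image.Image) -> np.ndarray:
    # OpenCV expects BGR channel order, so swap channels after converting.
    return cv2.cvtColor(pil_to_np(image), cv2.COLOR_RGB2BGR)


def cv2_to_pil(image: np.ndarray) -> Image.Image:
    return Image.fromarray(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))


def normalize_image_channel_count(image: np.ndarray) -> np.ndarray:
    # Mirrors the idea of controlnet's HWC3: force a 3-channel uint8 image,
    # compositing any alpha channel over white.
    if image.ndim == 2:
        image = image[:, :, None]
    if image.shape[2] == 1:
        return np.concatenate([image] * 3, axis=2)
    if image.shape[2] == 4:
        rgb = image[:, :, :3].astype(np.float32)
        alpha = image[:, :, 3:4].astype(np.float32) / 255.0
        return (rgb * alpha + 255.0 * (1.0 - alpha)).clip(0, 255).astype(np.uint8)
    return image
```
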
psychedelicious
040ea8f41b tidy: do not show msg when loading NSFW checker 2024-03-20 15:05:25 +11:00
psychedelicious
2eacbb4d9d fix(nodes): do not load NSFW checker model on startup
Just check if the path exists to determine if it is "available". When needed, load it.
2024-03-20 15:05:25 +11:00
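
A minimal sketch of the lazy-load pattern this commit describes; `LazySafetyChecker` and its members are illustrative names, not the actual service API.

```py
# Illustrative only: availability is decided by the path, the load is deferred.
from pathlib import Path
from typing import Optional


class LazySafetyChecker:
    def __init__(self, checker_path: Path):
        self.checker_path = checker_path
        self._model: Optional[object] = None

    def is_available(self) -> bool:
        # Checking the path does not load anything.
        return self.checker_path.exists()

    def get_model(self) -> object:
        if self._model is None:
            # The expensive model load happens only on first use. A real
            # implementation would construct the NSFW checker model here.
            self._model = object()
        return self._model
```
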
psychedelicious
b378cfcb46 cleanup: remove unused scripts, cruft
App runs & tests pass.
2024-03-20 15:05:25 +11:00
psychedelicious
982b513af3 tidy(config): move a few get_config calls to inside the functions where they are needed 2024-03-19 09:24:28 +11:00
psychedelicious
60492500db chore: ruff 2024-03-19 09:24:28 +11:00
psychedelicious
897fe497dc fix(config): use new get_config across the app, use correct settings 2024-03-19 09:24:28 +11:00
psychedelicious
b4182b190f fix(config): use new config.patchmatch 2024-03-19 09:24:28 +11:00
psychedelicious
22ac204678 fix(config): fix invisible_watermark handling
This setting was hardcoded to True. Simplified logic around it to not have a conditional that does nothing.
2024-03-19 09:24:28 +11:00
psychedelicious
fbe3afa5e1 fix(config): fix nsfw_checker handling
This setting was hardcoded to True. Rework logic around it to not conditionally check the setting.
2024-03-19 09:24:28 +11:00
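
For illustration, a hedged sketch of gating post-processing steps on the `nsfw_checker` and `invisible_watermark` settings rather than hardcoding them to True; the config class, function names, and stubs here are invented for the example.

```py
from dataclasses import dataclass


@dataclass
class AppConfig:
    nsfw_checker: bool = False
    invisible_watermark: bool = False


def run_nsfw_check(image):
    return image  # stub: a real check would blur or block flagged images


def apply_watermark(image):
    return image  # stub: a real implementation would embed the watermark


def postprocess(image, config: AppConfig):
    # Each step runs only when its setting is enabled, instead of always running.
    if config.nsfw_checker:
        image = run_nsfw_check(image)
    if config.invisible_watermark:
        image = apply_watermark(image)
    return image
```
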
psychedelicious
fed1f983db fix(nodes): depth anything processor (#5956)
We were passing a PIL image when we needed to pass the np image.

Closes #5956
2024-03-14 20:14:53 +11:00
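
A hedged sketch of the shape of this fix: convert the PIL image to a numpy array before calling the estimator. `depth_estimator` is a placeholder, not the actual DepthAnything API.

```py
import numpy as np
from PIL import Image


def run_depth(image: Image.Image, depth_estimator):
    # Convert to an HWC uint8 numpy array; previously the PIL image itself
    # was passed through, which is what the fix corrects.
    np_image = np.asarray(image.convert("RGB"), dtype=np.uint8)
    return depth_estimator(np_image)
```
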
blessedcoolant
af660163ca chore: cleanup DepthAnything code 2024-03-13 20:35:52 +05:30
psychedelicious
dd9daf8efb chore: ruff 2024-03-01 10:42:33 +11:00
Lincoln Stein
a23dedd2ee make model manager v2 ready for PR review
- Replace legacy model manager service with the v2 manager.

- Update invocations to use new load interface.

- Fixed many but not all type checking errors in the invocations. Most
  were unrelated to the model manager.

- Updated routes. All the new routes live under the route tag
  `model_manager_v2`. To avoid confusion with the old routes,
  they have the URL prefix `/api/v2/models`. The old routes
  have been de-registered.

- Added a pytest for the loader.

- Updated documentation in contributing/MODEL_MANAGER.md
2024-03-01 10:42:33 +11:00
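
A hedged sketch of namespacing routes under the `/api/v2/models` prefix and `model_manager_v2` tag mentioned above; the handler and its return value are placeholders, not the real endpoints.

```py
from fastapi import APIRouter, FastAPI

# New routes live under their own prefix and tag so they cannot be confused
# with the legacy routes, which are simply not registered any more.
model_manager_v2_router = APIRouter(prefix="/api/v2/models", tags=["model_manager_v2"])


@model_manager_v2_router.get("/")
async def list_models() -> list:
    return []  # the real route returns the installed model records


app = FastAPI()
app.include_router(model_manager_v2_router)
```
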
blessedcoolant
e82c21b5ba chore: rename DWPose to DW Openpose 2024-02-12 11:12:45 -05:00
blessedcoolant
f8e566d62a cleanup: unused util functions 2024-02-12 11:12:45 -05:00
blessedcoolant
f588b95c7f cleanup: remove unused code from the DWPose implementation 2024-02-12 11:12:45 -05:00
blessedcoolant
7d80261d47 chore: Add code attribution for the DWPoseDetector 2024-02-12 11:12:45 -05:00
blessedcoolant
67cbfeb33d feat: Add output image resizing for DWPose 2024-02-12 11:12:45 -05:00
blessedcoolant
675c73c94f fix: ruff lint errors 2024-02-12 11:12:45 -05:00
blessedcoolant
0a27b0379f feat: Initial implementation of DWPoseDetector 2024-02-12 11:12:45 -05:00
psychedelicious
c5f069a255 feat(backend): remove dependency on basicsr
`basicsr` has a hard dependency on torchvision <= 0.16 and is unmaintained. Extract the code we need from it and remove the dep.

Closes #5108
2024-02-11 08:34:54 +11:00
blessedcoolant
35184dbd86 fix: incorrect local file path 2024-01-24 03:37:16 +05:30
blessedcoolant
92fb09c4df fix: Move the models to any folder to avoid boot warnings 2024-01-24 03:35:37 +05:30
blessedcoolant
7cb49e65bd feat: Add Resolution to DepthAnything 2024-01-23 14:13:50 -06:00
blessedcoolant
6a2eb1d2e4 fix: Change the path of the annotator folder to annotators
Just making this change in case there are other models added to the folder in the future
2024-01-23 14:13:50 -06:00
blessedcoolant
c859eb865e fix: lint & other minor issues 2024-01-23 14:13:50 -06:00
blessedcoolant
8f5e2cbcc7 feat: Add Depth Anything PreProcessor 2024-01-23 14:13:50 -06:00
psychedelicious
7653d21cf5 feat(backend): rename realesrgan class & upscale method 2023-11-28 07:58:22 +11:00
psychedelicious
46a2d83b84 feat(backend): organise realesrgan code, add license
- Moved util to own folder
- BSD3 License for RealESRGAN repo added
2023-11-28 07:58:22 +11:00
psychedelicious
2192210910 feat(nodes): remove dependency on realesrgan
We used the `RealESRGANer` utility class from the repo. It handled model loading and tiled upscaling logic.

Unfortunately, it hasn't been updated in over a year, has no types, and annoyingly prints to the console.

I've adapted the class, cleaning it up a bit and removing the bits that are not relevant for us.

Upscaling functionality is identical.
2023-11-28 07:58:22 +11:00
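
A simplified sketch of the tile-by-tile upscaling idea that the adapted class handles; real code adds tile overlap, padding, and blending, and `upscale_fn` stands in for the model forward pass.

```py
import numpy as np


def upscale_tiled(image: np.ndarray, upscale_fn, tile: int = 512, scale: int = 4) -> np.ndarray:
    # Walk the image in tile-sized windows, upscale each tile independently,
    # and write the result into the corresponding region of the output.
    h, w, c = image.shape
    out = np.zeros((h * scale, w * scale, c), dtype=image.dtype)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            patch = image[y : y + tile, x : x + tile]
            # `upscale_fn` must return the patch scaled by exactly `scale`.
            out[y * scale : (y + patch.shape[0]) * scale, x * scale : (x + patch.shape[1]) * scale] = upscale_fn(patch)
    return out
```

For testing the tiling alone, `upscale_fn` could be a nearest-neighbour stand-in such as `lambda t: t.repeat(4, 0).repeat(4, 1)` with `scale=4`.
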
psychedelicious
513fceac82 chore: ruff check - fix pycodestyle 2023-11-11 10:55:33 +11:00
psychedelicious
3a136420d5 chore: ruff check - fix flake8-comprehensions 2023-11-11 10:55:23 +11:00
psychedelicious
252c9a5f5a fix(backend): fix nsfw/watermarker util types 2023-10-18 09:08:13 +11:00
psychedelicious
c238a7f18b feat(api): chore: pydantic & fastapi upgrade
Upgrade pydantic and fastapi to latest.

- pydantic~=2.4.2
- fastapi~=103.2
- fastapi-events~=0.9.1

**Big Changes**

There are a number of logic changes needed to support pydantic v2. Most changes are very simple, like using the new methods to serialize and deserialize models, but there are a few more complex changes.

**Invocations**

The biggest change relates to invocation creation, instantiation and validation.

Because pydantic v2 moves all validation logic into the rust pydantic-core, we may no longer directly stick our fingers into the validation pie.

Previously, we (ab)used models and fields to allow invocation fields to be optional at instantiation, but required when `invoke()` is called. We directly manipulated the fields and invocation models when calling `invoke()`.

With pydantic v2, this is much more involved. Changes to the python wrapper do not propagate down to the rust validation logic - you have to rebuild the model. This causes problems with concurrent access to the invocation classes and is not a free operation.

This logic has been totally refactored and we do not need to change the model any more. The details are in `baseinvocation.py`, in the `InputField` function and `BaseInvocation.invoke_internal()` method.

In the end, this implementation is cleaner.

**Invocation Fields**

In pydantic v2, you can no longer directly add or remove fields from a model.

Previously, we did this to add the `type` field to invocations.

**Invocation Decorators**

With pydantic v2, we instead use the imperative `create_model()` API to create a new model with the additional field. This is done in `baseinvocation.py` in the `invocation()` wrapper.

A similar technique is used for `invocation_output()`.
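
A minimal, hedged example of the `create_model()` technique described here; the real `invocation()` wrapper in `baseinvocation.py` does much more than this.

```py
from typing import Literal

from pydantic import BaseModel, create_model


class AddInvocation(BaseModel):
    a: int = 0
    b: int = 0


# Rebuild the class with an extra literal `type` field instead of mutating
# the original model's fields (which pydantic v2 no longer allows).
AddInvocationWithType = create_model(
    "AddInvocation",
    __base__=AddInvocation,
    type=(Literal["add"], "add"),
)
```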

**Minor Changes**

There are a number of minor changes around the pydantic v2 models API.

**Protected `model_` Namespace**

All models' pydantic-provided methods and attributes are prefixed with `model_` and this is considered a protected namespace. This causes some conflict, because "model" means something to us, and we have a ton of pydantic models with attributes starting with "model_".

Fortunately, there are no direct conflicts. However, in any pydantic model where we define an attribute or method that starts with "model_", we must set the protected namespaces to an empty tuple.

```py
class IPAdapterModelField(BaseModel):
    model_name: str = Field(description="Name of the IP-Adapter model")
    base_model: BaseModelType = Field(description="Base model")

    model_config = ConfigDict(protected_namespaces=())
```

**Model Serialization**

Pydantic models no longer have `Model.dict()` or `Model.json()`.

Instead, we use `Model.model_dump()` or `Model.model_dump_json()`.
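
A quick illustration of the rename; `Point` is just a stand-in model.

```py
from pydantic import BaseModel


class Point(BaseModel):
    x: int
    y: int


p = Point(x=1, y=2)
p.model_dump()       # {'x': 1, 'y': 2}   (was p.dict() in pydantic v1)
p.model_dump_json()  # '{"x":1,"y":2}'    (was p.json() in pydantic v1)
```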

**Model Deserialization**

Pydantic models no longer have `Model.parse_obj()` or `Model.parse_raw()`, and there are no `parse_raw_as()` or `parse_obj_as()` functions.

Instead, you need to create a `TypeAdapter` object to parse python objects or JSON into a model.

```py
adapter_graph = TypeAdapter(Graph)
deserialized_graph_from_json = adapter_graph.validate_json(graph_json)
deserialized_graph_from_dict = adapter_graph.validate_python(graph_dict)
```

**Field Customisation**

Pydantic `Field`s no longer accept arbitrary args.

Now, you must put all additional arbitrary args in a `json_schema_extra` arg on the field.
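
A small example of the change; `ui_hidden` is an arbitrary extra key invented for the example, not a real pydantic argument.

```py
from pydantic import BaseModel, Field


class ExampleModel(BaseModel):
    # pydantic v1 allowed: Field(description="...", ui_hidden=True)
    # pydantic v2 requires the extra key to go through json_schema_extra:
    value: int = Field(default=0, description="An example field", json_schema_extra={"ui_hidden": True})
```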

**Schema Customisation**

FastAPI and pydantic schema generation now follows the OpenAPI version 3.1 spec.

This necessitates two changes:
- Our schema customization logic has been revised
- Schema parsing to build node templates has been revised

The specifics aren't important, but this does present additional surface area for bugs.

**Performance Improvements**

Pydantic v2 is a full rewrite with a rust backend. This offers a substantial performance improvement (pydantic claims 5x to 50x depending on the task). We'll notice this the most during serialization and deserialization of sessions/graphs, which happens very very often - a couple times per node.

I haven't done any benchmarks, but anecdotally, graph execution is much faster. Also, very large graphs - like those with massive iterators - are much, much faster.
2023-10-17 14:59:25 +11:00
Lincoln Stein
0420874f56 reimplement the old invokeai-metadata command 2023-09-20 13:49:29 -04:00
Martin Kristiansen
caea6d11c6 isort wip 2 2023-09-12 13:01:58 -04:00
Sergey Borisov
5151798a16 Cleanup memory after model run 2023-09-01 20:50:39 +03:00
blessedcoolant
1a9f552a75 experimental: Add CV2 Infill 2023-09-02 04:48:18 +12:00
blessedcoolant
54cda8ea42 chore: Change LaMA log statement to use InvokeAI Logger 2023-09-01 09:17:41 +12:00
blessedcoolant
99475ab800 chore: pyflake lint fixes 2023-08-29 05:16:23 +12:00
blessedcoolant
fa3fcd7820 cleanup: Lama 2023-08-23 23:05:29 -04:00
blessedcoolant
5eb6148336 chore: black fix 2023-08-23 23:05:29 -04:00
blessedcoolant
49892faee4 experimental: LaMa Infill 2023-08-23 23:05:29 -04:00
Martin Kristiansen
537ae2f901 Resolving merge conflicts for flake8 2023-08-18 15:52:04 +10:00