Commit Graph

236 Commits

Author SHA1 Message Date
psychedelicious
884ec0b5df feat: replace isort, flake8 & black with ruff 2023-11-11 10:42:27 +11:00
Lincoln Stein
f1c195afb7 Merge branch 'main' into refactor/model-manager-2 2023-11-10 17:54:28 -05:00
Wubbbi
b9f607be56 Update to 4.35.X 2023-11-10 17:51:59 -05:00
Wubbbi
a0be83e370 Update Transformers to 4.34 and fix pad_to_multiple_of 2023-11-10 17:51:59 -05:00
Millun Atluri
e22a091d76 Update diffusers to ~=0.23 2023-11-10 11:50:50 +11:00
Lincoln Stein
927f8a66e6 Merge branch 'main' into refactor/model-manager-2 2023-11-08 16:46:08 -05:00
Millun Atluri
9733cd4199 Update xformers to 0.0.22post7 2023-11-06 17:17:03 -08:00
Millun Atluri
c68db6e40f Update xformers to ~0.0.22 2023-11-06 17:17:03 -08:00
Millun Atluri
014d6187ab Update pyproject.toml 2023-11-07 10:22:20 +11:00
Millun Atluri
9fb15fae87 Update pyproject.toml 2023-11-07 10:20:16 +11:00
Millun Atluri
a07336a020 Merge branch 'main' into patch-1 2023-11-07 10:17:46 +11:00
Millun Atluri
0718cc2392 Update xformers to 0.0.21 2023-11-07 10:16:44 +11:00
Lincoln Stein
ce22c0fbaa sync pydantic and sql field names; merge routes 2023-11-06 18:08:57 -05:00
Wilson E. Alvarez
76b3f8956b Fix ROCm support in Docker container 2023-11-06 13:47:08 -08:00
Kent Keirsey
546aaedbe4 Update pyproject.toml 2023-11-06 05:29:17 -08:00
Lincoln Stein
edeea5237b add sql-based model config store and api 2023-11-04 23:03:26 -04:00
Eugene Brodsky
224b09f8fd Enforce Unix line endings in container (#4990)
* (fix) enforce Unix (LF) line endings in docker/ directory

* (fix) update docker docs wrt line endings on Windows

* (fix) static check fixes
2023-10-30 12:34:30 -04:00
Eugene Brodsky
8e948d3f17 fix(assets): re-add missing caution image 2023-10-20 16:50:16 +11:00
psychedelicious
3d33b3e1f5 fix(nodes): explicitly include custom nodes files
setuptools ignores markdown files - explicitly include all files in `"invokeai.app.invocations"` to ensure all custom node files are included
2023-10-20 15:18:29 +11:00
psychedelicious
67a343b3e4 Update pyproject.toml 2023-10-18 11:28:26 +11:00
Lincoln Stein
d27392cc2d remove all references to CLI 2023-10-18 11:28:26 +11:00
Lincoln Stein
9542883bb5 update requirements to python 3.10-11 2023-10-17 19:30:31 +11:00
psychedelicious
a094f4ca2b fix: pin python-socketio~=5.10.0 2023-10-17 14:59:25 +11:00
psychedelicious
c238a7f18b feat(api): chore: pydantic & fastapi upgrade
Upgrade pydantic and fastapi to latest.

- pydantic~=2.4.2
- fastapi~=0.103.2
- fastapi-events~=0.9.1

**Big Changes**

There are a number of logic changes needed to support pydantic v2. Most changes are very simple, like using the new methods to serialize and deserialize models, but there are a few more complex changes.

**Invocations**

The biggest change relates to invocation creation, instantiation and validation.

Because pydantic v2 moves all validation logic into the rust pydantic-core, we may no longer directly stick our fingers into the validation pie.

Previously, we (ab)used models and fields to allow invocation fields to be optional at instantiation, but required when `invoke()` is called. We directly manipulated the fields and invocation models when calling `invoke()`.

With pydantic v2, this is much more involved. Changes to the python wrapper do not propagate down to the rust validation logic - you have to rebuild the model. This causes problems with concurrent access to the invocation classes and is not a free operation.

This logic has been totally refactored and we do not need to change the model any more. The details are in `baseinvocation.py`, in the `InputField` function and `BaseInvocation.invoke_internal()` method.

In the end, this implementation is cleaner.
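
For illustration only, a minimal sketch of the idea, using hypothetical names (`AddInvocation`, `required_at_invoke`) rather than the actual `baseinvocation.py` code:

```py
from typing import ClassVar, Optional
from pydantic import BaseModel


class AddInvocation(BaseModel):
    """Toy invocation: inputs may be omitted at instantiation time."""

    a: Optional[int] = None
    b: Optional[int] = None

    # names that must be populated by the time invoke() runs
    required_at_invoke: ClassVar[tuple[str, ...]] = ("a", "b")

    def invoke(self) -> int:
        return self.a + self.b

    def invoke_internal(self) -> int:
        # enforce "required at invoke time" without touching the model class
        missing = [n for n in self.required_at_invoke if getattr(self, n) is None]
        if missing:
            raise ValueError(f"Missing required inputs: {missing}")
        return self.invoke()


node = AddInvocation(a=1)       # valid - b is not needed yet
node.b = 2                      # filled in later, e.g. from an upstream node
print(node.invoke_internal())   # 3
```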

**Invocation Fields**

In pydantic v2, you can no longer directly add or remove fields from a model.

Previously, we did this to add the `type` field to invocations.

**Invocation Decorators**

With pydantic v2, we instead use the imperative `create_model()` API to create a new model with the additional field. This is done in `baseinvocation.py` in the `invocation()` wrapper.

A similar technique is used for `invocation_output()`.
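
For illustration, a minimal sketch of the technique with a throwaway model (the names here are hypothetical, not the actual invocation classes):

```py
from typing import Literal
from pydantic import BaseModel, create_model


class ExampleInvocation(BaseModel):
    width: int = 512


# roughly what a decorator can do: rebuild the class with an added
# literal "type" field, since fields can't be injected after the fact
ExampleInvocationWithType = create_model(
    "ExampleInvocation",
    __base__=ExampleInvocation,
    type=(Literal["example"], "example"),
)

print(ExampleInvocationWithType().type)  # "example"
```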

**Minor Changes**

There are a number of minor changes around the pydantic v2 models API.

**Protected `model_` Namespace**

All models' pydantic-provided methods and attributes are prefixed with `model_` and this is considered a protected namespace. This causes some conflict, because "model" means something to us, and we have a ton of pydantic models with attributes starting with "model_".

Fortunately, there are no direct conflicts. However, in any pydantic model where we define an attribute or method that starts with "model_", we must set the protected namespaces to an empty tuple.

```py
class IPAdapterModelField(BaseModel):
    model_name: str = Field(description="Name of the IP-Adapter model")
    base_model: BaseModelType = Field(description="Base model")

    model_config = ConfigDict(protected_namespaces=())
```

**Model Serialization**

Pydantic models no longer have `Model.dict()` or `Model.json()`.

Instead, we use `Model.model_dump()` or `Model.model_dump_json()`.
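
For reference, the v2 equivalents on a throwaway model:

```py
from pydantic import BaseModel


class Point(BaseModel):
    x: int = 0
    y: int = 0


p = Point(x=1, y=2)
p.model_dump()       # replaces p.dict()  -> {'x': 1, 'y': 2}
p.model_dump_json()  # replaces p.json()  -> '{"x":1,"y":2}'
```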

**Model Deserialization**

Pydantic models no longer have `Model.parse_obj()` or `Model.parse_raw()`, and there are no `parse_raw_as()` or `parse_obj_as()` functions.

Instead, you need to create a `TypeAdapter` object to parse python objects or JSON into a model.

```py
adapter_graph = TypeAdapter(Graph)
deserialized_graph_from_json = adapter_graph.validate_json(graph_json)
deserialized_graph_from_dict = adapter_graph.validate_python(graph_dict)
```

**Field Customisation**

Pydantic `Field`s no longer accept arbitrary args.

Now, you must put all additional arbitrary args in a `json_schema_extra` arg on the field.
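
For example (the field and key names here are hypothetical):

```py
from pydantic import BaseModel, Field


class ExampleField(BaseModel):
    # extra, non-pydantic metadata now goes in json_schema_extra
    image_name: str = Field(
        description="The name of the image",
        json_schema_extra={"ui_hidden": False},
    )
```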

**Schema Customisation**

FastAPI and pydantic schema generation now follows the OpenAPI version 3.1 spec.

This necessitates two changes:
- Our schema customization logic has been revised
- Schema parsing to build node templates has been revised

The specifics aren't important, but this does present additional surface area for bugs.

**Performance Improvements**

Pydantic v2 is a full rewrite with a rust backend. This offers a substantial performance improvement (pydantic claims 5x to 50x depending on the task). We'll notice this the most during serialization and deserialization of sessions/graphs, which happens very, very often - a couple of times per node.

I haven't done any benchmarks, but anecdotally, graph execution is much faster. Also, very large graphs - like those with massive iterators - are much, much faster.
2023-10-17 14:59:25 +11:00
Drun555
9db152bf75 xformers==0.0.20
I'm not sure if it's the correct way of handling things, but correcting this string to '==0.0.20' fixes the xformers install for me - and maybe for others too.

Please see this thread, this is the issue I had (trying to install InvokeAI):
https://github.com/facebookresearch/xformers/issues/740
2023-10-14 14:59:55 +04:00
Ryan Dick
89db8c83c2 Add a comment to warn about a necessary action before bumping the diffusers version. 2023-10-12 14:48:10 -04:00
Ryan Dick
fe889235cc Bump safetensors to ~=0.4.0 2023-10-10 18:00:15 -04:00
Ryan Dick
1c8b1fbc53 POC of a test that depends on models. 2023-10-05 15:35:58 -04:00
Lincoln Stein
4ce00a32f4 add font Inter-Regular.ttf to installed assets 2023-10-03 08:48:50 -04:00
Wubbbi
d16583ad1c Unpin Safetensors dependencies, safeguard against breaking changes 2023-09-23 10:23:05 -04:00
Wubbbi
b4790002c7 Add python-socketio dependency (mandatory) 2023-09-21 08:57:41 -04:00
Kent Keirsey
098d506b95 Update accelerate to .23 2023-09-20 20:20:06 -04:00
Kent Keirsey
7aa33c352b Update Diffusers to .21 2023-09-20 20:20:06 -04:00
Brandon
b915d74127 Remove fastapi-socketio dependency, doesn't really do much for us and… (#4552)
* Remove fastapi-socketio dependency, doesn't really do much for us and isn't well maintained

* Run python black

* Remove fastapi_socketio import

* Add __app as class variable in case we ever need it later

* Run isort

---------

Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2023-09-20 22:30:01 +00:00
Lincoln Stein
0960518088 add techjedi's database maintenance script 2023-09-20 17:46:49 -04:00
Lincoln Stein
0420874f56 reimplement the old invokeai-metadata command 2023-09-20 13:49:29 -04:00
Martin Kristiansen
627750eded Adding excludes to flake8 config 2023-09-18 15:10:04 +10:00
Martin Kristiansen
0450c28f14 Adding pre-commit to test dependencies 2023-09-12 13:01:58 -04:00
Martin Kristiansen
5615c31799 isort wip 2023-09-12 13:01:58 -04:00
Martin Kristiansen
4390a051ca isort wip 2023-09-12 13:01:58 -04:00
psychedelicious
d9148fb619 feat(nodes): add version to node schemas
The `@invocation` decorator is extended with an optional `version` arg. On execution of the decorator, the version string is parsed using the `semver` package (this was an indirect dependency and has been added to `pyproject.toml`).

All built-in nodes are set with `version="1.0.0"`.

The version is added to the OpenAPI Schema for consumption by the client.
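
A minimal sketch of the kind of parsing this enables, assuming the standalone `semver` package (3.x API shown; older releases expose the same data via `semver.VersionInfo`):

```py
import semver

# parse a node's version string and compare it against another version
v = semver.Version.parse("1.0.0")
print(v.major, v.minor, v.patch)           # 1 0 0
print(v < semver.Version.parse("1.1.0"))   # True
```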
2023-09-04 19:08:18 +10:00
blessedcoolant
68dc3c6cb4 feat: Upgrade compel to 2.0.2 2023-08-29 12:58:59 +12:00
Kevin Turner
88963dbe6e Merge remote-tracking branch 'origin/main' into feat/dev_reload
# Conflicts:
#	invokeai/app/api_app.py
#	invokeai/app/services/config.py
2023-08-21 09:04:31 -07:00
Kevin Turner
6df6abf6f6 Merge branch 'main' into dep/diffusers020 2023-08-18 11:02:52 -07:00
Martin Kristiansen
537ae2f901 Resolving merge conflicts for flake8 2023-08-18 15:52:04 +10:00
Kevin Turner
654dcd453f feat(dev_reload): use jurigged to hot reload changes to Python source 2023-08-17 19:02:44 -07:00
Kevin Turner
4267132926 dep(diffusers): upgrade diffusers to 0.20
Removed `is_safetensors_available` as safetensors is now a required dependency of diffusers.
2023-08-17 13:42:29 -07:00
psychedelicious
549d2e0485 chore: remove old web server code and python deps 2023-08-15 10:54:57 +10:00
blessedcoolant
cc85c98bf3 feat: Upgrade Diffusers to 0.19.3
Needed for some schedulers
2023-08-14 09:26:28 +12:00
Lincoln Stein
1bfe9835cf clip cache settings to permissible values; remove redundant imports in install __init__ file 2023-08-10 18:00:45 -04:00
Lincoln Stein
930e7bc754 Merge branch 'main' into feat/image-import-script 2023-08-09 08:54:56 -04:00
Millun Atluri
c82da330db Pin safetensors to 0.3.1
Safetensors 0.3.2 does not ship an ARM64 wheel so install on macOS fails
2023-08-09 00:29:43 -04:00
Kevin Turner
44bf308192 test(model_management): add a couple tests for _get_model_path 2023-08-05 15:22:23 -07:00
Lincoln Stein
83f75750a9 add techjedi's import script, with some filecompletion tweaks 2023-08-05 12:19:24 -04:00
Brandon Rising
e86925d424 Add onnxruntime to the main dependencies 2023-08-01 00:03:10 -04:00
Brandon Rising
f5ac73b091 Merge branch 'main' into feat/onnx 2023-07-31 10:58:40 -04:00
Lincoln Stein
60f5606c2d downgrade torchmetrics to fix model import problem 2023-07-29 13:28:29 -04:00
Lincoln Stein
71768f5988 restore unpinned versions of pydantic and numpy 2023-07-29 13:04:34 -04:00
Lincoln Stein
d633eb1612 remove pydantic and numpy from pyproject.toml 2023-07-28 21:56:22 -04:00
Brandon Rising
d3f6c7f983 Remove onnxruntime 2023-07-28 16:58:06 -04:00
Brandon Rising
da751da3dd Merge branch 'main' into feat/onnx 2023-07-28 09:59:35 -04:00
Brandon Rising
2b7b3dd4ba Run python black 2023-07-28 09:46:44 -04:00
Brandon Rising
dc1148106d Just install onnxruntime by default 2023-07-28 09:32:43 -04:00
Lincoln Stein
17ee17a789 merge with main;resolve conflicts 2023-07-27 15:29:34 -04:00
Lincoln Stein
0d8f9cbe55 resolved conflicts with main 2023-07-27 15:11:25 -04:00
Brandon Rising
918a0dedc0 Always install onnx 2023-07-27 11:00:40 -04:00
Brandon Rising
57271ad125 Move onnx to optional dependencies 2023-07-27 10:28:26 -04:00
Martin Kristiansen
fc9dacd082 Black/flake8 line length 100->120 2023-07-27 10:12:25 -04:00
Martin Kristiansen
8b4af69d87 Black config, pre-commit and GHA 2023-07-27 10:09:04 -04:00
blessedcoolant
3ff8c87c09 feat: Upgrade Diffusers to 0.19.0 2023-07-27 08:00:12 +12:00
Brandon Rising
c16da75ac7 Merge branch 'main' into feat/onnx 2023-07-26 10:42:31 -04:00
Lincoln Stein
fc4e104c61 tested on 3.11 and 3.10 2023-07-24 17:13:32 -04:00
Lincoln Stein
f9320475fd allow upgrade to transformers~=4.31.0 2023-07-19 09:46:21 -04:00
Brandon Rising
ee7b36cea5 Merge branch 'main' into onnx-testing 2023-07-18 22:56:41 -04:00
Lincoln Stein
700131fab2 Pin to transformers 4.30.2
bump version
2023-07-18 21:43:40 -04:00
Lincoln Stein
ec08151009 add correct requirements for installing SDXL models 2023-07-18 18:15:37 -04:00
Lincoln Stein
43fbbfb848 revert python version requirement 2023-07-18 16:15:47 -04:00
Lincoln Stein
efcb3a9d08 documentation fixes 2023-07-18 12:45:47 -04:00
Brandon Rising
35d5ef9118 Emit step completions 2023-07-18 12:35:07 -04:00
blessedcoolant
112937f1f8 reqs: Update compel to 2.0.0 2023-07-18 22:44:36 +12:00
Lincoln Stein
be0603b64c update to new compel 2.0.0rc2 2023-07-15 19:59:29 -04:00
blessedcoolant
6c17607a2b feat: Upgrade Diffusers to 0.18.1 2023-07-09 03:54:20 +12:00
Lincoln Stein
6259142078 Merge branch 'main' into patch-1 2023-07-07 17:16:37 -04:00
Lincoln Stein
8d88ad3b8d restore ability to launch web server with invokeai --web 2023-07-07 10:07:15 -04:00
Zadagu
94faa5de14 fixes ImportError described in #3658.
The issue was introduced by a new release of torchmetrics.
2023-07-06 16:16:02 +02:00
Lincoln Stein
176504a475 add .js, .woff2 and .css files to web/dist/assets 2023-07-02 21:50:29 -04:00
Lincoln Stein
72209d0cc3 Merge branch 'main' into lstein/installer-for-new-model-layout 2023-06-28 14:49:37 -04:00
Kent Keirsey
fc322aa9f7 Update controlnet-aux to 0.0.6 and add LeReS 2023-06-27 23:45:47 -04:00
user1
45aa338a98 Changed pyproject.toml to require controlnet_aux >= 0.0.5 (to enable use of SAM ControlNet preprocessor) 2023-06-25 14:22:34 -07:00
Lincoln Stein
33b04f6386 migration script working well 2023-06-22 15:47:12 -04:00
blessedcoolant
0ae6325353 chore: Add torchsde as a dependency for the SDE schedulers 2023-06-19 23:20:53 +12:00
blessedcoolant
9d4b84ef68 feat: Upgrade to Diffusers 0.17.1 2023-06-16 23:57:57 +12:00
Lincoln Stein
82c2498043 Merge branch 'main' into lstein/new-model-manager 2023-06-14 08:41:40 -07:00
psychedelicious
0497bea264 fix: add dynamicprompts to pyproject.toml 2023-06-15 01:05:16 +10:00
StAlKeR7779
c9ae26a176 Merge branch 'main' into lstein/new-model-manager 2023-06-13 23:37:52 +03:00
Lincoln Stein
c7ea46a5da use latest version of transformers to avoid deprecation warnings 2023-06-12 16:07:39 -04:00
blessedcoolant
2a814d886b Merge branch 'main' into diffusers-upgrade 2023-06-13 05:29:15 +12:00
Gregg Helt
c647056287 Feat/easy param (#3504)
* Testing change to TextToLatents to allow setting different cfg_scale values per diffusion step.

* Adding first attempt at float param easing node, using Penner easing functions.

* Core implementation of ControlNet and MultiControlNet.

* Added support for ControlNet and MultiControlNet to legacy non-nodal Txt2Img in backend/generator. Although backend/generator will likely disappear by v3.x, right now they are very useful for testing core ControlNet and MultiControlNet functionality while the node codebase is rapidly evolving.

* Added example of using ControlNet with legacy Txt2Img generator

* Resolving rebase conflict

* Added first controlnet preprocessor node for canny edge detection.

* Initial port of controlnet node support from generator-based TextToImageInvocation node to latent-based TextToLatentsInvocation node

* Switching to ControlField for output from controlnet nodes.

* Resolving conflicts in rebase to origin/main

* Refactored ControlNet nodes so they subclass from PreprocessedControlInvocation, and only need to override run_processor(image) (instead of reimplementing invoke())

* changes to base class for controlnet nodes

* Added HED, LineArt, and OpenPose ControlNet nodes

* Added an additional "raw_processed_image" output port to controlnets, mainly so one could route ImageField to a ShowImage node

* Added more preprocessor nodes for:
      MidasDepth
      ZoeDepth
      MLSD
      NormalBae
      Pidi
      LineartAnime
      ContentShuffle
Removed pil_output options, ControlNet preprocessors should always output as PIL. Removed diagnostics and other general cleanup.

* Prep for splitting pre-processor and controlnet nodes

* Refactored controlnet nodes: split out controlnet stuff into separate node, stripped controlnet stuff from image processing/analysis nodes.

* Added resizing of controlnet image based on noise latent. Fixes a tensor mismatch issue.

* More rebase repair.

* Added support for using multiple control nets. Unfortunately this breaks direct usage of Control node output port  ==> TextToLatent control input port -- passing through a Collect node is now required. Working on fixing this...

* Fixed use of ControlNet control_weight parameter

* Fixed lint-ish formatting error

* Core implementation of ControlNet and MultiControlNet.

* Added first controlnet preprocessor node for canny edge detection.

* Initial port of controlnet node support from generator-based TextToImageInvocation node to latent-based TextToLatentsInvocation node

* Switching to ControlField for output from controlnet nodes.

* Refactored controlnet node to output ControlField that bundles control info.

* changes to base class for controlnet nodes

* Added more preprocessor nodes for:
      MidasDepth
      ZoeDepth
      MLSD
      NormalBae
      Pidi
      LineartAnime
      ContentShuffle
Removed pil_output options, ControlNet preprocessors should always output as PIL. Removed diagnostics and other general cleanup.

* Prep for splitting pre-processor and controlnet nodes

* Refactored controlnet nodes: split out controlnet stuff into separate node, stripped controlnet stuff from image processing/analysis nodes.

* Added resizing of controlnet image based on noise latent. Fixes a tensor mismatch issue.

* Cleaning up TextToLatent arg testing

* Cleaning up mistakes after rebase.

* Removed last bits of dtype and device hardwiring from controlnet section

* Refactored ControlNet support to consolidate multiple parameters into a data struct. Also redid how multiple controlnets are handled.

* Added support for specifying which step iteration to start using
each ControlNet, and which step to end using each ControlNet (specified as a fraction of total steps)

* Cleaning up prior to submitting ControlNet PR. Mostly turning off diagnostic printing. Also fixed error when there is no controlnet input.

* Added dependency on controlnet-aux v0.0.3

* Commented out ZoeDetector. Will re-instate once there's a controlnet-aux release that supports it.

* Switched ControlNet node modelname input from free text to a default list of popular ControlNet model names.

* Fix to work with the current stable release of controlnet_aux (v0.0.3). Turned off pre-processor params that were added post v0.0.3. Also changed defaults for shuffle.

* Refactored most of controlnet code into its own method to declutter TextToLatents.invoke(), and make upcoming integration with LatentsToLatents easier.

* Cleaning up after ControlNet refactor in TextToLatentsInvocation

* Extended node-based ControlNet support to LatentsToLatentsInvocation.

* chore(ui): regen api client

* fix(ui): add value to conditioning field

* fix(ui): add control field type

* fix(ui): fix node ui type hints

* fix(nodes): controlnet input accepts list or single controlnet

* Moved to controlnet_aux v0.0.4, reinstated Zoe controlnet preprocessor. Also, in pyproject.toml, had to specify a downgrade of timm to 0.6.13 _after_ controlnet-aux installs timm >= 0.9.2, because timm >0.6.13 breaks the Zoe preprocessor.

* Core implementation of ControlNet and MultiControlNet.

* Added first controlnet preprocessor node for canny edge detection.

* Switching to ControlField for output from controlnet nodes.

* Resolving conflicts in rebase to origin/main

* Refactored ControlNet nodes so they subclass from PreprocessedControlInvocation, and only need to override run_processor(image) (instead of reimplementing invoke())

* changes to base class for controlnet nodes

* Added HED, LineArt, and OpenPose ControlNet nodes

* Added more preprocessor nodes for:
      MidasDepth
      ZoeDepth
      MLSD
      NormalBae
      Pidi
      LineartAnime
      ContentShuffle
Removed pil_output options, ControlNet preprocessors should always output as PIL. Removed diagnostics and other general cleanup.

* Prep for splitting pre-processor and controlnet nodes

* Refactored controlnet nodes: split out controlnet stuff into separate node, stripped controlnet stuff from image processing/analysis nodes.

* Added resizing of controlnet image based on noise latent. Fixes a tensor mismatch issue.

* Added support for using multiple control nets. Unfortunately this breaks direct usage of Control node output port  ==> TextToLatent control input port -- passing through a Collect node is now required. Working on fixing this...

* Fixed use of ControlNet control_weight parameter

* Core implementation of ControlNet and MultiControlNet.

* Added first controlnet preprocessor node for canny edge detection.

* Initial port of controlnet node support from generator-based TextToImageInvocation node to latent-based TextToLatentsInvocation node

* Switching to ControlField for output from controlnet nodes.

* Refactored controlnet node to output ControlField that bundles control info.

* changes to base class for controlnet nodes

* Added more preprocessor nodes for:
      MidasDepth
      ZoeDepth
      MLSD
      NormalBae
      Pidi
      LineartAnime
      ContentShuffle
Removed pil_output options, ControlNet preprocessors should always output as PIL. Removed diagnostics and other general cleanup.

* Prep for splitting pre-processor and controlnet nodes

* Refactored controlnet nodes: split out controlnet stuff into separate node, stripped controlnet stuff from image processing/analysis nodes.

* Added resizing of controlnet image based on noise latent. Fixes a tensor mismatch issue.

* Cleaning up TextToLatent arg testing

* Cleaning up mistakes after rebase.

* Removed last bits of dtype and device hardwiring from controlnet section

* Refactored ControlNet support to consolidate multiple parameters into a data struct. Also redid how multiple controlnets are handled.

* Added support for specifying which step iteration to start using
each ControlNet, and which step to end using each ControlNet (specified as a fraction of total steps)

* Cleaning up prior to submitting ControlNet PR. Mostly turning off diagnostic printing. Also fixed error when there is no controlnet input.

* Commented out ZoeDetector. Will re-instate once there's a controlnet-aux release that supports it.

* Switched ControlNet node modelname input from free text to a default list of popular ControlNet model names.

* Fix to work with the current stable release of controlnet_aux (v0.0.3). Turned off pre-processor params that were added post v0.0.3. Also changed defaults for shuffle.

* Refactored most of controlnet code into its own method to declutter TextToLatents.invoke(), and make upcoming integration with LatentsToLatents easier.

* Cleaning up after ControlNet refactor in TextToLatentsInvocation

* Extended node-based ControlNet support to LatentsToLatentsInvocation.

* chore(ui): regen api client

* fix(ui): fix node ui type hints

* fix(nodes): controlnet input accepts list or single controlnet

* Added Mediapipe image processor for use as ControlNet preprocessor.
Also hacked in ability to specify HF subfolder when loading ControlNet models from string.

* Fixed bug where MediapipeFaceProcessorInvocation was ignoring max_faces and min_confidence params.

* Added nodes for float params: ParamFloatInvocation and FloatCollectionOutput. Also added FloatOutput.

* Added mediapipe install requirement. Should be able to remove once controlnet_aux package adds mediapipe to its requirements.

* Added float to FIELD_TYPE_MAP in constants.ts

* Progress toward improvement in fieldTemplateBuilder.ts  getFieldType()

* Fixed controlnet preprocessors and controlnet handling in TextToLatents to work with revised Image services.

* Cleaning up from merge, re-adding cfg_scale to FIELD_TYPE_MAP

* Making sure cfg_scale of type list[float] can be used in image metadata, to support param easing for cfg_scale

* Fixed math for per-step param easing.

* Added option to show plot of param value at each step

* Just cleaning up after adding param easing plot option, removing vestigial code.

* Modified control_weight ControlNet param to be polymorphic --
can now be either a single float weight applied for all steps, or a list of floats of size total_steps that specifies the weight for each step.

* Added more informative error message when _validate_edge() throws an error.

* Just improving param easing bar chart title to include easing type.

* Added requirement for easing-functions package

* Taking out some diagnostic prints.

* Added option to use both easing function and mirror of easing function together.

* Fixed recently introduced problem (when main was pulled in), triggered by num_steps in StepParamEasingInvocation not having a default value -- just added a default.

---------

Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2023-06-11 16:27:44 +10:00
blessedcoolant
7bce455d16 Merge branch 'main' into diffusers-upgrade 2023-06-09 16:27:52 +12:00
Lincoln Stein
6652f3405b merge with main 2023-06-08 21:08:43 -04:00