Commit Graph

7541 Commits

Author SHA1 Message Date
Jonathan
b9ebce9bdd
Merge branch 'main' into feat/nodes/invocation-cache 2023-09-18 19:54:14 -05:00
Jonathan
56254b74e2
Prevent a duplicate image from appearing in the gallery
Set final l2i to intermediate so the save image node is the only one outputting a final image.
2023-09-18 12:30:51 -05:00
psychedelicious
1869874433 chore(ui): lint 2023-09-18 16:01:20 +10:00
psychedelicious
94f16b1c69 feat(ui): provide feedback when recalling invalid lora 2023-09-18 16:01:20 +10:00
psychedelicious
cc0482ae8b feat(ui): simplify lora recall check 2023-09-18 16:01:20 +10:00
Mary Hipp
fdf9833c39 add toast 2023-09-18 16:01:20 +10:00
Mary Hipp
5a961bb58e first pass to recall LoRAs 2023-09-18 16:01:20 +10:00
Martin Kristiansen
627750eded Adding excludes to flake8 config 2023-09-18 15:10:04 +10:00
psychedelicious
593604bbba feat(nodes): add invocation cache
The invocation cache provides simple node memoization functionality. Nodes that use the cache are memoized and not re-executed if their inputs haven't changed. Instead, the stored output is returned.

## Results

This feature provides anywhere from a significant to a massive performance improvement.

The improvement is most marked on large batches of generations where only a couple of things change between iterations (e.g. a different seed or prompt), and on low-VRAM systems, where skipping an extraneous model load is a big deal.

## Overview

A new `invocation_cache` service is added to handle the caching. There's not much to it.

All nodes now inherit a boolean `use_cache` field from `BaseInvocation`. This is a node field and not a class attribute, because specific instances of nodes may want to opt in or out of caching.
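
A minimal sketch of the field, assuming a plain pydantic base class (the real `BaseInvocation` has more fields and plumbing):

```python
from pydantic import BaseModel, Field

class BaseInvocation(BaseModel):
    """Illustrative stand-in for the real base class."""

    # Unique per-node-instance ID; excluded when building cache keys (see below).
    id: str = Field(description="The unique ID of this node instance")
    # Per-instance opt-in/opt-out of the invocation cache.
    use_cache: bool = Field(default=True, description="Whether to use the invocation cache for this node")
```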

The recently-added `invoke_internal()` method on `BaseInvocation` is used as an entrypoint for the cache logic.

To create a cache key, the invocation is first serialized using pydantic's provided `json()` method, skipping the unique `id` field. Then python's very fast builtin `hash()` is used to create an integer key. All implementations of `InvocationCacheBase` must provide a class method `create_key()` which accepts an invocation and outputs a string or integer key.
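
As a standalone sketch of that key derivation (the real logic lives on the cache implementation's `create_key()`):

```python
def create_key(invocation) -> int:
    # Serialize the node with pydantic's json(), skipping the unique `id`
    # field, then hash the JSON string with python's builtin hash().
    # String hashes are salted per process, so these keys are only stable for
    # the lifetime of the running app - fine for an in-memory cache.
    return hash(invocation.json(exclude={"id"}))
```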

## In-Memory Implementation

An in-memory implementation is provided. In this implementation, the node outputs are stored in memory as python objects. The in-memory cache does not persist across application restarts.

Max node cache size is added as `node_cache_size` under the `Generation` config category.

It defaults to 512 - this number is up for discussion, but given that these are relatively lightweight pydantic models, I think it's safe to up this even higher.

Note that the cache isn't storing the big stuff - tensors and images are stored on disk, and outputs include only references to them.
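
A rough sketch of the shape of the in-memory implementation - the method names and the evict-the-oldest policy are assumptions on my part; only the 512-entry cap comes from the config above:

```python
from collections import OrderedDict

class MemoryInvocationCache:
    def __init__(self, max_cache_size: int = 512) -> None:
        self._max_cache_size = max_cache_size
        self._cache: OrderedDict = OrderedDict()

    @classmethod
    def create_key(cls, invocation) -> int:
        return hash(invocation.json(exclude={"id"}))

    def get(self, key: int):
        # Returns None on a miss.
        return self._cache.get(key)

    def save(self, key: int, output) -> None:
        if self._max_cache_size == 0 or key in self._cache:
            return
        self._cache[key] = output
        if len(self._cache) > self._max_cache_size:
            # Evict the oldest entry once the cap is exceeded.
            self._cache.popitem(last=False)
```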

## Node Definition

The default for all nodes is to use the cache. The `@invocation` decorator now accepts an optional `use_cache: bool` argument to override the default of `True`.

Non-deterministic nodes, however, should set this to `False`. Currently, all random-stuff nodes, including `dynamic_prompt`, are set to `False`.
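
Opting out looks something like this (a usage sketch - the import path, version string, and example node are assumptions; only the `use_cache` argument is new here):

```python
from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation  # path assumed

@invocation("my_random_thing", version="1.0.0", use_cache=False)  # hypothetical node
class MyRandomThingInvocation(BaseInvocation):
    ...
```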

The field name `use_cache` is now effectively a reserved field name and possibly a breaking change if any community nodes use this as a field name. In hindsight, all our reserved field names should have been prefixed with underscores or something.

## One Gotcha

Leaf nodes probably want to opt out of the cache, because if their outputs come from the cache, they are not saved again.

If you run the same graph multiple times, you only end up with a single image output, because the image storage side-effects are in the `invoke()` method, which is bypassed if we have a cache hit.
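
Roughly, the flow looks like this (a sketch with assumed service wiring, not the literal code):

```python
def invoke_internal(self, context):
    # Nodes that opted out always execute.
    if not self.use_cache:
        return self.invoke(context)

    cache = context.services.invocation_cache  # assumed service name
    key = cache.create_key(self)
    cached_output = cache.get(key)
    if cached_output is not None:
        # Cache hit: invoke() - and any image-saving side effects inside it -
        # is bypassed entirely.
        return cached_output

    output = self.invoke(context)
    cache.save(key, output)
    return output
```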

## Linear UI

The linear graphs _almost_ just work, but due to the gotcha, we need to be careful about the final image-outputting node. To resolve this, a `SaveImageInvocation` node is added and used in the linear graphs.

This node is similar to `ImagePrimitive`, except it saves a copy of its input image, and has `use_cache` set to `False` by default.

This is now the leaf node in all linear graphs, and is the only node in those graphs with `use_cache == False` _and_ the only node with `is_intermediate == False`.

## Workflow Editor

All nodes now have a footer with a new `Use Cache [ ]` checkbox. It defaults to the value set by the invocation in its python definition, but can be changed by the user.

The workflow/node validation logic has been updated to migrate old workflows to use the new default values for `use_cache`. Users may still want to review the settings that have been chosen. In the event of catastrophic failure when running this migration, the default value of `True` is applied, as this is correct for most nodes.

Users should consider saving their workflows after loading them in and having them updated.

## Future Enhancements - Callback

A future enhancement would be to provide a callback to the `use_cache` flag that would be run as the node is executed to determine, based on its own internal state, if the cache should be used or not.

This would be useful for `DynamicPromptInvocation`, where the deterministic behaviour is determined by the `combinatorial: bool` field.
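
Purely hypothetically, such a callback could look something like this:

```python
def should_cache_dynamic_prompt(node) -> bool:
    # Combinatorial expansion is deterministic; random sampling is not.
    return node.combinatorial
```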

## Future Enhancements - Persisted Cache

Similar to how the latents storage is backed by disk, the invocation cache could be persisted to the database or disk. We'd need to be very careful about deserializing outputs, but it's perhaps worth exploring in the future.
2023-09-18 13:41:19 +10:00
blessedcoolant
d94d4ef83f
Missed Translations (#4529)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No


## Description
A few Missed Translations From the Translation Update

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-09-16 06:54:29 +12:00
blessedcoolant
682d6998bc
Merge branch 'main' into moretranslation 2023-09-16 06:52:24 +12:00
blessedcoolant
dc9074f65d
Unmasked default (#4553)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ X ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ X ] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [ X ] No


## Description
Mask Edge was set as the default and was producing poor results. I've updated
the default back to Unmasked.
2023-09-16 06:48:00 +12:00
Kent Keirsey
b75c56768d Unmasked default 2023-09-15 13:52:11 -04:00
Sergey Borisov
ff3150a818 Update lora hotfix to new diffusers version(scale argument added) 2023-09-15 12:19:01 -04:00
mickr777
273271f091 Merge branch 'moretranslation' of https://github.com/mickr777/InvokeAI into moretranslation 2023-09-15 14:14:04 +10:00
mickr777
54dc912c83 Revert some test Changes 2023-09-15 14:13:54 +10:00
mickr777
571f50adf7
Merge branch 'main' into moretranslation 2023-09-15 14:06:26 +10:00
mickr777
368bd6f778 Prettier Fixes 2023-09-15 14:04:28 +10:00
mickr777
7481251127 More Translations and Fixes 2023-09-15 13:58:48 +10:00
psychedelicious
604fc006b1 fix(ui): construct openapi url from window.location.origin 2023-09-14 23:06:39 -04:00
psychedelicious
5a42774fbe Update FEATURE_REQUEST.yml
Added some verbiage about making feature requests singular and focused.

Updated the placeholder to something more Invoke-y.
2023-09-14 22:19:03 -04:00
psychedelicious
704e016f05 feat(ui): disable immutable redux check
The immutable and serializable checks for redux can cause substantial performance issues. The immutable check in particular is pretty heavy. It's only run in dev mode, but it can really slow down the already-slower performance of dev mode.

The most important one for us is serializable, which has far less of a performance impact.

The immutable check is largely redundant because we use immer-backed RTK for everything and immer gives us confidence there.

Disable the immutable check, leaving serializable in.
2023-09-14 22:02:29 -04:00
mickr777
a1ef079d1f
Merge branch 'main' into moretranslation 2023-09-15 11:34:48 +10:00
psychedelicious
34a09cb4ca fix(ui): fix send to canvas crash
A few weeks back, we changed how the canvas scales in response to changes in window/panel size.

This introduced a bug: if the user hadn't already clicked the canvas tab once to initialize the stage elements, the stage's dimensions were zero, the calculation of the stage's scale ended up zero, and then something was divided by that zero and Konva died.

This is only a problem on Chromium browsers - somehow Firefox handles it gracefully.

Now, when calculating the stage scale, never return a 0 - if it's a zero, return 1 instead. This is enough to fix the crash, but the image ends up centered on the top-left corner of the stage (the origin of the canvas).

Because the canvas elements are not initialized at this point (we haven't switched tabs yet), the stage dimensions fall back to (0,0). This means the center of the stage is also (0,0) - so the image is centered on (0,0), the top-left corner of the stage.

To fix this, we need to ensure we:
- Change to the canvas tab before actually setting the image, so the stage elements are able to initialize
- Use `flushSync` to flush DOM updates for this tab change so we actually have DOM elements to work with
- Update the stage dimensions once on first load of it (so in the effect that sets up the resize observer, we update the stage dimensions)

The result now is the expected behaviour - images sent to canvas do not crash and end up in the center of the canvas.
2023-09-15 11:05:53 +10:00
Jonathan
0f93991087
Remove multiple of 8 requirement for ImageResizeInvocation (#4538)
Testing required the width and height to be multiples of 8. This is no longer needed.
2023-09-14 08:56:17 -04:00
mickr777
ad5f61e3b5
Merge branch 'main' into moretranslation 2023-09-14 13:36:37 +10:00
psychedelicious
f6738d647e fix(ui): store customStarUI outside redux
JSX is not serializable, so it cannot be in redux. Non-serializable global state may be put into `nanostores`.

- Use `nanostores` for `customStarUI`
- Use `nanostores` for `headerComponent`
- Re-enable the serializable & immutable check redux middlewares
2023-09-14 12:13:03 +10:00
Ryan
2f5e923008 Removed duplicate import in model_cache.py 2023-09-13 19:33:43 -04:00
Ryan
b7296000e4 made MPS calls conditional on MPS actually being the chosen device with backend available 2023-09-13 19:33:43 -04:00
Ryan
fab055995e Add empty_cache() for MPS hardware. 2023-09-13 19:33:43 -04:00
Mary Hipp Rogers
d989c7fa34
add option for custom star ui (#4530)
Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2023-09-13 20:48:10 +00:00
mickr777
3920d5c90d Missed Translations 2023-09-13 21:15:36 +10:00
skunkworxdark
0f0366f1f3
Update collections.py (#4513)
* Update collections.py

RangeOfSizeInvocation was not taking step into account when generating the end point of the range

* - updated the node description to reflect this change (see the sketch after this list)
- added a gt=0 constraint to ensure the size of the range is positive
- moved the + 1 to be on the size, to ensure the range is the requested size in cases where the step is negative
- formatted with Black

* Removed +1 from the range calculation
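
For reference, a minimal sketch of the corrected calculation (field names assumed): the end point now accounts for the step, so the collection always has exactly `size` elements, even when the step is negative.

```python
def range_of_size(start: int, size: int, step: int) -> list[int]:
    return list(range(start, start + size * step, step))

assert range_of_size(0, 5, 2) == [0, 2, 4, 6, 8]
assert range_of_size(10, 3, -2) == [10, 8, 6]
```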

---------

Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2023-09-13 18:26:41 +10:00
skunkworxdark
4e05dcfe2e
Prompts from file support nodes (#3964)
* New classes to support the PromptsFromFileInvocation Class
- PromptPosNegOutput
- PromptSplitNegInvocation
- PromptJoinInvocation
- PromptReplaceInvocation

* - Added PromptsToFileInvocation
- PromptSplitNegInvocation
  - now counts the bracket depth to ensure the number of open and close brackets match (see the sketch after this list)
  - checks for escaped [ ] and ignores them if escaped, e.g. \[
- PromptReplaceInvocation - now has a use-regex option, and non-regex matching is made case-insensitive

* Update prompt.py

created class PromptsToFileInvocationOutput and use it in PromptsToFileInvocation instead of BaseInvocationOutput

* Update prompt.py

* Added schema_extra title and tags  for PromptReplaceInvocation, PromptJoinInvocation,  PromptSplitNegInvocation and PromptsToFileInvocation

* Added PTFields Collect and Expand

* update to nodes v1

* added ui_type to file_path for PromptToFile

* update params for the primitive types used, remove the ui_type filepath, promptsToFile now only accepts collections until a fix is available

* updated the parameters for the StringOutput primitive

* moved the prompt tools nodes out of the prompt.py into prompt_tools.py

* more rework for v1

* added github link

* updated to use "@invocation"

* updated tags

* Added new nodes PromptStrength and PromptStrengthsCombine

* chore: black

* feat(nodes): add version to prompt nodes

* renamed the prompt-related nodes to string-related names and moved them into a strings.py file.  Also moved and renamed the PromptsFromFileInvocation from prompt.py to strings.py.  The PTFields nodes still remain in prompt_tools.py for now.

* added , version="1.0.0" to the invocations

* removed the PTFields-related nodes and the prompt_tools.py file; all new nodes now live in strings.py

* formatted prompt.py and strings.py with Black and fixed silly mistake in the new StringSplitInvocation

* - Revert prompt.py back to original
- Update strings.py to be only StringJoin, StringJoinThree, StringReplace, StringSplitNeg, StringSplit

* applied isort to imports

* fix(nodes): typos in `strings.py`
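
For the bracket-depth splitting mentioned above, a minimal illustrative sketch (not the node's actual implementation): text inside unescaped [ ] goes to the negative prompt, nesting is tracked by depth, and escaped \[ \] are kept as literal text.

```python
def split_neg(prompt: str) -> tuple[str, str]:
    positive: list[str] = []
    negative: list[str] = []
    depth = 0
    i = 0
    while i < len(prompt):
        char = prompt[i]
        if char == "\\" and i + 1 < len(prompt) and prompt[i + 1] in "[]":
            # Escaped bracket: keep backslash + bracket as literal text.
            (negative if depth > 0 else positive).append(prompt[i : i + 2])
            i += 2
            continue
        if char == "[":
            depth += 1
            if depth > 1:
                negative.append(char)  # nested bracket stays in the text
        elif char == "]":
            if depth > 1:
                negative.append(char)
            depth = max(depth - 1, 0)
        else:
            (negative if depth > 0 else positive).append(char)
        i += 1
    return "".join(positive).strip(), "".join(negative).strip()
```

For example, `split_neg("a cat [blurry] on a mat")` returns the positive text with the bracketed part moved to the negative prompt.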

---------

Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Co-authored-by: Millun Atluri <Millu@users.noreply.github.com>
2023-09-13 08:06:38 +00:00
mickr777
8c63173b0c
Translation update (#4503)
* Update Translations

* Fix Prettier Issue

* Fix Error in invokebutton.tsx

* More Translations

* few Fixes

* More Translations

* More Translations and lint Fixes

* Update constants.ts

Revert "Update constants.ts"

---------

Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2023-09-13 17:31:34 +10:00
psychedelicious
30792cb259 chore: flake8 2023-09-13 16:50:25 +10:00
psychedelicious
a88f16b81c chore: isort 2023-09-13 16:50:25 +10:00
psychedelicious
fb188ce63e feat(nodes): update float_math and integer_math to use new ui_choice_labels 2023-09-13 16:50:25 +10:00
psychedelicious
57ebf735e6 feat(nodes): add InputField.ui_choice_labels: dict[str, str]
This maps values to labels for multiple-choice fields.

This allows "enum" fields (i.e. `Literal["val1", "val2", ...]` fields) to use code-friendly string values for choices, but present this to the UI as human-friendly labels.
2023-09-13 16:50:25 +10:00
psychedelicious
ec0f6e7248 chore: black 2023-09-13 16:50:25 +10:00
dunkeroni
93c55ebcf2 fixed validator when operation is first input 2023-09-13 16:50:25 +10:00
dunkeroni
41f2eaa4de updated name references for Float To Integer 2023-09-13 16:50:25 +10:00
dunkeroni
244201b45d Cleanup documentation 2023-09-13 16:50:25 +10:00
dunkeroni
486b8506aa Combined nodes to Float and Int general maths 2023-09-13 16:50:25 +10:00
dunkeroni
79ca181276 documentation update 2023-09-13 16:50:25 +10:00
dunkeroni
dbde08f3d4 Updated default value on round to multiple 2023-09-13 16:50:25 +10:00
dunkeroni
e542608534 changed float_to_int to generalized round_multiple node 2023-09-13 16:50:25 +10:00
dunkeroni
99ee47b79b Added square root function 2023-09-13 16:50:25 +10:00
dunkeroni
005087a652 Added float math 2023-09-13 16:50:25 +10:00
Millun Atluri
e9f5814c6d Update invokeai version to 3.1.1 2023-09-12 23:07:20 -04:00