Commit Graph

7460 Commits

Author SHA1 Message Date
psychedelicious
d0a7832326 fix(tests): clarify test_deny_nodes xfail.reason 2023-09-08 13:24:37 -04:00
psychedelicious
75bc43b2a5 fix(tests): mark test_deny_nodes as xfail :( 2023-09-08 13:24:37 -04:00
psychedelicious
4395ee3c03 feat: parse config before importing anything else
We need to parse the config before doing anything related to invocations to ensure that the invocations union picks up on denied nodes.

- Move that to the top of api_app and cli_app
- Wrap subsequent imports in `if True:` as a hack to satisfy flake8 without having to `noqa` every line or the whole file
- Add tests to ensure graph validation fails when using a denied node, and that the invocations union does not have denied nodes (this indirectly provides confidence that the generated OpenAPI schema will not include denied nodes)
2023-09-08 13:24:37 -04:00
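A minimal sketch of the parse-before-import ordering described in this commit; only the `if True:` trick and the ordering come from the commit itself, and the names below are illustrative stand-ins rather than the real InvokeAI modules:

```python
# Hedged sketch: parse the app config before importing anything that registers
# invocations, so the invocation union is built without the denied nodes.
# parse_config() and the imports below are stand-ins for the real code.


def parse_config() -> dict:
    """Stand-in for the real config parsing (e.g. InvokeAIAppConfig)."""
    return {"allow_nodes": None, "deny_nodes": ["iterate"]}


# 1) Parse the config at the very top of the entrypoint (api_app / cli_app).
app_config = parse_config()

# 2) Only then import the invocation-related modules. Wrapping them in
#    `if True:` keeps flake8 quiet about non-top-level imports without a
#    `noqa` on every line or on the whole file.
if True:
    import json  # stand-in for the real invocation imports
    import math  # stand-in for the real invocation imports

print(app_config, json.dumps({"ok": True}), round(math.pi, 2))
```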
psychedelicious
1d2636aa90 feat: ignore unknown args
Do not throw when parsing unknown args; instead, parse only the known args and print the unknown ones (supersedes #4216)
2023-09-08 13:24:37 -04:00
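A small sketch of the described behaviour using `argparse.parse_known_args()`; the flag names here are illustrative, not InvokeAI's real CLI options:

```python
# Hedged sketch: parse only the known args and report (rather than raise on)
# anything unrecognised.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--root", type=str, default=None)  # example known arg

known, unknown = parser.parse_known_args(["--root", "/tmp", "--no-such-flag"])
if unknown:
    print(f"Ignoring unknown arguments: {unknown}")
print(known.root)
```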
psychedelicious
24d9357fdc feat(ui): truncate error messages in toasts to 128 characters 2023-09-08 13:24:37 -04:00
psychedelicious
74cc409c72 feat(ui): add nodesAllowlist to config 2023-09-08 13:24:37 -04:00
Eugene Brodsky
cc92ce3da5 feat(backend): allow/deny nodes - do not parse args again 2023-09-08 13:24:37 -04:00
psychedelicious
7254a6a517 feat(ui): add UI-level nodes denylist
This simply hides nodes from the workflow editor. The nodes will still work if an API request is made with them. For example, you could hide `iterate` nodes from the workflow editor, but if the Linear UI makes use of those nodes, they will still function.

- Update `AppConfig` with optional property `nodesDenylist: string[]`
- If provided, nodes are filtered out by `type` in the workflow editor
2023-09-08 13:24:37 -04:00
psychedelicious
dc771d9645 feat(backend): allow/deny nodes
Allow denying and explicitly allowing nodes. When a disallowed node is used, a pydantic `ValidationError` is raised.

- When collecting all invocations, check against the allowlist and denylist first. When pydantic constructs any unions related to nodes, the denied nodes will be omitted
- Add `allow_nodes` and `deny_nodes` to `InvokeAIAppConfig`. These are `Union[list[str], None]`, and may be populated with the `type` of invocations.
- When `allow_nodes` is `None`, allow all nodes, else if it is `list[str]`, only allow nodes in the list
- When `deny_nodes` is `None`, deny no nodes, else if it is `list[str]`, deny nodes in the list
- `deny_nodes` overrides `allow_nodes`
2023-09-08 13:24:37 -04:00
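A hedged sketch of the precedence rules listed above; `is_node_allowed` is an illustrative helper, not the actual InvokeAI code path, which applies this check while collecting invocation classes before pydantic builds the unions:

```python
# Hedged sketch of the allow/deny precedence: allow_nodes=None allows all,
# deny_nodes=None denies none, and deny_nodes overrides allow_nodes.
from typing import Optional


def is_node_allowed(
    node_type: str,
    allow_nodes: Optional[list[str]],
    deny_nodes: Optional[list[str]],
) -> bool:
    allowed = allow_nodes is None or node_type in allow_nodes
    denied = deny_nodes is not None and node_type in deny_nodes
    return allowed and not denied


print(is_node_allowed("iterate", allow_nodes=None, deny_nodes=["iterate"]))  # False
print(is_node_allowed("collect", allow_nodes=["collect"], deny_nodes=None))  # True
```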
Millun Atluri
dccf291f64
3.1.1rc1 Release (#4493)
## What type of PR is this? (check all applicable)

3.1.1 Release build & updates


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description


## Related Tickets & Documents


- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings


## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-09-08 16:05:23 +10:00
Millun Atluri
d3a94e5853 Update release version to 3.1.1rc1 2023-09-08 15:27:22 +10:00
Millun Atluri
0166d7ba2b new frontend build 2023-09-08 15:22:22 +10:00
blessedcoolant
b700809e14
Maryhipp/option fetch metadata from api (#4491)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Description

Adds a configuration option to fetch metadata and workflows from the API
instead of the image file. Needed for the commercial offering.
2023-09-08 15:29:13 +12:00
psychedelicious
501cb4c1e2
Merge branch 'main' into maryhipp/option-fetch-metadata-from-api 2023-09-08 11:56:02 +10:00
psychedelicious
56399a650a fix(ui): use zod to parse metadata when fetching from api 2023-09-08 11:55:25 +10:00
psychedelicious
e4035a51af fix(ui): add missing config property 2023-09-08 11:55:10 +10:00
Millun Atluri
cf83ddea15
fix(docs): Correct spelling and grammar in feature request template (#4490)
Minor corrections to spelling and grammar in the feature request template.

## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [x] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [x] No, because:

This PR should be self-explanatory.
      
## Have you updated all relevant documentation?
- [x] Yes
- [ ] No


## Description

Minor corrections to spelling and grammar in the feature request template.

No code or behavioural changes.


## Related Tickets & Documents


- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings


N/A

## Added/updated tests?

- [ ] Yes
- [x] No : _please replace this line with details on why tests
      have not been included_

There are no tests for the issue template.

## [optional] Are there any post deployment tasks we need to perform?
2023-09-08 11:37:02 +10:00
Sam
a79d5901c7
Correct spelling and grammar in feature request template
Minor corrections to spelling and grammar in the feature request template
2023-09-08 07:47:55 +10:00
Millun Atluri
a98c37b7a3
Added extra steps to update the cuDNN DLL found in the Torch packages (#4459)
I added extra steps to update the cuDNN DLL found in the Torch package
because it wasn't optimised or didn't use the latest version. Manually
updating it can speed up iteration, though results may differ from card to
card. For example, I went from 3 it/s to a steady 20 it/s.

## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [x] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [x] Yes
- [ ] No


## Description


## Related Tickets & Documents


- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings


## Added/updated tests?

- [x] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-09-07 13:38:46 +10:00
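For anyone following these steps, one way to confirm which CUDA/cuDNN build the installed Torch package actually loads, before and after swapping the DLLs (assumes a CUDA-enabled PyTorch install):

```python
# Quick sanity check after replacing the cuDNN DLLs shipped with the Torch
# package: print the CUDA and cuDNN versions PyTorch reports at runtime.
import torch

print("torch:", torch.__version__)
print("CUDA runtime:", torch.version.cuda)
print("cuDNN:", torch.backends.cudnn.version())  # e.g. 8902 -> cuDNN 8.9.2
print("cuDNN enabled:", torch.backends.cudnn.enabled)
```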
Millun Atluri
252adb9e70
Fixed typos 2023-09-07 13:16:25 +10:00
Keerigan45
40a0b2c366
Update 030_INSTALL_CUDA_AND_ROCM.md 2023-09-07 03:25:26 +02:00
Keerigan45
cfc4caf231
Update 030_INSTALL_CUDA_AND_ROCM.md
Added an extra step and clarification on how to choose between the 11.x and 12.x update for the cuDNN DLL
2023-09-07 03:24:13 +02:00
Millun Atluri
e16598c48a
Merge branch 'main' into patch-2 2023-09-06 13:59:59 +10:00
Millun Atluri
6506ce3e68
Updated "\" to be escaped in markdown 2023-09-06 13:58:53 +10:00
Millun Atluri
3afa73cd33
Update 030_INSTALL_CUDA_AND_ROCM.md 2023-09-06 13:55:33 +10:00
Mary Hipp
81ea742aea cleanup 2023-09-05 16:55:44 -04:00
Mary Hipp
15d28bfdbf add option to fetch metadata from api instead of reading off of png 2023-09-05 16:54:29 -04:00
blessedcoolant
0e5eac7c21
fix(nodes): add version to iterate and collect (#4469)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Description

fix(nodes): add version to iterate and collect

## Related Tickets & Documents


- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings


## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-09-06 03:29:55 +12:00
psychedelicious
0a1c5bea05 fix(ui): do not assign empty string to version if undefined
this causes zod to fail when building workflows
2023-09-06 00:01:26 +10:00
psychedelicious
9c290f4575 fix(nodes): add version to iterate and collect 2023-09-05 23:47:57 +10:00
Lincoln Stein
500f3046a9 remove choice to update from main and add a warning about tags & branches 2023-09-05 08:14:26 -04:00
Kent Keirsey
53f2369d18
Update 030_INSTALL_CUDA_AND_ROCM.md 2023-09-05 08:06:39 -04:00
blessedcoolant
357912285a
feat: Scaled Bounding Box Dimensions now respect Aspect Ratio (#4463)
## What type of PR is this? (check all applicable)

- [x] Feature


## Have you discussed this change with the InvokeAI team?
- [x] Yes
      
## Description

Scale Before Processing Dimensions now respect the Aspect Ratio that is
locked in. This makes it way easier to control the setting when using it
with locked ratios on the canvas.
2023-09-05 23:19:14 +12:00
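A rough sketch of the locked-ratio arithmetic, assuming the 64 px canvas grid mentioned in the follow-up fix below; the function is illustrative, and the real logic lives in the canvas UI:

```python
# Hedged sketch: derive the scaled height from a locked aspect ratio and snap
# both dimensions to the 64 px canvas grid.
def scale_with_ratio(width: int, ratio: float, step: int = 64) -> tuple[int, int]:
    def snap(value: float) -> int:
        return max(step, int(round(value / step)) * step)

    return snap(width), snap(width / ratio)


print(scale_with_ratio(768, 16 / 9))  # -> (768, 448)
```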
blessedcoolant
0f2b8dd7df Merge branch 'main' into scaled-aspect-ratio 2023-09-05 23:16:18 +12:00
Lincoln Stein
ba2ce72584
Prevent config script from trying to set vram on macs (#4412)
## What type of PR is this? (check all applicable)

- [X] Bug Fix

## Have you discussed this change with the InvokeAI team?
- [X] Yes
      
## Have you updated all relevant documentation?
- [X] Yes


## Description

Running the config script on Macs triggered an error due to the absence of
VRAM on these machines. The VRAM setting is now skipped.

## Added/updated tests?

- [ ] Yes
- [X] No : Will add this test in the near future.
2023-09-05 07:15:30 -04:00
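A hedged sketch of the guard described above; the function and config dict are illustrative, not the actual configure-script code:

```python
# Hedged sketch: only write a VRAM cache setting on platforms that have
# dedicated VRAM; Macs use unified memory, so the step is skipped there.
import platform


def maybe_set_vram(config: dict, requested_vram_gb: float) -> dict:
    if platform.system() == "Darwin":
        return config  # skip on macOS
    config["vram"] = requested_vram_gb
    return config


print(maybe_set_vram({}, 0.5))
```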
Lincoln Stein
c54c1f603b
Merge branch 'main' into bugfix/set-vram-on-macs 2023-09-05 07:09:39 -04:00
blessedcoolant
9caa2a2043 fix: Set scaled steps to 64 to stay in sync with the rest of the canvas 2023-09-05 22:59:37 +12:00
blessedcoolant
86185f2fe3 feat: Scaled Bounding Box Dimensions now respect Aspect Ratio 2023-09-05 22:37:14 +12:00
Jonathan
dfbcb773da
Update communityNodes.md (#4452)
Fixed bad link
2023-09-05 07:11:40 +00:00
Keerigan45
04c0a83bff
Added extra steps to update the cuDNN DLL found in the Torch packages
I added extra steps to update the cuDNN DLL found in the Torch package because it wasn't optimised or didn't use the latest version. Manually updating it can speed up iteration, though results may differ from card to card. For example, I went from 3 it/s to a steady 20 it/s.
2023-09-05 06:54:06 +02:00
blessedcoolant
7a30162583
Update CODEOWNERS (#4456)
@blessedcoolant Per discussion, I have updated CODEOWNERS so that we're
not force-merging things.

This will, however, necessitate a much more disciplined approval process.
2023-09-05 16:53:15 +12:00
blessedcoolant
2c65ffa305
Merge branch 'main' into codeowners-update 2023-09-05 16:46:38 +12:00
Millun Atluri
331a6227cc
Add textfontimage node to communityNodes.md (#4379)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [X] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No


## Description
Add textfontimage node to communityNodes.md

## Related Tickets & Documents


- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings


## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-09-05 14:10:35 +10:00
Millun Atluri
eb90ea41fd
Merge branch 'main' into textfontimage 2023-09-05 13:54:46 +10:00
Kent Keirsey
f134804fe7 Update CODEOWNERS 2023-09-04 23:19:24 -04:00
Kent Keirsey
c59c3ae499 Update CODEOWNERS 2023-09-04 23:19:24 -04:00
blessedcoolant
42ee95ee97
fix(ui): fix non-nodes validation logic being applied to nodes invoke button (#4457)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

    
## Description

fix(ui): fix non-nodes validation logic being applied to nodes invoke
button

For example, if you had an invalid ControlNet setup, it would prevent you
from invoking on nodes even when node validation was disabled.

## Related Tickets & Documents


- Closes
https://discord.com/channels/1020123559063990373/1028661664519831552/1148431783289966603
2023-09-05 15:03:02 +12:00
blessedcoolant
b008fd4a5f
Merge branch 'main' into fix/ui/fix-invoke-button-validation 2023-09-05 15:00:39 +12:00
blessedcoolant
6b850d506a
feat: Inpaint & Outpaint Improvements (#4408)
## What type of PR is this? (check all applicable)

- [x] Feature
- [x] Optimization

## Have you discussed this change with the InvokeAI team?
- [x] Yes


## Description

# Coherence Mode

A new parameter called Coherence Mode has been added to Coherence Pass
settings. This parameter controls what kind of Coherence Pass is done
after Inpainting and Outpainting.

- Unmasked: This performs a complete unmasked image to image pass on the
entire generation.
- Mask: This performs a masked image to image pass using your input mask
as the coherence mask.
- Mask Edge [DEFAULT]: This performs a masked image to image pass on the
edges of your mask to try and clear out the seams.

# Why The Coherence Masked Modes?

One of the issues with the unmasked coherence pass arises when the diffusion
process is trying to align detailed or organic objects. Because image to
image tends to change the image a little even at lower strengths, the
paste-back step ends up slightly misaligned. By providing the mask to the
Coherence Pass, we can try to eliminate this in those cases. While it will be
impossible to address this for every image out there, having these options
lets the user automate a lot of it. For everything else there's manual
paint-over with inpaint.

# Graph Improvements

The graphs have now been refined quite a bit. We no longer manually blur the
masks for outpainting. This is no longer needed because we now dilate the
mask, based on the blur size, while pasting back. As a result we got rid of
quite a few nodes that were handling this in the older graph.

The graphs are also a lot cleaner now because we tackle Scaled Dimensions &
Coherence Mode completely independently.

Inpainting results seem very promising, especially with the Mask Edge mode.

---

# New Infill Methods [Experimental]

We are currently trying out various new infill methods to see which ones
might perform the best in outpainting. We may keep all of them or keep
none. This will be decided as we test more.

## LaMa Infill

- Re-enabled LaMa infill in the UI.
- We are trying to get this to work without a memory overhead.

In order to use LaMa, you need to manually download and place the LaMa
JIT model in `models/core/misc/lama/lama.pt`. You can download the JIT
model from Sanster
[here](https://github.com/Sanster/models/releases/download/add_big_lama/big-lama.pt)
and rename it to `lama.pt` or you can use the script in the original
LaMA repo to convert the base model to a JIT model yourself.

## CV2 Infill

- Added a new infilling method using CV2's Inpaint.

## Patchmatch Rescaling

The patchmatch infill input image is now downscaled and then infilled.
Patchmatch can be really slow at large resolutions, and this is a pretty
decent way to get around that. Additionally, downscaling might also produce a
better patch match by avoiding large areas being infilled with repeating
patches. But that's just the theory. Still testing it out.

## [optional] Are there any post deployment tasks we need to perform?

- If we decide to keep LaMA infill, then we will need to host the model
and update the installer to download it as a core model.
2023-09-05 14:55:30 +12:00
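A rough sketch combining the CV2 infill idea with the downscale-before-infill trick described for patchmatch, using OpenCV's `cv2.inpaint` as the infill step; InvokeAI's actual node graph and parameters differ:

```python
# Hedged sketch: downscale image + mask, infill at the lower resolution with
# OpenCV, then resize back up. Synthetic data stands in for a real image.
import cv2
import numpy as np

image = np.full((512, 512, 3), 200, dtype=np.uint8)
mask = np.zeros((512, 512), dtype=np.uint8)
mask[192:320, 192:320] = 255  # non-zero pixels mark the area to infill

scale = 0.5
small_img = cv2.resize(image, None, fx=scale, fy=scale, interpolation=cv2.INTER_AREA)
small_mask = cv2.resize(mask, None, fx=scale, fy=scale, interpolation=cv2.INTER_NEAREST)

filled_small = cv2.inpaint(small_img, small_mask, 3, cv2.INPAINT_TELEA)
filled = cv2.resize(filled_small, (512, 512), interpolation=cv2.INTER_LINEAR)
print(filled.shape)
```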
blessedcoolant
3f3e0ab9f5
Merge branch 'main' into lama-infill 2023-09-05 14:47:53 +12:00