Compare commits

...

468 Commits

Author SHA1 Message Date
0ec7ac94b8 Update invokeai version to 3.1.1 2023-09-12 18:08:34 +10:00
e6dc8937c0 Update latest tag format 2023-09-12 18:07:54 +10:00
dccf291f64 3.1.1rc1 Release (#4493)
## What type of PR is this? (check all applicable)

3.1.1 Release build & updates


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description


## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-09-08 16:05:23 +10:00
d3a94e5853 Update release version to 3.1.1rc1 2023-09-08 15:27:22 +10:00
0166d7ba2b new frontend build 2023-09-08 15:22:22 +10:00
b700809e14 Maryhipp/option fetch metadata from api (#4491)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Description

Adds a configuration option to fetch metadata and workflows from the API
instead of the image file. Needed for commercial.
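
As a rough sketch of what the API-fetch path looks like from a client's perspective (the route, host, and port below are assumptions for illustration, not the project's confirmed API):

```py
# Hypothetical sketch: read metadata over HTTP instead of parsing the PNG file.
# The route and default host/port below are assumptions for illustration only.
import requests

def fetch_image_metadata(image_name: str, base_url: str = "http://127.0.0.1:9090") -> dict:
    """Return the metadata dict for an image via the API rather than the image file."""
    resp = requests.get(f"{base_url}/api/v1/images/{image_name}/metadata", timeout=10)
    resp.raise_for_status()
    return resp.json()
```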
2023-09-08 15:29:13 +12:00
501cb4c1e2 Merge branch 'main' into maryhipp/option-fetch-metadata-from-api 2023-09-08 11:56:02 +10:00
56399a650a fix(ui): use zod to parse metadata when fetching from api 2023-09-08 11:55:25 +10:00
e4035a51af fix(ui): add missing config property 2023-09-08 11:55:10 +10:00
cf83ddea15 fix(docs): Correct spelling and grammar in feature request template (#4490)
Minor corrections to spelling and grammar in the feature request template.

## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [x] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [x] No, because:

This PR should be self-explanatory.
      
## Have you updated all relevant documentation?
- [x] Yes
- [ ] No


## Description

Minor corrections to spelling and grammar in the feature request template.

No code or behavioural changes.


## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

N/A

## Added/updated tests?

- [ ] Yes
- [x] No : _please replace this line with details on why tests
      have not been included_

There are no tests for the issue template.

## [optional] Are there any post deployment tasks we need to perform?
2023-09-08 11:37:02 +10:00
Sam a79d5901c7 Correct spelling and grammar in feature request template
Minor corrections to spelling and grammar in the feature request template
2023-09-08 07:47:55 +10:00
a98c37b7a3 Added extra steps to update the cuDNN DLL found in the Torch packages (#4459)
I added extra steps to update the cuDNN DLL found in the Torch package
because it wasn't optimised or didn't use the latest version. Manually
updating it can speed up iteration, though results may differ per card.
For example, I went from 3 it/s to a steady 20 it/s.
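
For Windows users, a minimal sketch of the manual swap, assuming you have downloaded a newer cuDNN package; the paths are placeholders, and you should back up the bundled DLLs first:

```py
# Hypothetical sketch: copy newer cuDNN DLLs over the copies bundled with torch (Windows).
# CUDNN_DIR is an assumption; point it at the "bin" folder of your extracted cuDNN download.
import shutil
from pathlib import Path

import torch

CUDNN_DIR = Path(r"C:\cudnn\bin")                # extracted cuDNN package (placeholder path)
torch_lib = Path(torch.__file__).parent / "lib"  # where torch keeps its bundled DLLs

for dll in CUDNN_DIR.glob("cudnn*.dll"):
    shutil.copy2(dll, torch_lib / dll.name)      # overwrite the older bundled copies
    print(f"copied {dll.name} -> {torch_lib}")
```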

## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [x] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [x] Yes
- [ ] No


## Description


## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Added/updated tests?

- [x] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-09-07 13:38:46 +10:00
252adb9e70 Fixed typos 2023-09-07 13:16:25 +10:00
40a0b2c366 Update 030_INSTALL_CUDA_AND_ROCM.md 2023-09-07 03:25:26 +02:00
cfc4caf231 Update 030_INSTALL_CUDA_AND_ROCM.md
Added an extra step and clarification on how to choose between the 11.x or 12.x update for the cuDNN DLL
2023-09-07 03:24:13 +02:00
e16598c48a Merge branch 'main' into patch-2 2023-09-06 13:59:59 +10:00
6506ce3e68 Updated "\" to be escaped in markdown 2023-09-06 13:58:53 +10:00
3afa73cd33 Update 030_INSTALL_CUDA_AND_ROCM.md 2023-09-06 13:55:33 +10:00
81ea742aea cleanup 2023-09-05 16:55:44 -04:00
15d28bfdbf add option to fetch metadata from api instead of reading off of png 2023-09-05 16:54:29 -04:00
0e5eac7c21 fix(nodes): add version to iterate and collect (#4469)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Description

fix(nodes): add version to iterate and collect

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-09-06 03:29:55 +12:00
0a1c5bea05 fix(ui): do not assign empty string to version if undefined
this causes zod to fail when building workflows
2023-09-06 00:01:26 +10:00
9c290f4575 fix(nodes): add version to iterate and collect 2023-09-05 23:47:57 +10:00
500f3046a9 remove choice to update from main and add a warning about tags & branches 2023-09-05 08:14:26 -04:00
53f2369d18 Update 030_INSTALL_CUDA_AND_ROCM.md 2023-09-05 08:06:39 -04:00
357912285a feat: Scaled Bounding Box Dimensions now respect Aspect Ratio (#4463)
## What type of PR is this? (check all applicable)

- [x] Feature


## Have you discussed this change with the InvokeAI team?
- [x] Yes
      
## Description

Scale Before Processing Dimensions now respect the Aspect Ratio that is
locked in. This makes it way easier to control the setting when using it
with locked ratios on the canvas.
2023-09-05 23:19:14 +12:00
0f2b8dd7df Merge branch 'main' into scaled-aspect-ratio 2023-09-05 23:16:18 +12:00
ba2ce72584 Prevent config script from trying to set vram on macs (#4412)
## What type of PR is this? (check all applicable)

- [X] Bug Fix

## Have you discussed this change with the InvokeAI team?
- [X] Yes
      
## Have you updated all relevant documentation?
- [X] Yes


## Description

Running the config script on Macs triggered an error due to the absence of
VRAM on these machines. The VRAM setting is now skipped.

## Added/updated tests?

- [ ] Yes
- [X] No : Will add this test in the near future.
2023-09-05 07:15:30 -04:00
c54c1f603b Merge branch 'main' into bugfix/set-vram-on-macs 2023-09-05 07:09:39 -04:00
9caa2a2043 fix: Set scaled steps to be at 64 to be in sync with the rest of the canvas 2023-09-05 22:59:37 +12:00
86185f2fe3 feat: Scaled Bounding Box Dimensions now respect Aspect Ratio 2023-09-05 22:37:14 +12:00
dfbcb773da Update communityNodes.md (#4452)
Fixed bad link
2023-09-05 07:11:40 +00:00
04c0a83bff Added extra steps to update the cuDNN DLL found in the Torch packages
I added extra steps to update the cuDNN DLL found in the Torch package because it wasn't optimised or didn't use the latest version. Manually updating it can speed up iteration, though results may differ per card. For example, I went from 3 it/s to a steady 20 it/s.
2023-09-05 06:54:06 +02:00
7a30162583 Update CODEOWNERS (#4456)
@blessedcoolant Per discussion, I have updated CODEOWNERS so that we're
not force-merging things.

This will, however, necessitate a much more disciplined approval process.
2023-09-05 16:53:15 +12:00
2c65ffa305 Merge branch 'main' into codeowners-update 2023-09-05 16:46:38 +12:00
331a6227cc Add textfontimage node to communityNodes.md (#4379)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [X] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No


## Description
Add textfontimage node to communityNodes.md

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-09-05 14:10:35 +10:00
eb90ea41fd Merge branch 'main' into textfontimage 2023-09-05 13:54:46 +10:00
f134804fe7 Update CODEOWNERS 2023-09-04 23:19:24 -04:00
c59c3ae499 Update CODEOWNERS 2023-09-04 23:19:24 -04:00
42ee95ee97 fix(ui): fix non-nodes validation logic being applied to nodes invoke button (#4457)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

    
## Description

fix(ui): fix non-nodes validation logic being applied to nodes invoke
button

For example, if you had an invalid ControlNet setup, it would prevent
you from invoking on nodes even when node validation was disabled.

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Closes
https://discord.com/channels/1020123559063990373/1028661664519831552/1148431783289966603
2023-09-05 15:03:02 +12:00
b008fd4a5f Merge branch 'main' into fix/ui/fix-invoke-button-validation 2023-09-05 15:00:39 +12:00
6b850d506a feat: Inpaint & Outpaint Improvements (#4408)
## What type of PR is this? (check all applicable)

- [x] Feature
- [x] Optimization

## Have you discussed this change with the InvokeAI team?
- [x] Yes


## Description

# Coherence Mode

A new parameter called Coherence Mode has been added to Coherence Pass
settings. This parameter controls what kind of Coherence Pass is done
after Inpainting and Outpainting.

- Unmasked: This performs a complete unmasked image-to-image pass on the
entire generation.
- Mask: This performs a masked image-to-image pass using your input mask
as the coherence mask.
- Mask Edge [DEFAULT]: This performs a masked image-to-image pass on
the edges of your mask to try and clear out the seams.
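
A minimal sketch of how these modes might be modeled; the enum and helper below are illustrative, not the project's actual identifiers:

```py
# Hypothetical sketch of the three Coherence Pass modes described above.
from enum import Enum

class CoherenceMode(str, Enum):
    UNMASKED = "unmasked"    # full unmasked image-to-image pass over the whole generation
    MASK = "mask"            # masked image-to-image pass using the input mask
    MASK_EDGE = "mask_edge"  # default: masked pass only along the mask edges to clear seams

def coherence_mask(mode: CoherenceMode, input_mask, edge_mask):
    """Pick which mask (if any) the coherence pass should use."""
    if mode is CoherenceMode.UNMASKED:
        return None
    return input_mask if mode is CoherenceMode.MASK else edge_mask
```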

# Why The Coherence Masked Modes?

One of the issues with an unmasked coherence pass arises when the diffusion
process tries to align detailed or organic objects. Because image-to-image
tends to change the image a little even at lower strengths, the paste-back
step ends up slightly misaligned. By providing the mask to the Coherence
Pass, we can try to eliminate this in those cases. While it will be
impossible to address this for every image out there, having these options
will let the user automate a lot of it. For everything else, there's manual
paint-over with inpaint.

# Graph Improvements

The graphs have now been refined quite a bit. We no longer do manual
blurring of the masks for outpainting. This is no longer needed
because we now dilate the mask depending on the blur size while pasting
back. As a result we got rid of quite a few nodes that were handling
this in the older graph.

The graphs are also a lot cleaner now because we tackle Scaled
Dimensions & Coherence Mode completely independently.

Inpainting results seem very promising, especially with the Mask Edge
mode.

---

# New Infill Methods [Experimental]

We are currently trying out various new infill methods to see which ones
might perform the best in outpainting. We may keep all of them or keep
none. This will be decided as we test more.

## LaMa Infill

- Re-enabled LaMA infill in the UI.
- We are trying to get this to work without a memory overhead.

In order to use LaMa, you need to manually download and place the LaMa
JIT model in `models/core/misc/lama/lama.pt`. You can download the JIT
model from Sanster
[here](https://github.com/Sanster/models/releases/download/add_big_lama/big-lama.pt)
and rename it to `lama.pt` or you can use the script in the original
LaMA repo to convert the base model to a JIT model yourself.
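
If you go the manual-download route, loading and calling the JIT model is roughly the following (a sketch; the exact input/output layout depends on the exported model):

```py
# Hypothetical sketch: load the manually placed LaMa JIT model and run one infill on CPU.
import torch

model = torch.jit.load("models/core/misc/lama/lama.pt", map_location="cpu").eval()

def lama_infill(image: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """image: 1x3xHxW in [0, 1]; mask: 1x1xHxW, 1 where pixels should be filled (assumed layout)."""
    with torch.no_grad():
        return model(image, mask)
```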

## CV2 Infill

- Added a new infilling method using CV2's Inpaint.

## Patchmatch Rescaling

The Patchmatch infill input image is now downscaled before infilling. Patchmatch
can be really slow at large resolutions, and this is a pretty decent way
to get around that. Additionally, downscaling might also provide a
better patch match by avoiding larger areas being infilled with
repeating patches. But that's just the theory. Still testing it out.
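
A sketch of the downscale-then-infill idea, with the actual patchmatch call left as a caller-supplied function since its exact signature isn't shown here:

```py
# Hypothetical sketch of Patchmatch rescaling: downscale, infill, then scale back up.
from PIL import Image

def infill_downscaled(image: Image.Image, infill_fn, scale: float = 0.5) -> Image.Image:
    """infill_fn is whatever patchmatch-style infill you use (PIL image in, PIL image out)."""
    w, h = image.size
    small = image.resize((max(1, int(w * scale)), max(1, int(h * scale))), Image.LANCZOS)
    filled = infill_fn(small)                    # patchmatch runs fast at the reduced resolution
    return filled.resize((w, h), Image.LANCZOS)  # back to the original size
```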

## [optional] Are there any post deployment tasks we need to perform?

- If we decide to keep LaMA infill, then we will need to host the model
and update the installer to download it as a core model.
2023-09-05 14:55:30 +12:00
3f3e0ab9f5 Merge branch 'main' into lama-infill 2023-09-05 14:47:53 +12:00
8b305651f9 fix(ui): fix non-nodes validation logic being applied to nodes invoke button 2023-09-05 12:44:39 +10:00
52bd2bbb13 Update communityNodes.md with a few more nodes (#4444)
Adds my (@dwringer's) released nodes to the community nodes page.

## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [X] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description
Adds my released nodes:
- Depth Map from Wavefront OBJ
- Enhance Image
- Generative Grammar-Based Prompt Nodes
- Ideal Size Stepper
- Image Compositor
- Final Size & Orientation / Random Switch (Integers)
- Text Mask (Simple 2D)
2023-09-05 12:20:33 +10:00
a9fafad5b5 chore: sync, lint & update 2023-09-05 14:17:23 +12:00
c5b9c8fc3a Merge branch 'main' into lama-infill 2023-09-05 14:16:27 +12:00
fb5ac78191 Merge branch 'lama-infill' of https://github.com/blessedcoolant/InvokeAI into lama-infill 2023-09-05 14:11:05 +12:00
871b9286d1 fix: Review changes 2023-09-05 14:10:41 +12:00
c49b436f06 Merge branch 'lama-infill' of github.com:blessedcoolant/InvokeAI into lama-infill 2023-09-04 21:54:52 -04:00
d2e327add9 install models/core/misc/lama/lama.pt 2023-09-04 21:54:40 -04:00
2ab75bc52e feat(ui): move fp32 check to its own variable
remove a ton of extraneous checks that are easy to miss during maintenance
2023-09-05 11:51:46 +10:00
384ad2df6a Merge branch 'main' into patch-2 2023-09-04 21:48:17 -04:00
94115b5217 fix(nodes): downscale and resample_mode are not optional 2023-09-05 11:23:13 +10:00
10eec546ad Consolidate and generalize saturation/luminosity adjusters (#4425)
* Consolidated saturation/luminosity adjustment.
Now allows increasing and inverting.
Accepts any PIL color format and channel designation.

* Updated docs/nodes/defaultNodes.md

* shortened tags list to channel types only

* fix typo in mode list

* split features into offset and multiply nodes

* Updated documentation

* Change invert to discrete boolean.
Previous math was unclear and had issues with 0 values.

* chore: black

* chore(ui): typegen

---------

Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2023-09-05 11:18:37 +10:00
ac3bf81ca4 Update communityNodes.md for consistency and conciseness
Trims down a couple of my node descriptions and adjusts the formatting a little bit for consistency.
2023-09-04 20:21:48 -04:00
edd64bd537 Replace links to .py files with repo links, and consolidate some nodes
Revised links to my node py files, replacing them with links to independent repos. Additionally I consolidated some nodes together (Image and Mask Composition Pack, Size Stepper nodes).
2023-09-04 19:25:12 -04:00
8795ea8b06 Merge branch 'main' into patch-2 2023-09-04 19:19:03 -04:00
b1ef3370fa chore: Regen Schema 2023-09-05 09:56:34 +12:00
db4af7c287 Merge branch 'main' into lama-infill 2023-09-05 09:54:44 +12:00
78cc5a7825 feat(nodes): versioning (#4449)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [x] Yes
- [ ] No


## Description

This PR is based on #4423 and should not be merged until it is merged.

[feat(nodes): add version to node
schemas](c179d4ccb7)

The `@invocation` decorator is extended with an optional `version` arg.
On execution of the decorator, the version string is parsed using the
`semver` package (this was an indirect dependency and has been added to
`pyproject.toml`).

All built-in nodes are set with `version="1.0.0"`.

The version is added to the OpenAPI Schema for consumption by the
client.
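
A minimal sketch of the shape of that API, assuming the `semver` package; the decorator body is illustrative, not the actual implementation:

```py
# Hypothetical sketch: an @invocation decorator that accepts and validates a version string.
import semver  # previously an indirect dependency, now declared in pyproject.toml

def invocation(invocation_type: str, title: str = "", version: str = "1.0.0"):
    def wrapper(cls):
        semver.VersionInfo.parse(version)  # raises ValueError if the string is not valid semver
        cls.type = invocation_type
        cls.title = title or cls.__name__
        cls.version = version              # later surfaced to the client via the OpenAPI schema
        return cls
    return wrapper

@invocation("example", title="Example Node", version="1.0.0")
class ExampleInvocation:
    ...
```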

[feat(ui): handle node
versions](03de3e4f78)

- Node versions are now added to node templates
- Node data (including in workflows) include the version of the node
- On loading a workflow, we check to see if the node and template
versions match exactly. If not, a warning is logged to console.
- The node info icon (top-right corner of node, which you may click to
open the notes editor) now shows the version and mentions any issues.
- Some workflow validation logic has been shifted around and is now
executed in a redux listener.

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Closes #4393

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

Loading old workflows should prompt a warning, and the node status icon
should indicate some action is needed.

## [optional] Are there any post deployment tasks we need to perform?

I've updated the default workflows:
- Bump workflow versions from 1.0 to 1.0.1
- Add versions for all nodes in the workflows
- Test workflows

[Default
Workflows.zip](https://github.com/invoke-ai/InvokeAI/files/12511911/Default.Workflows.zip)

I'm not sure where these are being stored right now @Millu
2023-09-05 09:53:46 +12:00
438bc70dfd Merge branch 'main' into feat/nodes/versioning 2023-09-05 09:39:54 +12:00
1f6c868212 feat(nodes): polymorphic fields (#4423)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [x] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission

## Description

### Polymorphic Fields

Initial support for polymorphic field types. Polymorphic types are a
single instance of, or a list of, a specific type. For example, `Union[str,
list[str]]`.

Polymorphics do not yet have support for direct input in the UI (will
come in the future). They will be forcibly set as Connection-only
fields, in which case users will not be able to provide direct input to
the field.

If a polymorphic should present as a singleton type - which would allow
direct input - the node must provide an explicit type hint.

For example, `DenoiseLatents`' `CFG Scale` is polymorphic, but in the
node editor, we want to present this as a number input. In the node
definition, the field is given `ui_type=UIType.Float`, which tells the
UI to treat this as a `float` field.

The connection validation logic will prevent connecting a collection to
`CFG Scale` in this situation, because it is typed as `float`. The
workaround is to disable validation from the settings to make this
specific connection. A future improvement will resolve this.

### Collection Fields

This also introduces better support for collection field types. Like
polymorphics, collection types are parsed automatically by the client
and do not need any specific type hints.

Also like polymorphics, there is no support yet for direct input of
collection types in the UI.

### Other Changes

- Disabling validation in workflow editor now displays the visual hints
for valid connections, but lets you connect to anything.
- Added `ui_order: int` to `InputField` and `OutputField`. The UI will
use this, if present, to order fields in a node UI. See usage in
`DenoiseLatents` for an example.
- Updated the field colors - duplicate colors have just been lightened a
bit. It's not perfect but it was a quick fix.
- Field handles for collections are the same color as their single
counterparts, but have a dark dot in the center of them.
- Field handles for polymorphics are a rounded square with a dot in the
middle.
- Removed all fields that just render `null` from `InputFieldRenderer`,
replaced with a single fallback
- Removed logic in `zValidatedWorkflow`, which checked for existence of
node templates for each node in a workflow. This logic introduced a
circular dependency, due to importing the global redux `store` in order
to get the node templates within a zod schema. It's actually fine to
just leave this out entirely; The case of a missing node template is
handled by the UI. Fixing it otherwise would introduce a substantial
headache.
- Fixed the `ControlNetInvocation.control_model` field default, which
was a string when it shouldn't have had one.

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Closes #4266 

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

Add this polymorphic float node to the end of your
`invokeai/app/invocations/primitives.py`:
```py
@invocation("float_poly", title="Float Poly Test", tags=["primitives", "float"], category="primitives")
class FloatPolyInvocation(BaseInvocation):
    """A float polymorphic primitive value"""

    value: Union[float, list[float]] = InputField(default_factory=list, description="The float value")

    def invoke(self, context: InvocationContext) -> FloatOutput:
        return FloatOutput(value=self.value[0] if isinstance(self.value, list) else self.value)
```

Head over to nodes and try connecting up some collection and polymorphic inputs.
2023-09-05 09:39:04 +12:00
52d15e06bf Merge branch 'main' into lama-infill 2023-09-05 07:12:27 +12:00
3dbb0e1bfb feat(tests): add tests for node versions 2023-09-04 19:16:44 +10:00
d6317bc53f docs: update INVOCATIONS.md with version info 2023-09-04 19:08:18 +10:00
4aca264308 feat(ui): handle node versions
- Node versions are now added to node templates
- Node data (including in workflows) include the version of the node
- On loading a workflow, we check to see if the node and template versions match exactly. If not, a warning is logged to console.
- The node info icon (top-right corner of node, which you may click to open the notes editor) now shows the version and mentions any issues.
- Some workflow validation logic has been shifted around and is now executed in a redux listener.
2023-09-04 19:08:18 +10:00
d9148fb619 feat(nodes): add version to node schemas
The `@invocation` decorator is extended with an optional `version` arg. On execution of the decorator, the version string is parsed using the `semver` package (this was an indirect dependency and has been added to `pyproject.toml`).

All built-in nodes are set with `version="1.0.0"`.

The version is added to the OpenAPI Schema for consumption by the client.
2023-09-04 19:08:18 +10:00
59cb6305b9 feat(tests): add tests for decorator and int -> float 2023-09-04 19:07:41 +10:00
945b9e3a0a Merge branch 'main' into textfontimage 2023-09-04 15:48:23 +10:00
920fc0e751 chore(ui): typegen 2023-09-04 15:25:58 +10:00
34e3c2e000 feat(ui): style handles 2023-09-04 15:25:31 +10:00
d65553841e fix: remove default_factory for ImageCollectionInvocation 2023-09-04 15:25:31 +10:00
446dc6bea1 fix(nodes): denoise_mask is connection-only, ui_order=6 2023-09-04 15:25:31 +10:00
92975130bd feat: allow float inputs to accept integers
Pydantic automatically casts ints to floats.
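
For example, with a plain Pydantic model (a sketch independent of the app's own field types):

```py
# Pydantic coerces ints to floats for float-annotated fields, so integer input is accepted.
from pydantic import BaseModel

class FloatField(BaseModel):
    value: float

print(FloatField(value=3).value)  # 3.0 (the int 3 is cast to a float)
```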
2023-09-04 15:25:31 +10:00
a765f01c08 chore(ui): typegen 2023-09-04 15:25:31 +10:00
09803b075d fix(ui): fix node value checks to compare to undefined
existing checks would fail on falsy values
2023-09-04 15:25:31 +10:00
1062fc4796 feat: polymorphic fields
Initial support for polymorphic field types. Polymorphic types are a single instance of, or a list of, a specific type. For example, `Union[str, list[str]]`.

Polymorphics do not yet have support for direct input in the UI (will come in the future). They will be forcibly set as Connection-only fields, in which case users will not be able to provide direct input to the field.

If a polymorphic should present as a singleton type - which would allow direct input - the node must provide an explicit type hint.

For example, `DenoiseLatents`' `CFG Scale` is polymorphic, but in the node editor, we want to present this as a number input. In the node definition, the field is given `ui_type=UIType.Float`, which tells the UI to treat this as a `float` field.

The connection validation logic will prevent connecting a collection to `CFG Scale` in this situation, because it is typed as `float`. The workaround is to disable validation from the settings to make this specific connection. A future improvement will resolve this.

This also introduces better support for collection field types. Like polymorphics, collection types are parsed automatically by the client and do not need any specific type hints.

Also like polymorphics, there is no support yet for direct input of collection types in the UI.

- Disabling validation in workflow editor now displays the visual hints for valid connections, but lets you connect to anything.
- Added `ui_order: int` to `InputField` and `OutputField`. The UI will use this, if present, to order fields in a node UI. See usage in `DenoiseLatents` for an example.
- Updated the field colors - duplicate colors have just been lightened a bit. It's not perfect but it was a quick fix.
- Field handles for collections are the same color as their single counterparts, but have a dark dot in the center of them.
- Field handles for polymorphics are a rounded square with a dot in the middle.
- Removed all fields that just render `null` from `InputFieldRenderer`, replaced with a single fallback
- Removed logic in `zValidatedWorkflow`, which checked for existence of node templates for each node in a workflow. This logic introduced a circular dependency, due to importing the global redux `store` in order to get the node templates within a zod schema. It's actually fine to just leave this out entirely; The case of a missing node template is handled by the UI. Fixing it otherwise would introduce a substantial headache.
- Fixed the `ControlNetInvocation.control_model` field default, which was a string when it shouldn't have had one.
2023-09-04 15:25:31 +10:00
17170e9dab Merge branch 'main' into patch-2 2023-09-03 22:34:25 -05:00
d69f3a03bb feat: Infer Model Name automatically if empty in Model Forms (#4445)
## What type of PR is this? (check all applicable)

- [x] Feature

## Have you discussed this change with the InvokeAI team?
- [x] No
      
## Description

Automatically infer the name of the model from the supplied path IF the
model name slot is empty. If the model name is not empty, we presume
that the user has entered a model name or made changes to it, and we do
not touch it so as not to override the user's changes.
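
A minimal sketch of that rule (the helper name is hypothetical): checkpoint files use the file stem, diffusers folders use the directory name, and a non-empty name is left untouched.

```py
# Hypothetical sketch: derive a default model name from the supplied path
# only when the name field is still empty.
from pathlib import Path

def infer_model_name(model_path: str, current_name: str = "") -> str:
    if current_name.strip():
        return current_name  # the user already entered a name; leave it alone
    p = Path(model_path)
    # checkpoint files ("model.safetensors" -> "model") use the stem; folders use the dir name
    return p.stem if p.suffix else p.name
```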


## Related Tickets & Documents

- Addresses: #4443
2023-09-04 12:33:38 +12:00
95f44ff343 fix: Make the name extraction work for both ckpts and folders 2023-09-04 10:52:27 +12:00
f9c3c07d98 fix: Support UNIX paths 2023-09-04 10:16:57 +12:00
c91ba2dbe7 feat: Infer Model Name automatically if empty in Model Forms 2023-09-04 01:36:48 +12:00
917c2c480e Merge branch 'main' into lama-infill 2023-09-03 23:16:34 +12:00
fee5cd9c7e Update communityNodes.md with a few more nodes
Adds my (@dwringer's) released nodes to the community nodes page.
2023-09-03 02:37:36 -04:00
b0cce8008a Update communityNodes.md (#4442)
* Update communityNodes.md

Added some of my nodes to the community listing.
2023-09-03 16:31:12 +12:00
368c2bf08b fix(ui): clicking node collapse button does not bring node to front (#4437)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Description

fix(ui): clicking node collapse button does not bring node to front

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue
https://discord.com/channels/1020123559063990373/1130288930319761428/1147333454632071249
- Closes #4438
2023-09-03 12:50:47 +12:00
0a70a856e5 Merge branch 'main' into fix/ui/fix-click-node-collapse 2023-09-03 09:43:40 +10:00
56204e84bc Fix baseinvocation use of __attribute__ to work with py3.9 (#4413)
## What type of PR is this? (check all applicable)

- [X] Bug Fix

## Have you discussed this change with the InvokeAI team?
- [X] Yes
      
## Have you updated all relevant documentation?
- [X] Yes

## Description

There is a call in `baseinvocation.invocation_output()` to
`cls.__annotations__`. However, in Python 3.9 not all objects have this
attribute. I have worked around the limitation in the way described in
https://docs.python.org/3/howto/annotations.html, which should produce the
same results in 3.9, 3.10, and 3.11.
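
The HOWTO's recommendation for classes on 3.9 boils down to reading the class `__dict__` rather than the attribute directly; a short sketch:

```py
# Per the Python annotations HOWTO: on 3.9, read a class's own annotations from its
# __dict__ so you neither trigger the missing-attribute problem nor pick up inherited ones.
def get_class_annotations(cls: type) -> dict:
    return cls.__dict__.get("__annotations__", {})

class Example:
    x: int
    y: str

print(get_class_annotations(Example))  # {'x': <class 'int'>, 'y': <class 'str'>}
```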


## Related Tickets & Documents

See
https://discord.com/channels/1020123559063990373/1146897072394608660/1146939182300799017
for first bug report.
2023-09-02 12:09:21 -04:00
f1a01c473d Merge branch 'main' into bugfix/run-on-3.9 2023-09-02 12:01:37 -04:00
e27819f18f chore: remove unused files (#4433)
## What type of PR is this? (check all applicable)

- [x] Cleanup


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

## Description

Used https://github.com/albertas/deadcode to get a rough overview of what
is not used, but checked everything manually. The app still runs.

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->


- Closes #4424

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

Ensure it doesn't explode when you run it.
2023-09-03 03:06:39 +12:00
f1f7778e73 Merge branch 'main' into chore/clean-up-unused-files 2023-09-03 02:59:31 +12:00
7763594839 Merge branch 'main' into bugfix/run-on-3.9 2023-09-02 10:08:40 -04:00
c965d3eb6b Merge branch 'main' into bugfix/set-vram-on-macs 2023-09-02 10:08:13 -04:00
85879d3013 remove additional unused scripts 2023-09-02 10:05:29 -04:00
4fa66b2ba8 ui: Move Coherence settings above mask settings 2023-09-03 01:39:01 +12:00
6cfabc585a feat: Add Coherence Mode - Mask 2023-09-03 01:26:32 +12:00
b5f42bedce feat: Add Coherence Mode 2023-09-03 00:34:37 +12:00
fded8bee39 chore: Regen schema 2023-09-02 23:13:29 +12:00
ec09e21fc2 Merge branch 'main' into lama-infill 2023-09-02 23:02:38 +12:00
7d50e413bc Merge branch 'main' into textfontimage 2023-09-02 18:12:56 +10:00
625b08cff7 chore: typegen 2023-09-02 13:03:48 +10:00
89b724d222 fix(ui): fix metadata parsing of older images
The metadata parsing was overly strict, not taking into account the shape of old metadata. Relaxed the schemas.

Also fixed a misspelling.
2023-09-02 13:03:48 +10:00
6f6d920686 [Feature] Support the XL inpainting model (#4431)
* add StableDiffusionXLInpaintPipeline to probe list

* add StableDiffusionXLInpaintPipeline to probe list

* Blackified (?)

---------

Authored-by: Lincoln Stein <lstein@gmail.com>
Mucked about with to get it merged by: Kent Keirsey <31807370+hipsterusername@users.noreply.github.com>
2023-09-01 22:58:14 -04:00
699dfa222e fix(ui): node UI elements do not select node on click
Add a click handler for node wrapper component that exclusively selects that node, IF no other modifier keys are held.

Technically I believe this means we are doubling up on the selection logic, as reactflow handles this internally also. But this is by far the most reliable way to fix the UX.
2023-09-02 12:11:07 +10:00
288aec7080 Fix sdxl lora loader input definitions, fix namings (#4435)
## What type of PR is this? (check all applicable)

- [x] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No


## Description


## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-09-02 13:45:31 +12:00
2c754cfce7 Merge branch 'main' into fix/lora_node_inputs_definition 2023-09-02 13:38:05 +12:00
8fa2302956 Fix name 2023-09-02 04:37:11 +03:00
ec2b44bfbd update hooks to pass in DTO 2023-09-02 11:36:46 +10:00
f8bb1f7a3e update getImageMetadataFromFile query to allow dynamic URL based on the image without using baseUrl for the rest of the endpoints 2023-09-02 11:36:46 +10:00
9c3405e0c0 Fix sdxl lora loader input definitions, fix namings 2023-09-02 04:34:17 +03:00
4b78deba92 Merge branch 'main' into bugfix/set-vram-on-macs 2023-09-02 11:33:20 +10:00
d099924ae9 Merge branch 'main' into bugfix/run-on-3.9 2023-09-02 11:33:09 +10:00
45259894e0 Merge branch 'main' into chore/clean-up-unused-files 2023-09-02 11:30:41 +10:00
94473c541d fix(ui): fix circular imports (#4434)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Description

The logic that introduced a circular import was actually extraneous. I
have entirely removed it.

This fixes the frontend lint test.
2023-09-02 13:29:25 +12:00
0a7d06f8c6 fix(ui): fix circular imports
The logic that introduced a circular import was actually extraneous. I have entirely removed it.
2023-09-02 11:26:48 +10:00
3288d9b31a Merge branch 'main' into chore/clean-up-unused-files 2023-09-02 11:13:15 +10:00
9cb04f6f80 chore: remove unused files 2023-09-02 11:12:19 +10:00
7269ed2a0a Merge branch 'main' into lama-infill 2023-09-02 11:21:31 +12:00
4092d051e8 fix: ControlImage Dimension retrieval not working as intended (#4432)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No


## Description


## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-09-02 11:19:56 +12:00
46bc6968b8 fix: ControlImage Dimension retrieval not working as intended 2023-09-02 11:11:34 +12:00
48484e9fc8 Merge branch 'main' into lama-infill 2023-09-02 11:08:31 +12:00
26f7adeaa3 fix: SDXL Lora Loader not showing weight input (#4430)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No


## Description


## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-09-02 11:07:44 +12:00
a12fbc7406 chore: black fix 2023-09-02 10:51:53 +12:00
ba2048dbc6 fix: SDXL Lora Loader not showing weight input 2023-09-02 10:47:55 +12:00
497f66e682 feat: Add Patchmatch Downscale control to UI + refine the ui there 2023-09-02 10:24:32 +12:00
b73216ef81 feat: Decrement Brush Size by 1 for values under 5 for more precision 2023-09-02 10:23:14 +12:00
469fc49a2f ui: Make patchmatch downscale options optional 2023-09-02 08:36:01 +12:00
a36cf2f1dd Add scale to patchmatch 2023-09-01 23:08:46 +03:00
5151798a16 Cleanup memory after model run 2023-09-01 20:50:39 +03:00
1a9f552a75 experimental: Add CV2 Infill 2023-09-02 04:48:18 +12:00
10e4d8b72d fix second place where __annotations__ called 2023-08-31 23:49:08 -04:00
6c2786201b Update invokeai/app/invocations/baseinvocation.py
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2023-08-31 23:45:19 -04:00
2cb57ef301 fix baseinvocation call to __attribute__ to work with py3.9 2023-08-31 23:11:54 -04:00
44b49c7f2d fixed true source of problem 2023-08-31 22:55:17 -04:00
52a5f1f56f prevent from trying to set vram on macs 2023-08-31 22:50:53 -04:00
7a295cbfd5 experimental: Pass Mask To Coherence Pass 2023-09-01 11:40:09 +12:00
6f162c5dec experimental: Dilate mask if blurred in Color Correction 2023-09-01 11:12:30 +12:00
b94ec14853 chore: Black lint fix 2023-09-01 09:19:10 +12:00
54cda8ea42 chore: Change LaMA log statement to use InvokeAI Logger 2023-09-01 09:17:41 +12:00
0d3d880323 feat: Re-Enable LaMa Infill 2023-09-01 09:13:28 +12:00
a74e2108bb Release/3.1.0 (#4397)
## What type of PR is this? (check all applicable)

This is the 3.1.0 release candidate. Minor bugfixes will be applied here
during testing and then merged into main upon release.
2023-08-31 13:34:53 -04:00
ca5689dc54 jigger model naming so that v1-5-inpaint is not the default on new installs 2023-08-31 10:56:25 -04:00
b567d65032 blackify and rerun frontend build 2023-08-31 10:35:17 -04:00
35ac8e78bd bump to release version 2023-08-31 10:33:02 -04:00
e90fd96eee fix(nodes): fix warning when using current image node 2023-08-31 13:40:38 +10:00
ed72d51969 fix(nodes): fix primitives defaults for collections 2023-08-31 13:22:31 +10:00
d5267357b1 Pad conditioning tensors from clip and clip2 in sdxl 2023-08-30 21:28:40 -04:00
e085eb63bd Check noise and latents shapes, more informative error 2023-08-30 21:28:40 -04:00
8e470f9b6f fix(ui): fix metadata retrieval when has controlnet 2023-08-31 11:20:18 +10:00
83163ddd9a fix migrate script to work when autoimport directories are None 2023-08-30 18:46:17 -04:00
715686477e fix unknown PagingArgumentParser import error in ti-training 2023-08-30 17:49:19 -04:00
05e203570d make image import script work with python3.9; cleanup wheel creator 2023-08-30 17:35:58 -04:00
2bd3cf28ea nodes phase 5: workflow saving and loading (#4353)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission

## Description

- Workflows are saved to image files directly
- Image-outputting nodes have an `Embed Workflow` checkbox which, if
enabled, saves the workflow
- `BaseInvocation` now has a `workflow: Optional[str]` field, so all
nodes automatically have the field (but again only image-outputting
nodes display this in UI)
- If this field is enabled, when the graph is created, the workflow is
stringified and set in this field
- Nodes should add `workflow=self.workflow` when they save their output
image to have the workflow written to the image
- Uploads now have their metadata retained so that you can upload
somebody else's image and have access to that workflow
- Graphs are no longer saved to images, workflows replace them
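
A rough sketch of the pattern, with a hypothetical `save_image` callable standing in for the real image service:

```py
# Hypothetical sketch: an image-outputting node forwarding its workflow when it saves.
from typing import Optional

class BaseInvocation:
    workflow: Optional[str] = None  # stringified workflow, set when the graph is created

class SaveImageInvocation(BaseInvocation):
    def invoke(self, image, save_image):
        # save_image stands in for the app's image service; the real call signature differs.
        return save_image(image, workflow=self.workflow)
```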

### TODO
- Images created in the linear UI do not have a workflow saved yet. Need
to write a function to build a workflow around the linear UI graph when
using linear tabs. Unfortunately it will not have the nice positioning
and size data the node editor gives you when you save a workflow...
we'll have to figure out how to handle this.

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->
2023-08-30 15:05:17 -04:00
3cd2d3b764 fix: SDXL T2I and L2I not respecting Scaled on Canvas 2023-08-31 06:45:21 +12:00
4bac36356a fix: Create SDXL Refiner Create Mask only in inpaint & outpaint 2023-08-31 06:33:09 +12:00
97763f778a fix: SDXL Refiner not working with Canvas Inpaint & Outpaint 2023-08-31 06:26:02 +12:00
754666ed09 fix: Missing SDXL Refiner Seamless VAE plug 2023-08-31 05:49:02 +12:00
4c407328f2 fix: SDXL Refiner Seamless Interaction 2023-08-31 05:14:19 +12:00
943bedadf2 ui: Rename ControlNet Collapse header to Control Adapters 2023-08-31 01:44:13 +12:00
667d4deeb7 feat(ui): improved model node ui 2023-08-30 22:36:40 +10:00
adfdb02c1b fix(ui): fix workflow edge validation for collapsed edges 2023-08-30 22:36:15 +10:00
24d44ca559 feat(nodes): add scheduler invocation 2023-08-30 22:35:47 +10:00
216dff143e feat(ui): swath of UI tweaks and improvements 2023-08-30 21:31:58 +10:00
4047343503 Add textfontimage node to communityNodes.md 2023-08-30 19:19:49 +10:00
f2334ec302 fix(ui): reset node execution states on cancel 2023-08-30 18:58:27 +10:00
044d4c107a feat(nodes): move all invocation metadata (type, title, tags, category) to decorator
All invocation metadata (type, title, tags and category) are now defined in decorators.

The decorators add the `type: Literal["invocation_type"] = "invocation_type"` field to the invocation.

Category is a new invocation metadata, but it is not used by the frontend just yet.

- `@invocation()` decorator for invocations

```py
@invocation(
    "sdxl_compel_prompt",
    title="SDXL Prompt",
    tags=["sdxl", "compel", "prompt"],
    category="conditioning",
)
class SDXLCompelPromptInvocation(BaseInvocation, SDXLPromptInvocationBase):
    ...
```

- `@invocation_output()` decorator for invocation outputs

```py
@invocation_output("clip_skip_output")
class ClipSkipInvocationOutput(BaseInvocationOutput):
    ...
```

- update invocation docs
- add category to decorator
- regen frontend types
2023-08-30 18:35:12 +10:00
ae05d34584 fix(nodes): fix uploading image metadata retention
was causing failure to save images
2023-08-30 14:52:50 +10:00
94d0c18cbd feat(ui): remove highlight on mouseover 2023-08-30 13:22:59 +10:00
7b49f96472 feat(ui): style input fields 2023-08-30 13:19:37 +10:00
9a2c0554de feat(ui): better workflow validation and parsing
Checks for the existence of nodes for each edge - does not yet check the types.
2023-08-30 13:02:49 +10:00
68fd07a606 Merge branch 'feat/nodes-phase-5' of https://github.com/invoke-ai/InvokeAI into feat/nodes-phase-5 2023-08-30 14:14:05 +12:00
71591d0bee Merge branch 'main' into feat/nodes-phase-5 2023-08-30 12:13:08 +10:00
8014fc2f4f Revert "fix(ui): fix control image save button logic"
This reverts commit d8ce20c06f.
2023-08-30 12:12:54 +10:00
29112f96d2 Merge branch 'main' into feat/nodes-phase-5 2023-08-30 14:11:49 +12:00
4405c39e48 [3.1] UI Fixes (#4376)
## What type of PR is this? (check all applicable)

- [x] Feature
- [x] Bug Fix


## Have you discussed this change with the InvokeAI team?
- [x] Yes

## Description
- Keep Boards Modal open by default.
- Combine Coherence and Mask settings under Compositing
- Auto Change Dimensions based on model type (option)
- Size resets are now model dependent
- Add Set Control Image Height & Width to Width and Height option.
- Fix numerous color & spacing issues (especially those pertaining to
sliders being too close to the bottom)
- Add Lock Ratio Option
2023-08-30 14:10:42 +12:00
1d6be7f7fd Merge branch 'ui-fixes' of https://github.com/blessedcoolant/InvokeAI into ui-fixes 2023-08-30 14:08:39 +12:00
64723f0628 fix: ControlNet DnD icons repeated twice 2023-08-30 14:07:24 +12:00
8982543312 fix(ui): fix control image save button logic 2023-08-30 11:58:15 +10:00
d8ce20c06f fix(ui): fix control image save button logic 2023-08-30 11:33:38 +10:00
0ed6a141f1 Merge branch 'main' into feat/nodes-phase-5 2023-08-30 11:15:34 +10:00
33cb6cb4d8 Merge branch 'main' into ui-fixes 2023-08-30 12:58:43 +12:00
600e9ecf8d Hotfix to make second order schedulers work with mask (#4378)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No


## Description


## Related Tickets & Documents


## QA Instructions, Screenshots, Recordings


## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_
2023-08-30 12:49:04 +12:00
ca15b8b33e Fix wrong timestep selection in some cases (dpmpp_sde) 2023-08-30 03:40:59 +03:00
8562dbaaa8 Hotfix to make second order schedulers work with mask 2023-08-30 02:18:08 +03:00
db4d35ed45 ui: update scaled width and height sliders to be model sensitive 2023-08-30 10:28:54 +12:00
65fb6af01f ui: Make aspect ratio logic more robust 2023-08-30 10:15:26 +12:00
c6bab14043 ui: actually resolve circulars + fix flip bounding boxes AR unset 2023-08-30 09:33:04 +12:00
55f19aff3a ui: encase Denoising Strength to make it more prominent 2023-08-30 09:32:41 +12:00
1b6586dd8c fix: cyclic redundancy 2023-08-30 09:12:07 +12:00
b5da7faafb ui: make bounding box swap also unlock Aspect Ratio 2023-08-30 09:06:38 +12:00
b13a06f650 ui: map aspect ratios instead of manually creating the array 2023-08-30 08:52:11 +12:00
8e4d288f02 ui: Make swap size unlock fixed ratio
Because it is no longer relevant
2023-08-30 08:44:34 +12:00
8d4caaabb0 ui: Simply collapse spacing 2023-08-30 08:40:17 +12:00
171a0eaf51 feat: Add Lock Ratio Option 2023-08-30 07:04:08 +12:00
2469859c01 feat: Add Set Control Image Width / Height to User Settings 2023-08-30 06:23:02 +12:00
cff391aa1d feat: Update size resets to be model dependent 2023-08-30 05:58:07 +12:00
4fd4aee2ab feat: Auto Change Dimensions on Model Switch by Type 2023-08-30 05:49:57 +12:00
f5c5f59220 minor: tweak padding on ControlNet Collapse 2023-08-30 05:24:42 +12:00
9afc909ff0 ui: tweak parameter options spacing 2023-08-30 05:22:44 +12:00
176d41d624 ui: Add SubParametersWrapper 2023-08-30 05:05:54 +12:00
9eed8cdc27 ui: fix some minor spacing and color issues 2023-08-30 04:51:53 +12:00
98e905ee48 ui: Combine mask and coherence under Compositing 2023-08-30 04:51:32 +12:00
52c2397498 ui: Keep boards modal open by default 2023-08-30 04:17:30 +12:00
9f9807d7f7 fix: ControlNet Preprocessed Image Save Icon Missing (#4375)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No


## Description


## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-08-30 04:06:04 +12:00
11fa87388b fix: ControlNet Preprocessed Image Save Icon Missing 2023-08-30 04:05:36 +12:00
258b0814a8 Merge branch 'main' into feat/nodes-phase-5 2023-08-30 02:33:49 +12:00
dd2057322c enable .and() syntax and long prompts (#4112)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [X] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission

In current main, long prompts and support for [Compel's `.and()`
syntax](https://github.com/damian0815/compel/blob/main/doc/syntax.md#conjunction)
are missing. This PR adds them back.

### needs Compel>=2.0.2.dev1
2023-08-30 02:30:22 +12:00
41c5963e41 Merge branch 'main' into pr/4112 2023-08-30 02:22:37 +12:00
ed1456e0cc feat: Send Canvas Image & Mask To ControlNet (#4374)
## What type of PR is this? (check all applicable)

- [x] Feature


## Have you discussed this change with the InvokeAI team?
- [x] Yes

      
## Description

Send stuff directly from canvas to ControlNet

## Usage

- Two new buttons available on canvas Controlnet to import image and
mask.
- Click them.
2023-08-30 02:21:57 +12:00
15a927b517 fix: Processing Control Image not saving properly 2023-08-30 02:09:13 +12:00
121396f844 Fix tokenization log for sd models 2023-08-29 17:07:33 +03:00
d251124196 feat: Add Save Preprocessed Image To Board 2023-08-30 01:14:41 +12:00
243e76dd80 feat: Send Canvas Image & Mask To ControlNet 2023-08-29 23:48:28 +12:00
cfee8d9804 chore: seamless print statement cleanup 2023-08-29 13:09:30 +12:00
68dc3c6cb4 feat: Upgrade compel to 2.0.2 2023-08-29 12:58:59 +12:00
4196c669a0 chore: black / flake lint errors 2023-08-29 12:57:26 +12:00
a1398dec91 Merge branch 'main' into pr/4112 2023-08-29 12:56:59 +12:00
c4bec0e81b Merge branch 'main' into feat/nodes-phase-5 2023-08-29 12:42:52 +12:00
a03233bd8a Add Next/Prev Buttons CurrentImageNode.tsx (#4352)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [X] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No


## Description
Adds Next and Prev buttons to the current image node.
As usual, you don't have to use them 😄

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-08-29 12:42:16 +12:00
6fdeeb8ce8 Merge branch 'main' into pr/4352 2023-08-29 12:40:01 +12:00
9993e4b02e fix: lint errors 2023-08-29 12:37:09 +12:00
e6b677873a chore: Regen schema 2023-08-29 12:20:55 +12:00
44e77589b7 cleanup: Print statement in seamless hotfix 2023-08-29 12:18:26 +12:00
d0c74822eb resolve: Merge conflicts 2023-08-29 12:08:00 +12:00
383d008529 Merge branch 'main' into feat/nodes-phase-5 2023-08-29 12:05:28 +12:00
59511783fc Seamless Patch from Stalker (#4372)
Last commit that didn't get merged in with #4370
2023-08-29 08:57:06 +12:00
605e13eac0 chore: black fix 2023-08-29 07:50:17 +12:00
2a1d7342a7 Seamless Patch from Stalker 2023-08-28 15:48:05 -04:00
d1efabaf2f Seamless Implementation (#4370)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [x] No


## Description
Adds Seamless back into the options for Denoising.
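
A minimal sketch of one common way seamless tiling is implemented, switching `Conv2d` layers to circular padding so features wrap at the borders; per-axis (X-only / Y-only) control needs asymmetric padding and is omitted here:

```py
# Hypothetical sketch: make a UNet/VAE tile seamlessly by using circular padding.
import torch.nn as nn

def apply_seamless(module: nn.Module) -> nn.Module:
    for layer in module.modules():
        if isinstance(layer, nn.Conv2d):
            layer.padding_mode = "circular"  # wrap at the edges instead of zero-padding
    return module
```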

## Related Tickets & Documents

- Related Issue #3975 

## QA Instructions, Screenshots, Recordings

- Should test X, Y, and XY seamless tiling for all model architectures.

## Added/updated tests?

- [ ] Yes
- [x] No: Will need some guidance on automating this.
2023-08-28 15:18:04 -04:00
577464091c fix: SDXL LoRA's not working with seamless 2023-08-29 06:44:18 +12:00
aaae471910 fix: SDXL Canvas Inpaint & Outpaint being broken 2023-08-29 05:42:00 +12:00
56ed76fd95 fix: useMultiSelect file named incorrectly 2023-08-29 05:19:51 +12:00
5133825efb fix: Incorrect plug in Dynamic Prompt Graph 2023-08-29 05:17:46 +12:00
99475ab800 chore: pyflake lint fixes 2023-08-29 05:16:23 +12:00
50a266e064 feat: Add Seamless to Inpaint & Outpaint 2023-08-29 05:11:22 +12:00
87bb4d8f6e fix: Seamless not working with SDXL on Canvas 2023-08-29 04:52:41 +12:00
fcb60a7a59 chore: Update var names that were not updated 2023-08-29 04:33:22 +12:00
b5dac99411 feat: Add Seamless To Canvas Text To Image / Image To Image + SDXL + Refiner 2023-08-29 04:26:11 +12:00
a08d22587b fix: Incorrect node ID's for Seamless plugging 2023-08-29 04:21:11 +12:00
0ea67050f1 fix: Seamless not correctly plugged to SDXL Denoise Latents 2023-08-29 04:18:45 +12:00
6db19a8dee fix: Connection type on Seamless Node VAE Input 2023-08-29 04:15:15 +12:00
ef58635a76 chore: black lint 2023-08-29 04:04:03 +12:00
594e547c3b feat: Add Seamless to T2I / I2I / SDXL T2I / I2I + Refiner 2023-08-29 04:01:04 +12:00
2bf747caf6 Blackify 2023-08-28 18:36:27 +03:00
cd548f73fd Merge branch 'main' into feat_compel_and 2023-08-28 18:31:41 +03:00
bb085c5fba Move monkeypatch for diffusers/torch bug to hotfixes.py 2023-08-28 18:29:49 +03:00
3efb1f6f17 Merge branch 'Seamless' of https://github.com/invoke-ai/InvokeAI into Seamless 2023-08-28 10:30:43 -04:00
1ed0d7bf3c Merge branch 'main' into Seamless 2023-08-29 01:21:01 +12:00
a5fe6c8af6 enable preselected image actions (#4355)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No


## Description
Allow an image and an action to be passed into the app to set the starting state

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-08-29 01:15:08 +12:00
3c37245804 Merge branch 'main' into maryhipp/preselected-image 2023-08-29 01:12:09 +12:00
e60af40c8d chore: lint fixes 2023-08-29 01:11:55 +12:00
421f5b7d75 Seamless Updates 2023-08-28 08:43:08 -04:00
3ef36707a8 chore: Black lint 2023-08-28 23:10:00 +12:00
00ca9b027a Update CurrentImageNode.tsx 2023-08-28 19:15:53 +10:00
e81e17ccb6 Merge branch 'main' into nextprevcurrentimagenode 2023-08-28 18:05:33 +10:00
b9731cb434 Merge branch 'main' into Seamless 2023-08-28 00:12:23 -04:00
502570e083 fix: Inpaint Fixes (#4301)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [x] No


## Description
Fix masked generation with inpaint models

## Related Tickets & Documents
- Closes #4295 

## Added/updated tests?

- [ ] Yes
- [x] No
2023-08-28 00:11:11 -04:00
1f476692da Seamless fixes 2023-08-28 00:10:46 -04:00
5fdd25501b updates per stalkers comments 2023-08-27 22:54:53 -04:00
4f00dbe704 Merge branch 'main' into fix/inpaint_gen 2023-08-27 22:49:55 -04:00
b65c9ad612 Add monkeypatch for xformers to align unaligned attention_mask 2023-08-28 04:50:58 +03:00
ef3bf2803f Merge branch 'main' into feat_compel_and 2023-08-28 04:11:35 +03:00
f87b2364b7 Merge branch 'main' into nextprevcurrentimagenode 2023-08-28 10:44:17 +10:00
3e6c49001c Change antialias to True as input - image
Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2023-08-28 02:54:39 +03:00
19e0f360e7 Fix vae fields 2023-08-27 15:05:10 -04:00
ea40a7844a add VAE 2023-08-27 14:53:57 -04:00
0d2e194213 Fixed dict error 2023-08-27 14:21:56 -04:00
c6d00387a7 Revert old latent changes, update seamless 2023-08-27 14:15:37 -04:00
3de45af734 updates 2023-08-27 14:13:00 -04:00
526c7e7737 Provide antialias argument, as the behaviour will change in the future (deprecation warning) 2023-08-27 20:04:55 +03:00
1811b54727 Provide metadata to image creation call 2023-08-27 20:03:53 +03:00
95883c2efd Add Initial (non-working) Seamless Implementation 2023-08-27 12:29:11 -04:00
b5a83bbc8a Update CODEOWNERS 2023-08-27 11:28:42 -04:00
38851ae19a Merge branch 'main' into nextprevcurrentimagenode 2023-08-27 19:50:39 +10:00
71c3955530 feat: Add Scale Before Processing To Canvas Txt2Img / Img2Img (w/ SDXL) 2023-08-27 08:26:23 +12:00
3f8d17d6b7 chore: Black linting 2023-08-27 06:17:08 +12:00
b18695df6f fix: Update color of denoise mask socket
The previous red looked too much like the error color.
2023-08-27 06:16:13 +12:00
249048aae7 fix: Reorder DenoiseMask socket fields 2023-08-27 06:14:35 +12:00
521da555d6 feat: Update color of Denoise Mask socket 2023-08-27 06:09:02 +12:00
c923d094c6 rename: Inpaint Mask to Denoise Mask 2023-08-27 05:50:13 +12:00
226721ce51 feat: Setup UnifiedCanvas to work with new InpaintMaskField 2023-08-27 03:50:29 +12:00
af3e316cee chore: Regen schema 2023-08-27 03:12:03 +12:00
382a55afd3 fix: merge conflicts 2023-08-27 03:07:42 +12:00
e9633a3adb Merge branch 'main' into fix/inpaint_gen 2023-08-27 02:54:19 +12:00
61224e5cfe Update communityNodes.md (#4362)
Added a node to prompt Oobabooga Text-Generation-Webui

## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [x] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [x] Yes
- [ ] No


## Description


## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Added/updated tests?

- [ ] Yes
- [x] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-08-26 08:47:01 -04:00
dc581350e6 Merge branch 'main' into sammyf-patch-1-1 2023-08-26 08:46:38 -04:00
64c5b20ce3 Update communityNodes.md
discarded commits, resynced, added Load Video Frames to the community nodes. Hopefully I can start to understand github soon... sigh...
2023-08-25 23:43:57 -04:00
8a79798fa6 Merge branch 'main' into sammyf-patch-1-1 2023-08-25 20:40:34 -04:00
6b462f2ed5 feat(dev_reload): use jurigged to hot reload changes to Python source (#4313) 2023-08-25 14:27:40 -07:00
9c13f1b0fd Merge branch 'main' into feat/dev_reload 2023-08-25 17:06:58 -04:00
7ab3d3861c Merge branch 'main' into sammyf-patch-1-1 2023-08-26 00:48:05 +10:00
8e90468637 Node for Oobabooga, Update communityNodes.md
third try should be the right try. Now with link
2023-08-25 16:22:50 +02:00
f67bbadf83 Add to communityNodes.md 2023-08-25 08:43:05 -04:00
e2942b9b8d Add Retroize Nodes to Community Nodes 2023-08-25 08:41:49 -04:00
ac942a2034 Update communityNodes.md
Added a node to prompt Oobabooga Text-Generation-Webui
2023-08-25 10:55:52 +02:00
0bf5fee1b2 correct solution to crash 2023-08-24 23:16:03 -04:00
8114fc7bc2 UI tweak to column select 2023-08-24 23:16:03 -04:00
f9d2bcce04 blackify 2023-08-24 23:16:03 -04:00
84bf2a03e9 fix crash that occurs when no invokeai.yaml is present 2023-08-24 23:16:03 -04:00
4ee65d179c 3.1 Documentation Updates (#4318)
* Updating Nodes documentation

* Restructured nodes docs

* Comfy to Invoke Overview

* Corrections to Comfy -> Invoke Mappings

* Adding GA4 to docs

* Hiding CLI status

* Node doc updates

* File path updates

* Updates based on lstein's feedback

* Fix broken links

* Fix broken links

* Update comfy to invoke nodes list

* Updated prompts documenation

* Fix formatting
2023-08-25 11:59:46 +10:00
368ff17ed4 Merge branch 'main' into feat/dev_reload 2023-08-24 15:21:50 -07:00
d52a096607 enable preselected image actions 2023-08-24 13:29:53 -04:00
44b6adfb9f cleanup 2023-08-25 00:09:16 +10:00
466a819f06 render created_by in UI if its present 2023-08-25 00:09:16 +10:00
e6fd1c3d1f add optional field to type 2023-08-25 00:09:16 +10:00
7caccb11fa fix(backend): fix workflow not saving to image 2023-08-25 00:01:29 +10:00
e22c797fa3 fix(db): fix typing on ImageRecordChanges 2023-08-24 22:13:05 +10:00
0c5736d9c9 feat(ui): cache image metadata for 24 hours 2023-08-24 22:12:13 +10:00
2d8f7d425c feat(nodes): retain image metadata on save 2023-08-24 22:10:24 +10:00
7d1942e9f0 feat: workflow saving and loading 2023-08-24 21:42:32 +10:00
5d8cd62e44 Update CurrentImageNode.tsx 2023-08-24 19:20:35 +10:00
b6dc5c0fee Run Prettier 2023-08-24 18:45:38 +10:00
c1b8e4b501 Add Next/Prev Buttons CurrentImageNode.tsx 2023-08-24 18:31:27 +10:00
65feb92286 Merge branch 'main' into feat_compel_and 2023-08-24 17:38:35 +10:00
7f6fdf5d39 feat(ui): hide lama infill 2023-08-23 23:05:29 -04:00
40e6dd8464 feat(ui): use seed + 1 for second inpaint/outpaint pass 2023-08-23 23:05:29 -04:00
79df46bad2 chore: flake8 2023-08-23 23:05:29 -04:00
2f11936db0 fix(ui): use seed + 1 for inpaint/outpaint second pass 2023-08-23 23:05:29 -04:00
2ba52b8921 fix: File Tile Infill being broken 2023-08-23 23:05:29 -04:00
fa3fcd7820 cleanup: Lama 2023-08-23 23:05:29 -04:00
f45ea1145d fix: LoRA's not working with new canvas refine 2023-08-23 23:05:29 -04:00
5eb6148336 chore: black fix 2023-08-23 23:05:29 -04:00
49892faee4 experimental: LaMa Infill 2023-08-23 23:05:29 -04:00
7bb876a79b feat: Add Refiner Pass to Canvas Inpainting 2023-08-23 23:05:29 -04:00
f89be8c685 cleanup: Some minor cleanup 2023-08-23 23:05:29 -04:00
7e4009a58e chore: Rename canvas refine elements to have more apt names 2023-08-23 23:05:29 -04:00
5141e82f88 fix: Remove paste back from inpainting too 2023-08-23 23:05:29 -04:00
8277bfab5e feat: Add Refiner Pass to SDXL Outpainting
Also fix Scale Before Processing
2023-08-23 23:05:29 -04:00
0af8a0e84b feat: Replace Seam Painting with Refine Pass for Outpainting 2023-08-23 23:05:29 -04:00
9bafe4a94f fix: Paste Back Not Respecting Inpainted Mask 2023-08-23 23:05:29 -04:00
54e844f7da Merge branch 'main' into feat/dev_reload 2023-08-23 09:47:24 -07:00
111322b015 fix(ui): fix staging area shadow
It was too strong
2023-08-23 23:06:42 +10:00
859c155e7f fix(ui): fix IAICollapse styling 2023-08-23 23:06:42 +10:00
955fef35aa chore(ui): remove cruft related to old canvas scaling method 2023-08-23 23:06:42 +10:00
f3b293b5cc feat: Add Blank Image Node 2023-08-23 23:06:42 +10:00
6efa953172 fix(ui): fix canvas scaling 2023-08-23 23:06:42 +10:00
06ac16a77d feat(ui): style minimap 2023-08-23 23:06:42 +10:00
05c939d41e feat(ui): remove canvas beta layout 2023-08-23 23:06:42 +10:00
cfee02b753 feat(ui): align invoke buttons 2023-08-23 23:06:42 +10:00
4f088252db fix: Restyle the WorkflowPanel 2023-08-23 23:06:42 +10:00
ca3e826a14 feat: Make the in progress dark mode colors golden 2023-08-23 23:06:42 +10:00
0cb886b915 feat(ui): node buttons and shadow 2023-08-23 23:06:42 +10:00
2ec8fd3dc7 feat: Make the active processing node light up 2023-08-23 23:06:42 +10:00
90abd0fe49 fix(ui): position floating buttons 2023-08-23 23:06:42 +10:00
3651cf7ee2 wip buttons 2023-08-23 23:06:42 +10:00
8eca3bbbcd chore: Remove Pinned Hotkeys from Hotkeys Modal 2023-08-23 23:06:42 +10:00
73318c2847 feat(ui): remove floating panels, move all to resizable panels
There is a console error we can ignore when toggling gallery panel on canvas - this will be resolved in the next release of the resizable library
2023-08-23 23:06:42 +10:00
6d10e40c9b feat(ui): add selection mode toggle 2023-08-23 23:06:42 +10:00
5cf9b75d77 fix: Remove / as hotkey for add node and add tooltip 2023-08-23 23:06:42 +10:00
d4463674cf fix: Move add node hotkey to the right component 2023-08-23 23:06:42 +10:00
ce7172d78c feat(ui): add workflow saving/loading (wip)
Adds loading workflows with exhaustive validation via `zod`.

There is a load button but no dedicated save/load UI yet. Also need to add versioning to the workflow format itself.
2023-08-23 23:06:42 +10:00
38b2dedc1d feat(ui): use new ui_order to sort fields; connection-only fields in grid 2023-08-23 23:06:42 +10:00
cd73085eb9 feat(nodes): add ui_order node field attribute
used by UI to sort fields in workflow editor
2023-08-23 23:06:42 +10:00
2497aa5cd8 feat(ui): improve node schema parsing and add outputType to templates 2023-08-23 23:06:42 +10:00
089ada8cd1 chore(ui): typegen 2023-08-23 23:06:42 +10:00
35d14fc0f9 fix(ui): simplify typegen script
i had this committed earlier but lost it somehow
2023-08-23 23:06:42 +10:00
b79bca2c14 build(ui): fix up lint scripts (way faster now) 2023-08-23 23:06:42 +10:00
5fc60d0539 fix(nodes): id field is not an InputField 2023-08-23 23:06:42 +10:00
7b97754271 chore(ui): update all packages
- only breaking change was in `openapi-fetch`, easy fix
- also looks like prettier/eslint is a bit more comprehensive? caught a couple extra things
2023-08-23 23:06:42 +10:00
98dcc8d8b3 Merge remote-tracking branch 'origin/main' into feat/dev_reload 2023-08-22 18:18:16 -07:00
d3c177aaef Refactor config class and reorganize image generation options (#4309)
## What type of PR is this? (check all applicable)

- [X] Refactor
- [X] Feature

## Have you discussed this change with the InvokeAI team?
- [X] Yes
      
## Have you updated all relevant documentation?
- [X] Yes

## Description

### Refactoring

This PR refactors `invokeai.app.services.config` to be easier to
maintain by splitting off the argument, environment and init file
parsing code from the InvokeAIAppConfig object. This will hopefully make
it easier for people to find the place where the various settings are
defined.

### New Features

In collaboration with @StAlKeR7779, I have renamed and reorganized the
settings controlling image generation and model management to be more
intuitive. The relevant portion of the init file now looks like this:

```
  Model Cache:
    ram: 14.5
    vram: 0.5
    lazy_offload: true
  Device:
    precision: auto
    device: auto
  Generation:
    sequential_guidance: false
    attention_type: auto
    attention_slice_size: auto
    force_tiled_decode: false
```
Key differences are:
1. Split `Performance/Memory` into `Device`, `Generation` and `Model
Cache`
2. Added the ability to force the `device`. The value of this option is
one of {`auto`, `cpu`, `cuda`, `cuda:1`, `mps`}
3. Added the ability to force the `attention_type`. Possible values are
{`auto`, `normal`, `xformers`, `sliced`, `torch-sdp`}
4. Added the ability to force the `attention_slice_size` when `sliced`
attention is in use. The value of this option is one of {`auto`, `max`}
or an integer between 1 and 8.
 
@StAlKeR7779 Please confirm that I wired the `attention_type` and
`attention_slice_size` configuration options to the diffusers backend
correctly.

In addition, I have exposed the generation-related configuration options
to the TUI:


![image](https://github.com/invoke-ai/InvokeAI/assets/111189/8c0235d4-c3b0-494e-a1ab-ff45cdbfd9af)

### Backward Compatibility

This refactor should be backward compatible with earlier versions of
`invokeai.yaml`. If the user re-runs the `invokeai-configure` script,
`invokeai.yaml` will be upgraded to the current format. Several
configuration attributes had to be changed in order to preserve backward
compatibility. These attributes have been changed in the code where
appropriate. For the record:

| Old Name | Preferred New Name | Comment |
| ------------| ---------------|------------|
| `max_cache_size` | `ram_cache_size` | |
| `max_vram_cache` | `vram_cache_size` | |
| `always_use_cpu` | `use_cpu` | Better to check conf.device == "cpu" |
2023-08-22 21:01:25 -04:00
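As a quick illustration of the renames in the table above, a minimal sketch of migrating an old settings dictionary to the new attribute names; the helper itself is hypothetical, and only the name pairs come from the PR description.

```python
# Hypothetical helper: rename legacy invokeai.yaml attributes to the
# preferred new names listed in the table above.
LEGACY_RENAMES = {
    "max_cache_size": "ram_cache_size",
    "max_vram_cache": "vram_cache_size",
    "always_use_cpu": "use_cpu",
}

def migrate_legacy_settings(settings: dict) -> dict:
    """Return a copy of the settings with legacy keys renamed."""
    return {LEGACY_RENAMES.get(key, key): value for key, value in settings.items()}
```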
3f7ac556c6 Merge branch 'main' into refactor/rename-performance-options 2023-08-21 22:29:34 -04:00
56c052a747 Merge branch 'main' into feat/dev_reload 2023-08-21 18:22:31 -07:00
8087b428cc ui: node editor misc 2 (#4306)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission

## Description

Next batch of Node Editor changes.
2023-08-21 20:46:20 -04:00
0c639bd751 fix(tests): fix tests 2023-08-22 10:26:11 +10:00
be6ba57775 chore: flake8 2023-08-22 10:14:46 +10:00
2f8d3022a0 Merge branch 'main' into feat/nodes-phase-3 2023-08-22 10:09:25 +10:00
4da861e980 chore: clean up .gitignore 2023-08-22 10:02:03 +10:00
9d7dfeb857 Merge branch 'main' into refactor/rename-performance-options 2023-08-21 19:47:55 -04:00
572e6b892a stats: handle exceptions (#4320)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission

## Description

[fix(stats): fix fail case when previous graph is
invalid](d1d2d5a47d)

When retrieving a graph, it is parsed through pydantic. It is possible
that this graph is invalid, and an error is thrown.

Handle this by deleting the failed graph from the stats if this occurs.

[fix(stats): fix InvocationStatsService
types](1b70bd1380)

- move docstrings to ABC
- `start_time: int` -> `start_time: float`
- remove class attribute assignments in `StatsContext`
- add `update_mem_stats()` to ABC
- add class attributes to ABC, because they are referenced in instances
of the class. if they should not be on the ABC, then maybe there needs
to be some restructuring

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

On `main` (not this PR), create a situation in which a graph is valid
but will be rendered invalid on invoke. Easy way in node editor:
- create an `Integer Primitive` node, set value to 3
- create a `Resize Image` node and add an image to it
- route the output of `Integer Primitive` to the `width` of `Resize
Image`
- Invoke - this will cause first a `Validation Error` (expected), and if
you inspect the error in the JS console, you'll see it is a "session
retrieval error"
- Invoke again - this will also cause a `Validation Error`, but if you
inspect the error you should see it originates in the stats module (this
is the error this PR fixes)
- Fix the graph by setting the `Integer Primitive` to 512
- Invoke again - you get the same `Validation Error` originating from
stats, even tho there are no issues

Switch to this PR, and then you should only ever get the `Validation
Error` that is classified as a "session retrieval error".
2023-08-21 19:47:21 -04:00
76750b0121 doc(development): add section on hot reloading with --dev_reload 2023-08-21 16:45:39 -07:00
3039f92e69 doc(development): small updates to backend development intro 2023-08-21 16:38:47 -07:00
88963dbe6e Merge remote-tracking branch 'origin/main' into feat/dev_reload
# Conflicts:
#	invokeai/app/api_app.py
#	invokeai/app/services/config.py
2023-08-21 09:04:31 -07:00
7b2079cf83 feat: Add hotkey for Add Nodes (Shift+A)
Standard with other tools like Blender
2023-08-22 03:31:29 +12:00
535eb1db16 Merge branch 'main' into fix/stats/handle-exceptions 2023-08-21 19:19:32 +10:00
01738deb23 feat(ui): add eslint rules
- `curly` requires conditionals to use curly braces
- `react/jsx-curly-brace-presence` requires string props to *not* have curly braces
2023-08-21 19:17:36 +10:00
fbff22c94b feat(ui): memoize all components 2023-08-21 19:17:36 +10:00
5c305b1eeb feat(ui): add app error boundary
Should catch all app crashes
2023-08-21 19:17:36 +10:00
990b6b5f6a feat(ui): useful tooltips on invoke button 2023-08-21 19:17:36 +10:00
2dfcba8654 fix(ui): fix graphs using old field names 2023-08-21 19:17:36 +10:00
d95773f50f Revert "feat(nodes): make fields that accept connection input optional in OpenAPI schema"
This reverts commit 7325cbdd250153f347e3782265dd42783f7f1d00.
2023-08-21 19:17:36 +10:00
6d111aac90 fix(ui): fix node opacity slider hitbox 2023-08-21 19:17:36 +10:00
f9fc89b3c5 feat(ui): nodes scheduler type default value -> "euler" 2023-08-21 19:17:36 +10:00
ab76d54c10 feat(ui): update node schema parsing
simplified logic thanks to backend changes
2023-08-21 19:17:36 +10:00
56245a7406 chore(ui): regen types 2023-08-21 19:17:36 +10:00
bf04e913c2 feat(nodes): make primitive outputs not optional, fix primitive invocation defaults 2023-08-21 19:17:36 +10:00
cdc49456e8 feat(api): add additional class attribute to invocations and outputs in OpenAPI schema
It is `"invocation"` for invocations and `"output"` for outputs. Clients may use this to confidently and positively identify if an OpenAPI schema object is an invocation or output, instead of using a potentially fragile heuristic.
2023-08-21 19:17:36 +10:00
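To illustrate the commit above, a minimal sketch of how a client might use such a class attribute to tell invocation schemas apart from output schemas. The property key `"class"` is an assumption here; check the generated OpenAPI document for the actual name.

```python
# Hypothetical client-side helper: classify an OpenAPI component schema using
# the extra class attribute described above (key name assumed to be "class").
def classify_component(schema: dict) -> str:
    kind = schema.get("class")
    if kind in ("invocation", "output"):
        return kind
    return "other"
```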
37dc2d9d4d feat(nodes): update vae node tags 2023-08-21 19:17:36 +10:00
6e1ddb671e feat(nodes): make fields that accept connection input optional in OpenAPI schema
Doing this via `BaseInvocation`'s `Config.schema_extra()` means all clients get an accurate OpenAPI schema.

Shifts the responsibility of correct types to the backend, where previously it was on the client.
2023-08-21 19:17:36 +10:00
496a2db15c feat(nodes): make id, type required in BaseInvocation, BaseInvocationOutput
Doing this via these classes' `Config.schema_extra()` method makes it unintrusive and clients will get the correct types for these properties.

Shifts the responsibility of correct types to the backend, where previously it was on the client.
2023-08-21 19:17:36 +10:00
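For context on the two commits above, a rough sketch (pydantic v1 style) of how `Config.schema_extra()` can force properties such as `id` and `type` to be listed as required in the generated schema. The class below is illustrative, not InvokeAI's actual `BaseInvocation`.

```python
from typing import Any, Dict, Type

from pydantic import BaseModel


class ExampleInvocation(BaseModel):
    """Illustrative model; not the real BaseInvocation."""

    id: str = ""
    type: str = "example"

    class Config:
        @staticmethod
        def schema_extra(schema: Dict[str, Any], model: Type["ExampleInvocation"]) -> None:
            # Mark id and type as required even though they have defaults.
            required = set(schema.get("required", []))
            required.update({"id", "type"})
            schema["required"] = sorted(required)
```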
5292eda0e4 feat(nodes): remove "Loader" from model nodes
They are not loaders, they are selectors - remove this to reduce confusion.
2023-08-21 19:17:36 +10:00
4ac41bc4b1 feat(ui): adding node selects new node exclusively 2023-08-21 19:17:36 +10:00
4be4fc6731 feat(ui): rework add node select
- `space` and `/` open floating add node select
- improved filter logic (partial word matches)
2023-08-21 19:17:36 +10:00
a9fdc77edd feat(ui): rename node editor to workflow editor 2023-08-21 19:17:36 +10:00
385765faec fix(ui): fix missing tags on template parse 2023-08-21 19:17:36 +10:00
adb05cde5b feat(ui): simple partial search for nodes 2023-08-21 19:17:36 +10:00
211e8203f8 feat(ui): organise nodes files
- also remove old `.gitignore` of `inputs/` which wasn't used and was ignoring a frontend folder
2023-08-21 19:17:36 +10:00
0b9ae74192 fix(stats): RuntimeError: dictionary changed size during iteration 2023-08-21 19:17:36 +10:00
165c57c001 feat(ui): add select all to workflow editor 2023-08-21 19:17:36 +10:00
2514af79a0 feat(ui): crude node outputs display
Resets on invoke. Nothing fancy for the UI yet, just simple text (for numbers and strings) or image. For other output types, the output is shown as JSON.
2023-08-21 19:17:36 +10:00
f952f8f685 feat(ui): add typegen customisation for invocation outputs
The `type` property is required on all of them, but because this is defined in pydantic as a Literal, it is not required in the OpenAPI schema. Easier to fix this by changing the generated types than fiddling around with pydantic.
2023-08-21 19:17:36 +10:00
484b572023 feat(nodes): primitives use `value` instead of `a` as their field name 2023-08-21 19:17:36 +10:00
cd9baf8092 fix(stats): fix InvocationStatsService types
- move docstrings to ABC
- `start_time: int` -> `start_time: float`
- remove class attribute assignments in `StatsContext`
- add `update_mem_stats()` to ABC
- add class attributes to ABC, because they are referenced in instances of the class. if they should not be on the ABC, then maybe there needs to be some restructuring
2023-08-21 19:17:36 +10:00
81385d7d35 fix(stats): fix fail case when previous graph is invalid
When retrieving a graph, it is parsed through pydantic. It is possible that this graph is invalid, and an error is thrown.

Handle this by deleting the failed graph from the stats if this occurs.
2023-08-21 19:17:36 +10:00
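A rough sketch of the failure handling described in the commit above: if a stored graph no longer validates, its stats entry is dropped instead of letting the error propagate. The `Graph` model and the stats dictionary are illustrative stand-ins, not InvokeAI's actual classes.

```python
from pydantic import BaseModel, ValidationError


class Graph(BaseModel):
    """Illustrative stand-in for a stored session graph."""

    id: str
    nodes: dict = {}


def get_graph_or_prune_stats(graph_id: str, raw_graph: dict, stats: dict):
    try:
        return Graph.parse_obj(raw_graph)
    except ValidationError:
        # The previously stored graph is invalid; forget its stats entry
        # rather than failing the whole stats pass.
        stats.pop(graph_id, None)
        return None
```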
519bcb38c1 feat(ui): node delete, copy, paste 2023-08-21 19:17:36 +10:00
567d46b646 feat(ui): delete key works on workflow editor 2023-08-21 19:17:36 +10:00
030802295b feat(ui): reset only specific nodes/cnet that use images
Previously, if an image was used in nodes and you deleted it, it would reset the whole node editor. Same for ControlNet.

Now it only resets the specific nodes or controlnets that used that image.
2023-08-21 19:17:36 +10:00
a495c8c156 feat(ui): misc cleanups 2023-08-21 19:17:36 +10:00
ae6db67068 feat(ui): add width to mantine selects 2023-08-21 19:17:36 +10:00
3d84e7756a fix(nodes): fix field names 2023-08-21 19:17:36 +10:00
98431b3de4 feat: add Scheduler as field type
- update node schemas
- add `UIType.Scheduler`
- add field type to schema parser, input components
2023-08-21 19:17:36 +10:00
210a3f9aa7 feat(ui): make mantine single selects *exactly* the same size as chakra ones 2023-08-21 19:17:36 +10:00
9332ce639c fix(ui): fix node mouse interactions
Add "nodrag", "nowheel" and "nopan" class names in interactable elements, as neeeded. This fixes the mouse interactions and also makes the node draggable from anywhere without needing shift.

Also fixes ctrl/cmd multi-select to support deselecting.
2023-08-21 19:17:36 +10:00
84cf8bdc08 feat(ui): field context menu, add/remove from linear ui 2023-08-21 19:17:36 +10:00
64a6aa0293 fix(ui): move BoardContextMenu to use IAIContextMenu 2023-08-21 19:17:36 +10:00
5ae14bffba fix(ui): clear exposedFields when resetting graph 2023-08-21 19:17:36 +10:00
0909812c84 chore: black 2023-08-21 19:17:15 +10:00
66c0aea9e7 fix(nodes): removed duplicate node 2023-08-21 19:17:15 +10:00
2bcded78e1 add BlendInvocation 2023-08-21 19:17:15 +10:00
beb3e5aeb7 Report correctly to compel if we want get pooled in future(affects blend computation) 2023-08-21 19:05:40 +10:00
5b6069b916 blackify (again) 2023-08-20 16:06:01 -04:00
766cb887e4 resolve more flake8 problems 2023-08-20 15:57:15 -04:00
ef317be1f9 blackify (again) 2023-08-20 15:46:12 -04:00
027b84d1aa add noqa comments to util/__init__ 2023-08-20 15:43:17 -04:00
11b670755d fix flake8 error 2023-08-20 15:39:45 -04:00
a536719fc3 blackify 2023-08-20 15:27:51 -04:00
8e6d88e98c resolve merge conflicts 2023-08-20 15:26:52 -04:00
0f1b975d0e dep(diffusers): upgrade diffusers to 0.20 (#4311) 2023-08-18 18:22:11 -07:00
2fef478497 fix(convert_ckpt): Removed is_safetensors_available as safetensors is now a required dependency. 2023-08-18 11:05:59 -07:00
6df6abf6f6 Merge branch 'main' into dep/diffusers020 2023-08-18 11:02:52 -07:00
1b70bd1380 fix(stats): fix InvocationStatsService types
- move docstrings to ABC
- `start_time: int` -> `start_time: float`
- remove class attribute assignments in `StatsContext`
- add `update_mem_stats()` to ABC
- add class attributes to ABC, because they are referenced in instances of the class. if they should not be on the ABC, then maybe there needs to be some restructuring
2023-08-18 21:35:03 +10:00
d1d2d5a47d fix(stats): fix fail case when previous graph is invalid
When retrieving a graph, it is parsed through pydantic. It is possible that this graph is invalid, and an error is thrown.

Handle this by deleting the failed graph from the stats if this occurs.
2023-08-18 21:34:55 +10:00
3798c8bdb0 Merge branch 'main' into feat_compel_and 2023-08-18 17:04:03 +10:00
c49851e027 chore: minor cleanup after merge & flake8 2023-08-18 16:05:39 +10:00
3c43594c26 Merge branch 'main' into fix/inpaint_gen 2023-08-18 15:57:48 +10:00
c96ae4c331 Reverting late imports to fix tests 2023-08-18 15:52:04 +10:00
ce465acf04 Fixed OnnxRuntimeModel import 2023-08-18 15:52:04 +10:00
33ee418d8c Fixing class level import 2023-08-18 15:52:04 +10:00
4f1008f31f Installing Flake8-pyproject in GHA workflow 2023-08-18 15:52:04 +10:00
6cc629e19d Adding flake8 to GHA and pre-commit. Fixing missing flake8 2023-08-18 15:52:04 +10:00
537ae2f901 Resolving merge conflicts for flake8 2023-08-18 15:52:04 +10:00
f6db9da06c chore(ui): rename file to not cause madge to fail 2023-08-18 13:20:29 +10:00
a17dbd7df6 feat(ui): improve error toast messages 2023-08-18 13:20:29 +10:00
98a4cc20a9 Merge branch 'main' into dep/diffusers020 2023-08-17 20:04:11 -07:00
e2bdcc0271 Merge branch 'main' into refactor/rename-performance-options 2023-08-17 22:36:08 -04:00
ffd0f5924b pass lazy_offload to model cache 2023-08-17 22:35:16 -04:00
654dcd453f feat(dev_reload): use jurigged to hot reload changes to Python source 2023-08-17 19:02:44 -07:00
cfd827cfad Added node for creating mask inpaint 2023-08-18 04:07:40 +03:00
498d2ecc2b allow symbolic links to be followed during autoimport (#4268)
## What type of PR is this? (check all applicable)

- [X] Feature
- [X] Bug Fix

## Have you discussed this change with the InvokeAI team?
- [X] Yes

## Have you updated all relevant documentation?
- [X] Yes

## Description

Follow symbolic links when auto importing from a directory. Previously
links to files worked, but links to directories weren’t entered during
the scanning/import process.
2023-08-17 20:31:00 -04:00
4ebe839d54 Merge branch 'main' into bugfix/enable-links-in-autoimport 2023-08-17 18:55:45 -04:00
bc16b50302 add followlinks to all os.walk() calls 2023-08-17 18:54:18 -04:00
4267132926 dep(diffusers): upgrade diffusers to 0.20
Removed `is_safetensors_available` as safetensors is now a required dependency of diffusers.
2023-08-17 13:42:29 -07:00
e9a294f733 Merge branch 'main' into fix/inpaint_gen 2023-08-17 16:13:33 -04:00
b69f26c85c add support for "balanced" attention slice size 2023-08-17 16:11:09 -04:00
23b4e1cea0 Merge branch 'main' into refactor/rename-performance-options 2023-08-17 14:43:00 -04:00
635a814dfb fix up documentation 2023-08-17 14:32:05 -04:00
c19835c2d0 wired attention configuration into backend 2023-08-17 14:20:45 -04:00
ed38eaa10c refactor InvokeAIAppConfig 2023-08-17 13:47:26 -04:00
b213335316 feat: Add InpaintMask Field type 2023-08-18 04:54:23 +12:00
ff5c725586 Update mask field type 2023-08-17 19:35:03 +03:00
bf0dfcac2f Add inapint mask field class 2023-08-17 19:19:07 +03:00
842eb4bb0a Merge branch 'main' into bugfix/enable-links-in-autoimport 2023-08-17 07:20:26 -04:00
503e3bca54 revise config but need to migrate old format to new 2023-08-16 23:30:00 -04:00
5aa7bfebd4 Fix masked generation with inpaint models 2023-08-16 20:28:33 +03:00
b524bf3c04 allow symbolic links to be followed during autoimport 2023-08-14 07:37:47 -04:00
e7d9e552a7 Merge branch 'main' into feat_compel_and 2023-08-01 07:20:25 -04:00
d2c55dc011 enable .and() syntax and long prompts 2023-07-30 14:20:59 +02:00
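For readers unfamiliar with the `.and()` conjunction syntax enabled by the commit above, a small sketch using the compel library with a diffusers pipeline; the model id and prompt text are placeholders, and exact behaviour depends on the compel version bundled with this release.

```python
from compel import Compel
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
compel = Compel(tokenizer=pipe.tokenizer, text_encoder=pipe.text_encoder)

# .and() combines the quoted sub-prompts into a single conjunction.
conditioning = compel.build_conditioning_tensor(
    '("a castle on a hill at dusk", "oil painting, thick brushstrokes").and()'
)
```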
581 changed files with 20456 additions and 21060 deletions

38
.github/CODEOWNERS vendored
View File

@ -1,34 +1,34 @@
# continuous integration
/.github/workflows/ @lstein @blessedcoolant
/.github/workflows/ @lstein @blessedcoolant @hipsterusername
# documentation
/docs/ @lstein @blessedcoolant @hipsterusername
/mkdocs.yml @lstein @blessedcoolant
/docs/ @lstein @blessedcoolant @hipsterusername @Millu
/mkdocs.yml @lstein @blessedcoolant @hipsterusername @Millu
# nodes
/invokeai/app/ @Kyle0654 @blessedcoolant @psychedelicious @brandonrising
/invokeai/app/ @Kyle0654 @blessedcoolant @psychedelicious @brandonrising @hipsterusername
# installation and configuration
/pyproject.toml @lstein @blessedcoolant
/docker/ @lstein @blessedcoolant
/scripts/ @ebr @lstein
/installer/ @lstein @ebr
/invokeai/assets @lstein @ebr
/invokeai/configs @lstein
/invokeai/version @lstein @blessedcoolant
/pyproject.toml @lstein @blessedcoolant @hipsterusername
/docker/ @lstein @blessedcoolant @hipsterusername
/scripts/ @ebr @lstein @hipsterusername
/installer/ @lstein @ebr @hipsterusername
/invokeai/assets @lstein @ebr @hipsterusername
/invokeai/configs @lstein @hipsterusername
/invokeai/version @lstein @blessedcoolant @hipsterusername
# web ui
/invokeai/frontend @blessedcoolant @psychedelicious @lstein @maryhipp
/invokeai/backend @blessedcoolant @psychedelicious @lstein @maryhipp
/invokeai/frontend @blessedcoolant @psychedelicious @lstein @maryhipp @hipsterusername
/invokeai/backend @blessedcoolant @psychedelicious @lstein @maryhipp @hipsterusername
# generation, model management, postprocessing
/invokeai/backend @damian0815 @lstein @blessedcoolant @gregghelt2 @StAlKeR7779 @brandonrising
/invokeai/backend @damian0815 @lstein @blessedcoolant @gregghelt2 @StAlKeR7779 @brandonrising @ryanjdick @hipsterusername
# front ends
/invokeai/frontend/CLI @lstein
/invokeai/frontend/install @lstein @ebr
/invokeai/frontend/merge @lstein @blessedcoolant
/invokeai/frontend/training @lstein @blessedcoolant
/invokeai/frontend/web @psychedelicious @blessedcoolant @maryhipp
/invokeai/frontend/CLI @lstein @hipsterusername
/invokeai/frontend/install @lstein @ebr @hipsterusername
/invokeai/frontend/merge @lstein @blessedcoolant @hipsterusername
/invokeai/frontend/training @lstein @blessedcoolant @hipsterusername
/invokeai/frontend/web @psychedelicious @blessedcoolant @maryhipp @hipsterusername

View File

@ -1,5 +1,5 @@
name: Feature Request
description: Commit a idea or Request a new feature
description: Contribute an idea or request a new feature
title: '[enhancement]: '
labels: ['enhancement']
# assignees:
@ -9,14 +9,14 @@ body:
- type: markdown
attributes:
value: |
Thanks for taking the time to fill out this Feature request!
Thanks for taking the time to fill out this feature request!
- type: checkboxes
attributes:
label: Is there an existing issue for this?
description: |
Please make use of the [search function](https://github.com/invoke-ai/InvokeAI/labels/enhancement)
to see if a simmilar issue already exists for the feature you want to request
to see if a similar issue already exists for the feature you want to request
options:
- label: I have searched the existing issues
required: true
@ -36,7 +36,7 @@ body:
label: What should this feature add?
description: Please try to explain the functionality this feature should add
placeholder: |
Instead of one huge textfield, it would be nice to have forms for bug-reports, feature-requests, ...
Instead of one huge text field, it would be nice to have forms for bug-reports, feature-requests, ...
Great benefits with automatic labeling, assigning and other functionalitys not available in that form
via old-fashioned markdown-templates. I would also love to see the use of a moderator bot 🤖 like
https://github.com/marketplace/actions/issue-moderator-with-commands to auto close old issues and other things
@ -51,6 +51,6 @@ body:
- type: textarea
attributes:
label: Aditional Content
label: Additional Content
description: Add any other context or screenshots about the feature request here.
placeholder: This is a Mockup of the design how I imagine it <screenshot>
placeholder: This is a mockup of the design how I imagine it <screenshot>

View File

@ -1,6 +1,6 @@
name: style checks
# just formatting for now
# TODO: add isort and flake8 later
# just formatting and flake8 for now
# TODO: add isort later
on:
pull_request:
@ -20,8 +20,8 @@ jobs:
- name: Install dependencies with pip
run: |
pip install black
pip install black flake8 Flake8-pyproject
# - run: isort --check-only .
- run: black --check .
# - run: flake8
- run: flake8

37
.gitignore vendored
View File

@ -1,23 +1,8 @@
# ignore default image save location and model symbolic link
.idea/
embeddings/
outputs/
models/ldm/stable-diffusion-v1/model.ckpt
**/restoration/codeformer/weights
# ignore user models config
configs/models.user.yaml
config/models.user.yml
invokeai.init
.version
.last_model
# ignore the Anaconda/Miniconda installer used while building Docker image
anaconda.sh
# ignore a directory which serves as a place for initial images
inputs/
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
@ -189,39 +174,17 @@ cython_debug/
# option (not recommended) you can uncomment the following to ignore the entire idea folder.
#.idea/
src
**/__pycache__/
outputs
# Logs and associated folders
# created from generated embeddings.
logs
testtube
checkpoints
# If it's a Mac
.DS_Store
invokeai/frontend/yarn.lock
invokeai/frontend/node_modules
# Let the frontend manage its own gitignore
!invokeai/frontend/web/*
# Scratch folder
.scratch/
.vscode/
gfpgan/
models/ldm/stable-diffusion-v1/*.sha256
# GFPGAN model files
gfpgan/
# config file (will be created by installer)
configs/models.yaml
# ignore initfile
.invokeai
# ignore environment.yml and requirements.txt
# these are links to the real files in environments-and-requirements

View File

@ -8,3 +8,10 @@ repos:
language: system
entry: black
types: [python]
- id: flake8
name: flake8
stages: [commit]
language: system
entry: flake8
types: [python]

View File

@ -43,7 +43,7 @@ Web Interface, interactive Command Line Interface, and also serves as
the foundation for multiple commercial products.
**Quick links**: [[How to
Install](https://invoke-ai.github.io/InvokeAI/#installation)] [<a
Install](https://invoke-ai.github.io/InvokeAI/installation/INSTALLATION/)] [<a
href="https://discord.gg/ZmtBAhwWhy">Discord Server</a>] [<a
href="https://invoke-ai.github.io/InvokeAI/">Documentation and
Tutorials</a>] [<a
@ -81,7 +81,7 @@ Table of Contents 📝
## Quick Start
For full installation and upgrade instructions, please see:
[InvokeAI Installation Overview](https://invoke-ai.github.io/InvokeAI/installation/)
[InvokeAI Installation Overview](https://invoke-ai.github.io/InvokeAI/installation/INSTALLATION/)
If upgrading from version 2.3, please read [Migrating a 2.3 root
directory to 3.0](#migrating-to-3) first.

Binary image file changed (size: 335 KiB -> 319 KiB).

Binary image file changed (size: 179 KiB -> 197 KiB).

Binary image file changed (size: 501 KiB -> 421 KiB).

Binary image file changed (size: 473 KiB -> 585 KiB).

Binary image file changed (size: 557 KiB -> 598 KiB).

Binary image file changed (size: 340 KiB -> 438 KiB).

View File

@ -14,11 +14,14 @@ To join, just raise your hand on the InvokeAI Discord server (#dev-chat) or the
#### Development
If you'd like to help with development, please see our [development guide](contribution_guides/development.md). If you're unfamiliar with contributing to open source projects, there is a tutorial contained within the development guide.
#### Nodes
If you'd like to help with development, please see our [nodes contribution guide](/nodes/contributingNodes). If you're unfamiliar with contributing to open source projects, there is a tutorial contained within the development guide.
#### Documentation
If you'd like to help with documentation, please see our [documentation guide](contribution_guides/documenation.md).
If you'd like to help with documentation, please see our [documentation guide](contribution_guides/documentation.md).
#### Translation
If you'd like to help with translation, please see our [translation guide](docs/contributing/.contribution_guides/translation.md).
If you'd like to help with translation, please see our [translation guide](contribution_guides/translation.md).
#### Tutorials
Please reach out to @imic or @hipsterusername on [Discord](https://discord.gg/ZmtBAhwWhy) to help create tutorials for InvokeAI.

View File

@ -29,12 +29,13 @@ The first set of things we need to do when creating a new Invocation are -
- Create a new class that derives from a predefined parent class called
`BaseInvocation`.
- The name of every Invocation must end with the word `Invocation` in order for
it to be recognized as an Invocation.
- Every Invocation must have a `docstring` that describes what this Invocation
does.
- Every Invocation must have a unique `type` field defined which becomes its
indentifier.
- While not strictly required, we suggest every invocation class name ends in
"Invocation", eg "CropImageInvocation".
- Every Invocation must use the `@invocation` decorator to provide its unique
invocation type. You may also provide its title, tags and category using the
decorator.
- Invocations are strictly typed. We make use of the native
[typing](https://docs.python.org/3/library/typing.html) library and the
installed [pydantic](https://pydantic-docs.helpmanual.io/) library for
@ -43,12 +44,11 @@ The first set of things we need to do when creating a new Invocation are -
So let us do that.
```python
from typing import Literal
from .baseinvocation import BaseInvocation
from .baseinvocation import BaseInvocation, invocation
@invocation('resize')
class ResizeInvocation(BaseInvocation):
'''Resizes an image'''
type: Literal['resize'] = 'resize'
```
That's great.
@ -62,8 +62,10 @@ our Invocation takes.
### **Inputs**
Every Invocation input is a pydantic `Field` and like everything else should be
strictly typed and defined.
Every Invocation input must be defined using the `InputField` function. This is
a wrapper around the pydantic `Field` function, which handles a few extra things
and provides type hints. Like everything else, this should be strictly typed and
defined.
So let us create these inputs for our Invocation. First up, the `image` input we
need. Generally, we can use standard variable types in Python but InvokeAI
@ -76,55 +78,51 @@ create your own custom field types later in this guide. For now, let's go ahead
and use it.
```python
from typing import Literal, Union
from pydantic import Field
from .baseinvocation import BaseInvocation
from ..models.image import ImageField
from .baseinvocation import BaseInvocation, InputField, invocation
from .primitives import ImageField
@invocation('resize')
class ResizeInvocation(BaseInvocation):
'''Resizes an image'''
type: Literal['resize'] = 'resize'
# Inputs
image: Union[ImageField, None] = Field(description="The input image", default=None)
image: ImageField = InputField(description="The input image")
```
Let us break down our input code.
```python
image: Union[ImageField, None] = Field(description="The input image", default=None)
image: ImageField = InputField(description="The input image")
```
| Part | Value | Description |
| --------- | ---------------------------------------------------- | -------------------------------------------------------------------------------------------------- |
| Name | `image` | The variable that will hold our image |
| Type Hint | `Union[ImageField, None]` | The types for our field. Indicates that the image can either be an `ImageField` type or `None` |
| Field | `Field(description="The input image", default=None)` | The image variable is a field which needs a description and a default value that we set to `None`. |
| Part | Value | Description |
| --------- | ------------------------------------------- | ------------------------------------------------------------------------------- |
| Name | `image` | The variable that will hold our image |
| Type Hint | `ImageField` | The types for our field. Indicates that the image must be an `ImageField` type. |
| Field | `InputField(description="The input image")` | The image variable is an `InputField` which needs a description. |
Great. Now let us create our other inputs for `width` and `height`
```python
from typing import Literal, Union
from pydantic import Field
from .baseinvocation import BaseInvocation
from ..models.image import ImageField
from .baseinvocation import BaseInvocation, InputField, invocation
from .primitives import ImageField
@invocation('resize')
class ResizeInvocation(BaseInvocation):
'''Resizes an image'''
type: Literal['resize'] = 'resize'
# Inputs
image: Union[ImageField, None] = Field(description="The input image", default=None)
width: int = Field(default=512, ge=64, le=2048, description="Width of the new image")
height: int = Field(default=512, ge=64, le=2048, description="Height of the new image")
image: ImageField = InputField(description="The input image")
width: int = InputField(default=512, ge=64, le=2048, description="Width of the new image")
height: int = InputField(default=512, ge=64, le=2048, description="Height of the new image")
```
As you might have noticed, we added two new parameters to the field type for
`width` and `height` called `gt` and `le`. These basically stand for _greater
than or equal to_ and _less than or equal to_. There are various other param
types for field that you can find on the **pydantic** documentation.
As you might have noticed, we added two new arguments to the `InputField`
definition for `width` and `height`, called `ge` and `le`. They stand for
_greater than or equal to_ and _less than or equal to_.
These impose constraints on those fields, and will raise an exception if the
values do not meet the constraints. Field constraints are provided by
**pydantic**, so anything you see in the **pydantic docs** will work.
**Note:** _Any time it is possible to define constraints for our field, we
should do it so the frontend has more information on how to parse this field._
@ -141,20 +139,17 @@ that are provided by it by InvokeAI.
Let us create this function first.
```python
from typing import Literal, Union
from pydantic import Field
from .baseinvocation import BaseInvocation, InvocationContext
from ..models.image import ImageField
from .baseinvocation import BaseInvocation, InputField, invocation
from .primitives import ImageField
@invocation('resize')
class ResizeInvocation(BaseInvocation):
'''Resizes an image'''
type: Literal['resize'] = 'resize'
# Inputs
image: Union[ImageField, None] = Field(description="The input image", default=None)
width: int = Field(default=512, ge=64, le=2048, description="Width of the new image")
height: int = Field(default=512, ge=64, le=2048, description="Height of the new image")
image: ImageField = InputField(description="The input image")
width: int = InputField(default=512, ge=64, le=2048, description="Width of the new image")
height: int = InputField(default=512, ge=64, le=2048, description="Height of the new image")
def invoke(self, context: InvocationContext):
pass
@ -173,21 +168,18 @@ all the necessary info related to image outputs. So let us use that.
We will cover how to create your own output types later in this guide.
```python
from typing import Literal, Union
from pydantic import Field
from .baseinvocation import BaseInvocation, InvocationContext
from ..models.image import ImageField
from .baseinvocation import BaseInvocation, InputField, invocation
from .primitives import ImageField
from .image import ImageOutput
@invocation('resize')
class ResizeInvocation(BaseInvocation):
'''Resizes an image'''
type: Literal['resize'] = 'resize'
# Inputs
image: Union[ImageField, None] = Field(description="The input image", default=None)
width: int = Field(default=512, ge=64, le=2048, description="Width of the new image")
height: int = Field(default=512, ge=64, le=2048, description="Height of the new image")
image: ImageField = InputField(description="The input image")
width: int = InputField(default=512, ge=64, le=2048, description="Width of the new image")
height: int = InputField(default=512, ge=64, le=2048, description="Height of the new image")
def invoke(self, context: InvocationContext) -> ImageOutput:
pass
@ -195,39 +187,34 @@ class ResizeInvocation(BaseInvocation):
Perfect. Now that we have our Invocation setup, let us do what we want to do.
- We will first load the image. Generally we do this using the `PIL` library but
we can use one of the services provided by InvokeAI to load the image.
- We will first load the image using one of the services provided by InvokeAI to
load the image.
- We will resize the image using `PIL` to our input data.
- We will output this image in the format we set above.
So let's do that.
```python
from typing import Literal, Union
from pydantic import Field
from .baseinvocation import BaseInvocation, InvocationContext
from ..models.image import ImageField, ResourceOrigin, ImageCategory
from .baseinvocation import BaseInvocation, InputField, invocation
from .primitives import ImageField
from .image import ImageOutput
@invocation("resize")
class ResizeInvocation(BaseInvocation):
'''Resizes an image'''
type: Literal['resize'] = 'resize'
"""Resizes an image"""
# Inputs
image: Union[ImageField, None] = Field(description="The input image", default=None)
width: int = Field(default=512, ge=64, le=2048, description="Width of the new image")
height: int = Field(default=512, ge=64, le=2048, description="Height of the new image")
image: ImageField = InputField(description="The input image")
width: int = InputField(default=512, ge=64, le=2048, description="Width of the new image")
height: int = InputField(default=512, ge=64, le=2048, description="Height of the new image")
def invoke(self, context: InvocationContext) -> ImageOutput:
# Load the image using InvokeAI's predefined Image Service.
image = context.services.images.get_pil_image(self.image.image_origin, self.image.image_name)
# Load the image using InvokeAI's predefined Image Service. Returns the PIL image.
image = context.services.images.get_pil_image(self.image.image_name)
# Resizing the image
# Because we used the above service, we already have a PIL image. So we can simply resize.
resized_image = image.resize((self.width, self.height))
# Preparing the image for output using InvokeAI's predefined Image Service.
# Save the image using InvokeAI's predefined Image Service. Returns the prepared PIL image.
output_image = context.services.images.create(
image=resized_image,
image_origin=ResourceOrigin.INTERNAL,
@ -241,7 +228,6 @@ class ResizeInvocation(BaseInvocation):
return ImageOutput(
image=ImageField(
image_name=output_image.image_name,
image_origin=output_image.image_origin,
),
width=output_image.width,
height=output_image.height,
@ -253,6 +239,24 @@ certain way that the images need to be dispatched in order to be stored and read
correctly. In 99% of the cases when dealing with an image output, you can simply
copy-paste the template above.
### Customization
We can use the `@invocation` decorator to provide some additional info to the
UI, like a custom title, tags and category.
We also encourage providing a version. This must be a
[semver](https://semver.org/) version string ("$MAJOR.$MINOR.$PATCH"). The UI
will let users know if their workflow is using a mismatched version of the node.
```python
@invocation("resize", title="My Resizer", tags=["resize", "image"], category="My Invocations", version="1.0.0")
class ResizeInvocation(BaseInvocation):
"""Resizes an image"""
image: ImageField = InputField(description="The input image")
...
```
That's it. You made your own **Resize Invocation**.
## Result
@ -270,9 +274,57 @@ new Invocation ready to be used.
![resize node editor](../assets/contributing/resize_node_editor.png)
# Advanced
## Contributing Nodes
## Custom Input Fields
Once you've created a Node, the next step is to share it with the community! The
best way to do this is to submit a Pull Request to add the Node to the
[Community Nodes](nodes/communityNodes) list. If you're not sure how to do that,
take a look at our [contributing nodes overview](contributingNodes).
## Advanced
### Custom Output Types
Like with custom inputs, sometimes you might find yourself needing custom
outputs that InvokeAI does not provide. We can easily set one up.
Now that you are familiar with Invocations and Inputs, let us use that knowledge
to create an output that has an `image` field, a `color` field and a `string`
field.
- An invocation output is a class that derives from the parent class of
`BaseInvocationOutput`.
- All invocation outputs must use the `@invocation_output` decorator to provide
their unique output type.
- Output fields must use the provided `OutputField` function. This is very
similar to the `InputField` function described earlier - it's a wrapper around
`pydantic`'s `Field()`.
- It is not mandatory but we recommend using names ending with `Output` for
output types.
- It is not mandatory but we highly recommend adding a `docstring` to describe
what your output type is for.
Now that we know the basic rules for creating a new output type, let us go ahead
and make it.
```python
from .baseinvocation import BaseInvocationOutput, OutputField, invocation_output
from .primitives import ImageField, ColorField
@invocation_output('image_color_string_output')
class ImageColorStringOutput(BaseInvocationOutput):
'''An output holding an image, a color and a string'''
image: ImageField = OutputField(description="The image")
color: ColorField = OutputField(description="The color")
text: str = OutputField(description="The string")
```
That's all there is to it.
<!-- TODO: DANGER - we probably do not want people to create their own field types, because this requires a lot of work on the frontend to accommodate.
### Custom Input Fields
Now that you know how to create your own Invocations, let us dive into slightly
more advanced topics.
@ -326,173 +378,7 @@ like this.
color: ColorField = Field(default=ColorField(r=0, g=0, b=0, a=0), description='Background color of an image')
```
**Extra Config**
All input fields also take an additional `Config` class that you can use to do
various advanced things like setting required parameters and etc.
Let us do that for our _ColorField_ and enforce all the values because we did
not define any defaults for our fields.
```python
class ColorField(BaseModel):
'''A field that holds the rgba values of a color'''
r: int = Field(ge=0, le=255, description="The red channel")
g: int = Field(ge=0, le=255, description="The green channel")
b: int = Field(ge=0, le=255, description="The blue channel")
a: int = Field(ge=0, le=255, description="The alpha channel")
class Config:
schema_extra = {"required": ["r", "g", "b", "a"]}
```
Now it becomes mandatory for the user to supply all the values required by our
input field.
We will discuss the `Config` class in extra detail later in this guide and how
you can use it to make your Invocations more robust.
## Custom Output Types
Like with custom inputs, sometimes you might find yourself needing custom
outputs that InvokeAI does not provide. We can easily set one up.
Now that you are familiar with Invocations and Inputs, let us use that knowledge
to put together a custom output type for an Invocation that returns _width_,
_height_ and _background_color_ that we need to create a blank image.
- A custom output type is a class that derives from the parent class of
`BaseInvocationOutput`.
- It is not mandatory but we recommend using names ending with `Output` for
output types. So we'll call our class `BlankImageOutput`
- It is not mandatory but we highly recommend adding a `docstring` to describe
what your output type is for.
- Like Invocations, each output type should have a `type` variable that is
**unique**
Now that we know the basic rules for creating a new output type, let us go ahead
and make it.
```python
from typing import Literal
from pydantic import Field
from .baseinvocation import BaseInvocationOutput
class BlankImageOutput(BaseInvocationOutput):
'''Base output type for creating a blank image'''
type: Literal['blank_image_output'] = 'blank_image_output'
# Inputs
width: int = Field(description='Width of blank image')
height: int = Field(description='Height of blank image')
bg_color: ColorField = Field(description='Background color of blank image')
class Config:
schema_extra = {"required": ["type", "width", "height", "bg_color"]}
```
All set. We now have an output type that requires what we need to create a
blank_image. And if you noticed it, we even used the `Config` class to ensure
the fields are required.
## Custom Configuration
As you might have noticed when making inputs and outputs, we used a class called
`Config` from _pydantic_ to further customize them. Because our inputs and
outputs essentially inherit from _pydantic_'s `BaseModel` class, all
[configuration options](https://docs.pydantic.dev/latest/usage/schema/#schema-customization)
that are valid for _pydantic_ classes are also valid for our inputs and outputs.
You can do the same for your Invocations too but InvokeAI makes our life a
little bit easier on that end.
InvokeAI provides a custom configuration class called `InvocationConfig`
particularly for configuring Invocations. This is exactly the same as the raw
`Config` class from _pydantic_ with some extra stuff on top to help facilitate
parsing of the schema in the frontend UI.
At the current moment, this `InvocationConfig` class is further improved with
the following features related to the `ui`.
| Config Option | Field Type | Example |
| ------------- | ------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------- |
| type_hints | `Dict[str, Literal["integer", "float", "boolean", "string", "enum", "image", "latents", "model", "control"]]` | `type_hint: "model"` provides type hints related to the model like displaying a list of available models |
| tags | `List[str]` | `tags: ['resize', 'image']` will classify your invocation under the tags of resize and image. |
| title | `str` | `title: 'Resize Image'` will rename your Invocation to this custom title rather than inferring it from the name of the Invocation class. |
So let us update your `ResizeInvocation` with some extra configuration and see
how that works.
```python
from typing import Literal, Union

from pydantic import Field

from .baseinvocation import BaseInvocation, InvocationContext, InvocationConfig
from ..models.image import ImageField, ResourceOrigin, ImageCategory
from .image import ImageOutput

class ResizeInvocation(BaseInvocation):
    '''Resizes an image'''
    type: Literal['resize'] = 'resize'

    # Inputs
    image: Union[ImageField, None] = Field(description="The input image", default=None)
    width: int = Field(default=512, ge=64, le=2048, description="Width of the new image")
    height: int = Field(default=512, ge=64, le=2048, description="Height of the new image")

    class Config(InvocationConfig):
        schema_extra = {
            "ui": {
                "tags": ["resize", "image"],
                "title": "My Custom Resize"
            }
        }

    def invoke(self, context: InvocationContext) -> ImageOutput:
        # Load the image using InvokeAI's predefined Image Service.
        image = context.services.images.get_pil_image(self.image.image_origin, self.image.image_name)

        # Resizing the image
        # Because we used the above service, we already have a PIL image. So we can simply resize.
        resized_image = image.resize((self.width, self.height))

        # Preparing the image for output using InvokeAI's predefined Image Service.
        output_image = context.services.images.create(
            image=resized_image,
            image_origin=ResourceOrigin.INTERNAL,
            image_category=ImageCategory.GENERAL,
            node_id=self.id,
            session_id=context.graph_execution_state_id,
            is_intermediate=self.is_intermediate,
        )

        # Returning the Image
        return ImageOutput(
            image=ImageField(
                image_name=output_image.image_name,
                image_origin=output_image.image_origin,
            ),
            width=output_image.width,
            height=output_image.height,
        )
```
We have now customized our code to let the frontend know that our Invocation
falls under the `resize` and `image` categories. So when the user searches for
these particular words, our Invocation will show up too.
We also set a custom title for our Invocation. So instead of being called
`Resize`, it will be called `My Custom Resize`.
As simple as that.
As time goes by, InvokeAI will further improve and add more customizability for
Invocation configuration. We will have more documentation regarding this at a
later time.
# **[TODO]**
## Custom Components For Frontend
### Custom Components For Frontend
Every backend input type should have a corresponding frontend component so the
UI knows what to render when you use a particular field type.
@ -510,281 +396,4 @@ Let us create a new component for our custom color field we created above. When
we use a color field, let us say we want the UI to display a color picker for
the user to pick from rather than entering values. That is what we will build
now.
---
# OLD -- TO BE DELETED OR MOVED LATER
---
## Creating a new invocation
To create a new invocation, either find the appropriate module file in
`/ldm/invoke/app/invocations` to add your invocation to, or create a new one in
that folder. All invocations in that folder will be discovered and made
available to the CLI and API automatically. Invocations make use of
[typing](https://docs.python.org/3/library/typing.html) and
[pydantic](https://pydantic-docs.helpmanual.io/) for validation and integration
into the CLI and API.
An invocation looks like this:
```py
class UpscaleInvocation(BaseInvocation):
    """Upscales an image."""

    # fmt: off
    type: Literal["upscale"] = "upscale"

    # Inputs
    image: Union[ImageField, None] = Field(description="The input image", default=None)
    strength: float = Field(default=0.75, gt=0, le=1, description="The strength")
    level: Literal[2, 4] = Field(default=2, description="The upscale level")
    # fmt: on

    # Schema customisation
    class Config(InvocationConfig):
        schema_extra = {
            "ui": {
                "tags": ["upscaling", "image"],
            },
        }

    def invoke(self, context: InvocationContext) -> ImageOutput:
        image = context.services.images.get_pil_image(
            self.image.image_origin, self.image.image_name
        )
        results = context.services.restoration.upscale_and_reconstruct(
            image_list=[[image, 0]],
            upscale=(self.level, self.strength),
            strength=0.0,  # GFPGAN strength
            save_original=False,
            image_callback=None,
        )

        # Results are image and seed, unwrap for now
        # TODO: can this return multiple results?
        image_dto = context.services.images.create(
            image=results[0][0],
            image_origin=ResourceOrigin.INTERNAL,
            image_category=ImageCategory.GENERAL,
            node_id=self.id,
            session_id=context.graph_execution_state_id,
            is_intermediate=self.is_intermediate,
        )
        return ImageOutput(
            image=ImageField(
                image_name=image_dto.image_name,
                image_origin=image_dto.image_origin,
            ),
            width=image_dto.width,
            height=image_dto.height,
        )
```
Each portion is important to implement correctly.
### Class definition and type
```py
class UpscaleInvocation(BaseInvocation):
    """Upscales an image."""

    type: Literal['upscale'] = 'upscale'
```
All invocations must derive from `BaseInvocation`. They should have a docstring
that declares what they do in a single, short line. They should also have a
`type` with a type hint that's `Literal["command_name"]`, where `command_name`
is what the user will type on the CLI or use in the API to create this
invocation. The `command_name` must be unique. The `type` must be assigned to
the value of the literal in the type hint.
### Inputs
```py
# Inputs
image: Union[ImageField,None] = Field(description="The input image")
strength: float = Field(default=0.75, gt=0, le=1, description="The strength")
level: Literal[2,4] = Field(default=2, description="The upscale level")
```
Inputs consist of three parts: a name, a type hint, and a `Field` with default,
description, and validation information. For example:
| Part | Value | Description |
| --------- | ------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------- |
| Name | `strength` | This field is referred to as `strength` |
| Type Hint | `float` | This field must be of type `float` |
| Field | `Field(default=0.75, gt=0, le=1, description="The strength")` | The default value is `0.75`, the value must be in the range (0,1], and help text will show "The strength" for this field. |
Notice that `image` has type `Union[ImageField,None]`. The `Union` allows this
field to be parsed with `None` as a value, which enables linking to previous
invocations. All fields should either provide a default value or allow `None` as
a value, so that they can be overwritten with a linked output from another
invocation.
The special type `ImageField` is also used here. All images are passed as
`ImageField`, which protects them from pydantic validation errors (since images
only ever come from links).
Finally, note that for all linking, the `type` of the linked fields must match.
If the `name` also matches, then the field can be **automatically linked** to a
previous invocation by its matching name and type.
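As a sketch of that naming convention (the two classes below are illustrative only, not taken from the InvokeAI codebase), an output field named `image` of type `ImageField` can be wired automatically into a downstream input that uses the same name and type:

```py
class BlurOutput(BaseInvocationOutput):
    """Output of a hypothetical blur invocation"""
    type: Literal["blur_output"] = "blur_output"
    image: ImageField = Field(default=None, description="The blurred image")

class SharpenInvocation(BaseInvocation):
    """A hypothetical downstream invocation"""
    type: Literal["sharpen"] = "sharpen"
    # Same name (`image`) and same type (`ImageField`) as the upstream output,
    # so the field can be linked automatically to the previous invocation.
    image: Union[ImageField, None] = Field(description="The input image", default=None)

    # The invoke() method is omitted for brevity.
```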
### Config
```py
# Schema customisation
class Config(InvocationConfig):
    schema_extra = {
        "ui": {
            "tags": ["upscaling", "image"],
        },
    }
```
This is an optional configuration for the invocation. It inherits from
pydantic's model `Config` class, and is used primarily to customize the
autogenerated OpenAPI schema.
The UI relies on the OpenAPI schema in two ways:
- An API client & Typescript types are generated from it. This happens at build
time.
- The node editor parses the schema into a template used by the UI to create the
node editor UI. This parsing happens at runtime.
In this example, a `ui` key has been added to the `schema_extra` dict to provide
some tags for the UI, to facilitate filtering nodes.
See the Schema Generation section below for more information.
### Invoke Function
```py
def invoke(self, context: InvocationContext) -> ImageOutput:
    image = context.services.images.get_pil_image(
        self.image.image_origin, self.image.image_name
    )
    results = context.services.restoration.upscale_and_reconstruct(
        image_list=[[image, 0]],
        upscale=(self.level, self.strength),
        strength=0.0,  # GFPGAN strength
        save_original=False,
        image_callback=None,
    )

    # Results are image and seed, unwrap for now
    # TODO: can this return multiple results?
    image_dto = context.services.images.create(
        image=results[0][0],
        image_origin=ResourceOrigin.INTERNAL,
        image_category=ImageCategory.GENERAL,
        node_id=self.id,
        session_id=context.graph_execution_state_id,
        is_intermediate=self.is_intermediate,
    )
    return ImageOutput(
        image=ImageField(
            image_name=image_dto.image_name,
            image_origin=image_dto.image_origin,
        ),
        width=image_dto.width,
        height=image_dto.height,
    )
```
The `invoke` function is the last portion of an invocation. It is provided an
`InvocationContext` which contains services to perform work as well as a
`session_id` for use as needed. It should return a class with output values that
derives from `BaseInvocationOutput`.
Before being called, the invocation will have all of its fields set from
defaults, inputs, and finally links (overriding in that order).
Assume that this invocation may be running simultaneously with other
invocations, may be running on another machine, or in other interesting
scenarios. If you need functionality, please provide it as a service in the
`InvocationServices` class, and make sure it can be overridden.
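For example, here is a minimal sketch of what such a service might look like. The service itself and the names used are hypothetical, not part of InvokeAI's API; the point is that shared functionality lives behind a small, overridable interface rather than inside an invocation:

```py
from abc import ABC, abstractmethod

class TextReverserServiceBase(ABC):
    """Hypothetical service interface; alternative implementations can be substituted."""

    @abstractmethod
    def reverse(self, text: str) -> str:
        ...

class SimpleTextReverserService(TextReverserServiceBase):
    """Default implementation used when nothing else is injected."""

    def reverse(self, text: str) -> str:
        return text[::-1]
```

An invocation would then reach the service through the context (e.g. `context.services.<your_service>`), assuming it has been registered on `InvocationServices` when the services are constructed.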
### Outputs
```py
class ImageOutput(BaseInvocationOutput):
    """Base class for invocations that output an image"""

    # fmt: off
    type: Literal["image_output"] = "image_output"
    image: ImageField = Field(default=None, description="The output image")
    width: int = Field(description="The width of the image in pixels")
    height: int = Field(description="The height of the image in pixels")
    # fmt: on

    class Config:
        schema_extra = {"required": ["type", "image", "width", "height"]}
```
Output classes look like an invocation class without the invoke method. Prefer
to use an existing output class if available, and prefer to name inputs the same
as outputs when possible, to promote automatic invocation linking.
## Schema Generation
Invocation, output and related classes are used to generate an OpenAPI schema.
### Required Properties
The schema generation treats all properties with default values as optional. This
makes sense internally, but when using these classes via the generated
schema, we end up with e.g. the `ImageOutput` class having its `image` property
marked as optional.
We know that this property will always be present, so the additional logic
needed to always check if the property exists adds a lot of extraneous cruft.
To fix this, we can leverage `pydantic`'s
[schema customisation](https://docs.pydantic.dev/usage/schema/#schema-customization)
to mark properties that we know will always be present as required.
Here's that `ImageOutput` class, without the needed schema customisation:
```python
class ImageOutput(BaseInvocationOutput):
    """Base class for invocations that output an image"""

    # fmt: off
    type: Literal["image_output"] = "image_output"
    image: ImageField = Field(default=None, description="The output image")
    width: int = Field(description="The width of the image in pixels")
    height: int = Field(description="The height of the image in pixels")
    # fmt: on
```
The OpenAPI schema that results from this `ImageOutput` will have the `type`,
`image`, `width` and `height` properties marked as optional, even though we know
they will always have a value.
```python
class ImageOutput(BaseInvocationOutput):
    """Base class for invocations that output an image"""

    # fmt: off
    type: Literal["image_output"] = "image_output"
    image: ImageField = Field(default=None, description="The output image")
    width: int = Field(description="The width of the image in pixels")
    height: int = Field(description="The height of the image in pixels")
    # fmt: on

    # Add schema customization
    class Config:
        schema_extra = {"required": ["type", "image", "width", "height"]}
```
With the customization in place, the schema will now show these properties as
required, obviating the need for extensive null checks in client code.
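To confirm the effect, the generated schema can be inspected directly (pydantic v1 API, assuming the customized `ImageOutput` above):

```python
print(ImageOutput.schema()["required"])
# ['type', 'image', 'width', 'height']
```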
See this `pydantic` issue for discussion on this solution:
<https://github.com/pydantic/pydantic/discussions/4577>
-->


@ -35,18 +35,17 @@ access.
## Backend
The backend is contained within the `./invokeai/backend` folder structure. To
get started however please install the development dependencies.
The backend is contained within the `./invokeai/backend` and `./invokeai/app` directories.
To get started please install the development dependencies.
From the root of the repository run the following command. Note the use of `"`.
```zsh
pip install ".[test]"
pip install ".[dev,test]"
```
This in an optional group of packages which is defined within the
`pyproject.toml` and will be required for testing the changes you make the the
code.
These are optional groups of packages which are defined within the `pyproject.toml`
and will be required for testing the changes you make to the code.
### Running Tests
@ -76,6 +75,20 @@ pytest --cov; open ./coverage/html/index.html
![html-detail](../assets/contributing/html-detail.png)
### Reloading Changes
Experimenting with changes to the Python source code is a drag if you have to re-start the server —
and re-load those multi-gigabyte models —
after every change.
For a faster development workflow, add the `--dev_reload` flag when starting the server.
The server will watch for changes to all the Python files in the `invokeai` directory and apply those changes to the
running server on the fly.
This will allow you to avoid restarting the server (and reloading models) in most cases, but there are some caveats; see
the [jurigged documentation](https://github.com/breuleux/jurigged#caveats) for details.
## Front End
<!--#TODO: get input from blessedcoolant here, for the moment inserted the frontend README via snippets extension.-->


@ -175,22 +175,27 @@ These configuration settings allow you to enable and disable various InvokeAI fe
| `internet_available` | `true` | When a resource is not available locally, try to fetch it via the internet |
| `log_tokenization` | `false` | Before each text2image generation, print a color-coded representation of the prompt to the console; this can help understand why a prompt is not working as expected |
| `patchmatch` | `true` | Activate the "patchmatch" algorithm for improved inpainting |
| `restore` | `true` | Activate the facial restoration features (DEPRECATED; restoration features will be removed in 3.0.0) |
### Memory/Performance
### Generation
These options tune InvokeAI's memory and performance characteristics.
| Setting | Default Value | Description |
|----------|----------------|--------------|
| `always_use_cpu` | `false` | Use the CPU to generate images, even if a GPU is available |
| `free_gpu_mem` | `false` | Aggressively free up GPU memory after each operation; this will allow you to run in low-VRAM environments with some performance penalties |
| `max_cache_size` | `6` | Amount of CPU RAM (in GB) to reserve for caching models in memory; more cache allows you to keep models in memory and switch among them quickly |
| `max_vram_cache_size` | `2.75` | Amount of GPU VRAM (in GB) to reserve for caching models in VRAM; more cache speeds up generation but reduces the size of the images that can be generated. This can be set to zero to maximize the amount of memory available for generation. |
| `precision` | `auto` | Floating point precision. One of `auto`, `float16` or `float32`. `float16` will consume half the memory of `float32` but produce slightly lower-quality images. The `auto` setting will guess the proper precision based on your video card and operating system |
| `sequential_guidance` | `false` | Calculate guidance in serial rather than in parallel, lowering memory requirements at the cost of some performance loss |
| `xformers_enabled` | `true` | If the x-formers memory-efficient attention module is installed, activate it for better memory usage and generation speed|
| `tiled_decode` | `false` | If true, then during the VAE decoding phase the image will be decoded a section at a time, reducing memory consumption at the cost of a performance hit |
| Setting | Default Value | Description |
|-----------------------|---------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `sequential_guidance` | `false` | Calculate guidance in serial rather than in parallel, lowering memory requirements at the cost of some performance loss |
| `attention_type` | `auto` | Select the type of attention to use. One of `auto`,`normal`,`xformers`,`sliced`, or `torch-sdp` |
| `attention_slice_size` | `auto` | When "sliced" attention is selected, set the slice size. One of `auto`, `balanced`, `max` or the integers 1-8|
| `force_tiled_decode` | `false` | Force the VAE step to decode in tiles, reducing memory consumption at the cost of performance |
### Device
These options configure the generation execution device.
| Setting | Default Value | Description |
|-----------------------|---------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `device` | `auto` | Preferred execution device. One of `auto`, `cpu`, `cuda`, `cuda:1`, `mps`. `auto` will choose the device depending on the hardware platform and the installed torch capabilities. |
| `precision` | `auto` | Floating point precision. One of `auto`, `float16` or `float32`. `float16` will consume half the memory of `float32` but produce slightly lower-quality images. The `auto` setting will guess the proper precision based on your video card and operating system |
### Paths


@ -1,208 +0,0 @@
# Nodes Editor (Experimental)
🚨
*The node editor is experimental. We've made it accessible because we use it to develop the application, but we have not addressed the many known rough edges. It's very easy to shoot yourself in the foot, and we cannot offer support for it until it sees full release (ETA v3.1). Everything is subject to change without warning.*
🚨
The nodes editor is a blank canvas allowing for the use of individual functions and image transformations to control the image generation workflow. The node processing flow is usually done from left (inputs) to right (outputs), though linearity can become abstracted the more complex the node graph becomes. Nodes inputs and outputs are connected by dragging connectors from node to node.
To better understand how nodes are used, think of how an electric power bar works. It takes in one input (electricity from a wall outlet) and passes it to multiple devices through multiple outputs. Similarly, a node could have multiple inputs and outputs functioning at the same (or different) time, but all node outputs pass information onward like a power bar passes electricity. Not all outputs are compatible with all inputs, however - Each node has different constraints on how it is expecting to input/output information. In general, node outputs are colour-coded to match compatible inputs of other nodes.
## Anatomy of a Node
Individual nodes are made up of the following:
- Inputs: Edge points on the left side of the node window where you connect outputs from other nodes.
- Outputs: Edge points on the right side of the node window where you connect to inputs on other nodes.
- Options: Various options which are either manually configured, or overridden by connecting an output from another node to the input.
## Diffusion Overview
Taking the time to understand the diffusion process will help you to understand how to set up your nodes in the nodes editor.
There are two main spaces Stable Diffusion works in: image space and latent space.
Image space represents images in pixel form that you look at. Latent space represents compressed inputs. It's in latent space that Stable Diffusion processes images. A VAE (Variational Auto Encoder) is responsible for compressing and encoding inputs into latent space, as well as decoding outputs back into image space.
When you generate an image using text-to-image, multiple steps occur in latent space:
1. Random noise is generated at the chosen height and width. The noise's characteristics are dictated by the chosen (or not chosen) seed. This noise tensor is passed into latent space. We'll call this noise A.
1. Using a model's U-Net, a noise predictor examines noise A, and the words tokenized by CLIP from your prompt (conditioning). It generates its own noise tensor to predict what the final image might look like in latent space. We'll call this noise B.
1. Noise B is subtracted from noise A in an attempt to create a final latent image indicative of the inputs. This step is repeated for the number of sampler steps chosen.
1. The VAE decodes the final latent image from latent space into image space.
image-to-image is a similar process, with only step 1 being different:
1. The input image is encoded from image space into latent space by the VAE. Noise is then added to the input latent image. Denoising Strength dictates how much noise is added, 0 being none, and 1 being all-encompassing. We'll call this noise A. The process is then the same as steps 2-4 in the text-to-image explanation above.
Furthermore, a model provides the CLIP prompt tokenizer, the VAE, and a U-Net (where noise prediction occurs given a prompt and initial noise tensor).
A noise scheduler (eg. DPM++ 2M Karras) schedules the subtraction of noise from the latent image across the sampler steps chosen (step 3 above). Less noise is usually subtracted at higher sampler steps.
## Node Types (Base Nodes)
| Node <img width=160 align="right"> | Function |
| ---------------------------------- | --------------------------------------------------------------------------------------|
| Add | Adds two numbers |
| CannyImageProcessor | Canny edge detection for ControlNet |
| ClipSkip | Skip layers in clip text_encoder model |
| Collect | Collects values into a collection |
| Prompt (Compel) | Parse prompt using compel package to conditioning |
| ContentShuffleImageProcessor | Applies content shuffle processing to image |
| ControlNet | Collects ControlNet info to pass to other nodes |
| CvInpaint | Simple inpaint using opencv |
| Divide | Divides two numbers |
| DynamicPrompt | Parses a prompt using adieyal/dynamic prompt's random or combinatorial generator |
| FloatLinearRange | Creates a range |
| HedImageProcessor | Applies HED edge detection to image |
| ImageBlur | Blurs an image |
| ImageChannel | Gets a channel from an image |
| ImageCollection | Load a collection of images and provide it as output |
| ImageConvert | Converts an image to a different mode |
| ImageCrop | Crops an image to a specified box. The box can be outside of the image. |
| ImageInverseLerp | Inverse linear interpolation of all pixels of an image |
| ImageLerp | Linear interpolation of all pixels of an image |
| ImageMultiply | Multiplies two images together using `PIL.ImageChops.Multiply()` |
| ImageNSFWBlurInvocation | Detects and blurs images that may contain sexually explicit content |
| ImagePaste | Pastes an image into another image |
| ImageProcessor | Base class for invocations that reprocess images for ControlNet |
| ImageResize | Resizes an image to specific dimensions |
| ImageScale | Scales an image by a factor |
| ImageToLatents | Encodes an image into latents |
| ImageWatermarkInvocation | Adds an invisible watermark to images |
| InfillColor | Infills transparent areas of an image with a solid color |
| InfillPatchMatch | Infills transparent areas of an image using the PatchMatch algorithm |
| InfillTile | Infills transparent areas of an image with tiles of the image |
| Inpaint | Generates an image using inpaint |
| Iterate | Iterates over a list of items |
| LatentsToImage | Generates an image from latents |
| LatentsToLatents | Generates latents using latents as base image |
| LeresImageProcessor | Applies leres processing to image |
| LineartAnimeImageProcessor | Applies line art anime processing to image |
| LineartImageProcessor | Applies line art processing to image |
| LoadImage | Load an image and provide it as output |
| Lora Loader | Apply selected lora to unet and text_encoder |
| Model Loader | Loads a main model, outputting its submodels |
| MaskFromAlpha | Extracts the alpha channel of an image as a mask |
| MediapipeFaceProcessor | Applies mediapipe face processing to image |
| MidasDepthImageProcessor | Applies Midas depth processing to image |
| MlsdImageProcessor | Applies MLSD processing to image |
| Multiply | Multiplies two numbers |
| Noise | Generates latent noise |
| NormalbaeImageProcessor | Applies NormalBAE processing to image |
| OpenposeImageProcessor | Applies Openpose processing to image |
| ParamFloat | A float parameter |
| ParamInt | An integer parameter |
| PidiImageProcessor | Applies PIDI processing to an image |
| Progress Image | Displays the progress image in the Node Editor |
| RandomInt | Outputs a single random integer |
| RandomRange | Creates a collection of random numbers |
| Range | Creates a range of numbers from start to stop with step |
| RangeOfSize | Creates a range from start to start + size with step |
| ResizeLatents | Resizes latents to explicit width/height (in pixels). Provided dimensions are floor-divided by 8. |
| RestoreFace | Restores faces in the image |
| ScaleLatents | Scales latents by a given factor |
| SegmentAnythingProcessor | Applies segment anything processing to image |
| ShowImage | Displays a provided image, and passes it forward in the pipeline |
| StepParamEasing | Experimental per-step parameter for easing for denoising steps |
| Subtract | Subtracts two numbers |
| TextToLatents | Generates latents from conditionings |
| TileResampleProcessor | Base class for invocations that preprocess images for ControlNet |
| Upscale | Upscales an image |
| VAE Loader | Loads a VAE model, outputting a VaeLoaderOutput |
| ZoeDepthImageProcessor | Applies Zoe depth processing to image |
## Node Grouping Concepts
There are several node grouping concepts that can be examined with a narrow focus. These (and other) groupings can be pieced together to make up functional graph setups, and are important to understanding how groups of nodes work together as part of a whole. Note that the screenshots below aren't examples of complete functioning node graphs (see Examples).
### Noise
As described, an initial noise tensor is necessary for the latent diffusion process. As a result, all non-image *ToLatents nodes require a noise node input.
![groupsnoise](../assets/nodes/groupsnoise.png)
### Conditioning
As described, conditioning is necessary for the latent diffusion process, whether empty or not. As a result, all non-image *ToLatents nodes require positive and negative conditioning inputs. Conditioning is reliant on a CLIP tokenizer provided by the Model Loader node.
![groupsconditioning](../assets/nodes/groupsconditioning.png)
### Image Space & VAE
The ImageToLatents node doesn't require a noise node input, but requires a VAE input to convert the image from image space into latent space. In reverse, the LatentsToImage node requires a VAE input to convert from latent space back into image space.
![groupsimgvae](../assets/nodes/groupsimgvae.png)
### Defined & Random Seeds
It is common to want to use both the same seed (for continuity) and random seeds (for variance). To define a seed, simply enter it into the 'Seed' field on a noise node. Conversely, the RandomInt node generates a random integer between 'Low' and 'High', and can be used as input to the 'Seed' edge point on a noise node to randomize your seed.
![groupsrandseed](../assets/nodes/groupsrandseed.png)
### Control
Control means to guide the diffusion process to adhere to a defined input or structure. Control can be provided as input to non-image *ToLatents nodes from ControlNet nodes. ControlNet nodes usually require an image processor which converts an input image for use with ControlNet.
![groupscontrol](../assets/nodes/groupscontrol.png)
### LoRA
The Lora Loader node lets you load a LoRA (say that ten times fast) and pass it as output to both the Prompt (Compel) and non-image *ToLatents nodes. A model's CLIP tokenizer is passed through the LoRA into Prompt (Compel), where it affects conditioning. A model's U-Net is also passed through the LoRA into a non-image *ToLatents node, where it affects noise prediction.
![groupslora](../assets/nodes/groupslora.png)
### Scaling
Use the ImageScale, ScaleLatents, and Upscale nodes to upscale images and/or latent images. The chosen method differs across contexts. However, be aware that latents are already noisy and compressed at their original resolution; scaling an image could produce more detailed results.
![groupsallscale](../assets/nodes/groupsallscale.png)
### Iteration + Multiple Images as Input
Iteration is a common concept in any processing, and means to repeat a process with given input. In nodes, you're able to use the Iterate node to iterate through collections usually gathered by the Collect node. The Iterate node has many potential uses, from processing a collection of images one after another, to varying seeds across multiple image generations and more. This screenshot demonstrates how to collect several images and pass them out one at a time.
![groupsiterate](../assets/nodes/groupsiterate.png)
### Multiple Image Generation + Random Seeds
Multiple image generation in the node editor is done using the RandomRange node. In this case, the 'Size' field represents the number of images to generate. As RandomRange produces a collection of integers, we need to add the Iterate node to iterate through the collection.
To control seeds across generations takes some care. The first row in the screenshot will generate multiple images with different seeds, but using the same RandomRange parameters across invocations will result in the same group of random seeds being used across the images, producing repeatable results. In the second row, adding the RandomInt node as input to RandomRange's 'Seed' edge point will ensure that seeds are varied across all images across invocations, producing varied results.
![groupsmultigenseeding](../assets/nodes/groupsmultigenseeding.png)
## Examples
With our knowledge of node grouping and the diffusion process, let's break down some basic graphs in the nodes editor. Note that a node's options can be overridden by inputs from other nodes. These examples aren't strict rules to follow and only demonstrate some basic configurations.
### Basic text-to-image Node Graph
![nodest2i](../assets/nodes/nodest2i.png)
- Model Loader: A necessity for generating images (as we've read above). We choose our model from the dropdown. It outputs a U-Net, CLIP tokenizer, and VAE.
- Prompt (Compel): Another necessity. Two prompt nodes are created. One will output positive conditioning (what you want, dog), one will output negative (what you don't want, cat). They both input the CLIP tokenizer that the Model Loader node outputs.
- Noise: Consider this noise A from step one of the text-to-image explanation above. Choose a seed number, width, and height.
- TextToLatents: This node takes many inputs for converting and processing text & noise from image space into latent space, hence the name TextTo**Latents**. In this setup, it inputs positive and negative conditioning from the prompt nodes for processing (step 2 above). It inputs noise from the noise node for processing (steps 2 & 3 above). Lastly, it inputs a U-Net from the Model Loader node for processing (step 2 above). It outputs latents for use in the next LatentsToImage node. Choose number of sampler steps, CFG scale, and scheduler.
- LatentsToImage: This node takes in processed latents from the TextToLatents node, and the model's VAE from the Model Loader node which is responsible for decoding latents back into the image space, hence the name LatentsTo**Image**. This node is the last stop, and once the image is decoded, it is saved to the gallery.
### Basic image-to-image Node Graph
![nodesi2i](../assets/nodes/nodesi2i.png)
- Model Loader: Choose a model from the dropdown.
- Prompt (Compel): Two prompt nodes. One positive (dog), one negative (dog). Same CLIP inputs from the Model Loader node as before.
- ImageToLatents: Upload a source image directly in the node window, via drag'n'drop from the gallery, or passed in as input. The ImageToLatents node inputs the VAE from the Model Loader node to decode the chosen image from image space into latent space, hence the name ImageTo**Latents**. It outputs latents for use in the next LatentsToLatents node. It also outputs the source image's width and height for use in the next Noise node if the final image is to be the same dimensions as the source image.
- Noise: A noise tensor is created with the width and height of the source image, and connected to the next LatentsToLatents node. Notice the width and height fields are overridden by the input from the ImageToLatents width and height outputs.
- LatentsToLatents: The inputs and options are nearly identical to TextToLatents, except that LatentsToLatents also takes latents as an input. Considering our source image is already converted to latents in the last ImageToLatents node, and text + noise are no longer the only inputs to process, we use the LatentsToLatents node.
- LatentsToImage: Like previously, the LatentsToImage node will use the VAE from the Model Loader as input to decode the latents from LatentsToLatents into image space, and save it to the gallery.
### Basic ControlNet Node Graph
![nodescontrol](../assets/nodes/nodescontrol.png)
- Model Loader
- Prompt (Compel)
- Noise: Width and height of the CannyImageProcessor ControlNet image is passed in to set the dimensions of the noise passed to TextToLatents.
- CannyImageProcessor: The CannyImageProcessor node is used to process the source image being used as a ControlNet. Each ControlNet processor node applies control in different ways, and has some different options to configure. Width and height are passed to noise, as mentioned. The processed ControlNet image is output to the ControlNet node.
- ControlNet: Select the type of control model. In this case, canny is chosen as the CannyImageProcessor was used to generate the ControlNet image. Configure the control node options, and pass the control output to TextToLatents.
- TextToLatents: Similar to the basic text-to-image example, except ControlNet is passed to the control input edge point.
- LatentsToImage


@ -4,80 +4,6 @@ title: Prompting-Features
# :octicons-command-palette-24: Prompting-Features
## **Negative and Unconditioned Prompts**
Any words between a pair of square brackets will instruct Stable
Diffusion to attempt to ban the concept from the generated image. The
same effect is achieved by placing words in the "Negative Prompts"
textbox in the Web UI.
```text
this is a test prompt [not really] to make you understand [cool] how this works.
```
In the above statement, the words 'not really' and 'cool' will be ignored by Stable
Diffusion.
Here's a walkthrough that shows the effect.
original prompt:
`#!bash "A fantastical translucent pony made of water and foam, ethereal, radiant, hyperalism, scottish folklore, digital painting, artstation, concept art, smooth, 8 k frostbite 3 engine, ultra detailed, art by artgerm and greg rutkowski and magali villeneuve"`
`#!bash parameters: steps=20, dimensions=512x768, CFG=7.5, Scheduler=k_euler_a, seed=1654590180`
<figure markdown>
![step1](../assets/negative_prompt_walkthru/step1.png)
</figure>
That image has a woman, so if we want the horse without a rider, we can
influence the image not to have a woman by putting [woman] in the prompt, like
this:
`#!bash "A fantastical translucent poney made of water and foam, ethereal, radiant, hyperalism, scottish folklore, digital painting, artstation, concept art, smooth, 8 k frostbite 3 engine, ultra detailed, art by artgerm and greg rutkowski and magali villeneuve [woman]"`
(same parameters as above)
<figure markdown>
![step2](../assets/negative_prompt_walkthru/step2.png)
</figure>
That's nice - but say we also don't want the image to be quite so blue. We can
add "blue" to the list of negative prompts, so it's now [woman blue]:
`#!bash "A fantastical translucent poney made of water and foam, ethereal, radiant, hyperalism, scottish folklore, digital painting, artstation, concept art, smooth, 8 k frostbite 3 engine, ultra detailed, art by artgerm and greg rutkowski and magali villeneuve [woman blue]"`
(same parameters as above)
<figure markdown>
![step3](../assets/negative_prompt_walkthru/step3.png)
</figure>
Getting close - but there's no sense in having a saddle when our horse doesn't
have a rider, so we'll add one more negative prompt: [woman blue saddle].
`#!bash "A fantastical translucent poney made of water and foam, ethereal, radiant, hyperalism, scottish folklore, digital painting, artstation, concept art, smooth, 8 k frostbite 3 engine, ultra detailed, art by artgerm and greg rutkowski and magali villeneuve [woman blue saddle]"`
(same parameters as above)
<figure markdown>
![step4](../assets/negative_prompt_walkthru/step4.png)
</figure>
!!! notes "Notes about this feature:"
* The only requirement for words to be ignored is that they are in between a pair of square brackets.
* You can provide multiple words within the same bracket.
* You can provide multiple brackets with multiple words in different places of your prompt. That works just fine.
* To improve typical anatomy problems, you can add negative prompts like `[bad anatomy, extra legs, extra arms, extra fingers, poorly drawn hands, poorly drawn feet, disfigured, out of frame, tiling, bad art, deformed, mutated]`.
---
## **Prompt Syntax Features**
The InvokeAI prompting language has the following features:
@ -102,9 +28,6 @@ The following syntax is recognised:
`a tall thin man (picking (apricots)1.3)1.1`. (`+` is equivalent to 1.1, `++`
is pow(1.1,2), `+++` is pow(1.1,3), etc; `-` means 0.9, `--` means pow(0.9,2),
etc.)
- attention also applies to `[unconditioning]` so
`a tall thin man picking apricots [(ladder)0.01]` will _very gently_ nudge SD
away from trying to draw the man on a ladder
You can use this to increase or decrease the amount of something. Starting from
this prompt of `a man picking apricots from a tree`, let's see what happens if
@ -150,7 +73,7 @@ Or, alternatively, with more man:
| ---------------------------------------------- | ---------------------------------------------- | ---------------------------------------------- | ---------------------------------------------- |
| ![](../assets/prompt_syntax/mountain-man1.png) | ![](../assets/prompt_syntax/mountain-man2.png) | ![](../assets/prompt_syntax/mountain-man3.png) | ![](../assets/prompt_syntax/mountain-man4.png) |
### Blending between prompts
### Prompt Blending
- `("a tall thin man picking apricots", "a tall thin man picking pears").blend(1,1)`
- The existing prompt blending using `:<weight>` will continue to be supported -
@ -168,6 +91,24 @@ Or, alternatively, with more man:
See the section below on "Prompt Blending" for more information about how this
works.
### Prompt Conjunction
Join multiple clauses together to create a conjoined prompt. Each clause will be passed to CLIP separately.
For example, the prompt:
```bash
"A mystical valley surround by towering granite cliffs, watercolor, warm"
```
Can be used with .and():
```bash
("A mystical valley", "surround by towering granite cliffs", "watercolor", "warm").and()
```
Each will give you different results - try them out and see what you prefer!
### Cross-Attention Control ('prompt2prompt')
Sometimes an image you generate is almost right, and you just want to change one
@ -190,7 +131,7 @@ For example, consider the prompt `a cat.swap(dog) playing with a ball in the for
- For multiple word swaps, use parentheses: `a (fluffy cat).swap(barking dog) playing with a ball in the forest`.
- To swap a comma, use quotes: `a ("fluffy, grey cat").swap("big, barking dog") playing with a ball in the forest`.
- Supports options `t_start` and `t_end` (each 0-1) loosely corresponding to bloc97's `prompt_edit_tokens_start/_end` but with the math swapped to make it easier to
- Supports options `t_start` and `t_end` (each 0-1) loosely corresponding to [bloc97's](https://github.com/bloc97/CrossAttentionControl) `prompt_edit_tokens_start/_end` but with the math swapped to make it easier to
intuitively understand. `t_start` and `t_end` are used to control on which steps cross-attention control should run. With the default values `t_start=0` and `t_end=1`, cross-attention control is active on every step of image generation. Other values can be used to turn cross-attention control off for part of the image generation process.
- For example, if doing a diffusion with 10 steps for the prompt is `a cat.swap(dog, t_start=0.3, t_end=1.0) playing with a ball in the forest`, the first 3 steps will be run as `a cat playing with a ball in the forest`, while the last 7 steps will run as `a dog playing with a ball in the forest`, but the pixels that represent `dog` will be locked to the pixels that would have represented `cat` if the `cat` prompt had been used instead.
- Conversely, for `a cat.swap(dog, t_start=0, t_end=0.7) playing with a ball in the forest`, the first 7 steps will run as `a dog playing with a ball in the forest` with the pixels that represent `dog` locked to the same pixels that would have represented `cat` if the `cat` prompt was being used instead. The final 3 steps will just run `a cat playing with a ball in the forest`.
@ -201,7 +142,7 @@ Prompt2prompt `.swap()` is not compatible with xformers, which will be temporari
The `prompt2prompt` code is based off
[bloc97's colab](https://github.com/bloc97/CrossAttentionControl).
### Escaping parantheses () and speech marks ""
### Escaping parentheses () and speech marks ""
If the model you are using has parentheses () or speech marks "" as part of its
syntax, you will need to "escape" these using a backslash, so that`(my_keyword)`
@ -212,23 +153,16 @@ the parentheses as part of the prompt syntax and it will get confused.
## **Prompt Blending**
You may blend together different sections of the prompt to explore the AI's
You may blend together prompts to explore the AI's
latent semantic space and generate interesting (and often surprising!)
variations. The syntax is:
```bash
blue sphere:0.25 red cube:0.75 hybrid
("prompt #1", "prompt #2").blend(0.25, 0.75)
```
This will tell the sampler to blend 25% of the concept of a blue sphere with 75%
of the concept of a red cube. The blend weights can use any combination of
integers and floating point numbers, and they do not need to add up to 1.
Everything to the left of the `:XX` up to the previous `:XX` is used for
merging, so the overall effect is:
```bash
0.25 * "blue sphere" + 0.75 * "white duck" + hybrid
```
This will tell the sampler to blend 25% of the concept of prompt #1 with 75%
of the concept of prompt #2. It is recommended to keep the sum of the weights to around 1.0, but interesting things might happen if you go outside of this range.
Because you are exploring the "mind" of the AI, the AI's way of mixing two
concepts may not match yours, leading to surprising effects. To illustrate, here
@ -236,13 +170,14 @@ are three images generated using various combinations of blend weights. As
usual, unless you fix the seed, the prompts will give you different results each
time you run them.
<figure markdown>
Let's examine how this affects image generation results:
### "blue sphere, red cube, hybrid"
</figure>
```bash
"blue sphere, red cube, hybrid"
```
This example doesn't use melding at all and represents the default way of mixing
This example doesn't use blending at all and represents the default way of mixing
concepts.
<figure markdown>
@ -251,55 +186,47 @@ concepts.
</figure>
It's interesting to see how the AI expressed the concept of "cube" as the four
quadrants of the enclosing frame. If you look closely, there is depth there, so
the enclosing frame is actually a cube.
It's interesting to see how the AI expressed the concept of "cube" within the sphere. If you look closely, there is depth there, so the enclosing frame is actually a cube.
<figure markdown>
### "blue sphere:0.25 red cube:0.75 hybrid"
```bash
("blue sphere", "red cube").blend(0.25, 0.75)
```
![blue-sphere-25-red-cube-75](../assets/prompt-blending/blue-sphere-0.25-red-cube-0.75-hybrid.png)
</figure>
Now that's interesting. We get neither a blue sphere nor a red cube, but a red
sphere embedded in a brick wall, which represents a melding of concepts within
the AI's "latent space" of semantic representations. Where is Ludwig
Wittgenstein when you need him?
Now that's interesting. We get an image with a resemblance of a red cube, with a hint of blue shadows which represents a melding of concepts within the AI's "latent space" of semantic representations.
<figure markdown>
### "blue sphere:0.75 red cube:0.25 hybrid"
```bash
("blue sphere", "red cube").blend(0.75, 0.25)
```
![blue-sphere-75-red-cube-25](../assets/prompt-blending/blue-sphere-0.75-red-cube-0.25-hybrid.png)
</figure>
Definitely more blue-spherey. The cube is gone entirely, but it's really cool
abstract art.
Definitely more blue-spherey.
<figure markdown>
### "blue sphere:0.5 red cube:0.5 hybrid"
```bash
("blue sphere", "red cube").blend(0.5, 0.5)
```
</figure>
<figure markdown>
![blue-sphere-5-red-cube-5-hybrid](../assets/prompt-blending/blue-sphere-0.5-red-cube-0.5-hybrid.png)
</figure>
Whoa...! I see blue and red, but no spheres or cubes. Is the word "hybrid"
summoning up the concept of some sort of scifi creature? Let's find out.
<figure markdown>
Whoa...! I see blue and red, and if I squint, spheres and cubes.
### "blue sphere:0.5 red cube:0.5"
![blue-sphere-5-red-cube-5](../assets/prompt-blending/blue-sphere-0.5-red-cube-0.5.png)
</figure>
Indeed, removing the word "hybrid" produces an image that is more like what we'd
expect.
## Dynamic Prompts


@ -30,10 +30,6 @@ image output.
### * [Image-to-Image Guide](IMG2IMG.md)
Use a seed image to build new creations in the CLI.
### * [Generating Variations](VARIATIONS.md)
Have an image you like and want to generate many more like it? Variations
are the ticket.
## Model Management
### * [Model Installation](../installation/050_INSTALLING_MODELS.md)

docs/help/diffusion.md

@ -0,0 +1,27 @@
Taking the time to understand the diffusion process will help you to understand how to more effectively use InvokeAI.
There are two main ways Stable Diffusion works - with images, and latents.
Image space represents images in pixel form that you look at. Latent space represents compressed inputs. It's in latent space that Stable Diffusion processes images. A VAE (Variational Auto Encoder) is responsible for compressing and encoding inputs into latent space, as well as decoding outputs back into image space.
To fully understand the diffusion process, we need to understand a few more terms: UNet, CLIP, and conditioning.
A U-Net is a model trained on a large number of latent images with known amounts of random noise added. This means that the U-Net can be given a slightly noisy image and it will predict the pattern of noise needed to subtract from the image in order to recover the original.
CLIP is a model that tokenizes and encodes text into conditioning. This conditioning guides the model during the denoising steps to produce a new image.
The U-Net and CLIP work together during the image generation process at each denoising step, with the U-Net removing noise in such a way that the result is similar to images in the U-Net's training set, while CLIP guides the U-Net towards creating images that are most similar to the prompt.
When you generate an image using text-to-image, multiple steps occur in latent space:
1. Random noise is generated at the chosen height and width. The noise's characteristics are dictated by seed. This noise tensor is passed into latent space. We'll call this noise A.
2. Using a model's U-Net, a noise predictor examines noise A, and the words tokenized by CLIP from your prompt (conditioning). It generates its own noise tensor to predict what the final image might look like in latent space. We'll call this noise B.
3. Noise B is subtracted from noise A in an attempt to create a latent image consistent with the prompt. This step is repeated for the number of sampler steps chosen.
4. The VAE decodes the final latent image from latent space into image space.
Image-to-image is a similar process, with only step 1 being different:
1. The input image is encoded from image space into latent space by the VAE. Noise is then added to the input latent image. Denoising Strength dictates how many noise steps are added, and the amount of noise added at each step. A Denoising Strength of 0 means there are 0 steps and no noise added, resulting in an unchanged image, while a Denoising Strength of 1 results in the image being completely replaced with noise and a full set of denoising steps being performed. The process is then the same as steps 2-4 in the text-to-image process.
Furthermore, a model provides the CLIP prompt tokenizer, the VAE, and a U-Net (where noise prediction occurs given a prompt and initial noise tensor).
A noise scheduler (eg. DPM++ 2M Karras) schedules the subtraction of noise from the latent image across the sampler steps chosen (step 3 above). Less noise is usually subtracted at higher sampler steps.
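For readers who think in code, here is a deliberately simplified sketch of the denoising loop described above. The function and argument names are placeholders (loosely modelled on a diffusers-style scheduler/U-Net interface), not InvokeAI internals:

```python
import torch

def denoise(unet, scheduler, conditioning, latents, steps: int) -> torch.Tensor:
    scheduler.set_timesteps(steps)
    for t in scheduler.timesteps:
        # Step 2: the U-Net predicts the noise present in the current latents,
        # guided by the prompt conditioning produced by CLIP.
        noise_pred = unet(latents, t, encoder_hidden_states=conditioning).sample
        # Step 3: the scheduler subtracts a scheduled portion of that predicted noise.
        latents = scheduler.step(noise_pred, t, latents).prev_sample
    # Step 4: the caller hands the final latents to the VAE decoder to get pixels.
    return latents
```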


@ -49,9 +49,9 @@ title: Home
[![github stars badge]][github stars link]
[![github forks badge]][github forks link]
[![CI checks on main badge]][ci checks on main link]
<!-- [![CI checks on main badge]][ci checks on main link]
[![CI checks on dev badge]][ci checks on dev link]
<!-- [![latest commit to dev badge]][latest commit to dev link] -->
[![latest commit to dev badge]][latest commit to dev link] -->
[![github open issues badge]][github open issues link]
[![github open prs badge]][github open prs link]


@ -8,9 +8,9 @@ title: Installing Manually
</figure>
!!! warning "This is for advanced Users"
!!! warning "This is for Advanced Users"
**python experience is mandatory**
**Python experience is mandatory**
## Introduction


@ -57,6 +57,30 @@ familiar with containerization technologies such as Docker.
For downloads and instructions, visit the [NVIDIA CUDA Container
Runtime Site](https://developer.nvidia.com/nvidia-container-runtime)
### cuDNN Installation for 40/30 Series Optimization* (Optional)
1. Find the InvokeAI folder
2. Click on .venv folder - e.g., YourInvokeFolderHere\\.venv
3. Click on Lib folder - e.g., YourInvokeFolderHere\\.venv\Lib
4. Click on site-packages folder - e.g., YourInvokeFolderHere\\.venv\Lib\site-packages
5. Click on Torch directory - e.g., YourInvokeFolderHere\InvokeAI\\.venv\Lib\site-packages\torch
6. Click on the lib folder - e.g., YourInvokeFolderHere\\.venv\Lib\site-packages\torch\lib
7. Copy everything inside the folder and save it elsewhere as a backup.
8. Go to __https://developer.nvidia.com/cudnn__
9. Login or create an Account.
10. Choose the newer version of cuDNN. **Note:**
There are two versions, 11.x or 12.x, for the different architectures (Turing, Maxwell, etc.) of GPUs.
You can find which version you should download from [this link](https://docs.nvidia.com/deeplearning/cudnn/support-matrix/index.html).
11. Download the latest version and extract it from the download location
12. Find the bin folder E\cudnn-windows-x86_64-__Whatever Version__\bin
13. Copy and paste the .dll files into YourInvokeFolderHere\\.venv\Lib\site-packages\torch\lib **Make sure to copy, and not move the files**
14. If prompted, replace any existing files
**Notes:**
* If no change is seen or any issues are encountered, follow the same steps as above and restore the torch/lib backup folder you made earlier, replacing the new files. If you didn't make a backup, you can also uninstall and reinstall torch through the command line to repair this folder.
* This optimization is intended for newer graphics cards (40/30 series), but improvements have also been seen with older graphics cards.
### Torch Installation
When installing torch and torchvision manually with `pip`, remember to provide


@ -4,9 +4,9 @@ title: Installing with Docker
# :fontawesome-brands-docker: Docker
!!! warning "For end users"
!!! warning "For most users"
We highly recommend to Install InvokeAI locally using [these instructions](index.md)
We highly recommend to Install InvokeAI locally using [these instructions](INSTALLATION.md)
!!! tip "For developers"


@ -0,0 +1,7 @@
document$.subscribe(function() {
    var tables = document.querySelectorAll("article table:not([class])")
    tables.forEach(function(table) {
        new Tablesort(table)
    })
})

docs/nodes/NODES.md

@ -0,0 +1,68 @@
# Using the Node Editor
The nodes editor is a blank canvas allowing for the use of individual functions and image transformations to control the image generation workflow. Nodes take in inputs on the left side of the node, and return an output on the right side of the node. A node graph is composed of multiple nodes that are connected together to create a workflow. Nodes' inputs and outputs are connected by dragging connectors from node to node. Inputs and outputs are color coded for ease of use.
To better understand how nodes are used, think of how an electric power bar works. It takes in one input (electricity from a wall outlet) and passes it to multiple devices through multiple outputs. Similarly, a node could have multiple inputs and outputs functioning at the same (or different) time, but all node outputs pass information onward like a power bar passes electricity. Not all outputs are compatible with all inputs, however - Each node has different constraints on how it is expecting to input/output information. In general, node outputs are colour-coded to match compatible inputs of other nodes.
If you're not familiar with Diffusion, take a look at our [Diffusion Overview.](../help/diffusion.md) Understanding how diffusion works will enable you to more easily use the Nodes Editor and build workflows to suit your needs.
## Important Concepts
There are several node grouping concepts that can be examined with a narrow focus. These (and other) groupings can be pieced together to make up functional graph setups, and are important to understanding how groups of nodes work together as part of a whole. Note that the screenshots below aren't examples of complete functioning node graphs (see Examples).
### Noise
An initial noise tensor is necessary for the latent diffusion process. As a result, the Denoising node requires a noise node input.
![groupsnoise](../assets/nodes/groupsnoise.png)
### Text Prompt Conditioning
Conditioning is necessary for the latent diffusion process, whether empty or not. As a result, the Denoising node requires positive and negative conditioning inputs. Conditioning is reliant on a CLIP text encoder provided by the Model Loader node.
![groupsconditioning](../assets/nodes/groupsconditioning.png)
### Image to Latents & VAE
The ImageToLatents node takes in a pixel image and a VAE and outputs a latents. The LatentsToImage node does the opposite, taking in a latents and a VAE and outputting a pixel image.
![groupsimgvae](../assets/nodes/groupsimgvae.png)
### Defined & Random Seeds
It is common to want to use both the same seed (for continuity) and random seeds (for variety). To define a seed, simply enter it into the 'Seed' field on a noise node. Conversely, the RandomInt node generates a random integer between 'Low' and 'High', and can be used as input to the 'Seed' edge point on a noise node to randomize your seed.
![groupsrandseed](../assets/nodes/groupsrandseed.png)
### ControlNet
The ControlNet node outputs a Control, which can be provided as input to non-image *ToLatents nodes. Depending on the type of ControlNet desired, ControlNet nodes usually require an image processor node, such as a Canny Processor or Depth Processor, which prepares an input image for use with ControlNet.
![groupscontrol](../assets/nodes/groupscontrol.png)
### LoRA
The LoRA Loader node lets you load a LoRA and pass it onward as output. A LoRA provides fine-tuned adjustments to the UNet and text encoder weights that augment the base model's image and text vocabularies.
![groupslora](../assets/nodes/groupslora.png)
### Scaling
Use the ImageScale, ScaleLatents, and Upscale nodes to upscale images and/or latent images. Upscaling is the process of enlarging an image and adding more detail. The chosen method differs across contexts. However, be aware that latents are already noisy and compressed at their original resolution; scaling an image could produce more detailed results.
![groupsallscale](../assets/nodes/groupsallscale.png)
### Iteration + Multiple Images as Input
Iteration is a common concept in any kind of processing, and means repeating a process over a given set of inputs. In nodes, you're able to use the Iterate node to iterate through collections, usually gathered by the Collect node. The Iterate node has many potential uses, from processing a collection of images one after another, to varying seeds across multiple image generations, and more. This screenshot demonstrates how to collect several images and use them in an image generation workflow.
![groupsiterate](../assets/nodes/groupsiterate.png)
### Multiple Image Generation + Random Seeds
Multiple image generation in the node editor is done using the RandomRange node. In this case, the 'Size' field represents the number of images to generate. As RandomRange produces a collection of integers, we need to add the Iterate node to iterate through the collection.
To control seeds across generations takes some care. The first row in the screenshot will generate multiple images with different seeds, but using the same RandomRange parameters across invocations will result in the same group of random seeds being used across the images, producing repeatable results. In the second row, adding the RandomInt node as input to RandomRange's 'Seed' edge point will ensure that seeds are varied across all images across invocations, producing varied results.
![groupsmultigenseeding](../assets/nodes/groupsmultigenseeding.png)

View File

@ -0,0 +1,80 @@
# ComfyUI to InvokeAI
If you're coming to InvokeAI from ComfyUI, welcome! You'll find things are similar but different - the good news is that you already know how things should work, and it's just a matter of wiring them up!
Some things to note:
- InvokeAI's nodes tend to be more granular than default nodes in Comfy. This means each node in Invoke will do a specific task and you might need to use multiple nodes to achieve the same result. The added granularity improves the control you have over your workflows.
- InvokeAI's backend and ComfyUI's backend are very different, which means Comfy workflows cannot be imported into InvokeAI. However, we have created a [list of popular workflows](exampleWorkflows.md) for you to get started with Nodes in InvokeAI!
## Node Equivalents:
| Comfy UI Category | ComfyUI Node | Invoke Equivalent |
|:---------------------------------- |:---------------------------------- | :----------------------------------|
| Sampling |KSampler |Denoise Latents|
| Sampling |Ksampler Advanced|Denoise Latents |
| Loaders |Load Checkpoint | Main Model Loader _or_ SDXL Main Model Loader|
| Loaders |Load VAE | VAE Loader |
| Loaders |Load Lora | LoRA Loader _or_ SDXL Lora Loader|
| Loaders |Load ControlNet Model | ControlNet|
| Loaders |Load ControlNet Model (diff) | ControlNet|
| Loaders |Load Style Model | Reference Only ControlNet will be coming in a future version of InvokeAI|
| Loaders |unCLIPCheckpointLoader | N/A |
| Loaders |GLIGENLoader | N/A |
| Loaders |Hypernetwork Loader | N/A |
| Loaders |Load Upscale Model | Occurs within "Upscale (RealESRGAN)"|
|Conditioning |CLIP Text Encode (Prompt) | Compel (Prompt) or SDXL Compel (Prompt) |
|Conditioning |CLIP Set Last Layer | CLIP Skip|
|Conditioning |Conditioning (Average) | Use the .blend() feature of prompts |
|Conditioning |Conditioning (Combine) | N/A |
|Conditioning |Conditioning (Concat) | See the Prompt Tools Community Node|
|Conditioning |Conditioning (Set Area) | N/A |
|Conditioning |Conditioning (Set Mask) | Mask Edge |
|Conditioning |CLIP Vision Encode | N/A |
|Conditioning |unCLIPConditioning | N/A |
|Conditioning |Apply ControlNet | ControlNet |
|Conditioning |Apply ControlNet (Advanced) | ControlNet |
|Latent |VAE Decode | Latents to Image|
|Latent |VAE Encode | Image to Latents |
|Latent |Empty Latent Image | Noise |
|Latent |Upscale Latent |Resize Latents |
|Latent |Upscale Latent By |Scale Latents |
|Latent |Latent Composite | Blend Latents |
|Latent |LatentCompositeMasked | N/A |
|Image |Save Image | Image |
|Image |Preview Image |Current |
|Image |Load Image | Image|
|Image |Empty Image| Blank Image |
|Image |Invert Image | Invert Lerp Image |
|Image |Batch Images | Link "Image" nodes into an "Image Collection" node |
|Image |Pad Image for Outpainting | Outpainting is easily accomplished in the Unified Canvas |
|Image |ImageCompositeMasked | Paste Image |
|Image | Upscale Image | Resize Image |
|Image | Upscale Image By | Upscale Image |
|Image | Upscale Image (using Model) | Upscale Image |
|Image | ImageBlur | Blur Image |
|Image | ImageQuantize | N/A |
|Image | ImageSharpen | N/A |
|Image | Canny | Canny Processor |
|Mask |Load Image (as Mask) | Image |
|Mask |Convert Mask to Image | Image|
|Mask |Convert Image to Mask | Image |
|Mask |SolidMask | N/A |
|Mask |InvertMask |Invert Lerp Image |
|Mask |CropMask | Crop Image |
|Mask |MaskComposite | Combine Mask |
|Mask |FeatherMask | Blur Image |
|Advanced | Load CLIP | Main Model Loader _or_ SDXL Main Model Loader|
|Advanced | UNETLoader | Main Model Loader _or_ SDXL Main Model Loader|
|Advanced | DualCLIPLoader | Main Model Loader _or_ SDXL Main Model Loader|
|Advanced | Load Checkpoint | Main Model Loader _or_ SDXL Main Model Loader |
|Advanced | ConditioningZeroOut | N/A |
|Advanced | ConditioningSetTimestepRange | N/A |
|Advanced | CLIPTextEncodeSDXLRefiner | Compel (Prompt) or SDXL Compel (Prompt) |
|Advanced | CLIPTextEncodeSDXL |Compel (Prompt) or SDXL Compel (Prompt) |
|Advanced | ModelMergeSimple | Model Merging is available in the Model Manager |
|Advanced | ModelMergeBlocks | Model Merging is available in the Model Manager|
|Advanced | CheckpointSave | Model saving is available in the Model Manager|
|Advanced | CLIPMergeSimple | N/A |

View File

@ -2,17 +2,13 @@
These are nodes that have been developed by the community, for the community. If you're not sure what a node is, you can learn more about nodes [here](overview.md).
If you'd like to submit a node for the community, please refer to the [node creation overview](./overview.md#contributing-nodes).
If you'd like to submit a node for the community, please refer to the [node creation overview](contributingNodes.md).
To download a node, simply download the `.py` node file from the link and add it to the `invokeai/app/invocations/` folder in your Invoke AI install location. Along with the node, an example node graph should be provided to help you get started with the node.
To download a node, simply download the `.py` node file from the link and add it to the `invokeai/app/invocations` folder in your Invoke AI install location. Along with the node, an example node graph should be provided to help you get started with the node.
To use a community node graph, download the `.json` node graph file and load it into Invoke AI via the **Load Nodes** button on the Node Editor.
## Disclaimer
The nodes linked below have been developed and contributed by members of the Invoke AI community. While we strive to ensure the quality and safety of these contributions, we do not guarantee the reliability or security of the nodes. If you have issues or concerns with any of the nodes below, please raise it on GitHub or in the Discord.
## List of Nodes
## Community Nodes
### FaceTools
@ -26,8 +22,7 @@ The nodes linked below have been developed and contributed by members of the Inv
![b920b710-1882-49a0-8d02-82dff2cca907](https://github.com/invoke-ai/InvokeAI/assets/25252829/7660c1ed-bf7d-4d0a-947f-1fc1679557ba)
![71a91805-fda5-481c-b380-264665703133](https://github.com/invoke-ai/InvokeAI/assets/25252829/f8f6a2ee-2b68-4482-87da-b90221d5c3e2)
<hr>
--------------------------------
### Ideal Size
**Description:** This node calculates an ideal image size for a first pass of a multi-pass upscaling. The aim is to avoid duplication that results from choosing a size larger than the model is capable of.
@ -35,6 +30,172 @@ The nodes linked below have been developed and contributed by members of the Inv
**Node Link:** https://github.com/JPPhoto/ideal-size-node
--------------------------------
### Film Grain
**Description:** This node adds a film grain effect to the input image based on the weights, seeds, and blur radii parameters. It works with RGB input images only.
**Node Link:** https://github.com/JPPhoto/film-grain-node
--------------------------------
### Image Picker
**Description:** This InvokeAI node takes in a collection of images and randomly chooses one. This can be useful when you have a number of poses to choose from for a ControlNet node, or a number of input images for another purpose.
**Node Link:** https://github.com/JPPhoto/image-picker-node
--------------------------------
### Retroize
**Description:** Retroize is a collection of nodes for InvokeAI to "Retroize" images. Any image can be given a fresh coat of retro paint with these nodes, either from your gallery or from within the graph itself. It includes nodes to pixelize, quantize, palettize, and ditherize images; as well as to retrieve palettes from existing images.
**Node Link:** https://github.com/Ar7ific1al/invokeai-retroizeinode/
**Retroize Output Examples**
![image](https://github.com/Ar7ific1al/InvokeAI_nodes_retroize/assets/2306586/de8b4fa6-324c-4c2d-b36c-297600c73974)
--------------------------------
### GPT2RandomPromptMaker
**Description:** A node for InvokeAI that utilizes the GPT-2 language model to generate random prompts based on a provided seed and context.
**Node Link:** https://github.com/mickr777/GPT2RandomPromptMaker
**Output Examples**
Generated Prompt: An enchanted weapon will be usable by any character regardless of their alignment.
![9acf5aef-7254-40dd-95b3-8eac431dfab0 (1)](https://github.com/mickr777/InvokeAI/assets/115216705/8496ba09-bcdd-4ff7-8076-ff213b6a1e4c)
--------------------------------
### Load Video Frame
**Description:** A set of video frame image provider, indexer, and video creation nodes for hooking up to iterators, ranges, ControlNets, and the like for InvokeAI node experimentation. Think animation + ControlNet outputs.
**Node Link:** https://github.com/helix4u/load_video_frame
**Example Node Graph:** https://github.com/helix4u/load_video_frame/blob/main/Example_Workflow.json
**Output Example:**
![Example animation](https://github.com/helix4u/load_video_frame/blob/main/testmp4_embed_converted.gif)
[Full mp4 of Example Output test.mp4](https://github.com/helix4u/load_video_frame/blob/main/test.mp4)
--------------------------------
### Oobabooga
**Description:** Asks a local LLM running in Oobabooga's Text-Generation-Webui to write a prompt based on the user's input.
**Link:** https://github.com/sammyf/oobabooga-node
**Example:**
"describe a new mystical creature in its natural environment"
*can return*
"The mystical creature I am describing to you is called the "Glimmerwing". It is a majestic, iridescent being that inhabits the depths of the most enchanted forests and glimmering lakes. Its body is covered in shimmering scales that reflect every color of the rainbow, and it has delicate, translucent wings that sparkle like diamonds in the sunlight. The Glimmerwing's home is a crystal-clear lake, surrounded by towering trees with leaves that shimmer like jewels. In this serene environment, the Glimmerwing spends its days swimming gracefully through the water, chasing schools of glittering fish and playing with the gentle ripples of the lake's surface.
As the sun sets, the Glimmerwing perches on a branch of one of the trees, spreading its wings to catch the last rays of light. The creature's scales glow softly, casting a rainbow of colors across the forest floor. The Glimmerwing sings a haunting melody, its voice echoing through the stillness of the night air. Its song is said to have the power to heal the sick and bring peace to troubled souls. Those who are lucky enough to hear the Glimmerwing's song are forever changed by its beauty and grace."
![glimmerwing_small](https://github.com/sammyf/oobabooga-node/assets/42468608/cecdd820-93dd-4c35-abbf-607e001fb2ed)
**Requirement**
a Text-Generation-Webui instance (might work remotely too, but I never tried it) and obviously InvokeAI 3.x
**Note**
This node works best with SDXL models, especially as the style can be described independently of the LLM's output.
--------------------------------
### Depth Map from Wavefront OBJ
**Description:** Render depth maps from Wavefront .obj files (triangulated) using this simple 3D renderer utilizing numpy and matplotlib to compute and color the scene. There are simple parameters to change the FOV, camera position, and model orientation.
To be imported, an .obj must use triangulated meshes, so make sure to enable that option if exporting from a 3D modeling program. This renderer makes each triangle a solid color based on its average depth, so it will cause anomalies if your .obj has large triangles. In Blender, the Remesh modifier can be helpful to subdivide a mesh into small pieces that work well given these limitations.
**Node Link:** https://github.com/dwringer/depth-from-obj-node
**Example Usage:**
![depth from obj usage graph](https://raw.githubusercontent.com/dwringer/depth-from-obj-node/main/depth_from_obj_usage.jpg)
--------------------------------
### Enhance Image (simple adjustments)
**Description:** Boost or reduce color saturation, contrast, brightness, sharpness, or invert colors of any image at any stage with this simple wrapper for pillow [PIL]'s ImageEnhance module.
Color inversion is toggled with a simple switch, while each of the four enhancer modes is activated by entering a value other than 1 in the corresponding input field. Values less than 1 will reduce the corresponding property, while values greater than 1 will enhance it.
**Node Link:** https://github.com/dwringer/image-enhance-node
**Example Usage:**
![enhance image usage graph](https://raw.githubusercontent.com/dwringer/image-enhance-node/main/image_enhance_usage.jpg)
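For reference, the PIL `ImageEnhance` semantics that this node wraps behave as in the standalone snippet below (the file names are placeholders; this is not the node's own code):

```python
# Factors of 1.0 leave the image unchanged; <1 reduces, >1 boosts each property.
from PIL import Image, ImageEnhance

img = Image.open("input.png")
img = ImageEnhance.Color(img).enhance(1.3)       # saturation
img = ImageEnhance.Contrast(img).enhance(0.9)    # contrast
img = ImageEnhance.Brightness(img).enhance(1.1)  # brightness
img = ImageEnhance.Sharpness(img).enhance(2.0)   # sharpness
img.save("enhanced.png")
```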
--------------------------------
### Generative Grammar-Based Prompt Nodes
**Description:** This set of 3 nodes generates prompts from simple user-defined grammar rules (loaded from custom files - examples provided below). The prompts are made by recursively expanding a special template string, replacing nonterminal "parts-of-speech" until no more nonterminal terms remain in the string.
This includes 3 Nodes:
- *Lookup Table from File* - loads the "prompt" section of a YAML file (or of a whole folder of YAML files) into a JSON-ified dictionary (Lookups output)
- *Lookups Entry from Prompt* - places a single entry in a new Lookups output under the specified heading
- *Prompt from Lookup Table* - uses a Collection of Lookups as grammar rules from which to randomly generate prompts.
**Node Link:** https://github.com/dwringer/generative-grammar-prompt-nodes
**Example Usage:**
![lookups usage example graph](https://raw.githubusercontent.com/dwringer/generative-grammar-prompt-nodes/main/lookuptables_usage.jpg)
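As a rough, hand-rolled illustration of the recursive-expansion idea described above (the lookup table, keys, and helper function below are invented for this example and are not the node's actual implementation):

```python
import random

# Expand {nonterminal} placeholders from a lookup table until none remain.
LOOKUPS = {
    "adjective": ["misty", "golden", "ancient"],
    "noun": ["forest", "citadel", "shoreline"],
    "prompt": ["a {adjective} {noun} at dawn", "portrait of a {adjective} {noun}"],
}


def expand(template: str, lookups: dict) -> str:
    while "{" in template:
        start = template.index("{")
        end = template.index("}", start)
        key = template[start + 1 : end]
        template = template[:start] + random.choice(lookups[key]) + template[end + 1 :]
    return template


print(expand(random.choice(LOOKUPS["prompt"]), LOOKUPS))
```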
--------------------------------
### Image and Mask Composition Pack
**Description:** This is a pack of nodes for composing masks and images, including a simple text mask creator and both image and latent offset nodes. The offsets wrap around, so these can be used in conjunction with the Seamless node to progressively generate images centered on different parts of the seamless tiling.
This includes 4 Nodes:
- *Text Mask (simple 2D)* - create and position a white on black (or black on white) line of text using any font locally available to Invoke.
- *Image Compositor* - Take a subject from an image with a flat backdrop and layer it on another image using a chroma key or flood select background removal.
- *Offset Latents* - Offset a latents tensor in the vertical and/or horizontal dimensions, wrapping it around.
- *Offset Image* - Offset an image in the vertical and/or horizontal dimensions, wrapping it around.
**Node Link:** https://github.com/dwringer/composition-nodes
**Example Usage:**
![composition nodes usage graph](https://raw.githubusercontent.com/dwringer/composition-nodes/main/composition_nodes_usage.jpg)
--------------------------------
### Size Stepper Nodes
**Description:** This is a set of nodes for calculating the necessary size increments for doing upscaling workflows. Use the *Final Size & Orientation* node to enter your full size dimensions and orientation (portrait/landscape/random), then plug that and your initial generation dimensions into the *Ideal Size Stepper* and get 1, 2, or 3 intermediate pairs of dimensions for upscaling. Note this does not output the initial size or full size dimensions: the 1, 2, or 3 outputs of this node are only the intermediate sizes.
A third node is included, *Random Switch (Integers)*, which is just a generic version of Final Size with no orientation selection.
**Node Link:** https://github.com/dwringer/size-stepper-nodes
**Example Usage:**
![size stepper usage graph](https://raw.githubusercontent.com/dwringer/size-stepper-nodes/main/size_nodes_usage.jpg)
--------------------------------
### Text font to Image
**Description:** A text-font-to-image node for InvokeAI. It downloads a font to use (or uses it from the font cache if already present). The text is always resized to the image size, but this can be controlled with padding; an optional second line of text is supported.
**Node Link:** https://github.com/mickr777/textfontimage
**Output Examples**
![a3609d48-d9b7-41f0-b280-063d857986fb](https://github.com/mickr777/InvokeAI/assets/115216705/c21b0af3-d9c6-4c16-9152-846a23effd36)
Results after using the depth controlnet
![9133eabb-bcda-4326-831e-1b641228b178](https://github.com/mickr777/InvokeAI/assets/115216705/915f1a53-968e-43eb-aa61-07cd8f1a733a)
![4f9a3fa8-9be9-4236-8a3e-fcec66decd2a](https://github.com/mickr777/InvokeAI/assets/115216705/821ef89e-8a60-44f5-b94e-471a9d8690cc)
![babd69c4-9d60-4a55-a834-5e8397f62610](https://github.com/mickr777/InvokeAI/assets/115216705/2befcb6d-49f4-4bfd-b5fc-1fee19274f89)
--------------------------------
### Example Node Template
**Description:** This node allows you to do super cool things with InvokeAI.
@ -47,7 +208,12 @@ The nodes linked below have been developed and contributed by members of the Inv
![Example Image](https://invoke-ai.github.io/InvokeAI/assets/invoke_ai_banner.png){: style="height:115px;width:240px"}
## Disclaimer
The nodes linked have been developed and contributed by members of the Invoke AI community. While we strive to ensure the quality and safety of these contributions, we do not guarantee the reliability or security of the nodes. If you have issues or concerns with any of the nodes below, please raise it on GitHub or in the Discord.
## Help
If you run into any issues with a node, please post in the [InvokeAI Discord](https://discord.gg/ZmtBAhwWhy).

View File

@ -0,0 +1,27 @@
# Contributing Nodes
To learn about the specifics of creating a new node, please visit our [Node creation documentation](../contributing/INVOCATIONS.md).
Once you've created a node and confirmed that it behaves as expected locally, follow these steps:
- Make sure the node is contained in a new Python (.py) file
- Submit a pull request with a link to your node in GitHub against the `nodes` branch to add the node to the [Community Nodes](communityNodes.md) list
- Make sure you are following the template below and have provided all relevant details about the node and what it does.
- A maintainer will review the pull request and node. If the node is aligned with the direction of the project, you might be asked for permission to include it in the core project.
### Community Node Template
```markdown
--------------------------------
### Super Cool Node Template
**Description:** This node allows you to do super cool things with InvokeAI.
**Node Link:** https://github.com/invoke-ai/InvokeAI/fake_node.py
**Example Node Graph:** https://github.com/invoke-ai/InvokeAI/fake_node_graph.json
**Output Examples**
![InvokeAI](https://invoke-ai.github.io/InvokeAI/assets/invoke_ai_banner.png)
```
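For the Python side of things, a minimal node might look something like the sketch below. This is only an illustration built from the `@invocation` decorator and `InputField` helper in `invokeai.app.invocations.baseinvocation`; the node name, fields, and the `IntegerOutput` primitive used here are assumptions for the example, so see the [Node creation documentation](../contributing/INVOCATIONS.md) for the authoritative API.

```python
# Illustrative sketch of a community node (assumed imports and output type).
from invokeai.app.invocations.baseinvocation import (
    BaseInvocation,
    InputField,
    InvocationContext,
    invocation,
)
from invokeai.app.invocations.primitives import IntegerOutput  # assumed location


@invocation("add_two_ints", title="Add Two Integers", tags=["math", "example"], category="math", version="1.0.0")
class AddTwoIntsInvocation(BaseInvocation):
    """Adds two integers and returns the result."""

    a: int = InputField(default=0, description="The first number")
    b: int = InputField(default=0, description="The second number")

    def invoke(self, context: InvocationContext) -> IntegerOutput:
        return IntegerOutput(value=self.a + self.b)
```

Drop the `.py` file into `invokeai/app/invocations/` (as described for community nodes) and restart InvokeAI; the node should then appear in the Node Editor.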

View File

@ -0,0 +1,97 @@
# List of Default Nodes
The table below contains a list of the default nodes shipped with InvokeAI and their descriptions.
| Node <img width=160 align="right"> | Function |
|:---------------------------------- | :--------------------------------------------------------------------------------------|
|Add Integers | Adds two numbers|
|Boolean Primitive Collection | A collection of boolean primitive values|
|Boolean Primitive | A boolean primitive value|
|Canny Processor | Canny edge detection for ControlNet|
|CLIP Skip | Skip layers in clip text_encoder model.|
|Collect | Collects values into a collection|
|Color Correct | Shifts the colors of a target image to match the reference image, optionally using a mask to only color-correct certain regions of the target image.|
|Color Primitive | A color primitive value|
|Compel Prompt | Parse prompt using compel package to conditioning.|
|Conditioning Primitive Collection | A collection of conditioning tensor primitive values|
|Conditioning Primitive | A conditioning tensor primitive value|
|Content Shuffle Processor | Applies content shuffle processing to image|
|ControlNet | Collects ControlNet info to pass to other nodes|
|OpenCV Inpaint | Simple inpaint using opencv.|
|Denoise Latents | Denoises noisy latents to decodable images|
|Divide Integers | Divides two numbers|
|Dynamic Prompt | Parses a prompt using adieyal/dynamicprompts' random or combinatorial generator|
|Upscale (RealESRGAN) | Upscales an image using RealESRGAN.|
|Float Primitive Collection | A collection of float primitive values|
|Float Primitive | A float primitive value|
|Float Range | Creates a range|
|HED (softedge) Processor | Applies HED edge detection to image|
|Blur Image | Blurs an image|
|Extract Image Channel | Gets a channel from an image.|
|Image Primitive Collection | A collection of image primitive values|
|Convert Image Mode | Converts an image to a different mode.|
|Crop Image | Crops an image to a specified box. The box can be outside of the image.|
|Image Hue Adjustment | Adjusts the Hue of an image.|
|Inverse Lerp Image | Inverse linear interpolation of all pixels of an image|
|Image Primitive | An image primitive value|
|Lerp Image | Linear interpolation of all pixels of an image|
|Offset Image Channel | Add to or subtract from an image color channel by a uniform value.|
|Multiply Image Channel | Multiply or Invert an image color channel by a scalar value.|
|Multiply Images | Multiplies two images together using `PIL.ImageChops.multiply()`.|
|Blur NSFW Image | Add blur to NSFW-flagged images|
|Paste Image | Pastes an image into another image.|
|ImageProcessor | Base class for invocations that preprocess images for ControlNet|
|Resize Image | Resizes an image to specific dimensions|
|Scale Image | Scales an image by a factor|
|Image to Latents | Encodes an image into latents.|
|Add Invisible Watermark | Add an invisible watermark to an image|
|Solid Color Infill | Infills transparent areas of an image with a solid color|
|PatchMatch Infill | Infills transparent areas of an image using the PatchMatch algorithm|
|Tile Infill | Infills transparent areas of an image with tiles of the image|
|Integer Primitive Collection | A collection of integer primitive values|
|Integer Primitive | An integer primitive value|
|Iterate | Iterates over a list of items|
|Latents Primitive Collection | A collection of latents tensor primitive values|
|Latents Primitive | A latents tensor primitive value|
|Latents to Image | Generates an image from latents.|
|Leres (Depth) Processor | Applies leres processing to image|
|Lineart Anime Processor | Applies line art anime processing to image|
|Lineart Processor | Applies line art processing to image|
|LoRA Loader | Apply selected lora to unet and text_encoder.|
|Main Model Loader | Loads a main model, outputting its submodels.|
|Combine Mask | Combine two masks together by multiplying them using `PIL.ImageChops.multiply()`.|
|Mask Edge | Applies an edge mask to an image|
|Mask from Alpha | Extracts the alpha channel of an image as a mask.|
|Mediapipe Face Processor | Applies mediapipe face processing to image|
|Midas (Depth) Processor | Applies Midas depth processing to image|
|MLSD Processor | Applies MLSD processing to image|
|Multiply Integers | Multiplies two numbers|
|Noise | Generates latent noise.|
|Normal BAE Processor | Applies NormalBae processing to image|
|ONNX Latents to Image | Generates an image from latents.|
|ONNX Prompt (Raw) | A node to process inputs and produce outputs. May use dependency injection in __init__ to receive providers.|
|ONNX Text to Latents | Generates latents from conditionings.|
|ONNX Model Loader | Loads a main model, outputting its submodels.|
|Openpose Processor | Applies Openpose processing to image|
|PIDI Processor | Applies PIDI processing to image|
|Prompts from File | Loads prompts from a text file|
|Random Integer | Outputs a single random integer.|
|Random Range | Creates a collection of random numbers|
|Integer Range | Creates a range of numbers from start to stop with step|
|Integer Range of Size | Creates a range from start to start + size with step|
|Resize Latents | Resizes latents to explicit width/height (in pixels). Provided dimensions are floor-divided by 8.|
|SDXL Compel Prompt | Parse prompt using compel package to conditioning.|
|SDXL LoRA Loader | Apply selected lora to unet and text_encoder.|
|SDXL Main Model Loader | Loads an sdxl base model, outputting its submodels.|
|SDXL Refiner Compel Prompt | Parse prompt using compel package to conditioning.|
|SDXL Refiner Model Loader | Loads an sdxl refiner model, outputting its submodels.|
|Scale Latents | Scales latents by a given factor.|
|Segment Anything Processor | Applies segment anything processing to image|
|Show Image | Displays a provided image, and passes it forward in the pipeline.|
|Step Param Easing | Experimental per-step parameter easing for denoising steps|
|String Primitive Collection | A collection of string primitive values|
|String Primitive | A string primitive value|
|Subtract Integers | Subtracts two numbers|
|Tile Resample Processor | Tile resampler processor|
|VAE Loader | Loads a VAE model, outputting a VaeLoaderOutput|
|Zoe (Depth) Processor | Applies Zoe depth processing to image|

View File

@ -0,0 +1,15 @@
# Example Workflows
TODO: Will update once uploading workflows is available.
## Text2Image
## Image2Image
## ControlNet
## Upscaling
## Inpainting / Outpainting
## LoRAs

View File

@ -1,42 +1,26 @@
# Nodes
## What are Nodes?
A Node is simply a single operation that takes in some inputs and gives
out some outputs. We can then chain multiple nodes together to create more
A Node is simply a single operation that takes in inputs and returns
outputs. Multiple nodes can be linked together to create more
complex functionality. All InvokeAI features are added through nodes.
This means nodes can be used to easily extend the image generation capabilities of InvokeAI and to build workflows that suit your needs.
### Anatomy of a Node
You can read more about nodes and the node editor [here](../features/NODES.md).
Individual nodes are made up of the following:
- Inputs: Edge points on the left side of the node window where you connect outputs from other nodes.
- Outputs: Edge points on the right side of the node window where you connect to inputs on other nodes.
- Options: Various options which are either manually configured, or overridden by connecting an output from another node to the input.
## Downloading Nodes
To download a new node, visit our list of [Community Nodes](communityNodes.md). These are nodes that have been created by the community, for the community.
With nodes, you can easily extend the image generation capabilities of InvokeAI and build workflows that suit your needs.
You can read more about nodes and the node editor [here](../nodes/NODES.md).
To get started with nodes, take a look at some of our examples for [common workflows](../nodes/exampleWorkflows.md)
## Downloading New Nodes
To download a new node, visit our list of [Community Nodes](../nodes/communityNodes.md). These are nodes that have been created by the community, for the community.
## Contributing Nodes
To learn about creating a new node, please visit our [Node creation documentation](../contributing/INVOCATIONS.md).
Once you've created a node and confirmed that it behaves as expected locally, follow these steps:
* Make sure the node is contained in a new Python (.py) file
* Submit a pull request with a link to your node in GitHub against the `nodes` branch to add the node to the [Community Nodes](../nodes/communityNodes.md) list
* Make sure you are following the template below and have provided all relevant details about the node and what it does.
* A maintainer will review the pull request and node. If the node is aligned with the direction of the project, you might be asked for permission to include it in the core project.
### Community Node Template
```markdown
--------------------------------
### Super Cool Node Template
**Description:** This node allows you to do super cool things with InvokeAI.
**Node Link:** https://github.com/invoke-ai/InvokeAI/fake_node.py
**Example Node Graph:** https://github.com/invoke-ai/InvokeAI/fake_node_graph.json
**Output Examples**
![InvokeAI](https://invoke-ai.github.io/InvokeAI/assets/invoke_ai_banner.png)
```

View File

@ -14,7 +14,7 @@ fi
VERSION=$(cd ..; python -c "from invokeai.version import __version__ as version; print(version)")
PATCH=""
VERSION="v${VERSION}${PATCH}"
LATEST_TAG="v3.0-latest"
LATEST_TAG="v3-latest"
echo Building installer for version $VERSION
echo "Be certain that you're in the 'installer' directory before continuing."
@ -46,6 +46,7 @@ if [[ $(python -c 'from importlib.util import find_spec; print(find_spec("build"
pip install --user build
fi
rm -r ../build
python -m build --wheel --outdir dist/ ../.
# ----------------------

View File

@ -407,7 +407,7 @@ def get_pip_from_venv(venv_path: Path) -> str:
:rtype: str
"""
pip = "Scripts\pip.exe" if OS == "Windows" else "bin/pip"
pip = "Scripts\\pip.exe" if OS == "Windows" else "bin/pip"
return str(venv_path.expanduser().resolve() / pip)

View File

@ -49,7 +49,7 @@ if __name__ == "__main__":
try:
inst.install(**args.__dict__)
except KeyboardInterrupt as exc:
except KeyboardInterrupt:
print("\n")
print("Ctrl-C pressed. Aborting.")
print("Come back soon!")

View File

@ -70,7 +70,7 @@ def confirm_install(dest: Path) -> bool:
)
else:
print(f"InvokeAI will be installed in {dest}")
dest_confirmed = not Confirm.ask(f"Would you like to pick a different location?", default=False)
dest_confirmed = not Confirm.ask("Would you like to pick a different location?", default=False)
console.line()
return dest_confirmed
@ -90,7 +90,7 @@ def dest_path(dest=None) -> Path:
dest = Path(dest).expanduser().resolve()
else:
dest = Path.cwd().expanduser().resolve()
prev_dest = dest.expanduser().resolve()
prev_dest = init_path = dest
dest_confirmed = confirm_install(dest)
@ -109,9 +109,9 @@ def dest_path(dest=None) -> Path:
)
console.line()
print(f"[orange3]Please select the destination directory for the installation:[/] \[{browse_start}]: ")
console.print(f"[orange3]Please select the destination directory for the installation:[/] \\[{browse_start}]: ")
selected = prompt(
f">>> ",
">>> ",
complete_in_thread=True,
completer=path_completer,
default=str(browse_start) + os.sep,
@ -134,14 +134,14 @@ def dest_path(dest=None) -> Path:
try:
dest.mkdir(exist_ok=True, parents=True)
return dest
except PermissionError as exc:
print(
except PermissionError:
console.print(
f"Failed to create directory {dest} due to insufficient permissions",
style=Style(color="red"),
highlight=True,
)
except OSError as exc:
console.print_exception(exc)
except OSError:
console.print_exception()
if Confirm.ask("Would you like to try again?"):
dest_path(init_path)

View File

@ -1,6 +1,5 @@
# Copyright (c) 2022 Kyle Schouviller (https://github.com/kyle0654)
from typing import Optional
from logging import Logger
from invokeai.app.services.board_image_record_storage import (
SqliteBoardImageRecordStorage,
@ -45,7 +44,7 @@ def check_internet() -> bool:
try:
urllib.request.urlopen(host, timeout=1)
return True
except:
except Exception:
return False

View File

@ -1,19 +1,19 @@
import typing
from enum import Enum
from pathlib import Path
from fastapi import Body
from fastapi.routing import APIRouter
from pathlib import Path
from pydantic import BaseModel, Field
from invokeai.app.invocations.upscale import ESRGAN_MODELS
from invokeai.backend.image_util.invisible_watermark import InvisibleWatermark
from invokeai.backend.image_util.patchmatch import PatchMatch
from invokeai.backend.image_util.safety_checker import SafetyChecker
from invokeai.backend.image_util.invisible_watermark import InvisibleWatermark
from invokeai.app.invocations.upscale import ESRGAN_MODELS
from invokeai.backend.util.logging import logging
from invokeai.version import __version__
from ..dependencies import ApiDependencies
from invokeai.backend.util.logging import logging
class LogLevel(int, Enum):
@ -55,7 +55,7 @@ async def get_version() -> AppVersion:
@app_router.get("/config", operation_id="get_config", status_code=200, response_model=AppConfig)
async def get_config() -> AppConfig:
infill_methods = ["tile"]
infill_methods = ["tile", "lama", "cv2"]
if PatchMatch.patchmatch_available():
infill_methods.append("patchmatch")

View File

@ -34,7 +34,7 @@ async def add_image_to_board(
board_id=board_id, image_name=image_name
)
return result
except Exception as e:
except Exception:
raise HTTPException(status_code=500, detail="Failed to add image to board")
@ -53,7 +53,7 @@ async def remove_image_from_board(
try:
result = ApiDependencies.invoker.services.board_images.remove_image_from_board(image_name=image_name)
return result
except Exception as e:
except Exception:
raise HTTPException(status_code=500, detail="Failed to remove image from board")
@ -79,10 +79,10 @@ async def add_images_to_board(
board_id=board_id, image_name=image_name
)
added_image_names.append(image_name)
except:
except Exception:
pass
return AddImagesToBoardResult(board_id=board_id, added_image_names=added_image_names)
except Exception as e:
except Exception:
raise HTTPException(status_code=500, detail="Failed to add images to board")
@ -105,8 +105,8 @@ async def remove_images_from_board(
try:
ApiDependencies.invoker.services.board_images.remove_image_from_board(image_name=image_name)
removed_image_names.append(image_name)
except:
except Exception:
pass
return RemoveImagesFromBoardResult(removed_image_names=removed_image_names)
except Exception as e:
except Exception:
raise HTTPException(status_code=500, detail="Failed to remove images from board")

View File

@ -37,7 +37,7 @@ async def create_board(
try:
result = ApiDependencies.invoker.services.boards.create(board_name=board_name)
return result
except Exception as e:
except Exception:
raise HTTPException(status_code=500, detail="Failed to create board")
@ -50,7 +50,7 @@ async def get_board(
try:
result = ApiDependencies.invoker.services.boards.get_dto(board_id=board_id)
return result
except Exception as e:
except Exception:
raise HTTPException(status_code=404, detail="Board not found")
@ -73,7 +73,7 @@ async def update_board(
try:
result = ApiDependencies.invoker.services.boards.update(board_id=board_id, changes=changes)
return result
except Exception as e:
except Exception:
raise HTTPException(status_code=500, detail="Failed to update board")
@ -105,7 +105,7 @@ async def delete_board(
deleted_board_images=deleted_board_images,
deleted_images=[],
)
except Exception as e:
except Exception:
raise HTTPException(status_code=500, detail="Failed to delete board")

View File

@ -55,7 +55,7 @@ async def upload_image(
if crop_visible:
bbox = pil_image.getbbox()
pil_image = pil_image.crop(bbox)
except:
except Exception:
# Error opening the image
raise HTTPException(status_code=415, detail="Failed to read image")
@ -73,7 +73,7 @@ async def upload_image(
response.headers["Location"] = image_dto.image_url
return image_dto
except Exception as e:
except Exception:
raise HTTPException(status_code=500, detail="Failed to create image")
@ -85,7 +85,7 @@ async def delete_image(
try:
ApiDependencies.invoker.services.images.delete(image_name)
except Exception as e:
except Exception:
# TODO: Does this need any exception handling at all?
pass
@ -97,7 +97,7 @@ async def clear_intermediates() -> int:
try:
count_deleted = ApiDependencies.invoker.services.images.delete_intermediates()
return count_deleted
except Exception as e:
except Exception:
raise HTTPException(status_code=500, detail="Failed to clear intermediates")
pass
@ -115,7 +115,7 @@ async def update_image(
try:
return ApiDependencies.invoker.services.images.update(image_name, image_changes)
except Exception as e:
except Exception:
raise HTTPException(status_code=400, detail="Failed to update image")
@ -131,7 +131,7 @@ async def get_image_dto(
try:
return ApiDependencies.invoker.services.images.get_dto(image_name)
except Exception as e:
except Exception:
raise HTTPException(status_code=404)
@ -147,7 +147,7 @@ async def get_image_metadata(
try:
return ApiDependencies.invoker.services.images.get_metadata(image_name)
except Exception as e:
except Exception:
raise HTTPException(status_code=404)
@ -183,7 +183,7 @@ async def get_image_full(
)
response.headers["Cache-Control"] = f"max-age={IMAGE_MAX_AGE}"
return response
except Exception as e:
except Exception:
raise HTTPException(status_code=404)
@ -212,7 +212,7 @@ async def get_image_thumbnail(
response = FileResponse(path, media_type="image/webp", content_disposition_type="inline")
response.headers["Cache-Control"] = f"max-age={IMAGE_MAX_AGE}"
return response
except Exception as e:
except Exception:
raise HTTPException(status_code=404)
@ -234,7 +234,7 @@ async def get_image_urls(
image_url=image_url,
thumbnail_url=thumbnail_url,
)
except Exception as e:
except Exception:
raise HTTPException(status_code=404)
@ -282,10 +282,10 @@ async def delete_images_from_list(
try:
ApiDependencies.invoker.services.images.delete(image_name)
deleted_images.append(image_name)
except:
except Exception:
pass
return DeleteImagesFromListResult(deleted_images=deleted_images)
except Exception as e:
except Exception:
raise HTTPException(status_code=500, detail="Failed to delete images")
@ -303,10 +303,10 @@ async def star_images_in_list(
try:
ApiDependencies.invoker.services.images.update(image_name, changes=ImageRecordChanges(starred=True))
updated_image_names.append(image_name)
except:
except Exception:
pass
return ImagesUpdatedFromListResult(updated_image_names=updated_image_names)
except Exception as e:
except Exception:
raise HTTPException(status_code=500, detail="Failed to star images")
@ -320,8 +320,8 @@ async def unstar_images_in_list(
try:
ApiDependencies.invoker.services.images.update(image_name, changes=ImageRecordChanges(starred=False))
updated_image_names.append(image_name)
except:
except Exception:
pass
return ImagesUpdatedFromListResult(updated_image_names=updated_image_names)
except Exception as e:
except Exception:
raise HTTPException(status_code=500, detail="Failed to unstar images")

View File

@ -1,12 +1,13 @@
# Copyright (c) 2022 Kyle Schouviller (https://github.com/kyle0654)
from typing import Annotated, List, Optional, Union
from typing import Annotated, Optional, Union
from fastapi import Body, HTTPException, Path, Query, Response
from fastapi.routing import APIRouter
from pydantic.fields import Field
from ...invocations import *
# Importing * is bad karma but needed here for node detection
from ...invocations import * # noqa: F401 F403
from ...invocations.baseinvocation import BaseInvocation
from ...services.graph import (
Edge,

View File

@ -1,12 +1,11 @@
# Copyright (c) 2022-2023 Kyle Schouviller (https://github.com/kyle0654) and the InvokeAI Team
import asyncio
import sys
from inspect import signature
import logging
import uvicorn
import socket
from inspect import signature
from pathlib import Path
import uvicorn
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware
from fastapi.openapi.docs import get_redoc_html, get_swagger_ui_html
@ -14,24 +13,13 @@ from fastapi.openapi.utils import get_openapi
from fastapi.staticfiles import StaticFiles
from fastapi_events.handlers.local import local_handler
from fastapi_events.middleware import EventHandlerASGIMiddleware
from pathlib import Path
from pydantic.schema import schema
# This should come early so that modules can log their initialization properly
from .services.config import InvokeAIAppConfig
from ..backend.util.logging import InvokeAILogger
app_config = InvokeAIAppConfig.get_config()
app_config.parse_args()
logger = InvokeAILogger.getLogger(config=app_config)
from invokeai.version.invokeai_version import __version__
# we call this early so that the message appears before
# other invokeai initialization messages
if app_config.version:
print(f"InvokeAI version {__version__}")
sys.exit(0)
import invokeai.frontend.web as web_dir
import mimetypes
@ -40,12 +28,19 @@ from .api.routers import sessions, models, images, boards, board_images, app_inf
from .api.sockets import SocketIO
from .invocations.baseinvocation import BaseInvocation, _InputField, _OutputField, UIConfigBase
import torch
import invokeai.backend.util.hotfixes
# noinspection PyUnresolvedReferences
import invokeai.backend.util.hotfixes # noqa: F401 (monkeypatching on import)
if torch.backends.mps.is_available():
import invokeai.backend.util.mps_fixes
# noinspection PyUnresolvedReferences
import invokeai.backend.util.mps_fixes # noqa: F401 (monkeypatching on import)
app_config = InvokeAIAppConfig.get_config()
app_config.parse_args()
logger = InvokeAILogger.getLogger(config=app_config)
# fix for windows mimetypes registry entries being borked
# see https://github.com/invoke-ai/InvokeAI/discussions/3684#discussioncomment-6391352
@ -128,6 +123,7 @@ def custom_openapi():
output_schemas = schema(output_types, ref_prefix="#/components/schemas/")
for schema_key, output_schema in output_schemas["definitions"].items():
output_schema["class"] = "output"
openapi_schema["components"]["schemas"][schema_key] = output_schema
# TODO: note that we assume the schema_key here is the TYPE.__name__
@ -136,8 +132,8 @@ def custom_openapi():
# Add Node Editor UI helper schemas
ui_config_schemas = schema([UIConfigBase, _InputField, _OutputField], ref_prefix="#/components/schemas/")
for schema_key, output_schema in ui_config_schemas["definitions"].items():
openapi_schema["components"]["schemas"][schema_key] = output_schema
for schema_key, ui_config_schema in ui_config_schemas["definitions"].items():
openapi_schema["components"]["schemas"][schema_key] = ui_config_schema
# Add a reference to the output type to additionalProperties of the invoker schema
for invoker in all_invocations:
@ -146,8 +142,8 @@ def custom_openapi():
output_type_title = output_type_titles[output_type.__name__]
invoker_schema = openapi_schema["components"]["schemas"][invoker_name]
outputs_ref = {"$ref": f"#/components/schemas/{output_type_title}"}
invoker_schema["output"] = outputs_ref
invoker_schema["class"] = "invocation"
from invokeai.backend.model_management.models import get_model_config_enums
@ -213,6 +209,17 @@ def invoke_api():
check_invokeai_root(app_config) # note, may exit with an exception if root not set up
if app_config.dev_reload:
try:
import jurigged
except ImportError as e:
logger.error(
'Can\'t start `--dev_reload` because jurigged is not found; `pip install -e ".[dev]"` to include development dependencies.',
exc_info=e,
)
else:
jurigged.watch(logger=InvokeAILogger.getLogger(name="jurigged").info)
port = find_port(app_config.port)
if port != app_config.port:
logger.warn(f"Port {app_config.port} in use, using port {port}")
@ -230,13 +237,16 @@ def invoke_api():
# replace uvicorn's loggers with InvokeAI's for consistent appearance
for logname in ["uvicorn.access", "uvicorn"]:
l = logging.getLogger(logname)
l.handlers.clear()
log = logging.getLogger(logname)
log.handlers.clear()
for ch in logger.handlers:
l.addHandler(ch)
log.addHandler(ch)
loop.run_until_complete(server.serve())
if __name__ == "__main__":
invoke_api()
if app_config.version:
print(f"InvokeAI version {__version__}")
else:
invoke_api()

View File

@ -145,10 +145,10 @@ def set_autocompleter(services: InvocationServices) -> Completer:
completer = Completer(services.model_manager)
readline.set_completer(completer.complete)
# pyreadline3 does not have a set_auto_history() method
try:
readline.set_auto_history(True)
except:
except AttributeError:
# pyreadline3 does not have a set_auto_history() method
pass
readline.set_pre_input_hook(completer._pre_input_hook)
readline.set_completer_delims(" ")

View File

@ -13,16 +13,8 @@ from pydantic.fields import Field
# This should come early so that the logger can pick up its configuration options
from .services.config import InvokeAIAppConfig
from invokeai.backend.util.logging import InvokeAILogger
config = InvokeAIAppConfig.get_config()
config.parse_args()
logger = InvokeAILogger().getLogger(config=config)
from invokeai.version.invokeai_version import __version__
# we call this early so that the message appears before other invokeai initialization messages
if config.version:
print(f"InvokeAI version {__version__}")
sys.exit(0)
from invokeai.app.services.board_image_record_storage import (
SqliteBoardImageRecordStorage,
@ -62,10 +54,15 @@ from .services.processor import DefaultInvocationProcessor
from .services.sqlite import SqliteItemStorage
import torch
import invokeai.backend.util.hotfixes
import invokeai.backend.util.hotfixes # noqa: F401 (monkeypatching on import)
if torch.backends.mps.is_available():
import invokeai.backend.util.mps_fixes
import invokeai.backend.util.mps_fixes # noqa: F401 (monkeypatching on import)
config = InvokeAIAppConfig.get_config()
config.parse_args()
logger = InvokeAILogger().getLogger(config=config)
class CliCommand(BaseModel):
@ -482,4 +479,7 @@ def invoke_cli():
if __name__ == "__main__":
invoke_cli()
if config.version:
print(f"InvokeAI version {__version__}")
else:
invoke_cli()

View File

@ -2,15 +2,18 @@
from __future__ import annotations
import json
from abc import ABC, abstractmethod
from enum import Enum
from inspect import signature
import re
from typing import (
TYPE_CHECKING,
AbstractSet,
Any,
Callable,
ClassVar,
Literal,
Mapping,
Optional,
Type,
@ -20,14 +23,19 @@ from typing import (
get_type_hints,
)
from pydantic import BaseModel, Field
from pydantic.fields import Undefined
from pydantic import BaseModel, Field, validator
from pydantic.fields import Undefined, ModelField
from pydantic.typing import NoArgAnyCallable
import semver
if TYPE_CHECKING:
from ..services.invocation_services import InvocationServices
class InvalidVersionError(ValueError):
pass
class FieldDescriptions:
denoising_start = "When to start denoising, expressed a percentage of total steps"
denoising_end = "When to stop denoising, expressed a percentage of total steps"
@ -71,6 +79,9 @@ class FieldDescriptions:
safe_mode = "Whether or not to use safe mode"
scribble_mode = "Whether or not to use scribble mode"
scale_factor = "The factor by which to scale"
blend_alpha = (
"Blending factor. 0.0 = use input A only, 1.0 = use input B only, 0.5 = 50% mix of input A and input B."
)
num_1 = "The first number"
num_2 = "The second number"
mask = "The mask to use for the operation"
@ -99,24 +110,39 @@ class UIType(str, Enum):
"""
# region Primitives
Integer = "integer"
Float = "float"
Boolean = "boolean"
String = "string"
Array = "array"
Image = "ImageField"
Latents = "LatentsField"
Color = "ColorField"
Conditioning = "ConditioningField"
Control = "ControlField"
Color = "ColorField"
ImageCollection = "ImageCollection"
ConditioningCollection = "ConditioningCollection"
ColorCollection = "ColorCollection"
LatentsCollection = "LatentsCollection"
IntegerCollection = "IntegerCollection"
FloatCollection = "FloatCollection"
StringCollection = "StringCollection"
Float = "float"
Image = "ImageField"
Integer = "integer"
Latents = "LatentsField"
String = "string"
# endregion
# region Collection Primitives
BooleanCollection = "BooleanCollection"
ColorCollection = "ColorCollection"
ConditioningCollection = "ConditioningCollection"
ControlCollection = "ControlCollection"
FloatCollection = "FloatCollection"
ImageCollection = "ImageCollection"
IntegerCollection = "IntegerCollection"
LatentsCollection = "LatentsCollection"
StringCollection = "StringCollection"
# endregion
# region Polymorphic Primitives
BooleanPolymorphic = "BooleanPolymorphic"
ColorPolymorphic = "ColorPolymorphic"
ConditioningPolymorphic = "ConditioningPolymorphic"
ControlPolymorphic = "ControlPolymorphic"
FloatPolymorphic = "FloatPolymorphic"
ImagePolymorphic = "ImagePolymorphic"
IntegerPolymorphic = "IntegerPolymorphic"
LatentsPolymorphic = "LatentsPolymorphic"
StringPolymorphic = "StringPolymorphic"
# endregion
# region Models
@ -138,8 +164,11 @@ class UIType(str, Enum):
# endregion
# region Misc
FilePath = "FilePath"
Enum = "enum"
Scheduler = "Scheduler"
WorkflowField = "WorkflowField"
IsIntermediate = "IsIntermediate"
MetadataField = "MetadataField"
# endregion
@ -166,6 +195,8 @@ class _InputField(BaseModel):
ui_hidden: bool
ui_type: Optional[UIType]
ui_component: Optional[UIComponent]
ui_order: Optional[int]
item_default: Optional[Any]
class _OutputField(BaseModel):
@ -178,6 +209,7 @@ class _OutputField(BaseModel):
ui_hidden: bool
ui_type: Optional[UIType]
ui_order: Optional[int]
def InputField(
@ -211,6 +243,8 @@ def InputField(
ui_type: Optional[UIType] = None,
ui_component: Optional[UIComponent] = None,
ui_hidden: bool = False,
ui_order: Optional[int] = None,
item_default: Optional[Any] = None,
**kwargs: Any,
) -> Any:
"""
@ -230,13 +264,18 @@ def InputField(
Internally, there is no difference between SD-1, SD-2 and SDXL model fields, they all use \
`MainModelField`. So to ensure the base-model-specific UI is rendered, you can use \
`UIType.SDXLMainModelField` to indicate that the field is an SDXL main model field.
:param UIComponent ui_component: [None] Optionally specifies a specific component to use in the UI. \
The UI will always render a suitable component, but sometimes you want something different than the default. \
For example, a `string` field will default to a single-line input, but you may want a multi-line textarea instead. \
For this case, you could provide `UIComponent.Textarea`.
: param bool ui_hidden: [False] Specifies whether or not this field should be hidden in the UI.
: param int ui_order: [None] Specifies the order in which this field should be rendered in the UI. \
: param bool item_default: [None] Specifies the default item value, if this is a collection input. \
Ignored for non-collection fields..
"""
return Field(
*args,
@ -269,6 +308,8 @@ def InputField(
ui_type=ui_type,
ui_component=ui_component,
ui_hidden=ui_hidden,
ui_order=ui_order,
item_default=item_default,
**kwargs,
)
@ -302,6 +343,7 @@ def OutputField(
repr: bool = True,
ui_type: Optional[UIType] = None,
ui_hidden: bool = False,
ui_order: Optional[int] = None,
**kwargs: Any,
) -> Any:
"""
@ -318,6 +360,8 @@ def OutputField(
`UIType.SDXLMainModelField` to indicate that the field is an SDXL main model field.
: param bool ui_hidden: [False] Specifies whether or not this field should be hidden in the UI. \
: param int ui_order: [None] Specifies the order in which this field should be rendered in the UI. \
"""
return Field(
*args,
@ -348,6 +392,7 @@ def OutputField(
repr=repr,
ui_type=ui_type,
ui_hidden=ui_hidden,
ui_order=ui_order,
**kwargs,
)
@ -355,12 +400,15 @@ def OutputField(
class UIConfigBase(BaseModel):
"""
Provides additional node configuration to the UI.
This is used internally by the @tags and @title decorator logic. You probably want to use those
decorators, though you may add this class to a node definition to specify the title and tags.
This is used internally by the @invocation decorator logic. Do not use this directly.
"""
tags: Optional[list[str]] = Field(default_factory=None, description="The tags to display in the UI")
title: Optional[str] = Field(default=None, description="The display name of the node")
tags: Optional[list[str]] = Field(default_factory=None, description="The node's tags")
title: Optional[str] = Field(default=None, description="The node's display name")
category: Optional[str] = Field(default=None, description="The node's category")
version: Optional[str] = Field(
default=None, description='The node\'s version. Should be a valid semver string e.g. "1.0.0" or "3.8.13".'
)
class InvocationContext:
@ -373,10 +421,11 @@ class InvocationContext:
class BaseInvocationOutput(BaseModel):
"""Base class for all invocation outputs"""
"""
Base class for all invocation outputs.
# All outputs must include a type name like this:
# type: Literal['your_output_name']
All invocation outputs must use the `@invocation_output` decorator to provide their unique type.
"""
@classmethod
def get_all_subclasses_tuple(cls):
@ -389,6 +438,13 @@ class BaseInvocationOutput(BaseModel):
toprocess.extend(next_subclasses)
return tuple(subclasses)
class Config:
@staticmethod
def schema_extra(schema: dict[str, Any], model_class: Type[BaseModel]) -> None:
if "required" not in schema or not isinstance(schema["required"], list):
schema["required"] = list()
schema["required"].extend(["type"])
class RequiredConnectionException(Exception):
"""Raised when an field which requires a connection did not receive a value."""
@ -405,12 +461,12 @@ class MissingInputException(Exception):
class BaseInvocation(ABC, BaseModel):
"""A node to process inputs and produce outputs.
May use dependency injection in __init__ to receive providers.
"""
A node to process inputs and produce outputs.
May use dependency injection in __init__ to receive providers.
# All invocations must include a type name like this:
# type: Literal['your_output_name']
All invocations must use the `@invocation` decorator to provide their unique type.
"""
@classmethod
def get_all_subclasses(cls):
@ -449,6 +505,13 @@ class BaseInvocation(ABC, BaseModel):
schema["title"] = uiconfig.title
if uiconfig and hasattr(uiconfig, "tags"):
schema["tags"] = uiconfig.tags
if uiconfig and hasattr(uiconfig, "category"):
schema["category"] = uiconfig.category
if uiconfig and hasattr(uiconfig, "version"):
schema["version"] = uiconfig.version
if "required" not in schema or not isinstance(schema["required"], list):
schema["required"] = list()
schema["required"].extend(["type", "id"])
@abstractmethod
def invoke(self, context: InvocationContext) -> BaseInvocationOutput:
@ -485,37 +548,124 @@ class BaseInvocation(ABC, BaseModel):
raise MissingInputException(self.__fields__["type"].default, field_name)
return self.invoke(context)
id: str = InputField(description="The id of this node. Must be unique among all nodes.")
is_intermediate: bool = InputField(
default=False, description="Whether or not this node is an intermediate node.", input=Input.Direct
id: str = Field(
description="The id of this instance of an invocation. Must be unique among all instances of invocations."
)
is_intermediate: bool = InputField(
default=False, description="Whether or not this is an intermediate invocation.", ui_type=UIType.IsIntermediate
)
workflow: Optional[str] = InputField(
default=None,
description="The workflow to save with the image",
ui_type=UIType.WorkflowField,
)
@validator("workflow", pre=True)
def validate_workflow_is_json(cls, v):
if v is None:
return None
try:
json.loads(v)
except json.decoder.JSONDecodeError:
raise ValueError("Workflow must be valid JSON")
return v
UIConfig: ClassVar[Type[UIConfigBase]]
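The workflow pre-validator above only checks that the stored workflow text is syntactically valid JSON before it is persisted with the image. A minimal sketch of the same guard in isolation (the sample strings are hypothetical):

    import json

    # Valid JSON text passes the check the validator performs.
    json.loads('{"nodes": [], "edges": []}')

    # Non-JSON text raises json.decoder.JSONDecodeError, which the validator converts to a ValueError.
    try:
        json.loads("not-a-workflow")
    except json.decoder.JSONDecodeError as e:
        print(f"rejected: {e}")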
T = TypeVar("T", bound=BaseInvocation)
GenericBaseInvocation = TypeVar("GenericBaseInvocation", bound=BaseInvocation)
def title(title: str) -> Callable[[Type[T]], Type[T]]:
"""Adds a title to the invocation. Use this to override the default title generation, which is based on the class name."""
def invocation(
invocation_type: str,
title: Optional[str] = None,
tags: Optional[list[str]] = None,
category: Optional[str] = None,
version: Optional[str] = None,
) -> Callable[[Type[GenericBaseInvocation]], Type[GenericBaseInvocation]]:
"""
Adds metadata to an invocation.
def wrapper(cls: Type[T]) -> Type[T]:
:param str invocation_type: The type of the invocation. Must be unique among all invocations.
:param Optional[str] title: Adds a title to the invocation. Use if the auto-generated title isn't quite right. Defaults to None.
:param Optional[list[str]] tags: Adds tags to the invocation. Invocations may be searched for by their tags. Defaults to None.
:param Optional[str] category: Adds a category to the invocation. Used to group the invocations in the UI. Defaults to None.
:param Optional[str] version: Adds a semver version string to the invocation (e.g. "1.0.0"), validated with `semver.Version.parse`. Defaults to None.
"""
def wrapper(cls: Type[GenericBaseInvocation]) -> Type[GenericBaseInvocation]:
# Validate invocation types on creation of invocation classes
# TODO: ensure unique?
if re.compile(r"^\S+$").match(invocation_type) is None:
raise ValueError(f'"invocation_type" must consist of non-whitespace characters, got "{invocation_type}"')
# Add OpenAPI schema extras
uiconf_name = cls.__qualname__ + ".UIConfig"
if not hasattr(cls, "UIConfig") or cls.UIConfig.__qualname__ != uiconf_name:
cls.UIConfig = type(uiconf_name, (UIConfigBase,), dict())
cls.UIConfig.title = title
if title is not None:
cls.UIConfig.title = title
if tags is not None:
cls.UIConfig.tags = tags
if category is not None:
cls.UIConfig.category = category
if version is not None:
try:
semver.Version.parse(version)
except ValueError as e:
raise InvalidVersionError(f'Invalid version string for node "{invocation_type}": "{version}"') from e
cls.UIConfig.version = version
# Add the invocation type to the pydantic model of the invocation
invocation_type_annotation = Literal[invocation_type] # type: ignore
invocation_type_field = ModelField.infer(
name="type",
value=invocation_type,
annotation=invocation_type_annotation,
class_validators=None,
config=cls.__config__,
)
cls.__fields__.update({"type": invocation_type_field})
# to support 3.9, 3.10 and 3.11, as described in https://docs.python.org/3/howto/annotations.html
if annotations := cls.__dict__.get("__annotations__", None):
annotations.update({"type": invocation_type_annotation})
return cls
return wrapper
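A minimal sketch of how a node author might apply the decorator defined above. The "example_add" type, the field names, and the use of IntegerOutput (the integer primitive output from invokeai.app.invocations.primitives) are illustrative assumptions, not part of this PR; the shape follows the nodes shown later in this diff (e.g. RangeInvocation).

    from invokeai.app.invocations.primitives import IntegerOutput

    @invocation("example_add", title="Add Two Integers", tags=["math", "example"], category="math", version="1.0.0")
    class ExampleAddInvocation(BaseInvocation):
        """Adds two integers (illustrative node only)."""

        a: int = InputField(default=0, description="The first addend")
        b: int = InputField(default=0, description="The second addend")

        def invoke(self, context: InvocationContext) -> IntegerOutput:
            # The decorator has already injected type: Literal["example_add"] and the UIConfig metadata.
            return IntegerOutput(value=self.a + self.b)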
def tags(*tags: str) -> Callable[[Type[T]], Type[T]]:
"""Adds tags to the invocation. Use this to improve the streamline finding the invocation in the UI."""
GenericBaseInvocationOutput = TypeVar("GenericBaseInvocationOutput", bound=BaseInvocationOutput)
def invocation_output(
output_type: str,
) -> Callable[[Type[GenericBaseInvocationOutput]], Type[GenericBaseInvocationOutput]]:
"""
Adds metadata to an invocation output.
:param str output_type: The type of the invocation output. Must be unique among all invocation outputs.
"""
def wrapper(cls: Type[GenericBaseInvocationOutput]) -> Type[GenericBaseInvocationOutput]:
# Validate output types on creation of invocation output classes
# TODO: ensure unique?
if re.compile(r"^\S+$").match(output_type) is None:
raise ValueError(f'"output_type" must consist of non-whitespace characters, got "{output_type}"')
# Add the output type to the pydantic model of the invocation output
output_type_annotation = Literal[output_type] # type: ignore
output_type_field = ModelField.infer(
name="type",
value=output_type,
annotation=output_type_annotation,
class_validators=None,
config=cls.__config__,
)
cls.__fields__.update({"type": output_type_field})
# to support 3.9, 3.10 and 3.11, as described in https://docs.python.org/3/howto/annotations.html
if annotations := cls.__dict__.get("__annotations__", None):
annotations.update({"type": output_type_annotation})
def wrapper(cls: Type[T]) -> Type[T]:
uiconf_name = cls.__qualname__ + ".UIConfig"
if not hasattr(cls, "UIConfig") or cls.UIConfig.__qualname__ != uiconf_name:
cls.UIConfig = type(uiconf_name, (UIConfigBase,), dict())
cls.UIConfig.tags = list(tags)
return cls
return wrapper
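Similarly, a hedged sketch of defining a custom output class with the decorator above, pairing with the ExampleAddInvocation sketch earlier; the "example_sum_output" type and its field are hypothetical.

    @invocation_output("example_sum_output")
    class ExampleSumOutput(BaseInvocationOutput):
        """Illustrative output carrying a single integer total."""

        total: int = OutputField(description="The computed total")
        # After decoration, the class also carries type: Literal["example_sum_output"],
        # so the graph can route this output by its unique type string.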

View File

@ -1,24 +1,21 @@
# Copyright (c) 2023 Kyle Schouviller (https://github.com/kyle0654) and the InvokeAI Team
from typing import Literal
import numpy as np
from pydantic import validator
from invokeai.app.invocations.primitives import ImageCollectionOutput, ImageField, IntegerCollectionOutput
from invokeai.app.invocations.primitives import IntegerCollectionOutput
from invokeai.app.util.misc import SEED_MAX, get_random_seed
from .baseinvocation import BaseInvocation, InputField, InvocationContext, UIType, tags, title
from .baseinvocation import BaseInvocation, InputField, InvocationContext, invocation
@title("Integer Range")
@tags("collection", "integer", "range")
@invocation(
"range", title="Integer Range", tags=["collection", "integer", "range"], category="collections", version="1.0.0"
)
class RangeInvocation(BaseInvocation):
"""Creates a range of numbers from start to stop with step"""
type: Literal["range"] = "range"
# Inputs
start: int = InputField(default=0, description="The start of the range")
stop: int = InputField(default=10, description="The stop of the range")
step: int = InputField(default=1, description="The step of the range")
@ -33,14 +30,16 @@ class RangeInvocation(BaseInvocation):
return IntegerCollectionOutput(collection=list(range(self.start, self.stop, self.step)))
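For reference, a small sketch of what the collection above contains for typical inputs (plain Python, no graph execution involved):

    start, stop, step = 0, 10, 2
    collection = list(range(start, stop, step))
    assert collection == [0, 2, 4, 6, 8]  # what IntegerCollectionOutput.collection would hold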
@title("Integer Range of Size")
@tags("range", "integer", "size", "collection")
@invocation(
"range_of_size",
title="Integer Range of Size",
tags=["collection", "integer", "size", "range"],
category="collections",
version="1.0.0",
)
class RangeOfSizeInvocation(BaseInvocation):
"""Creates a range from start to start + size with step"""
type: Literal["range_of_size"] = "range_of_size"
# Inputs
start: int = InputField(default=0, description="The start of the range")
size: int = InputField(default=1, description="The number of values")
step: int = InputField(default=1, description="The step of the range")
@ -49,14 +48,16 @@ class RangeOfSizeInvocation(BaseInvocation):
return IntegerCollectionOutput(collection=list(range(self.start, self.start + self.size, self.step)))
@title("Random Range")
@tags("range", "integer", "random", "collection")
@invocation(
"random_range",
title="Random Range",
tags=["range", "integer", "random", "collection"],
category="collections",
version="1.0.0",
)
class RandomRangeInvocation(BaseInvocation):
"""Creates a collection of random numbers"""
type: Literal["random_range"] = "random_range"
# Inputs
low: int = InputField(default=0, description="The inclusive low value")
high: int = InputField(default=np.iinfo(np.int32).max, description="The exclusive high value")
size: int = InputField(default=1, description="The number of values to generate")

View File

@ -1,6 +1,6 @@
import re
from dataclasses import dataclass
from typing import List, Literal, Union
from typing import List, Union
import torch
from compel import Compel, ReturnedEmbeddingsType
@ -12,7 +12,7 @@ from invokeai.backend.stable_diffusion.diffusion.shared_invokeai_diffusion impor
SDXLConditioningInfo,
)
from ...backend.model_management import ModelPatcher, ModelType
from ...backend.model_management.models import ModelType
from ...backend.model_management.lora import ModelPatcher
from ...backend.model_management.models import ModelNotFoundException
from ...backend.stable_diffusion.diffusion import InvokeAIDiffuserComponent
@ -26,8 +26,8 @@ from .baseinvocation import (
InvocationContext,
OutputField,
UIComponent,
tags,
title,
invocation,
invocation_output,
)
from .model import ClipField
@ -44,13 +44,10 @@ class ConditioningFieldData:
# PerpNeg = "perp_neg"
@title("Compel Prompt")
@tags("prompt", "compel")
@invocation("compel", title="Prompt", tags=["prompt", "compel"], category="conditioning", version="1.0.0")
class CompelInvocation(BaseInvocation):
"""Parse prompt using compel package to conditioning."""
type: Literal["compel"] = "compel"
prompt: str = InputField(
default="",
description=FieldDescriptions.compel_prompt,
@ -116,16 +113,15 @@ class CompelInvocation(BaseInvocation):
text_encoder=text_encoder,
textual_inversion_manager=ti_manager,
dtype_for_device_getter=torch_dtype,
truncate_long_prompts=True,
truncate_long_prompts=False,
)
conjunction = Compel.parse_prompt_string(self.prompt)
prompt: Union[FlattenedPrompt, Blend] = conjunction.prompts[0]
if context.services.configuration.log_tokenization:
log_tokenization_for_prompt_object(prompt, tokenizer)
log_tokenization_for_conjunction(conjunction, tokenizer)
c, options = compel.build_conditioning_tensor_for_prompt_object(prompt)
c, options = compel.build_conditioning_tensor_for_conjunction(conjunction)
ec = InvokeAIDiffuserComponent.ExtraConditioningInfo(
tokens_count_including_eos_bos=get_max_token_count(tokenizer, conjunction),
@ -231,17 +227,16 @@ class SDXLPromptInvocationBase:
text_encoder=text_encoder,
textual_inversion_manager=ti_manager,
dtype_for_device_getter=torch_dtype,
truncate_long_prompts=True, # TODO:
truncate_long_prompts=False, # TODO:
returned_embeddings_type=ReturnedEmbeddingsType.PENULTIMATE_HIDDEN_STATES_NON_NORMALIZED, # TODO: clip skip
requires_pooled=True,
requires_pooled=get_pooled,
)
conjunction = Compel.parse_prompt_string(prompt)
if context.services.configuration.log_tokenization:
# TODO: better logging for 'and' syntax
for prompt_obj in conjunction.prompts:
log_tokenization_for_prompt_object(prompt_obj, tokenizer)
log_tokenization_for_conjunction(conjunction, tokenizer)
# TODO: ask for optimizations? to not run text_encoder twice
c, options = compel.build_conditioning_tensor_for_conjunction(conjunction)
@ -267,13 +262,16 @@ class SDXLPromptInvocationBase:
return c, c_pooled, ec
@title("SDXL Compel Prompt")
@tags("sdxl", "compel", "prompt")
@invocation(
"sdxl_compel_prompt",
title="SDXL Prompt",
tags=["sdxl", "compel", "prompt"],
category="conditioning",
version="1.0.0",
)
class SDXLCompelPromptInvocation(BaseInvocation, SDXLPromptInvocationBase):
"""Parse prompt using compel package to conditioning."""
type: Literal["sdxl_compel_prompt"] = "sdxl_compel_prompt"
prompt: str = InputField(default="", description=FieldDescriptions.compel_prompt, ui_component=UIComponent.Textarea)
style: str = InputField(default="", description=FieldDescriptions.compel_prompt, ui_component=UIComponent.Textarea)
original_width: int = InputField(default=1024, description="")
@ -282,8 +280,8 @@ class SDXLCompelPromptInvocation(BaseInvocation, SDXLPromptInvocationBase):
crop_left: int = InputField(default=0, description="")
target_width: int = InputField(default=1024, description="")
target_height: int = InputField(default=1024, description="")
clip: ClipField = InputField(description=FieldDescriptions.clip, input=Input.Connection)
clip2: ClipField = InputField(description=FieldDescriptions.clip, input=Input.Connection)
clip: ClipField = InputField(description=FieldDescriptions.clip, input=Input.Connection, title="CLIP 1")
clip2: ClipField = InputField(description=FieldDescriptions.clip, input=Input.Connection, title="CLIP 2")
@torch.no_grad()
def invoke(self, context: InvocationContext) -> ConditioningOutput:
@ -305,6 +303,29 @@ class SDXLCompelPromptInvocation(BaseInvocation, SDXLPromptInvocationBase):
add_time_ids = torch.tensor([original_size + crop_coords + target_size])
# [1, 77, 768], [1, 154, 1280]
if c1.shape[1] < c2.shape[1]:
c1 = torch.cat(
[
c1,
torch.zeros(
(c1.shape[0], c2.shape[1] - c1.shape[1], c1.shape[2]), device=c1.device, dtype=c1.dtype
),
],
dim=1,
)
elif c1.shape[1] > c2.shape[1]:
c2 = torch.cat(
[
c2,
torch.zeros(
(c2.shape[0], c1.shape[1] - c2.shape[1], c2.shape[2]), device=c2.device, dtype=c2.dtype
),
],
dim=1,
)
conditioning_data = ConditioningFieldData(
conditionings=[
SDXLConditioningInfo(
@ -326,13 +347,16 @@ class SDXLCompelPromptInvocation(BaseInvocation, SDXLPromptInvocationBase):
)
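The if/elif block above zero-pads the shorter of the two conditioning tensors along the token dimension so they can be combined; a minimal standalone sketch with dummy tensors, shapes following the `[1, 77, 768]` / `[1, 154, 1280]` comment in the diff:

    import torch

    c1 = torch.randn(1, 77, 768)      # e.g. CLIP 1 conditioning
    c2 = torch.randn(1, 154, 1280)    # e.g. CLIP 2 conditioning

    if c1.shape[1] < c2.shape[1]:
        pad = torch.zeros((c1.shape[0], c2.shape[1] - c1.shape[1], c1.shape[2]), device=c1.device, dtype=c1.dtype)
        c1 = torch.cat([c1, pad], dim=1)
    elif c1.shape[1] > c2.shape[1]:
        pad = torch.zeros((c2.shape[0], c1.shape[1] - c2.shape[1], c2.shape[2]), device=c2.device, dtype=c2.dtype)
        c2 = torch.cat([c2, pad], dim=1)

    assert c1.shape[1] == c2.shape[1]  # both now 154 tokens long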
@title("SDXL Refiner Compel Prompt")
@tags("sdxl", "compel", "prompt")
@invocation(
"sdxl_refiner_compel_prompt",
title="SDXL Refiner Prompt",
tags=["sdxl", "compel", "prompt"],
category="conditioning",
version="1.0.0",
)
class SDXLRefinerCompelPromptInvocation(BaseInvocation, SDXLPromptInvocationBase):
"""Parse prompt using compel package to conditioning."""
type: Literal["sdxl_refiner_compel_prompt"] = "sdxl_refiner_compel_prompt"
style: str = InputField(
default="", description=FieldDescriptions.compel_prompt, ui_component=UIComponent.Textarea
) # TODO: ?
@ -374,20 +398,17 @@ class SDXLRefinerCompelPromptInvocation(BaseInvocation, SDXLPromptInvocationBase
)
@invocation_output("clip_skip_output")
class ClipSkipInvocationOutput(BaseInvocationOutput):
"""Clip skip node output"""
type: Literal["clip_skip_output"] = "clip_skip_output"
clip: ClipField = OutputField(default=None, description=FieldDescriptions.clip, title="CLIP")
@title("CLIP Skip")
@tags("clipskip", "clip", "skip")
@invocation("clip_skip", title="CLIP Skip", tags=["clipskip", "clip", "skip"], category="conditioning", version="1.0.0")
class ClipSkipInvocation(BaseInvocation):
"""Skip layers in clip text_encoder model."""
type: Literal["clip_skip"] = "clip_skip"
clip: ClipField = InputField(description=FieldDescriptions.clip, input=Input.Connection, title="CLIP")
skipped_layers: int = InputField(default=0, description=FieldDescriptions.skipped_layers)

View File

@ -29,7 +29,7 @@ from pydantic import BaseModel, Field, validator
from invokeai.app.invocations.primitives import ImageField, ImageOutput
from ...backend.model_management import BaseModelType, ModelType
from ...backend.model_management import BaseModelType
from ..models.image import ImageCategory, ResourceOrigin
from .baseinvocation import (
BaseInvocation,
@ -40,8 +40,8 @@ from .baseinvocation import (
InvocationContext,
OutputField,
UIType,
tags,
title,
invocation,
invocation_output,
)
@ -87,27 +87,20 @@ class ControlField(BaseModel):
return v
@invocation_output("control_output")
class ControlOutput(BaseInvocationOutput):
"""node output for ControlNet info"""
type: Literal["control_output"] = "control_output"
# Outputs
control: ControlField = OutputField(description=FieldDescriptions.control)
@title("ControlNet")
@tags("controlnet")
@invocation("controlnet", title="ControlNet", tags=["controlnet"], category="controlnet", version="1.0.0")
class ControlNetInvocation(BaseInvocation):
"""Collects ControlNet info to pass to other nodes"""
type: Literal["controlnet"] = "controlnet"
# Inputs
image: ImageField = InputField(description="The control image")
control_model: ControlNetModelField = InputField(
default="lllyasviel/sd-controlnet-canny", description=FieldDescriptions.controlnet_model, input=Input.Direct
)
control_model: ControlNetModelField = InputField(description=FieldDescriptions.controlnet_model, input=Input.Direct)
control_weight: Union[float, List[float]] = InputField(
default=1.0, description="The weight given to the ControlNet", ui_type=UIType.Float
)
@ -134,12 +127,12 @@ class ControlNetInvocation(BaseInvocation):
)
@invocation(
"image_processor", title="Base Image Processor", tags=["controlnet"], category="controlnet", version="1.0.0"
)
class ImageProcessorInvocation(BaseInvocation):
"""Base class for invocations that preprocess images for ControlNet"""
type: Literal["image_processor"] = "image_processor"
# Inputs
image: ImageField = InputField(description="The image to process")
def run_processor(self, image):
@ -151,11 +144,6 @@ class ImageProcessorInvocation(BaseInvocation):
# image type should be PIL.PngImagePlugin.PngImageFile ?
processed_image = self.run_processor(raw_image)
# FIXME: what happened to image metadata?
# metadata = context.services.metadata.build_metadata(
# session_id=context.graph_execution_state_id, node=self
# )
# currently can't see processed image in node UI without a showImage node,
# so for now setting image_type to RESULT instead of INTERMEDIATE so will get saved in gallery
image_dto = context.services.images.create(
@ -165,6 +153,7 @@ class ImageProcessorInvocation(BaseInvocation):
session_id=context.graph_execution_state_id,
node_id=self.id,
is_intermediate=self.is_intermediate,
workflow=self.workflow,
)
"""Builds an ImageOutput and its ImageField"""
@ -179,14 +168,16 @@ class ImageProcessorInvocation(BaseInvocation):
)
@title("Canny Processor")
@tags("controlnet", "canny")
@invocation(
"canny_image_processor",
title="Canny Processor",
tags=["controlnet", "canny"],
category="controlnet",
version="1.0.0",
)
class CannyImageProcessorInvocation(ImageProcessorInvocation):
"""Canny edge detection for ControlNet"""
type: Literal["canny_image_processor"] = "canny_image_processor"
# Input
low_threshold: int = InputField(
default=100, ge=0, le=255, description="The low threshold of the Canny pixel gradient (0-255)"
)
@ -200,14 +191,16 @@ class CannyImageProcessorInvocation(ImageProcessorInvocation):
return processed_image
@title("HED (softedge) Processor")
@tags("controlnet", "hed", "softedge")
@invocation(
"hed_image_processor",
title="HED (softedge) Processor",
tags=["controlnet", "hed", "softedge"],
category="controlnet",
version="1.0.0",
)
class HedImageProcessorInvocation(ImageProcessorInvocation):
"""Applies HED edge detection to image"""
type: Literal["hed_image_processor"] = "hed_image_processor"
# Inputs
detect_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)
# safe not supported in controlnet_aux v0.0.3
@ -227,14 +220,16 @@ class HedImageProcessorInvocation(ImageProcessorInvocation):
return processed_image
@title("Lineart Processor")
@tags("controlnet", "lineart")
@invocation(
"lineart_image_processor",
title="Lineart Processor",
tags=["controlnet", "lineart"],
category="controlnet",
version="1.0.0",
)
class LineartImageProcessorInvocation(ImageProcessorInvocation):
"""Applies line art processing to image"""
type: Literal["lineart_image_processor"] = "lineart_image_processor"
# Inputs
detect_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)
coarse: bool = InputField(default=False, description="Whether to use coarse mode")
@ -247,14 +242,16 @@ class LineartImageProcessorInvocation(ImageProcessorInvocation):
return processed_image
@title("Lineart Anime Processor")
@tags("controlnet", "lineart", "anime")
@invocation(
"lineart_anime_image_processor",
title="Lineart Anime Processor",
tags=["controlnet", "lineart", "anime"],
category="controlnet",
version="1.0.0",
)
class LineartAnimeImageProcessorInvocation(ImageProcessorInvocation):
"""Applies line art anime processing to image"""
type: Literal["lineart_anime_image_processor"] = "lineart_anime_image_processor"
# Inputs
detect_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)
@ -268,14 +265,16 @@ class LineartAnimeImageProcessorInvocation(ImageProcessorInvocation):
return processed_image
@title("Openpose Processor")
@tags("controlnet", "openpose", "pose")
@invocation(
"openpose_image_processor",
title="Openpose Processor",
tags=["controlnet", "openpose", "pose"],
category="controlnet",
version="1.0.0",
)
class OpenposeImageProcessorInvocation(ImageProcessorInvocation):
"""Applies Openpose processing to image"""
type: Literal["openpose_image_processor"] = "openpose_image_processor"
# Inputs
hand_and_face: bool = InputField(default=False, description="Whether to use hands and face mode")
detect_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)
@ -291,14 +290,16 @@ class OpenposeImageProcessorInvocation(ImageProcessorInvocation):
return processed_image
@title("Midas (Depth) Processor")
@tags("controlnet", "midas", "depth")
@invocation(
"midas_depth_image_processor",
title="Midas Depth Processor",
tags=["controlnet", "midas"],
category="controlnet",
version="1.0.0",
)
class MidasDepthImageProcessorInvocation(ImageProcessorInvocation):
"""Applies Midas depth processing to image"""
type: Literal["midas_depth_image_processor"] = "midas_depth_image_processor"
# Inputs
a_mult: float = InputField(default=2.0, ge=0, description="Midas parameter `a_mult` (a = a_mult * PI)")
bg_th: float = InputField(default=0.1, ge=0, description="Midas parameter `bg_th`")
# depth_and_normal not supported in controlnet_aux v0.0.3
@ -316,14 +317,16 @@ class MidasDepthImageProcessorInvocation(ImageProcessorInvocation):
return processed_image
@title("Normal BAE Processor")
@tags("controlnet", "normal", "bae")
@invocation(
"normalbae_image_processor",
title="Normal BAE Processor",
tags=["controlnet"],
category="controlnet",
version="1.0.0",
)
class NormalbaeImageProcessorInvocation(ImageProcessorInvocation):
"""Applies NormalBae processing to image"""
type: Literal["normalbae_image_processor"] = "normalbae_image_processor"
# Inputs
detect_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)
@ -335,14 +338,12 @@ class NormalbaeImageProcessorInvocation(ImageProcessorInvocation):
return processed_image
@title("MLSD Processor")
@tags("controlnet", "mlsd")
@invocation(
"mlsd_image_processor", title="MLSD Processor", tags=["controlnet", "mlsd"], category="controlnet", version="1.0.0"
)
class MlsdImageProcessorInvocation(ImageProcessorInvocation):
"""Applies MLSD processing to image"""
type: Literal["mlsd_image_processor"] = "mlsd_image_processor"
# Inputs
detect_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)
thr_v: float = InputField(default=0.1, ge=0, description="MLSD parameter `thr_v`")
@ -360,14 +361,12 @@ class MlsdImageProcessorInvocation(ImageProcessorInvocation):
return processed_image
@title("PIDI Processor")
@tags("controlnet", "pidi")
@invocation(
"pidi_image_processor", title="PIDI Processor", tags=["controlnet", "pidi"], category="controlnet", version="1.0.0"
)
class PidiImageProcessorInvocation(ImageProcessorInvocation):
"""Applies PIDI processing to image"""
type: Literal["pidi_image_processor"] = "pidi_image_processor"
# Inputs
detect_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)
safe: bool = InputField(default=False, description=FieldDescriptions.safe_mode)
@ -385,14 +384,16 @@ class PidiImageProcessorInvocation(ImageProcessorInvocation):
return processed_image
@title("Content Shuffle Processor")
@tags("controlnet", "contentshuffle")
@invocation(
"content_shuffle_image_processor",
title="Content Shuffle Processor",
tags=["controlnet", "contentshuffle"],
category="controlnet",
version="1.0.0",
)
class ContentShuffleImageProcessorInvocation(ImageProcessorInvocation):
"""Applies content shuffle processing to image"""
type: Literal["content_shuffle_image_processor"] = "content_shuffle_image_processor"
# Inputs
detect_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)
h: Optional[int] = InputField(default=512, ge=0, description="Content shuffle `h` parameter")
@ -413,27 +414,32 @@ class ContentShuffleImageProcessorInvocation(ImageProcessorInvocation):
# should work with controlnet_aux >= 0.0.4 and timm <= 0.6.13
@title("Zoe (Depth) Processor")
@tags("controlnet", "zoe", "depth")
@invocation(
"zoe_depth_image_processor",
title="Zoe (Depth) Processor",
tags=["controlnet", "zoe", "depth"],
category="controlnet",
version="1.0.0",
)
class ZoeDepthImageProcessorInvocation(ImageProcessorInvocation):
"""Applies Zoe depth processing to image"""
type: Literal["zoe_depth_image_processor"] = "zoe_depth_image_processor"
def run_processor(self, image):
zoe_depth_processor = ZoeDetector.from_pretrained("lllyasviel/Annotators")
processed_image = zoe_depth_processor(image)
return processed_image
@title("Mediapipe Face Processor")
@tags("controlnet", "mediapipe", "face")
@invocation(
"mediapipe_face_processor",
title="Mediapipe Face Processor",
tags=["controlnet", "mediapipe", "face"],
category="controlnet",
version="1.0.0",
)
class MediapipeFaceProcessorInvocation(ImageProcessorInvocation):
"""Applies mediapipe face processing to image"""
type: Literal["mediapipe_face_processor"] = "mediapipe_face_processor"
# Inputs
max_faces: int = InputField(default=1, ge=1, description="Maximum number of faces to detect")
min_confidence: float = InputField(default=0.5, ge=0, le=1, description="Minimum confidence for face detection")
@ -447,14 +453,16 @@ class MediapipeFaceProcessorInvocation(ImageProcessorInvocation):
return processed_image
@title("Leres (Depth) Processor")
@tags("controlnet", "leres", "depth")
@invocation(
"leres_image_processor",
title="Leres (Depth) Processor",
tags=["controlnet", "leres", "depth"],
category="controlnet",
version="1.0.0",
)
class LeresImageProcessorInvocation(ImageProcessorInvocation):
"""Applies leres processing to image"""
type: Literal["leres_image_processor"] = "leres_image_processor"
# Inputs
thr_a: float = InputField(default=0, description="Leres parameter `thr_a`")
thr_b: float = InputField(default=0, description="Leres parameter `thr_b`")
boost: bool = InputField(default=False, description="Whether to use boost mode")
@ -474,14 +482,16 @@ class LeresImageProcessorInvocation(ImageProcessorInvocation):
return processed_image
@title("Tile Resample Processor")
@tags("controlnet", "tile")
@invocation(
"tile_image_processor",
title="Tile Resample Processor",
tags=["controlnet", "tile"],
category="controlnet",
version="1.0.0",
)
class TileResamplerProcessorInvocation(ImageProcessorInvocation):
"""Tile resampler processor"""
type: Literal["tile_image_processor"] = "tile_image_processor"
# Inputs
# res: int = InputField(default=512, ge=0, le=1024, description="The pixel resolution for each tile")
down_sampling_rate: float = InputField(default=1.0, ge=1.0, le=8.0, description="Down sampling rate")
@ -512,13 +522,16 @@ class TileResamplerProcessorInvocation(ImageProcessorInvocation):
return processed_image
@title("Segment Anything Processor")
@tags("controlnet", "segmentanything")
@invocation(
"segment_anything_processor",
title="Segment Anything Processor",
tags=["controlnet", "segmentanything"],
category="controlnet",
version="1.0.0",
)
class SegmentAnythingProcessorInvocation(ImageProcessorInvocation):
"""Applies segment anything processing to image"""
type: Literal["segment_anything_processor"] = "segment_anything_processor"
def run_processor(self, image):
# segment_anything_processor = SamDetector.from_pretrained("ybelkada/segment-anything", subfolder="checkpoints")
segment_anything_processor = SamDetectorReproducibleColors.from_pretrained(

View File

@ -1,6 +1,5 @@
# Copyright (c) 2022 Kyle Schouviller (https://github.com/kyle0654)
from typing import Literal
import cv2 as cv
import numpy
@ -8,17 +7,13 @@ from PIL import Image, ImageOps
from invokeai.app.invocations.primitives import ImageField, ImageOutput
from invokeai.app.models.image import ImageCategory, ResourceOrigin
from .baseinvocation import BaseInvocation, InputField, InvocationContext, tags, title
from .baseinvocation import BaseInvocation, InputField, InvocationContext, invocation
@title("OpenCV Inpaint")
@tags("opencv", "inpaint")
@invocation("cv_inpaint", title="OpenCV Inpaint", tags=["opencv", "inpaint"], category="inpaint", version="1.0.0")
class CvInpaintInvocation(BaseInvocation):
"""Simple inpaint using opencv."""
type: Literal["cv_inpaint"] = "cv_inpaint"
# Inputs
image: ImageField = InputField(description="The image to inpaint")
mask: ImageField = InputField(description="The mask to use when inpainting")
@ -45,6 +40,7 @@ class CvInpaintInvocation(BaseInvocation):
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
workflow=self.workflow,
)
return ImageOutput(

View File

@ -8,23 +8,18 @@ import numpy
from PIL import Image, ImageChops, ImageFilter, ImageOps
from invokeai.app.invocations.metadata import CoreMetadata
from invokeai.app.invocations.primitives import ImageField, ImageOutput
from invokeai.app.invocations.primitives import ColorField, ImageField, ImageOutput
from invokeai.backend.image_util.invisible_watermark import InvisibleWatermark
from invokeai.backend.image_util.safety_checker import SafetyChecker
from ..models.image import ImageCategory, ResourceOrigin
from .baseinvocation import BaseInvocation, FieldDescriptions, InputField, InvocationContext, tags, title
from .baseinvocation import BaseInvocation, FieldDescriptions, InputField, InvocationContext, invocation
@title("Show Image")
@tags("image")
@invocation("show_image", title="Show Image", tags=["image"], category="image", version="1.0.0")
class ShowImageInvocation(BaseInvocation):
"""Displays a provided image, and passes it forward in the pipeline."""
"""Displays a provided image using the OS image viewer, and passes it forward in the pipeline."""
# Metadata
type: Literal["show_image"] = "show_image"
# Inputs
image: ImageField = InputField(description="The image to show")
def invoke(self, context: InvocationContext) -> ImageOutput:
@ -41,15 +36,39 @@ class ShowImageInvocation(BaseInvocation):
)
@title("Crop Image")
@tags("image", "crop")
@invocation("blank_image", title="Blank Image", tags=["image"], category="image", version="1.0.0")
class BlankImageInvocation(BaseInvocation):
"""Creates a blank image and forwards it to the pipeline"""
width: int = InputField(default=512, description="The width of the image")
height: int = InputField(default=512, description="The height of the image")
mode: Literal["RGB", "RGBA"] = InputField(default="RGB", description="The mode of the image")
color: ColorField = InputField(default=ColorField(r=0, g=0, b=0, a=255), description="The color of the image")
def invoke(self, context: InvocationContext) -> ImageOutput:
image = Image.new(mode=self.mode, size=(self.width, self.height), color=self.color.tuple())
image_dto = context.services.images.create(
image=image,
image_origin=ResourceOrigin.INTERNAL,
image_category=ImageCategory.GENERAL,
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
workflow=self.workflow,
)
return ImageOutput(
image=ImageField(image_name=image_dto.image_name),
width=image_dto.width,
height=image_dto.height,
)
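A small sketch of the PIL call at the core of BlankImageInvocation above, assuming ColorField.tuple() returns an (r, g, b, a) tuple and using the RGBA mode option:

    from PIL import Image

    # 512x512 opaque black canvas; mode, size and color mirror the node's inputs.
    blank = Image.new(mode="RGBA", size=(512, 512), color=(0, 0, 0, 255))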
@invocation("img_crop", title="Crop Image", tags=["image", "crop"], category="image", version="1.0.0")
class ImageCropInvocation(BaseInvocation):
"""Crops an image to a specified box. The box can be outside of the image."""
# Metadata
type: Literal["img_crop"] = "img_crop"
# Inputs
image: ImageField = InputField(description="The image to crop")
x: int = InputField(default=0, description="The left x coordinate of the crop rectangle")
y: int = InputField(default=0, description="The top y coordinate of the crop rectangle")
@ -69,6 +88,7 @@ class ImageCropInvocation(BaseInvocation):
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
workflow=self.workflow,
)
return ImageOutput(
@ -78,15 +98,10 @@ class ImageCropInvocation(BaseInvocation):
)
@title("Paste Image")
@tags("image", "paste")
@invocation("img_paste", title="Paste Image", tags=["image", "paste"], category="image", version="1.0.0")
class ImagePasteInvocation(BaseInvocation):
"""Pastes an image into another image."""
# Metadata
type: Literal["img_paste"] = "img_paste"
# Inputs
base_image: ImageField = InputField(description="The base image")
image: ImageField = InputField(description="The image to paste")
mask: Optional[ImageField] = InputField(
@ -121,6 +136,7 @@ class ImagePasteInvocation(BaseInvocation):
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
workflow=self.workflow,
)
return ImageOutput(
@ -130,15 +146,10 @@ class ImagePasteInvocation(BaseInvocation):
)
@title("Mask from Alpha")
@tags("image", "mask")
@invocation("tomask", title="Mask from Alpha", tags=["image", "mask"], category="image", version="1.0.0")
class MaskFromAlphaInvocation(BaseInvocation):
"""Extracts the alpha channel of an image as a mask."""
# Metadata
type: Literal["tomask"] = "tomask"
# Inputs
image: ImageField = InputField(description="The image to create the mask from")
invert: bool = InputField(default=False, description="Whether or not to invert the mask")
@ -156,6 +167,7 @@ class MaskFromAlphaInvocation(BaseInvocation):
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
workflow=self.workflow,
)
return ImageOutput(
@ -165,15 +177,10 @@ class MaskFromAlphaInvocation(BaseInvocation):
)
@title("Multiply Images")
@tags("image", "multiply")
@invocation("img_mul", title="Multiply Images", tags=["image", "multiply"], category="image", version="1.0.0")
class ImageMultiplyInvocation(BaseInvocation):
"""Multiplies two images together using `PIL.ImageChops.multiply()`."""
# Metadata
type: Literal["img_mul"] = "img_mul"
# Inputs
image1: ImageField = InputField(description="The first image to multiply")
image2: ImageField = InputField(description="The second image to multiply")
@ -190,6 +197,7 @@ class ImageMultiplyInvocation(BaseInvocation):
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
workflow=self.workflow,
)
return ImageOutput(
@ -202,15 +210,10 @@ class ImageMultiplyInvocation(BaseInvocation):
IMAGE_CHANNELS = Literal["A", "R", "G", "B"]
@title("Extract Image Channel")
@tags("image", "channel")
@invocation("img_chan", title="Extract Image Channel", tags=["image", "channel"], category="image", version="1.0.0")
class ImageChannelInvocation(BaseInvocation):
"""Gets a channel from an image."""
# Metadata
type: Literal["img_chan"] = "img_chan"
# Inputs
image: ImageField = InputField(description="The image to get the channel from")
channel: IMAGE_CHANNELS = InputField(default="A", description="The channel to get")
@ -226,6 +229,7 @@ class ImageChannelInvocation(BaseInvocation):
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
workflow=self.workflow,
)
return ImageOutput(
@ -238,15 +242,10 @@ class ImageChannelInvocation(BaseInvocation):
IMAGE_MODES = Literal["L", "RGB", "RGBA", "CMYK", "YCbCr", "LAB", "HSV", "I", "F"]
@title("Convert Image Mode")
@tags("image", "convert")
@invocation("img_conv", title="Convert Image Mode", tags=["image", "convert"], category="image", version="1.0.0")
class ImageConvertInvocation(BaseInvocation):
"""Converts an image to a different mode."""
# Metadata
type: Literal["img_conv"] = "img_conv"
# Inputs
image: ImageField = InputField(description="The image to convert")
mode: IMAGE_MODES = InputField(default="L", description="The mode to convert to")
@ -262,6 +261,7 @@ class ImageConvertInvocation(BaseInvocation):
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
workflow=self.workflow,
)
return ImageOutput(
@ -271,15 +271,10 @@ class ImageConvertInvocation(BaseInvocation):
)
@title("Blur Image")
@tags("image", "blur")
@invocation("img_blur", title="Blur Image", tags=["image", "blur"], category="image", version="1.0.0")
class ImageBlurInvocation(BaseInvocation):
"""Blurs an image"""
# Metadata
type: Literal["img_blur"] = "img_blur"
# Inputs
image: ImageField = InputField(description="The image to blur")
radius: float = InputField(default=8.0, ge=0, description="The blur radius")
# Metadata
@ -300,6 +295,7 @@ class ImageBlurInvocation(BaseInvocation):
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
workflow=self.workflow,
)
return ImageOutput(
@ -329,19 +325,17 @@ PIL_RESAMPLING_MAP = {
}
@title("Resize Image")
@tags("image", "resize")
@invocation("img_resize", title="Resize Image", tags=["image", "resize"], category="image", version="1.0.0")
class ImageResizeInvocation(BaseInvocation):
"""Resizes an image to specific dimensions"""
# Metadata
type: Literal["img_resize"] = "img_resize"
# Inputs
image: ImageField = InputField(description="The image to resize")
width: int = InputField(default=512, ge=64, multiple_of=8, description="The width to resize to (px)")
height: int = InputField(default=512, ge=64, multiple_of=8, description="The height to resize to (px)")
resample_mode: PIL_RESAMPLING_MODES = InputField(default="bicubic", description="The resampling mode")
metadata: Optional[CoreMetadata] = InputField(
default=None, description=FieldDescriptions.core_metadata, ui_hidden=True
)
def invoke(self, context: InvocationContext) -> ImageOutput:
image = context.services.images.get_pil_image(self.image.image_name)
@ -360,6 +354,8 @@ class ImageResizeInvocation(BaseInvocation):
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
metadata=self.metadata.dict() if self.metadata else None,
workflow=self.workflow,
)
return ImageOutput(
@ -369,15 +365,10 @@ class ImageResizeInvocation(BaseInvocation):
)
@title("Scale Image")
@tags("image", "scale")
@invocation("img_scale", title="Scale Image", tags=["image", "scale"], category="image", version="1.0.0")
class ImageScaleInvocation(BaseInvocation):
"""Scales an image by a factor"""
# Metadata
type: Literal["img_scale"] = "img_scale"
# Inputs
image: ImageField = InputField(description="The image to scale")
scale_factor: float = InputField(
default=2.0,
@ -405,6 +396,7 @@ class ImageScaleInvocation(BaseInvocation):
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
workflow=self.workflow,
)
return ImageOutput(
@ -414,15 +406,10 @@ class ImageScaleInvocation(BaseInvocation):
)
@title("Lerp Image")
@tags("image", "lerp")
@invocation("img_lerp", title="Lerp Image", tags=["image", "lerp"], category="image", version="1.0.0")
class ImageLerpInvocation(BaseInvocation):
"""Linear interpolation of all pixels of an image"""
# Metadata
type: Literal["img_lerp"] = "img_lerp"
# Inputs
image: ImageField = InputField(description="The image to lerp")
min: int = InputField(default=0, ge=0, le=255, description="The minimum output value")
max: int = InputField(default=255, ge=0, le=255, description="The maximum output value")
@ -442,6 +429,7 @@ class ImageLerpInvocation(BaseInvocation):
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
workflow=self.workflow,
)
return ImageOutput(
@ -451,15 +439,10 @@ class ImageLerpInvocation(BaseInvocation):
)
@title("Inverse Lerp Image")
@tags("image", "ilerp")
@invocation("img_ilerp", title="Inverse Lerp Image", tags=["image", "ilerp"], category="image", version="1.0.0")
class ImageInverseLerpInvocation(BaseInvocation):
"""Inverse linear interpolation of all pixels of an image"""
# Metadata
type: Literal["img_ilerp"] = "img_ilerp"
# Inputs
image: ImageField = InputField(description="The image to lerp")
min: int = InputField(default=0, ge=0, le=255, description="The minimum input value")
max: int = InputField(default=255, ge=0, le=255, description="The maximum input value")
@ -479,6 +462,7 @@ class ImageInverseLerpInvocation(BaseInvocation):
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
workflow=self.workflow,
)
return ImageOutput(
@ -488,15 +472,10 @@ class ImageInverseLerpInvocation(BaseInvocation):
)
@title("Blur NSFW Image")
@tags("image", "nsfw")
@invocation("img_nsfw", title="Blur NSFW Image", tags=["image", "nsfw"], category="image", version="1.0.0")
class ImageNSFWBlurInvocation(BaseInvocation):
"""Add blur to NSFW-flagged images"""
# Metadata
type: Literal["img_nsfw"] = "img_nsfw"
# Inputs
image: ImageField = InputField(description="The image to check")
metadata: Optional[CoreMetadata] = InputField(
default=None, description=FieldDescriptions.core_metadata, ui_hidden=True
@ -522,6 +501,7 @@ class ImageNSFWBlurInvocation(BaseInvocation):
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
metadata=self.metadata.dict() if self.metadata else None,
workflow=self.workflow,
)
return ImageOutput(
@ -537,15 +517,12 @@ class ImageNSFWBlurInvocation(BaseInvocation):
return caution.resize((caution.width // 2, caution.height // 2))
@title("Add Invisible Watermark")
@tags("image", "watermark")
@invocation(
"img_watermark", title="Add Invisible Watermark", tags=["image", "watermark"], category="image", version="1.0.0"
)
class ImageWatermarkInvocation(BaseInvocation):
"""Add an invisible watermark to an image"""
# Metadata
type: Literal["img_watermark"] = "img_watermark"
# Inputs
image: ImageField = InputField(description="The image to check")
text: str = InputField(default="InvokeAI", description="Watermark text")
metadata: Optional[CoreMetadata] = InputField(
@ -563,6 +540,7 @@ class ImageWatermarkInvocation(BaseInvocation):
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
metadata=self.metadata.dict() if self.metadata else None,
workflow=self.workflow,
)
return ImageOutput(
@ -572,14 +550,10 @@ class ImageWatermarkInvocation(BaseInvocation):
)
@title("Mask Edge")
@tags("image", "mask", "inpaint")
@invocation("mask_edge", title="Mask Edge", tags=["image", "mask", "inpaint"], category="image", version="1.0.0")
class MaskEdgeInvocation(BaseInvocation):
"""Applies an edge mask to an image"""
type: Literal["mask_edge"] = "mask_edge"
# Inputs
image: ImageField = InputField(description="The image to apply the mask to")
edge_size: int = InputField(description="The size of the edge")
edge_blur: int = InputField(description="The amount of blur on the edge")
@ -589,7 +563,7 @@ class MaskEdgeInvocation(BaseInvocation):
)
def invoke(self, context: InvocationContext) -> ImageOutput:
mask = context.services.images.get_pil_image(self.image.image_name)
mask = context.services.images.get_pil_image(self.image.image_name).convert("L")
npimg = numpy.asarray(mask, dtype=numpy.uint8)
npgradient = numpy.uint8(255 * (1.0 - numpy.floor(numpy.abs(0.5 - numpy.float32(npimg) / 255.0) * 2.0)))
@ -611,6 +585,7 @@ class MaskEdgeInvocation(BaseInvocation):
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
workflow=self.workflow,
)
return ImageOutput(
@ -620,14 +595,12 @@ class MaskEdgeInvocation(BaseInvocation):
)
@title("Combine Mask")
@tags("image", "mask", "multiply")
@invocation(
"mask_combine", title="Combine Masks", tags=["image", "mask", "multiply"], category="image", version="1.0.0"
)
class MaskCombineInvocation(BaseInvocation):
"""Combine two masks together by multiplying them using `PIL.ImageChops.multiply()`."""
type: Literal["mask_combine"] = "mask_combine"
# Inputs
mask1: ImageField = InputField(description="The first mask to combine")
mask2: ImageField = InputField(description="The second mask to combine")
@ -644,6 +617,7 @@ class MaskCombineInvocation(BaseInvocation):
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
workflow=self.workflow,
)
return ImageOutput(
@ -653,17 +627,13 @@ class MaskCombineInvocation(BaseInvocation):
)
@title("Color Correct")
@tags("image", "color")
@invocation("color_correct", title="Color Correct", tags=["image", "color"], category="image", version="1.0.0")
class ColorCorrectInvocation(BaseInvocation):
"""
Shifts the colors of a target image to match the reference image, optionally
using a mask to only color-correct certain regions of the target image.
"""
type: Literal["color_correct"] = "color_correct"
# Inputs
image: ImageField = InputField(description="The image to color-correct")
reference: ImageField = InputField(description="Reference image for color-correction")
mask: Optional[ImageField] = InputField(default=None, description="Mask to use when applying color-correction")
@ -730,8 +700,13 @@ class ColorCorrectInvocation(BaseInvocation):
# Blur the mask out (into init image) by specified amount
if self.mask_blur_radius > 0:
nm = numpy.asarray(pil_init_mask, dtype=numpy.uint8)
inverted_nm = 255 - nm
dilation_size = int(round(self.mask_blur_radius) + 20)
dilating_kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (dilation_size, dilation_size))
inverted_dilated_nm = cv2.dilate(inverted_nm, dilating_kernel)
dilated_nm = 255 - inverted_dilated_nm
nmd = cv2.erode(
nm,
dilated_nm,
kernel=numpy.ones((3, 3), dtype=numpy.uint8),
iterations=int(self.mask_blur_radius / 2),
)
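The replacement above first dilates the inverted mask and inverts back (shrinking the white region) before eroding. A minimal sketch of that step on a synthetic mask, using only the cv2 calls shown in the diff; the mask contents are made up for illustration:

    import cv2
    import numpy

    mask_blur_radius = 8.0
    nm = numpy.zeros((64, 64), dtype=numpy.uint8)
    nm[16:48, 16:48] = 255                       # synthetic mask: white square to keep

    inverted_nm = 255 - nm
    dilation_size = int(round(mask_blur_radius) + 20)
    dilating_kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (dilation_size, dilation_size))
    dilated_nm = 255 - cv2.dilate(inverted_nm, dilating_kernel)

    nmd = cv2.erode(
        dilated_nm,
        kernel=numpy.ones((3, 3), dtype=numpy.uint8),
        iterations=int(mask_blur_radius / 2),
    )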
@ -752,6 +727,7 @@ class ColorCorrectInvocation(BaseInvocation):
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
workflow=self.workflow,
)
return ImageOutput(
@ -761,14 +737,10 @@ class ColorCorrectInvocation(BaseInvocation):
)
@title("Image Hue Adjustment")
@tags("image", "hue", "hsl")
@invocation("img_hue_adjust", title="Adjust Image Hue", tags=["image", "hue"], category="image", version="1.0.0")
class ImageHueAdjustmentInvocation(BaseInvocation):
"""Adjusts the Hue of an image."""
type: Literal["img_hue_adjust"] = "img_hue_adjust"
# Inputs
image: ImageField = InputField(description="The image to adjust")
hue: int = InputField(default=0, description="The degrees by which to rotate the hue, 0-360")
@ -794,6 +766,7 @@ class ImageHueAdjustmentInvocation(BaseInvocation):
node_id=self.id,
is_intermediate=self.is_intermediate,
session_id=context.graph_execution_state_id,
workflow=self.workflow,
)
return ImageOutput(
@ -805,37 +778,95 @@ class ImageHueAdjustmentInvocation(BaseInvocation):
)
@title("Image Luminosity Adjustment")
@tags("image", "luminosity", "hsl")
class ImageLuminosityAdjustmentInvocation(BaseInvocation):
"""Adjusts the Luminosity (Value) of an image."""
COLOR_CHANNELS = Literal[
"Red (RGBA)",
"Green (RGBA)",
"Blue (RGBA)",
"Alpha (RGBA)",
"Cyan (CMYK)",
"Magenta (CMYK)",
"Yellow (CMYK)",
"Black (CMYK)",
"Hue (HSV)",
"Saturation (HSV)",
"Value (HSV)",
"Luminosity (LAB)",
"A (LAB)",
"B (LAB)",
"Y (YCbCr)",
"Cb (YCbCr)",
"Cr (YCbCr)",
]
type: Literal["img_luminosity_adjust"] = "img_luminosity_adjust"
CHANNEL_FORMATS = {
"Red (RGBA)": ("RGBA", 0),
"Green (RGBA)": ("RGBA", 1),
"Blue (RGBA)": ("RGBA", 2),
"Alpha (RGBA)": ("RGBA", 3),
"Cyan (CMYK)": ("CMYK", 0),
"Magenta (CMYK)": ("CMYK", 1),
"Yellow (CMYK)": ("CMYK", 2),
"Black (CMYK)": ("CMYK", 3),
"Hue (HSV)": ("HSV", 0),
"Saturation (HSV)": ("HSV", 1),
"Value (HSV)": ("HSV", 2),
"Luminosity (LAB)": ("LAB", 0),
"A (LAB)": ("LAB", 1),
"B (LAB)": ("LAB", 2),
"Y (YCbCr)": ("YCbCr", 0),
"Cb (YCbCr)": ("YCbCr", 1),
"Cr (YCbCr)": ("YCbCr", 2),
}
@invocation(
"img_channel_offset",
title="Offset Image Channel",
tags=[
"image",
"offset",
"red",
"green",
"blue",
"alpha",
"cyan",
"magenta",
"yellow",
"black",
"hue",
"saturation",
"luminosity",
"value",
],
category="image",
version="1.0.0",
)
class ImageChannelOffsetInvocation(BaseInvocation):
"""Add or subtract a value from a specific color channel of an image."""
# Inputs
image: ImageField = InputField(description="The image to adjust")
luminosity: float = InputField(
default=1.0, ge=0, le=1, description="The factor by which to adjust the luminosity (value)"
)
channel: COLOR_CHANNELS = InputField(description="Which channel to adjust")
offset: int = InputField(default=0, ge=-255, le=255, description="The amount to adjust the channel by")
def invoke(self, context: InvocationContext) -> ImageOutput:
pil_image = context.services.images.get_pil_image(self.image.image_name)
# Convert PIL image to OpenCV format (numpy array), note color channel
# ordering is changed from RGB to BGR
image = numpy.array(pil_image.convert("RGB"))[:, :, ::-1]
# extract the channel and mode from the input and reference tuple
mode = CHANNEL_FORMATS[self.channel][0]
channel_number = CHANNEL_FORMATS[self.channel][1]
# Convert image to HSV color space
hsv_image = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
# Convert PIL image to new format
converted_image = numpy.array(pil_image.convert(mode)).astype(int)
image_channel = converted_image[:, :, channel_number]
# Adjust the luminosity (value)
hsv_image[:, :, 2] = numpy.clip(hsv_image[:, :, 2] * self.luminosity, 0, 255)
# Adjust the value, clipping to 0..255
image_channel = numpy.clip(image_channel + self.offset, 0, 255)
# Convert image back to BGR color space
image = cv2.cvtColor(hsv_image, cv2.COLOR_HSV2BGR)
# Put the channel back into the image
converted_image[:, :, channel_number] = image_channel
# Convert back to PIL format and to original color mode
pil_image = Image.fromarray(image[:, :, ::-1], "RGB").convert("RGBA")
# Convert back to RGBA format and output
pil_image = Image.fromarray(converted_image.astype(numpy.uint8), mode=mode).convert("RGBA")
image_dto = context.services.images.create(
image=pil_image,
@ -844,6 +875,7 @@ class ImageLuminosityAdjustmentInvocation(BaseInvocation):
node_id=self.id,
is_intermediate=self.is_intermediate,
session_id=context.graph_execution_state_id,
workflow=self.workflow,
)
return ImageOutput(
@ -855,35 +887,61 @@ class ImageLuminosityAdjustmentInvocation(BaseInvocation):
)
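A standalone sketch of the convert -> offset -> clip -> convert-back flow used by ImageChannelOffsetInvocation above, assuming a tiny RGBA test image and reusing a subset of the CHANNEL_FORMATS mapping defined earlier in this file:

    import numpy
    from PIL import Image

    CHANNEL_FORMATS = {"Green (RGBA)": ("RGBA", 1)}          # subset of the mapping above
    pil_image = Image.new("RGBA", (4, 4), (10, 20, 30, 255))
    offset = 100

    mode, channel_number = CHANNEL_FORMATS["Green (RGBA)"]
    converted_image = numpy.array(pil_image.convert(mode)).astype(int)
    converted_image[:, :, channel_number] = numpy.clip(converted_image[:, :, channel_number] + offset, 0, 255)
    result = Image.fromarray(converted_image.astype(numpy.uint8), mode=mode).convert("RGBA")

    assert result.getpixel((0, 0)) == (10, 120, 30, 255)     # green channel shifted by +100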
@title("Image Saturation Adjustment")
@tags("image", "saturation", "hsl")
class ImageSaturationAdjustmentInvocation(BaseInvocation):
"""Adjusts the Saturation of an image."""
@invocation(
"img_channel_multiply",
title="Multiply Image Channel",
tags=[
"image",
"invert",
"scale",
"multiply",
"red",
"green",
"blue",
"alpha",
"cyan",
"magenta",
"yellow",
"black",
"hue",
"saturation",
"luminosity",
"value",
],
category="image",
version="1.0.0",
)
class ImageChannelMultiplyInvocation(BaseInvocation):
"""Scale a specific color channel of an image."""
type: Literal["img_saturation_adjust"] = "img_saturation_adjust"
# Inputs
image: ImageField = InputField(description="The image to adjust")
saturation: float = InputField(default=1.0, ge=0, le=1, description="The factor by which to adjust the saturation")
channel: COLOR_CHANNELS = InputField(description="Which channel to adjust")
scale: float = InputField(default=1.0, ge=0.0, description="The amount to scale the channel by.")
invert_channel: bool = InputField(default=False, description="Invert the channel after scaling")
def invoke(self, context: InvocationContext) -> ImageOutput:
pil_image = context.services.images.get_pil_image(self.image.image_name)
# Convert PIL image to OpenCV format (numpy array), note color channel
# ordering is changed from RGB to BGR
image = numpy.array(pil_image.convert("RGB"))[:, :, ::-1]
# extract the channel and mode from the input and reference tuple
mode = CHANNEL_FORMATS[self.channel][0]
channel_number = CHANNEL_FORMATS[self.channel][1]
# Convert image to HSV color space
hsv_image = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
# Convert PIL image to new format
converted_image = numpy.array(pil_image.convert(mode)).astype(float)
image_channel = converted_image[:, :, channel_number]
# Adjust the saturation
hsv_image[:, :, 1] = numpy.clip(hsv_image[:, :, 1] * self.saturation, 0, 255)
# Adjust the value, clipping to 0..255
image_channel = numpy.clip(image_channel * self.scale, 0, 255)
# Convert image back to BGR color space
image = cv2.cvtColor(hsv_image, cv2.COLOR_HSV2BGR)
# Invert the channel if requested
if self.invert_channel:
image_channel = 255 - image_channel
# Convert back to PIL format and to original color mode
pil_image = Image.fromarray(image[:, :, ::-1], "RGB").convert("RGBA")
# Put the channel back into the image
converted_image[:, :, channel_number] = image_channel
# Convert back to RGBA format and output
pil_image = Image.fromarray(converted_image.astype(numpy.uint8), mode=mode).convert("RGBA")
image_dto = context.services.images.create(
image=pil_image,
@ -892,6 +950,7 @@ class ImageSaturationAdjustmentInvocation(BaseInvocation):
node_id=self.id,
is_intermediate=self.is_intermediate,
session_id=context.graph_execution_state_id,
workflow=self.workflow,
)
return ImageOutput(

View File

@ -1,24 +1,24 @@
# Copyright (c) 2022 Kyle Schouviller (https://github.com/kyle0654) and the InvokeAI Team
import math
from typing import Literal, Optional, get_args
import numpy as np
import math
from PIL import Image, ImageOps
from invokeai.app.invocations.primitives import ImageField, ImageOutput, ColorField
from invokeai.app.invocations.primitives import ColorField, ImageField, ImageOutput
from invokeai.app.util.misc import SEED_MAX, get_random_seed
from invokeai.backend.image_util.cv2_inpaint import cv2_inpaint
from invokeai.backend.image_util.lama import LaMA
from invokeai.backend.image_util.patchmatch import PatchMatch
from ..models.image import ImageCategory, ResourceOrigin
from .baseinvocation import BaseInvocation, InputField, InvocationContext, title, tags
from .baseinvocation import BaseInvocation, InputField, InvocationContext, invocation
from .image import PIL_RESAMPLING_MAP, PIL_RESAMPLING_MODES
def infill_methods() -> list[str]:
methods = [
"tile",
"solid",
]
methods = ["tile", "solid", "lama", "cv2"]
if PatchMatch.patchmatch_available():
methods.insert(0, "patchmatch")
return methods
@ -28,6 +28,11 @@ INFILL_METHODS = Literal[tuple(infill_methods())]
DEFAULT_INFILL_METHOD = "patchmatch" if "patchmatch" in get_args(INFILL_METHODS) else "tile"
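A small sketch of how the method list above drives the default selection, assuming PatchMatch is not installed in the environment:

    methods = ["tile", "solid", "lama", "cv2"]                 # infill_methods() when patchmatch is unavailable
    default_method = "patchmatch" if "patchmatch" in methods else "tile"
    assert default_method == "tile"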
def infill_lama(im: Image.Image) -> Image.Image:
lama = LaMA()
return lama(im)
def infill_patchmatch(im: Image.Image) -> Image.Image:
if im.mode != "RGBA":
return im
@ -42,6 +47,10 @@ def infill_patchmatch(im: Image.Image) -> Image.Image:
return im_patched
def infill_cv2(im: Image.Image) -> Image.Image:
return cv2_inpaint(im)
def get_tile_images(image: np.ndarray, width=8, height=8):
_nrows, _ncols, depth = image.shape
_strides = image.strides
@ -90,7 +99,7 @@ def tile_fill_missing(im: Image.Image, tile_size: int = 16, seed: Optional[int]
return im
# Find all invalid tiles and replace with a random valid tile
replace_count = (tiles_mask == False).sum()
replace_count = (tiles_mask == False).sum() # noqa: E712
rng = np.random.default_rng(seed=seed)
tiles_all[np.logical_not(tiles_mask)] = filtered_tiles[rng.choice(filtered_tiles.shape[0], replace_count), :, :, :]
@ -109,14 +118,10 @@ def tile_fill_missing(im: Image.Image, tile_size: int = 16, seed: Optional[int]
return si
@title("Solid Color Infill")
@tags("image", "inpaint")
@invocation("infill_rgba", title="Solid Color Infill", tags=["image", "inpaint"], category="inpaint", version="1.0.0")
class InfillColorInvocation(BaseInvocation):
"""Infills transparent areas of an image with a solid color"""
type: Literal["infill_rgba"] = "infill_rgba"
# Inputs
image: ImageField = InputField(description="The image to infill")
color: ColorField = InputField(
default=ColorField(r=127, g=127, b=127, a=255),
@ -138,6 +143,7 @@ class InfillColorInvocation(BaseInvocation):
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
workflow=self.workflow,
)
return ImageOutput(
@ -147,14 +153,10 @@ class InfillColorInvocation(BaseInvocation):
)
@title("Tile Infill")
@tags("image", "inpaint")
@invocation("infill_tile", title="Tile Infill", tags=["image", "inpaint"], category="inpaint", version="1.0.0")
class InfillTileInvocation(BaseInvocation):
"""Infills transparent areas of an image with tiles of the image"""
type: Literal["infill_tile"] = "infill_tile"
# Input
image: ImageField = InputField(description="The image to infill")
tile_size: int = InputField(default=32, ge=1, description="The tile size (px)")
seed: int = InputField(
@ -177,6 +179,7 @@ class InfillTileInvocation(BaseInvocation):
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
workflow=self.workflow,
)
return ImageOutput(
@ -186,23 +189,96 @@ class InfillTileInvocation(BaseInvocation):
)
@title("PatchMatch Infill")
@tags("image", "inpaint")
@invocation(
"infill_patchmatch", title="PatchMatch Infill", tags=["image", "inpaint"], category="inpaint", version="1.0.0"
)
class InfillPatchMatchInvocation(BaseInvocation):
"""Infills transparent areas of an image using the PatchMatch algorithm"""
type: Literal["infill_patchmatch"] = "infill_patchmatch"
image: ImageField = InputField(description="The image to infill")
downscale: float = InputField(default=2.0, gt=0, description="Run patchmatch on downscaled image to speedup infill")
resample_mode: PIL_RESAMPLING_MODES = InputField(default="bicubic", description="The resampling mode")
def invoke(self, context: InvocationContext) -> ImageOutput:
image = context.services.images.get_pil_image(self.image.image_name).convert("RGBA")
resample_mode = PIL_RESAMPLING_MAP[self.resample_mode]
infill_image = image.copy()
width = int(image.width / self.downscale)
height = int(image.height / self.downscale)
infill_image = infill_image.resize(
(width, height),
resample=resample_mode,
)
if PatchMatch.patchmatch_available():
infilled = infill_patchmatch(infill_image)
else:
raise ValueError("PatchMatch is not available on this system")
infilled = infilled.resize(
(image.width, image.height),
resample=resample_mode,
)
infilled.paste(image, (0, 0), mask=image.split()[-1])
# image.paste(infilled, (0, 0), mask=image.split()[-1])
image_dto = context.services.images.create(
image=infilled,
image_origin=ResourceOrigin.INTERNAL,
image_category=ImageCategory.GENERAL,
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
workflow=self.workflow,
)
return ImageOutput(
image=ImageField(image_name=image_dto.image_name),
width=image_dto.width,
height=image_dto.height,
)
@invocation("infill_lama", title="LaMa Infill", tags=["image", "inpaint"], category="inpaint", version="1.0.0")
class LaMaInfillInvocation(BaseInvocation):
"""Infills transparent areas of an image using the LaMa model"""
# Inputs
image: ImageField = InputField(description="The image to infill")
def invoke(self, context: InvocationContext) -> ImageOutput:
image = context.services.images.get_pil_image(self.image.image_name)
if PatchMatch.patchmatch_available():
infilled = infill_patchmatch(image.copy())
else:
raise ValueError("PatchMatch is not available on this system")
infilled = infill_lama(image.copy())
image_dto = context.services.images.create(
image=infilled,
image_origin=ResourceOrigin.INTERNAL,
image_category=ImageCategory.GENERAL,
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
)
return ImageOutput(
image=ImageField(image_name=image_dto.image_name),
width=image_dto.width,
height=image_dto.height,
)
@invocation("infill_cv2", title="CV2 Infill", tags=["image", "inpaint"], category="inpaint")
class CV2InfillInvocation(BaseInvocation):
"""Infills transparent areas of an image using OpenCV Inpainting"""
image: ImageField = InputField(description="The image to infill")
def invoke(self, context: InvocationContext) -> ImageOutput:
image = context.services.images.get_pil_image(self.image.image_name)
infilled = infill_cv2(image.copy())
image_dto = context.services.images.create(
image=infilled,

@ -4,6 +4,7 @@ from contextlib import ExitStack
from typing import List, Literal, Optional, Union
import einops
import numpy as np
import torch
import torchvision.transforms as T
from diffusers.image_processor import VaeImageProcessor
@ -15,11 +16,13 @@ from diffusers.models.attention_processor import (
)
from diffusers.schedulers import DPMSolverSDEScheduler
from diffusers.schedulers import SchedulerMixin as Scheduler
from pydantic import BaseModel, Field, validator
from pydantic import validator
from torchvision.transforms.functional import resize as tv_resize
from invokeai.app.invocations.metadata import CoreMetadata
from invokeai.app.invocations.primitives import (
DenoiseMaskField,
DenoiseMaskOutput,
ImageField,
ImageOutput,
LatentsField,
@ -30,8 +33,9 @@ from invokeai.app.util.controlnet_utils import prepare_control_image
from invokeai.app.util.step_callback import stable_diffusion_step_callback
from invokeai.backend.model_management.models import ModelType, SilenceWarnings
from ...backend.model_management import BaseModelType, ModelPatcher
from ...backend.model_management.lora import ModelPatcher
from ...backend.model_management.seamless import set_seamless
from ...backend.model_management.models import BaseModelType
from ...backend.stable_diffusion import PipelineIntermediateState
from ...backend.stable_diffusion.diffusers_pipeline import (
ConditioningData,
@ -52,8 +56,8 @@ from .baseinvocation import (
InvocationContext,
OutputField,
UIType,
tags,
title,
invocation,
invocation_output,
)
from .compel import ConditioningField
from .controlnet_image_processors import ControlField
@ -65,6 +69,86 @@ DEFAULT_PRECISION = choose_precision(choose_torch_device())
SAMPLER_NAME_VALUES = Literal[tuple(list(SCHEDULER_MAP.keys()))]
@invocation_output("scheduler_output")
class SchedulerOutput(BaseInvocationOutput):
scheduler: SAMPLER_NAME_VALUES = OutputField(description=FieldDescriptions.scheduler, ui_type=UIType.Scheduler)
@invocation("scheduler", title="Scheduler", tags=["scheduler"], category="latents", version="1.0.0")
class SchedulerInvocation(BaseInvocation):
"""Selects a scheduler."""
scheduler: SAMPLER_NAME_VALUES = InputField(
default="euler", description=FieldDescriptions.scheduler, ui_type=UIType.Scheduler
)
def invoke(self, context: InvocationContext) -> SchedulerOutput:
return SchedulerOutput(scheduler=self.scheduler)
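Nodes are now registered with the @invocation decorator (node type, title, tags, category, version) instead of the old @title/@tags decorators plus a type: Literal[...] field. A minimal custom-node sketch in the new style; the node name "my_gamma" and its fields are hypothetical, while the decorator keywords mirror the ones used throughout this diff:

from invokeai.app.invocations.baseinvocation import (
    BaseInvocation,
    InputField,
    InvocationContext,
    invocation,
)
from invokeai.app.invocations.primitives import FloatOutput


@invocation("my_gamma", title="Gamma", tags=["math"], category="math", version="1.0.0")
class GammaInvocation(BaseInvocation):
    """Raises a value to a power (illustrative only)."""

    value: float = InputField(default=1.0, description="The base value")
    gamma: float = InputField(default=2.2, description="The exponent")

    def invoke(self, context: InvocationContext) -> FloatOutput:
        return FloatOutput(value=self.value**self.gamma)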
@invocation(
"create_denoise_mask", title="Create Denoise Mask", tags=["mask", "denoise"], category="latents", version="1.0.0"
)
class CreateDenoiseMaskInvocation(BaseInvocation):
"""Creates mask for denoising model run."""
vae: VaeField = InputField(description=FieldDescriptions.vae, input=Input.Connection, ui_order=0)
image: Optional[ImageField] = InputField(default=None, description="Image which will be masked", ui_order=1)
mask: ImageField = InputField(description="The mask to use when pasting", ui_order=2)
tiled: bool = InputField(default=False, description=FieldDescriptions.tiled, ui_order=3)
fp32: bool = InputField(default=DEFAULT_PRECISION == "float32", description=FieldDescriptions.fp32, ui_order=4)
def prep_mask_tensor(self, mask_image):
if mask_image.mode != "L":
mask_image = mask_image.convert("L")
mask_tensor = image_resized_to_grid_as_tensor(mask_image, normalize=False)
if mask_tensor.dim() == 3:
mask_tensor = mask_tensor.unsqueeze(0)
# if shape is not None:
# mask_tensor = tv_resize(mask_tensor, shape, T.InterpolationMode.BILINEAR)
return mask_tensor
@torch.no_grad()
def invoke(self, context: InvocationContext) -> DenoiseMaskOutput:
if self.image is not None:
image = context.services.images.get_pil_image(self.image.image_name)
image = image_resized_to_grid_as_tensor(image.convert("RGB"))
if image.dim() == 3:
image = image.unsqueeze(0)
else:
image = None
mask = self.prep_mask_tensor(
context.services.images.get_pil_image(self.mask.image_name),
)
if image is not None:
vae_info = context.services.model_manager.get_model(
**self.vae.vae.dict(),
context=context,
)
img_mask = tv_resize(mask, image.shape[-2:], T.InterpolationMode.BILINEAR, antialias=False)
masked_image = image * torch.where(img_mask < 0.5, 0.0, 1.0)
# TODO:
masked_latents = ImageToLatentsInvocation.vae_encode(vae_info, self.fp32, self.tiled, masked_image.clone())
masked_latents_name = f"{context.graph_execution_state_id}__{self.id}_masked_latents"
context.services.latents.save(masked_latents_name, masked_latents)
else:
masked_latents_name = None
mask_name = f"{context.graph_execution_state_id}__{self.id}_mask"
context.services.latents.save(mask_name, mask)
return DenoiseMaskOutput(
denoise_mask=DenoiseMaskField(
mask_name=mask_name,
masked_latents_name=masked_latents_name,
),
)
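CreateDenoiseMaskInvocation thresholds the resized mask at 0.5 and zeroes the masked pixels before VAE-encoding them. A small standalone torch sketch of that step on toy tensors (shapes are illustrative):

import torch
import torchvision.transforms as T
from torchvision.transforms.functional import resize as tv_resize

image = torch.rand(1, 3, 64, 64)  # RGB image tensor, NCHW
mask = torch.rand(1, 1, 32, 32)   # single-channel mask in [0, 1]

# Resize the mask to the image resolution, then zero out pixels where mask < 0.5,
# mirroring the masked_image computation above.
img_mask = tv_resize(mask, image.shape[-2:], T.InterpolationMode.BILINEAR, antialias=False)
masked_image = image * torch.where(img_mask < 0.5, 0.0, 1.0)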
def get_scheduler(
context: InvocationContext,
scheduler_info: ModelInfo,
@ -99,36 +183,42 @@ def get_scheduler(
return scheduler
@title("Denoise Latents")
@tags("latents", "denoise", "txt2img", "t2i", "t2l", "img2img", "i2i", "l2l")
@invocation(
"denoise_latents",
title="Denoise Latents",
tags=["latents", "denoise", "txt2img", "t2i", "t2l", "img2img", "i2i", "l2l"],
category="latents",
version="1.0.0",
)
class DenoiseLatentsInvocation(BaseInvocation):
"""Denoises noisy latents to decodable images"""
type: Literal["denoise_latents"] = "denoise_latents"
# Inputs
positive_conditioning: ConditioningField = InputField(
description=FieldDescriptions.positive_cond, input=Input.Connection
description=FieldDescriptions.positive_cond, input=Input.Connection, ui_order=0
)
negative_conditioning: ConditioningField = InputField(
description=FieldDescriptions.negative_cond, input=Input.Connection
description=FieldDescriptions.negative_cond, input=Input.Connection, ui_order=1
)
noise: Optional[LatentsField] = InputField(description=FieldDescriptions.noise, input=Input.Connection)
noise: Optional[LatentsField] = InputField(description=FieldDescriptions.noise, input=Input.Connection, ui_order=3)
steps: int = InputField(default=10, gt=0, description=FieldDescriptions.steps)
cfg_scale: Union[float, List[float]] = InputField(
default=7.5, ge=1, description=FieldDescriptions.cfg_scale, ui_type=UIType.Float
default=7.5, ge=1, description=FieldDescriptions.cfg_scale, ui_type=UIType.Float, title="CFG Scale"
)
denoising_start: float = InputField(default=0.0, ge=0, le=1, description=FieldDescriptions.denoising_start)
denoising_end: float = InputField(default=1.0, ge=0, le=1, description=FieldDescriptions.denoising_end)
scheduler: SAMPLER_NAME_VALUES = InputField(default="euler", description=FieldDescriptions.scheduler)
unet: UNetField = InputField(description=FieldDescriptions.unet, input=Input.Connection)
scheduler: SAMPLER_NAME_VALUES = InputField(
default="euler", description=FieldDescriptions.scheduler, ui_type=UIType.Scheduler
)
unet: UNetField = InputField(description=FieldDescriptions.unet, input=Input.Connection, title="UNet", ui_order=2)
control: Union[ControlField, list[ControlField]] = InputField(
default=None, description=FieldDescriptions.control, input=Input.Connection
default=None,
description=FieldDescriptions.control,
input=Input.Connection,
ui_order=5,
)
latents: Optional[LatentsField] = InputField(description=FieldDescriptions.latents, input=Input.Connection)
mask: Optional[ImageField] = InputField(
default=None,
description=FieldDescriptions.mask,
denoise_mask: Optional[DenoiseMaskField] = InputField(
default=None, description=FieldDescriptions.mask, input=Input.Connection, ui_order=6
)
@validator("cfg_scale")
@ -232,7 +322,7 @@ class DenoiseLatentsInvocation(BaseInvocation):
context: InvocationContext,
# really only need model for dtype and device
model: StableDiffusionGeneratorPipeline,
control_input: List[ControlField],
control_input: Union[ControlField, List[ControlField]],
latents_shape: List[int],
exit_stack: ExitStack,
do_classifier_free_guidance: bool = True,
@ -306,52 +396,46 @@ class DenoiseLatentsInvocation(BaseInvocation):
# original idea by https://github.com/AmericanPresidentJimmyCarter
# TODO: research more for second order schedulers timesteps
def init_scheduler(self, scheduler, device, steps, denoising_start, denoising_end):
num_inference_steps = steps
if scheduler.config.get("cpu_only", False):
scheduler.set_timesteps(num_inference_steps, device="cpu")
scheduler.set_timesteps(steps, device="cpu")
timesteps = scheduler.timesteps.to(device=device)
else:
scheduler.set_timesteps(num_inference_steps, device=device)
scheduler.set_timesteps(steps, device=device)
timesteps = scheduler.timesteps
# apply denoising_start
# skip greater order timesteps
_timesteps = timesteps[:: scheduler.order]
# get start timestep index
t_start_val = int(round(scheduler.config.num_train_timesteps * (1 - denoising_start)))
t_start_idx = len(list(filter(lambda ts: ts >= t_start_val, timesteps)))
timesteps = timesteps[t_start_idx:]
if scheduler.order == 2 and t_start_idx > 0:
timesteps = timesteps[1:]
t_start_idx = len(list(filter(lambda ts: ts >= t_start_val, _timesteps)))
# save start timestep to apply noise
init_timestep = timesteps[:1]
# apply denoising_end
# get end timestep index
t_end_val = int(round(scheduler.config.num_train_timesteps * (1 - denoising_end)))
t_end_idx = len(list(filter(lambda ts: ts >= t_end_val, timesteps)))
if scheduler.order == 2 and t_end_idx > 0:
t_end_idx += 1
timesteps = timesteps[:t_end_idx]
t_end_idx = len(list(filter(lambda ts: ts >= t_end_val, _timesteps[t_start_idx:])))
# calculate step count based on scheduler order
num_inference_steps = len(timesteps)
if scheduler.order == 2:
num_inference_steps += num_inference_steps % 2
num_inference_steps = num_inference_steps // 2
# apply order to indexes
t_start_idx *= scheduler.order
t_end_idx *= scheduler.order
init_timestep = timesteps[t_start_idx : t_start_idx + 1]
timesteps = timesteps[t_start_idx : t_start_idx + t_end_idx]
num_inference_steps = len(timesteps) // scheduler.order
return num_inference_steps, timesteps, init_timestep
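The reworked init_scheduler computes the start/end indices on the stride-reduced list _timesteps (one entry per step), then multiplies both indices by scheduler.order before slicing the full timestep array. A worked numeric example under assumed values (1000 training timesteps, 10 steps, a second-order scheduler whose timesteps repeat pairwise, denoising_start=0.3, denoising_end=0.8):

import numpy as np

order = 2
num_train_timesteps = 1000
steps = 10
denoising_start, denoising_end = 0.3, 0.8

# Assumed second-order schedule: each timestep appears twice, e.g. [999, 999, 888, 888, ...].
base = np.linspace(num_train_timesteps - 1, 0, steps).round().astype(int)
timesteps = np.repeat(base, order)

_timesteps = timesteps[::order]  # one entry per step
t_start_val = int(round(num_train_timesteps * (1 - denoising_start)))      # 700
t_start_idx = sum(int(ts >= t_start_val) for ts in _timesteps)             # 3
t_end_val = int(round(num_train_timesteps * (1 - denoising_end)))          # 200
t_end_idx = sum(int(ts >= t_end_val) for ts in _timesteps[t_start_idx:])   # 5

t_start_idx *= order   # 6
t_end_idx *= order     # 10
init_timestep = timesteps[t_start_idx : t_start_idx + 1]
timesteps = timesteps[t_start_idx : t_start_idx + t_end_idx]  # 10 entries
num_inference_steps = len(timesteps) // order                 # 5 denoising steps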
def prep_mask_tensor(self, mask, context, latents):
if mask is None:
return None
def prep_inpaint_mask(self, context, latents):
if self.denoise_mask is None:
return None, None
mask_image = context.services.images.get_pil_image(mask.image_name)
if mask_image.mode != "L":
# FIXME: why do we get passed an RGB image here? We can only use single-channel.
mask_image = mask_image.convert("L")
mask_tensor = image_resized_to_grid_as_tensor(mask_image, normalize=False)
if mask_tensor.dim() == 3:
mask_tensor = mask_tensor.unsqueeze(0)
mask_tensor = tv_resize(mask_tensor, latents.shape[-2:], T.InterpolationMode.BILINEAR)
return 1 - mask_tensor
mask = context.services.latents.get(self.denoise_mask.mask_name)
mask = tv_resize(mask, latents.shape[-2:], T.InterpolationMode.BILINEAR, antialias=False)
if self.denoise_mask.masked_latents_name is not None:
masked_latents = context.services.latents.get(self.denoise_mask.masked_latents_name)
else:
masked_latents = None
return 1 - mask, masked_latents
@torch.no_grad()
def invoke(self, context: InvocationContext) -> LatentsOutput:
@ -366,13 +450,19 @@ class DenoiseLatentsInvocation(BaseInvocation):
latents = context.services.latents.get(self.latents.latents_name)
if seed is None:
seed = self.latents.seed
else:
if noise is not None and noise.shape[1:] != latents.shape[1:]:
raise Exception(f"Incompatable 'noise' and 'latents' shapes: {latents.shape=} {noise.shape=}")
elif noise is not None:
latents = torch.zeros_like(noise)
else:
raise Exception("'latents' or 'noise' must be provided!")
if seed is None:
seed = 0
mask = self.prep_mask_tensor(self.mask, context, latents)
mask, masked_latents = self.prep_inpaint_mask(context, latents)
# Get the source node id (we are invoking the prepared node)
graph_execution_state = context.services.graph_execution_manager.get(context.graph_execution_state_id)
@ -397,12 +487,14 @@ class DenoiseLatentsInvocation(BaseInvocation):
)
with ExitStack() as exit_stack, ModelPatcher.apply_lora_unet(
unet_info.context.model, _lora_loader()
), unet_info as unet:
), set_seamless(unet_info.context.model, self.unet.seamless_axes), unet_info as unet:
latents = latents.to(device=unet.device, dtype=unet.dtype)
if noise is not None:
noise = noise.to(device=unet.device, dtype=unet.dtype)
if mask is not None:
mask = mask.to(device=unet.device, dtype=unet.dtype)
if masked_latents is not None:
masked_latents = masked_latents.to(device=unet.device, dtype=unet.dtype)
scheduler = get_scheduler(
context=context,
@ -439,6 +531,7 @@ class DenoiseLatentsInvocation(BaseInvocation):
noise=noise,
seed=seed,
mask=mask,
masked_latents=masked_latents,
num_inference_steps=num_inference_steps,
conditioning_data=conditioning_data,
control_data=control_data, # list[ControlNetData]
@ -454,14 +547,12 @@ class DenoiseLatentsInvocation(BaseInvocation):
return build_latents_output(latents_name=name, latents=result_latents, seed=seed)
@title("Latents to Image")
@tags("latents", "image", "vae")
@invocation(
"l2i", title="Latents to Image", tags=["latents", "image", "vae", "l2i"], category="latents", version="1.0.0"
)
class LatentsToImageInvocation(BaseInvocation):
"""Generates an image from latents."""
type: Literal["l2i"] = "l2i"
# Inputs
latents: LatentsField = InputField(
description=FieldDescriptions.latents,
input=Input.Connection,
@ -487,7 +578,7 @@ class LatentsToImageInvocation(BaseInvocation):
context=context,
)
with vae_info as vae:
with set_seamless(vae_info.context.model, self.vae.seamless_axes), vae_info as vae:
latents = latents.to(vae.device)
if self.fp32:
vae.to(dtype=torch.float32)
@ -542,6 +633,7 @@ class LatentsToImageInvocation(BaseInvocation):
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
metadata=self.metadata.dict() if self.metadata else None,
workflow=self.workflow,
)
return ImageOutput(
@ -554,14 +646,10 @@ class LatentsToImageInvocation(BaseInvocation):
LATENTS_INTERPOLATION_MODE = Literal["nearest", "linear", "bilinear", "bicubic", "trilinear", "area", "nearest-exact"]
@title("Resize Latents")
@tags("latents", "resize")
@invocation("lresize", title="Resize Latents", tags=["latents", "resize"], category="latents", version="1.0.0")
class ResizeLatentsInvocation(BaseInvocation):
"""Resizes latents to explicit width/height (in pixels). Provided dimensions are floor-divided by 8."""
type: Literal["lresize"] = "lresize"
# Inputs
latents: LatentsField = InputField(
description=FieldDescriptions.latents,
input=Input.Connection,
@ -602,14 +690,10 @@ class ResizeLatentsInvocation(BaseInvocation):
return build_latents_output(latents_name=name, latents=resized_latents, seed=self.latents.seed)
@title("Scale Latents")
@tags("latents", "resize")
@invocation("lscale", title="Scale Latents", tags=["latents", "resize"], category="latents", version="1.0.0")
class ScaleLatentsInvocation(BaseInvocation):
"""Scales latents by a given factor."""
type: Literal["lscale"] = "lscale"
# Inputs
latents: LatentsField = InputField(
description=FieldDescriptions.latents,
input=Input.Connection,
@ -642,14 +726,12 @@ class ScaleLatentsInvocation(BaseInvocation):
return build_latents_output(latents_name=name, latents=resized_latents, seed=self.latents.seed)
@title("Image to Latents")
@tags("latents", "image", "vae")
@invocation(
"i2l", title="Image to Latents", tags=["latents", "image", "vae", "i2l"], category="latents", version="1.0.0"
)
class ImageToLatentsInvocation(BaseInvocation):
"""Encodes an image into latents."""
type: Literal["i2l"] = "i2l"
# Inputs
image: ImageField = InputField(
description="The image to encode",
)
@ -660,26 +742,11 @@ class ImageToLatentsInvocation(BaseInvocation):
tiled: bool = InputField(default=False, description=FieldDescriptions.tiled)
fp32: bool = InputField(default=DEFAULT_PRECISION == "float32", description=FieldDescriptions.fp32)
@torch.no_grad()
def invoke(self, context: InvocationContext) -> LatentsOutput:
# image = context.services.images.get(
# self.image.image_type, self.image.image_name
# )
image = context.services.images.get_pil_image(self.image.image_name)
# vae_info = context.services.model_manager.get_model(**self.vae.vae.dict())
vae_info = context.services.model_manager.get_model(
**self.vae.vae.dict(),
context=context,
)
image_tensor = image_resized_to_grid_as_tensor(image.convert("RGB"))
if image_tensor.dim() == 3:
image_tensor = einops.rearrange(image_tensor, "c h w -> 1 c h w")
@staticmethod
def vae_encode(vae_info, upcast, tiled, image_tensor):
with vae_info as vae:
orig_dtype = vae.dtype
if self.fp32:
if upcast:
vae.to(dtype=torch.float32)
use_torch_2_0_or_xformers = isinstance(
@ -704,7 +771,7 @@ class ImageToLatentsInvocation(BaseInvocation):
vae.to(dtype=torch.float16)
# latents = latents.half()
if self.tiled:
if tiled:
vae.enable_tiling()
else:
vae.disable_tiling()
@ -718,7 +785,98 @@ class ImageToLatentsInvocation(BaseInvocation):
latents = vae.config.scaling_factor * latents
latents = latents.to(dtype=orig_dtype)
return latents
@torch.no_grad()
def invoke(self, context: InvocationContext) -> LatentsOutput:
image = context.services.images.get_pil_image(self.image.image_name)
vae_info = context.services.model_manager.get_model(
**self.vae.vae.dict(),
context=context,
)
image_tensor = image_resized_to_grid_as_tensor(image.convert("RGB"))
if image_tensor.dim() == 3:
image_tensor = einops.rearrange(image_tensor, "c h w -> 1 c h w")
latents = self.vae_encode(vae_info, self.fp32, self.tiled, image_tensor)
name = f"{context.graph_execution_state_id}__{self.id}"
latents = latents.to("cpu")
context.services.latents.save(name, latents)
return build_latents_output(latents_name=name, latents=latents, seed=None)
@invocation("lblend", title="Blend Latents", tags=["latents", "blend"], category="latents", version="1.0.0")
class BlendLatentsInvocation(BaseInvocation):
"""Blend two latents using a given alpha. Latents must have same size."""
latents_a: LatentsField = InputField(
description=FieldDescriptions.latents,
input=Input.Connection,
)
latents_b: LatentsField = InputField(
description=FieldDescriptions.latents,
input=Input.Connection,
)
alpha: float = InputField(default=0.5, description=FieldDescriptions.blend_alpha)
def invoke(self, context: InvocationContext) -> LatentsOutput:
latents_a = context.services.latents.get(self.latents_a.latents_name)
latents_b = context.services.latents.get(self.latents_b.latents_name)
if latents_a.shape != latents_b.shape:
raise "Latents to blend must be the same size."
# TODO:
device = choose_torch_device()
def slerp(t, v0, v1, DOT_THRESHOLD=0.9995):
"""
Spherical linear interpolation
Args:
t (float/np.ndarray): Float value between 0.0 and 1.0
v0 (np.ndarray): Starting vector
v1 (np.ndarray): Final vector
DOT_THRESHOLD (float): Threshold for considering the two vectors as
collinear. Not recommended to alter this.
Returns:
v2 (np.ndarray): Interpolation vector between v0 and v1
"""
inputs_are_torch = False
if not isinstance(v0, np.ndarray):
inputs_are_torch = True
v0 = v0.detach().cpu().numpy()
if not isinstance(v1, np.ndarray):
inputs_are_torch = True
v1 = v1.detach().cpu().numpy()
dot = np.sum(v0 * v1 / (np.linalg.norm(v0) * np.linalg.norm(v1)))
if np.abs(dot) > DOT_THRESHOLD:
v2 = (1 - t) * v0 + t * v1
else:
theta_0 = np.arccos(dot)
sin_theta_0 = np.sin(theta_0)
theta_t = theta_0 * t
sin_theta_t = np.sin(theta_t)
s0 = np.sin(theta_0 - theta_t) / sin_theta_0
s1 = sin_theta_t / sin_theta_0
v2 = s0 * v0 + s1 * v1
if inputs_are_torch:
v2 = torch.from_numpy(v2).to(device)
return v2
# blend
blended_latents = slerp(self.alpha, latents_a, latents_b)
# https://discuss.huggingface.co/t/memory-usage-by-later-pipeline-stages/23699
blended_latents = blended_latents.to("cpu")
torch.cuda.empty_cache()
name = f"{context.graph_execution_state_id}__{self.id}"
# context.services.latents.set(name, resized_latents)
context.services.latents.save(name, blended_latents)
return build_latents_output(latents_name=name, latents=blended_latents)
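The slerp above falls back to plain linear interpolation when the inputs are nearly collinear and otherwise interpolates along the arc, which keeps the interpolant's norm close to that of the endpoints. A tiny standalone numpy check on toy vectors (not latents):

import numpy as np

def slerp(t, v0, v1, DOT_THRESHOLD=0.9995):
    # Same math as the nested slerp above, numpy-only for illustration.
    dot = np.sum(v0 * v1 / (np.linalg.norm(v0) * np.linalg.norm(v1)))
    if np.abs(dot) > DOT_THRESHOLD:
        return (1 - t) * v0 + t * v1
    theta_0 = np.arccos(dot)
    theta_t = theta_0 * t
    s0 = np.sin(theta_0 - theta_t) / np.sin(theta_0)
    s1 = np.sin(theta_t) / np.sin(theta_0)
    return s0 * v0 + s1 * v1

v0 = np.array([1.0, 0.0])
v1 = np.array([0.0, 1.0])
mid = slerp(0.5, v0, v1)
print(mid, np.linalg.norm(mid))  # ~[0.7071 0.7071], norm ~1.0 (a plain lerp would give norm ~0.71)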

@ -1,84 +1,62 @@
# Copyright (c) 2023 Kyle Schouviller (https://github.com/kyle0654)
from typing import Literal
import numpy as np
from invokeai.app.invocations.primitives import IntegerOutput
from .baseinvocation import BaseInvocation, FieldDescriptions, InputField, InvocationContext, tags, title
from .baseinvocation import BaseInvocation, FieldDescriptions, InputField, InvocationContext, invocation
@title("Add Integers")
@tags("math")
@invocation("add", title="Add Integers", tags=["math", "add"], category="math", version="1.0.0")
class AddInvocation(BaseInvocation):
"""Adds two numbers"""
type: Literal["add"] = "add"
# Inputs
a: int = InputField(default=0, description=FieldDescriptions.num_1)
b: int = InputField(default=0, description=FieldDescriptions.num_2)
def invoke(self, context: InvocationContext) -> IntegerOutput:
return IntegerOutput(a=self.a + self.b)
return IntegerOutput(value=self.a + self.b)
@title("Subtract Integers")
@tags("math")
@invocation("sub", title="Subtract Integers", tags=["math", "subtract"], category="math", version="1.0.0")
class SubtractInvocation(BaseInvocation):
"""Subtracts two numbers"""
type: Literal["sub"] = "sub"
# Inputs
a: int = InputField(default=0, description=FieldDescriptions.num_1)
b: int = InputField(default=0, description=FieldDescriptions.num_2)
def invoke(self, context: InvocationContext) -> IntegerOutput:
return IntegerOutput(a=self.a - self.b)
return IntegerOutput(value=self.a - self.b)
@title("Multiply Integers")
@tags("math")
@invocation("mul", title="Multiply Integers", tags=["math", "multiply"], category="math", version="1.0.0")
class MultiplyInvocation(BaseInvocation):
"""Multiplies two numbers"""
type: Literal["mul"] = "mul"
# Inputs
a: int = InputField(default=0, description=FieldDescriptions.num_1)
b: int = InputField(default=0, description=FieldDescriptions.num_2)
def invoke(self, context: InvocationContext) -> IntegerOutput:
return IntegerOutput(a=self.a * self.b)
return IntegerOutput(value=self.a * self.b)
@title("Divide Integers")
@tags("math")
@invocation("div", title="Divide Integers", tags=["math", "divide"], category="math", version="1.0.0")
class DivideInvocation(BaseInvocation):
"""Divides two numbers"""
type: Literal["div"] = "div"
# Inputs
a: int = InputField(default=0, description=FieldDescriptions.num_1)
b: int = InputField(default=0, description=FieldDescriptions.num_2)
def invoke(self, context: InvocationContext) -> IntegerOutput:
return IntegerOutput(a=int(self.a / self.b))
return IntegerOutput(value=int(self.a / self.b))
@title("Random Integer")
@tags("math")
@invocation("rand_int", title="Random Integer", tags=["math", "random"], category="math", version="1.0.0")
class RandomIntInvocation(BaseInvocation):
"""Outputs a single random integer."""
type: Literal["rand_int"] = "rand_int"
# Inputs
low: int = InputField(default=0, description="The inclusive low value")
high: int = InputField(default=np.iinfo(np.int32).max, description="The exclusive high value")
def invoke(self, context: InvocationContext) -> IntegerOutput:
return IntegerOutput(a=np.random.randint(self.low, self.high))
return IntegerOutput(value=np.random.randint(self.low, self.high))
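All of these math nodes now return their result in IntegerOutput.value rather than IntegerOutput.a, so downstream consumers must read the renamed field. A trivial sketch of the renamed output (import path as used in this file):

from invokeai.app.invocations.primitives import IntegerOutput

out = IntegerOutput(value=3 + 4)
assert out.value == 7  # before this PR the result was exposed as out.a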

@ -1,4 +1,4 @@
from typing import Literal, Optional
from typing import Optional
from pydantic import Field
@ -8,8 +8,8 @@ from invokeai.app.invocations.baseinvocation import (
InputField,
InvocationContext,
OutputField,
tags,
title,
invocation,
invocation_output,
)
from invokeai.app.invocations.controlnet_image_processors import ControlField
from invokeai.app.invocations.model import LoRAModelField, MainModelField, VAEModelField
@ -32,6 +32,7 @@ class CoreMetadata(BaseModelExcludeNull):
generation_mode: str = Field(
description="The generation mode that output this image",
)
created_by: Optional[str] = Field(description="The name of the creator of the image")
positive_prompt: str = Field(description="The positive prompt parameter")
negative_prompt: str = Field(description="The negative prompt parameter")
width: int = Field(description="The width parameter")
@ -71,10 +72,10 @@ class CoreMetadata(BaseModelExcludeNull):
)
refiner_steps: Optional[int] = Field(default=None, description="The number of steps used for the refiner")
refiner_scheduler: Optional[str] = Field(default=None, description="The scheduler used for the refiner")
refiner_positive_aesthetic_store: Optional[float] = Field(
refiner_positive_aesthetic_score: Optional[float] = Field(
default=None, description="The aesthetic score used for the refiner"
)
refiner_negative_aesthetic_store: Optional[float] = Field(
refiner_negative_aesthetic_score: Optional[float] = Field(
default=None, description="The aesthetic score used for the refiner"
)
refiner_start: Optional[float] = Field(default=None, description="The start value used for refiner denoising")
@ -90,21 +91,19 @@ class ImageMetadata(BaseModelExcludeNull):
graph: Optional[dict] = Field(default=None, description="The graph that created the image")
@invocation_output("metadata_accumulator_output")
class MetadataAccumulatorOutput(BaseInvocationOutput):
"""The output of the MetadataAccumulator node"""
type: Literal["metadata_accumulator_output"] = "metadata_accumulator_output"
metadata: CoreMetadata = OutputField(description="The core metadata for the image")
@title("Metadata Accumulator")
@tags("metadata")
@invocation(
"metadata_accumulator", title="Metadata Accumulator", tags=["metadata"], category="metadata", version="1.0.0"
)
class MetadataAccumulatorInvocation(BaseInvocation):
"""Outputs a Core Metadata Object"""
type: Literal["metadata_accumulator"] = "metadata_accumulator"
generation_mode: str = InputField(
description="The generation mode that output this image",
)
@ -163,11 +162,11 @@ class MetadataAccumulatorInvocation(BaseInvocation):
default=None,
description="The scheduler used for the refiner",
)
refiner_positive_aesthetic_store: Optional[float] = InputField(
refiner_positive_aesthetic_score: Optional[float] = InputField(
default=None,
description="The aesthetic score used for the refiner",
)
refiner_negative_aesthetic_store: Optional[float] = InputField(
refiner_negative_aesthetic_score: Optional[float] = InputField(
default=None,
description="The aesthetic score used for the refiner",
)
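Renaming refiner_positive_aesthetic_store / refiner_negative_aesthetic_store to the ..._score spelling also changes the key that lands in serialized image metadata, so readers of older images may want to accept both spellings. A hedged sketch, assuming the metadata is available as a plain dict (the exact stored schema is not shown here):

from typing import Optional

def read_refiner_positive_aesthetic_score(metadata: dict) -> Optional[float]:
    # Prefer the corrected key; fall back to the pre-rename spelling for older images.
    return metadata.get(
        "refiner_positive_aesthetic_score",
        metadata.get("refiner_positive_aesthetic_store"),
    )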

@ -1,5 +1,5 @@
import copy
from typing import List, Literal, Optional, Union
from typing import List, Optional
from pydantic import BaseModel, Field
@ -8,13 +8,13 @@ from .baseinvocation import (
BaseInvocation,
BaseInvocationOutput,
FieldDescriptions,
InputField,
Input,
InputField,
InvocationContext,
OutputField,
UIType,
tags,
title,
invocation,
invocation_output,
)
@ -33,6 +33,7 @@ class UNetField(BaseModel):
unet: ModelInfo = Field(description="Info to load unet submodel")
scheduler: ModelInfo = Field(description="Info to load scheduler submodel")
loras: List[LoraInfo] = Field(description="Loras to apply on model loading")
seamless_axes: List[str] = Field(default_factory=list, description='Axes ("x" and "y") to which to apply seamless tiling')
class ClipField(BaseModel):
@ -45,13 +46,13 @@ class ClipField(BaseModel):
class VaeField(BaseModel):
# TODO: better naming?
vae: ModelInfo = Field(description="Info to load vae submodel")
seamless_axes: List[str] = Field(default_factory=list, description='Axes ("x" and "y") to which to apply seamless tiling')
@invocation_output("model_loader_output")
class ModelLoaderOutput(BaseInvocationOutput):
"""Model loader output"""
type: Literal["model_loader_output"] = "model_loader_output"
unet: UNetField = OutputField(description=FieldDescriptions.unet, title="UNet")
clip: ClipField = OutputField(description=FieldDescriptions.clip, title="CLIP")
vae: VaeField = OutputField(description=FieldDescriptions.vae, title="VAE")
@ -72,14 +73,10 @@ class LoRAModelField(BaseModel):
base_model: BaseModelType = Field(description="Base model")
@title("Main Model Loader")
@tags("model")
@invocation("main_model_loader", title="Main Model", tags=["model"], category="model", version="1.0.0")
class MainModelLoaderInvocation(BaseInvocation):
"""Loads a main model, outputting its submodels."""
type: Literal["main_model_loader"] = "main_model_loader"
# Inputs
model: MainModelField = InputField(description=FieldDescriptions.main_model, input=Input.Direct)
# TODO: precision?
@ -168,25 +165,18 @@ class MainModelLoaderInvocation(BaseInvocation):
)
@invocation_output("lora_loader_output")
class LoraLoaderOutput(BaseInvocationOutput):
"""Model loader output"""
# fmt: off
type: Literal["lora_loader_output"] = "lora_loader_output"
unet: Optional[UNetField] = OutputField(default=None, description=FieldDescriptions.unet, title="UNet")
clip: Optional[ClipField] = OutputField(default=None, description=FieldDescriptions.clip, title="CLIP")
# fmt: on
@title("LoRA Loader")
@tags("lora", "model")
@invocation("lora_loader", title="LoRA", tags=["model"], category="model", version="1.0.0")
class LoraLoaderInvocation(BaseInvocation):
"""Apply selected lora to unet and text_encoder."""
type: Literal["lora_loader"] = "lora_loader"
# Inputs
lora: LoRAModelField = InputField(description=FieldDescriptions.lora_model, input=Input.Direct, title="LoRA")
weight: float = InputField(default=0.75, description=FieldDescriptions.lora_weight)
unet: Optional[UNetField] = InputField(
@ -245,34 +235,28 @@ class LoraLoaderInvocation(BaseInvocation):
return output
@invocation_output("sdxl_lora_loader_output")
class SDXLLoraLoaderOutput(BaseInvocationOutput):
"""SDXL LoRA Loader Output"""
# fmt: off
type: Literal["sdxl_lora_loader_output"] = "sdxl_lora_loader_output"
unet: Optional[UNetField] = OutputField(default=None, description=FieldDescriptions.unet, title="UNet")
clip: Optional[ClipField] = OutputField(default=None, description=FieldDescriptions.clip, title="CLIP 1")
clip2: Optional[ClipField] = OutputField(default=None, description=FieldDescriptions.clip, title="CLIP 2")
# fmt: on
@title("SDXL LoRA Loader")
@tags("sdxl", "lora", "model")
@invocation("sdxl_lora_loader", title="SDXL LoRA", tags=["lora", "model"], category="model", version="1.0.0")
class SDXLLoraLoaderInvocation(BaseInvocation):
"""Apply selected lora to unet and text_encoder."""
type: Literal["sdxl_lora_loader"] = "sdxl_lora_loader"
lora: LoRAModelField = InputField(description=FieldDescriptions.lora_model, input=Input.Direct, title="LoRA")
weight: float = Field(default=0.75, description=FieldDescriptions.lora_weight)
unet: Optional[UNetField] = Field(
default=None, description=FieldDescriptions.unet, input=Input.Connection, title="UNET"
weight: float = InputField(default=0.75, description=FieldDescriptions.lora_weight)
unet: Optional[UNetField] = InputField(
default=None, description=FieldDescriptions.unet, input=Input.Connection, title="UNet"
)
clip: Optional[ClipField] = Field(
clip: Optional[ClipField] = InputField(
default=None, description=FieldDescriptions.clip, input=Input.Connection, title="CLIP 1"
)
clip2: Optional[ClipField] = Field(
clip2: Optional[ClipField] = InputField(
default=None, description=FieldDescriptions.clip, input=Input.Connection, title="CLIP 2"
)
@ -347,23 +331,17 @@ class VAEModelField(BaseModel):
base_model: BaseModelType = Field(description="Base model")
@invocation_output("vae_loader_output")
class VaeLoaderOutput(BaseInvocationOutput):
"""Model loader output"""
"""VAE output"""
type: Literal["vae_loader_output"] = "vae_loader_output"
# Outputs
vae: VaeField = OutputField(description=FieldDescriptions.vae, title="VAE")
@title("VAE Loader")
@tags("vae", "model")
@invocation("vae_loader", title="VAE", tags=["vae", "model"], category="model", version="1.0.0")
class VaeLoaderInvocation(BaseInvocation):
"""Loads a VAE model, outputting a VaeLoaderOutput"""
type: Literal["vae_loader"] = "vae_loader"
# Inputs
vae_model: VAEModelField = InputField(
description=FieldDescriptions.vae_model, input=Input.Direct, ui_type=UIType.VaeModel, title="VAE"
)
@ -388,3 +366,44 @@ class VaeLoaderInvocation(BaseInvocation):
)
)
)
@invocation_output("seamless_output")
class SeamlessModeOutput(BaseInvocationOutput):
"""Modified Seamless Model output"""
unet: Optional[UNetField] = OutputField(description=FieldDescriptions.unet, title="UNet")
vae: Optional[VaeField] = OutputField(description=FieldDescriptions.vae, title="VAE")
@invocation("seamless", title="Seamless", tags=["seamless", "model"], category="model", version="1.0.0")
class SeamlessModeInvocation(BaseInvocation):
"""Applies the seamless transformation to the Model UNet and VAE."""
unet: Optional[UNetField] = InputField(
default=None, description=FieldDescriptions.unet, input=Input.Connection, title="UNet"
)
vae: Optional[VaeField] = InputField(
default=None, description=FieldDescriptions.vae_model, input=Input.Connection, title="VAE"
)
seamless_y: bool = InputField(default=True, input=Input.Any, description="Specify whether Y axis is seamless")
seamless_x: bool = InputField(default=True, input=Input.Any, description="Specify whether X axis is seamless")
def invoke(self, context: InvocationContext) -> SeamlessModeOutput:
# Conditionally append 'x' and 'y' based on seamless_x and seamless_y
unet = copy.deepcopy(self.unet)
vae = copy.deepcopy(self.vae)
seamless_axes_list = []
if self.seamless_x:
seamless_axes_list.append("x")
if self.seamless_y:
seamless_axes_list.append("y")
if unet is not None:
unet.seamless_axes = seamless_axes_list
if vae is not None:
vae.seamless_axes = seamless_axes_list
return SeamlessModeOutput(unet=unet, vae=vae)
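SeamlessModeInvocation only records the requested axes on the UNet/VAE fields; the effect itself is applied later by the set_seamless(...) context managers added in latent.py above. Seamless, tileable output of this kind is conventionally achieved by switching convolution padding to circular; a minimal torch sketch of that general idea (an illustration of the technique, not the InvokeAI implementation):

import torch
import torch.nn as nn

# With circular padding the convolution wraps at the borders, so opposite edges
# "see" each other and the result can be tiled without visible seams.
conv = nn.Conv2d(3, 3, kernel_size=3, padding=1, padding_mode="circular")
x = torch.rand(1, 3, 8, 8)
y = conv(x)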

@ -1,6 +1,5 @@
# Copyright (c) 2023 Kyle Schouviller (https://github.com/kyle0654) & the InvokeAI Team
from typing import Literal
import torch
from pydantic import validator
@ -16,9 +15,8 @@ from .baseinvocation import (
InputField,
InvocationContext,
OutputField,
UIType,
tags,
title,
invocation,
invocation_output,
)
"""
@ -63,12 +61,10 @@ Nodes
"""
@invocation_output("noise_output")
class NoiseOutput(BaseInvocationOutput):
"""Invocation noise output"""
type: Literal["noise_output"] = "noise_output"
# Inputs
noise: LatentsField = OutputField(default=None, description=FieldDescriptions.noise)
width: int = OutputField(description=FieldDescriptions.width)
height: int = OutputField(description=FieldDescriptions.height)
@ -82,14 +78,10 @@ def build_noise_output(latents_name: str, latents: torch.Tensor, seed: int):
)
@title("Noise")
@tags("latents", "noise")
@invocation("noise", title="Noise", tags=["latents", "noise"], category="latents", version="1.0.0")
class NoiseInvocation(BaseInvocation):
"""Generates latent noise."""
type: Literal["noise"] = "noise"
# Inputs
seed: int = InputField(
ge=0,
le=SEED_MAX,

@ -2,14 +2,13 @@
import inspect
import re
from contextlib import ExitStack
# from contextlib import ExitStack
from typing import List, Literal, Optional, Union
import numpy as np
import torch
from diffusers import ControlNetModel, DPMSolverMultistepScheduler
from diffusers.image_processor import VaeImageProcessor
from diffusers.schedulers import SchedulerMixin as Scheduler
from pydantic import BaseModel, Field, validator
from tqdm import tqdm
@ -32,8 +31,8 @@ from .baseinvocation import (
OutputField,
UIComponent,
UIType,
tags,
title,
invocation,
invocation_output,
)
from .controlnet_image_processors import ControlField
from .latent import SAMPLER_NAME_VALUES, LatentsField, LatentsOutput, build_latents_output, get_scheduler
@ -57,11 +56,8 @@ ORT_TO_NP_TYPE = {
PRECISION_VALUES = Literal[tuple(list(ORT_TO_NP_TYPE.keys()))]
@title("ONNX Prompt (Raw)")
@tags("onnx", "prompt")
@invocation("prompt_onnx", title="ONNX Prompt (Raw)", tags=["prompt", "onnx"], category="conditioning", version="1.0.0")
class ONNXPromptInvocation(BaseInvocation):
type: Literal["prompt_onnx"] = "prompt_onnx"
prompt: str = InputField(default="", description=FieldDescriptions.raw_prompt, ui_component=UIComponent.Textarea)
clip: ClipField = InputField(description=FieldDescriptions.clip, input=Input.Connection)
@ -72,7 +68,7 @@ class ONNXPromptInvocation(BaseInvocation):
text_encoder_info = context.services.model_manager.get_model(
**self.clip.text_encoder.dict(),
)
with tokenizer_info as orig_tokenizer, text_encoder_info as text_encoder, ExitStack() as stack:
with tokenizer_info as orig_tokenizer, text_encoder_info as text_encoder: # , ExitStack() as stack:
loras = [
(context.services.model_manager.get_model(**lora.dict(exclude={"weight"})).context.model, lora.weight)
for lora in self.clip.loras
@ -142,14 +138,16 @@ class ONNXPromptInvocation(BaseInvocation):
# Text to image
@title("ONNX Text to Latents")
@tags("latents", "inference", "txt2img", "onnx")
@invocation(
"t2l_onnx",
title="ONNX Text to Latents",
tags=["latents", "inference", "txt2img", "onnx"],
category="latents",
version="1.0.0",
)
class ONNXTextToLatentsInvocation(BaseInvocation):
"""Generates latents from conditionings."""
type: Literal["t2l_onnx"] = "t2l_onnx"
# Inputs
positive_conditioning: ConditioningField = InputField(
description=FieldDescriptions.positive_cond,
input=Input.Connection,
@ -170,7 +168,7 @@ class ONNXTextToLatentsInvocation(BaseInvocation):
ui_type=UIType.Float,
)
scheduler: SAMPLER_NAME_VALUES = InputField(
default="euler", description=FieldDescriptions.scheduler, input=Input.Direct
default="euler", description=FieldDescriptions.scheduler, input=Input.Direct, ui_type=UIType.Scheduler
)
precision: PRECISION_VALUES = InputField(default="tensor(float16)", description=FieldDescriptions.precision)
unet: UNetField = InputField(
@ -259,7 +257,7 @@ class ONNXTextToLatentsInvocation(BaseInvocation):
unet_info = context.services.model_manager.get_model(**self.unet.unet.dict())
with unet_info as unet, ExitStack() as stack:
with unet_info as unet: # , ExitStack() as stack:
# loras = [(stack.enter_context(context.services.model_manager.get_model(**lora.dict(exclude={"weight"}))), lora.weight) for lora in self.unet.loras]
loras = [
(context.services.model_manager.get_model(**lora.dict(exclude={"weight"})).context.model, lora.weight)
@ -317,14 +315,16 @@ class ONNXTextToLatentsInvocation(BaseInvocation):
# Latent to image
@title("ONNX Latents to Image")
@tags("latents", "image", "vae", "onnx")
@invocation(
"l2i_onnx",
title="ONNX Latents to Image",
tags=["latents", "image", "vae", "onnx"],
category="image",
version="1.0.0",
)
class ONNXLatentsToImageInvocation(BaseInvocation):
"""Generates an image from latents."""
type: Literal["l2i_onnx"] = "l2i_onnx"
# Inputs
latents: LatentsField = InputField(
description=FieldDescriptions.denoised_latents,
input=Input.Connection,
@ -377,6 +377,7 @@ class ONNXLatentsToImageInvocation(BaseInvocation):
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
metadata=self.metadata.dict() if self.metadata else None,
workflow=self.workflow,
)
return ImageOutput(
@ -386,17 +387,14 @@ class ONNXLatentsToImageInvocation(BaseInvocation):
)
@invocation_output("model_loader_output_onnx")
class ONNXModelLoaderOutput(BaseInvocationOutput):
"""Model loader output"""
# fmt: off
type: Literal["model_loader_output_onnx"] = "model_loader_output_onnx"
unet: UNetField = OutputField(default=None, description=FieldDescriptions.unet, title="UNet")
clip: ClipField = OutputField(default=None, description=FieldDescriptions.clip, title="CLIP")
vae_decoder: VaeField = OutputField(default=None, description=FieldDescriptions.vae, title="VAE Decoder")
vae_encoder: VaeField = OutputField(default=None, description=FieldDescriptions.vae, title="VAE Encoder")
# fmt: on
class OnnxModelField(BaseModel):
@ -407,14 +405,10 @@ class OnnxModelField(BaseModel):
model_type: ModelType = Field(description="Model Type")
@title("ONNX Model Loader")
@tags("onnx", "model")
@invocation("onnx_model_loader", title="ONNX Main Model", tags=["onnx", "model"], category="model", version="1.0.0")
class OnnxModelLoaderInvocation(BaseInvocation):
"""Loads a main model, outputting its submodels."""
type: Literal["onnx_model_loader"] = "onnx_model_loader"
# Inputs
model: OnnxModelField = InputField(
description=FieldDescriptions.onnx_main_model, input=Input.Direct, ui_type=UIType.ONNXModel
)

@ -38,24 +38,17 @@ from easing_functions import (
SineEaseInOut,
SineEaseOut,
)
from matplotlib.figure import Figure
from matplotlib.ticker import MaxNLocator
from pydantic import BaseModel, Field
from invokeai.app.invocations.primitives import FloatCollectionOutput
from ...backend.util.logging import InvokeAILogger
from .baseinvocation import BaseInvocation, InputField, InvocationContext, tags, title
from .baseinvocation import BaseInvocation, InputField, InvocationContext, invocation
@title("Float Range")
@tags("math", "range")
@invocation("float_range", title="Float Range", tags=["math", "range"], category="math", version="1.0.0")
class FloatLinearRangeInvocation(BaseInvocation):
"""Creates a range"""
type: Literal["float_range"] = "float_range"
# Inputs
start: float = InputField(default=5, description="The first value of the range")
stop: float = InputField(default=10, description="The last value of the range")
steps: int = InputField(default=30, description="number of values to interpolate over (including start and stop)")
@ -103,14 +96,10 @@ EASING_FUNCTION_KEYS = Literal[tuple(list(EASING_FUNCTIONS_MAP.keys()))]
# actually I think for now could just use CollectionOutput (which is list[Any])
@title("Step Param Easing")
@tags("step", "easing")
@invocation("step_param_easing", title="Step Param Easing", tags=["step", "easing"], category="step", version="1.0.0")
class StepParamEasingInvocation(BaseInvocation):
"""Experimental per-step parameter easing for denoising steps"""
type: Literal["step_param_easing"] = "step_param_easing"
# Inputs
easing: EASING_FUNCTION_KEYS = InputField(default="Linear", description="The easing function to use")
num_steps: int = InputField(default=20, description="number of denoising steps")
start_value: float = InputField(default=0.0, description="easing starting value")

@ -1,10 +1,9 @@
# Copyright (c) 2023 Kyle Schouviller (https://github.com/kyle0654)
from typing import Literal, Optional, Tuple, Union
from anyio import Condition
from typing import Optional, Tuple
from pydantic import BaseModel, Field
import torch
from pydantic import BaseModel, Field
from .baseinvocation import (
BaseInvocation,
@ -15,9 +14,8 @@ from .baseinvocation import (
InvocationContext,
OutputField,
UIComponent,
UIType,
tags,
title,
invocation,
invocation_output,
)
"""
@ -30,49 +28,45 @@ Primitives: Boolean, Integer, Float, String, Image, Latents, Conditioning, Color
# region Boolean
@invocation_output("boolean_output")
class BooleanOutput(BaseInvocationOutput):
"""Base class for nodes that output a single boolean"""
type: Literal["boolean_output"] = "boolean_output"
a: bool = OutputField(description="The output boolean")
value: bool = OutputField(description="The output boolean")
@invocation_output("boolean_collection_output")
class BooleanCollectionOutput(BaseInvocationOutput):
"""Base class for nodes that output a collection of booleans"""
type: Literal["boolean_collection_output"] = "boolean_collection_output"
# Outputs
collection: list[bool] = OutputField(
default_factory=list, description="The output boolean collection", ui_type=UIType.BooleanCollection
description="The output boolean collection",
)
@title("Boolean Primitive")
@tags("primitives", "boolean")
@invocation(
"boolean", title="Boolean Primitive", tags=["primitives", "boolean"], category="primitives", version="1.0.0"
)
class BooleanInvocation(BaseInvocation):
"""A boolean primitive value"""
type: Literal["boolean"] = "boolean"
# Inputs
a: bool = InputField(default=False, description="The boolean value")
value: bool = InputField(default=False, description="The boolean value")
def invoke(self, context: InvocationContext) -> BooleanOutput:
return BooleanOutput(a=self.a)
return BooleanOutput(value=self.value)
@title("Boolean Primitive Collection")
@tags("primitives", "boolean", "collection")
@invocation(
"boolean_collection",
title="Boolean Collection Primitive",
tags=["primitives", "boolean", "collection"],
category="primitives",
version="1.0.0",
)
class BooleanCollectionInvocation(BaseInvocation):
"""A collection of boolean primitive values"""
type: Literal["boolean_collection"] = "boolean_collection"
# Inputs
collection: list[bool] = InputField(
default=False, description="The collection of boolean values", ui_type=UIType.BooleanCollection
)
collection: list[bool] = InputField(default_factory=list, description="The collection of boolean values")
def invoke(self, context: InvocationContext) -> BooleanCollectionOutput:
return BooleanCollectionOutput(collection=self.collection)
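The collection inputs previously declared scalar defaults (default=False, default=0) for list-typed fields, which never matched the annotation; the diff switches them to default_factory=list so an omitted input becomes a well-typed empty list. A small standalone pydantic sketch of the pattern (toy model, not an InvokeAI class):

from pydantic import BaseModel, Field

class Toy(BaseModel):
    items: list[int] = Field(default_factory=list)  # omitted input becomes a fresh empty list

assert Toy().items == []
# Toy(items=0) would fail validation, which is what the old scalar defaults amounted to.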
@ -83,49 +77,45 @@ class BooleanCollectionInvocation(BaseInvocation):
# region Integer
@invocation_output("integer_output")
class IntegerOutput(BaseInvocationOutput):
"""Base class for nodes that output a single integer"""
type: Literal["integer_output"] = "integer_output"
a: int = OutputField(description="The output integer")
value: int = OutputField(description="The output integer")
@invocation_output("integer_collection_output")
class IntegerCollectionOutput(BaseInvocationOutput):
"""Base class for nodes that output a collection of integers"""
type: Literal["integer_collection_output"] = "integer_collection_output"
# Outputs
collection: list[int] = OutputField(
default_factory=list, description="The int collection", ui_type=UIType.IntegerCollection
description="The int collection",
)
@title("Integer Primitive")
@tags("primitives", "integer")
@invocation(
"integer", title="Integer Primitive", tags=["primitives", "integer"], category="primitives", version="1.0.0"
)
class IntegerInvocation(BaseInvocation):
"""An integer primitive value"""
type: Literal["integer"] = "integer"
# Inputs
a: int = InputField(default=0, description="The integer value")
value: int = InputField(default=0, description="The integer value")
def invoke(self, context: InvocationContext) -> IntegerOutput:
return IntegerOutput(a=self.a)
return IntegerOutput(value=self.value)
@title("Integer Primitive Collection")
@tags("primitives", "integer", "collection")
@invocation(
"integer_collection",
title="Integer Collection Primitive",
tags=["primitives", "integer", "collection"],
category="primitives",
version="1.0.0",
)
class IntegerCollectionInvocation(BaseInvocation):
"""A collection of integer primitive values"""
type: Literal["integer_collection"] = "integer_collection"
# Inputs
collection: list[int] = InputField(
default=0, description="The collection of integer values", ui_type=UIType.IntegerCollection
)
collection: list[int] = InputField(default_factory=list, description="The collection of integer values")
def invoke(self, context: InvocationContext) -> IntegerCollectionOutput:
return IntegerCollectionOutput(collection=self.collection)
@ -136,49 +126,43 @@ class IntegerCollectionInvocation(BaseInvocation):
# region Float
@invocation_output("float_output")
class FloatOutput(BaseInvocationOutput):
"""Base class for nodes that output a single float"""
type: Literal["float_output"] = "float_output"
a: float = OutputField(description="The output float")
value: float = OutputField(description="The output float")
@invocation_output("float_collection_output")
class FloatCollectionOutput(BaseInvocationOutput):
"""Base class for nodes that output a collection of floats"""
type: Literal["float_collection_output"] = "float_collection_output"
# Outputs
collection: list[float] = OutputField(
default_factory=list, description="The float collection", ui_type=UIType.FloatCollection
description="The float collection",
)
@title("Float Primitive")
@tags("primitives", "float")
@invocation("float", title="Float Primitive", tags=["primitives", "float"], category="primitives", version="1.0.0")
class FloatInvocation(BaseInvocation):
"""A float primitive value"""
type: Literal["float"] = "float"
# Inputs
param: float = InputField(default=0.0, description="The float value")
value: float = InputField(default=0.0, description="The float value")
def invoke(self, context: InvocationContext) -> FloatOutput:
return FloatOutput(a=self.param)
return FloatOutput(value=self.value)
@title("Float Primitive Collection")
@tags("primitives", "float", "collection")
@invocation(
"float_collection",
title="Float Collection Primitive",
tags=["primitives", "float", "collection"],
category="primitives",
version="1.0.0",
)
class FloatCollectionInvocation(BaseInvocation):
"""A collection of float primitive values"""
type: Literal["float_collection"] = "float_collection"
# Inputs
collection: list[float] = InputField(
default=0, description="The collection of float values", ui_type=UIType.FloatCollection
)
collection: list[float] = InputField(default_factory=list, description="The collection of float values")
def invoke(self, context: InvocationContext) -> FloatCollectionOutput:
return FloatCollectionOutput(collection=self.collection)
@ -189,49 +173,43 @@ class FloatCollectionInvocation(BaseInvocation):
# region String
@invocation_output("string_output")
class StringOutput(BaseInvocationOutput):
"""Base class for nodes that output a single string"""
type: Literal["string_output"] = "string_output"
text: str = OutputField(description="The output string")
value: str = OutputField(description="The output string")
@invocation_output("string_collection_output")
class StringCollectionOutput(BaseInvocationOutput):
"""Base class for nodes that output a collection of strings"""
type: Literal["string_collection_output"] = "string_collection_output"
# Outputs
collection: list[str] = OutputField(
default_factory=list, description="The output strings", ui_type=UIType.StringCollection
description="The output strings",
)
@title("String Primitive")
@tags("primitives", "string")
@invocation("string", title="String Primitive", tags=["primitives", "string"], category="primitives", version="1.0.0")
class StringInvocation(BaseInvocation):
"""A string primitive value"""
type: Literal["string"] = "string"
# Inputs
text: str = InputField(default="", description="The string value", ui_component=UIComponent.Textarea)
value: str = InputField(default="", description="The string value", ui_component=UIComponent.Textarea)
def invoke(self, context: InvocationContext) -> StringOutput:
return StringOutput(text=self.text)
return StringOutput(value=self.value)
@title("String Primitive Collection")
@tags("primitives", "string", "collection")
@invocation(
"string_collection",
title="String Collection Primitive",
tags=["primitives", "string", "collection"],
category="primitives",
version="1.0.0",
)
class StringCollectionInvocation(BaseInvocation):
"""A collection of string primitive values"""
type: Literal["string_collection"] = "string_collection"
# Inputs
collection: list[str] = InputField(
default=0, description="The collection of string values", ui_type=UIType.StringCollection
)
collection: list[str] = InputField(default_factory=list, description="The collection of string values")
def invoke(self, context: InvocationContext) -> StringCollectionOutput:
return StringCollectionOutput(collection=self.collection)
@ -248,35 +226,28 @@ class ImageField(BaseModel):
image_name: str = Field(description="The name of the image")
@invocation_output("image_output")
class ImageOutput(BaseInvocationOutput):
"""Base class for nodes that output a single image"""
type: Literal["image_output"] = "image_output"
image: ImageField = OutputField(description="The output image")
width: int = OutputField(description="The width of the image in pixels")
height: int = OutputField(description="The height of the image in pixels")
@invocation_output("image_collection_output")
class ImageCollectionOutput(BaseInvocationOutput):
"""Base class for nodes that output a collection of images"""
type: Literal["image_collection_output"] = "image_collection_output"
# Outputs
collection: list[ImageField] = OutputField(
default_factory=list, description="The output images", ui_type=UIType.ImageCollection
description="The output images",
)
@title("Image Primitive")
@tags("primitives", "image")
@invocation("image", title="Image Primitive", tags=["primitives", "image"], category="primitives", version="1.0.0")
class ImageInvocation(BaseInvocation):
"""An image primitive value"""
# Metadata
type: Literal["image"] = "image"
# Inputs
image: ImageField = InputField(description="The image to load")
def invoke(self, context: InvocationContext) -> ImageOutput:
@ -289,22 +260,41 @@ class ImageInvocation(BaseInvocation):
)
@title("Image Primitive Collection")
@tags("primitives", "image", "collection")
@invocation(
"image_collection",
title="Image Collection Primitive",
tags=["primitives", "image", "collection"],
category="primitives",
version="1.0.0",
)
class ImageCollectionInvocation(BaseInvocation):
"""A collection of image primitive values"""
type: Literal["image_collection"] = "image_collection"
# Inputs
collection: list[ImageField] = InputField(
default=0, description="The collection of image values", ui_type=UIType.ImageCollection
)
collection: list[ImageField] = InputField(description="The collection of image values")
def invoke(self, context: InvocationContext) -> ImageCollectionOutput:
return ImageCollectionOutput(collection=self.collection)
# endregion
# region DenoiseMask
class DenoiseMaskField(BaseModel):
"""An inpaint mask field"""
mask_name: str = Field(description="The name of the mask image")
masked_latents_name: Optional[str] = Field(description="The name of the masked image latents")
@invocation_output("denoise_mask_output")
class DenoiseMaskOutput(BaseInvocationOutput):
"""Base class for nodes that output a single image"""
denoise_mask: DenoiseMaskField = OutputField(description="Mask for denoise model run")
# endregion
# region Latents
@ -317,11 +307,10 @@ class LatentsField(BaseModel):
seed: Optional[int] = Field(default=None, description="Seed used to generate this latents")
@invocation_output("latents_output")
class LatentsOutput(BaseInvocationOutput):
"""Base class for nodes that output a single latents tensor"""
type: Literal["latents_output"] = "latents_output"
latents: LatentsField = OutputField(
description=FieldDescriptions.latents,
)
@ -329,26 +318,21 @@ class LatentsOutput(BaseInvocationOutput):
height: int = OutputField(description=FieldDescriptions.height)
@invocation_output("latents_collection_output")
class LatentsCollectionOutput(BaseInvocationOutput):
"""Base class for nodes that output a collection of latents tensors"""
type: Literal["latents_collection_output"] = "latents_collection_output"
collection: list[LatentsField] = OutputField(
default_factory=list,
description=FieldDescriptions.latents,
ui_type=UIType.LatentsCollection,
)
@title("Latents Primitive")
@tags("primitives", "latents")
@invocation(
"latents", title="Latents Primitive", tags=["primitives", "latents"], category="primitives", version="1.0.0"
)
class LatentsInvocation(BaseInvocation):
"""A latents tensor primitive value"""
type: Literal["latents"] = "latents"
# Inputs
latents: LatentsField = InputField(description="The latents tensor", input=Input.Connection)
def invoke(self, context: InvocationContext) -> LatentsOutput:
@ -357,16 +341,18 @@ class LatentsInvocation(BaseInvocation):
return build_latents_output(self.latents.latents_name, latents)
@title("Latents Primitive Collection")
@tags("primitives", "latents", "collection")
@invocation(
"latents_collection",
title="Latents Collection Primitive",
tags=["primitives", "latents", "collection"],
category="primitives",
version="1.0.0",
)
class LatentsCollectionInvocation(BaseInvocation):
"""A collection of latents tensor primitive values"""
type: Literal["latents_collection"] = "latents_collection"
# Inputs
collection: list[LatentsField] = InputField(
default=0, description="The collection of latents tensors", ui_type=UIType.LatentsCollection
description="The collection of latents tensors",
)
def invoke(self, context: InvocationContext) -> LatentsCollectionOutput:
@ -398,32 +384,26 @@ class ColorField(BaseModel):
return (self.r, self.g, self.b, self.a)
@invocation_output("color_output")
class ColorOutput(BaseInvocationOutput):
"""Base class for nodes that output a single color"""
type: Literal["color_output"] = "color_output"
color: ColorField = OutputField(description="The output color")
@invocation_output("color_collection_output")
class ColorCollectionOutput(BaseInvocationOutput):
"""Base class for nodes that output a collection of colors"""
type: Literal["color_collection_output"] = "color_collection_output"
# Outputs
collection: list[ColorField] = OutputField(
default_factory=list, description="The output colors", ui_type=UIType.ColorCollection
description="The output colors",
)
@title("Color Primitive")
@tags("primitives", "color")
@invocation("color", title="Color Primitive", tags=["primitives", "color"], category="primitives", version="1.0.0")
class ColorInvocation(BaseInvocation):
"""A color primitive value"""
type: Literal["color"] = "color"
# Inputs
color: ColorField = InputField(default=ColorField(r=0, g=0, b=0, a=255), description="The color value")
def invoke(self, context: InvocationContext) -> ColorOutput:
@ -441,50 +421,51 @@ class ConditioningField(BaseModel):
conditioning_name: str = Field(description="The name of conditioning tensor")
@invocation_output("conditioning_output")
class ConditioningOutput(BaseInvocationOutput):
"""Base class for nodes that output a single conditioning tensor"""
type: Literal["conditioning_output"] = "conditioning_output"
conditioning: ConditioningField = OutputField(description=FieldDescriptions.cond)
@invocation_output("conditioning_collection_output")
class ConditioningCollectionOutput(BaseInvocationOutput):
"""Base class for nodes that output a collection of conditioning tensors"""
type: Literal["conditioning_collection_output"] = "conditioning_collection_output"
# Outputs
collection: list[ConditioningField] = OutputField(
default_factory=list,
description="The output conditioning tensors",
ui_type=UIType.ConditioningCollection,
)
@title("Conditioning Primitive")
@tags("primitives", "conditioning")
@invocation(
"conditioning",
title="Conditioning Primitive",
tags=["primitives", "conditioning"],
category="primitives",
version="1.0.0",
)
class ConditioningInvocation(BaseInvocation):
"""A conditioning tensor primitive value"""
type: Literal["conditioning"] = "conditioning"
conditioning: ConditioningField = InputField(description=FieldDescriptions.cond, input=Input.Connection)
def invoke(self, context: InvocationContext) -> ConditioningOutput:
return ConditioningOutput(conditioning=self.conditioning)
@title("Conditioning Primitive Collection")
@tags("primitives", "conditioning", "collection")
@invocation(
"conditioning_collection",
title="Conditioning Collection Primitive",
tags=["primitives", "conditioning", "collection"],
category="primitives",
version="1.0.0",
)
class ConditioningCollectionInvocation(BaseInvocation):
"""A collection of conditioning tensor primitive values"""
type: Literal["conditioning_collection"] = "conditioning_collection"
# Inputs
collection: list[ConditioningField] = InputField(
default=0, description="The collection of conditioning tensors", ui_type=UIType.ConditioningCollection
default_factory=list,
description="The collection of conditioning tensors",
)
def invoke(self, context: InvocationContext) -> ConditioningCollectionOutput:

View File

@ -1,5 +1,5 @@
from os.path import exists
from typing import Literal, Optional, Union
from typing import Optional, Union
import numpy as np
from dynamicprompts.generators import CombinatorialPromptGenerator, RandomPromptGenerator
@ -7,17 +7,13 @@ from pydantic import validator
from invokeai.app.invocations.primitives import StringCollectionOutput
from .baseinvocation import BaseInvocation, InputField, InvocationContext, UIComponent, UIType, tags, title
from .baseinvocation import BaseInvocation, InputField, InvocationContext, UIComponent, invocation
@title("Dynamic Prompt")
@tags("prompt", "collection")
@invocation("dynamic_prompt", title="Dynamic Prompt", tags=["prompt", "collection"], category="prompt", version="1.0.0")
class DynamicPromptInvocation(BaseInvocation):
"""Parses a prompt using adieyal/dynamicprompts' random or combinatorial generator"""
type: Literal["dynamic_prompt"] = "dynamic_prompt"
# Inputs
prompt: str = InputField(description="The prompt to parse with dynamicprompts", ui_component=UIComponent.Textarea)
max_prompts: int = InputField(default=1, description="The number of prompts to generate")
combinatorial: bool = InputField(default=False, description="Whether to use the combinatorial generator")
@ -33,15 +29,11 @@ class DynamicPromptInvocation(BaseInvocation):
return StringCollectionOutput(collection=prompts)
@title("Prompts from File")
@tags("prompt", "file")
@invocation("prompt_from_file", title="Prompts from File", tags=["prompt", "file"], category="prompt", version="1.0.0")
class PromptsFromFileInvocation(BaseInvocation):
"""Loads prompts from a text file"""
type: Literal["prompt_from_file"] = "prompt_from_file"
# Inputs
file_path: str = InputField(description="Path to prompt text file", ui_type=UIType.FilePath)
file_path: str = InputField(description="Path to prompt text file")
pre_prompt: Optional[str] = InputField(
default=None, description="String to prepend to each prompt", ui_component=UIComponent.Textarea
)

View File

@ -1,5 +1,3 @@
from typing import Literal
from ...backend.model_management import ModelType, SubModelType
from .baseinvocation import (
BaseInvocation,
@ -10,41 +8,35 @@ from .baseinvocation import (
InvocationContext,
OutputField,
UIType,
tags,
title,
invocation,
invocation_output,
)
from .model import ClipField, MainModelField, ModelInfo, UNetField, VaeField
@invocation_output("sdxl_model_loader_output")
class SDXLModelLoaderOutput(BaseInvocationOutput):
"""SDXL base model loader output"""
type: Literal["sdxl_model_loader_output"] = "sdxl_model_loader_output"
unet: UNetField = OutputField(description=FieldDescriptions.unet, title="UNet")
clip: ClipField = OutputField(description=FieldDescriptions.clip, title="CLIP 1")
clip2: ClipField = OutputField(description=FieldDescriptions.clip, title="CLIP 2")
vae: VaeField = OutputField(description=FieldDescriptions.vae, title="VAE")
@invocation_output("sdxl_refiner_model_loader_output")
class SDXLRefinerModelLoaderOutput(BaseInvocationOutput):
"""SDXL refiner model loader output"""
type: Literal["sdxl_refiner_model_loader_output"] = "sdxl_refiner_model_loader_output"
unet: UNetField = OutputField(description=FieldDescriptions.unet, title="UNet")
clip2: ClipField = OutputField(description=FieldDescriptions.clip, title="CLIP 2")
vae: VaeField = OutputField(description=FieldDescriptions.vae, title="VAE")
@title("SDXL Main Model Loader")
@tags("model", "sdxl")
@invocation("sdxl_model_loader", title="SDXL Main Model", tags=["model", "sdxl"], category="model", version="1.0.0")
class SDXLModelLoaderInvocation(BaseInvocation):
"""Loads an sdxl base model, outputting its submodels."""
type: Literal["sdxl_model_loader"] = "sdxl_model_loader"
# Inputs
model: MainModelField = InputField(
description=FieldDescriptions.sdxl_main_model, input=Input.Direct, ui_type=UIType.SDXLMainModel
)
@ -122,14 +114,16 @@ class SDXLModelLoaderInvocation(BaseInvocation):
)
@title("SDXL Refiner Model Loader")
@tags("model", "sdxl", "refiner")
@invocation(
"sdxl_refiner_model_loader",
title="SDXL Refiner Model",
tags=["model", "sdxl", "refiner"],
category="model",
version="1.0.0",
)
class SDXLRefinerModelLoaderInvocation(BaseInvocation):
"""Loads an sdxl refiner model, outputting its submodels."""
type: Literal["sdxl_refiner_model_loader"] = "sdxl_refiner_model_loader"
# Inputs
model: MainModelField = InputField(
description=FieldDescriptions.sdxl_refiner_model,
input=Input.Direct,

View File

@ -1,6 +1,6 @@
# Copyright (c) 2022 Kyle Schouviller (https://github.com/kyle0654) & the InvokeAI Team
from pathlib import Path
from typing import Literal, Union
from typing import Literal
import cv2 as cv
import numpy as np
@ -11,7 +11,7 @@ from invokeai.app.invocations.primitives import ImageField, ImageOutput
from invokeai.app.models.image import ImageCategory, ResourceOrigin
from .baseinvocation import BaseInvocation, InputField, InvocationContext, title, tags
from .baseinvocation import BaseInvocation, InputField, InvocationContext, invocation
# TODO: Populate this from disk?
# TODO: Use model manager to load?
@ -23,14 +23,10 @@ ESRGAN_MODELS = Literal[
]
@title("Upscale (RealESRGAN)")
@tags("esrgan", "upscale")
@invocation("esrgan", title="Upscale (RealESRGAN)", tags=["esrgan", "upscale"], category="esrgan", version="1.0.0")
class ESRGANInvocation(BaseInvocation):
"""Upscales an image using RealESRGAN."""
type: Literal["esrgan"] = "esrgan"
# Inputs
image: ImageField = InputField(description="The input image")
model_name: ESRGAN_MODELS = InputField(default="RealESRGAN_x4plus.pth", description="The Real-ESRGAN model to use")
@ -110,6 +106,7 @@ class ESRGANInvocation(BaseInvocation):
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
workflow=self.workflow,
)
return ImageOutput(

View File

@ -1,18 +1,14 @@
from abc import ABC, abstractmethod
from logging import Logger
from typing import List, Union, Optional
from typing import Optional
from invokeai.app.services.board_image_record_storage import BoardImageRecordStorageBase
from invokeai.app.services.board_record_storage import (
BoardRecord,
BoardRecordStorageBase,
)
from invokeai.app.services.image_record_storage import (
ImageRecordStorageBase,
OffsetPaginatedResults,
)
from invokeai.app.services.image_record_storage import ImageRecordStorageBase
from invokeai.app.services.models.board_record import BoardDTO
from invokeai.app.services.models.image_record import ImageDTO, image_record_to_dto
from invokeai.app.services.urls import UrlServiceBase

View File

@ -1,15 +1,14 @@
from abc import ABC, abstractmethod
from typing import Optional, cast
import sqlite3
import threading
from typing import Optional, Union
import uuid
from abc import ABC, abstractmethod
from typing import Optional, Union, cast
import sqlite3
from invokeai.app.services.image_record_storage import OffsetPaginatedResults
from invokeai.app.services.models.board_record import (
BoardRecord,
deserialize_board_record,
)
from pydantic import BaseModel, Field, Extra
@ -230,7 +229,7 @@ class SqliteBoardRecordStorage(BoardRecordStorageBase):
# Change the name of a board
if changes.board_name is not None:
self._cursor.execute(
f"""--sql
"""--sql
UPDATE boards
SET board_name = ?
WHERE board_id = ?;
@ -241,7 +240,7 @@ class SqliteBoardRecordStorage(BoardRecordStorageBase):
# Change the cover image of a board
if changes.cover_image_name is not None:
self._cursor.execute(
f"""--sql
"""--sql
UPDATE boards
SET cover_image_name = ?
WHERE board_id = ?;

View File

@ -0,0 +1,9 @@
"""
Init file for InvokeAI configure package
"""
from .invokeai_config import ( # noqa F401
InvokeAIAppConfig,
get_invokeai_config,
)
from .base import PagingArgumentParser # noqa F401

View File

@ -0,0 +1,239 @@
# Copyright (c) 2023 Lincoln Stein (https://github.com/lstein) and the InvokeAI Development Team
"""
Base class for the InvokeAI configuration system.
It defines a type of pydantic BaseSettings object that
is able to read and write from an omegaconf-based config file,
with overriding of settings from environment variables and/or
the command line.
"""
from __future__ import annotations
import argparse
import os
import pydoc
import sys
from argparse import ArgumentParser
from omegaconf import OmegaConf, DictConfig, ListConfig
from pathlib import Path
from pydantic import BaseSettings
from typing import ClassVar, Dict, List, Literal, Union, get_origin, get_type_hints, get_args
class PagingArgumentParser(argparse.ArgumentParser):
"""
A custom ArgumentParser that uses pydoc to page its output.
It also supports reading defaults from an init file.
"""
def print_help(self, file=None):
text = self.format_help()
pydoc.pager(text)
class InvokeAISettings(BaseSettings):
"""
Runtime configuration settings in which default values are
read from an omegaconf .yaml file.
"""
initconf: ClassVar[DictConfig] = None
argparse_groups: ClassVar[Dict] = {}
def parse_args(self, argv: list = sys.argv[1:]):
parser = self.get_parser()
opt = parser.parse_args(argv)
for name in self.__fields__:
if name not in self._excluded():
value = getattr(opt, name)
if isinstance(value, ListConfig):
value = list(value)
elif isinstance(value, DictConfig):
value = dict(value)
setattr(self, name, value)
def to_yaml(self) -> str:
"""
Return a YAML string representing our settings. This can be used
as the contents of `invokeai.yaml` to restore settings later.
"""
cls = self.__class__
type = get_args(get_type_hints(cls)["type"])[0]
field_dict = dict({type: dict()})
for name, field in self.__fields__.items():
if name in cls._excluded_from_yaml():
continue
category = field.field_info.extra.get("category") or "Uncategorized"
value = getattr(self, name)
if category not in field_dict[type]:
field_dict[type][category] = dict()
# keep paths as strings to make it easier to read
field_dict[type][category][name] = str(value) if isinstance(value, Path) else value
conf = OmegaConf.create(field_dict)
return OmegaConf.to_yaml(conf)
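Since to_yaml() groups the live settings by category, a caller can persist the current configuration back to invokeai.yaml. A minimal sketch, assuming the config package is importable as invokeai.app.services.config (the new package __init__ appears later in this diff):

from pathlib import Path

from invokeai.app.services.config import InvokeAIAppConfig

conf = InvokeAIAppConfig.get_config()
conf.parse_args([])  # pass the real argv in an application; [] keeps the defaults

# Write the effective settings so they can be restored on the next run.
Path("invokeai.yaml").write_text(conf.to_yaml(), encoding="utf-8")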
@classmethod
def add_parser_arguments(cls, parser):
if "type" in get_type_hints(cls):
settings_stanza = get_args(get_type_hints(cls)["type"])[0]
else:
settings_stanza = "Uncategorized"
env_prefix = cls.Config.env_prefix if hasattr(cls.Config, "env_prefix") else settings_stanza.upper()
initconf = (
cls.initconf.get(settings_stanza)
if cls.initconf and settings_stanza in cls.initconf
else OmegaConf.create()
)
# create an upcase version of the environment in
# order to achieve case-insensitive environment
# variables (the way Windows does)
upcase_environ = dict()
for key, value in os.environ.items():
upcase_environ[key.upper()] = value
fields = cls.__fields__
cls.argparse_groups = {}
for name, field in fields.items():
if name not in cls._excluded():
current_default = field.default
category = field.field_info.extra.get("category", "Uncategorized")
env_name = env_prefix + "_" + name
if category in initconf and name in initconf.get(category):
field.default = initconf.get(category).get(name)
if env_name.upper() in upcase_environ:
field.default = upcase_environ[env_name.upper()]
cls.add_field_argument(parser, name, field)
field.default = current_default
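Because add_parser_arguments() folds the init file and the upper-cased environment into each field's default, a setting can be overridden without touching invokeai.yaml or the command line. A hedged sketch, assuming the INVOKEAI_<field> variable form described in the module docstring later in this diff:

import os

from invokeai.app.services.config import InvokeAIAppConfig

# Case-insensitive match: INVOKEAI_PORT and INVOKEAI_port are equivalent.
os.environ["INVOKEAI_PORT"] = "9191"

conf = InvokeAIAppConfig.get_config()
conf.parse_args([])
print(conf.port)  # reflects the environment override rather than the built-in default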
@classmethod
def cmd_name(self, command_field: str = "type") -> str:
hints = get_type_hints(self)
if command_field in hints:
return get_args(hints[command_field])[0]
else:
return "Uncategorized"
@classmethod
def get_parser(cls) -> ArgumentParser:
parser = PagingArgumentParser(
prog=cls.cmd_name(),
description=cls.__doc__,
)
cls.add_parser_arguments(parser)
return parser
@classmethod
def add_subparser(cls, parser: argparse.ArgumentParser):
parser.add_parser(cls.cmd_name(), help=cls.__doc__)
@classmethod
def _excluded(self) -> List[str]:
# internal fields that shouldn't be exposed as command line options
return ["type", "initconf"]
@classmethod
def _excluded_from_yaml(self) -> List[str]:
# combination of deprecated parameters and internal ones that shouldn't be exposed as invokeai.yaml options
return [
"type",
"initconf",
"version",
"from_file",
"model",
"root",
"max_cache_size",
"max_vram_cache_size",
"always_use_cpu",
"free_gpu_mem",
"xformers_enabled",
"tiled_decode",
]
class Config:
env_file_encoding = "utf-8"
arbitrary_types_allowed = True
case_sensitive = True
@classmethod
def add_field_argument(cls, command_parser, name: str, field, default_override=None):
field_type = get_type_hints(cls).get(name)
default = (
default_override
if default_override is not None
else field.default
if field.default_factory is None
else field.default_factory()
)
if category := field.field_info.extra.get("category"):
if category not in cls.argparse_groups:
cls.argparse_groups[category] = command_parser.add_argument_group(category)
argparse_group = cls.argparse_groups[category]
else:
argparse_group = command_parser
if get_origin(field_type) == Literal:
allowed_values = get_args(field.type_)
allowed_types = set()
for val in allowed_values:
allowed_types.add(type(val))
allowed_types_list = list(allowed_types)
field_type = allowed_types_list[0] if len(allowed_types) == 1 else int_or_float_or_str
argparse_group.add_argument(
f"--{name}",
dest=name,
type=field_type,
default=default,
choices=allowed_values,
help=field.field_info.description,
)
elif get_origin(field_type) == Union:
argparse_group.add_argument(
f"--{name}",
dest=name,
type=int_or_float_or_str,
default=default,
help=field.field_info.description,
)
elif get_origin(field_type) == list:
argparse_group.add_argument(
f"--{name}",
dest=name,
nargs="*",
type=field.type_,
default=default,
action=argparse.BooleanOptionalAction if field.type_ == bool else "store",
help=field.field_info.description,
)
else:
argparse_group.add_argument(
f"--{name}",
dest=name,
type=field.type_,
default=default,
action=argparse.BooleanOptionalAction if field.type_ == bool else "store",
help=field.field_info.description,
)
def int_or_float_or_str(value: str) -> Union[int, float, str]:
"""
Workaround for argparse type checking.
"""
try:
return int(value)
except Exception as e: # noqa F841
pass
try:
return float(value)
except Exception as e: # noqa F841
pass
return str(value)
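The behaviour of the workaround for a few representative command-line strings:

assert int_or_float_or_str("8") == 8          # parses as int first
assert int_or_float_or_str("0.5") == 0.5      # falls through to float
assert int_or_float_or_str("auto") == "auto"  # anything else stays a string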

View File

@ -10,37 +10,49 @@ categories returned by `invokeai --help`. The file looks like this:
[file: invokeai.yaml]
InvokeAI:
Paths:
root: /home/lstein/invokeai-main
conf_path: configs/models.yaml
legacy_conf_dir: configs/stable-diffusion
outdir: outputs
autoimport_dir: null
Models:
model: stable-diffusion-1.5
embeddings: true
Memory/Performance:
xformers_enabled: false
sequential_guidance: false
precision: float16
max_cache_size: 6
max_vram_cache_size: 0.5
always_use_cpu: false
free_gpu_mem: false
Features:
esrgan: true
patchmatch: true
internet_available: true
log_tokenization: false
Web Server:
host: 127.0.0.1
port: 8081
port: 9090
allow_origins: []
allow_credentials: true
allow_methods:
- '*'
allow_headers:
- '*'
Features:
esrgan: true
internet_available: true
log_tokenization: false
patchmatch: true
ignore_missing_core_models: false
Paths:
autoimport_dir: autoimport
lora_dir: null
embedding_dir: null
controlnet_dir: null
conf_path: configs/models.yaml
models_dir: models
legacy_conf_dir: configs/stable-diffusion
db_dir: databases
outdir: /home/lstein/invokeai-main/outputs
use_memory_db: false
Logging:
log_handlers:
- console
log_format: plain
log_level: info
Model Cache:
ram: 13.5
vram: 0.25
lazy_offload: true
Device:
device: auto
precision: auto
Generation:
sequential_guidance: false
attention_type: xformers
attention_slice_size: auto
force_tiled_decode: false
The default name of the configuration file is `invokeai.yaml`, located
in INVOKEAI_ROOT. You can supersede this by providing any
@ -54,24 +66,23 @@ InvokeAIAppConfig.parse_args() will parse the contents of `sys.argv`
at initialization time. You may pass a list of strings in the optional
`argv` argument to use instead of the system argv:
conf.parse_args(argv=['--xformers_enabled'])
conf.parse_args(argv=['--log_tokenization'])
It is also possible to set a value at initialization time. However, if
you call parse_args() it may be overwritten.
conf = InvokeAIAppConfig(xformers_enabled=True)
conf.parse_args(argv=['--no-xformers'])
conf.xformers_enabled
conf = InvokeAIAppConfig(log_tokenization=True)
conf.parse_args(argv=['--no-log_tokenization'])
conf.log_tokenization
# False
To avoid this, use `get_config()` to retrieve the application-wide
configuration object. This will retain any properties set at object
creation time:
conf = InvokeAIAppConfig.get_config(xformers_enabled=True)
conf.parse_args(argv=['--no-xformers'])
conf.xformers_enabled
conf = InvokeAIAppConfig.get_config(log_tokenization=True)
conf.parse_args(argv=['--no-log_tokenization'])
conf.log_tokenization
# True
Any setting can be overwritten by setting an environment variable of
@ -93,7 +104,7 @@ Typical usage at the top level file:
# get global configuration and print its cache size
conf = InvokeAIAppConfig.get_config()
conf.parse_args()
print(conf.max_cache_size)
print(conf.ram_cache_size)
Typical usage in a backend module:
@ -101,8 +112,7 @@ Typical usage in a backend module:
# get global configuration and print its cache size value
conf = InvokeAIAppConfig.get_config()
print(conf.max_cache_size)
print(conf.ram_cache_size)
Computed properties:
@ -159,15 +169,15 @@ two configs are kept in separate sections of the config file:
"""
from __future__ import annotations
import argparse
import pydoc
import os
import sys
from argparse import ArgumentParser
from omegaconf import OmegaConf, DictConfig, ListConfig
from pathlib import Path
from pydantic import BaseSettings, Field, parse_obj_as
from typing import ClassVar, Dict, List, Set, Literal, Union, get_origin, get_type_hints, get_args
from typing import ClassVar, Dict, List, Literal, Union, get_type_hints, Optional
from omegaconf import OmegaConf, DictConfig
from pydantic import Field, parse_obj_as
from .base import InvokeAISettings
INIT_FILE = Path("invokeai.yaml")
DB_FILE = Path("invokeai.db")
@ -175,195 +185,6 @@ LEGACY_INIT_FILE = Path("invokeai.init")
DEFAULT_MAX_VRAM = 0.5
class InvokeAISettings(BaseSettings):
"""
Runtime configuration settings in which default values are
read from an omegaconf .yaml file.
"""
initconf: ClassVar[DictConfig] = None
argparse_groups: ClassVar[Dict] = {}
def parse_args(self, argv: list = sys.argv[1:]):
parser = self.get_parser()
opt = parser.parse_args(argv)
for name in self.__fields__:
if name not in self._excluded():
value = getattr(opt, name)
if isinstance(value, ListConfig):
value = list(value)
elif isinstance(value, DictConfig):
value = dict(value)
setattr(self, name, value)
def to_yaml(self) -> str:
"""
Return a YAML string representing our settings. This can be used
as the contents of `invokeai.yaml` to restore settings later.
"""
cls = self.__class__
type = get_args(get_type_hints(cls)["type"])[0]
field_dict = dict({type: dict()})
for name, field in self.__fields__.items():
if name in cls._excluded_from_yaml():
continue
category = field.field_info.extra.get("category") or "Uncategorized"
value = getattr(self, name)
if category not in field_dict[type]:
field_dict[type][category] = dict()
# keep paths as strings to make it easier to read
field_dict[type][category][name] = str(value) if isinstance(value, Path) else value
conf = OmegaConf.create(field_dict)
return OmegaConf.to_yaml(conf)
@classmethod
def add_parser_arguments(cls, parser):
if "type" in get_type_hints(cls):
settings_stanza = get_args(get_type_hints(cls)["type"])[0]
else:
settings_stanza = "Uncategorized"
env_prefix = cls.Config.env_prefix if hasattr(cls.Config, "env_prefix") else settings_stanza.upper()
initconf = (
cls.initconf.get(settings_stanza)
if cls.initconf and settings_stanza in cls.initconf
else OmegaConf.create()
)
# create an upcase version of the environment in
# order to achieve case-insensitive environment
# variables (the way Windows does)
upcase_environ = dict()
for key, value in os.environ.items():
upcase_environ[key.upper()] = value
fields = cls.__fields__
cls.argparse_groups = {}
for name, field in fields.items():
if name not in cls._excluded():
current_default = field.default
category = field.field_info.extra.get("category", "Uncategorized")
env_name = env_prefix + "_" + name
if category in initconf and name in initconf.get(category):
field.default = initconf.get(category).get(name)
if env_name.upper() in upcase_environ:
field.default = upcase_environ[env_name.upper()]
cls.add_field_argument(parser, name, field)
field.default = current_default
@classmethod
def cmd_name(self, command_field: str = "type") -> str:
hints = get_type_hints(self)
if command_field in hints:
return get_args(hints[command_field])[0]
else:
return "Uncategorized"
@classmethod
def get_parser(cls) -> ArgumentParser:
parser = PagingArgumentParser(
prog=cls.cmd_name(),
description=cls.__doc__,
)
cls.add_parser_arguments(parser)
return parser
@classmethod
def add_subparser(cls, parser: argparse.ArgumentParser):
parser.add_parser(cls.cmd_name(), help=cls.__doc__)
@classmethod
def _excluded(self) -> List[str]:
# internal fields that shouldn't be exposed as command line options
return ["type", "initconf"]
@classmethod
def _excluded_from_yaml(self) -> List[str]:
# combination of deprecated parameters and internal ones that shouldn't be exposed as invokeai.yaml options
return [
"type",
"initconf",
"version",
"from_file",
"model",
"root",
]
class Config:
env_file_encoding = "utf-8"
arbitrary_types_allowed = True
case_sensitive = True
@classmethod
def add_field_argument(cls, command_parser, name: str, field, default_override=None):
field_type = get_type_hints(cls).get(name)
default = (
default_override
if default_override is not None
else field.default
if field.default_factory is None
else field.default_factory()
)
if category := field.field_info.extra.get("category"):
if category not in cls.argparse_groups:
cls.argparse_groups[category] = command_parser.add_argument_group(category)
argparse_group = cls.argparse_groups[category]
else:
argparse_group = command_parser
if get_origin(field_type) == Literal:
allowed_values = get_args(field.type_)
allowed_types = set()
for val in allowed_values:
allowed_types.add(type(val))
allowed_types_list = list(allowed_types)
field_type = allowed_types_list[0] if len(allowed_types) == 1 else Union[allowed_types_list] # type: ignore
argparse_group.add_argument(
f"--{name}",
dest=name,
type=field_type,
default=default,
choices=allowed_values,
help=field.field_info.description,
)
elif get_origin(field_type) == list:
argparse_group.add_argument(
f"--{name}",
dest=name,
nargs="*",
type=field.type_,
default=default,
action=argparse.BooleanOptionalAction if field.type_ == bool else "store",
help=field.field_info.description,
)
else:
argparse_group.add_argument(
f"--{name}",
dest=name,
type=field.type_,
default=default,
action=argparse.BooleanOptionalAction if field.type_ == bool else "store",
help=field.field_info.description,
)
def _find_root() -> Path:
venv = Path(os.environ.get("VIRTUAL_ENV") or ".")
if os.environ.get("INVOKEAI_ROOT"):
root = Path(os.environ["INVOKEAI_ROOT"])
elif any([(venv.parent / x).exists() for x in [INIT_FILE, LEGACY_INIT_FILE]]):
root = (venv.parent).resolve()
else:
root = Path("~/invokeai").expanduser().resolve()
return root
class InvokeAIAppConfig(InvokeAISettings):
"""
Generate images using Stable Diffusion. Use "invokeai" to launch
@ -378,6 +199,8 @@ class InvokeAIAppConfig(InvokeAISettings):
# fmt: off
type: Literal["InvokeAI"] = "InvokeAI"
# WEB
host : str = Field(default="127.0.0.1", description="IP address to bind to", category='Web Server')
port : int = Field(default=9090, description="Port to bind to", category='Web Server')
allow_origins : List[str] = Field(default=[], description="Allowed CORS origins", category='Web Server')
@ -385,20 +208,14 @@ class InvokeAIAppConfig(InvokeAISettings):
allow_methods : List[str] = Field(default=["*"], description="Methods allowed for CORS", category='Web Server')
allow_headers : List[str] = Field(default=["*"], description="Headers allowed for CORS", category='Web Server')
# FEATURES
esrgan : bool = Field(default=True, description="Enable/disable upscaling code", category='Features')
internet_available : bool = Field(default=True, description="If true, attempt to download models on the fly; otherwise only use local models", category='Features')
log_tokenization : bool = Field(default=False, description="Enable logging of parsed prompt tokens.", category='Features')
patchmatch : bool = Field(default=True, description="Enable/disable patchmatch inpaint code", category='Features')
ignore_missing_core_models : bool = Field(default=False, description='Ignore missing models in models/core/convert', category='Features')
always_use_cpu : bool = Field(default=False, description="If true, use the CPU for rendering even if a GPU is available.", category='Memory/Performance')
free_gpu_mem : bool = Field(default=False, description="If true, purge model from GPU after each generation.", category='Memory/Performance')
max_cache_size : float = Field(default=6.0, gt=0, description="Maximum memory amount used by model cache for rapid switching", category='Memory/Performance')
max_vram_cache_size : float = Field(default=2.75, ge=0, description="Amount of VRAM reserved for model storage", category='Memory/Performance')
precision : Literal[tuple(['auto','float16','float32','autocast'])] = Field(default='auto',description='Floating point precision', category='Memory/Performance')
sequential_guidance : bool = Field(default=False, description="Whether to calculate guidance in serial instead of in parallel, lowering memory requirements", category='Memory/Performance')
xformers_enabled : bool = Field(default=True, description="Enable/disable memory-efficient attention", category='Memory/Performance')
tiled_decode : bool = Field(default=False, description="Whether to enable tiled VAE decode (reduces memory consumption with some performance penalty)", category='Memory/Performance')
# PATHS
root : Path = Field(default=None, description='InvokeAI runtime root directory', category='Paths')
autoimport_dir : Path = Field(default='autoimport', description='Path to a directory of models files to be imported on startup.', category='Paths')
lora_dir : Path = Field(default=None, description='Path to a directory of LoRA/LyCORIS models to be imported on startup.', category='Paths')
@ -409,16 +226,43 @@ class InvokeAIAppConfig(InvokeAISettings):
legacy_conf_dir : Path = Field(default='configs/stable-diffusion', description='Path to directory of legacy checkpoint config files', category='Paths')
db_dir : Path = Field(default='databases', description='Path to InvokeAI databases directory', category='Paths')
outdir : Path = Field(default='outputs', description='Default folder for output images', category='Paths')
from_file : Path = Field(default=None, description='Take command input from the indicated file (command-line client only)', category='Paths')
use_memory_db : bool = Field(default=False, description='Use in-memory database for storing image metadata', category='Paths')
ignore_missing_core_models : bool = Field(default=False, description='Ignore missing models in models/core/convert', category='Features')
from_file : Path = Field(default=None, description='Take command input from the indicated file (command-line client only)', category='Paths')
# LOGGING
log_handlers : List[str] = Field(default=["console"], description='Log handler. Valid options are "console", "file=<path>", "syslog=path|address:host:port", "http=<url>"', category="Logging")
# note - would be better to read the log_format values from logging.py, but this creates circular dependency issues
log_format : Literal[tuple(['plain','color','syslog','legacy'])] = Field(default="color", description='Log format. Use "plain" for text-only, "color" for colorized output, "legacy" for 2.3-style logging and "syslog" for syslog-style', category="Logging")
log_level : Literal[tuple(["debug","info","warning","error","critical"])] = Field(default="info", description="Emit logging messages at this level or higher", category="Logging")
log_format : Literal['plain', 'color', 'syslog', 'legacy'] = Field(default="color", description='Log format. Use "plain" for text-only, "color" for colorized output, "legacy" for 2.3-style logging and "syslog" for syslog-style', category="Logging")
log_level : Literal["debug", "info", "warning", "error", "critical"] = Field(default="info", description="Emit logging messages at this level or higher", category="Logging")
dev_reload : bool = Field(default=False, description="Automatically reload when Python sources are changed.", category="Development")
version : bool = Field(default=False, description="Show InvokeAI version and exit", category="Other")
# CACHE
ram : Union[float, Literal["auto"]] = Field(default=6.0, gt=0, description="Maximum memory amount used by model cache for rapid switching (floating point number or 'auto')", category="Model Cache", )
vram : Union[float, Literal["auto"]] = Field(default=0.25, ge=0, description="Amount of VRAM reserved for model storage (floating point number or 'auto')", category="Model Cache", )
lazy_offload : bool = Field(default=True, description="Keep models in VRAM until their space is needed", category="Model Cache", )
# DEVICE
device : Literal[tuple(["auto", "cpu", "cuda", "cuda:1", "mps"])] = Field(default="auto", description="Generation device", category="Device", )
precision: Literal[tuple(["auto", "float16", "float32", "autocast"])] = Field(default="auto", description="Floating point precision", category="Device", )
# GENERATION
sequential_guidance : bool = Field(default=False, description="Whether to calculate guidance in serial instead of in parallel, lowering memory requirements", category="Generation", )
attention_type : Literal[tuple(["auto", "normal", "xformers", "sliced", "torch-sdp"])] = Field(default="auto", description="Attention type", category="Generation", )
attention_slice_size: Literal[tuple(["auto", "balanced", "max", 1, 2, 3, 4, 5, 6, 7, 8])] = Field(default="auto", description='Slice size, valid when attention_type=="sliced"', category="Generation", )
force_tiled_decode: bool = Field(default=False, description="Whether to enable tiled VAE decode (reduces memory consumption with some performance penalty)", category="Generation",)
# DEPRECATED FIELDS - STILL HERE IN ORDER TO OBTAIN VALUES FROM PRE-3.1 CONFIG FILES
always_use_cpu : bool = Field(default=False, description="If true, use the CPU for rendering even if a GPU is available.", category='Memory/Performance')
free_gpu_mem : Optional[bool] = Field(default=None, description="If true, purge model from GPU after each generation.", category='Memory/Performance')
max_cache_size : Optional[float] = Field(default=None, gt=0, description="Maximum memory amount used by model cache for rapid switching", category='Memory/Performance')
max_vram_cache_size : Optional[float] = Field(default=None, ge=0, description="Amount of VRAM reserved for model storage", category='Memory/Performance')
xformers_enabled : bool = Field(default=True, description="Enable/disable memory-efficient attention", category='Memory/Performance')
tiled_decode : bool = Field(default=False, description="Whether to enable tiled VAE decode (reduces memory consumption with some performance penalty)", category='Memory/Performance')
# See InvokeAIAppConfig subclass below for CACHE and DEVICE categories
# fmt: on
class Config:
@ -438,7 +282,7 @@ class InvokeAIAppConfig(InvokeAISettings):
if conf is None:
try:
conf = OmegaConf.load(self.root_dir / INIT_FILE)
except:
except Exception:
pass
InvokeAISettings.initconf = conf
@ -457,7 +301,7 @@ class InvokeAIAppConfig(InvokeAISettings):
"""
if (
cls.singleton_config is None
or type(cls.singleton_config) != cls
or type(cls.singleton_config) is not cls
or (kwargs and cls.singleton_init != kwargs)
):
cls.singleton_config = cls(**kwargs)
@ -541,11 +385,6 @@ class InvokeAIAppConfig(InvokeAISettings):
"""Return true if precision set to float32"""
return self.precision == "float32"
@property
def disable_xformers(self) -> bool:
"""Return true if xformers_enabled is false"""
return not self.xformers_enabled
@property
def try_patchmatch(self) -> bool:
"""Return true if patchmatch true"""
@ -561,6 +400,27 @@ class InvokeAIAppConfig(InvokeAISettings):
"""invisible watermark node is always active and disabled from Web UIe"""
return True
@property
def ram_cache_size(self) -> float:
return self.max_cache_size or self.ram
@property
def vram_cache_size(self) -> float:
return self.max_vram_cache_size or self.vram
@property
def use_cpu(self) -> bool:
return self.always_use_cpu or self.device == "cpu"
@property
def disable_xformers(self) -> bool:
"""
Return true if xformers_enabled is false (reversed logic)
and attention type is not set to xformers.
"""
disabled_in_config = not self.xformers_enabled
return disabled_in_config and self.attention_type != "xformers"
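Callers should read the cache sizes through these properties rather than the raw fields, since they fall back to the legacy max_cache_size / max_vram_cache_size values when a pre-3.1 invokeai.yaml supplied them. A minimal sketch:

from invokeai.app.services.config import InvokeAIAppConfig

conf = InvokeAIAppConfig.get_config()

# Legacy settings win when present; otherwise the new ram / vram fields are used.
ram_gb = conf.ram_cache_size
vram_gb = conf.vram_cache_size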
@staticmethod
def find_root() -> Path:
"""
@ -570,19 +430,19 @@ class InvokeAIAppConfig(InvokeAISettings):
return _find_root()
class PagingArgumentParser(argparse.ArgumentParser):
"""
A custom ArgumentParser that uses pydoc to page its output.
It also supports reading defaults from an init file.
"""
def print_help(self, file=None):
text = self.format_help()
pydoc.pager(text)
def get_invokeai_config(**kwargs) -> InvokeAIAppConfig:
"""
Legacy function which returns InvokeAIAppConfig.get_config()
"""
return InvokeAIAppConfig.get_config(**kwargs)
def _find_root() -> Path:
venv = Path(os.environ.get("VIRTUAL_ENV") or ".")
if os.environ.get("INVOKEAI_ROOT"):
root = Path(os.environ["INVOKEAI_ROOT"])
elif any([(venv.parent / x).exists() for x in [INIT_FILE, LEGACY_INIT_FILE]]):
root = (venv.parent).resolve()
else:
root = Path("~/invokeai").expanduser().resolve()
return root
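Root resolution therefore prefers an explicit INVOKEAI_ROOT, then a sibling of the active virtualenv containing invokeai.yaml or invokeai.init, then ~/invokeai. A small sketch with a hypothetical path:

import os

from invokeai.app.services.config import InvokeAIAppConfig

os.environ["INVOKEAI_ROOT"] = "/data/invokeai"  # hypothetical location
print(InvokeAIAppConfig.find_root())            # Path("/data/invokeai")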

View File

@ -17,9 +17,9 @@ def create_text_to_image() -> LibraryGraph:
description="Converts text to an image",
graph=Graph(
nodes={
"width": IntegerInvocation(id="width", a=512),
"height": IntegerInvocation(id="height", a=512),
"seed": IntegerInvocation(id="seed", a=-1),
"width": IntegerInvocation(id="width", value=512),
"height": IntegerInvocation(id="height", value=512),
"seed": IntegerInvocation(id="seed", value=-1),
"3": NoiseInvocation(id="3"),
"4": CompelInvocation(id="4"),
"5": CompelInvocation(id="5"),
@ -29,15 +29,15 @@ def create_text_to_image() -> LibraryGraph:
},
edges=[
Edge(
source=EdgeConnection(node_id="width", field="a"),
source=EdgeConnection(node_id="width", field="value"),
destination=EdgeConnection(node_id="3", field="width"),
),
Edge(
source=EdgeConnection(node_id="height", field="a"),
source=EdgeConnection(node_id="height", field="value"),
destination=EdgeConnection(node_id="3", field="height"),
),
Edge(
source=EdgeConnection(node_id="seed", field="a"),
source=EdgeConnection(node_id="seed", field="value"),
destination=EdgeConnection(node_id="3", field="seed"),
),
Edge(
@ -65,9 +65,9 @@ def create_text_to_image() -> LibraryGraph:
exposed_inputs=[
ExposedNodeInput(node_path="4", field="prompt", alias="positive_prompt"),
ExposedNodeInput(node_path="5", field="prompt", alias="negative_prompt"),
ExposedNodeInput(node_path="width", field="a", alias="width"),
ExposedNodeInput(node_path="height", field="a", alias="height"),
ExposedNodeInput(node_path="seed", field="a", alias="seed"),
ExposedNodeInput(node_path="width", field="value", alias="width"),
ExposedNodeInput(node_path="height", field="value", alias="height"),
ExposedNodeInput(node_path="seed", field="value", alias="seed"),
],
exposed_outputs=[ExposedNodeOutput(node_path="8", field="image", alias="image")],
)

View File

@ -3,21 +3,24 @@
import copy
import itertools
import uuid
from typing import Annotated, Any, Literal, Optional, Union, get_args, get_origin, get_type_hints
from typing import Annotated, Any, Optional, Union, get_args, get_origin, get_type_hints
import networkx as nx
from pydantic import BaseModel, root_validator, validator
from pydantic.fields import Field
from ..invocations import *
# Importing * is bad karma but needed here for node detection
from ..invocations import * # noqa: F401 F403
from ..invocations.baseinvocation import (
BaseInvocation,
BaseInvocationOutput,
invocation,
Input,
InputField,
InvocationContext,
OutputField,
UIType,
invocation_output,
)
# in 3.10 this would be "from types import NoneType"
@ -109,6 +112,10 @@ def are_connection_types_compatible(from_type: Any, to_type: Any) -> bool:
if to_type in get_args(from_type):
return True
# allow int -> float, pydantic will cast for us
if from_type is int and to_type is float:
return True
# if not issubclass(from_type, to_type):
if not is_union_subtype(from_type, to_type):
return False
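The new branch lets an integer-typed output drive a float-typed input, since pydantic performs the cast when the value is assigned. A sketch of the relaxed check, assuming the helper is importable from the graph service module:

from invokeai.app.services.graph import are_connection_types_compatible

# int -> float is now considered compatible; the reverse direction is not special-cased.
assert are_connection_types_compatible(int, float)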
@ -147,24 +154,16 @@ class NodeAlreadyExecutedError(Exception):
# TODO: Create and use an Empty output?
@invocation_output("graph_output")
class GraphInvocationOutput(BaseInvocationOutput):
type: Literal["graph_output"] = "graph_output"
class Config:
schema_extra = {
"required": [
"type",
"image",
]
}
pass
# TODO: Fill this out and move to invocations
@invocation("graph")
class GraphInvocation(BaseInvocation):
"""Execute a graph"""
type: Literal["graph"] = "graph"
# TODO: figure out how to create a default here
graph: "Graph" = Field(description="The graph to run", default=None)
@ -173,22 +172,20 @@ class GraphInvocation(BaseInvocation):
return GraphInvocationOutput()
@invocation_output("iterate_output")
class IterateInvocationOutput(BaseInvocationOutput):
"""Used to connect iteration outputs. Will be expanded to a specific output."""
type: Literal["iterate_output"] = "iterate_output"
item: Any = OutputField(
description="The item being iterated over", title="Collection Item", ui_type=UIType.CollectionItem
)
# TODO: Fill this out and move to invocations
@invocation("iterate", version="1.0.0")
class IterateInvocation(BaseInvocation):
"""Iterates over a list of items"""
type: Literal["iterate"] = "iterate"
collection: list[Any] = InputField(
description="The list of items to iterate over", default_factory=list, ui_type=UIType.Collection
)
@ -199,19 +196,17 @@ class IterateInvocation(BaseInvocation):
return IterateInvocationOutput(item=self.collection[self.index])
@invocation_output("collect_output")
class CollectInvocationOutput(BaseInvocationOutput):
type: Literal["collect_output"] = "collect_output"
collection: list[Any] = OutputField(
description="The collection of input items", title="Collection", ui_type=UIType.Collection
)
@invocation("collect", version="1.0.0")
class CollectInvocation(BaseInvocation):
"""Collects values into a collection"""
type: Literal["collect"] = "collect"
item: Any = InputField(
description="The item to collect (all inputs must be of the same type)",
ui_type=UIType.CollectionItem,
@ -445,7 +440,7 @@ class Graph(BaseModel):
node = graph.nodes[node_id]
# Ensure the node type matches the new node
if type(node) != type(new_node):
if type(node) is not type(new_node):
raise TypeError(f"Node {node_path} is type {type(node)} but new node is type {type(new_node)}")
# Ensure the new id is either the same or is not in the graph
@ -632,7 +627,7 @@ class Graph(BaseModel):
[
t
for input_field in input_fields
for t in ([input_field] if get_origin(input_field) == None else get_args(input_field))
for t in ([input_field] if get_origin(input_field) is None else get_args(input_field))
if t != NoneType
]
) # Get unique types
@ -923,7 +918,7 @@ class GraphExecutionState(BaseModel):
None,
)
if next_node_id == None:
if next_node_id is None:
return None
# Get all parents of the next node

View File

@ -60,7 +60,7 @@ class ImageFileStorageBase(ABC):
image: PILImageType,
image_name: str,
metadata: Optional[dict] = None,
graph: Optional[dict] = None,
workflow: Optional[str] = None,
thumbnail_size: int = 256,
) -> None:
"""Saves an image and a 256x256 WEBP thumbnail. Returns a tuple of the image name, thumbnail name, and created timestamp."""
@ -110,7 +110,7 @@ class DiskImageFileStorage(ImageFileStorageBase):
image: PILImageType,
image_name: str,
metadata: Optional[dict] = None,
graph: Optional[dict] = None,
workflow: Optional[str] = None,
thumbnail_size: int = 256,
) -> None:
try:
@ -119,12 +119,23 @@ class DiskImageFileStorage(ImageFileStorageBase):
pnginfo = PngImagePlugin.PngInfo()
if metadata is not None:
pnginfo.add_text("invokeai_metadata", json.dumps(metadata))
if graph is not None:
pnginfo.add_text("invokeai_graph", json.dumps(graph))
if metadata is not None or workflow is not None:
if metadata is not None:
pnginfo.add_text("invokeai_metadata", json.dumps(metadata))
if workflow is not None:
pnginfo.add_text("invokeai_workflow", workflow)
else:
# For uploaded images, we want to retain metadata. PIL strips it on save; manually add it back
# TODO: retain non-invokeai metadata on save...
original_metadata = image.info.get("invokeai_metadata", None)
if original_metadata is not None:
pnginfo.add_text("invokeai_metadata", original_metadata)
original_workflow = image.info.get("invokeai_workflow", None)
if original_workflow is not None:
pnginfo.add_text("invokeai_workflow", original_workflow)
image.save(image_path, "PNG", pnginfo=pnginfo)
thumbnail_name = get_thumbnail_name(image_name)
thumbnail_path = self.get_path(thumbnail_name, thumbnail=True)
thumbnail_image = make_thumbnail(image, thumbnail_size)
@ -179,7 +190,7 @@ class DiskImageFileStorage(ImageFileStorageBase):
return None if image_name not in self.__cache else self.__cache[image_name]
def __set_cache(self, image_name: Path, image: PILImageType):
if not image_name in self.__cache:
if image_name not in self.__cache:
self.__cache[image_name] = image
self.__cache_ids.put(image_name) # TODO: this should refresh position for LRU cache
if len(self.__cache) > self.__max_cache_size:
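With the workflow stored as a PNG text chunk named invokeai_workflow, it can be read back with Pillow alone. A minimal sketch using a hypothetical output path:

import json

from PIL import Image

with Image.open("outputs/example.png") as im:  # hypothetical path
    workflow_json = im.info.get("invokeai_workflow")

workflow = json.loads(workflow_json) if workflow_json else None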

View File

@ -282,7 +282,7 @@ class SqliteImageRecordStorage(ImageRecordStorageBase):
self._lock.acquire()
self._cursor.execute(
f"""--sql
"""--sql
SELECT images.metadata FROM images
WHERE image_name = ?;
""",
@ -309,7 +309,7 @@ class SqliteImageRecordStorage(ImageRecordStorageBase):
# Change the category of the image
if changes.image_category is not None:
self._cursor.execute(
f"""--sql
"""--sql
UPDATE images
SET image_category = ?
WHERE image_name = ?;
@ -320,7 +320,7 @@ class SqliteImageRecordStorage(ImageRecordStorageBase):
# Change the session associated with the image
if changes.session_id is not None:
self._cursor.execute(
f"""--sql
"""--sql
UPDATE images
SET session_id = ?
WHERE image_name = ?;
@ -331,7 +331,7 @@ class SqliteImageRecordStorage(ImageRecordStorageBase):
# Change the image's `is_intermediate`` flag
if changes.is_intermediate is not None:
self._cursor.execute(
f"""--sql
"""--sql
UPDATE images
SET is_intermediate = ?
WHERE image_name = ?;
@ -342,7 +342,7 @@ class SqliteImageRecordStorage(ImageRecordStorageBase):
# Change the image's `starred`` state
if changes.starred is not None:
self._cursor.execute(
f"""--sql
"""--sql
UPDATE images
SET starred = ?
WHERE image_name = ?;

View File

@ -1,4 +1,3 @@
import json
from abc import ABC, abstractmethod
from logging import Logger
from typing import TYPE_CHECKING, Optional
@ -55,6 +54,7 @@ class ImageServiceABC(ABC):
board_id: Optional[str] = None,
is_intermediate: bool = False,
metadata: Optional[dict] = None,
workflow: Optional[str] = None,
) -> ImageDTO:
"""Creates an image, storing the file and its metadata."""
pass
@ -178,6 +178,7 @@ class ImageService(ImageServiceABC):
board_id: Optional[str] = None,
is_intermediate: bool = False,
metadata: Optional[dict] = None,
workflow: Optional[str] = None,
) -> ImageDTO:
if image_origin not in ResourceOrigin:
raise InvalidOriginException
@ -187,16 +188,16 @@ class ImageService(ImageServiceABC):
image_name = self._services.names.create_image_name()
graph = None
if session_id is not None:
session_raw = self._services.graph_execution_manager.get_raw(session_id)
if session_raw is not None:
try:
graph = get_metadata_graph_from_raw_session(session_raw)
except Exception as e:
self._services.logger.warn(f"Failed to parse session graph: {e}")
graph = None
# TODO: Do we want to store the graph in the image at all? I don't think so...
# graph = None
# if session_id is not None:
# session_raw = self._services.graph_execution_manager.get_raw(session_id)
# if session_raw is not None:
# try:
# graph = get_metadata_graph_from_raw_session(session_raw)
# except Exception as e:
# self._services.logger.warn(f"Failed to parse session graph: {e}")
# graph = None
(width, height) = image.size
@ -218,7 +219,7 @@ class ImageService(ImageServiceABC):
)
if board_id is not None:
self._services.board_image_records.add_image_to_board(board_id=board_id, image_name=image_name)
self._services.image_files.save(image_name=image_name, image=image, metadata=metadata, graph=graph)
self._services.image_files.save(image_name=image_name, image=image, metadata=metadata, workflow=workflow)
image_dto = self.get_dto(image_name)
return image_dto
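From a node's perspective, the new workflow parameter is simply threaded through the images service, mirroring the ESRGAN change earlier in this diff. The node below is a hypothetical sketch; it assumes get_pil_image() and create() behave as they are used elsewhere in this codebase.

from PIL import ImageOps

from invokeai.app.invocations.baseinvocation import (
    BaseInvocation,
    InputField,
    InvocationContext,
    invocation,
)
from invokeai.app.invocations.primitives import ImageField, ImageOutput
from invokeai.app.models.image import ImageCategory, ResourceOrigin


@invocation("flip_horizontal", title="Flip Horizontal", tags=["image"], category="image", version="1.0.0")
class FlipHorizontalInvocation(BaseInvocation):
    """Mirrors an image left-to-right (illustrative only)."""

    image: ImageField = InputField(description="The image to flip")

    def invoke(self, context: InvocationContext) -> ImageOutput:
        pil_image = context.services.images.get_pil_image(self.image.image_name)
        flipped = ImageOps.mirror(pil_image)
        image_dto = context.services.images.create(
            image=flipped,
            image_origin=ResourceOrigin.INTERNAL,
            image_category=ImageCategory.GENERAL,
            node_id=self.id,
            session_id=context.graph_execution_state_id,
            is_intermediate=self.is_intermediate,
            workflow=self.workflow,  # ends up in the PNG's invokeai_workflow chunk
        )
        return ImageOutput(
            image=ImageField(image_name=image_dto.image_name),
            width=image_dto.width,
            height=image_dto.height,
        )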
@ -379,10 +380,10 @@ class ImageService(ImageServiceABC):
self._services.image_files.delete(image_name)
self._services.image_records.delete(image_name)
except ImageRecordDeleteException:
self._services.logger.error(f"Failed to delete image record")
self._services.logger.error("Failed to delete image record")
raise
except ImageFileDeleteException:
self._services.logger.error(f"Failed to delete image file")
self._services.logger.error("Failed to delete image file")
raise
except Exception as e:
self._services.logger.error("Problem deleting image record and file")
@ -395,10 +396,10 @@ class ImageService(ImageServiceABC):
self._services.image_files.delete(image_name)
self._services.image_records.delete_many(image_names)
except ImageRecordDeleteException:
self._services.logger.error(f"Failed to delete image records")
self._services.logger.error("Failed to delete image records")
raise
except ImageFileDeleteException:
self._services.logger.error(f"Failed to delete image files")
self._services.logger.error("Failed to delete image files")
raise
except Exception as e:
self._services.logger.error("Problem deleting image records and files")
@ -412,10 +413,10 @@ class ImageService(ImageServiceABC):
self._services.image_files.delete(image_name)
return count
except ImageRecordDeleteException:
self._services.logger.error(f"Failed to delete image records")
self._services.logger.error("Failed to delete image records")
raise
except ImageFileDeleteException:
self._services.logger.error(f"Failed to delete image files")
self._services.logger.error("Failed to delete image files")
raise
except Exception as e:
self._services.logger.error("Problem deleting image records and files")

View File

@ -7,6 +7,7 @@ if TYPE_CHECKING:
from invokeai.app.services.board_images import BoardImagesServiceABC
from invokeai.app.services.boards import BoardServiceABC
from invokeai.app.services.images import ImageServiceABC
from invokeai.app.services.invocation_stats import InvocationStatsServiceBase
from invokeai.app.services.model_manager_service import ModelManagerServiceBase
from invokeai.app.services.events import EventServiceBase
from invokeai.app.services.latent_storage import LatentsStorageBase

View File

@ -1,7 +1,6 @@
# Copyright 2023 Lincoln D. Stein <lincoln.stein@gmail.com>
"""Utility to collect execution time and GPU usage stats on invocations in flight"""
"""Utility to collect execution time and GPU usage stats on invocations in flight
"""
Usage:
statistics = InvocationStatsService(graph_execution_manager)
@ -50,9 +49,36 @@ from invokeai.backend.model_management.model_cache import CacheStats
GIG = 1073741824
@dataclass
class NodeStats:
"""Class for tracking execution stats of an invocation node"""
calls: int = 0
time_used: float = 0.0 # seconds
max_vram: float = 0.0 # GB
cache_hits: int = 0
cache_misses: int = 0
cache_high_watermark: int = 0
@dataclass
class NodeLog:
"""Class for tracking node usage"""
# {node_type => NodeStats}
nodes: Dict[str, NodeStats] = field(default_factory=dict)
class InvocationStatsServiceBase(ABC):
"Abstract base class for recording node memory/time performance statistics"
graph_execution_manager: ItemStorageABC["GraphExecutionState"]
# {graph_id => NodeLog}
_stats: Dict[str, NodeLog]
_cache_stats: Dict[str, CacheStats]
ram_used: float
ram_changed: float
@abstractmethod
def __init__(self, graph_execution_manager: ItemStorageABC["GraphExecutionState"]):
"""
@ -95,8 +121,6 @@ class InvocationStatsServiceBase(ABC):
invocation_type: str,
time_used: float,
vram_used: float,
ram_used: float,
ram_changed: float,
):
"""
Add timing information on execution of a node. Usually
@ -105,8 +129,6 @@ class InvocationStatsServiceBase(ABC):
:param invocation_type: String literal type of the node
:param time_used: Time used by node's execution (sec)
:param vram_used: Maximum VRAM used during execution (GB)
:param ram_used: Current RAM available (GB)
:param ram_changed: Change in RAM usage over course of the run (GB)
"""
pass
@ -117,25 +139,19 @@ class InvocationStatsServiceBase(ABC):
"""
pass
@abstractmethod
def update_mem_stats(
self,
ram_used: float,
ram_changed: float,
):
"""
Update the collector with RAM memory usage info.
@dataclass
class NodeStats:
"""Class for tracking execution stats of an invocation node"""
calls: int = 0
time_used: float = 0.0 # seconds
max_vram: float = 0.0 # GB
cache_hits: int = 0
cache_misses: int = 0
cache_high_watermark: int = 0
@dataclass
class NodeLog:
"""Class for tracking node usage"""
# {node_type => NodeStats}
nodes: Dict[str, NodeStats] = field(default_factory=dict)
:param ram_used: How much RAM is currently in use.
:param ram_changed: How much RAM changed since last generation.
"""
pass
class InvocationStatsService(InvocationStatsServiceBase):
@ -153,12 +169,12 @@ class InvocationStatsService(InvocationStatsServiceBase):
class StatsContext:
"""Context manager for collecting statistics."""
invocation: BaseInvocation = None
collector: "InvocationStatsServiceBase" = None
graph_id: str = None
start_time: int = 0
ram_used: int = 0
model_manager: ModelManagerService = None
invocation: BaseInvocation
collector: "InvocationStatsServiceBase"
graph_id: str
start_time: float
ram_used: int
model_manager: ModelManagerService
def __init__(
self,
@ -171,7 +187,7 @@ class InvocationStatsService(InvocationStatsServiceBase):
self.invocation = invocation
self.collector = collector
self.graph_id = graph_id
self.start_time = 0
self.start_time = 0.0
self.ram_used = 0
self.model_manager = model_manager
@ -192,7 +208,7 @@ class InvocationStatsService(InvocationStatsServiceBase):
)
self.collector.update_invocation_stats(
graph_id=self.graph_id,
invocation_type=self.invocation.type,
invocation_type=self.invocation.type, # type: ignore - `type` is not on the `BaseInvocation` model, but *is* on all invocations
time_used=time.time() - self.start_time,
vram_used=torch.cuda.max_memory_allocated() / GIG if torch.cuda.is_available() else 0.0,
)
@ -203,11 +219,6 @@ class InvocationStatsService(InvocationStatsServiceBase):
graph_execution_state_id: str,
model_manager: ModelManagerService,
) -> StatsContext:
"""
Return a context object that will capture the statistics.
:param invocation: BaseInvocation object from the current graph.
:param graph_execution_state: GraphExecutionState object from the current session.
"""
if not self._stats.get(graph_execution_state_id): # first time we're seeing this
self._stats[graph_execution_state_id] = NodeLog()
self._cache_stats[graph_execution_state_id] = CacheStats()
@ -218,7 +229,6 @@ class InvocationStatsService(InvocationStatsServiceBase):
self._stats = {}
def reset_stats(self, graph_execution_id: str):
"""Zero the statistics for the indicated graph."""
try:
self._stats.pop(graph_execution_id)
except KeyError:
@ -229,12 +239,6 @@ class InvocationStatsService(InvocationStatsServiceBase):
ram_used: float,
ram_changed: float,
):
"""
Update the collector with RAM memory usage info.
:param ram_used: How much RAM is currently in use.
:param ram_changed: How much RAM changed since last generation.
"""
self.ram_used = ram_used
self.ram_changed = ram_changed
@ -245,16 +249,6 @@ class InvocationStatsService(InvocationStatsServiceBase):
time_used: float,
vram_used: float,
):
"""
Add timing information on execution of a node. Usually
used internally.
:param graph_id: ID of the graph that is currently executing
:param invocation_type: String literal type of the node
:param time_used: Time used by node's execution (sec)
:param vram_used: Maximum VRAM used during execution (GB)
:param ram_used: Current RAM available (GB)
:param ram_changed: Change in RAM usage over course of the run (GB)
"""
if not self._stats[graph_id].nodes.get(invocation_type):
self._stats[graph_id].nodes[invocation_type] = NodeStats()
stats = self._stats[graph_id].nodes[invocation_type]
@ -263,14 +257,15 @@ class InvocationStatsService(InvocationStatsServiceBase):
stats.max_vram = max(stats.max_vram, vram_used)
def log_stats(self):
"""
Send the statistics to the system logger at the info level.
Stats will only be printed when the execution of the graph
is complete.
"""
completed = set()
errored = set()
for graph_id, node_log in self._stats.items():
current_graph_state = self.graph_execution_manager.get(graph_id)
try:
current_graph_state = self.graph_execution_manager.get(graph_id)
except Exception:
errored.add(graph_id)
continue
if not current_graph_state.is_complete():
continue
@ -303,3 +298,7 @@ class InvocationStatsService(InvocationStatsServiceBase):
for graph_id in completed:
del self._stats[graph_id]
del self._cache_stats[graph_id]
for graph_id in errored:
del self._stats[graph_id]
del self._cache_stats[graph_id]
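A minimal usage sketch of the statistics service above, assuming the concrete class keeps the base-class constructor shown at the top of this file and that the context-manager factory is the method whose trailing signature appears in this hunk (called collect_stats here; that name and the surrounding variables are illustrative):

stats = InvocationStatsService(graph_execution_manager)
with stats.collect_stats(invocation, graph_execution_state.id, model_manager):
    outputs = invocation.invoke(context)   # node runs; elapsed time and peak VRAM are recorded
stats.log_stats()   # emitted at info level once the whole graph is complete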


@ -60,7 +60,7 @@ class ForwardCacheLatentsStorage(LatentsStorageBase):
return None if name not in self.__cache else self.__cache[name]
def __set_cache(self, name: str, data: torch.Tensor):
if not name in self.__cache:
if name not in self.__cache:
self.__cache[name] = data
self.__cache_ids.put(name)
if self.__cache_ids.qsize() > self.__max_cache_size:


@ -330,8 +330,8 @@ class ModelManagerService(ModelManagerServiceBase):
# configuration value. If present, then the
# cache size is set to 2.5 GB times
# the number of max_loaded_models. Otherwise
# use new `max_cache_size` config setting
max_cache_size = config.max_cache_size if hasattr(config, "max_cache_size") else config.max_loaded_models * 2.5
# use new `ram_cache_size` config setting
max_cache_size = config.ram_cache_size
logger.debug(f"Maximum RAM cache size: {max_cache_size} GiB")
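For orientation, a small sketch of how the renamed cache settings might be read at runtime; it assumes ram_cache_size, vram_cache_size and lazy_offload are attributes of InvokeAIAppConfig, as the hunks above and below suggest:

from invokeai.app.services.config import InvokeAIAppConfig

config = InvokeAIAppConfig.get_config()
# the old max_cache_size / max_vram_cache_size names are superseded by:
print(f"RAM cache:    {config.ram_cache_size} GiB")
print(f"VRAM cache:   {config.vram_cache_size} GiB")
print(f"Lazy offload: {config.lazy_offload}")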


@ -53,7 +53,7 @@ class ImageRecordChanges(BaseModelExcludeNull, extra=Extra.forbid):
- `starred`: change whether the image is starred
"""
image_category: Optional[ImageCategory] = Field(description="The image's new category.")
image_category: Optional[ImageCategory] = Field(default=None, description="The image's new category.")
"""The image's new category."""
session_id: Optional[StrictStr] = Field(
default=None,


@ -1,3 +1,4 @@
from typing import Union
import torch
import numpy as np
import cv2
@ -5,7 +6,7 @@ from PIL import Image
from diffusers.utils import PIL_INTERPOLATION
from einops import rearrange
from controlnet_aux.util import HWC3, resize_image
from controlnet_aux.util import HWC3
###################################################################
# Copy of scripts/lvminthin.py from Mikubill/sd-webui-controlnet
@ -232,7 +233,8 @@ def np_img_resize(np_img: np.ndarray, resize_mode: str, h: int, w: int, device:
k0 = float(h) / old_h
k1 = float(w) / old_w
safeint = lambda x: int(np.round(x))
def safeint(x: Union[int, float]) -> int:
return int(np.round(x))
# if resize_mode == external_code.ResizeMode.OUTER_FIT:
if resize_mode == "fill_resize": # OUTER_FIT
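The typed safeint helper defined above behaves exactly like the lambda it replaces; a quick standalone check (illustrative only):

import numpy as np

def safeint(x):
    return int(np.round(x))

print(safeint(2.5), safeint(3.5))  # 2 4 -- numpy rounds halves to the nearest even integer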


@ -5,7 +5,6 @@ from invokeai.app.models.image import ProgressImage
from ..invocations.baseinvocation import InvocationContext
from ...backend.util.util import image_to_dataURL
from ...backend.stable_diffusion import PipelineIntermediateState
from invokeai.app.services.config import InvokeAIAppConfig
from ...backend.model_management.models import BaseModelType


@ -1,5 +1,5 @@
"""
Initialization file for invokeai.backend
"""
from .model_management import ModelManager, ModelCache, BaseModelType, ModelType, SubModelType, ModelInfo
from .model_management.models import SilenceWarnings
from .model_management import ModelManager, ModelCache, BaseModelType, ModelType, SubModelType, ModelInfo # noqa: F401
from .model_management.models import SilenceWarnings # noqa: F401


@ -1,14 +1,16 @@
"""
Initialization file for invokeai.backend.image_util methods.
"""
from .patchmatch import PatchMatch
from .pngwriter import PngWriter, PromptFormatter, retrieve_metadata, write_metadata
from .seamless import configure_model_padding
from .txt2mask import Txt2Mask
from .util import InitImageResizer, make_grid
from .patchmatch import PatchMatch # noqa: F401
from .pngwriter import PngWriter, PromptFormatter, retrieve_metadata, write_metadata # noqa: F401
from .seamless import configure_model_padding # noqa: F401
from .txt2mask import Txt2Mask # noqa: F401
from .util import InitImageResizer, make_grid # noqa: F401
def debug_image(debug_image, debug_text, debug_show=True, debug_result=False, debug_status=False):
from PIL import ImageDraw
if not debug_status:
return


@ -0,0 +1,20 @@
import cv2
import numpy as np
from PIL import Image
def cv2_inpaint(image: Image.Image) -> Image.Image:
# Prepare Image
image_array = np.array(image.convert("RGB"))
image_cv = cv2.cvtColor(image_array, cv2.COLOR_RGB2BGR)
# Prepare Mask From Alpha Channel
mask = image.split()[3].convert("RGB")
mask_array = np.array(mask)
mask_cv = cv2.cvtColor(mask_array, cv2.COLOR_BGR2GRAY)
mask_inv = cv2.bitwise_not(mask_cv)
# Inpaint Image
inpainted_result = cv2.inpaint(image_cv, mask_inv, 3, cv2.INPAINT_TELEA)
inpainted_image = Image.fromarray(cv2.cvtColor(inpainted_result, cv2.COLOR_BGR2RGB))
return inpainted_image
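A hypothetical usage sketch for the new OpenCV infill helper above; the file names are illustrative, and the key assumption (taken straight from the code) is that the input is RGBA and its transparent pixels are the ones to be filled:

from PIL import Image

rgba = Image.open("erased_region.png").convert("RGBA")  # alpha == 0 where content was erased
filled = cv2_inpaint(rgba)   # Telea inpainting over the transparent region
filled.save("filled.png")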


@ -0,0 +1,58 @@
import gc
from typing import Any
import numpy as np
import torch
from PIL import Image
import invokeai.backend.util.logging as logger
from invokeai.app.services.config import get_invokeai_config
from invokeai.backend.util.devices import choose_torch_device
def norm_img(np_img):
if len(np_img.shape) == 2:
np_img = np_img[:, :, np.newaxis]
np_img = np.transpose(np_img, (2, 0, 1))
np_img = np_img.astype("float32") / 255
return np_img
def load_jit_model(url_or_path, device):
model_path = url_or_path
logger.info(f"Loading model from: {model_path}")
model = torch.jit.load(model_path, map_location="cpu").to(device)
model.eval()
return model
class LaMA:
def __call__(self, input_image: Image.Image, *args: Any, **kwds: Any) -> Any:
device = choose_torch_device()
model_location = get_invokeai_config().models_path / "core/misc/lama/lama.pt"
model = load_jit_model(model_location, device)
image = np.asarray(input_image.convert("RGB"))
image = norm_img(image)
mask = input_image.split()[-1]
mask = np.asarray(mask)
mask = np.invert(mask)
mask = norm_img(mask)
mask = (mask > 0) * 1
image = torch.from_numpy(image).unsqueeze(0).to(device)
mask = torch.from_numpy(mask).unsqueeze(0).to(device)
with torch.inference_mode():
infilled_image = model(image, mask)
infilled_image = infilled_image[0].permute(1, 2, 0).detach().cpu().numpy()
infilled_image = np.clip(infilled_image * 255, 0, 255).astype("uint8")
infilled_image = Image.fromarray(infilled_image)
del model
gc.collect()
torch.cuda.empty_cache()
return infilled_image
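Similarly, a hedged sketch of how the new LaMA wrapper might be called; it assumes the lama.pt weights are already in place under models/core/misc/lama/ (the download_lama() helper added further down fetches them there) and, as with cv2_inpaint, that transparent pixels mark the region to infill:

from PIL import Image

lama = LaMA()
result = lama(Image.open("damaged.png").convert("RGBA"))
result.save("lama_filled.png")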


@ -26,7 +26,7 @@ class PngWriter:
dirlist = sorted(os.listdir(self.outdir), reverse=True)
# find the first filename that matches our pattern or return 000000.0.png
existing_name = next(
(f for f in dirlist if re.match("^(\d+)\..*\.png", f)),
(f for f in dirlist if re.match(r"^(\d+)\..*\.png", f)),
"0000000.0.png",
)
basecount = int(existing_name.split(".", 1)[0]) + 1
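The switch to a raw string is purely about escape handling; the match is unchanged, but the escape sequence "\d" in a non-raw string literal has emitted a DeprecationWarning since Python 3.6 (a SyntaxWarning from 3.12). A standalone check:

import re

name = "0000012.1.png"
assert re.match(r"^(\d+)\..*\.png", name).group(1) == "0000012"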
@ -98,11 +98,11 @@ class PromptFormatter:
# to do: put model name into the t2i object
# switches.append(f'--model{t2i.model_name}')
if opt.seamless or t2i.seamless:
switches.append(f"--seamless")
switches.append("--seamless")
if opt.init_img:
switches.append(f"-I{opt.init_img}")
if opt.fit:
switches.append(f"--fit")
switches.append("--fit")
if opt.strength and opt.init_img is not None:
switches.append(f"-f{opt.strength or t2i.strength}")
if opt.gfpgan_strength:


@ -20,7 +20,8 @@ def _conv_forward_asymmetric(self, input, weight, bias):
def configure_model_padding(model, seamless, seamless_axes):
"""
Modifies the 2D convolution layers to use a circular padding mode based on the `seamless` and `seamless_axes` options.
Modifies the 2D convolution layers to use a circular padding mode based on
the `seamless` and `seamless_axes` options.
"""
# TODO: get an explicit interface for this in diffusers: https://github.com/huggingface/diffusers/issues/556
for m in model.modules():
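The seamless-tiling helper above patches the model's conv layers to pad circularly on the requested axes; a self-contained sketch of the underlying idea (not the actual _conv_forward_asymmetric body, which is not shown in this hunk):

import torch
import torch.nn.functional as F

def conv_with_seamless_axes(conv: torch.nn.Conv2d, x: torch.Tensor, seamless_axes=("x",)):
    px, py = conv.padding[1], conv.padding[0]
    # wrap-around padding on the seamless axes, ordinary zero padding elsewhere
    x = F.pad(x, (px, px, 0, 0), mode="circular" if "x" in seamless_axes else "constant")
    x = F.pad(x, (0, 0, py, py), mode="circular" if "y" in seamless_axes else "constant")
    return F.conv2d(x, conv.weight, conv.bias, conv.stride, 0, conv.dilation, conv.groups)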


@ -21,6 +21,7 @@ from argparse import Namespace
from enum import Enum
from pathlib import Path
from shutil import get_terminal_size
from typing import get_type_hints, get_args, Any
from urllib import request
import npyscreen
@ -49,10 +50,10 @@ from invokeai.frontend.install.model_install import addModelsForm, process_and_e
# TO DO - Move all the frontend code into invokeai.frontend.install
from invokeai.frontend.install.widgets import (
SingleSelectColumns,
SingleSelectColumnsSimple,
MultiSelectColumns,
CenteredButtonPress,
FileBox,
IntTitleSlider,
set_min_terminal_size,
CyclingForm,
MIN_COLS,
@ -72,6 +73,10 @@ warnings.filterwarnings("ignore")
transformers.logging.set_verbosity_error()
def get_literal_fields(field) -> list[Any]:
return get_args(get_type_hints(InvokeAIAppConfig).get(field))
# --------------------------globals-----------------------
config = InvokeAIAppConfig.get_config()
@ -81,7 +86,11 @@ Model_dir = "models"
Default_config_file = config.model_conf_path
SD_Configs = config.legacy_conf_path
PRECISION_CHOICES = ["auto", "float16", "float32"]
PRECISION_CHOICES = get_literal_fields("precision")
DEVICE_CHOICES = get_literal_fields("device")
ATTENTION_CHOICES = get_literal_fields("attention_type")
ATTENTION_SLICE_CHOICES = get_literal_fields("attention_slice_size")
GENERATION_OPT_CHOICES = ["sequential_guidance", "force_tiled_decode", "lazy_offload"]
GB = 1073741824 # GB in bytes
HAS_CUDA = torch.cuda.is_available()
_, MAX_VRAM = torch.cuda.mem_get_info() if HAS_CUDA else (0, 0)
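The get_literal_fields() helper pulls the allowed values straight out of the Literal[...] type hints on InvokeAIAppConfig, so the installer's choice lists can no longer drift out of sync with the config schema. A standalone illustration of the pattern (the config class here is a stand-in, not InvokeAI's):

from typing import Any, Literal, get_args, get_type_hints

class _DemoConfig:
    precision: Literal["auto", "float16", "float32"] = "auto"

def literal_fields(field: str) -> tuple[Any, ...]:
    return get_args(get_type_hints(_DemoConfig)[field])

print(literal_fields("precision"))  # ('auto', 'float16', 'float32')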
@ -281,9 +290,20 @@ def download_realesrgan():
download_with_progress_bar(model["url"], config.models_path / model["dest"], model["description"])
# ---------------------------------------------
def download_lama():
logger.info("Installing lama infill model")
download_with_progress_bar(
"https://github.com/Sanster/models/releases/download/add_big_lama/big-lama.pt",
config.models_path / "core/misc/lama/lama.pt",
"lama infill model",
)
# ---------------------------------------------
def download_support_models():
download_realesrgan()
download_lama()
download_conversion_models()
@ -308,10 +328,11 @@ class editOptsForm(CyclingForm, npyscreen.FormMultiPage):
first_time = not (config.root_path / "invokeai.yaml").exists()
access_token = HfFolder.get_token()
window_width, window_height = get_terminal_size()
label = """Configure startup settings. You can come back and change these later.
label = """Configure startup settings. You can come back and change these later.
Use ctrl-N and ctrl-P to move to the <N>ext and <P>revious fields.
Use cursor arrows to make a checkbox selection, and space to toggle.
"""
self.nextrely -= 1
for i in textwrap.wrap(label, width=window_width - 6):
self.add_widget_intelligent(
npyscreen.FixedText,
@ -338,76 +359,127 @@ Use cursor arrows to make a checkbox selection, and space to toggle.
use_two_lines=False,
scroll_exit=True,
)
self.nextrely += 1
self.add_widget_intelligent(
npyscreen.TitleFixedText,
name="GPU Management",
begin_entry_at=0,
editable=False,
color="CONTROL",
scroll_exit=True,
)
self.nextrely -= 1
self.free_gpu_mem = self.add_widget_intelligent(
npyscreen.Checkbox,
name="Free GPU memory after each generation",
value=old_opts.free_gpu_mem,
max_width=45,
relx=5,
scroll_exit=True,
)
self.nextrely -= 1
self.xformers_enabled = self.add_widget_intelligent(
npyscreen.Checkbox,
name="Enable xformers support",
value=old_opts.xformers_enabled,
max_width=30,
relx=50,
scroll_exit=True,
)
self.nextrely -= 1
self.always_use_cpu = self.add_widget_intelligent(
npyscreen.Checkbox,
name="Force CPU to be used on GPU systems",
value=old_opts.always_use_cpu,
relx=80,
scroll_exit=True,
)
# old settings for defaults
precision = old_opts.precision or ("float32" if program_opts.full_precision else "auto")
device = old_opts.device
attention_type = old_opts.attention_type
attention_slice_size = old_opts.attention_slice_size
self.nextrely += 1
self.add_widget_intelligent(
npyscreen.TitleFixedText,
name="Floating Point Precision",
name="Image Generation Options:",
editable=False,
color="CONTROL",
scroll_exit=True,
)
self.nextrely -= 2
self.generation_options = self.add_widget_intelligent(
MultiSelectColumns,
columns=3,
values=GENERATION_OPT_CHOICES,
value=[GENERATION_OPT_CHOICES.index(x) for x in GENERATION_OPT_CHOICES if getattr(old_opts, x)],
relx=30,
max_height=2,
max_width=80,
scroll_exit=True,
)
self.add_widget_intelligent(
npyscreen.TitleFixedText,
name="Floating Point Precision:",
begin_entry_at=0,
editable=False,
color="CONTROL",
scroll_exit=True,
)
self.nextrely -= 1
self.nextrely -= 2
self.precision = self.add_widget_intelligent(
SingleSelectColumns,
columns=3,
SingleSelectColumnsSimple,
columns=len(PRECISION_CHOICES),
name="Precision",
values=PRECISION_CHOICES,
value=PRECISION_CHOICES.index(precision),
begin_entry_at=3,
max_height=2,
relx=30,
max_width=56,
scroll_exit=True,
)
self.add_widget_intelligent(
npyscreen.TitleFixedText,
name="Generation Device:",
begin_entry_at=0,
editable=False,
color="CONTROL",
scroll_exit=True,
)
self.nextrely -= 2
self.device = self.add_widget_intelligent(
SingleSelectColumnsSimple,
columns=len(DEVICE_CHOICES),
values=DEVICE_CHOICES,
value=[DEVICE_CHOICES.index(device)],
begin_entry_at=3,
relx=30,
max_height=2,
max_width=60,
scroll_exit=True,
)
self.add_widget_intelligent(
npyscreen.TitleFixedText,
name="Attention Type:",
begin_entry_at=0,
editable=False,
color="CONTROL",
scroll_exit=True,
)
self.nextrely -= 2
self.attention_type = self.add_widget_intelligent(
SingleSelectColumnsSimple,
columns=len(ATTENTION_CHOICES),
values=ATTENTION_CHOICES,
value=[ATTENTION_CHOICES.index(attention_type)],
begin_entry_at=3,
max_height=2,
relx=30,
max_width=80,
scroll_exit=True,
)
self.nextrely += 1
self.attention_type.on_changed = self.show_hide_slice_sizes
self.attention_slice_label = self.add_widget_intelligent(
npyscreen.TitleFixedText,
name="Attention Slice Size:",
relx=5,
editable=False,
hidden=attention_type != "sliced",
color="CONTROL",
scroll_exit=True,
)
self.nextrely -= 2
self.attention_slice_size = self.add_widget_intelligent(
SingleSelectColumnsSimple,
columns=len(ATTENTION_SLICE_CHOICES),
values=ATTENTION_SLICE_CHOICES,
value=[ATTENTION_SLICE_CHOICES.index(attention_slice_size)],
relx=30,
hidden=attention_type != "sliced",
max_height=2,
max_width=110,
scroll_exit=True,
)
self.add_widget_intelligent(
npyscreen.TitleFixedText,
name="RAM cache size (GB). Make this at least large enough to hold a single full model.",
name="Model RAM cache size (GB). Make this at least large enough to hold a single full model.",
begin_entry_at=0,
editable=False,
color="CONTROL",
scroll_exit=True,
)
self.nextrely -= 1
self.max_cache_size = self.add_widget_intelligent(
self.ram = self.add_widget_intelligent(
npyscreen.Slider,
value=clip(old_opts.max_cache_size, range=(3.0, MAX_RAM), step=0.5),
value=clip(old_opts.ram_cache_size, range=(3.0, MAX_RAM), step=0.5),
out_of=round(MAX_RAM),
lowest=0.0,
step=0.5,
@ -418,16 +490,16 @@ Use cursor arrows to make a checkbox selection, and space to toggle.
self.nextrely += 1
self.add_widget_intelligent(
npyscreen.TitleFixedText,
name="VRAM cache size (GB). Reserving a small amount of VRAM will modestly speed up the start of image generation.",
name="Model VRAM cache size (GB). Reserving a small amount of VRAM will modestly speed up the start of image generation.",
begin_entry_at=0,
editable=False,
color="CONTROL",
scroll_exit=True,
)
self.nextrely -= 1
self.max_vram_cache_size = self.add_widget_intelligent(
self.vram = self.add_widget_intelligent(
npyscreen.Slider,
value=clip(old_opts.max_vram_cache_size, range=(0, MAX_VRAM), step=0.25),
value=clip(old_opts.vram_cache_size, range=(0, MAX_VRAM), step=0.25),
out_of=round(MAX_VRAM * 2) / 2,
lowest=0.0,
relx=8,
@ -435,7 +507,7 @@ Use cursor arrows to make a checkbox selection, and space to toggle.
scroll_exit=True,
)
else:
self.max_vram_cache_size = DummyWidgetValue.zero
self.vram = DummyWidgetValue.zero
self.nextrely += 1
self.outdir = self.add_widget_intelligent(
FileBox,
@ -491,6 +563,11 @@ https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/blob/main/LICENS
when_pressed_function=self.on_ok,
)
def show_hide_slice_sizes(self, value):
show = ATTENTION_CHOICES[value[0]] == "sliced"
self.attention_slice_label.hidden = not show
self.attention_slice_size.hidden = not show
def on_ok(self):
options = self.marshall_arguments()
if self.validate_field_values(options):
@ -524,14 +601,12 @@ https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/blob/main/LICENS
new_opts = Namespace()
for attr in [
"ram",
"vram",
"outdir",
"free_gpu_mem",
"max_cache_size",
"max_vram_cache_size",
"xformers_enabled",
"always_use_cpu",
]:
setattr(new_opts, attr, getattr(self, attr).value)
if hasattr(self, attr):
setattr(new_opts, attr, getattr(self, attr).value)
for attr in self.autoimport_dirs:
directory = Path(self.autoimport_dirs[attr].value)
@ -542,6 +617,12 @@ https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/blob/main/LICENS
new_opts.hf_token = self.hf_token.value
new_opts.license_acceptance = self.license_acceptance.value
new_opts.precision = PRECISION_CHOICES[self.precision.value[0]]
new_opts.device = DEVICE_CHOICES[self.device.value[0]]
new_opts.attention_type = ATTENTION_CHOICES[self.attention_type.value[0]]
new_opts.attention_slice_size = ATTENTION_SLICE_CHOICES[self.attention_slice_size.value[0]]
generation_options = [GENERATION_OPT_CHOICES[x] for x in self.generation_options.value]
for v in GENERATION_OPT_CHOICES:
setattr(new_opts, v, v in generation_options)
return new_opts
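The loop above maps the multi-select widget's numeric selection back onto boolean config flags; a small standalone illustration of that mapping (the selection value is hypothetical):

GENERATION_OPT_CHOICES = ["sequential_guidance", "force_tiled_decode", "lazy_offload"]
selected_indices = [0, 2]  # e.g. the user ticked the first and third boxes
picked = [GENERATION_OPT_CHOICES[i] for i in selected_indices]
flags = {opt: (opt in picked) for opt in GENERATION_OPT_CHOICES}
# {'sequential_guidance': True, 'force_tiled_decode': False, 'lazy_offload': True}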
@ -636,8 +717,6 @@ def initialize_rootdir(root: Path, yes_to_all: bool = False):
path = dest / "core"
path.mkdir(parents=True, exist_ok=True)
maybe_create_models_yaml(root)
def maybe_create_models_yaml(root: Path):
models_yaml = root / "configs" / "models.yaml"


@ -116,7 +116,7 @@ class MigrateTo3(object):
appropriate location within the destination models directory.
"""
directories_scanned = set()
for root, dirs, files in os.walk(src_dir):
for root, dirs, files in os.walk(src_dir, followlinks=True):
for d in dirs:
try:
model = Path(root, d)
@ -492,10 +492,10 @@ def _parse_legacy_yamlfile(root: Path, initfile: Path) -> ModelPaths:
loras = paths.get("lora_dir", "loras")
controlnets = paths.get("controlnet_dir", "controlnets")
return ModelPaths(
models=root / models,
embeddings=root / embeddings,
loras=root / loras,
controlnets=root / controlnets,
models=root / models if models else None,
embeddings=root / embeddings if embeddings else None,
loras=root / loras if loras else None,
controlnets=root / controlnets if controlnets else None,
)
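The guards added to the ModelPaths construction matter if a legacy config carries an empty or null directory entry: joining such a value onto root either silently yields root itself or raises. A quick illustration of both failure modes (paths are made up):

from pathlib import Path

root = Path("/opt/invokeai")
print(root / "")        # /opt/invokeai -- an empty entry would wrongly point at the root
try:
    root / None         # a null entry raises
except TypeError as exc:
    print(exc)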
@ -525,7 +525,7 @@ def do_migrate(src_directory: Path, dest_directory: Path):
if version_3: # write into the dest directory
try:
shutil.copy(dest_directory / "configs" / "models.yaml", config_file)
except:
except Exception:
MigrateTo3.initialize_yaml(config_file)
mgr = ModelManager(config_file) # important to initialize BEFORE moving the models directory
(dest_directory / "models").replace(dest_models)
@ -553,7 +553,7 @@ def main():
parser = argparse.ArgumentParser(
prog="invokeai-migrate3",
description="""
This will copy and convert the models directory and the configs/models.yaml from the InvokeAI 2.3 format
This will copy and convert the models directory and the configs/models.yaml from the InvokeAI 2.3 format
'--from-directory' root to the InvokeAI 3.0 '--to-directory' root. These may be abbreviated '--from' and '--to'.
The old models directory and config file will be renamed 'models.orig' and 'models.yaml.orig' respectively.


@ -12,7 +12,6 @@ from typing import Optional, List, Dict, Callable, Union, Set
import requests
from diffusers import DiffusionPipeline
from diffusers import logging as dlogging
import onnx
import torch
from huggingface_hub import hf_hub_url, HfFolder, HfApi
from omegaconf import OmegaConf


@ -1,10 +1,10 @@
"""
Initialization file for invokeai.backend.model_management
"""
from .model_manager import ModelManager, ModelInfo, AddModelResult, SchedulerPredictionType
from .model_cache import ModelCache
from .lora import ModelPatcher, ONNXModelPatcher
from .models import (
from .model_manager import ModelManager, ModelInfo, AddModelResult, SchedulerPredictionType # noqa: F401
from .model_cache import ModelCache # noqa: F401
from .lora import ModelPatcher, ONNXModelPatcher # noqa: F401
from .models import ( # noqa: F401
BaseModelType,
ModelType,
SubModelType,
@ -12,5 +12,4 @@ from .models import (
ModelNotFoundException,
DuplicateModelException,
)
from .model_merge import ModelMerger, MergeInterpolationMethod
from .lora import ModelPatcher
from .model_merge import ModelMerger, MergeInterpolationMethod # noqa: F401


@ -20,11 +20,36 @@
import re
from contextlib import nullcontext
from io import BytesIO
from typing import Optional, Union
from pathlib import Path
from typing import Optional, Union
import requests
import torch
from diffusers.models import (
AutoencoderKL,
ControlNetModel,
PriorTransformer,
UNet2DConditionModel,
)
from diffusers.pipelines.latent_diffusion.pipeline_latent_diffusion import LDMBertConfig, LDMBertModel
from diffusers.pipelines.paint_by_example import PaintByExampleImageEncoder
from diffusers.pipelines.pipeline_utils import DiffusionPipeline
from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker
from diffusers.pipelines.stable_diffusion.stable_unclip_image_normalizer import StableUnCLIPImageNormalizer
from diffusers.schedulers import (
DDIMScheduler,
DDPMScheduler,
DPMSolverMultistepScheduler,
EulerAncestralDiscreteScheduler,
EulerDiscreteScheduler,
HeunDiscreteScheduler,
LMSDiscreteScheduler,
PNDMScheduler,
UnCLIPScheduler,
)
from diffusers.utils import is_accelerate_available, is_omegaconf_available
from diffusers.utils.import_utils import BACKENDS_MAPPING
from picklescan.scanner import scan_file_path
from transformers import (
AutoFeatureExtractor,
BertTokenizerFast,
@ -37,35 +62,8 @@ from transformers import (
CLIPVisionModelWithProjection,
)
from diffusers.models import (
AutoencoderKL,
ControlNetModel,
PriorTransformer,
UNet2DConditionModel,
)
from diffusers.schedulers import (
DDIMScheduler,
DDPMScheduler,
DPMSolverMultistepScheduler,
EulerAncestralDiscreteScheduler,
EulerDiscreteScheduler,
HeunDiscreteScheduler,
LMSDiscreteScheduler,
PNDMScheduler,
UnCLIPScheduler,
)
from diffusers.utils import is_accelerate_available, is_omegaconf_available, is_safetensors_available
from diffusers.utils.import_utils import BACKENDS_MAPPING
from diffusers.pipelines.latent_diffusion.pipeline_latent_diffusion import LDMBertConfig, LDMBertModel
from diffusers.pipelines.paint_by_example import PaintByExampleImageEncoder
from diffusers.pipelines.pipeline_utils import DiffusionPipeline
from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker
from diffusers.pipelines.stable_diffusion.stable_unclip_image_normalizer import StableUnCLIPImageNormalizer
from invokeai.backend.util.logging import InvokeAILogger
from invokeai.app.services.config import InvokeAIAppConfig
from picklescan.scanner import scan_file_path
from invokeai.backend.util.logging import InvokeAILogger
from .models import BaseModelType, ModelVariantType
try:
@ -1221,9 +1219,6 @@ def download_from_original_stable_diffusion_ckpt(
raise ValueError(BACKENDS_MAPPING["omegaconf"][1])
if from_safetensors:
if not is_safetensors_available():
raise ValueError(BACKENDS_MAPPING["safetensors"][1])
from safetensors.torch import load_file as safe_load
checkpoint = safe_load(checkpoint_path, device="cpu")
@ -1662,9 +1657,6 @@ def download_controlnet_from_original_ckpt(
from omegaconf import OmegaConf
if from_safetensors:
if not is_safetensors_available():
raise ValueError(BACKENDS_MAPPING["safetensors"][1])
from safetensors import safe_open
checkpoint = {}
@ -1741,7 +1733,7 @@ def convert_ckpt_to_diffusers(
pipe.save_pretrained(
dump_path,
safe_serialization=use_safetensors and is_safetensors_available(),
safe_serialization=use_safetensors,
)
@ -1757,7 +1749,4 @@ def convert_controlnet_to_diffusers(
"""
pipe = download_controlnet_from_original_ckpt(checkpoint_path, **kwargs)
pipe.save_pretrained(
dump_path,
safe_serialization=is_safetensors_available(),
)
pipe.save_pretrained(dump_path, safe_serialization=True)


@ -5,21 +5,16 @@ from contextlib import contextmanager
from typing import Optional, Dict, Tuple, Any, Union, List
from pathlib import Path
import torch
from safetensors.torch import load_file
from torch.utils.hooks import RemovableHandle
from diffusers.models import UNet2DConditionModel
from transformers import CLIPTextModel
from onnx import numpy_helper
from onnxruntime import OrtValue
import numpy as np
import torch
from compel.embeddings_provider import BaseTextualInversionManager
from diffusers.models import UNet2DConditionModel
from safetensors.torch import load_file
from transformers import CLIPTextModel, CLIPTokenizer
from .models.lora import LoRAModel
"""
loras = [
(lora_model1, 0.7),
@ -52,7 +47,7 @@ class ModelPatcher:
module = module.get_submodule(submodule_name)
module_key += "." + submodule_name
submodule_name = key_parts.pop(0)
except:
except Exception:
submodule_name += "_" + key_parts.pop(0)
module = module.get_submodule(submodule_name)
@ -312,7 +307,8 @@ class TextualInversionManager(BaseTextualInversionManager):
class ONNXModelPatcher:
from .models.base import IAIOnnxRuntimeModel, OnnxRuntimeModel
from .models.base import IAIOnnxRuntimeModel
from diffusers import OnnxRuntimeModel
@classmethod
@contextmanager
@ -341,7 +337,7 @@ class ONNXModelPatcher:
def apply_lora(
cls,
model: IAIOnnxRuntimeModel,
loras: List[Tuple[LoraModel, float]],
loras: List[Tuple[LoRAModel, float]],
prefix: str,
):
from .models.base import IAIOnnxRuntimeModel


@ -273,7 +273,7 @@ class ModelCache(object):
self.cache.logger.debug(f"Locking {self.key} in {self.cache.execution_device}")
self.cache._print_cuda_stats()
except:
except Exception:
self.cache_entry.unlock()
raise


@ -341,7 +341,8 @@ class ModelManager(object):
self.logger = logger
self.cache = ModelCache(
max_cache_size=max_cache_size,
max_vram_cache_size=self.app_config.max_vram_cache_size,
max_vram_cache_size=self.app_config.vram_cache_size,
lazy_offloading=self.app_config.lazy_offload,
execution_device=device_type,
precision=precision,
sequential_offload=sequential_offload,
@ -419,12 +420,12 @@ class ModelManager(object):
base_model_str, model_type_str, model_name = model_key.split("/", 2)
try:
model_type = ModelType(model_type_str)
except:
except Exception:
raise Exception(f"Unknown model type: {model_type_str}")
try:
base_model = BaseModelType(base_model_str)
except:
except Exception:
raise Exception(f"Unknown base model: {base_model_str}")
return (model_name, base_model, model_type)
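The recurring except: to except Exception: changes in this diff are not cosmetic: a bare except also traps KeyboardInterrupt and SystemExit, so Ctrl-C during a long model scan would be silently swallowed. A tiny standalone sketch of the pattern:

def parse_or_default(raw: str) -> int:
    try:
        return int(raw)
    except Exception:   # unlike a bare `except:`, KeyboardInterrupt/SystemExit still propagate
        return 0

print(parse_or_default("42"), parse_or_default("not-a-number"))  # 42 0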
@ -855,7 +856,7 @@ class ModelManager(object):
info.pop("config")
result = self.add_model(model_name, base_model, model_type, model_attributes=info, clobber=True)
except:
except Exception:
# something went wrong, so don't leave dangling diffusers model in directory or it will cause a duplicate model error!
rmtree(new_diffusers_path)
raise
@ -1042,7 +1043,7 @@ class ModelManager(object):
# Patch in the SD VAE from core so that it is available for use by the UI
try:
self.heuristic_import({str(self.resolve_model_path("core/convert/sd-vae-ft-mse"))})
except:
except Exception:
pass
installer = ModelInstall(


@ -50,6 +50,7 @@ class ModelProbe(object):
"StableDiffusionInpaintPipeline": ModelType.Main,
"StableDiffusionXLPipeline": ModelType.Main,
"StableDiffusionXLImg2ImgPipeline": ModelType.Main,
"StableDiffusionXLInpaintPipeline": ModelType.Main,
"AutoencoderKL": ModelType.Vae,
"ControlNetModel": ModelType.ControlNet,
}
@ -217,9 +218,9 @@ class ModelProbe(object):
raise "The model {model_name} is potentially infected by malware. Aborting import."
###################################################3
# ##################################################3
# Checkpoint probing
###################################################3
# ##################################################3
class ProbeBase(object):
def get_base_type(self) -> BaseModelType:
pass
@ -431,7 +432,7 @@ class PipelineFolderProbe(FolderProbeBase):
return ModelVariantType.Depth
elif in_channels == 4:
return ModelVariantType.Normal
except:
except Exception:
pass
return ModelVariantType.Normal


@ -56,7 +56,7 @@ class ModelSearch(ABC):
self.on_search_completed()
def walk_directory(self, path: Path):
for root, dirs, files in os.walk(path):
for root, dirs, files in os.walk(path, followlinks=True):
if str(Path(root).name).startswith("."):
self._pruned_paths.add(root)
if any([Path(root).is_relative_to(x) for x in self._pruned_paths]):


@ -2,7 +2,7 @@ import inspect
from enum import Enum
from pydantic import BaseModel
from typing import Literal, get_origin
from .base import (
from .base import ( # noqa: F401
BaseModelType,
ModelType,
SubModelType,
@ -118,7 +118,7 @@ def get_model_config_enums():
fields = model_config.__annotations__
try:
field = fields["model_format"]
except:
except Exception:
raise Exception("format field not found")
# model_format: None


@ -3,27 +3,28 @@ import os
import sys
import typing
import inspect
from enum import Enum
import warnings
from abc import ABCMeta, abstractmethod
from contextlib import suppress
from enum import Enum
from pathlib import Path
from picklescan.scanner import scan_file_path
import torch
import numpy as np
import safetensors.torch
from pathlib import Path
from diffusers import DiffusionPipeline, ConfigMixin, OnnxRuntimeModel
from contextlib import suppress
from pydantic import BaseModel, Field
from typing import List, Dict, Optional, Type, Literal, TypeVar, Generic, Callable, Any, Union
import onnx
import safetensors.torch
from diffusers import DiffusionPipeline, ConfigMixin
from onnx import numpy_helper
from onnxruntime import (
InferenceSession,
SessionOptions,
get_available_providers,
)
from pydantic import BaseModel, Field
from typing import List, Dict, Optional, Type, Literal, TypeVar, Generic, Callable, Any, Union
from diffusers import logging as diffusers_logging
from transformers import logging as transformers_logging
class DuplicateModelException(Exception):
@ -171,7 +172,7 @@ class ModelBase(metaclass=ABCMeta):
fields = value.__annotations__
try:
field = fields["model_format"]
except:
except Exception:
raise Exception(f"Invalid config definition - format field not found({cls.__qualname__})")
if isinstance(field, type) and issubclass(field, str) and issubclass(field, Enum):
@ -244,7 +245,7 @@ class DiffusersModel(ModelBase):
try:
config_data = DiffusionPipeline.load_config(self.model_path)
# config_data = json.loads(os.path.join(self.model_path, "model_index.json"))
except:
except Exception:
raise Exception("Invalid diffusers model! (model_index.json not found or invalid)")
config_data.pop("_ignore_files", None)
@ -343,7 +344,7 @@ def calc_model_size_by_fs(model_path: str, subfolder: Optional[str] = None, vari
with open(os.path.join(model_path, file), "r") as f:
index_data = json.loads(f.read())
return int(index_data["metadata"]["total_size"])
except:
except Exception:
pass
# calculate files size if there is no index file
@ -440,7 +441,7 @@ def read_checkpoint_meta(path: Union[str, Path], scan: bool = False):
if str(path).endswith(".safetensors"):
try:
checkpoint = _fast_safetensors_reader(path)
except:
except Exception:
# TODO: create issue for support "meta"?
checkpoint = safetensors.torch.load_file(path, device="cpu")
else:
@ -452,11 +453,6 @@ def read_checkpoint_meta(path: Union[str, Path], scan: bool = False):
return checkpoint
import warnings
from diffusers import logging as diffusers_logging
from transformers import logging as transformers_logging
class SilenceWarnings(object):
def __init__(self):
self.transformers_verbosity = transformers_logging.get_verbosity()
@ -639,7 +635,7 @@ class IAIOnnxRuntimeModel:
raise Exception("You should call create_session before running model")
inputs = {k: np.array(v) for k, v in kwargs.items()}
output_names = self.session.get_outputs()
# output_names = self.session.get_outputs()
# for k in inputs:
# self.io_binding.bind_cpu_input(k, inputs[k])
# for name in output_names:


@ -43,7 +43,7 @@ class ControlNetModel(ModelBase):
try:
config = EmptyConfigLoader.load_config(self.model_path, config_name="config.json")
# config = json.loads(os.path.join(self.model_path, "config.json"))
except:
except Exception:
raise Exception("Invalid controlnet model! (config.json not found or invalid)")
model_class_name = config.get("_class_name", None)
@ -53,7 +53,7 @@ class ControlNetModel(ModelBase):
try:
self.model_class = self._hf_definition_to_type(["diffusers", model_class_name])
self.model_size = calc_model_size_by_fs(self.model_path)
except:
except Exception:
raise Exception("Invalid ControlNet model!")
def get_size(self, child_type: Optional[SubModelType] = None):
@ -78,7 +78,7 @@ class ControlNetModel(ModelBase):
variant=variant,
)
break
except:
except Exception:
pass
if not model:
raise ModelNotFoundException()


@ -330,5 +330,5 @@ def _select_ckpt_config(version: BaseModelType, variant: ModelVariantType):
config_path = config_path.relative_to(app_config.root_path)
return str(config_path)
except:
except Exception:
return None

Some files were not shown because too many files have changed in this diff.